mersenneforum.org  

Old 2008-10-17, 18:15   #1
jasonp
Tribal Bullet
 
 
Oct 2004

3,559 Posts
The future of Msieve

You may have noticed that over the last 12-18 months the frequency of big changes in Msieve has dropped to essentially zero. That's a combination of my time being limited and the onset of diminishing returns. It's also because I'm not sure how to proceed from here.

Unfortunately, if really large factoring jobs keep growing at their current rate, we will rapidly reach the point where the current code quite simply runs out of steam. I'm not exactly sure where that point lies, but it's very close right now (perhaps 185-190 digit GNFS or 275-280 digit SNFS). Some of the current limits are as follows, in order of severity (a small sketch of the first and third follows the list):

- only 32-bit primes supported throughout the library (this is a problem in the sieving tools as well :)

- limited parallelism in the linear algebra

- only 4 billion relations allowed, since the number assigned to each relation is a 32-bit integer. While this is an order of magnitude larger than any job has needed so far, it's not enough for truly big jobs

- associative data structures used throughout the code that are very clumsy and statically sized; all of these are completely inappropriate for jobs this large

- the handling of data on disk is much more clumsy than necessary

- a brute-force square root that would have extreme difficulty dealing with datasets larger than about 2x what people are throwing at it now.
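To make the first and third of these concrete, here is a minimal sketch in C of where the limits live (hypothetical typedefs for illustration only, not msieve's actual declarations):

Code:
#include <stdint.h>

/* Illustration only -- not msieve's real types. Today a factor-base
 * prime and a relation index each have to fit in 32 bits: */
typedef uint32_t fb_prime_t;      /* breaks once primes pass 2^32     */
typedef uint32_t relation_num_t;  /* breaks past ~4 billion relations */

/* The 64-bit-clean replacements are trivial to write down: */
typedef uint64_t fb_prime_wide_t;
typedef uint64_t relation_num_wide_t;

The typedefs themselves are the easy part; every array, hash table, and on-disk record keyed by these types grows with them, which is why the change ripples through the entire library (and the sieving tools).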


These obstacles are all going to require massive amounts of work to remove. It frankly amazes me that the current code has scaled as far as it already has. Early in the QS days, a 100-digit job seemed enormous; when that became routine in the early NFS days, a 155-digit job seemed impossible. But now we're talking about jobs that won't comfortably fit inside a single computer, which means the scale of problems people want to factor is starting to outpace Moore's Law.

At the same time, the CADO Workshop drove home how many skilled mathematicians and programmers (often both simultaneously) are working full-time on NFS. From the CADO group's own NFS implementation, to CWI completely revamping their NFS suite, to Chris Monico's update of GGNFS to include multithreaded block Wiedemann, the NFS code landscape is drastically different now compared to even three years ago.

So I guess I'm wondering where to go from here. The current Msieve version is still capable of performing tons of latter-day factoring work, at a scale that's just under state-of-the-art. Does it need to scale more? Is anyone willing to tackle world record size factoring jobs? Is it time to merge what I have into another distinct package? Many of the readers here are involved with GGNFS, and the merge I suggested between the two packages earlier this year has so far been a non-starter. Personally I think there are research-like things to do for all the NFS phases, especially after talks like Thorsten's at the Workshop; but there's a balance between doing what users want and playing around with ideas I have. I can't do both; heck, I may not even be able to do one :)

So where do you want Msieve to go today?
Old 2008-10-17, 21:02   #2
FactorEyes
 
 
Oct 2006
vomit_frame_pointer

2³×3²×5 Posts
I have some talk for sale, cheap!

Preface: I have contributed nothing to ggnfs or msieve so far, so I'm not going to judge anyone for not contributing or ceasing work on these projects, and you should take my comments with a grain of salt.

With that out of the way, I'd like to see the state-of-the-art square root in msieve, and a rewrite to get away from wholesale 32-bitness, as daunting as that sounds. (I'm willing to help with that part, as it's one of the things I've noticed most in the 40% of the msieve code I have browsed -- but talk is cheap.)

I suspect that the filtering code can be improved, based on a failure I recently saw on a tiny 126-digit SNFS job, and on a weird recent factorization that took 5 tries at the square root stage.

As an aside, I know you have little interest in writing a lattice siever, with good reason, but I'm looking forward to the day when I can peruse lattice-siever code that I can comprehend. A re-factoring of the you-know-who lattice siever code, without rewriting it per se, would be a dream to have in hand, but a nightmare to undertake.

Old 2008-10-17, 21:03   #3
frmky
 
 
Jul 2003
So Cal

3³×97 Posts

Jason,

msieve exists because you wanted to write a good SIQS implementation. It was for your own entertainment, and it must remain so. The only two teams pushing msieve to its limits are Bruce and I, and Tom's coordinated group here. As far as I am aware, the current version of msieve meets the needs of everyone else. I definitely think you should prioritize playing with your own ideas over maintaining the current code.

BTW, I am very much looking forward to trying out the CADO code and Chris Monico's new code.

Greg
Old 2008-10-17, 21:45   #4
R.D. Silverman
 
 
"Bob Silverman"
Nov 2003
North of Boston

2³·3·313 Posts

Quote:
Originally Posted by jasonp
At the same time, the CADO Workshop drove home how many skilled mathematicians and programmers (often both simultaneously) are working full-time on NFS. From the CADO group's own NFS implementation, to CWI completely revamping their NFS suite
When did CWI do a rewrite? I heard nothing about it. What did they do?
I still use their filtering and square root code.
Old 2008-10-17, 22:04   #5
jasonp
Tribal Bullet
 
 
Oct 2004

3,559 Posts

Quote:
Originally Posted by R.D. Silverman
When did CWI do a rewrite? I heard nothing about it. What did they do?
I still use their filtering and square root code.
I don't know that they have anything complete yet, but they have big plans. Perhaps you can ask Herman or Peter? Peter's talk at the workshop has many juicy details.
Old 2008-10-17, 22:09   #6
R.D. Silverman
 
 
"Bob Silverman"
Nov 2003
North of Boston

2³×3×313 Posts

Quote:
Originally Posted by jasonp
I don't know that they have anything complete yet, but they have big plans. Perhaps you can ask Herman or Peter? Peter's talk at the workshop has many juicy details.
I wish *I* had the time to revamp my code and write a block Wiedemann solver. However, this kind of work is not supported by my company at all, and being a single parent eats up a large chunk of my personal time...

OTOH, my code is currently adequate to handle any task within my current CPU resources. Indeed, there are matrices I can't handle because my largest machine only has 2 gig.

I may try the CADO code when it becomes available.
Old 2008-10-17, 22:13   #7
Wacky
 
 
Jun 2003
The Texas Hill Country

2101₈ Posts

Jason,

Quote:
Originally Posted by jasonp
You may have noticed that over the last 12-18 months the frequency of big changes in Msieve has dropped to essentially zero.
Just as changes to NFSNet have over the last 12-48 months :(

Quote:
Unfortunately, if really large factoring jobs keep growing at their current rate, we will rapidly reach the point where the current code quite simply runs out of steam.
Why else are we "doing this"? As Dr. Silverman points out, the only real value in our efforts occurs when we "push the limits" and, in the process, learn something new.

Quote:
limits are as follows, in order of severity:

- only 32-bit primes supported throughout the library (this is a problem in the sieving tools as well :)

- limited parallelism in the linear algebra

- only 4 billion relations allowed, since the number assigned to each relation is a 32-bit integer. While this is an order of magnitude larger than any job has needed so far, it's not enough for truly big jobs

- associative data structures used throughout the code that are very clumsy and statically sized; all of these are completely inappropriate for jobs this large

- the handling of data on disk is much more clumsy than necessary

- a brute-force square root that would have extreme difficulty dealing with datasets larger than about 2x what people are throwing at it now.


These obstacles are all going to require massive amounts of work to remove
No doubt!

My interest is in attempting to "partition" the problem. There are a number of separate areas where the limits can be "pushed". What I would like to see is a concerted effort within the "NFS" community to establish "standards for interchange" between the various phases (poly selection, sieving, filtering and matrix building, matrix solution, and sqrt), so that individuals can work on one phase and "automatically" have their implementation (a) accept the results of ANY implementation of the prior phase, and (b) allow ANY implementation of the succeeding phases to use its output.

In essence, this gets back to the "Unix" design concept where each element accepts a common input, performs its task, and emits a standard output. In doing so, we, the "researchers", are able to "mix and match" the best available components. (A sketch of one such interchange record follows.)
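To make the idea concrete, here is a hedged sketch in C of a reader for one de-facto interchange record that already circulates: the GGNFS-style relation line "a,b:p1,p2,...:q1,q2,..." with each side's prime factors in hex. Which colon-separated group is the rational side and which the algebraic varies between tools -- exactly the sort of ambiguity a written standard would pin down -- and all names below are illustrative:

Code:
#include <stdint.h>
#include <stdlib.h>

/* Sketch only: parse "a,b:p1,p2,...:q1,q2,..." (factors in hex).
 * Real code must also handle free relations and comment lines. */
typedef struct {
    int64_t  a;
    uint64_t b;
    uint64_t fac[2][64];   /* prime factors for the two sides */
    int      nfac[2];
} relation_t;

static int parse_relation(const char *line, relation_t *r)
{
    char *end;
    r->a = strtoll(line, &end, 10);
    if (*end != ',')
        return -1;
    r->b = strtoull(end + 1, &end, 10);

    for (int side = 0; side < 2; side++) {
        if (*end != ':')
            return -1;
        r->nfac[side] = 0;
        do {
            r->fac[side][r->nfac[side]++] = strtoull(end + 1, &end, 16);
        } while (*end == ',' && r->nfac[side] < 64);
    }
    return 0;
}

The "tracking meta-data" could then travel in comment lines (say, lines beginning with '#' naming the siever, its version, and the special-q range) that conforming consumers of the format simply skip.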

Quote:
Is anyone willing to tackle world record size factoring jobs? Is it time to merge what I have into another distinct package? Many of the readers here are involved with GGNFS, and the merge I suggested between the two packages earlier this year has so far been a non-starter. Personally I think there are research-like things to do for all the NFS phases
Why else are we here? I relate to your "non-starter" remark. Far too few seem to recognize the value in a collaborative effort.

When revived, NFSNet sieving was based on the generous availability of the CWI siever. It was the only siever that seemed to run reliably across multiple platforms. Over the years since, however, the "state of the art" has advanced to the point where "lattice" sieving is much more efficient and practical. Unfortunately, we at NFSNet have been unable to obtain a lattice siever that is both reliable across a wide range of platforms and sufficiently modular to allow the addition of the "tracking meta-data" needed to control hundreds or thousands of sieving cores. But we keep "hoping".

I would like to see the community advance and I GREATLY appreciate the effort that you have made to make that happen.

Richard "Wacky" Wackerbarth

Old 2008-10-17, 23:21   #8
fivemack
(loop (#_fork))
 
 
Feb 2006
Cambridge, England

2·7·461 Posts

I've been a bit wary about commenting on NFSnet after my total failure to get anything productive assembled from the bits of the statistics server that I looked at a few years ago - the machine I had with internet connectivity was one on which the sysadmin wouldn't let me run large pieces of C++ as CGI scripts.

I got the impression that the integration of the Kleinjung lattice siever was going pretty well; for a while I was providing a test build machine and it seemed to be building acceptably. Is a wide range of platforms really that important? 32-bit and 64-bit x86 Linux and Windows feel as if they ought to cover enough machines.

NFSNet with the Kleinjung lattice siever should be enough to get to problems too big for the back-end easily; after all, I can line up enough sievers to get near enough there by purely manual means.

The real limit is that the linear algebra in msieve's current state has simultaneously hit the edge of fits-in-a-reasonable-computer and runs-in-a-reasonable-time. The matrix-build and square-root steps may use a fair amount of memory, but they don't take very long, so if I need to borrow a 16GB machine for them over a long weekend, it's a matter of mild diplomacy.

But a matrix that fits in a 16GB quad-core will take a year to run on it, and you can't borrow somebody else's 16GB quad-core for that long. You can't run it in the background, since you're reading through the whole matrix at each iteration. And at the moment, anyone who has a 16GB machine bought it because they want to run 16GB jobs from time to time, whereas there are lots of people with 4GB machines on which you can devote 3GB to lattice sieving without affecting the owner's daily activity. (A rough model of why the matrix step is so stubborn follows.)
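As a rough, hedged model (my numbers and notation, nobody else's): block Lanczos with 64-bit blocks needs on the order of n/64 iterations for an n-row matrix, and every iteration streams essentially the whole matrix through main memory, once multiplying by A and once by A^T, so

$$T \;\approx\; \frac{n}{64} \times \frac{2 \cdot \mathrm{size}(A)}{\mathrm{bandwidth}_{\mathrm{eff}}}$$

The effective bandwidth is dragged far below the theoretical peak by the random column accesses of a sparse matrix-vector product, so each iteration can cost tens of seconds. A few hundred thousand iterations at tens of seconds of memory traffic each is how a matrix that merely fits in 16GB can still cost the better part of a year, and why extra cores stop helping once the memory bus saturates.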

I'm hoping that year turns into six months on the new Nehalem machines, with their much faster main memory, though that will definitely need a bit of fiddling around with cache-blocking; obviously I will provide Jason with an account on any Nehalem box I buy.

Parallelism and block-Wiedemann don't really help from the hobbyist perspective; having to devote four expensive machines for three months is rather worse than one expensive machine for a year, because the machines cost much more than their running costs, and four expensive machines is rather more compute power than I otherwise have any use for. Despite the spectacular performance possibilities mentioned by Stach at the workshop, GPUs don't help me because, to get enough memory on GPUs, I'd need to buy eight expensive GPUs and four expensive machines to host them; I'm not prepared to pay the price of a car in order to run matrix jobs 24/365.

I suppose I'd conclude that msieve in its current state is fine for me; the jump in resources needed to run jobs bigger than I currently can is one I haven't the money or desire to make, and I'm content to wait three or four years until Moore's Law lets me have a cluster in my study (perhaps four nodes with cheap 8GB-graphics-memory GPUs in each node) on which block-Wiedemann can run a 1024-bit SNFS matrix in less than a season.


From a more theoretical point of view, I would like to see msieve able to take advantage of vast oversieving; at the moment there's really only a factor of two in relation count between being unable to build a matrix because the excess is too small, and being unable to build a dense enough matrix at all. It would be very interesting to know whether Wiedemann techniques can find kernels for the matrices that are presently too sparse to run - and that's something for which a very naive one-CPU Wiedemann would be fine (a sketch of the core of such an experiment follows).
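Since a very naive one-CPU Wiedemann really is enough for this experiment, here is a hedged, toy-scale sketch in C of its core (all names mine, not from any codebase): generate the scalar sequence s_i = u·(A^i v) over GF(2), then recover its minimal polynomial with Berlekamp-Massey. The bookkeeping that turns the polynomial into an actual kernel vector, and the retries with fresh random u and v when a run is unlucky, are omitted.

Code:
#include <stdint.h>
#include <string.h>

#define N_MAX 4096   /* toy dimension, plenty for a first experiment */

/* sparse GF(2) matrix-vector product; rows stored as index lists */
static void matvec(int n, const int *row_off, const int *col,
                   const uint8_t *v, uint8_t *out)
{
    for (int i = 0; i < n; i++) {
        uint8_t acc = 0;
        for (int j = row_off[i]; j < row_off[i + 1]; j++)
            acc ^= v[col[j]];             /* GF(2) addition is XOR */
        out[i] = acc;
    }
}

/* s[i] = u . (A^i v) for i = 0 .. 2n-1; clobbers v */
static void krylov_sequence(int n, const int *row_off, const int *col,
                            const uint8_t *u, uint8_t *v, uint8_t *s)
{
    static uint8_t tmp[N_MAX];
    for (int i = 0; i < 2 * n; i++) {
        uint8_t dot = 0;
        for (int k = 0; k < n; k++)
            dot ^= u[k] & v[k];
        s[i] = dot;
        matvec(n, row_off, col, v, tmp);
        memcpy(v, tmp, n);
    }
}

/* Berlekamp-Massey over GF(2): fills C with the connection polynomial
 * of the length-m sequence s and returns its degree L */
static int berlekamp_massey(const uint8_t *s, int m, uint8_t *C)
{
    static uint8_t B[2 * N_MAX], T[2 * N_MAX];
    int L = 0, shift = 1;
    memset(C, 0, m);
    memset(B, 0, m);
    C[0] = B[0] = 1;
    for (int i = 0; i < m; i++) {
        uint8_t d = s[i];                 /* discrepancy */
        for (int j = 1; j <= L; j++)
            d ^= C[j] & s[i - j];
        if (!d) {
            shift++;
        } else if (2 * L <= i) {
            memcpy(T, C, m);
            for (int j = 0; j + shift < m; j++)
                C[j + shift] ^= B[j];     /* C(x) += x^shift * B(x) */
            L = i + 1 - L;
            memcpy(B, T, m);
            shift = 1;
        } else {
            for (int j = 0; j + shift < m; j++)
                C[j + shift] ^= B[j];
            shift++;
        }
    }
    return L;
}

Whether the recovered polynomial still yields kernel vectors when the matrix is far sparser than filtering normally allows is exactly what the experiment would measure.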

Old 2008-10-18, 01:10   #9
frmky
 
 
Jul 2003
So Cal

3³×97 Posts

Quote:
Originally Posted by fivemack
Parallelism and block-Wiedemann don't really help from the hobbyist perspective;
It will help those of us in academia with larger, underutilized machines. In my case, I have access to a 230 GFLOPS Beowulf cluster and a 200 GFLOPS shared-memory system. Both were purchased on grant funds for a specific research calculation; they are now used for physics research about 5% of the time, and the remainder of the time they are free for NFS runs. The shared-memory system has a NUMA architecture, so I'm hoping Chris' multithreaded block Wiedemann doesn't run into the same bandwidth problems as the block Lanczos. And perhaps there could be an MPI version sometime in the future suitable for the cluster (a sketch of how the central operation might be distributed is below).
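Purely as a hedged illustration of why MPI fits here (hypothetical code, not Chris' and not msieve's): the operation that dominates both block algorithms is a sparse matrix times packed bit-vector product over GF(2), which partitions naturally by row blocks across ranks, with one collective per multiply to reassemble the full vector.

Code:
#include <mpi.h>
#include <stdint.h>

/* Hypothetical sketch: each rank owns a contiguous block of rows;
 * 64 GF(2) vectors are packed into each uint64_t word. */
typedef struct {
    uint32_t *row_off;   /* row_off[i]..row_off[i+1]-1 index row i */
    uint32_t *col;       /* column indices of the nonzeros         */
    int       nrows;     /* rows owned by this rank                */
} row_block_t;

static void spmv_gf2(const row_block_t *A, const uint64_t *x,
                     uint64_t *y)
{
    for (int i = 0; i < A->nrows; i++) {
        uint64_t acc = 0;
        for (uint32_t j = A->row_off[i]; j < A->row_off[i + 1]; j++)
            acc ^= x[A->col[j]];          /* GF(2) addition is XOR */
        y[i] = acc;
    }
}

/* after the local product every rank needs the full vector back;
 * counts[]/displs[] give each rank's row count and offset */
static void exchange(uint64_t *y_local, int my_rows, uint64_t *x_full,
                     int *counts, int *displs)
{
    MPI_Allgatherv(y_local, my_rows, MPI_UNSIGNED_LONG_LONG,
                   x_full, counts, displs, MPI_UNSIGNED_LONG_LONG,
                   MPI_COMM_WORLD);
}

On the NUMA box the same row-block split works with one thread per memory domain; on the cluster, the per-iteration Allgatherv is where the interconnect would be felt, much as block Lanczos feels the memory bus.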

Greg
Old 2008-10-18, 01:43   #10
Wacky
 
 
Jun 2003
The Texas Hill Country

3²×11² Posts

Quote:
Originally Posted by fivemack
I've been a bit wary about commenting on NFSnet after my total failure to get anything productive assembled from the bits of the statistics server that I looked at a few years ago - the machine I had with internet connectivity was one on which the sysadmin wouldn't let me run large pieces of C++ as CGI scripts.
And that is directly related to MY problem ... I am but a "pensioner", and my wife does not understand why I should fund machines and "connectivity" adequate to handle a load of the scope needed to succeed in this realm.

I did do one large matrix on my 8GB, 2 x dual-core Xeon "workstation", but mostly I have been attempting to track just what progress we are making on "NFSNet numbers", which requires reinterpreting most of the contributed relations because they are not in a "standard" format with "meta-data" describing the work done.

I am pleased to report that David Cleaver recently made a significant breakthrough in getting the (now years old) Franke lattice siever to compile and execute in a Windows environment. Now we need to get all of the CPU-specific assembler code working so that we can run efficiently.


Quote:
I got the impression that the integration of the Kleinjung lattice siever was going pretty well; for a while I was providing a test build machine and it seemed to be building acceptably. Is a wide range of platforms really that important? 32-bit and 64-bit x86 Linux and Windows feel as if they ought to cover enough machines.
It would be IF they were "stable" and "efficient". Unfortunately, I have neither the hardware resources nor the personal time to make significant progress.

Quote:
NFSNet with the Kleinjung lattice siever should be enough to get to problems too big for the back-end easily; after all, I can line up enough sievers to get near enough there by purely manual means.
Yes, but ...
I need a stable, reliable implementation that is "efficient" on each platform. Then I need to convince "Greg, Bruce, Tom, et al." to commit to "NFSNet" rather than "doing their own thing". Although each of you has been a SIGNIFICANT contributor to the NFSNet effort, I perceive an interest in "striking out on your own" rather than directing your focus on building the NFSNet capability.
Old 2008-10-18, 02:45   #11
joral
 
 
Mar 2008

5×11 Posts

Quote:
Originally Posted by Wacky
Now we need to get all of the CPU-specific assembler code working so that we can run efficiently.
It is a bit slow going, but I am making progress with the AMD64 code. The only thing I'll need to do is find out whom to contact about uploading the changes back to the SF project (whether I have to submit a patch file for someone to check in, or what).