2013-01-25, 18:26  #23 
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
2^{3}·3·419 Posts 
Correct.
Your reported SALPMAX 4294967296, SRLPMAX 4294967296 were the indication that 2^32 large prime bounds were used. This is sometimes referred to as a 32-bit project (e.g. NFS@Home has a whole spectrum from 29- to 33-bit, and they, too, use this lingo). For 6,373 and for 3,613, we were using 2^33 large prime bounds but ~2 times lower FAMAX/FRMAX of 2^28 (vs your 476900000), starting from 20000000. I've seen similar settings work well in other hands too, so I didn't improvise too much. I remember some three- or four-year-old trailblazing projects from Greg where he reached the limit, too (only the 15e siever was available at the time), so they sieved extra on the other side. It turned out to be quite redundant, but it was the best that could be done for that project at the time. You'd need to page down for quite a while in the Factoring (or possibly the Cunningham) forum to find notes from that project. Since then, sieving on both sides has generally been avoided. 
2013-02-21, 06:51  #24 
Jun 2012
Boulder, CO
5^{2}×17 Posts 
In general, what's the strategy most people take for really large GNFS/SNFS projects? Sieve the same q values (e.g. q = 1e7 through q = 2^31-1) with different large prime bounds, e.g. 29 <= lpbr <= 33? Something else?
Is this described in any documentation/papers anywhere? Thanks! 
2013-02-21, 09:22  #25 
(loop (#_fork))
Feb 2006
Cambridge, England
1936_{16} Posts 
Sieving the same region with different large-prime bounds is a complete waste of time.
There is a limit to the size of jobs that can be efficiently done with the current versions of the current tools; that limit is rather smaller than the size of some jobs that have been done (that is, those jobs were done inefficiently). Fixing the tools to allow larger large-prime bounds is not trivial, but is what would need doing; fixing them to allow larger sieving areas is much less trivial, but would also be useful. 
2013-02-21, 18:20  #26  
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
37·163 Posts 
Did you mean the factor base size? I just compiled the siever with the change and sieved with 34-bit large primes. I ran the results through a filtering run in msieve; there were no errored relations. To prove there were 34-bit large primes in there, I sieved again with 33-bit large primes and there were fewer relations. I repeated with a bound of 40 and got the same results: more relations and no errors. 

2013-02-21, 18:40  #27 
Tribal Bullet
Oct 2004
3^{2}·5·79 Posts 
The concern is that every extra bit in the large primes means you need about twice as many relations in order to converge to a matrix. The current postprocessing code we use has a hard limit of 4G relations, and we're 25% of the way there right now with 33-bit LPs.
Msieve can handle up to 48-bit large primes, and an arbitrary number of them per relation, but it cannot handle a dataset with 10^13 relations... 
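The arithmetic behind this concern can be sketched in a few lines. This is only a back-of-envelope projection under the rule of thumb stated above (relations needed roughly double per extra large-prime bit), anchored at the figure from this post (~1G relations at 33-bit LPs, i.e. 25% of the 4G limit); the other rows are extrapolations, not measurements.

```python
# Project dataset size against the 4G-relation postprocessing limit,
# assuming relations needed roughly double per extra large-prime bit.
LIMIT = 4 * 2**30                         # 4G relations (hard limit)
baseline_bits, baseline_rels = 33, 2**30  # ~1G relations at 33-bit LPs

for bits in range(33, 38):
    rels = baseline_rels * 2**(bits - baseline_bits)
    fits = "fits" if rels <= LIMIT else "exceeds 4G limit"
    print(f"{bits}-bit LPs: ~{rels / 2**30:.0f}G relations ({fits})")
```

On this crude model, 35-bit large primes would land exactly at the 4G ceiling, and anything beyond that needs the limit raised first.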
2013-02-21, 19:47  #28  
(loop (#_fork))
Feb 2006
Cambridge, England
2×7×461 Posts 
(though this is a matter for experiment; maybe two 35-bit large primes give useful yield) Doing the uniquifying and easy-singleton-removal passes before giving the data to msieve gets the size down enough to allow maybe one more bit of large primes. 
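The idea of those two pre-filtering passes can be shown in a minimal sketch. This is not msieve's (or any real filter's) code or data format: relations are modeled as frozensets of ideal labels, purely to illustrate why duplicates and singletons can be dropped before the main postprocessing ever sees them.

```python
# Sketch of pre-filtering: uniquify, then iteratively drop "singletons"
# (relations containing an ideal that appears in no other relation, so
# they can never combine into a dependency).
from collections import Counter

def prefilter(relations):
    rels = list(set(relations))  # uniquify: drop duplicate relations
    while True:
        counts = Counter(ideal for r in rels for ideal in r)
        kept = [r for r in rels if all(counts[i] > 1 for i in r)]
        if len(kept) == len(rels):  # fixed point: no singletons remain
            return kept
        rels = kept                 # removals can create new singletons

rels = [frozenset({2, 3}), frozenset({2, 3}),   # duplicate pair
        frozenset({3, 5}), frozenset({2, 5}),
        frozenset({7, 11})]                     # 7 and 11 are singletons
print(prefilter(rels))
```

Real filtering works on (a, b) pairs and prime ideals on both sides, but the shrinkage mechanism, and hence the saved headroom, is the same.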

2013-02-21, 19:58  #29  
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
37×163 Posts 
What causes this limit? Is it extendable? 

2013-02-21, 22:13  #30 
Jun 2012
Boulder, CO
5^{2}×17 Posts 
For the record, I wasn't asking whether using different large prime bounds was the most efficient way to do large jobs. I was only asking what's *possible*. :)
Currently, once you hit q = 2^30-1, that's too high for GGNFS to handle, and you need a different strategy. If your polynomial isn't good enough to get all the needed relations by q = 2^30-1, what strategies are possible to get to the linear algebra stage? Or are factorizations of these large GNFS/SNFS jobs simply impossible at this point with the tools available? 
2013-02-21, 22:53  #31 
Tribal Bullet
Oct 2004
3^{2}·5·79 Posts 
Msieve refers to a relation during postprocessing by its line number in the save file, and this is currently a 32-bit integer. It's no big deal to make it a larger integer, but the assumption of 32-bit relation numbers is everywhere in a complex codebase: it's in file formats, hashtables, etc.
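A tiny sketch makes the constraint concrete. The function names and on-disk layout here are hypothetical, not msieve's actual formats; the point is simply that a 4-byte index caps relation numbers at 2^32 - 1, and widening it to 8 bytes doubles the per-index storage everywhere it appears.

```python
# Why a 32-bit relation index caps the dataset at ~4G relations:
# line numbers above 2**32 - 1 no longer fit in the 4-byte field.
# Illustrative only; real file formats differ.
import struct

MAX_U32 = 2**32 - 1

def pack_relnum(n):
    """Pack a relation number as a 4-byte little-endian index."""
    if n > MAX_U32:
        raise OverflowError(f"relation number {n} needs more than 32 bits")
    return struct.pack("<I", n)

def pack_relnum64(n):
    """Widened 8-byte index: doubles the storage for every reference."""
    return struct.pack("<Q", n)

print(len(pack_relnum(4_000_000_000)))   # 4 bytes: still fits under 2^32
print(len(pack_relnum64(10**13)))        # 8 bytes needed for 10^13 relations
```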

2013-02-21, 23:21  #32  
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
1011110001111_{2} Posts 
There is also a built-in ECM and P-1 routine that is used if you provide a parameter file. I worked out the parameter file format but didn't make much headway toward finding optimal parameters (I imagine we could improve the speed on the numbers we are currently factoring by at least 10% with this if we work out parameters; potentially we could gain much more). I now see there is a program, ecmstat, that might help with this. I think the v5 siever can do more than we are asking of it currently. If we run out of prime special-q for a number, we can always sieve some composite ones with the "f" version of the siever as well (from memory, at 1/3 of the speed). 

2013-02-22, 01:15  #33 
Tribal Bullet
Oct 2004
6743_{8} Posts 
The CADO code also includes a tremendously fast ECM/P+1 implementation for up to 126-bit operands (two 64-bit words max) that Alex reports is even faster than the MPQS code in the Franke/Kleinjung sievers.
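For readers unfamiliar with what these routines do during cofactorization: after sieving, the leftover cofactor of a candidate relation must be split into its large primes. The post mentions ECM and P+1; as a rough illustration of the shape of such a stage-1 computation, here is the closely related Pollard P-1 in plain Python. The numbers and bound are made up for the example (65537 - 1 = 2^16 is smooth, 1000003 - 1 has a large prime factor, so P-1 separates them); the real implementations use optimized two-word arithmetic, not anything like this.

```python
# Minimal Pollard P-1 (stage 1 only): compute 2^E mod n for a highly
# smooth exponent E, then gcd(2^E - 1, n) pulls out any prime p with
# p-1 built entirely from factors below the bound B1.
from math import gcd

def pminus1(n, B1=10_000):
    a = 2
    for k in range(2, B1):
        # crude smooth exponent: a becomes 2^(B1-1)! mod n, which
        # over-multiplies compared to using prime powers, but is
        # still correct for a sketch
        a = pow(a, k, n)
    g = gcd(a - 1, n)
    return g if 1 < g < n else None      # None: no (useful) factor found

p, q = 65537, 1000003          # p-1 = 2^16 is B1-smooth; q-1 is not
f = pminus1(p * q)
print(f, (p * q) // f)
```

This is the kind of job the fast 126-bit routines handle millions of times per sieving job, which is why their speed matters so much.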

Similar Threads  
Thread  Thread Starter  Forum  Replies  Last Post 
29 to 30 bit large prime SNFS crossover  VBCurtis  Factoring  11  2015-03-09 07:01 
How many jobs should I run?  Warlord  Software  12  2013-10-11 22:18 
Advice for large GNFS jobs?  WraithX  Factoring  59  2013-07-30 01:13 
doing large NFS jobs on Amazon EC2?  ixfd64  Factoring  3  2012-06-06 08:27 
Filtering on large NFS jobs, particularly 2^908+1  bdodson  Factoring  20  2008-11-26 20:45 