#23
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
2³·3·419 Posts
Correct.
Your reported SALPMAX 4294967296, SRLPMAX 4294967296 indicate that 2^32 large-prime bounds were used. This is sometimes referred to as a 32-bit project (e.g. NFS@Home has a whole spectrum from 29- to 33-bit, and they, too, use this lingo). For 6,373- and for 3,613-, we used 2^33 large-prime bounds but a roughly 2× lower factor-base bound (FAMAX/FRMAX) of 2^28 (vs. your 476900000), starting from 20000000. I've seen similar settings work well in other hands too, so I didn't improvise too much. I remember some trailblazing projects from Greg three or four years ago where he reached the limit, too (only the 15e siever was available at the time), so they sieved extra on the other side. It turned out to be quite redundant, but it was the best that could be done for that project at that time. You'd need to page down for quite a while in the Factoring (or possibly the Cunninghams) forum to find notes from that project. The upshot is that sieving on both sides came to be generally avoided.
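For reference, the settings described above correspond roughly to this ggnfs-style job-file fragment (the keyword spellings are the common .job-file ones, the values are those quoted in this post; an illustrative sketch, not a tested configuration):

```
rlim: 268435456   # rational factor-base bound, 2^28 (FRMAX)
alim: 268435456   # algebraic factor-base bound, 2^28 (FAMAX)
lpbr: 33          # rational large primes up to 2^33 (SRLPMAX)
lpba: 33          # algebraic large primes up to 2^33 (SALPMAX)
```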
#24
Jun 2012
Boulder, CO
5²×17 Posts
In general, what's the strategy most people take for really large GNFS/SNFS projects? Sieve the same q values (e.g. q = 1e7 through q = 2^31-1) with different large prime bounds, e.g. 29 <= lpbr <= 33? Something else?
Is this described in any documentation/papers anywhere? Thanks!
#25
(loop (#_fork))
Feb 2006
Cambridge, England
1936₁₆ Posts
Sieving the same region with different large-prime bounds is a complete waste of time.
There is a limit to the size of jobs that can be done efficiently with the current versions of the current tools; that limit is rather smaller than the size of some jobs that have been done (that is, those jobs were done inefficiently). Fixing the tools to allow larger large-prime bounds is not trivial, but is what would need doing; fixing them to allow larger sieving areas is much less trivial, but would also be useful.
#26
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
37·163 Posts

Quote:
Did you mean the factor-base size? I just compiled the siever with the change and sieved with 34-bit large primes. I ran the results through a filtering run in msieve, and no relations were flagged as errors. To prove there were 34-bit large primes in there, I sieved again with 33-bit large primes, and there were fewer relations. I repeated with a bound of 40 with the same results: more relations and no errors.
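A sanity check like this can be scripted. The sketch below assumes the common `a,b:rational:algebraic` relation-line layout with comma-separated hex-encoded primes (lasieve/GGNFS style); the sample relations and helper names are fabricated for illustration:

```python
# Sketch: verify that a relation set really contains large primes above
# a given bound, by scanning the listed prime factors on each line.

def max_large_prime(line):
    """Largest prime listed on one relation line (0 if unparsable)."""
    parts = line.strip().split(':')
    if len(parts) != 3:
        return 0  # comment or malformed line
    primes = []
    for side in parts[1:]:
        if side:
            primes += [int(p, 16) for p in side.split(',')]
    return max(primes) if primes else 0

def count_above(lines, bound):
    """How many relations contain a prime exceeding `bound`."""
    return sum(1 for ln in lines if max_large_prime(ln) > bound)

sample = [
    "12345,67:1f,3b,2000000f:5,89abcdef1",  # 0x89abcdef1 > 2^33
    "22,9:3,7:b,d",                         # all listed factors tiny
]
print(count_above(sample, 2**33))  # → 1
```

If the count stays above zero after sieving with the raised bound but drops when re-sieving with the old bound, the larger large primes are genuinely being used.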
#27
Tribal Bullet
Oct 2004
3²·5·79 Posts
The concern is that every extra bit in large primes means you need about twice as many relations in order to converge to a matrix. The current postprocessing code we use has a hard limit of 4G relations, and we're 25% of the way there right now with 33-bit LPs.
Msieve can handle up to 48-bit large primes, and an arbitrary number of them per relation, but cannot handle a dataset with 10^13 relations...
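The scaling argument can be made concrete with a toy calculation. The 10^9 baseline for 33-bit large primes is an illustrative figure taken from the "25% of the way there" remark above:

```python
# Toy version of the scaling rule: each extra bit of large-prime bound
# roughly doubles the relations needed to converge to a matrix.

HARD_LIMIT = 2**32  # msieve's current relation-count ceiling (~4.3e9)

def relations_needed(lpb_bits, base_bits=33, base_rels=10**9):
    """Rough relations estimate under the doubling-per-bit rule."""
    return base_rels * 2 ** (lpb_bits - base_bits)

for bits in range(33, 37):
    need = relations_needed(bits)
    print(bits, need, "over the 4G limit" if need > HARD_LIMIT else "ok")
```

Under this rule the 4G ceiling is already breached around 36-bit large primes, which is why raising the bound in the siever alone does not make much larger jobs feasible.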
#28
(loop (#_fork))
Feb 2006
Cambridge, England
2×7×461 Posts

Quote:
(Though this is a matter for experiment; maybe two 35-bit large primes would give useful yield.) Doing the uniquifying and easy-singleton-removal passes before giving the data to msieve gets the size down enough to allow maybe one more bit of large primes.
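The two pre-filtering passes mentioned here can be sketched in a few lines. Real filtering tools hash ideals compactly and iterate the singleton pass to a fixed point; this toy version, with made-up relation strings, just shows the idea:

```python
# Toy duplicate removal followed by one singleton pass: drop repeated
# relation lines, then drop relations containing an ideal that occurs
# in only one surviving relation (a "singleton").
from collections import Counter

def unique_then_singletons(relations, get_ideals):
    rels = list(dict.fromkeys(relations))  # uniquify, preserving order
    counts = Counter(i for r in rels for i in set(get_ideals(r)))
    return [r for r in rels if all(counts[i] > 1 for i in get_ideals(r))]

rels = ["r1:a,b", "r1:a,b", "r2:a,b", "r3:c"]  # one duplicate, one singleton
ideals = lambda r: r.split(':')[1].split(',')
print(unique_then_singletons(rels, ideals))  # → ['r1:a,b', 'r2:a,b']
```

Both passes only shrink the dataset, so doing them externally before msieve sees the file reduces the relation count that has to fit under the 32-bit ceiling.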
#29
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
37×163 Posts

Quote:
What causes this limit? Is it extendable?
#30
Jun 2012
Boulder, CO
5²×17 Posts
For the record, I wasn't asking whether using different large prime bounds was the most efficient way to do large jobs. I was only asking what's *possible*. :)
Currently, once you hit q=2^30-1, that's too high for GGNFS to handle, and you need a different strategy. If your polynomial isn't good enough to get all the needed relations by q=2^30-1, what strategies are possible for reaching the linear algebra stage? Or are factorizations of these large GNFS/SNFS jobs simply impossible at this point with the tools available?
#31
Tribal Bullet
Oct 2004
3²·5·79 Posts
Msieve refers to a relation during the postprocessing by its line number in the save file, and this is currently a 32-bit integer. It's no big deal to make it a larger integer, but the assumption of 32-bit relation numbers is everywhere in a complex codebase. It's in file formats, hashtables, etc.
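The ceiling is just the range of an unsigned 32-bit index. A toy illustration (the helper name is invented for this sketch):

```python
# Illustration of the 32-bit relation-number ceiling described above:
# an index stored in an unsigned 32-bit field wraps past 2**32 - 1,
# so relation 4294967296 would collide with relation 0.
import struct

def pack_relnum(n):
    """Round-trip a relation number through a 32-bit on-disk field."""
    return struct.unpack('<I', struct.pack('<I', n & 0xFFFFFFFF))[0]

print(pack_relnum(2**32 - 1))  # → 4294967295 (largest representable)
print(pack_relnum(2**32))      # → 0 (wraps around)
```

Widening the field is a one-line type change in principle, but as noted, the 32-bit assumption is baked into file formats and hashtables throughout the codebase.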
#32
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
1011110001111₂ Posts

Quote:
There is also a built-in ECM and P-1 routine that is used if you provide a parameter file. I worked out the parameter-file format but didn't make much headway toward finding optimal parameters. (I imagine we could improve the speed on numbers we are currently factoring by at least 10% with this if we work out good parameters; potentially we could gain much more.) I now see there is a program, ecmstat, that might help with this. I think the v5 siever can do more than we are asking of it currently. If we run out of prime special-q for a number, we can always sieve some composite ones with the f version of the siever as well (from memory, at 1/3 of the speed).
#33
Tribal Bullet
Oct 2004
6743₈ Posts
The CADO code also includes a tremendously fast ECM/P±1 implementation for operands of up to 126 bits (two 64-bit words max) that Alex reports is even faster than the MPQS code in the Franke/Kleinjung sievers.
Thread | Thread Starter | Forum | Replies | Last Post |
29 to 30 bit large prime SNFS crossover | VBCurtis | Factoring | 11 | 2015-03-09 07:01 |
How many jobs should I run? | Warlord | Software | 12 | 2013-10-11 22:18 |
Advice for large GNFS jobs? | WraithX | Factoring | 59 | 2013-07-30 01:13 |
doing large NFS jobs on Amazon EC2? | ixfd64 | Factoring | 3 | 2012-06-06 08:27 |
Filtering on large NFS jobs, particularly 2^908+1 | bdodson | Factoring | 20 | 2008-11-26 20:45 |