mersenneforum.org  

Old 2013-01-25, 18:26   #23
Batalov
 

Correct.

Your reported SALPMAX 4294967296, SRLPMAX 4294967296 were the indication that 2^32 large prime bounds were used. Sometimes referred to as a 32-bit project (e.g. NFS @ Home has a whole spectrum from 29 to 33-bit, and they, too, use this lingo).
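To make the "32-bit project" terminology concrete, here is a minimal Python sketch (purely illustrative, with made-up numbers) of the constraint a 2^32 large-prime bound imposes on relations:

```python
# Illustrative only: a "32-bit project" means the siever's large-prime
# bounds (SALPMAX / SRLPMAX) are set to 2^32, so every large prime in an
# accepted relation must lie below that bound.
LPB_BITS = 32
LPMAX = 1 << LPB_BITS  # 4294967296, the value reported in the thread

def relation_ok(large_primes, lpmax=LPMAX):
    """Accept a relation only if all of its large primes fall below the bound."""
    return all(p < lpmax for p in large_primes)

print(relation_ok([4294967291, 2147483647]))  # both below 2^32 -> True
print(relation_ok([4294967311]))              # a 33-bit prime   -> False
```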

For 6,373- and for 3,613-, we were using 2^33 large-prime bounds but an FAMAX/FRMAX about half as large, 2^28 (vs. your 476900000), and starting from 20000000. I've seen similar settings work well in other hands too, so I didn't improvise too much.

I remember some three- or four-year-old trailblazing projects from Greg where he reached the limit too (only the 15e siever was available at the time), so they sieved extra on the other side. It turned out to be quite redundant, but it was the best that could be done for that project at that time. You'd need to page down for quite a while in the Factoring (or possibly the Cunninghams') forum to find notes from that project. So sieving on both sides came to be something generally avoided.
Old 2013-02-21, 06:51   #24
ryanp
 

In general, what's the strategy most people take for really large GNFS/SNFS projects? Sieve the same q values (e.g. q = 1e7 through q = 2^31-1) with different large prime bounds, e.g. 29 <= lpbr <= 33? Something else?

Is this described in any documentation/papers anywhere?

Thanks!
Old 2013-02-21, 09:22   #25
fivemack

Sieving the same region with different large-prime bounds is a complete waste of time.

There is a limit to the size of jobs that can be efficiently done with the current versions of the current tools; that limit is rather smaller than the size of some jobs that have been done (that is, those jobs were done inefficiently).

Fixing the tools to allow larger large-prime bounds is not trivial but is what would need doing; fixing them to allow larger sieving areas is much less trivial but would also be useful.
Old 2013-02-21, 18:20   #26
henryzz

Quote:
Originally Posted by fivemack
Sieving the same region with different large-prime bounds is a complete waste of time.

There is a limit to the size of jobs that can be efficiently done with the current versions of the current tools; that limit is rather smaller than the size of some jobs that have been done (that is, those jobs were done inefficiently).

Fixing the tools to allow larger large-prime bounds is not trivial but is what would need doing; fixing them to allow larger sieving areas is much less trivial but would also be useful.
Increasing the max large primes is a trivial change in the siever: you just need to comment out the check.
Did you mean the factor base size?

I just compiled the siever with that change and sieved with 34-bit large primes. I ran the results through a filtering run in msieve, and there were no erroneous relations. To prove there really were 34-bit large primes in there, I sieved again with 33-bit large primes and got fewer relations.
I repeated the experiment with a bound of 40, with the same results: more relations and no errors.
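A sanity check in the spirit of this experiment can be scripted: scan the relation file and report the largest large-prime size in bits. This Python sketch assumes a hypothetical "a,b:rational:algebraic" line format with comma-separated hex primes, in the style of the Franke/Kleinjung siever output; real files may differ between siever versions.

```python
# Report the largest prime size (in bits) found in a relation line.
# Format assumption (hypothetical): "a,b:rational_primes:algebraic_primes"
# with comma-separated hexadecimal primes on each side.
def max_lp_bits(relation_line):
    a_b, rational, algebraic = relation_line.strip().split(":")
    primes = [int(p, 16)
              for side in (rational, algebraic)
              for p in side.split(",") if p]
    return max(p.bit_length() for p in primes)

# Made-up relation whose largest entry (0x1fffffff7) is 33 bits wide:
rel = "123456,789:1ed2f,3a,fffffffd:2b,1fffffff7"
print(max_lp_bits(rel))  # -> 33
```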
Old 2013-02-21, 18:40   #27
jasonp

The concern is that every extra bit in large primes means you need about twice as many relations in order to converge to a matrix. The current postprocessing code we use has a hard limit of 4G relations, and we're 25% of the way there right now with 33-bit LPs.

Msieve can handle up to 48-bit large primes, and an arbitrary number of them per relation, but it cannot handle a dataset with 10^13 relations...
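This arithmetic can be checked with a back-of-envelope Python sketch, taking on faith the two figures above (relations needed roughly double per extra large-prime bit, and the postprocessing caps at 4G relations):

```python
# Back-of-envelope check: if ~1G relations (25% of the 4G cap) suffice at
# 33-bit large primes, and each extra bit doubles the requirement, how far
# can the large-prime bound grow before hitting the hard limit?
LIMIT = 1 << 32              # 4G-relation hard limit in the postprocessing
base_lpb = 33
base_rels = LIMIT // 4       # "25% of the way there": ~1G relations today

for lpb in range(33, 37):
    needed = base_rels * 2 ** (lpb - base_lpb)
    status = "fits" if needed <= LIMIT else "exceeds the limit"
    print(f"{lpb}-bit LPs: ~{needed} relations, {status}")
```

On these assumptions, 34-bit needs about 2G relations and 35-bit lands exactly at the 4G cap, consistent with henryzz's reply below.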
Old 2013-02-21, 19:47   #28
fivemack

Quote:
Originally Posted by henryzz
Increasing the max large primes is a trivial change in the siever. You just need to comment out the check.
No, I meant max large primes: the MPQS at present only works with 96-bit cofactors, and you need larger cofactors if you want to use three large primes sensibly.

(though this is a matter for experiment; maybe two 35-bit large primes give useful yield)

Doing the uniquifying and easy-singleton-removal passes before giving the data to msieve gets the size down enough to allow maybe one more bit of large primes.
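The uniquify-then-prune preprocessing can be sketched as a toy Python model (relations modeled as small sets of ideals; real filtering works on vastly larger data on disk):

```python
# Toy model of the preprocessing described above: drop duplicate relations,
# then repeatedly discard any relation containing an ideal that occurs only
# once in the whole dataset (a singleton), since such a relation can never
# combine with another to form a square.  Removing relations can create new
# singletons, so the pruning iterates until stable.
from collections import Counter

def uniq_and_prune(relations):
    rels = list(dict.fromkeys(relations))  # uniquify, preserving order
    while True:
        counts = Counter(ideal for r in rels for ideal in r)
        kept = [r for r in rels if all(counts[i] > 1 for i in r)]
        if len(kept) == len(rels):
            return kept                    # no singletons left
        rels = kept

rels = [frozenset({2, 3}), frozenset({2, 3}),   # duplicate pair
        frozenset({2, 5}), frozenset({3, 5}),
        frozenset({7, 11})]                     # 7 and 11 are singletons
print(uniq_and_prune(rels))                     # keeps the first three
```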
Old 2013-02-21, 19:58   #29
henryzz

Quote:
Originally Posted by jasonp
The concern is that every extra bit in large primes means you need about twice as many relations in order to converge to a matrix. The current postprocessing code we use has a hard limit of 4G relations, and we're 25% of the way there right now with 33-bit LPs.

Msieve can handle up to 48-bit large primes, and an arbitrary number of them per relation, but it cannot handle a dataset with 10^13 relations...
If we are 25% of the way there now with 33-bit, we should hopefully be able to do 35-bit.
What causes this limit? Is it extendable?
Old 2013-02-21, 22:13   #30
ryanp
 

For the record, I wasn't asking whether using different large prime bounds was the most efficient way to do large jobs. I was only asking what's *possible*. :)

Currently once you hit q=2^30-1, that's too high for GGNFS to handle, and you need a different strategy. If your polynomial isn't good enough to get all the needed relations by q=2^30-1, what strategies are possible to get to the linear algebra stages? Or are factorizations of these large GNFS/SNFS jobs simply impossible at this point with the tools available?
Old 2013-02-21, 22:53   #31
jasonp

Quote:
Originally Posted by henryzz
If we are 25% of the way there now with 33-bit, we should hopefully be able to do 35-bit.
What causes this limit? Is it extendable?
Msieve refers to a relation during the postprocessing by its line number in the save file, and this is currently a 32-bit integer. It's no big deal to make it a larger integer, but the assumption of 32-bit relation numbers is everywhere in a complex codebase. It's in file formats, hashtables, etc.
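The arithmetic behind the limit is simple to demonstrate; a short Python sketch of 32-bit index wraparound:

```python
# Why the limit is 4G: with relation numbers stored as 32-bit unsigned
# integers, the largest representable index is 2^32 - 1.  One relation past
# that wraps around to 0, so bigger datasets need a wider index type
# threaded through the file formats, hashtables, etc.
MAX_U32 = 2**32 - 1                     # last addressable relation index
print(MAX_U32)                          # 4294967295, i.e. the 4G cap

wrapped = (MAX_U32 + 1) & 0xFFFFFFFF    # one past the limit wraps to 0
print(wrapped)                          # 0
```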
Old 2013-02-21, 23:21   #32
henryzz

Quote:
Originally Posted by fivemack
No, I meant max large primes: the MPQS at present only works with 96-bit cofactors, and you need larger cofactors if you want to use three large primes sensibly.

(though this is a matter for experiment; maybe two 35-bit large primes give useful yield)

Doing the uniquifying and easy-singleton-removal passes before giving the data to msieve gets the size down enough to allow maybe one more bit of large primes.
That limit is in the v4 siever. The v5 siever has a second MPQS routine (mpqs3.c) that looks like it handles up to 148 bits.
There is also a built-in ECM and P-1 routine that is used if you provide a parameter file. I worked out the parameter file format but didn't make much headway towards finding optimal parameters (I imagine we could improve the speed on the numbers we are currently factoring by at least 10% with this if we work out good parameters; potentially we could gain much more). I now see there is a program, ecmstat, that might help with this.
I think the v5 siever can do more than we are currently asking of it.
If we run out of prime special-q values for a number, we can always sieve some composite ones with the "f" version of the siever as well (from memory, at 1/3 of the speed).
Old 2013-02-22, 01:15   #33
jasonp

The CADO code also includes a tremendously fast ECM/P±1 implementation for operands of up to 126 bits (two 64-bit words max) that Alex reports is even faster than the MPQS code in the Franke/Kleinjung sievers.