#1
Tribal Bullet
Oct 2004
23×5×89 Posts
I realized last week that the arbitrary-precision library needed by msieve's algebraic square root code could be put to other uses; in particular, it can be used to implement Bernstein's batch factoring algorithms to speed up the sieving stage. As a proof of concept, I modified msieve's line siever to use three rational and/or algebraic large primes, but to defer actually trying to factor bi- and tri-composites until a few hundred thousand of them have accumulated. Once that happens, batch factoring isolates the ~2.5% of relations whose cofactors actually need processing.
As an example, I used a recent C135 factored by Hallstein using GNFS. His run used 28-bit large primes, and my mods used the product of all primes < 2^26 to perform batch factoring. Sieve reports with remaining cofactors smaller than 2^81 get batched and submitted to Bernstein's algorithm, and bi- and tri-composites that do not contain at least one factor below 2^26 are discarded. The results are really encouraging: the current code finds twice as many relations and takes only 1.5x longer to do so, a net 25% drop in the time per relation. This is because the vast majority of tri-composites need no explicit factoring at all, and only 1% of the tri-composites that do need factoring actually have to be split into three primes (the input would need all three large primes < 2^26, which is extremely rare). The speedup approaches 50% as the norms increase with larger b values, when it becomes feasible to use three large primes on both sides. The extra time seems to be split evenly between the batch factoring itself and the factoring of a much larger number of sieve reports. This isn't going to make a line siever competitive with a good lattice siever, but the same batch factoring techniques can be used with a lattice siever and could conceivably gain the same kinds of speedups for large jobs.

jasonp
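The filtering step described above can be sketched in a few lines. This is a toy illustration, not msieve's code, and it uses a tiny bound (1000) in place of 2^26: multiply all primes below the bound into one huge integer P, reduce P modulo every batched cofactor at once with a product/remainder tree, and discard any cofactor n with gcd(P mod n, n) = 1, since such an n has no prime factor below the bound.

```python
from math import gcd, isqrt, prod

def primes_below(bound):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * bound
    sieve[0:2] = b"\x00\x00"
    for i in range(2, isqrt(bound - 1) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def remainder_tree(P, cofactors):
    """Compute P mod n for every batched cofactor n in one pass: build a
    product tree of the cofactors, reduce the huge P once by the root,
    then push the remainder down level by level.  This tree is the core
    of Bernstein's batch method."""
    levels = [cofactors]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([prod(prev[i:i + 2]) for i in range(0, len(prev), 2)])
    rems = [P % levels[-1][0]]
    for level in reversed(levels[:-1]):
        rems = [rems[i // 2] % n for i, n in enumerate(level)]
    return rems

# Toy bound standing in for the 2^26 used above.
P = prod(primes_below(1000))

# Two bi-composite cofactors; only the one with a factor below the
# bound survives the batch test (997 < 1000; 1009 and 1013 are not).
cofactors = [1009 * 1013, 997 * 1009]
rems = remainder_tree(P, cofactors)
survivors = [n for n, r in zip(cofactors, rems) if gcd(r, n) > 1]
print(survivors)
```

The point of the tree is that the expensive reduction of P happens once per batch rather than once per cofactor, which is where the asymptotic saving comes from.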
#2
"Ben"
Feb 2007
23×163 Posts
#3
"Bob Silverman"
Nov 2003
North of Boston
23·3·313 Posts
that memory requirements were too great. Furthermore, my code spends so little time splitting the cofactors that the speed increase offered by batch factoring did not seem worth it. Optimizing something that takes less than 1% of the run-time is generally not productive.

BTW, I split my cofactors with a 'tiny' QS implementation fine-tuned for 63-bit cofactors. Extending this to, say, 93 bits would be easy, and the increase in time would not be great.
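The tuned tiny-QS code mentioned here is not shown in the thread. As a stand-in for what "splitting a cofactor" means, here is Pollard's rho, the simplest such splitter; for ~63-bit inputs a tuned QS (or SQUFOF) is typically faster, so this is only illustrative.

```python
import random
from math import gcd

def split_cofactor(n):
    """Return one nontrivial factor of a composite n using Pollard's rho
    (Floyd cycle detection).  A stand-in for the tuned tiny QS described
    above, not a reproduction of it."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n      # tortoise: one step
            y = (y * y + c) % n      # hare: two steps
            y = (y * y + c) % n
            d = gcd(abs(x - y), n)
        if d != n:                   # d == n means a bad cycle; retry
            return d

# A 40-bit bi-composite built from two well-known primes.
n = 999983 * 1000003
p = split_cofactor(n)
print(p, n // p)
```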
#4
Tribal Bullet
Oct 2004
23·5·89 Posts
bsquared, the algorithm is from page 18 of http://cr.yp.to/talks/2004.07.07/slides.pdf. Bernstein wants to use it to factor entire sieve values, not just the parts containing large primes, but I believe in incremental changes :) I'm still getting a handle on the memory use, but you don't need to batch very many relations in order to get most of the asymptotic speedup (100k is plenty). Basically it looks like sieving speed can double if you can spare 100-150MB of memory. Dumping the batched relations to disk is also an option, and the dump files can be combined and moved to a high-memory machine for batch factoring if necessary. However, that is wasteful of disk space when only 2% of what you dump will end up being useful.

Last fiddled with by jasonp on 2007-12-04 at 18:18
#5
"Nancy"
Aug 2002
Alexandria
2,467 Posts
One part of the PhD I'm working on is optimizing ECM, P-1, and some other factoring algorithms (maybe P+1; Pollard rho is most likely useless) for NFS with more than two large primes on one side.
Peter's new idea for the P+/-1 stage 2 looks very attractive for the job, as the asymptotic complexity drops from O(d (log d)^2) to O(d log d), where d is the degree of the polynomial we evaluate; perhaps more importantly, the implied constant drops by rather a lot. For example, for a c200 with B2=10^9, the old code took 4.0 seconds and the new code takes 1.0 second. I'm hopeful that a properly optimized implementation operating on, say, 96- or 128-bit moduli would be quite useful for refactoring. However, at the moment even the GMP-based implementation in GMP-ECM isn't 100% complete, so the small-modulus version will take a while yet.

Alex
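For context, here is a minimal textbook P-1, with a naive stage 2 that handles one prime at a time. It is not Peter's algorithm; it only shows the loop that the fast O(d log d) polynomial-evaluation stage 2 replaces. The bounds and moduli are toy values.

```python
from math import gcd, isqrt, log

def primes_below(bound):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * bound
    sieve[0:2] = b"\x00\x00"
    for i in range(2, isqrt(bound - 1) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def pminus1(n, B1, B2):
    """Textbook P-1.  Stage 1 raises 2 to every prime power <= B1 and
    hopes some factor p of n has B1-smooth p-1.  The naive stage 2 then
    tries each prime q in (B1, B2] one at a time; the polynomial
    machinery discussed above speeds up exactly this second loop."""
    x = 2
    for p in primes_below(B1 + 1):
        x = pow(x, p ** int(log(B1, p)), n)
    g = gcd(x - 1, n)
    if 1 < g < n:
        return g
    for q in primes_below(B2 + 1):
        if q > B1:
            g = gcd(pow(x, q, n) - 1, n)
            if 1 < g < n:
                return g
    return None

# 2003 - 1 = 2*7*11*13 is 20-smooth, so stage 1 finds 2003.
print(pminus1(2003 * 2011, B1=20, B2=100))
# 1013 - 1 = 4*11*23 needs the prime 23 > B1, so stage 2 finds 1013.
print(pminus1(2011 * 1013, B1=20, B2=100))
```

In the NFS cofactoring setting, n would be a 96- or 128-bit cofactor rather than a toy product, and the stage 2 loop is where nearly all of the time goes, which is why the constant-factor improvement matters so much.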