#1 |
May 2018
538 Posts |
#2 |
"Curtis"
Feb 2005
Riverside, CA
2²×31×43 Posts |
900 core-years of computation (800 sieving, 100 matrix) on 2.1 GHz Xeon Gold processors. They observe that this job ran 3x faster than an extrapolation from RSA-768 would predict; in fact, on identical hardware it would have been 25% faster than RSA-768 itself was.
I'd love a more detailed list of parameters! Perhaps a future CADO release will include them in the default c240.params file. :)

For comparison, we ran a C207 Cunningham number (2,2330L) in about 60 core-years of sieving. Scaling that up very roughly (6 doublings at 5.5 digits per doubling) gives an estimate of 3840 core-years of sieving, so the CADO group found a *massive* improvement in sieve speed for large problems: roughly 4 times faster. Wowee.

Edit: Their job is so fast that RSA-250 is easily within their reach, which means the C251 from Euclid-Mullen is theoretically within reach too. I mean, imagine if all NFS work over 200 digits is suddenly twice as fast.....

Last fiddled with by VBCurtis on 2019-12-02 at 18:12
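The back-of-envelope scaling above is easy to check in a few lines (the 5.5-digits-per-doubling figure is a rule of thumb for GNFS sieve cost, not an exact law):

```python
def extrapolate_core_years(base_digits, base_core_years, target_digits,
                           digits_per_doubling=5.5):
    """Rough sieve-time extrapolation: cost doubles every ~5.5 decimal digits."""
    doublings = (target_digits - base_digits) / digits_per_doubling
    return base_core_years * 2 ** doublings

# C207 took ~60 core-years of sieving; naive scaling to 240 digits:
print(round(extrapolate_core_years(207, 60, 240)))  # 3840 (6 doublings)
```

Against the ~800 core-years the CADO team actually spent sieving, that naive estimate is indeed off by a factor of four or so.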
#3 |
"Bob Silverman"
Nov 2003
North of Boston
16444₈ Posts |
Quote:
If going after a ~C250, I think 2,1139+ is a better target. It has been waiting nearly 60 years to be factored; the Euclid-Mullen cofactor is a relative newcomer. Of course, I am biased toward Cunningham numbers. I'd love to hear the implementation/parameterization details that resulted in their terrific speed improvement.
#4 |
Sep 2010
So Cal
3216 Posts |
Here is a link to the new c240 parameter file, posted by Paul Zimmermann about 9 hours ago: https://gforge.inria.fr/scm/browser.php?group_id=2065
#5 |
"Bob Silverman"
Nov 2003
North of Boston
1D24₁₆ Posts |
Quote:
Last fiddled with by R.D. Silverman on 2019-12-02 at 20:35 |
#6 |
"Curtis"
Feb 2005
Riverside, CA
2²×31×43 Posts |
For those curious, but not curious enough to git:
Poly select used some different parameters, notably incr of 110880 (similar to tests Gimarel has run with msieve) and admax (that is, c6 max) of 2e12. CADO-specific params: P of 20 million, nq of 1296. I bet they'd have better-yet performance with nq of 7776 and admax around 3e11, for the same search time. sizeopt-effort was set to 20; I've not seen this set in any previous params file.

Sieve params: factor base bounds of 1.8e9 and 2.1e9; LP bounds of 36/37; mfb of 72 and 111 (exactly 2x and 3x LP, so 3LP on one side); lambda values specified as 2.0 and 3.0, respectively; Q from 800 million up. CADO-specific: ncurves0 of *one hundred* (typical previously was 25 or so) and ncurves1 of 35 (typical previously was 15 or so). tasks.A = 32; this is a new setting not in my ~Sep '19 git version of CADO. This "A" setting, combined with much higher ncurves, appears to be where the new speed is found.

Matrix: target density 200 (170 was the prior standard). This number is about 50 higher than msieve's version of this setting, so it corresponds to a msieve target density of ~150. Not that crazy for a C240.

I checked the params.c90 file, which is where the CADO team explains the meaning of each setting. No mention of "A". However, there is a new setting:

tasks.sieve.adjust_strategy = 0 # this may be 0, 1, 2. 0 is default.

They note that 1 or 2 may require more work than 0, but give more relations. I checked a handful of other params files in today's git clone; no other use of "A".

EDIT: Aha! tasks.I is not set in the new c240 file; I is the equivalent of siever size (e.g. I=16). tasks.A is set instead. I imagine these are related.

Last fiddled with by VBCurtis on 2019-12-02 at 23:03
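Collected into params-file form for easier reading (option names follow the style of CADO's bundled params files, but this is my reconstruction from the values above, not the published file; consult the actual params.c240 in git for the authoritative version):

```
# Reconstruction from the values discussed above -- not the published file.
tasks.polyselect.incr = 110880
tasks.polyselect.admax = 2e12
tasks.polyselect.P = 20000000
tasks.polyselect.nq = 1296
tasks.polyselect.sopteffort = 20
tasks.lim0 = 1800000000        # rational factor base bound
tasks.lim1 = 2100000000        # algebraic factor base bound
tasks.lpb0 = 36
tasks.lpb1 = 37
tasks.sieve.mfb0 = 72          # exactly 2*lpb0: 2LP side
tasks.sieve.mfb1 = 111         # exactly 3*lpb1: 3LP side
tasks.sieve.lambda0 = 2.0
tasks.sieve.lambda1 = 3.0
tasks.sieve.ncurves0 = 100
tasks.sieve.ncurves1 = 35
tasks.qmin = 800000000
tasks.A = 32
tasks.filter.target_density = 200
```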
#7 |
"Bob Silverman"
Nov 2003
North of Boston
2²×5×373 Posts |
Quote:

I presume the rational side is 2 and the algebraic side is 3?

Quote:

B1/B2?

Quote:

Also, what was the sieve area per Q? Was it constant, or larger for smaller Q? Did they consider trying Q smaller than 800M? How many total Q? What was the average yield per Q?
#8 |
"Curtis"
Feb 2005
Riverside, CA
14D4₁₆ Posts |
Quote:
35 curves are tried on the algebraic side; I believe the logic is that many of those cofactors won't split into 3 factors, so one doesn't try as hard to split the 3LP side. I believe the ECM bounds increase with each trial, but the details are not documented anywhere I've seen; perhaps one would need to review the code (or ask the mailing list?) to discover them.

mfb has the same meaning as it does for GGNFS jobs.
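As a toy illustration of how the mfb/lpb size screen works before any ECM curves are spent (this is a schematic of the idea, not CADO's actual cofactorization code):

```python
def cofactor_worth_trying(c, lpb, mfb):
    """Size screen applied to a sieve survivor's unfactored cofactor c.

    With mfb = 3*lpb (the 3LP side here: lpb1=37, mfb1=111), c may be a
    product of up to three large primes, each below 2**lpb; ECM is only
    attempted on cofactors that pass this cheap test.
    """
    if c == 1:
        return True                  # fully factored over the factor base
    if c.bit_length() > mfb:
        return False                 # too big for the allowed large primes
    if c < 2 ** lpb:
        return True                  # c is itself a single large prime
    return True                      # between 2^lpb and 2^mfb: spend ECM curves

print(cofactor_worth_trying(2**40, 37, 111))   # True: candidate for ECM
print(cofactor_worth_trying(2**120, 37, 111))  # False: exceeds mfb1
```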
#9 |
"Bob Silverman"
Nov 2003
North of Boston
2²·5·373 Posts |
#10 |
"Bob Silverman"
Nov 2003
North of Boston
7460₁₀ Posts |
#11 |
"Curtis"
Feb 2005
Riverside, CA
2²·31·43 Posts |
Sure, but I must leave it to someone better versed in NFS to reply.

I believe it's the size in bits of the cofactors to be fed to ECM for splitting. I believe CADO also uses lambda to control cofactor size, as lambda * LP size, which provides finer-grained control; however, I am not sure about this.

I tried today's git on a small job with tasks.I = 12 replaced by tasks.A = 24 (or 32), and got an error that tasks.I was not specified. This suggests the params.c240 file posted today by PaulZ would not actually run, I think.

I was hoping/speculating that tasks.A is a measure of sieve area, and that CADO might now vary the dimensions of the region intelligently. I = 16 corresponds to a 2^16 by 2^15 sieve region, akin to 16e GGNFS. Alas, a mystery we await an answer to!
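If the sieve-area speculation is right, that is, I = i meaning a 2^i by 2^(i-1) region (area 2^(2i-1)) while A = a sets the area to 2^a directly, then A = 32 would sit between I = 16 and I = 17, at exactly twice the area of I = 16. The arithmetic under that assumption (which is speculation, not confirmed CADO semantics):

```python
# Assumption (speculative, per the discussion above): tasks.I = i gives a
# 2^i x 2^(i-1) sieve region, i.e. area 2^(2i-1), while tasks.A = a sets
# the area to 2^a directly, allowing "half-step" siever sizes.
def area_from_I(i):
    return 2 ** (2 * i - 1)

def area_from_A(a):
    return 2 ** a

print(area_from_I(16) == area_from_A(31))   # I=16 would correspond to A=31
print(area_from_A(32) // area_from_I(16))   # A=32 doubles the I=16 area: 2
```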