mersenneforum.org Cunningham ECM efforts

2022-07-26, 00:29   #12
charybdis

Apr 2020

797 Posts

Quote:
 Originally Posted by R.D. Silverman Two more are "within reach" of NFS@Home: 2,1091+ and 2,1109+ [C225 via GNFS]. There are 34 left *if* these get done.
Out of curiosity, I did some test-sieving for 2,1091+ with the following parameters:
Code:
n: 2117208798053985074797883391743275990128601953853639828878164892688444863926960451777994923461629323162218814154866250606508547121440235925708386797172317097515145076163293879812027206424552538135108597109220186300900511691987121969358311920812929997749355581156627347486061441269205378406076851632845597947
skew: 1.563
c6: 1
c0: 2
Y1: 1
Y0: -6129982163463555433433388108601236734474956488734408704
type: snfs
rlim: 232000000
alim: 268000000
lpbr: 35
lpba: 35
mfbr: 102
mfba: 70
rlambda: 3.9
alambda: 2.8
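(The poly is the standard sextic: 2·(2^1091+1) = 2^1092+2 = x^6+2 at x = 2^182. A quick Python sanity check against the values above:)
Code:
# Verify the SNFS sextic for 2,1091+ against the job file above.
n = 2117208798053985074797883391743275990128601953853639828878164892688444863926960451777994923461629323162218814154866250606508547121440235925708386797172317097515145076163293879812027206424552538135108597109220186300900511691987121969358311920812929997749355581156627347486061441269205378406076851632845597947
m = 2**182                       # common root: m = -Y0/Y1
assert m == 6129982163463555433433388108601236734474956488734408704   # equals -Y0
assert (2**1091 + 1) % n == 0    # n is the unfactored cofactor of 2^1091+1
assert (m**6 + 2) % n == 0       # m is a root of f(x) = x^6 + 2 mod n
print("2,1091+ sextic verified")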
Rational-side test-sieving over ranges of 1000 special-q:
Code:
MQ       Norm_yield      Speed (sec/rel)
100         2503              0.433
300         1793              0.618
500         1572              0.679
1000        1260              0.807
1500         997              0.994
2000         935              1.038
3000         760              1.242
4000         684              1.359
This suggests that sieving Q=100-4000M would generate ~4G raw relations, which I'd guess is about the right number. A big job for NFS@Home, but within reach.
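That figure is just the trapezoid rule applied to the yields above (each MQ contains 1000 of the 1000-q test ranges):
Code:
# Trapezoid-rule estimate of total raw relations from the test-sieve table.
# Yields are relations per 1000-q range; Q is in millions (1000 ranges per MQ).
samples = [(100, 2503), (300, 1793), (500, 1572), (1000, 1260),
           (1500, 997), (2000, 935), (3000, 760), (4000, 684)]

total = 0.0
for (q0, y0), (q1, y1) in zip(samples, samples[1:]):
    total += 0.5 * (y0 + y1) * (q1 - q0) * 1000   # average yield * number of ranges

print(f"estimated relations, Q = 100M-4000M: ~{total/1e9:.1f}G")   # ~4.1G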

The NFS@Home limits on alim/rlim are very restrictive at this size; the natural way to compensate is to use higher large prime bounds, hence the move to 35/35 from NFS@Home's usual 34/34. It's possible that 36/35 or 36/36 would be even better, but that would require >2^32 unique relations, which msieve can't handle.

With mfbr/mfba being that large relative to alim/rlim, it's also important to ensure that the lambdas are high enough that you don't lose lots of relations.
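If I recall the lasieve semantics correctly, the sieve report threshold is roughly lambda times the bit-length of the factor base bound (that's my recollection, so treat it as an assumption); on that basis the chosen lambdas clear the mfb values with some headroom:
Code:
from math import log2

# Check that the lambdas leave headroom over mfbr/mfba.
# ASSUMPTION: lasieve keeps a candidate if the unfactored cofactor is below
# roughly lambda * log2(factor base bound) bits.
rlim, alim = 232_000_000, 268_000_000
rlambda, alambda = 3.9, 2.8
mfbr, mfba = 102, 70

print(f"rational:  ~{rlambda * log2(rlim):.0f}-bit threshold vs mfbr = {mfbr}")
print(f"algebraic: ~{alambda * log2(alim):.0f}-bit threshold vs mfba = {mfba}")
# ~108 bits vs 102, and ~78 bits vs 70: candidates up to the mfb limits survive.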

Last fiddled with by charybdis on 2022-07-26 at 00:40

2022-07-26, 15:01   #13
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

5·29·37 Posts

Quote:
 Originally Posted by charybdis The NFS@Home limits on alim/rlim are very restrictive at this size; the natural way to compensate is to use higher large prime bounds, hence the move to 35/35 from NFS@Home's usual 34/34. It's possible that 36/35 or 36/36 would be even better, but that would require >2^32 unique relations, which msieve can't handle. With mfbr/mfba being that large relative to alim/rlim, it's also important to ensure that the lambdas are high enough that you don't lose lots of relations.
With use of remdups and remsing, I'm confident that a 36/35 job could be sent to msieve with fewer than 4G relations.
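For anyone who hasn't used them: remdups strips duplicate relations (keyed by the (a,b) pair) and remsing strips singletons, so the file that reaches msieve is much smaller than the raw haul. A toy sketch of the dedup idea, not the actual tool (which is a C program built for billions of lines):
Code:
# Toy sketch of relation deduplication (the idea behind remdups, not its code).
# A relation line looks like "a,b:<rational primes>:<algebraic primes>"; the same
# (a,b) pair can be found under several special-q, producing duplicates.
def dedup(lines):
    seen = set()
    for line in lines:
        ab = line.split(":", 1)[0]      # the (a,b) pair identifies the relation
        if ab not in seen:
            seen.add(ab)
            yield line

rels = ["3,14:5f,1a3:2b", "7,2:11,3d:9e1", "3,14:5f,1a3:2b"]
print(list(dedup(rels)))                # the repeated (3,14) relation is dropped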

We can also do a team-sieve with CADO for low Q, say 100-250M, with A=32 and larger lims, if it looks like NFS@Home alone will cut it close for relation gathering. This is also possible after the fact, by running CADO above Q=4000M if needed to get a more reasonable matrix.

2022-07-26, 16:55   #14
R.D. Silverman

"Bob Silverman"
Nov 2003
North of Boston

7464₁₀ Posts

Quote:
 Originally Posted by R.D. Silverman
 Here is an update to a post that I made some time ago. This post contains info about recent ECM efforts. There are currently 37 unfinished numbers from the 1987 hardcover edition of the Cunningham book. It would be nice to finish them. They are all from base 2, with index < 1200 for 2,n+ and index < 2400 for 2LM. These numbers were added in the early 1960's to the original 1925 tables. The original Cunningham book only took n <= 600. So these have been waiting for a while.......

 None of them have been sieved and are waiting for or running LA: ()
 None of them are sieving: ()
 According to Sam, one is queued to start sieving: (2,2246M C221 via GNFS). Two more are "within reach" of NFS@Home: 2,1091+ and 2,1109+ [C225 via GNFS]. There are 34 left *if* these get done. According to Greg, these last two push NFS@Home limits. Perhaps 2,2350M, 2,1180+ and 2,2390L are within range of NFS@Home as octics? [unclear] They get quite a bit harder after that via SNFS. Of course the 2- table was finished to index 1200, so the rest are all doable, but it would take a massive effort.

 It is an open question how large a number can be done by NFS@Home. Greg says ~330 digits SNFS (225 GNFS), so even the smallest, e.g. 2,1097+ and 2,2194LM, are seemingly out of reach. How about a very large ECM effort to pick off as many of the rest as we can?

 Below is the current YoYo ECM effort; 9900 @ B1 = 2.9G is in progress (default B2). Both Bruce Dodson and Ryan Propper have previously done extensive trials, aided by assorted efforts of others. The exact total is unknown. EPFL did 20K curves @ 1G for 2+. I've run 1000 curves at B1 = 3G with higher B2 limits than the GMP default. [I used equal B1/B2 times] Would it be worth it for YoYo to do a full t70?

 Code:
 2,1180+   12010 @850M   9910 @2.9G
 2,1139+   12010 @850M   9910 @2.9G
 2,1091+   12010 @850M   9910 @2.9G
 2,1097+   12010 @850M   9910 @2.9G
 2,2194M   12010 @850M   9910 @2.9G
 2,2194L   12010 @850M   9910 @2.9G
 2,2206L   12010 @850M
 2,1109+   12010 @850M   9910 @2.9G
 2,2222L   12010 @850M   9910 @2.9G
 2,2222M   12010 @850M   9910 @2.9G
 2,1108+   12010 @850M   9910 @2.9G
 2,2246M   12010 @850M   9910 @2.9G
 2,2246L   12010 @850M   9910 @2.9G
 2,1124+   12010 @850M   9910 @2.9G
 2,1123+   12010 @850M   9910 @2.9G
 2,1129+   12010 @850M   9910 @2.9G
 2,2266L   12010 @850M   9910 @2.9G
 2,1136+   12010 @850M   9910 @2.9G
 2,2278M   12010 @850M   9910 @2.9G
 2,1147+   12010 @850M   9910 @2.9G
 2,1151+   12010 @850M   9910 @2.9G
 2,2306L   12010 @850M   9910 @2.9G
 2,2302L   12010 @850M   9910 @2.9G
 2,1153+   12010 @850M   9806 @2.9G
 2,2318M   12010 @850M   8212 @2.9G
 2,1159+   12010 @850M
 2,1163+   12010 @850M
 2,1168+   12010 @850M
 2,2342M   12010 @850M
 2,2350M   12010 @850M
 2,2354M   12010 @850M
 2,2354L   12010 @850M
 2,2378L   12010 @850M
 2,2374L   12010 @850M
 2,1187+   12010 @850M
 2,2390L   12010 @850M
Greg has queued the 3 octics. I assume that 2,1109+ and 2,1091+ will get done eventually. The remainder seem out of reach for NFS@Home. I suggest we refer to them as the 'Gang of 31'. The "31" is quite apropos.

I understand that Ryan did quite a bit of ECM pounding on 2,2398M. We should thank him.

2022-07-26, 16:57   #15
R.D. Silverman

"Bob Silverman"
Nov 2003
North of Boston

2³×3×311 Posts

Quote:
 Originally Posted by charybdis
 Out of curiosity, I did some test-sieving for 2,1091+ with the following parameters:
 Code:
 n: 2117208798053985074797883391743275990128601953853639828878164892688444863926960451777994923461629323162218814154866250606508547121440235925708386797172317097515145076163293879812027206424552538135108597109220186300900511691987121969358311920812929997749355581156627347486061441269205378406076851632845597947
 skew: 1.563
 c6: 1
 c0: 2
 Y1: 1
 Y0: -6129982163463555433433388108601236734474956488734408704
 type: snfs
 rlim: 232000000
 alim: 268000000
 lpbr: 35
 lpba: 35
 mfbr: 102
 mfba: 70
 rlambda: 3.9
 alambda: 2.8
 Rational-side test-sieving over ranges of 1000 special-q:
 Code:
 MQ       Norm_yield      Speed (sec/rel)
 100         2503              0.433
 300         1793              0.618
 500         1572              0.679
 1000        1260              0.807
 1500         997              0.994
 2000         935              1.038
 3000         760              1.242
 4000         684              1.359
 This suggests that sieving Q=100-4000M would generate ~4G raw relations, which I'd guess is about the right number. A big job for NFS@Home, but within reach. The NFS@Home limits on alim/rlim are very restrictive at this size; the natural way to compensate is to use higher large prime bounds, hence the move to 35/35 from NFS@Home's usual 34/34. It's possible that 36/35 or 36/36 would be even better, but that would require >2^32 unique relations, which msieve can't handle. With mfbr/mfba being that large relative to alim/rlim, it's also important to ensure that the lambdas are high enough that you don't lose lots of relations.

This is only 4 bits larger than 2,2174LM. Is the parameter data for those available?

2022-07-26, 17:37   #16
charybdis

Apr 2020

797 Posts

Quote:
 Originally Posted by R.D. Silverman This is only 4 bits larger than 2,2174LM. Is the parameter data for those available?
2,2174L was done with 33-bit large primes, 2,2174M started with 33-bit but was mostly done with 34-bit. This was the result:

Quote:
 Originally Posted by frmky For 2,2174L we sieved from 20M - 6B, and collected 1.36B relations. This gave 734M uniques, so about 46% duplicates. For 2,2174M we sieved from 20M - 4B, and collected 2.19B relations. This gave 1.29B uniques, so about 41% duplicates. However, we sieved a considerably narrower range of q, and it was overall much faster.
So 2,1091+ should be possible with 34-bit large primes too. Though I haven't tested it, I assume 35-bit will be faster at this size. Maybe 2,1097+ and 2,2194L/M will be possible too.
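As a sanity check on the percentages in frmky's figures:
Code:
# Check the duplicate fractions in frmky's 2,2174 figures quoted above.
for name, raw, unique in [("2,2174L", 1.36e9, 0.734e9),
                          ("2,2174M", 2.19e9, 1.29e9)]:
    print(f"{name}: {1 - unique/raw:.0%} duplicates")
# 2,2174L: 46%, 2,2174M: 41% -- matching the quoted figures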

2,1109+ will require a big polyselect effort, which I expect we will begin in a few months once northern hemisphere temperatures have dropped a bit.

2022-07-27, 23:15   #17
swellman

Jun 2012

111000101000₂ Posts

Gang of 31
Code:
2_2222L  C228  SNFS 334
2_2278M  C234  SNFS 343
2_1151+  C236  SNFS 347
2_2206L  C243  SNFS 332
2_1136+  C247  SNFS 342
2_1139+  C248  SNFS 323 (octic)
2_2246L  C253  SNFS 338
2_2266L  C255  SNFS 341
2_1108+  C271  SNFS 334
2_2354M  C271  SNFS 354
2_2306L  C287  SNFS 347
2_1097+  C288  SNFS 331
2_2222M  C289  SNFS 334
2_2342M  C291  SNFS 353
2_2302L  C293  SNFS 347
2_2318M  C296  SNFS 349
2_1163+  C297  SNFS 350
2_2194M  C301  SNFS 331
2_2194L  C304  SNFS 331
2_2378L  C305  SNFS 358
2_1153+  C306  SNFS 347
2_2374L  C309  SNFS 358
2_1124+  C311  SNFS 338
2_2354L  C314  SNFS 354
2_1147+  C317  SNFS 345
2_1159+  C318  SNFS 349
2_1168+  C326  SNFS 352
2_2398M  C326  SNFS 361
2_1129+  C330  SNFS 340
2_1187+  C334  SNFS 358
2_1123+  C338  SNFS 338
2022-07-27, 23:42   #18
charybdis

Apr 2020

31D₁₆ Posts

Quote:
 Originally Posted by swellman Code: 2_2222L C228 SNFS 334 2_2278M C234 SNFS 343 2_1151+ C236 SNFS 347
These three are borderline SNFS/GNFS.
Most of the rest probably lie beyond the degree-6/7 crossover for SNFS. There will be a transitional zone where proximity of the exponent to multiples of 6 and 7 determines which degree is better; a rough comparison is sketched below.
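To make that concrete: for 2^n+1 and degree d, take x = 2^ceil(n/d), so the SNFS difficulty is about d·ceil(n/d) bits. This sketch ignores coefficient sizes and the Aurifeuillian L/M forms:
Code:
from math import ceil, log10

# Rough degree-6 vs degree-7 comparison for 2^n+1: x = 2^ceil(n/d),
# f(x) = x^d + 2^(d*ceil(n/d) - n), difficulty ~ d*ceil(n/d) bits.
def snfs_digits(n, d):
    return d * ceil(n / d) * log10(2)

for n in [1151, 1153, 1159, 1163, 1168, 1187]:
    d6, d7 = snfs_digits(n, 6), snfs_digits(n, 7)
    print(f"2,{n}+: deg 6 ~{d6:.0f} digits, deg 7 ~{d7:.0f} digits "
          f"-> degree {6 if d6 <= d7 else 7} has the smaller difficulty")
# Smaller difficulty doesn't automatically win: a septic sieves less
# efficiently at these sizes, hence the transitional zone.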

2022-07-28, 09:02   #19
xilman
Bamboozled!

May 2003
Down not across

2·5,711 Posts

Quick question. I could likely answer it myself but I am too sleepy right now.

Are any sets of these amenable to the factoring factory approach of Lenstra et al? If so, it should reduce the sieving effort substantially.
2022-07-28, 13:37   #20
R.D. Silverman

"Bob Silverman"
Nov 2003
North of Boston

2³×3×311 Posts

Quote:
 Originally Posted by xilman Quick question. I could likely answer it myself but I am too sleepy right now. Are any sets of these amenable to the factoring factory approach of Lenstra et al? If so, it should reduce the sieving effort substantially.
The approach is amenable, but I doubt that it will work for NFS@Home. The data storage/transfer requirements would be much too large for a distributed effort. You would have to sieve ONE number and SAVE all of the lattice locations for one polynomial that were potentially smooth to a central location. This would be the special-q polynomial, and this would be for ALL of the special-q values. Then, for subsequent numbers, all of those lattice locations [for all the special q] would need to be sent to EVERY client. They could then sieve the other polynomial for each number being factored. This is a massive amount of data for the clients to read and save, as well as an enormous burden on the server. Latency and bandwidth would be a major problem.

Or, if you could guarantee that everyone who helped with the first number would work on ALL of the subsequent numbers, and sieve exactly the same special q that they sieved the first time, you could avoid sending the lattice locations back and forth. But this would be very delicate to manage and very error-prone.

Note that the theoretical best speedup is only 50%, even if everything is done perfectly. Data read/write latency would prevent the maximum theoretical gain even when data is retained locally by each client. The storage requirements are massive.
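To put illustrative numbers on the storage point (the survivor rate and record size below are my own guesses for illustration; nothing in this thread measures them):
Code:
from math import log

# Illustrative storage estimate for the factoring-factory idea, using the
# Q = 100M-4000M range discussed above. GUESSED rates, not measurements.
q_lo, q_hi = 100e6, 4000e6
num_q = q_hi / log(q_hi) - q_lo / log(q_lo)   # ~ number of special-q primes
survivors_per_q = 5000                        # ASSUMPTION: saved lattice points per q
bytes_per_point = 8                           # ASSUMPTION: packed (a,b) coordinates

total = num_q * survivors_per_q * bytes_per_point
print(f"~{num_q/1e6:.0f}M special-q, ~{total/1e12:.1f} TB of saved locations")
# Every client would need to pull (or retain) this for each subsequent number.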
