mersenneforum.org > Factoring Projects > Cunningham Tables
2022-07-26, 00:29   #12   charybdis

Quote:
Originally Posted by R.D. Silverman View Post
Two more are "within reach" of NFS@Home:
2,1091+ and 2,1109+ [C225 via GNFS]. There are 34 left *if* these get done.
Out of curiosity, I did some test-sieving for 2,1091+ with the following parameters:
Code:
n: 2117208798053985074797883391743275990128601953853639828878164892688444863926960451777994923461629323162218814154866250606508547121440235925708386797172317097515145076163293879812027206424552538135108597109220186300900511691987121969358311920812929997749355581156627347486061441269205378406076851632845597947
skew: 1.563
c6: 1
c0: 2
Y1: 1
Y0: -6129982163463555433433388108601236734474956488734408704
type: snfs
rlim: 232000000
alim: 268000000
lpbr: 35
lpba: 35
mfbr: 102
mfba: 70
rlambda: 3.9
alambda: 2.8
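
For anyone who wants to check the polynomial pair: with Y1 = 1 and Y0 = -2^182 the common root is m = 2^182, and c6·m^6 + c0 = 2^1092 + 2 = 2·(2^1091 + 1), so n (the remaining cofactor of 2,1091+) divides f(m). A quick sanity check in Python, assuming the value of n above was transcribed correctly:
Code:
# Sanity check of the SNFS polynomial pair for 2,1091+ using the values above.
Y0 = -6129982163463555433433388108601236734474956488734408704
n = 2117208798053985074797883391743275990128601953853639828878164892688444863926960451777994923461629323162218814154866250606508547121440235925708386797172317097515145076163293879812027206424552538135108597109220186300900511691987121969358311920812929997749355581156627347486061441269205378406076851632845597947
m = 2**182                                    # root of Y1*x + Y0
print("root matches -Y0:          ", m == -Y0)
print("f(m) equals 2*(2^1091 + 1):", m**6 + 2 == 2 * (2**1091 + 1))
print("n divides f(m):            ", (m**6 + 2) % n == 0)  # True if n copied correctly
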
Rational-side sieving over 1k ranges:
Code:
Q (M)    Yield per 1k q   Speed (sec/rel)
100         2503              0.433
300         1793              0.618
500         1572              0.679
1000        1260              0.807
1500         997              0.994
2000         935              1.038
3000         760              1.242
4000         684              1.359
This suggests that sieving Q=100-4000M will generate ~4G raw relations, which I'd guess is about the right number. A big job for NFS@Home, but within reach.
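
As a rough cross-check of that estimate (a back-of-the-envelope trapezoidal sum over the table above, reading the yield column as relations per 1k-wide range of special-q):
Code:
# Trapezoidal sum of the test-sieving yields: yield = relations per 1k-wide
# range of special-q, with Q given in millions.
samples = [(100, 2503), (300, 1793), (500, 1572), (1000, 1260),
           (1500, 997), (2000, 935), (3000, 760), (4000, 684)]

total = 0.0
for (q0, y0), (q1, y1) in zip(samples, samples[1:]):
    ranges_1k = (q1 - q0) * 1_000_000 / 1000   # number of 1k-wide q ranges
    total += ranges_1k * (y0 + y1) / 2         # average yield per range

print(f"estimated raw relations for Q = 100M-4000M: {total:.3g}")   # ~4.1e9
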

The NFS@Home limits on alim/rlim are very restrictive at this size; the natural way to compensate is to use higher large prime bounds, hence the move to 35/35 from NFS@Home's usual 34/34. It's possible that 36/35 or 36/36 would be even better, but that would require >2^32 unique relations, which msieve can't handle.
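
A rough way to see why 36-bit bounds collide with that limit (a standard rule of thumb rather than anything specific to msieve): the number of unique relations needed is on the order of the number of primes below the large-prime bounds on both sides, and that roughly doubles with each extra bit:
Code:
# Heuristic: unique relations needed ~ pi(2^lpb) per side, with pi(x) ~ x/ln(x).
from math import log

for lpb in (33, 34, 35, 36):
    per_side = 2**lpb / log(2**lpb)
    print(f"{lpb}-bit bounds: ~{2 * per_side:.2g} large primes across both sides")
print(f"msieve's 32-bit relation indexing tops out at {2**32:.2g}")
By this estimate a 36/36 job wants something like 5-6G unique relations, past 2^32 ≈ 4.3G, while 35/35 stays underneath it.
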

With mfbr/mfba being that large relative to alim/rlim, it's also important to ensure that the lambdas are high enough that you don't lose lots of relations.
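
For readers decoding those cofactorization settings: mfbr/mfba bound the bits of the composite cofactor left on each side after the sieve, while lpbr/lpba bound each individual large prime, so the parameters above allow three rational large primes but only two algebraic ones:
Code:
# The usual reading of mfb vs lpb: up to ceil(mfb / lpb) large primes may
# survive on that side (3*35 = 105 >= 102 rational, 2*35 = 70 algebraic).
from math import ceil

lpbr, lpba, mfbr, mfba = 35, 35, 102, 70
print("rational large primes allowed: ", ceil(mfbr / lpbr))   # 3
print("algebraic large primes allowed:", ceil(mfba / lpba))   # 2
Roughly speaking, the lambdas set the sieve's report threshold for those cofactors; set them too low and the siever never even attempts candidates near the mfb limits, which is the relation loss warned about above.
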

2022-07-26, 15:01   #13   VBCurtis

Quote:
Originally Posted by charybdis View Post
The NFS@Home limits on alim/rlim are very restrictive at this size; the natural way to compensate is to use higher large prime bounds, hence the move to 35/35 from NFS@Home's usual 34/34. It's possible that 36/35 or 36/36 would be even better, but that would require >2^32 unique relations, which msieve can't handle.

With mfbr/mfba being that large relative to alim/rlim, it's also important to ensure that the lambdas are high enough that you don't lose lots of relations.
With use of remdups and remsing, I'm confident that a 36/35 job could be sent to msieve with fewer than 4G relations.

We can also do a team-sieve with CADO for low Q, say 100-250M, with A=32 and larger lims, if it looks like NFS@Home alone will cut it close for relation gathering. This is also possible after the fact, by running CADO above Q=4000M if needed to get a more reasonable matrix.
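
For anyone unfamiliar with those tools: remdups discards duplicate relations and remsing strips singletons (relations containing an ideal that appears in no other relation), both of which shrink the dataset before msieve ever sees it. The duplicate pass is conceptually just the following (a toy sketch, not the actual tool, which has to handle billions of relations in limited memory):
Code:
import sys

# Toy duplicate filter: keep only the first relation seen for each (a,b) pair.
# Relation lines have the form "a,b:<rational primes>:<algebraic primes>".
seen = set()
for line in sys.stdin:
    if line.startswith('#') or ':' not in line:
        continue                      # skip comments and malformed lines
    ab = line.split(':', 1)[0]
    if ab not in seen:
        seen.add(ab)
        sys.stdout.write(line)
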
2022-07-26, 16:55   #14   R.D. Silverman

Quote:
Originally Posted by R.D. Silverman View Post
Here is an update to a post that I made some time ago.
This post contains info about recent ECM efforts.

There are currently 37 unfinished numbers from the 1987 hardcover edition of the
Cunningham book.

It would be nice to finish them. They are all from base 2, with index < 1200
for 2,n+ and index < 2400 for 2LM. These numbers were added in the early 1960's
to the original 1925 tables. The original Cunningham book only took n <= 600.
So these have been waiting for a while.......

None of them have finished sieving and are waiting for or running LA: (none)
None of them are currently sieving: (none)
According to Sam, one is queued to start sieving: 2,2246M (C221 via GNFS).

Two more are "within reach" of NFS@Home:
2,1091+ and 2,1109+ [C225 via GNFS]. There are 34 left *if* these get done.

According to Greg, these last two push NFS@Home limits.

Perhaps 2,2350M, 2,1180+ and 2,2390L are within range of NFS@Home as octics? [unclear]

They get quite a bit harder after that via SNFS. Of course the 2- table
was finished to index 1200, so the rest are all doable, but it would take
a massive effort. It is an open question as to how large a number can be done by NFS@Home.
Greg says ~330 digits SNFS (225 GNFS), so even the smallest, e.g. 2,1097+ and 2,2194LM, are
seemingly out of reach.

How about a very large ECM effort to pick off as many of the rest as we can?
Below is the current YoYo ECM effort; 9900 curves @ B1 = 2.9G are in progress (default B2).
Both Bruce Dodson and Ryan Propper have previously done extensive trials, aided
by assorted efforts of others. The exact total is unknown. EPFL did 20K curves @ 1G for the 2+ table.
I've run 1000 curves at B1 = 3G with higher B2 limits than the GMP-ECM default. [I used equal B1/B2 times]

Would it be worth it for YoYo to do a full t70?
==========================================================================================
2,1180+ 12010 @850M 9910 @2.9G
2,1139+ 12010 @850M 9910 @2.9G
2,1091+ 12010 @850M 9910 @2.9G
2,1097+ 12010 @850M 9910 @2.9G
2,2194M 12010 @850M 9910 @2.9G
2,2194L 12010 @850M 9910 @2.9G
2,2206L 12010 @850M
2,1109+ 12010 @850M 9910 @2.9G
2,2222L 12010 @850M 9910 @2.9G
2,2222M 12010 @850M 9910 @2.9G
2,1108+ 12010 @850M 9910 @2.9G
2,2246M 12010 @850M 9910 @2.9G
2,2246L 12010 @850M 9910 @2.9G
2,1124+ 12010 @850M 9910 @2.9G
2,1123+ 12010 @850M 9910 @2.9G
2,1129+ 12010 @850M 9910 @2.9G
2,2266L 12010 @850M 9910 @2.9G
2,1136+ 12010 @850M 9910 @2.9G
2,2278M 12010 @850M 9910 @2.9G
2,1147+ 12010 @850M 9910 @2.9G
2,1151+ 12010 @850M 9910 @2.9G
2,2306L 12010 @850M 9910 @2.9G
2,2302L 12010 @850M 9910 @2.9G
2,1153+ 12010 @850M 9806 @2.9G
2,2318M 12010 @850M 8212 @2.9G
2,1159+ 12010 @850M
2,1163+ 12010 @850M
2,1168+ 12010 @850M
2,2342M 12010 @850M
2,2350M 12010 @850M
2,2354M 12010 @850M
2,2354L 12010 @850M
2,2378L 12010 @850M
2,2374L 12010 @850M
2,1187+ 12010 @850M
2,2390L 12010 @850M
Greg has queued the 3 octics. I assume that 2,1109+ and 2,1091+ will get done eventually. The remainder
seem out of reach for NFS@Home. I suggest we refer to them as the 'Gang of 31'. The "31" is quite apropos.

I understand that Ryan did quite a bit of ECM pounding on 2,2398M. We should thank him.
2022-07-26, 16:57   #15   R.D. Silverman

Quote:
Originally Posted by charybdis View Post
Out of curiosity, I did some test-sieving for 2,1091+ with the following parameters:
[test-sieving parameters, yield table, and commentary for 2,1091+ quoted in full in post #12 above]

This is only 4 bits larger than 2,2174LM. Is the parameter data for those available?
2022-07-26, 17:37   #16   charybdis

Quote:
Originally Posted by R.D. Silverman View Post
This is only 4 bits larger than 2,2174LM. Is the parameter data for those available?
2,2174L was done with 33-bit large primes, 2,2174M started with 33-bit but was mostly done with 34-bit. This was the result:

Quote:
Originally Posted by frmky View Post
For 2,2174L we sieved from 20M - 6B, and collected 1.36B relations. This gave 734M uniques, so about 46% duplicates.

For 2,2174M we sieved from 20M - 4B, and collected 2.19B relations. This gave 1.29B uniques, so about 41% duplicates. However, we sieved a considerably narrower range of q, and it was overall much faster.
So 2,1091+ should be possible with 34-bit large primes too. Though I haven't tested it, I assume 35-bit will be faster at this size. Maybe 2,1097+ and 2,2194L/M will be possible too.
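
Putting those figures next to the test-sieving estimate from post #12 (the 40-45% duplicate rate for a 35-bit job is an assumption, not a measurement):
Code:
# Duplicate rates from the 2,2174L/M figures quoted above, plus a hedged
# extrapolation to a ~4G-relation 2,1091+ job.
jobs = {"2,2174L": (1.36e9, 0.734e9), "2,2174M": (2.19e9, 1.29e9)}
for name, (raw, uniq) in jobs.items():
    print(f"{name}: duplicate rate {1 - uniq / raw:.0%}")

for dup_rate in (0.40, 0.45):                     # assumed, not measured
    print(f"4.0e9 raw at {dup_rate:.0%} duplicates -> "
          f"{4.0e9 * (1 - dup_rate):.2g} uniques")
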

2,1109+ will require a big polyselect effort, which I expect we will begin in a few months once northern hemisphere temperatures have dropped a bit.
2022-07-27, 23:15   #17   swellman: Gang of 31

Code:
2_2222L   C228   SNFS 334
2_2278M   C234   SNFS 343
2_1151+   C236   SNFS 347
2_2206L   C243   SNFS 332
2_1136+   C247   SNFS 342
2_1139+   C248   SNFS 323 (octic)
2_2246L   C253   SNFS 338
2_2266L   C255   SNFS 341
2_1108+   C271   SNFS 334
2_2354M   C271   SNFS 354
2_2306L   C287   SNFS 347
2_1097+   C288   SNFS 331
2_2222M   C289   SNFS 334
2_2342M   C291   SNFS 353
2_2302L   C293   SNFS 347
2_2318M   C296   SNFS 349
2_1163+   C297   SNFS 350
2_2194M   C301   SNFS 331
2_2194L   C304   SNFS 331
2_2378L   C305   SNFS 358
2_1153+   C306   SNFS 347
2_2374L   C309   SNFS 358
2_1124+   C311   SNFS 338
2_2354L   C314   SNFS 354
2_1147+   C317   SNFS 345
2_1159+   C318   SNFS 349
2_1168+   C326   SNFS 352
2_2398M   C326   SNFS 361
2_1129+   C330   SNFS 340
2_1187+   C334   SNFS 358
2_1123+   C338   SNFS 338
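
The SNFS-difficulty column tracks the exponent closely: for 2,n+ the difficulty is roughly n·log10(2) digits, and for the Aurifeuillian 2,nL/M pairs roughly (n/2)·log10(2), since each half has algebraic size about 2^(n/2). A rough reconstruction (entries can differ by a digit depending on the exact polynomial, and 2_1139+ comes in lower, presumably because the octic targets the smaller (2^1139+1)/(2^67+1) cofactor):
Code:
# Approximate SNFS difficulty from the exponent alone; see caveats above.
from math import log10

def approx_snfs_digits(name):
    _, exp = name.split("_")                    # e.g. "2_2222L" -> "2222L"
    n = int(exp.rstrip("+LM"))
    return n * log10(2) / (2 if exp[-1] in "LM" else 1)

for name in ("2_2222L", "2_2306L", "2_2398M", "2_1136+", "2_1129+"):
    print(f"{name}: ~SNFS {approx_snfs_digits(name):.0f}")
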
2022-07-27, 23:42   #18   charybdis

Quote:
Originally Posted by swellman View Post
Code:
2_2222L   C228   SNFS 334
2_2278M   C234   SNFS 343
2_1151+   C236   SNFS 347
These three are borderline SNFS/GNFS.
Most of the rest are probably beyond the degree 6/7 cutoff for SNFS? There will be a transitional zone where proximity of the exponent to multiples of 6 and 7 determines which is better.
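
On the "borderline" point: Greg's limits quoted earlier (~330 digits SNFS versus ~225 digits GNFS) imply a crossover ratio of about 1.47, and these three sit right at it (334/228, 343/234 and 347/236 are all roughly 1.46-1.47). On the degree question, here is a sketch of the degree-6 versus degree-7 trade-off for the 2,n+ numbers (an illustration of the "proximity to multiples of 6 and 7" remark, ignoring second-order effects like skew and the exact norm sizes): for degree d, take k = ceil(n/d) and multiply 2^n + 1 by 2^(dk - n), giving x^d + 2^(dk - n) with x = 2^k. This reproduces the x^6 + 2 polynomial used for 2,1091+ in post #12.
Code:
# Degree-6 vs degree-7 SNFS constructions for 2^n + 1:
#   2^(d*k - n) * (2^n + 1) = x^d + 2^(d*k - n),  x = 2^k,  k = ceil(n/d),
# so the difficulty is roughly d*k*log10(2) digits and the constant
# coefficient is 2^(d*k - n), small when n is just below a multiple of d.
from math import ceil, log10

def constructions(n):
    out = {}
    for d in (6, 7):
        k = ceil(n / d)
        out[d] = (round(d * k * log10(2), 1), 2 ** (d * k - n))
    return out

for n in (1097, 1123, 1187):
    print(n, constructions(n))   # {degree: (approx difficulty, c0)}
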
2022-07-28, 09:02   #19   xilman

Quick question. I could likely answer it myself but I am too sleepy right now.

Are any sets of these amenable to the factoring factory approach of Lenstra et al?

If so, it should reduce the sieving effort substantially.
2022-07-28, 13:37   #20   R.D. Silverman

Quote:
Originally Posted by xilman View Post
Quick question. I could likely answer it myself but I am too sleepy right now.

Are any sets of these amenable to the factoring factory approach of Lenstra et al?

If so, it should reduce the sieving effort substantially.
They are amenable in principle, but I doubt that it will work for NFS@Home. The data storage/transfer requirements
would be much too large for a distributed effort. You would have to sieve ONE number and save, to a central
location, all of the lattice locations for the special-q polynomial that were potentially smooth, and this
for ALL of the special-q values. Then, for subsequent numbers, all of those lattice locations [for all the
special q] would need to be sent to EVERY client.
They could then sieve the other polynomial for each number being factored. This is a massive amount
of data for the clients to read and save as well as an enormous burden on the server. Latency and
bandwidth would be a major problem.

Or, if you could guarantee that everyone who helped for the first number would work on ALL of the
subsequent numbers and then sieve exactly those same special q that they sieved the first time you
could avoid sending the lattice locations back and forth. But this would be very delicate to manage and
very error prone.

Note that the theoretical best speedup is also only 50% if everything is done perfectly. Data read/write
latency would prevent the max theoretical gain even when data is retained locally by each client.
Storage requirements are massive.
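
To put an illustrative number on "massive" (every per-relation figure below is an assumption for the sake of arithmetic, not a measurement):
Code:
# Back-of-envelope size of the shared survivor data for a factorization-factory
# style run on one of these numbers. Assumptions: the job produces ~4e9
# relations (as estimated earlier in the thread), candidate lattice locations
# that survive sieving but still await cofactorization outnumber final
# relations by 10-100x, and each stored location takes ~16 bytes (q plus a
# packed (i, j) pair).
relations = 4e9
bytes_per_location = 16
for candidates_per_relation in (10, 100):
    total_tb = relations * candidates_per_relation * bytes_per_location / 1e12
    print(f"{candidates_per_relation:>3}x candidates per relation -> "
          f"about {total_tb:.1f} TB to store centrally and ship to every client")
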
2022-08-19, 10:34   #21   R.D. Silverman: A followup

Quote:
Originally Posted by R.D. Silverman View Post

<snip>

.
A fundamental (IMO) question remains. Given that this set of 31 numbers is probably beyond the range of
NFS@Home, what can be done (if anything) to promote the use of ECM beyond the effort already provided by YoYo?

It seems that this would be the only means of furthering work on this set of numbers.
2022-08-26, 22:56   #22   swellman

Just noticed that Greg loaded 2,2390L and 2,2350M into the big queue. ECM has not yet been run on these two numbers but I'll ask Yoyo to remove them from his system. Let NFS@Home do its thing.