#12
charybdis
Apr 2020
797 Posts
Code:
n: 2117208798053985074797883391743275990128601953853639828878164892688444863926960451777994923461629323162218814154866250606508547121440235925708386797172317097515145076163293879812027206424552538135108597109220186300900511691987121969358311920812929997749355581156627347486061441269205378406076851632845597947
skew: 1.563
c6: 1
c0: 2
Y1: 1
Y0: -6129982163463555433433388108601236734474956488734408704
type: snfs
rlim: 232000000
alim: 268000000
lpbr: 35
lpba: 35
mfbr: 102
mfba: 70
rlambda: 3.9
alambda: 2.8
Code:
MQ    Norm_yield  Speed (sec/rel)
100   2503        0.433
300   1793        0.618
500   1572        0.679
1000  1260        0.807
1500   997        0.994
2000   935        1.038
3000   760        1.242
4000   684        1.359
The NFS@Home limits on alim/rlim are very restrictive at this size; the natural way to compensate is to use higher large prime bounds, hence the move to 35/35 from NFS@Home's usual 34/34. It's possible that 36/35 or 36/36 would be even better, but that would require >2^32 unique relations, which msieve can't handle. With mfbr/mfba being that large relative to alim/rlim, it's also important to ensure that the lambdas are high enough that you don't lose lots of relations.
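To see what this job file encodes, and why 36/36 is off the table, here's a minimal sketch. It checks the sextic identity implied by c6, c0 and Y0 (x^6 + 2 at x = 2^182 gives 2·(2^1091 + 1), i.e. 2,1091+), then applies the rough heuristic that a job needs on the order of one unique relation per large prime below each bound; the π(x) ≈ x/ln x estimate and the one-relation-per-prime rule are illustrative assumptions, not project figures.

Code:
from math import log

# The job file above encodes f(x) = x^6 + 2 evaluated at x = -Y0/Y1 = 2^182.
x = 2**182
assert x**6 + 2 == 2 * (2**1091 + 1)  # so the sieved number is 2*(2^1091+1)

def approx_prime_count(bits):
    """Estimate pi(2^bits) with the prime number theorem: x / ln(x)."""
    n = 2.0 ** bits
    return n / log(n)

for lpb in (34, 35, 36):
    # Crude target: one unique relation per large prime on each side.
    needed = 2 * approx_prime_count(lpb)
    flag = "over" if needed > 2**32 else "under"
    print(f"lpbr=lpba={lpb}: ~{needed:.2e} unique relations ({flag} 2^32)")

On this rough count, 36/36 lands well above 2^32 while 35/35 stays under it, consistent with the choice above.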
#13
"Curtis"
Feb 2005
Riverside, CA
5·29·37 Posts
We can also do a team-sieve with CADO for low Q, say 100-250M, with A=32 and larger lims, if it looks like NFS@Home alone will cut it close for relation gathering. This is also possible after the fact, by running CADO above Q=4000G if needed to get a more reasonable matrix.
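For concreteness, a low-Q CADO run like that might be launched with the las siever roughly as below. The file names are hypothetical, the flag list is from memory of recent CADO-NFS versions (check las -help before trusting it), and mapping the ggnfs-style rlim/alim/mfbr/mfba values from the job file above onto CADO's side-0 (rational) / side-1 (algebraic) convention is an assumption.

Code:
las -poly gang31.poly -q0 100000000 -q1 250000000 -A 32 \
    -lim0 268000000 -lim1 268000000 -lpb0 35 -lpb1 35 \
    -mfb0 70 -mfb1 102 -fb1 gang31.roots.gz \
    -t 4 -out rels.100M-250M.gz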
#14
"Bob Silverman"
Nov 2003
North of Boston
7464₁₀ Posts
Quote:
[...] seem out of reach for NFS@Home.
I suggest we refer to them as the 'Gang of 31'. The "31" is quite apropos.

I understand that Ryan did quite a bit of ECM pounding.
#15
"Bob Silverman"
Nov 2003
North of Boston
2³×3×311 Posts
This is only 4 bits larger than 2,2178LM. Is the parameter data for those available? |
#16
charybdis
Apr 2020
797 Posts
2,1109+ will require a big polyselect effort, which I expect we will begin in a few months once northern hemisphere temperatures have dropped a bit.
#17 |
Jun 2012
111000101000₂ Posts
Code:
2_2222L  C228  SNFS 334
2_2278M  C234  SNFS 343
2_1151+  C236  SNFS 347
2_2206L  C243  SNFS 332
2_1136+  C247  SNFS 342
2_1139+  C248  SNFS 323 (octic)
2_2246L  C253  SNFS 338
2_2266L  C255  SNFS 341
2_1108+  C271  SNFS 334
2_2354M  C271  SNFS 354
2_2306L  C287  SNFS 347
2_1097+  C288  SNFS 331
2_2222M  C289  SNFS 334
2_2342M  C291  SNFS 353
2_2302L  C293  SNFS 347
2_2318M  C296  SNFS 349
2_1163+  C297  SNFS 350
2_2194M  C301  SNFS 331
2_2194L  C304  SNFS 331
2_2378L  C305  SNFS 358
2_1153+  C306  SNFS 347
2_2374L  C309  SNFS 358
2_1124+  C311  SNFS 338
2_2354L  C314  SNFS 354
2_1147+  C317  SNFS 345
2_1159+  C318  SNFS 349
2_1168+  C326  SNFS 352
2_2398M  C326  SNFS 361
2_1129+  C330  SNFS 340
2_1187+  C334  SNFS 358
2_1123+  C338  SNFS 338
#18
charybdis
Apr 2020
31D₁₆ Posts
Most of the rest are probably beyond the degree 6/7 cutoff for SNFS? There will be a transitional zone where proximity of the exponent to multiples of 6 and 7 determines which is better. |
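To make the cutoff concrete: for 2^e+1 with a degree-d polynomial one takes x = 2^k with dk ≥ e and sieves x^d + 2^(dk-e), so the effective difficulty is dk·log10(2) digits rather than the e·log10(2) listed in the table, and the penalty is governed by how far e sits from a multiple of d. (The x^6 + 2 job file earlier in the thread is exactly this construction with e = 1091, d = 6, k = 182.) A rough sketch of the comparison, with the caveat that the real sextic/septic crossover also depends on sieving parameters, not difficulty alone:

Code:
from math import ceil, log10

def snfs_difficulty(e, d):
    """Effective SNFS difficulty in decimal digits for 2^e+1, using the
    plain construction x^d + 2^(d*k - e) with x = 2^k, k = ceil(e/d)."""
    k = ceil(e / d)
    return d * k * log10(2)

# Compare sextic vs septic difficulty for a few exponents from the list.
for e in (1123, 1129, 1151, 1187):
    print(f"2,{e}+  deg 6: {snfs_difficulty(e, 6):.0f} digits"
          f"  deg 7: {snfs_difficulty(e, 7):.0f} digits")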
#19 |
Bamboozled!
"xilman"
May 2003
Down not across
2·5,711 Posts
Quick question. I could likely answer it myself but I am too sleepy right now.
Are any sets of these amenable to the factoring factory approach of Lenstra et al.? If so, it should reduce the sieving effort substantially.
#20
"Bob Silverman"
Nov 2003
North of Boston
2³×3×311 Posts
It would be much too large for a distributed effort.

You would have to sieve ONE number and SAVE all of the lattice locations for one polynomial (the special-q polynomial) that were potentially smooth to a central location. This would be for ALL of the special-q values. Then, for subsequent numbers, all of those lattice locations [for all the special q] would need to be sent to EVERY client. They could then sieve the other polynomial for each number being factored. This is a massive amount of data for the clients to read and save, as well as an enormous burden on the server. Latency and bandwidth would be a major problem.

Or, if you could guarantee that everyone who helped with the first number would work on ALL of the subsequent numbers, and sieve exactly those same special q that they sieved the first time, you could avoid sending the lattice locations back and forth. But this would be very delicate to manage and very error prone.

Note that the theoretical best speedup is also only 50%, even if everything is done perfectly. Data read/write latency would prevent the max theoretical gain even when data is retained locally by each client. Storage requirements are massive.
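A back-of-envelope illustration of the storage point: suppose the survivors from the special-q side had to be shipped to every client. All three inputs below are placeholder guesses (survivor counts vary enormously with parameters), not figures from anyone's actual job.

Code:
# Rough cost of distributing saved lattice locations to each client.
special_q_count = 2e9    # special-q values across the whole job (guess)
survivors_per_q = 2e4    # potentially-smooth locations kept per q (guess)
bytes_per_hit   = 12     # packed (a, b) coordinates per location (guess)

total_bytes = special_q_count * survivors_per_q * bytes_per_hit
print(f"~{total_bytes / 1e12:.0f} TB to send to every client")  # ~480 TB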