mersenneforum.org How to identify SNFS candidates and factor them?

2021-07-10, 06:30   #45
axn

Jun 2003

2·5·17·31 Posts

Quote:
 Originally Posted by bur Interesting, is there a way to know if GNFS is faster or does it just turn out by experiment?
There is some rule of thumb regarding SNFS-GNFS equivalency. I think an SNFS:GNFS size ratio of about 1.3 (?) is roughly the equivalence point. Here 175/119 = 1.47, which clearly favors GNFS. SNFS 175 should be equivalent to GNFS 130 (+/-).

Of course, not all SNFS 175 are equal. And the 1.3 might not be the proper value. So, it is a good starting point to check if one or the other is clearly superior, but not definitive.
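That rule of thumb can be sketched in a few lines of Python (the helper name and the exact ratio are my assumptions, not the thread's; as noted above, the ratio itself is uncertain):

```python
# Rule of thumb from this thread: an SNFS job of difficulty S is roughly
# as hard as a GNFS job of size S / ratio, with the ratio somewhere
# around 1.3-1.35 (and itself not firmly established).
def gnfs_equivalent(snfs_size, ratio=1.35):
    return snfs_size / ratio

print(round(gnfs_equivalent(175)))  # 130, i.e. "GNFS 130 (+/-)"
```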

Last fiddled with by axn on 2021-07-10 at 06:30

2021-07-10, 13:13   #46
firejuggler

"Vincent"
Apr 2010
Over the rainbow

2743₁₀ Posts

I think I saw somewhere that SNFS difficulty = (digit length - 30) * 1.5, which is close to what axn said.
2021-07-10, 13:34   #47
charybdis

Apr 2020

1001001000₂ Posts

You can use Murphy-E scores as given by cownoise/msieve to compare polynomials as long as they have the same degree. Helpfully this is a degree 5 SNFS polynomial, so we can compare it to GNFS polynomials in the right range. The score of ~9e-11 is comparable to the score of a typical GNFS-130 polynomial. 175/130 = 1.35 so that's a good ratio to use for these numbers.

Quote:
 Originally Posted by axn Of course, not all SNFS 175 are equal. And the 1.3 might not be the proper value. So, it is a good starting point to check if one or the other is clearly superior, but not definitive.
Indeed. The big c5 coefficient makes this number more difficult than a "typical" SNFS-175. An SNFS-175 polynomial with small algebraic coefficients scores more like GNFS-123, suggesting a ratio of ~1.42 rather than 1.3 or 1.35. Though I'd guess a lot of SNFS-175s being done these days don't have small algebraic coefficients.

I believe the ratio gets larger as numbers get bigger. 1.42 would suggest SNFS-250 is as hard as GNFS-176, which it isn't.
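Spelled out, the arithmetic behind those ratios (a sketch using only the numbers from this post):

```python
# Ratio implied by a small-coefficient SNFS-175 scoring like GNFS-123:
ratio = round(175 / 123, 2)
print(ratio)               # 1.42
# Extrapolating a fixed ratio to bigger numbers overshoots:
print(round(250 / ratio))  # 176 -- but SNFS-250 is easier than GNFS-176
```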

Quote:
 Originally Posted by firejuggler I think I saw somewhere that SNFS difficulty = (digit length - 30) * 1.5, which is close to what axn said.
This must have a typo? (130-30)*1.5 = 150...

Last fiddled with by charybdis on 2021-07-10 at 13:37

2021-07-10, 13:53   #48
richs

"Rich"
Aug 2002
Benicia, California

3·11·43 Posts

Batalov told me this: the short rule of thumb is gnfs_size < 0.56 * S + 30, where S is the SNFS difficulty.
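That rule can be written down directly (a sketch; the function name is mine, the constants are Batalov's):

```python
# gnfs_size < 0.56 * S + 30: GNFS is the better choice when the cofactor
# is smaller than this crossover size.
def gnfs_crossover(snfs_difficulty):
    return 0.56 * snfs_difficulty + 30

print(round(gnfs_crossover(175)))  # 128 -> a c119 cofactor favors GNFS
```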
2021-07-11, 09:27   #49
bur

Aug 2020
79*6581e-4;3*2539e-3

111100011₂ Posts

So is the difficulty of factoring a cofactor of a "special number" entirely determined by the size of the special number? Or does the cofactor size also play a role? I guess the latter?

And at roughly which digit length should I switch to c130 parameters? VBCurtis mentioned I should switch +5 digits for the params approximately every 30 bits, but from what starting points?
2021-07-11, 11:08   #50
axn

Jun 2003

12226₈ Posts

Quote:
 Originally Posted by bur So is the difficulty of factoring a cofactor of a "special number" entirely determined by the size of the special number? Or does the cofactor size also play a role? I guess the latter?
The former. Using the cofactor helps in the final sqrt phase when it is doing the gcd to extract factors. That'll avoid reporting small (previously known, redundant) factors.
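A toy illustration of that point (not CADO's actual code; the numbers are made up):

```python
from math import gcd

# In the sqrt phase NFS has x^2 = y^2 (mod N) and pulls out factors via
# gcd(x - y, N). Taking the gcd with the cofactor rather than the full
# special number avoids re-reporting factors that are already known.
N = 3 * 5 * 10007 * 10009        # toy "special number"
known = 3 * 5                    # previously known small factors
cofactor = N // known
x_minus_y = 3 * 10007            # pretend the algebraic data gave us this
print(gcd(x_minus_y, N))         # 30021 -> includes the known factor 3
print(gcd(x_minus_y, cofactor))  # 10007 -> only the new factor
```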

2021-07-11, 19:02   #51
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

3³·191 Posts

Quote:
 Originally Posted by bur And at roughly which digit length should I switch to c130 parameters? VBCurtis mentioned I should switch +5 digits for the params approximately every 30 bits, but from what starting points?
As you get into jobs that take days rather than hours, you'll want to test-sieve parameters yourself. There aren't solid rules-of-thumb for SNFS jobs like there are for GNFS, because optimal parameter choice depends in part on the size of coefficients in the poly.

That said, the formula given in the post before yours lets you plug in 130 for GNFS and solve for S; I get 180 or so. By the time you're at 200 digits, you ought to test params yourself or study the 14e queue submissions in the NFS@Home subforum to see what params were chosen for SNFS jobs of similar size. That research should keep you out of trouble in the "oops, this job took twice as long as it should have" sense. When in doubt, use the bigger large-prime option.
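Inverting the rule of thumb from the previous post (a sketch; the function name is mine):

```python
# Solve gnfs_size = 0.56 * S + 30 for S: the SNFS difficulty at which a
# given GNFS parameter file becomes the right fit.
def snfs_for_gnfs(gnfs_size):
    return (gnfs_size - 30) / 0.56

print(round(snfs_for_gnfs(130)))  # 179, i.e. "180 or so"
```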

2021-07-13, 18:00   #52
bur

Aug 2020
79*6581e-4;3*2539e-3

3×7×23 Posts

Ok, so I roughly choose the params as for a GNFS composite of the same difficulty? 180 would agree with my current plan: I used params.c120 for 520, switched to c125 at 550, and the next switch at 580 would be due. So far the longest SNFS took about 5 h of sieving, so I'm still far from days.
2021-10-18, 16:17   #53
bur

Aug 2020
79*6581e-4;3*2539e-3

111100011₂ Posts

Once again the factoring times increased significantly. I'm at 1281979*2^632+1, and the 179-digit cofactor took nearly 24 hours. I used the optimized c140 params as a basis, with slight adjustments as suggested here previously:

Code:
tasks.lim0 = 13000000
tasks.lim1 = 13000000
tasks.lpb0 = 31
tasks.lpb1 = 31
tasks.sieve.mfb0 = 58
tasks.sieve.mfb1 = 56
tasks.sieve.lambda0 = 1.805
tasks.sieve.lambda1 = 1.79
tasks.sieve.ncurves0 = 22
tasks.sieve.ncurves1 = 19
tasks.I = 14
tasks.qmin = 1000000
tasks.sieve.qrange = 10000
tasks.sieve.rels_wanted = 90000000 #increased from 82,000,000
tasks.sieve.sqside = 0
The unique ratio was ok, 67%, but it seems the excess is really low after singleton removal. Even with the increase of rels_wanted to 90e6, it came to an excess of only 300 or so. The previous number was similar: it kept sieving more and more, with ok unique ratios of 56% for the additional relations, but the excess hardly moved. Does that give a hint as to what to optimize?

In general, since it was said here that SNFS requires custom tweaking, what are the things to look out for? I have an idea about qmin and rels_wanted, but I have no clue how to determine optimal values for the other parameters. Any help is greatly appreciated.
2021-10-18, 16:45   #54
henryzz
Just call me Henry

"David"
Sep 2007
Liverpool (GMT/BST)

3×5×397 Posts

What polynomial degree are you using? 5?
2021-10-18, 20:51   #55
charybdis

Apr 2020

2³·73 Posts

Quote:
 Originally Posted by bur Once again the factoring times increased significantly. I'm at 1281979*2^632+1, and the 179-digit cofactor took nearly 24 hours. I used the optimized c140 params as a basis, with slight adjustments as suggested here previously: ... The unique ratio was ok, 67%, but it seems the excess is really low after singleton removal. Even with the increase of rels_wanted to 90e6, it came to an excess of only 300 or so. The previous number was similar: it kept sieving more and more, with ok unique ratios of 56% for the additional relations, but the excess hardly moved. Does that give a hint as to what to optimize? In general, since it was said here that SNFS requires custom tweaking, what are the things to look out for? I have an idea about qmin and rels_wanted, but I have no clue how to determine optimal values for the other parameters. Any help is greatly appreciated.
The E score for this number would be more typical for GNFS ~c146 (comparison is fine assuming you're still using deg 5) so your lims, lpbs and mfbs may be on the small side. As henryzz alluded to, at some point you'll want to switch to degree 6. At this size degree 5 is probably still fastest but it may be worth doing some test-sieving soon. Near the boundary, the size of the coefficients will make a difference, e.g. degree 6 is more likely to be better if the exponent is divisible by 6.

The way that CADO reports the excess is unhelpful IMO. The way that the "purge" phase of filtering works is as follows:
1a. Run singleton removal until no singletons are left. If the excess is negative or too small (as determined by required_excess), stop and report this excess value.
1b. If the excess is large enough, run clique removal. This throws away some relations and ideals in order to make the eventual matrix smaller, while decreasing the excess. CADO will make sure not to let the excess get too small (but now the threshold for "small" is no longer determined by required_excess; it's just a few hundred).
2a. Run singleton removal again: after clique removal some new singletons will have been created.
2b. Run clique removal again.
... repeat until the excess is reduced to the desired figure. Report this excess value and move on to the "merge" phase.
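The singleton-removal part of that loop can be modeled in a few lines (a toy sketch, not CADO's implementation; clique removal is omitted):

```python
from collections import Counter

def remove_singletons(relations):
    """Repeatedly drop relations containing an ideal that appears in
    only one relation; such a relation can never be part of a square."""
    rels = list(relations)
    while True:
        counts = Counter(ideal for r in rels for ideal in r)
        kept = [r for r in rels if all(counts[i] > 1 for i in r)]
        if len(kept) == len(rels):
            return kept
        rels = kept

# Five toy relations over ideals 1..5; {4, 5} dies as two singletons.
surviving = remove_singletons([{1, 2}, {2, 3}, {1, 3}, {4, 5}, {1, 2, 3}])
ideals = {i for r in surviving for i in r}
print(len(surviving) - len(ideals))  # excess = 4 relations - 3 ideals = 1
```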

So if "purge" is successful, it will always report a very small excess figure, so you can't see how much you've oversieved. To get a figure that's actually useful, you'll need to look in the jobname.purge.stdout.* files. You'll see that the program starts by reading the relations, and then outputs something like this:

Code:
Step 0: only singleton removal
Sing. rem.: begin with: nrows=44925659 ncols=48627588 excess=-3701929 at 30.06
Sing. rem.:   iter 001: nrows=23365948 ncols=22350686 excess=1015262 at 34.29
Sing. rem.:   iter 002: nrows=18377635 ncols=16926608 excess=1451027 at 35.81
Sing. rem.:   iter 003: nrows=16784335 ncols=15279175 excess=1505160 at 36.70
Sing. rem.:   iter 004: nrows=16225933 ncols=14713523 excess=1512410 at 37.48
Sing. rem.:   iter 005: nrows=16015632 ncols=14502171 excess=1513461 at 38.10
Sing. rem.:   iter 006: nrows=15935778 ncols=14422157 excess=1513621 at 38.69
Sing. rem.:   iter 007: nrows=15903684 ncols=14390037 excess=1513647 at 39.26
Sing. rem.:   iter 008: nrows=15890465 ncols=14376813 excess=1513652 at 39.86
Sing. rem.:   iter 009: nrows=15885325 ncols=14371673 excess=1513652 at 40.44
Sing. rem.:   iter 010: nrows=15883380 ncols=14369728 excess=1513652 at 41.06
Sing. rem.:   iter 011: nrows=15882660 ncols=14369008 excess=1513652 at 41.63
Sing. rem.:   iter 012: nrows=15882378 ncols=14368726 excess=1513652 at 42.26
Sing. rem.:   iter 013: nrows=15882267 ncols=14368615 excess=1513652 at 42.90
Sing. rem.:   iter 014: nrows=15882228 ncols=14368576 excess=1513652 at 43.44
Sing. rem.:   iter 015: nrows=15882201 ncols=14368549 excess=1513652 at 44.07
Sing. rem.:   iter 016: nrows=15882192 ncols=14368540 excess=1513652 at 44.70
Sing. rem.:   iter 017: nrows=15882186 ncols=14368534 excess=1513652 at 45.27
Sing. rem.:   iter 018: nrows=15882185 ncols=14368533 excess=1513652 at 45.90
Sing. rem.:   iter 019: No more singletons, finished at 46.44
The final excess figure here is the useful one for seeing how much you've oversieved. It's directly comparable to the excesses you see when you don't have enough relations.
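To pull that final figure out automatically, something like this works (a sketch; it assumes the log format shown above is stable):

```python
import re

# Excerpt of a jobname.purge.stdout.* file, as in the output above.
log = """\
Sing. rem.:   iter 018: nrows=15882185 ncols=14368533 excess=1513652 at 45.90
Sing. rem.:   iter 019: No more singletons, finished at 46.44
"""
# The last reported excess is the one left after all singletons are gone.
excesses = re.findall(r"excess=(-?\d+)", log)
print(excesses[-1])  # 1513652 -> heavily oversieved
```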

Last fiddled with by charybdis on 2021-10-18 at 20:55

