#23
Apr 2020
17·41 Posts

Nicely done!
#24
"Curtis"
Feb 2005
Riverside, CA
2²·1,319 Posts
I'd try making lim0 and lim1 the same (use the larger of the two from my C120 file). That'll require a few more relations, maybe 10% more, but the job should go faster. About every 30 bits of exponent increase, step to the next-bigger params file for lim/lpb choices.
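For example, a hypothetical params-file fragment (the numbers are illustrative, not the actual c120 values) with both lims set to the larger of the file's two:

```
tasks.lim0 = 3000000
tasks.lim1 = 3000000   # both set to the larger of the two original values
tasks.lpb0 = 28
tasks.lpb1 = 28
```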
#25
Aug 2020
79*6581e-4;3*2539e-3
767₈ Posts
I updated YAFU, poly generation works, it's even doing test sieving on two polys.

And out of curiosity, what is the reasoning behind making lim0 and lim1 the same? Why does it save time?

Last fiddled with by bur on 2021-06-01 at 18:29
#26 |
"Curtis"
Feb 2005
Riverside, CA
2²×1,319 Posts
When the norms of the two sides are quite different in size, CADO is more efficient if the side with the larger norm also has a larger lim. Since your job has anorm and rnorm very similar in size, there isn't an obvious reason to make the lims different. I'd change entirely to params.c125 when you get 30 bits bigger than you are now.

SNFS jobs double in difficulty about every 9 digits, while CADO-GNFS jobs double in difficulty about every 5 digits. With YAFU giving you polynomials, you'll be able to get quite far in your factoring sequence.
#27 |
Aug 2020
79*6581e-4;3*2539e-3
503 Posts
I tried 2^528 with yafu settings and the sieving had an ETA of about 3 h if I chose tasks.sieve.sqside = 1 as per EdH's guide. I switched to tasks.sieve.sqside = 0 and the ETA changed to 1 h. I also changed lim0/1 to 4500000 instead of yafu's 6100000 (yes, never change more than one parameter when testing), but I guess the sqside = 0 had the big impact?

Would it be advisable to generally use yafu's settings but with tasks.sieve.sqside = 0?

Last fiddled with by bur on 2021-06-02 at 08:50
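In other words, keep the YAFU-generated job file and override just these lines (values taken from the experiment above; 0 = rational side in CADO's convention, and whether it wins will vary by job):

```
tasks.sieve.sqside = 0
tasks.lim0 = 4500000
tasks.lim1 = 4500000
```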
#28 |
"Curtis"
Feb 2005
Riverside, CA
5276₁₀ Posts
It's advisable to experiment a lot, as you are doing. That's how we all learned!

SNFS jobs have fewer "do it this way" guidelines and more job-specific settings. I'm surprised there is a factor-of-2 difference in speed for sieving the other side; the norms suggest the decision should be a close call.
#29 |
Jun 2012
13×269 Posts
There is a lot of empirical data on Kamada’s site that you may find helpful. It has an accumulation of data as reported over the years. Some cases are certainly non-optimal but all are definitely “what worked”.
There are also the log files spread throughout the various NFS@Home sievers. Start with lasieved - it performs sieving for the smallest factoring jobs. Keep in mind NFS@Home uses a BOINC wrapper, so there are some inefficiencies baked in, such as the target number of relations being artificially elevated to compensate for a percentage of "junk" relations received back by the servers from the volunteer workers. But experimentation is always the best way.
#30
Apr 2020
1271₈ Posts
I calculated the norms for a few relations around Q=2M, and typical values are something like 10^36 for the algebraic norm and 10^39 for the rational norm. This is consistent with a small advantage for rational-side sieving. YAFU's estimates weren't too far off, but it overestimated the algebraic norm and so it chose the algebraic side for sieving.

I'd say keep on using tasks.sieve.sqside = 0 for these jobs, as it's likely that YAFU is systematically overestimating the algebraic norm. The advantage for the rational side should grow as the numbers get larger, until you switch to degree 6, at which point the algebraic side may be worth considering again.

@bsquared, if you're reading this - maybe worth getting YAFU to test-sieve algebraic vs rational when the estimated norms are close together?

Last fiddled with by charybdis on 2021-06-02 at 23:05
#31
Aug 2020
79*6581e-4;3*2539e-3
767₈ Posts
Maybe the large difference I found was caused by various factors: CADO vs. msieve, vbcurtis's optimized parameters, etc.

What also brings a nice boost is starting at much lower Q-values: instead of 60000 I now start at 10000, which for some numbers gave 50 rels/Q while at the final Q it was about 15 or 16. Maybe it's advisable to go even lower.
#32
Apr 2020
17·41 Posts
#33
"Ben"
Feb 2007
E21₁₆ Posts
The estimates are made like this:

a = sqrt((double)(1ULL << (2 * I - 1)) * 1000000.0 * poly->poly->skew);
b = sqrt((double)(1ULL << (2 * I - 1)) * 1000000.0 / poly->poly->skew);

but they don't take into account root properties, and obviously the Q is static. If there is a better way that is nearly as simple, that'd be great.

The reason it is choosing the algebraic side is that yafu is biased that way on purpose: the rational norms need to be about 5 orders of magnitude larger before it will choose to sieve the rational side. Can't remember why I did that... I think it's because for GNFS jobs it is usually the case that the algebraic side is better, even when the norms are slightly higher on the rational side.

Yes, another good idea. Should be fairly straightforward to implement.