2021-06-01, 16:17  #23
Apr 2020
2×251 Posts 
Nicely done!
2021-06-01, 17:47  #24
"Curtis"
Feb 2005
Riverside, CA
11610_{8} Posts 
I'd try making lim0 and lim1 the same (use the larger of the two from my C120 file). That'll require a few more relations, maybe 10% more, but the job should go faster. About every 30 bits of exponent increase, step to the next-bigger params file for lim/lpb choices.

2021-06-01, 18:27  #25
Aug 2020
79*6581e4;3*2539e3
401 Posts 
I updated Yafu, poly generation works, it's even doing test sieving on two polys.
And out of curiosity, what is the reasoning behind making lim0 and lim1 the same? Why does it save time?

Last fiddled with by bur on 2021-06-01 at 18:29

2021-06-01, 19:09  #26
"Curtis"
Feb 2005
Riverside, CA
2^{3}·5^{4} Posts 
When the norms of the two sides are quite different in size, CADO is more efficient when making the side with the larger norm also have a larger lim.
Since your job has anorm and rnorm very similar in size, there isn't an obvious reason to make the lims different. I'd change entirely to params.c125 when you get 30 bits bigger than you are now. SNFS jobs double in difficulty about every 9 digits, while CADO GNFS jobs double in difficulty about every 5 digits. With YAFU giving you polynomials, you'll be able to get quite far in your factoring sequence.
2021-06-02, 08:50  #27
Aug 2020
79*6581e4;3*2539e3
401 Posts 
I tried 2^528 with yafu settings and the sieving had an ETA of about 3 h if I chose tasks.sieve.sqside = 1 as per EdH's guide. I switched to tasks.sieve.sqside = 0 and the ETA changed to 1 h. I also changed lim0/1 to 4500000 instead of yafu's 6100000 (yes, I know: never change more than one parameter when testing), but I guess the sqside = 0 change is what made the big impact?

Would it be advisable to generally use yafu's settings but with tasks.sieve.sqside = 0?

Last fiddled with by bur on 2021-06-02 at 08:50
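For reference, the overrides described above would look like this in a CADO-NFS parameter file (parameter names as in the bundled params files; the values are the ones from this post, a starting point rather than tuned settings):

```
tasks.sieve.sqside = 0    # sieve on the rational side
tasks.lim0 = 4500000
tasks.lim1 = 4500000
```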
2021-06-02, 14:41  #28
"Curtis"
Feb 2005
Riverside, CA
11610_{8} Posts 
It's advisable to experiment a lot, as you are doing. That's how we all learned!
SNFS jobs have fewer "do it this way" guidelines and more job-specific settings. I'm surprised there is a factor-of-2 difference in speed for sieving the other side; the norms suggest the decision should be a close call.
2021-06-02, 21:40  #29
Jun 2012
3,203 Posts 
There is a lot of empirical data on Kamada’s site that you may find helpful. It has an accumulation of data as reported over the years. Some cases are certainly non-optimal, but all are definitely “what worked”.
There are also the log files spread throughout the various NFS@Home sievers. Start with lasieved; it performs sieving for the smallest factoring jobs. Keep in mind NFS@Home uses a BOINC wrapper, so there are some inefficiencies baked in, such as the target number of relations being artificially elevated to compensate for a percentage of “junk” relations received back by the servers from the volunteer workers. But experimentation is always the best way.
2021-06-02, 23:04  #30
Apr 2020
502_{10} Posts 
I calculated the norms for a few relations around Q=2M, and typical values are something like 10^36 for the algebraic norm and 10^39 for the rational norm. This is consistent with a small advantage for rational-side sieving. YAFU's estimates weren't too far off, but it overestimated the algebraic norm and so it chose the algebraic side for sieving. I'd say keep on using tasks.sieve.sqside = 0 for these jobs, as it's likely that YAFU is systematically overestimating the algebraic norm. The advantage for the rational side should grow as the numbers get larger, until you switch to degree 6, at which point the algebraic side may be worth considering again.

@bsquared, if you're reading this: maybe worth getting YAFU to test-sieve algebraic vs rational when the estimated norms are close together?

Last fiddled with by charybdis on 2021-06-02 at 23:05

2021-06-03, 06:15  #31
Aug 2020
79*6581e4;3*2539e3
621_{8} Posts 
Maybe the large difference I found was caused by various factors: cado vs msieve, vbcurtis's optimized parameters, etc. What also brings a nice boost is starting at much lower q-values: instead of 60000 I now start at 10000, which for some numbers gave 50 rels/q, while at the final q it was about 15 or 16. Maybe it's advisable to go even lower.
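In CADO-NFS parameter terms, the change described above is just (parameter name as used in the stock params files; value from this post):

```
tasks.qmin = 10000    # start sieving at q = 10000 instead of 60000
```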

2021-06-03, 11:06  #32
Apr 2020
766_{8} Posts 


2021-06-03, 13:02  #33
"Ben"
Feb 2007
6772_{8} Posts 
The norm estimates are computed like this:

a = sqrt((double)(1ULL << (2 * I - 1)) * 1000000.0 * poly->poly->skew);
b = sqrt((double)(1ULL << (2 * I - 1)) * 1000000.0 / poly->poly->skew);

But they don't take into account root properties, and obviously the Q is static. If there is a better way that is nearly as simple, that'd be great.

The reason it is choosing the algebraic side is that yafu is biased that way on purpose: the norms need to be about 5 orders of magnitude larger on the rational side before it will choose to sieve there. Can't remember why I did that... I think it's because for gnfs jobs it is usually the case that the alg side is better, even when the norms are slightly higher on the rational side.

Yes, another good idea. Should be fairly straightforward to implement.
