2022-01-05, 19:33  #78  
Apr 2020
13×71 Posts 
Quote:
Incidentally, the number of special-q per range varies less when sieving on the rational side, so on the SNFS jobs that require this, the number of relations per WU does not have as much variation. 

2022-01-10, 08:14  #79 
Aug 2020
79*6581e4;3*2539e3
653 Posts 
I'm at 190M relations, of which 69.7% or 132M are unique. The C167 took 152M uniques to build a matrix; how many uniques can I expect to need for a C170?
Is the necessary number of uniques just a function of log(n), or are there other influences? And compared with that C167, the uniques ratio seems a bit low. What does it depend on? Just the q-range, or also size/poly/...? 
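As a back-of-the-envelope sketch (my own, not from the thread): if you assume the observed unique ratio stays roughly constant, you can project how many raw relations a given target of uniques implies. In practice the ratio drifts downward as sieving continues, so treat this as a lower bound. The 160M target below is purely illustrative.

```python
# Rough projection of total raw relations needed to reach a target
# unique count, assuming the unique ratio stays constant (it usually
# falls slowly as sieving continues, so this is a lower bound).

def project_raw_relations(raw, unique, target_unique):
    """Estimate raw relations needed to collect target_unique uniques."""
    ratio = unique / raw          # observed unique fraction so far
    return target_unique / ratio  # raw relations implied by that fraction

# Numbers from the post: 190M raw, 132M unique (~69.7%);
# a hypothetical target of 160M uniques for the C170.
needed = project_raw_relations(190e6, 132e6, 160e6)
print(f"~{needed / 1e6:.0f}M raw relations")  # ~230M
```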
2022-01-10, 17:37  #80  
"Curtis"
Feb 2005
Riverside, CA
12766_{8} Posts 
Quote:
I expect you'll need more uniques than you did for a C167, but I've no idea how many more. Maybe 5%? 10%? There's a lot of noise here, too: your C167 with a different poly may have needed 150M or 155M, etc. 

2022-01-11, 11:02  #81 
Aug 2020
79*6581e4;3*2539e3
653 Posts 
So the poly influences the uniques ratio, but "randomly", i.e. not depending on the score? Is the same true for the required number of uniques? That would mean a slightly lower-scoring poly could perform better overall?
I'm at 205M total and 143M uniques (69.8%). I'll wait another day, which should bring me to 220M total and 153M uniques, before attempting to build a matrix. That would make it 14 days of sieving for the 220M. The C167 took 12 days until I could build a matrix, so it's actually going quite well.
Btw, if going from C165 to C170 doubles the sieving time, what is the time ratio between C167 and C170? I've been wondering about this in general: interpolating within a linear relation is easy, but what is the rule for an exponential one?
Last fiddled with by bur on 2022-01-11 at 11:53 
2022-01-12, 08:31  #82 
Aug 2020
79*6581e4;3*2539e3
653 Posts 
Curtis, you suggested an excess of 0.05 for building the matrix with CADO. For msieve I think the same effect is achieved with target_density? What would be a good value? charybdis used target_density = 100 for his C170 posted on the first page of this thread.

2022-01-12, 12:26  #83  
Apr 2020
13·71 Posts 
Quote:
100 is probably sensible, though the optimum value for a given job depends on the ratio of sieving to LA speed for your system.
Last fiddled with by charybdis on 2022-01-12 at 12:28 

2022-01-12, 17:53  #84 
Aug 2020
79*6581e4;3*2539e3
653 Posts 
Thanks, but msieve has no equivalent to required_excess?
The first attempt failed as expected. I had 154M uniques (the recent C167 took 152M):
Code:
Wed Jan 12 16:44:33 2022  keeping 41220401 ideals with weight <= 200, target excess is 216115
Wed Jan 12 16:44:36 2022  commencing in-memory singleton removal
Wed Jan 12 16:44:40 2022  begin with 39898507 relations and 41220401 unique ideals
Wed Jan 12 16:45:37 2022  reduce to 39629881 relations and 40951589 ideals in 21 passes
Wed Jan 12 16:45:37 2022  max relations containing the same ideal: 200
Wed Jan 12 16:45:39 2022  filtering wants 1000000 more relations
Is it possible to estimate the number of required additional uniques from the relations:ideals ratio? 
2022-01-12, 23:11  #85 
"Curtis"
Feb 2005
Riverside, CA
2·3·937 Posts 
Correct, msieve has no filtering equivalent to required_excess. We use target_density as a proxy.
Yes, you can estimate based on how early the filtering failed; but it's easier to just try filtering with a lower target_density (like the default) than to try to recall how far away you might be based on previous experience. That is, if a C165-175 filtering run usually has 12 filtering passes and yours failed on pass 1, you need quite a few more relations; but if it failed after pass 8 or 9, it nearly worked, and an adjustment to target_density or a few M more relations will get you a matrix. 
2022-01-12, 23:46  #86 
Apr 2020
13×71 Posts 
In this case filtering would have failed whatever target_density was used, because there were more ideals than relations. Once you have more relations than ideals and the excess is greater than the displayed target excess value, filtering will proceed until the merge phase (maybe if it's extremely tight it could fail earlier, but I've never seen that), and if target_density is too high you get "too few cycles, matrix probably cannot build". Before the merge begins, target_density is not used at all.
I wish there was an option to dump the output of clique removal to disk if merging fails, so that you could run merge at different TDs without having to go through all the earlier stages of filtering again... 
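The precondition described above can be sketched numerically. A small illustration (my own, not from the thread), plugging in the counts from the failed log in post #84: filtering needs relations minus ideals to exceed the target excess. Note the shortfall in excess does not translate one-to-one into extra raw relations needed, since every new relation also introduces new ideals.

```python
# Sketch of the filtering precondition described above:
# filtering can proceed past singleton removal only once
# (relations - ideals) exceeds the target excess.

def excess_shortfall(relations, ideals, target_excess):
    """Return how far a run is from the required excess (<= 0 means enough)."""
    excess = relations - ideals
    return target_excess - excess

# Final counts after singleton removal in the failed attempt,
# with the target excess msieve reported (216115):
short = excess_shortfall(39_629_881, 40_951_589, 216_115)
print(f"short by ~{short / 1e6:.1f}M of excess")  # ~1.5M
```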
2022-01-13, 08:51  #87 
Aug 2020
79*6581e4;3*2539e3
1010001101_{2} Posts 
Second attempt with 230M rels and 162M uniques: it went through to the full merge, but quit there. So it's getting closer.
Btw, if going from C165 to C170 doubles the sieving time, what is the time ratio between C167 and C170? I've been wondering about this in general: interpolating within a linear relation is easy, but what is the rule for an exponential relation? 
2022-01-13, 09:23  #88  
Jun 2003
5·1,087 Posts 
Quote:
1 digit = factor of 2^(1/5) ~= 1.15
3 digits = factor of 2^(3/5) ~= 1.52
Linear interpolation between 1 & 2 wouldn't be too far off though. You'd get 1.6, which is good enough for gov't purposes. 
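The rule of thumb above is straightforward to put into code; a minimal sketch (my own), assuming "+5 digits doubles sieving time" so each extra digit multiplies the time by 2^(1/5):

```python
# Sieving-time scaling under the rule of thumb quoted above:
# +5 digits ~ 2x sieving time, so each digit costs a factor of 2**(1/5).

def time_factor(digits_added, doubling_digits=5):
    """Multiplicative sieving-time factor for adding `digits_added` digits."""
    return 2 ** (digits_added / doubling_digits)

print(round(time_factor(1), 2))  # 1.15, one extra digit
print(round(time_factor(3), 2))  # 1.52, e.g. C167 -> C170
```

Linear interpolation (1 + 3/5 = 1.6) overshoots only slightly, as the post notes, because the exponential is nearly linear over such a short span.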
