2022-01-13, 18:55  #89 
Aug 2020
79*6581e4;3*2539e3
2×3×109 Posts 
Thanks! The c167 took 12 days of sieving; the c170 already had enough relations after 15 days, much faster than I had anticipated based on c150 and c160 jobs on that machine.
Anyway, I tried again with 235M rels and 166.7M uniques, which resulted in a matrix:
Code:
matrix is 10297670 x 10297895 (4105.4 MB) with weight 1089257460 (105.77/col)
sparse part has weight 973222523 (94.51/col)
linear algebra completed 309078 of 10297895 dimensions (3.0%, ETA 28h36m)
I had anticipated something like 3-4 weeks WCT for this; instead it only took 2.5 weeks. 
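As a side note, the ETA in that progress line is just a linear extrapolation from dimensions completed so far. A minimal sketch of the arithmetic, where the elapsed time of ~0.886 h is an assumed value chosen to be consistent with the printed "ETA 28h36m" (it is not taken from the log):

```python
# Extrapolate the linear-algebra ETA from a CADO-NFS progress line,
# assuming dimensions are processed at a constant rate (an approximation).
def eta_hours(done_dims, total_dims, elapsed_hours):
    """Remaining hours if progress continues at the observed rate."""
    rate = done_dims / elapsed_hours          # dimensions per hour
    return (total_dims - done_dims) / rate

# 309078 of 10297895 dimensions (3.0%) done; assumed ~0.886 h elapsed
remaining = eta_hours(309078, 10297895, 0.886)
print(f"~{remaining:.1f} h remaining")       # close to the printed 28h36m
```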
2022-01-27, 06:47  #90 
Aug 2020
79*6581e4;3*2539e3
28E_{16} Posts 
I am currently testing the optimal qmin for the c170; it seems it's quite large as well. The q-range of the original run was 15M-84M, yielding 166.4M uniques. In the q-range 24M-93M I got 169.0M uniques.
I also have another c165 to factor, should I just use the same parameters as for the c167 discussed here before? 
2022-02-07, 18:33  #91 
"Vincent"
Apr 2010
Over the rainbow
2^{2}·7·103 Posts 
Are the parameters for polyselect of interest, or only the sieving, LA, and the rest?

2022-02-07, 21:47  #92 
"Vincent"
Apr 2010
Over the rainbow
2^{2}×7×103 Posts 
OK, I tried a polyselect on the same range, but with a different tasks.polyselect.P, on a C165,
(7^178*178^71)/129, with Code:
tasks.polyselect.P = 300e3
tasks.polyselect.admin = 1.99e6
tasks.polyselect.admax = 2e6
tasks.polyselect.adrange = 1e3
tasks.polyselect.incr = 250
tasks.polyselect.nq = 3125
tasks.polyselect.nrkeep = 6
tasks.polyselect.ropteffort = 10
tasks.polyselect.sopteffort = 200
Code:
R0: 35809166954067912357846660220632
R1: 13463319667006865269
A0: 2219834428349037570991732730801168691845
A1: 692194239737244432596138760691083
A2: 488342217850668125365542693
A3: 31512898162007566885
A4: 45218104446796
A5: 1994500
skew 3796573.56, size 3.067e-016, alpha -6.046, combined = 5.428e-013, rroots = 3
Code:
R0: 35750679366989574211105475195855
R1: 633662203321820819083499
A0: 82290745085368826347864212268994400888
A1: 78146700994698691661021326578602
A2: 288482486297396525055747957
A3: 4358229227643201247
A4: 5932124350875
A5: 17995500
skew 1393695.22, size 3.034e-016, alpha -6.080, combined = 5.528e-013, rroots = 3
Code:
R0: 35809200555754954673312861203782
R1: 31581039660216687989269
A0: 11327319509762555016982123133746333048
A1: 286465213321329043975159328886322
A2: 306077442964448660296883751
A3: 265544123343226564667
A4: 1365778877333728
A5: 4564560000
skew 379204.77, size 1.848e-016, alpha -7.335, combined = 4.089e-013, rroots = 3
Clearly P=30e6 isn't good for that short range. Last fiddled with by firejuggler on 2022-02-07 at 21:54 
2022-02-07, 22:16  #93  
"Curtis"
Feb 2005
Riverside, CA
2·3·937 Posts 
I'd turn sopteffort way down, too; something like 5 to 10 should be best for C160-C170. Better to search over more a5 values than to look *really* hard at a few. Perhaps you chose these values because you wanted to compare various P values, but as it is, poly select is such a needle-in-a-haystack game that using poly score as a measure of setting effectiveness is hazy at best. If you want to compare settings, I suggest you look at the lognorm score posted after stage 1; that gives you a measure of the quality of the worst poly found before root-opt. It's possible that some settings yield better top polys but worse "worst" polys, but I haven't found such settings yet. Last fiddled with by VBCurtis on 2022-02-07 at 22:16 

2022-02-07, 22:20  #94 
Apr 2020
1633_{8} Posts 
incr=250 is not a good choice. Better values to try would be 60, 120, 210, or 420; you want the leading coefficient to have lots of small prime factors.
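The point about small prime factors can be seen by comparing divisor counts, a rough proxy for how "smooth" a leading coefficient divisible by incr can be (a sketch, not part of any CADO-NFS tooling):

```python
# Compare divisor counts of candidate incr values. The polynomial's
# leading coefficient is a multiple of incr, so an incr with many small
# prime factors gives the search more smooth leading coefficients.
def num_divisors(n):
    """Count divisors via trial-division factorization."""
    count, d = 1, 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        count *= e + 1
        d += 1
    if n > 1:
        count *= 2  # leftover prime factor
    return count

for incr in (250, 60, 120, 210, 420):
    print(incr, num_divisors(incr))
```

250 = 2·5^3 has only 8 divisors, while the smaller 60 = 2^2·3·5 already has 12, and 420 = 2^2·3·5·7 has 24.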

2022-07-07, 06:01  #95 
Aug 2020
79*6581e4;3*2539e3
2·3·109 Posts 
Is there a new draft for C165?

2022-07-07, 14:15  #96 
"Curtis"
Feb 2005
Riverside, CA
1010111110110_{2} Posts 
I haven't done any work on large params in months. I bought a Ryzen about a month ago and have been retuning for it, starting from the bottom, and have only made it to C140 or so.
I'm on vacation presently and won't have a chance to consult my notes for ten days or so; that makes the info in this thread as up-to-date as I have. 
2022-11-09, 00:25  #97 
"Curtis"
Feb 2005
Riverside, CA
15F6_{16} Posts 
I spent some humantime testsieving with CADO on a C161, and then ran the job entirely with CADO. My fastest params in testing were 31/32LP, 60/88 MFB, lambdas of 1.93 and 2.75. Lims were 36/30M, Q from 5.5M to ~39.5M.
Before the testsieve, I ran a C160 with 31/32LP, 59/61 MFB params. 216M relations took 46 hr to sieve and yielded a ~5.5M matrix, which CADO took ~14 hr to solve; the job in total was about 7% slower than my trendline, so I thought 3LP and a few more relations (with additional required_excess) might get me back to trend.

This C161 (first digit 4) was forecast to be about 25% tougher than the C160, so I was quite excited when sieving only took 48 hr! However, even with required_excess set to 0.08 (rather than the 0.06 or 0.07 for C150-C160 jobs I've done before) and 252M relations gathered (with a 73% unique rate, 3% better than my C160 job), the matrix came out at 7.75M and is taking 30 hr to solve using CADO on the same machine that did the sieving. Ugh!

I think msieve would solve an 8M matrix in around 8-9 hr on this machine, which tells me that above C160 it's just not worth my effort to benchmark with CADO for the entire job. I'll set target_rels for this C160 params file at 260M, with a note in the file to use msieve for postprocessing to save time. This Ryzen 5950 is really slow for CADO matrix solving relative to sieve speed (and msieve matrix solving). Last fiddled with by VBCurtis on 2022-11-09 at 00:30 
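Putting the timings quoted above side by side shows why the msieve note goes in the params file (a sketch using the post's figures; the msieve time is the midpoint of the 8-9 hr estimate, not a measurement):

```python
# Total wall-clock for the C161 job with CADO vs msieve linear algebra,
# using the timings quoted in the post.
sieve_hr = 48
cado_la_hr = 30
msieve_la_hr = 8.5   # midpoint of the estimated 8-9 hr

cado_total = sieve_hr + cado_la_hr       # 78 hr
msieve_total = sieve_hr + msieve_la_hr   # 56.5 hr
savings = 1 - msieve_total / cado_total
print(f"msieve postprocessing saves ~{savings:.0%} of total wall-clock")
```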
2023-01-11, 05:13  #98 
"Curtis"
Feb 2005
Riverside, CA
2×3×937 Posts 
Ed has a C170 to run, so here's a new params file to try out:
Code:
###########################################################################
# Polynomial selection
###########################################################################
tasks.polyselect.degree = 5
tasks.polyselect.P = 1250000
tasks.polyselect.admin = 8400
tasks.polyselect.admax = 2e6
tasks.polyselect.adrange = 1680
tasks.polyselect.incr = 210
tasks.polyselect.nq = 15625
tasks.polyselect.sopteffort = 10
tasks.polyselect.nrkeep = 84
tasks.polyselect.ropteffort = 32
###########################################################################
# Sieve
###########################################################################
tasks.A = 28
tasks.qmin = 11000000
tasks.lim0 = 80000000
tasks.lim1 = 60000000
tasks.lpb0 = 32
tasks.lpb1 = 32
tasks.sieve.lambda0 = 1.91
tasks.sieve.lambda1 = 2.81
tasks.sieve.mfb0 = 61
tasks.sieve.mfb1 = 90
tasks.sieve.ncurves0 = 13
tasks.sieve.ncurves1 = 8
tasks.sieve.qrange = 10000
tasks.sieve.adjust_strategy = 2 #comment out if using cado for filtering/matrix
tasks.sieve.rels_wanted = 380000000 #this is guesswork still, target msieve matrix size is 11M
###########################################################################
# Filtering
###########################################################################
tasks.filter.purge.keep = 200
tasks.filter.required_excess = 0.09
tasks.filter.target_density = 155.0 
2023-01-11, 13:19  #99 
"Ed Hall"
Dec 2009
Adirondack Mtns
2×5×521 Posts 
Thanks! I'll post results in the harvest thread in a few days.
