#89 | Aug 2020
Thanks! The c167 took 12 days of sieving, and the c170 had enough relations after only 15 days; much faster than I had anticipated based on c150 and c160 jobs on that machine.
Anyway, I tried again with 235M rels and 166.7M uniques, which resulted in a matrix:

Code:
matrix is 10297670 x 10297895 (4105.4 MB) with weight 1089257460 (105.77/col)
sparse part has weight 973222523 (94.51/col)
linear algebra completed 309078 of 10297895 dimensions (3.0%, ETA 28h36m)

I had anticipated something like 3-4 weeks WCT for this; it only took 2.5 weeks.
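As a quick sanity check, the per-column weights in that log follow directly from the quoted totals (a minimal sketch; all numbers are taken from the log above):

```python
# Verify the average column weights reported in the CADO-NFS matrix log above.
cols = 10297895                # matrix columns
total_weight = 1089257460      # total nonzero entries
sparse_weight = 973222523      # nonzero entries in the sparse part

print(f"average weight: {total_weight / cols:.2f}/col")   # 105.77/col
print(f"sparse weight:  {sparse_weight / cols:.2f}/col")  # 94.51/col
```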
#90 | Aug 2020
I am currently testing the optimal q-min for the c170; it seems it's quite large as well. The q-range of the original run was 15M-84M, yielding 166.4M uniques; in the q-range 24M-93M I got 169.0M uniques.

I also have another c165 to factor. Should I just use the same parameters as for the c167 discussed here before?
#91 | "Vincent" | Apr 2010
Are the parameters for polyselect of interest, or only those for the sieving, LA, and the rest?
#92 | "Vincent" | Apr 2010
OK, I tried a polyselect on the same range, but with a different tasks.polyselect.P, on a C165, (7^178*178^7-1)/129, with

Code:
tasks.polyselect.P = 300e3
tasks.polyselect.admin = 1.99e6
tasks.polyselect.admax = 2e6
tasks.polyselect.adrange = 1e3
tasks.polyselect.incr = 250
tasks.polyselect.nq = 3125
tasks.polyselect.nrkeep = 6
tasks.polyselect.ropteffort = 10
tasks.polyselect.sopteffort = 200

Code:
R0: -35809166954067912357846660220632
R1: 13463319667006865269
A0: -2219834428349037570991732730801168691845
A1: -692194239737244432596138760691083
A2: 488342217850668125365542693
A3: -31512898162007566885
A4: -45218104446796
A5: 1994500
skew 3796573.56, size 3.067e-016, alpha -6.046, combined = 5.428e-013 rroots = 3

Code:
R0: -35750679366989574211105475195855
R1: 633662203321820819083499
A0: 82290745085368826347864212268994400888
A1: -78146700994698691661021326578602
A2: -288482486297396525055747957
A3: -4358229227643201247
A4: 5932124350875
A5: 17995500
skew 1393695.22, size 3.034e-016, alpha -6.080, combined = 5.528e-013 rroots = 3

Code:
R0: -35809200555754954673312861203782
R1: 31581039660216687989269
A0: 11327319509762555016982123133746333048
A1: 286465213321329043975159328886322
A2: -306077442964448660296883751
A3: -265544123343226564667
A4: 1365778877333728
A5: -4564560000
skew 379204.77, size 1.848e-016, alpha -7.335, combined = 4.089e-013 rroots = 3

Clearly the P=30e6 isn't good for that short range.

Last fiddled with by firejuggler on 2022-02-07 at 21:54
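For reference, CADO-NFS ranks candidates by the "combined" Murphy-E score, where larger predicts faster sieving. A quick comparison of the three polynomials above, keyed by their A5 coefficients:

```python
# Compare the "combined" Murphy-E scores of the three polynomials above;
# the largest score marks the expected-best sieving polynomial.
scores = {
    1994500:     5.428e-13,
    17995500:    5.528e-13,
    -4564560000: 4.089e-13,
}
best_a5 = max(scores, key=scores.get)
print(best_a5)  # 17995500
```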
#93 | "Curtis" | Feb 2005
I'd turn sopteffort way down, too; something like 5 to 10 should be best for C160-170. Better to search over more a5 values than to look *really* hard at a few.

Perhaps you chose these values because you wanted to compare various P values, but as it is, poly select is such a needle-in-a-haystack game that using poly score as a measure of setting effectiveness is hazy at best. If you want to compare settings, I suggest you look at the lognorm score posted after stage 1; that gives you a measure of the quality of the worst poly found before root-opt. It's possible that some settings yield better top polys but worse "worst" polys, but I haven't found such settings yet.

Last fiddled with by VBCurtis on 2022-02-07 at 22:16
#94 | Apr 2020
incr=250 is not a good choice. Better values to try would be 60, 120, 210, 420; you want the leading coefficient to have lots of small prime factors.
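The reasoning is that the leading coefficient is stepped by incr, so an incr built from many small primes forces those primes into every candidate, which helps the polynomial's root properties. A minimal sketch comparing the divisor structure of the suggested values against 250:

```python
# Count divisors of each candidate incr value: the suggested values are far
# more divisible than 250 = 2 * 5^3, whose divisibility is concentrated in 5.
def num_divisors(n: int) -> int:
    return sum(1 for d in range(1, n + 1) if n % d == 0)

for incr in (60, 120, 210, 420, 250):
    print(incr, num_divisors(incr))
# 60 -> 12, 120 -> 16, 210 -> 16, 420 -> 24, but 250 -> only 8
```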
#95 | Aug 2020
Is there a new draft for C165?
#96 | "Curtis" | Feb 2005
I haven't done any work on large params in months; I bought a Ryzen about a month ago and have been re-tuning for it starting from the bottom, and have only made it to C140 or so.

I'm on vacation presently and won't have a chance to consult my notes for ten days or so; that makes the info in this thread as up-to-date as I have.
#97 | "Curtis" | Feb 2005
I spent some human-time test-sieving with CADO on a C161, and then ran the job entirely with CADO. My fastest params in testing were 31/32LP, 60/88 MFB, lambdas of 1.93 and 2.75. Lims were 36/30M, Q from 5.5M to ~39.5M.
Before the test-sieve, I ran a C160 with 31/32LP, 59/61 MFB params. 216M relations took 46 hr to sieve and yielded a ~5.5M matrix, which CADO took ~14 hr to solve; the job in total was about 7% slower than my trendline, so I thought 3LP and a few more relations (with additional required_excess) might get me back to trend.

This C161 (first digit 4) was forecast to be about 25% tougher than the C160, so I was quite excited when sieving only took 48 hr! However, even with required_excess set to 0.08 (rather than the 0.06 or 0.07 of C150-160 jobs I've done before), and 252M relations gathered (with a 73% unique rate, 3% better than my C160 job), the matrix came out at 7.75M and is taking 30 hr to solve using CADO on the same machine that did the sieving. Ugh!

I think msieve would solve an 8M matrix in around 8-9 hr on this machine, which tells me that above C160 it's just not worth my effort to benchmark with CADO for the entire job. I'll set target_rels for this C160 params file at 260M, with a note in the file to use msieve for postprocessing to save time.

This Ryzen 5950 is really slow for CADO matrix solving relative to its sieve speed (and to msieve matrix solving).

Last fiddled with by VBCurtis on 2022-11-09 at 00:30
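For scale, the relation counts quoted above work out as follows (a back-of-envelope sketch, assuming the 73% unique rate is exact):

```python
# Back-of-envelope: unique relations implied by the C161 job's numbers above.
raw_rels = 252_000_000   # relations gathered
unique_rate = 0.73       # reported unique rate
uniques = raw_rels * unique_rate
print(f"{uniques / 1e6:.0f}M unique relations")  # 184M
```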
#98 | "Curtis" | Feb 2005
Ed has a C170 to run, so here's a new params file to try out:
Code:
###########################################################################
# Polynomial selection
###########################################################################
tasks.polyselect.degree = 5
tasks.polyselect.P = 1250000
tasks.polyselect.admin = 8400
tasks.polyselect.admax = 2e6
tasks.polyselect.adrange = 1680
tasks.polyselect.incr = 210
tasks.polyselect.nq = 15625
tasks.polyselect.sopteffort = 10
tasks.polyselect.nrkeep = 84
tasks.polyselect.ropteffort = 32
###########################################################################
# Sieve
###########################################################################
tasks.A = 28
tasks.qmin = 11000000
tasks.lim0 = 80000000
tasks.lim1 = 60000000
tasks.lpb0 = 32
tasks.lpb1 = 32
tasks.sieve.lambda0 = 1.91
tasks.sieve.lambda1 = 2.81
tasks.sieve.mfb0 = 61
tasks.sieve.mfb1 = 90
tasks.sieve.ncurves0 = 13
tasks.sieve.ncurves1 = 8
tasks.sieve.qrange = 10000
tasks.sieve.adjust_strategy = 2
# comment out if using cado for filtering/matrix
tasks.sieve.rels_wanted = 380000000
# this is guesswork still, target msieve matrix size is 11M
###########################################################################
# Filtering
###########################################################################
tasks.filter.purge.keep = 200
tasks.filter.required_excess = 0.09
tasks.filter.target_density = 155.0
#99 | "Ed Hall" | Dec 2009
Thanks! I'll post results in the harvest thread in a few days.