#45 | Apr 2020 | 16648 Posts
#46 | Aug 2020 | 79*6581e-4;3*2539e-3 | 2D0₁₆ Posts
This is the score of the c163 I did earlier, calculated with msieve:
Code:
skew 8385289.96, size 8.965e-16, alpha -7.745, combined = 1.077e-12 rroots = 5

Code:
skew 2613329.11, size 1.949e-15, alpha -7.801, combined = 1.703e-12 rroots = 5

160M
Code:
found 47576500 duplicates and 112589785 unique relations (70.3%)
[...]
matrix is 5597069 x 5597293 (2017.8 MB) with weight 529305051 (94.56/col)
sparse part has weight 472974971 (84.50/col)
[...]
linear algebra completed 56142 of 5597293 dimensions (1.0%, ETA 6h19m)

Code:
found 44436078 duplicates and 110560392 unique relations (71.3%)
[...]
matrix is 5850133 x 5850358 (2110.1 MB) with weight 553183174 (94.56/col)
sparse part has weight 494635444 (84.55/col)
[...]
linear algebra completed 58683 of 5850358 dimensions (1.0%, ETA 6h59m)

Code:
found 41393391 duplicates and 108603079 unique relations (72.4%)
[...]
matrix is 6136751 x 6136976 (2215.4 MB) with weight 580409331 (94.58/col)
sparse part has weight 519375147 (84.63/col)
[...]
linear algebra completed 119868 of 6136976 dimensions (2.0%, ETA 7h52m)

Code:
found 38915690 duplicates and 106080857 unique relations (73.2%)
[...]
matrix is 6621872 x 6622097 (2394.0 MB) with weight 626672267 (94.63/col)
sparse part has weight 561353788 (84.77/col)
[...]
linear algebra completed 66224 of 6622097 dimensions (1.0%, ETA 9h17m)

Code:
found 36920951 duplicates and 103075743 unique relations (73.6%)
[...]
keeping 27769539 ideals with weight <= 200, target excess is 147096
commencing in-memory singleton removal
begin with 27594853 relations and 27769539 unique ideals
reduce to 27476994 relations and 27651624 ideals in 23 passes
max relations containing the same ideal: 200
filtering wants 1000000 more relations
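The duplicate/unique counts in these filtering runs imply a unique ratio that rises steadily as the relation target drops; a quick check, with the (duplicates, uniques) pairs copied from the logs above:

```python
# Unique-relation ratios implied by the five filtering runs quoted above,
# as (duplicates, uniques) pairs copied from the msieve output.
runs = [
    (47576500, 112589785),   # the 160M-relations run
    (44436078, 110560392),
    (41393391, 108603079),
    (38915690, 106080857),
    (36920951, 103075743),
]
ratios = [round(100 * u / (d + u), 1) for d, u in runs]
print(ratios)  # [70.3, 71.3, 72.4, 73.2, 73.6]
```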
#47 | Aug 2020 | 79*6581e-4;3*2539e-3 | 2^4·3^2·5 Posts
I'll start work on a c160, part of AL86610. Do you have a new set of parameters or should I go with the old one, with rels_wanted reduced to 145M?

edit: An unexpected P41 factor turned up, but I'll look for some other c160 to factor, so if you have new parameters, please let me know.

Last fiddled with by bur on 2021-06-16 at 13:14
#48 | "Curtis" | Feb 2005 | Riverside, CA | 2^2·3^3·53 Posts
73% unique is unusually high; I would expect most ~C160 jobs to need a few rounds of extra filtering if we set the target to 145M relations. I suppose your sieving machine needs more than 80 minutes to sieve from 145M rels to 150M rels?

Then again, with required_excess set as it is, there is almost no chance of ending up with a matrix so big that more sieving is desired; I'm still working on figuring out the "right" setting for that one. So, try reducing tasks.filter.required_excess by 0.01 when you change rels_wanted to 145M. Thanks for the data!
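Applied to the params file bur is using, that suggestion amounts to two changed lines. The thread never quotes the current required_excess value, so the number below is only a placeholder, not the real setting:

```ini
tasks.sieve.rels_wanted = 145000000
# placeholder: subtract 0.01 from whatever value the params file currently has
tasks.filter.required_excess = 0.01
```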
#49 | Aug 2020 | 79*6581e-4;3*2539e-3 | 2^4·3^2·5 Posts
It took 125 hours for the 160M relations. That's 2800 s per 1M rels, or roughly 45 minutes. So yes, nearly 4 hours for 5M rels.

I'm currently looking for a useful c160; as soon as you want one, ECM suddenly finds factors all the time... So I'd use the same params but with rels_wanted = 145M and required_excess reduced by 0.01?
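bur's rate estimate can be checked directly from the figures in the post (125 hours for 160M relations):

```python
# Sanity-check the sieving-rate arithmetic from the post above.
hours_total = 125      # wall time for the whole sieving job
rels_total_m = 160     # relations sieved, in millions
sec_per_mrel = hours_total * 3600 / rels_total_m
print(sec_per_mrel)             # 2812.5 s, the post's "2800 s / 1M rels"
print(sec_per_mrel / 60)        # about 47 minutes per 1M relations
print(5 * sec_per_mrel / 3600)  # about 3.9 h to sieve from 145M to 150M
```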
#50 | "Curtis" | Feb 2005 | Riverside, CA | 2^2·3^3·53 Posts
We're looking for a required_excess setting that *always* avoids those too-big matrices. If you want to reduce it by 0.02 to be more aggressive, set rels_wanted to 140M and let us know how many relations it actually takes to build a matrix (it should be more than 140M!)
#51 | Aug 2020 | 79*6581e-4;3*2539e-3 | 2^4·3^2·5 Posts
Ok, I found another c159 from an aliquot sequence. I'm finishing ECM on it and will then go with the 0.02 reduction and 140M.

Btw, is there anything else to be done on the c159 or c164, such as a different target_density? Otherwise I'd clear the disk space for that next job.
#52 | Apr 2020 | 2^2·3·79 Posts
Code:
tasks.I = 14
tasks.qmin = 7000000
tasks.lim0 = 25000000
tasks.lim1 = 45000000
tasks.lpb0 = 31
tasks.lpb1 = 31
tasks.sieve.mfb0 = 58
tasks.sieve.mfb1 = 61
tasks.sieve.lambda0 = 2.07
tasks.sieve.lambda1 = 2.17
tasks.sieve.ncurves0 = 19
tasks.sieve.ncurves1 = 24
tasks.sieve.qrange = 10000
tasks.sieve.adjust_strategy = 2

Filtering with TD=90:

Code:
factoring 8162413651...0362934861 (159 digits)
skew 1612609.51, size 2.057e-15, alpha -5.963, combined = 1.761e-12 rroots = 5
found 47335431 hash collisions in 168523663 relations
found 57680004 duplicates and 110843659 unique relations
begin with 110843659 relations and 118457131 unique ideals
begin with 33364618 relations and 32436212 unique ideals
reduce to 33230023 relations and 32301540 ideals in 15 passes
matrix is 6559845 x 6560066 (2367.4 MB) with weight 618605783 (94.30/col)
#53 | Aug 2020 | 79*6581e-4;3*2539e-3 | 2^4·3^2·5 Posts
That was the c159...

The number of unique relations required for it isn't that much lower than for my c159 with 106,080,857 uniques, but as vbcurtis already mentioned, the ratio of unique/total was much higher: 73.2% vs 65.8%. I found a c158 from AL32796; I'm curious what ratio I'll end up with. The params I use now are slightly different from charybdis':

Code:
tasks.lim0 = 30000000
tasks.lim1 = 47000000
tasks.lpb0 = 31
tasks.lpb1 = 31
tasks.sieve.mfb0 = 58
tasks.sieve.mfb1 = 61
tasks.sieve.lambda0 = 1.84
tasks.sieve.ncurves0 = 19
tasks.sieve.ncurves1 = 24
tasks.I = 14
tasks.qmin = 8000000
tasks.sieve.qrange = 10000
tasks.sieve.rels_wanted = 140000000

Last fiddled with by bur on 2021-06-18 at 07:51
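The two ratios compared above follow from counts quoted earlier in the thread, so they can be reproduced:

```python
# The unique ratios compared above, from numbers quoted in this thread:
# bur's job near 145M relations: 38915690 duplicates, 106080857 uniques;
# charybdis's c159: 110843659 uniques out of 168523663 total relations.
bur_ratio = 100 * 106080857 / (106080857 + 38915690)
charybdis_ratio = 100 * 110843659 / 168523663
print(round(bur_ratio, 1), round(charybdis_ratio, 1))  # 73.2 65.8
```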
#54 | Aug 2020 | 79*6581e-4;3*2539e-3 | 2^4·3^2·5 Posts
140M wasn't sufficient, but after two additional rounds of sieving the matrix was built with:

143466178 relations, of which 105330573 unique (73.4%)

Again the high unique ratio. Maybe the optimized parameters are just that good? This is what I could find in c160.log about the matrix:

Code:
Merging: Merged matrix has 5597842 rows and total weight 818539599 (146.2 entries per row on average)
#55 | Aug 2020 | 79*6581e-4;3*2539e-3 | 2^4·3^2·5 Posts
I'll factor a c167 soon; did you change anything in the params file in the meantime? Otherwise I'll use the one I used before, but with rels_wanted = 190M. That was the minimum that worked for the previous number, though that one had a high unique ratio of 74% or so.

Btw, I'm doing a manual polyselect just out of curiosity and noticed that often one of the threads runs on only one core for a few minutes before the workunit finishes. adrange = 1680 = 8 * 210 = 8 * incr, so that should be ok? Maybe it's normal behavior that one coefficient finishes slightly earlier and I just didn't notice it before.

Last fiddled with by bur on 2021-09-02 at 08:04
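The range arithmetic in the question checks out: with the values bur gives, each workunit covers a whole number of incr-steps, so the split itself is even, and uneven finish times would just mean some leading coefficients are cheaper to search than others, as bur suspects:

```python
# Workunit-size check with the values from the post: adrange should be a
# multiple of incr so each workunit covers whole leading-coefficient steps.
incr = 210
adrange = 1680
assert adrange % incr == 0
print(adrange // incr)  # 8 coefficient steps per workunit
```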