[QUOTE=VBCurtis;604513]Here's my 3LP settings for params.c165, tested exactly once:
. . ..[/QUOTE]Sorry for my 3LP ignorance, but will these params tell it to use 3LP, or do I need to add something else? Also, do you need the full 175M relations? If Msieve successfully filters earlier than 175M, do you still want the rest? BTW, I found a c164 candidate; I think its leading digit is 6. The current c164 should be done tomorrow, so I can start the 3LP job after that. 
[QUOTE=charybdis;604514]Are you sure that swapping the lims won't improve yield? I thought larger lim on the 2LP side was pretty well established by now. Too lazy to dig up an old polynomial and testsieve it myself.[/QUOTE]
Definitely not sure. With GGNFS that's clear, but CADO sieves below the factor base, so I didn't make any assumptions.

Ed: Nothing else needs to be changed. mfb at 88 is the key setting that causes 3LP (any setting larger than 3 * log_2(lim) will do it). I don't mind if you don't get to 175M; whatever your scripts do is just fine with me, and all the better to compare to a previous run with your script. You may wish to swap the lims per charybdis' suggestion, though. 
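The thresholds in that parenthetical can be sanity-checked with a few lines of Python. This is just a sketch using the numbers quoted elsewhere in the thread (lpb1 = 31, mfb1 = 88, lim1 = 40M):

```python
import math

# Parameters from the c164 job quoted in this thread
lpb1 = 31          # large-prime bound: large primes up to 2^31
mfb1 = 88          # cofactor bound on side 1, in bits
lim1 = 40_000_000  # factor-base bound on side 1

# A cofactor of more than 2*lpb1 bits cannot be a product of two
# large primes, so admitting such cofactors implies 3 large primes.
print(mfb1 > 2 * lpb1)             # True: 88 > 62

# The rule of thumb from the post: three primes above the factor
# base multiply to more than lim1^3, i.e. more than 3*log2(lim1) bits.
print(mfb1 > 3 * math.log2(lim1))  # True: 88 > ~75.8
```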
Haven't done any big GNFS jobs myself for a while, but larger lims on the 2LP side definitely sieve better for the big SNFS jobs I've been doing instead. I think we used larger lims on the 2LP side for 3,748+ too.

Here's the first c164 (note: A=28 and adjust_strategy=2):[code]N = 345... <164 digits>
tasks.lim0 = 50000000
tasks.lim1 = 70000000
tasks.lpb0 = 31
tasks.lpb1 = 31
tasks.qmin = 10000000
tasks.filter.target_density = 170.0
tasks.filter.purge.keep = 160
tasks.sieve.lambda0 = 2.07
tasks.sieve.lambda1 = 2.17
tasks.sieve.mfb0 = 58
tasks.sieve.mfb1 = 61
tasks.sieve.ncurves0 = 18
tasks.sieve.ncurves1 = 25
tasks.sieve.qrange = 5000

Polynomial Selection (size optimized): Total time: 529277
Polynomial Selection (root optimized): Total time: 31468
Lattice Sieving: Total time: 4.6221e+06s (all clients used 4 threads)
Lattice Sieving: Total number of relations: 171561952
Found 149733097 unique, 40170110 duplicate, and 0 bad relations.
cownoise Best MurphyE for polynomial is 8.37946014e13[/code]Anything else I should grab before I do the next one? 
One last question:
Should I have a [C]tasks.sieve.lambda1[/C] value? I currently have 2.17 (as can be seen above). Should I just keep that? 
No lambda; if you did use one, it would have to be close to 3 for 3LP. Best leave it default.
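The "close to 3" figure follows from rough arithmetic: lambda scales roughly like mfb/lpb (an approximation on my part, not CADO's exact default formula):

```python
# Rough check of "close to 3": the survivor threshold lambda is on
# the order of mfb/lpb (approximation, not CADO's exact default).
lpb1, mfb1 = 31, 88
print(mfb1 / lpb1)  # ~2.84, close to 3, as expected for the 3LP side

# The 2LP side, by contrast:
lpb0, mfb0 = 31, 58
print(mfb0 / lpb0)  # ~1.87, in line with the lambda0 values used in these jobs
```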

[QUOTE=VBCurtis;604538]No lambda; if you did use one, it would have to be close to 3 for 3LP. Best leave it default.[/QUOTE]
Thanks! It is in work. Here are the significant parts of the snapshot:[code]N = 685. . .<164 digits>
tasks.I = 14
tasks.lim0 = 60000000
tasks.lim1 = 40000000
tasks.lpb0 = 31
tasks.lpb1 = 31
tasks.qmin = 10000000
tasks.sieve.lambda0 = 1.83
tasks.sieve.mfb0 = 58
tasks.sieve.mfb1 = 88
tasks.sieve.ncurves0 = 18
tasks.sieve.ncurves1 = 10
tasks.sieve.qrange = 5000
tasks.sieve.rels_wanted = 175000000[/code] 
Here is the latest c164:[code]N = 685... <164 digits>
tasks.I = 14
tasks.lim0 = 60000000
tasks.lim1 = 40000000
tasks.lpb0 = 31
tasks.lpb1 = 31
tasks.qmin = 10000000
tasks.filter.target_density = 170.0
tasks.filter.purge.keep = 160
tasks.sieve.lambda0 = 1.83
tasks.sieve.mfb0 = 58
tasks.sieve.mfb1 = 88
tasks.sieve.ncurves0 = 18
tasks.sieve.ncurves1 = 10
tasks.sieve.qrange = 5000

Polynomial Selection (size optimized): Total time: 526394
Polynomial Selection (root optimized): Total time: 31614.9
Lattice Sieving: Total time: 4.67967e+06s (all clients used 4 threads)
Lattice Sieving: Total number of relations: 175012772
Found 149733097 unique, 40170110 duplicate, and 0 bad relations.
cownoise Best MurphyE for polynomial is 8.31589954e13[/code]I don't see too much difference. Murphy_E is lower and sieving took a little bit longer. However, the previous c164 used strategy 2, while this one did not. What effect would that have had at this size? If you like, provide some changes and I'll put them in the params file for the next ~c165 composite. It may not be real soon, but maybe this upcoming week. 
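To put numbers on "not too much difference", here is a quick comparison of the two runs using only the figures quoted in the logs:

```python
# Quick comparison of the two c164 runs reported in this thread.
sieve_first, sieve_second = 4.6221e6, 4.67967e6   # sieving seconds
e_first, e_second = 8.37946014e13, 8.31589954e13  # cownoise MurphyE

print(f"sieving: {100 * (sieve_second / sieve_first - 1):.1f}% longer")  # ~1.2%
print(f"MurphyE: {100 * (1 - e_second / e_first):.1f}% lower")           # ~0.8%
```

So the second run sieved about 1.2% longer with a polynomial scoring about 0.8% worse, which is within what the poly-score gap alone could explain.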
Try with strategy2, please? I don't use that setting because it seems to trigger errors with CADO postprocessing, so I forgot to include it for you.
My guess is 4% faster from strat2? Your next data point will tell us. :) Was the resulting matrix notably bigger than your previous C164? 
[QUOTE=VBCurtis;604613]Try with strategy2, please? I don't use that setting because it seems to trigger errors with CADO postprocessing, so I forgot to include it for you.
My guess is 4% faster from strat2? Your next data point will tell us. :) Was the resulting matrix notably bigger than your previous C164?[/QUOTE]I'll run the next one with A=28 and strategy 2, then.* I didn't bring up strategy 2 because you used I=14. Here are the matrix sections from the two logs. First c164:[code]Thu Apr 21 08:30:37 2022 matrix is 9822977 x 9823172 (3015.8 MB) with weight 932199937 (94.90/col)
Thu Apr 21 08:30:37 2022 sparse part has weight 672685141 (68.48/col)
Thu Apr 21 08:32:25 2022 filtering completed in 2 passes
Thu Apr 21 08:32:27 2022 matrix is 9792967 x 9793156 (3013.3 MB) with weight 931103496 (95.08/col)
Thu Apr 21 08:32:27 2022 sparse part has weight 672400393 (68.66/col)
Thu Apr 21 08:33:10 2022 matrix starts at (0, 0)
Thu Apr 21 08:33:11 2022 matrix is 9792967 x 9793156 (3013.3 MB) with weight 931103496 (95.08/col)
Thu Apr 21 08:33:11 2022 sparse part has weight 672400393 (68.66/col)
Thu Apr 21 08:33:11 2022 saving the first 48 matrix rows for later
Thu Apr 21 08:33:12 2022 matrix includes 64 packed rows
Thu Apr 21 08:33:13 2022 matrix is 9792919 x 9793156 (2895.1 MB) with weight 745879127 (76.16/col)[/code]and the second c164:[code]Sat Apr 23 07:29:12 2022 matrix is 10949079 x 10949259 (3349.2 MB) with weight 1042919866 (95.25/col)
Sat Apr 23 07:29:12 2022 sparse part has weight 746571916 (68.18/col)
Sat Apr 23 07:32:13 2022 filtering completed in 2 passes
Sat Apr 23 07:32:17 2022 matrix is 10934410 x 10934588 (3348.1 MB) with weight 1042422445 (95.33/col)
Sat Apr 23 07:32:17 2022 sparse part has weight 746467122 (68.27/col)
Sat Apr 23 07:33:16 2022 matrix starts at (0, 0)
Sat Apr 23 07:33:19 2022 matrix is 10934410 x 10934588 (3348.1 MB) with weight 1042422445 (95.33/col)
Sat Apr 23 07:33:19 2022 sparse part has weight 746467122 (68.27/col)
Sat Apr 23 07:33:19 2022 saving the first 48 matrix rows for later
Sat Apr 23 07:33:21 2022 matrix includes 64 packed rows
Sat Apr 23 07:33:23 2022 matrix is 10934362 x 10934588 (3228.6 MB) with weight 832280012 (76.11/col)[/code]Yes, the matrix is a little bit larger, but is that just due to when Msieve happened to succeed in its filtering tests?

* I'm guessing that's the only change you would like for the next ~c164 run (I don't have another c164 handy just yet), or do you want something else modified, too? 
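"A little bit larger" can be quantified directly from the final matrix lines of the two msieve logs:

```python
# How much bigger was the second matrix? Final figures from the
# msieve logs quoted in this thread.
rows_first, rows_second = 9_792_919, 10_934_362
mb_first, mb_second = 2895.1, 3228.6

print(f"rows: {100 * (rows_second / rows_first - 1):.1f}% more")  # ~11.7%
print(f"size: {100 * (mb_second / mb_first - 1):.1f}% more")      # ~11.5%
```

An 11-12% larger matrix is more than log-timing noise, which is consistent with the two runs' different mfb settings rather than with when filtering happened to succeed.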
Please try A=28 separately from strat 2. I'd like to know the speed gained from strat 2 on I=14.
I expect A=28 would be slower than I=14 here, anyway; perhaps we can testsieve that rather than run a full job. 
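For scale, a note on the two region settings. This assumes (my reading of CADO-NFS, not stated in the thread) that the sieve area is 2^A and that -I n is equivalent to A = 2n - 1:

```python
# Assumption: sieve area is 2^A, and I = n corresponds to A = 2n - 1,
# so I=14 is A=27 and A=28 doubles the area per special-q.
I = 14
A_equiv = 2 * I - 1           # I=14 -> A=27
print(2 ** 28 // 2 ** A_equiv)  # 2: A=28 sieves twice the area of I=14
```

Under that assumption, A=28 doubles the work per special-q, so it only wins if yield per q improves by enough to offset it, which is exactly what a testsieve would measure.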