Today is not turning out to be a "better day!" I'm causing duplication of work in another thread, and now I'm finding out that if CADO-NFS is told to stop prior to its filtering, it doesn't give a time for las. I will have to sort out something else. For now, the c160 will not be useful for much. I might just take a break. . .

Maybe I've found a solution. Here's the data for the c160:[code]N = 516... <160 digits>
tasks.I = 14
tasks.lim0 = 45000000
tasks.lim1 = 70000000
tasks.lpb0 = 31
tasks.lpb1 = 32
tasks.qmin = 17000000
tasks.filter.target_density = 150.0
tasks.filter.purge.keep = 190
tasks.sieve.lambda0 = 1.84
tasks.sieve.mfb0 = 59
tasks.sieve.mfb1 = 62
tasks.sieve.ncurves0 = 20
tasks.sieve.ncurves1 = 25
tasks.sieve.qrange = 10000
Polynomial Selection (size optimized): Total time: 505021
Polynomial Selection (root optimized): Total time: 26267.6
Lattice Sieving: Total number of relations: 212441669
Lattice Sieving: Total time: 3.16477e+06s (all clients used 4 threads)
Found 149733097 unique, 40170110 duplicate, and 0 bad relations.
cownoise Best MurphyE for polynomial is 1.43789954e-12[/code]
Here's a c162:[code]N = 235... <162 digits>
tasks.I = 14
tasks.lim0 = 45000000
tasks.lim1 = 70000000
tasks.lpb0 = 31
tasks.lpb1 = 32
tasks.qmin = 17000000
tasks.filter.target_density = 150.0
tasks.filter.purge.keep = 190
tasks.sieve.lambda0 = 1.84
tasks.sieve.mfb0 = 59
tasks.sieve.mfb1 = 62
tasks.sieve.ncurves0 = 20
tasks.sieve.ncurves1 = 25
tasks.sieve.qrange = 10000
Polynomial Selection (size optimized): Total time: 508246
Polynomial Selection (root optimized): Total time: 25518.1
Lattice Sieving: Total time: 3.77171e+06s (all clients used 4 threads)
Lattice Sieving: Total number of relations: 218448391
Found 149733097 unique, 40170110 duplicate, and 0 bad relations.
cownoise Best MurphyE for polynomial is 1.16869325e-12[/code]
Here's a c168:[code]N = 385... <168 digits>
tasks.I = 14
tasks.lim0 = 65000000
tasks.lim1 = 100000000
tasks.lpb0 = 31
tasks.lpb1 = 31
tasks.qmin = 10000000
tasks.filter.target_density = 170.0
tasks.filter.purge.keep = 160
tasks.sieve.mfb0 = 58
tasks.sieve.mfb1 = 60
tasks.sieve.ncurves0 = 19
tasks.sieve.ncurves1 = 25
tasks.sieve.qrange = 5000
Polynomial Selection (size optimized): Total time: 999726
Polynomial Selection (root optimized): Total time: 6873.68
Lattice Sieving: Total time: 6.3694e+06s (all clients used 4 threads)
Lattice Sieving: Total number of relations: 179907757
Found 149733097 unique, 40170110 duplicate, and 0 bad relations.
cownoise Best MurphyE for polynomial is 5.83275752e-13[/code]
Here's a c161:[code]N = 235... <161 digits>
tasks.I = 14
tasks.lim0 = 45000000
tasks.lim1 = 70000000
tasks.lpb0 = 31
tasks.lpb1 = 32
tasks.qmin = 17000000
tasks.filter.target_density = 150.0
tasks.filter.purge.keep = 190
tasks.sieve.lambda0 = 1.84
tasks.sieve.mfb0 = 59
tasks.sieve.mfb1 = 62
tasks.sieve.ncurves0 = 20
tasks.sieve.ncurves1 = 25
tasks.sieve.qrange = 10000
Polynomial Selection (size optimized): Total time: 493855
Polynomial Selection (root optimized): Total time: 27925.7
Lattice Sieving: Total time: 2.84944e+06s (all clients used 4 threads)
Lattice Sieving: Total number of relations: 202173233
Found 149733097 unique, 40170110 duplicate, and 0 bad relations.
cownoise Best MurphyE for polynomial is 1.47600121e-12[/code]
I'm currently running a c164 with [C]A = 28[/C] and [C]adjust_strategy = 2[/C]. Will the data from this one be comparable to the data from the others? What additional things might I need to mention, if any?

I don't think you need any. We're looking for deviations from the trendline of "twice as hard every 5.5 digits". When a job comes in above that trend, it's a sign the params for that job size might benefit from some more attention.
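For what it's worth, that rule of thumb is easy to put into numbers. Here's a small Python sketch of my own (not a CADO-NFS tool; I'm using the c160 job above as the reference point) that extrapolates sieve time by the doubling rule and flags jobs that land above trend:

```python
# Sketch of the "twice as hard every 5.5 digits" rule of thumb.
# Reference point: the c160 above, which sieved in ~3.16477e6 CPU-seconds.

def expected_sieve_seconds(digits, d_ref=160, t_ref=3.16477e6):
    """Extrapolate sieve time from a reference job via the doubling rule."""
    return t_ref * 2 ** ((digits - d_ref) / 5.5)

def above_trend(digits, measured_seconds, d_ref=160, t_ref=3.16477e6):
    """True if a job took longer than the trendline predicts."""
    return measured_seconds > expected_sieve_seconds(digits, d_ref, t_ref)

# Example: the rule predicts roughly 8.7e6 s for a c168; the c168 above
# took 6.3694e6 s, so it sits below the trendline.
```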
At least, that's what I've found working my way up from 95 to 150 digits. Charybdis occasionally runs two jobs of very similar length with one setting changed between them, as an A/B comparison to determine which setting is better. This is time-consuming at 160+ digits, but it's the sort of work that lets us refine the params set. For instance, we *still* don't have a clear idea of when 3LP pulls ahead of 2LP for CADO. In principle, there should be a single cutoff above which we always use 3LP. If you find yourself running a second job in the 160s the same size as one you've already documented, give 3LP a shot (I can be more specific on settings if you like).
I've got a pretty large pool of numbers I'm playing with. If you can get me the specific params you'd like me to use, I'll try them on another 164-digit number or close. The current one is 345. . . I've got about a dozen to check for something close, which I'll hope (ironically) doesn't fall to ECM. I have no idea how to use 3LP, so please be quite specific about what I should do.

[QUOTE=EdH;604411]I've got a pretty large pool of numbers I'm playing with. . .[/QUOTE]Of course that meant that all the c164s are falling to ECM, now.* If I don't find a suitable c164, would you prefer I move up or down a digit? The leading digits are 345... on the current one, which should be finished tomorrow.
[SIZE=2]* The best way to get them to succeed at ECM is to look for GNFS candidates, unless you actually try that. . .[/SIZE]
[SIZE=2]Edit: By posting the above, I hope that the final c164 will fail ECM.[SIZE=1] But then, because I posted such, it will succeed. But. . . :smile:[/SIZE][/SIZE]
Here are my 3LP settings for params.c165, tested exactly once:
[code]tasks.I = 14
tasks.qmin = 10000000
tasks.lim0 = 40000000
tasks.lim1 = 60000000
tasks.sieve.lambda0 = 1.83
tasks.lpb0 = 31
tasks.lpb1 = 31
tasks.sieve.mfb0 = 58
tasks.sieve.mfb1 = 88
tasks.sieve.ncurves0 = 18
tasks.sieve.ncurves1 = 10
tasks.sieve.qrange = 5000
tasks.sieve.rels_wanted = 175000000[/code] If you can compare sieve time to your current settings, that would help us decide whether c165 is big enough to run 3LP. If you have multiple jobs, please also try mfb1=89, as 88 might be too small. 3LP makes the sieve faster at the expense of a jump in matrix size. It's not to our benefit to log a 10% improvement in sieve time if we lose 50% to matrix time! Hopefully that's an exaggeration, but that's why we take data.
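For context on where mfb1 = 88 or 89 comes from (this is my reading of the params, so treat the arithmetic as an assumption rather than something stated in the thread): with k large primes of at most lpb bits each on a side, the leftover cofactor after sieving can be up to k*lpb bits, and mfb caps how large a cofactor we attempt to split. A quick sketch:

```python
# mfb must stay at or below num_large_primes * lpb; values near the cap
# admit more (harder) cofactorizations, values well below it reject them
# early during sieving.

def mfb_cap(lpb_bits, num_large_primes):
    """Upper limit on a useful mfb for the given large-prime setup."""
    return lpb_bits * num_large_primes

# 2LP side with lpb1 = 32 (the c160 above): cap is 64, and mfb1 = 62 fits.
# 3LP side with lpb1 = 31 (these c165 params): cap is 93, so mfb1 = 88 or
# 89 leaves some headroom below 3 * 31.
```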
Are you sure that swapping the lims won't improve yield? I thought a larger lim on the 2LP side was pretty well established by now. Too lazy to dig up an old polynomial and test-sieve it myself.
