Some CADO-NFS Work At Around 175-180 Decimal Digits
[SIZE=2]This thread will be the new home for some posts from the [/SIZE][SIZE=2] [URL="https://www.mersenneforum.org/showthread.php?t=25434"]Comparison of GNFS/SNFS With Quartic (Why not to use SNFS with a Quartic)[/URL] thread. This new thread (with moved posts) has been created to continue discussion, but move it out of the blog area.[/SIZE]
[SIZE=2]There may be slight overlap, with the possibility of a couple duplicate posts, but all new posts should be made in this thread.[/SIZE] [QUOTE=VBCurtis;541962]Once the matrix sizes exceed 10M, I think it's pretty important to get off the default matrix density of 70. If you retained the data, I suggest you explore this by setting the flag target_density=100 in your msieve filtering invocation. I've found positive results (measured by matrix ETA) up to density 120, while the NFS@home solvers often use 130 or even 140. I think most of the gains come going from 70 to 90 or 100. A pleasant (and possibly more important) side effect is that it's harder to build a matrix with higher density, which acts as a nice measure of whether you have "oversieved enough". Your second msieve run will almost surely build at TD 100, and I bet the matrix will be smaller by 1M dimensions or so. That might only take a day off the matrix ETA, but saving a day for "free" is still worthy![/QUOTE]I should be able to give this a try. I haven't cleaned up anything yet and will be interrupting all the SNFS work in favor of letting the GNFS run to completion. BTW, are you working on improved params for the 165-175 digit range at all? 
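For concreteness, the filtering invocation being suggested would look something like this (a sketch: msieve takes its filtering options as a quoted string after the -nc1 flag; the data-file name and the composite are placeholders, not values from this thread):

```
# re-run only msieve's filtering stage (-nc1), asking for a denser matrix;
# "job.dat" and <composite> are hypothetical placeholders for the actual job
msieve -v -s job.dat -nc1 "target_density=100" <composite>
```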
I've just started back on building params files, with an eye toward extending the patterns of what-increases-when from the C100-140 files up into C160.
Let me know what size you'd like, I'll be happy to put one together for you! I'm running trials personally on C130-140 presently to refine those files and test A=26 vs I=13 and I=14; I think 165-170 will be a spot to test I=14 vs A=28, and I would appreciate some testing of that setting on one of my params files. 
[QUOTE=VBCurtis;541982]I've just started back on building params files, with an eye toward extending the patterns of what-increases-when from the C100-140 files up into C160.
Let me know what size you'd like, I'll be happy to put one together for you! I'm running trials personally on C130-140 presently to refine those files and test A=26 vs I=13 and I=14; I think 165-170 will be a spot to test I=14 vs A=28, and I would appreciate some testing of that setting on one of my params files.[/QUOTE] The one I've got running LA right now is a 168 dd HCN, for which I used the default 170 params file. The trouble I foresee is that my CADO-NFS server crashed with memory woes. My hybrid CADO/msieve setup would not give you complete data, and that is what I would probably need to use for the 175-176 HCNs that are next in line. However, if you'd like to just toss something together roughly for me at the 175 level, I could see how a 175/176 compares to the 168 I'm currently finishing. You could maybe slant it toward my large siever count vs. single LA machine. I wish I kept better notes! I'm pretty sure I found that I could still use MPI to run msieve LA across two machines, if one didn't have enough memory. I only have gigabit currently and am pretty sure that caused a bit of slowdown, but it wasn't excessive for just two nodes. That may very well only be "wishful thinking," though. 
[QUOTE=EdH;541985] However, if you'd like to just toss something together roughly for me at the 175 level, I could see how a 175/176 compares to the 168 I'm currently finishing. You could maybe slant it toward my large siever count vs. single LA machine.[/QUOTE]
This is just the sort of thing I'd be happy to do for you. 175 is a size where CADO chooses I=14, but I think A=28 or maybe I=15 are better choices. However, A=28 uses twice the memory of I=14, and I=15 is double again. Do you have enough RAM to run I=15? I think it's around 2.5GB per process, and you can choose the number of threads per process on the client command line with "--override -t 4" to run, e.g., 4-threaded. A=28 should be under 1.5GB per process; but I think you recently mentioned you are running an older CADO that doesn't recognise A... so we're choosing between I=14 and 15? 
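The memory comparisons above can be made concrete with a little bookkeeping (a sketch, assuming the usual CADO convention that I = n corresponds to A = 2n - 1, with each +1 in A roughly doubling the sieve area and hence the las memory footprint; the function name is illustrative):

```python
# Sieve-region bookkeeping: I = n is treated as A = 2n - 1, and each
# +1 in A roughly doubles the sieve area (and working memory).
def area_bits(I=None, A=None):
    """Return the log2-of-area parameter, from either an I or an A value."""
    return A if A is not None else 2 * I - 1

base = area_bits(I=14)                 # A-equivalent of I=14 is 27
print(area_bits(A=28) - base)          # 1 -> A=28 needs ~2x the RAM of I=14
print(area_bits(I=15) - base)          # 2 -> I=15 is "double again", ~4x
```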
[QUOTE=VBCurtis;541993]This is just the sort of thing I'd be happy to do for you.
175 is a size where CADO chooses I=14, but I think A=28 or maybe I=15 are better choices. However, A=28 uses twice the memory of I=14, and I=15 is double again. Do you have enough RAM to run I=15? I think it's around 2.5GB per process, and you can choose the number of threads per process on the client command line with "--override -t 4" to run, e.g., 4-threaded. A=28 should be under 1.5GB per process; but I think you recently mentioned you are running an older CADO that doesn't recognise A... so we're choosing between I=14 and 15?[/QUOTE] I think almost all of my machines have at least 6GB, with the largest three maxed out at 16GB. And I'm currently having each machine run one instance, with full threads under that instance. I have a client script that gets the CPU count and uses that to determine the override. 
The 176-digit job is running with the modified params. I always find the first ETA returned entertaining:
[code] Info:Lattice Sieving: Marking workunit c175_sieving_650000-660000 as ok (0.0% => [B]ETA Mon Oct 19 20:46:32 2020[/B]) [/code] 
For a job big enough to produce a 20M+ matrix, failing to build at TD 100 is an indication that more relations are needed. It's not that density 100 will shrink the matrix by some magical amount; rather, having enough relations to build at 100 or 110 will also be enough relations to shrink the matrix another 10% or so.
As a guess, 5% more relations would shrink the matrix 10%. Not usually a great tradeoff, but when you have a sieving farm it's surely a plus for you! A note on the GNFS-175 file: qmin is so small that the ETA will start out at about 2/3rds of what it will actually take to gather the relations. The sievers are really efficient at small Q, at the cost of some extra duplicates and CADO making empty promises about a fast job. 
Some CADO-NFS questions and experiment points I'm pondering - any thoughts are quite welcome:
1. CADO-NFS provides a summary when it completes. I would typically abort the CADO-NFS LA stage if the msieve LA is successfully running with a substantially earlier ETA. Should I let this one finish via CADO-NFS to provide a full dataset for you?

2. For the previous SNFS job, I was able to invoke a duplicate server on a Colab instance to sieve the area from 100k-150k and add those relations to the set provided to msieve. But this used a provided polynomial. For this GNFS job, what CADO-NFS files would I need to use to invoke a parallel CADO-NFS server in a Colab instance in a similar manner to before? Would I need to use more than the snapshot file (modified for the qmin)?

3. I am toying with the idea of using an RPi as a proxy for a CADO-NFS client. Is there a way I can have a machine (an RPi, or another incapable of meeting the memory requirements) act as a go-between with a Colab instance? Basically, I want to have the RPi be seen as the client, which picks up the WUs and reassigns them to the Colab instance, then retrieves the WU results and uploads them to the server. I can copy files between the RPi and the Colab instance, but I can't run the Colab instance as a client to my server. (I actually don't want to open the server to machines outside my LAN.) 
1. This only matters if we plan to iterate multiple factorizations of similar size to compare various params settings; otherwise, your timing data doesn't tell us much since there is little to compare to. If you have some elapsed (e.g. wall clock) time for the C168-ish you did with the default CADO file, we can see if my C175 file did better than the observed double-every-5.5-digits typical on CADO. So, I wouldn't bother letting it finish, but I would try to record CADO's claim of sieve time from right before it enters filtering.
2. I believe you need to give it the poly also; either the .poly file in the same folder (which the snapshot should reference), or by explicitly declaring the poly the same way you did for the SNFS poly (tasks.poly = {polyfilename}, if I recall). Either way, you'll need to copy the poly file to the Colab instance.

3. Far beyond my pay grade in both networking and CADO knowledge, sorry. 
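The "doubles every 5.5 digits" rule of thumb mentioned above, spelled out as arithmetic (the base-2 growth and 5.5-digit spacing are the thread's empirical observation, not derived constants):

```python
# Rule-of-thumb job-time scaling: total effort roughly doubles
# for every 5.5 extra decimal digits of the composite.
def expected_time_ratio(digits_from, digits_to, doubling_digits=5.5):
    return 2 ** ((digits_to - digits_from) / doubling_digits)

# Comparing a C168 against a C175 job, as in the post above:
print(round(expected_time_ratio(168, 175), 2))  # ~2.42x the sieve time
```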
[QUOTE=VBCurtis;541998]If quotes, no hyphen. EDIT: Actually, no hyphen at all for that setting.
The biggest mystery for a C175 file is the number of relations to target; but since you have an army of sievers and not much matrix power, I put the relations count a fair bit higher than I think strictly necessary. I think you could get away with 250M relations, but 270 should make a much nicer matrix.[/QUOTE] I've been following this discussion a bit and I'd like to do one of the homogeneous Cunningham c177s with your parameters (and ~160 cores for sieving), but to give a bit of variety I'm thinking of doing it with A=28. Are there any other changes I ought to make to compensate for the smaller sieve region? 
[QUOTE=charybdis;542123]I've been following this discussion a bit and I'd like to do one of the homogeneous Cunningham c177s with your parameters (and ~160 cores for sieving), but to give a bit of variety I'm thinking of doing it with A=28. Are there any other changes I ought to make to compensate for the smaller sieve region?[/QUOTE]
I think A=28 is optimal for this size, but this is just a guess, really. I'm happy to hear you'll try it! Here's what I would change, and why:

The duplicate rate is often a bit higher when using a smaller siever, so you may need more than 270M relations. I estimate a matrix would build for Ed on I=15 with 250M, and I added 20M because he uses a farm to sieve but his "main" machine isn't very fast, so he is willing to sacrifice some sieve time to reduce matrix time. Our experience with ggnfs sievers is that 10-15% more relations are needed on 14e vs 15e; since A=28 is halfway in between, we can guess 5-8% more relations will be needed. 8% more than 250M is 270M, so if you don't mind a longish matrix you could leave it at 270M. I would personally choose 285M for A=28, and see what happens.

If yield isn't very good, you can relax the lambda settings a bit, say by 0.01 each. This will increase the relations required, though; those complicated interactions between lambda/sieve speed/relations needed are why I do 8-10 factorizations at a given size before publishing parameters.

I would also increase each lim by 15% or so, say to lim0=105M and lim1=160M. I don't have a good reason for this, other than that ggnfs sievers see yield fall off markedly when Q > 2 * lim0. Even with CADO, I have found that choosing lim's such that Q sieved does not exceed lim1 is always faster than otherwise (where "always" is for all tests below 160 digits). I believe Ed's I=15 job should finish when Q is in the 100-130M range. Using A=28 will need roughly 50% more Q, so 150-190M as final Q. So I'm suggesting lim1 equal to my guess at final Q; note that since you're doing a C177 rather than a C175, you might add another 10% to both lim's, to e.g. 115M and 175M. Larger lim's improve yield (relations per Q-range) at the expense of a little speed.

Finally, 2 extra digits of difficulty is about 25% harder, so I'd add 25% to poly select: change admax from 12e5 to 15e5. 
If you'd like to contribute to refining these parameters going forward, I'd like to know the final Q sieved, the number of relations you generated (that is, the rels_wanted you chose), the number of unique relations, and the matrix size (total weight is a proxy for size, but it's nice to have both row count and total weight). Timing info is only useful if you plan to factor multiple numbers with your setup; obviously, if you do a second one of similar size, say within 3 digits, we can compare the timings and conclude which params were better. Good luck! 
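The scaling estimates in the post above, spelled out (all the percentages - the 5-8% duplicate penalty for A=28, the 25% poly select bump - are the thread's guesses, not derived values):

```python
# Relation-count and poly-select scaling guesses from the discussion above.
rels_i15 = 250_000_000            # estimated relations needed at I=15 for ~C175
a28_duplicate_penalty = 0.08      # A=28 sits between 14e and 15e: 5-8% more
print(int(rels_i15 * (1 + a28_duplicate_penalty)))   # 270000000

admax_c175 = 1_200_000            # 12e5 in the C175 file
print(int(admax_c175 * 1.25))     # 1500000, i.e. 15e5: +25% for two extra digits
```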
Thank you so much for this! I'll go for 285M relations and see what happens, and I'll keep you updated on the filtering/matrix steps.

A couple little things:
I didn't account for needing more relations for C177 vs C175, but this is all guesswork anyway... 290M might be smarter? Also, Ed uses msieve to solve the matrix because it's faster than CADO. If you're going to use CADO start-to-finish, then the matrix step is relatively slower, which again argues for more sieving. I *think* that you can use a snapshot file to retroactively do more sieving if a matrix doesn't meet your sensibilities for size - that is, if the matrix looks big, you can edit the snapshot file to add a higher rels_wanted setting, and restart CADO. I believe CADO will look to see if the relations count matches that number, even if filtering is already complete (I'd like confirmation of this, actually!). If I'm right, starting with 285M with a plan to bump to 300M if the matrix comes out big is maybe the best plan. tl;dr: 285M good. More might be better. :) 
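For anyone trying this, the edit being described is a one-line change of the kind below (a sketch; tasks.sieve.rels_wanted is the parameter name used in CADO-NFS params files, and the value here is illustrative):

```
# In the job's *.snapshot file (or the params file), raise the relation
# target, then restart the server; sieving resumes until the new target is met.
tasks.sieve.rels_wanted = 300000000
```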
[QUOTE=VBCurtis;542116]1. This only matters if we plan to iterate multiple factorizations of similar size to compare various params settings; otherwise, your timing data doesn't tell us much since there is little to compare to. If you have some elapsed (e.g. wall clock) time for the C168-ish you did with the default CADO file, we can see if my C175 file did better than the observed double-every-5.5-digits typical on CADO. So, I wouldn't bother letting it finish, but I would try to record CADO's claim of sieve time from right before it enters filtering.
2. I believe you need to give it the poly also; either the .poly file in the same folder (which the snapshot should reference), or by explicitly declaring the poly the same way you did for the SNFS poly (tasks.poly = {polyfilename}, if I recall). Either way, you'll need to copy the poly file to the Colab instance. 3. Far beyond my pay grade in both networking and CADO knowledge, sorry.[/QUOTE]1. Do you know whether "tasks.filter.run = false" provides the poly/sieve timings or just stops?

2. I had thought the poly values were also in the snapshot, but I see they aren't, so I'll be sure to copy that file as well. I will have to modify the snapshot file for other things such as build path, too. One thing I hope to try in the next few days is to run a semi-clone, server-only instance of the current local server on Colab, sieving 100k-500k*. I'd like to see if the local instance would recognize the relations from the Colab instance, if it had no record of them being assigned. If not, I'm wondering if including the .stderr file would tell the local instance that they had already been accepted.

*Would running this area throw off your "rels_wanted" value, since these would be more prone to duplicates, or am I off course? 
[QUOTE=VBCurtis;542130]A couple little things:
I didn't account for needing more relations for C177 vs C175, but this is all guesswork anyway... 290M might be smarter? Also, Ed uses msieve to solve the matrix because it's faster than CADO. If you're going to use CADO start-to-finish, then the matrix step is relatively slower, which again argues for more sieving. I *think* that you can use a snapshot file to retroactively do more sieving if a matrix doesn't meet your sensibilities for size - that is, if the matrix looks big, you can edit the snapshot file to add a higher rels_wanted setting, and restart CADO. I believe CADO will look to see if the relations count matches that number, even if filtering is already complete (I'd like confirmation of this, actually!). If I'm right, starting with 285M with a plan to bump to 300M if the matrix comes out big is maybe the best plan. tl;dr: 285M good. More might be better. :)[/QUOTE] On previous occasions, I have restarted CADO-NFS after it was already performing krylov, to add more relations. As you posted, I changed the rels_wanted value and it did go back to sieving until the new value was met. I mainly did this when msieve wouldn't build a matrix but CADO-NFS did, but I also did it for the recent testing of msieve matrices. 
[QUOTE=VBCurtis;542130]A couple little things:
I didn't account for needing more relations for C177 vs C175, but this is all guesswork anyway... 290M might be smarter? Also, Ed uses msieve to solve the matrix because it's faster than CADO. If you're going to use CADO start-to-finish, then the matrix step is relatively slower, which again argues for more sieving. I *think* that you can use a snapshot file to retroactively do more sieving if a matrix doesn't meet your sensibilities for size - that is, if the matrix looks big, you can edit the snapshot file to add a higher rels_wanted setting, and restart CADO. I believe CADO will look to see if the relations count matches that number, even if filtering is already complete (I'd like confirmation of this, actually!). If I'm right, starting with 285M with a plan to bump to 300M if the matrix comes out big is maybe the best plan. tl;dr: 285M good. More might be better. :)[/QUOTE] I'll be using msieve - it's faster, and the machine I was using ran out of memory during the "replay" stage of CADO filtering for the c172 I've just finished. Filtering is obviously a while away, but what matrix size would you consider "too big" here (say with target density 100)? 
[QUOTE=EdH;542132]1. Do you know whether "tasks.filter.run = false" provides the poly/sieve timings or just stops?
*Would running this area throw off your "rels_wanted" value, since these would be more prone to duplicates, or am I off course?[/QUOTE] 1. I'm pretty sure the timing is listed before filtering begins, so CADO should show the time-to-sieve just before it exits.

No big deal on the tweak to relations from starting at lower Q. 500k is already quite low/prone to extra duplicates; going down to 100k or 150k won't change those numbers enough to matter. 
[QUOTE=charybdis;542144]I'll be using msieve  it's faster, and the machine I was using ran out of memory during the "replay" stage of CADO filtering for the c172 I've just finished. Filtering is obviously a while away, but what matrix size would you consider "too big" here (say with target density 100)?[/QUOTE]
I glanced through the NFS@home 15e results page to have a look at matrix sizes, but I forgot that most numbers don't have the difficulty listed on the results (one has to open the log to find that info). So, I'm taking a guess: a 20M matrix is too big for GNFS-177. Using msieve, filtering and the matrix will fit in a 16GB machine easily; the limit is around 25-26M matrix size on 16GB. ... Aha! [url]https://mersenneforum.org/showpost.php?p=533106&postcount=217[/url] is a good data point: C184-ish, 32/32 LP, 366M raw relations was enough to build a 17.7M matrix. 32/32 needs about 30% more relations than our choice of 31/32, so 280M would be equivalent if we were on ggnfs and starting at Q=20M. We're taking advantage of CADO's super-fast speeds at low Q, at the cost of extra duplicates; I estimate you'll need 190M unique relations, and 285-290M raw relations is still a good guess (based on that one data point from the linked post). C177 is markedly easier than C184, so I won't be surprised to see a 15-16M matrix from your dataset. 
So I decided to do an early filtering run (with the default target_density of 90) for the c177 to see how things were going, and was surprised to get a rather friendly matrix (edit: this was after sieving Q up to 190M):
[code]Mon Apr 13 23:19:04 2020 commencing relation filtering
Mon Apr 13 23:19:04 2020 estimated available RAM is 15845.6 MB
Mon Apr 13 23:19:04 2020 commencing duplicate removal, pass 1
...relation errors...
Mon Apr 13 23:45:59 2020 found 81454230 hash collisions in 266531234 relations
Mon Apr 13 23:46:21 2020 added 122209 free relations
Mon Apr 13 23:46:21 2020 commencing duplicate removal, pass 2
Mon Apr 13 23:51:30 2020 found 110111310 duplicates and 156542133 unique relations
Mon Apr 13 23:51:30 2020 memory use: 1449.5 MB
Mon Apr 13 23:51:30 2020 reading ideals above 189857792
Mon Apr 13 23:51:30 2020 commencing singleton removal, initial pass
Tue Apr 14 00:03:26 2020 memory use: 3012.0 MB
Tue Apr 14 00:03:27 2020 reading all ideals from disk
Tue Apr 14 00:03:43 2020 memory use: 2357.2 MB
Tue Apr 14 00:03:46 2020 commencing in-memory singleton removal
Tue Apr 14 00:03:49 2020 begin with 156542133 relations and 141217020 unique ideals
Tue Apr 14 00:04:16 2020 reduce to 69079091 relations and 41834367 ideals in 15 passes
Tue Apr 14 00:04:16 2020 max relations containing the same ideal: 24
Tue Apr 14 00:04:20 2020 reading ideals above 720000
Tue Apr 14 00:04:20 2020 commencing singleton removal, initial pass
Tue Apr 14 00:13:17 2020 memory use: 1506.0 MB
Tue Apr 14 00:13:17 2020 reading all ideals from disk
Tue Apr 14 00:13:38 2020 memory use: 2750.9 MB
Tue Apr 14 00:13:43 2020 keeping 62470184 ideals with weight <= 200, target excess is 377745
Tue Apr 14 00:13:48 2020 commencing in-memory singleton removal
Tue Apr 14 00:13:52 2020 begin with 69079091 relations and 62470184 unique ideals
Tue Apr 14 00:14:38 2020 reduce to 68407306 relations and 61797325 ideals in 11 passes
Tue Apr 14 00:14:38 2020 max relations containing the same ideal: 200
Tue Apr 14 00:15:02 2020 removing 6152193 relations and 5152193 ideals in 1000000 cliques
Tue Apr 14 00:15:04 2020 commencing in-memory singleton removal
Tue Apr 14 00:15:08 2020 begin with 62255113 relations and 61797325 unique ideals
Tue Apr 14 00:15:42 2020 reduce to 61908990 relations and 56293091 ideals in 9 passes
Tue Apr 14 00:15:42 2020 max relations containing the same ideal: 193
Tue Apr 14 00:16:03 2020 removing 4732119 relations and 3732119 ideals in 1000000 cliques
Tue Apr 14 00:16:05 2020 commencing in-memory singleton removal
Tue Apr 14 00:16:08 2020 begin with 57176871 relations and 56293091 unique ideals
Tue Apr 14 00:16:33 2020 reduce to 56939736 relations and 52320100 ideals in 7 passes
Tue Apr 14 00:16:33 2020 max relations containing the same ideal: 185
Tue Apr 14 00:16:52 2020 removing 4296409 relations and 3296409 ideals in 1000000 cliques
Tue Apr 14 00:16:54 2020 commencing in-memory singleton removal
Tue Apr 14 00:16:57 2020 begin with 52643327 relations and 52320100 unique ideals
Tue Apr 14 00:17:19 2020 reduce to 52427201 relations and 48804144 ideals in 7 passes
Tue Apr 14 00:17:19 2020 max relations containing the same ideal: 176
Tue Apr 14 00:17:38 2020 removing 4072883 relations and 3072883 ideals in 1000000 cliques
Tue Apr 14 00:17:39 2020 commencing in-memory singleton removal
Tue Apr 14 00:17:42 2020 begin with 48354318 relations and 48804144 unique ideals
Tue Apr 14 00:18:05 2020 reduce to 48141563 relations and 45515043 ideals in 8 passes
Tue Apr 14 00:18:05 2020 max relations containing the same ideal: 166
Tue Apr 14 00:18:22 2020 removing 3937420 relations and 2937420 ideals in 1000000 cliques
Tue Apr 14 00:18:23 2020 commencing in-memory singleton removal
Tue Apr 14 00:18:26 2020 begin with 44204143 relations and 45515043 unique ideals
Tue Apr 14 00:18:47 2020 reduce to 43985166 relations and 42354678 ideals in 8 passes
Tue Apr 14 00:18:47 2020 max relations containing the same ideal: 159
Tue Apr 14 00:19:02 2020 removing 3855855 relations and 2855855 ideals in 1000000 cliques
Tue Apr 14 00:19:04 2020 commencing in-memory singleton removal
Tue Apr 14 00:19:06 2020 begin with 40129311 relations and 42354678 unique ideals
Tue Apr 14 00:19:23 2020 reduce to 39897778 relations and 39262821 ideals in 7 passes
Tue Apr 14 00:19:23 2020 max relations containing the same ideal: 146
Tue Apr 14 00:19:37 2020 removing 1008731 relations and 811959 ideals in 196772 cliques
Tue Apr 14 00:19:38 2020 commencing in-memory singleton removal
Tue Apr 14 00:19:40 2020 begin with 38889047 relations and 39262821 unique ideals
Tue Apr 14 00:19:54 2020 reduce to 38873526 relations and 38435281 ideals in 6 passes
Tue Apr 14 00:19:54 2020 max relations containing the same ideal: 145
Tue Apr 14 00:20:01 2020 relations with 0 large ideals: 1351
Tue Apr 14 00:20:01 2020 relations with 1 large ideals: 2032
Tue Apr 14 00:20:01 2020 relations with 2 large ideals: 31561
Tue Apr 14 00:20:01 2020 relations with 3 large ideals: 276178
Tue Apr 14 00:20:01 2020 relations with 4 large ideals: 1367390
Tue Apr 14 00:20:01 2020 relations with 5 large ideals: 4161767
Tue Apr 14 00:20:01 2020 relations with 6 large ideals: 8220245
Tue Apr 14 00:20:01 2020 relations with 7+ large ideals: 24813002
Tue Apr 14 00:20:01 2020 commencing 2-way merge
Tue Apr 14 00:20:20 2020 reduce to 25269597 relation sets and 24831352 unique ideals
Tue Apr 14 00:20:20 2020 commencing full merge
Tue Apr 14 00:25:34 2020 memory use: 3048.5 MB
Tue Apr 14 00:25:36 2020 found 12788387 cycles, need 12729552
Tue Apr 14 00:25:38 2020 weight of 12729552 cycles is about 1145831261 (90.01/cycle)
Tue Apr 14 00:25:39 2020 distribution of cycle lengths:
Tue Apr 14 00:25:39 2020 1 relations: 936274
Tue Apr 14 00:25:39 2020 2 relations: 1233061
Tue Apr 14 00:25:39 2020 3 relations: 1366643
Tue Apr 14 00:25:39 2020 4 relations: 1339097
Tue Apr 14 00:25:39 2020 5 relations: 1277647
Tue Apr 14 00:25:39 2020 6 relations: 1185428
Tue Apr 14 00:25:39 2020 7 relations: 1063351
Tue Apr 14 00:25:39 2020 8 relations: 927894
Tue Apr 14 00:25:39 2020 9 relations: 789959
Tue Apr 14 00:25:39 2020 10+ relations: 2610198
Tue Apr 14 00:25:39 2020 heaviest cycle: 23 relations
Tue Apr 14 00:25:41 2020 commencing cycle optimization
Tue Apr 14 00:25:55 2020 start with 80743841 relations
Tue Apr 14 00:27:24 2020 pruned 2673306 relations
Tue Apr 14 00:27:25 2020 memory use: 2459.5 MB
Tue Apr 14 00:27:25 2020 distribution of cycle lengths:
Tue Apr 14 00:27:25 2020 1 relations: 936274
Tue Apr 14 00:27:25 2020 2 relations: 1265314
Tue Apr 14 00:27:25 2020 3 relations: 1424607
Tue Apr 14 00:27:25 2020 4 relations: 1385641
Tue Apr 14 00:27:25 2020 5 relations: 1326920
Tue Apr 14 00:27:25 2020 6 relations: 1219435
Tue Apr 14 00:27:25 2020 7 relations: 1089468
Tue Apr 14 00:27:25 2020 8 relations: 940531
Tue Apr 14 00:27:25 2020 9 relations: 790962
Tue Apr 14 00:27:25 2020 10+ relations: 2350400
Tue Apr 14 00:27:25 2020 heaviest cycle: 22 relations
Tue Apr 14 00:27:42 2020 RelProcTime: 4118
Tue Apr 14 00:27:46 2020
Tue Apr 14 00:27:46 2020 commencing linear algebra
Tue Apr 14 00:27:47 2020 read 12729552 cycles
Tue Apr 14 00:28:04 2020 cycles contain 38650274 unique relations
Tue Apr 14 00:33:26 2020 read 38650274 relations
Tue Apr 14 00:34:10 2020 using 20 quadratic characters above 4294917295
Tue Apr 14 00:36:38 2020 building initial matrix
Tue Apr 14 00:42:43 2020 memory use: 5483.9 MB
Tue Apr 14 00:42:53 2020 read 12729552 cycles
Tue Apr 14 00:42:55 2020 matrix is 12729375 x 12729552 (4750.3 MB) with weight 1481221875 (116.36/col)
Tue Apr 14 00:42:55 2020 sparse part has weight 1092512135 (85.82/col)
Tue Apr 14 00:44:26 2020 filtering completed in 2 passes
Tue Apr 14 00:44:29 2020 matrix is 12728889 x 12729066 (4750.3 MB) with weight 1481204265 (116.36/col)
Tue Apr 14 00:44:29 2020 sparse part has weight 1092509145 (85.83/col)
Tue Apr 14 00:45:25 2020 matrix starts at (0, 0)
Tue Apr 14 00:45:27 2020 matrix is 12728889 x 12729066 (4750.3 MB) with weight 1481204265 (116.36/col)
Tue Apr 14 00:45:27 2020 sparse part has weight 1092509145 (85.83/col)
Tue Apr 14 00:45:27 2020 saving the first 112 matrix rows for later
Tue Apr 14 00:45:29 2020 matrix includes 128 packed rows
Tue Apr 14 00:45:32 2020 matrix is 12728777 x 12729066 (4413.9 MB) with weight 1088568275 (85.52/col)
Tue Apr 14 00:45:32 2020 sparse part has weight 1004323212 (78.90/col)
Tue Apr 14 00:45:32 2020 using block size 8192 and superblock size 442368 for processor cache size 9216 kB
Tue Apr 14 00:45:56 2020 commencing Lanczos iteration (6 threads)
Tue Apr 14 00:45:56 2020 memory use: 5060.6 MB
Tue Apr 14 00:46:26 2020 linear algebra at 0.0%, ETA 64h47m[/code] Looks like 285M relations was a big overestimate - maybe something to do with the double large prime bounds being slightly less than double the single large prime bounds? These were the parameters I ended up using:
[code]tasks.A = 28
tasks.qmin = 500000
tasks.lim0 = 115000000
tasks.lim1 = 175000000
tasks.lpb0 = 31
tasks.lpb1 = 32
tasks.sieve.lambda0 = 1.855
tasks.sieve.lambda1 = 1.85
tasks.sieve.mfb0 = 58
tasks.sieve.mfb1 = 60
tasks.sieve.ncurves0 = 20
tasks.sieve.ncurves1 = 25[/code] I'll do another c177 next; Curtis, what parameters do you think I should try this time (bearing in mind Ed's c176 will also be a useful comparison)? I guess from a data-collection point of view it would also be useful to try some more filtering runs on the first c177 with a smaller number of relations to see how many are actually needed? (Also, shouldn't this really be in a separate thread?) 
[QUOTE=charybdis;542584]Looks like 285M relations was a big overestimate - maybe something to do with the double large prime bounds being slightly less than double the single large prime bounds? These were the parameters I ended up using:
[code]tasks.A = 28
tasks.qmin = 500000
tasks.lim0 = 115000000
tasks.lim1 = 175000000
tasks.lpb0 = 31
tasks.lpb1 = 32
tasks.sieve.lambda0 = 1.855
tasks.sieve.lambda1 = 1.85
tasks.sieve.mfb0 = 58
tasks.sieve.mfb1 = 60
tasks.sieve.ncurves0 = 20
tasks.sieve.ncurves1 = 25[/code] I'll do another c177 next; Curtis, what parameters do you think I should try this time (bearing in mind Ed's c176 will also be a useful comparison)? I guess from a data-collection point of view it would also be useful to try some more filtering runs on the first c177 with a smaller number of relations to see how many are actually needed? (Also, shouldn't this really be in a separate thread?)[/QUOTE] Wow! Sorry about the mistake on the relations estimate. So 266M raw relations, 156M unique (not an unusually good ratio, meaning this is likely not an exceptional poly), built a 12.7M matrix. That's quite small for this size, meaning more relations are not a good idea. If you're willing to do a few filtering runs, please use msieve's filter_maxrels flag (find the exact invocation via msieve -h) to test 260M and 250M relations? Looks like 250M should be the target number for our future c175 file (since this is a c177, at the high end of what this file will cover). Maybe 240M is even enough... Since you aborted the run, I suppose you don't have the CADO-generated summary of sieving thread-time? Bummer. Your final Q of 190M means yield wasn't terrific; that suggests we might benefit from increasing the lim's a bit for your next run. How about lim0=130M and lim1=180M? Those are kind of big by the norms of ggnfs/15e, but you're sieving on 14.5e (A=28). Yield for your job wasn't great: Q from 1-190M producing 266M relations is just under a yield of 1.5. I think I'd boost either the siever (to I=15, in which case don't bother changing lim's, or even reduce them a bit) or LP. Alternative: go for 32 LP on both sides, rather than 31/32. That would add 30% to relations needed, 325M rather than 250M. 
lim's should be less unbalanced in this case, e.g. 140M and 175M. I think I like this plan better, as a less-massive change to the params. EDIT: Also, Ed is doing I=15 on his run, so we'll get some sort of comparison there. Do you have any sense for whether the job ran more quickly than you expected, or less? GNFS jobs seem to double in length every 5.5 digits with CADO, if that helps you make a comparison to previous work. In my experience, the poly select tweak of nq=15625, starting Q really small, and tight lambda/low mfb settings generally seem to take ~2 digits off the job time compared to CADO defaults. As for breaking off a new thread for this interesting 175-params discussion, that's Ed's call - it's his subforum, after all! 
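The yield and relation-count arithmetic from the post above, spelled out (the Q endpoints and the 30% figure for 32/32 large primes are the thread's round numbers, not exact values):

```python
# Yield check: ~266M raw relations over a special-q range of roughly 190M.
raw_rels = 266_000_000
q_span = 190_000_000                 # Q sieved from near 0 up to 190M
print(round(raw_rels / q_span, 2))   # 1.4 relations per special-q, "just under 1.5"

# Moving from 31/32 to 32/32 large primes: ~30% more relations needed.
rels_31_32 = 250_000_000
print(int(rels_31_32 * 1.3))         # 325000000
```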
[quote=VBCurtis;542587]Since you aborted the run, I suppose you don't have the CADO-generated summary of sieving thread-time? Bummer.[/quote]
I don't have the full summary, but there are the "combined stats" lines in the log, which give 'stats_total_cpu_time': '67364457.04999968', or ~2.13 CPU-years for sieving (most of the CPUs are i5-4xxx and i5-6xxx). [quote]Alternative: go for 32 LP on both sides, rather than 31/32. That would add 30% to relations needed, 325M rather than 250M. lim's should be less unbalanced in this case, e.g. 140M and 175M. I think I like this plan better, as a less-massive change to the params. EDIT: Also, Ed is doing I=15 on his run, so we'll get some sort of comparison there.[/quote] Yes, even if I=15 turns out to be faster I guess 32/32 will give a better data point. I suppose the double large prime bounds should go up too? [quote]Do you have any sense for whether the job ran more quickly than you expected, or less? GNFS jobs seem to double in length every 5.5 digits with CADO, if that helps you make a comparison to previous work. My experience with the poly select tweak of nq=15625, starting Q really small, and tight lambda/low mfb settings is that they generally seem to take ~2 digits off the job time compared to CADO defaults.[/quote] I think it took a little longer than I expected, but some of this will be down to the oversieving at high Q with low yield; we'll get a better idea once I've done some filtering runs with fewer relations. I'd already been trying to fudge the parameters for my jobs using some of your guesses. 
[QUOTE=charybdis;542584]. . .
(Also shouldn't this really be in a separate thread?)[/QUOTE] [QUOTE=VBCurtis;542587]. . . As for breaking off a new thread for this interesting 175-params discussion, that's Ed's call; it's his subforum, after all![/QUOTE]A separate thread would be fine, but some of the posts intermix topics. The thread might gain more interest from others if it were located outside the blog area. And this portion doesn't match the original post. It's OK either way. If you'd like to, Curtis, you can grab the relevant posts and move them to a more appropriate location. 
After too many restarts to kick the server into issuing WUs (I really thought I was using an earlier version that never did this before), I'm closing in on the target relations.
@Curtis, What was the suggested density for my msieve matrix? 
For C175-ish, and your ratio of sieving to matrix resources, I'd go for 116 or 120.
For a normal person (say, using one or two machines), I'd go for 105-110. 
[QUOTE=charybdis;542616]
Yes, even if I=15 turns out to be faster I guess 32/32 will give a better data point. I suppose the double large prime bounds should go up too? [/QUOTE] Yep! Mfb should be 60 on both sides; lambda0 and lambda1 are floating-point controls of mfb: in effect, multiply lambda by LP and you get the effective MFB that CADO is using. I round that up to choose mfb, but the choice isn't doing much because lambda is a tighter control. 1.85 * 32 = 59.2, so I used 60. With that lambda setting, only cofactors that split into one factor smaller than 27.2 bits (and thus the other bigger than 32 bits) are wasted effort. 2^27.2 is 154M, so with lim set to that value there is no wasted cofactorization effort: every split yields a relation. This also shows you how I compute lambda: if lim1 is raised to 180M, that's 27.43 bits. 27.43/32 is 0.857, so lambda1 should be raised to at least 1.86, since no cofactor can split into a prime smaller than lim. Adding 0.01 raises yield and doesn't seem to require too many more relations, so perhaps try 1.87 for lambda1, and 1.85 or 1.855 for lambda0 if you use 32/32 and 140/180M for lim's. 
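To make the arithmetic above concrete, here is a small sketch of the lambda calculation (the helper is my own; the lim/lpb values are the ones from this post):

```python
import math

def min_lambda(lim, lpb):
    """Tightest useful lambda: a two-prime cofactor is worth keeping only
    if both primes lie between lim and 2^lpb, so its largest useful size
    is log2(lim) + lpb bits, i.e. lambda = 1 + log2(lim)/lpb."""
    return 1 + math.log2(lim) / lpb

lam = min_lambda(180e6, 32)        # lim1 = 180M, 32-bit large primes
print(round(lam, 3))               # ~1.857, so raise lambda1 to at least 1.86
print(math.ceil(lam * 32))         # effective mfb rounds up to 60
```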
Done a few more filtering runs for the first c177.
250M relations gave: [code]matrix is 13947683 x 13947907 (5100.4 MB) with weight 1347451665 (96.61/col) [/code] 245M gave: [code]matrix is 14494203 x 14494428 (5303.1 MB) with weight 1399857738 (96.58/col) [/code] 240M wasn't enough to build a matrix. These were all at target density 90; higher would presumably be better, but we're probably talking savings of less than a day (the 14.5M matrix had an ETA of ~81h). What's optimal will depend heavily both on the individual setup and on whether the aim is to factor a single c17x as quickly as possible (for which more relations, giving a faster matrix, is better) or to factor lots in succession, since then the matrix can be left running while the next number sieves. [QUOTE=VBCurtis;542646]Yep! Mfb should be 60 on both sides; lambda0 and lambda1 are floating-point controls of mfb: in effect, multiply lambda by LP and you get the effective MFB that CADO is using. I round that up to choose mfb, but the choice isn't doing much because lambda is a tighter control. 1.85 * 32 = 59.2, so I used 60. With that lambda setting, only cofactors that split into one factor smaller than 27.2 bits (and thus the other bigger than 32 bits) are wasted effort. 2^27.2 is 154M, so with lim set to that value there is no wasted cofactorization effort: every split yields a relation. This also shows you how I compute lambda: if lim1 is raised to 180M, that's 27.43 bits. 27.43/32 is 0.857, so lambda1 should be raised to at least 1.86, since no cofactor can split into a prime smaller than lim. Adding 0.01 raises yield and doesn't seem to require too many more relations, so perhaps try 1.87 for lambda1, and 1.85 or 1.855 for lambda0 if you use 32/32 and 140/180M for lim's.[/QUOTE] Thanks again! I've started the next c177. 
@Curtis:
Sieving is complete and I am now trying to run msieve for the LA. Here is a portion of the log: [code] PID15335 20200416 09:03. . . Debug:Lattice Sieving: stderr is: b"# redoing q=93240031, rho=4478888 because 1s buckets are full\n# Fullest level1s bucket #1090, wrote 3135/3072\n# Average J=15601 for 558 specialq's, max bucket fill bkmult 1,1s:1.07153\n# Discarded 0 specialq's out of 558 pushed\n# Wasted cpu time due to 1 bkmult adjustments: 8.39\n# Total cpu time 8830.48s [norm 7.92+19.5, sieving 8481.8 (4614.5 + 259.7 + 3607.7), factor 321.2 (321.0 + 0.2)] (not incl wasted time)\n# Total elapsed time 1299.00s, per specialq 2.32796s, per relation 0.0650575s\n# PeakMemusage (MB) = 3094 \n# Total 19967 reports [0.442s/r, 35.8r/sq] in 1.3e+03 elapsed s [679.8% CPU]\n" Debug:Lattice Sieving: Newly arrived stats: {'stats_avg_J': '15601.0 558', 'stats_total_time': '1299.0', 'stats_total_cpu_time': '8830.48', 'stats_max_bucket_fill': '1,1s:1.07153'} Debug:Lattice Sieving: Combined stats: {'stats_avg_J': '16023.856554636326 5345903', 'stats_total_time': '21977179.10000008', 'stats_total_cpu_time': '93963886.25000069', 'stats_max_bucket_fill': '1.0,1s:1.416720'} Info:Lattice Sieving: Found 19967 relations in '/tmp/cadofactor/c175.upload/c175.9324000093250000.ue6ki7i9.gz', total is now 270013333/270000000 . . . Debug:Lattice Sieving: Exit SievingTask.run(sieving) Info:Lattice Sieving: Aggregate statistics: Info:Lattice Sieving: Total number of relations: 270013333 Info:Lattice Sieving: Average J: 16023.9 for 5345903 specialq, max bucket fill bkmult 1.0,1s:1.416720 Info:Lattice Sieving: Total time: 2.19772e+07s Info:Filtering  Duplicate Removal, splitting pass: Stopping at duplicates1 [/code]Is there something else of value I should seek for you? 
t_d=120 didn't build:
[code] Thu Apr 16 10:03:48 2020 Thu Apr 16 10:03:48 2020 Thu Apr 16 10:03:48 2020 Msieve v. 1.54 (SVN 1018) Thu Apr 16 10:03:48 2020 random seeds: f85ef96f 7298ed39 Thu Apr 16 10:03:48 2020 factoring 76552370139504036674890813564032281493867343366619508594816489005834882856199128873928842970710045044111574726594936894404957063604759585302342441093226844531070349677623657609 (176 digits) Thu Apr 16 10:03:49 2020 searching for 15-digit factors Thu Apr 16 10:03:50 2020 commencing number field sieve (176-digit input) Thu Apr 16 10:03:50 2020 R0: 10749206376460432970317818596117873 Thu Apr 16 10:03:50 2020 R1: 4023609444811856477743 Thu Apr 16 10:03:50 2020 A0: 91389778824609164214454779424151524400880 Thu Apr 16 10:03:50 2020 A1: 16573333756774205759678902993899502 Thu Apr 16 10:03:50 2020 A2: 8753197000583595457254903663 Thu Apr 16 10:03:50 2020 A3: 1186820920867031701728 Thu Apr 16 10:03:50 2020 A4: 77519198521772 Thu Apr 16 10:03:50 2020 A5: 533400 Thu Apr 16 10:03:50 2020 skew 1.00, size 3.822e-17, alpha -6.645, combined = 8.280e-16 rroots = 5 Thu Apr 16 10:03:50 2020 Thu Apr 16 10:03:50 2020 commencing relation filtering Thu Apr 16 10:03:50 2020 setting target matrix density to 120.0 Thu Apr 16 10:03:50 2020 estimated available RAM is 15926.6 MB Thu Apr 16 10:03:50 2020 commencing duplicate removal, pass 1 Thu Apr 16 10:03:51 2020 error 1 reading relation 189590 . . . 
Thu Apr 16 10:38:29 2020 error 1 reading relation 267865222 Thu Apr 16 10:38:58 2020 found 88770665 hash collisions in 271761268 relations Thu Apr 16 10:39:31 2020 added 122298 free relations Thu Apr 16 10:39:31 2020 commencing duplicate removal, pass 2 Thu Apr 16 10:46:02 2020 found 125086447 duplicates and 146797119 unique relations Thu Apr 16 10:46:02 2020 memory use: 1449.5 MB Thu Apr 16 10:46:03 2020 reading ideals above 139919360 Thu Apr 16 10:46:03 2020 commencing singleton removal, initial pass Thu Apr 16 11:01:31 2020 memory use: 3012.0 MB Thu Apr 16 11:01:31 2020 reading all ideals from disk Thu Apr 16 11:01:49 2020 memory use: 2357.7 MB Thu Apr 16 11:01:53 2020 commencing inmemory singleton removal Thu Apr 16 11:01:57 2020 begin with 146797119 relations and 142617272 unique ideals Thu Apr 16 11:02:39 2020 reduce to 56506171 relations and 38799310 ideals in 18 passes Thu Apr 16 11:02:39 2020 max relations containing the same ideal: 21 Thu Apr 16 11:02:42 2020 reading ideals above 720000 Thu Apr 16 11:02:42 2020 commencing singleton removal, initial pass Thu Apr 16 11:13:13 2020 memory use: 1506.0 MB Thu Apr 16 11:13:13 2020 reading all ideals from disk Thu Apr 16 11:13:30 2020 memory use: 2241.3 MB Thu Apr 16 11:13:36 2020 keeping 54258109 ideals with weight <= 200, target excess is 313347 Thu Apr 16 11:13:42 2020 commencing inmemory singleton removal Thu Apr 16 11:13:47 2020 begin with 56506171 relations and 54258109 unique ideals Thu Apr 16 11:14:50 2020 reduce to 56030216 relations and 53781539 ideals in 13 passes Thu Apr 16 11:14:50 2020 max relations containing the same ideal: 200 Thu Apr 16 11:15:17 2020 removing 3684525 relations and 3284525 ideals in 400000 cliques Thu Apr 16 11:15:18 2020 commencing inmemory singleton removal Thu Apr 16 11:15:23 2020 begin with 52345691 relations and 53781539 unique ideals Thu Apr 16 11:16:08 2020 reduce to 52174610 relations and 50324306 ideals in 10 passes Thu Apr 16 11:16:08 2020 max relations containing the 
same ideal: 197 Thu Apr 16 11:16:33 2020 removing 2772195 relations and 2372195 ideals in 400000 cliques Thu Apr 16 11:16:34 2020 commencing inmemory singleton removal Thu Apr 16 11:16:38 2020 begin with 49402415 relations and 50324306 unique ideals Thu Apr 16 11:17:17 2020 reduce to 49291975 relations and 47840809 ideals in 9 passes Thu Apr 16 11:17:17 2020 max relations containing the same ideal: 190 Thu Apr 16 11:17:40 2020 removing 2488158 relations and 2088158 ideals in 400000 cliques Thu Apr 16 11:17:41 2020 commencing inmemory singleton removal Thu Apr 16 11:17:45 2020 begin with 46803817 relations and 47840809 unique ideals Thu Apr 16 11:18:22 2020 reduce to 46708746 relations and 45656840 ideals in 9 passes Thu Apr 16 11:18:22 2020 max relations containing the same ideal: 185 Thu Apr 16 11:18:43 2020 removing 2334687 relations and 1934687 ideals in 400000 cliques Thu Apr 16 11:18:44 2020 commencing inmemory singleton removal Thu Apr 16 11:18:49 2020 begin with 44374059 relations and 45656840 unique ideals Thu Apr 16 11:19:19 2020 reduce to 44283467 relations and 43630836 ideals in 8 passes Thu Apr 16 11:19:19 2020 max relations containing the same ideal: 182 Thu Apr 16 11:19:40 2020 removing 1701806 relations and 1412658 ideals in 289148 cliques Thu Apr 16 11:19:41 2020 commencing inmemory singleton removal Thu Apr 16 11:19:44 2020 begin with 42581661 relations and 43630836 unique ideals Thu Apr 16 11:20:14 2020 reduce to 42532132 relations and 42168363 ideals in 8 passes Thu Apr 16 11:20:14 2020 max relations containing the same ideal: 176 Thu Apr 16 11:20:41 2020 relations with 0 large ideals: 1038 Thu Apr 16 11:20:41 2020 relations with 1 large ideals: 1550 Thu Apr 16 11:20:41 2020 relations with 2 large ideals: 21593 Thu Apr 16 11:20:41 2020 relations with 3 large ideals: 198057 Thu Apr 16 11:20:41 2020 relations with 4 large ideals: 1072004 Thu Apr 16 11:20:41 2020 relations with 5 large ideals: 3623143 Thu Apr 16 11:20:41 2020 relations with 6 large 
ideals: 7969344 Thu Apr 16 11:20:41 2020 relations with 7+ large ideals: 29645403 Thu Apr 16 11:20:41 2020 commencing 2way merge Thu Apr 16 11:21:10 2020 reduce to 25695897 relation sets and 25332128 unique ideals Thu Apr 16 11:21:10 2020 commencing full merge Thu Apr 16 11:40:43 2020 memory use: 1167.7 MB Thu Apr 16 11:40:44 2020 found 84937 cycles, need 5310047 Thu Apr 16 11:40:44 2020 too few cycles, matrix probably cannot build [/code]More relations or less dense? I'll try both while waiting for your reply. 
The C177 from above had 156M unique relations and built a nice matrix. You have 146M, so your poly happened to generate more duplicate relations. I'd aim for that 155M unique number since it worked well on the C177; your duplicate rate is quite high, so something like 20M more raw relations might get you there. 25M wouldn't be bad.
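A rough sanity check of the "20M more raw relations" estimate, using the counts from the msieve filtering log. The 40% marginal unique yield is my assumption; it must sit below the 54% average, since the duplicate rate climbs as sieving continues:

```python
raw, unique = 271.8e6, 146.8e6   # relations counted by msieve's filtering pass
target_unique = 155e6            # what built a good matrix for the similar c177

avg_rate = unique / raw          # ~0.54 unique fraction so far
marginal_rate = 0.40             # assumed: newly sieved relations duplicate more
extra_raw = (target_unique - unique) / marginal_rate   # roughly 20M more raw
print(f"average unique rate {avg_rate:.2f}, "
      f"need roughly {extra_raw / 1e6:.0f}M more raw relations")
```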

[QUOTE=EdH;542856]@Curtis:
Sieving is complete and I am now trying to run msieve for the LA. Here is a portion of the log: [code]Debug:Lattice Sieving: Exit SievingTask.run(sieving) Info:Lattice Sieving: Aggregate statistics: Info:Lattice Sieving: Total number of relations: 270013333 Info:Lattice Sieving: Average J: 16023.9 for 5345903 specialq, max bucket fill bkmult 1.0,1s:1.416720 Info:Lattice Sieving: Total time: 2.19772e+07s Info:Filtering  Duplicate Removal, splitting pass: Stopping at duplicates1 [/code]Is there something else of value I should seek for you?[/QUOTE] This is great! We see that on your farm, 22 million thread-seconds produced 270M raw relations. If we refine params for 170-180 digit problems in the future, we have that time to compare to. In this case, you need another 8% relations or so, so we can add 10% to time and say 24M thread-seconds of sieving for this C176. Comparison points, using my own params: C155, 2.3M thread-sec on 12 threads of a 6-core Haswell 3.3GHz; C186, 100M thread-sec on many threads of IvyB Xeon 2.6GHz. RichD did a C150 in 0.7M core-sec of non-HT i5 (I forget what speed). Only the C150 params had multiple test/refine cycles; the rest were first guesses like this C175. 
[QUOTE=VBCurtis;542871]The C177 from above had 156M unique relations and built a nice matrix. You have 146M, so your poly happened to generate more duplicate relations. I'd aim for that 155M unique number since it worked well on the C177; your duplicate rate is quite high, so something like 20M more raw relations might get you there. 25M wouldn't be bad.[/QUOTE]
I've told CADONFS to run to 300M raw relations, but I can do an msieve test prior to that without disturbing CADONFS. 
[QUOTE=VBCurtis;542872]22 million threadseconds[/QUOTE]
Isn't that client-seconds, rather than thread-seconds? Just above it in the log that Ed posted was: [code]Debug:Lattice Sieving: Combined stats: {'stats_avg_J': '16023.856554636326 5345903', 'stats_total_time': '21977179.10000008', 'stats_total_cpu_time': '93963886.25000069', 'stats_max_bucket_fill': '1.0,1s:1.416720'}[/code] One of these lines gets printed to the log each time a workunit arrives. From looking at my logs, 'stats_total_time' is client-seconds and 'stats_total_cpu_time' is thread-seconds. The "Total time: 2.19772e+07s" line, from which you're getting 22 million seconds, is using the final 'stats_total_time', which isn't very helpful. When a CADO job finishes completely (including postprocessing), the time given for the whole factorization appears to use 'stats_total_cpu_time' instead, as it should. Edit: for comparison, the final such line from my c177 before I ran filtering: [code]Debug:Lattice Sieving: Combined stats: {'stats_total_cpu_time': '67364457.04999968', 'stats_total_time': '34176880.70000007', 'stats_max_bucket_fill': '1.0,1s:1.081950', 'stats_avg_J': '15365.574654104747 10509757'}[/code] Ed's c176 seems to have taken significantly longer in CPU-time than the c177, though I'm not sure how much of this is down to which CPUs are being used. Is there any adverse effect from running a larger number of threads per client, as Ed seems to be doing? 
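For anyone else digging through these logs: the part after "Combined stats: " is a Python dict literal, so it can be pulled apart directly. A small sketch (the parsing approach is mine; the line is the one Ed posted):

```python
import ast

line = ("Debug:Lattice Sieving: Combined stats: "
        "{'stats_avg_J': '16023.856554636326 5345903', "
        "'stats_total_time': '21977179.10000008', "
        "'stats_total_cpu_time': '93963886.25000069', "
        "'stats_max_bucket_fill': '1.0,1s:1.416720'}")

# Everything after "Combined stats: " parses as a literal dict.
stats = ast.literal_eval(line.split("Combined stats: ", 1)[1])
client_sec = float(stats["stats_total_time"])      # wall time summed over clients
thread_sec = float(stats["stats_total_cpu_time"])  # the figure to compare jobs by
print(f"client-seconds: {client_sec / 1e6:.1f}M, "
      f"thread-seconds: {thread_sec / 1e6:.1f}M")
```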
Hrmmmmm..... I've been using the screen display "total sieving time" that pops up at the end of the sieve phase, which matches the end-of-factorization summary printed to screen. I hardly ever delve into the log file.
Ed's timing falls into my curve-fitting of time vs input size from the other jobs I listed, so I am quite confused. I appreciate you pointing out the inconsistency in the log report! 
[QUOTE=VBCurtis;542887]Hrmmmmm..... I've been using the screen display "total sieving time" that pops up at the end of the sieve phase, which matches the end-of-factorization summary printed to screen. I hardly ever delve into the log file.
Ed's timing falls into my curve-fitting of time vs input size from the other jobs I listed, so I am quite confused. I appreciate you pointing out the inconsistency in the log report![/QUOTE] Did you also use the "total sieving time" to get the times for some of the other jobs you mentioned? If so, that might explain the discrepancy. Here are a couple of sample lines from an old completed job, as an example: [code]Info:Lattice Sieving: Total time: 403109s ... Info:Complete Factorization / Discrete logarithm: Total cpu/elapsed time for entire factorization: 862194/16096.8[/code] The second one seems to give the correct CPU time, whereas the first is just client-seconds; the sieving for this job took just over 2 hours wall-clock time on ~110 cores, which is consistent with ~800k CPU-seconds for sieving plus a little extra for polyselect + postprocessing. 
[QUOTE=VBCurtis;542887]Hrmmmmm..... I've been using the screen display "total sieving time" that pops up at the end of the sieve phase, which matches the end-of-factorization summary printed to screen. I hardly ever delve into the log file.
Ed's timing falls into my curve-fitting of time vs input size from the other jobs I listed, so I am quite confused. I appreciate you pointing out the inconsistency in the log report![/QUOTE]Here are the significant last lines from the screen run to 270M: [code] Info:Lattice Sieving: Reached target of 270000000 relations, now have 270013333 Info:Lattice Sieving: Aggregate statistics: Info:Lattice Sieving: Total number of relations: 270013333 Info:Lattice Sieving: Average J: 16023.9 for 5345903 specialq, max bucket fill bkmult 1.0,1s:1.416720 Info:Lattice Sieving: Total time: 2.19772e+07s Info:Filtering  Duplicate Removal, splitting pass: Stopping at duplicates1 [/code] 
t_d=100 will build a matrix:
[code] Thu Apr 16 12:15:32 2020 commencing relation filtering Thu Apr 16 12:15:32 2020 setting target matrix density to 100.0 Thu Apr 16 12:15:32 2020 estimated available RAM is 15926.6 MB Thu Apr 16 12:15:32 2020 commencing duplicate removal, pass 1 Thu Apr 16 12:15:34 2020 error 1 reading relation 189590 . . . Thu Apr 16 12:49:54 2020 error 1 reading relation 267865222 Thu Apr 16 12:50:24 2020 found 88770665 hash collisions in 271761268 relations Thu Apr 16 12:50:56 2020 added 122298 free relations Thu Apr 16 12:50:56 2020 commencing duplicate removal, pass 2 Thu Apr 16 12:57:26 2020 found 125086447 duplicates and 146797119 unique relations Thu Apr 16 12:57:26 2020 memory use: 1449.5 MB Thu Apr 16 12:57:26 2020 reading ideals above 139919360 Thu Apr 16 12:57:26 2020 commencing singleton removal, initial pass Thu Apr 16 13:13:00 2020 memory use: 3012.0 MB Thu Apr 16 13:13:00 2020 reading all ideals from disk Thu Apr 16 13:13:19 2020 memory use: 2357.7 MB Thu Apr 16 13:13:24 2020 commencing inmemory singleton removal Thu Apr 16 13:13:28 2020 begin with 146797119 relations and 142617272 unique ideals Thu Apr 16 13:14:09 2020 reduce to 56506171 relations and 38799310 ideals in 18 passes Thu Apr 16 13:14:09 2020 max relations containing the same ideal: 21 Thu Apr 16 13:14:12 2020 reading ideals above 720000 Thu Apr 16 13:14:12 2020 commencing singleton removal, initial pass Thu Apr 16 13:24:49 2020 memory use: 1506.0 MB Thu Apr 16 13:24:49 2020 reading all ideals from disk Thu Apr 16 13:25:08 2020 memory use: 2241.3 MB Thu Apr 16 13:25:14 2020 keeping 54258109 ideals with weight <= 200, target excess is 313347 Thu Apr 16 13:25:20 2020 commencing inmemory singleton removal Thu Apr 16 13:25:25 2020 begin with 56506171 relations and 54258109 unique ideals Thu Apr 16 13:26:29 2020 reduce to 56030216 relations and 53781539 ideals in 13 passes Thu Apr 16 13:26:29 2020 max relations containing the same ideal: 200 Thu Apr 16 13:26:55 2020 removing 3684525 relations and 
3284525 ideals in 400000 cliques Thu Apr 16 13:26:57 2020 commencing inmemory singleton removal Thu Apr 16 13:27:01 2020 begin with 52345691 relations and 53781539 unique ideals Thu Apr 16 13:27:47 2020 reduce to 52174610 relations and 50324306 ideals in 10 passes Thu Apr 16 13:27:47 2020 max relations containing the same ideal: 197 Thu Apr 16 13:28:11 2020 removing 2772195 relations and 2372195 ideals in 400000 cliques Thu Apr 16 13:28:13 2020 commencing inmemory singleton removal Thu Apr 16 13:28:17 2020 begin with 49402415 relations and 50324306 unique ideals Thu Apr 16 13:28:55 2020 reduce to 49291975 relations and 47840809 ideals in 9 passes Thu Apr 16 13:28:55 2020 max relations containing the same ideal: 190 Thu Apr 16 13:29:19 2020 removing 2488158 relations and 2088158 ideals in 400000 cliques Thu Apr 16 13:29:20 2020 commencing inmemory singleton removal Thu Apr 16 13:29:24 2020 begin with 46803817 relations and 47840809 unique ideals Thu Apr 16 13:30:00 2020 reduce to 46708746 relations and 45656840 ideals in 9 passes Thu Apr 16 13:30:00 2020 max relations containing the same ideal: 185 Thu Apr 16 13:30:22 2020 removing 2334687 relations and 1934687 ideals in 400000 cliques Thu Apr 16 13:30:23 2020 commencing inmemory singleton removal Thu Apr 16 13:30:27 2020 begin with 44374059 relations and 45656840 unique ideals Thu Apr 16 13:30:57 2020 reduce to 44283467 relations and 43630836 ideals in 8 passes Thu Apr 16 13:30:57 2020 max relations containing the same ideal: 182 Thu Apr 16 13:31:18 2020 removing 1701806 relations and 1412658 ideals in 289148 cliques Thu Apr 16 13:31:19 2020 commencing inmemory singleton removal Thu Apr 16 13:31:23 2020 begin with 42581661 relations and 43630836 unique ideals Thu Apr 16 13:31:52 2020 reduce to 42532132 relations and 42168363 ideals in 8 passes Thu Apr 16 13:31:52 2020 max relations containing the same ideal: 176 Thu Apr 16 13:32:19 2020 relations with 0 large ideals: 1038 Thu Apr 16 13:32:19 2020 relations with 1 
large ideals: 1550 Thu Apr 16 13:32:19 2020 relations with 2 large ideals: 21593 Thu Apr 16 13:32:19 2020 relations with 3 large ideals: 198057 Thu Apr 16 13:32:19 2020 relations with 4 large ideals: 1072004 Thu Apr 16 13:32:19 2020 relations with 5 large ideals: 3623143 Thu Apr 16 13:32:19 2020 relations with 6 large ideals: 7969344 Thu Apr 16 13:32:19 2020 relations with 7+ large ideals: 29645403 Thu Apr 16 13:32:19 2020 commencing 2way merge Thu Apr 16 13:32:48 2020 reduce to 25695897 relation sets and 25332128 unique ideals Thu Apr 16 13:32:48 2020 commencing full merge Thu Apr 16 13:41:06 2020 memory use: 3081.6 MB Thu Apr 16 13:41:09 2020 found 12276374 cycles, need 12244328 Thu Apr 16 13:41:12 2020 weight of 12244328 cycles is about 1224870286 (100.04/cycle) Thu Apr 16 13:41:12 2020 distribution of cycle lengths: Thu Apr 16 13:41:12 2020 1 relations: 1128304 Thu Apr 16 13:41:12 2020 2 relations: 1130284 Thu Apr 16 13:41:12 2020 3 relations: 1190346 Thu Apr 16 13:41:12 2020 4 relations: 1117203 Thu Apr 16 13:41:12 2020 5 relations: 1062965 Thu Apr 16 13:41:12 2020 6 relations: 969179 Thu Apr 16 13:41:12 2020 7 relations: 868868 Thu Apr 16 13:41:12 2020 8 relations: 766062 Thu Apr 16 13:41:12 2020 9 relations: 679349 Thu Apr 16 13:41:12 2020 10+ relations: 3331768 Thu Apr 16 13:41:12 2020 heaviest cycle: 28 relations Thu Apr 16 13:41:15 2020 commencing cycle optimization Thu Apr 16 13:41:35 2020 start with 86442588 relations Thu Apr 16 13:43:59 2020 pruned 2629604 relations Thu Apr 16 13:44:00 2020 memory use: 2639.1 MB Thu Apr 16 13:44:00 2020 distribution of cycle lengths: Thu Apr 16 13:44:00 2020 1 relations: 1128304 Thu Apr 16 13:44:00 2020 2 relations: 1158969 Thu Apr 16 13:44:00 2020 3 relations: 1235456 Thu Apr 16 13:44:00 2020 4 relations: 1152269 Thu Apr 16 13:44:00 2020 5 relations: 1097163 Thu Apr 16 13:44:00 2020 6 relations: 991876 Thu Apr 16 13:44:00 2020 7 relations: 886776 Thu Apr 16 13:44:00 2020 8 relations: 775313 Thu Apr 16 13:44:00 2020 9 
relations: 683433 Thu Apr 16 13:44:00 2020 10+ relations: 3134769 Thu Apr 16 13:44:00 2020 heaviest cycle: 27 relations Thu Apr 16 13:44:19 2020 RelProcTime: 5327 Thu Apr 16 13:44:24 2020 Thu Apr 16 13:44:24 2020 commencing linear algebra Thu Apr 16 13:44:25 2020 read 12244328 cycles Thu Apr 16 13:44:47 2020 cycles contain 42246024 unique relations Thu Apr 16 13:51:48 2020 read 42246024 relations Thu Apr 16 13:52:54 2020 using 20 quadratic characters above 4294917295 Thu Apr 16 13:55:56 2020 building initial matrix Thu Apr 16 14:03:01 2020 memory use: 5922.5 MB Thu Apr 16 14:03:06 2020 read 12244328 cycles Thu Apr 16 14:03:09 2020 matrix is 12244151 x 12244328 (5011.5 MB) with weight 1544672270 (126.15/col) Thu Apr 16 14:03:09 2020 sparse part has weight 1166814329 (95.29/col) Thu Apr 16 14:05:56 2020 filtering completed in 2 passes Thu Apr 16 14:05:59 2020 matrix is 12243059 x 12243236 (5011.5 MB) with weight 1544627458 (126.16/col) Thu Apr 16 14:05:59 2020 sparse part has weight 1166804691 (95.30/col) Thu Apr 16 14:07:01 2020 matrix starts at (0, 0) Thu Apr 16 14:07:04 2020 matrix is 12243059 x 12243236 (5011.5 MB) with weight 1544627458 (126.16/col) Thu Apr 16 14:07:04 2020 sparse part has weight 1166804691 (95.30/col) Thu Apr 16 14:07:04 2020 saving the first 48 matrix rows for later Thu Apr 16 14:07:06 2020 matrix includes 64 packed rows Thu Apr 16 14:07:08 2020 matrix is 12243011 x 12243236 (4887.3 MB) with weight 1292507191 (105.57/col) Thu Apr 16 14:07:08 2020 sparse part has weight 1158749915 (94.64/col) Thu Apr 16 14:07:08 2020 using block size 8192 and superblock size 786432 for processor cache size 8192 kB Thu Apr 16 14:08:03 2020 commencing Lanczos iteration (8 threads) Thu Apr 16 14:08:03 2020 memory use: 4077.8 MB Thu Apr 16 14:08:55 2020 linear algebra at 0.0%, ETA 112h 0m [/code] 
12.2M at TD 100 seems pretty good; maybe just 10-15M more relations will get you TD 120 and a ~20% faster matrix.

t_d=120 success:
[code] Fri Apr 17 10:03:58 2020 Msieve v. 1.54 (SVN 1018) Fri Apr 17 10:03:58 2020 random seeds: 947503a5 a978d24a Fri Apr 17 10:03:58 2020 factoring 76552370139504036674890813564032281493867343366619508594816489005834882856199128873928842970710045044111574726594936894404957063604759585302342441093226844531070349677623657609 (176 digits) Fri Apr 17 10:03:59 2020 searching for 15-digit factors Fri Apr 17 10:03:59 2020 commencing number field sieve (176-digit input) Fri Apr 17 10:03:59 2020 R0: 10749206376460432970317818596117873 Fri Apr 17 10:03:59 2020 R1: 4023609444811856477743 Fri Apr 17 10:03:59 2020 A0: 91389778824609164214454779424151524400880 Fri Apr 17 10:03:59 2020 A1: 16573333756774205759678902993899502 Fri Apr 17 10:03:59 2020 A2: 8753197000583595457254903663 Fri Apr 17 10:03:59 2020 A3: 1186820920867031701728 Fri Apr 17 10:03:59 2020 A4: 77519198521772 Fri Apr 17 10:03:59 2020 A5: 533400 Fri Apr 17 10:03:59 2020 skew 1.00, size 3.822e-17, alpha -6.645, combined = 8.280e-16 rroots = 5 Fri Apr 17 10:03:59 2020 Fri Apr 17 10:03:59 2020 commencing relation filtering Fri Apr 17 10:03:59 2020 setting target matrix density to 120.0 Fri Apr 17 10:03:59 2020 estimated available RAM is 15926.6 MB Fri Apr 17 10:03:59 2020 commencing duplicate removal, pass 1 Fri Apr 17 10:04:01 2020 error 1 reading relation 160728 . . . 
Fri Apr 17 10:42:16 2020 error 1 reading relation 294036055 Fri Apr 17 10:42:19 2020 found 96785463 hash collisions in 294359053 relations Fri Apr 17 10:42:51 2020 added 122298 free relations Fri Apr 17 10:42:51 2020 commencing duplicate removal, pass 2 Fri Apr 17 10:49:52 2020 found 136725894 duplicates and 157755457 unique relations Fri Apr 17 10:49:52 2020 memory use: 2387.0 MB Fri Apr 17 10:49:52 2020 reading ideals above 139919360 Fri Apr 17 10:49:52 2020 commencing singleton removal, initial pass Fri Apr 17 11:06:57 2020 memory use: 3012.0 MB Fri Apr 17 11:06:57 2020 reading all ideals from disk Fri Apr 17 11:07:13 2020 memory use: 2535.2 MB Fri Apr 17 11:07:18 2020 commencing inmemory singleton removal Fri Apr 17 11:07:22 2020 begin with 157755457 relations and 148007529 unique ideals Fri Apr 17 11:08:04 2020 reduce to 67900523 relations and 45789557 ideals in 15 passes Fri Apr 17 11:08:04 2020 max relations containing the same ideal: 22 Fri Apr 17 11:08:08 2020 reading ideals above 720000 Fri Apr 17 11:08:08 2020 commencing singleton removal, initial pass Fri Apr 17 11:20:26 2020 memory use: 1506.0 MB Fri Apr 17 11:20:26 2020 reading all ideals from disk Fri Apr 17 11:20:48 2020 memory use: 2698.7 MB Fri Apr 17 11:20:55 2020 keeping 61205493 ideals with weight <= 200, target excess is 380214 Fri Apr 17 11:21:02 2020 commencing inmemory singleton removal Fri Apr 17 11:21:09 2020 begin with 67900523 relations and 61205493 unique ideals Fri Apr 17 11:22:25 2020 reduce to 67676809 relations and 60981638 ideals in 13 passes Fri Apr 17 11:22:25 2020 max relations containing the same ideal: 200 Fri Apr 17 11:22:56 2020 removing 6407085 relations and 5407085 ideals in 1000000 cliques Fri Apr 17 11:22:59 2020 commencing inmemory singleton removal Fri Apr 17 11:23:04 2020 begin with 61269724 relations and 60981638 unique ideals Fri Apr 17 11:23:52 2020 reduce to 60870327 relations and 55168052 ideals in 9 passes Fri Apr 17 11:23:52 2020 max relations containing the 
same ideal: 194 Fri Apr 17 11:24:20 2020 removing 4932237 relations and 3932237 ideals in 1000000 cliques Fri Apr 17 11:24:22 2020 commencing inmemory singleton removal Fri Apr 17 11:24:27 2020 begin with 55938090 relations and 55168052 unique ideals Fri Apr 17 11:25:05 2020 reduce to 55656169 relations and 50949518 ideals in 8 passes Fri Apr 17 11:25:05 2020 max relations containing the same ideal: 182 Fri Apr 17 11:25:31 2020 removing 4493365 relations and 3493365 ideals in 1000000 cliques Fri Apr 17 11:25:33 2020 commencing inmemory singleton removal Fri Apr 17 11:25:37 2020 begin with 51162804 relations and 50949518 unique ideals Fri Apr 17 11:26:12 2020 reduce to 50901236 relations and 47190406 ideals in 8 passes Fri Apr 17 11:26:12 2020 max relations containing the same ideal: 175 Fri Apr 17 11:26:36 2020 removing 4277502 relations and 3277502 ideals in 1000000 cliques Fri Apr 17 11:26:37 2020 commencing inmemory singleton removal Fri Apr 17 11:26:42 2020 begin with 46623734 relations and 47190406 unique ideals Fri Apr 17 11:27:17 2020 reduce to 46363498 relations and 43648404 ideals in 9 passes Fri Apr 17 11:27:17 2020 max relations containing the same ideal: 163 Fri Apr 17 11:27:39 2020 removing 4160361 relations and 3160361 ideals in 1000000 cliques Fri Apr 17 11:27:40 2020 commencing inmemory singleton removal Fri Apr 17 11:27:44 2020 begin with 42203137 relations and 43648404 unique ideals Fri Apr 17 11:28:09 2020 reduce to 41929982 relations and 40209959 ideals in 7 passes Fri Apr 17 11:28:09 2020 max relations containing the same ideal: 152 Fri Apr 17 11:28:28 2020 removing 4106362 relations and 3106362 ideals in 1000000 cliques Fri Apr 17 11:28:30 2020 commencing inmemory singleton removal Fri Apr 17 11:28:33 2020 begin with 37823620 relations and 40209959 unique ideals Fri Apr 17 11:28:55 2020 reduce to 37524651 relations and 36798691 ideals in 7 passes Fri Apr 17 11:28:55 2020 max relations containing the same ideal: 144 Fri Apr 17 11:29:13 2020 
removing 1493656 relations and 1208745 ideals in 284911 cliques Fri Apr 17 11:29:14 2020 commencing inmemory singleton removal Fri Apr 17 11:29:17 2020 begin with 36030995 relations and 36798691 unique ideals Fri Apr 17 11:29:38 2020 reduce to 35991753 relations and 35550443 ideals in 7 passes Fri Apr 17 11:29:38 2020 max relations containing the same ideal: 141 Fri Apr 17 11:30:01 2020 relations with 0 large ideals: 1293 Fri Apr 17 11:30:01 2020 relations with 1 large ideals: 2792 Fri Apr 17 11:30:01 2020 relations with 2 large ideals: 37024 Fri Apr 17 11:30:01 2020 relations with 3 large ideals: 297491 Fri Apr 17 11:30:01 2020 relations with 4 large ideals: 1403043 Fri Apr 17 11:30:01 2020 relations with 5 large ideals: 4111486 Fri Apr 17 11:30:01 2020 relations with 6 large ideals: 7823119 Fri Apr 17 11:30:01 2020 relations with 7+ large ideals: 22315505 Fri Apr 17 11:30:01 2020 commencing 2way merge Fri Apr 17 11:30:24 2020 reduce to 22470918 relation sets and 22029608 unique ideals Fri Apr 17 11:30:24 2020 commencing full merge Fri Apr 17 11:38:40 2020 memory use: 2811.5 MB Fri Apr 17 11:38:42 2020 found 10221905 cycles, need 10199808 Fri Apr 17 11:38:45 2020 weight of 10199808 cycles is about 1224250293 (120.03/cycle) Fri Apr 17 11:38:45 2020 distribution of cycle lengths: Fri Apr 17 11:38:45 2020 1 relations: 584468 Fri Apr 17 11:38:45 2020 2 relations: 687298 Fri Apr 17 11:38:45 2020 3 relations: 768063 Fri Apr 17 11:38:45 2020 4 relations: 784213 Fri Apr 17 11:38:45 2020 5 relations: 792232 Fri Apr 17 11:38:45 2020 6 relations: 768076 Fri Apr 17 11:38:45 2020 7 relations: 737052 Fri Apr 17 11:38:45 2020 8 relations: 696318 Fri Apr 17 11:38:45 2020 9 relations: 646480 Fri Apr 17 11:38:45 2020 10+ relations: 3735608 Fri Apr 17 11:38:45 2020 heaviest cycle: 28 relations Fri Apr 17 11:38:47 2020 commencing cycle optimization Fri Apr 17 11:39:06 2020 start with 85373695 relations Fri Apr 17 11:42:01 2020 pruned 3541155 relations Fri Apr 17 11:42:01 2020 memory 
use: 2367.5 MB Fri Apr 17 11:42:01 2020 distribution of cycle lengths: Fri Apr 17 11:42:01 2020 1 relations: 584468 Fri Apr 17 11:42:01 2020 2 relations: 705356 Fri Apr 17 11:42:01 2020 3 relations: 800146 Fri Apr 17 11:42:01 2020 4 relations: 815349 Fri Apr 17 11:42:01 2020 5 relations: 828312 Fri Apr 17 11:42:01 2020 6 relations: 800399 Fri Apr 17 11:42:01 2020 7 relations: 769761 Fri Apr 17 11:42:01 2020 8 relations: 722256 Fri Apr 17 11:42:01 2020 9 relations: 668701 Fri Apr 17 11:42:01 2020 10+ relations: 3505060 Fri Apr 17 11:42:01 2020 heaviest cycle: 28 relations Fri Apr 17 11:42:21 2020 RelProcTime: 5902 Fri Apr 17 11:42:25 2020 Fri Apr 17 11:42:25 2020 commencing linear algebra Fri Apr 17 11:42:26 2020 read 10199808 cycles Fri Apr 17 11:42:46 2020 cycles contain 35733787 unique relations Fri Apr 17 11:49:55 2020 read 35733787 relations Fri Apr 17 11:50:57 2020 using 20 quadratic characters above 4294917295 Fri Apr 17 11:53:31 2020 building initial matrix Fri Apr 17 12:00:17 2020 memory use: 4960.8 MB Fri Apr 17 12:00:22 2020 read 10199808 cycles Fri Apr 17 12:00:25 2020 matrix is 10199631 x 10199808 (4836.1 MB) with weight 1480894507 (145.19/col) Fri Apr 17 12:00:25 2020 sparse part has weight 1145354678 (112.29/col) Fri Apr 17 12:02:39 2020 filtering completed in 2 passes Fri Apr 17 12:02:42 2020 matrix is 10199506 x 10199683 (4836.1 MB) with weight 1480888728 (145.19/col) Fri Apr 17 12:02:42 2020 sparse part has weight 1145353286 (112.29/col) Fri Apr 17 12:03:55 2020 matrix starts at (0, 0) Fri Apr 17 12:03:57 2020 matrix is 10199506 x 10199683 (4836.1 MB) with weight 1480888728 (145.19/col) Fri Apr 17 12:03:57 2020 sparse part has weight 1145353286 (112.29/col) Fri Apr 17 12:03:57 2020 saving the first 48 matrix rows for later Fri Apr 17 12:03:59 2020 matrix includes 64 packed rows Fri Apr 17 12:04:01 2020 matrix is 10199458 x 10199683 (4728.6 MB) with weight 1264348240 (123.96/col) Fri Apr 17 12:04:01 2020 sparse part has weight 1137575108 
(111.53/col) Fri Apr 17 12:04:01 2020 using block size 8192 and superblock size 786432 for processor cache size 8192 kB Fri Apr 17 12:04:51 2020 commencing Lanczos iteration (8 threads) Fri Apr 17 12:04:51 2020 memory use: 3960.1 MB Fri Apr 17 12:05:37 2020 linear algebra at 0.0%, ETA 82h 9m Fri Apr 17 12:05:52 2020 checkpointing every 130000 dimensions [/code]About 30 hours saved in LA, with less than 24 extra in sieving. But, as was pointed out elsewhere, the LA could have been running on a sole machine, with the sieving running for the next project. I'll have to think about this for a bit. It just might still be better, at least at this size, to go for the first viable matrix and move the rest of the farm to the next composite. Then again, I'm not nearly that organized. 
Two data points now show that these params lead to ~12M matrices even without tons of extra sieving; at that size, I agree that adding more sieving is a waste of power.
Looks like 260M relations is enough to get to a matrix, and Ed's job has one of the highest duplicate rates I've ever seen - almost 45%! Pending further testing, let's chalk that up to an unlucky poly, and figure his 270M isn't usually necessary. Charybdis' other filtering runs show us that 5M relations of extra sieving takes half a million dimensions off the matrix; that's a tradeoff I happily take. I suppose I can put a note in the params file that jobs run on a single machine should use 250M rels_wanted. Now that we have that setting fixed, we can tweak lambdas and ncurves etc. to try to slice off another 10-20% of sieve time. Ed's log snippet shows Q=93M at the end of sieving; what we don't have is an obvious indication that I=15 is faster or slower than A=28. That's not so bad, as it means they're likely fairly close in speed. Loosening lambda settings should help on A=28, since increasing yield will reduce the Q-range and thus reduce the slow-sieving Q's. I'll wait for data from Charybdis' second run (32/32LP, I think?) before I go mucking with lambdas. Perhaps this weekend I'll try some test-sieving, even. 
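As a back-of-the-envelope sketch of the tradeoff described above: the ~45% duplicate rate and the "5M extra relations removes half a million dimensions" ratio are taken from the posts in this thread, while the helper names are mine.

```python
def unique_relations(raw: int, dup_rate: float) -> int:
    """Estimate unique relations surviving duplicate removal."""
    return round(raw * (1.0 - dup_rate))

def extra_rels_for_dims(dims_to_shave: int,
                        rels_per_half_million: int = 5_000_000) -> int:
    """Rule of thumb from the filtering runs discussed above:
    ~5M extra relations takes ~0.5M dimensions off the matrix."""
    return dims_to_shave * rels_per_half_million // 500_000

# Ed's job: 270M raw relations at the unusually high ~45% duplicate rate
print(unique_relations(270_000_000, 0.45))
# shaving 1M dimensions costs roughly this many extra raw relations
print(extra_rels_for_dims(1_000_000))
```

This makes it easy to see why the duplicate rate matters so much: at 45% duplicates, 270M raw relations yields fewer uniques than 250M raw at a more typical rate would.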
I told CADO-NFS to go ahead and start LA. It currently shows krylov finishing on the 21st:
[code]
Info:Linear Algebra: krylov: N=51000 ; ETA (N=288000): Tue Apr 21 13:15:12 2020 [1.134 s/iter]
[/code]
vs. msieve LA completion around 1:00 a.m. on Apr 21:
[code]
linear algebra completed 2752960 of 10199683 dimensions (27.0%, ETA 61h11m)
[/code]
Of course, the rest of the CADO-NFS steps will take quite a bit longer, while the msieve square root step will take a few extra minutes. I tried to harvest some additional data from the CADO-NFS log, but gedit isn't being cooperative. I did extract a bunch of time data:
[code]
PID2635 2020-04-07 12:01:27,455 Info:Generate Factor Base: Total cpu/real time for makefb: 158.69/31.8918
PID2635 2020-04-07 12:12:09,925 Info:Generate Free Relations: Total cpu/real time for freerel: 4950.49/642.391
PID29604 2020-04-08 22:52:05,674 Info:Generate Factor Base: Total cpu/real time for makefb: 158.69/31.8918
PID29604 2020-04-08 22:52:05,676 Info:Generate Free Relations: Total cpu/real time for freerel: 4950.49/642.391
PID10019 2020-04-09 22:48:52,134 Info:Generate Factor Base: Total cpu/real time for makefb: 158.69/31.8918
PID10019 2020-04-09 22:48:52,137 Info:Generate Free Relations: Total cpu/real time for freerel: 4950.49/642.391
PID12298 2020-04-12 22:30:38,662 Info:Generate Factor Base: Total cpu/real time for makefb: 158.69/31.8918
PID12298 2020-04-12 22:30:38,664 Info:Generate Free Relations: Total cpu/real time for freerel: 4950.49/642.391
PID22249 2020-04-13 19:59:08,517 Info:Generate Factor Base: Total cpu/real time for makefb: 158.69/31.8918
PID22249 2020-04-13 19:59:08,518 Info:Generate Free Relations: Total cpu/real time for freerel: 4950.49/642.391
PID23248 2020-04-13 22:47:27,140 Info:Generate Factor Base: Total cpu/real time for makefb: 158.69/31.8918
PID23248 2020-04-13 22:47:27,150 Info:Generate Free Relations: Total cpu/real time for freerel: 4950.49/642.391
PID15335 2020-04-15 22:45:00,195 Info:Generate Factor Base: Total cpu/real time for makefb: 158.69/31.8918
PID15335 2020-04-15 22:45:00,196 Info:Generate Free Relations: Total cpu/real time for freerel: 4950.49/642.391
PID21322 2020-04-16 12:08:52,799 Info:Generate Factor Base: Total cpu/real time for makefb: 158.69/31.8918
PID21322 2020-04-16 12:08:52,800 Info:Generate Free Relations: Total cpu/real time for freerel: 4950.49/642.391
PID25635 2020-04-16 22:52:09,492 Info:Generate Factor Base: Total cpu/real time for makefb: 158.69/31.8918
PID25635 2020-04-16 22:52:09,494 Info:Generate Free Relations: Total cpu/real time for freerel: 4950.49/642.391
PID31904 2020-04-17 12:46:17,245 Info:Generate Factor Base: Total cpu/real time for makefb: 158.69/31.8918
PID31904 2020-04-17 12:46:17,252 Info:Generate Free Relations: Total cpu/real time for freerel: 4950.49/642.391
PID31904 2020-04-17 14:42:34,649 Info:Filtering - Duplicate Removal, splitting pass: Total cpu/real time for dup1: 2112.98/6974.47
PID31904 2020-04-17 15:07:33,352 Info:Filtering - Duplicate Removal, removal pass: Total cpu/real time for dup2: 5388.95/1496.74
PID31904 2020-04-17 15:18:41,063 Info:Filtering - Singleton removal: Total cpu/real time for purge: 2189.08/664.827
PID31904 2020-04-17 16:45:35,564 Info:Filtering - Merging: Total cpu/real time for merge: 5613.2/4885.08
PID31904 2020-04-17 16:45:35,564 Info:Filtering - Merging: Total cpu/real time for replay: 365.12/328.686
[/code]
Are there some other specifics you'd like me to find? 
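Harvesting those timing lines doesn't need an editor at all; a short script like this (a sketch - `harvest_times` and the sample text are mine, matching the "Total cpu/real time" format in the log above) pulls them out:

```python
import re

# Matches the CADO-NFS summary lines, e.g.:
#   ... Info:Filtering - Merging: Total cpu/real time for merge: 5613.2/4885.08
PAT = re.compile(r"Total cpu/real time for (\w+): ([\d.]+)/([\d.]+)")

def harvest_times(log_text: str) -> dict:
    """Return {task: (cpu_seconds, real_seconds)}, keeping the last entry per task."""
    times = {}
    for m in PAT.finditer(log_text):
        times[m.group(1)] = (float(m.group(2)), float(m.group(3)))
    return times

sample = ("PID31904 Info:Filtering - Merging: Total cpu/real time for merge: 5613.2/4885.08\n"
          "PID31904 Info:Filtering - Merging: Total cpu/real time for replay: 365.12/328.686")
print(harvest_times(sample))
```

Because the regex only keys on the "Total cpu/real time for" phrase, it works equally well on the flattened one-line log dumps shown in this thread.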
[QUOTE=VBCurtis;542975]Two data points now show that these params lead to ~12M matrices even without tons of extra sieving; at that size, I agree that adding more sieving is a waste of power.
Looks like 260M relations is enough to get to a matrix, and Ed's job has one of the highest duplicate rates I've ever seen - almost 45%! Pending further testing, let's chalk that up to an unlucky poly, and figure his 270M isn't usually necessary. Charybdis' other filtering runs show us that 5M relations of extra sieving takes half a million dimensions off the matrix; that's a tradeoff I happily take. I suppose I can put a note in the params file that jobs run on a single machine should use 250M rels_wanted. Now that we have that setting fixed, we can tweak lambdas and ncurves etc. to try to slice off another 10-20% of sieve time. Ed's log snippet shows Q=93M at the end of sieving; what we don't have is an obvious indication that I=15 is faster or slower than A=28. That's not so bad, as it means they're likely fairly close in speed. Loosening lambda settings should help on A=28, since increasing yield will reduce the Q-range and thus reduce the slow-sieving Q's. I'll wait for data from Charybdis' second run (32/32LP, I think?) before I go mucking with lambdas. Perhaps this weekend I'll try some test-sieving, even.[/QUOTE] I suppose I'm nearly ready to tackle another 17x sometime soon. If you'd like to suggest some changes to "my" params, I'd welcome giving them a shot. 
I am considering breaking some of these posts into a separate thread with a new, more accurate title. Suggestions are welcome for a title, and for whether to keep the new thread in my blog area or move it into factoring. I'm thinking that loading the posts into the CADO-NFS thread would not be good.
Perhaps it's time for a CADO-NFS subforum? 
(edited to remove the CADO-requesting, since you already asked xyzzy)
Sure, I'll tweak a few settings for your next c17x. I'll post the new params here later today. 
[QUOTE=VBCurtis;543076]. . .
Sure, I'll tweak a few settings for your next c17x. I'll post the new params here later today.[/QUOTE] Thanks. I've reserved 5+2,415 (12586...71 <177 dd>) for this next run. 
1 Attachment(s)
Changes to this file versus the last file of the same name:
Added 25% to poly select range (because c177; same range I used for Charybdis' C177).
Reduced both lim's by 10M. You had Q-final of 93M while using I=15, and smaller lim's often yield a smaller matrix. If someone uses A=28 instead, they would choose bigger lim's (as we did for Charybdis). Yield drops a tiny bit when lim's are reduced; the effect on sec/rel is unclear (there's a fastest choice for lim, we just don't know what it is).
Increased both lambdas by 0.015. This will improve yield quite a bit, but will increase the number of relations needed, because the added yield comes from splitting larger cofactors. It's unclear whether that tradeoff improves or hurts sieve time - that's why we test!
Set rels_wanted to 265M. Just a few posts ago I said 260 would be enough, but then I increased lambdas. 
[QUOTE=EdH;543090]Thanks. I've reserved 5+2,415 (12586...71 <177 dd>) for this next run.[/QUOTE]
Might want to check you've got the right number - those digits don't match what I'm [URL="http://factordb.com/index.php?id=1100000000642546194"]seeing[/URL]? Currently at 287M relations on my 32/32LP run; should reach the initial target of 325M tomorrow. If 325M at 32/32 is indeed comparable with 250M at 31/32, then it looks like 32/32 may be very slightly better, but I'll hold off on giving a full opinion on that until I've got a matrix. 
[QUOTE=charybdis;543097]Might want to check you've got the right number - those digits don't match what I'm [URL="http://factordb.com/index.php?id=1100000000642546194"]seeing[/URL]?
Currently at 287M relations on my 32/32LP run, should reach the initial target of 325M tomorrow. If 325M at 32/32 is indeed comparable with 250M at 31/32, then it looks like 32/32 may be very slightly better, but I'll hold off on giving a full opinion on that until I've got a matrix.[/QUOTE] Quite right! Thanks! I've reserved 5+2,415, but the composite is: [code] 572104397924416007907491497280028964417584573249344846496494935688320410664593694835173261723655611747846299032109095649324060656666752385406703309173809568221352517003465647761 [/code]I don't think I'll bother trying to cover the mistake with edits.:smile: 
[QUOTE=VBCurtis;543095]Changes to this file versus the last file of the same name:
Added 25% to poly select range (because c177; same range I used for Charybdis' C177).
Reduced both lim's by 10M. You had Q-final of 93M while using I=15, and smaller lim's often yield a smaller matrix. If someone uses A=28 instead, they would choose bigger lim's (as we did for Charybdis). Yield drops a tiny bit when lim's are reduced; the effect on sec/rel is unclear (there's a fastest choice for lim, we just don't know what it is).
Increased both lambdas by 0.015. This will improve yield quite a bit, but will increase the number of relations needed, because the added yield comes from splitting larger cofactors. It's unclear whether that tradeoff improves or hurts sieve time - that's why we test!
Set rels_wanted to 265M. Just a few posts ago I said 260 would be enough, but then I increased lambdas.[/QUOTE] Thanks! I will keep you posted. What density should I initially try? Or should I just let msieve choose its default? 
I'd try msieve density 100; failing that, 90. I would sieve more on a job this size before I'd run a matrix that didn't build at 90. My personal standard is density 84 above GNFS-140, 90 above GNFS-150, 100 above GNFS-160, ... up to density 120 for big jobs.
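That personal standard can be written down as a small helper. This is a sketch of the rule as stated above; the linear ramp from 100 toward the 120 cap above GNFS-160 is my own interpolation (the post doesn't say how fast to ramp), and 70 is msieve's default density.

```python
def msieve_target_density(gnfs_digits: int) -> int:
    """VBCurtis's rule of thumb: TD 84 above GNFS-140, 90 above 150,
    100 above 160, capped at 120 for big jobs."""
    if gnfs_digits > 160:
        # hypothetical ramp: +1 density per digit above 160, capped at 120
        return min(100 + (gnfs_digits - 160), 120)
    if gnfs_digits > 150:
        return 90
    if gnfs_digits > 140:
        return 84
    return 70  # msieve's default target density
```

Usage: `msieve_target_density(177)` suggests a first try of TD 117 for the c177s discussed in this thread; if the matrix won't build there, sieve more or fall back toward 90.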

[QUOTE=VBCurtis;543119]I'd try msieve density 100; failing that, 90. I would sieve more on a job this size before I'd run a matrix that didn't build at 90. My personal standard is density 84 above GNFS-140, 90 above GNFS-150, 100 above GNFS-160, ... up to density 120 for big jobs.[/QUOTE]
Thanks! I've forgotten (again) what to do about this. I could go find my notes on it, but for now I'm going to leave it and see what it looks like in the morning.
Server:
[code]
PID7663 2020-04-18 21:55:22,267 Info:Polynomial Selection (size optimized): Marking workunit c175_polyselect1_1406160-1407840 as ok (99.8% => ETA Sat Apr 18 21:55:53 2020)
PID7663 2020-04-18 21:55:58,329 Debug:Polynomial Selection (size optimized): Timeout check took 0.000260 s, found 0 WUs
. . .
PID7663 2020-04-18 22:43:02,590 Debug:Polynomial Selection (size optimized): Timeout check took 0.000260 s, found 0 WUs
PID7663 2020-04-18 22:44:02,680 Debug:Polynomial Selection (size optimized): Timeout check took 0.000253 s, found 0 WUs
[/code]
Clients:
[code]
2020-04-18 22:51:29,661 - ERROR:root:Download failed, URL error: HTTP Error 404: No work available
2020-04-18 22:51:29,661 - ERROR:root:Waiting 10.0 seconds before retrying (I have been waiting since 3400.0 seconds)
[/code] 
I guess after a couple of hours it figured it out:
[code]
2020-04-19 00:22:06,693 - ERROR:root:Download failed, URL error: HTTP Error 404: No work available
2020-04-19 00:22:06,693 - ERROR:root:Waiting 10.0 seconds before retrying (I have been waiting since 8830.0 seconds)
2020-04-19 00:22:16,726 - INFO:root:Opened URL http://math79.local:13531/cgi-bin/getwu?clientid=math97.math97 [B]after 8840.0 seconds wait[/B]
2020-04-19 00:22:16,726 - INFO:root:Downloading http://math79.local:13531/c175.polyselect2.raw_60 to download/c175.polyselect2.raw_60 (cafile = None)
2020-04-19 00:22:16,735 - INFO:root:Result file math97.math97.work/c175.polyselect2.opt_60 does not exist
2020-04-19 00:22:16,736 - INFO:root:Overriding argument -t 2 by -t 8 in command line (substitution -t 8)
2020-04-19 00:22:16,736 - INFO:root:Running 'build/math97/polyselect/polyselect_ropt' -t 8 -inputpolys 'download/c175.polyselect2.raw_60' -ropteffort 35.0 -area 268435456000000.0 -Bf 4294967296.0 -Bg 2147483648.0 > 'math97.math97.work/c175.polyselect2.opt_60'
2020-04-19 00:22:16,737 - INFO:root:[Sun Apr 19 00:22:16 2020] Subprocess has PID 5693
2020-04-19 00:31:33,484 - INFO:root:Attaching file math97.math97.work/c175.polyselect2.opt_60 to upload
2020-04-19 00:31:33,484 - INFO:root:Attaching stderr for command 0 to upload
2020-04-19 00:31:33,485 - INFO:root:Sending result for workunit c175_polyselect2_60 to http://math79.local:13531/cgi-bin/upload.py
[/code] 
325M relations weren't quite enough to get a matrix even at TD 90, but a few million more did it:
[code]Sun Apr 19 16:45:41 2020 commencing relation filtering Sun Apr 19 16:45:41 2020 estimated available RAM is 15845.4 MB Sun Apr 19 16:45:41 2020 commencing duplicate removal, pass 1 (errors) Sun Apr 19 17:19:51 2020 found 102448637 hash collisions in 330667512 relations Sun Apr 19 17:20:12 2020 commencing duplicate removal, pass 2 Sun Apr 19 17:26:36 2020 found 139709136 duplicates and 190958376 unique relations Sun Apr 19 17:26:36 2020 memory use: 2387.0 MB Sun Apr 19 17:26:37 2020 reading ideals above 179765248 Sun Apr 19 17:26:37 2020 commencing singleton removal, initial pass Sun Apr 19 17:41:48 2020 memory use: 5512.0 MB Sun Apr 19 17:41:48 2020 reading all ideals from disk Sun Apr 19 17:42:05 2020 memory use: 3105.6 MB Sun Apr 19 17:42:10 2020 commencing inmemory singleton removal Sun Apr 19 17:42:14 2020 begin with 190958376 relations and 190440858 unique ideals Sun Apr 19 17:42:49 2020 reduce to 67714925 relations and 46908602 ideals in 18 passes Sun Apr 19 17:42:49 2020 max relations containing the same ideal: 18 Sun Apr 19 17:42:54 2020 reading ideals above 720000 Sun Apr 19 17:42:54 2020 commencing singleton removal, initial pass Sun Apr 19 17:52:48 2020 memory use: 1506.0 MB Sun Apr 19 17:52:48 2020 reading all ideals from disk Sun Apr 19 17:53:02 2020 memory use: 2712.7 MB Sun Apr 19 17:53:08 2020 keeping 66505989 ideals with weight <= 200, target excess is 362159 Sun Apr 19 17:53:13 2020 commencing inmemory singleton removal Sun Apr 19 17:53:17 2020 begin with 67714927 relations and 66505989 unique ideals Sun Apr 19 17:54:20 2020 reduce to 66898598 relations and 65688193 ideals in 15 passes Sun Apr 19 17:54:20 2020 max relations containing the same ideal: 200 Sun Apr 19 17:54:45 2020 removing 4307227 relations and 3912077 ideals in 395150 cliques Sun Apr 19 17:54:46 2020 commencing inmemory singleton removal Sun Apr 19 17:54:50 2020 begin with 62591371 relations and 65688193 unique ideals Sun Apr 19 17:55:33 2020 reduce to 62380577 relations and 
61563592 ideals in 11 passes Sun Apr 19 17:55:33 2020 max relations containing the same ideal: 196 Sun Apr 19 17:55:57 2020 removing 3199408 relations and 2804258 ideals in 395150 cliques Sun Apr 19 17:55:58 2020 commencing inmemory singleton removal Sun Apr 19 17:56:02 2020 begin with 59181169 relations and 61563592 unique ideals Sun Apr 19 17:56:35 2020 reduce to 59051063 relations and 58628363 ideals in 9 passes Sun Apr 19 17:56:35 2020 max relations containing the same ideal: 191 Sun Apr 19 17:57:05 2020 relations with 0 large ideals: 1246 Sun Apr 19 17:57:05 2020 relations with 1 large ideals: 1656 Sun Apr 19 17:57:05 2020 relations with 2 large ideals: 24589 Sun Apr 19 17:57:05 2020 relations with 3 large ideals: 240980 Sun Apr 19 17:57:05 2020 relations with 4 large ideals: 1367864 Sun Apr 19 17:57:05 2020 relations with 5 large ideals: 4791557 Sun Apr 19 17:57:05 2020 relations with 6 large ideals: 10864030 Sun Apr 19 17:57:05 2020 relations with 7+ large ideals: 41759141 Sun Apr 19 17:57:05 2020 commencing 2way merge Sun Apr 19 17:57:36 2020 reduce to 34679367 relation sets and 34256667 unique ideals Sun Apr 19 17:57:36 2020 commencing full merge Sun Apr 19 18:05:41 2020 memory use: 4120.6 MB Sun Apr 19 18:05:43 2020 found 17244214 cycles, need 17220867 Sun Apr 19 18:05:47 2020 weight of 17220867 cycles is about 1550339153 (90.03/cycle) Sun Apr 19 18:05:47 2020 distribution of cycle lengths: Sun Apr 19 18:05:47 2020 1 relations: 2022334 Sun Apr 19 18:05:47 2020 2 relations: 1926744 Sun Apr 19 18:05:47 2020 3 relations: 1913590 Sun Apr 19 18:05:47 2020 4 relations: 1730002 Sun Apr 19 18:05:47 2020 5 relations: 1548025 Sun Apr 19 18:05:47 2020 6 relations: 1328489 Sun Apr 19 18:05:47 2020 7 relations: 1121823 Sun Apr 19 18:05:47 2020 8 relations: 968705 Sun Apr 19 18:05:47 2020 9 relations: 819466 Sun Apr 19 18:05:47 2020 10+ relations: 3841689 Sun Apr 19 18:05:47 2020 heaviest cycle: 28 relations Sun Apr 19 18:05:50 2020 commencing cycle optimization Sun 
Apr 19 18:06:11 2020 start with 110652123 relations Sun Apr 19 18:08:19 2020 pruned 2790485 relations Sun Apr 19 18:08:20 2020 memory use: 3551.9 MB Sun Apr 19 18:08:20 2020 distribution of cycle lengths: Sun Apr 19 18:08:20 2020 1 relations: 2022334 Sun Apr 19 18:08:20 2020 2 relations: 1972975 Sun Apr 19 18:08:20 2020 3 relations: 1982743 Sun Apr 19 18:08:20 2020 4 relations: 1771497 Sun Apr 19 18:08:20 2020 5 relations: 1583946 Sun Apr 19 18:08:20 2020 6 relations: 1341882 Sun Apr 19 18:08:20 2020 7 relations: 1131207 Sun Apr 19 18:08:20 2020 8 relations: 965861 Sun Apr 19 18:08:20 2020 9 relations: 810926 Sun Apr 19 18:08:20 2020 10+ relations: 3637496 Sun Apr 19 18:08:20 2020 heaviest cycle: 28 relations Sun Apr 19 18:08:46 2020 RelProcTime: 4985 Sun Apr 19 18:08:51 2020 Sun Apr 19 18:08:51 2020 commencing linear algebra Sun Apr 19 18:08:52 2020 read 17220867 cycles Sun Apr 19 18:09:17 2020 cycles contain 58542363 unique relations Sun Apr 19 18:16:26 2020 read 58542363 relations Sun Apr 19 18:17:36 2020 using 20 quadratic characters above 4294917295 Sun Apr 19 18:21:21 2020 building initial matrix Sun Apr 19 18:29:34 2020 memory use: 8134.6 MB Sun Apr 19 18:29:47 2020 read 17220867 cycles Sun Apr 19 18:29:49 2020 matrix is 17220690 x 17220867 (6464.0 MB) with weight 2013780097 (116.94/col) Sun Apr 19 18:29:49 2020 sparse part has weight 1487861006 (86.40/col) Sun Apr 19 18:32:21 2020 filtering completed in 2 passes Sun Apr 19 18:32:24 2020 matrix is 17217913 x 17218089 (6463.8 MB) with weight 2013661326 (116.95/col) Sun Apr 19 18:32:24 2020 sparse part has weight 1487837862 (86.41/col) Sun Apr 19 18:33:43 2020 matrix starts at (0, 0) Sun Apr 19 18:33:46 2020 matrix is 17217913 x 17218089 (6463.8 MB) with weight 2013661326 (116.95/col) Sun Apr 19 18:33:46 2020 sparse part has weight 1487837862 (86.41/col) Sun Apr 19 18:33:46 2020 saving the first 48 matrix rows for later Sun Apr 19 18:33:48 2020 matrix includes 64 packed rows Sun Apr 19 18:33:49 2020 matrix is 
17217865 x 17218089 (6281.6 MB) with weight 1663992024 (96.64/col) Sun Apr 19 18:33:49 2020 sparse part has weight 1474515316 (85.64/col) Sun Apr 19 18:33:49 2020 using block size 8192 and superblock size 884736 for processor cache size 9216 kB Sun Apr 19 18:34:30 2020 commencing Lanczos iteration (6 threads) Sun Apr 19 18:34:30 2020 memory use: 6059.7 MB Sun Apr 19 18:35:10 2020 linear algebra at 0.0%, ETA 123h21m[/code] This will need a fair amount more sieving to get a good matrix I imagine; 17M is a fair bit larger than any of the matrices I got on my 31/32LP job. 
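For comparison with the ~45% duplicate rate on Ed's job, the duplicate rate here can be read straight off the filtering log above; the three counts below are copied from that log.

```python
# Numbers taken from the msieve filtering log above (the 32/32LP run):
#   "found 139709136 duplicates and 190958376 unique relations"
total_relations = 330_667_512
duplicates      = 139_709_136
uniques         = 190_958_376

# duplicates + uniques accounts for every relation read in this run
dup_rate = duplicates / total_relations
print(f"duplicate rate: {dup_rate:.1%}")   # roughly 42%
```

So the 32/32 run's duplicate rate is only a few points below Ed's, which supports the suspicion that large-prime bounds and sieving range, not just bad luck, drive these high rates.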
Better, but still not ideal:
[code]Mon Apr 20 00:18:31 2020 commencing relation filtering Mon Apr 20 00:18:31 2020 setting target matrix density to 100.0 Mon Apr 20 00:18:31 2020 estimated available RAM is 15845.4 MB Mon Apr 20 00:18:31 2020 commencing duplicate removal, pass 1 (errors) Mon Apr 20 00:54:28 2020 found 108030440 hash collisions in 347132929 relations Mon Apr 20 00:54:50 2020 commencing duplicate removal, pass 2 Mon Apr 20 01:01:34 2020 found 147518191 duplicates and 199614738 unique relations Mon Apr 20 01:01:34 2020 memory use: 2387.0 MB Mon Apr 20 01:01:35 2020 reading ideals above 179896320 Mon Apr 20 01:01:35 2020 commencing singleton removal, initial pass Mon Apr 20 01:17:30 2020 memory use: 5512.0 MB Mon Apr 20 01:17:31 2020 reading all ideals from disk Mon Apr 20 01:17:50 2020 memory use: 3246.7 MB Mon Apr 20 01:17:54 2020 commencing inmemory singleton removal Mon Apr 20 01:17:58 2020 begin with 199614738 relations and 194861693 unique ideals Mon Apr 20 01:18:37 2020 reduce to 76740221 relations and 52750298 ideals in 17 passes Mon Apr 20 01:18:37 2020 max relations containing the same ideal: 18 Mon Apr 20 01:18:42 2020 reading ideals above 720000 Mon Apr 20 01:18:42 2020 commencing singleton removal, initial pass Mon Apr 20 01:29:35 2020 memory use: 1506.0 MB Mon Apr 20 01:29:36 2020 reading all ideals from disk Mon Apr 20 01:29:54 2020 memory use: 3078.2 MB Mon Apr 20 01:30:00 2020 keeping 72339295 ideals with weight <= 200, target excess is 412697 Mon Apr 20 01:30:06 2020 commencing inmemory singleton removal Mon Apr 20 01:30:11 2020 begin with 76740226 relations and 72339295 unique ideals Mon Apr 20 01:31:17 2020 reduce to 76235207 relations and 71833769 ideals in 14 passes Mon Apr 20 01:31:17 2020 max relations containing the same ideal: 200 Mon Apr 20 01:31:45 2020 removing 3982395 relations and 3582395 ideals in 400000 cliques Mon Apr 20 01:31:46 2020 commencing inmemory singleton removal Mon Apr 20 01:31:51 2020 begin with 72252812 relations and 71833769 unique 
ideals Mon Apr 20 01:32:27 2020 reduce to 72108808 relations and 68106326 ideals in 8 passes Mon Apr 20 01:32:27 2020 max relations containing the same ideal: 198 Mon Apr 20 01:32:53 2020 removing 2968133 relations and 2568133 ideals in 400000 cliques Mon Apr 20 01:32:54 2020 commencing inmemory singleton removal Mon Apr 20 01:32:59 2020 begin with 69140675 relations and 68106326 unique ideals Mon Apr 20 01:33:33 2020 reduce to 69052705 relations and 65449713 ideals in 8 passes Mon Apr 20 01:33:33 2020 max relations containing the same ideal: 196 Mon Apr 20 01:33:58 2020 removing 2645270 relations and 2245270 ideals in 400000 cliques Mon Apr 20 01:33:59 2020 commencing inmemory singleton removal Mon Apr 20 01:34:03 2020 begin with 66407435 relations and 65449713 unique ideals Mon Apr 20 01:34:36 2020 reduce to 66332671 relations and 63129258 ideals in 8 passes Mon Apr 20 01:34:36 2020 max relations containing the same ideal: 190 Mon Apr 20 01:35:00 2020 removing 2465553 relations and 2065553 ideals in 400000 cliques Mon Apr 20 01:35:01 2020 commencing inmemory singleton removal Mon Apr 20 01:35:05 2020 begin with 63867118 relations and 63129258 unique ideals Mon Apr 20 01:35:36 2020 reduce to 63798876 relations and 60995098 ideals in 8 passes Mon Apr 20 01:35:36 2020 max relations containing the same ideal: 187 Mon Apr 20 01:36:00 2020 removing 2343003 relations and 1943003 ideals in 400000 cliques Mon Apr 20 01:36:01 2020 commencing inmemory singleton removal Mon Apr 20 01:36:05 2020 begin with 61455873 relations and 60995098 unique ideals Mon Apr 20 01:36:31 2020 reduce to 61391450 relations and 58987295 ideals in 7 passes Mon Apr 20 01:36:31 2020 max relations containing the same ideal: 182 Mon Apr 20 01:36:54 2020 removing 2255365 relations and 1855365 ideals in 400000 cliques Mon Apr 20 01:36:54 2020 commencing inmemory singleton removal Mon Apr 20 01:36:58 2020 begin with 59136085 relations and 58987295 unique ideals Mon Apr 20 01:37:24 2020 reduce to 
59074085 relations and 57069592 ideals in 7 passes Mon Apr 20 01:37:24 2020 max relations containing the same ideal: 178 Mon Apr 20 01:37:45 2020 removing 2189740 relations and 1789740 ideals in 400000 cliques Mon Apr 20 01:37:46 2020 commencing inmemory singleton removal Mon Apr 20 01:37:50 2020 begin with 56884345 relations and 57069592 unique ideals Mon Apr 20 01:38:14 2020 reduce to 56823284 relations and 55218433 ideals in 7 passes Mon Apr 20 01:38:14 2020 max relations containing the same ideal: 172 Mon Apr 20 01:38:35 2020 removing 2141324 relations and 1741324 ideals in 400000 cliques Mon Apr 20 01:38:36 2020 commencing inmemory singleton removal Mon Apr 20 01:38:39 2020 begin with 54681960 relations and 55218433 unique ideals Mon Apr 20 01:39:09 2020 reduce to 54621273 relations and 53416061 ideals in 9 passes Mon Apr 20 01:39:09 2020 max relations containing the same ideal: 168 Mon Apr 20 01:39:29 2020 removing 2101359 relations and 1701359 ideals in 400000 cliques Mon Apr 20 01:39:30 2020 commencing inmemory singleton removal Mon Apr 20 01:39:33 2020 begin with 52519914 relations and 53416061 unique ideals Mon Apr 20 01:39:56 2020 reduce to 52458873 relations and 51653274 ideals in 7 passes Mon Apr 20 01:39:56 2020 max relations containing the same ideal: 166 Mon Apr 20 01:40:15 2020 removing 1744320 relations and 1417450 ideals in 326870 cliques Mon Apr 20 01:40:16 2020 commencing inmemory singleton removal Mon Apr 20 01:40:19 2020 begin with 50714553 relations and 51653274 unique ideals Mon Apr 20 01:40:40 2020 reduce to 50670954 relations and 50191970 ideals in 7 passes Mon Apr 20 01:40:40 2020 max relations containing the same ideal: 164 Mon Apr 20 01:41:06 2020 relations with 0 large ideals: 1404 Mon Apr 20 01:41:06 2020 relations with 1 large ideals: 2464 Mon Apr 20 01:41:06 2020 relations with 2 large ideals: 35035 Mon Apr 20 01:41:06 2020 relations with 3 large ideals: 313598 Mon Apr 20 01:41:06 2020 relations with 4 large ideals: 1606387 Mon Apr 
20 01:41:06 2020 relations with 5 large ideals: 5084360 Mon Apr 20 01:41:06 2020 relations with 6 large ideals: 10406242 Mon Apr 20 01:41:06 2020 relations with 7+ large ideals: 33221464 Mon Apr 20 01:41:06 2020 commencing 2way merge Mon Apr 20 01:41:32 2020 reduce to 30988355 relation sets and 30509371 unique ideals Mon Apr 20 01:41:32 2020 commencing full merge Mon Apr 20 01:48:59 2020 memory use: 3777.1 MB Mon Apr 20 01:49:02 2020 found 14975257 cycles, need 14925571 Mon Apr 20 01:49:05 2020 weight of 14925571 cycles is about 1492665462 (100.01/cycle) Mon Apr 20 01:49:05 2020 distribution of cycle lengths: Mon Apr 20 01:49:05 2020 1 relations: 1267534 Mon Apr 20 01:49:05 2020 2 relations: 1375762 Mon Apr 20 01:49:05 2020 3 relations: 1443044 Mon Apr 20 01:49:05 2020 4 relations: 1366428 Mon Apr 20 01:49:05 2020 5 relations: 1300232 Mon Apr 20 01:49:05 2020 6 relations: 1181723 Mon Apr 20 01:49:05 2020 7 relations: 1079223 Mon Apr 20 01:49:05 2020 8 relations: 957610 Mon Apr 20 01:49:05 2020 9 relations: 853021 Mon Apr 20 01:49:05 2020 10+ relations: 4100994 Mon Apr 20 01:49:05 2020 heaviest cycle: 27 relations Mon Apr 20 01:49:08 2020 commencing cycle optimization Mon Apr 20 01:49:26 2020 start with 105680214 relations Mon Apr 20 01:51:40 2020 pruned 3293755 relations Mon Apr 20 01:51:41 2020 memory use: 3182.7 MB Mon Apr 20 01:51:41 2020 distribution of cycle lengths: Mon Apr 20 01:51:41 2020 1 relations: 1267534 Mon Apr 20 01:51:41 2020 2 relations: 1409368 Mon Apr 20 01:51:41 2020 3 relations: 1498136 Mon Apr 20 01:51:41 2020 4 relations: 1408492 Mon Apr 20 01:51:41 2020 5 relations: 1342845 Mon Apr 20 01:51:41 2020 6 relations: 1211287 Mon Apr 20 01:51:41 2020 7 relations: 1104355 Mon Apr 20 01:51:41 2020 8 relations: 973387 Mon Apr 20 01:51:41 2020 9 relations: 861883 Mon Apr 20 01:51:41 2020 10+ relations: 3848284 Mon Apr 20 01:51:41 2020 heaviest cycle: 27 relations Mon Apr 20 01:52:04 2020 RelProcTime: 5613 Mon Apr 20 01:52:09 2020 Mon Apr 20 01:52:09 
2020 commencing linear algebra Mon Apr 20 01:52:10 2020 read 14925571 cycles Mon Apr 20 01:52:34 2020 cycles contain 50343966 unique relations Mon Apr 20 01:59:36 2020 read 50343966 relations Mon Apr 20 02:00:40 2020 using 20 quadratic characters above 4294917295 Mon Apr 20 02:03:53 2020 building initial matrix Mon Apr 20 02:11:39 2020 memory use: 7107.1 MB Mon Apr 20 02:11:48 2020 read 14925571 cycles Mon Apr 20 02:11:50 2020 matrix is 14925394 x 14925571 (6098.2 MB) with weight 1895598568 (127.00/col) Mon Apr 20 02:11:50 2020 sparse part has weight 1419511207 (95.11/col) Mon Apr 20 02:14:13 2020 filtering completed in 2 passes Mon Apr 20 02:14:15 2020 matrix is 14924343 x 14924519 (6098.2 MB) with weight 1895554864 (127.01/col) Mon Apr 20 02:14:15 2020 sparse part has weight 1419502766 (95.11/col) Mon Apr 20 02:15:30 2020 matrix starts at (0, 0) Mon Apr 20 02:15:32 2020 matrix is 14924343 x 14924519 (6098.2 MB) with weight 1895554864 (127.01/col) Mon Apr 20 02:15:32 2020 sparse part has weight 1419502766 (95.11/col) Mon Apr 20 02:15:32 2020 saving the first 48 matrix rows for later Mon Apr 20 02:15:33 2020 matrix includes 64 packed rows Mon Apr 20 02:15:35 2020 matrix is 14924295 x 14924519 (5935.9 MB) with weight 1585538856 (106.24/col) Mon Apr 20 02:15:35 2020 sparse part has weight 1406827930 (94.26/col) Mon Apr 20 02:15:35 2020 using block size 8192 and superblock size 884736 for processor cache size 9216 kB Mon Apr 20 02:16:13 2020 commencing Lanczos iteration (6 threads) Mon Apr 20 02:16:13 2020 memory use: 5668.7 MB Mon Apr 20 02:16:47 2020 linear algebra at 0.0%, ETA 89h59m[/code] My impression is that 31/32 and 32/32 are pretty similar in terms of the sieving time needed to build a matrix, but 32/32 results in larger matrices and so 31/32 should be preferred. Edit: Curtis  I'll start my next job after I've sieved the current c177 a bit more overnight. 
The natural target would be one of the Homogeneous Cunningham c178s (Ed took the last c177), which I'd like to do with I=15 so that we can get some sort of comparison between A=28 and I=15 on the same hardware. Are there any changes I should make to the parameter file you just gave Ed - maybe slightly higher lims? 
If we're trying to build new params files for c175 and c180 to send to the CADO group, we would set poly select params optimal for c175-6 on the .c175 file, and optimal for c180-181 for the .c180 file.
So, let's do a bit of that: change admax from 15e5 to 17e5, and add 20% to P (2400000 rather than 2000000). I think for a c180 file I'd go with admax 2e6 and P=2.5M, but you're not going to run a full c180. Similarly, to go from c175 to c180 I'd usually add 40% to both lims; but to go up one digit that's not called for. Let's use 100M and 140M. For I=15, 31/32 is still the right LP size; 32/32 was possibly faster for A=28, but not for I=15. We simply don't need the boost in yield, and you saw the matrices are bigger without a big advantage in sieve time. Let's loosen lambda0 a bit more, to 1.88. Bigger numbers need more relations, but we're seeing the number of unique relations jump all over the place; it's the number of uniques that matters, not the raw-relations count. So, I'd say 270M is enough, but I think we're really targeting some number of unique rels just a bit higher than you had on your first job. Note that your 32/32 job also had a lot of duplicate relations. That duplicate ratio is hard to predict; really we'd like to set a rels_wanted_unique, and let duplicate removal run in parallel with sieving. Sigh, maybe in CADO 4.0. If you had sieved your 32/32 job until you had 30% more unique rels than you had on the 31/32 job, I suspect the matrix would come out only barely bigger. 
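To keep the concrete numbers in this post straight, here's a tiny sketch that just collects the quoted values and checks the percentage jumps; the dict labels are my own, not CADO-NFS parameter names, and the values are the suggestions from this discussion rather than official defaults:

```python
# Poly-select values quoted in the post above (suggestions, not CADO defaults).
suggested = {
    "c175": {"admax": 15e5, "P": 2_000_000},
    "c176": {"admax": 17e5, "P": 2_400_000},  # admax +~13%, P +20% for this job
    "c180": {"admax": 20e5, "P": 2_500_000},
}

def pct_change(a, b):
    """Percentage change from a to b, rounded to the nearest integer."""
    return round(100 * (b / a - 1))

p_jump = pct_change(suggested["c175"]["P"], suggested["c176"]["P"])  # 20
```

Nothing deep here; it's just a convenient place to see the deltas side by side.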
[QUOTE=charybdis;543219]. . .
Edit: Curtis - I'll start my next job after I've sieved the current c177 a bit more overnight. The natural target would be one of the Homogeneous Cunningham c178s (Ed took the last c177), which I'd like to do with I=15 so that we can get some sort of comparison between A=28 and I=15 on the same hardware. Are there any changes I should make to the parameter file you just gave Ed - maybe slightly higher lims?[/QUOTE]If you would like the 177 for experimentation and better comparison, it would be OK with me to let you have it. I can grab a 178 next. I can easily unreserve it on the HCN page. You can go ahead and start it and let me know later so I can stop my farming and unreserve it. 
[QUOTE=EdH;543224]If you would like the 177 for experimentation and better comparison, it would be OK with me to let you have it. I can grab a 178 next. I can easily unreserve it on the HCN page. You can go ahead and start it and let me know later so I can stop my farming and unreserve it.[/QUOTE]
Thank you! I wouldn't want you to waste work but if you're happy to give it up then I'll start 5+2_415 with the same parameters that you were using. 
[QUOTE=charybdis;543239]Thank you! I wouldn't want you to waste work but if you're happy to give it up then I'll start 5+2_415 with the same parameters that you were using.[/QUOTE]
No problem at all! I've unreserved it. I will play elsewhere in a bit. 
[QUOTE=EdH;543242]No problem at all! I've unreserved it. I will play elsewhere in a bit.[/QUOTE]
Thanks again!

[QUOTE=VBCurtis;543223]If you had sieved your 32/32 job until you had 30% more unique rels than you had on the 31/32 job, I suspect the matrix would come out only barely bigger.[/QUOTE]
Let's have a look:
[code]Mon Apr 20 13:10:09 2020 commencing relation filtering
Mon Apr 20 13:10:09 2020 setting target matrix density to 120.0
...
Mon Apr 20 13:48:51 2020 found 115949063 hash collisions in 374461390 relations
Mon Apr 20 13:49:13 2020 commencing duplicate removal, pass 2
Mon Apr 20 13:56:33 2020 found 158222008 duplicates and 216239382 unique relations
...
Mon Apr 20 15:10:51 2020 matrix is 12730379 x 12730604 (5886.1 MB) with weight 1588223894 (124.76/col)
Mon Apr 20 15:10:51 2020 sparse part has weight 1415690179 (111.20/col)
Mon Apr 20 15:10:51 2020 using block size 8192 and superblock size 884736 for processor cache size 9216 kB
Mon Apr 20 15:11:28 2020 commencing Lanczos iteration (6 threads)
Mon Apr 20 15:11:28 2020 memory use: 5541.4 MB
Mon Apr 20 15:11:59 2020 linear algebra at 0.0%, ETA 70h20m[/code]
The 31/32 job had 156M unique, so this is 38% more unique relations than that one had. The matrix has almost identical dimensions to the 31/32 job, but I ran this one with a higher TD, so I'd need to sieve even more to get a truly comparable matrix. 
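As a sanity check on the log above, the duplicate/unique bookkeeping can be redone in a few lines (the 156M baseline is the 31/32 job's unique count mentioned in the text):

```python
# Figures from the msieve filtering log of the 32/32 job.
raw = 374_461_390      # relations read by filtering
dups = 158_222_008     # duplicates found in pass 2
unique = 216_239_382   # unique relations kept
assert dups + unique == raw  # the log's numbers are self-consistent

dup_ratio = dups / raw              # ~0.42: over 40% of the raw relations were dups
extra = unique / 156_000_000 - 1    # ~0.39: ~38-39% more uniques than the 31/32 job
```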
[QUOTE=charybdis;543244]
The 31/32 job had 156M unique so this is 38% more unique relations than that one had. The matrix has almost identical dimensions to the 31/32 job, but I ran this one with a higher TD, so I'd need to sieve even more to get a truly comparable matrix.[/QUOTE]
I appreciate this so much! Generating hard data, especially when it disproves my assumptions, helps a great deal. While I don't think going down to 31/31 for the c175 file makes sense, your data tells me I should only very slowly increase LP sizes as we increase the size of the input. Perhaps still 31/32 on I=15 for c180, then 32/32 I=15 for C185 and C190. Fivemack has tested 32 vs 33 (without testing the 32/33 hybrid) at C193 and found 33 marginally faster for ggnfs, but we were ignoring matrix size; so I think I'd go with 32/33 for c195 and test I=15 against A=30.

Somewhere in the 180-190 range, 3 large primes on one side ought to become faster. I think we should test that starting soon; whoever does a C178-180 next, please post your poly so I can test-sieve a variety of 3LP scenarios. 
[QUOTE=VBCurtis;543245]. . .
Somewhere in the 180-190 range, 3 large primes on one side ought to become faster. I think we should test that starting soon; whoever does a C178-180 next, please post your poly so I can test-sieve a variety of 3LP scenarios.[/QUOTE]
Here's the poly CADO-NFS came up with for 12-7,945 (178 dd):
[code]
n: 1258632688840167527990479924759660967727113832014282715541202945214604444063933854224497640052542892497254083608311352572748398102592769128719761767229021194362137011186342088871
skew: 21118322.345
c0: 2400190655725017956973353574287095281260800
c1: 211014402684521569088239119813217640
c2: -39939568153496532334089408958
c3: -1552818850110399495593
c4: 58933721795178
c5: 294840
Y0: -21186307570697279371433463611848173
Y1: 8845196529223700328463
# MurphyE (Bf=4.295e+09,Bg=2.147e+09,area=2.684e+14) = 1.310e-07
# f(x) = 294840*x^5+58933721795178*x^4-1552818850110399495593*x^3-39939568153496532334089408958*x^2+211014402684521569088239119813217640*x+2400190655725017956973353574287095281260800
# g(x) = 8845196529223700328463*x-21186307570697279371433463611848173
[/code]========================

On a totally separate note, I appear to have discovered my polyselect stalls. It would seem that one of my clients is failing to return a WU, and it is not replaced until the timeout is reached, at which point the new assignment of the WU has to start from scratch. Unfortunately, I was using a 12-hour timeout, because some of my ancient machines are yet too young to stay up through the night and I was getting an overrun of maxtimedout for multi-day jobs. I must think out my strategy to cope with this. . . 
I have a little bit of test-sieving done on the c178 poly Ed provided a few posts ago.
I am testing a variety of mfb1 settings, leaving mfb0 untouched (at 31 / lambda 1.88). The tests are run 12-threaded, with Q starting at 30M (somewhere around the middle third of relation gathering, which seemed representative enough). I let CADO run an hour or so, then record the number of relations found, the Q-range done, and the ETA. Workunits are in blocks of 1000; I thought that was small enough to reduce granularity, but it seems maybe not. When I change settings, I restart CADO and let it run 30-40 min to flush out the workunits with the old settings and shuffle up the end times of workers enough to reduce the effect of granular workunits.

Original: mfb1 60, lambda1 1.865. 112500 relations/hr, ETA 81 days 20 hr (for the default 237M relations).
Run 2: mfb1 60, no lambda set. 134200 relations/hr, ETA 78 days 15 hr.
Run 3: mfb1 64, no lambda. 167000 relations/hr, ETA 70 days 14 hr.

Next I'll test a variety of mfb1 3LP settings, from 88 to 94. The catch with the apparent clear boost in speed is that using a really tight lambda setting reduces the relations required quite a lot: a typical 31/32LP job of this size would require 300-325M relations, but our runs require 260-270M. I'm confused by the disparity between relations/hr and ETA; if I go by ETA I'm getting ~15% faster for mfb1=64, but I expect to need 15-18% more relations, so it's a wash or a small loss (though more of the relations would have larger LPs, so one might expect a larger matrix to turn up). If I go by relations/hr, 64 is a clear win. I think the ETA is more accurate, since it accounts for actual workunit length, while I may have simply picked lucky endpoints where a bunch of WUs had just ended.

When sieving with ggnfs, using 3LP produces a dramatic drop-off in sec/rel as Q rises; so once I find the best mfb1 setting for 3LP here, I'll repeat the comparison at Q=80M for mfb 60/64/whatever is best with 3LP. If that indicates a big change in relative timing, I'll do it again at Q=5M. 
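The "wash or small loss" judgment can be made concrete by scaling each ETA by the relations the run would actually need. The ETAs are from the runs above; the 15-18% extra-relations estimate for mfb1=64 is the guess stated in the text, so treat this as a rough check rather than a measurement:

```python
def days(d, h):
    """Convert a 'D days H hr' ETA to fractional days."""
    return d + h / 24

eta_60 = days(81, 20)  # mfb1=60 run
eta_64 = days(70, 14)  # mfb1=64 run, same (too-low) 237M-relation target

# If mfb1=64 really needs ~15-18% more relations, scale its ETA accordingly.
adjusted_lo = eta_64 * 1.15
adjusted_hi = eta_64 * 1.18
# eta_60 falls inside [adjusted_lo, adjusted_hi]: a wash, as the post concludes.
```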
[QUOTE=EdH;543314]Here's the poly CADO-NFS came up with for 12-7,945 (178 dd):
[code]
n: 1258632688840167527990479924759660967727113832014282715541202945214604444063933854224497640052542892497254083608311352572748398102592769128719761767229021194362137011186342088871
skew: 21118322.345
c0: 2400190655725017956973353574287095281260800
c1: 211014402684521569088239119813217640
c2: -39939568153496532334089408958
c3: -1552818850110399495593
c4: 58933721795178
c5: 294840
Y0: -21186307570697279371433463611848173
Y1: 8845196529223700328463
# MurphyE (Bf=4.295e+09,Bg=2.147e+09,area=2.684e+14) = 1.310e-07
# f(x) = 294840*x^5+58933721795178*x^4-1552818850110399495593*x^3-39939568153496532334089408958*x^2+211014402684521569088239119813217640*x+2400190655725017956973353574287095281260800
# g(x) = 8845196529223700328463*x-21186307570697279371433463611848173
[/code]========================
[/QUOTE]
Ed - cownoise scores this poly as 1.5825e-13 with an optimal skew of 28173150.129. This is a new record for a C178. Record-shattering, in fact: the old record score for a C178 was 1.299e-13. Nicely done! 
Whoa, that's 20% better than the previous record!? 15% is a digit, so this is near the C177 record? I suppose I should look that up...
(moments pass) Yep! The C177 record is 1.545e-13. So, the CADO poly select and a nice turn of luck took over a digit off the difficulty of this composite. Neat! I imagine one of charybdis' C177s broke that record, too. 
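The back-of-envelope behind "20% better" and "over a digit", using the rule of thumb from this post that each digit is worth about 15% in score (the formula below just applies that rule logarithmically; it's a heuristic, not an exact conversion):

```python
import math

new_score = 1.5825e-13   # cownoise score of Ed's C178 poly
old_c178 = 1.299e-13     # previous C178 record
old_c177 = 1.545e-13     # C177 record

improvement = new_score / old_c178 - 1                     # ~0.22, i.e. ~20% better
digits = math.log(new_score / old_c178) / math.log(1.15)   # ~1.4 "digits" of score
beats_c177 = new_score > old_c177                          # also above the C177 record
```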
[QUOTE=VBCurtis;543607]
Run 1: mfb1 60, lambda1 1.865. 112500 relations/hr, ETA 81 days 20 hr (for default 237M relations). Yield 3.39
Run 2: mfb1 60, no lambda set. 134200 relations/hr, ETA 78 days 15 hr. Yield 3.53
Run 3: mfb1 64, no lambda. 167000 relations/hr, ETA 70 days 14 hr. Yield 4.46[/QUOTE]
I didn't explain my procedure very clearly: after changing settings in the snapshot file, I let CADO run for 30-40 min without timing anything. Then I set a timer for 45 min or 1 hour and record the # of relations and Q-range processed; I also noted the ETA, since that seems more stable as a forecast of sieve speed. For 2LP runs, I used ncurves1 = 35. For 3LP, I'm using 12; I should test higher also. The ETA gain may be due in part to this drop in ncurves.

Run 4: mfb1 88, no lambda. 171000 rels/hr, ETA 62 days 5 hr. Yield 5.12
Run 5: mfb1 90, no lambda. 174500 rels/hr, ETA 59 days 15 hr. Yield 5.21

I added yield to the first 3 runs in the quote above. I'll try mfb1 = 92 tomorrow. Yield is now so high that A=28 is even more likely to be faster than I=15; I'll test that too once I decide which mfb is likely to minimize ETA.

Mini-conclusion: sieve speed up 25% over the params Ed is running, yield up 50%. However, it's unknown how many additional relations will be needed without the tight lambda and with 3LP. 
[QUOTE=VBCurtis;543616]Whoa, that's 20% better than the previous record!? 15% is a digit, so this is near the C177 record? I suppose I should look that up...
(moments pass) Yep! C177 record is 1.545e-13. So, the CADO poly select and a nice turn of luck took over a digit off the difficulty of this composite. Neat! I imagine one of charybdis' C177s broke that record, too.[/QUOTE]
Nope, seems Ed just got extremely lucky:
[code]
5-4_1085L   177 (1535...) 1.455e-13
12-11_759L  177 (5646...) 1.315e-13
5+2_415     177 (5721...) 1.440e-13
[/code]
In fact, taking into account the difference in poly scores between my first two runs, that may account for the difference in speed I observed between 31/32 and 32/32... It looks like I=15 31/32 is going to end up being a bit faster than A=28 31/32; I'll report more on this later today. 
[QUOTE=charybdis;543629]. . ., seems Ed just got extremely lucky:
. . .[/QUOTE]
What he said! :smile:

However, the question forms: could there have been any possible advantage to my having 30+ totally separate machines searching (maybe close to 200 threads), vs. a few machines with many threads?

@VBCurtis: Are you taking into account duplication ratios? Is that something to even consider?

I will have to see what other polys look like, if I can find/figure out cownoise. . . 
I found and ran the poly for the 168-digit cofactor of 6+5,370 and got the following from cownoise:
[code]
[COLOR=green]4266086.58004 5.23508635e-13[/COLOR]
[/code]Not sure how that compares. . . 
The records are kept in this thread, in the msieve forum:
[url]https://mersenneforum.org/showthread.php?p=539610#post539610[/url] 5.32 is the record for C168, so you were 2% shy. 
[QUOTE=EdH;543632] Could there have been any possible advantage to my having 30+ totally separate machines searching (maybe close to 200 threads), vs. a few machines with many threads?
@VBCurtis: Are you taking into account duplication ratios? Is that something to even consider?[/QUOTE]
Both poly select and sieving are totally deterministic in CADO, so the manner in which the work is completed should have no effect. We are noting dup ratios by way of noting the number of unique relations, which is the only count that matters for filtering. I appreciate charybdis noting his poly scores, which may indeed explain the speed difference! However, if it's a tie, the lower LP should be used to save storage space and potentially produce smaller matrices. Perhaps C180 params should be tested with 32/32, though. Sigh, so many options! 
[QUOTE=VBCurtis;543640]Both poly select and sieving are totally deterministic in CADO, so the manner in which the work is completed should have no effect.
We are noting dup ratios by way of noting the number of unique relations, which is the only count that matters for filtering. I appreciate Charybdis noting his poly scores that may indeed explain the speed difference! However, if it's a tie the lower LP should be used to save storage space and potentially produce smaller matrices. Perhaps C180 params should be tested with 32/32, though. Sigh, so many options![/QUOTE]
The unique relations count was what I was wondering about when I "read" relations. Unfortunately, my compiled logs don't seem to have the actual polynomials, but they do list the Murphy_E scores as computed by CADO-NFS for the chosen polys. What I don't understand is that my score for 5+2,415 is totally different from charybdis':
[code]
5+2_415 177 (5721...) 1.440e-13
[/code]as opposed to:
[code]
Info:Polynomial Selection (root optimized): Finished, best polynomial has Murphy_E = 1.239e-07
[/code]:confused: 
[QUOTE=VBCurtis;543639]The records are kept in this thread, in the msieve forum:
[URL]https://mersenneforum.org/showthread.php?p=539610#post539610[/URL] 5.32 is the record for C168, so you were 2% shy.[/QUOTE]
Thanks! I see I'm now listed! Thanks swellman! I suppose I'll now have to pay more attention to my polynomials. 2% shy - and I thought it was a poor poly. Didn't I have huge duplication for that one? Maybe it was the one before - darn memory, it's only great for some things. 
The poly score evaluation has no way to know how many relations will be dups; so you had a strong score on that one, but it was unlucky in that it found lots of duplicate relations, so it didn't perform as well as the score would indicate.

CADO uses a different score-calculation method, one that they believe better forecasts poly performance than the traditional Murphy E-score. Cownoise finds the traditional score, which uses a fixed test area to determine the score. CADO uses the actual lims, sieve area (I or A value), and large primes to estimate performance, so the CADO score depends on your parameter choices while the traditional Murphy E-score does not. We use the traditional scores to compare for obvious reasons, but within a single factorization with preset params I think CADO is more accurately evaluating which poly will sieve best among those found during poly select. 
[QUOTE=VBCurtis;543619]Run 4: mfb1 88, no lambda. 171000 rels/hr, ETA 62 days 5 hr. Yield 5.12
Run 5: mfb1 90, no lambda. 174500 rels/hr, ETA 59 days 15 hr. Yield 5.21[/QUOTE]
Same as run 5, except A=28 rather than I=15:

Run 6: mfb1 90, A=28. 220800 rels/hr, ETA 47 days 5 hr. Yield 3.45.

So, yield is back to that of the original parameters on I=15, by using 3LP instead; the ETA went from 14 July to 10 June, 12 weeks down to 7! (Not really, since the target relations count is both too low and the same for all settings.) Testing mfb1=92 next, then I'll mess with ncurves. Also, the CADO default params switch which lim is bigger at this size, perhaps because 3LP works well with a smaller lim, so I'll try that also. That requires a new run from scratch, since the factor bases will change. 
C182
By happy chance, a 182 dd composite just fell out of ECM of the [url=https://www.mersenneforum.org/showthread.php?t=23255]kosta project[/url] after Yoyo@Home found a p67. Specifically C182_M19_k94:
[CODE]
26521232090195873108384905824300492852413283081683568418163219479089273132380406501680155963531361683795706304607082425988301635509432877463621844114521741860720947862338201013214619
[/CODE]
You guys may want to consider it if you explore the C180-185 parameters. The SNFS poly looked ok, but a decent GNFS poly should beat it. [/shameless plug]
[CODE]
# 524287 ^ 47 + 1
# MurphyE Score: 1.291e-14
# anorm: 1.169e49, rnorm: 9.577e52. (Since rnorm is larger, sieve with the "r" parameter.)
# SNFS difficulty is 274.539, which is approximately equivalent to GNFS difficulty 183. (Since n has 182 digits, it's recommended to use either SNFS or GNFS.)
# (some extra msieve library info)
size: 4.922e-14, alpha: 1.228, rroots: 0
n: 26521232090195873108384905824300492852413283081683568418163219479089273132380406501680155963531361683795706304607082425988301635509432877463621844114521741860720947862338201013214619
skew: 12.16597
type: snfs
c6: 1
c5: 0
c4: 0
c3: 0
c2: 0
c1: 0
c0: 524287
Y1: 1
Y0: -5708903659119442793759136591282812149479505921
rlambda: 2.6
alambda: 2.6
lpbr: 31
lpba: 31
mfbr: 62
mfba: 62
alim: 134000000
rlim: 134000000
[/CODE] 
As promised, here's some data from my third c177 run, with I=15 and 31/32LP:
[code]Fri Apr 24 23:34:21 2020 commencing relation filtering
Fri Apr 24 23:34:21 2020 setting target matrix density to 110.0
Fri Apr 24 23:34:21 2020 estimated available RAM is 15845.4 MB
Fri Apr 24 23:34:21 2020 commencing duplicate removal, pass 1
Sat Apr 25 00:03:27 2020 found 98515288 hash collisions in 295205985 relations
Sat Apr 25 00:03:49 2020 commencing duplicate removal, pass 2
Sat Apr 25 00:09:33 2020 found 139833153 duplicates and 155372832 unique relations
...
Sat Apr 25 01:08:08 2020 matrix is 11425351 x 11425576 (4878.0 MB) with weight 1295565782 (113.39/col)
Sat Apr 25 01:08:08 2020 sparse part has weight 1164492361 (101.92/col)
Sat Apr 25 01:08:08 2020 using block size 8192 and superblock size 884736 for processor cache size 9216 kB
Sat Apr 25 01:08:39 2020 commencing Lanczos iteration (6 threads)
Sat Apr 25 01:08:39 2020 memory use: 4605.1 MB
Sat Apr 25 01:09:04 2020 linear algebra at 0.0%, ETA 50h16m[/code]
The very high duplication rate appears to be a feature of I=15 sieving with these parameters, given that Ed's first run had ~158M unique from ~294M relations. Nevertheless, I=15 seems to be 10-15% faster than A=28 at this size, judging by the CPU time to collect ~155M unique relations (58.6M vs 67.4M CPU-seconds). The polynomial scores for these two numbers were very similar, as can be seen from my last post.

Edit: Curtis, if I do a c178 next, is there any parameter-testing-over-a-whole-job that you'd like me to do? 
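Redoing the headline figures from this run (all inputs are the numbers quoted in the log and text above):

```python
# I=15 31/32 c177 run, from the filtering log:
raw, dups = 295_205_985, 139_833_153
unique = raw - dups
assert unique == 155_372_832  # matches the log line

dup_rate = dups / raw               # ~0.47 duplicate rate at I=15
i15_cpu, a28_cpu = 58.6e6, 67.4e6   # CPU-seconds to ~155M uniques
speedup = 1 - i15_cpu / a28_cpu     # ~0.13: I=15 roughly 13% faster here
```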
[QUOTE=swellman;543679]You guys may want to consider it if you explore the C180-185 parameters. The SNFS poly looked ok, but a decent GNFS poly should beat it. [/shameless plug]
[CODE]
# 524287 ^ 47 + 1
# MurphyE Score: 1.291e-14
# anorm: 1.169e49, rnorm: 9.577e52. (Since rnorm is larger, sieve with the "r" parameter.)
# SNFS difficulty is 274.539, which is approximately equivalent to GNFS difficulty 183. (Since n has 182 digits, it's recommended to use either SNFS or GNFS.)
[/CODE][/QUOTE]
I moved Sean's post from the CADO-params thread to this one about c180-ish jobs. The record poly score for a C182 is 5 times bigger than this listed SNFS poly score; I imagine GNFS will be faster even accounting for the "deg 6 / SNFS scores don't translate perfectly to deg 5" issue. 
[QUOTE=charybdis;543682]Edit: Curtis, if I do a c178 next is there any parametertestingoverawholejob that you'd like me to do?[/QUOTE]
That's odd - all else equal, a larger siever usually produces fewer duplicate relations than a smaller siever. When I go from I=14 to I=15 on the same-sized input, I reduce rels_wanted by about 8% to account for this.

Let's try the 3LP settings for your C178: MFB1 = 90, ncurves1=13, A=28, rels_wanted=320M. EDIT: Ditch the lambda1 line entirely. I haven't yet tested new lims; I'm using 100/140M for lim0 and lim1.

I'm running mfb1=92 right now, with a lim-swap to 140/100 coming next. If you can wait a couple hours, I'll have a good idea if that should be faster. If I grasp 3LP correctly, sieving will be faster but the matrix will be bigger. Then again, 320M is a wild guess, and we can trade some sieve time for a smaller matrix. 
I tried ncurves1 = 16, ETA went up 3 days.
I tried lim0=140M, lim1=100M, ETA went up 8-10 days. So no changes to the suggestions from last post. 
Just want to make sure I've got this right:
[code]tasks.A = 28
tasks.qmin = 500000
tasks.lim0 = 100000000
tasks.lim1 = 140000000
tasks.lpb0 = 31
tasks.lpb1 = 32
tasks.sieve.lambda0 = 1.88
tasks.sieve.mfb0 = 58
tasks.sieve.mfb1 = 90
tasks.sieve.ncurves0 = 20
tasks.sieve.ncurves1 = 13
tasks.sieve.qrange = 10000
tasks.sieve.rels_wanted = 320000000[/code] 
Precisely. Sorry I was too lazy to do that myself for you.

[QUOTE=VBCurtis;543778]Precisely. Sorry I was too lazy to do that myself for you.[/QUOTE]
No worries at all - if anything I'm the lazy one here. 
Managed to get a matrix with just over 320M relations after sieving Q from 500k to 136M, but it's a big one:
[code]Thu Apr 30 00:23:42 2020 commencing relation filtering
Thu Apr 30 00:23:42 2020 setting target matrix density to 100.0
Thu Apr 30 00:23:42 2020 estimated available RAM is 15845.4 MB
Thu Apr 30 00:23:43 2020 commencing duplicate removal, pass 1
...
Thu Apr 30 00:55:20 2020 found 94773795 hash collisions in 321245930 relations
Thu Apr 30 00:55:41 2020 commencing duplicate removal, pass 2
Thu Apr 30 01:01:49 2020 found 122972626 duplicates and 198273304 unique relations
Thu Apr 30 01:01:49 2020 memory use: 2387.0 MB
Thu Apr 30 01:01:50 2020 reading ideals above 151584768
Thu Apr 30 01:01:50 2020 commencing singleton removal, initial pass
Thu Apr 30 01:17:31 2020 memory use: 5512.0 MB
Thu Apr 30 01:17:32 2020 reading all ideals from disk
Thu Apr 30 01:17:52 2020 memory use: 3537.9 MB
Thu Apr 30 01:17:57 2020 commencing in-memory singleton removal
Thu Apr 30 01:18:03 2020 begin with 198273304 relations and 195834296 unique ideals
Thu Apr 30 01:18:55 2020 reduce to 86504203 relations and 67341595 ideals in 18 passes
Thu Apr 30 01:18:55 2020 max relations containing the same ideal: 23
Thu Apr 30 01:19:01 2020 reading ideals above 720000
Thu Apr 30 01:19:01 2020 commencing singleton removal, initial pass
Thu Apr 30 01:30:14 2020 memory use: 2756.0 MB
Thu Apr 30 01:30:14 2020 reading all ideals from disk
Thu Apr 30 01:30:34 2020 memory use: 3467.0 MB
Thu Apr 30 01:30:41 2020 keeping 83938241 ideals with weight <= 200, target excess is 450665
Thu Apr 30 01:30:48 2020 commencing in-memory singleton removal
Thu Apr 30 01:30:53 2020 begin with 86504208 relations and 83938241 unique ideals
Thu Apr 30 01:32:07 2020 reduce to 86349299 relations and 83783289 ideals in 14 passes
Thu Apr 30 01:32:07 2020 max relations containing the same ideal: 200
Thu Apr 30 01:32:38 2020 removing 4545236 relations and 4145236 ideals in 400000 cliques
Thu Apr 30 01:32:39 2020 commencing in-memory singleton removal
Thu Apr 30 01:32:44 2020 begin with 81804063 relations and 83783289 unique ideals
Thu Apr 30 01:33:34 2020 reduce to 81605208 relations and 79437817 ideals in 10 passes
Thu Apr 30 01:33:34 2020 max relations containing the same ideal: 199
Thu Apr 30 01:34:03 2020 removing 3387725 relations and 2987725 ideals in 400000 cliques
Thu Apr 30 01:34:05 2020 commencing in-memory singleton removal
Thu Apr 30 01:34:10 2020 begin with 78217483 relations and 79437817 unique ideals
Thu Apr 30 01:34:52 2020 reduce to 78096236 relations and 76328142 ideals in 9 passes
Thu Apr 30 01:34:52 2020 max relations containing the same ideal: 194
Thu Apr 30 01:35:21 2020 removing 3022251 relations and 2622251 ideals in 400000 cliques
Thu Apr 30 01:35:22 2020 commencing in-memory singleton removal
Thu Apr 30 01:35:27 2020 begin with 75073985 relations and 76328142 unique ideals
Thu Apr 30 01:36:03 2020 reduce to 74969366 relations and 73600718 ideals in 8 passes
Thu Apr 30 01:36:03 2020 max relations containing the same ideal: 190
Thu Apr 30 01:36:30 2020 removing 2824610 relations and 2424610 ideals in 400000 cliques
Thu Apr 30 01:36:31 2020 commencing in-memory singleton removal
Thu Apr 30 01:36:36 2020 begin with 72144756 relations and 73600718 unique ideals
Thu Apr 30 01:37:15 2020 reduce to 72050844 relations and 71081672 ideals in 9 passes
Thu Apr 30 01:37:15 2020 max relations containing the same ideal: 188
Thu Apr 30 01:37:41 2020 removing 2697707 relations and 2297707 ideals in 400000 cliques
Thu Apr 30 01:37:42 2020 commencing in-memory singleton removal
Thu Apr 30 01:37:47 2020 begin with 69353137 relations and 71081672 unique ideals
Thu Apr 30 01:38:24 2020 reduce to 69262639 relations and 68692948 ideals in 9 passes
Thu Apr 30 01:38:24 2020 max relations containing the same ideal: 180
Thu Apr 30 01:38:49 2020 removing 430114 relations and 383195 ideals in 46919 cliques
Thu Apr 30 01:38:50 2020 commencing in-memory singleton removal
Thu Apr 30 01:38:54 2020 begin with 68832525 relations and 68692948 unique ideals
Thu Apr 30 01:39:23 2020 reduce to 68830278 relations and 68307505 ideals in 7 passes
Thu Apr 30 01:39:23 2020 max relations containing the same ideal: 180
Thu Apr 30 01:39:35 2020 relations with 0 large ideals: 1698
Thu Apr 30 01:39:35 2020 relations with 1 large ideals: 2688
Thu Apr 30 01:39:35 2020 relations with 2 large ideals: 42577
Thu Apr 30 01:39:35 2020 relations with 3 large ideals: 386727
Thu Apr 30 01:39:35 2020 relations with 4 large ideals: 2037316
Thu Apr 30 01:39:35 2020 relations with 5 large ideals: 6689570
Thu Apr 30 01:39:35 2020 relations with 6 large ideals: 14111696
Thu Apr 30 01:39:35 2020 relations with 7+ large ideals: 45558006
Thu Apr 30 01:39:35 2020 commencing 2-way merge
Thu Apr 30 01:40:12 2020 reduce to 41453428 relation sets and 40930655 unique ideals
Thu Apr 30 01:40:12 2020 commencing full merge
Thu Apr 30 01:50:00 2020 memory use: 4714.1 MB
Thu Apr 30 01:50:02 2020 found 19338169 cycles, need 19310855
Thu Apr 30 01:50:07 2020 weight of 19310855 cycles is about 1931089230 (100.00/cycle)
Thu Apr 30 01:50:07 2020 distribution of cycle lengths:
Thu Apr 30 01:50:07 2020 1 relations: 2082996
Thu Apr 30 01:50:07 2020 2 relations: 1886001
Thu Apr 30 01:50:07 2020 3 relations: 1865997
Thu Apr 30 01:50:07 2020 4 relations: 1724216
Thu Apr 30 01:50:07 2020 5 relations: 1575985
Thu Apr 30 01:50:07 2020 6 relations: 1439418
Thu Apr 30 01:50:07 2020 7 relations: 1264045
Thu Apr 30 01:50:07 2020 8 relations: 1107001
Thu Apr 30 01:50:07 2020 9 relations: 986442
Thu Apr 30 01:50:07 2020 10+ relations: 5378754
Thu Apr 30 01:50:07 2020 heaviest cycle: 28 relations
Thu Apr 30 01:50:12 2020 commencing cycle optimization
Thu Apr 30 01:50:35 2020 start with 137988190 relations
Thu Apr 30 01:53:40 2020 pruned 3943658 relations
Thu Apr 30 01:53:41 2020 memory use: 4233.3 MB
Thu Apr 30 01:53:41 2020 distribution of cycle lengths:
Thu Apr 30 01:53:41 2020 1 relations: 2082996
Thu Apr 30 01:53:41 2020 2 relations: 1934591
Thu Apr 30 01:53:41 2020 3 relations: 1936470
Thu Apr 30 01:53:41 2020 4 relations: 1772730
Thu Apr 30 01:53:41 2020 5 relations: 1624454
Thu Apr 30 01:53:41 2020 6 relations: 1464862
Thu Apr 30 01:53:41 2020 7 relations: 1283296
Thu Apr 30 01:53:41 2020 8 relations: 1116240
Thu Apr 30 01:53:41 2020 9 relations: 987568
Thu Apr 30 01:53:41 2020 10+ relations: 5107648
Thu Apr 30 01:53:41 2020 heaviest cycle: 28 relations
Thu Apr 30 01:54:12 2020 RelProcTime: 5430
Thu Apr 30 01:54:18 2020
Thu Apr 30 01:54:18 2020 commencing linear algebra
Thu Apr 30 01:54:19 2020 read 19310855 cycles
Thu Apr 30 01:54:50 2020 cycles contain 68397133 unique relations
Thu Apr 30 02:02:14 2020 read 68397133 relations
Thu Apr 30 02:03:45 2020 using 20 quadratic characters above 4294917295
Thu Apr 30 02:08:07 2020 building initial matrix
Thu Apr 30 02:18:19 2020 memory use: 9234.0 MB
Thu Apr 30 02:18:43 2020 read 19310855 cycles
Thu Apr 30 02:18:45 2020 matrix is 19310678 x 19310855 (7913.5 MB) with weight 2405426155 (124.56/col)
Thu Apr 30 02:18:45 2020 sparse part has weight 1842743275 (95.43/col)
Thu Apr 30 02:21:26 2020 filtering completed in 2 passes
Thu Apr 30 02:21:29 2020 matrix is 19308744 x 19308921 (7913.3 MB) with weight 2405349100 (124.57/col)
Thu Apr 30 02:21:29 2020 sparse part has weight 1842727716 (95.43/col)
Thu Apr 30 02:23:23 2020 matrix starts at (0, 0)
Thu Apr 30 02:23:26 2020 matrix is 19308744 x 19308921 (7913.3 MB) with weight 2405349100 (124.57/col)
Thu Apr 30 02:23:26 2020 sparse part has weight 1842727716 (95.43/col)
Thu Apr 30 02:23:26 2020 saving the first 48 matrix rows for later
Thu Apr 30 02:23:28 2020 matrix includes 64 packed rows
Thu Apr 30 02:23:30 2020 matrix is 19308696 x 19308921 (7658.0 MB) with weight 2009697914 (104.08/col)
Thu Apr 30 02:23:30 2020 sparse part has weight 1814420535 (93.97/col)
Thu Apr 30 02:23:30 2020 using block size 8192 and superblock size 884736 for processor cache size 9216 kB
Thu Apr 30 02:24:23 2020 commencing Lanczos iteration (6 threads)
Thu Apr 30 02:24:23 2020 memory use: 7355.4 MB
Thu Apr 30 02:25:29 2020 linear algebra at 0.0%, ETA 223h 1m[/code]
The ETA has come down to ~168h, but that's still inconveniently long, so I'll keep on sieving.

Sieving (so far) took 52.9M CPU-seconds, compared to 58.6M for the fastest c177 I ran (with I=15 31/32), despite the poly score being 17% worse. So it seems 3LP may be the way to go if you don't care much about the difficulty of the matrix, for example if you're running a job this size on a single machine where the sieving could take months. We'll see whether the comparison is still favourable when I've sieved enough to get a nice matrix; I imagine it might not be. 
I agree that this is a terrific speedup for a single-machine job: 6 * 86400 * 7 is about 3.5M CPU-sec for the matrix, so your complete job would finish in less time than the C177 took to sieve!

It's also unclear if I=15 would be faster; with 2LP on the C177 you found a 10% speedup there, so this could get faster still. That tells me C178 is too late to begin 3LP; a C178 job should not be faster than a C177, especially with those poly scores. So, we should try 3LP on the C175 file. I imagine you're going to sieve up to about the same time as the C177, and end up with a matrix somewhere near 15-16M. That's still a big win, to achieve the same sieve time with a poly 17% worse! I await your data. 
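Spelling out the single-machine accounting in the previous paragraph (6 threads, a roughly 7-day matrix, and the sieve CPU totals quoted earlier in the thread):

```python
threads, sec_per_day, matrix_days = 6, 86_400, 7
matrix_cpu = threads * sec_per_day * matrix_days   # 3,628,800 CPU-sec, ~3.5M

sieve_3lp_c178 = 52.9e6   # CPU-sec sieved so far on the 3LP c178
sieve_2lp_c177 = 58.6e6   # CPU-sec for the fastest 2LP c177

total_c178 = sieve_3lp_c178 + matrix_cpu
# The whole 3LP job (sieve + matrix) beats the c177's sieving alone:
assert total_c178 < sieve_2lp_c177
```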
I have msieve doing LA on the 178-digit cofactor of 127_945L now. Here are the total times I can find for CADO-NFS, which crashed due to insufficient memory during the full merge:
[code] PID28474 2020-04-20 16:27:18,736 Info:Polynomial Selection (size optimized): Total time: 2.65212e+06 PID31307 2020-04-20 17:37:53,424 Info:Polynomial Selection (size optimized): Total time: 2.65212e+06 PID31307 2020-04-20 18:00:56,820 Info:Polynomial Selection (root optimized): Total time: 58448.6 PID8786 2020-04-20 22:45:07,422 Info:Polynomial Selection (size optimized): Total time: 2.65212e+06 PID8786 2020-04-20 22:45:07,425 Info:Polynomial Selection (root optimized): Total time: 58448.6 PID8786 2020-04-28 02:06:51,549 Info:Lattice Sieving: Total time: 2.02059e+07s [/code]Msieve refused to build at t_d=120 and 110, so I tried it at the default and it came up with the following. Of note, every one of the first 1.2M relations showed error 11. Here is the successful run, which is now at about 120 hours to finish: [code] Tue Apr 28 13:07:12 2020 Msieve v. 1.54 (SVN 1018) Tue Apr 28 13:07:12 2020 random seeds: 6cef034c 096382c6 Tue Apr 28 13:07:12 2020 factoring 1258632688840167527990479924759660967727113832014282715541202945214604444063933854224497640052542892497254083608311352572748398102592769128719761767229021194362137011186342088871 (178 digits) Tue Apr 28 13:07:13 2020 searching for 15-digit factors Tue Apr 28 13:07:13 2020 commencing number field sieve (178-digit input) Tue Apr 28 13:07:13 2020 R0: 21186307570697279371433463611848173 Tue Apr 28 13:07:13 2020 R1: 8845196529223700328463 Tue Apr 28 13:07:13 2020 A0: 2400190655725017956973353574287095281260800 Tue Apr 28 13:07:13 2020 A1: 211014402684521569088239119813217640 Tue Apr 28 13:07:13 2020 A2: 39939568153496532334089408958 Tue Apr 28 13:07:13 2020 A3: 1552818850110399495593 Tue Apr 28 13:07:13 2020 A4: 58933721795178 Tue Apr 28 13:07:13 2020 A5: 294840 Tue Apr 28 13:07:13 2020 skew 1.00, size 3.619e-17, alpha -7.825, combined = 5.648e-16 rroots = 5 Tue Apr 28 13:07:13 2020 Tue Apr 28 13:07:13 2020 commencing relation filtering Tue Apr 28 13:07:13 2020 estimated available RAM is 15926.6 MB Tue Apr 28 13:07:13 2020 
commencing duplicate removal, pass 1 Tue Apr 28 13:07:13 2020 error 11 reading relation 0 . . . [ Includes every relation in between ] Tue Apr 28 13:07:35 2020 error 11 reading relation 1216455 Tue Apr 28 13:07:19 2020 error 1 reading relation 345340 Tue Apr 28 13:07:29 2020 error 1 reading relation 907378 . . . Tue Apr 28 13:42:35 2020 error 1 reading relation 266916827 Tue Apr 28 13:43:47 2020 error 1 reading relation 276189407 Tue Apr 28 13:43:50 2020 found 90847552 hash collisions in 275423639 relations Tue Apr 28 13:44:23 2020 added 121850 free relations Tue Apr 28 13:44:23 2020 commencing duplicate removal, pass 2 Tue Apr 28 13:51:04 2020 found 126526927 duplicates and 149018562 unique relations Tue Apr 28 13:51:04 2020 memory use: 1449.5 MB Tue Apr 28 13:51:04 2020 reading ideals above 139919360 Tue Apr 28 13:51:04 2020 commencing singleton removal, initial pass Tue Apr 28 14:07:21 2020 memory use: 3012.0 MB Tue Apr 28 14:07:21 2020 reading all ideals from disk Tue Apr 28 14:07:34 2020 memory use: 2421.4 MB Tue Apr 28 14:07:39 2020 commencing inmemory singleton removal Tue Apr 28 14:07:43 2020 begin with 149018562 relations and 146733511 unique ideals Tue Apr 28 14:08:27 2020 reduce to 55424518 relations and 38967191 ideals in 19 passes Tue Apr 28 14:08:27 2020 max relations containing the same ideal: 20 Tue Apr 28 14:08:29 2020 reading ideals above 720000 Tue Apr 28 14:08:30 2020 commencing singleton removal, initial pass Tue Apr 28 14:19:03 2020 memory use: 1506.0 MB Tue Apr 28 14:19:03 2020 reading all ideals from disk Tue Apr 28 14:19:15 2020 memory use: 2195.2 MB Tue Apr 28 14:19:21 2020 keeping 54432776 ideals with weight <= 200, target excess is 304675 Tue Apr 28 14:19:27 2020 commencing inmemory singleton removal Tue Apr 28 14:19:32 2020 begin with 55424518 relations and 54432776 unique ideals Tue Apr 28 14:20:39 2020 reduce to 54906475 relations and 53914041 ideals in 14 passes Tue Apr 28 14:20:39 2020 max relations containing the same ideal: 200 
Tue Apr 28 14:21:05 2020 removing 3380876 relations and 3061371 ideals in 319505 cliques Tue Apr 28 14:21:06 2020 commencing inmemory singleton removal Tue Apr 28 14:21:11 2020 begin with 51525599 relations and 53914041 unique ideals Tue Apr 28 14:21:56 2020 reduce to 51370877 relations and 50696663 ideals in 10 passes Tue Apr 28 14:21:56 2020 max relations containing the same ideal: 196 Tue Apr 28 14:22:20 2020 removing 2526879 relations and 2207374 ideals in 319505 cliques Tue Apr 28 14:22:21 2020 commencing inmemory singleton removal Tue Apr 28 14:22:26 2020 begin with 48843998 relations and 50696663 unique ideals Tue Apr 28 14:23:04 2020 reduce to 48745655 relations and 48390321 ideals in 9 passes Tue Apr 28 14:23:04 2020 max relations containing the same ideal: 192 Tue Apr 28 14:23:35 2020 relations with 0 large ideals: 1040 Tue Apr 28 14:23:35 2020 relations with 1 large ideals: 1286 Tue Apr 28 14:23:35 2020 relations with 2 large ideals: 18987 Tue Apr 28 14:23:35 2020 relations with 3 large ideals: 185362 Tue Apr 28 14:23:35 2020 relations with 4 large ideals: 1069605 Tue Apr 28 14:23:35 2020 relations with 5 large ideals: 3821977 Tue Apr 28 14:23:35 2020 relations with 6 large ideals: 8808829 Tue Apr 28 14:23:35 2020 relations with 7+ large ideals: 34838569 Tue Apr 28 14:23:35 2020 commencing 2way merge Tue Apr 28 14:24:10 2020 reduce to 28658603 relation sets and 28303270 unique ideals Tue Apr 28 14:24:10 2020 ignored 1 oversize relation sets Tue Apr 28 14:24:10 2020 commencing full merge Tue Apr 28 14:31:02 2020 memory use: 3240.8 MB Tue Apr 28 14:31:06 2020 found 15724964 cycles, need 15677470 Tue Apr 28 14:31:11 2020 weight of 15677470 cycles is about 1097547456 (70.01/cycle) Tue Apr 28 14:31:11 2020 distribution of cycle lengths: Tue Apr 28 14:31:11 2020 1 relations: 2291729 Tue Apr 28 14:31:11 2020 2 relations: 2282984 Tue Apr 28 14:31:11 2020 3 relations: 2192031 Tue Apr 28 14:31:11 2020 4 relations: 1842581 Tue Apr 28 14:31:11 2020 5 relations: 
1578365 Tue Apr 28 14:31:11 2020 6 relations: 1260813 Tue Apr 28 14:31:11 2020 7 relations: 992679 Tue Apr 28 14:31:11 2020 8 relations: 779990 Tue Apr 28 14:31:11 2020 9 relations: 616680 Tue Apr 28 14:31:11 2020 10+ relations: 1839618 Tue Apr 28 14:31:11 2020 heaviest cycle: 22 relations Tue Apr 28 14:31:14 2020 commencing cycle optimization Tue Apr 28 14:31:34 2020 start with 77754198 relations Tue Apr 28 14:32:55 2020 pruned 1290229 relations Tue Apr 28 14:32:56 2020 memory use: 2790.2 MB Tue Apr 28 14:32:56 2020 distribution of cycle lengths: Tue Apr 28 14:32:56 2020 1 relations: 2291729 Tue Apr 28 14:32:56 2020 2 relations: 2325968 Tue Apr 28 14:32:56 2020 3 relations: 2254390 Tue Apr 28 14:32:56 2020 4 relations: 1867541 Tue Apr 28 14:32:56 2020 5 relations: 1592479 Tue Apr 28 14:32:56 2020 6 relations: 1254891 Tue Apr 28 14:32:56 2020 7 relations: 982745 Tue Apr 28 14:32:56 2020 8 relations: 765997 Tue Apr 28 14:32:56 2020 9 relations: 602147 Tue Apr 28 14:32:56 2020 10+ relations: 1739583 Tue Apr 28 14:32:56 2020 heaviest cycle: 22 relations Tue Apr 28 14:33:16 2020 RelProcTime: 5163 Tue Apr 28 14:33:22 2020 Tue Apr 28 14:33:22 2020 commencing linear algebra Tue Apr 28 14:33:24 2020 read 15677470 cycles Tue Apr 28 14:33:44 2020 cycles contain 48314663 unique relations Tue Apr 28 14:41:24 2020 read 48314663 relations Tue Apr 28 14:42:25 2020 using 20 quadratic characters above 4294917295 Tue Apr 28 14:45:53 2020 building initial matrix Tue Apr 28 14:51:45 2020 memory use: 6545.9 MB Tue Apr 28 14:51:50 2020 read 15677470 cycles Tue Apr 28 14:51:53 2020 matrix is 15677291 x 15677470 (4747.0 MB) with weight 1462757761 (93.30/col) Tue Apr 28 14:51:53 2020 sparse part has weight 1071935640 (68.37/col) Tue Apr 28 14:54:32 2020 filtering completed in 2 passes Tue Apr 28 14:54:35 2020 matrix is 15667059 x 15667237 (4746.3 MB) with weight 1462436968 (93.34/col) Tue Apr 28 14:54:35 2020 sparse part has weight 1071872063 (68.41/col) Tue Apr 28 14:55:51 2020 matrix 
starts at (0, 0) Tue Apr 28 14:55:54 2020 matrix is 15667059 x 15667237 (4746.3 MB) with weight 1462436968 (93.34/col) Tue Apr 28 14:55:54 2020 sparse part has weight 1071872063 (68.41/col) Tue Apr 28 14:55:54 2020 saving the first 48 matrix rows for later Tue Apr 28 14:55:56 2020 matrix includes 64 packed rows Tue Apr 28 14:55:58 2020 matrix is 15667011 x 15667237 (4594.2 MB) with weight 1171728648 (74.79/col) Tue Apr 28 14:55:58 2020 sparse part has weight 1047677685 (66.87/col) Tue Apr 28 14:55:58 2020 using block size 8192 and superblock size 786432 for processor cache size 8192 kB Tue Apr 28 14:56:59 2020 commencing Lanczos iteration (8 threads) Tue Apr 28 14:56:59 2020 memory use: 3801.9 MB Tue Apr 28 14:57:59 2020 linear algebra at 0.0%, ETA 166h29m [/code]Here is the t_d=120 attempt (110 was identical): [code] Tue Apr 28 03:55:54 2020 Msieve v. 1.54 (SVN 1018) Tue Apr 28 03:55:54 2020 random seeds: 8f4b785c cc995584 Tue Apr 28 03:55:54 2020 factoring 1258632688840167527990479924759660967727113832014282715541202945214604444063933854224497640052542892497254083608311352572748398102592769128719761767229021194362137011186342088871 (178 digits) Tue Apr 28 03:55:55 2020 searching for 15-digit factors Tue Apr 28 03:55:56 2020 commencing number field sieve (178-digit input) Tue Apr 28 03:55:56 2020 R0: 21186307570697279371433463611848173 Tue Apr 28 03:55:56 2020 R1: 8845196529223700328463 Tue Apr 28 03:55:56 2020 A0: 2400190655725017956973353574287095281260800 Tue Apr 28 03:55:56 2020 A1: 211014402684521569088239119813217640 Tue Apr 28 03:55:56 2020 A2: 39939568153496532334089408958 Tue Apr 28 03:55:56 2020 A3: 1552818850110399495593 Tue Apr 28 03:55:56 2020 A4: 58933721795178 Tue Apr 28 03:55:56 2020 A5: 294840 Tue Apr 28 03:55:56 2020 skew 1.00, size 3.619e-17, alpha -7.825, combined = 5.648e-16 rroots = 5 Tue Apr 28 03:55:56 2020 Tue Apr 28 03:55:56 2020 commencing relation filtering Tue Apr 28 03:55:56 2020 setting target matrix density to 120.0 Tue Apr 28 03:55:56 
2020 estimated available RAM is 15926.6 MB Tue Apr 28 03:55:56 2020 commencing duplicate removal, pass 1 . . . Tue Apr 28 04:33:20 2020 found 90847552 hash collisions in 275423639 relations Tue Apr 28 04:33:52 2020 added 121850 free relations Tue Apr 28 04:33:52 2020 commencing duplicate removal, pass 2 Tue Apr 28 04:40:31 2020 found 126526927 duplicates and 149018562 unique relations Tue Apr 28 04:40:31 2020 memory use: 1449.5 MB Tue Apr 28 04:40:32 2020 reading ideals above 139919360 Tue Apr 28 04:40:32 2020 commencing singleton removal, initial pass Tue Apr 28 04:57:17 2020 memory use: 3012.0 MB Tue Apr 28 04:57:18 2020 reading all ideals from disk Tue Apr 28 04:57:38 2020 memory use: 2421.4 MB Tue Apr 28 04:57:42 2020 commencing inmemory singleton removal Tue Apr 28 04:57:47 2020 begin with 149018562 relations and 146733511 unique ideals Tue Apr 28 04:58:31 2020 reduce to 55424518 relations and 38967191 ideals in 19 passes Tue Apr 28 04:58:31 2020 max relations containing the same ideal: 20 Tue Apr 28 04:58:34 2020 reading ideals above 720000 Tue Apr 28 04:58:34 2020 commencing singleton removal, initial pass Tue Apr 28 05:09:26 2020 memory use: 1506.0 MB Tue Apr 28 05:09:26 2020 reading all ideals from disk Tue Apr 28 05:09:45 2020 memory use: 2195.2 MB Tue Apr 28 05:09:51 2020 keeping 54432776 ideals with weight <= 200, target excess is 304675 Tue Apr 28 05:09:56 2020 commencing inmemory singleton removal Tue Apr 28 05:10:01 2020 begin with 55424518 relations and 54432776 unique ideals Tue Apr 28 05:11:08 2020 reduce to 54906475 relations and 53914041 ideals in 14 passes Tue Apr 28 05:11:08 2020 max relations containing the same ideal: 200 Tue Apr 28 05:11:34 2020 removing 3380876 relations and 3061371 ideals in 319505 cliques Tue Apr 28 05:11:36 2020 commencing inmemory singleton removal Tue Apr 28 05:11:40 2020 begin with 51525599 relations and 53914041 unique ideals Tue Apr 28 05:12:26 2020 reduce to 51370877 relations and 50696663 ideals in 10 passes Tue 
Apr 28 05:12:26 2020 max relations containing the same ideal: 196 Tue Apr 28 05:12:51 2020 removing 2526879 relations and 2207374 ideals in 319505 cliques Tue Apr 28 05:12:52 2020 commencing inmemory singleton removal Tue Apr 28 05:12:56 2020 begin with 48843998 relations and 50696663 unique ideals Tue Apr 28 05:13:34 2020 reduce to 48745655 relations and 48390321 ideals in 9 passes Tue Apr 28 05:13:34 2020 max relations containing the same ideal: 192 Tue Apr 28 05:14:06 2020 relations with 0 large ideals: 1040 Tue Apr 28 05:14:06 2020 relations with 1 large ideals: 1286 Tue Apr 28 05:14:06 2020 relations with 2 large ideals: 18987 Tue Apr 28 05:14:06 2020 relations with 3 large ideals: 185362 Tue Apr 28 05:14:06 2020 relations with 4 large ideals: 1069605 Tue Apr 28 05:14:06 2020 relations with 5 large ideals: 3821977 Tue Apr 28 05:14:06 2020 relations with 6 large ideals: 8808829 Tue Apr 28 05:14:06 2020 relations with 7+ large ideals: 34838569 Tue Apr 28 05:14:06 2020 commencing 2way merge Tue Apr 28 05:14:40 2020 reduce to 28658603 relation sets and 28303270 unique ideals Tue Apr 28 05:14:40 2020 ignored 1 oversize relation sets Tue Apr 28 05:14:40 2020 commencing full merge Tue Apr 28 05:35:26 2020 memory use: 1309.7 MB Tue Apr 28 05:35:26 2020 found 205766 cycles, need 5904375 Tue Apr 28 05:35:26 2020 too few cycles, matrix probably cannot build [/code] 
15.6M at TD 70 isn't big enough to send me back to sieving (unlike Charybdis' 3LP job, which at 19M does send us back for more relations). Not getting a matrix at TD 100 means you barely built a matrix; nothing wrong with this, merely a "matrix could be faster with a few million more rels" situation. I personally benefit, since you've directed the farm to the C198 sieve project while this matrix runs! :tu:
So, we have data that 3LP yields larger matrices, as one would expect, though having same-sized jobs from the same sieve software and same LP settings come out 30% different in size is a bit more than I expected. The sieve speedup is so big that it's worth the tradeoff; I think we just need to accept 14-16M matrix sizes for C180s with 3LP. In the next day or so, I'll post what I think are the best files for C175 and C180; we happen to be optimizing for C177 and C178, but the lessons should apply to the whole range of C173-182 that these two files would cover. Development isn't finished, but I think it's helpful to not have to scroll through the thread to find the current suggestions. 
[QUOTE=VBCurtis;544250]I imagine you're going to sieve up to about the same time as the C177, and end up with a matrix somewhere near 15-16M. That's still a big win, to achieve the same sieve time with a poly 17% worse! I await your data.[/QUOTE]
Good shout, I got this with 60M CPU-seconds of sieving (TD 120 didn't build): [code]Thu Apr 30 15:20:48 2020 commencing relation filtering Thu Apr 30 15:20:48 2020 setting target matrix density to 110.0 Thu Apr 30 15:20:48 2020 estimated available RAM is 15845.4 MB Thu Apr 30 15:20:48 2020 commencing duplicate removal, pass 1 ... Thu Apr 30 15:55:36 2020 found 104136282 hash collisions in 353025118 relations Thu Apr 30 15:55:58 2020 commencing duplicate removal, pass 2 Thu Apr 30 16:02:44 2020 found 135256670 duplicates and 217768448 unique relations ... Thu Apr 30 17:23:44 2020 matrix is 15570888 x 15571108 (6664.9 MB) with weight 1761668903 (113.14/col) Thu Apr 30 17:23:44 2020 sparse part has weight 1591448342 (102.21/col) Thu Apr 30 17:23:44 2020 using block size 8192 and superblock size 884736 for processor cache size 9216 kB Thu Apr 30 17:24:26 2020 commencing Lanczos iteration (6 threads) Thu Apr 30 17:24:26 2020 memory use: 6339.7 MB Thu Apr 30 17:25:05 2020 linear algebra at 0.0%, ETA 105h12m[/code] I'll sieve a bit more to try and get a matrix at TD 120. Curtis - I'll do a c178 with 3LP and I=15 next, as you suggested; please could you give me some parameters to try? 
[QUOTE=EdH;544282]I have msieve doing LA on the 178 cofactor of 127_945L now.
Msieve refused to build at t_d=120 and 110, so I tried it at default and it came up with the following. Of note, every one of the first 1.2M relations showed error 11. Here is the successful run which is now at about 120 hours to finish:[/QUOTE] That's an awfully high dup rate. Did you sieve on the a-side? 
[QUOTE=RichD;544298]That's an awfully high dup rate. Did you sieve on the a-side?[/QUOTE]
CADO-NFS should have defaulted to the algebraic side, as there was no task telling it otherwise. But I don't know if I can verify that to be the case. 
[QUOTE=charybdis;544297]
I'll sieve a bit more to try and get a matrix at TD 120. Curtis - I'll do a c178 with 3LP and I=15 next, as you suggested; please could you give me some parameters to try?[/QUOTE] Let's just trim the lims a little bit - how about 90M and 125M? There shouldn't be much difference in settings between A=28 and A=29 (aka I=15); I have more things I want to try, but if we try them all at once we won't know which change found speed. You needed 353M relations with A=28 to build a decent matrix; I estimate I=15 to need 4-5% fewer, so target 340M? We're seeing duplicate rates all over the place, so the target is more of a "try msieve here while it keeps sieving" for the way you guys have things set up? 
[QUOTE=VBCurtis;544302]Let's just trim the lims a little bit - how about 90M and 125M? There shouldn't be much difference in settings between A=28 and A=29 (aka I=15); I have more things I want to try, but if we try them all at once we won't know which change found speed.
You needed 353M relations with A=28 to build a decent matrix; I estimate I=15 to need 4-5% fewer, so target 340M? We're seeing duplicate rates all over the place, so the target is more of a "try msieve here while it keeps sieving" for the way you guys have things set up?[/QUOTE] I'm still trying to decide where the balance point would be for my setup. If a day of extra sieving (or maybe even two) only saves a day of LA, it's probably a loss, in that I could have started sieving the next composite or, as now, worked on a team project while LA completes. I am thinking along this line, though: I plan to use msieve for LA on a single machine. I'm thinking that I should oversieve on purpose on the CADO-NFS setup and periodically check whether msieve can build a matrix, instead of expecting CADO-NFS to build the matrix. In that vein, I think if 270M is required, I should just start with 300M as wanted relations and then let msieve test starting at 270M. That way, CADO-NFS isn't taking up the time trying to build and deciding to go for more relations. Of course, the duplication rate is an issue. Maybe I should use remdups4 and shoot for a unique relations value rather than raw, and use that to adjust the CADO-NFS wanted value of raw. Then again, the duplication rate isn't linear... 
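The idea of targeting unique rather than raw relations amounts to a simple conversion. A minimal sketch, with `raw_for_uniques` a hypothetical helper name and the 35% duplicate rate an assumed placeholder rather than a measured value; as noted, the real rate isn't constant as sieving proceeds:

```python
def raw_for_uniques(unique_target, dup_rate):
    """Raw relations needed if a fraction dup_rate of raw relations are duplicates.

    First-order estimate only: the duplicate rate rises as sieving
    continues, so a fixed rate understates the true raw requirement.
    """
    if not 0 <= dup_rate < 1:
        raise ValueError("dup_rate must be in [0, 1)")
    return unique_target / (1 - dup_rate)

# e.g. 270M uniques wanted, assuming (hypothetically) ~35% duplicates
print(f"{raw_for_uniques(270e6, 0.35):.3g} raw relations")
```

With those placeholder inputs the estimate lands well above 400M raw, which illustrates why the raw rels_wanted has to be inflated so aggressively when duplicate rates run high.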
I agree with this entirely - a 300M target, and 270M is a spot to start msieve (assuming 2LP settings). If you use the recent 3LP params, you'd add ~50M relations to both numbers; 3LP is so useful that it's still faster!
I am interested to see a C174-176 with 3LP; I might have to try that myself when the Kosta C198 mini-project is over. I really appreciate your contribution there; you've already nearly guaranteed that we won't spend a month getting to Q=80M. 
Glad I can be helpful. I'll stick with the 198 team effort for now. I should be able to add the msieve machine next Tuesday evening. I'm also using this to refine some scripts I'm running that take care of gracefully dropping some of my machines out of the workforce when they near their bedtime, instead of causing WU timeouts.

Good to go this time:
[code]Thu Apr 30 22:39:41 2020 commencing relation filtering Thu Apr 30 22:39:41 2020 setting target matrix density to 120.0 Thu Apr 30 22:39:41 2020 estimated available RAM is 15845.4 MB Thu Apr 30 22:39:42 2020 commencing duplicate removal, pass 1 ... Thu Apr 30 23:16:49 2020 found 110859058 hash collisions in 375804511 relations Thu Apr 30 23:17:10 2020 commencing duplicate removal, pass 2 Thu Apr 30 23:24:24 2020 found 143912162 duplicates and 231892349 unique relations ... Fri May 1 00:51:57 2020 matrix is 13966788 x 13967013 (6423.2 MB) with weight 1706963165 (122.21/col) Fri May 1 00:51:57 2020 sparse part has weight 1544139266 (110.56/col) Fri May 1 00:51:57 2020 using block size 8192 and superblock size 884736 for processor cache size 9216 kB Fri May 1 00:52:37 2020 commencing Lanczos iteration (6 threads) Fri May 1 00:52:37 2020 memory use: 6066.3 MB Fri May 1 00:53:12 2020 linear algebra at 0.0%, ETA 84h 3m[/code] [QUOTE=VBCurtis;544302]Let's just trim the lims a little bit - how about 90M and 125M? There shouldn't be much difference in settings between A=28 and A=29 (aka I=15); I have more things I want to try, but if we try them all at once we won't know which change found speed. You needed 353M relations with A=28 to build a decent matrix; I estimate I=15 to need 4-5% fewer, so target 340M? We're seeing duplicate rates all over the place, so the target is more of a "try msieve here while it keeps sieving" for the way you guys have things set up?[/QUOTE] Yes, I figured it was better to put an artificially large rels_wanted in the params file and run msieve when I get the chance. It messes up the ETA, but the yield changes so much through the job that the ETA wasn't all that useful anyway. Thanks once again for the parameters. 
57.6M CPU-seconds of sieving with 3LP at I=15 gave this:
[code]Tue May 5 13:11:32 2020 commencing relation filtering Tue May 5 13:11:32 2020 setting target matrix density to 110.0 Tue May 5 13:11:32 2020 estimated available RAM is 15845.4 MB Tue May 5 13:11:32 2020 commencing duplicate removal, pass 1 ... Tue May 5 13:47:36 2020 found 115022527 hash collisions in 367310494 relations Tue May 5 13:47:58 2020 commencing duplicate removal, pass 2 Tue May 5 13:55:03 2020 found 154133454 duplicates and 213177040 unique relations ... Tue May 5 15:20:11 2020 matrix is 15594597 x 15594821 (6670.9 MB) with weight 1766271532 (113.26/col) Tue May 5 15:20:11 2020 sparse part has weight 1592785088 (102.14/col) Tue May 5 15:20:11 2020 using block size 8192 and superblock size 884736 for processor cache size 9216 kB Tue May 5 15:20:56 2020 commencing Lanczos iteration (6 threads) Tue May 5 15:20:56 2020 memory use: 6331.9 MB Tue May 5 15:21:37 2020 linear algebra at 0.0%, ETA 110h56m[/code] I'll sieve a bit more, but the matrix is similar to the one I obtained after 60M CPU-seconds of sieving at A=28 in the previous job. Cownoise poly scores for the two jobs were very similar, so assuming they actually do sieve similarly, I=15 looks like it gives a speedup of a few percent over A=28 at c178. I notice once again that I=15 is giving more duplicates, perhaps because of the enormous yields at low q values? 
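For a concrete comparison, the duplicate fractions implied by the two filtering logs can be computed directly (relation and duplicate counts are taken verbatim from the logs quoted in this thread):

```python
# Raw relation and duplicate counts from the two msieve filtering runs.
jobs = {
    "A=28 (60M CPU-sec)":  (353_025_118, 135_256_670),
    "I=15 (57.6M CPU-sec)": (367_310_494, 154_133_454),
}
for name, (raw, dups) in jobs.items():
    print(f"{name}: {dups / raw:.1%} duplicates")
```

That works out to roughly 38% duplicates for the A=28 run versus 42% for I=15, which quantifies the "I=15 is giving more duplicates" observation.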
Great comparison - same-size matrix, 4% less sieve time. That's a win for I=15.
Do you know what the Q-range sieved was? You may be right about the duplicate rate being related to starting sieving at such low Q. It's fast down there, but the duplicate rate makes some of that speed an illusion. I guess that means we don't reduce rels_wanted for I=15 compared to A=28. You could try Q-initial of 5M, see how it affects elapsed time and duplicates. Ideas for future tests: Should we try 31LP on both sides? Since the 3LP side isn't lambda-restricted, it may make sense to ditch the tight lambda setting on the 2LP side. 31/31 should need 75% of the relations of 31/32. Ditching lambda on the 2LP side might need 10% more relations (this number is a guess). mfb=58 and mfb=59 on the 2LP side are worth trying if the lambda setting is removed. Finding the optimal ncurves settings could find us another 5% of sieve speed, with no change to matrix size. 
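The settings under discussion can be collected into a CADO-NFS params fragment. This is a hypothetical sketch, not a tested file: the parameter names are standard CADO-NFS ones, the values are the ones proposed in this thread, and mfb/lambda/ncurves are deliberately left out because they are exactly the knobs still being tuned:

```
# Hypothetical c178 params fragment based on the discussion; untested.
tasks.I = 15                        # vs A=28 in the previous run
tasks.qmin = 5000000                # raised Q-initial, to probe the duplicate rate
tasks.lim0 = 90000000               # trimmed lims as suggested
tasks.lim1 = 125000000
tasks.lpb0 = 31                     # 31/32 LP; 31/31 is the proposed experiment
tasks.lpb1 = 32
tasks.sieve.rels_wanted = 340000000
# mfb0/mfb1, lambda0/lambda1, ncurves0/ncurves1 omitted: still being tuned
```

Setting rels_wanted artificially high and filtering periodically with msieve, as described earlier in the thread, sidesteps having to guess the duplicate rate up front.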
[QUOTE=VBCurtis;544643]Great comparison - same-size matrix, 4% less sieve time. That's a win for I=15.
Do you know what the Q-range sieved was?[/quote] 500k to 90.1M. [quote]You may be right about the duplicate rate being related to starting sieving at such low Q. It's fast down there, but the duplicate rate makes some of that speed an illusion. I guess that means we don't reduce rels_wanted for I=15 compared to A=28. You could try Q-initial of 5M, see how it affects elapsed time and duplicates.[/quote] This is something I can test without having to run another job - hopefully I should have some data later today. [quote]Should we try 31LP on both sides? Since the 3LP side isn't lambda-restricted, it may make sense to ditch the tight lambda setting on the 2LP side. 31/31 should need 75% of the relations of 31/32. Ditching lambda on the 2LP side might need 10% more relations (this number is a guess). mfb=58 and mfb=59 on the 2LP side are worth trying if the lambda setting is removed.[/quote] What lims would you suggest using if I try 31/31 for a c178? Might just be best to give all the params so I don't make any mistakes :smile: At some point I suppose I could help out with finding the 2LP/3LP crossover, but I presume it makes sense to optimise 3LP first. [quote]Finding the optimal ncurves settings could find us another 5% of sieve speed, with no change to matrix size.[/QUOTE] This could be figured out by test-sieving a small range, right? 
I have started a study of duplication in reference to my recent c178 and found something interesting and disappointing. This may be due to my "farm" setup, but I actually have duplicated+ WUs:
[code] $ ls c180.500000*.gz c180.500000-510000.k1_rwswi.gz c180.500000-510000.ylvkcw5x.gz [/code]triplicated: [code] $ ls c180.550000*.gz c180.550000-560000.bbfy0nc6.gz c180.550000-560000.w31ocae5.gz c180.550000-560000.by9pspsu.gz [/code]and, even more: [code] $ ls c180.570000*.gz c180.570000-580000.3w5sj28b.gz c180.570000-580000.5tugazh8.gz c180.570000-580000.46ui0z36.gz c180.570000-580000.a4r_0xaq.gz c180.570000-580000.58h92gje.gz [/code]which, of course, greatly increased my duplication rate: [code] $ zcat c180.500000*.gz | ./remdups4 100 >test Found 42946 unique, [B]43346 duplicate[/B], and 0 bad relations. [/code][code] $ zcat c180.550000*.gz | ./remdups4 100 >test Found 41950 unique, [B]84389 duplicate[/B], and 0 bad relations.[/code][code] $ zcat c180.570000*.gz | ./remdups4 100 >test Found 42875 unique, [B]172369 duplicate[/B], and 0 bad relations.[/code]For the range 500000-600000, I actually had a 129% duplication rate: [code] $ zcat c180.5?0000*.gz | ./remdups4 100 >test Found 414110 unique, [B]532836 duplicate[/B], and 0 bad relations. [/code]I'm wondering if others experience this, or whether it is, in fact, due to something in my "farm" setup. 
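A small script can flag Q-ranges that received more than one workunit result before any relations are decompressed. This is a sketch: the sample filenames below mirror the `c180.<qrange>.<id>.gz` pattern from the listing above, but the WU ids in the list (and the counting logic itself) are illustrative, not taken from the actual upload directory:

```python
import re
from collections import Counter

# Illustrative filenames in the c180.<qrange>.<id>.gz pattern;
# in practice this list would come from os.listdir() on the upload dir.
files = [
    "c180.500000-510000.k1_rwswi.gz", "c180.500000-510000.ylvkcw5x.gz",
    "c180.550000-560000.bbfy0nc6.gz", "c180.550000-560000.w31ocae5.gz",
    "c180.550000-560000.by9pspsu.gz",
    "c180.560000-570000.aaaaaaaa.gz",   # a range with only one result
]

# Group by the Q-range field and count results per range.
ranges = Counter(re.match(r"c180\.([\d-]+)\.", f).group(1) for f in files)
for qrange, n in sorted(ranges.items()):
    if n > 1:
        print(f"{qrange}: {n} results")
```

Running something like this over the whole upload directory would show at a glance which ranges were handed out multiple times, separating server-side WU reissue from genuine sieving duplicates.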