2021-11-20, 15:27  #287 
"Ed Hall"
Dec 2009
Adirondack Mtns
47×89 Posts 
We're closing in on our primary goal!
Now it's time to show a small portion of my ignorance: if A=32 performed so much better at returning relations, why was A=30 the better choice? Did the duplicates outweigh the overall return? I thought that CPU time was set against an old standard CPU benchmark. If newer CPUs are multiples of that benchmark, why is host #2 (on Seth's page) showing 50% less CPU time than host #3, while showing 50% more productivity? Is that much of a difference in efficiency really down to CPU capabilities? 
2021-11-20, 16:18  #288 
"Oliver"
Sep 2017
Porta Westfalica, DE
2·5·83 Posts 
A=32 took much longer to run a single WU (fewer relations per unit time) and used much more memory.
Filtering and LA with 1.255B relations gave me (TD 120):
Code:
commencing linear algebra
read 73594760 cycles
cycles contain 241224346 unique relations
read 241224346 relations
using 20 quadratic characters above 4294917295
building initial matrix
memory use: 33714.0 MB
read 73594760 cycles
matrix is 73594580 x 73594760 (27697.0 MB) with weight 8500291408 (115.50/col)
sparse part has weight 6377472282 (86.66/col)
filtering completed in 2 passes
matrix is 73565842 x 73566019 (27695.0 MB) with weight 8499247339 (115.53/col)
sparse part has weight 6377274484 (86.69/col)
matrix starts at (0, 0)
matrix is 73565842 x 73566019 (27695.0 MB) with weight 8499247339 (115.53/col)
sparse part has weight 6377274484 (86.69/col)
saving the first 48 matrix rows for later
matrix includes 64 packed rows
matrix is 73565794 x 73566019 (26838.6 MB) with weight 7031611414 (95.58/col)
sparse part has weight 6299920567 (85.64/col)
using block size 8192 and superblock size 6291456 for processor cache size 65536 kB
commencing Lanczos iteration (16 threads)
memory use: 27168.9 MB
linear algebra completed 1007 of 73566019 dimensions (0.0%, ETA 3612h 5m)
Last fiddled with by kruoli on 2021-11-20 at 16:20. Reason: Corrected measurement unit. 
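The progress/ETA line in that log is easy to pull out of a file programmatically. Below is a minimal sketch: the two sample log lines match the msieve output above, while the function name and everything else are illustrative.

```python
import re

# A log excerpt in the shape of the msieve output above (two of its lines).
log = (
    "matrix is 73565794 x 73566019 (26838.6 MB) with weight 7031611414 (95.58/col)\n"
    "linear algebra completed 1007 of 73566019 dimensions (0.0%, ETA 3612h 5m)\n"
)

def parse_la_progress(text):
    """Extract (done, total, eta_hours) from an msieve-style LA progress line."""
    m = re.search(r"completed (\d+) of (\d+) dimensions .*?ETA (\d+)h\s*(\d+)m", text)
    if not m:
        return None
    done, total, hours, minutes = map(int, m.groups())
    return done, total, hours + minutes / 60.0

done, total, eta_h = parse_la_progress(log)
print(f"{done}/{total} dimensions, ETA {eta_h:.1f} h (~{eta_h / 24:.0f} days)")
```

Run against the excerpt above this reports roughly 151 days, which is why the ETA prompted the reaction below.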
2021-11-20, 16:46  #289  
Apr 2020
11×53 Posts 
Ouch, looks like we might need to go up to 1.4-1.5G relations. Even given the huge matrix, that ETA feels worse than I'd have expected. Were you still sieving on the other 16 threads? 

2021-11-20, 16:53  #290  
"Oliver"
Sep 2017
Porta Westfalica, DE
2×5×83 Posts 
Our sieving speed (available cores) is still good. We should be able to tackle this quickly. 

2021-11-20, 17:32  #291 
"Ed Hall"
Dec 2009
Adirondack Mtns
47×89 Posts 
Thanks for the answers. That does help educate me a bit.
Oliver, are you planning to take the server down when you reach the initial goal? I'm not sure how my scripts will react. If you plan to drop the server during my overnight (which will include your early morning), I will tell all my clients to stop before I head to bed. 
2021-11-20, 17:57  #292 
"Oliver"
Sep 2017
Porta Westfalica, DE
830_{10} Posts 
No, I am going to wait at least until our experts here agree that we can stop. In that case, I would announce my intention and shut down my server after one hour with no new work sent out, or after 24 h of ignoring existing connections, whichever comes first.
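That "whichever comes first" rule can be sketched as a simple check. The two limits (1 h idle, 24 h hard cutoff) come from the post above; the function and variable names, and using wall-clock seconds, are illustrative.

```python
import time

IDLE_LIMIT = 3600        # shut down after 1 h with no new work sent out...
HARD_LIMIT = 24 * 3600   # ...or 24 h after the announcement, whichever is first

def should_shut_down(now, announced_at, last_wu_sent_at):
    """Return True once either limit from the post above has elapsed."""
    idle_too_long = now - last_wu_sent_at >= IDLE_LIMIT
    hard_cutoff_hit = now - announced_at >= HARD_LIMIT
    return idle_too_long or hard_cutoff_hit

# Example: announced 2 h ago, last WU went out 30 min ago -> keep running.
t = time.time()
print(should_shut_down(t, t - 2 * 3600, t - 30 * 60))   # False
print(should_shut_down(t, t - 2 * 3600, t - 2 * 3600))  # True (idle > 1 h)
```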

2021-11-20, 18:05  #293 
"Curtis"
Feb 2005
Riverside, CA
5,153 Posts 
CADO should keep running until we get a manageable matrix. 73M at 1.25G relations suggests we have another day or three to go; hopefully filtering can be run once a day until the matrix doesn't shrink much from the day's extra relations.
1.16G → 87M matrix (default TD)
1.255G → 73M matrix (TD 120)
So ~100M extra relations dropped the matrix size by 16%. Another 50M relations from a day's sieving should reduce it another 6% or so; that is, we might lose ~4M dimensions from one more day's sieving. Projecting from these two data points, 1.4G might yield a matrix around 61-64M. Let's use TD 124 next time, to make filtering work a little harder. 
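The projection above can be reproduced with a toy constant-shrink model: take the "~6% smaller per extra 50M relations" estimate and compound it. The data points and the 6% figure come from the post; the function name, step size, and the choice of a geometric model are assumptions for illustration.

```python
# Data points from the thread:
#   1.160G relations -> 87M-dimension matrix (default TD)
#   1.255G relations -> 73M-dimension matrix (TD 120)
# Toy model: each extra 50M relations shrinks the matrix by ~6%.

def project(rels_g, base_rels=1.255, base_dim=73.0, shrink=0.06, step=0.050):
    """Project matrix dimensions (in millions) at rels_g giga-relations."""
    steps = (rels_g - base_rels) / step          # how many 50M increments
    return base_dim * (1.0 - shrink) ** steps    # compound the ~6% shrink

print(f"1.30G -> ~{project(1.30):.0f}M")
print(f"1.40G -> ~{project(1.40):.0f}M")  # lands near the low end of 61-64M
```

Diminishing returns from filtering would flatten this curve, which is presumably why the post quotes a 61-64M range rather than a single number.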
2021-11-20, 18:12  #294 
"Oliver"
Sep 2017
Porta Westfalica, DE
33E_{16} Posts 
This can be done. I will start the next run at around 8 PM UTC. The filtering took 15 h last time because the machine is busy, so results should be there around 2 PM UTC.

2021-11-20, 18:31  #295 
"Curtis"
Feb 2005
Riverside, CA
5,153 Posts 
Also, you are correct that msieve's filtering and matrix-building are single-threaded.
CADO's filtering is multi-threaded, but that doesn't help us here since CADO's matrix-solving is so much slower. If only we knew how to use the results from CADO filtering with msieve's matrix-solving; on the C207 team sieve I got a 60M matrix from CADO but 72M from msieve. 
2021-11-20, 18:33  #296 
"Ed Hall"
Dec 2009
Adirondack Mtns
47×89 Posts 
Sounds good. Will you have the server send out a 410 message, or just stop its process? If we're still sieving then, I might pull all my clients on Monday night, since I expect to be tied up Tuesday.
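The distinction matters for client scripts: HTTP 410 (Gone) is a clean "no more work" signal a client can act on, whereas a dead process just looks like a connection error. A sketch of the decision logic, where only the 410 meaning comes from the thread and the function name and retry policy are illustrative:

```python
# Hypothetical client-side policy for the server behaviours discussed above.
def classify_response(status: int) -> str:
    """Decide what a sieving client should do with an HTTP status."""
    if status == 200:
        return "work"       # got a workunit, keep sieving
    if status == 410:
        return "shutdown"   # 410 Gone: project finished, exit cleanly
    return "retry"          # server busy/unreachable: back off and try again

for code in (200, 410, 503):
    print(code, "->", classify_response(code))
```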

2021-11-20, 18:36  #297  
"Ed Hall"
Dec 2009
Adirondack Mtns
47·89 Posts 

