20210102, 15:43  #56 
"Oliver"
Sep 2017
Porta Westfalica, DE
439 Posts 
You are sincerely welcome!
The machine is an AMD Threadripper 1950X. At the moment, I am running two threads per worker. Previously, with all memory slots populated, I had this:
Code:
Timings for 480K FFT length (16 cores, 1 worker): 0.60 ms. Throughput: 1661.77 iter/sec.
Timings for 480K FFT length (16 cores, 2 workers): 0.63, 0.56 ms. Throughput: 3369.52 iter/sec.
Timings for 480K FFT length (16 cores, 4 workers): 0.96, 0.88, 0.94, 1.01 ms. Throughput: 4239.61 iter/sec.
Timings for 480K FFT length (16 cores, 8 workers): 1.70, 1.71, 1.89, 1.89, 1.90, 1.90, 1.77, 1.70 ms. Throughput: 4438.59 iter/sec.
Timings for 480K FFT length (16 cores, 16 workers): 3.40, 3.42, 3.44, 3.45, 3.36, 3.40, 3.42, 3.43, 3.46, 3.52, 3.43, 3.47, 3.43, 3.51, 3.48, 3.43 ms. Throughput: 4651.21 iter/sec.
Now, with only two memory modules, I have (attention, slightly bigger FFT size!):
Code:
Timings for 512K FFT length (16 cores, 1 worker): 0.65 ms. Throughput: 1528.24 iter/sec.
Timings for 512K FFT length (16 cores, 2 workers): 0.52, 0.55 ms. Throughput: 3724.46 iter/sec.
Timings for 512K FFT length (16 cores, 4 workers): 0.88, 0.90, 0.91, 0.93 ms. Throughput: 4419.46 iter/sec.
Timings for 512K FFT length (16 cores, 8 workers): 1.76, 1.77, 1.76, 1.82, 1.75, 1.77, 1.81, 1.85 ms. Throughput: 4480.33 iter/sec.
Timings for 512K FFT length (16 cores, 16 workers): 6.58, 6.58, 6.65, 6.74, 6.57, 6.56, 6.56, 6.86, 6.72, 6.70, 6.65, 6.84, 6.98, 6.58, 6.56, 6.70 ms. Throughput: 2397.05 iter/sec.
In contrast, on an AMD Ryzen 3800X, I got this (with larger FFTs, again):
Code:
Timings for 560K FFT length (8 cores, 1 worker): 0.35 ms. Throughput: 2826.81 iter/sec.
Timings for 560K FFT length (8 cores, 2 workers): 0.51, 0.50 ms. Throughput: 3991.92 iter/sec.
Timings for 560K FFT length (8 cores, 4 workers): 0.92, 0.94, 0.94, 0.93 ms. Throughput: 4279.59 iter/sec.
Timings for 560K FFT length (8 cores, 8 workers): 2.09, 2.08, 2.07, 2.04, 2.07, 2.17, 2.06, 2.06 ms. Throughput: 3847.43 iter/sec.
Timings for 560K FFT length (8 cores hyperthreaded, 1 worker): 0.36 ms. Throughput: 2758.59 iter/sec.
Timings for 560K FFT length (8 cores hyperthreaded, 2 workers): 0.48, 0.48 ms. Throughput: 4141.37 iter/sec.
Timings for 560K FFT length (8 cores hyperthreaded, 4 workers): 0.93, 0.88, 0.90, 0.91 ms. Throughput: 4430.35 iter/sec.
Timings for 560K FFT length (8 cores hyperthreaded, 8 workers): 2.18, 2.28, 2.13, 2.12, 2.09, 2.21, 2.29, 2.26 ms. Throughput: 3647.35 iter/sec.
Last fiddled with by kruoli on 20210102 at 15:44 Reason: Spelling.
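As a side note, the reported aggregate throughput appears to be simply the sum of each worker's iteration rate (1000 divided by its per-iteration time in ms). A quick sanity check against the 16-worker 480K run above (a sketch of that relationship, not Prime95's own accounting code):

```python
# Per-worker iteration times in ms from the 480K FFT, 16-worker benchmark.
times_ms = [3.40, 3.42, 3.44, 3.45, 3.36, 3.40, 3.42, 3.43,
            3.46, 3.52, 3.43, 3.47, 3.43, 3.51, 3.48, 3.43]

# Each worker contributes 1000 / t iterations per second.
throughput = sum(1000.0 / t for t in times_ms)
print(f"{throughput:.2f} iter/sec")  # roughly 4651, matching the reported 4651.21
```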
20210104, 20:36  #57 
6809 > 6502
"""""""""""""""""""
Aug 2003
249F_{16} Posts 
Fancy graphic to gawk at.

20210105, 06:17  #58 
Romulan Interpreter
Jun 2011
Thailand
5^{2}·7·53 Posts 
They'll finish faster. (The red line will reach the point where the green line was when we began, not the point where red would intersect green, because new PRPs don't need a DC anymore. Maybe sometime towards the end of February?)

20210105, 22:15  #59 
6809 > 6502
"""""""""""""""""""
Aug 2003
3·5^{5} Posts 
There are (or were) still, I think, some first-time CF-PRPs being turned in that are not being done by v29. The rate of change will accelerate once we get close to those that were turned in shortly after v30 came out.

20210108, 15:53  #60 
"Oliver"
Sep 2017
Porta Westfalica, DE
1B7_{16} Posts 
Weekly Update
Currently, in the range from 9M to 11M, there are 2,049 PRPCFDCs assigned. In total, there are 26,114 exponents to go.
That means we are progressing at around 485 exponents per day, so the ETA at our current speed is around 54 days.
PS: What will happen when we run out of PRPCFDC work? Will Prime95 fetch PRPCF instead?
Last fiddled with by kruoli on 20210108 at 15:54 Reason: Clarified intentions.
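The ETA above is just the remaining exponents divided by the daily completion rate; taking the two figures from this update as given:

```python
remaining = 26114   # exponents left to go in total
per_day = 485       # observed completion rate, exponents per day
eta_days = remaining / per_day
print(round(eta_days))  # about 54 days, as stated
```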
20210108, 21:04  #61 
6809 > 6502
"""""""""""""""""""
Aug 2003
3·5^{5} Posts 
I believe that it would roll over to FT PRPCF.

20210114, 18:09  #62 
6809 > 6502
"""""""""""""""""""
Aug 2003
10010010011111_{2} Posts 
I have a 6-core machine that I am using to do LLDC and PRPCFDC (3 cores for each task). I had the 3 cores for PRPCFDC set up as 1 worker. After seeing your post, I shifted to 3 workers with 1 core each for a while. It turns out I get better throughput with a single worker. Once we get caught up to the first-time PRPCF wavefront, the worker will be reverted to LLDC work.

20210115, 18:57  #63 
"Oliver"
Sep 2017
Porta Westfalica, DE
1B7_{16} Posts 
Weekly Update
The wavefront will likely hit 10M today!
Currently, in the range from 9M to 11M, there are 1,544 PRPCFDCs assigned. In total, there are 22,497 exponents to go. That means we are progressing at around 517 exponents per day, so the ETA at our current speed is around 44 days.
20210115, 19:17  #65 
"Oliver"
Sep 2017
Porta Westfalica, DE
439 Posts 
Sorry, bad phrasing:
I wanted to say that we will likely hit 10M in the assignment wavefront today (using the time zone of this forum). Last fiddled with by kruoli on 20210115 at 19:20 Reason: Punctuation.
20210116, 05:21  #66 
Romulan Interpreter
Jun 2011
Thailand
5^{2}·7·53 Posts 
Which matches my initial approximation above (end of February) well. Looking at my status page (which shows the number of people doing each kind of work; each user sees his own page, but the number of participants is the same if you access yours), I see new people "jumping in" daily, so the given timeframe is quite realistic.
