mersenneforum.org > Math
Can we begin PRP using part of the results of P-1 stage 1?

 2021-06-01, 02:48 #1 Zhangrc   "University student" May 2021 北京 2·17 Posts Can we begin PRP using part of the results of P-1 stage 1? As far as I know, in P-1 stage 1 we compute 3^(2·p·E) modulo Mp. However, in PRP we compute 3^(2^p) modulo Mp. In both cases the base is 3. Could we begin PRP using part of the results of P-1 (say, 3^(2^k), where k is the position of the highest set bit of 2·p·E), to save some work (k iterations, about 1%), since most first-time PRP tests are assigned together with a prior P-1? I'm a freshman here and I don't know if the idea is feasible. Last fiddled with by Zhangrc on 2021-06-01 at 02:57
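The sharing being asked about can be seen with toy numbers: 3^E is the product of 3^(2^i) over the set bits of E, so the squarings a PRP test performs are exactly the squarings a combined P-1 stage 1 needs. A minimal Python sketch (tiny illustrative values of p and E, not real GIMPS parameters):

```python
# Toy demonstration (not GIMPS code): one pass of squarings yields both the
# PRP residue 3^(2^p) mod Mp and the P-1 stage 1 residue 3^E mod Mp.
# E here is a small stand-in; the real stage 1 exponent is the product of
# prime powers up to B1, times 2p.

p = 89                      # a small Mersenne exponent
Mp = 2**p - 1
E = 2 * p * 720720          # stand-in for the real stage 1 exponent

x = 3                       # PRP state: after i squarings, x == 3^(2^i) mod Mp
acc = 1                     # P-1 accumulator
for i in range(E.bit_length()):
    if (E >> i) & 1:        # bit i of E set -> 3^(2^i) is a factor of 3^E
        acc = acc * x % Mp
    x = x * x % Mp          # the squaring both tests share

assert acc == pow(3, E, Mp)          # P-1 stage 1 residue for free

for i in range(E.bit_length(), p):   # keep squaring up to p iterations
    x = x * x % Mp
assert x == pow(3, 1 << p, Mp)       # PRP residue 3^(2^p) mod Mp
```

In this naive form the accumulator costs one extra multiply per set bit of E; the memory/multiply trade-offs discussed later in the thread come from batching those multiplies.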
 2021-06-01, 05:02 #2 Prime95 P90 years forever!     Aug 2002 Yeehaw, FL 1D6C₁₆ Posts A good idea! I won't go into the details of how one would do such an optimization. Gpuowl already does this. It is on my list of things to look into for prime95. The one problem is the optimization takes a lot of memory. Thus, for many users there may be little benefit.
2021-06-01, 10:13   #3
Zhangrc

"University student"
May 2021

100010₂ Posts

Quote:
 Originally Posted by Prime95 A good idea! I won't go into the details of how one would do such an optimization. Gpuowl already does this. It is on my list of things to look into for prime95. The one problem is the optimization takes a lot of memory. Thus, for many users there may be little benefit.
Thank you!
By the way, how much memory will it take? I'm setting aside 11GB of my 16GB of memory to run Prime95; is that enough?
I'm also looking forward to the version 30.7 release.

 2021-06-01, 10:35 #4 axn     Jun 2003 13D9₁₆ Posts This thread has some related discussions, I believe.
 2021-06-01, 11:22 #5 axn     Jun 2003 5081₁₀ Posts And this followup thread as well.
 2021-06-01, 12:10 #6 kriesel     "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 5,413 Posts Estimating space: for each power needed, (64 bits/double) / (~18 bits/FFT word) × ceil(p/8) bytes. Storage could be in the compressed form, ceil(p/8) bytes per power. For p up to ~10^9, log2(10^9) ≈ 30, so about 30 powers; 30 × ceil(10^9/8) bytes ≈ 3.75GB. The representation as doubles is ~13.3GB, without misc. overhead. That's in addition to the P-1 stage 1 requirements. In low-RAM situations, or to save at a stop and resume later, it can be spilled to disk and retrieved later.
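kriesel's arithmetic can be checked directly. A short sketch under the same assumptions (one cached power per bit level of the exponent, ~18 bits of the number per 8-byte FFT word, decimal GB):

```python
import math

# Storage estimate for caching one power per bit level, p up to ~10^9.
p = 10**9
levels = math.ceil(math.log2(p))             # ~30 powers to keep

packed_per_power = math.ceil(p / 8)          # packed form: 1 bit per bit
double_per_power = math.ceil(p / 18) * 8     # ~18 bits per 8-byte FFT word

packed_total = levels * packed_per_power     # -> ~3.75 GB
double_total = levels * double_per_power     # -> ~13.3 GB
print(packed_total / 1e9, double_total / 1e9)
```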
2021-06-02, 08:17   #7
drkirkby

"David Kirkby"
Jan 2021
Althorne, Essex, UK

643₈ Posts

Quote:
 Originally Posted by Prime95 The one problem is the optimization takes a lot of memory. Thus, for many users there may be little benefit.
Given Ben is doing 10x more first-time primality tests than anyone else, it might be worth doing just for Ben! (He will have 192 GB on his Amazon instances).

BTW, during stage 2 of P-1 factoring of M104212883, which was previously trial factored to 2^76, mprime used 303 GB RAM with B1=881,000, B2=52,281,000.
Code:
[Worker #2 May 31 12:25] M104212883 stage 1 complete. 2543108 transforms. Time: 1935.398 sec.
[Worker #2 May 31 12:25] Starting stage 1 GCD - please be patient.
[Worker #2 May 31 12:26] Stage 1 GCD complete. Time: 45.404 sec.
[Worker #2 May 31 12:26] Available memory is 376728MB.
[Worker #2 May 31 12:26] D: 2730, relative primes: 7088, stage 2 primes: 3059717, pair%=95.64
[Worker #2 May 31 12:26] Using 310498MB of memory.
mprime says it saves two primality tests if a factor is found, even though the true figure is only very slightly over one.
Code:
[Worker #2 May 31 11:53] Assuming no factors below 2^76 and 2 primality tests saved if a factor is found.

Last fiddled with by drkirkby on 2021-06-02 at 08:24

2021-06-02, 09:19   #8
kriesel

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

12445₈ Posts

Quote:
 Originally Posted by drkirkby mprime is saying it saves two primality tests if a factor is found, despite it is only very slightly over one. Code: [Worker #2 May 31 11:53] Assuming no factors below 2^76 and 2 primality tests saved if a factor is found.
The key word there (highlighted in red in the original post) is a holdover from the days of all-LL, no-PRP testing: with LL, a found factor saved both the first test and its double-check. Since most first-time testing is now PRP, perhaps a future release will adjust the assumption from 2 to ~1 for first tests, or at least for PRP first tests, which would adjust the bounds selection somewhat, for greater overall efficiency in searching for new Mersenne primes.

 2021-06-05, 03:13 #9 Zhangrc   "University student" May 2021 北京 2·17 Posts Wow! By the way, should we use slightly smaller TF bounds, such as 2^75, for exponents of ~115M? I think going deeper is not a must, even with so much GPU computing power available.
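A rough expected-value check of this question, using the standard heuristic that a Mersenne number has a factor between 2^b and 2^(b+1) with probability about 1/b. The 2.0-vs-1.0 tests-saved figures are an assumption following kriesel's point above, not mprime's actual cost model:

```python
# Back-of-envelope: expected primality tests saved by one more TF bit level.
tests_saved_ll_era = 2.0     # first test + double-check (old LL assumption)
tests_saved_prp = 1.0        # PRP with proof needs no double-check

for b in (74, 75):           # chance of a factor in [2^b, 2^(b+1)] ~ 1/b
    p_factor = 1.0 / b
    print(b, p_factor * tests_saved_ll_era, p_factor * tests_saved_prp)

# Under PRP the expected saving per bit level halves, which is the
# argument for stopping TF one level earlier (e.g. 2^75 instead of 2^76).
```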
2021-06-05, 06:21   #10
axn

Jun 2003

5081₁₀ Posts

Quote:
 Originally Posted by Prime95 It is on my list of things to look into for prime95. The one problem is the optimization takes a lot of memory. Thus, for many users there may be little benefit.
While the optimal memory is 2^13 temps for the current wavefront, we can make do with much smaller amounts and still gain a lot (compared to distinct P-1 + PRP). Illustrative numbers:

The current wavefront is around 110M, which uses a 6M FFT. Given 1GB of memory, we can get 1024/48 = 21 temps (each 6M-FFT residue occupies about 48MB). Let's assume we're targeting B1=1.2M, which is about 1.73M bits of straight P-1.

With 16 temps (the largest power of 2 below 21), we can do the P-1 stage 1 with an additional ~290k multiplies. However, we aren't limited to a power-of-two number of temps (though that is the easiest to conceptualize). If we utilize all 21 temps (handling all 5-bit patterns and a few 6-bit patterns), the effort drops to ~275k multiplies. Compare this with the optimal 2^13 temps, which gets it done in ~132k multiplies. But also compare it with straight P-1, which needs 1.73M multiplies!

Also, there is a downside to using a large number of temps: all the temps are part of your state! So if you need to write a checkpoint, you need to write all of them to disk. In the optimal case, that is ~110GB of I/O per checkpoint. Obviously, this is not good. You could reduce it by accumulating the results and writing just that out, but that would cost about 2 × (number of temps) multiplies before each checkpoint. Here also, having fewer temps is helpful.

In short, less memory is still a major gain, and might be a blessing in disguise.
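The multiply counts above can be approximated with a crude simulation. This is not axn's exact scheme; it is a simplified count for fixed 4-bit windows starting at set bits of a random exponent, so it lands somewhat above his ~290k figure, but it shows the same order of magnitude:

```python
import random

# Sketch of the accumulator idea: while PRP squarings march through
# x = 3^(2^i), bits of the stage 1 exponent E are consumed in windows,
# and each window costs one extra multiply into a per-pattern "temp".
# Only those extra multiplies are counted; the squarings come free with PRP.
W = 4                                  # window width -> 2^W = 16 temps
random.seed(1)
n = 1_730_000                          # ~bits in E for B1 = 1.2M
E = random.getrandbits(n) | (1 << (n - 1))

mults = 0
i = n - 1
while i >= 0:
    if (E >> i) & 1:                   # window starts on a set bit
        mults += 1                     # one multiply into temps[pattern]
        i -= W                         # consume a full W-bit window
    else:
        i -= 1                         # zero bits ride along with squarings
mults += 2 * (1 << W)                  # rough cost of combining the temps

print(mults)                           # ~n/5: a few hundred k extra
                                       # multiplies vs ~1.73M standalone
```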

 2021-06-15, 18:25 #11 JuanTutors     Mar 2004 1034₈ Posts To piggyback on this idea, could we also go the other way? Could we use a saved value of B1, and a small power of 3 times one of the last few iterations of the PRP test, to do a quick second B1 test? Meaning the following: given Mp and an iteration m near the end of the PRP test (say, on the last day), so that m is somewhat close to but below p-1. At iteration m we have found 3^(2^m) mod Mp. We then do a quick P-1 test on 3^[(2^m+k)*B1] mod Mp for 1<=k<=128. Is that feasible? Each 2^m+k should have about 18.4 distinct prime factors on average. Perhaps 18.4 prime factors for each value of k is enough to make it matter? Or would the GCD stage take too long? Last fiddled with by JuanTutors on 2021-06-15 at 18:25
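The incremental cost of this is small, because 3^((2^m+k)·B1) = (3^(2^m))^B1 · (3^B1)^k: one exponentiation by B1 reusing the PRP residue, then one multiply and one GCD per k. A toy sketch with hypothetical small p and B1 (a real stage 1 "B1 exponent" would be the product of prime powers up to the bound, not this stand-in):

```python
import math

# Toy sketch of the idea: reuse the PRP residue y = 3^(2^m) mod Mp to run
# cheap extra P-1-style gcd checks on bases 3^(2^m + k).
p = 107
Mp = 2**p - 1               # M107 (prime, so no factor will be found here)
m = 100                     # pretend we are near the end of the PRP test
B1 = 720720                 # toy stand-in for the stage 1 exponent

y = pow(3, 1 << m, Mp)      # PRP residue at iteration m
u = pow(y, B1, Mp)          # one exponentiation by B1
t = pow(3, B1, Mp)
z = u
for k in range(1, 129):     # each extra k costs one multiply + one gcd
    z = z * t % Mp
    assert z == pow(3, ((1 << m) + k) * B1, Mp)   # matches direct form
    g = math.gcd(z - 1, Mp)
    # g > 1 would reveal a factor of Mp (none expected for prime M107)
```

The per-k work is trivial; as the post asks, the real question is whether ~18 random prime factors per exponent capture enough smoothness to be worth 128 extra GCDs.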

