mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Math (https://www.mersenneforum.org/forumdisplay.php?f=8)
-   -   Can we begin PRP using part of the results of P-1 stage 1? (https://www.mersenneforum.org/showthread.php?t=26863)

Zhangrc 2021-06-01 02:48

Can we begin PRP using part of the results of P-1 stage 1?
 
As far as I know, in P-1 stage 1 we compute 3[SUP]2*p*E[/SUP] modulo M[SUB]p[/SUB]. Likewise, in the PRP test we compute 3[SUP]2^p[/SUP] modulo M[SUB]p[/SUB]. In both cases the base is 3.
Could we begin the PRP test using part of the P-1 result (say, 3[SUP]2^k[/SUP], where k is the highest bit level of 2*p*E) to save some work (k iterations, about 1%), since most first-time PRP tests are assigned together with a prior P-1?
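Here is a toy sketch of the idea in Python with tiny numbers (just my illustration, not how prime95 or gpuowl actually do it): if stage 1 were computed by right-to-left square-and-multiply, the squaring chain would already pass through every 3[SUP]2^i[/SUP] mod M[SUB]p[/SUB], and the last of those is exactly PRP iteration k.

[CODE]# Toy illustration (Python, tiny numbers; not prime95/gpuowl code).
# If 3^(2*p*E) mod Mp is computed by right-to-left square-and-multiply,
# the squaring chain runs through 3^(2^i) mod Mp for every i up to the
# top bit of 2*p*E, and those are exactly the first PRP iterations.
p  = 13                    # tiny example; real exponents are ~10^8
Mp = (1 << p) - 1
E  = 2 * 3 * 5 * 7         # stand-in for the product of prime powers <= B1
N  = 2 * p * E             # the P-1 stage 1 exponent

acc, sq, n, chain = 1, 3, N, []
while n:                   # right-to-left binary exponentiation
    chain.append(sq)       # chain[i] = 3^(2^i) mod Mp
    if n & 1:
        acc = acc * sq % Mp
    sq = sq * sq % Mp
    n >>= 1

k = N.bit_length() - 1     # highest bit level of 2*p*E (~1% of p in practice)
assert acc == pow(3, N, Mp)            # the P-1 stage 1 residue
assert chain[k] == pow(3, 1 << k, Mp)  # identical to PRP iteration k

r = chain[k]               # resume the PRP test at iteration k, not 0
for _ in range(p - k):
    r = r * r % Mp
assert r == pow(3, 1 << p, Mp)         # the full PRP residue 3^(2^p) mod Mp
[/CODE]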
I'm a freshman here and I don't know if the idea is feasible.

Prime95 2021-06-01 05:02

A good idea! I won't go into the details of how one would do such an optimization.

Gpuowl already does this.

It is on my list of things to look into for prime95. The one problem is the optimization takes a lot of memory. Thus, for many users there may be little benefit.

Zhangrc 2021-06-01 10:13

[QUOTE=Prime95;579631]A good idea! I won't go into the details of how one would do such an optimization.

Gpuowl already does this.

It is on my list of things to look into for prime95. The one problem is the optimization takes a lot of memory. Thus, for many users there may be little benefit.[/QUOTE]

Thank you!
By the way, how much memory will it take? I'm sparing 11GB of my 16GB of memory to run Prime95; is that enough?
Also, I'm looking forward to seeing the version 30.7 release.

axn 2021-06-01 10:35

[URL="https://www.mersenneforum.org/showthread.php?t=25774"]This thread[/URL] has some related discussions, I believe.

axn 2021-06-01 11:22

And this [URL="https://www.mersenneforum.org/showthread.php?t=25799"]followup thread[/URL] as well

kriesel 2021-06-01 12:10

Estimating space: each stored power needs ceil(p/8) bytes in packed form, or about 64 bits/double / (~18 bits per fft word) times that as FFT data. For p up to ~10[SUP]9[/SUP], log2(10[SUP]9[/SUP]) ~ 30 powers; 30 x ceil(10[SUP]9[/SUP]/8) bytes ~ 3.75GB in packed form. The representation as doubles is ~13.3GB, without misc. overhead. That's in addition to the P-1 stage 1 requirements. In low-RAM situations, or to save at a stop and resume later, it can be spilled to disk and retrieved later.
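Redoing that arithmetic explicitly (just the estimate above in Python; nothing prime95-specific):

[CODE]# Back-of-envelope space estimate (same numbers as above, decimal GB)
import math

p = 10**9                                   # largest exponent considered
powers = math.ceil(math.log2(p))            # ~30 stored powers
packed = powers * math.ceil(p / 8)          # 1 bit per residue bit, packed
as_doubles = packed * 64 / 18               # ~18 bits carried per 64-bit FFT word

print(packed / 1e9)      # ~3.75 GB in packed form
print(as_doubles / 1e9)  # ~13.3 GB held as FFT doubles, before misc. overhead
[/CODE]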

drkirkby 2021-06-02 08:17

[QUOTE=Prime95;579631]The one problem is the optimization takes a lot of memory. Thus, for many users there may be little benefit.[/QUOTE]
Given Ben is doing 10x more first-time primality tests than anyone else, it might be worth doing just for Ben! (He will have 192 GB on his Amazon instances).

BTW, during stage 2 of P-1 factoring of [M]M104212883[/M], which was previously trial factored to 2[SUP]76[/SUP], mprime used 303 GB RAM with B1=881,000, B2=52,281,000.
[CODE][Worker #2 May 31 12:25] M104212883 stage 1 complete. 2543108 transforms. Time: 1935.398 sec.
[Worker #2 May 31 12:25] Starting stage 1 GCD - please be patient.
[Worker #2 May 31 12:26] Stage 1 GCD complete. Time: 45.404 sec.
[Worker #2 May 31 12:26] Available memory is 376728MB.
[Worker #2 May 31 12:26] D: 2730, relative primes: 7088, stage 2 primes: 3059717, pair%=95.64
[Worker #2 May 31 12:26] [B]Using 310498MB of memory[/B].[/CODE]mprime says it will save two primality tests if a factor is found, even though the true figure is only very slightly over one.
[CODE][Worker #2 May 31 11:53] Assuming no factors below 2^76 and [B]2 primality tests[/B] saved if a factor is found.[/CODE]

kriesel 2021-06-02 09:19

[QUOTE=drkirkby;579754]mprime says it will save two primality tests if a factor is found, even though the true figure is only very slightly over one.
[CODE][Worker #2 May 31 11:53] [COLOR=Red][B]Assuming[/B][/COLOR] no factors below 2^76 and [B]2 primality tests[/B] saved if a factor is found.[/CODE][/QUOTE]The key word there is highlighted in red; it is a holdover from the days when all first tests were LL and each needed a double-check, so a factor really did save two tests. Since most first-time testing is now PRP, perhaps a future release will adjust the assumption from 2 to ~1 for first tests, or at least for PRP first tests. That would adjust the bounds selection somewhat, for greater overall efficiency in the search for new Mersenne primes.

Zhangrc 2021-06-05 03:13

Wow!
By the way, should we use slightly smaller TF bounds, such as 2^75, for exponents around 115M? I think it's not a must, given how much GPU computing power there is.

axn 2021-06-05 06:21

[QUOTE=Prime95;579631]It is on my list of things to look into for prime95. The one problem is the optimization takes a lot of memory. Thus, for many users there may be little benefit.[/QUOTE]

While the optimal memory is 2^13 temps for the current wavefront, we can make do with much smaller amounts and still gain a lot (compared to distinct P-1 + PRP). Illustrative numbers:

The current wavefront is around 110M, which uses a 6M FFT. Given 1GB of memory, we can get 1024/48 = 21 temps. Let's assume we're targeting B1=1.2M, which amounts to about 1.73M bits (squarings) of straight P-1.

With 16 temps (the largest power of 2 below 21), we can do the P-1 stage 1 with an additional ~290k multiplies. However, we aren't limited to a power-of-two number of temps (though that is the easiest to conceptualize). If we utilize all 21 temps (handling all 5-bit patterns and a few 6-bit patterns), the effort drops to ~275k multiplies. Compare this with the optimal 2^13 temps, which gets it done in ~132k multiplies. But also compare it with straight P-1, which needs ~1.73M multiplies!!
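One rough cost model that lands close to those figures (my own back-of-envelope reconstruction, not how prime95 or gpuowl actually count multiplies): charge one accumulator multiply per non-zero sliding window over the bits of the stage 1 exponent, plus roughly two multiplies per temp to combine the accumulators at the end.

[CODE]# Rough cost model (my own reconstruction; it only roughly matches the
# figures above, and is not how prime95/gpuowl actually count things).
import math

fft_mb = 48                        # ~8 bytes x 6M words per buffer at a 6M FFT
temps  = 1024 // fft_mb            # 1GB of memory -> 21 temps
B1     = 1_200_000
bits   = round(B1 / math.log(2))   # ~1.73M bits in the stage 1 exponent

def extra_muls(window_bits):
    # one accumulator multiply per non-zero window of the exponent
    # (a window plus the zero run after it spans ~window_bits+1 bits),
    # plus ~2 multiplies per temp to combine the accumulators at the end
    buffers = 2 ** (window_bits - 1)          # odd patterns of this width
    return bits // (window_bits + 1) + 2 * buffers

print(temps)            # 21
print(extra_muls(5))    # ~290k extra multiplies with 16 temps (5-bit patterns)
print(extra_muls(14))   # ~132k with the "optimal" 2^13 temps
print(bits)             # vs ~1.73M multiplies for a standalone P-1 stage 1
[/CODE]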

Also, there is a downside to using a large number of temps. All the temps are part of your state! So if you need to write a checkpoint, you will need to write all of them to the disk. In the optimal case, that is ~110GB of IO per checkpoint. Obviously, this is not good. You could reduce it by accumulating the results and just writing that out - but that would mean 2*temp muls before each checkpoint. Here also, having fewer temps is helpful.
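For scale, the checkpoint sizes that implies, assuming each temp is written out in packed form (~p/8 bytes; as raw FFT doubles it would be several times larger):

[CODE]# Checkpoint IO, assuming temps are written packed at ~1 bit per residue bit
p = 110_000_000
per_temp = p / 8 / 1e9        # ~0.014 GB per packed temp
print(2**13 * per_temp)       # ~113 GB, i.e. the ~110GB per checkpoint above
print(21 * per_temp)          # under 0.3 GB with 21 temps
[/CODE]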

In short, less memory is still a major gain, and might be a blessing in disguise.

JuanTutors 2021-06-15 18:25

To piggyback on this idea, could we also go the other way? Could we take a saved residue from one of the last few iterations of the PRP test, times a small power of 3, and do a quick second B1 test with it? Meaning the following:

Given Mp and an iteration m near the end of the PRP test (say on the last day), so that m is somewhat close to, but below, p-1: at iteration m we have found 3^2^m mod Mp. We then do a quick P-1 test on 3^[(2^m+k)*B1] mod Mp for 1<=k<=128.

Is that feasible? Each 2^m+k should have about 18.4 distinct prime factors. Perhaps 18.4 prime factors for each value of k is enough to make it matter? Or would the GCD stage take too long?
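Here is a toy version of what I mean, in Python with tiny numbers, reading "B1" above as the usual stage 1 exponent E (the product of prime powers up to B1); the helper is made up for illustration:

[CODE]# Toy sketch (tiny numbers, hypothetical helper; not GIMPS software code):
# reuse a late PRP residue r = 3^(2^m) mod Mp as the start of extra
# P-1-style bases r*3^k, then take a single GCD over all of them.
from math import gcd

p, B1 = 23, 30             # M23 = 8388607 = 47 * 178481, so a factor exists
Mp = (1 << p) - 1

def stage1_exponent(B1):
    # product of the largest prime powers <= B1, as in ordinary P-1 stage 1
    E = 1
    for q in range(2, B1 + 1):
        if all(q % d for d in range(2, q)):   # trial-division primality, toy only
            qe = q
            while qe * q <= B1:
                qe *= q
            E *= qe
    return E

E = stage1_exponent(B1)
m = 20                     # an iteration "near the end" of this toy PRP test
r = pow(3, 1 << m, Mp)     # the saved PRP residue 3^(2^m) mod Mp

acc = 1
for k in range(1, 17):     # a handful of offsets (the post suggests up to 128)
    base = r * pow(3, k, Mp) % Mp             # 3^(2^m + k) mod Mp
    acc = acc * (pow(base, E, Mp) - 1) % Mp   # accumulate for one GCD at the end

print(gcd(acc, Mp))        # 47 here: ord(3) mod 47 is 23, which is <= B1
[/CODE]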

