2020-08-23, 16:37   #8
kriesel

Quote:
Originally Posted by axn View Post
The figure of 91K is wrong. I think it is closer to 600K. James's site doesn't have P95 timing data for a FFT big enough to handle that exponent, and so uses timing from a smaller FFT, hence the discrepancy. If 87 bits is good enough for CPU TF, then 91 bits is correct for GPU TF.

EDIT:- Compare the LL GH-Days for an exponent 1/10th the size: http://www.mersenne.ca/exponent/332193019. An exponent 10 times the size should be at least 100 times the effort, so 600K might actually be a conservative estimate.
I've obtained around p^2.1 scaling for an assortment of software (PRP, LL, P-1) over broad ranges of p. Applying that as a long extrapolation here, I get about 622,082 GHz-days: a few years of a Radeon VII, if the software existed, to do one gigadigit PRP test (3.34 years at Prime95's reported rate of ~510 GHz-days/day on Linux and ROCm). Note that rate was at 5M FFT length, and I've seen considerable throughput dropoff at larger FFT lengths, on the order of half, on Windows.
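For concreteness, here's a minimal Python sketch of that extrapolation, assuming the ~p^2.1 scaling above. The gigadigit exponent constant and the reference GH-days figure are my placeholders, not values taken from James's site; substitute the real mersenne.ca number before trusting the output.
Code:
# Minimal sketch, assuming test effort scales as ~p^2.1.
# P_GIGADIGIT is approximately the smallest prime exponent giving a
# 10^9-digit Mersenne number (my value; verify before relying on it).

P_GIGADIGIT = 3_321_928_171
SCALING = 2.1

def extrapolate_ghz_days(p_ref: int, ghd_ref: float, p_target: int) -> float:
    """Scale a known test effort to another exponent via effort ~ p^SCALING."""
    return ghd_ref * (p_target / p_ref) ** SCALING

# Placeholder reference: the 332,193,019 exponent linked above, with a
# made-up 5,000 GHz-days effort -- substitute the figure from mersenne.ca.
print(f"{extrapolate_ghz_days(332_193_019, 5_000.0, P_GIGADIGIT):,.0f} GHz-days")

# Check the runtime arithmetic in this post: 622,082 GHz-days at a
# sustained 510 GHz-days/day is about 3.34 years.
print(f"{622_082 / 510 / 365.25:.2f} years")  # -> 3.34 years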
An LL test should only be considered on large exponents if an initial PRP/GEC/Proof/Cert test sequence yields a probably-prime result. LL, even with the Jacobi check, is simply too likely to have an undetected error in such long computations. LL confirmation would probably best be done with different software and hardware, with frequent comparison of interim residues; a sketch of that cross-check follows.
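As a rough illustration (not any existing tool's output format; the "iteration,residue_hex" layout and all names here are my assumptions), something like this could flag the first iteration at which two runs diverge, localizing a hardware or software error to the interval since the last matching residue instead of forcing a full rerun:
Code:
# Hypothetical sketch: compare interim residues from two independent runs
# at matching iteration counts. The "iteration,residue_hex" line format is
# an assumption, not the output of any particular GIMPS program.

def load_residues(path: str) -> dict[int, str]:
    """Parse "iteration,residue_hex" lines into {iteration: residue}."""
    residues: dict[int, str] = {}
    with open(path) as f:
        for line in f:
            iteration, residue = line.strip().split(",")
            residues[int(iteration)] = residue.lower()
    return residues

def first_divergence(a: dict[int, str], b: dict[int, str]) -> int | None:
    """Return the earliest shared iteration where the runs disagree, else None."""
    for it in sorted(a.keys() & b.keys()):
        if a[it] != b[it]:
            return it
    return None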
