Quote:
Originally Posted by axn
The figure of 91K is wrong. I think it is closer to 600K. James's site doesn't have P95 timing data for a FFT big enough to handle that exponent, and so uses timing from a smaller FFT, hence the discrepancy. If 87 bits is good enough for CPU TF, then 91 bits is correct for GPU TF.
EDIT: Compare the LL GHDays for an exponent 1/10th the size: http://www.mersenne.ca/exponent/332193019. An exponent 10 times the size should be at least 100 times the effort, so 600K might actually be a conservative estimate.

I've obtained roughly p^2.1 scaling for an assortment of software (PRP, LL, P-1) over broad ranges of p. Applying that as a long extrapolation here, I get 622,082 GHz-days: a few years of a Radeon VII, if the software existed, to do one gigadigit PRP test (3.34 years at Prime95's reported rate of 510 GHz-days/day on Linux & ROCm). Note that rate was at a 5M FFT, and I've seen considerable throughput dropoff at larger FFT lengths; of order half, on Windows.
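The arithmetic above can be sketched in a few lines (a rough check only; the p^2.1 exponent, the 622,082 GHz-days figure, and the 510 GHz-days/day rate are the ones quoted in this thread, and the gigadigit exponent is assumed to be the usual ~3.32G threshold):

```python
# Sketch of the p^2.1 effort-scaling extrapolation discussed above.
# Assumption: total test effort scales as p**2.1 for PRP/LL.

P_SMALL = 332_193_019        # the 1/10th-size exponent axn linked on mersenne.ca
P_GIGADIGIT = 3_321_928_171  # assumed smallest exponent giving a gigadigit Mersenne number

# Effort ratio for a ~10x larger exponent under p^2.1 scaling
ratio = (P_GIGADIGIT / P_SMALL) ** 2.1
print(f"effort ratio: {ratio:.0f}x")   # ~126x, consistent with "at least 100 times the effort"

# Wall-clock time for one gigadigit PRP test at the quoted Radeon VII rate
total_ghz_days = 622_082
rate_ghz_days_per_day = 510
years = total_ghz_days / rate_ghz_days_per_day / 365.25
print(f"{years:.2f} years")            # ~3.34 years
```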
An LL test should only be considered on large exponents if an initial PRP/GEC/Proof/Cert test sequence yields a probably-prime result. LL, even with the Jacobi check, is simply too likely to have an undetected error in such long computations. LL confirmation would probably best be done with different software and hardware, with frequent comparison of interim residues.