2012-01-26, 21:11  #12 
"Nathan"
Jul 2008
Maryland, USA
1115_{10} Posts 
Is that 5060 years on a single core, or on multiple cores? If the former, how long do you get if you run on multiple cores?

2012-01-27, 02:34  #13 
6809 > 6502
"""""""""""""""""""
Aug 2003
8596_{10} Posts 

2012-01-27, 03:06  #14 
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3×29×83 Posts 
Classic.
When was that estimate made?

Last fiddled with by Dubslow on 2012-01-27 at 03:10 Reason: sed -e 's/.\ /.\\n\\n/' -i 
2012-01-27, 09:50  #15 
Banned
"Luigi"
Aug 2002
Team Italia
2·2,383 Posts 
AFAIK, the estimate (852 years) was made many years ago, using just one Pentium IV.

And for the record, there is another program actually faster than Prime95 on LL tests: CUDALucas, which works in parallel on GPUs with about double the throughput of an 8-core i5. The limitation is that it still doesn't handle very big exponents (but neither does Prime95).

Luigi 
2012-01-27, 12:49  #17  
"Forget I exist"
Jul 2009
Dumbassville
2^{6}·131 Posts 
Quote:
days by my count about 34 years. 

2012-01-27, 14:41  #18 
Oct 2011
2A7_{16} Posts 

2012-01-27, 16:17  #19  
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
1110000110101_{2} Posts 
Quote:
@_ET, a GTX 460 gets around 22.5x the throughput of one of my cores, but a 580 would obviously get quite a bit more. If I used all four CPU cores, I think I could get 15 GD/d (being generous here), so that's 250/3 ≈ 80 years. Either way, the OP still shouldn't even think about it. 

2017-07-26, 19:12  #20 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2^{6}×3×23 Posts 
Extrapolation from GTX480 run times gives ~14 years for current fastest GPUs w/ software changes.
From running a selection of exponents p under 90M, I obtained a best regression fit to a power law p^2.03 and a (widely extrapolated) forecast of 1305 days ~ 3.57 years for p ~ 10^9 on a GTX480 running CUDALucas 2.05.1. More modern GPUs are up to three times as fast at a given fft length per the mersenne.ca benchmarks, so a billion-bit candidate could be run in about 1.2 years now.
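The kind of log-log regression described above can be sketched mechanically. Note the timing data below are synthetic placeholders generated from the quoted p^2.03 law and the 1305-day figure, not the poster's actual measurements; only the fitting mechanics are illustrated:

```python
import math

# Fit time = a * p**b by ordinary least squares in log-log space.
# The (p, days) pairs are illustrative placeholders, NOT the poster's
# measurements; they follow an exact p**2.03 law by construction.
p_vals = [20e6, 40e6, 60e6, 80e6, 90e6]
day_vals = [1305.0 * (p / 1e9) ** 2.03 for p in p_vals]  # stand-in timings

xs = [math.log(p) for p in p_vals]
ys = [math.log(d) for d in day_vals]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)   # slope = power-law exponent
a = math.exp(mean_y - b * mean_x)        # intercept -> prefactor

print(f"fitted power-law exponent: {b:.2f}")
print(f"extrapolated run time at p=1e9: {a * 1e9 ** b:.0f} days")
```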
CUDALucas currently supports up to about 1.14 billion as an exponent (1143276383) with its maximum implemented fft length of 65536K. (A GTX480 or other card with 1.5 GB or 2 GB of VRAM or less won't run that max length, but other existing cards with 3 GB or more will run -threads benchmarks to the maximum implemented length.) The NVIDIA cufft library supports up to twice that, so conceivably CUDALucas could be expanded by some determined and talented programmer to about p = 2.25 billion for cards with adequate memory (6-8 GB?). Beyond that it would require either a future increase in what NVIDIA chooses to support in their library, a custom-created fft, or alternate approaches (perhaps a dash of Karatsuba, which would slow things).

If NVIDIA were to double the supported cufft length again, and it fit on the existing fastest cards, and CUDALucas or equivalents were modified to support larger exponents, run time on one fast GPU would be somewhere around an estimated 13.7 years for p ~ 3.322 billion, an exponent corresponding to a billion-decimal-digit candidate. That's one exponent, with a tiny chance of being prime.

Trial factoring software mfaktc is ready up to 4,294,967,291, but P-1 factoring software CUDAPm1 has the same fft length limits as CUDALucas and so would also require extension, and run times for it longer than current typical LL test times would be justified to qualify a candidate for such a lengthy run.

I think "prime95" might recommend saving money on electricity until faster, more electrically efficient GPUs come out in the next few years, then paying for the new faster card with the saved cost of electricity. Running a 180W GPU for 14 years currently costs, in my neighborhood at about a dollar per watt-year, ~$2500. 
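The extrapolation and electricity arithmetic in the post above can be sketched as follows. All constants are taken from the post itself (fit exponent 2.03, 1305 days at p ≈ 10^9 on a GTX480, up-to-3x modern-GPU speedup, 180 W at roughly $1 per watt-year); this is a back-of-envelope reproduction, not a benchmark:

```python
# Back-of-envelope reproduction of the post's run-time and cost figures.
# All constants come from the post, none are independently measured.
REF_P = 1.0e9          # reference exponent
REF_DAYS = 1305.0      # fitted LL run time at REF_P on a GTX480
FIT_EXPONENT = 2.03    # from the post's regression fit
MODERN_SPEEDUP = 3.0   # newer GPU vs GTX480 (post's upper bound)

def ll_days(p, speedup=1.0):
    """Extrapolated LL run time in days for exponent p."""
    return REF_DAYS * (p / REF_P) ** FIT_EXPONENT / speedup

def electricity_cost(days, watts=180.0, dollars_per_watt_year=1.0):
    """Rough electricity cost for a run of the given length."""
    return watts * (days / 365.25) * dollars_per_watt_year

# Billion-decimal-digit candidate: p ~ 3.322e9 (10^9 / log10(2)).
p_gigadigit = 3.322e9
days_modern = ll_days(p_gigadigit, speedup=MODERN_SPEEDUP)
print(f"~{days_modern / 365.25:.1f} years on a modern GPU")
print(f"~${electricity_cost(days_modern):.0f} in electricity")
```

The ~13.7-year and ~$2500 figures quoted in the post fall out of exactly this arithmetic.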
2017-07-26, 21:46  #21 
∂^{2}ω=0
Sep 2002
República de California
2·13·443 Posts 
@KenK: Your ~15 years is about the same time estimate I get for my code running on a cutting-edge many-core Xeon server or Knights Landing workstation. (And I actually timed out some 10K-iter partial runs on the smallest several 1-Gdigit M-number exponents at the smallest FFT length needed for such, 192M.) More cost-efficient would be an 8-core AMD Ryzen, on which such a test would need ~60 years. Not sure how the lifetime cost-of-ownership math there stacks up vs your GPU.

2017-07-26, 23:52  #22 
Undefined
"The unspeakable one"
Jun 2006
My evil lair
3×23×83 Posts 
Tests can be migrated. You don't need to run the entire test on the same machine. As the years pass you can upgrade the hardware, copy over the current state, and continue from there. So the time estimates are maximums; actual run times will be lower once the expected future advances in hardware capability are taken into account.
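The effect described here can be quantified under a simple idealization: if throughput doubles every D years and the job is freely migrated to the newest hardware, then a test worth W years of today's hardware time finishes when the integral of 2^(s/D) reaches W, i.e. at t = D·log2(1 + W·ln 2 / D). A hedged sketch, where the 14-year workload and 2-year doubling period are illustrative assumptions, not GIMPS figures:

```python
import math

def migrated_runtime(work_years, doubling_years):
    """Wall-clock time to finish `work_years` of today's-hardware work
    when the job is continuously migrated to hardware whose throughput
    doubles every `doubling_years` (idealized continuous improvement).

    Solves: integral_0^t 2**(s/D) ds = W
         => t = D * log2(1 + W * ln(2) / D)
    """
    D, W = doubling_years, work_years
    return D * math.log2(1.0 + W * math.log(2) / D)

# Illustrative assumption: a nominal 14-year test, hardware doubling
# in speed every 2 years.
print(f"{migrated_runtime(14, 2):.1f} years instead of 14")
```

Under those (optimistic) assumptions the 14-year estimate shrinks to roughly five years, which is why the fixed-hardware figures upthread are best read as upper bounds.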

Similar Threads  
Thread  Thread Starter  Forum  Replies  Last Post 
New Mersenne primality test  Prime95  Miscellaneous Math  19  2014-08-23 04:18 
A (new) old, (faster) slower mersenne(primality) PRP test  boldi  Miscellaneous Math  74  2014-04-17 07:16 
The fastest primality test for Fermat numbers.  Arkadiusz  Math  6  2011-04-05 19:39 
LLT Cycles for Mersenne primality test: a draft  T.Rex  Math  1  2010-01-03 11:34 
Mersenne Primality Test in Hardware  Unregistered  Information & Answers  4  2007-07-09 00:32 