2010-03-07, 22:00  #1 
"Jason Goatcher"
Mar 2005
6663_{8} Posts 
Doing more about LL test verification
Won't stick around in here to bug you guys, but quick question:
Since Prime95 uses rounding to do its math, it seems inevitable to me that at some point an error will happen twice and therefore generate the same residue. Basically, the CPU would be working properly, but the rounding-error correction method might let a few bad calculations accidentally slide through. Is there ever going to be a dedicated program to go back, say, a decade from now, and rerun this stuff with an integer-based LLR program? This is assuming, of course, that the floating-point method stays tremendously faster. Sorry if I phrased anything badly; hope it makes enough sense to be answered. 
2010-03-07, 22:51  #2  
Jun 2003
19×271 Posts 
Quote:
Longer answer: Quote:


2010-03-07, 22:53  #3  
"Mark"
Apr 2003
Between here and the
2^{2}·1,607 Posts 
Quote:
I'm certain that someone more familiar with the hardware level could explain more regarding the FPU and how it differs from machine to machine. 

2010-03-08, 00:51  #4 
Tribal Bullet
Oct 2004
3543_{10} Posts 
The nature of the arithmetic used in the LL test is such that if one number out of millions is wrong in a given iteration, that error is propagated to the entire number in the next iteration. So if you get even one error in the course of an LL test, nothing beyond that point will be the same. Your hardware error would have to be in the same place at the same time to also fool a double-check run, which is sufficiently unlikely in a weeks-long test that nobody assumes it will happen.

The other danger in a long run like that is actual incorrect rounding, which would affect even perfectly working hardware. We use balanced representation, i.e. we represent multiple-precision numbers as an array of positive or negative words. Multiplication results with this scheme are centered about zero on average, and the parameters chosen actually allow the possibility of a multiplication result not fitting in a 53-bit floating-point mantissa. So if enough of the words in a multiple-precision number had the same sign, the convolution would fail even with no hardware error.

To combat this possibility, double-check runs compute the Lucas-Lehmer test with the initial residue multiplied by a random power of two. This still allows the original residue to be recovered, but changes all the intermediate results used to compute it, so a freakish round-off failure is not expected to repeat exactly. If instead the double-check incurs its own freakish round-off error, that is also okay; you can keep doing double-checks with different initial shifts until you get a pair of runs in which no freakish round-off error occurs. So, bottom line: no archaeology on previous results is needed. 
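The shifted-start scheme described above can be sketched with plain integer arithmetic (a toy model, not Prime95's floating-point code; the function name is mine). Starting from 4·2^k mod Mp, the shift doubles every iteration (since 2^p ≡ 1 mod Mp it can be tracked mod p), and undoing the accumulated shift at the end recovers the unshifted residue:

```python
# Toy integer model of a shifted Lucas-Lehmer run (illustrative only).
# If s' = s * 2^k (mod Mp), then s'^2 - 2^(2k+1) = (s^2 - 2) * 2^(2k),
# so the iteration can run on the shifted value while the shift amount
# simply doubles (mod p) each iteration.
def ll_shifted(p, k):
    m = (1 << p) - 1                       # Mp = 2^p - 1
    s = (4 * pow(2, k, m)) % m             # shifted starting value 4 * 2^k
    for _ in range(p - 2):
        s = (s * s - pow(2, 2 * k + 1, m)) % m
        k = (2 * k) % p                    # shift doubles every iteration
    return (s * pow(2, (p - k) % p, m)) % m  # undo shift: multiply by 2^-k
```

Runs with different k produce completely different intermediate values, yet `ll_shifted(11, 0)`, `ll_shifted(11, 3)` and `ll_shifted(11, 7)` all return the same nonzero final residue (M11 = 2047 is composite), while any shift with p = 13 returns 0 because M13 = 8191 is prime.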
2010-03-08, 01:22  #5 
P90 years forever!
Aug 2002
Yeehaw, FL
2^{4}·3^{2}·53 Posts 
Prime95 does more checking when running near the upper limit of an FFT size. If a round-off error is between 0.4 and 0.6, the iteration is redone using a slower, but safer, method.

Thus, for an incorrect double-check to slip by, you'd need an LL test where the round-off error was greater than 0.6 (very, very rare), and the double-check LL test also had an error greater than 0.6 (very, very rare), and the residues matched (thanks to shifting of the initial FFT data, this will happen in 1 in 2^64 occurrences). Multiply that out and you get one *really* rare event, though not impossible.

Now let's say this really, really, really rare event happened. The chance that the corrected LL test yields a new Mersenne prime? Tack on another 1 in 300,000 chance. In summary, I'm not losing any sleep worrying that an undiscovered Mersenne prime lies among the double-checked exponents. 
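The round-off check described here might look like this in outline (a hypothetical sketch with invented names; Prime95's real code works on FFT data in place). Every output of the floating-point convolution should be a whole number, so its distance from the nearest integer measures the accumulated round-off:

```python
# Hypothetical sketch of a round-off check (names are invented; the
# idea follows the post: past the warning level, redo the iteration
# with a slower but exact method).
def roundoff_error(words):
    """Largest distance of any convolution output from an integer."""
    return max(abs(w - round(w)) for w in words)

def finish_iteration(words, warn_limit=0.4):
    """Round outputs to integers, or signal that the iteration must be
    redone with a safer multiply."""
    err = roundoff_error(words)
    if err >= warn_limit:
        return None, err                   # caller re-does the iteration
    return [int(round(w)) for w in words], err
```

An error of 0.5 or more means some word was rounded to the wrong integer and the result is silently garbage, which is why the redo threshold sits safely below that.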
2010-03-09, 17:10  #6  
Oct 2008
n00bville
2^{3}×7×13 Posts 
Quote:


2010-03-09, 17:32  #7 
"Ben"
Feb 2007
DFA_{16} Posts 
How paranoid are you? Let's say the chance of a very, very rare event is 1 in 1000. So what we have here is the probability of four independent events happening, with a combined probability of 1 in (1000 * 1000 * 2^64 * 300000). There are approximately 3 * 10^23 primes between 0 and the number in parentheses, so one would need to do approximately that many LL tests and double-checks before expecting one of them to mistakenly cover up the existence of a Mersenne prime. If an LL test took 1 second on some hypothetical computer, you would need 10000 years with 1 trillion such computers to test this many exponents.

Last fiddled with by bsquared on 2010-03-09 at 17:32 Reason: typo 
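The multiplication above is easy to check (using the stated assumptions: 1 in 1000 for each freak round-off event, 1 in 2^64 for a residue match, 1 in 300,000 that the exponent hides a prime):

```python
# Combined odds under the post's assumptions: two independent
# 1-in-1000 round-off freaks, a 1-in-2^64 residue collision, and a
# 1-in-300000 chance the corrected test finds a Mersenne prime.
denom = 1000 * 1000 * 2**64 * 300000
print(f"1 in {denom:.3e}")    # roughly 1 in 5.5 * 10^30
```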
2010-03-09, 19:15  #8  
"Ben"
Feb 2007
2·1,789 Posts 
Quote:


2010-03-09, 22:34  #9 
Oct 2008
n00bville
2^{3}·7·13 Posts 
I know that in operating systems even rare concurrency problems (SMP races and deadlocks) are designed around. The same principle should be applied in software development generally.

2010-03-09, 23:36  #10 
Tribal Bullet
Oct 2004
3·1,181 Posts 
One failed operation in 10^50 is not 'rare'; it can be safely described as 'impossible'.

2010-03-09, 23:52  #11  
"Richard B. Woods"
Aug 2002
Wisconsin USA
1111000001100_{2} Posts 
(By "this problem", I assume you mean rounding errors.) Sure, there is: use only integer arithmetic.
Quote:
As long as Intel (and others) keep cranking out CPUs with large amounts of circuitry dedicated to performing floating-point operations significantly faster than equivalent integer operations, integer-based GIMPS-related calculations will take significantly more time than floating-point implementations of the same calculations. Since the FP rounding-error probabilities can be reduced to extremely low values (according to some folks' scale, anyway) by a relatively small amount of guarding calculation, it's generally deemed worthwhile for the sake of the large time savings.

Furthermore, other error-detection and avoidance measures (e.g., pseudorandom bit shifts with double-checking) also reduce the likelihood that an FP rounding error will affect results.

A rare concurrency problem in an operating system could affect possibly-life-critical applications or otherwise have almost-unforeseeable consequences. An error in GIMPS simply does not pose risks of the same magnitude, and the consequences are confined to a small area of mathematics practically unconnected to other aspects of daily living.

You're welcome to develop further error-avoiding measures for addition to GIMPS FP routines, or to use only integer-based FFT software, if you deem it worthwhile to spend your time doing so.

Last fiddled with by cheesehead on 2010-03-10 at 00:00 
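The "use only integer arithmetic" option, taken literally, is just a few lines with arbitrary-precision integers; exact (no rounding anywhere), but orders of magnitude slower than FFT-based floating point at GIMPS sizes. A minimal sketch:

```python
# Pure-integer Lucas-Lehmer test: no rounding, so no round-off errors
# are possible -- only raw speed is sacrificed.
def lucas_lehmer(p):
    """True iff the Mersenne number 2^p - 1 is prime (p an odd prime)."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m       # exact modular squaring
    return s == 0
```

For example, `lucas_lehmer(13)` is True (8191 is prime) while `lucas_lehmer(11)` is False (2047 = 23 * 89).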
