2006-08-05, 20:31  #1 
Jun 2005
17E_{16} Posts 
double precision in LL tests
Hi all,
In order to learn more of the math behind LucasLehmer testing, I wrote my own C code implementation of the algorithm. It's working, but I think I may be losing some precision somewhere in the calculations, and I'm wondering what I should expect. Through trialanderror, I've discovered that the computation will succeed if the largest base of the fft array (prior to weighting) is 2^20 or less, and will fail if it is 2^22 or more. I haven't encountered a case, yet, that required 2^21, so the performace there is still unknown. What I'm wondering is what amount of precision should I expect? Doubleprecision floating point format uses a 53bit mantissa. I'd expect that to be cut in half because of the squaring, and then subtract perhaps a couple of bits to account for additions and rounding errors. That should still leave things at 24 bits or so, mayble 23, as far as I can tell. Are my expectations unreasonable, or am I losing some precision in my code? Thanks, Drew 
2006-08-05, 20:45  #2  
Tribal Bullet
Oct 2004
6727_{8} Posts 
Quote:
jasonp 

2006-08-05, 20:47  #3  
Jun 2005
2×191 Posts 
Quote:


2006-08-07, 20:31  #4 
∂^{2}ω=0
Sep 2002
República de California
10110110011100_{2} Posts 
A more precise estimate, which also shows how the accuracy decreases (quite slowly, though) with increasing FFT length, can be found here. Two out of three authors of that paper have posted to this thread, so hey, it must be true. ;)
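For readers without the paper to hand, a common back-of-the-envelope version of the accuracy argument runs as follows (a rough heuristic only, not the paper's sharper estimate; g is a few guard bits I'm introducing to cover weighting and rounding):

```
2b + log2(N) + g       <= 53   (worst case: all N round-off errors add up)
2b + (1/2)*log2(N) + g <= 53   (random-walk model: errors mostly cancel)
```

Here b is the number of bits per FFT digit and N is the FFT length. The squaring doubles the digit size to about 2b bits, and summing N convolution terms contributes the log2(N) term. Because the round-off errors behave more like a random walk than a worst case, the effective loss grows roughly like half of log2(N), which is why the achievable b shrinks only slowly as the FFT length grows.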

2006-08-08, 04:08  #5  
Jun 2005
2×191 Posts 
Quote:
FWIW, I switched to a balanced-digit representation, and now it works through 2^22, at least when testing the primes M11213, M21701, M44497 and M86243.

I now understand the cause of that limit... the depth of the FFT requires more additions of values, and I had been thinking in terms of decimal, not binary, digits when I said it should only account for a couple of digits of precision. I expect that the loss of precision should grow *much* more slowly beyond the numbers I've already tested (up to the 34th Mersenne prime).

My next optimization will be to convert to a real-valued FFT, hopefully cutting the FFT size for most exponents in half, but since the data will be twice as dense, I expect to lose that bit I gained going to the balanced-digit representation.

Thanks,

Drew

Last fiddled with by drew on 2006-08-08 at 04:18 

Thread Tools  
Similar Threads  
Thread  Thread Starter  Forum  Replies  Last Post 
How do first-time and double-check primality tests work?  ChemicalCat59  Information & Answers  9  2017-03-23 11:14 
x.265 half the size, double the computation; so if you double again? 1/4th?  jasong  jasong  7  2015-08-17 10:56 
translating double to single precision?  ixfd64  Hardware  5  2012-09-12 05:10 
Fast double precision Division  __HRB__  Programming  21  2012-01-10 02:10 
Double precision GPUs coming very soon  dsouza123  Hardware  4  2007-10-15 02:20