2006-08-05, 20:31   #1
drew
 

Jun 2005

double precision in LL tests

Hi all,

In order to learn more of the math behind Lucas-Lehmer testing, I wrote my own C implementation of the algorithm. It's working, but I think I may be losing some precision somewhere in the calculations, and I'm wondering what I should expect.
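
For reference, the recurrence I'm implementing is s_0 = 4, s_{k+1} = s_k^2 - 2 (mod M_p), with M_p = 2^p - 1 prime exactly when s_{p-2} is 0. A stripped-down, integer-only version of what I mean looks something like this (no FFT, so it only handles small exponents; the 128-bit squaring is the step the FFT multiply replaces, and __int128 assumes gcc or clang):

Code:
#include <stdio.h>
#include <stdint.h>

/* Lucas-Lehmer test for M_p = 2^p - 1, small p only (3 <= p <= 63). */
int ll_is_prime(unsigned p)
{
    uint64_t m = (1ULL << p) - 1;          /* M_p */
    uint64_t s = 4;                        /* s_0 */
    for (unsigned i = 0; i < p - 2; i++) {
        unsigned __int128 sq = (unsigned __int128)s * s;
        s = (uint64_t)(sq % m);            /* s^2 mod M_p ...     */
        s = (s >= 2) ? s - 2 : s + m - 2;  /* ... then subtract 2 */
    }
    return s == 0;                         /* 0 iff M_p is prime */
}

int main(void)
{
    printf("M_13: %d\n", ll_is_prime(13)); /* 8191 is prime -> 1 */
    printf("M_11: %d\n", ll_is_prime(11)); /* 2047 = 23*89  -> 0 */
    return 0;
}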

Through trial and error, I've discovered that the computation succeeds if the largest base of the FFT array (prior to weighting) is 2^20 or less, and fails if it is 2^22 or more. I haven't yet encountered a case that required 2^21, so the behavior there is still unknown.
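
In case it matters, the way I understand the failure is that after the inverse transform the elements stop rounding to the correct integers. From what I've read, the usual way to quantify this is to track the maximum round-off error, i.e. how far each element drifts from the nearest integer. A minimal sketch of that check (x[] is just a placeholder for whatever holds the un-carried convolution outputs; the 0.4 cutoff is the commonly quoted danger threshold, not something I've verified):

Code:
#include <stddef.h>
#include <math.h>

/* After the inverse FFT every element should be (nearly) an integer.
   The worst distance from the nearest integer shows how much of the
   53-bit mantissa actually survived the transform. */
double max_roundoff_error(const double *x, size_t n)
{
    double worst = 0.0;
    for (size_t i = 0; i < n; i++) {
        double err = fabs(x[i] - round(x[i]));
        if (err > worst)
            worst = err;
    }
    return worst;    /* values near 0.4-0.5 mean the result is suspect */
}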

What I'm wondering is: how much precision should I expect? The double-precision floating-point format has a 53-bit mantissa. I'd expect that to be cut in half because of the squaring, and then to lose perhaps a couple more bits to additions and rounding errors. That should still leave things at 24 bits or so, maybe 23, as far as I can tell.
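
One thing I'm not sure about is whether the transform length has to come out of the same budget: each convolution output is a sum of N products of b-bit words, so it can reach roughly N * 2^(2b), which would mean 2b + log2(N), plus a few guard bits for rounding, has to fit in 53. A quick sketch of that version of the estimate (the guard_bits value is just a fudge factor I made up, and a balanced-digit representation would presumably buy some of it back):

Code:
#include <stdio.h>
#include <math.h>

/* Estimate the largest usable word size b from the requirement
   2*b + log2(N) + guard_bits <= 53 for an N-point transform. */
double max_bits_per_word(double fft_length, double guard_bits)
{
    return (53.0 - log2(fft_length) - guard_bits) / 2.0;
}

int main(void)
{
    /* e.g. a 2^20-point transform with 3 guard bits -> 15 bits/word */
    printf("%.1f\n", max_bits_per_word((double)(1 << 20), 3.0));
    return 0;
}

That's a worst-case bound, though; with real data the individual rounding errors presumably cancel to some extent, so the practical limit should sit a bit higher.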

Are my expectations unreasonable, or am I losing some precision in my code?

Thanks,
Drew