double precision in LL tests
Hi all,
In order to learn more of the math behind Lucas-Lehmer testing, I wrote my own C implementation of the algorithm. It works, but I think I may be losing some precision somewhere in the calculations, and I'm wondering what I should expect.
Through trial and error, I've discovered that the computation succeeds if the largest base of the FFT array (prior to weighting) is 2^20 or less, and fails if it is 2^22 or more. I haven't yet encountered a case that required 2^21, so how it performs there is still unknown.
What I'm wondering is: what amount of precision should I expect? The double-precision floating-point format has a 53-bit mantissa. I'd expect that to be cut in half because of the squaring, and then to lose perhaps a couple more bits to additions and rounding errors. That should still leave things at 24 bits or so, maybe 23, as far as I can tell.
Are my expectations unreasonable, or am I losing some precision in my code?
Thanks,
Drew
