mersenneforum.org  

Old 2006-08-05, 20:31   #1
drew
 
Jun 2005

17E₁₆ Posts
Default double precision in LL tests

Hi all,

In order to learn more of the math behind Lucas-Lehmer testing, I wrote my own C code implementation of the algorithm. It's working, but I think I may be losing some precision somewhere in the calculations, and I'm wondering what I should expect.
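
(For context, a minimal sketch of the recurrence being implemented: s(0) = 4, s(k+1) = s(k)^2 - 2 mod 2^p - 1, and M_p is prime exactly when s(p-2) is 0. This illustration uses plain 64-bit integers, so it only reaches p < 32 -- it is not drew's FFT code, just the arithmetic the FFT version accelerates.)

Code:
/* Minimal Lucas-Lehmer test for small exponents (p < 32) using 64-bit integers. */
#include <stdio.h>
#include <stdint.h>

int lucas_lehmer(unsigned p)
{
    uint64_t m = ((uint64_t)1 << p) - 1;   /* the Mersenne number 2^p - 1 */
    uint64_t s = 4;
    for (unsigned i = 0; i < p - 2; i++) {
        uint64_t sq = s * s;               /* fits in 64 bits because s < 2^31 */
        s = (sq & m) + (sq >> p);          /* fast reduction mod 2^p - 1 */
        if (s >= m) s -= m;
        s = (s + m - 2) % m;               /* subtract 2, staying non-negative */
    }
    return s == 0;
}

int main(void)
{
    unsigned primes[] = { 3, 5, 7, 13, 17, 19, 31 };  /* known Mersenne prime exponents */
    for (size_t i = 0; i < sizeof primes / sizeof primes[0]; i++)
        printf("M%u %s\n", primes[i], lucas_lehmer(primes[i]) ? "is prime" : "is composite");
    return 0;
}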

Through trial and error, I've discovered that the computation succeeds if the largest base of the FFT array elements (prior to weighting) is 2^20 or less, and fails if it is 2^22 or more. I haven't yet encountered a case that required 2^21, so the behavior there is still unknown.

What I'm wondering is how much precision I should expect. The double-precision floating-point format has a 53-bit mantissa. I'd expect that to be cut in half because of the squaring, minus perhaps a couple more bits to account for additions and rounding errors. That should still leave things at 24 bits or so, maybe 23, as far as I can tell.

Are my expectations unreasonable, or am I losing some precision in my code?

Thanks,
Drew
Old 2006-08-05, 20:45   #2
jasonp
Tribal Bullet
 
Oct 2004

6727₈ Posts
Default

Quote:
Originally Posted by drew
What I'm wondering is how much precision I should expect. The double-precision floating-point format has a 53-bit mantissa. I'd expect that to be cut in half because of the squaring, minus perhaps a couple more bits to account for additions and rounding errors. That should still leave things at 24 bits or so, maybe 23, as far as I can tell.

Are my expectations unreasonable, or am I losing some precision in my code?
Your experience mirrors that of other implementations; for latter-day size exponents I think 20-21 bits per array element is the best you can hope for, even with balanced digit representation.
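
(A back-of-the-envelope way to see where a figure like that comes from -- a heuristic sketch only, not the rigorous bound from the paper linked later in the thread: with b bits per balanced digit, the pointwise squares need about 2b bits; the length-N convolution adds roughly lg(N)/2 more, since balanced digits let the rounding errors partially cancel rather than all pile up; whatever remains of the 53-bit mantissa after a few guard bits is the budget.)

Code:
/* Heuristic estimate of usable bits per FFT element, NOT a rigorous bound. */
#include <math.h>
#include <stdio.h>

int heuristic_bits_per_element(double fft_length, double guard_bits)
{
    /* 2b + lg(N)/2 + guard <= 53  =>  b <= (53 - guard - lg(N)/2) / 2 */
    return (int)floor((53.0 - guard_bits - 0.5 * log2(fft_length)) / 2.0);
}

int main(void)
{
    for (int lg = 12; lg <= 24; lg += 4)
        printf("FFT length 2^%d: roughly %d bits per element\n",
               lg, heuristic_bits_per_element(pow(2.0, lg), 3.0));
    return 0;
}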

jasonp
Old 2006-08-05, 20:47   #3
drew
 
Jun 2005

2×191 Posts
Default

Quote:
Originally Posted by jasonp
Your experience mirrors that of other implementations; for latter-day size exponents I think 20-21 bits per array element is the best you can hope for, even with balanced digit representation.

jasonp
Thanks.
Old 2006-08-07, 20:31   #4
ewmayer
2ω=0
 
Sep 2002
República de California

10110110011100₂ Posts
Default

A more precise estimate, which also shows how the accuracy decreases (quite slowly though) with increasing FFT length can be found here - two out of three authors of that paper have posted to this thread, so hey, it must be true. ;)
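
(On the practical side, a minimal sketch of the runtime check most floating-point LL codes pair with that kind of estimate -- a hypothetical helper, not something taken from the paper: after the inverse transform the convolution outputs should be near-integers, and the largest fractional part observed is a direct measure of how much of the 53-bit mantissa has been consumed.)

Code:
/* Track the worst round-off error in the convolution outputs.  If this value
 * creeps toward 0.5, the next iteration may round to the wrong integer. */
#include <math.h>

double max_roundoff_error(const double *conv, int n)
{
    double maxerr = 0.0;
    for (int i = 0; i < n; i++) {
        double err = fabs(conv[i] - nearbyint(conv[i]));  /* distance to nearest integer */
        if (err > maxerr)
            maxerr = err;
    }
    return maxerr;   /* e.g. abort or move to a longer FFT if this exceeds ~0.4 */
}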
Old 2006-08-08, 04:08   #5
drew
 
Jun 2005

2×191 Posts
Default

Quote:
Originally Posted by ewmayer
A more precise estimate, which also shows how the accuracy decreases (quite slowly though) with increasing FFT length can be found here - two out of three authors of that paper have posted to this thread, so hey, it must be true. ;)
That's why I came here to ask.

FWIW, I switched to a balanced-digit representation, and now it works through 2^22, at least when testing the primes M11213, M21701, M44497 and M86243.
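
(For illustration, a minimal sketch of that conversion, assuming little-endian digits and an even base B: any digit of B/2 or more is replaced by the digit minus B, plus a carry of 1 into the next position, so every element ends up in [-B/2, B/2) and the convolution terms are roughly half as large.)

Code:
/* Convert ordinary base-B digits (little-endian) to balanced digits in [-B/2, B/2). */
#include <stdio.h>

void to_balanced(long long *d, int n, long long B)
{
    long long carry = 0;
    for (int i = 0; i < n; i++) {
        d[i] += carry;
        if (d[i] >= B / 2) {          /* too large: make it negative, carry into next digit */
            d[i] -= B;
            carry = 1;
        } else {
            carry = 0;
        }
    }
    /* assuming a 2^p - 1 modulus with p = n*log2(B), a carry out of the top
     * digit wraps around to digit 0 (a single wrap is enough for this sketch) */
    d[0] += carry;
}

int main(void)
{
    long long d[4] = { 900, 123, 1000, 7 };   /* base-1024 digits, little-endian */
    to_balanced(d, 4, 1024);
    for (int i = 0; i < 4; i++) printf("%lld ", d[i]);  /* expect: -124 124 -24 8 */
    printf("\n");
    return 0;
}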

I now understand the cause of that limit: a deeper FFT means more additions accumulate into each output value, and I had been thinking in decimal rather than binary digits when I said the additions should only cost a couple of digits of precision. I expect the loss of precision to grow *much* more slowly beyond the exponents I've already tested (up to the 34th Mersenne prime).

My next optimization will be to convert to a real-valued FFT, hopefully cutting the FFT size in half for most exponents, but since the data will be twice as dense, I expect to lose the bit I gained by going to the balanced-digit representation.
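
(A minimal sketch of that real-valued approach, shown here with FFTW's real-to-complex interface purely for illustration -- an assumption, not what drew's code uses: the forward real transform of length n produces only n/2+1 complex bins, each bin is squared pointwise, and the inverse transform yields the convolution, which is then rounded and carried. The example squares 123456789 held as base-10^4 digits.)

Code:
/* Square a multi-digit integer via a real-to-complex FFT (cyclic convolution).
 * Build: gcc fft_square.c -lfftw3 -lm */
#include <stdio.h>
#include <math.h>
#include <fftw3.h>

#define BASE 10000      /* base-10^4 digits keep the convolution terms small here */

int main(void)
{
    /* 123456789 in little-endian base-10^4 digits, zero-padded to the FFT length */
    const int n = 8;                       /* >= 2 * number of digits, power of two */
    double x[8] = { 6789, 2345, 1, 0, 0, 0, 0, 0 };
    double y[8];
    fftw_complex X[8 / 2 + 1];             /* real FFT: only n/2+1 bins are needed */

    fftw_plan fwd = fftw_plan_dft_r2c_1d(n, x, X, FFTW_ESTIMATE);
    fftw_plan inv = fftw_plan_dft_c2r_1d(n, X, y, FFTW_ESTIMATE);

    fftw_execute(fwd);

    /* squaring in the digit domain is pointwise complex squaring in the frequency domain */
    for (int k = 0; k <= n / 2; k++) {
        double re = X[k][0], im = X[k][1];
        X[k][0] = re * re - im * im;
        X[k][1] = 2.0 * re * im;
    }

    fftw_execute(inv);                     /* unnormalized: results are scaled by n */

    /* round to integers, then propagate carries in base 10^4 */
    long long carry = 0;
    printf("digits (little-endian, base %d):", BASE);
    for (int i = 0; i < n; i++) {
        long long d = llround(y[i] / n) + carry;
        carry = d / BASE;
        d %= BASE;
        printf(" %lld", d);
    }
    printf("\n");   /* expect: 521 5019 5787 5241 1 0 0 0, i.e. 123456789^2 = 15241578750190521 */

    fftw_destroy_plan(fwd);
    fftw_destroy_plan(inv);
    return 0;
}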

Thanks,
Drew

Last fiddled with by drew on 2006-08-08 at 04:18