#12
Romulan Interpreter
Jun 2011
Thailand
5·1,877 Posts
@OP:
Yes, you are missing something: those numbers are wrong. A multiplication by a power of 2 (the random shift) rotates all the operands by that amount; therefore when you square, you double the shift, and when you subtract 2 from the result, that 2 is rotated too, because the result is rotated. See post #5, and the following discussion in posts #16, #17 and #18 in this thread.

OTOH, if you are suggesting that starting LL tests with different initial values, instead of the actual** random-shift method, would be better (sorry, but as the previous posters pointed out, I have no idea what you want from us with those numbers, so I am trying to guess your mind), then it won't work. Regardless of the value you start with, after a while the paths converge (squaring numbers mod x is not a bijection; all negatives become positives, so only half of the values are reached in the co-domain), therefore the FFT will work with the same numbers (or their negatives) anyway. That defeats the goal, which is to make the FFT deal with different data each time, so that a bug in the code would show up.

Note that while the other measures (GEC test, certification, etc.) target hardware errors, cosmic rays, the CPU going drunk, or the user's dishonesty (cheating for credits or whatever reason), the random shift targets possible errors in the software implementation of the FFT. If the FFT deals with completely different data each time and still produces the same result at the end (shifted or not), then we can be quite confident that the implementation is right.

Code:
Ex: 19:
Starts with 10: 10, 98, 9602, 448177, 409322, 200240, 160699, 412414, 313150, 282018, 338709, 353913, 150119, 286038, 129657, 199279, 1024, 0
Starts with 52: 52, 2702, 485071, 160883, 339071, 343957, 7723, 400296, 100378, 519603, 444087, 87082, 511841, 238249, 129657, 199279, 1024, 0
----------
**rather, "former": since the PRP tricks invented by people here, LL is no longer the "actual" method.

Last fiddled with by LaurV on 2021-03-02 at 06:36
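For concreteness, a minimal Python sketch of the iteration behind the sequences above (s -> s^2 - 2 mod 2^19 - 1, with the starting values 10 and 52 from the example; the helper name is only illustrative):
Code:
# Iterate s -> s^2 - 2 mod 2^p - 1 from two different starting values and
# print the trajectories; they merge after a few steps, as noted above.
def ll_sequence(p, s0):
    m = (1 << p) - 1
    seq = [s0 % m]
    for _ in range(p - 2):                      # p - 2 squarings, as in the LL test
        seq.append((seq[-1] * seq[-1] - 2) % m)
    return seq

for s0 in (10, 52):
    print(s0, "->", ll_sequence(19, s0))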
#13
Dec 2012
The Netherlands
11·151 Posts
If you are genuinely interested in different starting values (and not just understanding the current implementation),
read Bas Jansen's PhD thesis, available here: https://scholarlypublications.univer...dle/1887/20310
#14
Mar 2021
7₁₀ Posts
Quote:
If you like, I can post all the intermediate s values for p = 29 and 31 for s0 = 4, 8, and 16. It's not a problem, since I'm already calculating the residues for p < 32 with s0 = 4 << (x-2) = 2^x for x = 2..100000.

Here are the numbers of cases out of 99,999 where the residue was zero, for p < 32. Cases of p where the count of zero residues was zero out of 99,999 were tested but omitted from the list; these include the p for which 2^p - 1 is not a Mersenne prime and the p that are not prime.

p, zero residues, %
3, 33333, 33%
5, 20000, 20%
7, 28571, 28%
13, 23077, 23%
17, 23530, 23%
19, 26316, 26%
31, 25806, 25%

For the 99,999 simple cases of s0 = x, x = 2..100000, the counts of zero residues for the 7 Mp's tested are:

p, zero residues, %
3, 28572, 28%
5, 25807, 25%
7, 25197, 25%
13, 25007, 25%
17, 24979, 24%
19, 24900, 24%
31, 25025, 25%

My expectation is that for arbitrary starting values s0, an Mp has roughly a 1 in 4 chance of returning a zero residue, and that for a non-Mp, two different starting values give unrelated residues. Hopefully we'll hear from someone who can shed light on what's happening, since frankly, using a start value other than s0 = 4 is a waste of time: if you can't calculate that residue accurately, the rest isn't worth much.
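For reference, a rough Python sketch of the kind of tally in the first list above, counting how many starting values s0 = 2^x end in a zero residue for each Mersenne prime exponent below 32 (the function and variable names are mine):
Code:
# For each Mersenne prime exponent p < 32, count how many starting values
# s0 = 2^x (x = 2..100000) give a final LL residue of zero.
def ll_residue(p, s0):
    m = (1 << p) - 1
    s = s0 % m
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s

for p in (3, 5, 7, 13, 17, 19, 31):
    m = (1 << p) - 1
    zeros = sum(1 for x in range(2, 100001) if ll_residue(p, pow(2, x, m)) == 0)
    print(p, zeros, str(100 * zeros // 99999) + "%")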
#15
Feb 2017
Nowhere
1000110000100₂ Posts
Quote:
The subtrahend is being systematically shifted with each iteration. Instead of subtracting the constant value 2 after each squaring, when you start with 4*2^k you subtract 2*2^(2k) after the first squaring, 2*2^(4k) after the second, 2*2^(8k) after the third, etc. This is what makes the new sequence of residues merely a shifted version of the original.
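As a quick numerical check of that pattern, a small Python sketch (it ignores the mod 2^p - 1 reduction, and the variable names are mine):
Code:
# Start from 4 * 2^k; after each squaring subtract 2 * 2^(2k), then 2 * 2^(4k),
# then 2 * 2^(8k), ... and check the result is just the plain LL value, shifted.
k = 5
s_plain, s_shifted, shift = 4, 4 << k, k
for _ in range(4):
    s_plain = s_plain * s_plain - 2
    shift *= 2                                        # squaring doubles the shift
    s_shifted = s_shifted * s_shifted - (2 << shift)  # subtrahend is 2 * 2^shift
    assert s_shifted == s_plain << shift
print("the shifted sequence stays a shifted copy of the plain one")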
#16
Apr 2020
2×3²×13 Posts
Quote:
For example, let's say we shift left by 5 bits, so we start with 2^7 = 128 instead of 2^2 = 4. After the first squaring, the shift doubles to 10 bits, so we need to subtract a 2 shifted left by 10 bits, i.e. 2^11. The unshifted LL gives 4^2 - 2 = 14, or 1110 in binary. The shifted LL gives 128^2 - 2^11 = 14336, or 11100000000000 in binary, and this is just 1110 shifted left by 10 bits. At the next iteration the squaring doubles the shift again to 20 bits, so we must subtract 2^21, and so on.

Of course we reduce mod 2^p - 1 at each step, so we do the same to the power of 2 that we subtract: if p = 17, for example, then instead of subtracting 2^21 we subtract 2^4.

Edit: beaten to it, but more examples can only help.

Last fiddled with by charybdis on 2021-03-02 at 14:47
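And the same check with the mod 2^p - 1 reduction included, as a small Python sketch (p = 17 and the 5-bit shift are taken from the example above; the helper names are mine):
Code:
# Run the LL iteration for p = 17 with and without an initial 5-bit shift and
# check that every shifted residue is a left rotation of the plain one.
def rotl(x, r, p):
    # rotate a p-bit residue left by r bits, i.e. multiply by 2^r mod 2^p - 1
    r %= p
    return ((x << r) | (x >> (p - r))) & ((1 << p) - 1)

p, shift = 17, 5
m = (1 << p) - 1
s_plain, s_shifted = 4, rotl(4, shift, p)
for _ in range(p - 2):
    s_plain = (s_plain * s_plain - 2) % m
    shift = (2 * shift) % p                           # squaring doubles the shift (mod p)
    s_shifted = (s_shifted * s_shifted - rotl(2, shift, p)) % m
    assert s_shifted == rotl(s_plain, shift, p)
print("all intermediate residues agree, up to the rotation")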
#17
Mar 2021
111₂ Posts
Quote:
If the intention is to test the hardware and the FFT, wouldn't it be simpler and more efficient to generate p pseudorandom bits X and its conjugate Y = ((2^p - 1) Xor X) = (2^p - 1) - X, i.e. X + Y = X Or Y = 2^p - 1? Since (X*X - 2) mod (2^p - 1) = (Y*Y - 2) mod (2^p - 1), compute a 128-bit Xor checksum on each of the two results (Xor together 128-bit chunks of the data at a time). If the checksums don't match, do the two multiplications again; if the erroneous results come out the same, transmit the RNG seed and the two 128-bit checksums, because you just found a confirmed error in the math, so it's off to see the wizard.

If this test were performed every 1,000 or 10,000 multiplications during the LL test, the health of the computing device could be verified immediately, independently of the algorithm, and you wouldn't have to wait until the test is done, not knowing whether there is a problem until someone runs a second LL.

This way, you don't have to screw around with start values and can always use s0 = 4 with confidence. A good RNG can be found on the Wikipedia page for Xorshift: https://en.wikipedia.org/wiki/Xorshift

James
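A rough Python sketch of the consistency check being proposed here (this is not an existing GIMPS feature; the xorshift variant, its constants and all names are only illustrative):
Code:
# Generate a pseudorandom p-bit X and its conjugate Y = (2^p - 1) Xor X, then
# check that X*X - 2 and Y*Y - 2 agree mod 2^p - 1 (since Y = -X mod 2^p - 1).
MASK64 = (1 << 64) - 1

def xorshift64(state):
    # one step of a 64-bit xorshift generator (shift constants 13, 7, 17)
    state ^= (state << 13) & MASK64
    state ^= state >> 7
    state ^= (state << 17) & MASK64
    return state

def random_p_bits(p, seed):
    # assemble a p-bit pseudorandom integer from 64-bit xorshift words
    x, state = 0, seed
    while x.bit_length() < p:
        state = xorshift64(state)
        x = (x << 64) | state
    return x & ((1 << p) - 1)

p, seed = 127, 0x123456789ABCDEF
m = (1 << p) - 1
x = random_p_bits(p, seed)
y = m ^ x                                             # conjugate: x + y = 2^p - 1
assert (x * x - 2) % m == (y * y - 2) % m
print("the two squarings agree mod 2^p - 1")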
#18
Apr 2020
2×3²×13 Posts
Quote:
2. Using a shift is not "screwing around with start values". It's still using s0=4, but everything is shifted, including the -2. The shift is a separate concept from using an alternative starting value such as 10, which would give a completely different sequence of residues. If we run one test on a composite Mersenne with s0=4 and another with s0=10, we won't get matching residues, so this would not be a suitable method for double-checking (except for primes).

3. As LaurV says, the main intention of the shift is that it will pick up bugs in the FFT implementation: while morally we're performing the same calculations, the numbers that we need to square are bit-shifted, so the FFT computations are different. An FFT bug would therefore produce different incorrect residues with different shifts. Hardware errors are much easier to pick up: even without a shift, they would lead to residues that don't match, if they don't get caught by other means first.

4. GIMPS now prefers to use PRP rather than LL for primality tests, apart from double-checks of LL results. This is because there is a robust error-checking procedure called the Gerbicz error check which is almost certain to catch any errors during the test. PRP tests can also be certified, allowing a quick verification that the work was done correctly; this means that a double-check is not needed.
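To illustrate point 2, a small Python sketch comparing final LL residues for s0 = 4 and s0 = 10 (the exponents are my choice: 19 is a Mersenne prime exponent, 23 is not):
Code:
# Final LL residues for starting values 4 and 10. For a Mersenne prime
# exponent both end at 0; for a composite Mersenne number the two residues
# have no particular relation, so one cannot double-check the other.
for p in (19, 23):
    m = (1 << p) - 1
    finals = []
    for s0 in (4, 10):
        s = s0
        for _ in range(p - 2):
            s = (s * s - 2) % m
        finals.append(s)
    print("p =", p, ": s0=4 ->", finals[0], ", s0=10 ->", finals[1])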
#19
Feb 2017
Nowhere
2²×19×59 Posts
Quote: