#1 · Sep 2003
First column is Meg range (for instance, 6 = 6,000,000 - 6,999,999).
Second column is the number of exponents in that range for which 2 matching LL tests were done with no P-1 factoring ever having been done for that exponent.

Code:
 0      0
 1      0
 2  15152
 3  24580
 4  18831
 5   9243
 6   4170
 7   1916
 8   1454
 9   2754
10    140
11     23
12      8
13      6
14      2
15      8
16      3
17      2
18      3
19      1

At low ranges (2M - 4M), there's a lot. That's because P-1 wasn't added to Prime95 until fairly recent versions, so old exponents got a first LL test and a matching double-check and that was all. At very low ranges (0M - 1M), however, the number drops to zero, because someone is systematically P-1 factoring all those old small exponents and they've gotten up to about 2.4M.

At higher ranges (5M - 8M) the numbers drop steadily because P-1 factoring got added to Prime95 and the chances are reasonable that at least one of the two computers involved had enough memory to do a P-1 test. Still, thousands of exponents never got a P-1 test done.

Finally, at the highest ranges (10M+) the numbers are low because most exponents simply haven't been double-checked yet. The leading edge of double-checking is currently sweeping past 10.2M.

If every single exponent got a P-1 test before a second LL test was performed, those numbers would stay permanently low, and a few dozen new factors would be found in each Meg range, assuming a 3% or so chance of finding a P-1 factor.

I'm not sure why the count picks up sharply in the 9M range after steadily declining. Any ideas?

Last fiddled with by GP2 on 2003-09-27 at 10:11
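To put a rough number on that 3% figure, here is a minimal sketch that multiplies the counts from the table above by an assumed flat 3% P-1 success rate (the flat rate is an assumption; real P-1 odds depend on the bounds chosen and on how far each exponent was trial-factored):

Code:
# Rough expected-factor estimate per Meg range, assuming a flat ~3% chance
# that a fresh P-1 run finds a factor. Counts for the 2M-10M ranges are
# copied from the table above (exponents with 2 matching LL tests, no P-1).
counts = {2: 15152, 3: 24580, 4: 18831, 5: 9243, 6: 4170,
          7: 1916, 8: 1454, 9: 2754, 10: 140}

P_SUCCESS = 0.03  # assumed average P-1 success probability

for meg, n in sorted(counts.items()):
    print(f"{meg:2d}M: {n:6d} exponents -> ~{n * P_SUCCESS:.0f} expected factors")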
#2 · Aug 2002

Quote:
#3 · Sep 2003
First column is Meg range (for instance, 6 = 6,000,000 - 6,999,999).
Second column is the number of exponents in that range for which at least one LL test was done, but not 2 matching LL tests, and with no P-1 factoring ever having been done for that exponent.

Code:
 0     0
 1     0
 2     0
 3     0
 4     0
 5     0
 6     0
 7     2
 8    79
 9  1507
10  9118
11  5972
12  4122
13  1880
14  1187
15  1062
16  1044
17  1054
18  1053
19  1069

At low ranges (0M - 7M), just about every exponent has been double-checked, so the numbers are zero. The numbers then rise sharply, peaking at 10M (not sure why).

Of course, many of the machines that perform the double-checks will have enough memory to do a P-1 test before going ahead with the LL double-check. But judging by past history some won't, and some thousands of exponents will never get a P-1 test done.

By 15M - 19M the numbers have declined to a plateau. I'm not sure why. Maybe it's because only modern machines are fast enough to test exponents in that range, and such machines are more likely to have plenty of memory (required for P-1 testing) and also more likely to have a recent version of Prime95 installed (since P-1 factoring was only introduced in fairly recent versions of Prime95).

If P-1 testing could be organized to get through the hump between 10M - 13M, then after that it would be fairly easy to ensure that P-1 factoring always kept ahead of the leading edge of double-checking (in the "plateau" region).
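A minimal sketch of how a per-Meg-range tally like the two tables in this thread can be produced, assuming a plain text file with one exponent per line (the file name and format are hypothetical, not an actual GIMPS export):

Code:
from collections import Counter

# Tally exponents by Meg range (floor of exponent / 1,000,000), assuming a
# hypothetical input file with one exponent per line.
def meg_histogram(path):
    counts = Counter()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                counts[int(line) // 1_000_000] += 1
    return counts

if __name__ == "__main__":
    hist = meg_histogram("exponents_needing_pminus1.txt")  # hypothetical file name
    for meg in sorted(hist):
        print(f"{meg:2d}  {hist[meg]:6d}")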
#4 · Sep 2003

Quote:
Instead of working through the old exponents, though, it would benefit GIMPS more to do P-1 testing just ahead of the leading edge of double-checking, because this can save redundant LL double-checks by low-memory machines. If we can keep ahead of the leading edge of double-checking, then no exponent will ever again have 2 LL tests done with no P-1 test having been done.

As mentioned in my previous message, there's a smooth plateau at 14M+ where it will be very easy to ensure that P-1 factoring keeps ahead of the leading edge of double-checking. But there's a fairly big hump at 10M - 11M, which it would be useful to tackle. Once past that, there's plenty of leisure to go back and systematically work through the 2M range as well.
#5 · Aug 2002

Quote:
#6 · "Sander" · Oct 2002
How can I get a list of exponents that haven't had any P-1 testing at all?

As long as it doesn't interfere with PrimeNet, it could be a side project of the LMH.

BTW, a lot of the exponents that have had P-1 were run with very low bounds, which give only a very small chance of finding a factor.
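One way to build such a list is to cross-reference a list of exponents with LL results against a list of exponents with P-1 results. A minimal sketch, assuming simple one-record-per-line text exports whose names and formats are hypothetical (the real GIMPS data files are laid out differently, so the parsing would need adjusting):

Code:
# Sketch: exponents that have LL results but no recorded P-1 attempt.
# Both input files are hypothetical exports with the exponent as the
# first comma-separated field on each line.
def read_exponents(path):
    with open(path) as f:
        return {int(line.split(",")[0]) for line in f if line.strip()}

ll_tested = read_exponents("ll_results.txt")       # hypothetical export
pminus1_done = read_exponents("pminus1_done.txt")  # hypothetical export

for p in sorted(ll_tested - pminus1_done):
    print(p)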
#7 · Aug 2002 · Richland, WA

Quote:
If the 9M range had not already been mostly double-checked, it would have even more exponents without P-1 than the 10M range, because most of the 9M exponents were handed out before the P-1-capable client was available.
#8 · Aug 2002 · Richland, WA

Quote:
TempleU-CAS is currently the third highest producer of LL results (double-checks and first-time tests; see http://www.teamprimerib.com/rr1/topover.htm), with almost all of his computing power focused on double-checks, so he ends up doing a sizable percentage of the double-checks that are completed. I've noticed that his computers don't seem to do P-1 very often, which probably means he has intentionally turned it off because it doesn't give credit proportional to the amount of work done.

So I think the larger number of exponents in the 9M range without P-1 is due to TempleU-CAS not doing P-1 while completing a sizable percentage of the double-checks. He will probably have a similar effect on the 10M range (though it won't be as noticeable as in the 9M range).
#9 · Aug 2002 · Richland, WA

Quote:

#10 · Sep 2003

Quote:
If you want to try doing P-1 factoring just ahead of the leading edge of double-checking, see the Marin's Mersenne-aries forum. If you want some other range, let me know. I could also come up with a list of P-1 tests that were done with very low bounds.
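A minimal sketch of how a "very low bounds" list might be pulled out of a results export, assuming a hypothetical exponent,B1,B2 line format; the file name and the thresholds are illustrative assumptions, not official values:

Code:
# Sketch: flag prior P-1 runs whose bounds look too small to be worth much.
# The input format (exponent,B1,B2 per line), the file name and the
# thresholds are all illustrative assumptions.
MIN_B1 = 40_000    # assumed threshold, not an official value
MIN_B2 = 600_000   # assumed threshold

with open("pminus1_results.txt") as f:   # hypothetical export
    for line in f:
        if not line.strip():
            continue
        exponent, b1, b2 = (int(x) for x in line.split(","))
        if b1 < MIN_B1 or b2 < MIN_B2:
            print(f"M{exponent}: B1={b1}, B2={b2} (bounds look very low)")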
#11 · Aug 2002 · Texas
Another suggestion for those who are interested in P-1 testing exponents and have the means to do it: use PrimeNet to request a block of DCs and turn on the sequential-work switch in prime.ini. The client will go through and P-1 the exponents that need it and return the results to the server. When the batch is done, just unreserve the lot, wait a while for them to be assigned to others, and repeat. This has worked for me in the past when I wanted to pick up some coveted double-check factors, and it clears the way for the generally older DC machines to concentrate on LL iterations.

Gratuitous dancing
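A hedged sketch of the checking side of that workflow: scan an old-style worktodo.ini and report which reserved double-check assignments still need their P-1 stage. The DoubleCheck=exponent,bits,P-1-done line format is assumed from the v23-era client and should be verified against an actual worktodo.ini:

Code:
# Sketch: report which DoubleCheck assignments in worktodo.ini have not had
# P-1 done yet (assumed format: DoubleCheck=exponent,bits_factored,pminus1_done).
def pending_pminus1(path="worktodo.ini"):
    pending = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line.startswith("DoubleCheck="):
                continue
            exponent, bits, pm1_done = (
                int(x) for x in line.split("=", 1)[1].split(",")[:3]
            )
            if pm1_done == 0:
                pending.append((exponent, bits))
    return pending

if __name__ == "__main__":
    for exponent, bits in pending_pminus1():
        print(f"M{exponent}: trial-factored to 2^{bits}, no P-1 recorded yet")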