mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Wagstaff PRP Search (https://www.mersenneforum.org/forumdisplay.php?f=102)
-   -   Status of Wagstaff testing? and testing Mersenne primes for Wagstaff-ness (https://www.mersenneforum.org/showthread.php?t=23523)

sweety439 2018-07-27 19:24

[QUOTE=CRGreathouse;492591]Are all factors of Wagstaff numbers (2^p + 1)/3 of the form 2kp + 1? If so then yes, a similar heuristic would drop out.[/QUOTE]

Yes, since all prime factors of Phi_n(2) are of the form kn+1 (except possibly a single factor equal to the largest prime factor of n, as happens for n = 6, 18, 20, 21, 54, 100, 110, 136, 147, 155, 156, 162, 253, 342, 486, 500, ...), and since (2^p+1)/3 is Phi_{2p}(2), all of its prime factors are of the form 2kp+1.
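
For small p this is easy to spot-check directly. A minimal sketch in Python (assuming sympy is available):

[CODE]
from sympy import factorint

# every prime factor q of (2^p + 1)/3 should satisfy q % (2*p) == 1, i.e. q = 2kp + 1
for p in [5, 7, 11, 13, 17, 19, 23, 29]:
    w = (2**p + 1) // 3
    for q in factorint(w):
        assert q % (2 * p) == 1, (p, q)
        print(f"p={p}: factor {q} = 2*{(q - 1) // (2 * p)}*{p} + 1")
[/CODE]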

GP2 2018-07-27 22:51

[QUOTE=sweety439;492599]Yes, since all prime factors of Phi_n(2) are of the form kn+1 (except possibly a single factor equal to the largest prime factor of n, as happens for n = 6, 18, 20, 21, 54, 100, 110, 136, 147, 155, 156, 162, 253, 342, 486, 500, ...), and since (2^p+1)/3 is Phi_{2p}(2), all of its prime factors are of the form 2kp+1.[/QUOTE]

Pardon my ignorance, but what is the name of this Phi function? The only phi I know is the totient function.

GP2 2018-07-28 00:39

Is trial-factoring more effective for Wagstaff numbers than for Mersenne numbers? It doesn't seem to be. Here are some numbers.

For Mersenne numbers, thanks to TJAOI's efforts and very extensive ECM testing we can be virtually certain that we know every factor of bit length smaller than 66, without exception, for all exponents less than a billion.

For Wagstaff numbers, there are various data sources, but the main one so far is ATH's trial factoring in 2013 and earlier, for which he [URL="http://mersenneforum.org/showthread.php?p=332533"]posted a zip file[/URL]. However, his data only includes first factors, since he stopped searching whenever he found a factor for an exponent.

If we look at the exponent range 10k to 10M, ATH trial-factored this range up to 56 bits (i.e., for every exponent whose smallest factor has a bit length below 56, that factor has been found).

So for both Mersenne and Wagstaff I counted how many exponents fit the criteria:

[CODE]
How many Mersenne numbers and how many Wagstaff numbers
with exponents between 10k and 1M
have a smallest factor of a given bit length?
(looking at bit lengths of less than 56)

bits Mersenne Wagstaff
15 40 51
16 99 105
17 245 250
18 495 470
19 885 857
20 1586 1546
21 2775 2806
22 2084 2193
23 3060 2990
24 2493 2596
25 2530 2489
26 2285 2284
27 2170 2231
28 2025 2015
29 1930 1856
30 1810 1733
31 1578 1620
32 1465 1500
33 1452 1441
34 1409 1391
35 1289 1276
36 1176 1207
37 1111 1115
38 1118 1065
39 1013 945
40 889 1008
41 939 928
42 869 899
43 838 847
44 791 795
45 801 768
46 788 723
47 735 692
48 697 711
49 649 652
50 640 657
51 604 616
52 579 532
53 542 545
54 559 503
55 519 479
[/CODE]

The numbers seem roughly the same. Trial-factoring is equally effective for both, and it doesn't seem like the trend would change for larger bit lengths.

Maybe trial factoring for Wagstaff seems more effective because we're still finding the ridiculously easy small factors, the kind that were found and cleared for Mersennes more than two decades ago.
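
A tally like the one above can be reproduced in a few lines. This is only a sketch, and it assumes a hypothetical text file with one "exponent factor" pair per line (ATH's zip and the Mersenne factor lists would first need to be converted to that format):

[CODE]
from collections import Counter

def bitlength_histogram(path, lo=10_000, hi=1_000_000, max_bits=55):
    """Count exponents in [lo, hi] by the bit length of their smallest listed factor."""
    smallest = {}
    with open(path) as fh:
        for line in fh:
            p, f = map(int, line.split())
            if lo <= p <= hi:
                smallest[p] = min(f, smallest.get(p, f))
    return Counter(f.bit_length() for f in smallest.values()
                   if f.bit_length() <= max_bits)

# hypothetical file names, for illustration only:
# mer = bitlength_histogram("mersenne_factors.txt")
# wag = bitlength_histogram("wagstaff_factors.txt")
# for bits in range(15, 56):
#     print(bits, mer.get(bits, 0), wag.get(bits, 0))
[/CODE]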


What about P−1 testing? Trial factoring isn't really making a difference in clearing away factors, so if P−1 were less effective for Wagstaff numbers than for Mersenne numbers, it would only be because the k−1 corresponding to the factors 2kp + 1 is somehow less smooth. That doesn't seem likely.

However, there is a [URL="http://www.mersenneforum.org/showthread.php?p=492373"]longstanding bug where mprime/Prime95 wrongly calculates the appropriate B1,B2 bounds[/URL] for P−1 testing and sets them much too low. If you manually set the bounds higher, to the same values you'd use for a Mersenne number, then P−1 testing should be equally effective for Wagstaff numbers.

PS,
The [URL="http://mersenneforum.org/showthread.php?p=489397&postcount=29"]Venn-diagram overlap between the set of factors findable by trial-factoring and the set findable by P−1[/URL] is smaller than you'd think. Most factors found by TF have k−1 much too unsmooth to be found by P−1 testing, and most factors found by P−1 testing are much too big to be found by TF.

Dr Sardonicus 2018-07-28 03:42

[QUOTE=CRGreathouse;492591]Are all factors of Wagstaff numbers (2^p + 1)/3 of the form 2kp + 1? If so then yes, a similar heuristic would drop out.[/QUOTE]
If p is an odd prime, p > 3, q is prime, and q divides (2^p + 1)/3, then

q divides 2^(2*p) - 1; q does not divide 2^p - 1 (since 2^p - 1 and 2^p + 1 are coprime); and q does not divide 2^2 - 1 = 3 either (the hypothesis p > 3 rules out 9 dividing 2^p + 1).

Thus, 2*p is the least positive integer h for which q divides 2^h - 1 (the "hauptexponent"), whence 2*p divides q-1.

The conclusion that 2*p divides q-1 fails for p = 3: (2^3 + 1)/3 = 3.
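
The order argument is easy to spot-check for small p. A minimal sketch in Python (assuming sympy is available; n_order gives the hauptexponent of 2 mod q):

[CODE]
from sympy import factorint, n_order

for p in [5, 7, 11, 13, 17, 19, 23]:
    w = (2**p + 1) // 3
    for q in factorint(w):
        # the multiplicative order of 2 mod q is exactly 2*p, hence 2*p divides q - 1
        assert n_order(2, q) == 2 * p
        assert (q - 1) % (2 * p) == 0
print("checked: order of 2 mod q is 2*p for every prime factor q")
[/CODE]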

ATH 2018-07-28 09:48

[QUOTE=GP2;492612]For Wagstaff numbers, there are various data sources, but the main one so far is ATH's trial factoring in 2013 and earlier, for which he [URL="http://mersenneforum.org/showthread.php?p=332533"]posted a zip file[/URL]. However, his data only includes first factors, since he stopped searching whenever he found a factor for an exponent.[/QUOTE]

Remember there is now "mfaktc-win-64.Wagstaff.exe" for trial factoring on GPUs. Looking at 16M-32M, which I trial factored to 61 bits:
If I test 61-62 bits on my ~600 GHz-days/day card, an exponent near 16M takes 8.3 sec and one near 32M takes 4.8 sec, so let's say 6.5 sec on average.
So all my effort could be duplicated pretty quickly on a single GPU today.
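
A rough sanity check on that (a sketch; it uses only ATH's 6.5-second average for the 61-to-62-bit level and sympy's prime-counting function, ignoring the variation with exponent size):

[CODE]
from sympy import primepi

# prime exponents in the 16M-32M range mentioned above
count = int(primepi(32_000_000) - primepi(16_000_000))

seconds_per_exponent = 6.5           # ATH's average for one exponent, 61 -> 62 bits
gpu_days = count * seconds_per_exponent / 86400
print(f"{count} exponents, roughly {gpu_days:.0f} GPU-days for the 61->62 bit level")
[/CODE]

Since each extra bit level roughly doubles the trial-factoring work, redoing all the levels below 61 bits costs about as much again as this single level, which is the sense in which the earlier CPU effort could be reproduced quickly on one GPU.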

GP2 2018-07-28 16:39

Bug in PRP cofactor code???
 
For the Wagstaff number with exponent 29123, mprime says the cofactor is composite, but actually it's a probable prime.

The cofactor tests as composite with both type-1 and type-5 residue testing.

Other Wagstaff numbers I tested with mprime all gave the expected results: all of the known Wagstaff primes and probable primes, plus other Wagstaff numbers whose cofactors FactorDB lists as PRP.

[CODE]
PRP=N/A,1,2,29123,1,"3,126660872079575401"
PRP=N/A,1,2,29123,1,99,0,3,1,"3,126660872079575401"
PRP=N/A,1,2,29123,1,99,0,3,5,"3,126660872079575401"
[/CODE]

[CODE]
[Work thread Jul 28 16:08] Starting Gerbicz error-checking PRP test of 2^29123+1/3/126660872079575401 using all-complex FMA3 FFT length 1536
[Work thread Jul 28 16:08] 2^29123+1/3/126660872079575401 is not prime. Type-5 RES64: 663E02E0E55571F7. Wg8: 993B7657,00000000

[Work thread Jul 28 16:26] Starting PRP test of 2^29123+1/3/126660872079575401 using all-complex FMA3 FFT length 1536
[Work thread Jul 28 16:26] 2^29123+1/3/126660872079575401 is not prime. RES64: 2282024F325B71EB. Wg8: 993B7657,00000000

[Work thread Jul 28 16:26] Starting Gerbicz error-checking PRP test of 2^29123+1/3/126660872079575401 using all-complex FMA3 FFT length 1536
[Work thread Jul 28 16:26] 2^29123+1/3/126660872079575401 is not prime. Type-5 RES64: 663E02E0E55571F7. Wg8: 993B7657,00000000
[/CODE]

[CODE]

{"status":"C", "k":1, "b":2, "n":29123, "c":1, "known-factors":"3,126660872079575401", "worktype":"PRP-3", "res64":"663E02E0E55571F7", "residue-type":5, "fft-length":1536, "error-code":"00000000", "security-code":"993B7657", "program":{"name":"Prime95", "version":"29.4", "build":7, "port":8}, "timestamp":"2018-07-28 16:08:03", "errors":{"gerbicz":0}, ...

{"status":"C", "k":1, "b":2, "n":29123, "c":1, "known-factors":"3,126660872079575401", "worktype":"PRP-3", "res64":"2282024F325B71EB", "residue-type":1, "fft-length":1536, "error-code":"00000000", "security-code":"993B7657", "program":{"name":"Prime95", "version":"29.4", "build":7, "port":8}, "timestamp":"2018-07-28 16:26:44", ...

{"status":"C", "k":1, "b":2, "n":29123, "c":1, "known-factors":"3,126660872079575401", "worktype":"PRP-3", "res64":"663E02E0E55571F7", "residue-type":5, "fft-length":1536, "error-code":"00000000", "security-code":"993B7657", "program":{"name":"Prime95", "version":"29.4", "build":7, "port":8}, "timestamp":"2018-07-28 16:26:44", "errors":{"gerbicz":0}, ...

[/CODE]

[URL="http://factordb.com/index.php?query=%282%5E29123%2B1%29%2F3%2F126660872079575401"]FactorDB says it's PRP[/URL].

Can anybody verify this??? Do other versions of mprime have the same problem, or other programs derived from the same code base?

Batalov 2018-07-28 16:45

No, it is a composite. You cannot use 2-PRP for numbers like this one.

They are all 2-PRPs, for any n value.

GP2 2018-07-28 17:35

[QUOTE=Batalov;492655]No, it is a composite. You cannot use 2-PRP for numbers like this one.

They are all 2-PRPs, for any n value.[/QUOTE]

Hmmm, the FactorDB page at the link I gave now shows "C" instead of "PRP", and the main page for 2^29123+1 shows CF instead of FF.

So I guess their server software tests base 2 at first and displays PRP on a preliminary basis, and only then gets around to testing base 3... so if you click on it right after reporting a new factor, maybe you get the bogus FF notification.

Never mind the Python snippet I posted; obviously it didn't test primality, it just verified the factor. Excitability and lack of sleep lead to a state of confusion.

axn 2018-07-29 02:46

This is a known issue in FactorDB. If you run across a large PRP cofactor from a base-2 number, try to test using another base. Chances are, it'll be revealed as a composite.

edit: FactorDB doesn't, on its own, test using other bases. Somebody has to manually prod it to do so.
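
A minimal sketch of that multi-base check in Python, applied to the cofactor above (plain Fermat tests; each pow call on a number this size takes a little while):

[CODE]
def fermat_prp(n, base):
    # plain Fermat probable-prime test: n passes if base^(n-1) == 1 (mod n)
    return pow(base, n - 1, n) == 1

p = 29123
cofactor = (2**p + 1) // 3 // 126660872079575401

# per the thread, base 2 reports this cofactor as a PRP,
# while another base (e.g. 3) exposes it as composite
for base in (2, 3, 5):
    print(base, fermat_prp(cofactor, base))
[/CODE]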

GP2 2018-07-29 20:38

[QUOTE=GP2;492612]
so if P−1 were less effective for Wagstaff numbers than for Mersenne numbers, it would only be because the k−1 corresponding to the factors 2kp + 1 is somehow less smooth.
[/QUOTE]

[QUOTE]Most factors found by TF have k−1 much too unsmooth to be found by P−1 testing[/QUOTE]

Of course, it's k itself that needs to be smooth, not k−1. I've been posting too much nonsense lately.
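
To make the smoothness point concrete, here is a toy P−1 stage 1 for a Wagstaff number, as a sketch (not how mprime implements it): a factor q = 2kp + 1 falls out of the gcd once every prime power dividing k is at most B1, because q - 1 then divides the accumulated exponent.

[CODE]
from math import gcd
from sympy import primerange

def wagstaff_pminus1_stage1(p, B1, base=3):
    """Toy P-1 stage 1 on N = (2^p + 1)/3: the gcd picks up every
    prime factor q = 2*k*p + 1 whose k is B1-powersmooth."""
    N = (2**p + 1) // 3
    a = pow(base, 2 * p, N)              # factors are 1 (mod 2p), so fold in 2*p first
    for q in primerange(2, B1 + 1):
        e = q
        while e * q <= B1:               # largest power of q not exceeding B1
            e *= q
        a = pow(a, e, N)
    return gcd(a - 1, N)

# k = 1 is trivially smooth, so for p = 29 the factor 59 = 2*1*29 + 1 must divide the result
print(wagstaff_pminus1_stage1(29, 50))
[/CODE]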

diep 2018-08-01 19:07

[QUOTE=axn;492469]That's not how you do math. Maybe physics. But definitely not math.[/QUOTE]

I come from search, where of course we beat all other methods by light-years by means of dubious approximations.
edit: even the most optimistic predictions of any expert in the 1990s were totally beaten in the 21st century; progress of hardware had been factored in by some, progress of software (algorithmic progress) by no one. (end of edit)

The best heuristic there for predicting the future is to weight the most recent history event much more heavily than everything before it, for example with killer moves.

If we look at Wagstaff, there is 1 prime just under 1M which I found. Then Tony Reix did the bulk of the work for the 4M one, we know for sure there is nothing until 10M, and the Propper team probably searched just about everything up to 13M and found 2 there.

Those 2 are very close together, so for the DISTANCE to the next prime I count them as a single entity. Of course finding 2 is nicer than 1.

most recent history:
13 / 4 = factor 3.25 distance
4 / 1 = factor 4.0 distance
981k / 374321 = factor 2.6 distance

That's 3 significant data points. Calling that 'luck' or 'not statistically significant' is wrong. It simply shows this is neither Mersenne nor 321.

Seen from a distance I call that a factor-3 spacing, and even then I'm being rather optimistic. Now if we start to test a massive number of exponents, millions of them, it's possible luck starts to play a smaller role to some extent.

Still, if you search up to 30M I highly doubt you have better than 1-SD odds (about 68%) of finding 1 or more PRPs out there.

The recent data points suggest 3 × 13M ≈ 40M would give high odds.
Yet there is no guarantee, even right up to the door of the prime shop.

If you claim 2-SD odds of finding one by 40M, I would only say "maybe". Yet don't bet on finding even one.

In a given search space [n, 2n] with n larger than a million, I claim the odds are simply light-years worse than for Mersenne.

I would guess the ratio to the next one for Wagstaff (above n = 1 million) is on average somewhere between log 3 / log 2 ≈ 1.6 and a factor of 3, versus something like 1.2 for Mersenne?

Yet we do not know how far we will be able to test in our lifetime to gather sufficient data points.
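
For reference, the distance factors quoted above can be recomputed from the usually cited Wagstaff PRP exponents (a sketch; the 2013 pair is counted as one event, as diep does):

[CODE]
# known Wagstaff PRP exponents above 100k (the 2013 pair counted as one "event")
exponents = [374321, 986191, 4031399, 13347311]

for prev, nxt in zip(exponents, exponents[1:]):
    print(f"{nxt} / {prev} = factor {nxt / prev:.2f}")
[/CODE]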

