Status of Wagstaff testing? and testing Mersenne primes for Wagstaffness
Is there any centralized data repository for Wagstaff number testing? A list of exponents with factors, and a list of PRP test residues, possibly double-checked?
Various people have tested extensively, but have the results been centralized anywhere? I think systematic testing has been done up to about 4M, and then Ryan Propper found (2^13372531+1)/3 and (2^13347311+1)/3, which are actually the largest known PRPs, and it's not clear if anyone has searched the gap between 4M and 13.3M. However, I'd especially like to confirm that testing has been done to try to invalidate the New Mersenne Conjecture, by factoring or testing all the known Mersenne prime exponents larger than 4M as Wagstaff exponents, to see if any of them might be Wagstaff primes as well. I assumed this was the case, but maybe I'm wrong? Edit: [URL="https://primes.utm.edu/mersenne/NewMersenneConjecture.html"]Chris Caldwell's page[/URL] says the Wagstaff status of Mersenne prime exponents [M]20996011[/M] and [M]30402457[/M] is unknown, with no info about any higher exponents. Is this still true? 
[QUOTE=GP2;492248]Is there any centralized data repository for Wagstaff number testing? A list of exponents with factors, and a list of residues, possibly doublechecked?
[/QUOTE] Why not use my own error checking on these numbers? Even a single run is much stronger than your double or even triple/quadruple check if you compare only RES64. The same is true for Mersenne numbers. 
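For readers unfamiliar with the error check being discussed: a long chain of squarings mod N can be verified cheaply by keeping a running product of checkpoint residues. The sketch below is my own minimal illustration of the idea, not Prime95's implementation; the checkpoint interval L and the verify-every-block policy are simplifications (real implementations verify far less often, so the overhead becomes tiny).

```python
# Minimal sketch of a Gerbicz-style error check for computing a^(2^n) mod N.
# Checkpoints are taken every L squarings; d is the running product of
# checkpoints x_0, x_L, x_2L, ... and must satisfy d_{j+1} == a * d_j^(2^L).
# A single corrupted squaring breaks this invariant with high probability.
def checked_power(a, n, N, L=100):
    x = a % N          # x_0 = a
    d = x              # running product of checkpoints
    i = 0
    while i < n:
        steps = min(L, n - i)
        for _ in range(steps):
            x = x * x % N              # one squaring of the main chain
        i += steps
        if steps == L:
            new_d = d * x % N          # d_{j+1} = d_j * x_{(j+1)L}
            check = d
            for _ in range(L):         # d_j^(2^L), recomputed independently
                check = check * check % N
            if new_d != a * check % N:
                raise RuntimeError("error detected in squaring chain")
            d = new_d
    return x
```

For a Wagstaff PRP test one would take a = 3 and N = 2^p + 1, then reduce the final residue modulo N/3, as described later in the thread.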
[QUOTE=R. Gerbicz;492249]Why not use my own error checking on these numbers? Even a single run is much stronger than your double or even triple/quadruple check if you compare only RES64. The same is true for Mersenne numbers.[/QUOTE]
I was not really interested in Wagstaff testing until now, so I'm not familiar with it. Can mprime/Prime95 be used to do Wagstaff PRP testing? I was under the impression that it can only do k*b^n+c where all of these are integers. Which programs can do Wagstaff PRP testing, and has your Gerbicz error check been implemented for them yet? Also, it's necessary to know which exponents p have known factors of (2^p + 1)/3. 
[QUOTE=GP2;492257]I was under the impression that it can only do k*b^n+c where all of these are integers.
[/QUOTE] It is in the form (k*2^n+c)/d, so for these I'm assuming that the error checking is already included in prime95 (note that here the base is 2). P.S. If you have an algorithm/code for k*b^n+c, then you also have code for (k*b^n+c)/d, because you need only one big-integer division (or even eliminate that too if d is "small"). 
[QUOTE=R. Gerbicz;492259]you need only one biginteger division[/QUOTE]
You also need to get more sleep. At least I do, clearly... :redface: 
[QUOTE=GP2;492262]You also need to get more sleep. At least I do, clearly... :redface:[/QUOTE]
OK, I was too compact/short. Here are the details. We restrict to b=2 to enable my fast error checking. Let N = k*2^n+c (but we want a test for N/d). If N/d is a Fermat PRP to base a^d, then (a^d)^(N/d) == a^d mod (N/d), so a^N == a^d mod (N/d). For this, first get res = a^(2^n) mod N using my check, then compute res2 = res^k * a^c mod N (you can assume that k,c are small, but this works in general). Up to this point you have not used d. Reduce this: res' = res2 mod (N/d) to get a^N mod (N/d), using a single big-integer division (or two if you also count the N/d division). [There is another option here: begin with a^k mod N and then use squarings; in timing there is no real difference. Also note that c<0 is possible.]

Assuming that you are a super heavy p95 user (I'm not using it as a daily routine): doesn't PRP-CF really use the error checking for (2^p-1)/d? Your own problem (2^p+1)/3 is totally similar. In general I'm not checking things where I'm sure, but if it is really not in p95, then it should be. Have you seen this? [url]http://www.mersenneforum.org/showthread.php?t=12157[/url]

Just picked up a random (known!) prime: (use the d flag) in worktodo.txt put PRP=2675,2,1350023,1 [CODE] ... [Work thread Jul 22 09:22] Resuming Gerbicz error-checking PRP test of 2675*2^1350023+1 using all-complex FMA3 FFT length 96K, Pass1=384, Pass2=256, clm=4, 4 threads ... [Work thread Jul 22 09:23] 2675*2^1350023+1 is a probable prime! Wg8: A97983C5,00000000 [/CODE] And you can also see in results.txt that my error check has been used. So at least for d=1 (or there f=1), it is really working. But for f>1 it just throws [CODE] [Main thread Jul 22 09:28] Illegal line in worktodo.txt file: PRP=1,2,986191,1,3 [/CODE] or, when you try to trick it with f=1: [CODE] [Work thread Jul 22 09:29] PRP test of 2^986191+1 aborted - number is divisible by 3 [/CODE] Well, we know this. Maybe I overlooked something trivial about how to set the numbers in worktodo.txt. 
But if N=k*2^n+c is already working, then N/d should be also, the underlying Maths is so trivial. 
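To make the arithmetic above concrete, here is a hedged Python sketch of a Fermat PRP test for N/d with N = k*2^n + c, following exactly the identity from the previous post. This is an illustration of the maths only, not Prime95's code; the Gerbicz check itself is omitted, and negative c relies on Python 3.8+ modular inverses in pow.

```python
# Fermat PRP test of N/d where N = k*2^n + c and d | N, using the identity:
# if N/d is a Fermat PRP to base a^d, then a^N == a^d (mod N/d).
def is_prp_quotient(k, n, c, d, a=3):
    N = k * (1 << n) + c
    assert N % d == 0
    # res = a^(2^n) mod N via n squarings (where the error check would live)
    res = a % N
    for _ in range(n):
        res = res * res % N
    # res2 = a^N mod N = res^k * a^c mod N; c < 0 works when gcd(a, N) = 1
    res2 = pow(res, k, N) * pow(a, c, N) % N
    # d enters only now: one big-integer reduction mod N/d
    return res2 % (N // d) == pow(a, d, N // d)
```

As a quick sanity check, (2^11+1)/3 = 683 is prime, (2^9+1)/3 = 171 is composite, and with d = 1 and c = -1 the same routine handles Mersenne numbers like 2^7-1.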
Bingo: you have to use this: PRP=1,2,986191,1,"3"
at least on Linux, and then [CODE] .. [Work thread Jul 22 10:07] Starting Gerbicz error-checking PRP test of 2^986191+1/3 using all-complex FMA3 FFT length 48K, Pass1=768, Pass2=64, clm=2, 4 threads [Work thread Jul 22 10:08] Iteration: 10000 / 986191 [1.01%], ms/iter: 0.124, ETA: 00:02:00 [Work thread Jul 22 10:08] Iteration: 20000 / 986191 [2.02%], ms/iter: 0.122, ETA: 00:01:57 .. [Work thread Jul 22 10:09] Iteration: 980000 / 986191 [99.37%], ms/iter: 0.122, ETA: 00:00:00 [Work thread Jul 22 10:10] 2^986191+1/3 is a probable prime! Wg8: D9C45894,00000000 [/CODE] (it was a known Wagstaff PRP). Minor issue, but parentheses are missing in the output text: it should read (2^986191+1)/3. 
I worked on the NMC:
[url]http://mersenneforum.org/showthread.php?t=19229[/url] [url]http://hoegge.dk/mersenne/NMC.html[/url] But as you can see in that thread, most agree that it is a "joke" conjecture, so I have not worked on it since adding the newest Mersenne primes. You can trial factor Wagstaff numbers on GPU using mfaktcwin64.Wagstaff.exe 
I was made aware of these resources that are useful for the New Mersenne Conjecture:
Factors of Wagstaff numbers with Mersenne-prime exponents: [url]http://bearnol.is-a-geek.com/Mersenneplustwo/Mersenneplustwo.html[/url]
PRP tests of Wagstaff numbers with Mersenne-prime exponents: [url]https://groups.google.com/forum/#!topic/Mersenneplustwo/j4RYTPh540[/url] 
[QUOTE=R. Gerbicz;492282]Bingo: you have to use this: PRP=1,2,986191,1,"3"
[/QUOTE] Yes, and I was actually well aware of how to do that sort of thing, since I've been doing both PRP and PRP cofactor testing. But I had some kind of brain freeze I think. 
The short answer to all questions I got by email and messages and everything regarding Wagstaff is YES.
Systematic testing on our part has been done up to 10M. I have all results up to 10M. For TF I would have to look up how far I did it, as I did it in waves. I think I had it done to 52-54 bits at about 20-24 cores up to 12M-15M; I would need to look that one up. For the later GPGPU TF software that is total peanuts of course.

And YES, of course Wagstaff has the same habits as Mersenne. The main difference between Wagstaff and Mersenne is that no official proving method for Wagstaff seems to have been recognized. I know some French and a Canadian researcher think differently there, yet I'm not gonna pick sides and will just cite what I know as a layman.

Another difference seems to be the factoring. Trial factoring (though I lack hard data from Mersenne to compare it with) seems more effective for Wagstaff than it is for Mersenne. Some researcher who wants to take this important unpaid job on himself might be able to officially determine that; I just posted a gut feeling here. Though there might be a simple explanation for it that has to do with the spread of primes. Because we divide by 3, maybe there are fewer Wagstaffs between [n and 2n]. As a result TF seemingly is far more effective, so it would be possible to search Wagstaff deeper than Mersenne using the same horsepower. Yet you'll find fewer PRPs than Mersenne primes of course using that same horsepower. I do not have a formula for how many fewer PRPs there are between [n; 2n] for Wagstaff versus Mersenne. Blindfolded I would guess it ranges somewhere between (log 3)/(log 2) and a factor 3. That means you could be busy with it a long time and not find any PRP, and run the risk that when you are close to the next one, the Ryan Propper team suddenly posts them online including a "we were so lucky" letter, which Jeff Gilchrist and Tony Reix, when they had to run fast and were out of paper at the bathroom, used over there. 
[QUOTE=GP2;492257]I was not really interested in Wagstaff testing until now, so I'm not familiar with it. Can mprime/Prime95 be used to do Wagstaff PRP testing? I was under the impression that it can only do k*b^n+c where all of these are integers.
Which programs can do Wagstaff PRP testing, and has your Gerbicz error check been implemented for them yet? Also, it's necessary to know which exponents p have known factors for (2^p + 1)/3[/QUOTE] Hello, we used LLR, which also incorporates the PRP27 test that Tony Reix considers to give a proof of primality as well - yet for sure it shows whether it's a PRP. I should still have Tony's paper on proving Wagstaff. A quick search on this machine is not promising, and my mailbox in the bunker of xs4all seems to have lost most emails from Jeff Gilchrist. Bizarre. I might need to boot some old machines to dig up some files. 
In an old backup directory I found some of the TF results up to 12M that I have.
Let me see how large a zip file I am able to post on this forum. The TF results have been spread over dozens of files, of course. For today's GPUs it is peanuts to outdo this. I haven't yet found all results of the PRP testing (later the PRP27 testing). 
And I see most has been TF'ed to 61 or 62 bits.
Not the 52-54 I remembered from memory. 
2 Attachment(s)
Attached a few zip files. Note I created the new_sresults.zip file with 7-Zip, so I hope it can also zip in a manner compatible with zip and unzip if you select 'zip' there.
It has some of the TF results up to 12M, mainly what had been done until November 2010 on the 16-core box, plus some older results from 2008 and 2009.

If you want to 'go further' it might be wise to just generate all primes and subtract every factored result after you have verified the result. Even though verification was always turned on here when TFing, one should not trust the results if you want to do a systematic attempt. Please do not blame me if there is an error. I simply see too many files on my machines that others have illegally modified or altered. It would be interesting to know whether that happened with Wagstaff as well. The testing itself was carried out with REG+ECC RAM turned on, so the testing itself is not questionable. It's the hackers who nonstop modify stuff on my machines, despite the quite paranoid security I made myself. Some of the input files that the 16-core box swallowed I cannot read - that's why I typed this. The fact that I have Linux firewalls and air-gapped security of some sort doesn't seem to help all that.

Most has been TF'ed to 61 bits. At lower ranges it has been TF'ed to 62 bits. If you start testing, you can safely start at 10M, testing for PRP using LLR and the PRP27 test. Though I'm very sure you won't find any PRP up until 15M, or however far Propper has systematically tested it all; there are no records nor any reliable statement there. So one might need to start from 10M. We did test it all systematically until 10M. A couple of hundred double tests have been performed, though not at the last ranges, which were all done by Jeff. Even though he's the only one who tested without ECC RAM, on an i7 which turbo-boosted a bit, I don't think there is any need for a double test other than a handful Jeff did near the 10M mark. These were all reliable folks who tested. 
Under 10M the only interesting test to perform is to generate all primes until 10M and subtract those from the TF results and the test results. A handful were removed by Jeff because of some P-1 testing, which from my viewpoint was not worth the system time, as TF works dramatically better than P-1 for Wagstaff. Though I have done this as well, and Paul as well in some cases - it is always possible a few exponents were missed in this process, as I didn't use a database to determine that. Attempts to use a database were made yet quickly abandoned. Public databases kind of suck for this. A few scripts are a better idea :)

The only reason 1 or 2 exponents might have been missed is the fact that many files were prepared by hand. Even though combining them was done with some utils I wrote (in C), a lot in the end was done by hand. A mistake can always occur then. The odds we missed something there are very low though, as checks were always done at irregular times at a later period to verify all this. Yet if you do a systematic attempt you might want to redo all that. A more problematic question right now is whether I can still locate all the PRP test results. Note that for the core0..15 results, this was an ongoing TF effort, so possibly many exponents have not been sieved there. As it is obvious in my backup directory that the input files have been modified, and some are unreadable now, I'm not so sure what to say about it.

Above 10M, in short, the only possible approach is to regenerate all primes, subtract the TF'ed results, and happily redo the TF from scratch on what's left. My suspicion is that it all has been systematically TF'ed until the start of 12M. The original input files run to 30M. The "first wave" was TF'ing it all 61 bits deep, which succeeded until 12M. Do not bet money on finding a new PRP between 10M and 30M though. Wagstaff is a very unreliable formula there, not like the clockwork that Mersenne and 3*2^n-1 are. 
You could hit it rich or could get buried one day without accomplishing the mission to find a new one. If we do a very bad form of extrapolation: 4M / 0.9 = 4.x and 13 / 4 = 3.25. If we carefully do 5 * 13 = 65M. You might need to search up to 65M to have some sort of assurance that the odds are reasonably OK that there is one or a bunch of Wagstaffs within your search domain. 
[QUOTE=diep;492382]I should still have Tony's paper there on proving Wagstaff.
[/QUOTE] Proving? Not proving, as we recall. These are all PRPs. 
Maybe read what I wrote, Batalov. He submitted it as a paper on how to prove Wagstaffs. I'm not wasting my time reading details like that (a gigantic prime is a gigantic prime to me, whether it's industry-grade or a proven prime; no diff to me). It's up to others to decide whether it's enough to accept it as a proof or reject it.

[QUOTE=diep;492397]I'm not wasting my time reading details like that.[/QUOTE]
That's the spirit! At least other people need to know, and thank you for being honest about it. 
I wouldn't have the time for it, Batalov - busy releasing a 3D printer. Hope to make a billion dollars with it. Even though that wasn't the initial idea when I started with it :)
P.S. I do remember some communication from when I was a student though; that was in the year 2000 or before, on some usenet group. Not sure with whom. Maybe it was Caldwell. He told me: "just find that huge industry-grade prime first; after you got it, we'll find a way to prove it" :) 
Surely no one has time to read.
Then this is a completely useless link  no one is going to read it anyway: [url]http://www.mersenneforum.org/showthread.php?t=10737[/url] And in the strange alternative universe where no one has time to read, alternative facts will keep getting repeated as if they were truth, such as "some French and a Canadian researcher think different there", that "there is some secret cabal that makes proofs official" [B]but[/B] "the unofficial proofs are just as good". In that universe 2 equals 4, because: subtract 3 from both sides and square both sides ==> 1 = 1, which is "obviously true". The proofs in that quoted thread are [URL="http://www.mersenneforum.org/showpost.php?p=144825&postcount=38"]only slightly different[/URL] from this logical paradox. In that universe it doesn't matter that the [URL="http://www.mersenneforum.org/showpost.php?p=144953&postcount=54"]proof was withdrawn[/URL]. "Some French and a Canadian researcher" will continue to think different, no matter what the facts are. 
No worries about that, Batalov - if my 3D printer fails I can still take that job as Trump's Twitter advisor :)

[QUOTE=diep;492387]A handful were removed by Jeff because of some P1 testing which from my viewpoint was not worth the system time to be performed as TF works dramatically better than P1 for Wagstaff.[/QUOTE]
It doesn't make sense that P−1 testing should be ineffective for Wagstaff. Wagstaff numbers ((2[SUP]p[/SUP] + 1)/3 for odd p) appear to share with Mersenne numbers (2[SUP]p[/SUP] − 1) the characteristic that all their factors are of the form 2kp + 1 for some k. I don't have a number theory proof of this for Wagstaff, I just tested it empirically, for instance on the [URL="http://mersenneforum.org/showpost.php?p=332533&postcount=59"]zip file that ATH posted[/URL], which has 1.75 million factors.

The fact that factors are of the form 2kp + 1 is what makes P−1 testing particularly effective. If a Mersenne number has a prime factor f, then (f − 1) is an even composite number, and if (f − 1) is "smooth" (has lots of small factors), then P−1 testing has a good chance of detecting that factor f of the Mersenne number. But (f − 1) = 2 * p * k for some k, so (f − 1) is automatically already a little bit smooth, and it all depends on how smooth k is. That's why P−1 testing is particularly effective for Mersenne numbers: we get a head start on the quest for smoothness. But Wagstaff numbers should share the exact same advantage.

However, I was digging up old Wagstaff-related threads, and came up with [URL="http://mersenneforum.org/showthread.php?t=15601"]this one from 2011[/URL]. It seems that in the source code, the 2 * p efficiency boost for P−1 testing is only applied to Mersenne numbers. In other words, only for k*b^n+c where k = 1, b = 2 and c = −1. I looked at the latest source code, for mprime/Prime95 version 29.4b7, and it seems that it's unchanged from 2011, when the issue was first raised.

So I wonder if P−1 testing for Wagstaffs is inefficient because of a long-standing bug (or rather, a desirable feature not extended to a broader range). Can some number theory experts enlighten us? What can we say about the forms of the factors of k*b^n+c when the constants k, b and c take on various fixed values? 
I think George would need that information for certain before applying changes. And if it is indeed possible to give a Mersenne-like boost to Wagstaff P−1 testing, it would be interesting to quickly compile a version with the modification. I'd run it and try to find new Wagstaff factors. 
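As a concrete illustration of the "2 * p head start" described above, here is a hedged stage-1 P−1 sketch for Wagstaff numbers. This is my own toy code, not Prime95's: the stage-1 exponent is seeded with 2*p, so a factor f = 2kp + 1 is found whenever k itself is B1-smooth.

```python
from math import gcd

def small_primes(limit):
    """Primes up to limit via a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def pm1_stage1_wagstaff(p, B1, base=3):
    """Stage-1 P-1 on W = (2^p + 1)/3 with the free 2*p exponent boost."""
    W = ((1 << p) + 1) // 3
    e = 2 * p                       # factors have the form f = 2*k*p + 1
    for q in small_primes(B1):
        qq = q
        while qq * q <= B1:         # raise each prime to its max power <= B1
            qq *= q
        e *= qq
    g = gcd(pow(base, e, W) - 1, W)
    return g if 1 < g < W else None # None: no factor found (or all of W)
```

For example, pm1_stage1_wagstaff(29, 100) recovers the small factor 59 = 2*1*29 + 1 of (2^29+1)/3, since here k = 1 is trivially smooth.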
[QUOTE=GP2;492438]
Wagstaff numbers ((2[SUP]p[/SUP] + 1)/3 for odd p) appear to share with Mersenne numbers (2[SUP]p[/SUP] − 1) the characteristic that all their factors are of the form 2kp + 1 for some k. [/QUOTE] Elementary school stuff. Maybe nowadays you can learn this only in university. 
[QUOTE=R. Gerbicz;492443]Elementary school stuff. Maybe nowadays you can learn this only in university.[/QUOTE]
[CODE]
#!/usr/bin/python3
import fileinput

with fileinput.input() as lines:
    for line in lines:
        p, f = line.strip().split(",", maxsplit=1)
        p = int(p)
        f = int(f)
        if (f - 1) % (2 * p) != 0:
            print("UNEXPECTED FORM:", p, f)
[/CODE] It takes a little less time to dash off a few lines of Python script to run over the 1.75-million-line factors file than to look up the proof for Mersennes and then try to see how +1 instead of −1 might affect it. I didn't study math formally, and that makes everything slower to figure out, so I take the lazy empirical way: math as an experimental science. What can we say for the general case of k × b[sup]n[/sup] + c? Which values for the constants k, b, c let us do P−1 testing more effectively in the source code of mprime/Prime95? 
[QUOTE=GP2;492438]However, I was digging up old Wagstaffrelated threads, and came up with [URL="http://mersenneforum.org/showthread.php?t=15601"]this one from 2011[/URL]. It seems that in the source code, the 2 * p efficiency boost for P−1 testing is only applied to Mersenne numbers. In other words, only for k*b^n+c where k = 1, b = 2 and c = −1.[/QUOTE]
To be clear, what is missing is the probability calculation, and hence the bound selection logic. The actual P-1 will throw in the p for free. Because of the former, it will pick much lower (and suboptimal) bounds if left to choose them itself. However, if you give it the bounds (after calculating the same for the equivalent Mersenne), it would indeed be efficient. 
[QUOTE=GP2;492438]It doesn't make sense that P−1 testing should be ineffective for Wagstaff.
[/quote] Oh it does, compared to the alternatives. Let's first focus upon the reality: in the GIMPS project most people wanted a chance to find a prime. So they wanted to TEST rather than TF, and initially there wasn't a GPGPU version of TF. They simply didn't do a very deep TF initially for Mersenne; it was just so-so. Additionally, Wagstaff TFs much better than Mersenne, so you are left with a far smaller percentage of numbers to test for PRP than to LL. TF simply is more effective in that sense for Wagstaff.

So there is a number of things that make this plausible:
a) there is a smaller percentage of numbers left with Wagstaff after TF than with Mersenne.
b) nowadays the TF is far, far deeper than years ago with Mersenne.
c) Mersenne is at dozens of millions of bits, Wagstaff still is at 10+ million bits.
d) we divide by 3, so it's not a +1 number of bits. It's a -1 number of bits.

It's especially b). The Mersenne project has been pretty lazy in that respect. Everyone wanted a chance to find a prime, so they started testing long before a good TF had been done. 
Let me focus upon argument d), as this is the thing most researchers seem to forget.
x = 2^p + 1 starts out with a length of p+1 bits. We then divide it by 3. If we take p = 5: 2^5 + 1 = 33 = 0b100001, which is 6 bits. We divide it by 3 ==> 33 / 3 = 11 = 0b1011 = 4 bits. So we want a prime of 4 bits, which is p-1 bits. That's a lot of constraints under which we want to find a prime. 
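The bit-count point above generalizes: for any odd p, (2^p + 1)/3 has p-1 bits while 2^p - 1 has p bits. A two-line check:

```python
# (2^p + 1)/3 is about log2(3) ~ 1.58 bits shorter than 2^p - 1:
for p in (5, 13, 61, 127, 1279):
    w = ((1 << p) + 1) // 3      # Wagstaff number
    m = (1 << p) - 1             # Mersenne number
    assert w.bit_length() == p - 1 and m.bit_length() == p
```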
[QUOTE=diep;492454]Oh it does compared to the alternatives. [/quote]
What alternative? P-1 is a complementary method to TF, not an alternative. And ECM is not cost-competitive with P-1. What other alternative is there?

[QUOTE=diep;492454]Let's first focus upon the reality. that's that in the GIMPS project most people wanted a chance to find a prime. So they wanted to TEST rather than TF and initially there wasn't a gpgpu version of TF. They simply didn't do a very deep TF initially for Mersenne. It was just so so. [/quote] You're just conflating many things. I'm sure some people at some time would have jumped the gun and LL tested under-TF'ed exponents, but for the most part, GIMPS has always tried to do optimal TF (and P-1) before the LL test. The optimal level for CPU-based TF is different from the optimal level of GPU-based TF. If you don't understand why that is so, you have a problem.

[QUOTE=diep;492454]a) there is a smaller percentage of numbers left with wagstaff after TF than with mersenne.[/quote] I don't know why this should be so... but I'll take your word for it. However this has no relevance as to whether doing P-1 after TF is worthwhile.

[QUOTE=diep;492454]b) nowadays the TF is far far far deeper than years ago with Mersenne[/quote] Due to GPU TF. If Wagstaffs have also been similarly deeply TF'ed with GPUs, then the P-1 break-even point would shift.

[QUOTE=diep;492454]c) Mersenne is at dozens of millions of bits, Wagstaff still is at 10+ million bits. [/quote] This is a valid point. At different sizes and different TF levels, the P-1 break-even point changes. But you have to do the actual probability calculation to see if it makes sense. Gut feelings don't cut it.

[QUOTE=diep;492454]d) we divide by 3 so it's not a +1 number of bits. It's a -1 number of bits.[/quote] Not sure I understand. What do you mean by this? 
[QUOTE=diep;492454]d) we divide by 3 so it's not a +1 number of bits. It's a -1 number of bits.
[/QUOTE] Not likely, because when you'd divide by 3 you'd already know 2^p+1, right? 
>> Originally Posted by diep View Post
>> a) there is a smaller percentage of numbers left with wagstaff after TF than with mersenne.
> I don't know why this should be so... but I'll take your word for it. However this has no relevance as to whether doing P-1 after TF is worthwhile.
Sir, if you do not understand this one, I propose you first study it in depth before claiming it doesn't matter. 
Originally Posted by diep View Post
>> b) nowadays the TF is far far far deeper than years ago with Mersenne
> Due to GPU TF. If Wagstaffs have also been similarly deeply TF'ed with GPUs, then the P-1 break-even point would shift
It is a very silly assumption to assume you will not be using some massive GPUs to TF. The amount of system time spent factoring with an algorithm like P-1 is provably nonsense, except if you can do it on hardware that wouldn't otherwise be used for PRP testing. GPUs are so much faster than the hardware most have for PRP testing that it's not even funny to compare the two. 
Originally Posted by diep View Post
>> Oh it does compared to the alternatives. > What alternative? P1 is a complementary method to TF, not an alternative. And ECM is not cost competitive with P1. What other alternative is there? You get what you pay for. 
>> d) we divide by 3 so it's not a +1 number of bits. It's a -1 number of bits.
> Not sure I understand. What do you mean by this?
Wagstaff is a much harder beast than Mersenne. 
I think another important thing to note is perspective from which you write things. If you work for government you probably have a lousy GPU and relative new CPU.
However, most users at home have a fast GPU and an old, outdated CPU in most cases, and your computer works for more years than at the government. Especially the CAD I'm doing requires fast GPUs, and whatever sort of CPU you've got is totally irrelevant. Kids that game want a fast GPU; the CPU really is less relevant and has fewer cores in general. So the factor by which the GPU is faster than the CPU is far greater than most uni professors over here assume. 
[QUOTE=diep;492459]>> Originally Posted by diep View Post
>>a) there is a smaller percentage of numbers left with wagstaff after TF than with >>mersenne.
> I don't know why this should be so... but I'll take your word for it. However this has > no relevance as to whether doing P-1 after TF is worthwhile.
Sir, if you do not understand this one, I propose you first study it in depth before claiming it doesn't matter.[/QUOTE] Go ahead. Put forth your argument as to why this is relevant.

[QUOTE=diep;492461]Originally Posted by diep View Post
>> b) nowadays the TF is far far far deeper than years ago with Mersenne
> Due to GPU TF. If Wagstaffs have also been similarly deeply TF'ed with GPUs, then the P-1 break-even point would shift
It is a very silly assumption to assume you will not be using some massive GPUs to TF. The amount of system time spent factoring with an algorithm like P-1 is provably nonsense except if you can do that on hardware that wouldn't be used for PRP testing otherwise. GPUs are so much faster than the hardware most have to do PRP testing, it's just not even funny to compare the two.[/QUOTE] I made no assumptions - merely noted that [B]if[/B] they are used, it [B]will[/B] shift the P-1 break-even point. But you still need to actually model your probabilities to know if P-1 is worthwhile.

[QUOTE=diep;492462]Originally Posted by diep View Post
>> Oh it does compared to the alternatives.
> What alternative? P-1 is a complementary method to TF, not an alternative. And ECM is not cost-competitive with P-1. What other alternative is there?
You get what you pay for.[/QUOTE] What are we trying to do with TF & P-1? When we say P-1 is effective, we mean that running P-1 will reduce the overall time to prove a number composite. Thus you'll spend fewer resources figuring out the primality status of all the given candidates. In that respect, ECM is a net loss. You'll save fewer resources of PRP testing than what you would put into the ECM.

[QUOTE=diep;492463]>> d) we divide by 3 so it's not a +1 number of bits. It's a -1 number of bits.
> Not sure I understand. What do you mean by this?
Wagstaff is a much harder beast than Mersenne.[/QUOTE] For the same p, Wagstaff is 1.5 bits smaller than Mersenne. Big deal! Makes no difference. If anything, it makes Wagstaff a tiny bit likelier to be prime, but that's neither here nor there.

[QUOTE=diep;492464]I think another important thing to note is the perspective from which you write things. If you work for government you probably have a lousy GPU and a relatively new CPU. However, most users at home have a fast GPU and an old outdated CPU in most cases, and your computer works for more years than at the government. Especially the CAD I'm doing requires fast GPUs and whatever sort of CPU you've got is totally irrelevant. Kids that game want a fast GPU and the CPU really is less relevant and has fewer cores in general. So the factor by which the GPU is faster than the CPU is far greater than most uni professors over here assume.[/QUOTE] The GPU is [B]way[/B] faster than the CPU in some workloads (generalising, since GPUs and CPUs come in a wide variety of capability - we can normalize for cost or power consumption to do an apples-to-apples comparison). In other workloads they might be moderately faster or even slower. TF is certainly the first kind. Yet the 4-5 additional bits of TF that GIMPS gets out of GPUs makes the project maybe about 7% faster (compared to if those same GPUs did the LL testing). Not bad, but not earth-shattering either. 
> For the same p, Wagstaff is 1.5 bits smaller than Mersenne. Big deal! Makes no difference. If anything, it makes Wagstaff a tiny bit likelier to be prime, but that's neither here nor there.
You seem a few lightyears away from grasping Wagstaff. Can you compare the odds for a prime Wagstaff versus Mersenne if we just look at the last few PRPs/primes found? It's like a factor 1.2 for Mersenne to the next one, and a factor 3 to 4 for Wagstaff. That's not a 'tiny bit' of difference. Please do not just post cheap remarks. My time is more expensive than yours. 
[QUOTE=diep;492466]Can you compare odds for a prime wagstaff versus mersenne if we just look at the last few PRP's / primes found?[/QUOTE]
That's not how you do math. Maybe physics. But definitely not math. 
[QUOTE=axn;492469]That's not how you do math. Maybe physics. But definitely not math.[/QUOTE]
No, not physics either. The word you are looking for is numerology. We have a fine specimen of a numerologist here. With [I]years [/I]of experience. Just search for his past posts. 
[QUOTE=axn;492469]That's not how you do math. Maybe physics. But definitely not math.[/QUOTE]
You [i]can[/i] do math that way, it's called experimental mathematics. But looking at just the last few doesn't give you enough statistical power to say anything meaningful, so I agree that would basically put you back in the realm of numerology. 
So far I have lists of factors from ATH, diep, lalera, bearnol... but not from some other major efforts that were active in 2013 and earlier. Perhaps those old factor lists were not preserved. Also FactorDB up to 1M.
ATH in particular "[URL="http://mersenneforum.org/showthread.php?p=332533"]trial-factored: 10k<p<1M to 56-bit, 1M<p<2M to 57-bit, 2M<p<4M to 58-bit, 4M<p<8M to 59-bit, 8M<p<16M to 60-bit, 16M<p<32M to 61-bit, 32M<p<50M to 62-bit[/URL]."

I think TJAOI's method "by k" could also be used to generate factors for Wagstaff numbers, because they have the same 2kp+1 form. And it would probably find a lot more first factors, since Wagstaffs have only been lightly factored compared to Mersennes. TJAOI's method was [URL="http://www.mersenneforum.org/showthread.php?p=468493&postcount=425"]described by ATH in this post[/URL]. I have no idea how many resources TJAOI is throwing at the problem, but here's the [URL="http://www.mersenneforum.org/showthread.php?p=467989#post467989"]timetable of when he reached each bit level[/URL]; also, he [URL="http://www.mersenneforum.org/showthread.php?p=485158&postcount=449"]finished 65 bits on April 12 of this year[/URL]. But assuming you only sieve the array up to, say, 50M instead of 1G, presumably it would go a lot faster?

Edit: If I'm reading it correctly, the sieving stage seems to be independent of Mersennes; it would be applicable to anything that has factors of the form 2kp+1. So if the results of that sieving were available as a list of surviving k, perhaps you'd only need to re-test those same k, but this time as candidate factors for Wagstaffs instead of Mersennes? Assuming TJAOI preserved that information, of course. How practical is this idea? And is it more suited to CPUs or GPUs? 
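The re-testing step in the idea above is cheap: given a surviving k, checking whether q = 2kp+1 divides the Wagstaff number is a single modular exponentiation. A sketch (my own illustration of the idea; the list of surviving k is hypothetical):

```python
# q = 2*k*p + 1 divides 2^p + 1 iff 2^p == -1 (mod q); since q > 3 for
# odd prime p, this also means q divides the Wagstaff number (2^p + 1)/3.
def divides_wagstaff(k, p):
    q = 2 * k * p + 1
    return pow(2, p, q) == q - 1

# e.g. the known factor 59 = 2*1*29 + 1 of (2^29 + 1)/3:
# divides_wagstaff(1, 29) -> True
```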
There is a wellknown [URL="https://primes.utm.edu/mersenne/heuristic.html"]heuristic for predicting the frequency of Mersenne numbers[/URL].
Do similar considerations apply for Wagstaff numbers? 
[QUOTE=GP2;492590]There is a wellknown [URL="https://primes.utm.edu/mersenne/heuristic.html"]heuristic for predicting the frequency of Mersenne numbers[/URL].
Do similar considerations apply for Wagstaff numbers?[/QUOTE] Are all factors of Wagstaff numbers (2^p + 1)/3 of the form 2kp + 1? If so then yes, a similar heuristic would drop out. 
[QUOTE=GP2;492589]I think TJAOI's method "by k" could also be used to generate factors for Wagstaff numbers, because they have the same 2kp+1 form. And it would probably find a lot more first factors, since Wagstaffs have only been lightly factored compared to Mersennes.
TJAOI's method was [URL="http://www.mersenneforum.org/showthread.php?p=468493&postcount=425"]described by ATH in this post[/URL].[/QUOTE]

That cannot be the way TJAOI searches; otherwise he would find factors of varying bit sizes all the time, but he seems to find them one bit size at a time.
[QUOTE=CRGreathouse;492591]Are all factors of Wagstaff numbers (2^p + 1)/3 of the form 2kp + 1? If so then yes, a similar heuristic would drop out.[/QUOTE]
Empirically, yes. I'm sure there's an elementary proof, which the margin of this post is too small to contain. 
[QUOTE=CRGreathouse;492591]Are all factors of Wagstaff numbers (2^p + 1)/3 of the form 2kp + 1? If so then yes, a similar heuristic would drop out.[/QUOTE]
Yes, since all prime factors of Phi_n(2) are of the form kn+1 (maybe except the largest prime factor of n, like the case n = 6, 18, 20, 21, 54, 100, 110, 136, 147, 155, 156, 162, 253, 342, 486, 500, ...), and (2^p+1)/3 is Phi_{2p}(2), thus all prime factors of it are of the form 2kp+1. 
[QUOTE=sweety439;492599]Yes, since all prime factors of Phi_n(2) are of the form kn+1 (maybe except the largest prime factor of n, like the case n = 6, 18, 20, 21, 54, 100, 110, 136, 147, 155, 156, 162, 253, 342, 486, 500, ...), and (2^p+1)/3 is Phi_{2p}(2), thus all prime factors of it are of the form 2kp+1.[/QUOTE]
Pardon my ignorance, but what is the name of this Phi function? The only phi I know is the totient function. 
Is trial factoring more effective for Wagstaffs than for Mersennes? It doesn't seem to be. Here are some numbers.
For Mersenne numbers, thanks to TJAOI's efforts and very extensive ECM testing, we can be virtually certain that we know every factor of bit length smaller than 66, without exception, for all exponents less than a billion.

For Wagstaff numbers, there are various data sources, but the main one so far is ATH's trial factoring in 2013 and earlier, for which he [URL="http://mersenneforum.org/showthread.php?p=332533"]posted a zip file[/URL]. However, his data only includes first factors, since he stopped searching whenever he found a factor for an exponent.

If we look at the exponent range 10k to 1M, ATH trial factored this range up to 56 bits (i.e. he found all exponents whose smallest factor has a bit length of less than 56). So for both Mersenne and Wagstaff I counted how many exponents fit the criteria:

[CODE]
How many Mersenne numbers and how many Wagstaff numbers with exponents
between 10k and 1M have a smallest factor of a given bit length?
(looking at bit lengths of less than 56)

bl   mer   wag
15    40    51
16    99   105
17   245   250
18   495   470
19   885   857
20  1586  1546
21  2775  2806
22  2084  2193
23  3060  2990
24  2493  2596
25  2530  2489
26  2285  2284
27  2170  2231
28  2025  2015
29  1930  1856
30  1810  1733
31  1578  1620
32  1465  1500
33  1452  1441
34  1409  1391
35  1289  1276
36  1176  1207
37  1111  1115
38  1118  1065
39  1013   945
40   889  1008
41   939   928
42   869   899
43   838   847
44   791   795
45   801   768
46   788   723
47   735   692
48   697   711
49   649   652
50   640   657
51   604   616
52   579   532
53   542   545
54   559   503
55   519   479
[/CODE]

The numbers are roughly the same. Trial factoring is equally effective for both, and it doesn't seem like the trend would change for larger bit lengths. Maybe trial factoring for Wagstaff seems more effective only because we're still finding the ridiculously easy small factors, the kind that were found and cleared for Mersennes more than two decades ago.

What about P−1 testing?
Well, trial factoring isn't really making a difference in terms of clearing away factors, so if P−1 were less effective for Wagstaff numbers than for Mersenne numbers, it would only be because the k−1 corresponding to factors 2kp + 1 was somehow less smooth. That doesn't seem likely.

However, there is a [URL="http://www.mersenneforum.org/showthread.php?p=492373"]longstanding bug where mprime/Prime95 wrongly calculates the appropriate B1,B2 bounds[/URL] for P−1 testing and sets them much too low. If you manually set the bounds higher to match the same values you'd use for Mersenne, then P−1 testing should be equally effective for Wagstaff numbers.

PS: The [URL="http://mersenneforum.org/showthread.php?p=489397&postcount=29"]Venn diagram intersection of factors that are capable of being found by either trial factoring or by P−1[/URL] is smaller than you'd think. Most factors found by TF have k−1 much too unsmooth to be found by P−1 testing, and most factors found by P−1 testing are much too big to be found by TF.
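For what it's worth, the mechanics of P−1 on a Wagstaff number fit in a few lines. This is a toy stage 1 only (my code, not mprime's; the B1 handling is simplified): a factor q = 2kp+1 is found when k is B1-smooth, since p itself always divides q−1 and can be multiplied in for free.

```python
from math import gcd

def pminus1_stage1(p, B1, base=3):
    """Toy P-1 stage 1 on the Wagstaff number N = (2^p + 1)//3, p an odd
    prime > 3 (so gcd(base, N) = 1 for base = 3). Reveals prime factors
    q = 2kp+1 whose multiplier k is B1-smooth."""
    N = ((1 << p) + 1) // 3
    x = base
    for q in range(2, B1 + 1):
        if all(q % i for i in range(2, int(q**0.5) + 1)):  # q is prime
            pe = q
            while pe * q <= B1:        # raise to the largest prime power <= B1
                pe *= q
            x = pow(x, pe, N)
    x = pow(x, p, N)                   # q - 1 = 2kp always contains p itself
    return gcd(x - 1, N)

# (2^29 + 1)/3 = 59 * 3033169; the factor 59 = 2*1*29 + 1 has k = 1,
# so even a tiny B1 reveals it
g = pminus1_stage1(29, 10)
```

Real P−1 would add Brent-Suyama stage 2 and much larger bounds; the point here is only that p belongs in the exponent for free, which is exactly why the B1,B2 values should match what you'd use for a Mersenne number of the same size.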
[QUOTE=CRGreathouse;492591]Are all factors of Wagstaff numbers (2^p + 1)/3 of the form 2kp + 1? If so then yes, a similar heuristic would drop out.[/QUOTE]
If p is an odd prime, p > 3, q is prime, and q divides (2^p + 1)/3, then q divides 2^(2*p) - 1, q does not divide 2^p - 1, and (using the hypothesis p > 3) q also does not divide 2^2 - 1 = 3, since q = 3 would force 9 to divide 2^p + 1, which happens only when 3 divides p. Thus, 2*p is the least positive integer h for which q divides 2^h - 1 (the "hauptexponent"), whence 2*p divides q - 1, i.e. q = 2*k*p + 1 for some k. The conclusion fails for p = 3: (2^3 + 1)/3 = 3.
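The argument above is easy to check numerically. A throwaway verification (my code, naive trial division) over fully factored small Wagstaff numbers:

```python
def prime_factors(n):
    """Naive trial-division factorization; fine for these small sizes."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# every prime factor q of (2^p + 1)/3 should satisfy q == 1 (mod 2p)
for p in [5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41]:
    w = ((1 << p) + 1) // 3
    assert all(q % (2 * p) == 1 for q in prime_factors(w))
```

This is the same congruence that makes the Mersenne-style 2kp+1 trial-factoring machinery carry over, and (as CRGreathouse notes) lets the standard heuristic drop out.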
[QUOTE=GP2;492612]For Wagstaff numbers, there are various data sources, but the main one so far is ATH's trial factoring in 2013 and earlier, for which he [URL="http://mersenneforum.org/showthread.php?p=332533"]posted a zip file[/URL]. However, his data only includes first factors, since he stopped searching whenever he found a factor for an exponent.[/QUOTE]
Remember there is now "mfaktc-win64.Wagstaff.exe" for trial factoring on GPUs. Looking at 16M-32M, which I trial factored to 61 bits: if I test 61-62 bits on my ~600 GHz-days/day card, it takes 8.3 sec at 16M and 4.8 sec at 32M, so let's say 6.5 sec on average. So all my effort could be duplicated pretty quickly on a single GPU today.
Bug in PRP cofactor code???
For the Wagstaff number with exponent 29123, mprime says the cofactor is composite, but actually it's a probable prime.
It fails with both type 1 and type 5 testing. Other Wagstaff numbers I tested with mprime all worked: all of the known Wagstaff primes and probable primes, plus other Wagstaff numbers with factors known by FactorDB to have a PRP cofactor.

[CODE]
PRP=N/A,1,2,29123,1,"3,126660872079575401"
PRP=N/A,1,2,29123,1,99,0,3,1,"3,126660872079575401"
PRP=N/A,1,2,29123,1,99,0,3,5,"3,126660872079575401"
[/CODE]

[CODE]
[Work thread Jul 28 16:08] Starting Gerbicz error-checking PRP test of 2^29123+1/3/126660872079575401 using all-complex FMA3 FFT length 1536
[Work thread Jul 28 16:08] 2^29123+1/3/126660872079575401 is not prime. Type-5 RES64: 663E02E0E55571F7. Wg8: 993B7657,00000000
[Work thread Jul 28 16:26] Starting PRP test of 2^29123+1/3/126660872079575401 using all-complex FMA3 FFT length 1536
[Work thread Jul 28 16:26] 2^29123+1/3/126660872079575401 is not prime. RES64: 2282024F325B71EB. Wg8: 993B7657,00000000
[Work thread Jul 28 16:26] Starting Gerbicz error-checking PRP test of 2^29123+1/3/126660872079575401 using all-complex FMA3 FFT length 1536
[Work thread Jul 28 16:26] 2^29123+1/3/126660872079575401 is not prime. Type-5 RES64: 663E02E0E55571F7. Wg8: 993B7657,00000000
[/CODE]

[CODE]
{"status":"C", "k":1, "b":2, "n":29123, "c":1, "known-factors":"3,126660872079575401", "worktype":"PRP-3", "res64":"663E02E0E55571F7", "residue-type":5, "fft-length":1536, "error-code":"00000000", "security-code":"993B7657", "program":{"name":"Prime95", "version":"29.4", "build":7, "port":8}, "timestamp":"2018-07-28 16:08:03", "errors":{"gerbicz":0}, ...

{"status":"C", "k":1, "b":2, "n":29123, "c":1, "known-factors":"3,126660872079575401", "worktype":"PRP-3", "res64":"2282024F325B71EB", "residue-type":1, "fft-length":1536, "error-code":"00000000", "security-code":"993B7657", "program":{"name":"Prime95", "version":"29.4", "build":7, "port":8}, "timestamp":"2018-07-28 16:26:44", ...

{"status":"C", "k":1, "b":2, "n":29123, "c":1, "known-factors":"3,126660872079575401", "worktype":"PRP-3", "res64":"663E02E0E55571F7", "residue-type":5, "fft-length":1536, "error-code":"00000000", "security-code":"993B7657", "program":{"name":"Prime95", "version":"29.4", "build":7, "port":8}, "timestamp":"2018-07-28 16:26:44", "errors":{"gerbicz":0}, ...
[/CODE]

[URL="http://factordb.com/index.php?query=%282%5E29123%2B1%29%2F3%2F126660872079575401"]FactorDB says it's PRP[/URL].

Can anybody verify this??? Do other versions of mprime have the same problem, or other programs derived from the same code base?
No, it is a composite. You cannot use 2-PRP for numbers like this one.
They are all 2-PRPs, for any n value.
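Batalov's point is easy to illustrate in miniature (my example, not from the thread): every divisor of 2^p+1 passes a base-2 Fermat test, because each of its prime factors q has ord_q(2) = 2p dividing q−1, and the divisor itself is ≡ 1 (mod 2p). So base 2 can never expose a composite Wagstaff number or cofactor; a base-3 test can.

```python
p = 29
N = ((1 << p) + 1) // 3        # the Wagstaff number W(29) itself
assert N == 59 * 3033169        # known factorization: N is composite

# base-2 Fermat test is fooled: for each prime factor q = 2kp+1,
# ord_q(2) = 2p divides N - 1, so 2^(N-1) == 1 (mod N)
assert pow(2, N - 1, N) == 1

# a base-3 Fermat test exposes the composite
assert pow(3, N - 1, N) != 1
```

The same argument applies verbatim to any cofactor of 2^29123+1, which is why FactorDB's preliminary base-2 "PRP" flag was meaningless here.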
[QUOTE=Batalov;492655]No, it is a composite. You cannot use 2-PRP for numbers like this one.
They are all 2-PRPs, for any n value.[/QUOTE]

Hmm, the FactorDB page at the link I gave now shows "C" instead of "PRP", and the main page for 2^29123+1 shows CF instead of FF. So I guess their server software tests base 2 at first and displays PRP on a preliminary basis, and only then gets around to testing base 3... so if you click on it right after reporting a new factor, maybe you get the bogus FF notification.

Never mind the Python snippet I posted; obviously it didn't test primality, it just verified the factor. Excitability and lack of sleep lead to a state of confusion.
This is a known issue in FactorDB. If you run across a large PRP cofactor from a base2 number, try to test using another base. Chances are, it'll be revealed as a composite.
edit: Factor DB doesn't, on its own, test using other bases. Somebody will have to manually prod it to do so. 
[QUOTE=GP2;492612]
so if P−1 was less effective for Wagstaff numbers than for Mersenne numbers, it would only be if the k−1 corresponding to factors 2kp + 1 was somehow less smooth. [/QUOTE] [QUOTE]Most factors found by TF have k−1 much too unsmooth to be found by P−1 testing[/QUOTE] Of course, it's k itself that needs to be smooth, not k−1. I've been posting too much nonsense lately. 
[QUOTE=axn;492469]That's not how you do math. Maybe physics. But definitely not math.[/QUOTE]
I come from search, where we beat all methods by light-years, of course by means of dubious approximations.

edit: even the most optimistic predictions of any expert in the 1990s were totally beaten in the 21st century: progress of hardware had been factored in by some; progress of software (algorithmic progress) by no one. (end of edit)

The best heuristic there for predicting the future is to value the most recent historical event much higher than everything before it. For example with killer moves.

If we look at Wagstaff, then there is 1 prime just under 1M, which I found. Then Tony Reix did the bulk of the work for the 4M one, and we know for sure there is nothing until 10M, and the Propper team probably searched about everything until 13M and found 2 there. Those 2 are very close together, so for the DISTANCE to the next prime I count them as a single entity. Of course finding 2 is nicer than 1.

most recent history:
13 / 4 = factor 3.25 distance
4 / 1 = factor 4.0 distance
981k / 374321 = factor 2.6 distance

That's 3 significant data points. Calling that 'luck' or 'not statistically significant' is wrong. It simply shows it's not Mersenne nor 321. Seen from a distance I call that a factor 3 distance, and then I'm still rather optimistic.

Now if we start to test a massive number of exponents, millions of them, it's possible luck starts to play a smaller role to some extent. Still, if you search until 30M I highly doubt you have better than 1 SD odds (~68%) of finding 1 or more PRPs out there. The recent data points suggest 3 * 13M+ = 40M would give high odds. Yet no guarantee, even up to the door of the prime shop. If you claim 2 SD odds of finding one by 40M, I would only say: "maybe". Yet don't bet on finding even one.

In a given search space [n;2n] with n larger than a million, I claim that the odds are simply light-years worse than for Mersenne.
I would guess it's on average somewhere between log 3 / log 2 => 1.6 and a factor 3 to the next one for Wagstaff (above n = 1 million), versus something like 1.2 for Mersenne? Yet in our lifetime we do not yet know how far we will be able to test to find sufficient datapoints there.
[QUOTE=diep;492901]
Yet in our lifetime we do not yet know how far we will be able to test to find sufficient datapoints there.[/QUOTE]

Not going into this deep numerology stuff, but we already have a lot of results: if I understand correctly, the search for Wagstaff primes is complete up to p=15e6, and the Mersenne prime search has also reached this. We have 43 Wagstaff primes, and **only** 39 Mersenne primes up to p=15e6, so what would you search? My 2 cents: probably the density is the same (a heuristic similar to the one used for Mersenne could work here); of course the big drawback for Wagstaff is that we have no (known) fast primality test.
[QUOTE=R. Gerbicz;492905]Not going into this deep numerology stuff, but we already have a lot of results: if I understand correctly, the search for Wagstaff primes is complete up to p=15e6, and the Mersenne prime search has also reached this. We have 43 Wagstaff primes, and **only** 39 Mersenne primes up to p=15e6, so what would you search? My 2 cents: probably the density is the same (a heuristic similar to the one used for Mersenne could work here); of course the big drawback for Wagstaff is that we have no (known) fast primality test.[/QUOTE]

For the first few n's, see it like this: Wagstaff is a propeller airplane. It had some advantages and was easier to build, yet Mersenne is a fighter jet. Lots of problems getting started and some hiccups, yet after it finally took off at the million-bits area, it produces very systematic primes now. The distance between each Wagstaff and the next one is simply nearly a factor 3.
[QUOTE=diep;492907]
Distance between each wagstaff and the next one simply is factor 3 nearly.[/QUOTE] Yeah, like at 13347311, 13372531. 
You ignore what I wrote.
To find the NEXT one, even if 3 PRPs lurk nearby, you want to know the DISTANCE you need to search to find 'em. 3 x 13M = 40M
[QUOTE=diep;492901]most recent history:
13 / 4 = factor 3.25 distance
4 / 1 = factor 4.0 distance
981k / 374321 = factor 2.6 distance[/QUOTE]

If Wagstaff primes are distributed like Mersenne primes (is that true?), then we would expect [URL="https://primes.utm.edu/notes/faq/NextMersenne.html"]an average ratio of 1.47576[/URL] between exponents of successive primes.

For Mersennes, the highest ratios are:
[CODE]
   521 / 127    = 4.10
756839 / 216091 = 3.50
[/CODE]
with everything else at 2.31 or lower.

For Wagstaffs, the highest ratios are:
[CODE]
    1709 / 701     = 2.44
   42737 / 14479   = 2.95
  986191 / 374321  = 2.63
 4031399 / 986191  = 4.09
13347311 / 4031399 = 3.31
[/CODE]
with everything else at 2.02 or lower.

Those three very high ratios in a row do seem anomalous. But can we be certain they are real? Maybe I'm mistaken, but I think the large ranges were tested with programs that assumed the Vrba-Reix conjecture is correct. If there are counterexamples to this conjecture, then maybe some Wagstaff primes were missed?

[QUOTE=R. Gerbicz;492249]Why not use my own error checking on these numbers? Even a single run is much stronger than your double or even triple/quadruple check if you compare only RES64. The same is true for Mersenne numbers.[/QUOTE]

The previous testing was mostly done in 2013 and earlier. Today mprime can do PRP testing with Gerbicz error checking, and it does not depend on Vrba-Reix being true. And we have faster CPUs with more cores, and faster GPUs that can find more factors.

I am seriously considering throwing some resources into retesting old Wagstaff ranges using mprime. I don't know how far it would go, but the goal would be to publish the PRP residues and a consolidated list of new and old factors from multiple available sources, as a permanent verifiable record. Sadly, there don't seem to be many published results from the 2013 efforts, other than the primes that were found. Some lists of factors were published, but others are harder to track down and might even be lost.
Ideally, the PRP residues should be recorded with a lot more than 64 bits. If the only goal is double-check verifiability, then 64 bits is plenty; however, larger residues [URL="http://mersenneforum.org/showthread.php?t=23462"]might let us do quick PRP-cofactor checking for newly discovered factors[/URL]. (I presume the analysis at the linked page for Mersennes applies similarly for Wagstaffs, or is "inv" different?)

So, at least 512 bits, or with an eye to future decades, who knows, maybe even 2048. Storage is very cheap. I'm not aware of any undoc.txt setting for mprime that would output residues larger than 64 bits, but hopefully it would be straightforward to add.
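For reference, the successive-exponent ratios discussed a couple of posts up can be recomputed in a couple of lines. The exponent list below is the usual list of known Wagstaff prime/PRP exponents (as in OEIS A000978; assumed complete as of this thread, and matching Gerbicz's count of 43):

```python
# exponents of known Wagstaff primes and PRPs (OEIS A000978, assumed complete)
wagstaff = [3, 5, 7, 11, 13, 17, 19, 23, 31, 43, 61, 79, 101, 127, 167,
            191, 199, 313, 347, 701, 1709, 2617, 3539, 5807, 10501,
            10691, 11279, 12391, 14479, 42737, 83339, 95369, 117239,
            127031, 138937, 141079, 267017, 269987, 374321, 986191,
            4031399, 13347311, 13372531]

# ratio of each exponent to its predecessor
ratios = [b / a for a, b in zip(wagstaff, wagstaff[1:])]

# the largest gaps, the ones GP2's table singles out
big = [(a, b, round(b / a, 2)) for a, b in zip(wagstaff, wagstaff[1:])
       if b / a > 2.5]
```

Running this reproduces the table in the post above: the maximum ratio is 4031399/986191 ≈ 4.09, with 3.31 and 2.63 among its neighbours.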
Yeah, you are mistaken.
Things at million-bit sizes really work differently than at a handful of bits. If you want to search in the dozens-of-millions-of-bits range, then basing your expectation of finding something upon a handful of bits makes no sense at all.
[QUOTE=diep;492955]Yeah, you are mistaken.
Things at million-bit sizes really work differently than at a handful of bits. If you want to search in the dozens-of-millions-of-bits range, then basing your expectation of finding something upon a handful of bits makes no sense at all.[/QUOTE]

You consider an LL test passed or not by checking a single bit: true or false. If a law works in math, the number of involved bits does not matter.
It's about the short-term expectation of how much computational work you need to throw in to find the next gem.
Whether you find 1 or a million doesn't matter; you first have to do that huge computational effort to find at least 1. So the way the PRPs are spread also matters quite a lot.
If one is searching for Wagstaff cofactors, they can be PRP'ed modulo 2^p+1, with a final modular reduction by the cofactor at the end. I guess Prime95 does this.

So in summary:
diep: We should ignore everything but the big numbers, numbers at millions of bits aren't like numbers with just a few bits. The big Wagstaff exponents are spread very thin. GP2: You're throwing out most of the data, if you look at the broader picture you see the Wagstaff exponents aren't that thin. Also, the higher ranges might have missed some primes because searches relied on a conjecture. 
You mean missed above 10M until 13M?
Possible. Until 10M you won't find anything new. A 3-PRP test will of course find anything there, for both Mersenne and Wagstaff. Doing a 27-PRP test, which the conjecture uses, is pretty obviously at least the same thing as doing a 3-PRP. So it won't miss anything. Whether it is a method to prove them prime, that I leave up to the theorists with enough time :)
[QUOTE=GP2;492928]
Ideally, the PRP residues should be recorded with a lot more than 64 bits. If the only goal is doublecheck verifiability, then 64 bits is plenty; however, larger residues [URL="http://mersenneforum.org/showthread.php?t=23462"]might let us do quick PRPcofactor checking for newly discovered factors[/URL]. (I presume the analysis at the linked page for Mersennes applies similarly for Wagstaffs, or is "inv" different?) So, at least 512 bits, or with an eye to future decades, who knows, maybe even 2048. Storage is very cheap. [/QUOTE] As noted in that post this works for all odd N (like the Suyama test), for Wagstaff there is also a shorter formula w: you should use w=(3*d*(ress))%(2^t) instead of w=(d*(sres))%(2^t); (what used for Mersenne numbers), if p>t=512 (or whatever you use). If we wouldn't want a fixed bitlength residue, then RES_t is a good choice where say t=16*floor(log2(p)), and then you have to redo the prp test only when d>2^t or when my test is showing that N/d could be a (probable) prime. 
[QUOTE=R. Gerbicz;492977]As noted in that post this works for all odd N (like the Suyama test), for Wagstaff there is also a shorter formula w:
you should use w=(3*d*(res-s))%(2^t) instead of w=(d*(s-res))%(2^t) (what is used for Mersenne numbers), if p>t=512 (or whatever you use).[/QUOTE]

I have one more question. You define

res = 3^mp mod mp

and in the formula for w we can substitute res_t = res mod 2^t. But for either Mersenne [c]PRP=1,2,n,-1[/c] or Wagstaff [c]PRP=1,2,n,1,"3"[/c], mprime actually calculates:

mp = 2^p ± 1
resx = 3^(mp-1) mod mp

and prints out resx_t = resx mod 2^t (t = 64 currently). For instance, for 2^523 − 1 it gives hex D749C83A364C1462, and for 2^523 + 1 it gives hex 388104444C578BC6.

So (3 * resx) mod mp == res, but (3 * resx_t) mod 2^t != res_t. When t=512 the observed difference is small, only 0 or 1 or 2, but I think it would affect the calculation. So how can we derive res_t from the available printed value resx_t?
[QUOTE=GP2;492993]
So how can we derive res_t from the available printed value resx_t ?[/QUOTE]

Right, and there was another problem with that approach: the exponent (2^p+1)/3 is not a power of two, and that would block my error check. Fortunately there is a workaround: let N=(2^p+1)/3; if N/d is prime then by Fermat's little theorem 3^(N/d)==3 mod (N/d), and from this 3^(2^p)==3^(3*d-1) mod (N/d). The rest is similar to the Mersenne case (the special case when b=2: N=(k*2^n+c)/D is similar):

[CODE]
Let N=(2^p+1)/3
res=(3^(2^p) mod (2^p+1)) mod (2^t)
s=3^(3*d-1) mod (N/d)
w=3*d*(res-s) mod (2^t)

if 3*d<2^t and w>=3*d then N/d is composite,
otherwise if 3*d>2^t then we know nothing (too large divisor!),
or if 3*d<2^t and w<3*d, then N/d can be prime
(we are weaker than any Fermat test, because of the "shrinked" residue)
[/CODE]

and a sample test for Wagstaff:

[CODE]
prp(n)=return(Mod(3,n)^n==Mod(3,n))
prpc(maxp)={t=512;num_large_d=0;c0=c1=0;forprime(p=t,maxp,
  d=1;N=(2^p+1)/3;
  forstep(q=2*p+1,10000*p,2*p,if(isprime(q)&&lift(Mod(2,q)^p+1)==0,d*=q));
  if(d>1&&d<N,
    res=lift((Mod(3,2^p+1)^(2^p)))%(2^t);
    s=lift(Mod(3,N/d)^(3*d-1));
    w=(3*d*(res-s))%(2^t);
    if(3*d>2^t,num_large_d+=1);
    if(3*d<2^t&&w>=3*d,c0++);
    if(3*d<2^t&&w<3*d,c1++);
    if(prp(N/d),print(["Found prp ",p,d]));
    if(w<3*d&&2^t>3*d,print(["Candidate! ",p,d]))));
  print("Number of large divisors=",num_large_d);
  print(c0);print(c1)}
prpc(10000)
[/CODE]
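For readers more comfortable with Python than PARI, here is a rough rendering of the same shortcut (my code, not from the thread; `small_wagstaff_divisor` is just a helper playing the role of the forstep() factor search, and t must not exceed p):

```python
from math import isqrt

def small_wagstaff_divisor(p, kmax=2000):
    """Product d of the small prime factors q = 2kp+1 of (2^p+1)/3,
    found by scanning k = 1..kmax (d = 1 if none found)."""
    d = 1
    for q in range(2 * p + 1, 2 * p * kmax + 2, 2 * p):
        # q | 2^p + 1 iff 2^p == -1 (mod q); then confirm q is prime
        if pow(2, p, q) == q - 1 and all(q % i for i in range(3, isqrt(q) + 1, 2)):
            d *= q
    return d

def quick_cofactor_check(p, d, t=512):
    """Gerbicz's shortcut: decide compositeness of N/d, N = (2^p+1)/3,
    from only the low t bits of the full 3^(2^p) residue.
    Requires p >= t and 3*d < 2^t."""
    N = ((1 << p) + 1) // 3
    assert p >= t and 3 * d < (1 << t)
    res = pow(3, 1 << p, (1 << p) + 1) % (1 << t)   # low t bits of full residue
    s = pow(3, 3 * d - 1, N // d)
    w = (3 * d * (res - s)) % (1 << t)
    # if N/d were a Fermat 3-PRP, then res - s = k*(N/d) with 0 <= k < 3d,
    # and w == k < 3d; so w >= 3d proves compositeness
    return "composite" if w >= 3 * d else "maybe-prime"
```

A "composite" verdict is definitive; "maybe-prime" still requires a real PRP test of the cofactor, exactly as in the PARI version.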
[QUOTE=diep;492972]Possible. Until 10M you won't find anything new. [/QUOTE]
There is a possibility, however remote, of a hidden FFT bug that could miss a prime. For Mersennes, we have a double-check with a [B]different rotation[/B], which can detect the issue even if there is an FFT bug. But I don't believe Wagstaff numbers can do the bit rotation trick (corrections welcome!). So your confidence in the integrity of the result is predicated upon your confidence in the software. Having said that, a single pass with GEC PRP should be sufficient to rule out such issues.
1 Attachment(s)
I wrote some Python code for the Gerbicz quick-test method, which determines in many cases that a PRP cofactor is composite without actually doing a PRP cofactor test.
I find Python a little more comfortable to use than PARI/GP. The code is attached ([c]gerbicz_quickprp.tar.gz[/c]) 
[QUOTE=axn;493026]But I don't believe Wagstaff numbers can do the bit rotation trick (corrections welcome!). [/QUOTE]
As far as I know, only the newer versions of P95 include rotation for the PRP test; the older versions haven't got this, or we can say that they used the constant shift=0.

[QUOTE=GP2;493028]I wrote some Python code for the Gerbicz quick-test method, which determines in many cases that a PRP cofactor is composite without actually doing a PRP cofactor test.[/QUOTE]

Looks good, but have you checked that this finds all "known" (probable) primes for the Mersenne/Wagstaff case in your range? (Of course not including the numbers where Mp or W(p) itself is prime.) My super light sieve only finds PRP numbers at p=557, 1171, 1627, 3329 for the Wagstaff case in that p=512..5000 range.
[QUOTE=R. Gerbicz;493049]As far as I know, only the newer versions of P95 include rotation for the PRP test; the older versions haven't got this, or we can say that they used the constant shift=0.[/QUOTE]
For PRP, the JSON output has a shift count:
[CODE]
{"status":"C", "exponent":78358739, "worktype":"PRP-3", "res64":"EF71CC587195371C", "residue-type":1, "fft-length":4587520, [B]"shift-count":75805076,[/B] "error-code":"00000000", ...
[/CODE]

For PRP-CF, the JSON output also has a shift count:
[CODE]
{"status":"C", "exponent":6444727, "known-factors":"373794167,176482404169", "worktype":"PRP-3", "res64":"B3948091270BC85C", "residue-type":5, "fft-length":327680, [B]"shift-count":3602762,[/B] "error-code":"00000000", ...
[/CODE]

But for Wagstaff, the JSON output doesn't display a shift count:
[CODE]
{"status":"C", "k":1, "b":2, "n":31873, "c":1, "known-factors":"3", "worktype":"PRP-3", "res64":"130344E20CCBF5EE", "residue-type":5, "fft-length":1536, "error-code":"00000000", ...
[/CODE]

Maybe it's just an omission? As long as b == 2 in k × b[SUP]n[/SUP] + c, a nonzero shift count should be possible, no?

[QUOTE]Looks good, but have you checked that this finds all "known" (probable) primes for the Mersenne/Wagstaff case in your range? (Of course not including the numbers where Mp or W(p) itself is prime.) My super light sieve only finds PRP numbers at p=557, 1171, 1627, 3329 for the Wagstaff case in that p=512..5000 range.[/QUOTE]

Yes, it flags all the PRPs in that range, and also flags a few false positives: for Mersenne [M]1489[/M] and [M]3853[/M], and for Wagstaff 1439. It's easy to see why: those exponents have a big list of factors, and the product of the factors is larger than 512 bits.

In the code I posted, the slowest step by far is the line where I calculate the 512-bit residue myself in Python, rather than getting it from a future version of mprime that will hopefully have the capability of displaying large-bit-size residues.

[CODE]
res = pow(a, mp - 1, mp) % pow2_t
[/CODE]

However, if we use the gmpy2 library for Python (which interfaces to GMP) and make one small change:

[CODE]
from gmpy2 import mpz

mpz_a = mpz(a)
...
res = pow(mpz_a, mp - 1, mp) % pow2_t
[/CODE]

This makes the modular exponentiation use GMP rather than Python's much slower internal code for large numbers. So now I can calculate 512-bit residues for exponents up to 100k in maybe a day or so, and then recheck that it doesn't miss any PRPs in this larger range.
[QUOTE=GP2;493057]But for Wagstaff, the JSON output doesn't display a shift count:
[CODE]
{"status":"C", "k":1, "b":2, "n":31873, "c":1, "known-factors":"3", "worktype":"PRP-3", "res64":"130344E20CCBF5EE", "residue-type":5, "fft-length":1536, "error-code":"00000000", ...
[/CODE]
Maybe it's just an omission? As long as b == 2 in k × b[SUP]n[/SUP] + c, a nonzero shift count should be possible, no?[/QUOTE]

Ouch, no, or I don't see the b=2 case, but for Wagstaff it is still working: if you start with base=3*2^shift, where 0<=shift<2*p (but it is right to restrict to 0<=shift<p), then the final residue is:

res=(3*2^shift)^(2^p)==3^(2^p)*2^(2*shift) mod (2^p+1)

so to get the original final (non-shifted) residue:

res'=res*2^(2*p-2*shift) mod (2^p+1).
1 Attachment(s)
I tested up to p = 50k for both Mersenne and Wagstaff.
With 2048bit residues there are no false positives, and obviously no false negatives. Updated programs and datafiles with largebitsize residues are attached. 
[QUOTE=R. Gerbicz;493081]Ouch, no, or I don't see the b=2 case, but for Wagstaff it is still working:
if you start with base=3*2^shift, where 0<=shift<2*p (but it is right to restrict to 0<=shift<p), then the final residue is: res=(3*2^shift)^(2^p)==3^(2^p)*2^(2*shift) mod (2^p+1), so to get the original final (non-shifted) residue: res'=res*2^(2*p-2*shift) mod (2^p+1).[/QUOTE]

This looks interesting, but you need to do additional work to recover the correct residue. For Mersennes, you don't need to do any work at all. Having said that, it is just shifts, so not really a big deal. I'm pretty sure this is not being done currently, so more work for George (hint!)

HOWEVER, what is the math for recovering intermediate residues? Because there is interest in getting intermediate residues at specific milestones as well.

Anyway, if we're using GEC, then the shift trick is superfluous. It doesn't bring any extra safety to the project. It is merely a nice-to-have.
[QUOTE=axn;493107]This looks interesting, but you need to do additional work to recover the correct residue.[/QUOTE]
That is the big deal? You are worrying about O(p) cost when you have already done O(p^(2+eps)) work.

[QUOTE=axn;493107]HOWEVER, what is the math for recovering intermediate residues? Because there is interest in getting intermediate residues at specific milestones as well.[/QUOTE]

Similar maths, but why would you store the unshifted residue when you continue with the shifted one? I'd store the shifted residue and the shift number; that is more logical.

[QUOTE=axn;493107]Anyway, if we're using GEC, then the shift trick is superfluous. It doesn't bring any extra safety to the project. It is merely a nice-to-have.[/QUOTE]

Correct.
[QUOTE=R. Gerbicz;493122]Similar maths, but why would you store the unshifted residue, when you continue with the shifted? I'd store the shifted residue and the shift number, that is more logical.[/QUOTE]
For comparing multiple runs (not the full residue, just RES64). You don't need to wait till the end to know if two runs have diverged. Anyway, just a thought...
[QUOTE=axn;493127]For comparing multiple runs (not the full residue, just RES64). You don't need to wait till the end to know if two runs have diverged. Anyway, just a thought...[/QUOTE]
You don't need to wait: after k iterations the extra factor is 2^(shift*2^k) mod (2^p+1), so you have to unshift by 2*p-((shift*2^k) mod (2*p)) bits to get the residue you would have for shift=0.

Btw, if at the end you want to avoid the shift, then calculate 3^(2^p-2) mod (2^p+1), so the exponent would be the same as that used for Mersenne, but it would be confusing and adds nothing to the theory. (The reason this works is that 2^p-2 is divisible by 2*p (for odd prime p), and 2^(2*p)-1 is divisible by 2^p+1.)
[QUOTE=GP2;492609]Pardon my ignorance, but what is the name of this Phi function? The only phi I know is the totient function.[/QUOTE]
"Phi" is the cyclotomic polynomial. The first 64 cyclotomic polynomials are:

[CODE]
Phi_1(x) = x - 1
Phi_2(x) = x + 1
Phi_3(x) = x^2 + x + 1
Phi_4(x) = x^2 + 1
Phi_5(x) = x^4 + x^3 + x^2 + x + 1
Phi_6(x) = x^2 - x + 1
Phi_7(x) = x^6 + x^5 + x^4 + x^3 + x^2 + x + 1
Phi_8(x) = x^4 + 1
Phi_9(x) = x^6 + x^3 + 1
Phi_10(x) = x^4 - x^3 + x^2 - x + 1
Phi_11(x) = x^10 + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1
Phi_12(x) = x^4 - x^2 + 1
Phi_13(x) = x^12 + x^11 + x^10 + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1
Phi_14(x) = x^6 - x^5 + x^4 - x^3 + x^2 - x + 1
Phi_15(x) = x^8 - x^7 + x^5 - x^4 + x^3 - x + 1
Phi_16(x) = x^8 + 1
Phi_17(x) = x^16 + x^15 + x^14 + x^13 + x^12 + x^11 + x^10 + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1
Phi_18(x) = x^6 - x^3 + 1
Phi_19(x) = x^18 + x^17 + x^16 + x^15 + x^14 + x^13 + x^12 + x^11 + x^10 + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1
Phi_20(x) = x^8 - x^6 + x^4 - x^2 + 1
Phi_21(x) = x^12 - x^11 + x^9 - x^8 + x^6 - x^4 + x^3 - x + 1
Phi_22(x) = x^10 - x^9 + x^8 - x^7 + x^6 - x^5 + x^4 - x^3 + x^2 - x + 1
Phi_23(x) = x^22 + x^21 + x^20 + x^19 + x^18 + x^17 + x^16 + x^15 + x^14 + x^13 + x^12 + x^11 + x^10 + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1
Phi_24(x) = x^8 - x^4 + 1
Phi_25(x) = x^20 + x^15 + x^10 + x^5 + 1
Phi_26(x) = x^12 - x^11 + x^10 - x^9 + x^8 - x^7 + x^6 - x^5 + x^4 - x^3 + x^2 - x + 1
Phi_27(x) = x^18 + x^9 + 1
Phi_28(x) = x^12 - x^10 + x^8 - x^6 + x^4 - x^2 + 1
Phi_29(x) = x^28 + x^27 + x^26 + x^25 + x^24 + x^23 + x^22 + x^21 + x^20 + x^19 + x^18 + x^17 + x^16 + x^15 + x^14 + x^13 + x^12 + x^11 + x^10 + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1
Phi_30(x) = x^8 + x^7 - x^5 - x^4 - x^3 + x + 1
Phi_31(x) = x^30 + x^29 + x^28 + x^27 + x^26 + x^25 + x^24 + x^23 + x^22 + x^21 + x^20 + x^19 + x^18 + x^17 + x^16 + x^15 + x^14 + x^13 + x^12 + x^11 + x^10 + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1
Phi_32(x) = x^16 + 1
Phi_33(x) = x^20 - x^19 + x^17 - x^16 + x^14 - x^13 + x^11 - x^10 + x^9 - x^7 + x^6 - x^4 + x^3 - x + 1
Phi_34(x) = x^16 - x^15 + x^14 - x^13 + x^12 - x^11 + x^10 - x^9 + x^8 - x^7 + x^6 - x^5 + x^4 - x^3 + x^2 - x + 1
Phi_35(x) = x^24 - x^23 + x^19 - x^18 + x^17 - x^16 + x^14 - x^13 + x^12 - x^11 + x^10 - x^8 + x^7 - x^6 + x^5 - x + 1
Phi_36(x) = x^12 - x^6 + 1
Phi_37(x) = x^36 + x^35 + x^34 + x^33 + x^32 + x^31 + x^30 + x^29 + x^28 + x^27 + x^26 + x^25 + x^24 + x^23 + x^22 + x^21 + x^20 + x^19 + x^18 + x^17 + x^16 + x^15 + x^14 + x^13 + x^12 + x^11 + x^10 + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1
Phi_38(x) = x^18 - x^17 + x^16 - x^15 + x^14 - x^13 + x^12 - x^11 + x^10 - x^9 + x^8 - x^7 + x^6 - x^5 + x^4 - x^3 + x^2 - x + 1
Phi_39(x) = x^24 - x^23 + x^21 - x^20 + x^18 - x^17 + x^15 - x^14 + x^12 - x^10 + x^9 - x^7 + x^6 - x^4 + x^3 - x + 1
Phi_40(x) = x^16 - x^12 + x^8 - x^4 + 1
Phi_41(x) = x^40 + x^39 + x^38 + x^37 + x^36 + x^35 + x^34 + x^33 + x^32 + x^31 + x^30 + x^29 + x^28 + x^27 + x^26 + x^25 + x^24 + x^23 + x^22 + x^21 + x^20 + x^19 + x^18 + x^17 + x^16 + x^15 + x^14 + x^13 + x^12 + x^11 + x^10 + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1
Phi_42(x) = x^12 + x^11 - x^9 - x^8 + x^6 - x^4 - x^3 + x + 1
Phi_43(x) = x^42 + x^41 + x^40 + x^39 + x^38 + x^37 + x^36 + x^35 + x^34 + x^33 + x^32 + x^31 + x^30 + x^29 + x^28 + x^27 + x^26 + x^25 + x^24 + x^23 + x^22 + x^21 + x^20 + x^19 + x^18 + x^17 + x^16 + x^15 + x^14 + x^13 + x^12 + x^11 + x^10 + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1
Phi_44(x) = x^20 - x^18 + x^16 - x^14 + x^12 - x^10 + x^8 - x^6 + x^4 - x^2 + 1
Phi_45(x) = x^24 - x^21 + x^15 - x^12 + x^9 - x^3 + 1
Phi_46(x) = x^22 - x^21 + x^20 - x^19 + x^18 - x^17 + x^16 - x^15 + x^14 - x^13 + x^12 - x^11 + x^10 - x^9 + x^8 - x^7 + x^6 - x^5 + x^4 - x^3 + x^2 - x + 1
Phi_47(x) = x^46 + x^45 + x^44 + x^43 + x^42 + x^41 + x^40 + x^39 + x^38 + x^37 + x^36 + x^35 + x^34 + x^33 + x^32 + x^31 + x^30 + x^29 + x^28 + x^27 + x^26 + x^25 + x^24 + x^23 + x^22 + x^21 + x^20 + x^19 + x^18 + x^17 + x^16 + x^15 + x^14 + x^13 + x^12 + x^11 + x^10 + x^9 + x^8 + x^7 + x^6 + x^5 +
x^4 + x^3 + x^2 + x + 1 Phi_48(x) = x^16  x^8 + 1 Phi_49(x) = x^42 + x^35 + x^28 + x^21 + x^14 + x^7 + 1 Phi_50(x) = x^20  x^15 + x^10  x^5 + 1 Phi_51(x) = x^32  x^31 + x^29  x^28 + x^26  x^25 + x^23  x^22 + x^20  x^19 + x^17  x^16 + x^15  x^13 + x^12  x^10 + x^9  x^7 + x^6  x^4 + x^3  x + 1 Phi_52(x) = x^24  x^22 + x^20  x^18 + x^16  x^14 + x^12  x^10 + x^8  x^6 + x^4  x^2 + 1 Phi_53(x) = x^52 + x^51 + x^50 + x^49 + x^48 + x^47 + x^46 + x^45 + x^44 + x^43 + x^42 + x^41 + x^40 + x^39 + x^38 + x^37 + x^36 + x^35 + x^34 + x^33 + x^32 + x^31 + x^30 + x^29 + x^28 + x^27 + x^26 + x^25 + x^24 + x^23 + x^22 + x^21 + x^20 + x^19 + x^18 + x^17 + x^16 + x^15 + x^14 + x^13 + x^12 + x^11 + x^10 + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1 Phi_54(x) = x^18  x^9 + 1 Phi_55(x) = x^40  x^39 + x^35  x^34 + x^30  x^28 + x^25  x^23 + x^20  x^17 + x^15  x^12 + x^10  x^6 + x^5  x + 1 Phi_56(x) = x^24  x^20 + x^16  x^12 + x^8  x^4 + 1 Phi_57(x) = x^36  x^35 + x^33  x^32 + x^30  x^29 + x^27  x^26 + x^24  x^23 + x^21  x^20 + x^18  x^16 + x^15  x^13 + x^12  x^10 + x^9  x^7 + x^6  x^4 + x^3  x + 1 Phi_58(x) = x^28  x^27 + x^26  x^25 + x^24  x^23 + x^22  x^21 + x^20  x^19 + x^18  x^17 + x^16  x^15 + x^14  x^13 + x^12  x^11 + x^10  x^9 + x^8  x^7 + x^6  x^5 + x^4  x^3 + x^2  x + 1 Phi_59(x) = x^58 + x^57 + x^56 + x^55 + x^54 + x^53 + x^52 + x^51 + x^50 + x^49 + x^48 + x^47 + x^46 + x^45 + x^44 + x^43 + x^42 + x^41 + x^40 + x^39 + x^38 + x^37 + x^36 + x^35 + x^34 + x^33 + x^32 + x^31 + x^30 + x^29 + x^28 + x^27 + x^26 + x^25 + x^24 + x^23 + x^22 + x^21 + x^20 + x^19 + x^18 + x^17 + x^16 + x^15 + x^14 + x^13 + x^12 + x^11 + x^10 + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1 Phi_60(x) = x^16 + x^14  x^10  x^8  x^6 + x^2 + 1 Phi_61(x) = x^60 + x^59 + x^58 + x^57 + x^56 + x^55 + x^54 + x^53 + x^52 + x^51 + x^50 + x^49 + x^48 + x^47 + x^46 + x^45 + x^44 + x^43 + x^42 + x^41 + x^40 + x^39 + x^38 + x^37 + x^36 + x^35 + x^34 + x^33 + x^32 + x^31 + x^30 + x^29 + x^28 + x^27 + x^26 
+ x^25 + x^24 + x^23 + x^22 + x^21 + x^20 + x^19 + x^18 + x^17 + x^16 + x^15 + x^14 + x^13 + x^12 + x^11 + x^10 + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1 Phi_62(x) = x^30  x^29 + x^28  x^27 + x^26  x^25 + x^24  x^23 + x^22  x^21 + x^20  x^19 + x^18  x^17 + x^16  x^15 + x^14  x^13 + x^12  x^11 + x^10  x^9 + x^8  x^7 + x^6  x^5 + x^4  x^3 + x^2  x + 1 Phi_63(x) = x^36  x^33 + x^27  x^24 + x^18  x^12 + x^9  x^3 + 1 Phi_64(x) = x^32 + 1 [/CODE] 
Thus if p is an odd prime, Phi_2p(x) = (x^p + 1)/(x + 1) is the Wagstaff number in base x; setting x = 2 gives the usual Wagstaff number (2^p + 1)/3.
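This identity is easy to sanity-check numerically. The following is a small sketch (function names are my own, not from any program mentioned in this thread) that evaluates Phi_n(2) via the Moebius product formula Phi_n(x) = prod over d|n of (x^d - 1)^mu(n/d), and confirms Phi_2p(2) = (2^p + 1)/3 for small odd primes p:

```python
# Verify Phi_{2p}(2) == (2^p + 1)/3 for small odd primes p, using the
# Moebius product formula for cyclotomic polynomials. Exact arithmetic
# via Fraction, since the formula involves negative exponents.
from fractions import Fraction

def mobius(n):
    """Moebius function mu(n) by trial division."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0      # squared prime factor
            result = -result
        d += 1
    if n > 1:
        result = -result      # one remaining prime factor
    return result

def phi_at(n, x):
    """Evaluate the n-th cyclotomic polynomial at integer x."""
    val = Fraction(1)
    for d in range(1, n + 1):
        if n % d == 0:
            val *= Fraction(x**d - 1) ** mobius(n // d)
    return val

for p in [3, 5, 7, 11, 13]:
    assert phi_at(2 * p, 2) == (2**p + 1) // 3
```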

Here's a silly question.
Currently, when mprime does a PRP test, it sends a 64-bit residue to the server at the end. But it also sends interim 64-bit residues at the 500k iteration, and then at 5M, 10M, 15M, etc. I believe the server stores these as a potential check against faked PRP results. There is a small bug where the final interim residue might be missing, or associated with the wrong exponent.

If we only know the final 64-bit residue, then the remaining digits of the full p-bit residue could be anything. But if we know some earlier interim residues, then perhaps this constrains what the remaining digits of the final residue could be? Could we use those interim residues in some way to reconstruct the lost 65th bit of the final residue? And the 66th? Or perhaps constrain the 64-bit word at bits 65 to 128 to a relatively small set of possibilities? And then maybe run the quick cofactor-compositeness check on that small set of possible 128-bit final residues? 
[QUOTE=GP2;493177]
If we only know the final 64-bit residue, then the remaining digits of the full p-bit residue could be anything. But if we know some earlier interim residues, then perhaps this constrains what the remaining digits of the final residue could be? [/QUOTE] Hopeless. You would have no information even on the next iteration: if you know (say) the last 64 bits of x then you won't know even the parity of (x^2) mod N. 
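A quick numeric illustration of this point (the modulus and values below are arbitrary examples chosen by me, not taken from any actual test): two starting values that agree in their low 64 bits can have squares mod N whose low bits already differ after a single iteration, so a truncated residue cannot be propagated forward.

```python
# Two inputs with identical low 64 bits whose squares mod N differ in the
# low bits after one squaring. N here is just an illustrative modulus.
N = 2**127 - 1
low = 12345
x1 = low
x2 = low + 2**64                # same low 64 bits, different high bits
assert x1 % 2**64 == x2 % 2**64

r1 = (x1 * x1) % N % 2**64      # low 64 bits of the next iterate
r2 = (x2 * x2) % N % 2**64
assert r1 != r2                 # the truncated residues already diverge
```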
Oh la la, you didn't learn what the principle of hashing is?
There would be special cases if you could pick a very specific p over which to take the modulo. Ignoring special cases that will never happen here: by definition, a hash function is a one-way function. [QUOTE=GP2;493177]Here's a silly question. Currently, when mprime does a PRP test, it sends a 64-bit residue to the server at the end. But it also sends interim 64-bit residues at the 500k iteration, and then at 5M, 10M, 15M, etc. I believe the server stores these as a potential check against faked PRP results. There is a small bug where the final interim residue might be missing, or associated with the wrong exponent. If we only know the final 64-bit residue, then the remaining digits of the full p-bit residue could be anything. But if we know some earlier interim residues, then perhaps this constrains what the remaining digits of the final residue could be? Could we use those interim residues in some way to reconstruct the lost 65th bit of the final residue? And the 66th? Or perhaps constrain the 64-bit word at bits 65 to 128 to a relatively small set of possibilities? And then maybe run the quick cofactor-compositeness check on that small set of possible 128-bit final residues?[/QUOTE] 
[QUOTE=diep;493207]Oh la la, you didn't learn what the principle of hashing is?[/QUOTE]
I thought it's probably impossible, but wouldn't hurt to ask. After all, we have a remarkable technique now for discovering new PRP cofactors without doing a PRP cofactor test. So I started wondering what else might be possible... 
The first success of the new Gerbicz cofactor-compositeness algorithm, by the way:
Wagstaff number (2^161683+1)/3 has the 2048-bit residue: [code] res = 718EEDB10BA0C44165E8029C79D6D2CEA9F1B9656063F40649D24E3A6F83E83B7C39CD85DF46F05AEA4E57197AD7E9E8DE146224378351ED2B97527A8121F4AA94B411A18AF7E44D34AA5FCF2DC7B92E0EEA23574E560D258A6D143080C284340EFC3E4C8658EBBF6D1CA1F5C2260BD6E8DE7C07F801E104B2D0403AE232230F08061DB56F5F3CAD610DF88A9AB0694E86F4A783424811326CB0AE2F1D4547EAF0A9DC3037F4194A98CCB2548F3CBD7930EFAD9E1D5D3840756F27A7CBF05DE3E735BF4680EEF02BD299AFFF592AD29CCF48C46E034E124CB592588ABA85C7405783CEA6C41825E36BA21BA59DB234A95D68FCF86AE6C9824C4C4858CC8D5439 [/code] Apply the test to the factor 388473905764291 and it flags the cofactor as a possible prime! First of all, d = 3 * 388473905764291 is much smaller than 2048 bits (it's overkill, I know), so the first criterion is met. (Note: if we were doing Mersenne, obviously there would be no factor "3" involved.) Then s = 3^(d - 1) mod ((2^161683+1)/d). Then w = (d*(res - s)) mod (2^2048). (Note: if we were doing Mersenne, it would be (s - res) instead.) And we have w < d, so the cofactor (2^161683+1)/d is flagged as a potential probable prime. Note, if it were the case that w >= d, we would know for sure that the cofactor is composite. Since w < d, we have to do a PRP cofactor test, which passed. And FactorDB confirms with bases 5, 7, 11, 13. The cofactor is 48657 digits, pretty measly by modern standards, but still big enough to be submitted to the PRP Top site. 
In Python: [CODE]
from __future__ import print_function, division

res = int("718EEDB10BA0C44165E8029C79D6D2CEA9F1B9656063F40649D24E3A6F83E83B7C39CD85DF46F05AEA4E57197AD7E9E8DE146224378351ED2B97527A8121F4AA94B411A18AF7E44D34AA5FCF2DC7B92E0EEA23574E560D258A6D143080C284340EFC3E4C8658EBBF6D1CA1F5C2260BD6E8DE7C07F801E104B2D0403AE232230F08061DB56F5F3CAD610DF88A9AB0694E86F4A783424811326CB0AE2F1D4547EAF0A9DC3037F4194A98CCB2548F3CBD7930EFAD9E1D5D3840756F27A7CBF05DE3E735BF4680EEF02BD299AFFF592AD29CCF48C46E034E124CB592588ABA85C7405783CEA6C41825E36BA21BA59DB234A95D68FCF86AE6C9824C4C4858CC8D5439", 16)

# Use PRP base 3; could use 5, 7, 11, etc. instead
a = 3
pow2_t = 2**2048
d = 3 * 388473905764291
print(d < pow2_t)   # first criterion: d fits in the residue size
s = pow(a, d - 1, (2**161683+1)//d)
w = (d*(res - s)) % pow2_t
print(w < d)        # True here: cofactor flagged as a possible prime
[/CODE] I modified the mprime source code to print out 2048-bit residues and tested it by comparing it to the output of a much slower GMP-based version. They match up to about p=78k; I didn't run the slow version beyond that. Part of the reason I'm redoing old ranges is to create the permanent database of large-bitsize Wagstaff residues, which can be used for all future cofactor discoveries, even decades from now. I am calculating the 2048-bit residue of every Wagstaff number with prime exponent p, regardless of whether it has known factors or not. This only needs to be done once, and then just apply the algorithm above to any factors currently known or discovered in the future; nearly always, it will tell you in a few seconds that the newly-created cofactor is composite. The slowest part is the calculation of "s". 
For Mersenne numbers, isn't this what Prime95 is already doing with the type 5 PRP test?
A type 5 PRP test does not need to be rerun after a new factor is found. 
[QUOTE=ATH;493253]For Mersenne numbers, isn't this what Prime95 is already doing with the type 5 PRP test?
A type 5 PRP test does not need to be rerun after a new factor is found.[/QUOTE] This is incorrect. This test has not been implemented, and any cofactor arising from a newly found factor needs to be retested. You're probably confused by the fact that type 5 produces the same residue regardless of how many factors there are. In theory, if a bigger residue were returned, then the server itself could do the preliminary check and only issue a definitive PRP test if the quick check passed. This is yet to be implemented, however. 
[QUOTE=GP2;493250]
Note, if it were the case that w >= d, we would know for sure that the cofactor is composite. Since w < d, we have to do a PRP cofactor test, which passed. And FactorDB confirms with bases 5, 7, 11, 13. [/QUOTE] To help rule out the PRP being a Carmichael number: [CODE]time ./pfgw64 -k -f0 -od -q"(2^161683+1)/3/388473905764291" | ../../coding/gwnum/hybrid - 1 2 161683 1
Testing (x + 1)^(n + 1) == 2 + 3 (mod n, x^2 - 3*x + 1)... Likely prime!

real 0m29.630s
user 0m37.092s
sys 0m15.812s
[/CODE] 
[QUOTE=axn;493257]In theory, if a bigger residue was returned, then the server itself could do the preliminary check and only issue a definitive PRP test if the quick check passed. This is yet to be implemented, however.[/QUOTE]
Yes, exactly this. In a world with big residues, every time a new factor is reported for any exponent, the server itself can spend a few seconds to determine for certain that the newly-created cofactor is composite. Actual PRP cofactor tests would only be needed perhaps every few months, in the rare cases where the cofactor-compositeness test returns a maybe-prime result instead. PRP-CF would be eliminated as an assignment category; such tests would only be performed manually, as needed, on rare occasions. The only PRP tests done would be the same ones that are done when there are no known factors: exactly once for each exponent. I am using 2048-bit residues for my Wagstaff stuff to future-proof it. I think [URL="http://www.mersenne.ca/manyfactors.php"]the current record holder[/URL] is [M]M31817[/M], where the product of the known factors fits into 650 bits. But actually, the residues could be even longer; why not 4096 or 8192 bits? Storage is cheap. We only need to calculate a PRP residue for each exponent once, and then use that result maybe several decades from now, when future generations of GPUs or maybe even quantum computers could be finding a lot more factors. If the product of the known factors starts routinely exceeding the residue bit size, then the Gerbicz cofactor-compositeness test can't be used and we're back to the current status quo of redoing PRP tests every single time. I think large-bitsize PRP is the only way to go, and soon. 
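The server-side flow described here can be sketched as a single function. This is a hedged sketch, not real server code: the function and parameter names are invented for illustration, and it assumes `res` is the low `bits` bits of the full Fermat-PRP residue 3^(N-1) mod N for the Wagstaff case N = 2^p + 1, with p >= bits so that N == 1 (mod 2^bits).

```python
# Sketch of a server-side quick check (illustrative names, Wagstaff form):
# given the stored truncated residue and the list of known factors of
# (2^p+1)/3, decide whether a full PRP cofactor test is worth scheduling.
# Returning False PROVES the cofactor is not a Fermat PRP to this base;
# returning True means "maybe prime, run the real test".
def cofactor_might_be_prp(p, res, factors, bits=2048, base=3):
    n = 2**p + 1
    d = 3
    for f in factors:
        d *= f
    if d >= 2**bits:
        return True          # quick check inapplicable; need a real PRP-CF test
    s = pow(base, d - 1, n // d)
    w = (d * (res - s)) % 2**bits
    return w < d             # w >= d proves the cofactor composite

# Tiny worked example with p = 29 (and bits shrunk to 29 so the congruence
# N == 1 mod 2^bits still holds): 2^29+1 = 3 * 59 * 3033169, and the
# cofactor 3033169 is prime, so the check must return True.
n = 2**29 + 1
res29 = pow(3, n - 1, n) % 2**29
print(cofactor_might_be_prp(29, res29, [59], bits=29))  # True
```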
Updated version of the [c]gerbicz_quickprp.tar.gz[/c] file attached to an earlier post. Default bit size is now 2048, it runs faster, and there are some other changes (run with -h or --help for help).

[QUOTE=GP2;493247]I thought it's probably impossible, but wouldn't hurt to ask.
After all, we have a remarkable technique now for discovering new PRP cofactors without doing a PRP cofactor test. So I started wondering what else might be possible...[/QUOTE] Yeah well, you know, I figured something like that out in 2002-2006. In a similar manner you can, among other things, figure out the number of bits of the smallest divisor of M = 2^p - 1 where p is prime. Yet it has a similar problem: you first need to square a lot of times, and in order to detect the cycle length, which indicates the number of bits of the divisor, we need roughly p + len squarings, where len is the number of bits of the divisor. That makes it a very inefficient way to figure out whether M is prime or not. It's of course possible to invent quadrizillions of things like that, burning more energy than you need to figure out whether it's a prime or not, but IMHO that's utterly useless, except if you get a salary to act as if you're busy. 
[QUOTE=diep;493272]Yeah well, you know, I figured something like that out in 2002-2006.
In a similar manner you can, among other things, figure out the number of bits of the smallest divisor of M = 2^p - 1 where p is prime. Yet it has a similar problem: you first need to square a lot of times, and in order to detect the cycle length, which indicates the number of bits of the divisor, we need roughly p + len squarings, where len is the number of bits of the divisor. That makes it a very inefficient way to figure out whether M is prime or not. It's of course possible to invent quadrizillions of things like that, burning more energy than you need to figure out whether it's a prime or not, but IMHO that's utterly useless, except if you get a salary to act as if you're busy.[/QUOTE] I'm not sure I'm following your argument, but from [URL="http://mersenneforum.org/showthread.php?p=352733&postcount=21"]this old post from 2013 when Ryan Propper's big PRPs were being verified[/URL] it seems that PRP27 and Vrba-Reix take about the same amount of time to run. And I'm not sure, but PRP3 might have a similar run time to PRP27? So I don't know of a reason why using mprime to test Wagstaff numbers would be inefficient. Today's computers are in any case faster, so PRP double-checking one of Ryan's big PRPs took less than 12 hours (on a single-core Skylake), whereas it took 36 hours (on unknown hardware) five years ago. 
A couple more PRPs flagged by the Gerbicz cofactor-compositeness algorithm and confirmed by PRP cofactor tests:
(2^343933 + 1) /3 /98294339989393 (2^364513 + 1) /3 /44290516579 The previous record holder for [URL="http://www.primenumbers.net/prptop/searchform.php?form=%282^x%2B1%29%2F%3F&action=Search"]Wagstaff PRP cofactors[/URL] was (2^257869+1)/(3*18645362967379), discovered in 2005. 2048bit residues are: [CODE] 343933,,,9941FD27FFA885A2785D6FA950D7A8287CAD937E9A1B228B2BED905A284455A7A6DA4C31DD45FF9D9280B95D80E5DF2BC82361737366AE6523066896872AC453009C6A5494C514379622F8192E7D7250DAF372FE0E4B2F610ABE187A38FCD281107E00B29291C373E022E09007BC5C9724086DFA07AAE016D565D32B7BDF63186F16140271085526E7A5E74F6358701CF5E2BA90A7FB5056EDA0E91000618E08A416DA263B73A11443542FF310BFFE03FD740CEFCE93D3B1ED40FEF10A03B1B35808561425C50A3E76C10297B68A041B2DB2EB7B00B82D6F2901489E8F4038864E5F8B6353266C4FA6A429980B970C5CCDEE8339567F146D689BA4C5E231A5B8 364513,,,0A34ACB9B441502BA3E78F33DD061AC8587882F78DAB3477952CB109A9F4769C4B1882129B90200EFEADAA3D2031FD584257923D878F137E46E168B5FEEB7F3C4473F13D07958F59B23145DCF8ECFAEA27EF5B26AA0BBC8EE72F91CABF06245B53A67752533573206DA30EB9D9CA6794FB1D5B6E655F00E132AD2A9FCECB93919EACA1D1380F60690DFD2C4BBCB7F3454774A9B66F61686D334C3BECE703F88DE4803F787D51211FC62F061C50E10AAAEAE1F1316BE0ECF032701C6494465EC02552EFC4E982ABE56BB3B3642F34897F39F88F4B5071154AEFED0036648E8772DD480B4108D6E1C4F69F9B657B710AA0230C824BDB6305543EC0BE6E1294F19D [/CODE] 
Congrats! Here are my tests on them:
[CODE]time ./pfgw64 -k -f0 -od -q"(2^343933 + 1) /3 /98294339989393" | ../../coding/gwnum/hybrid - 1 2 343933 1
Testing (x + 2)^(n + 1) == 5 (mod n, x^2 + 1)... Likely prime!

real 1m46.397s
user 2m25.676s
sys 0m53.604s
[/CODE] [CODE]time ./pfgw64 -k -f0 -od -q"(2^364513 + 1) /3 /44290516579" | ../../coding/gwnum/hybrid - 1 2 364513 1
Testing (x + 1)^(n + 1) == 2 + 15 (mod n, x^2 - 15*x + 1)... Likely prime!

real 2m11.807s
user 3m1.988s
sys 0m59.388s
[/CODE] 
Two more PRPs:
(2^610289+1)/(3*528882550291) (2^685739+1)/(3*87774593) 
[QUOTE=GP2;493548]Two more PRPs:
(2^610289+1)/(3*528882550291) (2^685739+1)/(3*87774593)[/QUOTE] Congrats! My tests on them: [CODE]time ./pfgw64 -k -f0 -od -q"(2^610289+1)/(3*528882550291)" | ../../coding/gwnum/hybrid - 1 2 610289 1
Testing (x + 2)^(n + 1) == 7 (mod n, x^2 - x + 1)... Likely prime!

real 4m10.218s
user 6m20.932s
sys 1m27.084s
[/CODE] [CODE]time ./pfgw64 -k -f0 -od -q"(2^685739+1)/(3*87774593)" | ../../coding/gwnum/hybrid - 1 2 685739 1
Testing (x + 2)^(n + 1) == 5 (mod n, x^2 + 1)... Likely prime!

real 6m30.061s
user 9m37.436s
sys 3m7.136s
[/CODE] 