mersenneforum.org - I didn't find a 5-round deterministic Miller-Rabin Test

2021-04-22, 02:25   #12
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

4,789 Posts

Quote:
 Originally Posted by LarsNet Mine is deterministic and algorithmic, and yes, it failed one number
Do you not know what "deterministic" means? I mean, you negate "deterministic" 6 words later in the same sentence. "I haven't found a second exception yet" is absolutely positively not deterministic. A proof that no counterexamples are possible is deterministic. You're not even sure it works, let alone able to prove it works, for all numbers.

2021-04-22, 03:41   #13
LarsNet

Mar 2021

2^3×3 Posts

Quote:
 Originally Posted by Happy5214 I found a counter-example. 1469328248071876501 passes your test, but is listed on this list of numbers which are strong pseudoprimes to bases 2, 3, 5, and 7.
Quote:
 Originally Posted by VBCurtis Do you not know what "deterministic" means? I mean, you negate "deterministic" 6 words later in the same sentence. "I haven't found a second exception yet" is absolutely positively not deterministic. A proof that no counterexamples are possible is deterministic. You're not even sure it works, let alone able to prove it works, for all numbers.
Mine is deterministic at the right k size, I believe. I'm not math-savvy enough to prove it in a paper, so I'm looking for insight from the enlightened minds here on whether it's worth pursuing. Since I believe it's better than the deterministic test on Wikipedia, I'm looking for insight into why or why not it would be; does that make sense? I fully understand that I don't know why it works as well as it does; I'm just not savvy enough to make mathematical sense of it, so I was hoping somebody with the savvy could comment, as I don't want to waste time on something not worth pursuing. That's all, really.

So if the k needs to increase for larger bit_length numbers, I might pursue it, because I think it's algorithmically better than what's on Wikipedia. From what I've read there, I might have a better approach than theirs for deterministic Miller-Rabin tests; I just need somebody to comment on why or why not, and that will help me decide whether to pursue this further or not at all.

Last fiddled with by LarsNet on 2021-04-22 at 03:49

2021-04-22, 09:32   #14
retina
Undefined

"The unspeakable one"
Jun 2006
My evil lair

6,143 Posts

Quote:
 Originally Posted by LarsNet Does anyone think my test is better than what's posted at https://en.wikipedia.org/wiki/Miller...istic_variants ?
Probably only you.

The test on WP can be shown to be deterministic. Your test cannot. Therefore your test is not better.

2021-04-22, 12:24   #15
Dr Sardonicus

Feb 2017
Nowhere

11C8_16 Posts

Arnault, F., "Rabin-Miller Primality Test: Composite Numbers Which Pass It," Math. Comput. 64 (1995), 355-361

gives a method for constructing counterexamples that "fool" Rabin-Miller. He gives a composite number n = p1*p2 which "passes" Rabin-Miller to all 46 prime bases < 200. He says:
Quote:
 The only limitation towards finding strong pseudoprimes to more bases in this way seems to be the difficulty of doing computations involving such large numbers.
I give the example from the 26-year-old paper for ease of reference.

Code:
n=8038374574536394912570796143419421081388376882875581458374889175222974273765333652186502336163960045457915042023603208766569966760987284043965408232928738791850869166857328267761771029389697739470167082304286871099974399765441448453411558724506334092790222752962294149842306881685404326457534018329786111298960644845216191652872597534901

p1 = 2004791083197498027091532260422734265025940830205662543872531023690016085350598121358111595798609866791081582542679083484572616906958584643763990222898400226296015918301

p2 = 4009582166394996054183064520845468530051881660411325087745062047380032170701196242716223191597219733582163165085358166969145233813917169287527980445796800452592031836601

2021-04-22, 12:45   #16
charybdis

Apr 2020

2×7×19 Posts

Quote:
 Originally Posted by LarsNet Mine is deterministic at the right k size, I believe. I'm not math-savvy enough to prove it in a paper, so I'm looking for insight from the enlightened minds here on whether it's worth pursuing. Since I believe it's better than the deterministic test on Wikipedia, I'm looking for insight into why or why not it would be; does that make sense? I fully understand that I don't know why it works as well as it does; I'm just not savvy enough to make mathematical sense of it, so I was hoping somebody with the savvy could comment, as I don't want to waste time on something not worth pursuing. That's all, really.
What makes you believe that your test is better than doing Miller-Rabin to the first 5 or 7 primes, or indeed any set of randomly chosen bases? As far as I can see there is no numerical or mathematical evidence for this belief. Happy5214's counterexample to your 5-base test is highly unlikely to be the smallest.

2021-04-22, 18:38   #17
LarsNet

Mar 2021

2^3·3 Posts

Quote:
 Originally Posted by charybdis What makes you believe that your test is better than doing Miller-Rabin to the first 5 or 7 primes, or indeed any set of randomly chosen bases? As far as I can see there is no numerical or mathematical evidence for this belief. Happy5214's counterexample to your 5-base test is highly unlikely to be the smallest.
Well, I did test the 5-base version up to the first 100 billion numbers. Really, this entry at Wikipedia makes me interested in how they did their testing:

if n < 3,317,044,064,679,887,385,961,981, it is enough to test a = 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, and 41.

So I can do comparable testing.
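For reference, the deterministic variant quoted above can be sketched in a few lines of plain Python (no gmpy2 needed); the bound and the 13 bases are exactly the ones in the quoted entry, while the function names here are just illustrative:

```python
def is_sprp(n, a):
    """True if odd n > a is a strong probable prime to base a."""
    d, s = n - 1, 0
    while d % 2 == 0:          # write n - 1 = d * 2**s with d odd
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

def is_prime_deterministic(n):
    """Correct for all n < 3,317,044,064,679,887,385,961,981."""
    if n < 2:
        return False
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41):
        if n == a:
            return True
        if n % a == 0:
            return False
        if not is_sprp(n, a):
            return False
    return True
```

Unlike the k-based test, the base set here is fixed for every n, which is what makes exhaustive verification up to the bound possible.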

From what I'm hearing here, for the size of numbers I'm testing, my k test built from the initial pow I choose (which is a modular inverse of the number against its powers of two, then going up and down from that number to build 5 or 7 or more tests) isn't interesting enough to pursue further. I think that's due to lack of proof rather than hate for the idea, but maybe there is that too, I don't know :-)

This code here has a withstats option to make it clearer what I'm doing.

BTW, I don't want to defend this; I was just looking for the thoughts of other prime-number theorists here as to whether this is a better way than the deterministic test listed in the Wikipedia entry. So far the feedback seems negative, so I don't think I'll be pursuing it, but I did want to share a better version of the code so you can see what it is doing, along with a simple explanation to go with it.

My conjecture was that the following could be a better way than the standard Miller-Rabin test, or maybe even a better alternative to the random Miller-Rabin test, since this is an algorithmic test. So far I haven't convinced some here that it is better, due to lack of mathematical proof and better testing, among other things listed by the posters. I accept this, and I know the bar is set high for me to prove this is a better option. But with my idea laid out, the limited testing I have done up to 100 billion, the OEIS lists referenced in the code comments, and advice on how to test better, I may take on the challenge; or not, if everyone agrees it doesn't even at a glance look better than the current way people use Miller-Rabin for prime testing.

So I end with a simple explanation of what my Miller-Rabin test does and some code with a stats option to help understand it. My thinking was that this test needs fewer rounds to get an accurate result than random bases do, and is more deterministic than the random test. I have failed to provide enough evidence for this, but am likely to spend time doing so if there is interest. Otherwise, if no one here likes my test, why bother wasting my time, right?

Explanation of my Miller-Rabin test conjecture, and some code with a stats option:

1. Instead of using random bases, use the modular inverse of your test number modulo the smallest power of two above it. Let's call that k.
2. Then build numbers up and down from this k so you have 5, 7, or more tests: something like [k-3, k-2, k-1, k, k+1, k+2, k+3].
3. Perform the Miller-Rabin test with those k numbers as the bases.

Code:
import gmpy2

def ffs(x):
    """Return the index, counting from 0, of the
    least significant set bit in x.
    """
    return (x & -x).bit_length() - 1

def MillerRabin(N, primetest, iterx, powx, withstats=False):
    primetest = gmpy2.powmod(primetest, powx, N)
    if withstats:
        print("first: ", primetest)
    if primetest == 1 or primetest == N - 1:
        return True
    for _ in range(iterx - 1):
        primetest = gmpy2.powmod(primetest, 2, N)
        if withstats:
            print("else: ", primetest, gmpy2.gcd(primetest - 1, N))
        if primetest == N - 1:
            return True
        if primetest == 1:
            return False
    return False

def sfactorint_isprime(N, withstats=False):
    if N == 2:
        return True
    if N < 2 or N % 2 == 0:
        return False
    iterx = ffs(N - 1)
    # This k test is an algorithmic test builder instead of using random
    # bases. The offsets of k, from -3 to +3, produce pow tests that fail
    # or pass without needing random bases and more iterations. This has
    # been tested against the first 100,000,000,000 numbers and also
    # against OEIS https://oeis.org/A074773/b074773.txt and
    # OEIS https://oeis.org/A001262/b001262.txt. For this test to pass
    # A074773, the k tests had to expand from 5 to 7, so the number of k
    # tests would need to increase for bigger bit_lengths, but I don't
    # know what those sizes are at the moment.
    k = gmpy2.powmod(N, -1, 1 << N.bit_length()) - 1
    t = N >> iterx
    tests = [k - 3, k - 2, k - 1, k, k + 1, k + 2, k + 3]
    for primetest in tests:
        if primetest >= N:
            primetest %= N
        if primetest >= 2:
            if not MillerRabin(N, primetest, iterx, t, withstats):
                return False
    return True
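The same logic can be sketched in plain Python for anyone without gmpy2 (3.8+, where the built-in pow(N, -1, m) computes a modular inverse); the function names here are hypothetical, and this is a reading of the gmpy2 code above rather than a drop-in replacement:

```python
def ffs(x):
    """Index of the least significant set bit of x."""
    return (x & -x).bit_length() - 1

def mr_round(n, base, s, d):
    """One Miller-Rabin round for odd n with n - 1 = d * 2**s."""
    x = pow(base, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
        if x == 1:
            return False
    return False

def lars_isprime(n):
    """The k-derived-base test, following the gmpy2 version above."""
    if n == 2:
        return True
    if n < 2 or n % 2 == 0:
        return False
    s = ffs(n - 1)
    d = n >> s                  # equals (n - 1) // 2**s for odd n
    # inverse of n modulo the smallest power of 2 above n, minus 1
    k = pow(n, -1, 1 << n.bit_length()) - 1
    for base in (k - 3, k - 2, k - 1, k, k + 1, k + 2, k + 3):
        if base >= n:
            base %= n
        if base >= 2 and not mr_round(n, base, s, d):
            return False
    return True
```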

Last fiddled with by LarsNet on 2021-04-22 at 19:36

2021-04-22, 20:36   #18
retina
Undefined

"The unspeakable one"
Jun 2006
My evil lair

6,143 Posts

The idea is fine. Using fewer bases to test is always a good thing. But the lack of reasoning, and the lack of proof, make it all moot. It might work fine for all the small numbers you tested, but when going to larger numbers we have no idea what it will do. Maybe it gets worse? Maybe it gets better? We don't know.
2021-04-22, 21:50   #19
charybdis

Apr 2020

100001010_2 Posts

Quote:
 Originally Posted by LarsNet Well, I did test the 5-base version up to the first 100 billion numbers. Really, this entry at Wikipedia makes me interested in how they did their testing: if n < 3,317,044,064,679,887,385,961,981, it is enough to test a = 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, and 41. So I can do comparable testing.
I believe that the methods used to search up to such large bounds rely on the same set of bases being used for every n, which is not the case for your test. Someone with more knowledge in this area can correct me if I'm wrong.

Quote:
 1. Instead of using random numbers, use a modular inverse of your test number against it's powers of two. Let's call that k.
(i.e. take inverse of n modulo the smallest power of 2 above n, if I'm understanding correctly)

This is the bit that leaps out to me. What's so special about this function? What's so special about the smallest power of 2 above n? Why would this be a better choice than any other power of 2? In fact, what's so special about your choice of inverse? Say 2^m is the smallest power of 2 above n. Then we could take, say, k+2^m or k-2^m as the inverse instead of k, and that would give us a different set of bases for the M-R test.

The inverse is an integer mod 2^m. The base for the M-R test is an integer mod n. There isn't a natural way of converting from one to the other. "19 mod 32" and "19 mod 27" might look like they have something in common, but that's just an illusion. After all, "19 mod 32" and "17 mod 27" don't look like they have much in common at first, but what if I write them as "179 mod 32" and "179 mod 27"?
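Both points are easy to check numerically (n = 101 here is an arbitrary odd example, chosen only for illustration; pow(n, -1, m) needs Python 3.8+):

```python
# 179 is the single integer behind both residues in the example above.
assert 179 % 32 == 19
assert 179 % 27 == 17

# Shifting the inverse by 2**m leaves it unchanged mod 2**m,
# but changes the resulting Miller-Rabin base mod n.
n = 101                      # arbitrary odd n, purely illustrative
m = n.bit_length()           # smallest power of 2 above n is 2**m = 128
k = pow(n, -1, 1 << m)       # inverse of n mod 2**m
assert (k + (1 << m)) % (1 << m) == k      # same inverse mod 2**m...
print(k % n, (k + (1 << m)) % n)           # ...but different bases: 8 35
```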

None of this means for sure that your test is no better than picking bases at random. But I hope this goes some way to explaining why people here see it as essentially picking bases at random.

2021-04-22, 21:51   #20
LarsNet

Mar 2021

2^3×3 Posts

Quote:
 Originally Posted by retina The idea is fine. Using fewer bases to test is always a good thing. But the lack of reasoning, and the lack of proof, make it all moot. It might work fine for all the small numbers you tested, but when going to larger numbers we have no idea what it will do. Maybe it gets worse? Maybe it gets better? We don't know.
Ok, while I try to wrap my head around how to create larger numbers that Miller-Rabin fails on (and work my processor for a few days on this), I wanted to point out that a 6-round random Miller-Rabin test fails far more often than using just [k-3, k-2, k-1, k, k+1, k+2] (6 algorithmic tests), which always passes both OEIS A001262 and A074773, while the 6-round random tests fail frequently, finding a failure on almost every other run:

Code:

In [1605]: for x in A001262 + A074773:
...:     if sfactorint_isprime_random(x) != gmpy2.is_prime(x): print(f"Random 6 roundtest failure: {x}")
...:     if sfactorint_isprime(x) != gmpy2.is_prime(x): print(f"Non Random 6 algorithmic Failure: {x}")
...:
Random 6 roundtest failure: 1583435793635011

In [1606]: for x in A001262 + A074773:
...:     if sfactorint_isprime_random(x) != gmpy2.is_prime(x): print(f"Random 6 roundtest failure: {x}")
...:     if sfactorint_isprime(x) != gmpy2.is_prime(x): print(f"Non Random 6 algorithmic Failure: {x}")
...:
Random 6 roundtest failure: 130962331953070351

In [1607]: for x in A001262 + A074773:
...:     if sfactorint_isprime_random(x) != gmpy2.is_prime(x): print(f"Random 6 roundtest failure: {x}")
...:     if sfactorint_isprime(x) != gmpy2.is_prime(x): print(f"Non Random 6 algorithmic Failure: {x}")
...:

In [1608]: for x in A001262 + A074773:
...:     if sfactorint_isprime_random(x) != gmpy2.is_prime(x): print(f"Random 6 roundtest failure: {x}")
...:     if sfactorint_isprime(x) != gmpy2.is_prime(x): print(f"Non Random 6 algorithmic Failure: {x}")
...:
Random 6 roundtest failure: 15247621
Once again, this gives me some hope that the test works on larger numbers, so if anyone has other OEIS lists to test against, that would be nice. Random 6-round Miller-Rabin works surprisingly well on products of two random primes at any bit_length I tried, so finding a way to generate numbers like those in these OEIS lists is something I'm trying to figure out, since they seem to fail random Miller-Rabin tests more frequently.

I don't offer this as proof, just something interesting to show why I'm interested in testing further. It just offers me a glimpse of hope that this holds up against bigger numbers.

2021-04-22, 22:45   #21
charybdis

Apr 2020

100001010_2 Posts

Quote:
 Originally Posted by LarsNet Ok, while I try to wrap my head around how to create larger numbers that Miller-Rabin fails on (and work my processor for a few days on this), I wanted to point out that a 6-round random Miller-Rabin test fails far more often than using just [k-3, k-2, k-1, k, k+1, k+2] (6 algorithmic tests), which always passes both OEIS A001262 and A074773, while the 6-round random tests fail frequently, finding a failure on almost every other run:
Once they've passed once, of course they'll "always pass". Running the same thing over and over isn't going to give you different results.

So maybe your test actually is good, or maybe you just got lucky. Given that you sometimes get no failures from picking 6 random bases, I'm sticking with "got lucky" for now.

Quote:
 Once again, this gives me some hope that the test works on larger numbers, so if anyone has other OEIS lists to test against, that would be nice. Random 6-round Miller-Rabin works surprisingly well on products of two random primes at any bit_length I tried, so finding a way to generate numbers like those in these OEIS lists is something I'm trying to figure out, since they seem to fail random Miller-Rabin tests more frequently.
M-R tests aren't fully independent. If n is a strong pseudoprime to base b then it is certainly a Fermat pseudoprime, and if n is a Fermat pseudoprime to bases a and b then it is also a Fermat pseudoprime to base ab (and therefore all bases in the subgroup of (Z/nZ)* generated by a and b). Therefore, if we know that n is a strong pseudoprime to bases 2, 3, 5 and 7, then n is more likely to be a strong pseudoprime to some other base than a randomly chosen number would be.

That's why the numbers in those OEIS lists pass 6 random M-R tests much more often than random semiprimes do. Some of the numbers in those lists are strong pseudoprimes to as many as a quarter of all bases.
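The multiplicativity argument above is easy to check numerically with a Carmichael number (561 = 3·11·17 is the smallest): once bases 2 and 5 pass the Fermat test, base 10 passes automatically.

```python
def is_fermat_psp(n, a):
    """True if n passes the Fermat test to base a."""
    return pow(a, n - 1, n) == 1

n = 561                         # smallest Carmichael number: 3 * 11 * 17
a, b = 2, 5                     # two bases coprime to n
print(is_fermat_psp(n, a), is_fermat_psp(n, b), is_fermat_psp(n, a * b))
# all three are True: passing at a and at b forces passing at a*b
```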

2021-04-23, 00:30   #22
LarsNet

Mar 2021

2^3×3 Posts

Quote:
 Originally Posted by charybdis Once they've passed once, of course they'll "always pass". Running the same thing over and over isn't going to give you different results. So maybe your test actually is good, or maybe you just got lucky. Given that you sometimes get no failures from picking 6 random bases, I'm sticking with "got lucky" for now.
Yep, because mine is algorithmic, they always pass when re-run (just why 6 is such a good number for passing the tests I don't know yet, which is why I'm testing).

OK, so I took some more OEIS lists and now have 182050 numbers I'm testing against. This is what I came up with: so far, no failures at k=6 for my algorithmic test, though I expect that won't hold and the count will need to be raised at some point for bigger bit_lengths, because why would 6 be better than 5? I'm sure I'll find a list that requires 7 at some point, but for now it is passing quite well on the 182050 numbers from the OEIS lists shown in the code below. Those numbers also fail quite frequently at 7 or more random tests, though the more tests you use, the less chance of failure. Some of those numbers are somewhat large, but not as large as I'd like.

I wonder in how many tests luck masquerades as something that might be useful. I'm as skeptical as you, but for now I am having fun testing. I wish there was a master list of every OEIS sequence for pseudoprimes, Carmichael numbers, etc., that I could test against. I'm sure I could automate a pull of all of them; it'll just take some time.

I really wish I could generate these numbers at much bigger bit_lengths than I'm testing with now, but until then I'll keep testing.

Code:
In [1799]: for x in A001262 + A074773 + A033502 + A112432 + A112431 + A112430 + A112429 + A087788 + A000244 + A005938:
...:     if sfactorint_isprime_random(x) != gmpy2.is_prime(x): print(f"Random 6 roundtest failure: {x}")
...:     if sfactorint_isprime(x) != gmpy2.is_prime(x): print(f"Non Random 6 algorithmic Failure: {x}")
...:
Random 6 roundtest failure: 1101217851432757921
Random 6 roundtest failure: 4638902368591

In [1804]: for x in A001262 + A074773 + A033502 + A112432 + A112431 + A112430 + A112429 + A087788 + A000244 + A005938:
...:     if sfactorint_isprime_random(x) != gmpy2.is_prime(x): print(f"Random 6 roundtest failure: {x}")
...:     if sfactorint_isprime(x) != gmpy2.is_prime(x): print(f"Non Random 6 algorithmic Failure: {x}")
...:
Random 6 roundtest failure: 896907112111
Random 6 roundtest failure: 916134753691

In [1805]: for x in A001262 + A074773 + A033502 + A112432 + A112431 + A112430 + A112429 + A087788 + A000244 + A005938:
...:     if sfactorint_isprime_random(x) != gmpy2.is_prime(x): print(f"Random 6 roundtest failure: {x}")
...:     if sfactorint_isprime(x) != gmpy2.is_prime(x): print(f"Non Random 6 algorithmic Failure: {x}")
...:
Random 6 roundtest failure: 3044941089945791
Random 6 roundtest failure: 4957839253
Random 6 roundtest failure: 291064280851

In [1806]: for x in A001262 + A074773 + A033502 + A112432 + A112431 + A112430 + A112429 + A087788 + A000244 + A005938:
...:     if sfactorint_isprime_random(x) != gmpy2.is_prime(x): print(f"Random 6 roundtest failure: {x}")
...:     if sfactorint_isprime(x) != gmpy2.is_prime(x): print(f"Non Random 6 algorithmic Failure: {x}")
...:
Random 6 roundtest failure: 408464728561

In [1807]: for x in A001262 + A074773 + A033502 + A112432 + A112431 + A112430 + A112429 + A087788 + A000244 + A005938:
...:     if sfactorint_isprime_random(x) != gmpy2.is_prime(x): print(f"Random 6 roundtest failure: {x}")
...:     if sfactorint_isprime(x) != gmpy2.is_prime(x): print(f"Non Random 6 algorithmic Failure: {x}")
...:
Random 6 roundtest failure: 250859023381

In [1812]: for x in A001262 + A074773 + A033502 + A112432 + A112431 + A112430 + A112429 + A087788 + A000244 + A005938:
...:     if sfactorint_isprime_random(x) != gmpy2.is_prime(x): print(f"Random 6 roundtest failure: {x}")
...:     if sfactorint_isprime(x) != gmpy2.is_prime(x): print(f"Non Random 6 algorithmic Failure: {x}")
...:
Random 6 roundtest failure: 773838811966028581
Random 6 roundtest failure: 418185761311

In [1813]: for x in A001262 + A074773 + A033502 + A112432 + A112431 + A112430 + A112429 + A087788 + A000244 + A005938:
...:     if sfactorint_isprime_random(x) != gmpy2.is_prime(x): print(f"Random 6 roundtest failure: {x}")
...:     if sfactorint_isprime(x) != gmpy2.is_prime(x): print(f"Non Random 6 algorithmic Failure: {x}")
...:
Random 6 roundtest failure: 243003135511

In [1814]: for x in A001262 + A074773 + A033502 + A112432 + A112431 + A112430 + A112429 + A087788 + A000244 + A005938:
...:     if sfactorint_isprime_random(x) != gmpy2.is_prime(x): print(f"Random 6 roundtest failure: {x}")
...:     if sfactorint_isprime(x) != gmpy2.is_prime(x): print(f"Non Random 6 algorithmic Failure: {x}")
...:
Random 6 roundtest failure: 18663409801

In [1810]: len(A001262 + A074773 + A033502 + A112432 + A112431 + A112430 + A112429 + A087788 + A000244 + A005938)
Out[1810]: 182050

