#1
Aug 2002
Buenos Aires, Argentina
3×499 Posts
I don't know if this was noticed before.
This is the factorization of the exponent minus 1 for the known Mersenne primes:

1278 = 2 * 3^2 * 71
2202 = 2 * 3 * 367
2280 = 2^3 * 3 * 5 * 19
3216 = 2^4 * 3 * 67
4252 = 2^2 * 1063
4422 = 2 * 3 * 11 * 67
9688 = 2^3 * 7 * 173
9940 = 2^2 * 5 * 7 * 71
11212 = 2^2 * 2803
19936 = 2^5 * 7 * 89
21700 = 2^2 * 5^2 * 7 * 31
23208 = 2^3 * 3 * 967
44496 = 2^4 * 3^3 * 103
86242 = 2 * 13 * 31 * 107
110502 = 2 * 3^2 * 7 * 877
132048 = 2^4 * 3^2 * 7 * 131
216090 = 2 * 3^2 * 5 * 7^4
756838 = 2 * 23 * 16453
859432 = 2^3 * 7 * 103 * 149
1257786 = 2 * 3^2 * 69877
1398268 = 2^2 * 349567
2976220 = 2^2 * 5 * 13 * 11447
3021376 = 2^6 * 17 * 2777
6972592 = 2^4 * 11 * 173 * 229
13466916 = 2^2 * 3^2 * 83 * 4507
20996010 = 2 * 3^4 * 5 * 7^2 * 23^2
24036582 = 2 * 3 * 4006097
25964950 = 2 * 5^2 * 11 * 17 * 2777
30402456 = 2^3 * 3 * 7 * 37 * 67 * 73
32582656 = 2^10 * 47 * 677

Look at the second largest factor: in general it is much smaller than the second largest factor of a random number, so it appears that the exponents are not randomly distributed.

Last fiddled with by akruppa on 2006-09-12 at 05:38 Reason: Added M44 to list
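For anyone who wants to reproduce this table, here is a minimal Python sketch (assuming Python 3 with sympy; an illustration, not something from the original post - the poster's own program, in UBASIC, appears in post #4 below):

Code:
# Sketch: refactor p-1 for each listed Mersenne-prime exponent and
# report the second largest prime factor, counted with multiplicity.
# Assumes Python 3 with sympy installed.
from sympy import factorint

exponents = [1279, 2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213,
             19937, 21701, 23209, 44497, 86243, 110503, 132049,
             216091, 756839, 859433, 1257787, 1398269, 2976221,
             3021377, 6972593, 13466917, 20996011, 24036583,
             25964951, 30402457, 32582657]

for p in exponents:
    fac = factorint(p - 1)              # e.g. 1278 -> {2: 1, 3: 2, 71: 1}
    flat = sorted(q for q, e in fac.items() for _ in range(e))
    second = flat[-2] if len(flat) > 1 else flat[-1]
    print(p - 1, "=", fac, "  second largest factor:", second)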
#2
"Richard B. Woods"
Aug 2002
Wisconsin USA
2²×3×641 Posts

Quote:
If P is a random odd prime, what's the expected size of the second largest factor of P-1? Wouldn't it be closer to the expected size of the second largest factor of (P±1)/2 than to that of P-2 or P+2?

Last fiddled with by cheesehead on 2006-01-12 at 20:00
#3
∂²ω=0
Sep 2002
República de California
5·2,351 Posts
I had also noticed that the p-1 factorizations of M-prime exponents seem nonrandomly smooth. To avoid the inevitable human tendency to "find" patterns in random data, we need to objectively quantify the smoothness of these p-1 factorizations and compare the result with the same smoothness measure applied to a bunch of random primes of similar sizes.
If it indeed turns out that the M(p) exponents have p-1 smoothness that is unlikely to be coincidental (which remains to be established), I wonder if the properties of factors of M(p) may somehow be at work here, in the sense that prime M(p) by definition have just one large factor, i.e. are themselves as non-smooth as it is possible to be. But that would imply that on average, M(p) with less-smooth p-1 are more likely to possess small factors (i.e. to be smooth themselves), which seems counterintuitive, at least to me.

Here's perhaps a simple way to gauge whether the p-1 have nonrandomly small penultimate factors: compare them to the average penultimate factors of the corresponding p+1. In order to avoid small-number bias, let's just look at the 21 known M-prime exponents larger than 10000:

The p-1 have penultimate factors 2,7,7,3,3,31,7,7,7,23,103,3,2,13,17,173,83,23,3,17,67, with average = 28.6.
The p+1 have penultimate factors 7,3,3,11,19,3,19,19,89,17,3,29,127,401,3,3,149,83,13,13,23, with average = 49.4.

But whether this is statistically significant is questionable, as the sample size is very small. If the next prime M(p) that gets found happens to have a large penultimate (p-1) factor but a small (p+1) factor, the respective averages would shift dramatically.

My sense is this is just human nature at work, telling us to look for patterns in everything around us - apparently, in the course of human evolution, correctly identifying a pattern in nonrandom data was a bigger advantage than the drawback of its opposite (fooling oneself into thinking one had discerned a pattern in objectively random data).

Last fiddled with by ewmayer on 2006-01-12 at 21:05
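The comparison is easy to redo mechanically. A small sketch (again Python 3 with sympy assumed; not part of the original post) that recomputes both penultimate-factor averages for the 21 exponents above 10000, using the same counted-with-multiplicity convention as the lists above:

Code:
# Sketch: average penultimate prime factor (with multiplicity) of
# p-1 and p+1 over the 21 M-prime exponents > 10000.
from sympy import factorint

exps = [11213, 19937, 21701, 23209, 44497, 86243, 110503, 132049,
        216091, 756839, 859433, 1257787, 1398269, 2976221, 3021377,
        6972593, 13466917, 20996011, 24036583, 25964951, 30402457]

def penult(n):
    # second largest prime factor of n, counted with multiplicity
    flat = sorted(q for q, e in factorint(n).items() for _ in range(e))
    return flat[-2] if len(flat) > 1 else flat[-1]

for side in (-1, +1):
    vals = [penult(p + side) for p in exps]
    print(f"p{side:+d}:", vals, f"average = {sum(vals) / len(vals):.1f}")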
#4
Aug 2002
Buenos Aires, Argentina
3·499 Posts
It is just a false alarm. I'm not a mathematician but at least I can write programs, like the following UBASIC one:
Code:
5 Base=1000000
10 Sum=0
20 P=nxtprm(Base)
30 Q=P-1
40 Ult=1:Penult=1
50 if Q\2*2=Q then Q=Q\2:Penult=Ult:Ult=2:goto 50
60 R=3
70 if Q<R*R then 110
80 if Q\R*R=Q then Q=Q\R:Penult=Ult:Ult=R:goto 70
90 R=R+2
100 goto 70
110 if Q<>1 then Penult=Ult
120 Sum=Sum+log(Penult)/log(P)
130 Nbrprimes=Nbrprimes+1
140 P=nxtprm(P)
150 if P<Base*10 then 30
160 print Sum/Nbrprimes

base = 10000   -> average = 0.19244
base = 100000  -> average = 0.19442
base = 1000000 -> average = 0.19624

So it appears that on average, the second largest prime factor of p-1 is about the fifth root of p. I didn't think that this average would be so small.

Last fiddled with by alpertron on 2006-01-12 at 20:41
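For those without UBASIC, a rough Python rendering of the same program (a sketch; Python 3 with sympy assumed, with nextprime standing in for nxtprm):

Code:
# Sketch: Python port of the UBASIC program above.  For each prime p
# in (base, 10*base) it finds the second largest prime factor of p-1
# (with multiplicity) by trial division and averages log(penult)/log(p).
from math import log
from sympy import nextprime

def avg_penult_exponent(base):
    total = 0.0
    count = 0
    p = nextprime(base)
    while p < base * 10:
        q, ult, penult = p - 1, 1, 1
        while q % 2 == 0:              # strip factors of 2
            q //= 2
            penult, ult = ult, 2
        r = 3
        while r * r <= q:              # odd trial division
            if q % r == 0:
                q //= r
                penult, ult = ult, r
            else:
                r += 2
        if q != 1:                     # leftover prime is the largest factor
            penult = ult
        total += log(penult) / log(p)
        count += 1
        p = nextprime(p)
    return total / count

print(avg_penult_exponent(10000))      # about 0.192, per the results above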
#5
Aug 2002
Buenos Aires, Argentina
3×499 Posts
Using the same program, for the greatest prime factor I found:

base = 10000   -> average = 0.58088
base = 100000  -> average = 0.58979
base = 1000000 -> average = 0.59607
#6
Aug 2002
Buenos Aires, Argentina
3·499 Posts
Just for the curious: I repeated the computation for the 29 exponents shown above, and the average size of the second largest prime factor is about x^0.17663, just below the ~0.196 average found for random primes above.
#7
3624₁₀ Posts
Quote:
http://www.mcs.surrey.ac.uk/Personal...l#fibprimecarm
#8
∂²ω=0
Sep 2002
República de California
2DEB₁₆ Posts
I had a look at a few other measures of smoothness for the exponents of the known M-primes - to avoid blatant small-number bias, I only considered exponents > 1000 (of which there are 29 known). Here are some additional data for these:
Number of prime factors (including repeated factors):

p-1: 4,3,6,6,3,4,5,5,3,7,6,5,8,4,5,8,8,3,6,4,3,5,8,7,6,10,3,6,8, Average = 5.48
p+1: 9,4,3,2,3,5,5,3,5,3,3,4,3,4,5,6,4,8,3,5,5,4,3,3,3,4,6,6,3, Average = 4.28

Even though the sample size is not huge, that difference appears to be statistically significant, and not biased by one or two extreme outliers in either set.

I next looked at the average size of the largest factor of p±1, specifically at log(largest prime factor of p±1) / log(p±1):

p-1: 0.536
p+1: 0.619

For random composites we expect the latter ratio to be around 0.624 on average (the Golomb-Dickman constant), so the p+1 factorizations match those expected for random composites, whereas the p-1 factorizations have largest prime factors that are significantly smaller than expected for random composites.

Lastly, to get an idea of how the analogous statistics look for "random" primes with a similar statistical size distribution, for each M-prime exponent I substituted the next-larger prime and redid the above p±1 statistics.

Number of prime factors (including repeated factors):

p-1: 2,2,4,5,2,6,7,4,3,3,6,5,6,4,5,2,3,5,6,2,6,6,2,5,5,3,5,5,3, Average = 4.21
p+1: 4,7,6,4,5,2,3,4,5,6,5,3,3,7,8,6,6,4,5,6,4,3,10,7,4,5,4,3,8, Average = 5.07

Average size of the largest factor, log(largest prime factor of p±1) / log(p±1):

p-1: 0.641
p+1: 0.585

Here it is the p+1 factorizations which have the larger average number of factors, though the difference between the two samples (less than one prime factor on average) is not as great as for the M-prime exponent sample set. Correspondingly, the p+1 decompositions have a smaller average largest-factor size, but it is not far below 0.6, unlike the largest-factor log ratio for the M-prime p-1 factorizations.

One would really need a decently large ensemble of such sets of randomly chosen primes to get a better idea of the extent to which the M-prime p-1 data are statistical outliers - I may write a small piece of code to do this at some later date, when I have more time to look at it.
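These tallies can be reproduced with a short sketch (Python 3 with sympy assumed; not from the original post). Note that post #10 below corrects one entry of the p+1 count, so the recomputed p+1 average comes out slightly below 4.28:

Code:
# Sketch: number-of-prime-factors (with repetition) and
# log(largest factor)/log(n) statistics for p-1 and p+1, for the
# 29 M-prime exponents > 1000 and for next-larger primes as a control.
from math import log
from sympy import factorint, nextprime

exps = [1279, 2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213, 19937,
        21701, 23209, 44497, 86243, 110503, 132049, 216091, 756839,
        859433, 1257787, 1398269, 2976221, 3021377, 6972593,
        13466917, 20996011, 24036583, 25964951, 30402457]

def stats(ns):
    omega = [sum(factorint(n).values()) for n in ns]     # with repetition
    ratio = [log(max(factorint(n))) / log(n) for n in ns]
    return sum(omega) / len(ns), sum(ratio) / len(ns)

for label, ps in (("M-prime exponents", exps),
                  ("next-larger primes", [nextprime(p) for p in exps])):
    for side in (-1, +1):
        avg_omega, avg_ratio = stats([p + side for p in ps])
        print(f"{label}, p{side:+d}: avg #factors = {avg_omega:.2f}, "
              f"avg largest-factor log ratio = {avg_ratio:.3f}")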
#9
"Bob Silverman"
Nov 2003
North of Boston
7508₁₀ Posts

Quote:
I suggest counting the factors without repetition. For p ~ 10000 (say) we expect about 3.2 prime factors: loglog(10^4) ≈ 2.2, plus one because these numbers (p ± 1) are ALWAYS divisible by 2. Indeed, one of them is always divisible by 2^2. The variance should equal 2.2 (by Erdős-Kac), which means the standard deviation is about 1.4. Your means differ by only 1.2 WITH repetition, and I can't think that the variance with repetition should be any less than without. The difference between your numbers is thus less than one standard deviation; hardly significant.

Last fiddled with by ewmayer on 2006-09-06 at 22:47 Reason: fixed quote syntax
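Bob's figures are quick to verify; a one-screen check in plain Python:

Code:
# Check of the Erdős-Kac figures quoted above (plain Python 3).
from math import log, sqrt

n = 10**4
loglog = log(log(n))        # Hardy-Ramanujan mean number of prime factors
expected = loglog + 1       # +1 because p +/- 1 is always even
sd = sqrt(loglog)           # Erdős-Kac: variance is about loglog n
print(f"expected number of factors: {expected:.2f}")   # about 3.22
print(f"standard deviation:         {sd:.2f}")         # about 1.49
print(f"observed gap of 1.2 in SDs: {1.2 / sd:.2f}")   # about 0.81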
#10
∂²ω=0
Sep 2002
República de California
11755₁₀ Posts
OK, with Bob's suggestion in mind, let's do some more data crunching. Start with a recap of my earlier statistic:
Number of prime factors (including repeated factors):

p-1: 4,3,6,6,3,4,5,5,3,7,6,5,8,4,5,8,8,3,6,4,3,5,8,7,6,10,3,6,8, Average = 5.48
p+1: 9,4,3,2,3,5,5,3,5,3,3,4,3,4,5,5,4,8,3,5,5,4,3,3,3,4,6,6,3, Average = 4.24

(slight correction downward here - I had an extra 1 in my earlier count).

Next, let's ignore repetition. Number of distinct prime factors:

p-1: 3,3,4,3,2,4,3,4,2,3,4,3,3,4,4,4,4,3,4,3,2,4,3,4,4,5,3,5,6, Average = 3.55
p+1: 2,3,3,2,3,3,5,3,4,3,3,4,3,3,3,4,3,6,3,4,5,4,3,3,3,3,4,4,3, Average = 3.41

So Bob is correct: there is no statistically significant difference in the average number of distinct prime factors between the two sets.

Next, let's look at the most likely suspect for the difference in the average number of factors (including repeated factors), namely the powers of 2 appearing in the factorizations:

p-1: pow2: 1,1,3,4,2,1,3,2,2,5,2,3,4,1,1,4,1,1,3,1,2,2,6,4,2,1,1,1,3, Average = 2.31
p+1: pow2: 8,2,1,1,1,3,1,1,1,1,1,1,1,2,3,1,2,3,1,2,1,1,1,1,1,2,3,3,1, Average = 1.76

So roughly half of the extra smoothness (in terms of the average number of prime factors, including repeated factors) appears to be a direct result of the additional powers of 2 in the p-1 decompositions.

Now, with regard to the powers of 2, we should also take account of whether the exponent is congruent to 1 or 3 modulo 4. In the former case the p-1 and p+1 factorizations are guaranteed to contain minimal powers of 2^2 and 2^1, respectively; in the latter the minimal powers of 2 are switched. So, taking into account the minimal power of 2 which must appear in the p±1 factorization based on which of these two congruence classes the exponent belongs to, we tabulate the excess over this which appears in the respective factorizations (min2 = minimal power of 2 guaranteed for the factorization in question based on the value of p%4, exc2 = excess above this in the actual factorization):

p-1:
p%4 : 3,3,1,1,1,3,1,1,1,1,1,1,1,3,3,1,3,3,1,3,1,1,1,1,1,3,3,3,1
pow2: 1,1,3,4,2,1,3,2,2,5,2,3,4,1,1,4,1,1,3,1,2,2,6,4,2,1,1,1,3
min2: 1,1,2,2,2,1,2,2,2,2,2,2,2,1,1,2,1,1,2,1,2,2,2,2,2,1,1,1,2
exc2: 0,0,1,2,0,0,1,0,0,3,2,1,2,0,0,2,0,0,1,0,0,0,4,2,0,0,0,0,1, Average = 0.759

p+1:
p%4 : 3,3,1,1,1,3,1,1,1,1,1,1,1,3,3,1,3,3,1,3,1,1,1,1,1,3,3,3,1
pow2: 8,2,1,1,1,3,1,1,1,1,1,1,1,2,3,1,2,3,1,2,1,1,1,1,1,2,3,3,1
min2: 2,2,1,1,1,2,1,1,1,1,1,1,1,2,2,1,2,2,1,2,1,1,1,1,1,2,2,2,1
exc2: 6,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,1,1,0, Average = 0.379

Thus, the excess in the powers of 2 in the p-1 factorizations results from two combined effects:

(1) The slightly greater excess-over-guaranteed average power of 2 in the p-1 factorizations (0.759 vs. 0.379), which is further magnified by

(2) The greater number of exponents == 1 mod 4 than == 3 mod 4 (18 vs. 11).

Based on the heuristics of Wagstaff we indeed expect a slight excess of exponents in the former congruence class - the asymptotic ratio as p -> oo is predicted to be roughly 1.06, which translates to an expected (15 vs. 14) split for a sample of 29 p's. Thus the excess of exponents == 1 mod 4 appears to be statistically insignificant, but the fact that it is magnified by the power-of-2 effect in the corresponding p-1 factorizations makes it look like a "signal."

Again, this only explains roughly half the average-total-number-of-prime-factors discrepancy, but the remainder is now well within the level of statistical noise, i.e. within one standard deviation of the expected average.
I'll update the above statistics to include the newly discovered Mersenne prime (assuming it proves to be such) once it's officially announced.
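The pow2/min2/exc2 bookkeeping above is mechanical; a sketch (Python 3 with sympy assumed, not from the original post):

Code:
# Sketch: power of 2 dividing p+-1, the minimal power guaranteed by
# p mod 4, and the excess over it, for the 29 M-prime exponents > 1000.
from sympy import multiplicity

exps = [1279, 2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213, 19937,
        21701, 23209, 44497, 86243, 110503, 132049, 216091, 756839,
        859433, 1257787, 1398269, 2976221, 3021377, 6972593,
        13466917, 20996011, 24036583, 25964951, 30402457]

for side in (-1, +1):
    exc2 = []
    for p in exps:
        pow2 = multiplicity(2, p + side)   # exact power of 2 in p+-1
        # p == 1 mod 4: 4 | p-1 and 2 || p+1; p == 3 mod 4: the reverse
        min2 = 2 if (p % 4 == 1) == (side == -1) else 1
        exc2.append(pow2 - min2)
    print(f"p{side:+d}: exc2 = {exc2}, average = {sum(exc2) / len(exc2):.3f}")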
#11
"Robert Gerbicz"
Oct 2005
Hungary
5·17·19 Posts

Quote:
http://primes.utm.edu/mersenne/heuristic.html

They multiply, for each prime r < 2*k, by r/(r-1); this is justified because 2^k-1 is not divisible by any prime r < 2*k. But they stop there, and by Mertens's theorem the resulting estimate is e^gamma*log(2*k)/(k*log(2)). This can't be right, because 2^k-1 is also not divisible by the primes between 2*k+2 and 4*k, so by the same logic they would have to multiply by those factors r/(r-1) as well, and so on. Carried further, this would give a very large probability, greater than 1/2 (in fact very close to 1), meaning that 2^p-1 would be prime for more than every second prime p - which is impossible, given the known number of Mersenne primes.

So what is wrong with this argument? If q=2*k*p+1 is prime, then the probability that q divides 2^p-1 is:

1/(2*k) if k == 1 or 3 mod 4
1/k if k == 0 mod 4
0 if k == 2 mod 4

So on average (if we don't know k mod 4) the expected probability is 1/(2*k), and this is much larger than 1/q = 1/(2*k*p+1). Hence the probability that 2^p-1 is prime, for p prime, is about

prod(k=1, k<2^p/p, if(isprime(2*k*p+1), 1-1/(2*k), 1))

and this comes out to about c/p, where c is a constant. The proof isn't hard: use the facts that about one in every p primes lies in the sequence 2*k*p+1, that 1-x is about exp(-x) when abs(x) is small, and Mertens's theorem.

So their argument was wrong, but the expected probability e^gamma/(p*log(2)) can still be good; see the regression line on their page. But I'm almost sure that their third heuristic is wrong. To check it, I did a quick computer search to estimate the probabilities: I tested the first 10^5 primes larger than 2^u, for 30 <= u <= 40, all to the same sieve depth, using primes q=2*k*p+1 with 1 <= k <= 2^11. The number of survivors mod 4, and their ratios:

Code:
u=30,a[1]=38063,a[3]=36703,ratio=1.037054
u=31,a[1]=38161,a[3]=36844,ratio=1.035745
u=32,a[1]=38536,a[3]=37111,ratio=1.038398
u=33,a[1]=38669,a[3]=37560,ratio=1.029526
u=34,a[1]=39074,a[3]=37918,ratio=1.030487
u=35,a[1]=39347,a[3]=37964,ratio=1.036429
u=36,a[1]=39314,a[3]=38462,ratio=1.022152
u=37,a[1]=39654,a[3]=38590,ratio=1.027572
u=38,a[1]=39722,a[3]=38946,ratio=1.019925
u=39,a[1]=40097,a[3]=38905,ratio=1.030639
u=40,a[1]=40424,a[3]=39198,ratio=1.031277

So if p == 1 mod 4 and p is about 2^30, then it has a higher probability of surviving, by a factor of 38063/36703 = 1.037054, than p == 3 mod 4 with p about 2^30. If p == 1 mod 4 and q == 3 mod 4 and p/q is almost 1, then f(p)/f(q) is about 1+0.74/log(p) (using the 11 data points), where f(p) is the probability that 2^p-1 is prime. This is smaller than the value predicted by Wagstaff:

f(p)/f(q) = log(6*p)/log(2*p) = 1 + log(3)/log(2*p),

which is about f(p)/f(q) = 1 + log(3)/log(p) = 1 + 1.0986/log(p). (I've used log(3) and that 1/log(2*p) is about 1/log(p) for large p values.)

Last fiddled with by R. Gerbicz on 2006-09-07 at 17:03
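The claimed divisibility probabilities are easy to test empirically. A Monte Carlo sketch (Python 3 with sympy assumed; not from the original post) that samples primes q = 2*k*p+1 for one k from each residue class mod 4:

Code:
# Sketch: estimate Pr[q divides 2^p - 1] for prime q = 2*k*p + 1,
# using one k from each residue class mod 4 (k = 4, 5, 6, 7).
from sympy import isprime, randprime

for k in (4, 5, 6, 7):
    claimed = {0: 1 / k, 1: 1 / (2 * k), 2: 0.0, 3: 1 / (2 * k)}[k % 4]
    hits = total = 0
    while total < 2000:                  # sample 2000 prime q's
        p = randprime(10**4, 10**6)      # random prime exponent
        q = 2 * k * p + 1
        if not isprime(q):
            continue
        total += 1
        hits += (pow(2, p, q) == 1)      # does q divide 2^p - 1 ?
    print(f"k = {k} (k mod 4 = {k % 4}): observed {hits / total:.4f}, "
          f"claimed {claimed:.4f}")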