2013-04-12, 23:06  #1 
"Bill Staffen"
Jan 2013
Pittsburgh, PA, USA
3·137 Posts 
I am so sorry for being bad at math.
But I really did search the forums before asking, and read up on the relevant topics on the Mersenne wiki and Wikipedia.
I am trying to figure out why P-1 factorization is used over ECM for Mersenne numbers. It seems like ECM used to be a thing, and now it's not, and I don't know why. 
2013-04-12, 23:18  #2 
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
5751_{10} Posts 
All the factors of 2^p-1 are of the form 2*k*p+1. This means that in the P-1 method we get the factor p in the group order for free, which makes it much easier to find factors using P-1 than ECM for this form of number. I imagine that one day, when the exponents get large enough, some ECM will be done in addition to P-1. We probably have a long way to go before that happens, though.
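To make the 2kp+1 claim concrete, here is a quick check in Python (a toy sketch; the trial-division factoring is only usable for tiny exponents):

```python
def small_factors(n):
    """Factor n by trial division (fine only for tiny n)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

p = 29                      # exponent of the Mersenne number 2^29 - 1
M = (1 << p) - 1            # 536870911 = 233 * 1103 * 2089
for q in small_factors(M):
    # every prime factor q satisfies q = 2*k*p + 1 for some integer k
    print(q, (q - 1) % (2 * p) == 0, (q - 1) // (2 * p))
```

Each factor prints with `True` and its k value (233 = 2·4·29+1, 1103 = 2·19·29+1, 2089 = 2·36·29+1).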

2013-04-12, 23:28  #3 
"Bill Staffen"
Jan 2013
Pittsburgh, PA, USA
411_{10} Posts 
Wow, that's crazy. I will work on that for a while, thanks much.

2013-04-13, 03:34  #4 
Jun 2003
7×683 Posts 
Never gonna happen. As the exponent grows larger, the TF level also goes higher, and thus the useful ECM level goes higher too. It'll never break even. The handicap (not being able to leverage the p in 2kp+1) is too large (and getting larger).

2013-04-13, 15:33  #5  
Jun 2003
10010010001_{2} Posts 
Quote:
ECM is used on Mersenne numbers, but usually only on small exponents below about 10,000,000. See this page. These are already known to be composite, and in some cases a factor is known. Finding (additional) factors, not primality testing, is the goal here.

For larger exponents, finding factors is nice, but the main goal is finding primes. We search for factors in the hope that a small investment of computer time will save us from having to conduct two expensive LL tests. The law of diminishing returns applies to factor-finding: the deeper we search, the fewer factors each extra GHz-day is likely to find. Eventually we reach the break-even point, at which the expected cost of finding a factor equals the cost of doing the LLs. At this point, we stop.

For ECM, that break-even point is at no searching at all. Why, and why not for P-1? First, P-1 is overall a faster and less memory-hungry algorithm. Since P-1 will happily eat up all the memory I can throw at it - and I have 16GB - you can imagine how voracious ECM is. Second, as henryzz remarks above, P-1 can make use of the 2kp+1 form of Mersenne factors, which means it will find factors about log_{2}(p) bits larger than ECM run to the same bounds. Finally, after we've already done TF and P-1, there just aren't enough small factors left to find.

If P-1 is so much better than ECM, you may be wondering why we use ECM at all. The answer is that ECM can be run again and again with the same bounds on the same number, and still have a chance of finding a new factor, while P-1 can be run only once. 
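A minimal stage-1 P-1 sketch in Python may help make the mechanics concrete (illustrative only; the exponent E is built as the lcm of the prime powers up to B1, and the function name and bounds are invented for this example):

```python
from math import gcd

def pm1_stage1(N, B1, base=3):
    """Toy P-1 stage 1: compute base^E mod N for E = lcm(1..B1),
    then hope some factor q of N has q-1 dividing E (q-1 is B1-smooth)."""
    # Sieve primes up to B1 and build E as the product of the
    # maximal prime powers p^k <= B1.
    composite = bytearray(B1 + 1)
    E = 1
    for p in range(2, B1 + 1):
        if not composite[p]:
            for m in range(p * p, B1 + 1, p):
                composite[m] = 1
            pk = p
            while pk * p <= B1:
                pk *= p
            E *= pk
    x = pow(base, E, N)
    return gcd(x - 1, N)

# 2^37 - 1 = 223 * 616318177, and 223 - 1 = 2 * 3 * 37 is very smooth,
# so even B1 = 40 finds it (note the "free" factor 37 = the exponent).
print(pm1_stage1((1 << 37) - 1, 40))   # 223
```

The point of the example: the bound B1 = 40 is tiny, yet it suffices precisely because the exponent p = 37 is guaranteed to divide q-1.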

2013-04-13, 18:01  #6 
"Bill Staffen"
Jan 2013
Pittsburgh, PA, USA
3·137 Posts 
Thank you, that's very helpful. I would not have guessed that P-1 might be the 'less memory-hungry' version of anything. :)
Last fiddled with by Aramis Wyler on 2013-04-13 at 18:01 
2013-04-13, 18:22  #7 
"Phil"
Sep 2002
Tracktown, U.S.A.
2^{2}×3^{2}×31 Posts 
At one level, P-1 and ECM are very similar: both methods exploit group operations, hoping to find a factor if the group order is smooth. Given a prime factor F, the group order in P-1 is fixed at F-1, and the group operation is multiplication mod F. In ECM, many group orders are possible depending upon which curve is chosen, but the group orders all lie in the range from F+1-2*sqrt(F) to F+1+2*sqrt(F), roughly the same size as F-1, so we would expect the a priori probability of any of these group orders being smooth to be about the same. However, there are two considerations that make P-1 the more efficient method for eliminating LL candidates:
1) We know that the P-1 group order is divisible by 2p, where p is the Mersenne exponent in question. ECM curves can be chosen so that the group order is divisible by either 12 or 16. Suppose trial factoring has already been done to 73 or so bits on a Mersenne number with a 26-bit exponent, and suppose that this number has a factor which is a little bit larger, say 80 bits. The group orders are then also 80-bit numbers, but since the P-1 group order is divisible by 2p, we find the factor when the cofactor of 80 - 27 = 53 bits is smooth to the factoring bounds. ECM only finds the factor when the cofactor of 80 - 4 = 76 bits is smooth, and the probability that a 76-bit cofactor is smooth is considerably less than for a 53-bit cofactor. Of course, if we really want to find that particular factor and P-1 does not find it, we can run a number of ECM curves on the number, but in the context of GIMPS it would be more efficient to move on and run P-1 on other numbers, eliminating more LL candidates in the long run.

2) The other advantage of P-1 is that the group operation, multiplication mod F, is straightforward and easy to implement. (Of course, we don't know F, so we do the multiplications mod 2^{p}-1, but this also applies to ECM.) The group operation in ECM is more complicated, and one step requires about 6 times as much computation as a step in P-1 does. So one ECM curve to given factoring bounds should take about 6 times as long as a P-1 run to the same bounds.

I have oversimplified some details, but I hope this is sufficient to get across the general reason why ECM on LL candidates is just not as efficient as doing P-1, and why doing P-1 to higher bounds will eliminate more candidates in the long run than ECM.

Last fiddled with by philmoore on 2013-04-13 at 18:24 
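To put numbers on point 1), here is a sketch using Cole's famous 1903 factor of 2^67-1 (the factoring helper is plain trial division, invented for this illustration):

```python
def prime_factors(n):
    """Prime factorisation by trial division (toy; small n only)."""
    out, d = [], 2
    while d * d <= n:
        while n % d == 0:
            out.append(d)
            n //= d
        d += 1
    if n > 1:
        out.append(n)
    return out

p = 67
q = 193707721                  # prime factor of 2^67 - 1 (Cole, 1903)
group_order = q - 1            # the P-1 group order
print(prime_factors(group_order))        # 2p = 134 divides it for free
cofactor = group_order // (2 * p)
print(prime_factors(cofactor))           # only this part must be smooth
```

Here q-1 = 2^3·3^3·5·67·2677, so after dividing out 2p the largest prime left is 2677: any B1 bound of 2677 or more finds this factor, even though q itself is a 28-bit number.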
2013-04-13, 18:49  #8 
"Bill Staffen"
Jan 2013
Pittsburgh, PA, USA
3×137 Posts 
This may not make sense, but knowing which numbers are going to be hit by P-1 stage 1, would it be a significant gain to sieve those numbers out of the TF test?

2013-04-13, 21:01  #9 
"Phil"
Sep 2002
Tracktown, U.S.A.
45C_{16} Posts 
Ideally, all numbers should be hit by the P-1 test before LL testing, but it makes sense to TF them to a certain level before doing P-1. TF finds some factors that P-1 will miss, and conversely, P-1 will find some factors that TF will miss.
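For reference, TF exploits the same 2kp+1 form directly. A toy sketch (real TF code is far more optimised; the mod-8 filter is a standard extra property of Mersenne factors, and the function name is invented here):

```python
def tf_mersenne(p, max_bits):
    """Toy trial factoring of 2^p - 1: test candidates q = 2*k*p + 1.
    Any factor q must also satisfy q = +/-1 (mod 8)."""
    k = 1
    while True:
        q = 2 * k * p + 1
        if q.bit_length() > max_bits:
            return None                        # no factor below 2^max_bits
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            return q                           # q divides 2^p - 1
        k += 1

print(tf_mersenne(11, 16))   # 23 divides 2^11 - 1
print(tf_mersenne(23, 16))   # 47 divides 2^23 - 1
```

Note that TF tests every candidate q individually, regardless of whether q-1 is smooth, which is why it can catch factors P-1 misses.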

2013-04-14, 15:45  #10 
Jun 2003
2221_{8} Posts 
To do that would effectively require the trial factorisation of q-1 for every candidate factor q of Mp. While this can indeed be done quickly using a sieve (it's how the quadratic sieve factoring algorithm works), it would still take far more time, and eliminate so few candidates, that the cost would exceed the benefit.

2013-04-14, 16:06  #11  
Jun 2003
7×167 Posts 
Quote:
Prime95, by contrast, will comfortably run P-1 on 20-million-digit numbers using that much memory, and if pressed could make do with just a few hundred meg. The downside is that with Prime95, stage 2 typically takes longer than stage 1, even though B2 is typically only about 50 times B1. With gmp-ecm, stage 2 is much faster than stage 1 despite B2 being tens of thousands of times larger than B1. 
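To illustrate the stage-1/stage-2 split being discussed, here is a toy two-stage P-1 in Python (a sketch of the standard continuation; names and bounds are invented, and real implementations are far cleverer about stage 2):

```python
from math import gcd

def pm1(N, B1, B2, base=3):
    """Toy two-stage P-1.  Stage 1 raises base to E = lcm(1..B1);
    stage 2 catches q where q-1 = (B1-smooth part) * (one prime <= B2)."""
    composite = bytearray(B2 + 1)
    E, stage2_primes = 1, []
    for p in range(2, B2 + 1):
        if not composite[p]:
            for m in range(p * p, B2 + 1, p):
                composite[m] = 1
            if p <= B1:
                pk = p                 # maximal prime power <= B1
                while pk * p <= B1:
                    pk *= p
                E *= pk
            else:
                stage2_primes.append(p)
    x = pow(base, E, N)
    g = gcd(x - 1, N)
    if 1 < g < N:
        return g                       # stage 1 success
    # Stage 2: accumulate x^q - 1 over the single primes B1 < q <= B2.
    acc = 1
    for q in stage2_primes:
        acc = acc * (pow(x, q, N) - 1) % N
    return gcd(acc, N)

# 2^37 - 1 = 223 * 616318177, with 223 - 1 = 2 * 3 * 37.
# B1 = 10 misses the prime 37, but stage 2 with B2 = 50 picks it up.
print(pm1((1 << 37) - 1, 10, 50))   # 223
```

This also shows why stage 2 is comparatively cheap: it needs only one modular exponentiation and multiplication per prime between B1 and B2, rather than folding each prime into the big stage-1 exponent.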
