2019-11-13, 15:53  #23
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
2×3×773 Posts 
Quote:
If you are a programmer, you should consider getting the source and coding it yourself so it can process large batches of potential P-1 assignments and calculate the expected success rate and the GHz-days effort. If you choose a few different percentages vs. effort and graph them, you will note that at some point it takes a LOT more work for a small percentage improvement. I try to find a good balance between expected factors found and the GHz-days effort to find them. In my experience, multiple passes with increasing bounds are inefficient. If you only use B1, then each successive Stage 1 run will continue where it left off (assuming you save the work files). However, any time you increase B2 it runs the entire Stage 2 from the start, whether or not you change B1. I prefer to calculate the bounds that get the number of factors I want and use those bounds in one pass, starting with the exponents with the best odds. 
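To illustrate the diminishing-returns point in the quote, here is a toy sketch. The probability curve below is an invented concave stand-in, not the real GIMPS P-1 success formula: it only shows the shape of the trade-off, where stage-1 effort grows linearly with B1 while the success probability grows much more slowly, so probability per GHz-day falls as bounds increase.

```python
import math

def toy_prob(b1):
    # NOT the real P-1 probability formula; just a concave curve
    # standing in for "success probability grows slowly with B1".
    return 0.01 * math.log10(b1)

def toy_effort(b1):
    # Stage 1 work is roughly proportional to B1.
    return b1 / 1e6

for b1 in (5e5, 1e6, 2e6, 4e6, 8e6):
    p, e = toy_prob(b1), toy_effort(b1)
    print(f"B1={b1:>9.0f}  prob={p:.4f}  effort={e:4.1f}  prob/effort={p/e:.4f}")
```

Graphing prob/effort against B1, as Wayne suggests, makes the knee of the curve obvious.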

2019-11-13, 17:55  #24
Jul 2003
wear a mask
3^{2}×179 Posts 
Thanks, Wayne. That confirms what I was beginning to suspect about P-1 strategy. I think 2.6M will be very close to the 1999-unfactored goal after my initial pass is complete. If necessary, finishing that range with a second pass shouldn't be too difficult. I should be finished with 2.6M by the end of 2019.
2.8M will take a little longer with my measly i5, but I'll keep plugging away at it. The next pass over that range will be a little more strategic. 
2019-11-13, 19:14  #25
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
2·3·773 Posts 
Quote:


2019-11-13, 22:58  #26
Nov 2003
1D24_{16} Posts 
Quote:
probability of finding a factor given that the method failed with a prior given B1. Extending P-1 just isn't worth the effort. It would be much better to run ECM instead. 

2019-11-14, 17:39  #27
Jul 2003
wear a mask
64B_{16} Posts 
Dr. Silverman, thank you for the recommendation. I considered running ECM curves.
Here is an exponent I considered: 2693501. P-1 has already been performed on this exponent with bounds B1 = 670000 and B2 = 6700000. The probability of finding a factor with those bounds was 0.03856. If I complete another P-1 calculation with bounds B1 = 2000000 and B2 = 50000000, the probability of finding a factor (given the current trial factoring limit, but assuming no prior P-1) will be 0.06294. I used the probability calculator here. I estimate the conditional probability using subtraction (how bad is this?): 0.06294 − 0.03856 = 0.02438.

The P-1 calculation will take 21 minutes on my machine, and I expect to find a factor for every 41 (1/0.02438) exponents that I test with similar bounds. So, I expect to find 1 factor every 14.35 hours via P-1.

Each ECM curve for M2693501, at B1 = 50000 and B2 = 5000000, takes about 5 minutes on my machine. Completing 214 such curves will take about 18 hours. Would completing the 214 ECM curves at B1 = 50000 be comparable to trial factoring to 83 bits? That's how I came up with an estimate for the probability of finding a factor via ECM: 0.1139. With these crude estimates of the probability, I anticipate finding a factor via ECM in this range about once a week.

Experience has not borne out the P-1 estimate above of one factor every 14 hours; it has been more like one factor every day. I note that there has been some ECM already performed on these exponents and I'm using very crude estimates. Please correct me if I'm doing anything wildly incorrect. However, at this point it appears that additional P-1 in the 2.6M range is more efficient at finding factors than ECM. 
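The arithmetic in the post above, collected into a short sketch. The probability values come from the bounds calculator, not from this code; only the subtraction and rate estimates are computed here.

```python
# Probabilities reported by the bounds calculator for M2693501:
p_old = 0.03856   # P-1 success prob. at B1=670000,  B2=6700000 (already done)
p_new = 0.06294   # P-1 success prob. at B1=2000000, B2=50000000 (no prior P-1)

# Crude conditional estimate by subtraction (the post asks how bad this is):
p_gain = p_new - p_old                    # ~0.02438

minutes_per_run = 21
exponents_per_factor = 1 / p_gain         # ~41 runs per expected factor
hours_per_factor = exponents_per_factor * minutes_per_run / 60  # ~14.35 h

print(f"gain = {p_gain:.5f}")
print(f"~1 factor per {exponents_per_factor:.0f} runs, "
      f"i.e. every {hours_per_factor:.2f} hours")
```

The same bookkeeping for ECM (214 curves at 5 minutes each, probability 0.1139) gives roughly one factor per week, as stated in the post.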
2019-11-15, 10:54  #28
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
2·5·587 Posts 
I believe that the conditional probability for the P-1 rerun should be (P(second) − P(first))/(1 − P(first)). This assumes that the search space for the first run is completely contained within the second run (and is also quite possibly messed up by TF considerations). As such, the search space is smaller for the second run. This provides a value of 0.03968 for your example, 2693501.
These numbers do not take into account any ECM that has already been done. I am not sure what the average amount of ECM done so far on numbers this size is, but 2693501 has had 33 curves done. This is 33/280 of a t25, and as such there is a 1 − e^(−33/280) = 11.1% chance of any 25-digit factor having been found already by ECM. This will be higher for smaller factors, of course. Please feel free to correct my hastily done maths on this.

It would be interesting to make a calculator that took all known work into account, as well as the expected size of the factors, and showed graphically the probability of any new work finding a factor of x digits. I might look into doing this if I can work out the formulas for P-1 and ECM probabilities. 
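A sketch of the two estimates above. Note that with the probabilities quoted in the earlier post (0.03856 and 0.06294) the conditional formula gives about 0.0254, not the 0.03968 reported here, so that figure presumably came from different (e.g. TF-adjusted) inputs; the code below just shows the formulas as stated.

```python
import math

# Conditional P-1 probability: (P2 - P1) / (1 - P1), using the
# probabilities from the earlier post in the thread.
p1, p2 = 0.03856, 0.06294
cond = (p2 - p1) / (1 - p1)

# Chance that a 25-digit factor was already found by the 33 ECM
# curves done so far, with a full t25 taken as ~280 curves:
already_found = 1 - math.exp(-33 / 280)

print(f"conditional P-1 probability ≈ {cond:.5f}")
print(f"P(25-digit factor already found by ECM) ≈ {already_found:.1%}")
```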
2019-11-15, 18:04  #29
Nov 2003
2^{2}·5·373 Posts 
Quote:
You are comparing the wrong things. I will accept your probabilities as correct. [Your conditional probability computation is not correct, however; you need to divide by (1 − P(first)).]

Running P-1 with limits B1, B2 is the same as running a single elliptic curve [although the computations are simpler and hence faster] (one does need to adjust the size of the candidate by log(exponent), because p − 1 is always divisible by the exponent). Running a second elliptic curve with the same limits will double the probability of success. Increasing B1, B2 for P-1 does not double the probability of success unless B1, B2 are greatly increased.

You are comparing running P-1 with B1, B2 against running another ECM curve with much SMALLER limits. And running ECM gives multiple independent chances; P-1 does not. Read my joint paper with Sam Wagstaff: A Practical Analysis of ECM. 

2019-11-15, 18:20  #30
"Curtis"
Feb 2005
Riverside, CA
3×5×11×29 Posts 
Quote:
Masser is comparing the rate of factors found by P-1 (at large bounds, as stated) to the rate found by ECM (at small bounds). If that's the wrong thing to compare, what is the right thing? 

2019-11-15, 23:38  #31
Nov 2003
2^{2}×5×373 Posts 
Quote:
Suppose we have the choice of (say) running P-1 again with limits kB1, kB2, or running an elliptic curve with limits B1, B2 [for some k]. The latter choice will double the probability of success; the former will not. The former will take less time, but the latter will give a greater likelihood of success. The former will allow more numbers to be tested in a fixed amount of time. If we choose k so as to spend the same amount of time extending P-1 or running ECM, the latter should be more efficient.

OTOH, if one places a BOUND on the time to be spent, i.e. one wants to spend time T either running more curves or extending B1, B2 for P-1, it may be more effective to extend P-1, because (as already noted) one must lower B1, B2 for the elliptic curves, since point addition on curves is much more expensive than simple exponentiation.

I have not computed the effect that attempting 2^p − 1 has on P-1. The form of the numbers gives that q − 1 (for any factor q) is always divisible by p. This effectively reduces the size of the numbers that "must be smooth" by log(p). This may give a sufficient advantage to make P-1 more effective when bounding the run time. Note that this advantage disappears if one were attempting to factor numbers of no particular form.

If one is willing to spend an arbitrary amount of time on each candidate, it is clear that running multiple elliptic curves will be far more effective than raising the P-1 bounds, especially as the factors get larger. 
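A small sketch (not from the thread, and with an invented per-curve probability) of why independent ECM curves compound while a single P-1 run does not: n curves, each succeeding with probability p, find a factor with probability 1 − (1 − p)^n, which is roughly n·p when p is small.

```python
def ecm_success(p_curve, n_curves):
    """Probability that at least one of n independent curves succeeds."""
    return 1 - (1 - p_curve) ** n_curves

# Hypothetical per-curve success probability, for illustration only.
p = 0.001
for n in (1, 2, 10, 100):
    print(f"{n:>3} curves: {ecm_success(p, n):.6f}")
```

Two curves double the single-curve probability (to first order), whereas doubling the P-1 bounds does not double its probability of success.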

2019-11-16, 18:22  #32
Jul 2003
wear a mask
3^{2}·179 Posts 
Thank you all for the feedback. I have downloaded the Silverman–Wagstaff paper and will read it, but I will probably need to brush up on some of the background before I fully comprehend it. Thanks again.

2019-11-28, 06:16  #33
Jul 2003
wear a mask
3113_{8} Posts 
New (to me) result feedback:
Splitting composite factor 72115741492408141371057158919540730748664584042639 into:
* 57330969015562354090032601
* 1257884573219624043651239

https://www.mersenne.org/report_expo...2645329&full=1

Last fiddled with by masser on 2019-11-28 at 06:19 