2018-11-13, 06:15  #23 
"Curtis"
Feb 2005
Riverside, CA
2^{2}×3^{3}×53 Posts 
See https://www.mersenne.org/report_ecm/. Each column is a "level", a set of curves expected to find a factor of a specific size. It is customary to jump B1 to a higher level once the prescribed number of curves are complete at a lower level.
Citing ATH's B2 from GMP-ECM isn't very relevant for Prime95; they're two different programs using two different algorithms for ECM stage 2. I will say that when I used P95 to do ECM, I used B2 = 150*B1, but I'm not sure it was faster at finding factors than the default B2 = 100*B1. I disagree that stage 2 time should be as long as stage 1 time; stage 2 taking anywhere from 40% to 70% of stage 1 time is perfectly normal for both GMP-ECM and Prime95.

If your goal is to maximize the probability of finding a factor per unit of computation, you should not mess with the assignments as given. There are changes that might be a tiny bit better, say 10% more likely to find a factor per unit time, but there are many settings that will be quite a bit less efficient, such as setting bounds suited to a factor size much larger than you're likely to spend the time to run enough curves to find. If you're assigned curves with B1=50,000 and decide to use B1=500,000 instead, your factors-found-per-curve will be much, much better, but your factors-found-per-time will be quite a bit worse (because you're running curves that take 10 times as long without finding 10 times as many factors). 
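To make the per-curve vs. per-time tradeoff concrete, here is a small back-of-the-envelope sketch in Python. The per-curve probabilities and the 3x/10x ratios are illustrative assumptions for the sake of the arithmetic, not measured ECM values:

```python
# Illustrative sketch of factors-found-per-curve vs. factors-found-per-time.
# Assumptions (not measured values): roughly 1/280 of B1=50,000 curves find
# a given 25-digit factor (from the standard ~280-curve t25 count), and we
# *guess* that a B1=500,000 curve is ~3x more likely per curve to catch
# that factor while taking ~10x as long to run.

p_per_curve_50k = 1 / 280      # assumed chance per curve at B1=50k
p_per_curve_500k = 3 / 280     # assumed chance per curve at B1=500k
time_50k = 1.0                 # relative runtime of one B1=50k curve
time_500k = 10.0               # B1 is 10x larger, so roughly 10x the time

rate_50k = p_per_curve_50k / time_50k      # expected factors per time unit
rate_500k = p_per_curve_500k / time_500k

print(f"per-curve:  50k={p_per_curve_50k:.5f}  500k={p_per_curve_500k:.5f}")
print(f"per-time:   50k={rate_50k:.5f}  500k={rate_500k:.5f}")
# The 500k curves win per curve, but lose per unit of time.
```

Under these assumed numbers, the larger-B1 curves are three times as likely to succeed per curve, yet find factors at less than a third of the rate per hour of computation.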
2018-11-13, 16:18  #24  
Random Account
Aug 2009
Not U. + S.A.
2·3·19·23 Posts 
My HP is running assignments as given by the server. No changes. What I am doing with this Asus is experimentation. What I run is not assigned. I just comb through the existing records to find something with very little ECM work done. 

2018-11-13, 17:58  #25 
"Curtis"
Feb 2005
Riverside, CA
1011001011100_{2} Posts 
There is no particular reason to run B1's in between. You can if you like, but bookkeeping is easier if folks run t25 curves (B1=50k) or t30 curves (B1=250k). There's nothing wrong with picking arbitrary B1 values, but there is little to gain either. They're called t25 and t30 because those B1 sizes are most efficient for finding factors of those respective sizes. After the t25 column is "done", it is more time-efficient to target larger factors, even though it's not impossible that a 25-digit factor is still out there.
Larger-bound curves will still find factors of a smaller size, and there is little harm in choosing a B1 one level higher than the current column, or, for that matter, any B1 between the current level's B1 and the next-higher B1. Running curves smaller than recommended is a waste of time. Running larger curves increases the chance per curve of finding a larger factor, at the cost of decreasing the chance per day of finding a relatively smaller factor. The site I linked converts curves at odd B1 values to the current wavefront linearly; that is, if your candidate has curves listed in the 50k column and you run a curve at 100k, it'll count as two curves at 50k. It also takes twice as long as one curve at 50k. 
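The linear conversion described above is simple enough to sketch. The function below just restates the "credit scales with B1" rule as described; it is not the site's actual code:

```python
# Sketch of the linear curve-count conversion described above: a curve run
# at an off-level B1 is credited to the current wavefront column in
# proportion to its B1, so one curve at B1=100,000 counts as two curves
# at the B1=50,000 wavefront (and takes about twice as long to run).

def wavefront_credit(curves: int, b1: int, wavefront_b1: int) -> float:
    """Convert curves run at `b1` into equivalent wavefront-B1 curves."""
    return curves * b1 / wavefront_b1

print(wavefront_credit(1, 100_000, 50_000))   # one 100k curve -> 2.0 at 50k
print(wavefront_credit(10, 75_000, 50_000))   # ten 75k curves -> 15.0 at 50k
```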
2018-11-13, 23:30  #26  
Random Account
Aug 2009
Not U. + S.A.
2·3·19·23 Posts 
I found this a few nights ago when I was trying different things:
I looked at the assignments on my HP. B1 is either 250,000 or 1,000,000. This goes along with the link to the report pages. I cannot say that I fully understand it, but enough to see how it is working. 

2018-11-14, 06:28  #27  
Einyen
Dec 2003
Denmark
3447_{10} Posts 
Those "optimal" values were calculated more than 20 years ago. They are chosen so that after ~280 curves at B1=50,000 there is "only" a 1/e ≈ 37% risk of missing a 25-digit factor, and a 63% chance of finding one, if it exists. Then you move to B1=250,000 at the 30-digit level, but those curves will also find any missed 25-digit or smaller factor. After ~640 curves there is again a 63% chance of finding any 30-digit factor, and by then a >99% chance of having found any 25-digit factor. And so on: at each digit level, the calculated number of curves gives a 63% chance of finding a factor at that level, while assuring a practically 100% chance of having found any factor at the earlier levels. Factors between the "levels", for example 28 digits or 32 digits, will have different odds of being found, but will eventually be found at the next "level".

Last fiddled with by ATH on 2018-11-14 at 06:32 
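The 1/e figure follows directly from the miss-probability arithmetic. A quick check in Python, treating the per-curve success chance as exactly 1/280 (an idealization):

```python
import math

# If each B1=50,000 curve independently finds a given 25-digit factor with
# probability ~1/280, then after 280 curves the chance of having missed it
# is (1 - 1/280)^280, which is close to 1/e ~ 37%, i.e. ~63% success.

p = 1 / 280
miss_after_t25 = (1 - p) ** 280
print(f"miss after 280 curves: {miss_after_t25:.4f}")      # close to 1/e
print(f"find after 280 curves: {1 - miss_after_t25:.4f}")  # ~63%
print(f"1/e for comparison:    {math.exp(-1):.4f}")
```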

2018-11-14, 11:32  #28  
Random Account
Aug 2009
Not U. + S.A.
5076_{8} Posts 
I noticed something last night that I did not know. Someone had written that B1 is only used in Stage 1, while both B1 and B2 are used in Stage 2. Consider the sample below: Code:
ECM2=<ID>,1,2,<Exponent>,-1,250000,25000000,50,<Known factors>

As for the probability percentages, I am not an advanced mathematician. I know enough to do what I need. My area of expertise is hardware. I believe I can say that after 30 years of experience. If that doesn't make sense, consider the fact that I am 63 years old. 
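For reference, the fields in such an ECM2 line can be split out programmatically. The field meanings below (assignment ID, then k, b, n, c describing the candidate k*b^n+c, then B1, B2, curve count, and any known factors) are my reading of the Prime95 worktodo format, and the sample exponent 1277 is a made-up placeholder:

```python
# Hedged sketch: split a Prime95 worktodo ECM2 line into named fields.
# Assumed field layout: ID, k, b, n, c (candidate k*b^n+c), B1, B2,
# curves-to-run, then optional known factors. The sample line below is
# hypothetical; 1277 merely stands in for a real exponent.

def parse_ecm2(line: str) -> dict:
    body = line.split("=", 1)[1]          # drop the "ECM2=" prefix
    parts = body.split(",")
    return {
        "id": parts[0],
        "k": int(parts[1]),
        "b": int(parts[2]),
        "n": int(parts[3]),
        "c": int(parts[4]),
        "B1": int(parts[5]),
        "B2": int(parts[6]),
        "curves": int(parts[7]),
        "known_factors": parts[8:],       # whatever trails the curve count
    }

sample = "ECM2=N/A,1,2,1277,-1,250000,25000000,50"
print(parse_ecm2(sample))
```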

2018-11-14, 12:18  #29  
Einyen
Dec 2003
Denmark
3447_{10} Posts 
But if you run all the curves for the 25-digit and 30-digit levels, you can be "nearly 100%" sure there is no factor at or below 25 digits; and if you then also run the curves for the 35-digit level, you can be "nearly 100%" sure there are no factors at or below 30 digits, and so on.

Last fiddled with by ATH on 2018-11-14 at 12:18 
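The "nearly 100%" claim is just the per-curve miss probabilities multiplying across levels. A sketch with assumed numbers (the ~1/150 per-curve chance of a B1=250,000 curve catching a given 25-digit factor is an illustrative guess, not a computed value):

```python
# Miss probabilities multiply across independent curves. Assumed inputs:
# ~1/280 per B1=50k curve, and a *guessed* ~1/150 per B1=250k curve, for
# a given 25-digit factor. After the full t25 run (280 curves) plus the
# full t30 run (640 curves), the chance a 25-digit factor survived both:

miss_t25 = (1 - 1 / 280) ** 280     # ~37% after the t25 curves alone
miss_t30 = (1 - 1 / 150) ** 640     # assumed per-curve odds at B1=250k
combined_miss = miss_t25 * miss_t30

print(f"miss after t25 alone: {miss_t25:.3f}")
print(f"miss after t25 + t30: {combined_miss:.5f}")   # well under 1%
```

Under these assumptions the residual chance of a surviving 25-digit factor drops from roughly 37% to well under 1%, matching the ">99%" figure quoted earlier in the thread.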

2018-11-14, 20:57  #30  
Aug 2006
5988_{10} Posts 
Code:
nr = 1; \\ approximate as 1, not very important
1 - (1 - ecmprob(50000, 12746592, 5e24, nr, 2))^261 * (1 - ecmprob(250000, 128992510, 5e24, nr, 3))^513
1 - (1 - ecmprob(50000, 12746592, 5e29, nr, 2))^261 * (1 - ecmprob(250000, 128992510, 5e29, nr, 3))^513 * (1 - ecmprob(10^6, 1045563762, 5e29, nr, 6))^1071 

2018-11-15, 23:41  #31 
Random Account
Aug 2009
Not U. + S.A.
2·3·19·23 Posts 
Everyone here will think I am nutty when I say that I keep a record of everything I run, except TF. So far this month, on this machine, I have run 1,103 ECM curves. 26 found a factor; that is 2.35%. All of these had a B1 of either 250,000 or 1,000,000. I didn't check the numbers on the HP. I suspect they would trend the same way.

Edit: Counting what the HP has done drops the percentage to 1.7%.

Last fiddled with by storm5510 on 2018-11-15 at 23:53 Reason: Supplemental 
2018-11-22, 03:02  #32 
Random Account
Aug 2009
Not U. + S.A.
A3E_{16} Posts 

2018-12-05, 02:44  #33 
"6800 descendent"
Feb 2005
Colorado
2^{3}·3·31 Posts 
My question is: Is there any difference in the probability of finding a factor when performing ECM on a small Mersenne number vs. a larger one (assuming plenty of memory is available)?

Similar Threads  
Thread  Thread Starter  Forum  Replies  Last Post 
How to use prime95 for stage 1 & GMP-ECM for stage 2  Prime95  Lone Mersenne Hunters  118  2022-07-04 18:19 
Stage 1  G_A_FURTADO  Information & Answers  1  2008-10-26 15:21 
Stage 1 with mprime/prime95, stage 2 with GMP-ECM  D. B. Staple  Factoring  2  2007-12-14 00:21 
Need help to run stage 1 and stage 2 separately  jasong  GMP-ECM  9  2007-10-25 22:32 
Stage 1 and stage 2 tests missing  Matthias C. Noc  PrimeNet  5  2004-08-25 15:42 