#23
"Curtis"
Feb 2005
Riverside, CA
2²×3³×53 Posts
See https://www.mersenne.org/report_ecm/. Each column is a "level", a set of curves expected to find a factor of a specific size. It is customary to jump B1 to a higher level once the prescribed number of curves is complete at the lower level.
Citing ATH's B2 from GMP-ECM isn't very relevant for Prime95; they're two different programs using two different algorithms to do ECM. I will say that when I used P95 to do ECM, I used B2 = 150*B1, but I'm not sure it was faster at finding factors than the default B2 = 100*B1. I disagree that stage 2 time should be as long as stage 1 time; stage 2 taking anywhere from 40% to 70% of stage 1 time is perfectly normal for both GMP-ECM and Prime95.

If your goal is to maximize the probability of finding a factor per unit of computation, you should not mess with the assignments as given. There are changes that might be a tiny bit better, say 10% more likely to find a factor per unit time, but there are lots of settings that will be quite a bit less efficient, such as setting bounds suitable for a factor size much larger than you're likely to spend the time to run enough curves to find. If you're assigned curves with B1=50,000 and decide to use B1=500,000, your factors-found-per-curve will be much better, but your factors-found-per-time will be quite a bit worse (because you're running curves that take 10 times as long but not finding 10 times as many factors).
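To put that last tradeoff in symbols (my paraphrase of the argument, not a formula from either program): curve time grows roughly linearly with B1, so the quantity to maximize is

$$\frac{\text{factors found}}{\text{unit time}} \;\propto\; \frac{p_d(B_1)}{B_1},$$

where $p_d(B_1)$ is the per-curve chance of finding a $d$-digit factor at bound $B_1$. Jumping from $B_1 = 50{,}000$ to $500{,}000$ multiplies the denominator by 10, so it only pays off if $p_d$ also rises nearly tenfold, which it does not for the factor sizes the candidate is actually likely to have.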
#24
Random Account
Aug 2009
Not U. + S.A.
2·3·19·23 Posts
Quote:
My HP is running assignments as given by the server. No changes. What I am doing with this Asus is experimentation. What I run is not assigned. I just comb through the existing records to find something with very little ECM work done.
#25
"Curtis"
Feb 2005
Riverside, CA
1011001011100₂ Posts
There is no particular reason to run B1's in between. You can if you like, but bookkeeping is easier if folks run t25 curves (B1=50k) or t30 curves (B1=250k). There's nothing wrong with picking arbitrary B1 values, but there is little to gain either. They're called t25 and t30 because those B1 sizes are most efficient for finding factors of those respective sizes. After the t25 column is "done", it is more time-efficient to target larger factors even though it's not impossible that a 25-digit factor is still out there.
Larger-bound curves will still find factors of a smaller size, and there is little harm in choosing a B1 one level higher than the current column, or, for that matter, any B1 between the current level's B1 and the next-higher B1. Running curves smaller than recommended is a waste of time. Running curves larger increases the chance per curve of finding a larger factor, at the cost of decreasing the chance per day of finding a relatively smaller factor.

The site I linked converts curves at odd B1 to the current wavefront linearly; that is, if your candidate has curves listed in the 50k column, and you run a curve at 100k, it'll count as two curves at 50k. It also takes twice as long as one curve at 50k.
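That linear rule is easy to script. A toy GP sketch of my reading of it (equivcurves and the list layout are my own inventions, not the report's actual code):

Code:
\\ hist is a vector of [B1, count] pairs from a candidate's ECM history;
\\ a curve at bound B1 counts as B1/wave curves at the wavefront bound.
equivcurves(hist, wave=50000) = sum(i=1, #hist, hist[i][2] * hist[i][1] / wave);

\\ Example: 100 curves at 50k plus 10 curves at 100k -> 100 + 20 = 120
equivcurves([[50000, 100], [100000, 10]])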
#26
Random Account
Aug 2009
Not U. + S.A.
2·3·19·23 Posts
I found this a few nights ago when I was trying different things:
Quote:
I looked at the assignments on my HP. B1 is either 250,000 or 1,000,000. This goes along with the link to the report pages. I cannot say that I fully understand it, but I understand enough to see how it is working.
#27
Einyen
Dec 2003
Denmark
3447₁₀ Posts
Quote:
Those "optimal" values was calculated >20 years ago. They are calculated so after ~280 curves at B1=50,000 there are "only" 1/e ~ 37 % risk/chance of missing a 25 digit factor and 63% chance of finding it, if there is one. Then you go to B1=250,000 at the 30 digit level, but those curves will also find any missing 25 digit factor or even smaller factors. After ~640 curves there are again 63% chance of finding any 30 digit factor and now there are >99% chance of having found any 25 digit factor. And so on at each digit level the calculated number of curves assures 63% chance of finding the factor at that level while assuring practically 100% chance of any factors at the earlier levels. Factors between the "levels" for example 28 digits and 32 digits will have difference odds of being found but will eventually be found at the next "level". Last fiddled with by ATH on 2018-11-14 at 06:32 |
#28
Random Account
Aug 2009
Not U. + S.A.
5076₈ Posts
Quote:
I noticed something last night that I did not know. Someone had written that B1 is only used in Stage 1, while both B1 and B2 are used in Stage 2. Consider the sample below:

Code:
ECM2=<ID>,1,2,<Exponent>,-1,250000,25000000,50,<Known factors>

As for the probability percentages, I am not an advanced mathematician; I know enough to do what I need. My area of expertise is hardware. I believe I can say that after 30 years of experience. If that doesn't make sense, consider the fact that I am 63 years old.
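Returning to the sample line above — a field-by-field reading (my annotation, not official Prime95 documentation; the positions of B1 and B2 are the point here):

Code:
ECM2=<ID>,k,b,n,c,B1,B2,curves[,known factors]
          k=1, b=2, n=<Exponent>, c=-1  -> the candidate is k*b^n+c = 2^p-1
          B1=250000, B2=25000000        -> note B2 = 100*B1, the default ratio mentioned earlier
          curves=50                     -> number of curves to run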
#29
Einyen
Dec 2003
Denmark
3447₁₀ Posts
Quote:
But if you run all the curves for the 25-digit and 30-digit levels, you can be "nearly 100%" sure there are no factors at or below 25 digits; and if you then also run the curves for the 35-digit level, you can be "nearly 100%" sure there are no factors at or below 30 digits, and so on.

Last fiddled with by ATH on 2018-11-14 at 12:18
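In symbols (a sketch of the reasoning, in my own notation): if $p_{i,d}$ is the per-curve chance that a curve at level $i$ catches a $d$-digit factor, and $n_i$ curves are run at level $i$, then

$$P(\text{miss a } d\text{-digit factor}) \;=\; \prod_i \left(1 - p_{i,d}\right)^{n_i}.$$

Each level's $n_i$ is chosen so that a factor at its own target size is missed with probability about $e^{-1}$; for smaller $d$ the $p_{i,d}$ are larger, so each completed higher level drives the product for the earlier sizes toward zero.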
#30
Aug 2006
5988₁₀ Posts
Quote:
Quote:
Code:
nr = 1; \\ approximate as 1, not very important

\\ ecmprob(B1, B2, f, nr, S) is a user function (not a GP built-in) giving the
\\ per-curve chance of finding a factor of size ~f at the given bounds. The B2
\\ values here are GMP-ECM's defaults for these B1; the last argument looks like
\\ the Brent-Suyama choice (negative = Dickson polynomial of that degree).

\\ Chance that 261 curves at B1=50k plus 513 curves at B1=250k would already
\\ have found a ~25-digit (5e24) factor:
1 - (1-ecmprob(50000, 12746592, 5e24, nr, 2))^261 * (1-ecmprob(250000, 128992510, 5e24, nr, -3))^513

\\ Same for a ~30-digit (5e29) factor, adding 1071 curves at B1=10^6:
1 - (1-ecmprob(50000, 12746592, 5e29, nr, 2))^261 * (1-ecmprob(250000, 128992510, 5e29, nr, -3))^513 * (1-ecmprob(10^6, 1045563762, 5e29, nr, -6))^1071
#31
Random Account
Aug 2009
Not U. + S.A.
2·3·19·23 Posts
Everyone here will think I am nutty when I say that I keep a record of everything I run, except TF. So far this month, on this machine, I have run 1,103 ECM tests; 26 of them found a factor. That is 2.35%. All of these had B1's of either 250,000 or 1,000,000. I didn't check the numbers on the HP; I suspect it would trend the same way.

Edit: Counting what the HP has done drops the percentage to 1.7%.

Last fiddled with by storm5510 on 2018-11-15 at 23:53 Reason: Supplemental
#32
Random Account
Aug 2009
Not U. + S.A.
A3E₁₆ Posts
#33
"6800 descendent"
Feb 2005
Colorado
2³·3·31 Posts
My question is: Is there any difference in the probability of finding a factor when performing ECM on a small Mersenne number vs. a larger one (assuming plenty of memory is available)?