#947 |
"University student"
May 2021
Beijing, China
2·53 Posts |
#948 |
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
5,179 Posts |
They are only TF'd to 73 bits but each range only needs 19 more factors.
If you choose to TF them to 74 they will likely clear. If you choose to P-1 them with modest bounds they will also clear. |
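The "one more TF bit should clear the range" expectation can be sanity-checked with the common GIMPS heuristic that a Mersenne number has a factor between 2^b and 2^(b+1) with probability roughly 1/b. A minimal sketch, where the candidate count (1400) is a hypothetical number, not one taken from the thread:

```python
def expected_tf_factors(candidates, bits_from):
    """Rough expected factor count from extending TF one bit past
    bits_from, using the ~1/b heuristic (an approximation only)."""
    return candidates / bits_from

# e.g. ~1400 unfactored candidates in a range already TF'd to 73 bits:
print(round(expected_tf_factors(1400, 73), 1))  # ~19.2, about the 19 needed
```

So for ranges of that size, one extra TF bit plausibly yields the ~19 factors each range still needs.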
#949 |
Oct 2021
Germany
2·7² Posts |
#950 |
"Lisander Viaene"
Oct 2020
Belgium
1101101₂ Posts |
I'll be doing P-1 in the 10.xM ranges (10.4M-11.0M) with B1=700k to 800k and B2: whatever v30.8b2 assigns :)
Last fiddled with by lisanderke on 2021-11-28 at 23:27 |
#951 |
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
5,179 Posts |
That leaves the highest undone count in our ranges of interest at 29.8.
Wow! Thanks, everyone. |
#952 |
Dec 2016
3²·13 Posts |
#953 |
"Oliver"
Sep 2017
Porta Westfalica, DE
1111001101₂ Posts |
IIRC I set it to 20 GB and had 23 GB of usage. On the same machine I regularly had overallocation with ECM. George got a lot of those issues sorted out. |
#954 |
Jul 2003
Behind BB
3531₈ Posts |
#955 |
"Seth"
Apr 2019
24×33 Posts |
I'm re-running Stage 2 with 30.8v2 and finding a few extra factors from the larger B2:
https://www.mersenne.org/report_expo...6907619&full=1

Stage 2 is several times faster. From memory, it was taking 5-10K core-seconds to complete B2=110M, vs. 2K core-seconds to complete B2=414M now. |
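Some back-of-envelope arithmetic on those timings (treating Stage 2 cost as roughly proportional to B2, which is only an approximation, and taking the midpoint of the 5-10K figure):

```python
# Hypothetical rough comparison, not a benchmark.
old_b2, old_secs = 110e6, 7500   # pre-30.8: B2=110M in ~5-10K core-seconds
new_b2, new_secs = 414e6, 2000   # 30.8v2:  B2=414M in ~2K core-seconds

old_rate = old_b2 / old_secs     # B2 range covered per core-second, old client
new_rate = new_b2 / new_secs     # B2 range covered per core-second, 30.8v2
print(round(new_rate / old_rate, 1))  # ~14.1x more B2 per core-second
```

On those numbers, Stage 2 throughput is on the order of 14x higher, consistent with "several times faster" at a much larger B2.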
#956 | |
Oct 2021
Germany
2×7² Posts |
Last fiddled with by Luminescence on 2021-12-01 at 02:27 |
#957 |
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
5,179 Posts |
George is working on determining the best B1/B2 bounds for new version 30.8.

Because Stage 2 is several times faster than it used to be, the optimal B2 is a much higher multiple of B1. That equates to a higher success rate and more factors found on average.

Most of the time, P-1 is run on fresh exponents with no prior P-1. For this sub-project, however, most of the P-1 work is on exponents that have already had P-1 done, just to relatively low bounds. With newer, faster hardware it is reasonable to re-do P-1 to higher bounds and factor more exponents. Along with that, the goal of this sub-project is to find a defined quantity of factors as efficiently as possible. So in the past I've done a lot of analysis and some trial-and-error and *have* a pretty good handle on the recommended bounds for each remaining range. However, with version 30.8, I now merely *had* a pretty good handle.

The basic formula remains the same:
- Note how many factors are required.
- Analyze the current average P-1 success rate.
- Calculate the new success rate that is required to produce the required number of factors.
- Determine the new P-1 bounds that achieve that success rate.

For example, prior to 30.8, if I needed a +3% success rate my new bounds would be in the 1.5M/45M range; a 30x B2/B1 ratio was reasonable in those versions. For an exponent in the 28M range, these bounds give a 4.58% success rate (here I'm assuming the current success rate is about 1.58%, which is reasonable). With the new version recommending about a 200x B2/B1 ratio, getting the same 4.58% success rate would require bounds of about 530K/106M.

That may not seem like a problem until you consider that the current B1 for most of the exponents is already over 530K (those that are lower are not much lower). Therefore, bounds such as these make it unlikely that a Stage 1 factor will be found. But maybe that is not a big problem with Stage 2 being so much faster.

For now, I'm just not quite sure what to suggest:
- Use bounds as above and accept that Stage 1 is unlikely to find a factor.
- Increase B1 and reduce the B2/B1 ratio to 100x to make Stage 1 more productive, but lose some overall P-1 efficiency.
- Increase B1 and keep the B2/B1 ratio at 200x; find more factors sooner, but with more total effort (though in less clock time due to the speed of 30.8).
- Something else?

Opinions? Thanks.

P.S. In the same clock time, Stage 2 has 3 to 4 times the odds of finding a factor as Stage 1. It seems Stage 1 is not as significant/effective in 30.8.

Last fiddled with by petrw1 on 2021-12-01 at 05:24. Reason: P.S. |
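The four-step formula above can be sketched numerically. This is a hedged illustration only: the 30-factors-needed and 1000-candidates inputs are made up to reproduce the 1.58% -> 4.58% example, and the mapping from a target rate to actual B1/B2 bounds still requires a probability table or calculator.

```python
def required_success_rate(factors_needed, candidates, current_rate):
    """New per-exponent P-1 success rate needed so the redone runs
    yield factors_needed factors, on top of the success probability
    the earlier low-bounds P-1 already 'spent' (current_rate)."""
    return current_rate + factors_needed / candidates

# e.g. 30 factors still needed among 1000 unfactored exponents whose
# earlier P-1 had ~1.58% odds -> need ~4.58%, matching the example above.
print(round(required_success_rate(30, 1000, 0.0158) * 100, 2))  # 4.58
```

The last step, picking B1/B2 that actually deliver that rate (1.5M/45M at a 30x ratio pre-30.8, ~530K/106M at a 200x ratio in 30.8), is where the version change moves the answer.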
Thread | Thread Starter | Forum | Replies | Last Post |
Thinking of Joining GPU to 72 | jschwar313 | GPU to 72 | 3 | 2016-01-31 00:50 |
Thinking about lasieve5 | Batalov | Factoring | 6 | 2011-12-27 22:40 |
Thinking about buying a panda | jasong | jasong | 1 | 2008-11-11 09:43 |
Loud thinking on irregular primes | devarajkandadai | Math | 4 | 2007-07-25 03:01 |
Question on unfactored numbers... | WraithX | GMP-ECM | 1 | 2006-03-19 22:16 |