#23
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
2³×661 Posts
If I choose 1M as the desired B1 for exponents at 20M and apply George's suggestion ("if the exponent is halved, increase B1 by a factor of 2.2"), I get the following table. Column 3 (B1-Neat) rounds B1 to the nearest 100K.
Code:
Exponent      B1            B1-Neat
   78125   548,758,735   548,800,000
  156250   249,435,789   249,400,000
  312500   113,379,904   113,400,000
  500000    66,427,649    66,400,000
  625000    51,536,320    51,500,000
  750000    41,883,644    41,900,000
 1000000    30,194,386    30,200,000
 1250000    23,425,600    23,400,000
 1500000    19,038,020    19,000,000
 2000000    13,724,721    13,700,000
 2500000    10,648,000    10,600,000
 3000000     8,653,645     8,700,000
 4000000     6,238,510     6,200,000
 5000000     4,840,000     4,800,000
 6000000     3,933,475     3,900,000
 7000000     3,300,838     3,300,000
 8000000     2,835,686     2,800,000
 9000000     2,480,116     2,500,000
10000000     2,200,000     2,200,000
11000000     1,973,960     2,000,000
12000000     1,787,943     1,800,000
13000000     1,632,344     1,600,000
14000000     1,500,381     1,500,000
15000000     1,387,133     1,400,000
16000000     1,288,948     1,300,000
17000000     1,203,057     1,200,000
18000000     1,127,325     1,100,000
19000000     1,060,082     1,100,000
20000000     1,000,000     1,000,000
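For what it's worth, the "halve the exponent, multiply B1 by 2.2" rule is equivalent to the closed form B1(p) = 1M × 2.2^log2(20M/p), which reproduces the table above. A minimal Python sketch of that scaling (function names are mine; this is not any official GIMPS tooling):

```python
import math

def b1_for_exponent(p, base_exp=20_000_000, base_b1=1_000_000, factor=2.2):
    # Each halving of the exponent multiplies B1 by `factor` (2.2 here),
    # so B1 scales as factor ** log2(base_exp / p).
    return base_b1 * factor ** math.log2(base_exp / p)

def b1_neat(p):
    # Round the suggested B1 to the nearest 100K, as in column 3.
    return round(b1_for_exponent(p) / 100_000) * 100_000

for p in (78_125, 1_000_000, 10_000_000, 20_000_000):
    print(p, round(b1_for_exponent(p)), b1_neat(p))
```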
#24
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
2³·661 Posts
George is much more qualified for this point but I'll start.
Using this ... (which might NOT be accurate for v30.8) and based on my PC's various RAM configurations, I get these specs:

My PC with 24.5GB RAM for P-1 with B1=1M/B2=328M gets a 5.74% success rate for 17.66 GHz-days.
My PC with 12GB RAM (about half) needs B1=1.3M/B2=260M for 14.45 GHz-days to get the same success rate.
My PC with 6.5GB RAM (about half again) needs B1=1.8M/B2=200M for 11.82 GHz-days to get the same success rate.

The actual GHz-days awarded are about 15% higher than the above. Does this seem somewhat reasonable?
#25
If I May
"Chris Halsall"
Sep 2002
Barbados
2×3×5×373 Posts
There was a time a while ago when we were working "blind", when the GPUs turned TF'ing upside-down. James managed to inform the discussion as to what was "sane", based on empirical data and peer-reviewed analysis. Just putting that out there...
#26
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
14A8₁₆ Posts
Empirical data and peer reviews greatly appreciated. |
#27
"Curtis"
Feb 2005
Riverside, CA
2×7×409 Posts |
My view: Don't bother re-doing P-1 unless you're adding a zero to the already-done B1 bound.
But since a couple of folks seem to want another project after under-20k is done: how about, right now, doing bigger P-1 than necessary to clear the ranges left? Those doing near the minimum B1 to clear the needed number of factors are, in a sense, fouling those exponents for future P-1. Better to do it once and do it right: go deep enough on P-1 now that nobody would "ever" want to re-do it.

I've been helping Masser on the hard-to-finish ranges he chooses. In 8.6M, I used B1 around 8M; now in 17.7M I'm using bounds of 4M/4G. It's not the most efficient path to finding factors, but if some of you keep doing P-1 after the project finishes, it's hard to set optimal bounds now, since there is a "next project" of some sort coming.

A simpler "next project" is under-200 for each 0.01M range. That'll require interestingly large P-1 bounds in some ranges, and just a few factors in others. As with the current project, that variety will attract a wider set of "we like useless projects with clear goals" users than setting some arbitrary bounds for P-1 and TF work. The downside is that some ranges are nigh impossible... for now. People with 64GB+ machines might consider doing some big-bound P-1 near the first-time wavefront; that would actually help the overall Mersenne-searching project.

Last fiddled with by VBCurtis on 2022-01-11 at 01:07
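The "adding a zero to the already-done B1 bound" rule of thumb is easy to encode; a minimal sketch, assuming "adding a zero" means the new B1 is at least 10× the old one (the function name is mine, not from any GIMPS tool):

```python
def worth_redoing_p1(prior_b1: int, proposed_b1: int) -> bool:
    # "Don't bother re-doing P-1 unless you're adding a zero to the
    # already-done B1 bound" -- i.e. the new B1 is at least 10x the old.
    return proposed_b1 >= 10 * prior_b1

# An exponent already run with B1=1M:
print(worth_redoing_p1(1_000_000, 4_000_000))   # False -- not worth it
print(worth_redoing_p1(1_000_000, 10_000_000))  # True -- worth it
```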
#28
If I May
"Chris Halsall"
Sep 2002
Barbados
25666₈ Posts
My post was more along the lines of "I know this is an ask... But..."
#29
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
23×661 Posts |
#30
"Curtis"
Feb 2005
Riverside, CA
2·7·409 Posts |
Finding factors for 20k is fun, just like finding primes for CRUS is fun. The fact that the finish line is rather over the horizon shouldn't be a deal killer; see Riesel-base-3 as an example in that other project. :) |
#31
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
5288₁₀ Posts
Or it could be a hybrid, as in: focus the deep P-1 first on ranges with over 199 to go.
#32
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
14A8₁₆ Posts
When I analyzed the under-2000 project I could see that, with very few exceptions, the upper limit was about 60M. I'm not sure under-200 will have an upper limit; quite possibly there will be some ranges right up to 999M.

Right now mersenne.ca does not break down ranges over 100M into 0.1M ranges (i.e. the under-2000 ranges). But looking at the 99.9M ranges for over-200, there are 21 ranges to go out of 100, and several have more than 20 factors required... that could be 10 TF levels or more.

That said, if the reason for another project like this is to take advantage of v30.8's P-1 power, we would need to set an upper limit: the point where v30.8 loses its luster (unless you give it a LLLLOTTTT of RAM). As well, as the exponents get larger, TF becomes more efficient and P-1 less efficient; even with v30.8.
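To put rough numbers on "more than 20 factors required ... 10 TF levels or more": a common GIMPS heuristic is that an exponent with no factor below 2^b has about a 1/b chance of a factor between 2^b and 2^(b+1). A sketch of what that implies (the 220-exponent range size is a made-up illustration, not data from mersenne.ca):

```python
def expected_tf_factors(num_unfactored: int, from_bits: int, to_bits: int) -> float:
    # Heuristic: P(factor of M(p) in [2^b, 2^(b+1))) ~= 1/b for an exponent
    # with no known factor below 2^b. Ignores the pool shrinking as factors
    # are found, so it slightly overestimates.
    return sum(num_unfactored / b for b in range(from_bits, to_bits))

# Hypothetical 0.1M range with 220 unfactored exponents, already TF'd to 75
# bits. Each extra bit level finds only ~220/75 ~= 3 factors, so needing
# 20+ more factors really can mean many further TF levels:
print(expected_tf_factors(220, 75, 85))  # ten levels yield only ~28 expected factors
```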
#33
"Curtis"
Feb 2005
Riverside, CA
2×7×409 Posts |
I think anything under 20M can be slain with the big P-1 gun, and George has hinted that we may have similar B2 advancements for P+1 and ECM in the future; that would be just the ticket to finish off the tough ranges I'm mistaken about. Also, LaurV likes it. The taste police have spoken!