mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Data (https://www.mersenneforum.org/forumdisplay.php?f=21)
-   -   (Preying for) World Record P-1! (https://www.mersenneforum.org/showthread.php?t=27406)

lisanderke 2021-12-15 12:01

(Preying for) World Record P-1!
 
I'd like to organize a thread to find world record factors with P-1 on small exponents. I'll refer to this project as WR P-1 (at the risk of it sounding too pretentious, perhaps :lol:). That would make this a sub-sub-project of GIMPS! It's similar to petrw1's <2k (sub twok, <twok...) project, although that name is much catchier than WR P-1 in my opinion :razz:

All of these small exponents will need to be P-1'ed to very high bounds. For now, it's best to run only stage 1 on these exponents (see the quoted post below). However, this is where I'm stuck: the maths behind P-1 escapes me, let alone how to convert the GMP-ECM work already done into equivalent P-1 bounds.

[QUOTE=Prime95;595226]There will not be lots of easy to find factors. GMP-ECM has been run on these expos - likely to pretty big bounds.

That's not to say it wouldn't be a good project to redo P-1 on all small exponents.

Suggestion: Remember that prime95 is not multi-threaded for small exponents. So, set up a worktodo.txt with work for every core. Just run stage 1, go deep. You're accumulating stage 1 results for a stage 2 run at a later date.

There are more advances coming for stage 2 P-1 on small expos (better multi-threading). When that version is ready, switch prime95 to one worker using all cores. Feed it all those stage 1 results you've accumulated.

BTW, a catchy project name (like petrw1's 20M project) and coordinating thread might be a good idea![/QUOTE]
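
To make George's suggestion concrete, a stage-1-only worktodo.txt could look something like this (a sketch: the exponents and B1 are purely illustrative, the [Worker #n] sections give each core its own queue, and setting B2 equal to B1 is what stops prime95 after stage 1):

[CODE][Worker #1]
Pminus1=1,2,5003,-1,2000000000000,2000000000000

[Worker #2]
Pminus1=1,2,5009,-1,2000000000000,2000000000000[/CODE]

The save files from these runs are what would later be fed to the improved stage 2.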

With all of that out of the way, I need help devising proper bounds for all of these ranges and exponents... As I've said before, the math escapes me, but hopefully I can learn from this project!


What I can do for now, however, is draft some way of organizing this effort. Here goes nothing: I suggest we limit our effort to the 1M range for the foreseeable future, and focus first on the 100k range. (Once that range has been completed, we take the next 100k range, and so on...)
Within this 100k range, I'd like it to be possible to 'reserve'/claim certain ranges, with the smallest range being 1k.


[B]I'd like to reserve the 5k range (all exponents from 5000 to 6000).[/B]
I'll attempt to take this range to B1=2 trillion (2,000,000,000,000), as suggested by Zhangrc.




For a bit of backstory: I realized that lots of very small exponents that have already been factored, but whose cofactor is not PRP (starting from exponent [M]4013[/M]), have never had any P-1 done to date. This prompted me to look for more such exponents. Upon my request, James Heinrich added a checkbox to [URL]https://www.mersenne.ca/morefactors.php[/URL] to find exponents that have had no P-1 done before. It was later pointed out that these exponents have had extensive GMP-ECM done, so any P-1 would have to go to very high bounds to have a crack at finding more factors. Credit to Zhangrc for coming up with the name for this project!

[QUOTE=Zhangrc;595237]Maybe … The title should be "Fighting for World Record P-1." That is really going to find world record size factors.
Maybe something like this:
[URL="https://www.mersenne.ca/prob.php?exponent=9007&factorbits=95&b1=2000000000000&b2=1.0E%2B17"]https://www.mersenne.ca/prob.php?exponent=9007&factorbits=95&b1=2000000000000&b2=1.0E%2B17[/URL][/QUOTE]

lisanderke 2021-12-15 12:03

Post reserved for factors found within this project's active ranges.

lisanderke 2021-12-15 12:08

Taking George Woltman's advice, let's hold off on starting stage 2 on these exponents. For now, I'd like to open up discussion on what B1 would be sufficient to take these ranges to. Zhangrc suggested the following bounds:

[url]https://www.mersenne.ca/prob.php?exponent=9007&factorbits=95&b1=2000000000000&b2=1.0E%2B17[/url]

Zhangrc 2021-12-15 14:13

[QUOTE=lisanderke;595273]Taking George Woltman's advice, let's hold off on starting stage 2 on these exponents. For now, I'd like to open up discussion on what B1 would be sufficient to take these ranges to. Zhangrc suggested the following bounds:

[url]https://www.mersenne.ca/prob.php?exponent=9007&factorbits=95&b1=2000000000000&b2=1.0E%2B17[/url][/QUOTE]

For 5000 to 6000, you can use even larger bounds, say 3e12 to 4e12 (as long as you can bear the run time).
Then you assume no factor below 2^95 and let Prime95 automatically decide the B2 bound for you.
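
For reference, queued as a worktodo line this might look like the following (a sketch: the bounds are the ones from the calculator link above, and the trailing 95 is, if I recall the Pminus1 format correctly, the optional "how far trial factored" field; double-check against whatsnew.txt before relying on it):

[CODE]Pminus1=1,2,9007,-1,2000000000000,100000000000000000,95[/CODE]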

Prime95 2021-12-15 14:32

I don't know if there is a "correct" B1 value. It may simply depend on how much patience you have.

FWIW, my aging quad core can compute 4 exponents around 80K to B1=300M in just under 2 hours. Extrapolating, I should be able to take four 5K exponents to B1=1T in about 17 days. That's a pretty significant effort considering the number of exponents to work on.
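
That extrapolation is roughly consistent with simple scaling arithmetic (a sketch; the ~1.44*B1 squaring count and the p*log(p) per-squaring cost are standard estimates, not prime95 internals):

[CODE]import math

# Rough stage 1 runtime scaling (assumptions, not measurements):
# stage 1 performs ~1.44*B1 squarings mod 2^p - 1, and each squaring
# costs roughly O(p log p) via the FFT, so time ~ B1 * p * log(p).

def scale_stage1_hours(b1, p, ref_hours, ref_b1, ref_p):
    """Scale a measured stage 1 wall time to a new B1 and exponent."""
    work = lambda b1_, p_: b1_ * p_ * math.log(p_)
    return ref_hours * work(b1, p) / work(ref_b1, ref_p)

# Measured above: four ~80K exponents to B1=300M in ~2 hours (in parallel).
hours = scale_stage1_hours(1e12, 5000, ref_hours=2.0, ref_b1=300e6, ref_p=80000)
print(f"four 5K exponents to B1=1e12: ~{hours / 24:.0f} days")  # ~13 days[/CODE]

which lands in the same ballpark as the 17 days quoted above (FFT lengths don't shrink perfectly smoothly with the exponent, so some slack is expected).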

I'll reserve the 79000 to 82000 area for P-1. I've been doing B1=300M and annoyingly this range has not given me a new factor yet. I think I'll try B1=3 to 5 billion.

@James: Can the P-1 probability calculator be changed to allow more than 95 bits of TF? Or even better, estimate the proper TF value given the amount of ECM that's been done?

If anyone has old P-1 files to donate that can be used as a starting point please let us know. We could set up a repository for future P-1 efforts.

axn 2021-12-15 14:39

Try to keep the following in mind while doing large B1. Although with B1 as large as 2e12, I guess it would barely make any difference.

[QUOTE=undoc.txt]If doing P-1 with a very large B1, the program uses about 47MB of memory to
pre-calculate the product of all small primes below 250 million. The primes above
250 million are processed using a somewhat slower method. You can elect to use
more memory to move the 250 million threshold higher. In prime.txt enter:
MaxStage0Prime=n (n is in millions, default is 250)[/QUOTE]
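
For instance, raising the threshold to primes below 1 billion (roughly 4x the memory, if the growth is near-linear as discussed below) would be a one-line addition to prime.txt; since n is in millions, that's (value illustrative):

[CODE]MaxStage0Prime=1000[/CODE]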

lisanderke 2021-12-15 16:15

First off: I'm not able to keep editing my posts (mainly the first and second one in this thread), but given the nature of this thread, I'd like to request the ability to do so. It would be much easier to keep track of reservations if it's all done in one post. Should I contact someone specifically to request this permission, if it's a possibility? If not, I could set up a Google Sheet to keep track of reservations.



I've spread out my worktodo with exponents on each core (6 cores) at B1=2T, and it seems this would take 28 days per exponent (stage 1 only). In the 5k range there are a little more than 100 exponents with factors found, and only three with no factor found... B1=200B takes it down to three days and change, B1=20B to seventeen hours, and B1=2B to two hours.


I'll start out by taking all exponents in the 5k range to B1=2B, which for almost half of them will be the first P-1 done (though this is not an accurate indicator of further factoring success ;)). Until 30.8 (or perhaps a later version) comes along with multi-threaded stage 2 P-1 on small exponents, I'll hold off on doing stage 2 on these exponents. Instead, further increasing B1 with every round seems like the best option for now. I might be able to muster enough patience to bring B1 up with one day's worth of work per exponent.


@axn, Do you know how the memory usage increases with a higher prime threshold? I assume it doesn't scale linearly in the sense that 500 million would become 100 MB?


@Prime95: Hopefully 3B to 5B gives you some factors! As mentioned above, it might be hard to keep track of reservations for now, but I've noted you down for 79k, 80k, and 81k (if I understood correctly).

axn 2021-12-15 16:59

[QUOTE=lisanderke;595288]@axn, Do you know how the memory usage increases with a higher prime threshold? I assume it doesn't scale linearly in the sense that 500 million would become 100 MB?[/QUOTE]
It should be very nearly linear growth. However, if you're planning on doing B1>1e12, it probably doesn't matter; >99% of stage 1 will be using the slower method anyway. Also note that calculating that product itself takes some time, so even if you could go very high (because you have the RAM), you shouldn't push that parameter too far. Practically, you might set it if you're attempting, say, B1<1e10.
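
Back-of-envelope, the near-linear growth follows from the prime number theorem: the product of primes up to n has roughly n/ln(2) ~ 1.44*n bits, i.e. ~45 MB at n = 250e6, consistent with the 47 MB in undoc.txt. Scaling that figure gives a quick estimate (a sketch):

[CODE]# Estimate memory for the stage 0 prime product, assuming the
# documented ~47 MB at a 250M threshold scales linearly.

def stage0_mem_mb(threshold_millions):
    return 47.0 * threshold_millions / 250.0

print(stage0_mem_mb(500))    # ~94 MB   (MaxStage0Prime=500)
print(stage0_mem_mb(10000))  # ~1880 MB (MaxStage0Prime=10000, primes to 1e10)[/CODE]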

EDIT: Note that if you take a number to some B1 and later increase it, the extension will use the slower method. Try to go directly to the largest B1 that you can comfortably do with the faster method (say, 1e10).

bur 2021-12-16 12:32

I really like that idea; to me, finding new factors of small Mersenne numbers is more interesting than finding factors of very large Mersenne numbers.

Why is it considered better to just do stage 1 at first? Is there a deeper mathematical/efficiency reason for it, or is it just because new features for stage 2 are coming and we want to make use of them?

How exactly are stage 1 results fed to mprime? Do I just keep the files from a B1=B2 run and mprime will automatically notice when I do that exponent again with B2 > B1?

axn 2021-12-16 12:58

[QUOTE=bur;595365]Why is it considered better to just do stage 1 at first? Is there a deeper mathematical/efficiency reason for it, or is it just because new features for stage 2 are coming and we want to make use of them?[/quote]
The latter. George is still fine-tuning the software for extreme B2. There is one more reason: stage 2 runs well when you give it more RAM, so the software will only run one stage 2 at a time, but that stage 2 can be multi-threaded. Stage 1 on small exponents, however, cannot be multi-threaded. So to make optimal use of all cores, you run "n" stage 1 jobs in parallel, and then do the stage 2 runs one by one using all cores.

[QUOTE=bur;595365]How exactly are stage 1 results fed to mprime? Do I just keep the files from a B1=B2 run and mprime will automatically notice when I do that exponent again with B2 > B1?[/QUOTE]

Yep.
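
Concretely (a sketch with illustrative exponent and bounds): the stage-1-only line sets B2 equal to B1,

[CODE]Pminus1=1,2,5003,-1,2000000000000,2000000000000[/CODE]

and later, with the save file still in the working directory, you queue the same exponent with a larger B2:

[CODE]Pminus1=1,2,5003,-1,2000000000000,100000000000000000[/CODE]

mprime should then pick up the stage 1 residue from the save file instead of redoing stage 1.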

bur 2021-12-16 13:50

[QUOTE]So to make optimal use of all cores, you run "n" stage 1 in parallel, and then one by one do the stage 2 using all cores.[/QUOTE]that's good to keep in mind, thanks!

[QUOTE]Instead, further increasing B1 with every round seems like the best option for now.[/QUOTE]I'm not sure I got that right, but is the plan to run consecutive P-1 with increasing bounds on the same exponent? Isn't that a waste of the previous runs?

