mersenneforum.org > Data P-1 / P+1 / ECM strategy for PRP-CF

2021-05-11, 17:38   #100
Prime95
P90 years forever!

Aug 2002
Yeehaw, FL

7533₁₀ Posts

Quote:
 Originally Posted by ATH How does that work?
Starting with build 4, the exponent of a Mersenne number or Generalized Fermat number is included in B1.

2021-05-15, 15:43   #101
masser

Jul 2003

2²·419 Posts

Quote:
 Originally Posted by Prime95 You'd have to run some tests. My gut instinct says that nth_run=3 would not be profitable. That is, do two runs at 1.5G effort. I have no data to back up this hunch.
I ran a lot of P+1 tests in the 14.03M range; 3 factors were found. There are still about 1850 exponents (with no known factor) in the 14.0M range that have not had any P+1 tests yet. I'm trying to decide on a productive P+1 strategy for those remaining exponents. All exponents will have had one P-1 test with large bounds and a tiny amount of ECM completed when I begin the P+1 testing.

Here's a thought experiment: I picked a somewhat arbitrary time of 15 weeks to spend on P+1 tests with a CPU farm. That 15 weeks translates to 3.5 hours per remaining exponent. In 3.5 hours, a representative CPU can do 1 test with B1 = 2400K or 2 tests with B1 = 1270K or 3 tests with B1 = 900K.

The single large test has probability 1.14% of finding a factor, so the expected number of factors over the 1850 remaining exponents is 21.1. The two medium tests (with seeds nth_run=1,2) have probabilities 0.822% and 0.387% (sum 1.21%), for an expected 22.4 factors. The three small tests have probabilities 0.68%, 0.317% and 0.0713% (sum 1.07%), for an expected 19.8 factors.
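The expected counts above are just the summed per-run probabilities multiplied by the 1850 remaining exponents; a quick sketch to check the arithmetic (the strategy labels are mine, the probabilities are from the post):

```python
# Expected factors over the ~1850 remaining 14.0M-range exponents,
# for each of the three P+1 strategies described above.
remaining = 1850

strategies = {
    "1 run,  B1=2400K": [0.0114],
    "2 runs, B1=1270K": [0.00822, 0.00387],
    "3 runs, B1=900K":  [0.0068, 0.00317, 0.000713],
}

for name, probs in strategies.items():
    expected = remaining * sum(probs)
    print(f"{name}: sum p = {sum(probs):.4%}, expected factors = {expected:.1f}")
# -> 21.1, 22.4 and 19.8 expected factors respectively
```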

If I want to optimize factors found per unit of time spent, I will choose two runs with seeds nth_run = 1,2. There's not a lot of difference in the expected numbers above, but when that logic is extended to a larger range, the benefit becomes clear. This is probably the correct strategy for many generic factoring efforts. Good general advice might be: run one P-1 curve, then two P+1 curves, before starting ECM efforts on an exponent.

However, that approach is not directly compatible with the goal of getting below 2000 unfactored exponents in the 14.0M range. For that goal, it seems wiser to spend the 15 weeks running one large P+1 test on each exponent using nth_run = 1; I would then expect to find about 20 factors. Depending on how close we are to the goal of 2000, it may or may not make sense to run another 15 weeks to find approximately 10 more factors with the second seed.

Last fiddled with by masser on 2021-05-15 at 15:46

2021-05-25, 21:28   #102
keisentraut

Jul 2020

24₁₀ Posts

Quote:
 Originally Posted by Prime95 Continuing on to 314K doing P-1 using the keisentraut Python script. This has hardly been the factoring bonanza I was hoping for -- 165 consecutive failures. I'm beginning to wonder if this area has already had deep P-1 runs that were not reported to the server (or were done before the server accepted P-1 results on factored exponents).
I have (almost) finished 503K-506K now. The Mersenne numbers with known factors in this range had already been P-1 factored to B1=1.5M by alpertron. I pushed them to B1=10M and also did a P+1 test with B1=5M. The ones without known factors had already been P-1 factored up to B1=25M, so I only did a single P+1 run to half of that bound, i.e. B1=12.5M.
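That bound selection can be summarized as a small helper (a sketch only; the function name and structure are mine, while the 10M/5M figures and the half-of-25M rule come from the post):

```python
def suggest_pplus1_b1(prior_pminus1_b1, has_known_factor):
    """Pick a P+1 B1 given how far P-1 has already been run.

    Mirrors the 503K-506K strategy described above: exponents with a
    known factor had P-1 pushed to 10M and got P+1 at half that (5M);
    exponents already P-1 tested to B1=25M get a single P+1 run at
    half the existing P-1 bound.
    """
    if has_known_factor:
        target_pminus1_b1 = 10_000_000  # push P-1 to 10M first...
        return target_pminus1_b1 // 2   # ...then P+1 at 5M
    return prior_pminus1_b1 // 2        # P+1 to half the prior P-1 bound

print(suggest_pplus1_b1(25_000_000, has_known_factor=False))  # -> 12500000
print(suggest_pplus1_b1(1_500_000, has_known_factor=True))    # -> 5000000
```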

In total, I did 294 P+1 tests and 231 P-1 tests. Of these, four P-1 tests were successful (506047, 505313, 504377, 503453), as was one P+1 test (505877).

2021-05-26, 00:21   #103
Prime95
P90 years forever!

Aug 2002
Yeehaw, FL

3⁵·31 Posts

Quote:
 Originally Posted by keisentraut In total, I did 294 P+1 tests and 231 P-1 tests. Out of them, four P-1 tests were successful (506047, 505313, 504377, 503453) and one P+1 test (505877).
I've almost completed 313K to 319K: 237 P-1 tests, 222 P+1 tests, 4 new factors (3 from P-1, 1 from P+1).

2021-05-29, 14:55   #104
LaurV
Romulan Interpreter

Jun 2011
Thailand

7²·197 Posts

Question: if I want to waste a couple of cycles running P+1 in the 1.7M to 2M range, what conservative/reasonable/aggressive B1 and B2 limits should I use? (And does it make any sense, or has too much ECM already been done there?)

Last fiddled with by LaurV on 2021-05-29 at 15:40
2021-05-29, 15:17   #105
Uncwilly
6809 > 6502

"""""""""""""""""""
Aug 2003

265A₁₆ Posts

Quote:
 Originally Posted by masser Sure thing. Let me know if you want different bounds, a different range or a bigger file. Here you go:
I finished the file. Can you give me another one? The size works for me.
My machine is working on a backstop assignment, so no cycles are being wasted.

2021-05-29, 15:57   #106
masser

Jul 2003

1676₁₀ Posts

Quote:
 Originally Posted by LaurV Question: if I want to waste a couple of cycles to run P+1 in 1.7M to 2M, what conservative/reasonable/aggressive B1 and B2 limits should I use? (and does it make any sense?)
Here's how I answer that question. First, look at the P-1 B1 max (100M) and min (1.7M) values used over that range. Throwing out the extreme values, the max P-1 B1 actually used is around 40M and the min is around 2M. Next, look at the amount of ECM that has been done: most of those exponents are at the t30 level, so the corresponding ECM B1 is 250K.

I seem to recall very old (Yahoo Groups days) discussions of a rule of thumb: as you progress in factoring effort, the P-1 B1 should be 10x the ECM B1 for the current level, with P+1 somewhere in between. So: find the current level and its ECM B1, run P-1 at B1 = 10xB1_ecm, then (perhaps multiple) P+1 runs at (2-5)xB1_ecm, then complete the ECM curves, then increment the level and iterate. This seems to line up with observations on this forum, but the local strategy is still a work in progress.

So, given all that, I think a "reasonable" B1 for P+1 in the 1.7M to 2M range would be B1=1M. "Aggressive" might be B1=5M. "Low probability, but fast" might be B1<500K.
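The rule of thumb above can be sketched as a small helper (the 10x and 2-5x multipliers come from the recalled rule; the function name and defaults are illustrative):

```python
# Rule-of-thumb P-1 / P+1 bounds derived from the current ECM level's B1.
# P-1 at 10x the ECM B1; P+1 somewhere in the 2-5x range.
def rule_of_thumb_bounds(b1_ecm, pplus1_multiplier=3):
    assert 2 <= pplus1_multiplier <= 5, "P+1 multiplier should be 2-5x"
    return {
        "pminus1_b1": 10 * b1_ecm,
        "pplus1_b1": pplus1_multiplier * b1_ecm,
    }

# t30-level exponents (ECM B1 = 250K), as in the 1.7M-2M range discussed:
print(rule_of_thumb_bounds(250_000))
# P-1 at 2.5M and P+1 around 750K, roughly between the "low probability,
# but fast" (<500K) and "reasonable" (1M) suggestions above.
```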

Also, you can see here what has been successful in that range.

For B2, just let prime95/mprime decide, using the known TF level (maybe artificially increased to reflect completed ECM).

Does it make sense? Certainly: if you run the first P+1 curve on an exponent with decent bounds, there's a reasonable chance you'll find a factor that P-1 couldn't find. Second, you'll be doing the various factoring efforts a favor by settling the question of a hidden, easily found P+1 factor that ECM has missed and that P-1 might never find. Very satisfying for the "completionists" working on Mersenne factoring.

Last fiddled with by masser on 2021-05-29 at 16:03

2021-05-29, 16:22   #107
masser

Jul 2003

1676₁₀ Posts

Quote:
 Originally Posted by Uncwilly I finished the file. Can you give me another one? The size works for me. My machine is working on a backstop assignment, so no cycles are being wasted.
Here you go:
Attached Files
 wtd_unc3.txt (7.5 KB, 22 views)

2021-05-30, 03:44   #108
LaurV
Romulan Interpreter

Jun 2011
Thailand

7²·197 Posts

Quote:
 Originally Posted by masser Here's how I answer that question.
Thanks a lot, mate. That settles it. I will give it a try; let's first see how long the assignments take for each flavor.
