mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Lone Mersenne Hunters (https://www.mersenneforum.org/forumdisplay.php?f=12)
-   -   Coordination thread for redoing P-1 factoring (https://www.mersenneforum.org/showthread.php?t=23152)

ric 2018-04-05 15:53

[QUOTE=chalsall;484387]Keep in mind that when you're redoing P-1 work you should expect a lower probability than what Prime95/mprime reports. As a rough guide, I subtract the previous run's probability from what is reported as expected.
[/QUOTE]

Sure thing!

No need to split hairs, since this is a filler job for me, but - using mersenne.ca's P-1 prob estimator - those cands had former B1/B2's corresponding to a prob level around 1.5%, and I brought them to around 4% - hence my disappointment.

As a more general point, I've been playing this "filler game" for quite some time now, and I've likewise seen an average success rate of 3-4%. For sure, when TF levels were lower, 'twas much more rewarding :)

chalsall 2018-04-05 16:10

[QUOTE=ric;484392]No need to split hairs, since this is a filler job for me, but - using mersenne.ca's P-1 prob estimator - those cands had former B1/B2's corresponding to a prob level around 1.5%, and I brought them to around 4% - hence my disappointment.[/QUOTE]

Ah... I now better understand your statement.

Statistics has no memory. Run an infinite number of tests and you should see about a 2.5% success rate... :smile:
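
In code, the rule of thumb is just a subtraction. A minimal sketch in Python, using ric's 1.5%/4% figures from above:

[CODE]
# Expected success rate when redoing P-1 with better bounds: subtract the
# probability the previous run's bounds already covered, since any factor
# in that region would (almost always) have been found the first time.
prev_prob = 0.015   # mersenne.ca estimate at the old B1/B2
new_prob = 0.040    # mersenne.ca estimate at the new B1/B2

incremental = new_prob - prev_prob
print(f"expected success rate on the redo: {incremental:.1%}")  # 2.5%
[/CODE]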

petrw1 2018-04-15 04:16

[QUOTE=petrw1;483122]As you may have noticed, I am on a P-1 binge recently.
I have full-time P-1 running on:
2 x 2-core PC's
6 x 4-core PC's

Some have as much as 16GB of RAM; a couple only 4GB.
I have noticed over the past years of doing P-1 that more RAM makes a big difference in Stage 2.

Simple observation has shown that running 480 relative primes in batches of 10 takes noticeably longer than the same run in batches of 120, for example.

(I wouldn't be surprised if the following has been noted before and I missed it)...

So that got me to thinking that, especially for the PCs with 4GB or 8GB of RAM, I should complete more total tests per week by running 2 workers of 2 cores each rather than 4 workers of 1 core each.
Stage 1 may be slightly slower, but Stage 2 should be enough faster to more than make up for it: with only 2 workers fighting for RAM, each gets a lot more and can process more relative primes per pass.

Opinions?[/QUOTE]

So after a month of trying P-1 with 2 workers of 2 cores each on a PC with 4.5GB available to Prime95, I measured that actual throughput dropped by about 5%... I'm surprised, but facts don't lie.
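
For reference, the batching effect I described is easy to model roughly: Stage 2 has to touch all 480 relative primes, and each batch costs one pass over the primes between B1 and B2, so more RAM means bigger batches and fewer passes. A simplified sketch (the real Prime95 pass structure is more involved):

[CODE]
import math

# Simplified model: 480 relative primes split into RAM-limited batches,
# with one pass over the Stage 2 primes per batch.
TOTAL_REL_PRIMES = 480

for batch_size in (10, 120, 480):
    passes = math.ceil(TOTAL_REL_PRIMES / batch_size)
    print(f"batches of {batch_size:3d}: {passes:2d} passes")
# batches of  10: 48 passes
# batches of 120:  4 passes
# batches of 480:  1 pass
[/CODE]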

But then I got to thinking: when I have 2 workers instead of 4, not only does each get twice the RAM for Stage 2, but (and I know almost nothing about this) I believe that with enough extra Stage 2 RAM the Brent-Suyama extension kicks in, and E=3 or E=6 or E=12 increases the odds of finding a factor.
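
The one bit I could check myself: as I understand it, Stage 2 multiplies in terms of the form x^(m^E) - x^(n^E), and such a term can catch a factor for any prime dividing m^E - n^E. With E=1 that is just m - n; with E=6 the algebra throws in much larger divisors for free:

[CODE]
# Why E > 1 might add probability (my reading -- corrections welcome):
#   m^6 - n^6 = (m-n)(m+n)(m^2-mn+n^2)(m^2+mn+n^2)
# so one E=6 term covers divisors far larger than the m-n it was paired
# for, sweeping in some primes well beyond B2.
m, n = 2310, 13        # 2310 is a typical Stage 2 block size D
lhs = m**6 - n**6
for d in (m - n, m + n, m*m - m*n + n*n, m*m + m*n + n*n):
    assert lhs % d == 0
    print(d)           # the last two are about 5.3 million
[/CODE]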

Am I anywhere close on this... and can someone tell me how much of an increase E=3 or E=6 or E=12 gives me? Maybe enough that I can swallow the 5% throughput loss.

Thanks

petrw1 2018-06-08 20:52

[QUOTE=chalsall;484387]Keep in mind that when you're redoing P-1 work you should expect a lower probability than what Prime95/mprime reports. As a rough guide, I subtract the previous run's probability from what is reported as expected.

When I'm redoing poorly P-1'ed work (read: no Stage 2) I tell mprime that the test will save four LL tests, and give it between 10GB and 12GB of RAM to use. Doesn't make sense, I know, but it's my kit and electrons. On average in the 4xM range I get about a 3% success rate.[/QUOTE]

I have 28 cores doing full-time P-1; all are re-dos in the 5xM range where the prior P-1 was Stage 1 only (B1=B2).

Subtracting the prior run's expected factor probability from my run's comes out close to 3%.

After more than 5,100 such tests, my overall success rate is currently 2.98%.

At the extremes, I once found factors on consecutive attempts and six times found 2 factors in 3 attempts (on the low end),
and went 169 and then 183 attempts between factors (on the high end).

My per-PC success rates range from 2.12% to 3.39% among PCs that have done more than 600 tests.
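
Those streaks are roughly what a flat 2.98% rate predicts, treating each attempt as an independent coin flip:

[CODE]
# Sanity-checking the extremes against a constant per-attempt success rate.
p = 0.0298

print(f"2 factors in a row:       {p**2:.2%}")        # ~0.09% per pair
print(f"169+ attempts, no factor: {(1-p)**169:.2%}")  # ~0.60%
print(f"183+ attempts, no factor: {(1-p)**183:.2%}")  # ~0.39%
# With 5,100+ attempts there are thousands of chances for each pattern,
# so seeing all of them at least once is no surprise.
[/CODE]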

petrw1 2018-11-08 01:56

I am now "reserving" P-1 work in the following exponent ranges (in millions):

48.6
48.7
46.7
48.3
47.2

ixfd64 2019-06-16 23:47

I'm redoing P-1 on exponents in the [20m, 20.05m] range where B2 < 2,500,000.

petrw1 2019-06-17 03:29

Continuing P-1 where B1=B2 in all 4xM ranges that have more than 2,000 unfactored exponents.

ixfd64 2019-06-24 16:44

I have another PC running P-1 on exponents in the [57m, 57.1m] range.

masser 2019-09-26 23:37

I'm running P-1 with larger bounds on exponents in the 2.6M and 2.8M ranges.

ixfd64 2019-10-02 22:04

I'm currently redoing P-1 on exponents from 14.4m to 14.5m with B2 < 2,000,000.

masser 2019-11-13 15:10

[QUOTE=masser;526678]I'm running P-1 with larger bounds on exponents in the 2.6M and 2.8M ranges.[/QUOTE]

I will pause on the 2.8M range and resume on the 2.6M range. I'm losing a few credits here and there because a few people (probably not on the forum) are also taking exponents in these ranges for P-1 with large bounds.

After I complete my first pass over 2.6M, I plan to do a second pass over 2.6M and 2.8M. Does anyone have a suggestion for how much to increase the bounds for a second pass of P-1 factoring? I was thinking I would use the probability calculator at mersenne.ca; currently I'm putting in about 0.5 GHz-days/exponent. For the second pass, I thought I would increase that to 0.75 GHz-days/exponent and use bounds that approximately maximize the probability of finding a factor.
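
In case it clarifies the question, here is the selection loop I have in mind, sketched in Python. The functions prob() and cost_ghzd() are hypothetical stand-ins for the mersenne.ca probability calculator, not real APIs:

[CODE]
# Second-pass bound search: among candidate (B1, B2) pairs that fit the
# GHz-day budget, keep the pair with the largest *incremental* probability
# over the bounds already run.  prob() and cost_ghzd() are placeholders.

def best_bounds(old_b1, old_b2, budget_ghzd, prob, cost_ghzd):
    best = None
    for b1_mult in (2, 3, 5, 8):          # candidate B1 growth factors
        b1 = old_b1 * b1_mult
        for ratio in (20, 30, 40):        # candidate B2/B1 ratios
            b2 = b1 * ratio
            if cost_ghzd(b1, b2) > budget_ghzd:
                continue
            gain = prob(b1, b2) - prob(old_b1, old_b2)
            if best is None or gain > best[0]:
                best = (gain, b1, b2)
    return best                           # (gain, B1, B2) or None
[/CODE]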

