mersenneforum.org  

Go Back   mersenneforum.org > Factoring Projects > Lone Mersenne Hunters

Old 2018-04-05, 15:53   #12
ric
 
 
Jul 2004
Milan, Ita


Quote:
Originally Posted by chalsall View Post
Keep in mind that when you're redoing P-1 work you should expect a lower probability than what Prime95/mprime reports. As a rough guide, I subtract the previous run's probability from what is reported as the expected probability.
Sure thing!

No need to split hairs, since this is a filler job for me, but - using mersenne.ca's P-1 probability estimator - those candidates' former B1/B2 corresponded to a probability level of around 1.5%, and I brought them to around 4% - hence my disappointment.

As a more general point, I've been playing this "filler game" for quite some time now, and I've likewise averaged a 3-4% success rate. For sure, when TF levels were lower, 'twas much more rewarding :)
Old 2018-04-05, 16:10   #13
chalsall
If I May
 
 
"Chris Halsall"
Sep 2002
Barbados


Quote:
Originally Posted by ric View Post
No need to split hairs, since this is a filler job for me, but - using mersenne.ca's P-1 probability estimator - those candidates' former B1/B2 corresponded to a probability level of around 1.5%, and I brought them to around 4% - hence my disappointment.
Ah... I now better understand your statement.

Statistics has no memory. Run an infinite number of tests and you should see about a 2.5% success rate...
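The subtraction heuristic above can be sketched in a few lines; the 1.5% and 4% figures are the ones ric quoted from mersenne.ca's estimator, and treating the two estimates this way assumes the new bounds fully cover the old ones:

```python
# Expected success rate when redoing P-1 with higher bounds: the new
# estimate includes factors the old run would already have found, so
# subtract the old run's probability (figures from the posts above).
prev_prob = 0.015   # ~1.5% with the former B1/B2 (mersenne.ca estimate)
new_prob = 0.040    # ~4% with the raised bounds
incremental = new_prob - prev_prob
print(f"expected success rate on redone candidates: {incremental:.1%}")  # 2.5%
```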
Old 2018-04-15, 04:16   #14
petrw1
1976 Toyota Corona years forever!
 
 
"Wayne"
Nov 2006
Saskatchewan, Canada


Quote:
Originally Posted by petrw1 View Post
As you may have noticed I am on a P-1 binge recently.
I have full time P-1 running on:
2 x 2-core PC's
6 x 4-core PC's

Some have as much as 16GB of RAM; a couple only 4G.
I have noticed over the past years of doing P-1 that more RAM makes a big difference in Phase 2.

Simple observations have shown that running 480 relative primes in batches of 10 takes noticeably longer than the same run in batches of 120, for example.

(I wouldn't be surprised if the following has been noted before and I missed it)...

So that got me to thinking that, especially on the PCs with 4GB or 8GB of RAM, I should complete more total tests per week by running 2 workers of 2 cores each rather than 4 workers of 1 core each.
Stage 1 may be slightly slower, but Stage 2 should be faster by enough to more than make up for it: with only 2 workers competing for RAM, each gets a lot more and can process more relative primes per pass.

Opinions?
So after a month of trying P-1 with 2 workers of 2 cores each on a PC with 4.5GB available to Prime95, I measured that the actual throughput has dropped by about 5%. I'm surprised, but facts don't lie.

But then I got to thinking: when I have 2 workers instead of 4, not only does each get twice the RAM for Stage 2, but (and I know almost nothing about this) I believe that at some point the extra Stage 2 RAM lets the Brent-Suyama extension kick in, and E=3 or E=6 or E=12 increases the odds of finding a factor.

Am I anywhere close on this... and can someone tell me how much of an increase E=3 or E=6 or E=12 gives me? Maybe enough that I can swallow the 5% throughput loss.

Thanks
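The break-even arithmetic behind this question can be framed simply: factors found per week scale as throughput times per-test probability, so the measured 5% throughput loss fixes the bar the Brent-Suyama gain has to clear. A small sketch, assuming a baseline per-test success rate of about 3% (the E-value gain itself is the unknown being asked about):

```python
# Break-even check: the 2-worker config loses ~5% throughput (measured
# in the post) but may raise per-test odds via Brent-Suyama. Since
# factors/week ~ throughput * per-test probability, the 2-worker setup
# wins only if its per-test odds exceed the value computed here.
baseline_prob = 0.03      # assumed per-test success rate with 4 workers
throughput_loss = 0.05    # measured ~5% throughput drop with 2 workers
break_even = baseline_prob / (1 - throughput_loss)
print(f"2-worker per-test odds must exceed {break_even:.2%} to come out ahead")
```

So the extended Stage 2 would need to lift the per-test probability from about 3% to about 3.16% just to offset the throughput loss.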
Old 2018-06-08, 20:52   #15
petrw1
1976 Toyota Corona years forever!
 
 
"Wayne"
Nov 2006
Saskatchewan, Canada


Quote:
Originally Posted by chalsall View Post
Keep in mind that when you're redoing P-1 work you should expect a lower probability than what Prime95/mprime reports. As a rough guide, I subtract the previous run's probability from what is reported as the expected probability.

When I'm redoing poorly P-1'ed work (read: no Stage 2) I tell mprime that the test will save four LL tests, and give it between 10GB and 12GB of RAM to use. Doesn't make sense, I know, but it's my kit and electrons. On average in the 4xM range I get about a 3% success rate.
I have 28 cores doing full time P-1; all are re-do's in the 5xM range where the prior P-1 was Stage 1 only (B1=B2).

Subtracting the expected factor ratio of the prior run from my run comes out close to 3%.

After more than 5,100 such tests, my overall success rate is currently 2.98%.

At the extremes, I once saw factors on consecutive attempts and six times found 2 factors out of 3 attempts (on the low end), versus 169 and 183 attempts between factors (on the high end).

My per-PC success rates range from 2.12% to 3.39% among PCs that have done more than 600 tests.
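Assuming each attempt is independent with the reported p = 2.98%, these extremes are roughly what a geometric model predicts; a sketch:

```python
# How likely are the reported extremes if each P-1 attempt independently
# succeeds with probability p = 2.98% (the overall rate reported above)?
p = 0.0298

back_to_back = p * p          # factors on two consecutive attempts
drought_169 = (1 - p) ** 169  # 169 straight attempts with no factor
expected_gap = 1 / p          # mean attempts per factor

print(f"back-to-back factors: {back_to_back:.3%}")
print(f"169-attempt drought:  {drought_169:.2%}")
print(f"mean gap between factors: {expected_gap:.0f} attempts")
```

With over 5,100 attempts, a ~0.6%-per-window drought of 169 and the occasional back-to-back hit are unsurprising.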
Old 2018-11-08, 01:56   #16
petrw1
1976 Toyota Corona years forever!
 
 
"Wayne"
Nov 2006
Saskatchewan, Canada


I am now "reserving" P-1 work in the following ranges:

48.6
48.7
46.7
48.3
47.2
Old 2019-06-16, 23:47   #17
ixfd64
Bemusing Prompter
 
 
"Danny"
Dec 2002
California


I'm redoing P-1 on exponents in the [20m, 20.05m] range where B2 < 2,500,000.

ixfd64 is offline   Reply With Quote
Old 2019-06-17, 03:29   #18
petrw1
1976 Toyota Corona years forever!
 
 
"Wayne"
Nov 2006
Saskatchewan, Canada


Continuing P-1 where B1=B2 in all 4xM ranges that have more than 2,000 unfactored exponents.
Old 2019-06-24, 16:44   #19
ixfd64
Bemusing Prompter
 
 
"Danny"
Dec 2002
California


I have another PC running P-1 on exponents in the [57m, 57.1m] range.

ixfd64 is offline   Reply With Quote
Old 2019-09-26, 23:37   #20
masser
 
 
Jul 2003
wear a mask


I'm running P-1 with larger bounds on exponents in the 2.6M and 2.8M ranges.

masser is offline   Reply With Quote
Old 2019-10-02, 22:04   #21
ixfd64
Bemusing Prompter
 
 
"Danny"
Dec 2002
California


I'm currently redoing P-1 on exponents from 14.4m to 14.5m with B2 < 2,000,000.
Old 2019-11-13, 15:10   #22
masser
 
 
Jul 2003
wear a mask


Quote:
Originally Posted by masser View Post
I'm running P-1 with larger bounds on exponents in the 2.6M and 2.8M ranges.
I will pause on the 2.8M range and resume on the 2.6M range. I'm losing a few credits here and there because a few people (probably not on the forum) are also taking exponents in these ranges for P-1 with large bounds.

After I complete my first pass over 2.6M, I plan to do a second pass over 2.6M and 2.8M. Does anyone have a suggestion for how much to increase the bounds for a second pass of P-1 factoring? I was thinking I would use the probability calculator at mersenne.ca; currently I'm putting in about 0.5 GHz-days/exponent. For the second pass, I thought I would increase that to 0.75 GHz-days/exponent and use bounds that approximately maximize the probability of finding a factor.
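For a rough feel of how extra effort changes the Stage 1 odds, one can use the crude Dickman approximation rho(u) ≈ u^(-u) for the chance that a random number is B-smooth. This is a toy model only (hypothetical 70-bit factor size, no Stage 2 term, no credit for prior trial factoring); actual bound selection should rely on the mersenne.ca calculator or Prime95's own optimizer:

```python
from math import log

def rho_approx(u):
    """Dickman's rho via the crude first-order approximation u**-u."""
    return u ** -u

def stage1_prob(factor_bits, B1):
    """Toy estimate of the chance Stage 1 alone finds a factor of about
    factor_bits bits, i.e. that the factor's q-1 is B1-smooth."""
    u = factor_bits * log(2) / log(B1)
    return rho_approx(u)

# How the toy Stage 1 odds grow as B1 rises (hypothetical 70-bit factors):
for B1 in (500_000, 1_000_000, 2_000_000):
    print(f"B1={B1:>9,}: ~{stage1_prob(70, B1):.2%}")
```

The diminishing returns visible here are why raising the budget from 0.5 to 0.75 GHz-days buys less than the first 0.5 did; the calculator's job is to find the knee of that curve.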


Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.


Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.
A copy of the license is included in the FAQ.