#34
"Kieren"
Jul 2011
In My Own Galaxy!
27AE₁₆ Posts

While it was falling behind with 2 of 4 workers allowed to use high memory, it seemed to keep up with 3 of 4 allowed.
#35
"James Heinrich"
May 2004
ex-Northern Ontario
1000010010100₂ Posts

Yes. On one system I have 3 of 4, on the other I have 2 of 3, and both setups allow stage 2 to keep up. Of course, if I have some extended sessions with Photoshop or something else that is on the LowMemWhileRunning list, it takes a little longer to catch up on the backlog.
#36
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3·29·83 Posts

I think the necessary fraction of high-memory workers is equal to S2 time / (S1+S2) time, but of course that is trivially obvious. So, James, you might know better, but somewhere around 60% seems to be the key number (noting that 2/3 ≈ 67% and 3/4 = 75%).
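
Put as a formula (my formalization of Bunslow's statement, using t₁ and t₂ for the stage 1 and stage 2 wall-clock times; the 1.5x example figure is illustrative, not from the thread):

Code:

% f = fraction of workers allowed to use high memory.
% Stage 2 keeps up with incoming stage 1 work when
\[ f \;\ge\; \frac{t_2}{t_1 + t_2} \]
% Example: if stage 2 takes about 1.5x as long as stage 1, then
% f >= 1.5/(1 + 1.5) = 0.6, so both 2 of 3 (~67%) and 3 of 4 (75%) suffice,
% matching the setups reported in the posts above.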
#37
"Kieren"
Jul 2011
In My Own Galaxy!
23656₈ Posts

Quote:

On the other hand, Jerry's 400M run (20779MB) is showing a 32.75 ratio. I can't really draw much of a conclusion from this, except that the ratio goes up substantially with huge exponents and tons of RAM. (Well, duh!) I just went ahead with upping my RAM to 16GB. I'm looking forward to seeing how that affects things. Early results show that 192 relative primes are being done in a single pass. I now have 8GB for day, and 12GB for night. I'll probably tweak that when I see how other high-memory apps (Photoshop) perform.
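
For reference, a day/night split like that corresponds to a single time-qualified Memory line in local.txt. A sketch of what Kieren's allocation could look like -- the 7:30-23:30 day window is Prime95's default, and the exact syntax here is from my recollection of Prime95's undocumented settings, so treat it as an assumption:

Code:

Memory=8000 during 7:30-23:30 else 12000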
#38
"Vincent"
Apr 2010
Over the rainbow
5·11·53 Posts

Code:

[Jan 26 18:09] Setting affinity to run worker on any logical CPU.
[Jan 26 18:09] Optimal P-1 factoring of M55039799 using up to 5700MB of memory.
[Jan 26 18:09] Assuming no factors below 2^71 and 2 primality tests saved if a factor is found.
[Jan 26 18:09] Optimal bounds are B1=580000, B2=13340000
[Jan 26 18:09] Chance of finding a factor is an estimated 4.74%
[Jan 26 18:09] Setting affinity to run helper thread 1 on any logical CPU.
[Jan 26 18:09] Using Core2 type-2 FFT length 2880K, Pass1=640, Pass2=4608, 3 threads
[Jan 26 18:09] Setting affinity to run helper thread 2 on any logical CPU.
[Jan 26 18:09] Using 5697MB of memory. Processing 240 relative primes (147 of 432 already processed).
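
A quick read of the last line of that log (simple arithmetic, assuming the relative primes are processed in fixed-size passes of 240):

Code:

% 432 relative primes total, 147 already processed, 240 per pass:
\[ \left\lceil \frac{432 - 147}{240} \right\rceil = 2 \]
% i.e. two passes remain: one full pass of 240, then a final pass of 45.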
#39
"Kieren"
Jul 2011
In My Own Galaxy!
2×3×1,693 Posts

Thanks for the info, firejuggler!

On the first new assignment started with the new allocation (another HighMemWorkers-exceeded case), the bounds ratio went up to 23. It's just under a 52M exponent. The bounds are B1=545000 and B2=12535000.

Last fiddled with by kladner on 2012-01-26 at 22:16
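
Checking that ratio against the quoted bounds (plain arithmetic on the numbers in the post):

Code:

\[ \frac{B_2}{B_1} \;=\; \frac{12{,}535{,}000}{545{,}000} \;\approx\; 23.0 \]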
#40
"James Heinrich"
May 2004
ex-Northern Ontario
2²×1,061 Posts

Quote:

Not taking into account the efficiencies of more RAM, the most efficient (probability per effort) B2/B1 ratios tend to be around 20x-22x, as noted. However, if you have a specific B1 and/or B2 that you want to target for whatever reason, the optimal ratios may vary.

Quote:

If you have day=8/night=12, the bounds will be calculated assuming you have 12GB available, whereas in fact you may be running stage 2 a large part of the time with 8/3 = 2.67GB per worker. If Prime95 knew that (it maybe should, but doesn't), it would have picked smaller, faster bounds to maintain the balance of efficiency. It would probably be best to put Memory=3500 under each worker section of local.txt -- this way Prime95 knows that it will have 3.5GB available for stage 2, rather than thinking it "might" have 12GB, and it will calculate bounds accordingly. Remember that Prime95's bounds calculation is still based on the assumption that P-1 is a rare thing that happens before an L-L test, not a regular worktype.
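
A sketch of the local.txt layout James describes, for a hypothetical 4-worker machine (the Memory=3500 figure is his suggestion; the worker count is an assumption, and the [Worker #n] section names follow local.txt's usual layout):

Code:

[Worker #1]
Memory=3500

[Worker #2]
Memory=3500

[Worker #3]
Memory=3500

[Worker #4]
Memory=3500

With a per-worker value, Prime95 sizes stage 2 bounds for the memory each worker will actually get, instead of for a global maximum it rarely sees.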
#41
"Kieren"
Jul 2011
In My Own Galaxy!
10158₁₀ Posts

Quote:

Thanks VERY much, James!

EDIT: A question: with the 3500 per worker, would I make the overall allocation equal to that amount times the number of workers allowed? Or does LowMemWhileRunning=photoshop take care of that, so that I should set the overall to 14,000? Thanks again for providing the fruits of your experience, and for your patience.

Last fiddled with by kladner on 2012-01-26 at 22:52
#42
"James Heinrich"
May 2004
ex-Northern Ontario
2²×1,061 Posts

#43
"Kieren"
Jul 2011
In My Own Galaxy!
2·3·1,693 Posts

Thanks!
#44
"Kieren"
Jul 2011
In My Own Galaxy!
27AE₁₆ Posts

Quote:

I also just saw the first E=12 come up on a 52M assignment with these settings.

Last fiddled with by kladner on 2012-01-27 at 20:44