2021-07-13, 01:19  #1 
Mar 2004
2^{2}×3^{3}×5 Posts 
Large range of exponents with small P1 bounds
I've been working on a Mersenne number in the 100M digit range which has had P1 done with very small bounds, specifically B1=100000, B2=1000000. I got curious and found that there are actually a lot of exponents in this range that have had P1 done to these small bounds.
https://www.mersenne.org/report_exponent/?exp_lo=332370583&exp_hi=332375333&full=1 In these kinds of cases, should P95 be programmed to redo P1 when large amounts of RAM are reserved? 
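As a sanity check on the exponent range: the decimal length of a Mersenne number 2^p - 1 is floor(p * log10(2)) + 1, so exponents near 332M do sit in the 100M digit range. A quick sketch in plain Python (nothing project-specific):

```python
import math

# Number of decimal digits of 2**p - 1; for p > 1 this is
# floor(p * log10(2)) + 1, since 2**p - 1 and 2**p have the
# same digit count (2**p is never a power of 10).
p = 332370583  # low end of the linked exponent range
digits = math.floor(p * math.log10(2)) + 1
print(digits)  # roughly 100 million digits
```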
2021-07-13, 03:20  #2 
P90 years forever!
Aug 2002
Yeehaw, FL
2^{2}·7·269 Posts 

2021-07-13, 03:28  #3 
Mar 2004
2^{2}·3^{3}·5 Posts 

2021-07-13, 05:48  #4 
P90 years forever!
Aug 2002
Yeehaw, FL
2^{2}·7·269 Posts 
A little server work is required. P1 bounds of 200K or more are now required for prime95 to skip the P1.

2021-07-19, 08:12  #5 
"David Kirkby"
Jan 2021
Althorne, Essex, UK
2^{2}·3·5·7 Posts 
I don't know if it's considered impolite to approach a user and ask if they want to do a specific task. But user Tha has done 24219 GHz-days this year, with 89% of that on P1 factoring, so it might be a reasonable assumption that P1 factoring is their preferred work type.
Related to this, is there a law of diminishing returns in allocating a lot of RAM to P1 factoring? I'm using 4 workers for maximum throughput, and can give P1 factoring 360 GB of RAM. This leaves a few possibilities.
I'm currently finding that exponents around 105 million are usually trial factored to 2^{76}, and P1 factoring will use a maximum of about 300 GB of RAM. But some exponents, even around 105 million, have been trial factored beyond 2^{76} by people with GPUs, so are likely to need P1 done to larger bounds, and so even one such P1 task could probably use all my RAM. What would seem the best RAM allocation strategy in my circumstances? Last fiddled with by drkirkby on 2021-07-19 at 08:21 
2021-07-19, 08:20  #6  
Jun 2003
5,081 Posts 
It would be good if you can prevent more than one worker from entering stage 2 simultaneously, but if it is too much hassle, dividing the memory evenly between the workers will work fine. You're probably in the top 0.1 percentile in terms of RAM allocation. A typical dedicated P1'er might give 8-16 GB of RAM. Compared to that, even 90 GB is ginormous. Last fiddled with by axn on 2021-07-19 at 08:21 

2021-07-19, 12:30  #7  
"David Kirkby"
Jan 2021
Althorne, Essex, UK
2^{2}·3·5·7 Posts 
I think I will be able to prevent more than one worker from being in stage 2 of P1 factoring at the same time, but I'm not 100% sure yet. I managed it okay when running two workers, but it might be a bit more tricky with 4 workers. I think it will be possible though. Currently at least, I'm only finding I need to do P1 factoring on about 10% of the category 0 or 1 exponents, and given that the factoring takes only about 5% of the time of the main PRP test, the chances of two randomly executing tests being in the 2nd stage of P1 at the same time are slim (around 0.5%), and with a bit of care that probability can be reduced further. (On slower machines, where I get category 4 exponents, the computer has to do P1 factoring almost every time, but only one exponent gets tested at a time, so it's not an issue.) Last fiddled with by drkirkby on 2021-07-19 at 13:29 
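The figures above can be sanity-checked with a little arithmetic. Taking the post's assumptions (P1 needed on ~10% of exponents, P1 taking ~5% of a test's runtime), each worker spends about 0.5% of its time on P1, and the chance of at least two of four independent workers colliding in P1 at the same instant is smaller still. A rough sketch under those assumptions:

```python
# Assumptions from the post: ~10% of exponents need P-1, and the
# P-1 takes ~5% of the time of the main PRP test.
p = 0.10 * 0.05   # fraction of time a single worker spends on P-1 (0.5%)
workers = 4

# Probability that at most one worker is doing P-1 at a random instant,
# treating the four workers as independent.
at_most_one = (1 - p) ** workers + workers * p * (1 - p) ** (workers - 1)
collision = 1 - at_most_one  # two or more workers in P-1 together

print(f"per-worker P-1 fraction: {p:.3%}")
print(f"chance of >=2 workers overlapping: {collision:.4%}")
```

Under these assumptions the pairwise overlap comes out around a hundredth of a percent, so "slim" is, if anything, an overestimate.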

2021-07-19, 12:50  #8 
Mar 2004
2^{2}×3^{3}×5 Posts 
Isn't P95 programmed to give each worker half the RAM in cases where they are both running stage 2? Couldn't you just let it run and let it adjust for you?

2021-07-19, 13:10  #9 
Jun 2003
5,081 Posts 
It is. However, there is a catch. When one worker starts stage 2, it will grab all available memory. When another one enters stage 2, the first worker will stop and restart with reduced memory so that the second worker can proceed. That stop-and-restart alone will wipe out any potential gains you get from the increased memory. You're better off explicitly allocating half the memory to each worker in the first place.
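One way to do that explicit split is via per-worker memory caps in prime95's local.txt. A sketch for the 4-worker / 360 GB setup discussed above; note this is from memory, values are in MB, the 90000 figures are illustrative, and the exact keywords and section names should be verified against the undoc.txt that ships with prime95:

```
Memory=360000

[Worker #1]
Memory=90000

[Worker #2]
Memory=90000

[Worker #3]
Memory=90000

[Worker #4]
Memory=90000
```

With a fixed per-worker cap, a worker entering stage 2 never has to stop and restart because another worker entered stage 2 after it.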

2021-07-19, 15:04  #10  
If I May
"Chris Halsall"
Sep 2002
Barbados
9761_{10} Posts 
George, does this explain why there are several candidates listed between 94M and 103M that need a P1, but don't appear to be being assigned? I've been doing some cleanup over the last year or so of candidates that had an FC run without a P1 job done. But I can't figure out the criteria for these candidates. Any hints as to how I can determine what Primenet is using for that column? Thanks. 

2021-07-19, 16:07  #11 
Bemusing Prompter
"Danny"
Dec 2002
California
5×479 Posts 
