2013-12-06, 14:06  #1 
Nov 2010
Ann Arbor, MI
1011110_{2} Posts 
P-1 & LL wavefront slowed down?
I recently decided to shift to P-1 tests, and most of my assignments are for exponents around 68M–69M. However, every now and then I am assigned an exponent in the 62M–65M region, so I decided to check the work distribution map to see where the wavefront is for TF, P-1 and LL.
I noticed there's a long list of exponents starting at 62913187 currently assigned for TF to the user "GPU Factoring" on the computer "ll_work", even though those have already been factored to 2^74 and are therefore being held back from P-1 and LL testing. See links: http://www.mersenne.org/assignments/...et+Assignments http://www.mersenne.org/report_expon...&B1=Get+status Additionally, that same user/computer carries a much larger workload: it currently has another long list of exponents in the 30M and 40M regions (which have already been LL'd) assigned for TF. I also noticed most of the first-time LL assignments are on those same exponents (62M to 64M). Who is that user? They are affecting first-time LL and P-1 assignments by shifting other users' efforts to higher exponents. Is there anything to be done about it? 
2013-12-06, 14:25  #2  
Nov 2003
1D24_{16} Posts 
Quote:
Why on Earth does it matter??? There is no deadline for finding the next Mersenne prime. It will be found in due course. This isn't a race. The computations move forward. As for those who argue constantly over the 'optimal' TF levels and the TF vs. P1 tradeoffs, I say to you: You are all seriously deluded if you think it makes any difference. It really doesn't matter whether you do TF to 71, 72, 73, ....... bits. 

2013-12-06, 14:27  #3 
"GIMFS"
Sep 2002
Oeiras, Portugal
2725_{8} Posts 
@OP,
Have you heard of the GPUto72 subproject? See http://www.mersenneforum.org/forumdisplay.php?f=95 Also check the GPUto72 site. You are most welcome to participate! Last fiddled with by lycorn on 2013-12-06 at 14:27 
2013-12-06, 18:21  #4 
May 2013
East. Always East.
11·157 Posts 
GPU to 72 was created back when 72 bits was the optimal TF'ing level. Primenet at this point in time only hands out work for CPUs. GPU72.com is a resource that offers work for GPUs, which now includes the full range of TF, LL and P-1.
Primenet assigns GPU to 72 a massive amount of work, which it delegates among its users. Results are submitted to Primenet manually by the users themselves, who are given the appropriate credit. The two servers take care of the rest. The computer / user you pointed out is probably the central computer for the whole subproject. Check out position two in top teams overall: http://mersenne.org/report_top_teams/ EDIT: and position five in first LLs: http://mersenne.org/report_top_teams_LL/ Last fiddled with by TheMawn on 2013-12-06 at 18:22 
2013-12-06, 19:16  #5  
Nov 2010
Ann Arbor, MI
2×47 Posts 
Quote:
Quote:
Quote:


2013-12-06, 19:31  #6  
"GIMFS"
Sep 2002
Oeiras, Portugal
1,493 Posts 
Quote:
http://www.mersenneforum.org/showthread.php?t=18975 GPUto72 is succeeding in keeping ahead of the wavefront of new first-time LL tests, which means that, except on some relatively rare occasions, exponents are being handed out for testing already TF'ed to a high bit level and P-1'ed, so the testers may proceed straight to the actual LL test. In the process, several additional candidates are eliminated (due to the higher-than-default level of factoring done by the GPUs). There's a lot to read on the subject in this forum. 
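As a rough illustration of why the extra factoring depth eliminates candidates (my own sketch, not from this thread): the standard GIMPS rule of thumb is that 2^p − 1 has a factor between 2^b and 2^(b+1) with probability roughly 1/b. The 2^72 and 2^74 depths below are just example figures:

```python
def chance_factor_in_next_bit(b: int) -> float:
    """Approximate probability of finding a factor of a Mersenne number
    when extending trial factoring from 2^b to 2^(b+1) (the 1/b heuristic)."""
    return 1.0 / b

def chance_any_factor(b_start: int, b_end: int) -> float:
    """Probability that at least one factor turns up while TF'ing
    from 2^b_start up to 2^b_end."""
    p_none = 1.0
    for b in range(b_start, b_end):
        p_none *= 1.0 - chance_factor_in_next_bit(b)
    return 1.0 - p_none

# Going from a default depth of 2^72 to a 2^74 target: about a 2.7%
# chance of eliminating the exponent without any LL test at all.
print(f"{chance_any_factor(72, 74):.3f}")
```

Small per-exponent odds, but applied across thousands of wavefront exponents this is where the "several additional candidates" come from.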

2013-12-06, 20:14  #7  
Nov 2010
Ann Arbor, MI
2·47 Posts 
Quote:


2013-12-06, 20:19  #8  
Nov 2010
Ann Arbor, MI
2·47 Posts 
Quote:


2013-12-06, 20:26  #9 
If I May
"Chris Halsall"
Sep 2002
Barbados
2^{3}·17·73 Posts 

2013-12-06, 21:01  #10  
Nov 2003
2^{2}×5×373 Posts 
Quote:
of the optimization that is being done. Such an optimization must take into account:
- The different algorithms: LL, P-1, TF
- A very accurate specification of the computational complexity of each algorithm. This means an accurate determination of the implied constants in the big-O estimate.
- A very accurate measurement of how fast each of them runs on each of the various (many different!) machine types.
- An accurate measurement of how many machines of each type are being used.
- An accurate determination of the percentage of time each one spends running the algorithms.
- Accurate estimates of the probability that a given computation succeeds, based upon the input parameters.
To put it plainly: the required data is not available. Nor has an objective function been specified. Note also that it will be a chance-constrained optimization model. Conclusion: the notion of 'optimize' is poorly conceived at best. No one (me included) knows how to specify this optimization problem. I could do it as a research project, but have neither the time nor the inclination. Does anyone else here have the necessary skills? Quote:
the probability of success/failure for P-1 and TF. And minimizing 'time per test' fails to take into account the allocation of machine resources to those tests. Running method A on machine 1 and method B on machine 2 might be better with the methods swapped, or even run on a totally different machine; i.e., you might find that methods A and B work better entirely on machine 3, and that you should run method C on 1 and 2. Does anyone know whether the allocation of the many different machines to the set of tests has been done correctly? You also need to take into account trade-offs that no one seems to consider, e.g. TF and P-1 tests are NOT independent. Quote:
would stop worrying about overclocking. And there is another part of the optimization problem: you might have fewer errors and less total time spent by actually REDUCING clock speeds. In a highly heterogeneous environment, it seems impossible to even approach optimizing the calculations. 
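For context, the back-of-the-envelope rule that most TF-vs-LL arguments start from can be sketched as below. The post above argues precisely that the real inputs (timings, machine mix, error rates) are unknown, so every number here is a made-up placeholder, and the 1/b factor-probability heuristic is the usual GIMPS rule of thumb, not data from this thread:

```python
def worth_another_tf_bit(tf_bit_hours: float,
                         ll_test_hours: float,
                         b: int,
                         ll_tests_saved: float = 2.0) -> bool:
    """Naive rule: extend TF from 2^b to 2^(b+1) iff the time it costs
    is less than the expected LL time it saves. A found factor removes
    both the first-time LL test and its double check, hence the
    default ll_tests_saved = 2."""
    p_factor = 1.0 / b                                  # 1/b heuristic
    expected_saving = p_factor * ll_tests_saved * ll_test_hours
    return tf_bit_hours < expected_saving

# Made-up figures: one more bit costs 3 hours on some machine, an LL
# test costs 200 hours on some (other) machine, current depth 2^72.
print(worth_another_tf_bit(3.0, 200.0, 72))
```

Note that the rule silently compares TF hours on one machine type against LL hours on another as if they were interchangeable, which is exactly the resource-allocation question the post says no one has answered.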

2013-12-06, 21:11  #11  
Nov 2003
1D24_{16} Posts 
Quote:
formulated model, implementing it would be nigh on impossible. Can you imagine a central organizer trying to tell (say) a given user: "You can't run LL. You should run TF to 70 bits"? Can you imagine users' reactions to someone telling them what to run and how to run it? Finally, all you can hope to accomplish is to reduce the EXPECTED time to find the next prime. And you will never know if some other allocation of resources would have done it faster. Any time savings you might achieve are UNMEASURABLE, lost in the noise of the process. 

Similar Threads  
Thread  Thread Starter  Forum  Replies  Last Post 
Call for GPU Workers to help at the "LL Wavefront"  chalsall  GPU Computing  24  2015-07-11 17:48 
Prime95 slowed down after I restarted computer  ixfd64  Software  13  2010-12-18 06:56 