#1
Nov 2010
Ann Arbor, MI
2·47 Posts
I recently decided to shift to P-1 tests, and most of my assignments are for exponents around 68M-69M. However, every now and then I am assigned an exponent in the 62M-65M region, so I decided to look at the work distribution map to see where the wavefront is for TF, P-1, and LL.
I noticed there is a long list of exponents, starting at 62913187, currently assigned for TF to the user "GPU Factoring" on the same computer "ll_work", even though those have already been factored to 2^74 and so are being held back from P-1 and LL testing. See links:
http://www.mersenne.org/assignments/...et+Assignments
http://www.mersenne.org/report_expon...&B1=Get+status

Additionally, that same user/computer carries a much larger workload: it currently has another long list of exponents assigned for TF in the 30M and 40M regions (which have already been LL'd). I also noticed that most of the first-time LL assignments are on those same exponents (62M to 64M).

Who is that user? They are affecting first-time LL and P-1 assignments by shifting other users' efforts to higher exponents. Is there anything to be done about it?
#2
"Bob Silverman"
Nov 2003
North of Boston
2²·1,877 Posts
Quote:
Why on Earth does it matter??? There is no deadline for finding the next Mersenne prime. It will be found in due course. This isn't a race. The computations move forward.

As for those who argue constantly over the 'optimal' TF levels and the TF vs. P-1 tradeoffs, I say to you: you are all seriously deluded if you think it makes any difference. It really doesn't matter whether you do TF to 71, 72, 73, ... bits.
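For what it's worth, the flatness he is pointing at is easy to see in a toy expected-cost model. The sketch below is not GIMPS code; the unit costs are invented placeholders, and the only real input is the standard heuristic that 2^p − 1 has a factor between 2^b and 2^(b+1) with probability roughly 1/b.

```python
# Toy expected-cost model for the TF-depth question. The unit costs are
# invented placeholders, NOT measured GIMPS timings. TF work roughly
# doubles per extra bit, a factor lies in [2^b, 2^(b+1)) with probability
# about 1/b, and a found factor saves two LL tests (first + double check).

LL_COST = 1.0        # one LL test, arbitrary units (assumed)
TF_BIT_COST = 7e-6   # TF from 2^60 to 2^61, same units (assumed)

def expected_cost(stop_bit, start_bit=60):
    """Expected TF cost up to stop_bit, plus the expected cost of the
    two LL tests that run only if no factor has been found."""
    cost, p_unfactored = 0.0, 1.0
    for b in range(start_bit, stop_bit):
        cost += p_unfactored * TF_BIT_COST * 2 ** (b - start_bit)
        p_unfactored *= 1.0 - 1.0 / b
    return cost + p_unfactored * 2 * LL_COST

for stop in range(69, 77):
    print(f"TF to 2^{stop}: expected cost {expected_cost(stop):.3f}")
```

With these made-up constants the minimum lands at 72 bits, but everything from 70 through 74 bits is within about 2% of it, which is exactly the "it hardly matters" effect.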
#3
"GIMFS"
Sep 2002
Oeiras, Portugal
2×5×157 Posts
@OP,
Have you heard of the GPUto72 subproject? See http://www.mersenneforum.org/forumdisplay.php?f=95 Also check the GPUto72 site. You are most welcome to participate!

Last fiddled with by lycorn on 2013-12-06 at 14:27
#4
May 2013
East. Always East.
11·157 Posts
GPU to 72 was created back when 72 bits was the optimal TF'ing level. Primenet at this point in time only hands out work for CPUs. GPU72.com is a resource that offers work for GPUs, which now includes the full range of TF, LL, and P-1.
Primenet assigns GPU to 72 a massive amount of work, which it delegates among its users. Results are submitted to Primenet manually by the users themselves, and the appropriate credit is given. The two servers take care of the rest. The computer/user you pointed out is probably the central computer for the whole subproject.

Check out position two in top teams overall: http://mersenne.org/report_top_teams/

EDIT: and position five in first LLs: http://mersenne.org/report_top_teams_LL/

Last fiddled with by TheMawn on 2013-12-06 at 18:22
#5
Nov 2010
Ann Arbor, MI
2×47 Posts
Quote:
Quote:
Quote:
#6
"GIMFS"
Sep 2002
Oeiras, Portugal
2·5·157 Posts
Quote:
http://www.mersenneforum.org/showthread.php?t=18975

GPUto72 is succeeding in keeping ahead of the wavefront of new first-time LL tests, which means that, except on relatively rare occasions, exponents are handed out for testing already TFed to a high bit level and P-1ed, so the testers may proceed straight to the actual LL test. In the process, several additional candidates are eliminated (due to the higher-than-default level of factoring done by the GPUs). There's a lot to read on the subject in this forum.
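The size of that elimination effect is easy to estimate. The snippet below uses illustrative bit levels (2^71 to 2^74, not necessarily the actual defaults) and the usual heuristic that a factor lies in [2^b, 2^(b+1)) with probability about 1/b:

```python
# Rough share of candidates eliminated by extra TF, e.g. from 2^71 to
# 2^74 (illustrative bit levels), using the ~1/b factor-density heuristic.
p_survive = 1.0
for b in range(71, 74):
    p_survive *= 1.0 - 1.0 / b
print(f"~{1 - p_survive:.1%} of exponents eliminated")  # ~4.1%
```

Roughly one exponent in 24 in that range never needs an LL test at all.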
#7
Nov 2010
Ann Arbor, MI
2·47 Posts
Quote:
#8
Nov 2010
Ann Arbor, MI
2·47 Posts
Quote:
#9
If I May
"Chris Halsall"
Sep 2002
Barbados
2×7²×113 Posts
#10
"Bob Silverman"
Nov 2003
North of Boston
2²×1,877 Posts
Quote:
of the optimization that is being done. Such an optimization must take into account:

- The different algorithms: LL, P-1, TF.
- A very accurate specification of the computational complexity of each algorithm. This means an accurate determination of the implied constants in the big-O estimate.
- A very accurate measurement of how fast each of them runs on each of the various (many different!) machines.
- An accurate measurement of how many machines of each type are being used.
- An accurate determination of the percentage of time each one spends running the algorithms.
- Accurate estimates of the probability that a given computation succeeds, based upon the input parameters.

To put it plainly: the required data is not available. Nor has an objective function been specified. Note also that it will be a chance-constrained optimization model.

Conclusion: the notion of 'optimize' is poorly conceived at best. No one (me included) knows how to specify this optimization problem. I could do it as a research project, but have neither the time nor the inclination. Does anyone else here have the necessary skills?
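To make the missing objective concrete: one hypothetical way to write down the shape of the program he is describing (an illustration only, not a formulation anyone in GIMPS has actually specified) is to let $x_{p,a,m} \in \{0,1\}$ assign algorithm $a \in \{\mathrm{TF}, \mathrm{P\text{-}1}, \mathrm{LL}\}$ on exponent $p$ to machine class $m$:

```latex
\min_{x}\ \mathbb{E}\!\left[\sum_{p}\sum_{a}\sum_{m} c_{a,m}\,x_{p,a,m}\right]
\quad\text{s.t.}\quad
\Pr\!\left[\sum_{p}\sum_{a} t_{a,m}\,x_{p,a,m}\le T_{m}\right]\ge 1-\varepsilon
\quad\forall m
```

Here $c_{a,m}$ and $t_{a,m}$ are the cost and running time of algorithm $a$ on machine class $m$, and $T_m$ is the machine-hours available in class $m$; the probabilistic capacity constraint is what makes the model chance-constrained. The expectation runs over the factoring outcomes, which couple the stages (a TF or P-1 success cancels the LL), and every coefficient in it is precisely the data he says is unavailable.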
Quote:

the probability of success/failure for P-1 and TF. And minimizing 'time per test' fails to take into account the allocation of machine resources to those tests. Running method A on machine 1 and method B on machine 2 might be better with the methods swapped, or even run on a totally different machine; i.e., you might find that methods A and B work better run entirely on machine 3, and that you should run method C on 1 and 2. Does anyone know whether the allocation of the many different machines to the set of tests has been done correctly? You also need to take into account tradeoffs that no one seems to consider, e.g. TF and P-1 tests are NOT independent.
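The swapping point can be shown with a three-machine toy example; the speeds below are invented numbers, not measurements of any real hardware:

```python
# Toy illustration that per-method intuition misallocates machines.
# All numbers are invented; nothing here is measured GIMPS data.
from itertools import permutations

methods = ["TF", "P-1", "LL"]
machines = ["gpu", "cpu_avx", "cpu_old"]

# Hours for each machine to finish each method's batch of work (assumed).
hours = {
    ("TF",  "gpu"): 10, ("TF",  "cpu_avx"): 45, ("TF",  "cpu_old"): 120,
    ("P-1", "gpu"): 30, ("P-1", "cpu_avx"): 35, ("P-1", "cpu_old"): 90,
    ("LL",  "gpu"): 60, ("LL",  "cpu_avx"): 40, ("LL",  "cpu_old"): 200,
}

def makespan(assignment):
    """Batches run in parallel, so total time is the slowest one."""
    return max(hours[(a, m)] for a, m in zip(methods, assignment))

# Greedy: give each method, in order, the fastest machine still free.
free, greedy = set(machines), []
for a in methods:
    m = min(free, key=lambda mc: hours[(a, mc)])
    greedy.append(m)
    free.remove(m)

best = min(permutations(machines), key=makespan)
print("greedy :", list(zip(methods, greedy)), "->", makespan(greedy), "h")
print("optimal:", list(zip(methods, best)), "->", makespan(best), "h")
```

Giving each test its individually fastest machine finishes in 200 hours here; the joint optimum, which parks P-1 on the slow box, finishes in 90.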
Quote:

would stop worrying about overclocking. And there is another part of the optimization problem: you might have fewer errors and less total time spent by actually REDUCING clock speeds. In a highly heterogeneous environment, it seems impossible to even approach optimizing the calculations.
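The clock-speed/error tradeoff is also easy to quantify. A bad LL residue wastes the entire test, so useful throughput is raw speed times the fraction of results that are good; the rates below are invented for illustration:

```python
# Useful LL throughput = raw speed x fraction of results that are good.
# A bad residue wastes the whole test. All numbers are invented.
configs = {
    "underclocked": (0.95, 0.002),  # (relative speed, per-test error rate)
    "stock":        (1.00, 0.010),
    "overclocked":  (1.10, 0.150),
}
for name, (speed, err) in configs.items():
    print(f"{name:12} -> useful throughput {speed * (1 - err):.3f}")
```

With these made-up rates the 10% overclock is a net loss: roughly one result in seven is garbage, so backing the clocks down actually completes more tests per day.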
#11
"Bob Silverman"
Nov 2003
North of Boston
1D54₁₆ Posts
Quote:
formulated model, implementing it would be nigh to impossible. Can you imagine a central organizer trying to tell (say) a given user: "You can't run LL. You should run TF to 70 bits"? Can you imagine users' reactions to someone telling them what to run and how to run it?

Finally, all you can hope to accomplish is to reduce the EXPECTED time to find the next prime, and you will never know whether some other allocation of resources would have done it faster. Any time savings you might achieve is UNMEASURABLE, lost in the noise of the process; you will not be able to observe it.