#1
Sep 2018
6910 Posts
I notice slow machines get ECM work at first before getting a double check, and if I transfer that ECM work to a fast machine, the fast machine chews right through it.

Is there a benefit to the community/GIMPS in having a few fast machines churning out ECM work, or should I leave those on LL tests?
#2
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
47×107 Posts
Quote:
I think the conventional wisdom is that PRP and P-1 are the best uses of CPUs, and TF the best use of GPUs, for advancing the wavefronts and finding the next Mersenne prime. P-1 assignments complete more quickly than PRP or LL (primality tests) on the same hardware, so P-1 is better suited to slower CPUs than primality testing is.

But run what makes you smile the most and keeps you from getting bored and quitting. Running a mixed workload is fine, and prime95 makes it easy to do so automatically on multicore machines.
#3
Sep 2018
3·23 Posts
Wow, that's a lot to digest.

So you're saying a P-1 run is "quicker," and you do more of them, but in the end they give the same credit as doing a longer LL? Either way, the following (in the blue brackets) furthers the cause? https://media.discordapp.net/attachm...64/unknown.png
#4
"/X\(‘-‘)/X\"
Jan 2013
B73₁₆ Posts
Quote:
If the machines were given ECM automatically, it's probably the best work for them. ECM won't help find new primes, but it will help factor numbers we already know not to be prime. Some people are interested in that as a sub-project.

The LL double check is there to test whether the hardware is reliable. By default, each machine will do a single LL double check per year. This helps us find unreliable machines so that their earlier work can be double checked sooner: there's a reasonable chance such a machine has missed a prime. Unreliable hardware should only run PRP, as PRP can reliably detect errors and retry, whereas the other job types cannot.
#5
Sep 2018
3×23 Posts
Well, I've had a few machines where PrimeNet detects them as a P4 100 MHz equivalent to start and assigns them ECM work, and then after a while of ECM it realizes, "oh, this is actually a 5 GHz P4 equivalent; here, have a double check."

The only machine that got nothing but ECM work was an Atom netbook I brought on as a test, and it took days to do one assignment, so I retired it. Since I started in August, every machine I've brought online (almost 50) has done a few double checks and then moved on to an LL test. Oddly, when I bumped everything up to "get 10 days of work," they got confused and heaped on more double checks.

Edit: so P-1 factoring weeds out candidates so the LL tests are better spent, and I should get machines with a ton of RAM and have them do P-1 tests?

Last fiddled with by irowiki on 2018-10-10 at 19:25
#6
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
47×107 Posts
Quote:
P-1 factoring bounds are chosen for each run to give the optimal expected time savings, weighing the probability of finding a factor against the estimated cost of the P-1 run versus the cost of the primality tests it could eliminate for that exponent. Done extensively, P-1 may save the project time. But the system time you allocate to P-1 factoring won't itself find a Mersenne prime, although it may turn up some impressively large factors.

P-1 and PRP are performed with very similar calculations (3 raised to a power mod the Mersenne number, using double-precision FFTs), and LL is close too, so the credit per CPU hour expended is comparable.

If P-1 takes 1/40 the time of a primality test and has a 3% chance of finding a factor, you'll get 1000 exponents P-1 factored in the time it takes to do 25 primality tests, and find about 30 factors, eliminating the need for roughly 30 to 60+ primality tests. (First LL, LL DC, and the occasional third test when residues don't match, for 60+; or 30 PRPs.) So in that hypothetical case the project comes out at least 5 primality tests ahead.

P-1 speed is helped by giving stage 2 more RAM, but slow machines can run it with 1GB available at the current wavefront.

Double checks are welcome both because they test a system's reliability and because they help cut down the growing double-check backlog.
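The back-of-envelope arithmetic above can be sanity checked with a short script. The 1/40 run cost, 3% factor probability, and tests-saved-per-factor figures are the post's hypothetical assumptions, not measured values:

```python
# Sketch of the P-1 trade-off described above, in primality-test time units.
# All numbers are the post's hypothetical assumptions, not measured values.

def p1_net_gain(runs, p1_cost=1 / 40, p_factor=0.03, tests_saved_per_factor=1):
    """Expected net primality tests saved by doing `runs` P-1 runs.

    tests_saved_per_factor: 1 for PRP (a factor removes one test),
    2 for LL (a factor removes the first test plus its double check).
    """
    time_spent = runs * p1_cost            # time invested in P-1
    expected_factors = runs * p_factor     # expected factors found
    return expected_factors * tests_saved_per_factor - time_spent

# 1000 P-1 runs cost 25 tests' worth of time and find ~30 factors:
print(p1_net_gain(1000, tests_saved_per_factor=1))  # PRP case: ~5 tests ahead
print(p1_net_gain(1000, tests_saved_per_factor=2))  # LL case: ~35 tests ahead
```

This also shows why the payoff range in the post spans "30 to 60+" tests: the same 30 factors are worth twice as much under the old LL-plus-double-check workflow as under PRP.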
#7
Sep 2018
1000101₂ Posts
Quote:
Similar Threads
Thread | Thread Starter | Forum | Replies | Last Post
The best work for my CPU | MacMagnus | Information & Answers | 57 | 2013-11-22 16:27 |
How to calculate work/effort for PRP work? | James Heinrich | PrimeNet | 0 | 2011-06-28 19:29 |
No Work | Pilgrim | Information & Answers | 1 | 2008-01-31 18:53 |
Work to do for old CPU | Riza | Lone Mersenne Hunters | 7 | 2006-03-15 22:57 |
It seems to work, but why ? | T.Rex | Math | 15 | 2005-10-15 10:38 |