20181010, 16:14  #1 
Sep 2018
3×23 Posts 
ECM work vs LL work
I notice slow machines get ECM work at first before getting a double check, and if I transfer that ECM work to a fast machine, the fast machine chews right through it.
Is there a benefit to the community/GIMPS in having a few fast machines chew through ECM work, or should I leave those on LL tests? 
20181010, 16:54  #2  
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
7·701 Posts 
I think the conventional wisdom is that PRP & P1 are the best uses of CPUs, and TF the best use of GPUs, for advancing the wavefronts and finding the next Mersenne prime. P1 assignments complete quicker than PRP or LL (primality tests) on the same hardware, so P1 is more suitable than primality testing for slower CPUs. But run what makes you smile the most and keeps you from getting bored and quitting. Running a mixed workload is fine, and prime95 makes it easy to do so automatically on multicore machines. 

20181010, 17:55  #3 
Sep 2018
45_{16} Posts 
Wow, that's a lot to digest.
So you're saying a P1 is "quicker" and you do more of them, but in the end it would give the same credit as a longer LL? Either way, the following (in the blue brackets) furthers the cause? https://media.discordapp.net/attachm...64/unknown.png 
20181010, 18:33  #4  
"/X\(‘‘)/X\"
Jan 2013
101101110001_{2} Posts 
If the machines were given ECM automatically, it's probably the best work for them. ECM won't help find new primes, but it will help factor numbers we already know not to be prime; some people are interested in that as a subproject. The LL double check is there to check whether the hardware is reliable. By default, each machine will do a single LL double check per year. This helps us find unreliable machines so their earlier work can be double checked sooner, since there's a reasonable chance such a machine has missed a prime. Unreliable hardware should only run PRP, as PRP can reliably detect errors and retry, whereas the other job types cannot. 
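For reference, here is a minimal sketch of what an "LL" assignment actually computes: the Lucas-Lehmer test for a Mersenne number M_p = 2^p - 1. This is illustrative only; prime95 uses FFT-based big-number multiplication to make this feasible at wavefront-sized exponents.

```python
def lucas_lehmer(p: int) -> bool:
    """Return True iff M_p = 2^p - 1 is prime (p an odd prime), via the
    Lucas-Lehmer test: s_0 = 4, s_{k+1} = s_k^2 - 2; M_p is prime iff
    s_{p-2} == 0 (mod M_p)."""
    if p == 2:
        return True  # M_2 = 3 is prime; the recurrence below assumes odd p
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Known small Mersenne prime exponents: 3, 5, 7, 13.
# p = 11 gives the composite M_11 = 2047 = 23 * 89.
```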

20181010, 19:21  #5 
Sep 2018
3·23 Posts 
Well, I've had a few machines where PrimeNet detects them as a 100 MHz P4 equivalent to start, assigns them ECM work, and then after a while of ECM it realizes "oh, this is actually a 5 GHz P4 equivalent, here, have a double check."
The only machine that ever got ECM work exclusively was an Atom netbook I brought on as a test, and it took days to do one assignment, so I retired it. I started in August, and every machine I've brought online (almost 50) has done a few double checks and then moved on to an LL check. Oddly, when I bumped everything up to "get 10 days of work" they got confused and heaped on more double checks.

Edit: so P1 factoring helps weed things out so the LL tests are better spent, and I should get machines with a ton of RAM and set them to P1 tests?

Last fiddled with by irowiki on 20181010 at 19:25 
20181010, 22:18  #6  
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
7·701 Posts 
P1 factoring bounds are chosen for each run to give the optimal expected time savings, weighing the probability of finding a factor against the estimated cost of the P1 run versus the cost of the LL or PRP tests it could eliminate. Done extensively, P1 may save the project time, but the system time you allocate to P1 factoring won't itself find a Mersenne prime, although it may find some impressively large factors.

P1 and PRP are performed with very similar calculations (3 raised to a power mod the Mp, using double-precision FFT transforms), and LL is close too, so the credit per CPU hour expended is about the same.

If P1 takes 1/40 the time of a primality test and has a 3% chance of finding a factor, you'll get 1000 exponents P1-factored in the time it takes to do 25 primality tests, and find about 30 factors, eliminating the need for ~30 to 60+ primality tests. (First LL, LL DC, and the occasional third test when residues don't match, for 60+; or 30 PRPs.) So in that hypothetical case the project comes out about 5 primality tests ahead.

P1 speed is helped by more RAM, which it uses in stage 2, but slow machines can run it with 1 GB available at the current wavefront. Double checks are welcome both because they test a system's reliability and because they help cut down on the growing double-check backlog. 

20181011, 21:17  #7  
Sep 2018
3·23 Posts 