Thanks WraithX! Your explanation is very helpful. It explains why, on some of the machines, one thread appears to have a high success rate while another has a high failure rate: the first one got its allocations completed and left no room for the other. I had suspected this, but thought that -maxmem would prevent it.
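For what it's worth, here is roughly the setup I had pictured -maxmem protecting, with two stage 1 instances sharing one box (just a sketch; the ~2 GB machine size and the file names other than residues43b.txt are made up for illustration):
Code:
# Assumed setup: two simultaneous stage 1 passes on one ~2 GB machine,
# each given -maxmem 1000 in the hope that together they stay within RAM.
# In practice the second instance still hit the GNU MP allocation failures shown below.
ecm -maxmem 1000 -save residues43a.txt 2900000000 2900000000 < composite_a.txt &
ecm -maxmem 1000 -save residues43b.txt 2900000000 2900000000 < composite_b.txt &
wait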
I have switched three high-failure-rate machines over and should know within an hour or so whether this is successful. Here are some earlier runs from one of my machines:
Code:
Current pass started at 16:51:33
ecm -maxmem 1000 -save residues43b.txt 2900000000 2900000000
ECM took 31453 seconds
ECM took 8h 44m 13s
Current pass started at 01:35:47
ecm -maxmem 1000 -save residues43b.txt 2900000000 2900000000
GNU MP: Cannot allocate memory (size=268697616)
ECM took 1632 seconds
ECM took 0h 27m 12s
Current pass started at 02:02:59
ecm -maxmem 1000 -save residues43b.txt 2900000000 2900000000
GNU MP: Cannot allocate memory (size=268697616)
ECM took 1293 seconds
ECM took 0h 21m 33s
Current pass started at 02:24:32
ecm -maxmem 1000 -save residues43b.txt 2900000000 2900000000
GNU MP: Cannot allocate memory (size=268697616)
ECM took 1277 seconds
ECM took 0h 21m 17s
Current pass started at 02:45:49
One success followed by three failures! Even if the switched-over machines run somewhat slower, they should turn out more residues. The three failed passes above add up to 27m 12s + 21m 33s + 21m 17s = 1h 10m 02s of wasted time, against 8h 44m 13s for the single successful run.
Am I safe to assume I can remove -maxmem for stage 1 runs?
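In other words, something like this for each stage 1 pass, i.e. the same command as in the log with -maxmem dropped (the redirected input file name is just a placeholder):
Code:
# Same stage 1 pass as above (B2 set equal to B1, as in the saved runs),
# but without the -maxmem cap; composite.txt is a placeholder input file.
ecm -save residues43b.txt 2900000000 2900000000 < composite.txt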
@Gordon:
All three of my nVidia cards are ancient, unfortunately; they are compute capability 1.2 and 1.3. I have CUDA 6 on one of my machines, but GMP-ECM considers it too old. I haven't totally given up, but that's on a side table for now.
Thanks again, everyone...