#375
"Joe"
Oct 2019
United States
4C₁₆ Posts
#376
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
11453₈ Posts
Reliability data versus exponent on LL will show the effectiveness of the current error-detection methods, including the Jacobi check. A version upgrade is good for those systems new enough to run wavefront assignments, including the relatively quick Cert work. The XP systems need an OS upgrade to allow it. The timeliness of a prompt GEC check or Cert is of much greater value, in my opinion, than LL reliability feedback on a computer that ran an LL first test several years ago. Many of the systems that produced existing LLDC candidates' first tests (54M and higher) are likely to no longer be in operation or running primality tests by the time the LL reliability feedback is obtained, if not already replaced by now.

There is a very slight ~2% throughput advantage to PRP/GEC/CERT over LLDC, and a large reliability advantage. Approximately 2% × 506K DC to Mp51 adds up (~10,120 tests). There is no great harm in having a mixed situation, with some LLDC and some PRP/CERT in place of LLDC (& ~4% LLTC, ~0.04% LLQC, ~0.0008% LL5C, ~16E-8 LL6C).

PrimeNet's automatic issuing of first-time LL assignments ought to cease sometime soon, since each one commits the project to a future DC in some form, at no less than 100% of the first-test cost, at higher, more costly exponents. July 2021, a year after PRP proof was introduced? Wait till mlucas supports PRP proof? A separate question is whether to also cease issuing first-time LL assignments as manual assignments. Some GPUs can't run gpuowl, so they can't run PRP, with proof or not.
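The ~2% figure compounds over the remaining double-check backlog. A quick back-of-envelope sketch of that arithmetic, using the post's own estimates (not official project numbers):

```python
# Rough throughput comparison based on the numbers in the post.
# Assumptions (from the post, not official figures):
#   - ~2% per-test throughput advantage of PRP/GEC/CERT over LLDC
#   - ~506,000 LL double-checks remaining up to Mp51's exponent
dc_remaining = 506_000
advantage = 0.02

# DC-equivalent tests of work saved if all remaining DCs ran as PRP/CERT
tests_saved = dc_remaining * advantage
print(f"~{tests_saved:,.0f} DC-equivalent tests saved")  # ~10,120
```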
#377
"Bill Staffen"
Jan 2013
Pittsburgh, PA, USA
3·137 Posts
Make sure you're not using GPU72 as a proxy. It messes with the Certs.
#378
Dec 2002
3·269 Posts
https://www.mersenne.org/report_expo...exp_hi=&full=1
On the line "Type : CERT", under Status it says "n/a", suggesting something is to come later. I would expect something like "successfully verified".

Last fiddled with by tha on 2020-10-01 at 08:48
#379
"James Heinrich"
May 2004
ex-Northern Ontario
CC3₁₆ Posts
#380
"Rich"
Aug 2002
Benicia, California
2³×151 Posts
Thanks for the info. I am using GPU72 as a proxy. I only do double checks, so I'll leave the Certs to others.

Last fiddled with by richs on 2020-10-01 at 16:09
#381
"James Heinrich"
May 2004
ex-Northern Ontario
3³·11² Posts
#382
Sep 2006
Brussels, Belgium
1651₁₀ Posts
Jacob

Last fiddled with by S485122 on 2020-10-01 at 17:47
#383
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
7·701 Posts
I used 2% probability of error per LL test: 2% for the first test and 2% for the DC, which gives 4% probability of needing a TC, and continued to use 2% probability of error for any later LL retest that may occur, although there may be a factor of 2 missing for the quad and higher checks. George has stated 1.5%. I've seen 2% commonly used elsewhere, and I've seen 1.5-2% per LL test in my own CPU & GPU tests. For given hardware and software, the rate is expected to climb with run time and hardware age. The introduction of the Jacobi check should halve the figures at some point, where applicable (prime95, gpuowl, mlucas, not CUDALucas). Phaseout of LL/Jacobi in favor of PRP/GEC will lower the average error rate of PRP & LL tests combined. Assuming a 1% chance of uncorrected error per LL test would make the computation time about a wash. Being able to perform a proof of correctness remains an advantage for PRP.

Primality tests via PRP or LL cost about the same; the GEC and occasional Jacobi check are ~0.2-0.3% of a primality test, and both George and Mihai, IIRC, have stated there's no difference in cost between bare LL and bare PRP. Assuming a power-8 proof, the total cost of a PRP with proof & cert is ~1.01 primality tests.

A first LL test has cost 1 and error rate e, so the cost of obtaining a correct res64 is 1 + sum from i=1 to infinity of e^i = 1/(1-e). An LLDC has the same cost. The cost of two correct tests, for e = 0.02, is then 2/(1-e) = 2.04081632...

After obtaining a first LL res64, we don't know whether it's right. The chance of a mismatch with a DC is the sum of the probabilities of the first test being wrong or the second test being wrong (including the case of both being wrong, but differently): ~2e.

Last fiddled with by kriesel on 2020-10-01 at 19:03
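The geometric-series cost argument in the post can be sketched numerically (illustrative only; e = 2% is the post's assumed per-LL-test error rate, not an official figure):

```python
# Error-cost arithmetic from the post, assuming per-test error rate e.
# An erroneous test must be redone, so the expected cost of one *correct*
# result is the geometric series 1 + e + e^2 + ... = 1/(1 - e).
e = 0.02

cost_one_correct = 1 / (1 - e)        # ~1.0204 primality tests
cost_two_correct = 2 * cost_one_correct
print(round(cost_two_correct, 8))     # 2.04081633, matching the post

# Chance a first test and its DC mismatch: first wrong, or DC wrong
# (counting both-wrong here too); exactly 2e - e^2, i.e. ~2e for small e.
p_mismatch = 1 - (1 - e) ** 2
print(round(p_mismatch, 4))           # 0.0396, roughly 2e = 4%
```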
#384
6809 > 6502
Aug 2003
9352₁₀ Posts
Another minor reason might be to get assignments that your machine does not normally qualify for.
#385
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
1001100101011₂ Posts
Options, Resource Limits, Advanced... offers:

Daytime P-1/ECM stage 2 memory (GB):
Nighttime P-1/ECM stage 2 memory (GB):

On a multiple-worker configuration, are these allowed-memory settings per worker, or totals for the prime95 application? The readme does not say either way. Treating them as per-worker is conservative, but suboptimal if they are actually totals for the prime95 application.

Setting available memory was introduced in v20.0 for P-1, and ECM supported this memory limit beginning at v25.5. Support for multiple LL test workers was introduced at v25.5. It's also possible to run P-1 on multiple workers, including overlapping stage 2, and presumably ECM also. I sometimes run a P-1 on each Xeon in a system, among the 2 or 4 workers per system.

whatsnew.txt says

Code:
New features in Version 25.7 of prime95.exe
-------------------------------------------
1) Time= in ini files no longer supported. A during/else syntax can be used instead for some ini file options.
2) PauseWhileRunning enhanced to pause any number of workers.
3) LowMemWhileRunning added.
4) Ability to stop and start individual workers added.
5) DayMemory and NightMemory in local.txt replaced with a single Memory setting.
6) Memory can be set for each worker thread.
7) Scheme to distribute available memory among workers needing a lot of memory has been completely revamped.
8) MaxHighMemWorkers replaces delayStage2Workers option.
9) The executable now defaults to talking to the PrimeNet v5 server. To use the executable with the old v4 server, add "UseV4=1" to the top of prime.txt.

Code:
The Memory=n setting in local.txt refers to the total amount of memory the program can use. You can also put this in the [Worker #n] section to place a maximum amount of memory that one particular worker can use.
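If that setting behaves as described, a local.txt might look something like this (an illustrative sketch only; the worker sections and values are made up, and note the dialog quoted earlier shows GB while local.txt has historically used MB, so verify units against your version's documentation):

Code:
Memory=8000            ; total the whole program may use
[Worker #1]
Memory=6000            ; additional cap for this worker alone
[Worker #2]
Memory=2000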