mersenneforum.org Mersenne Prime mostly-GPU Computing reference material

2018-06-28, 01:11 #15 kriesel "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 1011100010101₂ Posts

NVIDIA-SMI

nvidia-smi is a command-line utility available on Linux and Windows. The following relates to Windows, where it can be buried deep in the driver store directory tree. A Windows Explorer search, followed by drag and drop onto a command prompt window, makes short work of finding it, no typing involved. Note it gives status for several parameters on all GPUs and lists the processes using them. The output below is what you get if no command-line parameters are given. There are pages of information available on the program's options, including looping output; use -h to display them.
Code:
C:\Windows\System32\DriverStore\FileRepository\nv_dispi.inf_amd64_neutral_cc2df69582aea972\nvidia-smi.exe
Wed Jun 27 19:35:12 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 378.66                 Driver Version: 378.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1070    WDDM | 0000:03:00.0     Off |                  N/A |
| 89%   85C    P2   108W / 158W |    345MiB /  8192MiB |     98%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Quadro 2000         WDDM | 0000:1C:00.0     Off |                  N/A |
|100%   90C    P0    N/A /  N/A |   1016MiB /  1024MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX 105...  WDDM | 0000:28:00.0     Off |                  N/A |
| 49%   81C    P0    66W /  75W |    154MiB /  4096MiB |     98%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      4972    C   ...ktc\2\mfaktc-win-64.LessClasses-CUDA8.exe     N/A  |
|    0      6884    C   ...faktc\mfaktc-win-64.LessClasses-CUDA8.exe     N/A  |
|    1       512    C   ...-q2000\CUDAPm1_win64_20130923_CUDA_55.exe     N/A  |
|    2      6500    C   ...050ti\mfaktc-win-64.LessClasses-CUDA8.exe     N/A  |
+-----------------------------------------------------------------------------+
It's fully implemented for Quadro and Tesla, and a subset is available for GeForce drivers and GTX / RTX GPUs. Here's a --query example output for an RTX 2080 Super:
Code:
"c:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe" --query

==============NVSMI LOG==============

Timestamp                       : Thu Jul 09 11:26:02 2020
Driver Version                  : 442.19
CUDA Version                    : 10.2

Attached GPUs                   : 1
GPU 00000000:03:00.0
    Product Name                : GeForce RTX 2080 SUPER
    Product Brand               : GeForce
    Display Mode                : Enabled
    Display Active              : Enabled
    Persistence Mode            : N/A
    Accounting Mode             : Disabled
    Accounting Mode Buffer Size : 4000
    Driver Model
        Current                 : WDDM
        Pending                 : WDDM
    Serial Number               : N/A
    GPU UUID                    : GPU-449f386b-dcaf-5433-af27-7650c45bd88f
    Minor Number                : N/A
    VBIOS Version               : 90.04.7A.00.CD
    MultiGPU Board              : No
    Board ID                    : 0x300
    GPU Part Number             : N/A
    Inforom Version
        Image Version           : G001.0000.02.04
        OEM Object              : 1.1
        ECC Object              : N/A
        Power Management Object : N/A
    GPU Operation Mode
        Current                 : N/A
        Pending                 : N/A
    GPU Virtualization Mode
        Virtualization Mode     : None
        Host VGPU Mode          : N/A
    IBMNPU
        Relaxed Ordering Mode   : N/A
    PCI
        Bus                     : 0x03
        Device                  : 0x00
        Domain                  : 0x0000
        Device Id               : 0x1E8110DE
        Bus Id                  : 00000000:03:00.0
        Sub System Id           : 0x30813842
        GPU Link Info
            PCIe Generation
                Max             : 1
                Current         : 1
            Link Width
                Max             : 16x
                Current         : 16x
        Bridge Chip
            Type                : N/A
            Firmware            : N/A
        Replays Since Reset     : 0
        Replay Number Rollovers : 0
        Tx Throughput           : 0 KB/s
        Rx Throughput           : 0 KB/s
    Fan Speed                   : 51 %
    Performance State           : P2
    Clocks Throttle Reasons
        Idle                    : Not Active
        Applications Clocks Setting : Not Active
        SW Power Cap            : Active
        HW Slowdown             : Not Active
            HW Thermal Slowdown : Not Active
            HW Power Brake Slowdown : Not Active
        Sync Boost              : Not Active
        SW Thermal Slowdown     : Not Active
        Display Clock Setting   : Not Active
    FB Memory Usage
        Total                   : 8192 MiB
        Used                    : 1364 MiB
        Free                    : 6828 MiB
    BAR1 Memory Usage
        Total                   : 256 MiB
        Used                    : 11 MiB
        Free                    : 245 MiB
    Compute Mode                : Default
    Utilization
        Gpu                     : 100 %
        Memory                  : 1 %
        Encoder                 : 0 %
        Decoder                 : 0 %
    Encoder Stats
        Active Sessions         : 0
        Average FPS             : 0
        Average Latency         : 0
    FBC Stats
        Active Sessions         : 0
        Average FPS             : 0
        Average Latency         : 0
    Ecc Mode
        Current                 : N/A
        Pending                 : N/A
    ECC Errors
        Volatile
            SRAM Correctable    : N/A
            SRAM Uncorrectable  : N/A
            DRAM Correctable    : N/A
            DRAM Uncorrectable  : N/A
        Aggregate
            SRAM Correctable    : N/A
            SRAM Uncorrectable  : N/A
            DRAM Correctable    : N/A
            DRAM Uncorrectable  : N/A
    Retired Pages
        Single Bit ECC          : N/A
        Double Bit ECC          : N/A
        Pending Page Blacklist  : N/A
    Temperature
        GPU Current Temp        : 66 C
        GPU Shutdown Temp       : 100 C
        GPU Slowdown Temp       : 97 C
        GPU Max Operating Temp  : 89 C
        Memory Current Temp     : N/A
        Memory Max Operating Temp : N/A
    Power Readings
        Power Management        : Supported
        Power Draw              : 126.22 W
        Power Limit             : 125.00 W
        Default Power Limit     : 250.00 W
        Enforced Power Limit    : 125.00 W
        Min Power Limit         : 125.00 W
        Max Power Limit         : 292.00 W
    Clocks
        Graphics                : 1545 MHz
        SM                      : 1545 MHz
        Memory                  : 7500 MHz
        Video                   : 1425 MHz
    Applications Clocks
        Graphics                : N/A
        Memory                  : N/A
    Default Applications Clocks
        Graphics                : N/A
        Memory                  : N/A
    Max Clocks
        Graphics                : 2100 MHz
        SM                      : 2100 MHz
        Memory                  : 7751 MHz
        Video                   : 1950 MHz
    Max Customer Boost Clocks
        Graphics                : N/A
    Clock Policy
        Auto Boost              : N/A
        Auto Boost Default      : N/A
    Processes
        Process ID              : 3188
            Type                : C
            Name                : C:\Users\...\rtx2080super-mfaktc
\mfaktc-2047-win-64.exe
            Used GPU Memory     : Not available in WDDM driver model
        Process ID              : 6992
            Type                : C
            Name                : C:\Users\...\tx2080super-mfaktc\3\mfaktc-2047-win-64.exe
            Used GPU Memory     : Not available in WDDM driver model
        Process ID              : 8156
            Type                : C
            Name                : C:\Users\...\rtx2080super-mfaktc\2\mfaktc-2047-win-64.exe
            Used GPU Memory     : Not available in WDDM driver model
It can show GPU serial numbers on some GPUs. It fails if any NVIDIA GPU has a driver or hardware issue:
Code:
"c:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe" --query
Unable to determine the device handle for GPU 0000:06:00.0: Unknown Error
Putting it inside a simple batch file allows an update upon a single keystroke, with low CPU overhead.
Code:
:loop
C:\Windows\System32\DriverStore\FileRepository\nv_dispi.inf_amd64_neutral_cc2df69582aea972\nvidia-smi.exe
pause
goto loop
Fix the directory path there to match the location on your system. Name it something convenient, like nv.bat.
Top of this reference thread: https://www.mersenneforum.org/showpo...89&postcount=1
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1
Last fiddled with by kriesel on 2021-01-08 at 19:31

2018-10-07, 14:48 #16 kriesel "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 19×311 Posts

TF & LL GhzD/day ratings & ratios and SP/DP ratios for certain GPUs

The attached PDF shows the TF GhzD/day rating, LL GhzD/day rating, their ratio, and the SP/DP ratio for certain GPUs. TF and LL throughputs are actually functions of exponent and other variables, not constants. Values here are for representative independent variables relevant to the current GIMPS wavefront. Data are listed in a table and shown on a log chart. Feel free to PM me with data for additional GPUs. LL ratings are mostly based on CUDALucas performance, which is substantially slower than recent versions of GpuOwl on the same inputs and GPU model.
Based on these performance ratios, some GPUs are much better suited for TF (NVIDIA GTX 10xx and later), while others are well suited to PRP/GEC/proof, P-1, or LL DC with the Jacobi check (recent AMD models, Vega 56 and newer, especially the Radeon VII).
Top of this reference thread: https://www.mersenneforum.org/showpo...89&postcount=1
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1
Attached Files: tf ll ghzd and ratios vs gpu model.pdf (44.7 KB, 47 views)
Last fiddled with by kriesel on 2021-07-29 at 19:01

2018-12-07, 16:48 #17 kriesel "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 19×311 Posts

P-1 bounds determination

(The following originated as https://www.mersenneforum.org/showpo...3&postcount=36)

As far as I can determine, it's not PrimeNet doing the B1, B2, d, e or NRP determination and dictating to the applications; it's most applications optimizing the bounds and other parameters, unless they are specified by the user, with the applications afterward telling PrimeNet in the results record what parameters were selected and used. (Note, however, that if the bounds reported for a P-1 completed without finding a factor are not sufficient, PrimeNet does not retire the P-1 factoring task for the exponent, but instead will reissue the task to someone else. Much of the first P-1 attempt's computation will therefore be duplicated elsewhere, which is inefficient.)
The applications mprime, prime95, and CUDAPm1 (but not Gpuowl v5.0's PRP-1 or later Gpuowl versions' P-1), unless the user specifies otherwise, estimate P-1 factoring run time and the probability of saving some number of primality tests, to optimize the probable savings in total computing time for the exponent. The estimate is based on the computed probability of finding a P-1 factor over combinations of many B1 values and several B2 values, for a given prior TF level (number of bits trial factored to), a given number of future primality tests potentially saved (typically 1 or 2), available memory resource limits (system or GPU), and probably the system's or GPU's performance characteristics / benchmark results. The mprime, prime95, and CUDAPm1 programs try many combinations of B1 and B2 values while seeking that optimum. Alternatively, the user dictates the P-1 bounds in the worktodo line (or command line, as applicable). For mprime or prime95, that explicit bounds specification can be done in a Pminus1 worktodo line, but not a Pfactor line. It seems like a lot of work, and a poor bet, that the average user can do better than the coded optimization algorithm created by the author of prime95, mprime, gwnum and some bits of Gpuowl.

From experiments with prime95 on somewhat larger exponents, it appears that the optimization calculation also occurs during prime95 Test > Status output generation, which shows considerable lag for P-1 work compared to other computation types. There appears to be no caching of the previously computed optimal P-1 bounds. In my experience, prime95 status output without a stack of P-1 work assignments is essentially instantaneous, while the example attached takes 5 seconds, even immediately after a preceding one. With larger P-1 exponents or more P-1 assignments (deeper work caching, or more complete dedication of a system to P-1 work than the 1/4 in my example) I think that 5 seconds will increase.
prime95.log:
Code:
Got assignment [aid redacted]: P-1 M89787821
Sending expected completion date for M89787821: Dec 05 2018
...
[Thu Dec 06 09:17:24 2018 - ver 29.4]
Sending result to server: UID: Kriesel/emu, M89787821 completed P-1, B1=730000, B2=14782500, E=12, Wg4: 123E2311, AID: redacted
PrimeNet success code with additional info:
CPU credit is 7.3113 GHz-days.
The prime95 worktodo.txt record for a PrimeNet-given P-1 assignment contains no B1 or B2 specification.
Code:
Pfactor=[aid],1,2,89794319,-1,76,2
George's description of the optimization process is in the P-1 Factoring section of https://www.mersenne.org/various/math.php. It's there to read in the source code also.

CUDAPm1 example: worktodo entry from a manual assignment:
Code:
PFactor=[aid],1,2,292000031,-1,81,2
program output:
Code:
CUDAPm1 v0.20
------- DEVICE 1 -------
name                GeForce GTX 480
Compatibility       2.0
clockRate (MHz)     1401
memClockRate (MHz)  1848
totalGlobalMem      zu
totalConstMem       zu
l2CacheSize         786432
sharedMemPerBlock   zu
regsPerBlock        32768
warpSize            32
memPitch            zu
maxThreadsPerBlock  1024
maxThreadsPerMP     1536
multiProcessorCount 15
maxThreadsDim[3]    1024,1024,64
maxGridSize[3]      65535,65535,65535
textureAlignment    zu
deviceOverlap       1
CUDA reports 1426M of 1536M GPU memory free.
Index 91
Using threads: norm1 256, mult 128, norm2 32.
Using up to 1408M GPU memory.
Selected B1=1830000, B2=9607500, 2.39% chance of finding a factor
Starting stage 1 P-1, M292000031, B1 = 1830000, B2 = 9607500, fft length = 16384K
Aaron Haviland has rewritten part of CUDAPm1's bounds selection code in v0.22 (https://www.mersenneforum.org/showpo...&postcount=646), building on his earlier 2014 fork (https://www.mersenneforum.org/showpo...&postcount=592).

Gpuowl's PRP-1 implementation takes a somewhat different approach, and requires user selection of B1. It defaults to B2=p but allows other B2 values to be user specified. See https://www.mersenneforum.org/showth...=22204&page=70, posts 765-767, for Preda's description of gpuowl v5.0 P-1 handling.
(See posts 694-706 for his earlier B1-only development; https://www.mersenneforum.org/showth...=22204&page=64.) Gpuowl's P-1 bounds defaults, cost and algorithm have continued to evolve over time, with substantial performance increases in V6.11 and significant cost reduction by overlap with PRP squarings introduced in V7.0. As of V6.11, for p ~104M, B1 defaults to 1M and B2 defaults to 30 × B1. As of V7.0, I think, Gpuowl does not run stage 2 if the available GPU RAM is not sufficient for 15 or more buffers. On a 16GB GPU, that's above ~900M exponent in my experience.

Looking up an exponent on mersenne.ca provides separate bounds guides for GPU or CPU use. For example, https://www.mersenne.ca/exponent/104089423 shows
PrimeNet B1=450000, B2=22000000
GPU72 B1=650000, B2=24000000
When I reserve a block of ~30 to run on a GPU, I'll typically specify the GPU72 bounds for the last (largest) of the block. That way the bounds are just sufficient for all, retiring the P-1 task for every exponent in the block, regardless of whether the primality test will run on CPU or GPU. It's also about 40% faster than taking the Gpuowl default B1=1000000, B2=30000000, allowing me to do more of them in a day, with a near-optimal probability-weighted saving of compute time overall by finding factors. That helps reduce the number that get inadequately run by other GIMPS participants on CPUs with the default prime95 memory allocation, which does stage 1 only, no stage 2. (Code authors are welcome to weigh in re any errors, omissions, nuances, etc.)
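The bounds search described above (trying many B1/B2 combinations and keeping the pair that maximizes expected net savings) can be caricatured in a few lines. The probability and cost models below are invented placeholders, NOT the real prime95/gwnum formulas, and the candidate grids are arbitrary; only the shape of the optimization is the point.

```python
import math

def prob_factor(b1, b2):
    # hypothetical chance that P-1 with these bounds finds a factor
    return 0.01 * math.log10(b1) + 0.005 * math.log10(b2 / b1 + 1)

def p1_cost(b1, b2, test_cost=1.0):
    # hypothetical cost of the P-1 run, in primality-test units
    return test_cost * (b1 / 3.0e7 + (b2 - b1) / 3.0e8)

def best_bounds(tests_saved=2.0, test_cost=1.0):
    # brute-force grid search for the bounds maximizing expected net savings,
    # i.e. (chance of factor) * (tests saved) - (cost of the P-1 itself)
    best = None
    for b1 in (250_000, 500_000, 1_000_000, 2_000_000):
        for mult in (10, 20, 30, 40):
            b2 = b1 * mult
            saving = prob_factor(b1, b2) * tests_saved * test_cost - p1_cost(b1, b2, test_cost)
            if best is None or saving > best[0]:
                best = (saving, b1, b2)
    return best
```

The real optimizers evaluate far denser grids and use measured per-iteration timings and rigorous smoothness probabilities, but the structure is the same: a search over bounds maximizing probable saved time.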
Top of this reference thread: https://www.mersenneforum.org/showpo...89&postcount=1
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1
Last fiddled with by kriesel on 2021-05-21 at 21:15 Reason: consistent case on PrimeNet, Gpuowl; notes on Pminus1

2019-01-11, 22:17 #18 kriesel "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 19·311 Posts

What limits trial factoring

What limits how high an exponent or factoring depth we run?
1) Utility per unit run time. There's a tradeoff between the probability of finding a factor by trial factoring (TF) versus by P-1 factoring, and the time it takes to complete a conclusive primality test or pseudoprime test. Consider a few cases (ignoring P-1 for now, for simplicity):
A) 40 Mersenne numbers are trial factored from their current level to a bit depth that corresponds to a 2.5% probability of finding a factor. This takes the same time as primality testing one exponent. There's a two percent error rate in LL tests, so on average each unfactored Mersenne number requires 2.04 primality tests. So in that average lot of 40 exponents, there are 1.04 test-times saved, and 39 first tests, 39 second tests, 78 * .02 = 1.56 third tests, and .03 fourth tests. Total effort is equivalent to 1 + 39 + 39 + 1.56 + .03 = 80.59 tests.
B) Factor the same 40 as above, one bit deeper, which takes as long as all preceding factoring, or two primality tests of effort. The odds of finding a factor go up by 1.7% to 4.2%. Of the 40 considered before, 1.68 are factored, leaving 38.32 to primality test. 38.32 * 2 tests = 76.64, plus 38.32 * 2 * .02 = 1.53, plus 38.32 * 2 * .02 * .02 = .03; total effort = 2 + 76.64 + 1.53 + .03 = 80.2. This is better than A. (If the idea of finding factors of 1.68 Mersenne numbers bothers you, consider doing 4000 exponents instead.)
C) Same as B, but the odds of finding a factor go up by only 1.1% to 3.6%. Of the 40 considered before, 1.44 are factored, leaving 38.56 to primality test.
38.56 * 2 = 77.12, plus 38.56 * 2 * .02 = 1.54, plus 38.56 * 2 * .02 * .02 = .03; total effort = 2 + 77.12 + 1.54 + .03 = 80.69. This is slower than A.
Now enter P-1. It's complicated. Given the exponent, the prior trial factoring level, estimate functions for the probability of finding a factor versus various B1 and B2 bounds and the corresponding run times, the number of primality tests that could be saved by finding a factor, and estimate functions of run times for the exponent on the same hardware, the programs try a lot of B1 and B2 value combinations, estimate the probable net savings in run time, and go with what maximizes estimated saved time.
2) Preference or traits of the user. Some people are not willing to wait out the long run times of some exponent / bit-level combinations. Faster run times per bit level are associated with very large exponents, very low bit levels, or both. Some people enjoy finding factors quickly. Some people prioritize assisting the search for new primes. Some value finding an actual factor more highly than the knowledge that a given Mersenne number is composite but has no known factors.
3) Utility to finding new Mersenne primes. New Mersenne primes are likely to be found in the smaller part of the unsearched exponent range. Factoring effort on exponents triple or even ten times the approximate value of the next find doesn't help bring that find about. For equal computing time, many more exponents can be tested at low exponent values than at high; primality testing with the best algorithms scales as approximately p^2.1.
4) Software feature limits.
A) Trial factoring is not currently limited by the features of available software. Mfaktc supports trial factors up to 95 bits; Mfakto, up to 92 bits. (Those are each more than enough to cover the 86-bit-or-less optimal TF for exponents up to 10^9, mersenne.org's limit. 92 bits is good to exponents of about 2^32; 95, to almost 2^33, per lookups like https://www.mersenne.ca/exponent/8183844937.)
B) Max supported exponent is 2^32−1 in Mfaktc and Mfakto. Modifying them to support larger exponents would make them slower.
C) Factor5 is not limited in exponent or bit level, but is limited in practice by run time / performance. Some build options of Ernst Mayer's Mfactor program are not limited in exponent or bit level, and would be faster than Factor5, but are also limited in practice by run time / performance.
D) Availability of software making efficient use of the available computing hardware, APIs and drivers. There is software for CUDA and OpenCL, but not for OpenGL, Vulkan, etc. For now, that seems to leave out some older GPUs or IGPs.
5) Supported parameters at the assignment and result coordination sites mersenne.org and mersenne.ca. Mersenne.org supports work assignments and exponent status up to 10^9. Mersenne.ca supports exponent status up to 10^10 (and even some factor data for >10^10), and TF work assignments up to 2^32.
6) Memory requirements on a GPU are not typically an issue for trial factoring. Its memory footprint is measured in MB, while VRAM capacity per GPU is in GB.
7) Run time versus reliability and probable hardware lifetime can be a limiting issue. Factoring an exponent to 84 bits on a Quadro 4000 takes months. Going to 94 bits would take about 2^10 = 1024 times as long, so would be years even on a GTX 1080 Ti. (1.4 years estimated for https://www.mersenne.ca/exponent/6013456871, without adjustment for the required code modifications.) The lower performance of integrated graphics processors imposes significant limits; e.g. the HD4600, HD620 and UHD630 are all around 18-20 GhzD/day throughput rating, while the GTX 1080 and above are over 1000, so what the faster cards can do in a day takes the IGPs months.
Are there more?
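The case A-C bookkeeping from item 1 above can be condensed into a small model. This is a sketch: the inputs are batch size, TF cost in test-times, and factor probability; the retest series for the 2% mismatch rate is truncated at fourth tests, as in the text.

```python
def expected_effort(n_exponents, tf_cost_tests, p_factor, error_rate=0.02):
    """Expected total effort, in primality-test units, for a batch:
    TF cost, plus first and second tests for the unfactored survivors,
    plus third/fourth retests for mismatching pairs (series truncated)."""
    unfactored = n_exponents * (1 - p_factor)
    tests = 2 * unfactored * (1 + error_rate + error_rate ** 2)
    return tf_cost_tests + tests

# case A: TF costing 1 test-time, 2.5% factor chance -> ~80.59 tests
# case B: one bit deeper, cost 2, 4.2% chance        -> ~80.2 tests
# case C: as B but only a 3.6% chance                -> ~80.69 tests
```

Evaluated at the three cases in the text, the model reproduces the 80.59 / 80.2 / 80.69 totals, showing why the extra bit of TF pays off in case B but not in case C.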
Top of this reference thread: https://www.mersenneforum.org/showpo...89&postcount=1
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1
Last fiddled with by kriesel on 2021-04-26 at 17:25 Reason: minor grammar edit

2019-02-24, 01:52 #19 kriesel "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 19×311 Posts

Error rates

(This originated as https://www.mersenneforum.org/showpo...1&postcount=23 and https://www.mersenneforum.org/showpo...5&postcount=55)

What are typical error rates? The usual figure is about 2% per LL test near the wavefront. That might be from before the addition of the Jacobi check to prime95. It has fluctuated over time as the exponent increased, and as additional code was written, additional bugs introduced and later fixed, additional error checks added, etc. It will go up some as running larger exponents takes longer, requiring time roughly in proportion to p^2.1 for the same hourly hardware reliability. The probability of an LL test being in error goes up considerably if the error counts accumulated during a prime95 run are nonzero. Even a single illegal sumout error recorded raises the probability of an erroneous final residue to around 40%, if I recall Madpoo's post about that correctly. Hardware tends to get less reliable with age.

PRP with the Gerbicz check is much more reliable in producing correct final residues, and in sufficiently recent versions of prime95/mprime or gpuowl it also produces a proof file that allows avoiding over 99.5% of double-check effort, so run PRP with proof whenever possible. PRP/GEC was bulletproof on a very unreliable system I tested it on. It is still possible to have errors in the final residue with PRP and the Gerbicz check, but it is unlikely, and it's the best we can do for now. The PRP/GEC overall error rate per test is so small we don't have sufficient statistics to gauge it, with only a few known erroneous results after two years of frequent use.
There was a case in prime95 where an error could occur in code outside the GEC's reach. That code has been modified to address it. The PRP/GEC overall error rate is thought to be orders of magnitude smaller than the LL/Jacobi-check error rate. It's so low we do not have a sufficient empirical / statistical basis on which to compute an error rate. In a check of PRP tests on exponents > 50M reported 2019-08-12 to 2021-08-12 (>123,000 verified PRP/GEC or PRP/GEC/proof results), 3 bad results were found, indicating an error rate of ~24 per million PRP tests. At least one of those 3 errors (possibly all 3) was from before the addition of error hardening for the code handling final residues outside the reach of the GEC. In any event, run self tests such as double checks regularly, at least annually, to check system reliability on these very unforgiving calculations.

Error rate does depend on the software and hardware used. Mlucas, CUDALucas, cllucas and some LL-capable versions of gpuowl do LL and do not have the Jacobi check. The Jacobi check has a 50% chance of detecting an error if one occurs. Hardware with unreliable memory is more error prone. Overclocking too high or overheating increases error rates. In CUDALucas, there are CUDA levels and GPU models that interact badly, even on highly reliable hardware. These produce errors such as, instead of the usual LL sequence, all zeros being returned at some point. If that occurs before the subtraction of 2, then FFF...FFD is the result (the equivalent of -2). It gets squared and 2 subtracted, and voila, now you have 000...002, since (-2)^2 - 2 = 2. Then it will iterate at 2 until the end. These sorts of errors can be triggered at will. Some of them, under certain circumstances, have the side effect of making the iterations go much faster than expected. If something seems too good to be true, it probably is. (CUDA 4.0 or 4.1, 1024 threads, or certain fft lengths are typically trouble in CUDALucas, if I recall correctly.)
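The collapse-to-2 behavior described above is easy to reproduce with plain integers (any exponent works for the illustration; p = 127 keeps the numbers small):

```python
# Simulate a zeroed residue appearing mid-run in an LL test of M127.
p = 127
M = (1 << p) - 1

x = 0                      # fault: the residue comes back all zeros
x = (x - 2) % M            # the "- 2" step of that iteration gives M - 2
res64 = x & 0xFFFFFFFFFFFFFFFF
# res64 is 0xFFFFFFFFFFFFFFFD, the "equivalent of -2" from the text

for _ in range(5):
    x = (x * x - 2) % M    # (-2)^2 - 2 = 2, and 2^2 - 2 = 2 thereafter
# x is stuck at 2 for every remaining iteration
```

This is why 0x...FFFD and 0x...0002 appear in the known-bad-residue lists further below: they are fixed points of the corrupted sequence, not plausible random residues.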
That is an example where the probability of a false positive match between the first and second test may be 100%. More typical would be of order 10^-6 to 10^-12. The CUDALucas 2.06 May 5 2017 version has software traps for these error residues built in. There are other modes of error. The recent false positive by CUDALucas 2.05.1 resulted in an interim residue of value zero. I'm guessing that's some failure to copy an array of values. Don't run CUDALucas versions earlier than 2.06, and don't let your friends either. Other applications also have characteristic error residues. Someone who wanted to use bugs such as the CUDALucas early-zero bug to fake finding a prime would be disappointed, as the error would be quickly discovered early in the verification process.

I've created application-specific reference threads for several of the popular GIMPS applications. Most of them have a post with a bug and wish list tabulation attached, specific to that application: https://www.mersenneforum.org/forumdisplay.php?f=154. It helps to know what to avoid and how to avoid it. If you identify any issues that are not listed there yet, please PM me with details. As such issues are identified, they might be fixed, or code to detect and guard against them could be added, if still of sufficient interest. (Fixing or trapping for CUDA 4.0 or 4.1 issues is not of much interest now, since many GPUs are running CUDA 8 or above.)

It's common practice for the applications to keep more than one save file, and to be able to restart from one or the other if something is detected to have gone seriously wrong in the past minutes of a lengthy run, thereby perhaps saving most of the time already expended. Some users will run, side by side on two sets of hardware, duplicate runs that take months, periodically comparing interim 64-bit residues that should match along the way.
Re odds of matching wrong residues: the number of residues for a completed run of first and second LL checks of all primes below the current mersenne.org limit is about 2n ≈ 50,847,478 × 2 ≈ 101,694,956, while the number of possible unique 64-bit residues is r = 18,446,744,073,709,551,616, with only the zero value indicating a correctly completed LL test of any of the 50-plus Mersenne prime exponents. So the chance of one randomly distributed wrong residue coinciding with another randomly distributed wrong residue is very slim. If every prime exponent resulted in one randomly distributed wrong unique residue, the last wrong one, which has the most other residues in the list to dodge, and so the highest odds of coinciding, would have a chance of 2n/(r−2n) ≈ 5.5×10^-12 of coinciding with another residue. If only 2% of residues are wrong and the wrong ones are randomly distributed, that chance drops by 49% to ~2.8×10^-12. The odds of any of the wrong residues coinciding with another residue by random chance are ~0.00014 if every exponent has one wrong randomly distributed residue, ~2.9×10^-6 if 2% have a randomly distributed wrong residue. (Note though that the preceding figures do not account for run times, and so error rates, climbing with exponent. Or, alternately, progress is assumed to occur roughly in sync with computing speed advances, so that run time and error rate do not grow.)

The problem is that bad residues from software or hardware issues are not randomly distributed. If they were, we would not be patching and trapping and searching databases for known application-specific bad residues as markers of which exponents to double or triple check.
https://www.mersenneforum.org/showpo...&postcount=142
https://www.mersenneforum.org/showpo...&postcount=150
There is an LL primality test error rate of ~2% per exponent, and similarly on second checks. We iterate until there's a match.
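The single-residue collision estimate above can be checked in a couple of lines, using the counts from the text:

```python
# Back-of-envelope check of the residue-collision odds.
n = 50_847_478           # prime exponents below 10^9
residues = 2 * n         # one first-test and one double-check residue each
r = 1 << 64              # possible unique 64-bit residues

# chance that one randomly placed wrong residue lands on any of the others
p_single = residues / (r - residues)
# comes out near 5.5e-12, matching the estimate in the text
```

The aggregate figures scale birthday-problem style with the number of wrong residues, which is why they are so much larger than the single-collision chance while still being tiny.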
We're always on the lookout for ways to reduce and to catch errors (without hurting performance too much). Some error detections, if efficient enough, will increase net accurate throughput. We know from some GPU runs that some bugs/misconfigurations will preferentially stabilize on a specific wrong res64 result, not a random wrong one. One such value is a false positive, as Madpoo has long known and dealt with. So that's an existence proof of a nonrandom result from error, one that occurs despite a nonzero offset. A patch to detect and halt such runs was added. (See item 4 in the CUDALucas bug and wish list attached at https://www.mersenneforum.org/showpo...24&postcount=3)

Quote:
# (in perl form) application-specific bad residues, indicative of some problem causing the calculation to go wrong
# for applications other than gpuowl their detection means the run should be halted and the problem fixed before continuing
# for gpuowl, the Gerbicz check will cause a lot of iterations recalculation requiring more time. Fixing the issue is recommended
%badresidues=(
 'cllucas',   '0x0000000000000002, 0xffffffff80000000',
 'cudalucas', '0x0000000000000000, 0x0000000000000002, 0xffffffff80000000, 0xfffffffffffffffd',
 'cudapm1',   '0x0000000000000000, 0x0000000000000001, 0xfff7fffbfffdfffe, 0xfff7fffbfffdffff, 0xfff7fffbfffffffe, 0xfff7fffbffffffff, '.
              '0xfff7fffffffdfffe, 0xfff7fffffffdffff, 0xfff7fffffffffffe, 0xfff7ffffffffffff, 0xfffffffbfffdfffe, 0xfffffffbfffdffff, '.
              '0xfffffffbfffffffe, 0xfffffffbffffffff, 0xfffffffffffdfffe, 0xfffffffffffdffff, 0xfffffffffffffffe, 0xffffffffffffffff',
 'gpuowl',    '0x0000000000000000',
 'mfaktc',    '',
 'mfakto',    ''
);
#fff* added to cudapm1 list 7/19/18
# note, since the second to last LL iteration's full residue can be +-2^[(p+1)/2] for a Mersenne prime,
# and, above M127, that looks in a res64 like '0x0000000000000000' or '0xffffffffffffffff', special handling may be required
# for iteration p-3 for cllucas and cudalucas; add checks for the below to cllucas and cudalucas checking code as (ok) exceptions to bad residues
$llpm3okresidues='0x0000000000000000, 0xffffffffffffffff';
# see http://www.mersenneforum.org/showthread.php?t=5862
# see also http://www.hoegge.dk/mersenne/resultspenultimate.txt
# http://www.hoegge.dk/mersenne/penultimate.txt
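A Python rendering of the perl table above may be handier for scripted log checking. The `is_bad` helper and its signature are my own illustrative invention, and the residue lists here are abbreviated to a few entries each; the full lists are in the perl block above.

```python
# Known-bad res64 markers per application (abbreviated; see the perl table).
BAD_RESIDUES = {
    "cllucas":   {0x0000000000000002, 0xFFFFFFFF80000000},
    "cudalucas": {0x0000000000000000, 0x0000000000000002,
                  0xFFFFFFFF80000000, 0xFFFFFFFFFFFFFFFD},
    "gpuowl":    {0x0000000000000000},
    "mfaktc":    set(),
    "mfakto":    set(),
}

# res64 values that are legitimate at LL iteration p-3 of a Mersenne prime
LL_PM3_OK = {0x0000000000000000, 0xFFFFFFFFFFFFFFFF}

def is_bad(app, res64, iteration=None, p=None):
    """True if res64 is a known-bad marker for this application,
    excepting the legitimate penultimate-area values at iteration p-3."""
    if iteration is not None and p is not None and iteration == p - 3:
        if res64 in LL_PM3_OK:
            return False
    return res64 in BAD_RESIDUES.get(app, set())
```

A log scanner could call this on each interim res64 and halt (or, for gpuowl, just warn) on a hit, per the guidance in the comments above.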
You might find the strategic double check thread https://www.mersenneforum.org/showth...462#post508462 and trippple check thread https://www.mersenneforum.org/showth...=17108&page=82 interesting background also.

Historically, error rates were somewhat higher. https://www.mail-archive.com/mersenn.../msg07476.html

With the approximate empirical 2% error rate per completed primality test, and certain assumptions that seem plausible, the chance of one exponent (total, out of the 50-million-plus prime exponents p < 10^9, not individually per prime exponent) having two matched wrong residues is ~2.9 ppm. This seems to me to be a lower bound for matched wrong residues slipping by error detection. It's difficult to estimate probabilities for the nonrandom sources of incorrect matching residues (undetected software bugs, malicious reports, etc.), which are additional. So let's suppose for now that the combined chance of random and nonrandom error producing matching wrong residues is 10 ppm. Assuming further that it is distributed uniformly and independently over the ~50,847,478 prime exponents below 10^9, containing a probable number of ~55 Mersenne primes, the chance of matching wrong residues occurring, times the chance of it coinciding with a Mersenne prime, is 10 ppm × 55/50847478, or ~1.08×10^-12. If we assume the occurrence of matching wrong residues is somehow connected to the Mersenne number being prime, the probability estimate of missing a Mersenne prime rises to the assumed 10 ppm value. If we assume the occurrence of matching wrong residues is somehow connected to the Mersenne number being composite, the probability estimate of missing a Mersenne prime through matched wrong residues falls to zero. We could make various sets of assumptions about the relative probabilities of the three cases (independent, prime-connected, composite-connected) and compute new probabilities, as possible estimates of the real probability. At some point, such estimates rest on too shaky a foundation of assumptions and guesses to pursue further. Perhaps someone with a better background in statistics could help here.
Working three cases here for illustration, carrying to much higher precision than justified, with the weight on "independent" set to 99.98%, 98%, and 34% respectively (the remainder split equally between prime-connected and composite-connected):

0.9998 x 1.08x10^-11 + 0.0001 x 10 ppm + 0.0001 x 0 ppm ≈ 0.0010108 ppm
0.98 x 1.08x10^-11 + 0.01 x 10 ppm + 0.01 x 0 ppm ≈ 0.1000106 ppm
0.34 x 1.08x10^-11 + 0.33 x 10 ppm + 0.33 x 0 ppm ≈ 3.3000037 ppm
Intuition tells me to weight "independent" heavily, but it's unclear how many nines to give it.
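The weighted-case arithmetic above can be sketched in a few lines. All figures here are the post's assumptions (10 ppm matched-wrong-residue chance, ~55 Mersenne primes among the 50847478 prime exponents below 10^9), not measured values:

```python
PRIMES_BELOW_1E9 = 50_847_478   # prime exponents p < 10^9
EXPECTED_MPRIMES = 55           # assumed Mersenne-prime count in that range
P_MATCH = 10e-6                 # assumed chance of matching wrong residues

# If matching wrong residues strike an exponent at random, the chance they
# coincide with a Mersenne prime:
p_independent = P_MATCH * EXPECTED_MPRIMES / PRIMES_BELOW_1E9  # ~1.08e-11

# Weighted mixtures of the three cases
# (independent / prime-connected / composite-connected):
for w_ind, w_prime, w_comp in [(0.9998, 0.0001, 0.0001),
                               (0.98, 0.01, 0.01),
                               (0.34, 0.33, 0.33)]:
    p = w_ind * p_independent + w_prime * P_MATCH + w_comp * 0.0
    print(f"{w_ind:6.2%} independent -> {p / 1e-6:.7f} ppm")
```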

Now note that the earlier assumption that the chance of error is distributed uniformly among the prime exponents is a convenient simplification for estimating probabilities, but it is wrong. It would be hard to get the primality of M2 or M3 wrong; it gets easier, and an error more likely, as the exponent gets bigger. I suppose we could sum the relative run times of all prime exponents and assign each a computed probability of error proportional to its run time, fitted to the empirical experience.
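A toy version of that run-time-weighted idea, on a small stand-in range. The run-time scaling law used here (~p^2 log p, from ~p iterations of ~p log p FFT work each) is an assumption for illustration, not a fitted model:

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(n + 1) if sieve[i]]

def runtime_weight(p):
    # assumed LL run time ~ p^2 * log p
    return p * p * math.log(p)

ps = primes_up_to(10_000)                 # toy stand-in for p < 10^9
total = sum(runtime_weight(p) for p in ps)

# Share of the total error probability landing on a small vs a large exponent:
small, large = ps[1], ps[-1]              # p = 3 vs p = 9973
print(f"p={small}: {runtime_weight(small)/total:.2e}  "
      f"p={large}: {runtime_weight(large)/total:.2e}")
```

Even on this toy range, essentially all of the error probability lands on the largest exponents, as the intuition about M2 and M3 suggests.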

The odds of three matching wrong residues due to independent error would be much smaller. As I recall, triple checks were done for all exponents below ~3M. Some exponents have had many more matching residues reported. See for example https://www.mersenne.org/report_expo...=101000&full=1
and note that in that range, any matching PRP results were preceded by matching LL results. It's my understanding that so far, all GIMPS discoveries of Mersenne primes were made by first LL test, not by double check or later.

At the outset I gave a rough figure of 2% for the LL test error rate. From my own running experience, it's clearly possible to do much better. Over a period producing 447 verified LL tests, I also produced 6 bad residues, for a rate of 1.32% overall: 1.2% on prime95, 1.47% on GPUs. A small sample of verified PRP3 tests (23 prime95, 1 gpuowl) had zero errors. More to the point, decommissioning CPUs and GPUs that produced bad residues has led to zero GPU-produced bad residues since late 2017, and only one CPU-produced bad residue since then. Additional software checks would also help prevent completion and submission of bad runs.

It's also clearly possible to do much worse than the 2% figure. A small sample of 47 LL tests on 21 100Mdigit exponents yields at least 9 bad residues, possibly as many as 13, for an estimated error rate of ~19% per LL test in that region. Extrapolating based on run time yields an estimated ~88% error rate per test for 300Mdigit exponents.
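One way to reproduce an extrapolation of that shape: assume errors arrive at a constant rate per unit of run time, so P(bad test) = 1 - exp(-λt), and assume LL run time scales roughly as p^2 log p. Both assumptions and the exponent values are illustrative; only the 19% figure at 100Mdigit comes from the post:

```python
import math

def runtime(p):
    # assumed scaling: ~p iterations of ~p log p FFT work each
    return p * p * math.log(p)

p_100Mdigit = 332_200_000        # exponent giving ~100M decimal digits
p_300Mdigit = 996_600_000        # exponent giving ~300M decimal digits

lam_t = -math.log(1 - 0.19)      # hazard accumulated over one 100Mdigit test
ratio = runtime(p_300Mdigit) / runtime(p_100Mdigit)
p_err = 1 - math.exp(-lam_t * ratio)
print(f"run-time ratio ~{ratio:.1f}x, extrapolated error rate ~{p_err:.0%}")
```

Under these assumptions the run-time ratio comes out near 9.5x and the extrapolated per-test error rate lands in the same ballpark as the ~88% quoted above.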

George computed the likely number of Mersenne primes below p = 10^9 according to the Wagstaff conjecture and posted the result, 57.09, at https://www.mersenneforum.org/showpo...&postcount=204. Note that's a bit higher than the 55 I used above in computing estimates of matching-wrong-residue probability, but not by enough to shift the probabilities much; two more primes means about a 4% larger value for the primes' generally negligible contribution.

Top of this reference thread: https://www.mersenneforum.org/showpo...89&postcount=1
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2021-08-13 at 15:31 Reason: updated for PRP/GEC test error rate

2019-03-30, 14:58 #20 kriesel     "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 5909 Posts Costs

Cost will vary widely depending on: the age/speed/efficiency of the computing hardware used; local electrical rates, including any applicable taxes; whether the waste heat is a benefit for comfort heating, is vented to the outdoors without additional cost, or constitutes a load raising air conditioning needs and cost; and whether equipment is being purchased for the purpose (with assumptions about depreciation rate) or was already purchased for other reasons, so that only the possibility of increased wear and tear is considered.

Some ballpark figures (all US$), mostly from my own fleet, for primality testing around 85M, per exponent tested. These include 4-year straight-line depreciation, to zero salvage/resale value, of hardware purchased used or new, electricity at US$0.11663/kWh, and neither heating benefit nor cooling penalty:

GPU (PRP in gpuowl or LL in CUDALucas): around $1.07 (Radeon VII), $2.29 (RX480) to $3.23 for modern new AMD or NVIDIA GPUs, up to $4.75 to $9 for old used CUDA 2.x GPUs.
CPU: $3.37 for e5-2670 or e5-2690, up to $6.50 for an i7-7500U based laptop, $6.70 for an i7-8750H based laptop, $7.40 for an X5650 tower, $9.30 for an E5645 tower, $11.40 for an E5520, $19.50 for a Core 2 Duo; 32-bit Intel processors are even higher. (Very old CPUs can be both too slow for most assignments' expiration limits and cost hundreds of dollars per primality test at 85M, or $3000 to $5500 for a Pentium 133, which would also take about 45 YEARS!)
Phone: price, timings, and wattage for a used Samsung S7 running Mlucas 18, provided by ewmayer, yielded around $8.60.

The electrical cost only ranges from $0.81 (Radeon VII) or $1.71 (GTX1080) to $8 (Quadro 5000) for GPUs tested; new laptops $0.72; e5-26x0 $2.20; i3-370M $3.36; X5650 $6.20; E5645 $7.55; E5520 $8.12; Core 2 Duo $12; S7 phone $2.93.

Costs only matter if the software will run the desired operands successfully.
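The cost model behind those per-exponent figures (straight-line depreciation plus electricity, no heating or cooling adjustment) can be sketched as below. The $0.11663/kWh rate and 4-year depreciation are from the post; the example wattage, purchase price, and run time are hypothetical:

```python
RATE = 0.11663          # US$ per kWh, from the post
DEPRECIATION_YEARS = 4  # straight line, to zero salvage/resale value

def cost_per_test(purchase_price, watts, test_days):
    """Cost of one primality test: electricity plus depreciation share."""
    electricity = watts / 1000 * test_days * 24 * RATE
    depreciation = purchase_price * test_days / (DEPRECIATION_YEARS * 365)
    return electricity + depreciation

# Hypothetical example: a 300 W GPU bought for $700, 2.5 days per 85M test.
print(f"${cost_per_test(700, 300, 2.5):.2f} per exponent")  # -> $3.30
```

Plugging in measured wattage and per-test run time for a given card reproduces figures in the range listed above.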
There was a time when Preda reported good results on a (16 GB) Radeon VII, but users were unable to run gpuowl P-1 successfully on GPUs on Windows. CUDAPm1 runs on a variety of NVIDIA hardware ranging from 1 to 11 GB, but is unable to do both stages on any GPU I have tried above p ~ 432,500,000. Prime95 on the FMA3-capable i7-8750H seems to be the best bet for high-p P-1; I have 901M running now.

For my natural gas heating, furnace specs, central AC specs, and utility rates, the heating benefit reduces the net electrical cost by 20.6%, while the cooling load will increase it 36% and the non-heating-season sales tax will increase it another 5.5%. (Sales tax is not applied to heating fuel or electricity during the heating season here.) These effects combine to make the marginal electrical cost 78% higher in the cooling season than in the heating season. Therefore, some systems that are economic to run during the heating season are not when there's no heating benefit, and additional systems become uneconomic during the cooling season.

Using cloud computing is an interesting alternative. It's hard to beat free, as in free trials for hundreds of hours. Otherwise, costs vary, but around $7/85M is feasible at spot rates; lower than the electrical cost for some of my existing hardware. Some rough data and links related to cloud computing for GIMPS follow.

How-to guide for running LL tests on the Amazon EC2 cloud: https://www.mersenneforum.org/showpo...21&postcount=1
Amazon EC2, 36 cores with 144 GB RAM and 2x900 GB SSD, is $0.6841 per hour.
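The 78% seasonal figure follows from treating the post's percentages as additive adjustments to the base electrical rate, a sketch of my reading of the arithmetic:

```python
# Seasonal adjustments to the marginal electrical cost, from the post:
heating_benefit = 0.206   # heating season: waste heat offsets furnace fuel
cooling_penalty = 0.36    # cooling season: extra air-conditioning load
sales_tax = 0.055         # applied outside the heating season here

heating_season = 1 - heating_benefit              # 0.794x the base rate
cooling_season = 1 + cooling_penalty + sales_tax  # 1.415x the base rate
print(f"cooling vs heating: {cooling_season / heating_season - 1:.0%} higher")
# -> about 78% higher, matching the figure above
```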
2017 cost per primality test at 80M: $6.21 (extrapolates to about $7.05/85M) https://www.mersenneforum.org/showpo...6&postcount=23
2019 current EC2 costs ~$.019/hr; $6.40 to $9.70 for an 89M primality test (so ~$5.80 and up for 84M) https://www.mersenneforum.org/showpo...37&postcount=2
Google Colaboratory "Colab" (free): https://www.mersenneforum.org/showthread.php?t=24839
M344587487 contemplating providing a PRP testing service at around $5/85M: https://www.mersenneforum.org/showth...138#post512138
Packet (https://www.phoronix.com/scan.php?pa...acket-Roll-Out): 32 ARM cores @ 3.3 GHz + 128 GB of RAM and 480 GB of SSD storage, $1/hour. This worked out, per https://www.mersenneforum.org/showpo...9&postcount=23, to 30.73 ms/iter at 84M, an astonishingly costly $717/84M exponent. Ernst Mayer estimates several instances, rather than a single instance, would produce better performance and cost/throughput. Debian 9, Ubuntu 16.04 LTS, and Ubuntu 18.04 LTS are the current operating system options for this Ampere instance type. Numerous instance types here; note discounts for reserved and spot: https://www.packet.com/cloud/servers/
Google Compute: https://www.mersenneforum.org/showpo...96&postcount=4; free trial: https://cloud.google.com/free/docs/gcp-free-tier
Microsoft Azure: https://www.mersenneforum.org/showthread.php?t=21440
Other providers: https://www.atlantic.net/cloud-hosting/pricing/ https://www.hetzner.com/cloud https://www.scaleway.com/pricing/ https://www.ovh.com/world/vps/ https://us.ovhcloud.com/products/ser...ucture-servers

Contrasting to personal GPU cost, ~$2 and up per 85M exponent: https://www.mersenneforum.org/showpo...44&postcount=3 Judicious clock and voltage tweaking may improve those numbers.
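The $717 Packet figure checks out directly from the per-iteration timing, since an LL or PRP test of exponent p takes roughly p iterations:

```python
# Convert a per-iteration timing to cost per exponent at a fixed hourly rate.
ms_per_iter = 30.73       # measured on the Packet Ampere instance (per post)
exponent = 84_000_000     # ~ iteration count for an 84M test
rate_per_hour = 1.00      # US$ per hour

hours = ms_per_iter / 1000 * exponent / 3600
print(f"{hours:.0f} hours -> ${hours * rate_per_hour:.0f} per exponent")
# -> 717 hours -> $717 per exponent
```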
For an example of electrical power variation with clock, see https://www.mersenneforum.org/showpo...1&postcount=52

Top of this reference thread: https://www.mersenneforum.org/showpo...89&postcount=1
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2020-02-20 at 20:52 Reason: misc updates in costs only matter paragraph, colab thread link added
