2019-05-11, 18:16  #12 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
3^{2}·547 Posts 
Why don't we save interim residues on the primenet server?
This is often asked in the context of wanting to continue a run that someone else abandoned before completion. It's not unusual for a participant to quit when their assigned exponent(s) are anywhere from 2% to 98% complete in a primality test.
Full-length residues saved to the PrimeNet server at some interval, perhaps every 20 million iterations, are sometimes proposed as a means of minimizing the throughput lost to abandoned, uncompleted tests. Implemented across the combined output of GIMPS, this would place a considerable load on the server's resources and require considerable additional expenditure, which is not in the Mersenne Research Inc. budget. For users with slow internet connections, the individual load could also be a considerable fraction of available bandwidth, and transfer times could stall the application and reduce total throughput. See https://www.mersenneforum.org/showpo...&postcount=118 and the detailed analysis and discussion at https://www.mersenneforum.org/showpo...&postcount=124

However, it is feasible to save smaller interim residues, such as 64-bit or 2048-bit, and this is currently being done. Recent versions of prime95 automatically save 64-bit residues at iteration 500,000 and at every multiple of 5,000,000. The 2048-bit residues are generated at the end of PRP tests, possibly only type-1 and type-5 PRP tests, per posts 606-609 of https://www.mersenneforum.org/showth...048#post494079

The stored interim 64-bit residues from different runs can be compared to see whether the runs match along the way, or where one or the other diverges.

Top of this reference thread: https://www.mersenneforum.org/showth...736#post510736
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2020-02-20 at 20:35 
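As a toy illustration of the comparison just described (this is not GIMPS server or client code; the function name and the residue values are invented), interim res64 values from two runs of the same exponent can be checked for the first checkpoint at which they diverge:

```python
# Toy sketch: locate where two runs of the same exponent diverge by
# comparing their stored interim 64-bit residues. The data here is
# hypothetical; real res64 values come from client output or the server.

def first_divergence(run_a, run_b):
    """run_a, run_b: dicts mapping iteration -> res64 hex string.
    Returns the first common checkpoint where the residues differ,
    or None if all shared checkpoints match."""
    for it in sorted(set(run_a) & set(run_b)):
        if run_a[it].lower() != run_b[it].lower():
            return it
    return None

run_a = {5000000: "0x05c21ef8e9eac8b2", 10000000: "0x1a2b3c4d5e6f7a8b"}
run_b = {5000000: "0x05c21ef8e9eac8b2", 10000000: "0xdeadbeef01234567"}
print(first_divergence(run_a, run_b))  # -> 10000000 (runs diverged there)
```

A match at every shared checkpoint suggests (but does not prove) that both runs are computing the same sequence; the first mismatch brackets where an error occurred.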
2019-05-19, 15:58  #13 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
3^{2}×547 Posts 
Why don't we skip double checking of PRP tests protected by the very reliable Gerbicz check?
George Woltman gave a few reasons at https://www.mersenneforum.org/showpo...68&postcount=3.
An example of a bad PRP result is listed at https://www.mersenne.org/report_expo...9078529&full=1, which George has identified as the result of a software bug affecting a single bit outside the block of computation protected by the Gerbicz error check.

However, the development of a method of generating a proof of correct completion of a PRP test, which can be independently verified, will replace PRP double checking at a great savings in checking effort: https://www.mersenneforum.org/showth...ewpost&t=25638 This has been implemented in Gpuowl, mprime/prime95, and on the PrimeNet server. It is planned to be added to Mlucas also.

Top of this reference thread: https://www.mersenneforum.org/showth...736#post510736
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2021-02-02 at 19:28 Reason: updated statement of PRP proof/cert implementation status 
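For readers unfamiliar with the Gerbicz error check mentioned above, here is a minimal sketch at toy scale (plain Python big integers instead of FFT arithmetic; the block size and function name are illustrative, not from any client). It checks the invariant that the running product of block-boundary values satisfies d_new = d_old^(2^L) * 3 (mod N):

```python
# Minimal sketch of the Gerbicz error check (GEC) guarding a Fermat PRP
# test of a small Mersenne number. Real clients use FFT multiplication
# and far larger block sizes; only the checked invariant matters here.

def prp_with_gec(p, L=8):
    """PRP test 3^(2^p) mod M_p with a Gerbicz check every L squarings.
    Returns (is_probable_prime, number_of_checks_passed)."""
    N = (1 << p) - 1
    u = 3          # u = 3^(2^i) mod N after i squarings
    d = 3          # accumulator: product of block-boundary u values
    checks = 0
    for i in range(1, p + 1):
        u = u * u % N
        if i % L == 0:
            d_prev, d = d, d * u % N
            # Invariant: d == d_prev^(2^L) * 3 (mod N); a single flipped
            # bit in the squaring chain breaks this with high probability.
            if pow(d_prev, 1 << L, N) * 3 % N != d:
                raise RuntimeError("hardware/software error detected")
            checks += 1
    return u == 9, checks   # for prime M_p, 3^(2^p) = 3^(M_p + 1) = 9

print(prp_with_gec(107))   # M107 is prime -> (True, 13)
print(prp_with_gec(109))   # M109 is composite -> (False, 13)
```

The verification costs only L extra squarings per check plus one multiply per block, which is why the check is nearly free relative to the p squarings of the test itself.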
2019-05-19, 17:11  #14  
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
3^{2}·547 Posts 
Why don't we self test the applications, immediately before starting each primality test?
That is, self test at the same fft length about to be used for a primality test, such as a current wavefront test or any 100M-digit or larger exponent? Perhaps also upon resumption of an exponent?
(part of this was first posted as https://www.mersenneforum.org/showpo...0&postcount=10)

Users might find the checks annoying or regard them as lost throughput. Running LL on 100M-digit exponents would be disincentivized, since it would also involve working on a 100M-digit PRP DC so that there is an fft length match. One might as well run PRP for 100M-digit exponents and avoid both the side self test and the commitment to doing a 100M-digit DC. Increasing adoption of PRP and reducing LL for 100M-digit exponents is a good thing.

There are also some application-specific or interface-specific reasons. There is no GIMPS PRP code or Gerbicz check code for CUDA, and there is no provision for self test of fft lengths larger than 8192K in CUDALucas.

Top of this reference thread: https://www.mersenneforum.org/showth...736#post510736
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2020-02-20 at 19:34 

2019-05-20, 02:04  #15 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
3^{2}×547 Posts 
Why don't we occasionally manually submit progress reports for longduration manual primality tests?
There's currently no way to do that.
This is a CUDALucas console output line:
Code:
 May 19 20:00:49  M49602851  30050000  0x05c21ef8e9eac8b2  2688K  0.15625  2.0879  104.39s  11:15:47  60.58%
Submitting it to the manual results page earns no credit:
Code:
Done processing:
* Parsed 1 lines.
* Found 0 datestamps.
GHz-days   Qty   Work Submitted   Accepted   Average
       0   all                                  0.000
Accepting gpuowl progress records would also be very useful.

Top of this reference thread: https://www.mersenneforum.org/showth...736#post510736
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2020-06-27 at 14:30 
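As a sketch of what accepting such lines would involve, here is a hypothetical parser for the console line shown above. The regular expression is fitted to that one sample; it is not an official CUDALucas format specification, and the field names are invented:

```python
import re

# Hypothetical parser extracting the fields a manual progress report
# would need (exponent, iteration, interim res64, percent complete)
# from a CUDALucas console line like the sample in this post.

LINE = (" May 19 20:00:49  M49602851  30050000  0x05c21ef8e9eac8b2  "
        "2688K  0.15625  2.0879  104.39s  11:15:47  60.58%")

PAT = re.compile(
    r"M(?P<exponent>\d+)\s+(?P<iteration>\d+)\s+"
    r"(?P<res64>0x[0-9a-fA-F]{16})"     # interim 64-bit residue
    r".*?(?P<percent>\d+\.\d+)%")       # first "float%" after the res64

m = PAT.search(LINE)
progress = {
    "exponent": int(m.group("exponent")),
    "iteration": int(m.group("iteration")),
    "res64": m.group("res64"),
    "percent": float(m.group("percent")),
}
print(progress)
```

A server-side progress endpoint would need roughly this information per report, plus authentication and an assignment ID to attach it to.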
2019-10-02, 05:09  #16  
Romulan Interpreter
Jun 2011
Thailand
10010000111011_{2} Posts 
Why don't we extend B1 or B2 of an existing no-factor P-1 run?
Quote:
Extending B1 is a bit trickier, because you need to recompute the additional small primes that fit into the new B1, and do the exponentiation required to include them in the new product (b^E). There is a piece of pari/gp P-1 code I posted some time ago which does B1 extension, but it is slow: first of all, it is pari, and second, it only uses "chunks" of 2 primes (i.e. no stage 2 extensions); but it can save intermediary files and extend B1 too. Also, once you extend B1, you must do stage 2 "from scratch"; whatever stage 2 you did before, for the same B2 (or more, or less), is void.

(Kriesel:) Mostly though, we don't do P-1 bounds extensions because:
Top of this reference thread: https://www.mersenneforum.org/showth...736#post510736
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2021-03-02 at 19:53 Reason: Add title, list of reasons for status quo 
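The quoted B1-extension mechanics can be sketched at toy scale (plain Python big integers; the function names and bounds are illustrative, and a real client folds the primes into the residue incrementally rather than forming E explicitly). The key point is that the old stage 1 product divides the new one, so only the quotient needs to be applied to the saved stage 1 residue:

```python
# Toy sketch of P-1 stage 1 and a B1 extension for a small Mersenne
# number. Stage 1 computes x = 3^E mod N, where E packs the exponent p
# and every prime power q^k <= B1; extending B1 means recomputing the
# newly admitted primes and raising the saved residue to that quotient.

def small_primes(limit):
    """Primes <= limit by a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, limit + 1, i)))
    return [i for i in range(limit + 1) if sieve[i]]

def stage1_exponent(p, B1):
    """E = p * product of q^k over primes q <= B1 with q^k <= B1."""
    E = p
    for q in small_primes(B1):
        qk = q
        while qk * q <= B1:
            qk *= q
        E *= qk
    return E

p = 1279                                   # toy Mersenne exponent
N = (1 << p) - 1
x = pow(3, stage1_exponent(p, 50), N)      # stage 1 run to B1 = 50, saved

# Extension from B1 = 50 to B1 = 200: apply only the extra prime powers.
extra = stage1_exponent(p, 200) // stage1_exponent(p, 50)
x_ext = pow(x, extra, N)
assert x_ext == pow(3, stage1_exponent(p, 200), N)  # matches a full run
# A real run would now take gcd(x_ext - 1, N) and redo stage 2 entirely.
```

This also shows why the quoted post says stage 2 is voided: stage 2 starts from the stage 1 residue, and after extension that residue has changed.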

2020-06-19, 19:41  #17 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
11473_{8} Posts 
Why don't we do proofs and certificates instead of double checks (and triple and higher)?
Update:
We can and do. Everyone who can upgrade to PRP, GEC, and proof generation for first primality tests (prime95/mprime v30.3 or later; gpuowl ~v6.11-316 or later; Mlucas v20 coming at some point, meanwhile use v19.1 for PRP/GEC without proof generation) should do so as soon as possible, and stop performing LL first tests.

Original post:

Because until recently we didn't know it was possible to generate proofs of PRP tests for these huge Mersenne numbers at considerably less effort than a repeat PRP or LL test. The development of new code to do proofs and verifications, followed by widespread deployment of client applications that generate proofs and of server infrastructure to accept proofs and perform verifications, will take around a year or more to complete. Gpuowl is closest to being ready to provide proofs; prime95 and Mlucas haven't begun to get this added yet as of mid-June 2020. There's also separate verifier code to write, server modification for storing new data types, manual result handling modification, and extension of the PrimeNet API to accommodate it for prime95.

Some threads regarding this recent development:
Announcement: The Next Big Development for GIMPS (layperson's and informal discussion here)
Technical: VDF (Verifiable Delay Function) and PRP (leave this one for the number theorists and crack programmers)
Technical background: Efficient Proth/PRP Test Proof Scheme (also a math/number-theory thread; let's leave this one for theorists too)

This is an exciting development. It offers elimination of almost all confirmation effort on future PRP tests, so it will substantially increase testing throughput (eventually). It is a high priority for design and implementation right now. Other possible gpuowl enhancements are likely to wait until this is at least ready for some final testing. 
Top of this reference thread: https://www.mersenneforum.org/showth...736#post510736
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2021-02-16 at 16:41 
2020-06-27, 14:49  #18 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
3^{2}×547 Posts 
Why don't we run gpu P-1 factoring's gcds on the gpus?
The software doesn't exist.
Currently CUDAPm1 stalls the gpu it runs on for the duration of a stage 1 or stage 2 gcd, which runs on one core of the system cpu. Earlier versions of gpuowl that performed P-1 also stalled the gpu while running the gcd of a P-1 stage on a cpu core. At some point, Mihai reprogrammed it so that a separate thread runs the gcd on one cpu core while the gpu goes ahead and speculatively begins the second stage of the P-1 factoring in parallel with the stage 1 gcd, or the next worktodo assignment in parallel with the stage 2 gcd when one is available. (About 98% of the time, a P-1 factoring stage won't find a factor, so continuing is a good bet, and preferable to leaving the gpu idle during the gcd computation.) In all cases, these gcds are performed by the GMP library.

It was a more efficient use of programmer time to implement it that way quickly, using an existing library routine. On a fast cpu the impact is small; on slow cpus hosting fast gpus it is not. Borrowing a cpu core for the gcd has the undesirable effect of stopping a worker in mprime or prime95 for the duration, and may also slow Mlucas, unless hyperthreading is available and effective.

To my knowledge no one has yet written a gpu-based gcd routine for GIMPS-size inputs. For gpu use for gcd in other contexts, see http://www.cs.hiroshimau.ac.jp/cs/_...apdcm15gcd.pdf (RSA) and https://domino.mpiinf.mpg.de/intran...FILE/paper.pdf (polynomials). Even if one were written for the large inputs of current and future GIMPS work, a new gpu gcd routine could be difficult to share between CUDAPm1 and gpuowl, since gpuowl is OpenCL based but CUDAPm1 is CUDA based, and the available data structures probably differ significantly.

Top of this reference thread: https://www.mersenneforum.org/showth...736#post510736
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2021-03-02 at 19:55 
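The overlap structure described above (gcd on a cpu thread while the gpu speculatively continues) can be modeled in a few lines. This is a toy sketch with invented names, not gpuowl's actual code; M11 = 2047 = 23 × 89 and the toy stage 1 exponent E = 2 × 11 covers 23 − 1 = 2 × 11 but not 89 − 1 = 8 × 11, so the gcd yields the factor 23:

```python
import threading
from math import gcd

def stage1_gcd(x, N, out):
    """The gcd step; ~98% of the time on real candidates it finds nothing."""
    out["factor"] = gcd(x - 1, N)

def speculative_stage2():
    """Stand-in for the stage 2 work the gpu starts without waiting."""
    return "stage2-residue"

def finish_stage1(N, x):
    result = {}
    t = threading.Thread(target=stage1_gcd, args=(x, N, result))
    t.start()                     # gcd runs on a cpu core...
    s2 = speculative_stage2()     # ...while stage 2 proceeds speculatively
    t.join()
    f = result["factor"]
    if 1 < f < N:
        return ("factor", f)      # speculative stage 2 effort is discarded
    return ("no factor", s2)      # the usual case: keep the stage 2 work

N = (1 << 11) - 1                 # M11 = 2047 = 23 * 89
x = pow(3, 2 * 11, N)             # toy stage 1 residue
print(finish_stage1(N, x))        # -> ('factor', 23)
```

The design trade-off is visible here: the speculative work is wasted only in the rare factor-found case, which is cheaper on average than idling during every gcd.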
2020-12-16, 17:25  #19 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
4923_{10} Posts 
Why don't we use 2 instead of 3 as the base for PRP or P1 computations?
Mersenne numbers are base-2 Fermat pseudoprimes: whether actually prime or composite, all of them would be indicated as probably prime by a base-2 Fermat PRP test, and base 2 is similarly uninformative in P-1 factoring. Using 3 as the base provides useful information and costs no more computing time; using 2 as the base provides no useful information. That's a summary of my understanding of this thread as it relates to base choice.
Top of this reference thread: https://www.mersenneforum.org/showth...736#post510736
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2020-12-17 at 16:41 
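The base-2 pseudoprime claim is easy to verify on small Mersenne numbers; this sketch uses M11 (composite) and M13 (prime):

```python
# Composite Mersenne numbers still pass a base-2 Fermat test (they are
# base-2 pseudoprimes), so base 2 yields no information; base 3
# distinguishes prime from composite.

M11 = 2 ** 11 - 1                  # 2047 = 23 * 89, composite
M13 = 2 ** 13 - 1                  # 8191, prime

print(pow(2, M11 - 1, M11))        # -> 1: composite, yet passes base 2
print(pow(3, M11 - 1, M11) == 1)   # -> False: base 3 exposes compositeness
print(pow(3, M13 - 1, M13))        # -> 1: consistent with M13 prime
```

The base-2 result follows because the order of 2 mod M_p is p, and p divides M_p − 1 = 2^p − 2 by Fermat's little theorem, for every prime p.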