20200429, 18:51  #45 
If I May
"Chris Halsall"
Sep 2002
Barbados
245B_{16} Posts 

20200501, 16:32  #46  
If I May
"Chris Halsall"
Sep 2002
Barbados
41·227 Posts 
Quote:
There would be real value in being able to use mfakt[co] for ongoing sanity checking of GPU "kit"*. Even if it requires doing another TF run, some might "opt-in" to doing this kind of work. It would be fine if it was run on the same user's GPU(s), since this isn't MP stuff, just (largely) "no factors found" results for MP candidates.

It would be relatively trivial to expand the data stream from mfakt* to include the checksum: a simple parsing change on Primenet, and one additional table or field. Has anyone with the requisite skillset had a look at this yet? Kit has empirically demonstrated its potential to degrade over time.

* Definition of "kit" in this context: all of it, sans kittens.

20200501, 20:11  #47  
"Seth"
Apr 2019
2·3^{4} Posts 
Quote:
It's ~80% done:
* I modified mfaktc to return one of the smaller residuals it found.
* I output the new proof-of-work line.
* James may even have implemented part of the verification on the server.
Code:
M59068201 proof_k(8940824503190951977): 29 bits [TF:60:64:mfaktc 0.21 75bit_mul32_gs]

let residual = pow(2, 59068201, 8940824503190951977) = 422536362
let confidence = log10(tests / P) - log10(K / residual)

in this case:
confidence = log10(2^63 / 59068201) - log10(8940824503190951977 / 422536362) = 0.86

I calculated the distribution of confidence somewhere. I think the outcome was a mean of 0.5, and 99.9% of the time confidence < 5. You can also store the confidence from many submissions and make sure it averages < 1.

Last fiddled with by SethTro on 20200501 at 20:13
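Spelled out in Python, the check a server could do on that proof line looks something like this (a sketch, not actual server code; the `tests = 2**63` scale is taken from the example above and is an assumption for a 60-64 bit run):

```python
from math import log10

# Numbers from the proof line above
P = 59068201                      # exponent of M59068201 = 2^P - 1
q = 8940824503190951977           # proof_k candidate with the smallest residual
tests = 2 ** 63                   # search-space scale used in the example

# Recompute the residual server-side: the exponent is only ~26 bits,
# so this is ~26 squarings mod a 63-bit number, essentially free.
residual = pow(2, P, q)           # 422536362 per the post; != 1, so q is not a factor

confidence = log10(tests / P) - log10(q / residual)
print(confidence)                 # ~0.86 per the post
```

A large positive confidence would mean the reported residual is nowhere near as small as an honest search of that many candidates should have found.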

20200501, 21:53  #48  
"Seth"
Apr 2019
2×3^{4} Posts 
Quote:
https://github.com/sethtroisi/mfaktc/tree/proof

For the smaller kernels in mfaktc you have access to the residual. For the larger kernels the residual is not always computed, and we'd need a 2nd proof function, which would be similar but wouldn't directly minimize the residual, instead minimizing some other intermediate product (e.g. the lower 32 bits of the residual).

20200501, 22:08  #49  
If I May
"Chris Halsall"
Sep 2002
Barbados
41·227 Posts 
Quote:
The question many will ask: is it worthwhile to do this? A missed factor doesn't really impact GIMPS much (and certainly won't result in a missed MP). But I think there may be some interest in this ability (if it isn't too computationally expensive; the second runs could be a random sampling).

20200501, 23:38  #50  
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
7×631 Posts 
Quote:
Last fiddled with by kriesel on 20200501 at 23:39 

20200502, 00:16  #51  
"Seth"
Apr 2019
10100010_{2} Posts 
Quote:
Additionally, residuals are randomly distributed (and residuals with many leading zeros are rare), so we are verifying that they did a large amount of correct work without having to run a 2nd check. I'm not sure how much description is needed; if this doesn't make sense I can add more details.
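One way to sanity-check that claim is to simulate an honest run: draw residuals uniformly, report the smallest, and score it with the confidence formula from post #47 (a hypothetical simulation; the stand-in 64-bit q and the function name are made up):

```python
import random
from math import log10

def simulate_confidence(num_tests, q_bits=64, seed=0):
    """Simulate an honest TF run: draw `num_tests` residuals uniformly
    in [1, q), keep the smallest (as the proof-of-work scheme would
    report), and score it with
        confidence = log10(num_tests) - log10(q / smallest_residual).
    For honest work this stays near 0."""
    rng = random.Random(seed)
    q = 2 ** q_bits - 59                     # stand-in candidate of about q_bits bits
    smallest = min(rng.randrange(1, q) for _ in range(num_tests))
    return log10(num_tests) - log10(q / smallest)
```

A cheater who tested far fewer candidates than claimed would only rarely stumble on a residual small enough to keep the score low, and averaging the score over many submissions makes sustained cheating stand out.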

20200502, 13:27  #52  
If I May
"Chris Halsall"
Sep 2002
Barbados
41·227 Posts 
Quote:
What do people think? Is this worth implementing? It could be done in such a way that people "opt-in" to the (slightly more expensive) codepath. Perhaps running a percentage of jobs using this, for sanity checking? Thoughts?

Oliver, are you monitoring this thread? At the end of the day, it would be up to you to decide whether to pull this into your main codebase.

20200502, 15:28  #53 
6809 > 6502
"""""""""""""""""""
Aug 2003
101×103 Posts
My 2 centavos is that any overhead that is not error-checking related should not be more than 0.5%. Otherwise the loss vs. gain doesn't make sense.

What is the potential gain (speed-wise) to the project? Based upon examination of the "expected number of factors", we don't have systematic cheating. We have had a few idiots; the total loss due to them is nearly nil. Does this capture missed factors due to errors (thus speeding the project)? No. Does this find false factors? No. Is this an actual residual of the calculation, or is it just a manufactured calculation? It is not a natural byproduct of the calculation like the LL residual is.

This is my very naive concept: why can't the program take the result of the mod operation at certain select k values (or certain classes) and do a rolling operation on them, like a CRC? The k values to be selected could be calculated based upon the exponent. Doing the operation every 10000 k's tested should not put an undue burden on operation, and would prove that at least those items were done in sequence.
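The rolling-CRC idea in the last paragraph might look something like this (a hypothetical sketch, not actual mfaktc code; the function name and stride are made up):

```python
import zlib

def tf_with_rolling_crc(P, k_values, stride=10000):
    """Trial factor M_P = 2^P - 1 over the supplied candidate k values,
    folding the residual of every `stride`-th candidate into a rolling
    CRC-32, so a server could later spot-check that the candidates were
    actually processed in sequence."""
    crc = 0
    for i, k in enumerate(k_values):
        q = 2 * k * P + 1
        r = pow(2, P, q)                 # residual; r == 1 means q divides M_P
        if r == 1:
            return "factor", q, crc
        if i % stride == 0:
            crc = zlib.crc32(r.to_bytes(8, "little"), crc)
    return "no_factor", None, crc
```

Note this only verifies anything if the list of k values itself is reproducible from run to run, which is exactly the objection raised in the replies that follow.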
20200502, 17:14  #54 
Romulan Interpreter
Jun 2011
Thailand
2220_{16} Posts 

20200502, 18:32  #55 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
7×631 Posts 
As George has already pointed out, among others, the list of k values tried in GPU trial factoring is not reproducible from run to run of a given exponent/bit-level combination, because the factor-candidate sieving is not deterministic, letting some composite factor candidates through to be tried against the Mersenne number. It's done that way because it's faster than perfectly sieving the factor candidates down to primes only.
That means the list of k values will vary, so the n·10^c-th k value will vary, so the CRC will vary, for the same inputs. Fixing that issue would require completely sieving every factor candidate, not just every 10^c-th one. I think Gerbicz's approach is more easily made immune to this leaky-sieve issue. And since it is a single-word binary mod of something that has already been reduced mod the factor-sized candidate, that part should be very quick. Only the factor candidates for the Gerbicz check-values list need be further screened for primality. Last fiddled with by kriesel on 20200502 at 18:34
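The sieve-immunity point can be sketched as follows: derive the check candidates deterministically from the exponent and bit level alone, and screen each q = 2kP+1 for primality, so the check set is identical no matter how leaky each client's sieve is (a hypothetical illustration; `check_residual_sum` and its parameters are not part of any mfakt* code):

```python
def is_prime(n):
    """Deterministic Miller-Rabin; exact for all n < 3.3 * 10^24."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def check_residual_sum(P, bit_lo, bit_hi, count=8):
    """Fold 2^P mod q over `count` deterministic prime candidates
    q = 2kP+1 in [2^bit_lo, 2^bit_hi).  The k values depend only on
    (P, bit_lo), never on a client's sieve, so every client that did
    the work honestly reports the same 64-bit sum."""
    k = 2 ** (bit_lo - 1) // P + 1   # smallest k with q just above 2^bit_lo
    acc, found = 0, 0
    while found < count:
        q = 2 * k * P + 1
        if q >= 2 ** bit_hi:
            break                    # range exhausted before `count` primes
        if is_prime(q):
            acc = (acc + pow(2, P, q)) & 0xFFFFFFFFFFFFFFFF
            found += 1
        k += 1
    return acc
```

Only these few check candidates need the extra primality screening; the main sieve can stay as leaky and fast as it is today.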