mersenneforum.org Google Cloud Compute 31.4 Trillion Digits of Pi

2019-07-18, 17:24   #23
R. Gerbicz

"Robert Gerbicz"
Oct 2005
Hungary

1,493 Posts

Quote:
 Originally Posted by aurashift Sooo...If I break this record where can I upload the results?
If somebody wants to recheck the result it would be enough to upload say every millionth bit of pi, to enable a "fast" check using the https://en.wikipedia.org/wiki/Bailey...louffe_formula . I would do this. Anything else is a total waste of bandwidth.

2019-07-18, 17:40   #24
aurashift

Jan 2015

11111101₂ Posts

Quote:
 Originally Posted by R. Gerbicz If somebody wants to recheck the result it would be enough to upload say every millionth bit of pi, to enable a "fast" check using the https://en.wikipedia.org/wiki/Bailey...louffe_formula . I would do this. Anything else is a total waste of bandwidth.

That idea is appreciated... if anyone wants a physical copy, they can come to Salt Lake and get it

2019-07-18, 17:41   #25
Mysticial

Sep 2016

2³·43 Posts

Quote:
 Originally Posted by R. Gerbicz If somebody wants to recheck the result it would be enough to upload say every millionth bit of pi, to enable a "fast" check using the https://en.wikipedia.org/wiki/Bailey...louffe_formula . I would do this. Anything else is a total waste of bandwidth.
The error-check that I've built into the program is that along with the digits, it also outputs:
Code:
hash10 = floor(pi * 10^(# of decimal digits)) % M61
hash16 = floor(pi * 16^(# of hexadecimal digits)) % M61
along with other metadata such as the # of occurrences of each digit.

So it provides a way to verify the integrity of the digits at any point in the future. These hashes can also be verified in parallel. So if the data is distributed, you can compute the local hashes on-site then do a reduction at the end.
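That parallel reduction can be made concrete with a small sketch. The assumption here (not confirmed by the post) is that hash10 is just the decimal digit string, leading 3 included, read as one big integer modulo M61 = 2^61 − 1; the function names are illustrative, not part of y-cruncher:

```python
# Sketch of a distributed M61 digit hash. Assumption: hash10 is the decimal
# digit string of pi (including the leading 3) interpreted as an integer mod
# M61 = 2^61 - 1. Names are illustrative, not y-cruncher's.
M61 = (1 << 61) - 1

def local_hash(digits: str):
    """Hash one chunk of digits via Horner's rule; also return
    10^len(chunk) mod M61 so chunks can be combined later."""
    h = 0
    for d in digits:
        h = (h * 10 + int(d)) % M61
    return h, pow(10, len(digits), M61)

def reduce_hashes(chunks):
    """Fold per-chunk (hash, multiplier) pairs together left to right."""
    h = 0
    for ch, mult in chunks:
        h = (h * mult + ch) % M61
    return h

digits = "31415926535897932384"        # floor(pi * 10^19) as a string
mid = len(digits) // 2
parts = [local_hash(digits[:mid]), local_hash(digits[mid:])]
assert reduce_hashes(parts) == int(digits) % M61
```

Each site hashes only its own chunk and records 10^(chunk length) mod M61; the final reduction is algebraically identical to hashing the whole digit stream in one pass, which is why the local hashes can be computed on-site and aggregated at the end.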

2019-07-19, 17:05   #26
aurashift

Jan 2015

11·23 Posts

Soo to put it in simpler terms... can the validation be done quick, small, and easy?
2019-07-19, 18:27   #27
Mysticial

Sep 2016

2³·43 Posts

Quote:
 Originally Posted by aurashift Soo to put it in simpler terms... can the validation be done quick small and easy?
It depends on which validation you're referring to; the difficulty varies.

If you want to validate the digits of an already completed computation:
• If all the digits are in one place, you can use the built-in Digit Viewer to compute the hash. It might take a while since it needs to read all the digits, but it's fairly easy.
• If the digits are distributed, then it's harder: you need to manually compute the local hashes (using a non-obvious process) and then aggregate them at the end. Again, this needs to read all the digits, so it might take a while, but since the work is distributed and parallelized, it could be faster. You may spend more time figuring out how to compute the local hashes and how to aggregate them, since this isn't automated.

If you want to validate the digits of a newly finished computation, the only thing you need to do is run the BBP formula to verify that the last few hexadecimal digits are correct.
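For concreteness, BBP digit extraction can be sketched in a few lines of Python. This computes hexadecimal digits of pi at an arbitrary offset without computing the earlier ones; double-precision floats limit how many digits per call can be trusted, so a serious verifier would use more careful arithmetic. Function names are illustrative:

```python
# Minimal BBP digit extraction sketch (double precision; fine for small
# offsets and a handful of digits, not for production verification).
def _series(j, n):
    """Fractional part of sum_{k>=0} 16^(n-k) / (8k + j)."""
    s = 0.0
    for k in range(n + 1):                    # left sum: modular exponentiation
        s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
    t, k = 0.0, n + 1
    while True:                               # right tail: terms shrink 16x each step
        term = 16.0 ** (n - k) / (8 * k + j)
        if term < 1e-17:
            break
        t += term
        k += 1
    return (s + t) % 1.0

def pi_hex_digits(n, count=6):
    """Hex digits of pi starting at offset n after the point (n=0 is the first)."""
    x = (4 * _series(1, n) - 2 * _series(4, n)
         - _series(5, n) - _series(6, n)) % 1.0
    out = ""
    for _ in range(count):
        x *= 16
        d = int(x)
        out += "0123456789ABCDEF"[d]
        x -= d
    return out
```

Since pi in hex is 3.243F6A88..., `pi_hex_digits(0)` returns "243F6A". The point of the check above is that this works at any offset, so the last hexadecimal digits of a computation can be confirmed without redoing the whole thing.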

Other forms of verification are needed to make it more rigorous, but these are all handled automatically by y-cruncher (if you turn them on):
• Verification that all errors will propagate to the last digits such that verifying the last digits is sufficient.
• Verification that the conversion from binary to base 10 is correct since BBP can only verify binary digits.
• Verification that the digits were correctly written out to disk and that the computed hashes are correct.

Last fiddled with by Mysticial on 2019-07-19 at 18:38

2019-07-19, 19:25   #28
R. Gerbicz

"Robert Gerbicz"
Oct 2005
Hungary

1,493 Posts

Quote:
 Originally Posted by Mysticial If you want to validate the digits of a newly finished computation, the only thing you need to do is run the BBP formula to verify that the last few hexadecimal digits are correct.
Exactly — the BBP formula itself shows that this is not enough: start the file with the known bits of pi, then a totally random fake sequence, and end with some correct bits calculated with BBP.

PS: of course your method is good if you want to validate your(!) own computation.

Last fiddled with by R. Gerbicz on 2019-07-19 at 19:28

2019-07-19, 21:16   #29
aurashift

Jan 2015

11·23 Posts

Quote:
 Originally Posted by R. Gerbicz ps. ofcourse your method is good if you want to validate your(!) own computation.
Yeah, I'm more thinking about how other people will validate without a PB lying around.

2019-07-19, 21:48   #30
Mysticial

Sep 2016

2³×43 Posts

Quote:
 Originally Posted by aurashift yeah im more thinking how other people will validate without a PB laying around.
For cases where you suspect the source of the digits to be intelligently adversarial, the verification options are more limited. But it can still be sanity-checked with BBP.

First of all, there's no known way to verify *all* the digits of Pi from 1 to N in any base that's more efficient than recomputing them and comparing with the suspect digits.

But the next best thing you can do is a "proof-of-computation". The suspect source makes all the (binary) digits available with little latency. That is, anybody can ask the suspect for binary digits at an arbitrary offset, and the suspect will be able to produce them instantly.

As the verifier, you can pick random offsets, use BBP to compute the digits, and then test the suspect.

The security here relies on:
• BBP takes a significant amount of computational power to run for large offsets.
• There is no known faster algorithm than BBP for digit extraction.

So if the suspect can instantly produce the correct digits at arbitrary offsets, the only reasonable explanations are:
• The suspect has all the digits.
• The suspect has an unknown algorithm that's faster than BBP.

And if the latter, the suspect has accomplished something much more worthy than having done the computation.
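Put together, the challenge protocol might look like the following sketch. The suspect is simulated here by a short stored hex expansion (a real suspect would be a server holding all the digits), and the verifier recomputes each challenged offset independently with BBP. Everything here is illustrative:

```python
# Sketch of the spot-check protocol: verifier challenges random offsets,
# recomputes them with BBP, and compares against the suspect's answers.
import random

PI_HEX = "243F6A8885A308D313198A2E03707344"   # first 32 hex digits of pi's
                                              # fractional part (simulated suspect)

def suspect_digits(offset, count):
    """The suspect's instant response: a slice of its stored digits."""
    return PI_HEX[offset:offset + count]

def bbp_digits(n, count=4):
    """Independently extract hex digits of pi at offset n via BBP
    (double precision; adequate for these small offsets)."""
    def series(j):
        s = sum(pow(16, n - k, 8 * k + j) / (8 * k + j)
                for k in range(n + 1)) % 1.0
        s += sum(16.0 ** (n - k) / (8 * k + j)
                 for k in range(n + 1, n + 16))
        return s % 1.0
    x = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    out = ""
    for _ in range(count):
        x *= 16
        out += "0123456789ABCDEF"[int(x)]
        x -= int(x)
    return out

def spot_check(trials=5, count=4):
    """Verifier: challenge random offsets; fail on any mismatch."""
    for _ in range(trials):
        off = random.randrange(len(PI_HEX) - count)
        if suspect_digits(off, count) != bbp_digits(off, count):
            return False
    return True
```

A forger who pasted fake digits in the middle would be caught as soon as a challenge lands there, and (per the security argument above) answering every challenge instantly without holding the digits would require something faster than BBP.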

Last fiddled with by Mysticial on 2019-07-19 at 21:54

2019-10-11, 14:45   #31
kriesel

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

13213₈ Posts

Quote:
 Originally Posted by Mysticial For something like finding the next Mersenne prime, it's a lot more uncertain. Let's put aside the lower amount of publicity generated by something like a new Mersenne prime compared to Pi or SHAttered and look at the practicality of it. We seem to be hitting a lot more Mersenne primes than we should. And there are notoriously large gaps at the smaller sizes. So it's really uncertain if you would be able to find a prime by throwing a ton of resources at it. Likewise, Google probably doesn't want the public to know about the failed projects. So if they threw a ton of resources into GIMPS, they'd either have to do it in secret (maintaining their own version of the database), or they'd collaborate with the GIMPS database and it would be apparent what's going on including if they fail to find a prime.
I suggest redefining what constitutes success. Google Compute temporarily multiplying GIMPS throughput by pi or greater, or producing results at under 10% of the typical DC-detected error rate, could be considered successes worth noting. There's marketing value in being able to factually state that for a time Google contributed more than the rest of the world to the project's progress. They could probably do that with whatever CPU and GPU time doesn't sell, running it as idle-priority interruptible tasks.

Assuming their compute servers use ECC, the error stats should be very good. And they can use PRP with the Gerbicz check too. Or they could choose to pioneer in some area with their skilled personnel, say take P-1 factoring to reaches GIMPS has not yet done (exponent > 937M). There's currently no primality or P-1 coordination site for exponent > 10^9.

