[QUOTE=LaurV;565632]Yep. After I remake the calculus to see if I get the same value... :razz:
I assume somebody verifies these things, anyhow... Or not?[/QUOTE] That's why your suggested method just doesn't work (it needs to repeat the whole computation), while my proposed way uses much less time. For comparison: at this record size the BBP check takes less than a single day, while the whole record computation took 303 days (ref. [url]https://en.wikipedia.org/wiki/Chronology_of_computation_of_%CF%80[/url]). So with 50% probability your record claim would fail within a single day. Which is not that bad. ps. Or with a much larger probability if at each position we asked not for a single bit but for, say, 20 consecutive bits; this doesn't increase the BBP time much, but you'd fail with a much larger chance.
[QUOTE=R. Gerbicz;565638]For comparison: at this record size the BBP check takes less than a single day, while the whole record computation took 303 days (ref. [url]https://en.wikipedia.org/wiki/Chronology_of_computation_of_%CF%80[/url]). So with 50% probability your record claim would fail within a single day. Which is not that bad.[/QUOTE]But the claimant can also compute the required digits just as you could.
So you ask for your 20 positional digits. A day later the reply gives the digits. You check them and see no error. Still doesn't prove the claimant computed all the other digits. Only a hash of them all after a full recomputation can do that. 
[QUOTE=retina;565639]But the claimant can also compute the required digits just as you could.
So you ask for your 20 positional digits. A day later the reply gives the digits. You check them and see no error. Still doesn't prove the claimant computed all the other digits. Only a hash of them all after a full recomputation can do that.[/QUOTE] False, I've written this: "if you're claiming a world record then I would choose 1 million random positions and you should give the bits for each of these positions. The check: select say 2025 positions and verify the bits with BBP. You have an extremely small probability to fake me." I'm requesting the bits for 1 million positions and, after receiving your file with the bits, checking 2025 random(!!!) positions. If you'd do this with BBP, then the overall computation time would be roughly 2500 times larger than what a direct computation of pi would use. You can still select some positions from the list and use BBP for those, and/or use the known bits of pi (for small positions), but you wouldn't pass the test.
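To put a number on the "extremely small probability" claim, here is a quick sanity check. The 99%-honest cheater below is my illustrative assumption, not a figure from the thread: suppose a claimant produced correct bits for only 990,000 of the 1,000,000 requested positions, and the verifier samples 2025 of them without replacement.

```python
# Illustrative numbers: a cheater who got 990,000 of the 1,000,000
# requested positions right survives only if every one of the 2025
# sampled positions happens to land on a correct answer.
N, good, samples = 1_000_000, 990_000, 2025

p = 1.0
for i in range(samples):
    # Hypergeometric survival probability: each successive sampled
    # position must come from the shrinking pool of correct answers.
    p *= (good - i) / (N - i)

print(p)
```

Even with 99% of the positions answered correctly, the cheater passes with probability around 1.4e-9, so faking any appreciable fraction of the file is hopeless.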
[QUOTE=R. Gerbicz;565642]False, I've written this:
"if you're claiming a world record then I would choose 1 million random positions and you should give the bits for each of these positions. The check: select say 2025 positions and verify the bits with BBP. You have an extremely small probability to fake me."[/QUOTE]I see now. You are correct, it would be almost impossible to fake it. Might as well do the entire computation and save a lot of hassle. 
[QUOTE=R. Gerbicz;565642]If you'd do this with BBP then the overall computation time would be 2500 times larger than what a direct computation of pi would use[/QUOTE]
How do you figure? Is BBP so slow that computing one million bits with it (in fact, you compute 4 bits every time, IIRC) is 2500 times slower than computing (what's the record? two tera[U][B]digits[/B][/U]?) of pi by the fastest method (probably some variation of the Ramanujan formula)? It seems odd to me, but I didn't make any calculation, just gut feeling, and I'm just asking.
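A back-of-envelope using only figures quoted earlier in this thread (one BBP query at the record depth takes under a day; the full record run took 303 days), plus my own assumption that a BBP query's cost is roughly linear in the digit position:

```python
# Figures quoted in this thread (assumptions, not measurements):
full_bbp_days = 1.0      # one BBP query at the final, deepest position
record_days = 303.0      # full computation of the record
queries = 1_000_000      # positions the claimant must answer

# Uniformly random positions average half the record depth each,
# and BBP cost is assumed ~linear in position.
total_bbp_days = queries * full_bbp_days * 0.5
ratio = total_bbp_days / record_days
print(round(ratio))
```

This crude estimate lands around 1650x, the same order of magnitude as the quoted 2500x: answering a million random positions purely via BBP really is thousands of times more work than just computing pi directly.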
[QUOTE=R. Gerbicz;565642]False, I've written this:
"if you're claiming a world record then I would choose 1 million random positions and you should give the bits for each of these positions. The check: select say 2025 positions and verify the bits with BBP. You have an extremely small probability to fake me." I'm requesting the bits for 1 million positions and, after receiving your file with the bits, checking 2025 random(!!!) positions. If you'd do this with BBP, then the overall computation time would be roughly 2500 times larger than what a direct computation of pi would use. You can still select some positions from the list and use BBP for those, and/or use the known bits of pi (for small positions), but you wouldn't pass the test.[/QUOTE] Another approach: ask for any random offset, and if you don't get an answer almost immediately, it's suspicious. It would take a tremendous amount of computing power to get a distant offset in under a minute. There's some preliminary research into a hybrid BBP + binary-splitting algorithm that [I]may[/I] allow M consecutive digits starting from the N'th digit to be computed faster than O(M * N log(N)) (binary digits, of course). Such an algorithm could potentially allow a low-memory distributed computation of Pi, but only for the binary digits and at the cost of a much larger Big-O than the classic algorithms. If such an algorithm does come to fruition, then asking for a million digits starting from N may not be sufficient.
I did not know where to post this in the forum. So here it is: [URL="https://www.theguardian.com/science/2021/aug/16/swiss-researchers-calculate-pi-to-new-record-of-628tn-figures"]pi calculated to 68.2 trillion digits[/URL]. Also [URL="https://www.theguardian.com/science/2021/aug/17/new-mathematical-record-whats-the-point-of-calculating-pi"]this article[/URL].

62.8 trillion digits, probably [TEX]\pi[/TEX] * 20 trillion digits.

Been so busy lately that I haven't had time to process this record yet on my site. haha
I need to give y-cruncher a bit of love back. I've largely neglected it for almost 2 years now, and there's lots of unfinished stuff and feature requests (mostly involving storage scalability) that I still need to deal with. But life gets in the way. Just upgrading the compilers earlier this year took a month of retesting. :yucky:
[QUOTE=ATH;585861]62.8 trillion digits, probably [TEX]\pi[/TEX] * 20 trillion digits.[/QUOTE]
And what algorithm has been used? Chudnovsky, or better: [url]https://mersenneforum.org/showpost.php?p=558249&postcount=8[/url] . 
[QUOTE=R. Gerbicz;585918]And what algorithm has been used? Chudnovsky, or better: [url]https://mersenneforum.org/showpost.php?p=558249&postcount=8[/url] .[/QUOTE]
I believe it's Chudnovsky. I'll have to look take a closer look at your ArcTan formula when I get the time. But my first impression is that yes multiple terms can be run in parallel, but it's not necessarily beneficial here.[LIST=1][*]Parallelization is already possible within a series. (whether that be an ArcTan term or the Chudnovsky formula)[*]The bottleneck isn't computation or even communication bandwidth in some cases. It's actually memory/storage capacity.[/LIST]To expand on this latter part, summing a "typical" linearly convergent hypergeometic series to N decimal digits of precision will require about ~ 4*N bytes of memory regardless of slowly it converges. Both the ArcTan and Chudnovsky series are "typical" linearly convergent hypergeometic series. So for 100 trillion digits of Pi, you're looking at 400 TB of storage using Chudnovsky or the ArcTan terms summed up serially. If you want to run the ArcTan terms in parallel, it would be 400TB x parallelization. While it may be faster, in practice, the #1 of complaint I get from people attempting these records is that they can't source enough storage for it  let alone fast storage. 