31.4 ... 62.8 ... 100 trillion digits of Pi - GWR
So when Google set the record of 31.4 trillion digits last year, I gave it a 50/50 chance that record would fall before the end of the year.
Didn't quite work out that way. This latest computation suffered about a month of setbacks that pushed it all the way through January. But it is finally complete and passes verification. Congrats to Timothy Mullican for setting the new record for the most digits of Pi!

His blog: [url]https://blog.timothymullican.com/calculating-pi-my-attempt-breaking-pi-record[/url]

Compared to Google's record last year, Tim used a 4-socket Ivy Bridge machine with a 48-drive array. The computation ran for 10 months, starting in April and ending yesterday. The computation of the binary digits of Pi actually completed early in December and matched the results of the BBP spot check. But the base conversion (which takes 2 weeks and has no checkpoints) took several attempts before completing successfully.

--------------------

This base conversion has been an issue in 3 of the last 4 Pi records due to it being ~10% of the total time and having no checkpoints at all. 10% of the run-time of a computation of this size equates to multiple weeks - which is also comparable to the MTTF of the systems being used.

Why don't I have checkpoints in the base conversion? The algorithm is largely in-place and destructive. That's not to say it's impossible to checkpoint, but I just haven't figured out a good way to do it yet. |
:bow:
|
Reading this I almost want to have a go. Almost... That storage requirement is scary...
|
Very nice, congrats!
How do you make the base switch "in place"? I have some ideas for transforming from base 2 to base 10 quite fast, with some checkpoints too, but you would need more storage space (it can't really be done "in place"). I don't believe this is new - surely somebody else has thought of it before. I "invented" it a long time ago and used it in my programs in the past, but never for such large inputs. |
Just finished reading the blog...impressive home build.
|
[QUOTE=jwaltos;536711]Just finished reading the blog...impressive home build.[/QUOTE]It certainly causes one to reflect about how big 5E19 is!
:mike: |
[QUOTE=Xyzzy;536877]It certainly causes one to reflect about how big 5E19 is!
:mike:[/QUOTE] 5E19? I think it is 5E13. :whack: For those who can recite many digits of pi.. How long would it take to recite the first 50 trillion digits given that you hold them in your brain? |
[QUOTE=paulunderwood;536878]For those who can recite many digits of pi.. How long would it take to recite the first 50 trillion digits given that you hold them in your brain?[/QUOTE]
For a fast speaker who can recite ten digits per second, disregarding any needs intrinsic to human nature (including death), about 158,000 years. (That is, by the current measurement of years.) (I've calculated this on my Casio calculator watch, so please correct me if I'm wrong) |
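The watch arithmetic checks out; a quick sketch (assuming a constant 10 digits per second and 365.25-day years):

```python
# Rough check of the recitation estimate: 50 trillion digits
# at a (very optimistic) sustained 10 digits per second.
digits = 50_000_000_000_000
rate = 10                                # digits per second
seconds = digits / rate
years = seconds / (365.25 * 24 * 3600)   # Julian years
print(f"about {years:,.0f} years")       # about 158,440 years
```
 |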
[QUOTE=mart_r;536901]For a fast speaker who can recite ten digits per second ...[/QUOTE]In what language is that? Surely not in English. No one can speak English numbers that quickly and still be understood.
You might need a tonal language specially constructed for the task. All words can be 'ah' and just vary the tone. So basically just singing notes to the tune of Pi. Even then it would be extremely difficult. I'd like to hear someone try. |
[QUOTE=retina;536902]In what language is that? Surely not in English. No one can speak English numbers that quickly and still be understood.[/QUOTE]655 words per minute. And since digits are short words, it is doable.
[url]https://www.guinnessworldrecords.com/world-records/358936-fastest-talker[/url] |
[QUOTE=Uncwilly;536903]655 words per minute. And since digits are short words, it is doable.
[url]https://www.guinnessworldrecords.com/world-records/358936-fastest-talker[/url][/QUOTE]Amazing! |
The problem of long and short scales: [URL="https://en.wikipedia.org/wiki/Long_and_short_scales"]https://en.wikipedia.org/wiki/Long_and_short_scales[/URL] … but 5E13 is already in itself a terrific result!
[QUOTE=paulunderwood;536878]5E19? I think it is 5E13. :whack: [/QUOTE] |
[QUOTE=Uncwilly;536903]655 words per minute. And since digits are short words, it is doable.
[url]https://www.guinnessworldrecords.com/world-records/358936-fastest-talker[/url][/QUOTE] "Sean Shannon (Canada) recited Hamlet's soliloquy `To be or not to be' (260 words) in a time of 23.8 seconds" Doable, pretty short words. |
Do they have a record for the "longest" speaker? Like the person who talked the longest without a pause?
(we wanted to send the former link to swmbo with the title "someone has beaten you already", but it occurred to us that she's actually not a fast speaker, so we refrained from making such a terrible mistake... :razz:) |
Woah, I go away for a while and didn't notice all these new posts!
[QUOTE=LaurV;536456]Very nice, congrats! How do you make the base switch "in place"? <snip>[/QUOTE] It's not fully in-place in the sense that it needs no auxiliary memory - it actually needs a lot, especially to perform the FFTs. But there are a few technicalities: The large multiplications are done in-place, so the output overwrites the inputs. (The multiplication is still done using the FFT scratch buffers.)

1) One problem is that I don't support checkpointing inside the large multiplications. Thus I can only checkpoint before or after a large multiplication. But if you encapsulate the multiplication into an indivisible operation, it becomes a destructive operation that destroys the inputs. Thus if something goes wrong inside the multiply, you cannot roll back to before the multiply because you've already destroyed the inputs. This in-place-ness of the multiplications within the base conversion will chain up. Thus there's no point where you can do a checkpoint other than before the entire conversion begins.

2) The other problem is that checkpointing is done with file granularity. I don't support checkpointing parts of a file. The binary->radix conversions involve recursively splitting the binary input into smaller and smaller portions which you eventually write piecewise into a large output buffer (in the desired radix). This presents a problem. That output buffer is allocated as a single contiguous storage region - a single file. I can't write to parts of the file, checkpoint it, then write to other parts. The reason I can't do that is due to sector alignment.
Let's say the following is a partially written sector that's been checkpointed:

0123456789xxxxxx

Then in a later operation, I want to write out the rest of the sector so that it is:

0123456789abcdef

However, when you access disk, the entire sector must be read/written at once. The later operation needs to do a read-modify-write on data that is part of the previous checkpoint. If something goes wrong during this step, it will corrupt the previous checkpoint!

----------------

Long story short, I believe both of these issues are surmountable. But I haven't done the necessary research into them yet. For example, the in-place-ness issue (1) can be overcome by having two working buffers and writing back-and-forth between them. But the base conversion already uses the most storage of the entire computation. The sector-alignment issue is probably solvable by keeping a separate mapping that stores backups of all partially written sectors. But to say that this is a mess in the context of the software RAID layer and the manual error-detection checksums is an understatement. |
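The sector-backup fix floated at the end of the post above can be sketched in a few lines. Everything here (`SECTOR`, `checkpoint_safe_write`, `rollback`) is a hypothetical toy for illustration, not y-cruncher's actual I/O layer:

```python
SECTOR = 16  # toy sector size; real disks use 512 or 4096 bytes

def checkpoint_safe_write(dev, backups, offset, data):
    """Journal the old contents of every sector this write touches,
    then perform the write. If a crash corrupts a partially written
    sector, the journal can restore the previous checkpoint."""
    first = offset // SECTOR
    last = (offset + len(data) - 1) // SECTOR
    for s in range(first, last + 1):
        if s not in backups:  # back up each touched sector once
            backups[s] = bytes(dev[s * SECTOR:(s + 1) * SECTOR])
    dev[offset:offset + len(data)] = data

def rollback(dev, backups):
    """Restore every journaled sector to its checkpointed contents."""
    for s, old in backups.items():
        dev[s * SECTOR:(s + 1) * SECTOR] = old
```

The point is that the read-modify-write hazard disappears once the pre-write contents of any straddled sector survive somewhere else. |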
[QUOTE=LaurV;536456]Very nice, congrats!
How do you make the base switch "in place"? <snip>[/QUOTE] To answer your actual question of how I do the conversion "in place":

The algorithm is the scaled remainder tree. The only operations are multiplications - there are no additions or subtractions at all. Each multiply is a middle-product FFT that splits a 2N-bit binary input into two N-bit binary outputs which are written back into the same memory, overwriting the 2N-bit input. One of the N-bit outputs is the same as the upper half of the 2N-bit input. Thus instead of overwriting the entire memory region, the "new" N-bit output overwrites the no-longer-needed portion of the 2N-bit input. This partial write is hard to checkpoint due to problem (2) of the previous post.

Since the in-place multiply is the only operation for the entire radix conversion, these in-place multiplications form a dependency chain/tree that prevents any sort of checkpointing at any step. |
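For readers unfamiliar with the recursive radix-conversion shape being described, here is a toy Python sketch of the splitting step. Note the hedges: it uses an ordinary division where y-cruncher uses a middle-product FFT multiply by a precomputed scaled reciprocal, and it is not in-place, so it shows only the recursion tree, not the destructive overwriting that blocks checkpoints:

```python
def to_decimal(n, digits):
    """Toy recursive binary->decimal conversion: render the integer n
    (with n < 10**digits) as exactly `digits` decimal characters."""
    if digits <= 9:
        return str(n).zfill(digits)  # small enough to convert directly
    lo_digits = digits // 2
    # Split n into a high part and a low part. y-cruncher replaces this
    # division with a middle-product multiply by a precomputed scaled
    # reciprocal, and hi/lo overwrite the storage that held n.
    hi, lo = divmod(n, 10 ** lo_digits)
    return to_decimal(hi, digits - lo_digits) + to_decimal(lo, lo_digits)
```

Each node of the recursion depends destructively on its parent in the real implementation, which is exactly the dependency chain described above. |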
[QUOTE=LaurV;536949]Do they have a record for the "longest" speaker? Like the person who talked without a pause for a while?
<snip>[/QUOTE]The most recent record for "longest speech marathon" I could find at [url=https://www.guinnessworldrecords.com/world-records/longest-speech-marathon/]Guinness[/url] is just over 90 hours: [quote]The longest speech marathon is 90 hr and 2 min, achieved by Ananta Ram KC (Nepal) in Kathmandu, Nepal from 27 to 31 August 2018. The attempt started at 6:15 am on 27th August and finished at 12:17 am on 31st August 2018. Ananta Ram KC maintained a silence for almost 7 days before starting the attempt as a part of his preparation for the longest speech attempt.[/quote] In a story about an [url=https://www.guinnessworldrecords.com/news/charity/2014/6/new-longest-speech-marathon-record-raises-£25k-for-sue-ryder-hospice-appeal-357830?fb_comment_id=1105337029484095_1158992957451835]earlier record[/url] (2014) in this category, I found:[quote]Under Guinness World Records guidelines, a speech is defined as, "the act of delivering a formal spoken communication to an audience," meaning using footage, clips or audio is strictly prohibited. Notes are allowed, written copy, autocues and prompters are not; making Rob's 46 hour 21 minute-long achievement all the more impressive.[/quote] |
[QUOTE=mackerel;536373]Reading this I almost want to have a go. Almost... That storage requirement is scary...[/QUOTE]
You can certainly have a go at the other records. The program used to calculate pi to 50 trillion digits has also been used to calculate square roots, logarithms, and various constants to billions of decimal places. I held the record for ln (2) at one point: [url]https://www.mersenneforum.org/showpost.php?p=502462&postcount=716[/url] [url]http://www.numberworld.org/digits/Log(2)/[/url] |
[QUOTE=Mysticial;536256]So when Google set the record of 31.4 trillion digits last year, I gave it a 50/50 chance that record would fall before the end of the year. <snip>[/QUOTE] What are the advantages of y-cruncher? Why can it calculate so many digits? [url]https://www.mersenneforum.org/showpost.php?p=562790&postcount=14[/url] Why can we calculate 50 trillion digits of Pi, but cannot run a PRP test on F33? F33 only has no more than 2700 billion digits |
Because the time complexity for pi calculation is much better/faster than that of primality/PRP tests. Also F33 has 2,585,827,973 digits, not 2.7 trillion (which is more like F43).
|
[QUOTE=bbb120;563454]What are the advantages of y-cruncher ?
Why can it calculate so many digits? <snip>[/QUOTE] I messed with Y-Cruncher a little. It appears to serve no practical purpose in the context of GIMPS.

Some years ago, I saw a photo of a system built by a man in Japan on a sheet of plywood. He had an array of 21 hard drives totaling 105 TB, I believe it was. The odd drive was OS-only; the rest were for storage and swap, like a huge paging file. I don't remember it saying how much RAM he had or the CPU type. He had it all arranged very nicely: the drives were in stacked brackets with extra fans on them, and the PSU was very large - it looked like a small computer tower sitting off to one side. IIRC, it took him nearly a year to run 11.4 trillion digits, which was a new record at the time.

Y-Cruncher seems to prefer vast amounts of swap/storage space on drives over lots of RAM. The Japanese man must have spent a small fortune buying all the parts to build what he had. |
[QUOTE=Stargate38;563531]Because the time complexity for pi calculation is much better/faster than that of primality/PRP tests. Also F33 has 2,585,827,973 digits, not 2.7 trillion (which is more like F43).[/QUOTE]
I wrote "F33 only has no more than 2700 billion digits"; it should be "F33 only has no more than 2700 million digits". I made a mistake!

floor(2^33*log10(2)+1) = floor(2585827973.98)

2,585,827,973 = 2_585_827_973 (English)
2,585,827,973 = 25_8582_7973 (Chinese)

I am not familiar with numbers separated into groups of three digits; Chinese separates numbers into groups of four digits, so I made a mistake |
[QUOTE=bbb120;563580]all Chinese four digits separated number [/QUOTE]
I did not know that! |
[QUOTE=bbb120;563580]I am not familiar with three digits separated number ,all Chinese four digits separated number , so I make a mistake[/QUOTE]The "English" groupings correspond to the named amounts:

1,000 = One Thousand
2,000,000 = Two Million
3,000,000,000 = Three Billion (bi- = 2, from Latin)
4,000,000,000,000 = Four Trillion (tri- = 3)
5,000,000,000,000,000 = Five Quadrillion (quad- = 4)
etc.

India has an odd system. |
[QUOTE=Prime95;563590]I did not know that![/QUOTE]
Chinese:

10,000 = 1万 = 10 thousand
100,000,000 = 1亿 = 100 million

123456789123456789 = 123_456_789_123_456_789 (English)
123456789123456789 = 12_3456_7891_2345_6789 (Chinese)

English groups digits three by three, but Chinese groups digits four by four |
Billion had two different meanings in English and French Canada:
[url]https://en.m.wikipedia.org/wiki/Billion[/url] |
We knew about the Chinese system (we used to work in China for a few years at the end of the last century). The Thai system is also odd: they group digits [URL="https://en.wikipedia.org/wiki/Thai_numerals#Ten_to_a_million"]by 6[/URL]. In fact, this, same as in Indian languages, comes from Sanskrit. They also "shorten" it in speech, ignoring the "small quantities": for example, you may see 1 506078, but they will say "laan haa", literally "million five" (i.e. one million, five hundred something), unless accuracy is especially required (as in accounting, banking, invoices, etc).
|
[QUOTE=a1call;563611]Billion had two different meanings in English and French Canada:
[url]https://en.m.wikipedia.org/wiki/Billion[/url][/QUOTE]Still does in much of Europe where the term "milliard" is still used for 1e9. |
[QUOTE=xilman;563690]Still does in much of Europe where the term "milliard" is still used for 1e9.[/QUOTE]There are myriad different number naming and notation schemes.
|
[QUOTE=Uncwilly;563703]There is a myriad different number naming and notation schemes.[/QUOTE]
Just be glad we are not using the Roman Numeral system. What a mess that would be. |
[QUOTE=storm5510;563764]Just be glad we are not using the Roman Numeral system. What a mess that would be.[/QUOTE]
I have [attach]23814[/attach] problems, but Roman numerals ain't one of them. |
[QUOTE=storm5510;563764]Just be glad we are not using the Roman Numeral system. What a mess that would be.[/QUOTE]Be thankful for the existence of dozenal and sexagesimal representations.
|
[QUOTE=Uncwilly;563785]I have <snip attachment link> problems, but Roman numerals ain't one of them.[/QUOTE]
Hmm, rings a bell.[indent][color=darkred]That girl has problems, bein' heard ain't one of 'em .[/color][/indent]-- Ethel Merman, referring to Janis Joplin |
[QUOTE=Dr Sardonicus;563805]Hmm, rings a bell.[indent][color=darkred]That girl has problems, bein' heard ain't one of 'em .[/color][/indent]-- Ethel Merman, referring to Janis Joplin[/QUOTE]
He was probably referring to: [indent][color=darkred]Got 99 problems and a bitch ain't one[/color][/indent]- Ice-T, 1993 |
[QUOTE=storm5510;563536]Y-Cruncher seems to prefer vast amounts of swap/storage space on drives over lots of RAM.[/QUOTE]
Only just saw this - forgot this forum was here.

Y-cruncher would love as much RAM as is needed, but in reality no consumer-attainable system can possibly have enough RAM for the bigger runs. We're talking well into the terabytes, beyond what even high-end servers can reach. And that's putting aside the cost of that much RAM even if you could fit it in a single system. So the practical upshot is that you have to use some form of swap as a less insanely priced substitute, and that is where the optimisation needs to go. |
To prove that you computed [I]x[/I] digits of pi, couldn't you store only a checksum of all of the digits and keep the last digit "for fun"?
:mike: |
[QUOTE=Xyzzy;565140]To prove that you computed [I]x[/I] digits of pi, couldn't you store only a checksum of all of the digits and keep the last digit "for fun"?
:mike:[/QUOTE]Don't see why not. Computing the last digit is much cheaper than computing them all. |
[QUOTE=xilman;565141]Don't see why not. Computing the last digit is much cheaper than computing them all.[/QUOTE]Would the NT community accept the last digit and a checksum as a record? (Say you ran it twice with a different algorithm each time and both checksums matched.)
:mike: |
The last 10 digits and a 128-bit checksum would be enough, I would suppose.
|
Digit extraction algorithms exist. So merely producing a few trailing digits wouldn't be enough to prove you computed all the digits up to that point.
A hash of all digits up to your claimed last digit would be suitable IMO. |
[QUOTE=Xyzzy;565355]Would the NT community accept the last digit and a checksum as a record? (Say you ran it twice with a different algorithm each time and both checksums matched.)
:mike:[/QUOTE] [QUOTE=retina;565379]Digit extraction algorithms exist. So merely producing a few trailing digits wouldn't be enough to prove you computed all the digits up to that point. [/QUOTE] There is no BBP-type formula for Pi in base ten [though there could be]: [url]https://en.wikipedia.org/wiki/Bailey%E2%80%93Borwein%E2%80%93Plouffe_formula[/url]. But even giving only the last few bits would enable one to provide a fake proof: just give the exact bits from BBP and a trash hash value. [Notice that even giving, say, a hundred consecutive bits of Pi is "easy".] Much better: if you're claiming a world record then I would choose 1 million random positions and you should give the bits for each of these positions. The check: select say 20-25 positions and verify the bits with BBP. You have an extremely small probability to fake me. This is assuming that when you need multiple bits of Pi there is no faster method than to use the BBP formula for each position. |
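For the curious, the BBP spot check discussed throughout this thread can be sketched with plain floating point. This is a toy version, good for only the first several hex digits at each position and nothing like the optimized implementations used for real record verification:

```python
def bbp_pi_hex(pos, ndigits=6):
    """Hex digits of pi starting at fractional position `pos` (0-indexed),
    via the Bailey-Borwein-Plouffe formula. Uses plain doubles, so only
    a handful of digits per call are trustworthy."""
    def series(j):
        # Fractional part of sum_k 16^(pos-k) / (8k+j).
        s = 0.0
        for k in range(pos + 1):            # left part: modular exponentiation
            d = 8 * k + j
            s = (s + pow(16, pos - k, d) / d) % 1.0
        k = pos + 1
        while True:                         # right tail: rapidly shrinking terms
            term = 16.0 ** (pos - k) / (8 * k + j)
            if term < 1e-17:
                break
            s = (s + term) % 1.0
            k += 1
        return s

    x = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    out = []
    for _ in range(ndigits):                # peel off hex digits one by one
        x *= 16
        d = int(x)
        out.append("0123456789abcdef"[d])
        x -= d
    return "".join(out)
```

Since pi = 3.243F6A8885... in hex, `bbp_pi_hex(0)` gives "243f6a" without computing any preceding digits, which is what makes the spot check so much cheaper than the full computation. |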
Group the bytes by 32 or 64 and compute a SHA256 or SHA512 of it. I don't believe anybody would contest that.
|
[QUOTE=LaurV;565526]Group the bytes by 32 or 64 and compute a SHA256 or SHA512 of it. I don't believe anybody would contest that.[/QUOTE]
So you would accept any(?) hash value as a proof, say claiming 256T digits of Pi, and giving only sha256 as: [CODE] a19a6c3a75783b6b5deee64777873ae207764837e769eedbe9b4c485d94b2986 [/CODE] |
Yep. After I redo the calculation to see if I get the same value... :razz:
I assume somebody verifies these things, anyhow... Or not? |
[QUOTE=LaurV;565632]Yep. After I remake the calculus to see if I get the same value... :razz:
I assume somebody verifies these things, anyhow... Or not?[/QUOTE] That's why your suggested method just doesn't work (it needs to repeat the whole computation), while my proposed way uses much less time. For comparison, at this record size the BBP check takes less than a single day, while the whole record computation took 303 days (ref. [url]https://en.wikipedia.org/wiki/Chronology_of_computation_of_%CF%80[/url]). So with 50% probability your record claim would fail within a single day. Which is not that bad.

P.S. Or with a much larger probability if we'd ask at each position not for a single bit but for, say, 20 consecutive bits; this doesn't increase the BBP time by much, but you'd fail with a much larger chance. |
[QUOTE=R. Gerbicz;565638]For comparison in this record size the BBP takes less than a single day, while the whole record computation took 303 days. (ref. [url]https://en.wikipedia.org/wiki/Chronology_of_computation_of_%CF%80[/url]). So with a 50% probability your record claim would fail in a single day. What is not that bad.[/QUOTE]But the claimant can also compute the required digits just as you could.
So you ask for your 20 positional digits. A day later the reply gives the digits. You check them and see no error. Still doesn't prove the claimant computed all the other digits. Only a hash of them all after a full re-computation can do that. |
[QUOTE=retina;565639]But the claimant can also compute the required digits just as you could.
So you ask for your 20 positional digits. A day later the reply gives the digits. You check them and see no error. Still doesn't prove the claimant computed all the other digits. Only a hash of them all after a full re-computation can do that.[/QUOTE] False, I've written this: "if you're claiming a world record then I would choose 1 million random positions and you should give the bits for each of these positions. The check: select say 20-25 positions and verify the bits with BBP. You have an extremely small probability to fake me." I'm requesting the bits for 1 million positions and after receiving your file with bits, checking random(!!!) 20-25 positions. If you'd do this with BBP then the overall computation time would be 2500 times larger than what a direct computation of pi would use. You can still select some positions from the list and use BBP for these and/or use known bits of pi (for small positions), but you wouldn't pass the test. |
[QUOTE=R. Gerbicz;565642]False, I've written this:
"if you're claiming a world record then I would choose 1 million random positions and you should give the bits for each of these positions. The check: select say 20-25 positions and verify the bits with BBP. You have an extremely small probability to fake me."[/QUOTE]I see now. You are correct, it would be almost impossible to fake it. Might as well do the entire computation and save a lot of hassle. |
[QUOTE=R. Gerbicz;565642]If you'd do this with BBP then the overall computation time would be 2500 times larger than what a direct computation of pi would use[/QUOTE]
How do you figure? Is BBP so slow that computing one million bits with it (in fact, you compute 4 bits every time, iirc) is 2500 times slower than computing (what's the record? two tera-[U][B]digits[/B][/U]?) of pi by the fastest method? (probably some variation of the Ramanujan formula? Well, it seems odd to me, but yes, I didn't make any calculation, just gut feeling, and just asking). |
[QUOTE=R. Gerbicz;565642]False, I've written this:
"if you're claiming a world record then I would choose 1 million random positions and you should give the bits for each of these positions. The check: select say 20-25 positions and verify the bits with BBP. You have an extremely small probability to fake me." I'm requesting the bits for 1 million positions and after receiving your file with bits, checking random(!!!) 20-25 positions. If you'd do this with BBP then the overall computation time would be 2500 times larger than what a direct computation of pi would use. You can still select some positions from the list and use BBP for these and/or use known bits of pi (for small positions), but you wouldn't pass the test.[/QUOTE] Another approach: ask for any random offset, and if you don't get an answer immediately, then it's suspicious. It would take a tremendous amount of computing power to get a distant offset in under a minute.

---------------

There's some preliminary research into a hybrid BBP + Binary Splitting algorithm that [I]may[/I] allow M consecutive digits starting from the N'th digit to be computed faster than O(M * N log(N)). (Binary digits, of course.) Such an algorithm could potentially allow for a low-memory distributed computation of Pi - but only the binary digits, and at the cost of a much larger Big-O than the classic algorithms. If such an algorithm does come to fruition, then asking for a million digits starting from N may not be sufficient. |
I did not know where to post this in the forum. So here it is: [URL="https://www.theguardian.com/science/2021/aug/16/swiss-researchers-calculate-pi-to-new-record-of-628tn-figures"]pi calculated to 62.8 trillion digits[/URL]. Also [URL="https://www.theguardian.com/science/2021/aug/17/new-mathematical-record-whats-the-point-of-calculating-pi"]this article[/URL].
|
62.8 trillion digits, probably [TEX]\pi[/TEX] * 20 trillion digits.
|
Been so busy lately that I hadn't had time to process this record yet on my site. haha
I need to give y-cruncher a bit of love back. Largely neglected it for almost 2 years now. And lots of unfinished stuff and feature-requests (mostly involving storage scalability) that I still need to deal with. But life gets in the way. Just upgrading the compilers earlier this year took a month of re-testing. :yucky: |
[QUOTE=ATH;585861]62.8 trillion digits, probably [TEX]\pi[/TEX] * 20 trillion digits.[/QUOTE]
And what algorithm has been used? Chudnovsky, or better: [url]https://mersenneforum.org/showpost.php?p=558249&postcount=8[/url] . |
[QUOTE=R. Gerbicz;585918]And what algorithm has been used? Chudnovsky, or better: [url]https://mersenneforum.org/showpost.php?p=558249&postcount=8[/url] .[/QUOTE]
I believe it's Chudnovsky. I'll have to take a closer look at your ArcTan formula when I get the time. But my first impression is that yes, multiple terms can be run in parallel, but it's not necessarily beneficial here.[LIST=1][*]Parallelization is already possible within a series. (Whether that be an ArcTan term or the Chudnovsky formula.)[*]The bottleneck isn't computation, or even communication bandwidth in some cases. It's actually memory/storage capacity.[/LIST]To expand on this latter part, summing a "typical" linearly convergent hypergeometric series to N decimal digits of precision will require about ~4*N bytes of memory, regardless of how slowly it converges. Both the ArcTan and Chudnovsky series are "typical" linearly convergent hypergeometric series. So for 100 trillion digits of Pi, you're looking at 400 TB of storage using Chudnovsky or the ArcTan terms summed up serially. If you want to run the ArcTan terms in parallel, it would be 400 TB x the parallelization. While it may be faster, in practice the #1 complaint I get from people attempting these records is that they can't source enough storage for it - let alone fast storage. |
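The ~4 bytes per digit rule of thumb above makes the storage wall easy to see. A trivial sketch (`series_storage_tb` is an illustrative name, and the constant 4 is just the approximation from the post):

```python
# Storage needed to sum a "typical" linearly convergent series to
# N decimal digits, using the ~4 bytes/digit figure quoted above.
def series_storage_tb(decimal_digits, bytes_per_digit=4):
    return decimal_digits * bytes_per_digit / 10**12  # in terabytes

print(series_storage_tb(100e12))      # 400.0 TB for 100 trillion digits
print(series_storage_tb(100e12) * 3)  # x3 if three series run in parallel
```
 |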
According to [url=https://www.theregister.com/2021/08/17/pi_world_record_challenged/]this article[/url], they used [url=http://www.numberworld.org/y-cruncher/]y-cruncher[/url], the same app used for the 50 tn digit computation.
|
Had to check back at my emails because it's been so long since I last spoke with Thomas that I completely forgot he was still running this. :davieddy:
Yes, it was on y-cruncher, and he asked me for a lot of help months ago about this. |
[QUOTE=Dr Sardonicus;585927]According to [url=https://www.theregister.com/2021/08/17/pi_world_record_challenged/]this article[/url], they used [url=http://www.numberworld.org/y-cruncher/]y-cruncher[/url], the same app used for the 50 tn digit computation.[/QUOTE]
Thanks for the link. An interesting read. It appears they took a bet and didn't put RAID over their JBOD. I hope they weren't using a certain manufacturer who will remain nameless... Or, if they did, they were lucky... 9-) Edit: [URL="https://www.trentonsystems.com/blog/jbod-vs-raid-what-are-the-differences"]Useful knowledge[/URL]. Edit2: Whoops... I just reread that article all the way to the bottom. While the knowledge is sound, they are a provider of JBOD kit. I have no affiliation. |
So this is uh... not great.
Based on my email convo with Thomas (the guy in charge of the latest Pi computation), it looks like they did not verify the computation. Because when I asked about the BBP computation he was like, "what?!?" If that's the case, then they have jumped the gun on announcing the record. I've kicked off my own BBP verification run which will end in about ~20 hours. |
[QUOTE=Mysticial;586003]So this is uh... not great.
Based on my email convo with Thomas (the guy in charge of the latest Pi computation), it looks like they did not verify the computation. Because when I asked about the BBP computation he was like, "what?!?" If that's the case, then they have jumped the gun on announcing the record. I've kicked off my own BBP verification run which will end in about ~20 hours.[/QUOTE] Ian Cutress made a video, published today, where he said that it was not verified. [YOUTUBE]s-Xma3SHHos[/YOUTUBE] |
[QUOTE=Viliam Furik;586018]Ian Cutress made a video, published today, where he said that it was not verified.[/QUOTE]
I'm not surprised Ian noticed. He's been eyeing this record for a while now and knows all the ins and outs of it. He just needs to source the storage for it before he can make a run for the record. And it looks like there are at least [I]some[/I] sites that aren't ready to recognize the record yet. Not sure if they also noticed the lack of verification or if they're waiting for me to update the list on numberworld.org. |
[QUOTE=Mysticial;586031]I'm not surprised Ian noticed. He's been eyeing this record for a while now and knows all the in-and-outs of it. He just needs to source the storage for it before he can make a run for the record.[/QUOTE]
I don't know the history of this. But... From my perspective... Calculating N digits of something known is just a matter of money and time. Every time a record is announced, it just takes another team to run a longer job to beat it. It's a bit like auto-gratification. But without the euphoria. |
Verification is done and it matches. So the screw-up fortunately turned out not to be one.
|
[QUOTE=chalsall;586034]I don't know the history of this...[/QUOTE]
[URL]https://www.ams.org/publicoutreach/math-history/hap-6-pi.pdf[/URL] |
Ok, now we have enough digits to solve a(20) :razz: :cmd:
|
[QUOTE=LaurV;586109]Ok, now we have enough digits to solve a(20) :razz: :cmd:[/QUOTE]
What is a(20)? |
Finding a certain specific chain of digits in the decimal part of pi, but I can't remember which.
|
[QUOTE=Viliam Furik;586117]What is a(20)?[/QUOTE]
See [URL="https://mersenneforum.org/showthread.php?t=16978"]this thread[/URL]. |
[QUOTE=LaurV;586186]See [URL="https://mersenneforum.org/showthread.php?t=16978"]this thread[/URL].[/QUOTE]Ah, yes, the "primes in π" thread.
Since a(n) is standard notation for OEIS sequences, I had tried such sequences related to digits of pi. The Wolfram MathWorld page on [url=https://mathworld.wolfram.com/PiDigits.html]Pi Digits[/url] lists URLs of the OEIS sequences a(n) for the first occurrence of n consecutive decimal digits d, d = 0 to 9, in the decimal expansion of pi. The largest value of n in any of these, however, is n = 14, for d = 0, 5, 7, and 9. These are in the first 1.21 x 10[sup]13[/sup] digits. I suppose there might be a small chance of finding a(15) in one of these sequences among the first 6.28 x 10[sup]13[/sup] digits. |
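Conceptually, finding these first occurrences is just a substring search over the stored expansion; a toy sketch (the real obstacle, of course, is holding and scanning 6.28 x 10^13 digits, which this hypothetical `first_run` helper glosses over):

```python
def first_run(pi_digits, d, n):
    """0-based position (after the decimal point) of the first run of
    n consecutive copies of digit d in the string pi_digits, or -1."""
    return pi_digits.find(str(d) * n)

# First 50 decimals of pi as a small illustration:
digits = "14159265358979323846264338327950288419716939937510"
print(first_run(digits, 3, 2))  # 23 (the "33" in ...264338...)
```
 |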
[URL]https://cloud.google.com/blog/products/compute/calculating-100-trillion-digits-of-pi-on-google-cloud[/URL]
"Records are made to be broken. In 2019, we calculated 31.4 trillion digits of π — a world record at the time. Then, in 2021, scientists at the University of Applied Sciences of the Grisons calculated another 31.4 trillion digits of the constant, bringing the total up to 62.8 trillion decimal places. Today we're announcing yet another record: 100 trillion digits of π." |
[QUOTE=pinhodecarlos;607798][URL]https://cloud.google.com/blog/products/compute/calculating-100-trillion-digits-of-pi-on-google-cloud[/URL]
"Records are made to be broken. <snip>"[/QUOTE] [url]https://www.youtube.com/watch?v=nMqdRu9gGGs[/url] |
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2023, Jelsoft Enterprises Ltd.