2021-08-17, 22:22   #10
Mysticial


Quote:
Originally Posted by R. Gerbicz
I don't know why we wouldn't use this colossal 57-term(!) arctan formula I've found. Its efficiency doesn't compete with Chudnovsky, but using a bunch of computers we could get the result quicker in wall time, since all terms can be computed in parallel, and with num=57 arctan terms the final sum takes basically only O(log(num)) big-number-addition time (plus multiplications by the "small" constant integers in c[]). For the smallest n in the sum of arctan(1/n) terms we have n > 10^20, so we get more than 20 digits per term, better than Chudnovsky.

Code:
c = [212346171621984379202607910, -141986132978022176645261831, -19188947083479808676847750, 72976183437824305758327029, -90487680380658315708343594,
311666636439147152580655021, 164886394092602675087156920, -277675188573061374603700591, -328918225746279750915200446, -115228975332701852008106265,
91695937787306382534509492, 274816818440262075651640693, -15278882553455239903148046, 12345819849357697383382739, -157487043779043149246827005,
165649173043654388123361981, 268335731435818979971293832, -74376668880349669845919200, 136746152203763097091123740, -106971635007532827887780437,
-105558968251925687253287026, -82410840215405324255866021, 296511073341938960924412299, 79414242640647747520579196, 26575505338669030526976157,
42582496871221199838103045, -431056435239179795449215, 322897940072599977312538725, -5058363070279926676095997, 138150568339123858964501887,
-100697699225584221681812015, -40360165609976142590233256, -32168480347955895535959243, -265774660195351767787225477, -19927028999571156486476849,
-9604760200607125790956233, -388197118646979923787984357, -342841339813260618645453450, 178967052427653826777184469, -243278199242825683334544770,
-32735042905672245875593049, 380865428210048809909621749, -215830479721667949495349715, 4859860087340886670720953, -306797318475862261176909614,
253850710497913248888215152, 99683692694159392561113651, 171658379893183050731039940, -89191773347130083621036329, 172802843689931488647278961,
-96455659594184508250880480, 107781332940660320060027780, 195623057567176555762442409, -82697175828995156518216025, 8171045079609761562016517,
35001730194790928786362252, -28720454810100608397545222];
t = [100706129803452075294, 114063843547135341423, 117387028264098620557, 118182626635495860199, 120422248000399031137,
120870463680930344868, 120927259045345571307, 121843699825114397177, 125275271043850344818, 144552427347806978193,
151179894086004836582, 155531320434402458222, 171268442677083970343, 186312414964043780693, 200597192291437604193,
229771399727574656128, 242114657461222775367, 242526457156343868609, 252241001866777989537, 276914859479857813947,
279268215504325418912, 293274837014756552545, 306254909186162917405, 311286554505870488322, 321507762595941798843,
395467645802520991318, 397699150117042862902, 400464045964625262913, 408987081828419988057, 424370650490416068993,
431899278472593106531, 440044425799491348789, 503324067165721943132, 571415097863763305482, 647982671411101494018,
754220218301026231032, 860057504564641127682, 895965022987753171419, 904744940324446807318, 1350650129695249176568,
1474841158733738137711, 1702259183351533337068, 1707392125695342504348, 1786595743440215727323, 1866004788235399428730,
2021521390014319431432, 2149280509511211774827, 2224183918046598697675, 2262767288002926709269, 2355639885555472733772,
2627598404185081429432, 3661364551741763772965, 4256797797404613635163, 6694462477782585046432, 9443926883403066025057,
10442269772936340101219, 14218352152467117817607];
\p 10000
775078*Pi-sum(i=1,57,c[i]*atan(1/t[i]))
\p 28
sum(i=1,length(t),log(10)/log(t[i]))
vecmin(t)

output:

Code:
? realprecision = 10018 significant digits (10000 digits displayed)
%3 = -6.630182182933390232 E-10012
? realprecision = 38 significant digits (28 digits displayed)
? %4 = 2.747877508222941264640834032
? %5 = 100706129803452075294

The coefficients come from solving a system of linear equations, so the result is really 0, hence we got 775078*Pi. [We need a division by a small integer at the end to extract Pi.] I used the first 64 primes p with p = 2 or p == 1 mod 4 (the largest such prime is 757), and from lots of smooth solutions I was able to eliminate 7 of them, keeping only 57 primes, which gave the 57-term formula.
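The O(log(num)) claim in the quote is a parallel tree reduction: the 57 products c[i]*atan(1/t[i]) are computed independently, then summed pairwise level by level. A minimal sketch of that reduction (my illustration, not Gerbicz's code, with plain Python ints standing in for big-number values):

```python
import math

def tree_sum(terms):
    """Sum a list pairwise, level by level. All additions within a level
    are independent, so on parallel hardware the sequential depth is
    ceil(log2(len(terms))) big-number additions."""
    depth = 0
    while len(terms) > 1:
        terms = [terms[i] + terms[i + 1] if i + 1 < len(terms) else terms[i]
                 for i in range(0, len(terms), 2)]
        depth += 1
    return terms[0], depth

total, depth = tree_sum(list(range(57)))
print(total, depth)  # 57 terms collapse in only 6 sequential addition levels
```

With 57 terms the reduction takes ceil(log2(57)) = 6 levels, regardless of how large each big-number term is.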

Coming from the other thread on Pi... Wow at this formula!

There is a way to (fairly) accurately estimate how fast a formula like this is for actual computer implementations.

Say you have a series that you want to sum up to D digits of accuracy. First you calculate how many terms N you need. (This is fairly easy since these series are usually linearly convergent.)
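For a single arctan series this estimate is easy to make concrete: atan(1/t) = sum over k of (-1)^k / ((2k+1) t^(2k+1)), so each successive term shrinks by a factor of about t^2, i.e. gains about 2*log10(t) decimal digits. A rough sketch (the function name is mine):

```python
import math

def terms_needed(digits, t):
    """Rough term count for the atan(1/t) series to reach `digits`
    decimal digits: each term gains about 2*log10(t) digits."""
    return math.ceil(digits / (2 * math.log10(t)))

# The smallest t in the quoted 57-term formula is just over 10^20,
# so each term of that sub-series gains a bit more than 40 digits:
print(terms_needed(10**6, 100706129803452075294))
```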

Suppose you sum up the N terms without rounding or truncation (keeping full integers). You will get a very large fraction where the numerator and the denominator are roughly of equal size (in digits).

The number of digits in either the numerator or the denominator is roughly proportional to the amount of computation needed to sum the series. Thus if you need to compare the speed of two series (such as Chudnovsky vs. a single arctan series), you can use this method to estimate how fast they are relative to each other.
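A toy version of this comparison (my sketch, using exact rational arithmetic rather than a real implementation): sum two arctan series as full fractions with no rounding or truncation, then look at the denominator sizes relative to the accuracy each sum achieves.

```python
from fractions import Fraction
import math

def arctan_partial_sum(t, n_terms):
    """Exact partial sum of atan(1/t) = sum (-1)^k / ((2k+1) * t^(2k+1)),
    kept as a full Fraction (no rounding or truncation)."""
    s = Fraction(0)
    for k in range(n_terms):
        s += Fraction((-1) ** k, (2 * k + 1) * t ** (2 * k + 1))
    return s

# atan(1/5) needs many more terms than atan(1/239) for similar accuracy,
# and its exact fraction is larger relative to the accuracy it buys.
a = arctan_partial_sum(5, 30)
b = arctan_partial_sum(239, 10)
print(len(str(a.denominator)), len(str(b.denominator)))
```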

-------

Now the question is: how do you actually calculate the size of the resulting fraction? You first have to derive the binary-splitting recursion for it.

http://www.numberworld.org/y-crunche...tml#CommonP2B3

This gives you the polynomials P(x), Q(x), and R(x).

Q(0, N) will be the denominator of your resulting fraction (before simplification). Thus the cost of the series is O(log(Q(0, N))), where the big-O constant depends only on the hardware and software; it is roughly the same across formulas.
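As a toy instance (not y-cruncher's code): for the series e = sum 1/n! the per-term denominator polynomial is Q(x) = x, so the binary-splitting denominator is Q(0, N) = N!, and the cost measure log(Q(0, N)) is just log(N!) = lgamma(N + 1).

```python
import math

def Q(a, b):
    """Binary-splitting denominator product over terms a+1..b.
    For e = sum 1/n! the per-term polynomial is Q(x) = x,
    so Q(0, N) = N!."""
    if b - a == 1:
        return b
    m = (a + b) // 2
    return Q(a, m) * Q(m, b)

N = 100
print(math.log(Q(0, N)))  # log(100!), matches math.lgamma(101)
```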

-------

How do you compute log(Q(0, N))?

Q(x) will be a polynomial (usually one that factorizes completely). So factorize it, then apply a log-gamma to each linear factor individually. Complex-conjugate pairs will cancel out.
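A hypothetical worked instance (this Q(x) is my example, not from any particular series): take Q(x) = x(2x + 1). Then log Q(0, N) = sum over k = 1..N of log(k) + log(2k + 1), and each linear factor turns into a log-gamma expression.

```python
import math

# sum_{k=1..N} log(k)      = lgamma(N + 1)
# sum_{k=1..N} log(2k + 1) = N*log(2) + lgamma(N + 3/2) - lgamma(3/2)
N = 1000
direct = sum(math.log(k) + math.log(2 * k + 1) for k in range(1, N + 1))
via_lgamma = (math.lgamma(N + 1)
              + N * math.log(2) + math.lgamma(N + 1.5) - math.lgamma(1.5))
print(direct, via_lgamma)  # the two agree to floating-point accuracy
```

Evaluating a couple of lgamma calls is vastly cheaper than summing N logarithms, which is the point of the trick.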

Last fiddled with by Mysticial on 2021-08-17 at 22:35