Old 2006-01-18, 00:07   #16
ewmayer
Sep 2002
República de California

2×5,813 Posts

Besides the monstrous runtime estimates given in the above threads for LL testing a number the size of M(M127), there is an even more fundamental problem: how to store numbers that large. M(M127) is a number having 2^127, or roughly 10^38, bits. Assume we had some magical way of using just a single hydrogen atom to store each bit of such a number (note that helium would be safer if there's any oxygen nearby). Thus we would need roughly (10^14 times Avogadro's number) hydrogen atoms, or roughly 10^14 grams of hydrogen, to store our data, assuming a data-perfect algorithm that needed only as many bits as the number being tested. That is roughly 1000 cubic kilometers of hydrogen at standard temperature and pressure - roughly equivalent to the volume of atmosphere overlying a small country - or around 1 cubic kilometer in more-compact supercooled-liquid form. Moreover, we'd need a way to coherently and quickly manipulate all the atoms in that volume in parallel. And even if we could pull off that amazing feat, the sheer size of this data reservoir would set fundamental limits on how quickly we could compute with it - how long does it take light to cross a spherical reservoir containing 1 cubic kilometer of ultracold liquid hydrogen? The answer is, on the order of a microsecond (thanks, alpertron ;), which limits our maximum operating frequency to around 1 MHz - in fact much slower than today's microchips!
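The back-of-envelope numbers above can be checked directly; a quick sketch using standard physical constants (Avogadro's number, the density of liquid hydrogen, the speed of light - all order-of-magnitude only):

```python
import math

bits = 2**127                       # bits of M(M127): about 1.7e38
avogadro = 6.022e23                 # atoms per mole
h_molar_mass = 1.008                # grams per mole of hydrogen atoms

moles = bits / avogadro             # ~3e14 mol of H atoms, one atom per bit
grams = moles * h_molar_mass        # ~3e14 g, i.e. roughly 10^14 grams

lh2_density = 0.071                 # g/cm^3, liquid hydrogen
volume_cm3 = grams / lh2_density    # ~4e15 cm^3
volume_km3 = volume_cm3 / 1e15      # a few km^3 - same order as "around 1 km^3"

# Light-crossing time (diameter) of a sphere of that volume
radius_m = (3 * volume_cm3 * 1e-6 / (4 * math.pi)) ** (1 / 3)
crossing_s = 2 * radius_m / 3.0e8   # ~7e-6 s: on the order of a microsecond

print(f"{grams:.1e} g, {volume_km3:.1f} km^3, {crossing_s:.1e} s")
```

The microsecond crossing time is what caps the clock rate near 1 MHz: no signal can traverse the reservoir faster than light does.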

Thus, in order to complete the computation in less than the lifetime of the universe (gauged by the time it takes low-mass stars to burn out), our hypothetical computer would have to be many orders of magnitude smaller than it could possibly be simply to store the data. It appears one is trapped between Scylla and Charybdis (the "rock and whirlpool-slash-hard-place" of Greek mythology) - the only way to be fast enough is to be much too small. Call me a pessimist, but I think I'll stick to trial-factoring - there, storing the data (which need only be the size of the factor candidates being tried) is no problem, and neither is massive parallelism (many machines can work on the problem, each on its own range without interacting with the others, except to get new ranges to test).

(I've gone up to around 176 bits using my own factoring code - no factors yet.)
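Why trial-factoring stays cheap: by the classical theorems on Mersenne numbers, any prime factor q of M_p (p prime) has the form q = 2kp + 1 and satisfies q ≡ 1 or 7 (mod 8), so one only ever stores and tests candidates the size of q. A minimal sketch of the idea in Python - this is my own illustration, not ewmayer's actual factoring code, and the function and range names are made up:

```python
p = 2**127 - 1  # M127, itself a Mersenne prime; M(M127) = 2^p - 1

def try_range(k_start, k_end):
    """Return a prime-form factor of 2^p - 1 found with k in [k_start, k_end), else None."""
    for k in range(k_start, k_end):
        q = 2 * k * p + 1
        if q % 8 not in (1, 7):
            continue                # such q cannot divide a Mersenne number
        # q divides 2^p - 1 exactly when 2^p ≡ 1 (mod q); three-argument
        # pow() keeps every intermediate value smaller than q, so memory
        # use is just the size of the candidate - the point of the post.
        if pow(2, p, q) == 1:
            return q
    return None

# Candidates up to ~176 bits correspond to k up to roughly 2^48; per the
# post, no factor has turned up in the ranges searched so far.
print(try_range(1, 10_000))
```

Parallelism is equally simple: hand each machine a disjoint [k_start, k_end) range, with no communication needed beyond assigning new ranges.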

Last fiddled with by ewmayer on 2006-01-18 at 01:23 Reason: Since this topic is closely related to the more general issue of whether 2[sup]{Some Mersenne prime}[/sup]-1 is prime, I've merged it into that thread.