mersenneforum.org > Math > Complexity of LLT
2007-05-27, 17:35 #1
T.Rex
Feb 2004
France
1637₈ Posts

Hello, I know this has already been said in this forum (by Mr Silverman, I think), but I cannot find it again. What is the overall complexity of the LLT as implemented by George, with the IBDWT method? (The book of Crandall and Pomerance does not say a word about this.) I'm lost between the following two statements:

Wikipedia: "The most efficient known multiplication method, the Schönhage-Strassen algorithm based on the Fast Fourier transform, requires O(p log p log log p) time to square a p-bit number, reducing the complexity to O(p^2 log p log log p) or Õ(p^2)."

Mersenne Wiki: "When used with the Fast Fourier transform, the test has the complexity of O(n^2 log n), where n is the length of the number."

Thanks,
Tony

2007-05-28, 01:07 #2
drew

Jun 2005

2·191 Posts

Quote:
Originally Posted by T.Rex
What is the overall complexity of the LLT as implemented by George, with the IBDWT method? ... Wikipedia: "... requires O(p log p log log p) time to square a p-bit number, reducing the complexity to O(p^2 log p log log p) or Õ(p^2)." Mersenne Wiki: "... the test has the complexity of O(n^2 log n), where n is the length of the number."
From analyzing the benchmark table, my money's on n^2 log(n)

Also, I don't believe the DWT represents an improvement in complexity over a typical FFT, it only reduces the proportionality constant.

Drew

Last fiddled with by drew on 2007-05-28 at 01:29

2007-05-28, 14:10 #3
Jens K Andersen
Feb 2006
Denmark
230₁₀ Posts

FFT multiplication of n-bit numbers is O(n*log n*log log n). The log log n term grows very slowly, and realistic register and operand sizes make it disappear in practice in most implementations, so the time is often, but inaccurately, simplified to O(n*log n). A PRP or fast proof test gets another factor of n because it performs O(n) multiplications. Optimizations for special forms, like the IBDWT, give a constant factor in implementations, but the asymptotic time is unchanged.

2007-05-28, 16:12 #4
T.Rex

Feb 2004
France

927₁₀ Posts

Quote:
Originally Posted by Jens K Andersen
FFT multiplication of n-bit numbers is O(n*log n*log log n). ... A PRP or fast proof test gets another factor of n because it performs O(n) multiplications. Optimizations for special forms, like the IBDWT, give a constant factor in implementations, but the asymptotic time is unchanged.
Thanks Jens.
Let me add parentheses to clarify.
So, for a 2^n - 1 Mersenne number, executing the full LLT (n-2 steps) is O( n * n * log(n) * log(log(n)) ) in theory, which simplifies to O( n^2 * log(n) ), whereas the simple multiplication we learn at school would lead to O(n^3) for a whole LLT. Correct?
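[Editor's note: the step count above can be made concrete with a minimal Lucas-Lehmer sketch in Python. This is illustrative only: Python's built-in integer multiply is not Schönhage-Strassen, and the reduction mod 2^p - 1 uses a generic `%` rather than the implicit mod of the IBDWT, so this code does not actually achieve the O(p^2 log p log log p) bound.]

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test for p an odd prime: 2^p - 1 is prime iff
    s_(p-2) == 0, where s_0 = 4 and s_(k+1) = s_k^2 - 2 (mod 2^p - 1)."""
    m = (1 << p) - 1
    s = 4
    # p - 2 iterations, each dominated by squaring a ~p-bit number.
    # With an FFT-based squaring, each step is O(p log p log log p),
    # hence O(p^2 log p log log p) for the whole test.
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print(lucas_lehmer(13))  # 2^13 - 1 = 8191 is a Mersenne prime
print(lucas_lehmer(11))  # 2^11 - 1 = 2047 = 23 * 89 is composite
```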
So it seems that Drew was correct and that our Mersenne Wiki is too vague by saying "the length of the number".
Thanks,
Tony

2007-05-28, 17:54 #5
drew

Jun 2005

101111110₂ Posts

Quote:
Originally Posted by T.Rex
So, for a 2^n - 1 Mersenne number, executing the full LLT (n-2 steps) is O( n * n * log(n) * log(log(n)) ) in theory, which simplifies to O( n^2 * log(n) ), whereas the simple multiplication we learn at school would lead to O(n^3) for a whole LLT. Correct? ...
I suspect the log(log(n)) will become more significant as n grows. Perhaps it's a fair simplification for now, but not always.

Edit: I stand corrected, I just graphed the function, and it looks like the rate of growth of log(log(n)) decreases substantially as n grows.
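[Editor's note: Drew's graph is easy to reproduce numerically; a quick sketch evaluating the factor shows how little log(log(n)) moves even across enormously different operand sizes:]

```python
import math

# How much does the log(log(n)) factor in FFT multiplication grow
# as the operand size n (in bits) increases by factors of a thousand?
for bits in (10**3, 10**6, 10**9):
    print(f"n = {bits:>10} bits: log(log(n)) = {math.log(math.log(bits)):.3f}")
```

Going from a thousand-bit number to a billion-bit number, the factor grows by well under 60%, which is why dropping it is a fair simplification in practice.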

Last fiddled with by drew on 2007-05-28 at 18:00

2007-05-29, 04:37 #6
Jens K Andersen

Feb 2006
Denmark

2·5·23 Posts

Quote:
Originally Posted by T.Rex
So, for a 2^n - 1 Mersenne number, executing the full LLT (n-2 steps) is O( n * n * log(n) * log(log(n)) ) in theory, which simplifies to O( n^2 * log(n) ). ... So it seems that Drew was correct and that our Mersenne Wiki is too vague by saying "the length of the number".
I agree, except about the last part. "The length of the number" is a common term and sounds OK to me. The big-O complexity O(n^2*log n) is the same whether the length n is measured in bits, decimal digits, or another base. A base change gives a constant factor in the length, with no influence on the big-O.

2007-05-29, 09:55 #7
T.Rex

Feb 2004
France

3²×103 Posts

Quote:
Originally Posted by Jens K Andersen
I agree, except about the last part. "The length of the number" is a common term and sounds OK to me. The big-O complexity O(n^2*log n) is the same whether the length n is measured in bits, decimal digits, or another base. ...
OK. That's clear now. Thanks to FFT, we have about O(n^2*log n) instead of O(n^3) with scholar method. Bits/digits and FFT/IBDWT are just a matter of a constant factor.
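[Editor's note: only the bits-vs-digits half of this sentence is a pure constant factor; as Ernst points out in the next post, FFT is an asymptotic gain, not a constant one. The bit/digit ratio itself can be checked quickly; this sketch uses a few real Mersenne prime exponents:]

```python
import math

# Length in bits vs. length in decimal digits: the ratio tends to
# log2(10) ~= 3.32 regardless of the number, i.e. a constant factor
# that disappears inside the big-O.
for p in (127, 1279, 11213):        # Mersenne prime exponents
    m = (1 << p) - 1
    print(p, m.bit_length() / len(str(m)))

print(math.log2(10))                # the limiting ratio, ~3.3219
```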
Thanks !
Tony

2007-05-29, 20:57 #8
ewmayer
2ω=0

Sep 2002
República de California

2³·3·487 Posts

Quote:
Originally Posted by T.Rex
OK. That's clear now. Thanks to FFT, we have about O(n^2*log n) instead of O(n^3) with scholar method. Bits/digits and FFT/IBDWT are just a matter of a constant factor.
Careful -- you're mixing up your speedups in the above.

FFT gets you an asymptotic n --> log n reduction, assuming you can implement it in a manner that actually scales that way on modern cache-based microprocessors.

DWT (a.k.a. "implicit mod") halves your runlength, i.e. gets you an asymptotic factor of ~2x. (At finite runlengths of practical interest, the speedup may be appreciably greater, approaching a factor of 3, due to the aforementioned cache effects.)

Balanced-digit arithmetic gives a significant improvement in the slowness of the growth of roundoff error. I leave it as a homework exercise as to the (heuristic) asymptotic behavior here.

Pure-integer-based transforms are not subject to roundoff error, but still have input wordsize restricted by the magnitude of convolution outputs. In fact, for a careful implementation of balanced-digit arithmetic, this convolution-output-size effect tends to dwarf the roundoff-error-growth effect. In other words, as n gets larger, floating-point (counterintuitively) becomes relatively more attractive, not less. (This assumes that one starts with a floating-point precision sufficiently good to be competitive with pure-integer at modest n -- that typically means double precision).
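[Editor's note: for reference, the Mersenne-mod weighting Ernst describes is the Crandall-Fagin irrational-base DWT, which uses variable digit sizes and fractional-power-of-two weights. The following is only a sketch of the parameter setup under the standard formulation; the transform, pointwise squaring, and carry propagation are omitted:]

```python
import math

def ibdwt_params(p, N):
    """Digit sizes and DWT weights for squaring mod 2^p - 1 with an
    N-point transform (Crandall-Fagin IBDWT).

    Digit j holds ceil((j+1)*p/N) - ceil(j*p/N) bits (a mix of
    ceil(p/N)- and floor(p/N)-bit words), and is pre-multiplied by
    the weight 2^(ceil(j*p/N) - j*p/N), which lies in [1, 2)."""
    bits = [math.ceil((j + 1) * p / N) - math.ceil(j * p / N) for j in range(N)]
    weights = [2.0 ** (math.ceil(j * p / N) - j * p / N) for j in range(N)]
    return bits, weights

bits, weights = ibdwt_params(521, 32)
assert sum(bits) == 521                      # the digits cover all p bits
assert all(1.0 <= w < 2.0 for w in weights)  # weights stay in [1, 2)
```

Because the word sizes track p/N exactly, the mod-2^p-1 reduction happens implicitly in the cyclic convolution, which is the factor-of-~2 runlength saving described above.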

Last fiddled with by ewmayer on 2007-05-29 at 20:58

2007-05-29, 21:05 #9
T.Rex

Feb 2004
France

3²·103 Posts

Quote:
 Originally Posted by ewmayer Careful -- you're mixing up your speedups in the above ...
Since I plan to talk about this in a paper, what should I say about the complexity of the LLT with basic multiplication compared to the LLT with FFT/IBDWT?
Is "Thanks to FFT, we have about O(n^2*log n) instead of O(n^3) with scholar method" correct? And what should I add (very briefly) about the impact of the IBDWT?
Since I have not mastered this point, I need help getting the sentence right.
Thanks for helping !
Tony

2007-05-29, 21:15 #10
ewmayer
2ω=0

Sep 2002
República de California

2³×3×487 Posts

Hi, Tony:

Quote:
 Originally Posted by T.Rex Is: "Thanks to FFT, we have about O(n^2*log n) instead of O(n^3) with scholar method" correct ?
Yes, although note that the more common English term for the quadratic multiply is the "grammar-school multiply" method.

(What you might call the "École normale inférieure" method, to make a pun on the French school naming system. ;)

I further suggest replacing "about" with the more mathematically precise "asymptotically."

Quote:
 And what should I add (very short) about the impact of IBDWT ?
Some suitable rendition of what I wrote about DWT above should do the trick, I think. Also note that IBDWT (at least as commonly cited in the literature) refers specifically to a weighting suitable for Mersenne-mod -- that's why when speaking in general terms I prefer the more generic "DWT" or "implicit mod".

Cheers,
-Ernst
