2007-06-21, 12:44  #34  
Tribal Bullet
Oct 2004
3555_{10} Posts 
Quote:
One of the early papers on number-theoretic transforms claimed that you could get some computational savings when computing these transforms, because multiplication by many of the constants involved in the transform can be performed with only a few modular shifts, adds and subtracts. This is something the dedicated hardware the reader is designing should take advantage of :) 
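As an illustration (my addition, not from the paper referenced above): in a transform over a Fermat-style modulus F = 2^m + 1, the twiddle factors are powers of 2, so multiplying by them needs only shifts and subtractions, because 2^m ≡ -1 (mod F). A minimal Python sketch, with a hypothetical function name:

```python
def mulmod_pow2(x, k, m):
    """Compute (x * 2**k) mod F for F = 2**m + 1 using only shifts,
    adds and subtracts: since 2**m == -1 (mod F), the high half of a
    shifted value can be folded back in with one subtraction."""
    F = (1 << m) + 1
    y = x << (k % (2 * m))  # 2**(2m) == 1 (mod F), so exponents wrap mod 2m
    while y >= F or y < 0:
        if y < 0:
            y += F                    # bring a negative fold back into range
        else:
            lo = y & ((1 << m) - 1)   # low m bits, weight 1
            hi = y >> m               # high part, weight 2**m == -1 (mod F)
            y = lo - hi
    return y
```

No general multiplication or division instruction is needed, which is exactly the property dedicated hardware can exploit.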

2007-06-21, 12:55  #35  
"Bob Silverman"
Nov 2003
North of Boston
2^{2}·1,877 Posts 
Quote:
However, it does not change the underlying complexity. All it does is improve the implied multiplicative CONSTANT in the O() estimates. The true asymptotic complexity of multiplication on a Turing machine is unknown (see Knuth Vol. 2, section on multiplication). We know that we can achieve linear time complexity on a *pointer* machine, and that this is the best possible, but we do not know the true lower bound on a Turing machine (i.e. a finite-state real computer). We can achieve O(n log n loglog n) to multiply n-bit numbers, but do not know if this is the best possible. I strongly suspect that it is, but there is no proof. The modular reduction just slows things down by a fixed constant (the constant being machine- and implementation-dependent), since we can also do division by FFTs in O(n log n loglog n) time. (See Aho, Hopcroft, and Ullman's book on algorithms for a clear exposition.) 
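To make the division-by-multiplication remark concrete (a sketch of mine, not Dr. Silverman's code): Newton's method computes a fixed-point reciprocal using only multiplications, shifts and subtractions, so any fast multiplication routine yields division at the same asymptotic cost, up to a constant factor.

```python
def floordiv_newton(a, b):
    """Integer division a // b built from multiplications and shifts only,
    via Newton iteration on a fixed-point reciprocal r ~ 2**(2k) / b.
    Illustrates that division costs only a constant factor more than
    multiplication."""
    assert a >= 0 and b > 0
    k = max(a.bit_length(), b.bit_length()) + 1
    x = 1 << (2 * k - b.bit_length())        # initial guess, relative error < 1/2
    for _ in range(2 * k.bit_length() + 4):  # quadratic convergence: O(log k) rounds
        # Newton step x <- x * (2 - b*x / 2**(2k)), in fixed point
        x = (x * ((1 << (2 * k + 1)) - b * x)) >> (2 * k)
    q = (a * x) >> (2 * k)
    while (q + 1) * b <= a:                  # final correction of rounding error
        q += 1
    while q * b > a:
        q -= 1
    return q
```

A production routine would recurse with doubling precision rather than iterate at full precision, but the constant-factor reduction to multiplication is the same.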

2007-06-22, 17:42  #36 
"Nancy"
Aug 2002
Alexandria
2,467 Posts 
The recent paper by Martin Fürer has lowered the complexity of integer multiplication to O(n log(n) 2^(O(log*(n)))), where log*(n) is the minimum number of logarithms you need to stack to bring n down to <= 1. Bernstein did a quick estimate suggesting that it might be competitive with Schönhage–Strassen for interesting operand sizes, i.e. a few million words or so.
Alex

Last fiddled with by akruppa on 2007-07-19 at 13:37 Reason: Corrected spelling of Fürer 
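For concreteness (my addition, not Alex's): the iterated logarithm log*(n) is trivial to compute, and it grows so slowly that the 2^(log*(n)) factor is effectively constant for any feasible operand size. A quick Python sketch:

```python
import math

def log_star(n):
    """Iterated logarithm (base 2): the number of times log2 must be
    applied to n before the result drops to <= 1."""
    count = 0
    x = n
    while x > 1:
        x = math.log2(x)  # math.log2 accepts arbitrarily large Python ints
        count += 1
    return count
```

Even for n = 2^65536, vastly beyond any multiplication anyone will ever run, log*(n) is only 5.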
2007-06-22, 18:35  #37  
∂^{2}ω=0
Sep 2002
República de California
10110111101011_{2} Posts 
Quote:
As anyone who has implemented a halfway decent transform-based multiply algorithm knows, real-world memory management and processor-optimization effects tend to dwarf anything beyond the n*log(n) terms. For large vector lengths, one is typically elated to even approach the n*log(n) in practice, especially once the datasets in question significantly exceed the cache sizes on the target hardware. Thanks for the references, though. ;) 

2007-10-23, 18:39  #38  
Sep 2005
2×3^{2} Posts 
Yes, we prove this and find it as big as we want.
Quote:
Just waiting, or continuing to find another proof for this. I wish all the best for you and all mathematicians. Sghodeif 

2007-10-23, 19:55  #39 
∂^{2}ω=0
Sep 2002
República de California
5·2,351 Posts 
For those not familiar with Mr. Ghodeif's alleged "proofs", some background reading to prevent another thread from getting polluted with the discussion:
http://www.mersenneforum.org/showthread.php?t=4604
http://www.mersenneforum.org/showthread.php?t=4618
http://www.mersenneforum.org/showthread.php?t=5211
http://www.mersenneforum.org/showthread.php?t=6018
http://www.mersenneforum.org/showthread.php?t=9507

Mr. Ghodeif, any future posts you make with claims of this sort which [as has been the case with all your claims to date] lack any supporting documentation will be subject to deletion.

Last fiddled with by ewmayer on 2007-10-23 at 20:20 
2007-10-27, 23:11  #40  
"Robert Gerbicz"
Oct 2005
Hungary
3^{2}·179 Posts 
Quote:

