2003-11-22, 01:02  #1 
Sep 2003
13×199 Posts 
A distributed-computing project to optimize the GIMPS FFT? Genetic algorithms
Here's a wacky idea that may or may not be implementable.
First, some lengthy background: From New Scientist, 15 November 1997: http://www.newscientist.com/hottopics/ai/primordial.jsp using genetic algorithms to "evolve'" the performance of hardware. Fascinating, a mustread. The above article was the basis for: The Distributed Hardware Evolution Project, a recentlybegun distributed computing project. See also their official forum hosted by FreeDC. Finally, consider this article that ran in Slashdot a few days ago: Using genetic algorithms to optimize the optimization flags given to a C compiler: http://developers.slashdot.org/devel...id=126&tid=156 Inside the Slashdot story is a link to the original article: http://www.coyotegulch.com/acovea/index.html Finally, putting it all together: would it be possible to use genetic algorithms to optimize the FFT assembly code for each chip type and each FFT size? This could be incorporated right into the Prime95 client and report its results to the new version of the server. All 40,000+ Prime95 boxes out there could spend 15 minutes a day running new generations of the genetic algorithm... each would be extremely quick, a simple benchmark. If I'm not mistaken, some of the major optimizations that George found were discovered by accident: he noticed that the debug version of the code ran faster than the nondebug version, and investigated why. The truly fascinating thing about genetic algorithms is that they're capable of "evolving" optimizations that a human designer sometimes can't even comprehend when examining the end result, let alone be capable of designing. A selfoptimizing distributed computing project... it has a certain logic to it, no? OK, I've done the hard part of thinking this up... now it's just a matter of the trivial details of implementation. 
2003-11-22, 23:02  #2 
Aug 2002
A Dyson Sphere
3^{2}·7 Posts 
I think this is a great idea if it is feasible. I remember reading an article in Scientific American about how various things have been improved with genetic algorithms like this.

2003-11-23, 05:53  #3 
Mar 2003
Melbourne
515_{10} Posts 
I'd be interested to see if FFT algorithms could evolve. But can we guarantee the result?
The result could end up being correct _only_ on the test input. Once the algorithm stabilized through the evolving process, I wonder whether _different_ inputs would give correct answers. But as has been said in the past, the current code is highly optimised.

Extending what was said in the New Scientist article, the evolving process might exploit 'non-digital' behaviour of the processor, which may depend on environmental conditions. What if the evolving process were done entirely in the Northern Hemisphere? There could be situations where, run on identical hardware in the Southern Hemisphere, different results are produced.

Given what I've said above, it's still a very interesting experiment. - Craig 
2003-11-23, 06:37  #4 
Sep 2003
13×199 Posts 
You wouldn't get the kind of weird analog interference effects and temperature dependence that they got in the original New Scientist article. I mean, the end result here is simply an assembly language routine, not an electrical circuit.
It would be closer to the article about varying the optimization flags for compiling a given C program; only here we're dealing with assembler. There are various parameters (with various tradeoffs) that George varies when he tries to optimize, and then he asks people to benchmark the modified version. But in all cases you're dealing with a program that gives the correct result; it's just that some versions are more efficient than others on a given CPU.

With genetic algorithms, you could find an optimal combination of parameters much more readily than by brute-force testing of every possible permutation. And a GA would find that combination even if it seems counterintuitive. 
2003-11-23, 16:23  #5 
Apr 2003
Berlin, Germany
19^{2} Posts 
The task is really not easy. Current developments in this area are the self-optimizing FFT libraries (e.g. FFTW).
The compiler-flag optimizations could be applied to existing FFT sources, but that doesn't work for FFTs written in assembler. If you look into a book about FFT algorithms and the tricks they apply, you'll quickly get the impression that this is a wonderful target for genetic algorithms. Currently that would amount to applying the best known tricks on a given target architecture; the creation of yet-unknown tricks and tweaks would be much more difficult. Put differently: one would have to write a symbolic processor which could take some math expressions and search for the optimal implementation, which could be "written" in C++ and then compiled in another step that uses genetic algorithms to find the optimal compiler flags.

I see this situation a bit like a human playing a game of chess against a computer program: the program can try a huge number of combinations much faster than the human player, but the latter will use his experience, intuition and strategic/tactical skills.

The text p4notes.doc gives an impression of what the genetic approach would have to try.

Last fiddled with by Dresdenboy on 2003-11-23 at 16:24 
2003-11-24, 18:43  #6 
Aug 2003
5 Posts 
If this is to be done, it must be done as two separate projects:
1. A GA to find the best FFT algorithm
2. A GA to find the "stumper": an input on which the result of 1 fails

You can find similar references to this type of project in the search for a faster sorting algorithm, originally done as an early-ish GA. The purpose of the second GA is to make sure that the first one is robust enough to handle all the cases. The other option would be to come up with a complete chart of all possible multiplications to validate against, but I think that would be too large to be practical.

You may want this as a separate effort, outside of Prime95: have it evolve using some type of test harness and, if successful, send the result in to George to include in the Prime95 code. (And maybe get some of the better-method money, if you think that way!) 
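The sorting precedent mentioned above can be sketched in miniature: co-evolve candidate sorting networks against "stumper" test cases, each population driving the other. Everything below is a toy stand-in for the proposed GA-plus-stumper pair, scaled down to 4-element networks; the population sizes and mutation scheme are arbitrary choices.

```python
import random

N = 4           # wires in the toy sorting network
NET_LEN = 10    # comparators per candidate network

def apply_network(net, seq):
    s = list(seq)
    for i, j in net:              # compare-exchange: ensure s[i] <= s[j]
        if s[i] > s[j]:
            s[i], s[j] = s[j], s[i]
    return s

def random_net():
    return [tuple(sorted(random.sample(range(N), 2))) for _ in range(NET_LEN)]

def mutate_net(net):
    net = list(net)
    net[random.randrange(NET_LEN)] = tuple(sorted(random.sample(range(N), 2)))
    return net

def random_case():
    return tuple(random.randint(0, 1) for _ in range(N))

def mutate_case(case):
    case = list(case)
    case[random.randrange(N)] ^= 1
    return tuple(case)

def coevolve(gens=200, pop=30):
    nets = [random_net() for _ in range(pop)]
    cases = [random_case() for _ in range(pop)]
    for _ in range(gens):
        # GA 1: a network scores by how many adversarial cases it sorts.
        nets.sort(key=lambda n: sum(apply_network(n, c) == sorted(c)
                                    for c in cases), reverse=True)
        # GA 2: a stumper scores by how many networks it breaks.
        cases.sort(key=lambda c: sum(apply_network(n, c) != sorted(c)
                                     for n in nets), reverse=True)
        nets = nets[:pop // 2] + [mutate_net(random.choice(nets[:pop // 2]))
                                  for _ in range(pop - pop // 2)]
        cases = cases[:pop // 2] + [mutate_case(random.choice(cases[:pop // 2]))
                                    for _ in range(pop - pop // 2)]
    return nets[0]
```

Note the catch this toy hides: by the 0-1 principle, a network that sorts all 2^N binary inputs sorts everything, so the stumper GA here has a small, sufficient search space. No comparably small sufficient test set is known for an evolved FFT, which is exactly the robustness worry raised above.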
2003-12-03, 17:34  #7 
∂^{2}ω=0
Sep 2002
República de California
2·3^{2}·653 Posts 
Before we start realistically considering the use of genetic algorithms to construct a nontrivial algorithm like the FFT, let's see if they can do something in a similar vein, but much, much simpler. Example: say I have N complex inputs in x + I*y form, which I want to multiply together in various ways, and assume the number of complex muls is >> N. Can your genetic algorithm figure out that the best way (or at least a better way) to do this is to first convert the multiplicands to complex polar form, then multiply?
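Setting aside whether a GA could discover it, the trick being described is easy to state in code: in polar form, multiplying complex numbers reduces to one real multiply (magnitudes) and one real add (angles) per factor, with the rect/polar conversion cost paid once per multiplicand and amortized when the number of muls is >> N. A small Python check that the two routes agree (a toy illustration of the cost model, not GIMPS code):

```python
import cmath

def product_rect(zs):
    """Multiply directly in x + i*y form: 4 real muls + 2 adds per step."""
    p = 1 + 0j
    for z in zs:
        p *= z
    return p

def product_polar(zs):
    """After a one-time polar conversion, each 'complex mul' is one real
    multiply (magnitudes) plus one real add (angles)."""
    r, theta = 1.0, 0.0
    for z in zs:
        m, a = cmath.polar(z)
        r *= m
        theta += a
    return cmath.rect(r, theta)

zs = [1 + 2j, 3 - 1j, -0.5 + 0.25j, 2 + 0j]
assert abs(product_rect(zs) - product_polar(zs)) < 1e-9
```

One caveat worth stating: repeated polar multiplies accumulate rounding error in the angle, and GIMPS needs tightly controlled accuracy, so this illustrates the operation-count argument only.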
GAs are probably quite useful for things like optimizing compile sequences and memory-access patterns, or pipelining a given set of machine instructions, but I have my doubts as to whether they're much use for developing fundamental algorithms of the nontrivial variety. Of course the human algorithm designer is also a product of an evolutionary algorithm (with a healthy bit of random chance thrown in, quoth the dinosaur), but I don't feel like waiting a couple of million years for a GA to tell me how to reduce the opcount of my radix-N DFT algorithm by 10 or 20% by way of a previously unheard-of transform.

Let's not make the same mistake as Stephen Wolfram, of confusing pretty pictures with fundamental insight. No disrespect to Mr. Wolfram, but I have yet to hear of a single basic physical law being elucidated by way of cellular automata. (But that's probably a topic for a different forum.) 
2003-12-03, 17:49  #8  
"William"
May 2003
New Haven
100101000101_{2} Posts 
Quote:


2003-12-03, 18:09  #9 
Sep 2003
A1B_{16} Posts 
wblipp is right: I had in mind using a GA to optimize the tunable parameters of a known algorithm, not something that could find an entirely new algorithm. As in the cited article on finding the best C-compiler optimization flags for a given program, here we'd be looking for the best settings of the tunable parameters for a given CPU architecture.

2003-12-04, 08:20  #10 
Apr 2003
Berlin, Germany
19^{2} Posts 
That would be the most easily implementable solution. There are a lot of important macros whose speed depends just on instruction order and alignment. A GA could shuffle the instructions around instead of the programmer.
OTOH, there are known hardware features and constraints, and one could just use a set of (heuristic) rules to optimize the code. And I'm sure my new idea regarding memory layout for an FFT couldn't be found that easily using GAs.
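The instruction-shuffling idea can be sketched as a permutation search: reject orderings that violate data dependencies, and let the GA minimize a stall count under a toy pipeline model (one-cycle stall whenever an instruction reads the register written by its immediate predecessor). The instruction block and the stall model below are invented for illustration; real scheduling would use the CPU's actual latencies and ports.

```python
import random

# Toy instruction block: (name, destination, sources). Each load -> add
# pair stalls when scheduled back-to-back under the model below.
INSNS = [
    ("load_a", "r1", []),
    ("add_a",  "r2", ["r1"]),
    ("load_b", "r3", []),
    ("add_b",  "r4", ["r3"]),
    ("load_c", "r5", []),
    ("add_c",  "r6", ["r5"]),
]

def valid(order):
    """A schedule is valid if every source is written before it is read."""
    written = set()
    for idx in order:
        _, dest, srcs = INSNS[idx]
        if any(s not in written for s in srcs):
            return False
        written.add(dest)
    return True

def stalls(order):
    """One stall per instruction that reads the register written by the
    instruction immediately before it."""
    return sum(INSNS[prev][1] in INSNS[cur][2]
               for prev, cur in zip(order, order[1:]))

def mutate(order):
    """Swap two positions, retrying until the result is a valid schedule."""
    for _ in range(20):
        cand = list(order)
        i, j = random.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]
        if valid(cand):
            return cand
    return list(order)

def evolve_schedule(gens=100, pop_size=20):
    base = list(range(len(INSNS)))          # program order is always valid
    pop = [mutate(base) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=stalls)
        keep = pop[:pop_size // 2]
        pop = keep + [mutate(random.choice(keep))
                      for _ in range(pop_size - len(keep))]
    return min(pop, key=stalls)
```

Here program order stalls three times, while an interleaved schedule (load_a, load_b, add_a, load_c, add_b, add_c) stalls zero times; the GA's job is to find such interleavings without being told the rule.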
2003-12-09, 20:41  #11 
Apr 2003
Berlin, Germany
101101001_{2} Posts 
A presentation covering interesting topics: http://www.ece.cmu.edu/~spiral/prese...nteljul02.pdf
Genetic algorithms, dynamic programming, etc., to find optimal DFT formulas. 