- **Software**
(*https://www.mersenneforum.org/forumdisplay.php?f=10*)

  - **Prime95 sorcery**
(*https://www.mersenneforum.org/showthread.php?t=27824*)

Hi, new to the forum and interested in hardware implementation/acceleration as applied to prime searching. I am first trying to quantify the scale of the problem of testing exponents >100M. I made some naive calculations, but these don't seem to accord with the incredible speed at which I see Prime95 iterating for these candidates.
My back-of-the-envelope calculation just for the transform stages goes like this:

1. Start with an exponent up to 127M for simplicity.
2. On a 64-bit machine this forms an FFT of length 2M (I understand IBDWT is used meaning this doesn't need to double pre-modulo operation).
3. This would require 1M radix-2 butterfly kernels per stage x 20 stages = 20M kernels.
4. This is doubled for the reverse transform -> 40M kernels.
5. Each kernel operation requires a twiddle factor multiplication and 2 additions as well as data operations so the minimum I can see this being performed in is 10 instruction cycles (very rough guess).
6. This gives 400M instruction cycles per iteration for the transforms alone, which would require ~100 ms/iter on a modern processor core.

I then compare this with what I see on a single core of an 11th Gen Core i7 @ 3.8 GHz, which is ~17 ms/iter for the entire operation. The estimate doesn't even account for the pointwise-multiplication, twiddle-factor generation, modulo operation, word-length adjustment, memory latency, etc. - I have clearly missed some major aspect of the fundamentals or efficiency improvements at play. If anyone could point out where the discrepancies lie it would be greatly appreciated. Also, do Prime95 and similar implementations actually spend the vast majority of processing time resolving the FFT butterflies?
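The steps above can be checked mechanically. This is a sketch of the post's own naive model; every constant here (64-bit words, 10 cycles per butterfly, one instruction per cycle) is the post's stated assumption, not how Prime95 actually works:

```python
# Reproducing the post's back-of-the-envelope estimate.
# All constants here are the post's stated assumptions, not Prime95 internals.
import math

p = 127_000_000                # exponent: bits in the Mersenne operand
bits_per_word = 64             # naive assumption: full 64-bit words
fft_len = 2 ** math.ceil(math.log2(p / bits_per_word))  # 2**21 = 2M points
stages = int(math.log2(fft_len))        # 21 radix-2 stages (the post rounds to 20)
butterflies = (fft_len // 2) * stages   # butterflies in one transform
total = 2 * butterflies                 # forward + reverse transform
cycles = total * 10                     # 10 cycles/butterfly (rough guess)
clock_hz = 3.8e9                        # single core @ 3.8 GHz
ms_per_iter = cycles / clock_hz * 1e3
print(f"FFT length: {fft_len:,}; butterflies/iter: {total:,}")
print(f"naive estimate: ~{ms_per_iter:.0f} ms/iter")   # ~116 ms
```

The slight difference from the post's 100 ms figure comes only from counting the full 21 radix-2 stages of a 2M-point FFT rather than the rounded 20.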

Perhaps the best way to get answers is to directly contact our resident sorcerer supreme George Woltman, author of Prime95.
Forum username: Prime95. E-mail: :woltman:

[QUOTE=jtravers;606702]On a 64-bit machine this forms an FFT of length 2M (I understand IBDWT is used meaning this doesn't need to double pre-modulo operation)[/quote]
We store about 18 bits per IEEE fp64, so it is more like 6-7M. But I guess that just strengthens your case.

[QUOTE=jtravers;606702]Each kernel operation requires a twiddle factor multiplication and 2 additions as well as data operations so the minimum I can see this being performed in is 10 instruction cycles (very rough guess)[/quote]
Modern processors can do 8-16 floating point ops/cycle (including muls & adds). Data movement latency can be hidden with enough compute operations. So the 10 cycles/kernel might be more like 0.2 cycles.

[QUOTE=jtravers;606702]The estimate doesn't even account for the pointwise-multiplication, twiddle-factor generation, modulo operation, word-length adjustment[/quote]
These are all O(n), which is a smaller component. And also, no explicit modulo due to IBDWT.

[QUOTE=jtravers;606702]If anyone could point out where the discrepancies lie it would be greatly appreciated.[/QUOTE]
George (or Ernst) could give you the actual details (as opposed to just the superficial knowledge that I have).
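Redoing the arithmetic with these corrections shows why the numbers reconcile. This sketch uses ~18 usable bits per word and a rounded power-of-two FFT length (real clients use mixed-radix lengths closer to 6.5M); the 10 flops/butterfly and 0.2 cycles/butterfly figures are the thread's rough values, not measured ones:

```python
# Revised estimate: ~18 usable bits per fp64 word, and SIMD/FMA throughput
# instead of one operation per cycle. Numbers are the thread's rough figures.
import math

p = 127_000_000
bits_per_word = 18                          # per the reply; not an exact table value
words = math.ceil(p / bits_per_word)        # ~7.06M words needed
fft_len = 2 ** math.ceil(math.log2(words))  # 8M here; real clients use ~6.5M mixed-radix
stages = int(math.log2(fft_len))            # 23 radix-2 stages
butterflies = fft_len * stages              # 2 transforms * (N/2) * stages
clock_hz = 3.8e9
ms_naive = butterflies * 10 / clock_hz * 1e3   # at 10 cycles/butterfly: ~508 ms
ms_simd = butterflies * 0.2 / clock_hz * 1e3   # at 0.2 cycles/butterfly: ~10 ms
print(f"fft_len={fft_len:,}: naive ~{ms_naive:.0f} ms, SIMD ~{ms_simd:.0f} ms")
```

With the longer (correct) FFT the one-op-per-cycle model gets worse, ~500 ms/iter; at ~0.2 cycles/butterfly it drops to ~10 ms, the same ballpark as the observed ~17 ms once the O(n) passes and memory effects are added back.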

Welcome to the forum. You may find some of the [URL="https://mersenneforum.org/showthread.php?t=24607"]reference info collection[/URL] useful for other purposes.

Ernst Mayer wrote Mlucas, and [URL="https://magazine.odroid.com/article/prime-number-discovery-use-odroid-c2-make-mathematical-history/"]a good article[/URL] on FFT-based multiplication, worth a read. A few comments on your post:

The FFT length of 2M is not adequate for 127M operands. Because of the need to handle a lot of carries, the usable bit width per word is ~17-20 bits out of the 53 significant bits (mantissa, including the implied leading 1 bit) in a double-precision floating point word, not 64 bits. (See [URL="https://en.wikipedia.org/wiki/IEEE_754"]binary64 of IEEE 754[/URL].) That applies in general, not only to prime95 and Mlucas, but also to GPU applications gpuowl, cudalucas, etc. Bits/word slowly declines as fft length or exponent increases. A 2M fft size is good to about a 40M exponent; 127M requires ~6.5M fft length.

Well-written code is often memory-bandwidth bound, and so may use what appear to be less-than-optimal code sequences to reduce memory bandwidth demands. Use of compound instructions such as FMA3 is common. Cache effectiveness has a big impact on memory bandwidth requirements. Benchmarking is done on multiple FFT forms to determine which is best for the given hardware, operand, and prime95 configuration (# of cores/worker & other variables).

One of the benefits of the IBDWT is that, unlike a traditional FFT, there is no need for zero-padding, reducing fft length by a factor of two compared to what would otherwise be required. The -2 of an LL test iteration and the modulo 2[SUP]p[/SUP]-1 come almost for free, being performed as part of the single pass per iteration for limited-range carry propagation, IIRC.

George has put a lot of time and talent into improving Prime95's performance for over a quarter century, including a lot of CPU-model-specific optimizations. It outperforms general-purpose code like Mathematica considerably. The [URL="https://www.mersenne.org/download/"]source code is available[/URL] to browse.
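As a rough sanity check of the fft-length figures above: dividing the exponent by an assumed bits/word within the stated 17-20 range reproduces both the ~2M-at-40M and ~6.5M-at-127M numbers. The specific 19.1 and 19.5 values below are illustrative picks, not the clients' actual per-length limits:

```python
# Sanity check of the quoted fft-length figures from exponent / (bits per word).
# The bits/word values are illustrative picks from the stated 17-20 range,
# not Prime95's actual per-FFT-length limits.
def words_needed(exponent: int, bits_per_word: float) -> float:
    """FFT words needed to hold a 2^p - 1 residue at this packing density."""
    return exponent / bits_per_word

for p, bpw in [(40_000_000, 19.1), (127_000_000, 19.5)]:
    print(f"p = {p:>11,}: ~{words_needed(p, bpw) / 1e6:.2f}M words")
# p =  40,000,000: ~2.09M words   (matches "2M fft size is good to about 40M")
# p = 127,000,000: ~6.51M words   (matches "127M requires ~6.5M fft length")
```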

[QUOTE=axn;606706]We store about 18 bits per IEEE fp64, so it is more like 6-7M. But I guess that just strengthens your case.[/QUOTE]
True.

[QUOTE=axn;606706]Modern processors can do 8-16 floating point ops/cycle (including muls & adds). Data movement latency can be hidden with enough compute operations. So the 10 cycles/kernel might be more like 0.2 cycles.[/QUOTE]
I think this gets to the crux of it - it is a long time since I looked into processor architecture, and I assumed that 1 operation per core per cycle was standard. This would account for the large discrepancy.

[QUOTE=axn;606706]These are all O(n), which is a smaller component. And also, no explicit modulo due to IBDWT.[/QUOTE]
Accepted.

[QUOTE=axn;606706]George (or Ernst) could give you the actual details (as opposed to just the superficial knowledge that I have).[/QUOTE]
Thanks, but I think that your "superficial knowledge" was more than adequate. I should work on bringing mine up to that level :wink:

[QUOTE=kriesel;606707]The FFT length of 2M is not adequate for 127M operands. [...] 2M fft size is good to about 40M exponent; 127M requires ~6.5M fft length. [...][/QUOTE]
Thanks, those references look very interesting, especially the Mlucas article, which I am going through. As both you and @axn pointed out, even though the word size has to be smaller, the advances in processor architecture likely account for the discrepancy in my naive calculation.

