P95 trial division strategy
I'm experimenting with my own GPU Mersenne trial-division tool... it's got its own fun challenges.
The classic way to test whether q divides 2^p - 1 is to compute 2^p mod q and look for a remainder of 1. That part is a fast and easy power ladder; the slow step is of course the mod q reduction, and there are lots of strategies for it. The important cases are q > 2^64, so I'm using three 32-bit words as my representation. The modular method I use for SPRP testing is [URL="http://en.wikipedia.org/wiki/Montgomery_reduction"]Montgomery reduction[/URL], and that's what I was planning to use for trial division as well. I have that working and it's successful... though of course you always want more performance.

I've thought of using the "multiply trick" of computing 2^160/q and turning the division into a multiplication. There are also some clever tricks using q^-1 mod 2^160, but I think those precomputes are already slower than the Montgomery method. The trial division tools in [URL="http://www.mersenne.org/various/freeware.htm"]MERS[/URL] use several methods depending on the size of q, but for the interesting and most common case of q > 2^62, they just jump to an arbitrary-precision library.

Prime95, however, is very interesting but also fairly opaque. Despite the many comments in factor32.asm and factor64.asm, it's hard to decipher its approach! It seems to compute an (approximate!) floating-point reciprocal of q, then use that to decide how many multiples of q to subtract off. So it may be doing classic division-and-subtraction to compute its mods, boosted by a floating-point initial approximation step. I may be very wrong about this, because the code is so [B]beautifully tuned[/B] and unrolled that it's hard to step back and see the strategy at a larger level. The comments in the code are well written, but at a very low, per-statement level. I understand the sieve steps, but can someone give a higher-level description of P95's method for computing 2^p mod q?
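The power-ladder test above can be sketched in a few lines. This is an illustrative Python version, not the tool's actual code; the `%` operator stands in for whichever reduction strategy (Montgomery or otherwise) is under discussion:

```python
def is_factor(p, q):
    """Return True if q divides the Mersenne number 2^p - 1.

    Left-to-right square-and-multiply: after each step r holds
    2^(prefix of p's bits) mod q; q divides 2^p - 1 exactly when
    the final remainder is 1.
    """
    r = 1
    for bit in bin(p)[2:]:       # bits of p, most significant first
        r = (r * r) % q          # square; this mod is the slow step
        if bit == '1':
            r = (r + r) % q      # multiply in the base 2
    return r == 1

# 2^11 - 1 = 2047 = 23 * 89, so both 23 and 89 are factors.
```

One square (and at most one doubling) per bit of p, so the whole test is dominated by the cost of the mod q step inside the loop.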
Did you read [url]http://www.mersenne.org/various/math.php[/url]?
[quote=lfm;187295]Did you read :
[URL]http://www.mersenne.org/various/math.php[/URL] ??[/quote] Yes, indeed, but that page does not describe the strategy P95 uses for the mod-square computation in its loop to find 2^p mod q.
[quote=SPWorley;187289]It seems as if it computes an (approximate!) floating point reciprocal to q, then uses that to compute how many multiples of q to subtract off. So this may be doing classic division and subtraction to compute its mods, boosted by using a floating point initial approximation step.[/quote]Compare that to Algorithm D in section 4.3.1 of Knuth's [I]The Art of Computer Programming[/I]. I haven't done that myself, but I'd bet that there's some similarity.

[quote=cheesehead;187302]Compare that to Algorithm D in section 4.3.1 of Knuth's [I]The Art of Computer Programming[/I]. I haven't done that myself, but I'd bet that there's some similarity.[/quote]
I bet you're right... that's exactly what I was thinking of when I called it "classic division and subtraction". But that approach is also noticeably slower (in my own code) than Montgomery reduction, at least for exponents above about 65536. Given the beautiful tuning of P95 (at both the algorithm and assembly-code levels), I was thinking there might be something else going on that I could learn from.
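For concreteness, Montgomery reduction (REDC) for a 96-bit modulus can be sketched as follows. This is an illustrative Python version; `pow(q, -1, R)` stands in for however the constant would actually be precomputed:

```python
def montgomery_setup(q, bits=96):
    """Precompute constants for Montgomery reduction modulo odd q.

    R = 2^bits must exceed q; qinv = -q^-1 mod R is the key constant.
    """
    R = 1 << bits
    qinv = (-pow(q, -1, R)) % R
    return R, qinv

def redc(t, q, R, qinv):
    """Montgomery reduction: return t * R^-1 mod q, for 0 <= t < q*R.

    Only word-sized multiplies, masks, and shifts; no division by q.
    """
    m = (t * qinv) % R           # low-half multiply; mod R is a bit mask
    u = (t + m * q) // R         # exact: the low bits cancel by design
    return u - q if u >= q else u
```

Operands are kept in "Montgomery form" a*R mod q, so a product of two such values reduced by `redc` stays in that form; a final `redc` converts back to an ordinary residue.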
[QUOTE=SPWorley;187289]
Prime95, however, is very interesting but also fairly opaque. Despite the many comments in factor32.asm and factor64.asm, it's hard to decipher its approach! It seems as if it computes an (approximate!) floating point reciprocal to q, then uses that to compute how many multiples of q to subtract off. So this may be doing classic division and subtraction to compute its mods, boosted by using a floating point initial approximation step.[/QUOTE] There are actually many different algorithms in factor32.asm, each written for a different target CPU. Also, it has been a long time since I last looked at the code, so I've forgotten quite a lot. To compute x mod y, you take the approximate reciprocal of y, accurate to 53 bits, and compute (top bits of x) * (approx. recip) to get 32 bits of the quotient, q (almost always accurate; Knuth guarantees it is within 2 of the truth). Multiply q by y, subtract from x, repeat till done. Also, note that you can compute the reciprocal quickly without division: just take the last reciprocal you computed (it will be real close) and use Newton's method to refine it into the new reciprocal.
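A hedged sketch of the estimate-multiply-subtract scheme described here, in Python. The 32-bit chunk size and the off-by-one fixup are illustrative choices, not Prime95's tuned assembly:

```python
def mod_via_float_recip(x, y):
    """Reduce x mod y by peeling off roughly 32 quotient bits per pass.

    Each quotient chunk is estimated from a 53-bit floating-point
    reciprocal of y; the estimate is within 1 of the true chunk, so a
    single add-back corrects any overshoot.
    """
    recip = 1.0 / y                      # approximate reciprocal of y
    while x.bit_length() > y.bit_length() + 1:
        shift = max(x.bit_length() - y.bit_length() - 32, 0)
        qhat = int((x >> shift) * recip) # ~32 quotient bits at a time
        x -= (qhat * y) << shift
        if x < 0:                        # rare one-too-big estimate
            x += y << shift
    while x >= y:                        # final exact correction
        x -= y
    return x
```

Each pass knocks about 30 bits off x, so a 192-bit x against a 96-bit y needs only a handful of multiply-subtract rounds.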
[QUOTE=SPWorley;187303]But that's also noticeably slower (in my own code) than doing it with Montgomery reduction, at least for exponents above about 65536. Given the beautiful tuning of P95 (both at algorithm and assembly code levels) I was thinking there may be something else going on that I could learn from.[/QUOTE]
There probably is, but the first question I would ask is "how fast is your Montgomery-based algorithm?" Roughly how many cycles do you need per 96-bit modmul (try to separate that from trial-sieving overhead as much as you can; maybe just write a single modmul-based test timing loop that does a billion modmuls), and on which platform is that? Note that you should be able to get at least a 2-3x speedup on 64-bit x86-style (and most RISC) architectures (that is, ones running under a 64-bit OS as well) from using the full 64x64 -> 128-bit hardware integer MUL instruction. That "wastes" some bits for the high parts of the multiword products (e.g. the 32x64-bit subproducts), but fullness-of-bit-field aesthetics is not the name of the game here.
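A minimal timing harness along these lines can look like the following (Python with wall-clock timing rather than cycle counts, so it is only useful for relative comparisons; the modulus and iteration count are arbitrary):

```python
import time

def time_modmul(q, iters=100_000):
    """Measure seconds per modular multiply-and-reduce for modulus q,
    isolated from any sieving or candidate-generation overhead."""
    x = q - 12345                    # arbitrary starting residue
    t0 = time.perf_counter()
    for _ in range(iters):
        x = (x * x) % q              # the operation being measured
    t1 = time.perf_counter()
    return (t1 - t0) / iters         # seconds per modmul
```

Swapping the body of the loop for a Montgomery-based or reciprocal-based reduction gives an apples-to-apples comparison of the candidate mod strategies.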
[quote=Prime95;187308]There are actually many different algorithms in factor32.asm, each written for a different target CPU. Also, it has been a long time since I last looked at the code, so I've forgotten quite a lot.
To compute x mod y, you take the approximate reciprocal accurate to 53 bits and compute (top bits of x) * (approx. recip) to get 32 bits of the quotient (almost always accurate; Knuth guarantees it is accurate to within 2). Multiply q by y, subtract from x, repeat till done. Also, note that you can compute the reciprocal quickly without division. Just take the last reciprocal you computed (it will be real close) and use Newton's method to compute the new reciprocal.[/quote] Thanks much for the authoritative answer! I did notice the (many!) methods defined, especially for the low bit ranges below 2^64. I guess in the days of 486 processors it was a struggle even to test that high, so having custom code for different low bit ranges was useful. I may try a couple of old-school divide approaches as well... we'll see how they compare to Montgomery.
[quote=ewmayer;187309]
Note that you should be able to get at least a 2-3x speedup on 64-bit x86-style (and most RISC) architectures (that is, ones running under a 64-bit OS as well) from using the full 64x64 -> 128-bit hardware integer MUL instruction. That "wastes" some bits for the high parts of the multiword products (e.g. the 32x64-bit subproducts), but fullness-of-bit-field aesthetics is not the name of the game here.[/quote] Well, the fun part is that I'm doing this on a new architecture: a GPU! And there are arguments for a 24-bit word size: there's hardware support for using the FPU to do integer math on 24-bit words (and you can pull out the full 48-bit result). 32-bit and 64-bit words are also supported in hardware, but each is slower, as expected. It's hard to judge the fastest way to do bigint math, so my approach is to try them all and measure!
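The 24-bit-limb idea can be sketched in Python as follows. The hardware would do the 24x24 -> 48-bit multiplies and carry handling explicitly, while Python's big integers just absorb the intermediate widths; limb width and helper names here are illustrative:

```python
def to_limbs(n, w=24):
    """Split a nonnegative int into little-endian w-bit limbs."""
    mask = (1 << w) - 1
    limbs = []
    while n:
        limbs.append(n & mask)
        n >>= w
    return limbs or [0]

def limb_mul(a, b, w=24):
    """Schoolbook multiply on w-bit limbs.

    Each partial product a[i]*b[j] fits in 2w bits (48 for w=24,
    matching a 24x24 -> 48-bit hardware multiply); carries are
    propagated afterwards so every output limb is back under 2^w.
    """
    out = [0] * (len(a) + len(b))
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj        # 48-bit partial product
    carry = 0
    for k in range(len(out)):
        carry += out[k]
        out[k] = carry & ((1 << w) - 1)
        carry >>= w
    return out

def from_limbs(limbs, w=24):
    """Reassemble an int from little-endian w-bit limbs."""
    return sum(l << (w * i) for i, l in enumerate(limbs))
```

Four limbs of 24 bits cover a 96-bit q with the same total width as three 32-bit words, so the trade is extra limb operations against cheaper per-limb multiplies.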