CRT optimisation?
Apologies in advance for what is probably a rather naïve question. I'm playing with a factoring algorithm in which CRT calculations are one of the bottlenecks. I have used an optimisation to the CRT calculation, which improves things slightly. I have never seen this optimisation mentioned in the literature, and was wondering why, as it is a very obvious one: Rather than solving:
[INDENT]x = r[SUB]0[/SUB] (mod m[SUB]0[/SUB])
x = r[SUB]1[/SUB] (mod m[SUB]1[/SUB])
...
x = r[SUB]n[/SUB] (mod m[SUB]n[/SUB])[/INDENT] we can solve [INDENT]x - r[SUB]0[/SUB] = 0 (mod m[SUB]0[/SUB])
x - r[SUB]0[/SUB] = r[SUB]1[/SUB] - r[SUB]0[/SUB] (mod m[SUB]1[/SUB])
...
x - r[SUB]0[/SUB] = r[SUB]n[/SUB] - r[SUB]0[/SUB] (mod m[SUB]n[/SUB])[/INDENT] meaning one modular inversion can be avoided due to the 0 residue in the modified first congruence. For example, in the case of just 2 congruences, this leads to: [INDENT]x = r[SUB]0[/SUB] + (m[SUB]0[/SUB][SUP]-1[/SUP] mod m[SUB]1[/SUB]) m[SUB]0[/SUB] (r[SUB]1[/SUB] - r[SUB]0[/SUB]) (mod m[SUB]0[/SUB]m[SUB]1[/SUB])[/INDENT] rather than the usual: [INDENT]x = (m[SUB]1[/SUB][SUP]-1[/SUP] mod m[SUB]0[/SUB]) m[SUB]1[/SUB]r[SUB]0[/SUB] + (m[SUB]0[/SUB][SUP]-1[/SUP] mod m[SUB]1[/SUB]) m[SUB]0[/SUB]r[SUB]1[/SUB] (mod m[SUB]0[/SUB]m[SUB]1[/SUB])[/INDENT] Do the disadvantages (an initial extra addition, plus an extra subtraction per non-initial congruence, and the possible complications of (r[SUB]i[/SUB] - r[SUB]0[/SUB]) going negative) outweigh the advantages in general? Just curious... (and apologies for using = rather than the congruence symbol, which I can't find!)
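For concreteness, here is a minimal Python sketch of the two reconstructions described above for the 2-congruence case (function names are invented for illustration; `pow(m, -1, n)` for modular inverses needs Python 3.8+):

```python
# Sketch: standard 2-congruence CRT reconstruction vs. the shifted variant
# described above, which needs only one modular inverse instead of two.
# Function names are illustrative, not from any particular library.

def crt_standard(r0, m0, r1, m1):
    # x = (m1^-1 mod m0)*m1*r0 + (m0^-1 mod m1)*m0*r1  (mod m0*m1)
    inv1 = pow(m1, -1, m0)  # two modular inversions in total
    inv0 = pow(m0, -1, m1)
    return (inv1 * m1 * r0 + inv0 * m0 * r1) % (m0 * m1)

def crt_shifted(r0, m0, r1, m1):
    # Solve for y = x - r0:  y = 0 (mod m0),  y = r1 - r0 (mod m1),
    # so x = r0 + (m0^-1 mod m1)*m0*((r1 - r0) mod m1); one inversion only.
    inv0 = pow(m0, -1, m1)
    return (r0 + inv0 * m0 * ((r1 - r0) % m1)) % (m0 * m1)

if __name__ == "__main__":
    # x = 3 (mod 7), x = 4 (mod 11)  ->  x = 59 (mod 77)
    print(crt_standard(3, 7, 4, 11), crt_shifted(3, 7, 4, 11))  # 59 59
```

Note that the `% m1` reduction of `(r1 - r0)` handles the "going negative" worry in the post, since Python's `%` always returns a non-negative result for a positive modulus.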
[QUOTE=mickfrancis;430279](and apologies for using = rather than the congruence symbol, which I can't find!)[/QUOTE]
no worries [TEX]\equiv[/TEX] is what you want it's done in [TEX]\TeX[/TEX] 
[QUOTE=science_man_88;430281]no worries [TEX]\equiv[/TEX] is what you want it's done in [TEX]\TeX[/TEX][/QUOTE]
Ah yes - thanks!
Are you assuming m0 is the smallest modulus in the set? I would think you risk messing up the CRT otherwise.
I don't have my copy of Knuth handy, but this looks a little like the fast(er) CRT that he describes. 
[QUOTE=jasonp;430287]Are you assuming m0 is the smallest modulus in the set? I would think you risk messing up the CRT otherwise.
I don't have my copy of Knuth handy, but this looks a little like the fast(er) CRT that he describes.[/QUOTE] I'm not sure why it would be a problem - it's really just modular subtraction on the left and subtraction in a Residue Number System on the right, isn't it? (I'm probably missing something here, though...) I'd be interested to see the Knuth algorithm... 
It's an exercise in Knuth (The Art of Computer Programming volume 2) just after his discussion of the Garner algorithm.

[QUOTE=Nick;430304]It's an exercise in Knuth (The Art of Computer Programming volume 2) just after his discussion of the Garner algorithm.[/QUOTE]
I knew I should have invested in those volumes! 
The problem I was thinking of was that in general (a mod b) mod c is not the same as (a mod c) mod b. So if your m0 was larger than m1, then (r1 - r0) mod m1 may not be the same as (r1 - (r0 mod m1)) mod m1. I could be overthinking it though.

[QUOTE=jasonp;430346]The problem I was thinking of was that in general (a mod b) mod c is not the same as (a mod c) mod b. So if your m0 was larger than m1, then (r1 - r0) mod m1 may not be the same as (r1 - (r0 mod m1)) mod m1. I could be overthinking it though.[/QUOTE]
To further illustrate this problem, here's an example: (30 mod 5) mod 7 -> 0 mod 7, while (30 mod 7) mod 5 -> 2 mod 5. All we did was change the order of the modulo operations, and it changes what we get back. That said, I think jasonp is overthinking it, if CRT is the Chinese Remainder Theorem: the mod is never done twice, and if you subtract the same thing from two things that are congruent, they stay congruent. In the two congruences you give (unless you meant m0 in one of them) you can think of them as polynomials: r1 - r0 can be (m1*y + r1) - (m1*z + r0). Though I still don't see how you can cross values mod the m's like that. If you remember that any value mod another number is always congruent to itself, you can rewrite the second as the first. 
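The two claims above can be checked in a few lines of Python (a quick sketch, not from the thread; the value 59 is chosen as a solution of x = 3 (mod 7), x = 4 (mod 11)):

```python
# Nested mod is order-dependent, but subtracting the same constant from both
# sides of a congruence preserves it - which is all the shifted CRT relies on.

print((30 % 5) % 7, (30 % 7) % 5)   # 0 and 2: order matters for nested mod

# x = 59 satisfies x = 3 (mod 7) and x = 4 (mod 11); subtracting r0 = 3
# from both sides gives x - 3 = 0 (mod 7) and x - 3 = 1 (mod 11).
x, r0 = 59, 3
print((x - r0) % 7, (x - r0) % 11)  # 0 and 1
```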
May I suggest that the modern algebraic way of looking at this offers some advantages here?
If we are working with integers modulo 5, for example, then we regard integers a and b as equivalent if (and only if) 5 divides a - b, so we have 5 equivalence classes:
{..., -15, -10, -5, 0, 5, 10, 15, ...}
{..., -14, -9, -4, 1, 6, 11, 16, ...}
{..., -13, -8, -3, 2, 7, 12, 17, ...}
{..., -12, -7, -2, 3, 8, 13, 18, ...}
{..., -11, -6, -1, 4, 9, 14, 19, ...}
For any integer a, we write [$]\bar{a}[/$] to denote the equivalence class of a, i.e. the entire set of all integers equivalent to a modulo 5 (this is the set in the above list which a appears in). We then define addition and multiplication on these [B]sets[/B] by: [$$] \begin{eqnarray*} \bar{a}+\bar{b} & = & \overline{a+b} \\ \bar{a}\cdot\bar{b} & = & \overline{a\cdot b} \end{eqnarray*} [/$$] for any integers a & b, and show that this is well-defined (i.e. the definition does not depend on the representative elements chosen). Thus, for example, we can write [$]\bar{4}^2=\overline{-1}^2=\bar{1}[/$]. Viewed this way, we are not tempted to write something like "30 mod 5 mod 7" because 30 mod 5 is then the set [$]\overline{30}=\bar{0}[/$] and not all elements of the set have the same remainder on division by 7, so the final "mod 7" is not a valid operation. 
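One way to make this residue-class viewpoint concrete is a tiny Python type (the class name and interface are invented for illustration): arithmetic is defined on canonical representatives, so equivalent inputs give equal results and there is no nested-mod ambiguity.

```python
# A minimal sketch of residue classes mod n with well-defined arithmetic,
# so that e.g. the class of 4 mod 5 equals the class of -1 mod 5.

class ResidueClass:
    def __init__(self, a, n):
        self.n = n
        self.a = a % n  # canonical representative in 0..n-1

    def __add__(self, other):
        assert self.n == other.n, "classes must share a modulus"
        return ResidueClass(self.a + other.a, self.n)

    def __mul__(self, other):
        assert self.n == other.n, "classes must share a modulus"
        return ResidueClass(self.a * other.a, self.n)

    def __eq__(self, other):
        return self.n == other.n and self.a == other.a

    def __repr__(self):
        return f"{self.a} (mod {self.n})"

four = ResidueClass(4, 5)
minus_one = ResidueClass(-1, 5)
print(four == minus_one, four * four)  # True 1 (mod 5)
```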