
mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Factoring (https://www.mersenneforum.org/forumdisplay.php?f=19)
-   -   Compositions of arithmetic progressions (https://www.mersenneforum.org/showthread.php?t=10953)

jasonp 2008-11-11 02:38

Compositions of arithmetic progressions
 
So this has now come up several times in the course of building msieve code, and it's high time I tried to understand how to think about it.

Suppose you have two arithmetic progressions, r1 + k1 * p1 and r2 + k2 * p2. If p1 and p2 are coprime, how does one go about constructing a third arithmetic progression P = r3 + k3 * p1 whose members all lie on values of the p2 progression? Put another way, k3 is a multiple of p2, so that P maps the first progression onto the lattice formed by the second.

I know that when r2 is zero, then r3 is r1 * modinverse(p2,p1) mod p1, where modinverse(x,y) is the modular inverse of x modulo y. What I don't understand is how to generalize this to the case of arbitrary r2.

Apologies if this is too basic. The case when p1 and p2 share factors is also of interest, though less so.

R.D. Silverman 2008-11-11 11:35

[QUOTE=jasonp;148761]So this has now come up several times in the course of building msieve code, and it's high time I tried to understand how to think about it.

Suppose you have two arithmetic progressions, r1 + k1 * p1 and r2 + k2 * p2. If p1 and p2 are coprime, how does one go about constructing a third arithmetic progression P = r3 + k3 * p1 whose members all lie on values of the p2 progression? Put another way, k3 is a multiple of p2, so that P maps the first progression onto the lattice formed by the second.

I know that when r2 is zero, then r3 is r1 * modinverse(p2,p1) mod p1, where modinverse(x,y) is the modular inverse of x modulo y. What I don't understand is how to generalize this to the case of arbitrary r2.

Apologies if this is too basic. The case when p1 and p2 share factors is also of interest, though less so.[/QUOTE]

If r1 + k_i p1 = r2 + m_i p2 for some (k_i, m_i), then trivially

r2 - r1 + m_i * p2 = 0 mod p1, whence m_i = (r1 - r2) * p2^(-1) mod p1

What else are you looking for? It's just a linear congruence.
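
A minimal Python sketch of that congruence (purely illustrative: first_common_term is a made-up name, gcd(p1, p2) = 1 is assumed, and pow(x, -1, m) needs Python 3.8+):
[code]
def first_common_term(r1, p1, r2, p2):
    # m_i = (r1 - r2) * p2^-1 mod p1, exactly the linear congruence above
    m = (r1 - r2) * pow(p2, -1, p1) % p1
    return r2 + m * p2                     # lies on both progressions

v = first_common_term(3, 7, 1, 5)          # progressions 3 + k*7 and 1 + m*5
assert v % 7 == 3 and v % 5 == 1           # v = 31; all common terms are 31 + j*35
[/code]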

jasonp 2008-11-11 12:40

Well the problem is trivial when you phrase it [i]that[/i] way :)

Makes perfect sense, thanks.

jasonp 2009-12-31 22:12

I'm resurrecting this thread because moving forward with degree 6 polynomial selection in msieve requires understanding the multidimensional generalization of this problem.

Suppose you want a degree 6 GNFS algebraic polynomial for a large input number, for example RSA200. Suppose further that you manage to find a polynomial with decent size properties, for example

[code]
R(x) =
-733353745969457329647324759819358
+10146121641768070157 x

A(x) =
+196658068845691396854154694784799560081209151290611987559
-593991137294951464528877714492146951852356505880 x
-1848998357142768993289988278919373127654 x^2
+1599893759674383491380658833465 x^3
+2423179599042378391811 x^4
-873569620551 x^5
+180 x^6
[/code]
(Nobody get excited, this was just 45 minutes' work with a GPU)

The traditional way of optimizing the root properties of A(x) is to choose (exquisitely carefully) a rotation polynomial Rot(x) such that A(x) + Rot(x) * R(x) has an unusually large number of roots modulo powers of many different small primes p^k.

You do that using a sieve; for a single p^k, look at each of the p^k residue classes r_i. If Rot(x) is a constant C, then we want C such that

A(r_i) + C * R(r_i) = 0 mod p^k

If a solution exists, the set of such values lies on an arithmetic progression

R0 + j * p^k

where R0 = -A(r_i) * modinv(R(r_i), p^k) mod p^k.

So collect all of the arithmetic progressions for all the residue classes modulo the first few p^k, then sieve to find a C that maximizes a weighted sum of all the arithmetic progressions it falls on, yet is small enough that the size of the new A(x) is not adversely affected.
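
As a toy illustration of that constant-rotation sieve (eval_poly and root_sieve_constant are made-up names, the polynomials and sieve interval are assumed to be small, and a raw hit count stands in for the usual weighted sum):
[code]
from math import gcd

def eval_poly(coeffs, x, mod):
    # coeffs[i] is the coefficient of x^i
    return sum(c * pow(x, i, mod) for i, c in enumerate(coeffs)) % mod

def root_sieve_constant(A, R, prime_powers, c_min, c_max):
    scores = [0] * (c_max - c_min)
    for pk in prime_powers:
        for r in range(pk):                   # each residue class r_i mod p^k
            a, b = eval_poly(A, r, pk), eval_poly(R, r, pk)
            if gcd(b, pk) != 1:               # R(r_i) not invertible mod p^k
                continue
            R0 = -a * pow(b, -1, pk) % pk     # A(r_i) + C*R(r_i) = 0 mod p^k
            first = R0 + (c_min - R0) % pk    # first term of R0 + j*p^k in the interval
            for C in range(first, c_max, pk): # mark the whole progression
                scores[C - c_min] += 1
    return scores
[/code]
A real implementation would add a log(p)-style weight per hit instead of counting, but the arithmetic-progression bookkeeping is the same.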

If Rot(x) is non-constant, things are only a little harder. If Rot(x) has a linear term Y*x, then just do a separate sieve for each Y, and for each Y modify each arithmetic progression to use a start value of (R0 - Y * r_i) mod p^k for residue class r_i. With quadratic Rot(x), use (R0 - Y * r_i - Z * r_i^2) mod p^k.
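
The corresponding start-value adjustment, as a one-line sketch (shifted_start is a made-up name):
[code]
def shifted_start(R0, r, pk, Y, Z=0):
    # for rotation C + Y*x + Z*x^2, residue class r starts at (R0 - Y*r - Z*r^2) mod p^k
    return (R0 - Y * r - Z * r * r) % pk
[/code]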

The amount of work to do in the root sieve scales as a power of the skew S used for the polynomial. For degree 6 polynomials the range of admissible C is a bit more than the ratio of the constant coefficients of A(x) and R(x), so that a single line of admissible C is of size about 10^26 (or ~S^2.5). That's a gigantic sieve! Even worse, for quadratic rotations the admissible range of Y is ~S^1.5 and the range for Z is about S^.5. So to fully explore the available search space of rotation polynomials one has to examine S^(2.5+1.5+.5) = S^4.5 rotations. For the polynomial above S is about 770e6, so this amounts to more sieve lines (~S^2 of them) than we can ever hope to search exhaustively, each of which is itself larger than we could hope to search exhaustively.

There are two ways to deal with this; we could restrict the range that is actually searched, but this should be a method of last resort. The other possibility is what I'm working on: moving away from lines and sieving over lattices. For degree 5 and linear rotation polynomials this works pretty well; the range of C is ~S^1.5 and the range of Y is ~S^.5, so even for very large inputs it's feasible to search all the lines but over a lattice for each line, and the latter just involves modifying the initialization for the sieve. That approach won't work for degree 6; we need to sieve over a 3-D lattice that is massively skewed.

Which brings me to my problem: for residue class r_i mod p^k, pick an integer triplet (i,j,k); then the corresponding rotation is
[code]
| C |   | p^k  -r_i  -r_i^2 |   | i |   | R0 |
| Y | = |  0     1      0   |   | j | + |  0 |
| Z |   |  0     0      1   |   | k |   |  0 |
[/code]
I need to pick a lattice to sieve over that is the composition of several of these r_i for several different p^k. So how does one compute a basis for the intersection of two shifted 3x3 lattices? This is analogous to finding a reduced basis for NFS lattice sieving, but in the 3x3 case rather than the 2x2 case.

If the shift was not an issue, one could use an orthonormal transformation to produce basis vectors for the intersection, but to make the shifts not matter I need to find a common (C,Y,Z) triplet that is on both lattices. I only know how to do that by brute force, which works fine for degree 5 but is intractable when the product of the p^k is going to be 10^20 or so.

Bonus question: The feasible region for rotations looks like an ellipsoid that is probably not aligned with the unit vectors in (C,Y,Z) space. If the above can be done, and I then want to use lattice reduction to find a better basis for sieving, how can I twist the basis around so that small values of (i,j,k) are aligned with the principal axes of the ellipsoid? Doing this would let the sieve cover the maximum number of points inside the feasible region.

maxal 2010-01-22 19:49

[QUOTE=jasonp;200485]So how does one compute a basis for the intersection of two shifted 3x3 lattices?[/QUOTE]
If I understand the problem correctly, you first need to find a common shift S by solving a system of congruences (using CRT):
S == R0_1 (mod p_1^{k_1})
S == R0_2 (mod p_2^{k_2})
...
with respect to S.
Then you rewrite your matrix equation as:
[code]
| C - S |   | p^k  -r_i  -r_i^2 |   | i |
|   Y   | = |  0     1      0   |   | j |
|   Z   |   |  0     0      1   |   | k |
[/code]
(where C,Y,Z,S are the same for all p^k) and proceed as if there were no shifts at all.
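
A minimal Python sketch of that CRT step (crt_pair and common_shift are made-up names; pairwise coprime moduli are assumed):
[code]
def crt_pair(r1, m1, r2, m2):
    # combine S == r1 (mod m1) and S == r2 (mod m2), with gcd(m1, m2) == 1
    t = (r2 - r1) * pow(m1, -1, m2) % m2
    return (r1 + t * m1) % (m1 * m2), m1 * m2

def common_shift(congruences):
    # congruences is a list of (R0_i, p_i^k_i) pairs
    S, M = congruences[0]
    for r, m in congruences[1:]:
        S, M = crt_pair(S, M, r, m)
    return S, M

S, M = common_shift([(5, 27), (2, 25)])   # S == 5 mod 27, S == 2 mod 25
assert S % 27 == 5 and S % 25 == 2        # S = 302, M = 675
[/code]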

jasonp 2011-08-19 16:09

I might as well make this thread the dumping ground for all the problems I have with arithmetic progressions. Here's the latest, a consequence of an insidious bug in msieve that I'm trying to fix:

Suppose you have a bunch of different arithmetic progressions

R0 + i * p^k

where p is prime and p^k is small (say less than 1000). There are lots of different R0, up to p^k of them. I need to 'map' each of these progressions onto a lattice of size L, i.e. to find another arithmetic progression

R0' + j * p^k

such that every value of that second progression lies on a lattice A + m * L.

When L and p^k are coprime this is easy, and was addressed by Bob's post above. In fact with many R0 you can find R0' via a single table lookup: maintain an array of size p^k and put j at location (A+j*L) mod p^k for each j in turn, so that R0' = table[R0].
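
A quick Python sketch of that table lookup for the coprime case (build_table is a made-up name):
[code]
def build_table(A, L, pk):
    # table[(A + j*L) mod p^k] = j, so table[R0] is the lattice coordinate R0'
    table = [0] * pk
    for j in range(pk):
        table[(A + j * L) % pk] = j
    return table

A, L, pk = 7, 11, 13                  # gcd(L, p^k) == 1
table = build_table(A, L, pk)
R0 = 5
R0p = table[R0 % pk]
assert (A + R0p * L) % pk == R0       # hits of R0 + i*p^k occur at lattice coords R0' + j*p^k
[/code]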

What I evidently don't understand is what to do when p divides L one or more times. I originally assumed that the table lookup method described above still works, except that

- R0' may not exist depending on the precise value of A and L

- the new progression would have a step value of p^k / gcd(L, p^k) and not p^k

but I've noticed numerical examples where that's not true. Rather than trying to fit a heuristic to the data, I was wondering how to solve this (embarrassingly easy-sounding) problem analytically.

axn 2011-08-20 08:26

[QUOTE=jasonp;269508]- the new progression would have a step value of p^k / gcd(L, p^k) and not p^k[/QUOTE]

Wouldn't the step value be lcm(p^k, L) = L * p^k / gcd(L, p^k)? What am I missing?

R. Gerbicz 2011-08-20 09:01

The complete solution is known from number theory:

To find the common terms of the two arithmetic progressions
r1+x*n1
r2+y*n2
(in your case n1=p^k, n2=L).

Suppose that:
r1+x*n1=r2+y*n2, from this

r1+x*n1==r2 mod n2
x*n1==r2-r1 mod n2
This is solvable if and only if gcd(n1,n2) | (r2-r1); in that case, dividing by gcd(n1,n2) you get:

x*n1/gcd(n1,n2)==(r2-r1)/gcd(n1,n2) mod n2/gcd(n1,n2)

and here gcd(n1/gcd(n1,n2), n2/gcd(n1,n2)) = 1, so this congruence is solvable.
And there is exactly one solution mod n2/gcd(n1,n2).
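
A small Python sketch of this (intersect_progressions is a made-up name; it returns the first common term and the step lcm(n1,n2) in 'normal' coordinates):
[code]
from math import gcd

def intersect_progressions(r1, n1, r2, n2):
    g = gcd(n1, n2)
    if (r2 - r1) % g != 0:
        return None                          # no common terms
    n1g, n2g, d = n1 // g, n2 // g, (r2 - r1) // g
    x0 = d * pow(n1g, -1, n2g) % n2g         # unique x mod n2/gcd(n1,n2)
    return r1 + x0 * n1, n1 * n2 // g        # first common term, step lcm(n1,n2)

print(intersect_progressions(1, 6, 4, 9))    # (13, 18): 13 = 1 + 2*6 = 4 + 1*9
[/code]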

jasonp 2011-08-20 13:42

Thanks, makes sense.

axn: You'd be correct that that's the step value in 'normal' coordinates, but I'm looking for the step value in 'lattice' coordinates. The sieving procedure I'm trying to fix starts off with lattice points, and then converts back whatever good lattice points it finds.

