mersenneforum.org NFS cofactorization with 3LP

2020-10-18, 23:35 #1
charybdis

When sieving with 2 large primes, one tends to find that the optimal value of mfb is slightly less than 2*lpb. Presumably this is because a composite cofactor close to 2^(2*lpb) is unlikely to split into two primes that are both smaller than 2^lpb. For example, for lpb=32 the optimal value of mfb with CADO-NFS appears to be 60. Similarly, when sieving with 3 large primes, the optimal value of mfb is slightly less than 3*lpb.

However, what happens if we get a cofactor that is just below 2^(2*lpb)? Such a cofactor is still unlikely to contribute to a relation: as in the 2LP case, it is probably not the product of two primes both smaller than 2^lpb, and it is also too small to be the product of three large primes. So again it should make sense to set a double large prime bound smaller than 2^(2*lpb).

Neither GGNFS nor CADO appears to have an option to set a double large prime bound separately from mfb when using 3LP. Does anyone here know what is actually done? Is a value computed automatically, perhaps from the mfb given?
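To put numbers on the "too small to be the product of three large primes" point, here is a small standalone sketch (mine, not siever code) of the standard hopeless-window argument, assuming lpb=32 and a factor base bound lim=175M as in the job quoted later in the thread:

Code:
#include <cmath>
#include <cstdio>

int main(void) {
    const int lpb = 32;     /* large prime bound L = 2^lpb */
    const double B = 175e6; /* factor base bound (lim) */
    /* A composite cofactor n with L^k < n < B^(k+1) is hopeless: it is too
       big to be a product of at most k primes below L, yet too small to be
       a product of at least k+1 primes above B. */
    for (int k = 1; k <= 3; k++) {
        double lo = (double)(k * lpb);              /* log2(L^k) */
        double hi = (double)(k + 1) * std::log2(B); /* log2(B^(k+1)) */
        if (hi > lo)
            std::printf("k=%d: cofactors of %.0f to %.1f bits are hopeless\n",
                        k, lo, hi);
    }
    return 0;
}

For these parameters the k=2 window is roughly 64 to 82 bits, so a siever implementing this test already rejects everything just above 2^(2*lpb); the question here is whether rejection should also start a little below 64 bits.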
2020-10-18, 23:56 #2
VBCurtis

I've wondered about this also, though my curiosity was about cofactors slightly bigger than 2^(2*lpb) but still too small to possibly split into 3 large primes.
2020-10-19, 00:05 #3
charybdis

Quote:
 Originally Posted by VBCurtis: I've wondered about this also, though my curiosity was about cofactors slightly bigger than 2^(2*lpb) but still too small to possibly split into 3 large primes.
CADO definitely deals with these properly - see this snippet from sieve/las-cofactor.cpp:

Code:
/* context: n is the leftover cofactor, s its size in bits, scs.lim the
   factor base bound B, and lpb the large prime bound in bits (L = 2^lpb) */
nd = mpz_get_d (n);
B = (double) scs.lim;
kB = B * B;
for (klpb = lpb; klpb < s; klpb += lpb, kB *= B)
  {
    /* invariant: klpb = k * lpb, kB = B^(k+1) */
    if (nd < kB) /* L^k < n < B^(k+1) */
      return 0;  /* too big for k large primes, too small for k+1 */
  }
I couldn't find anything related to the double large prime bound, but there's a LOT of code for the siever so there's every chance I just haven't found it.

2020-10-19, 00:33 #4
charybdis

Should have realised that I could just search in a relations file to find an answer to my question. Here's one from a job with lim1=175M, lpb1=32, mfb1=90:

Code:
178218979181,5545:2,b,fb,445,3b21,3c47b,166eeb,b38e5f,c2091937:3,3,3,5,71,89,265,9e3f5,c5a6b,451d0b,1c9c589,45458a5,f085ca3b,fb78b42f

That's two 32-bit large prime factors on the algebraic side, so the cofactor wasn't far off 2^64. It sure looks like CADO is just using a bound of 2^(2*lpb). It feels like there could be a decent speedup from setting the 2LP bound separately in this situation, and I'd be surprised if no-one had thought of this before, so is there some reason why this wouldn't actually work in practice?
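As a quick sanity check on that relation (a standalone sketch, with the two trailing hex factors copied straight from the line above):

Code:
#include <cmath>
#include <cstdint>
#include <cstdio>

int main(void) {
    /* the last two algebraic-side factors of the relation quoted above */
    const uint64_t p = 0xf085ca3bULL; /* = 4035299899 */
    const uint64_t q = 0xfb78b42fULL; /* = 4218991663 */
    std::printf("p ~ 2^%.2f, q ~ 2^%.2f, p*q ~ 2^%.2f\n",
                std::log2((double)p), std::log2((double)q),
                std::log2((double)p * (double)q));
    return 0;
}

This prints p ~ 2^31.91, q ~ 2^31.97, p*q ~ 2^63.88: a cofactor barely a tenth of a bit below 2^(2*lpb) = 2^64 survived, so no smaller 2LP bound was applied.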
2020-10-19, 02:36 #5
VBCurtis

Well, standard practice for the 2LP side is to use mfb of 2*lpb or 2*lpb - 1. We're the weird ones using very non-standard settings for our params files. Take a look at the factory params files for, say, C130-145 to see what I mean. So it would not surprise me if nobody had thought to do what you're suggesting.

Edit: I don't speak code well enough to answer my own question: could you just modify that formula above to exclude, say, 64-bit cofactors when lpb = 32? Just that one change should provide a fair bit of speedup.

Last fiddled with by VBCurtis on 2020-10-19 at 02:38
2020-10-19, 18:59 #6
charybdis

Quote:
 Originally Posted by VBCurtis: So it would not surprise me if nobody had thought to do what you're suggesting. Edit: I don't speak code well enough to answer my own question: could you just modify that formula above to exclude, say, 64-bit cofactors when lpb = 32? Just that one change should provide a fair bit of speedup.
I've stuck in a couple of lines to exclude cofactors between 61 and 64 bits. 60 might be a bit lower than optimal, but it matches the bound on the rational side. The siever appears to be running normally; yield and rels/sec are down, but I can't compare that with anything until I know how many fewer relations will be needed to form a matrix.

Of course, I wouldn't like having to recompile every time I change the bounds, so if this experiment isn't a complete failure I'll get in contact with the authors.
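For anyone curious, the change amounts to something like the following. This is a hypothetical standalone sketch, not the actual patch: accept_cofactor and lpb2 are made-up names, the loop mirrors the CADO snippet from post #3, and the extra test at the top is the new 2LP bound.

Code:
#include <cmath>
#include <cstdio>

/* returns false if a composite cofactor of the given bit size should be
   discarded; lpb2 is the separate double large prime bound in bits
   (60 here, i.e. "exclude cofactors between 61 and 64 bits") */
static bool accept_cofactor(int bits, int lpb, int lpb2, double lim) {
    if (bits > lpb2 && bits <= 2 * lpb)
        return false;                  /* the couple of extra lines */
    double n  = std::ldexp(1.0, bits); /* approximate n by 2^bits */
    double kB = lim * lim;             /* B^(k+1), starting at k = 1 */
    for (int klpb = lpb; klpb < bits; klpb += lpb, kB *= lim)
        if (n < kB)                    /* L^k < n < B^(k+1): hopeless */
            return false;
    return true;
}

int main(void) {
    /* with lpb = 32, lpb2 = 60, lim = 175e6: 63-bit cofactors are now
       dropped, while 58-bit and 90-bit ones are still kept */
    for (int bits : {58, 63, 70, 90})
        std::printf("%d bits -> %s\n", bits,
                    accept_cofactor(bits, 32, 60, 175e6) ? "keep" : "drop");
    return 0;
}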

Last fiddled with by charybdis on 2020-10-19 at 19:01

2020-11-03, 12:30 #7
charybdis

Bounds of 32/60/90 (lpb/2LP bound/mfb) on the algebraic side are quite clearly worse than 32/64/90, by something like 10-15% depending on whether you take matrix size into account. I'll try 32/62/90 on the next job.
2020-11-19, 20:50 #8
charybdis

32/62/90 performs fairly similarly to 32/64/90; it's maybe a couple of percent slower, but that's hard to tell for sure. There is a slight reduction in relations needed, but the lower yield means more sieving has to be done at higher Q values, where sieving is slower. There isn't any noticeable improvement in matrix size.

In hindsight, I shouldn't have expected the algebraic side to behave like the rational side: the larger norms mean that more relations have two large primes close to the bound on the algebraic side than on the rational side. I could try 32/63/90, but it's hard to imagine it would be more than a percent or two faster than 32/64/90, so I'll just stick with the defaults from now on.