Thread: QS question
Posted by jasonp (Tribal Bullet), 2011-06-23, 17:21 (post #2)
The existing QS programs sieve until the exact point where a matrix is possible, because using two large primes lets you treat the computation of the cutoff point as an easy graph problem that can be solved incrementally as relations are added. This, plus the fact that the linear algebra is never a bottleneck for QS, means it isn't worth oversieving to try to make the matrix take less time.
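
A minimal sketch of that incremental bookkeeping, assuming each partial relation is reduced to its (at most two) large primes, with all names hypothetical: treat large primes as graph vertices and relations as edges, track components with union-find, and at any moment the number of cycles available is edges - vertices + components.

[CODE]
class CycleCounter:
    """Incrementally count cycles in the two-large-prime graph.
    Vertices are large primes (1 stands in for 'no large prime'),
    edges are relations; cycles = edges - vertices + components."""

    def __init__(self):
        self.parent = {}       # union-find forest over primes seen so far
        self.edges = 0
        self.components = 0

    def _find(self, p):
        if p not in self.parent:            # new prime: fresh component
            self.parent[p] = p
            self.components += 1
        while self.parent[p] != p:          # path halving
            self.parent[p] = self.parent[self.parent[p]]
            p = self.parent[p]
        return p

    def add_relation(self, p1=1, p2=1):
        """Record a relation with large primes p1, p2 (default 1 = none)."""
        self.edges += 1
        r1, r2 = self._find(p1), self._find(p2)
        if r1 != r2:                        # merging two components
            self.parent[r1] = r2
            self.components -= 1

    def cycles(self):
        return self.edges - len(self.parent) + self.components
[/CODE]

Each relation costs near-constant time to add, and sieving can stop as soon as cycles() exceeds the factor-base size by a small safety margin. Full relations enter as self-loops on the dummy vertex and so count as one cycle each.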

Once you move to QS with more than two large primes in the sieving, it is no longer straightforward to compute the exact finishing point incrementally, so in that case you have little choice but to do 'test filtering' periodically until a matrix can be formed. The payoff is that sieving is faster with three large primes, so the total time will hopefully be lower even with the messier postprocessing.
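
A rough sketch of one test-filtering pass, under the simplifying assumption that each relation is just a frozenset of prime ids (again, all names hypothetical): repeatedly delete relations containing a singleton prime, then check whether the survivors leave enough excess.

[CODE]
from collections import Counter

def test_filter(relations, margin=64):
    """Prune relations containing a singleton prime (one appearing in
    no other relation) until nothing changes, then check whether the
    survivors outnumber their distinct primes by the required margin."""
    rels = list(relations)
    while True:
        counts = Counter(p for r in rels for p in r)
        kept = [r for r in rels if all(counts[p] > 1 for p in r)]
        if len(kept) == len(rels):
            break                       # singleton removal hit a fixed point
        rels = kept
    primes = {p for r in rels for p in r}
    excess = len(rels) - len(primes)
    return excess >= margin, excess
[/CODE]

Since each pass walks the whole dataset, it is run only occasionally rather than per relation, which is exactly the cost the two-large-prime graph trick avoids.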

Actually, as long as you get the input dataset into a canonical format, filtering and linear algebra for QS and NFS can be made to look exactly the same. All the tricks we use to build a good NFS matrix can be scaled down to build a good QS matrix.
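
One hypothetical rendering of 'canonical format': reduce every relation, QS or NFS, to the set of matrix columns where it has odd exponent, since even powers are already squares and vanish mod 2. After this step the filtering code never needs to know where a relation came from.

[CODE]
def canonicalize(factorization, side=0):
    """Map a relation to a frozenset of column ids. `factorization`
    is {prime: exponent}; `side` tags NFS rational/algebraic primes
    so equal prime values get distinct columns (QS uses one side)."""
    return frozenset((side, p) for p, e in factorization.items() if e % 2)
[/CODE]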

Per your statement above, we are actually looking for a dataset where the number of relations remaining exceeds the number of primes remaining by a specified amount, after useless copies of both are removed. I would expect the statistics of QS and NFS datasets to be identical if the same filtering techniques are used on each; the only reason we don't know for sure is that QS would have to sieve for a very long time to form a matrix as big as the one even a small NFS factorization (~100 digits) produces regularly.
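
Spelled out in the same toy representation (and assuming a singleton pass like the one above has already run), that stopping condition is just:

[CODE]
def matrix_ready(relations, margin=64):
    """True once unique relations outnumber the distinct primes they
    contain by at least `margin`; duplicate relations add no rank,
    and the slack lets the linear algebra find several dependencies."""
    unique = set(relations)                     # drop useless copies
    primes = {p for r in unique for p in r}     # distinct primes left
    return len(unique) - len(primes) >= margin
[/CODE]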
