Why only three large primes

#1  fivemack  2007-05-08, 21:30

ggnfs doesn't allow relations to have more than three large primes on either side. This means that, when I try reprocessing some relations against a smaller factor base, I lose a fair number of relations because they fail to factor fully, and end up with #rels << #primes and almost nothing coming out of the combining-partial-relations phase.

From a brief look at xilman's 'MPQS with three primes' paper, I can't see why you can't have more than three large primes on each side, and have relations with three or more large primes acting as 'cross-linkers' in his polymer model of filtering.

I was expecting reprocessing against smaller and smaller factor bases to give me matrices whatever happened, just denser and denser ones, so that N*W quickly became impractically large as N got small ... it appears that this isn't the case: I just get no effective combinations at all. Is this an artefact of losing enough relations to the #p<=3 filtering that I've gone well below the combination-explosion threshold, or is there some subtler reason why large and small primes are so fundamentally different?
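One way to measure what the combining-partials stage can actually produce, however many large primes each relation carries, is to count GF(2) dependencies among the large-prime parts: every dependency is a set of relations whose large primes all cancel, i.e. a usable combination. Below is a toy sketch of that count (illustrative only, far too slow for real datasets, and nothing to do with ggnfs or msieve internals; the primes in the example are made up). Running it on growing subsets of a dataset shows whether you are anywhere near the combination-explosion threshold.

Code:
def count_free_combinations(relations):
    """relations: iterable of iterables of large primes (may be empty)."""
    basis = {}          # pivot prime -> reduced row (set of primes)
    dependencies = 0
    for rel in relations:
        row = set()
        for p in rel:                    # keep only the parity of each prime
            row.symmetric_difference_update({p})
        while row:                       # reduce against the current basis
            pivot = max(row)
            if pivot not in basis:
                basis[pivot] = set(row)  # new pivot: row is independent
                break
            row ^= basis[pivot]
        else:
            dependencies += 1            # row vanished: one usable combination
    return dependencies

# three partials sharing large primes pairwise give one free combination;
# a relation with no large primes at all also counts as one
rels = [(1000003, 1000033), (1000033, 1000211), (1000211, 1000003)]
print(count_free_combinations(rels))     # -> 1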

#2  jasonp  2007-05-08, 22:21

Quote:
Originally Posted by fivemack
ggnfs doesn't allow relations to have more than three large primes on either side. [...]
The conventional wisdom is that having more than two large primes means you have an explosion of medium size numbers to factor, only a very few of which will have all factors below the large prime bound, at the same time that factoring them becomes much more expensive. GGNFS only allows 3 primes because one of them is the special-Q from lattice sieving, so it's like getting 3 large primes for the price of 2.

There's nothing special about the number of large primes that appear in the filtering stage; any number will do. If memory serves, for QS with 3 large primes there was never really an explosion; the cycle creation rate just stayed very low until, after a certain point, it switched to being very high. The difference between the two rates was more pronounced when the factor base started off small. I don't know that anyone has looked systematically at the cycle behavior in this case, since it's equivalent to 5 large primes in a relation and not just the 4 that the papers describe.

If it will help, msieve's filtering assumes an arbitrary number of large primes. Rational and algebraic large primes are always dealt with in a unified way, and the filtering bound is dynamically adjusted until the amount of excess is enough to form a reasonable matrix. It handles relations with 7+ large primes fairly regularly.
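A toy version of that unified treatment: key every ideal by a (side, prime) pair, apply one filtering bound to both sides, strip singletons, and report the excess. This is purely a sketch of the idea and bears no resemblance to msieve's actual data structures (real algebraic ideals also carry a root):

Code:
from collections import Counter

def singleton_filter(relations, bound):
    """relations: list of (rational_primes, algebraic_primes) pairs.
    Ideals below 'bound' are ignored by the filtering and assumed to be
    carried straight into the matrix later."""
    def large_ideals(rel):
        rat, alg = rel
        return [(0, p) for p in rat if p >= bound] + \
               [(1, p) for p in alg if p >= bound]

    surviving = list(relations)
    while True:                    # strip singletons until nothing changes
        counts = Counter(i for rel in surviving for i in large_ideals(rel))
        kept = [rel for rel in surviving
                if all(counts[i] > 1 for i in large_ideals(rel))]
        if len(kept) == len(surviving):
            break
        surviving = kept

    n_ideals = len({i for rel in surviving for i in large_ideals(rel)})
    return len(surviving), n_ideals, len(surviving) - n_ideals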

I also meant to mention that with your C200 the large prime bound seems to be 30 bits, and that means a huge number of relations are needed to get to the explosion point. Your run has something like 60% excess with 67M relations, whereas Greg's C204 with 28-bit large primes has something like 200% excess with 54M relations.

#3  fivemack  2007-05-08, 23:00

Quote:
Originally Posted by jasonp
The conventional wisdom is that having more than two large primes means you have an explosion of medium size numbers to factor, only a very few of which will have all factors below the large prime bound, at the same time that factoring them becomes much more expensive. GGNFS only allows 3 primes because one of them is the special-Q from lattice sieving, so it's like getting 3 large primes for the price of 2.
I understand that; what I'm trying to do, by sieving with a larger factor-base bound and then reprocessing with a smaller one, is to see what happens when the factor base is smaller than optimal ... Silverman's paper says that too small a factor base is instant doom because numbers don't split over it, but that doesn't necessarily seem to apply if you have large primes around.

What I'm running into with ggnfs, aside from seeing relations discarded because they have too many large primes, is the restriction on the number of relations that can be combined into any given cycle ... I get things like

Code:
There are 20609 relations with 0 large primes.
There are 7212 relations with 1 large primes.
There are 435327 relations with 2 large primes.
There are 269734 relations with 3 large primes.
There are 87287 relations with 4 large primes.
There are 13568 relations with 5 large primes.
There are 718 relations with 6 large primes.
where loads of relations can't be combined because they would give too complicated a cycle; presumably, if I lifted the restriction and had a machine with unlimited memory, I would end up with a perfectly useless *dense* 366k by 366k matrix.

Quote:
If it will help, msieve's filtering assumes an arbitrary number of large primes. Rational and algebraic large primes are always dealt with in a unified way, and the filtering bound is dynamically adjusted until the amount of excess is enough to form a reasonable matrix. It handles relations with 7+ large primes fairly regularly.
What is the number that I should be looking at in msieve output to compare with the 'More columns needed (current = 20609, min = 366630)' that I'm using to judge 'progress' in ggnfs output?

Quote:
I also meant to mention that with your C200 the large prime bound seems to be 30 bits, and that means a huge number of relations are needed to get to the explosion point. Your run has something like 60% excess with 67M relations, whereas Greg's C204 with 28-bit large primes has something like 200% excess with 54M relations
Yes, the parameters for my C200 are horribly wrong; I picked them out of my head rather than doing short pre-runs. Too many large primes, too many small primes. The matrix ends up very, very big, though elegantly sparse.

For the C136 that I'm sieving at the moment, I did pre-runs with various sizes of large prime and of factor base, and optimised for [time per relation] * [2^lpb], on the principle that pi(2^30) is near enough 2*pi(2^29) to make little difference, and thinking that the size of the factor base is a second-order effect compared to coupon-collecting enough primes to reach the explosion point. I ended up with 29-bit large primes (28 was nearly as good, 30 significantly worse) and a factor-base bound of 9e6, getting 18 relations/second/CPU and aiming for ~30 million relations in about three CPU-weeks.
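The pi(2^30) ≈ 2*pi(2^29) approximation and the [time per relation] * 2^lpb figure of merit are easy to play with; in the snippet below the per-relation timings are purely hypothetical placeholders (chosen only so the ordering matches the description above), not measurements from these runs:

Code:
from math import log

def pi_approx(x):
    return x / log(x)            # prime-counting estimate pi(x) ~ x / ln x

# ratio pi(2^30) / pi(2^29) comes out around 1.93
print(pi_approx(2**30) / pi_approx(2**29))

# figure of merit: [seconds per relation] * 2^lpb, in millions
for lpb, secs_per_rel in [(28, 0.115), (29, 0.055), (30, 0.045)]:
    print(lpb, secs_per_rel * 2**lpb / 1e6)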

I did the same sort of pre-runs for 2^841-1 out of curiosity (yes, I realise this will take a couple of CPU-years to lattice-sieve, and I can't afford that many CPUs, but I've only wasted three days); there it looked as if lpb=2^29, factor-base bound=2^24 would optimise the sieving time. That seemed a very small small-prime bound, which is why I started reprocessing data from a smaller example to see what happens when the small-prime bound is too small. I suspect that after the two CPU-years I would end up with an *impossibly* dense matrix. I'm trying to get some idea of what the density-versus-small-prime-bound curve actually looks like.

There may be a good paper which talks about all of this, but I haven't seen it yet; the really-big GNFS runs have produced only one-page 'look! parameters! factors!' papers, and I'm not sure I've ever seen a really clear justification for the bounds chosen for a large factorisation.

#4  bdodson  2007-05-09, 02:26

Quote:
Originally Posted by fivemack
...
From a brief look at xilman's 'MPQS with three primes' paper, ...
Tom,
Not to be picky, but if you check the title page you'd see xilman, Arjen Lenstra, bdodson and Sam (just the co-authors I recall ... Moffet, maybe?). While the credit's certainly mainly Paul's (and the writing as well), there was a point early in the computation where Arjen and I helped ensure that the computation was feasible (iirc). -bd

#5  jasonp  2007-05-09, 06:42

Quote:
Originally Posted by fivemack
What is the number that I should be looking at in msieve output to compare with the 'More columns needed (current = 20609, min = 366630)' that I'm using to judge 'progress' in ggnfs output?
'ignoring smallest XX rational and YY algebraic ideals' gives the minimal number of matrix columns; 'need ZZ more relations than ideals' gives the current expected amount of excess. All of the 'reduce to AA relations and BB ideals' lines show how progress is being made. The code compares AA-BB to ZZ, assuming that this approximates the number of cycles that would be formed during the merge phase (it's a systematically too-low estimate from what I've seen). Anytime you see 'filtering wants CC more relations', this is 3x the amount of excess needed to get from AA-BB to ZZ.

If you're willing to do a recompile, you can hardwire the value of the variable 'filtmin' in gnfs/filter/filter.c to force the initial bound on large ideals; otherwise the code sets filtmin to a weighted average of all the ideals encountered when relations were read in during duplicate removal. If it turns out there's a lot of excess with that choice, filtmin is reduced 10% at a time and the singleton and clique processing is rerun until there's ~8% excess.
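Schematically, that adjustment amounts to a loop like the one below, where filter_pass stands in for the real singleton/clique code; this is a paraphrase of the description above, not msieve source:

Code:
def choose_filtering_bound(relations, filtmin, filter_pass):
    """filter_pass(relations, bound) -> (num_relations, num_ideals) left
    after singleton/clique removal when only ideals >= bound are tracked."""
    while True:
        nrels, nideals = filter_pass(relations, filtmin)
        excess = nrels - nideals
        if excess <= 0.08 * nideals or filtmin < 100:
            return filtmin, excess       # ~8% excess reached (or bound exhausted)
        filtmin = int(filtmin * 0.9)     # too much excess: drop the bound 10%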

#6  R.D. Silverman  2007-05-09, 12:21

Quote:
Originally Posted by jasonp
The conventional wisdom is that having more than two large primes means you have an explosion of medium size numbers to factor, only a very few of which will have all factors below the large prime bound, at the same time that factoring them becomes much more expensive. [...]
Indeed. For numbers that can be done in a reasonable time with today's computers, even 3 large primes is problematic.

Suppose we have the norm of a lattice point, f(a,b). We want

f(a,b) = (∏ p_i) * A1 * A2 * A3

where the p_i are in the factor base and A1, A2, A3 lie outside it, so each A_j > max(p_i), and the cofactor C = A1 * A2 * A3 satisfies C < max(p_i)^(3 + epsilon) for some selected epsilon.

You will find, for numbers we can do today, a LOT of such C. Most of them will get rejected: C will split into (say) q1*q2 where q1 and q2 are beyond the large prime bound, or C itself will be prime, or C = r1*r2*r3 where r1 is just barely beyond max(p_i) and r2 and r3 are too big.
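In code, that accept/reject test on the leftover cofactor looks roughly like the sketch below (sympy is used purely to keep the illustration short; a real siever applies size bounds and cheap primality tests first and splits survivors with SQUFOF/ECM/QS rather than a general factorint):

Code:
from sympy import isprime, factorint

def accept_cofactor(C, lpb, max_large_primes=3):
    """C = what is left of |f(a,b)| once the factor-base primes are
    divided out; lpb = large prime bound."""
    if C == 1:
        return True                      # smooth over the factor base alone
    if isprime(C):
        return C <= lpb                  # a single large prime, or reject
    factors = factorint(C)               # {prime: multiplicity}
    if sum(factors.values()) > max_large_primes:
        return False                     # splits into too many pieces
    return max(factors) <= lpb           # every piece must be below the bound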

Note also that once factor base primes are sieved, you must try to
factor the candidates you suspect are smooth by resieving. When you
have a LOT of such candidates you will find memory requirements for
the sieve code will increase by quite a bit.

The net result is that you will spend much too much time finding, processing
and *rejecting* these false positives.

For SNFS my code already implements 3 large primes for the linear side
(one, of course, is a special q). For the algebraic side, I estimate that
3 large primes do not become effective until one starts using a septic
polynomial. For degree 5 and 6, 3 large primes are not effective.
Note also that splitting C = A1*A2*A3 is also much more time consuming than splitting a product of two primes.

Degree 7 does not become effective until about 250 digits.

#7  jasonp  2007-05-09, 13:47

Quote:
Originally Posted by R.D. Silverman
Indeed. For numbers that can be done in a reasonable time with today's computers even 3 large primes is problematic.
I think I misinterpreted what Tom was asking. Is it better to sieve with a much-too-large factor base and track many of those largest ideals during filtering, so that the filtering is given a dataset that acts as if it had more than two large primes per side? My intuition says no, primes that are 'large' occur rarely enough in relations that it's too much work to expect to systematically deal with all of them in the hope of catching all the relations where multiple large primes stack up. But I say that without having tried it, and was actually wondering the same thing for a while.

The difficulty of resieving would be manageable; if you really have so many sieve reports that there's a ton of resieving to do, it still saves loads of time to resieve the reports in blocks. There are worse problems to have than too many reports coming out of the sieve :)

#8  R.D. Silverman  2007-05-09, 14:06

Quote:
Originally Posted by jasonp
I think I misinterpreted what Tom was asking. Is it better to sieve with a much-too-large factor base and track many of those largest ideals during filtering, so that the filtering is given a dataset that acts as if it had more than two large primes per side? My intuition says no, primes that are 'large' occur rarely enough in relations that it's too much work to expect to systematically deal with all of them in the hope of catching all the relations where multiple large primes stack up. [...]
I do not understand what you are saying: "track many of those largest
ideals during filtering"??? "systematically deal with all of them...."?????
Filtering is *easy* and takes little time relative to sieving or the LA.

And the amount of resieving has NOTHING to do with the number of
reports. The resieving work is fixed by the size of the factor base and the size of the sieve region. What does increase is the *memory* needed to
store the primes when resieving detects that a prime has hit a lattice
point that is a smooth candidate.

I outlined the costs associated with a wrong-sized factor base in my
recent paper. The cost of a factor base that is too large grows linearly
with the size of the factor base. The cost of one that is too small
grows faster than exponentially. The cost of dealing with large primes
is *negligible* unless you use too many of them [with respect to the
size of the norms]. As numbers grow larger it is worthwhile to use more
large primes. It is almost certainly worthwhile to use 3 large primes on
each side for (say) RSA-768. For SNFS, 3 large primes on the algebraic
side do not become worthwhile until one starts using degree 7. For degree
5 or 6 you get way too many false positives.

#9  jasonp  2007-05-09, 16:49

Quote:
Originally Posted by R.D. Silverman
I do not understand what you are saying: "track many of those largest
ideals during filtering"??? "systematically deal with all of them...."?????
Filtering is *easy* and takes little time relative to sieving or the LA.
What I was trying to say: you can say you want two large primes, then include small factoring routines that are capable of splitting composites that have two factors of the requisite size. You can also use a much larger factor base, say with twice the number of ideals you think you need, and still only split composites that have two factors after sieving.

Then, when filtering, make the filtering code deal with ideals that are in the high half of the factor base (i.e. have some of the factor base be subject to clique removal and merging just like the largest ideals). This implies a new definition of what a large prime is: it's a prime that appears in the filtering, not necessarily a prime that is absent from the factor base. By this definition of a large prime, you will have relations that can contain more than two large primes per side, yet you didn't have to split any tri-composites to get to that point.

So the big question: which is faster overall, doing twice as much sieving to get many more multi-large-prime relations, or using a smaller factor base, splitting tri-composites with ECM or QS, and getting *way* more relations but at higher cost? I honestly don't know.

It's a sign of my ignorance that I don't consider filtering to be easy at all :)

Quote:
Originally Posted by R.D. Silverman
And the amount of resieving has NOTHING to do with the number of
reports. The resieving work is fixed by the size of the factor base and the size of the sieve region. What does increase is the *memory* needed to
store the primes when resieving detects that a prime has hit a lattice
point that is a smooth candidate.
The latter is what I meant. If you want to choose parameters that will cause a lot of sieve reports to be found, and want to resieve all of them but don't want to use up too much memory in handling the worst case number of reports, you can always resieve the same region of the sieve several times, dealing with a group of reports at a time. For a given lattice, just keep accumulating reports until you've buffered too many of them, then do the resieving on just what you've buffered, then empty the buffer and fill it up again with the next piece of the sieve region. It does mean you incur the cost of resieving several times, but the extra relations found would be worth it. Admittedly this is more important for a line siever than a lattice siever, since the number of reports will vary greatly over the sieving region in the former case.
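A sketch of that buffering scheme, with every sieve-specific piece left as a hypothetical callable rather than code from any real siever:

Code:
def resieve_in_blocks(blocks, find_reports, resieve, handle_report,
                      max_buffered=100000):
    """Walk the sieve region block by block, buffering reports and
    resieving whenever the buffer fills, so memory stays bounded at the
    cost of repeating the resieve pass."""
    buffered = []

    def flush():
        for report, fb_primes in resieve(buffered):
            handle_report(report, fb_primes)   # record the factor-base hits
        buffered.clear()

    for block in blocks:
        buffered.extend(find_reports(block))
        if len(buffered) >= max_buffered:
            flush()
    if buffered:                               # leftovers at the end
        flush()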

#10  R.D. Silverman  2007-05-09, 17:08

Quote:
Originally Posted by jasonp
What I was trying to say: you can say you want two large primes, then include small factoring routines that are capable of splitting composites that have two factors of the requisite size. [...] This implies a new definition of what a large prime is: it's a prime that appears in the filtering, not necessarily a prime that is absent from the factor base. [...]
You are starting to spout gibberish. Once one enters the filtering stage,
the distinction between factor base primes and 'large primes' TOTALLY
VANISHES. What one has is relations consisting of the product of primes.
PERIOD.

Filtering gathers those primes together, combining matching primes in different
relations while preserving sparseness. It doesn't care what kind of primes they are.

Using a smaller-sized factor base will increase sieve time faster than exponentially. The added cost of rejecting false hits will only exacerbate this problem. Using a factor base that is too small is VERY EXPENSIVE. Using a too-small factor base while trying to compensate with additional large primes will be a DISASTER.

#11  jasonp  2007-05-09, 17:21

Quote:
Originally Posted by R.D. Silverman
You are starting to spout gibberish. Once one enters the filtering stage, the distinction between factor base primes and 'large primes' TOTALLY VANISHES. [...]
I understand that (you can't code it up and not realize that). But I think it's valid to ask if there are ways to tweak NFS parameters that speed up the algorithm for input sizes of interest.