Old 2005-11-14, 13:45   #1
R.D. Silverman

Upcoming Work

I understand that NFSNET will do 2,764+ after they do 2,761-.
I have another week of sieving for 2,1382M and then perhaps
2 weeks for 2,1402L. I will then start on 2,764+ with my lattice
siever while NFSNET proceeds with a line siever.

What parameters shall we use? 4x^6 + 1 is a good choice for the
polynomial, but we should decide on factor base sizes and LP bounds.
Will a factor base bound of 32 million be sufficient?
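[A quick check of why 4x^6 + 1 fits 2,764+, and of the reciprocal form raised below — a minimal editorial Python sketch; pow(m, -1, N) assumes Python 3.8+.]

[code]
# 2,764+ = 2^764 + 1, and 764 = 6*127 + 2, so N = 4*(2^127)^6 + 1.
N = 2**764 + 1
m = 2**127

# f(x) = 4x^6 + 1 with rational side x - m: common root m.
assert 4 * m**6 + 1 == N

# Reciprocal pair f(x) = x^6 + 4 with rational side m*x - 1:
# common root r = m^(-1) mod N (Python 3.8+ modular inverse).
r = pow(m, -1, N)
assert (pow(r, 6, N) + 4) % N == 0
assert (m * r - 1) % N == 0
print("both polynomial pairs share a root mod N")
[/code]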

At the current rate, I will get started before 2,761- finishes.

I will compress my data and send it to RKW via CD. I expect a lot of
duplicates; this is an experiment.

Bob
Old 2005-11-14, 16:30   #2
xilman

Quote:
Originally Posted by R.D. Silverman
What parameters shall we use? 4x^6 + 1 is a good choice for the polynomial, but we should decide on factor base sizes and LP bounds. Will a factor base bound of 32 million be sufficient?
Polynomials are x^6+4, 2^{127}x-1 (i.e. reciprocal polynomials), FB=40M on each side, LPB=1G.
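[For concreteness, an editorial illustration of these parameters in a GGNFS-style job file. This is an assumption for the reader's benefit: neither the CWI line siever nor Bob's lattice siever reads this format, and the skew is the rule-of-thumb estimate, not a value from the thread.]

[code]
# Hypothetical GGNFS-style job file for the proposed parameters.
n: <decimal expansion of 2^764+1, or its remaining cofactor>
c6: 1
c0: 4
Y1: 170141183460469231731687303715884105728   # 2^127
Y0: -1
skew: 1.26          # ~4^(1/6) for x^6 + 4
rlim: 40000000      # rational factor base bound (FB=40M)
alim: 40000000      # algebraic factor base bound
lpbr: 30            # large prime bound 2^30 = 1G
lpba: 30
[/code]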

I notice Wacky is reading this thread as I type, so I'll leave him to comment on any other aspects.


Paul
Old 2005-11-14, 17:17   #3
R.D. Silverman

Quote:
Originally Posted by xilman
Polynomials are x^6+4, 2^{127}x-1 (i.e. reciprocal polynomials), FB=40M on each side, LPB=1G.
Is it clear that switching from 4x^6 + 1, root = 2^127, to x^6 + 4,
root = 2^-127, is better?

I think it will be worse. Consider the linear norm. It will be
2^127 a - b (versus a - 2^127 b). For much of the sieve region,
a is quite a bit bigger than b, resulting in bigger linear norms.
Yes, a^6 + 4b^6 will be slightly smaller than 4a^6 + b^6, but not
enough smaller to make up for the larger rational side norms.

Discussion please.
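[One way to make the comparison concrete — an editorial Python sketch; the sample points are picked from the sieve region discussed later in the thread, not taken from the post.]

[code]
import math

m = 2**127

# Norm pairs for a - b*alpha at sample (a, b):
#   original:   F(a,b) = 4a^6 + b^6,  G(a,b) = a - m*b
#   reciprocal: F(a,b) = a^6 + 4b^6,  G(a,b) = m*a - b
for a, b in [(45_000_000, 1), (45_000_000, 1_000_000), (10_000_000, 40_000_000)]:
    orig = math.log2(4*a**6 + b**6) + math.log2(abs(a - m*b))
    recip = math.log2(a**6 + 4*b**6) + math.log2(abs(m*a - b))
    print(f"a={a:>10,} b={b:>10,}  orig {orig:6.1f} bits, recip {recip:6.1f} bits")
[/code]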

With a bound of 40M, I will need to recompile my lattice siever. I will
lose some machines as a result, because they only have 1/2 Gbyte
of memory. I can't run in 1/2G with a factor base this large.
May I suggest that I use a bound of only 32M?
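[A back-of-envelope for the memory question — editorial sketch; the bytes-per-entry figures are assumptions, since siever data structures vary.]

[code]
import math

# Prime-counting estimate pi(x) ~ x / ln(x); two factor bases (both sides).
for bound in (32_000_000, 40_000_000):
    primes_per_side = bound / math.log(bound)
    for bytes_per_entry in (8, 16):   # assumed: prime + root + log byte
        mb = 2 * primes_per_side * bytes_per_entry / 2**20
        print(f"bound {bound:>10,}: ~{primes_per_side/1e6:.2f}M primes/side, "
              f"{bytes_per_entry} B/entry -> ~{mb:,.0f} MB")
[/code]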
Old 2005-11-15, 21:43   #4
R.D. Silverman

Quote:
Originally Posted by R.D. Silverman
With a bound of 40M, I will need to recompile my lattice siever. I will lose some machines as a result, because they only have 1/2 Gbyte of memory. I can't run in 1/2G with a factor base this large. May I suggest that I use a bound of only 32M?

Comments Please?????
Old 2005-11-16, 08:24   #5
xilman

Quote:
Originally Posted by R.D. Silverman
Comments Please?????
Something has gone wrong. I posted a lengthy comment, but it seems to have disappeared.

Summary: running with FB primes to 32M should be fine. You won't find as many relations, but those you do find will be as good as any.

The sieving area is close to square, in that we expect to have b < 40M with |a| < 45M. The reciprocal polynomial gives skewness > 1 (I forget the exact figure; around 1.2), and although I didn't test yields on this occasion, every previous time gave slightly better yields with skew > 1 than skew < 1. When the skewness is so close to unity it doesn't much matter anyway.
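[That recollection squares with the usual rule of thumb skew = (|c0|/|c6|)^(1/6) — a one-line editorial check.]

[code]
# For f(x) = x^6 + 4: rule-of-thumb skew = (|c0| / |c6|)^(1/6)
print((4 / 1) ** (1 / 6))   # ~1.26, i.e. "around 1.2", and > 1
[/code]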

Using half a gig of RAM sounds excessive. The CWI line siever takes about 80M and Franke's lattice siever about 130M. What are you doing that requires 4-6 times as much memory as the opposition?

I hope this post appears ...


Paul

Old 2005-11-16, 10:57   #6
R.D. Silverman

Quote:
Originally Posted by xilman
The sieving area is close to square, in that we expect to have b < 40M with |a| < 45M.

Using half a gig of RAM sounds excessive. The CWI line siever takes about 80M and Franke's lattice siever about 130M. What are you doing that requires 4-6 times as much memory as the opposition?
According to info I received from Alex, the Franke siever takes OVER 1/2 G.
I recall a number closer to 800M.


A square sieve area is *sub-optimal*. Read my paper. Near the origin
you should be sieving ~|a| < 150M (or perhaps even more). This should drop to |a| < 75M for b > 100K, then to 50M for b > 1000K, then gradually to ~25-30M as b grows beyond 10M.
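[The suggested schedule as a step function, plus a rough area comparison against the 2x45M by 40M rectangle — an editorial sketch; the last range tapers gradually in the post and is approximated here as a single step.]

[code]
def a_bound(b):
    """|a| bound as a function of b, per the schedule above (step approximation)."""
    if b <= 100_000:
        return 150_000_000
    if b <= 1_000_000:
        return 75_000_000
    if b <= 10_000_000:
        return 50_000_000
    return 30_000_000        # "~25-30M" beyond b = 10M

bands = [(1, 100_000), (100_001, 1_000_000),
         (1_000_001, 10_000_000), (10_000_001, 40_000_000)]
top_hat = sum(2 * a_bound(lo) * (hi - lo + 1) for lo, hi in bands)
rectangle = 2 * 45_000_000 * 40_000_000
print(f"top-hat area / rectangle area = {top_hat / rectangle:.2f}")
[/code]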
Old 2005-11-16, 12:51   #7
Wacky

Bob,

Sorry that I have been unable to reply sooner. I finally closed on the purchase of my new home. My servers are currently spread across 3 counties. I'm exhausted from the effort, and I have just begun to move.

There are some false assumptions that you have made. These are likely the result of your reasonable interpretation of misleading data.

We have already started sieving on 2,764+ and have collected over 10M relations to date.

I had hoped to have completed the sieving on 2,761- by now. However, 80M relations looks as if it will produce a matrix that is too large. Some of the participating sievers are therefore collecting additional relations in hopes of reducing that matrix size; the majority, however, are working on 2,764+.

I'm somewhat concerned about the "duplication" rate caused by running both lattice and line sievers. My only previous experience produced about 30% duplicates. But that was done with special-q outside the factor base.
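[The standard way to measure that rate is to key relations on the (a, b) pair, since a relation found by both sievers has identical coordinates. An editorial Python sketch; the "a,b:..." line format is hypothetical and will differ per siever.]

[code]
def duplicate_rate(paths):
    """Fraction of relations, across all files, already seen by (a, b) key."""
    seen, total, dupes = set(), 0, 0
    for path in paths:
        with open(path) as fh:
            for line in fh:
                ab = line.split(':', 1)[0]      # hypothetical "a,b:factors" line
                a, b = map(int, ab.split(','))
                total += 1
                if (a, b) in seen:
                    dupes += 1
                else:
                    seen.add((a, b))
    return dupes / total if total else 0.0
[/code]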

My other concern with lattice sieving is the resulting matrix size. Because of the way lattice sieving produces relations, it virtually ensures that each special-q will survive into the final matrix.
Old 2005-11-16, 13:00   #8
Wacky

Quote:
Originally Posted by R.D. Silverman
A square sieve area is *sub-optimal*. Read my paper. Near the origin you should be sieving ~|a| < 150M (or perhaps even more). This should drop to |a| < 75M for b > 100K, then to 50M for b > 1000K, then gradually to ~25-30M as b grows beyond 10M.
NFSNET is using a "top hat" sieving area. Because the CWI line siever does rectangular areas, we treat each project as a set of sub-projects where the sieving width is set to progressively smaller sizes as b increases.

Driven by administrative and factor-base overheads (we don't currently share factor bases between sub-projects), I have usually limited the number of areas to 2 or 3. I base the transitions on marginal cost, comparing the yield rate of additional width in the lower strata to that of extending the maximum "b".
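[An editorial sketch of that transition rule; the relation counts and rectangle dimensions below are hypothetical placeholders, not NFSNET test data.]

[code]
def relations_per_area(relations, a_width, b_lo, b_hi):
    """Yield rate of a rectangular sub-project: relations per unit of (a, b) area."""
    return relations / (2 * a_width * (b_hi - b_lo))

# Hypothetical test-sieve results for the two ways to grow the region:
widen = relations_per_area(120_000, 10_000_000, 0, 100_000)                # extra width, low b
extend = relations_per_area(90_000, 45_000_000, 40_000_000, 40_100_000)    # higher b_max

print("widen the low-b stratum" if widen > extend else "extend the maximum b")
[/code]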
Old 2005-11-16, 13:03   #9
R.D. Silverman

Quote:
Originally Posted by Wacky
We have already started sieving on 2,764+ and have collected over 10M relations to date.

I'm somewhat concerned about the "duplication" rate caused by running both lattice and line sievers. My only previous experience produced about 30% duplicates. But that was done with special-q outside the factor base.

My other concern with lattice sieving is the resulting matrix size. Because of the way lattice sieving produces relations, it virtually ensures that each special-q will survive into the final matrix.

The reason I use 'special-q' from inside the factor base is precisely
because it does NOT lead to larger matrices. But it does result in more
duplicates. Most primes inside the factor base wind up in the final matrix
anyway.

The NFSNET web site reports that 2,761- is less than 60% done.
Perhaps that web page needs to be made current??? It also failed
to announce that e.g. 2,719+ had been done....

I am about 2/3 done with 2,1382M and planned to do 2,1402L before
helping out on 2,764+. Based on the *public* report of progress
for 2,761-, I would have finished 2,1402L at about the same time 2,761-
finished...

I am, even as I type this, running some experiments to see which of
the two proposed polynomials (mine or yours) will have higher yields.
However, since you have already started 2,764+, the issue is moot.

Given your concerns, I will work on something else.

I think I will do 2,781- after 2,1402L.

Bob
Old 2005-11-16, 13:23   #10
xilman

Quote:
Originally Posted by R.D. Silverman
The NFSNET web site reports that 2,761- is less than 60% done.
Perhaps that web page needs to be made current??? It also failed
to announce that e.g. 2,719+ had been done....
Not sure whether Lars reads this. Wacky or I should give him a prod...

Quote:
Originally Posted by R.D. Silverman
I am, even as I type this, running some experiments to see which of
the two proposed polynomials (mine or yours) will have higher yields.
Please post the results. As stated earlier, I didn't test this particular case, but previous tests have always shown the skew > 1 polynomial to be better when using the CWI line siever, though sometimes only very slightly so. This, presumably, is why Peter put a message into the rootfinder output recommending the reciprocal polynomials when skew < 1.

I'm well aware of the sub-optimality of a rectangular sieving area, having read your paper and seen the work of a number of other researchers. As Richard said, we use a top-hat region because that is (in our experience) a good trade-off between efficiency and practicality. Arjen Lenstra describes the optimal area as a "crown" (think fairy-tale kings & queens) but, as far as I know, he still sieves over a simple rectangle.


Paul
Old 2005-11-16, 13:33   #11
xilman

Quote:
Originally Posted by R.D. Silverman
According to info I received from Alex, the Franke siever takes OVER 1/2 G.
I recall a number closer to 800M.
I need to check my notes, or fire up the Franke siever again. It appears that my recollection of 130M or so may be incorrect.

The question remains, though: why does your lattice siever take six times the memory of the CWI siever when factoring integers with comparably sized factor bases?

Is it, perhaps, that you keep an entire line in RAM? We sieve in sections, finding that the small overhead incurred is greatly outweighed by the number of machines that can contribute to the sieving. By and large, I'd rather have twice as many machines running at 90% efficiency than have half of my potential pool of sievers not doing anything useful because they don't have enough memory.
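[For concreteness, sieving "in sections" means something like the following — an editorial sketch of a segmented line siever; the parameter names and threshold test are assumptions, not either siever's actual code.]

[code]
import math

SEGMENT = 1 << 20   # a-values per section (illustrative size)

def sieve_line(b, a_min, a_max, fb, threshold):
    """Sieve line b over [a_min, a_max) one section at a time.

    fb is a list of (p, r) with r a root of f mod p, so p | F(a, b)
    exactly when a == r*b (mod p). Memory scales with SEGMENT,
    not with the full line length.
    """
    for seg_start in range(a_min, a_max, SEGMENT):
        seg_len = min(SEGMENT, a_max - seg_start)
        logs = [0.0] * seg_len
        for p, r in fb:
            first = (r * b - seg_start) % p     # first hit in this section
            for i in range(first, seg_len, p):
                logs[i] += math.log(p)
        for i, s in enumerate(logs):
            if s > threshold:                   # likely-smooth candidate
                yield seg_start + i, b
[/code]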


Paul
 
