2015-12-02, 03:47  #1 
"Curtis"
Feb 2005
Riverside, CA
2^{2}·7·13^{2} Posts 
Modern parameter choice for large 14e/small 15e projects
I've slowly been working my way up in NFS difficulty over the years, and have some observations and questions about parameter choice for medium-sized projects.
What I've found:

a) 31-bit LP is better than 30-bit once 80M raw relations are required for 30-bit. This is just under SNFS-220, or GNFS ~153.

b) For 31LP tasks, time is saved by using target-density settings higher than the default 70. I don't have enough trials for an optimal choice, but 84 to 90 have worked well for me (5-8M extra raw rels, with matrix time savings greater than the extra sieve time spent).

c) 32LP is better than 31-bit once 150M raw rels are required for 31-bit. I have less data for this transition, but SNFS-230 appears faster at 14e/32 than 14e/31. I have only 2 runs at 32LP for GNFS, at 162 and 165 digits; both were clear 32LP choices. If 150M raw rels works for 31LP, 250M should work for 32 (with target density = 100).

d) Test-sieving an SNFS-240, it's unclear whether 15e/31 or 15e/32 is best; the 31-32 transition size appears different for 14e than for 15e. It's possible that moving up to 15e would be paired with moving back down to 31LP.

Questions:

1. Yield and sec/rel both improve with increased alambda for GNFS, likewise using 3LP on the algebraic side. What is the downside to using 3.6 instead of 2.6 for alambda, and 94 instead of 64 for mfba (assuming 32LP)? Will more raw relations be required? Will the ensuing matrix be bigger? Is it reasonable to test such settings on a smaller project (say, GNFS-155 on 31LP) to compare to a previous run with 2.6 and/or mfba = 64?

2. I have read that going up a siever allows fewer raw relations to build a matrix, mostly due to fewer duplicate relations. Going from 14e to 15e, is there a consistent % fewer raw rels I can assume? For instance, if GNFS-165 takes 260M raw rels with 14e, and I use 15e for a GNFS-165, may I expect 5% fewer raw rels needed? 10%? (Am I asking the wrong question?)

3. Question 2 is attempting to quantify the amount of extra sec/rel to accept in 15e vs 14e to justify running the larger siever for a project. If I need 10% fewer rels from 15e vs 14e but 15e is only 8% slower, it seems clear to choose 15e for a project over 14e. With all the data points gathered from NFS@home stretching 14e into GNFS 165-170 and SNFS 240+, is there a consensus about where the optimal move to 15e should be? I am test-sieving a GNFS-166 now, and 14e appears 15% faster than 15e averaged across the expected sieving ranges; I am not sure if 15% faster is enough to justify using 14e.

4. When I do finally select 15e for a project, roughly half the special-Q range must be sieved compared to 14e. Does that mean I may decrease alim/rlim and gain efficiency (or a smaller matrix)? For instance, my present GNFS-165 job has alim=rlim=55M, and I expect to sieve q from 10M to 70M to build a matrix with target-density 100. If I had run 15e instead, I would only need q from 10M to 40M to get a similar number of relations; does that mean I could profitably reduce alim and rlim to 35M or 40M?

I welcome your experiences with a)-d) above, or opinions about my questions.

Curtis 
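To make question 3 concrete, here is the break-even arithmetic I have in mind as a back-of-envelope sketch. The 10%-fewer-rels and 8%-slower figures are just the hypothetical numbers from the question, not measured values:

```python
# Break-even sketch for 14e vs 15e: total sieve cost is (relations needed)
# times (cost per relation). Numbers below are hypothetical, taken from the
# question: 15e needs 10% fewer rels but runs 8% slower per rel.

def total_sieve_time(raw_rels, sec_per_rel):
    """Total sieve cost = relations needed * normalized cost per relation."""
    return raw_rels * sec_per_rel

rels_14e, spr_14e = 260e6, 1.00          # 14e baseline for a GNFS-165
rels_15e, spr_15e = 260e6 * 0.90, 1.08   # assumed: 10% fewer rels, 8% slower

t14 = total_sieve_time(rels_14e, spr_14e)
t15 = total_sieve_time(rels_15e, spr_15e)
print(f"14e: {t14:.4g}, 15e: {t15:.4g}, 15e wins: {t15 < t14}")
```

Under those made-up numbers 15e comes out ahead; the whole question is what the real percentages are.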
2015-12-02, 06:53  #2 
"Curtis"
Feb 2005
Riverside, CA
2^{2}×7×13^{2} Posts 
I just found http://mersenneforum.org/showthread.php?t=20015 from Jan '15, with Mr Womack's experiments with 14e/32 and 15e/32 on some GNFS 165-178 candidates. Specifically, post 14 states that 15e/32 needed 270M rels for a GNFS-170 job, while 14e/32 needs 360M for target-density 120. That's way, way more than the 15% I was guessing! I assume the relation penalty for 14e gets worse as composite size increases, but GNFS-166 isn't that much smaller than 170, so I think I'll try 15e/32 and aim for 250M rels and target-density 100.
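As a quick sanity check on the gap implied by those two quoted figures (270M vs 360M):

```python
# Ratio implied by the numbers quoted above from the linked thread:
# 15e/32 needed 270M relations; 14e/32 needed 360M at target-density 120.
rels_15e = 270e6
rels_14e = 360e6
extra_for_14e = rels_14e / rels_15e - 1
print(f"14e needed {extra_for_14e:.0%} more raw relations than 15e")
```

So the 14e penalty in that dataset is roughly a third more relations, not 15%.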

2015-12-02, 14:30  #3  
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
5×1,171 Posts 
Quote:
It would be interesting to attempt to fit a linear model to the different parameters to determine what parameters should be selected. Something possibly worth looking at would be how many relations are required if you increase lpbr/a but not mfbr/a and how the relation collecting speed changes. It always seems a shame when there is a load of mpqs done in vain. Some of these things could do with further tests I think. 

2015-12-02, 20:15  #4  
"Curtis"
Feb 2005
Riverside, CA
11174_{8} Posts 
Quote:


2015-12-03, 17:47  #5  
Sep 2009
3751_{8} Posts 
I tend to rely on a rule of thumb from the "GNFS parameter selection pearls of wisdom" thread from 2009:
Quote:
Chris 

2015-12-04, 00:11  #6 
"Curtis"
Feb 2005
Riverside, CA
2^{2}·7·13^{2} Posts 
I have found that advice to be suboptimal, by quite a bit: if I get yield of 2.0, going up a LP nearly always results in less sieve time to produce a matrix. Most of my recent projects run with yields of 4 to 6, meaning I went up a LP even when the yield could have been 2.0 to 3.0.
If you have a pair of same-sized SNFS projects, try it: run one at 30LP for, say, 80M relations, and the next at 31LP aiming for 135M relations. I believe it takes less time to gather 135M at 31LP than it does to gather 80M at 30LP, and the resulting matrices are usually of roughly the same size (say, within the variation among 30LP projects). Specifically, SNFS-222+ should be faster at 31LP, and 30-31 are about equivalent down into the 218-220 range.

Edit: "quite a bit" is a misnomer here; the smaller LP choice is often < 10% slower, which is not a large mistake. I intended "quite a bit" to mean "quite often", rather than to describe how much slower the slower choice is.

Last fiddled with by VBCurtis on 2015-12-04 at 00:20 
2015-12-04, 12:12  #7  
Nov 2003
2^{2}×5×373 Posts 
Quote:
With your parameter choices, how long do the 14e (resp. 15e) sievers take to process 1 (ONE) special q? How big is the sieve region for the 14e/15e sievers? (Does e.g. 14e mean 2^14 x 2^14?) 

2015-12-04, 17:08  #8 
Sep 2009
11111101001_{2} Posts 
The 14e siever uses a region 2^14 by 2^13. They all work like that, i.e. 2^n by 2^(n-1), where n is the number in the name.
They do have an option to reduce the smaller dimension, but I don't think it's used very often (I don't know of any case where it's an obvious benefit). Chris 
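Writing out the convention Chris describes, as a small sketch (and answering the 2^14 x 2^14 guess: the region is twice as wide as it is tall):

```python
# Sieve region for the "ne" sievers as described above: 2^n by 2^(n-1),
# so each step up in siever quadruples the total area.

def sieve_region(n):
    """(width, height) of the sieve region for the n-e lattice siever."""
    return (2**n, 2**(n - 1))

def sieve_area(n):
    w, h = sieve_region(n)
    return w * h

print(sieve_region(14))                  # 14e: 16384 x 8192
print(sieve_area(15) // sieve_area(14))  # 15e covers 4x the area of 14e
```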
2015-12-04, 17:19  #9  
Nov 2003
2^{2}×5×373 Posts 
Quote:
A simple calculus-of-variations optimization shows that the sieve AREA should be proportional to the average probability of success for each special-q. Smaller special-q should have a larger sieve area, since they have higher yield. This is essentially the optimization that I showed in my paper on optimal SNFS parameterization, except that paper assumed a line siever. The same analysis can be applied to a lattice siever and gives the above result. 
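A toy illustration of the rule just stated (this is not the actual derivation; the 1/q yield falloff below is an invented model purely to show the shape of the prescription):

```python
# Toy model of "sieve area proportional to probability of success per
# special-q". The 1/q yield falloff is an assumption for illustration only.

def relative_yield(q, q0=10e6):
    """Assumed yield model: relations per unit area fall off like 1/q."""
    return q0 / q

def optimal_relative_area(q, q_ref=10e6):
    # Area proportional to success probability => small q gets a big region.
    return relative_yield(q) / relative_yield(q_ref)

for q in (10e6, 20e6, 40e6):
    print(f"q = {q:.0e}: relative sieve area {optimal_relative_area(q):.2f}")
```

Under that toy model, the area at q = 40M would be a quarter of the area at q = 10M, i.e. exactly the opposite of using one fixed-size siever across the whole range.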

2015-12-04, 17:56  #10  
"Ben"
Feb 2007
41·83 Posts 
Quote:
You must know all this. So why state it other than to plug your paper?

Last fiddled with by bsquared on 2015-12-04 at 17:56 

2015-12-04, 17:56  #11 
"Curtis"
Feb 2005
Riverside, CA
127C_{16} Posts 
Mr Silverman,

Do you mean that using, say, the 15e siever for smaller special-q and the 14e siever for larger special-q is likely to be more efficient (i.e. less total computation) than using the same siever for an entire project? I suppose it should depend on how close the siever-choice decision is to the cutoff between 14e and 15e...

More specifically, is the effect you cite powerful enough to justify a 4x increase in sieve area (as I understand it, 15e doubles both dimensions of the sieve region, so 4x area) for some special-q, compared to staying with 14e?

Perhaps I can answer these for myself, by test-sieving 14e vs 15e at the start of an expected sieving range (say q = 10M when planning to sieve 10M to 50M), the middle, and the end. My previous experience with test-sieving a GNFS-165 is that the time per relation is more competitive between 14e and 15e at the higher end of q, while 14e is quite a bit faster than 15e at the lower end. This seems to be the opposite of your claim, so I fear I misunderstand something. 