mersenneforum.org Advice for large GNFS jobs?

2013-07-23, 19:21   #56
R.D. Silverman

Nov 2003

2^6×113 Posts

Quote:
 Originally Posted by swellman I look to minimize total estimated sieving time, provided the resulting LA fits within my memory limitations. I use the formula Sieving_time = (total_rels_req) * (sec/rel) / (#_threads) where in your case total_rels_req is about 400M (for lpba and lpbr of 32) times the scaling factor.
This is a BAD estimate. (sec/rel) changes quite a bit as the special_q
changes.

What you want to minimize is:

Sieve_area * loglog (max prime in factor base) * #special_q.

Sieve area and max prime are input parameters.
Their product represents the time to sieve each special_q.

For a given pair of values assigned to these parameters you estimate the
number of special_q that you will need by trial sieving (say) 20 different
special_q over your anticipated range of special q. This will result in a YIELD CURVE.

(yield changes as special_q changes). Now, you want the area under this
curve to equal the number of relations you will need. This latter parameter
will depend on the size of the large primes and the size of the factor base.
Choose the maximum special q so that this area is sufficiently large.

For an example of deriving a yield curve (albeit in the context of a line siever,
but the ideas are the same) read my paper Parametric Optimization of SNFS.
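A minimal sketch of the procedure described above, with function names and trial-sieve numbers that are mine (not from the post): sample the yield at a few special_q, treat the samples as a curve, and integrate until the area reaches the relation target. A refinement would weight the integral by the density of special_q (primes, roughly 1/ln q) rather than integrating directly over q.

```python
def trapezoid_area(qs, yields):
    """Trapezoid-rule area under the sampled yield curve over [qs[0], qs[-1]]."""
    return sum(0.5 * (yields[i] + yields[i + 1]) * (qs[i + 1] - qs[i])
               for i in range(len(qs) - 1))

def max_special_q(qs, yields, rels_needed):
    """Smallest sample point q at which the accumulated area (relations)
    reaches rels_needed; None if the whole sampled range falls short."""
    for i in range(1, len(qs)):
        if trapezoid_area(qs[:i + 1], yields[:i + 1]) >= rels_needed:
            return qs[i]
    return None

# Made-up trial-sieve samples: yield drops as special_q grows.
qs = [20_000_000, 40_000_000, 60_000_000]
ys = [2.0, 1.5, 1.0]
print(max_special_q(qs, ys, 30_000_000))   # 40000000
```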

2013-07-23, 21:46   #57
firejuggler

Apr 2010
Over the rainbow

3^2×5^2×11 Posts

I wanted to add that, if you put a harsh bound on np1, the nps step can keep up, allowing you to use -t 2 or -t 3 without falling behind.
2013-07-23, 23:02   #58
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

2·2,243 Posts

Quote:
 Originally Posted by firejuggler I wanted to add that, if you put a harsh bound on np1, the nps step can keep up, allowing you to use -t 2 or -t 3 without falling behind
This has no advantage, though. It restricts the search space per coeff for no gain, while adding overhead in preparing each new coeff. It also has the side effect of forcing a search of only the lowest segment of a coeff, meaning that if two people search the same coeff range, their range is exactly duplicated; under default stage 1, there could be as many as dozens of segments with each search covering just one (randomly chosen) segment.

Imagine that stage1norm restricts the second coeff for each first coeff; there is no reason to believe this helps produce better polys, but the code must perform the same overhead per coeff, so cycles are wasted.

Last fiddled with by VBCurtis on 2013-07-23 at 23:06

2013-07-24, 00:55   #59
swellman

Jun 2012

2^4·181 Posts

Quote:
 Originally Posted by R.D. Silverman This is a BAD estimate. (sec/rel) changes quite a bit as the special_q changes. What you want to minimize is: Sieve_area * loglog (max prime in factor base) * #special_q. [...] Choose the maximum special q so that this area is sufficiently large.
Thank you for the detailed reply. Very informative. I shall reread your paper; as an engineer rather than a mathematician, I can find it tough to fully grasp, but I will give it another go.

In the shallow end of the factorization pool where I swim, the sec/rel and yield measures seem reasonably well behaved as spec_q increases, and I only use the sieving time estimate to normalize the performance of various polys for comparison. But I will explore the method you've described.

2013-07-30, 01:13   #60
swellman

Jun 2012

2^4×181 Posts

Quote:
 Originally Posted by swellman No, I was using 14e when I posted my benchmarks, but have since done some more test sieving with 15e. Much better. I'm away from that machine for the weekend, but I will post my results when I return. Appreciate your thoughts. Looks like I need to vary r/alim more - I've been using values in the range of 100-111M.
I've settled on the following parameters for my C169 using the 15e siever:

Code:
n: 2203286292154236920662074580008136560385550762038571072069284129582298550469011615783387269827436918721335468107066200517568432204890391543672088684775850203864157356993
skew: 5721717.05
c0: 252681583117408750059938913868667397960
c1: 716807602129660819233465802155626
c2: 1024911528196794072738767285
c3: 155515608837130473266
c4: -37852190172270
c5: 453960
Y0: -344517009720320657345668021520609
Y1: 870535396513771
rlim: 90000000
alim: 90000000
lpbr: 30
lpba: 30
mfbr: 61
mfba: 61
rlambda: 2.7
alambda: 2.7
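As a sanity check on a parameter file like the one above: for a valid NFS polynomial pair, the algebraic polynomial f(x) = c5*x^5 + ... + c0 and the rational polynomial g(x) = Y1*x + Y0 must share a common root m = -Y0/Y1 (mod n). A minimal sketch (the helper name and the toy numbers are mine), demonstrated on a tiny toy pair rather than the 169-digit n; the same function applies unchanged to the posted coefficients:

```python
def share_root_mod_n(n, coeffs, y0, y1):
    """coeffs = [c0, c1, ..., ck]. True iff f(-Y0/Y1) == 0 (mod n)."""
    m = (-y0 * pow(y1, -1, n)) % n       # root of Y1*x + Y0 mod n (Python 3.8+)
    fm = 0
    for c in reversed(coeffs):           # Horner evaluation of f(m) mod n
        fm = (fm * m + c) % n
    return fm == 0

# Toy pair: f(x) = x^2 + 1, n = f(10) = 101, g(x) = x - 10.
print(share_root_mod_n(101, [1, 0, 1], -10, 1))   # True
```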
I tried varying the parameters and siever used. The following data resulted.

Code:
lpba  lpbr  mfba  mfbr  r/alim (M)  r/alambda  Siever  Rels_req (M)  ETA (hrs)  rel/spec_q
30    30    60    60    67          2.7        15e     92            4870       1.5655
30    30    60    60    67          2.7        15e     92            4870       1.5655
30    30    61    61    67          2.7        15e     92            4873       1.5655
30    30    60    60    75          2.7        15e     92            4939       1.5775
30    30    60    60    90          2.7        15e     92            4989       1.598
30    30    61    61    100         2.7        15e     92            5043       1.6115
30    30    61    61    111         2.7        15e     92            5087       1.623
31    30    62    61    111         2.7        15e     137           5531       2.3125
30    31    61    62    111         2.7        15e     137           5816       2.1625
31    31    62    62    111         2.7        15e     182           5840       3.091
30    30    61    61    67          2.7        14e     92            5899       0.7105
30    30    60    60    67          2.7        14e     92            5901       0.7105
30    30    60    60    75          2.7        14e     92            5979       0.715
30    30    60    60    90          2.7        14e     92            6206       0.7265
30    30    61    61    100         2.7        14e     92            6353       0.735
30    30    61    61    111         2.7        14e     92            6496       0.7415
30    30    61    61    111         2.7        16e     92            6691       3.4185
31    30    62    61    111         2.7        14e     137           6940       1.056
31    31    62    62    111         2.7        14e     182           7133       1.412
31    31    62    62    111         2.7        16e     182           7144       6.7215
30    31    61    62    111         2.7        14e     137           7346       0.992
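A small sketch (not from the thread) of how rows like these can be compared programmatically. The four rows below are transcribed from the table above; rel/spec_q is read as relations per special_q, so Rels_req / rel_per_q approximates the number of special_q that must be sieved:

```python
# Rows transcribed from the test-sieve table:
# (siever, r/alim in M, relations required in M, rel/spec_q, ETA hours).
rows = [
    ("15e", 67,  92,  1.5655, 4870),
    ("15e", 111, 92,  1.6230, 5087),
    ("14e", 67,  92,  0.7105, 5899),
    ("16e", 111, 92,  3.4185, 6691),
]

for siever, lim, rels_m, rel_per_q, eta in rows:
    # Implied special_q count: relations needed / relations per special_q.
    spec_q_m = rels_m / rel_per_q
    print(f"{siever} r/alim={lim}M: ~{spec_q_m:.1f}M special_q, ETA {eta} h")

best = min(rows, key=lambda r: r[-1])   # lowest estimated sieving time
print("best:", best[0], "with r/alim =", best[1], "M")
```

By this criterion the 15e siever at r/alim = 67M wins, matching the row swellman chose.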
Planning to start this factorization in mid August.

Last fiddled with by swellman on 2013-07-30 at 01:37 Reason: Formatting and clarification

