2013-01-21, 19:41  #1 
Jun 2012
Boulder, CO
3×97 Posts 
Advice for large SNFS jobs?
Hi all,
I'm working on factoring (2801^83-1)/2800 using a modified factmsieve.py, and yes, I have a cluster available to me... :) factmsieve tells me at the get-go:

Code:
Fri Jan 18 09:58:41 2013  Estimated minimum relations needed: 5.53168e+08

I'm able to make it up to about 200M relations with the default parameters (starting at rational q from 238450000, FAMAX = 476900000) before hitting GGNFS's limit: it can't handle special q >= 2^30 - 1.

Does anyone have any advice for what to do with jobs this big? Try to change the sieving window somehow? (And if so, to what?) Here's the polynomial I'm using:

Code:
n: 4784427753962229503583191777575386925462640502543527013793934480234680863804447852383959785408791045459809147067083157248015897910382151758867576620242257524246139326208569043470479714282260046673050230392057658284742406595942226610043596316622243579005395853667131475327572196568483
m: 1829715316371090533839726975772594414416841479201
deg: 6
skew: 0
type: snfs
c6: 1
c0: -2801
Code:
N 4784427753962229503583191777575386925462640502543527013793934480234680863804447852383959785408791045459809147067083157248015897910382151758867576620242257524246139326208569043470479714282260046673050230392057658284742406595942226610043596316622243579005395853667131475327572196568483
SKEW 3.75
A6 1
A0 -2801
R1 1
R0 1829715316371090533839726975772594414416841479201
FAMAX 476900000
FRMAX 476900000
SALPMAX 4294967296
SRLPMAX 4294967296
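As a quick sanity check on the SNFS setup (a sketch, assuming n really is (2801^83 - 1)/2800 with no cofactors removed): since 2801^83 ≡ 1 (mod n), the root m = 2801^14 satisfies m^6 ≡ 2801 (mod n), which is exactly what the sextic x^6 - 2801 encodes.

```python
# Sanity-check the SNFS polynomial x^6 - 2801 at the root m = 2801^14,
# assuming the target n is exactly (2801^83 - 1)/2800.
n = (2801**83 - 1) // 2800
m = 2801**14

assert (2801**83 - 1) % 2800 == 0   # 2800 = 2801 - 1 divides 2801^83 - 1
assert pow(m, 6, n) == 2801         # so x^6 - 2801 vanishes at m mod n
print(len(str(n)))                  # n has 283 digits, as in the job file
```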
2013-01-21, 19:50  #2 
(loop (#_fork))
Feb 2006
Cambridge, England
3×2,141 Posts 
I'm impressed by the scale of your cluster, but factmsieve is not designed for jobs this big.
The polynomial is right, and the alim and lp look reasonable, but you're clearly using the wrong sieving binary, since you're getting 0.25 relations per Q. I think you should be using 16e, and you should be using three large primes on the rational side (lpbr=32 mfbr=96 rlambda=3.6). For things this large I tend to start from small Q (e.g. Q=1e7) rather than Q=Qmax/2. 
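For concreteness, a hedged sketch of how those rational-side 3LP settings would appear in the GGNFS job file (only these three lines come from the suggestion above; all other parameters are left as factmsieve sets them):

```
lpbr: 32
mfbr: 96
rlambda: 3.6
```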
2013-01-21, 19:56  #3 
Sep 2009
977 Posts 
What siever did factmsieve.py choose for such a job? NFS@Home would probably choose ggnfs-lasieve4I16e, if not the corresponding lasieve5.

2013-01-21, 22:19  #4 
Jun 2012
Boulder, CO
3×97 Posts 
debrouxl: It's using gnfs-lasieve4I16e, as I expected.
fivemack: Thanks, I'll try starting with Q=1e7 and the mfbr/rlambda values you suggested... With those values, do you think there's a shot that GGNFS/msieve will be able to finish this thing? :) 
2013-01-21, 22:33  #5 
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
2^{2}·1,471 Posts 
Even if it doesn't, assuming you are on Linux you should be able to run the later version of the siever, which will sieve higher Qs.

2013-01-21, 22:58  #6 
Jun 2012
Boulder, CO
3·97 Posts 

2013-01-22, 00:27  #7 
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
2^{2}·1,471 Posts 
Here is a link to the newer siever. There shouldn't be much speed difference unless you can get ecm working helpfully.
http://mersenneforum.org/showpost.ph...8&postcount=15 I don't think the source is in the svn. 
2013-01-22, 01:11  #8 
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
2×47×101 Posts 

2013-01-22, 07:34  #9  
Jun 2012
Boulder, CO
100100011_{2} Posts 
Quote:
Code:
./gnfs-lasieve4I16e -k -o spairs.out.test -v -n0 -r input.job.test
gnfs-lasieve4I16e (with asm64): L1_BITS=15, SVN $Revision: 399 $
Cannot handle special q >= 1073741823
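For reference, the constant in that error message is exactly 2^30 - 1, matching the special-q ceiling mentioned in the first post:

```python
# lasieve4's reported special-q cap: 1073741823 is exactly 2^30 - 1
q_limit = 2**30 - 1
assert q_limit == 1073741823
print(q_limit)
```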

2013-01-22, 07:36  #10 
Jun 2012
Boulder, CO
3·97 Posts 
Erm, perhaps I missed this in INSTALL:
Code:
NOTE for Phenom/K8 users: replace
in athlon64/ls-defs.asm
define(l1_bits,15)dnl => define(l1_bits,16)dnl
and in athlon64/siever-config.h
#define L1_BITS 15 => #define L1_BITS 16
2013-01-22, 08:00  #11 
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
2×47×101 Posts 
No, this is only for the CPUs that have a 64 KB L1 cache, i.e. AMD CPUs (log_{2} 64K = 16). Don't change L1_BITS for Intel CPUs.
How much of the q area have you already sieved? Which side have you sieved on? There's really no need for a project of this size to go over q > 2^30. Try to cover the area from q = 10^7 to your current lower limit (where you started, 238450000). Even if you go over 2^30, the yield will be less and less. You may get a better yield by repeating some of the most productive (lower q) areas with the parameters that Tom (fivemack) suggested earlier.

Have you used 3LP? Like
Code:
lpbr: 33
lpba: 33
mfba: 66
mfbr: 96
alambda: 2.55
rlambda: 3.7

Have you tried to filter your existing set of relations?

Last but not least, do you have a computer (or set of computers) to solve the resulting >40M matrix? (As the saying goes, take no offense: it's not the size (of the cluster), it's how you use it that matters. Have you done an snfs ~270-280 before doing this snfs-290?)

If you really want to go to very high q values, use the link to the lasieve5 message.

Last fiddled with by Batalov on 2013-01-22 at 08:20 
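To put the "cover q = 10^7 up to your current lower limit" suggestion in numbers, a back-of-the-envelope sketch (the 2.5 relations-per-q yield is purely illustrative, not a measured figure; real yield falls off as q grows):

```python
# Rough estimate: relations obtainable from sieving special-q in [q_min, q_max]
# at an assumed average yield per q (hypothetical number for illustration).
def estimated_relations(q_min, q_max, rels_per_q):
    return int((q_max - q_min) * rels_per_q)

needed = 553_168_000  # msieve's estimate from post #1 (5.53168e+08)
got = estimated_relations(10**7, 238_450_000, 2.5)
print(got, got >= needed)  # 571125000 True
```

Under that assumed yield, the untouched low-q range alone would supply more than the estimated minimum, which is why re-covering it beats pushing past 2^30.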