2021-10-29, 13:35  #89  
Jun 2012
110100011110_{2} Posts 
Quote:
Apologies for my hashed (and erroneous) remarks a couple of posts back; here are the revised parameters I just finished test sieving, based on my interpretation of your original comment: Code:
n: 12734530900787107377713574161011868289324430536561585108001584634844458530077827798443825688784448329151924256365768391241981555179087907348803260669361241812142769026337465374565031975093261
skew: 26407748.919
type: gnfs
lss: 0
c0: 82995649502610121695803377054054562879174460
c1: 68939390311785240168852442729643236969
c2: 1159047757477660892006812203906
c3: 166840082438286438551781
c4: 583630704694594
c5: 15763440
Y0: 4381990159763602434411484418467631498
Y1: 1351413343517779682402767
# MurphyE (Bf=8.590e+09,Bg=4.295e+09,area=5.469e+16) = 1.091e-08 = 1.892e-14 per cownoise
lpbr: 31
lpba: 32
mfbr: 62
mfba: 94
rlim: 268000000
alim: 134000000
rlambda: 2.7
alambda: 3.5
Code:
Q(M)  Norm_yield
60    30804
100   28474
150   26688
200   23296

I always test sieve the 2/2, 3/2 and 2/3 LP scenarios for a new GNFS, and in this case I also looked at 31/32 as well as 32/31. The 31/32 case was the best performer, with a target number of raw rels of 360M. A 32/32 job would need 460M+ raw rels, though it sieves a little faster.

As to the idea of lim = 2^(lpb-4), it's an old rule of thumb told to me in the past. I don't know the theory behind it, nor if there is any. My recent results show this HCN could be sieved faster using ~360M raw rels, but perhaps this choice of parameters makes that estimate null and void? Maybe it should be higher? I could run this as a 16f/32/32 job with the lims biased towards the rational side, but I don't often see that much improvement. I still feel this number can be sieved quicker/easier as a 31/32 job. Not sure about sieving with these new parameters, but the results were very surprising (to me anyway).

Last fiddled with by swellman on 2021-10-29 at 17:52 Reason: ETA - added two lines to the poly - lss: 0 and type: gnfs
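As a quick sanity check on that rule of thumb (reading it as lim = 2^(lpb-4), which is my interpretation, since the pairing of sides isn't spelled out), the limits in the job file above do sit almost exactly on those powers of two:

```python
# Check the rule of thumb as read here: lim = 2^(lpb - 4).
# (My interpretation - which lim pairs with which lpb isn't stated above.)
# The job file's limits land within ~0.2% of those powers of two.

for lpb, lim in [(31, 134_000_000), (32, 268_000_000)]:
    rule = 2 ** (lpb - 4)
    off = abs(lim - rule) / rule
    print(f"lpb={lpb}: 2^(lpb-4) = {rule}, job uses {lim} ({off:.2%} off)")
```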

2021-10-29, 14:44  #90 
"Curtis"
Feb 2005
Riverside, CA
7·11·67 Posts 
Doing things a little differently can reveal new ideas or information. I wouldn't think to try 31/32LP, so I'm curious what the matrix size / number of rels needed will be. Neat!
My last meddling suggestion: start at Q=30 or 35M. With lim on the sieving side of 134M, even Q=30M isn't that small. A Qmax-to-Qmin ratio of 6 is a conservative setup (i.e. it should keep duplicate relations low), and that would be e.g. 30M-165M. The CADO developers suggest 8 for that ratio, and since adopting that plan I've had fairly consistent & reasonable duplicate ratios; using 6 for that ratio here should come with little downside but the upside of sieving smaller Q. 
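The ratio heuristic is easy to sketch (the helper name is mine, purely illustrative):

```python
# Sketch of the Qmax/Qmin ratio heuristic described above.

def q_ratio(q_min: float, q_max: float) -> float:
    """Qmax-to-Qmin ratio of a sieving range."""
    return q_max / q_min

# The 30M-165M range suggested above:
print(q_ratio(30e6, 165e6))   # 5.5 - under the conservative 6 and CADO's 8
```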
2021-10-29, 17:54  #91  
Jun 2012
D1E_{16} Posts 
Quote:


2021-10-30, 13:05  #92 
Jun 2012
2·23·73 Posts 
3+2,1890L Revisited
QUEUED AS 3p2_1890L
I did test sieve both job files at Q=30M; results for the first poly file are listed below: Code:
Q(M)  Norm_yield
30    27640
60    28820
100   26626
150   25334
200   23482
And the results for the second poly file: Code:
Q(M)  Norm_yield
30    29419
60    30804
100   28474
150   26688
200   23296

Both sieving range estimates have a bit of hedging built in to allow for a few extra dups at low Q.

I'm hesitant to go with the novel set of parameters (i.e. the second job file) for this factorization. The sievers seem to work just fine with an inflated rlim, but I have no feel for the "quality" or dup rate of the relations generated. Maybe I'll try this strategy on a composite more suited for 14e or 15e_small first before bringing it to 16f_small. On the flip side - what's the worst possible outcome? More sieving?

Last fiddled with by swellman on 2021-11-04 at 13:28 
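For a concrete read on how the two job files compare, the tables can be diffed directly (this script is mine, not from the thread):

```python
# Relative yield of the second job file vs the first at each tested Q,
# taken from the two test-sieve tables above.

poly1 = {30: 27640, 60: 28820, 100: 26626, 150: 25334, 200: 23482}
poly2 = {30: 29419, 60: 30804, 100: 28474, 150: 26688, 200: 23296}

for q in sorted(poly1):
    change = poly2[q] / poly1[q] - 1
    print(f"Q={q}M: {change:+.1%}")
```

The second file yields more at every point up to 150M but slips just below the first at 200M, which fits the surprise noted above.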
2021-10-30, 16:46  #93 
"Curtis"
Feb 2005
Riverside, CA
7·11·67 Posts 
I can't imagine what negative outcome would take place, but a lack of imagination is what holds us back from finding faster param choices, so I shouldn't claim that as any sort of green light!
I got the idea from the stock CADO params files, and I think the reasoning is that a 2LP relation with a large lim will also be found as a 3LP relation with a small lim, in the cases where one of the large primes is between the new small lim choice and the old large lim choice. So, going with a small lim "costs" very few missed relations when chosen on the 3LP side. However, increasing lim on the 2LP side categorically finds more relations - many more than the shrunken lim on the 3LP side "loses".

Charybdis has made a similar argument for going with 3LP on both sides for big 16f jobs; this would make up for the artificially low lims at the cost of a lot of cofactorization effort. I hope that using a small mfb on the side "promoted" to 3LP might make up some of the cofactorization cost, but I haven't test sieved anything yet.

CADO does this on the default files starting surprisingly small, like c190. CADO params for c190: Code:
lpb0 32
lpb1 33
mfb0 85
mfb1 96
It's that 85 that interests me to experiment with. 
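A toy illustration of that argument (all numbers invented for the example; 15485863, 179424673, 999999937 and 2^31-1 are just convenient known primes): shrinking lim past one of a relation's primes simply reclassifies that prime as a large prime, turning a 2LP relation into a 3LP one rather than losing it.

```python
# Toy illustration of the lim/3LP argument: the same relation's cofactor
# primes under a big lim vs a small lim. Numbers are invented examples.

def large_primes(rel_primes, lim):
    """Primes of the relation above the factor-base bound lim."""
    return [p for p in rel_primes if p > lim]

rel = [101, 15485863, 179424673, 999999937, 2147483647]

print(large_primes(rel, 200_000_000))  # 2 large primes: a 2LP relation
print(large_primes(rel, 100_000_000))  # same relation, now 3LP
```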
2021-10-30, 17:27  #94  
Apr 2020
593_{10} Posts 
Quote:
That mfb0=85 in the c190 file is 2LP in disguise: it might as well have said mfb0=64. There is no difference, because the lim0 value of 340600000 = 2^{28.34...} is large enough that any product of 3 large primes will be larger than 2^{85}. (OK, strictly speaking there is a difference, because without a strict lambda value a few small 3LP composites will sneak through to resieving before being eliminated for being too large - so in fact 85 will perform slightly worse than 64.) This isn't the only odd/misleading mfb choice in the CADO default params either, e.g. the c180 file has lpb1=32, mfb1=99, which is no different from 96. Last fiddled with by charybdis on 2021-10-30 at 17:30 
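The arithmetic behind both observations is a two-line check (just a verification, not from the thread):

```python
import math

# With lim0 = 340600000, every large prime exceeds 2^28.34, so any product
# of three of them exceeds 2^85.03 > 2^85 - mfb0 = 85 can never admit a
# 3LP cofactor and behaves like 2LP.
lim0 = 340_600_000
print(3 * math.log2(lim0))   # about 85.03, just over 85

# Likewise in the c180 file: three primes each below 2^32 multiply to
# below 2^96, so mfb1 = 99 behaves exactly like mfb1 = 96.
lpb1 = 32
print(3 * lpb1)              # 96
```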

2021-10-30, 18:51  #95 
"Curtis"
Feb 2005
Riverside, CA
12047_{8} Posts 
Aha! When I glanced at the lim's, I reversed lim0 and lim1 and estimated lim0^3 to be 2^80 or so.
On the bright side, at least the CADO lim choices match the observation that the lim on the 3LP side can be made quite small - 120M compared to 340M on the 2LP side (now that you've shown me 85 is still 2LP!). 
2021-12-10, 08:58  #97 
Jul 2003
So Cal
4317_{8} Posts 
I'm extending the sieving range for this one because I goofed and lost part of the relations file.
Last fiddled with by frmky on 2021-12-10 at 08:58 
2021-12-16, 10:27  #98 
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
10010111011100_{2} Posts 
Let's maybe do the Cunningham 2,2862L c282 as a sextic (diff. 287) in the small 16e framework? I would do the matrix.
(Note: 2862 is divisible by 9.) Code:
#2,2862L c282
n: 545629750739501799280194070406487970378506858957770879330178979671814625379370778997434467805416136050292493686629273066713825676666897481896675700300513633128088404372464823275499457071462468015411161576673380683274045519676777704972676354086496793742260326047144865678794200433121
Y1: 604462909807314587353088
Y0: 730750818665451459101842416358141509827966271489
c6: 1
c5: 0
c4: 12
c3: 4
c2: 36
c1: 24
c0: 8
skew: 1.414 
2021-12-16, 12:56  #99 
Jun 2012
110100011110_{2} Posts 
2,2862L c282
This appears to be a 33/33 job, and quite in line with the 16_small performance envelope.
I can look at 2 vs 3 LPs, run test sieving etc. to at least establish a baseline job file. Safe to assume ECM due diligence is met? 
Thread Tools  
Similar Threads  
Thread  Thread Starter  Forum  Replies  Last Post 
Queue management for e_small and 15e queues  VBCurtis  NFS@Home  254  2022-01-02 01:59 
Queue management for 14e queue  VBCurtis  NFS@Home  77  2021-12-29 15:23 
Run down the queue on MPRIME without quitting GIMPS  Rodrigo  Software  7  2018-05-25 13:26 
Improving the queue management.  debrouxl  NFS@Home  10  2018-05-06 21:05 
split a prime95 queue & client installation  joblack  Information & Answers  1  2009-01-06 08:45 