2021-01-10, 03:18  #78
"Curtis"
Feb 2005
Riverside, CA
2·5·467 Posts 
QUEUED AS 13_2_875m1 on 15e_small
13*2^875-1 is ready for e-small:
Code:
n: 240866437725169990435961145869700978148638973131500627154459277995207161586111018965696779698382676903650395902770674542234980927906435470377593973397210397595151381518023436309800144471706314282870511294518800271
skew: 0.73
type: snfs
c6: 13
c0: -2
Y1: 1
Y0: -89202980794122492566142873090593446023921664
rlim: 134000000
alim: 134000000
lpbr: 31
lpba: 32
mfbr: 60
mfba: 93
rlambda: 2.7
alambda: 3.7
Code:
#Q=20M  3208 rels, 0.251 sec/rel
#Q=50M  2650 rels, 0.255 sec/rel
#Q=80M  2100 rels, 0.352 sec/rel
#Q=110M 2295 rels, 0.384 sec/rel
#Q=140M 1995 rels, 0.428 sec/rel
#Q=170M 1537 rels, 0.343 sec/rel
Last fiddled with by swellman on 2021-01-10 at 20:33
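As a side note on how such samples can be read (a sketch, not a method stated in this thread): averaging the sec/rel figures and multiplying by an assumed target relation count gives a ballpark CPU-time cost. The 270M-relation target below is a hypothetical value for illustration only.

```python
# Ballpark CPU-time estimate from test-sieve samples.
# The 270M-relation target is a hypothetical assumption, not a
# figure stated in the thread.

samples = {  # Q -> sec/rel from the test sieving
    20_000_000: 0.251, 50_000_000: 0.255, 80_000_000: 0.352,
    110_000_000: 0.384, 140_000_000: 0.428, 170_000_000: 0.343,
}
target_rels = 270_000_000  # assumed target for a 31/32-bit job

avg_sec_per_rel = sum(samples.values()) / len(samples)
cpu_hours = avg_sec_per_rel * target_rels / 3600
print(f"avg {avg_sec_per_rel:.3f} sec/rel -> ~{cpu_hours:,.0f} CPU-hours")
```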
2021-01-10, 03:26  #79
Apr 2020
273_{8} Posts 

2021-01-10, 04:31  #80
"Curtis"
Feb 2005
Riverside, CA
4670_{10} Posts 
Edited (I thought initially you were talking about 13_2_875).
That may indeed be an error: I copied settings from a C185 and played with a couple of lim and mfb settings, but didn't test the a-side. If I get a good improvement using the other side, can the currently queued job be killed? It looks like it'll be a month before it sieves, but I dunno how the queue management controls work. I'll test the a-side overnight. Last fiddled with by VBCurtis on 2021-01-10 at 16:04
2021-01-10, 12:41  #81
Jun 2012
2^{4}×5×37 Posts 
The Aliquot 4788:12574 job has been queued, but it can be changed if necessary prior to the start of sieving. I too questioned whether sieving on the r side of a GNFS job was really the intent, but stranger things have happened. I meant to PM @VBCurtis to verify which side should be used, but then RL interrupted.
As to 13*2^875-1, I will hold off enqueuing it until the parameters are confirmed.
2021-01-10, 19:29  #82
"Curtis"
Feb 2005
Riverside, CA
2×5×467 Posts 
Job C189_4788_12574 updated on 15e
Quote:
Corrected C189 aliquot 4788 settings:
Code:
n: 521522190388785541331160787195976801952432005240028067707250869703871311778809493591678394522430885815662329064194127806602359010987662524451917763292547675317210495917002745173292704484659
# norm 2.678191e-18 alpha -8.963268 e 2.857e-14 rroots 5
type: gnfs
skew: 87271318.89
c0: 6381367517597696742295702359937525488210006576
c1: 202288237039298283646164033232164328687
c2: 13346320611031566286558670660366
c3: 32289428163110989016897
c4: 1983739878858060
c5: 3603600
Y0: -3106799826410507065935794801180448266
Y1: 1970023755595084066583
rlim: 266000000
alim: 134000000
lpbr: 33
lpba: 32
mfbr: 63
mfba: 92
rlambda: 2.5
alambda: 3.5
Test-sieving on the a side is a bit noisier for yield; 1kQ intervals tested. Let's try Q=40M-260M to start? Estimated 500M rels needed.
Code:
#Q=40M  2672 rels, 0.299 sec/rel
#Q=70M  2236 rels, 0.318 sec/rel
#Q=100M 3200 rels, 0.395 sec/rel
#Q=130M 2282 rels, 0.446 sec/rel
#Q=160M 2152 rels, 0.482 sec/rel
#Q=190M 1971 rels, 0.472 sec/rel
#Q=220M 1743 rels, 0.476 sec/rel
#Q=250M 2765 rels, 0.378 sec/rel
#Q=280M 1784 rels, 0.393 sec/rel
Last fiddled with by swellman on 2021-01-10 at 20:06

2021-01-10, 21:53  #83
Apr 2020
11·17 Posts 
This is to be expected in general. The noise in the yield comes from the number of special-q in a range deviating from the expectation. The number of special-q for a given prime is the number of roots of the polynomial (algebraic/rational as appropriate) modulo that prime. So on the r side you just get one special-q for each prime, but on the a side the number of roots varies from 0 to the degree of the poly, though the average is still 1. This gives a fair bit more noise on the a side.
We can use the Prime Number Theorem to get rid of this noise (this goes for the r side too): calculate the yield per special-q, using the number of special-q that the siever reports on the line
Code:
xxx Special q, yyy reduction iterations
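A minimal sketch of the root-counting argument above, using an illustrative linear and sextic polynomial (not taken from any job in this thread): the linear (r-side) poly has exactly one root mod every prime, while the sextic's root count swings between 0 and 6 with an average near 1.

```python
# Count roots of a sieve polynomial mod p: each root contributes one
# special-q at that prime. Brute force is fine for small primes.
# The example polynomials are illustrative, not from a real job.

def count_roots(coeffs, p):
    """Number of x in [0, p) with sum(c_i * x^i) == 0 (mod p)."""
    return sum(
        1 for x in range(p)
        if sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p == 0
    )

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, s in enumerate(sieve) if s]

rational = [-7, 3]                    # 3x - 7: always exactly one root
algebraic = [-2, 0, 0, 0, 0, 0, 13]   # 13x^6 - 2 (illustrative sextic)

ps = [p for p in primes_up_to(2000) if p > 13]
alg_counts = [count_roots(algebraic, p) for p in ps]
print("rational root counts seen:", {count_roots(rational, p) for p in ps})
print("algebraic avg roots/prime: %.2f" % (sum(alg_counts) / len(ps)))
print("algebraic root-count spread:", sorted(set(alg_counts)))
```

The average hovering near 1 on both sides, with much larger per-prime variance on the algebraic side, is exactly the noise difference described above.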
2021-01-12, 16:49  #84
Sep 2009
7D1_{16} Posts 
QUEUED AS f15_214p1
(15^214+1)/22732825976092858 from the Brent tables: Code:
Code:
# Built Mon Jan 11 17:16:42 2021
# Estimated SNFS difficulty 255, GNFS equivalent 185, GNFS difficulty 236, degree 6
n: 21226383702704470284297672536317375712518833555091354446533724171406254282667235101595281882269958782358801776580242580275543700366662745507116654826058612131880537225318956721108530915584785846944519273597131186910740973237434385947397
type: snfs
c6: 1
c0: 225
# Y0 = -15^36
Y0: -2184164409074570299708284437656402587890625
# Y1 = 1^0
Y1: 1
# msieve rating: skew 2.47, size 9.468e-13, alpha -1.979, combined = 1.149e-13 rroots = 0
skew: 2.47
rlim: 134000000
alim: 134000000
lpbr: 31
lpba: 31
mfbr: 62
mfba: 62
rlambda: 2.6
alambda: 2.6
lss: 0
Test sieving 10k ranges:
Code:
Q     yield
34M   18488
84M   16567
134M  13048
184M  13594
Chris
Last fiddled with by swellman on 2021-01-12 at 22:10
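One way such yield samples can be turned into a total-relations estimate (a sketch, not necessarily how anyone in the thread does it) is to interpolate the rels-per-10k figures between sample points and integrate over a candidate Q-range; the 34M-184M range below simply spans the samples.

```python
# Estimate total relations over a Q-range by trapezoid integration
# of 10k-range yield samples. The integration range is an assumption
# for illustration, chosen to span the sample points.

samples = [  # (Q, relations found in a 10k test range at Q)
    (34e6, 18488), (84e6, 16567), (134e6, 13048), (184e6, 13594),
]

def total_relations(samples, q_lo, q_hi, width=10_000):
    total = 0.0
    for (q0, y0), (q1, y1) in zip(samples, samples[1:]):
        lo, hi = max(q0, q_lo), min(q1, q_hi)
        if lo >= hi:
            continue
        def dens(q):
            # linearly interpolated yield density (rels per unit Q)
            t = (q - q0) / (q1 - q0)
            return (y0 + t * (y1 - y0)) / width
        total += (dens(lo) + dens(hi)) / 2 * (hi - lo)
    return total

print(f"~{total_relations(samples, 34e6, 184e6) / 1e6:.0f}M rels estimated")
```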
2021-01-12, 16:53  #85
Sep 2009
3721_{8} Posts 

2021-01-12, 17:15  #86
Apr 2020
11·17 Posts 
It's natural log. PNT says the probability that n is prime is ~1/log(n), so C/log(Q) is indeed the expected number of primes in the range.
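A quick numeric check of that estimate (the window below is chosen to match the 10k test ranges used earlier in the thread; the comparison itself is just illustrative):

```python
# Compare the actual number of primes in a 10k window near Q with
# the PNT estimate C/log(Q) (natural log) used to normalize yields.
from math import log

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

Q, C = 34_000_000, 10_000  # window [Q, Q+C), like a 10k test range
actual = sum(is_prime(n) for n in range(Q, Q + C))
expected = C / log(Q)
print(f"actual primes: {actual}, PNT estimate: {expected:.1f}")
```

The two agree to within the expected statistical fluctuation, which is why dividing yield by the actual special-q count removes most of the range-to-range noise.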

2021-01-13, 18:44  #87
Aug 2005
Seattle, WA
1,667 Posts 
QUEUED AS 11m8_287 on 15e_small
SNFS 256.18 C238 HCN (11-8,287), ECM to t56+. Close call between 15e and 15e_small.
Code:
n: 4193395358480086533437048773540367935071777413550194641716903549545943064931241232905118971150870758434976471524111103419133492347220200321929233540962166247437784611890289821560911694458974548805609413715451263199663891070038125246424287
# 11^287-8^287, difficulty: 256.18, skewness: 1.00, alpha: 2.24
skew: 1.000
c6: 1
c5: 1
c4: 1
c3: 1
c2: 1
c1: 1
c0: 1
Y1: 10633823966279326983230456482242756608
Y0: -4978518112499354698647829163838661251242411
rlim: 134000000
alim: 134000000
lpbr: 31
lpba: 31
mfbr: 62
mfba: 62
rlambda: 2.6
alambda: 2.6
Code:
Q     Yield
20M   4971
50M   3981
80M   3291
110M  4012
140M  3174
170M  2773
200M  3109
230M  2622
Last fiddled with by swellman on 2021-01-13 at 19:20
2021-01-13, 19:55  #88
"Curtis"
Feb 2005
Riverside, CA
1001000111110_{2} Posts 
Quote:
While the decision of where to send a job should remain fuzzy, to keep any one queue from getting too long, I'd say that any job that sieves well at 134/134 lims with a Q-range under 170M (say, 30-200M) is better on e-small than e from the viewpoint of keeping things moving. One would expect the typical job to get tougher in the medium term (since many projects work their way through smaller composites), so I doubt we'll get stricter about e-small as time goes on.
