2021-09-04, 01:16  #1849  
"Curtis"
Feb 2005
Riverside, CA
2^{5}×7×23 Posts 
Quote:
GNFS-208 is not too hard for f_small; lims of 225M are a bit restrictive for that number, same for 33LP, but it would work. However, there's plenty of work on f_small right now, and that GNFS-208 job would take quite a long time to sieve. I think we should evaluate via test-sieve how much faster CADO would be with lims around twice as large and 33/34LP or 34/34, before throwing it on f_small. Or, since it has waited 4 years already, let's wait until we run a 204-206 digit GNFS job on f_small before jumping to 208. 

2021-09-04, 01:34  #1850  
Apr 2020
247_{16} Posts 
2021-10-26, 02:24  #1851  
Jun 2012
7·479 Posts 
Bump.
I realize 3,748+ is currently working; I'm just trying to maintain visibility on this record poly supporting a record-sized near-repdigit composite cofactor. If there is any interest I can run some test sieving (with suitable adjustments to the parameters), though perhaps this should be run as a 34-bit job(?). We could try it on 16f_small, though this job seems to be at the limits for that siever. Maybe Greg will comment.
2021-10-26, 03:43  #1852 
"Curtis"
Feb 2005
Riverside, CA
2^{5}·7·23 Posts 
The only difference between the f_small queue that we have access to and Greg's "big" queue is the lim restriction of 225M for f_small, versus Greg's self-imposed 250M for the "big" one. That difference doesn't change the biggest-possible job much, maybe a digit or two? So, GNFS-208 is no problem for the siever. Opportunity cost is a bit of an issue: what jobs would wait while we run a 208-digit GNFS job?
I'd run it as 33/34LP, but then I always like looser bounds and higher yields. Greg pointed out in the past that disk space isn't unlimited and going to arbitrarily large LP bounds can eat a ton of space, but with the e_small and e sievers having so few jobs in postprocessing, it seems to me that using disk for a 33/34 job and 1.3G relations instead of 33/33 and 1.0G relations is acceptable? 
2021-10-26, 03:55  #1853 
Jul 2003
So Cal
2^{2}×563 Posts 
Yes, the only difference is the smaller fb limits I've requested for 16f_small. It'll take a while, but a C208 should be no problem for 16f_small. With the fast turnaround enabled by GPU LA, disk space is currently not a problem. Feel free to use 33/34- or even 34/34-bit LPs.

2021-10-27, 15:21  #1854 
Jun 2012
D19_{16} Posts 
71111_329
I ran some test sieving, and it appears the best case for this c208 near-repdigit is as a 33/34 job with 3LPs on the algebraic side, sieved on the a-side:
Code:
n: 1122306776491337588607322631818708778200214577213206237089370253284687138834200770294167044440649977604444005064220843458726284245767449140856119245849745806395315477933380280425456889166505800707705642211253
skew: 326211947.89
type: gnfs
lss: 0
c0: 192558034193459742319046371398586259699707875364425
c1: 5387479895042599888816938205129089277819185
c2: 1098937059524264818729815697407187
c3: 104048030268009541344497393
c4: 47857156754642446
c5: 421635720
Y0: 4842112079991293167491670985200158992968
Y1: 4692246580297096789
# Murphy_E = 1.439e-15, selected by Erik Branger
rlim: 225000000
alim: 225000000
lpbr: 33
lpba: 34
mfbr: 66
mfba: 100
rlambda: 3.0
alambda: 3.7
Code:
Yield   # Spec_Q   Norm_Yield   sec/rel
3947    75         2857         1.030
Code:
Yield   # Spec_Q   Norm_Yield   sec/rel
5085    75         3681         0.839
In all cases, I used rlim/alim both at 225M and mfb = 3*lpb - 2 for the 3LP side. Not sure if there are better refinements available here, but I was reluctant to completely (and tediously) test sieve either scenario without pausing here for suggestions or advice. 
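[Editor's note: the mfb bookkeeping in the parameter file above is mechanical: a 2LP side gets mfb = 2*lpb, and the 3LP side here uses mfb = 3*lpb - 2 so the cofactor stays just under three full large primes. A quick sketch; the helper name is illustrative, not from any siever:]

```python
def mfb_for(lpb: int, nlp: int) -> int:
    """Rough mfb choice: 2*lpb when allowing 2 large primes on a side,
    3*lpb - 2 when allowing 3 (cofactor just under 3 full LPs)."""
    return 2 * lpb if nlp == 2 else 3 * lpb - 2

# Parameters of the 33/34 job above:
print(mfb_for(33, 2))  # rational side, 2LP: mfbr = 66
print(mfb_for(34, 3))  # algebraic side, 3LP: mfba = 100
```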
2021-10-27, 15:57  #1855 
"Curtis"
Feb 2005
Riverside, CA
2^{5}×7×23 Posts 
I've had good results with smaller lim on the 3LP side, so I would try alim of 182M and rlim of 268M.
If that is faster, then I would try alim 134M and rlim 316M. My rule of thumb is that adding an LP to both sides needs 70% more relations, so adding an LP to one side needs 30% more (1.3 * 1.3 = 1.7, roughly). That's where I got the 1.3B raw relations estimate; I'd aim for 1B if this were run as 33LP. Using that same scaling, I'd aim for 1.7B for this job as 34/34.
Edit: perhaps 70% more is reasonable at smaller sizes, and 75% is better at this size to compensate for the likely larger matrix that a 34LP job would make compared to a 33LP job. Your estimates are as good as mine; I hadn't considered that 33-to-34 is different. mfba = 99 might be a bit faster, but it's not likely much of a difference.
Last fiddled with by VBCurtis on 2021-10-27 at 15:59 
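[Editor's note: spelled out, that scaling rule looks like the sketch below; the base 33/33 target and the 1.3 factor are the thread's own estimates, not measurements.]

```python
# Rule of thumb stated above: one extra LP bit on both sides costs ~70%
# more raw relations, so one side alone costs ~30% (1.3 * 1.3 ≈ 1.7).
base_33_33 = 1.0e9              # assumed raw-relation target for a 33/33 job
per_side_factor = 1.3           # ~30% more per side that gains an LP bit

target_33_34 = base_33_33 * per_side_factor        # one side raised: ~1.3e9
target_34_34 = base_33_33 * per_side_factor ** 2   # both sides raised: ~1.7e9
print(f"33/34: {target_33_34:.2g}  34/34: {target_34_34:.2g}")
```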
2021-10-27, 16:48  #1856  
Jun 2012
7×479 Posts 
2021-10-29, 14:24  #1857 
Jun 2012
7×479 Posts 
Still plugging away on 71111_329. My recent test sieving effort used the following polynomial:
Code:
n: 1122306776491337588607322631818708778200214577213206237089370253284687138834200770294167044440649977604444005064220843458726284245767449140856119245849745806395315477933380280425456889166505800707705642211253
skew: 326211947.89
type: gnfs
lss: 0
c0: 192558034193459742319046371398586259699707875364425
c1: 5387479895042599888816938205129089277819185
c2: 1098937059524264818729815697407187
c3: 104048030268009541344497393
c4: 47857156754642446
c5: 421635720
Y0: 4842112079991293167491670985200158992968
Y1: 4692246580297096789
# Murphy_E = 1.439e-15, selected by Erik Branger
# Polynomial selection took 4 months on a GTX760
# selected mechanically
rlim: 225000000
alim: 225000000
lpbr: 33
lpba: 34
mfbr: 66
mfba: 100
rlambda: 3.0
alambda: 3.7
Code:
MQ    Norm_yield
60    30519
110   27785
200   24873
300   22587
400   20463
500   18931
600   17836
700   16875
Changing rlim/alim (and mfba to 99), in all combinations, proved to be a bit less efficient. Maybe the lims are already so low for a 33/34 job that slight shifts in rlim and alim have little effect? I looked at the 34/34 version of this job, but it is a beast. The estimated target # of raw rels is 1.8B, though it does sieve faster. Still, the 33/34 job seems to work best based on speed and # rels. Will attempt to test sieve the 34/34 version over the weekend. 
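[Editor's note: one way to turn a Norm_yield table like this into a total-relations estimate is to integrate yield over the Q range, counting only prime special-q. This is a rough sketch under two assumptions not stated in the thread: Norm_yield is relations per 1000 special-q, and special-q are the primes in the interval, with density about 1/ln(q).]

```python
import math

# (MQ, Norm_yield) points from the 33/34 test sieve above; MQ in millions.
points = [(60, 30519), (110, 27785), (200, 24873), (300, 22587),
          (400, 20463), (500, 18931), (600, 17836), (700, 16875)]

def estimate_relations(points):
    """Trapezoidal estimate of total relations over the tested Q range.
    Assumes Norm_yield is relations per 1000 special-q and that special-q
    are the primes in the interval (density ~ 1/ln q)."""
    total = 0.0
    for (q0, y0), (q1, y1) in zip(points, points[1:]):
        rate = 0.5 * (y0 + y1) / 1000.0               # relations per special-q
        lo, hi = q0 * 1e6, q1 * 1e6
        primes = (hi - lo) / math.log((lo + hi) / 2)  # approx. primes in [lo, hi]
        total += rate * primes
    return total

print(f"{estimate_relations(points):.3g}")  # several hundred million relations
```

Under these assumptions the tested range Q=60M-700M alone yields well under the 1.3B target, consistent with this being a long job even on 16f_small.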
2021-11-03, 11:46  #1858 
Jun 2012
7·479 Posts 
Finished test sieving 71111_329 as a 34/34 job using the following poly:
Code:
n: 1122306776491337588607322631818708778200214577213206237089370253284687138834200770294167044440649977604444005064220843458726284245767449140856119245849745806395315477933380280425456889166505800707705642211253
skew: 326211947.89
type: gnfs
lss: 0
c0: 192558034193459742319046371398586259699707875364425
c1: 5387479895042599888816938205129089277819185
c2: 1098937059524264818729815697407187
c3: 104048030268009541344497393
c4: 47857156754642446
c5: 421635720
Y0: 4842112079991293167491670985200158992968
Y1: 4692246580297096789
# Murphy_E = 1.439e-15, selected by Erik Branger
# Polynomial selection took 4 months on a GTX760
# selected mechanically
rlim: 225000000
alim: 225000000
lpbr: 34
lpba: 34
mfbr: 68
mfba: 100
rlambda: 3.1
alambda: 3.7
Code:
MQ    Norm_yield
60    40284
110   36659
200   33153
300   29648
400   27000
500   24974
600   23503
700   22224
Back-of-the-envelope calculations say the 33/34 job will sieve faster, and its LA will likely be easier to process, but either way it is still a behemoth of a job. Greg, would you be willing to run the LA if we sieve this on 16f_small? I don't think even my best machine could digest this thing! 
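[Editor's note: as a sanity check on the back-of-the-envelope comparison. The relation targets are the thread's estimates, and the averaging method here is an illustration, not how the posters computed it.]

```python
# Norm_yield at matching MQ points from the two tables above.
mq      = [60,    110,   200,   300,   400,   500,   600,   700]
y_33_34 = [30519, 27785, 24873, 22587, 20463, 18931, 17836, 16875]
y_34_34 = [40284, 36659, 33153, 29648, 27000, 24974, 23503, 22224]

# Average per-special-q yield advantage of 34/34 over 33/34:
gain = sum(b / a for a, b in zip(y_33_34, y_34_34)) / len(mq)

# Estimated raw-relation targets discussed in the thread:
extra_needed = 1.8e9 / 1.3e9

print(f"34/34 yields {gain:.2f}x per special-q")       # ~1.32x
print(f"but needs {extra_needed:.2f}x the relations")  # ~1.38x
```

With the yield advantage slightly below the extra-relations factor, the arithmetic leans the same way as the sieving comparison: close to a dead heat, with 33/34 marginally ahead.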
2021-11-03, 16:23  #1859 
"Curtis"
Feb 2005
Riverside, CA
2^{5}×7×23 Posts 
I agree that the data suggests a dead heat for sieving time on 34/34 vs 33/34, so we should go with the smaller LP bound for disk space and expected matrix difficulty reasons. Maybe two digits bigger would call for 34/34.
