Calibration of GNFS with 32-bit LP on 14e/15e
5748.1537: C170, e=3.296e13
313977758 relations; 267758585 unique, 263815040 unique ideals above 120M.
Clique removal starts with 111319707 relations and 92985659 unique ideals; ends with 27577554 relations and 26858833 ideals.
2-way merge gets 17413948 relation sets and 16695227 unique ideals.
Full merge: weight of 9679427 cycles is about 677730741 (70.02/cycle).

249999853 relations; 220181907 unique, 240511093 unique ideals above 120M.
Clique removal starts with 59193528 relations and 56844744 unique ideals; ends with 39640238 relations and 39263674 ideals.
2-way merge gets 22161369 relation sets and 21784806 unique ideals.
Full merge: weight of 12403006 cycles is about 868508517 (70.02/cycle).

Next trials: 250M relations at td=120, 240M relations at td=70 (both expected to fail), the full relation set at td=120, and 260M relations at td=120.
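As an aside, a couple of the quoted figures can be sanity-checked with one-line arithmetic; a minimal sketch in Python, using numbers copied from the 313977758-relation run above (the helper name is mine, purely illustrative):

```python
# Sanity checks on the filtering log above (313,977,758-relation run).
# All constants are copied from the log; the helper is illustrative only.

def survival(n_in: int, n_out: int) -> float:
    """Fraction of relations surviving a filtering stage."""
    return n_out / n_in

raw, unique = 313977758, 267758585
clique_in, clique_out = 111319707, 27577554

print(f"duplicate rate: {1 - unique / raw:.1%}")                          # ~14.7%
print(f"clique-removal survival: {survival(clique_in, clique_out):.1%}")  # ~24.8%
```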
250M relations at target density 120 failed at the full-merge stage, unsurprisingly.

With target density 120 and the full 313977758 relations, you get "weight of 8103427 cycles is about 972830684 (120.05/cycle)", with an ETA of 62h11m as opposed to 65h30m with density 70, so that was hardly worth it ...
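The "/cycle" figure in these logs appears to be nothing more than total matrix weight divided by the number of cycles (matrix rows); a one-line check with the figures just quoted:

```python
# "120.05/cycle" = total matrix weight / number of cycles (rows).
cycles, weight = 8103427, 972830684
print(f"{weight / cycles:.2f} per cycle")   # prints 120.05, matching the log
```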
What are the lpba and the q range being sieved? Is the oversieving on purpose or by accident?

I've found that the lpba is 32 from NFS@Home's website, so it is not oversieved. What surprised me is that this number was factored with lpba 32; is there any good reason not to use lpba=31?

[QUOTE=wreck;393770]I've found that the lpba is 32 from NFS@Home's website, so it is not oversieved. What surprised me is that this number was factored with lpba 32; is there any good reason not to use lpba=31?[/QUOTE]
Well, 250M raw relations were sufficient, while 313M were obtained. On what basis do you call it not oversieved? Or is 25% extra relations pretty standard for NFS@Home projects, to make the matrices easier? I believe that for GNFS, the mid-160s is where 31-bit and 32-bit take the same time to sieve; at C170, the larger matrix that 32-bit might produce should be made up for by sieve-time savings. See [url]http://mersenneforum.org/showpost.php?p=393688&postcount=1732[/url] for a reference: a C164 took no more time to sieve with 32-bit than a C163 took with 31-bit.
[QUOTE=wreck;393770]I've found that the lpba is 32 from NFS@Home's website, so it is not oversieved. What surprised me is that this number was factored with lpba 32; is there any good reason not to use lpba=31?[/QUOTE]
The whole point of this experiment is that I believe lpba=32 is faster, because it doesn't need many more relations and it gathers them more quickly; I'm trying to collect evidence to back up this belief. 
I've done some test-sieving on an SNFS-246, and 15e/32 was 6-8% faster than 14e/32 or 15e/31. The test sieve was 4 points spaced across the sieve interval, about a quad-core hour per trial. I assumed 185M raw relations for 31-bit and 325M raw for 32-bit. It looks like your tests indicate 32-bit doesn't need that many relations for a number this size, so the sieve savings may be more than 8%.
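That comparison is essentially "assumed relation target times measured cost per relation"; here is a hedged sketch of the arithmetic, where the relation targets are from the post but the seconds-per-relation figures are invented placeholders, not measured values:

```python
# Total sieve-time comparison in the spirit of the test sieve above.
# The raw-relation targets (185M for 31-bit, 325M for 32-bit) come from
# the post; the seconds-per-relation costs are hypothetical placeholders.

def total_core_hours(target_rels: float, sec_per_rel: float) -> float:
    """Estimated total sieving effort for a given relation target."""
    return target_rels * sec_per_rel / 3600

lp31 = total_core_hours(185e6, 0.060)   # hypothetical 15e/31 cost per relation
lp32 = total_core_hours(325e6, 0.032)   # hypothetical 15e/32 cost per relation

print(f"15e/31: {lp31:.0f} core-hours")
print(f"15e/32: {lp32:.0f} core-hours")
print(f"32-bit saving: {1 - lp32 / lp31:.1%}")   # ~6% with these placeholders
```

If 32-bit actually needs fewer than 325M raw relations, as the experiments earlier in the thread suggest, the saving grows accordingly.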

Same number: 5748.1537: C170, e=3.296e13
259999848 relations, 227826466 unique, reduce to 20670071 relation sets and 20242852 unique ideals: cannot build matrix with td=120.
269999845 relations, 235404460 unique, reduce to 19655970 relation sets and 19176009 unique ideals: weight of 9162209 cycles is about 1099605715 (120.02/cycle).
279999840 relations, 242893906 unique, reduce to 18973265 relation sets and 18437728 unique ideals: weight of 8841928 cycles is about 1061125245 (120.01/cycle); ETA is 77h41m.

Sadly, nearly all the sieving for this number was done while the tool that produced every-ten-minutes statistics summaries wasn't running, so I can't measure the tradeoff of real time spent sieving against real time spent in linear algebra.
What surprises me is that in the past, most GNFS factorizations from C165 to C170 (a C165 10^1050 by Kurt Beschorner, a C166 factored by Aliquot M.Forum+fivemack, a C170 factored by Aliquot M.Forum+Carlos, etc.) used lpba=30; a C178 6^353+1 was factored using lpba=31; a C210 HP49(117) used lpba=33 by Wraith; etc.

Calibration for 32/14 GNFS
XYYXF_134_120, C165, e=7.947e13
Processing different numbers of relations with target density 120:
[code]
Relations   Unique      Full-merge R  Full-merge I  Cycles
219998793   199128974   (didn't get to merge)
239998789   215580435   (didn't get to merge)
259998788   232463428   22384191      21919710      (full merge failed)
279998788   249411185   18888183      18387877      (full merge failed)
299998787   265785963   17053450      16438817      7655017   ETA 37:50
319998782   282021412   15877727      15147712      7177912
339998781   297649001   15003077      14159547      6815747
359998781   313744699   14477360      13516779      6574979
373499372   324483360   14071650      13031826      6378026   ETA 24:34 (i7-4770, 4 threads); actually 21:55
[/code]
For this one I do have the relations-per-hour figure: it is 3.2 million on average. So we spent 23 hours getting the relations from 300M to 373.5M, which turns out to save 13 hours of linear algebra; this suggests that 340M or so would be optimal at this level. There is a subtlety in that the relations-per-hour figure drops off significantly at the end of ranges, and I'm not quite sure how to handle that.
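The sieving-versus-linear-algebra tradeoff in the last paragraph can be written out explicitly; a small sketch, plain arithmetic with numbers taken only from the table and text above:

```python
# Sieve-vs-linalg tradeoff for the XYYXF_134_120 run, using only figures
# from the table above and the quoted 3.2M relations/hour average.

rels_per_hour = 3.2e6
extra_rels = 373.5e6 - 300e6          # relations added beyond the 300M trial

extra_sieve_h = extra_rels / rels_per_hour        # hours of extra sieving

eta_300m = 37 + 50 / 60               # 37:50 linalg ETA at 300M relations
eta_373m = 24 + 34 / 60               # 24:34 linalg ETA at 373.5M relations

linalg_saved_h = eta_300m - eta_373m
print(f"extra sieving: {extra_sieve_h:.0f} h")    # ~23 h
print(f"linalg saved:  {linalg_saved_h:.0f} h")   # ~13 h
```

Using the actual 21:55 run time instead of the 24:34 ETA would make the trade look slightly better, around 16 hours of linear algebra saved.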