2016-09-17, 22:29   #276
May 2009
Russia, Moscow
2×3^{3}×47 Posts 
C149 was cracked by ECM (p50 after ~400 curves @ 11e6; I got lucky here).
Now it's a C158 again. I can run polyselect for this, but not the full GNFS.
2016-09-19, 15:48   #277
May 2009
Russia, Moscow
2×3^{3}×47 Posts 
Poly for C158.
Code:
# norm 2.829565e-15 alpha -9.674228 e 1.832e-12 rroots 1
n: 79933515103235306815732304856672491074680074688676425625899406692334007147016700671551909312123530653854370338778979049814698834222425825561192039057975774481
skew: 45476477.92
c0: 2104403158320717301966750387083391826024832
c1: 167898735083323752751067777774833704
c2: 1989821115747940481877005858
c3: 151701621080234560409
c4: 351786144456
c5: 36900
Y0: 4646655632492287543013586279001
Y1: 119797584633535873
rlim: 36800000
alim: 36800000
lpbr: 30
lpba: 30
mfbr: 60
mfba: 60
rlambda: 2.6
alambda: 2.6
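(Aside, in case anyone wants to run this through msieve instead of the GGNFS scripts: the `key: value` lines above are GGNFS-style. A quick sketch for turning such a file into msieve's `.fb` polynomial format, assuming the usual `N`/`SKEW`/`R0`/`R1`/`A0`..`A5` layout; the function name is just illustrative.)

```python
# Hedged sketch: convert a GGNFS-style .poly text (n, skew, c0..c5, Y0, Y1)
# into msieve's .fb polynomial lines. Assumes the standard key names shown
# in the poly above; not taken from this thread's actual workflow.

def ggnfs_to_msieve(poly_text):
    """Parse 'key: value' lines and emit msieve .fb lines as one string."""
    kv = {}
    for line in poly_text.splitlines():
        line = line.split('#')[0].strip()       # drop comment lines
        if ':' in line:
            key, _, val = line.partition(':')
            kv[key.strip()] = val.strip()
    out = ["N %s" % kv['n'],
           "SKEW %s" % kv['skew'],
           "R0 %s" % kv['Y0'],                  # rational poly: Y1*x + Y0
           "R1 %s" % kv['Y1']]
    out += ["A%d %s" % (i, kv['c%d' % i]) for i in range(6)]
    return "\n".join(out)
```

Only the polynomial lines are converted; the sieving parameters (rlim/alim, lpbr/lpba, etc.) live elsewhere in an msieve-driven workflow.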
2016-09-20, 02:23   #278
"Ed Hall"
Dec 2009
Adirondack Mtns
5×727 Posts 
If no one jumps up to take this on soon, I might want to experiment with it. It's been a while since I did this type of single-number work and I can't find (or remember) anything on choosing the proper siever. Would this be lasieve4I14e?
Or, do I need to test? Maybe that's why I can't find it... 
2016-09-20, 05:10   #279
"Curtis"
Feb 2005
Riverside, CA
1001001000110_{2} Posts 
A 158 is squarely in 14e, though I suggest 31LP rather than 30. I think NFS@home often runs one large-prime bit small because a data set 40% smaller is worth 4-8% extra computation effort; for an individual effort, the tradeoff goes the other way.
I believe 31 is faster than 30 at 155 digits, and 32 is faster than 31 at 166 digits. The transition to 15e is somewhere around 170 digits, well above the typical single-machine project. Basically, I run almost all my projects one LP bit higher than NFS@home chooses, with very good results.
Something near 150M raw relations should allow you to build a matrix with target density 96 or 100. If the architecture you run LA on is older than Haswell, I'd set target density at 100-110, while LA on Haswell is fast enough that 90 or 96 will save you more sieve time than it costs you in LA (compared to, say, 104 or 110).
Last fiddled with by VBCurtis on 2016-09-20 at 05:12
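(For my own notes, the rules of thumb in that post could be jotted down as a quick lookup; this is just my reading of the crossover points given above, not anything canonical.)

```python
# Sketch of the post's rules of thumb: siever binary and large-prime bits
# by GNFS difficulty in digits. The crossover digits (155, 166, ~170) are
# the post's estimates; the function itself is purely illustrative.

def suggest_params(digits):
    """Suggested lattice siever and large-prime bits for a GNFS job."""
    siever = "lasieve4I14e" if digits < 170 else "lasieve4I15e"
    if digits < 155:
        lpb = 30
    elif digits < 166:
        lpb = 31          # "31 is faster than 30 at 155 digits"
    else:
        lpb = 32          # "32 is faster than 31 at 166 digits"
    return siever, lpb
```

For this C158, that gives 14e with 31-bit large primes, aiming for roughly 150M raw relations.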
2016-09-20, 06:01   #280
"Curtis"
Feb 2005
Riverside, CA
4678_{10} Posts 

2016-09-20, 14:14   #281
"Ed Hall"
Dec 2009
Adirondack Mtns
5·727 Posts 
I guess I'll work this number and see if I can actually complete it. 

2016-09-20, 14:28   #282
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
16BE_{16} Posts 


2016-09-20, 16:17   #283
"Ed Hall"
Dec 2009
Adirondack Mtns
5·727 Posts 
How much RAM will I need for the LA step? (I'm actually thinking about resurrecting my openmpi setup for that... nah, probably not, at least for now...) Thanks to both VBCurtis and henryzz!

2016-09-20, 18:45   #284
"Curtis"
Feb 2005
Riverside, CA
11106_{8} Posts 
Well, that depends on how far you oversieve, what target density you choose, and (of course) some luck. If you have to use a 4GB system, you might need some extra relations to get the matrix small enough.

2016-09-21, 01:29   #285
"Ed Hall"
Dec 2009
Adirondack Mtns
111000110011_{2} Posts 
Thanks... 

2016-09-21, 18:34   #286
"Ed Hall"
Dec 2009
Adirondack Mtns
5·727 Posts 
Edit: I think I have it figured out.
Sorry for the following, but my memory is failing me badly (or is it actually failing me very well?). I can't find any notes, and it wasn't clear from the readmes. I did set up a two-machine cluster to help with the LA. I think I can get that running.
Last fiddled with by EdH on 2016-09-21 at 19:33