2016-10-04, 22:38  #111 
Mar 2006
11·43 Posts 
I was recently asked about the status of HP49 119 c251, and wanted to update everyone. I'd also like to ask if anyone here on the forum would help me tackle this number with GNFS?
First, the work I've done so far can be summed up with the following tables: Code:
HP49 Step 119 c251: Expected number of curves to find a factor of n digits:

digits                  35        40        45       50       55       60       65       70      75
B1 = 11e6              138       788      5208    39497   336066  3167410  3.2e+007 3.7e+008
B1 = 43e6               61       278      1459     8704    57844   419970  3346252  2.9e+007
B1 = 260e6              23        82       335     1521     7650    42057   250476  1603736
B1 = 1e9                14        44       154      599     2553    11843    59619   319570
B1 = 3e9                11        31        96      335     1279     5292    23661   112329  565999
B1 = 3e9 (maxmem 4096)  11        31        99      344     1315     5446    24234   115138  580561

Number of curves completed so far, and their equivalent t-level:

 12000 @ 11e6   =    86.956x   15.228x    2.304x    0.303x   0.035x   0.003x
 18000 @ 43e6   =   295.081x   64.748x   12.337x    2.068x   0.311x   0.042x   0.005x
 90000 @ 260e6  =  3913.043x 1097.560x  268.656x   59.171x  11.764x   2.139x   0.359x   0.056x
120000 @ 1e9    =  8571.428x 2727.272x  779.220x  200.333x  47.003x  10.132x   2.012x   0.375x
 16000 @ 3e9(4g)=  1454.545x  516.129x  161.616x   46.511x  12.167x   2.937x   0.660x   0.138x
 19200 @ 3e9    =  1745.454x  619.354x  200.000x   57.313x  15.011x   3.628x   0.811x   0.170x
total           = 16066.507x 5040.291x 1424.133x  365.699x  86.291x  18.881x   3.847x   0.739x Code:
Probability that a factor of the given size was missed:
55 digits: e^(-86.291) ~= 3.344e-38
60 digits: e^(-18.881) ~= 6.310e-9
65 digits: e^(-3.847)  ~= 0.0213
70 digits: e^(-0.739)  ~= 0.4775 (almost 50%!) 
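The arithmetic behind those odds is a Poisson model: if the completed work amounts to x "t-levels" at a given factor size (the total row of the table above), the chance that a factor of that size slipped through is e^(-x). A quick sketch:

```python
import math

# Total t-level multiples per factor size, taken from the table above.
tlevel = {55: 86.291, 60: 18.881, 65: 3.847, 70: 0.739}

for digits in sorted(tlevel):
    x = tlevel[digits]
    p_miss = math.exp(-x)  # Poisson probability of zero successes in x expected
    print(f"{digits} digits: P(missed) ~= {p_miss:.3e}, "
          f"P(found, if one exists) ~= {1 - p_miss:.4f}")
```

So a 65-digit factor is almost certainly gone, while a 70-digit factor still has roughly even odds of surviving.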
2016-10-05, 02:54  #112 
"Curtis"
Feb 2005
Riverside, CA
11174_{8} Posts 
I'll donate a few hundred ECM curves, as I have grown tired of M1277 ECM work.
While the sieving is intimidating, the matrix is what should concern you when contemplating this job. I saw your GNFS210 was 77M matrix; a WAG that matrix size continues to scale with job size has a GNFS170 around 9M, so perhaps a GNFS250 would have a matrix on the order of 300M by 300M? Perhaps 400M? Memory needs scale with the square of the dimension, so 3040 times more memory than the 32GB you needed for GNFS210. A terabyte of RAM might do it? Two questions for the bigjob experts: 1. When matrixsolving on a cluster, does each node need only its portion of the matrix in memory? That is, if a 1TBram job were split onto 32 nodes, would each node need something less than 64GB? 2. Do the CADO tools have an equivalent of lasieve17e? 
2016-10-05, 03:02  #113 
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 89<O<88
3×29×83 Posts 
Err... pardon me, but wouldn't this set the (public) record for the largest GNFS factorization ever? This comes out to 834 bits, rather more than the current 768-bit record (...unless an 896-bit job is in progress by Paul Zimmermann's group?)

2016-10-05, 15:16  #114 
Tribal Bullet
Oct 2004
6720_{8} Posts 
Yes, it would be a record for public factorizations.
The memory usage of the matrix scales linearly with the dimension for these kinds of problems, because we store it in sparse format. Spreading the linear algebra over multiple machines means the matrix is cut up into parts and each machine sees only its local part; vectors are shuttled back and forth between machines as the computation needs them.
I would be worried about the filtering instead. The final sieving dataset would have several tens of billions of relations, far larger than anything either CADO or Msieve has ever processed. Msieve still has a bug that limits filtering to about a billion relations; even without the bug, more than 4 billion relations would require major plumbing changes to the code. 
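The linear scaling of sparse storage is worth a back-of-envelope check. The ~150 nonzeros per row and 4-byte column indices below are assumptions for illustration, not measured NFS values:

```python
# Sparse storage: memory grows linearly with dimension, since we only
# store the nonzero entries (as column indices), not the full bitmatrix.
def sparse_matrix_bytes(dimension, nonzeros_per_row=150, bytes_per_index=4):
    return dimension * nonzeros_per_row * bytes_per_index

dim = 300_000_000  # hypothetical GNFS-250 matrix dimension
print(f"sparse:  {sparse_matrix_bytes(dim) / 2**30:.0f} GiB")
# A dense 0/1 matrix of the same dimension would need dim*dim bits:
print(f"dense:   {dim * dim / 8 / 2**40:.0f} TiB")
```

The sparse estimate lands in the hundreds-of-gigabytes range rather than the terabytes the quadratic guess suggested, though it still has to fit somewhere.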
2016-10-05, 17:31  #115  
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
1315_{16} Posts 
Quote:


2016-10-05, 18:35  #116 
Tribal Bullet
Oct 2004
2^{4}×13×17 Posts 
That wasn't volunteering :(

2016-10-05, 18:54  #117 
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
5·977 Posts 

2016-10-05, 22:02  #118 
"Curtis"
Feb 2005
Riverside, CA
2^{2}×7×13^{2} Posts 
I think you should run some curves, so you can call it a forum effort.
The filtering would have to be done by CADO; there's no reason to think CADO can't handle it, except for the obvious fact that it hasn't been tested on such a problem. M1177 SNFS job has 15G raw relations, so we know there is no 4G limit. Those huge Mersenne factorizations used 37bit large primes, 3 rational large primes (109bit mfbr) 2 algebraic. We could testsieve 37 vs 38 vs 39 bit large primes? If CADO can run a sieve region bigger than 16e equivalent, the sieve step is mere CPU time (right, guys? Guys?). Last fiddled with by VBCurtis on 20161005 at 22:13 
2016-10-06, 10:54  #119  
Bamboozled!
"πΊππ·π·π"
May 2003
Down not across
29·367 Posts 
Quote:
Note that the C251 is within range of GPU stage 1. Perhaps someone would like to write a V2 ecmclient-compatible wrapper. It's something I've been thinking about for a long time but the global shortage of round tuits has hit hard. 
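As a sketch of the split such a wrapper would automate: GMP-ECM's -gpu, -save and -resume options let the GPU run stage 1 in bulk and hand residues to CPU stage 2. The filename, variable and B1 below are placeholders:

```shell
# Stage 1 only on the GPU (B2=1 skips stage 2), saving residues to a file:
echo "$C251" | ecm -gpu -save hp49_stage1.save 850e6 1

# Resume the saved residues and run stage 2 on the CPU:
ecm -resume hp49_stage1.save 850e6
```

A V2 ecmclient-compatible wrapper would essentially drive this pair of invocations and report results back to the server.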

2016-10-06, 12:25  #120  
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
5×1,171 Posts 
Quote:
It looks like the CADO siever supports fb primes >32 bits and large primes >32 bits. The CADO siever used to have a SUPPORT_I17 configuration option; this has since been removed. I assume it is always supported now, as I can't find any limitation in the code. Last fiddled with by henryzz on 2016-10-06 at 12:26 

2016-10-06, 14:20  #121 
"Curtis"
Feb 2005
Riverside, CA
4732_{10} Posts 
RSA-768 was done with 16e (equivalent, in CADO). The CADO defaults use smaller siever regions than we usually do with lasieve; e.g. 13e up into the mid-140s GNFS. A 251-digit job would be right at the top of 17e even if 18e were available; test-sieving would be fun, right?
