#45
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
19·23² Posts
Congrats! Nice sizeable job.
Wouldn't you want to get your name on the board with A. Cunningham, The Cabal, A.K. Lenstra, Wagstaff, Dodson, Leyland, and many others? You could reserve quite a few numbers there with your throughput. (Especially those that are marked with MWanted?)
#46
Tribal Bullet
Oct 2004
3555₁₀ Posts
Welcome to the big leagues, ryanp :)
#47
Mar 2007
Germany
2³·3·11 Posts
Congratulations!
#48
Jun 2012
Boulder, CO
5²×17 Posts
OK, help needed, part 2. :)
I'm working on the remaining 247-digit cofactor of np_135 = 135^135 + 136^136, http://factordb.com/index.php?id=1100000000295975717. So far I've sieved over a billion input relations, which leads to about 740M unique relations in filtering:

Code:
Sun Mar 24 08:28:31 2013  commencing relation filtering
Sun Mar 24 08:28:31 2013  estimated available RAM is 64442.4 MB
Sun Mar 24 08:28:31 2013  commencing duplicate removal, pass 1
Sun Mar 24 10:54:01 2013  error -9 reading relation 609926392
Sun Mar 24 10:54:36 2013  error -15 reading relation 613889282
Sun Mar 24 12:46:39 2013  skipped 985917 relations with b > 2^32
Sun Mar 24 12:46:39 2013  found 334458904 hash collisions in 1090446680 relations
Sun Mar 24 12:47:39 2013  added 348416 free relations
Sun Mar 24 12:47:39 2013  commencing duplicate removal, pass 2
Sun Mar 24 13:10:52 2013  found 347797003 duplicates and 742998093 unique relations
Sun Mar 24 13:10:52 2013  memory use: 8780.0 MB
Sun Mar 24 13:10:53 2013  reading ideals above 917307392
Sun Mar 24 13:10:53 2013  commencing singleton removal, initial pass

Code:
Mon Mar 25 00:08:23 2013  commencing linear algebra
Mon Mar 25 00:08:56 2013  read 47704177 cycles
Mon Mar 25 00:10:49 2013  cycles contain 153106684 unique relations
Mon Mar 25 01:03:03 2013  read 153106684 relations
Mon Mar 25 01:09:43 2013  using 20 quadratic characters above 4294917296
Mon Mar 25 01:21:41 2013  building initial matrix
Mon Mar 25 02:00:59 2013  memory use: 20663.6 MB
Mon Mar 25 02:07:59 2013  read 47704177 cycles
Mon Mar 25 02:08:08 2013  matrix is 47712511 x 47704177 (22480.6 MB) with weight 6449966745 (135.21/col)
Mon Mar 25 02:08:08 2013  sparse part has weight 5368409052 (112.54/col)
Mon Mar 25 02:26:05 2013  filtering completed in 2 passes
Mon Mar 25 02:26:17 2013  matrix is 47703563 x 47698304 (22478.7 MB) with weight 6449395982 (135.21/col)
Mon Mar 25 02:26:17 2013  sparse part has weight 5367974708 (112.54/col)
Mon Mar 25 02:41:36 2013  matrix starts at (0, 0)
Mon Mar 25 02:41:45 2013  matrix is 47703563 x 47698304 (22478.7 MB) with weight 6449395982 (135.21/col)
Mon Mar 25 02:41:45 2013  sparse part has weight 5367974708 (112.54/col)
Mon Mar 25 02:41:45 2013  matrix needs more columns than rows; try adding 2-3% more relations
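In other words, the build ends up with slightly fewer columns than rows. A throwaway Python check (purely illustrative; the numbers are copied straight from the log above, and the interpretation just follows msieve's own message that it wants more columns than rows before it will run the solver):

Code:
# dimensions from the "matrix is 47703563 x 47698304" line in the log above
rows, cols = 47703563, 47698304
excess = cols - rows                 # msieve wants this to be positive
print(f"column excess: {excess}")    # -5259 here, hence "try adding 2-3% more relations"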
#49
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
1011110001111₂ Posts
Lowering the target density a little should do it.
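For example, something along these lines to redo just the filtering step (a sketch only: the binary path, the save-file name, and the value 90 are placeholders, not taken from this thread; the target density is passed through -nc1 to msieve's filtering stage):

Code:
# Hypothetical re-run of msieve filtering with a lower target density.
import subprocess

subprocess.run([
    "./msieve", "-v",
    "-s", "np_135.dat",            # placeholder save/relations file name
    "-nc1", "target_density=90",   # try a somewhat lower value than before
], check=True)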
#50
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
19×23² Posts
Lowering density may help, but sieving some more will help more.
A 47M matrix shows that you are at the "cusp of convergence".
#51
Tribal Bullet
Oct 2004
3²·5·79 Posts
Can you post the full log from the filtering and LA? This is exactly what happened when Paul Zimmermann used Msieve on a dataset for RSA704 that was slightly undersieved. My concern is that there's a bug in the code somewhere that is making the model of the matrix used in the filtering diverge from the actual underlying relations. The LA counts the rows and columns of the matrix exactly, starting only from the original relations, and it's suspicious that even the initial result of the matrix build clearly does not have any excess columns.
#52
Jun 2012
Boulder, CO
5²·17 Posts
http://pastebin.com/SaZwK3iq
#53
"Bob Silverman"
Nov 2003
North of Boston
7,507 Posts
A calculus of variations analysis (similar to the one in my Optimal Parameterization of SNFS paper) performed on GNFS reveals the following. If one wants to optimize the SIZE of the sieving regions as a function of the special q, then the size should be proportional to the probability of finding a smooth relation with that particular q. This means that, since smaller q's have higher yields, one should use much LARGER sieve regions for the smaller q, then gradually reduce the size of the region as q increases. This is intuitively obvious: one should want to do the most sieving when the probability of success is highest.

How large should the first region be? It depends on the size of the factor base and on the number and size of the large primes that get accepted in relations. The question of how quickly one should shrink the sieve area gets a similar response. In practice, one does trial sieving to answer these questions.

Note that using this strategy is problematic in practice for the Franke/Kleinjung approach to building the sub-sieve region. Their construction requires that the size be a power of two (for efficiency reasons). Making it a non-power-of-two would kill any gain one might get from the optimal sizing strategy that I suggest above. My siever, on the other hand, can use an arbitrarily sized sieve region with no loss of efficiency. I have tried the suggested strategy and it does reduce total sieving time for numbers in the range that I can do.
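A toy numerical illustration of the scaling idea (this is only a sketch: the u^-u stand-in for Dickman's rho, the norm model norm ≈ norm_scale·q, and every constant below are made-up assumptions, not numbers from the paper):

Code:
import math

def rho_approx(u):
    # Very crude Dickman-rho stand-in; only useful for relative comparisons.
    return u ** (-u) if u > 1.0 else 1.0

def relative_area(q, q0, bound, norm_scale=1e20):
    # Sieve area suggested at special-q `q`, relative to the area used at `q0`.
    def p_smooth(qq):
        u = math.log(norm_scale * qq) / math.log(bound)
        return rho_approx(u)
    return p_smooth(q) / p_smooth(q0)

q0 = 100e6
for q in (100e6, 200e6, 400e6, 800e6):
    print(f"q = {q:>11.0f}: area factor ~ {relative_area(q, q0, bound=2.0**30):.2f}")

The point is only the shape of the curve: the suggested area shrinks smoothly as q grows, and trial sieving would be used to calibrate the actual constants.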
#54
Jun 2012
Boulder, CO
110101001₂ Posts
Code:
Tue Mar 26 02:08:53 2013  commencing linear algebra
Tue Mar 26 02:09:11 2013  read 50682177 cycles
Tue Mar 26 02:10:51 2013  cycles contain 153108686 unique relations
Tue Mar 26 02:54:57 2013  read 153108686 relations
Tue Mar 26 02:59:39 2013  using 20 quadratic characters above 4294917296
Tue Mar 26 03:11:41 2013  building initial matrix
Tue Mar 26 03:48:02 2013  memory use: 20613.6 MB
Tue Mar 26 03:53:34 2013  read 50682177 cycles
Tue Mar 26 03:53:45 2013  matrix is 50690200 x 50682177 (20516.5 MB) with weight 5915593165 (116.72/col)
Tue Mar 26 03:53:45 2013  sparse part has weight 4820773930 (95.12/col)
Tue Mar 26 04:06:31 2013  filtering completed in 2 passes
Tue Mar 26 04:06:46 2013  matrix is 50678722 x 50673715 (20514.6 MB) with weight 5915011768 (116.73/col)
Tue Mar 26 04:06:46 2013  sparse part has weight 4820374768 (95.13/col)
Tue Mar 26 04:17:50 2013  matrix starts at (0, 0)
Tue Mar 26 04:18:02 2013  matrix is 50678722 x 50673715 (20514.6 MB) with weight 5915011768 (116.73/col)
Tue Mar 26 04:18:02 2013  sparse part has weight 4820374768 (95.13/col)
Tue Mar 26 04:18:02 2013  matrix needs more columns than rows; try adding 2-3% more relations
#55
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
2743₁₆ Posts
Jason may be on to something with his post. The filtering log looks as if it should converge much better than it did; the initial excess looked quite nice...
This is a degree 7 project. Could degree 7 throw off something in the filtering process?
| Thread | Thread Starter | Forum | Replies | Last Post |
|---|---|---|---|---|
| 29 to 30 bit large prime SNFS crossover | VBCurtis | Factoring | 11 | 2015-03-09 07:01 |
| How many jobs should I run? | Warlord | Software | 12 | 2013-10-11 22:18 |
| Advice for large GNFS jobs? | WraithX | Factoring | 59 | 2013-07-30 01:13 |
| doing large NFS jobs on Amazon EC2? | ixfd64 | Factoring | 3 | 2012-06-06 08:27 |
| Filtering on large NFS jobs, particularly 2^908+1 | bdodson | Factoring | 20 | 2008-11-26 20:45 |