mersenneforum.org
Old 2013-03-15, 20:44   #45
Batalov
 
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2

9,127 Posts

Congrats! Nice sizeable job.

Wouldn't you want to get your name on the board with A. Cunningham, The Cabal, A.K. Lenstra, Wagstaff, Dodson, Leyland, and many others?

You could reserve quite a few numbers there with your throughput. (Especially those that are marked with MWanted?)
Old 2013-03-15, 22:16   #46
jasonp
Tribal Bullet
 
Oct 2004

3,529 Posts

Welcome to the big leagues, ryanp :)
Old 2013-03-16, 12:46   #47
Andi_HB
 
Mar 2007
Germany

2³×3×11 Posts

Congratulations!
Old 2013-03-25, 14:06   #48
ryanp
 
Jun 2012
Boulder, CO

2×7×13 Posts

OK, help needed, part 2. :)

I'm working on the remaining 247-digit cofactor of np_135 = 135^135 + 136^136, http://factordb.com/index.php?id=1100000000295975717. So far I've sieved over a billion input relations, which leads to about 740M unique relations in filtering:

Code:
Sun Mar 24 08:28:31 2013  commencing relation filtering
Sun Mar 24 08:28:31 2013  estimated available RAM is 64442.4 MB
Sun Mar 24 08:28:31 2013  commencing duplicate removal, pass 1
Sun Mar 24 10:54:01 2013  error -9 reading relation 609926392
Sun Mar 24 10:54:36 2013  error -15 reading relation 613889282
Sun Mar 24 12:46:39 2013  skipped 985917 relations with b > 2^32
Sun Mar 24 12:46:39 2013  found 334458904 hash collisions in 1090446680 relations
Sun Mar 24 12:47:39 2013  added 348416 free relations
Sun Mar 24 12:47:39 2013  commencing duplicate removal, pass 2
Sun Mar 24 13:10:52 2013  found 347797003 duplicates and 742998093 unique relations
Sun Mar 24 13:10:52 2013  memory use: 8780.0 MB
Sun Mar 24 13:10:53 2013  reading ideals above 917307392
Sun Mar 24 13:10:53 2013  commencing singleton removal, initial pass
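The dedup numbers are at least self-consistent: unique = total - duplicates + free relations. A throwaway Python check (assuming the 985917 skipped b > 2^32 relations are not part of the 1090446680 total):

Code:
# consistency check of the duplicate-removal figures in the log above
total      = 1090446680   # relations fed to duplicate removal
duplicates =  347797003   # duplicates found in pass 2
free_rels  =     348416   # free relations added
print(total - duplicates + free_rels)   # 742998093, matching the log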
I'm able to generate a matrix with "-D 130" (target density; I think the flag was renamed after msieve 1.50). But once it actually gets to linear algebra:

Code:
Mon Mar 25 00:08:23 2013  commencing linear algebra
Mon Mar 25 00:08:56 2013  read 47704177 cycles
Mon Mar 25 00:10:49 2013  cycles contain 153106684 unique relations
Mon Mar 25 01:03:03 2013  read 153106684 relations
Mon Mar 25 01:09:43 2013  using 20 quadratic characters above 4294917296
Mon Mar 25 01:21:41 2013  building initial matrix
Mon Mar 25 02:00:59 2013  memory use: 20663.6 MB
Mon Mar 25 02:07:59 2013  read 47704177 cycles
Mon Mar 25 02:08:08 2013  matrix is 47712511 x 47704177 (22480.6 MB) with weight 6449966745 (135.21/col)
Mon Mar 25 02:08:08 2013  sparse part has weight 5368409052 (112.54/col)
Mon Mar 25 02:26:05 2013  filtering completed in 2 passes
Mon Mar 25 02:26:17 2013  matrix is 47703563 x 47698304 (22478.7 MB) with weight 6449395982 (135.21/col)
Mon Mar 25 02:26:17 2013  sparse part has weight 5367974708 (112.54/col)
Mon Mar 25 02:41:36 2013  matrix starts at (0, 0)
Mon Mar 25 02:41:45 2013  matrix is 47703563 x 47698304 (22478.7 MB) with weight 6449395982 (135.21/col)
Mon Mar 25 02:41:45 2013  sparse part has weight 5367974708 (112.54/col)
Mon Mar 25 02:41:45 2013  matrix needs more columns than rows; try adding 2-3% more relations
Is further sieving really required here? Should I try lowering the target density? (That seems like it would produce a matrix that takes even longer to solve, though.) Or something else?
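For reference, the shortfall itself is tiny. The second figure in the "matrix is A x B" lines matches the cycle count, so it's the column count, and the build above comes up a few thousand columns short out of ~47.7M. A quick check with the numbers from the log:

Code:
# column deficits (rows - columns) reported in the LA log above
initial = (47712511, 47704177)   # right after building the matrix
final   = (47703563, 47698304)   # after the 2 filtering passes
for rows, cols in (initial, final):
    deficit = rows - cols
    print(deficit, "%.4f%%" % (100.0 * deficit / rows))
# -> 8334 and 5259 columns short, i.e. roughly 0.01-0.02% of the matrix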
Old 2013-03-25, 14:08   #49
henryzz
Just call me Henry
 
"David"
Sep 2007
Cambridge (GMT/BST)

165A₁₆ Posts

Lowering the target density a little should do it.
Old 2013-03-25, 16:47   #50
Batalov
 
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2

9,127 Posts

Lowering density may help, but sieving some more will help more.
A 47M matrix shows that you are at the "cusp of convergence".
Old 2013-03-25, 17:47   #51
jasonp
Tribal Bullet
 
Oct 2004

3,529 Posts

Can you post the full log from the filtering and LA? This is exactly what happened when Paul Zimmermann used Msieve on a dataset for RSA704 that was slightly undersieved. My concern is that there's a bug somewhere in the code that is making the filtering's model of the matrix diverge from the actual underlying relations. The LA counts the rows and columns of the matrix exactly, starting only from the original relations, and it's suspicious that even the initial matrix build clearly has no excess columns.
Old 2013-03-25, 18:59   #52
ryanp
 
Jun 2012
Boulder, CO

2×7×13 Posts

Quote:
Originally Posted by jasonp
Can you post the full log from the filtering and LA?
It seems to be too big to fit in a post here, so I've pasted it at:

http://pastebin.com/SaZwK3iq
Old 2013-03-25, 19:16   #53
R.D. Silverman
 
Nov 2003

7460₁₀ Posts

Quote:
Originally Posted by ryanp
Is further sieving really required here? Should I try lowering the target density? (That seems like it would produce a matrix that takes even longer to solve, though.) Or something else?
A theoretical tidbit for large NFS jobs:

A calculus of variations analysis (similar to the one in my Optimal Parameterization of SNFS paper) performed on GNFS reveals the following.

If one wants to optimize the SIZE of the sieving regions as a function of
the special q, then the size should be proportional to the probability of
finding a smooth relation with that particular q.

This means that, since smaller q's have higher yields, one should use
much LARGER sieve regions for the smaller q, then gradually reduce the
size of the region as q increases.

This is intuitively obvious: one wants to do the most sieving when the
probability of success is highest.

How large should the first region be? It depends on the size of the
factor base, and the number and size of the large primes that get
accepted in relations. The question of how quickly one should shrink the
sieve area gets a similar answer. In practice, one does trial sieving
to answer these questions.

Note that using this strategy is problematic in practice for the Franke/
Kleinjung approach for building the sub-sieve region. Their construction
requires that the size be a power of two (for efficiency reasons). Making
it a non-power-of-two would kill any gain one might get from the
optimal sizing strategy that I suggest above.

My siever, on the other hand, can use an arbitrarily sized sieve region with
no loss of efficiency. I have tried the suggested strategy and it does
reduce total sieving time for numbers in the range that I can do.
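To make the scaling concrete, here is a toy Python sketch of the general idea (not code from any real siever; the u^-u yield model and every constant below are made-up placeholders): the sieve area at each special-q is scaled in proportion to an estimated smoothness probability, normalized so the smallest q gets the full base area. In practice the shape of that scaling would come from trial sieving, as noted above.

Code:
import math

def smooth_prob(q, lp_bound, norm_bits_at_qmin, q_min):
    """Crude relative probability of a smooth norm at special-q q,
    assuming the typical norm grows roughly linearly with q."""
    norm_bits = norm_bits_at_qmin + math.log2(q / q_min)
    u = norm_bits / math.log2(lp_bound)   # log(norm) / log(smoothness bound)
    return u ** (-u)                      # Dickman-rho style estimate

def region_size(q, q_min=20_000_000, base_area=2**30,
                lp_bound=2**32, norm_bits_at_qmin=180):
    """Sieve area for special-q q, proportional to the estimated yield,
    normalized so that q = q_min gets the full base_area."""
    scale = (smooth_prob(q, lp_bound, norm_bits_at_qmin, q_min)
             / smooth_prob(q_min, lp_bound, norm_bits_at_qmin, q_min))
    return int(base_area * scale)

for q in (20_000_000, 50_000_000, 100_000_000, 200_000_000):
    print(q, region_size(q))   # the region shrinks as q (and the norms) grow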
Old 2013-03-26, 13:20   #54
ryanp
 
Jun 2012
Boulder, CO

2×7×13 Posts

Quote:
Originally Posted by Batalov
Lowering density may help, but sieving some more will help more.
A 47M matrix shows that you are at the "cusp of convergence".
FYI, even lowering to -D 100 produces the same result:

Code:
Tue Mar 26 02:08:53 2013  commencing linear algebra
Tue Mar 26 02:09:11 2013  read 50682177 cycles
Tue Mar 26 02:10:51 2013  cycles contain 153108686 unique relations
Tue Mar 26 02:54:57 2013  read 153108686 relations
Tue Mar 26 02:59:39 2013  using 20 quadratic characters above 4294917296
Tue Mar 26 03:11:41 2013  building initial matrix
Tue Mar 26 03:48:02 2013  memory use: 20613.6 MB
Tue Mar 26 03:53:34 2013  read 50682177 cycles
Tue Mar 26 03:53:45 2013  matrix is 50690200 x 50682177 (20516.5 MB) with weight 5915593165 (116.72/col)
Tue Mar 26 03:53:45 2013  sparse part has weight 4820773930 (95.12/col)
Tue Mar 26 04:06:31 2013  filtering completed in 2 passes
Tue Mar 26 04:06:46 2013  matrix is 50678722 x 50673715 (20514.6 MB) with weight 5915011768 (116.73/col)
Tue Mar 26 04:06:46 2013  sparse part has weight 4820374768 (95.13/col)
Tue Mar 26 04:17:50 2013  matrix starts at (0, 0)
Tue Mar 26 04:18:02 2013  matrix is 50678722 x 50673715 (20514.6 MB) with weight 5915011768 (116.73/col)
Tue Mar 26 04:18:02 2013  sparse part has weight 4820374768 (95.13/col)
Tue Mar 26 04:18:02 2013  matrix needs more columns than rows; try adding 2-3% more relations
I guess I'll try sieving more. But I'm worried that I've hit a bug somewhere...
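For what it's worth, the column shortfall (rows minus columns) of the final matrix is nearly identical in the two runs, which fits the "same result" above:

Code:
# final-matrix column shortfall (rows - columns) in both LA attempts
d130 = 47703563 - 47698304   # target density 130
d100 = 50678722 - 50673715   # target density 100
print(d130, d100)            # 5259 vs 5007: essentially the same deficit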
Old 2013-03-26, 16:19   #55
Batalov
 
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2

9,127 Posts

Jason may be on to something with his post. The filtering log looks as if it should converge much better than it did; the initial excess looked quite nice...

This is a degree 7 project. Could degree 7 throw off something in the filtering process?