mersenneforum.org  

2008-11-18, 19:49   #1
bdodson

Filtering on large NFS jobs, particularly 2^908+1

Quote:
Originally Posted by fivemack
Code:
Sun Oct  5 11:34:14 2008  Msieve v. 1.38
Sun Oct  5 11:34:14 2008  random seeds: de557d1b 1764b796
Sun Oct  5 11:34:14 2008  factoring 856747567165...2189199 (180 digits)
  ...
Sun Oct  5 13:42:48 2008  memory use: 5917.1 MB
Sun Oct  5 13:43:02 2008  read 16983811 cycles
Sun Oct  5 13:44:30 2008  matrix is 16981887 x 16983811 (5080.2 MB) with weight 1647336740 (96.99/col)
So the extra sieving has given us a nice reasonable-sized matrix, which I expect to be finished by the end of November. The filtering log is below, because uploading it as an attachment didn't seem to work.
I'm puzzled at what happens when the filter switches from the initial filtering bound on large primes (set for memory use) to 720K (for the last part). Here that's:


Code:
  ...
Fri Oct  3 22:24:35 2008  found 80406920 duplicates and 301728045 unique relations
Fri Oct  3 22:24:35 2008  memory use: 2195.0 MB
Fri Oct  3 22:24:36 2008  reading rational ideals above 339804160
Fri Oct  3 22:24:36 2008  reading algebraic ideals above 339804160
  ... 
Fri Oct  3 23:11:31 2008  301728045 relations and about 108858095 large ideals  
   ...
Sat Oct  4 18:20:06 2008  removing 1161225 relations and 761225 ideals in 400000 cliques
Sat Oct  4 18:20:07 2008  commencing in-memory singleton removal
Sat Oct  4 18:20:12 2008  begin with 50364234 relations and 31834525 unique ideals
Sat Oct  4 18:20:41 2008  reduce to 50341956 relations and 31050679 ideals in 5 passes
Sat Oct  4 18:20:41 2008  max relations containing the same ideal: 17
Sat Oct  4 18:21:10 2008  removing 869078 relations and 589859 ideals in 279219 cliques
   ...
Sat Oct  4 18:21:46 2008  reduce to 49463363 relations and 30451234 ideals in 5 passes
Sat Oct  4 18:21:46 2008  max relations containing the same ideal: 17
Sat Oct  4 18:21:51 2008  dataset too sparse, retrying
Sat Oct  4 18:21:52 2008  reading rational ideals above 720000
Sat Oct  4 18:21:52 2008  reading algebraic ideals above 720000
Sat Oct  4 18:21:52 2008  commencing singleton removal, final pass
Sat Oct  4 22:59:33 2008  keeping 145552576 ideals with weight <= 25, new excess is 12962109
Sat Oct  4 23:03:59 2008  memory use: 5161.8 MB
Sat Oct  4 23:04:07 2008  commencing in-memory singleton removal
Sat Oct  4 23:04:42 2008  begin with 209403509 relations and 145552576 unique ideals
...
Uhm; well. More than one thing to puzzle about; but for the moment, the first filter bound reports removing enough relations/ideals to drop the number of relns from 301M to 50M. I can see that dropping the filter bound will raise the number of ideals; so there were 30.4M above 339804160, and then 145552576 ideals above 720K. But I'm not clear on why the number of relns didn't stay at 50M, and instead jumped back up to 209M. That sounds like c. 250M relns removed, then some 160M relns put back.

Our current number is 2,908+ C268, on which we're sieving with 32-bit large primes on both the algebraic and the rational sides. We got a 25.729M^2 matrix with 324.779M nonduplicate relns, and I've been waiting since the 5^421- factorization report for an update. The number of nondup relations went up to 414.150M, and the filter got past 720K at 26 hours (which is when my question above came up). It's now another 20 hours since then, and the max relations-per-ideal count has just dropped from 20 to 19. Meanwhile, another 100M range of q's has finished, and I'm hoping that one more will get us past Greg's target of 450M nondup. -Bruce

(Sieving q ranges: 30M-500M first; added 500M-800M; with 800M-900M running during duplicate removal and filtering of the new relns.)

Last fiddled with by bdodson on 2008-11-18 at 19:56 Reason: that's 100M ...
2008-11-18, 21:36   #2
jasonp

Quote:
Originally Posted by bdodson
Uhm; well. More than one thing to puzzle about; but for the moment, the first filter bound reports removing enough relations/ideals to drop the number of relns from 301M to 50M. I can see that dropping the filter bound will raise the number of ideals; so there were 30.4M above 339804160, and then 145552576 ideals above 720K. But I'm not clear on why the number of relns didn't stay at 50M, and instead jumped back up to 209M. That sounds like c. 250M relns removed, then some 160M relns put back.
If a relation was pruned in the singleton removal phase then it is pruned in all subsequent passes. However, relations pruned during clique processing are not permanently discarded, and the clique processing starts from scratch whenever the filtering bound changes. That's because changing the bound is accompanied by a tolerance for a larger ideal weight, which changes the distribution of cliques, which means that cliques that are now good could have been previously deleted.

The goal here is to get a dataset that has the correct amount of excess and for which the maximum ideal weight is about 20, or slightly more; if the max weight is much less, then you are probably hiding usable relation sets from the merge phase (which can do spanning-tree-based merges up to weight 20), and the initial parameters should have been set up differently. The clique processing restarts when the max ideal weight is less than 18.
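
To make the distinction concrete, here is a minimal toy sketch in Python (not msieve's actual code): relations are modeled as sets of ideal ids, the singleton pass is repeated whenever the bound drops, and anything a clique pass had removed would simply reappear, because clique processing restarts from the singleton-pruned set.
Code:
from collections import Counter

def singleton_prune(relations, bound):
    """Repeatedly drop relations containing an ideal above `bound` that
    appears in no other surviving relation (a simplified singleton pass)."""
    rels = list(relations)
    while True:
        counts = Counter(i for r in rels for i in r if i > bound)
        kept = [r for r in rels if all(counts[i] > 1 for i in r if i > bound)]
        if len(kept) == len(rels):
            return kept
        rels = kept

# Relations as sets of ideal ids, purely for illustration.
dataset = [{5, 40}, {40, 90}, {90, 120}, {120, 5}, {60, 5}, {200}, {150, 151}]

# First filtering bound: only ideals above 100 are tracked; {200} and
# {150, 151} are singletons and are discarded permanently.
pass1 = singleton_prune(dataset, bound=100)

# Lowering the bound tracks more ideals, so the singleton pass is redone on
# pass1 (singleton deletions stick) and {60, 5} now falls out as well.  Clique
# deletions made at the old bound are not carried over: clique processing
# restarts from this set, which is why the relation count in the log jumps
# back up when the bound changes.
pass2 = singleton_prune(pass1, bound=30)
print(len(dataset), len(pass1), len(pass2))   # 7 5 4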

Quote:
Our current number is 2,908+ C268, on which we're sieving with 32-bit large primes on both the algebraic and the rational sides. We got a 25.729M^2 matrix with 324.779M nonduplicate relns, and I've been waiting since the 5^421- factorization report for an update. The number of nondup relations went up to 414.150M, and the filter got past 720K at 26 hours (which is when my question above came up). It's now another 20 hours since then, and the max relations-per-ideal count has just dropped from 20 to 19. Meanwhile, another 100M range of q's has finished, and I'm hoping that one more will get us past Greg's target of 450M nondup. -Bruce
Wow, that's a big job. Are you waiting on an update of some sort from me? It sounds like everything is working fine (if slow) with this run.
2008-11-18, 23:34   #3
fivemack

Bruce: would you mind elaborating a little on sieving yields for 2,908+?

I decipher your paragraph as

AR 30M-500M got 324.779M nonduplicate and a 25.729M^2 matrix
AR 30M-800M got 414.150M nonduplicate, matrix size not yet known
AR 30M-900M is expected to not quite reach 450M nonduplicate

which suggests either an incredibly low raw rate of relations (200k relations on each side in a million-Q range?) or an absolutely cataclysmic rate of duplication; what's your factor-base size, and what sort of raw yield of relations per million-Q range are you seeing on each side? I'm wondering if you're past the crossover point for gnfs-lasieve4I16e.
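
Spelling out the arithmetic behind that suspicion (a quick sketch using only the totals deciphered above):
Code:
# Implied unique-relation yield per million-q from the figures above.
uniq_to_500, uniq_to_800 = 324.779e6, 414.150e6   # nondup totals after 30M-500M and 30M-800M
mq_first, mq_extra = 500 - 30, 800 - 500          # million-q sieved in each step

print(f"30M-500M:  {uniq_to_500 / mq_first / 1e6:.2f}M unique per million-q")
print(f"500M-800M: {(uniq_to_800 - uniq_to_500) / mq_extra / 1e6:.2f}M unique per million-q (marginal)")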

I'm generally unhappy with yields below 1M relations per side in a 1MQ range: the furthest-up range that I sieved for 5,421- was 339.5M-340M with gnfs-lasieve4I15e

total yield: 541558, q=340000019 (0.68690 sec/rel)

and 3221,73- R 80M-81M is
total yield: 1532391, q=81000001 (0.43947 sec/rel)

with an already rather worrying duplication proportion of 11.1% among the first 95 million relations.

Whilst 2^908+1 S274 is a very big number, I'm surprised that it's big enough for the yields to have dropped back by almost another order of magnitude from an S253; does 4x^6+1 have truly dismal root properties?
2008-11-19, 02:08   #4
bdodson

Quote:
Originally Posted by jasonp
If a relation was pruned in the singleton removal phase then it is pruned in all subsequent passes. However, relations pruned during clique processing are not permanently discarded, and the clique processing starts from scratch whenever the filtering bound changes.
So before the first round of clique deletes, we have (in Tom's filter log):
Quote:
Sat Oct 4 06:19:04 2008  reduce to 209403509 relations and 121973435 ideals
And then, at the end of the first round of clique deletes, we have
Quote:
Sat Oct 4 18:21:17 2008 begin with 49472878 relations and 31050679 unique ideals
Sat Oct 4 18:21:46 2008 reduce to 49463363 relations and 30451234 ideals in 5 passes
Sat Oct 4 18:21:46 2008 max relations containing the same ideal: 17
Sat Oct 4 18:21:51 2008 dataset too sparse, retrying
Sat Oct 4 18:21:52 2008 reading rational ideals above 720000
Sat Oct 4 18:21:52 2008 reading algebraic ideals above 720000
Sat Oct 4 18:21:52 2008 commencing singleton removal, final pass
Sat Oct 4 22:59:33 2008 keeping 145552576 ideals with weight <= 25, new excess is 12962109
Sat Oct 4 23:03:59 2008 memory use: 5161.8 MB
Sat Oct 4 23:04:07 2008 commencing in-memory singleton removal
Sat Oct 4 23:04:42 2008 begin with 209403509 relations and 145552576 unique ideals
That's 6am to 11pm, 17 hours of computing later, and the same data set.
Perhaps this is a diagnostic step, to see that there are enough relns,
but for these large datasets it seems a bit expensive. Or perhaps I
misread your reply?

Quote:
That's because changing the bound is accompanied by a tolerance for a larger ideal weight, which changes the distribution of cliques, which means that cliques that are now good could have been previously deleted.

The goal here is to get a dataset that has the correct amount of excess and for which the maximum ideal weight is about 20, or slightly more; if the max weight is much less, then you are probably hiding usable relation sets from the merge phase (which can do spanning-tree-based merges up to weight 20), and the initial parameters should have been set up differently. The clique processing restarts when the max ideal weight is less than 18.

Wow, that's a big job. Are you waiting on an update of some sort from me? It sounds like everything is working fine (if slow) with this run.
Msieve came out fine,
Code:
Tue Nov 18 15:44:56 2008  found 25001658 cycles, need 22508330
Tue Nov 18 15:45:24 2008  weight of 22508330 cycles is about 1575923426 (70.02/cycle)
The matrix has dropped from 25.729M^2 to 22.508M^2. That's less progress than I was hoping for; looks like I'll be sieving further than I expected/hoped.

On the duplicates, I got
Code:
p908m300-m400.err:Found 72271509 unique, 2875993 duplicate, and 0 
p908m400-m500.err:Found 65544866 unique, 1000514 duplicate, and 0 
p908m500-m600.err:Found 60542443 unique, 1572584 duplicate, and 0 
p908m600-m700.err:Found 56633905 unique, 1259063 duplicate, and 0 
p908m700-m800.err:Found 53454220 unique, 1038601 duplicate, and 0
for 161697992 "raw unique" from 500M-800M. So the increase in uniques, 414.150-324.779 = 89.371M, did take quite a hit from duplicates. This is with rlim = alim: 90000000. -Bruce
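
For reference, summing the per-range counts quoted above (a quick sketch; these are per-100M-range figures, so duplicates between the ranges are only caught by a later pass over the whole 300M range):
Code:
# (unique, duplicate) counts from the .err files quoted above, 500M-800M only.
ranges = {
    "500M-600M": (60542443, 1572584),
    "600M-700M": (56633905, 1259063),
    "700M-800M": (53454220, 1038601),
}
uniq = sum(u for u, d in ranges.values())
dups = sum(d for u, d in ranges.values())
print(uniq, dups, uniq + dups)                       # 170630568 3870248 174500816
print(f"within-range duplicate fraction: {dups / (uniq + dups):.1%}")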
2008-11-19, 02:44   #5
jasonp

Quote:
Originally Posted by bdodson
That's 6am to 11pm, 17 hours of computing later, and the same data set.
Perhaps this is a diagnostic step, to see that there are enough relns,
but for these large datasets it seems a bit expensive. Or perhaps I
misread your reply?
No, you have it correct. In this case the dataset was sparse enough that you could have skipped the second clique removal pass, but there are other factorizations where the max ideal weight was 17 (for example) and the largest relation set after merging also had 17 relations, meaning that you could have gotten away with a smaller matrix if you had a higher max frequency going into the merge phase. Perhaps it would be better to run the merge phase anyway (it only takes a few minutes) and find out if merging could benefit from a rerun of the clique removal.
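
Restated as a toy heuristic (a sketch of the reasoning above, not msieve's logic; the function name is made up for illustration):
Code:
def worth_rerunning_cliques(max_ideal_weight_in, largest_relset_after_merge,
                            merge_weight_cap=20):
    """After a trial merge: if the heaviest ideal going into merge was below
    what merge can exploit (spanning-tree merges up to weight 20) and merge
    actually ran into that ceiling, a rerun of clique removal with a higher
    allowed weight might have yielded a smaller matrix."""
    return (max_ideal_weight_in < merge_weight_cap
            and largest_relset_after_merge >= max_ideal_weight_in)

print(worth_rerunning_cliques(17, 17))   # the example above -> True
print(worth_rerunning_cliques(20, 14))   # already at the cap -> False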
2008-11-19, 17:59   #6
fivemack

Those yields look really alarmingly low to me, to the point that I wonder if you're using lasieve14e: with lasieve15e I get (in a 1k interval at 500M on the algebraic and rational sides respectively)

total yield: 837, q=500001001 (1.95886 sec/rel)

total yield: 763, q=500001001 (2.43182 sec/rel)

for the 90M .. 90M+1k range I tried also using 16e, which is much slower per relation but gets more than twice as many relations per Q:

total yield: 1492, q=90001003 (1.31825 sec/rel) with 15e

total yield: 3163, q=90001003 (1.93834 sec/rel) with 16e

total yield: 1748, q=500001001 (2.59062 sec/rel) for 500M .. 500M+1k using 16e
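
Putting those 90M-range numbers side by side (a quick sketch using only the figures quoted above):
Code:
# 15e vs 16e at q ~= 90M: relations from the same 1k special-q range and cost per relation.
rels_15e, sec_per_rel_15e = 1492, 1.31825
rels_16e, sec_per_rel_16e = 3163, 1.93834

time_15e = rels_15e * sec_per_rel_15e   # CPU seconds to sieve the 1kQ range with 15e
time_16e = rels_16e * sec_per_rel_16e   # ... and with 16e

print(f"15e: {rels_15e} rels in {time_15e:.0f} s")
print(f"16e: {rels_16e} rels in {time_16e:.0f} s")
print(f"16e gives {rels_16e / rels_15e:.2f}x the relations per q")
print(f"16e costs {sec_per_rel_16e / sec_per_rel_15e:.2f}x the time per relation")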

Code:
n: 1523730872006476065948363676129458676149278444031460349472566973967334168144122839831601974268424799167249117433966350288390769855219490595979807225811317497929238722439950754301281289518517474506321086318238010563399750224881258591495613213552981037427691447296080033
c6: 4
c0: 1
Y1: 1
Y0: -2854495385411919762116571938898990272765493248
type: snfs
skew: 1
rlambda: 2.6
alambda: 2.6
alim: 90000000
rlim: 90000000
lpbr: 32
lpba: 32
mfbr: 64
mfba: 64
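
For what it's worth, the polynomial pair can be sanity-checked directly; a minimal Python sketch using the values from the .poly above, which should confirm that both polynomials vanish at m = 2^151 modulo n:
Code:
# n and Y0 pasted verbatim from the .poly file above.
n = 1523730872006476065948363676129458676149278444031460349472566973967334168144122839831601974268424799167249117433966350288390769855219490595979807225811317497929238722439950754301281289518517474506321086318238010563399750224881258591495613213552981037427691447296080033
Y0 = -2854495385411919762116571938898990272765493248
m = -Y0                          # root of the rational poly Y1*x + Y0 (Y1 = 1)
print(m == 2**151)               # expect True: the common root is exactly 2^151
print((4 * m**6 + 1) % n == 0)   # expect True: c6*m^6 + c0 = 2^908 + 1, divisible by n
print(len(str(n)))               # expect 268 -- over the 255-digit siever limit noted below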

Last fiddled with by fivemack on 2008-11-19 at 18:02
2008-11-19, 18:32   #7
Andi47

Quote:
Originally Posted by fivemack
Code:
n: 1523730872006476065948363676129458676149278444031460349472566973967334168144122839831601974268424799167249117433966350288390769855219490595979807225811317497929238722439950754301281289518517474506321086318238010563399750224881258591495613213552981037427691447296080033
c6: 4
c0: 1
Y1: 1
Y0: -2854495385411919762116571938898990272765493248
type: snfs
skew: 1
rlambda: 2.6
alambda: 2.6
alim: 90000000
rlim: 90000000
lpbr: 32
lpba: 32
mfbr: 64
mfba: 64
This poly file gives me a "the polynomials don't have a common root" error.
2008-11-19, 19:49   #8
Batalov

Quote:
Originally Posted by Andi47
This poly file gives me a "the polynomials don't have a common root" error.
You have to patch the siever. The number is longer than 255 digits. (see the same also here)
--Serge

P.S. Sorry, I don't do Windows and cannot answer that. The GGNFS project may later have everything updated, when Chris releases the new version (which was preannounced in the Yahoo group).

Last fiddled with by Batalov on 2008-11-19 at 20:19 Reason: patched binaries
2008-11-19, 20:08   #9
Andi47

Quote:
Originally Posted by Batalov
You have to patch the siever. The number is longer than 255 digits. (see the same also here)
--Serge
Do patched Windows binaries of 14e, 15e and 16e exist?
2008-11-19, 20:50   #10
bdodson

Quote:
Originally Posted by fivemack
Those yields look really alarmingly low to me, to the point that I wonder if you're using lasieve14e: with lasieve15e I get (in a 1k interval at 500M on the algebraic and rational sides respectively)

total yield: 837, q=500001001 (1.95886 sec/rel)

total yield: 763, q=500001001 (2.43182 sec/rel)
We're only sieving on the rational side. I don't have any other
binaries; just 15e. Root properties should be reflected in the
scores(?); we have
Code:
size score = 3.818099e-13, Murphy alpha = 1.946683, combined = 2.189247e-13
My thought is that 90M is way small; but we're doing that on purpose, putting in extra sieving work to try to keep the matrix from getting out of range. Greg's .poly says skew 0.7937, but everything else matches. In fact, I did 30M-100M with alim:=100M, rlim:=110M (for skew < 1); but that hardly made a dent. If we survive P908, next up would be M919, for which it's clear that we'll need something more like 150M. About
Quote:
161697992 "raw unique" from 500M-800M
I re-did this without fudging in some extra relns; the correct count is "Found 163201489 unique, 7429080 duplicate". That was 174500816 = 170630568+3870248 raw from the three 100M ranges, then another 7429080 duplicates across the entire 300M range, so the "raw uniq" count for 500M-800M should have been 163.2M, of which only 89.371M was new uniq in the full 30M-800M dataset.

I'm not panicking here; just settling in to head towards 1500M-or-so.
By contrast, M857 is seriously broken, according to a report from
Batalov with another instance of the "empty col" error, for which it
seems that Jason and he have a patch. It's not a corrupt file. I'm
wondering whether I can get past the difficulty (the heavy cols are
too heavy, so that removing them leaves things too sparse?) with
some additional over-sieving; or wait for an update. I'm not in a
hurry, 2p908 has my attention for the moment. -Bruce

PS - I did some extended checking with the 5p389 sieving, which appears to confirm our previous data that for these snfs's, sieving both the rational and algebraic sides just gives even more duplicates: adding another small algebraic range gives way more raw relns, but not after duplicate removal; it was always better to add a much larger rational range, after allowing for duplicates.
2008-11-19, 22:43   #11
Batalov

Digression: the short "empty col" story

Quote:
Originally Posted by bdodson
By contrast, M857 is seriously broken, according to a report from Batalov with another instance of the "empty col" error, for which it seems that Jason and he have a patch. It's not a corrupt file. I'm wondering whether I can get past the difficulty (the heavy cols are too heavy, so that removing them leaves things too sparse?) with some additional over-sieving; or wait for an update. I'm not in a hurry, 2p908 has my attention for the moment. -Bruce
Just to let the cat out of the bag: a rare* matrix condition was encountered (empty or duplicated columns). A common pre-existing condition is a large positive Murphy alpha (e.g. >1.9). I personally tweaked and simply rebuilt the matrix, many times (with different variations and a post-build/pre-BL ad hoc litmus test of the resulting matrix) and finally got lucky (and my number is now finished). The lucky changes were POST_LANCZOS_ROWS 36 (and since then I have actually frozen it at 42 in my everyday binary) and TARGET_DENSITY 72.0 ...but there's no guarantee, it very well may be stochastic.

M857 may be far too large for such tweaking. (For reference, my lucky filtering run was something like 8th... but in my case they only took a few hours each. For monstrous projects, like Bruce's or Tom's, it's days.)

Jason says he will put a solid (no-luck-involved) fix for this into 1.39. And then M857 could be the first pancake.

Serge
___
*meaning, most likely you will never be affected. Specifically, GNFS projects will definitely not be affected (as per Jasonp).