mersenneforum.org > Factoring Projects > Cunningham Tables
2011-02-11, 23:13   #1
R.D. Silverman

Distributed finishing for 2,1870L

It appears that I will need help to finish sieving 2,1870L. I just
don't have the resources.

I sent the relations that I had to Serge, but a lot more is needed.

I have already sieved special-q all the way to 452 million, and the yield rate
is dropping. I doubt whether my siever can gather enough relations.

Any volunteers?
__________________________

EDIT (S.B): Here are the instructions:

* Save file <<t.poly>>
Code:
# sieve with 16e -r from 90 to 120M, in ranges
# Command line: gnfs-lasieve4I16e -v -r t.poly -f $start -c 1000000
n: 16995692987522455651754339410455320150093771210144273643775083936188200124843949967119977515852759358871709763714726542633958784170913772900370407491298241066753915069723640845561
Y0: -196159429230833773869868419475239575503198607639501078529
Y1: 9903520314283042199192993792
skew: 2.0
c4: 1
c3: -2
c2: -6
c1: 12
c0: -4
type: snfs
lpbr: 31
lpba: 30
mfbr: 62
mfba: 60
rlambda: 2.55
alambda: 2.55
rlim: 120000000
alim:  16777215
* Get a gnfs-lasieve4I16e (preferably a 64-bit linux binary). E.g. this one:
gnfs-lasieve4I16e.zip (but if it won't work on your system, search for other binaries on the forum or build from source)
* Reserve a range here, in chunks of 1M (this will serve as $start in the command-line). Each 1M range will take ~1.5M CPU-seconds on a 3GHz 64-bit CPU and will produce ~200MB of data after compression (400MB plain)
* Run gnfs-lasieve4I16e -v -r t.poly -f $start -c 1000000
(or split into smaller ranges; -f controls the start, -c the length of the range; both are plain numbers, no 'M's or 'e's; see the sketch after this list for one way to split a range across cores)
* The memory requirement is modest: 300-400MB per process
* Concatenate the result files (they will have names t.poly.lasieve-1.<number>-<number>), gzip (or bzip, 7zip, tar cvz, etc.) and post at sendspace or dropbox, or for very large files PM Batalov for a direct sftp login.
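For example, a reserved 1M chunk can be split across the cores of a single machine by giving each process its own sub-range. A minimal sketch (the 4-way split, the starting q of 114000000, and the log/output file names are illustrative only):
Code:
#!/bin/bash
# split one reserved 1M special-q range into 4 chunks and run one siever per core
START=114000000     # beginning of the reserved range (illustrative)
CHUNK=250000        # 4 x 250000 = the full 1M range
for i in 0 1 2 3; do
  F=$((START + i * CHUNK))
  nohup ./gnfs-lasieve4I16e -v -r t.poly -f $F -c $CHUNK > siever_$F.log 2>&1 &
done
wait
# concatenate and compress the per-range result files before posting
cat t.poly.lasieve-1.* > 2_1870L.$START-$((START + 1000000))
gzip 2_1870L.$START-$((START + 1000000))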

Postprocessing will be done by Batalov.

Reservations:
Code:
up to 450M R.D. Silverman (own siever) DONE 75M unique relations (119M raw)
------ free relations 3.657M
90-91M   Batalov DONE 3.8M relns
91-92M   jrk DONE 3.98M relns
92-94M   jyb DONE 7.47M relns
94-100M  bsquared DONE 22.7M relns
100-101M xilman DONE  3.78M relns
101-102M fivemack DONE 3.84M relns
102-103M xilman DONE 3.85M relns
103-104M xilman DONE 3.85M relns
104-110M bsquared DONE 22229942 unique, 175378 dup.
110-112.4M fivemack DONE 9.38M relns
112.4-113M fivemack DONE 2.35M relns
113-114M bsquared DONE  3.87M relns
#this lot should suffice

Last fiddled with by Batalov on 2011-02-22 at 19:22 Reason: this lot should suffice
2011-02-11, 23:32   #2
Batalov

Here is a very brief digest of the existing relation set, which we've discussed; I will repost it here:

* the FB lims are 14.5M / 86M and approximately 30/31-bit LP lims;
* with that parameter set, a matrix can usually be built once there are around 150M unique relations (possibly fewer, at the cost of a larger matrix);
* currently, there are 78,573,143 unique rels (with free rels included)
=====remdups_out.txt=====
Found 78132102 unique, 44165546 duplicate, and 0 bad relations. (~122M raw relations)
* filtering is currently at this point:
Fri Feb 11 05:20:05 2011 reading all ideals from disk
Fri Feb 11 05:20:34 2011 memory use: 3042.4 MB
Fri Feb 11 05:21:02 2011 keeping 103773081 ideals with weight <= 200, target excess is 418036
Fri Feb 11 05:21:30 2011 commencing in-memory singleton removal
Fri Feb 11 05:21:53 2011 begin with 78573143 relations and 103773081 unique ideals
Fri Feb 11 05:22:57 2011 reduce to 33899 relations and 2 ideals in 11 passes
Fri Feb 11 05:22:57 2011 max relations containing the same ideal: 2
* sieving on the other side will not help (this is a quartic); probably 15e re-sieving (or even 16e?) will be needed. I can simulate: remdups with the existing set to estimate the quasi-unique additional yield, and will post later.
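One rough way to estimate the quasi-unique addition from a new batch is to key relations on their a,b pair and count the distinct keys over the old and new sets combined; subtracting the existing unique count then gives the addition. A minimal sketch, assuming gzipped files in the usual a,b:...:... relation format (the file names are illustrative):
Code:
# count distinct a,b keys over the combined sets; '#' lines are comments/headers
zcat existing_rels.gz new_range.gz | grep -v '^#' | cut -d: -f1 | sort -u | wc -l
(sort spills to disk, so this stays within modest memory even for 100M+ relations)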

--Serge
2011-02-11, 23:49   #3
R.D. Silverman

Quote:
Originally Posted by Batalov View Post
<snip>
Some additional data.

I was using a sieve area of 10K x 20K per special q. Results
show that this was too small; the yield per q was too low.


Currently, for q near 450M, I am getting just under 4 relations/q.

Rather than proceed with sieving q > 450 million, it will probably be
better to resieve some of the smaller q.

I will finish sieving all q up to 450M this weekend.
2011-02-12, 00:56   #4
Batalov

For comparison, I found the sibling 2,1870M's logs (courtesy of B.Dodson's significant oversieving; long story short, it was easier to fire and forget than stop at an intermediate point):

Pre-simmed recipe (with experimental use of 16e and 3LP):
Quote:
Please sieve with 16e -r from 60 to 110-120M, expect 165M+ unique rels. After 110M, remdups and if already more than 170-180M unique rels, then stop, else add 110-120M.
Result:
Quote:
Originally Posted by bdodson
On 2M1870, as usual, I didn't get a chance to pause and count, just ran the entire range 60M-120M, towards "expect 165M+ unique". That got me "Found 219,351,522 unique with 19,798,477 duplicates".
TD=100 really crushed this filtering job,
Sun Nov 14 12:37:59 2010 matrix is 6023342 x 6023575 (2242.2 MB) with weight 565777368 (93.93/col)
Sun Nov 14 12:37:59 2010 sparse part has weight 527556294 (87.58/col)
...
Sun Nov 14 12:38:13 2010 memory use: 2790.2 MB
Sun Nov 14 12:38:57 2010 linear algebra at 0.0%, ETA 46h 6m [!!]
/and then it was done around ETA/
16e was an overshoot, but the redundancy was very low as a result. It was a fun experiment.
Possibly 16e could be used again here, for finishing (these are virtually identical projects). I will sim over the weekend (I cannot significantly sieve; 4 Intel + 6 Phenom cores is all I have, but I can sim).

EDIT: not 3LP. Here's what it was:
Code:
# sieve with 16e -r from 60 to 110-120M, expect 165M+ unique rels
n: 1387312376442199554837407296900851895433665230080527991970122352522509034451214731923682531140863318446032709537489490131868927679840546823810213417373743367475664367890147487119660449174892741
Y0: -196159429230833773869868419475239575503198607639501078529
Y1: 9903520314283042199192993792
skew: 2.0
c4: 1
c3: 2
c2: -6
c1: -12
c0: -4
type: snfs
lpbr: 31
lpba: 30
mfbr: 62
mfba: 60
rlambda: 2.55
alambda: 2.55
rlim: 134000000
alim:  33554431

Last fiddled with by Batalov on 2011-02-12 at 01:04 Reason: remembered wrong; 3LP was not helping and was not used
2011-02-12, 00:56   #5
bsquared

I can help. Batalov, will you be coordinating things?
2011-02-12, 01:09   #6
Batalov

Can do. If you are willing to do it all, then you won't need sendspace; I'll open an sftp account for you to upload the results directly to the compute node.
Let me prepare one large workunit and post it here. You would need to be prepared for a few hundred core-days.
2011-02-12, 01:28   #7
xilman

I should be able to help. Please give me fairly clear instructions on what I need to do.


Paul
2011-02-12, 02:32   #8
Batalov

I will run tests, prepare a desired target range (and tentatively time it), and then post here. The setup will be very similar to distributed-project templates from the past, e.g. like this one. In short: one command-line, run many times on as many nodes as you have access to (or qsub'bed), then the results gzipped-or-bzipped-or-7zipped (your choice) and sendspace'd or (let's insert a plug for trolls here) dropbox'd.
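For cluster users, the same command-line maps directly onto an array job. A minimal sketch, assuming a Grid Engine-style scheduler (the -t flag and SGE_TASK_ID variable are SGE conventions; other schedulers use different names), with an illustrative 1M reservation split into 10 tasks:
Code:
#!/bin/bash
# sieve_task.sh -- submit with:  qsub -t 1-10 sieve_task.sh
#$ -cwd
START=114000000       # start of the reserved 1M range (illustrative)
CHUNK=100000          # 10 tasks x 100000 = 1M special-q
F=$((START + (SGE_TASK_ID - 1) * CHUNK))
./gnfs-lasieve4I16e -v -r t.poly -f $F -c $CHUNK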

I will sim now. Give me a few hours.

Instructions posted in Post #1.
Please reserve. Each 1M chunk will take 1.5M CPU seconds (~420 hours) on a 64-bit linux system with a 3GHz CPU, or ~630 hours on a 2GHz CPU, or twice as much on a 32-bit system.

Last fiddled with by Batalov on 2011-02-12 at 04:39
2011-02-12, 04:46   #9
Batalov

Tested (the parameters work well with the existing set; the FB lims are slightly increased, so that we get new relations even in the worst case). Posted.

Will delete reservation messages and record them in post #1.
The estimate is about ~600 core-days (+/- 50%, depending on what CPUs come into play). With an estimated 50 cores participating, let's try to wrap it up in two weeks (so please don't reserve a month's worth of work).
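As a back-of-the-envelope check on these figures: 1.5M CPU-seconds ≈ 417 hours ≈ 17.4 core-days per 1M chunk, and the 90-120M target range is 30 chunks, so roughly 30 × 17.4 ≈ 520 core-days of sieving; at ~50 cores that is 10-12 days of wall-clock time, hence the two-week target.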
2011-02-12, 14:43   #10
R.D. Silverman

Quote:
Originally Posted by Batalov View Post
<snip>
I have a total of ~25 cores, most of them at night only. Would
you like me to keep sieving? (this is why it was taking so long!)
2011-02-13, 05:54   #11
Batalov

Quote:
Originally Posted by R.D. Silverman View Post
I have a total of ~25 cores, most of them at night only. Would
you like me to keep sieving? (this is why it was taking so long!)
Yes, just allow for time in mail.
Thanks.