2007-02-04, 22:10  #122  
Apprentice Crank
Mar 2006
11·41 Posts 
Quote:
Anyway, here's the thread that states some of LLR's limitations: http://www.mersenneforum.org/showthread.php?t=5579


2007-02-05, 07:41  #123  
Oct 2005
Italy
3·113 Posts 
Quote:
Thanks 

2007-02-05, 14:05  #124 
Nov 2006
1010011_{2} Posts 
Use head to generate the first file, then use tail to cut off the part you already exported; use head again on what remains, cut off the exported part again, and so on.
head and tail also accept line offsets, so you don't need to know the total number of lines in the file: head -n N gives the first N lines, and tail -n +M prints everything from line M onward, so tail -n +N+1 gives you all lines except the first N. 
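A minimal sketch of that splitting approach. The file name, chunk size, and candidate data here are made up for illustration; the first line of the sieve file is a header (the Perl script later in this thread also passes it through), so it is copied into every chunk:

```shell
# Build a small fake sieve file: one header line plus 10 "k n" candidate lines.
printf 'FAKE HEADER\n' > sieve.txt
seq 1 10 | awk '{print $1, 333333}' >> sieve.txt

# Chunk 1: header + candidates 1-5.
head -n 1 sieve.txt  > chunk1.txt
tail -n +2 sieve.txt | head -n 5 >> chunk1.txt

# Chunk 2: header + candidates 6-10 (skip the header and the 5 already exported).
head -n 1 sieve.txt  > chunk2.txt
tail -n +7 sieve.txt | head -n 5 >> chunk2.txt
```

The tail -n +7 skips the header line plus the five candidates already written to chunk 1, so no line counting is needed beyond the chunk size itself.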
2007-02-05, 16:05  #125 
Mar 2004
Belgium
1505_{8} Posts 
Thank you!
I will certainly look into this! Another method: sieve a range to ?T, test it, then sieve the next range, and so on. Regards, Cedric 
2007-02-05, 19:31  #126 
Mar 2005
Internet; Ukraine, Kiev
11·37 Posts 
I use a little Perl script... Probably not as efficient as head/tail, but it allows you to think in terms of millions, not just k counts.
Code:
#!/usr/bin/perl
use warnings;
use strict;

our $min = $ARGV[0];
our $max = $ARGV[1];

# Pass the sieve file's header line through unchanged.
$_ = <STDIN>;
print $_;

# Print candidate lines whose leading number falls in [$min, $max];
# stop as soon as we pass $max, since the input is sorted.
while ($_ = <STDIN>) {
    if (/^(\d+) \d+\r?$/) {
        if (($1 >= $min) && ($1 <= $max)) {
            print $_;
        }
        if ($1 > $max) {
            exit;
        }
    }
}
Usage:
cat latest_sieve_file | ./get_llr_file.pl 1 300000000 > 00001e600300e6_333333.txt 
2007-02-05, 19:34  #127 
Oct 2005
Italy
3·113 Posts 
Thanks Gribozavr!
For Windows users, you can download some Unix utilities (among them, cat) here: http://gnuwin.epfl.ch/apps/unxutils/...l/unxutils.exe (just run and install). Of course, to run the Perl script you need the Perl interpreter: http://www.activestate.com/Products/ActivePerl/

Last fiddled with by pacionet on 2007-02-05 at 20:04 
2007-02-12, 18:20  #128 
Oct 2005
Italy
339_{10} Posts 
n=500,000
range: 0-50G
sieve depth: 35T
remaining candidates: 21,401,304
rate: 1 k every 0.3 seconds 
2007-02-12, 18:58  #129  
Mar 2004
3×127 Posts 
Quote:
For every T range you sieve separately, the efficiency is only half as good as it could be. 

2007-02-12, 19:08  #130  
Oct 2005
Italy
3×113 Posts 
Quote:
We have not planned to merge our files. 

2007-02-12, 21:03  #131  
Apprentice Crank
Mar 2006
451_{10} Posts 
Quote:
I've sieved a total of 430T so far, but it's not in order (since the ranges are done on different machines). It's more like:

Machine 1: 1-150T
Machine 2: 400T-550T
Machine 3: 800T-865T
Machine 4: 1000T-1065T

This is because I won't have access to machines 2, 3, and 4 until a month from now. By that time, the status will be:

Machine 1: 1-400T
Machine 2: 400T-800T
Machine 3: 800T-1000T
Machine 4: 1000T-1200T

and I'll merge the files then.

I had to wait until ~30T before my .dat files got small enough to sieve the whole 50G-208G at once, and even then, it almost exceeded my limit of 512 MB RAM.

Last fiddled with by MooooMoo on 2007-02-12 at 21:14 

2007-02-18, 14:35  #132 
Oct 2005
Italy
3×113 Posts 
n=500,000
sieving depth = 47.5T
candidates = 20,986,579
rate = 1 k every 0.5 seconds

Last fiddled with by pacionet on 2007-02-18 at 14:36 