
2007-02-04, 22:10   #122
MooooMoo
Apprentice Crank

Mar 2006

11·41 Posts

Quote:
 Originally Posted by biwema Note: The deeper you sieve, the smaller the chance of finding a twin in a given range becomes. Note: The efficiency of the sieve is proportional to the number of candidates in the array.
Not only that, but using NewPGen's BiTwin option generates higher k's, which increases the .dat file and the RAM requirements.

Anyway, here's the thread that states some of LLR's limitations:

Quote:
 Originally Posted by Kosmaj Larry says that somewhere at 10^48 LLR slows down.

2007-02-05, 07:41   #123
pacionet

Oct 2005
Italy

3·113 Posts

Quote:
 Originally Posted by CedricVonck But how can I "split" this monster file in smaller chunks? Remember I am using Windows.
Probably Gribozavr has a script to do that. I will need such a script too when, in the future, I have to release pre-sieved files. I use Windows too (maybe something written in Java?)

Thanks

2007-02-05, 14:05   #124
Rytis

Nov 2006

1010011₂ Posts

Use head to generate the first file, then use tail to cut that part off the original. Use head again on the remainder, cut off the exported part again, and so on. Tail also accepts offsets from the start of the file, so you can use head with a positive number N and tail starting at line N+1 — that gives you all lines except the first N, so you won't need to know the total number of lines in the file. (I just read what I wrote and it probably isn't clear... but I hope you understand.)
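A minimal shell sketch of the head/tail splitting described above. File names, the chunk size N, and the demo input are all made up for illustration; the one assumption is that the sieve file carries a single header line that every chunk needs to keep.

```shell
# Demo input: one header line plus candidate lines (hypothetical data).
printf 'HEADER\n1\n2\n3\n4\n5\n' > sieve.txt

N=2                                # candidate lines per chunk (tiny, for the demo)
header=$(head -n 1 sieve.txt)
tail -n +2 sieve.txt > body.tmp    # everything after the header
i=1
while [ -s body.tmp ]; do
    # chunk = header + next N lines
    { echo "$header"; head -n "$N" body.tmp; } > "chunk_$i.txt"
    # drop those N lines; no need to know the total line count
    tail -n +"$((N+1))" body.tmp > body.next && mv body.next body.tmp
    i=$((i+1))
done
rm -f body.tmp
```

With the demo input this produces chunk_1.txt through chunk_3.txt, each starting with the header line.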
2007-02-05, 16:05   #125
ValerieVonck

Mar 2004
Belgium

1505₈ Posts

Thank you! I will certainly look into this!

Another method: sieve a range to ?T, test it, sieve the next range, and so on.

Regards,
Cedric
2007-02-05, 19:31   #126
gribozavr

Mar 2005
Internet; Ukraine, Kiev

11·37 Posts

I use a little Perl script... Probably not as efficient as head/tail, but it allows you to think in terms of k-value millions, not just line counts.

Code:
#!/usr/bin/perl
use warnings;
use strict;

our $min = $ARGV[0];
our $max = $ARGV[1];

# Pass the header line through unchanged.
$_ = <STDIN>;
print $_;

# Each candidate line is "k n"; keep those with min <= k <= max.
while ($_ = <STDIN>) {
    if (/^(\d+) \d+\r?$/) {
        if (($1 >= $min) && ($1 <= $max)) {
            print $_;
        }
        if ($1 > $max) {
            exit;    # input is sorted by k, so we can stop early
        }
    }
}

I use it like this:

cat latest_sieve_file | ./get_llr_file.pl 1 300000000 > 00001e6-00300e6_333333.txt
2007-02-05, 19:34   #127
pacionet

Oct 2005
Italy

3·113 Posts

Thanks, Gribozavr!

For Windows users, you can download some Unix utilities (among them, cat) here:
http://gnuwin.epfl.ch/apps/unxutils/...l/unxutils.exe
Just run and install.

Of course, to run the Perl script you need the Perl interpreter:
http://www.activestate.com/Products/ActivePerl/

Last fiddled with by pacionet on 2007-02-05 at 20:04
2007-02-12, 18:20   #128
pacionet

Oct 2005
Italy

339₁₀ Posts

n = 500,000
range: 0-50G
sieve depth: 35T
remaining candidates: 21,401,304
rate: 1 k every 0.3 seconds
2007-02-12, 18:58   #129
biwema

Mar 2004

3×127 Posts

Quote:
 Originally Posted by pacionet n=500,000 range: 0-50G sieve depth: 35T remaining candidates: 21,401,304 rate: 1 k every 0.3 seconds
I recommend merging your ranges as soon as possible. Already at 1T, the whole 250G range is small enough to sieve as a whole.
For every T you sieve separately, the efficiency is only half as good as it could be.

2007-02-12, 19:08   #130
pacionet

Oct 2005
Italy

3×113 Posts

Quote:
 Originally Posted by biwema I recommend merging your ranges as soon as possible. Already at 1T, the whole 250G range is small enough to sieve as a whole. For every T you sieve separately, the efficiency is only half as good as it could be.
At the moment I am sieving 0-50G and MooooMoo is sieving 50G-207G.
We have not planned to merge our files.

2007-02-12, 21:03   #131
MooooMoo
Apprentice Crank

Mar 2006

451₁₀ Posts

Quote:
 Originally Posted by pacionet At the moment I am sieving 0-50G and MooooMoo is sieving 50G-207G.
Actually, it's 50G-208G (50G - 207,999,999,999)

I've sieved a total of 430T so far, but it's not in order (since they are done on different machines). It's more like:

Machine 1: 1-150T
Machine 2: 400T-550T
Machine 3: 800T-865T
Machine 4: 1000T-1065T

This is because I won't have access to machines 2, 3, and 4 until a month from now. By that time, the status will be:

Machine 1: 1-400T
Machine 2: 400T-800T
Machine 3: 800T-1000T
Machine 4: 1000T-1200T

and I'll merge the files then.

I had to wait until ~30T before my .dat files got small enough to sieve the whole 50G-208G at once, and even then, it almost exceeded my limit of 512 MB RAM.

Last fiddled with by MooooMoo on 2007-02-12 at 21:14

2007-02-18, 14:35   #132
pacionet

Oct 2005
Italy

3×113 Posts

n = 500,000
sieving depth: 47.5T
candidates: 20,986,579
rate: 1 k every 0.5 seconds

Last fiddled with by pacionet on 2007-02-18 at 14:36

