mersenneforum.org Thanks PG for Sieving, now time to pick a new n?

 2009-06-19, 05:16 #1 cipher     Feb 2007 211 Posts Thanks PG for Sieving, now time to pick a new n? Thirty days ago, sieving for TPS n=333333 seemed doomed, or even abandoned. People were slowly losing interest, and then jmblazek came marching in with PG and Puzzle-Peter and his arsenal of cores and revived the sieving. Hopefully in 3 months we will have finished all the LLR tests and found a new monster twin prime. It's time to look to the future and pick our next n, but how do we decide which n should be our next candidate? A few things to keep in mind when picking it: 1) n should be large enough that any primes found will make the Top 5000 list. 2) n shouldn't be so large that we need to LLR 1000G to have a decent probability of finding a twin. 3) We could also continue n=333333 from 100G until we find a prime, but that is unlikely to work: single primes found at this n will not make the Top 5000 list, and contributors will lose interest. Please recommend a next n, or suggest how to pick one, with reasons supporting the choice. I suggest n=999999: LLR time is approx 1 hour, and any prime found at this n would enter the Top 5000 list in the 300-5000 range, assuring it will stay there for the next 10 years. Can someone calculate how many G's we need to sieve to have a 90% chance of finding a twin, and how many primes (not twins) we can expect in the 0-100G range? thanks cipher
2009-06-19, 06:36   #2
MooooMoo
Apprentice Crank

Mar 2006

2·227 Posts

Quote:
 Originally Posted by cipher I suggest n=999999: LLR time is approx 1 hour, and any prime found at this n would enter the Top 5000 list in the 300-5000 range, assuring it will stay there for the next 10 years. Can someone calculate how many G's we need to sieve to have a 90% chance of finding a twin, and how many primes (not twins) we can expect in the 0-100G range?
This was discussed a bit in a post a few years ago:

http://www.mersenneforum.org/showpos...&postcount=146

For n=999999, you need to sieve 827G to have a 90% chance of finding a twin, and the optimal sieve depth will be over 1000P. There will be about 2500-3000 primes in a 0-100G range at n=999999.
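[Editor's note: figures like these follow from the standard Hardy-Littlewood twin-prime heuristic. The sketch below is my own reconstruction, not necessarily the exact method used for the numbers above; it assumes that for a random odd k, both k·2^n−1 and k·2^n+1 are prime with probability about 4·C2/ln(N)², where N ≈ k·2^n and C2 is the twin prime constant.]

```python
# Rough Hardy-Littlewood sketch of the twin-prime estimate for k*2^n +/- 1.
# Assumption: each odd k yields a twin pair with probability ~ 4*C2/ln(N)^2.
import math

C2 = 0.6601618158  # twin prime constant

def expected_twins(k_max, n):
    """Expected number of twin pairs k*2^n +/- 1 with odd k <= k_max."""
    lnN = n * math.log(2)       # ln(k*2^n) is dominated by n*ln(2) for these n
    per_k = 4 * C2 / lnN**2     # heuristic twin probability per odd k
    return (k_max / 2) * per_k  # about k_max/2 odd values of k in the range

def chance_of_twin(k_max, n):
    """Poisson approximation: P(at least one twin in 0 < k <= k_max)."""
    return 1 - math.exp(-expected_twins(k_max, n))

print(chance_of_twin(827e9, 999999))  # ~0.90, consistent with the 827G figure
```

Running this for k up to 827G at n=999999 gives an expectation of about 2.3 twins, i.e. roughly a 90% chance of at least one, which agrees with the sieve-range estimate quoted above.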

2009-06-19, 10:13   #3
Puzzle-Peter

Jun 2009

2³·5·17 Posts

Quote:
 Originally Posted by MooooMoo This was discussed a bit in a post a few years ago: http://www.mersenneforum.org/showpos...&postcount=146 For n=999999, you need to sieve 827G to have a 90% chance of finding a twin, and the optimal sieve depth will be over 1000P. There will be about 2500-3000 primes in a 0-100G range at n=999999.
How do you calculate these numbers? There must be some formulas behind this, but I have no knowledge of them.

Thanks,
Peter

 2009-06-20, 15:56 #4 mdettweiler A Sunny Moo     Aug 2007 USA (GMT-5) 3×2,083 Posts A while back there was a lot of discussion in this thread about doing a fixed *range* of n over a smaller range of k. Somewhere in there an analysis was linked to that mathematically proved this to be more efficient than searching by fixed n, variable k. Any thoughts on utilizing such an approach instead of just picking another n? Last fiddled with by mdettweiler on 2009-06-20 at 15:57
 2009-06-21, 07:54 #5 cipher     Feb 2007 211 Posts I am all for efficiency, but what utilities would we need? I'm guessing we would have to write a new program (which is very time-consuming). Can we use existing tools like NewPGen, sr2sieve, tpsieve, etc. for this?
2009-06-21, 17:17   #6
mdettweiler
A Sunny Moo

Aug 2007
USA (GMT-5)

3·2,083 Posts

Quote:
 Originally Posted by cipher I am all for efficiency, but what utilities would we need? I'm guessing we would have to write a new program (which is very time-consuming). Can we use existing tools like NewPGen, sr2sieve, tpsieve, etc. for this?
As was discussed in the thread I linked to earlier, the current de facto way of sieving such a range is to use NewPGen's fixed-n mode with the increment counter turned on. Possibly tpsieve or sr2sieve could be used in a similar way with the help of a batch file/shell script. (We'd want to try all three sieving programs to see which one is fastest, since all of them could theoretically work for this type of range.)

2009-06-22, 02:29   #7
geoff

Mar 2003
New Zealand

485₁₆ Posts

Quote:
 Originally Posted by cipher I am all for efficiency, but what utilities would we need? I'm guessing we would have to write a new program (which is very time-consuming). Can we use existing tools like NewPGen, sr2sieve, tpsieve, etc. for this?
tpsieve can sieve a range of n, that is a range of k,n pairs with k odd and k0 <= k <= k1 and n0 <= n <= n1.

However, you need to start the sieve with NewPGen, sieving each n separately until p > k1 at least, and then merge the resulting files for input to tpsieve (just delete the header from all except the first file and then concatenate the results).

Edit: For testing purposes you can sieve without an existing sieve file, just start tpsieve with all of the -k -K -n -N -p -P switches. This will find all factors in the range without checking to see whether they have already been removed from the (non-existing) sieve file.

Last fiddled with by geoff on 2009-06-22 at 02:41
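[Editor's note: geoff's merge step (keep the first file's header, strip it from the rest, concatenate) can be automated with a few lines of Python. This is my own illustration, not an official TPS tool, and the file names in the usage comment are placeholders.]

```python
# Merge NewPGen output files for input to tpsieve: keep the header line of
# the first file, drop the header from every subsequent file, concatenate.

def merge_sieve_files(inputs, output):
    with open(output, "w") as out:
        for i, path in enumerate(inputs):
            with open(path) as f:
                lines = f.readlines()
            if i > 0:
                lines = lines[1:]  # drop this file's NewPGen header line
            out.writelines(lines)

# Usage (placeholder names):
# merge_sieve_files(["t333333.txt", "t333334.txt"], "merged.txt")
```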

 2009-06-22, 10:18 #8 biwema     Mar 2004 3×127 Posts The advantage of using many n is the smaller FFT size, provided k is not too large. The influence is proportional to log(k), so the difference between k=3 and k=9 is about the same as between k=10^6 and k=10^12. That means we need a large number of n values before k gets meaningfully smaller. For example, if we use only 10 values of n, we sieve 100G of k each instead of 1T; that improvement makes little difference. And the larger the number of n values gets, the longer the sieving takes, which reduces the advantage. I think some research is needed to find out how big k can get (for a given n) before an FFT size change occurs (on the most common CPU architectures). If we use, for example, 100 values of n that are close together, there is still the possibility of sieving up to 10-20 of them at once: at the beginning, sieve each n separately up to (for example) p=1 billion. At that level the number of remaining candidates is small enough that hashtables can be used instead of arrays. Then we can group 10-20 values of n together: take the smallest exponent, multiply the candidate k values at exponents N+1, N+2, N+3, ... by 2, 4, 8, ... and merge them. How large a k this allows depends on the limits of the siever.
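[Editor's note: the grouping trick biwema describes rests on the identity k·2^(N+j) ± 1 = (k·2^j)·2^N ± 1, so candidates at nearby exponents can be folded down to a single base exponent by scaling k. A minimal sketch, with illustrative names rather than an actual siever:]

```python
# Fold (k, n) candidate pairs at exponents n >= base_n down to base_n
# by multiplying k by 2^(n - base_n), using k*2^(n) = (k*2^(n-base_n))*2^base_n.

def fold_to_base_exponent(candidates, base_n):
    """Rewrite (k, n) pairs with n >= base_n as (k * 2^(n-base_n), base_n)."""
    return [(k << (n - base_n), base_n) for k, n in candidates]

# The underlying identity, checked directly:
assert 5 * 2**1003 - 1 == (5 * 2**3) * 2**1000 - 1

print(fold_to_base_exponent([(5, 1003), (7, 1001)], 1000))
# -> [(40, 1000), (14, 1000)]
```

Note the cost biwema mentions: the folded k values grow by the factor 2^(n−base_n), so how many exponents can be grouped depends on the largest k the siever accepts.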
 2009-06-27, 06:13 #9 Puzzle-Peter     Jun 2009 1250₈ Posts Any new ideas or insights on this? I'm lacking the knowledge to be useful at this stage, so I'll stick to being curious. Last fiddled with by Puzzle-Peter on 2009-06-27 at 06:15
 2009-07-06, 04:32 #10 gd_barnes     May 2007 Kansas; USA 7×13×113 Posts If you're going to go after a record twin, keep it reasonable. Make it a moderate-sized range of k and n around n=205K-225K. Don't just stick with one n. Keep the k's somewhat low by going after a range of n=100 or 500 or something like that. Testing times are very fast at this level, so sieving efficiency is much less important. Besides, the much longer testing times for large k's likely completely offset any sieving advantage of a single n. Don't let an out-of-date piece of sieving software (NewPGen) dictate your project goals. With the current talk and at the current rate, a single individual could easily come up with a record twin long before a huge DC effort does. IMHO, talk of going after a twin at n=500K, 666666, 666777, or 999999 is nonsense. Making a big splash with a monster twin seems cool, but the risk of not finding one for 5 years or more is too large, and it encourages people to do more manageable searches individually instead. Make it a sure thing and work your way up, and you'll get more participants in the long run. A twin at n=220K or 225K would bring many more participants to future efforts at n=250K, 275K, or 300K. Gary
2009-07-06, 11:01   #11
Flatlander
I quite division it

"Chris"
Feb 2005
England

31×67 Posts

Quote:
 Originally Posted by gd_barnes If you're going to go after a record twin, keep it reasonable. Make it a moderate-sized range of k and n around n=205K-225K. Gary
Sounds like a good idea. The top-5000 entry rate is moving too fast for twin searching, IMHO. BTW, I am still searching below 206k*, so your searching above that would be good.

*Specifically k=2001-993277, 13 n's at a time, using NewPGen with its maximum of 485MB of RAM. (I emailed Paul Jobling to ask whether the max RAM could be increased, but a Windows limitation prevents it.)
IIRC, there is a bit of chat about different methods in the "List of small twins ..." thread. (My method is referred to in post #143.)

Maybe it's faster using tpsieve now anyway.

