2008-06-26, 02:05  #23 
A Sunny Moo
Aug 2007
USA (GMT-5)
3·2,083 Posts 
185T - 190T complete, 15 factors found and submitted.

2008-06-26, 04:54  #24 
Oct 2007
Manchester, UK
2×3×227 Posts 
Excellent, thanks for your CPU time!
I've now updated the sieve file in the first post to one that has been sieved completely to 200 T. This file is 147 candidates lighter than the previous one, which was sieved to 150 T. That's a maximum break in snooker! 
2008-06-26, 05:25  #25 
"Jacob"
Sep 2006
Brussels, Belgium
2·13·67 Posts 
Not completely true: if a clean break starts with a free ball, the total could be higher (up to a maximum break of 155).
Jacob (nitpicking) Last fiddled with by S485122 on 2008-06-26 at 05:26 Reason: removing a word 
2008-07-01, 10:29  #26 
Oct 2007
Manchester, UK
2×3×227 Posts 
Returning 200T - 205T complete, 9 factors found. Taking 215 - 220 T.

2008-07-06, 08:52  #27 
Oct 2007
Manchester, UK
2×3×227 Posts 
Returning 205 - 210 T and 210 - 215 T, 7 and 11 factors found. Taking 220 - 225 T and 225 - 230 T.

2008-07-11, 08:58  #28 
May 2007
Kansas; USA
3^{2}·7·167 Posts 
You might consider removing all of the n-values that are divisible by 3 from your sieve (.dat) file. They would have algebraic factors, since k=27 is a perfect cube.
There are 417 of them in the file currently attached to post 1 here. I can easily do this and send you an updated file if you want. That's more factors than you're likely to get in a P=250T sieve range! Since it's 3% of the remaining 14740 candidates, you'll likely get a slight sieve-speed boost. Every little bit helps at this n-level!

One more thing that I'll mention: I don't want to rain on the parade here, and I realize that you're likely trying to stay below an fftlen increase to reduce testing time per candidate, but with only 14323 candidates remaining (after removing n's divisible by 3) on a sieve to P=215T, the chances of finding a prime in the entire file are about 1 in 28. Further sieving will not change that; it will only increase the chance of each individual candidate being prime, but there will be fewer candidates remaining. At P=215T, the chance of each candidate being prime is 1 in 393,733. If you sieve to P=570T (as far as the table goes), it's 1 in 382,434.

Edit: On a sieve to the preferred P=2^52 limit, i.e. P=4.504*10^15, the chances would be 1 in 360,502.

I mention this because consideration could be given to adding additional k's to the effort while keeping them below any fftlen change. It might make things a little more interesting if the chance of a prime in the file were closer to 1 in 2!

Gary

Last fiddled with by gd_barnes on 2008-07-11 at 09:03 
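The algebraic-factor point above can be illustrated concretely. Since 27 = 3^3, whenever n = 3m the candidate 27*2^n - 1 is a difference of cubes and splits algebraically. A minimal Python sketch (my illustration, not code from the thread):

```python
# For k = 27 and n = 3m: 27*2^n - 1 = (3*2^m)^3 - 1 = (x - 1)(x^2 + x + 1)
# with x = 3*2^m, so every such candidate has a known algebraic factor.
def algebraic_factor(m):
    x = 3 * 2**m
    value = 27 * 2**(3 * m) - 1
    f1, f2 = x - 1, x * x + x + 1
    assert f1 * f2 == value  # sanity check of the identity
    return f1, f2

for m in (2, 5, 10):
    f1, f2 = algebraic_factor(m)
    print(f"n={3*m}: 27*2^{3*m}-1 = {f1} * {f2}")
```

This is why those 417 n-values can be discarded without any trial factoring at all.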
2008-07-11, 15:46  #29 
Oct 2007
Manchester, UK
552_{16} Posts 
I know that there are some exponents divisible by three in the file which can be removed; however, I'm graphing the progress of the number of candidates in the sieve against sieve depth, and suddenly removing 400+ would put quite a kink in it. I will remove them at the end of sieving, before testing, though. If anyone is interested, I've attached the graph between sieve depths 1 - 215 T.
I agree that there's not a massive chance of there being a prime within this range; I make the expected number of primes to be around 0.043. However, there will still be around 14,000 candidates to test, and 14,000 tests will take a long time. If there is enough interest to actually get through all of those, it would be worthwhile to sieve a much larger range, but until then it doesn't make sense to. The range required to have an expected number of primes of 0.5 would be all the way up to n = 37,342,012, which would result in an estimated 172,000 candidates after sieving. To have 1 expected prime in the range would require searching to n = 41,976,410, which would leave 365,000 candidates after sieving.

I'm not sure what method you're using to calculate the odds, by the way, but compared to the way the odds are calculated for the Mersenne primes, they seem a little high. For the highest candidate in the range after being sieved to 2^50, I calculate the odds at:

1 in ( log(27)/log(2) + 33,554,422 ) / (2 * 50)
1 in 335,544

And if sieved to 2^52 I get 1 in 322,639.

As far as staying below an fft length, that wasn't why I picked this range. I just picked the upper limit of 2^25 because I have a thing for round numbers; the next jump in fft length comes at 35,314,134. Incidentally, speaking of FFT length, the current Mersenne assignments are a little over 43,000,000, which gives them an FFT length of 2560K. The exponents I'm sieving here all have an FFT length of 2048K and require 28.15 - 29.45% fewer iterations during a test. Overall that means the Mersennes will take about 70% longer to test. I just did a quick and dirty experiment with the latest versions of LLR and Prime95 and found the current Mersennes would merely take 64% longer. Having said that, I didn't run the test for very long and my computer is quite busy at the moment, so the true figure could well be higher.

Therefore, from an individual's point of view, it makes sense to test exponents in this range, because they have about the same odds of being prime, but it's possible to test ~5 of these in the same time as 3 Mersennes. Last fiddled with by lavalamp on 2008-07-11 at 21:43 
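The Mersenne-style estimate quoted above is easy to reproduce. A short Python sketch (my illustration, not lavalamp's actual code) of the 1-in-X formula exactly as written, using the largest exponent in the file:

```python
import math

# Mersenne-style odds estimate as used in the post above:
# 1 in (log2(k) + n) / (2 * b), for a candidate k*2^n - 1
# trial-factored/sieved to depth 2^b. (As noted later in the
# thread, this overstates the odds for non-Mersenne forms.)
def mersenne_style_odds(k, n, sieve_bits):
    return (math.log2(k) + n) / (2 * sieve_bits)

for bits in (50, 52):
    odds = mersenne_style_odds(27, 33554422, bits)
    print(f"sieved to 2^{bits}: 1 in {odds:,.0f}")
```

Running this reproduces the figures in the post: 1 in 335,544 at 2^50 and 1 in 322,639 at 2^52.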
2008-07-12, 00:19  #30 
May 2007
Kansas; USA
10521_{10} Posts 
OK, that makes sense on keeping the n's divisible by 3 and keeping the scope small for now. Thanks for your reasoning.
On the odds, the formula you are using is for Mersenne forms. It would be incorrect for any other form and would show too large a chance of prime, as you found. The reason the chance of prime is higher for Mersenne forms is that their factors can only be of a certain form, which greatly reduces the chance of finding a factor in the future at any given sieve depth. That's why Mersennes are by far the best choice if you're forced to choose between them and another k-value at the same n-level and sieve depth. Without research, I can't remember what the form must be for Mersenne factors, but I'm sure several people could chime in here and state what it is... something like 2kq+1. As for most other k-values, factors can be almost anything, so the odds of prime are much lower at the same n-level and P-level of sieving.

The formulas that I used were from Axn1 in the RPS forum and are included in the attached spreadsheet that I developed for determining the odds of prime for any k, base, and n-range after sieving to a given depth. (Obviously the k makes little difference except in the case of Mersennes, where k=1.) It computes the odds for an entire file as well as the odds of an individual candidate. It even shows the odds of a twin, triplet, and quadruplet; clearly virtually nonexistent at this n-level.

That's a great point about testing just above 10M-digit candidates. You would need far fewer CPU years than GIMPS to test the same number of candidates. Perhaps at some point you can convince some GIMPS crunchers to come here, since their chance at $100K would be increased by spending less time crunching on each candidate. Wouldn't it be fun to get someone with 100 crunchers or more working on this effort?

Edit: I just thought of something: because the chances of prime are lower at the same n-level as Mersennes, even with a sieve to the same P-depth, the chances of a prime here are likely the same or only slightly better than Mersennes at n=43M. 
But as you astutely analyzed, you can LLR them 60+% faster. Hence, once sieved optimally, this is a superior choice. Good luck with the effort! Gary Last fiddled with by gd_barnes on 2008-07-12 at 00:36 
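For reference on the "certain form" of Mersenne factors mentioned above: any factor of 2^q - 1 with q prime must be of the form 2kq + 1 and congruent to ±1 mod 8, which is why so few candidate factors survive and why trial factoring Mersennes is so effective. A small Python check (illustrative only, not from the thread):

```python
# A factor f of a Mersenne number 2^q - 1 (q prime) must satisfy
# f = 2*k*q + 1 for some integer k, and f = +/-1 (mod 8).
def check_factor_form(q, f):
    return (f - 1) % (2 * q) == 0 and f % 8 in (1, 7)

# Example: 2^11 - 1 = 2047 = 23 * 89; both factors fit the form.
for f in (23, 89):
    print(f, check_factor_form(11, f))
```

Sieving a generic k*2^n - 1 form has no such restriction on factors, which is the asymmetry Gary describes.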
2008-07-14, 15:42  #31 
Oct 2007
Manchester, UK
2×3×227 Posts 
OK, I went away and wrote some code that searched through all factors found thus far, removed any multiple factors for the same n value (keeping the lowest), and then reorganised them and wrote them into systematically named files.
What I ended up with is a series of files of factors which lets me put lots of data into an Excel document, so that I can graph the removal of candidates against the log (base 2) of sieve depth. What I determined is that the candidate removal is very similar to that of the Mersennes. In other words, having sieved to 2^39, the number of candidates that will be removed by 2^40 is roughly 1/40. I've attached the Excel file, which contains a lot of numbers and three graphs. Therefore surely the probabilities of any candidate being prime should be calculated in exactly the same way (at least for this k value). The sieve is almost at 2^48 now, so we'll see how the graphs look after that milestone is reached.

In the Excel file that you attached, where does the 1.781 used in calculation 1 come from? I tried and failed to see anything special about that number that would warrant it being there. When calculating the odds for a Mersenne prime that number is a 2, because you only need to trial factor up to half the bit depth of the candidate (square root) to prove primality (not that anyone would, or could, prove primality by that method for the 10M digit candidates). 
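The ~1/40 removal rate observed above is what Mertens' theorem predicts: the fraction of candidates surviving a sieve to depth P scales like 1/ln(P), so doubling the depth from 2^b to 2^(b+1) removes about 1/(b+1) of what remains. A quick Python sketch (my own, not from the thread); note also that 1.781 is plausibly e^gamma (gamma being the Euler-Mascheroni constant), the constant appearing in Mertens' third theorem, though I can't confirm that is what the spreadsheet intends:

```python
import math

# Surviving density after sieving to P is ~ C / ln(P) (Mertens),
# so the fraction removed going from depth 2^b to 2^(b+1) is
# 1 - ln(2^b)/ln(2^(b+1)) = 1 - b/(b+1) = 1/(b+1).
def fraction_removed(b):
    return 1 - math.log(2**b) / math.log(2**(b + 1))

print(f"2^39 -> 2^40 removes {fraction_removed(39):.4f} of candidates")

# Candidate explanation for the 1.781 (an assumption on my part):
print(f"e^gamma = {math.exp(0.5772156649):.4f}")
```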
2008-07-15, 19:26  #32 
Oct 2007
Manchester, UK
2·3·227 Posts 
Returning 215 - 220 T, 220 - 225 T and 225 - 230 T; found 24 factors between them.
Taking 230 - 235 T, 235 - 240 T and 240 - 245 T. 
2008-07-26, 13:26  #33 
Oct 2007
Manchester, UK
2·3·227 Posts 
Returning 230 - 235 T and 235 - 240 T, just 6 and 4 factors found.
It's time for me to transcode some more DVDs, but I'll take out a couple more blocks soon. 