What is the status of the search and another link
Since the Announcement message was locked, I thought I'd start this new post.
LLR can also be found at http://mersenne.org/gimps/llr.zip for those of you who are not Yahoo members and do not want the hassle of signing up to a new group. Paul, it would be a good idea if you gave us a simple explanation of what you are trying to achieve, what the steps are (e.g. what sieving is) and what the current status is. I know you answered several of these questions in your original post, which I moved to the Psearch forum. Will/Mike, can you move that post here? But it would be a good idea to restate all of that, preferably in the announcement thread. Thanks.
Re: What is the status of the search and another link
We are building on work done on primes of the form k*2^n-1 by Wilfrid Keller et al:
http://www.prothsearch.net/riesel2.html

The known values of 'n' that make 3*2^n-1 prime are: 1, 2, 3, 4, 6, 7, 11, 18, 34, 38, 43, 55, 64, 76, 94, 103, 143, 206, 216, 306, 324, 391, 458, 470, 827, 1274, 3276, 4204, 5134, 7559, 12676, 14898, 18123, 18819, 25690, 26459, 41628, 51387, 71783, 80330, 85687, 88171, 97063, 123630, 155930, 164987.

The domain for 'n' in this search is initially 191,600 to 1,000,000. I started sieving with Paul Jobling's NewPGen on the 2nd of this month using a 1 GHz Athlon. Eleven blocks of work (191,600 to 300,000) became available on the 5th. Of those, one was completed in a couple of days using a P4 and LLR. Three of us are currently processing five blocks. This leaves five blocks, and the sieve is already approaching the point where I will release some more for testing.

Unlike Mersenne primes, these numbers do not have to have a prime binary length. They look like this in binary: 101, 1011, 10111, 101111, 10111111, etc.
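The binary pattern and the small known exponents above can be sanity-checked in a few lines of Python. This is a hedged sketch using a generic Miller-Rabin probable-prime test for illustration, not the Lucas-Lehmer-Riesel proof that LLR actually performs:

```python
def is_probable_prime(N, bases=(2, 3, 5, 7)):
    """Miller-Rabin probable-prime test; a quick check, not a proof."""
    if N < 2:
        return False
    d, s = N - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        if a % N == 0:
            continue
        x = pow(a, d, N)
        if x in (1, N - 1):
            continue
        for _ in range(s - 1):
            x = x * x % N
            if x == N - 1:
                break
        else:
            return False
    return True

# A few of the small known exponents from the list above.
for n in [1, 2, 3, 4, 6, 7, 11, 18, 34, 38, 43]:
    N = 3 * 2**n - 1
    # In binary, 3*2^n - 1 is "10" followed by n ones (101, 1011, 10111, ...).
    assert bin(N)[2:] == "10" + "1" * n
    assert is_probable_prime(N)
print("all small known exponents check out")
```

The "10" + n ones shape follows directly from 3*2^n being binary 11 followed by n zeros, so subtracting 1 turns all the trailing zeros into ones.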
I have a 1 GHz Athlon available that will not be connected to the Internet for a while.
Which range(s) may I take to help you? Luigi
Luigi, I have mailed you a block.
We are now five people with over 10 GHz of computing power, and there are three blocks available for anyone to test with 'n' less than 300,000. I am sieving 'n' from 300,000 to 1,000,000. I just checked NewPGen and it is knocking out one candidate every 4:12 minutes. The target is over 5 minutes before the release of blocks from 300,000 to 400,000 for testing. NewPGen's 'p', the current divisor being checked, is 366.2 billion.
Paul,
Regarding sieving, have you looked at the discussions on the Seventeen or Bust forum? They have some very interesting ideas, and the efficiency of the sievers has gone up about 100 times since NewPGen. Paul Jobling, who wrote NewPGen, wrote a specialized version of it called SoBSieve, and it was much faster. Then Phil Carmody wrote NbeGone, which works for non-x86 computers as well. They also had some discussions about the algorithms behind the sieving. [Edit: Corrections to programs attributed to different people.]
[quote]They have some very interesting ideas and the efficiency of the sievers has gone up about 100 times since Newpgen. Phil Carmody who wrote newpgen - I think - wrote a specialized version of newpgen called nbegone and it was much faster. Then Paul Jobling also wrote sobsieve which is even faster.[/quote]
NewPGen was written by Paul Jobling, and many of the optimizations made for SoBSieve are now also in NewPGen. Also, a big part of the 'efficiency' is due to the fact that 12 k values are sieved at the same time, which is faster than sieving 12 k's separately. Paul: if you are planning to search higher than n=1M, it is much more efficient to sieve the complete n range at once, since sieving is proportional to the square root of the range.
When to stop sieving and start using LLR, PRP or PFGW is difficult to establish. There are many things to take into account:
* it is nice to have some non-sieve testing happening on smaller 'n'
* the time taken to test '2*n' is just over four times that for 'n'
* sieving is proportional to the square root of the range
* judging NewPGen is tricky - elimination timing goes up and down
* LLR/PRP/PFGW timings vary, but not exactly proportionally to size
* different computer architectures
* demand

Originally, I was going to sieve in 100,000 'n' blocks. I have decided that the work involved to complete 'n' to 1 million is enough to organise for now. Of course, Paul Jobling's NewPGen is to be credited when submitting primes to the Top 5000 primes. Phil Carmody helped Paul with some maths to get a fixed-k sieve running faster.

After my last message NewPGen dropped to 3:20 minutes, but now reads 5:48 minutes. So by rough calculation the block 300,000-400,000 can be released for testing soon. There are two blocks left from 200,000-300,000 free for testing (email me). How I break up 300,000-400,000 depends on feedback from participants, but 2 GHz-weeks seems about the right size for a block.
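The core trade-off behind the list above can be sketched as a rule of thumb: keep sieving while NewPGen removes candidates faster than an LLR/PRP test could. This is a hedged sketch with illustrative timings, not measurements from the project:

```python
# Rule of thumb: sieving still pays off while removing one candidate by
# sieving is cheaper than eliminating it with a full PRP/LLR test.

def should_keep_sieving(secs_per_factor_found, secs_per_prp_test):
    """True while the sieve removes candidates more cheaply than testing would."""
    return secs_per_factor_found < secs_per_prp_test

# Illustrative numbers: the sieve removes a candidate every 4:12 (252 s),
# and a PRP test on a mid-range candidate takes about 5 minutes (300 s).
print(should_keep_sieving(252, 300))  # -> True: keep sieving
print(should_keep_sieving(348, 300))  # -> False: release the block for testing
```

The other bullet points (architectures, demand, variable timings) explain why the cut-off is a judgment call rather than an exact formula.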
We now number seven. I have just taken 300,000 to 400,000 out of the sieve, leaving 42,423 candidates; it had sieved up to NewPGen's p=553,420,892,818. I timed a PRP test on my 1 GHz Athlon for 3*2^47000-1 at 3,600 seconds, or exactly an hour, meaning that NewPGen's new cut-off point for 'n' between 400,000 and 500,000 is 10-plus minutes. Meanwhile I have started to issue blocks of 5,000 from 300,000 to 400,000; there are nineteen blocks left for anyone to help test. Please email me at paulunderwood@mindless.com to get a block.
[Edit: should be 3*2^470000-1]
[quote]I timed a PrP test on my Athlon 1GHz for 3*2^47000-1 at 3,600 seconds or exactly an hour, meaning that NewPGen's new cut-off point for 'n' between 400,000 and 500,000 is 10-plus minutes.[/quote]
I don't agree on this one; I think you should be sieving much deeper, close to 1 hour per candidate removed, especially while lower tests are still available to assign. Sieving from 400K-1M is only 10% slower than sieving from 500K-1M.
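The 10% figure follows from the square-root rule stated earlier in the thread. A hedged back-of-the-envelope check, assuming sieve cost scales with the square root of the width of the n range:

```python
import math

# Relative sieving cost, assuming cost ~ sqrt(width of the n range).
cost_400k_to_1m = math.sqrt(1_000_000 - 400_000)  # width 600,000
cost_500k_to_1m = math.sqrt(1_000_000 - 500_000)  # width 500,000

overhead = cost_400k_to_1m / cost_500k_to_1m - 1
print(f"{overhead:.1%}")  # about 9.5%, i.e. roughly the 10% quoted
```

So widening the sieve by 100,000 values of n costs under 10% more, while sieving that extra range separately later would cost far more in total.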
:surprised: :oops: Thanks for making this clearer to me. I have now upped the sieving throughput by using NewPGen's service mode: the sieving has been split across three Athlons (1 Thunderbird and 2 XPs). Each is doing a range of 600,000,000,000 (0.6 trillion). I will have to wait for the slowest to finish before merging the files. I guess this is going to take a week, but it takes the maximum divisor from 0.6 trillion to 2.4 trillion. I will then review the rates of candidate elimination.
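A hedged sketch of how that split works out: three equal p-ranges of 0.6 trillion, starting where the single-machine sieve left off. The numbers are taken from the post above; the helper itself is just illustrative bookkeeping:

```python
# Splitting the divisor range across the three Athlons described above.
START = 600_000_000_000  # sieve already done up to p = 0.6 trillion
CHUNK = 600_000_000_000  # each machine takes a 0.6-trillion slice

ranges = [(START + i * CHUNK, START + (i + 1) * CHUNK) for i in range(3)]
for lo, hi in ranges:
    print(f"p from {lo:,} to {hi:,}")
# The last slice ends at 2,400,000,000,000 - the 2.4 trillion quoted.
```

This also shows why the slowest machine gates the merge: the candidate file is only fully sieved to 2.4 trillion once all three slices are complete.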
Meanwhile we have two new members and there are seventeen blocks available for team members to test. 
I want people not to be put off joining the search because of the mistake I made in sieving. I have lost about 120 GHz-hours in a project that is going to take maybe 120,000 GHz-hours to complete. We still need your help. ;) So please write to me (paulunderwood@mindless.com) if you want to join in.
