mersenneforum.org > Other Stuff > Archived Projects > 3*2^n-1 Search

 
 
Old 2003-04-16, 15:11   #12
EDillio
 


Paul, my email is not working too well today for some reason. Everything is running OK so far :?
 
Old 2003-04-17, 20:16   #13
paulunderwood
 
 
Sep 2002
Database er0rr


Thanks for helping out, Ed :D

All blocks for 'n' below 355,000 have been taken. There are blocks available up to 400,000 for anyone to test. Anything higher is in the sieve, which should be up to divisors of 2.4 trillion by the end of the weekend.
Old 2003-04-19, 21:40   #14
Xyzzy
 
 
"Mike"
Aug 2002


I just ran a block on my 1.5GHz Celery... It took around 5 days... It was very well behaved and put a minimal memory hit on my computer...

Paul is very easy to work with... If I had a real computer and not a laptop I would seriously consider running this full time...

I try to run through at least one "block" of whatever we have added to the "other projects" forums just to make sure there aren't any weird issues to deal with...

Anyways, it was fun and educational! Thanks Paul! (YGM!)
Old 2003-04-20, 09:50   #15
paulunderwood
 

Thanks for your help, Mike. :)

I have sieved up to 2.4 trillion -- it was eliminating a candidate on average every 2413 seconds. I will immediately do a further 1.5 trillion on the 400,000-1,000,000 range and then review the sieve timings.

Here are some LLR timings on my 1GHz Athlon:

3*2^400011-1 Time : 2364.714 sec.
3*2^500006-1 Time : 3861.034 sec.
3*2^600023-1 Time : 5595.876 sec.
3*2^700003-1 Time : 11707.696 sec.
3*2^800023-1 Time : 13288.771 sec.
3*2^900000-1 Time : 18232.217 sec.
3*2^999988-1 Time : 23783.626 sec.
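As a sanity check, the growth rate of those timings can be fitted (a rough sketch only; the data points are the seven timings above, and the log-log slope approximates the scaling exponent of the test time in n):

```python
import math

# (n, seconds) pairs from the LLR timings listed above
timings = [
    (400011, 2364.714), (500006, 3861.034), (600023, 5595.876),
    (700003, 11707.696), (800023, 13288.771), (900000, 18232.217),
    (999988, 23783.626),
]

# Least-squares slope of log(time) against log(n) estimates the
# exponent e in time ~ n^e
xs = [math.log(n) for n, _ in timings]
ys = [math.log(t) for _, t in timings]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(f"empirical exponent ~ {slope:.2f}")
```

The fitted exponent comes out between 2 and 3, consistent with multiplication of n-bit numbers making each test somewhat worse than quadratic in n (the jump between 600,023 and 700,003 is likely an FFT size change).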

What do you consider to be the ideal cut-off time for the sieve?
Old 2003-04-20, 12:50   #16
TTn
 

Sieving ideas

I thought it might be nice to start a thread of sieving tips for new users.

I'll start with my amateur techniques.
Start with an estimated time for the project at hand.

A. First I start a NewPGen file (I am not sure of the optimum size of exponent); I like to search a run larger than 3,000.

B. Then, after a short time, stop the sieve and test my particular computer with the largest/last number in the file. (A couple of tests would be nice, but that is not practical with larger exponents.)

C. Record the number of seconds it takes to test with LLR. **Important, as computers vary enormously.** I never sieve longer than the recorded time, and sometimes only to 80% of it, depending on the size of the file.

D. Larger files can be broken up into smaller ones and re-sieved, not exceeding the recorded time (by choosing the option to sieve until the rate of k/n removal is __ seconds). I THINK this helps in theory, since larger files exclude many composites but then become cumbersome because of the bias of the recorded time. In theory one could continue breaking up files from a larger one. We need some kind of derivative equation???

E. Fixed-n searches can be improved by excluding prime k, as they are not as likely to produce a prime.


Please correct me if I am wrong anywhere here!
Please suggest any tips you have picked up along the way.....

Shane F. :) :D :D ;)
 
Old 2003-04-20, 15:40   #17
paulunderwood
 

Quote:
First start with an estimated time for the project at hand,
I'd say 13-14 GHz years :(

Quote:
I like to search a run larger than 3000, and smaller than 100000.
We are going up to 1,000,000 -- it takes some 1,000 times as much time to find a prime that is 10 times longer.
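A back-of-the-envelope sketch of where that factor of ~1,000 comes from, assuming per-test time grows roughly as n^2 and the chance that k*2^n-1 is prime thins out like 1/n (both are modelling assumptions, not project measurements):

```python
# For an exponent 10x larger:
#   each LLR test costs ~10^2 = 100x as much (multiplying n-bit numbers)
#   primes are ~10x rarer, so ~10x as many candidates must be tested
scale = 10
per_test = scale ** 2   # cost ratio of a single test
tests_needed = scale    # ratio of candidates tested per prime found
cost_ratio = per_test * tests_needed
print(cost_ratio)  # 1000
```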

Quote:
Larger files can be broken up into smaller ones, and re-sieved not exceeding recorded length
The help in NewPGen says:

To get the best performance, it is recommended that you use a wide range of n values (at least 3,000), as then NewPGen can use an algorithm that has a speed that is proportional to the square root of the range of n values being sieved (so a range that is 4 times as wide only takes twice as long to sieve).

Quote:
Fixed n searches, can be improved by excluding prime k, as they are not as likely to produce a prime.
I can't agree with you on this unless you give some analysis. Remember you have different prime densities with different 'k' -- sieving and LLR testing times vary.

So my question remains:
Quote:
What do you consider to be the ideal cut-off time for the sieve?
Lastly: Shane, join us ;)
Old 2003-04-22, 11:57   #18
TTn
 


It has been explained to me like this:

"If p divides k, then p does not divide k*2^n-1. This means that N=k*2^n-1 is prime with probability k/phi(k) * 1/ln(N) instead of probability 1/ln(N).

For k<100 with k*2^n-1 prime, this moves the probability that k is prime from 1 in 4 to 1 in 7; for k<1000, the probability of k prime moves from 1 in 6 to 1 in 11; for k<10000, the probability moves from 1 in 8 to 1 in 16."
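The k/phi(k) boost in the quoted explanation is easy to compute. A minimal sketch (the `phi` helper here is my own illustration, not from NewPGen or LLR):

```python
def phi(k):
    """Euler's totient of k, by trial division."""
    result, m, p = k, k, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p  # multiply result by (1 - 1/p)
        p += 1
    if m > 1:                      # leftover prime factor
        result -= result // m
    return result

# A prime p dividing k can never divide k*2^n-1, so the prime probability
# rises from 1/ln(N) to (k/phi(k)) * 1/ln(N).  The boost is largest for
# k with many small prime factors and smallest for prime k.
for k in (3, 15, 105):
    print(k, k / phi(k))  # 1.5, 1.875, 2.1875
```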


I understand that highly composite k especially eliminate those possible factors of (k*2^n-1). But maybe this is a catch-22, depending on the particular k in question?


I would like to know why three was chosen for k in this project.
Shouldn't we look for the shortest distance, to find the most primes per CPU cycle? Surely this is not ? ? ?

I may join with one of my computers for the hell of it though, as I have 15 GHz at my disposal now.

So you see that k=3 is not likely to be practical for such a search.
I am confident, however, that k has a frequent form (a shortest distance, if you will), and it would make for the ultimate collaborative effort to rival that of George's W.I.M.P.S!
 
Old 2003-04-22, 12:58   #19
TTn
 


:D
 
Old 2003-04-22, 15:46   #20
smh
 
 
"Sander"
Oct 2002
52.345322,5.52471


Why break up the file?

It's much more efficient to sieve the whole range at once since sieving time is proportional to the square root of the range.

Sieving a range 4 times longer will cost you only twice the time to sieve (and thus you will be removing more candidates in a given time).
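The effect smh describes can be put in numbers. A toy model (the square-root cost comes from NewPGen's help text quoted earlier; `relative_sieve_time` is a hypothetical, unit-free function, not NewPGen's real algorithm):

```python
import math

def relative_sieve_time(width):
    # NewPGen's help: sieving time grows like the square root
    # of the width of the n range being sieved
    return math.sqrt(width)

one_wide_sieve = relative_sieve_time(4)          # sieve 4 units at once
four_narrow_sieves = 4 * relative_sieve_time(1)  # sieve them separately
print(one_wide_sieve, four_narrow_sieves)  # 2.0 4.0
```

So splitting a file into four pieces roughly doubles the total sieving work for the same depth.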
Old 2003-04-22, 22:17   #21
TTn
 


I see; maybe the file should have started bigger than originally intended. There is a cutoff point somewhere. If you spend ~23,783 sec per eliminated candidate on the entire file, then when the sieve removes a composite at that rate in the 400,011-500,006 range of the file, it cancels some of the efficiency mentioned, since such a candidate should only take ~3,000 seconds max to test -- roughly 10 times faster, rather than the 1/2 ratio.

This is a great question!
:oops:
 
Old 2003-04-23, 00:31   #22
paulunderwood
 

:( Thanks for joining our search, Shane.

There are 17 blocks being tested 8)

5 blocks are available for 'n' below 400,000

At 'p' equals 3.5 trillion, NewPGen is eliminating a candidate every 47 minutes 15 seconds (2,835 seconds).
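For what it's worth, the usual stopping rule -- keep sieving while removing a candidate is cheaper than LLR-testing one -- can be checked against the numbers in this thread (a sketch using Paul's 1 GHz Athlon timings from post #15; a real decision would also weigh where in the file the removals fall):

```python
# Paul's LLR timings (seconds) across the 400,000-1,000,000 file
llr_times = [2364.714, 3861.034, 5595.876, 11707.696,
             13288.771, 18232.217, 23783.626]
avg_llr = sum(llr_times) / len(llr_times)  # average cost of one test

removal_rate = 2835  # seconds per candidate NewPGen removes at p = 3.5e12

# Each removal saves one future LLR test, so sieving pays off while
# removal_rate stays below the average test time
print(f"avg LLR test: {avg_llr:.0f} s, removal: {removal_rate} s")
print("keep sieving" if removal_rate < avg_llr else "start testing")
```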
 
