mersenneforum.org  

Go Back   mersenneforum.org > Other Stuff > Archived Projects > Prime Cullen Prime

 
 
Old 2007-04-21, 10:43   #45
em99010pepe
 
 
Sep 2004

2×5×283 Posts

Quote:
Originally Posted by hhh View Post
Me too. What is your factor/time ratio now? Do you still get a factor per day? How fast would your computers be in LLR?
In other words: how many times sieving is still more efficient than LLR?

I'll make the update this afternoon, I think. Anyway, the speed increase will be less than 5%, I think. H.
26433 sec/factor on my home machine. At work I have access to three 3.0GHz HT P4 machines and one 2.8GHz P4 dual core machine but I never tried LLR on them.

Carlos
Old 2007-04-21, 14:17   #46
hhh
 
 
Jun 2005

373 Posts

That means we can keep sieving for the moment. But we can start LLR as well and take the lower numbers out of the sieve.
Old 2007-04-23, 20:48   #47
em99010pepe
 
 
Sep 2004

B0E₁₆ Posts

What's our goal for the sieve: going up to 10T, 20T...?

Carlos

Last fiddled with by em99010pepe on 2007-04-23 at 20:48
Old 2007-04-23, 23:12   #48
Citrix
 
 
Jun 2003

1,553 Posts

I don't think more than 5T.
2.5T might be enough.

As long as you can find one factor a day on a fast Athlon you should sieve; otherwise start with LLR.

edit: If we remove candidates as they are LLRed, this changes things: the sieve becomes faster, so we might be able to go deeper.
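Citrix's rule of thumb can be made concrete: sieving stays worthwhile while the time per factor is below the time LLR would need to test the candidate that factor removes. A rough sketch using the 26433 s/factor reported earlier in the thread and an assumed LLR test time of about 32500 s (a representative mid-range figure, not a measured one; plug in your own timings):

```python
# Break-even check between sieving and LLR. Each new factor removes one
# candidate that would otherwise need a full LLR test, so sieving pays off
# while seconds-per-factor stays below seconds-per-LLR-test.
sieve_sec_per_factor = 26433   # reported sieve rate (seconds per factor)
llr_sec_per_test = 32500       # assumed LLR test time for a mid-range n

ratio = llr_sec_per_test / sieve_sec_per_factor
keep_sieving = sieve_sec_per_factor < llr_sec_per_test
print(f"LLR/sieve time ratio: {ratio:.2f}")
print("keep sieving" if keep_sieving else "switch to LLR")
```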


Last fiddled with by Citrix on 2007-04-23 at 23:13
Old 2007-04-24, 08:40   #49
hhh
 
 
Jun 2005

373 Posts

We don't even need to LLR candidates to take them out; we can remove them directly as well. That's what I'm going to do with the next import, since we hardly find any factors below 2M anymore, thanks to P-1. As for the rest, I vote for sieving too deep rather than too shallow: too many projects are undersieved, and I'd like to reverse that trend.
But as usual, everybody is free to do as they please. H.
Old 2007-04-24, 11:36   #50
em99010pepe
 
 
Sep 2004

B0E₁₆ Posts

I'm going to finish the current ranges then I am out.

Carlos
Old 2007-04-24, 20:38   #51
hhh
 
 
Jun 2005

101110101₂ Posts

Quote:
Originally Posted by em99010pepe View Post
I'm going to finish the current ranges then I am out.

Carlos
That's a pity. Thank you anyway for the nice boost you gave to this project.
H.
Old 2007-04-26, 04:00   #52
geoff
 
 
Mar 2003
New Zealand

13×89 Posts

On my 2.9GHz P4, LLR with exponent 3,250,000 should take about 32500 seconds (9 hours) at 10.0 ms/bit. LLR with exponent 5,000,000 should take about 82500 sec (23 hours) at 16.5 ms/bit.
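geoff's figures follow a simple linear model: test time ≈ exponent × ms/bit. A minimal sketch reproducing the two estimates (the formula is inferred from the numbers above, not taken from the LLR source):

```python
def llr_seconds(exponent, ms_per_bit):
    """Estimated LLR test time: one iteration per bit, ms_per_bit each."""
    return exponent * ms_per_bit / 1000.0

print(llr_seconds(3_250_000, 10.0) / 3600)  # about 9 hours
print(llr_seconds(5_000_000, 16.5) / 3600)  # about 23 hours
```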

I think sieving up to 2.5T-3T is probably about right, if we are not taking double checking into account. Maybe up to 5T allowing for double checks.
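One way to size the 2.5T-vs-5T question: by a standard sieving heuristic (Mertens' theorem), the survivors of a sieve to depth p thin out roughly like 1/ln(p), so deepening from p1 to p2 removes about a fraction 1 - ln(p1)/ln(p2) of the remaining candidates. The 1/ln(p) model is an assumption, not project data; a sketch:

```python
import math

# Heuristic: survivors of sieving to depth p scale like ~1/ln(p) (Mertens),
# so the fraction of remaining candidates eliminated between depths p1 < p2
# is roughly 1 - ln(p1)/ln(p2).
def extra_factor_fraction(p1, p2):
    return 1.0 - math.log(p1) / math.log(p2)

# Doubling the depth from 2.5T to 5T eliminates only ~2.4% more candidates:
print(f"{extra_factor_fraction(2.5e12, 5e12):.2%}")
```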

If the project doesn't find a prime below exponent 5,000,000 then my guess is that there won't be a lot of interest in double checking, people would be more interested in doing first time tests on higher ranges to find the first prime.

If a prime is found below exponent 5,000,000 then there could be more interest in double checking, to prove that it is the smallest such prime.
Old 2007-04-26, 06:50   #53
Citrix
 
 
Jun 2003

1,553 Posts

Tests at different levels take different amounts of time. I think we are still undersieved for the tests with n=4M to 5M.

The best approach would be to continue sieving and removing candidates as they are LLRed. For example, I think we are sieved well to n=2.5M. So we should assign all these candidates to LLR and then remove them from the sieve.

This will make the sieve client much faster and the time per factor should drop, thus we can effectively sieve beyond 2.5T.

I created a dat file with only n above 2.4M, if anyone wants to try it and see what speed and time per factor we get. (see attached)
Attached Files
File Type: txt sieve.txt (42.9 KB, 114 views)
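A sketch of why dropping the already-assigned low-n candidates can help, under one assumed cost model (an assumption about the sieve client, not something stated in the thread): baby-step/giant-step sieves spend very roughly sqrt(#candidates) of work per prime tested. Factors hitting candidates that LLR has already cleared are wasted, so removing those candidates raises the rate of useful factors per unit of sieve time even though total factors per p-range falls:

```python
import math

# Relative rate of *useful* factors (those on candidates not yet LLRed)
# per unit of sieve time, assuming time per prime ~ sqrt(#candidates kept).
def useful_factor_rate(candidates, useful_fraction, drop_tested):
    kept = candidates * (useful_fraction if drop_tested else 1.0)
    useful = candidates * useful_fraction   # useful candidates still being hit
    return useful / math.sqrt(kept)         # factors ~ candidates, time ~ sqrt

before = useful_factor_rate(10_000, 0.6, drop_tested=False)
after = useful_factor_rate(10_000, 0.6, drop_tested=True)
print(f"useful factors per unit time: {after / before:.2f}x")  # 1/sqrt(0.6)
```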
Old 2007-04-26, 07:50   #54
em99010pepe
 
 
Sep 2004

2·5·283 Posts

Please decide which approach is better because I'm really inclined to move to another project.

Carlos
Old 2007-04-26, 11:38   #55
hhh
 
 
Jun 2005

373 Posts

Up to 2M, we have sieved nicely and done a decent P-1, so that we hardly find factors by sieving anymore.

Up to 2.5M, I'm going to write out a decent P-1 as well.

So I vote for just deleting the lines below 2.5M from the sieve.txt, and to continue sieving.

This way, we won't get more factors per unit time, but the factors we find will be worth more on average.

For the moment, everybody feel free to use the sieve.txt Citrix posted. The next official release will be truncated at 2.5M anyways.

This is to be sure that no cycles are wasted.

Yours H.
 




