mersenneforum.org > Other Stuff > Archived Projects > Prime Cullen Prime

2007-03-24, 20:40   #12
SB2

If it is alright with everyone involved, I'm going to sieve the 60G-70G range.

2007-03-25, 08:48   #13
hhh

As for the stats within the client:
I let it run for 12 hours on an Athlon XP 2000+. It found 31 factors, from 32,551M to 36,020M; that makes 3.5G, and it shows 1480 seconds per candidate.

I will post some more figures this evening. H.

2007-03-25, 16:13   #14
Citrix

Stats

Program ran for 2 hrs
CPU: Intel Celeron 1.4 GHz
Found 3 factors.
Time per factor: 40 min
Was able to do about 650M
Range: 50,000M to 50,650M

If the time per factor keeps rising this rapidly, I think P-1 might be better after we sieve to 100G.
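
That rising cost is what the usual sieve heuristic predicts: the chance that a candidate has a prime factor in (p1, p2] is roughly 1 - ln(p1)/ln(p2), so a fixed-size range yields fewer and fewer factors as the depth grows. A rough Python sketch (the candidate count here is made up for illustration, not project data):

Code:
from math import log

def expected_factors(candidates, p1, p2):
    """Mertens heuristic: a random candidate has a prime factor in
    (p1, p2] with probability about 1 - ln(p1)/ln(p2)."""
    return candidates * (1 - log(p1) / log(p2))

# Hypothetical candidate count, just to show the shape of the curve:
# the same 10G of sieving yields ever fewer factors as p grows.
N = 10000
for p1 in (50e9, 100e9, 200e9, 400e9):
    p2 = p1 + 10e9
    print(f"{p1/1e9:.0f}G-{p2/1e9:.0f}G: ~{expected_factors(N, p1, p2):.0f} factors")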

edit: any bugs or changes anyone wants?


2007-03-25, 16:55   #15
hhh

Could the time it takes for the first factor to be found be taken into account in the time-per-candidate calculation? That would make it more accurate.
That's the only improvement I can see.

As for the factor density, your three factors could be a statistical deviation. And I think we should still continue sieving for a while, because
1) we find non-smooth factors as well;
2) the time to eliminate an average candidate by sieving is still much lower than the time for LLR; and finally
3) why not spend too much time on sieving rather than too much time on LLR, for a change? Never was a project so easy to over-sieve; I vote for this luxury.

BTW, a question: should we keep searching for factors of candidates that already have an LLR residue?
And how does the sieve speed relate to the number of candidates in the list? Proportional? Logarithmic? Citrix?

H.

2007-03-25, 17:22   #16
Citrix

There are two stages to the algorithm:

Stage 1 takes about 2 sec per million of range; this is fixed and does not vary with the number of candidates.

Stage 2 takes about 14 sec per million. If we reduced the number of candidates by half, this would take 7 sec. So: proportional.
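
A toy model of those two figures (a hypothetical helper, not part of the sieve client, using the 2 s/M and 14 s/M numbers above):

Code:
def sec_per_million(frac_candidates_left, stage1=2.0, stage2=14.0):
    """Sieve cost per 1M of p-range: stage 1 is a fixed overhead,
    stage 2 scales with the candidates still in the file."""
    return stage1 + stage2 * frac_candidates_left

print(sec_per_million(1.0))   # 16.0 s/M with the full file
print(sec_per_million(0.5))   # 9.0 s/M once half the candidates are gone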

But since LLR and machines are not perfect, I think we should keep trying to find a factor for all numbers even once they have been LLRed; there might have been some error. There is no point in doing P-1 once they are LLRed, though. We can remove a candidate from the sieve file once it has been double-checked. This is the same way PSP is set up.

If you want, you can sieve n=1.5-2M first and then the rest. Only the first 2 sec per million (the stage 1 cost) is duplicated this way; the rest is the same, but you will have ranges to LLR sooner. This method will require more book keeping effort.

Also, I think we should P-1 all candidates with low bounds, say B1=10,000 and B2=100,000, and quickly find all the low-lying factors. Perhaps ECM with low bounds too. Then see how many candidates are left and then sieve.
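
To see why low bounds already catch the low-lying factors: stage 1 of P-1 finds a prime factor p exactly when p-1 is B1-smooth. A minimal textbook sketch in Python (not Prime95's implementation; the demo composite is invented):

Code:
from math import gcd, isqrt

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    flags = bytearray([1]) * (limit + 1)
    flags[0] = flags[1] = 0
    for p in range(2, isqrt(limit) + 1):
        if flags[p]:
            flags[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [i for i, f in enumerate(flags) if f]

def pminus1_stage1(n, b1=10000):
    """Textbook Pollard P-1, stage 1 only: compute a = 2^E mod n with
    E = product of all prime powers <= B1, then take gcd(a - 1, n).
    This reveals a prime factor p whenever p - 1 is B1-smooth --
    exactly the 'low-lying' factors mentioned above."""
    a = 2
    for q in primes_up_to(b1):
        qe = q
        while qe * q <= b1:        # largest power of q still <= B1
            qe *= q
        a = pow(a, qe, n)
    g = gcd(a - 1, n)
    return g if 1 < g < n else None

# Toy demo: 10007 - 1 = 2 * 5003 is 10000-smooth, while
# 1000003 - 1 = 2 * 3 * 166667 is not, so stage 1 pulls out 10007.
print(pminus1_stage1(10007 * 1000003))   # -> 10007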

The time it takes to find the first factor is already taken into account when calculating the time per factor.

2007-03-25, 18:26   #17
SB2

Just finished my first range, 60G-70G.

Program ran for 21.5 hrs
CPU: Opteron 248 @ 2.2 GHz
Found 38 factors.
Sieving rate: 1341.70 sec/candidate
465M/hr

2007-03-25, 20:45   #18
hhh

12h
3.6G sieved
25 factors found
2500 seconds/factor

2007-03-25, 20:59   #19
hhh

Quote:
Originally Posted by Citrix
But since LLR and machines are not perfect, I think we should keep trying to find a factor for all numbers even once they have been LLRed; there might have been some error. There is no point in doing P-1 once they are LLRed, though. We can remove a candidate from the sieve file once it has been double-checked. This is the same way PSP is set up.

Up to here we agree, except that I thought we could leave the double-checking to Ray Ballinger et al., if they agree, as part of their normal Cullen search. I haven't contacted them yet. It would make things easier.

Quote:
Originally Posted by Citrix
If you want, you can sieve n=1.5-2M first and then the rest. Only the first 2 sec per million (the stage 1 cost) is duplicated this way; the rest is the same, but you will have ranges to LLR sooner. This method will require more book keeping effort.

VETO. I am all for computational efficiency, but free-time efficiency counts as well.

Quote:
Originally Posted by Citrix
Also, I think we should P-1 all candidates with low bounds, say B1=10,000 and B2=100,000, and quickly find all the low-lying factors. Perhaps ECM with low bounds too. Then see how many candidates are left and then sieve.

The time it takes to find the first factor is already taken into account when calculating the time per factor.

I don't see the point of this. You want low-lying factors? You sieve. You want efficiency? You let Prime95 choose the bounds; and ECM is slower. You want to be sure not to miss some easy factors before LLR? Then you P-1. The whole philosophy is sieve, then P-1, then LLR.
Or maybe I missed your point, that's possible. Perhaps you wanted to propose some sophisticated P-1/sieve mix that is even more efficient. Please explain.

After all, everybody is free to do whatever they please in this project, as long as it is halfway reasonable and doesn't cause too much work for bookkeeping (<-- what a word, that!).

Yours H.


2007-03-25, 21:06   #20
Xyzzy

Quote:
...bookkeeping(<--what a word, that!)...
http://answers.yahoo.com/question/in...5022759AALr5af

2007-03-25, 21:22   #21
Citrix

Quote:
Originally Posted by hhh
Up to here we agree, except that I thought we could leave the double-checking to Ray Ballinger et al., if they agree, as part of their normal Cullen search. [...]
It is OK with me if you want to delegate the work of double-checking to Ray Ballinger et al. If so, then you can remove candidates from the sieve. Just ask all users with new machines to double-check 1-2 candidates before they reserve a new range, or to run Prime95 stress testing.

One thing, though: if the double-checking missed a prime, you might have to PRP a long way before you find another one and settle the question. Consider Seventeen or Bust (SOB) and their missed prime. But I leave the decision up to you.


For P-1: I looked at the 10 or so factors I found. Most of them could have been found within a few minutes of P-1 work, compared to 40 minutes on the sieve per factor. I suggest we do some basic P-1 with low bounds like B1=10,000 and B2=100,000, then sieve with the remaining candidates, and then return to P-1 with larger bounds. Anyway, we should do whatever is most efficient.

Book Keeping? I always thought it was two words. What are the roots of the word?

2007-03-25, 21:28   #22
hhh

Quote:
Originally Posted by Citrix
But I leave the decision up to you.

Anyway, we are only going to think about DC when we reach 5M or something.
Quote:
Originally Posted by Citrix
For P-1: I looked at the 10 or so factors I found. Most of them could have been found within a few minutes of P-1 work, compared to 40 minutes on the sieve per factor.

The problem with P-1 is all the candidates you test without finding a factor.
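
To put numbers on that: the real cost per P-1 factor is the per-candidate time divided by the hit rate. A tiny sketch with invented figures (only the 40 min/factor sieve rate comes from Citrix's post):

Code:
# Hypothetical per-candidate figures; only the sieve rate is from the thread.
p1_minutes = 5        # assumed P-1 runtime per candidate
hit_rate   = 0.03     # assumed fraction of candidates that yield a factor
sieve_minutes_per_factor = 40

p1_minutes_per_factor = p1_minutes / hit_rate
print(f"P-1: ~{p1_minutes_per_factor:.0f} min/factor, "
      f"sieve: {sieve_minutes_per_factor} min/factor")
# With these numbers P-1 only wins if the hit rate stays above
# p1_minutes / sieve_minutes_per_factor = 12.5%.
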
Quote:
Originally Posted by Citrix
I suggest we do some basic P-1 with low bounds like B1=10,000 and B2=100,000, then sieve with the remaining candidates, and then return to P-1 with larger bounds. Anyway, we should do whatever is most efficient.

Free-time veto again. If somebody wants to do P-1 they can, but then do it properly, please. It's still fast enough.
Yours H.