2007-03-24, 20:40  #12 
Jul 2003
72_{16} Posts 
If it is alright with all involved, I'm going to sieve the 60-70 range.

2007-03-25, 08:48  #13 
Jun 2005
373 Posts 
As for the stats within the client:
I let it run 12 hours on an Athlon XP 2000+; it found 31 factors, from 32,551M to 36,020M, which makes 3.5G, and it shows 1480 seconds per candidate. I will post some more numbers this evening. H. 
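As a quick sanity check of these figures (an illustrative back-of-the-envelope calculation, not output of the actual sieve client):

```python
# Back-of-the-envelope check of the reported throughput
# (12 hours, 31 factors, 32,551M..36,020M sieved).
cpu_seconds = 12 * 3600            # 43,200 s
factors_found = 31
range_sieved_m = 36_020 - 32_551   # 3,469M, i.e. roughly 3.5G

print(cpu_seconds / factors_found)  # ~1394 s per factor
print(range_sieved_m / 12)          # ~289M sieved per hour
```

The small gap between the ~1394 s computed here and the 1480 s the client shows may simply reflect how the client windows its measurement.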
2007-03-25, 16:13  #14 
Jun 2003
3·503 Posts 
Stats
Program ran for 2 hrs CPU on an Intel Celeron 1.4 GHz and found 3 factors; time per factor = 40 min. It was able to do about a 650M range, 50,000M to 50,650M. I think P-1 might be better, after we sieve to 100G, if the time per factor is rising so rapidly. edit: any bugs or changes anyone wants? Last fiddled with by Citrix on 2007-03-25 at 16:14 
2007-03-25, 16:55  #15 
Jun 2005
373 Posts 
Can the time it takes for the first factor to be found be taken into account in the time-per-candidate calculation? That would make it more accurate.
That's the only thing I can see to make it better. As for the factor density, your three factors could be a statistical deviation. And I think we should still continue sieving for a moment, as 1) we find non-smooth factors as well, 2) the time to eliminate an average candidate by sieving is still much lower than the time for LLR, and finally 3) why not spend too much time in sieving rather than too much time in LLR, for a change? Never was a project so easy to oversieve; I vote for this luxury.

BTW, questions: Should we keep searching for factors of candidates that already have an LLR residue? And what is the relation between sieve speed and the number of candidates in the list? Proportional? Logarithmic? Citrix? H. 
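The sieve-vs-LLR tradeoff described here can be written down as a simple rule of thumb (a sketch; the LLR time per candidate is an assumed input, not a figure stated in this thread):

```python
def sec_per_factor(cpu_seconds: float, factors_found: int) -> float:
    """Average cost of eliminating one candidate by sieving."""
    return cpu_seconds / factors_found

def keep_sieving(sieve_sec_per_factor: float,
                 llr_sec_per_candidate: float) -> bool:
    """Classic heuristic: keep sieving while removing a candidate by
    sieving is cheaper than LLR-testing it."""
    return sieve_sec_per_factor < llr_sec_per_candidate

# e.g. ~1394 s/factor from the 12h/31-factor run above, against a
# hypothetical 4000 s LLR test per candidate:
print(keep_sieving(sec_per_factor(12 * 3600, 31), 4000))  # True
```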
2007-03-25, 17:22  #16 
Jun 2003
3×503 Posts 
There are two stages to the algorithm
Stage 1) Takes about 2 sec per million range; this is fixed and does not vary with the number of candidates.
Stage 2) Takes about 14 sec per million. If we reduced the number of candidates by half, this would take 7 sec. So: proportional.

But since LLR and machines are not perfect, I think we should try to find a factor for all numbers even when they have been LLRed; there might have been some error. No point in doing P-1 once they are LLRed. We can remove them from the sieve file once a candidate is double-checked. This is the same as how PSP is set up.

If you want, you can sieve n=1.5M-2M first and then the rest. Only the first 2 sec per million is duplicated this way, the rest is the same, but you will have ranges to LLR sooner. This method will require more bookkeeping effort.

Also, I think we should P-1 all candidates with low bounds, say B1=10,000 and B2=100,000, and quickly find all the low-lying factors. Perhaps ECM with low bounds. Then see how many candidates are left and then sieve.

The time it takes to find the first candidate is already taken into account when calculating the time per factor. 
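Those two-stage timings suggest a simple cost model (a sketch; the constants are the per-million figures quoted above, and stage 2 scales proportionally with the candidate count as described):

```python
STAGE1_SEC_PER_M = 2.0   # fixed cost per million, independent of candidate count
STAGE2_SEC_PER_M = 14.0  # cost per million with the *current* candidate list

def sieve_cost_sec(range_millions: float,
                   candidate_fraction: float = 1.0) -> float:
    """Estimated time to sieve a range; stage 2 scales with the fraction
    of the original candidate list still in the sieve file."""
    return range_millions * (STAGE1_SEC_PER_M
                             + STAGE2_SEC_PER_M * candidate_fraction)

print(sieve_cost_sec(1.0))       # 16.0 s/M with the full list
print(sieve_cost_sec(1.0, 0.5))  # 9.0 s/M after halving the list (2 + 7)
```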
2007-03-25, 18:26  #17 
Jul 2003
114_{10} Posts 
Just finished my first range, 60-70
Program ran for 21.5 hrs CPU on an Opteron 248 @ 2.2 GHz and found 38 factors. Sieving rate: 1341.70 sec/candidate, 465M/hr. 
2007-03-25, 20:45  #18 
Jun 2005
175_{16} Posts 
12h
3.6G sieved, 25 factors found, 2500 seconds/factor 
2007-03-25, 20:59  #19  
Jun 2005
175_{16} Posts 
Or I missed your point; that's possible. Perhaps you wanted to propose some sophisticated P-1/sieve mix that is even more efficient. Please explain. After all, everybody is free to do whatever he pleases in this project, as long as it is halfway reasonable and doesn't cause too much work for bookkeeping (what a word, that!). Yours H. Last fiddled with by hhh on 2007-03-25 at 21:00 

2007-03-25, 21:06  #20  
"Mike"
Aug 2002
2·3^{2}·5·83 Posts 


2007-03-25, 21:22  #21  
Jun 2003
3·503 Posts 
One thing: if double checking missed a prime, you may have to PRP a long way before you find one more and settle the question. Consider SOB and their missed prime. But I leave the decision up to you.

For P-1, I looked at 10 or so of the factors I found. Most of them could have been found within a few minutes of P-1 work, compared to 40 min on the sieve for each factor. I suggest we do some basic P-1 with low bounds like B1=10,000 and B2=100,000, then sieve with the remaining candidates, and then return to P-1 with larger bounds. Anyway, we should do whatever is most efficient.

Bookkeeping? I always thought it was two words. What are the roots of the word? 
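For reference, stage 1 of P-1 with a bound like B1=10,000 works as sketched below (a minimal illustration, not the actual client; real implementations add a stage 2 and use much faster arithmetic). It finds a prime factor p whenever p-1 is B1-smooth:

```python
from math import gcd

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def pminus1_stage1(n, b1):
    """Pollard P-1, stage 1: returns a prime factor p of n when every
    prime-power factor of p-1 is <= b1, else None."""
    a = 2
    for p in primes_up_to(b1):
        pe = p
        while pe * p <= b1:  # raise p to its largest power <= b1
            pe *= p
        a = pow(a, pe, n)    # cumulative modular exponentiation
    g = gcd(a - 1, n)
    return g if 1 < g < n else None

# 1009 - 1 = 2^4 * 3^2 * 7 is 100-smooth, while 10007 - 1 = 2 * 5003
# is not, so stage 1 with B1=100 pulls out the factor 1009:
print(pminus1_stage1(1009 * 10007, 100))  # 1009
```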

2007-03-25, 21:28  #22  
Jun 2005
373 Posts 
Anyways, we are going to think about DC only when we reach 5M or something.
Yours H. 
