2005-09-24, 15:14  #1 
May 2005
1618_{10} Posts 
Sieving with NewPGen
I have been working on my k=736320585 for some time, and I am now LLR'ing both the 250-300k and 300-350k ranges of n. The problem I have is the number of candidates per 50k range of n: I have sieved both ranges to 1T and I still have ~7900 candidates per range, which is a lot compared to what others report on this forum... any hints on what I may be doing wrong?

2005-09-24, 17:56  #2 
Jun 2003
3×521 Posts 
Try sieving from scratch and see if you get the same number of candidates. Other than that, if you post the first few lines of the file, we can look into it.

2005-09-24, 19:09  #3 
I quite division it
"Chris"
Feb 2005
England
2077_{10} Posts 
Assuming you have chosen k*b^n-1 with k fixed in NewPGen:
My testing of various k's (between 1000 and 20000) up to n = 40000 indicates that a higher number of n's left after sieving probably means more primes will be produced. I suppose it is reasonable to assume this holds for higher n and larger k? I don't think you are doing anything wrong. I have never had to sieve past 500-600 billion. Your larger number of n's left suggests to me the possibility of more primes than I have been finding at those ranges. But, of course, it will take you longer to test all those n's! I only ever sieve to the level suggested in the instructions for NewPGen. Happy hunting! (Note my use of the words 'probably', 'suggest' and 'possibility'!) 
2005-09-24, 20:32  #4  
May 2005
2×809 Posts 
Quote:
2005-09-24, 20:40  #5 
Jun 2004
106_{10} Posts 
This k you are testing is quite a heavy one! For k*2^n+/-1, each choice of k has a certain weight, that is, the proportion of candidates that remain after sieving.
Re-sieving will not help: sieving is the process of determining which candidates are divisible by a certain number (in NewPGen, this is 'p'). So every value of n that is eliminated by a given p will be eliminated by that same p the next time as well. As I see it, you (by accident) chose a k that has a very large weight, so non-primes are not easily detected by sieving. I recommend you stop sieving. If I am mistaken, please correct me! 
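To illustrate the point above, here is a minimal Python sketch of the sieving idea: a toy trial-division sieve, not NewPGen's actual algorithm, and the function names are invented for illustration. Note that once a prime p eliminates an n, the same p eliminates it on every re-run, which is why re-sieving changes nothing.

```python
def small_primes(limit):
    """Eratosthenes sieve for the small primes used to trial-divide candidates."""
    is_p = [True] * (limit + 1)
    is_p[0] = is_p[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_p[i]:
            is_p[i * i::i] = [False] * len(is_p[i * i::i])
    return [i for i, flag in enumerate(is_p) if flag]

def sieve_candidates(k, n_lo, n_hi, p_limit):
    """Return the n in [n_lo, n_hi) for which k*2^n - 1 survives
    trial division by all primes p <= p_limit."""
    survivors = set(range(n_lo, n_hi))
    for p in small_primes(p_limit):
        for n in list(survivors):
            # pow(2, n, p) is 2^n mod p; if k*2^n - 1 == 0 (mod p),
            # the candidate is composite, unless it *is* p itself.
            if (k * pow(2, n, p) - 1) % p == 0 and k * 2 ** n - 1 != p:
                survivors.discard(n)
    return sorted(survivors)
```

For example, with k=3 and n from 1 to 6, only n=5 is eliminated (3*2^5 - 1 = 95 = 5*19); running the sieve again on the survivors removes nothing, matching the observation that re-sieving cannot help.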
2005-09-24, 20:44  #6  
May 2005
2·809 Posts 
Quote:
The same applies to the 200-250k and 250-300k ranges: in the first range I found zero primes (the number of candidates was ~8000), and in the second range I am currently at ~269k and still no primes. So I guess it's just my luck to pick such a "nasty" k... 

2005-09-24, 20:52  #7  
May 2005
652_{16} Posts 
Quote:


2005-09-24, 23:06  #8 
3^{5}·31 Posts 
Why not use RMA.NET? It does the work for you, with less waste.
If you sieve to a fixed p bound, you will most likely either: A. over-sieve and waste cycles, or B. under-sieve and waste cycles. The quickest and most accurate way is to sieve until the rate at which NewPGen is throwing out composites equals the rate at which LLR can perform a primality test on the numbers. Hence no over/under-sieving. Other distributed projects can also have this problem, although in general they save time by pre-sieving files on machines that are good at sieving. If both strategies were used in combination, optimal CPU time could be achieved. Last fiddled with by TTn on 2005-09-24 at 23:08 
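The stopping rule described above can be sketched as a one-line comparison. The function name and the rates in the example are hypothetical placeholders, not RMA.NET's actual behaviour or measured timings.

```python
def should_stop_sieving(removals_per_hour, llr_test_seconds):
    """Each composite the sieve throws out saves one LLR test, so
    sieving pays off only while one removal costs less CPU time
    than one LLR test.  Stop once that is no longer true."""
    seconds_per_removal = 3600.0 / removals_per_hour
    return seconds_per_removal >= llr_test_seconds
```

For instance, if the sieve is removing 60 candidates per hour (60 s each) while an LLR test takes 300 s, keep sieving; once the rate drops to 6 removals per hour (600 s each), switch to LLR.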
2005-09-26, 08:49  #9  
Nov 2003
2·1,811 Posts 
7900 candidates per 50k range of n is not a lot! That's usual for this kind of number. Have a look at the thread for k=2995125705: the average there is about 150 per 1000 range of n, which means 7500 per 50k. That's only 400 less than in your case (because it is sieved to 18T). So just keep going.
OTOH, some k's have large gaps (in terms of n) between primes. If you want, you can stop this one and select another k. Quote:


2005-09-26, 15:36  #10  
May 2003
11100111_{2} Posts 
Quote:
Phil 

2005-09-26, 17:06  #11 
"Curtis"
Feb 2005
Riverside, CA
19·223 Posts 
When planning to run LLR to a very high value (on a high-weight k, 1M is very high), I find that maximum efficiency comes from breaking a "chunk" off for LLR when the sieve time is a little less than double the LLR test time for that exponent. This is not traditional wisdom, because sieve time per p-value does not scale linearly with the range to be sieved; it scales with the square root of the range, so it is more efficient to leave n-values in than it first seems. Since you're planning to run to 1M eventually, I'd sieve 300k-350k until the sieve time is quite a bit longer than the LLR time: if LLR takes 8 min, I'd go to 12-14 min on the sieve before breaking those off. This adds sieve time up front, but reduces LLR time at least as much.
Shane, you say RMA automates this. Does it take this effect into account? Your reply sounds like it breaks off a chunk when LLR time equals sieve time; this is not as efficient as it could be. Comments? Curtis 
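The square-root argument above can be made concrete with some illustrative arithmetic. The numbers are assumed for the sake of the example, not measured, and the sqrt scaling is taken as given per the post.

```python
import math

# Illustrative comparison: sieve n from 300k to 1M as one file,
# versus sieving each 50k chunk separately.  Assumes sieve time
# per p grows like the square root of the n-range.
full_range = 700_000                               # 300k..1M
chunk = 50_000                                     # one 50k chunk
chunks = full_range // chunk                       # 14 chunks
per_chunk_speedup = math.sqrt(full_range / chunk)  # ~3.74x faster per p
# Separate sieving means 14 passes, each only ~3.74x faster than
# one big pass, so it costs ~14 / 3.74 ~ 3.74x more total work:
relative_cost = chunks / per_chunk_speedup
```

Under this model, sieving the chunks one at a time costs roughly sqrt(14), about 3.7 times, as much sieve work as keeping the whole 700k range in one file, which is why it pays to leave n-values in longer than the equal-time rule suggests.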
Thread Tools  
Similar Threads  
Thread  Thread Starter  Forum  Replies  Last Post 
Parallel sieving with newpgen  fivemack  And now for something completely different  3  2017-05-16 17:55 
NewPgen  Cybertronic  Factoring  0  2014-03-22 10:07 
Does NewPGen have a bug?  MooooMoo  Riesel Prime Search  16  2008-12-11 11:46 
Faster sieving with NewPGen and srsieve  geoff  Sierpinski/Riesel Base 5  11  2006-07-10 01:10 
Sieving multiple NewPGen files in a row (How?)  jasong  Software  18  2005-12-30 20:23 