#1 | May 2005
I have been working on my k=736320585 for some time; I am now LLR'ing both the 250-300k and 300-350k ranges of n. The problem I have is the number of candidates per 50k range of n: I have sieved both ranges to 1T and I still have ~7900 candidates per range, which is a lot compared to what others report on this forum... any hints on what I may be doing wrong?
#2 | Jun 2003
Try sieving from scratch and see if you get the same number of candidates. Other than that, if you post the first few lines of the file, we can look into it.
#3 | "Chris" | Feb 2005 | England
Assuming you have chosen k*b^n-1 with k fixed in NewPGen:
My testing of various k's (between 1000 and 20000) up to an n of 40000 indicates that a higher number of n's left after sieving probably means more primes will be produced. I suppose it is reasonable to assume this holds for higher n and larger k? I don't think you are doing anything wrong. I have never had to sieve past 500-600 billion. Your larger number of n's left suggests to me the possibility of more primes than I have been finding at those ranges. But, of course, it will take you longer to test all those n's! I only ever sieve to the level suggested in the instructions for NewPGen. Happy hunting! (Note my use of the words 'probably', 'suggests' and 'possibility'!)
#5 | Jun 2004
This k you are testing is quite a heavy one! For k*2^n+/-1, each choice of k has a certain weight, i.e. a measure of how many numbers remain after sieving.

Resieving will not help: sieving is the process of determining which numbers are divisible by a certain prime (in NewPGen, this is 'p'). Every value of n whose candidate is divisible by some p will still be divisible by that same p the next time. As I see it, you chose (by accident) a k that has a very large weight, so composites are not easily detected by sieving. I recommend that you stop sieving. If I am mistaken, please correct me!
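The "weight" idea above can be sketched numerically. This is a hypothetical toy helper, not NewPGen's actual algorithm: it counts the n in a range for which k*2^n - 1 has no prime factor below a small bound. A heavy k keeps more survivors, and running the same sieve twice removes nothing new.

```python
def small_primes(limit):
    """Primes below `limit` by trial division (fine for tiny limits)."""
    primes = []
    for m in range(2, limit):
        if all(m % p for p in primes):
            primes.append(m)
    return primes

def surviving_n(k, n_max, prime_limit):
    """n in [1, n_max) for which k*2^n - 1 has no prime factor < prime_limit.

    This mimics what sieving does: a candidate divisible by p is composite
    (unless the candidate *is* p), so it can be removed without an LLR test.
    """
    primes = small_primes(prime_limit)
    alive = []
    for n in range(1, n_max):
        value = k * 2**n - 1
        if all(value % p or value == p for p in primes):
            alive.append(n)
    return alive

# A heavy k leaves noticeably more survivors than a light one at the same
# sieve depth; that is the "weight" being described above.
```

Deepening the sieve (raising `prime_limit`) can only shrink the survivor list, which is why resieving the same range to the same depth is wasted work.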
#6 | May 2005
The same applies to the 200-250k and 250-300k ranges: in the first range I found zero primes (the number of candidates was ~8000), and in the second range I am currently at ~269k and still no primes.

So I guess it's just my luck to pick such a "nasty" k...
#8
Why not use RMA.NET? It does the work for you, with less waste.
If you sieve to a fixed p bound, you will most likely either:
A. over-sieve and waste cycles, or
B. under-sieve and waste cycles.

The quickest and most accurate way is to sieve until the rate at which NewPGen is throwing out composites equals the rate at which LLR can perform a primality test on the numbers. Then there is no over- or under-sieving.

Other distributed projects can also have this problem, although in general they save time by pre-sieving files on machines that are good at sieving. If both strategies were used in combination, optimal CPU time could be achieved.

Last fiddled with by TTn on 2005-09-24 at 23:08
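The stopping rule above can be sketched as a back-of-envelope calculation. Everything here is an assumption for illustration only: surviving candidates at sieve depth p are modelled as C/ln(p) (a Mertens-style heuristic), `sieve_speed` is p-values tested per second, and `llr_seconds` is the time for one LLR test. The function doubles the depth until removing one more composite by sieving costs more than simply LLR-testing it.

```python
import math

def optimal_sieve_depth(weight_C, sieve_speed, llr_seconds, p_max=10**15):
    """Find the sieve depth where removal cost matches one LLR test.

    Toy model (hypothetical): remaining(p) ~ weight_C / ln(p), so the
    marginal cost of one removal at depth p is roughly
        p * ln(p)**2 / (weight_C * sieve_speed)   seconds.
    Stop sieving once that exceeds the cost of an LLR test.
    """
    p = 10**6  # arbitrary starting depth for the scan
    while p < p_max:
        cost_per_removal = p * math.log(p) ** 2 / (weight_C * sieve_speed)
        if cost_per_removal >= llr_seconds:
            return p
        p *= 2
    return p_max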
#9 | Nov 2003
7900 candidates per 50k range of n is not a lot! That's usual for this kind of number. Have a look at the thread for k=2995125705: the average there is about 150 per 1000 range of n, which means 7500 per 50k, only 400 less than in your case (because it is sieved to 18T). So just keep going.

OTOH, some k's have large gaps (in terms of n) between primes. If you want, you can stop this one and select another k.
#10 | May 2003

Phil
#11 | "Curtis" | Feb 2005 | Riverside, CA
When planning to run LLR to a very high value (on a high-weight k, 1M is very high), I find that maximum efficiency comes from breaking a "chunk" off for LLR when the sieve time is a little less than double the LLR test time for that exponent. This is not traditional wisdom, because sieve time per p-value does not scale linearly with the range to be sieved -- it scales with the square root of the range, so it's more efficient to leave n-values in than it first seems. Since you're planning to run to 1M eventually, I'd sieve 300k-350k until sieve time is quite a bit longer than LLR time -- if LLR takes 8 min, I'd go 12 to 14 min on the sieve before breaking those n off. This adds sieve time up front, but reduces LLR time at least as much.
Shane -- you say RMA automates this. Does it take this effect into account? Your reply sounds like it breaks a chunk off when LLR time is equal to sieve time -- that is not as efficient as it could be. Comments?

-Curtis
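The square-root point above can be illustrated with a toy cost model. All of this is assumption, not measurement: suppose testing one p over an n-range of width W costs time proportional to sqrt(W), while the expected removals per p are proportional to W. Then the sieving cost per removed candidate scales like 1/sqrt(W), so a wide range is cheaper per removal than a linear model suggests.

```python
import math

def sieve_cost_per_removal(p, range_width, weight_C, base_speed=1.0):
    """Toy model: seconds of sieving per composite removed at depth p.

    Hypothetical assumptions, for illustration only:
      - testing one p over an n-range of width W costs sqrt(W) / base_speed
      - expected removals per p ~ weight_C * W / (p * ln(p)**2)
    """
    cost_per_p = math.sqrt(range_width) / base_speed
    removals_per_p = weight_C * range_width / (p * math.log(p) ** 2)
    return cost_per_p / removals_per_p

# Under this model, doubling the range width cuts the cost per removal by
# sqrt(2) ~ 1.41, which is why it pays to keep the whole 300k-1M file in
# the sieve and only break a chunk off for LLR once sieving it costs
# noticeably more than an LLR test.
```

This is one plausible reading of why the break-even multiplier lands near 2x rather than 1x; the exact factor would depend on the real sieve software's scaling.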