mersenneforum.org (https://www.mersenneforum.org/index.php)
-   15k Search (https://www.mersenneforum.org/forumdisplay.php?f=16)
-   -   Sieving with NewPGen (https://www.mersenneforum.org/showthread.php?t=4743)

Cruelty 2005-09-24 15:14

Sieving with NewPGen
 
I have been working on my k=736320585 for some time; I am now LLR'ing both the 250-300k and 300-350k ranges of "n". The problem is the number of candidates remaining per 50k range of "n": I have sieved both ranges to 1T and I still have ~7900 candidates per range, which is [B]a lot[/B] compared to what others report on this forum... any hints on what I may be doing wrong?

Citrix 2005-09-24 17:56

Try sieving from scratch and see if you get the same number of candidates. Other than that, if you post the first few lines of the file, we can look into it.

Flatlander 2005-09-24 19:09

Assuming you have chosen k*b^n-1 with k fixed in NewPGen:

My testing of various 'k's (between 1000 and 20000) up to an n of 40000 indicates that a higher number of 'n's left after sieving [i]probably[/i] means there will be more primes produced.

I suppose it is reasonable to assume this holds for higher n and larger k?

I don't think you are doing anything wrong. I have never had to sieve past 500-600 billion. Your larger amount of 'n's left [i]suggests[/i] to me the [i]possibility[/i] of more primes than I have been finding at those ranges. But, of course, it will take you longer to test all those 'n's!

I only ever sieve to the level suggested in the instructions for NewPGen.

Happy hunting!

(Note my use of the words 'probably', 'suggests' and 'possibility'!)

Cruelty 2005-09-24 20:32

[QUOTE=Citrix]Try sieving from scratch and see if you get the same number of candidates. Other than that, if you post the first few lines of the file, we can look into it.[/QUOTE]
Here is the beginning of the NewPGen result file:
[quote]
1002680287792:M:0:2:258
736320585 300002
736320585 300012
736320585 300018
736320585 300020
736320585 300029
736320585 300032
736320585 300039
[/quote]
As for sieving the range again from the beginning - is it possible that NewPGen will return different results each time I run it?
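In case it helps, here is a minimal Python sketch that parses a file in this format - assuming the first colon-separated field of the header is the sieve depth reached, and each later line is a 'k n' pair as in the snippet above (the file name is hypothetical):
[code]
# Minimal parser for a NewPGen output file like the one quoted above.
# Assumption: the header's first colon-separated field is the sieve depth p,
# and every later line holds "k n" for a surviving candidate k*2^n-1.

def read_newpgen(path):
    with open(path) as f:
        header = f.readline().strip()
        sieve_depth = int(header.split(":")[0])
        candidates = []
        for line in f:
            parts = line.split()
            if len(parts) == 2:
                k, n = map(int, parts)
                candidates.append((k, n))
    return sieve_depth, candidates

depth, cands = read_newpgen("t300.txt")   # hypothetical file name
print(f"sieved to p = {depth:,}")
print(f"{len(cands)} candidates remain")
[/code]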

Templus 2005-09-24 20:40

This k you are testing is quite a heavy one! For k*2^n+/-1, each choice of k has a certain weight - roughly, a measure of how many candidates remain after sieving.

Resieving will not help: sieving is the process of determining which candidates are divisible by a certain number (in NewPGen, this is 'p'). Any 'n' for which k*2^n-1 is divisible by some 'p' will be divisible by that same 'p' the next time as well, so the output is deterministic.

As I see it, you (by accident) chose a k that has a very large weight, so composites are not easily removed by sieving. I recommend you stop sieving.

If I am mistaken, please correct me! :smile:
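A toy Python sketch of the point about determinism - this is not NewPGen's actual algorithm, just a fixed-k trial-division sieve, and running it twice necessarily removes exactly the same n's:
[code]
# Toy fixed-k sieve: remove n where k*2^n - 1 has a factor among small primes.
# This is NOT NewPGen's algorithm, just an illustration that sieving is a
# deterministic divisibility test, so rerunning it changes nothing.

k = 736320585
small_primes = [3, 5, 7, 11, 13, 17, 19, 23]

def sieve(n_lo, n_hi):
    survivors = []
    for n in range(n_lo, n_hi + 1):
        has_factor = any(
            (k * pow(2, n, p) - 1) % p == 0  # k*2^n - 1 mod p, kept small
            for p in small_primes
        )
        if not has_factor:
            survivors.append(n)
    return survivors

# Two runs give identical results, always.
assert sieve(300000, 300100) == sieve(300000, 300100)
[/code]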

Cruelty 2005-09-24 20:44

[QUOTE=Flatlander]I have never had to sieve past 500-600 billion. Your larger amount of 'n's left [i]suggests[/i] to me the [i]possibility[/i] of more primes than I have been finding at those ranges.[/QUOTE]
I have sieved a little more than I should have based on the LLR/NewPGen break-even point - according to my test it should be 8'25" for n=350000, and I sieved until ~9'30".
The same applies to the 200-250k and 250-300k ranges - in the first range I found [B]zero[/B] primes (out of ~8000 candidates), and in the second I am currently at ~269k and still have [B]no[/B] primes :no:
So I guess it's just my luck to pick such a "nasty" k...

Cruelty 2005-09-24 20:52

[QUOTE=Templus]As I see it, you (by accident) chose a k that has a very large weight, so composites are not easily removed by sieving. I recommend you stop sieving.[/QUOTE]
I am stubborn and I will take this k to 1M... eventually :wink:

TTn 2005-09-24 23:06

Why not use RMA.NET? It does the work for you, with less waste.

If you sieve to a fixed p bound, you will most likely either:
A. Over-sieve and waste cycles, or
B. Under-sieve and waste cycles.

The quickest and most accurate way is to sieve until the rate at which NewPGen throws out composites equals the rate at which LLR can perform a primality test on the remaining numbers - hence no over- or under-sieving.
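A back-of-the-envelope Python sketch of that stopping rule (the removal counts and times here are made up; in practice you would estimate the removal rate from NewPGen's recent progress output):
[code]
# Stopping rule: keep sieving while removing a candidate is cheaper
# than LLR-testing one. All numbers below are hypothetical.

llr_seconds_per_test = 8 * 60 + 25   # e.g. Cruelty's 8'25" at n = 350000

def should_keep_sieving(removals_last_hour):
    """True while sieving removes candidates faster than LLR clears them."""
    if removals_last_hour == 0:
        return False
    seconds_per_removal = 3600 / removals_last_hour
    return seconds_per_removal < llr_seconds_per_test

print(should_keep_sieving(removals_last_hour=12))  # True: 300 s/removal < 505 s/test
print(should_keep_sieving(removals_last_hour=5))   # False: 720 s/removal > 505 s/test
[/code]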

Other distributed projects can have this problem too, although in general they save time by pre-sieving files on machines that are good at sieving.
If both strategies were used in combination, optimal CPU time could be achieved.

Kosmaj 2005-09-26 08:49

7900 candidates per 50k range of n is not a lot! That's usual for this kind of number. Have a look at the thread for k=2995125705: the average there is about 150 candidates per 1000 range of n, which means 7500 per 50k - only 400 less than in your case (and that file is sieved to 18T). So just keep going.

OTOH, some k's have large gaps (in terms of n) between primes. If you want, you can stop this one and select another k.


fatphil 2005-09-26 15:36

[QUOTE=Templus]As I see it, you (by accident) chose a k that has a very large weight, so composites are not easily removed by sieving. I recommend you stop sieving.[/QUOTE]

Roughly speaking, k's with high weight prefer to be sieved more deeply. This is because each new prime you sieve with is more likely to remove a remaining candidate than with a k where small primes have already removed a large proportion of the candidates. This means candidate removal rates tend to stay higher with a heavy k, so you'll stop sieving later, and so get a marginally better prime/test ratio from the testing phase.
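A rough numerical illustration, using the standard heuristic that the proportion of candidates surviving a sieve to depth p falls off like 1/ln(p) (Mertens), scaled by the weight of k - the constants below are invented:
[code]
import math

# Heuristic: candidates remaining after sieving to depth p is roughly
# proportional to weight / ln(p), so the removal rate as p grows is
# proportional to weight / (p * ln(p)^2) -- higher weight, higher rate,
# hence a deeper optimal sieve. Constants here are invented.

N_RANGE = 25000  # n-values in the range before sieving

def remaining(weight, p):
    return N_RANGE * weight / math.log(p)

for weight in (1.0, 3.0):            # "light" vs "heavy" k (arbitrary units)
    at_1t = remaining(weight, 1e12)
    removed = at_1t - remaining(weight, 2e12)
    print(f"weight {weight}: {at_1t:.0f} left at 1T, "
          f"{removed:.1f} more removed by sieving on to 2T")
[/code]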

Phil

VBCurtis 2005-09-26 17:06

When planning to run LLR to a very high value (on a high-weight k, 1M is very high), I find that maximum efficiency comes from breaking a "chunk" off for LLR when the sieve time is a little less than double the LLR test time for that exponent. This is not traditional wisdom, because sieve time per p-value does not scale linearly with the range being sieved - it scales with the square root of the range - so it's more efficient to leave n-values in than it first seems.

Since you're planning to run to 1M eventually, I'd sieve 300k-350k until the sieve time is quite a bit longer than the LLR time: if LLR is 8 min, I'd go 12 to 14 min on the sieve before breaking those n's off. This adds sieve time up front, but reduces LLR time by at least as much.
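A quick sanity check of the square-root argument in Python, with illustrative proportions rather than measured times:
[code]
import math

# Curtis's point: per-p sieve cost grows like sqrt(n-range), while the
# number of candidates kept in the sieve grows linearly with it, so a
# wide sieve removes candidates more cheaply per unit of CPU time.
# Proportions below are illustrative, not measured.

full_range  = 1_000_000 - 300_000   # sieve 300k-1M together
small_range = 350_000 - 300_000     # sieve 300k-350k alone

cost_ratio       = math.sqrt(full_range / small_range)  # ~3.7x slower per p
candidates_ratio = full_range / small_range             # ~14x more candidates

# Removals per unit time scale with candidates / cost:
print(f"wide sieve is ~{candidates_ratio / cost_ratio:.1f}x "
      "more efficient per CPU-second at removing candidates")
[/code]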

Shane - you say RMA automates this. Does it take this effect into account? Your reply sounds like it breaks off a chunk when LLR time equals sieve time - that is not as efficient as it could be. Comments?
-Curtis

