mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Twin Prime Search (https://www.mersenneforum.org/forumdisplay.php?f=65)
-   -   Two sieving questions (https://www.mersenneforum.org/showthread.php?t=13729)

The Carnivore 2010-08-15 20:59

Two sieving questions
 
1.) What's the optimal sieve depth, and how is it calculated? I'm assuming the LLR part of the project goes like this:

A.) Test k<100K for n=480K-495K. If a twin is found, work on "Operation Megabit Twin". If not, go to step B.
B.) Test 100K<k<1M for n=480K-495K. If a twin is found, work on "Operation Megabit Twin". If not, go to step C.
C.) Test 1M<k<10M. If a twin is found, work on "Operation Megabit Twin". If not, repeat the same process for n=495K-500K.

Would it be a good idea to stop sieving now until the project is done with k<1M? At p=550T, it takes 5-6 minutes to find a k<1M factor on one core, but only four and a half minutes to finish an LLR test.

2.) How much efficiency is lost by breaking up the 480K-495K range into three separate ranges? Shouldn't they be merged into one 480K-495K file?

mdettweiler 2010-08-15 23:27

[quote=The Carnivore;225578]1.) What's the optimal sieve depth, and how is it calculated? I'm assuming the LLR part of the project goes like this:

A.) Test k<100K for n=480K-495K. If a twin is found, work on "Operation Megabit Twin". If not, go to step B.
B.) Test 100K<k<1M for n=480K-495K. If a twin is found, work on "Operation Megabit Twin". If not, go to step C.
C.) Test 1M<k<10M. If a twin is found, work on "Operation Megabit Twin". If not, repeat the same process for n=495K-500K.[/quote]
I've heard that the optimal depth is estimated to be around p=3P for the entire range. I don't believe any hard calculations have been done to that effect, though.

[quote]Would it be a good idea to stop sieving now until the project is done with k<1M? At p=550T, it takes 5-6 minutes to find a k<1M factor on one core, but only four and a half minutes to finish an LLR test.[/quote]
For a range with a small spread of k and a large spread of n (a la NPLB or RPS, usually sieved with one of the srsieve programs), that is how you'd calculate optimal depth, since a smaller n-range sieves faster. But for something with a small spread of n and a large spread of k (usually sieved with tpsieve, as we're doing here), extending the k-range makes no sizeable difference in sieving speed. So the sieve would proceed just as fast if we were only working on k<1M. Unless I'm overlooking something, that means we should be sieving to the same depth for k<1M as we would for k<10M; otherwise we're effectively throwing away the advantage we gained by sieving the entire range together.

Note that this assumes the entire file (k<10M) will eventually be used. I'm not sure how we're going to handle that, whether we'll test the whole range or stop after the first twin. It really is most efficient to test the whole file (otherwise we'd potentially be wasting quite a bit of sieve work), but the popular preference is usually for the next twin to be significantly bigger than the last (not just marginally so, as it would be if found later in the same range).
[quote]2.) How much efficiency is lost by breaking up the 480K-495K range into three separate ranges? Shouldn't they be merged into one 480K-495K file?[/quote]
I'm not sure how much efficiency is lost, but I do know that combining them is more efficient. That's what gribozavr is doing for n=485K-495K. However, memory usage increases rather quickly as the n-range grows, which is why the 480K-500K range was split into four pieces to begin with. If you have sufficient memory, though (as gribozavr does), then combining two or three of the subranges may be worth it.

Oddball 2010-08-16 06:28

[quote=mdettweiler;225591]I've heard that the optimal depth is estimated to be around p=3P for the entire range. I don't believe any hard calculations have been done to that effect, though.[/quote]
I've just run a quick benchmark: sieving 5999T-6000T yields 171 factors. It took me just over two hours (7401 seconds, to be precise) to finish that 1T range, or a rate of 7401/171 = 43.28 seconds per factor.

If I were to use all cores of that same machine for LLR instead, I'd be completing tests at a rate of 65-66 seconds per test.

When you take other things into account (duplicate factors, factors for candidates which have already been LLR tested, and computers which are better at LLRing than at sieving), you'll find that 6P is more or less the optimal sieve depth.
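
A minimal Python sketch of that break-even arithmetic (the function names and the useful_fraction discount are illustrative assumptions, not part of any project tool):

[code]
# Rough break-even check for sieve depth, using the figures quoted above.
# Sieving stays worthwhile while the average cost of removing a candidate
# by finding a factor is below the cost of LLR testing that candidate.

def seconds_per_factor(range_seconds, factors_found):
    """Average sieving cost of eliminating one candidate."""
    return range_seconds / factors_found

def keep_sieving(sec_per_factor, sec_per_llr_test, useful_fraction=0.9):
    """True while sieving is cheaper than LLR testing.

    useful_fraction discounts duplicate factors and factors for candidates
    that were already LLR tested (a ballpark guess, not a measured value).
    """
    return sec_per_factor < sec_per_llr_test * useful_fraction

# Benchmark above: 1T of sieving took 7401 s and found 171 factors.
spf = seconds_per_factor(7401, 171)   # about 43.3 s per factor
print(spf, keep_sieving(spf, 65))     # 43.28... True -> keep sieving deeper
[/code]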

Lennart 2011-04-13 23:46

[QUOTE=Oddball;225630]I've just run a quick benchmark: sieving 5999T-6000T yields 171 factors. It took me just over two hours (7401 seconds, to be precise) to finish that 1T range, or a rate of 7401/171 = 43.28 seconds per factor.

If I were to use all cores of that same machine for LLR instead, I'd be completing tests at a rate of 65-66 seconds per test.

When you take other things into account (duplicate factors, factors for candidates which have already been LLR tested, and computers which are better at LLRing than at sieving), you'll find that 6P is more or less the optimal sieve depth.[/QUOTE]


When I sieve on my GPU I get 227 factors/hour. That's 3600/227 = 15.9 seconds per factor.

If you merge two files into one 480K-490K file, you will sieve it faster.

Lennart

EDIT: I forgot to say this was on the range 6P-6002T.
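
The same comparison with the GPU rate quoted here (the 65-second CPU LLR time is carried over from Oddball's benchmark above; the variable names are just for illustration):

[code]
# GPU sieving rate from this post: 227 factors/hour on 6P-6002T.
gpu_sec_per_factor = 3600 / 227        # about 15.9 s per factor
cpu_sec_per_llr_test = 65              # per-core LLR time from the post above

# The GPU removes a candidate roughly 4x cheaper than a CPU LLR test,
# so sieving on the GPU is still clearly worthwhile at this depth.
print(gpu_sec_per_factor < cpu_sec_per_llr_test)   # True
[/code]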

hemiboso 2013-04-24 01:37

Question re Prime Spiral Sieve Spin on Twin Primes
 
I'm wondering if the way I've presented the factorization of twin primes is even remotely useful to any of you ... [url]http://www.primesdemystified.com/twinprimes[/url] ... accepting that this comes from the perspective of a crackpot arithmetician sticking his neck out to ask a sincere question of bona fide mathematicians and programmers.

Puzzle-Peter 2018-12-19 18:38

Talking about sieving - I tried to run tpsieve-cuda on a recent and powerful GPU. But it appears to be looking for some libcudart library that is way outdated and nowhere to be found on the system. Does anybody know how to get it to run on a modern system?

LaurV 2018-12-20 07:32

Can you post the lib name? There are lots of old CUDA libs here somewhere; we may be able to find it for you...

henryzz 2018-12-21 10:12

If the binaries are ancient then you are probably better off recompiling, as the result will then be optimised for your modern GPU. This might give you a nice speed boost (or not).

Puzzle-Peter 2018-12-25 17:57

It's looking for libcudart.so.2 but I don't know where it's trying to find that file.
Is the source available? Trying to compile might be a good idea.

LaurV 2018-12-27 10:02

That library is part of cuda runtime toolkit 2.3 (from 2009) which you can take from [URL="https://developer.nvidia.com/cuda-toolkit-23-downloads"]here[/URL].

Old discussion [URL="https://devtalk.nvidia.com/default/topic/414720/error-while-loading-shared-libraries-libcudart-so-2-wrong-elf-class-elfclass32-error-executing/"]here[/URL] (just first google searched link).

You should not need anything like that for the new cards. You need to install the newer toolkit instead, and possibly tweak the source code of that old program.

Puzzle-Peter 2019-07-04 18:26

I had a look at the source, but that's not something I am very good at. I learned that it is "built from the ppsieve package", but I am at a loss as to how to get a tpsieve binary from that.


I had a hard time tweaking polysieve, which is only a single, fairly short file of code. These multi-file builds just go over my head. It clearly shows I never really learned to write software.

