mersenneforum.org Sieving discussion thread

2009-08-10, 04:32   #210
cipher

Feb 2007

D3₁₆ Posts

Quote:
 Originally Posted by geoff You might be biting off more than you can chew with this range. Edit: The sieve efficiency will decrease as the range on n increases.
Geoff: You are correct. I just tested a range of approximately 100 n's; it was sieving at about 26,000 p/sec, versus the 130M p/sec I get for an individual n.

Last fiddled with by cipher on 2009-08-10 at 05:02

2009-08-10, 06:07   #211
MooooMoo
Apprentice Crank

Mar 2006

1C6₁₆ Posts

Quote:
 Originally Posted by Ken_g6 I'll hazard an educated guess that each extra K takes 1/10 the time of the first K. That still means that testing each P will take about 500 times as long as normal for the same size range of candidates. How much longer does LLR take with those bigger K's? Twice as long? In that case, we're better off with a range of 10 or maybe 20 K's.
edit: You meant that each extra n takes 1/10th the time of the first n?

Anyway, we may still be better off with a range of 20000 n's than a range of 10 or 20 n's. The previous effort (for n=333333) reached a sieving depth of 25P with about 300 k per M. If this effort does 1/20000th as much work per n and only reaches a sieving depth of 1.25T, there'll be about 500 k per M (scaling the survivor density by (ln 25P / ln 1.25T)^2 ≈ 1.8). Bigger k's take about 70% longer to LLR than smaller ones, so sieving a huge range of n with smaller k's would be worth it in this case.

Even if those assumptions are off by a lot, it still wouldn't make sense to do a range of 10 or 20 n's - the benefits of shorter LLR testing times are gone once the k value is greater than a few M. Doing 10 or 20 n's would save a few days of LLR testing at most.
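As a rough sanity check on that 500 figure, here's a sketch of the arithmetic, assuming twin-candidate density thins out like 1/ln(p)^2 (Mertens, applied to both sides of the pair):

Code:
from math import log

# Density observed after the n=333333 effort sieved to 25P,
# rescaled to a hypothetical depth of 1.25T.
old_density = 300                     # k per M at 25P
old_depth, new_depth = 25e15, 1.25e12
print(old_density * (log(old_depth) / log(new_depth)) ** 2)  # ~550, i.e. about 500 k per M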

Last fiddled with by MooooMoo on 2009-08-10 at 06:09

2009-08-10, 17:24   #212
Ken_g6

Jan 2005
Caught in a sieve

395₁₀ Posts

Quote:
 Originally Posted by MooooMoo edit: You meant that each extra n takes 1/10th the time of the first n?
Yep. I shouldn't post late at night; I always get something wrong.

Quote:
 Originally Posted by MooooMoo Anyway, we may still be better off with a range of 20000 n's than a range of 10 or 20 n's...
Let's look at this mathematically. On one side we have the proportion of time saved by shorter LLRs; you say that's a factor of 1.7. On the other side we have the time wasted by having more candidates to LLR. The time used by the sieve is about 1 + R/10, where R is the size of the range of n's. Assuming we're going to do a constant amount of sieve work, which may or may not be to the optimal point, let P be the prime we would get to with a single n.

Now the point we would get to with a range R is Pr = P/(1 + R/10). The log of the ratio P/Pr = 1 + R/10 gives the proportion of extra candidates to be tested with LLR. So, with that factor of 1.7 as our budget, we need ln(1 + R/10) < 1.7. Exponentiate both sides to get 1 + R/10 < e^1.7, which is about 5.5. So R can maybe be 45.

Edit: I think it's actually the ratio of the logs, not the log of the ratio, but I still can't imagine it's big enough for 20,000 n's.
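For what it's worth, here's a quick numeric check of both readings, as a sketch (the single-n depth P = 1e17, i.e. 100P, is an assumed value, just for concreteness):

Code:
from math import exp

P = 1e17        # assumed single-n sieve depth (100P)
budget = 1.7    # LLR time factor saved by smaller k's

# Log-of-ratio reading: ln(1 + R/10) < 1.7
print(10 * (exp(budget) - 1))               # ~45

# Ratio-of-logs reading: ln(P) / ln(P / (1 + R/10)) < 1.7,
# i.e. 1 + R/10 < P / P**(1/1.7)
print(10 * (P / P ** (1 / budget) - 1))     # ~1e8, so even 20000 n's would fit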

Quote:
 Originally Posted by MooooMoo Even if those assumptions are off by a lot, it still wouldn't make sense to do a range of 10 or 20 n's - the benefits of shorter LLR testing times are gone once the k value is greater than a few M. Doing 10 or 20 n's would save a few days of LLR testing at most.
Right, one of my assumptions was that the possible R would get the K's low enough to get the LLR time down. So it's probably pointless to do more than just one n.

Last fiddled with by Ken_g6 on 2009-08-10 at 17:34 Reason: fuzzy math

2009-08-10, 18:29   #213
axn

Jun 2003

11352₈ Posts

Quote:
 Originally Posted by Ken_g6 Let's look at this mathematically. [snip]
The saving from sieving 20000 n's simultaneously is a factor of 1/10 per n. However, we lose a factor of 20000 (compared to a single n). So, on the whole, sieving 20000 n's simultaneously to the same depth as one n will take 2000x the effort.

Let's say the optimal sieving point of the single-n approach is 100P. For the same effort, we can therefore get 20000 n's to 100P/2000 = 50T.

The ratio of the number of candidates surviving at 50T vs. 100P is:
[ln(100P)/ln(50T)]^2 = 1.54. So if the _average_ LLR for a single n takes more than 1.54 times as long as the _average_ LLR of the 20000-n range, the latter is more efficient. If not, the former is. [PS: The 1/10 number may not be the actual figure. We need to do benchmarks to pin down that factor.]
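Putting the whole chain in one place, a sketch of the computation above (the 1/10 per-n cost is still the assumed factor):

Code:
from math import log

def survivor_ratio(single_depth, n_count, per_n_cost):
    effort = n_count * per_n_cost           # 20000 * 1/10 = 2000x effort
    range_depth = single_depth / effort     # same work spread over the range
    # Candidates surviving to depth p thin out like 1/ln(p)^2 for twins,
    # so the ratio between the two depths is the squared ratio of the logs.
    return (log(single_depth) / log(range_depth)) ** 2

print(survivor_ratio(100e15, 20000, 1 / 10))   # ~1.54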

2009-08-10, 21:01   #214
Ken_g6

Jan 2005
Caught in a sieve

5×79 Posts

That's amazing! I had no idea how little a few extra orders of magnitude mattered in sieving. I guess I have a limited imagination.

One interesting thing to look at is what happens if the factor is 1. We get [ln(100P)/ln(5T)]^2 = 1.79, which is almost worth it by itself!

Now, I've done some benchmarks on an improved tpsieve, which is approximately twice as fast as V0.2.2 for large groups of n's. For my version, the ratio is about 1/62! So, assuming your formula is right, that gives [ln(100P)/ln(310T)]^2 = 1.38.

Do we need to benchmark LLR, or has somebody already done that?
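Those two figures check out; a quick standalone sketch, again taking 100P = 1e17:

Code:
from math import log

P = 1e17                       # assumed single-n optimal depth (100P)
for depth in (5e12, 3.1e14):   # factor 1 gives 5T; factor 1/62 gives ~310T
    print((log(P) / log(depth)) ** 2)   # ~1.79 and ~1.38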
2009-08-11, 00:29   #215
Mini-Geek
Account Deleted

"Tim Sorbera"
Aug 2006
San Antonio, TX USA

17·251 Posts

Quote:
 Originally Posted by gribozavr Unfortunately, I don't know how to decompress lzma archives on Windows, but on a modern Linux system (Debian at least) you should already have the lzma command.
7-zip can open just about anything, and uses the LZMA compression method for .7z by default, so it seems quite likely that it'll work. I'm downloading that file now to check if it can open it. I'll edit this with the results. Note that it doesn't associate .lzma with itself, so you need to open it manually.
Edit: It's not opening it, but perhaps the download was corrupted. I can't download again because I reached the download limit for free users (should reset in 12 minutes). Should the MD5 be 9A2F77A956D60BE4A84F088435CB9A62?
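If anyone else wants to check their copy, here's a minimal sketch (the filename is a placeholder for whatever you saved the download as):

Code:
import hashlib

with open("tpsieve.lzma", "rb") as f:   # placeholder filename
    print(hashlib.md5(f.read()).hexdigest().upper())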

Last fiddled with by Mini-Geek on 2009-08-11 at 00:40

2009-08-11, 00:42   #216
mdettweiler
A Sunny Moo

Aug 2007
USA (GMT-5)

3·2,083 Posts

7-Zip opens .lzma files just fine for me. I'm using v4.65, and even though in my case I have 7-Zip associated with .lzma files, it should work fine even without that if you just navigate to the file in 7-Zip's file manager.

2009-08-11, 00:43   #217
gribozavr

Mar 2005
Internet; Ukraine, Kiev

11×37 Posts

Quote:
 Originally Posted by Mini-Geek Should the MD5 be 9A2F77A956D60BE4A84F088435CB9A62?
MD5 is correct.

2009-08-11, 00:53   #218
Mini-Geek
Account Deleted

"Tim Sorbera"
Aug 2006
San Antonio, TX USA

4267₁₀ Posts

I upgraded to 4.65 (from 4.57) and now it works. From http://www.7-zip.org/history.txt:
Quote:
 4.58 beta    2008-05-05
 -------------------------
 ...
 - 7-Zip now can unpack .lzma archives.
 ...
So if anybody is using 4.57 or older, upgrade and you can open the file.
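If upgrading isn't an option, any Python 3.3+ installation can also unpack the legacy .lzma format; a minimal sketch (filenames are placeholders):

Code:
import lzma, shutil

# lzma.open() auto-detects both .xz and the legacy .lzma ("alone") format.
with lzma.open("results.lzma") as src, open("results.txt", "wb") as dst:
    shutil.copyfileobj(src, dst)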

2009-08-11, 01:59   #219
cipher

Feb 2007

211 Posts

Is the client for Linux only? Where is the .exe for Windows, Ken_g6?
Thanks,
cipher

Quote:
 Originally Posted by Ken_g6 That's amazing! I had no idea how little a few extra orders of magnitude mattered in sieving. [snip]

2009-08-11, 05:05   #220
Ken_g6

Jan 2005
Caught in a sieve

5·79 Posts

I was wondering when someone would ask about Windows. I'll look at making a 32-bit Windows .exe tomorrow. But I don't have a 64-bit Windows. Is there a gcc for 32-bit Windows, or for 32- or 64-bit Linux, that will compile a 64-bit Windows binary? Or does someone want to compile it for me?

By the way, I just made another ~20% speed improvement to the 64-bit code. It now runs 4 times as fast as the 32-bit code for many-n sieves. I also tried the 64-bit code with "-t 4", and I think because it does most of its work in the processor registers, 4 threads seem to be as effective as two instances of 2 threads would be.

Last fiddled with by Ken_g6 on 2009-08-11 at 05:06
