#210
Feb 2007
D3₁₆ Posts
Geoff: You are correct. I just tested a range of approximately 100 n's; it was going at a rate of 26,000 p/s, versus the 130M p/s I get for an individual n.
Last fiddled with by cipher on 2009-08-10 at 05:02
#211
Apprentice Crank
Mar 2006
1C6₁₆ Posts
Anyway, we may still be better off with a range of 20000 n's than a range of 10 or 20 n's. The previous effort (for n=333333) reached a sieving depth of 25P with about 300 k per M. If this effort does 20000 times less work and only reaches a sieving depth of 1.25T, there'll be about 500 k per M.

Bigger k's take about 70% longer to LLR than smaller ones, so sieving a huge range of n with smaller k's would be worth it in this case. Even if those assumptions are off by a lot, it still wouldn't make sense to do a range of 10 or 20 n's - the benefits of shorter LLR testing times are gone once the k value is greater than a few M. Doing 10 or 20 n's would save a few days of LLR testing at most.

Last fiddled with by MooooMoo on 2009-08-10 at 06:09
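As a rough check of those numbers, here is a small sketch assuming the density of surviving twin candidates scales as 1/ln(p)^2 of the sieve depth (the same scaling used in the later posts); the 300 k per M at 25P baseline is the figure quoted above, and the scaling law itself is an assumption rather than a measurement:

```python
from math import log

baseline_density = 300      # surviving k per 1M of k at a depth of 25P (figure quoted above)
baseline_depth   = 25e15    # 25P
shallow_depth    = 1.25e12  # 1.25T

# Assumed scaling: survivors per M ~ 1/ln(p)^2, so the shallower sieve keeps more candidates.
est = baseline_density * (log(baseline_depth) / log(shallow_depth)) ** 2
print(f"~{est:.0f} k per M surviving at 1.25T")   # roughly 550, in the same ballpark as the ~500 above
```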
#212
Jan 2005
Caught in a sieve
395₁₀ Posts
Now, the point we would get to with a range of R n's is P_R = P/(1 + R/10). The log of the ratio, ln(P/P_R) = ln(1 + R/10), gives the proportion of extra candidates to be tested with LLR. So, assuming a given R will halve LLR time, we need ln(1 + R/10) < 1.7. Exponentiating both sides gives 1 + R/10 < e^1.7, which is about 5.5. So R can maybe be 45.

Edit: I think it's actually the ratio of the logs, not the log of the ratio, but I still can't imagine it's big enough for 20,000 n's.

Right, one of my assumptions was that the possible R would get the k's low enough to get the LLR time down. So it's probably pointless to do more than just one n.

Last fiddled with by Ken_g6 on 2009-08-10 at 17:34 Reason: fuzzy math
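A quick sketch of the two readings mentioned here (log of the ratio vs. ratio of the logs), under the stated P_R = P/(1 + R/10) assumption; the 100P value for P is borrowed from the next post, and the helper name is purely illustrative:

```python
from math import log

P = 100e15                      # assumed single-n sieving point (the 100P used in the next post)

def depth_with_range(R):
    # Illustrative helper: depth reached when sieving a range of R n's, per the post's assumption.
    return P / (1 + R / 10)

for R in (45, 20000):
    P_R = depth_with_range(R)
    log_of_ratio  = log(P / P_R)               # the post's original reading: ln(1 + R/10)
    ratio_of_logs = (log(P) / log(P_R)) ** 2   # the "ratio of the logs" reading, squared as in the later posts
    print(f"R={R}: ln(P/P_R)={log_of_ratio:.2f}, [ln P / ln P_R]^2={ratio_of_logs:.2f}")

# R=45:    ln(1 + 4.5) ~ 1.70          -> the "R can maybe be 45" bound
# R=20000: ratio-of-logs reading ~ 1.54 -> matches the estimate in the next post
```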
#213
Jun 2003
11352₈ Posts
The saving from sieving 20000 n's simultaneously is a factor of 1/10 per n. However, we lose a factor of 20000 (compared to a single n). So, on the whole, sieving 20000 n's simultaneously to the same depth as one n will take 2000x the effort.

Let's say the optimal sieving point of the single-n approach is 100P. For the same effort, we can therefore get 20000 n's to 100P/2000 = 50T. The ratio of the number of candidates surviving after 50T vs. 100P is [ln(100P)/ln(50T)]^2 = 1.54. So if the _average_ LLR test for a single n takes more than 1.54 times as long as the _average_ LLR test for the 20000-n range, the latter is more efficient. If not, the former is more efficient.

[PS: The 1/10 number may not be the actual number. We need to do benchmarks to figure out that factor.]
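A minimal sketch of this estimate, under the post's own assumptions (100P optimal single-n depth, a 1/10 per-n saving, and survivors scaling as 1/ln(p)^2):

```python
from math import log

def survivor_ratio(p_deep, p_shallow):
    # Assumed scaling: twin candidates surviving a sieve to depth p ~ 1/ln(p)^2,
    # so stopping at p_shallow instead of p_deep leaves this factor more candidates.
    return (log(p_deep) / log(p_shallow)) ** 2

single_n_depth = 100e15                  # 100P
multi_n_depth  = single_n_depth / 2000   # same effort over 20000 n's at 1/10 cost per n -> 50T

print(round(survivor_ratio(single_n_depth, multi_n_depth), 2))   # ~1.54
```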
#214
Jan 2005
Caught in a sieve
5×79 Posts
That's amazing! I had no idea how little a few extra orders of magnitude mattered on sieving. I guess I have a limited imagination.
One interesting thing to look at is what happens if the factor is 1. We get [ln(100P)/ln(5T)]^2 = 1.79, which is almost worth it by itself!

Now, I've done some benchmarks on an improved tpsieve, which is approximately twice as fast as v0.2.2 for large groups of n's. For my version, the ratio is about 1/62! So, assuming your formula is right, that gives [ln(100P)/ln(310T)]^2 = 1.38.

Do we need to benchmark LLR, or has somebody already done that?
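For reference, the same estimate with these factors plugged in, still under the previous post's assumptions (100P single-n depth, 20000 n's, 1/ln(p)^2 survivor scaling):

```python
from math import log

def extra_candidates(per_n_saving, n_count=20000, single_depth=100e15):
    # Same-effort depth when sieving n_count n's at the given per-n cost saving,
    # and the resulting extra-candidate factor under the assumed 1/ln(p)^2 scaling.
    multi_depth = single_depth / (n_count * per_n_saving)
    return multi_depth, (log(single_depth) / log(multi_depth)) ** 2

for saving in (1.0, 1 / 10, 1 / 62):
    depth, factor = extra_candidates(saving)
    print(f"saving {saving:.4f}: depth {depth:.3g}, factor {factor:.2f}")

# saving 1      -> depth 5T,   factor ~1.79
# saving 1/10   -> depth 50T,  factor ~1.54
# saving 1/62   -> depth 310T, factor ~1.38
```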
#215
Account Deleted
"Tim Sorbera"
Aug 2006
San Antonio, TX USA
17·251 Posts
Edit: It's not opening it, but perhaps the download was corrupted. I can't download again because I reached the download limit for free users (should reset in 12 minutes). Should the MD5 be 9A2F77A956D60BE4A84F088435CB9A62?

Last fiddled with by Mini-Geek on 2009-08-11 at 00:40
#216
A Sunny Moo
Aug 2007
USA (GMT-5)
3·2,083 Posts
#217
Mar 2005
Internet; Ukraine, Kiev
11×37 Posts
#218
Account Deleted
"Tim Sorbera"
Aug 2006
San Antonio, TX USA
4267₁₀ Posts
I upgraded 7-Zip to 4.65 (from 4.57) and now it works. From http://www.7-zip.org/history.txt:
Quote:
#219
Feb 2007
211 Posts
Is the client for Linux only? Where is the .exe for Windows, Ken_g6?

Thanks,
cipher
#220
Jan 2005
Caught in a sieve
5·79 Posts
I was wondering when someone would ask about Windows. I'll look at making a 32-bit Windows .exe tomorrow. But I don't have 64-bit Windows. Is there a gcc for 32-bit Windows, or for 32- or 64-bit Linux, that will compile a 64-bit Windows binary? Or does someone want to compile it for me?

By the way, I just made another ~20% speed improvement to the 64-bit code. It now runs 4 times as fast as the 32-bit code for many-n sieves. I also tried the 64-bit code with "-t 4", and I think because it does most of its work in the processor registers, 4 threads seem to be as effective as two instances of 2 threads would be.

Last fiddled with by Ken_g6 on 2009-08-11 at 05:06