#56 | "Mark" (Apr 2003)
I have some questions and I'm certain that Gary or one of the other lurkers here can answer.
First, given the weight of one of these single-k conjectures and a range of n, is there a formula that could tell one how deep that range needs to be sieved? I'm not looking for an exact formula, but I suspect that generating an estimate shouldn't be too hard.

Second, given the weight of one of these single-k conjectures and a range of n, how does one estimate how many n will be left after sieving?

By combining the two answers I can then estimate (given my hardware) how long it would take for me to sieve and PRP test a range of arbitrary size (presuming that no prime would be found).
#57 | "Tim Sorbera" (Aug 2006, San Antonio, TX USA)
Some more exact answers would be great, but here's what I've got:
I can't tell you how it is calculated, but if you run sr2sieve, it will tell you how many factors to expect in the range you tell it to search. Subtract some portion to account for multiple factors landing on the same candidate (I'm sure an exact formula exists and is not too complicated; I don't have time at the moment to work it out), then subtract the result from the total, and you've got your estimated candidates remaining. A rough sketch of the overlap correction is below. Note that this requires some amount of presieving, not just knowing the weight.
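One way to approximate that "multiple factors on one candidate" correction, assuming the factors land on candidates roughly independently (a Poisson model; the exact formula may well differ), sketched in Python with made-up numbers:

Code:
import math

def estimate_survivors(candidates, expected_factors):
    """Rough estimate of candidates left after sieving.

    Assumes the factors the sieve expects to find hit candidates
    roughly independently (Poisson), so the chance that a given
    candidate collects no factor at all is exp(-F/C)."""
    return candidates * math.exp(-expected_factors / candidates)

# Hypothetical example: 10,000 candidates in the file and sr2sieve
# predicts 3,000 factors.  Naive subtraction would leave 7,000;
# correcting for overlapping factors leaves a bit more:
print(round(estimate_survivors(10000, 3000)))  # ~7408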
#59 | gd_barnes (May 2007, Kansas, USA)
Those are very tough questions and there is no exact answer. Personally, I am adamant about not sieving a range that would take more than 1-2 years to fully test, because sieving and testing software speeds continue to increase so quickly. For that reason, at these levels, for a "personal project" I would generally sieve a 2X or 2.5X range, that is, one where the high n-value is 2 to 2.5 times the low n-value. When doing that, you can time a PRP test on a candidate at ~60% of the n-range and then sieve until the time to remove one factor is approximately the length of that test; a sketch of that stopping rule follows.
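Put differently, sieving pays for itself for as long as a factor is found faster than a PRP test runs, since every factor eliminates one future test. A minimal sketch (Python; the timings are made-up placeholders for whatever your own hardware reports):

Code:
def keep_sieving(secs_per_factor, secs_per_prp_test):
    """Stop-sieving rule of thumb: each factor found removes one future
    PRP test, so sieving is worthwhile while finding a factor costs
    less time than running the test it would eliminate."""
    return secs_per_factor < secs_per_prp_test

# Hypothetical timings: a PRP test at ~60% of the n-range takes 400 s
# and the sieve is currently removing one candidate every 250 s:
print(keep_sieving(250, 400))  # True -> keep sieving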
If you would like to sieve a range that gives about a 50% chance of prime, that can't be calculated very accurately from the weight alone. I'd suggest sieving, using srsieve, a range of perhaps n=100K-500K to P=100M or 1G. Then plug everything into the "odds of prime" spreadsheet, which takes into account the # of candidates remaining. If the chance of prime is < 50%, then sieve a larger range; if > 50%, then sieve a smaller range.

As for how deep to sieve it, that depends on n-max/n-min. If > 2.5X, then pieces need to be broken off and tested while sieving continues on the remainder. If <= 2.5X, then refer to the first paragraph.

Gary
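For anyone without the spreadsheet handy, a rough stand-in for that calculation is sketched below. It uses the common heuristic that a candidate N = k*b^n+/-1 surviving sieving to depth P is prime with probability about e^gamma * ln(P) / ln(N); the candidate list here is a made-up placeholder for a real sieve file, so treat the output as illustrative only:

Code:
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def odds_of_prime(k, b, n_values, sieve_depth):
    """Heuristic chance that at least one surviving candidate k*b^n+/-1
    is prime, given that the file was sieved to sieve_depth."""
    ln_p = math.log(sieve_depth)
    prob_no_prime = 1.0
    for n in n_values:
        ln_cand = math.log(k) + n * math.log(b)  # ln(k*b^n)
        prob_no_prime *= 1.0 - math.exp(GAMMA) * ln_p / ln_cand
    return 1.0 - prob_no_prime

# Stand-in for a real sieve file: 8*328^n-1, n=100K-500K, P=1G.
survivors = range(100003, 500000, 100)
print(f"{odds_of_prime(8, 328, survivors, 1e9):.0%}")  # ~10% for this toy list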
#61 | "Mark" (Apr 2003)
I have one more question, which I presume has an easy answer. I know that doubling the sieve depth reduces the number of candidates by about 30%. My related question is: how does doubling the sieve depth impact the removal rate? It is clearly not reduced by 30%. I would guess that it would be around 70%, i.e. if the removal rate is around 1 per second at 1e9, then it would be around 1.7 per second at 2e9. Does that sound right?
#62 | gd_barnes (May 2007, Kansas, USA)
On the second statement, you have it reversed, but I think I know what you mean. You're saying that if at a sieve depth of 1e9 the removal rate is 1 every 1 second, then at 2e9 the removal rate would be 1 every 1.7 secs (vs. 1.7 per 1 sec).

Anyway, no, it's almost a straight ratio: when you double the sieve depth, you halve the removal rate; when you triple the depth, you divide the removal rate by 3. Why? Because each candidate has half (or 1/3rd) as much chance of having a specific prime factor that is being tested. So at 2e9, it would be 1 every 2 secs.

Note that it would be an EXACT ratio of 2 to 1 or 3 to 1 if you left all candidates in the file to be sieved even after a factor was found for them. But since the programs do not continue to find factors for candidates that already have a factor (if not stopped mid-stream, sr2sieve knows to remove them in memory and not search them any more), such ratios are more like 2.1 or 2.2 to 1, or 3.1 or 3.2 to 1. A toy calculation of that effect is below.
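To see where the "a bit above 2:1" comes from, here is a toy calculation (Python). It models the shrinking file with a Mertens-style assumption that the number of surviving candidates scales like 1/ln(depth); the file size is a made-up example:

Code:
import math

def survivors(c0, depth_from, depth_to):
    """Mertens-style estimate: the fraction of candidates that survive
    as the sieve moves from depth_from to depth_to is roughly
    ln(depth_from) / ln(depth_to)."""
    return c0 * math.log(depth_from) / math.log(depth_to)

# The removal rate scales like candidates/depth.  Because found
# candidates are dropped from the file, slightly fewer remain at 2e9
# than at 1e9, so the measured ratio lands a bit above the exact 2:1:
c1 = 10000                    # hypothetical file size at P = 1e9
c2 = survivors(c1, 1e9, 2e9)  # ~9677 candidates left at P = 2e9
rate_ratio = (c1 / 1e9) / (c2 / 2e9)
print(f"{rate_ratio:.2f}")    # ~2.07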
I have one more thing to add to a statement that I previously made:

Quote:

Gary

Last fiddled with by gd_barnes on 2011-04-13 at 06:22
#64 | gd_barnes (May 2007, Kansas, USA)
Yes, because in addition to removing factors from n=100K-250K, you are also removing factors from n=250K-500K at the same time, which saves much time in the future and is quite efficient, since the speed of sr(x)sieve scales approximately with the square root of the n-range. It's (almost) the fastest way to sieve and test the entire n=100K-500K range, regardless of whether a prime is found early or not; see the toy cost comparison below.

The hard part is determining the optimum range to sieve in the first place, given that we stop testing once a prime is found. If you sieve too large a range, there is a great chance of finding a prime before you are done, resulting in a lot of wasted sieving. If you sieve too small a range, you lose a lot of sieving efficiency because of the speed scaling mentioned above. I generally recommend sieving an n-max/n-min ratio of 2X to 4X. You could go higher if the k or k's are extremely low weight, but I'd suggest almost never going lower.
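A toy cost comparison (Python) built only on that square-root scaling; the absolute numbers are arbitrary, and real sr(x)sieve timings will vary:

Code:
import math

def sieve_cost(n_range, depth):
    """Toy model: sr(x)sieve time grows roughly with the square root of
    the n-range and, to first order, linearly with the sieve depth."""
    return math.sqrt(n_range) * depth

# Sieving n=100K-500K in one pass vs. two separate pieces:
combined = sieve_cost(400e3, 1e9)
in_pieces = sieve_cost(150e3, 1e9) + sieve_cost(250e3, 1e9)
print(f"combined/in_pieces = {combined / in_pieces:.2f}")  # ~0.71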
Last fiddled with by gd_barnes on 2011-04-13 at 19:14 |
#65 | "Mark" (Apr 2003)
Should the weights listed in the first post account for the removal of algebraic factorizations? I know that they don't currently do so. For example, S182 has a weight of 389, but if one removes the n for which 8*182^n+1 can be factored algebraically, that is reduced to 120. My point is that this conjecture is much harder to prove, since so many n are removed for that reason.
These are the updated numbers (weight as listed, then weight with algebraically factorable n removed):

Code:
8*86^n+1     1017   571
32*87^n+1     342   257
8*182^n+1     389   120
27*220^n+1   1136     0
27*252^n+1   2164  1312
8*263^n+1     363   211
27*328^n+1    870   489
8*353^n+1     613     0
8*426^n+1    1288   486
8*428^n+1     655   258
8*497^n+1     943   570
9*724^n+1    1573  1573
8*758^n+1     549   281
8*785^n+1     588   178
8*828^n+1    1136   607
8*866^n+1     440     0
8*930^n+1    1645  1107
8*953^n+1    1155   757
8*983^n+1     853   425
4*72^n-1     1211   838
64*177^n-1   1607  1307
32*221^n-1   1317   988
4*275^n-1    1472  1472
8*321^n-1    1017   606
8*328^n-1     915   522
16*333^n-1   1737  1389
32*402^n-1   1126   842
64*425^n-1   1607  1607
9*636^n-1    2840  1758
4*650^n-1    1122  1122
8*665^n-1    1582   610
9*688^n-1    1252   641
32*702^n-1   2339  1780
64*741^n-1   2429  2429
8*761^n-1    1527  1142
4*812^n-1    1052  1052
8*815^n-1    1988   984
221*850^n-1  1414  1414
8*866^n-1    1595   803
8*867^n-1     836   836
4*968^n-1     938   938

Could somebody take the time to verify my synopsis? If it is verified, then we have a problem: it is possible that the Sierpinski/Riesel number for other unproven conjectures is wrong. If I'm wrong, then I must have a bug in my release of srsieve.
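One source of such factorizations for the k=8 "+1" forms above is the sum-of-cubes identity: whenever 3 | n, 8*b^n+1 = (2*b^(n/3))^3 + 1, which always splits. A minimal sketch verifying this (Python; the helper name is mine):

Code:
def algebraic_factor(b, n):
    """For 8*b^n+1 with 3 | n, write it as x^3 + 1 where x = 2*b^(n//3);
    x^3 + 1 = (x + 1)(x^2 - x + 1), so these n can never yield a prime
    and drop out of the effective weight."""
    assert n % 3 == 0
    x = 2 * b ** (n // 3)
    return x + 1, x * x - x + 1

# Example for S182: every n divisible by 3 drops out of 8*182^n+1.
f1, f2 = algebraic_factor(182, 6)
assert f1 * f2 == 8 * 182 ** 6 + 1
print(f1, f2)  # f1 = 66249 = 2*182^2 + 1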