2018-12-06, 02:41  #23  
Sep 2003
A12_{16} Posts 
Quote:
I estimate the chances are around 10 to 20%, and I consider that to be "little crossover". Those are still fairly low odds by normal everyday standards.
Quote:
Edit: based on the Factoring Effort report at mersenne.org, the typical P−1 limits used in the 89M range are B1=720k, B2=14M. Do you agree?
Last fiddled with by GP2 on 2018-12-06 at 02:59 

2018-12-06, 02:56  #24  
If I May
"Chris Halsall"
Sep 2002
Barbados
21170_{8} Posts 
Quote:
Given the knowledge of the candidates being TF'ed to 76 bits, how many are likely to be factored by P-1'ing? And, separately, what are the savings for the P-1 algorithm, knowing the candidates have already been TF'ed to 76 bits? 

2018-12-06, 03:12  #25  
Sep 2003
2×1,289 Posts 
Quote:
That's why I'm suggesting using a bit-length range where the factors have all already been found by TF; then I can tell you precisely which ones would also have been found by P−1 with some specific B1, B2 if they had been missed by TF.
Quote:
However, I think PrimeNet does make use of that information to adjust the B1 and B2 limits it assigns. If I'm not mistaken, if a larger amount of TF has already been done, then PrimeNet will actually assign larger B1 and B2 for the P−1 test.
Last fiddled with by GP2 on 2018-12-06 at 03:16 
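The experiment proposed above, checking which TF-found factors would also have fallen to P−1 with given bounds, can be sketched in a few lines of Python. This is a rough model under assumptions noted in the comments, not any official GIMPS tool, and the function names are invented for illustration:

```python
# Sketch: given a factor p of M_n = 2^n - 1 that TF actually found,
# decide whether P-1 with bounds B1 <= B2 would also have found it,
# using the textbook smoothness condition.  Every factor of M_n has the
# form p = 2*k*n + 1, so the exponent n comes "for free" in stage 1 and
# is divided out first.  Real Prime95 stage 2 (Brent-Suyama, etc.) can
# find slightly more than this simple model predicts.
from collections import Counter

def trial_factor(m):
    """Factor m by trial division; fine for the small cofactors of
    TF-sized factors, too slow for large prime cofactors."""
    factors = []
    d = 2
    while d * d <= m:
        while m % d == 0:
            factors.append(d)
            m //= d
        d += 1
    if m > 1:
        factors.append(m)
    return factors

def p1_would_find(p, n, B1, B2):
    """True if P-1 on M_n with bounds B1 <= B2 would recover factor p."""
    assert (p - 1) % (2 * n) == 0, "factors of M_n have the form 2kn+1"
    cofactor = (p - 1) // n  # the exponent n is implicit in stage 1
    # Stage 1 absorbs any prime power q^e <= B1; whatever is left over
    # must be a single prime (multiplicity 1) in (B1, B2] for stage 2.
    leftover = [(q, e) for q, e in Counter(trial_factor(cofactor)).items()
                if q ** e > B1]
    if not leftover:
        return True  # pure stage-1 (B1-smooth) find
    if len(leftover) != 1:
        return False
    q, e = leftover[0]
    return e == 1 and B1 < q <= B2

# M_29 = 233 * 1103 * 2089; e.g. 1103 - 1 = 2 * 19 * 29
print(p1_would_find(1103, 29, 10, 20))  # True: 19 is caught in stage 2
print(p1_would_find(1103, 29, 10, 15))  # False: 19 > B2
```

Run over all known TF-found factors in a fixed bit range, this gives the precise crossover fraction for any candidate B1, B2, with the caveat that Prime95's actual stage boundaries and extensions differ at the margins.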

2018-12-06, 03:28  #26  
Jun 2003
3×1,511 Posts 
Quote:
a) an exponent had a factor in the TF search space, and
b) said factor was not found due to faulty TF (h/w, s/w, PEBKAC, etc.), and
c) said factor could be found by P-1, and finally
d) said factor was indeed found by P-1 (i.e. P-1 itself was not faulty, nor run with poor bounds).
Your comment only addresses point c). His (as I interpreted it) concerns all four together. 

2018-12-06, 03:30  #27  
Jun 2003
4533_{10} Posts 
Quote:


2018-12-06, 03:56  #28  
Sep 2003
2×1,289 Posts 
Quote:
You have to use Pfactor as documented in undoc.txt, rather than Pminus1, since the latter doesn't let you specify how much TF was already done.
Code:
Pfactor=1,2,n,-1,how_far_factored,num_primality_tests_saved 
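To make the syntax concrete, a hypothetical worktodo entry (the exponent and values below are purely illustrative, not a real assignment) for an 89M-range candidate already TF'ed to 76 bits, saving one primality test, would look like:
Code:
Pfactor=1,2,89000111,-1,76,1
Prime95 then chooses B1 and B2 itself from the TF depth and the number of tests saved, which is exactly what a Pminus1= line (explicit B1/B2, no how_far_factored field) cannot express.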

2018-12-06, 04:07  #29  
Sep 2003
5022_{8} Posts 
Quote:
So the odds of b) are 100% because it's actually his starting assumption, and for d) he assumes specific, not-poor bounds. And regarding a), he specifies that the factor was missed due to a bad machine or bad luck (cosmic ray?), not because it doesn't actually exist.
Last fiddled with by GP2 on 2018-12-06 at 04:17 

2018-12-06, 22:11  #30  
If I May
"Chris Halsall"
Sep 2002
Barbados
8824_{10} Posts 
Quote:
Surely knowing that there are no factors below 76 bits would be useful for further optimized searching by P-1 et al. methods? 

2018-12-07, 00:31  #31  
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
17·251 Posts 
Quote:
Would that not answer the question, not based on math but at least based on stats? 

2018-12-07, 00:39  #32  
Sep 2003
2×1,289 Posts 
Quote:
User TJAOI has systematically found all factors of 65 bits or less using an alternate "by k" method, and he did find some factors that CPU-based TF had missed, many years earlier.

Now that TF testing is done by GPU, there is still scope for errors. A consumer-grade GPU is designed for graphics, so nobody cares if one pixel somewhere in one frame of a video on your high-res screen momentarily takes on a bad value. But for calculations it does matter.
Quote:


2018-12-07, 00:41  #33 
"Forget I exist"
Jul 2009
Dumbassville
20C0_{16} Posts 
Assuming twice as many possibilities as LaurV's numbers, you get the lower bound of P-1 to be above 20000003 to find anything. At least by my math.
Last fiddled with by science_man_88 on 2018-12-07 at 00:48 