mersenneforum.org TF Level policy

2018-12-06, 02:41   #23
GP2

Sep 2003

A12₁₆ Posts

Quote:
 Originally Posted by chalsall You yourself demonstrated that there is little cross-over between factors found by TF'ing or P-1'ing.
What LaurV said was "All in all, you may have a chance of 0.0025% plus or minus something, to find a factor by P-1 that was missed by TF.", and this is completely wrong.

I estimate the chances are around 10 to 20%, and I consider that to be "little cross-over". Those are still fairly low odds by normal everyday standards.

Quote:
 We have now effectively finished TF'ing 89M to 76 bits, and are close to finishing 90M to same.
Pick a B1 and B2 and I will tell you what percentage of the 76-bit factors in the 89M range would have been found by P−1 using those limits.

Edit: based on the Factoring Effort report at mersenne.org, the typical P−1 limits used in the 89M range are B1=720k B2=14M, do you agree?

Quote:
 Are you now telling us that we're wasting our time TF'ing?
No, because at least 80% of the factors you are finding with TF will not be found by P−1.

Last fiddled with by GP2 on 2018-12-06 at 02:59

2018-12-06, 02:56   #24
chalsall
If I May

"Chris Halsall"
Sep 2002

21170₈ Posts

Quote:
 Originally Posted by GP2 Pick a B1 and B2 and I will tell you what percentage of the 76-bit factors in the 89M range would have been found by P−1 using those limits.
How about going the other way?

Given the knowledge of the candidates being TF'ed to 76 bits, how many are likely to be factored by P-1'ing? And, separately, what is the savings of the P-1 algorithm knowing the candidates have already been TF'ed to 76?

2018-12-06, 03:12   #25
GP2

Sep 2003

2×1,289 Posts

Quote:
 Originally Posted by chalsall Given the knowledge of the candidates being TF'ed to 76 bits, how many are likely to be factored by P-1'ing?
Well, I've already been giving estimates. But you have to actually specify a particular factor in order to retroactively determine which B1 and B2 would have found that factor by P−1.

That's why I'm suggesting to use a bit-length range where the factors have all already been found by TF, and then I can tell you precisely which ones would have also been found by P−1 with some specific B1, B2 if they had been missed by TF.
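As a rough sketch of that retroactive check, here is how one might test whether a known factor would have been found by P-1 with given bounds, assuming prime95's usual behaviour of folding the factor 2 and the exponent p into the stage-1 exponent for free (the function names and structure are mine, not prime95's):

```python
def prime_power_factors(n):
    """Trial-division factorization of n into (prime, exponent) pairs."""
    out = []
    d = 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            out.append((d, e))
        d += 1
    if n > 1:
        out.append((n, 1))
    return out

def p1_would_find(f, p, B1, B2):
    """Would P-1 with bounds B1, B2 find the known factor f of M_p?

    Any factor of M_p has the form 2*k*p + 1, and the 2 and the p are
    folded into the stage-1 exponent for free, so only k's smoothness
    matters: stage 1 covers every prime power q**e <= B1, and stage 2
    can supply at most one extra prime (first power only) in (B1, B2].
    """
    k = (f - 1) // (2 * p)
    over = [(q, e) for q, e in prime_power_factors(k) if q ** e > B1]
    if not over:
        return True                  # pure stage-1 hit
    if len(over) == 1:
        q, e = over[0]
        return e == 1 and q <= B2    # single stage-2 prime
    return False
```

For example, M11 = 2047 = 23 × 89: the factor 23 (k = 1) falls out at any bounds, while 89 (k = 4 = 2²) needs B1 ≥ 4.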

Quote:
 And, separately, what is the savings of the P-1 algorithm knowing the candidates have already been TF'ed to 76?
I don't think the P−1 algorithm itself can benefit from the fact that an exponent has already been TF'd to a certain number of bits, in the sense that TF doesn't provide "sieving" for P−1.

However I think Primenet does make use of that information, to adjust the B1 and B2 limits it assigns. If I'm not mistaken, if a larger amount of TF has already been done, then Primenet will actually assign larger B1 and B2 for the P−1 test.

Last fiddled with by GP2 on 2018-12-06 at 03:16

2018-12-06, 03:28   #26
axn

Jun 2003

3×1,511 Posts

Quote:
 Originally Posted by GP2 What LaurV said was "All in all, you may have a chance of 0.0025% plus or minus something, to find a factor by P-1 that was missed by TF.", and this is completely wrong.
But you also need to factor in (no pun intended) the odds that a TF "missed" a factor. That means:
a) an exponent had a factor in the TF search space and
b) said factor was not found due to faulty TF (h/w, s/w, PEBKAC, etc.) and
c) said factor could be found by P-1 and finally
d) said factor was indeed found by P-1 (i.e P-1 itself was not faulty, or run with poor bounds)

Your comment is only regarding point c. His, as I interpreted it, concerns all 4 together.
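To make the distinction concrete, here is a toy calculation of the combined probability across all four events. Every input below is a made-up placeholder for illustration, not a measured rate:

```python
# Illustrative only: the chance that P-1 rescues a TF miss is roughly the
# product of the four events listed above. All numbers are placeholders.
p_factor_in_range = 0.03   # a) a factor exists in the TF'd bit window
p_tf_missed_it    = 0.01   # b) TF ran but failed (h/w, s/w, PEBKAC)
p_p1_smooth       = 0.15   # c) the factor is smooth enough for the bounds
p_p1_ran_cleanly  = 0.98   # d) the P-1 run itself was not faulty

p_rescue = p_factor_in_range * p_tf_missed_it * p_p1_smooth * p_p1_ran_cleanly
print(f"{p_rescue:.6%}")   # about 0.0044%, the same order as LaurV's figure
```

Even with a generous 15% for point c alone, the product lands near LaurV's 0.0025% once the other three events are priced in.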

2018-12-06, 03:30   #27
axn

Jun 2003

4533₁₀ Posts

Quote:
 Originally Posted by GP2 If I'm not mistaken, if a larger amount of TF has already been done, then Primenet will actually assign larger B1 and B2 for the P−1 test.
No. Larger TF leads to smaller P-1 bounds. Don't know why it behaves that way. You can play around with James's calculator: https://www.mersenne.ca/prob.php

2018-12-06, 03:56   #28
GP2

Sep 2003

2×1,289 Posts

Quote:
 Originally Posted by axn No. Larger TF leads to smaller P-1 bounds. Don't know why it behaves that way. You can play around with James's calculator: https://www.mersenne.ca/prob.php
I could have sworn I actually tested it directly with mprime a few months ago, but I can't find the scratch directory I used for the test.

You have to use Pfactor as documented in undoc.txt, rather than Pminus1, since the latter doesn't let you specify how much TF was already done.

Code:
Pfactor=1,2,n,-1,how_far_factored,num_primality_tests_saved
If you run it with ./mprime -d (or in the Prime95 window), it tells you what B1 and B2 it selected.
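A hypothetical worktodo.txt entry of this form, for an exponent in the 89M range already TF'd to 76 bits with 2 primality tests saved (the exponent shown is purely illustrative; a real assignment needs a prime exponent):

Code:
Pfactor=1,2,89000029,-1,76,2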

2018-12-06, 04:07   #29
GP2

Sep 2003

5022₈ Posts

Quote:
 Originally Posted by axn But you also need to factor in (no pun intended) the odds that a TF "missed" a factor. That means:
 a) an exponent had a factor in the TF search space and
 b) said factor was not found due to faulty TF (h/w, s/w, PEBKAC, etc.) and
 c) said factor could be found by P-1 and finally
 d) said factor was indeed found by P-1 (i.e P-1 itself was not faulty, or run with poor bounds)
 Your comment is only regarding point c. His, as I interpreted it, concerns all 4 together.
LaurV's comment starts with "If [...] you TF to 75 bits and missed a factor, due to bad machine or odd luck, or whatever", specifies "say you make P-1 with B1=10M and B2=300M" in the middle, and ends with "All in all, you may have a chance of 0.0025% plus or minus something, to find a factor by P-1 that was missed by TF."

So the odds of b) are 100% because it's actually his starting assumption, and for d) he assumes specific, not-poor bounds. And regarding a), he specifies that the factor was missed due to a bad machine or bad luck (cosmic ray?), not because it doesn't actually exist.

Last fiddled with by GP2 on 2018-12-06 at 04:17

2018-12-06, 22:11   #30
chalsall
If I May

"Chris Halsall"
Sep 2002

8824₁₀ Posts

Quote:
 Originally Posted by GP2 LaurV's comment starts with "If [...] you TF to 75 bits and missed a factor, due to bad machine or odd luck, or whatever".
Perhaps unreasonably, let us presume that all TF'ing attempts actually searched the possible factor space successfully. Even if a few factors were missed because of bad kit or cheating, such misses would likely be exceptionally rare.

Surely knowing that there are no factors below 76 bits would be useful for further optimized searching by P-1 and other methods?

2018-12-07, 00:31   #31
petrw1
1976 Toyota Corona years forever!

"Wayne"
Nov 2006

17·251 Posts

Quote:
 Originally Posted by GP2 What LaurV said was "All in all, you may have a chance of 0.0025% plus or minus something, to find a factor by P-1 that was missed by TF.", and this is completely wrong. I estimate the chances are around 10 to 20%, and I consider that to be "little cross-over". Those are still fairly low odds by normal everyday standards. No, because at least 80% of the factors you are finding with TF will not be found by P−1.
Could someone do a database query to see what percent of P-1 factors are within the limits of the current TF level?

Wouldn't that answer the question, not based on math but at least based on stats?
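A sketch of the statistic being asked for, assuming one has pulled (factor, tf_bits) pairs for P-1 factoring results out of the database. The function name and the sample data below are made up purely for illustration:

```python
def pct_within_tf_level(p1_results):
    """Percent of P-1-found factors lying below the already-TF'd bit level."""
    hits = sum(1 for factor, tf_bits in p1_results if factor < 2 ** tf_bits)
    return 100.0 * hits / len(p1_results)

# Made-up sample: four P-1 factors from exponents TF'd to 76 bits.
sample = [
    (2 ** 70 + 3, 76),  # within reach of TF to 76 bits: cross-over
    (2 ** 74 + 5, 76),  # likewise
    (2 ** 80 + 7, 76),  # beyond the TF level: P-1 exclusive
    (2 ** 91 + 9, 76),  # likewise
]
print(pct_within_tf_level(sample))  # 50.0 on this made-up sample
```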

2018-12-07, 00:39   #32
GP2

Sep 2003

2×1,289 Posts

Quote:
 Originally Posted by chalsall Perhaps unreasonably, let us presume that all TF'ing attempts actually searched the possible factor space successfully. Even if a few factors were missed because of bad kit or cheating, this would likely be exceptionally rare.
LL testing has an error rate of up to 4% or so. Some people overclock beyond prudent limits. That would affect CPU-based TF testing too, not that there's much of that done anymore.

User TJAOI has systematically found all factors of 65 bits or less using an alternate "by k" method, and he did find some factors that CPU-based TF had missed, many years earlier.

Now that TF testing is done by GPU, there is still scope for errors. A consumer-grade GPU is designed for graphics, so nobody cares if there's one pixel somewhere in one frame of a video on your high-res screen that momentarily takes on a bad value. But for calculations it does matter.

Quote:
 Surely knowing that there are no factors below 76 bits would be useful for further optimized searching by P-1 and other methods?
My grasp of math is tenuous, but I don't see any place in the P−1 algorithm where it could make use of the fact that factors must be above a certain minimum size. That information might, however, guide your choice of B1 and B2, in the same way that you'd pick larger B1 and B2 for ECM when you think the factors are on the large side.

2018-12-07, 00:41   #33
science_man_88

"Forget I exist"
Jul 2009
Dumbassville

20C0₁₆ Posts

Quote:
 Originally Posted by chalsall Surely knowing that there are no factors below 76 bits would be useful for further optimized searching by P-1 and other methods?
Assuming twice as many possibilities as LaurV's numbers, you get that the lower bound of P-1 must be above 20000003 to find anything, at least by my math.

Last fiddled with by science_man_88 on 2018-12-07 at 00:48

