mersenneforum.org Attempts vs. Successes oddity

 2014-09-06, 05:02 #1 Rodrigo     Jun 2010 Pennsylvania 2×467 Posts Attempts vs. Successes oddity Over time I've developed a sense of the GPU factoring process, including an idea of the general proportion of TF exponents that come out as Not Prime and how long (in GHz-Days) it takes to factor to various exponent levels. Thus, for example, I can grasp how "dbaugh," who stands at #5 on the Top Trial Factoring Producers list, could have such a large number of GHz-Days for the small number of TF attempts: I attribute this to factoring to very deep levels. (Right?) However, there is one remarkable phenomenon that leaves me scratching my head. How can #16, Bill Staffen, have found only 2 factors in more than eight thousand tries? Is it simply bad luck, or can someone explain (in not too-technical terms -- I'm no mathematician ) how that can be? Typical success seems to run in the 1 - 1.5 percent range. How does one manage to get less than 0.025 percent? What exponent range and to what levels might one work on to achieve this? Just curious... Rodrigo Last fiddled with by Rodrigo on 2014-09-06 at 05:04 Reason: added hyperlink
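[For context on the 1 - 1.5 percent figure: a standard GIMPS heuristic is that the probability of a Mersenne number M_p having a factor between 2^b and 2^(b+1) is roughly 1/b, so at current trial-factoring bit depths (around 2^71 to 2^74) one bit level yields a factor about 1.4% of the time. A minimal sketch of that heuristic:]

```python
# GIMPS rule of thumb: the chance that TF from 2^b to 2^(b+1)
# finds a factor of M_p is roughly 1/b, independent of p.
def tf_success_probability(bit_level: int) -> float:
    return 1.0 / bit_level

for b in (63, 71, 73):
    print(f"bit {b} -> {b + 1}: ~{tf_success_probability(b):.1%}")
```

At bit 71 this gives ~1.4%, squarely inside the range the post reports.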
 2014-09-06, 05:52 #2 LaurV Romulan Interpreter     Jun 2011 Thailand 10010011100101₂ Posts It may be attributable to bad hardware, or to ignorance. As some big gun says here on the forum, don't attribute to malice what can be attributed to stupidity, hehe. I have also suspected some people of wrongdoing on those lists, and even have proof for some, but what can we do? Last fiddled with by LaurV on 2014-09-06 at 05:57
 2014-09-06, 06:23 #3 Rodrigo     Jun 2010 Pennsylvania 2×467 Posts Huh, I hadn't even thought about that sort of possibility (malice or ignorance). I was wondering basically about how, mathematically speaking, such a low ratio of successes to attempts could come about. I'm not sure how I could achieve that kind of ratio, even if I set out to do it on purpose. The laws of probability would seem to preclude it, and how would I know beforehand which exponents to avoid? Rodrigo Last fiddled with by Rodrigo on 2014-09-06 at 06:24 Reason: add'l info
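[To put numbers on "the laws of probability would seem to preclude it": assuming independent attempts with an honest per-attempt success rate of about 1%, the chance of seeing 2 or fewer factors in 8,000 tries can be read off the binomial distribution, and it is vanishingly small. A quick sketch:]

```python
from math import comb, log10

def binom_tail_le(n: int, k: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p): probability of at most k successes."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# 8000 TF attempts at a ~1% success rate, at most 2 factors found:
prob = binom_tail_le(8000, 2, 0.01)
print(f"P(<= 2 factors in 8000 tries) ~ 10^{log10(prob):.0f}")
```

The result is on the order of 10^-31, so sheer bad luck is effectively ruled out; something systematic has to be going on.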
 2014-09-06, 06:42 #4 LaurV Romulan Interpreter     Jun 2011 Thailand 5×1,889 Posts You are thinking too much like an honest guy. Just make a list of 100 lines of "no factor for exponent xxxx from aa to bb [mfakto blah blah]" with Notepad and send it to the server. It will be digested and you will get the credit. No need to do any work. Or pick an ECM assignment (3 curves is enough) and do it with P95 offline, so it can't submit on its own, then submit the results manually, but before submission change "3 curves" to "150 curves". They generate the same checksum, and you even have a valid assignment key. Otherwise how do you explain some guy like NOOE (the one with the palindromic name) going from zero to hero in the ECM lifetime top lists in such a short time? (The same guy both LLed and "double checked" the largest exponent, ~383M or so; George said he knows the guy and he is no faker, but let me doubt.) I mean, I am also a bit of a "credit whore", but in a different direction: I like to get the right credit for the work I did, but I won't go so far as deliberately falsifying results. Other people do. This subject is over-debated, if you look around the forum. In the end, they don't cause too much damage, as all exponents will end up either with a factor or with a double/triple check done by independent users. The only "bad" thing is that if a factor is missed because a range was fake-reported, someone will lose a few days on an LL test that would not have been needed had the factor been known. [edit: it would be nice to know which assignments your guy fulfilled; they are not many, and if reasonable, I can repeat them with my farm. I say "if reasonable" because he may be doing low expos, where lots of P-1 and ECM has already been done and the chances of finding factors are much lower.
This would also justify the high credit: for example, doing 100k expos to 2^63 is the same effort as doing 100M expos to 2^73 (the 2^10 deeper bit level is offset by the ~1000× larger exponent), but the chance of finding a factor is practically nil in comparison. Doing this, he gets high credit and invests even more time, as the tools for factoring low-expo ranges are not as efficient; think of mfaktc, which does 400 GHzD/day on our frontline TF but only 200 GHzD/day or so, on the same card, for low expos.] Last fiddled with by LaurV on 2014-09-06 at 06:57
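[LaurV's effort comparison can be checked directly. TF effort for one bit level is roughly proportional to 2^b / p: candidate factors run up to the bit bound, and they are spaced ~2p apart because any factor of M_p has the form 2kp+1. A sketch using the figures from the post (the constant factors cancel in the ratio):]

```python
def tf_bitlevel_effort(exponent: int, bit_level: int) -> float:
    # Relative effort to TF one exponent up to 2^bit_level:
    # candidates up to 2^bit_level, spaced ~2*exponent apart
    # (factors of M_p must have the form 2*k*p + 1).
    return 2.0**bit_level / exponent

low  = tf_bitlevel_effort(100_000, 63)      # a 100k exponent to 2^63
high = tf_bitlevel_effort(100_000_000, 73)  # a 100M exponent to 2^73
print(f"effort ratio (100k@63 vs 100M@73): {low / high:.2f}")  # ~0.98
```

The ratio works out to 1000/1024 ≈ 0.98, i.e. essentially equal effort, just as the post says, yet the low exponent has already been swept by P-1 and ECM, so the factor yield is far lower.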
 2014-09-06, 06:48 #5 VictordeHolland     "Victor de Hollander" Aug 2011 the Netherlands 2³·3·7² Posts Looking at: http://www.mersenne.ca/stats.php?sho...s=Bill+Staffen he found at least 18 factors this year (2014). Looking at P-1 factoring: 509 attempts, 148 factors (which is a lot), so I think his TF factors are being reported as P-1 factors. Last fiddled with by VictordeHolland on 2014-09-06 at 06:53 Reason: P-1 factors
 2014-09-06, 07:06 #6 LaurV Romulan Interpreter     Jun 2011 Thailand 9445₁₀ Posts Good catch! Those are all TF factors. They are a bit below 2^74, and many have huge (HUGE!) values for B2, so they can't be P-1 factors. I think we just witnessed the "TF factor recorded as P-1 factor" bug in action. This bug will not be a bother in the future; James just said he solved it, and it is confirmed as fixed, see some posts from today in a parallel thread. I think the poor guy was separating the "factor" lines from the "no factor" lines (I used to do this a long time ago too, as I was recording the factors I found) and sending them separately, so if there is no "no factor" line in the file, all the factors get recorded as P-1. Mystery solved... Last fiddled with by LaurV on 2014-09-06 at 07:08
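[A purely illustrative sketch of how a bug like the one LaurV describes could arise: if an importer infers the result type from the file as a whole rather than per line, a submission containing only factor lines carries no "no factor" marker to identify it as TF. This is hypothetical example code with made-up result lines, not the actual server logic:]

```python
# Hypothetical factors-only submission (line formats are approximate
# imitations of mfaktc/mfakto output, not real results):
results = [
    "M59068201 has a factor: 14020003191001513013767",
    "found 1 factor for M59068201 from 2^73 to 2^74 [mfaktc]",
]

def classify(lines):
    # Naive whole-file classifier: only "no factor" lines are clearly TF,
    # so a file with none of them gets misfiled as P-1.
    if any(line.startswith("no factor") for line in lines):
        return "TF"
    return "P-1"

print(classify(results))  # 'P-1', even though these are TF results
```

Per-line classification (or keeping factor lines with their "no factor" siblings) avoids the mislabeling, which matches LaurV's guess that splitting the files triggered it.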
 2014-09-06, 15:58 #7 Rodrigo     Jun 2010 Pennsylvania 2×467 Posts Thanks LaurV and Victor, that was enlightening. Makes sense now. I'll go look for that other thread about the TF/P1 bug. Rodrigo
 2014-09-06, 16:03 #8 TheMawn     May 2013 East. Always East. 11010111111₂ Posts Yep, it's TF factors being reported as P-1, somehow. I had this question a while back and that was the answer.
 2014-09-19, 08:44 #9 snme2pm1     "Graham uses ISO 8601" Mar 2014 AU, Sydney 241 Posts Quote: Originally Posted by LaurV ...wrong doing on those lists, even have proof for some, but what can we do? Surely if you have clear evidence then it can be stated. If the evidence is vague, then perhaps better not. If such feedback were to expose faulty hardware, then is that not also useful? I'm a tiny bit curious as to the nature of the wrongdoing evidence that you possess. Also, don't call me Shirley.

