mersenneforum.org > Data > Thinking out loud about getting under 20M unfactored exponents

2021-09-29, 17:37   #771
chalsall
If I May

"Chris Halsall"
Sep 2002

2×5,023 Posts

Quote:
 Originally Posted by De Wandelaar I'll take 34.4.
"Released" for you to work. Good luck!

2021-09-29, 17:40   #772
De Wandelaar

"Yves"
Jul 2017
Belgium

2·3·13 Posts

Quote:
 Originally Posted by chalsall "Released" for you to work. Good luck!
Thank you, Chris !!
Yves

2021-09-30, 08:35   #773
kruoli

"Oliver"
Sep 2017
Porta Westfalica, DE

1377₈ Posts

Quote:
 Originally Posted by petrw1 According to the math page TF should produce factors at the rate of about 1/X. So for TF75 that would be 1/75 or about 26 factors. However, I have found that, on average, with these ranges that have lots of P1 done the rate is closer to 1/100.
Maybe I should rephrase my question, since it was about P-1 and not TF (have I signed up for TF?). In your listing, you wrote "don't pick B1/B2 with over 3-4% success rate". Do you mean I should pick bounds such that the factor probability is 3-4% when ignoring previous factor work (which I would have called "gross"; such bounds would be extremely close to the original P-1 work, so basically less than 1 factor per 100 runs "in the real world")? Or do you mean I should use bounds that take previous work (TF and P-1) into consideration and give a 3-4% "real world" success probability (which is roughly what I was targeting)?

For 26.3M, I guess you'll have to strike me out because Chris is finishing it up.

@Chris: Thanks for releasing the other two ranges for me.

2021-09-30, 17:25   #774
masser

Jul 2003
Behind BB

11011100001₂ Posts

Quote:
 Originally Posted by kruoli Do you mean I should pick bounds such that the factor probability is 3-4% when ignoring previous factor work (which I would have called "gross"; such bounds would be extremely close to the original P-1 work, so basically less than 1 factor per 100 runs "in the real world") or do you mean I should use bounds that take previous work (TF and P-1) into consideration and give a 3-4% "real world" success probability (which is roughly what I was targeting)?
I'm fairly certain that he meant 3-4% "real world" success probability.

The formula I use is: Actual Probability = (Pr(new)-Pr(old)) / (1-Pr(old))
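
(To make the "gross" versus "real world" distinction concrete, here is a minimal sketch of that formula; the function and variable names are my own illustration, not from any GIMPS tool.)

Code:
# Minimal sketch of the conditional-probability formula above (illustrative only).
# pr_new: probability the new, larger P-1 bounds find a factor, ignoring prior work.
# pr_old: probability the previously completed P-1 bounds would have found a factor.

def net_probability(pr_new, pr_old):
    """Chance the new run finds a factor, given that the old run did not."""
    return (pr_new - pr_old) / (1.0 - pr_old)

# Example: previous bounds gave a 2% chance, new bounds give 5% "gross".
print(net_probability(0.05, 0.02))  # ~0.0306, i.e. about a 3.1% "real world" chance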

2021-09-30, 19:24   #775
chalsall
If I May

"Chris Halsall"
Sep 2002

2·5,023 Posts

Quote:
 Originally Posted by masser The formula I use is: Actual Probability = (Pr(new)-Pr(old)) / (1-Pr(old))
Please forgive me for revealing my massive lack of knowledge...

I have found that the probabilities given by James' specialized Worktodo generator differ quite a bit from those mprime calculates at runtime, and again from those on James' deep drill-down if a factor is found.

I'm mostly just wondering if anyone else sees this, or if I'm not understanding things deeply enough (very high probability of the latter).

P.S. BTW, we seriously overshot 13.7M. Sorry about that. P-1 was more successful than expected.

2021-09-30, 19:49   #776
masser

Jul 2003
Behind BB

6E1₁₆ Posts

Quote:
 Originally Posted by chalsall Please forgive me for revealing my massive lack of knowledge... I have found that the probabilities given by James' specialized Worktodo generator differ quite a bit from those mprime calculates at runtime, and again from those on James' deep drill-down if a factor is found. I'm mostly just wondering if anyone else sees this, or if I'm not understanding things deeply enough (very high probability of the latter). P.S. BTW, we seriously overshot 13.7M. Sorry about that. P-1 was more successful than expected.
I have noticed the discrepancies between the various probabilities reported. A few hunches/observations:
1. mprime is most correct; its calculator is getting attention with recent updates to the P-1/ECM/P+1 algorithms and comparisons to gpuOwl.
2. James' exponent pages (the deep drill-downs) report values fairly close to the mprime probabilities.
3. Something weird happens with the "worst bounds" page. It's not quite as dynamic as we might like; exponents can only be removed (when someone does an improved P-1). Perhaps it should be possible for exponents to join the list when the P-1 bounds become "small" relative to the amount of TF that's been done.

2021-10-01, 00:40   #777
petrw1
1976 Toyota Corona years forever!

"Wayne"
Nov 2006

3·1,613 Posts

Quote:
 Originally Posted by masser I'm fairly certain that he meant 3-4% "real world" success probability. The formula I use is: Actual Probability = (Pr(new)-Pr(old)) / (1-Pr(old))
No, correct me if I'm wrong. I don't have a PhD in Math, or anything for that matter.
All I use is: Actual Probability = Pr(new)-Pr(old).

If the prior P1 had a 2% probability and the new P1 has a 5% probability, then isn't the new P1 actually expected to have a net 3% success rate?
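
(For what it's worth, here is a quick numeric check of the two formulas on that 2%/5% example; my own sketch, not from either post.)

Code:
# Comparing the simple difference with the conditional formula on the
# 2% -> 5% example (illustrative numbers only).
pr_old, pr_new = 0.02, 0.05

approx = pr_new - pr_old                    # simple difference
exact = (pr_new - pr_old) / (1.0 - pr_old)  # conditional on the old run having failed

print(f"approx = {approx:.4%}, exact = {exact:.4%}")
# approx = 3.0000%, exact = 3.0612%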

2021-10-01, 00:45   #778
petrw1
1976 Toyota Corona years forever!

"Wayne"
Nov 2006

11347₈ Posts

Quote:
 Originally Posted by chalsall Please forgive me for revealing my massive lack of knowledge... I have found that the probabilities given by James' specialized Worktodo generator differ quite a bit from those mprime calculates at runtime, and again from those on James' deep drill-down if a factor is found. I'm mostly just wondering if anyone else sees this, or if I'm not understanding things deeply enough (very high probability of the latter).
I have done close to 54,000 P1 for this project ... I think that is a statistically significant sample size?
As in my formula above: Actual Probability = Pr(new)-Pr(old).
My overall average success rate is about 0.25 percentage points higher than this formula predicts.
Though it does seem to be a little closer with the new version of P-1 (which I assume includes updated estimates).
Of course I have really bad ranges and also really good ones (like your 13.7)....but overall....
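
(As a rough sanity check that ~54,000 runs can resolve a quarter-point discrepancy, here is a sketch assuming an average predicted success rate of about 3% per run; that figure is my own illustrative assumption, since the actual per-exponent predictions vary.)

Code:
import math

# Rough sanity check: can ~54,000 P-1 runs resolve a 0.25-percentage-point
# excess over the predicted success rate?
# ASSUMPTION: an average predicted rate of ~3% per run (illustrative only).
n = 54_000
p_predicted = 0.03
excess = 0.0025  # observed success rate minus predicted success rate

std_err = math.sqrt(p_predicted * (1 - p_predicted) / n)
print(f"standard error  ~ {std_err:.4%}")           # ~0.0734%
print(f"excess in sigma ~ {excess / std_err:.1f}")  # ~3.4, i.e. not just noise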

2021-10-01, 00:51   #779
petrw1
1976 Toyota Corona years forever!

"Wayne"
Nov 2006

3·1,613 Posts

Quote:
 Originally Posted by kruoli Maybe I should rephrase my question, since it was about P-1 and not TF (have I signed up for TF? ). In your listing, you wrote "don't pick B1/B2 with over 3-4% success rate". Do you mean I shall pick them such that the factor probability is 3-4% when ignoring previous factor work (what I would have called "gross", which would be extremely close to the original P-1 work, so basically less than 1 factor per 100 runs "in the real world") or do you mean that I shall use bounds that take previous work (TF and P-1) into consideration and will have 3-4% "real world" success probability (what is roughly what I was targeting)? For 26.3M, I guess you'll have to strike me out because Chris is finishing it up. @Chris: Thanks for releasing the other two ranges for me.
Yes, I have you down for 27.3 and 28.1.
See my formula above, in my reply to masser.
I'm waiting for him to tell me if I'm mathematically confused. :(

2021-10-01, 00:59   #780
masser

Jul 2003
Behind BB

11011100001₂ Posts

RDS once reminded me about conditional probabilities. When P1 is relatively small, P2-P1 is a good enough approximation.
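
(To see why the simple difference is a good approximation when the prior probability is small, note the identity below; this is my own addition for illustration, writing P1 for Pr(old) and P2 for Pr(new) as in masser's post.)

\[
\frac{P_2 - P_1}{1 - P_1} \;-\; (P_2 - P_1) \;=\; (P_2 - P_1)\,\frac{P_1}{1 - P_1}
\]

So for P1 = 2%, the exact conditional probability exceeds the simple difference by only about 2% of its own value.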
2021-10-01, 01:06   #781
petrw1
1976 Toyota Corona years forever!

"Wayne"
Nov 2006

3·1,613 Posts

Quote:
 Originally Posted by masser RDS once reminded me about conditional probabilities. When P1 is relatively small, P2-P1 is a good enough approximation.
So he is talking about ECM ... does that make a difference?

