mersenneforum.org > Great Internet Mersenne Prime Search > PrimeNet
2011-10-20, 16:26   #694
axn
Quote:
Originally Posted by Chuck View Post
When I receive a P-1 assignment, sometimes I do additional trial factoring with the GPU from 68 to 71 bits before the P-1 work begins. Is it important to change the worktodo file to reflect this increase before the P-1 process starts?

Chuck
Sort of. The level of TF done does affect the optimal P-1 bounds.
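If you do update it, the change is a single field in the assignment line. As I understand the Pfactor syntax (assignment ID, k, b, n, c, how-far-factored bits, tests saved; treat the exact field order as an assumption and check it against your own worktodo file), the edit would look like:

```
Pfactor=<AID>,1,2,52315441,-1,68,2
```

becoming, after TF from 68 to 71 bits:

```
Pfactor=<AID>,1,2,52315441,-1,71,2
```

where `<AID>` stands for the assignment ID already on the line, and the exponent is illustrative.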
2011-10-20, 16:36   #695
Mr. P-1

Quote:
Originally Posted by Chuck View Post
When I receive a P-1 assignment, sometimes I do additional trial factoring with the GPU from 68 to 71 bits before the P-1 work begins. Is it important to change the worktodo file to reflect this increase before the P-1 process starts?
It's not critically important. The client uses this information to compute the optimal bounds. If the client thinks the exponent has been factored less deeply than it actually has been (or will be; the order of factoring makes no difference to the factors you will actually find), then it will choose somewhat higher bounds than optimal.
2011-10-20, 16:50   #696
Mr. P-1

Quote:
Originally Posted by fivemack View Post
Isn't it only bad if the increased runtime doesn't give a proportionate increase in probability of factor?
The problem is, as axn points out, that we don't know that it does.

In the case where the assignment is a P-1 (rather than an LL assignment getting an initial P-1) there is another issue: the more time spent on each assignment, the fewer the client is able to complete in a given period of time, and the more assignments which pass through to LL testing without having been P-1ed first. Of these, about half never get a stage 2. This means that, in exchange for a slightly increased chance of finding a factor with the exponents we do test, we're losing even more with the exponents we don't.
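The shape of that argument can be put in a toy model; every number below is hypothetical, made up purely for illustration and not a measured GIMPS yield:

```python
# Toy throughput model (all numbers hypothetical) of the trade-off:
# deeper P-1 bounds raise the per-exponent chance of finding a factor,
# but fewer exponents get any P-1 at all within a fixed time budget.

def expected_factors(hours_per_exponent, p_factor, total_core_hours=1000.0):
    """Expected factors found from a fixed budget of core-hours."""
    exponents_worked = total_core_hours / hours_per_exponent
    return exponents_worked * p_factor

# Shallower bounds: 4 h/exponent, ~4.0% chance of a factor each.
shallow = expected_factors(4.0, 0.040)
# Deeper bounds: 6 h/exponent, ~4.5% chance (diminishing returns).
deep = expected_factors(6.0, 0.045)

print(f"shallow bounds: {shallow:.1f} expected factors")  # 10.0
print(f"deep bounds:    {deep:.1f} expected factors")     # 7.5
```

The particular numbers don't matter; the point is that the per-exponent gain from deeper bounds can easily be outweighed by the exponents that consequently get no P-1 at all.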
2011-10-20, 20:56   #697
Christenson

For the vast majority of Mersenne exponents, the correct optimality criterion is proving as many of them composite as possible for the least effort. Factors found per GHz-day is the correct metric.

Mr. P-1 points out that by doing relatively deep P-1, we leave many exponents without any stage 2 P-1, which has a significantly higher return of factors found per unit of time spent. Thus, exponents that could have had a factor found relatively easily are getting LL tested instead.

This is also happening with TF, though in this case, the change is due to an increase in the ease of doing TF on GPUs.
2011-10-20, 21:32   #698
kladner
B-S extension

How does P95/64 indicate that B-S has kicked in? Also, what is Stage 1 GCD?
2011-10-20, 22:07   #699
Mr. P-1

Quote:
Originally Posted by kladner View Post
How does P95/64 indicate that B-S has kicked in?
You'll see an "E=6" (or higher) in your results.txt file, if it fails to find a factor. For some reason it doesn't say when it finds one.

Quote:
Also, what is Stage 1 GCD?
The client performs a GCD (Greatest Common Divisor) calculation at the end of each stage. The GCD extracts the factor(s) found (if any) from the result of the computation in each stage.
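To make concrete what that closing GCD is doing, here is a toy Python sketch of P-1 stage 1 — nothing like prime95's actual implementation, which squares residues modulo 2^p - 1 with large FFTs; this is just the textbook algorithm run on a small, classic example:

```python
from math import gcd, isqrt, log

def small_primes(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, isqrt(limit) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, flag in enumerate(sieve) if flag]

def pminus1_stage1(n, b1, base=3):
    """P-1 stage 1: raise `base` to every prime power <= b1, mod n.
    The closing GCD pulls out any prime factor q of n for which q-1
    is b1-smooth, since then q-1 divides the accumulated exponent
    and base**E == 1 (mod q)."""
    a = base
    for q in small_primes(b1):
        a = pow(a, q ** int(log(b1, q)), n)  # largest power of q <= b1
    return gcd(a - 1, n)

# Classic small example: M67 = 2^67 - 1 has the factor 193707721, and
# 193707720 = 2^3 * 3^3 * 5 * 67 * 2677 is 2700-smooth, so stage 1
# with B1 = 2700 catches it.
m67 = 2 ** 67 - 1
print(pminus1_stage1(m67, 2700))
```

Stage 2 then extends this to factors q where q-1 is B1-smooth apart from one prime between B1 and B2, with its own GCD at the end.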

2011-10-20, 22:09   #700
kladner

Thanks, Mr. P-1!
2011-10-20, 22:36   #701
delta_t

Quote:
Originally Posted by Christenson View Post
Mr. P-1 points out that by doing relatively deep P-1, we have many exponents not getting any stage 2 P-1, which has a significantly higher return of factors found per time spent. Thus, exponents that could have had a factor found relatively easily are getting LL tested instead.
It's too bad this is happening. I know a lot of us have been trying to get P-1 done before the first LL, but I guess too few exponents are still getting P-1 before PrimeNet hands them out for first-time LL? I've switched practically all of my computers that have enough memory over to P-1. So is it just that too few of us are doing P-1?
2011-10-20, 22:51   #702
Mr. P-1

Quote:
Originally Posted by delta_t View Post
I've switched practically all of my computers that have enough memory over to P-1. So is it just that too few of us are doing P-1?
Basically, yes. Turning every computer that has enough memory over to P-1 is probably the best thing you could be doing for GIMPS. The only exception is if you have TF-capable GPUs: currently the GPU factoring programs also need a great deal of CPU time (typically an entire core or two) to support the GPU. Depending on the specific work you do, this may be even more beneficial to GIMPS than devoting those cores to P-1.

2011-10-20, 23:12   #703
Mr. P-1

Quote:
Originally Posted by Christenson View Post
The correct optimality criterion is, for the vast majority of mersenne exponents, how to prove the most of them composite for the least amount of effort. Factors found per GHz-Day is the correct metric.
In fact a dedicated P-1er's contribution is optimal if he maximises the number of factors he finds that would otherwise not be found (and minimises the number of factors he fails to find that would otherwise be found). It's easy to see that P-1ers "should" choose lower bounds than LL testers doing preliminary P-1s, but quantifying how much lower is extraordinarily difficult.

Despite the logic, it "feels" wrong to deliberately reduce the bounds in any way, so I don't do this. A dedicated P-1er with a reasonable amount of memory who uses prime95's default bounds calculation is making a contribution to GIMPS that is significantly greater than if he devoted his cores to LL testing. And that is good enough for me.
2011-10-20, 23:28   #704
kladner

Quote Mr. P-1: "You'll see an "E=6" (or higher) in your results.txt file, if it fails to find a factor."

Ah! Like this:

[Wed Oct 19 21:45:54 2011]
UID: kladner/pod64, M52315441 completed P-1, B1=610000, B2=15555000, E=6, We4: 498F4FED, AID: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[Wed Oct 19 22:32:00 2011]
UID: kladner/pod64, M52310233 completed P-1, B1=610000, B2=15555000, E=6, We4: 49964FA4, AID: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
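Lines like these can be checked mechanically; a small Python sketch (the regex is my own and matches only the line format shown above, not any documented prime95 specification):

```python
import re

# Matches the P-1 result lines shown above (format assumed from those
# two samples in results.txt, not from any prime95 specification).
PM1_RE = re.compile(
    r"M(?P<exp>\d+) completed P-1, "
    r"B1=(?P<b1>\d+), B2=(?P<b2>\d+), E=(?P<e>\d+)"
)

def brent_suyama_used(line):
    """Return True if the line records a P-1 run with E >= 6,
    i.e. the Brent-Suyama extension kicked in during stage 2."""
    m = PM1_RE.search(line)
    return bool(m) and int(m.group("e")) >= 6

sample = ("UID: kladner/pod64, M52315441 completed P-1, "
          "B1=610000, B2=15555000, E=6, We4: 498F4FED")
print(brent_suyama_used(sample))  # True
```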

So given the preceding discussion, perhaps I don't need to allocate quite so much RAM, and could instead dedicate another worker to P-1 and spread the benefit further.