2011-12-13, 04:58   #6
EdH
 
"Ed Hall"
Dec 2009
Adirondack Mtns

2×7×239 Posts

Quote:
Originally Posted by Batalov
Seriously speaking, there's also a possibility that you evaluated the necessary Q-range on the assumption that the relation yield is constant. But it isn't, and it is not easy to guesstimate. Generally, it goes down as Q goes up, but the question is how fast.

One way (frequently used before launching large projects) is a dense set of spot-checking runs (with many starting Qs and a span of 1000 or 2000 each), followed by a spline fit (better yet, after normalizing by the number of reported special_q's), and a guesstimate, from experience with similar runs, of what the redundancy is going to be.
I actually follow all this, but I haven't the experience to make use of it. I therefore used the following logic:

For example, let's say Q goes up by 1M each step, each 1M block yields 5% of the needed relations, and machine 1 started at 20M. In a perfect world (constant yield), 100% would place its top at 40M. So, let's start machine 2 at 40M. I'm hoping that the relations turned up by machine 2 will pull machine 1's 40M top downward by more than the diminishing yield hurts the overall count. The trickier part is figuring out the starting points for machines 3, 4, 5, etc. I don't want any overlap there either, but the further from machine 1's range they start, the lower the return.
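
Here's a quick back-of-the-envelope sketch of that logic in Python. Every number in it is made up: the 5% starting yield per 1M block and the linear falloff merely stand in for the real yield curve, which is what spot-check runs would have to measure.

Code:
# Sketch of the range-splitting logic above. The starting yield and
# the linear decay rate are made-up numbers; real spot-check runs
# would replace yield_at() with measured figures.

def yield_at(q_mil, base_q=20.0, base_yield=0.05, decay=0.0005):
    """Fraction of the needed relations produced by the 1M block
    starting at q_mil, assuming yield falls off linearly with Q."""
    return max(base_yield - decay * (q_mil - base_q), 0.0)

def blocks_needed(starts):
    """Run one machine per entry in starts, in lockstep, one 1M
    block per round, until the combined yield reaches 100% of the
    needed relations. Returns the rounds (blocks per machine)."""
    total, rounds = 0.0, 0
    while total < 1.0:
        gained = sum(yield_at(s + rounds) for s in starts)
        if gained == 0.0:
            raise RuntimeError("yield exhausted before reaching 100%")
        total += gained
        rounds += 1
    return rounds

# Machine 1 alone from 20M: constant 5% yield would finish at 40M,
# but with the falloff it has to sieve past that.
print("one machine:  ", blocks_needed([20.0]), "blocks")

# Machine 2 at 40M: its relations pull machine 1's top below 40M.
print("two machines: ", blocks_needed([20.0, 40.0]), "blocks each")

# Machines 3 and 4 at 60M and 80M: no overlap, but ever-lower yield.
print("four machines:", blocks_needed([20.0, 40.0, 60.0, 80.0]), "blocks each")

With these invented numbers, one machine needs 23 blocks (topping out at 43M rather than 40M), while two machines need only 12 blocks each, so machine 1 stops at 32M, comfortably below machine 2's start at 40M. Whether that keeps working for machines 3, 4, 5 depends entirely on how fast the real yield falls off.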

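And for reference, a rough sketch of the spot-check-and-fit estimate from the quote above. The Q points, yields, and relation target are all invented, and numpy.polyfit stands in for a proper spline; real figures would come from short test runs of the siever.

Code:
import numpy as np

# Invented spot checks: (start of Q range in millions, relations in
# millions yielded by a 1M block there). Real values would come from
# short runs with a span of 1000-2000 special_q, scaled up.
spot_checks = [(20, 5.0), (30, 4.3), (45, 3.5), (60, 2.9), (80, 2.2)]
q = np.array([p[0] for p in spot_checks], dtype=float)
y = np.array([p[1] for p in spot_checks], dtype=float)

# Fit the falling yield curve; a quadratic stands in for a spline.
yield_curve = np.poly1d(np.polyfit(q, y, 2))

# Invented target: 50M unique relations, padded by a guessed 20%
# redundancy for duplicates (the experience-based part).
needed = 50.0 * 1.20

# Walk 1M blocks upward from 20M, summing the fitted yield, to
# estimate where the top of the Q range lands.
total, qm = 0.0, 20.0
while total < needed:
    total += float(yield_curve(qm + 0.5))   # yield of block [qm, qm+1)
    qm += 1.0
print(f"estimated top of range: {qm:.0f}M")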