#298
"Ed Hall"
Dec 2009
Adirondack Mtns
2×23×79 Posts
Rough calculations across all my machines working on the c162 put me at about:
Code:
740 @ 11e3
2140 @ 5e4
4300 @ 25e4
9040 @ 1e6
23500 @ 3e6
6500 @ 11e6
800 @ 43e6
100 @ 11e7
Were the grossly overdone (10x) smaller B1 values wasted? Is there any reason to let the machines that are performing the YAFU ECM steps continue with the 11e6 work? Is 7500 a good figure for 43e6? Is 7000 a good figure for 11e7? How do I tell what t-level I'm currently at? What t-level should I strive for? Thanks!
#299
Romulan Interpreter
Jun 2011
Thailand
5²·7·53 Posts
You can put "v=1" (no quotes) in yafu.ini to see info about the expected t-levels and all the other "verbose" stuff. It helps a lot, and it only slows yafu a little (below 1%).

For the future, if you have several computers doing the same job, it is better to use "plan=custom" and supply a higher ECM percentage on only one computer, and a lower one (or even none; you can use aliqueit with the -e switch, or -noecm for yafu, etc.) on the others. With different levels of ECM, some computers will finish much faster and switch to poly selection. By the time ECM finishes on the last one, you will already have one good poly. You can check which poly is best, copy it to the other machines, and then resume with -e. It looks like a lot of work, but it saves a lot of time on poly selection if you don't do that on a GPU. If you do it on a GPU, it doesn't matter; that is much faster anyhow.

Also, if you have many old machines running 32-bit XP, be aware that yafu's ECM is much slower on 32-bit machines, about half the speed. You should try to run ECM on a 64-bit OS.

Last fiddled with by LaurV on 2016-09-30 at 02:40
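A minimal yafu.ini sketch of that split setup; v=1 and plan=custom come straight from the post, while pretest_ratio is an assumption for how the "ecm percent" is actually supplied (check the docfile of your yafu version for the exact option names):

Code:
# Machine doing the deep ECM:
v=1                  # verbose output, shows expected t-levels (costs well under 1%)
plan=custom          # use a custom ECM depth instead of a canned plan
pretest_ratio=0.36   # assumed key for the higher "ecm percent" on this box

# Other machines: little or no ECM, e.g. run aliqueit with -e,
# or invoke yafu with -noecm, then copy over the best poly and resume.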
#300
"Ed Hall"
Dec 2009
Adirondack Mtns
2·23·79 Posts
All my 32-bit machines are dormant. The current machines, although old, are at least 64-bit and multi-core. They are all running Linux. I do have two NVidia cards that worked way back when, but they are too ancient to run with anything current.
#301
"Curtis"
Feb 2005
Riverside, CA
11103₈ Posts
Invoking ecm -v with a B1 bound will show you how many curves at that bound are needed for a variety of t-levels. For instance, ECM 7 indicates that 24,000 curves at 3e6 are equivalent to a t45 (the level usually run with curves at 11e6), and 240,000 are a t50. At 11e6, 39,500 curves are a t50. So your 3e6 curves are about 10% of a t50, your 11e6 curves are 16% of a t50, and your 43e6 curves are worth 9% of a t50 (8,700 curves for a full t50 at that bound, again using ECM 7.0).

As for "wasted": overkill on smaller curves is an inefficient way to find largish factors, but there is definitely a chance of doing so. So, compared with a fastest plan, the super-extra 3M curve count may have wasted 20 or 30% of the computrons spent beyond the usual t40 number of curves.

Adding these up, you've done just over a third of a t50, so another 5,000 or 6,000 curves at 43e6 would be enough to justify proceeding to GNFS. Note that "enough" is both a rough and a broad optimum; some folks feel strong regret if GNFS turns up a factor that ECM "could have" or even "should have" found, and those people should likely do a bit more ECM to reduce the incidence of that regret.

fivemack wrote a Bayesian-analysis tool for ECM, taking already-run curves as input and outputting what curves should be run before GNFS. Alas, I can't find the thread at present.

Last fiddled with by VBCurtis on 2016-09-30 at 05:38
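As a quick check of that arithmetic, here is a minimal Python sketch; the full-t50 curve counts are the ones quoted above for ECM 7 and are only approximate:

Code:
# Rough fraction of a t50 from the curves already run.
# Denominators are the full-t50 counts quoted above (ECM 7 defaults);
# they shift a little with the ecm version and B2 choice.
t50_full = {3e6: 240000, 11e6: 39500, 43e6: 8700}
done     = {3e6:  23500, 11e6:  6500, 43e6:   800}
frac = sum(done[b1] / t50_full[b1] for b1 in done)
print(f"{frac:.0%} of a t50")   # about 35%, i.e. just over one third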
#302
"Ed Hall"
Dec 2009
Adirondack Mtns
2×23×79 Posts
Thank you, VBCurtis. I think I now have some understanding, but let's see if I've caught one thing correctly. The "adding these up" refers to the percentages? In which case, the "just over 1/3" is because 10% + 16% + 9% = 35%? And the efficiency is determined by how long it takes to run curves at the different B1 values? Are you also saying that I should run a particular B1 to the t40 level and then move up to the next B1 for best efficiency?

I would like to see that analysis tool, if you happen to find it. (Or, if someone else knows where it is.) The last questions (for now) would be to all: How do we know if someone is running ECM on the current composite, and what the total t-level may be across all work? Or does it matter?
#303
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
5,821 Posts
Something that came out of that Bayesian approach was that it is best to run very few of the smaller curves and then finish off with larger curves. For up to a t50, something like 5-10% of the curve counts at the t20-t50 levels, finishing off the t50 with curves at the t55 level, would be better. I will find the thread next time I am on a PC rather than a tablet. The mainstream approach wastes a lot of time.

Edit: http://mersenneforum.org/showthread.php?t=21420

Last fiddled with by henryzz on 2016-09-30 at 13:36
#304
"Curtis"
Feb 2005
Riverside, CA
5²×11×17 Posts
Yes to adding the percentages. Usually, so few curves are done at a small level that the contribution of, say, B1=3M curves to a t50 is small enough to be ignored. In your case, you did so many that the percentage was worth noting.

My experiments toward minimizing the time to find a factor of unknown size led me to run half the number of curves for a t-level before moving up to the next B1 size; I might run 500 at 1M before moving to 3M, instead of 900. Henry's experience with the Bayesian tool suggests even fewer than that; both my heuristic and the Bayesian tool fly in the face of RDS' published papers from the old days, which were the source of the traditional method. To be clear, the traditional method is what you summarized: complete curves at B1=3M sufficient for a t40 (according to the help documentation or the -v flag of ECM), then move to 11M and run a number of curves equal to a t45, etc.

I use "efficiency" to mean "best chance to find a factor per computron". I did a bunch of messing about with -v curve counts, k-values that determine the B2 size, etc., and I think I gained a few percentage points of efficiency by using different B1 values than the standard ones. The folks who actually know what they're doing like to remind us that the optimal settings are a very broad maximum, and it hardly matters what we choose so long as we don't use redundantly large numbers of curves with small B1.
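A small illustrative sketch of the two schedules in Python; apart from the 900-versus-roughly-500 figures at 1M quoted above, the per-B1 counts are placeholders that should really be taken from ecm -v:

Code:
# Traditional schedule: run the full t-level count at each B1 before moving up.
# Half-then-up heuristic: run roughly half of that count, then raise B1.
full_count = {250e3: 430, 1e6: 900, 3e6: 2350, 11e6: 7500}   # placeholder counts

traditional  = dict(full_count)                               # e.g. 900 @ 1e6
half_then_up = {b1: n // 2 for b1, n in full_count.items()}   # ~450 @ 1e6 (post says ~500)
print(traditional[1e6], half_then_up[1e6])                    # 900 450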
#305
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3·29·83 Posts
I wouldn't go so far as to say it "flies in the face" of the old paper. The paper is quite accurate: for the problem it states, it gives the optimal solution. The problem we're trying to solve just isn't necessarily the same as the problem solved in the paper.
#306
"Ed Hall"
Dec 2009
Adirondack Mtns
2×23×79 Posts
Thanks for all the help. I'm studying the link, but must confess to not really understanding it yet.
My next dilemma, though, is that I have several instances of ali.pl running that have all completed the 2350 curves at 3e6. But since the Perl-based scripts don't pass along all the intermediate info from YAFU, I can't tell how many curves any of those machines have completed at 11e6. It seems like a bit of work, but would it be reasonable to come up with a fair estimate by cancelling YAFU, running ECM for one curve, and then dividing the times? Or is there an easier method to determine the count?
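A back-of-the-envelope version of that estimate as a Python sketch; both timings are hypothetical and would have to be measured on the machine in question:

Code:
# Estimate curves completed at B1=11e6 by dividing wall time by per-curve time.
hours_spent_at_11e6 = 14.0    # hypothetical: wall clock since the 3e6 stage finished
seconds_per_curve   = 95.0    # hypothetical: timing of one ecm curve at 11e6 on this box
curves_done = hours_spent_at_11e6 * 3600 / seconds_per_curve
print(round(curves_done))     # rough per-machine count; sum these across machines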
#308
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
5821₁₀ Posts
Now that I have the script in front of me, I can say that you need to do a further 4000 curves at 110e6 and 300-1600 at 850e6, depending on the ratio between ECM speed and NFS speed on your PCs. Anything below 110e6 is inefficient now, according to the script. Even with nothing done, it only recommends 10 @ 3e6, 100 @ 11e6 and 100 @ 43e6 below 110e6.