#1
"Ed Hall"
Dec 2009
Adirondack Mtns
110111001111₂ Posts
I have a script that tasks several machines to perform ecm on a given number, using the following table:
Code:
# 30 @ B1 = 2k
# 74 @ B1 = 11k
# 107 @ B1 = 50k
# 432 @ B1 = 250k
# 900 @ B1 = 1M
# 2350 @ B1 = 3M
# 3850 @ B1 = 11M
# 2000 @ B1 = 43M
# 2000 @ B1 = 110M
# 5000 @ B1 = 200M
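A minimal sketch of the kind of driver I mean (not my actual script): it assumes GMP-ECM's ecm binary is on the PATH, that the composite is passed on stdin, and that -c sets the curve count with B1 as a positional argument. The composite below is just a placeholder.

Code:
# Sketch of a driver that walks the (curves, B1) table with GMP-ECM and stops
# at the first factor found. Assumes the `ecm` binary from GMP-ECM is on PATH.
import subprocess

N = "1234567890123456789"  # placeholder composite

TABLE = [
    (30, "2e3"), (74, "11e3"), (107, "50e3"), (432, "250e3"),
    (900, "1e6"), (2350, "3e6"), (3850, "11e6"),
    (2000, "43e6"), (2000, "110e6"), (5000, "200e6"),
]

for curves, b1 in TABLE:
    # ecm reads the number on stdin; -c sets the curve count, B1 is positional
    result = subprocess.run(["ecm", "-c", str(curves), b1],
                            input=N, capture_output=True, text=True)
    if "Factor found" in result.stdout:
        print(result.stdout)
        break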
#2
Nov 2003
2²·5·373 Posts
Quote:
"wrong-headed". Let's take it a bit at a time. (0) When you say "better", one must ask: Better for what? What is the metric?? (1) What is your objective? [Don't say: to factor numbers]. What is your ECM objective? Is it to maximize the probability of success given that you are spending a **fixed** amount of time? i.e. you are selecting a priori the amount of time you want to spend on each number. Or is it to maximize the probability of success per unit time? Here, you allow the amount of time you are willing to spend to vary, depending on the size of the factor you are looking for. (2) The number of curves and the B1 limit you select depends, not on the size of the composite, but rather on the size of the factor(s) you are looking for. Read: R. Silverman & S.S. Wagstaff Jr. A Practical Analysis of ECM Mathematics of Computation. It explains just about everything. BTW, an optimal strategy is NOT to select a fixed B1 limit and a fixed number of curves. An optimal strategy selects a limit and a number of curves, then uses the information gained by a failure to select a new set of paramaters. The above paper shows how. |
#3
"Curtis"
Feb 2005
Riverside, CA
3·5·307 Posts
Mr Silverman has pointed the way here, rather politely too. I believe the question you need an answer to is "how much time should I spend on ECM for a given number before proceeding to NFS?"
I believe the paper cited above will help with that also, but I can cite heuristics used in this forum: run enough curves to find a factor of 2/9ths the number of digits of difficulty for an SNFS-able composite, and somewhere between 0.30 and 0.33 times the number of digits for a GNFS composite.

For a 160-digit number, I'd do a t50 at B1 = 43M (roughly 7500 curves), and a t50 worth of curves at B1 = 110M (roughly 3000 curves). I would not use a B1 higher than 110M for a number smaller than 170 digits.

Your number-of-curves table is produced by the -v flag in ECM, and varies by version of the software; as stage 2 improvements have been found, the default B2 bound has risen, allowing fewer curves per level for newer versions of ECM compared to older versions and the paper.

Even after reading the paper, I am unclear why these specific B1 values are chosen to find factors of a specific size. For example, to find 45-digit factors I find B1 anywhere from 18M to 30M produces a lower expected time to complete the t45 level than B1 = 11M. Perhaps the time estimates GMP-ECM produces are inaccurate, perhaps a software enhancement has made larger B1 values optimal, perhaps I just need to re-read the paper...
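For what it's worth, those fractions as a quick calculation (the 2/9 and 0.30-0.33 figures are just the heuristics above; the rounding and the function name are my own):

Code:
# Rough pretest-depth helper based on the forum heuristics quoted above:
#   SNFS: ECM to about 2/9 of the difficulty, in digits
#   GNFS: ECM to about 0.30-0.33 of the composite size, in digits
def ecm_pretest_digits(digits, method="gnfs", gnfs_fraction=0.31):
    if method == "snfs":
        return round(digits * 2 / 9)
    return round(digits * gnfs_fraction)

print(ecm_pretest_digits(160, "gnfs"))  # ~50, matching the c160 example above
print(ecm_pretest_digits(225, "snfs"))  # ~50 for an SNFS difficulty of 225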
#4
Account Deleted
"Tim Sorbera"
Aug 2006
San Antonio, TX USA
10AB₁₆ Posts
Quote:
Last fiddled with by Mini-Geek on 2013-10-15 at 20:51 |
#5
Nov 2003
2²×5×373 Posts
Quote:
much TIME in step 2 as in step 1. The optimal B2/B1 ratio then depends on how fast the implementation is while in Step 2.

The theoretical optimum for step 2 is B2 = B1^2, because a theoretically perfect convolution implementation can run Step 2 to B1^2 in the same time Step 1 takes to reach B1. However, even the latest version of GMP-ECM does not take B2 that high.

Note, however, that as the paper also points out, the response surface is very, very, very flat in the neighborhood of the optimal point. Thus, one can change parameters by a fair bit without noticeably affecting performance.
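To see where a particular build sits relative to that balance point, one can time a single verbose curve and compare the two steps. A sketch, assuming GMP-ECM's -v output reports "Step 1 took ...ms" / "Step 2 took ...ms" and that B2 may be given after B1 on the command line; the number and bounds are placeholders:

Code:
# Sketch: run one verbose curve and compare Step 1 vs Step 2 wall time, to see how
# far a chosen B2 sits from the "equal time in both steps" point discussed above.
# Assumes GMP-ECM's -v output includes "Step 1 took Xms" and "Step 2 took Yms".
import re
import subprocess

N = "1234567890123456789"   # placeholder composite
B1, B2 = "11e6", "1e10"     # placeholder bounds to experiment with

out = subprocess.run(["ecm", "-v", "-c", "1", B1, B2],
                     input=N, capture_output=True, text=True).stdout
times = dict(re.findall(r"Step (\d) took (\d+)ms", out))
print("Step 1 ms:", times.get("1"), " Step 2 ms:", times.get("2"))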
#6
"Ed Hall"
Dec 2009
Adirondack Mtns
110111001111₂ Posts
Thank you all for the assistance. I realized the question I was asking was not quite "right," but what I'm trying to do is come up with a "general" table that will take ECM to certain levels across a spectrum of composite sizes. I know that it won't produce the same level for two different-sized composites, but I'd like to maximize the chance of success when running several machines.
I'd further like to set up the script such that at some point in the table, it will switch to GNFS. I've tried to make use of aliqueit and YAFU, but I'm having trouble getting the most out of multiple machines. I can run ecm.py (and do), and that is great for multiple cores on a single machine, but I haven't discovered how to incorporate the other machines. Even if I run the others based on the curves chosen by aliqueit or YAFU, I can't then tell aliqueit or YAFU what I've already done so they can move on. I've looked a little into OpenMPI and may go that route, but I'm not comfortable with it yet.

I know the table should be based on the size of the number to determine the proper level and time to devote, but I was kind of searching for a general case. I'll go away for now and come back after my reading assignment. Thanks again.
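The sort of fan-out I have in mind looks roughly like the sketch below (the host names are placeholders, and it illustrates exactly the gap I mean: nothing here reports the completed curves back to aliqueit or YAFU):

Code:
# Sketch: split a curve count across several machines over ssh and watch for a
# factor. Host names are hypothetical; assumes GMP-ECM is installed on each host.
import subprocess
from concurrent.futures import ThreadPoolExecutor

N = "1234567890123456789"            # placeholder composite
HOSTS = ["node1", "node2", "node3"]  # hypothetical machines
TOTAL_CURVES, B1 = 7500, "43e6"

def run_on(host, curves):
    # ssh forwards stdin, so the remote ecm reads the composite the same way
    cmd = ["ssh", host, "ecm", "-c", str(curves), B1]
    return subprocess.run(cmd, input=N, capture_output=True, text=True).stdout

per_host = TOTAL_CURVES // len(HOSTS)
with ThreadPoolExecutor(len(HOSTS)) as pool:
    for out in pool.map(lambda h: run_on(h, per_host), HOSTS):
        if "Factor found" in out:
            print(out)
            break  # note: curves already running on other hosts are not cancelled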
#7
Romulan Interpreter
Jun 2011
Thailand
2⁴·571 Posts
You may try to see how yafu is doing it. I love its way, in fact. I use yafu with a custom plan and 0.33 ECM percent. The reason is quite superficial: trying to eliminate composites that "split in 3" before going to NFS. I am working under, and around, 120-130 digits. I find yafu a wonderful tool; its ECM timing is OK compared to its NFS timing (I would say "optimal" if I were not afraid that some guys with more knowledge would jump to my throat).

Last fiddled with by LaurV on 2013-10-16 at 03:18 Reason: the parenthesis
#8
"Curtis"
Feb 2005
Riverside, CA
3·5·307 Posts
Ed-
This is just what I was hoping you were asking, and is why I gave the heuristics for SNFS and GNFS search depths for ECM as a fraction of the composite size. YAFU uses those to generate its ECM decisions, and even has a "pretest ratio" flag that can be adjusted if more or less ECM is desired for GNFS jobs.

mklasson (the guy who wrote the Aliqueit software) mentions somewhere that one should ECM for time roughly equal to 25% of the expected NFS time, but I have not done the footwork myself to see why that is optimal. I believe the 25%-of-time and 33%-of-composite-size heuristics are compatible.
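If I recall the option names correctly (they may differ between YAFU versions, so check the docfile that ships with YAFU; the composite below is a placeholder), the LaurV-style setup looks roughly like this:

Code:
# Sketch: hand a number to YAFU with a custom ECM plan and a 0.33 pretest ratio.
# The option names -plan and -pretest_ratio are from memory and may vary by
# YAFU version; consult YAFU's docfile before relying on them.
import subprocess

N = "1234567890123456789"  # placeholder composite
subprocess.run(["yafu", f"factor({N})", "-plan", "custom", "-pretest_ratio", "0.33"])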
#9
"Curtis"
Feb 2005
Riverside, CA
3·5·307 Posts
Quote:
To illustrate how flat the response curve is, I tested B1 values every 1M or 2M from 3M to 66M to look for the optimal B1 for t40 and t45 runs.

For t45, the 11M level by tradition, I found that the expected time for a t45 is within 3% of the best for any B1 from 11M to 34M, with B1 = 19M and B2 = 96G the best, at 3% faster than the default 11M. For t50, every B1 from 30M to >66M is within 2% of the best B1.

Henry and I read these numbers and conclude that since we only pay a small penalty in efficiency to use a B1 much larger than standard practice, we may as well double or triple the usual B1 values for a particular level in order to increase our chances of finding a larger factor. I use B1 = 21M for my t45 level, which is both faster than 11M for a t45 and completes 15% of a t50 whilst doing a t45 (2368 curves).

For a t40, I find that B1 = 11M is within 2% of the time of B1 = 3M. In my opinion, that means we can simply skip the 3M curves and do 700 curves at 11M as a t40. But similar arguments can be made for smaller ECM levels...
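The comparison behind those numbers is nothing exotic; it amounts to something like the sketch below, where the curve counts and per-curve times are placeholders you would fill in from ecm -v output and your own timings:

Code:
# Sketch: compare expected wall time to complete a t45 at several B1 choices.
# "curves" is the curves-needed figure reported by `ecm -v` for your build at
# that B1, and "sec" is your measured seconds per curve. All numbers below are
# placeholders, not measurements.
candidates = {
    "11e6": {"curves": 4500, "sec": 60.0},
    "19e6": {"curves": 3000, "sec": 85.0},
    "34e6": {"curves": 1900, "sec": 130.0},
}

for b1, d in sorted(candidates.items(),
                    key=lambda kv: kv[1]["curves"] * kv[1]["sec"]):
    hours = d["curves"] * d["sec"] / 3600
    print(f"B1 = {b1}: about {hours:.0f} core-hours for a t45")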
#10
Nov 2003
7460₁₀ Posts
Quote:
what I have written. Especially when I take the time to give (at least moderately) detailed answers. The last sentence you wrote above shows clearly that you did not read what I wrote. The ECM "level" has NOTHING TO DO WITH THE SIZE OF THE COMPOSITES. It only involves the size of the FACTORS.

Quote:
Quote:
optimize! "Proper time and level" depends on your OBJECTIVE FUNCTION. You have not said what that is. Learn how to say exactly what it is that you want.

Others have given advice as well. You have been told, e.g., that a general "rule of thumb" for running ECM before switching to SNFS is to run ECM to look for a factor that is about 2/9 the size of the composite. This applies somewhat to the decision to switch to QS as well. To run ECM before switching to GNFS, run ECM to a level that is about 30% of the size of the composite.

Note that these are only "rough estimates". A true estimate would require exact knowledge of the relative performance of ECM and QS/NFS on the machines that you are using. It isn't just about CPU speed. Cache sizes play a big role, as do DRAM latency and size.
#11
"Ben"
Feb 2007
2⁵·3·5·7 Posts
Quote: