2009-03-19, 04:45   #18
bsquared
 
Quote:
Originally Posted by mklasson
I think 1/3*N might be a bit excessive. For a 120-digit composite, that amount of ecm (factors <= 40 digits, using the gmp-ecm readme settings) would seem to take me ~10 cpu-hours, whereas just pounding on it with ggnfs takes slightly more than twice that time. That's too much ecm, isn't it?

I did some experimental benchmarking today to find good ecm levels for <65 digits. It's a shame it's not that easy for c100+.

Hm, come to think of it: since nfs has better asymptotics, shouldn't the ecm scaling factor for nfs be smaller than the corresponding one for qs? That is, using 2/9*N for qs and 3/9*N for nfs seems inherently wrong. I realise they're both fuzzy guidelines, but don't you agree?

Maybe something like 11 + 1/5*N would be better for nfs? Or maybe I'm just wrong. In any case, how much ecm do those of you who regularly run big nfs jobs normally do?
This is a good discussion; I'd love to hear someone else's opinion too. The 2/9 and 3/9 levels are often quoted for SNFS vs. GNFS, and when QS is in its wheelhouse (65 to 95 digits) I liken it more to SNFS. Beyond that point, scaling takes over and QS goes away entirely. I'll admit I don't often do the complete 1/3*N ECM level for GNFS, but I run the 64-bit Linux AMD-optimized lattice siever, which many people may still not run, and I think that throws the ratio off a bit.
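To put some numbers on those rules of thumb, here is a quick Python sketch that just plugs a composite's digit count into the three heuristics mentioned in this thread (2/9*N, 1/3*N, and mklasson's 11 + 1/5*N). These are only the guidelines as stated above, not measured crossover points, so treat the output as a rough comparison.

Code:
# Compare the ECM pretest depths (in digits) suggested by the rules of
# thumb discussed in this thread; n_digits is the composite's decimal size.
def pretest_digits(n_digits):
    return {
        "2/9 * N (qs / snfs)": round(2 * n_digits / 9),
        "1/3 * N (gnfs)":      round(n_digits / 3),
        "11 + N/5 (mklasson)": round(11 + n_digits / 5),
    }

for n in (80, 100, 120, 140, 160):
    print(n, pretest_digits(n))

For a c120 that gives roughly 27, 40, and 35 digits of pretesting respectively, which matches the "factors <= 40 digits" level mklasson quoted for 1/3*N.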

I agree that spending on ecm half the time a gnfs run would have taken seems excessive, but I don't have any data to suggest a better choice. Averaged over a large number of factorizations, maybe capping ecm at something like 1/3 of the expected gnfs time would be better? Sorry I can't be more concrete.
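In case it's useful, here is what that time-based cap might look like as code. The gnfs estimate is something you would have to supply from your own hardware and siever; this is just the "at most 1/3 of the expected gnfs time" guess written out as a hypothetical helper, not a tested rule.

Code:
# Hypothetical helper: cap total ECM effort at a fraction (default 1/3)
# of the CPU-hours you expect the gnfs job itself to take.
def ecm_budget_hours(expected_gnfs_hours, fraction=1.0 / 3.0):
    return fraction * expected_gnfs_hours

def keep_running_ecm(ecm_hours_spent, expected_gnfs_hours):
    return ecm_hours_spent < ecm_budget_hours(expected_gnfs_hours)

# e.g. mklasson's c120: ggnfs at roughly 21 cpu-hours would give an ECM
# budget of about 7 hours instead of the ~10 he spent.
print(ecm_budget_hours(21))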