mersenneforum.org P-1 factoring attempts at smallest-remaining Mersenne numbers with no known factors

2016-12-16, 05:34   #23
VBCurtis

My experience with big-bound ECM runs is that halving the memory adds about 30% to the stage 2 runtime for a given B2 bound. The -maxmem option cuts the memory footprint by factors of 2 while increasing "k" (the number of chunks stage 2 is divided into) by factors of 4.

EDIT: this behavior is how ECM treats regular numbers; for Mersenne candidates it uses finer steps, in ways I do not recall. -maxmem might select k = 7 or 9 to fit under the memory limit, which isn't possible for non-Mersenne candidates.

I'd like to hear about timings for stage 2 using k-values over 30; if you run any, please report your results!
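The trade-off described above can be sketched as a back-of-the-envelope model (an illustrative assumption, not GMP-ECM's actual chunking algorithm): each halving of available memory quadruples k and adds roughly 30% to the stage 2 runtime.

```python
def stage2_estimate(base_hours, wanted_mem_gb, avail_mem_gb):
    """Rough model of GMP-ECM stage 2 under -maxmem: return an
    (estimated hours, k) pair after repeatedly halving the memory
    footprint until it fits in avail_mem_gb."""
    hours, mem, k = base_hours, wanted_mem_gb, 1
    while mem > avail_mem_gb:
        mem /= 2       # -maxmem cuts the footprint by factors of 2...
        k *= 4         # ...while increasing k by factors of 4
        hours *= 1.3   # ~30% runtime penalty per halving
    return hours, k

# A 10-hour stage 2 that wants 64 GB, squeezed into 16 GB (two halvings):
hours, k = stage2_estimate(10, 64, 16)   # hours ~= 16.9, k = 16
```

The numbers here are placeholders; the point is only that the penalty compounds per halving, so squeezing a run into far too little memory gets expensive quickly.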
2016-12-16, 10:20   #24
Gordon

Quote:
 Originally Posted by GP2 Yes, but in doing so you lose some of the benefit of using gmp-ecm. Let's say you want to do an exponent to B2 = 100,000 but you only have enough memory to do B2 = 1000 (I'm using ridiculously low values in order to simplify the example).
Agreed, but as was pointed out to me, at least it gets run; slower is better than not at all.

2016-12-16, 13:29   #25
fivemack

Quote:
 Originally Posted by Gordon Agreed, but as was pointed out to me, at least it gets run; slower is better than not at all.
I think it is better not to run a job on a 16GB machine in 2016, and to run it instead on a cheap 256GB machine in 2022, than to burn sixteen times the coal getting the job done on the too-small machine today. There is no urgency to determining the factors of 2^10061-1.

2016-12-16, 14:06   #26
lycorn

That makes some sense, yes, but if we were to adhere too rigidly to that principle, no project would ever start. Like the famous diet that always starts tomorrow...
2016-12-16, 17:16   #27
Gordon

Quote:
 Originally Posted by fivemack I think it is better to not run a job on a 16GB machine in 2016 and run it on a cheap 256GB machine in 2022, than to burn sixteen times the coal getting the job done on the too-small machine today. There is no urgency to determining the factors of 2^10061-1.
By that reckoning I should never have run my first LL test on that P-90 machine back in 1997, as what took 3 days then takes 5 minutes now...

...so let's stop all testing until the year 2100, when we can do the next 84 years of work in 12 months.

You might not want to see a first-time factor of a sub-10k exponent, but a fair few of us do.

2016-12-16, 23:53   #28
Gordon

Quote:
 Originally Posted by fivemack I think it is better to not run a job on a 16GB machine in 2016 and run it on a cheap 256GB machine in 2022, than to burn sixteen times the coal getting the job done on the too-small machine today. There is no urgency to determining the factors of 2^10061-1.
Having reread your post: this isn't a 16 GB machine, it's 32 GB. There's an obscure bug in gmp-ecm which (most times) makes it fail when trying to allocate over 16 GB of RAM.

2016-12-17, 00:04   #29
VBCurtis

Quote:
 Originally Posted by Gordon By that reckoning I should never have run my first LL test on that P-90 machine back in 1997, as what took 3 days then takes 5 minutes now... ...so let's stop all testing until the year 2100 when we can do the next 84 years of work in 12 months. You might not want to see a first-time factor of a sub-10k exponent, but a fair few of us do.
No, you're missing the point. Most of the tasks we do on this forum scale with CPU speed, but running ECM on a machine with too little memory requires time that scales with both CPU speed AND memory size. So, waiting for future machines for these ECM tasks yields speedups much greater than the speedups for LL testing, small-bound ECM, or any number of the other things we collectively like to do.

A future machine that has double the CPU speed and double the memory will do LL testing twice as fast, but big-bound ECM ~3 times as fast. So, in project-efficiency terms, do the work now that will benefit less from future speedups, and delay work that will benefit more.
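The arithmetic behind that comparison can be made explicit. This is illustrative only (an assumed scaling model, not a benchmark): LL testing tracks CPU speed alone, while memory-starved big-bound ECM also recovers roughly 1.3x per doubling of RAM, i.e. the ~30% stage 2 penalty run in reverse.

```python
import math

def ll_speedup(cpu_factor):
    # LL testing is CPU-bound: speedup tracks CPU speed alone.
    return cpu_factor

def big_ecm_speedup(cpu_factor, mem_factor):
    # Memory-starved big-bound ECM also gains ~1.3x per doubling of RAM.
    return cpu_factor * 1.3 ** math.log2(mem_factor)

# A future machine with 2x CPU and 2x RAM:
# LL: 2.0x faster; big-bound ECM: 2 * 1.3 = 2.6x, roughly the
# "~3 times as fast" figure in the post.
```

Once the machine has all the memory the run wants, the memory factor drops out and both workloads scale with CPU alone, which is the point made in post #31 below.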

2016-12-17, 00:29   #30
retina

Quote:
 Originally Posted by VBCurtis A future machine that has double the CPU speed and double the memory will do LL testing twice as fast, but big-bound ECM ~3 times as fast. So, in project-efficiency terms, do the work now that will benefit less from future speedups, and delay work that will benefit more.
And in six years' time, with the shiny new 256GB computer (using your figures), we can make the exact same argument: don't run ECM yet, wait for a time when it will be more efficient. Then there is no time at which we can ever do the test, because there will always be some future time when it will be more efficient.

2016-12-17, 01:36   #31
VBCurtis

Once ECM has all the memory it wants, the only future efficiency gain comes from CPU speed. It's the combination of gains from more memory and more CPU that is worth waiting for, and that only applies to ECM bounds that want more memory than the machine has.
2016-12-17, 02:00   #32
GP2

Quote:
 Originally Posted by Gordon there's an obscure bug in gmp-ecm which makes it (most times) fail when trying to allocate over 16 gig of ram.
I very much doubt it, because I routinely use more than that. In fact, I have one job using 125 GB of memory right now. That is for P−1 rather than ECM, but it shouldn't make a difference.

I am using version 7.0.4 on Linux, compiled from source code.

PS, in the cloud you can use machines with up to 2 TB of memory.

2016-12-17, 03:04   #33
VBCurtis

Likewise, I have allocated 28-30 GB of RAM on 32 GB systems for P-1, P+1, and regular ECM curves. No crashes.

