mersenneforum.org ECM Stage 2 RAM

 2018-11-10, 17:50   #12
ATH
Einyen

Dec 2003
Denmark

D6A₁₆ Posts

B2=B1*100 is actually low. GMP-ECM chooses a much higher B2 if you do not specify it yourself:

B1=11000, B2=1684420
B1=50000, B2=15446350
B1=250000, B2=183032866
B1=1000000, B2=974637522
B1=3000000, B2=4592487916
B1=11000000, B2=30114149530

So roughly B2 = 3.8 * B1^1.4. I think ideally stage 2 should take the same amount of time as stage 1.

Last fiddled with by ATH on 2018-11-10 at 17:51
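As a quick sanity check, here is a small Python sketch comparing the fitted rule against the defaults listed above. The constants 3.8 and 1.4 are just the rough fit from this post, not anything GMP-ECM itself documents:

```python
# GMP-ECM default B2 values quoted in the post above, keyed by B1.
gmp_ecm_defaults = {
    11_000: 1_684_420,
    50_000: 15_446_350,
    250_000: 183_032_866,
    1_000_000: 974_637_522,
    3_000_000: 4_592_487_916,
    11_000_000: 30_114_149_530,
}

def fitted_b2(b1):
    """ATH's rough fit: B2 ~ 3.8 * B1^1.4 (eyeballed, not official)."""
    return 3.8 * b1 ** 1.4

for b1, b2 in gmp_ecm_defaults.items():
    est = fitted_b2(b1)
    print(f"B1={b1:>10,}  default B2={b2:>14,}  fitted~{est:>16,.0f}  ratio={b2 / est:.2f}")
```

Running this shows the ratio of actual default to fitted value staying near 1 across the whole table, so the fit is a reasonable rule of thumb over this range.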
2018-11-10, 17:58   #13
GP2

Sep 2003

5035₈ Posts

Quote:
 Originally Posted by storm5510 What happens if I were to narrow this range? An example might be lowering B2 to 2,500,500.
I think a factor of 10 is just too small. If anything, some people even recommend using a factor larger than 100.

Quote:
 Based on what I've seen, B1 seems to be much more important than B2. A shot-in-the-dark: These are bounds for calculations using the seed value that appears at the start of Stage 1. If this is not the case, then I am unsure how they are used.
Stage 1 is only affected by B1. Stage 2 is determined by both B1 and B2. In my experience working with ECM on small exponents, factors are much more likely to be found during stage 2, so the choice of B2 does matter a lot. It's a tradeoff: if it's too small, you'll have much less chance of finding anything; if it's too large, there will be diminishing returns and it will take too long. I would stick with B1 × 100.

2018-11-10, 18:03   #14
GP2

Sep 2003

5035₈ Posts

Quote:
 Originally Posted by ATH B2=B1*100 is actually low. GMP-ECM chooses a much higher B2 if you do not specify it yourself
But GMP-ECM uses a different algorithm that is only helpful for quite small exponents. He is talking about the exponent 109229, which I think is out of GMP-ECM's useful range.

For mprime I'd say the factor of 100 rule of thumb would still apply. No sense spending a ton of time pursuing diminishing returns on one ECM curve when you can just start another curve and maybe a factor will pop out much more easily with that one.

2018-11-10, 22:33   #15
lycorn

"GIMFS"
Sep 2002
Oeiras, Portugal

3042₈ Posts

Quote:
 Originally Posted by GP2 He is talking about the exponent 109229, I think that is out of GMP-ECM's useful range.
It is indeed. I did some tests with GMP-ECM a while ago and, AFAIR, the useful limit was ~40K.

2018-11-11, 00:07   #16
storm5510
Random Account

Aug 2009
Not U. + S.A.

3²×281 Posts

Quote:
 Originally Posted by ATH B2=B1*100 is actually low. GMP-ECM chooses a much higher B2 if you do not specify it yourself: ... So roughly B2 = 3.8 * B1^1.4.
An interesting formula. How would you format a worktodo line without specifying B2? Something would have to fill that space.

Now, we get back to RAM and on-topic. The most I can allocate on this i7 is 2.5 GB. When I built this last year, I had intended to add more RAM later on. That didn't happen, so it still has 8 GB. As a point of reference, my much older HP has 4 GB and seems to do quite well.

As B2 increases, so does the amount of RAM required, or so it seems. George talks about memory-thrashing in his documentation. I would prefer to avoid that. In my experimentation this morning, I tried squeezing the amount for Stage 2 RAM lower and lower. The only result I saw was that Stage 2 would take longer to run.

2018-11-11, 01:01   #17
GP2

Sep 2003

101000011101₂ Posts

Quote:
 Originally Posted by storm5510 An interesting formula. How would you format a worktodo line without specifying B2? Something would have to fill that space.
He is talking about an entirely different program called GMP-ECM. It doesn't use worktodo.txt files. If you use mprime/Prime95, you have to specify B2.

One reason that GMP-ECM can use much larger B2 is because it uses a much more efficient algorithm for stage 2. But it can only be used for small exponents. As lycorn mentioned, for anything bigger than about 40k, you need to use mprime instead.

Quote:
 As B2 increases, so does the amount of RAM required, or so it seems. George talks about memory-thrashing in his documentation. I would prefer to avoid that. In my experimentation this morning, I tried squeezing the amount for Stage 2 RAM lower and lower. The only result I saw was that Stage 2 would take longer to run.
A higher B2 needs more memory to run efficiently. If you reduce the available memory a lot, stage 2 will take much longer.

With B1 = 250k and B2 = 100 × B1 and exponents in the 100k range, I don't think it will need anywhere near 2.5 GB actually.

If you run with mprime -d, then when it starts ECM stage 2 it will output lines of the form "Using ____MB of memory in stage 2". Obviously this will be no greater than the amount of memory (in MB) you specified in the Memory= line in your local.txt file.
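For reference, a minimal local.txt fragment capping stage 2 memory might look like the following. The 2560 figure is just an example matching the 2.5 GB mentioned earlier; the Memory= value is in MB per the Prime95 readme:

```
Memory=2560
```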

Last fiddled with by GP2 on 2018-11-11 at 01:03

 2018-11-11, 02:01   #18
ATH
Einyen

Dec 2003
Denmark

2×17×101 Posts

I did not mean for you to skip entering a B2 value in Prime95; that is not possible. I was just illustrating that the "standard" B2 in GMP-ECM is much higher, and yes, that algorithm will not work for larger numbers.

But my point was that B2=100*B1 is not crazy high, and it might even be a bit lower than we could do. I do think the optimal values are such that stage 1 time = stage 2 time, so if you feel like optimizing and your stage 2 time is way faster than stage 1, you could raise B2, provided you give Prime95 enough RAM. Otherwise it will just be slow if you run a big B2 with low RAM, since it has to split stage 2 into too many passes.

But otherwise just leave it at B2=100*B1, or lower it to 10*B1 if you want. It is your clock cycles and power bill to spend as you like.
2018-11-12, 13:19   #19
storm5510
Random Account

Aug 2009
Not U. + S.A.

2529₁₀ Posts

Quote:
 Originally Posted by ATH ....I do think that the optimal values are such that stage1 time = stage2 time, so if you feel like optimizing and your stage2 time is way faster than stage1 you could raise B2 provided you give Prime95 enough RAM, otherwise it will just be slow if you do a big B2 with low RAM, it has to split stage2 in too many stages. But otherwise just leave it at B2=100*B1, or lower it to 10*B1 if you want, it is your clock cycles and power bill to spend like you want
Actually, I used your formula from above: B2 = 3.8 * B1^1.4. The time difference is very small. So, I will keep using it.

Windows 10 has a pretty good Task Manager. As I was testing, I would watch the RAM usage by Prime95. I found that I could go up to 4,096 MB with no problems. 5,120 may even be possible, but I don't see the need. Windows 10 compresses its memory contents on-the-fly. For some strange reason, I set this using powers of 2. Imagine that.

Running all this does not seem to have a lot of effect on my power bill. My rate is 0.1286 USD per kWh. It stays in that area year round.

About mprime, I've never run any flavor of Linux. I've watched local friends run it. It sails over my head. Too many years of MS.

Edit: Look at the examples below. Are the bounds too close to each other from one test to the next for the same exponent?

Code:
ECM2=<ID>,1,2,<Exponent>,-1,10000,1000000,<Curves>,"<Known factors>"
ECM2=<ID>,1,2,<Exponent>,-1,11000,1100000,<Curves>,"<Known factors>"
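If you want to generate several such lines, here is a hedged Python sketch. The helper name and the sample values are mine, not from Prime95's documentation; the field order (ID, k, b, n, c, B1, B2, curves, known factors) simply follows the examples above, with the fields comma-separated as usual for worktodo entries:

```python
def ecm2_line(assignment_id, exponent, b1, b2, curves, known_factors=""):
    """Build one worktodo.txt ECM2 line for a Mersenne number 2^exponent - 1
    (hence k=1, b=2, c=-1). known_factors is an optional comma-separated string."""
    line = f"ECM2={assignment_id},1,2,{exponent},-1,{b1},{b2},{curves}"
    if known_factors:
        line += f',"{known_factors}"'
    return line

# Example: one line per B1/B2 pair, mirroring the two entries above.
for b1, b2 in [(10000, 1000000), (11000, 1100000)]:
    print(ecm2_line("N/A", 109229, b1, b2, 3))
```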

Last fiddled with by storm5510 on 2018-11-12 at 13:43 Reason: Question

 2018-11-12, 17:35   #20
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

2·3²·313 Posts

ECM uses a random seed for its work, and each seed does work independently of other seeds. You can use the same bounds as many times as you like without repeating work. It's normal to complete a "level" of ECM testing without changing bounds (though some folks jump to higher bounds before a previous level completes; mersenne.org converts these curves to count work reasonably accurately). So, unlike P-1 work, don't worry about using the same bounds multiple times.
2018-11-13, 03:01   #21
storm5510
Random Account

Aug 2009
Not U. + S.A.

3²·281 Posts

Quote:
 Originally Posted by VBCurtis ...It's normal to complete a "level" of ECM testing without changing bounds (though some folks jump to higher bounds before a previous level completes...
Could you define "level" in this context?

After doing some observation while Prime95 was running, I saw that the "at prime" value does not exceed B1 in stage 1 or B2 in stage 2. This would explain the variation I see in the bounds when looking at the recent data on mersenne.org. It would also lend some support to the argument that wider bounds are better.

The formula that ATH posted above goes well beyond B2 = B1 * 100. With a B1 of 10,000, the typical B2 would be 1,000,000. With his formula B2 becomes 1,512,807. Of course, this is not set in stone. He also indicates the time for Stage 1 and Stage 2 should be about the same. In order to match this, I would have to further increase B2 by 27%, more or less.
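The arithmetic here can be checked directly. The values come from this post; the 1.27 factor is just my reading of the "increase by 27%" remark:

```python
# Compare the B2 = 100*B1 rule of thumb with ATH's fitted formula at B1 = 10,000.
b1 = 10_000
b2_rule = 100 * b1            # the usual rule of thumb
b2_fit = 3.8 * b1 ** 1.4      # ATH's fitted formula

print(b2_rule)                # 1000000
print(round(b2_fit))          # 1512807

# Raising B2 another ~27% to balance stage 1 and stage 2 times:
print(round(b2_fit * 1.27))   # 1921265
```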

2018-11-13, 04:41   #22
axn

Jun 2003

2²·3²·151 Posts

Quote:
 Originally Posted by storm5510 After doing some observation while Prime95 was running, I saw that the "at prime" value does not exceed B1 in stage 1 or B2 in stage 2. This would explain the variation I see in the bounds when looking at the recent data on mersenne.org. It would also lend some support to the argument that wider bounds are better. The formula that ATH posted above goes well beyond B2 = B1 * 100. With a B1 of 10,000, the typical B2 would be 1,000,000. With his formula B2 becomes 1,512,807. Of course, this is not set in stone. He also indicates the time for Stage 1 and Stage 2 should be about the same. In order to match this, I would have to further increase B2 by 27%, more or less.
What are you trying to do? What is wrong with just completing the assignment you were given?

