mersenneforum.org ECM Stage 2 RAM

2018-12-06, 02:23   #45
chalsall
If I May

"Chris Halsall"
Sep 2002

2·5,431 Posts

Quote:
 Originally Posted by storm5510 Still, the possibility remains.
Of course. Non-zero probability applies here. Many nines, but not zero....

2018-12-06, 02:29   #46
GP2

Sep 2003

2587₁₀ Posts

Quote:
 Originally Posted by PhilF The numbers I'm being assigned are in the M12,000,000 and M19,000,000 range, with 4096MB of memory assigned (prime95 is actually using 3808MB in stage 2). From what I can tell, all the numbers I'm getting have never had any curves run on them. So, based on your input, on numbers that have no prior curves run, the probability of any 1 curve finding a factor is not dependent on the length of the number being factored.
I briefly let Primenet assign me exponents for ECM in the 17M range which had no known factors. It assigned one ECM curve per exponent, with B1=50000, B2=5000000 (i.e., t=25 depth).

It ran for maybe 11 days on two cores and found 2 factors out of 260 attempts.

If instead I tried to do the same thing in a smaller range, say under 100k — again, finding first factors of exponents with no known factors — it's extremely unlikely I'd find even one factor with the equivalent time and CPU effort, because that range has already been searched far more extensively.

On the other hand, if you look for additional factors of exponents in the sub-100k range that do already have known factors, you'd have better luck, but you'd still find factors at a slower rate than in the 17M range.
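For a sense of scale, GP2's reported yield works out to well under one percent per curve. A back-of-envelope sketch using only the figures quoted above (not numbers from the original post):

```python
# GP2's reported run: 2 factors in 260 one-curve attempts (B1=50000,
# B2=5000000) over ~11 days on two cores.
found, attempts, days, cores = 2, 260, 11, 2

per_curve_rate = found / attempts            # empirical per-curve success rate
core_days_per_factor = days * cores / found  # CPU cost per factor found

print(f"{per_curve_rate:.2%} per curve")                   # ~0.77%
print(f"{core_days_per_factor:.0f} core-days per factor")  # 11
```

This is purely descriptive of that one run; deeper-searched ranges like sub-100k would show a far worse rate, as the post explains.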

2018-12-06, 03:12   #47
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

7·13·61 Posts

Quote:
 Originally Posted by Gordon Not quite strictly true; here's one I found earlier: ECM found a factor in curve #25, stage #2, Sigma=3673943552503015, B1=1000000, B2=1000000000. UID: nitro/haswell, M85027 has a factor: 113574028377227867558212550573836752813871 (ECM curve 25, B1=1000000, B2=1000000000)
Gordon-
You quoted a post about GMP-ECM, but wasn't your demonstrated factor found with Prime95? If not, what version of GMP-ECM uses sigmas larger than 32 bits and has a user ID?

2018-12-06, 03:18   #48
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

7×13×61 Posts

Quote:
 Originally Posted by chalsall Incorrect. Flip a fair coin a thousand times and it turns up heads each time. What is the chance it will turn up tails the next flip?
Oh wise one, please illuminate the flaw in my thinking about ECM and Bayesian probability modification. What does tossing a coin have to do with looking for a factor of a certain size?

To try to use your terrible analogy: if I flip 1000 ECM "coins" and get heads every time, it's much more likely that tails doesn't exist for this size of curve than it was before I had run any curves. The probability that my next curve finds a factor is rather lower, given 1000 curves that didn't find one, than it was when I ran my first curve.
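The Bayesian update VBCurtis is describing can be made concrete. A minimal sketch — the 50% prior and the 1-in-25 per-curve hit probability are illustrative numbers, not values from this thread:

```python
def posterior_factor_exists(prior, hit_prob, failed_curves):
    """Probability that a factor of the target size exists, given that
    `failed_curves` curves -- each of which would find it with probability
    `hit_prob` if it existed -- all came up empty."""
    miss = (1 - hit_prob) ** failed_curves  # P(all curves miss | factor exists)
    return prior * miss / (prior * miss + (1 - prior))

# Illustrative numbers: 50% prior that a factor of this size exists,
# each curve finding it with probability 1/25 when it does exist.
p_before = posterior_factor_exists(0.5, 1 / 25, 0)     # 0.5
p_after = posterior_factor_exists(0.5, 1 / 25, 1000)   # vanishingly small
print(p_before, p_after)
```

The chance that the *next* curve succeeds is this posterior times the per-curve hit probability — which is exactly why curve 1001 after 1000 misses is a much worse bet than curve 1 was, unlike a coin flip.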

2018-12-06, 03:32   #49
PhilF

"6800 descendent"
Feb 2005

1332₈ Posts

Quote:
 Originally Posted by GP2 I briefly let Primenet assign me exponents for ECM in the 17M range which had no known factors. It assigned one ECM curve per exponent, with B1=50000, B2=5000000 (i.e., t=25 depth). It ran for maybe 11 days on two cores and found 2 factors out of 260 attempts. On the other hand, if I was trying to do the same thing in a smaller range like, say, under 100k — namely, finding first factors of exponents with no known factors — then it's really extremely unlikely that I'd find even one factor with the equivalent time and CPU effort, because it's been searched more extensively already. On the other hand, if you look for additional factors of exponents in the sub-100k range that do already have known factors, you'd have better luck, but you'd still find factors at a slower rate than in the 17M range.
All of that makes perfect sense to me. Thank you.

2018-12-06, 03:36   #50
GP2

Sep 2003

13×199 Posts

Quote:
 Originally Posted by chalsall Incorrect. Flip a fair coin a thousand times and it turns up heads each time. What is the chance it will turn up tails the next flip?
Throw a thousand darts blindfolded, and fail to hit anything. What is the chance that there is actually a dartboard within ten feet in front of you that you will hit on the next throw?

Note: the dartboard might be a hundred feet away, and you might need a million throws and a slingshot. Or it might be a hundred miles away, and you need a trillion projectiles and a railgun and a prayer.

2018-12-06, 09:35   #51
ATH
Einyen

Dec 2003
Denmark

5·683 Posts

Quote:
 Originally Posted by Gordon I am referring now to gmp-ecm. I'm currently working on M4007 with fairly high values for B1, B2: B1=32×10¹¹, B2=12×10¹⁴. I have 32 GB in this system and gmp-ecm will try to use all that it can; there is a bug either in gmp-ecm or Windows memory management. When it gets to 16 GB (not a typo) and tries to allocate another 4 GB chunk, it fails with out of memory — even though 8 GB is still free... so I cap it at 14 GB for stage 2
It needs a lot of RAM with those high bounds — maybe terabytes — so unless you use -maxmem 14000 or the -k option it will crash. If you start stage 2 with the -v parameter and without -maxmem or -k, it should print an estimate of the RAM stage 2 wants.
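In command form, ATH's advice looks roughly like this — a sketch only, using Gordon's M4007 and bounds from the quote above; the binary name and exact flag behavior should be checked against the GMP-ECM documentation for your build:

```shell
# First, let GMP-ECM print its stage-2 plan and memory estimate
# (verbose, no cap):
echo "2^4007-1" | ecm -v 32e11 12e14

# Then cap stage-2 memory at ~14 GB (-maxmem takes a value in MB),
# so it cannot over-allocate and die; -k instead forces stage 2 into
# a fixed number of smaller blocks:
echo "2^4007-1" | ecm -v -maxmem 14000 32e11 12e14
```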

2018-12-06, 15:02   #52
GP2

Sep 2003

13·199 Posts

Quote:
 Originally Posted by ATH It needs a lot of RAM with those high values maybe terabytes of RAM, so unless you use -maxmem 14000 or the -k option it will crash. If you start a stage 2 with the -v parameter and without -maxmem or -k it should estimate the RAM for stage 2.
If he's using GMP-ECM for stage 2, then I think it responds non-linearly to memory, no? For a given B2 value it wants a certain amount of memory, and if you reduce that memory by a factor of N (by using the -k option to use multiple blocks), it will take considerably longer than N times as much execution time.

So if you're giving 14 GB to a stage 2 that really wants terabytes, you're giving it less than a hundredth of the memory it wants, and it may take many thousands of times longer to complete.

Seems like a highly inefficient use of the machine. It could be tackling right-sized tasks instead.

PS,
For what it's worth, x1.32xlarge instances on AWS have 64 hyperthreaded cores and nearly 2 TB of memory (1952 GiB to be exact), and they cost $4/hour at current spot prices in us-east-2. If you need a mere 768 GiB of memory, an r5.24xlarge has that for $1/hour at spot prices (and 48 hyperthreaded cores). Various smaller options go all the way down to a one-core r5.large with 16 GiB for 2 cents an hour.
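Per unit of memory, the quoted spot prices are actually quite close; quick arithmetic on just the figures above:

```python
# $/hour per GiB of RAM at the quoted us-east-2 spot prices.
instances = {
    "x1.32xlarge": (4.00, 1952),  # ($/hr, GiB)
    "r5.24xlarge": (1.00, 768),
    "r5.large":    (0.02, 16),
}
for name, (price, gib) in instances.items():
    print(f"{name}: ${price / gib * 1000:.2f} per 1000 GiB-hours")
```

In other words, the memory itself is priced comparably across sizes; the big instances mostly cost extra when you can't keep all of their RAM busy.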

There are even AWS cloud machines with 12 TiB of memory and 224 cores, but those aren't available for spot or even on-demand, just for dedicated corporate in-memory databases. Give it a few years, though, and that kind of power will probably trickle down to general availability.

So if there's some specific number-crunching task that really responds well to enormous amounts of memory and can complete in a reasonable amount of time and you can splurge or pool funding, then renting is an option.

2018-12-06, 16:43   #53
Gordon

Nov 2008

3·13² Posts

Quote:
 Originally Posted by VBCurtis Gordon- You quoted a post about GMP-ECM, but wasn't your demonstrated factor found with Prime95? If not, what version of GMP-ECM uses sigmas larger than 32 bits and has a user ID?
That was the submission file to mersenne.org.

This is the output file... I chopped off the last 28k characters.

***

GMP-ECM 7.0-dev [configured with MPIR 2.7.0, --enable-asm-redc] [ECM]
Save file line has no equal sign after: [Tue Oct 13 22:
Resuming ECM residue saved with Prime95
Input number is 4758184975...7139017727 (25596 digits)
Using B1=1000000, B2=1000000000, polynomial Dickson(6), sigma=0:3673943552503015
Step 1 took 3847484ms
Step 2 took 1217010ms
********** Factor found in step 2: 113574028377227867558212550573836752813871
Found prime factor of 42 digits: 113574028377227867558212550573836752813871
Proving primality of 25555 digit cofactor may take a while...

Composite cofactor
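Two quick checks anyone can run on the factor reported in that output, using standard properties of Mersenne factors: any factor f of M_p divides 2^p − 1 (so 2^p ≡ 1 mod f), and every factor of M_p for prime p is ≡ 1 (mod 2p).

```python
p = 85027
f = 113574028377227867558212550573836752813871

# f divides 2^p - 1  <=>  2^p ≡ 1 (mod f)
assert pow(2, p, f) == 1

# Every factor of M_p (p prime) is congruent to 1 mod 2p
assert f % (2 * p) == 1

print("factor checks out")
```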

2018-12-06, 16:45   #54
Gordon

Nov 2008

3×13² Posts

Quote:
 Originally Posted by ATH It needs a lot of RAM with those high values maybe terabytes of RAM, so unless you use -maxmem 14000 or the -k option it will crash. If you start a stage 2 with the -v parameter and without -maxmem or -k it should estimate the RAM for stage 2.
I do use -maxmem 14000

2018-12-06, 16:46   #55
Gordon

Nov 2008

507₁₀ Posts

Quote:
 Originally Posted by GP2 If he's using GMP-ECM for stage 2, then I think it responds non-linearly to memory, no? For a given B2 value it wants a certain amount of memory, and if you reduce that memory by a factor of N (by using the -k option to use multiple blocks), it will take considerably longer than N times as much execution time. So if you're giving 14 GB to a stage 2 that really wants terabytes, you're giving it at least a hundred times less memory than it wants, and it will take maybe many thousands of times longer to complete. Seems like a highly inefficient use of the machine. It could be tackling right-sized tasks instead. PS, For what it's worth, x1.32xlarge instances on AWS have 64 hyperthreaded cores and nearly 2 TB of memory (1952 GiB to be exact), and they cost $4/hour at current spot prices in us-east-2. If you need a mere 768 GiB of memory, an r5.24xlarge has that for $1/hour at spot prices (and 48 hyperthreaded cores). Various smaller options all the way down to a one-core r5.large with 16 GiB for 2 cents an hour. There are even AWS cloud machines with 12 TiB of memory and 224 cores, but those aren't available for spot or even on-demand, just for dedicated corporate in-memory databases. Give it a few years, though, and that kind of power will probably trickle down to general availability. So if there's some specific number-crunching task that really responds well to enormous amounts of memory and can complete in a reasonable amount of time and you can splurge or pool funding, then renting is an option.
I don't know about thousands of times; it completes a run in about 4 days of wall time.
