2013-04-04, 15:16   #1
patrik ("Patrik Johansson", Aug 2002, Uppsala, Sweden)
Amazon EC2

After noticing chalsall's post last week, I've spent the last few days trying out the Amazon EC2 Linux instances. I've attached an image of the benchmarks I got.

I also ran some real iterations of double checks and first-time tests, i.e. with only double checks at FFT size 1728K running, or with only first-time tests running (possibly at a few different FFT sizes, since I got them at different times). Lines with those timings are given just below the well-known benchmark lines.

I ran the first tests at their site in Europe, and then moved to Oregon, since that is slightly cheaper (at least right now).

The names follow the same naming convention as Amazon uses, or an abbreviation thereof. I have appended "-oreg" to the computer names for the tests run in Oregon.

The t1.micro benchmark is somewhat misleading, since you only get those times for a short burst; over longer runs you may get only about a sixth of that throughput.

I did not notice any slowdown in the first-time tests when running first-time tests on all cores, compared with running first-time tests on most cores and double checks on the rest.

Spot instances of "Cluster Compute Eight Extra Large" (cc2.8xlarge) in Oregon seem to give the best bang for the buck at present.
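
For anyone wanting to do the same programmatically, here is a minimal sketch of requesting such a spot instance with the boto3 Python SDK (a newer tool than anything in this thread; the price cap, AMI ID and key pair name below are placeholders, not real values):

Code:
import boto3

# Oregon (us-west-2), where the spot prices discussed above apply.
ec2 = boto3.client("ec2", region_name="us-west-2")

response = ec2.request_spot_instances(
    SpotPrice="0.30",                 # placeholder bid cap in USD/hour
    InstanceCount=1,
    Type="one-time",
    LaunchSpecification={
        "ImageId": "ami-xxxxxxxx",    # placeholder 64-bit Linux AMI
        "InstanceType": "cc2.8xlarge",
        "KeyName": "my-key-pair",     # placeholder key pair name
    },
)
print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])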

Edit: I meant to post this in the Hardware forum (not in the GPU subforum). Can someone with mod rights move it, please?
[Attached image: ec2_bench.png (benchmark results)]

2013-04-04, 17:23   #2
chalsall ("Chris Halsall", Sep 2002, Barbados)

Quote:
Originally Posted by patrik
The t1.micro benchmark is somewhat misleading, since you only get those times for a short burst; over longer runs you may get only about a sixth of that throughput.

I did not notice any slowdown in the first-time tests when running first-time tests on all cores, compared with running first-time tests on most cores and double checks on the rest.
Interesting analysis. Thanks for sharing.

It is known that the t1.micro instances are not really intended for CPU-bound jobs -- they can be used for such, but they're slow. They are best suited for things like low-load web serving, DNS serving, and file transfers into an EBS volume for processing under a larger instance later. (I use them for the latter two applications.)

Quote:
Originally Posted by patrik
Spot instances of "Cluster Compute Eight Extra Large" (cc2.8xlarge) in Oregon seem to give the best bang for the buck at present.
It is also known that the larger VM instances give you better performance, since you tend to get most of the "real" machine.

Also, for this type of work, "Spot" instances are definitely the way to go. For one of my projects (computer vision), it is almost cheaper to rent an EC2 virtual machine than to pay for the electricity to run an equivalent machine here in Bimshire. Factor in (no pun intended) that there is no capital expenditure, and that I can spin them up only when I need them (and as many as I need), and it's a big win for me.
2013-05-08, 11:22   #3
Manpowre ("Svein Johansen", May 2013, Norway)
interesting

This is very interesting. I also see EC2 has a GPU cloud with 2x Tesla boards. Any tests of this with mfaktc?
2013-05-09, 05:54   #4
Karl M Johnson (Mar 2010)

No point; mfaktc does not use double-precision floating-point (DP FP) calculations.

2013-05-11, 22:17   #5
Manpowre ("Svein Johansen", May 2013, Norway)

Quote:
Originally Posted by Karl M Johnson
No point; mfaktc does not use double-precision floating-point (DP FP) calculations.
Ahh, only CUDALucas does that. Gotcha.
2013-05-11, 23:49   #6
TheJudger ("Oliver", Mar 2005, Germany)

Quote:
Originally Posted by Karl M Johnson
No point; mfaktc does not use double-precision floating-point (DP FP) calculations.
At least not in the performance-relevant sections. Some screen output is computed in double precision, and sieve initialization and worktodo parsing (checking limits) use a few DP instructions. But again: not performance-relevant!

Oliver
2014-04-19, 14:40   #7
patrik ("Patrik Johansson", Aug 2002, Uppsala, Sweden)

As a follow-up to this old thread, I just want to say that I have benchmarked the new servers Amazon provides, in particular their c3.8xlarge machine. Compared to the old cc2.8xlarge, iteration times are about 13% lower on the new c3.8xlarge, while it is just under 1.2% more expensive per hour with spot instance prices at their lowest level. (This is while testing 16 exponents at 3840K FFT.)
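
In throughput-per-dollar terms, those two rounded percentages work out roughly as follows (just a back-of-the-envelope Python calculation based on the figures above):

Code:
# c3.8xlarge vs. cc2.8xlarge, using the rounded figures quoted above.
iteration_time_ratio = 0.87  # ~13% lower iteration times
price_ratio = 1.012          # ~1.2% higher spot price per hour

throughput_ratio = 1 / iteration_time_ratio        # ~1.15x iterations per hour
per_dollar_ratio = throughput_ratio / price_ratio  # ~1.14x iterations per dollar
print(f"~{(per_dollar_ratio - 1) * 100:.0f}% more throughput per dollar")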

However, it is important to select the correct virtualization type: HVM. You select that when you choose the so-called AMI (Amazon Machine Image), which contains the OS etc. for your instance. Otherwise the iteration times get much worse.

Code:
File	Setup					Iterations	Avg. time (ms)
==============================================================================
c3.8xlarge:
bench1	16 workers, 32 threads, paravirtual	 1180000	49.6707
bench2	16 workers, 32 threads, paravirtual	  130000	49.0276
bench3	16 workers, 32 threads, hvm		 3160000	31.7276
bench4	16 workers, 16 threads, hvm		34140000	30.8819
bench5	16 workers, 16 threads, hvm		51190000	30.895

Compare old cluster cc2.8xlarge:
bench1	16 workers, 32 threads, hvm		  480000	37.5672
bench2	16 workers, 16 threads, hvm		  810000	35.5468
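
For reference, the virtualization type of an available AMI can be checked programmatically before launching; here is a minimal sketch with the boto3 Python SDK (a later tool than what was used for these benchmarks), listing Amazon-owned 64-bit Linux images that use HVM:

Code:
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# List Amazon-owned x86_64 images whose virtualization type is HVM.
images = ec2.describe_images(
    Owners=["amazon"],
    Filters=[
        {"Name": "virtualization-type", "Values": ["hvm"]},
        {"Name": "architecture", "Values": ["x86_64"]},
    ],
)
for image in images["Images"][:5]:
    print(image["ImageId"], image["VirtualizationType"], image.get("Name", ""))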
[Attached image: ec2-os'es_hvm.jpg]

2014-04-20, 08:51   #8
ET_ ("Luigi", Aug 2002, Team Italia)

Quote:
Originally Posted by patrik
As a follow-up to this old thread, I just want to say that I have benchmarked the new servers Amazon provides, in particular their c3.8xlarge machine. Compared to the old cc2.8xlarge, iteration times are about 13% lower on the new c3.8xlarge, while it is just under 1.2% more expensive per hour with spot instance prices at their lowest level. (This is while testing 16 exponents at 3840K FFT.)

However, it is important to select the correct virtualization type: HVM. You select that when you choose the so-called AMI (Amazon Machine Image), which contains the OS etc. for your instance. Otherwise the iteration times get much worse.

Code:
File	Setup					Iterations	Avg. time (ms)
==============================================================================
c3.8xlarge:
bench1	16 workers, 32 threads, paravirtual	 1180000	49.6707
bench2	16 workers, 32 threads, paravirtual	  130000	49.0276
bench3	16 workers, 32 threads, hvm		 3160000	31.7276
bench4	16 workers, 16 threads, hvm		34140000	30.8819
bench5	16 workers, 16 threads, hvm		51190000	30.895

Compare old cluster cc2.8xlarge:
bench1	16 workers, 32 threads, hvm		  480000	37.5672
bench2	16 workers, 16 threads, hvm		  810000	35.5468
Are there price differences between HVM and paravirtual AMIs?

Luigi
---
2014-04-20, 12:16   #9
patrik ("Patrik Johansson", Aug 2002, Uppsala, Sweden)

As I understand it, there are free AMIs for either paravirtual or HVM, and you pay for the time you have the instance (plus storage and I/O). However, I can't access the details of my bill yet.

I'm actually not sure that it is the virtualization itself that matters, but that is the only difference I can find in the descriptions of the two images I've tried.

2014-04-20, 16:11   #10
ET_ ("Luigi", Aug 2002, Team Italia)

Quote:
Originally Posted by patrik
As I understand it, there are free AMIs for either paravirtual or HVM, and you pay for the time you have the instance (plus storage and I/O). However, I can't access the details of my bill yet.

I'm actually not sure that it is the virtualization itself that matters, but that is the only difference I can find in the descriptions of the two images I've tried.
I see. Thanks!

Luigi
2014-06-17, 18:04   #11
Mark Rose ("/X\(‘-‘)/X\", Jan 2013, https://pedan.tech/)

Quote:
Originally Posted by ET_
Are there price differences between HVM and paravirtual AMIs?

Luigi
---
Not directly. The newer-generation HVM hardware (m3/c3/r3/i2/g2) is about 40% cheaper than the older paravirtual hardware. The m3 instances can also run paravirtual, but I would always run HVM if possible, as there is far less jitter in disk and network I/O.

On-demand pricing is cheaper than buying if you only need the hardware for a short time. If you are running it for more than a few months per year, you're far better off buying a reservation than paying on-demand pricing (it can be up to 70% cheaper). Reserved pricing is competitive with running your own hardware if you have to pay someone to maintain that hardware, the network, and so on.
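
As a rough illustration of that break-even point (the prices below are made-up placeholders for the example, not actual AWS rates):

Code:
# Hypothetical prices for illustration only -- not actual AWS rates.
on_demand_per_hour = 0.105  # USD/hour, pay as you go
reserved_upfront = 300.0    # USD, one-time payment for a 1-year reservation
reserved_per_hour = 0.030   # USD/hour while the reservation is active

break_even_hours = reserved_upfront / (on_demand_per_hour - reserved_per_hour)
print(f"Reservation pays off after ~{break_even_hours:.0f} hours "
      f"(~{break_even_hours / (24 * 30):.1f} months of continuous use)")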

Amazon already runs mprime/prime95.

If you're looking to rent hardware for crunching primes, Digital Ocean is a far better deal. For $5/month you'll get a VPS with roughly half the performance of a c3.large, which would cost you $75/month on-demand.

AMIs are usually free, but they will charge more per hour if you run Windows or other paid AMIs.