mersenneforum.org AVX CPU LL vs CUDA LL

 2012-01-03, 13:33 #1 nucleon     Mar 2003 Melbourne 20316 Posts

AVX CPU LL vs CUDA LL

Which one comes out on top? I have a feeling that 27.x Prime95 code with AVX on a 6-core i7-3930K beats CUDA LL on a GTX 580 in these metrics:

1) GHz-days per initial cost*
2) GHz-days per ongoing cost**
3) Raw GHz-days/day throughput

Of course a GTX 580 still beats a single AVX 3930K core for latency. Does anyone have authoritative stats comparing the two? Bonus points if you can prove either way the effective LL throughput of doing TF only on the GPUs/CPUs with mfaktc versus LL on an AVX 6-core CPU.

-- Craig

*I'm thinking initial CPU costs include only the CPU, cooler, motherboard, and RAM. Initial GPU cost is the cost of the GPU alone.
**I'm thinking ongoing cost is equivalent to power cost. Total power of the CPU setup is the 'wall' power measurement with all six cores at 100% running LL. One can get GPU power consumption figures from any number of sites, including wiki pages.
 2012-01-03, 13:40 #2 nucleon     Mar 2003 Melbourne 10038 Posts

I guess what I'm getting at is this: hypothetically, if I were to hand over a credit card* for you to buy equipment and pay its power bill, what would you buy to maximize its contribution to the project, making the most of the funds you had access to?

Or, to phrase it in a more down-to-earth way: given what equipment I have at my disposal, what do I do to maximize my contribution?

(BTW, I've removed the FX-8120 from my farm and replaced it with a Core i7-3930K @ 4.2 GHz; it's _much_ _much_ better than the FX-8120.)

-- Craig

*Don't get any ideas - I'm not going to hand over any of my credit cards :)
 2012-01-03, 14:12 #3 KyleAskine     Oct 2011 Maryland 2·5·29 Posts

I would be curious to see how the 3930K stacks up against a 2500K on your metrics. Between the expensive chip, the expensive board, and the much higher base wattage (lower OC headroom), I would imagine the 2500K is still probably the way to go.

I imagine you are correct in thinking that CPU LL is superior to GPU LL, if you are only interested in doing LL. But I would be interested in seeing statistics too.
 2012-01-03, 21:51 #5 Brain     Dec 2009 Peine, Germany 331 Posts

breakeven point

Here we have a GTX 580 running a 54M exponent at 8.6 ms per iteration.

3930K needs 130 W, GTX 580 needs 315 W, ratio = 2.4
3930K runs 6 jobs on 6 cores, GTX 580 runs 1 job
8.6 ms * 2.4 * 6 = 125 ms @ 54M exponent

This implies that the computing-power-per-energy breakeven point is at 125 ms: a single 3930K LL test could take up to 125 ms per iteration and still be "as powerful as the GPU" per watt.

Last fiddled with by Brain on 2012-01-03 at 21:56 Reason: +energy
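[Editor's note: the breakeven arithmetic in post #5 above can be written out as a short sketch. All figures are the ones quoted in the post; nothing is measured here.]

```python
# Breakeven sketch for post #5: how slow can a single 3930K iteration be
# while six cores still match the GTX 580 per watt of power drawn?
gtx_iter_ms = 8.6    # GTX 580 on a 54M exponent
cpu_watts = 130.0    # 3930K power draw quoted in the post
gpu_watts = 315.0    # GTX 580 power draw quoted in the post
cores = 6            # independent LL jobs on the CPU

power_ratio = gpu_watts / cpu_watts              # ~2.42
breakeven_ms = gtx_iter_ms * power_ratio * cores # ~125 ms per iteration
print(f"breakeven: {breakeven_ms:.0f} ms/iter")
```

Any single-core iteration time below the breakeven figure means the CPU delivers more LL work per kWh than the GPU, under these assumptions.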
 2012-01-03, 23:39 #6 Jaxon   Dec 2011 2·32 Posts

It also depends on how fast you clock the 3930K. Tom's Hardware's test of the chip last month gives a power draw of around 165 W for the processor at 4.2 GHz.
http://www.tomshardware.com/reviews/...rk,3090-2.html

James has iteration times here for a 3930K at 4.9 GHz. That would draw over 180 W according to the above chart.
http://mersenne-aries.sili.net/bench...z&l2=&orderby=

A report of 9.3 ms/iteration for a 54M exponent on a GTX 580 is here. No mention of what clock speed the card is operating at.
http://www.mersenneforum.org/showpos...7&postcount=61

A 54M exponent uses around, what, a 2900K FFT size? A similar exponent on the benchmarked 3930K overclocked to 4.9 GHz would take a little under 23 ms per iteration. If the 3930K can keep that up across all six cores, it would have roughly 2.4 times the throughput of a GTX 580. Iteration times on the 3930K would have to increase to more than 55 ms before the two reached parity.
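[Editor's note: post #6's throughput comparison reduces to a few lines of arithmetic; this sketch just reproduces the post's numbers.]

```python
# Six 3930K cores at 23 ms/iter versus one GTX 580 at 9.3 ms/iter,
# per post #6. "Effective" CPU time treats the six cores as a fleet.
cpu_iter_ms = 23.0   # per core, 3930K @ 4.9 GHz, ~54M exponent
gpu_iter_ms = 9.3    # GTX 580, same exponent size
cores = 6

effective_cpu_ms = cpu_iter_ms / cores  # ~3.83 ms/iter fleet-wide
ratio = gpu_iter_ms / effective_cpu_ms  # ~2.4x CPU throughput advantage
parity_ms = gpu_iter_ms * cores         # ~55.8 ms: single-core parity point
```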
 2012-01-03, 23:58 #7 Jaxon   Dec 2011 100102 Posts

Making some assumptions about the TF rate for a GTX 580: 1 hour to TF from 71 to 72 bits, with a probability of finding a factor at this depth of 1/72, gives an average of 72 hours of computation to clear one 54M exponent on the GTX.

At that rate, TF on the GTX clears exponents more than 3x faster than running LL tests on the GTX (9.3 ms/iteration x 54M iterations x 2 = 279 hours).

For the 3930K: 23 ms per iteration x 54M iterations = 1242 x 10^6 ms = 345 hours per core, divided by 6 cores = 57.5 hours, x 2 LL checks = an average throughput of one exponent cleared every 115 hours.
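[Editor's note: post #7's three clearing rates, checked end to end with the same assumptions the post states.]

```python
# Hours for one 54M exponent to be "cleared" by each route (post #7).
exponent = 54_000_000  # iterations per LL test ~ exponent

# Trial factoring on the GTX 580: 1 hour for bit level 71->72,
# 1/72 chance of a factor, so ~72 expected hours per cleared exponent.
tf_hours = 1 * 72

# LL on the GTX 580: two tests (first test + double-check) at 9.3 ms/iter.
gpu_ll_hours = 9.3e-3 * exponent * 2 / 3600   # ~279 hours

# LL on the 3930K: 23 ms/iter, six cores sharing the load, two tests.
cpu_core_hours = 23e-3 * exponent / 3600      # ~345 hours per core
cpu_ll_hours = cpu_core_hours / 6 * 2         # ~115 hours per cleared exponent
```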
 2012-01-04, 00:42 #8 Jaxon   Dec 2011 2×32 Posts

One more thing to consider when calculating power for your graphics card: when it performs trial factoring work, it requires not only its own dedicated power, but also a portion of the power needed to operate the CPU. A card performing LL work needs only minimal CPU resources.

3930K @ 4.9 GHz: 180 W x 115 hours = 20.7 kWh to clear an exponent
GTX 580: 315 W + 100 W (two cores of a 2500K @ 4.6 GHz drawing ~200 W) = 415 W x 72 hours = ~29.9 kWh to clear an exponent

Some very rough approximations of system costs: from the Tom's Hardware article, $880 for the 3930K system plus maybe $120 for a decent power supply would make $1000 total. It would probably be most economical to run two 580s on a 2500K system: around $500 for the processor, motherboard, and memory, around $200 for a 900+ W power supply, and the cards themselves are starting to dip into the $450 range, giving an approximate cost of $1600. A system with three 580s running off a 3930K would cost nearly $2500.

The GPU system costs 1.6 times as much, but is 3.2 times faster. However, in the process it will end up drawing 4.6 times the power. So the 3930K system is more power efficient, but slower.
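[Editor's note: the kWh-per-exponent figures in post #8 follow directly from the earlier hours-per-exponent numbers; this sketch reproduces them. The 100 W CPU share for the TF host is the post's own estimate.]

```python
# Energy to clear one 54M exponent, per post #8's assumptions.
cpu_kwh = 180 * 115 / 1000          # 3930K LL: 20.7 kWh per exponent
gpu_kwh = (315 + 100) * 72 / 1000   # GTX 580 TF + CPU share: ~29.9 kWh
```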
2012-01-04, 06:36   #10
Brain

Dec 2009
Peine, Germany

331 Posts

Quote:
 Originally Posted by Jaxon 3930k @4.9GHZ: 180W x 115 hours = 20.7kWh to clear an exponent GTX 580 Power consumption: 315W + 100W(2 cores of a 2500k @4.6GHz drawing ~200W) = 415W x 72 hours = ~29.9kWh to clear an exponent
I love this scale: kWh per LL candidate. Never calculated this.

Every CPU LL result costs me 5€. Better than smoking...

2012-01-04, 16:57   #11
pinhodecarlos

"Carlos Pinho"
Oct 2011
Milton Keynes, UK

23×607 Posts

Quote:
 Originally Posted by Brain I love this scale: kWh per LL candidate. Never calculated this.
It's like specific energy (energy per unit mass). Here it is energy per unit of LL work done. It is this ratio that should be compared between CPU and GPU, not energy alone.
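[Editor's note: applying post #11's "specific energy" metric to the thread's own figures from post #8 gives the ratio it argues should be compared.]

```python
# Energy per cleared exponent, from post #8, compared as a ratio
# rather than as raw wattage (pinhodecarlos's point in post #11).
cpu_kwh_per_exponent = 20.7   # 3930K, LL route
gpu_kwh_per_exponent = 29.9   # GTX 580, TF route incl. CPU share

ratio = gpu_kwh_per_exponent / cpu_kwh_per_exponent  # ~1.44x
```

By this metric the GPU route clears exponents faster but spends roughly 44% more energy per exponent cleared, matching the thread's conclusion.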

Last fiddled with by pinhodecarlos on 2012-01-04 at 16:58

