mersenneforum.org mfaktc: a CUDA program for Mersenne prefactoring

 2020-07-24, 22:55 #3279 storm5510 Random Account     Aug 2009 U.S.A. 1792₁₀ Posts There is something I feel I need to pass on. I received a new GPU today. A GTX 1650. The older CUDA 10 version of mfaktc will not run with it. The newer 2047 runs very well. Off-topic: I suspect gpuOwl will run with it. I have my doubts about CUDALucas and CUDAPm1. I will have to try each. It is running around 950 GHz-d/day. The temperature is 64°C. The clock speeds are slightly less than my 1080. Everything on the back of the machine is relatively cool to the touch. So, it appears everything will be fine. If my 1080 decides to die, I know what I will replace it with. For the cost, I have no complaints at all.
 2020-07-28, 19:46 #3280 TheJudger     "Oliver" Mar 2005 Germany 456₁₆ Posts

Hi,

seems like mfaktc runs fine with CUDA 11 on Ampere (no specific changes for Ampere except the Makefile).

Code:
mfaktc v0.22-pre8 (64bit built)
[...]
CUDA version info
  binary compiled for CUDA  11.0
  CUDA runtime version      11.0
  CUDA driver version       11.0

CUDA device info
  name                      A100-SXM4-40GB
  compute capability        8.0
  max threads per block     1024
  max shared memory per MP  167936 byte
  number of multiprocessors 108
  clock rate (CUDA cores)   1410MHz
  memory clock rate:        1215MHz
  memory bus width:         5120 bit
[...]
Starting trial factoring M66362159 from 2^74 to 2^75 (57.65 GHz-days)
 k_min = 142321062303420
 k_max = 284642124610180
Using GPU kernel "barrett76_mul32_gs"
   Date  Time | class    Pct |  time    ETA | GHz-d/day  Sieve   Wait
Jul 19 21:19 |     0   0.1% | 0.829 13m15s |   6259.18  82485  n.a.%
Jul 19 21:19 |     4   0.2% | 0.779 12m26s |   6660.92  82485  n.a.%
Jul 19 21:19 |     9   0.3% | 0.780 12m26s |   6652.38  82485  n.a.%
[...]
Jul 19 21:31 |  4617 100.0% | 0.780  0m00s |   6652.38  82485  n.a.%
no factor for M66362159 from 2^74 to 2^75 [mfaktc 0.22-pre8 barrett76_mul32_gs CUDA 11.0 arch 8.0] 51D74917
tf(): total time spent: 12m 32.323s

New absolute performance champion and I guess best performance per watt, too!

Older benchmark data for Turing (RTX 2080 Ti): https://mersenneforum.org/showpost.p...postcount=2912

Oliver
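[Editor's note: the k_min/k_max values in the log above follow from the form of Mersenne-number factors: any factor q of M_p = 2^p − 1 satisfies q = 2kp + 1, so trial factoring from 2^74 to 2^75 means k runs from roughly 2^74/(2p) to 2^75/(2p). A minimal sketch in plain Python, not mfaktc's actual code; mfaktc additionally rounds k_min down to a class boundary (it uses 4620 classes), which is why its printed k_min is slightly below the raw quotient:]

```python
# Factors of M_p = 2^p - 1 have the form q = 2*k*p + 1, so trial factoring
# the bit range [2^b1, 2^b2] corresponds to k in [2^b1/(2p), 2^b2/(2p)].
def k_range(p, b1, b2):
    return (1 << b1) // (2 * p), (1 << b2) // (2 * p)

p = 66362159                       # exponent from the log above
k_min, k_max = k_range(p, 74, 75)
print(k_min, k_max)
# k_max matches the log exactly (284642124610180); the raw k_min is a bit
# above the logged 142321062303420 because mfaktc rounds it down to a
# class boundary (class width 4620).
```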
2020-07-28, 20:37   #3281
James Heinrich

"James Heinrich"
May 2004
ex-Northern Ontario

3×23×47 Posts

Quote:
 Originally Posted by TheJudger New absolute performance champion and I guess best performance per watt, too!
Oooh, juicy new benchmark! It is indeed the current performance champion, but (according to my numbers) the Tesla T4 is still the efficiency champion (70 W vs 400 W is a big difference).

If you have access to this GPU again and can run a quick CUDAlucas and/or GPUowl benchmark, I would appreciate it.

 2020-07-28, 20:46 #3282 TheJudger     "Oliver" Mar 2005 Germany 2·3·5·37 Posts Hi James, I can't remember a T4 hitting 2500 GHz-d/d. And while that A100 has a TDP of 400 W, it reports "just" 290 to 300 W while running mfaktc. But you still might be correct that a T4 has a better performance-per-watt ratio when looking just at the power consumption of the GPU itself. I tend to ignore those smaller GPUs, sorry. Oliver Last fiddled with by TheJudger on 2020-07-28 at 21:15
2020-07-28, 21:00   #3283
James Heinrich

"James Heinrich"
May 2004
ex-Northern Ontario

3×23×47 Posts

Quote:
 Originally Posted by TheJudger I tend to ignore those smaller GPUs, sorry.
Those "smaller" GPUs that still have nearly 5x the TF performance of my RX 480

2020-07-28, 21:12   #3284
James Heinrich

"James Heinrich"
May 2004
ex-Northern Ontario

3×23×47 Posts

Quote:
 Originally Posted by TheJudger I can't remember a T4 hitting 2500 THz-d/d.
Neither can I. Not even 2500 GHz-d/d

I actually looked back and found one actual benchmark for a T4 that put it at around 1700 GHz-d/day. On my chart it was being lumped in with the other CUDA 7.5 cards; I have adjusted my data, so the numbers should be a little more accurate. Still more efficient than the A100, but only 50% better, not 100% better.
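[Editor's note: the "50% better" claim checks out on TDP figures. A quick sketch using the numbers quoted in this thread (~1700 GHz-d/day at a 70 W TDP for the T4, ~6652 GHz-d/day at a 400 W TDP for the A100; actual draw under mfaktc is lower, as Oliver notes, which would narrow the gap):]

```python
# Rough perf-per-watt comparison from the figures quoted in this thread.
def ghzd_per_watt(ghzd_per_day, watts):
    return ghzd_per_day / watts

t4   = ghzd_per_watt(1700.0, 70.0)    # Tesla T4 benchmark vs. 70 W TDP
a100 = ghzd_per_watt(6652.38, 400.0)  # A100 run above vs. 400 W TDP
print(f"T4: {t4:.1f}, A100: {a100:.1f}, ratio: {t4 / a100:.2f}")
# -> T4: 24.3, A100: 16.6, ratio: 1.46  (i.e. roughly 50% better)
```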

 2020-07-28, 21:17 #3285 TheJudger     "Oliver" Mar 2005 Germany 2·3·5·37 Posts I guess for many of us an RX 480 is far more enjoyable than a T4 for home use (PC games)
 2020-07-29, 01:19 #3286 storm5510 Random Account     Aug 2009 U.S.A. 28·7 Posts I only use the two I have now for this project. I don't play games with them. I watch the power consumption. 200W for one and 75W for the other. 6.6 kWh in 24 hours, if I keep them running continuously, which I do not. In simpler terms, about $25 USD a month for both at the current rate. That's no back-breaker. Utility costs here are quite steady. An average would be around$0.13 USD per kWh. There are members here to pay far more. If I had to pay what some are, there wold be a lot of oil lamps and candles used. I don't see how they manage.
2020-07-29, 12:40   #3287
kriesel

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

29·167 Posts

Quote:
 Originally Posted by storm5510 Utility costs here are quite steady. An average would be around $0.13 USD per kWh. There are members here who pay far more. If I had to pay what some are, there would be a lot of oil lamps and candles used. I don't see how they manage.
Price paraffin candles on a $/Btu and $/lumen-hour basis, and electricity even at $1/kWh won't look so bad. The same goes for kerosene lamps.
Have you tried using nvidia-smi to reduce power use on your GPUs? I find running RTX 20xx or GTX 1650 GPUs at 50% power still provides 80% of TF throughput. It's a lot easier on the air conditioner too, so the power and cost savings are considerable, even at my ~$0.12/kWh.
Code:
:d0 gtx1080 90 to 291 W
"c:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe" -i 0 -pl 90
:d1 gtx1650 45 to 75 W
"c:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe" -i 1 -pl 45
:d2 gtx1650 45 to 90 W
"c:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe" -i 2 -pl 45
:d3 rtx2080 125 to 258 W
"c:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe" -i 3 -pl 125
:d4 gtx1080ti 125 to 300 W
"c:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe" -i 4 -pl 125
Last fiddled with by kriesel on 2020-07-29 at 12:50

2020-07-29, 15:54   #3288
storm5510
Random Account

Aug 2009
U.S.A.

2⁸·7 Posts

Quote:
 Originally Posted by kriesel Price paraffin candles on a $/Btu and $/lumen-hour basis, and electricity even at $1/kWh won't look so bad. The same goes for kerosene lamps. Have you tried using nvidia-smi to reduce power use on your GPUs? I find running RTX 20xx or GTX 1650 GPUs at 50% power still provides 80% of TF throughput. It's a lot easier on the air conditioner too, so the power and cost savings are considerable, even at my ~$0.12/kWh.
Two years ago when I started using this 1080, my UPS would begin to "whistle" after running for a few seconds, so I used MSI Afterburner to throttle the GPU back to 85% of capacity. The load was at, or beyond, the capacity of the UPS, which is really not large enough at a maximum of 300 W. Now, I no longer need to throttle. The GPU's overall performance has dropped around 10%, but it still runs above a thousand GHz-days per day in mfaktc. My average monthly utility cost is $115 USD during the warm-weather months. Winter is when I have to be cautious: my furnace has two 20-ampere heating elements at 240 VAC. It can eat a decent-sized hole in my pocket if I fail to pay attention.
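[Editor's note: kriesel's 50%-power/80%-throughput observation implies a sizeable efficiency gain, which is easy to quantify. A sketch with illustrative numbers; the 0.5/0.8 split is kriesel's rule of thumb, and the GTX 1650 figures come from his batch file:]

```python
# If capping a GPU at 50% of rated power still yields 80% of TF throughput
# (kriesel's observation), then GHz-d/day per watt improves by 0.8/0.5 = 1.6x.
def efficiency_gain(power_frac, throughput_frac):
    return throughput_frac / power_frac

print(efficiency_gain(0.5, 0.8))   # 1.6x perf-per-watt at the cap
# Example: a GTX 1650 capped from 75 W to 45 W is a 0.6 power fraction;
# if throughput is still 80%, that is a 0.8/0.6 ~ 1.33x efficiency gain.
```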

2020-07-29, 16:03   #3289
chalsall
If I May

"Chris Halsall"
Sep 2002

2·3·1,567 Posts

Quote:
 Originally Posted by storm5510 In the winter is where I have to be cautious. My furnace has two 20 ampere heating elements. It is 240 VAC. It can eat a decent size hole in my pocket if I fail to pay attention.
Isn't that what GPUs are really for? Space heaters???
