#1 | Jun 2010 | Pennsylvania
Quick question:
Do you run MFAKTx on your GPU all the time, or do you let it take a break every so often? I'm wondering about the "health effects" on the card of running it constantly at full tilt. Anything there to be concerned about, or not really? What do you do? Thanks in advance.

Rodrigo
#2 | "/X\(‘-‘)/X\" | Jan 2013
You can eventually wear out the fans after some time (years).
I run GPUs 24/7, except when I'm using the computer in question. Giving them a break is not necessary. Keeping a steady temperature is probably better for them.
#3 | Mar 2013 | Dallas, TX
I have two GTX 460s that have been running 24/7 for the last 1.5 years. Temps are around 70 deg. C for one and 80 deg. C for the other. So far, all is good. If the temps start to rise a few degrees, I'll shut the system down and blow all of the dust out of the heat pipes in the cards.
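Not part of the original post, but for anyone who wants to automate the "watch the temps, then blow the dust out when they creep up" routine described above, here is a minimal monitoring sketch that polls the card through nvidia-smi. It assumes an NVIDIA card with a driver recent enough to report temperature; the baseline, margin, and polling interval are arbitrary placeholders to adjust for your own hardware.

```python
import subprocess
import time

BASELINE_C = 80      # assumed "normal" full-load temperature for this card
ALERT_MARGIN_C = 5   # warn once we drift this far above the baseline
POLL_SECONDS = 300   # check every five minutes

def gpu_temps():
    """Return the core temperature of each GPU, in deg. C, via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return [int(line) for line in out.splitlines() if line.strip()]

if __name__ == "__main__":
    while True:
        for idx, temp in enumerate(gpu_temps()):
            if temp >= BASELINE_C + ALERT_MARGIN_C:
                print(f"GPU {idx}: {temp} C -- time to clean the heatsink?")
        time.sleep(POLL_SECONDS)
```

The same idea works as a cron job with a one-line shell call if you'd rather not keep a Python process running.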
#4 | May 2013 | East. Always East.
GPUs tend to be much hardier pieces of equipment than their CPU counterparts. Not to take away from CPUs though, because both are certainly well built these days.
I wouldn't worry about 24/7 operation, but it's a good idea to replace the fans after a couple of years, or if performance seems to be degrading for whatever reason. Keep them dust-free, etc., and make sure the temperatures are reasonable. They can take 100C, though I would certainly stay away from that for any length of time. My water-cooled GPU is in the mid 30s and my air-cooled one is in the mid 60s. Not too concerned about either.
#5 | "Victor de Hollander" | Aug 2011 | the Netherlands
I had an ASUS HD7950 with a malfunctioning fan after 1.5 years of 24/7 running, so I sent it in for RMA. ASUS was kind enough to swap it for a new card :D. Fans are going to be your biggest concern, next to cooling and your electricity bill :P.

Temperature and fans
I try to keep my GPUs <75C, but that might be conservative. Temperatures on my HD7950 and 280X fluctuate between 67C (at night) and 73C (on warm days). I've got a 140mm fan blowing cold air onto the GPU and 2x 120mm fans pushing warm air out of the top of one case; the other case has an open side panel. The fans on the GPUs run at 1700 RPM, which for me is the sweet spot between temps and noise. My GPUs/CPUs run 24/7 most of the time, except when I'm racing (F1 2013) or playing Assassin's Creed. A constant temperature is better than hot/cold cycles, which can cause miniature cracks in the solder.

Cooling VRMs
Cooling the VRMs (voltage regulator modules) is also important to some extent, since on some cards they can get much hotter than the GPU core (90C is not exceptional). They are usually rated for a maximum between 100C and 130C, but I would advise keeping them <90C just to be on the safe side. With most cards you can check the VRM temp with GPU-Z (sensors tab). For instance, the MSI GTX780 TwinFrozr 3 has a high VRM temp issue: http://hardforum.com/showthread.php?t=1807147

Power phases
I would pick a card with a decent number of power phases. The reference GTX570 PCB only had a 6-phase VRM design (4 GPU + 2 memory), which is terrible for a card pulling 200W! Custom PCBs usually have more, and higher quality, phases. For instance, an ASUS GTX570 DC2 has 8 power phases (6+2). Two extra power phases may not look like much of a difference, but 200W provided by 4 phases works out to roughly 50VA per phase, compared to 33VA per phase with 6 phases (see the sketch after this post). Compare it to a car cruising on the highway versus one running at full throttle all the time. Some references:
http://www.overclock.net/t/929152/ha...-buy-some-570s
http://forums.evga.com/ASUS-GTX-570-...e-m960486.aspx (scroll down to post #6)

Quality PSU
I had a crappy 800W PSU (don't remember the brand) which failed after 1.5 years of providing juice to a 2500K @ 4.0 and a single GTX480. Together they probably don't even draw 400W, but that constant power draw was too much for that B-quality PSU. Now I only use 80+ Gold rated Seasonic and Cooler Master PSUs.

Overclocking
I would advise against overclocking GPUs: it usually results in more power draw and higher temps, which stresses the GPU core and VRMs, and it requires higher fan speeds to compensate, which means more fan wear. Better to buy more cards and stick them in multiple cases ;).
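Not part of the original post, but here is a minimal sketch of the rough per-phase arithmetic mentioned under "Power phases" above. It simply divides the card's power draw evenly across the GPU core phases, the same simplification used in the post; real VRM controllers balance the load dynamically.

```python
# Rough per-phase load, assuming the card's power draw is split evenly
# across the GPU core phases (a simplification, as in the post above).
CARD_POWER_W = 200            # approximate draw of a GTX570 under load

for phases in (4, 6):         # reference 4-phase vs. custom 6-phase design
    print(f"{phases} GPU phases: ~{CARD_POWER_W / phases:.0f} W per phase")
```

Running it reproduces the roughly 50-per-phase versus 33-per-phase figures quoted in the post.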
#7 | Romulan Interpreter | Jun 2011 | Thailand
I have two of my 580s (the oldest of the "battery") running 24/7 since November 2011. Yes, that is no mistake: I mean 24/7.

I water-cooled them (somewhere in the middle of 2012; the discussion is somewhere here on the forum). The only "bad" things that can happen are that the fans' bearings get damaged over time, or the plastic propellers get bent due to temperature. But... BUT! Almost all cards (especially the expensive ones) have fans which are brushless and bearingless (magnetic suspension), and if you clear out the dust clogs from time to time, there will be no problem with the fans, even if you stay on air cooling.

In fact, water cooling is more "sensitive" to 24/7 running: the first thing that crashes is the pump (which is only guaranteed for something like 5000 hours by the manufacturer), so if you switch to liquid, buy a good pump and always have a spare. A well-designed water cooling loop will be able to run your computer without a pump for a while, for "normal work" (not P95 or CUDA/games, but winword/excel/outlook/office stuff will still work; the water circulates at a lower pace, due to thermal convection).

Besides the noise, the extra money spent on electricity, and the extra heat you generate, there is no negative side to running 24/7. On the contrary, there are lots of positive sides. One is thermal expansion: repeatedly starting and stopping your computer, especially in a cold room/climate, makes the components cool down and heat up in cycles, causing mechanical expansion and contraction, like repeatedly bending a wire until it breaks when you don't have pliers. The little silicon balls and rods are exposed to thermal strain/stress every time your computer heats up and cools down, causing damage. Not to mention the time spent waiting for it to boot up.
#8 | May 2013 | East. Always East.
Regarding the thermal expansion and contraction cycles:
The GPU never stops, whereas the CPU does while running Prime95. As far as I know, there isn't a way to write the save files in a staggered manner, so each worker stops and waits for the slow, clunky hard drive to write out a file of several megabytes per worker, which is enough time for the CPU to cool if it is under a strong enough cooler.

For example, right now my GPU is at 38C (warm day), max 40, min 32, and that is over the course of well over a week, during which I do occasionally stop my GPU to actually do things with it. My second GPU is at 65, max 69, min 61 (EDIT: I want to stress the +/- 4C over the course of an entire week). On the other hand, my CPU is at 66, max 70, min 34.

A good stability test is to stress the living sh*t out of the CPU, just like Prime95 does. A good durability test would be to stop and start such stress tests multiple times per minute to get the temperatures to swing from 30C to 100C repeatedly.

Last fiddled with by TheMawn on 2014-08-01 at 04:53
#9 | Jun 2010 | Pennsylvania
Wow, this yielded way more knowledge than I'd ever expected! Thank you all very much -- hopefully I won't be the only one who benefits from reading this thread.
Experience has led me, too, to keep the cases open on two of my PCs (one containing a GT630 and the other a GT430). At some point I noticed that the throughput (as measured by MFAKTC's running display) had gone way down, so I opened the cases and dusted off the GPUs with compressed air. But each time, after closing the cases back up and restarting MFAKTC, the GHz-days/day soon dropped back dramatically, so I figured the problem was that there wasn't enough airflow inside, and decided to leave the cases open. That helped, but performance didn't return to its former levels until I also placed a small table fan blowing onto the GPU in the open case. I'm bookmarking this thread.

Rodrigo
#11 | Sep 2010 | So Cal
Recently, I had to RMA an Asus Titan that lasted just under a year. Replacing the PSU with an el cheapo 750-watt Corsair ended up burning out the entire card, and the PSU along with it. Ever since I received the replacement card, I've been using another Seasonic 80+ Gold rated PSU and nothing else. Lesson LEARNED the hard way.