mersenneforum.org George's dream build

2016-05-19, 01:42   #122
bgbeuning

Dec 2014

FC16 Posts

Quote:
 Originally Posted by Madpoo One thing I've picked up on with server designs is that they use plastic baffles to channel the air over the parts that need it.
Many servers have a wall of fans across the width of the case.
The air has no place to go but through the case and out the back.

Some CPU coolers state they are meant for a "ducted" case.
In my mini-ITX server, having the boards close together made the case
more "ducted": more air had to move through the fan-less CPU coolers
because there was little space for it to flow around them.

2016-05-19, 05:41   #123
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

2×2,237 Posts

Quote:
 Originally Posted by bgbeuning I assume it takes just as much AC power to cool air as it does to heat it. So if a PC is putting out 500 W of heat, the HVAC uses 500 W of AC power to remove the heat. So if my electric bill went up X from running a PC then it will go up 2X if the PC is in an area cooled by HVAC. After thinking about it, I am not sure why I think this. Is it true?
Nope. Try google or wiki for coefficient of performance, or heat pump (thermodynamics). It takes energy to move heat from a cool region to a warmer region, and the amount it takes is related to the temperature difference between inside and outside.

2016-05-19, 12:13   #124
bgbeuning

Dec 2014

22·32·7 Posts

Quote:
 Originally Posted by VBCurtis Nope. Try google or wiki for coefficient of performance, or heat pump (thermodynamics). It takes energy to move heat from a cool region to a warmer region, and the amount it takes is related to the temperature difference between inside and outside.
The COP wiki page lost me when it started talking about hot and cold reservoirs.
But the link to the SEER page was more accessible. It explains that a SEER 10 (average)
system can remove 1500 W of heat using 500 W of AC power, and a SEER 13 system
can remove the same heat using only about 385 W.

The wiki page says a "split system" can go up to SEER 30.
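To make those figures concrete, here is a minimal sketch (my own illustration in Python, not anything from the SEER page) using the standard conversion COP = SEER / 3.412, since SEER is rated in BTU per watt-hour while COP is unitless:

```python
# Rough estimate of the AC electricity needed to remove a PC's heat load.
# Assumes SEER is in BTU per watt-hour, so COP = SEER / 3.412.

BTU_PER_WH = 3.412  # 1 watt-hour of electricity = 3.412 BTU of heat

def cooling_power_w(heat_w: float, seer: float) -> float:
    """Electrical watts the AC draws to remove heat_w watts of heat."""
    cop = seer / BTU_PER_WH  # coefficient of performance (heat moved / power used)
    return heat_w / cop

pc_heat = 500.0  # watts of heat the PC dumps into the room
for seer in (10, 13, 30):
    print(f"SEER {seer}: {cooling_power_w(pc_heat, seer):.0f} W of AC power")
# → roughly 171 W, 131 W, and 57 W respectively
```

So a crunching PC in an air-conditioned room costs noticeably more than its own wattage, but nowhere near double it, and the better the SEER the smaller the penalty.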

2016-05-19, 23:38   #125
Serpentine Vermin Jar

Jul 2014

29·113 Posts

Quote:
 Originally Posted by bgbeuning Many servers have a wall of fans across the width of the case. The air has no place to go but through the case and out the back. Some CPU coolers state they are meant for a "ducted" case. In my mini-ITX server, having the boards close together made the case more "ducted" because it meant more air had to move through the fan-less CPU coolers and there was little space for the air to flow around the CPU coolers.
Yes ^^^ that.

The HP ProLiants anyway (others too, but I'm not sure) work that way. There are the drive bays up front where the air comes in, and then the fans pulling air from the front and shoving it towards the back.

That air flow is channeled by the ducting I mentioned. There are NOT any "pull" fans in the back to pull the air out. Well, the power supplies in the rear have some fans that exhaust out, but the enclosed nature of those (hot swappable and all that) means they need to pull some air from the case into the power supply, otherwise it wouldn't happen all by itself.

And believe me, on a fully loaded system with all of the fans going, you will know it. These things are NOT designed to run quiet, they're designed to move air.

When I'm in a datacenter working in front of the servers, it actually gets cold because in the cold aisle, you're usually standing right over/under an AC duct (could just be holes in the raised floor panels... the entire underside of the raised floor is like a big AC duct, with vented panels where the cold air should escape).

But when I get too cold, I just go around to the back and there's a temp differential of probably 20 degrees F, from maybe 65-70 on the cold to 85-90 on the warm aisle, where the air is recycled higher up, back to the CRACs (computer room air conditioners).

In fancy systems with really high power density, you will be REQUIRED to install blanking panels in your rack to make sure any open rack units are blocked off to prevent casual cold air flow from front to back. It ensures all that precious cold air is forced to go through equipment and not just through spaces in between. They also have plastic panels that surround the cold aisle.

The colocation where Primenet runs is like that, but they're definitely a high density host. When you rent space by the rack unit, you want to maximize your power and cooling, so the entire cold aisle is like a meat locker.

Anyway, you could take that model and downsize it to a single unit... have a general area of cold air in front and then take your warm air out the back and pipe it somewhere. If ductwork wouldn't be unsightly, just pick something up, even that flexible stuff for a clothes dryer, and route it outside. They sell inline fans that can mount *inside* ductwork to draw additional air, controlled by a relay, but I think you'd have to use rigid ducts for that... never thought about using the flexi stuff.

But then you could mount that inline fan far away from you so the noise wouldn't be a bother, and it can be powerful enough to move a few hundred CFM. Bathroom vent fans are like that, although the placement of the fans in those cases makes it a "push" rather than a "pull".

To solve the issue of where the cool air comes from, just cool down the whole room where the computer is. After all, why not have AC on a hot day? Plus the large volume of air will help keep humidity controlled while you're fussing with the temperature. And unless you're super into overclocking and want to run your system at zero K, I think a pleasant 68-70F room temp and adequate airflow will keep your prime cruncher happy.
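As a sanity check on fan sizing, the standard sensible-heat relation for air, CFM = (watts × 3.412) / (1.08 × ΔT°F), tells you how much airflow a duct needs. The sketch below (mine, not Madpoo's) plugs in the numbers from this thread: a 500 W box and the ~20 °F front-to-back differential mentioned above.

```python
# Airflow needed to carry away a given heat load with a given air temp rise.
# Standard sensible-heat relation for air at sea level:
#   CFM = (watts * 3.412 BTU/Wh) / (1.08 * delta_T_F)

BTU_PER_WH = 3.412

def required_cfm(heat_w: float, delta_t_f: float) -> float:
    """Cubic feet per minute of air needed to remove heat_w watts
    while letting the air warm by delta_t_f degrees Fahrenheit."""
    return heat_w * BTU_PER_WH / (1.08 * delta_t_f)

# 500 W PC, 20 F rise front-to-back:
print(f"{required_cfm(500, 20):.0f} CFM")  # → about 79 CFM
```

So even a modest inline duct fan rated for a few hundred CFM has plenty of headroom for one crunching box; the hard part is the ducting, not the fan.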

2016-05-31, 04:19   #126
Mark Rose

"/X\(‘-‘)/X\"
Jan 2013

2·31·47 Posts

George & Fred: did you have to use specific memory modules for the DDR4 overclock with the ASRock H110M-ITX?
2016-05-31, 04:24   #127
Prime95
P90 years forever!

Aug 2002
Yeehaw, FL

2·3·1,193 Posts

Quote:
 Originally Posted by Mark Rose George & Fred: did you have to use specific memory modules for the DDR4 overclock with the ASRock H110M-ITX?
Dirt cheap DDR4: http://www.newegg.com/Product/Produc...82E16820231962

I did have to update the BIOS for the DDR4 overclock option to become available.

2016-05-31, 22:20   #128
henryzz
Just call me Henry

"David"
Sep 2007
Cambridge (GMT/BST)

3·5·383 Posts

Quote:
 Originally Posted by Prime95 Dirt cheap DDR4: http://www.newegg.com/Product/Produc...82E16820231962 I did have to update the BIOS for the DDR4 overclock option to become available.
Is it still available with the latest bios?

2016-05-31, 23:55   #129
Prime95
P90 years forever!

Aug 2002
Yeehaw, FL

157668 Posts

Quote:
 Originally Posted by henryzz Is it still available with the latest bios?
I'm pretty sure I'm running version 1.40 listed here: http://www.asrock.com/mb/Intel/H110M...wnload&os=BIOS

The notes for 1.50 do not mention de-supporting non-Z DDR4 OC.

2016-06-01, 04:37   #130
Mark Rose

"/X\(‘-‘)/X\"
Jan 2013

2×31×47 Posts

Quote:
 Originally Posted by Prime95 Dirt cheap DDR4: http://www.newegg.com/Product/Produc...82E16820231962 I did have to update the BIOS for the DDR4 overclock option to become available.
Did you try sticking your DDR4-3200 in the H110M-ITX?

I can get DDR4-2400 for $43.49, DDR4-2800 for $52.93, and DDR4-3200 for $61.98.

For less than 5% of the total system build, I can get 33% faster memory.

So I wonder if DDR4-3200 will work, and how DDR4-3200 compares to DDR4-2400 with the i5-6500 in the H110M-ITX.
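The price/clock trade-off works out like this (a quick sketch of my own, using the prices quoted above; whether Prime95 throughput actually scales with memory clock is exactly the open question):

```python
# Price per unit of memory clock for the three DDR4 kits quoted above.
kits = {2400: 43.49, 2800: 52.93, 3200: 61.98}  # speed (MT/s) -> price (USD)

for speed, price in kits.items():
    print(f"DDR4-{speed}: ${price:.2f}  ({price / speed * 1000:.2f} $ per 1000 MT/s)")

# Relative jump from DDR4-2400 to DDR4-3200:
speedup = 3200 / 2400 - 1        # ~33% higher clock
price_up = 61.98 / 43.49 - 1     # ~43% higher price
print(f"clock +{speedup:.0%}, price +{price_up:.0%}")
```

So the faster kits cost slightly more per MT/s, and the 3200 kit is a ~43% price premium for a 33% clock bump; it only pays off if throughput really follows the clock.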

2016-06-01, 05:14   #131
Prime95
P90 years forever!

Aug 2002
Yeehaw, FL

2×3×1,193 Posts

Quote:
 Originally Posted by Mark Rose Did you try sticking your DDR4-3200 in the H110M-ITX?
Never tried that. For some reason, I don't think that will provide any benefit over DDR4-2400. I don't remember where I got that impression -- perhaps from the Asrock web site which suggests an 8-ish% bandwidth increase, not a 50% possible increase.

2016-06-01, 15:32   #132
Mark Rose

"/X\(‘-‘)/X\"
Jan 2013

55428 Posts

Quote:
 Originally Posted by Prime95 Never tried that. For some reason, I don't think that will provide any benefit over DDR4-2400. I don't remember where I got that impression -- perhaps from the Asrock web site which suggests an 8-ish% bandwidth increase, not a 50% possible increase.
They mention a slight performance increase from the reduced latency from the higher clock speed, which is what makes a difference for most applications. They also only show DDR4-2400 being used... they make no comment about anything higher working or not.

