mersenneforum.org > Great Internet Mersenne Prime Search > Hardware
2003-10-03, 13:36   #1
nomadicus

Mean time between failures

In discussing hardware reliability, I was looking at disk drives. Suppose a disk has a 300,000-hour MTBF (Mean Time Between Failures). If we have 100 disks on a system (assume equal usage), does that mean the expected time to the next failure of any one disk is 300,000/100 = 3,000 hours?
What if the 100 disks are spread equally across 4 systems? Does that change the probabilities?
Does probability mathematics come into play? (I know very little of it.)
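
(A back-of-the-envelope sketch in Python of the arithmetic behind that question, assuming the textbook idealization that every disk's lifetime is independent and exponentially distributed with mean equal to the rated MTBF; the figures and variable names are illustrative only, not from any datasheet:)

Code:
# Back-of-the-envelope sketch, assuming each disk's lifetime is independent
# and exponentially distributed with mean equal to the rated MTBF (the
# textbook idealization under which the 300,000/100 division holds).
MTBF = 300000.0                      # hours, per disk
N = 100                              # disks in the fleet

rate_per_disk = 1.0 / MTBF           # failures per hour, per disk
fleet_rate = N * rate_per_disk       # rate of the first failure anywhere
print(1.0 / fleet_rate)              # 3000.0 hours to the first failure

# Spreading the same 100 disks across 4 systems does not change the
# fleet-wide figure; each 25-disk system just has its own number:
print(1.0 / (25 * rate_per_disk))    # 12000.0 hours per system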
2003-10-03, 17:51   #2
xtreme2k

http://www.storagereview.com/guide20.../specMTBF.html

This should help you understand MTBF. After you have read it, you will see that a lot of us have misconceptions about it.
2003-10-03, 23:20   #3
PageFault

Re: Mean time between failures

Yes, they do; it was possibly the most tortuous mathematics I ran into when I was taking those courses. Because it is a *mean* time between failures, you must account for the variances and the sample sizes. When you add more units, you get not only more chances of failure but also a higher probability of a premature failure. Consider building a 1000-disk array: its effective MTBF is probably close to zero, because one of those suckers is likely DOA, and even when running, the array wouldn't last long before something failed.

Quote:
Originally posted by nomadicus
What if the 100 disks are spread equally between 4 systems? Does that change the probabilities?
Do probability mathematics come into play? (of which I know very little).
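
(To make the premature-failure point concrete, here is a rough Monte Carlo sketch, assuming a hypothetical Weibull lifetime model; a shape parameter below 1 front-loads failures the way infant mortality does. The shape and scale values are made up for illustration, not taken from any drive's datasheet:)

Code:
# Rough Monte Carlo sketch of the time to the first failure in an N-disk
# array, assuming a hypothetical Weibull lifetime model. Shape < 1 means a
# decreasing hazard rate, so early ("infant mortality") failures dominate.
import random

SHAPE = 0.7          # illustrative only
SCALE = 300000.0     # hours; illustrative only
TRIALS = 2000

def first_failure(n_disks):
    # Lifetime of the array until its first disk dies.
    return min(random.weibullvariate(SCALE, SHAPE) for _ in range(n_disks))

for n in (1, 100, 1000):
    mean_first = sum(first_failure(n) for _ in range(TRIALS)) / TRIALS
    print(n, "disks ->", round(mean_first), "hours to first failure (mean)")
# The bigger the array, the sooner the first failure arrives, which is why
# a 1000-disk array's effective MTBF looks so dismal.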
2003-10-06, 16:35   #4
nomadicus

So now I should consider the infant mortality of any group of disks I get, the service life of the disks within that group, and the operational MTBF (if I can get it), weighing all of these along with the rated MTBF as a guideline for understanding when a group of disks becomes more prone to failure.

Things are never as simple as they seem.
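
(One more hedged sketch along those lines: turning a rated MTBF into an expected failure count for a group of disks over a year of service. It assumes a constant failure rate, which is only plausible during the useful-life period, after infant mortality and before wear-out:)

Code:
# Sketch: expected failures per year for a group of disks, assuming a
# constant failure rate (reasonable only during the useful-life period,
# after infant mortality and before wear-out).
import math

MTBF = 300000.0          # hours, vendor figure
HOURS_PER_YEAR = 8766.0  # 24 * 365.25
N_DISKS = 100

# Probability that a single disk fails within one year:
afr = 1.0 - math.exp(-HOURS_PER_YEAR / MTBF)
print(round(afr * 100, 1), "% annualized failure rate per disk")          # ~2.9
print(round(N_DISKS * afr, 1), "failures expected per year, group-wide")  # ~2.9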

Great pointer.
Thanks!