Disk Failures in The Real World: What Does An MTTF of 1,000,000 Hours Mean To You?
Abstract
Component failure in large-scale IT installations is becoming an ever larger problem as the number of components in a single cluster approaches a million.
In this paper, we present and analyze field-gathered
disk replacement data from a number of large production
systems, including high-performance computing sites
and internet services sites. About 100,000 disks are covered by this data, some for an entire lifetime of five years.
The data include drives with SCSI and FC, as well as
SATA interfaces. The mean time to failure (MTTF) of
those drives, as specified in their datasheets, ranges from
1,000,000 to 1,500,000 hours, suggesting a nominal annual failure rate of at most 0.88%.
We find that in the field, annual disk replacement rates
typically exceed 1%, with 2-4% common and up to 13%
observed on some systems. This suggests that field replacement is a fairly different process from what one might predict based on datasheet MTTF.
We also find evidence, based on records of disk replacements in the field, that failure rate is not constant
with age, and that, rather than a significant infant mortality effect, we see a significant early onset of wear-out
degradation. That is, replacement rates in our data grew
constantly with age, an effect often assumed not to set in
until after a nominal lifetime of 5 years.
Interestingly, we observe little difference in replacement rates between SCSI, FC and SATA drives, potentially an indication that disk-independent factors, such as
operating conditions, affect replacement rates more than
component specific factors. On the other hand, we see
only one instance of a customer rejecting an entire population of disks as a bad batch, in this case because of
media error rates, and this instance involved SATA disks.
Time between replacement, a proxy for time between
failure, is not well modeled by an exponential distribution and exhibits significant levels of correlation, including autocorrelation and long-range dependence.
1 Motivation

those models [4, 5, 33]. Too much academic and corporate research is based on anecdotes and back-of-the-envelope calculations, rather than empirical data [28].
The work in this paper is part of a broader research
agenda with the long-term goal of providing a better understanding of failures in IT systems by collecting, analyzing and making publicly available a diverse set of real
failure histories from large-scale production systems. In
our pursuit, we have spoken to a number of large production sites and were able to convince several of them
to provide failure data from some of their systems.
In this paper, we provide an analysis of seven data sets
we have collected, with a focus on storage-related failures. The data sets come from a number of large-scale
production systems, including high-performance computing sites and large internet services sites, and consist
primarily of hardware replacement logs. The data sets
vary in duration from one month to five years and cover
in total a population of more than 100,000 drives from at
least four different vendors. Disks covered by this data
include drives with SCSI and FC interfaces, commonly
represented as the most reliable types of disk drives, as
well as drives with SATA interfaces, common in desktop
and nearline systems. Although 100,000 drives is a very
large sample relative to previously published studies, it
is small compared to the estimated 35 million enterprise
drives, and 300 million total drives built in 2006 [1]. Phenomena such as bad batches caused by fabrication line
changes may require much larger data sets to fully characterize.
We analyze three different aspects of the data. We begin in Section 3 by asking how disk replacement frequencies compare to replacement frequencies of other hardware components. In Section 4, we provide a quantitative
analysis of disk replacement rates observed in the field
and compare our observations with common predictors
and models used by vendors. In Section 5, we analyze
the statistical properties of disk replacement rates. We
study correlations between disk replacements and identify the key properties of the empirical distribution of
time between replacements, and compare our results to
common models and assumptions. Section 6 provides an
overview of related work and Section 7 concludes.
2 Methodology
2.1 What is a disk failure?
While it is often assumed that disk failures follow a
simple fail-stop model (where disks either work perfectly or fail absolutely and in an easily detectable manner [22, 24]), disk failures are much more complex in
reality. For example, disk drives can experience latent
sector faults or transient performance problems. Often it
Data set | Type of cluster | Duration | #Disk events | #Servers | Disk Count | Disk Parameters | MTTF (Mhours) | Date of first Deploym. | ARR (%)
HPC1 | HPC | 08/01 - 05/06 | 474 | 765 | 2,318 | 18GB 10K SCSI | 1.2 | 08/01 | 4.0
HPC1 | HPC | 08/01 - 05/06 | 124 | 64 | 1,088 | 36GB 10K SCSI | 1.2 | 08/01 | 2.2
HPC2 | HPC | 01/04 - 07/06 | 14 | 256 | 520 | 36GB 10K SCSI | 1.2 | 12/01 | 1.1
HPC3 | HPC | 12/05 - 11/06 | 103 | 1,532 | 3,064 | 146GB 15K SCSI | 1.5 | 08/05 | 3.7
HPC3 | HPC | 12/05 - 11/06 | 4 | N/A | 144 | 73GB 15K SCSI | 1.5 | 08/05 | 3.0
HPC3 | HPC | 12/05 - 08/06 | 253 | N/A | 11,000 | 250GB 7.2K SATA | 1.0 | 08/05 | 3.3
HPC4 | Various HPC clusters | 09/03 - 08/06 | 269 | N/A | 8,430 | 250GB SATA | 1.0 | 09/03 | 2.2
HPC4 | Various HPC clusters | 11/05 - 08/06 | 7 | N/A | 2,030 | 500GB SATA | 1.0 | 11/05 | 0.5
HPC4 | Various HPC clusters | 09/05 - 08/06 | 9 | N/A | 3,158 | 400GB SATA | 1.0 | 09/05 | 0.8
COM1 | Int. serv. | May 2006 | 84 | N/A | 26,734 | 10K SCSI | 1.0 | 2001 | 2.8
COM2 | Int. serv. | 09/04 - 04/06 | 506 | 9,232 | 39,039 | 15K SCSI | 1.2 | 2004 | 3.1
COM3 | Int. serv. | 01/05 - 12/05 | 2 | N/A | 56 | 10K FC | 1.2 | N/A | 3.6
COM3 | Int. serv. | 01/05 - 12/05 | 132 | N/A | 2,450 | 10K FC | 1.2 | N/A | 5.4
COM3 | Int. serv. | 01/05 - 12/05 | 108 | N/A | 796 | 10K FC | 1.2 | N/A | 13.6
COM3 | Int. serv. | 01/05 - 12/05 | 104 | N/A | 432 | 10K FC | 1.2 | 1998 | 24.1
Table 1: Overview of the seven failure data sets. Note that the disk count given in the table is the number of drives in the system at the end of the data collection period. For some systems the number of drives changed during the data collection period, and we account for that in our analysis. The disk parameters 10K and 15K refer to rotation speeds of 10,000 and 15,000 revolutions per minute; drives not labeled 10K or 15K probably have a rotation speed of 7,200 rpm.
A common assumption for drives in servers is that they are powered on 100% of the time. Our data set providers all believe that their disks are powered on and in use at all times. The MTTFs specified for today's highest-quality disks range from 1,000,000 hours to 1,500,000 hours, corresponding to AFRs of 0.58% to 0.88%. The AFR and MTTF estimates of the manufacturer are included in a drive's datasheet, and we refer to them in the remainder as the datasheet AFR and the datasheet MTTF.
In contrast, in our data analysis we will report the annual replacement rate (ARR) to reflect the fact that, strictly speaking, disk replacements that are reported in the customer logs do not necessarily equal disk failures (as explained in Section 2.1).
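To make the two rates concrete, the datasheet AFR above follows from dividing the hours in a year by the datasheet MTTF (under the providers' assumption that drives are powered on at all times), and the ARR we report divides observed replacements by accumulated disk-years. The sketch below is a minimal illustration of both conversions; the replacement count and disk-years in the last line are hypothetical.

```python
# A minimal sketch of the MTTF-to-AFR conversion and of the ARR computation;
# the example replacement count and population are hypothetical.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours, assuming drives are powered on continuously

def datasheet_afr_percent(mttf_hours: float) -> float:
    """Nominal annual failure rate (in percent) implied by a datasheet MTTF."""
    return 100.0 * HOURS_PER_YEAR / mttf_hours

def arr_percent(replacements: int, disk_years: float) -> float:
    """Annual replacement rate (in percent): replacements per disk-year of operation."""
    return 100.0 * replacements / disk_years

print(datasheet_afr_percent(1_500_000))                 # ~0.58 %
print(datasheet_afr_percent(1_000_000))                 # ~0.88 %
print(arr_percent(replacements=90, disk_years=3_000))   # 3.0 % (hypothetical numbers)
```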
tributed sites. Each record in the data contains a timestamp of when the failure was repaired, information on
the failure symptoms, and a list of steps that were taken
to diagnose and repair the problem. The data does not
contain information on when each failure actually happened, only when repair took place. The data covers a
population of 26,734 10K rpm SCSI disk drives. The total number of servers in the monitored sites is not known.
COM2 is a warranty service log of hardware failures
recorded on behalf of an internet service provider aggregating events in multiple distributed sites. Each failure
record contains a repair code (e.g., "Replace hard drive")
and the time when the repair was finished. Again there is
no information on the start time of each failure. The log
does not contain entries for failures of disks that were replaced in the customer site by hot-swapping in a spare
disk, since the data was created by the warranty processing, which does not participate in on-site hot-swap
replacements. To account for the missing disk replacements we obtained numbers for the periodic replenishments of on-site spare disks from the internet service
provider. The size of the underlying system changed significantly during the measurement period, starting with
420 servers in 2004 and ending with 9,232 servers in
2006. We obtained quarterly hardware purchase records
covering this time period to estimate the size of the disk
population in our ARR analysis.
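Because the COM2 population grew from 420 to 9,232 servers during the measurement period, the ARR denominator has to be accumulated as disk-years rather than taken from a single end-of-period count. The following is a minimal sketch of that bookkeeping; the quarterly disk counts and replacement total are hypothetical, not the actual purchase-record figures.

```python
# Sketch: ARR for a population whose size changes over the measurement period.
# The quarterly disk counts below are hypothetical; in our analysis they are
# estimated from the provider's quarterly hardware purchase records.

quarterly_disk_counts = [10_000, 12_000, 16_000, 20_000]  # hypothetical
replacements = 500                                        # hypothetical

# Each quarter contributes population * 0.25 disk-years of exposure.
disk_years = sum(0.25 * n for n in quarterly_disk_counts)

print(f"ARR = {100.0 * replacements / disk_years:.1f}%")  # 3.4% for these toy numbers
```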
The COM3 data set comes from a large external storage system used by an internet service provider and comprises four populations of different types of FC disks (see
Table 1). While this data was gathered in 2005, the system has some legacy components dating back to 1998 that were known to have been physically moved after initial installation. We did not include these obsolete disk replacements in our analysis. COM3 differs
from the other data sets in that it provides only aggregate
statistics of disk failures, rather than individual records
for each failure. The data contains the counts of disks
that failed and were replaced in 2005 for each of the four
disk populations.
HPC1
Component | %
CPU | 44
Memory | 29
Hard drive | 16
PCI motherboard | 9
Power supply | 2

Table 2: Node outages in HPC1, broken down by the hardware component identified as the root cause.

distribution function (CDF) and how well it is fit by four probability distributions commonly used in reliability theory: the exponential distribution; the Weibull distribution; the gamma distribution; and the lognormal distribution. We parameterize the distributions through maximum likelihood estimation and evaluate the goodness of fit by visual inspection, the negative log-likelihood, and the chi-square test.
We will also discuss the hazard rate of the distribution of time between replacements. In general, the hazard rate of a random variable t with probability distribution f(t) and cumulative distribution function F(t) is defined as [25]

h(t) = \frac{f(t)}{1 - F(t)}
Intuitively, if the random variable t denotes the time between failures, the hazard rate h(t) describes the instantaneous failure rate as a function of the time since the most
recently observed failure. An important property of t's distribution is whether its hazard rate is constant (as is the case for an exponential distribution), increasing, or decreasing. A constant hazard rate implies that the probability of failure at a given point in time does not depend on how long it has been since the most recent failure. An increasing hazard rate means that the probability of a failure grows with the time that has elapsed since the last failure; a decreasing hazard rate means that it shrinks with the time elapsed since the last failure.
The hazard rate is often studied for the distribution of
lifetimes. It is important to note that we will focus on the
hazard rate of the time between disk replacements, and
not the hazard rate of disk lifetime distributions.
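As an illustration of the fitting procedure described above, the sketch below uses scipy (not part of the paper's toolchain) to fit the four candidate distributions by maximum likelihood and compare their negative log-likelihoods on a synthetic sample of inter-replacement times. For the Weibull distribution, a fitted shape parameter below 1 corresponds to a decreasing hazard rate, and a shape above 1 to an increasing one.

```python
# Sketch: maximum likelihood fits of the four candidate distributions to
# times between replacements, compared by negative log-likelihood.
# The sample below is synthetic; real input would be the observed gaps.
import numpy as np
from scipy import stats

gaps = np.random.default_rng(0).weibull(0.7, size=500) * 100.0  # synthetic gaps (hours)

candidates = {
    "exponential": stats.expon,
    "Weibull":     stats.weibull_min,
    "gamma":       stats.gamma,
    "lognormal":   stats.lognorm,
}

for name, dist in candidates.items():
    params = dist.fit(gaps, floc=0)              # MLE, location fixed at zero
    nll = -np.sum(dist.logpdf(gaps, *params))    # negative log-likelihood
    print(f"{name:12s} params={tuple(round(p, 3) for p in params)}  NLL={nll:.1f}")

# For the Weibull fit, the first fitted parameter is the shape c:
# c < 1 implies a decreasing hazard rate h(t), c > 1 an increasing one.
```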
Since we are interested in correlations between disk
failures we need a measure for the degree of correlation.
The autocorrelation function (ACF) measures the correlation of a random variable with itself at different time
lags l. The ACF, for example, can be used to determine
whether the number of failures in one day is correlated
with the number of failures observed l days later. The autocorrelation coefficient can range between 1 (high positive correlation) and -1 (high negative correlation). A
value of zero would indicate no correlation, supporting
independence of failures per day.
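A minimal sketch of the lag-l autocorrelation computation on a series of daily replacement counts (the counts here are synthetic, and the helper name is ours):

```python
# Sketch: sample autocorrelation of the number of disk replacements per day
# at lag l. The daily counts below are synthetic.
import numpy as np

def autocorrelation(counts, lag):
    """Correlation of the series with itself shifted by `lag` days."""
    x = np.asarray(counts, dtype=float) - np.mean(counts)
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

daily_counts = np.random.default_rng(1).poisson(2.0, size=365)  # synthetic data
print([round(autocorrelation(daily_counts, lag), 3) for lag in (1, 7, 30)])
```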
Another aspect of the failure process that we will study
is long-range dependence. Long-range dependence measures the memory of a process, in particular how quickly
the autocorrelation coefficient decays with growing lags.
The strength of the long-range dependence is quantified by the Hurst exponent. A series exhibits long-range
dependence if the Hurst exponent, H, is 0.5 < H < 1.
We use the Selfis tool [14] to obtain estimates of the
Hurst parameter using five different methods: the absolute value method, the variance method, the R/S method,
HPC1
Component | %
Hard drive | 30.6
Memory | 28.5
Misc/Unk | 14.4
CPU | 12.4
PCI motherboard | 4.9
Controller | 2.9
QSW | 1.7
Power supply | 1.6
MLB | 1.0
SCSI BP | 0.3

COM1
Component | %
Power supply | 34.8
Memory | 20.1
Hard drive | 18.1
Case | 11.4
Fan | 8.0
CPU | 2.0
SCSI Board | 0.6
NIC Card | 1.2
LV Power Board | 0.6
CPU heatsink | 0.6

COM2
Component | %
Hard drive | 49.1
Motherboard | 23.4
Power supply | 10.1
RAID card | 4.1
Memory | 3.4
SCSI cable | 2.2
Fan | 2.2
CPU | 2.2
CD-ROM | 0.6
Raid Controller | 0.6
Table 3: Relative frequency of hardware component replacements for the ten most frequently replaced components in
systems HPC1, COM1 and COM2, respectively. Abbreviations are taken directly from service data and are not known
to have identical definitions across data sets.
ware components (CPU, memory, disks, motherboards). We estimate that there is a total of 3,060 CPUs, 3,060 memory DIMMs, and 765 motherboards, compared to a disk population of 3,406. Combining these numbers with the data in Table 3, we conclude that for the HPC1 system, over five years of use a memory DIMM was replaced at roughly the same rate as a hard drive, a CPU was replaced about 2.5 times less often than a hard drive, and a motherboard was replaced about half as often as a hard drive.
The above discussion covers only failures that required a hardware component to be replaced. When running a large system one is often interested in any hardware failure that causes a node outage, not only those
that necessitate a hardware replacement. We therefore
obtained the HPC1 troubleshooting records for any node
outage that was attributed to a hardware problem, including problems that required hardware replacements
as well as problems that were fixed in some other way.
Table 2 breaks down all records in the troubleshooting data by the hardware component that was identified as the root cause. We observe that 16% of all outage records pertain to disk drives
(compared to 30% in Table 3), making it the third most
common root cause reported in the data. The two most
commonly reported outage root causes are CPU and
memory, with 44% and 29%, respectively.
For a complete picture, we also need to take the severity of an anomalous event into account. A closer look
at the HPC1 troubleshooting data reveals that a large
number of the problems attributed to CPU and memory
failures were triggered by parity errors, i.e., the number of errors was too large for the embedded error-correcting code to correct. In those cases, a simple reboot
will bring the affected node back up. On the other hand,
the majority of the problems that were attributed to hard
Figure 1: Comparison of datasheet AFRs (solid and dashed line in the graph) and ARRs observed in the field. Each
bar in the graph corresponds to one row in Table 1. The dotted line represents the weighted average over all data sets.
Only disks within the nominal lifetime of five years are included, i.e. there is no bar for the COM3 drives that were
deployed in 1998. The third bar for COM3 in the graph is cut off; its ARR is 13.5%.
ARRs of drives in the HPC4 data set, which are exclusively SATA drives, are among the lowest of all data
sets. Moreover, the HPC3 data set includes both SCSI
and SATA drives (as part of the same system in the same
operating environment) and they have nearly identical replacement rates. Of course, these HPC3 SATA drives
were decommissioned because of media error rates attributed to lubricant breakdown (recall Section 2.1), our
only evidence of a bad batch, so perhaps more data is
needed to better understand the impact of batches on overall quality.
It is also interesting to observe that the only drives that
have an observed ARR below the datasheet AFR are the
second and third type of drives in data set HPC4. One
possible reason might be that these are relatively new
drives, all less than one year old (recall Table 1). Also,
these ARRs are based on only 16 replacements, perhaps
too little data to draw a definitive conclusion.
A natural question arises: why are the observed disk
replacement rates so much higher in the field data than
the datasheet MTTF would suggest, even for drives in
the first years of operation? As discussed in Sections 2.1
and 2.2, there are multiple possible reasons.
First, customers and vendors might not always agree
on the definition of when a drive is faulty. The fact
that a disk was replaced implies that it failed some (possibly customer specific) health test. When a health test
is conservative, it might lead to replacing a drive that the
vendor tests would find to be healthy. Note, however,
that even if we scale down the ARRs in Figure 1 to 57%
of their actual values, to estimate the fraction of drives
returned to the manufacturer that fail the latter's health
test [1], the resulting AFR estimates are still more than a
factor of two higher than datasheet AFRs in most cases.
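As a concrete instance of this scaling, take the 4.0% ARR of the first HPC1 row in Table 1 and the roughly 0.73% AFR implied by its 1.2 Mhour datasheet MTTF (the exact factor varies across data sets):

```latex
% Scaling an observed ARR to the assumed 57% of returned drives that fail the
% manufacturer's test, then comparing against the datasheet AFR:
0.57 \times 4.0\% = 2.28\% \approx 3.1 \times 0.73\%
```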
Figure 3: ARR for the first five years of system HPC1's lifetime, for the compute nodes (left) and the file system nodes (middle). ARR for the first type of drives in HPC4 as a function of drive age in years (right).
Figure 4: ARR per month over the first five years of system HPC1's lifetime, for the compute nodes (left) and the file system nodes (middle). ARR for the first type of drives in HPC4 as a function of drive age in months (right).
two larger than expected for the compute nodes. In year 4 and year 5 (which are still within the nominal lifetime of these disks), the actual replacement rates are 7-10 times higher than the failure rates we expected based on datasheet MTTF.
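For reference, the factor of 7-10 can be read against the nominal rate implied by the HPC1 datasheet MTTF of 1.2 Mhours:

```latex
% Nominal AFR implied by the HPC1 datasheet MTTF, and the ARRs that a factor
% of 7-10 corresponds to:
\mathrm{AFR}_{\mathrm{datasheet}} \approx \frac{8760}{1{,}200{,}000} \approx 0.73\%,
\qquad 7 \times 0.73\% \approx 5.1\%,
\qquad 10 \times 0.73\% \approx 7.3\%
```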
The second observation is that replacement rates are
rising significantly over the years, even during early
years in the lifecycle. Replacement rates in HPC1 nearly
double from year 1 to 2, or from year 2 to 3. This observation suggests that wear-out may start much earlier
than expected, leading to steadily increasing replacement
rates during most of a system's useful life. This is an interesting observation because it does not agree with the
common assumption that after the first year of operation,
failure rates reach a steady state for a few years, forming
the bottom of the bathtub.
Next, we move to the per-month view of replacement
rates, shown in Figure 4. We observe that for the HPC1
file system nodes there are no replacements during the
first 12 months of operation, i.e., there is no detectable
infant mortality. For HPC4, the ARR of drives is not
higher in the first few months of the first year than the
last few months of the first year. In the case of the
HPC1 compute nodes, infant mortality is limited to the
[Figure: Empirical CDF, Pr(X<=x), of the number of disk replacements per month, compared to a Poisson distribution; left panel: all years, right panel: years 2-3.]
A chi-square test reveals that we can reject the hypothesis that the number of disk replacements per month follows a Poisson distribution at the 0.05 significance level.
All of the above results are similar when looking at the distribution of the number of disk replacements per day or per week, rather than per month.
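A minimal sketch of such a chi-square test against a Poisson fit, using synthetic monthly counts (the actual test uses the observed HPC1 counts and the 0.05 significance level):

```python
# Sketch: chi-square goodness-of-fit test of monthly replacement counts
# against a Poisson distribution with rate estimated from the data.
# The monthly counts below are synthetic.
import numpy as np
from scipy import stats

monthly = np.array([3, 7, 0, 1, 12, 5, 2, 9, 0, 4, 15, 6, 1, 8, 2, 10, 0, 5, 3, 11])
lam = monthly.mean()  # maximum likelihood estimate of the Poisson rate

# Bin the counts into 0, 1, ..., k-1 and a final ">= k" bin.
# (In practice, bins with very small expected counts would be merged further.)
k = 8
observed = np.array([(monthly == i).sum() for i in range(k)] + [(monthly >= k).sum()])
probs = [stats.poisson.pmf(i, lam) for i in range(k)] + [stats.poisson.sf(k - 1, lam)]
expected = len(monthly) * np.array(probs)

# One extra degree of freedom is lost because lambda was estimated from the data.
chi2, p = stats.chisquare(observed, expected, ddof=1)
print(f"chi2 = {chi2:.1f}, p = {p:.4f}")  # p < 0.05 rejects the Poisson hypothesis
```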
Figure 6: Autocorrelation function for the number of disk replacements per week, computed across the entire lifetime of the HPC1 system (left) and computed across only one year of HPC1's operation (right).
5.2 Correlations
In this section, we focus on the first key property of
a Poisson process, the independence of failures. Intuitively, it is clear that in practice failures of disks in the
same system are never completely independent. The failure probability of disks depends, for example, on environmental factors, such as temperature, that are shared by all disks in the system. When the temperature in a machine room is far outside nominal values,
all disks in the room experience a higher than normal
probability of failure. The goal of this section is to statistically quantify and characterize the correlation between
disk replacements.
We start with a simple test in which we determine the
correlation of the number of disk replacements observed
in successive weeks or months by computing the correlation coefficient between the number of replacements in
a given week or month and the previous week or month.
For data coming from a Poisson process we would expect correlation coefficients to be close to 0. Instead we
find significant levels of correlations, both at the monthly
and the weekly level.
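A minimal version of this test, with synthetic weekly counts:

```python
# Sketch: correlation between the number of replacements in a week and in the
# previous week. The weekly counts are synthetic; for a Poisson process the
# coefficient would be close to zero.
import numpy as np

weekly = np.random.default_rng(2).poisson(3.0, size=260).astype(float)
weekly[100:120] += 8  # a synthetic stretch of elevated failure counts

r = np.corrcoef(weekly[:-1], weekly[1:])[0, 1]
print(f"lag-1 correlation coefficient: {r:.2f}")
```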
Figure 8: Distribution of time between disk replacements across all nodes in HPC1.
per week computed across the HPC1 data set. For a stationary failure process (e.g. data coming from a Poisson
process) the autocorrelation would be close to zero at all
lags. Instead, we observe strong autocorrelation even for
large lags in the range of 100 weeks (nearly 2 years).
We repeated the same autocorrelation test for only parts of HPC1's lifetime and find similar levels of autocorrelation. Figure 6 (right), for example, shows the autocorrelation function computed only on the data of the third year of HPC1's life. Correlation is significant for lags of up to 30 weeks.
Another measure for dependency is long-range dependence, as quantified by the Hurst exponent H. The Hurst exponent measures how fast the autocorrelation function drops with increasing lags. A Hurst parameter between 0.5 and 1 signifies a statistical process with a long memory and a slow drop of the autocorrelation function. Applying several different estimators (see Section 2) to the HPC1 data, we determine a Hurst exponent between 0.6 and 0.8 at the weekly granularity. These values are comparable to Hurst exponents reported for Ethernet traffic, which is known to exhibit strong long-range dependence [16].
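Of the estimators listed in Section 2, the R/S method is the simplest to sketch. The version below is a minimal illustration on synthetic weekly counts, not the Selfis tool used for the reported estimates:

```python
# Sketch: Hurst exponent via the rescaled-range (R/S) method.
# The weekly counts are synthetic; 0.5 < H < 1 indicates long-range dependence.
import numpy as np

def rescaled_range(block):
    """R/S statistic of a single block."""
    deviations = np.cumsum(block - block.mean())
    r = deviations.max() - deviations.min()
    s = block.std()
    return r / s if s > 0 else np.nan

def hurst_rs(series, block_sizes=(8, 16, 32, 64, 128)):
    series = np.asarray(series, dtype=float)
    sizes, mean_rs = [], []
    for n in block_sizes:
        blocks = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
        values = [v for v in map(rescaled_range, blocks) if np.isfinite(v)]
        if values:
            sizes.append(n)
            mean_rs.append(np.mean(values))
    # H is the slope of log(mean R/S) against log(block size).
    slope, _ = np.polyfit(np.log(sizes), np.log(mean_rs), 1)
    return float(slope)

weekly = np.random.default_rng(3).poisson(3.0, size=256).astype(float)
print(f"estimated H = {hurst_rs(weekly):.2f}")
```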
[Figure: Empirical CDF, Pr(X<=x), of the time between disk replacements (sec), with lognormal, gamma, Weibull, and exponential fits.]
[Figure: Data compared to an exponential distribution as a function of the time since the last replacement (days).]

Distribution / Parameters | Weibull Shape | Weibull Scale | Gamma Shape | Gamma Scale
Compute nodes | 0.73 | 0.037 | 0.65 | 176.4
Filesystem nodes | 0.76 | 0.013 | 0.64 | 482.6
All nodes | 0.71 | 0.049 | 0.59 | 160.9
6 Related work
There is very little work published on analyzing failures
in real, large-scale storage systems, probably as a result
of the reluctance of the owners of such systems to release
failure data.
Among the few existing studies is the work by Talagala et al. [29], which provides a study of error logs in a
research prototype storage system used for a web server
and includes a comparison of failure rates of different
hardware components. They identify SCSI disk enclosures as the least reliable components and SCSI disks as
one of the most reliable components, which differs from
our results.
In a recently initiated effort, Schwarz et al. [28] have
started to gather failure data at the Internet Archive,
which they plan to use to study disk failure rates and
bit rot rates and how they are affected by different environmental parameters. In their preliminary results, they
report ARR values of 2-6% and note that the Internet
Archive does not seem to see significant infant mortality.
Both observations are in agreement with our findings.
Gray [31] reports the frequency of uncorrectable read
errors in disks and finds that their numbers are smaller
than vendor data sheets suggest. Gray also provides ARR
estimates for SCSI and ATA disks, in the range of 3-6%,
which is in the range of ARRs that we observe for SCSI
drives in our data sets.
Pinheiro et al. analyze disk replacement data from a
large population of serial and parallel ATA drives [23].
They report ARR values ranging from 1.7% to 8.6%,
which agrees with our results. The focus of their study
is on the correlation between various system parameters and drive failures. They find that while temperature
and utilization exhibit much less correlation with failures
than expected, the value of several SMART counters correlate highly with failures. For example, they report that
after a scrub error drives are 39 times more likely to fail
within 60 days than drives without scrub errors and that
44% of all failed drives had increased SMART counts in
at least one of four specific counters.
Many have criticized the accuracy of MTTF based
failure rate predictions and have pointed out the need for
more realistic models. A particular concern is the fact
that a single MTTF value cannot capture life cycle patterns [4, 5, 33]. Our analysis of life cycle patterns shows
that this concern is justified, since we find failure rates
to vary quite significantly over even the first two to three
years of the life cycle. However, the most common life
cycle concern in published research is underrepresenting
infant mortality. Our analysis does not support this. Instead we observe significant underrepresentation of the
early onset of wear-out.
Early work on RAID systems [8] provided some statistical analysis of time between disk failures for disks
used in the 1980s, but did not find sufficient evidence to
reject the hypothesis of exponential times between failure with high confidence. However, time between failure
has been analyzed for other, non-storage data in several
studies [11, 17, 26, 27, 30, 32]. Four of the studies use
7 Conclusion
Many have pointed out the need for a better understanding of what disk failures look like in the field. Yet hardly
any published work exists that provides a large-scale
study of disk failures in production systems. As a first
step towards closing this gap, we have analyzed disk replacement data from a number of large production systems, spanning more than 100,000 drives from at least
four different vendors, including drives with SCSI, FC
and SATA interfaces. Below is a summary of a few of
our results.
- Large-scale installation field usage appears to differ widely from nominal datasheet MTTF conditions. The field replacement rates of systems were significantly larger than we expected based on datasheet MTTFs.

- For drives less than five years old, field replacement rates were larger than what the datasheet MTTF suggested by a factor of 2-10. For five to eight year old drives, field replacement rates were a factor of 30 higher than what the datasheet MTTF suggested.

- Changes in disk replacement rates during the first five years of the lifecycle were more dramatic than often assumed. While replacement rates are often expected to be in steady state in years 2-5 of operation (the bottom of the bathtub curve), we observed a continuous increase in replacement rates, starting as early as the second year of operation.

- In our data sets, the replacement rates of SATA disks are not worse than the replacement rates of SCSI or FC disks. This may indicate that disk-independent factors, such as operating conditions, usage and environmental factors, affect replacement rates more than component-specific factors. However, the only evidence we have of a bad batch of disks was found in a collection of SATA disks experiencing high media error rates. We have too little data on bad batches to estimate the relative frequency of bad batches by type of disk, although there is plenty of anecdotal evidence that bad batches are not unique to SATA disks.
8 Acknowledgments

We would like to thank Jamez Nunez and Gary Grider from the High Performance Computing Division at Los Alamos National Lab and Katie Vargo, J. Ray Scott and Robin Flaus from the Pittsburgh Supercomputing Center for collecting and providing us with data and helping us to interpret the data. We also thank the other people and organizations, who have provided us with data, but would like to remain unnamed. For discussions relating to the use of high end systems, we would like to thank

References

[1] Personal communication with Dan Dummer, Andrei Khurshudov, Erik Riedel, Ron Watts of Seagate, 2006.

[2] G. Cole. Estimating drive reliability in desktop computers and consumer electronics systems. TP-338.1. Seagate. 2000.

[3] P. F. Corbett, R. English, A. Goel, T. Grcanac, S. Kleiman, J. Leong, and S. Sankar. Row-diagonal parity for double disk failure correction. In Proc. of the FAST '04 Conference on File and Storage Technologies, 2004.

[4] J. G. Elerath. AFR: problems of definition, calculation and measurement in a commercial environment. In Proc. of the Annual Reliability and Maintainability Symposium, 2000.

[5] J. G. Elerath. Specifying reliability in the disk drive industry: No more MTBFs. In Proc. of the Annual Reliability and Maintainability Symposium, 2000.

[21] D. L. Oppenheimer, A. Ganapathi, and D. A. Patterson. Why do internet services fail, and what can be done about it? In USENIX Symposium on Internet Technologies and Systems, 2003.

[22] D. Patterson, G. Gibson, and R. Katz. A case for redundant arrays of inexpensive disks (RAID). In Proc. of the ACM SIGMOD International Conference on Management of Data, 1988.

[23] E. Pinheiro, W. D. Weber, and L. A. Barroso. Failure trends in a large disk drive population. In Proc. of the FAST '07 Conference on File and Storage Technologies, 2007.

[24] V. Prabhakaran, L. N. Bairavasundaram, N. Agrawal, H. S. Gunawi, A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau. IRON file systems. In Proc. of the 20th ACM Symposium on Operating Systems Principles (SOSP '05), 2005.

[25] S. M. Ross. Introduction to Probability Models. 6th edition. Academic Press.

[26] R. K. Sahoo, A. Sivasubramaniam, M. S. Squillante, and Y. Zhang. Failure data analysis of a large-scale heterogeneous server environment. In Proc. of the 2004 International Conference on Dependable Systems and Networks (DSN '04), 2004.

[27] B. Schroeder and G. Gibson. A large-scale study of failures in high-performance computing systems. In Proc. of the 2006 International Conference on Dependable Systems and Networks (DSN '06), 2006.

[28] T. Schwarz, M. Baker, S. Bassi, B. Baumgart, W. Flagg, C. van Ingen, K. Joste, M. Manasse, and M. Shah. Disk failure investigations at the internet archive. In Work-in-Progress session, NASA/IEEE Conference on Mass Storage Systems and Technologies (MSST 2006), 2006.

[29] N. Talagala and D. Patterson. An analysis of error behaviour in a large storage system. In The IEEE Workshop on Fault Tolerance in Parallel and Distributed Systems, 1999.

[30] D. Tang, R. K. Iyer, and S. S. Subramani. Failure analysis and modelling of a VAX cluster system. In Proc. International Symposium on Fault-Tolerant Computing, 1990.

[31] C. van Ingen and J. Gray. Empirical measurements of disk failure rates and error rates. MSR-TR-2005-166, 2005.

[32] J. Xu, Z. Kalbarczyk, and R. K. Iyer. Networked Windows NT system field failure data analysis. In Proc. of the 1999 Pacific Rim International Symposium on Dependable Computing, 1999.

[33] J. Yang and F.-B. Sun. A comprehensive review of hard-disk drive reliability. In Proc. of the Annual Reliability and Maintainability Symposium, 1999.