Preface
The area of Reliability has become a very important and active area of
research. This is clearly evident from the large body of literature that has been
developed in the form of books, volumes and research papers since 1988, when the
previous Handbook of Statistics on this area was prepared by P. R. Krishnaiah
and C. R. Rao. For this reason, we felt that this was indeed the right time to
dedicate another volume in the Handbook of Statistics series to highlighting some
recent advances in the area of Reliability.
With this purpose in mind, we solicited articles from leading experts working in
the area of Reliability from both academia and industry. This, in our opinion, has
resulted in a volume with a nice blend of articles (33 in total) dealing with
theoretical, methodological and applied issues in Reliability.
For the convenience of readers, we have divided this volume into 13 parts as
follows:
I Reliability Models
II Life Distributions
III Reliability Properties
IV Reliability Systems
V Progressive Censoring
VI Analysis for Repairable Systems
VII Analysis for Masked Data
VIII Analysis for Warranty Data
IX Accelerated Testing
X Destructive Testing
XI Test Plans
XII Software Reliability
XIII Inferential Methods
We hope that this broad coverage of the area of Reliability will not only
provide the readers with a general overview of the area but also explain to them
what the current state is in each of the topics listed above.
We express our sincere thanks to all the authors for their fine contributions and
for helping us in bringing out this volume in a timely manner. Our special thanks
go to Ms. Nicolette van Dijk for taking a keen interest in this project and also for
helping us with the final production of this volume.
N. Balakrishnan
C. R. Rao
Contributors
J. A. Achcar, ICMC, University of São Paulo, C.P. 668, 13560-970, São Carlos,
SP, Brazil, e-mail: [email protected] (Ch. 29)
R. Aggarwala, Department of Mathematics and Statistics, University of Calgary,
2500 University Drive N.W., Calgary, Alberta, Canada T2N 1N4, e-mail:
[email protected] (Ch. 13)
R. Agrawal, GE Corporate Audit Staff, Fairfield, CT 06432-1008, USA, e-mail:
[email protected] (Ch. 27)
P. A. Akersten, Center for Dependability and Maintenance, Luleå University of
Technology, Luleå, Sweden, e-mail: [email protected] (Ch. 16)
E. K. AL-Hussaini, Department of Mathematics, University of Assiut, Assiut
71516, Egypt, e-mail: [email protected] (Ch. 5)
S. Aki, Division of Mathematical Science, Osaka University, Graduate
School of Engineering Science, Toyonaka, Osaka 560-8531, Japan, e-mail:
[email protected] (Ch. 11)
M. Asadi, Department of Statistics, University of Isfahan, Isfahan 81744, Iran,
e-mail: [email protected] (Ch. 7)
N. Balakrishnan, Department of Mathematics and Statistics, McMaster
University, Hamilton, Ontario, Canada L8S 4K1, e-mail: [email protected]
(Chs. 1, 14, 23)
U. Balasooriya, Department of Statistics and Applied Probability, National
University of Singapore, Lower Kent Ridge Road, Singapore 119260, e-mail:
[email protected] (Ch. 15)
M. Banerjee, Center for Health Care Effectiveness Research, Wayne State
University, Detroit, MI 48201, USA, e-mail: [email protected]
(Ch. 19)
A. P. Basu, Department of Statistics, University of Missouri at Columbia,
Columbia, MO 65211-0001, USA, e-mail: [email protected] (Ch. 2)
S. Basu, Division of Statistics, Northern Illinois University, DeKalb, IL 60115-
2854, USA, e-mail: [email protected] or [email protected] (Ch. 19)
B. Bergman, Division of Total Quality Management, Chalmers University of
Technology, Gothenburg, Sweden, e-mail: [email protected] (Ch. 16)
W. R. Blischke, Emeritus Professor, Department of Information and Operations
Management, University of Southern California, Los Angeles, CA 90089-1421,
USA, e-mail: [email protected] (Ch. 20)
Notation
SMP semi-Markov process
EMC embedded Markov chain
r.v. random variable
RE relative error
R(t) reliability function
A(t) pointwise availability function
A limit availability
M(t) maintainability function
λ(t) system failure rate function
MTTF mean time to failure
MTTR mean time to repair
MUT mean up time
MDT mean down time
MTBF mean time between failures
N(t) number of jumps of the semi-Markov process in the time interval (0, t]
N_i(t) number of visits of the semi-Markov process to the state i in the time interval (0, t]
Q_ij(t) semi-Markov kernel: discrete state space case; i ∈ E, j ∈ E, t ∈ IR+
P(i,j) transition function of the Markov chain (J_n): discrete time case
H_i(t) distribution function of the sojourn time in the state i, i ∈ E
ψ_ij(t) Markov renewal function: discrete state space case; i ∈ E, j ∈ E, t ∈ IR+
α initial law
μ_ij mean hitting time of the SMP into the state j, starting in state i
μ'_ij mean hitting time of the EMC into the state j, starting in state i
m_i mean first jump time under IP_i, or mean sojourn time in state i
σ(X_t; t ∈ I) the σ-algebra generated by the family of random variables (X_t; t ∈ I)
Q_1 ∗ Q_2 Stieltjes convolution of two semi-Markov kernels on E
Q^(n) nth fold Stieltjes convolution of the semi-Markov kernel Q, n ∈ N
∗ Stieltjes convolution product
N. Balakrishnan, N. Limnios and C. Papadopoulos
1_A(x) = 1 if x ∈ A, and 1_A(x) = 0 if x ∉ A ;
1(x) = 1 if x ≥ 0, and 1(x) = 0 if x < 0 .
1. Introduction
The aim of this chapter is to give the basic probabilistic models in reliability. We
thus present the discrete time Markov chains (DTMC), the continuous time
Markov chains (CTMC) and the semi-Markov model in continuous time. In the
case of DTMC, where the reliability is modeled in discrete time, the formulation is
simple and can be used to model reliability in a first approach. Moreover, in most
cases the numerical accuracy of this formulation is good. In the case of CTMC we
give explicit formulae for reliability-related indicators in continuous time. As far
as semi-Markov processes are concerned, we give an explicit formulation of
reliability-related indicators in continuous time and for finite state spaces. We also
give statistical estimation for reliability and availability.
We then continue with the basics of Monte Carlo methods, which are used quite
often in reliability theory. The simplicity of a Monte Carlo method, its efficiency
in higher dimensions as well as its ability to model any arbitrary system are the
main advantages of this method. We present the basic idea of the Monte Carlo
method, originally used to estimate integrals, and we continue with the
presentation of simple algorithms for the simulation of discrete and continuous
random variables (r.v.), as well as of DTMC and CTMC. We discuss the problem of
rare event estimation, and we briefly review the basic principles of the well-known
variance reduction methods. Importance sampling is also quite useful in reliability
systems, where some basic system parameters can be estimated by simulating the
corresponding model over regenerative cycles.
Basic probabilistic models in reliability
2.1.1. Definitions
Let X = (X_n, n ∈ N) be an E-valued stochastic process defined on (Ω, F, IP), with

P^n(i,j) = IP(X_n = j | X_0 = i) ,
α(i) = IP(X_0 = i) .

PROPOSITION 2.1. For all n ≥ 1 and all i_0, i_1, ..., i_n ∈ E, we have:
1. IP(X_0 = i_0, X_1 = i_1, ..., X_{n-1} = i_{n-1}, X_n = i_n) = α(i_0)P(i_0, i_1) ... P(i_{n-1}, i_n);
2. IP(X_{n+1} = i_1, ..., X_{n+k-1} = i_{k-1}, X_{n+k} = i_k | X_n = i_0) = P(i_0, i_1) ... P(i_{k-1}, i_k);
3. IP(X_{n+m} = j | X_m = i) = IP(X_n = j | X_0 = i) = P^n(i,j).
PROPOSITION 2.2.
1. The sojourn time of the system in the state i ∈ E is a geometric r.v. with
parameter P(i, i).
2. The probability that the system enters state j when it leaves state i is
P(i, j)/(1 - P(i, i)).

The above two propositions allow us to simulate a Markov chain by Monte Carlo
methods.
Consider the two-state chain with transition probability matrix

P = ( 1-p   p  )
    (  q   1-q )

and initial distribution (α, 1 - α) (α ∈ [0, 1]). After some calculus we obtain the
following spectral representation for the powers of the transition matrix P:

P^n = 1/(p+q) ( q  p )  +  (1-p-q)^n/(p+q) (  p  -p )
              ( q  p )                     ( -q   q ) .
(3)

The first component of the state probability vector is

P_1(n) = q/(p+q) + ((1-p-q)^n/(p+q)) (pα - q(1-α)) . (5)
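As a quick numerical sanity check, the spectral representation (3) and the state probability (5) can be verified against direct matrix powers. The values of p, q and α below are illustrative choices, not taken from the text.

```python
import numpy as np

# Hypothetical parameter values for the two-state chain.
p, q, alpha = 0.3, 0.5, 0.8
P = np.array([[1 - p, p],
              [q, 1 - q]])

def P_power_spectral(n):
    """Spectral representation (3) of P^n."""
    lam = 1.0 - p - q
    stat = np.array([[q, p], [q, p]]) / (p + q)
    trans = np.array([[p, -p], [-q, q]]) / (p + q)
    return stat + lam**n * trans

n = 7
assert np.allclose(np.linalg.matrix_power(P, n), P_power_spectral(n))

# State probability (5) versus direct computation alpha P^n.
a0 = np.array([alpha, 1 - alpha])
p1_direct = (a0 @ np.linalg.matrix_power(P, n))[0]
p1_closed = q / (p + q) + (1 - p - q)**n * (p * alpha - q * (1 - alpha)) / (p + q)
assert abs(p1_direct - p1_closed) < 1e-12
```

The spectral form also makes the exponential convergence of Proposition 2.7 explicit: the second term decays like |1-p-q|^n.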
times between two consecutive returns to state j, which are called recurrence times
of state j, and

PROPOSITION 2.3.

PROPOSITION 2.5. For every recurrent state i, we have: π(i) = 1/μ_ii. For each state
i, define d_i = g.c.d.{n : n ≥ 1, P^n(i,i) > 0}.
DEFINITION 2.5. A positive recurrent and aperiodic state is called an ergodic state.
An irreducible Markov chain with all its states aperiodic is called an ergodic Markov
chain.

PROPOSITION 2.7. If E is finite and the chain is irreducible and aperiodic, then
lim_{n→∞} P^n exists and P^n converges toward Π = 1π at an exponential rate.
λ(n) = f(n)/R(n) . (18)
PROPOSITION 2.8. Consider two independent Markov chains, X^1 and X^2 say,
S^1- and S^2-valued, with transition functions P^1 and P^2, respectively. Then
(X^1, X^2) is an S^1 × S^2-valued Markov chain with transition function P, with
P((i,j), (k,ℓ)) = P^1(i,k) P^2(j,ℓ), for all i, k ∈ S^1 and j, ℓ ∈ S^2.
The above proposition can be generalized for more than two processes.
In the case of binary systems, we have to partition the whole state space E into
disjoint sets, U and D say, where U includes the working states (up states) and D
the failure states (down states), i.e., U ∪ D = E, U ∩ D = ∅, U ≠ ∅ and D ≠ ∅.
The reliability-related indicators at time n ≥ 0 become:
• Reliability: R(n) = IP(∀v ∈ [0, n] ∩ N, X_v ∈ U).
• Availability (pointwise): A(n) = IP(X_n ∈ U).
• Maintainability: M(n) = 1 - IP(∀v ∈ [0, n] ∩ N, X_v ∈ D).
Consider now a binary multistate system (BMS) with state space
E = {1, ..., s}, up states U = {1, ..., r} and down states D = {r+1, ..., s},
described by an E-valued Markov chain X, with transition probability matrix P
and initial distribution vector α.
T = inf{k ≥ 0 : X_k ∈ D}
and
Y = inf{k > 0 : X_k ∈ U} .
PROPOSITION 2.10.

MTTF = IE[T] = α_1 (I - P_11)^{-1} 1_r ,
MTTR = IE[Y] = α_2 (I - P_22)^{-1} 1_{s-r} .

PROPOSITION 2.11.

MUT := IE_{β1}[T] = π_1 1_r / (π_2 P_21 1_r) , (21)

MDT := IE_{β2}[Y] = π_2 1_{s-r} / (π_1 P_12 1_{s-r}) , (22)

where β_1 and β_2 are the input distributions to the sets U and D, respectively, under π.
For the MTBF we have the following representation: MTBF = MUT + MDT.

PROPOSITION 2.12. The variance of the hitting time T of the set D is given by

Var(T) = α_1 (I + P_11)(I - P_11)^{-2} 1_r - (α_1 (I - P_11)^{-1} 1_r)^2 .

For the two-state example above, MTTR = MDT = 1/q.
P_{n,n}(i,j) = 1_{i=j} ,

and the Chapman-Kolmogorov equation takes the form P_{n,m} = P_{n,r} P_{r,m}
for n ≤ r ≤ m.
Define now the matrices P_n = (P_n(i,j); i,j ∈ E) and P_{n,m} = (P_{n,m}(i,j); i,j ∈ E).
From the Chapman-Kolmogorov equation we can easily obtain

P_{n,n+r} = ∏_{k=0}^{r-1} P_{n+k} .
Let α be an initial distribution on E. Then, using the conditional proba-
bility formula and the Markov property, the following relation can be easily
derived:

PROPOSITION 2.13. For all n ≥ 1 and all i_0, i_1, ..., i_n ∈ E, we have:

IP(X_0 = i_0, X_1 = i_1, ..., X_n = i_n) = α(i_0) P_0(i_0, i_1) P_1(i_1, i_2) ... P_{n-1}(i_{n-1}, i_n) .

Consider, for example, the two-state chain with transition matrices

P_n = ( 1/(n+2)      (n+1)/(n+2) )
      ( (n+1)/(n+2)  1/(n+2)     ) .

Then for any initial distribution α, we have, for 1 ≤ n ≤ m,

P_{n,m} = (1/2) ( 1+γ_{n,m}  1-γ_{n,m} )
                ( 1-γ_{n,m}  1+γ_{n,m} ) ,   γ_{n,m} = (-1)^{m-n} n(n+1)/(m(m+1)) .
The following results can also be found in Platis et al. (1998).
MTTF = α_1 ( I + Σ_{n≥1} ∏_{k=0}^{n-1} P_{k,11} ) 1_r ,

MTTR = α_2 ( I + Σ_{n≥1} ∏_{k=0}^{n-1} P_{k,22} ) 1_{s-r} .
Consider, for example, the three-state chain with transition matrices

P_n = ( 1/((n+1)a)  1/((n+1)a)  1 - 2/((n+1)a) )
      ( 1/((n+1)b)  1/((n+1)b)  1 - 2/((n+1)b) )
      (     0           0             1        ) ,

where a > 1 and b > 1. The state space partition is U = {1, 2} and D = {3}.
Then, with initial distribution (1, 0, 0), we have:

A(n) = R(n) = 2(a+b)^{n-1} / (n! a^n b^{n-1}) , n ≥ 1 ,

and

MTTF = 1 + (2b/(a+b)) [e^{(a+b)/(ab)} - 1] .
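The closed forms above can be checked numerically by multiplying out the up-state blocks of the P_n. The parameter values a, b and the initial distribution (1, 0, 0) below are the assumptions of the example, fixed here for illustration.

```python
import math
import numpy as np

# Illustrative parameter choices (a > 1, b > 1).
a, b = 2.0, 3.0
alpha1 = np.array([1.0, 0.0])  # initial distribution restricted to U

def P_up(n):
    """Up-state block P_{n,11} of the transition matrix."""
    return np.array([[1 / ((n + 1) * a), 1 / ((n + 1) * a)],
                     [1 / ((n + 1) * b), 1 / ((n + 1) * b)]])

def R_product(n):
    """R(n) computed as alpha_1 (prod of blocks) 1_r."""
    M = np.eye(2)
    for k in range(n):
        M = M @ P_up(k)
    return float(alpha1 @ M @ np.ones(2))

def R_closed(n):
    """The closed form R(n) = 2 (a+b)^{n-1} / (n! a^n b^{n-1})."""
    return 2 * (a + b) ** (n - 1) / (math.factorial(n) * a**n * b ** (n - 1))

for n in range(1, 8):
    assert abs(R_product(n) - R_closed(n)) < 1e-12

# MTTF as the sum of R(n) over n >= 0, versus the closed form.
mttf_series = 1 + sum(R_closed(n) for n in range(1, 60))
mttf_closed = 1 + (2 * b / (a + b)) * (math.exp((a + b) / (a * b)) - 1)
assert abs(mttf_series - mttf_closed) < 1e-10
```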
3.1. Definition
This section deals with continuous time Markov chains, i.e., I = IR+. As in the
previous section, we consider here a probability space (Ω, F, IP) on which we
define an E-valued stochastic process X = (X(t), t ∈ IR+). The state space E here
is at most a countable space.
P_{t+h}(i,j) = Σ_{k∈E} P_t(i,k) P_h(k,j) , (26)

which in matrix form can be written as

P_{t+h} = P_t P_h . (27)

Thus the family (P_t, t ≥ 0) forms a semi-group. We will study here Markov
processes with semi-group satisfying the following property:

lim_{t↓0} P_t(i,j) = 1_{i=j} (28)

for all i, j ∈ E, and the semi-group will be called standard. If lim_{t↓0} P_t(i,i) = 1 is
satisfied uniformly with respect to i ∈ E, then the semi-group will be called uniform.
A = ( A_11  A_12 )
    ( A_21  A_22 )

and

α = (α_1, α_2) .
Availability:
PROPOSITION 3.1. The (pointwise) availability of a CTMC system is given by

A(t) = α e^{tA} 1_{s,r} . (29)

The steady-state availability is

A_∞ = Σ_{k∈U} π(k) = π 1_{s,r} .

Reliability:
PROPOSITION 3.2. The system's reliability is R(t) = α_1 e^{tA_11} 1_r.

Maintainability:
PROPOSITION 3.3. The maintainability is given by M(t) = 1 - α_2 e^{tA_22} 1_{s-r}.
Hitting times:
For the mean hitting times that were defined in the previous section we have the
following:
PROPOSITION 3.4.

MTTF = -α_1 A_11^{-1} 1_r ,
MTTR = -α_2 A_22^{-1} 1_{s-r} .

PROPOSITION 3.5.

MUT = π_1 1_r / (π_2 A_21 1_r) ,
MDT = π_2 1_{s-r} / (π_1 A_12 1_{s-r})

and MTBF = MUT + MDT.

PROPOSITION 3.6.

Var(T) = 2 α_1 A_11^{-2} 1_r - (α_1 A_11^{-1} 1_r)^2 .
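The formulas of Propositions 3.1, 3.2 and 3.4 can be sketched numerically. The 3-state generator, the rates and the partition U = {1, 2}, D = {3} below are purely hypothetical choices, and the matrix exponential is computed by its power series, which is adequate for a small, well-behaved generator.

```python
import numpy as np

# Illustrative failure/repair rates (not from the text).
lam1, lam2, mu = 0.1, 0.2, 1.0
A = np.array([[-lam1, lam1, 0.0],
              [0.0, -lam2, lam2],
              [mu, 0.0, -mu]])
alpha = np.array([1.0, 0.0, 0.0])
r = 2                                   # number of up states

def expm(M, t, terms=120):
    """e^{tM} via the power series (adequate for this small generator)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ (t * M) / n
        out = out + term
    return out

ones_up = np.array([1.0, 1.0, 0.0])     # indicator vector 1_{s,r}
A11 = A[:r, :r]                         # generator restricted to up states

t = 2.0
availability = alpha @ expm(A, t) @ ones_up           # A(t), Prop 3.1
reliability = alpha[:r] @ expm(A11, t) @ np.ones(r)   # R(t), Prop 3.2
mttf = -alpha[:r] @ np.linalg.inv(A11) @ np.ones(r)   # MTTF, Prop 3.4

assert 0.0 <= reliability <= availability <= 1.0
assert abs(mttf - 15.0) < 1e-9          # 1/0.1 + 1/0.2 for this chain
```

As expected, reliability never exceeds availability, since R(t) excludes trajectories that fail and are repaired before t.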
For a phase-type distribution with representation (α, T), the generator is

A = ( T  T^0 )
    ( 0   0  )

and the initial distribution is (α, α_{N+1}). The corresponding distribution function is

F(x) = 1 - α e^{xT} 1 .

For the convolution of two phase-type distributions with representations (α, T) and
(β, S), one obtains the generator

L = ( T  T^0 β )
    ( 0    S   )

and γ = (α, α_{N+1}β).
Let us define now some elements of Kronecker's algebra useful for the next
proposition. The operation ⊕ is the Kronecker sum of two matrices. Denote by
M_{mn} the space of matrices of dimension m × n and let A ∈ M_{nn} and B ∈ M_{mm};
the Kronecker sum is defined as follows:

A ⊕ B = (A ⊗ I_m) + (B ⊗ I_n) .

For the maximum of two independent phase-type random variables with
representations (α, T) and (β, S), one obtains the generator

L = ( T ⊕ S  I ⊗ S^0  T^0 ⊗ I )
    (   0       T         0   )
    (   0       0         S   ) .

The tail of a phase-type distribution is asymptotically exponential: 1 - F(x) ~ κ e^{-λx}
as x → ∞, with κ > 0, λ > 0, where -λ is the eigenvalue of T having the greatest
real part and κ = αv, v being the right eigenvector of T corresponding to the
eigenvalue -λ.
Some systems satisfy a Markov property not at all points of time but only at a
special family of increasing stopping times. These times are the state change times
of the considered stochastic process.

DEFINITION 4.1. The stochastic process (J, S) is called a Markov Renewal Process
(MRP) if it satisfies the following relation:

IP(J_{n+1} = j, S_{n+1} - S_n ≤ t | J_0, ..., J_n; S_0, ..., S_n) = IP(J_{n+1} = j, S_{n+1} - S_n ≤ t | J_n) .

The associated semi-Markov process is

Z(t) := J_{N(t)} .

We also write

H_i(t) = Σ_{j∈E} Q_ij(t) ,

m_i = ∫_0^∞ [1 - H_i(u)] du .

The MRP considered here will be strongly regular, i.e., for every t ∈ IR+,
N(t) < ∞ (a.s.), and for all j ∈ E, m_j < ∞.
3. Renewal processes:
(a) Ordinary: it is an MRP with two states E = {0, 1}, P(0,1) = P(1,0) = 1
and Q_01(·) = Q_10(·) = F(·), where F is the common distribution of the inter-arrival
times of the renewal process.
(b) Modified or delayed: it is an MRP with E = {0, 1, 2}, P(0,1) = 1,
P(1,2) = P(2,1) = 1 and 0 elsewhere, and Q_01(·) = F_0(·), Q_12(·) = Q_21(·) =
F(·), where F_0 is the distribution function of the first arrival time and F is
the common distribution function of the other inter-arrival times of the
renewal process.
(c) Alternating: E = {0, 1}, P(0,1) = P(1,0) = 1, and 0 elsewhere, and
Q_01(·) = F(·), Q_10(·) = G(·), where F and G are the distribution functions
corresponding to the odd and even inter-arrival times.
Solution:
If sup_j H_j(t) < 1 for some t > 0 and max_{i,j} sup_x |(I - H(x))(i,j)| ≤ 1, the solution
of the above MRE exists, is unique and is given by

φ(t) = Σ_{n≥0} Q^{(n)} ∗ g(t) ,

where g denotes the known function of the equation,

Q^{(n)}_ij(t) = Σ_{k∈E} ∫_0^t Q_ik(du) Q^{(n-1)}_kj(t - u) if t ≥ 0, and Q^{(n)}_ij(t) = 0 if t < 0 ,

and

Q^{(0)}_ij(t) = 1_{i=j} 1_{t≥0} , Q^{(1)}_ij(t) = Q_ij(t) .
Under the hypotheses that IE_i[X_1] < ∞ and σ_i^2 = IE_i([Y^i - S_1^i IE_i(X_1)]^2) < ∞,
where Y^i = Σ_{j=S_{n-1}^i + 1}^{S_n^i} X_j, S_0^i = 0, n = 1, 2, ..., and S_n^i is the nth
recurrence time of state i for the Markov chain (J_n).
W_f(t) = Σ_{i,j} Σ_{n=1}^{N_ij(t)} f(i, j, X_{ijn}) .

Define

A_ij = ∫_0^∞ f(i, j, x) dQ_ij(x) , A_i = Σ_{j=1}^{s} A_ij ,

B_ij = ∫_0^∞ (f(i, j, x))^2 dQ_ij(x) , B_i = Σ_{j=1}^{s} B_ij

and

m_f = Σ_{j=1}^{s} A_j / μ_jj , B_f = Σ_{j=1}^{s} B_j / μ_jj .
THEOREM 4.6 (Pyke and Schaufele, 1964). Under the hypotheses that the above
moments are finite, we have that, as t → ∞, W_f(t)/t → m_f almost surely.

Consider now the integral functional

W(t) = ∫_0^t g(Z(u)) du .
THEOREM 4.7 (Limnios and Oprisan, 1999a, b). Suppose that (H) is fulfilled, that
Σ_{n≥1} α(n)^{1/2} < ∞ and that Var_ν[X_1] < ∞. Put

ζ_n(t) = (1/(σ√n)) (W(nt) - nt m̃) , t ≥ 0 ,

where m̃ denotes the stationary mean of the functional per unit time, and

σ^2 = Var_ν[g(J_0)X_1] + 2 Σ_{k≥2} Cov_ν(g(J_0)X_1, g(J_{k-1})X_k) .

Then ζ_n ⇒ W, provided that σ^2 > 0. W is the Wiener measure and ⇒ denotes
weak convergence.
Put

Q̂_ij(t, T) = (1/N_i(T)) Σ_{k≥1} 1_{J_{k-1} = i, J_k = j, X_k ≤ t} .
THEOREM 4.8 (Moore and Pyke, 1961). The empirical estimator Q̂_ij(t, T) of the
semi-Markov kernel is uniformly strongly consistent, i.e.,

sup_{0≤t≤L} |Q̂_ij(t, T) - Q_ij(t)| → 0 a.s., as T → ∞ .

The same property holds for the reliability estimator:

sup_{0≤t≤L} |R̂_ij(t, T) - R_ij(t)| → 0 a.s., as T → ∞

(L ∈ IR+).
Reliability:

R(t) = α_1 (I - Q_11(t))^{(-1)} ∗ (I - H_1(t)) 1_r .

Availability:

A(t) = α (I - Q(t))^{(-1)} ∗ (I - H(t)) 1_{s,r} ,

where H(t) denotes the diagonal matrix with entries H_i(t) and
(I - Q)^{(-1)} = Σ_{n≥0} Q^{(n)}.

Mean times:

MTTF = α_1 (I - P_11)^{-1} m_1 ,
MTTR = α_2 (I - P_22)^{-1} m_2 .

Mean up time:

MUT = ν_1 m_1 / (ν_2 P_21 1_r)

and mean down time:

MDT = ν_2 m_2 / (ν_1 P_12 1_{s-r}) ,

where ν = (ν_1, ν_2) is the stationary distribution of the EMC.
5.1. Introduction
A Monte Carlo method is usually used to provide an approximate solution to the
following estimation problem: let h be a real function, h : IR → IR. We want to
find the value of the integral

I = ∫_0^1 h(x) dx . (33)

If X is a r.v. uniformly distributed on [0, 1], with density f_X(x) = 1 on [0, 1], then

I = ∫_0^1 h(x) dx = ∫_0^1 h(x) f_X(x) dx = IE[h(X)] . (34)
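Representation (34) leads directly to the basic Monte Carlo estimator: average h over uniform draws. The integrand below is an arbitrary illustrative choice.

```python
import random

# Monte Carlo estimate of I = integral of h over [0,1] as the sample
# mean of h(U), U ~ Uniform(0,1).
def mc_integral(h, n, seed=0):
    rng = random.Random(seed)
    return sum(h(rng.random()) for _ in range(n)) / n

# Example: h(x) = 3x^2, whose exact integral over [0, 1] is 1.
estimate = mc_integral(lambda x: 3 * x * x, 200_000)
assert abs(estimate - 1.0) < 0.02
```

The error of this estimator decreases like 1/√n, independently of the dimension of the integral, which is the root of the method's appeal in higher dimensions.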
x_0 initial value,
x_n = (a x_{n-1} + c) mod m, n ≥ 1 ,

where the initial value x_0 is called the seed of the generator, and a and c are given
positive integers. The elements of the generated sequence x_n will then be ap-
proximately uniformly distributed in the set {0, 1, ..., m - 1}. The number m
bounds the period of the generator: after a large number of draws (less than m),
the sequence of numbers is reproduced exactly the same. It is therefore desirable
to choose a, c and m in a way that the period of the generator is the largest
possible, and in fact constructing "good" pseudo-random number
generators constitutes an active research area.
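The recursion above can be sketched in a few lines. The constants a, c, m below are a classic published choice used purely as an example; they are not prescribed by the text.

```python
# Minimal linear congruential generator sketch; a, c, m are the
# Numerical Recipes constants, giving a full period of 2**32.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m  # scale to [0, 1)

gen = lcg(seed=42)
sample = [next(gen) for _ in range(5)]
assert all(0.0 <= u < 1.0 for u in sample)
assert len(set(sample)) == 5  # consecutive draws within one period differ
```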
As far as reliability theory is concerned, Monte Carlo simulation is a very
efficient tool, since it does not make any restrictive assumptions on the system
model. We can thus simulate without any particular difficulty DTMC, CTMC,
semi-Markov processes, as well as non-Markovian ones. The output of the sim-
ulation of a stochastic process is normally the trajectories describing the evolution
of the system in time, i.e., the state occupied by the system at any instant. No
doubt, these simulated trajectories will be close to the real trajectories of the
system, depending of course on the system model and our ability to clearly de-
scribe it. Consequently, a number of simulated trajectories can be used in order to
estimate system parameters such as the reliability, the availability, the MTTF,
the stationary distribution, etc.
In the following, we give an algorithm describing how to simulate a discrete
random variable. This algorithm will be useful to construct an algorithm for the
simulation of DTMC.
1. Set n = 0 and draw the initial state X_0(ω) from the initial distribution;
2. Generate a number u uniformly distributed on (0, 1);
3. Find the state k such that

Σ_{j=1}^{k-1} P(X_n(ω), j) < u ≤ Σ_{j=1}^{k} P(X_n(ω), j) ;

4. Set n = n + 1, X_n(ω) = k;
5. Repeat steps 2-4 for the number of jumps of the chain needed;
6. Output the sequence of states visited, described by X_n(ω).
Different implementations of the previous algorithm are also possible. For
example, given that we are in state i at time n, one could generate a geometric
random variable having parameter 1 - P(i, i) to find out when the chain will exit
the current state i, and then use the transformed transition matrix to find the state
into which the process will finally jump.
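The six steps above can be sketched as follows. The two-state transition matrix and the parameter values are illustrative, echoing the example of Section 2.

```python
import random

# Simulate a DTMC by inverting the cumulative row of the transition
# matrix at each step (steps 2-4 of the algorithm).
def simulate_dtmc(P, alpha, n_steps, seed=1):
    rng = random.Random(seed)

    def draw(dist):
        u, acc = rng.random(), 0.0
        for state, prob in enumerate(dist):
            acc += prob
            if u <= acc:
                return state
        return len(dist) - 1  # guard against rounding

    path = [draw(alpha)]                  # step 1: initial state
    for _ in range(n_steps):              # step 5: repeat
        path.append(draw(P[path[-1]]))    # steps 2-4: next state
    return path                           # step 6: trajectory

# Two-state chain with p = 0.3, q = 0.5 (illustrative values).
P = [[0.7, 0.3], [0.5, 0.5]]
path = simulate_dtmc(P, alpha=[1.0, 0.0], n_steps=1000)
assert path[0] == 0 and set(path) <= {0, 1}
```

The empirical fraction of time spent in state 1 over a long trajectory approximates the stationary probability q/(p+q).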
Consider again the binary system described in Section 2.1.1. One typical
(simulated) trajectory of the corresponding Markov chain is given in Figure 1.
Moreover, independent simulations can be carried out in order to estimate the
principal system parameters.
Fig. 1. A simulated trajectory of the two-state Markov chain (state plotted against time).
For example, for the exponential distribution with parameter λ, the inverse-transform
method gives

x = F^{-1}(u) = -(1/λ) log(1 - u) .
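The inverse-transform recipe above can be sketched and checked against the known mean 1/λ; the rate value below is an illustrative choice.

```python
import math
import random

# Inverse-transform sampling of an exponential r.v.: if U ~ Uniform(0,1),
# then -log(1 - U)/lam follows the exponential law with rate lam.
def sample_exponential(lam, rng):
    u = rng.random()
    return -math.log(1.0 - u) / lam

rng = random.Random(7)
lam = 2.0
draws = [sample_exponential(lam, rng) for _ in range(100_000)]
mean = sum(draws) / len(draws)
assert abs(mean - 1.0 / lam) < 0.02  # sample mean near 1/lambda
```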
Consider now the class of systems whose failure is an event of small probability.
Examples of this type of systems are computer systems and networks, nuclear
stations, communication systems, etc. In order to analyze the behavior of such
systems, our principal tool will be the variance reduction methods.
Throughout this chapter the term "rare" will imply a probability that is
"sufficiently" small. For example, in the communication systems field we are
interested in the probability of error during the transmission of bits. This prob-
ability is often smaller than 10^-6, and carrying out the analysis of such a system by
taking into account events of this type becomes an extraordinary task. We have
to underline, however, that a clear definition of the rare event does not exist,
since it depends not only on the probability of the event but also on the system
model. Moreover, the term "rare" reflects the difficulty and the effort involved in
obtaining estimates of the associated measure. Thus, the event of a failure of a
two-component system having a failure probability of 10^-9 may be stated as "less
rare" than the failure of a 300-component system whose corresponding failure
probability is 10^-8. Moreover, the notion of rare event changes as years pass by.
It may also be the case that the system in question is quite complex, con-
taining a large number of components. To model a system having n components
we need a state space of at least 2^n different states. The analysis becomes more
complicated when the system at hand is, furthermore, non-Markovian in nature.
In the case of a large state space, state lumping or state aggregation methods can
be used (cf. Goyal et al., 1987; Kemeny and Snell, 1976), allowing us to reduce the
dimension of the problem and treat a system S_1 instead of the original S. However,
these techniques run into practical difficulties, as a considerable amount of com-
puter time and memory is still required. Furthermore, they are not easily applied
to complicated system models, and even when this is possible it may sometimes be quite
difficult to assess the error incurred through the state aggregation process.
Existing analytical methods are not efficient in such settings, and we are quickly
obliged to use simulation in order to estimate the system's performance. In the case of
rare events, direct simulation is not efficient since, in order to get an idea about the
probability of the event, we have to hit it several times and thus use a large
number of samples. For instance, if we want a 20% error and a 95% confidence
interval for the estimation of a probability of the order of 10^-6, we will certainly
need at least 10^8 realizations.
Given that direct simulation is too expensive in terms of the number of samples
needed in order to obtain a given confidence level, other simulation methods have
appeared to face this problem. These methods are known as variance reduction
methods; they are quicker than the standard Monte Carlo simulation scheme and
can be principally classified into four separate categories (see Bratley et al., 1987;
Fishman, 1996; Ripley, 1987; Ross, 1990):
1. correlation methods;
2. conditioning methods;
3. methods of importance;
4. others...
In the first category, we can find the method of antithetic variables together with
the one of control variables. The underlying principal idea consists in using
correlated (negatively or positively) random variables in order to reduce as much
as possible the variance of the corresponding estimator.
The second category comprises the method of conditioning as well as the
method of stratified sampling, both of which are based on the formula of the
conditional expectation/variance.
The third one deals with the method of importance sampling, a method that
is actually used to estimate system parameters, especially in highly reliable
Markovian systems (see Glynn and Iglehart, 1989; Heidelberger, 1995). This
method consists in modifying the original probabilistic dynamics of the system
and carrying out the simulation using a new distribution. A compensatory factor,
called the likelihood ratio, will remove the bias introduced to the estimator by this
change of measure. The aim of this method is to choose a new distribution that
will result in a variance reduction. Normally, this distribution has to privilege the
rare event in question and make it happen more frequently. In such cases, a
significant amount of variance reduction may be obtained. However, a bad choice
of this new distribution may give rise to an infinite variance, and therefore we have
to be very careful when making such changes of measure.
Var[γ̂_n] = Var[X_1]/n .

Consider now the case when n = 2k, with k ∈ N, and the estimator

γ̂_n = (1/k) Σ_{i=1}^{k} (X_i + X̃_i)/2 ,

where X̃_i is an antithetic counterpart of X_i (obtained, for instance, from 1 - U
when X_i is obtained from a uniform U). Its variance is equal to

Var[γ̂_n] = (Var[X_1] + Cov[X_1, X̃_1])/(2k) ,

which is smaller than Var[X_1]/n whenever X_1 and X̃_1 are negatively correlated.

For the method of control variables, let Y be a random variable with known
mean μ_Y that is simulated together with X. For any constant c, X + c(Y - μ_Y)
has expectation IE[X], and the variance-minimizing choice of c is

c* = -Cov[X, Y]/Var[Y] .

Moreover, using such a value for the parameter c, the variance of the estimator
(35) becomes

Var[X + c*(Y - μ_Y)] = Var[X] - (Cov[X, Y])^2/Var[Y] .
Even though, using the method of control variables, we have the benefit of effectively
reducing the variance of the estimator, this method has the inconvenience of
requiring some additional information. We need to know, for example, Var(Y)
and Cov(X, Y), which is not always the case. In practice, these quantities have to
be estimated using the first samples of the simulation, and the estimates obtained
can be used to give an approximate value for c*. We can then carry out the rest of
the simulation using this value.
The name of control variables stems from the fact that the random variable Y
and the samples obtained by the simulation play the role of the correction/control
factor. In other words, when the simulated Y value is greater than its already
known expected value, then, if X and Y are positively correlated, X will have the
tendency to be greater than its mean also. However, in this case, Cov(X, Y)
will be positive, thus making c* negative, which will consequently adjust the
value of X + c*(Y - μ_Y) closer to the mean than X itself. Similar arguments
can be given for the case when X and Y are negatively correlated or the observed
value of Y is smaller than its known mean.
No doubt, the efficiency of the method depends principally on the chosen
control variable Y as well as on the precision we have in estimating the c* value.
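A small control-variate experiment makes the mechanism concrete. The target quantity IE[e^U] (exact value e - 1) and the control Y = U with known mean 1/2 are illustrative choices, and c* is estimated from the sample, as described above.

```python
import math
import random

# Control-variate sketch: estimate E[exp(U)] using Y = U as control.
rng = random.Random(3)
n = 100_000
xs, ys = [], []
for _ in range(n):
    u = rng.random()
    xs.append(math.exp(u))
    ys.append(u)

mean_x = sum(xs) / n
mean_y = sum(ys) / n
cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / n
var_y = sum((y - mean_y) ** 2 for y in ys) / n

c_star = -cov_xy / var_y                       # estimated optimal c
controlled = [x + c_star * (y - 0.5) for x, y in zip(xs, ys)]
estimate = sum(controlled) / n

var_plain = sum((x - mean_x) ** 2 for x in xs) / n
var_ctrl = sum((z - estimate) ** 2 for z in controlled) / n

assert abs(estimate - (math.e - 1)) < 0.005
assert var_ctrl < var_plain                    # variance reduction achieved
```

Here X = e^U and Y = U are strongly positively correlated, so the residual variance Var[X] - Cov[X,Y]^2/Var[Y] is only a few percent of the original.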
Since the variance is always nonnegative, both terms on the right-hand side of the
conditional variance formula Var[X] = IE(Var[X|Y]) + Var(IE[X|Y]) are
nonnegative, implying thus that Var(IE[X|Y]) ≤ Var[X].
in each of these. Then, by taking the expected value of the output of the simu-
lations on all of the strata, we can obtain the desired estimate of the system's
parameter.
However, the principal difference from the method of conditioning is the fact
that the method of conditioning uses (38) to prove the variance reduction, while in the case of
stratified sampling the basic argument is that Var[X] ≥ IE(Var[X|Y]). Moreover,
it is sometimes difficult to separate the sample space into strata and to define
how many samples we have to take from each one in order to have the largest
possible variance reduction. This depends clearly on the problem at hand and
may be difficult to do in the general case, except in problems having an intrinsic
layered structure. See Ross (1990) for the details and examples of this method.
γ̂_n = (1/n) Σ_{i=1}^{n} h(X_i) .

In the case where h(x) = 1_A(x), with 1_A(x) = 1 if x ∈ A and 0 otherwise, then
γ = IP(X ∈ A). Moreover, the variance of the estimator γ̂_n is equal to γ(1 - γ)/n,
while the associated 100 × (1 - δ)% confidence interval will be

γ̂_n ± z_{δ/2} √(γ̂_n(1 - γ̂_n)/n) , (39)

where z_{δ/2} is defined by the equation δ/2 = P(Z > z_{δ/2}) and Z denotes a random
variable having the standard normal distribution N(0, 1). If we are interested in
constructing a confidence interval for γ, the natural way will be to continue the
simulation until the interval's half-width becomes less than κ (κ ∈ ]0, 1[) times the
value of the parameter that we are trying to estimate. Thus, the stopping criterion
for our simulation will be

z_{δ/2} √(γ̂_n(1 - γ̂_n)/n) ≤ κ γ̂_n , which implies that z_{δ/2} √((1 - γ̂_n)/(n γ̂_n)) ≤ κ . (40)
Note, however, that the relative error (RE) of the estimator γ̂_n, which is defined to
be the ratio of its standard deviation to its expected value, will be given by (as
n → +∞)

RE(γ̂_n) = z_{δ/2} √((1 - γ̂_n)/(n γ̂_n)) ≈ z_{δ/2}/√(nγ) , since γ̂_n → γ .

It is this last equation that clearly illustrates the inconvenience of using direct
simulation: the relative error of the estimator grows without bound as the
event becomes rarer and rarer (i.e., RE(γ̂_n) → +∞ as γ → 0). It also means that,
in order for Eq. (40) to be satisfied and thus obtain the desired relative precision
of estimation, we have to considerably increase the size n of the sample. In other
words, in order to estimate γ up to a certain level of precision, one has to increase
the number of simulation runs as the probability of the event becomes smaller and
smaller.
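Solving criterion (40) for n quantifies this growth. The helper below is a direct rearrangement of (40); the values κ = 0.2 and z = 1.96 correspond to the 20% error, 95% confidence example given earlier.

```python
import math

# Sample size needed so that z * sqrt((1 - g)/(n g)) <= kappa, from (40).
def required_samples(gamma, kappa=0.2, z=1.96):
    return math.ceil(z**2 * (1 - gamma) / (kappa**2 * gamma))

n6 = required_samples(1e-6)  # roughly 10^8, as stated in the text
assert 9e7 < n6 < 1e8

# Each factor-of-10 drop in gamma costs about 10x more samples.
assert 9 < required_samples(1e-7) / n6 < 11
```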
γ = ∫_IR h(x) f(x) dx = ∫_IR h(x) (f(x)/f'(x)) f'(x) dx = IE_{f'}[h(X)L(X)] , (41)

where L(X) = f(X)/f'(X) represents the corresponding likelihood ratio and the
subscript f' means that the expected value is now taken with respect to the new
density f'.
The name given to this method is due to the fact that the process is sampled in
the areas that are more important for the estimation of γ; in the case where
h(x) = 1_A(x), the areas where the event {X ∈ A} is realized. Consequently, the new
density f' has to be chosen in a way that makes the rare event under consideration
more likely to occur. Since this change of measure introduces a bias to our
estimation, the results obtained by the simulation have to be multiplied by the
appropriate likelihood ratio. This term plays the role of the compensatory factor,
since the system has been simulated using a probability measure that is not
directly associated with the system's model.
Eq. (41) is valid only in the case that f'(x) > 0 for every x ∈ IR with f(x) > 0
and h(x) > 0, which implies that a possible value of X under f is also possible
under f'. It is possible, however, to have f'(x) = 0 and f(x) > 0 for x ∈ IR
with h(x) = 0. By making this change of measure, the new unbiased estimator of γ
will be γ̂_n(f') = (1/n) Σ_{i=1}^{n} h(ω_i)L(ω_i), where the new n-sample (ω_1, ..., ω_n) has now
been generated using density f'. Its corresponding variance is given by
n Var_{f'}[γ̂_n(f')] = ∫_IR h^2(x) L(x) f(x) dx - γ^2 = IE_f[h^2(X)L(X)] - γ^2 . (42)
The main aim of importance sampling is to find a suitable, and easily
implementable, new density f' in order to minimize the variance of γ̂_n(f') and, by
doing this, reduce the cost of the estimation procedure. Thus, using importance
sampling, the rare event has to be realized more often, meaning that its new
probability must be greater than the original one. The corresponding L term in
Eq. (42) has to be kept as small as possible. In the case where the L term is
uniformly less than one, then Var_{f'}[γ̂_n(f')] < γ − γ² = Var_f[γ̂_n(f)] and we will
certainly obtain a variance reduction. Another alternative would be to choose f'
in a way that E_f[h(X)L(X)] is of the same order of magnitude as γ². In such cases
the associated change of measure is sometimes called asymptotically efficient or
asymptotically optimal (see the survey of Heidelberger, 1995 for a discussion on
this matter).
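To make the mechanism concrete, here is a minimal numerical sketch (my own toy example, not from the chapter): estimating γ = P(X > 4) for X ~ N(0,1) by sampling from a density shifted onto the rare region and reweighting each sample by the likelihood ratio L(x) = f(x)/f'(x), exactly as in the estimator preceding Eq. (42).

```python
import math
import random

random.seed(1)

def is_estimate(n, a=4.0):
    """Importance sampling for gamma = P(X > a), X ~ N(0, 1).
    Sampling density f' is N(a, 1), shifted onto the rare region.
    Each sample is reweighted by L(x) = f(x) / f'(x)."""
    total = 0.0
    for _ in range(n):
        x = random.gauss(a, 1.0)                 # draw from f'
        if x > a:                                # h(x) = 1_{x > a}
            # L(x) = phi(x)/phi(x - a) = exp(-x^2/2 + (x - a)^2/2)
            total += math.exp(-x * x / 2.0 + (x - a) ** 2 / 2.0)
    return total / n

est = is_estimate(100_000)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))    # P(N(0,1) > 4), about 3.17e-5
print(est, exact)
```

With 100,000 samples the relative error here is well under 1 percent, whereas crude Monte Carlo would see the event only a handful of times.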
An optimal change of measure is defined to be a measure that results in a zero
variance estimator for the unknown quantity (see Kuruganti and Strickland,
1995) and it always exists. For our example, this corresponds to choosing

f'(x) = h(x)f(x)/γ .

Using the optimal change of measure for the simulation, the exact value of the
parameter will be obtained in the first simulation run. Unfortunately, it has the
disadvantage of containing γ, the parameter that we are trying to estimate,
making it thus not directly exploitable. Nevertheless, in some special cases, we can
explicitly construct this optimal change of measure, which will enable us not only
to estimate γ at a minimum cost, but also, and more importantly, to find its
exact value as a by-product of the intermediate calculations (see Kuruganti and
Strickland, 1995, 1997).
The conditions on the applicability as well as the theoretical framework behind
importance sampling are given in Glynn and Iglehart (1989). In their work, im-
portance sampling is extended to problems arising in the simulation of both
discrete time and continuous time Markov processes, as well as in generalized
semi-Markov processes. In the same paper, the authors discuss the estimation of
steady-state quantities, which can be carried out by exploiting the regenerative
structure of the Markov chain, as well as the estimation of transient
quantities, where a different approach has to be used.
Y_n = X_{T_n}, n = 0, 1, ... ,

will be the embedded discrete time Markov chain associated with X_t, where the T_n
are the jump times of the process. The elements of its transition matrix
P = {P(x,y) : x, y ∈ E} are given by P(x,y) = q(x,y)/q(x)
when x ≠ y, and 0 otherwise.
Importance sampling can be easily extended to the case of Markovian sys-
tems. In order to modify the probabilistic dynamics of the system, one has to
basically modify the transition probability matrix or the generator of the pro-
cess (see Glynn and Iglehart, 1989) and/or the initial distribution of the process.
Ω_n = {y^n = (y_0, y_1, ..., y_n) : y_i ∈ E} ,
P(y^n) = μ(y_0)P(y_0, y_1) ... P(y_{n−1}, y_n) ,

where μ(y_0) = P(Y_0 = y_0) is the initial law of the chain. Moreover, let B_n ⊂ Ω_n
stand for the set of all paths for which {τ = n}. We have the following propo-
sition:
PROPOSITION 6.1 (Goyal et al., 1992). Consider a discrete time Markov chain with
transition probability matrix P. Let P be the probability measure associated with
the different trajectories of the chain and τ a stopping time which is finite under P
with probability 1. Denote also by Z a measurable function of Y^τ for which
E_P[|Z(Y^τ)|] < ∞. Let P' be a new probability measure for which τ is also finite with
probability 1 and for any y^n ∈ B_n, P'(y^n) ≠ 0 whenever Z(y^n)P(y^n) ≠ 0. Then
E_P[Z(Y^τ)] = E_{P'}[Z(Y^τ)L(Y^τ)], with L(y^n) = P(y^n)/P'(y^n), for any y^n ∈ B_n.
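A sketch of Proposition 6.1 in action, on a toy chain of my own devising (not the chapter's model): a random walk on {0, 1, 2, 3} that moves up with probability 0.1 under P is simulated under a measure P' with up-probability 0.5, each path being weighted by the product of the per-transition ratios P(y_k, y_{k+1}) / P'(y_k, y_{k+1}); here Z is the indicator that state 3 is hit before state 0.

```python
import random

random.seed(2)

# Toy chain: random walk on {0, 1, 2, 3}; 0 and 3 absorbing, start at 1.
P_UP, Q_UP = 0.1, 0.5    # up-probability under P and under P'

def weighted_path():
    """Simulate one path under P' and return Z(Y^tau) * L(Y^tau):
    Z = 1{state 3 is reached before state 0}, and L is the product of
    the per-transition ratios P(y_k, y_k+1) / P'(y_k, y_k+1)."""
    y, L = 1, 1.0
    while y not in (0, 3):
        if random.random() < Q_UP:
            L *= P_UP / Q_UP
            y += 1
        else:
            L *= (1.0 - P_UP) / (1.0 - Q_UP)
            y -= 1
    return L if y == 3 else 0.0

n = 50_000
est = sum(weighted_path() for _ in range(n)) / n
exact = (9.0 - 1.0) / (9.0 ** 3 - 1.0)   # gambler's-ruin value, q/p = 9
print(est, exact)
```

Under P the event has probability about 0.011 and is rarely seen; under the symmetric P' it occurs in roughly a third of the runs, and the likelihood ratio restores unbiasedness.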
Remark, however, that in this case it is not necessary for the new importance
sampling measure to correspond to a time-homogeneous Markov chain. A dif-
ferent measure P' given by
considered failed. Let h(y) = 1/q(y) be the mean sojourn time in state y and let
g(y) = 1_F(y)h(y). Then, we can write (see Crane and Iglehart, 1975)

α = E_P[ Σ_{k=0}^{τ_0 − 1} g(Y_k) ] / E_P[ Σ_{k=0}^{τ_0 − 1} h(Y_k) ] .   (45)
Let us now define T_B = inf{t > 0 : X_t ∈ B}, the hitting time of B ⊂ E (with the
convention that inf ∅ = +∞), and τ_B = inf{n > 0 : Y_n ∈ B}, the number of jumps
for Y to enter B ⊂ E. A somewhat similar representation holds for the mean time
to failure (MTTF) of the system, which may be written as (see Goyal et al.,
1992)

MTTF = E_P[T_F] = E_P[min(T_0, T_F)] / P(τ_F < τ_0)
     = E_P[ Σ_{k=0}^{min(τ_0, τ_F) − 1} h(Y_k) ] / E_P[1_{τ_F < τ_0}] ,   (46)

where P(τ_F < τ_0) = E_P[1_{τ_F < τ_0}] stands for the probability of a failure during a
cycle (a cycle is defined to be the period between two consecutive instants when
the system is in state 0 and a component failure occurs).
This last expression is the key relation for the MTTF estimation using Monte
Carlo simulation. Thus, in both cases the general problem of estimation boils
down to the estimation of the ratio of two expected values

ρ = E_P[G] / E_P[H] ,

which is estimated under the importance sampling measure by the ratio estimator

ρ̂(n, δ) = ( Σ_{j=1}^{⌊δn⌋} G_j L_j / ⌊δn⌋ ) / ( Σ_{j=1}^{⌊(1−δ)n⌋} H_j L_j / ⌊(1−δ)n⌋ ) ,

in which a fraction δ of the n runs is allocated to the numerator and the rest to
the denominator. This is a consistent estimator for ρ, and we have (see Goyal et al., 1992):

lim_{n→∞} ρ̂(n, δ) = ρ   with probability 1.
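The ratio-of-expectations form can be checked on the simplest regenerative model (my own example, with no importance sampling, so L ≡ 1): a two-state up/down system whose steady-state unavailability λ/(λ+μ) is estimated as a ratio of cycle sums, the numerator accumulating G (downtime per cycle) and the denominator H (cycle length).

```python
import random

random.seed(3)

LAM, MU = 0.01, 1.0    # assumed failure and repair rates

def cycle():
    """One regenerative cycle of a two-state up/down system:
    an up period ~ Exp(LAM) followed by a repair ~ Exp(MU).
    Returns (G, H) = (downtime in the cycle, cycle length)."""
    up = random.expovariate(LAM)
    down = random.expovariate(MU)
    return down, up + down

num = den = 0.0
for _ in range(20_000):
    g, h = cycle()
    num += g
    den += h

unavail = num / den              # ratio-of-means estimator
exact = LAM / (LAM + MU)         # steady-state unavailability, 1/101 here
print(unavail, exact)
```

The same skeleton carries over to the MTTF estimator (46): only the per-cycle quantities G and H change, and under importance sampling each of them gets multiplied by the path likelihood ratio.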
PROPOSITION 6.2 (Shahabuddin, 1994). The probability γ = P(τ_F < τ_0) may be
represented as a_0 ε^r + o(ε^r), where a_0 and r are positive constants.

In this, ε stands for the maximum failure rate of the components in the system
and it is exactly this parameter that reflects the highly reliable nature of the
system. For this reason, we call ε the rarity parameter of the system. Conse-
quently, Markovian systems can be classified into balanced systems, where all
components have failure rates of the same order of magnitude, and unbalanced
systems otherwise. See Nakayama (1994, 1995, 1996) for a detailed description of
the system model.
Thus for the estimation of the probability P(τ_F < τ_0), good importance sam-
pling schemes are those making the first term in the expression of the variance of
the importance sampling estimator close enough to ε^{2r}, the order of mag-
nitude of γ². In such cases, the corresponding method is said to possess the
bounded relative error property, since the relative error of the importance sam-
pling estimator will be bounded by a constant. The methods actually used in
practice, where the total failure (repair) probability at each state is increased
(decreased), are called failure biasing methods. They are distinguished from each
other by the way the new failure probability is allocated to individual transi-
tions. See the references for the different failure biasing methods.
Nakayama (1996) gave necessary and sufficient conditions for a failure biasing
method to have bounded relative error. These conditions are difficult to verify in
36 N. Balakrishnan, N. Limnios and C. Papadopoulos
practice, since one has to find the order of magnitude of a large number of paths
of the process.
Note also that the results obtained by any simulation scheme can be
improved by eliminating all transitions to state 0. This is a direct consequence
of the optimal change of measure (see Kuruganti and Strickland, 1997).
In what follows, we present two different algorithms for the simulation of semi-
Markov systems: the method of competing risks and the embedded Markov chain
method. We also give simulation results for a simple 3-state semi-Markov system.
The method of competing risks is based on the following proposition (see Oprişan,
1999).
Fig. 2. A 3-out-of-5 system.
[Figures: simulation results versus the number of iterations (1000-10000), comparing BFB(0.9), BFB(0.9) with no transitions to state 0, and the exact method.]
PROPOSITION 7.1 (Korolyuk and Turbin, 1982). For all i ∈ E, there exists a family
of independent random variables {τ_ik, k ∈ E}, taking values in ℝ_+, with the
distribution functions

A_ik(t) = 1 − exp( − ∫_0^t dQ_ik(u) / (1 − H_i(u)) ) ,  if t > 0,
        = 0 ,  otherwise ,   (49)
where

h_ij(u) = (1 − H_i(u)) / (1 − A_ij(u)) = E[I_ij | τ_ij = u] ,

and I_ij is the indicator function of the event {min_{k∈E} τ_ik = τ_ij}.
The algorithm for the realization of one sample trajectory of the process is the
following.

Algorithm 4.
Input data: The state space of the process, its initial law μ(·), and the distribution
functions A_ij.
1. Sample a r.v. X ~ μ and set t = 0, X(t, ω) = X(ω);
2. Set i = X(t, ω); generate the τ_ij's (j ∈ E) using the distribution functions A_ij;
3. Set τ = min_{j∈E} τ_ij, t = t + τ, and set X(t, ω) = arg min_{j∈E} τ_ij;
4. Repeat steps 2-3 for the number of jumps of the process needed, or until the
time t becomes greater than the observation period T.
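Algorithm 4 can be sketched as follows on a toy 3-state system of my own (exponential competing risks, chosen so that the exact availability is easy to compute; the state space and rates are assumptions, not the chapter's example).

```python
import random

random.seed(4)

# Toy 3-state system: from the up state 0, two competing failure modes.
RATES = {0: {1: 0.1, 2: 0.05},   # competing risks out of state 0
         1: {0: 1.0},            # repair back to 0
         2: {0: 0.5}}            # slower repair back to 0

def trajectory(horizon, state=0):
    """Steps 2-4 of Algorithm 4: draw all competing tau_ij, jump to the
    arg min, advance the clock; returns the total time spent in state 0."""
    t = time_in_0 = 0.0
    while t < horizon:
        taus = {j: random.expovariate(r) for j, r in RATES[state].items()}
        j_min = min(taus, key=taus.get)            # arg min_j tau_ij
        if state == 0:
            time_in_0 += min(taus[j_min], horizon - t)
        t += taus[j_min]
        state = j_min
    return time_in_0

T = 200_000.0
avail = trajectory(T) / T
# Exact long-run availability: mean up time / mean cycle length = 5/6
exact = (1 / 0.15) / (1 / 0.15 + (0.1 / 0.15) / 1.0 + (0.05 / 0.15) / 0.5)
print(avail, exact)
```

With general (non-exponential) distribution functions A_ij the only change is the sampler for each τ_ij; the argmin step is identical.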
[Fig. 4. A 3-state semi-Markov system (transition labels include Exp(0.001)).]
[Figures: simulation results for the 3-state semi-Markov system versus time (0-500).]
corresponding holding times to the states visited using the distribution functions
F_ij(t). This algorithm is similar to the algorithm used for the simulation of CTMC
and is given below.

Algorithm 5.
Input data: The state space of the process, its initial law μ(·), the transition
probabilities p_ij and the distribution functions F_ij.
1. Sample a r.v. X ~ μ and set t = 0, X(t, ω) = X(ω).
2. Set i = X(t, ω); using the transition matrix of the embedded Markov chain, find
the state j into which the process will jump.
3. Generate τ, the holding time in state i, using the distribution function F_ij(t).
4. Set t = t + τ, X(t, ω) = j.
5. Repeat steps 2-4 for the number of jumps of the process needed, or until the
time t becomes greater than the observation period T.
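The embedded-chain method can be sketched as follows (an illustrative 2-state semi-Markov system of my own; the transition probabilities and holding-time samplers F_ij are assumptions, chosen so the exact availability 10/11 is known).

```python
import random

random.seed(5)

P = {0: {1: 1.0}, 1: {0: 1.0}}                    # embedded Markov chain
F = {(0, 1): lambda: random.uniform(0.0, 20.0),   # sojourn in 0, mean 10
     (1, 0): lambda: random.expovariate(1.0)}     # sojourn in 1, mean 1

def next_state(i):
    """Step 2 of Algorithm 5: choose j from row i of the embedded matrix."""
    u, acc = random.random(), 0.0
    for j, p in P[i].items():
        acc += p
        if u < acc:
            return j
    return j

def availability(horizon, state=0):
    """Steps 1-5: alternate jumps of the embedded chain with holding
    times drawn from F_ij; return the fraction of time spent in state 0."""
    t = up = 0.0
    while t < horizon:
        j = next_state(state)
        hold = F[(state, j)]()
        if state == 0:
            up += min(hold, horizon - t)
        t += hold
        state = j
    return up / horizon

est = availability(300_000.0)
print(est, 10.0 / 11.0)
```

Note the contrast with Algorithm 4: here the destination is drawn first from p_ij, and only then is a single holding time sampled, which is convenient when the F_ij are easier to sample than the competing-risk distributions A_ij.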
Hitting times:
Mean time to failure: MTTF = 1119.7,
Mean time to repair: MTTR = 100,
Mean up time: MUT = 1117.1,
Mean down time: MDT = 100.
References
Çinlar, E. (1969). Markov renewal theory. Adv. Appl. Probab. 1, 123-187.
Crane, M. A. and D. L. Iglehart (1975). Simulating stable stochastic systems III, regenerative processes and discrete event simulation. Oper. Res. 23, 33-45.
Fishman, G. S. (1996). Monte Carlo: Concepts, Algorithms and Applications. Springer Series in Operations Research, Springer, New York.
Fox, B. L. and P. W. Glynn (1986). Discrete time conversion for simulating semi-Markov processes. Oper. Res. Lett. 5, 191-196.
Glynn, P. W. and D. L. Iglehart (1989). Importance sampling for stochastic simulations. Manage. Sci. 35, 1367-1392.
Goyal, A., P. Heidelberger and P. Shahabuddin (1987). Measure specific dynamic importance sampling for availability simulations. In 1987 Winter Simulation Conference Proceedings, pp. 351-357. IEEE Press.
Goyal, A., P. Shahabuddin, P. Heidelberger, V. F. Nicola and P. W. Glynn (1992). A unified framework for simulating Markovian models of highly reliable systems. IEEE Trans. Comput. 41(1), 36-51.
Goyal, A., S. S. Lavenberg and K. S. Trivedi (1987). Probabilistic modeling of computer system availability. Ann. Oper. Res. 8, 285-306.
Hammersley, J. M. and D. C. Handscomb (1964). Monte Carlo Methods. Methuen, London.
Heidelberger, P. (1995). Fast simulation of rare events in queueing and reliability models. ACM Trans. Modeling Comput. Simul. 5(1), 43-85.
Ionescu, D. C. and N. Limnios (Eds.) (1999). Statistical and Probabilistic Models in Reliability. Birkhäuser, Boston.
Janssen, J. and N. Limnios (Eds.) (1999). Semi-Markov Models and Applications. Kluwer Academic Publishers, Dordrecht, The Netherlands.
Kemeny, J. G. and J. L. Snell (1976). Finite Markov Chains. Springer, Berlin.
Korolyuk, V. S. and A. F. Turbin (1982). Markov Renewal Processes in Problems of Systems Reliability. Naukova Dumka, Kiev (in Russian).
Kuruganti, I. and S. G. Strickland (1995). Optimal importance sampling for Markovian systems. In Proceedings of the 1995 IEEE Systems, Man and Cybernetics Conference.
Kuruganti, I. and S. G. Strickland (1997). Optimal importance sampling for Markovian systems with applications to tandem queues. Math. Comput. Simul. 44(1), 61-80.
Limnios, N. (1996). Dependability analysis of semi-Markov systems. Reliab. Eng. Syst. Safety.
Limnios, N. and G. Oprişan (1997a). A general framework for reliability and performability analysis of semi-Markov systems. In Eighth International Conference on ASMDA, Anacapri (Napoli), Italy, June 1997.
Limnios, N. and G. Oprişan (1997b). A general framework for reliability and performability analysis of semi-Markov systems. Appl. Stochast. Models Data Anal. (to appear).
Limnios, N. and G. Oprişan (1997c). Semi-Markov processes with regard to their applications. World Energy Syst. J. 1(1), 64-75.
Limnios, N. and G. Oprişan (1999a). Invariance principle for an additive functional of a semi-Markov process. Rev. Roumaine Math. Pures Appl. 44(1), 75-83.
Limnios, N. and G. Oprişan (1999b). Semi-Markov Processes and Reliability. Birkhäuser (to appear).
Nakayama, M. K. (1994). A characterization of the simple failure biasing method for simulations of highly reliable Markovian systems. ACM Trans. Modeling Comput. Simul. 4(1), 52-88.
Nakayama, M. K. (1995). Asymptotics for likelihood ratio derivative estimators in simulations of highly reliable Markovian systems. Manag. Sci. 41, 524-554.
Nakayama, M. K. (1996). General conditions for bounded relative error in simulations of highly reliable Markovian systems. Adv. Appl. Prob. 28.
Neuts, M. F. (1981). Matrix-Geometric Solutions in Stochastic Models. The Johns Hopkins University Press, Baltimore, MD.
Oprişan, G. (1999). On the failure rate. In Statistical and Probabilistic Models in Reliability (Eds. Ionescu and Limnios).
Ouhbi, B. and N. Limnios (1996). Non-parametric estimation for semi-Markov kernels with application to reliability analysis. Appl. Stoch. Models Data Anal. 12, 209-220.
Ouhbi, B. and N. Limnios (1997). Estimation of kernels, availability and reliability functions of semi-Markov systems. In Statistical and Probabilistic Models in Reliability (Eds. Ionescu and Limnios).
Platis, A., N. Limnios and M. Le Du (1998). Hitting time in a finite non-homogeneous Markov chain with applications. Appl. Stoch. Models Data Anal. 14, 241-253.
Pyke, R. (1961a). Markov renewal processes: definitions and preliminary properties. Ann. Math. Stat. 32, 1231-1242.
Pyke, R. (1961b). Markov renewal processes with finitely many states. Ann. Math. Stat. 32, 1243-1259.
Pyke, R. and R. Schaufele (1964). Limit theorems for Markov renewal processes. Ann. Math. Stat. 35, 1746-1764.
Ripley, B. D. (1987). Stochastic Simulation. Wiley, New York.
Ross, S. M. (1990). A Course in Simulation. Maxwell Macmillan International Editions.
Shahabuddin, P. (1994). Importance sampling for the simulation of highly reliable Markovian systems. Manag. Sci. 40, 333-352.
Shahabuddin, P. and M. K. Nakayama (1993). Estimation of reliability and its derivatives for large time horizons in Markovian systems. In 1993 Winter Simulation Conference Proceedings, pp. 422-429. IEEE Press.
Strickland, S. G. (1993). Optimal importance sampling for quick simulation of highly reliable Markovian systems. In 1993 Winter Simulation Conference Proceedings, pp. 437-444. IEEE Press.
Taga, Y. (1963). On the limiting distributions in Markov renewal processes with finitely many states. Ann. Inst. Stat. Math. 15, 1-10.
N. Balakrishnan and C. R. Rao, eds., Handbook of Statistics, Vol. 20
© 2001 Elsevier Science B.V. All rights reserved.
Let N(t) denote the number of events that occur at or before time t. Such a
random variable is called a counting process. When the argument to N is an
interval, such as (a, b], then N(a, b] is defined to be the number of events that
occur in that interval. Thus, N(t) = N(0, t]. A counting process N(t) is said to be a
Poisson process if:
1. N(0) = 0.
2. For any a < b ≤ c < d the random variables N(a, b] and N(c, d] are indepen-
dent. This property is called the independent increments property.
3. There is a function λ, called the intensity function, such that

λ(t) = lim_{Δt→0} P(N(t, t + Δt] ≥ 1) / Δt .
44 A. P. Basu and S. E. Rigdon
The function Λ(t) = E[N(t)] = ∫_0^t λ(u) du, which gives the expected number of
events through time t, is called the mean function for the process. Clearly, Λ'(t) = λ(t).
The nonhomogeneous Poisson process having an intensity function of the form

λ(t) = (β/θ)(t/θ)^{β−1}, t > 0,

or

λ(t) = λβt^{β−1}, t > 0,
is called the power law process or the Weibull nonhomogeneous Poisson process.
This model has gone by many other names as well, including, most notably,
the Weibull process. When β < 1 the intensity is a decreasing function of t. In this
case, failures will become less frequent as the system ages; this is reliability im-
provement. When β > 1 the intensity is an increasing function of t, and in this case
failures will become more frequent as the system ages. This is called deterioration.
When β = 1, the intensity function is a constant. Thus, the homogeneous
Poisson process is a special case of the power law process. The power law process
can therefore be used to model systems that improve, deteriorate, or remain
steady over time, but it cannot be used to model systems that improve for some
intervals of t and deteriorate for other intervals.
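The power law process is easy to simulate by the standard time-transformation argument (a sketch of my own, not from the chapter): if S_1 < S_2 < ... are the points of a unit-rate Poisson process, then T_i = θ S_i^{1/β} are the points of the process with mean function Λ(t) = (t/θ)^β.

```python
import random

random.seed(6)

def plp_failures(beta, theta, horizon):
    """Event times of a power law process with intensity
    lambda(t) = (beta/theta) * (t/theta)**(beta - 1), via the time
    transformation T_i = theta * S_i**(1/beta) of a unit-rate process."""
    times, s = [], 0.0
    while True:
        s += random.expovariate(1.0)     # next unit-rate arrival S_i
        t = theta * s ** (1.0 / beta)
        if t > horizon:
            return times
        times.append(t)

beta, theta, T = 2.0, 100.0, 500.0
counts = [len(plp_failures(beta, theta, T)) for _ in range(2000)]
mean_n = sum(counts) / len(counts)
print(mean_n, (T / theta) ** beta)       # E[N(T)] = (T/theta)**beta = 25
```

With β = 2 the gaps between successive failures visibly shrink along each trajectory, the deteriorating regime described above; β < 1 would produce the improving regime.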
Some repairable systems have an intensity function that has the bathtub shape
as shown in Figure 1. For small values of t, that is, when the system is young, the
rate of occurrence of failures (ROCOF) is high and failures are frequent. After the
bugs are removed, or after some of the weakest components fail, the ROCOF will
be smaller, and it will remain at this level throughout its useful life. Then as the
system ages, the ROCOF begins to increase. At this stage, the system is deteriorating.
The two functions in Figure 1 look nearly identical, but there is an important
difference in their interpretations. The bathtub intensity function indicates that
the system will initially experience reliability growth. A few early failures will be
followed by the useful life when failures occur at roughly a constant rate.
The Weibull nonhomogeneous Poisson process 45
[Figure 1: two bathtub-shaped curves, with phases labeled "Early failures", "Constant", "Deterioration" and "Burn-in", "Useful life", "Wearout".]
Eventually, as the system ages, the failures become more frequent. On the other
hand, the bathtub hazard function indicates that there is a high chance that the
system will fail (for the first and only time) early in its life. A few of the systems
have serious defects that will cause early failures. Eventually, a working system
will begin to wear out and failure will ensue. The hazard function is the limit
of a conditional probability. For a system that is wearing out, the probability of
failure in (xo, xo + Ax] conditioned on survival past time x0 will be smaller than the
probability of failure in (xl,xl + Ax] conditioned on survival past time xl, pro-
vided x0 < xl. There are therefore two bathtub curves: a bathtub intensity func-
tion for repairable systems and a bathtub hazard function for non-repairable
systems. The bathtub hazard function expresses conditional probabilities of the
one and only failure of the system. The bathtub intensity function indicates that
the system will experience many failures early in its life, which will be followed by
a time when the ROCOF is constant; finally, as the system ages, the failures will
become more frequent.
Although the power law process cannot model the bathtub-shaped intensity
function, the bathtub curve concept helps to illustrate the difference between the
interpretations of the intensity and hazard functions. In particular, it helps to
illustrate the difference between the power law process (i.e., the Weibull non-
homogeneous Poisson process) and the Weibull distribution. The power law
process is the nonhomogeneous Poisson process with intensity function
λ(t) = (β/θ)(t/θ)^{β−1}, and the Weibull distribution is that distribution having a
hazard function of the form h(x) = (β/θ)(x/θ)^{β−1}.
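The distinction can also be checked numerically (my own sketch, using a standard NHPP fact): the first failure time T_1 of the power law process satisfies P(T_1 > t) = exp(−Λ(t)) = exp(−(t/θ)^β), so T_1 alone has exactly a Weibull distribution, even though the later inter-failure gaps do not.

```python
import math
import random

random.seed(7)

BETA, THETA = 1.5, 10.0   # assumed power-law parameters

def first_failure():
    """First event time of the PLP with Lambda(t) = (t/THETA)**BETA:
    since P(T1 > t) = exp(-Lambda(t)), T1 = THETA * E**(1/BETA), E ~ Exp(1)."""
    return THETA * random.expovariate(1.0) ** (1.0 / BETA)

n, t0 = 20_000, 8.0
frac = sum(1 for _ in range(n) if first_failure() > t0) / n
weibull_sf = math.exp(-((t0 / THETA) ** BETA))   # Weibull survival at t0
print(frac, weibull_sf)
```

So fitting a Weibull distribution to all inter-failure times of a repairable system conflates the two models; only the time to the first failure is Weibull under the power law process.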
The power law process can be used to model the occurrence of events in time. The
most common application has been to model the failures of a repairable system.
We assume that a failed unit is immediately repaired, or that the repair time is not
counted in the operating time. If we also assume that a failed unit is brought back
to exactly the same condition as it was just before the failure, then it is clear that
the nonhomogeneous Poisson process is the appropriate model for the failure