
Stochastic Process:

The theory of stochastic processes deals with systems that develop in time or space in
accordance with probabilistic laws. From the mathematical point of view, a stochastic
process is defined to be an indexed collection of random variables {Xt}, where the index t runs
through a given set T. T is generally taken to be a set of non-negative integers. Here Xt represents
a measurable characteristic of interest.
For example, the stochastic process X1, X2, … can represent the collection of weekly or
monthly inventory levels of a given product, or the collection of weekly or monthly demands
for a product.
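The inventory/demand example can be sketched in code as an indexed collection of random variables. This is an illustrative simulation only: the uniform demand distribution, the range 0–20 units, and the 12-week index set are assumptions made for the example, not part of the text.

```python
import random

random.seed(0)

# Illustrative sketch: the process {Xt} of weekly demands for a product,
# modelled here as independent uniform draws between 0 and 20 units.
# The index set T = {1, 2, ..., 12} plays the role of time (weeks).
T = range(1, 13)
X = {t: random.randint(0, 20) for t in T}   # one realization (a sample path)

for t in T:
    print(f"week {t}: demand X{t} = {X[t]}")
```

Each run of the program produces one realization of the process; re-running with a different seed gives a different sample path over the same index set T.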

Definition: A stochastic process is a family of random variables Xt parameterized by t ∈ T, where
T ⊆ ℝ. When T = {1, 2, …} we say that Xt is a stochastic process in discrete time (i.e. a sequence
of random variables); when T is an interval in ℝ (typically T = [0, ∞)), we say Xt is a
stochastic process in continuous time.
For every ω the function t ∈ T → Xt(ω) = X(t, ω) is called a path (or sample path)
of Xt (the underlying idea being that of a family of random variables depending on time).
Families of random variables which are functions of, say, time or space or both, are known
as stochastic processes (or random processes or random functions).

For example:
a) Consider a simple experiment like throwing a true die. i) Suppose that Xn is the outcome of
the nth throw, n ≥ 1; then {Xn, n ≥ 1} is a family of random variables such that for each distinct
value of n (1, 2, …) one gets a distinct random variable Xn. Thus {Xn, n ≥ 1} constitutes a
stochastic process, known as a Bernoulli process. ii) Suppose that Xn is the number of sixes in
the first n throws. For each distinct value of n = 1, 2, … we get a distinct binomial variable Xn;
the family {Xn, n ≥ 1} is again a stochastic process.
b) Consider a random event occurring in time, such as the number of telephone calls received at
a switchboard. Suppose that X(t) is the random variable which represents the number of
incoming calls in an interval (0, t) of duration t units. The number of calls within a fixed
interval of specified duration, say one unit of time, is a random variable X(t), and the family
{X(t), t ∈ T} constitutes a stochastic process (T = [0, ∞)).

State:
The different values of a stochastic process are the states of that process. The state is a function
of the parameter time (or space). For example, consider the stochastic process {Xt}, t = 1, 2, …,
where Xt denotes the number of heads in the tth toss of a fair coin. Then Xt can take the value
zero or one. Thus, zero and one are the states of this stochastic process.

State Space:
The set of values that a stochastic process can assume or the set of all states of a stochastic process
is called the state space of that stochastic process. In the above example, the state space is X =
{0,1}.

Parameter Space:
The set of values that the parameter t of a stochastic process can assume is called the parameter
space of that stochastic process. In the above example, t can assume the values 1, 2, … Thus, the
parameter space is given by T = {t : t ∈ ℕ}.

Markov Property:
Let us consider a sequence of trials, each with several possible outcomes E1, E2, …, such that the
outcome of any trial depends only on the outcome of the preceding trial. This is known as the
Markov property.
In fact, the Markov property states that the conditional probability of any future event, given
any past events and the present event, is independent of the past events and depends only on the
present event of the process.
Thus, a stochastic process {Xt} is said to have the Markov property if
P{Xt+1 = j | X0 = k0, X1 = k1, …, Xt = i} = P{Xt+1 = j | Xt = i},  t = 0, 1, 2, …

Characteristics of Markov Property:
The Markov property facilitates the unique determination of the future behaviour of the process
once the present state of the system is given. This is known as the characteristic of the
Markov property.

Markov Process:
A stochastic process that possesses the Markov property is known as a Markov process. A
Markov process with a discrete parameter space as well as a discrete state space is called a
Markov chain.

Markov Chain:
A stochastic process {Xt}, t = 0, 1, 2, … is said to be a finite-state Markov chain if it has the
following properties:
1. It has a finite number of states,
2. It possesses the Markov property,
3. It has a set of initial probabilities P(X0 = i).
That is, for a Markov chain {Xt}, t = 0, 1, 2, …
P{Xt+1 = j | X0 = k0, X1 = k1, …, Xt = i} = P{Xt+1 = j | Xt = i}
For example, let Xt denote the number of defective items in an acceptance sampling scheme
with t, the number of items inspected. If the sample size allowed is n, then Xt can take values 0, 1,
2, …, n. Also, the state at any trial depends only on the state at the preceding trial but not on the
previous trials. Hence {Xt} constitutes a Markov chain.
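A Markov chain can be sketched by simulation: each step samples the next state from the present state alone, which is exactly the Markov property. The two states {0, 1}, the transition probabilities, and the initial probabilities below are all illustrative assumptions.

```python
import random

random.seed(3)

# Hypothetical two-state Markov chain. Row i of P gives the probabilities
# P(Xt+1 = j | Xt = i); "initial" gives P(X0 = i). All values are made up.
P = [[0.7, 0.3],
     [0.4, 0.6]]
initial = [0.5, 0.5]

def step(i):
    """Sample the next state using only the present state i (Markov property)."""
    return 0 if random.random() < P[i][0] else 1

state = 0 if random.random() < initial[0] else 1
path = [state]
for _ in range(20):
    state = step(state)
    path.append(state)

print(path)
```

Note that `step` takes only the present state as an argument; the earlier history of the path plays no role, mirroring the defining equation above.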

Transition Probabilities:
Suppose the outcome of a trial is Ei; then we say that the system is in state Ei. If the outcome of
the next trial is Ej, then we say there has been a transition of the type Ei → Ej.
The probability of such a transition is denoted by Pij.
Formally, let {Xt}, t = 1, 2, … be a Markov chain. Then the conditional probability
P{Xt+1 = j | Xt = i} that the process is in state j at time (t+1), given that it was in state i at
time t, is called the transition probability and is denoted by Pij.
Obviously, Pij must satisfy the following properties:
∑j Pij = 1,  Pij ≥ 0,  i, j = 1, 2, …, m
where m is the number of states, which may be finite or infinite.

Stochastic Matrix:
Because of the double subscripts, the transition probabilities Pij can be arranged
in a matrix form as follows:

        | P11  P12  …  P1m |
    P = | P21  P22  …  P2m |
        |  ⋮    ⋮        ⋮  |
        | Pm1  Pm2  …  Pmm |

which has the following properties:
1. It is a square matrix,
2. Pij ≥ 0 for all i, j = 1, 2, …, m,
3. ∑j Pij = 1 for all i = 1, 2, …, m.

The above matrix is called the transition probability matrix (t.p.m.)
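The three t.p.m. properties can be checked mechanically. The 3×3 matrix below is an illustrative example with made-up entries; the helper function simply tests the properties listed above.

```python
# Illustrative 3x3 transition probability matrix (entries are made up).
P = [
    [0.2, 0.5, 0.3],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
]

def is_stochastic(matrix, tol=1e-9):
    square = all(len(row) == len(matrix) for row in matrix)    # 1. square matrix
    nonneg = all(p >= 0 for row in matrix for p in row)        # 2. Pij >= 0
    rows_1 = all(abs(sum(row) - 1.0) < tol for row in matrix)  # 3. each row sums to 1
    return square and nonneg and rows_1

print(is_stochastic(P))
```

The row-sum tolerance accounts for floating-point rounding; exact equality to 1.0 is too strict for probabilities stored as floats.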
