
Example: Combining 3 LCGs

Advantages of CLCG

Modeling & Simulation: Random Numbers


Course: Modeling & Simulation - EEN14253

Dr. Jagat Jyoti Rath

Department of Electrical Engineering, MNNIT, Allahabad

Dr. Jagat Jyoti Rath Modeling & Simulation: Random Numbers 1 / 49




What is a Random Phenomenon?


Randomness
A random phenomenon is one that can yield different outcomes in repeated experiments, even if we use
exactly the same conditions in each experiment.

Example: If we flip a coin, we know in advance that it will either come up heads or tails, but we cannot
predict before any given experiment which of these outcomes will occur.

Counter Example: Suppose I throw a ball many times at exactly the same angle and speed and under
exactly the same conditions. Every time we run this experiment, the ball will land in exactly the same
place: we can predict exactly what is going to happen. This is a deterministic system.

What is a Random Phenomenon?

Some other examples of randomness are the number of boys/girls in a classroom, the number of typos on a page,
the heights of students in a class, etc.




Notion of Probability
Probability
Probability theory is the mathematical study of random phenomena.
The outcome of a random event cannot be determined before it occurs, but it may be any one of
several possible outcomes.
The analysis of events governed by probability is called statistics.

Mathematically, the probability that an event will occur is expressed as a number between 0 and 1.

Let us consider the following case: assume the probability of event A is represented by P(A). Then find
the value of P(A) for each of the following scenarios:
Event A will most probably not occur
Small chance of event A occurring
50-50 chance of event A occurring
High chance of event A happening
Event A will most definitely happen.
Note that the sum of probabilities over all possible outcomes is equal to one. For example, if an experiment
can have three possible outcomes (A, B, and C), then P(A) + P(B) + P(C) = 1.

Classical Definition of Probability


Let the sample space (denoted by Ω) be the set of all possible distinct outcomes of an experiment. The
probability of some event A occurring is

P(A) = (number of ways the event can occur) / (number of outcomes in Ω)

Ex 1: A fair die is rolled once. What is the probability of getting a '6'?

Here, Ω = {1, 2, 3, 4, 5, 6} and A = {6}. Then we can get n = 6, nA = 1. Thus, the probability is

P(A) = 1/6

Ex 2: A fair coin is tossed twice. What is the probability of getting two heads? Here,
Ω = {HH, TT, HT, TH} and A = {HH}. Then we can get n = 4, nA = 1. Thus, the probability is

P(A) = 1/4
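These classical probabilities can be checked empirically by simulation. The sketch below is illustrative only (the helper name `estimate_probability` is chosen here, not from the slides); it estimates both examples by repeated random trials:

```python
import random

def estimate_probability(event, experiment, trials=100_000):
    """Estimate P(event) as the fraction of trials in which the event occurs."""
    hits = sum(event(experiment()) for _ in range(trials))
    return hits / trials

random.seed(0)

# Ex 1: P(die shows 6) should be close to 1/6 ~ 0.1667
p_six = estimate_probability(lambda w: w == 6, lambda: random.randint(1, 6))

# Ex 2: P(two heads in two coin tosses) should be close to 1/4
p_hh = estimate_probability(
    lambda w: w == ("H", "H"),
    lambda: (random.choice("HT"), random.choice("HT")),
)

print(p_six, p_hh)
```

The estimates converge to the classical values as the number of trials grows.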


Concepts for Probability


To understand probability mathematically, let's first understand a few concepts from probability
theory.

Experiment & Random Experiment


An experiment or trial is any procedure that can be infinitely repeated and has a well-defined set of possible
outcomes. An experiment is said to be random if it has more than one possible outcome, and deterministic
if it has only one.

Sample Space
The sample space Ω is the set of all possible outcomes of a random experiment.

Ex 1: Consider the random experiment of waiting for a bus that will arrive at a random time in the future.
In this case, the outcome of the experiment can be any real number t ≥ 0 (t = 0 means the bus comes
immediately, t = 1.5 means the bus comes after 1.5 hours, etc.). We can therefore define the sample space as

Ω = [0, +∞)

Ex 2: Roll a six-sided die to see whether an odd or even side appears. Then the sample space is

Ω = {1, 2, 3, 4, 5, 6}

i.e. the set of six sides of the die.




Concepts for Probability


Events
An event is a subset A of the sample space Ω. Informally, an event is a statement for which we can
determine whether it is true or false after the experiment has been performed.

Ex 1: Consider the event: "The sum of the numbers on the two dice is 7"


This event occurs in a given experiment if and only if the outcome of the experiment happens to lie in the
following subset of all possible outcomes:

{ (1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1) } ⊂ Ω

We cannot predict in advance whether this event will occur, but we can determine whether it has occurred
once the outcome of the experiment is known.

Ex 2: Drawing 4 cards from a deck: events include all four cards being spades, the sum of the 4 cards
taking a given value (assuming face cards have a value of zero), the cards forming a sequence of integers,
or a hand with a 2, 3, 4 and 5. Can you identify some more events?
Let us now see the different types of events:

Exhaustive Events
A set of events is said to be exhaustive if it includes all the possible events.

Ex: In tossing a coin, the outcome can be either Head or Tail and there is no other possible outcome. So,
the set of events {H, T } is exhaustive.

Concepts for Probability


Mutually Exclusive Events
Two events A and B are said to be mutually exclusive if they cannot occur together, i.e. the occurrence of
one event prevents the other from happening.

Ex: In tossing a coin, both head and tail cannot happen simultaneously.

Equally Likely Events


If one of the events cannot be expected to happen in preference to another, then such events are said to be
Equally Likely Events.(Or) Each outcome of the random experiment has an equal chance of occurring.

Ex: In tossing a coin, the coming of the head or the tail is equally likely.

Independent Event
Two events are said to be independent, if happening or failure of one does not affect the happening or
failure of the other. Otherwise, the events are said to be dependent.

Ex: Flip a coin twice. The outcome of the first flip (say, Head) has no effect on the outcome of the second
flip (Head or Tail): the two flips are independent events.

Non-mutually exclusive event


Non-mutually exclusive events are events that can happen at the same time.

Ex: driving and listening to the radio, or rolling a number on a die that is both even and prime (the number 2).

Concepts for Probability

Events in Probabilistic Sense


If two events are mutually exclusive then the probability of either occurring is:

P(A or B) = P(A ∪ B) = P(A) + P(B)

If two events are independent then their joint probability of occurring is:

P(A and B) = P(A ∩ B) = P(A)P(B)

If two events are Non-mutually exclusive then their probability of occurring is:

P(A or B) = P(A) + P(B) - P(A and B)

Probability measure
A probability measure is an assignment of a number P(A) to every event A such that the following rules are
satisfied:
0 ≤ P(A) ≤ 1 (probability is a "degree of confidence").
P(Ω) = 1 (we are certain that something will happen).
P(A ∪ B) = P(A) + P(B) whenever A and B are mutually exclusive (additivity).
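The addition rules above can be verified empirically. The following illustrative sketch (not from the slides) checks the rule for non-mutually exclusive events using the even and prime faces of a die:

```python
import random

random.seed(1)
rolls = [random.randint(1, 6) for _ in range(100_000)]

def prob(event):
    """Empirical probability of an event, given as a set of die faces."""
    return sum(r in event for r in rolls) / len(rolls)

even, prime = {2, 4, 6}, {2, 3, 5}

# Non-mutually exclusive: P(even or prime) = P(even) + P(prime) - P(even and prime)
lhs = prob(even | prime)
rhs = prob(even) + prob(prime) - prob(even & prime)
print(lhs, rhs)  # the two sides agree on the same sample
```

On a single sample the identity holds exactly, because it is an inclusion-exclusion identity on counts.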


Random Variable
Definition
A random variable taking values in D (set of values) is a function X that assigns a value X(w) ∈ D to every
possible outcome w ∈ Ω (sample space).

A random variable is thus a function that maps outcomes of a random experiment to real numbers.
A (real-valued) random variable, often denoted by X (or some other capital letter), is a function
mapping a probability space (Ω, P) into the real line R.

Ex: We flip three coins. A good sample space for this problem is
Ω = {HHH, THH, HTH, HHT, TTH, THT, HTT, TTT }
Let X be the total number of heads that are flipped in this experiment. Then X is a random variable that can
be defined explicitly as follows:
X (HHH) = 3, X (THH) = X (HTH) = X (HHT ) = 2, X (TTH) = X (THT ) = X (HTT ) = 1, X (TTT ) = 0.
In principle, random variables can take values in an arbitrary set D.
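The three-coin example above can be reproduced by enumerating the sample space. A minimal illustrative sketch:

```python
from itertools import product

# Sample space for three coin flips, and the random variable X = number of heads
omega = ["".join(flips) for flips in product("HT", repeat=3)]
X = {w: w.count("H") for w in omega}

# Distribution of X under equally likely outcomes: P{X = i} = |{w : X(w) = i}| / |omega|
dist = {i: sum(1 for w in omega if X[w] == i) / len(omega) for i in range(4)}
print(dist)  # {0: 0.125, 1: 0.375, 2: 0.375, 3: 0.125}
```

This matches X(HHH) = 3, X(THH) = X(HTH) = X(HHT) = 2, and so on, with each of the 8 outcomes having probability 1/8.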

Random Variables
Random Data
A random source of data produces data characterized by the unknown true value θ0 of the parameter θ to
be estimated; the data is also a function of a random variable.

There are two types of random variables, discrete and continuous.

Discrete Random Variable


A random variable X : Ω → D is called discrete if it takes values in a finite or countable set D.

Continuous Random Variable


A random variable X : Ω → D is called continuous if it takes values in an uncountable set D, e.g. an interval of the real line.

We can also have a mixed random variable.

Mixed Random Variable


A random variable X : Ω → D is called mixed if it has jump discontinuity at countable number of points
and increases continuously at least in one interval of D.

Ex: The suit of a randomly drawn card is discrete (it can only take four possible values), and the number of
customers that arrive at a store in a given day is also discrete. However, the height of a random person can
take any value in an interval, which is not a countable set; this is therefore not a discrete random variable.
The latter type of random variable is called continuous.


Characterizing a Random Variable: Mean


Recall that the probability P{X = i} has a natural interpretation as the fraction of repeated experiments in
which the random variable X takes the value i. But
this does not necessarily give an immediate indication of how large X typically is,
nor of what the average value of X is over many repeated experiments.

Expectation or Mean
Let X : Ω → D be a discrete random variable, and let φ : D → R be a function. The expectation of φ(X)
is defined as

E[φ(X)] = Σ_{i ∈ D} φ(i) P{X = i}

If X has a numerical value, then E[X] can be defined by choosing φ(i) = i.

But how to interpret the significance of ’mean/expectation’?


Consider, for example, the number of TVs in a household


Characterizing a Random Variable: Mean


If we define X as the random variable giving the number of TVs in a household, then the mean of X can be
calculated as:

E[X] = 0 × 0.012 + 1 × 0.319 + . . . + 5 × 0.028 = 2.084

What does it mean when we say E[X] = 2.084 in the previous example? Do we “expect” to see any
household to have 2.084 TVs?

The correct answer is that the expected value should be interpreted as a long-run average. If x1, x2, . . . , xn
are n realizations of X, then we expect the sample average to converge to the expectation:

(x1 + x2 + · · · + xn)/n → E[X] as n → ∞

This is called the law of large numbers.


Thus, the average number of TVs in a large number of randomly-selected households will approach the
expected value 2.084.
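This long-run-average interpretation is easy to demonstrate by simulation. An illustrative sketch, using a fair die, whose expectation is 3.5:

```python
import random

random.seed(42)

# Law of large numbers: the average of many die rolls approaches E[X] = 3.5
n = 200_000
total = 0
for _ in range(n):
    total += random.randint(1, 6)
sample_mean = total / n
print(sample_mean)  # close to 3.5
```

No single roll ever equals 3.5, just as no household has 2.084 TVs; only the long-run average does.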

Characterizing a Random Variable: Variance


Variance
A natural measure of "how random" X is quantifies the difference between X and its expectation E[X];
it is called the variance of X, defined as

Var[X] := E[(X − E[X])²]

That is, Var[X] is the mean squared difference between X and E[X].

The larger the variance Var[X], the "more random" the random variable X.

Properties
Var[X] ≥ 0

Var[X] = 0 if and only if X is non-random

Var[X] = E[X²] − E[X]²

Var[X + Y] = Var[X] + Var[Y] if X, Y are independent.

Var[aX] = a² Var[X], for a ∈ R
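These properties can be checked numerically. The following illustrative sketch (variable names chosen here) verifies the sum and scaling properties with simulated die rolls:

```python
import random

random.seed(7)
n = 100_000
xs = [random.randint(1, 6) for _ in range(n)]
ys = [random.randint(1, 6) for _ in range(n)]  # generated independently of xs

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / len(v)

# Var[X + Y] = Var[X] + Var[Y] for independent X, Y
v_sum = var([x + y for x, y in zip(xs, ys)])
print(v_sum, var(xs) + var(ys))  # both close to 2 * 35/12

# Var[aX] = a^2 Var[X], here with a = 3
print(var([3 * x for x in xs]), 9 * var(xs))
```

The scaling identity holds exactly on any sample; the sum identity holds only approximately, since the empirical covariance of independent samples is small but nonzero.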


Characterizing a Random Variable: Variance

Consider the previous example of TVs in a household. We can compute the variance of X as:

Var[X] = (0 − 2.084)² × 0.012 + . . . + (5 − 2.084)² × 0.028 = 1.107

The variance can be interpreted as the long-run average of squared deviations from the mean.

Thus the variance Var[X] is a measure of the extent of variability in successive realizations of X.


Characterizing a Random Variable: Illustrative Examples

Ex: A fair die is rolled once and X is the resulting number. Find the mean and variance of X.
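A worked sketch of this exercise: E[X] = (1 + 2 + · · · + 6)/6 = 7/2, and Var[X] = E[(X − 7/2)²] = 35/12 ≈ 2.92. The computation with exact fractions:

```python
from fractions import Fraction

# Fair die: P{X = i} = 1/6 for i = 1..6
faces = range(1, 7)
p = Fraction(1, 6)

mean = sum(i * p for i in faces)               # E[X]
var = sum((i - mean) ** 2 * p for i in faces)  # Var[X] = E[(X - E[X])^2]

print(mean, var)  # 7/2 and 35/12
```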


Distribution of a Discrete Random Variable


A random variable is often depicted by specifying the probabilities P{X = i} of each of its possible values
i ∈ D. This is then the distribution of the random variable.

Distribution
Let X : Ω → D be a discrete random variable. The collection (P{X = i})i∈D is called the distribution of X.
Note that it is perfectly possible, and very common, for different random variables to have the same
distribution.

Ex: Suppose we throw two dice. Let X be the outcome of the first die and Y the outcome of the second die.
Then X and Y are different random variables, but X and Y have the same distribution, as each die is
equally likely to yield every outcome: P{X = i} = P{Y = i} = 1/6 for 1 ≤ i ≤ 6.

For two random variables, X is independent of Y if conditioning on the outcome of Y does not affect the
distribution of X i.e.

Independent variables
If X : Ω → DX and Y : Ω → DY are two discrete random variables, then X and Y are independent iff

P{X = x, Y = y} = P{X = x}P{Y = y}; ∀x ∈ DX , y ∈ DY

or, equivalently, P{X = x | Y = y} = P{X = x}.
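The independence condition can be checked empirically for the two-dice example. An illustrative sketch (a single pair (x, y) is tested here for brevity):

```python
import random

random.seed(3)
n = 200_000
pairs = [(random.randint(1, 6), random.randint(1, 6)) for _ in range(n)]

# Empirically check P{X = x, Y = y} ~ P{X = x} P{Y = y} for one pair (x, y)
x0, y0 = 2, 5
p_joint = sum(1 for x, y in pairs if x == x0 and y == y0) / n
p_x = sum(1 for x, _ in pairs if x == x0) / n
p_y = sum(1 for _, y in pairs if y == y0) / n
print(p_joint, p_x * p_y)  # both close to 1/36
```

A full check would repeat this for all 36 pairs.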



Random Variable
Uniform distribution
Let X be a random variable that is equally likely to take any value in [0, 1] i.e. P{X ∈ A} = length(A) for A
⊂ [0, 1]. We then say that X is uniformly distributed in the interval [0, 1].

Cumulative Distribution Function


The function FX (x) = P{X ≤ x} is called the cumulative distribution function (CDF) of the random variable
X. It is a function giving the probability that the random variable X is less than or equal to x, for every
value x.

The CDF (or the Probability Distribution Function) has the following properties:
0 ≤ FX (x) ≤ 1
FX (x) is a non-decreasing function of X.
FX (−∞) = 0, FX (∞) = 1
P(X > x) = P(x < X < ∞) = 1 − FX (x)
Ex: Consider the random variable X defined by the following CDF:

FX (x) = 0 for x < −2;  FX (x) = x/8 + 1/4 for −2 ≤ x < 0;  FX (x) = 1 for x ≥ 0

Find P{X = 0}, P{X ≤ 0}, P{X > 2}.
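A worked sketch of this exercise: P{X ≤ 0} = FX(0) = 1; P{X = 0} is the jump of the CDF at 0, namely 1 − 1/4 = 3/4; and P{X > 2} = 1 − FX(2) = 0. This can be checked numerically:

```python
def F(x):
    """CDF from the example: 0 below -2, x/8 + 1/4 on [-2, 0), and 1 from 0 on."""
    if x < -2:
        return 0.0
    if x < 0:
        return x / 8 + 0.25
    return 1.0

eps = 1e-9
p_le_0 = F(0)               # P{X <= 0} = F(0) = 1
p_eq_0 = F(0) - F(0 - eps)  # jump of the CDF at 0: 1 - 1/4 = 3/4
p_gt_2 = 1 - F(2)           # P{X > 2} = 1 - F(2) = 0
print(p_le_0, p_eq_0, p_gt_2)
```

The jump at x = 0 shows that this is a mixed random variable: continuous on [−2, 0) but with an atom at 0.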


Probability Density Function


PDF
The probability density function(PDF) of a continuous random variable is a function which can be
integrated to obtain the probability that the random variable takes a value in a given interval.

Mathematically, the PDF f(x) of a continuous random variable X is the derivative of the cumulative
distribution function FX (x), i.e.

f(x) = (d/dx) FX (x)
If a function f(x) is a probability density function, then it must obey two conditions:
The total probability over all possible values of the continuous random variable X is 1, i.e.
∫ f(x) dx = 1.
The probability density function can never be negative: since the CDF is always a non-decreasing
function of x, its derivative f(x), i.e. the PDF, can never be negative.

Therefore, the area under the density in an interval corresponds to the probability that the random variable
will be in that interval.

Special Distribution: Gaussian/Normal


Normal/Gaussian Random Variable
A random variable X is called a normal/Gaussian random variable if its distribution (i.e. CDF) can be
given as:

FX (x) = ∫ from −∞ to x of (1/√(2πσ²)) e^(−t²/(2σ²)) dt

where the mean of the distribution is zero and the variance is σ².

A graphical illustration of the Gaussian random variable CDF is given in the accompanying figure.


Special Distribution: Gaussian/Normal

Subsequently, the probability density function of the Gaussian random variable can be computed as:

fX (x) = (1/(σ√(2π))) e^(−0.5((x−μ)/σ)²); −∞ < x < ∞

where μ ∈ R and σ > 0 are real numbers. A graphical illustration of the Gaussian random variable PDF is
given in the accompanying figure.
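Samples drawn from a Gaussian distribution should reproduce the parameters μ and σ². An illustrative sketch using Python's `random.gauss` (the particular values μ = 2.0 and σ = 0.5 are chosen here for illustration):

```python
import random

random.seed(11)
mu, sigma = 2.0, 0.5
n = 100_000
samples = [random.gauss(mu, sigma) for _ in range(n)]

# Sample mean and sample variance should approach mu and sigma^2
m = sum(samples) / n
v = sum((s - m) ** 2 for s in samples) / n
print(m, v)  # close to 2.0 and 0.25
```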


Special Distribution: Gaussian/Normal


Applications
The Gaussian distribution is one of the most significant probability distributions.
The Gaussian distribution is widely used to model natural and man-made phenomena.
Specifically, when a random variable is the result of the addition of a large number of independent
random variables, it can be modeled as a normal random variable (the central limit theorem).


Random Processes: Introduction


Random Processes
A random process is a time-varying function that assigns the outcome of a random experiment to each time
instant: X(t). In other words, a random process can be viewed as a collection of an infinite number of
random variables.

Let P = {Ω, Pr(·)} be a probability-space model for an experiment, and let {x(t, ω) : ω ∈ Ω} be an
assignment of deterministic waveforms (functions of t) to the sample points ω. Then this experiment
creates a random process X(t, ω) as illustrated in the accompanying figure.


Random Processes: Characterization


A complete statistical characterization of a random process x(t) is defined to be the information sufficient to
deduce the probability density for any random vector x.

Mean function
The mean function, 𝜇X (t), of a random process, X(t), is a deterministic function of time whose value at an
arbitrary specific time, t = t1, is the mean value of the random variable X(t1). Thus, we can write:

𝜇X (t) = E[X (t) ]

Auto-correlation
For a random process X(t), the auto-correlation function or, simply, the correlation function, RXX (t1 , t2 ), is
defined as
RXX (t1 , t2 ) = E[X (t1 )X (t2 ) ]

Auto-covariance
For a random process X(t), the auto-covariance function or, simply, the covariance function, KXX (t1 , t2 ), is
defined as

KXX (t1 , t2 ) = Cov[X (t1 ), X (t2 ) ] = RXX (t1 , t2 ) − μX (t1 ) μX (t2 )


Random Processes: Characterization

Ex: Let X(t), t ∈ [0, ∞) be defined as X(t) = A + Bt for all t ∈ [0, ∞), where A and B are independent
normal random variables N(1, 1). Find the correlation function and covariance function of this random
process.
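A worked sketch: since E[A] = E[B] = 1, E[A²] = E[B²] = Var + mean² = 2, and E[AB] = E[A]E[B] = 1 by independence, we get μX(t) = 1 + t, RXX(t1, t2) = E[A²] + E[AB](t1 + t2) + E[B²] t1 t2 = 2 + (t1 + t2) + 2 t1 t2, and KXX(t1, t2) = RXX(t1, t2) − μX(t1) μX(t2) = 1 + t1 t2. A Monte Carlo check (illustrative, with t1 = 1 and t2 = 2):

```python
import random

random.seed(5)
n = 200_000
t1, t2 = 1.0, 2.0

# X(t) = A + B t with A, B independent N(1, 1); analytic values derived above:
# R_XX(t1, t2) = 2 + (t1 + t2) + 2 t1 t2 = 9,  K_XX(t1, t2) = 1 + t1 t2 = 3
acc_r = 0.0
for _ in range(n):
    a = random.gauss(1, 1)
    b = random.gauss(1, 1)
    acc_r += (a + b * t1) * (a + b * t2)
r = acc_r / n
k = r - (1 + t1) * (1 + t2)
print(r, k)  # close to 9 and 3
```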



Multiple Random Processes

Consider, for example, investing in the stock market, where one considers:

several different stocks
how each stock is performing
whether the stocks are correlated or not

This brings us to multiple random processes.

Characterizing Multiple Random Processes: Cross-correlation and Cross-Covariance


For two random processes X(t) and Y(t), the cross-correlation RXY and the cross-covariance KXY are
defined as:
RXY (t1 , t2 ) = E[X (t1 )Y (t2 ) ]
KXY (t1 , t2 ) = RXY (t1 , t2 ) − μX (t1 ) μY (t2 )

How to use this information?


Suppose that X(t) is the price of oil (per gallon) and Y(t) is the price of gasoline (per gallon) at time t.
Since gasoline is produced from oil, as oil prices increase, the gasoline prices tend to increase, too. Thus,
we conclude that X(t) and Y(t) should be positively correlated (at least for the same t, i.e., RXY (t, t) > 0).



Special Random Processes


Gaussian random process (GRP)
A random process x(t) is a Gaussian random process if, for every N and every choice of sample times
t1 , . . . , tN , the random vector x = [x(t1 ), . . . , x(tN )]T obtained by sampling this process is jointly Gaussian.

We can completely characterize a Gaussian random process by its mean and covariance, i.e.
𝜇x (t) ≡ E[x(t) ]; −∞ < t < ∞
Kxx (t, s) ≡ E[Δx(t)Δx(s) ]; −∞ < t, s < ∞
where Δx(t) = x(t) − 𝜇x (t) and Kxx is the covariance.

Let x(t) be a GRP with mean function 𝜇x (t) and covariance function Kxx (t, s). Then to completely
characterize its probability density, we need only its mean vector and covariance matrix, given as:

Mx = [ 𝜇x (t1 )  𝜇x (t2 )  . . .  𝜇x (tN ) ]T

       | Kxx (t1 , t1 )  Kxx (t1 , t2 )  . . .  Kxx (t1 , tN ) |
Gx =   | Kxx (t2 , t1 )  Kxx (t2 , t2 )  . . .  Kxx (t2 , tN ) |
       |       ·               ·         . . .        ·        |
       | Kxx (tN , t1 )  Kxx (tN , t2 )  . . .  Kxx (tN , tN ) |

Note that a GRP is stationary if, for arbitrary sample times t1 , . . . , tN , the random vector x has the same
probability density function as the random vector x̄ defined as follows
x̄ ≡ [ x(0)  x(t2 − t1 )  . . .  x(tN − t1 ) ]T
Thus, its complete statistical characterization is time invariant.
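As a sketch, the earlier process x(t) = A + Bt with A, B independent N(1,1) is a GRP (any linear combination of jointly Gaussian variables is Gaussian), with 𝜇x(t) = 1 + t and Kxx(t, s) = Var(A) + ts·Var(B) = 1 + ts. Its mean vector and covariance matrix for chosen sample times can be built directly:

```python
# Mean vector Mx and covariance matrix Gx for the GRP x(t) = A + Bt with
# A, B independent N(1,1) (the earlier example), for which mu_x(t) = 1 + t
# and K_xx(t, s) = Var(A) + t*s*Var(B) = 1 + t*s.
def mu_x(t):
    return 1.0 + t

def K_xx(t, s):
    return 1.0 + t * s

times = [0.0, 0.5, 1.0, 2.0]                      # sample times t1..tN
Mx = [mu_x(t) for t in times]
Gx = [[K_xx(t, s) for s in times] for t in times]

# The covariance matrix must be symmetric: K_xx(t, s) = K_xx(s, t).
N = len(times)
assert all(Gx[i][j] == Gx[j][i] for i in range(N) for j in range(N))
print(Mx)   # [1.0, 1.5, 2.0, 3.0]
```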
Properties of Random Numbers


Two important statistical properties of random numbers R1 , R2 , . . .:
Uniformity
Independence
Consider a continuous uniform distribution on [a, b] from which a random number is drawn. The PDF is given as:

f (x) = 1/(b − a) for a ≤ x ≤ b, and f (x) = 0 otherwise.

A uniform distribution means that every number within a specified range has an equal probability of being
selected.

Assume that a = 0 and b = 1. Then, the expected value of Ri is given as:

E[Ri ] = ∫01 x dx = 1/2

This means that the likelihood of drawing any number in the range [0,1] is the same. For example:
A number like 0.25 has the same chance of occurring as 0.75.
When a large number of samples are generated, their histogram will appear flat, showing uniformity.
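A quick empirical check of uniformity, a sketch using Python's built-in generator: the sample mean approaches E[Ri] = 1/2, and a 10-bin histogram comes out nearly flat.

```python
import random

random.seed(1)
N = 100000
samples = [random.random() for _ in range(N)]   # draws from U(0, 1)

mean = sum(samples) / N
print(abs(mean - 0.5) < 0.01)                   # sample mean is near E[R_i] = 1/2

# A flat histogram: each of 10 equal bins holds roughly N/10 samples.
bins = [0] * 10
for r in samples:
    bins[min(int(r * 10), 9)] += 1
print(all(abs(c - N / 10) < 0.05 * N for c in bins))
```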

The value of one random number does not affect the next one i.e. they are independent.

Mathematically, if X1 , X2 , . . . are independent random numbers, then

P(Xn+1 |X1 , X2 , . . . , Xn ) = P(Xn+1 )

This means the probability of drawing the next number does not depend on past numbers.

True Random Numbers?


Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin. For,
as has been pointed out several times, there is no such thing as a random number — there are only methods
to produce random numbers, and a strict arithmetic procedure of course is not such a method.
John von Neumann, 1951

Hence we have pseudo-random numbers.

Approach: arithmetic generation (calculation) of random numbers


“Pseudo”, because generating numbers using a known method removes the potential for true
randomness.

Pseudo Random Numbers

Important properties of good random number routines:


Fast
Portable to different computers
Have a sufficiently long cycle: a long cycle ensures that the same sequence does not repeat too
soon, maintaining randomness.
Replicable: if you start with the same seed, the generator should produce the same sequence
every time.
Use identical streams of random numbers for different systems
Closely approximate the ideal statistical properties of uniformity and independence

Problems when generating pseudo-random numbers


Autocorrelation between numbers
Numbers successively higher or lower than adjacent numbers
Several numbers above the mean followed by several numbers below the mean

Generation of Random Numbers


Some typical methods that are used for generating random numbers are :
Linear Congruential Method (LCM)
Combined Linear Congruential Generators (CLCG)

Linear Congruential Method (LCM)


The LCM method produces a sequence of integers X1 , X2 , . . . between 0 and m-1 as per the following
recursive relationship:
Xi+1 = [aXi + c] mod m ; i = 0, 1, 2, . . .
where a is the multiplier, c is the increment and m is the modulus. The assumptions for selection of these
constants are :
m > 0; a < m; c < m; X0 < m
The selection of the values for a, c, m, and X0 drastically affects the statistical properties and the
cycle length.
The random integers Xi are being generated in [0, m − 1]
Convert the integers Xi to random numbers Ri

Ri = Xi /m ; i = 1, 2, . . .
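The recurrence can be sketched directly; the parameters below are toy values chosen only for illustration (practical generators use far larger m):

```python
def lcg(seed, a, c, m, n):
    """Linear Congruential Method: X_{i+1} = (a * X_i + c) mod m."""
    xs, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        xs.append(x)
    rs = [xi / m for xi in xs]   # random numbers R_i = X_i / m
    return xs, rs

# Toy parameters for illustration only.
xs, rs = lcg(seed=7, a=5, c=3, m=16, n=5)
print(xs)   # [6, 1, 8, 11, 10]
```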

Ex: Use X0 = 27, a = 17, c = 43, and m = 100. Generate Xi and Ri using LCM method.
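Carrying out the recurrence numerically exposes the weakness of these parameters: the sequence 2, 77, 52, 27 repeats after only four values.

```python
# X0 = 27, a = 17, c = 43, m = 100
x, xs = 27, []
for _ in range(5):
    x = (17 * x + 43) % 100
    xs.append(x)
print(xs)                            # [2, 77, 52, 27, 2] -- X4 = X0, so the period is only 4
print([xi / 100 for xi in xs[:4]])   # R_i: [0.02, 0.77, 0.52, 0.27]
```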

Characteristics of a good generator


Maximum Density
The density of randomly generated numbers refers to how those numbers are distributed across a given
range.
If numbers are generated using a uniform distribution, they should be evenly spread across the range.
The values assumed by Ri , i=1,2,. . . leave no large gaps on [0, 1]

Maximum Period
The length of the cycle before repetition is called the period, and a good PRNG (pseudo-random
number generator) has a very long period.
Achieved by proper choice of a, c, m, and X0

Pseudorandom number generators (PRNGs) produce sequences that appear random but can be
reproduced if the initial seed X0 is known.
The generator should be computationally efficient and generate numbers quickly, especially in
high-performance applications like simulations and cryptography.
The generator should be able to produce numbers in various ranges and formats (integers, floats,
etc.) as required.

Characteristics of a good generator: LCM

The LCM achieves its maximum period only under suitable conditions on a, c, m, and X0 :
Modulus and Increment Condition (m and c)
1 For a full period with c ≠ 0, the increment c and the modulus m must be relatively prime, i.e. their
greatest common divisor must be 1.
2 This ensures that c does not introduce any common factors with m, which could reduce the number of
unique values generated.
3 It helps in spreading numbers evenly across the full range of m.

Multiplier Condition: The multiplier a must be chosen according to the form of m and c:
1 When the modulus m is prime and the increment c = 0: the maximum period m − 1 is achieved if the
smallest integer k for which a k − 1 is divisible by m is k = m − 1.
2 When the modulus m is a power of 2, i.e. m = 2b , and the increment c = 0: the maximum period m/4 is
achieved if the seed X0 is odd and a = 3 + 8k or a = 5 + 8k for some k = 0, 1, . . .
3 When the modulus m is a power of 2, i.e. m = 2b , and the increment c ≠ 0: the full period m is achieved if
c is odd and a = 1 + 4k for some k = 0, 1, . . .

Seed Condition : In the multiplicative case (c = 0) with m = 2b , the initial seed X0 must be odd, i.e.
co-prime to m, so that the sequence can reach all admissible states before repeating.
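A brute-force period check illustrates the m = 2^b, c ≠ 0 case, with parameters first chosen to satisfy and then to violate the stated conditions:

```python
def period(seed, a, c, m):
    """Cycle length of X_{i+1} = (a * X_i + c) mod m starting from seed."""
    x, seen = seed, {}
    for i in range(m + 1):
        if x in seen:
            return i - seen[x]
        seen[x] = i
        x = (a * x + c) % m

# m = 2^b with c odd and a = 1 + 4k: the full period m is achieved.
print(period(0, a=5, c=3, m=16))   # 16
# Violating the conditions (even increment) shortens the cycle:
print(period(0, a=5, c=2, m=16))   # 8
```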

Combined Linear Congruential Generators


Reason: Longer period generator is needed because of the increasing complexity of simulated
systems.
Approach: Combine two or more linear congruential generators.

A CLCG uses multiple LCGs and combines their outputs:

Xn = (Xn,1 − Xn,2 ) mod m

where:
Xn,1 and Xn,2 are generated by two independent LCGs:

Xn+1,1 = (a1 Xn,1 + c1 ) mod m1

Xn+1,2 = (a2 Xn,2 + c2 ) mod m2

The final result is obtained by subtracting the two generators’ outputs and reducing it modulo m
(usually m = m1 − 1).

Much longer period than a single LCG.


Reduces correlations in generated numbers.
Improves statistical properties, making the generator pass randomness tests.


The generic form can be given as

Xn = ( Σ_{i=1}^{k} (−1)^{i−1} Xn,i ) mod (m1 − 1)

where:
Xn,i is generated by the i-th independent LCG:

Xn+1,i = (ai Xn,i + ci ) mod mi

Consider the following three independent LCGs:


LCG 1:
Xn+1,1 = (40014Xn,1 ) mod 2147483563

LCG 2:
Xn+1,2 = (40692Xn,2 ) mod 2147483399

LCG 3:
Xn+1,3 = (40692Xn,3 ) mod 2147483087

The combined output is:


Xn = (Xn,1 − Xn,2 + Xn,3 ) mod (m1 − 1)
where m1 = 2147483563 (since it’s the largest modulus).
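A sketch of this combined generator, using the multipliers and moduli exactly as listed above; the seeds are arbitrary illustrative values.

```python
# Multipliers and moduli as listed in the notes; the combined output is
# reduced modulo m1 - 1 and scaled to [0, 1).
M1, M2, M3 = 2147483563, 2147483399, 2147483087
A1, A2, A3 = 40014, 40692, 40692

def clcg(seeds, n):
    x1, x2, x3 = seeds
    out = []
    for _ in range(n):
        x1 = (A1 * x1) % M1
        x2 = (A2 * x2) % M2
        x3 = (A3 * x3) % M3
        xn = (x1 - x2 + x3) % (M1 - 1)
        out.append(xn / (M1 - 1))    # scale to a uniform number in [0, 1)
    return out

us = clcg((12345, 67890, 13579), 5)
print(all(0.0 <= u < 1.0 for u in us))   # True
```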
The period of the combined generator is approximately:

lcm(m1 − 1, m2 − 1, m3 − 1)

i.e. the least common multiple, which can be significantly larger than the period of the individual LCGs.

This ensures that the full cycle of all generators aligns before repeating, resulting in a longer period and
better randomness.
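This can be evaluated directly with the moduli from the three-LCG example above (math.lcm requires Python 3.9+):

```python
from math import lcm   # math.lcm is available from Python 3.9

m1, m2, m3 = 2147483563, 2147483399, 2147483087
combined_period = lcm(m1 - 1, m2 - 1, m3 - 1)
# The combined period is a multiple of each individual period m_i - 1, and is
# strictly longer than any single generator's period.
print(combined_period % (m1 - 1) == 0 and combined_period > m1 - 1)   # True
```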


Tests for Random Numbers

Frequency tests: Kolmogorov-Smirnov Test and Chi-Square Test to check if the distribution
generated by the sequence of random numbers is similar to the uniform distribution.
Autocorrelation Tests: Tests the correlation between the generated random numbers and compares
the sample correlation to expected correlation of zero.

Kolmogorov-Smirnov Test
The Kolmogorov-Smirnov (K-S) Test is a non-parametric statistical test that compares two distributions to
determine if they differ significantly.
One-Sample K-S Test: Compares a sample with a theoretical (expected) distribution
Two-Sample K-S Test: Compares two independent samples to check if they come from the same
distribution.

Null Hypothesis (H0 ): The two distributions are identical.


Alternative Hypothesis (HA ) : The two distributions are different.

Kolmogorov-Smirnov Test
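A minimal sketch of the one-sample K-S computation for uniformity, D = max(D+, D−), applied to an illustrative five-number sample; the tabulated critical value D at α = 0.05, N = 5 (0.565) is used for the comparison.

```python
def ks_statistic(sample):
    """One-sample K-S statistic against the U(0,1) CDF F(x) = x."""
    rs = sorted(sample)
    n = len(rs)
    d_plus = max((i + 1) / n - r for i, r in enumerate(rs))   # ECDF above F
    d_minus = max(r - i / n for i, r in enumerate(rs))        # ECDF below F
    return max(d_plus, d_minus)

sample = [0.44, 0.81, 0.14, 0.05, 0.93]   # illustrative five-number sample
D = ks_statistic(sample)
print(round(D, 2))   # 0.26

# H0 (uniformity) is not rejected at alpha = 0.05, since D < D_{0.05,5} = 0.565.
```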

Chi-Square Test
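A sketch of the chi-square frequency test for uniformity: partition [0, 1) into k equal classes, count the observed frequency Oi in each class, and compare χ² = Σ (Oi − Ei)²/Ei with the tabulated critical value for k − 1 degrees of freedom (16.92 for α = 0.05 and 9 degrees of freedom).

```python
import random

# Chi-square frequency test for uniformity: expected count per class is
# E_i = N / k; the statistic is chi2 = sum (O_i - E_i)^2 / E_i.
random.seed(3)
N, k = 1000, 10
sample = [random.random() for _ in range(N)]

observed = [0] * k
for r in sample:
    observed[min(int(r * k), k - 1)] += 1

expected = N / k
chi2 = sum((o - expected) ** 2 / expected for o in observed)

# Uniformity is not rejected when chi2 is below the tabulated critical
# value with k - 1 degrees of freedom, e.g. chi2_{0.05, 9} = 16.92.
print(round(chi2, 2), chi2 < 16.92)
```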

Autocorrelation Test
