
AMS 507

Chapter 4. Random Variables


Weihao Wang
4.6 Bernoulli and Binomial RV’s

A Bernoulli trial is a random experiment where there are
exactly two possible outcomes, typically labeled as
"success" and "failure." Each trial is independent of the
others, and the probability of success remains constant
across trials. The term comes from the Swiss
mathematician Jacob Bernoulli, who studied these types of
experiments in probability theory.
Key Features of a Bernoulli Trial

1. Two outcomes: The trial has only two possible
outcomes, often denoted as:
– Success (usually assigned the value 1)
– Failure (usually assigned the value 0)
2. Fixed probability of success: The probability of
success, denoted by p is the same for every trial, and
the probability of failure is 1−p.
3. Independence: Each trial is independent of the others,
meaning the outcome of one trial does not affect the
outcome of another.
Examples of Bernoulli Trials

1. Coin Tossing: In tossing a fair coin, there are two
possible outcomes: heads (success) or tails (failure),
each with a probability of 0.5.
2. Flipping a Switch: A switch can be either on (success)
or off (failure), with a certain probability for each.
3. Product Testing: A product works (success) or doesn’t
work (failure) with a fixed probability.
Bernoulli RV’s

1. PMF: P(X = 1) = p, P(X = 0) = 1 − p

2. Expectation: E[X] = p

3. Variance: Var(X) = p(1 − p)
Binomial Distribution

The binomial distribution is a discrete probability
distribution that models the number of successes in a fixed
number of independent Bernoulli trials. Each trial has two
possible outcomes (success or failure), and the probability
of success remains constant for all trials.
– The number of trials is fixed at n.
– Each trial is independent.
– The probability of success on each trial is p, and the probability
of failure is 1−p.
– The binomial distribution counts the number of successes,
denoted as X, out of the n trials.
Binomial Distribution

1. PMF: P(X = k) = C(n, k) p^k (1 − p)^(n − k), k = 0, 1, …, n

2. Expectation: E[X] = np

3. Variance: Var(X) = np(1 − p)
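As a quick numerical sketch (not part of the slides; the function names are illustrative), the binomial PMF and moments follow directly from these formulas using only the Python standard library:

```python
import math

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Bin(n, p): C(n, k) p^k (1 - p)^(n - k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def binom_mean(n, p):
    """E[X] = np."""
    return n * p

def binom_var(n, p):
    """Var(X) = np(1 - p)."""
    return n * p * (1 - p)

# Sanity check: the PMF sums to 1 over k = 0, ..., n
total = sum(binom_pmf(k, 10, 0.3) for k in range(11))
```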
Example 4.6.1

Let X be the number of heads in 7 independent tosses of
an unbiased coin. Then X ~ Bin(7, ½).
Example 4.6.2

If the probability is 0.1 that a certain device fails a
comprehensive safety test, what are the probabilities that
among 15 such devices:
a) At most two will fail?
b) At least three will fail?
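Both parts can be checked numerically with Bin(15, 0.1); this is a sketch added here, not from the slides:

```python
import math

def binom_pmf(k, n, p):
    # P(X = k) for X ~ Bin(n, p)
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 15, 0.1
# a) P(X <= 2): at most two devices fail
p_at_most_2 = sum(binom_pmf(k, n, p) for k in range(3))  # ≈ 0.8159
# b) P(X >= 3) = 1 - P(X <= 2): at least three fail
p_at_least_3 = 1 - p_at_most_2                           # ≈ 0.1841
```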
Example 4.6.3

30% of the automobiles in a certain city are foreign made.
Four cars are selected at random.
X: # of cars sampled that are foreign made
P(X=3)
P(X>=3)
P(X<=1)
P(X<2)
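With X ~ Bin(4, 0.3), the four requested probabilities can be computed directly (an illustrative sketch, not part of the slides):

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 4, 0.3  # 4 cars sampled, 30% foreign made
p_eq_3 = binom_pmf(3, n, p)                        # P(X = 3)  = 0.0756
p_ge_3 = binom_pmf(3, n, p) + binom_pmf(4, n, p)   # P(X >= 3) = 0.0837
p_le_1 = binom_pmf(0, n, p) + binom_pmf(1, n, p)   # P(X <= 1) = 0.6517
p_lt_2 = p_le_1                                    # P(X < 2) = P(X <= 1) for integer X
```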
Binomial distribution

p < 0.5: skewed to the right.

p > 0.5: skewed to the left.

p = 0.5: symmetric.
4.7 Poisson RV’s

The Poisson distribution is a discrete probability
distribution that models the number of events occurring
within a fixed interval of time or space, provided that these
events happen with a known constant rate and
independently of the time since the last event.
– The Poisson distribution is often used to model the number of
occurrences of events that happen randomly over a given
interval (such as time, distance, area, or volume).
– The rate at which events occur is described by the parameter
λ (lambda), which is both the mean and variance of the
distribution. This rate represents the expected number of events
in the given interval.
4.7 Poisson RV’s

λ: The average (or expected) number of events occurring in
a fixed interval. It is both the mean and variance of the
distribution.
PMF: P(X = k) = e^(−λ) λ^k / k!, k = 0, 1, 2, …

where
– k is the number of occurrences (events) in the interval,
– λ is the average number of occurrences,
– e is Euler's number (approximately 2.718).
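The PMF, and the fact that λ is both the mean and the variance, can be verified numerically; the sketch below is illustrative and not part of the slides:

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) = e^(-lam) * lam^k / k! for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# lambda is both the mean and the variance of the distribution:
lam = 2.5
mean = sum(k * poisson_pmf(k, lam) for k in range(100))
var = sum((k - mean)**2 * poisson_pmf(k, lam) for k in range(100))
```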
Applications of the Poisson Distribution

› Number of occurrences in a given time interval.
› Queueing theory: To model the number of customers
arriving at a service center.
› Epidemiology: To model the number of disease cases in
a certain area.
› Telecommunications: To model the number of phone
calls received at a call center in a given period.
› Traffic flow: To model the number of cars passing
through an intersection in a given time period.
Poisson Distribution

The Poisson distribution is highly skewed for small values of λ.
As λ increases, the distribution becomes more symmetric.
Example 4.7.1

A 911 operator handles 4 calls every 3 hours on average.
a) What is the probability of no calls in the next hour?
b) Find the probability of at most two calls in the next hour.
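The hourly rate is λ = 4/3, and both parts reduce to Poisson PMF evaluations (a numerical sketch, not from the slides):

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 4 / 3  # 4 calls per 3 hours -> calls per hour
p_none = poisson_pmf(0, lam)                              # a) ≈ 0.2636
p_at_most_2 = sum(poisson_pmf(k, lam) for k in range(3))  # b) ≈ 0.8494
```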
Example 4.7.2

On average, 12 cars pass a toll gate booth in a minute
during rush hours.
a) Probability that one car passes the booth in 3 seconds:
b) Probability that at least 2 cars pass in 5 seconds:
c) Probability that at most one car passes in 10 seconds:
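Each part rescales the per-minute rate to the relevant interval before applying the Poisson PMF; the computation below is an added sketch, not part of the slides:

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

rate_per_sec = 12 / 60  # 12 cars per minute = 0.2 cars per second

lam_a = rate_per_sec * 3   # 0.6 expected cars in 3 seconds
p_a = poisson_pmf(1, lam_a)                              # a) ≈ 0.3293

lam_b = rate_per_sec * 5   # 1.0 expected cars in 5 seconds
p_b = 1 - poisson_pmf(0, lam_b) - poisson_pmf(1, lam_b)  # b) ≈ 0.2642

lam_c = rate_per_sec * 10  # 2.0 expected cars in 10 seconds
p_c = poisson_pmf(0, lam_c) + poisson_pmf(1, lam_c)      # c) ≈ 0.4060
```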
Example 4.7.3

Suppose the average number of customer arrivals at a
store in an hour is λ = 3. What is the probability that exactly
5 customers will arrive in the next hour?
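This is a single Poisson PMF evaluation (sketch added here, not from the slides):

```python
import math

lam = 3  # average arrivals per hour
# P(X = 5) = e^(-3) * 3^5 / 5!
p_exactly_5 = math.exp(-lam) * lam**5 / math.factorial(5)  # ≈ 0.1008
```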
Poisson Approximation to the Binomial

The Poisson approximation to the binomial distribution is
a useful technique when dealing with binomial distributions
with a large number of trials n and a small probability of
success p. Under these conditions, the binomial
distribution Bin(n,p) can be approximated by a Poisson
distribution with parameter λ=n⋅p.
Poisson Approximation to the Binomial

Conditions for the Poisson Approximation:
• n (the number of trials) is large.
• p (the probability of success in each trial) is small.
• The mean λ=n⋅p is moderate (not too large or too small).
Why Poisson Approximation?

• When p is small, the likelihood of more than one success
in a single trial is minimal.
• As n becomes large, the distribution of successes
becomes increasingly spread out, and the Poisson
distribution with parameter λ=n⋅p provides a good
approximation.
• The binomial coefficient becomes difficult to compute
for large n, but the Poisson PMF is computationally
simpler.
Example 4.7.4

A factory produces screws, and the probability that a screw
is defective is p = 0.01. In a batch of 200 screws (n = 200),
what is the probability that exactly 3 screws are defective?
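Here λ = n·p = 2, and the approximation can be compared against the exact binomial answer (an added numerical sketch, not from the slides):

```python
import math

n, p, k = 200, 0.01, 3
lam = n * p  # 2.0

# Exact binomial probability: C(200, 3) * 0.01^3 * 0.99^197
exact = math.comb(n, k) * p**k * (1 - p)**(n - k)      # ≈ 0.1814
# Poisson approximation with lambda = n*p
approx = math.exp(-lam) * lam**k / math.factorial(k)   # ≈ 0.1804
```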
Example 4.7.5

A publisher of mystery novels tries to keep its books free of
typos. The probability of any given page containing at least
one typo is 0.003, and errors are independent from page to
page. What is the approximate probability that a 500-page
book has
a) Exactly 1 page with typos?
b) At most 2 pages with typos?
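With n = 500 and p = 0.003, the Poisson approximation uses λ = 1.5; the following check is a sketch added here, not part of the slides:

```python
import math

n, p = 500, 0.003
lam = n * p  # 1.5 expected pages with typos

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

p_exactly_1 = poisson_pmf(1, lam)                         # a) ≈ 0.3347
p_at_most_2 = sum(poisson_pmf(k, lam) for k in range(3))  # b) ≈ 0.8088
```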
4.8 Other Discrete Probability Distributions

A geometric random variable represents the number of
trials needed to get the first success in a sequence of
independent Bernoulli trials, each with the same probability
of success p.
For a geometric random variable X, the probability that the
first success occurs on the k-th trial is given by the PMF:
P(X = k) = (1 − p)^(k − 1) p, k = 1, 2, 3, …
4.8 Other Discrete Probability Distributions

Expectation: E[X] = 1/p

Variance: Var(X) = (1 − p)/p²

CDF: P(X ≤ k) = 1 − (1 − p)^k
4.8 Other Discrete Probability Distributions

Prove:
Expectation:

Variance:

CDF:
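A sketch of the standard proofs (the derivation steps are not reproduced on the slide):

```latex
% CDF: the first success comes after trial k iff the first k trials all fail
P(X > k) = (1-p)^k \;\Rightarrow\; F(k) = P(X \le k) = 1 - (1-p)^k.

% Expectation: differentiate the geometric series \sum_{k \ge 1} q^k = \frac{q}{1-q},
% with q = 1-p, to get \sum_{k \ge 1} k q^{k-1} = \frac{1}{(1-q)^2}:
E[X] = \sum_{k=1}^{\infty} k (1-p)^{k-1} p = p \cdot \frac{1}{p^2} = \frac{1}{p}.

% Variance: a second differentiation gives E[X(X-1)] = \frac{2(1-p)}{p^2}, so
\mathrm{Var}(X) = E[X(X-1)] + E[X] - (E[X])^2
               = \frac{2(1-p)}{p^2} + \frac{1}{p} - \frac{1}{p^2}
               = \frac{1-p}{p^2}.
```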
Example 4.8.1

A fair die is tossed until a certain number appears. What is
the probability that the first six appears on the fifth toss?
Example 4.8.2

Suppose a factory produces light bulbs, and each light bulb
has a 5% chance of being defective. You randomly test
light bulbs one by one.
1) What is the probability that the first defective light bulb
will be found on the 4th test?
2) Also, find the expected number of light bulbs you would
need to test before finding the first defective one.
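Both parts follow from the geometric PMF and mean (an illustrative sketch, not part of the slides):

```python
p = 0.05  # chance a bulb is defective
# 1) First defective found on the 4th test: three good bulbs, then a defective one
p_fourth = (1 - p)**3 * p  # 0.95^3 * 0.05 ≈ 0.0429
# 2) Expected number of tests until the first defective bulb: E[X] = 1/p
expected_tests = 1 / p     # 20
```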
Negative Binomial RV

A negative binomial random variable represents the
number of trials needed to achieve a specified number of
successes in a sequence of independent Bernoulli trials,
each with the same probability of success.
It's a generalization of the geometric distribution, where
instead of looking for the first success, we are interested in
the number of trials needed to achieve r successes.
Negative Binomial RV

PMF: P(X = k) = C(k − 1, r − 1) p^r (1 − p)^(k − r), k = r, r + 1, …
where
– p is the probability of success on each trial,
– 1 − p is the probability of failure on each trial,
– k is the total number of trials (including both successes and
failures),
– r is the number of successes we want to achieve,
– C(k − 1, r − 1) is the binomial coefficient, which counts the
number of ways to arrange the r − 1 earlier successes among the
first k − 1 trials, ensuring the k-th trial is a success.
Negative Binomial RV

Expectation: E[X] = r/p

Variance: Var(X) = r(1 − p)/p²

CDF: P(X ≤ k) = Σ_{j=r}^{k} C(j − 1, r − 1) p^r (1 − p)^(j − r)
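These formulas translate directly into Python; the function names below are illustrative, not from the slides. Note that with r = 1 the PMF reduces to the geometric PMF:

```python
import math

def neg_binom_pmf(k, r, p):
    """P(X = k): the k-th trial yields the r-th success (k = r, r+1, ...)."""
    return math.comb(k - 1, r - 1) * p**r * (1 - p)**(k - r)

def neg_binom_mean(r, p):
    """E[X] = r/p."""
    return r / p

def neg_binom_var(r, p):
    """Var(X) = r(1 - p)/p^2."""
    return r * (1 - p) / p**2
```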
Negative Binomial RV

Consider r independent geometric random variables X1,
X2, …, Xr, each with the same success probability p. Let Y
be the sum of these independent geometric random
variables:
Y=X1+X2+⋯+Xr
Here, Y represents the total number of trials needed to
achieve r successes. This is a classical setup for the
negative binomial distribution.
Negative Binomial RV

› Geometric distribution: Counts the number of trials until
the first success.
› Negative binomial distribution: Counts the number of
trials until the r-th success.
› The sum of r independent geometric random variables
with the same success probability p follows a negative
binomial distribution with parameters r and p.
Example 4.8.3

Find the expected value and variance of the number of
times one must throw a die until the outcome 1 has
occurred 4 times.
Example 4.8.4

A salesperson makes calls to potential customers. Each
call is independent, and the probability of making a sale on
any given call is 0.2. The salesperson wants to know how
many calls they need to make to achieve 5 sales.
1.What is the probability that the salesperson makes
exactly 8 calls to achieve 5 sales?
2.What is the expected number of calls the salesperson
needs to make to achieve 5 sales?
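Here r = 5 and p = 0.2, so both answers come from the negative binomial PMF and mean (a sketch added here, not from the slides):

```python
import math

r, p = 5, 0.2  # 5 sales wanted, 20% chance of a sale per call
k = 8
# 1) Exactly 8 calls: 4 sales in the first 7 calls, then a sale on call 8
p_8_calls = math.comb(k - 1, r - 1) * p**r * (1 - p)**(k - r)  # ≈ 0.0057
# 2) Expected number of calls to reach 5 sales: r/p
expected_calls = r / p  # 25
```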
Example 4.8.5

A factory produces items, and each item has a 10%
chance of being defective. Items are inspected one by one,
and each inspection is independent of the others. The
factory is interested in finding out how many items need to
be inspected to find 3 defective products.
1.What is the probability that exactly 7 items need to be
inspected to find 3 defective items?
2.What is the variance of the number of items inspected to
find 3 defective products?
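With r = 3 and p = 0.1, the same negative binomial formulas apply (an illustrative sketch, not part of the slides):

```python
import math

r, p = 3, 0.1
k = 7
# 1) Exactly 7 inspections: 2 defectives in the first 6, then a defective on the 7th
p_7 = math.comb(k - 1, r - 1) * p**r * (1 - p)**(k - r)  # 15 * 0.001 * 0.9^4 ≈ 0.0098
# 2) Variance of the number inspected to find 3 defectives: r(1 - p)/p^2
variance = r * (1 - p) / p**2  # 270
```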
Hypergeometric RV

The hypergeometric random variable is a discrete random
variable that models the number of successes in a
sequence of draws from a finite population without
replacement. This is in contrast to the binomial
distribution, which models draws with replacement.
Hypergeometric RV

Suppose you have a population of N items, where:
– K of these items are classified as "successes",
– N − K are classified as "failures".
If n items are drawn without replacement and X counts the
successes, the PMF is
P(X = k) = C(K, k) C(N − K, n − k) / C(N, n), where
– C(K, k) is the number of ways to choose k successes from K,
– C(N − K, n − k) is the number of ways to choose n − k failures from N − K,
– C(N, n) is the total number of ways to choose n items from N.
Hypergeometric RV

Expectation: E[X] = nK/N

Variance: Var(X) = n(K/N)(1 − K/N)(N − n)/(N − 1)

CDF: P(X ≤ k) = Σ_{j=0}^{k} C(K, j) C(N − K, n − j) / C(N, n)
Example 4.8.6

A shipment of 25 CD’s contains 5 that are defective. If 10
of them are randomly chosen without replacement, what is
the probability that 2 of the 10 will be defective?
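This is a single hypergeometric PMF evaluation with N = 25, K = 5, n = 10, k = 2 (a sketch added here, not from the slides):

```python
import math

N, K, n, k = 25, 5, 10, 2  # 25 CDs, 5 defective, draw 10, want exactly 2 defective
# P(X = 2) = C(5, 2) C(20, 8) / C(25, 10)
p_two_defective = math.comb(K, k) * math.comb(N - K, n - k) / math.comb(N, n)  # ≈ 0.3854
```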
Binomial Approximation to the Hypergeometric

The binomial approximation to the hypergeometric
distribution is used when the population size N is large, and
the sample size n is small relative to N. In such cases, sampling
without replacement (hypergeometric) can be approximated by
sampling with replacement (binomial), making the calculations
easier.
The probability of success p is K/N, which is the proportion of
successes in the population.
Example 4.8.7

For a lot of 100 CD’s, 20 are defective. Find the probability
that among a randomly selected sample of 10 CD’s, 2 are
defective, by using:
a) The formula for the hypergeometric distribution.
b) The binomial approximation to the hypergeometric.
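Both the exact and approximate answers are short computations with p = K/N = 0.2 (an illustrative sketch, not part of the slides):

```python
import math

N, K, n, k = 100, 20, 10, 2

# a) Exact hypergeometric probability: C(20, 2) C(80, 8) / C(100, 10)
exact = math.comb(K, k) * math.comb(N - K, n - k) / math.comb(N, n)  # ≈ 0.3182

# b) Binomial approximation with p = K/N
p = K / N  # 0.2
approx = math.comb(n, k) * p**k * (1 - p)**(n - k)                   # ≈ 0.3020
```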
Exercise

A batch of 500 products contains 50 defective items. A
quality control inspector randomly selects 20 products
from the batch without replacement to check for defects.
Let X be the number of defective items in the sample.
a) What is the exact probability of finding exactly 3
defective items in the sample?
(Use the hypergeometric distribution.)
b) Use the binomial approximation to estimate the
probability of finding exactly 3 defective items in the
sample.
HW4

Chapter 4
Problems: 10, 14, 17, 18, 19
Theoretical Exercises: 2, 6, 8, 18, 25
In Theoretical Exercise 4.18(a) you need to use the result of
Theoretical Exercise 3.16 (4.15 is a typo).