doc-cours_MathsV
2024-2025
Claire Brécheteau
[email protected]
4.3.2 Law of large numbers
4.3.3 Approximations
5 Real random vectors and Estimators in Statistics
5.1 Real random vectors
5.1.1 Probability distribution
5.1.2 Sum and covariance
5.1.3 Characteristic function
5.1.4 Summary and general case
5.2 Independence of random variables
5.3 Usual probability distributions
5.3.1 Functions of random variables
5.4 Short introduction to statistics
5.5 Vocabulary
5.6 Point estimation
5.6.1 Estimators
5.6.2 Examples
5.6.3 Moment estimators
5.6.4 Maximum likelihood estimator
6 Confidence intervals and Statistical tests
6.1 Confidence interval estimation
6.1.1 Examples of confidence intervals
6.2 Statistical tests
6.2.1 Definitions
6.2.2 Method
7 Statistical tables
7.1 Binomial distribution B(n, p)
7.2 Poisson distribution P(λ)
7.3 Normal distribution N(0, 1)
7.4 Chi-square distribution χ²(ν)
7.5 Student's distribution t(ν)
7.6 Fisher-Snedecor distribution F(ν₁, ν₂)
Notations
Abbreviations
In this short course the following abbreviations are frequently used:
Greek letters
Greek letters are frequently used.
Other notations
Finally, the notations frequently used in this course are the following:
• Ω: sample space.
• ω ∈ Ω: elementary event.
Chapter 1
Basics of set theory
In probability, to model the random experiment E we use the following mathematical tools:
1. Ω the sample space,
2. A the set of events,
3. P the probability.
Event: To represent one or several possible outcomes of a random experiment E, we use the sample space Ω, the set of all possible outcomes.
To represent events we use set theory. In that way, an event is associated to a subset of the
sample set Ω.
• two events are said to be incompatible, disjoint or mutually exclusive if they have no
elements in common: A ∩ B = ∅.
Given m events A1 , A2 , . . . Am :
Sm
• their union is j=1 Aj = A1 ∪ . . . ∪ Am = {x ∈ Ω | ∃j ∈ {1, . . . , m}, x ∈ Aj },
Tm
• their intersection is j=1 Aj = A1 ∩ . . . ∩ Am = {x ∈ Ω | ∀j ∈ {1, . . . , m}, x ∈ Aj }.
We recall that the symbol ∃ means "there exists" and ∀ means "for all".
Set of events A :
Definition 2. We call set of events, noted A, associated to the random experiment E, any set of subsets of Ω such that
1. A contains Ω and ∅,
2. A is stable under complement: if A ∈ A then Ā ∈ A,
3. A is stable under countable unions and intersections: if Aₙ ∈ A for all n ∈ N, then ⋃_{n∈N} Aₙ ∈ A and ⋂_{n∈N} Aₙ ∈ A.
The set A is also called σ-algebra on Ω. We call measurable space associated to E the pair
(Ω, A).
Definition 3. If Ω is finite or countable, P(Ω) or 2^Ω is called the power set, i.e. the set of all possible subsets of Ω.
When Ω is finite or countable, we usually use A = P(Ω). The power set P(Ω) is indeed a σ-
algebra. For information, when Ω = R is continuous, we use A = B(R) the smallest σ-algebra
(for the inclusion) that contains all intervals [a, b] ⊂ R. B(R) is called the Borel σ-algebra.
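As a quick sanity check, the power set of a small finite Ω can be built with Python's standard library (the recipe below uses itertools; Ω = {1, 2, 3} is an arbitrary example):

```python
from itertools import chain, combinations

def power_set(omega):
    # all subsets of omega, of every size r = 0, 1, ..., Card(omega)
    s = list(omega)
    return [set(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

P = power_set({1, 2, 3})
assert len(P) == 2 ** 3          # Card(P(Omega)) = 2^n
assert set() in P and {1, 2, 3} in P
```

This also illustrates why P(Ω) is a σ-algebra: it contains ∅ and Ω, and it is trivially stable under complement, union, and intersection.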
Example 4. • The experiment of rolling one die is modelled by the sample space Ω = {1, 2, 3, 4, 5, 6}. Examples of events in A = P(Ω) are {1, 2, 3}, {1, 3, 5}, {1}, ∅, etc. We will see in the next chapter that for an unbiased die we are in a situation of equiprobability: the probability of each elementary event is P({1}) = . . . = P({6}) = 1/6. Then, the probability to get an odd number is P({1, 3, 5}) = 3/6 = 1/2. The probability of the impossible event is P(∅) = 0. This is a discrete probability.
• The experiment of measuring the height of a person is modelled by the sample space Ω = [0, 272] (in centimeters). Examples of events in A are "measuring at most 2 meters" [0, 200], "measuring more than 2 meters" ]200, 272], Q ∩ [0, 50] (where Q is the set of rational numbers), etc. We will see in the next chapter how to compute the probability of events P(A) for A ∈ A = B(R). This is a continuous probability.
Example 5. • For Ω = {1, 2, 3, 4, 5, 6}, let A = {1, 3, 5} and B = {2, 4, 6}; then Ā = B, A ∩ B = ∅, A ∪ B = Ω.
Basic properties
• Commutativity: A ∪ B = B ∪ A, A ∩ B = B ∩ A
• Associativity: A ∪ (B ∪ C) = (A ∪ B) ∪ C, A ∩ (B ∩ C) = (A ∩ B) ∩ C
• Idempotence: A ∩ A = A, A ∪ A = A
• Distributivity: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C), A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
• Excluded middle: A ∪ Ā = Ω, A ∩ Ā = ∅
• Absorption: A ∪ Ω = Ω, A ∪ (A ∩ B) = A, A ∩ ∅ = ∅, A ∩ (A ∪ B) = A
• Neutral element: A ∪ ∅ = A, A ∩ Ω = A
• De Morgan's laws: the complement of A ∪ B is Ā ∩ B̄; the complement of A ∩ B is Ā ∪ B̄
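These identities can be verified mechanically with Python's built-in sets; Ω, A, B below are arbitrary small examples chosen for illustration:

```python
Omega = {1, 2, 3, 4, 5, 6}
A = {1, 3, 5}
B = {1, 2}

def complement(S):
    # complement relative to the sample space Omega
    return Omega - S

# Distributivity: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
assert A & (B | {4}) == (A & B) | (A & {4})
# Excluded middle
assert A | complement(A) == Omega
assert A & complement(A) == set()
# De Morgan's laws
assert complement(A | B) == complement(A) & complement(B)
assert complement(A & B) == complement(A) | complement(B)
```

The `&`, `|`, and `-` operators on Python sets are exactly the intersection, union, and set difference used above.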
Product set When we carry out 2 (resp. n ∈ N) successive experiments, the sample space
is given by the product of the 2 (resp. n) sample spaces.
Definition 7. The product A × B of two events A and B is the set of pairs of two elements with
the first one in A and the second one in B:
A × B = {(a, b) | a ∈ A, b ∈ B}.
Example 8. • The sample space associated with the experiment of rolling 2 dice
successively is Ω = {1, 2, 3, 4, 5, 6} × {1, 2, 3, 4, 5, 6} = {1, 2, 3, 4, 5, 6}2 . The event
A ="The first die gives an odd number" is: A = {1, 3, 5} × {1, 2, 3, 4, 5, 6}. Examples
of elementary events are: (1, 2), (1, 1), (5, 6), etc.
• {1, 2} × {2, 3} = {(1, 2), (1, 3), (2, 2), (2, 3)}
• {1, 2} × ∅ = ∅
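The Cartesian products above can be enumerated with `itertools.product`; the assertions reproduce the three bullet points:

```python
from itertools import product

# Rolling two dice: Card(Omega^2) = 6^2 = 36
two_dice = list(product(range(1, 7), repeat=2))
assert len(two_dice) == 36

# {1, 2} x {2, 3}
pairs = set(product({1, 2}, {2, 3}))
assert pairs == {(1, 2), (1, 3), (2, 2), (2, 3)}

# {1, 2} x empty set = empty set
assert list(product({1, 2}, set())) == []
```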
1.3 Cardinality
Definition 9. The number of elements of A ∈ A is denoted by Card(A) or #A; this is the cardinality of A.
Corollary 12. If A and B are disjoint, then Card(A ∪ B) = Card(A) + Card(B).
• Since {1, 2} and {3} are disjoint, Card({1, 2} ∪ {3}) = Card({1, 2}) + Card({3}) = 3.
Partition
Definition 15. A partition {A₁, . . . , A_m} of a sample space Ω is a family of m events satisfying:
• The events are pairwise disjoint and nonempty: ∀i ≠ j ∈ {1, . . . , m}, A_i ∩ A_j = ∅ and A_i ≠ ∅,
• The events cover Ω: ⋃_{j=1}^m A_j = Ω.
• If A and B are two distinct events, different from ∅ and Ω, then {A ∩ B, A ∩ B̄, Ā ∩ B, Ā ∩ B̄} is a partition of Ω.
Example 18. In a class of 23 students, we consider the following events: A "The student practices sport", B "The student wears glasses". If 9 students with glasses and 5 students without glasses practice sport, and 6 students without glasses do not practice sport, then the number of students with glasses who do not practice sport is Card(Ā ∩ B) = Card(Ω) − Card(A ∩ B) − Card(A ∩ B̄) − Card(Ā ∩ B̄) = 23 − 9 − 5 − 6 = 3, since {A ∩ B, A ∩ B̄, Ā ∩ B, Ā ∩ B̄} is a partition of Ω.
1.4 Enumeration
In this part, we present different types of discrete events for which it is possible to compute the
cardinality.
p-list
Definition 19. A p-list (x₁, . . . , x_p) is a tuple of p elements of a set Ω of n elements, that is, an element of Ω^p.
Example 21. • A phone number in France is +33 followed by 9 digits in {0, 1, . . . , 9}. It can be seen as a 9-list of Ω = {0, 1, . . . , 9}. Thus, the number of phone numbers is Card({0, 1, . . . , 9}⁹) = 10⁹ = 1 000 000 000.
• There are 25 = 32 possible results for tossing 5 coins successively. For instance,
(H, H, T, H, T ) is one of them.
Permutation
Proposition 23. The cardinality of the set of all permutations is n!, where the factorial of n is
defined by: n! = 1 × . . . × (n − 1) × n for n ≥ 1 and 0! = 1.
• There are 6 permutations of the set Ω = {1, 2, 3} : (1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1),
(3, 1, 2), (3, 2, 1).
• In a box containing 6 red balls and 4 blue balls, the number of ways to draw the balls one by one, first all the red ones and then all the blue ones, is 6! × 4!.
Partial permutation
Proposition 26. The cardinality of the set of partial permutations of p elements of Ω is: A_n^p = n!/(n − p)!.
Example 27. In some competition with 70 participants, the number of possible podiums (gold, silver, bronze) is A_70^3 = 70!/67! = 70 × 69 × 68 = 328440.
Proposition 28. If n elements can be divided into c classes of alike elements, differing from class to class, then the number of permutations of these elements taken all at a time is n!/(n₁! n₂! · · · n_c!) (with n = n₁ + n₂ + · · · + n_c, and n_j the number of elements in the j-th class).
Example 29. There are 5! anagrams of the word "maths", 11!/(2! × 2! × 2!) of the word "mathematics" (2 "a", 2 "m", 2 "t") and 6!/(2! × 3!) of "baobab" (2 "a", 3 "b").
Combination
Proposition 31. The cardinality of the set of combinations of p elements of Ω is: C_n^p = (n choose p) = n!/(p!(n − p)!).
Example 32. • The number of hands of 8 cards in a game of 32 cards is (32 choose 8) = 32!/(8! × 24!).
• The number of ways of drawing 2 balls simultaneously in a box of 4 balls is (4 choose 2) = 4!/(2! × 2!) = 6: {B₁, B₂}, {B₁, B₃}, {B₁, B₄}, {B₂, B₃}, {B₂, B₄}, {B₃, B₄}.
Binomial coefficient The binomial coefficient (n choose p) satisfies the following properties:
Proposition 33. • Symmetry: ∀p ∈ {0, 1, . . . , n}, (n choose p) = (n choose n − p).
• Pascal's triangle: (n choose p) + (n choose p + 1) = (n + 1 choose p + 1).
Chapter 2
Introduction to probabilities: from events to random variables
2.1 Probability
The probability of an event A in an experiment is supposed to measure how frequently A occurs if we make many trials.
Definition 34. We call probability on (Ω, A) a real valued function noted P, defined from A to [0, 1]. We denote P(A) the probability of an event A ∈ A. It satisfies the following axioms (Kolmogorov's axioms):
1. P(Ω) = 1,
2. (σ-additivity axiom) for any finite or countable sequence {A_i}_{i∈I}, I ⊂ N, of events in A s.t. A_i ∩ A_j = ∅ for i ≠ j (i.e. pairwise incompatible), then
P(⋃_{i∈I} A_i) = Σ_{i∈I} P(A_i).
We call the triplet (Ω, A, P) the probability space associated to the random experiment E.
A probability P satisfies the following properties, for all events A, B ∈ A:
1. P(Ā) = 1 − P(A)
2. P(∅) = 0
3. P(A) ∈ [0, 1]
4. A ⊆ B =⇒ P(A) ≤ P(B)
Example 36. • In tossing a coin, if head (H) appears two times more often than tail (T), then the probability P satisfies P(Ω) = P(T) + P(H) = 1 and P(H) = 2 × P(T). So 3 × P(T) = 1, hence P(T) = 1/3 and P(H) = 2/3.
2.1.1 Equiprobability and uniform probability
We say that we have equiprobability when the probabilities of all the elementary events are equal.
In that case, P is the uniform probability on (Ω, P(Ω)).
Example 37. • In rolling one fair die, the probability to get an even number is P({2, 4, 6}) = Card({2, 4, 6})/Card({1, 2, 3, 4, 5, 6}) = 3/6 = 0.5.
• In rolling two fair dice, the probability to get a sum equal to 4 is: P({(1, 3), (2, 2), (3, 1)}) = Card({(1, 3), (2, 2), (3, 1)})/Card([[1, 6]]²) = 3/36 = 1/12.
• In rolling a die, if A denotes the event "getting an odd number" and Ā "getting an even number", then P(A) = 1 − P(Ā) = 1 − 1/2 = 1/2.
• If B is the event "getting a number less than 4", then the probability that the number is odd or less than 4 is P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = P({1, 3, 5}) + P({1, 2, 3}) − P({1, 3}) = 1/2 + 1/2 − 1/3 = 2/3.
For an event B with P(B) > 0, the conditional probability of A knowing B is P(A|B) = P(A ∩ B)/P(B). In particular:
1. P(A ∩ B) = P(A|B)P(B),
2. P_B(A) + P_B(Ā) = 1,
Proposition 40. The map A ↦ P(A|B), defined from A to [0, 1], defines a new probability on (Ω, A), also noted A ↦ P_B(A). It is called the conditional probability knowing B.
Theorem 41 (Law of total probability). Let {B_i}_{i∈I} be a partition of Ω with P(B_i) ≠ 0 for every i ∈ I. Then, for any event A in A, we have:
P(A) = Σ_{i∈I} P(A|B_i)P(B_i).
Proposition 42 (Bayes' rule). Let {B_i}_{i∈I} be a partition of Ω with P(B_i) ≠ 0 for every i ∈ I, and A such that P(A) ≠ 0. Then, for every j ∈ I:
P(B_j|A) = P(A|B_j)P(B_j) / Σ_{i∈I} P(A|B_i)P(B_i).
Corollary 43. If A is such that P(A) ≠ 0, then for every event B so that P(B) > 0 and P(B̄) > 0:
P(B|A) = P(A|B)P(B) / (P(A|B)P(B) + P(A|B̄)P(B̄)).
Example 44. A computer scientist writes code every day. When he is not tired, he has a 1/3 risk of making a mistake; when he is tired, he has a 7/8 risk of making a mistake. On average, he is tired 1 day a week. Today, he makes a mistake in his code. Then, the probability that he is tired is given by:
P(A|B) = P(B|A)P(A) / (P(B|A)P(A) + P(B|Ā)P(Ā)) = (7/8 × 1/7) / (7/8 × 1/7 + 1/3 × 6/7) = 7/23 ≃ 0.304,
where we have set the events A "He is tired" and B "He makes a mistake".
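Example 44 can be reproduced with exact rational arithmetic using Python's `fractions` module:

```python
from fractions import Fraction

P_A = Fraction(1, 7)            # P(tired): 1 day out of 7
P_B_given_A = Fraction(7, 8)    # P(mistake | tired)
P_B_given_notA = Fraction(1, 3) # P(mistake | not tired)

# Bayes' rule with the two-event partition {A, not A}
P_A_given_B = (P_B_given_A * P_A) / (
    P_B_given_A * P_A + P_B_given_notA * (1 - P_A)
)
assert P_A_given_B == Fraction(7, 23)   # ≈ 0.304
```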
2.2 Independence
Definition 45. Two events A and B are said to be independent for the probability P if P(A|B) =
P(A). Equivalently, two events A and B are independent if, and only if,
P(A ∩ B) = P(A)P(B).
Example 46. We have P(E ∩ M) = 3/10, P(E) = 7/10 and P(M) = 6/10. Since P(E) · P(M) ≠ P(E ∩ M), the events E and M are not independent.
1. Ā, B are independent,
2. A, B̄ are independent,
3. Ā, B̄ are independent.
Proof We prove that A, B̄ are independent; the other points hold similarly. We have that P(A) = P(A ∩ B) + P(A ∩ B̄) since A ∩ B and A ∩ B̄ are incompatible. Then by independence of A, B we have P(A ∩ B̄) = P(A) − P(A ∩ B) = P(A)(1 − P(B)) = P(A)P(B̄). ■
Definition 49 (Generalization of mutual independence). Let I ⊆ N. The events {A_i}_{i∈I} are (mutually) independent if for any finite J ⊂ I we have:
P(⋂_{i∈J} A_i) = ∏_{i∈J} P(A_i).
In general, the chain rule gives an expression for the probability of a multiple intersection in
terms of conditional probabilities:
Proposition 50 (Chain rule). Given m events A₁, . . . , A_m with non empty common intersection,
P(A₁ ∩ · · · ∩ A_m) = P(A₁)P(A₂|A₁)P(A₃|A₁ ∩ A₂) · · · P(A_m|A₁ ∩ · · · ∩ A_{m−1}).
Example 51. We consider a box containing 4 blue balls and 3 red balls. We draw 3 balls without replacement. The probability to draw 2 blue balls and then a red one is:
P(B₁ ∩ B₂ ∩ B̄₃) = P(B₁)P(B₂|B₁)P(B̄₃|B₁ ∩ B₂) = (4/7) × (3/6) × (3/5),
where B_k is the event "The k-th ball is blue." for k ∈ {1, 2, 3}.
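The chain-rule value of Example 51 can be computed exactly, and cross-checked by a small Monte Carlo simulation of the draws without replacement:

```python
from fractions import Fraction
import random

# Exact value by the chain rule: (4/7)(3/6)(3/5)
p_exact = Fraction(4, 7) * Fraction(3, 6) * Fraction(3, 5)
assert p_exact == Fraction(6, 35)

# Monte Carlo check: draw 3 balls without replacement, count (blue, blue, red)
random.seed(0)
balls = ["blue"] * 4 + ["red"] * 3
trials = 100_000
hits = 0
for _ in range(trials):
    draw = random.sample(balls, 3)   # ordered draw without replacement
    if draw[0] == "blue" and draw[1] == "blue" and draw[2] == "red":
        hits += 1
# relative frequency should be close to 6/35 ≈ 0.171
assert abs(hits / trials - float(p_exact)) < 0.01
```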
2.3.1 Definition
Definition 52. A random variable is a function noted X : Ω → R such that for any interval (or union or intersection of intervals) B ⊂ R (also noted B ∈ B(R)), we can define the probability of the event {X ∈ B}, i.e. such that the set noted X⁻¹(B) = {ω ∈ Ω, X(ω) ∈ B} is an event of A.
• Random variables (r.v.) are noted with upper case letters (X) and deterministic values or realizations of X are noted with lower case letters (x).
• We denote X(Ω) = {X(ω), ω ∈ Ω} the set of all possible values taken by X defined on
(Ω, A, P).
A more precise discussion about independence of random variables is given in Section 5.2.
In the following chapter, we will encounter discrete random variables, that is, random variables with values in a finite or countable subset of R, and continuous random variables, that is, random variables with values in R such that the probability of every elementary event is zero: P(X = x) = 0, ∀x ∈ R.
In this section, we show how to represent such samples of random variables.
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()

from scipy.stats import poisson
rv1 = poisson(4)

sample_size = 500
x1 = rv1.rvs(sample_size)
print(x1[:30])

np.bincount(x1)

# The continuous sample x2 used further below is assumed to be drawn
# analogously, e.g. from an exponential distribution (parameters assumed):
from scipy.stats import expon
rv2 = expon()
x2 = rv2.rvs(sample_size)
The outputs of the Python code are: [2.03526363 2.35041629 3.99294475 2.52395537 3.47432423
2.06028756 2.48965581 3.18317669 3.35493938 2.80249061 2.3948979 2.36698703 3.38252004
3.52181081 3.15779851 3.49725899 2.02852658 5.71199484 2.01454532 2.30722529 2.00115956
2.06058937 4.81529023 2.55645531 3.62620267 2.17999947 2.14571947 2.72158256 2.69666184
2.25309457]. Then, Figure 2.3.2 represents the observations in a histogram. The area of the first bar corresponds to the relative frequency of observations in the first interval [0, 0.8...]. The sum of the areas of all the bars is 1.
Chapter 3
Discrete and continuous random variables
Proposition 56. The probability distribution is entirely defined by its cumulative distribution function F_X. Then, two random variables X, Y have the same distribution if and only if F_X = F_Y.
3.1.2 Particular cases
Let X be a random variable defined on (Ω, A, P).
Definition 57. The probability distribution P_X of a discrete random variable X is then completely given through the values
P_X(x_i) = P(X = x_i), ∀x_i ∈ X(Ω),
with P_X(x_i) ≥ 0, ∀x_i ∈ X(Ω), and Σ_{x_i∈X(Ω)} P_X(x_i) = 1.
The cumulative distribution function is then given by:
F_X(x) = Σ_{x_i∈X(Ω), x_i≤x} P(X = x_i), ∀x ∈ R.
1. the set of discontinuities of f_X is finite and the limits from the left and from the right exist at each point,
2. the integral converges and satisfies ∫_R f_X(x) dx = 1.
The function f_X is called probability density function (p.d.f.) of the continuous random variable X.
3. f_X exists, so F_X is piecewise differentiable and F_X′(x) = f_X(x) at every point x where f_X is continuous.
Proposition 60. Let X, Y be two random variables with p.d.f. f_X, f_Y; then X, Y have the same distribution if and only if f_X = f_Y.
Definition 61 (Expectation). The expectation of a discrete random variable X is E(X) = Σ_{x_i∈X(Ω)} x_i P(X = x_i), if this sum is finite; the expectation of a continuous random variable X with p.d.f. f_X is E(X) = ∫_R x f_X(x) dx, if this integral is finite.
We now give various properties and results that hold for both discrete and continuous random variables.
Proposition 62 (Linearity). The expectation is a linear map, i.e. given two random variables X, Y, for all λ, µ ∈ R we have E(λX + µY) = λE(X) + µE(Y).
✑ In particular we have, for any random variables X, Y which admit an expectation:
• For all a ∈ R: E(a) = a.
• For all a, b ∈ R: E(aX + b) = aE(X) + b.
• A consequence is that for any piecewise continuous functions g, h from X(Ω) to R, E(ag(X) + bh(Y)) = aE(g(X)) + bE(h(Y)), ∀a, b ∈ R.
Proposition 64 (Markov's inequality). Given a non negative random variable X with finite expectation, we have:
∀ε > 0, P(X ≥ ε) ≤ E(X)/ε.
Proposition 67 (Bienaymé-Chebyshev's inequality). Given a random variable X with finite expectation and variance, we have:
∀ε > 0, P(|X − E(X)| ≥ ε) ≤ V(X)/ε².
3.2.3 Median and quantiles
Definition 68. The median of a random variable X is any value m ∈ R so that P(X ≥ m) ≥ 1/2 and P(X ≤ m) ≥ 1/2.
In particular, F(m) ≥ 1/2 and F(x) ≤ 1/2 for every x < m.
Definition 69. For any α ∈ [0, 1], the α-quantile of a random variable X, denoted by q_{X,α} or F_X^−(α), is the generalised inverse of the cumulative distribution function of X at α: q_{X,α} = inf{x ∈ R : F_X(x) ≥ α}.
3.2.4 Mode
Definition 70. The mode of a discrete random variable X with probability mass function pX is
the value x for which pX (x) is maximal. The mode of a continuous random variable X with
probability density function fX is the value x for which fX (x) is maximal.
φ_X(t) = E(e^{itX}),
✑ If X is s.t. E(|X|^m) < +∞, the characteristic function φ_X of X admits derivatives of order k ≤ m at 0 and φ_X^{(k)}(0) = i^k E(X^k) for k ≤ m.
Indeed, formally we have e^{itX} = Σ_{k≥0} (itX)^k / k!, so φ_X(t) = E(e^{itX}) = Σ_{k≥0} i^k t^k E(X^k) / k!.
Remark. It corresponds to the Fourier transform of fX in the case where X admits a density.
In particular, two random variables X, Y have the same distribution if and only if φ_X(t) = φ_Y(t), ∀t ∈ R.
3.4 Summary
In the following table, formulae of the different moments for discrete and continuous real R.V.
are summarized.
Definition | Discrete random variable | Continuous random variable
Ω | {ω₁, . . . , ω_n} (finite or countable) | Ω (uncountable, e.g. R)
Distribution | P_X(x_i) = P(X = x_i), x_i ∈ X(Ω) | f_X(x), x ∈ R
E(X) | Σ_{x_i∈X(Ω)} x_i P(X = x_i) | ∫_R x f_X(x) dx
V(X) | Σ_{x_i∈X(Ω)} (x_i − E(X))² P(X = x_i) | ∫_R (x − E(X))² f_X(x) dx
φ_X(t) | Σ_{x_i∈X(Ω)} e^{itx_i} P(X = x_i) | ∫_R e^{itx} f_X(x) dx
Definition 73. The discrete random variable X has a Bernoulli distribution B(p) of parameter
p ∈ [0, 1] if X(Ω) = {0, 1} and
P(X = 1) = p, P(X = 0) = 1 − p = q,
E(X) = p, V(X) = pq,
φX (t) = q + peit , ∀t ∈ R.
Definition 74. The discrete random variable X has a Binomial distribution B(n, p) of parameters n ∈ N and p ∈ [0, 1] if X(Ω) = {0, 1, . . . , n} and
P(X = k) = C_n^k p^k q^{n−k}, with q = 1 − p and C_n^k = n!/(k!(n−k)!),
E(X) = np, V(X) = npq,
φ_X(t) = (q + pe^{it})^n, ∀t ∈ R.
Poisson distribution P(λ)
Definition 75. The discrete random variable X has a Poisson distribution P(λ) of parameter λ > 0 if X(Ω) = N and
P(X = k) = (λ^k / k!) e^{−λ}, ∀k ∈ N,
E(X) = λ, V(X) = λ,
φ_X(t) = e^{λ(e^{it}−1)}, ∀t ∈ R.
Interpretation: A Poisson distribution is used for the quantification of rare phenomena, e.g. deaths in accidents, phone calls...
Definition 76. The discrete random variable X has a Geometric distribution G(p) of parameter p ∈ ]0, 1] if X(Ω) = N∗ and
P(X = k) = p(1 − p)^{k−1}, ∀k ∈ N∗.
Interpretation: The geometric distribution characterizes the waiting time for the first success when a Bernoulli experiment is repeated independently.
Definition 77. The continuous random variable X has a Uniform distribution U(a, b) of parameters a, b ∈ R, a < b, if
f(x) = (1/(b − a)) 1_{[a,b]}(x), x ∈ R,
E(X) = (a + b)/2, V(X) = (b − a)²/12,
φ_X(t) = (e^{itb} − e^{ita}) / (it(b − a)), ∀t ∈ R.
Definition 78. The continuous random variable X has a Normal distribution N(m, σ²) of parameters m ∈ R, σ > 0, if
f(x) = (1/(σ√(2π))) e^{−(1/2)((x−m)/σ)²}, x ∈ R,
E(X) = m, V(X) = σ²,
φ_X(t) = e^{itm} e^{−t²σ²/2}, ∀t ∈ R.
Proposition 79. Let X ∼ N(m, σ²). Then
1. X ∼ N(m, σ²) if and only if the reduced centered variable Z = (X − m)/σ satisfies Z ∼ N(0, 1).
2. Given z_α > 0 and α ∈ [0, 1], Z satisfies:
P(|Z| ≤ z_α) = 1 − α ⇔ P(Z ≤ z_α) = 1 − α/2.
Definition 80. The continuous random variable X has a Gamma distribution Γ(α, β) of parameters α, β > 0 if
f(x) = (x^{α−1} e^{−x/β} / (β^α Γ(α))) 1_{]0,+∞[}(x), x ∈ R,
E(X) = αβ, V(X) = αβ²,
φ_X(t) = (1 − iβt)^{−α}, ∀t ∈ R.
Definition 81. The continuous random variable X has an exponential distribution E(λ) = Γ(1, 1/λ) of parameter λ > 0 if
f(x) = λe^{−λx} 1_{]0,+∞[}(x), x ∈ R,
E(X) = 1/λ, V(X) = 1/λ²,
φ_X(t) = λ/(λ − it), ∀t ∈ R.
Definition 82. The continuous random variable X has a chi-square distribution χ²(n) = Γ(n/2, 2) of parameter n ∈ N∗ if
f(x) = (x^{n/2−1} e^{−x/2} / (2^{n/2} Γ(n/2))) 1_{]0,+∞[}(x), x ∈ R,
E(X) = n, V(X) = 2n,
φ_X(t) = (1 − 2it)^{−n/2}, ∀t ∈ R.
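The expectations and variances listed above can be cross-checked numerically with `scipy.stats`; the only subtlety is mapping the course's parameters onto scipy's `loc`/`scale`/shape conventions (stated in the comments):

```python
import math
from scipy.stats import poisson, uniform, norm, gamma, expon, chi2

lam = 3.5
assert math.isclose(poisson(lam).mean(), lam) and math.isclose(poisson(lam).var(), lam)

a, b = 2.0, 5.0
U = uniform(loc=a, scale=b - a)          # scipy encodes U(a, b) as loc=a, scale=b-a
assert math.isclose(U.mean(), (a + b) / 2)
assert math.isclose(U.var(), (b - a) ** 2 / 12)

X = norm(loc=1.0, scale=2.0)             # N(m, sigma^2) with m=1, sigma=2
assert math.isclose(X.mean(), 1.0) and math.isclose(X.var(), 4.0)

alpha, beta = 3.0, 2.0
G = gamma(a=alpha, scale=beta)           # Gamma(alpha, beta): shape a, scale beta
assert math.isclose(G.mean(), alpha * beta) and math.isclose(G.var(), alpha * beta ** 2)

lam2 = 0.5
E = expon(scale=1 / lam2)                # E(lambda): scale = 1/lambda
assert math.isclose(E.mean(), 1 / lam2) and math.isclose(E.var(), 1 / lam2 ** 2)

n = 7
assert math.isclose(chi2(n).mean(), n) and math.isclose(chi2(n).var(), 2 * n)
```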
3.6 Statistics: estimation of cdf, pmf, pdf, expectations etc. based on samples
3.6.1 Empirical cumulative distribution functions, empirical probability mass functions
and empirical probability density functions
Let X₁, . . . , X_n be an n-sample of distribution P. Then, the cumulative distribution function associated to (X₁, . . . , X_n) is called the empirical cumulative distribution function and is defined by:
F_n(x) = Card{i ∈ {1, . . . , n}, X_i ≤ x} / n, ∀x ∈ R.
Notice that F_n(x) is random since X₁, . . . , X_n are random variables. When x₁, . . . , x_n are observations of the random variables, then F_n is defined as for discrete random variables, with X(Ω) = {x₁, . . . , x_n} and uniform probability on the x_i's (probability 1/n for each observation x_i).
We can observe, using Python, that in general, Fn is close to FX . This is a result of statistics,
related to convergence of random variables (cf. Chapter 4).
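A minimal implementation of F_n, matching the definition above (the sample values in the assertions are arbitrary):

```python
import numpy as np

def ecdf(sample, x):
    # F_n(x) = Card{i : X_i <= x} / n
    sample = np.asarray(sample)
    return np.mean(sample <= x)

data = [2.0, 1.0, 3.0, 1.0]
assert ecdf(data, 0.5) == 0.0    # no observation is <= 0.5
assert ecdf(data, 1.0) == 0.5    # two of the four observations are <= 1
assert ecdf(data, 10.0) == 1.0   # all observations are <= 10
```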
We first plot the empirical cumulative distribution function of the sample from the Poisson distribution and compare it to the true cdf of the Poisson distribution in Figure 3.6.1; we then plot the empirical cdf of the sample from the Exponential distribution and compare it to the cdf of the Exponential distribution in Figure 3.6.2.
fig, ax = plt.subplots()
ax.plot(np.sort(x1), np.linspace(0, 1, len(x1), endpoint=False), label='empirical cdf')

x = np.linspace(0, 10, num=10000)
ax.plot(x, rv1.cdf(x), label='cdf')

ax.legend()
Then we compare the stem plot of the probability mass function of the Poisson distribution to the stem plot of the sample in Figure 3.6.3, and the probability density of the exponential distribution to the histogram of the sample from the Exponential distribution in Figure 3.6.4.
plt.hist(x2, density=True, label="Histogram")
plt.plot(x, rv2.pdf(x), label="Probability density function")
plt.legend()
Figure 3.6.1: Cumulative distribution function (sample from a Poisson distribution). Figure 3.6.2: Cumulative distribution function (sample from an exponential distribution).
Figure 3.6.3: Probability mass function (sample from a Poisson distribution). Figure 3.6.4: Probability density function (sample from an exponential distribution).
print("\nfirst quantile", np.quantile(x2, 0.25), rv2.ppf(0.25))
print("\nthird quantile", np.quantile(x2, 0.75), rv2.ppf(0.75))
Chapter 4
Convergence of random variables and Limit theorems
Convergence in distribution, almost sure, in probability. Estimators for the mean and the variance (without saying so). Law of large numbers, Central Limit Theorem, Slutsky's theorem, continuous mapping theorem.
4.2 Convergence
Let (Ω, A, P) be a probability space associated to a random experiment E.
Definition 85 (Sequence of random variables). A sequence of real valued random variables is a function noted
X : N × Ω → R, (n, ω) ↦ X(n, ω) = X_n(ω).
In general, we use the notation X_n(ω) instead of X(n, ω) to denote the n-th term of the sequence. We also omit ω, so that X_n is seen as a random variable from Ω to R.
In what follows, a sequence of r.v. is noted {Xn }n∈N or {Xn }n≥n0 when it starts from n0 .
4.2.1 Definitions
Definition 86. The sequence {X_n}_{n∈N} converges in probability towards the random variable X if, and only if:
∀ε > 0, lim_{n→+∞} P(|X_n − X| > ε) = 0.
✑ The convergence in probability is noted X_n →^P X.
Example 87. Let {X_n}_{n∈N} be defined by X_n ∈ {0, n} s.t. P(X_n = 0) = 1 − 1/n and P(X_n = n) = 1/n. For all ε > 0 we have that P(|X_n − 0| > ε) = 0 if ε ≥ n and P(|X_n − 0| > ε) = P(X_n = n) = 1/n otherwise. In each case lim_{n→+∞} P(|X_n − 0| > ε) = 0. Then, X_n →^P 0.
Definition 88. The sequence {X_n}_{n∈N} converges almost surely towards the random variable X if, and only if:
P({ω ∈ Ω : lim_{n→+∞} X_n(ω) = X(ω)}) = 1.
✑ The almost sure convergence is noted X_n →^a.s. X.
Definition 89.
Example 90. Let U be a random variable with uniform distribution on [0, 1]. Let X_n = n 1_{U ≥ 1−1/n}. Almost surely U < 1, so there exists N such that U < 1 − 1/N, and then ∀n ≥ N, U < 1 − 1/n, so X_n = 0. So, X_n converges to 0 a.s.
Proposition 91 (Borel-Cantelli Lemma). Let (A_n)_{n∈N} be a sequence of events. The limit sup of the events is defined by lim sup_n A_n = ⋂_{n≥0} ⋃_{k≥n} A_k. In particular, ω ∈ lim sup_n A_n means that ω ∈ A_k for infinitely many k.
• If Σ_{n≥0} P(A_n) < ∞, then P(lim sup_n A_n) = 0: almost surely, only finitely many of the A_n occur, i.e. A_k is false from some rank on.
• If the events A_n are independent and Σ_{n≥0} P(A_n) = +∞, then P(lim sup_n A_n) = 1: almost surely, infinitely many of the A_n occur.
Definition 92. The sequence {X_n}_{n∈N} converges in distribution towards the random variable X if, and only if:
lim_{n→+∞} F_{X_n}(t) = F_X(t) at every point t where F_X is continuous.
✑ The convergence in distribution is noted X_n →^L X.
Example 93. The random variables X_n defined above converge in distribution to 0, since their cdfs satisfy F_n(t) = (1 − 1/n)1_{0≤t<n} + 1_{t≥n} → 1_{t≥0} for every t ≠ 0 when n → +∞.
Definition 94. The sequence {X_n}_{n∈N} converges in L^p norm towards the random variable X if, and only if:
lim_{n→+∞} E(|X_n − X|^p) = 0.
✑ The convergence in L^p norm is noted X_n →^{L^p} X.
a.s. =⇒ P =⇒ L (4.2.1)
and
Lp =⇒ P. (4.2.2)
• Proof of the implication P =⇒ L in (4.2.1): for every continuous function f bounded by M > 0, E|f(X_n) − f(X)| ≤ ε + 2M P(|f(X_n) − f(X)| > ε), with P(|f(X_n) − f(X)| > ε) → 0 when n → +∞. ■
Theorem 96. Given the sequence of real random variables {X_n}_{n∈N} and a continuous function f : R → R, then
X_n →^a.s. X ⇒ f(X_n) →^a.s. f(X),
and similarly
X_n →^P X ⇒ f(X_n) →^P f(X).
Example 97. If X_n →^a.s. X, then since x ↦ x² is continuous we have X_n² →^a.s. X².
Theorem 98 (Slutsky's theorem). Given two sequences of random variables {X_n}_{n∈N} and {Y_n}_{n∈N}, if X_n →^a.s. c (a constant) and Y_n →^L Y, then X_n Y_n →^L cY.
Theorem 99 (Central Limit Theorem of Laplace). Let {X_i}_{i=1,...,n} be a set of independent and identically distributed random variables with common expectation m and variance σ². Then, if X̄_n = (X₁ + · · · + X_n)/n, we have that Z_n = (X̄_n − m)/(σ/√n) converges in distribution towards a random variable Z which is N(0, 1).
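The theorem can be illustrated by simulation: standardized means of exponential samples (an arbitrary non-normal choice, with m = σ = 1) behave approximately like N(0, 1):

```python
import numpy as np

rng = np.random.default_rng(42)
m, sigma, n, reps = 1.0, 1.0, 1000, 4000   # X_i ~ Exp(1): mean 1, variance 1

# reps independent samples of size n, then one standardized mean per sample
xbar = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
z = (xbar - m) / (sigma / np.sqrt(n))

# Z_n should be approximately standard normal
assert abs(z.mean()) < 0.08
assert abs(z.std() - 1.0) < 0.05
# P(|Z| <= 1.96) should be close to 0.95
assert abs((np.abs(z) <= 1.96).mean() - 0.95) < 0.02
```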
Proposition 100. Let {X_i}_{i=1,...,n} be a set of independent and identically distributed random variables with common expectation m and variance σ². Then, if X̄_n = (X₁ + · · · + X_n)/n and Q² = (1/n) Σ_{i=1}^n (X̄_n − X_i)², we have that Z_n = (X̄_n − m)/(Q/√n) converges in distribution towards a random variable Z which is N(0, 1).
Theorem 101 (Weak law of large numbers). Given {X_i}_{i=1,...,n} a set of independent and identically distributed random variables with finite expectation m and variance σ². Then, the sequence {X̄_n}_{n≥1} defined by X̄_n = (X₁ + · · · + X_n)/n converges in probability towards m, i.e.
X̄_n →^P E(X₁).
Proof Let X̄_n = (X₁ + · · · + X_n)/n. Observe that nE(X̄_n) = Σ_{i=1}^n E(X_i) = nE(X₁) and n²V(X̄_n) = Σ_{i=1}^n V(X_i) = nV(X₁). By the Bienaymé-Chebyshev inequality (cf. Proposition 67), for all ε > 0 we have:
P(|X̄_n − E(X̄_n)| > ε) ≤ V(X̄_n)/ε² ⇒ P(|X̄_n − E(X₁)| > ε) ≤ V(X₁)/(nε²) → 0. ■
Theorem 102 (Strong law of large numbers). Given {X_i}_{i=1,...,n} a set of independent and identically distributed random variables with E(|X_i|) < ∞. Then, the sequence {X̄_n}_{n≥1} defined by X̄_n = (X₁ + · · · + X_n)/n converges almost surely towards the expectation of X_i, i.e.
X̄_n →^a.s. E(X_i) as n → ∞.
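A simulation makes the law of large numbers concrete: the running mean of fair die rolls converges to E(X₁) = 3.5 (die and sample size chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
rolls = rng.integers(1, 7, size=200_000)   # i.i.d. uniform on {1, ..., 6}

# running mean X_bar_n for n = 1, ..., 200000
running_mean = rolls.cumsum() / np.arange(1, rolls.size + 1)

# the final running mean is close to E(X_1) = 3.5
assert abs(running_mean[-1] - 3.5) < 0.02
```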
4.3.3 Approximations
The main applications of the convergence theorems are to approximate some distributions by other ones, depending on the values of their parameters.
Proposition 103. Let {Xn }n∈N be a sequence of random variables which have a binomial
distribution B(n, p). When n → +∞ so that np → λ, λ > 0, the sequence {Xn }n∈N converges
in distribution towards a random variable X which has the Poisson distribution P(λ).
Proposition 104. Let {X_λ}_{λ>0} be a family of random variables which have the Poisson distribution P(λ). When λ → +∞, the family {(X_λ − λ)/√λ}_{λ>0} converges in distribution towards a random variable X which has the standard normal distribution N(0, 1).
Remark: So, for λ big enough, we may approximate the Poisson distribution P(λ) by the normal distribution N(λ, λ). In practice, the approximation of the Poisson distribution P(λ) by a normal distribution is considered satisfactory for λ > 18.
Proposition 105. Let {X_n}_{n∈N} be a sequence of random variables which have the binomial distribution B(n, p). We define U_n = (X_n − np)/√(npq), with q = 1 − p. When n → +∞, the sequence {U_n}_{n∈N} converges in distribution towards a random variable X which has the standard normal distribution N(0, 1).
Remark: So, for n big enough, we may approximate the binomial distribution B(n, p) by the normal distribution N(np, npq). In practice, the approximation of the binomial distribution B(n, p) by a normal distribution is considered satisfactory as soon as np > 5 and nq > 5. We especially apply this approximation when n > 50 and np > 18.
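The quality of the normal approximation can be checked numerically with scipy (n = 100, p = 0.4 are arbitrary values satisfying np > 5 and nq > 5; the half-unit shift is the usual continuity correction):

```python
from math import sqrt
from scipy.stats import binom, norm

n, p = 100, 0.4
q = 1 - p
approx = norm(loc=n * p, scale=sqrt(n * p * q))   # N(np, npq)

# P(X <= k) for B(n,p) vs the continuity-corrected normal cdf at k + 0.5
for k in (30, 40, 50):
    exact = binom(n, p).cdf(k)
    estimate = approx.cdf(k + 0.5)
    assert abs(exact - estimate) < 0.01
```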
Chapter 5
Real random vectors and Estimators in Statistics
Estimators for the mean and the variance. Moment estimators, maximum likelihood estimator. Confidence intervals for the mean and the proportion (Bernoulli, Normal, CLT, Slutsky's theorem).
Definition 108. The distribution function or the cumulative distribution function (c.d.f.) of a random vector X = (X₁, . . . , X_n) is a real valued function F_X : R^n → [0, 1] defined for all x = (x₁, . . . , x_n) ∈ R^n by
F_X(x) = P(X₁ ≤ x₁, . . . , X_n ≤ x_n) = P(⋂_{k=1}^n {X_k ≤ x_k}).
FX defines the joint distribution of the vector X. The marginal distributions of a random
vector X are the distributions of the random variables Xi , i = 1, . . . , n given by their respective
cumulative distribution function FXi .
Proposition 109. The probability distribution of a random vector X is completely defined by its cumulative distribution function F_X. Then, two random vectors X, Y have the same distribution if and only if F_X = F_Y.
For the sake of simplicity, the previous definitions are illustrated in what follows in the particular case n = 2, but they naturally extend to the general case n ∈ N. For the general case, see the booklet!
Discrete case, n = 2
If X = (X1 , X2 ) ∈ X(Ω) is a discrete random vector then X(Ω) is a finite or countable subset
of R2 .
Definition 110 (Joint distribution). The probability distribution of two discrete random variables X₁, X₂, or joint probability distribution of the vector (X₁, X₂), is completely defined by the values
P(X₁ = x₁, X₂ = x₂), (x₁, x₂) ∈ X(Ω),
with Σ_{(x₁,x₂)∈X(Ω)} P(X₁ = x₁, X₂ = x₂) = 1.
Definition 112 (Marginal distribution). The random variables X1 and X2 separately are called
marginal random variables of the pair (X1 , X2 ). We call the distribution of X1 noted PX1 the
marginal distribution of 1 defined as
X
PX1 (x1 ) = P(X1 = x1 ) = P(X1 = x1 , X2 = x2 ).
x2
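As a small sketch in Python (the joint table is a toy example chosen for illustration only), the marginal distribution of X1 is obtained by summing the joint p.m.f. over the values of x2:

```python
# Marginal distribution of X1 obtained by summing the joint p.m.f. over x2.
joint = {            # toy values of P(X1 = x1, X2 = x2)
    (0, 0): 0.1, (0, 1): 0.2,
    (1, 0): 0.3, (1, 1): 0.4,
}

marginal_x1 = {}
for (x1, x2), p in joint.items():
    marginal_x1[x1] = marginal_x1.get(x1, 0.0) + p

# P(X1 = 0) = 0.3 and P(X1 = 1) = 0.7 (up to floating-point rounding)
print({x1: round(p, 10) for x1, p in marginal_x1.items()})
```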
Continuous case, n = 2
If X = (X1, X2) is a continuous random vector, then it takes its values in a subset
X(Ω) ⊂ R².
Definition 114 (Probability density function). We say that X = (X1, X2) is continuous if there
exists a piecewise continuous non-negative function fX : R² → R⁺ such that
1. $\int_{\mathbb{R}^2} f_X(s_1, s_2)\, ds_1\, ds_2 = 1$;
2. the cumulative distribution function is given for all x = (x1, x2) ∈ R² by
$$F_X(x) = \int_{-\infty}^{x_2} \int_{-\infty}^{x_1} f_X(s_1, s_2)\, ds_1\, ds_2.$$
The function fX is called the probability density function (p.d.f.) of the continuous random
vector X = (X1, X2). Moreover,
$$\frac{\partial^2}{\partial x_1 \partial x_2} F_X = f_X, \quad \text{where } f_X \text{ is continuous.}$$
Definition 115. Given X = (X1, X2) a random vector with p.d.f. fX, the marginal cumulative
distribution function (c.d.f.) of X1 is FX1, defined for all x1 ∈ R by
$$F_{X_1}(x_1) = \int_{-\infty}^{x_1} \int_{\mathbb{R}} f_X(s_1, s_2)\, ds_2\, ds_1.$$
Proposition 116. The p.d.f. fX completely defines the probability distribution of (X1, X2).
The conditional density of X1 given X2 = x2 is
$$f_{X_1 | X_2 = x_2}(x_1) = \frac{f_X(x_1, x_2)}{f_{X_2}(x_2)}, \quad x_1 \in \mathbb{R}.$$
• if (X1, X2) is discrete,
$$E(g(X_1, X_2)) = \sum_{(x_1, x_2) \in X(\Omega)} g(x_1, x_2)\, P(X_1 = x_1, X_2 = x_2);$$
• if (X1, X2) is continuous,
$$E(g(X_1, X_2)) = \int_{\mathbb{R}^2} g(x_1, x_2)\, f_X(x_1, x_2)\, dx_1\, dx_2.$$
Definition 119. The covariance of two r.v. X1, X2 with finite expectation and variance is defined
by:
$$\operatorname{Cov}(X_1, X_2) = E[(X_1 - E(X_1))(X_2 - E(X_2))] = E(X_1 X_2) - E(X_1)E(X_2).$$
Theorem 120. Let X1, . . . , Xn be jointly distributed random variables with finite expectation and
variance. Then
$$V(X_1 + \dots + X_n) = \sum_{i=1}^{n} V(X_i) + \sum_{i \ne j} \operatorname{Cov}(X_i, X_j).$$
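Theorem 120 can be checked numerically on a simulated pair (a sketch in Python; the correlated pair is constructed only for illustration). For n = 2, the identity reads V(X1 + X2) = V(X1) + V(X2) + 2 Cov(X1, X2), and it holds exactly for the empirical moments:

```python
# Numerical check of V(X1 + X2) = V(X1) + V(X2) + 2 Cov(X1, X2)
# on a simulated correlated pair.
import random

random.seed(0)
x1 = [random.gauss(0, 1) for _ in range(100_000)]
x2 = [a + random.gauss(0, 1) for a in x1]        # correlated with x1

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((a - m) ** 2 for a in v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

s = [a + b for a, b in zip(x1, x2)]
lhs = var(s)
rhs = var(x1) + var(x2) + 2 * cov(x1, x2)
print(f"V(X1+X2) = {lhs:.3f}, V(X1) + V(X2) + 2 Cov(X1, X2) = {rhs:.3f}")
```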
Definition 121. The characteristic function of a random vector X = (X1, X2) is the function
φX : R² → C defined by:
$$\varphi_X(t_1, t_2) = E\left(e^{i(t_1 X_1 + t_2 X_2)}\right).$$
The following table gathers the formulae for the joint and marginal distributions and the different
moments, for both discrete and continuous real-valued random vectors of dimension n ∈ N*.
Discrete random vector:
• distribution: $P(X_1 = x_1, \dots, X_n = x_n)$;
• expectation: $E(g(X)) = \sum_{(x_1, \dots, x_n) \in X(\Omega)} g(x_1, \dots, x_n)\, P(X_1 = x_1, \dots, X_n = x_n)$.
Continuous random vector:
• distribution: $P(X \in B) = \int_B f_X(x_1, \dots, x_n)\, dx_1 \dots dx_n$, for B a box in $\mathbb{R}^n$;
• cumulative distribution function: $F_X(x_1, \dots, x_n) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_n} f_X(t_1, \dots, t_n)\, dt_1 \dots dt_n$, with $F_X$ continuous, and $f_X(x_1, \dots, x_n) = \frac{\partial^n}{\partial x_1 \cdots \partial x_n} F_X(x_1, \dots, x_n)$ where $f_X$ is continuous;
• expectation: $E(g(X)) = \int_{\mathbb{R}^n} g(x_1, \dots, x_n)\, f_X(x_1, \dots, x_n)\, dx_1 \dots dx_n$.
Definition 122. The distribution of X given Y is defined for all x ∈ Rⁿ and y ∈ Rᵖ such that
P(Y = y) ≠ 0 by
$$P(X = x \mid Y = y) = \frac{P(X = x, Y = y)}{P(Y = y)}.$$
In addition, for any box A in Rⁿ we have
$$P(X \in A \mid Y = y) = \sum_{x \in A} P(X = x \mid Y = y).$$
Definition 123. The distribution of X given Y is defined for all x ∈ Rⁿ and y ∈ Rᵖ such that
fY(y) ≠ 0 by the conditional probability density function of X given Y:
$$f_{X | Y = y}(x) = \frac{f_{X,Y}(x, y)}{f_Y(y)}.$$
In addition, for A a box in Rⁿ we have
$$P(X \in A \mid Y = y) = \int_A f_{X | Y = y}(x)\, dx_1 \dots dx_n.$$
Property: Given A ∈ R^{p×n} and B ∈ Rᵖ, then E(AX + B) = A E(X) + B.
Proposition 128. Let X1, . . . , Xn be independent random variables with finite expectation and
variance. We have the following properties:
1. E(Xi Xj) = E(Xi) E(Xj) for i ≠ j,
2. Cov(Xi, Xj) = 0 for i ≠ j,
3. V(X1 + · · · + Xn) = V(X1) + · · · + V(Xn).
(Here FX denotes the joint cumulative distribution function of (X1, . . . , Xn).)
5.3 Usual probability distributions
Multinomial distribution B(n; p1 , . . . , pk )
Definition 129. The discrete random vector X = (X1, . . . , Xk) ∈ Rᵏ has the multinomial distribution
B(n; p1, . . . , pk) of parameters pi ∈ [0, 1] such that p1 + · · · + pk = 1 and n ∈ N if, for all
n1, . . . , nk ∈ N with n1 + · · · + nk = n,
$$P(X_1 = n_1, X_2 = n_2, \dots, X_k = n_k) = \frac{n!}{n_1! \cdots n_k!}\, p_1^{n_1} \cdots p_k^{n_k}.$$
Interpretation: This distribution characterizes an experiment, repeated n independent times,
that may lead to any of k mutually exclusive outcomes with probabilities p1, . . . , pk.
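The multinomial p.m.f. above can be evaluated directly (a sketch in Python; the counts and probabilities are a toy example):

```python
# Multinomial p.m.f.: P(X1 = n1, ..., Xk = nk) = n!/(n1!...nk!) p1^n1 ... pk^nk.
from math import factorial, prod

def multinomial_pmf(counts, probs):
    n = sum(counts)
    coef = factorial(n) // prod(factorial(c) for c in counts)
    return coef * prod(p ** c for c, p in zip(counts, probs))

# 10 independent trials with 3 outcomes of probabilities 0.5, 0.3, 0.2
p = multinomial_pmf([5, 3, 2], [0.5, 0.3, 0.2])
print(f"P(X1=5, X2=3, X3=2) = {p:.4f}")
```

Here the multinomial coefficient 10!/(5! 3! 2!) = 2520 multiplies the product of the outcome probabilities.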
Definition 130. The continuous random vector X = (X1, . . . , Xn) ∈ Rⁿ has the multivariate
normal distribution Nn(m, Σ), where m = (m1, . . . , mn) ∈ Rⁿ and Σ = [σij] ∈ R^{n×n} is a
symmetric positive definite matrix, if for all x = (x1, . . . , xn) ∈ Rⁿ
$$f(x_1, \dots, x_n) = \frac{1}{(2\pi)^{n/2} \sqrt{\det \Sigma}} \exp\left[-\frac{1}{2}(x - m)^T \Sigma^{-1} (x - m)\right].$$
Equivalently, if and only if its moment generating function is, for all t = (t1, . . . , tn) ∈ Rⁿ,
$$G_X(t_1, \dots, t_n) = \exp(t^T m) \exp\left(\frac{1}{2} t^T \Sigma t\right),$$
or its characteristic function reads
$$\varphi_X(t_1, \dots, t_n) = \exp(i t^T m) \exp\left(-\frac{1}{2} t^T \Sigma t\right).$$
Proposition 131. If the random vector X = (X1 , . . . , Xn ) has the multivariate normal
distribution Nn (m, Σ) and if A ∈ Rp×n of rank p (p ≤ n), then the random vector Y = AX + B,
where B ∈ Rp , has the multivariate normal distribution Np (Am + B, AΣAT ).
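Proposition 131 can be made concrete for n = p = 2 by computing the transformed mean A m + B and covariance A Σ Aᵀ explicitly (a sketch in Python with plain lists; the values of m, Σ, A, B are chosen for illustration only):

```python
# Mean and covariance of Y = A X + B when X ~ N2(m, Sigma) (Proposition 131).
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def mat_mat(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

m = [1.0, 2.0]                      # mean of X
Sigma = [[2.0, 0.5], [0.5, 1.0]]    # covariance of X
A = [[1.0, 1.0], [1.0, -1.0]]       # linear map of rank 2
B = [0.0, 3.0]

mean_Y = [a + b for a, b in zip(mat_vec(A, m), B)]   # A m + B
cov_Y = mat_mat(mat_mat(A, Sigma), transpose(A))     # A Sigma A^T
print("mean of Y:", mean_Y)     # [3.0, 2.0]
print("cov of Y :", cov_Y)      # [[4.0, 1.0], [1.0, 2.0]]
```

Note that the first diagonal entry of A Σ Aᵀ, namely V(X1 + X2) = 2 + 1 + 2 × 0.5 = 4, is consistent with Theorem 120.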
Proposition 132 (Student's distribution). Let Y and Z be two independent random variables
such that Y is N(0, 1) and Z is χ²(n). Then $X = \dfrac{Y}{\sqrt{Z/n}}$ has a t-distribution with n degrees of freedom,
t(n), whose p.d.f. is:
$$f_X(x) = \frac{\Gamma\left(\frac{n+1}{2}\right)}{\sqrt{n\pi}\, \Gamma\left(\frac{n}{2}\right)} \left(1 + \frac{x^2}{n}\right)^{-\frac{n+1}{2}}, \quad \forall x \in \mathbb{R}.$$
Proposition 133 (Fisher–Snedecor distribution). Let Y and Z be two independent
random variables such that Y is χ²(n1) and Z is χ²(n2). Then
$$X = \frac{Y/n_1}{Z/n_2} = \frac{n_2\, Y}{n_1\, Z}$$
has an F-distribution with n1 and n2 degrees of freedom, F(n1, n2), whose p.d.f. is:
$$f_X(x) = \frac{\left(\frac{n_1}{n_2}\right)^{\frac{n_1}{2}} \Gamma\left(\frac{n_1 + n_2}{2}\right)}{\Gamma\left(\frac{n_1}{2}\right) \Gamma\left(\frac{n_2}{2}\right)} \cdot \frac{x^{\frac{n_1}{2} - 1}}{\left(\frac{n_1}{n_2} x + 1\right)^{\frac{n_1 + n_2}{2}}}, \quad x > 0.$$
Theorem 134. Let X1 , . . . , Xk be independent random variables such that Xi has the binomial
distribution B(ni , p). Then X = X1 + · · · + Xk has the binomial distribution B(n1 + · · · + nk , p).
Theorem 135. Let X1 , . . . , Xn be independent random variables such that Xi has the Poisson
distribution P(λi ) for i = 1, . . . , n. Then X = X1 + · · · + Xn has the Poisson distribution
P(λ1 + · · · + λn ).
Theorem 136. If X1, . . . , Xn are independent random variables and if Xi is N(mi, σi²) for i =
1, . . . , n, then $X = \sum_{i=1}^{n} \lambda_i X_i$, where λi is an arbitrary real number for i = 1, . . . , n, has the
normal distribution N(m, σ²) with
$$m = \sum_{i=1}^{n} \lambda_i m_i, \qquad \sigma^2 = \sum_{i=1}^{n} \lambda_i^2 \sigma_i^2.$$
Theorem 137. Let X1 , . . . , Xk be independent random variables such that Xi is χ2 (ni ). Then
X = X1 + · · · + Xk has the chi-square distribution χ2 (n1 + · · · + nk ).
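The stability results above can be verified by simulation. For instance, a Monte Carlo sketch in Python of Theorem 136 (the coefficients and parameters are chosen for illustration only): with X1 ~ N(1, 4), X2 ~ N(3, 1) independent, X = 2X1 − X2 should be N(2·1 − 3, 4·4 + 1·1) = N(−1, 17).

```python
# Monte Carlo check of Theorem 136 for X = 2*X1 - X2, X1 ~ N(1, 4), X2 ~ N(3, 1).
import random

random.seed(1)
N = 200_000
# random.gauss takes the standard deviation, hence sigma = 2 for variance 4
xs = [2 * random.gauss(1, 2) - random.gauss(3, 1) for _ in range(N)]

m_hat = sum(xs) / N
v_hat = sum((x - m_hat) ** 2 for x in xs) / N
print(f"empirical mean = {m_hat:.2f} (theory -1), variance = {v_hat:.2f} (theory 17)")
```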
5.5 Vocabulary
The problem of estimation consists in finding an approximate value or range for the unknown
parameter θ ∈ Θ ⊂ R on which a probability distribution L = L(θ) depends. To estimate³ θ,
we perform n independent observations, or realisations, denoted x1, . . . , xn, of random variables
X1, . . . , Xn distributed as the random variable X having the distribution L. The estimation
consists in finding a "numerical value θ̂n(x1, . . . , xn) of θ" from these observed values or
realizations⁴ x1, . . . , xn.
Definition 138 (Sample set). Let X be a random variable of probability distribution L(θ). The
set {X1, . . . , Xn} of n independent random variables, with each Xi having the same distribution as
X, is called a random sample of size n of X, or of L(θ).
• unbiased if E(θ̂n) = θ,
Proposition 142. If θ̂n is unbiased then θ̂n → θ in L² as n → ∞ if and only if V(θ̂n) → 0 as n → ∞.
³ Note that we say ESTIMATE, which is a particular term used by statisticians.
⁴ That is to say, the values we get when the random experiment is realized.
Proof 1. The r.h.s. of the equality expands as
$$V(\hat\theta_n) + |E(\hat\theta_n) - \theta|^2 = E(\hat\theta_n^2) - E(\hat\theta_n)^2 + E(\hat\theta_n)^2 - 2E(\hat\theta_n)\theta + \theta^2 = E(\hat\theta_n^2 - 2\hat\theta_n\theta + \theta^2) = E(|\hat\theta_n - \theta|^2),$$
where the second equality follows from the linearity of E.
2. If θ̂n is unbiased then E(θ̂n) = θ, so |E(θ̂n) − θ| = 0 and V(θ̂n) = E(|θ̂n − θ|²), which leads
to the result when n → ∞.
■
5.6.2 Examples
Proposition 143. Consider a real random variable X with finite expectation that has the
distribution L of unknown mean value m. Then the mean value of the random sample of size
n of X, $\hat m_n := \bar X_n = \frac{1}{n}(X_1 + \cdots + X_n)$, is an unbiased and consistent estimator of m.
Proposition 144. Consider a real random variable X with finite expectation and variance that
has the distribution L of unknown variance σ². Then the variance of the random sample
of size n of X,
$$\hat\sigma_n^2 = \frac{1}{n} \sum_{i=1}^{n} (X_i - \bar X)^2, \quad \text{where } \bar X = \frac{1}{n}(X_1 + \cdots + X_n),$$
is a biased estimator of σ². However,
$$S^2 = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar X)^2$$
is an unbiased estimator of σ².
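Propositions 143 and 144 can be illustrated by simulation (a sketch in Python; the true variance σ² = 4 and the sample size n = 5 are chosen for illustration only). Averaging over many samples, the divide-by-n estimator concentrates near σ²(n − 1)/n, while S² concentrates near σ²:

```python
# Bias of sigma_hat_n^2 (divide by n) versus S^2 (divide by n - 1),
# averaged over many samples of size n = 5 drawn from N(0, 4).
import random

random.seed(2)
true_var, n, reps = 4.0, 5, 50_000
sum_biased = sum_unbiased = 0.0
for _ in range(reps):
    sample = [random.gauss(0, 2) for _ in range(n)]
    xbar = sum(sample) / n
    ss = sum((x - xbar) ** 2 for x in sample)
    sum_biased += ss / n
    sum_unbiased += ss / (n - 1)

print(f"average of sigma_hat_n^2 ~ {sum_biased / reps:.3f} (theory {true_var * (n - 1) / n})")
print(f"average of S^2           ~ {sum_unbiased / reps:.3f} (theory {true_var})")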
Method. We can find an estimator θ̂ of θ by maximizing the likelihood function. This can be
understood as finding the value of the parameter θ for which the probability of observing
the values {x1, . . . , xn} is the highest:
$$L(x, \hat\theta_n(x)) = \max_{\theta \in \Theta} L(x, \theta),$$
or equivalently⁵,
$$\log L(x, \hat\theta_n(x)) = \max_{\theta \in \Theta} \log L(x, \theta).$$
When θ ↦ log L(x, θ) is differentiable, θ̂(x) satisfies⁶
$$\frac{d}{d\theta} \log L(x, \hat\theta(x)) = 0. \tag{5.6.1}$$
⁵ As x ↦ log(x) is an increasing function. This second formula is often preferred in practice as it simplifies
calculations.
⁶ It means that θ̂(x) is a critical point.
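The method above can be sketched in Python for a Bernoulli(p) sample (the data are a toy example; a grid search stands in for solving (5.6.1) analytically). The log-likelihood is maximized at the sample mean, the well-known MLE for a Bernoulli parameter:

```python
# Maximum likelihood for a Bernoulli(p) sample:
# log L(x, p) = sum_i [x_i log p + (1 - x_i) log(1 - p)],
# maximized here by a simple grid search over p in (0, 1).
from math import log

x = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]   # observed sample: 7 successes out of 10

def log_likelihood(p):
    return sum(xi * log(p) + (1 - xi) * log(1 - p) for xi in x)

grid = [k / 1000 for k in range(1, 1000)]
p_hat = max(grid, key=log_likelihood)
print(f"MLE p_hat = {p_hat}  (sample mean = {sum(x) / len(x)})")
```

The grid argmax coincides with the sample mean 0.7, as the critical-point equation (5.6.1) predicts for this model.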
Chapter 6
Confidence intervals and Statistical tests
Definition 150. Given the sample (X1, . . . , Xn), we say that I is a 100(1 − α)% confidence
interval if
$$P(\theta \in I(X_1, \dots, X_n)) = 1 - \alpha,$$
where 1 − α is the confidence level, α ∈ (0, 1) (equivalently, θ belongs to I with probability 1 − α).
✑ Here α represents the probability (risk) that the confidence interval I does not contain the
true value of the parameter θ. Given observed values x = (x1, . . . , xn) of X = (X1, . . . , Xn),
I(x) = I(x1, . . . , xn) gives an estimated range for θ.
CI for a proportion Let p ∈ [0, 1], and X1, . . . , Xn ∼ B(p) i.i.d. The estimator of p is the
proportion
$$\hat p_n = \frac{1}{n} \sum_{i=1}^{n} X_i.$$
According to the central limit theorem, the Slutsky theorem and the continuous mapping theorem,
an asymptotic confidence interval for p with level 1 − α is
$$\hat I = \hat p_n \pm z_{\frac{\alpha}{2}} \sqrt{\frac{\hat p_n (1 - \hat p_n)}{n}}.$$
CI for normal rv, σ unknown Let X1, . . . , Xn ∼ N(θ, σ²) i.i.d. Then
$$\sqrt{n}\, \frac{\bar X_n - \theta}{S_n} \sim T(n - 1),$$
so a confidence interval for θ with level 1 − α is $\hat I = \bar X_n \pm t^{(n-1)}_{\frac{\alpha}{2}} \frac{S_n}{\sqrt{n}}$, where $t^{(n-1)}_{\frac{\alpha}{2}}$ is the
quantile of order $1 - \frac{\alpha}{2}$ of T(n − 1).
CI for large samples, based on the CLT Let X1, . . . , Xn be an n-sample of real random
variables with expectation θ, and with n large (n ≥ 50).
According to the central limit theorem, the Slutsky theorem and the continuous mapping theorem,
$$\sqrt{n}\, \frac{\bar X_n - \theta}{S_n} \sim N(0, 1).$$
Then an asymptotic confidence interval for θ with level 1 − α is $\hat I = \bar X_n \pm z_{\frac{\alpha}{2}} \frac{S_n}{\sqrt{n}}$ when σ is
unknown.
When σ is known, an asymptotic confidence interval for θ with level 1 − α is $\hat I = \bar X_n \pm z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}}$.
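The large-sample confidence interval for the mean can be computed directly (a sketch in Python; the data set and the two-sided 95% quantile z ≈ 1.96 are chosen for illustration only):

```python
# Asymptotic 95% confidence interval for the mean, sigma unknown:
# I = x_bar +/- z_{alpha/2} * s_n / sqrt(n), with z_{0.025} ~ 1.96.
from math import sqrt

def asymptotic_ci(data, z=1.96):
    n = len(data)
    xbar = sum(data) / n
    s2 = sum((x - xbar) ** 2 for x in data) / (n - 1)   # unbiased S^2
    half = z * sqrt(s2) / sqrt(n)
    return xbar - half, xbar + half

data = [4.1, 5.2, 3.8, 4.9, 5.0, 4.4, 4.7, 5.1, 4.3, 4.6] * 5   # n = 50
lo, hi = asymptotic_ci(data)
print(f"95% CI for the mean: [{lo:.3f}, {hi:.3f}]")
```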
6.2.1 Definitions
Definition 151. A statistical hypothesis is an assertion about the distribution Pθ (or L(θ)).
There are two kinds of statistical hypotheses.
Definition 152. A test of a statistical hypothesis (H0) against an alternative hypothesis (H1) is
a rule which, from the observed values (x1, . . . , xn)ᵀ of a sample (X1, . . . , Xn)ᵀ of Pθ (or
L(θ)), leads to the decision to accept the hypothesis (H0) (and therefore to reject (H1)), or
to reject the hypothesis (H0) in favour of (H1).
6.2.2 Method
① Define an acceptance region To realize a test, we consider the image of a random sample
of X, X(Ω)ⁿ, and we define an acceptance region A ⊂ X(Ω)ⁿ. Moreover, we denote by C =
X(Ω)ⁿ \ A the critical region.
• x ∈ A ⇒ we accept (H0) (reject (H1)),
• x ∉ A ⇒ we reject (H0) (accept (H1)).
Remark: The decision to accept (H0) does not mean that the hypothesis (H0) is true. It
means that the observations do not lead us to reject (H0) in favour of (H1).
Remark: If α is given, small, in (0, 1), the test is better when its power is as large as possible,
so that the error of type II occurs with a very small probability.
③ Decision table According to the real situation, we have the following table:
Definition 153. The p-value is the probability, under H0, of observing something at least as
extreme as the observed statistic. More precisely, pval = inf{α | the test of level α rejects H0}.
H0 is rejected at level α if and only if pval ≤ α.
Examples:
• For a test of the form ϕ = 1_{|T| ≥ cα}, we have pval = P(|T| ≥ |t_obs|), computed under H0.
Theorem 154. Under H0, the p-value follows the uniform distribution on [0, 1].
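Theorem 154 can be illustrated by simulation (a sketch in Python; the two-sided Gaussian test is a toy setting). Under H0 the statistic Z is N(0, 1), the p-value 2(1 − Φ(|Z|)) is uniform on [0, 1], and in particular the fraction of p-values below 0.05 is close to 0.05:

```python
# Under H0, the p-value 2*(1 - Phi(|Z|)) of a two-sided Gaussian test
# is uniform on [0, 1]; here we check P(pval <= 0.05) ~ 0.05.
import random
from math import erf, sqrt

random.seed(3)

def phi(x):
    """Standard normal c.d.f."""
    return 0.5 * (1 + erf(x / sqrt(2)))

pvals = [2 * (1 - phi(abs(random.gauss(0, 1)))) for _ in range(100_000)]
frac = sum(p <= 0.05 for p in pvals) / len(pvals)
print(f"fraction of p-values <= 0.05 under H0: {frac:.3f}")
```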
Example 155. We consider that 80 percent of the population eats chocolate every week. After
the Olympic Games period, during which a lot of advertising was done for some chocolates,
we denote by p the proportion of French people who ate chocolate during the second week
of the Olympic Games.
We want to decide whether the Olympic Games have had an influence on the consumption of
chocolate in the French population, based on the interrogation of n = 100 people about their
consumption, with a probability of mistake of α = 0.05.
Therefore, we build the test of H0: "p = 0.8" vs H1: "p > 0.8". Hypothesis H1 means that the
Olympic Games have had a positive influence on the consumption of chocolate. This is the
hypothesis that we want to demonstrate, if the data are sufficient to reject the hypothesis that
the Olympic Games did not influence chocolate consumption.
Let X1, X2, . . . , Xn be the answers of the n people interrogated (Xi = 0 if the i-th person has
not eaten chocolate during the second week, and Xi = 1 if the i-th person has eaten chocolate).
The decision is based on the test statistic
$$T_n = \hat p_n = \frac{1}{n} \sum_{i=1}^{n} X_i,$$
the proportion of people, among the n people, who have eaten chocolate.
The hypothesis H0 will be rejected when Tn > cα for some critical value cα. Therefore, the
test is ϕ = 1_{Tn ≥ cα}.
The critical value cα is chosen so that the first type error is at most α, that is, so that
$$P(\phi = 1 \mid H_0) = P(T_n \ge c_\alpha \mid H_0) = \alpha.$$
Under hypothesis H0, $Z_n = \sqrt{n}\, \frac{T_n - 0.8}{\sqrt{0.8 \times 0.2}}$ converges to the N(0, 1) distribution, according to
the central limit theorem, the Slutsky theorem and the continuous mapping theorem. So
P(Tn ≥ cα | H0) = α is equivalent to $P\left(Z_n \ge \sqrt{n}\, \frac{c_\alpha - 0.8}{\sqrt{0.8 \times 0.2}}\right) = 0.05$. According to the table of
the standard normal distribution, $\sqrt{n}\, \frac{c_\alpha - 0.8}{\sqrt{0.8 \times 0.2}} = 1.645$. So $c_\alpha = 0.8 + 1.645 \times \frac{\sqrt{0.8 \times 0.2}}{\sqrt{n}} \approx 0.866$.
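The critical value of Example 155 can be computed numerically (a sketch in Python; for a one-sided test at level α = 0.05 the relevant standard-normal quantile is z ≈ 1.645, and t_obs is a hypothetical observed proportion introduced for illustration):

```python
# Critical value for the one-sided test of Example 155 at level alpha = 0.05:
# c_alpha = 0.8 + z_alpha * sqrt(0.8 * 0.2 / n), with z_0.05 ~ 1.645.
from math import sqrt

n, p0, z_alpha = 100, 0.8, 1.645
c_alpha = p0 + z_alpha * sqrt(p0 * (1 - p0) / n)
print(f"critical value c_alpha = {c_alpha:.4f}")

t_obs = 0.87   # hypothetical observed proportion of chocolate eaters
print("reject H0" if t_obs >= c_alpha else "do not reject H0")
```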
Chapter 7
Statistical tables
In tables 7.1.1 and 7.1.2 are summarized values of the cumulative distribution function of the
binomial distribution B(n, p) of parameters n ∈ N⁺, p ∈ [0, 1], defined as:
$$F(c) = P(X \le c) = \sum_{k=0}^{c} C_n^k\, p^k (1 - p)^{n-k}, \quad \forall c \in \mathbb{N}^+.$$
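Any entry of the tables can be reproduced from this formula (a sketch in Python; the entry n = 5, c = 2, p = 0.20 is chosen arbitrarily):

```python
# Reproducing one entry of table 7.1: F(2) for B(5, 0.20) should be 0.9421.
from math import comb

def binom_cdf(c, n, p):
    """F(c) = P(X <= c) for X ~ B(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

print(f"F(2) for B(5, 0.20) = {binom_cdf(2, 5, 0.20):.4f}")   # 0.9421
```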
n c p 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50
5 0 0.5905 0.4437 0.3277 0.2373 0.1681 0.1160 0.0778 0.0503 0.0312
1 0.9185 0.8352 0.7373 0.6328 0.5282 0.4284 0.3370 0.2562 0.1875
2 0.9914 0.9734 0.9421 0.8965 0.8369 0.7648 0.6826 0.5931 0.5000
3 0.9995 0.9978 0.9933 0.9844 0.9692 0.9460 0.9130 0.8688 0.8125
4 1.0000 0.9999 0.9997 0.9990 0.9976 0.9947 0.9898 0.9815 0.9688
5 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
10 0 0.3487 0.1969 0.1074 0.0563 0.0282 0.0135 0.0060 0.0025 0.0010
1 0.7361 0.5443 0.3758 0.2440 0.1493 0.0860 0.0464 0.0233 0.0107
2 0.9298 0.8202 0.6778 0.5256 0.3828 0.2616 0.1673 0.0996 0.0547
3 0.9872 0.9500 0.8791 0.7759 0.6496 0.5138 0.3823 0.2660 0.1719
4 0.9984 0.9901 0.9672 0.9219 0.8497 0.7515 0.6331 0.5044 0.3770
5 0.9999 0.9986 0.9936 0.9803 0.9527 0.9051 0.8338 0.7384 0.6230
6 1.0000 0.9999 0.9991 0.9965 0.9894 0.9740 0.9452 0.8980 0.8281
7 1.0000 1.0000 0.9999 0.9996 0.9984 0.9952 0.9877 0.9726 0.9453
8 1.0000 1.0000 1.0000 1.0000 0.9999 0.9995 0.9983 0.9955 0.9893
9 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9997 0.9990
10 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
15 0 0.2059 0.0874 0.0352 0.0134 0.0047 0.0016 0.0005 0.0001 0.0000
1 0.5490 0.3186 0.1671 0.0802 0.0353 0.0142 0.0052 0.0017 0.0005
2 0.8159 0.6042 0.3980 0.2361 0.1268 0.0617 0.0271 0.0107 0.0037
3 0.9444 0.8227 0.6482 0.4613 0.2969 0.1727 0.0905 0.0424 0.0176
4 0.9873 0.9383 0.8358 0.6865 0.5155 0.3519 0.2173 0.1204 0.0592
5 0.9978 0.9832 0.9389 0.8516 0.7216 0.5643 0.4032 0.2608 0.1509
6 0.9997 0.9964 0.9819 0.9434 0.8689 0.7548 0.6098 0.4522 0.3036
7 1.0000 0.9994 0.9958 0.9827 0.9500 0.8868 0.7869 0.6535 0.5000
8 1.0000 0.9999 0.9992 0.9958 0.9848 0.9578 0.9050 0.8182 0.6964
9 1.0000 1.0000 0.9999 0.9992 0.9963 0.9876 0.9662 0.9231 0.8491
10 1.0000 1.0000 1.0000 0.9999 0.9993 0.9972 0.9907 0.9745 0.9408
11 1.0000 1.0000 1.0000 1.0000 0.9999 0.9995 0.9981 0.9937 0.9824
12 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9997 0.9989 0.9963
13 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9995
14 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
20 0 0.1216 0.0388 0.0115 0.0032 0.0008 0.0001 0.0000 0.0000 0.0000
1 0.3917 0.1756 0.0692 0.0243 0.0076 0.0021 0.0005 0.0001 0.0000
2 0.6769 0.4049 0.2061 0.0913 0.0355 0.0121 0.0036 0.0009 0.0002
3 0.8670 0.6477 0.4114 0.2252 0.1071 0.0444 0.0160 0.0049 0.0013
4 0.9568 0.8298 0.6296 0.4148 0.2375 0.1182 0.0510 0.0189 0.0059
5 0.9887 0.9327 0.8042 0.6172 0.4164 0.2454 0.1256 0.0553 0.0207
6 0.9976 0.9781 0.9133 0.7858 0.6080 0.4166 0.2500 0.1299 0.0577
7 0.9996 0.9941 0.9679 0.8982 0.7723 0.6010 0.4159 0.2520 0.1316
8 0.9999 0.9987 0.9900 0.9591 0.8867 0.7624 0.5956 0.4143 0.2517
9 1.0000 0.9998 0.9974 0.9861 0.9520 0.8782 0.7553 0.5914 0.4119
10 1.0000 1.0000 0.9994 0.9961 0.9829 0.9468 0.8725 0.7507 0.5881
11 1.0000 1.0000 0.9999 0.9991 0.9949 0.9804 0.9435 0.8692 0.7483
12 1.0000 1.0000 1.0000 0.9998 0.9987 0.9940 0.9790 0.9420 0.8684
13 1.0000 1.0000 1.0000 1.0000 0.9997 0.9985 0.9935 0.9786 0.9423
14 1.0000 1.0000 1.0000 1.0000 1.0000 0.9997 0.9984 0.9936 0.9793
15 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9997 0.9985 0.9941
16 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9997 0.9987
17 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9998
18 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
In tables 7.2.1 and 7.2.2 are summarized values of the cumulative distribution function of the
Poisson distribution P(λ) for different values of the parameter λ > 0, defined as:
$$F(c) = P(X \le c) = \sum_{k=0}^{c} e^{-\lambda}\, \frac{\lambda^k}{k!}, \quad \forall c \in \mathbb{N}^+.$$
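As with the binomial tables, each entry follows from the formula (a sketch in Python; the entry c = 2, λ = 5 is chosen arbitrarily):

```python
# Reproducing one entry of table 7.2: F(2) for P(5) should be 0.125.
from math import exp, factorial

def poisson_cdf(c, lam):
    """F(c) = P(X <= c) for X ~ P(lam)."""
    return sum(exp(-lam) * lam**k / factorial(k) for k in range(c + 1))

print(f"F(2) for P(5) = {poisson_cdf(2, 5):.3f}")   # 0.125
```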
c 4.2 4.4 4.6 4.8 5 5.2 5.4 5.6 5.8 6.
0 0.015 0.012 0.010 0.008 0.007 0.006 0.005 0.004 0.003 0.002
1 0.078 0.066 0.056 0.048 0.040 0.034 0.029 0.024 0.021 0.017
2 0.210 0.185 0.163 0.143 0.125 0.109 0.095 0.082 0.072 0.062
3 0.395 0.359 0.326 0.294 0.265 0.238 0.213 0.191 0.170 0.151
4 0.590 0.551 0.513 0.476 0.440 0.406 0.373 0.342 0.313 0.285
5 0.753 0.720 0.686 0.651 0.616 0.581 0.546 0.512 0.478 0.446
6 0.867 0.844 0.818 0.791 0.762 0.732 0.702 0.670 0.638 0.606
7 0.936 0.921 0.905 0.887 0.867 0.845 0.822 0.797 0.771 0.744
8 0.972 0.964 0.955 0.944 0.932 0.918 0.903 0.886 0.867 0.847
9 0.989 0.985 0.980 0.975 0.968 0.960 0.951 0.941 0.929 0.916
10 0.996 0.994 0.992 0.990 0.986 0.982 0.977 0.972 0.965 0.957
11 0.999 0.998 0.997 0.996 0.995 0.993 0.990 0.988 0.984 0.980
12 1.000 0.999 0.999 0.999 0.998 0.997 0.996 0.995 0.993 0.991
13 1.000 1.000 1.000 1.000 0.999 0.999 0.999 0.998 0.997 0.996
14 1.000 1.000 1.000 1.000 1.000 1.000 1.000 0.999 0.999 0.999
15 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 0.999
16 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
c 6.5 7 7.5 8 8.5 9 9.5 10 10.5 11
0 0.002 0.001 0.001 0.000 0.000 0.000 0.000 0.000 0.000 0.000
1 0.011 0.007 0.005 0.003 0.002 0.001 0.001 0.000 0.000 0.000
2 0.043 0.030 0.020 0.014 0.009 0.006 0.004 0.003 0.002 0.001
3 0.112 0.082 0.059 0.042 0.030 0.021 0.015 0.010 0.007 0.005
4 0.224 0.173 0.132 0.100 0.074 0.055 0.040 0.029 0.021 0.015
5 0.369 0.301 0.241 0.191 0.150 0.116 0.089 0.067 0.050 0.038
6 0.527 0.450 0.378 0.313 0.256 0.207 0.165 0.130 0.102 0.079
7 0.673 0.599 0.525 0.453 0.386 0.324 0.269 0.220 0.179 0.143
8 0.792 0.729 0.662 0.593 0.523 0.456 0.392 0.333 0.279 0.232
9 0.877 0.830 0.776 0.717 0.653 0.587 0.522 0.458 0.397 0.341
10 0.933 0.901 0.862 0.816 0.763 0.706 0.645 0.583 0.521 0.460
11 0.966 0.947 0.921 0.888 0.849 0.803 0.752 0.697 0.639 0.579
12 0.984 0.973 0.957 0.936 0.909 0.876 0.836 0.792 0.742 0.689
13 0.993 0.987 0.978 0.966 0.949 0.926 0.898 0.864 0.825 0.781
14 0.997 0.994 0.990 0.983 0.973 0.959 0.940 0.917 0.888 0.854
15 0.999 0.998 0.995 0.992 0.986 0.978 0.967 0.951 0.932 0.907
16 1.000 0.999 0.998 0.996 0.993 0.989 0.982 0.973 0.960 0.944
17 1.000 1.000 0.999 0.998 0.997 0.995 0.991 0.986 0.978 0.968
18 1.000 1.000 1.000 0.999 0.999 0.998 0.996 0.993 0.988 0.982
19 1.000 1.000 1.000 1.000 0.999 0.999 0.998 0.997 0.994 0.991
20 1.000 1.000 1.000 1.000 1.000 1.000 0.999 0.998 0.997 0.995
21 1.000 1.000 1.000 1.000 1.000 1.000 1.000 0.999 0.999 0.998
22 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 0.999 0.999
23 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
7.3 Normal distribution N(0, 1)
Table 7.3.1 contains values of the cumulative distribution function (see figure 7.3.1) associated
to the normal distribution N(0, 1), defined as
$$F(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{u^2}{2}}\, du.$$
(Figure 7.3.1: the density fX, with the shaded area representing FX(x).)
x 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 0.5000 0.5040 0.5080 0.5120 0.5160 0.5199 0.5239 0.5279 0.5319 0.5359
0.1 0.5398 0.5438 0.5478 0.5517 0.5557 0.5596 0.5636 0.5675 0.5714 0.5753
0.2 0.5793 0.5832 0.5871 0.5910 0.5948 0.5987 0.6026 0.6064 0.6103 0.6141
0.3 0.6179 0.6217 0.6255 0.6293 0.6331 0.6368 0.6406 0.6443 0.6480 0.6517
0.4 0.6554 0.6591 0.6628 0.6664 0.6700 0.6736 0.6772 0.6808 0.6844 0.6879
0.5 0.6915 0.6950 0.6985 0.7019 0.7054 0.7088 0.7123 0.7157 0.7190 0.7224
0.6 0.7257 0.7291 0.7324 0.7357 0.7389 0.7422 0.7454 0.7486 0.7517 0.7549
0.7 0.7580 0.7611 0.7642 0.7673 0.7703 0.7734 0.7764 0.7794 0.7823 0.7852
0.8 0.7881 0.7910 0.7939 0.7967 0.7995 0.8023 0.8051 0.8078 0.8106 0.8133
0.9 0.8159 0.8186 0.8212 0.8238 0.8264 0.8289 0.8315 0.8340 0.8365 0.8389
1.0 0.8413 0.8438 0.8461 0.8485 0.8508 0.8531 0.8554 0.8577 0.8599 0.8621
1.1 0.8643 0.8665 0.8686 0.8708 0.8729 0.8749 0.8770 0.8790 0.8810 0.8830
1.2 0.8849 0.8869 0.8888 0.8907 0.8925 0.8944 0.8962 0.8980 0.8997 0.9015
1.3 0.9032 0.9049 0.9066 0.9082 0.9099 0.9115 0.9131 0.9147 0.9162 0.9177
1.4 0.9192 0.9207 0.9222 0.9236 0.9251 0.9265 0.9279 0.9292 0.9306 0.9319
1.5 0.9332 0.9345 0.9357 0.9370 0.9382 0.9394 0.9406 0.9418 0.9429 0.9441
1.6 0.9452 0.9463 0.9474 0.9484 0.9495 0.9505 0.9515 0.9525 0.9535 0.9545
1.7 0.9554 0.9564 0.9573 0.9582 0.9591 0.9599 0.9608 0.9616 0.9625 0.9633
1.8 0.9641 0.9649 0.9656 0.9664 0.9671 0.9678 0.9686 0.9693 0.9699 0.9706
1.9 0.9713 0.9719 0.9726 0.9732 0.9738 0.9744 0.9750 0.9756 0.9761 0.9767
2.0 0.9772 0.9778 0.9783 0.9788 0.9793 0.9798 0.9803 0.9808 0.9812 0.9817
2.1 0.9821 0.9826 0.9830 0.9834 0.9838 0.9842 0.9846 0.9850 0.9854 0.9857
2.2 0.9861 0.9864 0.9868 0.9871 0.9875 0.9878 0.9881 0.9884 0.9887 0.9890
2.3 0.9893 0.9896 0.9898 0.9901 0.9904 0.9906 0.9909 0.9911 0.9913 0.9916
2.4 0.9918 0.9920 0.9922 0.9925 0.9927 0.9929 0.9931 0.9932 0.9934 0.9936
2.5 0.9938 0.9940 0.9941 0.9943 0.9945 0.9946 0.9948 0.9949 0.9951 0.9952
2.6 0.9953 0.9955 0.9956 0.9957 0.9959 0.9960 0.9961 0.9962 0.9963 0.9964
2.7 0.9965 0.9966 0.9967 0.9968 0.9969 0.9970 0.9971 0.9972 0.9973 0.9974
2.8 0.9974 0.9975 0.9976 0.9977 0.9977 0.9978 0.9979 0.9979 0.9980 0.9981
2.9 0.9981 0.9982 0.9982 0.9983 0.9984 0.9984 0.9985 0.9985 0.9986 0.9986
3.0 0.9987 0.9987 0.9987 0.9988 0.9988 0.9989 0.9989 0.9989 0.9990 0.9990
3.1 0.9990 0.9991 0.9991 0.9991 0.9992 0.9992 0.9992 0.9992 0.9993 0.9993
3.2 0.9993 0.9993 0.9994 0.9994 0.9994 0.9994 0.9994 0.9995 0.9995 0.9995
3.3 0.9995 0.9995 0.9995 0.9996 0.9996 0.9996 0.9996 0.9996 0.9996 0.9997
3.4 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9998
3.5 0.9998 0.9998 0.9998 0.9998 0.9998 0.9998 0.9998 0.9998 0.9998 0.9998
3.6 0.9998 0.9998 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999
3.7 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999
3.8 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999
3.9 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
Tables 7.4.1 and 7.4.2 contain values of P = P(χ2 (ν) ≥ x) for a given ν.
ν 0.999 0.995 0.99 0.975 0.95 0.9 0.875 0.8 0.75 0.66 0.5
1 0.000 0.000 0.000 0.001 0.004 0.016 0.025 0.064 0.102 0.186 0.455
2 0.002 0.010 0.020 0.051 0.103 0.211 0.267 0.446 0.575 0.811 1.386
3 0.024 0.072 0.115 0.216 0.352 0.584 0.692 1.005 1.213 1.568 2.366
4 0.091 0.207 0.297 0.484 0.711 1.064 1.219 1.649 1.923 2.378 3.357
5 0.210 0.412 0.554 0.831 1.145 1.610 1.808 2.343 2.675 3.216 4.351
6 0.381 0.676 0.872 1.237 1.635 2.204 2.441 3.070 3.455 4.074 5.348
7 0.598 0.989 1.239 1.690 2.167 2.833 3.106 3.822 4.255 4.945 6.346
8 0.857 1.344 1.646 2.180 2.733 3.490 3.797 4.594 5.071 5.826 7.344
9 1.152 1.735 2.088 2.700 3.325 4.168 4.507 5.380 5.899 6.716 8.343
10 1.479 2.156 2.558 3.247 3.940 4.865 5.234 6.179 6.737 7.612 9.342
11 1.834 2.603 3.053 3.816 4.575 5.578 5.975 6.989 7.584 8.514 10.341
12 2.214 3.074 3.571 4.404 5.226 6.304 6.729 7.807 8.438 9.420 11.340
13 2.617 3.565 4.107 5.009 5.892 7.042 7.493 8.634 9.299 10.331 12.340
14 3.041 4.075 4.660 5.629 6.571 7.790 8.266 9.467 10.165 11.245 13.339
15 3.483 4.601 5.229 6.262 7.261 8.547 9.048 10.307 11.037 12.163 14.339
16 3.942 5.142 5.812 6.908 7.962 9.312 9.837 11.152 11.912 13.083 15.338
17 4.416 5.697 6.408 7.564 8.672 10.085 10.633 12.002 12.792 14.006 16.338
18 4.905 6.265 7.015 8.231 9.390 10.865 11.435 12.857 13.675 14.931 17.338
19 5.407 6.844 7.633 8.907 10.117 11.651 12.242 13.716 14.562 15.859 18.338
20 5.921 7.434 8.260 9.591 10.851 12.443 13.055 14.578 15.452 16.788 19.337
21 6.447 8.034 8.897 10.283 11.591 13.240 13.873 15.445 16.344 17.720 20.337
22 6.983 8.643 9.542 10.982 12.338 14.041 14.695 16.314 17.240 18.653 21.337
23 7.529 9.260 10.196 11.689 13.091 14.848 15.521 17.187 18.137 19.587 22.337
24 8.085 9.886 10.856 12.401 13.848 15.659 16.351 18.062 19.037 20.523 23.337
25 8.649 10.520 11.524 13.120 14.611 16.473 17.184 18.940 19.939 21.461 24.337
26 9.222 11.160 12.198 13.844 15.379 17.292 18.021 19.820 20.843 22.399 25.336
27 9.803 11.808 12.879 14.573 16.151 18.114 18.861 20.703 21.749 23.339 26.336
28 10.391 12.461 13.565 15.308 16.928 18.939 19.704 21.588 22.657 24.280 27.336
29 10.986 13.121 14.256 16.047 17.708 19.768 20.550 22.475 23.567 25.222 28.336
30 11.588 13.787 14.953 16.791 18.493 20.599 21.399 23.364 24.478 26.165 29.336
35 14.688 17.192 18.509 20.569 22.465 24.797 25.678 27.836 29.054 30.894 34.336
40 17.916 20.707 22.164 24.433 26.509 29.051 30.008 32.345 33.660 35.643 39.335
45 21.251 24.311 25.901 28.366 30.612 33.350 34.379 36.884 38.291 40.407 44.335
50 24.674 27.991 29.707 32.357 34.764 37.689 38.785 41.449 42.942 45.184 49.335
55 28.173 31.735 33.570 36.398 38.958 42.060 43.220 46.036 47.610 49.972 54.335
60 31.738 35.534 37.485 40.482 43.188 46.459 47.680 50.641 52.294 54.770 59.335
ν 0.4 0.33 0.25 0.2 0.125 0.1 0.05 0.025 0.01 0.005 0.001
1 0.708 0.936 1.323 1.642 2.354 2.706 3.841 5.024 6.635 7.879 10.828
2 1.833 2.197 2.773 3.219 4.159 4.605 5.991 7.378 9.210 10.597 13.816
3 2.946 3.405 4.108 4.642 5.739 6.251 7.815 9.348 11.345 12.838 16.266
4 4.045 4.579 5.385 5.989 7.214 7.779 9.488 11.143 13.277 14.860 18.467
5 5.132 5.730 6.626 7.289 8.625 9.236 11.070 12.833 15.086 16.750 20.515
6 6.211 6.867 7.841 8.558 9.992 10.645 12.592 14.449 16.812 18.548 22.458
7 7.283 7.992 9.037 9.803 11.326 12.017 14.067 16.013 18.475 20.278 24.322
8 8.351 9.107 10.219 11.030 12.636 13.362 15.507 17.535 20.090 21.955 26.125
9 9.414 10.215 11.389 12.242 13.926 14.684 16.919 19.023 21.666 23.589 27.877
10 10.473 11.317 12.549 13.442 15.198 15.987 18.307 20.483 23.209 25.188 29.588
11 11.530 12.414 13.701 14.631 16.457 17.275 19.675 21.920 24.725 26.757 31.264
12 12.584 13.506 14.845 15.812 17.703 18.549 21.026 23.337 26.217 28.300 32.910
13 13.636 14.595 15.984 16.985 18.939 19.812 22.362 24.736 27.688 29.819 34.528
14 14.685 15.680 17.117 18.151 20.166 21.064 23.685 26.119 29.141 31.319 36.123
15 15.733 16.761 18.245 19.311 21.384 22.307 24.996 27.488 30.578 32.801 37.697
16 16.780 17.840 19.369 20.465 22.595 23.542 26.296 28.845 32.000 34.267 39.252
17 17.824 18.917 20.489 21.615 23.799 24.769 27.587 30.191 33.409 35.718 40.790
18 18.868 19.991 21.605 22.760 24.997 25.989 28.869 31.526 34.805 37.156 42.312
19 19.910 21.063 22.718 23.900 26.189 27.204 30.144 32.852 36.191 38.582 43.820
20 20.951 22.133 23.828 25.038 27.376 28.412 31.410 34.170 37.566 39.997 45.315
21 21.991 23.201 24.935 26.171 28.559 29.615 32.671 35.479 38.932 41.401 46.797
22 23.031 24.268 26.039 27.301 29.737 30.813 33.924 36.781 40.289 42.796 48.268
23 24.069 25.333 27.141 28.429 30.911 32.007 35.172 38.076 41.638 44.181 49.728
24 25.106 26.397 28.241 29.553 32.081 33.196 36.415 39.364 42.980 45.559 51.179
25 26.143 27.459 29.339 30.675 33.247 34.382 37.652 40.646 44.314 46.928 52.620
26 27.179 28.520 30.435 31.795 34.410 35.563 38.885 41.923 45.642 48.290 54.052
27 28.214 29.580 31.528 32.912 35.570 36.741 40.113 43.195 46.963 49.645 55.476
28 29.249 30.639 32.620 34.027 36.727 37.916 41.337 44.461 48.278 50.993 56.892
29 30.283 31.697 33.711 35.139 37.881 39.087 42.557 45.722 49.588 52.336 58.301
30 31.316 32.754 34.800 36.250 39.033 40.256 43.773 46.979 50.892 53.672 59.703
35 36.475 38.024 40.223 41.778 44.753 46.059 49.802 53.203 57.342 60.275 66.619
40 41.622 43.275 45.616 47.269 50.424 51.805 55.758 59.342 63.691 66.766 73.402
45 46.761 48.510 50.985 52.729 56.052 57.505 61.656 65.410 69.957 73.166 80.077
50 51.892 53.733 56.334 58.164 61.647 63.167 67.505 71.420 76.154 79.490 86.661
55 57.016 58.945 61.665 63.577 67.211 68.796 73.311 77.380 82.292 85.749 93.168
60 62.135 64.147 66.981 68.972 72.751 74.397 79.082 83.298 88.379 91.952 99.607
ν2 \ ν1      1       2       3       4       5       6       7       8       9      10
1 39.863 49.500 53.593 55.833 57.240 58.204 58.906 59.439 59.858 60.195
2 8.526 9.000 9.162 9.243 9.293 9.326 9.349 9.367 9.381 9.392
3 5.538 5.462 5.391 5.343 5.309 5.285 5.266 5.252 5.240 5.230
4 4.545 4.325 4.191 4.107 4.051 4.010 3.979 3.955 3.936 3.920
5 4.060 3.780 3.619 3.520 3.453 3.405 3.368 3.339 3.316 3.297
6 3.776 3.463 3.289 3.181 3.108 3.055 3.014 2.983 2.958 2.937
7 3.589 3.257 3.074 2.961 2.883 2.827 2.785 2.752 2.725 2.703
8 3.458 3.113 2.924 2.806 2.726 2.668 2.624 2.589 2.561 2.538
9 3.360 3.006 2.813 2.693 2.611 2.551 2.505 2.469 2.440 2.416
10 3.285 2.924 2.728 2.605 2.522 2.461 2.414 2.377 2.347 2.323
11 3.225 2.860 2.660 2.536 2.451 2.389 2.342 2.304 2.274 2.248
12 3.177 2.807 2.606 2.480 2.394 2.331 2.283 2.245 2.214 2.188
13 3.136 2.763 2.560 2.434 2.347 2.283 2.234 2.195 2.164 2.138
14 3.102 2.726 2.522 2.395 2.307 2.243 2.193 2.154 2.122 2.095
15 3.073 2.695 2.490 2.361 2.273 2.208 2.158 2.119 2.086 2.059
16 3.048 2.668 2.462 2.333 2.244 2.178 2.128 2.088 2.055 2.028
17 3.026 2.645 2.437 2.308 2.218 2.152 2.102 2.061 2.028 2.001
18 3.007 2.624 2.416 2.286 2.196 2.130 2.079 2.038 2.005 1.977
19 2.990 2.606 2.397 2.266 2.176 2.109 2.058 2.017 1.984 1.956
20 2.975 2.589 2.380 2.249 2.158 2.091 2.040 1.999 1.965 1.937
21 2.961 2.575 2.365 2.233 2.142 2.075 2.023 1.982 1.948 1.920
22 2.949 2.561 2.351 2.219 2.128 2.060 2.008 1.967 1.933 1.904
23 2.937 2.549 2.339 2.207 2.115 2.047 1.995 1.953 1.919 1.890
24 2.927 2.538 2.327 2.195 2.103 2.035 1.983 1.941 1.906 1.877
25 2.918 2.528 2.317 2.184 2.092 2.024 1.971 1.929 1.895 1.866
26 2.909 2.519 2.307 2.174 2.082 2.014 1.961 1.919 1.884 1.855
27 2.901 2.511 2.299 2.165 2.073 2.005 1.952 1.909 1.874 1.845
28 2.894 2.503 2.291 2.157 2.064 1.996 1.943 1.900 1.865 1.836
29 2.887 2.495 2.283 2.149 2.057 1.988 1.935 1.892 1.857 1.827
30 2.881 2.489 2.276 2.142 2.049 1.980 1.927 1.884 1.849 1.819
ν2 \ ν1     11      12      13      14      15      16      17      18      19      20
1 60.473 60.705 60.903 61.073 61.220 61.350 61.464 61.566 61.658 61.740
2 9.401 9.408 9.415 9.420 9.425 9.429 9.433 9.436 9.439 9.441
3 5.222 5.216 5.210 5.205 5.200 5.196 5.193 5.190 5.187 5.184
4 3.907 3.896 3.886 3.878 3.870 3.864 3.858 3.853 3.849 3.844
5 3.282 3.268 3.257 3.247 3.238 3.230 3.223 3.217 3.212 3.207
6 2.920 2.905 2.892 2.881 2.871 2.863 2.855 2.848 2.842 2.836
7 2.684 2.668 2.654 2.643 2.632 2.623 2.615 2.607 2.601 2.595
8 2.519 2.502 2.488 2.475 2.464 2.455 2.446 2.438 2.431 2.425
9 2.396 2.379 2.364 2.351 2.340 2.329 2.320 2.312 2.305 2.298
10 2.302 2.284 2.269 2.255 2.244 2.233 2.224 2.215 2.208 2.201
11 2.227 2.209 2.193 2.179 2.167 2.156 2.147 2.138 2.130 2.123
12 2.166 2.147 2.131 2.117 2.105 2.094 2.084 2.075 2.067 2.060
13 2.116 2.097 2.080 2.066 2.053 2.042 2.032 2.023 2.014 2.007
14 2.073 2.054 2.037 2.022 2.010 1.998 1.988 1.978 1.970 1.962
15 2.037 2.017 2.000 1.985 1.972 1.961 1.950 1.941 1.932 1.924
16 2.005 1.985 1.968 1.953 1.940 1.928 1.917 1.908 1.899 1.891
17 1.978 1.958 1.940 1.925 1.912 1.900 1.889 1.879 1.870 1.862
18 1.954 1.933 1.916 1.900 1.887 1.875 1.864 1.854 1.845 1.837
19 1.932 1.912 1.894 1.878 1.865 1.852 1.841 1.831 1.822 1.814
20 1.913 1.892 1.875 1.859 1.845 1.833 1.821 1.811 1.802 1.794
21 1.896 1.875 1.857 1.841 1.827 1.815 1.803 1.793 1.784 1.776
22 1.880 1.859 1.841 1.825 1.811 1.798 1.787 1.777 1.768 1.759
23 1.866 1.845 1.827 1.811 1.796 1.784 1.772 1.762 1.753 1.744
24 1.853 1.832 1.814 1.797 1.783 1.770 1.759 1.748 1.739 1.730
25 1.841 1.820 1.802 1.785 1.771 1.758 1.746 1.736 1.726 1.718
26 1.830 1.809 1.790 1.774 1.760 1.747 1.735 1.724 1.715 1.706
27 1.820 1.799 1.780 1.764 1.749 1.736 1.724 1.714 1.704 1.695
28 1.811 1.790 1.771 1.754 1.740 1.726 1.715 1.704 1.694 1.685
29 1.802 1.781 1.762 1.745 1.731 1.717 1.705 1.695 1.685 1.676
30 1.794 1.773 1.754 1.737 1.722 1.709 1.697 1.686 1.676 1.667
Quantiles of order 0.95 of the Fisher distribution F(ν1, ν2)
ν2 \ ν1 1 2 3 4 5 6 7 8 9 10
1 161.448 199.500 215.707 224.583 230.162 233.986 236.768 238.882 240.543 241.882
2 18.513 19.000 19.164 19.247 19.296 19.330 19.353 19.371 19.385 19.396
3 10.128 9.552 9.277 9.117 9.013 8.941 8.887 8.845 8.812 8.786
4 7.709 6.944 6.591 6.388 6.256 6.163 6.094 6.041 5.999 5.964
5 6.608 5.786 5.409 5.192 5.050 4.950 4.876 4.818 4.772 4.735
6 5.987 5.143 4.757 4.534 4.387 4.284 4.207 4.147 4.099 4.060
7 5.591 4.737 4.347 4.120 3.972 3.866 3.787 3.726 3.677 3.637
8 5.318 4.459 4.066 3.838 3.687 3.581 3.500 3.438 3.388 3.347
9 5.117 4.256 3.863 3.633 3.482 3.374 3.293 3.230 3.179 3.137
10 4.965 4.103 3.708 3.478 3.326 3.217 3.135 3.072 3.020 2.978
11 4.844 3.982 3.587 3.357 3.204 3.095 3.012 2.948 2.896 2.854
12 4.747 3.885 3.490 3.259 3.106 2.996 2.913 2.849 2.796 2.753
13 4.667 3.806 3.411 3.179 3.025 2.915 2.832 2.767 2.714 2.671
14 4.600 3.739 3.344 3.112 2.958 2.848 2.764 2.699 2.646 2.602
15 4.543 3.682 3.287 3.056 2.901 2.790 2.707 2.641 2.588 2.544
16 4.494 3.634 3.239 3.007 2.852 2.741 2.657 2.591 2.538 2.494
17 4.451 3.592 3.197 2.965 2.810 2.699 2.614 2.548 2.494 2.450
18 4.414 3.555 3.160 2.928 2.773 2.661 2.577 2.510 2.456 2.412
19 4.381 3.522 3.127 2.895 2.740 2.628 2.544 2.477 2.423 2.378
20 4.351 3.493 3.098 2.866 2.711 2.599 2.514 2.447 2.393 2.348
21 4.325 3.467 3.072 2.840 2.685 2.573 2.488 2.420 2.366 2.321
22 4.301 3.443 3.049 2.817 2.661 2.549 2.464 2.397 2.342 2.297
23 4.279 3.422 3.028 2.796 2.640 2.528 2.442 2.375 2.320 2.275
24 4.260 3.403 3.009 2.776 2.621 2.508 2.423 2.355 2.300 2.255
25 4.242 3.385 2.991 2.759 2.603 2.490 2.405 2.337 2.282 2.236
26 4.225 3.369 2.975 2.743 2.587 2.474 2.388 2.321 2.265 2.220
27 4.210 3.354 2.960 2.728 2.572 2.459 2.373 2.305 2.250 2.204
28 4.196 3.340 2.947 2.714 2.558 2.445 2.359 2.291 2.236 2.190
29 4.183 3.328 2.934 2.701 2.545 2.432 2.346 2.278 2.223 2.177
30 4.171 3.316 2.922 2.690 2.534 2.421 2.334 2.266 2.211 2.165
Quantiles of order 0.95 of the Fisher distribution F(ν1, ν2) (continued: ν1 = 11 to 20)
ν2 \ ν1 11 12 13 14 15 16 17 18 19 20
1 242.983 243.906 244.690 245.364 245.950 246.464 246.918 247.323 247.686 248.013
2 19.405 19.413 19.419 19.424 19.429 19.433 19.437 19.440 19.443 19.446
3 8.763 8.745 8.729 8.715 8.703 8.692 8.683 8.675 8.667 8.660
4 5.936 5.912 5.891 5.873 5.858 5.844 5.832 5.821 5.811 5.803
5 4.704 4.678 4.655 4.636 4.619 4.604 4.590 4.579 4.568 4.558
6 4.027 4.000 3.976 3.956 3.938 3.922 3.908 3.896 3.884 3.874
7 3.603 3.575 3.550 3.529 3.511 3.494 3.480 3.467 3.455 3.445
8 3.313 3.284 3.259 3.237 3.218 3.202 3.187 3.173 3.161 3.150
9 3.102 3.073 3.048 3.025 3.006 2.989 2.974 2.960 2.948 2.936
10 2.943 2.913 2.887 2.865 2.845 2.828 2.812 2.798 2.785 2.774
11 2.818 2.788 2.761 2.739 2.719 2.701 2.685 2.671 2.658 2.646
12 2.717 2.687 2.660 2.637 2.617 2.599 2.583 2.568 2.555 2.544
13 2.635 2.604 2.577 2.554 2.533 2.515 2.499 2.484 2.471 2.459
14 2.565 2.534 2.507 2.484 2.463 2.445 2.428 2.413 2.400 2.388
15 2.507 2.475 2.448 2.424 2.403 2.385 2.368 2.353 2.340 2.328
16 2.456 2.425 2.397 2.373 2.352 2.333 2.317 2.302 2.288 2.276
17 2.413 2.381 2.353 2.329 2.308 2.289 2.272 2.257 2.243 2.230
18 2.374 2.342 2.314 2.290 2.269 2.250 2.233 2.217 2.203 2.191
19 2.340 2.308 2.280 2.256 2.234 2.215 2.198 2.182 2.168 2.155
20 2.310 2.278 2.250 2.225 2.203 2.184 2.167 2.151 2.137 2.124
21 2.283 2.250 2.222 2.197 2.176 2.156 2.139 2.123 2.109 2.096
22 2.259 2.226 2.198 2.173 2.151 2.131 2.114 2.098 2.084 2.071
23 2.236 2.204 2.175 2.150 2.128 2.109 2.091 2.075 2.061 2.048
24 2.216 2.183 2.155 2.130 2.108 2.088 2.070 2.054 2.040 2.027
25 2.198 2.165 2.136 2.111 2.089 2.069 2.051 2.035 2.021 2.007
26 2.181 2.148 2.119 2.094 2.072 2.052 2.034 2.018 2.003 1.990
27 2.166 2.132 2.103 2.078 2.056 2.036 2.018 2.002 1.987 1.974
28 2.151 2.118 2.089 2.064 2.041 2.021 2.003 1.987 1.972 1.959
29 2.138 2.104 2.075 2.050 2.027 2.007 1.989 1.973 1.958 1.945
30 2.126 2.092 2.063 2.037 2.015 1.995 1.976 1.960 1.945 1.932
Quantiles of order 0.99 of the Fisher distribution F(ν1, ν2)
ν2 \ ν1 1 2 3 4 5 6 7 8 9 10
1 4052.19 4999.52 5403.34 5624.62 5763.65 5858.97 5928.33 5981.10 6022.50 6055.85
2 98.502 99.000 99.166 99.249 99.300 99.333 99.356 99.374 99.388 99.399
3 34.116 30.816 29.457 28.710 28.237 27.911 27.672 27.489 27.345 27.229
4 21.198 18.000 16.694 15.977 15.522 15.207 14.976 14.799 14.659 14.546
5 16.258 13.274 12.060 11.392 10.967 10.672 10.456 10.289 10.158 10.051
6 13.745 10.925 9.780 9.148 8.746 8.466 8.260 8.102 7.976 7.874
7 12.246 9.547 8.451 7.847 7.460 7.191 6.993 6.840 6.719 6.620
8 11.259 8.649 7.591 7.006 6.632 6.371 6.178 6.029 5.911 5.814
9 10.561 8.022 6.992 6.422 6.057 5.802 5.613 5.467 5.351 5.257
10 10.044 7.559 6.552 5.994 5.636 5.386 5.200 5.057 4.942 4.849
11 9.646 7.206 6.217 5.668 5.316 5.069 4.886 4.744 4.632 4.539
12 9.330 6.927 5.953 5.412 5.064 4.821 4.640 4.499 4.388 4.296
13 9.074 6.701 5.739 5.205 4.862 4.620 4.441 4.302 4.191 4.100
14 8.862 6.515 5.564 5.035 4.695 4.456 4.278 4.140 4.030 3.939
15 8.683 6.359 5.417 4.893 4.556 4.318 4.142 4.004 3.895 3.805
16 8.531 6.226 5.292 4.773 4.437 4.202 4.026 3.890 3.780 3.691
17 8.400 6.112 5.185 4.669 4.336 4.102 3.927 3.791 3.682 3.593
18 8.285 6.013 5.092 4.579 4.248 4.015 3.841 3.705 3.597 3.508
19 8.185 5.926 5.010 4.500 4.171 3.939 3.765 3.631 3.523 3.434
20 8.096 5.849 4.938 4.431 4.103 3.871 3.699 3.564 3.457 3.368
21 8.017 5.780 4.874 4.369 4.042 3.812 3.640 3.506 3.398 3.310
22 7.945 5.719 4.817 4.313 3.988 3.758 3.587 3.453 3.346 3.258
23 7.881 5.664 4.765 4.264 3.939 3.710 3.539 3.406 3.299 3.211
24 7.823 5.614 4.718 4.218 3.895 3.667 3.496 3.363 3.256 3.168
25 7.770 5.568 4.675 4.177 3.855 3.627 3.457 3.324 3.217 3.129
26 7.721 5.526 4.637 4.140 3.818 3.591 3.421 3.288 3.182 3.094
27 7.677 5.488 4.601 4.106 3.785 3.558 3.388 3.256 3.149 3.062
28 7.636 5.453 4.568 4.074 3.754 3.528 3.358 3.226 3.120 3.032
29 7.598 5.420 4.538 4.045 3.725 3.499 3.330 3.198 3.092 3.005
30 7.562 5.390 4.510 4.018 3.699 3.473 3.305 3.173 3.067 2.979
Quantiles of order 0.99 of the Fisher distribution F(ν1, ν2) (continued: ν1 = 11 to 20)
ν2 \ ν1 11 12 13 14 15 16 17 18 19 20
1 6083.35 6106.35 6125.86 6142.70 6157.28 6170.12 6181.42 6191.52 6200.58 6208.74
2 99.408 99.416 99.422 99.428 99.432 99.437 99.440 99.444 99.447 99.449
3 27.133 27.052 26.983 26.924 26.872 26.827 26.787 26.751 26.719 26.690
4 14.452 14.374 14.307 14.249 14.198 14.154 14.115 14.080 14.048 14.020
5 9.963 9.888 9.825 9.770 9.722 9.680 9.643 9.610 9.580 9.553
6 7.790 7.718 7.657 7.605 7.559 7.519 7.483 7.451 7.422 7.396
7 6.538 6.469 6.410 6.359 6.314 6.275 6.240 6.209 6.181 6.155
8 5.734 5.667 5.609 5.559 5.515 5.477 5.442 5.412 5.384 5.359
9 5.178 5.111 5.055 5.005 4.962 4.924 4.890 4.860 4.833 4.808
10 4.772 4.706 4.650 4.601 4.558 4.520 4.487 4.457 4.430 4.405
11 4.462 4.397 4.342 4.293 4.251 4.213 4.180 4.150 4.123 4.099
12 4.220 4.155 4.100 4.052 4.010 3.972 3.939 3.909 3.883 3.858
13 4.025 3.960 3.905 3.857 3.815 3.778 3.745 3.716 3.689 3.665
14 3.864 3.800 3.745 3.698 3.656 3.619 3.586 3.556 3.529 3.505
15 3.730 3.666 3.612 3.564 3.522 3.485 3.452 3.423 3.396 3.372
16 3.616 3.553 3.498 3.451 3.409 3.372 3.339 3.310 3.283 3.259
17 3.519 3.455 3.401 3.353 3.312 3.275 3.242 3.212 3.186 3.162
18 3.434 3.371 3.316 3.269 3.227 3.190 3.158 3.128 3.101 3.077
19 3.360 3.297 3.242 3.195 3.153 3.116 3.084 3.054 3.027 3.003
20 3.294 3.231 3.177 3.130 3.088 3.051 3.018 2.989 2.962 2.938
21 3.236 3.173 3.119 3.072 3.030 2.993 2.960 2.931 2.904 2.880
22 3.184 3.121 3.067 3.019 2.978 2.941 2.908 2.879 2.852 2.827
23 3.137 3.074 3.020 2.973 2.931 2.894 2.861 2.832 2.805 2.781
24 3.094 3.032 2.977 2.930 2.889 2.852 2.819 2.789 2.762 2.738
25 3.056 2.993 2.939 2.892 2.850 2.813 2.780 2.751 2.724 2.699
26 3.021 2.958 2.904 2.857 2.815 2.778 2.745 2.715 2.688 2.664
27 2.988 2.926 2.871 2.824 2.783 2.746 2.713 2.683 2.656 2.632
28 2.959 2.896 2.842 2.795 2.753 2.716 2.683 2.653 2.626 2.602
29 2.931 2.868 2.814 2.767 2.726 2.689 2.656 2.626 2.599 2.574
30 2.906 2.843 2.789 2.742 2.700 2.663 2.630 2.600 2.573 2.549
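Any entry of these tables can be checked numerically. Libraries such as SciPy expose the quantile function directly (scipy.stats.f.ppf), but an F(ν1, ν2) quantile can also be approximated by simulation, since an F(ν1, ν2) variate is the ratio of two independent chi-square variables, each divided by its degrees of freedom. The sketch below is illustrative only (the sample size and the specific entry F(0.95; 5, 10) = 3.326 checked here are choices made for this example, not part of the course):

```python
import random

def f_sample(d1, d2, rng):
    # An F(d1, d2) variate is (X1/d1)/(X2/d2) where Xi ~ chi-square(di),
    # i.e. Xi is a sum of di squared standard normal variables.
    x1 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(d1))
    x2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(d2))
    return (x1 / d1) / (x2 / d2)

def empirical_quantile(samples, p):
    # Order statistic of rank floor(p * n): a simple empirical quantile.
    s = sorted(samples)
    return s[int(p * len(s))]

rng = random.Random(0)
samples = [f_sample(5, 10, rng) for _ in range(100_000)]
q95 = empirical_quantile(samples, 0.95)
print(q95)  # close to the tabulated value F(0.95; 5, 10) = 3.326
```

With 100 000 simulations the empirical 0.95 quantile typically agrees with the tabulated 3.326 to about two decimal places; larger sample sizes tighten the approximation.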
Bibliography
• http://www.york.ac.uk/depts/maths/tables/sources.htm
• http://www.itl.nist.gov/div898/handbook/eda/section3/eda367.htm
• http://en.wikipedia.org/wiki/Student%27s_t-distribution