6 - Super-Cheatsheet-Mathematics
5 Refreshers
❒ Axioms of probability – For each event E, we denote P(E) as the probability of event E occurring. By noting E_1,...,E_n mutually exclusive events, we have the 3 following axioms:

$$(1)\quad 0 \leqslant P(E) \leqslant 1 \qquad (2)\quad P(S) = 1 \qquad (3)\quad P\left(\bigcup_{i=1}^{n} E_i\right) = \sum_{i=1}^{n} P(E_i)$$
❒ Partition – Let {A_i, i ∈ [[1,n]]} be such that for all i, A_i ≠ ∅. We say that {A_i} is a partition if we have:

$$\forall i \neq j,\ A_i \cap A_j = \emptyset \qquad \textrm{and} \qquad \bigcup_{i=1}^{n} A_i = S$$

Remark: for any event B in the sample space, we have $P(B) = \sum_{i=1}^{n} P(B|A_i)P(A_i)$.
❒ Extended form of Bayes’ rule – Let {A_i, i ∈ [[1,n]]} be a partition of the sample space. We have:

$$P(A_k|B) = \frac{P(B|A_k)P(A_k)}{\displaystyle\sum_{i=1}^{n} P(B|A_i)P(A_i)}$$

❒ Independence – Two events A and B are independent if and only if we have:

$$P(A \cap B) = P(A)P(B)$$

❒ Expectation and Moments of the Distribution – Here are the expressions of the expected value E[X], generalized expected value E[g(X)], kth moment E[X^k] and characteristic function ψ(ω) for the discrete and continuous cases:

| Case | E[X] | E[g(X)] | E[X^k] | ψ(ω) |
|---|---|---|---|---|
| (D) | $\sum_{i=1}^{n} x_i f(x_i)$ | $\sum_{i=1}^{n} g(x_i) f(x_i)$ | $\sum_{i=1}^{n} x_i^k f(x_i)$ | $\sum_{i=1}^{n} f(x_i) e^{i\omega x_i}$ |
| (C) | $\int_{-\infty}^{+\infty} x f(x)\,dx$ | $\int_{-\infty}^{+\infty} g(x) f(x)\,dx$ | $\int_{-\infty}^{+\infty} x^k f(x)\,dx$ | $\int_{-\infty}^{+\infty} f(x) e^{i\omega x}\,dx$ |
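As a quick numerical sketch of the extended Bayes’ rule and of the discrete (D) row of the table above; the partition size, priors, likelihoods and the distribution are made up purely for illustration:

```python
import numpy as np

# -- Extended Bayes' rule on a hypothetical partition A_1, A_2, A_3 --
prior = np.array([0.5, 0.3, 0.2])       # P(A_i), sums to 1
likelihood = np.array([0.1, 0.6, 0.3])  # P(B | A_i)

p_b = np.sum(likelihood * prior)        # law of total probability
posterior = likelihood * prior / p_b    # P(A_i | B)
print(posterior, posterior.sum())       # posteriors sum to 1

# -- Discrete case (D): a made-up distribution on {0, 1, 2} --
x = np.array([0.0, 1.0, 2.0])           # support x_i
f = np.array([0.2, 0.5, 0.3])           # probabilities f(x_i)

E_X = np.sum(x * f)                       # expected value
E_X2 = np.sum(x**2 * f)                   # 2nd moment
omega = 0.7                               # arbitrary frequency
psi = np.sum(f * np.exp(1j * omega * x))  # characteristic function psi(omega)
print(E_X, E_X2, psi)                     # 1.1, 1.7, a complex number
```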
❒ Cumulative distribution function (CDF) – The cumulative distribution function F, which is monotonically non-decreasing, is defined as:

$$F(x) = P(X \leqslant x)$$

Remark: we have $P(a < X \leqslant b) = F(b) - F(a)$.

❒ Probability density function (PDF) – The probability density function f is the probability that X takes on values between two adjacent realizations of the random variable.

❒ Transformation of random variables – Let the variables X and Y be linked by some function. By noting f_X and f_Y the distribution functions of X and Y respectively, we have:

$$f_Y(y) = f_X(x)\left|\frac{dx}{dy}\right|$$
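Below is a short Monte Carlo sketch of the transformation formula, assuming the illustrative choice Y = e^X with X standard normal (so x = log y and |dx/dy| = 1/y):

```python
import numpy as np

rng = np.random.default_rng(0)

# X ~ N(0,1) and Y = exp(X); then x = log(y) and |dx/dy| = 1/y,
# so f_Y(y) = f_X(log y) / y (the log-normal density).
x = rng.standard_normal(1_000_000)
y = np.exp(x)

f_X = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
f_Y = lambda t: f_X(np.log(t)) / t

# Compare the empirical histogram of Y with the transformed density
hist, edges = np.histogram(y, bins=100, range=(0.01, 20), density=True)
centers = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(hist - f_Y(centers))))  # small: sampling/discretization noise
```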
❒ Relationships involving the PDF and CDF – Here are the important properties to know in the discrete (D) and the continuous (C) cases.

| Case | CDF F | PDF f | Properties of PDF |
|---|---|---|---|
| (D) | $F(x) = \sum_{x_i \leqslant x} P(X = x_i)$ | $f(x_j) = P(X = x_j)$ | $0 \leqslant f(x_j) \leqslant 1$ and $\sum_j f(x_j) = 1$ |
| (C) | $F(x) = \int_{-\infty}^{x} f(y)\,dy$ | $f(x) = \frac{dF}{dx}$ | $f(x) \geqslant 0$ and $\int_{-\infty}^{+\infty} f(x)\,dx = 1$ |

❒ Leibniz integral rule – Let g be a function of x and potentially c, and a, b boundaries that may depend on c. We have:

$$\frac{\partial}{\partial c}\int_a^b g(x)\,dx = \frac{\partial b}{\partial c}\cdot g(b) - \frac{\partial a}{\partial c}\cdot g(a) + \int_a^b \frac{\partial g}{\partial c}(x)\,dx$$

❒ Chebyshev’s inequality – Let X be a random variable with expected value µ and standard deviation σ. For k, σ > 0, we have the following inequality:

$$P(|X - \mu| \geqslant k\sigma) \leqslant \frac{1}{k^2}$$
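A small empirical sketch of Chebyshev’s inequality, using an exponential distribution as an arbitrary test case (mean µ = 1, standard deviation σ = 1):

```python
import numpy as np

rng = np.random.default_rng(0)

# Exponential(1) has mu = 1 and sigma = 1
x = rng.exponential(scale=1.0, size=1_000_000)
mu, sigma = 1.0, 1.0

# The empirical tail probability should always sit below the 1/k^2 bound
for k in [1.5, 2, 3]:
    empirical = np.mean(np.abs(x - mu) >= k * sigma)
    print(f"k={k}: P(|X-mu| >= k*sigma) = {empirical:.4f} <= {1/k**2:.4f}")
```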
❒ Marginal density and cumulative distribution – From the joint density probability function f_XY, we have:

| Case | Marginal density | Cumulative function |
|---|---|---|
| (D) | $f_X(x_i) = \sum_j f_{XY}(x_i,y_j)$ | $F_{XY}(x,y) = \sum_{x_i \leqslant x}\sum_{y_j \leqslant y} f_{XY}(x_i,y_j)$ |
| (C) | $f_X(x) = \int_{-\infty}^{+\infty} f_{XY}(x,y)\,dy$ | $F_{XY}(x,y) = \int_{-\infty}^{x}\int_{-\infty}^{y} f_{XY}(x',y')\,dx'\,dy'$ |

5.1.5 Parameter estimation

❒ Random sample – A random sample is a collection of n random variables X_1,...,X_n that are independent and identically distributed with X.

❒ Estimator – An estimator θ̂ is a function of the data that is used to infer the value of an unknown parameter θ in a statistical model.

❒ Bias – The bias of an estimator θ̂ is defined as being the difference between the expected value of the distribution of θ̂ and the true value, i.e.:

$$\textrm{Bias}(\hat{\theta}) = E[\hat{\theta}] - \theta$$

Remark: an estimator is said to be unbiased when we have $E[\hat{\theta}] = \theta$.
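The classic illustration of bias is the variance estimator that divides by n instead of n − 1; a minimal simulation sketch, with an arbitrary choice of σ² and sample size:

```python
import numpy as np

rng = np.random.default_rng(0)

# True distribution: N(0, sigma^2) with sigma^2 = 4 (assumed for illustration)
sigma2, n, trials = 4.0, 10, 100_000

samples = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))
var_biased = samples.var(axis=1, ddof=0)    # divides by n
var_unbiased = samples.var(axis=1, ddof=1)  # divides by n-1

# Bias(theta_hat) = E[theta_hat] - theta, estimated by averaging over trials
print(var_biased.mean() - sigma2)    # ~ -sigma2/n = -0.4
print(var_unbiased.mean() - sigma2)  # ~ 0
```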
❒ Diagonal matrix – A diagonal matrix D ∈ R^{n×n} is a square matrix with nonzero values on its diagonal and zeros everywhere else:

$$D = \begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & d_n \end{pmatrix}$$

Remark: we also note D as diag(d_1,...,d_n).
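As a quick structural check, a minimal NumPy sketch building diag(d_1,...,d_n) from its entries:

```python
import numpy as np

# A diagonal matrix diag(d_1, ..., d_n) built from its diagonal entries
d = np.array([1.0, 2.0, 3.0])
D = np.diag(d)

print(D)
# Recover the entries and check that everything off the diagonal is zero
assert np.allclose(np.diag(D), d)
assert np.count_nonzero(D - np.diag(np.diag(D))) == 0
```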
5.2.2 Matrix operations
❒ Vector-vector multiplication – There are two types of vector-vector products:

• inner product: for x,y ∈ R^n, we have:

$$x^T y = \sum_{i=1}^{n} x_i y_i \in \mathbb{R}$$
• outer product: for x ∈ R^m, y ∈ R^n, we have:

$$xy^T = \begin{pmatrix} x_1 y_1 & \cdots & x_1 y_n \\ \vdots & & \vdots \\ x_m y_1 & \cdots & x_m y_n \end{pmatrix} \in \mathbb{R}^{m\times n}$$
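Both products map directly to NumPy; a small sketch with arbitrary vectors:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])  # x in R^3
y = np.array([4.0, 5.0, 6.0])  # y in R^3

# Inner product: x^T y = sum_i x_i y_i, a scalar
print(x @ y)  # 32.0
assert np.isclose(x @ y, np.sum(x * y))

# Outer product: x y^T, an m x n matrix (here 3 x 3)
print(np.outer(x, y))
assert np.allclose(np.outer(x, y), x[:, None] * y[None, :])
```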
❒ Matrix-vector multiplication – The product of matrix A ∈ R^{m×n} and vector x ∈ R^n is a vector of size R^m, such that:

$$Ax = \begin{pmatrix} a_{r,1}^T x \\ \vdots \\ a_{r,m}^T x \end{pmatrix} = \sum_{i=1}^{n} a_{c,i} x_i \in \mathbb{R}^m$$

where $a_{r,i}^T$ are the vector rows and $a_{c,j}$ are the vector columns of A, and $x_i$ are the entries of x.
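The row view and the column view of the product above can be checked numerically; a small NumPy sketch with arbitrary dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))  # A in R^{m x n}, m=4, n=3
x = rng.normal(size=3)       # x in R^n

# Row view: (Ax)_i = a_{r,i}^T x
row_view = np.array([A[i] @ x for i in range(A.shape[0])])

# Column view: Ax = sum_i x_i * a_{c,i}, a linear combination of columns
col_view = sum(x[i] * A[:, i] for i in range(A.shape[1]))

assert np.allclose(A @ x, row_view)
assert np.allclose(A @ x, col_view)
```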
❒ Matrix-matrix multiplication – The product of matrices A ∈ R^{m×n} and B ∈ R^{n×p} is a matrix of size R^{m×p}, such that:

$$AB = \begin{pmatrix} a_{r,1}^T b_{c,1} & \cdots & a_{r,1}^T b_{c,p} \\ \vdots & & \vdots \\ a_{r,m}^T b_{c,1} & \cdots & a_{r,m}^T b_{c,p} \end{pmatrix} = \sum_{i=1}^{n} a_{c,i} b_{r,i}^T \in \mathbb{R}^{m\times p}$$

where $a_{r,i}^T$, $b_{r,i}^T$ are the vector rows and $a_{c,j}$, $b_{c,j}$ are the vector columns of A and B respectively.

❒ Transpose – The transpose of a matrix A ∈ R^{m×n}, noted A^T, is such that its entries are flipped:

$$\forall i,j,\quad A_{i,j}^T = A_{j,i}$$

Remark: for matrices A,B, we have $(AB)^T = B^T A^T$.

❒ Inverse – The inverse of an invertible square matrix A is noted A^{-1} and is the only matrix such that:

$$AA^{-1} = A^{-1}A = I$$

Remark: not all square matrices are invertible. Also, for matrices A,B, we have $(AB)^{-1} = B^{-1}A^{-1}$.

❒ Trace – The trace of a square matrix A, noted tr(A), is the sum of its diagonal entries:

$$\textrm{tr}(A) = \sum_{i=1}^{n} A_{i,i}$$

Remark: for matrices A,B, we have $\textrm{tr}(A^T) = \textrm{tr}(A)$ and $\textrm{tr}(AB) = \textrm{tr}(BA)$.

❒ Determinant – The determinant of a square matrix A ∈ R^{n×n}, noted |A| or det(A), is expressed recursively in terms of $A_{\backslash i,\backslash j}$, which is the matrix A without its ith row and jth column, as follows:

$$\det(A) = |A| = \sum_{j=1}^{n} (-1)^{i+j} A_{i,j} |A_{\backslash i,\backslash j}|$$

Remark: A is invertible if and only if |A| ≠ 0. Also, $|AB| = |A||B|$ and $|A^T| = |A|$.

5.2.3 Matrix properties

❒ Symmetric decomposition – A given matrix A can be expressed in terms of its symmetric and antisymmetric parts as follows:

$$A = \underbrace{\frac{A + A^T}{2}}_{\textrm{Symmetric}} + \underbrace{\frac{A - A^T}{2}}_{\textrm{Antisymmetric}}$$

❒ Norm – A norm is a function N : V → [0, +∞[ where V is a vector space, and such that for all x,y ∈ V, we have:

• N(x + y) ⩽ N(x) + N(y)

• N(ax) = |a|N(x) for a scalar a

• if N(x) = 0, then x = 0

For x ∈ V, the most commonly used norms are summed up in the table below:

| Norm | Notation | Definition |
|---|---|---|
| Manhattan, L1 | $\Vert x\Vert_1$ | $\sum_{i=1}^{n} \vert x_i\vert$ |
| Euclidean, L2 | $\Vert x\Vert_2$ | $\sqrt{\sum_{i=1}^{n} x_i^2}$ |
| p-norm, Lp | $\Vert x\Vert_p$ | $\left(\sum_{i=1}^{n} \vert x_i\vert^p\right)^{1/p}$ |
| Infinity, L∞ | $\Vert x\Vert_\infty$ | $\max\limits_i \vert x_i\vert$ |
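Several of the identities and norms above are easy to sanity-check numerically; a NumPy sketch with random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))

# Transpose, inverse, trace and determinant identities from above
assert np.allclose((A @ B).T, B.T @ A.T)
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# Symmetric decomposition: A = (A + A^T)/2 + (A - A^T)/2
sym, antisym = (A + A.T) / 2, (A - A.T) / 2
assert np.allclose(A, sym + antisym)

# Common vector norms from the table
x = np.array([3.0, -4.0])
print(np.linalg.norm(x, 1))       # 7.0  (Manhattan)
print(np.linalg.norm(x, 2))       # 5.0  (Euclidean)
print(np.linalg.norm(x, np.inf))  # 4.0  (infinity norm)
```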
❒ Linear dependence – A set of vectors is said to be linearly dependent if one of the vectors in the set can be defined as a linear combination of the others.

Remark: if no vector can be written this way, then the vectors are said to be linearly independent.

❒ Matrix rank – The rank of a given matrix A is noted rank(A) and is the dimension of the vector space generated by its columns. This is equivalent to the maximum number of linearly independent columns of A.

❒ Positive semi-definite matrix – A matrix A ∈ R^{n×n} is positive semi-definite (PSD) if we have:

$$A = A^T \qquad \textrm{and} \qquad \forall x \in \mathbb{R}^n,\ x^T A x \geqslant 0$$

❒ Spectral theorem – Let A ∈ R^{n×n} be symmetric. Then A is diagonalizable by a real orthogonal matrix U ∈ R^{n×n}:

$$\exists \Lambda \textrm{ diagonal},\quad A = U\Lambda U^T$$

❒ Singular-value decomposition – For a given matrix A ∈ R^{m×n}, the singular-value decomposition (SVD) guarantees the existence of unitary U ∈ R^{m×m}, diagonal Σ ∈ R^{m×n} and unitary V ∈ R^{n×n}, such that:

$$A = U\Sigma V^T$$

Remark: two useful matrix gradient identities are $\nabla_A \textrm{tr}(ABA^T C) = CAB + C^T AB^T$ and $\nabla_A |A| = |A|(A^{-1})^T$.
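A NumPy sketch tying these properties together: rank, a PSD matrix built as G = X^T X (so x^T G x = ‖Xx‖² ⩾ 0), the spectral theorem through np.linalg.eigh, and the SVD:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank: number of linearly independent columns
M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # row 2 = 2 * row 1
              [0.0, 1.0, 1.0]])
print(np.linalg.matrix_rank(M))  # 2

# PSD matrix: G = X^T X is symmetric with nonnegative eigenvalues
X = rng.normal(size=(5, 3))
G = X.T @ X
eigvals, U = np.linalg.eigh(G)   # spectral theorem: G = U diag(eigvals) U^T
assert np.all(eigvals >= -1e-10)
assert np.allclose(G, U @ np.diag(eigvals) @ U.T)

# SVD: A = U Sigma V^T for any (possibly rectangular) A
A = rng.normal(size=(4, 3))
U, s, Vt = np.linalg.svd(A, full_matrices=True)
Sigma = np.zeros((4, 3))
Sigma[:3, :3] = np.diag(s)
assert np.allclose(A, U @ Sigma @ Vt)
```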