Linear Algebra - class notes
e.g. a system of linear equations and its coefficient matrix:

$$
\begin{aligned}
a_{11}x_1 + \dots + a_{1n}x_n &= 0\\
a_{21}x_1 + \dots + a_{2n}x_n &= 0\\
&\;\;\vdots\\
a_{m1}x_1 + \dots + a_{mn}x_n &= 0
\end{aligned}
\qquad
\begin{bmatrix}
a_{11} & \cdots & a_{1n}\\
a_{21} & \cdots & a_{2n}\\
\vdots & \ddots & \vdots\\
a_{m1} & \cdots & a_{mn}
\end{bmatrix}
$$
When an equation (matrix row) is a linear combination of other equations (rows) in the same
system, that equation is dependent on the rest of the system. If there is no way to obtain any
equation (row) from the others by addition and scalar multiplication, the system is independent.
The determinant of a matrix ($a_{11}a_{22} - a_{21}a_{12}$ for 2×2 matrices) is used to determine singularity: if
the determinant is 0, the matrix is singular.
There are 3 elementary matrix row operations that preserve singularity and can help solve a system (see the sketch after this list):
1. Add a scalar multiple of one row to another row and replace the latter with the result;
2. Multiply a row by a non-zero scalar;
3. Swap two rows.
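A minimal NumPy sketch of the three operations; the matrix is just an arbitrary example:

```python
import numpy as np

A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])

# 1. Replace row 1 with (row 1 + 1.5 * row 0).
A[1] = A[1] + 1.5 * A[0]

# 2. Multiply row 2 by a non-zero scalar.
A[2] = 2.0 * A[2]

# 3. Swap rows 0 and 2.
A[[0, 2]] = A[[2, 0]]

# None of these operations changes whether det(A) is zero.
print(np.linalg.det(A))
```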
The row echelon form of a matrix is such that
• all rows consisting of 0s only are at the bottom; and
• the leading (leftmost non-0) entry, or pivot, of each row is to the right of that of the previous row.
The reduced row echelon form of a matrix has two additional requirements:
• the leading entries are all 1s; and
• each leading 1 is the only non-zero entry in its column.
The number of pivots of the row echelon form is equal to the rank of the matrix.
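For instance, SymPy can compute the reduced row echelon form directly, and the rank is the number of pivots; the matrix below is an arbitrary example (its middle row is twice the first, so the rank is 2):

```python
import numpy as np
from sympy import Matrix

rows = [[1, 2, 3],
        [2, 4, 6],
        [1, 0, 1]]

rref_form, pivot_cols = Matrix(rows).rref()  # RREF plus indices of pivot columns
print(rref_form)
print(len(pivot_cols))                       # rank = number of pivots -> 2

# Cross-check with NumPy's rank computation.
print(np.linalg.matrix_rank(np.array(rows, dtype=float)))
```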
Vectors
The L1-norm of a vector is $|x|_1 = \sum_{i=1}^{n} |x_i|$ and the L2-norm is $|x|_2 = \sqrt{\sum_{i=1}^{n} x_i^2}$.
$$
\begin{bmatrix} x_{11}\\ \vdots\\ x_{1n} \end{bmatrix} + \dots + \begin{bmatrix} x_{m1}\\ \vdots\\ x_{mn} \end{bmatrix}
=
\begin{bmatrix} x_{11} + \dots + x_{m1}\\ \vdots\\ x_{1n} + \dots + x_{mn} \end{bmatrix}
\qquad
\begin{bmatrix} x_{11}\\ \vdots\\ x_{1n} \end{bmatrix} - \dots - \begin{bmatrix} x_{m1}\\ \vdots\\ x_{mn} \end{bmatrix}
=
\begin{bmatrix} x_{11} - \dots - x_{m1}\\ \vdots\\ x_{1n} - \dots - x_{mn} \end{bmatrix}
$$
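In NumPy these are plain element-wise operations; the vectors are arbitrary examples:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

print(u + v)  # element-wise sum: [5. 7. 9.]
print(u - v)  # element-wise difference: [-3. -3. -3.]
```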
Geometrically,
$\vec{u} + \vec{v}$ is a vector whose origin is the origin of $\vec{u}$ and whose vertex is the vertex of $\vec{v}$ (placing the origin of $\vec{v}$ at the vertex of $\vec{u}$); and
$\vec{u} - \vec{v}$ is a vector whose origin is the vertex of $\vec{v}$ and whose vertex is the vertex of $\vec{u}$.
The L1-distance of two vectors is $|u - v|_1 = \sum_{i=1}^{n} |u_i - v_i|$; the L2-distance of two vectors is $|u - v|_2 = \sqrt{\sum_{i=1}^{n} (u_i - v_i)^2}$.
The scalar multiplication of a vector is simple member-wise multiplication:
$$
\lambda \cdot \begin{bmatrix} x_1\\ \vdots\\ x_n \end{bmatrix} = \begin{bmatrix} \lambda \cdot x_1\\ \vdots\\ \lambda \cdot x_n \end{bmatrix},
$$
which geometrically means a stretching of the vector by factor λ.
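The norms and distances above map directly onto np.linalg.norm; the vectors are arbitrary examples:

```python
import numpy as np

u = np.array([3.0, -4.0])
v = np.array([1.0, 1.0])

print(np.linalg.norm(u, 1))      # L1-norm: |3| + |-4| = 7
print(np.linalg.norm(u))         # L2-norm: sqrt(9 + 16) = 5

print(np.linalg.norm(u - v, 1))  # L1-distance
print(np.linalg.norm(u - v))     # L2-distance

print(2.5 * u)                   # scalar multiplication stretches by 2.5
```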
The dot product of two vectors can be considered geometrically as projecting one vector
onto the other and multiplying the length of the projection by the norm of the other vector: $\langle u, v \rangle = |u| \cdot |v| \cdot \cos(\theta)$.
$$
\begin{aligned}
\langle u, v \rangle &= 0 \text{ if } \theta = \pm\tfrac{\pi}{2}; &\qquad \langle u, v \rangle &= -|u| \cdot |v| \text{ if } \theta = \pm\pi;\\
\langle u, v \rangle &> 0 \text{ if } \theta \in \left(-\tfrac{\pi}{2}, \tfrac{\pi}{2}\right); &\qquad \langle u, v \rangle &< 0 \text{ if } \theta \in \left(\tfrac{\pi}{2}, \tfrac{3\pi}{2}\right).
\end{aligned}
$$
Algebraically, the dot product of two n-vectors is $\vec{u} \cdot \vec{v} = \sum_{i=1}^{n} u_i v_i$.
$$
\begin{bmatrix}
x_{11} & \cdots & x_{1n}\\
\vdots & \ddots & \vdots\\
x_{m1} & \cdots & x_{mn}
\end{bmatrix}
\cdot
\begin{bmatrix}
v_1\\ \vdots\\ v_n
\end{bmatrix}
=
\begin{bmatrix}
\sum_{i=1}^{n} v_i x_{1i}\\
\vdots\\
\sum_{i=1}^{n} v_i x_{mi}
\end{bmatrix}
$$
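A quick NumPy check of the algebraic and geometric views; the vectors and matrix are arbitrary examples:

```python
import numpy as np

u = np.array([1.0, 2.0])
v = np.array([3.0, 0.0])

dot = np.dot(u, v)                            # algebraic: sum of u_i * v_i
cos_theta = dot / (np.linalg.norm(u) * np.linalg.norm(v))
print(dot, np.degrees(np.arccos(cos_theta)))  # geometric: |u||v|cos(theta)

# A matrix-vector product takes the dot product of each row with the vector.
X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(X @ u)                                  # [1*1 + 2*2, 3*1 + 4*2] = [5., 11.]
```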
Linear transformations
$$
X \cdot Y =
\begin{bmatrix}
x_{11} & \cdots & x_{1n}\\
\vdots & \ddots & \vdots\\
x_{m1} & \cdots & x_{mn}
\end{bmatrix}
\cdot
\begin{bmatrix}
y_{11} & \cdots & y_{1p}\\
\vdots & \ddots & \vdots\\
y_{n1} & \cdots & y_{np}
\end{bmatrix}
=
\begin{bmatrix}
\sum_{i=1}^{n} x_{1i} y_{i1} & \cdots & \sum_{i=1}^{n} x_{1i} y_{ip}\\
\vdots & \ddots & \vdots\\
\sum_{i=1}^{n} x_{mi} y_{i1} & \cdots & \sum_{i=1}^{n} x_{mi} y_{ip}
\end{bmatrix}
$$

i.e. each entry of the product is the dot product of the corresponding row of X with the corresponding column of Y.
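A naive triple-loop implementation of this definition, checked against NumPy's built-in operator; the matrices are arbitrary examples:

```python
import numpy as np

def matmul(X, Y):
    """Multiply an m x n matrix by an n x p matrix entry by entry."""
    m, n = X.shape
    n2, p = Y.shape
    assert n == n2, "inner dimensions must match"
    Z = np.zeros((m, p))
    for r in range(m):
        for c in range(p):
            # Entry (r, c) is the dot product of row r of X and column c of Y.
            Z[r, c] = sum(X[r, i] * Y[i, c] for i in range(n))
    return Z

X = np.array([[1.0, 2.0], [3.0, 4.0]])
Y = np.array([[5.0, 6.0], [7.0, 8.0]])
print(matmul(X, Y))
print(X @ Y)  # same result
```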
The identity matrix
$$
I = \begin{bmatrix}
1 & 0 & \cdots & 0\\
0 & 1 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & 1
\end{bmatrix}
$$
is such that multiplying any matrix A by it results in A.
The inverse $A^{-1}$ of a matrix A is such that $A A^{-1} = I$. Only non-singular matrices are invertible;
therefore det(A) ≠ 0 indicates that $A^{-1}$ exists. The determinant of the inverse of a matrix A is
$$
\det(A^{-1}) = \frac{1}{\det(A)}.
$$
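Checking these identities numerically; the matrix is an arbitrary non-singular example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))  # A times its inverse is ~ I

# det(A^-1) = 1 / det(A)
print(np.linalg.det(A_inv), 1 / np.linalg.det(A))
```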
The linear transformation associated with a singular matrix collapses some or all of the
dimensions of the original space. The number of dimensions of the resulting space is equal to the rank of the
matrix. The determinant of an n × n matrix equals the ratio between any area/volume of the
original space and the area/volume it is mapped onto:
$$
\det(A) = \frac{V_{\text{resulting}}}{V_{\text{original}}},
$$
which is 0 for singular matrices.
A negative determinant means that the space is flipped (in 2D, counter-clockwise ordering becomes clockwise).
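A 2D sanity check of this idea: transform the unit square's edge vectors and compare the signed area of the resulting parallelogram with det(A); the matrix is an arbitrary example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# The unit square is spanned by e1 and e2 (area 1).
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Their images span the parallelogram the square is mapped onto.
a1, a2 = A @ e1, A @ e2

# Signed area of the parallelogram = 2D cross product of the edge vectors.
signed_area = a1[0] * a2[1] - a1[1] * a2[0]
print(signed_area, np.linalg.det(A))  # both 3.0; a negative value would mean a flip
```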
Eigenvectors and eigenvalues
A basis of a vector space is a minimal set of vectors that spans the whole space. In 2D, the
traditional basis consists of the unit vectors $\hat{i} = \begin{bmatrix} 1\\ 0 \end{bmatrix}$ and $\hat{j} = \begin{bmatrix} 0\\ 1 \end{bmatrix}$.
A linear transformation is associated with a corresponding change of basis whereby the original
basis vectors are mapped onto a set of new vectors. Given a linear transformation defined by matrix A, the
vectors whose mapping amounts to at most a scalar multiplication (with no change in
direction) of the original vectors are called eigenvectors. The corresponding scalars are called
eigenvalues. A basis of a space consisting entirely of eigenvectors is called an eigenbasis.
$$
\left( \begin{bmatrix} a & b\\ c & d \end{bmatrix} - \lambda \cdot \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} \right) \cdot \begin{bmatrix} x\\ y \end{bmatrix} = \begin{bmatrix} 0\\ 0 \end{bmatrix}
$$

This system has non-trivial solutions only when $\det(A - \lambda I) = (a - \lambda)(d - \lambda) - bc = \lambda^2 - (a + d)\lambda + (ad - bc) = 0$. The roots of this equation identify the eigenvalues:
$$
\lambda_{1,2} = \frac{(a + d) \pm \sqrt{(a + d)^2 - 4(ad - bc)}}{2}.
$$
Corresponding eigenvectors can be found by rewriting the transformation as a system of
equations:
$$
\begin{bmatrix} a & b\\ c & d \end{bmatrix} \cdot \begin{bmatrix} x\\ y \end{bmatrix} = \begin{bmatrix} \lambda & 0\\ 0 & \lambda \end{bmatrix} \cdot \begin{bmatrix} x\\ y \end{bmatrix},
$$
i.e.
$$
\begin{aligned}
a x + b y &= \lambda \cdot x + 0 \cdot y\\
c x + d y &= 0 \cdot x + \lambda \cdot y
\end{aligned}
$$
For each of $\lambda_1$ and $\lambda_2$, the system can be solved to obtain a set of coordinates $\begin{bmatrix} x\\ y \end{bmatrix}$, which identifies
the eigenvector associated with each eigenvalue.
For example, given a transformation $A = \begin{bmatrix} -6 & 3\\ 4 & 5 \end{bmatrix}$, the two eigenvalues are found from:
$$
\det\left( \begin{bmatrix} -6 & 3\\ 4 & 5 \end{bmatrix} - \lambda \cdot \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} \right) = 0
$$
$$
\det \begin{bmatrix} -6 - \lambda & 3\\ 4 & 5 - \lambda \end{bmatrix} = 0
$$
$$
(-6 - \lambda) \cdot (5 - \lambda) - 3 \cdot 4 = 0
$$
$$
\lambda^2 + \lambda - 42 = 0
$$
so $\lambda_1 = 6$ and $\lambda_2 = -7$. Solving the corresponding systems gives the eigenvectors $\begin{bmatrix} 1\\ 4 \end{bmatrix}$ (for $\lambda_1 = 6$) and $\begin{bmatrix} -3\\ 1 \end{bmatrix}$ (for $\lambda_2 = -7$).
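The example can be verified with NumPy; note that np.linalg.eig returns eigenvectors normalized to unit length, so they are scalar multiples of the ones computed by hand:

```python
import numpy as np

A = np.array([[-6.0, 3.0],
              [4.0, 5.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                        # 6. and -7. (order is not guaranteed)

# Each column of `eigenvectors` satisfies A v = lambda v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))    # True

# Direct check with the hand-computed eigenvectors:
print(A @ np.array([1.0, 4.0]))           # [6., 24.] = 6 * [1, 4]
print(A @ np.array([-3.0, 1.0]))          # [21., -7.] = -7 * [-3, 1]
```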