Linear Algebra - class notes

These notes explain systems of linear equations, including types of systems (complete, redundant, contradictory) and their ranks. They cover the representation of linear equations using matrices, vector operations, linear transformations, and eigenvalues/eigenvectors. Key concepts such as determinants, row echelon forms, and the significance of eigenvectors in linear transformations are also discussed.

Systems of linear equations

A linear equation has the form $a_1 x_1 + a_2 x_2 + \dots + b = 0$.


A system of equations can be:
• complete, e.g. $a_1 x_1 = 0$ and $a_2 x_2 = 0$ (non-singular): only 1 unique solution;
• redundant, e.g. $a_1 x_1 = 0$ and $a_1 x_1 = 0$ (singular): infinitely many solutions; or
• contradictory, e.g. $a_1 x_1 = 0$ and $a_1 x_1 = 1$ (singular): no solution.
The rank of a system measures the number of non-contradictory pieces of information conveyed; if the rank equals the number of equations, the system is complete. The dimension of the solution space of a system and the rank of the system always add up to the number of unknowns (which, for the square systems considered here, equals the number of equations):

dimension of solution space + rank = number of unknowns
Matrices can represent the coefficients of a linear system of equations, e.g. the system

$a_{11} x_1 + \dots + a_{1n} x_n = 0$
$a_{21} x_1 + \dots + a_{2n} x_n = 0$
$\dots$
$a_{m1} x_1 + \dots + a_{mn} x_n = 0$

corresponds to the coefficient matrix

$\begin{bmatrix} a_{11} & \cdots & a_{1n} \\ a_{21} & \cdots & a_{2n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix}$

When an equation (matrix row) is a linear combination of other equations (rows) in the same system, the equation is dependent on the rest of the system. If there is no way to obtain any equation (row) from the others by addition or scalar multiplication, the system is independent.
The determinant of a matrix ($a_{11} a_{22} - a_{21} a_{12}$ for 2×2 matrices) is used to determine singularity: if the determinant is 0, the matrix is singular.
There are 3 elementary matrix row operations that preserve singularity and can help solve a system (see the code sketch after the echelon-form definitions below):
1. Add a scalar multiple of one row to another and replace the latter with the result;
2. Multiply a row by a non-zero scalar; and
3. Swap two rows.
The row echelon form of a matrix is such that
• all rows consisting of 0s only are at the bottom; and
• each leading (leftmost non-0) entry, or pivot, is to the right of the previous row's pivot.
The reduced row echelon form of a matrix has two additional requirements:
• the leading entries are all 1s; and
• each leading 1 is the only non-zero entry in its column.
The number of pivots of the row echelon form is equal to the rank of the matrix.
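A minimal sketch of how the three row operations produce a row echelon form, assuming NumPy; the function name `row_echelon` and the tolerance are illustrative, not from the notes. The rank is read off as the number of pivots:

```python
import numpy as np

def row_echelon(A, tol=1e-12):
    """Reduce a copy of A to row echelon form using the three
    elementary row operations; return the form and the rank."""
    M = A.astype(float)                                # work on a float copy
    rows, cols = M.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row == rows:
            break
        # find a non-zero entry at or below the current pivot row
        nonzero = np.nonzero(np.abs(M[pivot_row:, col]) > tol)[0]
        if nonzero.size == 0:
            continue                                   # no pivot in this column
        swap = pivot_row + nonzero[0]
        M[[pivot_row, swap]] = M[[swap, pivot_row]]    # operation 3: swap rows
        for r in range(pivot_row + 1, rows):           # operation 1: clear entries below
            M[r] -= (M[r, col] / M[pivot_row, col]) * M[pivot_row]
        pivot_row += 1
    return M, pivot_row                                # rank = number of pivots

E, rank = row_echelon(np.array([[2, 4], [1, 2]]))      # second row is half the first
print(E)        # [[2. 4.] [0. 0.]]
print(rank)     # 1: the matrix is singular
```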
Vectors

A vector can be considered a collection of n numbers, or an arrow in n-dimensional space with n coordinates whose direction and length are precisely defined by these coordinates.

The L1-norm of a vector is $\|x\|_1 = \sum_{i=1}^{n} |x_i|$ and the L2-norm is $\|x\|_2 = \sqrt{\sum_{i=1}^{n} x_i^2}$.

Vector addition and subtraction happen member-wise:

$\begin{bmatrix} x_{11} \\ \vdots \\ x_{1n} \end{bmatrix} + \dots + \begin{bmatrix} x_{m1} \\ \vdots \\ x_{mn} \end{bmatrix} = \begin{bmatrix} x_{11} + \dots + x_{m1} \\ \vdots \\ x_{1n} + \dots + x_{mn} \end{bmatrix}$ and $\begin{bmatrix} x_{11} \\ \vdots \\ x_{1n} \end{bmatrix} - \dots - \begin{bmatrix} x_{m1} \\ \vdots \\ x_{mn} \end{bmatrix} = \begin{bmatrix} x_{11} - \dots - x_{m1} \\ \vdots \\ x_{1n} - \dots - x_{mn} \end{bmatrix}$

Geometrically,
$\vec{u} + \vec{v}$ is the vector whose origin is the origin of $\vec{u}$ and whose vertex is the vertex of $\vec{v}$ (placing the origin of $\vec{v}$ at the vertex of $\vec{u}$); and
$\vec{u} - \vec{v}$ is the vector whose origin is the vertex of $\vec{v}$ and whose vertex is the vertex of $\vec{u}$ (placing $\vec{u}$ and $\vec{v}$ at a common origin).
The L1-distance of two vectors is $\|u - v\|_1 = \sum_{i=1}^{n} |u_i - v_i|$;

the L2-distance of two vectors is $\|u - v\|_2 = \sqrt{\sum_{i=1}^{n} (u_i - v_i)^2}$; and

the cosine distance of two vectors is $\cos(\theta)$, where θ is the angle between them.
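A quick numeric check of the three measures, assuming NumPy (`u` and `v` are made-up example vectors):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 0.0, -1.0])

print(np.sum(np.abs(u - v)))        # L1-distance: 9.0
print(np.linalg.norm(u - v))        # L2-distance: sqrt(9 + 4 + 16) = sqrt(29)
cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(cos_theta)                    # cosine of the angle between u and v
```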

The scalar multiplication of a vector is simple member-wise multiplication: $\lambda \cdot \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} \lambda \cdot x_1 \\ \vdots \\ \lambda \cdot x_n \end{bmatrix}$, which geometrically means a stretching of the vector by factor λ.
The dot product of two vectors can be considered geometrically as the projection of one vector onto the other and then multiplying the two norms together: $\langle u, v \rangle = \|u\| \cdot \|v\| \cdot \cos(\theta)$.

$\langle u, v \rangle = 0$ if $\theta = \pm\frac{\pi}{2}$; $\langle u, v \rangle > 0$ if $\theta \in \left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$; and $\langle u, v \rangle < 0$ otherwise (at $\theta = \pm\pi$, $\langle u, v \rangle = -\|u\|\|v\|$, since $\cos(\pm\pi) = -1$).
Algebraically, the dot product of two n-vectors is $\vec{u} \cdot \vec{v} = \sum_{i=1}^{n} u_i v_i$.

The product of an m × n matrix with an n-vector is an m-vector:

$\begin{bmatrix} x_{11} & \cdots & x_{1n} \\ \vdots & \ddots & \vdots \\ x_{m1} & \cdots & x_{mn} \end{bmatrix} \cdot \begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{n} v_i x_{1i} \\ \vdots \\ \sum_{i=1}^{n} v_i x_{mi} \end{bmatrix}$
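For instance, in NumPy (matrix and vector values are made up):

```python
import numpy as np

X = np.array([[1, 2, 3],
              [4, 5, 6]])       # shape (2, 3)
v = np.array([1, 0, -1])        # an n-vector, n = 3
print(X @ v)                    # [-2 -2]: each entry is a row of X dotted with v
```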
Linear transformations

Multiplication by a matrix can be interpreted as a linear transformation: an n × n matrix maps each point in the n-dimensional space onto another point.
Multiplication of matrices can then be considered as a combination of linear transformations:

$X \cdot Y = \begin{bmatrix} x_{11} & \cdots & x_{1n} \\ \vdots & \ddots & \vdots \\ x_{m1} & \cdots & x_{mn} \end{bmatrix} \cdot \begin{bmatrix} y_{11} & \cdots & y_{1p} \\ \vdots & \ddots & \vdots \\ y_{n1} & \cdots & y_{np} \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{n} x_{1i} y_{i1} & \cdots & \sum_{i=1}^{n} x_{1i} y_{ip} \\ \vdots & \ddots & \vdots \\ \sum_{i=1}^{n} x_{mi} y_{i1} & \cdots & \sum_{i=1}^{n} x_{mi} y_{ip} \end{bmatrix}$

i.e. entry (j, k) of the product is the dot product of row j of X with column k of Y.

When multiplying two matrices A and B, $\det(AB) = \det(A) \cdot \det(B)$.


When multiplying matrices, the number of columns in the first matrix has to be equal to the number of rows in the second matrix: (m, n) · (n, p) → (m, p). NumPy's matmul function works on vectors: when NumPy arrays are shaped as (m,), matmul returns a dot product. When they are shaped as 2D (m, 1) arrays, however, an error is thrown. Similarly, NumPy's dot function works with matrices and returns a matrix product by broadcasting the dot product operation to all rows and columns. Broadcasting works with other operations too: e.g. (m, n) – scalar = (m, n). (See lab.)
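A minimal sketch of these shape rules, assuming NumPy (array values are arbitrary):

```python
import numpy as np

u = np.array([1, 2, 3])              # shape (3,)
v = np.array([4, 5, 6])              # shape (3,)
print(np.matmul(u, v))               # 32: 1-D inputs yield a dot product

A = np.arange(6).reshape(2, 3)       # shape (2, 3)
B = np.arange(12).reshape(3, 4)      # shape (3, 4)
print(np.dot(A, B).shape)            # (2, 4): (m, n) . (n, p) -> (m, p)

col = u.reshape(3, 1)                # shape (3, 1)
# np.matmul(col, col) would raise ValueError: inner dimensions (1 and 3) differ
print((A - 1).shape)                 # (2, 3): the scalar is broadcast to every entry
```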

The identity matrix $I = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}$ is such that multiplying any vector or matrix A by it results in A; the identity matrix maps each point onto itself.

The inverse $A^{-1}$ of a matrix A is such that $A A^{-1} = I$. Only non-singular matrices are invertible; therefore det(A) ≠ 0 indicates that $A^{-1}$ exists. The determinant of the inverse of a matrix A is $\det(A^{-1}) = \frac{1}{\det(A)}$.
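A quick NumPy check of both claims, using an arbitrary non-singular matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])
if abs(np.linalg.det(A)) > 1e-12:                    # det != 0, so A is invertible
    A_inv = np.linalg.inv(A)
    print(np.allclose(A @ A_inv, np.eye(2)))         # True: A @ A^-1 = I
    print(np.isclose(np.linalg.det(A_inv),
                     1 / np.linalg.det(A)))          # True: det(A^-1) = 1/det(A)
```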

The linear transformation associated with a singular matrix collapses some or all of the dimensions of the original space. The resulting number of dimensions is equal to the rank of the matrix. The determinant of an n × n matrix equals the proportion between any area/volume of the original space and the area/volume it is mapped onto: $\det(A) = \frac{V_{\text{resulting}}}{V_{\text{original}}}$, which is 0 for singular matrices.

A negative determinant means that the space is flipped (counter-clockwise order becomes clockwise in 2D).
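To illustrate with NumPy (both matrices are made up for the purpose):

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 3.0]])   # stretches x by 2 and y by 3
print(np.linalg.det(A))                  # 6.0: the unit square maps to a 2x3 rectangle

B = np.array([[1.0, 2.0], [2.0, 4.0]])   # second row = 2 * first row, so rank 1
print(np.linalg.det(B))                  # 0.0: the plane collapses onto a line
```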
Eigenvectors and eigenvalues

Bases of a linear space are minimal sets of vectors that span the whole space. In 2D, the traditional basis consists of the unit vectors $\hat{i} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\hat{j} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$.
A linear transformation is associated with a corresponding change of basis whereby the original basis vectors are mapped onto a set of new vectors. Given a linear transformation defined by matrix A, the vectors whose mapping consists of at most a scalar multiplication (but no change in direction) are called eigenvectors. The corresponding scalars are called eigenvalues. A basis of a space consisting entirely of eigenvectors is called an eigenbasis.

How to find eigenvalues and eigenvectors:

Since
• eigenvectors are mapped onto (scaled versions of) themselves; and
• an identity matrix scaled by an eigenvalue λ of A simply scales all vectors of the space,
the linear transformation associated with matrix A coincides with the linear transformation associated with λI exactly at the eigenvectors of A. That is, at any eigenvector v,
$A \cdot v = \lambda I \cdot v$, or
$A \cdot v - \lambda I \cdot v = 0$
The difference of $A \cdot v$ and $\lambda I \cdot v$ must be a null vector! In 2D, this means

$\left(\begin{bmatrix} a & b \\ c & d \end{bmatrix} - \lambda \cdot \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\right) \cdot \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$

$\left(\begin{bmatrix} a & b \\ c & d \end{bmatrix} - \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix}\right) \cdot \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$

$\begin{bmatrix} a - \lambda & b \\ c & d - \lambda \end{bmatrix} \cdot \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$
But if the product is a null vector for a non-zero $\begin{bmatrix} x \\ y \end{bmatrix}$, then the transformation given by the difference matrix A − λI must be singular, with determinant 0. The determinant formula for the eigenvalues is called the characteristic polynomial:

$(a - \lambda)(d - \lambda) - bc = 0$

$ad - \lambda d - \lambda a + \lambda^2 - bc = 0$

$\lambda^2 - (d + a)\lambda + ad - bc = 0$

The roots of this equation identify the eigenvalues: $\lambda_{1,2} = \frac{(d + a) \pm \sqrt{(d + a)^2 - 4(ad - bc)}}{2}$.
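The characteristic polynomial can also be solved numerically; a sketch with NumPy, plugging in the entries of the matrix from the worked example below:

```python
import numpy as np

a, b, c, d = -6.0, 3.0, 4.0, 5.0                 # entries of the example matrix below
# coefficients of lambda^2 - (d + a) * lambda + (ad - bc)
print(np.roots([1.0, -(d + a), a * d - b * c]))  # the two eigenvalues, 6 and -7 (order may vary)
```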
Corresponding eigenvectors can be found by expanding the transformation matrix into a system of equations:

$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \cdot \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} \cdot \begin{bmatrix} x \\ y \end{bmatrix}$

$a x + b y = \lambda \cdot x + 0 \cdot y$
$c x + d y = 0 \cdot x + \lambda \cdot y$

For each of λ₁ and λ₂, the system can be solved to obtain a set of coordinates $\begin{bmatrix} x \\ y \end{bmatrix}$, which identifies an eigenvector associated with each eigenvalue (any scalar multiple of it is also an eigenvector).

For example, given a transformation $A = \begin{bmatrix} -6 & 3 \\ 4 & 5 \end{bmatrix}$, the eigenvalues are found from:

$\det\left(\begin{bmatrix} -6 & 3 \\ 4 & 5 \end{bmatrix} - \lambda \cdot \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\right) = 0$

$\det\begin{bmatrix} -6 - \lambda & 3 \\ 4 & 5 - \lambda \end{bmatrix} = 0$

$(-6 - \lambda) \cdot (5 - \lambda) - 3 \cdot 4 = 0$

$\lambda^2 + \lambda - 42 = 0$

The two eigenvalues are $\lambda_1 = 6$ and $\lambda_2 = -7$.

$\begin{bmatrix} -6 & 3 \\ 4 & 5 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = 6 \begin{bmatrix} x \\ y \end{bmatrix}$ and $\begin{bmatrix} -6 & 3 \\ 4 & 5 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = -7 \begin{bmatrix} x \\ y \end{bmatrix}$

$-6x + 3y = 6x$ and $-6x + 3y = -7x$
$4x + 5y = 6y$ and $4x + 5y = -7y$

Then for $\lambda_1 = 6$: $x = 1$ and $y = 4$, so the eigenvector is $\begin{bmatrix} 1 \\ 4 \end{bmatrix}$; and for $\lambda_2 = -7$: $x = 3$ and $y = -1$, so the eigenvector is $\begin{bmatrix} 3 \\ -1 \end{bmatrix}$.
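The hand calculation can be checked with NumPy's eigensolver; note that `np.linalg.eig` returns unit-length eigenvectors, so they come out as scaled versions of the vectors above:

```python
import numpy as np

A = np.array([[-6.0, 3.0], [4.0, 5.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)   # column i of `eigenvectors` pairs with eigenvalues[i]
print(eigenvalues)                             # [-7.  6.] (order may vary)
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))         # True, True: A scales each eigenvector by its eigenvalue
```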
