2LA Notes - AS-2
BY ALESHAN SUBBAN
LECTURER: DR JURIE CONRADIE
INTRODUCTION
What is Linear Algebra?
Linear – think “line”
𝑦 = 𝑚𝑥 + 𝑐 : A line in 2D
(x; y; z) = λ(a₁; a₂; a₃) + (b₁; b₂; b₃), λ ∈ ℝ : The vector equation of a line in 3D
Algebra – think “letters”, “rules” and “abstraction”
NOTATIONS
a. We use U, V, W, etc. for Vector Spaces.
b. We use u, v, w, etc. for vectors in these spaces.
c. We use Greek letters 𝜆, α, β, µ, etc. for scalars.
RULES
i. αu + βv is yet another vector in V.
ii. u+v=v+u
iii. (α + β)u = αu + βu
α(u + v) = αu + αv
iv. (αβ)u = α(βu)
v. There is a vector 0 ∈ V such that
0 + v = v for every v ∈ V
vi. For v ∈ V, 0 · v = 0
vii. For v ∈ V, 1 · v = v
3. Mnxm(ℝ) = The set of all (n x m) matrices with real entries, where n and m are fixed.
Two such matrices are equal if and only if corresponding entries are equal.
Addition = Usual Matrix Addition.
Scalar Multiplication = Multiply every entry by 𝜆 where 𝜆 ∈ ℝ.
All rules are satisfied. Check!
What is the “zero vector” in this case? Answer: The matrix with all entries being
0’s.
Special case: the square matrices, where n = m. We then write Mnxn(ℝ) = Mn(ℝ).
4. Pn = The set of all polynomials with degree at most n in the variable x, where n is fixed.
aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ⋯ + a₁x + a₀
where the aᵢ, for i = 0 … n, are real constants and n ∈ ℕ
Equality:
aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ⋯ + a₁x + a₀ = bₙxⁿ + bₙ₋₁xⁿ⁻¹ + ⋯ + b₁x + b₀ ⇔ aᵢ = bᵢ for all i = 0 … n
Addition:
(aₙxⁿ + ⋯ + a₀) + (bₙxⁿ + ⋯ + b₀) = (aₙ + bₙ)xⁿ + ⋯ + (a₀ + b₀)
Scalar Multiplication:
λ(aₙxⁿ + ⋯ + a₀) = λaₙxⁿ + ⋯ + λa₀
5. V = {(x₁; x₂; x₃) ∈ ℝ³ : x₁ + x₂ + x₃ = 1 }
Using the usual addition and scalar multiplication of ℝ³, is V a vector space?
If (x₁; x₂; x₃) ∈ V then 0(x₁; x₂; x₃) = (0; 0; 0), which is not in V since 0 + 0 + 0 = 0 ≠ 1.
Hence V cannot be a vector space!
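This failure is easy to check numerically; a minimal Python sketch (the helper name in_V is illustrative, not from the notes):

```python
# V = {(x1; x2; x3) in R^3 : x1 + x2 + x3 = 1} with the usual operations of R^3.

def in_V(x):
    """Membership test for V: the coordinates must sum to 1."""
    return abs(sum(x) - 1.0) < 1e-12

v = (0.2, 0.3, 0.5)
print(in_V(v))                  # True

# Scaling by 0 gives the zero vector of R^3, which is not in V,
# so V contains no zero vector and cannot be a vector space.
zero = tuple(0 * xi for xi in v)
print(in_V(zero))               # False
```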
SUBSPACES
If V is a vector space and S is a subset of V, then S inherits the operations of addition and
scalar multiplication from V.
Definition
If V is a vector space and S ⊆ V then we say S is a vector subspace of V if it is a vector space
in its own right, with the operations it inherits from V.
Properties of a Subspace
If S is a subspace of a vector space V:
i. 0∈S
ii. S is closed under addition: u, v ∈ S ⇒ u + v ∈ S
iii. S is closed under scalar multiplication: λ ∈ ℝ, v ∈ S ⇒ λv ∈ S
EXAMPLES OF SUBSPACES
1. V = ℝ² ; S = { (x₁; x₂) ∈ ℝ² : x₁ ≥ 0, x₂ ≥ 0 } — not a subspace: S is not closed under scalar multiplication (take λ < 0).
2. V = ℝ² ; S = { (x₁; x₂) ∈ ℝ² : x₁x₂ ≥ 0 } — not a subspace: S is not closed under addition, e.g. (1; 0) + (0; −1) = (1; −1) ∉ S.
3. V = ℝ² ; S = { (x₁; x₂) ∈ ℝ² : x₂ = 2x₁ } — a subspace: a line through the origin.
Proposition 1
If V is a vector space and S is a subset of V, then S is a vector subspace of V iff:
i. 0∈S
ii. S is closed under addition: u, v ∈ S ⇒ u + v ∈ S
iii. S is closed under scalar multiplication: λ ∈ ℝ, v ∈ S ⇒ λv ∈ S
Proof
If S is a subspace, then S is a vector space, so by definition S must be closed under addition
and scalar multiplication. Also S must have a zero vector, z. Now V has a zero vector 0, and
S is closed under scalar multiplication, so 0 · z ∈ S; but 0 · z = 0, hence 0 ∈ S and z = 0.
If S satisfies the above conditions then we have to check that all the definitions are satisfied.
But since they are satisfied in V, they are certainly satisfied in S.
5. V = ℝ²
The subspaces are: {0}, ℝ², and every line passing through the origin (in particular the axes).
6. If S and T are two subspaces of V, the intersection of these two subspaces is a subspace
itself:
S ∩ T = { x ∈ V : x ∈ S and x ∈ T }
8. V = ℝ² ; S = { (x₁; x₂) ∈ ℝ² : x₁ ≥ 0, x₂ ≥ 0 }
V = ℝ² ; S = { (x₁; x₂) ∈ ℝ² : x₁x₂ ≥ 0 }
Neither of these is a subspace (see Examples 1 and 2 above): each fails one of the closure conditions.
Proposition 2
If S and T are subspaces of V, so is S ⋂ T = { u ∈ 𝑉 : u ∈ 𝑆 and u ∈ 𝑇 }
Proof
i. 0 ∈ S, 0 ∈ T ⟹ 0 ∈ S ⋂ T
ii. u, v ∈ S ⋂ T ⟹ u, v ∈ S and u, v ∈ T
⟹ u + v ∈ S and u + v ∈ T
⟹ u + v ∈ S ⋂ T
iii. u ∈ S ⋂ T, 𝜆 is a scalar
⟹ u ∈ S and u ∈ T
⟹ 𝜆 u ∈ S and 𝜆 u ∈ T
⟹ 𝜆 u∈S⋂T
Analysis
V = F[a; b], the set of all functions f: [a; b] ⇾ ℝ
i. S = C[𝑎; 𝑏] is the set of all continuous functions f: [𝑎; 𝑏] ⇾ ℝ
S is a subspace!
ii. S = { f ∈ C[0; 1] : f(0) = 1 }
S is not a subspace, the zero vector, 0, is not in S.
iii. S = { f ∈ C[0; 1] : f(0) = 0 }
S is a subspace!
LINEAR DEPENDENCE
For a vector space V where u, v ∈ V, if v = λu, we say that v is linearly dependent on u.
LINEAR COMBINATIONS
If v1; v2; … ; vn ∈ 𝑉 and 𝜆1 ; 𝜆2 ; … ; 𝜆𝑛 ∈ ℝ
𝜆1 v1 + 𝜆2 v2 + … + 𝜆𝑛 vn
Is called a linear combination of the vectors v1; v2; … ; vn ∈ 𝑉. We call 𝜆1 ; 𝜆2 ; … ; 𝜆𝑛 ∈ ℝ the
coefficients.
We call
0v1 + 0v2 + … + 0vn = 0
The trivial linear combination. It produces the zero vector, 0.
Definition
We say the vector v is linearly dependent on the vectors v₁; v₂; … ; vₙ if we can write v as a
linear combination of v₁; v₂; … ; vₙ. So we can find λ₁; λ₂; … ; λₙ ∈ ℝ such that:
v = λ₁v₁ + λ₂v₂ + … + λₙvₙ
Rearranging, 0 = λ₁v₁ + λ₂v₂ + … + λₙvₙ + (−1)v
Hence there is a linear combination of v₁; v₂; … ; vₙ and v which equals 0, and at least one
of the coefficients is not 0.
Definition
We say that the vectors v₁; v₂; … ; vₙ ∈ V are linearly dependent if we can find a linear
combination of these vectors which equals 0, and where not all the coefficients are 0.
We say the vectors are linearly independent if they are not linearly dependent.
Analysis
i. V = ℝ³
v1 = <1; 0; 0> v2 = <0; 1; 0> v3 = <2; 3; 0>
2 v1 + 3 v2 + (-1) v3 = 0
A non-trivial linear combination so the vectors are dependent.
ii. V = ℝ³
v1 = <1; 0; 0> v2 = <0; 1; 0> v3 = <2; 3; 1>
𝜆1 v1 + 𝜆2 v2 + 𝜆3 v3 = 0
𝜆1 + 2𝜆3 = 0
𝜆2 + 3𝜆3 = 0
𝜆3 = 0
∴ 𝜆1 = 𝜆2 = 𝜆3 = 0
So the vectors are linearly independent!
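Both checks can be replicated numerically; a minimal numpy sketch (illustrative, not from the notes): put the vectors as columns of a matrix, and they are linearly independent iff that matrix has full column rank.

```python
import numpy as np

# Columns are v1, v2, v3 from examples (i) and (ii).
dependent   = np.array([[1, 0, 2],
                        [0, 1, 3],
                        [0, 0, 0]])
independent = np.array([[1, 0, 2],
                        [0, 1, 3],
                        [0, 0, 1]])

# Full column rank <=> the trivial solution is the only one.
print(np.linalg.matrix_rank(dependent))    # 2 -> linearly dependent
print(np.linalg.matrix_rank(independent))  # 3 -> linearly independent
```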
Proposition 3
Vectors v₁; v₂; … ; vₙ in the vector space V are linearly independent iff:
0 = λ₁v₁ + λ₂v₂ + … + λₙvₙ
⇒ λ₁ = λ₂ = ⋯ = λₙ = 0
Proof
Vectors are linearly independent iff they are not linearly dependent, iff there is no non-trivial
linear combination of them equal to 0, iff the only linear combination of them equal to 0 is
the trivial one.
0 = 𝜆1 v1 + 𝜆2 v2 + … + 𝜆𝑛 vn
This, when written out, will usually give us a set of linear equations in the unknowns
𝜆1 ; 𝜆2 ; … ; 𝜆𝑛 ∈ ℝ.
This set of equations will always have one solution: 𝜆1 = 𝜆2 = … = 𝜆𝑛 = 0, the trivial
solution.
If the trivial solution is the only one, the vectors are linearly independent. If there are
other solutions, they are linearly dependent.
Analysis
i. In ℝ3 , let
v1 = <1; 0; 0> v2 = <0; 1; 0> v3 = <0; 0; 1>
So we have,
𝜆1 v1 + 𝜆2 v2 + 𝜆3 v3 = 0
<λ₁; 0; 0> + <0; λ₂; 0> + <0; 0; λ₃> = <0; 0; 0>
⟹ 𝜆1 = 𝜆2 = 𝜆3 = 0
These vectors are linearly independent.
ii. In V = M2 (ℝ)
A = $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$  B = $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$  C = $\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}$  D = $\begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix}$
λ₁A + λ₂B + λ₃C + λ₄D = 0
$\begin{pmatrix} λ₁ & 0 \\ 0 & λ₁ \end{pmatrix} + \begin{pmatrix} 0 & λ₂ \\ λ₂ & 0 \end{pmatrix} + \begin{pmatrix} λ₃ & λ₃ \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ λ₄ & λ₄ \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$
λ₁ + λ₃ = 0, λ₂ = −λ₃, λ₁ = −λ₄, λ₂ = −λ₄
Therefore there are infinitely many solutions (e.g. λ₁ = λ₂ = −1, λ₃ = λ₄ = 1). The matrices are linearly dependent.
iii. V = P3
p₀(x) = 1  p₁(x) = x  p₂(x) = x²  p₃(x) = x³
If λ₁p₀ + λ₂p₁ + λ₃p₂ + λ₄p₃ = 0 (the zero polynomial), then comparing coefficients gives
λ₁ = λ₂ = λ₃ = λ₄ = 0, so these polynomials are linearly independent.
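Example (ii) can also be checked numerically; a numpy sketch (illustrative): flatten each matrix to a vector in ℝ⁴ and test the rank, exactly as for vectors.

```python
import numpy as np

A = np.array([[1, 0], [0, 1]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[1, 1], [0, 0]])
D = np.array([[0, 0], [1, 1]])

# Linear (in)dependence in M2(R) is the same question for the flattened
# vectors in R^4, stacked here as the columns of a 4x4 matrix.
M = np.column_stack([X.flatten() for X in (A, B, C, D)])
print(np.linalg.matrix_rank(M))     # 3 < 4 -> linearly dependent
print((-A - B + C + D == 0).all())  # True: a non-trivial combination equal to 0
```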
Remark
1. Any set of vectors containing the zero vector is linearly dependent.
2. Three vectors v₁, v₂, v₃ in ℝ² are always linearly dependent: one of them can be written
in terms of the other two, e.g. v₃ = λ₁v₁ + λ₂v₂.
3. Let V be a vector space where v1; v2; … ; vn ∈ 𝑉. Then v1; v2; … ; vn are linearly
dependent iff we can write one of the vectors as a linear combination of the others.
The set of all linear combinations of v₁; v₂; … ; vₙ we call the linear span of the vectors. For A = { v₁; v₂; … ; vₙ }:
Span(A) = { ∑ᵢ₌₁ⁿ λᵢvᵢ : λᵢ ∈ ℝ }
EXAMPLES OF LINEAR SPANS
i. In V = M2 (ℝ)
A = { $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ ; $\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$ }
Span(A) = { λ₁$\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ + λ₂$\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$ : λ₁, λ₂ ∈ ℝ }
= { $\begin{pmatrix} λ₁ & 0 \\ 0 & λ₂ \end{pmatrix}$ : λ₁, λ₂ ∈ ℝ }
= the set of all 2x2 diagonal matrices
Proposition 4
Let V be a vector space and A = { v₁; v₂; … ; vₙ } ⊆ V. Then:
a. Span(A) is a subspace of V
b. If W is a subspace of V and A ⊆ W then span(A) ⊆ W.
So span(A) is the smallest subspace of V containing A.
Proof
a. 0 ∈ span(A), being the trivial linear combination.
If u = ∑ᵢ₌₁ⁿ λᵢvᵢ and w = ∑ᵢ₌₁ⁿ μᵢvᵢ are in span(A) and α, β ∈ ℝ, then
αu + βw = ∑ᵢ₌₁ⁿ (αλᵢ + βμᵢ)vᵢ ∈ span(A)
so span(A) is closed under addition and scalar multiplication, hence a subspace of V.
Analysis
1. V = ℝ³ : A = { (1; 1; 0); (0; 0; 1) }
Span(A) = { λ(1; 1; 0) + μ(0; 0; 1) : λ, μ ∈ ℝ }
= the plane through the origin containing the two vectors.
2. V = P2
p₁(x) = x² − 2x  p₂(x) = 2x² + x + 2  p₃(x) = 5x² − 5x + 2
A = { p₁(x), p₂(x), p₃(x) }
Here p₃ = 3p₁ + p₂, so Span(A) = Span{ p₁(x), p₂(x) }.
Remark
If V is a vector space, A ⊆ V and span(A) = V, then we can write every v ∈ V as a linear
combination of the vectors in A.
For example, if V = ℝ³ and A = { <1; 0; 0>, <0; 1; 0>, <0; 0; 1> }, then span(A) = ℝ³.
BASIS
Definition
A subset B = { b1; b2; … ; bn } of a vector space V is a basis of V iff:
a. Span(B) = V
b. B is linearly independent.
Analysis
1. B = { <1; 0; 0>, <0; 1; 0>, <0; 0; 1> } is a basis for ℝ3 , called the standard basis.
3. { $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$; $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$; $\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$; $\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$ } is a basis for M2(ℝ).
Proposition 5
If B = { b1; b2; … ; bn } is a basis for a vector space V, then every v ∈ 𝑉 can be written in a
unique way as a linear combination of elements of B.
Proof
Since span(B) = V, every v ∈ V can be written in the form:
v = λ₁b₁ + λ₂b₂ + … + λₙbₙ
Suppose also v = γ₁b₁ + γ₂b₂ + … + γₙbₙ. Subtracting, 0 = (λ₁ − γ₁)b₁ + … + (λₙ − γₙ)bₙ, and
since B is linearly independent, λᵢ = γᵢ for each i. So the representation is unique.
For example, in ℝ², <1; 0> and <0; 1> form a basis. So do <2; 0> and <0; 2>, as well as <1; 0> and <1; 1>.
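For ℝⁿ, a quick numerical test of whether n vectors form a basis is to check that the matrix with those vectors as columns is invertible; a numpy sketch (the helper name is illustrative):

```python
import numpy as np

def is_basis(*vectors):
    """n vectors form a basis of R^n iff the matrix with them as columns is invertible."""
    M = np.column_stack(vectors)
    return M.shape[0] == M.shape[1] and abs(np.linalg.det(M)) > 1e-12

print(is_basis([1, 0], [0, 1]))  # True: the standard basis
print(is_basis([2, 0], [0, 2]))  # True
print(is_basis([1, 0], [1, 1]))  # True
print(is_basis([1, 2], [2, 4]))  # False: the second vector is a multiple of the first
```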
Definition
A vector space is:
Finite Dimensional if it has a basis with a finite number of elements.
Infinite Dimensional if this is not the case.
Zero dimensional if it consists only of the zero vector, 0.
Theorem
If B = { b₁; b₂; … ; bₙ } is a basis for V and S = { w₁; w₂; … ; wₘ } is a linearly independent
subset of V with m ≤ n, then there is a subset B₀ of B with (n − m) elements such that
span(S ∪ B₀) = V.
Proof
Here n is a fixed natural number; we prove the theorem by induction on m.
For m = 1, write w₁ = λ₁b₁ + λ₂b₂ + … + λₙbₙ. Since w₁ ≠ 0, not all the λᵢ will be 0. Suppose
λ₁ ≠ 0 (if it is not, we can renumber the vectors in B so that the one with a non-zero
coefficient comes first); we then get:
b₁ = (1/λ₁)( w₁ − λ₂b₂ − … − λₙbₙ ) ∈ span(w₁; b₂; … ; bₙ)
So V = span(B) ⊆ span(w₁; b₂; … ; bₙ) ⊆ V, i.e. span(w₁; b₂; … ; bₙ) = V, which proves the
case m = 1 with B₀ = { b₂; … ; bₙ }.
Now suppose that for any linearly independent set S with m elements, where m < n, there is a
B₀ ⊆ B, where B₀ has (n − m) elements, such that:
Span(S ∪ B₀) = V
We prove that it follows from this assumption that the statement is true for a set with (m + 1)
elements.
Let S = { w₁; … ; wₘ₊₁ } be linearly independent and put S₀ = { w₁; … ; wₘ }. Then S₀ has m
elements and is linearly independent. By the assumption there is a subset B₀ of B with exactly
(n − m) elements such that:
Span(S₀ ∪ B₀) = V
Write wₘ₊₁ = λ₁w₁ + λ₂w₂ + … + λₘwₘ + μ₁b₁ + μ₂b₂ + … + μₙ₋ₘbₙ₋ₘ, labelling B₀ = { b₁; b₂; … ; bₙ₋ₘ }.
Not all the coefficients μ₁ … μₙ₋ₘ can be 0. If they were, wₘ₊₁ would be a linear combination
of S₀, and this cannot be as S is linearly independent.
Let us assume μ₁ ≠ 0. Then:
b₁ = (1/μ₁)( wₘ₊₁ − λ₁w₁ − λ₂w₂ − … − λₘwₘ − μ₂b₂ − … − μₙ₋ₘbₙ₋ₘ )
So:
b₁ ∈ span(w₁; w₂; … ; wₘ; wₘ₊₁; b₂; b₃; … ; bₙ₋ₘ) = span(S ∪ B′₀)
where B′₀ = { b₂; … ; bₙ₋ₘ } ⊆ B has (n − (m + 1)) vectors. Hence V = span(S₀ ∪ B₀) ⊆ span(S ∪ B′₀),
so span(S ∪ B′₀) = V. This proves the statement for (m + 1).
Corollary 1
If V has a basis with n vectors, then any linearly independent set with n vectors is also a
basis.
Proof
Take m = n; then B₀ has (n − n) = 0 elements, i.e. B₀ = ∅. So span(S) = V, and S is linearly
independent, therefore it is a basis.
Corollary 2
If V has a basis with n elements, then any set S with (n + 1) elements is linearly dependent.
Proof
Let S = { v₁; v₂; … ; vₙ₊₁ } and suppose it is linearly independent. Then { v₁; v₂; … ; vₙ } is also
linearly independent, hence a basis by Corollary 1. But then vₙ₊₁ is a linear combination of
v₁; … ; vₙ, contradicting the independence of S. So S is linearly dependent.
Proposition 6
If V is a finite dimensional vector space with a basis with n elements, then any other basis also
has n elements.
Proof
Let B = { b₁; b₂; … ; bₙ } be a basis and suppose C is a basis with m elements.
If m < n, then V has a basis C with m elements, so by Corollary 2 any set with more than m
elements is linearly dependent. But then B, with n > m elements, is linearly dependent, which
contradicts B being a basis. Swapping the roles of B and C rules out n < m in the same way.
Hence m = n.
Definition
The number of elements in a basis for a finite dimensional space V is called its dimension. We
write:
dim(V)
Analysis
1. dim(ℝ𝑛 ) = n
2. dim(Pn) = n + 1 (coefficients of xⁿ, … , x and the constant term)
3. dim(Mnxm(ℝ)) = n x m
4. dim(Mn(ℝ)) = n2
5. dim(V), where V = {(x₁; x₂; x₃) ∈ ℝ³ : x₁ + x₂ + x₃ = 0 }, is 2: setting x₁ = −x₂ − x₃ gives the basis { (−1; 1; 0); (−1; 0; 1) }.
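The dimension in item 5 can be confirmed numerically as the dimension of the null space of the coefficient matrix; a numpy sketch (illustrative):

```python
import numpy as np

# V = { x in R^3 : x1 + x2 + x3 = 0 } is the null space of A = [1 1 1].
A = np.array([[1.0, 1.0, 1.0]])

# dim(null space) = number of columns - rank
# (this is the Rank-Nullity Theorem, proved later in these notes).
print(A.shape[1] - np.linalg.matrix_rank(A))  # 2
```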
Remarks
Suppose V is a vector space and dim(V) = n.
a. every linearly independent set with n vectors forms a basis for V;
b. every set with n vectors that spans V is linearly independent.
Therefore a set with n elements in an n-dimensional vector space is a basis as soon as it is
either linearly independent or spans the space.
Remarks
A basis for a vector space is:
a. a maximal linearly independent set (i.e. it is not contained in a larger linearly
independent set)
b. a minimal spanning set (i.e. it does not contain a smaller spanning set)
Definition
I. A set of infinitely many vectors is linearly independent if every finite subset of it is
linearly independent.
II. A set S of infinitely many vectors spans a vector space V if every v ∈ V is a linear
combination of a finite number of vectors in S.
Analysis
Let P be the set of all Polynomials of any degree. P is a vector space.
B = { 1; x; x²; x³; … } is an infinite subset of P
Span(B) = P
B is linearly independent
A basis for a vector space (infinite or finite) is a linearly independent spanning set. In the
example, B is a basis for P.
Theorem
Every vector space has a basis.
Proposition 7
If V is finite dimensional, say dim(V) = n, and W is a subspace of V, then dim(W) ≤ dim(V). If
dim(W) = dim(V), then W = V.
Proof
Every linearly independent set in V can have at most n elements. If B is a basis for W, then B
is linearly independent in V. Hence the number of elements in B is ≤ n, so dim(W) ≤ n = dim(V).
If dim(W) = dim(V) = n, then a basis B for W is a linearly independent set in V with n elements,
hence a basis for V by Corollary 1, so W = span(B) = V.
Analysis
W = { (𝑥1 ; 𝑥2 ; 𝑥3 ; 𝑥4 ; 𝑥5 ) ∈ ℝ5 : 𝑥1 + 𝑥3 + 𝑥5 = 0, 𝑥2 = 𝑥4 }
If (𝑥1 ; 𝑥2 ; 𝑥3 ; 𝑥4 ; 𝑥5 ) ∈ W then
𝑥1 = -𝑥3 -𝑥5
𝑥2 = 𝑥4
So (𝑥1 ; 𝑥2 ; 𝑥3 ; 𝑥4 ; 𝑥5 ) = (-𝑥3 -𝑥5 ; 𝑥2 ; 𝑥3 ; 𝑥2 ; 𝑥5 )
= x₂(0; 1; 0; 1; 0) + x₃(−1; 0; 1; 0; 0) + x₅(−1; 0; 0; 0; 1)
These three vectors span W and are linearly independent, so they form a basis for W and dim(W) = 3.
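A numerical cross-check of this basis (a numpy sketch, illustrative): W is the null space of the 2 x 5 coefficient matrix of the constraints, so its dimension should be 5 − rank = 3, and the three vectors above should be independent solutions.

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0, 0.0, 1.0],    # x1 + x3 + x5 = 0
              [0.0, 1.0, 0.0, -1.0, 0.0]])  # x2 - x4 = 0

print(A.shape[1] - np.linalg.matrix_rank(A))  # 3 = dim(W)

B = np.array([[0, 1, 0, 1, 0],
              [-1, 0, 1, 0, 0],
              [-1, 0, 0, 0, 1]], dtype=float)
print(np.allclose(A @ B.T, 0))   # True: each candidate basis vector lies in W
print(np.linalg.matrix_rank(B))  # 3: the three vectors are linearly independent
```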
E(i, j) = the matrix with 1 in row i, column j and 0's everywhere else. The matrices E(i, j)
form a basis for Mnxm(ℝ), which gives dim(Mnxm(ℝ)) = n x m above.
In a symmetric matrix, the part below the diagonal is a mirror image of the part above the
diagonal.
The set V of all (n x n) symmetric matrices is a subspace of Mn(ℝ), since:
a) The zero matrix is symmetric
b) (𝐴 + 𝐵)𝑡 = 𝐴𝑡 + 𝐵 𝑡
c) (𝜆𝐴)𝑡 = 𝜆𝐴𝑡
Basis for V: { E(i, i) : 1 ≤ i ≤ n } together with { E(i, j) + E(j, i) : 1 ≤ i < j ≤ n }, so
dim(V) = n(n + 1)/2.
LINEAR TRANSFORMATIONS
Definition
Let V, W be real vector spaces. A function T: V ⇾ W is called a linear transformation/map iff:
i. For every u, v ∈ V : T(u + v) = T(u) + T(v) … Preserves Addition
ii. For every u ∈ V, 𝜆 ∈ ℝ : T(𝜆u) = 𝜆T(u) … Preserves Scalar Multiplication
Remark
T is a linear transformation iff for all u, v ∈ V and λ₁, λ₂ ∈ ℝ:
T(λ₁u + λ₂v) = λ₁T(u) + λ₂T(v)
Analysis
1. If V = W are vector spaces, define:
I : V ⇾ W by Iv = v
I is linear and is called the Identity transformation
2. If V, W are two vector spaces, then define:
Z: V ⇾ W by Z(v) = 0 for all v ∈ V
Z is linear and is called the zero transformation.
3. V = W = ℝ
T(x) = 2x for all x ∈ ℝ
4. V = W = ℝ
T(x) = 2x + 3
This is not linear! Check!
6. V = P3 and W = P2
T(p) = p’ called the derivative transformation
Let p, q ∈ P3 and 𝜆1 𝜆2 ∈ ℝ
T(𝜆1 p +𝜆2 q ) = (𝜆1 p + 𝜆2 q)’
= (𝜆1 p)’ + (𝜆2 q)’
= (𝜆1 )(p)’ + (𝜆2 )(q)’
= (𝜆1 )T(p) + (𝜆2 )T(q)
7. T : ℝ2 → ℝ2 defined by
T((x; y)) = (x; -y)
This is a linear transformation, check!
8. T: C[0; 1] → ℝ
T(f) = ∫₀¹ f(x) dx
Since integration is a linear operation, T is a linear transformation.
b. If T: V ⇾ W is a function such that T(0) ≠ 0, then we can conclude that T is not linear.
e.g. T((x; y)) = (x + 1; y − 3)
T((0; 0)) = (1; −3) ≠ (0; 0), therefore this T is not linear.
However, if T(0) = 0 we cannot necessarily conclude that T is linear.
e.g. T(x) = x²
T(0) = 0; however, T is not linear. Check!
d. T(∑ᵢ₌₁ⁿ λᵢvᵢ) = ∑ᵢ₌₁ⁿ λᵢT(vᵢ); this can be proved by simple induction, therefore T preserves
linear combinations.
KERNEL AND RANGE
For a linear map T: V ⇾ W, the kernel of T is Ker(T) = { v ∈ V : T(v) = 0 } (a subset of V) and
the range of T is R(T) = { T(v) : v ∈ V } (a subset of W).
Analysis
I. T : P3 ⇾ P2
T(p) = p′
Ker(T) = { p ∈ P3 : T(p) = 0 }
= { p ∈ P3 : p′ = 0 }
= { constant polynomials }
R(T) = P2
II. T : ℝ² → ℝ²
T((x; y)) = (x; 0)
Ker(T) = { (0; y) : y ∈ ℝ } (the y-axis) and R(T) = { (x; 0) : x ∈ ℝ } (the x-axis).
III. T : ℝⁿ → ℝᵐ defined by
T((x₁; … ; xₙ)) = T(x) = Ax
where A is (m x n), x is (n x 1) and T(x) is (m x 1).
T is linear
Ker(T) = { x ∈ ℝ𝑛 : T(x) = 0 }
= { x ∈ ℝⁿ : Ax = 0 }. This is the set of all solutions of a homogeneous system of
linear equations.
R(T) = { y ∈ ℝ𝑚 : y = Ax for some x ∈ ℝ𝑛 }
= { Ax : x ∈ ℝ𝑛 }
= span of the columns of A
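Both subspaces are easy to inspect numerically; a scipy sketch (illustrative, with an arbitrary example matrix):

```python
import numpy as np
from scipy.linalg import null_space, orth

# T(x) = Ax as a map R^3 -> R^2.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])

K = null_space(A)  # columns: an (orthonormal) basis of Ker(T)
R = orth(A)        # columns: an (orthonormal) basis of R(T), the column space

print(K.shape[1])             # 1 = dim(Ker(T))
print(R.shape[1])             # 2 = dim(R(T))
print(np.allclose(A @ K, 0))  # True: kernel vectors are mapped to 0
```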
Proposition 8
Let T: V ⇾ W be a linear transformation:
I. Ker(T) is a subspace of V
II. R(T) is a subspace of W
Proof
I. Ker(T) is a subspace of V:
0 ∈ Ker(T), since T(0) = 0.
If u, v ∈ Ker(T) and λ₁, λ₂ ∈ ℝ, then
T(λ₁u + λ₂v) = λ₁T(u) + λ₂T(v) = λ₁0 + λ₂0 = 0
so λ₁u + λ₂v ∈ Ker(T).
II. R(T) is a subspace of W:
0 = T(0) ∈ R(T). If x, y ∈ R(T), then x = T(v₁) and y = T(v₂) for some v₁, v₂ ∈ V, and for λ₁, λ₂ ∈ ℝ:
λ₁x + λ₂y = λ₁T(v₁) + λ₂T(v₂)
= T(λ₁v₁ + λ₂v₂)
= T(v) ∈ R(T)
where v = λ₁v₁ + λ₂v₂.
Proposition 9
Let T: V ⇾ W be a linear transformation. If B = { b₁; b₂; … ; bₙ } is a basis for V, then
R(T) = span{ T(b₁); T(b₂); … ; T(bₙ) }.
Proof
T(b₁); T(b₂); … ; T(bₙ) ∈ R(T) and R(T) is a subspace of W, so
span{ T(b₁); T(b₂); … ; T(bₙ) }, the smallest subspace containing T(b₁); T(b₂); … ; T(bₙ), is
contained in R(T). Conversely, if w ∈ R(T), then w = T(v) with v = λ₁b₁ + … + λₙbₙ, so
w = λ₁T(b₁) + … + λₙT(bₙ). Hence R(T) = span{ T(b₁); T(b₂); … ; T(bₙ) }.
For example, take T : P3 ⇾ P2 with T(p) = p′ and B = { 1; x; x²; x³ }; then T(B) = { 0; 1; 2x; 3x² }.
Note that this example shows that T(B) need not be a basis for R(T).
Proposition 10
A linear transformation T: V ⇾ W is one-to-one iff:
Ker(T) = {0}
Proof
Suppose that ker(T) = {0} and T(v) = T(w); then T(v − w) = 0, so v − w ∈ Ker(T) = {0} and
v = w, i.e. T is one-to-one. Conversely, if T is one-to-one and v ∈ Ker(T), then T(v) = 0 = T(0),
so v = 0; hence Ker(T) = {0}.
Analysis
I. T : P3 ⇾ P2 with T(p) = p’
Ker(T) = span{1} ≠ {0} So T is not one-to-one.
II. T : ℝ2 → ℝ2 defined by
T((x; y)) = (x; -y)
(x; −y) = (0; 0) forces x = 0 and y = 0, therefore ker(T) = {(0; 0)}, so T is one-to-one.
RANK-NULLITY THEOREM
Definitions
Nullity of T = dim(ker(T))
Rank of T = dim(R(T))
Theorem
Suppose T: V ⇾ W is a linear transformation and V is finite dimensional, say dim(V) = n then:
dim(ker(T)) + dim(R(T)) = dim(V) = n
Proof
Let B = { b1; b2; … ; bn } be a basis for V, and S = { v1; v2; … ; vk } be a basis for ker(T) where
k ≤ n.
If k = n, then dim(ker(T)) = dim(V) = n and so ker(T) = V. But this means that T(v) = 0 for
every v ∈ V, so R(T) = {0} and dim(R(T)) = 0. We have:
dim(ker(T)) + dim(R(T)) = n + 0 = n = dim(V)
If k < n, extend S to a basis { v₁; … ; vₖ; vₖ₊₁; … ; vₙ } of V. We know that
span{ T(v₁); … ; T(vₙ) } = R(T), but T(v₁) = … = T(vₖ) = 0. This means that
span{ T(vₖ₊₁); … ; T(vₙ) } = R(T)
We prove that T(vₖ₊₁), … , T(vₙ) are linearly independent, and therefore form a basis for R(T),
so that dim(R(T)) = n − k.
Suppose λₖ₊₁T(vₖ₊₁) + … + λₙT(vₙ) = 0. Then T(λₖ₊₁vₖ₊₁ + … + λₙvₙ) = 0, so
λₖ₊₁vₖ₊₁ + … + λₙvₙ ∈ ker(T), and writing this element in the basis S of ker(T):
λₖ₊₁vₖ₊₁ + … + λₙvₙ = λ₁v₁ + … + λₖvₖ
λₖ₊₁vₖ₊₁ + … + λₙvₙ − λ₁v₁ − … − λₖvₖ = 0
Since { v₁; … ; vₙ } is linearly independent, all the coefficients are 0; in particular
λₖ₊₁ = … = λₙ = 0.
So { T(vₖ₊₁); … ; T(vₙ) } is linearly independent and is a basis for R(T), thus dim(R(T)) = n − k
and dim(ker(T)) + dim(R(T)) = k + (n − k) = n = dim(V).
Corollary 1
If dim(V) = dim(W) < ∞ then a linear transformation T: V ⇾ W is one-to-one iff it is onto. (i.e.
T is onto (surjective) means R(T) = W.)
Proof
T is one-to-one
⇔ ker(T) = {0} ⇔ dim(ker(T)) = 0 ⇔ dim(R(T)) = dim(V) – dim(ker(T)) = dim(V) (By RNT)
⇔ dim(R(T)) = dim(V) = dim(W)
⇔ R(T) = W (Since R(T) is a subspace of W with the same dimensions as W)
Analysis
1. V = W = ℝ2
T((x; y)) = (x; -y)
Ker(T) = {(0; 0)}
T is one-to-one and T is onto.
2. V = P3 and W = P2
T(p)(x) = p’(x)
Ker(T) = the constant polynomials ∴ dim(ker(T)) = 1
Dim(V) = 4; dim(ker(T)) + dim(R(T)) = dim(V) = 4
∴ 1 + dim(R(T)) = 4 ∴ dim(R(T)) = 3
3. V = P2 and W = P3
T: V ⇾ W defined by
T(p(x)) = xp′(x) + ∫₀ˣ p(t) dt — check that T is linear!
B = {1; x; x2} is a basis for P2
T(1) = x
T(x) = x + (1/2)x²
T(x²) = 2x² + (1/3)x³
R(T) = span{ x; x + (1/2)x²; 2x² + (1/3)x³ }
= span{ x; x²; x³ }
= { p ∈ P3 : p(0) = 0 }
Dim(R(T)) = 3
Dim(R(T)) + dim(ker(T)) = dim(V) = dim(P2) = 3
3 + dim(ker(T)) = 3 ∴ dim(ker(T)) = 0 so ker(T) = { 0 }
∴ one-to-one and not onto
Remark
Suppose T: V ⇾ W is a linear map and B = { b1; b2; … ; bn } is a basis for V. Suppose we also
know that:
T(b1) = w1 ; T(b2) = w2 ; … ; T(bn) = wn
If v ∈ V then v = λ₁b₁ + λ₂b₂ + … + λₙbₙ, so T(v) = λ₁w₁ + λ₂w₂ + … + λₙwₙ.
Hence if we know a function T is linear and we know what T does to every vector in a basis for
its domain then we know what T does to every vector in its domain.
Analysis
1. V = W = ℝ2 and T: V ⇾ W is a linear transformation
B = { (1; 0) ; (0; 1) }
T((1; 0)) = (1; 0)
T((0; 1)) = (1; 1)
So T((x; y)) = xT((1; 0)) + yT((0; 1)) = x(1; 0) + y(1; 1) = (x + y; y).
Proposition 11
Let V, W be vector spaces, B = { b₁; b₂; … ; bₙ } a basis for V and w₁; w₂; … ; wₙ ∈ W. Then there
is a unique linear map T: V ⇾ W such that T(bᵢ) = wᵢ for i = 1, … , n.
Proof
If v ∈ V then v = λ₁b₁ + λ₂b₂ + … + λₙbₙ in a unique way, so we may define
T(v) = λ₁w₁ + λ₂w₂ + … + λₙwₙ. This T is linear, and clearly T(bᵢ) = wᵢ. By the remark above, any
linear map sending bᵢ to wᵢ agrees with T on every v ∈ V, so T is unique.
ISOMORPHISMS
Definition
Let V, W be vector spaces. We say a linear map T : V ⇾ W is an isomorphism if T is one-to-one and
onto
(T maps V to W; its inverse T⁻¹ maps W back to V.)
We say V and W are isomorphic if there is a linear isomorphism T : V ⇾ W.
Example: V = M2x2(ℝ) and W = ℝ⁴
T($\begin{pmatrix} a₁₁ & a₁₂ \\ a₂₁ & a₂₂ \end{pmatrix}$) = (a₁₁; a₁₂; a₂₁; a₂₂) — T is an isomorphism!
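This isomorphism is exactly what numpy's reshape performs; a small sketch (illustrative):

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])

v = M.reshape(4)          # T: M2x2(R) -> R^4, reading the entries row by row
M_back = v.reshape(2, 2)  # the inverse map T^(-1)

print(v)                          # [1. 2. 3. 4.]
print(np.array_equal(M, M_back))  # True: T is invertible, hence an isomorphism
```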
Recall
If B = { b₁; b₂; … ; bₙ } is a basis for a vector space V, then every v ∈ V has a unique
representation:
v = 𝜆1 b1 + 𝜆2 b2 + … + 𝜆𝑛 bn
We call:
[v]B = (𝜆1 ; … ; 𝜆𝑛 )
the coordinate vector of v with respect to the basis B.
Proposition 12
If V is a vector space and dim(V) = n, then V is isomorphic to ℝ𝑛 (we write V ≃ ℝ𝑛 )
Proof
Since dim(V) = n, there is a basis: B = { b1; b2; … ; bn } for V. We define T : V ⇾ ℝ𝑛 to be the
linear map such that:
T(bᵢ) = (0; 0; … ; 0; 1; 0; … ; 0) = eᵢ ∈ ℝⁿ
where eᵢ is the vector with 0 in every entry except in the ith position, where there is a 1. This T
sends the basis B to the standard basis of ℝⁿ, so it is one-to-one and onto, i.e. an isomorphism.
Proposition 13
If V and W are finite dimensional vector spaces and dim(V) ≠ dim(W) then V and W are not
isomorphic.
Proof
Suppose they are isomorphic and T : V ⇾ W is an isomorphism. Since T is one-to-one,
dim(ker(T)) = 0, and since T is onto, R(T) = W. Then by the RNT:
dim(V) = dim(ker(T)) + dim(R(T)) = 0 + dim(W)
So dim(V) = dim(W), which is a contradiction.
If x = (x₁; … ; xₙ) ∈ ℝⁿ, y = (y₁; … ; yₘ) ∈ ℝᵐ and y = Ax, then y = ∑ᵢ₌₁ⁿ xᵢAᵢ, where Aᵢ is the ith
column of A.
Remarks
1. ek is the vector with 1 in the kth position and 0 everywhere else.
2. If A is an (m x n) matrix, then Tx = Ax defines the linear map T : ℝ𝑛 → ℝ𝑚 .
3. If A is a (m x n) matrix and B is a (n x p) matrix, then AB is a (m x p) matrix:
Define T : ℝ𝑝 → ℝ𝑛 by Tx = Bx
Define S : ℝ𝑛 → ℝ𝑚 by Sx = Ax
The composition S ∘ T : ℝᵖ → ℝᵐ satisfies:
S∘T(x) = S(T(x)) = S(Bx) = A(Bx) = (AB)x
Therefore the composition of linear maps corresponds to matrix multiplication: S ∘ T has matrix AB.
4. If T : ℝ𝑛 → ℝ𝑚 is a linear map, is there a (m x n) matrix A, such that Tx = Ax for all
x ∈ ℝ𝑛 ?
If x ∈ ℝ𝑛 , then x = ∑𝑛𝑖=1 𝑥𝑖 𝑒𝑖 and Tx = ∑𝑛𝑖=1 𝑥𝑖 𝑇(𝑒𝑖 ).
Now let A be the (m x n) matrix with ith column equal to T(ei) ∈ ℝ𝑚 and it follows that
Tx = Ax.
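Remark 4 is easy to demonstrate in code: build A column by column from the images T(eᵢ). A numpy sketch (the particular map T is an arbitrary illustration, not from the notes):

```python
import numpy as np

def T(x):
    """An example linear map T : R^3 -> R^2."""
    x1, x2, x3 = x
    return np.array([x1 + 2 * x2, 3 * x3 - x1])

# The i-th column of [T] is T(e_i).
A = np.column_stack([T(e) for e in np.eye(3)])

x = np.array([1.0, -2.0, 0.5])
print(np.allclose(T(x), A @ x))  # True: Tx = [T]x for this (and any) x
```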
Theorem
A function T : ℝ𝑛 → ℝ𝑚 is a linear transformation iff there is an (m x n) matrix A, denoted [T]
such that:
Tx = [T]x for x ∈ ℝ𝑛
If this is true then the matrix [T] has ith column T(ei).
More generally, let V be an n-dimensional vector space with a basis B = { b₁; b₂; … ; bₙ } and
W an m-dimensional vector space with basis C = { c₁; c₂; … ; cₘ }. We write JB for the
isomorphism:
JB : V → ℝⁿ, where
JB(v) = [v]B
and
JC : W → ℝᵐ, where
JC(w) = [w]C
V —T→ W
JB ↓      ↓ JC
ℝⁿ —A→ ℝᵐ
Can we choose an (m x n) matrix A such that JC(T(v)) = A(JB(v)) for all v ∈ V ?
Yes, by choosing A to be the matrix with the ith column [T(bi)]C
Let v ∈ V; then v = ∑ᵢ₌₁ⁿ λᵢbᵢ and so [v]B = (λ₁; … ; λₙ).
Thus T(v) = ∑ᵢ₌₁ⁿ λᵢT(bᵢ), and:
[T(v)]C = JC(T(v))
= JC(∑ᵢ₌₁ⁿ λᵢT(bᵢ))
= ∑ᵢ₌₁ⁿ λᵢJC(T(bᵢ))
= ∑ᵢ₌₁ⁿ λᵢ[T(bᵢ)]C
= A(λ₁; … ; λₙ) = A[v]B
∴ [T(v)]C = A[v]B
∴ JC(T(v)) = A(JB(v))
Theorem
Let V be an n-dimensional vector space with a basis B = { b₁; b₂; … ; bₙ } and W an
m-dimensional vector space with basis C = { c₁; c₂; … ; cₘ }. If T : V → W is linear, then there is
an (m x n) matrix [T]^C_B (with ith column [T(bᵢ)]C) such that:
[Tv]C = [T]^C_B [v]B
V —T→ W
JB ↓      ↓ JC
ℝⁿ —[T]^C_B→ ℝᵐ
Analysis
1. V = P2 and W = P3
B = { 1; x; x² }  C = { 1; x; x²; x³ }  T : P2 ⇾ P3
T(p(x)) = xp′(x) + ∫₀ˣ p(t) dt
[T]^C_B = $\begin{pmatrix} 0 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 1/2 & 2 \\ 0 & 0 & 1/3 \end{pmatrix}$
2. If V is an n – dimensional vector space and IV : V ⇾ V is the identity map:
IV(v) = v
If we have B as a basis for V then:
[I_V]^B_B = Iₙ, the (n x n) identity matrix.
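A numerical check of Example 1 above (a numpy sketch; the test polynomial is an arbitrary choice): represent p ∈ P2 by its coordinate vector [p]B and compare [T]^C_B [p]B with a hand computation of xp′(x) + ∫₀ˣ p(t) dt.

```python
import numpy as np

# [T]^C_B acting on coordinates w.r.t. B = {1, x, x^2}, C = {1, x, x^2, x^3}.
T = np.array([[0.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.5, 2.0],
              [0.0, 0.0, 1.0 / 3.0]])

p = np.array([3.0, -1.0, 2.0])  # p(x) = 3 - x + 2x^2, i.e. [p]_B = (3; -1; 2)

# By hand: T(p) = x(-1 + 4x) + (3x - x^2/2 + 2x^3/3) = 2x + 3.5x^2 + (2/3)x^3.
expected = np.array([0.0, 2.0, 3.5, 2.0 / 3.0])

print(np.allclose(T @ p, expected))  # True
print(np.linalg.matrix_rank(T))      # 3 = dim(R(T)), so ker(T) = {0}
```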
Theorem
Let T : V ⇾ W be a linear map then the following statements are equivalent:
I. T is invertible
II. T is one-to-one and onto
III. T is an isomorphism
Proposition 14
If T : V ⇾ W is linear, and there is a function S : W ⇾ V such that T(S(w)) = w, S(T(v)) = v for
all v ∈ V and w ∈ W then S is linear.
Proof
Suppose w₁, w₂ ∈ W and λ, μ ∈ ℝ.
Since T is onto, there are v₁ and v₂ ∈ V such that w₁ = T(v₁) and w₂ = T(v₂), so v₁ = S(w₁) and
v₂ = S(w₂). Therefore:
S(λw₁ + μw₂) = S(λT(v₁) + μT(v₂)) = S(T(λv₁ + μv₂)) = λv₁ + μv₂ = λS(w₁) + μS(w₂)
so S is linear. In this situation T is invertible (with inverse S), so dim(V) = dim(W); on the
matrix side the inverse satisfies AB = I = BA.
Proposition 15
Let V, W be two n – dimensional vector spaces, with bases B = { b1; b2; … ; bn } and C = { c1; c2;
… ; cn } respectively. If T : V ⇾ W is linear, then T is an isomorphism iff [𝑇]𝐶𝐵 is invertible.
Proof
Suppose T is an isomorphism, then it is invertible, so there is a linear map T-1 : W ⇾ V such
that:
T-1 ∘ T = IV and T ∘ T-1 = IW
V —T→ W —T⁻¹→ V
JB ↓     JC ↓     JB ↓
ℝⁿ —[T]^C_B→ ℝⁿ —[T⁻¹]^B_C→ ℝⁿ
V —I_V→ V
JB ↓      ↓ JB
ℝⁿ —Iₙ→ ℝⁿ
Hence [T]^C_B [T⁻¹]^B_C = Iₙ, and in the same way we can prove that [T⁻¹]^B_C [T]^C_B = Iₙ.
We therefore say that [T]^C_B is invertible and has inverse [T⁻¹]^B_C. Conversely, suppose A = [T]^C_B is
an invertible matrix; then there is an (n x n) matrix A⁻¹ such that:
AA⁻¹ = I = A⁻¹A
To show that T is an isomorphism we have to find a linear map S : W ⇾ V such that
T ∘ S = I_W and S ∘ T = I_V.
We have:
V —T→ W
JB ↓      ↓ JC
ℝⁿ —A→ ℝⁿ
We want:
V ←S— W
JB ↓      ↓ JC
ℝⁿ ←A⁻¹— ℝⁿ
Put S(w) = JB⁻¹A⁻¹JC(w). Then S is a linear map. We want to check that T ∘ S = I_W and
S ∘ T = I_V.
For all v ∈ V
S(T(v)) = JB-1 A-1JC(T(v))
= JB-1 A-1A JB(v)
= JB-1 I JB(v)
= JB-1 JB(v)
= I(v)
= v
Similarly T(S(w)) = w for every w ∈ W, so T is an isomorphism.
Analysis
1. Let V be an n – dimensional vector space with two bases B = { b1; b2; … ; bn } and C = {
c1; c2; … ; cn }
(V, B) —I_V→ (V, C)
JB ↓          ↓ JC
ℝⁿ —[I_V]^C_B→ ℝⁿ
where [I_V]^C_B is called the change of basis matrix, with its ith column being [I_V(bᵢ)]C =
[bᵢ]C.
2. V = ℝ2 with B = {(-1; 1); (1; 1)} and C = {(1; 2); (1; 0)}
The change of basis matrix [I_ℝ²]^C_B has columns [(−1; 1)]C and [(1; 1)]C. To find [(−1; 1)]C we
have to find α and β such that:
(−1; 1) = α(1; 2) + β(1; 0); from this we get −1 = α + β and 1 = 2α, therefore α = 1/2 and
β = −3/2.
So [(−1; 1)]C = (1/2; −3/2), and similarly [(1; 1)]C = (1/2; 1/2).
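These coordinates amount to solving a small linear system; a numpy sketch (illustrative):

```python
import numpy as np

# Columns of P are the C-basis vectors (1; 2) and (1; 0).
P = np.array([[1.0, 1.0],
              [2.0, 0.0]])

# Solving P @ coords = v gives [v]_C for each B-basis vector v.
for v in ([-1.0, 1.0], [1.0, 1.0]):
    print(np.linalg.solve(P, v))  # [ 0.5 -1.5] and [0.5 0.5]
```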
Proposition 16
V is n-dimensional, with two bases B = { b₁; b₂; … ; bₙ } and C = { c₁; c₂; … ; cₙ }, and T : V ⇾ V
is a linear map. Then:
[T]^B_B = ([I_V]^C_B)⁻¹ [T]^C_C [I_V]^C_B
Proof
For every v ∈ V, [I_V]^C_B [T]^B_B [v]B = [T(v)]C = [T]^C_C [I_V]^C_B [v]B, so
[I_V]^C_B [T]^B_B = [T]^C_C [I_V]^C_B; multiplying on the left by ([I_V]^C_B)⁻¹ gives the result.
Analysis
1. V = P1 with bases C = { 1; x } and B = { 1 − x; 2 − x }, and suppose
[T]^C_C = $\begin{pmatrix} 4 & 2 \\ -1 & 1 \end{pmatrix}$
The columns of [I_P1]^C_B are [1 − x]C = (1; −1) and [2 − x]C = (2; −1), so
[I_P1]^C_B = $\begin{pmatrix} 1 & 2 \\ -1 & -1 \end{pmatrix}$ and ([I_P1]^C_B)⁻¹ = $\begin{pmatrix} -1 & -2 \\ 1 & 1 \end{pmatrix}$
Therefore [T]^B_B = $\begin{pmatrix} -1 & -2 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} 4 & 2 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ -1 & -1 \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$
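The product can be verified numerically (a numpy sketch, illustrative):

```python
import numpy as np

T_CC = np.array([[4.0, 2.0], [-1.0, 1.0]])
P = np.array([[1.0, 2.0], [-1.0, -1.0]])  # the change of basis matrix [I]^C_B

T_BB = np.linalg.inv(P) @ T_CC @ P
print(T_BB)  # [[2. 0.] [0. 3.]] -- diagonal, so B consists of eigenvectors of T
```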
Definition
If A and B are square matrices, say (n x n), then we say they are similar if there is an
invertible (n x n) matrix Q such that:
A = Q⁻¹BQ
L(V, W) is the set of all linear transformations from V to W. Let S, T ∈ L(V, W) and 𝜇 ∈ ℝ. We
define:
(S + T)(v) = S(v) + T(v) and (𝜇T)(v) = 𝜇T(v) ∀ v ∈ V
These are linear maps from V to W. So we can define addition and scalar multiplication in
L(V, W). With these operations this set becomes a vector space.
Definition
Let V be a real vector space. Then we call L(V, ℝ) the dual of V, denoted by V*. The elements
of V* are just linear transformations from V to ℝ, we call them linear functionals.
DUAL BASIS
Let B = { b₁; b₂; … ; bₙ } be a basis for V. The dual basis B* = { 𝜙₁; 𝜙₂; … ; 𝜙ₙ } for the dual
space V* is defined by:
𝜙ᵢ(bⱼ) = 1 if i = j and 0 if i ≠ j
Proof
B* spans V*: Let 𝜙 ∈ V*. We have to show that 𝜙 is a linear combination of the
elements of B*. If v ∈ V, then v = ∑ᵢ₌₁ⁿ 𝜙ᵢ(v)bᵢ, so
𝜙(v) = ∑ᵢ₌₁ⁿ 𝜙ᵢ(v)𝜙(bᵢ) = (∑ᵢ₌₁ⁿ 𝜙(bᵢ)𝜙ᵢ)(v), where each 𝜙(bᵢ) is some constant.
Hence span(B*) = V*.
Analysis
V = ℝ𝑛 ; B = { e1; e2; … ; en } and B* = { e1*; e2*; … ; en* }
If x = ∑ᵢ₌₁ⁿ xᵢeᵢ, then eₖ*(x) = ∑ᵢ₌₁ⁿ eₖ*(xᵢeᵢ) = ∑ᵢ₌₁ⁿ xᵢeₖ*(eᵢ) = xₖ. We can write x = ∑ᵢ₌₁ⁿ eᵢ*(x)eᵢ.
If 𝜙 ∈ (ℝⁿ)*, then by the above 𝜙(x) = ∑ᵢ₌₁ⁿ eᵢ*(x)𝜙(eᵢ) = (∑ᵢ₌₁ⁿ 𝜙(eᵢ)eᵢ*)(x)
Analysis
Let V = ℝ2 and the standard basis being B = { e1; e2 } and C = { (1; -1) ; (2; 1) } = { e1 – e2; 2e1 +
e2 }. We find the dual basis for C, C*, in terms of the elements of B* = { e1* ; e2* }.
Let C* = { 𝜙1; 𝜙2 } where 𝜙1(e1 – e2) = 1 and 𝜙1(2e1 + e2) = 0. Using the principles of linearity:
1 = 𝜙1(e1) - 𝜙1(e2) …. (1)
0 = 2 𝜙1(e1) + 𝜙1(e2) …. (2)
So, 𝜙₁(e₂) = −2𝜙₁(e₁) …. (3)
By substituting (3) into (1) we get 𝜙₁(e₁) = 1/3 and 𝜙₁(e₂) = −2/3, i.e. 𝜙₁ = (1/3)e₁* − (2/3)e₂*.
Similarly we have 𝜙₂(e₁ − e₂) = 0 and 𝜙₂(2e₁ + e₂) = 1. Using the principles of linearity we get:
0 = 𝜙₂(e₁) − 𝜙₂(e₂) …. (1)
1 = 2𝜙₂(e₁) + 𝜙₂(e₂) …. (2)
So, 𝜙₂(e₂) = 𝜙₂(e₁) …. (3)
By substituting (3) into (2) we get 𝜙₂(e₁) = 𝜙₂(e₂) = 1/3, i.e. 𝜙₂ = (1/3)e₁* + (1/3)e₂*.
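Finding a dual basis is just solving linear systems; a numpy sketch (illustrative). The conditions 𝜙ᵢ(cⱼ) = δᵢⱼ say that the coefficient rows of the 𝜙ᵢ (with respect to B*) form the inverse of the matrix whose columns are the cⱼ.

```python
import numpy as np

# Columns are the basis C = {(1; -1), (2; 1)} of R^2.
M = np.array([[1.0, 2.0],
              [-1.0, 1.0]])

# phi_i(c_j) = delta_ij  <=>  F @ M = I, where row i of F holds the
# coefficients of phi_i with respect to {e1*, e2*}. Hence F = M^(-1).
F = np.linalg.inv(M)
print(F)      # [[ 0.333... -0.666...] [ 0.333...  0.333...]]
print(F @ M)  # the identity, confirming the dual-basis conditions
```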
THE BIDUAL
We can form the dual (V*)* of V*. We write V** for (V*)* and call V** the bidual of V.
We can give an isomorphism from V to V** that does not depend on choosing a basis in V.
Definition
Φ : V ⇾ V** by Φ(v) = v̂, where v̂(𝜙) = 𝜙(v) for all 𝜙 ∈ V*. Each v̂ is linear, so v̂ ∈ V**, and (for
V finite dimensional) Φ is an isomorphism.
Recall
If A = $\begin{pmatrix} a₁₁ & a₁₂ \\ a₂₁ & a₂₂ \end{pmatrix}$ then det(A) = a₁₁a₂₂ − a₂₁a₁₂. We also write |A| for det(A); note that this
notation need not mean that the determinant is positive.
Consider the system a₁₁x₁ + a₁₂x₂ = b₁ …(1) and a₂₁x₁ + a₂₂x₂ = b₂ …(2). Multiplying (1) by a₂₁
and (2) by a₁₁ and subtracting one from the other gives (a₁₁a₂₂ − a₂₁a₁₂)x₂ = a₁₁b₂ − a₂₁b₁, and
similarly for x₁, so that:
x₁ = $\begin{vmatrix} b₁ & a₁₂ \\ b₂ & a₂₂ \end{vmatrix} \bigg/ \begin{vmatrix} a₁₁ & a₁₂ \\ a₂₁ & a₂₂ \end{vmatrix}$
x₂ = $\begin{vmatrix} a₁₁ & b₁ \\ a₂₁ & b₂ \end{vmatrix} \bigg/ \begin{vmatrix} a₁₁ & a₁₂ \\ a₂₁ & a₂₂ \end{vmatrix}$
CRAMER’S RULE
To find the inverse of A we have to solve the above system, first with b1 = 1 and b2 = 0 and
then with b1 = 0 and b2 = 1.
For the first one, x₁ = a₂₂/det(A) and x₂ = −a₂₁/det(A). For the second, x₁ = −a₁₂/det(A) and
x₂ = a₁₁/det(A).
Put B = (1/det(A))$\begin{pmatrix} a₂₂ & -a₁₂ \\ -a₂₁ & a₁₁ \end{pmatrix}$ if det(A) ≠ 0. Check that BA = AB = $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$. Thus A is invertible.
A line in ℝ² has vector equation x = a + λb, where x is the position vector of any point on the
line, a is the position vector of a fixed point on the line, and b is the direction vector of the
line.
Let T : ℝ² ⇾ ℝ² be linear. Then the image of this line has equation
T(x) = T(a) + λT(b)
so it is a line again, with direction vector T(b) (except when T(b) = 0, when we get a point). If
two lines L₁ and L₂ are parallel, their images under T will be parallel.
(Figure: the unit square S, with corners 0, A, B, C and sides on the lines L₁, … , L₄, and its
image under T, the parallelogram with corners 0, A′, B′, C′ and sides L₁′, … , L₄′.)
Let T = T_A, i.e. T($\begin{pmatrix} x₁ \\ x₂ \end{pmatrix}$) = $\begin{pmatrix} a₁₁ & a₁₂ \\ a₂₁ & a₂₂ \end{pmatrix}\begin{pmatrix} x₁ \\ x₂ \end{pmatrix}$
Then A′ = (a₁₂; a₂₂) and B′ = (a₁₁; a₂₁), and the area of the parallelogram T(S) is det(A) (when T
preserves orientation).
If T(S) were the shape where the points A′ and B′ are interchanged (orientation reversed), then
Area of T(S) = −det(A).
Analysis
A = $\begin{pmatrix} a₁₁ & a₁₂ \\ a₂₁ & a₂₂ \end{pmatrix}$
The minors are M₁₁ = a₂₂, M₁₂ = a₂₁, M₂₁ = a₁₂, M₂₂ = a₁₁, and the cofactors are Cᵢⱼ = (−1)ⁱ⁺ʲMᵢⱼ:
C₁₁ = (−1)¹⁺¹a₂₂  C₁₂ = (−1)¹⁺²a₂₁  C₂₁ = (−1)²⁺¹a₁₂  C₂₂ = (−1)²⁺²a₁₁
Definition
Let A = (aij) be an (n x n) matrix. Then we call
det(A) = a11C11 + a12C12 + … + a1nC1n
The expansion in terms of the first row.
Analysis
A = $\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 6 & 9 \end{pmatrix}$
det(A) = 1(−1)² det$\begin{pmatrix} 5 & 6 \\ 6 & 9 \end{pmatrix}$ + 2(−1)³ det$\begin{pmatrix} 4 & 6 \\ 7 & 9 \end{pmatrix}$ + 3(−1)⁴ det$\begin{pmatrix} 4 & 5 \\ 7 & 6 \end{pmatrix}$ = 9 + 12 − 33 = −12
RULE OF SARRUS
The Rule of Sarrus is used to calculate the determinant of a (3 x 3) matrix: copy the first two
columns to the right of the matrix, multiply along the three down-right diagonals (dashed lines)
and the three up-right diagonals (solid lines); then sum the dashed-line products and subtract
the solid-line products to get det(A).
PROPERTIES OF DETERMINANTS
I. If A is (n x n) and 1 ≤ i, j ≤ n, then
det(A) = aᵢ₁Cᵢ₁ + … + aᵢₙCᵢₙ
is the expansion in terms of the ith row, and
det(A) = a₁ⱼC₁ⱼ + … + aₙⱼCₙⱼ
is the expansion in terms of the jth column.
VII. det(AB) = det(A)det(B)
XII. If B is the matrix obtained from A by adding a multiple of one row to another row then
det(B) = det(A)
Analysis
A = $\begin{pmatrix} 2 & 0 \\ 0 & -3 \end{pmatrix}$ is a diagonal matrix. Take x = $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$; then Ax = $\begin{pmatrix} 2 & 0 \\ 0 & -3 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ = $\begin{pmatrix} 2 \\ 0 \end{pmatrix}$ = 2$\begin{pmatrix} 1 \\ 0 \end{pmatrix}$
If x ∈ span{ (1; 0) } then x = λ(1; 0), so Ax = A(λ(1; 0)) = λA((1; 0)) = λ2(1; 0) = 2(λ(1; 0)) = 2x
So A scales any vector in span{ (1; 0) } by 2. Similarly A scales any vector in span{ (0; 1) } by −3.
0 1
More generally suppose V is a vector space and T : V ⇾ V. Suppose there is a non – zero vector
v ∈ V and a scalar 𝜆 such that:
T(v) = 𝜆v
Then we say that λ is an Eigenvalue (characteristic value) of T and v is a corresponding
Eigenvector (characteristic vector).
Remarks
I. An eigenvector must be non-zero, since T(0) = 0 = 𝜆0.
Analysis
A = $\begin{pmatrix} 4 & 2 \\ -1 & 1 \end{pmatrix}$,  λI₂ = $\begin{pmatrix} λ & 0 \\ 0 & λ \end{pmatrix}$
det(A − λI₂) = (4 − λ)(1 − λ) + 2 = λ² − 5λ + 6 = (λ − 2)(λ − 3) = 0, so the eigenvalues are λ = 2 and λ = 3.
Remarks
For an (n x n) matrix A, the equation det(A – 𝜆In) = 0 is called the characteristic equation of A.
It is a polynomial equation of degree n in the variable 𝜆. It may have no real solutions but it
always has n complex solutions. The number of times a solution occurs is called the
multiplicity of the solution.
From the first eigenvalue (λ = 2) we get $\begin{pmatrix} 4-2 & 2 \\ -1 & 1-2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, i.e.
$\begin{pmatrix} 2 & 2 \\ -1 & -1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$
By Gauss Reduction we get x = -y
Therefore the eigenvectors are of the form α(−1; 1), where α ≠ 0.
From the second eigenvalue (λ = 3) we get $\begin{pmatrix} 1 & 2 \\ -1 & -2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$
By Gauss Reduction we get x = -2y
Therefore the eigenvectors are of the form α(−2; 1), where α ≠ 0.
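A final numerical cross-check with numpy (illustrative):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [-1.0, 1.0]])

vals, vecs = np.linalg.eig(A)
print(vals)  # eigenvalues 2 and 3 (numpy may return them in either order)

# Each column of vecs is a normalised eigenvector, proportional to
# (-1; 1) for lambda = 2 and (-2; 1) for lambda = 3.
for lam, v in zip(vals, vecs.T):
    print(np.allclose(A @ v, lam * v))  # True for each pair
```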