THEORY3-4 Eigen Vectors and Eigen Values

This document discusses endomorphisms and their properties. It defines endomorphisms as linear maps from a vector space to itself. It then defines eigenvalues and eigenvectors of endomorphisms and proves various properties about them, including that the sum of eigenspaces corresponding to distinct eigenvalues is a direct sum. It also introduces the characteristic polynomial of an endomorphism.
Part III. Chapter 4. Endomorphisms. Linear Algebra I. Departamento de Matemáticas. UDC.

4. Endomorphisms.

1 Introduction.

We saw in the previous chapter that an endomorphism is a linear map from a vector space to itself: t : U −→ U . If we fix a basis of U

B = {ū1 , . . . , ūn },

we can work with the matrix of t relative to this basis:

t(ūj ) = t1j ū1 + . . . + tnj ūn ⇒ TBB = (tij ).

TBB is called the matrix of the endomorphism t with respect to the basis B. We will usually simply denote it TB .

Given another basis

B′ = {ū′1 , . . . , ū′n },

we can consider the change-of-basis matrix MB′B . It allows us to connect the matrices of t relative to both bases:

TB′B′ = MB′B TBB MBB′ = (MBB′ )⁻¹ TBB MBB′ .

We deduce:

Proposition 1.1 Two matrices are associated with the same endomorphism if and only if they are similar.

Let us call End(U ) the vector space Hom(U, U ) of endomorphisms of U . We saw in the previous chapter that, given any basis B, the map

π : End(U ) −→ Mn×n , t −→ TB

is an isomorphism. Therefore, the study of endomorphisms is equivalent to the study of square matrices and the similarity relation.

One of the fundamental goals of this chapter will be to find bases in which the matrix associated to a given endomorphism is as simple as possible. Equivalently: given a square matrix, find a similar matrix that is as simple as possible.

2 Eigenvalues and eigenvectors.

2.1 Definition and properties.

Definition 2.1 Given an endomorphism t : U −→ U , a scalar λ is said to be an eigenvalue of t if there exists a nonzero vector x̄ such that:

t(x̄) = λx̄.

The scalar λ satisfying this condition is also called a proper value or characteristic value; the vectors x̄ satisfying this condition are called eigenvectors, proper vectors or characteristic vectors associated to λ.

We next make some observations concerning this definition.

1. If λ = 0 is an eigenvalue of t, then the eigenvectors associated to λ are exactly the nonzero vectors in the kernel of t.

2. An eigenvector x̄ ≠ 0̄ cannot be associated with two different eigenvalues.

Proof: If x̄ is associated with the eigenvalues λ, µ we have:

λx̄ = t(x̄) = µx̄ ⇒ (λ − µ)x̄ = 0̄.

Since x̄ ≠ 0̄, we deduce that λ − µ = 0 and therefore both eigenvalues are equal.

3. Once a basis B of U is fixed, the matrix expression of the eigenvalue condition is

t(x̄) = λx̄ ⇐⇒ TB (x) = λ(x),

where (x) is the coordinate column of x̄ with respect to B. Thus one can also assign eigenvalues and eigenvectors to a square matrix.

4. Two similar matrices have the same eigenvalues.

Proof: We saw that any two similar matrices can be regarded as matrices associated to the same endomorphism relative to two different bases. But the eigenvalues of a matrix are the same as the eigenvalues of the associated endomorphism, and these do not depend on the basis (indeed, the definition of an eigenvalue given above does not involve any basis).

2.2 Characteristic subspaces.

Definition 2.2 The union of the zero vector with the set of eigenvectors associated to an eigenvalue λ is denoted by Sλ and called the characteristic subspace or eigenspace associated to λ:

Sλ = {x̄ ∈ U | t(x̄) = λx̄}.

Proposition 2.3 The set Sλ associated to an eigenvalue λ is a vector subspace.

Proof: First, Sλ ≠ ∅ because 0̄ ∈ Sλ . Moreover, given x̄, ȳ ∈ Sλ and α, β ∈ IK, we have t(x̄) = λx̄ and t(ȳ) = λȳ, so

t(αx̄ + β ȳ) = αt(x̄) + βt(ȳ) = αλx̄ + βλȳ = λ(αx̄ + β ȳ),

and then αx̄ + β ȳ ∈ Sλ .
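Once a basis is fixed, the defining condition t(x̄) = λx̄ becomes the matrix condition TB (x) = λ(x), which can be checked mechanically. A minimal sketch in Python; the matrix A and the eigenpairs below are our own illustrative choices, not taken from the text:

```python
def mat_vec(A, v):
    """Multiply a matrix (list of rows) by a column vector (list)."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def is_eigenpair(A, lam, v):
    """Check the eigenvalue condition A v = lam * v for a NONZERO vector v."""
    return any(x != 0 for x in v) and mat_vec(A, v) == [lam * x for x in v]

# Example endomorphism of IK^2, given by its matrix in the canonical basis.
A = [[2, 1],
     [0, 3]]

print(is_eigenpair(A, 2, [1, 0]))   # A(1,0) = (2,0) = 2*(1,0): True
print(is_eigenpair(A, 3, [1, 1]))   # A(1,1) = (3,3) = 3*(1,1): True
print(is_eigenpair(A, 3, [1, 0]))   # observation 2: one eigenvector, one eigenvalue: False
```

Observation 2 above is visible in the last line: the same nonzero vector cannot satisfy the condition for two different scalars.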
Remark 2.4 A different way to prove the above result is as follows. Given an endomorphism t and an eigenvalue λ, we define the map

t′ : U −→ U ; t′ (x̄) = t(x̄) − λx̄, that is, t′ = t − λId.

This is a linear map because it is the difference of two linear maps. Furthermore

Sλ = {x̄ ∈ U | t(x̄) = λx̄} = {x̄ ∈ U | t(x̄) − λx̄ = 0̄} = ker(t − λId).

We see that

Sλ = ker(t − λId)

is a vector subspace because it is the kernel of a linear map.

Proposition 2.5 If λ1 and λ2 are different eigenvalues of an endomorphism then

Sλ1 ∩ Sλ2 = {0̄}.

Equivalently, the sum Sλ1 + Sλ2 is a direct sum.

Proof: It follows from the fact that an eigenvector cannot be associated to different eigenvalues.

Proposition 2.6 If λ1 , . . . , λk are pairwise different eigenvalues of an endomorphism then (Sλ1 + . . . + Sλi ) ∩ Sλi+1 = {0̄} for i = 1, . . . , k − 1. Equivalently, the sum Sλ1 + . . . + Sλk is a direct sum.

Proof: We will prove it by induction. Note that the case k = 2 has already been proved.

Suppose the statement is true for i eigenspaces and let us prove it for i + 1. Fix x̄ ∈ (Sλ1 + . . . + Sλi ) ∩ Sλi+1 . Then we have

x̄ = x̄1 + . . . + x̄i , with x̄j ∈ Sλj , j = 1, . . . , i.

Applying t we obtain:

t(x̄) = t(x̄1 ) + . . . + t(x̄i ) = λ1 x̄1 + . . . + λi x̄i .

On the other hand, since x̄ ∈ Sλi+1 , we have:

t(x̄) = λi+1 x̄ = λi+1 x̄1 + . . . + λi+1 x̄i .

Comparing both expressions:

(λ1 − λi+1 )x̄1 + . . . + (λi − λi+1 )x̄i = 0̄.

Since Sλ1 + . . . + Sλi is a direct sum, the decomposition of 0̄ as a sum of elements of Sλ1 , . . . , Sλi is unique. Hence

(λ1 − λi+1 )x̄1 = . . . = (λi − λi+1 )x̄i = 0̄.

Since all the eigenvalues are different, x̄1 = . . . = x̄i = 0̄ and therefore x̄ = 0̄.

2.3 Characteristic polynomial.

Definition 2.7 Let t : U −→ U be an endomorphism, B a basis of U and T the matrix associated to t with respect to this basis. The characteristic polynomial of t is

pt (λ) = det(T − λId).

Proposition 2.8 The characteristic polynomial is independent of the choice of the basis.

Proof: Suppose that T and T′ are two matrices associated to the same endomorphism t : U −→ U with respect to two different bases. We know that T and T′ are similar, that is, there exists a regular matrix P such that T′ = P⁻¹ T P . Now:

|T′ − λI| = |P⁻¹ T P − λP⁻¹ P | = |P⁻¹ (T − λI)P | = |P⁻¹ ||T − λI||P | = |T − λI|.

Theorem 2.9 Given any endomorphism t : U −→ U , λ is an eigenvalue of t if and only if pt (λ) = 0.

Proof: By definition, λ is an eigenvalue of t if and only if there is a vector x̄ ≠ 0̄ with:

t(x̄) = λx̄.

If we choose a basis B of U , we can write this condition as:

TB (x) = λ(x), or equivalently (TB − λI)(x) = 0̄.

That is, λ is an eigenvalue of t if and only if the homogeneous system

(TB − λI)(x) = 0̄

has a non-trivial solution. But this happens if and only if the matrix of the system is singular, that is:

|TB − λI| = 0 ⇐⇒ pt (λ) = 0.

2.4 Algebraic and geometric multiplicity of an eigenvalue.

Definition 2.10 Given an endomorphism t : U −→ U and an eigenvalue λ of t we define:

- Algebraic multiplicity of λ: the multiplicity of λ as a root of the characteristic polynomial pt (λ). It is denoted by m(λ).

- Geometric multiplicity of λ: the dimension of the characteristic subspace associated to λ. It is denoted by d(λ).
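Theorem 2.9 reduces the search for eigenvalues to finding the roots of pt (λ), and Proposition 2.8 says similar matrices share this polynomial. A small Python sketch for the 2 × 2 case, where det(T − λI) = λ² − tr(T)λ + det(T); the matrices T, S and P below are our own illustrative examples:

```python
import math

def char_poly_2x2(T):
    """For a 2x2 matrix, p(l) = det(T - l*I) = l**2 - tr(T)*l + det(T)."""
    (a, b), (c, d) = T
    tr, det = a + d, a * d - b * c
    return lambda l: l * l - tr * l + det

def eigenvalues_2x2(T):
    """Roots of the characteristic polynomial (assumes the roots are real)."""
    (a, b), (c, d) = T
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

T = [[2, 1],
     [1, 2]]
p = char_poly_2x2(T)
print(eigenvalues_2x2(T))   # [1.0, 3.0]
print(p(1.0), p(3.0))       # Theorem 2.9: p vanishes exactly at the eigenvalues

# Proposition 2.8: with P = [[1,1],[0,1]], computing P^{-1} T P by hand gives S below,
# and S has the same trace, determinant, and hence the same characteristic polynomial.
S = [[1, 0],
     [1, 3]]
print(eigenvalues_2x2(S))   # [1.0, 3.0]
```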
Let us see some properties of these multiplicities:

1. The geometric multiplicity of an eigenvalue is always greater than 0:

d(λ) ≥ 1.

Proof: It is enough to note that, by definition, every eigenvalue has a nonzero eigenvector.

2. If n is the dimension of the vector space U and T is the matrix associated to t with respect to some basis, a frequent way to compute the geometric multiplicity is:

d(λ) = n − rank(T − λI).

Proof: It follows from:

d(λ) = dim(Sλ ) = dim(ker(t − λId)) = n − dim(im(t − λId)) = n − rank(T − λI).

3. The algebraic multiplicity is at least the geometric multiplicity:

m(λi ) ≥ d(λi ).

Proof: Suppose λi is an eigenvalue of the endomorphism t. Let d = d(λi ) be its geometric multiplicity. We consider a basis of the characteristic subspace Sλi :

{ū1 , . . . , ūd }

and we complete it up to a basis B of U :

B = {ū1 , . . . , ūd , ūd+1 , . . . , ūn }.

Let us see what the matrix T associated to t with respect to this basis looks like. We know that

t(ū1 ) = λi ū1 ; . . . ; t(ūd ) = λi ūd .

Thus the matrix T has the block form

T = ( λi Id   A )
    ( Ω       B )   with A ∈ Md×(n−d) (IK), B ∈ M(n−d)×(n−d) (IK),

where Id is the d × d identity matrix and Ω denotes a zero block. Now, if we compute the characteristic polynomial of t using this matrix, we get:

|T − λI| = ( (λi − λ)Id   A      ) = (λi − λ)^d |B − λI|.
           ( Ω            B − λI )

We see that λi is a root of the characteristic polynomial of t with multiplicity at least d, and therefore:

m(λi ) ≥ d(λi ).

4. If λ is an eigenvalue with m(λ) = 1, then d(λ) = m(λ) = 1.

5. The maximum number of linearly independent eigenvectors of an endomorphism is

d(λ1 ) + . . . + d(λk ),

where λ1 , . . . , λk are all the eigenvalues of t.

Proof: Note that the sum of all the characteristic subspaces is a direct sum. In addition, we have:

d(λ1 ) + . . . + d(λk ) ≤ m(λ1 ) + . . . + m(λk ) ≤ n.

3 Diagonalization and similarity.

3.1 Diagonalizable endomorphisms.

Definition 3.1 An endomorphism t : U −→ U is said to be diagonalizable if there is a basis B of U such that the matrix associated to t with respect to B is diagonal:

t diagonalizable ⇐⇒ ∃ basis B such that TB is diagonal.

We can define the analogous concept for square matrices:

Definition 3.2 A matrix T is said to be diagonalizable when it is similar to a diagonal matrix:

T diagonalizable ⇐⇒ ∃ D diagonal and P regular such that D = P⁻¹ T P.

Since two matrices are similar precisely when they are associated to the same endomorphism, the study of diagonalization of matrices is equivalent to that of diagonalization of endomorphisms.

3.2 Matrix of an endomorphism with respect to a basis of eigenvectors.

Theorem 3.3 The matrix associated to an endomorphism is diagonal if and only if it is expressed with respect to a basis of eigenvectors.
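Property 2 above, d(λ) = n − rank(T − λI), is the practical way to compute geometric multiplicities. A sketch in Python with a hand-rolled rank routine; the matrix T below is our own example, chosen so that m(2) = 3 but d(2) = 2:

```python
def rank(M, eps=1e-12):
    """Rank of a matrix via Gaussian elimination on a working copy."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # look for a pivot in column c at or below row r
        piv = next((i for i in range(r, rows) if abs(M[i][c]) > eps), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            for j in range(c, cols):
                M[i][j] -= f * M[r][j]
        r += 1
    return r

def geometric_multiplicity(T, lam):
    """d(lambda) = n - rank(T - lambda*I)."""
    n = len(T)
    shifted = [[T[i][j] - (lam if i == j else 0) for j in range(n)] for i in range(n)]
    return n - rank(shifted)

# p(l) = (2 - l)^3, so m(2) = 3; but T - 2I has rank 1, so d(2) = 3 - 1 = 2.
T = [[2, 1, 0],
     [0, 2, 0],
     [0, 0, 2]]
print(geometric_multiplicity(T, 2))   # 2, illustrating 1 <= d(2) <= m(2) = 3
```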
Proof (of Theorem 3.3):

=⇒: Suppose that B = {ū1 , . . . , ūn } is a basis of U such that TB is diagonal:

TB = ( t1  0  . . .  0  )
     ( 0   t2 . . .  0  )
     ( ..  .. . . .  .. )
     ( 0   0  . . .  tn )

This means that:

t(ū1 ) = t1 ū1 ; . . . ; t(ūn ) = tn ūn ,

and hence all the vectors of B are eigenvectors.

⇐=: Suppose that B = {ū1 , . . . , ūn } is a basis of eigenvectors of U . Then we have:

t(ū1 ) = λ1 ū1 ; . . . ; t(ūn ) = λn ūn .

It follows that the associated matrix TB is

TB = ( λ1  0  . . .  0  )
     ( 0   λ2 . . .  0  )
     ( ..  .. . . .  .. )
     ( 0   0  . . .  λn )

3.3 Characterization of a diagonalizable endomorphism.

We have seen that:

Proposition 3.4 An endomorphism t : U −→ U is diagonalizable if and only if there is a basis of U formed by eigenvectors of t.

This result is usually reformulated as follows:

Theorem 3.5 An endomorphism t of an n-dimensional space U is diagonalizable if and only if the sum of the algebraic multiplicities is equal to the dimension of U and each algebraic multiplicity is equal to the corresponding geometric multiplicity. Equivalently:

t diagonalizable ⇐⇒ m(λ1 ) + . . . + m(λk ) = n and d(λ1 ) = m(λ1 ), . . . , d(λk ) = m(λk ),

where λ1 , . . . , λk is the set of eigenvalues of t.

Proof: If t is diagonalizable, then there is a basis of U formed by eigenvectors. In this basis the matrix T is diagonal and therefore

n = d(λ1 ) + . . . + d(λk ) ≤ m(λ1 ) + . . . + m(λk ) ≤ n,

so that all these inequalities are in fact equalities.

Reciprocally, if d(λ1 ) + . . . + d(λk ) = n then, since the sum of the characteristic subspaces Sλ1 + . . . + Sλk is a direct sum, we can choose a basis of U formed by eigenvectors. Therefore t is diagonalizable.

From the previous proof we deduce that the characterization can be written simply as follows:

Corollary 3.6 If t is an endomorphism of a vector space U of dimension n:

t diagonalizable ⇐⇒ d(λ1 ) + . . . + d(λk ) = n,

where λ1 , . . . , λk is the set of eigenvalues of t.

3.4 Steps for the diagonalization of an endomorphism t.

Suppose we have a vector space U of dimension n and an endomorphism t : U −→ U whose matrix with respect to a basis B is TB . The steps to diagonalize it (if it is diagonalizable at all) are as follows.

1. Compute the characteristic polynomial pt (λ) = |TB − λI|.

2. Find the roots of the characteristic polynomial, i.e. the eigenvalues of t. We will obtain values λ1 , . . . , λk with algebraic multiplicities m(λ1 ), . . . , m(λk ).

(a) If m(λ1 ) + . . . + m(λk ) < n, the endomorphism is not diagonalizable.

(b) If m(λ1 ) + . . . + m(λk ) = n, we compute the geometric multiplicities of each eigenvalue of t:

d(λi ) = dim(Sλi ) = n − rank(TB − λi I), i = 1, . . . , k.

i. If some geometric multiplicity does not coincide with the corresponding algebraic one, then t is not diagonalizable.

ii. If all the algebraic multiplicities are equal to the corresponding geometric ones, then t is diagonalizable. The corresponding diagonal matrix D has all the eigenvalues on its diagonal, each repeated as many times as its algebraic multiplicity:

D = diag(λ1 , . . . , λ1 , . . . , λk , . . . , λk ),

where each λi appears m(λi ) times, grouped consecutively on the diagonal.
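Steps 1-2 of this procedure can be sketched numerically for a 2 × 2 example. This is our own illustration (the matrix T, the eigenvectors and the helper names are not from the text); the characteristic polynomial is p(λ) = (2 − λ)² − 1 = (λ − 1)(λ − 3), so the eigenvalues 1 and 3 are simple and t is automatically diagonalizable:

```python
def mat_mul(A, B):
    """Product of two small dense matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv_2x2(M):
    """Inverse of a regular 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

T = [[2, 1],
     [1, 2]]

# Solving (T - 1*I)x = 0 gives the eigenvector (1, -1); (T - 3*I)x = 0 gives (1, 1).
# Their coordinates form the columns of the change-of-basis matrix M_BB'.
M = [[1, 1],
     [-1, 1]]

D = mat_mul(inv_2x2(M), mat_mul(T, M))
print(D)   # diagonal, with the eigenvalues in the order chosen for the columns of M
```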
The diagonal form is obtained by the change of basis

D = (MBB′ )⁻¹ TB MBB′ ,

where MBB′ is the change-of-basis matrix from the basis B′ of eigenvectors of t to the initial basis B.

The eigenvectors associated to each eigenvalue λi of t are computed by solving the homogeneous system

Sλi = ker(t − λi Id) = {x̄ ∈ U | (TB − λi I)(x) = 0̄},

where (x) is the coordinate column of x̄, for each i = 1, 2, . . . , k.

In other words, the columns of MBB′ are the coordinates, with respect to the initial basis B, of n linearly independent eigenvectors of t. In addition, they must be ordered consistently with the eigenvalues: if the eigenvalue λs appears in the j-th position of the diagonal of D, then an eigenvector associated to λs must appear in the j-th column of MBB′ .

3.5 Application to obtaining powers of matrices.

Suppose that A is a diagonalizable matrix and we are interested in finding A^k . We can proceed as follows.

Since A is diagonalizable, there is a regular matrix P such that

D = P⁻¹ A P, where D = diag(d1 , d2 , . . . , dn ).

Then A = P D P⁻¹ and

A^k = (P D P⁻¹) · (P D P⁻¹) · . . . · (P D P⁻¹)   (k times)
    = P D (P⁻¹ P ) D (P⁻¹ P ) · . . . · D P⁻¹ = P D^k P⁻¹.

That is,

A^k = P D^k P⁻¹, with D^k = diag(d1^k , d2^k , . . . , dn^k ).

4 Triangularization and similarity.

4.1 Triangularizable endomorphisms.

Definition 4.1 An endomorphism t : U −→ U is said to be triangularizable if there is a basis B of U such that the associated matrix TB is triangular:

t triangularizable ⇐⇒ ∃ basis B such that TB is triangular.

The analogous concept for square matrices is:

Definition 4.2 A matrix T is said to be triangularizable when there is a triangular matrix similar to it:

T triangularizable ⇐⇒ ∃ J triangular and P regular such that J = P⁻¹ T P.

Remark 4.3 If the matrix of an endomorphism t : U −→ U relative to a basis

B = {ū1 , . . . , ūn }

is upper triangular, then there is a basis B′ such that the associated matrix is lower triangular, and conversely. It is enough to take the vectors of B in reverse order:

B′ = {ūn , . . . , ū1 }.

We will look for upper triangular matrices similar to a given one. The basic result is the following:

Theorem 4.4 Let U be an n-dimensional vector space. An endomorphism t : U −→ U is triangularizable if and only if t has n eigenvalues (counting algebraic multiplicities). Namely:

t triangularizable ⇐⇒ m(λ1 ) + . . . + m(λk ) = n,

where {λ1 , . . . , λk } is the set of eigenvalues of t.

Proof:

=⇒: Suppose that t is triangularizable. Then there is a basis B such that the matrix TB is triangular. If we calculate the characteristic polynomial using this basis, we get:

|TB − λI| = (t11 − λ)(t22 − λ) . . . (tnn − λ),

and we see that there are exactly n eigenvalues (possibly not pairwise different).

⇐=: Suppose that t has n eigenvalues (possibly not pairwise different). Let us see that t is triangularizable. We will prove it by induction.
Part III. Chapter 4. Endomorphisms. Linear Algebra I. Departamento de Matemáticas. UDC.

- For n = 1 it is true because a matrix of dimension 1 × 1 is always triangular. Remark 4.5 If we are working on the field IK = C I of complex numbers, the Fun-
damental Theorem of Algebra states that every polynomial of degree n has exactly
- Suppose the result is true for n − 1 and prove it for n.
n complex roots (same or different). Thus, every square matrix over the field of
Let λ1 be an eigenvalue. Then d(λ1 ) ≥ 1 and there is at least one eigenvector complex numbers is triangularizable.
6 0 associated to λ1 . We choose a basis B1 whose first vector is x̄1 :
x̄1 =
In the field of real numbers we do not have this result. There are polynomials of
B1 = {x̄1 , ū2 , . . . , ūn }. degree n that do not have n real solutions. Thus, there exist non-triangularizable
square matrices.
The associated matrix relative to this basis is
 
λ1 t12 ... t1n
 0  4.2 Jordan canonical form.
TB1 =
 ...

T0 
Definition 4.6 Given an scalar λ we call Jordan block or Jordan box associated
0 to λ and of dimension m the m × m matrix
Note that λ 1 0 ... 0 0

|TB1 − λI| = (λ1 − λ)|T 0 − λI|,
0 λ 1 ... 0 0
that is, the characteristic polynomial of t factorizes through the characteristic poly- 0 0 λ ... 0 0
nomial of T 0 . So if t has n eigenvalues, T 0 will have n − 1 eigenvalues. J =  .. .. .. .. .. 
 
..
. . . . . .
Now by induction hypothesis, T 0 is triangularizable. We know that there is a

0 0 0 ... λ 1

regular matrix P ∈ Mn−1×n−1 (IK) such that: 0 0 0 ... 0 λ
B = P −1 T 0 P where B is upper triangular.
Definition 4.7 We call Jordan matrix to a matrix formed by several Jordan blocks
Let Q be the matrix:   associated with equal or different scalars, placed on the diagonal as follows:
1 0 ... 0
J1 Ω ... Ω
 
 0 
Q= .. 
 Ω J2 ... Ω 
 . P 
J =
 .. .. .. ,
.. 
0 . . . .
Then: Ω Ω ... Jp
   
1 0 ... 0 λ1 t12 ... t1n 1 0 ... 0 where J1 , J2 , . . . , Jp are Jordan blocks.
−1
 0  0  0 
Q TB1 Q =
 ...
 .  . =
P −1   .. T0   .. P 
We will assume without proof the validity of the following theorem:
 0 0
  0
λ1 t12 ... t1n 1 0 ... 0
 0  0  Theorem 4.8 Any triangularizable matrix is similar to a Jordan matrix.
= .. −1 0
 .. =
 . P T  . P 
0 0 In the remainder of this section our goal is to describe how we can obtain the
   
λ1 s12 ... s1n λ1 s12 ... s1n Jordan form of any given triangularizable matrix.
 0   0 
= .. = .. =S
 . P −1 T 0 P   . B 
4.3 Obtaining the Jordan form.
0 0
where since B is upper triangular, S is upper triangular. We have proved that Let us see the steps to calculate the Jordan form of a matrix T ∈ Mn×n (IK).
the matrix TB1 associated with t is triangularizable by similarity and therefore t is
triangularizable. 1. Compute the characteristic polynomial pt (λ) = |TB − λI|.

50
2. Find the roots of the characteristic polynomial, i.e. the eigenvalues of T. We will obtain values λ1 , . . . , λk with algebraic multiplicities m(λ1 ), . . . , m(λk ).

(a) If m(λ1 ) + . . . + m(λk ) < n, the matrix is not triangularizable.

(b) If m(λ1 ) + . . . + m(λk ) = n, the matrix is triangularizable. Let us see how to obtain its Jordan form. We obtain the Jordan blocks associated to each eigenvalue λi independently.

i. We compute the subspaces

Sλi ,p = ker (T − λi I)^p

for successive values of p. Note that with this notation Sλi ,1 = Sλi . We stop at the first p for which

dim(Sλi ,p+1 ) = dim(Sλi ,p ).

ii. The geometric multiplicity d(λi ) is equal to the number of Jordan blocks relative to the eigenvalue λi .

iii. Consider the following values:

f1 = dim(Sλi ,1 ) = d(λi )
f2 = dim(Sλi ,2 ) − dim(Sλi ,1 )
f3 = dim(Sλi ,3 ) − dim(Sλi ,2 )
. . .
fp = dim(Sλi ,p ) − dim(Sλi ,p−1 )

It can be shown that fk ≥ fk+1 always holds.

iv. We form a diagram with p columns, in such a way that the k-th column is formed by fk elements:

n1  −→ ∗ ∗ . . . ∗ . . . ∗ . . . ∗
n2  −→ ∗ ∗ . . . ∗ . . . ∗
 ..
nf1 −→ ∗ ∗ . . . ∗
       ↑ ↑       ↑
       f1 f2     fp

where d = f1 = d(λi ). The number of elements in each row of the diagram tells us the dimension of each of the Jordan blocks relative to the eigenvalue λi :

        ( Jn1×n1  Ω       . . .  Ω       )
J(λi) = ( Ω       Jn2×n2  . . .  Ω       )
        ( ..      ..      . . .  ..      )
        ( Ω       Ω       . . .  Jnd×nd  )

We note that it is convenient to arrange the blocks starting with the largest and ending with the one of smallest dimension.

v. To obtain the basis relative to which this Jordan matrix is expressed, we look for a set of vectors whose structure mirrors the diagram above:

        Sλi ,1   Sλi ,2  . . .  Sλi ,nf1  . . .  Sλi ,n2  . . .  Sλi ,n1
n1  −→  x̄1,1    x̄1,2   . . .  x̄1,nf1   . . .  x̄1,n2   . . .  x̄1,n1
n2  −→  x̄2,1    x̄2,2   . . .  x̄2,nf1   . . .  x̄2,n2
 ..
nf1 −→  x̄f1,1   x̄f1,2  . . .  x̄f1,nf1

where

x̄j,k = (T − λi I) x̄j,k+1 .

Therefore, to construct these vectors, it is enough to choose the vector at the right end of each row in such a way that the resulting first vectors of all the rows turn out to be linearly independent:

{x̄1,1 , x̄2,1 , . . . , x̄f1,1 }.

The required basis is formed by all these vectors, ordered from left to right and from top to bottom:

{x̄1,1 , x̄1,2 , . . . , x̄1,n1 , x̄2,1 , x̄2,2 , . . . , x̄2,n2 , . . . , x̄f1,1 , x̄f1,2 , . . . , x̄f1,nf1 }.

vi. Finally, the Jordan form is obtained by joining the Jordan blocks corresponding to each eigenvalue:

    ( J(λ1 )  Ω       . . .  Ω       )
J = ( Ω       J(λ2 )  . . .  Ω       )
    ( ..      ..      . . .  ..      )
    ( Ω       Ω       . . .  J(λk )  )

and the basis B′ in which it is expressed is formed by joining all the bases obtained for each eigenvalue, preserving the order in which each vector has been placed. Thus:

J = (MBB′ )⁻¹ T MBB′ ,

where B is the initial basis.
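The steps of this section can be traced on a small example of our own (the matrix T, the chosen vectors and the helper names are illustrative, not from the text). Here p(λ) = (λ − 2)², with d(2) = 1, so there is a single 2 × 2 Jordan block:

```python
def mat_mul(A, B):
    """Product of two small dense matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

T = [[3, 1],
     [-1, 1]]

# Step i:  T - 2I = [[1,1],[-1,-1]] has rank 1, so f1 = d(2) = 1 (one block),
#          and (T - 2I)^2 = 0, so we stop at p = 2: a single block of size 2.
# Step v:  pick x_{1,2} = (1,0) outside ker(T - 2I); then
#          x_{1,1} = (T - 2I) x_{1,2} = (1,-1).
#          The basis {x_{1,1}, x_{1,2}} gives the columns of M_BB'.
M = [[1, 1],
     [-1, 0]]
M_inv = [[0, -1],    # inverse of M, computed by hand (det M = 1)
         [1, 1]]

J = mat_mul(M_inv, mat_mul(T, M))
print(J)   # [[2, 1], [0, 2]]: the Jordan block of size 2 for the eigenvalue 2
```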
4.4 Application to obtaining powers of matrices.

Suppose A is a triangularizable matrix and we want to obtain A^k . The procedure is analogous to the one we have seen for diagonalizable matrices.

Since A is triangularizable, there is a regular matrix P such that

J = P⁻¹ A P, where J is a Jordan matrix.

Then A = P J P⁻¹ and

A^k = (P J P⁻¹) · (P J P⁻¹) · . . . · (P J P⁻¹)   (k times)
    = P J (P⁻¹ P ) J (P⁻¹ P ) · . . . · J P⁻¹ = P J^k P⁻¹.

That is,

A^k = P J^k P⁻¹.

One way to calculate J^k is the following: we decompose J as the sum of two matrices, a diagonal one D and an upper triangular one N with zeros on the diagonal:

J = D + N ⇒ J^k = (D + N)^k .

Although in general Newton's binomial formula does not hold for matrices, it does hold in this case, since D and N commute. So:

(D + N)^k = Σ i=0..k ( k over i ) D^i N^(k−i) .

The computation of the powers of D and N is very simple; in fact, N is nilpotent, so from a sufficiently large exponent onwards the power N^(k−i) vanishes.
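The binomial trick can be checked on a single Jordan block, our own example: for J = 2I + N with N the superdiagonal and N² = 0, only two terms of the binomial survive, giving the closed form J^k = [[2^k, k·2^(k−1)], [0, 2^k]].

```python
def mat_mul(A, B):
    """Product of two small dense matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(A, k):
    """Naive k-th power by repeated multiplication (k >= 1)."""
    P = A
    for _ in range(k - 1):
        P = mat_mul(P, A)
    return P

# J = D + N with D = 2I and N = [[0,1],[0,0]]; D and N commute and N^2 = 0,
# so (D + N)^k = D^k + k * D^(k-1) * N.
J = [[2, 1],
     [0, 2]]

k = 5
closed_form = [[2**k, k * 2**(k - 1)],
               [0, 2**k]]
print(mat_pow(J, k))    # [[32, 80], [0, 32]]
print(closed_form)      # [[32, 80], [0, 32]]
```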