
Eigenvalues and Eigenvectors
CS6015/LARP

Ack: Linear Algebra and Its Applications, Gilbert Strang


The Solution of 𝑨𝒙 = 𝝀𝒙
• 𝐴𝑥 = 𝜆𝑥 is a nonlinear equation; 𝜆 multiplies 𝑥. If we could
discover 𝜆, then the equation for 𝑥 would be linear.
• We could write 𝜆𝐼𝑥 in place of 𝜆𝑥, and bring this term over to the
left side:
(𝐴 − 𝜆𝐼)𝑥 = 0

• We want a nonzero eigenvector 𝑥. The vector 𝑥 = 0 always satisfies 𝐴𝑥 = 𝜆𝑥, but it is useless.
• To be of any use, the nullspace of 𝐴 − 𝜆𝐼 must contain vectors other
than zero.
• In short, 𝑨 − 𝝀𝑰 must be singular.
The Solution of 𝑨𝒙 = 𝝀𝒙
• Example: 𝐴 = [4 −5; 2 −3] gives det(𝐴 − 𝜆𝐼) = det [4−𝜆 −5; 2 −3−𝜆] = 𝜆² − 𝜆 − 2.

• This is the characteristic polynomial.

• Its roots, where the determinant is zero, are the eigenvalues.
The Solution of 𝑨𝒙 = 𝝀𝒙

• There are two eigenvalues, because a quadratic has two roots: 𝜆² − 𝜆 − 2 = (𝜆 + 1)(𝜆 − 2).
• The values 𝜆 = −1 and 𝜆 = 2 lead to a solution of 𝐴𝑥 = 𝜆𝑥 or (𝐴 − 𝜆𝐼)𝑥 = 0.
The Solution of 𝑨𝒙 = 𝝀𝒙
A second example: 𝐴 = [4 3; −2 −3], with eigenvalues −2 and 3.
In both matrices 𝐴 + 2𝐼 and 𝐴 − 3𝐼, the columns are multiples of each
other, so either column can be used as the eigenvector.
Thus (1, −2) can be taken as an eigenvector associated
with the eigenvalue −2, and (3, −1) as an eigenvector
associated with the eigenvalue 3, as can be verified by
multiplying them by 𝐴. (Read about the Cayley–Hamilton theorem.)
The Solution of 𝑨𝒙 = 𝝀𝒙

• The steps in solving 𝐴𝑥 = 𝜆𝑥 (a numerical sketch follows the list):

1. Compute the determinant of 𝑨 − 𝝀𝑰. With 𝜆 subtracted along the diagonal, this determinant is a polynomial of degree 𝑛. It starts with (−𝜆)ⁿ.
2. Find the roots of this polynomial. The 𝑛 roots are the eigenvalues of 𝐴.
3. For each eigenvalue, solve the equation (𝑨 − 𝝀𝑰)𝒙 = 𝟎. Since the determinant is zero, there are solutions other than 𝑥 = 0. Those are the eigenvectors.
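A minimal numerical sketch of these three steps, assuming NumPy and the 2 by 2 example above (not part of the original slides):

import numpy as np

A = np.array([[4.0, -5.0],
              [2.0, -3.0]])    # the 2 by 2 example above

# Step 1: coefficients of det(A - lambda*I), highest power first.
coeffs = np.poly(A)            # [1, -1, -2]  ->  lambda^2 - lambda - 2

# Step 2: the roots of the characteristic polynomial are the eigenvalues.
eigenvalues = np.roots(coeffs)  # [2, -1]

# Step 3: for each eigenvalue, solve (A - lambda*I)x = 0 via the null space.
for lam in eigenvalues:
    M = A - lam * np.eye(2)
    _, s, Vt = np.linalg.svd(M)                  # null space from the SVD
    x = Vt[-1]                                   # eigenvector for this eigenvalue
    print(lam, x, np.allclose(A @ x, lam * x))   # check Ax = lambda*x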
The Solution of 𝑨𝒙 = 𝝀𝒙 (Recap)

• The key equation was 𝐴𝑥 = 𝜆𝑥.


• Most vectors 𝑥 will not satisfy such an equation.
• They change direction when multiplied by 𝐴, so that 𝐴𝑥 is not a
multiple of 𝑥.
• This means that only certain special numbers are eigenvalues, and
only certain special vectors 𝒙 are eigenvectors.
Example 2. The eigenvalues of a projection matrix are 1 or 0.

• We have 𝜆 = 1 when 𝑥 projects to itself, and 𝜆 = 0 when 𝑥 projects to the zero vector.
• The column space of 𝑃 is filled with eigenvectors, and so is the nullspace.
• If those spaces have dimension 𝑟 and 𝑛 − 𝑟, then 𝜆 = 1 is repeated 𝑟 times and 𝜆 = 0 is repeated 𝑛 − 𝑟 times (always 𝑛 𝜆’s).

• A zero eigenvalue signifies that the matrix is singular.
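A quick sketch of Example 2 (the projection 𝑃 and the vector 𝑎 below are assumed for illustration):

import numpy as np

# Project onto the line through a: P = a (a^T a)^-1 a^T.
a = np.array([[1.0], [2.0], [2.0]])
P = a @ np.linalg.inv(a.T @ a) @ a.T

# Eigenvalues are 1 (repeated r = 1 times) and 0 (repeated n - r = 2 times).
print(np.round(np.sort(np.linalg.eigvals(P)), 10))   # [0. 0. 1.]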


Example 3. The eigenvalues are on the main diagonal when 𝐴 is triangular: here, a triangular matrix with diagonal entries 1, 3/4, and 1/2.

• The determinant of 𝐴 − 𝜆𝐼 is just the product of the diagonal entries: (1 − 𝜆)(3/4 − 𝜆)(1/2 − 𝜆).

• It is zero if 𝜆 = 1, 𝜆 = 3/4, or 𝜆 = 1/2.

• The eigenvalues were already sitting along the main diagonal.


For a 2 by 2 matrix, the trace and determinant tell us everything: 𝜆₁ + 𝜆₂ = trace and 𝜆₁𝜆₂ = det, so the eigenvalues are 𝜆 = (trace ± √(trace² − 4 det))/2.
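As a quick check of the trace-and-determinant shortcut (a sketch, assuming the 2 by 2 example above):

import numpy as np

A = np.array([[4.0, -5.0],
              [2.0, -3.0]])              # assumed running example
tr, det = np.trace(A), np.linalg.det(A)  # tr = 1, det = -2
disc = np.sqrt(tr**2 - 4*det)
print((tr + disc) / 2, (tr - disc) / 2)  # 2.0 and -1.0, the two eigenvalues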
Diagonalization of a Matrix

• The eigenvectors diagonalize a matrix: suppose the 𝑛 by 𝑛 matrix 𝐴 has 𝑛 linearly independent eigenvectors. If these vectors are the columns of a matrix 𝑆, then 𝑆⁻¹𝐴𝑆 = Λ is diagonal, with the eigenvalues of 𝐴 on its diagonal.

• We call 𝑆 the “eigenvector matrix” and Λ the “eigenvalue matrix”.


Diagonalization of a Matrix

• It is crucial to keep these matrices in the right order: 𝐴𝑆 = 𝑆Λ.

• If Λ came before 𝑆 (instead of after), then 𝜆₁ would multiply the entries in the first row; we want 𝜆₁ to multiply the first column of 𝑆. Therefore 𝐴𝑆 = 𝑆Λ, which gives 𝑆⁻¹𝐴𝑆 = Λ, or 𝐴 = 𝑆Λ𝑆⁻¹.

• 𝑆 is invertible, because its columns (the eigenvectors) were assumed to be independent.
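A short sketch of diagonalization, assuming NumPy and the running 2 by 2 example:

import numpy as np

A = np.array([[4.0, -5.0],
              [2.0, -3.0]])
lam, S = np.linalg.eig(A)          # columns of S are eigenvectors
Lambda = np.linalg.inv(S) @ A @ S  # S^-1 A S
print(np.round(Lambda, 10))        # diagonal, eigenvalues on the diagonal
print(np.allclose(A, S @ np.diag(lam) @ np.linalg.inv(S)))   # A = S Lambda S^-1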
Diagonalization of a Matrix (REMARKS)

Remark 1. If the matrix 𝐴 has no repeated eigenvalues—the numbers 𝜆₁, …, 𝜆ₙ are distinct—then its 𝑛 eigenvectors are automatically independent. So any matrix with distinct eigenvalues can be diagonalized.

Remark 2. The diagonalizing matrix 𝑆 is not unique. We can multiply the columns of 𝑆 by any nonzero constants and produce a new diagonalizing 𝑆.

Remark 3. Other matrices 𝑆 will not produce a diagonal Λ.

Remark 4. Not all matrices possess 𝑛 linearly independent eigenvectors, so not all matrices are diagonalizable.
Diagonalization of a Matrix (REMARKS)

The standard example of a “defective matrix” is 𝐴 = [0 1; 0 0].

Its eigenvalues are 𝜆₁ = 𝜆₂ = 0, since it is triangular with zeros on the diagonal: det(𝐴 − 𝜆𝐼) = 𝜆².

All eigenvectors of this 𝐴 are multiples of the vector (1, 0).

𝜆 = 0 is a double eigenvalue—its
algebraic multiplicity is 2. But the
geometric multiplicity is 1—there is
only one independent eigenvector.
We can’t construct 𝑺.
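A sketch of this defective matrix in NumPy (the rank check is one way to see the shortage of eigenvectors):

import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
lam, S = np.linalg.eig(A)
print(lam)                       # [0. 0.]: algebraic multiplicity 2
print(np.linalg.matrix_rank(S))  # 1: only one independent eigenvector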
Diagonalization of a Matrix (REMARKS)

• Diagonalization can fail only if there are repeated eigenvalues.


• Even then, it does not always fail.
• 𝐴 = 𝐼 has repeated eigenvalues 1,1, … , 1 but it is already diagonal!
There is no shortage of eigenvectors in that case.
Diagonalization of a Matrix (REMARKS)

• Eigenvectors that come from distinct eigenvalues are automatically


independent.

• A matrix with 𝑛 distinct eigenvalues can be diagonalized. This is the


typical case.
Examples of Diagonalization

• Example: the rotation matrix 𝐾 = [0 −1; 1 0], which turns every vector through 90°. What are its eigenvalues and eigenvectors?
How can a vector be rotated and still have its direction unchanged?

• It can’t—except for the zero vector, which is useless.


• The eigenvalues of 𝐾 are imaginary numbers, 𝜆₁ = 𝑖 and 𝜆₂ = −𝑖.
• In turning through 90°, the eigenvectors are multiplied by 𝑖 or −𝑖.
• The eigenvalues are distinct, even if imaginary, and the eigenvectors are independent. They go into the columns of 𝑆: 𝑥₁ = (1, −𝑖) and 𝑥₂ = (1, 𝑖).

• Complex numbers are needed even for real matrices.


• If there are too few real eigenvalues, there are always 𝑛 complex
eigenvalues. (Complex includes real, when the imaginary part is
zero.)
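A sketch of the rotation example (NumPy handles the complex eigenvalues automatically):

import numpy as np

K = np.array([[0.0, -1.0],
              [1.0,  0.0]])      # rotation through 90 degrees
lam, S = np.linalg.eig(K)
print(lam)                       # [0.+1.j  0.-1.j]
print(np.allclose(K @ S[:, 0], lam[0] * S[:, 0]))   # Kx = ix holds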
Powers and Products: 𝑨𝒌 and 𝑨𝑩

• The eigenvalues of 𝐴² are exactly 𝜆₁², …, 𝜆ₙ², and every eigenvector of 𝐴 is also an eigenvector of 𝐴².

• If 𝐴𝑥 = 𝜆𝑥, then 𝐴²𝑥 = 𝐴(𝜆𝑥) = 𝜆𝐴𝑥 = 𝜆²𝑥. Thus 𝜆² is an eigenvalue of 𝐴², with the same eigenvector 𝑥.

• The same result comes from diagonalization, by squaring 𝑆⁻¹𝐴𝑆 = Λ: (𝑆⁻¹𝐴𝑆)(𝑆⁻¹𝐴𝑆) = 𝑆⁻¹𝐴²𝑆 = Λ².

• The matrix 𝐴² is diagonalized by the same 𝑆, so the eigenvectors are unchanged. The eigenvalues are squared.
• This continues to hold for any power of 𝐴: 𝐴ᵏ = 𝑆Λᵏ𝑆⁻¹.
Powers and Products: 𝑨𝒌 and 𝑨𝑩

• If 𝐴 is invertible, this rule also applies to its inverse (the power 𝑘 = −1).
• The eigenvalues of 𝐴⁻¹ are 1/𝜆ᵢ: if 𝐴𝑥 = 𝜆𝑥, then 𝑥 = 𝜆𝐴⁻¹𝑥, so 𝐴⁻¹𝑥 = (1/𝜆)𝑥.
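A sketch checking both rules at once (assuming the invertible running example):

import numpy as np

A = np.array([[4.0, -5.0],
              [2.0, -3.0]])      # eigenvalues -1 and 2
lam = np.linalg.eigvals(A)
print(np.sort(lam**2), np.sort(np.linalg.eigvals(A @ A)))              # squares match
print(np.sort(1/lam),  np.sort(np.linalg.eigvals(np.linalg.inv(A))))  # reciprocals match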
Powers and Products: 𝑨𝒌 and 𝑨𝑩

Suppose 𝐴 and 𝐵 commute: 𝐴𝐵 = 𝐵𝐴. If 𝐴𝑥 = 𝜆𝑥, then 𝐴(𝐵𝑥) = 𝐵(𝐴𝑥) = 𝜆(𝐵𝑥). Thus 𝑥 and 𝐵𝑥 are both eigenvectors of 𝐴, sharing the same 𝜆 (or else 𝐵𝑥 = 0).
Complex Matrices

• We now introduce the space 𝐂ⁿ of vectors with 𝑛 complex components.
• Addition and matrix multiplication follow the same rules
as before.
• Length is computed differently.
• The old way, the vector in 𝐂² with components (1, 𝑖) would have zero length: 1² + 𝑖² = 0, which is not good.
• The correct length squared uses the modulus of each component: |1|² + |𝑖|² = 2.
• The inner product, the transpose, the definitions of
symmetric and orthogonal matrices, all need to be
modified for complex numbers.
Complex Matrices

We particularly want to find out about symmetric matrices and Hermitian matrices: where are their eigenvalues, and what is special about their eigenvectors?
Complex Numbers and Their Conjugates

The real numbers 𝑎 and the imaginary numbers 𝑖𝑏 are special cases of complex numbers; they lie on the two axes of the complex plane.

Fig: The complex plane, with 𝑎 + 𝑖𝑏 = 𝑟𝑒^(𝑖𝜃) and its conjugate 𝑎 − 𝑖𝑏 = 𝑟𝑒^(−𝑖𝜃)


Complex Numbers and Their Conjugates

• The complex conjugate of 𝑎 + 𝑖𝑏 is the number 𝑎 − 𝑖𝑏. The sign


of the imaginary part is reversed.
• It is the mirror image across the real axis
• Any real number is its own conjugate, since 𝑏 = 0.
Complex Numbers and Their Conjugates

The conjugate is denoted by a bar or a star: (𝑎 + 𝑖𝑏)* = 𝑎 − 𝑖𝑏.

Important properties:
1. The conjugate of a product equals the product of the conjugates: (𝑧₁𝑧₂)* = 𝑧₁* 𝑧₂*.
2. The conjugate of a sum equals the sum of the conjugates: (𝑧₁ + 𝑧₂)* = 𝑧₁* + 𝑧₂*.
3. Multiplying any 𝑎 + 𝑖𝑏 by its conjugate 𝑎 − 𝑖𝑏 produces a real number: (𝑎 + 𝑖𝑏)(𝑎 − 𝑖𝑏) = 𝑎² + 𝑏².
Complex Numbers and Their Conjugates

• Trigonometry connects the sides 𝑎 and 𝑏 to the hypotenuse 𝑟 by 𝑎 = 𝑟 cos 𝜃 and 𝑏 = 𝑟 sin 𝜃.
• Combining these two equations moves us into polar coordinates: 𝑎 + 𝑖𝑏 = 𝑟 cos 𝜃 + 𝑖𝑟 sin 𝜃 = 𝑟𝑒^(𝑖𝜃).

The most important special case is when 𝑟 = 1, Euler’s formula: 𝑒^(𝑖𝜃) = cos 𝜃 + 𝑖 sin 𝜃.

• It falls on the unit circle in the complex plane.
• As 𝜃 varies from 0 to 2𝜋, this number 𝑒^(𝑖𝜃) circles around zero at the constant radial distance |𝑒^(𝑖𝜃)| = 1.
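A short sketch of the conjugate rules and Euler’s formula, using Python’s built-in complex numbers:

import cmath

z1, z2 = 3 + 4j, 1 - 2j
print((z1 * z2).conjugate() == z1.conjugate() * z2.conjugate())  # True
print((z1 + z2).conjugate() == z1.conjugate() + z2.conjugate())  # True
print(z1 * z1.conjugate())    # (25+0j) = a^2 + b^2, a real number

theta = cmath.pi / 2
print(cmath.exp(1j * theta))  # ~(0+1j): e^(i*pi/2) = i, on the unit circle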
Lengths and Transposes in the Complex Case

The complex vector space 𝐂ⁿ contains all vectors 𝑥 with 𝑛 complex components 𝑥₁, …, 𝑥ₙ.

In the new definition of length, each 𝑥ⱼ² is replaced by its modulus |𝑥ⱼ|²:
‖𝑥‖² = |𝑥₁|² + |𝑥₂|² + ⋯ + |𝑥ₙ|².
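A sketch of the complex length in NumPy (np.vdot conjugates its first argument):

import numpy as np

x = np.array([1.0 + 0.0j, 0.0 + 1.0j])   # the vector (1, i)
print(np.vdot(x, x).real)   # 2.0: |1|^2 + |i|^2
print(np.linalg.norm(x))    # sqrt(2), the correct length
print(x @ x)                # 0j: the old x^T x, which fails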
Hermitian Matrices

The conjugate transpose 𝐴̄ᵀ = 𝐴ᴴ = 𝐴* is pronounced “𝐴 Hermitian”, and a matrix with 𝐴ᴴ = 𝐴 is called a Hermitian matrix.

• This symbol 𝐴ᴴ gives official recognition to the fact that, with complex entries, it is seldom that we want only the transpose of 𝐴.
• It is the conjugate transpose 𝐴ᴴ that becomes appropriate.
• A real symmetric matrix is certainly Hermitian. The eigenvalues of a Hermitian matrix are real.
Hermitian Matrices

• Suppose 𝐴𝑥 = 𝜆₁𝑥 and 𝐴𝑦 = 𝜆₂𝑦, with 𝐴 = 𝐴ᴴ. Then (𝜆₁𝑥)ᴴ𝑦 = (𝐴𝑥)ᴴ𝑦 = 𝑥ᴴ𝐴𝑦 = 𝑥ᴴ(𝜆₂𝑦).
• The outside numbers are 𝜆₁𝑥ᴴ𝑦 = 𝜆₂𝑥ᴴ𝑦, since the 𝜆’s are real.
• Now use the assumption 𝜆₁ ≠ 𝜆₂, which forces the conclusion that 𝑥ᴴ𝑦 = 0.
• In our example, the two eigenvectors are orthogonal.
• In geometry or mechanics, this is the principal axis
theorem. It gives the right choice of axes for an ellipse.
• Those axes are perpendicular, and they point along the
eigenvectors of the corresponding matrix.
• In mathematics the formula 𝐴 = 𝑄Λ𝑄ᵀ is known as the spectral theorem.
Hermitian Matrices

• The spectral theorem 𝐴 = 𝑄Λ𝑄ᵀ has been proved only when the eigenvalues of 𝐴 are distinct. Then there are certainly 𝑛 independent eigenvectors, and 𝐴 can be safely diagonalized.
• Nevertheless it is true that even with repeated
eigenvalues, a symmetric matrix still has a complete set
of orthonormal eigenvectors.
• The extreme case is the identity matrix, which has
𝜆 = 1 repeated n times—and no shortage of
eigenvectors.
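A sketch of the spectral theorem for a sample symmetric matrix (np.linalg.eigh is designed for symmetric/Hermitian input):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, Q = np.linalg.eigh(A)
print(lam)                                      # real eigenvalues [1. 3.]
print(np.allclose(Q.T @ Q, np.eye(2)))          # Q is orthogonal
print(np.allclose(A, Q @ np.diag(lam) @ Q.T))   # A = Q Lambda Q^T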
Unitary Matrices

A complex matrix with orthonormal columns is called a unitary matrix: 𝑈ᴴ𝑈 = 𝐼.

Two analogies:
1. A Hermitian (or symmetric) matrix can be compared to a real
number.
2. A unitary (or orthogonal) matrix can be compared to a number on
the unit circle.
Unitary Matrices
• Skew-symmetric matrix: 𝑲𝑻 = −𝑲
• Skew-Hermitian matrix: 𝑲𝑯 = −𝑲

• The eigenvalues of 𝐾 are purely imaginary instead of purely real: multiplying a Hermitian matrix by 𝑖 multiplies each eigenvalue by 𝑖. The eigenvectors are not changed.
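A sketch of a skew-Hermitian example (here a real skew-symmetric 𝐾, so 𝐾ᴴ = 𝐾ᵀ = −𝐾):

import numpy as np

K = np.array([[0.0,  1.0],
              [-1.0, 0.0]])            # skew-symmetric, hence skew-Hermitian
print(np.allclose(K.conj().T, -K))     # K^H = -K
print(np.linalg.eigvals(K))            # [0.+1.j  0.-1.j]: purely imaginary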
Similarity Transformations

• The matrices 𝐴 and 𝑀⁻¹𝐴𝑀 are “similar”.
• Going from one to the other is a similarity transformation.
• A whole family of matrices 𝑀⁻¹𝐴𝑀 is similar to 𝐴, and there are two questions: what do these similar matrices have in common, and how should 𝑀 be chosen to make 𝑀⁻¹𝐴𝑀 as simple as possible?
Similarity Transformations

• The family of matrices 𝑀⁻¹𝐴𝑀 includes 𝐴 itself, by choosing 𝑀 = 𝐼.
• Similar matrices share the same eigenvalues.
Similarity Transformations

• If 𝐵 = 𝑀⁻¹𝐴𝑀, then 𝐵 − 𝜆𝐼 = 𝑀⁻¹(𝐴 − 𝜆𝐼)𝑀, so the polynomials det(𝐴 − 𝜆𝐼) and det(𝐵 − 𝜆𝐼) are equal.
• Their roots—the eigenvalues of 𝐴 and 𝐵—are the same.
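A sketch with an (assumed) invertible 𝑀, checking that similar matrices share eigenvalues:

import numpy as np

A = np.array([[4.0, -5.0],
              [2.0, -3.0]])
M = np.array([[1.0, 2.0],
              [0.0, 1.0]])     # any invertible M will do
B = np.linalg.inv(M) @ A @ M
print(np.sort(np.linalg.eigvals(A)))   # [-1.  2.]
print(np.sort(np.linalg.eigvals(B)))   # [-1.  2.]: the same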
Diagonalizing Symmetric and Hermitian Matrices

• The triangular form will show that any symmetric or Hermitian


matrix—whether its eigenvalues are distinct or not—has a complete
set of orthonormal eigenvectors.
• We need a unitary matrix 𝑈 such that 𝑈⁻¹𝐴𝑈 is diagonal. Schur’s lemma provides a unitary 𝑈 with 𝑈⁻¹𝐴𝑈 = 𝑇 triangular.
• This triangular 𝑇 must be diagonal, because it is also Hermitian when 𝐴 = 𝐴ᴴ: 𝑇ᴴ = (𝑈ᴴ𝐴𝑈)ᴴ = 𝑈ᴴ𝐴ᴴ𝑈 = 𝑈ᴴ𝐴𝑈 = 𝑇.
• The diagonal matrix 𝑈⁻¹𝐴𝑈 represents a key theorem in linear algebra.
Diagonalizing Symmetric and Hermitian Matrices

The diagonal matrix 𝑈⁻¹𝐴𝑈 represents a key theorem in linear algebra: every real symmetric matrix can be diagonalized by an orthogonal matrix (𝐴 = 𝑄Λ𝑄ᵀ), and every Hermitian matrix can be diagonalized by a unitary matrix (𝐴 = 𝑈Λ𝑈ᴴ)—whether or not its eigenvalues are distinct.
Normal Matrices

• The matrix 𝑁 is normal if it commutes with 𝑁ᴴ: 𝑁𝑁ᴴ = 𝑁ᴴ𝑁.
• Normal matrices are exactly those that have a complete set of orthonormal eigenvectors.
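A sketch checking normality and orthonormal eigenvectors (the rotation matrix is a sample normal matrix that is not Hermitian):

import numpy as np

N = np.array([[0.0, -1.0],
              [1.0,  0.0]])
print(np.allclose(N @ N.conj().T, N.conj().T @ N))   # True: N is normal
lam, S = np.linalg.eig(N)
print(np.allclose(S.conj().T @ S, np.eye(2)))        # eigenvectors orthonormal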

Read about Jordan form
