
Eigenvalues and Eigenvectors with Professor Strang (MIT)

Professor Gilbert Strang's lecture on eigenvalues and eigenvectors covers their definitions, properties, and applications in various fields such as physics and machine learning. The document outlines the mathematical framework for understanding eigenvalues and eigenvectors, including the characteristic equation and examples of special matrices. It also discusses challenges in dealing with complex eigenvalues and the implications of repeated eigenvalues in linear algebra.


Eigenvalues and Eigenvectors

A recap of Professor Gilbert Strang’s MIT Linear Algebra Lecture 21

Contents

1 Eigenvalues and Eigenvectors
  1.1 What are Eigenvalues and Eigenvectors?
  1.2 Properties of Eigenvalues and Eigenvectors
  1.3 Practical Applications
  1.4 Challenges and Complexities

2 Prof. Gilbert Strang’s Lecture on Eigenvalues and Eigenvectors
  2.1 Example
  2.2 Example of a Rotation (Orthogonal) Matrix
  2.3 Trouble with Repeated Eigenvalues
  2.4 Conclusions
  2.5 What’s Next?

3 Quiz on Eigenvalues and Eigenvectors

LinkedIn - Dr. Lonny Thompson, Clemson University, May 29, 2025.
1 Eigenvalues and Eigenvectors
In Lecture 21 from MIT’s 18.06 Linear Algebra
course, Professor Gilbert Strang describes the
meaning of eigenvalues and eigenvectors, an
important concept in linear algebra. These ideas
have profound implications for fields ranging from
physics and engineering to machine learning and
computer science.
In modal structural dynamics and mechanical vibration,
the eigenvalues are the squares of the natural
frequencies, and the eigenvectors are the mode
shapes of vibration.
This lecture explores the fundamental concepts
and methods for solving problems involving eigen-
values and the challenges associated with them.

1.1 What are Eigenvalues and Eigenvectors?
Eigenvalues and eigenvectors provide a way to
understand how a matrix behaves when it trans-
forms a vector.
An eigenvector of a matrix is a nonzero vector
whose direction is unchanged (or exactly reversed)
when the matrix is applied to it; only its
magnitude is scaled. The factor by which it is
scaled is called the eigenvalue.
Mathematically, this relationship is expressed
as:
Ax = λx
where: A is a square matrix, x is the eigenvector,
λ is the eigenvalue.
In other words, the transformation A maps the
input vector x to an output vector that is a
scaled copy of x, parallel to it.
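As a quick numerical check of the relation Ax = λx, here is a minimal sketch with NumPy; the 2 × 2 symmetric matrix is an assumed example chosen only for illustration:

```python
import numpy as np

# an assumed example matrix for illustration
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# numpy returns the eigenvalues and the eigenvectors (as columns of V)
eigvals, V = np.linalg.eig(A)

# verify A x = lambda x for each eigenpair
for lam, x in zip(eigvals, V.T):
    assert np.allclose(A @ x, lam * x)
```

Each column of V, scaled by its eigenvalue, reproduces the matrix-vector product exactly.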

Why Are Eigenvalues and Eigenvectors Important?
Eigenvalues and eigenvectors are important be-
cause they provide insight into the behavior of
matrices, which are used to represent systems of
equations, transformations, 2nd-order tensors,
and other phenomena.

Understanding the eigenstructure of a matrix
helps to:
1. Determine stability in dynamic systems.
2. Simplify computations, such as finding pow-
ers of matrices and diagonalization of ma-
trices for solving systems of ordinary differ-
ential equations, for example, in vibration
analysis.
3. Determine principal normal stresses and
stretches for stress and strain tensors in
continuum mechanics.
4. Determine matrix invariants under change
of basis transformation or coordinate rota-
tion; useful, for example, in defining distor-
tion energy yield criterion (equivalent to Von
Mises stress criterion) in terms of deviatoric
stress.
5. Analyze data, for example, in Principal Com-
ponent Analysis (PCA), for dimensionality
reduction.
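As a minimal illustration of item 5, the sketch below performs PCA by taking the eigenstructure of a covariance matrix; the data are synthetic, generated only for this example:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic 2-D data, stretched along the first axis
data = rng.normal(size=(500, 2)) * np.array([3.0, 0.5])

# PCA via the eigenstructure of the (symmetric) covariance matrix
cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigh: for symmetric input

order = np.argsort(eigvals)[::-1]           # sort by decreasing variance
principal_directions = eigvecs[:, order]    # columns = principal axes
explained_variance = eigvals[order]
```

The eigenvector with the largest eigenvalue points along the direction of greatest variance, which is the first principal component.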

1. Matrix Transformation and Eigenvectors
When a matrix A is applied to a vector x, it usu-
ally changes both its direction and magnitude.
However, for certain vectors, called eigenvec-
tors, the output vector remains in the same or
exactly opposite direction as x. The scaling
factor for this change is the eigenvalue λ.
A classic example involves the projection
matrix, which projects vectors onto a plane.
In this case, the vectors already on the
plane remain unchanged, making them eigen-
vectors with eigenvalue 1.
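This can be checked numerically. Below, a projection matrix onto the column space of an assumed matrix B is built with the standard formula P = B(B^T B)^-1 B^T; its eigenvalues come out as 1 (twice, for the plane) and 0 (for the orthogonal direction):

```python
import numpy as np

# two independent columns spanning a plane in R^3 (an assumed example)
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# projection matrix onto the column space of B
P = B @ np.linalg.inv(B.T @ B) @ B.T

eigvals = np.sort(np.linalg.eigvals(P).real)
assert np.allclose(eigvals, [0.0, 1.0, 1.0])

# a vector already in the plane is unchanged: eigenvalue 1
v = B[:, 0]
assert np.allclose(P @ v, v)
```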

2. Characteristic Equation
To find the eigenvalues of a matrix, we rely on
the characteristic equation, obtained by rewriting
the eigenproblem Ax − λx = 0 as
(A − λI)x = 0
where I is the identity matrix.
For nonzero x, the matrix (A − λI) must
be singular, and therefore the determinant
must be zero.
The goal is to solve for the eigenvalues λ by
setting det(A − λI) to zero, yielding a
characteristic polynomial equation in λ. The
solutions (roots) of this equation are the
eigenvalues.
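A sketch of this route in NumPy: np.poly returns the coefficients of a matrix's characteristic polynomial, and np.roots then gives the eigenvalues; the matrix here is an assumed example:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])   # an assumed example matrix

coeffs = np.poly(A)      # characteristic polynomial coefficients, highest power first
roots = np.roots(coeffs) # its roots are the eigenvalues

# cross-check against numpy's direct eigenvalue routine
assert np.allclose(np.sort(roots.real), np.sort(np.linalg.eigvals(A).real))
```

In practice np.linalg.eig is used directly; forming the polynomial explicitly is mainly instructive, since polynomial root-finding is numerically less stable for large matrices.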

3. Examples of Special Matrices
Professor Strang uses several examples to illus-
trate the behavior of eigenvalues and eigenvec-
tors:
Projection Matrix: A matrix that projects
vectors onto a subspace. Its eigenvalues
are typically 1 (for vectors in the subspace)
and 0 (for vectors orthogonal to it).
Permutation Matrix: A matrix that swaps
elements. It showcases how eigenvectors
change with permutation while maintaining
certain fixed properties.

4. Complex Eigenvalues
Not all matrices have real eigenvalues.
For example, a rotation matrix that rotates
vectors by 90 degrees has no real eigen-
vector that remains parallel to the original
vector. In such cases, imaginary numbers
are used to describe the eigenvalues.

1.2 Properties of Eigenvalues and Eigenvectors
1. Trace and Determinant
The trace of a matrix (sum of diagonal ele-
ments) equals the sum of its eigenvalues.
The determinant is the product of the eigen-
values.
These properties provide valuable checks on
calculations, give insight into the matrix’s
behavior, and are invariant to changes in basis,
particularly rotations.

2. Symmetric Matrices
Symmetric matrices have real eigenvalues, and
their eigenvectors are orthogonal (perpendicu-
lar). This makes them particularly well-behaved
in many applications, such as physics, mechani-
cal vibration, and machine learning, as the eigen-
vectors can be used as a linearly independent
vector basis in which the matrix transform in this
basis is diagonal.
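A short check of these claims, using an assumed symmetric matrix and np.linalg.eigh (which is designed for symmetric/Hermitian input):

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])   # an assumed symmetric example

w, V = np.linalg.eigh(S)   # real eigenvalues, orthonormal eigenvector columns

assert np.all(np.isreal(w))                      # real eigenvalues
assert np.allclose(V.T @ V, np.eye(3))           # orthogonal eigenvectors
assert np.allclose(V.T @ S @ V, np.diag(w))      # S is diagonal in this basis
```

The last assertion is exactly the statement above: in the eigenvector basis, the matrix transform is diagonal.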

3. Repeated Eigenvalues and Deficiency
Matrices can have repeated eigenvalues. In
some cases, repeated eigenvalues may result
in a shortage of independent eigenvectors, mak-
ing it difficult to describe the system fully. This
scenario is called deficiency and is a complex
aspect of linear algebra.

1.3 Practical Applications
Eigenvalues and eigenvectors have numerous appli-
cations:
Mechanical Vibrations: Determine natural fre-
quencies and mode shapes of systems.
Quantum Mechanics: Describe physical states
and energies.
Principal Component Analysis (PCA): Reduce di-
mensions in data, identifying principal directions
of variation.

1.4 Challenges and Complexities
While the theory of eigenvalues and eigenvectors can
be elegant, several complexities arise:
Complex Numbers: Nonsymmetric real matrices can
have complex eigenvalues. Any real matrix can be
decomposed into the sum of a symmetric and an
anti-symmetric part, which leads to
complex-conjugate eigenvalue pairs with both real
and imaginary parts, and to complex eigenvectors.
These complex modes are more difficult to
interpret physically than the real-valued
counterparts obtained with real symmetric matrices.
Deficient Systems: Some matrices might not
have enough independent eigenvectors to de-
scribe their behavior fully.
Sensitivity: Eigenvalues can be sensitive to slight
changes in the matrix, which might lead to chal-
lenges in practical applications.

2 Prof. Gilbert Strang’s Lecture on Eigenvalues and Eigenvectors
Let’s dive into Professor Strang’s lecture in more
detail.

Given an input value x, the function f (x) outputs
the image of x in a codomain.
Similarly, given an input vector x, the matrix A
acts as a linear transformation T (x) = Ax and
outputs the image of x in a codomain.

A question we can ask is:
For input vector x, operated on by square matrix
A, what parallel vectors scaled with λ exist?
This problem can be posed as the eigenproblem:

Ax = λx (1)

Here x are eigenvectors parallel to Ax.


How can we find the x’s and scaling factors λ’s?
For a linear system Ax = b where vector b is
given, we know how to solve for x.
However, for the eigenproblem in Eqn. (1), the
right-hand side b = λx is also unknown, as is the
associated λ.
How can we deal with this? Let’s rewrite the
eigenproblem and move the x to the same side:

Ax − λx = 0

This can be expressed with a collected x using
the identity matrix I:

(A − λI) x = 0 (2)
In this form, the matrix (A − λI) is the matrix
A shifted by λI, and can be viewed as a linear
system with right-hand-side b = 0. Here both λ
and x are unknowns.
We know from linear algebra that with b = 0, a
nonzero x exists in the null space only if the
matrix (A − λI) is singular.
We know a singular matrix must have the deter-
minant equal to zero:

det(A − λI) = 0 (3)

This gives a characteristic polynomial equation
in λ, which we can use to solve for the roots λ
that give a singular matrix.
The roots of this characteristic equation are called
eigenvalues or characteristic values.
Once the λ’s are found that make the matrix
(A − λI) singular, then we can apply equation
elimination methods to find the eigenvectors x’s
in the null space associated with these eigenval-
ues λ’s.

Since the x’s are in the null space, we can only
solve for them up to an arbitrary constant.
If the square matrix is n × n, there will be n eigen-
values and associated eigenvectors.
Since they go together, we can refer to the eigen-
pairs (λi, xi), for i = 1, 2, . . . , n.

2.1 Example
Let’s do an example to see the solving process and
gain insight into the eigenpairs (λi, xi):

A = [ 3  1 ]
    [ 1  3 ]

In this case, the 2 × 2 matrix has the special
properties of being symmetric, A^T = A, with equal
diagonal entries.
Let’s take the determinant of the eigenproblem
for this matrix:

| 3−λ    1  |
|  1   3−λ | = (3 − λ)^2 − 1 = λ^2 − 6λ + 8 = 0

Notice that the 6 in the coefficient of λ is the
trace of the matrix, defined as the sum of the
diagonal entries:

tr(A) = 3 + 3 = 6

We also observe that the constant 8 is equal to
the determinant of A:

det(A) = (3)(3) − (1)(1) = 9 − 1 = 8

Let’s factor this quadratic characteristic equa-
tion,
(λ − 2)(λ − 4) = 0
The two solutions are,

λ1 = 2, λ2 = 4

Notice that the sum of the eigenvalues equals
the trace of A:

tr(A) = λ1 + λ2 = 2 + 4 = 6

Also, the product of the eigenvalues equals the
determinant:

det(A) = λ1 λ2 = (2)(4) = 8

Now, let’s get the two eigenvectors.

For λ1 = 2:

(A − λ1 I) x1 = 0

(A − 2I) x1 = [ 1  1 ] x1 = [ 0 ]
              [ 1  1 ]      [ 0 ]

The matrix must be singular, and x1 is in the null
space. A solution is

x1 = [ −1 ]
     [  1 ]

For λ2 = 4:

(A − λ2 I) x2 = 0

(A − 4I) x2 = [ −1   1 ] x2 = [ 0 ],  →  x2 = [ 1 ]
              [  1  −1 ]      [ 0 ]           [ 1 ]

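The hand calculation above can be confirmed numerically; note that NumPy normalizes eigenvectors to unit length, so its columns differ from x1 and x2 only by a scale factor:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
w, V = np.linalg.eig(A)

assert np.allclose(np.sort(w), [2.0, 4.0])         # eigenvalues 2 and 4
assert np.isclose(w.sum(), np.trace(A))            # sum = trace = 6
assert np.isclose(w.prod(), np.linalg.det(A))      # product = determinant = 8
```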
What properties of the eigensolutions can we
observe here?
Since the matrix is real and symmetric, the
eigenvalues are real-valued, and the eigenvectors
are perpendicular (orthogonal) and linearly
independent: each has a nonzero dot product with
itself and a zero dot product with the other:

x1 · x1 = (−1)(−1) + (1)(1) = 2

x2 · x2 = (1)(1) + (1)(1) = 2

x1 · x2 = (−1)(1) + (1)(1) = 0
Since the orthogonal eigenvectors are linearly


independent, they can be used as a vector basis.
This will be a very nice basis in which a similar
diagonal matrix can be formed with the same
eigenvalues on the diagonal and the same trace
and determinant invariants.

Consider another matrix:

A = [ 0  1 ]
    [ 1  0 ]

| 0−λ   1  |
|  1   0−λ | = λ^2 − 1 = 0,  →  λ1 = −1, λ2 = 1

λ1 = −1 :  x1 = [ −1 ]
                [  1 ]

λ2 = 1 :  x2 = [ 1 ]
               [ 1 ]

Here

tr(A) = λ1 + λ2 = 0,  det(A) = λ1 λ2 = −1

This shows that if we add 3I to this matrix, we
get the previous example matrix with constant
diagonals of 3, and this shifts the eigenvalues
by 3: λ1 = −1 + 3 = 2, λ2 = 1 + 3 = 4, while the
eigenvectors don’t change.
We can see this by adding 3I to A: with Ax = λx
and the same eigenvectors,

(A + 3I)x = Ax + 3x = λx + 3x = (λ + 3)x
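A quick numerical confirmation of this shift property:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

w_A = np.linalg.eigvals(A)
w_shifted = np.linalg.eigvals(A + 3 * np.eye(2))

# eigenvalues shift by exactly 3 (the eigenvectors are unchanged)
assert np.allclose(np.sort(w_A) + 3, np.sort(w_shifted))
```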

CAUTION
Be careful: adding 3I to the special symmetric
matrix with constant diagonals preserved the
eigenvectors and merely shifted the eigenvalues,
but this is not true when adding an arbitrary
matrix, say B, to A.
We could be tempted to write: If A has eigen-
values λ’s, and B has eigenvalues α’s, then the
eigenvalues of A and B add, with the follow-
ing (incorrect) argument: Ax = λx; Bx = αx.
Adding gives, (A + B) x = (λ + α)x; but this is
False!
This argument assumed that the eigenvectors of
A and B were the same, which is not true.
In general, the addition (A+B) and product (AB)
of matrices does not give the sum or product of
eigenvalues.
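A small counterexample makes this concrete; the two matrices below are assumed for illustration:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 2.0]])   # eigenvalues 1 and 2
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # eigenvalues -1 and 1

actual = np.sort(np.linalg.eigvals(A + B))
# A + B = [[1, 1], [1, 2]] has eigenvalues (3 +- sqrt(5)) / 2,
# roughly 0.382 and 2.618, which is not any sum of one
# eigenvalue of A with one eigenvalue of B
assert not np.allclose(actual, [0.0, 3.0])
assert not np.allclose(actual, [1.0, 2.0])
```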

2.2 Example of a Rotation (Orthogonal) Matrix
Consider a rotation (orthogonal) matrix that takes
any vector and rotates it by 90° counter-clockwise.
The rotated basis vectors are T (e1) = [0, 1]^T and
T (e2) = [−1, 0]^T, thus the rotation matrix is:

Q = [ T(e1)  T(e2) ] = [ 0  −1 ]
                       [ 1   0 ]

For what vector x is the output Qx parallel to x
after rotating by 90°?
Answer: there is no real one, since the rotated
vector is always perpendicular to the input vector.

Let’s check:

det(Q − λI) = | −λ  −1 |
              |  1  −λ | = λ^2 + 1 = 0

The roots are imaginary numbers (even though
the matrix is real):

λ1 = √−1 = i,  λ2 = −√−1 = −i

We see the two roots are a complex-conjugate pair!
The trace and determinant are the sum and product
of the eigenvalues:

tr(Q) = λ1 + λ2 = i − i = 0,
det(Q) = λ1 λ2 = i(−i) = −i^2 = −(−1) = 1

The 90° rotation is as far from a symmetric matrix
as we can get (in fact, the opposite): it is an
anti-symmetric matrix, and its eigenvalues are
purely imaginary complex-conjugate pairs.
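Numerically, NumPy confirms the purely imaginary pair and the invariants:

```python
import numpy as np

Q = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # 90-degree counter-clockwise rotation

w = np.linalg.eigvals(Q)

assert np.allclose(np.sort_complex(w), [-1j, 1j])   # conjugate pair +/- i
assert np.isclose(w.sum(), np.trace(Q))             # trace = 0
assert np.isclose(w.prod(), np.linalg.det(Q))       # determinant = 1
```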

In general, a matrix can be decomposed into a
symmetric and an anti-symmetric part:

A = A_sym + A_anti-sym

In this case, the eigenvalues λ = Re ± i Im are
complex-conjugate pairs, with the real part due to
the symmetric part and the imaginary part due to
the anti-symmetric part.

2.3 Trouble with repeated eigenvalues
Consider this triangular matrix (zeros on one side
of the diagonal):

A = [ 3  1 ]
    [ 0  3 ]

det(A − λI) = (3 − λ)(3 − λ) = 0,  →  λ1 = λ2 = 3
Here, the eigenvalues (roots of the characteristic
polynomial) are the same, i.e., repeated.
Note that the eigenvalues are the same as the
diagonal elements of the triangular matrix (this
will always be true).
With repeated eigenvalues, we will have trouble
with the eigenvectors.

With λ1 = 3:

(A − 3I) x1 = [ 0  1 ] x1 = [ 0 ],  →  x1 = [ 1 ]
              [ 0  0 ]      [ 0 ]           [ 0 ]

The matrix is singular (as it should be), and the
eigenvector x1 is in the null space. However,
there is no independent x2, and thus there is a
shortage of independent eigenvectors (we call
this degenerate).
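NumPy illustrates the deficiency: np.linalg.eig still returns two eigenvector columns, but they come out (numerically) parallel, so they do not form an independent pair:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 3.0]])

w, V = np.linalg.eig(A)

assert np.allclose(w, [3.0, 3.0])                    # repeated eigenvalue
# the two eigenvector columns are linearly dependent:
assert np.isclose(np.linalg.det(V), 0.0, atol=1e-8)
```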

2.4 Conclusions
Eigenvalues and eigenvectors are foundational
tools in linear algebra. They allow us to analyze
and understand matrices more deeply. From
simple projections to complex rotations, they re-
veal the underlying structure of transformations
and provide insights across various disciplines.
Mastering the methods for interpreting these val-
ues gives us a powerful perspective on theo-
retical and applied mathematics and physics,
making them indispensable for anyone working
with mathematical models, engineering, machine
learning, or data analysis.

2.5 What’s Next?
What can we do with the eigenvalues and eigen-
vectors after they are found? For one, we can
use them to diagonalize the matrix A and con-
veniently express the powers of the matrix A.
This is the subject of Professor Strang’s Video
Lecture 22, found here:
https://youtu.be/13r9QY6cmjc?si=bwC2xo6a8XjuTkIF
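As a preview, a minimal sketch of that diagonalization idea, reusing the symmetric example from §2.1: with eigenvectors as the columns of S and eigenvalues on the diagonal of Λ, we have A = S Λ S^-1, and powers of A reduce to powers of the diagonal entries:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
w, S = np.linalg.eig(A)        # columns of S are eigenvectors
Lam = np.diag(w)

# A = S Lam S^-1, so powers become easy: A^k = S Lam^k S^-1
assert np.allclose(A, S @ Lam @ np.linalg.inv(S))
assert np.allclose(np.linalg.matrix_power(A, 5),
                   S @ np.diag(w**5) @ np.linalg.inv(S))
```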

References
Lecture 21: Eigenvalues and Eigenvectors.
Prof. Gilbert Strang.
Lec 21 – MIT 18.06 Linear Algebra, Spring 2005.
YouTube Video:
https://youtu.be/lXNXrLcoerU?si=YEsra72FrzJztKT_

🧠 Let’s Discuss with Your Insights and Experience

Have you used eigenvalues and eigenvectors in


your work?

Can you share a real‑world example where a deep


understanding of eigenvalues and eigenvectors
paid off for you?
What’s the most counterintuitive insight you’ve
discovered in working with eigenvalues and eigen-
vectors?

3 Quiz on Eigenvalues and Eigenvectors
1. What is an eigenvector?
A) A vector that changes its direction after matrix
multiplication.
B) A vector that remains in the same direction
after matrix multiplication.
C) A vector that disappears after matrix multipli-
cation.
D) A vector that rotates 90 degrees after matrix
multiplication.
2. How is the eigenvalue of a matrix defined?
A) The sum of all elements in a matrix.
B) The trace of a matrix.
C) The scalar by which an eigenvector is scaled
after matrix multiplication.
D) The number of columns in the matrix.

3. What happens to the eigenvalues when you add cI
(a scalar multiple of the identity) to a matrix A?
A) Eigenvalues remain the same.
B) Eigenvalues are scaled by the scalar.
C) Eigenvalues increase by the value of the scalar.
D) Eigenvalues decrease by the value of the scalar.
4. What is a characteristic equation used for?
A) To find the inverse of a matrix.
B) To determine the null space of a matrix.
C) To calculate eigenvalues by setting the deter-
minant of A − λI to zero.
D) To identify the trace of the matrix.

5. What is a major challenge when dealing with repeated
eigenvalues?
A) They always produce complex numbers.
B) They often result in a lack of enough indepen-
dent eigenvectors.
C) They require the use of orthogonal matrices.
D) They always have real numbers.

Enjoy testing your knowledge.

Answers to Quiz
Here are the correct answers for the quiz:
1. B) A vector that remains in the same direction after
matrix multiplication.
2. C) The scalar by which an eigenvector is scaled after
matrix multiplication.
3. C) Eigenvalues increase by the value of the scalar.
4. C) To calculate eigenvalues by setting the determi-
nant of A − λI to zero.
5. B) They often result in a lack of enough independent
eigenvectors.

We Love Matrix Linear Algebra!

