TABLE OF CONTENTS
Acknowledgment
Abstract
CHAPTER ONE: Introduction
1.1 Overview of Eigenvalues
1.2 Importance of Eigenvalues
CHAPTER TWO: Mathematical Foundations
2.1 The Characteristic Equation
2.2 Numerical Eigenvalue Decomposition
2.3 Eigenvalues and Matrix Properties
CHAPTER THREE: The Power Method
3.1 Introduction to the Power Method
3.2 Steps of the Power Method
3.3 Convergence Analysis
3.4 Limitations of the Power Method
CHAPTER FOUR: Solving Numerical Problems Using the Power Method
4.1 Computing the Dominant Eigenvalue
4.2 Slow Convergence with Close Eigenvalues
4.3 Real-World Applications
CHAPTER FIVE: Applications in Numerical Analysis
5.1 Iterative Solvers for Linear Systems
5.2 Stability Analysis of Numerical Schemes
5.3 Dimensionality Reduction in Data Science
5.4 Computational Modeling
CHAPTER SIX: Advanced Numerical Techniques and Extensions
6.1 Accelerating the Power Method
6.2 Finding Multiple Eigenvalues
6.3 Generalizations of the Power Method
CHAPTER SEVEN: Conclusion
References
Acknowledgment
First of all, I want to thank God for His protection and guidance in preparing this seminar successfully. Next, I would like to express my thanks to the Department of Applied Mathematics, Adama Science and Technology University, for providing me with the necessary knowledge, assistance and facilities to conduct my seminar work. I would also like to express my deep appreciation to my advisor, Mesfin Zewude (PhD), for his enthusiasm, guidance and constant encouragement throughout the seminar period. His regular advice and suggestions made my work easier and more proficient, and I really appreciate the time he has taken to supervise me; thank you once again. Last but not least, a special word of thanks also goes to my family and friends for their continuous and unconditional support, love and encouragement throughout the progress of this seminar.
Abstract
Mathematics plays an important role in our everyday life. Fixed point iteration theory is a fascinating subject, with an enormous number of applications in various fields of mathematics. Perhaps due to this transversal character, I have always experienced some difficulty in finding a book (unless expressly devoted to fixed points) treating the subject in a unitary fashion. In most cases, fixed points simply pop up when they are needed. On the contrary, I believe they deserve a relevant place in any general textbook, and particularly in a functional analysis textbook. This is mainly the reason that made me decide to write these notes. An attempt is made to collect most of the significant results of the field and then to present various related applications.
CHAPTER ONE
INTRODUCTION
1.1 Overview of Eigenvalues
An eigenvalue is a scalar value associated with a square matrix (or a linear transformation) that indicates how much a corresponding eigenvector is stretched or compressed when the matrix is applied to it. In simple terms, an eigenvalue tells you how much the matrix "scales" its eigenvector without changing its direction.
1.2 Importance of Eigenvalues
Eigenvalues form a set of scalar values associated with a square matrix or a linear transformation. They play a crucial role in understanding the behavior of a matrix and its transformations, and they are used primarily in areas such as solving systems of linear equations, matrix diagonalization, stability analysis, and optimization, among others.
Eigenvalues are the special set of scalars associated with a system of linear equations and are used mostly in matrix equations. 'Eigen' is a German word that means 'proper' or 'characteristic'. Therefore, eigenvalues are also called characteristic values, characteristic roots, proper values, or latent roots. In simple words, an eigenvalue is the scalar by which the matrix scales its eigenvector. The basic equation is
Ax = λx
where:
A is a square matrix (of size n×n) that represents a linear transformation or linear operator;
x is a non-zero vector, called an eigenvector;
λ is a scalar (a real or complex number) called the eigenvalue.
The scalar λ is an eigenvalue of A, and x is the eigenvector associated with it.
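As an illustration, the following short Python sketch (assuming NumPy is available; the 2×2 matrix is an arbitrary example, not taken from the seminar) computes the eigenvalues of a matrix and checks the defining relation Ax = λx.

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # example symmetric matrix
eigenvalues, eigenvectors = np.linalg.eig(A)

# Check the defining relation A x = lambda x for the first returned pair
lam = eigenvalues[0]                     # one of the eigenvalues (3 or 1 for this matrix)
x = eigenvectors[:, 0]                   # the corresponding eigenvector
print("A @ x   =", A @ x)
print("lam * x =", lam * x)              # the two vectors agree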
CHAPTER TWO
Mathematical Foundations
Eigenvalues arise from the study of linear transformations and matrices, and their calculation
involves solving the characteristic equation det(A−λI)=0
The roots of this equation are the eigenvalues, and each eigenvalue corresponds to a non-zero
eigenvector that is scaled by the matrix. The properties and applications of eigenvalues are
vast, influencing areas from stability analysis to machine learning and quantum mechanics.
2.1 The Characteristic Equation
The characteristic equation forms the mathematical basis for finding eigenvalues.
Derivation: For a square matrix A, the eigenvalues are the roots of det(A − λI) = 0, where λ is a scalar and I is the identity matrix.
Numerical Significance: Solving the characteristic polynomial can be computationally
expensive for large matrices. Direct computation is avoided for matrices of high order
due to potential numerical instability and inefficiency.
Practical Approach: Iterative methods, such as the Power Method, provide approximate
solutions to dominant eigenvalues without explicitly solving this polynomial.
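To make this concrete, here is a minimal Python sketch (NumPy assumed; the 3×3 matrix is an arbitrary illustration) that forms the characteristic polynomial of a small matrix and compares its roots with the eigenvalues returned by a standard routine. For large matrices this polynomial approach is exactly what one avoids in practice.

import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

coeffs = np.poly(A)                      # coefficients of det(A - lambda*I)
roots = np.roots(coeffs)                 # roots of the characteristic polynomial

print("roots of char. polynomial:", np.sort(roots))
print("np.linalg.eigvals(A):    ", np.sort(np.linalg.eigvals(A)))   # same values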
2.2 Numerical Eigenvalue Decomposition
For a square matrix A (usually n×n) that is diagonalizable, the eigenvalue decomposition expresses A as a product of three matrices:
A = V Λ V⁻¹
where V is the matrix whose columns are the eigenvectors of A, and Λ is the diagonal matrix whose entries are the corresponding eigenvalues.
Numerical Challenges:
Computing eigenvalue decomposition directly for large matrices is often computationally
intensive.
Iterative methods are preferred for approximate solutions, especially when matrices are
sparse or structured.
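The sketch below (NumPy assumed; the matrix is the same small example used in Chapter Four) computes this decomposition with a dense routine and reconstructs A from V, Λ and V⁻¹. For large sparse matrices one would instead rely on the iterative methods discussed in the next chapter.

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

eigenvalues, V = np.linalg.eig(A)        # columns of V are eigenvectors
Lambda = np.diag(eigenvalues)            # diagonal matrix of eigenvalues

A_reconstructed = V @ Lambda @ np.linalg.inv(V)
print(np.allclose(A, A_reconstructed))   # True: A = V Lambda V^-1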
2.3 Eigenvalues and Matrix Properties
Matrix behavior: the eigenvalues of A summarize how the matrix acts on vectors; for instance, A is invertible exactly when no eigenvalue is zero, and repeated application of A stretches or shrinks vectors according to the magnitudes of its eigenvalues.
CHAPTER THREE
The Power Method
3.1 Introduction to the Power Method
The power method is a fundamental technique in numerical analysis for computing the
dominant eigenvalue and eigenvector of a matrix. It's an iterative approach that repeatedly
multiplies a matrix by a vector, converging to the largest eigenvalue in magnitude.
This method forms the basis for more advanced eigenvalue algorithms and has wide-ranging
applications. From structural engineering to quantum mechanics, the power method helps
solve complex problems by approximating a matrix's most influential characteristics
efficiently.
3.2 Steps of the Power Method
The method:
Starts with an initial guess vector x0 and iteratively applies the matrix A to it.
Normalizes the resulting vector after each iteration to prevent overflow or underflow.
Computes the Rayleigh quotient (xkᵀ A xk) / (xkᵀ xk) to estimate the eigenvalue at each step.
Continues until the change in the eigenvalue estimate falls below a specified tolerance.
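A minimal Python implementation of these steps might look as follows (NumPy assumed; the tolerance, the maximum iteration count, and the starting vector of ones are arbitrary choices, not prescribed by the seminar).

import numpy as np

def power_method(A, tol=1e-8, max_iter=1000):
    """Approximate the dominant eigenvalue/eigenvector of A by power iteration."""
    x = np.ones(A.shape[0])             # initial guess vector x0
    lam_old = 0.0
    for _ in range(max_iter):
        y = A @ x
        x = y / np.linalg.norm(y)       # normalize to prevent overflow/underflow
        lam = x @ A @ x                 # Rayleigh quotient (x has unit length)
        if abs(lam - lam_old) < tol:    # stop when the estimate stabilizes
            break
        lam_old = lam
    return lam, x

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
lam, x = power_method(A)
print(lam)                              # approximately 5.3723, the dominant eigenvalue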
3.3 Convergence Analysis
Rate of convergence:
Determined by the ratio of the magnitudes of the two largest eigenvalues, |λ2/λ1|.
Convergence is linear when this ratio is strictly less than 1, with the error reduced by roughly a factor of |λ2/λ1| per iteration.
Slower convergence occurs when the ratio is close to 1, indicating closely spaced eigenvalues.
This ratio therefore determines the number of iterations required to reach a desired level of accuracy.
CHAPTER FOUR
Solving Numerical Problems Using the Power Method
4.1 Computing the Dominant Eigenvalue
In the scaled power method we start from an initial vector v0 and repeatedly compute yk+1 = A vk and vk+1 = yk+1 / mk+1, where mk+1 is the largest element in magnitude of yk+1. The dominant eigenvalue in magnitude is given by
λ1 = lim (k → ∞) (yk+1)r / (vk)r ,   r = 1, 2, …, n.
Example: compute the dominant eigenvalue of

A = [ 1  2 ]
    [ 3  4 ]

by the power method. Let v0 = [1, 1]ᵀ. We have the following results.

y1 = A v0 = [3, 7]ᵀ = 7 [0.42857, 1]ᵀ, so v1 = [0.42857, 1]ᵀ
y2 = A v1 = [2.42857, 5.28571]ᵀ = 5.28571 [0.45946, 1]ᵀ, so v2 = [0.45946, 1]ᵀ
y3 = A v2 = [2.45946, 5.37838]ᵀ = 5.37838 [0.45729, 1]ᵀ, so v3 = [0.45729, 1]ᵀ
y4 = A v3 = [2.45729, 5.37187]ᵀ = 5.37187 [0.45744, 1]ᵀ, so v4 = [0.45744, 1]ᵀ
y5 = A v4 = [2.45744, 5.37232]ᵀ = 5.37232 [0.45743, 1]ᵀ, so v5 = [0.45743, 1]ᵀ
y6 = A v5 = [2.45743, 5.37229]ᵀ = 5.37229 [0.45743, 1]ᵀ, so v6 = [0.45743, 1]ᵀ

The ratios (y6)r / (v5)r for r = 1, 2 are 2.45743 / 0.45743 = 5.37225 and 5.37229 / 1 = 5.37229.
The magnitude of the error between the ratios is |5.37225 − 5.37229| = 0.00004 < 0.00005, so the iteration has converged and the dominant eigenvalue is λ1 ≈ 5.3723.
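The following sketch (NumPy assumed) reproduces this hand computation with the same max-element scaling and can be used to check the tabulated iterates.

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = np.array([1.0, 1.0])             # v0

for k in range(1, 7):
    y = A @ v                        # y_{k+1} = A v_k
    m = y[np.argmax(np.abs(y))]      # largest element in magnitude of y_{k+1}
    v = y / m                        # v_{k+1} = y_{k+1} / m_{k+1}
    print(f"k = {k}: m = {m:.5f}, v = {np.round(v, 5)}")

# The scaling factors m approach the dominant eigenvalue, about 5.3723.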
4.2 Slow Convergence with Close Eigenvalues
When the eigenvalues of a matrix are close in magnitude, the Power Method, a commonly used iterative algorithm for computing the dominant eigenvalue and eigenvector, can converge very slowly. This happens because the ratio between the dominant eigenvalue and the next largest eigenvalue (the spectral gap) strongly influences the rate of convergence.
Spectral gap: the convergence rate is governed by the ratio |λ2/λ1|, where λ1 is the dominant eigenvalue and λ2 is the second-largest eigenvalue in absolute magnitude. If |λ2| is close to |λ1|, convergence is slow because the difference between successive approximations of the eigenvector becomes small at each step.
Example:
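As an illustration (a minimal sketch, with diagonal test matrices chosen arbitrarily for this purpose, NumPy assumed), compare how many power iterations are needed when the two largest eigenvalues are well separated versus nearly equal.

import numpy as np

def iterations_needed(A, tol=1e-8, max_iter=100000):
    """Count power-method iterations until the eigenvalue estimate stabilizes."""
    x = np.random.default_rng(0).standard_normal(A.shape[0])
    lam_old = 0.0
    for k in range(1, max_iter + 1):
        x = A @ x
        x = x / np.linalg.norm(x)
        lam = x @ A @ x
        if abs(lam - lam_old) < tol:
            return k
        lam_old = lam
    return max_iter

well_separated = np.diag([5.0, 1.0, 0.5])    # |lambda2/lambda1| = 0.2
nearly_equal   = np.diag([5.0, 4.9, 0.5])    # |lambda2/lambda1| = 0.98
print(iterations_needed(well_separated))     # converges in a handful of steps
print(iterations_needed(nearly_equal))       # needs far more iterations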
4.3 Real-World Applications
7. Image Compression
Problem: Reducing the storage requirements for images while retaining quality.
Application: Image compression techniques like Singular Value Decomposition
(SVD) involve finding eigenvectors of image matrices.
Role of Power Method: efficient computation of dominant singular values and the corresponding singular vectors (see the sketch after this list).
Impact: Enables efficient storage and transmission of images (e.g., JPEG
compression).
8. Recommendation Systems
9. Financial Modeling
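The connection between the power method and SVD-based compression can be sketched as follows (NumPy assumed; the tiny random "image" matrix is a made-up placeholder): the dominant singular value of a matrix M is the square root of the dominant eigenvalue of MᵀM, so power iteration on MᵀM approximates the leading singular triple used in a rank-1 compressed image.

import numpy as np

rng = np.random.default_rng(1)
M = rng.random((8, 6))                 # stand-in for a small grayscale image

# Power iteration on M^T M gives the dominant right singular vector
v = np.ones(M.shape[1])
for _ in range(200):
    v = M.T @ (M @ v)
    v = v / np.linalg.norm(v)

sigma1 = np.linalg.norm(M @ v)         # dominant singular value
u1 = (M @ v) / sigma1                  # corresponding left singular vector
rank1 = sigma1 * np.outer(u1, v)       # best rank-1 approximation of M

print(sigma1)
print(np.linalg.svd(M, compute_uv=False)[0])   # matches the largest singular value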
CHAPTER FIVE
Applications in Numerical Analysis
5.1 Iterative Solvers for Linear Systems
Iterative solvers for linear systems are numerical methods used to approximate the solution of large systems of linear equations, especially when direct methods (e.g., Gaussian elimination or LU decomposition) become computationally expensive or impractical; classical examples include the Jacobi and Gauss–Seidel iterations. These methods are particularly effective for sparse or structured matrices, such as those arising in engineering, physics, and computational science.
Example: the Jacobi iterative method.
In the Jacobi method, an initial value is assumed for each of the unknowns, x1⁽¹⁾, x2⁽¹⁾, …, xn⁽¹⁾. If no information is available regarding the approximate values of the unknowns, the initial value of all the unknowns can be assumed to be zero.
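A minimal Jacobi solver might look like this (NumPy assumed; the test system and the iteration count are arbitrary illustrative choices). Each sweep updates every unknown from the previous iterate only, which is what distinguishes Jacobi from Gauss–Seidel.

import numpy as np

def jacobi(A, b, num_iter=50):
    """Solve A x = b with the Jacobi iteration, starting from x = 0."""
    D = np.diag(A)                        # diagonal entries of A
    R = A - np.diagflat(D)                # off-diagonal part of A
    x = np.zeros_like(b)                  # initial guess: all zeros
    for _ in range(num_iter):
        x = (b - R @ x) / D               # x_i^(k+1) = (b_i - sum_{j!=i} a_ij x_j^(k)) / a_ii
    return x

# Diagonally dominant test system, so the Jacobi iteration converges
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([15.0, 10.0, 10.0])
print(jacobi(A, b))                       # close to np.linalg.solve(A, b)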
5.2 Stability Analysis of Numerical Schemes
In numerical analysis, stability of a numerical scheme refers to the ability of the scheme to produce bounded solutions over time (for time-stepping methods) or during iterations (for iterative methods). When solving differential equations or other mathematical problems numerically, stability is crucial for ensuring that small errors introduced during computation do not grow uncontrollably and lead to incorrect results. The concept of stability applies to a wide range of numerical methods, from finite difference methods for solving partial differential equations (PDEs) to iterative solvers for linear systems.
There are different ways to formalize the concept of stability. The following definitions of forward, backward, and mixed stability are often used in numerical linear algebra:
a. Linear Algebra
b. Matrix Factorization
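Eigenvalues enter stability analysis directly: a stationary iteration xk+1 = M xk + c keeps errors bounded exactly when the spectral radius ρ(M), the largest eigenvalue magnitude of the iteration matrix, is below 1. The sketch below (NumPy assumed; the iteration matrix is built from the Jacobi splitting of the test system shown above) checks this condition numerically.

import numpy as np

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])

# Jacobi iteration matrix M = I - D^{-1} A
D_inv = np.diag(1.0 / np.diag(A))
M = np.eye(3) - D_inv @ A

spectral_radius = max(abs(np.linalg.eigvals(M)))
print(spectral_radius)                 # about 0.35 for this system
print(spectral_radius < 1)             # True: the Jacobi iteration is stable/convergent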
5.3 Dimensionality Reduction in Data Science
Several dimensionality reduction methods in data science are grounded in numerical linear algebra. In principal component analysis (PCA), the principal components are eigenvectors of the covariance matrix of the data:
o The first principal component corresponds to the eigenvector with the largest
eigenvalue.
o The second principal component is the eigenvector corresponding to the
second-largest eigenvalue, and so on.
Numerical Techniques Used: Eigenvalue decomposition of the covariance matrix,
Singular Value Decomposition (SVD).
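A compact sketch of this idea (NumPy assumed; the random data set is a placeholder) computes the covariance matrix of a small data set, finds its eigenvalues and eigenvectors, and projects the data onto the leading principal component.

import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 3))           # 100 samples, 3 features
X = X - X.mean(axis=0)                      # center the data

C = np.cov(X, rowvar=False)                 # 3x3 covariance matrix
eigenvalues, eigenvectors = np.linalg.eigh(C)   # eigh: symmetric matrix, ascending order

# Principal components, ordered by decreasing eigenvalue
order = np.argsort(eigenvalues)[::-1]
first_pc = eigenvectors[:, order[0]]        # direction of largest variance

X_reduced = X @ first_pc                    # project each sample onto the first PC
print(eigenvalues[order])                   # variance explained by each component
print(X_reduced.shape)                      # (100,)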
5.4 Computational Modeling
Here are some key aspects of computational modeling related to numerical concepts.
1. Mathematical Models
Definition: a mathematical model is an abstract representation of a system using mathematical language.
Types: deterministic models and stochastic models.
2. Numerical Methods
Numerical methods are techniques used to obtain approximate solutions to mathematical problems that cannot be solved analytically.
Types of numerical methods:
CHAPTER SIX
Advanced Numerical Techniques and Extensions
6.1 Accelerating the Power Method
Accelerating the Power Method in numerical analysis refers to techniques that enhance the convergence speed of the Power Method when calculating the dominant eigenvalue and eigenvector of a matrix. The standard Power Method can be slow, particularly if the dominant eigenvalue is close in magnitude to the next largest eigenvalue.
Here are some key concepts and methods used to accelerate the Power Method (a sketch of one of them follows this list):
1. Shifted Power Method: introduces a shift σ and works with the matrix A − σI. A well-chosen shift changes the ratio between the two largest eigenvalues of the shifted matrix and can markedly improve convergence; combined with inversion, it targets the eigenvalue for which λ − σ is closest to zero. Formula: bk+1 = (A − σI)⁻¹ bk / ‖(A − σI)⁻¹ bk‖.
2. Deflation: once λ1 and its normalized eigenvector x1 are known, the dominant eigenpair is removed, e.g. for a symmetric matrix A' = A − λ1 x1 x1ᵀ, so that the power method applied to A' converges to λ2.
3. Rayleigh Quotient Iteration: at each step the shift is taken to be the current Rayleigh quotient μk = bkᵀ A bk / bkᵀ bk and the system (A − μk I) wk+1 = bk is solved, giving very fast local convergence.
4. Inverse Power Method: iterates with A⁻¹ instead of A, bk+1 = A⁻¹ bk / ‖A⁻¹ bk‖, converging to the eigenvalue of smallest magnitude.
5. Chebyshev Acceleration: uses Chebyshev polynomials of A to damp the contribution of the non-dominant eigenvalues.
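As a minimal sketch of item 1 under the stated assumptions (NumPy assumed; the shift values, the matrix, and the iteration count are illustrative choices only), the shifted inverse power method below converges to the eigenvalue of A closest to the shift σ.

import numpy as np

def shifted_inverse_power(A, sigma, num_iter=50):
    """Find the eigenvalue of A closest to the shift sigma by inverse iteration."""
    n = A.shape[0]
    b = np.ones(n)
    shifted = A - sigma * np.eye(n)
    for _ in range(num_iter):
        b = np.linalg.solve(shifted, b)     # apply (A - sigma I)^{-1} without forming the inverse
        b = b / np.linalg.norm(b)
    return b @ A @ b                        # Rayleigh quotient of the converged vector

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(shifted_inverse_power(A, sigma=5.0))    # about 5.3723 (eigenvalue closest to 5)
print(shifted_inverse_power(A, sigma=0.0))    # about -0.3723 (eigenvalue closest to 0)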
6.2 Finding Multiple Eigenvalues
Finding multiple eigenvalues (that is, computing several or all of the eigenvalues of a matrix, not just the largest one) involves techniques that can efficiently identify additional eigenvalues and their corresponding eigenvectors. Here is an overview of methods for finding multiple eigenvalues.
These include solving the characteristic polynomial directly, the QR algorithm, the Power Method with deflation (sketched below), subspace iteration, the Lanczos method for sparse matrices, and the Jacobi eigenvalue method for symmetric matrices.
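A minimal sketch of the Power Method with deflation (NumPy assumed; restricted here to a symmetric example so that the simple rank-1 deflation formula from Section 6.1 applies) finds the two largest eigenvalues in turn.

import numpy as np

def power_method(A, num_iter=500):
    """Dominant eigenvalue and unit eigenvector of A by plain power iteration."""
    x = np.ones(A.shape[0])
    for _ in range(num_iter):
        x = A @ x
        x = x / np.linalg.norm(x)
    return x @ A @ x, x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])              # symmetric test matrix

lam1, x1 = power_method(A)
A_deflated = A - lam1 * np.outer(x1, x1)     # remove the dominant eigenpair
lam2, x2 = power_method(A_deflated)

print(lam1, lam2)                            # the two largest eigenvalues of A
print(np.sort(np.linalg.eigvalsh(A))[::-1])  # reference values for comparison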
It may happen that a matrix has some repeated eigenvalues; that is, the characteristic polynomial has one or more roots of multiplicity greater than one, so the matrix has fewer than n distinct eigenvalues.