Notes 1 MSC Physics I Mathematical Physics
Vector Space
A (real) vector space V is a non-empty set equipped with an addition and a scalar multiplication operation such that for all α, β ∈ R and all u, v, w ∈ V:
1. $u + v \in V$ and $\alpha u \in V$ (closure)
2. $u + v = v + u$ and $(u + v) + w = u + (v + w)$
3. there is a zero vector $0 \in V$ with $v + 0 = v$, and each $v$ has an additive inverse $-v$ with $v + (-v) = 0$
4. $\alpha(u + v) = \alpha u + \alpha v$ and $(\alpha + \beta)v = \alpha v + \beta v$
5. $(\alpha\beta)v = \alpha(\beta v)$ and $1\,v = v$
Linear independence
Vectors $v_1, v_2, \dots, v_k \in V$ are linearly independent if the equation
$\alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_k v_k = 0$
has the unique solution
$\alpha_1 = \alpha_2 = \dots = \alpha_k = 0$
Linear Dependence
The vectors are linearly dependent if the equation $\alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_k v_k = 0$ has a solution in which not all of the scalars $\alpha_1, \dots, \alpha_k$ are zero.
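As an illustration (a sketch added to these notes), linear independence of finitely many vectors can be tested numerically: stack them as columns of a matrix and compare its rank with the number of vectors. A minimal Python/NumPy sketch, with arbitrarily chosen example vectors:

import numpy as np

def linearly_independent(vectors):
    # Stack the vectors as columns; they are independent exactly
    # when the rank of the resulting matrix equals their number.
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = v1 + 2 * v2   # a linear combination of v1 and v2

print(linearly_independent([v1, v2]))      # True
print(linearly_independent([v1, v2, v3]))  # False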
Bases
A set of vectors $B = \{v_1, v_2, \dots, v_n\}$ in V is a basis of V if the vectors are linearly independent and every vector of V can be written as a linear combination of them.
Dimension
The number of vectors in a basis of V is called the dimension of V, written dim V.
An inner product space is a vector space V together with a function $\langle \cdot, \cdot \rangle$, called an inner product, which associates each pair of vectors u, v with a scalar $\langle u, v \rangle$ and which satisfies:
1. $\langle u, v \rangle = \langle v, u \rangle$ (symmetry, for a real space)
2. $\langle \alpha u + \beta v, w \rangle = \alpha \langle u, w \rangle + \beta \langle v, w \rangle$ (linearity)
3. $\langle v, v \rangle \geq 0$, with $\langle v, v \rangle = 0$ if and only if $v = 0$ (positive definiteness)
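For example, the ordinary dot product on $\mathbb{R}^n$ is an inner product, and the axioms can be spot-checked numerically (a sketch added to these notes; the vectors are arbitrary):

import numpy as np

u = np.array([1.0, 2.0, -1.0])
v = np.array([0.5, 0.0, 3.0])
w = np.array([2.0, -1.0, 1.0])
a, b = 2.0, -3.0

# Symmetry: <u, v> = <v, u>
print(np.isclose(np.dot(u, v), np.dot(v, u)))            # True
# Linearity: <a u + b v, w> = a <u, w> + b <v, w>
print(np.isclose(np.dot(a * u + b * v, w),
                 a * np.dot(u, w) + b * np.dot(v, w)))   # True
# Positive definiteness: <v, v> >= 0
print(np.dot(v, v) >= 0)                                 # True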
Linear Transformations
A map $T: V \to W$ between vector spaces is a linear transformation if, for all vectors u, v and every scalar c,
$T(u + v) = T(u) + T(v)$
$T(cu) = cT(u)$
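Every m × n matrix A defines a linear transformation T(x) = Ax, and the two defining properties can be checked numerically (an added sketch; the matrix and vectors are arbitrary illustrative choices):

import numpy as np

A = np.array([[1.0, 2.0],
              [-3.0, 2.0],
              [1.0, 0.0]])   # T: R^2 -> R^3, T(x) = A x
u = np.array([1.0, -1.0])
v = np.array([2.0, 0.5])
c = 4.0

# Additivity: T(u + v) = T(u) + T(v)
print(np.allclose(A @ (u + v), A @ u + A @ v))  # True
# Homogeneity: T(c u) = c T(u)
print(np.allclose(A @ (c * u), c * (A @ u)))    # True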
Matrices
A matrix is a rectangular array of numbers, for example
$\begin{pmatrix} 1 & 2 \\ -3 & 2 \\ 1 & 0 \end{pmatrix}$ or $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ or $\begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix}$ or $\begin{bmatrix} 3 & 4 & 0 \end{bmatrix}$
1. A matrix is enclosed by [ ], ( ) or ‖ ‖.
2. Compact form: the above matrix is represented by $[a_{ij}]_{m \times n}$ or $A = [a_{ij}]$.
3. Element of a Matrix: The numbers $a_{11}, a_{12}, \dots$ etc. in the above matrix are known as the elements of the matrix, generally represented as $a_{ij}$, which denotes the element in the $i$-th row and $j$-th column.
4. Order of a Matrix: If the above matrix has m rows and n columns, then A is of order $m \times n$.
Types of Matrices
1. Row Matrix: A matrix having only one row and any number of columns is called a
row matrix.
$\begin{bmatrix} 3 & 4 & 0 \end{bmatrix}$
2. Column Matrix: A matrix having only one column and any number of rows is called a column matrix.
$\begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix}$
3. Rectangular Matrix: A matrix in which the number of rows is not equal to the number of columns is called a rectangular matrix.
$\begin{bmatrix} 1 & 2 & 0 \\ 3 & 1 & 0 \end{bmatrix}$
4. Horizontal Matrix: A matrix in which the number of rows is less than the number of columns is called a horizontal matrix.
5. Vertical Matrix: A matrix in which the number of rows is greater than the number of columns is called a vertical matrix.
6. Null or Zero Matrix: A matrix of any order, all of whose elements are zero, is called a null or zero matrix, i.e., $a_{ij} = 0 \ \forall\, i, j$.
$\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$
7. Square Matrix: A matrix of order $m \times n$ such that $m = n$ is called a square matrix.
$\begin{bmatrix} 1 & 0 \\ 5 & 1 \end{bmatrix}$ or $\begin{bmatrix} 1 & 0 & 3 \\ 5 & 3 & 2 \\ 1 & -5 & 3 \end{bmatrix}$
8. Diagonal Matrix: A square matrix $A = [a_{ij}]_{n \times n}$ is called a diagonal matrix if all the elements except those on the leading diagonal are zero, i.e., $a_{ij} = 0$ for $i \neq j$. It can be represented as
$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
9. Scalar Matrix: A square matrix in which every non-diagonal element is zero and all diagonal elements are equal is called a scalar matrix, i.e., in a scalar matrix $a_{ij} = 0$ for $i \neq j$ and $a_{ij} = k$ for $i = j$.
$\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix}$
10. Unit Matrix or Identity Matrix: If all the elements on the principal diagonal of a diagonal matrix are 1, it is called a unit matrix. A unit matrix of order n is denoted by $I_n$. Thus, a square matrix $A = [a_{ij}]_{n \times n}$ is an identity matrix if
$a_{ij} = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases}$
$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
11. Triangular Matrix: A square matrix is said to be a triangular matrix if the elements
above or below the principal diagonal are zero. There are two types:
i. Upper Triangular Matrix: all entries below the main diagonal are zero, e.g.
$\begin{bmatrix} 1 & 3 & -4 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{bmatrix}$
ii. Lower Triangular Matrix: all entries above the main diagonal are zero, e.g.
$\begin{bmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 2 & 3 & 1 \end{bmatrix}$
12. Transpose Matrix: The transpose of a matrix is obtained by swapping entries across the main diagonal, so that rows become columns. It is denoted by $A^T$ (or $A'$). For example,
$\begin{bmatrix} 6 & 4 & 24 \\ 1 & -9 & 8 \end{bmatrix}^T = \begin{bmatrix} 6 & 1 \\ 4 & -9 \\ 24 & 8 \end{bmatrix}$
13. Symmetric Matrix: In a symmetric matrix, entries that face each other across the main diagonal are equal. That is, a square matrix $A = [a_{ij}]$ is called a symmetric matrix if $a_{ij} = a_{ji}$ for all $i, j$:
$\begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 5 \\ 3 & 5 & 2 \end{bmatrix}$
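As a small numerical aside (added here, using the two example matrices just shown):

import numpy as np

B = np.array([[6, 4, 24],
              [1, -9, 8]])
S = np.array([[1, 2, 3],
              [2, 4, 5],
              [3, 5, 2]])

print(B.T)                     # transpose: rows become columns
print(np.array_equal(S, S.T))  # True: S equals its own transpose, i.e. symmetric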
Adjoint of a Matrix
Let $A = [a_{ij}]$ be a square matrix of order n. The adjoint of A is the transpose of the cofactor matrix of A. It is denoted by adj A. An adjoint matrix is also called an adjugate matrix.
Example: Find the adjoint of the matrix $A = \begin{bmatrix} 3 & 1 & -1 \\ 2 & -2 & 0 \\ 1 & 2 & -1 \end{bmatrix}$.
Solution: To find the adjoint of a matrix, first find the cofactor matrix of the given matrix.
Then find the transpose of the cofactor matrix.
6
Cofactor of 2= A 21=− |12 −1
−1|
=−1
The cofactor matrix of A is $[A_{ij}] = \begin{bmatrix} 2 & 2 & 6 \\ -1 & -2 & -5 \\ -2 & -2 & -8 \end{bmatrix}$

$\operatorname{adj} A = [A_{ij}]^T = \begin{bmatrix} 2 & -1 & -2 \\ 2 & -2 & -2 \\ 6 & -5 & -8 \end{bmatrix}$
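Since the adjoint is built purely from cofactors, it is easy to sketch in code. The following Python/NumPy helper (the name adjoint is our own choice; NumPy has no built-in adjugate routine) computes the cofactor matrix from 2×2 minors and transposes it:

import numpy as np

def adjoint(A):
    # Adjoint (adjugate): transpose of the cofactor matrix of a square A.
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[3, 1, -1],
              [2, -2, 0],
              [1, 2, -1]], dtype=float)
print(np.round(adjoint(A)))   # [[ 2 -1 -2] [ 2 -2 -2] [ 6 -5 -8]]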
Inverse of Matrix
If a square matrix A is non-singular (i.e., $\det A \neq 0$), then there exists an $n \times n$ matrix $A^{-1}$, called the inverse of A, such that
$A A^{-1} = A^{-1} A = I$

Example: Find the inverse of the matrix $A = \begin{bmatrix} 3 & 1 & -1 \\ 2 & -2 & 0 \\ 1 & 2 & -1 \end{bmatrix}$.
The procedure is:
Step 1: Find the determinant of the matrix.
Step 2: Find the cofactor matrix of the given matrix.
Step 3: Find the transpose of the cofactor matrix (adjoint of the matrix).
Step 4: Multiply by 1/determinant (inverse of the matrix).

Step 1: Determinant of the matrix:
$\det A = 3\begin{vmatrix} -2 & 0 \\ 2 & -1 \end{vmatrix} - 1\begin{vmatrix} 2 & 0 \\ 1 & -1 \end{vmatrix} + (-1)\begin{vmatrix} 2 & -2 \\ 1 & 2 \end{vmatrix}$
$= 3(2 - 0) - 1(-2 - 0) - 1(4 + 2)$
$= 6 + 2 - 6 = 2$

Step 2: The cofactor matrix of A is $[A_{ij}] = \begin{bmatrix} 2 & 2 & 6 \\ -1 & -2 & -5 \\ -2 & -2 & -8 \end{bmatrix}$
Step 3: Now find the transpose of $[A_{ij}]$:
$\operatorname{adj} A = [A_{ij}]^T = \begin{bmatrix} 2 & -1 & -2 \\ 2 & -2 & -2 \\ 6 & -5 & -8 \end{bmatrix}$

Step 4: Multiply by 1/determinant:
$A^{-1} = \frac{\operatorname{adj} A}{|A|} = \frac{1}{2}\begin{bmatrix} 2 & -1 & -2 \\ 2 & -2 & -2 \\ 6 & -5 & -8 \end{bmatrix} = \begin{bmatrix} 1 & -\frac{1}{2} & -1 \\ 1 & -1 & -1 \\ 3 & -\frac{5}{2} & -4 \end{bmatrix}$
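As a quick cross-check (an added sketch, not part of the original solution), the hand-computed inverse can be compared with NumPy's built-in routine:

import numpy as np

A = np.array([[3, 1, -1],
              [2, -2, 0],
              [1, 2, -1]], dtype=float)
# Inverse computed above by the adjoint method
A_inv = np.array([[1, -0.5, -1],
                  [1, -1.0, -1],
                  [3, -2.5, -4]])

print(np.allclose(A_inv, np.linalg.inv(A)))  # True
print(np.allclose(A @ A_inv, np.eye(3)))     # True: A A^{-1} = I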
Inverse of a Symmetric Matrix
When the inverse of a symmetric matrix is found by elementary transformations, the transformations must be applied so that the property of being symmetric is preserved. This requires that the transformations occur in pairs: a row transformation must be followed immediately by the same column transformation.

Example: Find by elementary transformations the inverse of the matrix $A = \begin{bmatrix} 3 & -3 & 4 \\ 2 & -3 & 4 \\ 0 & -1 & 1 \end{bmatrix}$.

Solution: Write the given matrix as A = I A:
$\begin{bmatrix} 3 & -3 & 4 \\ 2 & -3 & 4 \\ 0 & -1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} A$

$\Longrightarrow \begin{bmatrix} 1 & -1 & \frac{4}{3} \\ 2 & -3 & 4 \\ 0 & -1 & 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{3} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} A; \quad R_1 \to \frac{R_1}{3}$

$\Longrightarrow \begin{bmatrix} 1 & -1 & \frac{4}{3} \\ 0 & -1 & \frac{4}{3} \\ 0 & -1 & 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{3} & 0 & 0 \\ -\frac{2}{3} & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} A; \quad R_2 \to R_2 - 2R_1$

$\Longrightarrow \begin{bmatrix} 1 & -1 & \frac{4}{3} \\ 0 & 1 & -\frac{4}{3} \\ 0 & -1 & 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{3} & 0 & 0 \\ \frac{2}{3} & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} A; \quad R_2 \to -R_2$

$\Longrightarrow \begin{bmatrix} 1 & -1 & \frac{4}{3} \\ 0 & 1 & -\frac{4}{3} \\ 0 & 0 & -\frac{1}{3} \end{bmatrix} = \begin{bmatrix} \frac{1}{3} & 0 & 0 \\ \frac{2}{3} & -1 & 0 \\ \frac{2}{3} & -1 & 1 \end{bmatrix} A; \quad R_3 \to R_3 + R_2$

$\Longrightarrow \begin{bmatrix} 1 & -1 & \frac{4}{3} \\ 0 & 1 & -\frac{4}{3} \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{3} & 0 & 0 \\ \frac{2}{3} & -1 & 0 \\ -2 & 3 & -3 \end{bmatrix} A; \quad R_3 \to -3R_3$

$\Longrightarrow \begin{bmatrix} 1 & -1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 3 & -4 & 4 \\ -2 & 3 & -4 \\ -2 & 3 & -3 \end{bmatrix} A; \quad R_1 \to R_1 - \frac{4}{3}R_3,\ R_2 \to R_2 + \frac{4}{3}R_3$

$\Longrightarrow \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & -1 & 0 \\ -2 & 3 & -4 \\ -2 & 3 & -3 \end{bmatrix} A; \quad R_1 \to R_1 + R_2$
Hence,
$A^{-1} = \begin{bmatrix} 1 & -1 & 0 \\ -2 & 3 & -4 \\ -2 & 3 & -3 \end{bmatrix}$
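A numerical cross-check of this result (an added sketch, not part of the worked example):

import numpy as np

A = np.array([[3, -3, 4],
              [2, -3, 4],
              [0, -1, 1]], dtype=float)
A_inv = np.array([[1, -1, 0],
                  [-2, 3, -4],
                  [-2, 3, -3]], dtype=float)

print(np.allclose(np.linalg.inv(A), A_inv))  # True
print(np.allclose(A @ A_inv, np.eye(3)))     # True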
Eigenvalues and Eigenvectors
Let
$\begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\ \cdots & \cdots & \cdots & \cdots & \cdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_n \end{bmatrix} \Longrightarrow AX = Y$ ….. (1)

where A is a square matrix and X and Y are column vectors. Here the column vector X is transformed into the column vector Y by means of the square matrix A.
Suppose the linear transformation Y = AX transforms X into a scalar multiple of itself, i.e., λX. Then
$AX = Y = \lambda X$
$AX - \lambda I X = 0$
$(A - \lambda I)X = 0$ …... (2)

The unknown scalar λ is known as an eigenvalue of the matrix A and the corresponding non-zero vector X as an eigenvector. Eigenvalues are also called characteristic values, proper values, or latent values.
Example: Find the eigenvalues of the matrix $A = \begin{bmatrix} 6 & -2 & 2 \\ -2 & 3 & -1 \\ 2 & -1 & 3 \end{bmatrix}$.

Solution: The characteristic equation is
$\begin{vmatrix} 6-\lambda & -2 & 2 \\ -2 & 3-\lambda & -1 \\ 2 & -1 & 3-\lambda \end{vmatrix} = 0$
$\Longrightarrow (6-\lambda)(9 - 6\lambda + \lambda^2 - 1) + 2(-6 + 2\lambda + 2) + 2(2 - 6 + 2\lambda) = 0$
$\Longrightarrow -\lambda^3 + 12\lambda^2 - 36\lambda + 32 = 0$
$\Longrightarrow (\lambda - 2)(\lambda^2 - 10\lambda + 16) = 0 \Longrightarrow (\lambda - 2)(\lambda - 2)(\lambda - 8) = 0$
$\Longrightarrow \lambda = 2, 2, 8$ are the characteristic roots or eigenvalues.
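These roots can be confirmed numerically; a minimal sketch with NumPy:

import numpy as np

A = np.array([[6, -2, 2],
              [-2, 3, -1],
              [2, -1, 3]], dtype=float)

print(np.sort(np.linalg.eigvals(A)))  # approximately [2. 2. 8.]
# Coefficients of the characteristic polynomial, for comparison:
print(np.round(np.poly(A)))           # [1. -12. 36. -32.], i.e. l^3 - 12 l^2 + 36 l - 32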
Cayley-Hamilton Theorem
Statement: Every square matrix satisfies its own characteristic equation; i.e., if the characteristic equation of an $n \times n$ matrix A is
$\lambda^n + a_1 \lambda^{n-1} + a_2 \lambda^{n-2} + \cdots + a_n = 0$,
then it is satisfied by $\lambda = A$:
$A^n + a_1 A^{n-1} + a_2 A^{n-2} + \cdots + a_n I = 0$

Proof. Since the elements of the matrix $A - \lambda I$ are at most of the first degree in λ, the elements of $\operatorname{adj}(A - \lambda I)$ are at most of degree $(n-1)$ in λ. Thus, $\operatorname{adj}(A - \lambda I)$ may be written as a matrix polynomial in λ, given by
$\operatorname{adj}(A - \lambda I) = B_0 \lambda^{n-1} + B_1 \lambda^{n-2} + \cdots + B_{n-1}$

We know that
$(A - \lambda I)\operatorname{adj}(A - \lambda I) = |A - \lambda I|\, I = (-1)^n (\lambda^n + a_1 \lambda^{n-1} + \cdots + a_n)\, I$

Comparing coefficients of like powers of λ on both sides:
$-I B_0 = (-1)^n I$
$A B_0 - I B_1 = (-1)^n a_1 I$
$A B_1 - I B_2 = (-1)^n a_2 I$
$\cdots \cdots \cdots$
$A B_{n-1} = (-1)^n a_n I$

Premultiplying these equations by $A^n, A^{n-1}, \dots, A, I$ respectively and adding, the left-hand side telescopes to zero, giving $0 = (-1)^n (A^n + a_1 A^{n-1} + \cdots + a_n I)$.
Thus, $A^n + a_1 A^{n-1} + \cdots + a_n I = 0$.
For example, let
$\lambda^3 - 2\lambda^2 + 3\lambda - 4 = 0$ …… (1)
be the characteristic equation of a matrix A. According to the Cayley-Hamilton theorem, (1) is satisfied by A:
$A^3 - 2A^2 + 3A - 4I = 0$ …… (2)

We can find $A^{-1}$ from (2). On multiplying (2) by $A^{-1}$, we get
$A^2 - 2A + 3I - 4A^{-1} = 0$
$A^2 - 2A + 3I = 4A^{-1}$
$A^{-1} = \frac{1}{4}(A^2 - 2A + 3I)$
Example: Find the characteristic equation of the matrix
$A = \begin{bmatrix} 2 & -1 & 1 \\ -1 & 2 & -1 \\ 1 & -1 & 2 \end{bmatrix}$
and verify that it is satisfied by A, and hence obtain $A^{-1}$. Express $A^6 - 6A^5 + 9A^4 - 2A^3 - 12A^2 + 23A - 9I$ as a linear polynomial in A.

Solution: The characteristic equation is
$\begin{vmatrix} 2-\lambda & -1 & 1 \\ -1 & 2-\lambda & -1 \\ 1 & -1 & 2-\lambda \end{vmatrix} = 0$
$(2-\lambda)[(2-\lambda)^2 - 1] + 1[-2 + \lambda + 1] + 1[-2 + \lambda + 1] = 0$
or $(2-\lambda)^3 - (2-\lambda) + \lambda - 1 + \lambda - 1 = 0$
or $(2-\lambda)^3 - 2 + \lambda + \lambda - 1 + \lambda - 1 = 0$
or $(2-\lambda)^3 + 3\lambda - 4 = 0$
or $8 - 12\lambda + 6\lambda^2 - \lambda^3 + 3\lambda - 4 = 0$
or $-\lambda^3 + 6\lambda^2 - 9\lambda + 4 = 0$
or $\lambda^3 - 6\lambda^2 + 9\lambda - 4 = 0$
By the Cayley-Hamilton theorem, $A^3 - 6A^2 + 9A - 4I = 0$.
Verification:
$A^2 = \begin{bmatrix} 2 & -1 & 1 \\ -1 & 2 & -1 \\ 1 & -1 & 2 \end{bmatrix}\begin{bmatrix} 2 & -1 & 1 \\ -1 & 2 & -1 \\ 1 & -1 & 2 \end{bmatrix} = \begin{bmatrix} 4+1+1 & -2-2-1 & 2+1+2 \\ -2-2-1 & 1+4+1 & -1-2-2 \\ 2+1+2 & -1-2-2 & 1+1+4 \end{bmatrix} = \begin{bmatrix} 6 & -5 & 5 \\ -5 & 6 & -5 \\ 5 & -5 & 6 \end{bmatrix}$

$A^3 = A^2 A = \begin{bmatrix} 6 & -5 & 5 \\ -5 & 6 & -5 \\ 5 & -5 & 6 \end{bmatrix}\begin{bmatrix} 2 & -1 & 1 \\ -1 & 2 & -1 \\ 1 & -1 & 2 \end{bmatrix} = \begin{bmatrix} 12+5+5 & -6-10-5 & 6+5+10 \\ -10-6-5 & 5+12+5 & -5-6-10 \\ 10+5+6 & -5-10-6 & 5+5+12 \end{bmatrix} = \begin{bmatrix} 22 & -21 & 21 \\ -21 & 22 & -21 \\ 21 & -21 & 22 \end{bmatrix}$

$A^3 - 6A^2 + 9A - 4I = \begin{bmatrix} 22 & -21 & 21 \\ -21 & 22 & -21 \\ 21 & -21 & 22 \end{bmatrix} - 6\begin{bmatrix} 6 & -5 & 5 \\ -5 & 6 & -5 \\ 5 & -5 & 6 \end{bmatrix} + 9\begin{bmatrix} 2 & -1 & 1 \\ -1 & 2 & -1 \\ 1 & -1 & 2 \end{bmatrix} - 4\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} = 0$
Inverse of the matrix A: multiplying
$A^3 - 6A^2 + 9A - 4I = 0$
by $A^{-1}$ gives
$A^2 - 6A + 9I - 4A^{-1} = 0$
or $4A^{-1} = A^2 - 6A + 9I = \begin{bmatrix} 6 & -5 & 5 \\ -5 & 6 & -5 \\ 5 & -5 & 6 \end{bmatrix} - 6\begin{bmatrix} 2 & -1 & 1 \\ -1 & 2 & -1 \\ 1 & -1 & 2 \end{bmatrix} + 9\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
$= \begin{bmatrix} 6-12+9 & -5+6+0 & 5-6+0 \\ -5+6+0 & 6-12+9 & -5+6+0 \\ 5-6+0 & -5+6+0 & 6-12+9 \end{bmatrix} \Longrightarrow A^{-1} = \frac{1}{4}\begin{bmatrix} 3 & 1 & -1 \\ 1 & 3 & 1 \\ -1 & 1 & 3 \end{bmatrix}$
Finally, dividing $\lambda^6 - 6\lambda^5 + 9\lambda^4 - 2\lambda^3 - 12\lambda^2 + 23\lambda - 9$ by the characteristic polynomial $\lambda^3 - 6\lambda^2 + 9\lambda - 4$ gives the quotient $\lambda^3 + 2$ and the remainder $5\lambda - 1$. Since $A^3 - 6A^2 + 9A - 4I = 0$,
$A^6 - 6A^5 + 9A^4 - 2A^3 - 12A^2 + 23A - 9I = (A^3 + 2I)(A^3 - 6A^2 + 9A - 4I) + 5A - I = 5A - I$,
a linear polynomial in A.
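All three results of this example can be verified numerically; a minimal NumPy sketch:

import numpy as np
from numpy.linalg import matrix_power as mpow

A = np.array([[2, -1, 1],
              [-1, 2, -1],
              [1, -1, 2]], dtype=float)
I = np.eye(3)

# Cayley-Hamilton: A^3 - 6A^2 + 9A - 4I = 0
print(np.allclose(mpow(A, 3) - 6 * mpow(A, 2) + 9 * A - 4 * I, 0))      # True

# A^{-1} = (A^2 - 6A + 9I) / 4
print(np.allclose((mpow(A, 2) - 6 * A + 9 * I) / 4, np.linalg.inv(A)))  # True

# The degree-6 polynomial in A reduces to the linear polynomial 5A - I
P = (mpow(A, 6) - 6 * mpow(A, 5) + 9 * mpow(A, 4) - 2 * mpow(A, 3)
     - 12 * mpow(A, 2) + 23 * A - 9 * I)
print(np.allclose(P, 5 * A - I))                                        # True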
Diagonalization of Matrix
If a square matrix A of order n has n linearly independent eigenvectors, then a matrix P can be found such that $P^{-1}AP$ is a diagonal matrix.
Proof. We shall prove the theorem for a matrix of order 3; the proof is easily extended to matrices of higher order.
Let $A = \begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{bmatrix}$
and let $\lambda_1, \lambda_2, \lambda_3$ be its eigenvalues, with corresponding eigenvectors
$X_1 = \begin{bmatrix} x_1 \\ y_1 \\ z_1 \end{bmatrix}, \quad X_2 = \begin{bmatrix} x_2 \\ y_2 \\ z_2 \end{bmatrix}, \quad X_3 = \begin{bmatrix} x_3 \\ y_3 \\ z_3 \end{bmatrix}$
Since $X_1$ is an eigenvector corresponding to $\lambda_1$, $(A - \lambda_1 I)X_1 = 0$, i.e.
$\left.\begin{aligned} (a_1 - \lambda_1)x_1 + b_1 y_1 + c_1 z_1 &= 0 \\ a_2 x_1 + (b_2 - \lambda_1)y_1 + c_2 z_1 &= 0 \\ a_3 x_1 + b_3 y_1 + (c_3 - \lambda_1)z_1 &= 0 \end{aligned}\right\}$ ……… (1)

Equivalently, $AX_1 = \lambda_1 X_1$, so we have
$\left.\begin{aligned} a_1 x_1 + b_1 y_1 + c_1 z_1 &= \lambda_1 x_1 \\ a_2 x_1 + b_2 y_1 + c_2 z_1 &= \lambda_1 y_1 \\ a_3 x_1 + b_3 y_1 + c_3 z_1 &= \lambda_1 z_1 \end{aligned}\right\}$ ……… (2)

Similarly, $AX_2 = \lambda_2 X_2$:
$\left.\begin{aligned} a_1 x_2 + b_1 y_2 + c_1 z_2 &= \lambda_2 x_2 \\ a_2 x_2 + b_2 y_2 + c_2 z_2 &= \lambda_2 y_2 \\ a_3 x_2 + b_3 y_2 + c_3 z_2 &= \lambda_2 z_2 \end{aligned}\right\}$ ……… (3)

and $AX_3 = \lambda_3 X_3$:
$\left.\begin{aligned} a_1 x_3 + b_1 y_3 + c_1 z_3 &= \lambda_3 x_3 \\ a_2 x_3 + b_2 y_3 + c_2 z_3 &= \lambda_3 y_3 \\ a_3 x_3 + b_3 y_3 + c_3 z_3 &= \lambda_3 z_3 \end{aligned}\right\}$ ……… (4)
We consider the matrix $P = \begin{bmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ z_1 & z_2 & z_3 \end{bmatrix}$ whose columns are the eigenvectors. Then

$AP = \begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{bmatrix}\begin{bmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ z_1 & z_2 & z_3 \end{bmatrix} = \begin{bmatrix} a_1 x_1 + b_1 y_1 + c_1 z_1 & a_1 x_2 + b_1 y_2 + c_1 z_2 & a_1 x_3 + b_1 y_3 + c_1 z_3 \\ a_2 x_1 + b_2 y_1 + c_2 z_1 & a_2 x_2 + b_2 y_2 + c_2 z_2 & a_2 x_3 + b_2 y_3 + c_2 z_3 \\ a_3 x_1 + b_3 y_1 + c_3 z_1 & a_3 x_2 + b_3 y_2 + c_3 z_2 & a_3 x_3 + b_3 y_3 + c_3 z_3 \end{bmatrix}$

$= \begin{bmatrix} \lambda_1 x_1 & \lambda_2 x_2 & \lambda_3 x_3 \\ \lambda_1 y_1 & \lambda_2 y_2 & \lambda_3 y_3 \\ \lambda_1 z_1 & \lambda_2 z_2 & \lambda_3 z_3 \end{bmatrix}$ [using eqs. (2), (3) and (4)]

$= \begin{bmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ z_1 & z_2 & z_3 \end{bmatrix}\begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix} = PD$
where D is the diagonal matrix $\begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix}$. Hence
$AP = PD \Longrightarrow P^{-1}AP = D$
Notes
1. The square matrix P which diagonalises A is found by taking the eigenvectors of A as its columns, and the resulting diagonal matrix has the eigenvalues of A as its diagonal elements.
2. The transformation of a matrix A to $P^{-1}AP$ is known as a similarity transformation.
3. The reduction of A to a diagonal matrix is, obviously, a particular case of a similarity transformation.
4. The matrix P which diagonalises A is called the modal matrix of A, and the resulting diagonal matrix D is known as the spectral matrix of A.
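A numerical illustration of the theorem, added here as a sketch and reusing the matrix from the eigenvalue example above (np.linalg.eig returns the eigenvalues together with a matrix whose columns are eigenvectors, i.e. a modal matrix P):

import numpy as np

A = np.array([[6, -2, 2],
              [-2, 3, -1],
              [2, -1, 3]], dtype=float)

eigvals, P = np.linalg.eig(A)          # columns of P are eigenvectors of A
D = np.linalg.inv(P) @ A @ P           # similarity transformation P^{-1} A P

print(np.round(D, 10))                 # diagonal, with 2, 2, 8 on the diagonal
print(np.allclose(D, np.diag(eigvals)))  # True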