Notes 1 MSC Physics I Mathematical Physics

The document defines key concepts related to vector spaces and matrices, including vector space properties, linear independence and dependence, bases and dimension, inner product spaces, linear transformations, types of matrices such as row, column, diagonal and identity matrices, and the adjoint of a matrix.

Unit I: Vector Space and Matrices

Vector Space

A (real) vector space V is a non-empty set equipped with an addition and a scalar
multiplication operation such that for all α, β ∈ R and all u, v, w ∈ V (a numerical check of a few of these axioms is sketched after the list):

1. u + v ∈ V (closure under addition)
2. u + v = v + u (commutative law for addition)
3. u + (v + w) = (u + v) + w (associative law for addition)
4. there is a single member 0 of V, called the zero vector, such that for all v ∈ V, v + 0 = v
5. for every v ∈ V there is an element w ∈ V, written −v, called the negative of v, such that v + w = 0
6. αv ∈ V (closure under scalar multiplication)
7. α(u + v) = αu + αv (distributive law)
8. (α + β)v = αv + βv (distributive law)
9. α(βv) = (αβ)v (associative law for scalar multiplication)
10. 1v = v
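As a quick numerical illustration (a sketch added for this section, not part of the formal definition), the following NumPy snippet spot-checks a few of these axioms for random vectors in R³:

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 3))  # three random vectors in R^3
alpha, beta = 2.0, -1.5

# Axiom 2: commutative law for addition
assert np.allclose(u + v, v + u)
# Axiom 3: associative law for addition
assert np.allclose(u + (v + w), (u + v) + w)
# Axiom 8: (alpha + beta)v = alpha v + beta v
assert np.allclose((alpha + beta) * v, alpha * v + beta * v)
# Axiom 9: alpha(beta v) = (alpha beta)v
assert np.allclose(alpha * (beta * v), (alpha * beta) * v)
```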

Linear Independence

Let V be a vector space and $v_1, v_2, \ldots, v_k \in V$. Then $v_1, v_2, \ldots, v_k$ are linearly independent (or form a linearly independent set) if and only if the vector equation

$$\alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_k v_k = 0$$

has the unique solution

$$\alpha_1 = \alpha_2 = \cdots = \alpha_k = 0$$
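In coordinates, this condition is easy to test numerically: the vectors are linearly independent exactly when the matrix having them as columns has full column rank. A minimal sketch (the helper name linearly_independent is ours, for illustration):

```python
import numpy as np

def linearly_independent(*vectors):
    """True if the vectors are linearly independent: the matrix with the
    vectors as columns has full column rank."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == M.shape[1]

print(linearly_independent([1, 0, 0], [0, 1, 0], [0, 0, 1]))  # True
print(linearly_independent([1, 2, 3], [2, 4, 6]))             # False: v2 = 2 v1
```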

Linear Dependence

Let V be a vector space and $v_1, v_2, \ldots, v_k \in V$. Then $v_1, v_2, \ldots, v_k$ are linearly dependent (or form a linearly dependent set) if and only if there are real numbers $\alpha_1, \alpha_2, \ldots, \alpha_k$, not all zero, such that

$$\alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_k v_k = 0$$

Bases

Let V be a vector space. Then the subset $B = \{v_1, v_2, \ldots, v_n\}$ of V is said to be a basis for V if:

1. B is a linearly independent set of vectors, and
2. B spans V; that is, V = Lin(B)

Dimension

Let V be a vector space with a basis

$$B = \{v_1, v_2, \ldots, v_n\}$$

of n vectors. Then any set of n + 1 vectors in V is linearly dependent. Consequently, every basis of V contains the same number n of vectors, and this number is called the dimension of V.

Inner Product Space

An inner product space is a vector space V together with a function ⟨·, ·⟩, called an inner product, which associates each pair of vectors u, v with a scalar ⟨u, v⟩ and which satisfies:

1. ⟨u, u⟩ ≥ 0, with equality if and only if u = 0
2. ⟨u, v⟩ = ⟨v, u⟩, and
3. ⟨αu + v, w⟩ = α⟨u, w⟩ + ⟨v, w⟩

Combining (2) and (3), we also have ⟨u, αv + w⟩ = α⟨u, v⟩ + ⟨u, w⟩. Condition (1) is called positive definiteness, condition (2) symmetry, and condition (3), together with the note above, bilinearity. Thus, an inner product is an example of a positive definite, symmetric, bilinear function (or form) on the vector space V.
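The standard dot product on Rⁿ is the prototypical example. The sketch below (illustrative only) verifies the three conditions numerically for random vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
u, v, w = rng.standard_normal((3, 4))  # random vectors in R^4
alpha = 0.7

inner = np.dot  # the standard dot product on R^n

assert inner(u, u) >= 0                               # (1) positive definite
assert np.isclose(inner(u, v), inner(v, u))           # (2) symmetric
assert np.isclose(inner(alpha * u + v, w),
                  alpha * inner(u, w) + inner(v, w))  # (3) linear in the first slot
```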

Linear Transformations

A linear transformation is a transformation $T : \mathbb{R}^n \to \mathbb{R}^m$ satisfying

$$T(u + v) = T(u) + T(v)$$
$$T(cu) = cT(u)$$

for all vectors u, v in $\mathbb{R}^n$ and all scalars c.
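Every m × n matrix A determines such a transformation via T(x) = Ax, which is the standard way to realise linear transformations numerically. A small sketch (the matrix A here is an arbitrary illustrative choice):

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, -1.0]])  # an arbitrary 2 x 3 matrix: T maps R^3 to R^2
T = lambda x: A @ x

rng = np.random.default_rng(2)
u, v = rng.standard_normal((2, 3))
c = 3.0

assert np.allclose(T(u + v), T(u) + T(v))  # additivity
assert np.allclose(T(c * u), c * T(u))     # homogeneity
```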

Matrices

A matrix is a rectangular arrangement of numbers (real or complex), which may be represented as

$$\begin{pmatrix} 1 & 2 \\ -3 & 2 \\ 1 & 0 \end{pmatrix} \quad \text{or} \quad \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \quad \text{or} \quad \begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix} \quad \text{or} \quad \begin{bmatrix} 3 & 4 & 0 \end{bmatrix}$$

1. A matrix is enclosed by [ ] or ( ) or | |.
2. In compact form the above matrix is represented by $[a_{ij}]_{m \times n}$ or $A = [a_{ij}]$.
3. Elements of a Matrix: The numbers $a_{11}, a_{12}, \ldots$, etc. in the above matrix are known as the elements of the matrix, generally represented as $a_{ij}$, which denotes the element in the $i$-th row and $j$-th column.
4. Order of a Matrix: If the above matrix has m rows and n columns, then A is of order $m \times n$.

Types of Matrices

1. Row Matrix: A matrix having only one row and any number of columns is called a row matrix.

$$\begin{bmatrix} 3 & 4 & 0 \end{bmatrix}$$

2. Column Matrix: A matrix having only one column and any number of rows is called a column matrix.

$$\begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix}$$

3. Rectangular Matrix: A matrix of order $m \times n$, with $m \neq n$, is called a rectangular matrix.

$$\begin{bmatrix} 1 & 0 & 3 \\ 2 & 1 & 0 \end{bmatrix}$$
4. Horizontal Matrix: A matrix in which the number of rows is less than the number of columns is called a horizontal matrix.
5. Vertical Matrix: A matrix in which the number of rows is greater than the number of columns is called a vertical matrix.
6. Null or Zero Matrix: A matrix of any order, all of whose elements are zero, is called a null or zero matrix, i.e., $a_{ij} = 0$ for all i, j.

$$\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

7. Square Matrix: A matrix of order $m \times n$, with $m = n$, is called a square matrix.

$$\begin{bmatrix} 1 & 0 \\ 5 & 1 \end{bmatrix} \quad \text{or} \quad \begin{bmatrix} 1 & 0 & 3 \\ 5 & 3 & 2 \\ 1 & -5 & 3 \end{bmatrix}$$

8. Diagonal Matrix: A square matrix $A = [a_{ij}]_{n \times n}$ is called a diagonal matrix if all the elements except those on the leading diagonal are zero, i.e., $a_{ij} = 0$ for $i \neq j$. It can be represented as

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

9. Scalar Matrix: A square matrix in which every non-diagonal element is zero and all diagonal elements are equal is called a scalar matrix, i.e., $a_{ij} = 0$ for $i \neq j$ and $a_{ij} = k$ for $i = j$.

$$\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix}$$

10. Unit Matrix or Identity Matrix: If all the elements on the principal diagonal of a diagonal matrix are 1, then it is called a unit matrix. A unit matrix of order n is denoted by $I_n$. Thus, a square matrix $A = [a_{ij}]_{n \times n}$ is an identity matrix if

$$a_{ij} = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases}$$

$$I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

11. Triangular Matrix: A square matrix is said to be a triangular matrix if all the elements above or below the principal diagonal are zero. There are two types:

i. Upper Triangular Matrix: all entries below the main diagonal are zero:

$$\begin{bmatrix} 1 & 3 & -4 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{bmatrix}$$

ii. Lower Triangular Matrix: all entries above the main diagonal are zero:

$$\begin{bmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 2 & 3 & 1 \end{bmatrix}$$

12. Transpose Matrix: The transpose is obtained by swapping entries across the main diagonal (rows become columns), like this:

$$\begin{bmatrix} 6 & 4 & 24 \\ 1 & -9 & 8 \end{bmatrix}^{T} = \begin{bmatrix} 6 & 1 \\ 4 & -9 \\ 24 & 8 \end{bmatrix} \quad \text{or} \quad \begin{bmatrix} 6 & 4 & 24 \\ 1 & -9 & 8 \end{bmatrix}' = \begin{bmatrix} 6 & 1 \\ 4 & -9 \\ 24 & 8 \end{bmatrix}$$

The main diagonal stays the same.

13. Symmetric Matrix: In a symmetric matrix, matching entries on either side of the main diagonal are equal. A square matrix $A = [a_{ij}]$ is called a symmetric matrix if $a_{ij} = a_{ji}$ for all i, j:

$$\begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 5 \\ 3 & 5 & 2 \end{bmatrix}$$

A is symmetric if A′ = A (where A′ is the transpose of A).

Adjoint of a Matrix

Let $A = [a_{ij}]$ be a square matrix of order n. The adjoint of A is the transpose of the cofactor matrix (the matrix of signed minors) of A. It is denoted by adj A. An adjoint matrix is also called an adjugate matrix.

Example: Find the adjoint of the matrix $A = \begin{bmatrix} 3 & 1 & -1 \\ 2 & -2 & 0 \\ 1 & 2 & -1 \end{bmatrix}$.

Solution: To find the adjoint of a matrix, first find the cofactor matrix of the given matrix.
Then find the transpose of the cofactor matrix.

Cofactor of 3: $A_{11} = \begin{vmatrix} -2 & 0 \\ 2 & -1 \end{vmatrix} = 2$

Cofactor of 1: $A_{12} = -\begin{vmatrix} 2 & 0 \\ 1 & -1 \end{vmatrix} = 2$

Cofactor of −1: $A_{13} = \begin{vmatrix} 2 & -2 \\ 1 & 2 \end{vmatrix} = 6$

Cofactor of 2: $A_{21} = -\begin{vmatrix} 1 & -1 \\ 2 & -1 \end{vmatrix} = -1$

Cofactor of −2: $A_{22} = \begin{vmatrix} 3 & -1 \\ 1 & -1 \end{vmatrix} = -2$

Cofactor of 0: $A_{23} = -\begin{vmatrix} 3 & 1 \\ 1 & 2 \end{vmatrix} = -5$

Cofactor of 1: $A_{31} = \begin{vmatrix} 1 & -1 \\ -2 & 0 \end{vmatrix} = -2$

Cofactor of 2: $A_{32} = -\begin{vmatrix} 3 & -1 \\ 2 & 0 \end{vmatrix} = -2$

Cofactor of −1: $A_{33} = \begin{vmatrix} 3 & 1 \\ 2 & -2 \end{vmatrix} = -8$

The cofactor matrix of A is

$$[A_{ij}] = \begin{bmatrix} 2 & 2 & 6 \\ -1 & -2 & -5 \\ -2 & -2 & -8 \end{bmatrix}$$

Now take the transpose of $[A_{ij}]$:

$$\operatorname{adj} A = \left[A_{ij}\right]^{T} = \begin{bmatrix} 2 & -1 & -2 \\ 2 & -2 & -2 \\ 6 & -5 & -8 \end{bmatrix}$$
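NumPy has no built-in adjugate routine, so here is a minimal sketch of the cofactor-transpose recipe above (the helper name adjugate is ours, for illustration):

```python
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix of A (the adjoint, or adjugate)."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # minor: delete row i and column j, then take the determinant
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[3.0, 1.0, -1.0], [2.0, -2.0, 0.0], [1.0, 2.0, -1.0]])
print(np.round(adjugate(A)))  # [[ 2. -1. -2.] [ 2. -2. -2.] [ 6. -5. -8.]]
```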

Inverse of Matrix

If a square matrix A of order n is non-singular (i.e., det(A) ≠ 0), then there exists an n × n matrix $A^{-1}$, called the inverse of A, such that

$$A A^{-1} = A^{-1} A = I$$

where I is the identity matrix.

Example: Find the inverse of the matrix $A = \begin{bmatrix} 3 & 1 & -1 \\ 2 & -2 & 0 \\ 1 & 2 & -1 \end{bmatrix}$.

Solution: To find the inverse of a matrix:

Step 1: Find the determinant of the given matrix.
Step 2: Find the cofactor matrix of the given matrix.
Step 3: Find the transpose of the cofactor matrix (the adjoint of the matrix).
Step 4: Multiply the adjoint by 1/determinant to obtain the inverse.

Step 1: The determinant of the matrix is

$$\det A = 3\begin{vmatrix} -2 & 0 \\ 2 & -1 \end{vmatrix} - 1\begin{vmatrix} 2 & 0 \\ 1 & -1 \end{vmatrix} + (-1)\begin{vmatrix} 2 & -2 \\ 1 & 2 \end{vmatrix} = 3(2-0) - 1(-2-0) - 1(4+2) = 6 + 2 - 6 = 2$$

Step 2: As computed in the adjoint example above, the cofactor matrix of A is

$$[A_{ij}] = \begin{bmatrix} 2 & 2 & 6 \\ -1 & -2 & -5 \\ -2 & -2 & -8 \end{bmatrix}$$

Step 3: Now take the transpose of $[A_{ij}]$:

$$\operatorname{adj} A = \left[A_{ij}\right]^{T} = \begin{bmatrix} 2 & -1 & -2 \\ 2 & -2 & -2 \\ 6 & -5 & -8 \end{bmatrix}$$

Step 4: Finally, multiply the adjoint by 1/|A| to obtain the inverse:

$$A^{-1} = \frac{\operatorname{adj} A}{|A|} = \frac{1}{2}\begin{bmatrix} 2 & -1 & -2 \\ 2 & -2 & -2 \\ 6 & -5 & -8 \end{bmatrix} = \begin{bmatrix} 1 & -\tfrac{1}{2} & -1 \\ 1 & -1 & -1 \\ 3 & -\tfrac{5}{2} & -4 \end{bmatrix}$$
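As a quick cross-check (an illustrative sketch, not part of the original notes), np.linalg.inv reproduces the same inverse:

```python
import numpy as np

A = np.array([[3.0, 1.0, -1.0], [2.0, -2.0, 0.0], [1.0, 2.0, -1.0]])
A_inv = np.linalg.inv(A)

print(A_inv)                              # [[ 1.  -0.5 -1. ] [ 1.  -1.  -1. ] [ 3.  -2.5 -4. ]]
assert np.allclose(A @ A_inv, np.eye(3))  # A A^{-1} = I
```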
Inverse of a Symmetric Matrix

For a symmetric matrix, the elementary transformations are to be performed so that the property of being symmetric is preserved. This requires that the transformations occur in pairs: a row transformation must be followed immediately by the same column transformation.

Example: Find the inverse of the following matrix employing elementary transformations:

$$\begin{bmatrix} 3 & -3 & 4 \\ 2 & -3 & 4 \\ 0 & -1 & 1 \end{bmatrix}$$

Solution: The given matrix is $A = \begin{bmatrix} 3 & -3 & 4 \\ 2 & -3 & 4 \\ 0 & -1 & 1 \end{bmatrix}$. Write A = IA:

$$\begin{bmatrix} 3 & -3 & 4 \\ 2 & -3 & 4 \\ 0 & -1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} A$$

$$\Rightarrow \begin{bmatrix} 1 & -1 & \tfrac{4}{3} \\ 2 & -3 & 4 \\ 0 & -1 & 1 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{3} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} A; \qquad R_1 \to \frac{R_1}{3}$$

$$\Rightarrow \begin{bmatrix} 1 & -1 & \tfrac{4}{3} \\ 0 & -1 & \tfrac{4}{3} \\ 0 & -1 & 1 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{3} & 0 & 0 \\ -\tfrac{2}{3} & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} A; \qquad R_2 \to R_2 - 2R_1$$

$$\Rightarrow \begin{bmatrix} 1 & -1 & \tfrac{4}{3} \\ 0 & 1 & -\tfrac{4}{3} \\ 0 & -1 & 1 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{3} & 0 & 0 \\ \tfrac{2}{3} & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} A; \qquad R_2 \to -R_2$$

$$\Rightarrow \begin{bmatrix} 1 & -1 & \tfrac{4}{3} \\ 0 & 1 & -\tfrac{4}{3} \\ 0 & 0 & -\tfrac{1}{3} \end{bmatrix} = \begin{bmatrix} \tfrac{1}{3} & 0 & 0 \\ \tfrac{2}{3} & -1 & 0 \\ \tfrac{2}{3} & -1 & 1 \end{bmatrix} A; \qquad R_3 \to R_3 + R_2$$

$$\Rightarrow \begin{bmatrix} 1 & -1 & \tfrac{4}{3} \\ 0 & 1 & -\tfrac{4}{3} \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{3} & 0 & 0 \\ \tfrac{2}{3} & -1 & 0 \\ -2 & 3 & -3 \end{bmatrix} A; \qquad R_3 \to -3R_3$$

$$\Rightarrow \begin{bmatrix} 1 & -1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 3 & -4 & 4 \\ -2 & 3 & -4 \\ -2 & 3 & -3 \end{bmatrix} A; \qquad R_1 \to R_1 - \tfrac{4}{3}R_3,\; R_2 \to R_2 + \tfrac{4}{3}R_3$$

$$\Rightarrow \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & -1 & 0 \\ -2 & 3 & -4 \\ -2 & 3 & -3 \end{bmatrix} A; \qquad R_1 \to R_1 + R_2$$

Hence,

$$A^{-1} = \begin{bmatrix} 1 & -1 & 0 \\ -2 & 3 & -4 \\ -2 & 3 & -3 \end{bmatrix}$$
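The same Gauss-Jordan procedure can be automated. The sketch below (the helper name gauss_jordan_inverse is ours; it assumes non-zero pivots and does no row swapping) reduces the augmented block [A | I] to [I | A⁻¹]:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row-reducing the augmented block [A | I] to [I | A^{-1}]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for i in range(n):
        M[i] /= M[i, i]                 # scale the pivot row (assumes nonzero pivot)
        for j in range(n):
            if j != i:
                M[j] -= M[j, i] * M[i]  # clear the rest of column i
    return M[:, n:]

A = np.array([[3, -3, 4], [2, -3, 4], [0, -1, 1]])
print(gauss_jordan_inverse(A))  # [[ 1. -1.  0.] [-2.  3. -4.] [-2.  3. -3.]]
```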

Eigen Values and Eigen Vectors

Let

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\ \cdots & \cdots & \cdots & \cdots & \cdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_n \end{bmatrix} \;\Longrightarrow\; AX = Y \qquad \ldots (1)$$

where A is the matrix and X and Y are column vectors. Here, the column vector X is transformed into the column vector Y by means of the square matrix A.

Suppose the linear transformation Y = AX transforms X into a scalar multiple of itself, i.e., AX = λX. Then

$$AX = Y = \lambda X$$
$$AX - \lambda IX = 0$$
$$(A - \lambda I)X = 0 \qquad \ldots (2)$$

Thus, the unknown scalar λ is known as an eigen value of the matrix A and the corresponding non-zero vector X as an eigen vector. The eigen values are also called characteristic values, proper values or latent values.

Example: Find the characteristic roots of the matrix

$$\begin{bmatrix} 6 & -2 & 2 \\ -2 & 3 & -1 \\ 2 & -1 & 3 \end{bmatrix}$$

Solution: The characteristic equation of the given matrix is

$$\begin{vmatrix} 6-\lambda & -2 & 2 \\ -2 & 3-\lambda & -1 \\ 2 & -1 & 3-\lambda \end{vmatrix} = 0$$

$$\Rightarrow (6-\lambda)(9 - 6\lambda + \lambda^2 - 1) + 2(-6 + 2\lambda + 2) + 2(2 - 6 + 2\lambda) = 0 \;\Rightarrow\; -\lambda^3 + 12\lambda^2 - 36\lambda + 32 = 0$$

By trial, λ = 2 is a root of this equation.

$$\Rightarrow (\lambda - 2)(\lambda^2 - 10\lambda + 16) = 0 \;\Rightarrow\; (\lambda - 2)(\lambda - 2)(\lambda - 8) = 0 \;\Rightarrow\; \lambda = 2,\, 2,\, 8$$

are the characteristic roots or eigen values.
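As a numerical cross-check (illustrative only), np.linalg.eigvalsh confirms these roots for the symmetric matrix above:

```python
import numpy as np

M = np.array([[6.0, -2.0, 2.0], [-2.0, 3.0, -1.0], [2.0, -1.0, 3.0]])
eigenvalues = np.linalg.eigvalsh(M)  # eigvalsh: suitable because M is symmetric
print(np.round(eigenvalues, 6))      # [2. 2. 8.]
```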

Cayley-Hamilton Theorem

Every square matrix satisfies its own characteristic equation.

If $|A - \lambda I| = (-1)^n\left(\lambda^n + a_1\lambda^{n-1} + a_2\lambda^{n-2} + \cdots + a_n\right)$ is the characteristic polynomial of an $n \times n$ matrix $A = (a_{ij})$, then the matrix equation

$$X^n + a_1 X^{n-1} + a_2 X^{n-2} + \cdots + a_n I = 0$$

is satisfied by X = A, i.e.,

$$A^n + a_1 A^{n-1} + a_2 A^{n-2} + \cdots + a_n I = 0$$

Proof. Since the elements of the matrix A − λI are at most of the first degree in λ, the elements of adj(A − λI) are at most of degree (n − 1) in λ. Thus, adj(A − λI) may be written as a matrix polynomial in λ, given by

$$\operatorname{adj}(A - \lambda I) = B_0\lambda^{n-1} + B_1\lambda^{n-2} + \cdots + B_{n-1}$$

where $B_0, B_1, B_2, \ldots, B_{n-1}$ are matrices whose elements do not involve λ.

We know that

$$(A - \lambda I)\,\operatorname{adj}(A - \lambda I) = |A - \lambda I|\, I$$

$$(A - \lambda I)\left(B_0\lambda^{n-1} + B_1\lambda^{n-2} + \cdots + B_{n-1}\right) = (-1)^n\left(\lambda^n + a_1\lambda^{n-1} + a_2\lambda^{n-2} + \cdots + a_n\right) I$$

Equating coefficients of like powers of λ on both sides, we get

$$-IB_0 = (-1)^n I$$
$$AB_0 - IB_1 = (-1)^n a_1 I$$
$$AB_1 - IB_2 = (-1)^n a_2 I$$
$$\cdots\cdots\cdots$$
$$AB_{n-1} = (-1)^n a_n I$$

On multiplying these equations by $A^n, A^{n-1}, \ldots, I$ respectively and adding, we obtain

$$0 = (-1)^n\left(A^n + a_1 A^{n-1} + \cdots + a_n I\right)$$

Thus, $A^n + a_1 A^{n-1} + \cdots + a_n I = 0$.

For example, let A be a square matrix and let

$$\lambda^3 - 2\lambda^2 + 3\lambda - 4 = 0 \qquad \ldots (1)$$

be its characteristic equation. Then, according to the Cayley-Hamilton theorem, (1) is satisfied by A:

$$A^3 - 2A^2 + 3A - 4I = 0 \qquad \ldots (2)$$

We can find $A^{-1}$ from (2). On multiplying (2) by $A^{-1}$, we get

$$A^2 - 2A + 3I - 4A^{-1} = 0$$
$$A^2 - 2A + 3I = 4A^{-1}$$
$$A^{-1} = \frac{1}{4}\left(A^2 - 2A + 3I\right)$$
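A numerical sketch of this trick (illustrative only; it uses the symmetric matrix of the next example, and np.poly to read off the characteristic-polynomial coefficients):

```python
import numpy as np

A = np.array([[2.0, -1.0, 1.0], [-1.0, 2.0, -1.0], [1.0, -1.0, 2.0]])
c = np.poly(A)   # [1, c1, ..., cn]: coefficients of the characteristic polynomial
n = A.shape[0]

# Cayley-Hamilton: A^n + c1 A^(n-1) + ... + cn I = 0, hence
# A^{-1} = -(A^(n-1) + c1 A^(n-2) + ... + c_{n-1} I) / cn
powers = [np.linalg.matrix_power(A, k) for k in range(n)]  # [I, A, A^2]
A_inv = -sum(c[i] * powers[n - 1 - i] for i in range(n)) / c[n]

assert np.allclose(A @ A_inv, np.eye(n))
print(np.round(4 * A_inv))  # here 4 A^{-1} = A^2 - 6A + 9I, as derived below
```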

Example: Find the characteristic equation of the symmetric matrix

$$A = \begin{bmatrix} 2 & -1 & 1 \\ -1 & 2 & -1 \\ 1 & -1 & 2 \end{bmatrix}$$

and verify that it is satisfied by A; hence obtain $A^{-1}$. Express $A^6 - 6A^5 + 9A^4 - 2A^3 - 12A^2 + 23A - 9I$ as a linear polynomial in A.

Solution: The characteristic equation is $|A - \lambda I| = 0$:

$$\begin{vmatrix} 2-\lambda & -1 & 1 \\ -1 & 2-\lambda & -1 \\ 1 & -1 & 2-\lambda \end{vmatrix} = 0$$

$$(2-\lambda)\left[(2-\lambda)^2 - 1\right] + 1\left[-(2-\lambda) + 1\right] + 1\left[1 - (2-\lambda)\right] = 0$$

or $(2-\lambda)^3 - (2-\lambda) + (\lambda-1) + (\lambda-1) = 0$

or $(2-\lambda)^3 + 3\lambda - 4 = 0$

or $8 - \lambda^3 - 12\lambda + 6\lambda^2 + 3\lambda - 4 = 0$

or $-\lambda^3 + 6\lambda^2 - 9\lambda + 4 = 0$

or $\lambda^3 - 6\lambda^2 + 9\lambda - 4 = 0$

By the Cayley-Hamilton theorem, $A^3 - 6A^2 + 9A - 4I = 0$.
Verification:

[ ][ ]
2 −1 1 2 −1 1
2
A = −1 2 −1 −1 2 −1
1 −1 2 1 −1 2

[ ][ ]
4+1+1 −2−2−1 2+1+2 6 −5 5
¿ −2−2−1 1+ 4+1 −1−2−2 = −5 6 −5
2+1+ 2 −1−2−2 1+1+4 5 −5 6

[ ][ ]
6 −5 5 2 −1 1
3
A = −5 6 −5 −1 2 −1
5 −5 6 1 −1 2

[ ][ ]
12+5+5 −6−10+ 5 6+5+10 22 −21 21
¿ −10−6−5 5+12+5 −5−6−10 = −21 22 −21
10+5+6 −5−10−6 5+5+12 21 −21 22

3 2
A −6 A +9 A−4 I

[ ][ ][ ][ ]
22 −21 21 6 −5 5 2 −1 1 1 0 0
¿ −21 22 −21 −6 −5 6 −5 +9 −1 2 −1 −4 0 1 0
21 −21 22 5 −5 6 1 −1 2 0 0 1

[ ]
0 0 0
¿ 0 0 0 =0
0 0 0

So, it is verified that the characteristic equation (1) is satisfied by A.

Inverse of Matrix A:

$$A^3 - 6A^2 + 9A - 4I = 0$$

On multiplying by $A^{-1}$, we get

$$A^2 - 6A + 9I - 4A^{-1} = 0$$

or $4A^{-1} = A^2 - 6A + 9I$

$$4A^{-1} = \begin{bmatrix} 6 & -5 & 5 \\ -5 & 6 & -5 \\ 5 & -5 & 6 \end{bmatrix} - 6\begin{bmatrix} 2 & -1 & 1 \\ -1 & 2 & -1 \\ 1 & -1 & 2 \end{bmatrix} + 9\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 6-12+9 & -5+6+0 & 5-6+0 \\ -5+6+0 & 6-12+9 & -5+6+0 \\ 5-6+0 & -5+6+0 & 6-12+9 \end{bmatrix}$$

$$\Rightarrow A^{-1} = \frac{1}{4}\begin{bmatrix} 3 & 1 & -1 \\ 1 & 3 & 1 \\ -1 & 1 & 3 \end{bmatrix}$$

Finally,

$$A^6 - 6A^5 + 9A^4 - 2A^3 - 12A^2 + 23A - 9I = A^3\left(A^3 - 6A^2 + 9A - 4I\right) + 2\left(A^3 - 6A^2 + 9A - 4I\right) + 5A - I$$

$$= 5A - I$$

since $A^3 - 6A^2 + 9A - 4I = 0$. This is the required linear polynomial in A.
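A quick numerical confirmation of this reduction (an illustrative sketch, not part of the notes):

```python
import numpy as np

A = np.array([[2.0, -1.0, 1.0], [-1.0, 2.0, -1.0], [1.0, -1.0, 2.0]])
P = lambda k: np.linalg.matrix_power(A, k)
I = np.eye(3)

lhs = P(6) - 6*P(5) + 9*P(4) - 2*P(3) - 12*P(2) + 23*A - 9*I
assert np.allclose(lhs, 5*A - I)  # the polynomial collapses to 5A - I
```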

Diagonalization of Matrix

If a square matrix A of order n has n linearly independent eigen vectors, then a matrix P can be found such that $P^{-1}AP$ is a diagonal matrix.

Proof. We shall prove the theorem for a matrix of order 3. The proof can be easily extended
to matrices of higher order.

Let

$$A = \begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{bmatrix}$$

and let $\lambda_1, \lambda_2, \lambda_3$ be its eigen values and $X_1, X_2, X_3$ the corresponding eigen vectors, where

$$X_1 = \begin{bmatrix} x_1 \\ y_1 \\ z_1 \end{bmatrix}, \quad X_2 = \begin{bmatrix} x_2 \\ y_2 \\ z_2 \end{bmatrix}, \quad X_3 = \begin{bmatrix} x_3 \\ y_3 \\ z_3 \end{bmatrix}$$

For the eigen value $\lambda_1$, the eigen vector $X_1$ satisfies $(A - \lambda_1 I)X_1 = 0$, i.e.,

$$\begin{aligned} (a_1 - \lambda_1)x_1 + b_1 y_1 + c_1 z_1 &= 0 \\ a_2 x_1 + (b_2 - \lambda_1)y_1 + c_2 z_1 &= 0 \\ a_3 x_1 + b_3 y_1 + (c_3 - \lambda_1)z_1 &= 0 \end{aligned} \qquad \ldots (1)$$

∴ We have

$$\begin{aligned} a_1 x_1 + b_1 y_1 + c_1 z_1 &= \lambda_1 x_1 \\ a_2 x_1 + b_2 y_1 + c_2 z_1 &= \lambda_1 y_1 \\ a_3 x_1 + b_3 y_1 + c_3 z_1 &= \lambda_1 z_1 \end{aligned} \qquad \ldots (2)$$

Similarly, for $\lambda_2$ and $\lambda_3$ we have

$$\begin{aligned} a_1 x_2 + b_1 y_2 + c_1 z_2 &= \lambda_2 x_2 \\ a_2 x_2 + b_2 y_2 + c_2 z_2 &= \lambda_2 y_2 \\ a_3 x_2 + b_3 y_2 + c_3 z_2 &= \lambda_2 z_2 \end{aligned} \qquad \ldots (3)$$

$$\begin{aligned} a_1 x_3 + b_1 y_3 + c_1 z_3 &= \lambda_3 x_3 \\ a_2 x_3 + b_2 y_3 + c_2 z_3 &= \lambda_3 y_3 \\ a_3 x_3 + b_3 y_3 + c_3 z_3 &= \lambda_3 z_3 \end{aligned} \qquad \ldots (4)$$

We consider the matrix

$$P = \begin{bmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ z_1 & z_2 & z_3 \end{bmatrix}$$

whose columns are the eigen vectors of A.

Then,

$$AP = \begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{bmatrix}\begin{bmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ z_1 & z_2 & z_3 \end{bmatrix} = \begin{bmatrix} a_1x_1 + b_1y_1 + c_1z_1 & a_1x_2 + b_1y_2 + c_1z_2 & a_1x_3 + b_1y_3 + c_1z_3 \\ a_2x_1 + b_2y_1 + c_2z_1 & a_2x_2 + b_2y_2 + c_2z_2 & a_2x_3 + b_2y_3 + c_2z_3 \\ a_3x_1 + b_3y_1 + c_3z_1 & a_3x_2 + b_3y_2 + c_3z_2 & a_3x_3 + b_3y_3 + c_3z_3 \end{bmatrix}$$

$$= \begin{bmatrix} \lambda_1 x_1 & \lambda_2 x_2 & \lambda_3 x_3 \\ \lambda_1 y_1 & \lambda_2 y_2 & \lambda_3 y_3 \\ \lambda_1 z_1 & \lambda_2 z_2 & \lambda_3 z_3 \end{bmatrix} \quad \text{[using eqs. (2), (3) and (4)]}$$

$$= \begin{bmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ z_1 & z_2 & z_3 \end{bmatrix}\begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix} = PD$$

where D is the diagonal matrix $\begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix}$.

∴ AP = PD

$$\Rightarrow P^{-1}AP = D$$

Notes

1. The square matrix P, which diagonalises A, is found by grouping the eigen vectors of A into a square matrix, and the resulting diagonal matrix has the eigen values of A as its diagonal elements.
2. The transformation of a matrix A to $P^{-1}AP$ is known as a similarity transformation.
3. The reduction of A to a diagonal matrix is, obviously, a particular case of similarity transformation.
4. The matrix P which diagonalises A is called the modal matrix of A, and the resulting diagonal matrix D is known as the spectral matrix of A. A numerical sketch of the whole procedure follows this list.
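A minimal NumPy sketch of diagonalization (illustrative; it reuses the symmetric matrix from the eigen value example above):

```python
import numpy as np

A = np.array([[6.0, -2.0, 2.0], [-2.0, 3.0, -1.0], [2.0, -1.0, 3.0]])
eigenvalues, P = np.linalg.eig(A)   # columns of P are the eigen vectors: the modal matrix
D = np.linalg.inv(P) @ A @ P        # similarity transformation P^{-1} A P

print(np.round(D, 6))               # diagonal, with 2, 2, 8 on the diagonal (in some order)
assert np.allclose(D, np.diag(eigenvalues))
```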

