11 MATRIX FORMULATION OF QUANTUM THEORY

In the preceding chapters, the Schroedinger wave equation was developed and applied to various systems.

Two matrices A and B of the same order are added or subtracted element by element. Thus

$$A \pm B = \begin{pmatrix} A_{11} \pm B_{11} & A_{12} \pm B_{12} & \dots \\ A_{21} \pm B_{21} & A_{22} \pm B_{22} & \dots \\ \vdots & \vdots & \end{pmatrix}$$

If the sum matrix is called C, then

$$C_{ij} = A_{ij} + B_{ij} \quad \text{for addition},$$
$$C_{ij} = A_{ij} - B_{ij} \quad \text{for subtraction}.$$

The rule for multiplying matrix A by a constant C is

$$CA = \begin{pmatrix} CA_{11} & CA_{12} & \dots & CA_{1N} \\ CA_{21} & CA_{22} & \dots & CA_{2N} \\ \vdots & \vdots & & \vdots \end{pmatrix},$$

i.e., each element of A is multiplied by the constant.

Two matrices can be multiplied if the number of columns in A is equal to the number of rows in B. The product is a matrix whose element in the ith row and jth column is the algebraic sum of the products of the elements in the ith row of A by the corresponding elements in the jth column of B:

$$(AB)_{ij} = \sum_k A_{ik} B_{kj}.$$

If

$$A = \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \\ B_{31} & B_{32} \end{pmatrix},$$

then

$$AB = \begin{pmatrix} A_{11}B_{11} + A_{12}B_{21} + A_{13}B_{31} & A_{11}B_{12} + A_{12}B_{22} + A_{13}B_{32} \\ A_{21}B_{11} + A_{22}B_{21} + A_{23}B_{31} & A_{21}B_{12} + A_{22}B_{22} + A_{23}B_{32} \\ A_{31}B_{11} + A_{32}B_{21} + A_{33}B_{31} & A_{31}B_{12} + A_{32}B_{22} + A_{33}B_{32} \end{pmatrix}.$$

The commutative law of addition holds good for matrices:
$$A + B = B + A.$$
The associative law of addition is also true, i.e.,
$$(A + B) + C = A + (B + C).$$
The commutative law of multiplication, however, does not hold in general, i.e.,
$$AB \neq BA.$$

11.2. SPECIAL MATRICES

(1) Row matrix: A matrix having only one row is called a row matrix:
$$[A_{11} \;\; A_{12} \;\; \dots \;\; A_{1N}].$$

(2) Column matrix: A matrix having only one column is called a column matrix:
$$\begin{pmatrix} A_{11} \\ A_{21} \\ \vdots \\ A_{N1} \end{pmatrix}.$$

(3) Null matrix: A matrix having every element zero is called a null matrix:
$$O = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
For an arbitrary square matrix A, the null matrix O is defined by the equations
$$OA = O \quad \text{and} \quad AO = O,$$
from which it follows that all the elements of O are zero.

(4) Unit matrix: A square matrix having unit elements in the principal or leading diagonal and zero elements everywhere else is called a unit matrix:
$$I = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Thus the unit matrix is defined by $AI = A = IA$.

(5) Inverse, singular and non-singular matrices: The inverse, or reciprocal, of a square matrix A is indicated by the symbol $A^{-1}$ and is defined, in case it exists, by the relations
$$AA^{-1} = A^{-1}A = I.$$
A is said to be non-singular if it possesses an inverse, and singular if it does not.

(6) Determinant of a matrix: Every finite square matrix A has a corresponding determinant, which we represent by det A or |A|; it is found from the usual rule for the computation of determinants. If A and B are square matrices of the same rank, then the determinant of the product is
$$\det AB = \det A \, \det B.$$

(7) Trace or spur of a matrix: The trace of a matrix is the sum of the diagonal elements of the matrix:
$$\mathrm{Tr}(A) = \sum_k A_{kk}.$$

(8) Transpose of a matrix: The matrix of order $n \times m$ obtained from a matrix A of order $m \times n$ by interchanging its rows and columns is called the transpose of A and is denoted by $A^T$ or $A'$. If

$$A = \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \\ A_{41} & A_{42} & A_{43} \end{pmatrix}, \quad \text{then} \quad A^T = \begin{pmatrix} A_{11} & A_{21} & A_{31} & A_{41} \\ A_{12} & A_{22} & A_{32} & A_{42} \\ A_{13} & A_{23} & A_{33} & A_{43} \end{pmatrix}.$$

(9) Conjugate of a matrix: If a matrix A consists of complex numbers as elements, then the matrix obtained by taking the corresponding conjugate complex elements is called the conjugate of A and is denoted by $A^*$.
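These algebraic rules are easy to check numerically. The following short NumPy sketch (an illustration added here, not part of the original text) verifies the addition, multiplication, unit-matrix, determinant and trace rules above for small concrete matrices:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [5, 2]])
C = np.array([[2, 0], [1, 1]])

# Addition is commutative and associative
assert np.array_equal(A + B, B + A)
assert np.array_equal((A + B) + C, A + (B + C))

# Multiplication is generally NOT commutative
print(np.array_equal(A @ B, B @ A))        # False for these matrices

# Unit matrix: AI = A = IA
I = np.eye(2, dtype=int)
assert np.array_equal(A @ I, A) and np.array_equal(I @ A, A)

# det(AB) = det(A) det(B), and the trace is the sum of the diagonal elements
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True
print(np.trace(A) == A[0, 0] + A[1, 1])    # True
```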
(10) Adjoint of a matrix: If we denote the cofactor of the element $a_{ij}$ in the determinant |A| of the square matrix $A = [a_{ij}]$ by $A_{ij}$, then the transpose of the matrix $[A_{ij}]$ is called the adjoint of A and is denoted by Adj A. Thus Adj A is the transpose of the matrix formed by the cofactors of A. Consider the square matrix

$$A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix}.$$

Its determinant is

$$|A| = \begin{vmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{vmatrix}.$$

Suppose $A_{11}$ is the cofactor of $a_{11}$ and, similarly, $A_{mn}$ is the cofactor of $a_{mn}$; the matrix of cofactors is given by

$$\begin{pmatrix} A_{11} & A_{12} & \dots & A_{1n} \\ A_{21} & A_{22} & \dots & A_{2n} \\ \vdots & & & \vdots \\ A_{n1} & A_{n2} & \dots & A_{nn} \end{pmatrix}.$$

The adjoint of A is obtained by interchanging rows and columns in the matrix of cofactors, i.e.,

$$\mathrm{Adj}\,A = \begin{pmatrix} A_{11} & A_{21} & \dots & A_{n1} \\ A_{12} & A_{22} & \dots & A_{n2} \\ \vdots & & & \vdots \\ A_{1n} & A_{2n} & \dots & A_{nn} \end{pmatrix}.$$

By the expansion properties of determinants,

$$A\,(\mathrm{Adj}\,A) = |A| \begin{pmatrix} 1 & 0 & \dots \\ 0 & 1 & \dots \\ \vdots & & \end{pmatrix} = |A|\,I,$$

and similarly $(\mathrm{Adj}\,A)\,A = |A|\,I$, so that
$$A\,(\mathrm{Adj}\,A) = |A|\,I = (\mathrm{Adj}\,A)\,A.$$
If $|A| \neq 0$, this result may be expressed as
$$A\left(\frac{1}{|A|}\,\mathrm{Adj}\,A\right) = \left(\frac{1}{|A|}\,\mathrm{Adj}\,A\right)A = I.$$

In terms of Adj A, the theorem about the existence of the inverse can be expressed in the following form: the necessary and sufficient condition for a square matrix A to possess the inverse is that $|A| \neq 0$, and the inverse of A is then
$$A^{-1} = \frac{\mathrm{Adj}\,A}{|A|}.$$

Example 1. Find the adjoint of the matrix

$$A = \begin{pmatrix} a & h & g \\ h & b & f \\ g & f & c \end{pmatrix}.$$

The cofactors of the elements of the first row are:

the cofactor of a, i.e., $(-1)^{1+1}\begin{vmatrix} b & f \\ f & c \end{vmatrix} = bc - f^2$;

the cofactor of h, i.e., $(-1)^{1+2}\begin{vmatrix} h & f \\ g & c \end{vmatrix} = gf - hc$;

the cofactor of g, i.e., $(-1)^{1+3}\begin{vmatrix} h & b \\ g & f \end{vmatrix} = hf - gb$.

Similarly the other cofactors can be obtained. Hence

$$\mathrm{Adj}\,A = \begin{pmatrix} bc - f^2 & gf - hc & hf - gb \\ gf - hc & ca - g^2 & gh - af \\ hf - gb & gh - af & ab - h^2 \end{pmatrix}.$$

Example 2. Find the inverse of the matrix

$$A = \begin{pmatrix} 3 & -1 & 1 \\ -15 & 6 & -5 \\ 5 & -2 & 2 \end{pmatrix}.$$

Let its cofactor matrix be

$$\begin{pmatrix} A_1 & B_1 & C_1 \\ A_2 & B_2 & C_2 \\ A_3 & B_3 & C_3 \end{pmatrix}.$$

Then

$$A_1 = (-1)^{1+1}\begin{vmatrix} 6 & -5 \\ -2 & 2 \end{vmatrix} = 12 - 10 = 2,$$
$$B_1 = (-1)^{1+2}\begin{vmatrix} -15 & -5 \\ 5 & 2 \end{vmatrix} = -(-30 + 25) = 5,$$
$$C_1 = (-1)^{1+3}\begin{vmatrix} -15 & 6 \\ 5 & -2 \end{vmatrix} = 30 - 30 = 0,$$
$$A_2 = (-1)^{2+1}\begin{vmatrix} -1 & 1 \\ -2 & 2 \end{vmatrix} = -(-2 + 2) = 0,$$
$$B_2 = (-1)^{2+2}\begin{vmatrix} 3 & 1 \\ 5 & 2 \end{vmatrix} = 6 - 5 = 1,$$
$$C_2 = (-1)^{2+3}\begin{vmatrix} 3 & -1 \\ 5 & -2 \end{vmatrix} = -(-6 + 5) = 1.$$

Similarly $A_3 = -1$, $B_3 = 0$ and $C_3 = 3$. Hence

$$\mathrm{Adj}\,A = \begin{pmatrix} A_1 & A_2 & A_3 \\ B_1 & B_2 & B_3 \\ C_1 & C_2 & C_3 \end{pmatrix} = \begin{pmatrix} 2 & 0 & -1 \\ 5 & 1 & 0 \\ 0 & 1 & 3 \end{pmatrix}.$$

Also, $|A| = 3 \cdot 2 + (-1) \cdot 5 + 1 \cdot 0 = 1$. Therefore

$$A^{-1} = \frac{\mathrm{Adj}\,A}{|A|} = \begin{pmatrix} 2 & 0 & -1 \\ 5 & 1 & 0 \\ 0 & 1 & 3 \end{pmatrix}.$$

(11) Hermitian matrix: A hermitian matrix is a square matrix which is equal to its hermitian adjoint.† It conforms to the equation
$$A^{\dagger} = A.$$
Evidently only square matrices can be hermitian.

† This symbol is used for the hermitian adjoint, i.e., $(A^{\dagger})_{ij} = (A)^*_{ji}$.

(12) Unitary matrix: A matrix is unitary if its hermitian adjoint is equal to its inverse; thus U is a unitary matrix if
$$U^{\dagger} = U^{-1}, \quad \text{or if} \quad UU^{\dagger} = I \;\text{ and }\; U^{\dagger}U = I.$$

11.3. EIGENVALUES AND EIGENVECTORS OF MATRICES

In order to obtain the eigenvalues of an operator A from its matrix representation, we begin with
$$A \psi_r = a_r \psi_r,$$
where $a_r$ is the rth eigenvalue of the operator A. We then expand $\psi_r$ in an orthonormal series $\psi_r = \sum_n C_{rn} \psi_n$ and obtain
$$\sum_n C_{rn} A \psi_n = a_r \sum_n C_{rn} \psi_n. \quad (1)$$
Multiplying equation (1) by $\psi_m^*$ and integrating over all space, we have
$$\sum_n C_{rn} A_{mn} = a_r C_{rm}. \quad (2)$$
Equation (2) provides a set of homogeneous linear equations defining the $C_{rn}$ in terms of the eigenvalue $a_r$. The condition for solution is the vanishing of the determinant of the coefficients of the C's, i.e.,
$$|A_{mn} - \delta_{mn} a_r| = 0, \quad r = 1, 2, \dots \quad (3)$$
The equation (3) is often called the secular or characteristic equation.

Equation (3) is an equation defining $a_r$ which is of the same degree as the number of rows and columns in the determinant. Each solution provides an eigenvalue $a_r$. The eigenvector is simply the column representation of the corresponding eigenfunction of the operator A in terms of the coefficients of the orthonormal set $\psi_n$.

For example, if we pre-multiply the vector $u = \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}$ by the matrix

$$A = \begin{pmatrix} 11 & -6 & 2 \\ -6 & 10 & -4 \\ 2 & -4 & 6 \end{pmatrix},$$

we get

$$Au = \begin{pmatrix} 11 & -6 & 2 \\ -6 & 10 & -4 \\ 2 & -4 & 6 \end{pmatrix}\begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix} = \begin{pmatrix} 11 \times 1 - 6 \times 2 + 2 \times 2 \\ -6 \times 1 + 10 \times 2 - 4 \times 2 \\ 2 \times 1 - 4 \times 2 + 6 \times 2 \end{pmatrix} = \begin{pmatrix} 3 \\ 6 \\ 6 \end{pmatrix} = 3\begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}.$$

Hence $Au = 3u$. The vector $\{1, 2, 2\}$ is consequently an eigenvector of the matrix A, the corresponding eigenvalue being 3.
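Both worked examples can be confirmed numerically. In the sketch below (added for illustration; the cofactor_matrix helper is ours, not the textbook's), the inverse of Example 2 is rebuilt from its cofactors, and the eigenvector relation Au = 3u is checked directly:

```python
import numpy as np

def cofactor_matrix(M):
    """Matrix of cofactors: C[i, j] = (-1)^(i+j) times the (i, j) minor."""
    n = M.shape[0]
    C = np.empty_like(M, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

# Example 2: A^{-1} = Adj A / |A|
A = np.array([[3, -1, 1], [-15, 6, -5], [5, -2, 2]], dtype=float)
adj = cofactor_matrix(A).T                  # adjoint = transpose of cofactor matrix
A_inv = adj / np.linalg.det(A)              # here |A| = 1
print(np.allclose(A @ A_inv, np.eye(3)))    # True

# Eigenvector example: Au = 3u
M = np.array([[11, -6, 2], [-6, 10, -4], [2, -4, 6]], dtype=float)
u = np.array([1.0, 2.0, 2.0])
print(M @ u)                                # [3. 6. 6.] = 3u
```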
Similarity transformation and diagonalisation of matrices

Let S be a non-singular matrix. The similarity transformation of a square matrix A into another matrix A′ can then be written in the form of a transformation equation as
$$SAS^{-1} = A' \quad \text{and} \quad S^{-1}A'S = A.$$
Any matrix equation remains unaffected by such a transformation. For example, the equation
$$AB + CDE = F$$
can be transformed into
$$S\,AB\,S^{-1} + S\,CDE\,S^{-1} = S\,F\,S^{-1},$$
which is equivalent to
$$SAS^{-1} \cdot SBS^{-1} + SCS^{-1} \cdot SDS^{-1} \cdot SES^{-1} = SFS^{-1}$$
or
$$A'B' + C'D'E' = F',$$
where the primes over the matrices denote the transformed matrices.

Now we shall illustrate the transformation of a matrix A into a diagonal matrix D through a similarity transformation. Let
$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \quad \text{and} \quad D = \begin{pmatrix} d_{11} & 0 \\ 0 & d_{22} \end{pmatrix}.$$
If the matrix A is transformed to D by the transformation matrix S, so that
$$S^{-1}AS = D \quad \text{or} \quad AS = SD,$$
then, taking the (m, n)th element of both sides, we have
$$\sum_k a_{mk} S_{kn} = \sum_k S_{mk} D_{kn} = S_{mn} d_{nn} \quad (m, n = 1, 2).$$
Setting n = 1 and m = 1, 2, we get
$$a_{11} S_{11} + a_{12} S_{21} = S_{11} d_{11},$$
$$a_{21} S_{11} + a_{22} S_{21} = S_{21} d_{11}.$$
This set has a solution for the unknown S's if the determinant of the coefficients vanishes:
$$\begin{vmatrix} a_{11} - d_{11} & a_{12} \\ a_{21} & a_{22} - d_{11} \end{vmatrix} = 0. \quad (5)$$
Similarly, setting n = 2 and m = 1, 2, we get
$$\begin{vmatrix} a_{11} - d_{22} & a_{12} \\ a_{21} & a_{22} - d_{22} \end{vmatrix} = 0. \quad (6)$$
From equations (5) and (6), we see that $d_{11}$ and $d_{22}$ are the roots of the secular equation in λ:
$$\begin{vmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda \end{vmatrix} = 0. \quad (7)$$
When A is a hermitian matrix, the secular equation (7) has real roots. Thus, in order to find the diagonal matrix, and consequently the eigenvalues of A (the eigenvalues are the elements of the diagonal matrix), we have to solve the secular equation (7) for λ.

Example. Reduce to a diagonal form the matrix
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 0 & -4 & 2 \\ 0 & 0 & 7 \end{pmatrix}.$$
We know that the secular equation is $|A - \lambda I| = 0$, i.e.,
$$\begin{vmatrix} 1-\lambda & 2 & 3 \\ 0 & -4-\lambda & 2 \\ 0 & 0 & 7-\lambda \end{vmatrix} = 0.$$
Solving the determinant, we have
$$(1-\lambda)(-4-\lambda)(7-\lambda) = 0 \quad \text{or} \quad \lambda = 1, \; -4 \;\text{ and }\; 7.$$
These are then the eigenvalues of the matrix A. The diagonal matrix will be
$$D = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -4 & 0 \\ 0 & 0 & 7 \end{pmatrix}.$$
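The same diagonalisation can be carried out numerically. The following sketch (an added aside, assuming NumPy) builds S from the eigenvectors of the example matrix and verifies that S⁻¹AS is the diagonal matrix of eigenvalues; note that np.linalg.eig may return the eigenvalues in a different order:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [0, -4, 2],
              [0, 0, 7]], dtype=float)

eigvals, S = np.linalg.eig(A)      # columns of S are eigenvectors of A
D = np.linalg.inv(S) @ A @ S       # similarity transformation S^{-1} A S

print(np.round(eigvals, 10))       # 1, -4, 7 (order may differ)
print(np.allclose(D, np.diag(eigvals)))   # True: D is diagonal
```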
11.4. LINEAR VECTOR SPACES; HILBERT SPACE

We shall now introduce here some terminology that will be useful in the subsequent discussions.

Column vector, row vector and hermitian scalar product

It is conventional to speak of a one-column matrix as a column vector or, simply, as a vector. In this analogy, the elements of a column matrix can be visualised as the components of a vector. For the sake of some useful definitions, we take, for example, the column vector x as

$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}. \quad (1)$$

In order to conserve space, we write these elements in a line within curly brackets to represent a vector:
$$x = \{x_1, x_2, x_3, \dots, x_n\}. \quad (2)$$

A one-row matrix $[x_1 \; x_2 \; x_3 \; \dots \; x_n]$ will be termed a row vector. It is obvious that this vector is the transpose of the column vector x:
$$x^T = [x_1 \; x_2 \; x_3 \; \dots \; x_n]. \quad (3)$$

In a three-dimensional space, n = 3, and $x_1, x_2, x_3$ may represent the components of a vector in the directions of the rectangular ($x_1$, $x_2$ and $x_3$) axes. The square of the length of this vector is then given by $x_1^2 + x_2^2 + x_3^2$. Extending this definition to a vector in an n-dimensional space, which will now have n components, the square of the length of this vector is
$$x_1^2 + x_2^2 + x_3^2 + \dots + x_n^2. \quad (4)$$

We associate n mutually orthogonal axes with an n-dimensional coordinate system, so that a point has n corresponding coordinates and a vector has n components along these axes. We now introduce the concept of the scalar product of two vectors in an n-dimensional space. Let the second vector be y. The scalar product of x and y is defined by
$$x^T y = y^T x = x_1 y_1 + x_2 y_2 + \dots + x_n y_n, \quad (5)$$
so that if x = y, it also describes the square of the length of the vector x, given by eq. (4).

When the components of the vectors are complex numbers, we define the hermitian scalar product of two vectors x and y as
$$x^{\dagger} y = x_1^* y_1 + x_2^* y_2 + \dots + x_n^* y_n, \quad (6)$$
where * denotes complex conjugation. The operation *T (conjugation followed by transposition) is also denoted by the symbol †. Thus we have
$$x^{\dagger} y = (x^*)^T y = y^T x^* = (y^{\dagger} x)^*, \quad (7)$$
since in a transposition of the product of two matrices, the order of the matrices is reversed. Eq. (7) shows that the two hermitian scalar products are complex conjugates of each other.

Independence of vectors and linear vector space

A set of n vectors $u_1, u_2, \dots, u_n$ is said to be linearly independent if there exists no relation between them of the form
$$c_1 u_1 + c_2 u_2 + \dots + c_n u_n = 0, \quad (8)$$
except when all the c's are zero. For example, in a two-dimensional space, if $c_1$ and $c_2$ are not zero, the relation
$$c_1 u_1 + c_2 u_2 = 0 \quad \text{or} \quad u_1 = -\frac{c_2}{c_1}\, u_2$$
implies that $u_1$ is in the negative direction of $u_2$. Such two vectors are obviously not linearly independent. For them to be linearly independent, it is necessary that the two be non-collinear, and two such vectors will define a plane. In a similar way, any three vectors which are not parallel to one and the same plane are linearly independent and constitute a three-dimensional space.

If $u_1, u_2, \dots, u_n$ are linearly independent vectors in an n-dimensional space, then the set of all vectors v which can be expressed in the form
$$v = c_1 u_1 + c_2 u_2 + \dots + c_n u_n \quad (9)$$
is called the vector space spanned by the set of vectors $u_1, u_2, \dots, u_n$. We say that the set of n linearly independent vectors forms the basis for the n-dimensional vector space which it spans. Because the totality of all vectors v, formed from eq. (9), is obtained as linear combinations of the u-vectors, the vector space so defined is also a linear vector space.

From eqs. (8) and (9) we can define a linear vector space of n dimensions if we can find a set of n linearly independent vectors but no such set of (n + 1) vectors; for the (n + 1)th vector in this space, like v, will be linearly dependent on the u's.

Hilbert space. The vector space suitable for quantum mechanics is called the Hilbert or function space. To be precise, this is a linear vector space, usually of infinite dimensions and complex, such that all infinite series occurring in it are convergent. Most of the discussion of this chapter is confined to the complex linear vector space of n dimensions; but the results arrived at will be applicable to the case n → ∞ (i.e., suitable for quantum mechanics) with additional restrictions on the vectors and operators of the Hilbert space.

Some definitions in the linear vector space

Basis or basis vectors. In an n-dimensional space we may choose a set of n linearly independent vectors, say $\psi_1, \psi_2, \dots, \psi_n$. Any arbitrary vector ψ in this space can be expanded in terms of these:
$$\psi = c_1 \psi_1 + c_2 \psi_2 + \dots + c_n \psi_n = \sum_{i=1}^{n} c_i \psi_i; \quad (10)$$
then the vectors $\psi_i$ are called basis vectors or a complete set of vectors of the vector space. The coefficients $c_i$ in this expansion may be complex, in general, and are called the components of the vector ψ.

Scalar (hermitian) product of two vectors. For two vectors φ and ψ, the (hermitian) scalar product is defined by
$$\int \phi^* \psi \, d\tau = \langle \phi | \psi \rangle. \quad (11)$$
If φ = ψ, then eq. (11) defines the norm or length of the vector ψ:
$$\langle \psi | \psi \rangle = \int \psi^* \psi \, d\tau \geq 0. \quad (12)$$

Orthonormality property of basis vectors. If a set of basis vectors $\psi_1, \dots, \psi_n$ has the property
$$\langle \psi_i | \psi_j \rangle = \delta_{ij}, \quad (i, j = 1, 2, \dots, n) \quad (13)$$
i.e.,
$$\langle \psi_i | \psi_j \rangle = 0, \quad i \neq j, \quad (14)$$
$$\langle \psi_i | \psi_i \rangle = 1, \quad i = j, \quad (15)$$
it is called an orthonormal set of basis vectors, and eq. (13) defines the orthonormality property of basis vectors. Eq. (14) above defines an orthogonal system of basis vectors, and eq. (15) is the normalisation condition.
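A brief numerical aside (added here, not in the original): NumPy's np.vdot conjugates its first argument, which matches the hermitian scalar product x†y of eq. (6), and the rank test below is one practical way of checking linear independence:

```python
import numpy as np

# Real scalar product and squared length, eqs. (4)-(5)
x = np.array([1.0, 2.0, 2.0])
y = np.array([3.0, 0.0, -1.0])
print(x @ y, x @ x)                      # 1.0 and 9.0

# Hermitian scalar product and the identity x'y = (y'x)*, eq. (7)
u = np.array([1 + 1j, 2 - 1j])
v = np.array([3j, 1 + 2j])
print(np.vdot(u, v) == np.conj(np.vdot(v, u)))   # True

# Linear independence: vectors are independent iff the matrix having
# them as columns has full rank
U = np.column_stack([[1.0, 0, 0], [1.0, 1, 0], [1.0, 1, 1]])
print(np.linalg.matrix_rank(U) == 3)     # True: a basis of 3-space
```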
From the point of view of quantum mechanics, the solutions of the Schroedinger equation for a particular system form an orthogonal set in the Hilbert space (as n → ∞) and are usually normalised to one. There, each stationary state specified by an eigenfunction can be represented by a vector, and the eigenfunctions form a complete set of basis vectors (orthogonal) in the Hilbert space. By completeness, we understand a system of basis vectors in terms of which any arbitrary wavefunction, represented by the vector ψ, can be expanded (i.e., can be written as a vector sum):
$$\psi = \sum_i a_i \psi_i. \quad (16)$$
Here $a_i$ corresponds to the component of the state vector ψ along the ith basis vector.

The basis vector representation of an arbitrary vector allows us to write the scalar product of two arbitrary vectors $\psi_a$, $\psi_b$ in the familiar form
$$\langle \psi_a | \psi_b \rangle = \Big\langle \sum_i a_i \psi_i \,\Big|\, \sum_j b_j \psi_j \Big\rangle = \sum_{i,j} a_i^* b_j \langle \psi_i | \psi_j \rangle = \sum_{i,j} a_i^* b_j \, \delta_{ij} = \sum_i a_i^* b_i. \quad (17)$$
Hence, in terms of its components, the square of the norm of the vector $\psi_a$ can be written as
$$\langle \psi_a | \psi_a \rangle = \sum_i a_i^* a_i = \sum_i |a_i|^2. \quad (18)$$

Schwartz inequality. This relation is very important and relates the scalar product of two vectors to their norms. Let us define
$$\psi = \psi_a + \lambda \psi_b,$$
where the constant λ may be complex. Since the norm of a vector can never be negative,
$$\langle \psi | \psi \rangle = \langle (\psi_a + \lambda \psi_b) \,|\, (\psi_a + \lambda \psi_b) \rangle \geq 0$$
or
$$\langle \psi_a | \psi_a \rangle + \lambda \langle \psi_a | \psi_b \rangle + \lambda^* \langle \psi_b | \psi_a \rangle + \lambda \lambda^* \langle \psi_b | \psi_b \rangle \geq 0. \quad (19)$$
The left-hand side will be a minimum (or maximum) for the value of λ given by $\partial \langle \psi | \psi \rangle / \partial \lambda^* = 0$, i.e.,
$$\langle \psi_b | \psi_a \rangle + \lambda \langle \psi_b | \psi_b \rangle = 0 \quad \text{or} \quad \lambda = -\frac{\langle \psi_b | \psi_a \rangle}{\langle \psi_b | \psi_b \rangle}.$$
Substituting this value in eq. (19), we get the Schwartz inequality
$$\langle \psi_a | \psi_a \rangle \langle \psi_b | \psi_b \rangle \geq |\langle \psi_a | \psi_b \rangle|^2. \quad (20)$$

11.5. LINEAR OPERATORS: LINEAR TRANSFORMATIONS

An entity A that relates every vector ψ in a linear vector space to another vector φ in this space by the equation
$$\phi = A \psi \quad (1)$$
is called an operator. The operator A is said to be linear if it possesses the properties
$$A(\lambda_1 \psi_a + \lambda_2 \psi_b) = \lambda_1 A \psi_a + \lambda_2 A \psi_b \quad (2)$$
and
$$A\,a\psi = a\,A\psi, \quad (3)$$
where $\lambda_1$, $\lambda_2$, a are complex numbers.

Since, in eq. (1), the two vectors φ and ψ are defined in a linear vector space, this equation also defines a linear transformation in the vector space. A familiar example of a linear transformation is the following set of m linear equations in n unknowns (variables) $x_1, x_2, \dots, x_n$:
$$a_{11} x_1 + a_{12} x_2 + \dots + a_{1n} x_n = c_1$$
$$a_{21} x_1 + a_{22} x_2 + \dots + a_{2n} x_n = c_2$$
$$\dots$$
$$a_{m1} x_1 + a_{m2} x_2 + \dots + a_{mn} x_n = c_m \quad (4)$$
The set of equations is equivalent to a single matrix equation Ax = c, where A is called the coefficient matrix and x is the column vector or column matrix. The $a_{ij}$'s (i = 1, ..., m; j = 1, ..., n) are the constant coefficients of the variables and can be arranged in rows and columns to form a matrix
$$A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{pmatrix}. \quad (5)$$
The sets of quantities $x_j$ (j = 1, 2, ..., n) and $c_i$ (i = 1, 2, ..., m) can also be represented as a column matrix each (or as column vectors):
$$x = \{x_1, \dots, x_n\} = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}, \quad c = \{c_1, \dots, c_m\} = \begin{pmatrix} c_1 \\ \vdots \\ c_m \end{pmatrix}. \quad (6)$$
When we choose the representations (5) and (6), the system of m linear equations defines a linear transformation in matrix form
$$Ax = c. \quad (7)$$
When eq. (1) is seen in the context of eq. (7), it becomes obvious why we prefer to describe eq. (1) as a linear transformation. We shall now attempt to describe the matrix (column) representation of the state vectors φ and ψ and the matrix representation of the operator A in the linear vector space.
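As another added numerical aside (not from the original text), the sketch below checks the Schwartz inequality for random complex vectors and treats a small linear system as the matrix transformation Ax = c of eq. (7):

```python
import numpy as np

rng = np.random.default_rng(0)

# Schwartz inequality: <a|a><b|b> >= |<a|b>|^2 for complex vectors
a = rng.normal(size=4) + 1j * rng.normal(size=4)
b = rng.normal(size=4) + 1j * rng.normal(size=4)
lhs = np.vdot(a, a).real * np.vdot(b, b).real
print(lhs >= abs(np.vdot(a, b)) ** 2)        # True

# A set of m linear equations as a single matrix equation Ax = c
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
x = np.array([1.0, -1.0, 2.0])
c = A @ x                                    # the linear transformation of x
print(np.allclose(np.linalg.solve(A, c), x)) # recovering x from A and c
```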
11.6. MATRIX FORM (REPRESENTATION) OF AN OPERATOR

We shall here find the matrix representation of an operator A. When the operator A operates on some function f(x), the function is transformed into a new function g(x), i.e.,
$$A f(x) = g(x). \quad (1)$$
It is possible to express the functions f(x) and g(x) in terms of a complete orthonormal set of eigenfunctions $\psi_n(x)$ of the operator A, i.e.,
$$f(x) = \sum_m a_m \psi_m(x) \quad \text{and} \quad g(x) = \sum_n b_n \psi_n(x). \quad (2)$$
Substituting eq. (2) in eq. (1), we get
$$A \sum_m a_m \psi_m(x) = \sum_n b_n \psi_n(x). \quad (3)$$
In order to obtain $b_l$, we multiply eq. (3) by $\psi_l^*(x)$ and integrate over the entire space; hence
$$\Big\langle \psi_l \Big| A \Big| \sum_m a_m \psi_m \Big\rangle = \Big\langle \psi_l \Big| \sum_n b_n \psi_n \Big\rangle,$$
$$\sum_m a_m \langle \psi_l | A | \psi_m \rangle = \sum_n b_n \langle \psi_l | \psi_n \rangle,$$
$$\sum_m a_m A_{lm} = \sum_n b_n \delta_{ln} = b_l, \quad (4)$$
where
$$A_{lm} = \langle \psi_l | A | \psi_m \rangle. \quad (5)$$

Equation (4) determines the transformation of the function f(x), represented by its components $a_m$, into g(x), represented by its components $b_l$, under the action of the operator A. The operator A in this representation is given by eq. (5) in the form of a matrix. The quantities $A_{lm}$ are termed the matrix elements in the representation $\{\psi_n\}$.

If the operator A is hermitian, then it may be represented as a diagonal matrix with real eigenvalues; for, when the $\psi_m$ are its eigenfunctions,
$$A \psi_m = a_m \psi_m,$$
and from eq. (5)
$$A_{lm} = \int \psi_l^*(x) \, a_m \psi_m(x) \, d\tau = a_m \delta_{lm}. \quad (6)$$
Equation (6) shows that the matrix elements with l = m are non-zero, while the elements with l ≠ m are zero. Thus the matrix is diagonal:
$$(A) = \begin{pmatrix} a_1 & 0 & 0 & \dots \\ 0 & a_2 & 0 & \dots \\ 0 & 0 & a_3 & \dots \\ \vdots & & & \end{pmatrix}. \quad (7)$$

Thus each hermitian operator has a representation as a diagonal matrix, provided the wavefunction is expanded in terms of its eigenfunctions. Hence the matrix representation of an operator enables us to determine the eigenvalues and eigenfunctions of the corresponding operator. If the operator is known, then the matrix element $A_{lm}$ can be determined; moreover, if the operator is hermitian, then it can be represented by a diagonal matrix.

11.7. COLUMN REPRESENTATION OF THE WAVEFUNCTION

Suppose that in the system of basis vectors $\psi_i$ (i = 1, 2, ..., n) the wavefunction φ has the expansion
$$\phi = \sum_i c_i \psi_i = c_1 \psi_1 + c_2 \psi_2 + \dots \quad (1)$$
We say that φ is specified by the expansion coefficients $c_i$ in this representation. The $c_i$'s can be found if we multiply eq. (1) by $\psi_j^*$ and integrate over all space:
$$\int \psi_j^* \phi \, d\tau = \sum_i c_i \int \psi_j^* \psi_i \, d\tau = \sum_i c_i \, \delta_{ji} = c_j,$$
i.e., $c_j = \langle \psi_j | \phi \rangle$.
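The diagonal representation of eqs. (6)-(7) and the expansion coefficients $c_j = \langle \psi_j | \phi \rangle$ of section 11.7 can both be illustrated in a finite-dimensional sketch (an added example; a 4-dimensional space stands in here for the infinite-dimensional Hilbert space):

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a hermitian operator (real symmetric here) and its orthonormal
# eigenbasis; the columns of V play the role of the eigenfunctions psi_m.
A = rng.normal(size=(4, 4))
A = A + A.T
eigvals, V = np.linalg.eigh(A)

# Matrix elements A_lm = <psi_l|A|psi_m> in the eigenbasis: diagonal, eq. (6)
A_lm = V.conj().T @ A @ V
print(np.allclose(A_lm, np.diag(eigvals)))     # True

# Column representation of a state phi: c_j = <psi_j|phi>
phi = rng.normal(size=4)
c = V.conj().T @ phi                           # all components at once
print(np.allclose(V @ c, phi))                 # expanding back recovers phi
```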
