
5.73 Lecture #10 10 - 1

Matrix Mechanics

Should have read CTDL pages 94-121; read CTDL pages 121-144 ASAP.

Last time: * Numerov-Cooley Integration of 1-D Schr. Eqn. Defined on a Grid.


* 2-sided boundary conditions (two different kinds of boundary condition)
* nonlinear system - iterate to eigenenergies (Newton-Raphson)

So far focused on ψ(x) and Schr. Eq. as differential equation.


Variety of methods {Ei, ψi(x)} ↔ V(x)

Often we want to evaluate integrals of the form

∫ ψ*(x) φ_i(x) dx = a_i

a_i is a "mixing coefficient" — the overlap of a special ψ with standard
functions {φ}; φ_i is a member of a "complete" set of basis functions, {φ}

OR expectation values and transition moments

∫ φ_i* x̂ⁿ φ_j dx ≡ (xⁿ)_ij

There are going to be elegant tricks for evaluating these integrals and relating one
integral to others that are already known. Also “selection” rules for knowing
automatically which integrals are zero: symmetry, commutation rules

Today: begin matrix mechanics - deal with matrices composed of these integrals -
focus on manipulating these matrices rather than solving a differential
equation - find eigenvalues and eigenvectors of matrices instead
(COMPUTER “DIAGONALIZATION”). LINEAR ALGEBRA.
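The "computer diagonalization" step can be sketched in numpy (a minimal illustration, not from the lecture; the matrix entries are made up):

```python
import numpy as np

# A small made-up Hermitian "Hamiltonian" matrix, for illustration only
H = np.array([[2.0, 0.5, 0.0],
              [0.5, 3.0, 0.5],
              [0.0, 0.5, 4.0]])

# eigh is specialized for Hermitian/symmetric matrices: it returns real
# eigenvalues (ascending) and orthonormal eigenvectors as columns of C
E, C = np.linalg.eigh(H)

# Check: H C = C diag(E), i.e. each column of C is an eigenvector
assert np.allclose(H @ C, C @ np.diag(E))
# The eigenvectors are orthonormal: C† C = 1
assert np.allclose(C.conj().T @ C, np.eye(3))
```
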

HIGHLIGHTS

* Perturbation Theory: tricks to find approximate eigenvalues of infinite matrices

* Wigner-Eckart Theorem and 3-j coefficients: use symmetry to identify and inter-relate values of nonzero integrals

* Density Matrices: information about "state of system" as separate from "measurement operators"

updated 8/13/20 8:22 AM



First Goal: Dirac notation as convenient NOTATIONAL simplification


It is actually a new abstract picture
(vector spaces) — but we will stress the utility of ψ ↔ |〉 relationships
rather than the philosophy!

Find equivalent matrix form of standard ψ(x) concepts and methods.

1. Orthonormality    ∫ ψ_i* ψ_j dx = δ_ij

2. Completeness    ψ(x) is an arbitrary function

(expand ψ) A. Always possible to expand ψ(x) uniquely in a COMPLETE BASIS SET {φ}:

ψ(x) = Σ_i a_i φ_i(x)

a_i is the mixing coefficient — how to get it? Left-multiply by φ_i* and integrate over x:

a_i = ∫ φ_i* ψ dx

(expand B̂ψ) B. Always possible to expand B̂ψ in {φ}, since we can write ψ in terms of {φ}.
So simplify the question we are asking to B̂φ_i = Σ_j b_j φ_j.

What are the {b_j}? Multiply by φ_j* and integrate:

b_j = ∫ φ_j* B̂ φ_i dx ≡ B_ji

B̂ φ_i = Σ_j B_ji φ_j    note the counter-intuitive pattern of indices. We will return to this.

* The effect of any operator on φ_i is to give a linear combination of φ_j's.
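The counter-intuitive index pattern can be checked numerically: in matrix language, an operator acting on the i-th basis vector returns the i-th column of its matrix, whose entries are B_ji with the row index j labeling the output. A minimal numpy sketch (matrix entries made up):

```python
import numpy as np

# A made-up operator matrix in some basis
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# |phi_i> in its own basis is a unit vector; take i = 0 (Python is 0-indexed)
e_i = np.array([1.0, 0.0])

# B e_i gives the expansion coefficients b_j = B_{ji}:
# the i-th COLUMN of B, with the ROW index j labeling the result
coeffs = B @ e_i
assert np.allclose(coeffs, B[:, 0])
```
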

3. Products of Operators

(ÂB̂)φ_i = Â(B̂φ_i) = Â Σ_j B_ji φ_j

can move numbers (but not operators) around freely:

= Σ_j B_ji Âφ_j = Σ_j Σ_k B_ji A_kj φ_k    note repeated j-index

= Σ_{j,k} (A_kj B_ji) φ_k = Σ_k (AB)_ki φ_k    note repeated k-index

* Thus the product of 2 operators follows the rules of matrix multiplication:

ÂB̂ acts like A B

Recall rules for matrix multiplication:

indices of a matrix are A_{row,column}

must match # of columns on left to # of rows on right; the order of matrices matters!

(N × N) ⊗ (N × N) → (N × N)    a matrix
(1 × N) ⊗ (N × 1) → (1 × 1)    a number    ("row vector" × "column vector")
(N × 1) ⊗ (1 × N) → (N × N)    a matrix!   ("column vector" × "row vector")
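These shape rules can be verified directly in numpy (a minimal sketch):

```python
import numpy as np

row = np.ones((1, 3))   # a "bra": 1 x N
col = np.ones((3, 1))   # a "ket": N x 1

# inner product: (1 x N)(N x 1) -> 1 x 1, a number
assert (row @ col).shape == (1, 1)
# outer product: (N x 1)(1 x N) -> N x N, a matrix -- order matters!
assert (col @ row).shape == (3, 3)
```
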

Need a notation that accomplishes all of this memorably and compactly.

Dirac’s bra and ket notation
Heisenberg’s matrix mechanics
ket is a column matrix, i.e. a vector: (a_1, a_2, …, a_N)^T

The ket contains all of the “mixing coefficients” for ψ expressed in some
(implicit) basis set.
[These are projections onto unit vectors in N-dimensional vector space.]
Must be clear what state is being expanded in what basis.

ψ(x) = Σ_i [∫ φ_i* ψ dx] φ_i(x)    express ψ in the {φ_i} basis

|ψ⟩ = (∫ φ_1* ψ dx, ∫ φ_2* ψ dx, …, ∫ φ_N* ψ dx)^T_φ = (a_1, a_2, …, a_N)^T

* ψ expressed in the φ basis
* a column of complex #s
* nothing here is a function of x

The subscript φ is a bookkeeping device (RARELY USED) to specify the basis set.

OR, a pure state in its own basis:

|φ_2⟩ = (0, 1, 0, …, 0)^T    one 1, all others 0 (often expressed as |2⟩)

and a general ψ is a weighted sum of unit vectors:

|ψ⟩ = a_1 (1, 0, …, 0)^T + a_2 (0, 1, …, 0)^T + … + a_N (0, …, 0, 1)^T

bra is a row matrix: (b_1, b_2, …, b_N)

The bra contains all mixing coefficients for ψ* in the {φ*} basis set:

ψ*(x) = Σ_i [∫ φ_i ψ* dx] φ_i*(x)    (this is the * of the ψ(x) expansion above)

The * stuff is needed to make sure ⟨ψ|ψ⟩ = 1 even though ⟨φ_i|ψ⟩ is complex.
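A hedged numpy sketch of why the conjugation matters (coefficient values made up):

```python
import numpy as np

# A ket with complex mixing coefficients (made-up values)
ket = np.array([[0.6 + 0.0j],
                [0.0 + 0.8j]])

# The bra is the conjugate TRANSPOSE of the ket; without the complex
# conjugation, <psi|psi> would not come out real and positive
bra = ket.conj().T

norm = (bra @ ket)[0, 0]
assert np.isclose(norm, 1.0)          # 0.36 + 0.64 = 1
assert np.isclose(norm.imag, 0.0)     # real, as a norm must be
```
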


The symbol 〈a|b〉, a bra–ket, is defined in the sense of a product of
(1 × N) ⊗ (N × 1) matrices → a 1 × 1 matrix: a number!

Box Normalization in both ψ and 〈 | 〉 pictures

1 = ∫ ψ* ψ dx

Expand both in the orthonormal φ basis:

ψ = Σ_i (∫ φ_i* ψ dx) φ_i

ψ* = Σ_j (∫ φ_j ψ* dx) φ_j*    (take the complex conjugate of the ψ equation)

1 = ∫ ψ* ψ dx = Σ_{i,j} (∫ φ_j ψ* dx)(∫ φ_i* ψ dx) ∫ φ_j* φ_i dx

The last factor is δ_ij, which forces the 2 sums (over i and j) to collapse into 1 sum (over j):

1 = Σ_j |∫ φ_j* ψ dx|²    real, positive #'s

We have proved that the sum of |mixing coefficients|² = 1. These mixing
coefficients "squared" are called "mixing fractions" or "fractional characters".

Now in the ⟨ | ⟩ picture:

⟨ψ|ψ⟩ = (∫ φ_1 ψ* dx  ∫ φ_2 ψ* dx  …) (∫ φ_1* ψ dx, ∫ φ_2* ψ dx, …)^T = 1 × 1 matrix
         [row vector: "bra"]           [column vector: "ket"]

= Σ_j |∫ φ_j* ψ dx|²    same result as in the wavefunction representation

[CTDL talks about “dual vector spaces” — best to walk before you run. Always
translate 〈 〉 into ψ picture until you are sure you understand the notation.]
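The Σ|a_j|² = 1 result translates directly into numpy (a sketch, assuming a randomly generated orthonormal basis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthonormal basis {phi_j}: the columns of a unitary matrix (built via QR)
M = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
Phi, _ = np.linalg.qr(M)

# A normalized state psi
psi = rng.normal(size=5) + 1j * rng.normal(size=5)
psi /= np.linalg.norm(psi)

# Mixing coefficients a_j = <phi_j|psi>
a = Phi.conj().T @ psi

# The sum of "mixing fractions" |a_j|^2 is 1
assert np.isclose(np.sum(np.abs(a) ** 2), 1.0)
```
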

Any symbol 〈 〉 is a complex number.
Any symbol ∣ 〉 〈 ∣ is a square matrix.

again ⟨ψ|ψ⟩ = (⟨ψ|φ_1⟩  ⟨ψ|φ_2⟩  …) (⟨φ_1|ψ⟩, ⟨φ_2|ψ⟩, …)^T

= Σ_i ⟨ψ|φ_i⟩⟨φ_i|ψ⟩ = ⟨ψ|ψ⟩ = 1    (Σ_i |φ_i⟩⟨φ_i| is the unit matrix 𝟙)

What is |φ_1⟩⟨φ_1|? It is the (1 × N) ⊗ (N × 1) product taken in outer-product order:

|φ_1⟩⟨φ_1| = (1, 0, …, 0)^T (1 0 … 0) = the square matrix with a 1 in the (1,1) position and 0 everywhere else

(Three dots are shorthand for specifying only the important part of an infinite matrix; a large zero (0) denotes a lot of zeroes.)

What is Σ_i |φ_i⟩⟨φ_i|? It is the unit or identity matrix 𝟙: 1's on the diagonal, 0 elsewhere.

"Completeness" or "closure" involves insertion of 𝟙 between any two symbols.
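A minimal numpy check of the projector and closure statements (standard basis assumed, for illustration):

```python
import numpy as np

N = 4
e = np.eye(N)  # columns are the basis kets |phi_i> in their own basis

# |phi_1><phi_1| (index 0 here): a square matrix with a single 1 at (0, 0)
P0 = np.outer(e[:, 0], e[:, 0].conj())
assert P0[0, 0] == 1.0 and np.count_nonzero(P0) == 1

# Closure: summing |phi_i><phi_i| over the whole basis gives the unit matrix
closure = sum(np.outer(e[:, i], e[:, i].conj()) for i in range(N))
assert np.allclose(closure, np.eye(N))
```
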

Use 𝟙 to evaluate the matrix elements of the product of 2 operators, AB (we know
how to do this in the ψ picture).

⟨φ_i|A|φ_j⟩ = (0 … 1 … 0) (A) (0, …, 1, …, 0)^T

The ket (1 in the j-th position) picks out the j-th column of A; the bra
(1 in the i-th position) then picks out the i-th element of that column vector:

⟨φ_i|A|φ_j⟩ = (0 … 1 … 0) (A_1j, A_2j, …)^T = A_ij

⟨φ_i|AB|φ_j⟩ = Σ_k ⟨φ_i|A|φ_k⟩ ⟨φ_k|B|φ_j⟩

= Σ_k A_ik B_kj = (AB)_ij    a number (obtained by matrix multiplication)
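The insertion-of-𝟙 result can be checked numerically (random matrices, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))
e = np.eye(3)

# <phi_i|A|phi_j>: the bra/ket unit vectors pick out the (i, j) element
assert np.isclose(e[1] @ A @ e[2], A[1, 2])

# Inserting 1 = sum_k |phi_k><phi_k| between A and B turns the matrix
# element of the product into ordinary matrix multiplication
ab_12 = sum((e[1] @ A @ e[k]) * (e[k] @ B @ e[2]) for k in range(3))
assert np.isclose(ab_12, (A @ B)[1, 2])
```
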

In the Heisenberg picture, how do we get an exact equivalent of ψ(x)?

Use the basis set δ(x, x_0) for all x_0 — this is a "complete" basis (the eigenbasis
of x̂, with eigenvalue x_0): perfect localization at any x_0.

The ⟨x|ψ⟩ symbol is the same thing as ψ(x)

i.e., ∫ δ(x, x′)* ψ(x′) dx′ = ψ(x)    (x is continuously variable ↔ δ(x))

Overlap of the state vector ψ with δ(x) is a complex number. ψ(x) is a complex
function of a real variable.

Other ψ ↔ ⟨ | ⟩ relationships

1. All observable quantities are represented by a Hermitian operator (why?
because the expectation values of a Hermitian operator are always real).

Definition of Hermitian operator: for a matrix, A_ij = A_ji*, or A = A†.
It is easy to prove that if all expectation values of A are real, then A = A†, and vice-versa.

2. Change of basis set: A^φ ↔ A^u, i.e. {φ} to {u}

A^φ_ij ≡ ⟨φ_i|A|φ_j⟩ = ⟨φ_i|𝟙A𝟙|φ_j⟩

= Σ_{k,ℓ} ⟨φ_i|u_k⟩ ⟨u_k|A|u_ℓ⟩ ⟨u_ℓ|φ_j⟩

where ⟨u_ℓ|φ_j⟩ ≡ S_ℓj and ⟨φ_i|u_k⟩ = S*_ki ≡ S†_ik.
(S is frequently used to denote an "overlap" integral.)

= Σ_{k,ℓ} S†_ik A^u_kℓ S_ℓj = (S† A^u S)_ij ≡ A^φ_ij

A^φ = S† A^u S    S is a special kind of transformation (unitary),
different from the more-familiar T⁻¹AT "similarity" transformation.
For a state vector (ket):

|φ_j⟩ = Σ_ℓ |u_ℓ⟩ ⟨u_ℓ|φ_j⟩ = Σ_{ℓ=1}^N |u_ℓ⟩ S_ℓj = (S_1j, …, S_Nj)^T

This is the j-th column of S.

The linear combination of the u_ℓ for each φ_j is the j-th
column of S. Also, the linear combination of the φ_j for
each u_i is the i-th column of S†. [This is a very useful thing to remember.]

|u_i⟩ = (0, …, 1, …, 0)^T_u    (1 in the i-th position: a pure state in the {u} basis)

= (S†_1i, S†_2i, …, S†_Ni)^T_φ    (a mixed state in the {φ} basis)

= S†_1i (1, 0, …, 0)^T + S†_2i (0, 1, …, 0)^T + … + S†_Ni (0, …, 0, 1)^T
  (a weighted sum of pure states in the {φ} basis)

What kind of matrix is S?

S_ℓj = ⟨u_ℓ|φ_j⟩

S*_ℓj = [⟨u_ℓ|φ_j⟩]* = ⟨φ_j|u_ℓ⟩ ≡ S†_jℓ

† means take the complex conjugate and interchange indices.

Using the definitions of S and S†:

Σ_j S_ℓj S†_jk = Σ_j ⟨u_ℓ|φ_j⟩ ⟨φ_j|u_k⟩ = ⟨u_ℓ|𝟙|u_k⟩ = ⟨u_ℓ|u_k⟩ = δ_ℓk = 𝟙_ℓk

SS† = 𝟙 OR S† = S⁻¹    "Unitary" — a very special and convenient property.

S† is the inverse of S!
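The unitarity of S can be checked numerically for any pair of orthonormal bases (randomly generated here, for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
# Two orthonormal bases, as columns of unitary matrices (built via QR)
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
Phi, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

# Overlap matrix S_{lj} = <u_l|phi_j>
S = U.conj().T @ Phi

# S S† = 1 and S† = S^{-1}: S is unitary
assert np.allclose(S @ S.conj().T, np.eye(4))
assert np.allclose(S.conj().T, np.linalg.inv(S))
```
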

Unitary transformations preserve both normalization and orthogonality.

A^φ = S† A^u S
S A^φ S† = S S† A^u S S† = A^u
A^u = S A^φ S†

Take the matrix element of both sides of the equation:

A^u_ij = ⟨u_i|A|u_j⟩ = (S A^φ S†)_ij = Σ_{k,ℓ} S_ik ⟨φ_k|A|φ_ℓ⟩ S†_ℓj

∴ |u_j⟩ = Σ_ℓ |φ_ℓ⟩ S†_ℓj    u_j is the j-th column of S†

φ → u via S†, S: u_j is the j-th column of S†

Thus,

|u_j⟩ = (0, …, 1, …, 0)^T_u    (1 in the j-th position)
      = (S†_1j, S†_2j, …, S†_Nj)^T_φ

Similarly,

A^φ_pq = ⟨φ_p|A|φ_q⟩ = (S† A^u S)_pq = Σ_{m,n} S†_pm ⟨u_m|A|u_n⟩ S_nq

∴ |φ_q⟩ = Σ_n |u_n⟩ S_nq    the q-th column of S

|φ_q⟩ = (0, …, 1, …, 0)^T_φ    (1 in the q-th position)
      = (S_1q, S_2q, …, S_Nq)^T_u

Commutation Rules

* [Â, B̂] = ÂB̂ − B̂Â

e.g. [x̂, p̂] = iℏ means

(x̂p̂ − p̂x̂)φ = x (ℏ/i) dφ/dx − (ℏ/i)(φ + x dφ/dx) = −(ℏ/i)φ = iℏφ
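The [x̂, p̂] = iℏ computation can be reproduced symbolically with sympy (a sketch; the test function f is arbitrary):

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar = sp.symbols('hbar', positive=True)
f = sp.Function('f')(x)

# p = (hbar/i) d/dx acting on a test function
p = lambda g: (hbar / sp.I) * sp.diff(g, x)

# (x p - p x) f = x (hbar/i) f' - (hbar/i)(f + x f') = -(hbar/i) f = i hbar f
comm = x * p(f) - p(x * f)
assert sp.simplify(comm - sp.I * hbar * f) == 0
```
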

* If Â and B̂ are Hermitian, is ÂB̂ Hermitian?

For Hermitian A and B:

(AB)_ij = Σ_k A_ik B_kj = Σ_k A*_ki B*_jk = Σ_k B*_jk A*_ki = (BA)*_ji

but this is not what we need to be able to show that AB is Hermitian.
That would be: (AB)_ij = (AB)*_ji, i.e. AB = (AB)†. What we have shown instead is (AB)† = BA ≠ AB in general.

AB is Hermitian only if [A, B] = 0.

However, (1/2)[AB + BA] is Hermitian if both A and B are Hermitian.
2

This is a foolproof way to construct a new Hermitian operator out of


simpler Hermitian operators.
This is the standard prescription for implementing the Correspondence
Principle for constructing a quantum mechanical equivalent of a
classical mechanical quantity. Quantities that commute in classical
mechanics do not always commute in quantum mechanics. Almost
everything that is not classical mechanical in quantum mechanics is
derivable from [ x̂ , p̂ x ] ≠ 0!
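The symmetrization prescription can be checked in numpy (random Hermitian matrices, which almost surely do not commute):

```python
import numpy as np

rng = np.random.default_rng(4)

def random_hermitian(n):
    # M + M† is Hermitian for any complex M
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return M + M.conj().T

A = random_hermitian(3)
B = random_hermitian(3)

def is_hermitian(M):
    return np.allclose(M, M.conj().T)

# (AB)† = B†A† = BA, so AB is generally NOT Hermitian when [A, B] != 0
assert not is_hermitian(A @ B)

# but the symmetrized product (AB + BA)/2 always is
assert is_hermitian((A @ B + B @ A) / 2)
```
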

MIT OpenCourseWare
https://ocw.mit.edu/

5.73 Quantum Mechanics I


Fall 2018

For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms.
