
Quantum Mechanics

Michael Pustilnik
School of Physics, Georgia Institute of Technology,
Atlanta, GA 30332

Postulate 1
For any quantum system there is an associated Hilbert space H. States of the system correspond
to normalized vectors in H.
Postulate 2
For any classical observable A there is a corresponding Hermitian operator Â = Â† acting in
H. Conversely, any Hermitian operator corresponds to some observable.
Postulate 3

A measurement of A yields one of the eigenvalues of the corresponding operator Â.


Postulate 4
A measurement of A on many identical copies of the system, all in state |ψ⟩, produces random results. The probability to find A = a is P_a = ⟨ψ| 1̂_a |ψ⟩, where 1̂_a is the projector onto the subspace of eigenvectors of Â corresponding to the eigenvalue a. Equivalently, the probability density of A is P(A) = ⟨ψ| δ(A·1̂ − Â) |ψ⟩.

Postulate 5
A measurement affects the state of the system: right after the measurement the system is in the state |ψ_a⟩ ∝ 1̂_a |ψ⟩, a normalized eigenvector of Â corresponding to the eigenvalue a found in the measurement. This change of the state as the result of the measurement is often referred to as the wave function collapse.
Postulate 6
Between the measurements, the state of the system evolves according to the Schrödinger equation, iℏ (d/dt) |ψ⟩ = Ĥ |ψ⟩, where the Hamiltonian Ĥ is the operator corresponding to the energy of the system.
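The postulates are easy to illustrate numerically in a finite-dimensional Hilbert space. The sketch below (an illustration added to these notes, not part of them; the matrix and the state are arbitrary choices) uses numpy to obtain the possible outcomes (Postulate 3), the outcome probabilities (Postulate 4), and the collapsed state (Postulate 5) for a non-degenerate observable.

```python
import numpy as np

# A Hermitian "observable" on a 3-dimensional Hilbert space (arbitrary choice)
A = np.array([[2.0, 1.0j, 0.0],
              [-1.0j, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
assert np.allclose(A, A.conj().T)             # Postulate 2: A = A^dagger

psi = np.array([1.0, 1.0j, 1.0]) / np.sqrt(3.0)   # a normalized state |psi>

eigvals, eigvecs = np.linalg.eigh(A)          # Postulate 3: possible outcomes
probs = np.abs(eigvecs.conj().T @ psi) ** 2   # Postulate 4: P_a = <psi|1_a|psi> (non-degenerate case)
print(dict(zip(np.round(eigvals, 3), np.round(probs, 3))), probs.sum())   # probabilities sum to 1

# Postulate 5: the state collapses onto the eigenvector of the observed outcome
outcome = np.random.choice(len(eigvals), p=probs)
print(eigvals[outcome], eigvecs[:, outcome])
```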

Hilbert space
A linear vector space H is a collection of abstract vectors (often called ket-vectors) such that their linear combinations also belong to H,

|α⟩, |β⟩ ∈ H  ⟹  a |α⟩ + b |β⟩ ∈ H,    (1)

where a and b are arbitrary complex numbers. A sum of two (or more) vectors, multiplication of a vector by a number, etc., have natural properties, e.g.,

a |α⟩ + b |α⟩ = (a + b) |α⟩,   a ( |α⟩ + |β⟩ ) = a |α⟩ + a |β⟩.

Any linear vector space contains a unique null-vector |null⟩, such that for any |α⟩ ∈ H

0 |α⟩ = |null⟩,   |α⟩ + |null⟩ = |α⟩.    (2)

A Hilbert space is a linear vector space endowed with a scalar product. Essentially, it is a rule that associates a complex number with any two vectors from H,

|α⟩, |β⟩ ∈ H  ⟹  ⟨α|β⟩ (a complex number).

The scalar product ⟨α|β⟩ is postulated to satisfy

⟨α|β⟩ = ⟨β|α⟩*,    (3)
⟨α|α⟩ ≥ 0, with ⟨α|α⟩ = 0  ⟺  |α⟩ = |null⟩,    (4)
⟨α| ( c₁ |β₁⟩ + c₂ |β₂⟩ ) = c₁ ⟨α|β₁⟩ + c₂ ⟨α|β₂⟩.    (5)

It can be shown that the Schwarz inequality,

|⟨α|β⟩|² ≤ ⟨α|α⟩ ⟨β|β⟩,    (6)

and the triangle inequality,

√⟨γ|γ⟩ ≤ √⟨α|α⟩ + √⟨β|β⟩  for  |γ⟩ = |α⟩ + |β⟩,    (7)

follow from these postulates.
We shall assume that the Hilbert space H comprises only normalizable vectors with ⟨ψ|ψ⟩ < ∞ (this restriction is important for infinite-dimensional Hilbert spaces).

A vector is called normalized if ⟨ψ|ψ⟩ = 1. According to Postulate 1, all states of the system are described by such normalized vectors; every normalized vector corresponds to a state which, in principle, can be realized. Linear combinations of such state vectors form the Hilbert space of the system.
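As a quick numerical aside (added here; the vectors are arbitrary), finite-dimensional kets can be represented by complex arrays, the scalar product by numpy.vdot, and the properties (3) and (6)-(7) checked directly:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = rng.normal(size=4) + 1j * rng.normal(size=4)   # |alpha>
beta = rng.normal(size=4) + 1j * rng.normal(size=4)    # |beta>

braket = np.vdot(alpha, beta)              # <alpha|beta>; vdot conjugates its first argument
assert np.isclose(braket, np.conj(np.vdot(beta, alpha)))        # Eq. (3)

# Schwarz inequality, Eq. (6)
assert abs(braket) ** 2 <= np.vdot(alpha, alpha).real * np.vdot(beta, beta).real

# triangle inequality, Eq. (7), for |gamma> = |alpha> + |beta>
norm = lambda v: np.sqrt(np.vdot(v, v).real)
assert norm(alpha + beta) <= norm(alpha) + norm(beta)
print("scalar-product checks passed")
```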

Operators
An operator Â acting in H maps the Hilbert space onto itself: it is a rule that to any vector |α⟩ ∈ H assigns some other vector |α′⟩ ∈ H,

Â : |α⟩ → |α′⟩.

The vector |α′⟩ is said to be the result of the action of Â on |α⟩ and is denoted as |Âα⟩ or Â|α⟩:

|α′⟩ = |Âα⟩ = Â |α⟩.

Basis



A set of vectors {|φ_n⟩}, n = 1, . . . , N, is called linearly independent if the equation

Σ_{n=1}^{N} c_n |φ_n⟩ = |null⟩    (8)

has no solution for {c_n} other than the trivial one (c_n = 0 for all n). Any linear vector space H is characterized by its dimension N_H = max{N}, the largest number of vectors a linearly independent set can have. Thus, any set of N_H linearly independent vectors serves as a basis for H: any |α⟩ ∈ H can be written as a linear combination of the basis vectors,

|α⟩ = Σ_{n=1}^{N_H} α_n |φ_n⟩.    (9)

It can be shown that the coefficients α_n (often called the components of the vector |α⟩ in the basis {|φ_n⟩}) in this expansion are unique for a given basis set. Obviously, multiplying a vector by a number reduces to multiplying all its components by that number. Adding two vectors amounts to adding their components.

Given a linearly independent set of N_H vectors, one can use the so-called Gram-Schmidt procedure to construct an orthonormal basis set {|φ_n⟩}, for which

⟨φ_m|φ_n⟩ = δ_{m,n}.    (10)

From now on, we will only deal with such orthonormal basis sets.

The components α_n of a vector |α⟩ in an orthonormal basis {|φ_n⟩} have a particularly simple form, α_n = ⟨φ_n|α⟩, so that

|α⟩ = Σ_n |φ_n⟩⟨φ_n|α⟩.    (11)

The scalar product of |α⟩ and |β⟩ can be expressed via their components in an orthonormal basis as

⟨α|β⟩ = Σ_n α_n* β_n.    (12)

Obviously, ⟨α|β⟩ given by Eq. (12) has all the properties listed in Eqs. (3)-(5) above.
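The Gram-Schmidt construction behind Eq. (10) is short enough to spell out. The sketch below (added for illustration; the function and variable names are ad hoc) orthonormalizes the columns of a matrix of linearly independent vectors and verifies Eq. (10):

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    """Orthonormalize the columns of `vectors` (assumed linearly independent)."""
    basis = []
    for v in vectors.T:
        w = v.astype(complex)
        for e in basis:
            w = w - e * np.vdot(e, w)       # subtract the projection |e><e|w>
        norm = np.linalg.norm(w)
        if norm > tol:                       # guard against (numerically) dependent input
            basis.append(w / norm)
    return np.column_stack(basis)

phi = gram_schmidt(np.random.default_rng(1).normal(size=(4, 4)))
print(np.allclose(phi.conj().T @ phi, np.eye(4)))    # Eq. (10): <phi_m|phi_n> = delta_mn
```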

Our convention is that operators act on kets, never on bras. We are only interested in linear operators, such that

Â ( a |α⟩ + b |β⟩ ) = a Â |α⟩ + b Â |β⟩    (13)

for any |α⟩, |β⟩ ∈ H.

The Hermitian conjugate of an operator Â is the operator Â† such that

⟨α| Â† |α⟩ = ⟨α| Â |α⟩*  for any |α⟩ ∈ H,    (14)

which implies that

⟨α| Â† |β⟩ = ⟨β| Â |α⟩*  for any |α⟩, |β⟩ ∈ H.    (15)

Hermitian operators satisfy Â = Â†. Hermitian operators have real eigenvalues (see below), hence the restriction to Hermitian operators in Postulate 2.
The product of operators Â and B̂ is the operator Ĉ = ÂB̂ that acts as follows: Ĉ |α⟩ = Â |B̂α⟩ (that is, B̂ acts first, then Â acts on B̂|α⟩). It is easy to verify that

(ÂB̂)† = B̂† Â†.    (16)

Note that Ĉ = ÂB̂ is, in general, non-Hermitian even if both Â and B̂ are Hermitian.


The identity (or unity) operator 1̂ leaves vectors unchanged,

1̂ |α⟩ = |α⟩  for any |α⟩ ∈ H.    (17)

Obviously, 1̂† = 1̂ and 1̂² = 1̂ (i.e., 1̂ is a projector). Comparison of Eqs. (11) and (17) shows that the identity operator in the orthonormal basis {|φ_n⟩} can be written as

1̂ = Σ_{n=1}^{N_H} |φ_n⟩⟨φ_n|.    (18)


Eq. (18) is known as the resolution of identity (a.k.a. the closure relation, a.k.a. the completeness relation). Using Eq. (18), it is easy to obtain the representation of Â in terms of {|φ_n⟩},

Â = 1̂ Â 1̂ = Σ_{m,n} |φ_m⟩ A_mn ⟨φ_n| ,   A_mn = ⟨φ_m| Â |φ_n⟩.

The objects A_mn form an N_H × N_H matrix and are called the matrix elements of the operator Â. Obviously,

Â = Â†  ⟺  A_mn = A_nm*,

i.e., matrix elements of Hermitian operators form Hermitian matrices.

For Ĉ = ÂB̂ we find C_mn = Σ_l A_ml B_ln, i.e., the matrix corresponding to the product of two operators is the product of their matrices. For |α′⟩ = Â |α⟩ we have α′_m = Σ_n A_mn α_n, which can be viewed as a product of the matrix representing Â and a column-vector representing |α⟩. Similarly, Eq. (12) above can be viewed as a (matrix) product of the row-vector representing ⟨α| and the column-vector representing |β⟩. Note that in the matrix language bras are Hermitian conjugates of kets,

|α⟩ ↔ a column (α₁, α₂, . . .)ᵀ,   ⟨α| ↔ a row (α₁*, α₂*, . . .).
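These matrix rules are easy to verify numerically. The sketch below (an added illustration; the basis and the operators are generated at random) builds A_mn = ⟨φ_m|Â|φ_n⟩ and checks that operator products and Hermitian conjugation go over to the corresponding matrix operations:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4
# an orthonormal basis {|phi_n>}: the columns of a random unitary matrix
phi, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))    # generic operators
B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

mat = lambda op: phi.conj().T @ op @ phi      # matrix elements A_mn = <phi_m|A|phi_n>
print(np.allclose(mat(A @ B), mat(A) @ mat(B)))         # C_mn = sum_l A_ml B_ln
print(np.allclose(mat(A.conj().T), mat(A).conj().T))    # Hermitian conjugate -> conjugate transpose
```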

Eigenvalue problem
Consider the equation

Â |ψ_a⟩ = a |ψ_a⟩,    (19)

Â = Â†.    (20)

The goal is to find all eigenvalues a for which Eq. (19) has a solution for the eigenvectors |ψ_a⟩ ∈ H other than |ψ_a⟩ = |null⟩.

Obviously, all eigenvalues of Hermitian operators are real,

Im a = 0.

Eq. (20) guarantees that results of measurements are real, see Postulate 3. Moreover, it is easy to see that the eigenvectors corresponding to different eigenvalues are orthogonal,

⟨ψ_a|ψ_b⟩ ∝ δ_{a,b},    (21)

whereas any linear combination of eigenvectors corresponding to the same eigenvalue a is also an eigenvector with that eigenvalue. Accordingly, the eigenvectors corresponding to the eigenvalue a form a linear vector space H_a ⊂ H, a subspace of the Hilbert space of the system. Let

{|ψ_n^a⟩},  n = 1, . . . , N_a ≤ N_H,

be an orthonormal basis for H_a. The eigenvalue a is called non-degenerate if the corresponding subspace H_a is one-dimensional, N_a = 1. Otherwise, a is called N_a-fold degenerate.

The operator

1̂_a = Σ_{n=1}^{N_a} |ψ_n^a⟩⟨ψ_n^a|    (22)

is a projector onto H_a: 1̂_a |ψ⟩ ∈ H_a for any |ψ⟩ ∈ H, and 1̂_a |ψ⟩ = |null⟩ if |ψ⟩ is orthogonal to any vector in H_a.

For a finite-dimensional Hilbert space (N_H < ∞) one can prove using the tools of Linear Algebra that

Σ_a N_a = N_H.    (23)

Therefore, the union (set of sets) ∪_a {|ψ_n^a⟩} forms an orthonormal basis for the entire Hilbert space, and the identity operator [see Eq. (17)] can be written in terms of the projectors (22) as

1̂ = Σ_a 1̂_a.    (24)

(Note that if a ≠ a′, then the only element H_a and H_a′ share is |null⟩; such spaces are called orthogonal.)

We shall assume that Eq. (24) is applicable to infinite-dimensional Hilbert spaces as well: eigenvectors of Hermitian operators with discrete spectrum form a basis for the Hilbert space of the system. (This is indeed so, provided that both the Hilbert space and the allowed operators are defined in a more careful manner than is done here.) Obviously, the operator Â is diagonal in the basis of its own eigenvectors and is given by

Â = Σ_a a 1̂_a = Σ_a Σ_{n=1}^{N_a} |ψ_n^a⟩ a ⟨ψ_n^a|.    (25)

Conversely, for any orthonormal basis set {|φ_n⟩} there exist Hermitian operators for which the |φ_n⟩ are eigenvectors.

Eq. (24) ensures that the probability introduced in Postulate 4 is properly normalized,

Σ_a P_a = Σ_a ⟨ψ| 1̂_a |ψ⟩ = ⟨ψ|ψ⟩ = 1.    (26)
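A short numerical sketch of Eqs. (22)-(26) (added here; the degenerate operator and the state are constructed by hand): group the eigenvectors of a Hermitian matrix by eigenvalue, build the projectors 1̂_a, and check the resolution of identity, the spectral decomposition, and the normalization of the probabilities.

```python
import numpy as np

# a Hermitian operator with a doubly degenerate eigenvalue
Q, _ = np.linalg.qr(np.random.default_rng(3).normal(size=(4, 4)))
A = Q @ np.diag([1.0, 1.0, 2.0, 5.0]) @ Q.T

eigvals, eigvecs = np.linalg.eigh(A)
projectors = {}
for a in np.unique(np.round(eigvals, 9)):               # distinct eigenvalues a
    V = eigvecs[:, np.isclose(eigvals, a)]              # orthonormal basis of H_a
    projectors[a] = V @ V.conj().T                      # Eq. (22)

print(np.allclose(sum(projectors.values()), np.eye(4)))                    # Eq. (24)
print(np.allclose(sum(a * P for a, P in projectors.items()), A))           # Eq. (25)

psi = np.array([0.5, 0.5, 0.5, 0.5])                    # a normalized state
probs = {a: psi.conj() @ P @ psi for a, P in projectors.items()}           # Postulate 4
print(np.isclose(sum(probs.values()), 1.0))                                # Eq. (26)
```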

Commuting operators

Theorem: Operators Â and B̂ commute if and only if there exists an orthonormal set of N_H vectors which are simultaneously eigenvectors of both Â and B̂.
Proof: The sufficient condition is obvious. The necessary condition is proved as follows. Let H_a be the subspace of eigenvectors of Â corresponding to the eigenvalue a. If [Â, B̂] = 0, then B̂ |ψ⟩ ∈ H_a for any |ψ⟩ ∈ H_a. Accordingly, one can consider an eigenvalue problem for B̂ in the restricted space,

B̂ |ψ_{a,b}⟩ = b |ψ_{a,b}⟩,   |ψ_{a,b}⟩ ∈ H_a.

Solution of this problem divides H_a into orthogonal subspaces H_{a,b}: H_a = ⊕_b H_{a,b}. Here, by construction, H_{a,b} is the space of simultaneous eigenvectors of Â and B̂ with eigenvalues a, b. If

{|ψ_n^{a,b}⟩},  n = 1, . . . , N_{a,b},

is an orthonormal basis for H_{a,b}, then, obviously, the union ∪_{a,b} {|ψ_n^{a,b}⟩} is an orthonormal basis for the entire Hilbert space. Q.E.D.
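The constructive step of this proof translates directly into code. The sketch below (an added illustration; the commuting pair is manufactured from a shared eigenbasis, and the helper name is ad hoc) diagonalizes Â, then diagonalizes the restriction of B̂ inside each eigenspace H_a:

```python
import numpy as np

def simultaneous_eigenbasis(A, B, tol=1e-9):
    """Common orthonormal eigenbasis of commuting Hermitian matrices A and B."""
    a_vals, a_vecs = np.linalg.eigh(A)
    columns, labels = [], []
    start = 0
    while start < len(a_vals):                    # walk through the eigenspaces H_a
        stop = start
        while stop < len(a_vals) and abs(a_vals[stop] - a_vals[start]) < tol:
            stop += 1
        V = a_vecs[:, start:stop]                 # orthonormal basis of H_a
        b_vals, b_vecs = np.linalg.eigh(V.conj().T @ B @ V)   # B restricted to H_a
        for k in range(stop - start):
            columns.append(V @ b_vecs[:, k])      # simultaneous eigenvector of A and B
            labels.append((a_vals[start], b_vals[k]))
        start = stop
    return np.column_stack(columns), labels

# a commuting pair sharing the eigenbasis Q; A has a degenerate eigenvalue
Q, _ = np.linalg.qr(np.random.default_rng(4).normal(size=(3, 3)))
A = Q @ np.diag([1.0, 1.0, 2.0]) @ Q.T
B = Q @ np.diag([7.0, 3.0, 3.0]) @ Q.T
assert np.allclose(A @ B, B @ A)

U, labels = simultaneous_eigenbasis(A, B)
print(np.allclose(U.conj().T @ A @ U, np.diag([a for a, _ in labels])))
print(np.allclose(U.conj().T @ B @ U, np.diag([b for _, b in labels])))
```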
Implications: It is obvious that the above theorem can be restated as

[Â, B̂] = 0  ⟺  1̂_a 1̂_b = 1̂_b 1̂_a = 1̂_{a,b},    (27)

where 1̂_{a,b} is the projector onto the subspace H_{a,b} of simultaneous eigenvectors.

Therefore, according to Postulate 5, after one measures A and then B, the state of the system collapses to |ψ⟩ ∈ H_{a,b}, one of the normalized simultaneous eigenvectors of Â and B̂. According to Postulate 4, the observables A and B in this state are no longer random, but have definite values, a and b, respectively. Any subsequent measurement of A or B, in any order, will not change the state of the system. Moreover, the product ÂB̂ [and, more generally, any function f(Â, B̂)] is Hermitian, with |ψ⟩ being the corresponding eigenvector. In this sense, A and B not only may simultaneously have definite (non-random) values, but can also be measured simultaneously.
Complete set of commuting operators
If H_{a,b} is one-dimensional (N_{a,b} = 1), then knowledge of A = a and B = b completely characterizes the state of the system: there is no degeneracy. If, however, N_{a,b} > 1, then there exists at least one additional operator Ĉ that commutes with both Â and B̂ but is functionally independent from them, so that the dimension N_{a,b,c} of the space H_{a,b,c} formed by the simultaneous eigenvectors of all three commuting operators satisfies N_{a,b,c} < N_{a,b}. If N_{a,b,c} > 1, then there exists a fourth independent commuting operator, and so on.

The number of independent commuting operators one needs to completely lift the degeneracy is often referred to as the number of degrees of freedom. Essentially, it is the number of measurements one needs to perform in order to project the state of the system onto a one-dimensional subspace. The corresponding observables (A, B, C, . . .) are often called good quantum numbers, and the operators Â, B̂, Ĉ, . . . are said to form a complete set of commuting operators.
Commuting operators and degeneracy
B]
= 0, [A,
C]
= 0, but [B,
C]
6= 0, then at least one
If [A,
of the eigenvalues of A is degenerate. Indeed, let |i be
with eigenvalues
a simultaneous eigenvector of A and B
(If all
a and b, respectively, but not an eigenvector of C.

then
eigenvectors of A and B were also eigenvectors of C,
the three operators would commute.) Then |i = C |i
with the same eigenvalue a,
is also an eigenvector of A,
Therefore, |i and
but it is not an eigenvector of B.
|i cannot be proportional to each other, i.e., the two
vectors are linearly independent. This implies that a is
degenerate: Na 2.
2
Example is provided by the angular momentum: J

commutes with any component J , but components do


not commute with each other. Accordingly, eigenvalues

2 must be degenerate.
of J
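This is easy to see explicitly for spin 1 (a numerical aside added here, with ℏ = 1 and the standard spin-1 matrices): Ĵ² commutes with every component, the components do not commute with each other, and the single eigenvalue of Ĵ² is threefold degenerate.

```python
import numpy as np

s = 1 / np.sqrt(2)
Jx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Jy = np.array([[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]])
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz

comm = lambda A, B: A @ B - B @ A
print([np.allclose(comm(J2, J), 0) for J in (Jx, Jy, Jz)])   # [True, True, True]
print(np.allclose(comm(Jx, Jy), 0))                          # False: components do not commute
print(np.allclose(comm(Jx, Jy), 1j * Jz))                    # [Jx, Jy] = i Jz (hbar = 1)
print(np.linalg.eigvalsh(J2))                                # 2, 2, 2: threefold degenerate
```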

Non-commuting operators
If Â and B̂ do not commute, then a generic state |ψ⟩ will not be an eigenstate of either of these operators. The uncertainty of the observable A in the state |ψ⟩,

ΔA = √( ⟨A²⟩ − ⟨A⟩² ),   ⟨Aⁿ⟩ = ⟨ψ| Âⁿ |ψ⟩,

can also be written as

ΔA = √( ⟨ψ| δÂ² |ψ⟩ ),   δÂ = Â − ⟨A⟩ 1̂,

or

(ΔA)² = ⟨ψ_A|ψ_A⟩,   |ψ_A⟩ = δÂ |ψ⟩.

Application of the Schwarz inequality (6) to |ψ_A⟩ and a similarly defined |ψ_B⟩ yields Heisenberg's uncertainty relation

ΔA ΔB ≥ (1/2) |⟨ψ| [Â, B̂] |ψ⟩|.    (28)

Importantly, the left-hand side of Eq. (28) does not deal with a single quantum system, nor does it deal with measurements of A and B on the same system. Instead, it has to do with statistics: Suppose we start with many identical copies of the system, all in the state |ψ⟩. We then measure A on some of these copies, and B on some other copies. The measurements produce two sets of random results, for A and B, characterized by the uncertainties ΔA and ΔB, respectively. Eq. (28) establishes the relation between the results of these independent experiments.
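A two-level sketch (added here; the state is an arbitrary choice) checks Eq. (28) for Â = σ̂_x and B̂ = σ̂_y, for which [σ̂_x, σ̂_y] = 2iσ̂_z:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

psi = np.array([np.cos(0.3), np.exp(0.4j) * np.sin(0.3)])   # an arbitrary normalized state

def uncertainty(op, state):
    mean = np.vdot(state, op @ state).real
    mean_sq = np.vdot(state, op @ op @ state).real
    return np.sqrt(mean_sq - mean ** 2)

lhs = uncertainty(sx, psi) * uncertainty(sy, psi)
rhs = 0.5 * abs(np.vdot(psi, (sx @ sy - sy @ sx) @ psi))     # (1/2)|<[A,B]>|
print(lhs, ">=", rhs, bool(lhs >= rhs - 1e-12))
```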

Unitary transformations


It is obvious that a unitary transformation

|ψ⟩ → |ψ′⟩ = Û |ψ⟩,   Û† Û = 1̂,    (29)

preserves the value of the scalar product for any two vectors,

⟨ψ′|φ′⟩ = ⟨ψ|φ⟩.    (30)

In fact, it can be shown that any transformation with this property must be unitary (Wigner theorem).

A passive transformation is regarded as a change of variables and does not affect the values of observable quantities. This is possible if the transformation of vectors is accompanied by the transformation of operators according to

Â → Â′ = Û Â Û†,    (31)

which guarantees that the probability P(A), see Postulate 4, is not affected by the transformation. (Note that ⟨ψ′| Â′ |ψ′⟩ = ⟨ψ| Â |ψ⟩, that Û Âⁿ Û† = (Â′)ⁿ, and that Â and Â′ have the same eigenvalues.) Passive transformations are often used to simplify the problem of finding the eigenvalues of the operators of interest.

An active transformation describes a change of the state of the system. In this case the operators do not change, but the expectation values do:

⟨ψ′| Â |ψ′⟩ ≠ ⟨ψ| Â |ψ⟩.    (32)

Alternatively, since

⟨ψ′| Â |ψ′⟩ = ⟨ψ| Û† Â Û |ψ⟩,    (33)

one can view the active transformation as a change of operators only,

Â → Û† Â Û,    (34)

with the states unaffected. (When Û is the operator of evolution in time, these two points of view correspond to the so-called Schrödinger and Heisenberg pictures.)
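Numerically (an added sketch with a randomly generated Û, Â, and |ψ⟩), Eqs. (29)-(31) amount to the statement that conjugating the state and the operator by the same unitary changes nothing measurable:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 3
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))   # a unitary
H = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = H + H.conj().T                                   # a Hermitian observable
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi = psi / np.linalg.norm(psi)

psi_p = U @ psi                                      # Eq. (29)
A_p = U @ A @ U.conj().T                             # Eq. (31)
print(np.isclose(np.vdot(psi_p, psi_p), np.vdot(psi, psi)))            # Eq. (30)
print(np.isclose(np.vdot(psi_p, A_p @ psi_p), np.vdot(psi, A @ psi)))  # same expectation value
print(np.allclose(np.linalg.eigvalsh(A_p), np.linalg.eigvalsh(A)))     # same spectrum
```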

Continuous unitary transformations


Consider a single-parameter family of transformations Û(λ) with Û(0) = 1̂ and with the group property

Û(λ₁ + λ₂) = Û(λ₁) Û(λ₂).    (35)

It is easy to prove the Stone theorem: any transformation that satisfies Eq. (35) can be written as

Û(λ) = e^{−iλK̂},    (36)

where K̂ = K̂† = i dÛ/dλ|_{λ=0} is the generator of the transformation. For small λ we have Û(λ) = 1̂ − iλK̂ + . . . , i.e., the generator describes an infinitesimal transformation. The operator Â(λ) = Û†(λ) Â Û(λ), see Eq. (34), obeys the Heisenberg equation,

dÂ/dλ = i [K̂, Â],    (37)

which is a quantum counterpart of the classical Liouville equation (2.10).

Generators corresponding to translation and rotation transformations can be deduced from the corresponding classical expressions with the result

T̂_a = e^{−i(a·p̂)/ℏ},   R̂_φ = e^{−i(φ·Ĵ)/ℏ},    (38)

where we have set λ = 1. It is easy to check that the transformations (38) indeed describe translations and rotations:

T̂_a† r̂ T̂_a = r̂ + a 1̂,              T̂_a |r⟩ = |r + a⟩,
R̂_φ† r̂ R̂_φ = r̂ + φ × r̂ + O(φ²),   R̂_φ |r⟩ = |r + φ × r + . . .⟩.

Note that T̂_{a₁+a₂} = T̂_{a₁} T̂_{a₂} = T̂_{a₂} T̂_{a₁} for any a₁ and a₂, whereas R̂_{φ₁+φ₂} ≠ R̂_{φ₁} R̂_{φ₂} ≠ R̂_{φ₂} R̂_{φ₁}, except for φ₁ ∥ φ₂, as expected for translations and rotations.
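A finite-dimensional check of Eqs. (35)-(37) (an illustration added here, with an arbitrary Hermitian generator and scipy.linalg.expm for the matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
K = M + M.conj().T                        # Hermitian generator K
A = rng.normal(size=(3, 3))
A = A + A.T                               # some Hermitian observable

U = lambda lam: expm(-1j * lam * K)       # Eq. (36)
print(np.allclose(U(0.3 + 0.4), U(0.3) @ U(0.4)))                 # group property, Eq. (35)

A_of = lambda lam: U(lam).conj().T @ A @ U(lam)                   # Eq. (34)
lam, d = 0.7, 1e-6
dA = (A_of(lam + d) - A_of(lam - d)) / (2 * d)                    # numerical dA/dlambda
print(np.allclose(dA, 1j * (K @ A_of(lam) - A_of(lam) @ K), atol=1e-5))   # Eq. (37)
```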

Classical vs Quantum
Hamiltonian formulation

Consider a classical particle of mass m in the presence of a velocity-independent potential V. In the Hamiltonian formulation of Classical Mechanics, the state of the particle is specified by its position r and its momentum p, a.k.a. the canonical variables. The dynamics is described by the Hamilton (a.k.a. canonical) equations of motion

ṙ = ∇_p H,   ṗ = −∇_r H,    (2.1)

where the dot denotes the total derivative with respect to time and

H(r, p; t) = p²/2m + V(r, t)    (2.2)

is the Hamiltonian. Indeed, substitution of Eq. (2.2) into Eq. (2.1) yields Newton's equations

ṙ = p/m  ⟹  p = m ṙ,   ṗ = −∇_r V(r, t).

All of Hamiltonian Mechanics can be expressed in terms of the so-called Poisson brackets. The Poisson bracket {f, g} of dynamical quantities f(r, p, t) and g(r, p, t) is defined as

{f, g} = ∇_r f · ∇_p g − ∇_p f · ∇_r g.    (2.3)

The properties of the Poisson brackets are very similar to those of the commutators in Quantum Mechanics. In particular,

{f, g} = −{g, f},
{f + g, h} = {f, h} + {g, h},
{f, gh} = {f, g} h + g {f, h},
{f, {g, h}} + {g, {h, f}} + {h, {f, g}} = 0.

Using Eqs. (2.1) and (2.3), it is easy to derive the Liouville equation of motion

df/dt = ∂f/∂t + {f, H}.    (2.4)

Eq. (2.4) is valid for any dynamical quantity f(r, p, t). Substituting for f the canonical variables, one recovers the Hamilton equations (2.1). For f = H we find dH/dt = ∂H/∂t, which expresses the conservation of energy: E = H = const if the Hamiltonian does not explicitly depend on time.

Canonical transformations

A transformation

r → r′(r, p),   p → p′(r, p)    (2.5)

is called canonical if it preserves the Poisson brackets of the canonical variables, i.e.,

{r′_α, p′_β} = {r_α, p_β} = δ_{α,β},   {r′_α, r′_β} = {r_α, r_β} = 0,   {p′_α, p′_β} = {p_α, p_β} = 0.

Depending on how the transformation affects observable quantities, one can distinguish between passive and active transformations. For a passive transformation Eq. (2.5) is regarded as merely a change of variables, while observables remain unchanged:

f(r, p) = f( r(r′, p′), p(r′, p′) ).

It can be shown that for a canonical transformation {f, g}′ = {f, g} for any f and g; here {. . .}′ is the Poisson bracket evaluated in the new variables. Therefore, both the Liouville equation (2.4) and the Hamilton equations (2.1) retain their form.

On the contrary, an active transformation describes a change of the state of the system, {r, p} → {r′, p′}, accompanied by a change of observables,

f(r′, p′) ≠ f(r, p).

Infinitesimal canonical transformations can be cast in the form

r′ = r + ε ∇_p Λ,   p′ = p − ε ∇_r Λ,    (2.6)

where Λ(r, p) is the so-called generating function and ε is an infinitesimally small dimensionless parameter. Direct calculation shows that for the transformation (2.6)

{r′_α, p′_β} − {r_α, p_β} ∝ ε²,

with similar relations for {r′_α, r′_β} and {p′_α, p′_β}. In other words, the Poisson brackets are preserved in the first order in ε, which is sufficient for our purpose.

For Λ = a·p, where a is a constant vector which has units of length, Eq. (2.6) describes a translation (shift) in space by εa,

r′ = r + εa,   p′ = p.    (2.7)

For Λ = φ·J, where φ is a constant dimensionless vector and J = r × p is the angular momentum, Eq. (2.6) describes a rotation by the angle ε|φ| about the direction of φ,

r′ = r + ε φ × r,   p′ = p + ε φ × p.    (2.8)

It is often said that linear (angular) momentum generates translation (rotation) in space.
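The first-order transformations (2.7) and (2.8) follow from (2.6) by differentiating the generating functions; a small symbolic check (added here, using sympy) makes this explicit:

```python
import sympy as sp

eps = sp.symbols('epsilon')
r = sp.Matrix(sp.symbols('x y z'))
p = sp.Matrix(sp.symbols('p_x p_y p_z'))
a = sp.Matrix(sp.symbols('a_x a_y a_z'))
phi = sp.Matrix(sp.symbols('phi_x phi_y phi_z'))

def transform(Lam):
    """Infinitesimal canonical transformation, Eq. (2.6)."""
    r_new = r + eps * sp.Matrix([sp.diff(Lam, pi) for pi in p])
    p_new = p - eps * sp.Matrix([sp.diff(Lam, ri) for ri in r])
    return sp.simplify(r_new), sp.simplify(p_new)

print(transform(a.dot(p)))                             # Lambda = a.p: (r + eps*a, p), Eq. (2.7)

r_new, p_new = transform(phi.dot(r.cross(p)))          # Lambda = phi.J
print(sp.simplify(r_new - (r + eps * phi.cross(r))))   # zero vector, cf. Eq. (2.8)
print(sp.simplify(p_new - (p + eps * phi.cross(p))))   # zero vector, cf. Eq. (2.8)
```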
A continuous transformation can be viewed as a sequence of a large number of infinitesimal transformations, controlled by a dimensionless parameter λ: r(0) → r(λ) with r(0) = r. An increase of λ by dλ is an infinitesimal transformation described by

r(λ + dλ) = r(λ) + dλ ∇_p Λ,   p(λ + dλ) = p(λ) − dλ ∇_r Λ,

see Eq. (2.6), which can be written in the form of differential equations

dr/dλ = ∇_p Λ,   dp/dλ = −∇_r Λ.    (2.9)

For an active continuous transformation the observable f( r(λ), p(λ) ) evolves according to

df/dλ = {f, Λ}.    (2.10)

Note that Eqs. (2.9) and (2.10) have the form of the Hamilton equations (2.1) and the Liouville equation (2.4), with λ and Λ playing the parts of t and H. In this sense, the Hamiltonian generates changes of dynamical quantities with time, just as linear and angular momenta generate translations and rotations in space.

If {f, Λ} = 0, the quantity f is said to be invariant with respect to the transformation generated by Λ. For f = H, such invariance implies that Λ is independent of time, see Eq. (2.4). Thus, translational (rotational) invariance of H gives rise to the conservation of linear (angular) momentum.

Canonical quantization
The similarity of the classical Poisson brackets and the quantum commutators suggests the correspondence between classical and quantum observables, Dirac's quantization rule

{A, B} = C  ⟹  [Â, B̂] = iℏ Ĉ.    (2.11)

Although the rule (2.11) has its limitations, it works perfectly well for sufficiently simple quantities, ensuring that quantum results have the correct classical limit.
Using Eq. (2.11), we find

{r_α, r_β} = 0   ⟹  [r̂_α, r̂_β] = 0,
{p_α, p_β} = 0   ⟹  [p̂_α, p̂_β] = 0,
{r_α, p_β} = δ_{α,β}  ⟹  [r̂_α, p̂_β] = iℏ δ_{α,β} 1̂.

The rule Eq. (2.11) is applicable to the components of the orbital angular momentum, J = r × p and Ĵ = r̂ × p̂, as well, e.g.,

{J_x, J_y} = J_z  ⟹  [Ĵ_x, Ĵ_y] = iℏ Ĵ_z.

(In this case there is no ambiguity in constructing the operator Ĵ: just as is the case in Classical Mechanics, r̂ × p̂ = −p̂ × r̂, despite the non-commutativity of r̂ and p̂.)

The classical-quantum analogy extends beyond the similarity of Poisson brackets and commutators. For example, the classical Liouville equation (2.4) corresponds to the quantum Heisenberg equation of motion, while the classical canonical transformations have a quantum counterpart in the unitary transformations.
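As a closing check of the classical side of the correspondence (2.11) (an added sketch using sympy; the quantum side for the angular momentum was verified with spin matrices above), the Poisson brackets (2.3) can be evaluated symbolically:

```python
import sympy as sp

q = sp.symbols('x y z')
p = sp.symbols('p_x p_y p_z')

def poisson(f, g):
    """Poisson bracket, Eq. (2.3)."""
    return sum(sp.diff(f, q[a]) * sp.diff(g, p[a]) - sp.diff(f, p[a]) * sp.diff(g, q[a])
               for a in range(3))

x, y, z = q
px, py, pz = p
Jx, Jy, Jz = y * pz - z * py, z * px - x * pz, x * py - y * px   # J = r x p

print([[poisson(q[a], p[b]) for b in range(3)] for a in range(3)])   # {r_a, p_b} = delta_ab
print(sp.simplify(poisson(Jx, Jy) - Jz))                             # 0, i.e. {Jx, Jy} = Jz
```

Both brackets display exactly the structure that the rule (2.11) carries over to commutators.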
