
Math 263: Infinite-Dimensional Lie Algebras

Our text will be Kac, Infinite-Dimensional Lie Algebras (third edition) though I will
also rely on Kac and Raina, Bombay Lectures on Highest Weight Representations of Infinite-
Dimensional Lie Algebras. I will assume you have access to the first text but not the second.
Unfortunately Kac's book is long and dense. These notes are an attempt to make a speedy
entry. Most relevant for the first part of these notes are Chapters 7 and 14 of Kac's book.

1 Examples of Lie Algebras


Given an associative algebra A, we have two Lie algebras, Lie(A) and Der(A).
• Lie(A) = A as a set, with bracket operation [x, y] = xy − yx.
• Der(A) is the set of derivations of A. It is a subalgebra of Lie(End(A)). For this it
is not necessary that A be associative.
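To make the two constructions concrete, here is a minimal Python sketch (an illustration, not part of the text; the helper names are hypothetical) checking that the commutator bracket on 3 × 3 matrices satisfies the Jacobi identity, and that for a fixed z the map x ↦ [z, x] is a derivation of the matrix product.

```python
import numpy as np

def bracket(x, y):
    # Lie(A): the commutator [x, y] = xy - yx on an associative algebra A
    return x @ y - y @ x

rng = np.random.default_rng(0)
x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))

# Jacobi identity for the commutator bracket
jacobi = bracket(x, bracket(y, z)) + bracket(y, bracket(z, x)) + bracket(z, bracket(x, y))
print(np.allclose(jacobi, 0))   # True

# ad(z) = [z, .] is a derivation of the associative product: [z, xy] = [z, x] y + x [z, y]
lhs = bracket(z, x @ y)
rhs = bracket(z, x) @ y + x @ bracket(z, y)
print(np.allclose(lhs, rhs))    # True
```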

If X is a vector field on a manifold M, then X may be identified with the map C∞(M) → C∞(M) which is differentiation in the direction X. By the Leibniz rule, X(fg) = X(f) g + f X(g). So X is a derivation of C∞(M), and in fact every derivation of A = C∞(M) arises from a vector field. Thus the space of vector fields on M may be identified with Der(A). It is an infinite-dimensional Lie algebra.
If G is a Lie group, then the vector space g of left invariant vector fields is isomorphic
to the tangent space T1(G) of G at the identity. This is obviously closed under the bracket
operation. This is the Lie algebra of G.

Given a vector field X on a manifold M, there is a family of integral curves. That is, (at least if M is compact) there should be defined a map φ: M × (−ε, ε) → M such that φ(m, 0) = m and

Xf(m) = (d/dt) f(φ(m, t))|t=0

and more generally

Xf(φ(u, m)) = (d/dt) f(φ(t + u, m))|t=0.
If G is a Lie group, and X ∈ g, the Lie algebra, then X is a left-invariant vector field, and it may be deduced that the integral curve has the form

φ(g, t) = g · exp(tX)

where exp: g → G is a map, the exponential map. We also denote exp(X) = eX. So

Xf(g) = (d/dt) f(g · etX)|t=0.

If π: G → GL(V) is a representation, then we obtain a representation of g by

dπ(X)v = (d/dt) π(etX)v|t=0.


We have
dπ([X , Y ]) = dπ(X) ◦ dπ(Y ) − dπ(Y ) ◦ dπ(X).

2 Attitude
For Lie groups, statements about group theory may be translated into statements about
Lie algebras. For example, the classification of irreducible representations of a semisimple
complex or compact Lie group is (at least in the simply-connected case) the same as the
classification of the irreducible representations of its Lie algebra. (Representations of a complex
Lie group are assumed analytic, and representations of a compact Lie group are assumed
continuous. Representations of a complex Lie algebra are assumed C-linear, and representations
of a real Lie algebra are assumed R-linear.)
Thus to a first approximation, the finite-dimensional theories of Lie groups and Lie
algebras are nearly equivalent. The theory of Lie groups is superior for supplying intuition,
but the theory of Lie algebras is simpler since the operation is linear.
In the infinite-dimensional case, the fact that the theory of Lie algebras is simpler is
decisive. Still, we may use the theory of groups to supply intuition, so we may preface a
discussion with some optional group theory.
For example, let us motivate the definition of an invariant bilinear form with the group-theoretic definition. We assume that we have an action of a Lie algebra g on a vector space V, that is, a map π: g → End(V) such that π([X, Y]) = π(X) ◦ π(Y) − π(Y) ◦ π(X). Then by an invariant bilinear form we mean a bilinear map B: V × V → C such that

B(X · v, w) + B(v, X · w) = 0. (1)
Here X · v means π(X)v. Why is this the right definition? Begin with the well-motivated
and obviously correct notion for a Lie group:
B(g · v, g · w) = B(v, w).
Take g = etX and differentiate with respect to t:

0 = (d/dt) B(etX · v, etX · w).

By the chain rule, if F(t, u) is a smooth function of two variables then

(d/dt) F(t, t) = (∂/∂t) F(t, u)|u=t + (∂/∂u) F(t, u)|u=t

and applying this to F(t, u) = B(etX · v, euX · w) we obtain (1).
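As a sanity check of definition (1), the following sketch (an illustration only, not from the text) verifies that the trace form B(v, w) = tr(vw) on 2 × 2 matrices satisfies (1) for the adjoint action X · v = [X, v].

```python
import numpy as np

def B(v, w):
    # trace form on matrices
    return np.trace(v @ w)

def act(X, v):
    # adjoint action X . v = [X, v]
    return X @ v - v @ X

rng = np.random.default_rng(1)
X, v, w = (rng.standard_normal((2, 2)) for _ in range(3))

# infinitesimal invariance: B(X.v, w) + B(v, X.w) = 0
print(np.isclose(B(act(X, v), w) + B(v, act(X, w)), 0.0))  # True
```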

3 Central Extensions and Derivations


As with groups, we may consider extensions of Lie algebras, that is, equivalence classes of
short exact sequences
0 → k → g → q → 0.

Given an extension, there are two cases where there is associated an action of q on k. This
means a homomorphism from q to the Lie algebra Der(k) of derivations of k. The first is the
case where k is abelian. The second is where the extension is split. In both these cases, the
Lie algebra g may be recovered from k, q and some additional data.
• If k is abelian, then k is a q-module, and the extra data needed to describe g is a 2-cocycle. For simplicity we will only discuss (below) the case where the q-module structure on k is trivial, or equivalently that the image of k in g is central.


• If the extension is split, then the only data needed to describe g is the homomorphism q → Der(k).
As an important special case, we are interested in increasing the dimension of a Lie algebra by one. That is, we want to consider extensions:

0 → C·c → g̃ → g → 0

and

0 → g′ → g → C·d → 0

where C·c and C·d are one-dimensional abelian Lie algebras. In the first case we may assume that c is central, that is, [c, X] = 0 for all X ∈ g̃. In the second case, we assume only that g′ is an ideal, that is, [g, g′] ⊂ g′.

3.1 Central extensions


Let a be an abelian Lie algebra. We're mainly interested in the case where a is one-dimensional. We wish to classify central extensions

0 → a → g̃ → g → 0,

where we write p: g̃ → g for the projection. We choose a section s: g → g̃, which is a linear map (not a homomorphism) such that p ◦ s = 1g. Then if X, Y ∈ g consider
φ(X , Y ) = s([X , Y ]) − [s(X), s(Y )].
Clearly φ(X , Y ) is in the kernel of p, which we may identify with a. We observe that φ is
skew-symmetric and satisfies the cocycle identity
φ([X , Y ], Z) + φ([Y , Z], X) + φ([Z , X], Y ) = 0.

Proof. We have
φ([X, Y], Z) = s([[X, Y], Z]) − [s([X, Y]), s(Z)].
Since s([X, Y]) − [s(X), s(Y)] = φ(X, Y) ∈ a, we may rewrite this
φ([X, Y], Z) = s([[X, Y], Z]) − [[s(X), s(Y)], s(Z)].
The cocycle identity now follows from the Jacobi identities in g and g̃. □

A skew-symmetric map that satisfies the cocycle identity is called a 2-cocycle. Let Z2(g, a) be the space of 2-cocycles.

We see that every extension produces a 2-cocycle, and conversely given a 2-cocycle we
may reconstruct g̃ as follows. Let g̃ = g ⊕ a with the bracket operation
⟦X + δ, Y + ε⟧ = [X, Y] + φ(X, Y)   (X, Y ∈ g, δ, ε ∈ a).
Then the 2-cocycle condition implies the Jacobi identity.

We have some freedom of choice in the section s. We could vary it by an arbitrary linear map f: g → a. This changes the cocycle by the map

ψ(X, Y) = f([X, Y]).

A cocycle of this form is called a coboundary. If B2(g, a) is the space of coboundaries then we see that central extensions are classified by H2(g, a) = Z2(g, a)/B2(g, a).

3.2 Adjoining a derivation

A derivation is a local analog of an automorphism. Thus, let A be an algebra, whether associative or not. A map D: A → A such that D(xy) = D(x) y + x D(y) is a derivation. Exponentiating it:

eD = 1 + D + (1/2) D2 + ⋯

then satisfies eD(xy) = eD(x) eD(y), where eD(x) = x + D(x) + (1/2) D2(x) + ⋯.

If g is a Lie algebra, then ad: g → End(g) is the adjoint representation, defined by

ad(X) Y = [X, Y].
It follows from the Jacobi identity that ad(X) is a derivation. The map ad is a Lie algebra
homomorphism, that is, ad([X , Y ]) = ad(X)ad(Y ) − ad(Y )ad(X). This also follows from the
Jacobi identity.
Suppose that k is an ideal of g of codimension 1, and let d ∈ g − k. Then ad(d) induces a derivation of k. Conversely, suppose that k is given, together with a derivation D of k. Then we may construct a Lie algebra g that is the direct sum of k and a one-dimensional space C·d, where the Lie bracket is extended from k by requiring that [d, X] = −[X, d] = D(X) for X ∈ k. The Jacobi identity follows from the fact that D is a derivation.
More generally, given a pair k, q of Lie algebras and a homomorphism α: q → Der(k), we may construct a Lie algebra g which contains copies of both q and k. The subalgebra k is an ideal, and if X ∈ q and Y ∈ k, then α(X)Y = [X, Y].

4 Virasoro Algebra
Let us consider the Lie algebra w of (Laurent) polynomial vector fields on the circle T = {t ∈ C× : |t| = 1}, or equivalently, polynomial vector fields on C×. This is the Lie algebra of the (infinite-dimensional) group of diffeomorphisms of the circle. It has a basis

di = t1−i (d/dt).

These satisfy the commutation relation


[di , d j ] = (i − j)di+j .
Let N be an integer.

Proposition 1. Let

φN(di, dj) = iN if i = −j, and φN(di, dj) = 0 otherwise,

extended by bilinearity to a map w × w → C. If N = 1 or 3 then φN is a 2-cocycle. (We are regarding C as a one-dimensional abelian Lie algebra.)

Proof. If N is odd then φN is skew-symmetric, and we need


φN ([di , d j ], dk) + φN ([dk , di], d j ) + φN ([d j , dk], di) = 0.
The left-hand side is trivially zero unless i + j + k = 0, so we assume this. Then the left-hand
side equals
(i − j)k N + (j − k)iN + (k − i)j N = (i − j)(−i − j)N + (i + 2j)iN − (j + 2i)j N .
This is zero if i + j + k = 0 and N = 1 or 3. 
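The final step is a polynomial identity in i and j after substituting k = −i − j; a quick sympy check, included here only as an illustration, confirms it vanishes exactly for N = 1 and N = 3.

```python
import sympy as sp

i, j = sp.symbols('i j')
k = -i - j   # the only case where the cocycle terms can be nonzero

for N in (1, 2, 3, 4):
    expr = (i - j)*k**N + (j - k)*i**N + (k - i)*j**N
    print(N, sp.expand(expr))
# prints 0 for N = 1 and N = 3, and a nonzero polynomial for N = 2 and N = 4
```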

It may be seen that, modulo coboundaries, every 2-cocycle on w is a linear combination of φ1 and φ3, and φ1 is itself a coboundary. The space H2(w, C) is one-dimensional. There is a unique isomorphism class of Lie algebras that are nontrivial central extensions of w by C. (Each element of H2(w, C) determines a unique equivalence class of extensions, but two inequivalent extensions may be isomorphic as Lie algebras.)
The choice of extension is traditionally to take the cocycle (1/12)(φ3 − φ1). Thus we obtain the Virasoro algebra with basis di, c, where c is central, and

[di, dj] = (i − j)di+j + (1/12)(i3 − i) c   if i = −j,
[di, dj] = (i − j)di+j   otherwise.
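As an illustration (not part of Kac's text), the following sketch implements this bracket on formal linear combinations of the basis elements di and c, and checks the Jacobi identity on sample triples, including ones where the central term enters.

```python
from collections import defaultdict
from fractions import Fraction
from itertools import combinations

# Basis elements are ('d', i) and ('c',); vectors are dicts {basis: coefficient}.
def d(i): return {('d', i): Fraction(1)}

def add(v, w, scale=1):
    out = defaultdict(Fraction, v)
    for b, x in w.items():
        out[b] += scale * x
    return {b: x for b, x in out.items() if x != 0}

def bracket(v, w):
    out = {}
    for bv, xv in v.items():
        for bw, xw in w.items():
            if bv[0] == 'd' and bw[0] == 'd':
                i, j = bv[1], bw[1]
                term = {('d', i + j): Fraction(i - j)}
                if i == -j:
                    term[('c',)] = Fraction(i**3 - i, 12)
                out = add(out, term, xv * xw)
            # brackets involving the central element c vanish
    return out

def jacobi(x, y, z):
    return add(add(bracket(x, bracket(y, z)),
                   bracket(y, bracket(z, x))),
               bracket(z, bracket(x, y)))

for i, j, k in combinations(range(-3, 4), 3):
    assert jacobi(d(i), d(j), d(k)) == {}, (i, j, k)
print("Jacobi identity holds on the sample triples")
```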

5 Loop algebras
In this section we will describe affine Kac-Moody Lie algebras following Chapter 7 of Kac,
Infinite-dimensional Lie algebras. This illustrates both methods of enlarging a Lie algebra:
first by making a central extension, then adjoining a derivation.
If g is a Lie algebra and A is a commutative associative algebra, then A ⊗ g is a Lie
algebra with the bracket
[a ⊗ X , b ⊗ Y ] = ab ⊗ [X , Y ].

Let g be a complex Lie algebra that admits an ad-invariant symmetric bilinear form, which we will denote ( | ). Recalling that ad: g → End(g) is the map ad(X)Y = [X, Y], this means

([X, Y]|Z) = −(Y |[X, Z]).



Since the form is symmetric, this implies


([X , Y ]|Z) = ([Y , Z]|X). (2)

For example, let G0 be a compact Lie group. Then G0 acts on itself by conjugation, inducing a representation Ad of G0 on left-invariant vector fields, that is, on its Lie algebra g0. The representation of g0 on itself induced by Ad is ad. Since Ad: G0 → GL(g0) is a representation of a compact Lie group on a finite-dimensional real vector space, it admits an invariant bilinear form, and by the argument in Section 2, it follows that this gives an invariant positive definite symmetric bilinear form on g0. Since g0 is actually a real Lie algebra, we then pass to the complexified Lie algebra g = C ⊗ g0 to obtain a complex Lie algebra.
Let L = C[t, t−1] be the commutative algebra of Laurent polynomials. Then L(g) = L ⊗ g


is an infinite-dimensional Lie algebra. We will show how to obtain a central extension g ′ of
it. The residue map Res: L → C is defined by

Res(Σn cn tn) = c−1.

We will also define

φ(P, Q) = Res(P′ Q),   where P′ = dP/dt.
We have
Res(P ′) = 0
for any Laurent polynomial. Thus (using ′ to denote the derivative) since (PQ) ′ = P ′ Q + PQ ′
and (PQR) ′ = P ′ QR + PQ ′R + PQR ′ we obtain
φ(P , Q) = −φ(Q, P )
and
φ(PQ, R) + φ(QR, P ) + φ(RP , Q) = 0. (3)

These relations are similar to the conditions that must be satisfied by a 2-cocycle. So we define ψ: L(g) × L(g) → C by

ψ(a ⊗ X, b ⊗ Y) = (X|Y) φ(a, b).
This bilinear form is clearly skew-symmetric. Here is the proof of the cocycle relation. First,

ψ([a ⊗ X, b ⊗ Y], c ⊗ Z) = ([X, Y]|Z) φ(ab, c).

Using (2) and (3) we have

ψ([a ⊗ X, b ⊗ Y], c ⊗ Z) + ψ([b ⊗ Y, c ⊗ Z], a ⊗ X) + ψ([c ⊗ Z, a ⊗ X], b ⊗ Y) = ([X, Y]|Z)[φ(ab, c) + φ(bc, a) + φ(ca, b)] = 0.
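The properties of φ(P, Q) = Res(P′Q) used here are easy to confirm symbolically. Here is a small sympy sketch (an illustration only) checking skew-symmetry and the identity (3) on sample Laurent polynomials.

```python
import sympy as sp

t = sp.symbols('t')

def res(f):
    # residue: the coefficient of t**(-1) in a Laurent polynomial
    return sp.expand(f).coeff(t, -1)

def phi(P, Q):
    return res(sp.diff(P, t) * Q)

# sample Laurent polynomials
P = 2*t**-3 + t - 5*t**2
Q = t**-1 + 4*t**3
R = 3*t**-2 + 7 + t

print(sp.simplify(phi(P, Q) + phi(Q, P)) == 0)                    # skew-symmetry
print(sp.simplify(phi(P*Q, R) + phi(Q*R, P) + phi(R*P, Q)) == 0)  # identity (3)
```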

Let g′ be the central extension of L(g) determined by this 2-cocycle. We may further enlarge g′ by adjoining a derivation: use the derivation t(d/dt) of L (acting by zero on the adjoined one-dimensional central element). Let ĝ be the resulting Lie algebra.

If g is a simple complex Lie algebra, then ĝ is an affine Kac-Moody Lie algebra.

6 Projective Representations and Central Extensions


It is well-known that there is a relationship between projective representations of a group
and central extensions. We briefly recall this before considering the analogous situation for
Lie algebras. Suppose that π: G → GL(V) is a projective representation of a group G; that is, a map that induces a homomorphism G → Aut(P(V)), the group of automorphisms of the corresponding projective space. Concretely, this means that
π(g) π(h) = φ(g, h) π(gh)
for some map φ: G × G → C×. Now the map φ satisfies the cocycle condition
φ (g1 g2, g3) φ(g1, g2) = φ (g1, g2 g3) φ(g2, g3),

which means that we may construct a central extension of G. Let G̃ be the group which as a set is G × C× with the multiplication

(g, ε)(h, δ) = (gh, ε δ φ(g, h)).
The cocycle condition implies that this group law is associative. We may define a representation π̃: G̃ → GL(V) by

π̃(g, ε) v = ε π(g) v,
and this is a true representation. Therefore every projective representation of G may be
lifted to a true representation of a covering group, that is, a central extension determined
by a cocycle describing the projective representation.
Now let us consider the corresponding construction for a Lie algebra representation. A
representation of a Lie algebra g is a linear map π: g → End(V ) such that
π([X , Y ]) v = π(X) π(Y ) v − π(Y ) π(X) v, X , Y ∈ g.
The corresponding notion of a projective representation requires a map φ: g × g → C such that
π([X , Y ]) v = π(X) π(Y ) v − π(Y ) π(X) v + φ(X , Y ) v, X , Y ∈ g. (4)

Lemma 2. In ( 4), φ is a 2-cocycle.

Proof. It is straightforward to check that φ(X, Y) = −φ(Y, X). We have

π([[X, Y], Z]) = π(X) π(Y) π(Z) − π(Y) π(X) π(Z) − π(Z) π(X) π(Y) + π(Z) π(Y) π(X) + φ([X, Y], Z) IV.

Summing cyclically and using the Jacobi relation gives

φ([X, Y], Z) + φ([Y, Z], X) + φ([Z, X], Y) = 0. □

Remark 3. As a special case, let π: g → End(V) be a true representation. We obtain a projective representation by perturbing it. Thus, let λ: g → C be an arbitrary linear functional, and define π′(X) = π(X) − λ(X). Then π′ is a projective representation with φ(X, Y) = λ([X, Y]). This cocycle is a coboundary.

7 Fermions
The Dirac equation describes the electron or other particles of spin 1/2. Such a particle is a
fermion, meaning that the wave function is skew-symmetric, changing sign when particles
are interchanged. Therefore no two particles can occupy the same state. (This is the Pauli
exclusion principle.) In the simplest case, electrons are solutions to the Dirac equation
characterized by energy (which is quantized) but no other quantum number. There is thus
one solution with energy k for each integer k. Thus if vk is a vector in a suitable Hilbert
space V representing such a solution, a solution representing the superposition of several
electrons might be represented by an element vk1 ∧ vk2 ∧ ⋯ of the exterior algebra on V.
A difficulty with the Dirac equation is the existence of solutions with negative energy.
Particles with negative energy are unphysical and yet they are predicted by the theory. Dirac
postulated that the negative energy solutions are fully populated. Thus the ground state is
the “vacuum vector”
|0⟩ = v0 ∧ v−1 ∧ v−2 ∧ ⋯
If one electron is promoted to a state with higher energy, we might obtain a state (for
example)
v3 ∧ v0 ∧ v−1 ∧ v−3 ∧ v−4 ∧ ⋯
The v3 then appears as an electron with energy 3. The missing particle v−2 represents the
absence of a usually present negative energy particle, that is, a positive energy particle
whose charge is the opposite of the charge of the electron. This energetic state represents
an electron-positron pair. This scheme avoids the physical appearance of negative energy
solutions.
We are led to consider the space of semi-infinite wedges, spanned by elements of the form
vim ∧ vim−1 ∧ ⋯
where (due to the skew-symmetry of the exterior power) we may assume that
im > im−1 > im−2 > ⋯.
We will call such a vector a semi-infinite monomial. The integer m is chosen so that i−k + k =
0 for sufficiently large k. The integer m is the excess of electrons over positrons, so m is
called the charge of the monomial. Let F(m) be the span of all monomials of charge m, and
let F be the direct sum of the F(m). The space F is called the Fermionic Fock space.
We will denote
|m⟩ = vm ∧ vm−1 ∧ ⋯ ∈ F(m). (5)

This is the vacuum vector in F(m).



Let gl∞ be the Lie algebra spanned by elements Eij with i, j ∈ Z. This is an associative
ring (without unit) having multiplication
Eij Ekl = δjk Eil , δ = Kronecker delta,
and as usual the associative ring becomes a Lie algebra with bracket [X , Y ] = X Y − Y X.
Elements may be thought of as infinite matrices as follows: we will represent an element Σ aij Eij ∈ gl∞, where all but finitely many aij are zero, as a matrix (aij) indexed by i, j ∈ Z.
The Lie algebra gl∞ acts on the vk by Eij vk = δjk vi. Then there is an action on the semi-infinite monomials by

X(vim ∧ vim−1 ∧ ⋯) = Σk vim ∧ ⋯ ∧ vik+1 ∧ X(vik) ∧ vik−1 ∧ ⋯. (6)

In accordance with the Leibniz rule, we have applied the matrix X to a single factor vik, then summed over all k.
The Lie algebra gl∞ has an enlargement which we will denote ā. This is again obtained
from an associative algebra, namely we now consider (aij ) where now we allow an infinite
number of nonzero entries, but we require that there exists a bound N such that aij = 0
when |i − j| > N. This is sufficient to imply that in the definition of matrix multiplication, (aij)·(bij) = (cij) where cij = Σk aik bkj, the summation over k is finite.
The action of gl∞ on semi-infinite monomials does not extend to ā. However it almost
does so extend. Applying (6) formally gives

Σ|n|≤N  Σi,j,k : i−j=n, ik=j  aij · vim ∧ ⋯ ∧ vik+1 ∧ vi ∧ vik−1 ∧ ⋯. (7)

The inner sum is finite for all n except n = 0. The reason is that if n ≠ 0 then there will only be finitely many ik such that replacing vik by vik+n will not cause a repeated index, and vim ∧ ⋯ ∧ vik+1 ∧ vik+n ∧ vik−1 ∧ ⋯ is treated as zero unless the indices are all distinct. However for n = 0 the inner sum is

Σik aik,ik · vim ∧ ⋯ ∧ vik+1 ∧ vik ∧ vik−1 ∧ ⋯,

which is problematic, since infinitely many of the diagonal entries aik,ik may be nonzero.

We may address this problem by making use of Remark 3. We will perturb the representation π: gl∞ → End(F) defined by the above action (so Xv = π(X)v) by introducing a linear functional λ on gl∞, then let

π′(X) = π(X) − λ(X) IF.

Thus π′ is only a projective representation, but if λ is chosen correctly it has the advantage that it extends to ā.
We consider the linear functional λ: gl∞ → C defined on basis vectors by

λ(Eij) = 1 if i = j < 0, and λ(Eij) = 0 otherwise.

In this case, the term n = 0 in (7) becomes

Σik>0 aik,ik · vim ∧ ⋯ ∧ vik+1 ∧ vik ∧ vik−1 ∧ ⋯,

which is a finite sum even when (aij) ∈ ā.


Now the cocycle φ(X, Y) = λ([X, Y]) is a coboundary on gl∞. It is given by

φ(Eij, Eji) = 1 if i < 0 ≤ j,  φ(Eij, Eji) = −1 if j < 0 ≤ i,  and φ(Eij, Eji) = 0 otherwise,

while φ(Eij, Ekl) = 0 unless j = k and i = l. Although λ itself does not extend to ā, this formula for φ does, so φ is a 2-cocycle on ā. Let a be the corresponding central extension of ā. Then we obtain a representation of a on F.

8 The Heisenberg Lie Algebra and the Stone-von Neumann Theorem
Let H∞ be the Heisenberg Lie algebra, which has a basis pi, qi (i = 1, 2, 3, …) and c. The
vector c is central, while
[pi , qi] = c. (8)
These elements have nicknames coming from quantum mechanics: qi are “position,” pi
are “momentum.” This is because if we have independent particles, with corresponding space-
like variables x1, x2, x3, …, the position and momentum operators on the Schwartz space S(Rn) defined by

(qi f)(x1, x2, …) = xi f,   pi = −iℏ ∂/∂xi

give a representation of H∞ with c acting by −iℏ. Here ℏ is Planck's constant. There is a
certain symmetry since the Schwartz space is invariant under the Fourier transform, which
intertwines the position and momentum operators.
The representation extends to tempered distributions. To obtain a purely algebraic
analog, we may restrict the representation to the space of polynomials, which are not in
the Schwartz space, but nevertheless are tempered distributions. The constant polynomial
is annihilated by the position operators, and this corresponds to the “vacuum vector.” Inter-
preted as a physical state, it corresponds to a function with definite (zero) momentum
but completely indeterminate position. The symmetry is now lost since the Fourier trans-
form of a polynomial is a tempered distribution but not a polynomial, or even a function.
Still, the polynomial representation is an algebraic substitute for the topological one.
Abstractly, given a representation of H∞, we call a nonzero vector v a vacuum vector if
it is annihilated by the pi. If cv = λv then we say λ is the eigenvalue of v. In the physics
literature the vacuum vector is denoted |0⟩ (following Dirac).
Similarly, let λ be any constant, and consider the representation of H∞ on polynomials with qi being multiplication by xi and pi = λ ∂/∂xi, so that λ is the eigenvalue of c. This H∞-module is denoted Rλ.
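A quick sympy check (an illustration, not from Kac) confirms that these operators satisfy relation (8): with qi acting by multiplication by xi and pi = λ ∂/∂xi, the commutator [pi, qi] acts by the scalar λ.

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda')

def q1(f): return x1 * f                 # q_1: multiplication by x_1
def p1(f): return lam * sp.diff(f, x1)   # p_1 = lambda * d/dx_1

f = 3*x1**2*x2 + x2**3 + 7               # an arbitrary polynomial
commutator = p1(q1(f)) - q1(p1(f))
print(sp.simplify(commutator - lam*f))   # 0, so [p1, q1] acts by lambda, the eigenvalue of c
```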

Let H be the Heisenberg Lie group, which is a central extension of R2n, based on a
symplectic form, which may be regarded as a 2-cocycle. Concretely, H is R2n × T (where T
is the group of complex numbers of absolute value 1) with multiplication
(x1, …, xn, y1, …, yn, t)(x1′, …, xn′, y1′, …, yn′, t′) = (x1 + x1′, …, yn + yn′, t t′ exp(i Σ (xi yi′ − yi xi′))).

The Lie algebra of H is H∞. The classical Stone-von Neumann theorem says that H admits
a unique irreducible unitary representation with any given nontrivial central character. The
following result is an algebraic analog.

Proposition 4. Let V be any irreducible H∞-module. Suppose that V is generated by a vacuum vector with eigenvalue λ. Then V ≅ Rλ.

Proof. See Kac, Corollary 9.13 on page 162. In order to see why this is true, consider the abelian "Lagrangian" subalgebra H+ generated by the qi. Since H+ is abelian, its universal enveloping algebra U(H+) is the polynomial ring A = C[q1, q2, …]. We will see that if v0 is the vacuum vector, then V = A · v0. Indeed, given any polynomial P(q1, q2, …) we have

[pi, P(q1, q2, …)] = (∂P/∂qi)(q1, q2, …) c.

Since pi annihilates v0 we see that

pi P(q1, q2, …) v0 = [pi, P(q1, q2, …)] v0 = λ (∂P/∂qi)(q1, q2, …) v0.

This shows that A · v0 is closed under the action of the pi as well as the qi, and moreover gives an algorithm for computing the action of the pi. □

9 The Harmonic Oscillator


The results of this section are, strictly speaking, not needed for the discussion of the Boson-
Fermion correspondence, but they are useful to consider since we will be introduced to the
raising and lowering operators. Moreover, the Proposition of this section is almost the same
as Proposition 4, and the idea of the proof is the same.
Let us consider the one-dimensional Heisenberg Lie algebra H3, spanned by P, Q and ℏ subject to the conditions

[P, Q] = −iℏ, [ℏ, P] = [ℏ, Q] = 0.

Given any nonzero complex number λ, this has a representation on the Schwartz space S(R) in which

Qf(x) = x f(x),  Pf = iλ ∂f/∂x,  ℏf = λf. (9)
This representation extends to tempered distributions, and it has an algebraic submodule
C[x] consisting of all polynomial functions. Polynomials are not Schwartz functions, but
they are tempered distributions. In some sense, this representation on C[x] is an algebraic
version of the analytic representation on S(R). Let us call it V (λ).

The polynomial 1 is a vacuum vector since it is annihilated by P; we will denote it as |0⟩. Here is an analog of Proposition 4.

Proposition 5. Let λ ≠ 0. Then any irreducible representation of H3 that has a vacuum vector v0, and such that ℏv0 = λv0, is isomorphic to the module described by (9).

Proof. By a simple change of basis, we may reduce to the case λ = i. Thus if we normalize so that ℏ = 1 this means that

[Q, P] = i.
Let us introduce two operators

a = (1/√2)(Q + iP),  a* = (1/√2)(Q − iP). (10)

These operators satisfy

[a, a*] = −i[Q, P] = 1. (11)

The vacuum vector |0⟩, we have already noted, is a notation for the constant function 1. Define recursively

v0 = |0⟩,  vk+1 = (1/√(k+1)) a* vk. (12)

(The constant 1/ k + 1 is included to make the vk+1 orthonormal, though we will not check
that they have norm 1.) Let N = a∗a. It is clearly a Hermitian operator. We claim that
Nvk = kvk. (13)
Indeed, if v = v0 then since av0 = 0 we have (13) when k = 0. Assume it is true for some k.
Then using (11) we have
Na∗vk = a∗aa∗ = a∗(a∗a + 1)vk = a∗(N + 1)vk = (k + 1)a∗vk.

Dividing by k + 1 we obtain (13) with k increased to k + 1, so (13) follows by induction.
The vk are eigenvectors of the self-adjoint operator N with distinct eigenvalues, so they are linearly independent. Now consider the space

⊕k=0∞ C vk. (14)

This space is clearly closed under the action of a*, and the effect of a* on the vk is given in (12). Let us show that the effect of a on vk may also be computed. We will show recursively that

a vk = √k vk−1.

Indeed, we have

a vk = (1/√k) a a* vk−1 = (1/√k)(N + 1) vk−1 = (1/√k) k vk−1 = √k vk−1.
We see that (14) is closed under both a and a∗, hence under both P and Q. By irreducibility,
we see that the vk are a basis, and moreover, the effect of P and Q are determined, so the
module is unique up to isomorphism. 
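The recursion in the proof can also be checked numerically. Below is a small sketch (an illustration under a finite truncation, not part of the text) that builds the matrices of a and a* on the span of v0, …, vn−1, i.e. a vk = √k vk−1 and a* vk = √(k+1) vk+1, and confirms N vk = k vk and the commutation relation [a, a*] = 1 away from the truncation boundary.

```python
import numpy as np

n = 8
# a v_k = sqrt(k) v_{k-1},  a* v_k = sqrt(k+1) v_{k+1}, truncated to dimension n
a = np.diag(np.sqrt(np.arange(1, n)), k=1)
a_dag = a.T
N = a_dag @ a

# N v_k = k v_k for every basis vector
print(np.allclose(np.diag(N), np.arange(n)))            # True

# [a, a*] = identity, except in the last row/column where truncation interferes
comm = a @ a_dag - a_dag @ a
print(np.allclose(comm[:n-1, :n-1], np.eye(n-1)))        # True
```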

The operators (10) are called (respectively) annihilation and creation. As we will now explain, they act on the stationary states of the quantum mechanical harmonic oscillator, changing the energy level. The operator a* creates a quantum of energy, and the operator a destroys a quantum of energy. The operator H = (1/2)(aa* + a*a) is the Hamiltonian of the harmonic oscillator. Concretely, this is the differential operator

P2 + Q2 = −d2/dx2 + x2

corresponding to a quadratic potential U(x) = x2. The eigenfunctions are the vk, which turn out to be v0 = e−x²/2 times Hermite polynomials. Since H = N + 1/2, the energy levels are 1/2, 3/2, 5/2, …. The creation and annihilation operators shift up and down between the levels.

10 Heisenberg representation on the Fermionic Fock space
Let Λn be the shift operator acting on the space V spanned by vectors vj (j ∈ Z):

Λn(vj) = vj−n.

If n ≠ 0 we adapt the representation to an action on F by:

Λn(vim ∧ vim−1 ∧ ⋯) = Σk vim ∧ ⋯ ∧ Λn(vik) ∧ ⋯. (15)

The k-th term is interpreted to be 0 if ik − n = il for some l ≠ k (since then vil appears twice). Under our assumption that n ≠ 0 it follows that this is zero for all but finitely many k. If n = 0 this definition doesn't make sense because the sum (15) is infinite, but we define Λ0 to act by the scalar m on F(m).
We also make use of raising and lowering operators, which we now define. Define

ψj(ξ) = vj ∧ ξ.

Thus if ξ = vim ∧ vim−1 ∧ ⋯ then

ψj(ξ) = 0 if j = ik for some k, and ψj(ξ) = (−1)m−k vim ∧ ⋯ ∧ vik ∧ vj ∧ vik−1 ∧ ⋯ if ik > j > ik−1 for some k.

Let ψ*j be the adjoint of ψj with respect to the inner product such that the semi-infinite monomials are an orthonormal basis. Thus ψ*j removes vj from the monomial if it can be found; otherwise it gives zero. Concretely, with ξ = vim ∧ vim−1 ∧ ⋯,

ψ*j(ξ) = (−1)m−k+1 vim ∧ ⋯ ∧ vik+1 ∧ v̂j ∧ vik−1 ∧ ⋯ if ik = j, and ψ*j(ξ) = 0 if j ∉ {im, im−1, …}.

The "hat" over vj means that this vector is omitted.



Proposition 6. The operators Λj and Λ−j are adjoints. We have

[Λj, ψn] = ψn−j,  [Λj, ψ*n] = −ψ*n+j, (16)

[Λn, Λm] = n δn,−m IF. (17)

Proof. To see that Λj and Λ−j are adjoints, observe that each preserves the orthogonal subspaces F(m). Moreover if ξ = vim ∧ vim−1 ∧ ⋯ and ξ′ = vjm ∧ vjm−1 ∧ ⋯ then ⟨Λjξ, ξ′⟩ is zero unless ik = jk for all but one value k = k0 and ik0 − j = jk0, in which case it is 1; and ⟨ξ, Λ−jξ′⟩ is the same.
For the first equation in (16), using the Leibniz rule,
[Λj, ψn]ξ = Λj(vn ∧ ξ) − vn ∧ Λj(ξ) = Λj(vn) ∧ ξ = vn−j ∧ ξ = ψn−j(ξ).
Taking adjoints gives the second equation in (16).
Now to prove [Λn, Λm] = n δn,−m IF, let us first consider the effect of applying ΛnΛm to a vacuum vector. For definiteness, let us use |0⟩ as in (5), though there would be no difference for any other vacuum. Let us consider first the case where n = −m. We will show that

ΛnΛ−n|0⟩ − Λ−nΛn|0⟩ = n|0⟩.

We may assume that n > 0, since if n = 0 this is trivial, and the cases n and −n are equivalent.
In this case, it is easy to see that Λn|0⟩ = 0 since every term in (15) will have a repeated vi. On the other hand,

Λ−n|0⟩ = Σk=0n−1 (−1)k−1 vk ∧ v0 ∧ v−1 ∧ ⋯ ∧ v̂k−n ∧ ⋯

where the hat means the vector vk−n has been omitted. Applying Λn to each term produces just |0⟩, for n terms altogether.
We now know that
[Λn , Λ−n]ξ = nξ (18)
when ξ = |0⟩ is the vacuum. Next let us show that if (18) is true for a vector ξ then it is true
for ψ jξ for any j. Indeed
[Λn , Λ−n]ψ jξ − nψ jξ = [Λn , Λ−n]ψ jξ − ψ j [Λn , Λ−n]ξ.
But by the Jacobi identity
[[Λn , Λ−n], ψ j ] = [Λn , [Λ−n , ψ j ]] − [Λ−n , [Λn , ψj ]]
and by (16) this is zero. Now (17) follows since every semi-infinite monomial can be built up from a vacuum by applications of ψj for various j. □
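The commutation relation (17) can be tested directly on the vacuum. The sketch below is an illustration only: representing a charge-0 monomial by its first D indices (with the convention that position k carries index −k for k ≥ D) is an implementation choice, not notation from the text. It applies Λn slot by slot as in (15), with the sign coming from re-sorting, and verifies [Λn, Λ−n]|0⟩ = n|0⟩ for several n.

```python
from collections import defaultdict

def sort_sign(seq):
    # sort distinct numbers in decreasing order, returning (sorted tuple, permutation sign)
    seq, sign = list(seq), 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] < seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return tuple(seq), sign

def shift(n, mono):
    """Apply Lambda_n to a charge-0 monomial given by its first D = len(mono) indices.
    Requires the last |n| explicit entries to follow the vacuum pattern, so that slots
    beyond the prefix always produce a repeated index and contribute nothing."""
    D = len(mono)
    assert all(mono[k] == -k for k in range(D - abs(n), D))
    out, occupied = defaultdict(int), set(mono)
    for k in range(D):
        new = mono[k] - n
        if new <= -D or new in occupied - {mono[k]}:
            continue                      # repeated index: the wedge is zero
        replaced = mono[:k] + (new,) + mono[k + 1:]
        key, sign = sort_sign(replaced)
        out[key] += sign
    return dict(out)

def apply_to_vector(n, vec):
    out = defaultdict(int)
    for mono, coeff in vec.items():
        for key, c in shift(n, mono).items():
            out[key] += coeff * c
    return {k: v for k, v in out.items() if v != 0}

D = 10
vacuum = tuple(-k for k in range(D))      # |0> = v_0 ^ v_{-1} ^ v_{-2} ^ ...
for n in (1, 2, 3):
    lhs = apply_to_vector(n, apply_to_vector(-n, {vacuum: 1}))
    rhs = apply_to_vector(-n, apply_to_vector(n, {vacuum: 1}))
    comm = {k: lhs.get(k, 0) - rhs.get(k, 0) for k in set(lhs) | set(rhs)}
    comm = {k: v for k, v in comm.items() if v != 0}
    print(n, comm == {vacuum: n})         # True for each n
```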

11 Boson-Fermion Correspondence
Let H be the Lie algebra spanned by elements Li (i ∈ Z) and a central element ℏ subject to the relations

[Lm, Ln] = m δm,−n ℏ.

In addition to ℏ there is a second central element L0. The basis elements Ln, (1/n)L−n, ℏ (for n > 0) obviously satisfy the same relations (8) as the pi, qi and c, so H ≅ H∞ ⊕ CL0. Thus the algebraic Stone-von Neumann result is true in the following form: let λ: Z(H) → C be a linear functional on the center Z(H), which is the vector space spanned by ℏ and L0, and assume that λ(ℏ) is nonzero. Define a vacuum vector to be one annihilated by the Lj with j > 0. Then there exists a unique isomorphism class of irreducible representations generated by a vacuum vector such that Z(H) acts by the linear functional λ.

Let B be the ring of polynomials C[x1, x2, …]. For every m we have an action rm: H → End(B) of H on B in which L0 = m IB and, if k > 0,

Lk = ∂/∂xk,  L−k = k xk,

with ℏ acting by the scalar 1.
On the other hand, we have the action on F(m) in which Lk ↦ Λk, which by Proposition 6 gives a representation. Due to the algebraic Stone-von Neumann theorem, there is a unique intertwining map σm: F(m) → B that intertwines the action on F(m) with rm, and which maps the vacuum vector |m⟩ to the vacuum vector in B, which is the polynomial 1. We combine these into an intertwining map σ: F → B̂ = C[q, q−1] ⊗ B that sends |m⟩ to qm, where now B̂ is an H-module with H acting by rm on the coset qmB, that is, on qm ⊗ B.
Let u be a generator of a ring C[[u]] of formal power series. Combine the raising and lowering operators into series:

ψ(u) = Σj∈Z uj ψj,  ψ*(u) = Σj∈Z u−j ψ*j.

Note that applied to a semi-infinite monomial, all but finitely many ψj with j < 0 produce 0, and similarly all but finitely many ψ*j with j > 0 produce 0. Hence ψ(u) and ψ*(u) are meaningful. If u is an indeterminate, they map F → C((u)) ⊗ F, where C((u)) is the field of fractions of C[[u]], consisting of formal power series Σ an un where an = 0 if n < −N for some N.
By (16) we have
[Λn , ψ(u)] = unψ(u), [Λn , ψ ∗(u)] = −unψ∗(u). (19)

By abuse of notation we will write F and B instead of C((u)) ⊗ F and C((u)) ⊗ B, so that we may work with formal power series. Let Ψ(u) and Ψ*(u) be the corresponding endomorphisms of B̂, that is,

Ψ(u) = σ ψ(u) σ−1,  Ψ*(u) = σ ψ*(u) σ−1.
Transferring (19) to B we have
[Ln , Ψ(u)] = unΨ(u), [Ln , Ψ∗(u)] = −unΨ∗(u). (20)
Concretely, [Ln, Ψ(u)] = un Ψ(u) means, first taking n = −j where j > 0, and then taking n = j:

[xj, Ψ(u)] = (u−j/j) Ψ(u),  [∂/∂xj, Ψ(u)] = uj Ψ(u). (21)

Proposition 7. On qmB we have

Ψ(u) = um+1 q · exp(Σj=1∞ uj xj) exp(−Σj=1∞ (u−j/j) ∂/∂xj), (22)

Ψ*(u) = u−m q−1 · exp(−Σj=1∞ uj xj) exp(Σj=1∞ (u−j/j) ∂/∂xj). (23)

Proof. We recall Taylor's theorem in the form

f(x + t) = Σn=0∞ (tn/n!) (dnf/dxn)(x) = exp(t d/dx) f(x).
Define

Tu f(x1, x2, …) = f(x1 + u−1, x2 + u−2/2, …). (24)

Then

Tu = exp(Σj=1∞ (u−j/j) ∂/∂xj).
We consider

[xj, Ψ(u)Tu] = [xj, Ψ(u)] Tu − Ψ(u) [Tu, xj].

By (24) we have

[Tu, xj] = (u−j/j) Tu.

Combining this with (21),

[xj, Ψ(u)Tu] = 0.
This means that Ψ(u)Tu is free of terms involving ∂/∂x j , and we may write
Ψ(u)Tu = qP (x),
a polynomial in x (also involving u as a formal power series). The factor q is clear since Ψ
maps qmB into q m+1B but P (x) remains to be determined.
Now consider

exp(−Σj=1∞ uj xj) Ψ(u) Tu. (25)

We have

[∂/∂xj, exp(−Σj=1∞ uj xj)] = −uj exp(−Σj=1∞ uj xj).
Combining this with (21) and the fact that [∂/∂xj, Tu] = 0, we see that ∂/∂xj annihilates (25), and so this is a constant multiple of the vacuum vector. Here "constant" means independent of the xi. The constant, we have already noted, is q times a factor depending only on u, and since it is independent of the xi and q, it depends only on u. We evaluate it by applying (25) to the vacuum vector qm.

To carry this out, consider the following commutative diagram, in which the vertical maps are σ:

qmB  --Tu-->  qmB  --Ψ(u)-->  qm+1B  --α-->  qm+1B  --q−1-->  qmB
 ↑σ           ↑σ              ↑σ             ↑σ               ↑σ        (26)
F(m) --Tu′--> F(m) --ψ(u)-->  F(m+1) --β-->  F(m+1) --κ-->    F(m)
The maps Tu′ and κ are defined by the commutativity of the squares they appear in. The map q−1 is multiplication by q−1, and α is multiplication by

exp(−Σj=1∞ uj xj);

therefore the third square is commutative if

β = exp(−Σj=1∞ (uj/j) Λ−j).

Let us argue that

κ(vim+1 ∧ vim ∧ ⋯) = vim+1−1 ∧ vim−1 ∧ ⋯. (27)

The map κ = σ−1 ◦ (q−1) ◦ σ is a homomorphism of H0-modules, where H0 is spanned by the Li with i ≠ 0 and ℏ. Let κ′ be the map defined by (27). We observe that κ′ is clearly a homomorphism of H0-modules, in view of (15). As F(m+1) is irreducible, κ and κ′ differ by a constant multiple. But applied to the vacuum |m + 1⟩ both maps give |m⟩, proving (27).
We have proven that the top composition is multiplication by a constant, and therefore so is the bottom composition. To compute it, we apply it to |m⟩. The map Tu′ takes |m⟩ to |m⟩ since the translation map Tu preserves the vacuum in qmB, which is the constant function qm. The map ψ(u) takes |m⟩ to um+1|m + 1⟩ plus a sum of other semi-infinite monomials orthogonal to |m + 1⟩. Evidently applying β then produces um+1|m + 1⟩ plus a sum of semi-infinite monomials orthogonal to it. Then κ produces um+1|m⟩ plus something orthogonal to it. Since we know that the bottom composition applied to |m⟩ gives a multiple of |m⟩, we see that this composition is just multiplication by the constant um+1, and we have proved (22). The other equation (23) is similar. □

12 The Murnaghan-Nakayama Rule


Let α1, …, αn be variables. Let Λ(n) be the ring of symmetric polynomials in α1, …, αn. Particularly important elements of Λ(n) are the k-th elementary symmetric polynomials

ek(α) = ek(n)(α) = ek(α1, …, αn) = Σi1<⋯<ik αi1 ⋯ αik

and the k-th complete symmetric polynomials

hk(α) = hk(n)(α) = hk(α1, …, αn) = Σi1≤⋯≤ik αi1 ⋯ αik.

We define e0 = h0 = 1 and ek = hk = 0 if k < 0.


There is a ring homomorphism Λ(n+1)
(n+1)
sends ek (n) (n+1)
to ek and hk (n)

Λ(n) in which αn+1 
0. This homomorphism
to hk , so ek and hk have a well-defined meaning in the inverse
limit


Λ = lim Λ(n).

The ring Λ may be thought of as the ring of symmetric polynomials in infinitely many
variables.

Proposition 8. The ring Λ(n) is a polynomial ring C[e1, …, en] = C[h1, …, hn]. Thus the ring Λ is a polynomial ring C[e1, e2, …] = C[h1, h2, …]. It admits an involution that sends ei ↦ hi and hi ↦ ei.
Proof. We first recall why any symmetric polynomial f is a polynomial in e1, …, en. By induction, we assume this is true in Λ(n) and check that it is true in Λ(n+1). Let f ∈ Λ(n+1) and let f′ be the image of f in Λ(n) obtained by putting αn+1 = 0. By induction on n, f′ may be expressed as a polynomial F′ in e1(n), …, en(n). We observe that f − F′(e1(n+1), …, en(n+1)) is in the kernel of the homomorphism Λ(n+1) → Λ(n), and this implies that it is a multiple of αn+1. Since it is symmetric, it is a multiple of en+1(n+1) = α1 ⋯ αn+1, and we may write f = F′(e1(n+1), …, en(n+1)) + g·en+1(n+1) for some symmetric polynomial g. The degree of g is strictly less than that of f, and by induction on degree, g is a polynomial in e1(n+1), …, en+1(n+1), so the same is true of f.
Therefore Λ(n) = C[e1, …, en]. We note that e1, …, en are algebraically independent. See Lang's Algebra, third edition, page 192 for an algebraic proof valid over any field, or, working over the complex numbers, observe that if φ(e1, …, en) = 0 where φ is a nonzero polynomial, then there are values a1, …, an ∈ C such that φ(a1, …, an) ≠ 0. Now by the fundamental theorem of algebra we may find r1, …, rn ∈ C such that

(X − r1) ⋯ (X − rn) = Xn − a1Xn−1 + ⋯ ± an,

so ak = ek(r1, …, rn). Therefore φ(a1, …, an) = φ(e1(r1, …, rn), …, en(r1, …, rn)) = 0, a contradiction, proving that the ei are algebraically independent.
Thus Λ(n) = C[e1, …, en] is a polynomial ring. There exists a homomorphism ι: Λ(n) → Λ(n) such that ι(ek) = hk for k = 1, …, n. We will show that ι(hk) = ek when k ≤ n also. To prove this, define

E(t) = Σk ek tk,  H(t) = Σk hk tk.

It is not hard to show that

E(t) = Πi=1n (1 + αi t),  H(t) = Πi=1n (1 − αi t)−1,

where for the second identity we assume |tαi| < 1 so that we may expand the geometric series. Therefore

E(t) H(−t) = 1.

This means that

ek − ek−1h1 + ek−2h2 − ⋯ + (−1)khk = 0 (28)

for k > 0. This allows us to write hk recursively as a polynomial in the ek. Applying ι we obtain

hk − hk−1ι(h1) + hk−2ι(h2) − ⋯ + (−1)kι(hk) = 0

and comparing this relation with (28) we see (using induction on k) that ι(hk) = ek, provided k ≤ n. Taking the limit as n → ∞, the corresponding statements are true for Λ. In the limiting case, we no longer need the assumption that k ≤ n, so ι interchanges ek and hk for all k. □

In addition to the ek and hk let pk be the k-th power-sum symmetric polynomial:

pk(α1, …, αn) = Σi αik,

or the image of this polynomial in Λ.

Proposition 9.
(i) (Pieri's formula)

hk sλ = Σ sµ,  the sum over µ ⊃ λ such that µ\λ is a horizontal strip of size k.

(ii) (Pieri's formula)

ek sλ = Σ sµ,  the sum over µ ⊃ λ such that µ\λ is a vertical strip of size k.

(iii) (Murnaghan-Nakayama rule)

pk sλ = Σ (−1)ht(µ\λ) sµ,  the sum over µ ⊃ λ such that µ\λ is a ribbon of size k.

Here µ ⊃ λ means that the Young diagram of µ contains that of λ, or alternatively that µi ≥ λi for all i. Then we may consider the skew shape µ\λ. Its Young diagram is the set-theoretic difference between the Young diagrams of µ and λ. To say that µ\λ is a horizontal strip means that it has no two boxes in the same column. To say that it is a vertical strip means that it has no two boxes in the same row. To say that it is a ribbon means that it is connected and contains no 2 × 2 block. The height ht(µ\λ) of a ribbon is one less than the number of rows it occupies.
We will prove (iii), the Murnaghan-Nakayama rule. Pieri's formula (ii) may be proved easily by the same method.
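As an illustration of the rule (not part of the proof), the following sympy sketch computes Schur polynomials in three variables via the bialternant formula sλ = det(αi^(λj+n−j)) / det(αi^(n−j)) and checks the expansion p2 s(1) = s(3) − s(1,1,1); the two ribbons of size 2 that can be added to λ = (1) are a horizontal one of height 0 and a vertical one of height 1.

```python
import sympy as sp

a = sp.symbols('a1:4')   # three variables alpha_1, alpha_2, alpha_3
n = len(a)

def schur(lam):
    # bialternant formula: s_lambda = det(a_i^(lambda_j + n - j)) / Vandermonde
    lam = list(lam) + [0] * (n - len(lam))
    num = sp.Matrix(n, n, lambda i, j: a[i] ** (lam[j] + n - 1 - j)).det()
    den = sp.Matrix(n, n, lambda i, j: a[i] ** (n - 1 - j)).det()
    return sp.cancel(num / den)

p2 = sum(x**2 for x in a)

lhs = sp.expand(p2 * schur([1]))
rhs = sp.expand(schur([3]) - schur([1, 1, 1]))
print(sp.simplify(lhs - rhs) == 0)   # True: p_2 s_(1) = s_(3) - s_(1,1,1)
```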

Lemma 10. Let ξ: Z → V be a function taking values in a vector space V and let f be a skew-symmetric n-linear map from Vn to another vector space U. Define

F(λ) = f(ξ(λ1), …, ξ(λn)),  λ ∈ Zn.

If λ = (λ1, …, λn) ∈ Zn define

S(λ) = F(λ + ρ) = F(λ1 + n − 1, λ2 + n − 2, …, λn)

where ρ = (n − 1, n − 2, …, 0). Let ei = (0, …, 1, …, 0) be the i-th standard basis vector. Assume that λ is a partition, so λ1 ≥ λ2 ≥ ⋯ ≥ λn. Then the nonzero elements of the set

{S(λ + kei) | i = 1, …, n} (29)

are precisely the nonzero elements of the set

{(−1)ht(µ\λ) S(µ) | µ ⊃ λ and µ\λ is a ribbon of length k}.

Before giving the proof let us consider an example. Let λ = (8, 4, 4, 3, 2, 1, 1, 1), n = 8 and k = 10. With ρ = (7, 6, 5, 4, 3, 2, 1, 0) we have λ + ρ = (15, 10, 9, 7, 5, 3, 2, 1). One of the elements of the set (29) is

S(λ + 10e6) = F(15, 10, 9, 7, 5, 13, 2, 1) = (−1)4 F(15, 13, 10, 9, 7, 5, 2, 1) = S(µ)

where

µ = (8, 7, 5, 5, 4, 3, 1, 1).

Drawing the Young diagrams (figure omitted), we can see that the skew diagram µ\λ is a ribbon.

Proof. We may now explain the reason that the Lemma is true. If F(λ + ρ + kei) ≠ 0, then the components of the vector λ + ρ + kei are distinct, and in particular λi + n − i + k does not equal λj + n − j for any j ≠ i. Thus there is a j ≤ i such that

λj−1 + n − j + 1 > λi + n − i + k > λj + n − j, (30)

unless λi + n − i + k > λ1 + n − 1; in the latter case, the following explanation will remain true with j = 1. Let µ + ρ be λ + ρ + kei with the entries rearranged in descending order. This means that the i-th entry of λ + ρ + kei is moved forward into the j-th position. Taking into account that ρj = ρi + i − j this means that µj = λi + k − i + j.

Now (30) may be rewritten

λj−1 ≥ µj > λj, (31)

and furthermore if j < l ≤ i, we have moved the (l − 1)-th entry into the l-th position, which means that

µl = λl−1 + 1. (32)

For l not in the range j ≤ l ≤ i we have µl = λl. It is easy to see that the conditions (31) and (32) mean that µ\λ is a ribbon. □

We may now give two examples for Lemma 10. One example will prove the Murnaghan-
Nakayama rule. The other is relevant to the Fermion-Boson correspondence.

Let us prove Proposition 9 (iii), the Murnaghan-Nakayama rule. Let α1, …, αn be indeterminates, and let V = Cn. Let ξ: Z → V be the map

ξ(k) = (α1k, …, αnk).

Let f: Vn → C be the map sending v1, …, vn to ∆−1 det(v), where v is the matrix with rows vi and

∆ = det(αin−j)i,j = Πi<j (αi − αj).

Then if λ is a partition then S(λ) = sλ(α1, …, αn) is the Schur polynomial.


Now consider pk sλ. This equals ∆−1 times Σi=1n Mi, where Mi = αik · ∆sλ. We will argue that

Σi=1n Mi = Σi=1n Mi′, (33)

where Mi′ = ∆ · S(λ + kei). Indeed, Mi may be described as follows. Begin with the matrix:

α1^(λ1+n−1)  α2^(λ1+n−1)  ⋯  αn^(λ1+n−1)
α1^(λ2+n−2)  α2^(λ2+n−2)  ⋯  αn^(λ2+n−2)
  ⋮            ⋮                ⋮
α1^(λn)      α2^(λn)      ⋯  αn^(λn)
Increase each exponent in the i-th column by k, then take the determinant; this gives Mi. On the other hand, Mi′ may be described as follows: increase each exponent in the i-th row by k, then take the determinant. In either case, summing over i we obtain the same n! · n terms

Σi=1n Σw∈Sn sgn(w) Πj αw(j)^(λj+n−j+kδi,j),

proving (33). Now using the Lemma, we see that pk sλ equals

Σ (−1)ht(µ\λ) sµ,  the sum over µ ⊃ λ such that µ\λ is a ribbon of size k,

proving the Murnaghan-Nakayama rule.

13 The Boson-Fermion correspondence (continued)


Let pk be the k-th power sum symmetric polynomial, either regarded as a polynomial in some fixed set of "hidden variables" α1, …, αn, or (better, since n will not be fixed) as an element of the ring Λ. We identify these (normalized) with the variables xk in the Bosonic Fock space:

xk = (1/k) pk.

Let Sλ(x) be the polynomials in the xi such that Sλ(x) = sλ(α) is the Schur polynomial. For example,

S(1) = x1,  S(2) = (1/2)x1² + x2,  S(1,1) = (1/2)x1² − x2,  S(2,1) = (1/3)x1³ − x3.
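These expressions can be verified from the expansion of Schur polynomials in power sums. The sketch below (an illustration only, reusing the bialternant formula for Schur polynomials) checks that s(2) = (p1² + p2)/2, s(1,1) = (p1² − p2)/2 and s(2,1) = (p1³ − p3)/3, which under xk = pk/k are exactly the polynomials listed above.

```python
import sympy as sp

a = sp.symbols('a1:4')
n = len(a)

def schur(lam):
    # bialternant formula in n = 3 variables
    lam = list(lam) + [0] * (n - len(lam))
    num = sp.Matrix(n, n, lambda i, j: a[i] ** (lam[j] + n - 1 - j)).det()
    den = sp.Matrix(n, n, lambda i, j: a[i] ** (n - 1 - j)).det()
    return sp.cancel(num / den)

p = {k: sum(x**k for x in a) for k in (1, 2, 3)}

checks = {
    (2,):   (p[1]**2 + p[2]) / 2,     # S_(2)   = x1^2/2 + x2
    (1, 1): (p[1]**2 - p[2]) / 2,     # S_(1,1) = x1^2/2 - x2
    (2, 1): (p[1]**3 - p[3]) / 3,     # S_(2,1) = x1^3/3 - x3
}
for lam, expr in checks.items():
    print(lam, sp.simplify(schur(lam) - expr) == 0)   # True for each
```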

Let us just consider the charge 0 part F(0) of the Fermionic Fock space. We index the monomials in F(0) by partitions: given

ξ = vi0 ∧ vi−1 ∧ vi−2 ∧ ⋯,

where i0 > i−1 > ⋯ and i−k = −k for k sufficiently large, writing λk = i−k + k, we have λ0 ≥ λ1 ≥ λ2 ≥ ⋯ and λk = 0 eventually, so λ = (λ0, λ1, …) is a partition. We write ξ = ξ(λ), and then ξ is a bijection of the set of all partitions onto a basis of F(0).

Theorem 11. Let λ be a partition. Then σ(ξ(λ)) = Sλ.

Proof. This is another application of Lemma 10. We fix an integer n that is greater than the length of λ. Let F: Zn → B be the map

F(µ0, …, µn−1) = σ(vµ0 ∧ vµ1−1 ∧ vµ2−2 ∧ ⋯ ∧ vµn−1−n+1 ∧ v−n ∧ v−n−1 ∧ ⋯).

This map is skew-symmetric, so by the Lemma we have

Σj=0n−1 F(λ0, …, λj + k, …, λn−1) = Σ (−1)ht(µ\λ) F(µ),  the sum over µ ⊃ λ such that µ\λ is a ribbon of size k.

By the Leibniz rule (15) the left-hand side is Λ−k applied to F(λ), and remembering that in the bosonic picture this is multiplication by kxk = pk, we have proved

kxk F(λ) = Σ (−1)ht(µ\λ) F(µ),  the sum over µ ⊃ λ such that µ\λ is a ribbon of size k. (34)

On the other hand, we have

pk sλ = Σ (−1)ht(µ\λ) sµ,  the sum over µ ⊃ λ such that µ\λ is a ribbon of size k, (35)

by the Murnaghan-Nakayama rule.


Now we make use of the following facts about symmetric functions. The ring Λ of sym-

basis. Consequently, there is a linear map θ: Λ 


metric functions is generated by the pk as a C-algebra, and as a vector space, the sλ are a
C[x1, x2, x3, ] that sends Sλ to F (λ).


1
Since we are identifying Λ with C[x1, x2, x3, ] by identifying pk with k xk, we have, by (34)
and (35), θ(pksλ) = pkj(sλ), and therefore θ is a Λ-module homomorphism Λ Λ. Since
θ(1) = 1, it follows that θ is the identity map, which in view of our identifications means
σ(ξ(λ)) = Sλ. 
