
Communications on Pure and Applied Mathematics, Vol. VII, 649-673 (1954)

On the Exponential Solution of Differential Equations for a Linear Operator*

By WILHELM MAGNUS

Introduction and Summary


The present investigation was stimulated by a recent paper of K. O. Friedrichs [1], who arrived at some purely algebraic problems in connection with the theory of linear operators in quantum mechanics. In particular, Friedrichs used a theorem by which the Lie elements in a free associative ring can be characterized. This theorem is proved in Section II of the present paper together with some applications which concern the addition theorem of the exponential function for non-commuting variables, the so-called Baker-Hausdorff formula. Section I contains some algebraic preliminaries. It is of a purely expository character, and so is part of Section III. Otherwise, Section III deals with the following problem, also considered by Friedrichs: Let A(t) be a linear operator depending on a real variable t. Let Y(t) be a second operator satisfying the differential equation

(1) dY/dt = AY

and the initial condition Y(0) = I, where I denotes the identity operator. The problem is to define, in terms of A, an operator Ω(t) such that Y = exp Ω.
Feynman [2], using a symbolic interpretation of

(2) exp ∫₀ᵗ A dt,

has derived a solution of (1) in the infinite series form obtained when (1) is solved by iteration. The expression for Ω obtained in the present paper is also an infinite series, but it satisfies the condition that the partial sums of this series become Hermitian after multiplication by i if iA is a Hermitian operator. This formula for Ω is the continuous analogue of the Baker-Hausdorff formula. All of these results are essentially algebraic; they are supplemented in Section IV by a proof of Zassenhaus' formula, which may be described as the dual of Hausdorff's formula.

The simplest instance of an equation of type (1) is given by a finite system of linear differential equations. In this case, A(t) is the matrix of the coefficients of the system, and the convergence of the infinite series for Ω(t) can be discussed.

*This research was supported in part by the United States Air Force, through the Office of Scientific Research of the Air Research and Development Command.

This is done in Section VI. In a special case, which is treated in Section VII, necessary and sufficient conditions are derived for the existence and regularity of Ω(t) for all values of t. Section V supplements a recent investigation by H. B. Keller and J. B. Keller [3], who resumed and continued the work by H. F. Baker [4] on systems of ordinary linear differential equations. In [3], the investigation starts with the assumption that the matrix A(t) can be diagonalized. In Section V we show that the continuous analogue of the Baker-Hausdorff formula provides non-trivial sufficient conditions for the existence of elementary solutions of (1) in cases where A(t) cannot be diagonalized. Here the term "elementary" refers to algebraic operations and applications of a finite number of quadratures to A(t).

FIRST PART: FORMAL ALGEBRA


1. Preliminaries

A free associative ring R_n with n free generators x_1, ⋯, x_n and an identity 1 will be defined by the following four axioms:

(a) The elements of a field f_0 of characteristic zero (for instance the field of real numbers) are in R_n, and the unit element 1 of f_0 is the identity of R_n. The field f_0 belongs to the center of R_n, i.e., all its elements commute with all the elements of R_n. We shall call f_0 the field of the coefficients.
(b) The addition in R_n is commutative and associative, and the multiplication is associative.
(c) There exist no relations between the elements of R_n except those which follow from (a) and (b).
(d) Every element of R_n can be obtained from the elements of f_0 and from x_1, ⋯, x_n by carrying out a finite number of additions and multiplications.

It follows from the axioms (a)-(d) that the identity 1 and the products of any number of "factors" x_ν, i.e., the products

(1.1) x_{ν_1}^{α_1} x_{ν_2}^{α_2} ⋯ x_{ν_m}^{α_m} (α_1, α_2, ⋯, α_m = 1, 2, 3, ⋯),

with

(1.2) ν_1 ≠ ν_2, ν_2 ≠ ν_3, ⋯, ν_{m−1} ≠ ν_m; ν_l = 1, ⋯, n for l = 1, ⋯, m,

form a basis of linearly independent elements of R_n with respect to f_0.
We may extend R_n to a ring R which consists of all the formal power series with coefficients c_0, c(ν_1, ⋯, ν_m; α_1, ⋯, α_m) in f_0, the generic element A of R being

(1.3) A = c_0 + Σ c(ν_1, ⋯, ν_m; α_1, ⋯, α_m) x_{ν_1}^{α_1} ⋯ x_{ν_m}^{α_m}.

The sum in (1.3) is taken over all possible combinations of integers ν_1, ⋯, ν_m, α_1, ⋯, α_m satisfying (1.1) and (1.2) for m = 1, 2, 3, ⋯. A power series of the type described by (1.3) will also be called a function of x_1, ⋯, x_n and will be denoted by F(x_1, ⋯, x_n). Problems of convergence do not play a role; if, for instance, f_0 is the field of rational numbers, both

Σ_{n=0}^∞ n! x_1^n

and its square belong to R. Here we have used the natural notation according to which x_ν^0 = 1 for any ν.
As an example which will be used later we may consider the case of two free generators x, y; we take for f_0 the field of real numbers. The exponential function is defined by

(1.4) e^x = Σ_{n=0}^∞ x^n/n!

and we have

(1.5) e^x e^y = Σ_{n,m=0}^∞ x^n y^m/(n! m!).

We can find a function z of x and y such that

(1.6) e^x e^y = e^z

and

(1.7) z = u − ½u² + ⅓u³ − ⋯,

where

(1.8) u = e^x e^y − 1 = x + y + x²/2! + xy/(1! 1!) + y²/2! + ⋯.

If we substitute for u its value from (1.8), it is easily shown that the series in (1.7) leads to an element of R. For this purpose, we shall call

α_1 + α_2 + ⋯ + α_m

the degree (in x_1, ⋯, x_n) of the basis element (1.1); in (1.5) the terms of degree l (in x, y) will then consist of the sum

(1.9) Σ_{n+m=l} x^n y^m/(n! m!).

Only a finite number of powers of u will involve terms of a given degree in x, y, since the terms of lowest degree in u^k are of degree k in x and y. From this we derive easily that z becomes a power series in x, y with rational coefficients, the first terms being

(1.10) z = x + y + ½xy − ½yx + ⋯.
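The substitution of (1.8) into (1.7) can be carried out mechanically. The following Python sketch (the helper names are ad hoc) represents elements of the free ring on x, y as dictionaries mapping words to rational coefficients, truncates at degree 3, and recovers the first terms (1.10):

```python
from fractions import Fraction
from itertools import product

# An element of the free ring: dict mapping words (tuples of generator
# names) to Fraction coefficients.  The empty word () is the identity 1.

def add(p, q):
    r = dict(p)
    for w, c in q.items():
        r[w] = r.get(w, Fraction(0)) + c
    return {w: c for w, c in r.items() if c != 0}

def scale(p, s):
    return {w: c * s for w, c in p.items()}

def mul(p, q, maxdeg):
    # product, discarding all terms of degree > maxdeg
    r = {}
    for (w1, c1), (w2, c2) in product(p.items(), q.items()):
        w = w1 + w2
        if len(w) <= maxdeg:
            r[w] = r.get(w, Fraction(0)) + c1 * c2
    return {w: c for w, c in r.items() if c != 0}

def exp_series(a, maxdeg):
    # e^a = 1 + a + a^2/2! + ...   (a has no constant term), as in (1.4)
    r, power, fact = {(): Fraction(1)}, {(): Fraction(1)}, 1
    for n in range(1, maxdeg + 1):
        power = mul(power, a, maxdeg)
        fact *= n
        r = add(r, scale(power, Fraction(1, fact)))
    return r

def log_series(u, maxdeg):
    # log(1+u) = u - u^2/2 + u^3/3 - ...  as in (1.7)
    r, power = {}, {(): Fraction(1)}
    for n in range(1, maxdeg + 1):
        power = mul(power, u, maxdeg)
        r = add(r, scale(power, Fraction((-1) ** (n + 1), n)))
    return r

D = 3
x = {('x',): Fraction(1)}
y = {('y',): Fraction(1)}
u = add(mul(exp_series(x, D), exp_series(y, D), D), {(): Fraction(-1)})
z = log_series(u, D)

# the degree <= 2 part of z is x + y + (1/2)xy - (1/2)yx, as in (1.10)
assert z[('x',)] == 1 and z[('y',)] == 1
assert z[('x', 'y')] == Fraction(1, 2) and z[('y', 'x')] == Fraction(-1, 2)
assert z.get(('x', 'x'), 0) == 0 and z.get(('y', 'y'), 0) == 0
```

Raising the truncation degree D reproduces further terms of the series in the same way.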

Therefore, the mere existence of an element z of R satisfying (1.6) is almost obvious. Also, it would be easy to prove that z is uniquely determined by (1.6). But it is much more difficult to show that z has a certain algebraic property which will be described presently, and to provide a method by which z can be computed and expressed in a form which exhibits this property. These results were obtained first and independently by Baker [5] and Hausdorff [6]; in order to formulate them, we need the following definitions:

Let u, v be any elements of R. Then the bracket-product or Lie-product [u, v] of u and v is defined by

(1.11) [u, v] = uv − vu.

Using Lie-multiplication, we can define recursively a Lie-element of R. We shall call x and y (or, in the general case, the free generators x_ν) Lie-elements of degree one. Any linear combination of Lie-elements of degree one with coefficients from f_0 and any Lie-product of Lie-elements shall also be called a Lie-element. The total set of Lie-elements obtained in this manner will be called Λ. For properties of this set see [7], [8], and [9]. In the case of two free generators x, y the general Lie-element involving terms of a degree ≤ 3 is

(1.12) c_1 x + c_2 y + c_{12}(xy − yx) + c_{121}((xy − yx)x − x(xy − yx)) + c_{122}((xy − yx)y − y(xy − yx)),

where c_1, ⋯, c_{122} are elements of the field f_0 (e.g., rational numbers).

We shall call the following statement the Baker-Hausdorff theorem: let z be the element of R defined by exp x exp y = exp z. Then z is a Lie-element. A new proof of this theorem will be given in the next section. The explicit expression of z in terms of Lie-elements of R shall be called the Baker-Hausdorff formula. Methods for finding recursively the terms of a degree ≤ n for n = 1, 2, 3, ⋯ of this formula will be discussed in Sections III and IV.

II. A Theorem of Friedrichs*

In a discussion of the theory of operators of quantum mechanics, Friedrichs [1] found a characterization of Lie-elements and proved it in a particular case by using a theory of representation of these operators. We shall formulate and prove Friedrichs' theorem in the case where the ring R has two free generators x, y. But the proof for a free ring with any number of generators is almost literally the same.

We construct first an isomorphic replica R' of R which has two free generators x', y'. Then we construct the direct product R̄ of R and R', identifying the elements of the underlying isomorphic fields of coefficients in R and R'. The ring R̄ is generated by x, y, x', y', where the generators are not entirely free but satisfy the relations

(2.1) xx' = x'x, yy' = y'y, xy' = y'x, x'y = yx'.

We may consider any element of R as an element of R̄, which contains R. Now we can state Friedrichs' theorem:

THEOREM I: Let F(x, y) be a function of x and y. Then F is a Lie-element of R if and only if

(2.2) F(x + x', y + y') = F(x, y) + F(x', y').

*A proof different from the one given here and based on a lemma due to Birkhoff and Witt [8] has been communicated to the author by Dr. P. M. Cohn of Manchester University.
To prove this theorem, we may confine ourselves to the case where F is homogeneous in x, y, that is, where F is a sum of terms of a fixed degree. It is easily seen that (2.2) holds if F is a Lie-element. In this case, F must be a sum of terms which have been derived from the generators by a repeated application of Lie-multiplication. Consider a term w involved in F, which has been obtained from two Lie-elements u and v of R by Lie-multiplication:

w = [u, v].

Then u and v are of lesser degree than w and we may assume that, as functions of x and y, they satisfy (2.2). Since it follows from (2.1) that the bracket product of a factor depending on x, y and a factor depending on x', y' is always zero, we find that w also satisfies (2.2).
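The criterion (2.2) can be made concrete in low degree. The following sketch (an ad hoc device, with the representation by normally ordered words chosen for convenience) models the ring R̄ of (2.1), in which primed generators commute with unprimed ones, and checks that F = [x, y] satisfies (2.2) while the non-Lie element xy does not:

```python
from fractions import Fraction

# Words over x, y, x', y'.  By relation (2.1) any primed letter commutes
# with any unprimed one, so each word has a normal form: unprimed letters
# first, primed letters second, each block keeping its internal order.
def normal(w):
    return tuple(g for g in w if not g.endswith("'")) + \
           tuple(g for g in w if g.endswith("'"))

def add(p, q):
    r = dict(p)
    for w, c in q.items():
        r[w] = r.get(w, Fraction(0)) + c
    return {w: c for w, c in r.items() if c != 0}

def mul(p, q):
    r = {}
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            w = normal(w1 + w2)
            r[w] = r.get(w, Fraction(0)) + c1 * c2
    return {w: c for w, c in r.items() if c != 0}

def gen(*names):                       # a sum of generators, e.g. x + x'
    return {(n,): Fraction(1) for n in names}

def F(a, b):                           # the Lie-element F(x, y) = [x, y]
    return add(mul(a, b), {w: -c for w, c in mul(b, a).items()})

lhs = F(gen('x', "x'"), gen('y', "y'"))          # F(x + x', y + y')
rhs = add(F(gen('x'), gen('y')), F(gen("x'"), gen("y'")))
assert lhs == rhs                                # (2.2) holds for [x, y]

def G(a, b):                           # G(x, y) = xy is not a Lie-element
    return mul(a, b)
assert G(gen('x', "x'"), gen('y', "y'")) != add(G(gen('x'), gen('y')),
                                                G(gen("x'"), gen("y'")))
```

The mixed terms [x, y'] and [x', y] vanish after normal ordering, which is exactly the cancellation used in the proof above.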
To show that only Lie-elements satisfy (2.2) we introduce first the following definition:

Any r elements (r = 1, 2, 3, ⋯) of R are called algebraically independent if they generate a subring of R which is isomorphic with a free ring of r free generators. More specifically, let u_1, ⋯, u_r be the r elements of R and let y_1, ⋯, y_r be free generators of a free ring R*. Then we shall call the u_ρ (ρ = 1, ⋯, r) algebraically independent if the mapping

y_ρ → u_ρ (ρ = 1, ⋯, r)

determines an isomorphic (one-to-one) correspondence between R* and the smallest subring of R which contains the u_ρ. (We define the maps of sums and products in the natural manner.)

Next, we need the following two lemmas, which have been proved elsewhere (see [7]):

LEMMA 1: Let u_1, ⋯, u_r and v be r + 1 algebraically independent elements of a free ring R*. For l = 1, 2, 3, ⋯ and ρ = 1, 2, ⋯, r, let

(2.3) u_ρ^{(0)} = u_ρ,

(2.4) u_ρ^{(l)} = [u_ρ^{(l−1)}, v].

The right hand side in (2.4) is the result of an l-fold bracket multiplication of u_ρ by v from the right. Then all the u_ρ^{(l)} are algebraically independent (for l = 0, 1, 2, ⋯).
Lemma 1 shows that there exist any number of algebraically independent elements (and in particular of Lie-elements) in R; if we take, for instance, r = 1, u_1 = x, v = y, the resulting elements u_1^{(l)} are all Lie-elements. With the same assumptions as in Lemma 1 we have:

LEMMA 2: Every function H(u_1, ⋯, u_r, v) can be written in a unique way in the form

(2.5) H = H_0 + H_1 v + H_2 v² + ⋯,

where H_0, H_1, H_2, ⋯ are functions of the u_ρ^{(l)} (l = 0, 1, 2, ⋯).

We shall apply these lemmas to the proof of Friedrichs' theorem. Let F(x, y) be a function which satisfies (2.2) and is of degree d in x, y. F is then expressed in terms of Lie-elements of the first degree.

Assume that F could also be written as a function H(u_1, ⋯, u_r, v) of algebraically independent Lie-elements u_1, u_2, ⋯, u_r, v of degrees

l_1 ≥ l_0, l_2 ≥ l_0, ⋯, l_r ≥ l_0,

where l_0 is the degree of v in x, y and l_ρ denotes the degree of u_ρ. Then we can prove

LEMMA 3: If H is expanded according to Lemma 2, then either H = H_0(u_ρ^{(l)}) or H = H_0(u_ρ^{(l)}) + hv, where h is a constant (that is, where h is an element of the field of coefficients f_0).
Proof: We have now to use the fact that, in terms of the Lie-elements u_1, u_2, ⋯, u_r, v,

(2.6) F(x, y) = H(u_1, u_2, ⋯, u_r; v),

and to show that if F has the property (2.2) either

(2.7a) F(x, y) = H_0(u_ρ^{(l)})

or

(2.7b) F(x, y) = H_0(u_ρ^{(l)}) + hv

holds. Since the "variables" u_1, ⋯, u_r, v are Lie-elements of the ring R generated by x, y, we have

(2.8) w(x + x', y + y') = w(x, y) + w(x', y'),

where w stands for any one of the elements u_1, ⋯, u_r, v. We shall write w' for w(x', y') and, correspondingly, u_ρ', v' for u_ρ(x', y'), v(x', y'). Now we have from (2.2), (2.6) and (2.8)

(2.9) F(x + x', y + y') = H(u_1 + u_1', ⋯, u_r + u_r', v + v').

Applying Lemma 2 to H, we find

(2.10) H(u_1 + u_1', ⋯, u_r + u_r', v + v') = H_0(u_ρ^{(l)} + u_ρ^{(l)'}) + H_1(u_ρ^{(l)} + u_ρ^{(l)'})(v + v') + H_2(u_ρ^{(l)} + u_ρ^{(l)'})(v + v')² + ⋯,
where the u_ρ^{(l)'} are derived from the u_ρ^{(l)} by the transition from x, y to x', y'. Now condition (2.2) gives

(2.11) H(u_1 + u_1', ⋯, u_r + u_r', v + v') = H(u_1, ⋯, u_r, v) + H(u_1', ⋯, u_r', v').

According to Lemma 2, the coefficients of the products v^k v'^{k'} on the right hand sides of (2.10) and (2.11) must be the same. From this we have

(2.12) H_0(u_ρ^{(l)} + u_ρ^{(l)'}) = H_0(u_ρ^{(l)}) + H_0(u_ρ^{(l)'}),

(2.13) H_1(u_ρ^{(l)} + u_ρ^{(l)'}) = H_1(u_ρ^{(l)}) = H_1(u_ρ^{(l)'}) = h,

(2.14) H_2(u_ρ^{(l)} + u_ρ^{(l)'}) = H_2(u_ρ^{(l)}) = H_2(u_ρ^{(l)'}) = 0,

and so on for the higher coefficients. Equations (2.12), (2.13), (2.14) contain the proof of Lemma 3. Now we can prove Theorem I by using the following argument: Since F is homogeneous with respect to x, y, the case where h ≠ 0 in (2.13) can arise only if l_0 = l_1 = ⋯ = l_r = d, where d is the degree of F with respect to x, y. In this case, F is a linear combination of the Lie-elements u_1, ⋯, u_r, v and therefore is itself a Lie-element, as we wanted it to be. If d > l_0, the constant h in (2.13) must vanish. But in this case we can apply our Lemma 3 to H_0(u_ρ^{(l)}). According to Lemma 1, the u_ρ^{(l)} are again algebraically independent Lie-elements; their degrees with respect to x, y are l_ρ + l·l_0, therefore the number of Lie-elements of lowest degree l_0 involved in H_0 has diminished by one, and we see that a repeated application of Lemma 3 leads to a proof of Theorem I.
Friedrichs used Theorem I for a proof of the following results, both of which had been derived by Hausdorff in a different manner:

THEOREM II: Let x, y be free generators of an associative ring R. Let z and w be the elements defined by

(2.15) e^x e^y = e^z,

(2.16) e^{−x} y e^x = w.

Then z and w are Lie-elements in R.

The proof follows immediately from Theorem I if we observe that e^{x+x'} = e^x e^{x'} and xx' = x'x. Baker [5] and Hausdorff [6] proved Theorem II by a recursive construction of z and by an explicit formula for w. Once one knows that w is a Lie-element, it is easy to determine it explicitly. Since it is of first degree in y, it must be expressible in terms of the Lie-elements

(2.17) {y, x^l} = [⋯[[y, x], x] ⋯ x]

which are obtained from y by an l-fold bracket multiplication by x. The result is

(2.18) w = Σ_{l=0}^∞ {y, x^l}/l!,

where {y, x^0} stands for y itself.
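For finite matrices both sides of (2.16) and (2.18) converge, and the identity can be checked numerically. A short numpy sketch (the matrix sizes, scale, and truncation at 30 terms are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = 0.3 * rng.standard_normal((4, 4))
Y = rng.standard_normal((4, 4))

def expm(M, terms=30):
    # matrix exponential by its power series; adequate for the small norms here
    S, P = np.eye(len(M)), np.eye(len(M))
    for n in range(1, terms):
        P = P @ M / n
        S = S + P
    return S

w = expm(-X) @ Y @ expm(X)          # left-hand side of (2.16)

# right-hand side of (2.18): sum over l of {Y, X^l}/l!
term, total, fact = Y, np.zeros_like(Y), 1.0
for l in range(30):
    total = total + term / fact     # add {Y, X^l}/l!
    term = term @ X - X @ term      # one more bracket multiplication by X
    fact *= l + 1

assert np.max(np.abs(w - total)) < 1e-8
```

The nested brackets grow at most like (2‖X‖)^l, so division by l! makes the series converge rapidly.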


In a remarkably simple manner, D. Finkelstein has proved that Lie-elements can also be characterized by the following property: Let F(x, y), x, y, x', y' be defined as in Theorem I. Let Δx = x − x', Δy = y − y', and define ΔF in such a manner that for a sum F = C_1 F_1 + C_2 F_2 with constant C_1, C_2,

ΔF = C_1 ΔF_1 + C_2 ΔF_2.

Then ΔF is defined for all F if ΔF_0 is defined for all products F_0 of the generators. For any such product F_0, let F̃_0 denote the product of the same factors in the inverse order (for example, (xy)~ = yx) and define

ΔF_0 = F_0(x, y) − F̃_0(x', y').

Then F(x, y) is a Lie element if and only if

(2.19) ΔF(x, y) = F(x − x', y − y').

It can be shown that the properties (2.2) and (2.19) are equivalent by observing that they are equivalent for any F of type (4.4). D. Finkelstein's results, which include also a new and simple derivation of Theorem II and of formulas (3.16), (3.17), are presented in a forthcoming publication.

III. Differentiation and Differential Operators

Let R be a free associative ring with free generators x, y, x_1, y_1, ⋯. Let F(x, y) be an element of R and let λ be a parameter, i.e., an arbitrary number from the field f_0 (for instance an arbitrary real number). Then we can expand F(x + λx_1, y) in a series of powers of λ:

(3.1) F(x + λx_1, y) = F(x, y) + λF_1(x, y) + ⋯.

We shall call the coefficient of λ a derivative and write

(3.2) F_1 = (x_1 ∂/∂x)F.

The word "derivative" was introduced by Hausdorff. Another more customary term for this coefficient is "polar," and the process by which F_1 is obtained from F is also called "polarization." The definition of a derivative used here is related to but different from the one used by Falk [10]. If F is a monomial (that is, an element of the type (1.1)), polarization consists of first replacing one factor x by x_1 in every possible way and then adding all the resulting terms afterwards. The polar of a sum of terms is the sum of the polars of the terms. If F does not involve x, its polar with respect to x is zero. It is not necessary that x_1 be different from x; we can define (x ∂/∂x)F by forming (x_1 ∂/∂x)F and substituting x for x_1 after the polarization. We may also substitute any element u of R for x_1 after polarization; the result will be denoted by (u ∂/∂x)F.
We extend the notation introduced by (2.17) and (2.18) to

(3.3) {y, P(x)} = Σ_{n=0}^∞ p_n {y, x^n},

where

(3.4) P(x) = Σ_{n=0}^∞ p_n x^n

with constant coefficients p_n. With this notation, we have, according to Hausdorff [6], the formulas

(3.5) e^{−x}(u ∂/∂x)e^x = {u, (e^x − 1)/x}, ((u ∂/∂x)e^x)e^{−x} = {u, (1 − e^{−x})/x},

and the following theorem:

LEMMA 4: Let P(x) and Q(x) be two power series in x which satisfy

(3.6) P(x)Q(x) = 1.

Then each of the equations

(3.7) {y, P(x)} = u, y = {u, Q(x)}

is a consequence of the other.

The proof consists of a simple straightforward computation, using the relations between the coefficients of P(x) and Q(x) which result from (3.6) and the identity

(3.8) {{y, x^n}, x^m} = {y, x^{n+m}} (n, m = 0, 1, 2, ⋯).

The equations (3.5) and Lemma 4 lead to Hausdorff's representation of the function z(x, y) defined by exp x exp y = exp z. We have from (3.5)

(3.9) e^{−z}(u ∂/∂x)e^z = {ζ, (e^z − 1)/z},

where u is any indeterminate quantity and where

(3.10) ζ = (u ∂/∂x)z.

On the other hand, we have from the definition of z and from (3.5)

(3.11) e^{−z}(u ∂/∂x)e^z = e^{−y}e^{−x}(u ∂/∂x)(e^x e^y) = e^{−y}{u, (e^x − 1)/x}e^y.

Similarly, if we put

(3.12) ζ* = (y ∂/∂y)z,

we have from (3.5) and the definition of z:

(3.13) e^{−z}(y ∂/∂y)e^z = {ζ*, (e^z − 1)/z} = e^{−y}e^{−x}(y ∂/∂y)(e^x e^y) = e^{−y}(y ∂/∂y)e^y = y.

Now we shall replace u in (3.9) and (3.11) by

(3.14) w = {y, x/(e^x − 1)}.

Then we see from Lemma 4 that the last term in (3.11) becomes simply y, and by substituting this value of the first term in (3.11) for the left hand side of (3.9), we find from (3.9), Lemma 4 and (3.12)

(3.15) (y ∂/∂y)z = (w ∂/∂x)z.

This is a partial differential equation for z which can be solved by the method of power series expansion and coefficient comparison. Indeed, if we write

(3.16) z = x + z_1 + z_2 + z_3 + ⋯,

where z_n contains exactly those terms of z which are of degree n with respect to y, we find first (y ∂/∂y)z_n = n z_n and then from (3.15)

(3.17) z_1 = w, z_{n+1} = (1/(n + 1))(w ∂/∂x)z_n (n = 1, 2, ⋯),

since w is linear in y. It is obvious that (3.17) gives recurrence formulas for the computation of z in terms of Lie-elements. The first terms are

(3.19) z = x + y + ½[x, y] + (1/12){y, x²} + (1/12){x, y²} + ⋯,

where

(3.20) {x, y²} = [[x, y], y].

We shall consider now an associative ring R the elements of which are differentiable functions of a real parameter t. As an example we may take a ring of finite or infinite matrices with real or complex elements which are differentiable functions of t. For our purposes, two types of properties of these ring elements are required, formal properties and properties dealing with convergence. The formal properties needed are:

(a) A(t) is an element of R for all real values of t. For any sufficiently small ε,

(3.21) A(t + ε) = A(t) + εA_1(t) + ε²A_2(t) + ⋯,

where the terms on the right hand side of (3.21) are in R and where their sums are to be defined in R. We shall define dA/dt to be A_1(t).

(b) The formal laws of differentiation for a sum and a product hold.

(c) If P(x) is a power series in x and if P(A(t)) = B(t) exists in R, then

(3.22) dB/dt = ((dA/dt) ∂/∂x)P(x), x = A(t).

(d) Given A(t), there exists in R a uniquely determined function A*(t) such that

(3.23) dA*/dt = A(t), A*(0) = 0.

We shall write

(3.24) ∫₀ᵗ A(τ) dτ = A*(t).
The second type of property, which is more difficult to describe, concerns problems of convergence. For the present purposes it will be necessary to assume that exp A exists for all functions under consideration and is differentiable in the sense described under (a). Also, it will be necessary to assume that certain repeated integrals (where integration is defined by (3.23), (3.24)) exist.

Furthermore, we must be able to define certain infinite sums. If the elements of R are linear operators acting on a Hilbert space, definitions are readily available. A different type of example of a ring R in which the constructions of this section are permissible can be obtained as follows: Let u_n be a sequence of elements of a free associative ring of the type defined in Section I. Let μ_n be the minimum of the degrees of terms involved in u_n; if u_n = 0, we define μ_n to be +∞. We shall call the sequence of the u_n a null sequence if

lim_{n→∞} μ_n = ∞.

Now we define R as the ring of all power series in a real variable t of the type

A(t) = Σ_{n=0}^∞ u_n tⁿ,

where the u_n form a null sequence and where t commutes with all other quantities. The case where the A(t) are finite matrices with elements depending on t will be considered in Sections V and VI.
We can state

THEOREM III: Let A(t) be a known function of t in an associative ring R, and let U(t) be an unknown function satisfying

(3.25) dU/dt = AU, U(0) = 1.

Then, if certain unspecified conditions of convergence are satisfied, U(t) can be written in the form

(3.26) U(t) = exp Ω(t),

where

(3.27) dΩ/dt = Σ_{n=0}^∞ β_n {A, Ωⁿ}.

The β_n vanish for n = 3, 5, 7, ⋯, and β_{2m} = (−1)^{m−1} B_{2m}/(2m)!, where the B_{2m} (for m = 1, 2, 3, ⋯) are the Bernoulli numbers. Integration of (3.27) by iteration leads to an infinite series for Ω the first terms of which (up to terms involving three integrations) are

(3.28) Ω(t) = ∫₀ᵗ A(τ) dτ + ½ ∫₀ᵗ [A(τ), ∫₀^τ A(σ) dσ] dτ
       + ¼ ∫₀ᵗ [A(τ), ∫₀^τ [A(σ), ∫₀^σ A(ρ) dρ] dσ] dτ
       + (1/12) ∫₀ᵗ [[A(τ), ∫₀^τ A(σ) dσ], ∫₀^τ A(σ) dσ] dτ + ⋯.

Formula (3.28) is the continuous analogue of the Baker-Hausdorff formula by which z is expressed in (3.19) as a Lie-element of x, y. We can prove Theorem III in the following manner: If U = exp Ω, then, according to (3.22) and (3.5),

(3.29) (dU/dt)e^{−Ω} = {dΩ/dt, (1 − e^{−Ω})/Ω}.

Therefore we find from (3.25) and from Lemma 4

(3.30) dΩ/dt = {A, Ω/(1 − e^{−Ω})}.
This proves (3.27). The proof of (3.28) is carried out by defining

(3.31) Ω_{n+1}(t) = ∫₀ᵗ Σ_{k=0}^∞ β_k {A(τ), Ω_n^k(τ)} dτ, Ω_0 = 0,

and putting Ω = lim_{n→∞} Ω_n. In the case where A(τ) is a finite matrix with bounded elements it can be shown by standard methods that (3.31) actually leads to a function Ω satisfying (3.26) for sufficiently small values of t.

A different method of deriving (3.28) can be based on (3.19). If we set

t = nδ, A(νδ) = A_ν (ν = 1, 2, ⋯, n),

and if we integrate (3.25) by substituting for A(τ), 0 ≤ τ ≤ t, a piecewise constant function with values A_ν, we find for U(t) the approximate value

(3.32) exp A_nδ exp A_{n−1}δ ⋯ exp A_2δ exp A_1δ.

Repeated application of (3.19) and passage to the limit n → ∞ gives easily the first two terms of the right hand side of (3.28). But the complexity of the calculations increases rapidly with the number of terms, and the convergence difficulties involved in the application of (3.19) are considerable even if x, y are finite matrices.
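The practical content of (3.28) can be illustrated for a 2 by 2 matrix equation. The sketch below is a numerical illustration only; the particular A(t), interval, and step counts are arbitrary choices. It compares a high-accuracy solution of (3.25) with the exponential of the first one and then two functionals of (3.28); including the second functional reduces the error, as the series predicts:

```python
import numpy as np

def A(t):                               # a simple non-commuting example
    return np.array([[0.0, 1.0], [t, 0.0]])

T, N = 0.5, 2000
h = T / N

# reference solution of dY/dt = A(t) Y, Y(0) = I, by classical RK4
Y = np.eye(2)
for k in range(N):
    t = k * h
    k1 = A(t) @ Y
    k2 = A(t + h/2) @ (Y + h/2 * k1)
    k3 = A(t + h/2) @ (Y + h/2 * k2)
    k4 = A(t + h) @ (Y + h * k3)
    Y = Y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

# first two terms of (3.28) by midpoint quadrature:
# O1 = ∫A dτ,  O2 = ½ ∫ [A(τ), ∫₀^τ A dσ] dτ
O1, O2, intA = np.zeros((2, 2)), np.zeros((2, 2)), np.zeros((2, 2))
for k in range(N):
    t = (k + 0.5) * h
    At = A(t)
    O1 += h * At
    O2 += 0.5 * h * (At @ intA - intA @ At)
    intA += h * At                      # running inner integral

def expm(M, terms=30):
    S, P = np.eye(2), np.eye(2)
    for n in range(1, terms):
        P = P @ M / n
        S = S + P
    return S

err1 = np.max(np.abs(expm(O1) - Y))
err2 = np.max(np.abs(expm(O1 + O2) - Y))
assert err2 < err1 < 1e-1
```

The remaining discrepancy in err2 is dominated by the omitted three-integration terms of (3.28).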

IV. The Zassenhaus Formula

Let R be the free ring with two generators x, y and with rational coefficients. It has been observed by Zassenhaus [11] that there exists a formula which may be called the dual of Hausdorff's formula. We may state his result as follows:

There exist uniquely determined Lie-elements C_n (n = 2, 3, 4, ⋯) in R which are exactly of degree n in x, y such that

(4.1) e^{x+y} = e^x e^y e^{C_2} e^{C_3} ⋯ e^{C_n} ⋯.

The existence of a formula of type (4.1) is an immediate consequence of Hausdorff's theorem. In fact, we find successively that exp(−x) exp(x + y) = exp(y + C), where C involves Lie-elements of a degree > 1, that exp(−y) exp(y + C) = exp(C_2 + C*), where C* involves Lie-elements of a degree > 2, and so on. But the computation of C_n becomes rather difficult if it is based on Hausdorff's complicated formula. A simpler method for the calculation of the C_n can be derived from a result due to Dynkin [12], Specht [13], and Wever [14]. The method employed here has already been used by Dynkin to derive the coefficients of the terms of degree n in Hausdorff's formula without the use of the coefficients of terms of lower degree.

For every element F(x, y) in R we define a corresponding Lie-element {F}, where the "curly bracket operator" { } has the following properties:

(a) For any element C in the field of coefficients,

(4.2) {CF} = C{F}.

(b) For any two elements F_1, F_2 of R

(4.3) {F_1 + F_2} = {F_1} + {F_2}.

(c) Let x_ν, for ν = 1, 2, ⋯, n, be any one of the generators. Then for any monomial x_1 x_2 ⋯ x_n we define

(4.4) {x_1 x_2 ⋯ x_n} = [[⋯[[x_1, x_2], x_3] ⋯], x_n]; {x_1} = x_1,

and for the identity we define

{1} = 0.

It is clear that the operator { } is defined uniquely for all F in R by the rules (a), (b), (c). In [12], [13], [14], the following theorem is proved: Let G be a homogeneous Lie-element in R which is of degree n; then

(4.5) {G} = nG.
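Formula (4.5) can be verified directly in low degree. The following sketch (helper names are ad hoc) implements the curly bracket operator of rules (4.2)-(4.4) on the free ring, with elements again stored as dictionaries mapping words to rational coefficients, and checks {G} = nG for Lie-elements of degrees 2 and 3:

```python
from fractions import Fraction

# elements of the free ring: dictionaries mapping words (tuples) to Fractions
def add(p, q):
    r = dict(p)
    for w, c in q.items():
        r[w] = r.get(w, Fraction(0)) + c
    return {w: c for w, c in r.items() if c != 0}

def mul(p, q):
    r = {}
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            r[w1 + w2] = r.get(w1 + w2, Fraction(0)) + c1 * c2
    return r

def bracket(p, q):                      # [p, q] = pq - qp
    return add(mul(p, q), {w: -c for w, c in mul(q, p).items()})

def curly(p):
    # rules (4.2)-(4.4): {x1 x2 ... xn} = [[...[[x1,x2],x3]...], xn],
    # extended linearly; the identity goes to 0
    r = {}
    for w, c in p.items():
        if not w:
            continue
        e = {(w[0],): Fraction(1)}
        for g in w[1:]:
            e = bracket(e, {(g,): Fraction(1)})
        r = add(r, {v: c * d for v, d in e.items()})
    return {w: c for w, c in r.items() if c != 0}

x = {('x',): Fraction(1)}
y = {('y',): Fraction(1)}
G2 = bracket(x, y)                      # homogeneous Lie-element of degree 2
G3 = bracket(G2, y)                     # homogeneous Lie-element of degree 3
assert curly(G2) == {w: 2 * c for w, c in G2.items()}   # {G} = 2G
assert curly(G3) == {w: 3 * c for w, c in G3.items()}   # {G} = 3G
```

The same machinery evaluates the curly brackets {xy}, {xy²}, {xC₂}, ⋯ that appear in the computation below.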
From Wever's paper [14] we can easily derive

LEMMA 5: If G is a homogeneous Lie-element and F is any element of R, then

(4.6) {G²F} = 0.

We expand both sides of (4.1) in power series and apply the operator { }. According to Lemma 5 we find

(4.7) {e^{x+y}} = x + y,

since {(x + y)ⁿ} = 0 if n > 1. In the same manner we find for the right hand side in (4.1)

(4.8) {e^x e^y e^{C_2} e^{C_3} ⋯} = x + y + {xy} + {C_2} + ½{xy²} + {xC_2} + {yC_2} + {C_3} + ⋯,

where the omitted terms are of a degree greater than three. By comparing terms of the same degree in (4.7) and (4.8) we find

{C_2} + {xy} = 0,

{C_3} + {xC_2} + {yC_2} + ½{xy²} = 0,

and therefore, because of (4.5),

C_2 = −½[x, y],

C_3 = −⅓[[x, y], y] − ⅙[[x, y], x].

It is clear that by this method we may also compute C_n for any n > 3 by recurrence formulas.
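The coefficients C₂, C₃ just obtained can be checked numerically: for matrices of size of order s, formula (4.1) truncated after e^{C₂} and after e^{C₃} approximates e^{x+y} with errors of order s³ and s⁴ respectively. A short numpy sketch (the seed and the scale s are arbitrary choices):

```python
import numpy as np

def expm(M, terms=30):
    # matrix exponential by its power series; fine for these small norms
    S, P = np.eye(len(M)), np.eye(len(M))
    for n in range(1, terms):
        P = P @ M / n
        S = S + P
    return S

def comm(a, b):
    return a @ b - b @ a

rng = np.random.default_rng(1)
s = 0.1                                   # small parameter: x, y = O(s)
x = s * rng.standard_normal((3, 3))
y = s * rng.standard_normal((3, 3))

C2 = -0.5 * comm(x, y)
C3 = -comm(comm(x, y), y) / 3 - comm(comm(x, y), x) / 6

target = expm(x + y)
e1 = np.max(np.abs(expm(x) @ expm(y) - target))                       # O(s^2)
e2 = np.max(np.abs(expm(x) @ expm(y) @ expm(C2) - target))            # O(s^3)
e3 = np.max(np.abs(expm(x) @ expm(y) @ expm(C2) @ expm(C3) - target)) # O(s^4)
assert e3 < e2 < e1
```

Each additional Zassenhaus factor removes one further order in s, which is the sense in which (4.1) is an asymptotic refinement of e^x e^y.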

SECOND PART: MATRICES

V. Integration of Systems of Ordinary Differential Equations by Elementary Formulas

Let A(t) and Y(t) be n by n matrices the elements of which depend on a parameter t. We consider the system of linear homogeneous differential equations

(5.1) dY/dt = AY

subject to the initial conditions

(5.2) Y(0) = I,

where I denotes the unit matrix.

From well-known general theorems we know that (5.1) always has a uniquely determined solution Y(t) which is continuous and has a continuous first derivative in any interval in which A(t) is continuous. The elements of the k-th column of Y(t) are the solutions y_ν of the system of linear differential equations

(5.3) dy_ν/dt = Σ_{μ=1}^n a_{νμ} y_μ (ν = 1, 2, ⋯, n)

subject to the initial conditions

(5.4) y_ν(0) = 0 if ν ≠ k, y_k(0) = 1.

The a_{νμ} are, of course, the elements of A, and they are functions of t. The determinant of Y is always different from zero; its value is given by

(5.5) det Y(t) = exp ∫₀ᵗ (a_{11}(τ) + a_{22}(τ) + ⋯ + a_{nn}(τ)) dτ.

We wish to apply Theorem III and in particular formula (3.28) to equation (5.1), assuming that Y can be written in the form Y = exp Ω. In general, the use of Theorem III involves difficulties of convergence; some of these will be discussed in Sections VI, VII. But there is one case in which (3.28) clearly determines Ω for all values of t, namely, when the series in the right hand side of (3.28) terminates. This will happen, for instance, if

(5.6) [∫₀ᵗ A(τ) dτ, A(t)] = 0

identically for all values of t. If (5.6) is true, Ω becomes simply

(5.7) Ω = ∫₀ᵗ A(τ) dτ,

and Y = exp Ω satisfies (5.1).
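A simple instance of (5.6) is A(t) = a(t)M with a scalar function a and a constant matrix M; then A(t) commutes with its own integral and (5.7) solves the system in closed form. A numpy sketch (the choices of M and a here are arbitrary):

```python
import numpy as np

M = np.array([[0.0, 1.0], [-2.0, 0.0]])
a = lambda t: np.cos(t)                 # A(t) = cos(t) M satisfies (5.6)

def expm(X, terms=40):
    # matrix exponential by its power series
    S, P = np.eye(2), np.eye(2)
    for n in range(1, terms):
        P = P @ X / n
        S = S + P
    return S

T, N = 1.0, 4000
h = T / N
Y = np.eye(2)
for k in range(N):                      # RK4 reference for dY/dt = a(t) M Y
    t = k * h
    k1 = a(t) * M @ Y
    k2 = a(t + h/2) * M @ (Y + h/2 * k1)
    k3 = a(t + h/2) * M @ (Y + h/2 * k2)
    k4 = a(t + h) * M @ (Y + h * k3)
    Y = Y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

Omega = np.sin(T) * M                   # (5.7): (∫₀ᵀ cos(τ) dτ) M
assert np.max(np.abs(expm(Omega) - Y)) < 1e-8
```

Here all higher functionals of (3.28) vanish identically, so the single quadrature ∫₀ᵗ a(τ) dτ already gives the exact solution.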
In order to state a concise result, we introduce the following

Definition of a Lie-integral Functional. Let A(t) be an integrable function of t (in the ordinary sense). We define a Lie-integral functional Φ_n of weight n of A recursively as follows:

(i) The functional of weight 1 is any multiple of

(5.8) ∫₀ᵗ A(s) ds.

(ii) Let Φ_λ, Φ_μ, ⋯, Φ_ν, Φ_ρ be any functionals of weight λ, μ, ⋯, ν, ρ such that

(5.9) λ + μ + ⋯ + ν + ρ = n − 1.

Then a functional of weight n is defined as any linear combination of terms of the type

(5.10) ∫₀ᵗ [⋯[[[A(s), Φ_λ], Φ_μ], ⋯, Φ_ν], Φ_ρ] ds,

where Φ_λ, ⋯, Φ_ρ are written as functions of the independent variable s.

Apparently, the terms involving 1, 2, ⋯, n integrations in (3.28) are functionals of the type described above; we shall call them the Baker-Hausdorff functionals B_n of A(t) (for c = 0), and we shall write (3.28) in the form

(5.11) Ω = B_1 + B_2 + B_3 + ⋯,

where the B_n will be written as

(5.12) B_n(A, t, c)

if the matrix A, the variable t, and the constant c are to be exhibited. In (3.28), we assumed that c = 0. Now we can state
THEOREM IV: If all Lie-integral functionals of A of a weight m vanish (n < m ≤ 2n − 1), then the solution of (5.1) with initial conditions (5.2) is given by Y = exp Ω, where

(5.13) Ω = Σ_{ν=1}^n B_ν(A, t, 0).

The B_ν are the Baker-Hausdorff functionals defined by (3.28) and (5.11).

A sufficient (but not a necessary) condition for the vanishing of all Lie-integral functionals of weight greater than n is that

(5.14) [⋯[[[A(s_1), A(s_2)], A(s_3)], ⋯], A(s_{n+1})] = 0

for any choice of s_1, ⋯, s_{n+1}. Clearly, since all functionals of type (5.10) of weight m between n and 2n − 1 are linear combinations of finitely many linearly independent functionals, the hypothesis of Theorem IV requires only a finite number of functionals to vanish identically.
In order to prove Theorem IV we need first

LEMMA 6: If all Lie-integral functionals of a weight m vanish, where n < m ≤ 2n − 1, then all Lie-integral functionals of any weight m > n also vanish.

Proof: We shall consider a functional of type (5.10), assuming now that m = 1 + λ + μ + ⋯ + ρ and that m > n. Since our Lemma is trivial for n = 1, we may assume that n > 1. Now we shall apply induction with respect to m, assuming that m ≥ 2n. If one of the weights λ, μ, ⋯, ρ is greater than n, the corresponding Φ vanishes identically and the Lemma holds. But suppose all of the weights λ, μ, ⋯, ρ are ≤ n; then we consider the first number S greater than n in the sequence

1, 1 + λ, 1 + λ + μ, ⋯, 1 + λ + μ + ⋯ + ν, 1 + λ + ⋯ + ν + ρ.

This number is necessarily at most equal to 2n. If S < 2n and if, say, S = 1 + λ + ⋯ + ν, then

(5.15) [⋯[[A, Φ_λ], Φ_μ], ⋯, Φ_ν]

is the derivative of a functional of weight S, where n < S < 2n, and therefore not only this functional but also (5.10) vanishes identically. Hence for this case our lemma is true. There remains the case S = 2n, which can take place only if the last term ν in S = 1 + λ + ⋯ + ν is equal to n. Now we use a lemma proved by Wever [14] according to which a Lie-product (5.15) can also be written as a sum of Lie-products in which Φ_ν always appears in the first place, but in which the factors and the arrangement of the brackets are the same as in (5.15). This lemma follows without difficulty from the Dynkin-Specht-Wever formula (4.5). Consider now any Lie-product of type (5.15) in which the first factor is Φ_ν. The second factor is either A or one of the other factors, which may be called Ψ. Now we merely have to show that

(5.16) [Φ_ν, A] = 0, [Φ_ν, Ψ] = 0.

If (5.16) holds, any product involving the left hand sides in (5.16) also vanishes and therefore the product in (5.15) vanishes. Now (5.16) is true because [Φ_ν, A] = −[A, Φ_ν] is the derivative of a vanishing functional of weight n + 1. Similarly, [Φ_ν, Ψ] = 0 since

(5.17) d/dt [Φ_ν, Ψ] = [dΦ_ν/dt, Ψ] + [Φ_ν, dΨ/dt].
666 WILHELM MAGNUS

Both $d\Phi_\nu/dt$ and $d\Psi/dt$ are sums of terms of the type
$$[\cdots[[A, \Psi_1]\Psi_2]\cdots],$$
where $\Psi_1\,, \Psi_2\,, \cdots$ are Lie-integral functionals. Therefore, the right hand side
of (5.17) vanishes since the individual terms are derivatives of Lie-integral
functionals of a weight $k$ ($n + 1 \le k \le 2n - 1$). The inequalities for $k$ follow
from the fact that the weight of $\Phi_\nu$ equals $n$ and the weight of $\Psi$ is less than $n$
and at least equal to unity.
This finishes the proof of Lemma 6. Probably, a better result could be
obtained for any given $n$; for example, for $n = 2$ it is easily shown that the
vanishing of all functionals of weight 3 implies the vanishing of all functionals
of weight 4. For $n = 1$, it is trivial that all functionals of a weight $\ge 2$ vanish
if those of weight 2 vanish.
To prove Theorem IV we observe that all steps in the formal proof of
(3.28) now involve only a finite number of terms. First, the solution of (3.27)
by iteration gives a finite sum of functionals which we denote by $B_\nu$ ($\nu = 1, \cdots, n$)
in accordance with (5.11). Secondly, it follows directly that the Lie-product
$$(5.18)\qquad \{d\Omega/dt\,, \Omega^m\}$$
of at least $m + 1$ factors vanishes identically if $m \ge n$. Now we consider the
following equation which is equivalent to (3.29):
$$(5.19)\qquad \frac{d(\exp \Omega)}{dt}\, \exp(-\Omega) = \sum_{m=0}^{\infty} \frac{(-1)^m}{(m+1)!}\, \{d\Omega/dt\,, \Omega^m\}.$$
This formula makes sense for any differentiable finite matrix $\Omega(t)$ since it can
be shown to be an absolutely and uniformly convergent rearrangement of the
series obtained by differentiating $\exp \Omega$ directly term by term and multiplying
by $\exp(-\Omega)$ afterwards. From (5.18) it follows that the right hand side in
(5.19) is a terminating series. Calling its sum $B$, we find directly from (5.18),
(5.19) that
$$(5.20)\qquad \frac{d\Omega}{dt} = \sum_{\nu=0}^{n-1} \beta_\nu \{B, \Omega^\nu\},$$
where the $\beta_\nu$ are explained in Theorem III. On the other hand, $\Omega$ could also
have been derived from
$$(5.21)\qquad \frac{d\Omega}{dt} = \sum_{\nu=0}^{n-1} \beta_\nu \{A, \Omega^\nu\},$$
since the higher terms in (3.27) do not contribute to $d\Omega/dt$. Putting $B - A = C$,
we merely have to show that the equation
$$(5.22)\qquad \sum_{\nu=0}^{n-1} \beta_\nu \{C, \Omega^\nu\} = 0$$

cannot have a solution $C$ which does not vanish identically and which is such
that $\{C, \Omega^m\} = 0$ if $m \ge n$. This can be shown by applying bracket multiplication
to (5.22) $k = n - 1, n - 2, \cdots, 1$ times. Then we find that
$$(5.23)\qquad \sum_{\nu=0}^{n-1} \beta_\nu \{C, \Omega^{\nu+k}\} = \sum_{\nu=0}^{n-k-1} \beta_\nu \{C, \Omega^{\nu+k}\} = 0,$$
and this gives recursively
$$(5.24)\qquad \beta_0 \{C, \Omega^{n-1}\} = \beta_0 \{C, \Omega^{n-2}\} = \cdots = \beta_0 \{C, \Omega\} = 0.$$
Since $\beta_0 \ne 0$, combining (5.22) and (5.24) we find that $C = 0$, and this finishes
the proof of the first part of Theorem IV.
The statement in Theorem IV that (5.14) is a sufficient condition for (5.13)
is almost trivial. To show that it is not a necessary condition we take $n = 1$ and
$$(5.25)\qquad A(t) = (\cos t - \cos 2t)\, M_1 \quad \text{for } 0 \le t \le 2\pi, \qquad A(t) = M_2 \quad \text{for } t \ge 2\pi,$$
where $M_1$ and $M_2$ are constant matrices with $[M_1\,, M_2] \ne 0$. Clearly,
$$(5.26)\qquad \left[ A(t), \int_0^t A(s)\,ds \right] = 0,$$
but if $0 < s_1 < \pi/4$ and if $s_2 > 2\pi$, then
$$(5.27)\qquad [A(s_1), A(s_2)] \ne 0.$$
It can be shown that even in the case $n = 1$, and even if the elements of
$A(t)$ are polynomials in $t$, there exist infinitely many examples involving an
$A(t)$ which satisfies (5.26) but not (5.27). The implications of (5.26) will be
investigated in detail in a forthcoming paper by M. Hellman.

VI. Conditions for Existence of a Solution $Y = \exp \Omega$ for $\dot{Y} = AY$


In this section, we shall derive some results about the existence in the large
of solutions of (5.1).
We consider again a system of linear differential equations of the first order
$$(6.1)\qquad dY/dt = A(t)Y(t),$$
where $Y$ and $A$ are $n$ by $n$ matrices the elements of which are functions of the
parameter $t$. We assume again that $Y(0)$ is the identity $I$ and that $A(t)$ is continuous
in $t$, although it is well known that the latter condition could be weakened.
If we wish to represent the solution of (6.1) in the form $Y = \exp \Omega$, the
technique based on (3.27) and (3.28) will work for sufficiently small values of
$|t|$. Also, it is well known that any preassigned constant matrix $Y$ can be
written in the form $\exp \Omega$ if the determinant $|Y|$ of $Y$ is different from zero.
From Section V we know that $Y$ is finite and $|Y| \ne 0$ everywhere. Nevertheless,
if the function $\Omega(t)$ is assumed to be differentiable, it may not exist everywhere.
This can be shown by the following considerations. We may start
from $t = 0$ and arrive at the value for the solution of (6.1) for $t = t_0$:
$$(6.2)\qquad Y_0 = Y(t_0) = \exp \Omega_0\,, \qquad \Omega_0 = \Omega(t_0).$$
Let us consider $Y_0$ and $\Omega_0$ as points of $2n^2$-dimensional spaces $S_Y$ and $S_\Omega\,$, respectively,
where the Cartesian coordinates $\eta_\nu$ and $\omega_\nu$ of these spaces consist of
the real and imaginary parts of the $n^2$ elements of an arbitrary matrix $Y$ or $\Omega$.
The formula
$$(6.3)\qquad Y = \exp \Omega$$
defines a mapping of $S_\Omega$ into $S_Y$ such that the coordinates in $S_Y$ become entire
analytic functions of the coordinates of $S_\Omega\,$. The functional determinant
$$(6.4)\qquad \Delta = \left| \partial \eta_\nu / \partial \omega_\mu \right| \qquad (\nu, \mu = 1, \cdots, 2n^2)$$
of this mapping is an analytic function of the $\omega_\mu\,$. If $\Delta$ does not vanish at a
certain point $\Omega_0$ of $S_\Omega\,$, then a neighborhood of $\Omega_0$ is mapped continuously with
a one-to-one correspondence upon a full neighborhood of $Y_0\,$. In this case,
the solution $Y = \exp \Omega$ of (6.1) can be continued beyond the value $t = t_0$ to a
value $t_1 > t_0\,$. If, however, $\Delta = 0$ at $Y = Y_0\,$, then $dY/dt = AY$ may point
towards a part of $S_Y$ which is not covered by the map of $S_\Omega$ in $S_Y\,$. In this case,
$d\Omega/dt$ cannot exist at $\Omega = \Omega_0\,$. Actually, the question reduces to the problem of
solving
$$(6.5)\qquad \sum_{m=0}^{\infty} \frac{(-1)^m}{(m+1)!}\, \{d\Omega/dt\,, \Omega^m\} = A$$
with respect to $d\Omega/dt$. If this is possible for any $A$ in the neighborhood (in $S_\Omega$)
of a point $\Omega = \Omega_0\,$, and if the result is of the type
$$(6.6)\qquad d\Omega/dt = F(A, \Omega),$$
where the right-hand side in (6.6) is a matrix depending analytically on the
elements of $\Omega$, then (6.1) has a solution of type (6.3) in the neighborhood of
$Y_0 = \exp \Omega_0 = \exp \Omega(t_0)$. We shall prove the following result:
THEOREM V: The functional determinant $\Delta$ defined by (6.4) does not vanish,
and (6.5) can be solved by an expression (6.6) in the neighborhood of any point
$\Omega_0$ in $S_\Omega$ for an arbitrary $A$, if and only if none of the differences between any two
of the eigenvalues of $\Omega_0$ equals $2m\pi i$, where $m = \pm 1, \pm 2, \cdots$, $m \ne 0$.
Proof: If (6.3) holds, the determinant $|Y|$ of $Y$ is different from zero.
Setting
$$(6.7)\qquad dY \cdot Y^{-1} = dZ,$$

we may compute the determinant which connects the elements of $dZ$ and $d\Omega$.
It will differ from $\Delta$ only by a power of $|Y|^{-1}$, since the elements of each row of
$dZ$ are obtained from the corresponding row of $dY$ by a linear substitution, the
matrix of which is the transpose of $Y^{-1}$. From (3.24) we have
$$(6.8)\qquad dZ = d\Omega - \frac{1}{2!}[d\Omega, \Omega] + \frac{1}{3!}[[d\Omega, \Omega]\Omega] \mp \cdots.$$
Let
$$(6.9)\qquad dZ = (dz_{\nu\mu}), \qquad d\Omega = (d\omega_{\nu\mu}),$$
and assume that
$$(6.10)\qquad \Omega = \Lambda = (\lambda_1\,, \cdots, \lambda_n)$$
is a diagonal matrix with the numbers $\lambda_1\,, \cdots, \lambda_n$ in the main diagonal. In
this case we have from (6.8), (6.9), and (6.10)

(6.11)

The n2 quantities dz,,, are linear functions of the n' quantities dw,,,, , and the
determinant of (6.11) is

(6.12)

where the product is extended over v, p = 1, . - - , n,with v f p. Let


$$(6.13)\qquad (x - \lambda_1)(x - \lambda_2) \cdots (x - \lambda_n) = x^n - s_1 x^{n-1} + s_2 x^{n-2} - \cdots + (-1)^n s_n\,,$$
where the $s_\nu$ ($\nu = 1, \cdots, n$) are the elementary symmetric functions of the $\lambda_\nu\,$.

Then $\Delta^*$ becomes a function of the $s_\nu$ which is analytic and entire in each $s_\nu\,$.
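The entrywise relation (6.11) lends itself to a numerical check. The sketch below (an illustration added here, not part of the paper; the eigenvalues are arbitrary sample values) obtains $dY$ from the Fréchet derivative of the matrix exponential, forms $dZ = dY \cdot Y^{-1}$, and compares each entry with $d\omega_{\nu\mu}\,(e^{\lambda_\nu - \lambda_\mu} - 1)/(\lambda_\nu - \lambda_\mu)$:

```python
import numpy as np
from scipy.linalg import expm_frechet

# Diagonal Omega = diag(lambda_1, lambda_2, lambda_3) and an arbitrary
# perturbation dOmega; dZ = dY * Y^{-1} should reproduce (6.11) entry by entry.
lam = np.array([0.3, -1.1, 0.7])
Omega = np.diag(lam)

rng = np.random.default_rng(0)
dOmega = rng.standard_normal((3, 3))

Y, dY = expm_frechet(Omega, dOmega)        # dY: derivative of expm at Omega
dZ = dY @ np.linalg.inv(Y)

def phi(x):
    # (e^x - 1)/x, continued by its limit 1 at x = 0 (the diagonal entries)
    return 1.0 if x == 0 else np.expm1(x) / x

factor = np.array([[phi(li - lj) for lj in lam] for li in lam])
assert np.allclose(dZ, factor * dOmega)
```

The determinant of this entrywise scaling is exactly the product (6.12).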


Next, let
$$(6.14)\qquad \Omega = C\Lambda C^{-1},$$
where $C$ is a matrix the determinant of which equals unity. If we introduce
$$(6.15)\qquad dZ^* = C^{-1} \cdot dZ \cdot C, \qquad d\Omega^* = C^{-1} \cdot d\Omega \cdot C,$$
equation (6.8) becomes
$$(6.16)\qquad dZ^* = d\Omega^* - \frac{1}{2!}[d\Omega^*, \Lambda] + \frac{1}{3!}[[d\Omega^*, \Lambda]\Lambda] \mp \cdots,$$
and instead of (6.11) we have
$$(6.17)\qquad dz^*_{\nu\mu} = \frac{e^{\lambda_\nu - \lambda_\mu} - 1}{\lambda_\nu - \lambda_\mu}\, d\omega^*_{\nu\mu}\,.$$

Now
$$(dz^*_{\nu\mu}) = C^*(dz_{\nu\mu}), \qquad (d\omega^*_{\nu\mu}) = C^*(d\omega_{\nu\mu}),$$
where $C^* = C^{-1} \otimes C'$ is the Kronecker product of $C^{-1}$ and the transpose $C'$ of $C$,
and where $(dz^*_{\nu\mu})$ stands for the vector with $n^2$ components $dz^*_{\nu\mu}\,$. Since the
determinant of $C^*$ equals 1, the $dz_{\nu\mu}$ are again linear functions of the $d\omega_{\nu\mu}\,$,
where the determinant of the relation is $\Delta^*$. This shows that
$$(6.18)\qquad \Delta^* = \left| \partial z_{\nu\mu} / \partial \omega_{\nu\mu} \right|$$
whenever $\Omega$ can be transformed into diagonal form. Now $\Delta^*$ in (6.12) is a
function of the $s_\nu\,$, which are the coefficients of the characteristic equation for
$\Omega$. Since $\Delta^*$ must be a continuous function of the $\omega_{\nu\mu}\,$, we shall write $\Delta^*$ as a
function of the $s_\nu\,$. This gives an expression for $\Delta^*$ which is valid if $\Omega$ has different
eigenvalues. But in $S_\Omega$ the neighborhood of every point $\Omega_0$ contains points
corresponding to matrices which have different eigenvalues. Therefore the $\Delta^*$
in (6.18) is given by (6.12) for all $\Omega$. This proves Theorem V, since $\Delta^*$ vanishes
if and only if one of the differences of the eigenvalues of $\Omega_0$ equals $2m\pi i$ with $m \ne 0$.
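Theorem V can be made concrete numerically. In the sketch below (an illustration added here, not part of the paper), $\Omega_0 = \operatorname{diag}(i\pi, -i\pi)$ has eigenvalue difference $2\pi i$; every conjugate $C\Omega_0 C^{-1}$ is mapped by the exponential to the single matrix $-I$, so the mapping (6.3) cannot be locally one-to-one there and its functional determinant must vanish:

```python
import numpy as np
from scipy.linalg import expm

# All conjugates of diag(i*pi, -i*pi) have the same exponential, namely -I:
# the exponential map collapses a whole manifold of matrices to one point.
Lam = np.diag([1j * np.pi, -1j * np.pi])

rng = np.random.default_rng(1)
for _ in range(5):
    C = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    Omega0 = C @ Lam @ np.linalg.inv(C)
    assert np.allclose(expm(Omega0), -np.eye(2), atol=1e-6)
```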

VII. Example of the Ordinary Differential Equation of Second Order

In this section, the simplest non-trivial case of a differential equation of
type (6.1) will be studied. It will be shown explicitly that, in general, a solution
of the type $Y = \exp \Omega$ does not exist in the large.
The equation
$$(7.1)\qquad y'' + Q(t)y = 0$$
may be written as
$$(7.2)\qquad y_1' = y_2\,, \qquad y_2' = -Q y_1$$
and leads to the matrix equation
$$(7.3)\qquad \frac{dY}{dt} = \begin{pmatrix} 0 & 1 \\ -Q & 0 \end{pmatrix} Y, \qquad Y = \begin{pmatrix} y_1 & \eta_1 \\ y_2 & \eta_2 \end{pmatrix},$$
where $y_1\,, \eta_1$ denote two linearly independent solutions of (7.1). If we choose
these in such a way that
$$(7.4)\qquad y_1(0) = 1, \quad y_2(0) = y_1'(0) = 0; \qquad \eta_1(0) = 0, \quad \eta_2(0) = \eta_1'(0) = 1,$$
then $Y(0) = I$ and the determinant $|Y|$ of $Y$ equals unity for all values of $t$.
Therefore, if
$$Y = \exp \Omega,$$

the trace of $\Omega$ vanishes and we may put $\Omega$ into the form
$$(7.5)\qquad \Omega = \begin{pmatrix} \alpha & \beta \\ \gamma & -\alpha \end{pmatrix}.$$
If we introduce
$$(7.6)\qquad \Delta = \sqrt{\alpha^2 + \beta\gamma}\,,$$
we find after some calculation
$$(7.7)\qquad Y = \frac{\sinh \Delta}{\Delta}\, \Omega + (\cosh \Delta)\, I.$$
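Formula (7.7) is easy to verify numerically, since $\Omega^2 = (\alpha^2 + \beta\gamma)I = \Delta^2 I$ for any traceless $2 \times 2$ matrix. The sketch below (an illustration added here, not part of the paper; the entries are random samples) also checks that the trace of $\exp \Omega$ equals $2 \cosh \Delta$:

```python
import numpy as np
from scipy.linalg import expm

# For traceless Omega = [[a, b], [g, -a]], Omega^2 = (a^2 + b g) I, hence
# exp(Omega) = (sinh(Delta)/Delta) Omega + cosh(Delta) I, Delta = sqrt(a^2 + b g).
rng = np.random.default_rng(2)
for _ in range(5):
    a, b, g = rng.standard_normal(3)
    Omega = np.array([[a, b], [g, -a]])
    Delta = np.sqrt(complex(a * a + b * g))      # may be purely imaginary
    Y = (np.sinh(Delta) / Delta) * Omega + np.cosh(Delta) * np.eye(2)
    assert np.allclose(expm(Omega), Y)           # formula (7.7)
    assert np.isclose(np.trace(expm(Omega)), 2 * np.cosh(Delta))
```

When $\alpha^2 + \beta\gamma < 0$, $\Delta$ is purely imaginary and $\sinh \Delta / \Delta$ and $\cosh \Delta$ reduce to the real quantities $\sin D / D$ and $\cos D$ with $\Delta = iD$.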
Let
$$(7.8)\qquad \theta = y_1 + \eta_2$$
be the trace of $Y$. Then we have from (7.7)
$$(7.9)\qquad \theta = 2 \cosh \Delta$$
(since the trace of $\Omega$ vanishes). Actually, $2\Delta$ is the difference between the
eigenvalues $\lambda_1\,, \lambda_2$ of $\Omega$. If this difference is a multiple of $2\pi i$ but different from
zero, then $\lambda_1 \ne \lambda_2$ since $\lambda_1 + \lambda_2 = 0$. In this case, $\Omega$ can be transformed into
the diagonal form. But then, although both the eigenvalues of $Y$ equal $+1$ or
$-1$, it is possible that $Y$ is not a multiple of the unit matrix. Instead, if we
combine (7.7) and (7.9) we find
$$(7.10)\qquad \Omega = (2Y - \theta I)\, \frac{\Delta}{\sqrt{\theta^2 - 4}}\,,$$
where we have $\theta^2 = 4$ if $\Delta = n\pi i$; therefore, if $\Delta \ne 0$ and $\Omega$ is
finite, $2Y - \theta I = 0$. In order to discuss at least one case completely we prove
THEOREM VI: Let $Q(t) > 0$ for all $t \ge 0$. Then a solution $Y$ of (7.3), with
the initial condition $Y(0) = I$, has a representation
$$Y(t) = \exp \Omega(t), \qquad \Omega(0) = 0,$$
for $t \ge 0$ with a two times differentiable $\Omega(t)$, if and only if
$$(7.11)\qquad (\operatorname{trace}\, Y)^2 \le 4$$
for $t \ge 0$.
Proof: We know from (7.10) that $2Y = \theta I$ whenever $\theta^2 = 4$ and $\Delta \ne 0$
(which implies $\Omega \ne 0$). Also, we know that $Y = I$ for $t = 0$, $\theta = 2$, $\Delta = 0$.
Now we shall consider a value of $t = t_0$ for which $\theta^2 = 4$, $Y = \frac{1}{2}\theta I$. We find
at this point
$$(7.12)\qquad \frac{d\theta}{dt} = y_1' + \eta_2' = y_2 - Q\eta_1 = 0,$$
$$(7.13)\qquad \frac{d^2\theta}{dt^2} = -Q\theta - Q'\eta_1 = -Q\theta.$$

Therefore, if $Q > 0$, then $\theta^2 < 4$ for $t = t_0 + \epsilon$ if $\epsilon$ is positive and sufficiently small.
Next, we observe that $\theta' \ne 0$ and $\Delta' \ne 0$ in any interval $t_0 < t < t_1$ in
which $\theta^2 < 4$. Since $|Y| = y_1\eta_2 - y_2\eta_1 = 1$, we find for $\theta' = 0$ from (7.12)
that $y_1\eta_2 = 1 + Q\eta_1^2\,$. Therefore, $y_1$ and $\eta_2$ have the same sign and are different
from zero. However,
$$(7.14)\qquad (y_1 - \eta_2)^2 = \theta^2 - 4y_1\eta_2 = \theta^2 - 4 - 4Q\eta_1^2 < 0$$
if $\theta^2 - 4 < 0$, and this is a contradiction. Therefore $\theta' \ne 0$. We find from (7.9)
by differentiation that $\Delta'$ can vanish only if $\theta' = 0$ or if $\sinh \Delta = 0$; but then
$\theta^2 = 4$, which had been excluded.
We consider now the behaviour of $\theta(t)$ and $\Delta(t)$ for $t > 0$, starting at $t = 0$.
We have shown that there exists a smallest positive number $t_1$ (which may be
$\infty$) such that $\theta^2 < 4$ in $0 < t < t_1\,$. For $0 < t < t_1\,$, $\Delta$ must be purely imaginary
according to (7.9). We put $\Delta = iD$. We find from (7.9) that
$$(7.15)\qquad \theta' = -2D' \sin D.$$
Therefore, $D$ either increases or decreases monotonically in $0 < t < t_1\,$, and if
$t_1$ is finite and $\theta(t_1) = -2$, then $D(t_1) = \pi$ or $-\pi$. If $t_1 = \infty$, we see from (7.10)
and (7.9) that Theorem VI is true. Therefore, assume that $t_1$ is finite. Then
beyond this point, say for $t = t_1 + \epsilon$, $\theta(t)$ must increase again since it follows
from (7.10) that $Y(t_1) = -I$. If we differentiate (7.9) two times, we find
for $t = t_1$ from $\sin D(t_1) = 0$ and from (7.13) that
$$(7.16)\qquad \tfrac{1}{2}\theta'' = -D'^2 \cos D = D'^2(t_1) > 0.$$
Therefore, $D'$ cannot even vanish at $t = t_1\,$. Going from $t = t_1$ to the next point
$t_2 > t_1$ for which $\theta^2 = 4$, etc., we find that $D'$ must be always real and positive
or always real and negative, since we shall never come back to $D = 0$ for $t > 0$.
Therefore, $D(t)$ is a monotonic function of $t$ for $t > 0$, and this proves that
$\theta^2 \le 4$ is a necessary condition for the existence of $\Omega$. That it is also a sufficient
condition can be seen as follows: If $\Delta \ne \pm in\pi$, $n = 1, 2, 3, \cdots$, then $\Omega$ is
completely determined by postulating that $D(t)$ is monotonic and by using
(7.9) and (7.10). In order to find $\Omega(t)$ for the exceptional values $t = t_n$ for which
$\Delta = in\pi$ (or $\Delta = -in\pi$, consistently with the same sign), we multiply both
sides of (7.10) by $\sqrt{\theta^2 - 4}$ and then differentiate with respect to $t$. We find
$$(7.17)\qquad 2(\Delta' \cosh \Delta)\Omega + 2(\sinh \Delta)\Omega' = (2Y - \theta I)\Delta' + (2Y' - \theta' I)\Delta.$$
Since $\sinh \Delta = 0$, $\cosh \Delta = (-1)^n$ and $2Y - \theta I = 0$, $\theta' = 0$, $Y' = AY$ for
$t = t_n\,$, (7.17) becomes
$$(7.18)\qquad (-1)^n \Delta'(t_n)\, \Omega(t_n) = AY\Delta = (-1)^n A\, \Delta(t_n).$$
Since $\Delta(t_n) = \epsilon\, in\pi$, where $\epsilon$ is independent of $n$ and $\epsilon = \pm 1$, it is easily seen
that (7.18), (7.6), (7.5) and (7.3) suffice to determine $\Omega$ uniquely for $t = t_n\,$.
The result is independent of $\epsilon$ if we define consistently $\sqrt{\theta^2 - 4} = 2 \sinh \Delta$.
Also, it is easily seen that the condition (7.11) will not be satisfied generally
for a differential equation (7.1) or (7.2). As an example, we may take
$Q = \exp 2t$. Then the solutions of (7.1) are linear combinations of $J_0(e^t)$ and
$Y_0(e^t)$, where $J_0\,, Y_0$ denote the Bessel functions of the first and second kind respectively.
The asymptotic expansions for the Bessel functions and their derivatives show
that $\theta^2 > 4$ for infinitely many values of $t$.
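The growth of the trace can be exhibited numerically. The sketch below (an illustration added here, not part of the paper; the integration scheme and the step count are arbitrary choices) integrates (7.2) with $Q = \exp 2t$ and confirms that $\theta^2 = (\operatorname{trace}\, Y)^2$ exceeds 4:

```python
import numpy as np

# Integrate dY/dt = [[0, 1], [-exp(2t), 0]] Y, Y(0) = I, by RK4 and track
# the largest value of (trace Y)^2; condition (7.11) fails well before t = 3.
def max_trace_squared(t_end=3.0, steps=150000):
    h = t_end / steps
    Y, t = np.eye(2), 0.0
    A = lambda t: np.array([[0.0, 1.0], [-np.exp(2.0 * t), 0.0]])
    worst = 0.0
    for _ in range(steps):
        k1 = A(t) @ Y
        k2 = A(t + h / 2) @ (Y + h / 2 * k1)
        k3 = A(t + h / 2) @ (Y + h / 2 * k2)
        k4 = A(t + h) @ (Y + h * k3)
        Y = Y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        worst = max(worst, np.trace(Y) ** 2)
    return worst

worst_value = max_trace_squared()
assert worst_value > 4.0
```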

BIBLIOGRAPHY
[1] K. O. Friedrichs, Mathematical aspects of the quantum theory of fields, Part V, Communications on Pure and Applied Mathematics, Vol. 6, 1953, pp. 1-72.
[2] R. P. Feynman, An operator calculus having applications in quantum electrodynamics, Physical Review, Vol. 84, No. 1, 1951, pp. 108-128.
[3] Herbert B. Keller and Joseph B. Keller, On systems of linear ordinary differential equations, Research Report EM-33, New York University, Washington Square College of Arts and Science, Mathematics Research Group.
[4] H. F. Baker, On the integration of linear differential equations, Proceedings of the London Mathematical Society, Vol. 35, 1903, pp. 333-374; Vol. 34, 1902, pp. 347-360; Second Series, Vol. 2, 1904, pp. 293-296.
[5] H. F. Baker, Alternants and continuous groups, Proceedings of the London Mathematical Society, Second Series, Vol. 3, 1904, pp. 24-47.
[6] F. Hausdorff, Die symbolische Exponentialformel in der Gruppentheorie, Berichte der Sächsischen Akademie der Wissenschaften (Math.-Phys. Klasse), Leipzig, Vol. 58, 1906, pp. 19-48.
[7] W. Magnus, Über Beziehungen zwischen höheren Kommutatoren, Journal f. d. Reine u. Angewandte Mathematik, Vol. 177, 1937, pp. 105-115.
[8] E. Witt, Treue Darstellung Liescher Ringe, Journal f. d. Reine u. Angewandte Mathematik, Vol. 177, 1937, pp. 152-160.
[9] W. Magnus, A connection between the Baker-Hausdorff formula and a problem of Burnside, Annals of Mathematics, Vol. 52, 1950, pp. 111-126.
[10] G. Falk, Konstanzelemente in Ringen mit Differentiation, Mathematische Annalen, Vol. 124, 1952, pp. 182-186.
[11] H. Zassenhaus, Unpublished.
[12] E. B. Dynkin, Calculation of the coefficients in the Campbell-Hausdorff formula, Doklady Akad. Nauk SSSR (N.S.), Vol. 57, 1947, pp. 323-326 (Russian).
[13] W. Specht, Die linearen Beziehungen zwischen höheren Kommutatoren, Mathematische Zeitschrift, Vol. 51, 1948, pp. 367-376.
[14] F. Wever, Operatoren in Lieschen Ringen, Journal f. d. Reine u. Angewandte Mathematik, Vol. 187, 1947, pp. 44-55.
