
LINEAR ROBUST CONTROL
SOLUTIONS MANUAL
Michael Green
Australian National University
Canberra, Australia
David J.N. Limebeer
Professor of Control Engineering
Imperial College of Science, Technology and Medicine
University of London
London, U.K.
This book was previously published by: Pearson Education, Inc.
ISBN 0-13-133547-2
To Keith Brackenbury and Stanford Reynolds
Contents

Preface
Solutions to Problems in Chapter 1
Solutions to Problems in Chapter 2
Solutions to Problems in Chapter 3
Solutions to Problems in Chapter 4
Solutions to Problems in Chapter 5
Solutions to Problems in Chapter 6
Solutions to Problems in Chapter 7
Solutions to Problems in Chapter 8
Solutions to Problems in Chapter 9
Solutions to Problems in Chapter 10
Solutions to Problems in Chapter 11
Solutions to Problems in Appendix A
Solutions to Problems in Appendix B
Preface
Every serious student of a technical scientific subject has spent late nights struggling with homework assignments at some time during their career. The frustrations which go along with this activity range from "I don't have the foggiest idea how to do this exercise" to "this is probably right, but it would be nice to have my solution checked by an expert". It is our expectation that the student exercises in our book Linear Robust Control, published by Prentice-Hall, 1994, will generate both the above sentiments at some stage or other, and many others besides!
Because we would like our book to be useful both as a teaching and as a research aid, we decided that a reasonably detailed solutions manual would have a role to play. We hope that most of the answers are informative and that some of them are interesting and even new. Some of the examples took their inspiration from research papers which we were unable to cover in detail in the main text. In some cases, and undoubtedly with the benefit of hindsight, we are able to supply different and possibly nicer solutions to the problems studied in this literature.
What about the answer to the question: who should have access to the solutions manual? We believe that in the first instance students should not have access to the solutions manual, because that would be like exploring the Grand Canyon from the window of a rental car: to really experience it, you have to actively partake.
In an attempt to steel the nerve for the task ahead, we thought it appropriate to repeat a quotation due to Brutus Hamilton (1957), from the book Lore of Running (Oxford University Press, 1992), by the South African sports scientist and ultramarathon runner Tim Noakes.
It is one of the strange ironies of this strange life that those who work the hardest, who subject themselves to the strictest discipline, who give up certain pleasurable things in order to achieve a goal are the happiest of people. When you see twenty or thirty men line up for a distance race, do not pity them, don't feel sorry for them. Better envy them.
After reading Noakes' book a little further, we couldn't help noticing a number of other analogies between doing student exercises and training for a marathon. Here are a few:
1. Nobody can do them for you.
2. At least in the beginning, there is no doubt that they are hard.
3. Like any acquired skill, the more effort that goes into the acquisition, and the more difficulties overcome, the more rewarding the result.
4. To achieve success there must always be the risk of failure no matter how hard
you try.
5. Student exercises, like running, teach you real honesty. There is no luck.
Results cannot be faked and there is no one but yourself to blame when
things go wrong.
6. Don't make excuses like "my feet are too big", "I don't know enough mathematics", "I am too old" and so on. Overcoming such difficulties will only heighten the reward.
We have tried to tie the solutions manual to the main text as closely as possible and from time to time we refer to specific results there. Equation references of the form (x.y.z) refer to equations in the main text. For example, equation (3.2.1) means equation (3.2.1) of Linear Robust Control, which will be the first equation in Section 3.2. Equations in the solutions manual have the form (x.y). For example, equation (8.1) will be the first equation in the Solutions to Problems in Chapter 8. All cited works are as listed in the bibliography of Linear Robust Control.
We have made every effort within stringent time constraints to trap errors, but we cannot realistically expect to have found them all. If a solution is hard to follow or doesn't seem to make sense, it could be wrong! Don't get mad; we have tried to help, and we hope our solutions will assist teachers and students alike.
Michael Green
David Limebeer
London
Solutions to Problems in Chapter 1
Solution 1.1. We will establish the three properties of a norm and then the submultiplicative property:

1. If $h = 0$, $\sup_\omega |h(j\omega)| = 0$. Conversely, if $\sup_\omega |h(j\omega)| = 0$, then $h(j\omega) = 0$ for all $\omega$ and $h = 0$. If $h \neq 0$, it is clear from the definition that $\|h\|_\infty > 0$.

2.
$$\|\alpha h\|_\infty = \sup_\omega |\alpha h(j\omega)| = \sup_\omega |\alpha|\,|h(j\omega)| = |\alpha| \sup_\omega |h(j\omega)| = |\alpha|\,\|h\|_\infty.$$

3.
$$\|h+g\|_\infty = \sup_\omega |h(j\omega)+g(j\omega)| \le \sup_\omega\big(|h(j\omega)|+|g(j\omega)|\big) \le \sup_\omega |h(j\omega)| + \sup_\omega |g(j\omega)| = \|h\|_\infty + \|g\|_\infty.$$

This establishes that $\|\cdot\|_\infty$ is a norm. We now prove the submultiplicative property:

4.
$$\|hg\|_\infty = \sup_\omega |h(j\omega)g(j\omega)| = \sup_\omega\big(|h(j\omega)|\,|g(j\omega)|\big) \le \sup_\omega |h(j\omega)| \,\sup_\omega |g(j\omega)| = \|h\|_\infty \|g\|_\infty.$$
Solution 1.2. Set
$$\hat h = w(1-gk)^{-1} = w(1+gq), \qquad q = k(1-gk)^{-1}.$$
Therefore,
$$q = g^{-1}w^{-1}(\hat h - w).$$
In the case that $\alpha < 0$, $g^{-1}$ is stable and $\|\hat h\|_\infty$ may be made arbitrarily small by using a constant compensator $k$ of sufficiently large gain. If $\alpha \ge 0$, $\hat h$ must satisfy the interpolation constraint
$$\hat h(\alpha) = w(\alpha) = \frac{\alpha+4}{2(\alpha+1)}.$$
Now, for $\alpha \ge 0$,
$$|\hat h(\alpha)| < 1 \iff (\alpha+4) < 2(\alpha+1) \iff \alpha > 2.$$
Thus the problem has a solution if and only if $\alpha < 0$ or $\alpha > 2$.
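The feasibility boundary above can be spot-checked numerically; the sample values of $\alpha$ below are illustrative choices, not from the text.

```python
# Spot-check (not part of the original solution): for alpha >= 0 the
# constrained value |h_hat(alpha)| = (alpha + 4)/(2(alpha + 1)) drops below 1
# exactly when alpha > 2.
def w_at(alpha):
    """Evaluate (alpha + 4) / (2 * (alpha + 1)), the constrained value h_hat(alpha)."""
    return (alpha + 4) / (2 * (alpha + 1))

feasible = [a for a in [0.0, 1.0, 2.0, 2.5, 10.0] if w_at(a) < 1]
print(feasible)  # only the sample points with alpha > 2 survive
```

At $\alpha = 2$ the constrained value is exactly 1, which is why the inequality is strict.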
Solution 1.3. Since $e = h - gf$ with $f \in \mathcal{RH}_\infty$,
$$f = g^{-1}(h-e).$$
In order for $f$ to be stable, we need
$$e(1) = h(1) = 1/5.$$
Differentiating $e = h - gf$ gives
$$\frac{de}{ds} = \frac{dh}{ds} - g\frac{df}{ds} - f\frac{dg}{ds}.$$
Thus, $f$ is also given by
$$f = -\Big(\frac{dg}{ds}\Big)^{-1}\Big(\frac{de}{ds} - \frac{dh}{ds} + g\frac{df}{ds}\Big).$$
Since $\frac{dg}{ds}\big|_{s=1} = 0$, the stability of $f$ requires a second interpolation constraint:
$$\frac{de}{ds}\Big|_{s=1} = \frac{dh}{ds}\Big|_{s=1} = -\frac{1}{(s+4)^2}\Big|_{s=1} = -1/25.$$
Solution 1.4.

1. It is sufficient for closed-loop stability that $|gk(1-gk)^{-1}(j\omega)|\,|\delta(j\omega)| < 1$ for all real $\omega$, including $\omega = \infty$. (This follows from the Nyquist criterion.) If $\|\delta\|_\infty < \gamma$, we need
$$\|gk(1-gk)^{-1}\|_\infty \le \gamma^{-1},$$
with $\gamma$ maximized. This may be achieved via the following procedure:

Step 1: Factorize $gg^\sim = mm^\sim$, in which both $m$ and $m^{-1}$ are stable.

Step 2: Define the Blaschke product
$$a = \prod_{i=1}^{m}\frac{p_i+s}{p_i-s},$$
in which the $p_i$'s are the right-half-plane poles of $g$.

Step 3: If $q = k(1-gk)^{-1}$ and $\hat q = amq$, we observe that

(i) $\|gq\|_\infty = \|gaq\|_\infty$, since $a$ is allpass; $= \|aqmm^\sim(g^\sim)^{-1}\|_\infty = \|aqm\|_\infty$, since $m^\sim(g^\sim)^{-1}$ is allpass; $= \|\hat q\|_\infty$.

(ii) $\hat q \in \mathcal{RH}_\infty \Rightarrow q \in \mathcal{RH}_\infty$, since $q = a^{-1}m^{-1}\hat q$ with $a^{-1}$ a stable allpass function.

(iii) $\hat q \in \mathcal{RH}_\infty \Rightarrow q(p_i) = 0$.

(iv) $(1+gq)(p_i) = 0 \iff \hat q(p_i) = -(amg^{-1})(p_i)$.

Step 4: Find a stable $\hat q$ of minimum infinity norm which satisfies
$$\hat q(p_i) = -(amg^{-1})(p_i).$$

Step 5: Back substitute
$$k = q(1+gq)^{-1} = \hat q(am+g\hat q)^{-1}.$$

2. (i) If $g$ is unstable and $\gamma \ge 1$, we can always destabilize the loop by setting $\delta = -1$, since in this case $g(1+\delta)$ would be open loop.

(ii) Suppose for a general plant that
$$g = \frac{n_+n_-}{d_+d_-},$$
in which $n_-$ and $d_-$ are polynomials that have all their roots in the closed-right-half plane, while $n_+$ and $d_+$ are polynomials that have all their roots in the open-left-half plane. Then
$$a = \frac{d_-^\sim}{d_-}, \qquad m = \frac{n_+n_-^\sim}{d_+d_-^\sim},$$
and consequently
$$g^{-1}ma = \frac{d_+d_-}{n_+n_-}\cdot\frac{n_+n_-^\sim}{d_+d_-^\sim}\cdot\frac{d_-^\sim}{d_-} = \frac{n_-^\sim}{n_-},$$
which is an unstable allpass function. The implication now follows from the fact that $|a(p_i)| > 1$ for any unstable allpass function $a$ and any $\mathrm{Re}(p_i) > 0$.

3. In this case
$$g = \frac{s-2}{s-1} \quad\Rightarrow\quad a = \frac{s+1}{1-s} \quad\text{and}\quad m = \frac{s+2}{s+1},$$
so that
$$g^{-1}am = \Big(\frac{s-1}{s-2}\Big)\Big(\frac{s+1}{1-s}\Big)\Big(\frac{s+2}{s+1}\Big) = -\frac{s+2}{s-2}.$$
Therefore
$$g^{-1}am(1) = 3 \quad\Rightarrow\quad \hat q_{opt} = -3 \;\text{ and }\; \|\hat q_{opt}\|_\infty = 3.$$
Since $\gamma_{max} = 1/\|\hat q_{opt}\|_\infty$, we see that $\gamma_{max} = 1/3$. Finally
$$k = \hat q_{opt}(am+g\hat q_{opt})^{-1} = -3\Big(\frac{s+2}{1-s} - 3\,\frac{s-2}{s-1}\Big)^{-1} = \frac{3(s-1)}{4(s-1)} = 3/4.$$
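The optimal closed loop can be cross-checked numerically: with $g = (s-2)/(s-1)$ and $k = 3/4$, the complementary sensitivity $gk(1-gk)^{-1} = 3(s-2)/(s+2)$ is three times an allpass function, so its magnitude is $3 = 1/\gamma_{max}$ at every frequency. The sampled frequencies are arbitrary.

```python
# Verify that |g k (1 - g k)^{-1}(jw)| = 3 for all w, matching gamma_max = 1/3.
def g(s):
    return (s - 2) / (s - 1)

def k(s):
    return 0.75  # the optimal controller k = 3/4 found above

mags = []
for w in [0.0, 0.3, 1.0, 7.0, 100.0]:
    s = 1j * w
    t = g(s) * k(s) / (1 - g(s) * k(s))
    mags.append(abs(t))
print(mags)  # each entry equals 3 up to rounding
```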
Solution 1.5.

1. The closed loop will be stable provided
$$\|\delta k(1-gk)^{-1}\|_\infty < 1 \;\Leftarrow\; \|\delta w^{-1}\|_\infty\,\|wk(1-gk)^{-1}\|_\infty < 1 \;\Leftarrow\; \|\delta w^{-1}\|_\infty < \frac{1}{\|wk(1-gk)^{-1}\|_\infty}.$$
This last inequality will be satisfied if $|\delta(j\omega)| < |w(j\omega)|$ for all $\omega$ and $\|wk(1-gk)^{-1}\|_\infty \le 1$.

2. We will now describe the optimization procedure.

Step 1: Define
$$\hat q = waq, \qquad\text{where}\quad q = k(1-gk)^{-1} \quad\text{and}\quad a = \prod_{i=1}^{m}\Big(\frac{p_i+s}{p_i-s}\Big),$$
in which the $p_i$'s are the right-half-plane poles of $g$.

Step 2: Find a stable $\hat q$ of minimum infinity norm such that
$$\hat q(p_i) = -(g^{-1}aw)(p_i).$$

Step 3: Back substitute using $k = \hat q(aw + g\hat q)^{-1}$.

3. Since
$$g = \frac{s+1}{s-2},$$
we must have
$$a = \frac{s+2}{2-s}.$$
Therefore
$$g^{-1}aw = \Big(\frac{s-2}{s+1}\Big)\Big(\frac{s+2}{2-s}\Big)\Big(\frac{s+1}{s+4}\Big) = -\frac{s+2}{s+4}.$$
Consequently
$$\hat q_{opt} = \frac{s+2}{s+4}\Big|_{s=2} = 2/3.$$
Thus a controller exists, since $\min_k \|wk(1-gk)^{-1}\|_\infty = \|\hat q_{opt}\|_\infty = 2/3 < 1$. The optimal controller is
$$k = \frac{2}{3}\Big(\frac{s+2}{2-s}\cdot\frac{s+1}{s+4} + \frac{2}{3}\cdot\frac{s+1}{s-2}\Big)^{-1} = -\frac{2(s+4)}{s+1}.$$
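As in the previous problem, optimality can be confirmed numerically: with $g = (s+1)/(s-2)$, $w = (s+1)/(s+4)$ and the controller above, $wk(1-gk)^{-1} = -2(s-2)/(3(s+2))$ is an allpass function scaled by $2/3$. The sampled frequencies are arbitrary.

```python
# Verify that |w k (1 - g k)^{-1}(jw)| = 2/3 at every frequency, confirming
# min_k ||w k (1 - g k)^{-1}||_inf = 2/3 < 1.
def g(s):
    return (s + 1) / (s - 2)

def w(s):
    return (s + 1) / (s + 4)

def k(s):
    return -2 * (s + 4) / (s + 1)  # controller derived above

mags = [abs(w(1j * f) * k(1j * f) / (1 - g(1j * f) * k(1j * f)))
        for f in [0.0, 0.5, 2.0, 30.0]]
print(mags)  # each entry equals 2/3 up to rounding
```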
Solution 1.6.

1. If $E = H - GF$, it follows that
$$F = G^{-1}(H-E).$$
It is now immediate from the stability requirement on $F$ that all the right-half-plane poles of $G^{-1}$ must be cancelled by zeros of $(H-E)$.

2. It follows from the standard theory of stable coprime matrix fractions that a cancellation between $G^{-1}$ and $(H-E)$ will occur if and only if
$$\begin{bmatrix} H(z_i)-E(z_i) & G(z_i) \end{bmatrix}$$
loses rank at a zero $z_i$ of $G$. If such a loss of rank occurs, there exists a $w_i$ such that
$$w_i^*\begin{bmatrix} H(z_i)-E(z_i) & G(z_i) \end{bmatrix} = 0.$$
If
$$w_i^*H(z_i) = v_i^*,$$
the vector-valued interpolation constraints will be
$$w_i^*E(z_i) = v_i^*.$$
Satisfaction of these constraints ensures the cancellation of the unstable poles of $G^{-1}$.
Solutions to Problems in Chapter 2
Solution 2.1.

1. $\underline\sigma(I-Q) \ge \underline\sigma(I) - \bar\sigma(Q) = 1 - \bar\sigma(Q) > 0$.

2.
$$\bar\sigma\Big(\sum_{k=0}^\infty Q^k\Big) \le \sum_{k=0}^\infty \bar\sigma(Q^k) \le \sum_{k=0}^\infty \bar\sigma(Q)^k = \frac{1}{1-\bar\sigma(Q)} < \infty.$$

3. Consider
$$(I-Q)\Big(\sum_{k=0}^\infty Q^k\Big) = \sum_{k=0}^\infty Q^k - \sum_{k=0}^\infty Q^{k+1} = I + \sum_{k=0}^\infty Q^{k+1} - \sum_{k=0}^\infty Q^{k+1} = I.$$
Hence
$$(I-Q)^{-1} = \sum_{k=0}^\infty Q^k.$$
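The Neumann series argument above can be illustrated numerically. The particular $2\times 2$ matrix $Q$ below (with $\bar\sigma(Q) < 1$) is an arbitrary choice for the check.

```python
# Numeric illustration: sum_{k>=0} Q^k converges to (I - Q)^{-1} when the
# norm of Q is less than 1. Plain-Python 2x2 helpers keep this self-contained.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

Q = [[0.2, 0.1], [0.0, 0.3]]
I = [[1.0, 0.0], [0.0, 1.0]]

S, term = I, I            # partial sum of the series and current power Q^k
for _ in range(200):
    term = matmul(term, Q)
    S = matadd(S, term)

# (I - Q) times the partial sum should be (numerically) the identity.
residual = matmul([[1 - Q[0][0], -Q[0][1]], [-Q[1][0], 1 - Q[1][1]]], S)
print(residual)
```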
Solution 2.2.

1.
$$Q(I-Q)^{-1} = (I-Q)^{-1}(I-Q)Q(I-Q)^{-1} = (I-Q)^{-1}(Q-Q^2)(I-Q)^{-1} = (I-Q)^{-1}Q(I-Q)(I-Q)^{-1} = (I-Q)^{-1}Q.$$

2.
$$(I-Q)^{-1} = (I-Q+Q)(I-Q)^{-1} = I + Q(I-Q)^{-1}.$$

3.
$$K(I-GK)^{-1} = (I-KG)^{-1}(I-KG)K(I-GK)^{-1} = (I-KG)^{-1}(K-KGK)(I-GK)^{-1} = (I-KG)^{-1}K(I-GK)(I-GK)^{-1} = (I-KG)^{-1}K.$$
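The push-through identity in part 3 is easy to spot-check numerically; the $G$ and $K$ below are arbitrary small matrices chosen so that $I-GK$ is invertible.

```python
# Numeric spot-check of K(I - GK)^{-1} = (I - KG)^{-1}K with 2x2 helpers.
def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

def inv(a):
    d = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[a[1][1] / d, -a[0][1] / d], [-a[1][0] / d, a[0][0] / d]]

I = [[1.0, 0.0], [0.0, 1.0]]
G = [[0.3, -0.1], [0.2, 0.4]]
K = [[0.5, 0.2], [-0.3, 0.1]]

lhs = mul(K, inv(sub(I, mul(G, K))))
rhs = mul(inv(sub(I, mul(K, G))), K)
print(lhs, rhs)  # the two sides agree entry by entry
```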
Solution 2.3. Suppose that $Q = Y\Sigma U^*$, where $\Sigma = \mathrm{diag}(\sigma_1,\dots,\sigma_p)$ and $\sigma_1 \ge \sigma_2 \ge \dots \ge \sigma_p > 0$. Then
$$Q^{-1} = U\Sigma^{-1}Y^* = U\,\mathrm{diag}(\sigma_1^{-1},\dots,\sigma_p^{-1})\,Y^*,$$
where $\sigma_p^{-1} \ge \dots \ge \sigma_2^{-1} \ge \sigma_1^{-1}$. Hence
$$\bar\sigma(Q^{-1}) = \sigma_p^{-1} = \frac{1}{\underline\sigma(Q)}.$$
Solution 2.4.

1. Let $Q = WJW^{-1}$, in which $J$ is the Jordan form of $Q$. Then
$$\det Q = \det(W)\det(J)\det(W^{-1}) = \det(W)\det(J)\det(W)^{-1} = \det(J) = \prod_{i=1}^p \lambda_i(Q).$$
Now let $Q = Y\Sigma U^*$, with $\Sigma = \mathrm{diag}(\sigma_1,\dots,\sigma_p)$, be a singular value decomposition of $Q$. Then
$$\det(Q) = \det(Y)\det(\Sigma)\det(U^*) = e^{i\theta_Y}\prod_{i=1}^p \sigma_i(Q)\,e^{-i\theta_U}, \text{ since } Y \text{ and } U \text{ are unitary}, \;=\; e^{i\theta}\prod_{i=1}^p \sigma_i(Q).$$

2. It is well known that
$$\underline\sigma(Q) \le \frac{\|Qu\|}{\|u\|} \le \bar\sigma(Q)$$
for any non-zero vector $u$. Now if $Qw_i = \lambda_i w_i$, we see that
$$\underline\sigma(Q) \le \frac{\|Qw_i\|}{\|w_i\|} = |\lambda_i| \le \bar\sigma(Q).$$
Solution 2.5. Nothing can be concluded in general. To see this consider the system
$$G = \begin{bmatrix} 1 + \epsilon\frac{s+1}{s-1} & 1 \\ 0 & 1 + \epsilon\frac{s+1}{s-1} \end{bmatrix}.$$
Each eigenvalue $\lambda_i(j\omega)$ makes one encirclement of $+1$, and $|1 - \lambda_i(j\omega)| = \epsilon$ for all $\omega$ and any value of $\epsilon$. It is easy to check that the (constant) additive perturbation
$$A = \begin{bmatrix} 0 & 0 \\ \epsilon^2 & 0 \end{bmatrix}$$
will destabilize the loop, since $\det\big(I - (G(j\omega)+A)\big)$ passes through the origin (at $\omega = 0$). Since $\lim_{\epsilon\to 0}\|A\| = 0$, we see that the loop may be made arbitrarily close to instability for any value of $\epsilon$.
Solution 2.6.

1. These are just the Nyquist plots of $\frac{1}{s+1}$ and $\frac{2}{s+2}$, which are circles cutting the real axis at 0 and 1 (i.e., both are circles with center 1/2 and radius 1/2).

2. This can be checked by evaluating $C(sI-A)^{-1}B$ with
$$A = \begin{bmatrix} -1 & 0 \\ 0 & -2 \end{bmatrix}, \qquad B = \begin{bmatrix} 7 & -8 \\ -12 & 14 \end{bmatrix}, \qquad C = \begin{bmatrix} 7 & 8 \\ 6 & 7 \end{bmatrix}.$$

3. We begin by setting $k_1 = k + \delta$ and $k_2 = k - \delta$. This gives
$$\det(sI - A - BKC) = \det\begin{bmatrix} s+1-k-97\delta & -112\delta \\ 168\delta & s+2-2k+194\delta \end{bmatrix} = s^2 + s(3 - 3k + 97\delta) + 2\big((1-k)^2 - \delta^2\big).$$
We therefore require positivity of the linear coefficient $3 - 3k + 97\delta$. Now
$$3 - 3k + 97\delta = 3 - 50(k-\delta) + 47(k+\delta) = 3 + 47k_1 - 50k_2.$$
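The determinant computation in part 3 can be verified numerically via the trace and determinant of $A + BKC$ (for a $2\times 2$ matrix $M$, the characteristic polynomial is $s^2 - \mathrm{trace}(M)s + \det(M)$). The values of $k$ and $\delta$ below are arbitrary test points.

```python
# Check that A + BKC with K = diag(k + delta, k - delta) has characteristic
# polynomial s^2 + (3 - 3k + 97 delta)s + 2((1 - k)^2 - delta^2).
def mul(a, b):
    n = len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(len(b))) for j in range(n)]
            for i in range(len(a))]

A = [[-1.0, 0.0], [0.0, -2.0]]
B = [[7.0, -8.0], [-12.0, 14.0]]
C = [[7.0, 8.0], [6.0, 7.0]]

k, delta = 0.4, 0.05
K = [[k + delta, 0.0], [0.0, k - delta]]

M = mul(mul(B, K), C)
M = [[A[i][j] + M[i][j] for j in range(2)] for i in range(2)]  # A + BKC

trace = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(-trace, det)  # linear and constant coefficients of the char. polynomial
```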
Solution 2.7. The plots given in Figures 2.1, 2.2 and 2.3 can be verified using MATLAB procedures.

[Figure 2.1: Generalized Nyquist diagram when $\delta = -0.005$.]
Solution 2.8. The proof of Theorem 2.4.3 has to be modified to make use of
$$d_\gamma = \det\big(I - (I + \gamma\Delta_1(s))GK(s)\big) = \det\big(I - \gamma\Delta_1 GK(I - GK(s))^{-1}\big)\,\det\big(I - GK(s)\big).$$
[Figure 2.2: Generalized Nyquist diagram when $\delta = 0$.]
It is now clear that $d_\gamma$ will not cross the origin for any $\gamma \in [0,1]$ if
$$\bar\sigma\big(\Delta_1(s)\big)\,\bar\sigma\big(GK(I - GK(s))^{-1}\big) < 1 \iff \bar\sigma\big(\Delta_1(s)\big) < \frac{1}{\bar\sigma\big(GK(I - GK(s))^{-1}\big)}$$
for all $s$ on the Nyquist contour $D_R$.
Solution 2.9. In this case we use
$$d_\gamma = \det\big(I - (I - \gamma\Delta_2(s))^{-1}GK(s)\big) = \det\big(I - \gamma\Delta_2(s) - GK(s)\big)\,\det\big((I - \gamma\Delta_2(s))^{-1}\big).$$
We need $\bar\sigma\big(\gamma\Delta_2(s)\big) < 1$ to ensure the existence of $\big(I - \gamma\Delta_2(s)\big)^{-1}$, and we need
$$\bar\sigma\big(\gamma\Delta_2(s)\big) < \underline\sigma\big(I - GK(s)\big)$$
to ensure that $d_\gamma$ will not cross the origin for any $\gamma \in [0,1]$. Thus a sufficient condition for closed-loop stability is
$$\bar\sigma\big(\Delta_2(s)\big) < \min\big\{1,\ \underline\sigma\big(I - GK(s)\big)\big\}$$
for all $s$ on the Nyquist contour $D_R$.
[Figure 2.3: Generalized Nyquist diagram when $\delta = 0.005$.]
Solution 2.10. To see that the implication in (2.4.8) is true we argue that
$$\bar\sigma\big(K(I-GK)^{-1}\big) \ge \bar\sigma(K)\,\underline\sigma\big((I-GK)^{-1}\big) = \frac{\bar\sigma(K)}{\bar\sigma(I-GK)} \ge \frac{\bar\sigma(K)}{1+\bar\sigma(G)\bar\sigma(K)},$$
so that $\bar\sigma\big(K(I-GK)^{-1}\big) \le \gamma$ implies
$$\bar\sigma(K) \le \frac{\gamma}{1-\gamma\bar\sigma(G)}, \qquad\text{for } 1-\gamma\bar\sigma(G) > 0.$$
For the implication in (2.4.9),
$$\bar\sigma\big(K(I-GK)^{-1}\big) \le \frac{\bar\sigma(K)}{1-\bar\sigma(G)\bar\sigma(K)},$$
so that $\bar\sigma\big(K(I-GK)^{-1}\big) \le \gamma$ whenever
$$\bar\sigma(K) \le \frac{\gamma}{1+\gamma\bar\sigma(G)}.$$
To establish the inequalities given in (2.4.13), we argue that, for $\bar\sigma(Q) \le \gamma < 1$,
$$\bar\sigma\big(Q(I-Q)^{-1}\big) \ge \frac{\bar\sigma(Q)}{\bar\sigma(I-Q)} \ge \frac{\bar\sigma(Q)}{1+\bar\sigma(Q)} \ge \frac{\bar\sigma(Q)}{1+\gamma}.$$
Also,
$$\bar\sigma\big(Q(I-Q)^{-1}\big) \le \frac{\bar\sigma(Q)}{\underline\sigma(I-Q)} \le \frac{\bar\sigma(Q)}{1-\bar\sigma(Q)} \le \frac{\bar\sigma(Q)}{1-\gamma}.$$
Solution 2.11. The aim of this question is to construct a rational additive perturbation $A$ of minimum norm such that
$$I - AK(I-GK)^{-1}$$
is singular at $\omega_0$. The frequency point $\omega_0$ is selected to be $\omega_0 = \arg\max_\omega \bar\sigma\big(K(I-GK)^{-1}(j\omega)\big)$. If $K(I-GK)^{-1}(j\omega_0)$ has singular value decomposition
$$K(I-GK)^{-1}(j\omega_0) = \sum_{i=1}^2 \sigma_i v_i u_i^*,$$
then a constant complex perturbation with the correct properties is given by
$$A = \frac{1}{\sigma_1}u_1v_1^*,$$
since $\|A\| = \frac{1}{\sigma_1}$. To realize this as a physical system, we set
$$v_1^* = \begin{bmatrix} r_1e^{i\theta_1} & r_2e^{i\theta_2} \end{bmatrix} \quad\text{and}\quad u_1 = \begin{bmatrix} r_3e^{i\phi_1} \\ r_4e^{i\phi_2} \end{bmatrix},$$
in which the signs of the $r_i$'s are chosen to make all the angles positive. We then select the $\alpha_i$'s and $\beta_i$'s, both nonnegative, so that the phase of
$$\frac{j\omega_0 - \alpha_i}{j\omega_0 + \alpha_i}$$
is given by $\theta_i$ and the phase of
$$\frac{j\omega_0 - \beta_i}{j\omega_0 + \beta_i}$$
is given by $\phi_i$. The perturbation is then given by
$$A = \frac{1}{\sigma_1}\begin{bmatrix} r_3\Big(\dfrac{s-\beta_1}{s+\beta_1}\Big) \\[2mm] r_4\Big(\dfrac{s-\beta_2}{s+\beta_2}\Big) \end{bmatrix}\begin{bmatrix} r_1\Big(\dfrac{s-\alpha_1}{s+\alpha_1}\Big) & r_2\Big(\dfrac{s-\alpha_2}{s+\alpha_2}\Big) \end{bmatrix}.$$
To find the pole locations for the allpass functions we argue as follows: writing
$$re^{i\theta} = x + iy,$$
we require
$$\frac{x+iy}{\pm\sqrt{x^2+y^2}} = \frac{j\omega_0-\alpha}{j\omega_0+\alpha}.$$
This gives
$$(x+iy)(j\omega_0+\alpha) = \pm\sqrt{x^2+y^2}\,(j\omega_0-\alpha)$$
and equating real parts yields
$$\alpha = \frac{y\omega_0}{x \pm \sqrt{x^2+y^2}},$$
in which the sign is selected to ensure $\alpha \ge 0$. If $\alpha_i \ge 0$ and $\beta_i \ge 0$, $A$ will be stable.
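The pole-placement formula can be sanity-checked numerically; the target numbers $x$, $y$ and $\omega_0$ below are arbitrary.

```python
import math

# Spot-check of the formula above: given a unit-modulus target
# (x + iy)/r with r = +/- sqrt(x^2 + y^2), the choice alpha = y*w0/(x + r)
# (sign of r picked so alpha >= 0) makes the allpass factor
# (j w0 - alpha)/(j w0 + alpha) equal to the target at s = j w0.
w0 = 3.0
x, y = -0.6, 0.8

r = math.hypot(x, y)
alpha = y * w0 / (x + r)
if alpha < 0:          # flip the sign of r to make alpha nonnegative
    r = -r
    alpha = y * w0 / (x + r)

target = complex(x, y) / r
achieved = (1j * w0 - alpha) / (1j * w0 + alpha)
print(target, achieved)  # the two agree
```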
These ideas are implemented in the following MATLAB code. (MATLAB is a registered trademark of The MathWorks, Inc.)
%
% Enter the transfer function and find a state-space model for it
%
d=[1 3 2];
num=[0 -47 2;0 -42 0;0 56 0; 0 50 2];
[a,b,c,d]=tfm2ss(num,d,2,2);
%
% Find the frequency response singular values of (I - GK)
%
w=logspace(-2,3,100);
[sv]=sigma(a,b,c,eye(2)+d,1,w);
svp=log10(sv(2,:));
%
% Find K(I - GK)^-1
%
[af,bf,cf,df]=feedbk(a,b,c,d,2);
%
% Find the frequency response singular values of K(I - GK)^-1
%
w=logspace(-2,3,100);
[sv]=sigma(af,bf,cf,df,1,w);
svp2=-log10(sv(1,:));
semilogx(w,svp,w,svp2)
grid
pause
%
% Find the singular values and singular vectors of K(I - GK)^-1 at s = j3
% with K = I
%
wp=3
ac=a-b*inv(eye(2)+d)*c;
bc=b*inv(eye(2)+d);
cc=inv(eye(2)+d)*c;
dd=-inv(eye(2)+d);
g=dd+cc*inv(j*wp*eye(4)-ac)*bc;
[u,s,v]=svd(g);
zz=u;
u1=zz(1,:);
v1=v(:,1);
s2=1/s(1,1);
%
% Find the constant for the first allpass function
%
x=real(u1(1,1));
y=imag(u1(1,1));
%
% Select the sign of r1 so that alpha_1 is positive
%
r1=-abs(u1(1,1))
alp1=y*wp/(x+r1)
pause
%
% Find the constant for the second allpass function
%
x=real(u1(1,2));
y=imag(u1(1,2));
%
% Select the sign of r2 so that alpha_2 is positive
%
r2=-abs(u1(1,2))
alp2=y*wp/(x+r2)
pause
%
% Assemble the first part of the perturbation
%
aper=[-alp1 0;0 -alp2];
bper=[-2*alp1*r1*s2 0 ;0 -2*alp2*r2*s2];
cper=[1 1];
dper=[s2*r1 s2*r2];
%
% Find the constant for the third allpass function
%
x=real(v1(1,1));
y=imag(v1(1,1));
%
% Select the sign of r3 so that alpha_3 is positive
%
r3=abs(v1(1,1))
alp3=y*wp/(x+r3)
pause
%
% Find the constant for the fourth allpass function
%
x=real(v1(2,1));
y=imag(v1(2,1));
%
% Select the sign of r4 so that alpha_4 is positive
%
r4=-abs(v1(2,1))
alp4=y*wp/(x+r4)
pause
%
% Assemble the second part of the perturbation
%
aper1=[-alp3 0;0 -alp4];
bper1=[1;1];
cper1=[-2*alp3*r3 0; 0 -2*alp4*r4];
dper1=[r3; r4];
%
% Assemble the full perturbation
%
adel=[aper1 bper1*cper;zeros(2,2) aper];
bdel=[bper1*dper;bper];
cdel=[cper1 dper1*cper];
ddel=dper1*dper;
%
% Plot the perturbation's frequency response to check that it is allpass
%
[sv]=sigma(adel,bdel,cdel,ddel,1,w);
loglog(w,sv(1,:))
grid
pause
%
% Check results by assembling and plotting I - A*K(I - GK)^-1
%
at=a-b*inv(eye(2)+d)*c;
bt=b*inv(eye(2)+d);
ct=inv(eye(2)+d)*c;
dt=-inv(eye(2)+d);
w=logspace(0,1,400);
[ae,be,ce,de]=series(at,bt,ct,dt,adel,bdel,cdel,ddel);
[sv]=sigma(ae,be,-ce,eye(2)-de,1,w);
loglog(w,sv)
grid
%
% As a last check, find the poles of the perturbed closed
% loop system
%
[A,B,C,D]=addss(a,b,c,d,adel,bdel,cdel,ddel);
eig(A-B*inv(eye(2)+D)*C)
These are:
0.0000
0.0000 + 3.0000i
0.0000 - 3.0000i
-2.0000
-1.0000
-0.0587
-0.0616
-0.0382
You will note that, in this case, the perturbation has an unobservable mode at the origin which is finding its way into the closed-loop pole set.
Solution 2.12. It is immediate from Figure 2.13 that
$$y = G(I-KG)^{-1}Rr,$$
so
$$r - y = (I-GK)^{-1}\big(I - G(K+R)\big)r.$$
We can thus argue
$$\underline\sigma\Big((I-GK)^{-1}\big(I - G(K+R)\big)\Big) \ge \frac{\underline\sigma\big(I - G(K+R)\big)}{\bar\sigma(I-GK)} \ge \frac{\underline\sigma\big(I - G(K+R)\big)}{1+\bar\sigma(GK)}.$$
Solution 2.13. Suppose that $\bar\sigma(S) < 1 - \bar\sigma(GKS)$. Then
$$\bar\sigma(S) < 1 - \bar\sigma(GKS) \;\Rightarrow\; \bar\sigma(S) < \underline\sigma(I-GKS) \;\Rightarrow\; \bar\sigma(S)\,\bar\sigma\big((I-GKS)^{-1}\big) < 1 \;\Rightarrow\; \bar\sigma\big(S(I-GKS)^{-1}\big) < 1.$$
Conversely,
$$\bar\sigma\big(S(I-GKS)^{-1}\big) < 1 \;\Rightarrow\; \bar\sigma(S)\,\underline\sigma\big((I-GKS)^{-1}\big) < 1 \;\Rightarrow\; \bar\sigma(S) < \bar\sigma(I-GKS) \;\Rightarrow\; \bar\sigma(S) < 1 + \bar\sigma(GKS).$$
Solution 2.14. It follows from Figure 2.4 that
$$y_c = G_tK(I - G_tK)^{-1}r.$$
Therefore
$$r + y_c = \big(I + G_tK(I-G_tK)^{-1}\big)r = (I-G_tK)^{-1}r = \big(I - (I+\Delta_2)^{-1}GK\big)^{-1}r = (I+\Delta_2-GK)^{-1}(I+\Delta_2)r.$$
This means that
$$\underline\sigma\big((I-G_tK)^{-1}\big)(j\omega) = \underline\sigma\big((I+\Delta_2-GK)^{-1}(I+\Delta_2)\big)(j\omega) \ge \frac{\underline\sigma(I+\Delta_2)}{\bar\sigma(I+\Delta_2-GK)}(j\omega) \ge \frac{1-\delta(j\omega)}{\bar\sigma(I-GK)(j\omega)+\delta(j\omega)} = \frac{\underline\sigma(S)(j\omega)\big(1-\delta(j\omega)\big)}{1+\delta(j\omega)\underline\sigma(S)(j\omega)},$$
in which $\delta(j\omega) = \bar\sigma\big(\Delta_2(j\omega)\big)$ and $S = (I-GK)^{-1}$.
Solution 2.15. The solution to this problem is similar to the previous one, so we simply present annotated working MATLAB code.
%
% Enter the batch reactor model...
%
a=[ 1.3800 -0.2077 6.7150 -5.6760;
-0.5814 -4.2900 0 0.6750;
1.0670 4.2730 -6.6540 5.8930;
0.0480 4.2730 1.3430 -2.1040]
b=[ 0.0 0.0 ;
5.6790 0;
1.1360 -3.1460;
1.1360 0]
c=[1 0 1 -1;
0 1 0 0]
d=[ 0.0 0.0;
0.0 0.0]
%
% and now the controller
%
ac=[0 0;0 0];
bc=[1 0;0 1];
cc=[0 2;-8 0];
dc=[0 2;-5 0];
%
% Evaluate the frequency response of 1/(GK(I - GK)^-1)(jw)
%
w=logspace(-2,3,100);
[A,B,C,D]=series(ac,bc,cc,dc,a,b,c,d);
[af,bf,cf,df]=feedbk(A,B,C,D,2);
[sv]=sigma(af,bf,cf,df,1,w);
svp1=-log10(sv(1,:));
semilogx(w,svp1)
grid
pause
%
% Find GK(I - GK)^-1 at s = j2.5
%
wp=2.5
g=df+cf*inv(j*wp*eye(6)-af)*bf;
[u,s,v]=svd(g);
u1=u(:,1)
v1=v(:,1)
s2=1/s(1,1);
%
% Evaluate the first pair of allpass function constants
%
x=real(v1(1,1));
y=imag(v1(1,1));
r1=abs(v1(1,1))
alp1=wp*y/(r1+x)
%
x=real(v1(2,1));
y=imag(v1(2,1));
r2=abs(v1(2,1))
alp2=wp*y/(r2+x)
%
% Assemble the rst part of the perturbation
%
aper=[-alp1 0;0 -alp2];
bper=[1;1];
cper=[-2*alp1*r1*s2 0;0 -2*alp2*r2*s2];
dper=[s2*r1;s2*r2];
%
% Evaluate the second pair of allpass function constants
%
x=real(u1(1,1));
y=-imag(u1(1,1));
r3=abs(u1(1,1))
alp3=wp*y/(r3+x)
%
x=real(u1(2,1));
y=-imag(u1(2,1));
r4=-abs(u1(2,1))
alp4=wp*y/(r4+x)
%
% Assemble the second part of the perturbation
%
aper1=[-alp3 0;0 -alp4];
bper1=[-2*alp3*r3 0;0 -2*alp4*r4];
cper1=[1 1];
dper1=[r3 r4];
%
% Put the whole perturbation together
%
adel=[aper bper*cper1;zeros(2,2) aper1];
bdel=[bper*dper1;bper1];
cdel=[cper dper*cper1];
ddel=dper*dper1;
%
% Plot the frequency response of the perturbation to check that
% it is allpass
%
w=logspace(0,1,400);
[sv]=sigma(adel,bdel,cdel,ddel,1,w);
loglog(w,sv(1,:))
grid
pause
%
% Assemble and plot I - Delta*GK(I - GK)^-1 and check that it is
% singular at w = 2.5
%
[ae,be,ce,de]=series(af,bf,cf,df,adel,bdel,cdel,ddel);
[sv]=sigma(ae,be,-ce,eye(2)-de,1,w);
loglog(w,sv)
grid
%
% As a last check, find the closed-loop poles of the perturbed system
%
ach11=af+bf*inv(eye(2)-ddel*df)*ddel*cf;
ach12=bf*inv(eye(2)-ddel*df)*cdel;
ach21=bdel*inv(eye(2)-df*ddel)*cf;
ach22= adel+bdel*inv(eye(2)-df*ddel)*df*cdel;
ach=[ach11 ach12; ach21 ach22];
eig(ach)
These are:
0.0
-14.6051
-10.8740
-0.0000 + 2.5000i
-0.0000 - 2.5000i
-3.2794
-2.3139
-0.6345 + 0.3534i
-0.6345 - 0.3534i
-0.7910
Solutions to Problems in Chapter 3
Solution 3.1.

1. For any finite $T > 0$ and any $0 < \epsilon < T$,
$$\int_\epsilon^T |f(t)|^2\,dt = \int_\epsilon^T t^{2\beta}\,dt = \begin{cases} \frac{1}{2\beta+1}\big(T^{2\beta+1}-\epsilon^{2\beta+1}\big) & \text{for } \beta \neq -\frac12 \\[1mm] \log(T/\epsilon) & \text{for } \beta = -\frac12. \end{cases}$$
If $\beta > -\frac12$, then $\|f\|_{2,[0,T]}^2 = \frac{T^{2\beta+1}}{2\beta+1} < \infty$ for any finite $T$. If $\beta \le -\frac12$, then $f$ is not in $\mathcal{L}_2[0,T]$ for any $T$.

2. For any finite $T > 0$,
$$\int_0^T |g(t)|^2\,dt = \int_0^T (t+1)^{2\beta}\,dt = \begin{cases} \frac{1}{2\beta+1}\big((T+1)^{2\beta+1}-1\big) & \text{for } \beta \neq -\frac12 \\[1mm] \log(T+1) & \text{for } \beta = -\frac12. \end{cases}$$
Hence $g \in \mathcal{L}_2[0,T]$ for any finite $T$, which is to say $g \in \mathcal{L}_{2e}$. The integral $\int_0^T |g(t)|^2\,dt$ remains finite as $T \to \infty$ if and only if $\beta < -\frac12$, so this is a necessary and sufficient condition for $g \in \mathcal{L}_2[0,\infty)$.
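The closed-form antiderivative used above is easily confirmed against a midpoint Riemann sum; the values of $\beta$, $\epsilon$ and $T$ below are arbitrary test choices.

```python
# Compare the closed form (T^(2b+1) - eps^(2b+1))/(2b+1), b != -1/2, with a
# direct Riemann sum of t^(2b) over [eps, T].
def riemann(f, lo, hi, n=200000):
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

beta = -0.25
eps, T = 0.01, 2.0
closed = (T ** (2 * beta + 1) - eps ** (2 * beta + 1)) / (2 * beta + 1)
approx = riemann(lambda t: t ** (2 * beta), eps, T)
print(closed, approx)  # the two agree to several digits
```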
Solution 3.2. $XX^{-1} = I$. Therefore
$$0 = \frac{d}{dt}\big(X(t)X^{-1}(t)\big) = \Big(\frac{d}{dt}X(t)\Big)X^{-1}(t) + X(t)\frac{d}{dt}X^{-1}(t).$$
The result follows upon multiplying on the left by $X^{-1}(t)$.
Solution 3.3.

1. Let $v \in \mathbb{R}^n$ and consider the differential equation
$$\dot x(t) = A(t)x(t), \qquad x(t_1) = v.$$
The unique solution is $x(t) = \Phi(t,t_1)v$, for all $t$. Choose any real $\sigma$ and consider the differential equation
$$\dot y(t) = A(t)y(t), \qquad y(\sigma) = \Phi(\sigma,t_1)v,$$
which has unique solution $y(t) = \Phi(t,\sigma)\Phi(\sigma,t_1)v$ for all $t$. Since $y(\sigma) = x(\sigma)$, it follows from the uniqueness of solutions to linear differential equations that $y(t) = x(t)$ for all $t$. Therefore $\Phi(t,t_1)v = \Phi(t,\sigma)\Phi(\sigma,t_1)v$ for all $t$, $\sigma$, $t_1$ and all $v$. Consequently, $\Phi(t_2,t_1) = \Phi(t_2,\sigma)\Phi(\sigma,t_1)$ for all $t_2$, $\sigma$, $t_1$.

2. From Item 1, $\Phi(\sigma,t)\Phi(t,\sigma) = \Phi(\sigma,\sigma) = I$. Hence $\Phi^{-1}(t,\sigma) = \Phi(\sigma,t)$.

3.
$$\frac{d}{d\sigma}\Phi(t,\sigma) = \frac{d}{d\sigma}\Phi^{-1}(\sigma,t) = -\Phi^{-1}(\sigma,t)\Big[\frac{d}{d\sigma}\Phi(\sigma,t)\Big]\Phi^{-1}(\sigma,t) = -\Phi^{-1}(\sigma,t)A(\sigma)\Phi(\sigma,t)\Phi^{-1}(\sigma,t) = -\Phi^{-1}(\sigma,t)A(\sigma) = -\Phi(t,\sigma)A(\sigma).$$
Solution 3.4.

1.
$$\int_{-\infty}^\infty \bar f(\sigma+j\omega)f(\sigma+j\omega)\,d\omega = \int_{-\infty}^\infty \frac{d\omega}{(\sigma-a)^2+\omega^2} = \frac{1}{\sigma-a}\Big[\tan^{-1}\Big(\frac{\omega}{\sigma-a}\Big)\Big]_{-\infty}^\infty = \frac{\pi}{\sigma-a}.$$
Alternatively,
$$\int_{-\infty}^\infty \bar f(\sigma+j\omega)f(\sigma+j\omega)\,d\omega = \frac{1}{2(\sigma-a)}\int_{-\infty}^\infty \Big(\frac{1}{j\omega+(\sigma-a)} - \frac{1}{j\omega-(\sigma-a)}\Big)\,d\omega = \frac{1}{2(\sigma-a)j}\oint_{D_R}\Big(\frac{1}{s-(\sigma-a)} - \frac{1}{s+(\sigma-a)}\Big)\,ds = \frac{\pi}{\sigma-a}$$
by Cauchy's integral formula. (The contour $D_R$ is a standard semicircular contour in the right-half plane of radius $R > \sigma-a$ and is traversed in an anticlockwise direction.) The result follows, since $\frac{\pi}{\sigma-a}$ is maximized by setting $\sigma = 0$.

2. $f$ satisfies
$$\dot x(t) = ax(t), \quad x(0) = 1, \qquad f(t) = x(t).$$
Since $a < 0$, $f \in \mathcal{L}_2[0,\infty)$. Furthermore, the observability gramian $q$, which is the solution to
$$2aq + 1 = 0,$$
is given by $q = -\frac{1}{2a}$.
Solution 3.5.

1. Choose an arbitrary $x_0 \in \mathbb{R}^n$ and let $x(t)$ be the solution to
$$\dot x(t) = Ax, \quad x(0) = x_0, \qquad z(t) = Cx.$$
Noting that $\lim_{t\to\infty}x(t) = 0$, we obtain
$$x_0'Qx_0 = -\int_0^\infty \frac{d}{dt}(x'Qx)\,dt = -\int_0^\infty \big(\dot x'Qx + x'QAx\big)\,dt = \int_0^\infty z'z\,dt \ge 0.$$

2. Let $Ax = \lambda x$. Then
$$0 = x^*\big(QA + A'Q + C'C\big)x = (\lambda + \bar\lambda)x^*Qx + \|Cx\|^2.$$
Since $x^*Qx \ge 0$, it follows that either (a) $\|Cx\| = 0$ or (b) $\lambda + \bar\lambda < 0$. That is, the mode is either asymptotically stable or is unobservable.
Solution 3.6.

1. Let $z_i = G_iw$. Then $z = z_1 + z_2$ is the output of $G_1 + G_2$ and
$$z = C_1x_1 + C_2x_2 + (D_1+D_2)w.$$

2. The input to $G_2$ is $z_1 = C_1x_1 + D_1w$. Therefore
$$\dot x_2 = A_2x_2 + B_2(C_1x_1 + D_1w) = B_2C_1x_1 + A_2x_2 + B_2D_1w$$
and the output of $G_2$ is
$$z = C_2x_2 + D_2(C_1x_1 + D_1w) = D_2C_1x_1 + C_2x_2 + D_2D_1w.$$

3.
$$\begin{bmatrix}\dot x_1 \\ w_1\end{bmatrix} = \begin{bmatrix}A_1 & B_1 \\ 0 & I\end{bmatrix}\begin{bmatrix}x_1 \\ w_1\end{bmatrix}; \qquad \begin{bmatrix}x_1 \\ y_1\end{bmatrix} = \begin{bmatrix}I & 0 \\ C_1 & D_1\end{bmatrix}\begin{bmatrix}x_1 \\ w_1\end{bmatrix}.$$
Hence
$$\begin{bmatrix}\dot x_1 \\ w_1\end{bmatrix} = \begin{bmatrix}A_1 & B_1 \\ 0 & I\end{bmatrix}\begin{bmatrix}I & 0 \\ C_1 & D_1\end{bmatrix}^{-1}\begin{bmatrix}x_1 \\ y_1\end{bmatrix}.$$

4.
$$\begin{bmatrix}\dot x \\ z_1 \\ z_2\end{bmatrix} = \begin{bmatrix}A & B \\ C_1 & D_1 \\ C_2 & D_2\end{bmatrix}\begin{bmatrix}x \\ w\end{bmatrix}.$$
Hence
$$\begin{bmatrix}x \\ z_1\end{bmatrix} = \begin{bmatrix}I & 0 \\ C_1 & D_1\end{bmatrix}\begin{bmatrix}x \\ w\end{bmatrix} \quad\text{and}\quad \begin{bmatrix}\dot x \\ z_2\end{bmatrix} = \begin{bmatrix}A & B \\ C_2 & D_2\end{bmatrix}\begin{bmatrix}I & 0 \\ C_1 & D_1\end{bmatrix}^{-1}\begin{bmatrix}x \\ z_1\end{bmatrix}.$$
A more pedestrian approach:
$$G_1^{-1} \stackrel{s}{=} \begin{bmatrix}A - BD_1^{-1}C_1 & BD_1^{-1} \\ -D_1^{-1}C_1 & D_1^{-1}\end{bmatrix}.$$
Hence, by the series formula,
$$G_2G_1^{-1} \stackrel{s}{=} \begin{bmatrix}A - BD_1^{-1}C_1 & 0 & BD_1^{-1} \\ -BD_1^{-1}C_1 & A & BD_1^{-1} \\ -D_2D_1^{-1}C_1 & C_2 & D_2D_1^{-1}\end{bmatrix}.$$
Now apply the state transformation $T = \begin{bmatrix}I & 0 \\ -I & I\end{bmatrix}$ to obtain
$$G_2G_1^{-1} \stackrel{s}{=} \begin{bmatrix}A - BD_1^{-1}C_1 & 0 & BD_1^{-1} \\ 0 & A & 0 \\ C_2 - D_2D_1^{-1}C_1 & C_2 & D_2D_1^{-1}\end{bmatrix} \stackrel{s}{=} \begin{bmatrix}A - BD_1^{-1}C_1 & BD_1^{-1} \\ C_2 - D_2D_1^{-1}C_1 & D_2D_1^{-1}\end{bmatrix}.$$
The final step follows since the states associated with $A$ are uncontrollable.
Solution 3.7.

1. Since any state-space system has finite $\mathcal{L}_2[0,T]$ induced norm, we may set $\gamma_2 = \|G\|_{[0,T]} < \infty$. Since $G^{-1}$ has realization $(A - BD^{-1}C,\ BD^{-1},\ -D^{-1}C,\ D^{-1})$, it too has finite $\mathcal{L}_2[0,T]$ induced norm, and we may take $\gamma_1 = 1/\|G^{-1}\|_{[0,T]}$.

2. Take $\gamma_2 = \|G\|_\infty$ and $\gamma_1 = 1/\|G^{-1}\|_\infty$.
Solution 3.8.

1. Using the allpass equations $D'D = I$, $C'D = -QB$ and $QA + A'Q + C'C = 0$,
$$G^\sim(s)G(s) = \big(D' + B'(\bar sI - A')^{-1}C'\big)\big(D + C(sI-A)^{-1}B\big)$$
$$= D'D + D'C(sI-A)^{-1}B + B'(\bar sI-A')^{-1}C'D + B'(\bar sI-A')^{-1}C'C(sI-A)^{-1}B$$
$$= I - B'Q(sI-A)^{-1}B - B'(\bar sI-A')^{-1}QB + B'(\bar sI-A')^{-1}C'C(sI-A)^{-1}B$$
$$= I + B'(\bar sI-A')^{-1}\big(C'C - (\bar sI-A')Q - Q(sI-A)\big)(sI-A)^{-1}B$$
$$= I - (s+\bar s)B'(\bar sI-A')^{-1}Q(sI-A)^{-1}B.$$
The conclusion that $G^\sim(s)G(s) \le I$ if $Q \ge 0$ and $s + \bar s \ge 0$ is immediate.

2. Since $D'D = I$, there exists a matrix $D_e$ such that $\begin{bmatrix}D & D_e\end{bmatrix}$ is a square orthogonal matrix: the columns of $D_e$ are an orthonormal basis for the orthogonal complement of the range of $D$. To show that $B_e = -Q^\#C'D_e$, in which $Q^\#$ denotes the Moore-Penrose pseudo-inverse, satisfies $D_e'C + B_e'Q = 0$, we need to show that $\ker Q \subseteq \ker C$. Let $Qx = 0$. Then $0 = x'(QA + A'Q + C'C)x = \|Cx\|^2$, giving $Cx = 0$.
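The identity in part 1 can be illustrated with a scalar example of my choosing (not from the text): $a(s) = (s-1)/(s+1)$ has realization $A=-1$, $B=1$, $C=-2$, $D=1$, for which $QA + A'Q + C'C = 0$ gives $Q = 2$ and $C'D = -QB$ holds.

```python
# Scalar check of  conj(a(s)) a(s) = 1 - (s + conj(s)) B Q B /
# ((conj(s) - A)(s - A))  for the assumed example a(s) = (s-1)/(s+1).
A, B, C, D, Q = -1.0, 1.0, -2.0, 1.0, 2.0

def a(s):
    return D + C * B / (s - A)

def rhs(s):
    return 1 - (s + s.conjugate()) * B * Q * B / ((s.conjugate() - A) * (s - A))

samples = [0.5 + 2j, 3.0 - 1j, 4j]
gaps = [abs(abs(a(s)) ** 2 - rhs(s)) for s in samples]
print(gaps)  # all numerically zero
```

On the imaginary axis ($s + \bar s = 0$) the right-hand side is 1, recovering the allpass property $|a(j\omega)| = 1$.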
Solution 3.9. Suppose, without loss of generality, that
$$A = \begin{bmatrix}A_1 & 0 & 0 \\ 0 & A_2 & 0 \\ 0 & 0 & A_3\end{bmatrix}, \qquad C = \begin{bmatrix}C_1 & C_2 & 0\end{bmatrix},$$
in which $A_1$ and $-A_2$ are asymptotically stable and $A_3$ has only imaginary axis eigenvalues. Let
$$0 = Q_1A_1 + A_1'Q_1 + C_1'C_1, \qquad 0 = Q_2(-A_2) + (-A_2)'Q_2 + C_2'C_2,$$
which exist by Theorem 3.1.1. Set
$$Q = \begin{bmatrix}Q_1 & 0 & 0 \\ 0 & -Q_2 & 0 \\ 0 & 0 & 0\end{bmatrix}.$$
Solution 3.10.

1. Consider $f(x) = |x|$, which is not differentiable at the origin. Then $\big||x_1| - |x_2|\big| \le |x_1 - x_2|$. It follows that $\gamma(f) = 1$.

2. The inequality
$$\Big|\frac{f(x_1) - f(x_2)}{x_1 - x_2}\Big| \le \gamma(f)$$
shows that $|\frac{df}{dx}| \le \gamma(f)$. On the other hand, $f(x_2) = f(x_1) + (x_2-x_1)\frac{df}{dx}\big|_{x=x_1} + O\big((x_2-x_1)^2\big)$ shows that $\sup_x|\frac{df}{dx}| = \gamma(f)$. If $f$ is differentiable except at isolated points $x_i$, then $\sup_{x\neq x_i}|\frac{df}{dx}| = \gamma(f)$.

3. See Figure 3.1.
Solution 3.11. Notice that
$$(XY')_{ii} = \sum_j x_{ij}y_{ij}.$$
Therefore,
$$\mathrm{trace}(XY') = \sum_{i,j}x_{ij}y_{ij}.$$
[Figure 3.1: Illustration showing that $\gamma(f) < 1$ implies $x = f(x)$ has a solution; the graph of $y = f(x)$ lies between the lines $y = \gamma(f)x + f(0)$ and $y = -\gamma(f)x + f(0)$, so it must cross the line $y = x$.]
1.
$$\mathrm{trace}(XY') = \sum_{i,j}x_{ij}y_{ij} = \sum_{i,j}y_{ij}x_{ij} = \mathrm{trace}(YX').$$

2. Follows from the above.

3. Let $\|X\| = \sqrt{\mathrm{trace}(XX')}$.

(a) $\sqrt{\mathrm{trace}(XX')} \ge 0$ is obvious, and $\sqrt{\mathrm{trace}(XX')} = 0 \iff x_{ij} = 0$ for all $i,j$. That is, $\sqrt{\mathrm{trace}(XX')} = 0 \iff X = 0$.

(b) $\sqrt{\mathrm{trace}\big(\lambda X(\lambda X)'\big)} = \sqrt{\sum_{i,j}\lambda^2x_{ij}^2} = |\lambda|\sqrt{\mathrm{trace}(XX')}$.

(c)
$$\mathrm{trace}\big((X+Y)(X+Y)'\big) = \sum_{i,j}(x_{ij}+y_{ij})^2 = \sum_{i,j}\big(x_{ij}^2 + 2x_{ij}y_{ij} + y_{ij}^2\big) = \mathrm{trace}(XX') + \mathrm{trace}(YY') + 2\sum_{i,j}x_{ij}y_{ij}$$
$$\le \mathrm{trace}(XX') + \mathrm{trace}(YY') + 2\sqrt{\sum_{i,j}x_{ij}^2}\sqrt{\sum_{i,j}y_{ij}^2} \quad\text{by Cauchy-Schwarz}$$
$$= \Big(\sqrt{\mathrm{trace}(XX')} + \sqrt{\mathrm{trace}(YY')}\Big)^2.$$
Solution 3.12. Consider $g = \frac{1}{s-a_1}$ and $h = \frac{1}{s-a_2}$, with $a_i < 0$. Then
$$\|g\|_2\|h\|_2 = \frac{1}{2\sqrt{a_1a_2}}.$$
Also,
$$hg \stackrel{s}{=} \begin{bmatrix}a_1 & 0 & 1 \\ 1 & a_2 & 0 \\ 0 & 1 & 0\end{bmatrix}.$$
The observability gramian of this realization is given by
$$Q = \begin{bmatrix}-\dfrac{1}{2a_1a_2(a_1+a_2)} & \dfrac{1}{2a_2(a_1+a_2)} \\[2mm] \dfrac{1}{2a_2(a_1+a_2)} & -\dfrac{1}{2a_2}\end{bmatrix}.$$
Hence
$$\|hg\|_2 = \frac{1}{\sqrt{-2a_1a_2(a_1+a_2)}}.$$
It follows that $\|hg\|_2 > \|g\|_2\|h\|_2$ for any $a_1$, $a_2$ such that $-2 < a_1 + a_2$. For example, choose $a_1 = a_2 = -1/2$. Then $\|g\|_2^2 = 1$ and $\|g^2\|_2 = \sqrt 2$.
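The concluding example can be confirmed in the time domain: for $a_1 = a_2 = -1/2$, the impulse responses of $g$ and $g^2$ are $e^{-t/2}$ and $te^{-t/2}$, so $\|g\|_2^2 = \int_0^\infty e^{-t}\,dt = 1$ and $\|g^2\|_2^2 = \int_0^\infty t^2e^{-t}\,dt = 2$.

```python
import math

# Riemann-sum confirmation that ||g||_2^2 = 1 and ||g^2||_2^2 = 2 for
# g(s) = 1/(s + 1/2); the truncation at t = 50 leaves a negligible tail.
def riemann(f, hi=50.0, n=300000):
    h = hi / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

norm_g_sq = riemann(lambda t: math.exp(-t))
norm_g2_sq = riemann(lambda t: t * t * math.exp(-t))
print(norm_g_sq, norm_g2_sq)
```

So $\|g^2\|_2 = \sqrt 2 > \|g\|_2\|g\|_2 = 1$, as claimed.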
Solution 3.13.

1.
$$\|GB\| = \sup_{w\neq 0}\frac{\|GBw\|_{S_2}}{\|w\|_{S_0}} = \sup_{z\neq 0}\frac{\|Gz\|_{S_2}}{\|z\|_{S_1}} = \|G\|.$$

2. Take the infinite horizon 2-norm:
$$\|AG\|_2^2 = \frac{1}{2\pi}\int_{-\infty}^\infty \mathrm{trace}\big(G(j\omega)^*A(j\omega)^*A(j\omega)G(j\omega)\big)\,d\omega = \frac{1}{2\pi}\int_{-\infty}^\infty \mathrm{trace}\big(G(j\omega)^*G(j\omega)\big)\,d\omega = \|G\|_2^2,$$
since $A(j\omega)^*A(j\omega) = I$.
Solution 3.14.

Sufficiency. Suppose $Z \in \mathcal{RH}_\infty$ is strictly positive real. The condition $Z \in \mathcal{RH}_\infty$ implies that $Z$ has finite incremental gain. Equation (3.9.1) gives
$$\inf_{\sigma_0>0}\lambda_{\min}\big(Z(\sigma_0+j\omega_0) + Z^*(\sigma_0+j\omega_0)\big) \ge 2\epsilon,$$
which implies that $Z(j\omega) + Z^*(j\omega) \ge 2\epsilon I$ for all real $\omega$.

Suppose the system is relaxed at time $t_0$, let $w$ be any signal in $\mathcal{L}_2[t_0,T]$ and let $z = Zw$. Define the $\mathcal{L}_2(-\infty,\infty)$ signals $w_e$ and $z_e$ by
$$w_e(t) = \begin{cases}w(t) & \text{for } t\in[t_0,T] \\ 0 & \text{otherwise}\end{cases} \qquad\text{and}\qquad z_e(t) = \begin{cases}z(t) & \text{for } t \ge t_0 \\ 0 & \text{otherwise.}\end{cases}$$
Then
$$\langle z,w\rangle_{[t_0,T]} = \int_{t_0}^T w'(t)z(t)\,dt = \int_{-\infty}^\infty w_e'(t)z_e(t)\,dt = \frac{1}{2\pi}\int_{-\infty}^\infty \hat w_e^*(j\omega)\hat z_e(j\omega)\,d\omega = \frac{1}{2\pi}\int_{-\infty}^\infty \hat w_e^*(j\omega)Z(j\omega)\hat w_e(j\omega)\,d\omega$$
$$= \frac{1}{4\pi}\int_{-\infty}^\infty \hat w_e^*(j\omega)\big(Z(j\omega)+Z^*(j\omega)\big)\hat w_e(j\omega)\,d\omega \ge \frac{\epsilon}{2\pi}\int_{-\infty}^\infty \hat w_e^*(j\omega)\hat w_e(j\omega)\,d\omega = \epsilon\int_{-\infty}^\infty w_e'(t)w_e(t)\,dt = \epsilon\int_{t_0}^T w'(t)w(t)\,dt.$$
In the above, $\hat w_e$ and $\hat z_e$ denote the Fourier transforms of $w_e$ and $z_e$. Consequently, $Z$ defines an incrementally strictly passive system.

Necessity. Suppose the system defined by $Z$ has finite incremental gain and is incrementally strictly passive. The finite incremental gain assumption implies that $Z \in \mathcal{RH}_\infty$.

Notice that for any complex numbers $z$ and $w$, $\mathrm{Re}(z)\mathrm{Re}(w) = \frac12\mathrm{Re}(\bar wz + wz)$. Choose $s_0 = \sigma_0 + j\omega_0$ with $\sigma_0 = \mathrm{Re}(s_0) > 0$ and choose $x \in \mathbb{C}^n$. Consider the input
$$w(t) = \mathrm{Re}\big(xe^{s_0t}\mathbf 1(t-t_0)\big),$$
in which $\mathbf 1(\cdot)$ denotes the unit step function. For $t \ge t_0$, the response to this input is
$$z(t) = \mathrm{Re}\big(Z(s_0)xe^{s_0t}\mathbf 1(t-t_0)\big).$$
Therefore,
$$w'(t)z(t) = \frac12\mathrm{Re}\Big(e^{2\sigma_0t}x^*Z(s_0)x + e^{2s_0t}x'Z(s_0)x\Big).$$
Integrating from $-\infty$ (i.e., letting $t_0 \to -\infty$) to some finite time $T$ we have
$$\int_{-\infty}^T w'(t)z(t)\,dt = \frac12\mathrm{Re}\Big(\frac{1}{2\sigma_0}e^{2\sigma_0T}x^*Z(s_0)x + \frac{1}{2s_0}e^{2s_0T}x'Z(s_0)x\Big).$$
Also,
$$\epsilon\int_{-\infty}^T w'(t)w(t)\,dt = \frac{\epsilon}{2}\mathrm{Re}\Big(\frac{1}{2\sigma_0}e^{2\sigma_0T}x^*x + \frac{1}{2s_0}e^{2s_0T}x'x\Big).$$
Suppose $\epsilon > 0$ is such that
$$\langle z,w\rangle_{(-\infty,T]} \ge \epsilon\|w\|^2_{2,(-\infty,T]}$$
for all finite $T$ (such an $\epsilon$ exists by the assumption that $Z$ is incrementally strictly passive). Then
$$\mathrm{Re}\big(x^*Z(s_0)x - \epsilon x^*x\big) \ge -2\sigma_0e^{-2\sigma_0T}\,\mathrm{Re}\Big(\frac{e^{2s_0T}}{2s_0}\big(x'Z(s_0)x - \epsilon x'x\big)\Big). \tag{3.1}$$
If $\omega_0 = 0$ (i.e., $s_0$ is real), we choose $x$ real and obtain $x'Z(s_0)x - \epsilon x'x \ge 0$. Since $x$ may be any real vector and since $Z^*(s_0) = Z'(s_0)$ for $s_0 \in \mathbb{R}$, we conclude that $Z(s_0) + Z^*(s_0) \ge 2\epsilon I$ for all real $s_0 > 0$.

If $\omega_0 \neq 0$, we notice that $\arg e^{2s_0T} = 2\omega_0T$ takes all values between 0 and $2\pi$ as $T$ varies. This implies that the right-hand side of (3.1) is nonnegative for some values of $T$ (which will depend on the choice of $s_0$ and $x$). Because the left-hand side of (3.1) is independent of $T$, we conclude that $\mathrm{Re}\big(x^*Z(s_0)x - \epsilon x^*x\big) \ge 0$. Consequently,
$$Z(s_0) + Z^*(s_0) \ge 2\epsilon I$$
for all $s_0$ such that $\mathrm{Re}(s_0) > 0$.
Solution 3.15. Since $Z \in \mathcal H_\infty$, the complex function $f(s) = v^*Z(s)v$ is also in $\mathcal H_\infty$ for any (complex) vector $v$. Also,
\[
g(s) = e^{-f(s)}
\]
is analytic and nonzero in the closed right-half plane. It follows from the maximum modulus principle that $\max_{\mathrm{Re}(s)\geq0}|g(s)| = \max_\omega|g(j\omega)|$. Now note that
\[
|g(s)| = e^{-\mathrm{Re}(f(s))}.
\]
Therefore, $\min_{\mathrm{Re}(s)\geq0}\mathrm{Re}\bigl(f(s)\bigr) = \min_\omega\mathrm{Re}\bigl(f(j\omega)\bigr)$. The result follows.
Solution 3.16.

1. The nonsingularity of $Z(s)$ follows from the definition:
\[
Z(s_0)v = 0 \;\Rightarrow\; v^*\bigl(Z^*(s_0) + Z(s_0)\bigr)v = 0,
\]
and it follows that $v = 0$.

2. Since $Z$ is strictly positive real, $D$ is nonsingular. The eigenvalues of $A - BD^{-1}C$ are either (a) zeros of $Z$ or (b) unobservable modes of $(A, C)$ or uncontrollable modes of $(A, B)$. Since $A$ is asymptotically stable, the realization $(A, B, C, D)$ has no uncontrollable or unobservable modes in the closed right-half plane, and any eigenvalue of $A - BD^{-1}C$ which is in the closed right-half plane is a zero of $Z$. Since $Z$ has no zeros in the closed right-half plane, $A - BD^{-1}C$ is asymptotically stable.
Solution 3.17. Notice that
\[
\begin{bmatrix}I + G\\ I - G\end{bmatrix}
\stackrel{s}{=}
\left[\begin{array}{c|c}A & B\\\hline C & I + D\\ -C & I - D\end{array}\right].
\]
Hence
\begin{align*}
(I - G)(I + G)^{-1}
&\stackrel{s}{=}\left[\begin{array}{c|c}A & B\\\hline -C & I - D\end{array}\right]
\left[\begin{array}{c|c}A & B\\\hline C & I + D\end{array}\right]^{-1}\\
&\stackrel{s}{=}\left[\begin{array}{c|c}A - B(I+D)^{-1}C & B(I+D)^{-1}\\\hline
-C - (I-D)(I+D)^{-1}C & (I-D)(I+D)^{-1}\end{array}\right]\\
&\stackrel{s}{=}\left[\begin{array}{c|c}A - B(I+D)^{-1}C & B(I+D)^{-1}\\\hline
-2(I+D)^{-1}C & (I-D)(I+D)^{-1}\end{array}\right].
\end{align*}
Solution 3.18. Consider Figure 3.2 and define
\[
z_2 = \begin{bmatrix}z_{21}\\z_{22}\end{bmatrix};\qquad
w_2 = \begin{bmatrix}w_{21}\\w_{22}\end{bmatrix}.
\]

Figure 3.2: the feedback loop formed by $G$, $K$ and the uncertainty, with inputs $w_1$, $w_{21}$, $w_{22}$ and outputs $z_1$, $z_{21}$, $z_{22}$.

Then
\[
\begin{bmatrix}z_1\\z_2\end{bmatrix} = P\begin{bmatrix}w_1\\w_2\end{bmatrix},
\]
in which
\[
P = \begin{bmatrix}SGK & SG & SGK\\ KS & (I - KG)^{-1} & KS\\ S & SG & S\end{bmatrix},
\]
with $S = (I - GK)^{-1}$. The nominal closed loop is stable if and only if $P$ is stable.

1. The result follows by a direct application of Theorem 3.6.1.

2. By Theorem 3.5.7, the closed loop will be stable provided that $-GKS$ is incrementally strictly passive. By Lemma 3.5.6, this is equivalent to $\bigl\|(I + GKS)(I - GKS)^{-1}\bigr\|_\infty < 1$, which can be simplified to
\[
\bigl\|(I - 2GK)^{-1}\bigr\|_\infty < 1.
\]
Solution 3.19.

1. The condition $\inf_{D\in\mathcal D}\|DGD^{-1}\|_\infty < 1$ implies that there exists a $D \in \mathcal D$ such that $\|DGD^{-1}\|_\infty < 1$. Therefore, there exists a $D \in \mathcal D$ such that $\|D\Delta D^{-1}\|_\infty\|DGD^{-1}\|_\infty < 1$, by virtue of the commutative property of $\mathcal D$. The stability of the closed loop is now immediate from Corollary 3.5.2.

2. For any matrix valued $\Delta_i$, the corresponding block-diagonal entry $D_i$ in $D$ must have the form $dI$, for some scalar transfer function $d \in \mathcal H_\infty$ such that $d^{-1} \in \mathcal H_\infty$. For any $\Delta_i$ that is of the form $\delta I$, the corresponding block-diagonal entry $D_i$ in $D$ must satisfy $D_i, D_i^{-1} \in \mathcal H_\infty$. The other block-entries of $D$ are zero.
Solution 3.20. Firstly note that at least one solution always exists (see Solution 3.9). Also, if $Q_1$ and $Q_2$ are any two solutions, then $X = Q_2 - Q_1$ satisfies
\[
XA + A^*X = 0,
\]
which we may write as
\[
\begin{bmatrix}A & 0\\0 & -A^*\end{bmatrix}\begin{bmatrix}I\\X\end{bmatrix}
= \begin{bmatrix}I\\X\end{bmatrix}A.
\]
But
\[
\begin{bmatrix}A & 0\\0 & -A^*\end{bmatrix}\begin{bmatrix}I\\0\end{bmatrix}
= \begin{bmatrix}I\\0\end{bmatrix}A,
\]
so if $\lambda_i(A) \neq \lambda_j(-A^*)$, which is to say $\lambda_i(A) + \bar\lambda_j(A) \neq 0$, for all $i, j$, then $X = 0$ by the uniqueness properties of eigenvalue decompositions.

Conversely, if $\lambda_i(A) + \bar\lambda_j(A) = 0$ for some $i, j$, we have
\[
\begin{bmatrix}A & 0\\0 & -A^*\end{bmatrix}\begin{bmatrix}I\\0\end{bmatrix}
= \begin{bmatrix}I\\0\end{bmatrix}A
\quad\text{and}\quad
\begin{bmatrix}A & 0\\0 & -A^*\end{bmatrix}\begin{bmatrix}I\\X\end{bmatrix}
= \begin{bmatrix}I\\X\end{bmatrix}A,\ \text{for some } X \neq 0.
\]
Therefore, $Q$ and $Q + X \neq Q$ are two solutions.

To illustrate these nonuniqueness properties, consider $A = \begin{bmatrix}1&0\\0&-1\end{bmatrix}$. Then $XA + A^*X = 0$ has the solution set $X = \alpha\begin{bmatrix}0&1\\1&0\end{bmatrix}$. As another example, if $A = \begin{bmatrix}0&\omega\\-\omega&0\end{bmatrix}$, then $X = I$ is a solution.
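The two nonuniqueness examples above can be verified directly (a minimal numerical sketch of the solution's own examples):

```python
import numpy as np

# A has eigenvalues +1 and -1, so lambda_i(A) + lambda_j(A) = 0 for i != j
A = np.array([[1.0, 0.0],
              [0.0, -1.0]])
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])    # claimed homogeneous solution of XA + A'X = 0
assert np.allclose(X @ A + A.T @ X, 0)

# skew-symmetric example: A + A' = 0, so X = I is a homogeneous solution
w = 2.0
A2 = np.array([[0.0, w],
               [-w, 0.0]])
assert np.allclose(np.eye(2) @ A2 + A2.T @ np.eye(2), 0)
```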
Solution 3.21.

1. Let $Hx = \lambda x$, $x \neq 0$. Then
\begin{align*}
x^*SH &= x^*(SH)^* &&\text{by the Hamiltonian property}\\
&= \bar\lambda x^*S^*\\
&= -\bar\lambda x^*S &&\text{since } S^* = -S.
\end{align*}
Noting that $x^*S \neq 0$, we conclude that $-\bar\lambda$ is an eigenvalue of $H$.

2.
\begin{align*}
(X^*SX)\Lambda + \Lambda^*(X^*SX) &= X^*SHX + X^*H^*S^*X\\
&= X^*\bigl(SH - (SH)^*\bigr)X &&\text{since } S^* = -S\\
&= 0 &&\text{by the Hamiltonian property.}
\end{align*}
We conclude that $X^*SX = 0$, because linear matrix equations of the form $Y\Lambda + \Lambda^*Y = 0$ in which $\mathrm{Re}\bigl(\lambda_i(\Lambda)\bigr) < 0$ have the unique solution $Y = 0$ (see Problem 3.20).

3. From Item 2, $X_1^*X_2 = X_2^*X_1$. Hence $X_2X_1^{-1} = (X_1^*)^{-1}X_2^*$ and $P$ is symmetric.

Also from Item 2, $X^*SHX = X^*SX\Lambda = 0$. Hence
\begin{align*}
0 &= (X_1^*)^{-1}X^*SHXX_1^{-1}\\
&= \begin{bmatrix}P & -I\end{bmatrix}
\begin{bmatrix}H_{11} & H_{12}\\H_{21} & H_{22}\end{bmatrix}
\begin{bmatrix}I\\P\end{bmatrix}\\
&= PH_{11} - H_{21} + PH_{12}P - H_{22}P\\
&= PH_{11} + H_{11}^*P + PH_{12}P - H_{21}.
\end{align*}
The final equality is valid because the Hamiltonian property implies that $H_{22} = -H_{11}^*$.

Finally,
\[
H_{11} + H_{12}P = \begin{bmatrix}I & 0\end{bmatrix}HXX_1^{-1}
= \begin{bmatrix}I & 0\end{bmatrix}X\Lambda X_1^{-1}
= X_1\Lambda X_1^{-1}.
\]
Solution 3.22.

1. Since $\Phi(T,T) = I$, it is immediate that $P(T) = \Delta$. With $X_1 = \Phi_{11} + \Phi_{12}\Delta$ and $X_2 = \Phi_{21} + \Phi_{22}\Delta$,
\begin{align*}
-\dot P &= -\dot X_2X_1^{-1} + P\dot X_1X_1^{-1}\\
&= -(\dot\Phi_{21} + \dot\Phi_{22}\Delta)X_1^{-1} + P(\dot\Phi_{11} + \dot\Phi_{12}\Delta)X_1^{-1}\\
&= \begin{bmatrix}P & -I\end{bmatrix}\dot\Phi\begin{bmatrix}I\\\Delta\end{bmatrix}X_1^{-1}\\
&= \begin{bmatrix}P & -I\end{bmatrix}H\Phi\begin{bmatrix}I\\\Delta\end{bmatrix}X_1^{-1}\\
&= \begin{bmatrix}P & -I\end{bmatrix}H\begin{bmatrix}X_1\\X_2\end{bmatrix}X_1^{-1}\\
&= \begin{bmatrix}P & -I\end{bmatrix}H\begin{bmatrix}I\\P\end{bmatrix}.
\end{align*}
The result follows upon expansion of the right-hand side and noting that $H_{22} = -H_{11}^*$.

2. Matrix addition, multiplication and inversion are continuous operations: $(A+B)_{ij}$ and $(AB)_{ij}$ are continuous functions of the entries of $A$ and $B$, and $(A^{-1})_{ij}$ is a continuous function of the entries of $A$, provided $A$ is nonsingular. The result follows from these facts and Item 1.
Solution 3.23.

1. Write the Riccati equation as
\[
\Pi A + A^*\Pi + \begin{bmatrix}C^* & \gamma^{-1}\Pi B\end{bmatrix}
\begin{bmatrix}C\\\gamma^{-1}B^*\Pi\end{bmatrix} = 0.
\]
Therefore $\Pi$ is the observability gramian of $\Bigl(A, \begin{bmatrix}C\\\gamma^{-1}B^*\Pi\end{bmatrix}\Bigr)$. Hence $(A, C)$ observable implies that $\Pi$ is nonsingular.

2. Write the Riccati equation as
\[
\Pi(A + \gamma^{-2}BB^*\Pi) = -A^*\Pi - C^*C.
\]
Suppose that $Ax = \lambda x$ and $Cx = 0$, $x \neq 0$. Note that $\mathrm{Re}(\lambda) < 0$ since $A$ is asymptotically stable. It follows that
\[
x^*\Pi(A + \gamma^{-2}BB^*\Pi) = -\bar\lambda x^*\Pi.
\]
Since $A + \gamma^{-2}BB^*\Pi$ is asymptotically stable and $\mathrm{Re}(-\bar\lambda) > 0$, we conclude that $x^*\Pi = 0$. Hence the unobservable subspace is contained in $\ker\Pi$.

Suppose now that $M_2$ is a basis for $\ker\Pi$. Then $M_2^*(3.7.17)M_2$ yields $CM_2 = 0$ and $(3.7.17)M_2$ results in $\Pi AM_2 = 0$. That is, $\ker\Pi$ is an $A$-invariant subspace contained in $\ker C$, which shows that $\ker\Pi$ is a subset of the unobservable subspace. We therefore conclude that $\ker\Pi$ is the unobservable subspace.

3. Follows immediately from Item 2.
Solution 3.24.

1. Let $X(t) = P(t) - \Pi$. Differentiating $X(t)$ and using the two Riccati equations, we obtain
\begin{align}
-\dot X &= XA + A^*X + \gamma^{-2}PBB^*P - \gamma^{-2}\Pi BB^*\Pi\notag\\
&= X(A + \gamma^{-2}BB^*\Pi) + (A + \gamma^{-2}BB^*\Pi)^*X + \gamma^{-2}XBB^*X. \tag{3.2}
\end{align}
Choose a $t_\star \leq T$ and an $x$ such that $X(t_\star)x = 0$. Then $x^*(3.2)x$ gives $x^*\dot X(t)x|_{t=t_\star} = 0$, which is equivalent to $\dot X(t)x|_{t=t_\star} = 0$ (since $P(t)$ and hence $X(t)$ are monotonic, which implies $\dot X$ is semidefinite). Consequently, $(3.2)x$ gives $X(t_\star)(A + \gamma^{-2}BB^*\Pi)x = 0$. That is, the kernel of $X(t_\star)$ is invariant under multiplication by $A + \gamma^{-2}BB^*\Pi$.

Consider the differential equation
\[
\dot{\tilde x}(t) = -\bigl((A + \gamma^{-2}BB^*\Pi)^* + \gamma^{-2}XBB^*\bigr)\tilde x(t)
- X(A + \gamma^{-2}BB^*\Pi)x,\qquad \tilde x(t_\star) = 0.
\]
One solution is $\tilde x_1(t) = X(t)x$. Another solution is $\tilde x_2(t) = 0$ for all $t$. Hence $X(t)x = 0$ for all $t$ by the uniqueness of solutions to differential equations.

2. Let $M = \begin{bmatrix}M_1 & M_2\end{bmatrix}$ be any nonsingular matrix such that $M_2$ is a basis for the kernel of $\Pi$. Since $P(T) = 0$, $M_2$ is also a basis for the kernel of $\Pi - P(t)$ for all $t \leq T$ by Item 1. It follows that $\bigl[\Pi - P(t)\bigr]M_2 = 0$ and $M_2^*\bigl[\Pi - P(t)\bigr] = 0$ for all $t \leq T$ and that $\Pi_1 - P_1(t) = M_1^*\bigl[\Pi - P(t)\bigr]M_1$ is nonsingular for all $t \leq T$.

3. The matrix $\Pi - P(t)$ is nonsingular for all $t \leq T$ if and only if $\Pi$ is nonsingular (this follows from $P(T) = 0$ and Item 1).

If $\Pi$ is nonsingular, the assumptions used in the text hold and $A + \gamma^{-2}BB^*\Pi$ is asymptotically stable.

In the case that $\Pi$ is singular, let $M$ be as in Item 2. By the solution to Problem 3.23,
\[
M^{-1}AM = \begin{bmatrix}\bar A_{11} & 0\\\bar A_{21} & \bar A_{22}\end{bmatrix},\qquad
M^{-1}B = \begin{bmatrix}\bar B_1\\\bar B_2\end{bmatrix},\qquad
CM = \begin{bmatrix}\bar C_1 & 0\end{bmatrix}.
\]
(Note that we cannot, and do not, assume that $(\bar A_{11}, \bar C_1)$ is observable.) Furthermore, $M_1^*\bigl[\Pi - P(t)\bigr]M_1$ is nonsingular and $P_1(t) = M_1^*P(t)M_1$ satisfies
\[
-\dot P_1 = P_1\bar A_{11} + \bar A_{11}^*P_1 + \gamma^{-2}P_1\bar B_1\bar B_1^*P_1 + \bar C_1^*\bar C_1,\qquad P_1(T) = 0,
\]
with $\lim_{t\to-\infty}P_1(t) = \Pi_1$, in which $\Pi_1 = M_1^*\Pi M_1$. Applying the argument of the text to this subsystem shows that $\bar A_{11} + \gamma^{-2}\bar B_1\bar B_1^*\Pi_1$ is asymptotically stable. We conclude that $A + \gamma^{-2}BB^*\Pi$ is asymptotically stable, since
\[
A + \gamma^{-2}BB^*\Pi
= M\begin{bmatrix}\bar A_{11} + \gamma^{-2}\bar B_1\bar B_1^*\Pi_1 & 0\\
\bar A_{21} + \gamma^{-2}\bar B_2\bar B_1^*\Pi_1 & \bar A_{22}\end{bmatrix}M^{-1}.
\]
Solution 3.25. Define
\[
G = (I - Z)(I + Z)^{-1}
\stackrel{s}{=}\left[\begin{array}{c|c}A - B(I+D)^{-1}C & B(I+D)^{-1}\\\hline
-2(I+D)^{-1}C & (I-D)(I+D)^{-1}\end{array}\right]
\stackrel{s}{=}\left[\begin{array}{c|c}\tilde A & \tilde B\\\hline \tilde C & \tilde D\end{array}\right].
\]
Since $Z$ defines an incrementally strictly passive system with finite incremental gain, $\gamma(G) < 1$ by Lemma 3.5.6. This is equivalent to $\|G\|_\infty < 1$ since $G$ is linear, time-invariant and rational. Now verify
\begin{align*}
\tilde R &= I - \tilde D^*\tilde D\\
&= I - (I + D^*)^{-1}(I - D^*)(I - D)(I + D)^{-1}\\
&= (I + D^*)^{-1}\bigl[(I + D^*)(I + D) - (I - D^*)(I - D)\bigr](I + D)^{-1}\\
&= (I + D^*)^{-1}\bigl[I + D^* + D + D^*D - I + D^* + D - D^*D\bigr](I + D)^{-1}\\
&= 2(I + D^*)^{-1}(D + D^*)(I + D)^{-1}.
\end{align*}
Condition 1 of the bounded real lemma says that $\tilde R > 0$. Therefore $R = D + D^* > 0$. Using this identity, we easily obtain
\[
\tilde B\tilde R^{-1}\tilde B^* = \tfrac12 BR^{-1}B^*
\]
and
\begin{align*}
\tilde A + \tilde B\tilde R^{-1}\tilde D^*\tilde C
&= A - B(I+D)^{-1}C - BR^{-1}(I - D^*)(I + D)^{-1}C\\
&= A - B\bigl[I + R^{-1}(I - D^*)\bigr](I + D)^{-1}C\\
&= A - BR^{-1}\bigl[D + D^* + I - D^*\bigr](I + D)^{-1}C\\
&= A - BR^{-1}C.
\end{align*}
Finally, notice that for any matrix $X$ with $I - X^*X$ nonsingular,
\begin{align*}
I + X(I - X^*X)^{-1}X^* &= I + (I - XX^*)^{-1}XX^*\\
&= (I - XX^*)^{-1}(I - XX^* + XX^*)\\
&= (I - XX^*)^{-1}.
\end{align*}
Consequently,
\begin{align*}
I + \tilde D\tilde R^{-1}\tilde D^* &= (I - \tilde D\tilde D^*)^{-1}\\
&= \Bigl[(I + D)^{-1}\bigl[(I + D)(I + D^*) - (I - D)(I - D^*)\bigr](I + D^*)^{-1}\Bigr]^{-1}\\
&= \tfrac12(I + D^*)(D + D^*)^{-1}(I + D),
\end{align*}
which results in
\[
\tilde C^*\bigl(I + \tilde D\tilde R^{-1}\tilde D^*\bigr)\tilde C = 2C^*R^{-1}C.
\]
Condition 2 of the bounded real lemma ensures the existence of a $\tilde P$ such that
\[
\tilde P(A - BR^{-1}C) + (A - BR^{-1}C)^*\tilde P + \tfrac12\tilde PBR^{-1}B^*\tilde P + 2C^*R^{-1}C = 0,
\]
with $A - BR^{-1}C + \tfrac12 BR^{-1}B^*\tilde P$ asymptotically stable (and $\tilde P \geq 0$). We therefore define $P = \tfrac12\tilde P$.
Solution 3.26.

1. Suppose $\Phi = W^\sim W$. Then $\Phi^\sim = W^\sim W = \Phi$. Furthermore, $W, W^{-1} \in \mathcal{RH}_\infty$ implies that $\Phi(j\omega) > 0$.

Now suppose that $\Phi = V^\sim V$ also. Then
\[
WV^{-1} = (W^\sim)^{-1}V^\sim.
\]
The elements of the left-hand side have no poles in $\mathrm{Re}(s) \geq 0$ and the elements of the right-hand side have no poles in $\mathrm{Re}(s) \leq 0$. Hence $WV^{-1} = M$, a constant matrix, which satisfies $M^*M = I$. We conclude that $M$ is real (hence orthogonal) by noting that $W$ and $V$ are implicitly assumed to be real systems.

2. Since $\Phi = \Phi^\sim$, the poles of $\Phi$ are symmetric about the imaginary axis and
\[
\Phi = \sum_i\sum_j M_{ij}(s - p_i)^{-j} + \sum_i\sum_j M_{ij}^*(-s - \bar p_i)^{-j},
\]
in which the $M_{ij}$ are complex matrices and $\mathrm{Re}(p_i) < 0$. Define
\[
Z = \sum_i\sum_j M_{ij}(s - p_i)^{-j}.
\]
Since $\Phi$ is real and in $\mathcal{RL}_\infty$, $Z$ is real and in $\mathcal{RH}_\infty$.

(Alternatively, let $\phi(t)$ be the inverse Fourier transform of $\Phi$, define
\[
z(t) = \begin{cases}\phi(t) & \text{for } t \geq 0\\ 0 & \text{otherwise}\end{cases}
\]
and let $Z$ be the Fourier transform of $z$.)

Since $\Phi(j\omega) > 0$ for all $\omega$, it follows that
\[
Z(j\omega) + Z^*(j\omega) > 0
\]
for all $\omega$. Consequently, since $Z$ is rational, there is an $\epsilon > 0$ such that $Z(j\omega) + Z^*(j\omega) \geq 2\epsilon I$. It follows that $Z$ is strictly positive real.

3. The fact that $W \in \mathcal{RH}_\infty$ and $W^{-1} \in \mathcal{RH}_\infty$ follows trivially from the asymptotic stability of the matrices $A$ and $A - BR^{-1}(C - B^*P) = A - BW^{-1}L$. Verify that the Riccati equation can be written as
\[
PA + A^*P + L^*L = 0.
\]
(This shows that $(A, L)$ is observable if and only if $P$ is nonsingular.) Now verify
\begin{align*}
W^\sim W &= W^*W + B^*(-sI - A^*)^{-1}L^*W + W^*L(sI - A)^{-1}B\\
&\qquad + B^*(-sI - A^*)^{-1}L^*L(sI - A)^{-1}B\\
&= D + D^* + B^*(-sI - A^*)^{-1}(C^* - PB) + (C - B^*P)(sI - A)^{-1}B\\
&\qquad + B^*(-sI - A^*)^{-1}L^*L(sI - A)^{-1}B\\
&= D + D^* + B^*(-sI - A^*)^{-1}C^* + C(sI - A)^{-1}B\\
&\qquad + B^*(-sI - A^*)^{-1}\bigl[L^*L - P(sI - A) - (-sI - A^*)P\bigr](sI - A)^{-1}B\\
&= D + D^* + B^*(-sI - A^*)^{-1}C^* + C(sI - A)^{-1}B\\
&= Z + Z^\sim.
\end{align*}
Solutions to Problems in Chapter 4
Solution 4.1.

1(a) The function $h = \beta\left(\frac{s-1}{s+1}\right)$ maps the imaginary axis $s = j\omega$ into the circle $|h| = \beta$. We can therefore find $w$ by solving the equation
\[
\frac{1}{1 - w} = \beta\left(\frac{s-1}{s+1}\right)
\quad\Rightarrow\quad
w = 1 - \frac{s+1}{\beta(s-1)} = \frac{s(1 - \beta^{-1}) - (1 + \beta^{-1})}{s - 1}.
\]

1(b) In this case we need to solve
\[
\frac{w}{1 - w} = \beta\left(\frac{s-1}{s+1}\right)
\quad\Rightarrow\quad
w = \beta\left(\frac{s-1}{s+1}\right)\left(1 + \beta\left(\frac{s-1}{s+1}\right)\right)^{-1}
= \frac{\beta(s-1)}{s(1+\beta) + (1-\beta)}.
\]

2(a) Let
\[
\frac{1}{1 - q} = \beta\left(\frac{s-1}{s+1}\right)
\quad\Rightarrow\quad
q = \frac{s(1 - \beta^{-1}) - (1 + \beta^{-1})}{s - 1},
\]
where $q = gk$, so that
\[
k = s(1 - \beta^{-1}) - (1 + \beta^{-1}).
\]
46 SOLUTIONS TO PROBLEMS IN CHAPTER 4
The Nyquist plot of $q$ cuts the real axis at $1 - \beta^{-1}$ and $1 + \beta^{-1}$. This means that there will be one encirclement of $+1$ for all $\beta > 0$. In order to make the controller realizable, one could use
\[
k = \frac{s(1 - \beta^{-1}) - (1 + \beta^{-1})}{\epsilon s + 1}
\]
for arbitrarily small $\epsilon$.

2(b) In this case we solve
\[
\frac{q}{1 - q} = \beta\left(\frac{s-1}{s+1}\right)
\]
to obtain
\[
q = \frac{\beta(s-1)}{s(1+\beta) + (1-\beta)}.
\]
It is not hard to check that the Nyquist plot of $q$ cuts the real axis at $\frac{\beta}{\beta-1}$. We therefore require $\beta > 1$ for the single required encirclement. The corresponding controller is given by
\[
k = \frac{\beta(s-1)^2}{s(1+\beta) + (1-\beta)},
\]
or
\[
k = \frac{\beta(s-1)^2}{\bigl(s(1+\beta) + (1-\beta)\bigr)(1 + \epsilon s)}
\]
for a proper approximation.

3 Just repeat the calculations of Part (2a) using
\[
\frac{1}{1 - q} = \beta\left(\frac{s-1}{s+1}\right)^2.
\]
This gives
\[
q = \frac{s^2(1 - \beta^{-1}) - 2s(1 + \beta^{-1}) + 1 - \beta^{-1}}{(s-1)^2},
\qquad
k = \frac{s^2(1 - \beta^{-1}) - 2s(1 + \beta^{-1}) + 1 - \beta^{-1}}{(s+1)^2}.
\]
LINEAR FRACTIONAL TRANSFORMATIONS 47
Solution 4.2. Since
\[
\mathcal F_\ell(P, K_1) = P_{11} + P_{12}K_1(I - P_{22}K_1)^{-1}P_{21}
\]
and
\[
\mathcal F_\ell(P, K_2) = P_{11} + P_{12}(I - K_2P_{22})^{-1}K_2P_{21},
\]
it follows that
\begin{align*}
\mathcal F_\ell(P, K_1) - \mathcal F_\ell(P, K_2)
&= P_{12}(I - K_2P_{22})^{-1}\bigl[(I - K_2P_{22})K_1 - K_2(I - P_{22}K_1)\bigr](I - P_{22}K_1)^{-1}P_{21}\\
&= P_{12}(I - K_2P_{22})^{-1}(K_1 - K_2)(I - P_{22}K_1)^{-1}P_{21}.
\end{align*}
The result now follows because $P_{12}(I - K_2P_{22})^{-1}$ has full column rank for almost all $s$ and $(I - P_{22}K_1)^{-1}P_{21}$ has full row rank for almost all $s$.
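The difference formula derived above is a matrix identity that can be confirmed numerically at a single frequency point (random example data, my own choice):

```python
import numpy as np

rng = np.random.default_rng(2)
p = {key: 0.3 * rng.standard_normal((2, 2)) for key in ("P11", "P12", "P21", "P22")}
K1 = 0.3 * rng.standard_normal((2, 2))
K2 = 0.3 * rng.standard_normal((2, 2))
I = np.eye(2)

def lft(K):
    """Lower LFT F_l(P, K) = P11 + P12 K (I - P22 K)^{-1} P21."""
    return p["P11"] + p["P12"] @ K @ np.linalg.inv(I - p["P22"] @ K) @ p["P21"]

diff = lft(K1) - lft(K2)
formula = p["P12"] @ np.linalg.inv(I - K2 @ p["P22"]) @ (K1 - K2) \
          @ np.linalg.inv(I - p["P22"] @ K1) @ p["P21"]
assert np.allclose(diff, formula)
```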
Solution 4.3. To see this we observe that
\[
\begin{bmatrix}z\\y\end{bmatrix} = P\begin{bmatrix}w\\u\end{bmatrix},\quad u = Ky
\quad\Longleftrightarrow\quad
z = Rw,\ \text{where } R = \mathcal F_\ell(P, K),
\]
and
\[
\begin{bmatrix}w\\u\end{bmatrix} = P^{-1}\begin{bmatrix}z\\y\end{bmatrix},\quad z = Rw
\quad\Longleftrightarrow\quad
K = \mathcal F_u(P^{-1}, R).
\]
Solution 4.4. This follows because:
\begin{align*}
Z &= (I + S)(I - S)^{-1}\\
&= (I - S + 2S)(I - S)^{-1}\\
&= I + 2S(I - S)^{-1}\\
&= \mathcal F_\ell\Bigl(\begin{bmatrix}I & I\\2I & I\end{bmatrix}, S\Bigr).
\end{align*}
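A quick scalar spot-check of the LFT representation just obtained (a sketch with test values of my own choosing):

```python
# scalar check of Z = (1 + S)(1 - S)^{-1} = F_l([[1, 1], [2, 1]], S)
def lft_scalar(p11, p12, p21, p22, s):
    """Scalar lower LFT p11 + p12 * s * (1 - p22*s)^{-1} * p21."""
    return p11 + p12 * s * p21 / (1.0 - p22 * s)

for s in (0.3, -0.7, 0.1):
    z_direct = (1 + s) / (1 - s)
    z_lft = lft_scalar(1.0, 1.0, 2.0, 1.0, s)
    assert abs(z_direct - z_lft) < 1e-12
```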
Solution 4.5.
1. Let
\[
\begin{bmatrix}z\\y\end{bmatrix}
= \begin{bmatrix}P_{11} & P_{12}\\P_{21} & P_{22}\end{bmatrix}
\begin{bmatrix}w\\u\end{bmatrix},
\qquad u = Ky, \tag{4.1}
\]
so that $z = \mathcal F_\ell(P, K)w$. Rewrite (4.1) as
\[
\begin{bmatrix}z\\w\end{bmatrix}
= \begin{bmatrix}P_{11} & P_{12}\\I & 0\end{bmatrix}
\begin{bmatrix}w\\u\end{bmatrix},
\qquad
\begin{bmatrix}u\\y\end{bmatrix}
= \begin{bmatrix}0 & I\\P_{21} & P_{22}\end{bmatrix}
\begin{bmatrix}w\\u\end{bmatrix},
\]
which gives
\begin{align*}
\begin{bmatrix}z\\w\end{bmatrix}
&= \begin{bmatrix}P_{11} & P_{12}\\I & 0\end{bmatrix}
\begin{bmatrix}0 & I\\P_{21} & P_{22}\end{bmatrix}^{-1}
\begin{bmatrix}u\\y\end{bmatrix}\\
&= \begin{bmatrix}P_{11} & P_{12}\\I & 0\end{bmatrix}
\begin{bmatrix}0 & I\\P_{21} & P_{22}\end{bmatrix}^{-1}
\begin{bmatrix}K\\I\end{bmatrix}y
= \begin{bmatrix}\Sigma_{11} & \Sigma_{12}\\\Sigma_{21} & \Sigma_{22}\end{bmatrix}
\begin{bmatrix}K\\I\end{bmatrix}y.
\end{align*}
Hence $y = (\Sigma_{21}K + \Sigma_{22})^{-1}w$ and
\[
z = (\Sigma_{11}K + \Sigma_{12})(\Sigma_{21}K + \Sigma_{22})^{-1}w.
\]
We conclude that
\[
\mathcal F_\ell(P, K) = (\Sigma_{11}K + \Sigma_{12})(\Sigma_{21}K + \Sigma_{22})^{-1}.
\]

2. With $J = \begin{bmatrix}I & 0\\0 & -I\end{bmatrix}$,
\begin{align*}
P^\sim P - I
&= \begin{bmatrix}P_{11}^\sim & P_{21}^\sim\\P_{12}^\sim & P_{22}^\sim\end{bmatrix}
\begin{bmatrix}P_{11} & P_{12}\\P_{21} & P_{22}\end{bmatrix}
- \begin{bmatrix}I & 0\\0 & I\end{bmatrix}\\
&= \begin{bmatrix}
P_{11}^\sim P_{11} + P_{21}^\sim P_{21} - I & P_{11}^\sim P_{12} + P_{21}^\sim P_{22}\\
P_{12}^\sim P_{11} + P_{22}^\sim P_{21} & P_{12}^\sim P_{12} + P_{22}^\sim P_{22} - I
\end{bmatrix}\\
&= \begin{bmatrix}P_{11} & P_{12}\\I & 0\end{bmatrix}^\sim
J\begin{bmatrix}P_{11} & P_{12}\\I & 0\end{bmatrix}
- \begin{bmatrix}0 & I\\P_{21} & P_{22}\end{bmatrix}^\sim
J\begin{bmatrix}0 & I\\P_{21} & P_{22}\end{bmatrix}\\
&= \begin{bmatrix}0 & I\\P_{21} & P_{22}\end{bmatrix}^\sim
(\Sigma^\sim J\Sigma - J)
\begin{bmatrix}0 & I\\P_{21} & P_{22}\end{bmatrix}.
\end{align*}
The last line follows from
\[
\Sigma = \begin{bmatrix}P_{11} & P_{12}\\I & 0\end{bmatrix}
\begin{bmatrix}0 & I\\P_{21} & P_{22}\end{bmatrix}^{-1}.
\]

3.
\[
\begin{bmatrix}0 & I\\P_{21} & P_{22}\\P_{11} & P_{12}\\I & 0\end{bmatrix}
\stackrel{s}{=}
\left[\begin{array}{c|cc}
A & B_1 & B_2\\\hline
0 & 0 & I\\
C_2 & D_{21} & D_{22}\\
C_1 & D_{11} & D_{12}\\
0 & I & 0
\end{array}\right].
\]
It follows (see Problem 3.6) that $\Sigma$ has realization
\[
\left[\begin{array}{c|cc}A & B_1 & B_2\\\hline C_1 & D_{11} & D_{12}\\0 & I & 0\end{array}\right]
\left[\begin{array}{c|cc}I & 0 & 0\\\hline 0 & 0 & I\\C_2 & D_{21} & D_{22}\end{array}\right]^{-1}.
\]
That is,
\[
\Sigma \stackrel{s}{=}
\left[\begin{array}{c|cc}
A - B_1D_{21}^{-1}C_2 & B_2 - B_1D_{21}^{-1}D_{22} & B_1D_{21}^{-1}\\\hline
C_1 - D_{11}D_{21}^{-1}C_2 & D_{12} - D_{11}D_{21}^{-1}D_{22} & D_{11}D_{21}^{-1}\\
-D_{21}^{-1}C_2 & -D_{21}^{-1}D_{22} & D_{21}^{-1}
\end{array}\right].
\]
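The chain-scattering formula of Item 1 can be spot-checked in the scalar case: build $\Sigma$ from the two constant matrices and compare $(\Sigma_{11}K + \Sigma_{12})(\Sigma_{21}K + \Sigma_{22})^{-1}$ with the LFT computed directly (random example values of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(3)
P11, P12, P21, P22 = 0.4 * rng.standard_normal(4)
K = 0.5

# scattering matrix Sigma = [P11 P12; 1 0] [0 1; P21 P22]^{-1}
L = np.array([[P11, P12], [1.0, 0.0]])
R = np.array([[0.0, 1.0], [P21, P22]])
S = L @ np.linalg.inv(R)
S11, S12, S21, S22 = S.ravel()

lft = P11 + P12 * K / (1 - P22 * K) * P21
chain = (S11 * K + S12) / (S21 * K + S22)
assert abs(lft - chain) < 1e-10
```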
Solution 4.6. Note that
XA+DX +XBX +C = X(BX +A) + (DX +C).
The result is now immediate.
Solution 4.7. We see from the diagram that
\[
z = \mathcal F_\ell(P, \Theta)w,
\qquad\text{where}\qquad
\Theta = \mathcal F_\ell(K, \Phi_{11}).
\]
It is now immediate that
\[
\begin{bmatrix}z\\r\end{bmatrix}
= \begin{bmatrix}\mathcal F_\ell\bigl(P, \mathcal F_\ell(K, \Phi_{11})\bigr) & *\\ * & *\end{bmatrix}
\begin{bmatrix}w\\v\end{bmatrix}.
\]
Solution 4.8.

1. From the diagram we see that
\[
z = P_{11}w + P_{12}u,\qquad
y = P_{21}w + P_{22}u,\qquad
u = K_{11}y + K_{12}v,\qquad
r = K_{21}y + K_{22}v.
\]
Eliminating $y$ and $u$ from these equations gives
\begin{align*}
z &= \bigl(P_{11} + P_{12}K_{11}(I - P_{22}K_{11})^{-1}P_{21}\bigr)w + P_{12}(I - K_{11}P_{22})^{-1}K_{12}v,\\
r &= K_{21}(I - P_{22}K_{11})^{-1}P_{21}w + \bigl(K_{22} + K_{21}(I - P_{22}K_{11})^{-1}P_{22}K_{12}\bigr)v.
\end{align*}

2. Since we require
\[
0 = P_{11} + P_{12}K_{11}(I - P_{22}K_{11})^{-1}P_{21},
\]
we obtain
\[
K_{11} = -(P_{12} - P_{11}P_{21}^{-1}P_{22})^{-1}P_{11}P_{21}^{-1}
= -P_{12}^{-1}P_{11}(P_{21} - P_{22}P_{12}^{-1}P_{11})^{-1}.
\]
Setting
\[
I = K_{21}(I - P_{22}K_{11})^{-1}P_{21}
\]
gives
\begin{align*}
K_{21} &= P_{21}^{-1}(I - P_{22}K_{11})\\
&= P_{21}^{-1} + P_{21}^{-1}P_{22}(P_{12} - P_{11}P_{21}^{-1}P_{22})^{-1}P_{11}P_{21}^{-1}\\
&= (P_{21} - P_{22}P_{12}^{-1}P_{11})^{-1}.
\end{align*}
A similar calculation starting from
\[
I = P_{12}(I - K_{11}P_{22})^{-1}K_{12}
\]
results in
\begin{align*}
K_{12} &= (I - K_{11}P_{22})P_{12}^{-1}\\
&= P_{12}^{-1} + (P_{12} - P_{11}P_{21}^{-1}P_{22})^{-1}P_{11}P_{21}^{-1}P_{22}P_{12}^{-1}\\
&= (P_{12} - P_{11}P_{21}^{-1}P_{22})^{-1}.
\end{align*}
Finally,
\[
0 = K_{22} + K_{21}(I - P_{22}K_{11})^{-1}P_{22}K_{12}
\]
and $K_{11} = -P_{12}^{-1}P_{11}K_{21}$ results in
\begin{align*}
K_{22} &= -K_{21}(I - P_{22}K_{11})^{-1}P_{22}K_{12}\\
&= -K_{21}(I + P_{22}P_{12}^{-1}P_{11}K_{21})^{-1}P_{22}K_{12}\\
&= -(K_{21}^{-1} + P_{22}P_{12}^{-1}P_{11})^{-1}P_{22}K_{12}\\
&= -(P_{21} - P_{22}P_{12}^{-1}P_{11} + P_{22}P_{12}^{-1}P_{11})^{-1}P_{22}K_{12}\\
&= -P_{21}^{-1}P_{22}(P_{12} - P_{11}P_{21}^{-1}P_{22})^{-1}.
\end{align*}
Hence
\[
P^\# = \begin{bmatrix}P^\#_{11} & P^\#_{12}\\P^\#_{21} & P^\#_{22}\end{bmatrix}, \tag{4.2}
\]
in which
\begin{align*}
P^\#_{11} &= -(P_{12} - P_{11}P_{21}^{-1}P_{22})^{-1}P_{11}P_{21}^{-1}, &
P^\#_{12} &= (P_{12} - P_{11}P_{21}^{-1}P_{22})^{-1},\\
P^\#_{21} &= (P_{21} - P_{22}P_{12}^{-1}P_{11})^{-1}, &
P^\#_{22} &= -P_{21}^{-1}P_{22}(P_{12} - P_{11}P_{21}^{-1}P_{22})^{-1}.
\end{align*}
It is easy to check that
\[
P^{-1} = \begin{bmatrix}0 & I\\I & 0\end{bmatrix}P^\#\begin{bmatrix}0 & I\\I & 0\end{bmatrix}.
\]

3. We are going to need the six equations from
\[
\begin{bmatrix}P_{11} & P_{12}\\P_{21} & P_{22}\end{bmatrix}
\begin{bmatrix}P_{11}^\sim & P_{21}^\sim\\P_{12}^\sim & P_{22}^\sim\end{bmatrix}
= \begin{bmatrix}I & 0\\0 & I\end{bmatrix}
\quad\text{and}\quad
\begin{bmatrix}P_{11}^\sim & P_{21}^\sim\\P_{12}^\sim & P_{22}^\sim\end{bmatrix}
\begin{bmatrix}P_{11} & P_{12}\\P_{21} & P_{22}\end{bmatrix}
= \begin{bmatrix}I & 0\\0 & I\end{bmatrix}.
\]
Since
\[
P_{12}^\sim P_{11} + P_{22}^\sim P_{21} = 0,
\]
we have
\[
P_{11}P_{21}^{-1} = -(P_{12}^\sim)^{-1}P_{22}^\sim.
\]
Substituting into the formula for $P^\#_{11}$ gives
\begin{align*}
P^\#_{11} &= -\bigl(P_{12} + (P_{12}^\sim)^{-1}P_{22}^\sim P_{22}\bigr)^{-1}P_{11}P_{21}^{-1}\\
&= -(P_{12}^\sim P_{12} + P_{22}^\sim P_{22})^{-1}P_{12}^\sim P_{11}P_{21}^{-1}\\
&= -P_{12}^\sim P_{11}P_{21}^{-1}\\
&= P_{22}^\sim,
\end{align*}
since $I = P_{12}^\sim P_{12} + P_{22}^\sim P_{22}$. In much the same way,
\[
P_{11}^\sim P_{12} + P_{21}^\sim P_{22} = 0
\]
gives
\[
P_{22}P_{12}^{-1} = -(P_{21}^\sim)^{-1}P_{11}^\sim.
\]
Substituting into the formula for $P^\#_{21}$ yields
\begin{align*}
P^\#_{21} &= (P_{21} - P_{22}P_{12}^{-1}P_{11})^{-1}\\
&= \bigl(P_{21} + (P_{21}^\sim)^{-1}P_{11}^\sim P_{11}\bigr)^{-1}\\
&= P_{21}^\sim,
\end{align*}
since $I = P_{21}^\sim P_{21} + P_{11}^\sim P_{11}$. The other partitions of $P^\#$ follow in the same way.
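The relation $P^{-1} = \begin{bmatrix}0&I\\I&0\end{bmatrix}P^\#\begin{bmatrix}0&I\\I&0\end{bmatrix}$ from Item 2 is easy to confirm in the scalar-block case, using the formulas in (4.2) (example values of my own choosing):

```python
import numpy as np

# scalar-block example of P and its LFT-inverse P# from (4.2)
P11, P12, P21, P22 = 0.7, 1.3, -0.9, 0.4
Pm = np.array([[P11, P12], [P21, P22]])

d12 = P12 - P11 * P22 / P21       # P12 - P11 P21^{-1} P22
d21 = P21 - P22 * P11 / P12       # P21 - P22 P12^{-1} P11
Ps = np.array([[-P11 / (d12 * P21), 1.0 / d12],
               [1.0 / d21, -(P22 / P21) / d12]])

# check P^{-1} = [0 I; I 0] P# [0 I; I 0]
J = np.array([[0.0, 1.0], [1.0, 0.0]])
assert np.allclose(np.linalg.inv(Pm), J @ Ps @ J)
```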
Solution 4.9.

1. One has to check that $\mathcal C(P_1, P_2)$ and its (1,2)- and (2,1)-partitions are nonsingular. By referring to the general formula (4.1.9) for $\mathcal C(\cdot,\cdot)$, we see that this is indeed the case for any $P_1, P_2 \in \mathcal T$.

2. The best way to establish the associativity property is to transform the $P_i$'s into $\Sigma_i$'s in a scattering framework (see Problem 4.5). We then get
\[
\mathcal C\bigl(\mathcal C(P_1, P_2), P_3\bigr) = (\Sigma_1\Sigma_2)\Sigma_3
= \Sigma_1(\Sigma_2\Sigma_3)
= \mathcal C\bigl(P_1, \mathcal C(P_2, P_3)\bigr),
\]
in which the $\Sigma_i$ are the scattering matrices associated with each $P_i$. The associativity property comes from the fact that matrix multiplication is associative.

3. The identity is given by
\[
P_I = \begin{bmatrix}0 & I\\I & 0\end{bmatrix},
\]
and by referring to (4.1.9) it is easy to check that
\[
P = \mathcal C(P, P_I) = \mathcal C(P_I, P).
\]

4. Again, it is a routine matter to check that $P^\#$ given in (4.2) has the desired properties. The fact that $\mathcal C(P, P^\#) = \mathcal C(P^\#, P)$ comes from $\Sigma\Sigma^\# = \Sigma^\#\Sigma = I$.

5. The group property now follows directly from the definition.
Solution 4.10.

1. We begin by expressing $s^{-1}$ as a function of $w^{-1}$:
\begin{align*}
s &= (b - wd)(cw - a)^{-1} = (w^{-1}b - d)(c - aw^{-1})^{-1},\\
s^{-1} &= (c - aw^{-1})(w^{-1}b - d)^{-1}\\
&= -(c - aw^{-1})(1 - w^{-1}bd^{-1})^{-1}d^{-1}\\
&= -cd^{-1} + (a - bcd^{-1})w^{-1}(1 - w^{-1}bd^{-1})^{-1}d^{-1}\\
&= \mathcal F_\ell\Bigl(\begin{bmatrix}-cd^{-1} & a - bcd^{-1}\\d^{-1} & bd^{-1}\end{bmatrix}, w^{-1}\Bigr).
\end{align*}
We therefore have
\begin{align*}
G(s) &= D + C(sI - A)^{-1}B\\
&= D + Cs^{-1}(I - s^{-1}A)^{-1}B\\
&= \mathcal F_\ell\Bigl(\begin{bmatrix}D & C\\B & A\end{bmatrix}, s^{-1}I\Bigr)\\
&= \mathcal F_\ell\Bigl(\begin{bmatrix}D & C\\B & A\end{bmatrix},
\mathcal F_\ell\Bigl(\begin{bmatrix}-cd^{-1} & a - bcd^{-1}\\d^{-1} & bd^{-1}\end{bmatrix}, w^{-1}\Bigr)I\Bigr)\\
&= \mathcal F_\ell\Bigl(\begin{bmatrix}
D - cC(dI + cA)^{-1}B & C(dI + cA)^{-1}(ad - bc)\\
(dI + cA)^{-1}B & (aA + bI)(dI + cA)^{-1}
\end{bmatrix}, w^{-1}\Bigr)\\
&= \mathcal F_\ell\Bigl(\begin{bmatrix}\hat D & \hat C\\\hat B & \hat A\end{bmatrix}, w^{-1}\Bigr)
= \hat G(w).
\end{align*}

2. Suppose $Ax = \lambda x$ and $Cx = 0$, $x \neq 0$. Define $y = (cA + dI)x = (c\lambda + d)x \neq 0$ (since $cA + dI$ is nonsingular). Then $\hat Cy = 0$ and $\hat Ay = \frac{a\lambda + b}{c\lambda + d}y$.

Suppose $x^*B = 0$ and $x^*A = \lambda x^*$, $x \neq 0$. Define $y^* = x^*(cA + dI) = (c\lambda + d)x^* \neq 0$. Then $y^*\hat B = 0$ and
\[
y^*\hat A = (c\lambda + d)(a\lambda + b)x^*(cA + dI)^{-1} = \frac{a\lambda + b}{c\lambda + d}y^*.
\]
Similar arguments establish the converse implications.
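The transformed realization $(\hat A, \hat B, \hat C, \hat D)$ in Item 1 can be verified numerically: evaluate $G$ at $s = (b - wd)(cw - a)^{-1}$ and compare with $\hat G(w)$ (random example data and map coefficients of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
D = rng.standard_normal((1, 1))
a, b, c, d = 2.0, -1.0, 0.5, 3.0          # coefficients of s = (b - wd)/(cw - a)

M = np.linalg.inv(d * np.eye(n) + c * A)
Ah = (a * A + b * np.eye(n)) @ M
Bh = M @ B
Ch = C @ M * (a * d - b * c)
Dh = D - c * C @ M @ B

for w in (0.7 + 0.2j, -1.5 + 1.0j):
    s = (b - w * d) / (c * w - a)
    G = D + C @ np.linalg.inv(s * np.eye(n) - A) @ B
    Gh = Dh + Ch @ np.linalg.inv(w * np.eye(n) - Ah) @ Bh
    assert np.allclose(G, Gh)
```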
Solution 4.11.

1. Writing $W_1(I - GK)^{-1}$ as $W_1\bigl(I + GK(I - GK)^{-1}\bigr)$ gives
\[
\begin{bmatrix}W_1(I - GK)^{-1}\\W_2K(I - GK)^{-1}\end{bmatrix}
= \begin{bmatrix}W_1\\0\end{bmatrix}
+ \begin{bmatrix}W_1G\\W_2\end{bmatrix}K(I - GK)^{-1}.
\]
Comparing terms with
\[
\mathcal F_\ell(P, K) = P_{11} + P_{12}K(I - P_{22}K)^{-1}P_{21}
\]
establishes that
\[
\begin{bmatrix}P_{11} & P_{12}\\P_{21} & P_{22}\end{bmatrix}
= \begin{bmatrix}W_1 & W_1G\\0 & W_2\\\hline I & G\end{bmatrix}.
\]

2. Note that
\[
\begin{bmatrix}P_{11} & P_{12}\\P_{21} & P_{22}\end{bmatrix}
= \begin{bmatrix}W_1 & 0 & 0\\0 & W_2 & 0\\0 & 0 & I\end{bmatrix}
\begin{bmatrix}I & G\\0 & I\\I & G\end{bmatrix},
\]
that
\[
\begin{bmatrix}W_1 & 0 & 0\\0 & W_2 & 0\\0 & 0 & I\end{bmatrix}
\stackrel{s}{=}
\left[\begin{array}{cc|ccc}
A_1 & 0 & B_1 & 0 & 0\\
0 & A_2 & 0 & B_2 & 0\\\hline
C_1 & 0 & D_1 & 0 & 0\\
0 & C_2 & 0 & D_2 & 0\\
0 & 0 & 0 & 0 & I
\end{array}\right],
\]
and that
\[
\begin{bmatrix}I & G\\0 & I\\I & G\end{bmatrix}
\stackrel{s}{=}
\left[\begin{array}{c|cc}
A & 0 & B\\\hline
C & I & D\\
0 & 0 & I\\
C & I & D
\end{array}\right].
\]
The state-space realization of $P$ is obtained using the series connection rule (see Problem 3.6).
Solution 4.12. The solution comes from noting that
\[
\begin{bmatrix}y\\r - y\\u\\d\\r\\y + n\end{bmatrix}
= \begin{bmatrix}
G_d & 0 & 0 & G_t\\
-G_d & I & 0 & -G_t\\
0 & 0 & 0 & I\\
I & 0 & 0 & 0\\
0 & I & 0 & 0\\
G_d & 0 & I & G_t
\end{bmatrix}
\begin{bmatrix}d\\r\\n\\u\end{bmatrix},
\qquad
u = \begin{bmatrix}F & R & -K\end{bmatrix}
\begin{bmatrix}d\\r\\y + n\end{bmatrix}.
\]
Solution 4.13. Follows immediately from Theorem 3.6.1. Alternatively, using the fact that $\|P_{22}K\|_\infty < 1$ with $P_{22}, K \in \mathcal{RH}_\infty$, we observe that
\[
\det(I - \epsilon P_{22}K) \neq 0\quad\text{for all }\epsilon \in [0, 1]\text{ and all }s \in D_R.
\]
This means that $\det(I - P_{22}K)$ has a winding number of zero around the origin. We therefore conclude from the argument principle that $(I - P_{22}K)^{-1}$ is stable and therefore that $\mathcal F_\ell(P, K)$ is stable.
Solution 4.14.

1. $D^*D = I$ follows by calculation.

2. It is immediate that
\[
\mathcal F_\ell(D, f) = \begin{bmatrix}1 & 0\\0 & f\end{bmatrix},
\]
with $\|\mathcal F_\ell(D, f)\| = 1$ for all $|f| < 1$.

3. If $|f| > 1$, it is clear that $\|\mathcal F_\ell(D, f)\| > 1$.
Solution 4.15. We use Lemma 4.4.1: here
\[
D_{11} = \begin{bmatrix}I\\0\end{bmatrix},\qquad
D_{12} = \begin{bmatrix}X\\I\end{bmatrix}
\]
and $Q = F(I - XF)^{-1}$. Set
\[
\tilde D_{12} = \begin{bmatrix}I\\-X^*\end{bmatrix}
\]
and note that
\[
\tilde D_{12}^*\begin{bmatrix}\tilde D_{12} & D_{12}\end{bmatrix}
= \begin{bmatrix}I + XX^* & 0\end{bmatrix}.
\]
We therefore set $D_{12\perp} = \tilde D_{12}(I + XX^*)^{-\frac12}$. Now $D_{12\perp}^*D_{11} = (I + XX^*)^{-\frac12}$. Hence $F$ exists if and only if $\gamma \geq \|(I + XX^*)^{-\frac12}\|$. Now note that
\[
\|(I + XX^*)^{-\frac12}\|
= \bar\sigma\bigl((I + XX^*)^{-\frac12}\bigr)
= \sqrt{\bar\lambda\bigl((I + XX^*)^{-1}\bigr)}
= \frac{1}{\sqrt{\underline\lambda(I + XX^*)}}.
\]
To find $\hat F$, set $\hat Q = X^*(I + XX^*)^{-1}$. Solving for $\hat F$ we obtain
\[
\hat F = (I + 2X^*X)^{-1}X^*.
\]
Solution 4.16.

1. $\Theta$ has the property $\Theta^\sim\Theta = I$. By Theorem 4.3.2, $\|\mathcal F_\ell(\Theta, \gamma^{-1}G)\|_\infty < 1$ if and only if $\|\gamma^{-1}G\|_\infty < 1$. Hence $\|G\|_\infty < \gamma$ if and only if $\|\hat G\|_\infty < \gamma$.

2. Since $G(s) = \mathcal F_\ell\Bigl(\begin{bmatrix}D & C\\B & A\end{bmatrix}, s^{-1}I\Bigr)$,
\[
\begin{bmatrix}\gamma^{-1}\hat D & \hat C\\\hat B & \hat A\end{bmatrix}
= \mathcal C\Bigl(\Theta, \begin{bmatrix}\gamma^{-1}D & C\\\gamma^{-1}B & A\end{bmatrix}\Bigr),
\]
which yields (after some calculation)
\begin{align*}
\hat A &= A + B(\gamma^2I - D^*D)^{-1}D^*C, &
\hat B &= B(\gamma^2I - D^*D)^{-1/2},\\
\hat C &= (\gamma^2I - DD^*)^{-1/2}C, &
\hat D &= 0.
\end{align*}

3. If $A$ is asymptotically stable and $\|\gamma^{-1}G\|_\infty < 1$, then $\|\gamma^{-1}\hat G\|_\infty < 1$ and $\hat G$ is stable by Theorem 4.3.3. We conclude that $\hat A$ is asymptotically stable, since any uncontrollable or unobservable modes of $(\hat A, \hat B, \hat C)$ are eigenvalues of $A$. The converse follows likewise.
Solution 4.17. Observe that
\begin{align*}
G(z) &= D + C(zI - A)^{-1}B\\
&= D + Cz^{-1}(I - Az^{-1})^{-1}B\\
&= \mathcal F_\ell\Bigl(\begin{bmatrix}D & C\\B & A\end{bmatrix}, z^{-1}I\Bigr).
\end{align*}
The result now follows from Theorem 4.3.2, since
\[
\left\|\begin{bmatrix}D & C\\B & A\end{bmatrix}\right\| \leq 1
\quad\text{and}\quad
|z^{-1}| \leq 1\ \text{for all } |z| \geq 1.
\]
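This contractiveness argument can be illustrated numerically: make the block matrix $\begin{bmatrix}D&C\\B&A\end{bmatrix}$ orthogonal (so its norm is exactly one) and sample $|G(z)|$ just outside the unit circle. A sketch with a random orthogonal matrix of my own construction:

```python
import numpy as np

rng = np.random.default_rng(6)
# build a realization whose block matrix [D C; B A] is orthogonal (norm 1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
D, C = Q[:1, :1], Q[:1, 1:]
B, A = Q[1:, :1], Q[1:, 1:]
assert np.linalg.norm(Q, 2) <= 1 + 1e-12

# then |G(z)| <= 1 at sampled points with |z| > 1
for theta in np.linspace(0, 2 * np.pi, 50):
    z = 1.05 * np.exp(1j * theta)
    G = D + C @ np.linalg.inv(z * np.eye(3) - A) @ B
    assert np.abs(G[0, 0]) <= 1 + 1e-9
```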
Solution 4.18. The first step is to find $\hat G(s)$ from $G(z)$ using the bilinear transformation $z = \frac{1+s}{1-s}$:
\begin{align*}
G(z) &= D + C(zI - A)^{-1}B\\
&= D + C\Bigl(\Bigl(\frac{s + 1}{1 - s}\Bigr)I - A\Bigr)^{-1}B\\
&= D + C(1 - s)\bigl(sI - (I + A)^{-1}(A - I)\bigr)^{-1}(I + A)^{-1}B\\
&= D - C(I + A)^{-1}B + 2C(I + A)^{-1}(sI - \hat A)^{-1}(I + A)^{-1}B\\
&= \hat D + \hat C(sI - \hat A)^{-1}\hat B\\
&= \hat G(s),
\end{align*}
in which
\[
\hat A = (I + A)^{-1}(A - I),\quad
\hat B = \sqrt2(I + A)^{-1}B,\quad
\hat C = \sqrt2\,C(I + A)^{-1},\quad
\hat D = D - C(I + A)^{-1}B.
\]
This completes the first part. To prove the second part, we substitute into the continuous bounded real equations:
\begin{align}
0 &= \hat A^*P + P\hat A + \hat C^*\hat C + \hat L^*\hat L \tag{4.3}\\
0 &= \hat C^*\hat D + \hat L^*\hat W + P\hat B \tag{4.4}\\
0 &= \gamma^2I - \hat D^*\hat D - \hat W^*\hat W. \tag{4.5}
\end{align}
Substituting into (4.3) gives
\[
0 = (I + A^*)^{-1}(A^* - I)P + P(A - I)(I + A)^{-1}
+ 2(I + A^*)^{-1}\begin{bmatrix}C^* & L^*\end{bmatrix}
\begin{bmatrix}C\\L\end{bmatrix}(I + A)^{-1},
\]
where $\hat L = \sqrt2\,L(I + A)^{-1}$. Therefore
\begin{align}
0 &= 2(I + A^*)^{-1}\bigl[A^*PA - P + C^*C + L^*L\bigr](I + A)^{-1}\notag\\
\Rightarrow\quad 0 &= A^*PA - P + C^*C + L^*L. \tag{4.6}
\end{align}
Substituting (4.6) into (4.4), with $\hat W = W - L(I + A)^{-1}B$, gives
\begin{align}
0 &= 2(I + A^*)^{-1}\Bigl[\begin{bmatrix}C^* & L^*\end{bmatrix}
\Bigl(\begin{bmatrix}D\\W\end{bmatrix}
- \begin{bmatrix}C\\L\end{bmatrix}(I + A)^{-1}B\Bigr)\Bigr]
+ 2P(I + A)^{-1}B\notag\\
&= C^*D + L^*W + (A^*PA - P)(I + A)^{-1}B + (I + A^*)P(I + A)^{-1}B\notag\\
&= C^*D + L^*W + (A^*PA - P + P + A^*P)(I + A)^{-1}B\notag\\
&= C^*D + L^*W + A^*PB. \tag{4.7}
\end{align}
Finally, we may substitute into (4.5) using (4.6) and (4.7) to obtain
\[
0 = \gamma^2I - D^*D - W^*W - B^*PB. \tag{4.8}
\]
Equations (4.6), (4.7) and (4.8) may be combined as
\[
\begin{bmatrix}A & B\\C & D\\L & W\end{bmatrix}^*
\begin{bmatrix}P & 0 & 0\\0 & I & 0\\0 & 0 & I\end{bmatrix}
\begin{bmatrix}A & B\\C & D\\L & W\end{bmatrix}
= \begin{bmatrix}P & 0\\0 & \gamma^2I\end{bmatrix}.
\]
Note that the same $P$ solves the discrete and continuous bounded real equations.
Solution 4.19. Since $\|G\|_\infty < 1$, it has a minimal realization which satisfies the discrete bounded real equations
\[
\begin{bmatrix}A & B\\C & D\\L & W\end{bmatrix}^*
\begin{bmatrix}P & 0 & 0\\0 & I & 0\\0 & 0 & I\end{bmatrix}
\begin{bmatrix}A & B\\C & D\\L & W\end{bmatrix}
= \begin{bmatrix}P & 0\\0 & I\end{bmatrix}
\]
for certain matrices $P$, $L$ and $W$. We may now select a new state-space basis such that $P = I$. This gives
\[
\begin{bmatrix}A & B\\C & D\\L & W\end{bmatrix}^*
\begin{bmatrix}A & B\\C & D\\L & W\end{bmatrix}
= \begin{bmatrix}I & 0\\0 & I\end{bmatrix},
\]
so that
\[
\left\|\begin{bmatrix}A & B\\C & D\\L & W\end{bmatrix}\right\| = 1.
\]
We may now conclude that
\[
\left\|\begin{bmatrix}A & B\\C & D\end{bmatrix}\right\| \leq 1.
\]
Solutions to Problems in Chapter 5
Solution 5.1. Let $\begin{bmatrix}x_1'&u_1'\end{bmatrix}'$ and $\begin{bmatrix}x_2'&u_2'\end{bmatrix}'$ be any two initial condition and control input pairs, and let $z_1$ and $z_2$ denote the corresponding objectives $\bigl(z = \begin{bmatrix}Cx\\Du\end{bmatrix}\bigr)$. By linearity, the objective signal obtained by using the initial condition and control input pair
\[
\begin{bmatrix}x_\theta\\u_\theta\end{bmatrix}
= \theta\begin{bmatrix}x_1\\u_1\end{bmatrix}
+ (1 - \theta)\begin{bmatrix}x_2\\u_2\end{bmatrix}
\]
is
\[
z_\theta = \theta z_1 + (1 - \theta)z_2.
\]
The cost associated with this initial condition and control input pair is
\[
J_\theta = \int_0^T z_\theta^*z_\theta\,dt.
\]
Let $J_1$ and $J_2$ denote the cost associated with $\begin{bmatrix}x_1'&u_1'\end{bmatrix}'$ and $\begin{bmatrix}x_2'&u_2'\end{bmatrix}'$ respectively. To show convexity, we need to show that
\[
J_\theta \leq \theta J_1 + (1 - \theta)J_2
\]
for any $0 \leq \theta \leq 1$. Now
\[
\theta z_1^*z_1 + (1 - \theta)z_2^*z_2 - z_\theta^*z_\theta
= \theta(1 - \theta)(z_1 - z_2)^*(z_1 - z_2) \geq 0
\]
for any $0 \leq \theta \leq 1$. Integrating from $0$ to $T$, we obtain the desired inequality
\[
J_\theta \leq \theta J_1 + (1 - \theta)J_2,
\]
and we conclude that $J$ is convex.

Since we are free to choose $x_2 = x_1$, $J$ is convex in $u$. Similarly, by choosing $u_2 = u_1$, we see that $J$ is convex in $x_0$.
60 SOLUTIONS TO PROBLEMS IN CHAPTER 5
Solution 5.2.

1. The optimal state trajectory satisfies
\[
\dot x^* = Ax^* + Bu^*,\qquad x(0) = x_0.
\]
If $u = u^* + \eta\tilde u$, then
\[
\frac{d}{dt}(x - x^*) = A(x - x^*) + B\eta\tilde u,\qquad (x - x^*)(0) = 0.
\]
Thus
\[
(x - x^*)(t) = \eta\int_0^t\Phi(t,\tau)B\tilde u\,d\tau = \eta\tilde x(t),
\]
in which
\[
\tilde x(t) = \int_0^t\Phi(t,\tau)B\tilde u\,d\tau
\]
and $\Phi(\cdot,\cdot)$ is the transition matrix corresponding to $A$.

2. Direct substitution of $u = u^* + \eta\tilde u$ and $x = x^* + \eta\tilde x$ into the cost function $J$ yields the stated equation after elementary algebra.

3. Since $u^*$ is minimizing, changing the control to $u^* + \eta\tilde u$ cannot decrease $J$. Therefore, as a function of $\eta$, $J$ must take on its minimum value at $\eta = 0$. Since the cost function is quadratic in $\eta$, with a minimum at $\eta = 0$, the coefficient of the linear term must be zero. That is,
\[
\int_0^T(\tilde x^*C^*Cx^* + \tilde u^*u^*)\,dt = 0.
\]

4. Substituting the formula for $\tilde x$ into the above equation and interchanging the order of integration gives
\[
\int_0^T\tilde u^*(B^*\lambda + u^*)\,dt = 0,
\]
in which $\lambda$ is the adjoint variable defined by
\[
\lambda(t) = \int_t^T\Phi^*(\tau,t)C^*Cx^*(\tau)\,d\tau.
\]
Thus, $B^*\lambda + u^*$ is orthogonal to every $\tilde u \in \mathcal L_2[0,T]$. Hence $B^*\lambda + u^* = 0$ almost everywhere.
LQG CONTROL 61
5. Differentiating with respect to $t$ and using Leibniz's rule, we obtain
\begin{align*}
\dot\lambda(t) &= -C^*(t)C(t)x^*(t) - \int_t^TA^*(t)\Phi^*(\tau,t)C^*(\tau)C(\tau)x^*(\tau)\,d\tau\\
&= -A^*(t)\lambda(t) - C^*(t)C(t)x^*(t).
\end{align*}
The fact that $\frac{d}{dt}\Phi^*(\tau,t) = -A^*(t)\Phi^*(\tau,t)$ has been used; see Problem 3.3 for this. Evaluating $\lambda(T)$, we conclude that the terminal condition $\lambda(T) = 0$ applies.

Substituting $u^* = -B^*\lambda$ into the dynamical equation for the optimal state, we obtain $\dot x^* = Ax^* - BB^*\lambda$. Combining this with the equation for $\lambda$, one obtains the TPBVP.

6. The solution to the TPBVP is given by
\[
\begin{bmatrix}x^*\\\lambda\end{bmatrix}(t)
= \Psi(t,T)\begin{bmatrix}x^*\\\lambda\end{bmatrix}(T),
\]
in which $\Psi$ is the transition matrix associated with the TPBVP dynamics. Imposing the boundary condition $\lambda(T) = 0$, we see that
\[
\begin{bmatrix}x^*\\\lambda\end{bmatrix}(t)
= \begin{bmatrix}\Psi_{11}\\\Psi_{21}\end{bmatrix}(t,T)x^*(T).
\]
Thus $\lambda(t) = \Psi_{21}(t,T)\Psi_{11}^{-1}(t,T)x^*(t)$ for all time for which the inverse exists. It remains to show that $\Psi_{11}(t,T)$ is nonsingular for all $t \leq T$.

Observe that
\[
\frac{d}{d\tau}\bigl(\Psi_{21}^*(\tau,T)\Psi_{11}(\tau,T)\bigr)
= -\Psi_{11}^*(\tau,T)C^*C\Psi_{11}(\tau,T)
- \Psi_{21}^*(\tau,T)BB^*\Psi_{21}(\tau,T).
\]
Integrating from $t$ to $T$ and noting that $\Psi_{21}(T,T) = 0$ yields
\[
\Psi_{21}^*(t,T)\Psi_{11}(t,T)
= \int_t^T\bigl(\Psi_{11}^*(\tau,T)C^*C\Psi_{11}(\tau,T)
+ \Psi_{21}^*(\tau,T)BB^*\Psi_{21}(\tau,T)\bigr)\,d\tau.
\]
Suppose $\Psi_{11}(t,T)v = 0$. Multiplying the above identity by $v^*$ on the left and $v$ on the right, we conclude that $B^*\Psi_{21}(\tau,T)v \equiv 0$ and that $C\Psi_{11}(\tau,T)v \equiv 0$ for all $\tau \in [t,T]$. Now $B^*\Psi_{21}(\tau,T)v \equiv 0$ implies
\[
\frac{d}{d\tau}\Psi_{11}(\tau,T)v = A\Psi_{11}(\tau,T)v.
\]
Recalling that $\Psi_{11}(t,T)v = 0$, we see that $\Psi_{11}(\tau,T)v = 0$ for all $\tau$, since linear differential equations with specified initial conditions have a unique solution. Since $\Psi_{11}(T,T) = I$, we must have $v = 0$, from which we conclude that $\Psi_{11}(t,T)$ is nonsingular.
7. That $P(t) = \Psi_{21}(t,T)\Psi_{11}^{-1}(t,T)$ is the solution to the Riccati differential equation (5.2.5) follows by direct substitution; see Problem 3.21.
Solution 5.3. Write the two Riccati equations
\begin{align*}
PA + A^*P - PB_2B_2^*P + C^*C &= 0,\\
\bar PA + A^*\bar P - \bar PB_2B_2^*\bar P + C^*C &= 0,
\end{align*}
and subtract them to get
\[
(P - \bar P)A + A^*(P - \bar P) - PB_2B_2^*P + \bar PB_2B_2^*\bar P = 0.
\]
Hence
\[
(P - \bar P)(A - B_2B_2^*P) + (A - B_2B_2^*P)^*(P - \bar P)
+ (P - \bar P)B_2B_2^*(P - \bar P) = 0.
\]
Since $P$ is stabilizing, $A - B_2B_2^*P$ is asymptotically stable and we conclude that $P - \bar P \geq 0$.

Suppose $(A, C)$ is detectable and $P \geq 0$ is a solution to the Riccati equation. Write the Riccati equation as
\[
P(A - B_2B_2^*P) + (A - B_2B_2^*P)^*P + PB_2B_2^*P + C^*C = 0.
\]
If $(A - B_2B_2^*P)x = \lambda x$, then we obtain
\[
(\lambda + \bar\lambda)x^*Px + \|B_2^*Px\|^2 + \|Cx\|^2 = 0.
\]
Hence $(\lambda + \bar\lambda)x^*Px \leq 0$. If equality holds, then $Cx = 0$ and $B_2^*Px = 0$, which gives $Ax = \lambda x$, $Cx = 0$, and we conclude that $\mathrm{Re}(\lambda) < 0$ from the detectability of $(A, C)$. If, on the other hand, $(\lambda + \bar\lambda)x^*Px < 0$, we must have $x^*Px > 0$ since $P \geq 0$ and hence $\mathrm{Re}(\lambda) < 0$. Hence $P$ is a stabilizing solution. But the stabilizing solution is unique, so we conclude that it is the only nonnegative solution.
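The ordering $P \geq \bar P$ established above can be illustrated numerically by computing two solutions of the same algebraic Riccati equation from the stable and anti-stable invariant subspaces of the Hamiltonian (a sketch using a double-integrator example of my own choosing):

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B2 = np.array([[0.0], [1.0]])
C = np.eye(2)
H = np.block([[A, -B2 @ B2.T], [-C.T @ C, -A.T]])

def riccati_solution(select):
    """Riccati solution from the invariant subspace picked by `select`."""
    lam, V = np.linalg.eig(H)
    Vsel = V[:, select(lam.real)]
    return np.real(Vsel[2:, :] @ np.linalg.inv(Vsel[:2, :]))

P = riccati_solution(lambda r: r < 0)      # stabilizing solution
Pbar = riccati_solution(lambda r: r > 0)   # anti-stabilizing solution

# both solve PA + A'P - P B2 B2' P + C'C = 0
for X in (P, Pbar):
    res = X @ A + A.T @ X - X @ B2 @ B2.T @ X + C.T @ C
    assert np.linalg.norm(res) < 1e-8

# P is stabilizing and dominates the other solution: P - Pbar >= 0
assert np.max(np.linalg.eigvals(A - B2 @ B2.T @ P).real) < 0
assert np.min(np.linalg.eigvalsh(P - Pbar)) > -1e-8
```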
Solution 5.4.

1. The identity is established by noting that
\[
C^*C = P(sI - A) + (-sI - A^*)P + PB_2B_2^*P,
\]
which gives
\begin{align*}
(-sI - A^*)^{-1}C^*C(sI - A)^{-1}
&= (-sI - A^*)^{-1}P + P(sI - A)^{-1}\\
&\qquad + (-sI - A^*)^{-1}PB_2B_2^*P(sI - A)^{-1},
\end{align*}
and
\begin{align*}
&I + B_2^*(-sI - A^*)^{-1}C^*C(sI - A)^{-1}B_2\\
&\qquad = \bigl(I + B_2^*P(sI - A)^{-1}B_2\bigr)^\sim\bigl(I + B_2^*P(sI - A)^{-1}B_2\bigr).
\end{align*}
Therefore, $\bigl(W(j\omega)\bigr)^*\bigl(W(j\omega)\bigr) \geq I$ for all $\omega$, and hence $\bar\sigma\bigl(S(j\omega)\bigr) \leq 1$ for all $\omega$, which is equivalent to $\|S\|_\infty \leq 1$.

2.
\[
\begin{bmatrix}W\\G\end{bmatrix}
\stackrel{s}{=}
\left[\begin{array}{c|c}A & B_2\\\hline B_2^*P & I\\C & 0\\0 & I\end{array}\right].
\]
It now follows from Problem 3.6 that
\[
GW^{-1}
\stackrel{s}{=}
\left[\begin{array}{c|c}A - B_2B_2^*P & B_2\\\hline C & 0\\-B_2^*P & I\end{array}\right].
\]
Since the Riccati equation can be written in the form
\[
P(A - B_2B_2^*P) + (A - B_2B_2^*P)^*P + PB_2B_2^*P + C^*C = 0,
\]
it follows from Theorem 3.2.1 that $GW^{-1}$ is allpass. Since $P$ is the stabilizing solution, $A - B_2B_2^*P$ is asymptotically stable, $GW^{-1} \in \mathcal{RH}_\infty$ and hence it is contractive in the right-half plane.

3. If $u = -Kx$ is optimal w.r.t. $\int_0^\infty(x^*C^*Cx + u^*u)\,dt$, then $K = B_2^*P$, in which $P$ is the stabilizing solution to the Riccati equation (5.2.29) and Item 1 shows that the inequality holds.

Conversely, suppose $A - B_2K$ is asymptotically stable and
\[
\bigl(I + K(sI - A)^{-1}B_2\bigr)^\sim\bigl(I + K(sI - A)^{-1}B_2\bigr) \geq I.
\]
Then $S = I - K\bigl(sI - (A - B_2K)\bigr)^{-1}B_2$ satisfies $\|S\|_\infty \leq 1$ and the equality version of the bounded real lemma ensures the existence of $P \geq 0$ and $L$ such that
\begin{align*}
P(A - B_2K) + (A - B_2K)^*P + K^*K &= -L^*L,\\
-K + B_2^*P &= 0.
\end{align*}
Substituting $K = B_2^*P$ into the first equation and re-arranging yields (5.2.29), in which $C = L$, and we conclude that $K$ is the optimal controller for the performance index $\int_0^\infty(x^*C^*Cx + u^*u)\,dt$, with $C = L$.

4. The inequality $|1 + b_2^*P(j\omega I - A)^{-1}b_2| \geq 1$ is immediate from the return difference equality. This inequality says that the Nyquist diagram of $b_2^*P(j\omega I - A)^{-1}b_2$ cannot enter the circle of unit radius centered at $-1$. The stated gain and phase margins then follow from the geometry of this situation.
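The return difference inequality in Item 4 can be sampled numerically: solve the LQR Riccati equation (here via the Hamiltonian, as in Problem 3.21) and verify $|1 + k(j\omega I - A)^{-1}b_2| \geq 1$ over a frequency grid (example system of my own choosing):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -0.5]])
b2 = np.array([[0.0], [1.0]])
C = np.eye(2)

# stabilizing ARE solution via the stable Hamiltonian subspace
H = np.block([[A, -b2 @ b2.T], [-C.T @ C, -A.T]])
lam, V = np.linalg.eig(H)
Vs = V[:, lam.real < 0]
P = np.real(Vs[2:, :] @ np.linalg.inv(Vs[:2, :]))
k = b2.T @ P                                   # optimal state-feedback gain

# return difference inequality |1 + k (jwI - A)^{-1} b2| >= 1
for w in np.linspace(-50, 50, 2001):
    L = (k @ np.linalg.inv(1j * w * np.eye(2) - A) @ b2)[0, 0]
    assert abs(1 + L) >= 1 - 1e-9
```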
Solution 5.5. If $\tilde x(t) = e^{\alpha t}x(t)$ and $\tilde u(t) = e^{\alpha t}u(t)$, then
\[
J = \int_0^\infty \tilde x^*C^*C\tilde x + \tilde u^*\tilde u\,dt.
\]
Furthermore,
\[
\dot{\tilde x} = (\alpha I + A)\tilde x + B_2\tilde u
\]
follows from $\dot{\tilde x} = \alpha e^{\alpha t}x + e^{\alpha t}\dot x$. This is now a standard LQ problem in the variables $\tilde x$ and $\tilde u$. Hence $\tilde u = -B_2^*P_\alpha\tilde x$ is the optimal controller, which is equivalent to $u = -B_2^*P_\alpha x$.

The required assumptions are $(\alpha I + A, B_2)$ stabilizable and $(C, \alpha I + A)$ has no unobservable modes on the imaginary axis. Equivalently, we require that $(A, B_2)$ has no uncontrollable modes in $\mathrm{Re}(s) \geq -\alpha$ and that $(C, A)$ has no unobservable modes on $\mathrm{Re}(s) = -\alpha$.

The closed-loop dynamics are $\dot x = (A - B_2B_2^*P_\alpha)x$; the closed-loop poles are (a subset of) the eigenvalues of $(A - B_2B_2^*P_\alpha)$, which are all in $\mathrm{Re}(s) < -\alpha$ because $\alpha I + A - B_2B_2^*P_\alpha$ is asymptotically stable.
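The pole-shifting conclusion can be checked numerically: solve the LQ Riccati equation for the shifted plant $(\alpha I + A, B_2)$ and confirm that the closed-loop poles of the original plant lie to the left of $-\alpha$ (a sketch with an example system of my own choosing, ARE solved via the Hamiltonian):

```python
import numpy as np

alpha = 1.0
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B2 = np.array([[0.0], [1.0]])
C = np.eye(2)

# solve the ARE for the shifted plant (alpha I + A, B2, C)
As = alpha * np.eye(2) + A
H = np.block([[As, -B2 @ B2.T], [-C.T @ C, -As.T]])
lam, V = np.linalg.eig(H)
Vs = V[:, lam.real < 0]
Pa = np.real(Vs[2:, :] @ np.linalg.inv(Vs[:2, :]))

# closed-loop poles of the ORIGINAL plant lie to the left of -alpha
cl = np.linalg.eigvals(A - B2 @ B2.T @ Pa)
assert np.max(cl.real) < -alpha + 1e-9
```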
Solution 5.6.

1. Substitute $u = -Kx$ into the dynamics to obtain
\[
\dot x = (A - B_2K)x + B_1w,\qquad
z = \begin{bmatrix}C\\-DK\end{bmatrix}x.
\]
The result is now immediate from Theorem 3.3.1.

2. Elementary manipulations establish the Lyapunov equation. Theorem 3.1.1 and the asymptotic stability of $A - B_2K$ establishes that $Q - P \geq 0$.

3. $\operatorname{trace}(B_1^*QB_1) = \operatorname{trace}\bigl(B_1^*(Q - P)B_1\bigr) + \operatorname{trace}(B_1^*PB_1)$. Hence the cost is minimized by setting $Q - P = 0$, which we do by setting $K = B_2^*P$.
Solution 5.7.
trace(QS) = trace(QA

P +QPA)
= trace(PQA

+PAQ)
= trace(PR).
The main thing is to recall that trace(XY ) = trace(Y X) for any square matrices
X and Y .
LQG CONTROL 65
Solution 5.8. Let
J(K, x
t
, T, ) =
_
T
t
z

z d +x

(T)x(T).
For any controller, J(K, x
t
, T,
1
) J(K, x
t
, T,
2
), since
1

2
. If we use
K = K

1
, the optimal controller for the problem with terminal-state penalty
matrix
1
, the left-hand side is equal to x

t
P(t, T,
1
)x
t
. Hence
x

t
P(t, T,
1
)x
t
J(K

1
, x
t
, T,
2
)
min
K
J(K, x
t
, T,
2
)
= x

t
P(t, T,
2
)x
t
.
Since x
t
is arbitrary, we conclude that P(t, T,
1
) P(t, T,
2
) for any t T.
Solution 5.9. The case when
A+A

B
2
B

2
+C

C 0
is considered in the text. We therefore consider the case that
A+A

B
2
B

2
+C

C 0.
The same argument as used in the text shows that P(t, T, ) is monotonically non-
increasing as a function of t, and P(t, T, ) is therefore non-decreasing as a function
of T (by time-invariance). It remains to show that P(t, T, ) is uniformly bounded.
Let K be such that AB
2
K is asymptotically stable. Then
J(K, x
0
, T, )
_

0
z

z dt +e
T
x

0
x
0
for some 0. Hence
x

0
P(0, T, )x
0
= min
K
J(K, x
0
, T, )
J(K, x
0
, T, )

_

0
z

z dt +e
T
x

0
x
0

_

0
z

z dt +x

0
x
0
,
which is a uniform bound on x

0
P(0, T, )x
0
. Thus P(t, T, ) is monotonic and
uniformly bounded. Hence it converges to some as T t .
66 SOLUTIONS TO PROBLEMS IN CHAPTER 5
Solution 5.10.
1. By Problem 5.9, = lim
Tt
P(t, T, 0) exists; it is a solution to the alge-
braic Riccati equation by virtue of time-invariance (see the argument in the
text). Also, 0 because the zero terminal condition is nonnegative denite.
Thus P(t, T, 0) converges to a nonnegative denite solution to the algebraic
Riccati equation. We conclude that is stabilizing because when (A, B
2
, C)
is stabilizable and detectable, the stabilizing solution is the only nonnegative
denite solution.
2. Let
X = A+A

B
2
B

2
+C

C.
Since X is symmetric, it has the form
X = V

_
_
I 0 0
0 0 0
0 0 I
_
_
V.
Let
R = V

_
_
I 0 0
0 0 0
0 0 I
_
_
V,
and let be the stabilizing solution to the algebraic Riccati equation
A+A

B
2
B

2
+C

C +R = 0,
which exists under the stated assumptions, since R 0. This implies that
satises the inequality
A+A

B
2
B

2
+C

C 0.
It remains to show that . By subtracting the equation dening X from
the Riccati equation dening , we obtain
()(AB
2
B

2
) + (AB
2
B

2
)

()
+()B
2
B

2
() +R +X = 0.
Since R + X 0 and A B
2
B

2
is asymptotically stable, we conclude that
0.
3. Let 0 be arbitrary. Let be as constructed above. Then P(t, T, 0)
P(t, T, ) P(t, T, ) for all t T, by Problem 5.8. Since P(t, T, 0) and
P(t, T, ) both converge to the stabilizing solution to the algebraic Riccati
equation, we conclude that P(t, T, ) also converges to the stabilizing solution
to the algebraic Riccati equation.
LQG CONTROL 67
-2
-1.5
-1
-0.5
0
0.5
1
0 1 2 3 4 5 6 7 8 9 10
horizon length T
Solution 5.11. That the stated control law is optimal is immediate; that it is
constant if the problem data are constant is also obvious, since in this case P(t, t +
T, ) = P(0, T, ), which is independent of t. A counter-example to the fallacious
conjecture is
A =
_
1 0
1 1
_
, B
2
=
_
1
0
_
with C arbitrary and = I. Then F
T
[
T=0
= B

2
and
AB
2
B

2
=
_
1 0
1 1
_
,
which is not asymptotically stable. The graph shows a plot of the real parts of the
closed-loop poles versus the horizon length T if we take C =
_
1 1

and = 1.
Solution 5.12. Kalman lter is

x = A x +H(y C x),
in which H = QC

. Hence the state estimation error equation is


x

x = (AHC)(x x) +
_
B HD

_
w
v
_
.
68 SOLUTIONS TO PROBLEMS IN CHAPTER 5
The innovations process is
= C(x x) +Dv.
Therefore, the system A mapping
_
w

to is given by the realization


A
s
=
_
AHC
_
B HD

C
_
0 D

_
.
Theorem 3.2.1 and the identity
(AHC)Q+Q(AHC)

+HH

+BB

= 0
shows that AA

= I. Hence the power spectrum of is I, which shows that is


white, with unit variance.
Solution 5.13. Combine the arbitrary lter given in the hint with the state dy-
namics to obtain
d
dt
_
_
x
x

_
_
=
_
_
A 0 0
QC

C AQC

C 0
GC 0 F
_
_
_
_
x
x

_
_
+
_
_
B 0
0 QC

D
0 GD
_
_
_
w
v
_
x x =
_
I JC H
1
H
2

_
_
x
x

_
_

_
0 JD

_
w
v
_
.
Since v is white noise and DD

= I, J 0 is necessary and sucient for c( x(t)


x(t))( x(t)x(t))

to be nite. Set J 0 and denote the matrices in the realization


above by

A,

B,

C. We then have that
c( x(t) x(t))( x(t) x(t))

=

C(t)

P(t)

C

,
in which

P is the solution to the equation

P =

A

P +

P

A

+

B

B

,

P(0) =
_
_
P
0
0 0
0 0 0
0 0 0
_
_
.
Elementary algebra reveals that

P has the form

P =
_
_
P P Q X
P Q P Q X
X

Y
_
_
,
in which P is the solution to

P = AP +PA

+BB

, P(0) = P
0
,
LQG CONTROL 69
and X and Y satisfy the linear matrix dierential equations

X = AX +PC

+XF

, X(0) = 0

Y = FY +Y F

+GG

+GCX +XC

, Y (0) = 0.
Since

P 0, we may write the Schur decomposition

P =
_
I Z
0 I
_ _
R 0
0 Y
_ _
I 0
Z

I
_
,
in which
Z =
_
X
X
_
Y
#
R =
_
P P Q
P Q P Q
_
ZY Z

=
_
W +Q W
W W
_
,
in which W = PQXY
#
X

and ()
#
denotes the Moore-Penrose pseudo-inverse.
1
It now follows that

C(t)

P(t)

C

= (terms independent of H
2
) + (H
2
H

2
)Y (H
2
H

2
)

,
in which H

2
= (I H
1
)XY
#
. Now Y 0, since

P 0, and it follows that an
optimal choice for H
2
is H

2
. With this choice for H
2
, the cost is given by

C(t)

P(t)

C

=
_
I H
1

_
W +Q W
W W
_ _
I
H

1
_
= Q+ (I H
1
)W(I H
1
)

.
From

P 0, it follows that R 0 and hence that W 0. Therefore, Q is the
optimal state error covariance and H

1
= I is an optimal choice for H
1
. This gives
H

2
= 0 as an optimal choice for H
2
.
We now note that if H
2
= 0, the values of F and G are irrelevant, and an optimal
lter is therefore

x = A x +QC

(y C x), which is the Kalman lter.


Solution 5.14. The problem data are
A =
_
1 1
0 1
_
, B
1
=

_
1 0
1 0
_
, B
2
=
_
0
1
_
C
1
=

_
1 1
0 0
_
, D
11
= 0, D
12
=
_
0
1
_
1
It is a fact that the Schur decomposition using a pseudo-inverse can always be nonnegative
denite matrices. If
_
A B
B

C
_
0, then Cv = 0 Bv = 0, which is what makes it work.
70 SOLUTIONS TO PROBLEMS IN CHAPTER 5
C
2
=
_
1 0

, D
21
=
_
0 1

, D
22
= 0.
Since D

12
C
1
= 0, there are no cross-terms in the control Riccati equation, which
is
XA+A

X XB
2
B

2
X +C

1
C
1
= 0.
The stabilizing solution is easily veried to be
X =
_
2 1
1 1
_
.
Thus F = B

2
X =
_
1 1

.
Similarly, since B
1
D

21
= 0, the measurement and process noise are uncorrelated
and the Kalman lter Riccati equation is
AY +Y A

Y C

2
C
2
Y +B
1
B

1
= 0.
It is easy to check that the stabilizing solution is
Y =
_
1 1
1 2
_
.
Hence H = Y C

2
=
_
1
1
_
.
The optimal controller is therefore given by

x = A x +B
2
u +H(y C
2
x)
u = F x.
Rewriting this, we obtain
K

s
=
_
AB
2
F HC
2
H
F 0
_
.
Now
AB
2
F HC
2
=
_
1 1
( +) 1
_
.
Evaluating F(sI (AB
2
F HC
2
)
1
H, we obtain
K

=
(1 2s)
s
2
+ ( + 2)s + 1 +
.
The optimal cost is given by
_
trace(B

1
XB
1
) + trace(FY F

) =
_
5( +).
The optimal cost is monotonically increasing in both and .
Solutions to Problems in
Chapter 6
Solution 6.1.
1. This follows by replacing u with u and elementary algebra.
2. This follows from Theorems 6.2.1 and 6.2.4.
3. A direct application of (6.3.25) gives
_

x
u
_
w w

x x
_
_

_
=
_

AB
2
B

2
P
_
0 B
1

B
2
0
_
B

2
P 0

I
_
0
I
_ _

2
B

1
P I
I 0
_ _
0
0
_
_

_
_

_
x
_
x
w
_
r
_

_
r =
_
U V

_
w w

x x
_
.
Since u = u D

12
C
1
x, we obtain
_

x
u
_
w w

x x
_
_

_
=
_

_
AB
2
F
_
0 B
1

B
2
0
_
F 0

I
_
0
I
_ _

2
B

1
P I
I 0
_ _
0
0
_
_

_
_

_
x
_
x
w
_
r
_

_
r =
_
U V

_
w w

x x
_
,
in which F = D

12
C
1
+B

2
P.
4. See Section 5.2.3.
5. There exists an X

satisfying

AX

+

A

(B
2
B

2
B
1
B

1
)X

+

C


C = 0
such that

A(B
2
B

2
B
1
B

1
)X

is asymptotically stable and X

0.
71
72 SOLUTIONS TO PROBLEMS IN CHAPTER 6
Solution 6.2. Using the properties of (see Problem 3.3),

(t) = B(t)u(t) +
_
T
t
d
dt
_

(, t)B()u()
_
d A

(t)

(T, t)
T
= B(t)u(t)
_
T
t
A

(t)

(, t)B()u() d A

(t)

(T, t)
T
= B(t)u(t) A

(t)(t).
Setting B = C

C and
T
= x

(T) shows that (6.2.7) satises (6.2.14).


Solution 6.3. For any vector z and any real number , we have the identity
_
z
1
+ (1 )z
2
_

_
z
1
+ (1 )z
2
_
z

1
z
1
(1 )z

2
z
2
= (1 )(z
1
z
2
)

(z
1
z
2
). (6.1)
1. Suppose z is the response to inputs u and w, and z is the response to inputs
u and w. The response to inputs u +(1 ) u and w is z

= z +(1 ) z.
Hence, for any [0, 1],
J(u + (1 )u, w) =
_
T
0
z

2
w

wdt

_
T
0
z

z + (1 ) z

z
2
w

wdt, by (6.1)
=
_
T
0
z

z
2
w

wdt + (1 )
_
T
0
z

z
2
w

wdt
= J(u, w) + (1 )J( u, w).
That is, J is convex in u.
2. Set u = u

= B

2
Px = K

x. Then J(K

, w) =
2
_
T
0
(ww

(ww

) dt,
in which w

=
2
B

1
Px. Let W be the closed-loop map from w to w w

,
which is linear. Let w (w w

) and w ( w w

). That is, w

is
produced by the input w, with the controller K

in place, and w

is produced
by the input w, with with the controller K

in place. Then w+(1 ) w


(w w

) + (1 )( w w

). Thus
J(K

, w + (1 ) w)
=
2
_
T
0
_
_
(w w

) + (1 )( w w

)
_

_
(w w

) + (1 )( w w

)
_
_
dt

2
_
T
0
(w w

(w w

) + (1 )( w w

( w w

) dt
= J(u

, w) + (1 )J(u

, w).
FULL-INFORMATION 1

CONTROL 73
That is, J is concave in w.
3. Suppose K is a linear, full-information controller that makes J strictly concave
in w. That is,
J(K, w + (1 ) w) > J(K, w) + (1 )J(K, w) (6.2)
for all w ,= w and all (0, 1). Let w ,= w, (0, 1) and let z and z be
the responses of the closed-loop system R
zw
for inputs w and w respectively.
From (6.2) and the identity (6.1), we conclude that
(1 )
_
T
0
_
(z z)

(z z)
2
(w w)

(w w)
_
dt > 0.
Taking w = 0, which implies z = 0, we see that
_
T
0
z

z
2
w

wdt < 0 for all w ,= 0.


Therefore, |R|
[0,T]
< and we conclude that P(t) exists on [0, T].
Solution 6.4. Since R
zw
is causal and linear, (R
zw
) < if and only if
|R
zw
|
[0,T]
< for all nite T. Hence, there exists a controller such that (R
zw
) <
if and only if the Riccati dierential equation


P = PA+A

P P(B
2
B

2
B
1
B

1
)P +C

C, P(T) = 0
has a solution on [0, T] for all nite T.
Solution 6.5.
1. P(t, T, ) =
2
(t)
1
1
(t) is a solution to the Riccati equation provided
d
dt
_

1

2
_
= H
_

1

2
_
,
_

1

2
_
(T) =
_
I

_
. (6.3)
(see Section 6.2.3 for details.) We therefore verify that the given formulas for

1
and
2
do indeed satisfy this linear dierential equation. We are going to
nd the solutions
i
and
2
via the change of variables
_

2
_
= Z
1
_

1

2
_
,
since Z
1
HZ block-diagonalizes H.
74 SOLUTIONS TO PROBLEMS IN CHAPTER 6
Let

1
(t) and

2
(t) be the solutions to
d
dt
_

2
_
=
_
0
0
_ _

2
_
,
_

2
_
(T) = Z
1
_
I

_
.
That is,

1
(t) = e
(tT)

1
(T)

2
(t) = e
(tT)

2
(T).
The boundary condition can be written as
Z
_

2
_
(T) =
_
I

_
,
from which we obtain
Z
11

1
(T) +Z
12

2
(T) = I
Z
21

1
(T) +Z
22

2
(T) = .
Multiplying the rst equation by and subtracting from the second, we
obtain
(Z
21
Z
11
)

1
(T) + (Z
22
Z
12
)

2
(T) = 0.
Therefore,

2
(T) = (Z
22
Z
12
)
1
(Z
21
Z
11
)

1
(T)
= X

1
(T).
Hence

2
(t) = e
(tT)

2
(T)
= e
(tT)
X

1
(T)
= e
(Tt)
Xe
(Tt)

1
(t).
Then
_

1

2
_
= Z
_

2
_
=
_
Z
11
Z
12
Z
21
Z
22
_ _
I
e
(Tt)
Xe
(Tt)
_

1
(t)
is the solution to (6.3).
FULL-INFORMATION 1

CONTROL 75
2. From HZ = Z
_
0
0
_
, we obtain
C

CZ
12
+A

Z
22
= Z
22
. (6.4)
Hence
Z

12
C

CZ
12
+Z

12
A

Z
22
= Z

12
Z
22
= Z

22
Z
12
.
Suppose Z
22
x = 0. Multiplying the above equation on the left by x

and on
the right by x we see that CZ
12
x = 0. Now multiply (6.4) on the right by x
to obtain Z
22
x = 0. We can now multiply on the left by x

and on the
right by x, and so on, to obtain
Z
22
x = 0
_
Z
22
CZ
12
_

k
x = 0, k = 0, 1, 2, . . . .
We rst prove that Z
22
is nonsingular if (A, C) is detectable. Suppose, to ob-
tain a contradiction, that x ,= 0 and Z
22
x = 0. Then, by the above reasoning,
_
,
_
Z
22
CZ
12
__
is not observable. Therefore, there exists a y ,= 0 such that
_
_
I
Z
22
CZ
12
_
_
y = 0.
Note that R
e
() 0 because R
e
(
i
()) 0. From the (1, 1)-partition of
HZ = Z
_
0
0
_
, we obtain AZ
12
y = Z
12
y. Therefore,
(I +A)Z
12
y = 0
CZ
12
y = 0.
Hence Z
12
y = 0, since (A, C) is detectable. This gives
_
Z
12
Z
22
_
y = 0, which
implies y = 0, since Z is nonsingular. This contradicts the hypothesis that
there exists an x ,= 0 such that Z
22
x = 0 and we conclude that Z
22
is nonsin-
gular.
We now prove the rank defect of Z
22
(the dimension of ker(Z
22
)) is equal to
the number of undetectable modes of (A, C). Let the columns of V be a basis
for ker(Z
22
). Arguments parallel to those above show that
_
Z
22
CZ
12
_

k
V = 0, k = 0, 1, 2, . . . .
Furthermore AZ
12
V = Z
12
V = Z
12
V (for some ), so Z
12
V is an
unstable A-invariant subspace contained in ker(C). That is, Z
12
V is a subset
76 SOLUTIONS TO PROBLEMS IN CHAPTER 6
of the undetectable subspace of (A, C). Since Z is nonsingular, rankZ
12
V =
rankV = dimker(Z
22
). Thus (A, C) has at least as many undetectable modes
as the rank defect of Z
22
. For the converse, suppose W is a basis for the
undetectable subspace of (A, C). Then
AW = W, R
e

i
() > 0,
CW = 0.
(Strict inequality holds because we assume that (A, C) has no unobservable
modes on the imaginary axis.) Let
_
X
1
X
2
_
= Z
1
_
W
0
_
.
Then
_
X
1
X
2
_
= Z
1
H
_
W
0
_
=
_
0
0
_
Z
1
_
W
0
_
=
_
X
1
X
2
_
.
Since R
e

i
() > 0 and R
e

i
() 0, X
1
= 0. From
_
W
0
_
= Z
_
X
1
X
2
_
= Z
_
0
X
2
_
=
_
Z
12
X
2
Z
22
X
2
_
,
we see that Z
22
X
2
= 0 and that W = Z
12
X
2
. We conclude that the dimension
of the ker Z
22
is (at least) the number of undetectable modes of (A, C).
3. Multiply HZ = Z
_
0
0
_
on the right by
_
Z
1
11
0
_
. The upper block of
the resulting equation is A(B
2
B

2
B
1
B

1
) = Z
11
Z
1
11
.
Solution 6.6.
1. Suppose Px = 0, x ,= 0. Multiplying the Riccati equation by x

on the left
and by x on the right reveals that Cx = 0. Now multiplying by x on the right
reveals that PAx = 0. Thus ker(P) is an A-invariant subspace. Hence, there
exists a y ,= 0 such that Py = 0, Cy = 0 and Ay = y, which contradicts
the assumed observability of (A, C). Thus (A, C) observable implies P is
nonsingular.
FULL-INFORMATION 1

CONTROL 77
2. (ASP) is asymptotically stable.
Suppose (A, C) has no stable unobservable modes. If Px = 0 for some x ,= 0,
then (as in part 1), there exists a y ,= 0 such that Py = 0, Cy = 0 and
Ay = y. Thus (A SP)y = Ay = y, which implies R
e
() < 0, since
ASP is asymptotically stable. This contradicts the assumption that (A, C)
has no stable unobservable modes and we conclude that P is nonsingular.
Conversely, suppose P is nonsingular. If Ax = x, Cx = 0 for some x ,= 0,
then multiplying the Riccati equation on the right by x results in (A
SP)

Px = Px. Since (A SP) is asymptotically stable and P is non-


singular, we have R
e
() > 0. This shows that any unobservable mode is in
the right-half plane, which is equivalent to the proposition that (A, C) has no
unobservable modes that are stable.
3. The fact that the given P satises the Riccati equation is easily veried. Also,
since
ASP =
_
A
11
S
11
P
1
0
A
21
S

12
P
1
A
22
_
,
we see that ASP is asymptotically stable.
Solution 6.7. Since (A, C) is observable, X and Y are nonsingular (see Prob-
lem 6.6). Therefore,
AX
1
+X
1
A

S +X
1
C

CX
1
= 0
AY
1
+Y
1
A

S +Y
1
C

CY
1
= 0.
Subtract these to obtain
AZ +ZA

+X
1
C

CX
1
Y
1
C

CY
1
= 0
in which Z = X
1
Y
1
. Some elementary algebra reveals that this equation can
be re-written as
(A+X
1
C

C)Z Z(A+X
1
C

C)

+ZC

CZ = 0.
Now (A + X
1
C

C)

= X(A SX)X
1
, so (A + X
1
C

C) is asymptotically
stable. Hence, Z 0, which is to say X
1
Y
1
. Since X 0 and Y 0, this is
equivalent to Y X.
This problem shows that the stabilizing solution is the smallest of any nonneg-
ative denite solutions.
78 SOLUTIONS TO PROBLEMS IN CHAPTER 6
Solution 6.8.
1.
_
I +B

2
(sI A

)
1
PB
2
__
I +B

2
P(sI A)
1
B
2
_
= I +B

2
P(sI A)
1
B
2
+B

2
(sI A

)
1
PB
2
+B

2
(sI A

)
1
_
P(sI A) (sI A

)P
+C

C +
2
PB
1
B

1
P
_
(sI A)
1
B
2
= I +B

2
(sI A

)
1
_
C

C +
2
PB
1
B

1
P
_
(sI A)
1
B
2
.
2. Immediate from setting B
2
= b
2
in Part 1.
3. The Nyquist diagram of b

2
P(sI A)
1
b
2
cannot enter the circle of radius
one, centered on s = 1. The stated gain and phase margins follow from this
fact. (see Problem 5.4 for more details.)
Solution 6.9. Substituting u = px and using x = (1
2
)q, we have
(1
2
) q =
_

2
+
_
1 +c
2
(1
2
)
_
q +w
z =
_
c(1
2
)
(1 +
_
1 +c
2
(1
2
))
_
q.
When 1, q = w/2 and z =
_
0
w
_
.
Solution 6.10. X(t) = P(t, T, P
2
) satises the equation


X = X

A+

A

X +X(B
2
B

2
B
1
B

1
)X, X(T) = 0. (6.5)
in which

A = A (B
2
B

2

2
B
1
B

1
). Suppose
_
P(t

)
_
x = 0 for some t

.
Then x

(6.5)x yields x


X(t

)x = 0 and it follows that



X(t

)x = 0, since

X 0 by
the monotonicity property of P(t, T, P
2
). Now (6.5)x yields X(t

Ax = 0 and we
conclude that ker X(t

) is an

A invariant subspace. Therefore, there exists a y such
that X(t

)y = 0 and

Ay = y. (It follows as before that

Xy = 0.) Hence X(t)y is
a solution to = 0, (t

) = 0 and we conclude that X(t)y = 0 for all t. Therefore,


without loss of generality (by a change of state-space variables if necessary), P(t)
has the form
P(t) =
_

1
P
1
(t) 0
0 0
_
, for all t T, (6.6)
FULL-INFORMATION 1

CONTROL 79
in which
1
P
1
(t) is nonsingular for all t. Furthermore, since ker( P(t)) is

A
invariant

A =
_

A
11
0

A
21

A
22
_
.
The asymptotic stability of the

A
11
follows as in the text. We therefore need to
establish the asymptotic stability of

A
22
. Setting t = T in (6.6), we see that
P
2
=
_
(P
2
)
11
0
0 0
_
.
From equation (6.3.20), (P P
2
)y = 0 implies B

1
Py = 0. Therefore,

A
22
=
(AB
2
B

2
P)
22
, which is asymptotically stable by Lemma 6.3.4.
Solution 6.11.
1. From the Riccati equation, we have (A+P
1
2
C

C) = P
1
2
(AB
2
B

2
P
2
)

P
2
,
so (A+P
1
2
C

C) is asymptotically stable.
2. Suppose a stabilizing, nonnegative denite P exists. Since P P
2
> 0, P is
nonsingular. Therefore, we may write
AP
1
2
+P
1
2
A

B
2
B

2
+P
1
2
C

CP
1
2
= 0
AP
1
+P
1
A

B
2
B

2
+
2
B
1
B

1
+P
1
C

CP
1
= 0.
Subtracting these equations and a little algebra yields
0 = (A+P
1
2
C

C)(P
1
2
P
1
) + (P
1
2
P
1
)(A+P
1
2
C

C)

2
B
1
B

1
(P
1
2
P
1
)C

C(P
1
2
P
1
).
Multiplying by
2
and dening Y =
2
(P
1
2
P
1
) yields the given Riccati
equation for Y . From the Riccati equation for P, we see that
(A+P
1
2
C

C) +
2
Y C

C = (A+P
1
C

C)
= P
1
_
A(B
2
B

2
B
1
B

1
)P
_

P.
Hence Y is the stabilizing solution. Since
2
I P
2
Y =
2
P
2
P
1
and P and
P
2
are positive denite, we conclude that
2
> (P
2
Y ).
Conversely, suppose Y is a stabilizing solution to the stated Riccati equation
and
2
> (P
2
Y ). Then X = P
1
2

2
Y is positive denite and satises
AX +XA

B
2
B

2
+
2
B
1
B

1
+XC

CX = 0.
Hence P = X
1
satises the Riccati equation (6.3.5). Furthermore,
(A+P
1
2
C

C) +
2
Y C

C = (A+P
1
C

C)
= P
1
_
A(B
2
B

2
B
1
B

1
)P
_

P,
so P is the stabilizing solution.
80 SOLUTIONS TO PROBLEMS IN CHAPTER 6
3. A suitable controller exists if and only if a stabilizing, nonnegative denite
solution to the Riccati equation (6.3.5) exists, which we have just shown is
equivalent to the existence of a stabilizing solution Y such that (P
2
Y ) <
2
.
Since C = 0 and A is asymptotically stable, Y is the controllability gramian
of (A, B
1
). Therefore, a suitable controller exists if and only if
2
> (P
2
Y ).
In the case that C = 0, this result gives a formula for the optimal performance
level.
Solution 6.12.
1.
|z|
2
2

2
|w|
2
2
=
_

0
(z

z
2
w

w) dt
=
_

0
_
z
w
_

_
I 0
0
2
I
_ _
z
w
_
dt
=
_

0
_
w
u
_

JG
_
w
u
_
dt
=
1
2
_

_
w
u
_

JG
_
w
u
_
d
by Parsevals theorem.
2.
G

JG
s
=
_

_
A 0 B
1
B
2
C

C A

0 0
0 B

1

2
I 0
0 B

2
0 I
_

_
s
=
_

_
A 0 B
1
B
2
C

C PAA

P A

PB
1
PB
2
B

1
P B

1

2
I 0
B

2
P B

2
0 I
_

_
s
=
_

_
A 0 B
1
B
2
P(B
2
B

2
B
1
B

1
)P A

PB
1
PB
2
B

1
P B

1

2
I 0
B

2
P B

2
0 I
_

_
= W

JW.
In the above,

J =
_
I 0
0
2
I
_
.
FULL-INFORMATION 1

CONTROL 81
The dimension of the
2
I-block is the same as the dimension of w and the
dimension of the I-block is the dimension of u. (In J, the dimension of the
I-block is the dimension of z.)
3. The A-matrix of W
1
is

A = A(B
2
B

2
B
1
B

1
)P which is asymptotically
stable if P is the stabilizing solution. Hence W
1
1

. Using Problem 3.6,


we obtain
GW
1
s
=
_

A B
2
B
1
C 0 0
DB

2
P D 0

2
B

1
P 0 I
_

_
,
Hence GW
1
1

.
4. By direct evaluation,
(GW
1
)

J(GW
1
)
=

J +
_
B

2
B

1
_
( sI

A

)
1
_
C

C +P(B
2
B

2
B
1
B

1
)P
( sI

A

)P P(sI

A)
_
(sI

A)
1
_
B
2
B
1

=

J (s + s)
_
B

2
B

1
_
( sI

A

)
1
P(sI

A)
1
_
B
2
B
1


J, for (s + s)P 0.
The equation
_
u u

w w

_
= W
_
w
u
_
follows immediately fromu

= B

2
Px
and w

=
2
B

1
Px.
If u = u

, then
_
W
11
W
12

_
w
u

_
= 0. Hence u

= W
1
12
W
11
w.
(Note: It can be shown that the J-lossless property implies that W
1
12
is
nonsingular.)
Solution 6.13.
1. Suppose there exists a measurement feedback controller u = Ky that achieves
the objective. Then
u = K
_
C
2
I

_
x
w
_
is a full-information controller that achieves the objective. Therefore, P(t)
exists.
82 SOLUTIONS TO PROBLEMS IN CHAPTER 6
Conversely, if P(t) exists, then u = B

2
Px is a controller that achieves the
objective. Now consider the measurement feedback controller dened by

x = A x +B
1
(y C
2
x) +B
2
u, x(0) = 0
u = B

2
P x.
Note that x is a copy of x, since
d
dt
( x x) = (AB
1
C
2
)( x x), x(0) x(0) = 0.
Consequently, the measurement feedback controller u = B

2
P x generates
the same control signal and hence the same closed loop as the controller u =
B

2
Px. Therefore, it is a measurement feedback controller that achieves the
objective.
The generator of all controllers is obtained by noting that all closed-loops
generated by full-information controllers are generated by uu

= U(ww

),
in which u

= B

2
Px and w

=
2
B

1
Px. Replacing x with x and replacing
w by y C
2
x results in the LFT
_
_

x
u
w w

_
_
=
_
_
AB
1
C
2
B
2
B

2
P B
1
B
2
B

2
P 0 I
(C
2
+
2
B

1
P) I 0
_
_
_
_
x
y
r
_
_
,
r = U(w w

).
2. Since the measurement feedback controller u = B

2
x generates the same
closed-loop as the controller u = B

2
x, the closed loop generated by u =
B

2
x is stable. To conclude internal stability, we need to show that no unsta-
ble cancellations occur. The cancellations which occur are at the eigenvalues
of A B
1
C
2
, since these are uncontrollable modes (see the error dynamics
equation in the preceding part). Thus, the measurement feedback controller
is internally stabilizing if and only if AB
1
C
2
is asymptotically stable.
(You may like to connect the controller generator K
a
to the generalized plant
using the inter-connection formula of Lemma 4.1.1 and verify that the modes
associated with AB
1
C
2
are uncontrollable).
Solution 6.14. Let A = diag(s
i
), and let G

and H

be the matrices with rows


g

i
and h

i
. The dynamics are therefore
x = Ax H

w +G

u.
FULL-INFORMATION 1

CONTROL 83
1. Recall that any control signal (and hence any closed-loop transfer function
matrix R) that can be generated by a stabilizing full-information controller
can be generated by
u = Fx +Uw,
in which F is a stabilizing state feedback and U 1

(see Section 4.2.2).


Taking the Laplace transform of the dynamics and substituting the control
law u = Fx +Uw, we obtain
x =
_
sI (AG

F)
_
1
(G

U H

)w.
The closed-loop R maps w to u. Hence
R = U F
_
sI (AG

F)
_
1
(G

U H

). (6.7)
Now note the identity
I =
_
sI (AG

F)
__
sI (AG

F)
_
1
= (sI A)
_
sI (AG

F)
_
1
+G

F
_
sI (AG

F)
_
1
.
Therefore G

R is given by
G

R = H

+ (sI A)
_
sI (AG

F)
_
1
(G

U H

). (6.8)
Suppose R is a closed-loop system generated by a stabilizing, full-information
controller. Then G

R is given by (6.8) for some U 1

and some F
such that A G

F is asymptotically stable. This implies that the zeros of


sI A, which are in the right-half plane, cannot be cancelled by the poles of
_
sI (AG

F)
_
1
(G

UH

), which are in the left-half plane. Hence, since


the i
th
row of sI A is zero for s = s
i
, we obtain the interpolation equation
g

i
R = h
i
.
Conversely, suppose R satises the interpolation constraints. We want R
to be the map from w to u for some stabilizing controller. We therefore
back solve for U and then show that the satisfaction of the interpolation
constraints ensures that U is stable. To back solve for U, simply note that
R : w u implies that sx = Ax + (G

RH

)w. Hence u +Fx is given by


u + Fx =
_
R+ F(sI A)
1
(G

RH

)
_
w. Therefore, U, the map from w
to u +Fx, is given by
U = R+F(sI A)
1
(G

RH

).
Since the i
th
row of G

R H

whenever i
th
row of (sI A) is also zero,
namely when s = s
i
, we conclude that U 1

.
84 SOLUTIONS TO PROBLEMS IN CHAPTER 6
2. Using the result of Item 1, we can determine the existence of an interpolating
R such that |R|

< by invoking our full-information 1

control results.
Thus, R exists if and only if there exists a stabilizing solution to the Riccati
equation
PA+A

P P(G

G
2
H

H)P = 0
such that P 0.
Suppose Rexists. Then a stabilizing P 0 exists. Because all the eigenvalues
of A are in the right-half plane, we must have P > 0 (see Problem 6.6). Dene
M = P
1
. Then M > 0 satises the equation
AM +MA

G+
2
H

H = 0,
from which it follows that
M
ij
=
g

i
g
j

2
h

i
h
j
s
i
+ s
j
.
Conversely, suppose M given by this formula is positive denite and dene
P = M
1
> 0. Then A(G

G
2
H

H)P = P
1
A

P, which is asymp-
totically stable. Thus P = M
1
is a positive denite, stabilizing solution to
the Riccati equation and we conclude that R exists.
3. Substituting into the generator of all closed-loops for the full-information
problem, we see that all solutions are generated by the LFT R = T

(R
a
, U),
in which U 1

, |U|

< and
R
a
s
=
_
_
AG

GP H

GP 0 I

2
HP I 0
_
_
.
For example, solutions to the scalar interpolation problem r(1) = 1 such that
|r|

< exist when (1


2
)/(1 + 1) > 0, i.e., > 1. All solutions are
generated by T

(r
a
, u), in which
r
a
s
=
_
_
(1 +
2
)/(1
2
) 1 1
2/(1
2
) 0 1
2
2
/(1
2
) 1 0
_
_
.
It is easy to verify that the r
a11
=
2
s(1
2
)+1+
2
, which clearly interpo-
lates the data and |r
a11
|

= 2/(1 +
2
), which is less than provided
> 1. It is also easy to see that r
a12
(1) = 0, from which it follows that
T

(r
a
, u)(1) = r
a11
(1) = 1. Note that since r(1) = 1 is a one-point inter-
polation problem,
opt
1 follows from the maximum modulus principle,
and consideration of the constant interpolating function r = 1 shows that

opt
= 1. Can you explain what happens in the parametrization of all sub-
optimal ( > 1) solutions above as 1?
FULL-INFORMATION 1

CONTROL 85
4. Let AX XA

+G

G = 0 and AY Y A

+H

H = 0. Note that X 0
and Y 0, since A is asymptotically stable. Then M = X
2
Y =

2
X(
2
I X
1
Y ). Therefore,
opt
=
_

max
(X
1
Y ). Note that X is
nonsingular provided none of the g
i
s is zero.
Solution 6.15.
1. Stability of the loop is an immediate consequence of the small gain theorem.
2. Immediate from the full-information synthesis results.
Solution 6.16.
1. Completing the square with the Riccati equation yields
|z|
2
2

2
|w|
2
2
= x

0
Px
0
+|u u

|
2
2

2
|w w

|
2
2
.
Setting u = u

and w 0, we see that |z|


2
2
= x

0
Px
0

2
|w

|
2
2
x

0
Px
0
.
2. The closed-loop R
zw
with u = u

is given by
x = (AB
2
B

2
P)x +B
1
w
z =
_
C
DB

2
P
_
x.
Hence, by Theorem 3.3.1, |R
zw
|
2
2
= trace(B

1
QB
1
), in which Q is the observ-
ability gramian, which satises
Q(AB
2
B

2
P) + (AB
2
B

2
P)

Q+PB
2
B

2
P +C

C = 0.
The Riccati equation for P can be written as
P(AB
2
B

2
P) + (AB
2
B

2
P)

P +P(B
2
B

2
+
2
B
1
B

1
)P +C

C = 0.
Subtracting these results in
(P Q)(AB
2
B

2
P) + (AB
2
B

2
P)

(P Q) +
2
PB
1
B

1
P = 0.
Since A B
2
B

2
P is asymptotically stable, we have P Q 0. Hence
|R
zw
|
2
2
= trace(B

1
QB
1
) trace(B

1
PB
1
).
86 SOLUTIONS TO PROBLEMS IN CHAPTER 6
Solution 6.17. Suppose there exists a stabilizing controller K such that
|R
zw
|

< . Then by the argument of Section 6.3.4, there exists an L such


that K stabilizes the plant
x = Ax +B
1
w +B
2
u
z
a
=
_
_
Cx
Lx
u
_
_
,
(A,
_
C

) has no unobservable mode on the imaginary axis and |R


z
a
w
|

<
. Hence there exists a solution to the Riccati equation
PA+A

P P(B
2
B

2
B
1
B

1
)P +C

C +L

L = 0 (6.9)
such that A(B
2
B

2
B
1
B

1
)P is asymptotically stable and P 0. Clearly, P
is a solution to the stated Riccati inequality.
Conversely, suppose P 0 is a stabilizing solution to the Riccati inequality. Let
L be a Cholesky factor such that
L

L = (PA+A

P P(B
2
B

2
B
1
B

1
)P +C

C).
Thus, the Riccati equation (6.9) holds. Hence, by Theorem 6.3.1, the controller
u = B

2
Px stabilizes the augmented system, and |R
z
a
w
|

< . Hence, since z


consists of the upper components of z
a
, the controller also stabilizes the system and
satises |R
zw
|

<
Solution 6.18.
1. Suppose K stabilizes the augmented system and |R
z
a
w
|

< . Then the


system is stabilized by K and
|z|
2
2
+
2
|u|
2
2

2
|w|
2
2
|w|
2
2
(6.10)
for some > 0. Hence
|z|
2
2

2
|w|
2
2
|w|
2
2
and we conclude that |R
zw
|

< .
Conversely, suppose K stabilizes the system and |R
zw
|

< . Then the


closed-loop system mapping w to u has nite innity norm M, say. This
follows from the denition of internal stability for linear fractional transfor-
mations. This observation shows that K stabilizes the augmented system.
Set 0 <
_
1
2M
2
(
2
|R
zw
|
2

). Then
|z|
2
2
+
2
|u|
2
2

2
|w|
2
2
|z|
2
2
+ (
2
M
2

2
)|w|
2
2
(|R
zw
|
2

+
2
M
2

2
)|w|
2
2

1
2
(
2
|R
zw
|
2

)|w|
2
2
= |w|
2
2
,
FULL-INFORMATION 1

CONTROL 87
for = (
2
|R
zw
|
2

)/2 > 0. Hence |R


zw
|

< .
2. Suppose P

exists. Then u = B

2
P

x is stabilizing for z
a
, hence also for z, and
|R
z
a
w
|

< , hence also |R


zw
|

< . Conversely, if a stabilizing controller


K satises |R
zw
|

< , then it also stabilizes an augmented objective system


of the form given in Item 1 and |R
z
a
w
|

< . This augmented system


satises the standard assumptions (modulo scaling by

D +
2
I), which
is nonsingular since > 0. Consequently, the stated Riccati equation has a
stabilizing, nonnegative denite solution.
Solution 6.19. This problem is simply a combination of the previous two. If K
stabilizes the system and |R
zw
|

< , we can choose > 0 and an L such that


(A,
_
C

) has no unobservable mode on the imaginary axis, and K stabilizes


the augmented system with
z
a
=
_

_
Cx
Lx
u
u
_

_
and |R
zw
|

< . Consequently, the stated Riccati inequality has a stabilizing,


nonnegative denite solution. Conversely, if the stated Riccati inequality has a
stabilizing, nonnegative denite solution, then the controller u = B

2
P

x stabilizes
the augmented system for
L

L = (P

A+A

(B
2
R
1

2
B
1
B

1
)P

+C

C),
and |R
z
a
w
|

< . Since z consists of components of z


a
, we conclude that u =
B
2
P

x stabilizes the system and |R


zw
|

< .
Solutions to Problems in
Chapter 7
Solution 7.1. Subtracting (7.2.17) from (7.2.16) gives
(

Q) = (AQC

C)(

Q

Q) + (

Q

Q)(AQC

C)

+ (Q

Q)C

C(Q

Q),
which has a nonnegative solution, since (

Q

Q)(0) = 0. This proves that

Q(t)

Q(t). Subtracting (7.2.16) from (7.2.13) gives


(

Q

Q) = (AQC

C)(Q

Q) + (Q

Q)(AQC

C)

+
2
QL

LQ,
which also has a nonnegative solution, since (Q

Q)(0) = 0. We therefore conclude
that Q(t)

Q(t)

Q(t) as required.
Solution 7.2. Firstly, we replace the message generating differential equation
\[
\dot x = Ax + Bw
\]
with
\[
\dot x = Ax + BQ^{1/2}\tilde w.
\]
Next, we replace the observations equation
\[
y = Cx + v
\]
with
\[
\tilde y = R^{-1/2}y = R^{-1/2}Cx + \tilde v.
\]
From the general theory presented in the text, we see that the Riccati differential equation associated with $\tilde w$, $\tilde v$ and $\tilde y$ is
\[
\dot Q = AQ + QA' - Q(C'R^{-1}C - \gamma^{-2}L'L)Q + BQB', \qquad Q(0) = 0,
\]
and that the observer gain (for the scaled variables) is $\tilde H = QC'R^{-1/2}$. The filter for the scaled variables is
\begin{align*}
\dot{\hat x} &= (A - QC'R^{-1}C)\hat x + QC'R^{-1/2}\tilde y \\
\hat z &= L\hat x,
\end{align*}
which is equivalent to
\begin{align*}
\dot{\hat x} &= (A - QC'R^{-1}C)\hat x + QC'R^{-1}y \\
\hat z &= L\hat x.
\end{align*}
This shows that the observer gain (for the original variables) is given by $H = QC'R^{-1}$.
Solution 7.3. It is easy to verify that
\[
Q = \begin{bmatrix} \bar Q & 0 \\ 0 & 0 \end{bmatrix}
\]
satisfies
\[
AQ + QA' - Q(C'C - \gamma^{-2}L'L)Q + BB' = 0
\]
if $\bar Q$ satisfies
\[
A_{11}\bar Q + \bar QA_{11}' - \bar Q(C_1'C_1 - \gamma^{-2}L_1'L_1)\bar Q + B_1B_1' = 0.
\]
We can now complete the calculation by observing that
\[
A - Q(C'C - \gamma^{-2}L'L)
= \begin{bmatrix}
A_{11} - \bar Q(C_1'C_1 - \gamma^{-2}L_1'L_1) & A_{12} - \bar Q(C_1'C_2 - \gamma^{-2}L_1'L_2) \\
0 & A_{22}
\end{bmatrix}
\]
is stable because $A_{22}$ and $A_{11} - \bar Q(C_1'C_1 - \gamma^{-2}L_1'L_1)$ are stable.
Solution 7.4. Consider the given signal generator
\[
G : \begin{cases} \dot x = Ax + Bw \\ y = Cx + Dw, \quad DD' = I. \end{cases}
\]
The filter $\hat z = \boldsymbol Fy$ is to provide an estimate of $z = Lx$. In order to remove the cross coupling between the input and observations disturbances, we introduce the observations pre-processor
\[
P : \begin{cases} \dot{\tilde x} = (A - BD'C)\tilde x + BD'y \\ \tilde y = -C\tilde x + y. \end{cases}
\]
Thus, $\tilde y$ is generated from $w$ by the system
\begin{align*}
\frac{d}{dt}(x - \tilde x) &= (A - BD'C)(x - \tilde x) + B(I - D'D)w \\
\tilde y &= C(x - \tilde x) + Dw.
\end{align*}
Since the pre-processor is an invertible system, there is a one-to-one correspondence between the original and the modified filtering problems. Since $(I - D'D)D' = 0$, the modified problem fits into the standard theory, which we can use to find an estimator $\tilde{\boldsymbol F}$ of $L(x - \tilde x)$ given the observations $\tilde y$. We obtain a filter $\boldsymbol F$ for our original problem from $\tilde{\boldsymbol F}$ by noting that
\begin{align*}
L\hat x &= L(\hat x - \tilde x) + L\tilde x \\
&= \tilde{\boldsymbol F}\tilde y + L\big(sI - (A - BD'C)\big)^{-1}BD'y \\
&= \Big(\tilde{\boldsymbol F}P + L\big(sI - (A - BD'C)\big)^{-1}BD'\Big)y.
\end{align*}
We use (7.2.10) and (7.2.11) to give the generator of all estimators of $L(x - \tilde x)$, given the information $\tilde y$:
\[
\tilde{\boldsymbol F}_a \stackrel{s}{=}
\left[\begin{array}{c|cc}
A - (BD' + QC')C & QC' & \gamma^{-2}QL' \\ \hline
L & 0 & I \\
-C & I & 0
\end{array}\right],
\]
where $Q(t)$ satisfies
\[
\dot Q = (A - BD'C)Q + Q(A - BD'C)' - Q(C'C - \gamma^{-2}L'L)Q + B(I - D'D)B'
\]
with initial condition $Q(0) = 0$. Since $\tilde{\boldsymbol F}_a$ generates all estimators of $L(x - \tilde x)$ (and not $Lx$), all the estimators of $Lx$ will be given by
\begin{align*}
\boldsymbol F_a
&\stackrel{s}{=}
\left[\begin{array}{c|cc} A - BD'C & BD' & 0 \\ \hline L & 0 & 0 \\ 0 & 0 & 0 \end{array}\right]
+
\left[\begin{array}{c|cc} A - (BD'+QC')C & QC' & \gamma^{-2}QL' \\ \hline L & 0 & I \\ -C & I & 0 \end{array}\right]
\left[\begin{array}{c|cc} A - BD'C & BD' & 0 \\ \hline -C & I & 0 \\ 0 & 0 & I \end{array}\right] \\
&\stackrel{s}{=}
\left[\begin{array}{c|cc} A - BD'C & BD' & 0 \\ \hline L & 0 & 0 \\ 0 & 0 & 0 \end{array}\right]
+
\left[\begin{array}{cc|cc}
A - (BD'+QC')C & -QC'C & QC' & \gamma^{-2}QL' \\
0 & A - BD'C & BD' & 0 \\ \hline
L & 0 & 0 & I \\
-C & -C & I & 0
\end{array}\right] \\
&\stackrel{s}{=}
\left[\begin{array}{c|cc} A - BD'C & BD' & 0 \\ \hline L & 0 & 0 \\ 0 & 0 & 0 \end{array}\right]
+
\left[\begin{array}{cc|cc}
A - (BD'+QC')C & 0 & BD'+QC' & \gamma^{-2}QL' \\
0 & A - BD'C & BD' & 0 \\ \hline
L & -L & 0 & I \\
-C & 0 & I & 0
\end{array}\right] \\
&\stackrel{s}{=}
\left[\begin{array}{c|cc}
A - (BD'+QC')C & BD'+QC' & \gamma^{-2}QL' \\ \hline
L & 0 & I \\
-C & I & 0
\end{array}\right].
\end{align*}
That is,
\begin{align*}
\dot{\hat x} &= A\hat x + (BD' + QC')(y - C\hat x) + \gamma^{-2}QL'\eta \\
\hat z &= L\hat x + \eta \\
\beta &= y - C\hat x.
\end{align*}
The filters are obtained by closing the loop with $\eta = \boldsymbol U\beta$.
Solution 7.5.
\begin{align*}
G(I + G^\sim G)^{-1}G^\sim \le \gamma^2 I
&\iff (I + GG^\sim)^{-1}GG^\sim \le \gamma^2 I \\
&\iff GG^\sim \le \gamma^2(I + GG^\sim) \\
&\iff (1 - \gamma^2)GG^\sim \le \gamma^2 I \\
&\iff (1 - \gamma^2)\alpha^2 \le \gamma^2, \quad\text{in which } \alpha = \|G\|_\infty \\
&\iff \gamma^2 \ge \frac{\alpha^2}{1 + \alpha^2} \\
&\iff \gamma \ge \frac{\alpha}{\sqrt{1 + \alpha^2}}.
\end{align*}
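The chain of equivalences can be checked on a frequency grid for a concrete scalar example. The sketch below assumes $g(s) = 1/(s+1)$ (so $\alpha = \|g\|_\infty = 1$, attained at $\omega = 0$); the function and grid are illustrative, not from the text.

```python
# Numerical check of Solution 7.5 for the scalar example g(s) = 1/(s+1):
# sup_w |g|^2/(1+|g|^2) should equal alpha^2/(1+alpha^2) with alpha = 1.
import numpy as np

w = np.linspace(0.0, 100.0, 200001)
g = 1.0 / (1j * w + 1.0)
lhs = np.abs(g) ** 2 / (1.0 + np.abs(g) ** 2)  # g(1+g~g)^-1 g~ on the axis

alpha = np.abs(g).max()                         # ||g||_inf
gamma_min = alpha / np.sqrt(1.0 + alpha ** 2)   # the bound derived above
```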
Solution 7.6. We deal with the case in which $w(t)$ is frequency weighted first. Suppose
\begin{align*}
\dot{\tilde x} &= \tilde A\tilde x + \tilde B\tilde w \\
w &= \tilde C\tilde x + \tilde D\tilde w
\end{align*}
and
\begin{align*}
\dot x &= Ax + Bw \\
y &= Cx + v.
\end{align*}
Combining these equations gives
\begin{align*}
\begin{bmatrix} \dot x \\ \dot{\tilde x} \end{bmatrix}
&= \begin{bmatrix} A & B\tilde C \\ 0 & \tilde A \end{bmatrix}\begin{bmatrix} x \\ \tilde x \end{bmatrix}
+ \begin{bmatrix} B\tilde D \\ \tilde B \end{bmatrix}\tilde w \\
y &= \begin{bmatrix} C & 0 \end{bmatrix}\begin{bmatrix} x \\ \tilde x \end{bmatrix} + v,
\end{align*}
which is the standard form.

The case in which $v(t)$ is frequency weighted may be handled in much the same way. Suppose
\begin{align*}
\dot{\tilde x} &= \tilde A\tilde x + \tilde B\tilde v \\
v &= \tilde C\tilde x + \tilde D\tilde v.
\end{align*}
Then
\begin{align*}
\begin{bmatrix} \dot x \\ \dot{\tilde x} \end{bmatrix}
&= \begin{bmatrix} A & 0 \\ 0 & \tilde A \end{bmatrix}\begin{bmatrix} x \\ \tilde x \end{bmatrix}
+ \begin{bmatrix} B & 0 \\ 0 & \tilde B \end{bmatrix}\begin{bmatrix} w \\ \tilde v \end{bmatrix} \\
y &= \begin{bmatrix} C & \tilde C \end{bmatrix}\begin{bmatrix} x \\ \tilde x \end{bmatrix}
+ \begin{bmatrix} 0 & \tilde D \end{bmatrix}\begin{bmatrix} w \\ \tilde v \end{bmatrix},
\end{align*}
which contains cross coupling in the disturbance input, i.e., it is of the form in Problem 7.4.
Solution 7.7. The equations describing the message generating system as drawn in Figure 7.12 are
\begin{align*}
\dot x &= Ax + Bw \\
y &= Cx + v
\end{align*}
and
\begin{align*}
\dot x_w &= A_wx_w + B_wLx \\
z_w &= C_wx_w + D_wLx,
\end{align*}
which may be combined as
\begin{align*}
\begin{bmatrix} \dot x \\ \dot x_w \end{bmatrix}
&= \begin{bmatrix} A & 0 \\ B_wL & A_w \end{bmatrix}\begin{bmatrix} x \\ x_w \end{bmatrix}
+ \begin{bmatrix} B \\ 0 \end{bmatrix}w \\
\begin{bmatrix} y \\ z_w \end{bmatrix}
&= \begin{bmatrix} C & 0 \\ D_wL & C_w \end{bmatrix}\begin{bmatrix} x \\ x_w \end{bmatrix}
+ \begin{bmatrix} I \\ 0 \end{bmatrix}v.
\end{align*}
Substituting this realization into the general filtering formulas and partitioning the Riccati equation solution as
\[
Q = \begin{bmatrix} Q_1 & Q_2 \\ Q_2' & Q_3 \end{bmatrix}
\]
gives
\[
\tilde{\boldsymbol F}_a \stackrel{s}{=}
\left[\begin{array}{cc|cc}
A - Q_1C'C & 0 & Q_1C' & \gamma^{-2}(Q_1L'D_w' + Q_2C_w') \\
B_wL - Q_2'C'C & A_w & Q_2'C' & \gamma^{-2}(Q_2'L'D_w' + Q_3C_w') \\ \hline
D_wL & C_w & 0 & I \\
-C & 0 & I & 0
\end{array}\right].
\]
The generator of all filters can now be found from
\[
\boldsymbol F_a = \begin{bmatrix} W^{-1} & 0 \\ 0 & I \end{bmatrix}\tilde{\boldsymbol F}_a
= \left[\begin{array}{c|cc}
A_w - B_wD_w^{-1}C_w & B_wD_w^{-1} & 0 \\ \hline
-D_w^{-1}C_w & D_w^{-1} & 0 \\
0 & 0 & I
\end{array}\right]\tilde{\boldsymbol F}_a.
\]
After the removal of the unobservable modes we get the filter generator $\boldsymbol F_a$ defined by the realization
\[
\left[\begin{array}{cc|cc}
A_w - B_wD_w^{-1}C_w & Q_2'C'C & -Q_2'C' & B_wD_w^{-1} - \gamma^{-2}(Q_2'L'D_w' + Q_3C_w') \\
0 & A - Q_1C'C & Q_1C' & \gamma^{-2}(Q_1L'D_w' + Q_2C_w') \\ \hline
-D_w^{-1}C_w & L & 0 & D_w^{-1} \\
0 & -C & I & 0
\end{array}\right],
\]
which is free of degree inflation.
Solution 7.8.
1. Collecting all the relevant equations gives
\[
\begin{bmatrix} \dot x \\ \hat z - Lx \\ y \end{bmatrix}
= \left[\begin{array}{c|c|c}
A & \begin{bmatrix} B & 0 \end{bmatrix} & B_2 \\ \hline
-L & \begin{bmatrix} 0 & 0 \end{bmatrix} & I \\ \hline
C & \begin{bmatrix} 0 & I \end{bmatrix} & 0
\end{array}\right]
\begin{bmatrix} x \\ \begin{bmatrix} w \\ v \end{bmatrix} \\ \hat z \end{bmatrix},
\]
which has the associated adjoint system
\[
P^\sim \stackrel{s}{=}
\left[\begin{array}{c|cc}
-A' & L' & -C' \\ \hline
\begin{bmatrix} B' \\ 0 \end{bmatrix} & \begin{bmatrix} 0 \\ 0 \end{bmatrix} & \begin{bmatrix} 0 \\ I \end{bmatrix} \\
B_2' & I & 0
\end{array}\right].
\]
This problem is now of the form of the special measurement feedback control problem considered in Problem 6.13. The results of this problem show that a solution exists if and only if the Riccati equation
\[
\dot Q = AQ + QA' - Q(C'C - \gamma^{-2}L'L)Q + BB', \qquad Q(0) = 0,
\]
has a solution on $[0, T]$, in which case all $\boldsymbol F^\sim$'s are generated by $\mathcal F_\ell(\boldsymbol F_a^\sim, \boldsymbol U^\sim)$, in which
\[
\boldsymbol F_a^\sim \stackrel{s}{=}
\left[\begin{array}{c|cc}
-(A + B_2L - QC'C)' & -L' & C' \\ \hline
CQ & 0 & I \\
B_2' - \gamma^{-2}LQ & I & 0
\end{array}\right].
\]
Taking the adjoint, we see that all filters are generated by $\mathcal F_\ell(\boldsymbol F_a, \boldsymbol U)$, in which
\[
\boldsymbol F_a \stackrel{s}{=}
\left[\begin{array}{c|cc}
A + B_2L - QC'C & QC' & B_2 - \gamma^{-2}QL' \\ \hline
L & 0 & I \\
-C & I & 0
\end{array}\right].
\]
(We have adjusted the signs of the input and output matrices so that the filter state is $\hat x$, an estimate of $x$, rather than $-\hat x$.) Note that the central filter can be written
\begin{align*}
\dot{\hat x} &= A\hat x + QC'(y - C\hat x) + B_2\hat z \\
\hat z &= L\hat x.
\end{align*}
2. The internal stability results are immediate from the corresponding results in Problem 6.13.
Solution 7.9. The generalized regulator configuration we are interested in is given by the diagram:

(Diagram: the generalized plant $P$ in feedback with the filter $\boldsymbol F$; exogenous inputs $\eta$, $w$, $v$; objective outputs $\zeta$ and $z - \hat z$; measurement $y$; filter output $\hat z$.)

in which $\zeta = Ex$. Writing down the appropriate equations gives
\[
\begin{bmatrix} \dot x \\ \zeta \\ z - \hat z \\ y \end{bmatrix}
= \left[\begin{array}{c|c|c}
A & \begin{bmatrix} H_1 & B & 0 \end{bmatrix} & 0 \\ \hline
E & \begin{bmatrix} 0 & 0 & 0 \end{bmatrix} & 0 \\
L & \begin{bmatrix} 0 & 0 & 0 \end{bmatrix} & -I \\ \hline
C & \begin{bmatrix} H_2 & 0 & I \end{bmatrix} & 0
\end{array}\right]
\begin{bmatrix} x \\ \begin{bmatrix} \eta \\ w \\ v \end{bmatrix} \\ \hat z \end{bmatrix}.
\]
A filter $\boldsymbol F$ with the desired properties exists if and only if there exists a controller $\boldsymbol F$ such that the map $\mathcal R$, which maps $\begin{bmatrix} \eta' & w' & v' \end{bmatrix}'$ to $\begin{bmatrix} \zeta' & (z - \hat z)' \end{bmatrix}'$, has the property $\|\mathcal R\|_\infty < 1$. Note that the solution of two Riccati equations will be required. The reader may like to study Chapter 8 before returning to this problem.
Solutions to Problems in
Chapter 8
Solution 8.1.
1. Since $\lambda_i(I - A) = 1 - \lambda_i(A)$, we conclude that $I - A$ is nonsingular if $\rho(A) < 1$.
2. First observe that $\lambda_i\big(A(I + A)^{-1}\big) = \lambda_i(A)/\big(1 + \lambda_i(A)\big)$. Since the $\lambda_i(A)$ are real and nonnegative and $\lambda_i(A) \ge \lambda_j(A)$ implies that $\lambda_i(A)/\big(1 + \lambda_i(A)\big) \ge \lambda_j(A)/\big(1 + \lambda_j(A)\big)$, the result follows. You might like to check for yourself that $x/(1+x)$ is monotonically increasing for all real nonnegative values of $x$.
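Part 2 is easy to confirm numerically. The sketch below uses a random symmetric nonnegative definite $A$ (an illustrative assumption; the result holds for any such matrix) and checks the eigenvalue map $\lambda_i \mapsto \lambda_i/(1+\lambda_i)$ together with its order preservation.

```python
# Check of Solution 8.1 on a random symmetric nonnegative definite matrix:
# the eigenvalues of A(I+A)^-1 are lambda_i/(1+lambda_i), same ordering.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T                                      # symmetric, nonnegative definite
lam = np.sort(np.linalg.eigvalsh(A))

F = A @ np.linalg.inv(np.eye(5) + A)             # A and (I+A)^-1 commute, so F is symmetric
mu = np.sort(np.linalg.eigvalsh((F + F.T) / 2))
```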
Solution 8.2.
1. Using the definition of the $H_Y$ Hamiltonian we have that
\begin{align*}
\lefteqn{\begin{bmatrix} I & -\gamma^{-2}X_\infty \\ 0 & I \end{bmatrix} H_Y \begin{bmatrix} I & \gamma^{-2}X_\infty \\ 0 & I \end{bmatrix}} \\
&= \begin{bmatrix} I & -\gamma^{-2}X_\infty \\ 0 & I \end{bmatrix}
\begin{bmatrix} \tilde A' & -(C_2'C_2 - \gamma^{-2}C_1'C_1) \\ -\tilde B\tilde B' & -\tilde A \end{bmatrix}
\begin{bmatrix} I & \gamma^{-2}X_\infty \\ 0 & I \end{bmatrix} \\
&= \begin{bmatrix} \tilde A' + \gamma^{-2}X_\infty\tilde B\tilde B' & -\Gamma \\ -\tilde B\tilde B' & -(\tilde A + \gamma^{-2}\tilde B\tilde B'X_\infty) \end{bmatrix}
= \begin{bmatrix} A_z' & -\Gamma \\ -\tilde B\tilde B' & -A_z \end{bmatrix},
\end{align*}
in which
\begin{align*}
\Gamma &= (C_2'C_2 - \gamma^{-2}C_1'C_1) - \gamma^{-2}\big(X_\infty\tilde A + (\tilde A' + \gamma^{-2}X_\infty\tilde B\tilde B')X_\infty\big) \\
&= (C_2'C_2 - \gamma^{-2}C_1'C_1) - \gamma^{-2}X_\infty(A - B_1D_{21}'C_2) - \gamma^{-2}(A - B_1D_{21}'C_2)'X_\infty \\
&\qquad - \gamma^{-4}X_\infty B_1(I - D_{21}'D_{21})B_1'X_\infty \\
&= (C_2 + \gamma^{-2}D_{21}B_1'X_\infty)'(C_2 + \gamma^{-2}D_{21}B_1'X_\infty) - \gamma^{-2}(D_{12}'C_1 + B_2'X_\infty)'(D_{12}'C_1 + B_2'X_\infty) \\
&= C_{2z}'C_{2z} - \gamma^{-2}F_\infty'F_\infty.
\end{align*}
2. If $Y_\infty$, the stabilizing solution to (8.3.12), exists, we have
\[
H_Y\begin{bmatrix} I & 0 \\ Y_\infty & I \end{bmatrix}
= \begin{bmatrix} I & 0 \\ Y_\infty & I \end{bmatrix}
\begin{bmatrix}
\tilde A' - (C_2'C_2 - \gamma^{-2}C_1'C_1)Y_\infty & -(C_2'C_2 - \gamma^{-2}C_1'C_1) \\
0 & -\big(\tilde A - Y_\infty(C_2'C_2 - \gamma^{-2}C_1'C_1)\big)
\end{bmatrix},
\]
in which $\mathrm{Re}\,\lambda_i\big(\tilde A' - (C_2'C_2 - \gamma^{-2}C_1'C_1)Y_\infty\big) < 0$. Substituting from Part 1 gives
\[
H_Z\begin{bmatrix} I & -\gamma^{-2}X_\infty \\ 0 & I \end{bmatrix}\begin{bmatrix} I & 0 \\ Y_\infty & I \end{bmatrix}
= \begin{bmatrix} I & -\gamma^{-2}X_\infty \\ 0 & I \end{bmatrix}\begin{bmatrix} I & 0 \\ Y_\infty & I \end{bmatrix}
\begin{bmatrix}
\tilde A' - (C_2'C_2 - \gamma^{-2}C_1'C_1)Y_\infty & \star \\
0 & \star
\end{bmatrix},
\]
which implies that
\[
H_Z\begin{bmatrix} I - \gamma^{-2}X_\infty Y_\infty & -\gamma^{-2}X_\infty \\ Y_\infty & I \end{bmatrix}
= \begin{bmatrix} I - \gamma^{-2}X_\infty Y_\infty & -\gamma^{-2}X_\infty \\ Y_\infty & I \end{bmatrix}
\begin{bmatrix}
\tilde A' - (C_2'C_2 - \gamma^{-2}C_1'C_1)Y_\infty & \star \\
0 & \star
\end{bmatrix}.
\]
If $\rho(X_\infty Y_\infty) < \gamma^2$, it follows that $(I - \gamma^{-2}X_\infty Y_\infty)^{-1}$ exists. It is now immediate that $Z_\infty = Y_\infty(I - \gamma^{-2}X_\infty Y_\infty)^{-1}$ satisfies (8.3.9) and that it is stabilizing.
3. Multiplying
\[
H_Y\begin{bmatrix} I & \gamma^{-2}X_\infty \\ 0 & I \end{bmatrix}
= \begin{bmatrix} I & \gamma^{-2}X_\infty \\ 0 & I \end{bmatrix}H_Z
\]
on the right by $\begin{bmatrix} I \\ Z_\infty \end{bmatrix}$ gives
\[
H_Y\begin{bmatrix} I + \gamma^{-2}X_\infty Z_\infty \\ Z_\infty \end{bmatrix}
= \begin{bmatrix} I & \gamma^{-2}X_\infty \\ 0 & I \end{bmatrix}
\begin{bmatrix}
A_z' - (C_{2z}'C_{2z} - \gamma^{-2}F_\infty'F_\infty)Z_\infty \\
-\tilde B\tilde B' - A_zZ_\infty
\end{bmatrix}.
\]
Expanding the first row of this equation and using the $Z_\infty$ Riccati equation yields
\begin{align*}
\tilde A'(I + \gamma^{-2}X_\infty Z_\infty) - (C_2'C_2 - \gamma^{-2}C_1'C_1)Z_\infty
&= A_z' - (C_{2z}'C_{2z} - \gamma^{-2}F_\infty'F_\infty)Z_\infty \\
&\quad + \gamma^{-2}X_\infty\big(Z_\infty A_z' - Z_\infty(C_{2z}'C_{2z} - \gamma^{-2}F_\infty'F_\infty)Z_\infty\big).
\end{align*}
Since $Y_\infty(I + \gamma^{-2}X_\infty Z_\infty) = Z_\infty$, we obtain
\[
\big(\tilde A' - (C_2'C_2 - \gamma^{-2}C_1'C_1)Y_\infty\big)(I + \gamma^{-2}X_\infty Z_\infty)
= (I + \gamma^{-2}X_\infty Z_\infty)\big(A_z' - (C_{2z}'C_{2z} - \gamma^{-2}F_\infty'F_\infty)Z_\infty\big).
\]
Hence
\[
\tilde A - Y_\infty(C_2'C_2 - \gamma^{-2}C_1'C_1)
= (I + \gamma^{-2}Z_\infty X_\infty)^{-1}\big(A_z - Z_\infty(C_{2z}'C_{2z} - \gamma^{-2}F_\infty'F_\infty)\big)(I + \gamma^{-2}Z_\infty X_\infty)
\]
as required.
Solution 8.3. Suppose (8.6.2) has solution $P(t)$ with $P(0) = M$. From this it is immediate that
\[
\begin{bmatrix} I \\ P \end{bmatrix}(A - DP)
- \begin{bmatrix} A & -D \\ -Q & -A' \end{bmatrix}\begin{bmatrix} I \\ P \end{bmatrix}
= \begin{bmatrix} 0 \\ -\dot P \end{bmatrix},
\]
so that $\begin{bmatrix} I \\ P \end{bmatrix}\Phi$, with $\Phi$ the transition matrix of $A - DP$, is a solution to (8.6.3) with the correct boundary conditions.

Now suppose that (8.6.3) has a solution with $P_1(t)$ nonsingular for all $t \in [0, T]$ and with $P_2(0)P_1^{-1}(0) = M$. This gives
\begin{align*}
\begin{bmatrix} -P & I \end{bmatrix}
\begin{bmatrix} A & -D \\ -Q & -A' \end{bmatrix}
\begin{bmatrix} I \\ P \end{bmatrix}
&= \begin{bmatrix} -P & I \end{bmatrix}
\begin{bmatrix} \dot P_1 \\ \dot P_2 \end{bmatrix}P_1^{-1} \\
&= \dot P_2P_1^{-1} - P_2P_1^{-1}\dot P_1P_1^{-1} \\
&= \dot P
\end{align*}
as required.
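The correspondence between the Riccati differential equation and the linear Hamiltonian system can be checked on scalar data. The sketch below is illustrative only: it fixes the sign convention $\dot P = -(PA + A'P - PDP + Q)$ used above, integrates both representations by forward Euler, and compares $P$ with $P_2P_1^{-1}$.

```python
# Scalar illustration of Solution 8.3: the Riccati solution equals P2/P1,
# where [P1; P2] solves the linear Hamiltonian system with
# H = [[a, -d], [-q, -a]]. Data a, d, q, m are illustrative assumptions.

a, d, q, m = 0.3, 1.0, 2.0, 0.5
T, n = 2.0, 200000
dt = T / n

p = m                 # Riccati state, pdot = -(2ap - d p^2 + q)
p1, p2 = 1.0, m       # Hamiltonian states, [p1; p2](0) = [1; m]
for _ in range(n):
    p += dt * (-(2 * a * p - d * p * p + q))
    dp1 = a * p1 - d * p2
    dp2 = -q * p1 - a * p2
    p1 += dt * dp1
    p2 += dt * dp2

ratio = p2 / p1       # should track the Riccati solution p
```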
Solution 8.4.
1. It follows from (8.2.4) and
\[
\begin{bmatrix} I & -\gamma^{-2}X_\infty \\ 0 & I \end{bmatrix}
H_Y
\begin{bmatrix} I & \gamma^{-2}X_\infty \\ 0 & I \end{bmatrix}
= H_Z + \begin{bmatrix} 0 & -\gamma^{-2}\dot X_\infty \\ 0 & 0 \end{bmatrix}
\]
that
\[
H_Y\begin{bmatrix} I + \gamma^{-2}X_\infty Z_\infty \\ Z_\infty \end{bmatrix}
= \begin{bmatrix} I + \gamma^{-2}X_\infty Z_\infty \\ Z_\infty \end{bmatrix}
\big(A_z' - (C_{2z}'C_{2z} - \gamma^{-2}F_\infty'F_\infty)Z_\infty\big)
- \frac{d}{dt}\begin{bmatrix} I + \gamma^{-2}X_\infty Z_\infty \\ Z_\infty \end{bmatrix}.
\]
Since $X_\infty, Z_\infty \ge 0$, it follows that $(I + \gamma^{-2}X_\infty Z_\infty)$ is nonsingular for all $t \in [0, T]$. Also $Z_\infty(I + \gamma^{-2}X_\infty Z_\infty)^{-1}(0) = 0$. We can now use Problem 8.3 to show that $Y_\infty = Z_\infty(I + \gamma^{-2}X_\infty Z_\infty)^{-1}$ is a solution to (8.2.8). Also
\[
\rho(X_\infty Y_\infty)
= \rho\big(X_\infty Z_\infty(I + \gamma^{-2}X_\infty Z_\infty)^{-1}\big)
= \frac{\gamma^2\rho(X_\infty Z_\infty)}{\gamma^2 + \rho(X_\infty Z_\infty)}
< \gamma^2.
\]
2. If $Y_\infty$ exists and $\rho(X_\infty Y_\infty) < \gamma^2$, then $I - \gamma^{-2}X_\infty Y_\infty$ is nonsingular on $[0, T]$, $Y_\infty(I - \gamma^{-2}X_\infty Y_\infty)^{-1}(0) = 0$, and from (8.6.1) we get
\[
H_Z\begin{bmatrix} I - \gamma^{-2}X_\infty Y_\infty \\ Y_\infty \end{bmatrix}
= \begin{bmatrix} I - \gamma^{-2}X_\infty Y_\infty \\ Y_\infty \end{bmatrix}
\big(\tilde A' - (C_2'C_2 - \gamma^{-2}C_1'C_1)Y_\infty\big)
- \frac{d}{dt}\begin{bmatrix} I - \gamma^{-2}X_\infty Y_\infty \\ Y_\infty \end{bmatrix}.
\]
It now follows from Problem 8.3 that $Z_\infty = Y_\infty(I - \gamma^{-2}X_\infty Y_\infty)^{-1}$ is a solution to (8.2.15).
Solution 8.5. A direct application of the composition formula for LFTs (see Lemma 4.1.2) gives
\[
A_{PK} = \begin{bmatrix} A + B_2D_KC_2 & B_2C_K \\ B_KC_2 & A_K \end{bmatrix},
\qquad
B_{PK} = \begin{bmatrix} B_1 + B_2D_KD_{21} \\ B_KD_{21} \end{bmatrix}
\]
and
\begin{align*}
A_{GK} &= \begin{bmatrix}
A + \gamma^{-2}B_1B_1'X_\infty + B_2D_K(C_2 + \gamma^{-2}D_{21}B_1'X_\infty) & B_2C_K \\
B_K(C_2 + \gamma^{-2}D_{21}B_1'X_\infty) & A_K
\end{bmatrix} \\
B_{GK} &= \begin{bmatrix} B_1 + B_2D_KD_{21} \\ B_KD_{21} \end{bmatrix}.
\end{align*}
It is now easy to see that
\[
\begin{bmatrix} A_{GK} - \lambda I & B_{GK} \end{bmatrix}
= \begin{bmatrix} A_{PK} - \lambda I & B_{PK} \end{bmatrix}
\begin{bmatrix} I & 0 & 0 \\ 0 & I & 0 \\ \gamma^{-2}B_1'X_\infty & 0 & I \end{bmatrix}.
\]
Solution 8.6.
1. Eliminating $y$ and $u$ from the five given equations yields
\begin{align*}
\dot x &= Ax - B_2C_1\hat x + B_1w \\
\dot{\hat x} &= (A - B_2C_1)\hat x + B_1(C_2x + w - C_2\hat x) \\
z &= C_1x - C_1\hat x.
\end{align*}
This means that
\[
\frac{d}{dt}(x - \hat x) = (A - B_1C_2)(x - \hat x).
\]
It follows that $(x - \hat x)(t) = 0$ for all $t$, since $x(0) = 0$ and $\hat x(0) = 0$. It also follows that $z(t) = 0$.

In the same way, we eliminate $y$ and $u$ from the six given equations to obtain
\begin{align*}
\dot x &= Ax - B_2C_1\hat x + B_1w + B_2(u - u^*) \\
\dot{\hat x} &= (A - B_1C_2 - B_2C_1)\hat x + B_1(C_2x + w) + B_2(u - u^*) \\
\frac{d}{dt}(x - \hat x) &= (A - B_1C_2)(x - \hat x)
\end{align*}
and
\begin{align*}
z &= C_1(x - \hat x) + (u - u^*) \\
\hat w &= C_2(x - \hat x) + w.
\end{align*}
Hence $\hat x = x$, $\hat w = w$ and $z = u - u^*$. Since $u - u^* = \boldsymbol U\hat w$, it follows that $z = \boldsymbol Uw$ and the result is proved.
2. Just choose a $\boldsymbol U$ with $\|\boldsymbol U\|_\infty \le \gamma$ and assemble the corresponding controller.
3. We need note two things. Firstly,
\[
\begin{bmatrix} z \\ \hat w \end{bmatrix}
= \begin{bmatrix} 0 & I \\ I & 0 \end{bmatrix}
\begin{bmatrix} w \\ u - u^* \end{bmatrix}
\]
and secondly that all the internal cancellations occur at $\lambda_i(A - B_1C_2)$ and $\lambda_i(A - B_2C_1)$ (see Lemma 4.1.2). We can now select a stable $\boldsymbol U$ such that $\|\boldsymbol U\|_\infty \le \gamma$.
Solution 8.7.
1. Consider the diagram (the loop consists of $G$ and $K$, with an injection point $w$ at the plant input) and observe that the transfer function from $w$ to the controller output $u$ is $K(I - GK)^{-1}$. If
\[
P = \begin{bmatrix} 0 & I \\ I & G \end{bmatrix},
\]
then $\mathcal F_\ell(P, K) = K(I - GK)^{-1}$ as required. (Alternatively, observe that
\[
\begin{bmatrix} u \\ y \end{bmatrix}
= \begin{bmatrix} 0 & I \\ I & G \end{bmatrix}
\begin{bmatrix} w \\ u \end{bmatrix}.)
\]
A state-space realization for $P$ is
\[
P \stackrel{s}{=}
\left[\begin{array}{c|cc} A & 0 & B \\ \hline 0 & 0 & I \\ C & I & 0 \end{array}\right].
\]
2. Write $A = \mathrm{diag}(A_+, A_-)$, with $A_+$ antistable and $A_-$ stable, and partition $B$ and $C$ conformably. If
\[
A_+P + PA_+' - B_+B_+' = 0
\quad\text{and}\quad
A_+'Q + QA_+ - C_+'C_+ = 0,
\]
then
\[
A_+'P^{-1} + P^{-1}A_+ - P^{-1}B_+B_+'P^{-1} = 0
\quad\text{and}\quad
A_+Q^{-1} + Q^{-1}A_+' - Q^{-1}C_+'C_+Q^{-1} = 0.
\]
Next, we see that the equation defining $X_\infty$ is given by
\[
\begin{bmatrix} A_+' & 0 \\ 0 & A_-' \end{bmatrix}X_\infty
+ X_\infty\begin{bmatrix} A_+ & 0 \\ 0 & A_- \end{bmatrix}
- X_\infty\begin{bmatrix} B_+ \\ B_- \end{bmatrix}\begin{bmatrix} B_+' & B_-' \end{bmatrix}X_\infty = 0.
\]
One nonnegative solution is clearly
\[
X_\infty = \begin{bmatrix} P^{-1} & 0 \\ 0 & 0 \end{bmatrix}.
\]
To show that this solution is stabilizing, we observe that
\[
\begin{bmatrix} A_+ - B_+B_+'P^{-1} & 0 \\ -B_-B_+'P^{-1} & A_- \end{bmatrix}
= \begin{bmatrix} A_+ & 0 \\ 0 & A_- \end{bmatrix}
- \begin{bmatrix} B_+ \\ B_- \end{bmatrix}\begin{bmatrix} B_+' & B_-' \end{bmatrix}
\begin{bmatrix} P^{-1} & 0 \\ 0 & 0 \end{bmatrix}.
\]
Since
\[
A_+ - B_+B_+'P^{-1} = -PA_+'P^{-1},
\]
with $\mathrm{Re}\,\lambda_i(A_+) > 0$, the solution is indeed stabilizing. A parallel set of arguments may be developed for $Y_\infty$.
3. The smallest achievable value of $\gamma$ is determined by
\[
\gamma^2 = \rho(X_\infty Y_\infty) = \rho(P^{-1}Q^{-1}) = \frac{1}{\lambda_{\min}(PQ)}.
\]
Hence
\[
\frac{1}{\gamma_{\mathrm{opt}}} = \sqrt{\lambda_{\min}(PQ)}.
\]
4. Direct substitution into the formulas of (8.3.11) gives
\[
A_k = A - BB'X_\infty - Z_\infty C'C,
\qquad
C_{k1} = -B'X_\infty,
\qquad
B_{k1} = Z_\infty C',
\]
with
\[
Z_\infty = (I - \gamma^{-2}Y_\infty X_\infty)^{-1}Y_\infty.
\]
Solution 8.8.
1. Substitution into
\[
A'X_\infty + X_\infty A - X_\infty BB'X_\infty = 0
\]
gives
\begin{align*}
0 &= \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} x_1 & x_2 \\ x_2 & x_3 \end{bmatrix}
+ \begin{bmatrix} x_1 & x_2 \\ x_2 & x_3 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}
- \begin{bmatrix} x_1 & x_2 \\ x_2 & x_3 \end{bmatrix}\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} x_1 & x_2 \\ x_2 & x_3 \end{bmatrix} \\
&= \begin{bmatrix}
2x_1 - x_2^2 & x_1 + 2x_2 - x_2x_3 \\
x_1 + 2x_2 - x_2x_3 & 2(x_2 + x_3) - x_3^2
\end{bmatrix}.
\end{align*}
Consider $x_3 = x_2$ and $x_1 = 2x_2$. This gives
\[
0 = \begin{bmatrix} 4x_2 - x_2^2 & 4x_2 - x_2^2 \\ 4x_2 - x_2^2 & 4x_2 - x_2^2 \end{bmatrix}
\]
and therefore $x_2 = 0$ or $x_2 = 4$. It is clear that $X_\infty = 0$ is not stabilizing, while $x_2 = 4$ gives
\[
X_\infty = 4\begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix},
\]
which is both positive definite and stabilizing. A parallel set of arguments leads to
\[
Y_\infty = 4\begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}.
\]
2. To find the optimal value of $\gamma$, we solve the equation
\[
\gamma_{\mathrm{opt}}^2 = \rho(X_\infty Y_\infty) = 16\,\rho\!\left(\begin{bmatrix} 3 & 4 \\ 2 & 3 \end{bmatrix}\right).
\]
The eigenvalues of $\begin{bmatrix} 3 & 4 \\ 2 & 3 \end{bmatrix}$ are given by the roots of $\lambda^2 - 6\lambda + 1 = 0$. That is,
\[
\lambda = \frac{6 \pm \sqrt{36 - 4}}{2} = 3 \pm 2\sqrt 2.
\]
Thus
\[
\gamma_{\mathrm{opt}} = 4\sqrt{3 + 2\sqrt 2}.
\]
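These hand calculations are easy to confirm numerically. The sketch below checks the Riccati residual, the stability of the closed-loop matrix $A - BB'X_\infty$, and the value of $\gamma_{\mathrm{opt}}$.

```python
# Verification of Solution 8.8: X_inf = 4[[2,1],[1,1]] solves
# A'X + XA - XBB'X = 0 for A = [[1,1],[0,1]], B = [0,1]', and
# gamma_opt^2 = rho(X_inf Y_inf) with Y_inf = 4[[1,1],[1,2]].
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
X = 4.0 * np.array([[2.0, 1.0], [1.0, 1.0]])
Y = 4.0 * np.array([[1.0, 1.0], [1.0, 2.0]])

residual_X = A.T @ X + X @ A - X @ B @ B.T @ X
closed_loop = A - B @ B.T @ X   # must be stable for a stabilizing solution
gamma_opt = np.sqrt(np.max(np.abs(np.linalg.eigvals(X @ Y))))
```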
Solution 8.9.
1. This requires the facts
\[
\mathcal F_\ell\!\left(\begin{bmatrix} 0 & G \\ I & G \end{bmatrix}, K\right) = GK(I - GK)^{-1}
\]
and
\[
\begin{bmatrix} 0 & G \\ I & G \end{bmatrix}
\stackrel{s}{=}
\left[\begin{array}{c|cc} A & 0 & B \\ \hline C & 0 & D \\ C & I & D \end{array}\right].
\]
2. $(A, B_2, C_2)$ stabilizable and detectable requires $(A, B, C)$ stabilizable and detectable.
\[
\begin{bmatrix} A - j\omega I & B_2 \\ C_1 & D_{12} \end{bmatrix}
\text{ full column rank}
\]
requires $A - BD^{-1}C - j\omega I$ full rank, since
\[
\begin{bmatrix} A - j\omega I & B \\ C & D \end{bmatrix}
\begin{bmatrix} I & 0 \\ -D^{-1}C & I \end{bmatrix}
= \begin{bmatrix} A - BD^{-1}C - j\omega I & B \\ 0 & D \end{bmatrix}.
\]
Finally
\[
\begin{bmatrix} A - j\omega I & B_1 \\ C_2 & D_{21} \end{bmatrix}
\text{ full row rank}
\]
requires $A - j\omega I$ full rank.
3. After scaling we get
\[
\tilde P \stackrel{s}{=}
\left[\begin{array}{c|cc} A & 0 & BD^{-1} \\ \hline C & 0 & I \\ C & I & I \end{array}\right],
\]
and then direct substitution yields:
\begin{align*}
0 &= (A - BD^{-1}C)'X + X(A - BD^{-1}C) - X(BD^{-1})(BD^{-1})'X \\
0 &= AY + YA' - YC'CY.
\end{align*}
4. Substituting into the general $\mathcal H_\infty$ synthesis Riccati equations gives
\begin{align*}
0 &= (A - BD^{-1}C)'X_\infty + X_\infty(A - BD^{-1}C) - X_\infty(BD^{-1})(BD^{-1})'X_\infty \\
0 &= AY_\infty + Y_\infty A' - (1 - \gamma^{-2})Y_\infty C'CY_\infty.
\end{align*}
Comparing terms with the LQG equations now yields $X_\infty = X$ and $Y_\infty = (1 - \gamma^{-2})^{-1}Y$.
5. When $G$ is stable, $K = 0$ is a stabilizing controller, in which case $\gamma_{\mathrm{opt}} = 0$.
6. When $G$ has a right-half-plane pole, $Y \ne 0$. Hence $\gamma > 1$ is necessary (and sufficient) to ensure that $Y_\infty$ exists and $Y_\infty \ge 0$. The spectral radius condition gives
\begin{align*}
\gamma^2 &\ge \rho(X_\infty Y_\infty) = (1 - \gamma^{-2})^{-1}\rho(XY) \\
&\iff \gamma^2 \ge 1 + \rho(XY) \\
&\iff \gamma \ge \sqrt{1 + \rho(XY)}.
\end{align*}
Since $\sqrt{1 + \rho(XY)} \ge 1$ for any $X$ and $Y$, we see that all the conditions are satisfied for any $\gamma > \sqrt{1 + \rho(XY)}$ and we conclude that $\gamma_{\mathrm{opt}} = \sqrt{1 + \rho(XY)}$.
7. Immediate from the above result, since in this case $X = 0$. Note that as $\gamma \to 1$, $Y_\infty$ becomes unbounded; this needs to be dealt with carefully.
Solution 8.10.
1. This follows from
\[
(I - GK)^{-1} = I + GK(I - GK)^{-1}
= \mathcal F_\ell\!\left(\begin{bmatrix} I & G \\ I & G \end{bmatrix}, K\right)
\]
and
\[
\begin{bmatrix} I & G \\ I & G \end{bmatrix}
\stackrel{s}{=}
\left[\begin{array}{c|cc} A & 0 & B \\ \hline C & I & D \\ C & I & D \end{array}\right].
\]
2. We will prove this from the equivalence of the following two diagrams.

(Diagrams: in the first, $K$ is connected from $y$ to $u$ around $P$; in the second, $D^{-1}$ and $D$ are inserted around the loop, defining the signals $\tilde u$ and $\tilde y$ and the controller $\tilde K$.)

The second of these figures yields
\[
u = D^{-1}(\tilde u - \tilde y)
\quad\text{and}\quad
\tilde y = y - Du.
\]
Therefore
\begin{align*}
\dot x &= Ax + Bu \\
z &= Cx + w + Du \\
y &= Cx + w + Du
\end{align*}
becomes
\begin{align*}
\dot x &= Ax + BD^{-1}(\tilde u - \tilde y) \\
&= Ax + BD^{-1}(\tilde u - Cx - w) \\
&= (A - BD^{-1}C)x - BD^{-1}w + BD^{-1}\tilde u,
\end{align*}
together with
\begin{align*}
z &= Cx + w + \tilde u - Cx - w = \tilde u \\
\tilde y &= Cx + w.
\end{align*}
Hence
\[
\tilde P =
\left[\begin{array}{c|cc}
A - BD^{-1}C & -BD^{-1} & BD^{-1} \\ \hline
0 & 0 & I \\
C & I & 0
\end{array}\right].
\]
Also, $\tilde K = I + DK(I - DK)^{-1} = (I - DK)^{-1}$.
3.
\begin{align*}
(A, B_2) \text{ stabilizable} &\iff (A - BD^{-1}C, BD^{-1}) \text{ stabilizable} \\
&\iff (A, B) \text{ stabilizable}.
\end{align*}
\begin{align*}
(A, C_2) \text{ detectable} &\iff (A - BD^{-1}C, C) \text{ detectable} \\
&\iff (A, C) \text{ detectable}.
\end{align*}
\begin{align*}
\begin{bmatrix} A - j\omega I & B_2 \\ C_1 & D_{12} \end{bmatrix} \text{ full column rank}
&\iff \begin{bmatrix} A - BD^{-1}C - j\omega I & BD^{-1} \\ 0 & I \end{bmatrix} \text{ full column rank} \\
&\iff A - BD^{-1}C - j\omega I \text{ full rank}.
\end{align*}
\begin{align*}
\begin{bmatrix} A - j\omega I & B_1 \\ C_2 & D_{21} \end{bmatrix} \text{ full row rank}
&\iff \begin{bmatrix} A - BD^{-1}C - j\omega I & -BD^{-1} \\ C & I \end{bmatrix} \text{ full row rank} \\
&\iff A - j\omega I \text{ full rank (take Schur complements)}.
\end{align*}
4. By direct substitution into the general formulas we get
\begin{align*}
0 &= (A - BD^{-1}C)'X + X(A - BD^{-1}C) - X(BD^{-1})(BD^{-1})'X \\
0 &= AY + YA' - YC'CY.
\end{align*}
5. Again, direct substitution into the general formulas yields
\begin{align*}
0 &= (A - BD^{-1}C)'X_\infty + X_\infty(A - BD^{-1}C) - (1 - \gamma^{-2})X_\infty(BD^{-1})(BD^{-1})'X_\infty \\
0 &= AY_\infty + Y_\infty A' - Y_\infty C'CY_\infty.
\end{align*}
Comparing terms yields $X_\infty = (1 - \gamma^{-2})^{-1}X$ and $Y_\infty = Y$. It is easy to see that these are the stabilizing solutions.
6. When $G^{-1} \in \mathcal{RH}_\infty$, we have $\mathrm{Re}\,\lambda_i(A - BD^{-1}C) < 0$, so $X_\infty = 0$. This together with $\rho(X_\infty Y_\infty) = \gamma_{\mathrm{opt}}^2$ implies that $\gamma_{\mathrm{opt}} = 0$, offering one explanation. Alternatively, if $G^{-1} \in \mathcal{RH}_\infty$, we can use an infinitely high, stabilizing feedback gain to reduce the sensitivity. This together with the fact that $\lim_{k\to\infty}(I - kG)^{-1} = 0$ provides a second explanation. (Remember $G(\infty)$ is nonsingular.)
7.
\begin{align*}
\rho(X_\infty Y_\infty) = \gamma_{\mathrm{opt}}^2
&\iff (1 - \gamma_{\mathrm{opt}}^{-2})^{-1}\rho(XY) = \gamma_{\mathrm{opt}}^2 \\
&\iff \rho(XY) = \gamma_{\mathrm{opt}}^2 - 1 \quad\text{provided } \gamma_{\mathrm{opt}} \ne 0 \\
&\iff \gamma_{\mathrm{opt}} = \sqrt{1 + \rho(XY)}.
\end{align*}
Solution 8.11.
1. The four given Riccati equations follow by substitution into their general counterparts.
2. Problem 3.23 implies that $X$ is nonsingular if and only if $\tilde A$ is asymptotically stable. Suppose we add
\[
\tilde AX^{-1} + X^{-1}\tilde A' - B_2B_2' = 0
\]
and
\[
-\gamma^{-2}(\tilde AW + W\tilde A') + \gamma^{-2}B_1B_1' = 0
\]
to get
\[
\tilde A(X^{-1} - \gamma^{-2}W) + (X^{-1} - \gamma^{-2}W)\tilde A' - (B_2B_2' - \gamma^{-2}B_1B_1') = 0.
\]
Now if $\rho(XW) < \gamma^2$, $(X^{-1} - \gamma^{-2}W)^{-1}$ exists, is nonnegative definite and satisfies
\begin{align*}
0 &= \tilde A'(X^{-1} - \gamma^{-2}W)^{-1} + (X^{-1} - \gamma^{-2}W)^{-1}\tilde A \\
&\quad - (X^{-1} - \gamma^{-2}W)^{-1}(B_2B_2' - \gamma^{-2}B_1B_1')(X^{-1} - \gamma^{-2}W)^{-1}.
\end{align*}
Since
\[
-(X^{-1} - \gamma^{-2}W)\tilde A'(X^{-1} - \gamma^{-2}W)^{-1}
= \tilde A - (B_2B_2' - \gamma^{-2}B_1B_1')(X^{-1} - \gamma^{-2}W)^{-1},
\]
this solution is stabilizing. We may therefore set $X_\infty = (X^{-1} - \gamma^{-2}W)^{-1}$. A parallel set of arguments leads to $Y_\infty = (Y^{-1} - \gamma^{-2}V)^{-1}$.
3. We know from the general theory that a stabilizing controller exists if and only if $X_\infty \ge 0$, $Y_\infty \ge 0$ and $\rho(X_\infty Y_\infty) \le \gamma^2$. Since $X_\infty > 0 \iff X^{-1} - \gamma^{-2}W > 0$, $Y_\infty > 0 \iff Y^{-1} - \gamma^{-2}V > 0$ and $\rho(X_\infty Y_\infty) \le \gamma^2 \iff Y_\infty^{-1} - \gamma^{-2}X_\infty \ge 0$, we can check these three necessary and sufficient conditions via the positivity of
\[
\Lambda(\gamma) = \begin{bmatrix}
Y^{-1} - \gamma^{-2}V & \gamma^{-1}I \\
\gamma^{-1}I & X^{-1} - \gamma^{-2}W
\end{bmatrix}.
\]
It is now easy to verify that
\[
\begin{bmatrix} \gamma^2I & -\gamma X \\ 0 & -\gamma X \end{bmatrix}
\Lambda(\gamma)
\begin{bmatrix} Y & 0 \\ 0 & -\gamma I \end{bmatrix}
= \gamma^2\begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}
- \begin{bmatrix} VY + XY & XW \\ XY & XW \end{bmatrix}.
\]
This means that the smallest value of $\gamma$ for which a solution exists is the largest value of $\gamma$ for which $\Lambda(\gamma)$ is singular; in other words, $\gamma^2$ is the largest eigenvalue of
\[
\begin{bmatrix} VY + XY & XW \\ XY & XW \end{bmatrix}.
\]
Solution 8.12.
1. Since
\[
y = \begin{bmatrix} G & I & G \end{bmatrix}
\begin{bmatrix} w \\ v \\ u \end{bmatrix},
\]
we conclude that the generalized plant for this problem is given by
\[
\begin{bmatrix} \begin{bmatrix} y \\ u \end{bmatrix} \\ y \end{bmatrix}
= \begin{bmatrix}
\begin{bmatrix} G & I \\ 0 & 0 \end{bmatrix} & \begin{bmatrix} G \\ I \end{bmatrix} \\
\begin{bmatrix} G & I \end{bmatrix} & G
\end{bmatrix}
\begin{bmatrix} \begin{bmatrix} w \\ v \end{bmatrix} \\ u \end{bmatrix}.
\]
It follows from the diagram in the question that
\begin{align*}
y &= v + GKy + Gw \\
\Rightarrow\quad y &= (I - GK)^{-1}\begin{bmatrix} G & I \end{bmatrix}\begin{bmatrix} w \\ v \end{bmatrix}.
\end{align*}
Similarly,
\begin{align*}
u &= K(v + Gw + Gu) \\
\Rightarrow\quad u &= K(I - GK)^{-1}\begin{bmatrix} G & I \end{bmatrix}\begin{bmatrix} w \\ v \end{bmatrix}.
\end{align*}
Combining these equations yields
\[
\begin{bmatrix} y \\ u \end{bmatrix}
= \begin{bmatrix} I \\ K \end{bmatrix}(I - GK)^{-1}\begin{bmatrix} G & I \end{bmatrix}
\begin{bmatrix} w \\ v \end{bmatrix}.
\]
Substituting $G = C(sI - A)^{-1}B$ yields the realization of this generalized plant:
\[
\begin{bmatrix}
\begin{bmatrix} G & I \\ 0 & 0 \end{bmatrix} & \begin{bmatrix} G \\ I \end{bmatrix} \\
\begin{bmatrix} G & I \end{bmatrix} & G
\end{bmatrix}
\stackrel{s}{=}
\left[\begin{array}{c|c|c}
A & \begin{bmatrix} B & 0 \end{bmatrix} & B \\ \hline
\begin{bmatrix} C \\ 0 \end{bmatrix} & \begin{bmatrix} 0 & I \\ 0 & 0 \end{bmatrix} & \begin{bmatrix} 0 \\ I \end{bmatrix} \\ \hline
C & \begin{bmatrix} 0 & I \end{bmatrix} & 0
\end{array}\right].
\]
2.
\[
\begin{bmatrix} 0 & I \\ 0 & 0 \end{bmatrix}
+ \begin{bmatrix} 0 \\ I \end{bmatrix}K(\infty)\begin{bmatrix} 0 & I \end{bmatrix}
= \begin{bmatrix} 0 & I \\ 0 & K(\infty) \end{bmatrix}.
\]
Therefore $\bar\sigma\big(\mathcal F_\ell(P, K)(\infty)\big) \ge 1$, which shows that $\|\mathcal F_\ell(P, K)\|_\infty \ge 1$. We also see that $\|\mathcal F_\ell(P, K)\|_2 = \infty$, since $\mathcal F_\ell(P, K)$ is not strictly proper.
3.
Step 1: Since $\bar\sigma(D_{11}) = 1$,
\[
D_{12\perp} = \begin{bmatrix} I \\ 0 \end{bmatrix}
\quad\text{and}\quad
\bar\sigma(D_{12\perp}'D_{11}) = \bar\sigma\big(\begin{bmatrix} I & 0 \end{bmatrix}D_{11}\big) = 1,
\]
we set $F = 0$ because the norm of $\tilde D_{11}$ cannot be reduced further (by feedback).
Step 2: From the definition of $\Theta$ in (4.6.5) we get
\begin{align*}
\Theta_{11} &= \begin{bmatrix} 0 & \gamma^{-2}I \\ 0 & 0 \end{bmatrix}
&\Theta_{12} &= \begin{bmatrix} \gamma^{-1}(1 - \gamma^{-2})^{1/2}I & 0 \\ 0 & \gamma^{-1}I \end{bmatrix} \\
\Theta_{21} &= \begin{bmatrix} \gamma^{-1}I & 0 \\ 0 & \gamma^{-1}(1 - \gamma^{-2})^{1/2}I \end{bmatrix}
&\Theta_{22} &= \begin{bmatrix} 0 & 0 \\ \gamma^{-2}I & 0 \end{bmatrix}.
\end{align*}
Now
\[
B_1\Theta_{22} = \begin{bmatrix} B & 0 \end{bmatrix}\begin{bmatrix} 0 & 0 \\ \gamma^{-2}I & 0 \end{bmatrix} = 0.
\]
Next, we observe that
\begin{align*}
(I - \Theta_{22}D_{11})^{-1}
&= \left(\begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}
- \begin{bmatrix} 0 & 0 \\ \gamma^{-2}I & 0 \end{bmatrix}\begin{bmatrix} 0 & I \\ 0 & 0 \end{bmatrix}\right)^{-1}
= \begin{bmatrix} I & 0 \\ 0 & \frac{1}{1 - \gamma^{-2}}I \end{bmatrix} \\
(I - D_{11}\Theta_{22})^{-1}
&= \left(\begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}
- \begin{bmatrix} 0 & I \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 0 & 0 \\ \gamma^{-2}I & 0 \end{bmatrix}\right)^{-1}
= \begin{bmatrix} \frac{1}{1 - \gamma^{-2}}I & 0 \\ 0 & I \end{bmatrix},
\end{align*}
which results in
\begin{align*}
\tilde A &= A, \text{ since } B_1\Theta_{22} = 0 \\
\tilde B_1 &= B_1(I - \Theta_{22}D_{11})^{-1}\Theta_{21}
= \begin{bmatrix} B & 0 \end{bmatrix}
\begin{bmatrix} I & 0 \\ 0 & \frac{1}{1 - \gamma^{-2}}I \end{bmatrix}
\begin{bmatrix} \gamma^{-1}I & 0 \\ 0 & \gamma^{-1}(1 - \gamma^{-2})^{1/2}I \end{bmatrix}
= \begin{bmatrix} \gamma^{-1}B & 0 \end{bmatrix} \\
\tilde B_2 &= B_2 = B, \text{ since } B_1\Theta_{22} = 0.
\end{align*}
In much the same way:
\begin{align*}
\tilde C_1 &= \Theta_{12}(I - D_{11}\Theta_{22})^{-1}C_1
= \gamma^{-1}\begin{bmatrix} \sqrt{1 - \gamma^{-2}}\,I & 0 \\ 0 & I \end{bmatrix}
\begin{bmatrix} \frac{1}{1 - \gamma^{-2}}I & 0 \\ 0 & I \end{bmatrix}
\begin{bmatrix} C \\ 0 \end{bmatrix}
= \begin{bmatrix} \frac{\gamma^{-1}}{\sqrt{1 - \gamma^{-2}}}C \\ 0 \end{bmatrix}, \\
\tilde C_2 &= C_2 + D_{21}\Theta_{22}(I - D_{11}\Theta_{22})^{-1}C_1 \\
&= C + \begin{bmatrix} 0 & I \end{bmatrix}\begin{bmatrix} 0 & 0 \\ \gamma^{-2}I & 0 \end{bmatrix}
\begin{bmatrix} \frac{1}{1 - \gamma^{-2}}I & 0 \\ 0 & I \end{bmatrix}
\begin{bmatrix} C \\ 0 \end{bmatrix} \\
&= C + \frac{\gamma^{-2}}{1 - \gamma^{-2}}C = \frac{1}{1 - \gamma^{-2}}C.
\end{align*}
The whole point of the construction is to ensure $\tilde D_{11} = 0$. It only remains for us to evaluate the remaining partitions of the $\tilde D$ matrix.
\begin{align*}
\tilde D_{12} &= \Theta_{12}(I - D_{11}\Theta_{22})^{-1}D_{12}
= \gamma^{-1}\begin{bmatrix} \sqrt{1 - \gamma^{-2}}\,I & 0 \\ 0 & I \end{bmatrix}
\begin{bmatrix} \frac{1}{1 - \gamma^{-2}}I & 0 \\ 0 & I \end{bmatrix}
\begin{bmatrix} 0 \\ I \end{bmatrix}
= \begin{bmatrix} 0 \\ \gamma^{-1}I \end{bmatrix} \\
\tilde D_{21} &= D_{21}(I - \Theta_{22}D_{11})^{-1}\Theta_{21}
= \begin{bmatrix} 0 & I \end{bmatrix}
\begin{bmatrix} I & 0 \\ 0 & \frac{1}{1 - \gamma^{-2}}I \end{bmatrix}
\begin{bmatrix} \gamma^{-1}I & 0 \\ 0 & \gamma^{-1}(1 - \gamma^{-2})^{1/2}I \end{bmatrix}
= \begin{bmatrix} 0 & \frac{\gamma^{-1}}{\sqrt{1 - \gamma^{-2}}}I \end{bmatrix} \\
\tilde D_{22} &= D_{22} + D_{21}\Theta_{22}(I - D_{11}\Theta_{22})^{-1}D_{12}
= \begin{bmatrix} 0 & I \end{bmatrix}\begin{bmatrix} 0 & 0 \\ \gamma^{-2}I & 0 \end{bmatrix}
\begin{bmatrix} \frac{1}{1 - \gamma^{-2}}I & 0 \\ 0 & I \end{bmatrix}
\begin{bmatrix} 0 \\ I \end{bmatrix}
= 0.
\end{align*}
Combining these results now yields:
\[
\tilde P \stackrel{s}{=}
\left[\begin{array}{c|cc|c}
A & \gamma^{-1}B & 0 & B \\ \hline
\frac{\gamma^{-1}}{\sqrt{1 - \gamma^{-2}}}C & 0 & 0 & 0 \\
0 & 0 & 0 & \gamma^{-1}I \\ \hline
\frac{1}{1 - \gamma^{-2}}C & 0 & \frac{\gamma^{-1}}{\sqrt{1 - \gamma^{-2}}}I & 0
\end{array}\right].
\]
Since $\|\mathcal F_\ell(P, K)\|_\infty \le \gamma \iff \|\mathcal F_\ell(\tilde P, K)\|_\infty \le 1$, we can rescale $\tilde P$ so that $\|\mathcal F_\ell(\tilde P, K)\|_\infty \le \gamma$ with
\[
\tilde P \stackrel{s}{=}
\left[\begin{array}{c|cc|c}
A & B & 0 & B \\ \hline
\frac{1}{\sqrt{1 - \gamma^{-2}}}C & 0 & 0 & 0 \\
0 & 0 & 0 & I \\ \hline
\frac{1}{1 - \gamma^{-2}}C & 0 & \frac{1}{\sqrt{1 - \gamma^{-2}}}I & 0
\end{array}\right].
\]
Step 3: Scale $D_{12}$ and $D_{21}$. $D_{12}$ requires no scaling, while scaling $D_{21}$ yields
\[
\tilde P \stackrel{s}{=}
\left[\begin{array}{c|cc|c}
A & B & 0 & B \\ \hline
\frac{1}{\sqrt{1 - \gamma^{-2}}}C & 0 & 0 & 0 \\
0 & 0 & 0 & I \\ \hline
\frac{1}{\sqrt{1 - \gamma^{-2}}}C & 0 & I & 0
\end{array}\right].
\]
4. Since $(A, C_2)$ must be detectable, we require $(A, C)$ detectable. To establish that this condition suffices for one of the imaginary axis conditions we argue as follows:
\begin{align*}
\begin{bmatrix} j\omega I - A & B_2 \\ C_1 & D_{12} \end{bmatrix}
\begin{bmatrix} w_1 \\ w_2 \end{bmatrix} = 0
&\iff
\left[\begin{array}{cc}
j\omega I - A & B \\
\frac{1}{\sqrt{1 - \gamma^{-2}}}C & 0 \\
0 & I
\end{array}\right]
\begin{bmatrix} w_1 \\ w_2 \end{bmatrix} = 0 \\
&\iff w_2 = 0 \text{ and } \begin{bmatrix} j\omega I - A \\ C \end{bmatrix}w_1 = 0.
\end{align*}
It is clear that the second condition is equivalent to $(A, C)$ having no unobservable modes on the imaginary axis and this is implied by the detectability of $(A, C)$. A dual argument may be used to establish the required stabilizability of $(A, B)$.

Note that there is no cross coupling in the generalized plant $\tilde P$. It is immediate that the $X_\infty$ and $Y_\infty$ Riccati equations are as stated in the question.
5. The generalized plant for this problem is given by
\[
\begin{bmatrix} \begin{bmatrix} \bar y \\ u \end{bmatrix} \\ y \end{bmatrix}
= \begin{bmatrix}
\begin{bmatrix} G & 0 \\ 0 & 0 \end{bmatrix} & \begin{bmatrix} G \\ I \end{bmatrix} \\
\begin{bmatrix} G & I \end{bmatrix} & G
\end{bmatrix}
\begin{bmatrix} \begin{bmatrix} w \\ v \end{bmatrix} \\ u \end{bmatrix}.
\]
The realization of this generalized plant is
\[
\begin{bmatrix}
\begin{bmatrix} G & 0 \\ 0 & 0 \end{bmatrix} & \begin{bmatrix} G \\ I \end{bmatrix} \\
\begin{bmatrix} G & I \end{bmatrix} & G
\end{bmatrix}
\stackrel{s}{=}
\left[\begin{array}{c|c|c}
A & \begin{bmatrix} B & 0 \end{bmatrix} & B \\ \hline
\begin{bmatrix} C \\ 0 \end{bmatrix} & \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} & \begin{bmatrix} 0 \\ I \end{bmatrix} \\ \hline
C & \begin{bmatrix} 0 & I \end{bmatrix} & 0
\end{array}\right].
\]
Substituting into the general formulas now gives
\begin{align*}
A'X + XA - XBB'X + C'C &= 0 \\
AY + YA' - YC'CY + BB' &= 0.
\end{align*}
6. Substituting $X_\infty = (1 - \gamma^{-2})^{-1}X$ into the $X_\infty$ Riccati equation gives
\[
A'X + XA - XBB'X + C'C = 0,
\]
which shows that $X_\infty = (1 - \gamma^{-2})^{-1}X$ is indeed a solution. If $A - BB'X$ is stable, then so is $A - (1 - \gamma^{-2})BB'X_\infty$. It is clear that $X_\infty \ge 0$ for all $\gamma > 1$. The fact that $Y_\infty = Y$ is trivial.
The spectral radius condition gives:
\[
\gamma^2 \ge \rho(X_\infty Y_\infty) = (1 - \gamma^{-2})^{-1}\rho(XY)
\iff \gamma \ge \sqrt{1 + \rho(XY)}.
\]
Hence $\gamma > \sqrt{1 + \rho(XY)}$ implies that all the conditions are met and we conclude that $\gamma_{\mathrm{opt}} = \sqrt{1 + \rho(XY)}$.
7. Substitution into the various definitions in the text gives:
\begin{align*}
F_\infty &= -B_2'X_\infty = -(1 - \gamma^{-2})^{-1}B'X \\
C_{2z} &= \frac{1}{\sqrt{1 - \gamma^{-2}}}C \\
Z_\infty &= (I - \gamma^{-2}Y_\infty X_\infty)^{-1}Y_\infty = \gamma^2(1 - \gamma^{-2})W^{-1}Y.
\end{align*}
From (8.3.11) we obtain the central controller $\tilde K \stackrel{s}{=} (A_k, B_{k1}, C_{k1})$ for the scaled system $\tilde P$:
\begin{align*}
B_{k1} &= Z_\infty C_{2z}' = \gamma^2\sqrt{1 - \gamma^{-2}}\,W^{-1}YC' \\
C_{k1} &= -(1 - \gamma^{-2})^{-1}B'X \\
A_k &= A + \gamma^{-2}BB'X_\infty - BB'X_\infty
- \gamma^2\sqrt{1 - \gamma^{-2}}\,W^{-1}YC'\,\frac{1}{\sqrt{1 - \gamma^{-2}}}C \\
&= A - BB'X - \gamma^2W^{-1}YC'C.
\end{align*}
Recalling the $D_{21}$-scaling, which means $\tilde K = \frac{1}{\sqrt{1 - \gamma^{-2}}}K$, we set
\[
K = \sqrt{1 - \gamma^{-2}}\,\tilde K
\]
by multiplying $B_{k1}$ by $\sqrt{1 - \gamma^{-2}}$. This gives the controller
\begin{align*}
\dot{\hat x} &= (A - BB'X - \gamma^2W^{-1}YC'C)\hat x + (\gamma^2 - 1)W^{-1}YC'y \\
u &= -(1 - \gamma^{-2})^{-1}B'X\hat x.
\end{align*}
Defining $\bar x = (1 - \gamma^{-2})^{-1}\hat x$ yields
\begin{align*}
\dot{\bar x} &= (A - BB'X - \gamma^2W^{-1}YC'C)\bar x + \gamma^2W^{-1}YC'y \\
u &= -B'X\bar x.
\end{align*}
Finally, multiplying by $W$ yields
\begin{align*}
W\dot{\bar x} &= \big(W(A - BB'X) - \gamma^2YC'C\big)\bar x + \gamma^2YC'y \\
u &= -B'X\bar x,
\end{align*}
which is the desired controller.

Notice that for suboptimal $\gamma$, $W$ is nonsingular and we have
\begin{align*}
\dot{\bar x} &= A\bar x + Bu + \gamma^2W^{-1}YC'(y - C\bar x) \\
u &= -B'X\bar x,
\end{align*}
which is an observer and state-estimate feedback. The LQG optimal controller for the problem in Part 5 is
\begin{align*}
\dot{\hat x}_{\mathrm{LQG}} &= A\hat x_{\mathrm{LQG}} + Bu + YC'(y - C\hat x_{\mathrm{LQG}}) \\
u &= -B'X\hat x_{\mathrm{LQG}}.
\end{align*}
The $\mathcal H_\infty$ controller therefore uses a different observer gain matrix, but the same feedback gain matrix.
8. Using
\[
g \stackrel{s}{=} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix},
\]
we obtain $X = 1$ and $Y = 1$. Therefore
\[
\gamma_{\mathrm{opt}} = \sqrt{1 + \rho(XY)} = \sqrt 2.
\]
Setting $k = -1$ gives a closed-loop transfer function matrix of
\[
\begin{bmatrix} 1 \\ k \end{bmatrix}(1 - gk)^{-1}\begin{bmatrix} g & 1 \end{bmatrix}
= \begin{bmatrix} 1 \\ -1 \end{bmatrix}\frac{1}{s + 1}\begin{bmatrix} 1 & s \end{bmatrix}
= \frac{1}{s + 1}\begin{bmatrix} 1 & s \\ -1 & -s \end{bmatrix}.
\]
The singular values of this matrix are the square roots of the eigenvalues of
\[
\frac{1}{j\omega + 1}\begin{bmatrix} 1 & j\omega \\ -1 & -j\omega \end{bmatrix}
\cdot
\frac{1}{-j\omega + 1}\begin{bmatrix} 1 & -1 \\ -j\omega & j\omega \end{bmatrix}
= \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}.
\]
Now $\begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}$ has eigenvalues $2$ and $0$, and so $\sigma_1 = \sqrt 2$ and $\sigma_2 = 0$.
(It is instructive to examine the general controller formula for this case. We have $X = 1$ and $Y = 1$, so $W = \gamma^2 - 2$. Substitution into the controller formulas gives
\[
u = \frac{-\gamma^2}{(\gamma^2 - 2)s + 2(\gamma^2 - 1)}\,y.
\]
For $\gamma^2 = 2$, this gives the optimal controller $u = -y$. The LQG controller is $\frac{-1}{s + 2}$.)
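The singular-value claim in Part 8 can be confirmed on a frequency grid; the sketch below evaluates the closed loop at a few illustrative frequencies.

```python
# Check of Solution 8.12, Part 8: with g = 1/s and k = -1, the closed-loop
# matrix (1/(s+1)) [[1, s], [-1, -s]] has singular values sqrt(2) and 0
# at every frequency. The sampled frequencies are illustrative.
import numpy as np

sv_list = []
for w in (0.1, 1.0, 10.0):
    s = 1j * w
    M = np.array([[1.0, s], [-1.0, -s]]) / (s + 1.0)
    sv_list.append(np.linalg.svd(M, compute_uv=False))
```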
Solution 8.13. We show how the optimal controllers given in Examples 8.4 and 2.4.2 may be obtained using the Robust Control Toolbox and MATLAB, version 4.0.¹ Note that Chiang and Safonov, authors of the Robust Control Toolbox [35], consider the synthesis problem $\|\gamma\,\mathcal F_\ell(P, K)\|_\infty < 1$.

Servomechanism of Section 8.4:

>> J1=1;
>> J2=2;
>> D1=.01;
>> D2=.02;
>> K=30;
>> A=[-D1/J1,0,-K/J1;0,-D2/J2,K/J2;1,-1,0];
>> B=[40/J1;0;0];
>> C=[0,1,0];
>> D=0;
>> B1=[B, zeros(3,1)];
>> B2=-B;
>> C1=[zeros(1,3);C];
>> C2=C;
>> D11=[1,0;D,1];
>> D12=[-1;D];
>> D21=[D,1];
>> D22=-D;
>> genplant=mksys(A,B1,B2,C1,C2,D11,D12,D21,D22,'tss');
>> GOPT=[1;2];
>> aux=[1e-12,1/4,1/3];
>> [gopt,ss_cp,ss_cl]=hinfopt(genplant,GOPT,aux);

<< H-Infinity Optimal Control Synthesis >>

(Information about the $\mathcal H_\infty$ optimization is displayed.)

>> 1/gopt

ans =
    3.8856

>> [Ak,Bk,Ck,Dk]=branch(ss_cp);
>> [NUMk,DENk]=ss2tf(Ak,Bk,Ck,Dk)

NUMk =
    3.6283    6.8528   88.0362

DENk =
    1.0000   25.3218  343.0775

Example 2.4.2:

Additive robustness problem

>> num=1;
>> den=[1,-2,1];
>> [A,B,C,D]=tf2ss(num,den);
>> genplant=mksys(A,zeros(2,1),B,zeros(1,2),C,0,1,1,0,'tss');
>> gamopt=4*sqrt(3+2*sqrt(2))

gamopt =
    9.6569

>> aux=[1e-12,1/9,1/10];
>> GOPT=1;
>> [gopt,ss_cp,ss_cl]=hinfopt(genplant,GOPT,aux);

<< H-Infinity Optimal Control Synthesis >>

(Information about the $\mathcal H_\infty$ optimization is displayed.)

>> 1/gopt

ans =
    9.6569

>> [Ak,Bk,Ck,Dk]=branch(ss_cp);
>> [NUMk,DENk]=ss2tf(Ak,Bk,Ck,Dk)

NUMk =
   -9.6569    4.0000

DENk =
    1.0000    4.4142

The combined additive/multiplicative problem

>> num=1; den=[1,-2,1];
>> [A,B,C,D]=tf2ss(num,den);
>> Z12=zeros(1,2);
>> Z21=zeros(2,1);
>> genplant=mksys(A,Z21,B,[C;Z12],C,Z21,[D;1/10],1,D,'tss');
>> GOPT=[1;2];
>> aux=[1e-12,1/2,1/3];
>> [gopt,ss_cp,ss_cl]=hinfopt(genplant,GOPT,aux);

<< H-Infinity Optimal Control Synthesis >>

(Information about the $\mathcal H_\infty$ optimization is displayed.)

>> 1/gopt

ans =
    2.3818

>> [Ak,Bk,Ck,Dk]=branch(ss_cp);
>> [NUMk,DENk]=ss2tf(Ak,Bk,Ck,Dk)

NUMk =
  -23.8181    4.8560

DENk =
    1.0000    6.9049

¹MATLAB is a registered trademark of The MathWorks, Inc.
Solution 8.14. Suppose
\[
P \stackrel{s}{=}
\left[\begin{array}{c|cc}
A & B_1 & B_2 \\ \hline
C_1 & D_{11} & D_{12} \\
C_2 & D_{21} & D_{22}
\end{array}\right].
\]
Since $(A, B_2)$ is stabilizable, there exists an $F$ such that the eigenvalues of $A - B_2F$ are not on the imaginary axis. Similarly, since $(A, C_2)$ is detectable, there exists an $H$ such that the eigenvalues of $A - HC_2$ are not on the imaginary axis.
Now consider the dilated plant
\[
P_a \stackrel{s}{=}
\left[\begin{array}{c|cc|c}
A & B_1 & \epsilon H & B_2 \\ \hline
C_1 & D_{11} & 0 & D_{12} \\
\epsilon F & 0 & 0 & \epsilon I \\ \hline
C_2 & D_{21} & \epsilon I & D_{22}
\end{array}\right].
\]
Firstly, we show that
\[
\mathrm{rank}\begin{bmatrix}
A - j\omega I & B_2 \\
C_1 & D_{12} \\
\epsilon F & \epsilon I
\end{bmatrix} = m + n
\]
for any $\epsilon \ne 0$. This follows since
\[
\begin{bmatrix}
I & 0 & -\epsilon^{-1}B_2 \\
0 & I & -\epsilon^{-1}D_{12} \\
0 & 0 & I
\end{bmatrix}
\begin{bmatrix}
A - j\omega I & B_2 \\
C_1 & D_{12} \\
\epsilon F & \epsilon I
\end{bmatrix}
= \begin{bmatrix}
A - B_2F - j\omega I & 0 \\
C_1 - D_{12}F & 0 \\
\epsilon F & \epsilon I
\end{bmatrix},
\]
in which $A - B_2F$ may be chosen with no eigenvalues on the imaginary axis.
A parallel argument proves that
\[
\mathrm{rank}\begin{bmatrix}
A - j\omega I & B_1 & \epsilon H \\
C_2 & D_{21} & \epsilon I
\end{bmatrix} = q + n.
\]
To prove the first direction suppose we select $K$ such that $\mathcal F_\ell(P_a, K)$ is internally stable and such that $\|\mathcal F_\ell(P_a, K)\|_\infty < \gamma$. Since the dilation process has no effect on internal stability, $\mathcal F_\ell(P, K)$ is stable. In addition, $\|\mathcal F_\ell(P, K)\|_\infty < \gamma$ since removing the dilation must be norm non-increasing.
Conversely, if $K$ is stabilizing and satisfies $\|\mathcal F_\ell(P, K)\|_\infty < \gamma$, the closed-loop mapping from $w$ to $\begin{bmatrix} x' & u' \end{bmatrix}'$ has finite norm. Thus there exists an $\epsilon > 0$ such that $\|\mathcal F_\ell(P_a, K)\|_\infty < \gamma$.
Solutions to Problems in
Chapter 9
Solution 9.1.
1. Let $C$ have SVD $C = U_1\Sigma V_1'$, in which
\[
\Sigma = \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix}, \qquad \det(\Sigma_1) \ne 0.
\]
Then $BB' = C'C = V_1\Sigma^2V_1'$, so $B$ has SVD
\[
B = V_1\tilde\Sigma U_2', \qquad \tilde\Sigma = \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix}.
\]
($\tilde\Sigma$ has fewer columns than $\Sigma$ when $B$ has fewer columns than $C$ has rows.) Therefore,
\[
B = V_1\Sigma U_1'\,U_1\begin{bmatrix} U_2' \\ 0 \end{bmatrix} = -C'U/\gamma,
\]
in which
\[
U = -\gamma U_1\begin{bmatrix} U_2' \\ 0 \end{bmatrix}.
\]
2. The point here is that $C$ is no longer assumed to have at least as many rows as $B$ has columns. To overcome this, augment $C$ with zero rows:
\[
\tilde C = \begin{bmatrix} C \\ 0 \end{bmatrix},
\]
with the number of zero rows of $\tilde C$ chosen so that $\tilde C$ has at least as many rows as $B$ has columns. Therefore, there exists a $\tilde U$ such that $\tilde U'\tilde U = \gamma^2I$ and $\gamma B + \tilde C'\tilde U = 0$. Partition $\tilde U$ conformably with $\tilde C$:
\[
\tilde U = \begin{bmatrix} U \\ U_2 \end{bmatrix}.
\]
Then $\tilde U'\tilde U = \gamma^2I$ implies $U'U = \gamma^2I - U_2'U_2 \le \gamma^2I$, and $\gamma B + C'U = 0$. Furthermore, $U$ has SVD
\[
U = \gamma\begin{bmatrix} I & 0 \end{bmatrix}(\tilde U/\gamma),
\]
so the singular values of $U$ are either $\gamma$ or zero.
Solution 9.2. In order for
\[
\Sigma = \begin{bmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{bmatrix}
\]
to be the controllability/observability gramian of the combined system, we require that $A_{21}$ and $A_{12}$ satisfy
\begin{align*}
A_{12}\Sigma_2 + \Sigma_1A_{21}' + B_1B_2' &= 0 \\
\Sigma_1A_{12} + A_{21}'\Sigma_2 + C_1'C_2 &= 0.
\end{align*}
Hence
\begin{align*}
A_{12}\Sigma_2^2 + \Sigma_1A_{21}'\Sigma_2 + B_1B_2'\Sigma_2 &= 0 \\
\Sigma_1^2A_{12} + \Sigma_1A_{21}'\Sigma_2 + \Sigma_1C_1'C_2 &= 0,
\end{align*}
giving
\[
\Sigma_1^2A_{12} - A_{12}\Sigma_2^2 + \Sigma_1C_1'C_2 - B_1B_2'\Sigma_2 = 0, \tag{9.1}
\]
which has a unique solution $A_{12}$ provided that no eigenvalue of $\Sigma_1$ is also an eigenvalue of $\Sigma_2$. This is a standard result from the theory of linear matrix equations. To prove it, let
\[
\begin{bmatrix}
\Sigma_1^2 & \Sigma_1C_1'C_2 - B_1B_2'\Sigma_2 \\
0 & \Sigma_2^2
\end{bmatrix}
\begin{bmatrix} V_1 \\ V_2 \end{bmatrix}
= \begin{bmatrix} V_1 \\ V_2 \end{bmatrix}\Sigma_2^2 \tag{9.2}
\]
in which $\begin{bmatrix} V_1' & V_2' \end{bmatrix}'$ has full column rank. (This can be obtained from an eigenvalue decomposition.) Provided $V_2$ is nonsingular, it is easy to check that $A_{12} = V_1V_2^{-1}$ is a solution to (9.1). To show that $V_2$ is nonsingular, suppose that $V_2x = 0$. Multiplying (9.2) by $x$, we see that $V_2\Sigma_2^2x = 0$ and we conclude that there exists a $y$ such that $V_2y = 0$ and $\Sigma_2^2y = \lambda^2y$. Now multiplying (9.2) by $y$ we obtain $\Sigma_1^2V_1y = \lambda^2V_1y$. Since $\Sigma_1$ and $\Sigma_2$ have no eigenvalues in common, we conclude that $V_1y = 0$. Thus $\begin{bmatrix} V_1' & V_2' \end{bmatrix}'y = 0$, which implies $y = 0$ and we conclude that $V_2$ is nonsingular. Conversely, if $A_{12}$ is a solution to (9.1), then $V_1 = A_{12}$ and $V_2 = I$ is a solution to (9.2). Thus, $V_1$, $V_2$ satisfy (9.2) if and only if
\[
\begin{bmatrix} V_1 \\ V_2 \end{bmatrix}
= \begin{bmatrix} A_{12} \\ I \end{bmatrix}V_2, \qquad \det V_2 \ne 0.
\]
Since the eigenvalue decomposition ensures that we can always find a $V_1$ and $V_2$ to satisfy (9.2), we conclude that $A_{12} = V_1V_2^{-1}$ exists and is unique.
The matrix $A_{21}$ may be similarly obtained as the unique solution to the linear equation
\[
A_{21}\Sigma_1^2 - \Sigma_2^2A_{21} + B_2B_1'\Sigma_1 - \Sigma_2C_2'C_1 = 0.
\]
Solution 9.3. Pick any and let A =

A(j), B =

B(j) and C =

C(j).
Suppose Ax = x, x ,= 0. From (A+A

) +BB

= 0, we conclude that x

x( +

) +|B

x|
2
= 0. Since x

x ,= 0, we have that ( +

) 0. Equality cannot hold
(by assumption) and hence A is asymptotically stable. The bounded real lemma
says that |C(sI A)
1
B|

< if and only if there exists a P 0 such that


PA+A

P +
2
PBB

P +C

C = 0,
with A +
2
BB

P is asymptotically stable. If we hypothesize a solution of the


form P = I and substitute for C

C and BB

, we see that P = I is a solution


provided satises the quadratic equation

2
/
2
+ = 0.
The condition for a real solution is 2. In this case, both the solutions are
nonnegative. It remains to show that one of these solution is stabilizing (i.e., A +

2
BB

asymptotically stable). Assume that > 2, let


1
be the smaller of the
two solutions, and let
2
be the larger of the two solutions. Note that both P =
1
I
and P =
2
I are solutions of the Riccati equation. Subtracting these two Riccati
equations and re-arranging terms yields
(
2

1
)(A+
2
BB

1
) + (A+
2
BB

1
)

(
2

1
) +
2
(
2

1
)
2
BB

= 0
and it follows that (A+
2
BB

1
) is asymptotically stable. Thus, for any > 2,
there exists a stabilizing nonnegative denite solution to the bounded real lemma
Riccati equation and we conclude that |C(sI A)
1
B|

< for any > 2. In


particular, we have
_

C(j)(jI

A(j))
1

B(j)
_
< for any > 2. Since
was arbitrary, we conclude that
sup

C(j)(jI

A(j))
1

B(j)
_
2.
Solution 9.4. Since $\alpha_i > 0$ and the $\alpha_i$ are distinct, the poles are obviously simple and located at $s = -\alpha_i < 0$. To see that the zeros lie between the poles, consider any term $t_i = \alpha_i/(s+\alpha_i)$ in the sum. As $s$ moves out along the negative real axis, $t_i$ increases monotonically from 1 (at $s = 0$) to $\infty$ (at $s = -\alpha_i$); it then becomes negative and decreases in magnitude to zero at $s = -\infty$. Thus the function $g_n(s)$, which is

124 SOLUTIONS TO PROBLEMS IN CHAPTER 9

continuous except at the poles, moves from $-\infty$ to $+\infty$ as $s$ moves from pole to pole. It must therefore have a zero between every pole. This accounts for the $n-1$ zeros and we conclude they are each simple and are located between each of the poles.
Considering the basic facts concerning Bode phase diagrams, we conclude from the interlacing property that the phase of $g_n(j\omega)$ always lies between $0^\circ$ and $-90^\circ$. This means that $g_n$ is positive real, and it is therefore the impedance function of a passive circuit.
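The interlacing and phase properties are easy to confirm numerically. The sketch below (a minimal check, with arbitrarily chosen parameters $\alpha = \{1, 2, 4\}$) locates a real zero strictly inside each gap between adjacent poles by bisection, and samples the phase of $g_n(j\omega)$:

```python
import math, cmath

alphas = [1.0, 2.0, 4.0]  # distinct positive parameters (illustrative values)

def g(s):
    return sum(a / (s + a) for a in alphas)

def bisect_zero(lo, hi, tol=1e-12):
    # g tends to +inf at the right end of each gap and -inf at the left end,
    # so a sign change (hence a zero) is guaranteed in (lo, hi)
    flo = g(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) == 0 or hi - lo < tol:
            return mid
        if (g(mid) > 0) == (flo > 0):
            lo, flo = mid, g(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

eps = 1e-9
zeros = [bisect_zero(-2 + eps, -1 - eps), bisect_zero(-4 + eps, -2 - eps)]

# the phase of g(jw) stays in (-90, 0] degrees, consistent with positive realness
phases = [math.degrees(cmath.phase(g(1j * w))) for w in (0.01, 0.5, 3.0, 50.0)]
```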
Solution 9.5. The balanced truncation reduced-order model is given by the realization $(LAM, LB, CM, D)$, in which $L$ is the matrix consisting of the first $r$ rows of $T$ and $M$ is the matrix consisting of the first $r$ columns of $T^{-1}$, in which $T$ is as defined in Lemma 9.3.1. That is, $T = \Sigma^{\frac12}U^*R^{-1}$, in which $P = RR^*$ and $R^*QR = U\Sigma^2U^*$. We need to verify that $L$ and $M$ are given by the alternative expressions stated in the problem.
Since $P = RR^* = U_PS_PU_P^*$, we see that $R = U_PS_P^{\frac12}$. Writing $Q = U_QS_QU_Q^*$ and introducing the singular value decomposition $S_Q^{\frac12}U_Q^*U_PS_P^{\frac12} = V\Sigma U^*$, we obtain
$$R^*QR = (S_Q^{\frac12}U_Q^*U_PS_P^{\frac12})^*(S_Q^{\frac12}U_Q^*U_PS_P^{\frac12}) = (V\Sigma U^*)^*(V\Sigma U^*) = U\Sigma^2U^*.$$
Hence
$$T = \Sigma^{\frac12}U^*R^{-1} = \Sigma^{\frac12}U^*S_P^{-\frac12}U_P^* \qquad (9.3)$$
$$\phantom{T} = \Sigma^{\frac12}U^*U\Sigma^{-1}V^*S_Q^{\frac12}U_Q^* = \Sigma^{-\frac12}V^*S_Q^{\frac12}U_Q^*.$$
In the above, the fact that $U\Sigma^{-1}V^* = (S_Q^{\frac12}U_Q^*U_PS_P^{\frac12})^{-1}$ has been used. From the partitions of $V$ and $\Sigma$, it is now easily seen that $L$, the first $r$ rows of $T$, is given by $L = \Sigma_1^{-\frac12}V_1^*S_Q^{\frac12}U_Q^*$ as required.
Now take the inverse of the formula (9.3) for $T$ to give $T^{-1} = U_PS_P^{\frac12}U\Sigma^{-\frac12}$, and from the partitions of $U$ and $\Sigma$ we see that $M$, the first $r$ columns of $T^{-1}$, is given by $M = U_PS_P^{\frac12}U_1\Sigma_1^{-\frac12}$.
Solution 9.6.
1. Proof is identical to that of Lemma 9.3.1.
2. From the (1,1)-block of the observability gramian equation, we have
$$A_{11}^*\Sigma_1A_{11} - \Sigma_1 + A_{21}^*\Sigma_2A_{21} + C_1^*C_1 = 0.$$

MODEL REDUCTION BY TRUNCATION 125

Suppose $A_{11}x = \lambda x$. Then
$$x^*\Sigma_1x(|\lambda|^2 - 1) + x^*(A_{21}^*\Sigma_2A_{21} + C_1^*C_1)x = 0.$$
Since $\Sigma_1 > 0$ and $\Sigma_2 > 0$, we conclude that either (a) $|\lambda| < 1$, or that (b) $A_{21}x = 0$, $C_1x = 0$. If case (b) holds, we have
$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}\begin{bmatrix} x \\ 0 \end{bmatrix} = \lambda\begin{bmatrix} x \\ 0 \end{bmatrix},$$
which implies that $|\lambda| < 1$ or $x = 0$, since $A$ is asymptotically stable. Thus $A_{11}x = \lambda x$ implies that $|\lambda| < 1$ or $x = 0$, which means that $A_{11}$ is asymptotically stable.
3. Follows by direct calculation. Let $\Delta = e^{j\theta}I - A_{11}$, so that $\tilde A(\theta) = A_{22} + A_{21}\Delta^{-1}A_{12}$. The calculations required to show
$$\tilde A(\theta)\Sigma_2\tilde A^*(\theta) - \Sigma_2 + \tilde B(\theta)\tilde B^*(\theta) = 0$$
are facilitated by the identities
$$A_{12}\Sigma_2A_{12}^* = \Sigma_1 - A_{11}\Sigma_1A_{11}^* - B_1B_1^* = \Delta\Sigma_1\Delta^* + A_{11}\Sigma_1\Delta^* + \Delta\Sigma_1A_{11}^* - B_1B_1^*$$
and
$$A_{11}\Sigma_1A_{21}^* + A_{12}\Sigma_2A_{22}^* + B_1B_2^* = 0.$$
Similarly, the identities
$$A_{21}^*\Sigma_2A_{21} = \Sigma_1 - A_{11}^*\Sigma_1A_{11} - C_1^*C_1 = \Delta^*\Sigma_1\Delta + A_{11}^*\Sigma_1\Delta + \Delta^*\Sigma_1A_{11} - C_1^*C_1$$
and
$$A_{11}^*\Sigma_1A_{12} + A_{21}^*\Sigma_2A_{22} + C_1^*C_2 = 0$$
may be used to establish
$$\tilde A^*(\theta)\Sigma_2\tilde A(\theta) - \Sigma_2 + \tilde C^*(\theta)\tilde C(\theta) = 0.$$
4. Since $A$ is asymptotically stable and $BB^* = I - AA^*$, $B$ has full row rank. Similarly, $C$ has full column rank. Therefore, by introducing unitary changes of coordinates to the input and output spaces, $B$ and $C$ may be written as
$$B = \begin{bmatrix} \hat B & 0 \end{bmatrix}, \qquad C = \begin{bmatrix} \hat C \\ 0 \end{bmatrix},$$
in which $\hat B$ and $\hat C$ are nonsingular. Now define $\hat D = -\hat C^{-*}A^*\hat B$ and
$$U = \begin{bmatrix} A & \hat B \\ \hat C & \hat D \end{bmatrix}.$$
A calculation shows that $U^*U = I$, so $U$ is orthogonal. We therefore have
$$\begin{aligned}
\hat B^*(e^{-j\theta}I - A^*)^{-1}\hat C^*\hat C(e^{j\theta}I - A)^{-1}\hat B &= \hat B^*(e^{-j\theta}I - A^*)^{-1}(I - A^*A)(e^{j\theta}I - A)^{-1}\hat B \\
&= \hat B^*(e^{-j\theta}I - A^*)^{-1}\big[(e^{-j\theta}I - A^*)(e^{j\theta}I - A) \\
&\qquad + (e^{-j\theta}I - A^*)A + A^*(e^{j\theta}I - A)\big](e^{j\theta}I - A)^{-1}\hat B \\
&= \hat B^*\big[I + A(e^{j\theta}I - A)^{-1} + (e^{-j\theta}I - A^*)^{-1}A^*\big]\hat B.
\end{aligned}$$
Since, by direct calculation, $A^*\hat B + \hat C^*\hat D = 0$ and $\hat D^*\hat D + \hat B^*\hat B = I$, we see that
$$\big[\hat D + \hat C(e^{j\theta}I - A)^{-1}\hat B\big]^*\big[\hat D + \hat C(e^{j\theta}I - A)^{-1}\hat B\big] = I.$$
Since $\hat B$ is nonsingular, it follows from $U^*U = I$ that $\|\hat D\| < 1$. Hence
$$\begin{aligned}
\bar\sigma\big[C(e^{j\theta}I - A)^{-1}B\big] &= \bar\sigma\big[\hat C(e^{j\theta}I - A)^{-1}\hat B\big] \\
&= \bar\sigma\big[\hat D + \hat C(e^{j\theta}I - A)^{-1}\hat B - \hat D\big] \\
&\leq \bar\sigma\big[\hat D + \hat C(e^{j\theta}I - A)^{-1}\hat B\big] + \|\hat D\| \\
&< 1 + 1 = 2.
\end{aligned}$$
5. Obvious from
$$A_{11}\Sigma_1A_{11}^* - \Sigma_1 + A_{12}\Sigma_2A_{12}^* + B_1B_1^* = 0$$
$$A_{11}^*\Sigma_1A_{11} - \Sigma_1 + A_{21}^*\Sigma_2A_{21} + C_1^*C_1 = 0.$$
6. Delete the states associated with $\sigma_m$ to give $G_1$, with error $E_1 = \tilde C(\theta)\big[e^{j\theta}I - \tilde A(\theta)\big]^{-1}\tilde B(\theta)$, in which
$$\sigma_m\big(\tilde A(\theta)\tilde A^*(\theta) - I\big) + \tilde B(\theta)\tilde B^*(\theta) = 0$$
$$\sigma_m\big(\tilde A^*(\theta)\tilde A(\theta) - I\big) + \tilde C^*(\theta)\tilde C(\theta) = 0.$$
Thus $\bar\sigma\big[E_1\big] < 2\sigma_m$. Now consider
$$\tilde G_1 = \begin{bmatrix} 0 & 0 \\ 0 & D \end{bmatrix} + \begin{bmatrix} A_{21m}\Sigma_{2m}^{\frac12} \\ C_{1m} \end{bmatrix}(zI - A_{11m})^{-1}\begin{bmatrix} \Sigma_{2m}^{\frac12}A_{12m} & B_{1m} \end{bmatrix},$$
in which $A_{11m}$, $A_{21m}$, $A_{12m}$, $B_{1m}$, $C_{1m}$ and $\Sigma_{2m}$ come from the partitioning that is associated with the truncation of the states associated with $\sigma_m$.
Now truncate this realization of $\tilde G_1$, incurring an error bounded by $2\sigma_{m-1}$, and embed in a still larger system which is a balanced realization. Continue this process until the desired $r$th-order truncated model is sitting in the bottom-right-hand corner. The final bound follows from the triangle inequality, together with the fact that the infinity norm of a submatrix can never exceed the norm of the matrix it forms part of. (Each step $i$ incurs an augmented error that is less than $2\sigma_{m-i}$; the error we are actually interested in is the $(2,2)$-corner of the augmented error, which must also be less than $2\sigma_{m-i}$.)
This proof generalizes that given in [5], which is limited to the case when the $\sigma_i$ deleted are each of unit multiplicity.
When the state(s) associated with one $\sigma_i$ are truncated in discrete-time, the actual error is strictly smaller than $2\sigma_i$, whereas in the continuous-time case, equality holds. This effect is compounded as further states are removed. For this reason, the discrete-time algorithm offers superior performance (and the bound is correspondingly weaker) than its continuous-time counterpart.
Solution 9.7.
1. The dynamics are
$$\begin{bmatrix} \dot x_1 \\ \dot x_2 \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} B_1 \\ B_2 \end{bmatrix}u.$$
Replacing $\dot x_2$ with $\alpha x_2$ yields
$$x_2 = (\alpha I - A_{22})^{-1}(A_{21}x_1 + B_2u).$$
Replacing $x_2$ in the dynamical equation for $\dot x_1$ and in the output equation yields the GSPA approximant.
2. Use the Schur decomposition
$$\alpha I - A = \begin{bmatrix} I & -A_{12}(\alpha I - A_{22})^{-1} \\ 0 & I \end{bmatrix}\begin{bmatrix} \alpha I - \hat A & 0 \\ 0 & \alpha I - A_{22} \end{bmatrix}\begin{bmatrix} I & 0 \\ -(\alpha I - A_{22})^{-1}A_{21} & I \end{bmatrix},$$
in which $\hat A = A_{11} + A_{12}(\alpha I - A_{22})^{-1}A_{21}$.
3. Problem 4.10 gives a formula for the realization of a system obtained via an arbitrary linear fractional transformation of the complex variable. Substitution into this formula gives the result. Alternatively, let
$$F(z) = G\Big(\frac{\alpha(z-1)}{z+1}\Big) = \tilde D + \tilde C(zI - \tilde A)^{-1}\tilde B$$
denote the discrete-time equivalent transfer function matrix. Now re-arrange $\big(\frac{\alpha(z-1)}{z+1}I - A\big)^{-1}$ as follows:
$$\begin{aligned}
\Big(\frac{\alpha(z-1)}{z+1}I - A\Big)^{-1} &= (z+1)\big[\alpha(z-1)I - (z+1)A\big]^{-1} \\
&= (z+1)\big[z(\alpha I - A) - (\alpha I + A)\big]^{-1} \\
&= \big[zI - (\alpha I - A)^{-1}(\alpha I + A)\big]^{-1}(\alpha I - A)^{-1}(z+1).
\end{aligned}$$
(Note that $\alpha I - A$ is nonsingular for $\alpha > 0$, since $A$ has no eigenvalue in the closed-right-half plane.) Define $\tilde A = (\alpha I + A)(\alpha I - A)^{-1}$ and note that
$$(z+1)I = zI - \tilde A + 2\alpha(\alpha I - A)^{-1},$$
from which we see that
$$\Big(\frac{\alpha(z-1)}{z+1}I - A\Big)^{-1} = (\alpha I - A)^{-1} + 2\alpha(\alpha I - A)^{-1}(zI - \tilde A)^{-1}(\alpha I - A)^{-1}.$$
Therefore, we have that
$$\tilde A = (\alpha I + A)(\alpha I - A)^{-1} = (\alpha I - A)^{-1}(\alpha I + A)$$
$$\tilde B = \sqrt{2\alpha}\,(\alpha I - A)^{-1}B$$
$$\tilde C = \sqrt{2\alpha}\,C(\alpha I - A)^{-1}$$
$$\tilde D = D + C(\alpha I - A)^{-1}B$$
is the realization of $F(z)$.
We now show that this realization is a balanced discrete-time realization. Suppose $\Sigma$ is the controllability/observability gramian of the balanced realization $(A, B, C, D)$. Then
$$\begin{aligned}
\tilde A^*\Sigma\tilde A - \Sigma + \tilde C^*\tilde C &= (\alpha I - A^*)^{-1}\big[(\alpha I + A^*)\Sigma(\alpha I + A) - (\alpha I - A^*)\Sigma(\alpha I - A) \\
&\qquad + 2\alpha C^*C\big](\alpha I - A)^{-1} \\
&= 2\alpha(\alpha I - A^*)^{-1}\big[A^*\Sigma + \Sigma A + C^*C\big](\alpha I - A)^{-1} \\
&= 0.
\end{aligned}$$
Similarly, we have
$$\tilde A\Sigma\tilde A^* - \Sigma + \tilde B\tilde B^* = 0.$$
It remains to show that $\tilde A$ has all its eigenvalues inside the unit circle—there are several ways of doing this. One possibility is to suppose that $\tilde Ax = \lambda x$ and show that this implies $Ax = \frac{\alpha(\lambda-1)}{\lambda+1}x$. Since $A$ is asymptotically stable, we must have $\mathrm{Re}\,\frac{\alpha(\lambda-1)}{\lambda+1} < 0$ and we conclude from this that $|\lambda| < 1$. An alternative is to suppose that $\tilde Ax = \lambda x$ and use the discrete-time observability gramian equation to obtain
$$x^*\Sigma x(|\lambda|^2 - 1) + \|\tilde Cx\|^2 = 0.$$
Since $\Sigma > 0$, this shows that either: (a) $|\lambda| < 1$ or (b) $\tilde Cx = 0$. If case (b) holds, we see that $Ax = \frac{\alpha(\lambda-1)}{\lambda+1}x$ and $Cx = 0$, which implies $x = 0$ because $(A, C)$ is observable.
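The substitution $s = \alpha(z-1)/(z+1)$ and the transformed realization can be checked numerically. The sketch below (a minimal first-order illustration with arbitrarily chosen values $a = -1$, $b = 1$, $c = 3$, $d = 0.5$, $\alpha = 2$) builds $(\tilde A, \tilde B, \tilde C, \tilde D)$ from the formulas above and verifies that $F(z) = G(\alpha(z-1)/(z+1))$ at several complex points, and that the stable pole maps inside the unit circle:

```python
import cmath

# first-order example: g(s) = d + c*b/(s - a), with a < 0
a, b, c, d = -1.0, 1.0, 3.0, 0.5
alpha = 2.0

def g(s):
    return d + c * b / (s - a)

# discrete-time equivalent under s = alpha*(z-1)/(z+1), per the solution's formulas
At = (alpha + a) / (alpha - a)
Bt = (2 * alpha) ** 0.5 * b / (alpha - a)
Ct = (2 * alpha) ** 0.5 * c / (alpha - a)
Dt = d + c * b / (alpha - a)

def f(z):
    return Dt + Ct * Bt / (z - At)

pts = [1.5 + 0.7j, -0.3 + 2j, cmath.exp(1j * 0.4)]
errs = [abs(f(z) - g(alpha * (z - 1) / (z + 1))) for z in pts]
```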
4. Use the Schur decomposition
$$\alpha I - A = \begin{bmatrix} I & -A_{12}(\alpha I - A_{22})^{-1} \\ 0 & I \end{bmatrix}\begin{bmatrix} \alpha I - \hat A & 0 \\ 0 & \alpha I - A_{22} \end{bmatrix}\begin{bmatrix} I & 0 \\ -(\alpha I - A_{22})^{-1}A_{21} & I \end{bmatrix}$$
to obtain
$$(\alpha I - A)^{-1} = \begin{bmatrix} I & 0 \\ (\alpha I - A_{22})^{-1}A_{21} & I \end{bmatrix}\begin{bmatrix} (\alpha I - \hat A)^{-1} & 0 \\ 0 & (\alpha I - A_{22})^{-1} \end{bmatrix}\begin{bmatrix} I & A_{12}(\alpha I - A_{22})^{-1} \\ 0 & I \end{bmatrix}.$$
It is now easy to see that
$$\tilde A_{11} = (\alpha I + \hat A)(\alpha I - \hat A)^{-1}$$
$$\tilde B_1 = \sqrt{2\alpha}\,(\alpha I - \hat A)^{-1}\hat B$$
$$\tilde C_1 = \sqrt{2\alpha}\,\hat C(\alpha I - \hat A)^{-1}$$
$$\tilde D = \hat D + \hat C(\alpha I - \hat A)^{-1}\hat B,$$
which shows that $(\tilde A_{11}, \tilde B_1, \tilde C_1, \tilde D)$ is indeed the discrete-time equivalent of the GSPA approximant $(\hat A, \hat B, \hat C, \hat D)$.
5. For $\alpha = \infty$, the GSPA approximant is the balanced truncation approximant. For $\alpha = 0$, the GSPA approximant is the SPA approximant. Both these algorithms have already been shown to preserve stability and to satisfy the twice-the-sum-of-the-tail error bound. For the case $0 < \alpha < \infty$, we use the discrete-time balanced truncation results of Problem 9.6 and the facts proved in this problem. Since $\tilde A_{11}$ has all its eigenvalues inside the unit circle, $\hat A$ has all its eigenvalues in the open-left-half plane, and since the unit circle is mapped to the imaginary axis by $s = \frac{\alpha(z-1)}{z+1}$, the unit circle infinity norm bound for the discrete-time truncation provides an infinity norm bound for the GSPA algorithm.
As a final remark, we note that $\alpha = 0$ gives exact matching at steady-state ($s = 0$), while $\alpha = \infty$ gives exact matching at infinite frequency. For $\alpha$ between 0 and $\infty$, exact matching occurs at the point $s = \alpha$ on the positive real axis. Varying $\alpha$ can be used to adjust the emphasis the reduction procedure gives to high and low frequencies. A low value of $\alpha$ emphasizes low frequencies and a high value of $\alpha$ emphasizes high frequencies.
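The exact-matching property at $s = \alpha$ is easy to confirm numerically. The sketch below (a minimal two-state illustration; the system matrices and $\alpha = 3$ are arbitrary choices) forms the GSPA approximant by the substitution $\dot x_2 = \alpha x_2$ and checks that the reduced model agrees with the full model exactly at $s = \alpha$ but not elsewhere:

```python
# two-state example, reduced to one state by GSPA
A = [[-1.0, 0.5], [0.25, -2.0]]
B = [1.0, -1.0]
C = [2.0, 1.0]
D = 0.1
alpha = 3.0

def G(s):
    # C (sI - A)^{-1} B + D via the explicit 2x2 inverse
    a11, a12, a21, a22 = s - A[0][0], -A[0][1], -A[1][0], s - A[1][1]
    det = a11 * a22 - a12 * a21
    x1 = (a22 * B[0] - a12 * B[1]) / det
    x2 = (-a21 * B[0] + a11 * B[1]) / det
    return C[0] * x1 + C[1] * x2 + D

# GSPA: x2 = (alpha - A22)^{-1}(A21 x1 + B2 u)
k = 1.0 / (alpha - A[1][1])
Ab = A[0][0] + A[0][1] * k * A[1][0]
Bb = B[0] + A[0][1] * k * B[1]
Cb = C[0] + C[1] * k * A[1][0]
Db = D + C[1] * k * B[1]

def Ghat(s):
    return Db + Cb * Bb / (s - Ab)

err_alpha = abs(Ghat(alpha) - G(alpha))  # should vanish: exact matching at s = alpha
err_other = abs(Ghat(1.0) - G(1.0))      # generically nonzero away from s = alpha
```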
Solutions to Problems in Chapter 10
Solution 10.1.
1. Let $w \in \mathcal{H}_2^\perp$ and $z = Fw$. Then $z$ is analytic in $\mathrm{Re}(s) < 0$ and
$$\begin{aligned}
\|z\|_2^2 &= \frac{1}{2\pi}\int_{-\infty}^{\infty} z^*(j\omega)z(j\omega)\,d\omega \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty} w^*(j\omega)F^*(j\omega)F(j\omega)w(j\omega)\,d\omega \\
&\leq \sup_\omega\bar\sigma[F(j\omega)]^2\,\frac{1}{2\pi}\int_{-\infty}^{\infty} w^*(j\omega)w(j\omega)\,d\omega \\
&= \|F\|_\infty^2\|w\|_2^2.
\end{aligned}$$
Hence $z \in \mathcal{H}_2^\perp$.
By the Paley-Wiener theorem, $F\mathcal{L}_2(-\infty, 0] \subseteq \mathcal{L}_2(-\infty, 0]$, which, since $F$ is time-invariant, implies that $F$ is anticausal.
2. Suppose $G = G_+ + G_-$ with $G_+ \in \mathcal{RH}_\infty$ and $G_- \in \mathcal{RH}_\infty^\perp$ (conceptually, this may be done by considering partial fraction expansions of the entries of $G$, or alternatively we may use the state-space algorithm below). By the Paley-Wiener theorem, the inverse Laplace transform of $G_+$ is a function $G_+(t)$ that is zero for $t < 0$ and the inverse Laplace transform of $G_-$ is a function $G_-(t)$ that is zero for $t > 0$. Hence the system $G$, which is represented by the convolution
$$y(t) = \int_{-\infty}^{\infty} G(t-\tau)u(\tau)\,d\tau,$$

132 SOLUTIONS TO PROBLEMS IN CHAPTER 10

may be decomposed as
$$\begin{aligned}
y(t) &= \int_{-\infty}^{\infty} G_+(t-\tau)u(\tau)\,d\tau + \int_{-\infty}^{\infty} G_-(t-\tau)u(\tau)\,d\tau \\
&= \int_{-\infty}^{t} G_+(t-\tau)u(\tau)\,d\tau + \int_{t}^{\infty} G_-(t-\tau)u(\tau)\,d\tau,
\end{aligned}$$
which is a causal/anticausal decomposition.
The converse follows similarly by using the Paley-Wiener theorem.
3. Start with a realization $(A, B, C, D)$ of $G$. Produce, via an eigenvalue decomposition for example, a realization of the form
$$A = \begin{bmatrix} A_- & 0 & 0 \\ 0 & A_+ & 0 \\ 0 & 0 & A_0 \end{bmatrix}, \quad B = \begin{bmatrix} B_- \\ B_+ \\ B_0 \end{bmatrix}, \quad C = \begin{bmatrix} C_- & C_+ & C_0 \end{bmatrix},$$
in which $\mathrm{Re}[\lambda(A_-)] > 0$, $\mathrm{Re}[\lambda(A_+)] < 0$ and $\mathrm{Re}[\lambda(A_0)] = 0$. The assumption $G \in \mathcal{RL}_\infty$ implies that $C_0(sI - A_0)^{-1}B_0 = 0$. Therefore, $(A_+, B_+, C_+, D)$ is a realization for the stable or causal part $G_+$ and $(A_-, B_-, C_-, 0)$ is a realization for the antistable or anticausal part $G_-$.
Solution 10.2. Suciency follows from Theorem 3.2.1, or by direct calculation
as follows:
G

G = D

D +D

C(sI A)
1
B +B

(sI A

)
1
C

D
+B

(sI A

)
1
C

C(sI A)
1
B
= I +B

(sI A

)
1
_
Q(sI A) (sI A

)Q+C

C
_
(sI A)
1
B
= I.
Conversely, suppose G

G = I. Since G is square, this implies that G

= G
1
.
Now G

has realization
G

s
=
_
A

_
,
which is easily seen to be minimal. Also,
G
1
s
=
_
ABD
1
C BD
1
D
1
C D
1
_
,
which is also minimal. From G

= G
1
and the uniqueness of minimal realizations,
we conclude that D

= D
1
and that there exists a nonsingular matrix Q such that
B

Q = D
1
C
Q
1
A

Q = ABD
1
C.
Elementary manipulations now yield the desired equations.
Let P = Q
1
and note that
AP +PA

+BB

= 0.
OPTIMAL MODEL REDUCTION 133
Therefore P is the controllability gramian of G. If G is stable, the Hankel singular
values of G are given by
i
(PQ) = 1 for all i. This means that approximating
a stable allpass system is a fruitless exerciseone can never do better than ap-
proximating it by the zero system, since this gives an innity norm error of one.
Solution 10.3.
E
a
s
=
_
A
e
B
e
C
e
D
e
_
V
a
s
=
1

r+1
_
A

M
B

a
0
_
W
a
s
=
1

r+1
_
A M
C
a
0
_
,
in which
M =
_
0
I
l
_
with l being the multiplicity of
r+1
.
E

a
W
a
s
=
1

r+1
_
A

e
C

e
B

e
D

e
_ _
A M
C
a
0
_
s
=
1

r+1
_
_
A

e
C

e
C
a
0
0 A M
B

e
D

e
C
a
0
_
_
. (10.1)
Now from (10.3.4) and (10.3.5), we have
A

e
_
Q
N

_
+
_
Q
N

_
A+C

e
C
a
= 0
D

e
C
a
+B

e
_
Q
N

_
= 0
in which N

=
_
E

1
0

. Also, from (10.4.2),


_
Q
N

_
M =
r+1
_
M
0
_
.
Applying the state transformation
_
_
I
_
Q
N

_
0 I
_
_
134 SOLUTIONS TO PROBLEMS IN CHAPTER 10
to the realization in (10.1), we see that
E

a
W
a
s
=

r+1
_
_
A

e
_
M
0
_
B
e
0
_
_
s
=

r+1
_
A

M
B

a
0
_
=

r+1
B

a
(sI +A

)
1
M
=
r+1
V
a
(s)
The dual identity E
a
(s)V
a
(s) =
r+1
W
a
(s) follows using as similar approach.
Solution 10.4. Let q !1

(r) satisfy |g q|

=
r+1
. (Such a q ex-
ists by Theorem 10.4.2.) By Lemma 10.4.3 we have that (g q)(s)v
r+1
(s) =

r+1
w
r+1
(s). Hence
q(s) = g(s)
r+1
w
r+1
(s)
v
r+1
(s)
.
Solution 10.5.
1. See Theorem A.4.4; by Lemma A.4.5, the T
ij
s have realization
_
T
11
T
12
T
21
0
_
s
=
_

_
AB
2
F HC
2
H B
2
0 AHC
2
B
1
H 0
C
1
F C
1
0 I
0 C
2
I 0
_

_
.
2. The assumption that A B
2
C
1
and A B
1
C
2
have no eigenvalues on the
imaginary axis is necessary and sucient for the existence of stabilizing so-
lutions X and Y to the given Riccati equations (see Chapter 5). That is,
A B
2
C
1
B
2
B

2
X = A B
2
F and A B
1
C
2
Y C

2
C
2
= A HC
2
are
asymptotically stable. Now T
12
has realization
T
12
s
=
_
AB
2
F B
2
C
1
F I
_
.
Obviously, T
12
is square. Elementary algebra shows that
X(AB
2
F) + (AB
2
F)

X + (C
1
F)

(C
1
F) = 0
(C
1
F) +B

2
X = 0
and we conclude that T

12
T
12
= I from Theorem 3.2.1. The assertion that
T
21
is square and allpass may be established similarly.
OPTIMAL MODEL REDUCTION 135
Since T
12
and T
21
are square and allpass, we have
|T

(P, K)|

= |T
11
+T
12
QT
21
|

= |T

12
(T
11
+T
12
QT
21
)T

21
|

= |T

12
T
11
T

21
+Q|

= |R+Q|

.
3. Note that
|R+Q|

= |R

+Q

= |(R

)
+
+ ((R

+Q

)|

Now (R

)
+
!1

and ((R

+Q

) is an arbitrary element of !1

, so
by Neharis theorem the inmal norm is indeed the Hankel norm of (R

)
+
.
We now compute a realization of R. Since
T
11
s
=
_
_
AB
2
F HC
2
H
0 AHC
2
B
1
H
C
1
F C
1
0
_
_
,
we have
T
11
T

21
s
=
_
_
AB
2
F HC
2
H
0 AHC
2
B
1
H
C
1
F C
1
0
_
_
_
(AHC
2
)

2
(B
1
H)

I
_
s
=
_

_
AB
2
F HC
2
H(B
1
H)

H
0 AHC
2
(B
1
H)(B
1
H)

B
1
H
0 0 (AHC
2
)

2
C
1
F C
1
0 0
_

_
s
=
_

_
AB
2
F HC
2
0 H
0 AHC
2
0 0
0 0 (AHC
2
)

2
C
1
F C
1
C
1
Y 0
_

_
s
=
_
_
AB
2
F 0 H
0 (AHC
2
)

2
C
1
F C
1
Y 0
_
_
.
In the above calculation, we made use of the state transformation
_
_
I 0 0
0 I Y
0 0 I
_
_
,
136 SOLUTIONS TO PROBLEMS IN CHAPTER 10
the identity B
1
H+Y C

2
= 0 and the identity (AHC
2
)Y +Y (AHC
2
)

+
(B
1
H)(B
1
H)

= 0.
R = T

12
T
12
T

21
s
=
_
(AB
2
F)

(C
1
F)

2
I
_
_
_
AB
2
F 0 H
0 (AHC
2
)

2
C
1
F C
1
Y 0
_
_
s
=
_

_
(AB
2
F)

(C
1
F)

(C
1
F) (C
1
F)

C
1
Y 0
0 AB
2
F 0 H
0 0 (AHC
2
)

2
B

2
C
1
F C
1
Y 0
_

_
s
=
_

_
(AB
2
F)

0 (C
1
F)

C
1
Y XH
0 AB
2
F 0 H
0 0 (AHC
2
)

2
B

2
0 C
1
Y 0
_

_
s
=
_
_
(AB
2
F)

(C
1
F)

C
1
Y XH
0 (AHC
2
)

2
B

2
C
1
Y 0
_
_
.
Thus
R

s
=
_
_
AB
2
F 0 B
2
Y C

1
(C
1
F) AHC
2
Y C

1
H

X C
2
0
_
_
.
Since A B
2
F and A HC
2
are asymptotically stable, R

!1

, so we
may take (R

)
+
= R

.
4. Use Theorem 10.4.6 to obtain all optimal Nehari extensions of R

. This will
have the linear fractional form Q

= T

(Q

a
, U

), in which Q

a
constructed
from R

following the steps in Section 10.4.1 and U

!1

, |U|


|R

|
1
H
. Thus Q = T

(Q
a
, U), in which U !1

and |U|

|R

|
1
H
.
From the parametrization of all stabilizing controllers, we have
K = T

(K
s
, Q)
= T

(K
s
, T

(Q
a
, U))
= T

((

(K
s
, Q
a
), U)
= T

(K
a
, U)
in which K
a
= (

(K
s
, Q
a
) is the composition of the two linear fractional
transformations K
s
and Q
a
and U !1

with |U|

|R

|
1
H
.
OPTIMAL MODEL REDUCTION 137
Solution 10.6.
1. If

G !1

has (strictly) fewer poles that G, then


|G

G|

= |G


min
[G

] =
0
.
Conversely, if
0
, let

G

be the optimal Hankel norm approximant to G

of degree n l, in which n is the degree of G

and l is the multiplicity of

min
[G

]. Then

G

!1

, so

G !1

is of strictly lower degree than


G and |G

G|

=
0
.
Now suppose that >
0
, let

G be the optimal Hankel norm approximant as
discussed above and consider the family of plants
G+ =

G+
M
(s 1)
l
.
Then
||

= |

GG+
M
(s 1)
l
|

GG|

+|
M
(s 1)
l
|


0
+|M|
< .
Furthermore, for and > 0,

G+
M
(s1)
l
has n l + l = n unstable poles, the
same number as G.
Suppose, to obtain a contradiction, that K stabilizes the familily of plants
G + as given above. For ,= 0, G + has n unstable poles. Since K
stabilizes G+, the Nyquist diagram of det
_
I (G+)K(s)
_
must make
n +k encirclements of the origin as s traverses the Nyquist contour, in which
k is the number of right-half-plane poles of K. Since K must also stabilize

G, the Nyquist diagram of det


_
I

GK(s)
_
must make nl +k encirclements
of the origin as s traverses the Nyquist contour, since

G has only n l right-
half-plane poles. But the Nyquist diagram of det
_
I GK(s)
_
is a continuous
function of , and therefore there exists a
0
such that the Nyquist diagram
of det
_
I (

G+

0
M
(s1)
l
)K(s)
_
crosses the origin. This plant is therefore not
stabilized by K. We conclude that no controller can stabilize the the familily
of plants G+ as given above.
2. Write G = G
+
+G

. Then
K(I GK)
1
=

K(I +G
+

K)
1
[I (G
+
+G

K(I +G
+

K)
1
]
1
=

K[I +G
+

K (G
+
+G

K]
1
=

K(I G

K)
1
.
138 SOLUTIONS TO PROBLEMS IN CHAPTER 10
Note that
_
I K
G I
_
=
_
I 0
G
+
I
_ _
I

K
G

I
_ _
I 0
0 I +G
+

K
_
1
,
and that I +G
+

K = (I G
+
K)
1
.
Suppose

K stabilizes G

. Dene K =

K(I +G
+

K)
1
and note that
_
I K
G I
_
1
=
_
I 0
0 I +G
+

K
_ _
I

K
G

I
_
1
_
I 0
G
+
I
_
The last two matrices on the right-hand side of this identity are in 1

, since
G
+
1

and

K stabilizes G

. The only poles that I +G


+

K can have in the


right-half plane are the right-half-plane poles of

K, which must be cancelled
by the zeros of
_
I

K
G

I
_
1
. Hence the matrix on the left-hand side is
in !1

, which means that K stabilizes G. Conversely, suppose K stabilizes


G. Dene

K = K(I G
+
K)
1
and note that
_
I

K
G

I
_
1
=
_
I 0
0 I G
+
K
_ _
I K
G I
_
1
_
I 0
G
+
I
_
.
Arguing as before, we conclude that

K stabilizes G

.
Hence inf
K
|K(I GK
1
|

= inf

K
|

K(I G

K)
1
|

= 1/
min
[G

].
Thus the optimal (maximum) stability margin for additive uncertainties is

min
[G

], which means that the easier the unstable part is to approximate,


the harder it is to robustly stablize. This counter-intuitive result stems from
the requirement of the additive robustness theorem that all the (G + )s
must have the same number of unstable poles. If the unstable part is easy
to approximate, there is a system that is close to G that has fewer unstable
poles. Thus, we cannot allow very large ||

.
3. Comparing K(I GK)
1
with T

(P, K) = P
11
+P
12
K(I P
22
K)
1
P
21
,
we see that P
11
= 0, P
12
= I, P
21
= I and P
22
= G. Hence we need
_
P
11
P
12
P
21
P
22
_
=
_
0 I
I G
_
s
=
_
_
A 0 B
0 0 I
C I 0
_
_
.
We can assume that G() = 0, without loss of generality; the system G can
be replaced by GG() by using the controller

K = K(I G()K)
1
.
OPTIMAL MODEL REDUCTION 139
Before beginning the detailed analysis, we present a brief plausibility argu-
ment. If

K is invertible,

K(I G

K)
1
= (

K
1
G

)
1
.
Now let (

)
1
be an (n l)th order Hankel approximation of G

, so that

K
1
G

is allpass, completely unstable and with |

K
1
G

=
n
(G

).
This means that

K(I G

K)
1
will be stable and |

K(I G

K)
1
|

=
1

n
(G

)
.
Assume, without loss of generality, that
A =
_
A
+
0
0 A

_
, B =
_
B
+
B

_
, C =
_
C
+
C

,
in which R
e
[(A
+
)] < 0 and R
e
[(A

)] > 0. Thus G
+
= C
+
(sI A
+
)
1
B
+
and G

= C

(sI A

)
1
B

. Hence G

= B

(sI + A

)
1
C

and the
controllability and observability gramians of this realization of G

satisfy
A

+C

= 0
P

+B

= 0.
Thus the Hankel singular values of G

are equal to
_

i
(Q

).
According to the solution of Problem 10.5, we need to nd R

. To do this,
we need to nd the stabilizing solutions to the Riccati equations
XA+A

X XBB

X = 0
AY +Y A

Y C

CY = 0,
which exist provided A has no eigenvalue on the imaginary axis. Assume now,
without loss of generality, that (A

, B

, C

) is minimal, so Q

and P

are
nonsingular. Then A

P
1

= P

P
1

, which is asymptotically
stable, and A

Q
1

= Q
1

, which is also asymptotically


stable. Therefore, the stabilizing solutions to the Riccati equations are
X =
_
0 0
0 P
1

_
; Y =
_
0 0
0 Q
1

_
The system R

is therefore given by
R

s
=
_
_
ABF 0 B
0 AHC 0
H

X C 0
_
_
s
=
_
ABF B
H

X 0
_
140 SOLUTIONS TO PROBLEMS IN CHAPTER 10
in which F = B

X and H = Y C

. Noting that
H

X = CY X =
_
0 C

Q
1

P
1

,
we eliminate the unobservable modes to obtain the realization
R

s
=
_
A

P
1

Q
1

P
1

0
_
s
=
_
P

P
1

Q
1

P
1

0
_
s
=
_
A

P
1

Q
1

0
_
.
The controllability and observability gramians of this realization are easily
seen to be P
1

and Q
1

respectively. Hence
|R

|
H
=
_

max
(P
1

Q
1

)
=
1
_

min
(Q

)
=
1

min
(G

)
.
Solution 10.7.
1. If the future input is zero, i.e., u
k
= 0 for k 1, then the future output is
y
n
=
0

k=
h
nk
u
k
, n = 1, 2 . . .
=

m=0
h
n+m
u
m
, n = 1, 2 . . .
=

m=1
h
n+m1
v
m
, n = 1, 2 . . .
in which v
m
= u
1m
is the reection of the past input. This may be written
as the semi-innite vector equation
_

_
y
1
y
2
y
3
.
.
.
_

_
=
_

_
h
1
h
2
h
3
. . .
h
2
h
3
h
4
. . .
h
3
h
4
h
5
. . .
.
.
.
.
.
.
.
.
.
.
.
.
_

_
_

_
v
1
v
2
v
3
.
.
.
_

_
.
OPTIMAL MODEL REDUCTION 141
2. Note that
1
2
_
2
0
j( )e
jk
d =
_
( )e
jk
2k
_
2
0

_
2
0
1
2k
e
jk
d
=
1
k
.
Therefore, by Neharis theorem, |
H
| sup
[0,2)
[j( )[ = . Hilberts
inequality is immediate, noting that v

H
u |
H
||v|
2
|u|
2
|v|
2
|u|
2
.
3. This is a direct application of Neharis theorem. The idea is that e
N
=
z
N1
(f

f) is an extension of the tail function t


N
.
Solution 10.8. That O( = is obvious. To nd the Hankel singular values, note
that
(

) = ((

O()
= (((

O)
= (PQ).
Alternatively, consider the equations v
i
=
i
w
i
and

w
i
=
i
v
i
. If v = (

P
1
x
0
,
then v = O((

P
1
x
0
= Ox
0
. And if w = Ox
0
, then

w = (

Ox
0
= (

Qx
0
.
Therefore

v =
2
w if Qx
0
=
2
P
1
x
0
, from which we conclude that
2
must
be an eigenvalue of PQ.
Solution 10.9.
1. Choosing r = n 1 and =
n
in the construction of the optimal allpass
embedding yields a Q
a
!1

of McMillan degree n l, since no Hankel


singular value of G is strictly smaller than
n
and exactly n l are strictly
larger than
n
. Now |G
a
Q
a
|

=
n
, so set

G equal to the (1,1)-block
of Q
a
. (We may also choose

G by using any constant U in Theorem 10.4.5.
Hence |G

G|


n
, and in fact equality must hold because
n
is a lower
bound on the innity norm error incurred in approximating G by a stable
system of degree less than or equal to n 1.
2. Suppose, without loss of generality, that G is given by a balanced realization
(A, B, C, D) partitioned as
=
_

1
0
0
n
I
l
_
142 SOLUTIONS TO PROBLEMS IN CHAPTER 10
with
A =
_
A
11
A
12
A
21
A
22
_
, B =
_
B
1
B
2
_
C =
_
C
1
C
2

.
Then

G, the (1, 1)-block of Q
a
, is given by

G
a11
=

D +

C(sI

A)
1

B
in which

A = E
1
1
(
2
r+1
A

11
+
1
A
11

1
+
n
C

1
UB

1
)

B = E
1
1
(
1
B
1

r+1
UC
1
)

C = C
1

1
B

r+1
U

D = D +
n
U
in which UU

I and B
2
= C

2
U. Direct calculations show that the control-
lability gramian of this realization is
1
E
1
1
and the observability gramian of
this realization is
1
E
1
and we conclude that the Hankel singular values of

G
are the diagonal entries of
1
.
3. Iteration and the triangle inequality show the result.
4. The optimal allpass embedding is constructed so that
1
n
(G
a
Q
a
) is allpass.
Deleting the states corresponding to
n
from G
a
gives

G
a
and an examination
of the equations (10.3.4) to (10.3.6) and (10.4.3) shows that corresponding
equations for the truncated error system can be obtained merely by truncating
the rows and columns associated with =
n
from P
e
and Q
e
in (10.4.3), the
product of which remains
n
I. We conclude that
1
n
(

G
a
Q
a
) is allpass by
invoking Theorem 3.2.1.
To conclude the error bound, we note that
|G
a


G
a
|

|G
a
Q
a
| +|Q
a


G
a
|

=
n
+
n
= 2
n
.
Taking the (1, 1)-block, we obtain the desired inequality |G

G|

2
n
.
Solutions to Problems in
Chapter 11
Solution 11.1. Consider
_
A B

WW
1
_
C
D
_
= 0
where
W =
_
I 0
B
1
A I
_
.
This gives
_
0 B

_
C
D +B
1
AC
_
= 0.
Since B is nonsingular, D + B
1
AC = 0. Because C is square and of full column
rank it must be nonsingular.
Solution 11.2. By direct computation
X

X =
_
X

11
X

21
X

12
X

22
_ _
X
11
X
12
X
21
X
22
_
=
_
X

11
X
11
+X

21
X
21
X

11
X
12
+X

21
X
22
X

12
X
11
+X

22
X
21
X

12
X
12
+X

22
X
22
_
,
therefore
trace(X

X) = trace(X

11
X
11
) + trace(X

21
X
21
) + trace(X

12
X
12
) + trace(X

22
X
22
).
Since each term is nonnegative,
inf
X
22
_
_
_
_
_
X
11
X
12
X
21
X
22
__
_
_
_
2
=
_
_
_
_
_
X
11
X
12
X
21
0
__
_
_
_
2
= trace(X

11
X
11
) + trace(X

21
X
21
) + trace(X

12
X
12
).
143
144 SOLUTIONS TO PROBLEMS IN CHAPTER 11
Solution 11.3. Substituting for

A and

B
aa
gives
=

AY (Z
1
)

+Z
1
Y

A

+

B
aa

aa
= (A

Z
1
(Y B
aa
+C

aa
D
aa
)B

aa
)Y (Z
1
)

+Z
1
Y (AB
aa
(B

aa
Y +D

aa
C
aa
)(Z
1
)

)
+Z
1
(Y B
aa
+C

aa
D
aa
)(B

aa
Y +D

aa
C
aa
)(Z
1
)

= A

Y (Z
1
)

Z
1
Y AZ
1
Y B
aa
B

aa
Y (Z
1
)

+Z
1
C

aa
C
aa
(Z
1
)

= Z
1
_
ZA

Y Y AZ

Y B
aa
B

aa
Y +C

aa
C
aa
_
(Z
1
)

= Z
1
_
(Y X I)A

Y Y A(XY I) +Y (AX +XA

)Y A

Y Y A
_
Z
1
= 0
Solution 11.4. From
_

_
D
11
D
12
D
13
0
D
21
D
22
D
23
D
24
D
31
D
32
D
33
D
34
0 D
42
D
43
0
_

_
_

_
D

11
D

21
D

31
0
D

12
D

22
D

32
D

42
D

13
D

23
D

33
D

43
0 D

24
D

34
0
_

_
= I
it follows that
D
42
D

12
+D
43
D

13
= 0
D
1
42
D
43
= D

12
(D
1
13
)

.
In the same way, we may conclude from D

aa
D
aa
= I, that
D

21
D
24
+D

31
D
34
= 0
D
34
D
1
24
= (D
1
31
)

21
.
Solution 11.5. From the (1, 2)-partition of
0 =
_
A

0
0

A

_ _
Y Z
Z

XZ
_
+
_
Y Z
Z

XZ
_ _
A 0
0

A
_
+
_
C

aa

aa
_
_
C
aa

C
aa
_
,
we conclude that
0 = A

Z +Z

A+C

aa

C
aa


A = Z
1
(A

Z +C

aa

C
aa
)
THE FOUR-BLOCK PROBLEM 145
which establishes the rst part.
Substituting for C
aa
,

C
aa
and

B
4
gives

A

B
4
D
1
24

C
2
= Z
1
_
A

Z +C

C
2
+C

C
3
(C

2
D
24
+C

3
D
34
)D
1
24

C
2
_
= Z
1
_
A

Z +C

C
3
C

3
D
34
D
1
24

C
2
_
= Z
1
_
A

Z +C

3
(

C
3
+ (D
1
31
)

21

C
2
)
_
.
This proves the second part.
By direct calculation

C
3
+ (D
1
31
)

21

C
2
= (D
1
31
)

_
D

21
D

31

_

C
2

C
3
_
= (D
1
31
)

_
I 0 0 0

aa

C
aa
= (D
1
31
)

_
I 0 0 0

aa
(C
aa
X +D
aa
B

aa
)
= (D
1
31
)

(
_
I 0 0 0

(D

aa
C
aa
X +B

1
)
= (D
1
31
)

([D

11
C
1
+D

21
C
2
]X +D

31
C
3
X +B

1
)
= (D
1
31
)

(B

1
Y X +B

1
)
= (D
1
31
)

1
Z
which proves part three.
From parts one, two and three we have

A

B
4
D
1
24

C
2
= Z
1
_
A

Z +C

3
(

C
3
+ (D
1
31
)

21

C
2
)
_
= Z
1
_
A

Z C

3
(D
1
31
)

1
Z
_
= Z
1
_
AB
1
D
1
31
C
3
_

Z
as required.
Solution 11.6. Let us suppose that the generator of all controllers given in
(8.3.11) is described by
x = A
k
x +B
k1
y +B
k2
r
u = C
k1
x +r
s = C
k2
x +y.
Multiplying the rst equation on the left by Y

1
(I
2
Y

) and replacing x
with X
1
q gives
Y

1
(I
2
Y

)X
1
q = Y

1
(I
2
Y

)A
k
X
1
q
+Y

1
(I
2
Y

)[B
k1
y +B
k2
r]
u = C
k1
X
1
q +r
s = C
k2
X
1
q +y.
146 SOLUTIONS TO PROBLEMS IN CHAPTER 11
Since Y

= (Y

1
)
1
Y

2
and X

= X
2
X
1
1
, we get
E
k
= Y

1
X
1

2
Y

2
X
2
.
In the same way we get
C
k1
= F

X
1
= (D

12
C
1
X
1
+B

2
X
2
)
and
C
k2
= (C
2
+
2
D
21
B

1
X

)X
1
= (C
2
X
1
+
2
D
21
B

1
X
2
).
The formulae for the B
ki
s are just a little more complicated, but direct calculation
gives
B
k1
= Y

1
(I
2
Y

)B
1
D

21
+Y

1
Y

(C

2
+
2
X

B
1
D

21
)
= Y

1
B
1
D

21
+Y

2
C

2
and
B
k2
= Y

1
(I
2
Y

)(B
2
+
2
Z

)
= Y

1
(I
2
Y

)B
2
+
2
Y

1
Y

(C

1
D
12
+X

B
2
)
= Y

1
B
2
+
2
Y

2
C

1
D
12
.
Finally, the equation for A
k
comes from expanding
Y

1
(I
2
Y

)
_
A+
2
B
1
B

1
X

B
2
F

(B
1
D

21
+Z

2z
)(C
2
+
2
D
21
B

1
X

)
_
X
1
,
in which
C
2z
= C
2
+
2
D
21
B

1
X

= D

12
C
1
+B

2
X

.
This gives
A
k
= (Y

1
X
1

2
Y

2
X
2
)X
1
1
_
AX
1
+
2
B
1
B

1
X
2
B
2
(D

12
C
1
X
1
+B

2
X
2
)
_

_
Y

1
(I
2
Y

)B
1
D

21
+Y

2
(C

2
+
2
X

B
1
D
21
)
_
(C
2
X
1
+
2
D
21
B

1
X
2
).
The rst Hamiltonian expression in Theorem 11.5.1 gives
(AB
2
D

12
C
1
)X
1
(B
2
B

2
B
1
B

1
)X
2
= X
1
T
X
.
THE FOUR-BLOCK PROBLEM 147
Substituting and re-arranging gives
A
k
= (Y

1
X
1

2
Y

2
X
2
)T
X
(Y

1
B
1
D

21
+Y

2
C

2
)(C
2
X
1
+
2
D
21
B

1
X
2
)
= E
k
T
X
+B
k1
C
k2
.
Solution 11.7.
1. Since (I GK)
1
= I +GK(I GK)
1
, we get
T

(P, K) =
_
GK(I GK)
1
(I GK)
1
_
=
_
0
I
_
+
_
G
G
_
K(I GK)
1
I
=
_
GK(I GK)
1
(I GK)
1
_
,
since I +GK(I GK)
1
= (I GK)
1
. That P has the realization given
is straight forward.
2. Replacing u with (

2D)
1
u gives

P
s
=
_

_
A 0 B(

2D)
1
_
C
C
_ _
0
I
_ _
(

2)
1
I
(

2)
1
I
_
C I (

2)
1
I
_

_
.
Since
_
(

2)
1
I (

2)
1
I

_
(

2)
1
I
(

2)
1
I
_
= I,
D
12
has been orthogonalized as required.
3. It is easy to check that
(

2)
1
_
I I
I I
_
is orthogonal.
4. Direct substitution into the LQG Riccati equations given in Chapter 5 yields
0 = (ABD
1
C)

X +X(ABD
1
C)
1
2
XB(D

D)
1
B

X
and
AY +Y A

Y C

CY.
148 SOLUTIONS TO PROBLEMS IN CHAPTER 11
5. There are at least two ways of deriving the Riccati equations we require.
One method uses the loop-shifting transformations given in Chapter 4, while
an alternative technique employs the following 1

Riccati equations which


where derived to deal with the case that D
11
,= 0. They are:
0 = (A+ (B
1
D

11

D
12

12

2
B
2
D

12
)(
2
I D
11
D

11

D
12

12
)
1
C
1
)

+X

(A+ (B
1
D

11

D
12

12

2
B
2
D

12
)(
2
I D
11
D

11

D
12

12
)
1
C
1
)
X

(B
2
B

2
(B
1
B
2
D

12
D
11
)R
1
(B
1
B
2
D

12
D
11
)

)X

+C

D
12
(I
2

12
D
11
D

11

D
12
)
1

12
C
1
,
in which
R = (
2
I D

11

D
12

12
D
11
)
and
0 = (A+B
1
(
2
I

D

21

D
21
D

11
D
11
)
1
(

21

D
21
D

11
C
1

2
D

21
C
2
))Y

+Y

(A+B
1
(
2
I

D

21

D
21
D

11
D
11
)
1
(

21

D
21
D

11
C
1

2
D

21
C
2
))

(C

2
C
2
(C
1
D
11
D

21
C
2
)

R
1
(C
1
D
11
D

21
C
2
))Y

+B
1

21
(I
2

D
21
D

11
D
11

21
)
1

D
21
B

1
,
in which

R = (
2
I D
11

21

D
21
D

11
).
Evaluating the linear term in the rst equation gives
(A

2
2
BD
1
_
I I

_

2
I 0

2
2
(
2

1
2
)
1
I (
2

1
2
)
1
I
_ _
C
C
_
)X

= (ABD
1
_
I I

_
1
2
I
1
2
I
_
C)X

.
The constant term is zero because D

C
1
= 0. The coecient of the quadratic
term is given by
1
2
B(D

D)
1
B

(1
1
2
(
2

1
2
)
1
)
=
1
2
B(D

D)
1
B

2
1

1
2
.
Combining these yields
0 = (ABD
1
C)

+X

(ABD
1
C)

2
1
2
2
1
X

B(D

D)
1
B

.
THE FOUR-BLOCK PROBLEM 149
Turning to the second equation we see that the constant term is zero since
B
1
= 0. The linear terms are given by AY

+ Y

. The coecient of the


quadratic term is
C

C
2
__
C
C
_

_
0
I
_
C
_

__
C
C
_

_
0
I
_
C
_
= (1
2
)C

C.
Collecting terms then gives
AY

+Y

(1
2
)Y

CY

.
6. By referring to the LQG equations, it is east to check that
X

1
2

2
1
X and Y

= (1
2
)
1
Y
solve the 1

Riccati equations. Check that they are the stabilizing solutions.


7. If G is stable, Y = 0 and X

2
X
1
is nonnegative when 1. When G
is minimum phase X = 0 and Y

2
Y
1
is nonnegative when 1 (these
conditions come from Theorem 11.5.1). If G is stable and nonminimum phase
we see that Y = 0, X ,= 0,
opt
= 1 and
X

= lim
1

1
2

2
1
X
which is unbounded. A parallel argument may be used for Y

.
8. If G is stable and minimum phase, G
1
is a stabilizing controller because
no right-half-plane cancellations occur between the plant and controller when
forming GK and the resulting transfer function is constant. Next, we see
that
_
GK(I GK)
1
(I GK)
1
_
=
_
(1 +)
1
(1 +)
1
_
,
which gives
_
_
_
_
_
GK(I GK)
1
(I GK)
1
__
_
_
_

_

1 +
_
2
+
_
1
1 +
_
2
=
_

2
+ 1
1 +
.
Now
d
d
_
_
_
_
_
GK(I GK)
1
(I GK)
1
__
_
_
_

=
1
(1 +)
2
_

2
+ 1
150 SOLUTIONS TO PROBLEMS IN CHAPTER 11
which vanishes at = 1. This means that
_
_
_
_
_
GK(I GK)
1
(I GK)
1
__
_
_
_

has a minimum at = 1. Thus


inf
K
_
_
_
_
_
GK(I GK)
1
(I GK)
1
__
_
_
_

=
_
_

2
+ 1
1 +
_

=1
=
1

2
.
Solution 11.8. It follows from (10.2.9) that
\[
\Bigl\|\begin{bmatrix}D\\ N\end{bmatrix}^{\sim}\begin{bmatrix}D\\ N\end{bmatrix}\Bigr\|_{\infty}=1.
\]
Now suppose that $\bigl\|\begin{bmatrix}D\\ N\end{bmatrix}\bigr\|_{H}=1$. In this case there exist $g\in\mathcal{H}_{2}$ and $f\in\mathcal{H}_{2}^{\perp}$ such that
\[
\begin{bmatrix}D(s)\\ N(s)\end{bmatrix}g(s)=f(s).
\]
Since $D$, $N\in\mathcal{RH}_{\infty}$ are coprime, there exist $U$, $V\in\mathcal{RH}_{\infty}$ such that
\[
V(s)D(s)+U(s)N(s)=I.
\]
This gives
\[
g(s)=\begin{bmatrix}V(s)&U(s)\end{bmatrix}f(s)\notin\mathcal{H}_{2},
\]
which is a contradiction. We therefore conclude that
\[
\Bigl\|\begin{bmatrix}D\\ N\end{bmatrix}\Bigr\|_{H}<1.
\]
Solution 11.9. We follow the construction given in Section 11.2.

Step 1: Construct $D_{13}$ such that
\[
\begin{bmatrix}D_{12}&D_{13}\end{bmatrix}
\begin{bmatrix}D_{12}^{*}\\ D_{13}^{*}\end{bmatrix}=I.
\]
Step 2: Construct $D_{31}$ such that
\[
\begin{bmatrix}D_{21}^{*}&D_{31}^{*}\end{bmatrix}
\begin{bmatrix}D_{21}\\ D_{31}\end{bmatrix}=I.
\]
Step 3: Find $\begin{bmatrix}D_{42}&D_{43}\end{bmatrix}$ such that
\[
\begin{bmatrix}D_{12}&D_{13}\end{bmatrix}
\begin{bmatrix}D_{42}^{*}\\ D_{43}^{*}\end{bmatrix}=0.
\]
Step 4: Find $\begin{bmatrix}D_{24}^{*}&D_{34}^{*}\end{bmatrix}$ such that
\[
\begin{bmatrix}D_{24}^{*}&D_{34}^{*}\end{bmatrix}
\begin{bmatrix}D_{21}\\ D_{31}\end{bmatrix}=0.
\]
Step 5: We can now complete the construction with
\[
\begin{bmatrix}D_{22}&D_{23}\\ D_{32}&D_{33}\end{bmatrix}
=\begin{bmatrix}D_{21}^{*}&D_{31}^{*}\\ D_{24}^{*}&D_{34}^{*}\end{bmatrix}^{-1}
\begin{bmatrix}D_{11}^{*}&0\\ 0&0\end{bmatrix}
\begin{bmatrix}D_{12}&D_{13}\\ D_{42}&D_{43}\end{bmatrix}
=\begin{bmatrix}0&0\\ 0&0\end{bmatrix},
\]
since $D_{11}=0$.
Solution 11.10. The first expression comes from the (2,2)-partition of
\[
0=\begin{bmatrix}A_{1}^{*}&0&0\\ 0&A_{2}^{*}&0\\ 0&0&A^{*}\end{bmatrix}
\begin{bmatrix}Y_{11}&0&Y_{13}\\ 0&Y_{22}&0\\ Y_{13}^{*}&0&\sigma_{r+1}^{2}Y_{33}\end{bmatrix}
+\begin{bmatrix}Y_{11}&0&Y_{13}\\ 0&Y_{22}&0\\ Y_{13}^{*}&0&\sigma_{r+1}^{2}Y_{33}\end{bmatrix}
\begin{bmatrix}A_{1}&0&0\\ 0&A_{2}&0\\ 0&0&A\end{bmatrix}
+\begin{bmatrix}0&C_{1}^{*}&C_{31}^{*}\\ C_{2}^{*}&0&0\\ 0&C^{*}&C_{33}^{*}\end{bmatrix}
\begin{bmatrix}0&C_{2}&0\\ C_{1}&0&C\\ C_{31}&0&C_{33}\end{bmatrix},
\]
which defines the observability gramian of
\[
\begin{bmatrix}0&R_{12}&R_{13}\\ R_{21}&\sigma_{r+1}^{-1}R_{22}&R_{23}\\ R_{31}&R_{32}&R_{33}\end{bmatrix}.
\]
The controllability gramian is defined by
\[
0=\begin{bmatrix}A_{1}&0&0\\ 0&A_{2}&0\\ 0&0&A\end{bmatrix}
\begin{bmatrix}X_{11}&0&0\\ 0&X_{22}&X_{23}\\ 0&X_{23}^{*}&\sigma_{r+1}^{2}X_{33}\end{bmatrix}
+\begin{bmatrix}X_{11}&0&0\\ 0&X_{22}&X_{23}\\ 0&X_{23}^{*}&\sigma_{r+1}^{2}X_{33}\end{bmatrix}
\begin{bmatrix}A_{1}^{*}&0&0\\ 0&A_{2}^{*}&0\\ 0&0&A^{*}\end{bmatrix}
+\begin{bmatrix}B_{1}&0&0\\ 0&B_{2}&B_{32}\\ 0&B&B_{33}\end{bmatrix}
\begin{bmatrix}B_{1}^{*}&0&0\\ 0&B_{2}^{*}&B^{*}\\ 0&B_{32}^{*}&B_{33}^{*}\end{bmatrix}.
\]
The (2,2)-partition yields
\[
A_{2}X_{22}+X_{22}A_{2}^{*}+B_{2}B_{2}^{*}+B_{32}B_{32}^{*}=0.
\]
To establish the spectral radius condition, we consider
\[
\begin{bmatrix}R_{12}&R_{13}\end{bmatrix}
\stackrel{s}{=}\begin{bmatrix}A_{2}&B_{2}&B_{32}\\ C_{2}&D_{12}&D_{13}\end{bmatrix}.
\]
Since
\[
R_{12}R_{12}^{\sim}+R_{13}R_{13}^{\sim}=I
\]
and since $\begin{bmatrix}R_{12}&R_{13}\end{bmatrix}$ has a stable right inverse, $(R_{12},R_{13})$ are a normalized left coprime pair. This means that $\bigl\|\begin{bmatrix}R_{12}&R_{13}\end{bmatrix}\bigr\|_{H}<1$ and therefore that
\[
\rho(X_{22}Y_{22})<1.
\]
Solution 11.11. The realization in Theorem 11.3.3 is given by
\[
\dot x=\tilde Ax+\begin{bmatrix}\tilde B_{2}&\tilde B_{4}\end{bmatrix}\begin{bmatrix}w\\ u\end{bmatrix},
\qquad
\begin{bmatrix}z\\ y\end{bmatrix}
=\begin{bmatrix}\tilde C_{2}\\ \tilde C_{4}\end{bmatrix}x
+\begin{bmatrix}D_{22}&D_{24}\\ D_{42}&0\end{bmatrix}\begin{bmatrix}w\\ u\end{bmatrix}.
\]
Multiplying the first equation on the left by $Z$ gives
\[
Z\dot x=Z\tilde Ax+Z\begin{bmatrix}\tilde B_{2}&\tilde B_{4}\end{bmatrix}\begin{bmatrix}w\\ u\end{bmatrix},
\qquad
\begin{bmatrix}z\\ y\end{bmatrix}
=\begin{bmatrix}\tilde C_{2}\\ \tilde C_{4}\end{bmatrix}x
+\begin{bmatrix}D_{22}&D_{24}\\ D_{42}&0\end{bmatrix}\begin{bmatrix}w\\ u\end{bmatrix}.
\]
Carrying out the calculations gives
\[
Z\dot x=(A^{*}+YAX-C_{aa}^{*}D_{aa}B_{aa}^{*})x
+\Bigl(\begin{bmatrix}YB_{2}&0\end{bmatrix}
+\begin{bmatrix}C_{1}^{*}&C_{2}^{*}&C_{3}^{*}\end{bmatrix}
\begin{bmatrix}D_{12}&0\\ D_{22}&D_{24}\\ D_{32}&D_{34}\end{bmatrix}\Bigr)
\begin{bmatrix}w\\ u\end{bmatrix},
\]
which completes the verification, since the second equation remains unaffected.
Solutions to Problems in Appendix A
Solution A.1. Since $ND^{-1}=N_{c}D_{c}^{-1}$, it follows that $W=D_{c}^{-1}D$. To show that $W\in\mathcal{RH}_{\infty}$, let $X$ and $Y$ be $\mathcal{RH}_{\infty}$ transfer function matrices such that $XN_{c}+YD_{c}=I$, which exist because $N_{c}$ and $D_{c}$ are right coprime. Multiplying on the right by $W=D_{c}^{-1}D$, we see that $W=XN+YD\in\mathcal{RH}_{\infty}$.
Solution A.2. Let $X$ and $Y$ be $\mathcal{RH}_{\infty}$ transfer function matrices such that $XN+YD=I$, which exist because $N$ and $D$ are right coprime. Now write the Bezout identity as
\[
\begin{bmatrix}X&Y\end{bmatrix}\begin{bmatrix}N\\ D\end{bmatrix}=I.
\]
For any $s$ in the closed right-half plane (including infinity), $\begin{bmatrix}X&Y\end{bmatrix}(s)$ is a finite complex matrix, and
\[
\begin{bmatrix}X&Y\end{bmatrix}(s)\begin{bmatrix}N\\ D\end{bmatrix}(s)=I
\]
implies that
\[
\begin{bmatrix}N\\ D\end{bmatrix}(s)
\]
is a complex matrix with full column rank.

Now write $GD=N$. If $s_{0}$ is a pole of $G$ in the CRHP, it must be a zero of $\det D(s_{0})$, since $N\in\mathcal{RH}_{\infty}$. If $s_{0}$ is a zero of $\det D(s)$, there exists an $x\neq 0$ such that
\[
f=D\,\frac{x}{s-s_{0}}\in\mathcal{RH}_{\infty}.
\]
Hence
\[
Gf=N\,\frac{x}{s-s_{0}},
\]
which implies that $G$ has a pole at $s_{0}$, since $f\in\mathcal{RH}_{\infty}$ and $N(s_{0})x\neq 0$, due to coprimeness.
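The pole/zero correspondence can be illustrated numerically (an illustration added here, not from the manual) with the scalar example $G=1/(s-1)$ and the coprime factors $N=1/(s+1)$, $D=(s-1)/(s+1)$:

```python
# scalar coprime-factorization example: G = N D^{-1}
G = lambda s: 1 / (s - 1)          # one CRHP pole at s0 = 1
N = lambda s: 1 / (s + 1)          # stable factor
D = lambda s: (s - 1) / (s + 1)    # stable factor, det D(1) = 0

# G = N D^{-1} away from the pole
for s in [2 + 1j, 0.5j, 3.0]:
    assert abs(G(s) - N(s) / D(s)) < 1e-12

# the CRHP pole of G is a zero of det D, and N does not vanish there
assert abs(D(1.0)) < 1e-12
assert abs(N(1.0)) > 0.1
```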
Solution A.3. The verification of (A.2.3) is a routine application of the state-space system interconnection (or inversion) results of Problem 3.6. Since
\[
\begin{bmatrix}D_{r}\\ N_{r}\end{bmatrix}
\stackrel{s}{=}\begin{bmatrix}A-BF&B\\ -F&I\\ C-DF&D\end{bmatrix},
\]
a direct application of Problem 3.6, part 4, yields $N_{r}D_{r}^{-1}=D+C(sI-A)^{-1}B=G$. The identity $D_{l}^{-1}N_{l}=N_{r}D_{r}^{-1}$ follows from the (2,1)-block of (A.2.3).

Now note that
\[
\begin{bmatrix}V_{r}&U_{r}\end{bmatrix}
\stackrel{s}{=}\begin{bmatrix}A-HC&B-HD&H\\ F&I&0\end{bmatrix}.
\]
Application of a dual version of Problem 3.6, part 4, yields
\[
V_{r}^{-1}U_{r}
\stackrel{s}{=}\begin{bmatrix}A-HC-(B-HD)F&H\\ -F&0\end{bmatrix},
\]
which we may write out as
\[
\dot{\hat x}=\bigl(A-HC-(B-HD)F\bigr)\hat x+Hy,
\qquad
u=-F\hat x.
\]
Replacing $-F\hat x$ with $u$ in the $\dot{\hat x}$ equation yields (A.2.6).
Solution A.4. Let
\[
\begin{bmatrix}D\\ N\end{bmatrix}
\stackrel{s}{=}\begin{bmatrix}\tilde A&\tilde B\\ \tilde C_{1}&\tilde D_{1}\\ \tilde C_{2}&\tilde D_{2}\end{bmatrix}
\]
be a minimal realization. Since $N$ and $D$ are in $\mathcal{RH}_{\infty}$, $\tilde A$ is asymptotically stable. Also, since $N$, $D$ are r.c.,
\[
\begin{bmatrix}\tilde A-sI&\tilde B\\ \tilde C_{1}&\tilde D_{1}\\ \tilde C_{2}&\tilde D_{2}\end{bmatrix}
\]
has full column rank for all $s$ in the closed right-half plane.

Using Problem 3.6, part 4, yields
\[
G=ND^{-1}
\stackrel{s}{=}\begin{bmatrix}\tilde A-\tilde B\tilde D_{1}^{-1}\tilde C_{1}&\tilde B\tilde D_{1}^{-1}\\[2pt]
\tilde C_{2}-\tilde D_{2}\tilde D_{1}^{-1}\tilde C_{1}&\tilde D_{2}\tilde D_{1}^{-1}\end{bmatrix}.
\]
INTERNAL STABILITY THEORY
Define
\[
A=\tilde A-\tilde B\tilde D_{1}^{-1}\tilde C_{1},\quad
B=\tilde B\tilde D_{1}^{-1},\quad
C=\tilde C_{2}-\tilde D_{2}\tilde D_{1}^{-1}\tilde C_{1},\quad
D=\tilde D_{2}\tilde D_{1}^{-1},
\]
\[
W=\tilde D_{1}^{-1},\qquad F=-\tilde D_{1}^{-1}\tilde C_{1}.
\]
Then
\begin{align*}
BW^{-1}&=\tilde B\\
A-BW^{-1}F&=\tilde A-\tilde B\tilde D_{1}^{-1}\tilde C_{1}+\tilde B\tilde D_{1}^{-1}\tilde C_{1}=\tilde A\\
-W^{-1}F&=\tilde C_{1}\\
W^{-1}&=\tilde D_{1}\\
C-DW^{-1}F&=\tilde C_{2}-\tilde D_{2}\tilde D_{1}^{-1}\tilde C_{1}+\tilde D_{2}\tilde D_{1}^{-1}\tilde C_{1}=\tilde C_{2}\\
DW^{-1}&=\tilde D_{2}.
\end{align*}
Since $A-BW^{-1}F=\tilde A$ is asymptotically stable, $(A,B)$ is stabilizable. To prove the detectability of $(A,C)$, note that if
\[
\begin{bmatrix}A-sI\\ C\end{bmatrix}x=0,
\]
then
\[
\begin{bmatrix}\tilde A-sI&\tilde B\\ \tilde C_{2}&\tilde D_{2}\end{bmatrix}
\begin{bmatrix}I\\ -\tilde D_{1}^{-1}\tilde C_{1}\end{bmatrix}x=0
\quad\Rightarrow\quad
\begin{bmatrix}\tilde A-sI&\tilde B\\ \tilde C_{1}&\tilde D_{1}\\ \tilde C_{2}&\tilde D_{2}\end{bmatrix}
\begin{bmatrix}I\\ -\tilde D_{1}^{-1}\tilde C_{1}\end{bmatrix}x=0.
\]
Thus, if $\mathrm{Re}(s)\geq 0$, then $\begin{bmatrix}I\\ -\tilde D_{1}^{-1}\tilde C_{1}\end{bmatrix}x=0$, implying $x=0$, and we conclude that $(A,C)$ is detectable. Thus $(A,B,C,D)$ is a stabilizable and detectable realization of $G$ such that $N$, $D$ has the state-space realization given in the problem statement, for suitable $W$ and $F$.
Solution A.5. The (1,1)-block was verified in the solution to Problem A.3. The (1,2)- and (2,1)-blocks are direct applications of the formula for inverting a state-space realization; see Problem 3.6. The (2,2)-block is a direct application of Problem 3.6, part 4.
Solution A.6. Since $G$ is assumed stable, every stabilizing controller is given by $K=Q(I+GQ)^{-1}$. Now
\[
y=(I-GK)^{-1}v=(I+GQ)v.
\]
Therefore, for perfect steady-state accuracy in response to step inputs, we need $(I+GQ)(0)=0$ (by the final value theorem of the Laplace transform). Hence all the desired controllers have the form
\[
K=Q(I+GQ)^{-1},\qquad Q\in\mathcal{RH}_{\infty},\qquad Q(0)=-G^{-1}(0).
\]
As an example, consider $g(s)=\frac{1}{s+1}$. Then
\[
k=\frac{(s+1)q}{s+1+q},\qquad q\in\mathcal{RH}_{\infty},\qquad q(0)=-1.
\]
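For instance (a check added here, using the simplest admissible choice $q(s)=-1$, which is not from the manual), the resulting controller $k=-(s+1)/s$ contains an integrator and gives zero steady-state error:

```python
g = lambda s: 1 / (s + 1)
q = lambda s: -1.0                              # q in RH-infinity with q(0) = -g(0)^{-1} = -1
k = lambda s: (s + 1) * q(s) / (s + 1 + q(s))   # = -(s+1)/s

# closed-loop map from v to y is (1 + g q); it must vanish at s = 0
S = lambda s: 1 + g(s) * q(s)
assert abs(S(0.0)) < 1e-12

# consistency of the formula k = q(1 + g q)^{-1} at a test point
s0 = 2.0 + 1.0j
assert abs(k(s0) - q(s0) / (1 + g(s0) * q(s0))) < 1e-12
```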
Solution A.7. Let $(A,B,C,D)$ be any stabilizable and detectable realization of $G$, and suppose $D$ and $N$ are given by
\[
\begin{bmatrix}D\\ N\end{bmatrix}
\stackrel{s}{=}\begin{bmatrix}A-BW^{-1}F&BW^{-1}\\ -W^{-1}F&W^{-1}\\ C-DW^{-1}F&DW^{-1}\end{bmatrix}.
\]
We aim to choose $F$ and $W$ such that the allpass equations of Theorem 3.2.1 are satisfied and $A-BW^{-1}F$ is asymptotically stable. If we can do this, then the coprime factorization satisfies the equation
\[
\begin{bmatrix}D\\ N\end{bmatrix}^{\sim}\begin{bmatrix}D\\ N\end{bmatrix}=I,
\]
which defines the normalized coprime factorization. The allpass equations obtained from Theorem 3.2.1 yield
\begin{align*}
0&=X(A-BW^{-1}F)+(A-BW^{-1}F)^{*}X+(W^{-1}F)^{*}(W^{-1}F)\\
&\qquad+(C-DW^{-1}F)^{*}(C-DW^{-1}F)\\
0&=(W^{*})^{-1}\bigl[-W^{-1}F+D^{*}(C-DW^{-1}F)+B^{*}X\bigr]\\
I&=(W^{*})^{-1}(I+D^{*}D)W^{-1}.
\end{align*}
Thus
\[
W^{*}W=I+D^{*}D,\qquad F=(W^{*})^{-1}(D^{*}C+B^{*}X).
\]
It remains to determine $X$. Substituting $F$ and $W$ into the observability gramian equation, we obtain (after some manipulation) the Riccati equation
\[
X(A-BS^{-1}D^{*}C)+(A-BS^{-1}D^{*}C)^{*}X-XBS^{-1}B^{*}X+C^{*}\tilde S^{-1}C=0
\]
in which $S=I+D^{*}D$ and $\tilde S=I+DD^{*}$. (Note that $I-DS^{-1}D^{*}=\tilde S^{-1}$.) By the results of Chapter 5, this Riccati equation has a stabilizing solution provided $(A-BS^{-1}D^{*}C,\,BS^{-\frac12})$ is stabilizable (which is true, since $(A,B)$ is stabilizable) and provided $(A-BS^{-1}D^{*}C,\,\tilde S^{-\frac12}C)$ has no unobservable modes on the imaginary axis (which is also true, since $(A,C)$ is detectable). The fact that the required assumptions hold follows immediately from the Popov-Belevitch-Hautus test for controllability/observability.
Thus, if $X$ is the stabilizing solution to the above Riccati equation, and $F$ and $W$ are defined from $X$ as above, the allpass equations are satisfied and $A-BW^{-1}F$ is asymptotically stable. We conclude that $N$, $D$ defined by the given state-space realization is a normalized right coprime factorization of $G$.

To interpret this in the context of LQ control, consider the case $D=0$. We then have
\[
XA+A^{*}X-XBB^{*}X+C^{*}C=0,\qquad F=B^{*}X,\qquad W=I.
\]
This is exactly the same as the situation that arises in minimizing
\[
J=\int_{0}^{\infty}(x^{*}C^{*}Cx+u^{*}u)\,dt.
\]
For $D\neq 0$, the Riccati equation is that which we need to minimize
\[
J=\|z\|_{2}^{2}=\int_{0}^{\infty}z^{*}z\,dt.
\]
Now
\[
z=\begin{bmatrix}G\\ I\end{bmatrix}u
=\begin{bmatrix}N\\ D\end{bmatrix}D^{-1}u.
\]
Since $\begin{bmatrix}N\\ D\end{bmatrix}$ is allpass, minimizing $\|z\|_{2}^{2}$ is the same as minimizing $\|D^{-1}u\|_{2}^{2}$. Thus we choose $D^{-1}u=0$. Now
\[
D^{-1}\stackrel{s}{=}\begin{bmatrix}A&B\\ F&W\end{bmatrix},
\]
so setting $D^{-1}u=0$ means
\[
\dot x=Ax+Bu,\qquad 0=Fx+Wu.
\]
That is, $u=-W^{-1}Fx$. Thus, computing a normalized right coprime factorization is the frequency-domain equivalent of completing the square.
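A scalar illustration (added here as a sanity check; the example $G=1/(s-1)$ is not from the manual): with $A=B=C=1$, $D=0$, the Riccati equation reads $2X-X^{2}+1=0$, whose stabilizing root is $X=1+\sqrt2$, and the resulting factors satisfy $|D(j\omega)|^{2}+|N(j\omega)|^{2}=1$:

```python
import math

# G = 1/(s-1): A = 1, B = 1, C = 1, D = 0, so S = I and the NCF Riccati
# equation reads 2X - X^2 + 1 = 0; the stabilizing root is X = 1 + sqrt(2).
X = 1 + math.sqrt(2)
assert abs(2 * X - X**2 + 1) < 1e-12

F, W = X, 1.0                       # F = B'X, W = I (the D = 0 case)
Acl = 1 - F                         # A - B W^{-1} F = -sqrt(2), stable
assert Acl < 0

Dfac = lambda s: 1 - F / (s - Acl)  # D(s) from the realization [Acl, B; -F, 1]
Nfac = lambda s: 1 / (s - Acl)      # N(s) from the realization [Acl, B; C,  0]

# normalization: D~D + N~N = I on the imaginary axis
for w in [0.0, 0.3, 1.0, 7.5]:
    s = 1j * w
    assert abs(abs(Dfac(s))**2 + abs(Nfac(s))**2 - 1) < 1e-12
```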
Solution A.8. From Lemma A.4.5,
\[
T_{12}\stackrel{s}{=}\begin{bmatrix}A-B_{2}F&B_{2}\\ C_{1}-D_{12}F&D_{12}\end{bmatrix}.
\]
From Theorem 3.2.1, $T_{12}^{\sim}T_{12}=I$ if
\begin{align*}
0&=X(A-B_{2}F)+(A-B_{2}F)^{*}X+(C_{1}-D_{12}F)^{*}(C_{1}-D_{12}F)\\
0&=D_{12}^{*}(C_{1}-D_{12}F)+B_{2}^{*}X\\
I&=D_{12}^{*}D_{12}.
\end{align*}
Since $D_{12}^{*}D_{12}=I$ holds by assumption, we require
\begin{align*}
0&=X(A-B_{2}D_{12}^{*}C_{1})+(A-B_{2}D_{12}^{*}C_{1})^{*}X-XB_{2}B_{2}^{*}X
+C_{1}^{*}(I-D_{12}D_{12}^{*})C_{1}\\
F&=D_{12}^{*}C_{1}+B_{2}^{*}X,
\end{align*}
which is precisely the $F$ from the (cross-coupled) LQ problem. Similarly for $T_{21}$.
Solution A.9. $\begin{bmatrix}\tilde T_{12}&T_{12}\end{bmatrix}$ is clearly square: its $D$-matrix is square, since $\tilde D_{12}$ is an orthogonal completion of $D_{12}$. From Theorem 3.2.1, $\begin{bmatrix}\tilde T_{12}&T_{12}\end{bmatrix}$ is allpass provided
\begin{align*}
X(A-B_{2}F)+(A-B_{2}F)^{*}X+(C_{1}-D_{12}F)^{*}(C_{1}-D_{12}F)&=0\\
\begin{bmatrix}\tilde D_{12}&D_{12}\end{bmatrix}^{*}(C_{1}-D_{12}F)
+\begin{bmatrix}\tilde B&B_{2}\end{bmatrix}^{*}X&=0.
\end{align*}
By construction, the first equation is satisfied, and the (2,1)-block of the second equation is also satisfied. We therefore need to confirm the (1,1)-block of the second equation. Now
\[
\tilde D_{12}^{*}(C_{1}-D_{12}F)+\tilde B^{*}X
=\tilde D_{12}^{*}C_{1}(I-X^{\#}X).
\]
The right-hand side is zero if $\ker(X)\subseteq\ker(\tilde D_{12}^{*}C_{1})$. To show this, suppose $Xx=0$. Then $(C_{1}-D_{12}F)x=0$ and consequently $0=\tilde D_{12}^{*}(C_{1}-D_{12}F)x=\tilde D_{12}^{*}C_{1}x$. Thus $Xx=0$ implies $\tilde D_{12}^{*}C_{1}x=0$, so $\ker(X)\subseteq\ker(\tilde D_{12}^{*}C_{1})$ and we conclude that
\[
\tilde D_{12}^{*}(C_{1}-D_{12}F)+\tilde B^{*}X=0.
\]
The reasoning for $\begin{bmatrix}\tilde T_{21}&T_{21}\end{bmatrix}$ is analogous.
Solution A.10. Choose $\begin{bmatrix}\tilde T_{12}&T_{12}\end{bmatrix}$ and $\begin{bmatrix}\tilde T_{21}&T_{21}\end{bmatrix}$ as in the previous problem.

1.
\begin{align*}
\|T(P,K)\|_{2,\infty}
&=\Bigl\|\begin{bmatrix}\tilde T_{12}&T_{12}\end{bmatrix}^{\sim}T_{11}
\begin{bmatrix}\tilde T_{21}&T_{21}\end{bmatrix}^{\sim}
+\begin{bmatrix}0&0\\ 0&Q\end{bmatrix}\Bigr\|_{2,\infty}\\
&=\Bigl\|\begin{bmatrix}R_{11}&R_{12}\\ R_{21}&R_{22}+Q\end{bmatrix}\Bigr\|_{2,\infty}.
\end{align*}

2. $R$ is given by a realization
\[
R\stackrel{s}{=}\begin{bmatrix}A_{R}&B_{R}\\ C_{R}&D_{R}\end{bmatrix},
\]
in which $A_{R}$ is block upper triangular with diagonal blocks $(A-B_{2}F)^{*}$ and $(A-HC_{2})^{*}$, and
\[
D_{R}=\begin{bmatrix}0&0\\ 0&0\end{bmatrix}.
\]
Since $A-B_{2}F$ and $A-HC_{2}$ are asymptotically stable, we see that $R^{\sim}\in\mathcal{RH}_{\infty}$.

3. Using the result of Problem 11.2, we have
\[
\mathrm{trace}(X^{*}X)
=\mathrm{trace}(X_{11}^{*}X_{11})+\mathrm{trace}(X_{12}^{*}X_{12})
+\mathrm{trace}(X_{21}^{*}X_{21})+\mathrm{trace}(X_{22}^{*}X_{22})
\]
for any partitioned matrix
\[
X=\begin{bmatrix}X_{11}&X_{12}\\ X_{21}&X_{22}\end{bmatrix}.
\]
Hence
\[
\Bigl\|\begin{bmatrix}R_{11}&R_{12}\\ R_{21}&R_{22}+Q\end{bmatrix}\Bigr\|_{2}^{2}
=(\text{terms independent of }Q)+\|R_{22}+Q\|_{2}^{2}.
\]
Hence, we need to choose $Q\in\mathcal{RH}_{\infty}$ to minimize $\|R_{22}+Q\|_{2}^{2}$. Now $\|R_{22}+Q\|_{2}$ is finite if and only if $Q(\infty)=R_{22}(\infty)=0$. In this case,
\[
\|R_{22}+Q\|_{2}^{2}=\|R_{22}\|_{2}^{2}+\|Q\|_{2}^{2},
\]
since $\mathcal{RH}_{2}$ and $\mathcal{RH}_{2}^{\perp}$ are orthogonal in the 2-norm. Thus the minimum norm is achieved by setting $Q=0$, and the minimum norm is $\|R\|_{2}$.
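The orthogonality used in the last step can be illustrated numerically (a check added here; the scalar functions are not from the manual): for the stable $f(s)=1/(s+1)$ and the antistable $g(s)=1/(s-1)$, the Pythagorean relation $\|f+g\|_{2}^{2}=\|f\|_{2}^{2}+\|g\|_{2}^{2}$ holds:

```python
import math

f = lambda w: 1 / (1j * w + 1)        # in H2 (stable)
g = lambda w: 1 / (1j * w - 1)        # in H2-perp (antistable)

def l2norm_sq(h, R=1.0e4, n=200000):
    # (1/2pi) * integral of |h(jw)|^2 over [-R, R], trapezoidal rule
    dw = 2 * R / n
    total = 0.0
    for i in range(n + 1):
        w = -R + i * dw
        v = abs(h(w))**2
        total += v if 0 < i < n else 0.5 * v
    return total * dw / (2 * math.pi)

lhs = l2norm_sq(lambda w: f(w) + g(w))
rhs = l2norm_sq(f) + l2norm_sq(g)     # each term equals 1/2 analytically
assert abs(rhs - 1.0) < 5e-3
assert abs(lhs - rhs) < 5e-3
```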
Solutions to Problems in Appendix B
Solution B.1.
1. Since $Q(N+1)=0$,
\[
\sum_{k=0}^{N}\bigl(x_{k+1}^{*}Q(k+1)x_{k+1}-x_{k}^{*}Q(k)x_{k}\bigr)=-x_{0}^{*}Q(0)x_{0}.
\]
Now
\begin{align*}
\sum_{k=0}^{N}\bigl(z_{k}^{*}z_{k}+x_{k+1}^{*}Q(k+1)x_{k+1}-x_{k}^{*}Q(k)x_{k}\bigr)
&=\sum_{k=0}^{N}x_{k}^{*}\bigl(C^{*}(k)C(k)+A^{*}(k)Q(k+1)A(k)-Q(k)\bigr)x_{k}\\
&=0.
\end{align*}
Hence
\[
\sum_{k=0}^{N}z_{k}^{*}z_{k}=x_{0}^{*}Q(0)x_{0}.
\]
2.
(i) ⇒ (ii): Immediate from $z_{k}=CA^{k}x_{0}$.

(ii) ⇒ (iii): If $AW=WJ$, in which $J$ is a Jordan block corresponding to an eigenvalue $\lambda$ with $|\lambda|\geq 1$, then $CA^{k}W=CWJ^{k}$. Hence $CA^{k}W\to 0$ as $k\to\infty$ implies $CW=0$. That is, every observable eigenspace is asymptotically stable.

(iii) ⇒ (iv): Uniform boundedness follows from
\[
Q(k)=\sum_{i=k}^{N}(A^{*})^{N-i}C^{*}CA^{N-i}
\]
and the asymptotic stability of every observable mode. Note that $Q(k)$ is monotonic. The convergence result is a consequence of monotonicity and uniform boundedness.

(iv) ⇒ (v): Set $Q=\lim_{k\to-\infty}Q(k)$.

(v) ⇒ (i): $X(k)=Q-Q(k)$ satisfies $X(k)=A^{*}X(k+1)A$, $X(N+1)=Q$. Therefore
\[
X(k)=(A^{*})^{N+1-k}QA^{N+1-k}\geq 0.
\]
Thus $0\leq Q(k)\leq Q$, which establishes that $\lim_{k\to-\infty}Q(k)$ is indeed the smallest solution, and
\[
\|z\|_{2,[0,N]}^{2}=x_{0}^{*}Q(0)x_{0}\leq x_{0}^{*}Qx_{0}.
\]
Since this is a uniform bound on $\|z\|_{2,[0,N]}^{2}$, we conclude that $z\in\ell_{2}[0,\infty)$.
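Part 1 is easy to confirm numerically (a scalar check added here, not from the manual): the backward gramian recursion reproduces the output energy exactly:

```python
# scalar time-invariant example: x_{k+1} = a x_k, z_k = c x_k
a, c, N, x0 = 0.9, 2.0, 60, 1.3

# backward recursion Q(k) = c^2 + a^2 Q(k+1), with Q(N+1) = 0
Q = 0.0
for _ in range(N + 1):
    Q = c * c + a * a * Q          # after the loop, Q holds Q(0)

# direct computation of the output energy on [0, N]
x, energy = x0, 0.0
for _ in range(N + 1):
    energy += (c * x) ** 2
    x = a * x

assert abs(energy - x0 * Q * x0) < 1e-9
```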
Solution B.2. The system $L$ that maps $w$ to $w-w^{*}$ when the input is $u^{*}$, which is introduced in the proof of Theorem B.2.1, has realization
\[
\begin{bmatrix}x_{k+1}\\ w_{k}-w_{k}^{*}\end{bmatrix}
=\begin{bmatrix}A-B_{2}R_{3}^{-1}L_{2}&B_{1}-B_{2}R_{3}^{-1}R_{2}\\
-\overline\nabla^{-1}\overline L&I\end{bmatrix}(k)
\begin{bmatrix}x_{k}\\ w_{k}\end{bmatrix}.
\]
This is causally invertible, since its inverse is
\[
\begin{bmatrix}x_{k+1}\\ w_{k}\end{bmatrix}
=\begin{bmatrix}A-BR^{-1}L&B_{1}-B_{2}R_{3}^{-1}R_{2}\\
\overline\nabla^{-1}\overline L&I\end{bmatrix}(k)
\begin{bmatrix}x_{k}\\ w_{k}-w_{k}^{*}\end{bmatrix}.
\]
$\|L^{-1}\|_{[0,N]}\geq 1$, because the response to $w_{0}-w_{0}^{*}=e_{1}$, the first standard basis vector, for $k=0$ and $w_{k}-w_{k}^{*}=0$ for all $k\neq 0$ has two-norm at least unity. (The response is $w_{0}=e_{1}$, $w_{1}=\overline\nabla^{-1}\overline L(B_{1}-B_{2}R_{3}^{-1}R_{2})e_{1}$, and so on.)
Solution B.3. Suppose there exists a strictly proper controller such that (B.2.10) holds (when $x_{0}=0$). Consider the input $w_{i}=0$ for $i\leq k-1$. The strictly proper nature of the state dynamics and the controller implies that $u_{i}=0$ and $x_{i}=0$ for $i\leq k$. Hence $R_{1}(k)<0$. Therefore, the stated Schur decomposition in the hint exists and
\[
z_{k}^{*}z_{k}-\gamma^{2}w_{k}^{*}w_{k}+x_{k+1}^{*}X_{\infty}(k+1)x_{k+1}
=x_{k}^{*}X_{\infty}(k)x_{k}
+(w_{k}-w_{k}^{*})^{*}R_{1}(w_{k}-w_{k}^{*})
+(u_{k}-u_{k}^{*})^{*}\nabla(u_{k}-u_{k}^{*}),
\]
in which $\nabla(k)=(R_{3}-R_{2}R_{1}^{-1}R_{2}^{*})(k)$ and
\[
\begin{bmatrix}w_{k}^{*}\\ u_{k}^{*}\end{bmatrix}
=-\begin{bmatrix}R_{1}^{-1}L_{1}&R_{1}^{-1}R_{2}^{*}\\
\nabla^{-1}(L_{2}-R_{2}R_{1}^{-1}L_{1})&0\end{bmatrix}
\begin{bmatrix}x_{k}\\ u_{k}\end{bmatrix}.
\]
DISCRETE-TIME H∞ SYNTHESIS THEORY
The Riccati equation follows as before (note that $R_{3}>0$ as before, so $R_{3}-R_{2}R_{1}^{-1}R_{2}^{*}$ is nonsingular, implying $R$ is nonsingular). Furthermore, by choosing $u_{k}=u_{k}^{*}$ and $w_{k}=0$ we see that $X_{\infty}(k)\geq 0$, as before. The rest of the iterative argument is identical to that presented in the text.
Solution B.4. By completing the square using $X_{\infty}$ and $\hat X_{\infty}$ (the solutions corresponding to the terminal conditions $\Delta$ and $\hat\Delta$), we obtain
\begin{align*}
\|z\|_{2,[k,N]}^{2}-\gamma^{2}\|w\|_{2,[k,N]}^{2}+x_{N+1}^{*}\Delta x_{N+1}
&=\|r\|_{2,[k,N]}^{2}-\gamma^{2}\|s\|_{2,[k,N]}^{2}+x_{k}^{*}X_{\infty}(k)x_{k}\\
\|z\|_{2,[k,N]}^{2}-\gamma^{2}\|w\|_{2,[k,N]}^{2}+x_{N+1}^{*}\hat\Delta x_{N+1}
&=\|\hat r\|_{2,[k,N]}^{2}-\gamma^{2}\|\hat s\|_{2,[k,N]}^{2}+x_{k}^{*}\hat X_{\infty}(k)x_{k}.
\end{align*}
Setting $u_{i}=\hat u_{i}^{*}$ and $w_{i}=w_{i}^{*}$ for $i=k,\ldots,N$ gives $\hat r_{i}=0$ and $s_{i}=0$ for $i=k,\ldots,N$. Therefore
\[
x_{k}^{*}\bigl(\hat X_{\infty}(k)-X_{\infty}(k)\bigr)x_{k}
=x_{N+1}^{*}(\hat\Delta-\Delta)x_{N+1}
+\|r\|_{2,[k,N]}^{2}+\gamma^{2}\|\hat s\|_{2,[k,N]}^{2}
\geq x_{N+1}^{*}(\hat\Delta-\Delta)x_{N+1}.
\]
Since $x_{k}$ may be regarded as an arbitrary initial condition, $\hat\Delta\geq\Delta$ implies that
\[
\hat X_{\infty}(k)\geq X_{\infty}(k).
\]
Solution B.5.
1. Substitute for $x_{k+1}$ from the dynamics and use the equation defining $z_{k}$.

2. Elementary algebra verifies the completion of squares identity. The conclusion that $X_{2}(k)\geq 0$ follows from the fact that the left-hand side of the completion of squares identity is nonnegative for any $u_{k}$ and $x_{k}$; in particular, for $u_{k}=u_{k}^{*}$.

3. $X_{2}(N+1)\geq 0$ implies $R^{-1}(N)$ exists, which implies $X_{2}(N)$ is well defined and nonnegative definite. Hence $R^{-1}(N-1)$ exists, which implies $X_{2}(N-1)$ is well defined and nonnegative definite, and so on.

From the completion of squares identity in Part 2, we obtain
\[
\sum_{k=0}^{N}\bigl(z_{k}^{*}z_{k}+x_{k+1}^{*}X_{2}(k+1)x_{k+1}-x_{k}^{*}X_{2}(k)x_{k}\bigr)
=\sum_{k=0}^{N}(u_{k}-u_{k}^{*})^{*}R(k)(u_{k}-u_{k}^{*}).
\]
Since
\[
\sum_{k=0}^{N}\bigl(x_{k+1}^{*}X_{2}(k+1)x_{k+1}-x_{k}^{*}X_{2}(k)x_{k}\bigr)
=x_{N+1}^{*}X_{2}(N+1)x_{N+1}-x_{0}^{*}X_{2}(0)x_{0}
\]
and $X_{2}(N+1)=\Delta$, we obtain
\[
\sum_{k=0}^{N}z_{k}^{*}z_{k}+x_{N+1}^{*}\Delta x_{N+1}
=x_{0}^{*}X_{2}(0)x_{0}
+\sum_{k=0}^{N}(u_{k}-u_{k}^{*})^{*}R(k)(u_{k}-u_{k}^{*}).
\]
4. That the optimal control is $u_{k}^{*}=-R(k)^{-1}L(k)x_{k}$ is immediate from the preceding identity. The optimal control is unique because $R(k)>0$ for all $k$.
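The completion of squares identity of Part 3 holds exactly, which can be confirmed numerically (a scalar check added here with illustrative data, not from the manual):

```python
import random

random.seed(0)
a, b, c, d, Delta, N, x0 = 0.8, 0.5, 1.0, 1.0, 0.7, 20, 1.0
# scalar system x_{k+1} = a x_k + b u_k, z_k = c x_k + d u_k,
# terminal penalty x_{N+1}' Delta x_{N+1}

# backward Riccati recursion from Problem B.5
X = [0.0] * (N + 2); X[N + 1] = Delta
R = [0.0] * (N + 1); L = [0.0] * (N + 1)
for k in range(N, -1, -1):
    R[k] = d * d + b * b * X[k + 1]
    L[k] = d * c + b * a * X[k + 1]
    X[k] = c * c + a * a * X[k + 1] - L[k] ** 2 / R[k]

# completion of squares identity for an arbitrary input sequence
u = [random.uniform(-1, 1) for _ in range(N + 1)]
x, lhs, quad = x0, 0.0, 0.0
for k in range(N + 1):
    z = c * x + d * u[k]
    ustar = -L[k] / R[k] * x
    lhs += z * z
    quad += R[k] * (u[k] - ustar) ** 2
    x = a * x + b * u[k]
lhs += Delta * x * x                 # terminal term x_{N+1}' Delta x_{N+1}

assert abs(lhs - (X[0] * x0 * x0 + quad)) < 1e-9
```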
Solution B.6. Note that
\begin{align*}
L(k)-R(k)(D_{12}^{*}D_{12})^{-1}D_{12}^{*}C_{1}
&=B_{2}^{*}X_{2}(k+1)A-B_{2}^{*}X_{2}(k+1)B_{2}(D_{12}^{*}D_{12})^{-1}D_{12}^{*}C_{1}\\
&=B_{2}^{*}X_{2}(k+1)\bar A.\qquad\text{(B.1)}
\end{align*}
The time-dependence of the matrices $A$, $B_{2}$, $C_{1}$, $D_{12}$, $\bar A$ and $\bar C$ will be suppressed. Using the Riccati equation from Problem B.5, we have
\[
\begin{bmatrix}A&B_{2}\\ C_{1}&D_{12}\end{bmatrix}^{*}
\begin{bmatrix}X_{2}(k+1)&0\\ 0&I\end{bmatrix}
\begin{bmatrix}A&B_{2}\\ C_{1}&D_{12}\end{bmatrix}
=\begin{bmatrix}X_{2}(k)&0\\ 0&0\end{bmatrix}
+\begin{bmatrix}L^{*}(k)\\ R(k)\end{bmatrix}R^{-1}(k)\begin{bmatrix}L(k)&R(k)\end{bmatrix}.
\]
Multiply on the right by $\begin{bmatrix}I&0\\ -(D_{12}^{*}D_{12})^{-1}D_{12}^{*}C_{1}&I\end{bmatrix}$ and on the left by the transpose of this matrix to obtain
\[
\begin{bmatrix}\bar A&B_{2}\\ \bar C&D_{12}\end{bmatrix}^{*}
\begin{bmatrix}X_{2}(k+1)&0\\ 0&I\end{bmatrix}
\begin{bmatrix}\bar A&B_{2}\\ \bar C&D_{12}\end{bmatrix}
=\begin{bmatrix}\bar A^{*}X_{2}(k+1)B_{2}R^{-1}(k)B_{2}^{*}X_{2}(k+1)\bar A
&\bar A^{*}X_{2}(k+1)B_{2}\\
B_{2}^{*}X_{2}(k+1)\bar A&R(k)\end{bmatrix}
+\begin{bmatrix}X_{2}(k)&0\\ 0&0\end{bmatrix}.
\]
The (1,1)-block is the desired Riccati equation.

Using (B.1), we obtain
\begin{align*}
\bar A-B_{2}R^{-1}(k)B_{2}^{*}X_{2}(k+1)\bar A
&=\bar A-B_{2}R^{-1}(k)\bigl(L(k)-R(k)(D_{12}^{*}D_{12})^{-1}D_{12}^{*}C_{1}\bigr)\\
&=A-B_{2}R^{-1}(k)L(k).
\end{align*}
Solution B.7.
1.
\begin{align*}
&(A-B_{2}K)^{*}P(A-B_{2}K)+(C_{1}-D_{12}K)^{*}(C_{1}-D_{12}K)\\
&\qquad=A^{*}PA+C_{1}^{*}C_{1}-L_{P}^{*}R_{P}^{-1}L_{P}
+(K-R_{P}^{-1}L_{P})^{*}R_{P}(K-R_{P}^{-1}L_{P}),
\end{align*}
in which $R_{P}=D_{12}^{*}D_{12}+B_{2}^{*}PB_{2}$ and $L_{P}=D_{12}^{*}C_{1}+B_{2}^{*}PA$. Substituting this into (B.6.5), we obtain the inequality (B.6.3), with $P$ in place of the Riccati solution.

Now suppose
\[
\begin{bmatrix}A-\lambda I&B_{2}\\ C_{1}&D_{12}\\ P&0\end{bmatrix}
\begin{bmatrix}x\\ u\end{bmatrix}
=\begin{bmatrix}0\\ 0\\ 0\end{bmatrix}.
\]
Multiplying (B.6.5) on the left by $x^{*}$ and on the right by $x$ results in
\[
0\geq\|P^{\frac12}(A-B_{2}K)x\|^{2}+\|(C_{1}-D_{12}K)x\|^{2}.
\]
Hence $(C_{1}-D_{12}K)x=0$, since both terms on the right-hand side are nonnegative. Since $C_{1}x=-D_{12}u$, we have $D_{12}(u+Kx)=0$, and $D_{12}^{*}D_{12}>0$ implies that $u=-Kx$. Hence
\[
0=(A-\lambda I)x+B_{2}u=(A-B_{2}K-\lambda I)x,
\]
which implies that $|\lambda|<1$ or $x=0$, since $A-B_{2}K$ is asymptotically stable. Thus
\[
\begin{bmatrix}A-\lambda I&B_{2}\\ C_{1}&D_{12}\\ P&0\end{bmatrix}
\begin{bmatrix}x\\ u\end{bmatrix}
=\begin{bmatrix}0\\ 0\\ 0\end{bmatrix}
\]
implies either $|\lambda|<1$ or $x=0$, $u=0$, and we conclude that
\[
\begin{bmatrix}A-\lambda I&B_{2}\\ C_{1}&D_{12}\\ P&0\end{bmatrix}
\]
has full column rank for all $|\lambda|\geq 1$.
2. Define $u_{k}^{*}=-R(k)^{-1}L(k)x_{k}$. Then, for any $u$ with $u_{N}=0$,
\begin{align*}
x_{k}^{*}X_{2}(k)x_{k}+\|R^{\frac12}(u-u^{*})\|_{2,[k,N]}^{2}
&=\|z\|_{2,[k,N]}^{2}+x_{N+1}^{*}\Delta x_{N+1}\\
&=\|z\|_{2,[k,N-1]}^{2}+x_{N}^{*}\Delta x_{N}
+x_{N}^{*}(A^{*}\Delta A+C_{1}^{*}C_{1}-\Delta)x_{N}\\
&\leq\|z\|_{2,[k,N-1]}^{2}+x_{N}^{*}\Delta x_{N}\\
&=x_{k}^{*}X_{2}(k+1)x_{k}+\|R^{\frac12}(u-\hat u^{*})\|_{2,[k,N-1]}^{2},
\end{align*}
in which $\hat u_{k}^{*}=-R(k+1)^{-1}L(k+1)x_{k}$, which is the optimal control for the time-horizon $[k,N-1]$. (Remember that we are dealing with the time-invariant case, so $X_{2}(k,N-1,\Delta)=X_{2}(k+1,N,\Delta)$.) Setting $u=\hat u^{*}$ on $[k,N-1]$ and $u_{N}=0$, we obtain
\[
x_{k}^{*}\bigl(X_{2}(k+1)-X_{2}(k)\bigr)x_{k}
\geq\|R^{\frac12}(u-u^{*})\|_{2,[k,N]}^{2}\geq 0,
\]
and since $x_{k}$ may be regarded as an arbitrary initial condition, we conclude that $X_{2}(k)\leq X_{2}(k+1)$.
3. $X_{2}(k)$ is monotonic, bounded above (by $\Delta$) and bounded below (by 0). Hence $X_{2}=\lim_{k\to-\infty}X_{2}(k)$ exists and satisfies the algebraic Riccati equation. The completion of squares identity is immediate from the corresponding finite horizon identity, since $\lim_{N\to\infty}x_{N+1}^{*}\Delta x_{N+1}=0$ for any stabilizing controller.
4. Let $X_{M}=X_{2}(N+1-M)$, $\Delta_{M}=X_{2}(N+1-M)-X_{2}(N-M)$, $R_{M}=R(N-M)$ and $F_{M}=R_{M}^{-1}L(N-M)$. We use the fake algebraic Riccati technique to determine stability. Write the Riccati equation as
\[
X_{M}=(A-B_{2}F_{M})^{*}X_{M}(A-B_{2}F_{M})
+(C_{1}-D_{12}F_{M})^{*}(C_{1}-D_{12}F_{M})+\Delta_{M}.\qquad\text{(B.2)}
\]
We need to show that $|\lambda_{i}(A-B_{2}F_{M})|<1$. Let $x\neq 0$ satisfy
\[
(A-B_{2}F_{M})x=\lambda x.
\]
Then $x^{*}(\text{B.2})x$ yields
\[
(|\lambda|^{2}-1)x^{*}X_{M}x+\|(C_{1}-D_{12}F_{M})x\|^{2}+\|\Delta_{M}^{\frac12}x\|^{2}=0.\qquad\text{(B.3)}
\]
Since $X_{M}\geq 0$, we must have either
(a) $|\lambda|<1$, or
(b) $(C_{1}-D_{12}F_{M})x=0$ and $\Delta_{M}x=0$.
Case (a) is what we want, so it remains to see what happens in case (b).

Claim: Case (b) implies $X_{M}x=0$. Suppose case (b) holds. From (B.3), if $|\lambda|\neq 1$, we have $X_{M}x=0$. On the other hand, if $|\lambda|=1$, use (B.2) to obtain
\[
\begin{bmatrix}A&B_{2}\\ C_{1}&D_{12}\end{bmatrix}^{*}
\begin{bmatrix}X_{M}&0\\ 0&I\end{bmatrix}
\begin{bmatrix}A&B_{2}\\ C_{1}&D_{12}\end{bmatrix}
=\begin{bmatrix}X_{M}-\Delta_{M}&0\\ 0&0\end{bmatrix}
+\begin{bmatrix}F_{M}^{*}\\ I\end{bmatrix}R_{M}\begin{bmatrix}F_{M}&I\end{bmatrix}.
\]
Since $(A-B_{2}F_{M})x=\lambda x$, $(C_{1}-D_{12}F_{M})x=0$ and $\Delta_{M}x=0$, multiplying on the right by $\begin{bmatrix}I\\ -F_{M}\end{bmatrix}x$ results in
\[
\lambda\begin{bmatrix}A^{*}\\ B_{2}^{*}\end{bmatrix}X_{M}x
=\begin{bmatrix}X_{M}x\\ 0\end{bmatrix}.
\]
Therefore, since $(A,B_{2})$ is stabilizable, $|\lambda|=1$ implies that $X_{M}x=0$. Thus case (b) implies $X_{M}x=0$ and the claim is established.

Claim: Case (b) implies $|\lambda|<1$. Suppose case (b) holds. Then $X_{M}x=0$. We now consider the implications of this fact.

If $M=0$ (i.e., the horizon length is zero), we have
\[
\begin{bmatrix}A&B_{2}\\ C_{1}&D_{12}\\ \Delta&0\end{bmatrix}
\begin{bmatrix}I\\ -F_{M}\end{bmatrix}x
=\begin{bmatrix}\lambda x\\ 0\\ 0\end{bmatrix}.
\]
Since $x\neq 0$, we conclude that $|\lambda|<1$ from the assumption that (B.6.4) has full column rank for all $|\lambda|\geq 1$.

If $M\geq 1$, consider the system
\[
\begin{bmatrix}x_{k+1}\\ z_{k}\end{bmatrix}
=\begin{bmatrix}A&B_{2}\\ C_{1}&D_{12}\end{bmatrix}
\begin{bmatrix}x_{k}\\ u_{k}\end{bmatrix},\qquad x_{N+1-M}=x.
\]
Then, by completing the square,
\begin{align*}
\sum_{k=N+1-M}^{N}z_{k}^{*}z_{k}+x_{N+1}^{*}\Delta x_{N+1}
&=x^{*}X_{M}x+\sum_{k=N+1-M}^{N}(u_{k}-u_{k}^{*})^{*}R(k)(u_{k}-u_{k}^{*})\\
&=\sum_{k=N+1-M}^{N}(u_{k}-u_{k}^{*})^{*}R(k)(u_{k}-u_{k}^{*}).
\end{align*}
Therefore, the control strategy $u_{k}=u_{k}^{*}=-R(k)^{-1}L(k)x_{k}$ results in $z_{k}=0$ for $k=N+1-M,\ldots,N$ and $\Delta x_{N+1}=0$, since the left-hand side of the above identity is nonnegative and the right-hand side is zero when $u=u^{*}$. Since $(A-B_{2}F_{M})x=\lambda x$ and $(C_{1}-D_{12}F_{M})x=0$, the control strategy $u_{k}=-F_{M}x_{k}$ also results in $z_{k}=0$. Since $D_{12}^{*}D_{12}>0$, this implies that the controls $u_{k}=-F_{M}x_{k}$ and $u_{k}=u_{k}^{*}$ are identical. Consequently, the state trajectories with $u_{k}=-F_{M}x_{k}$ and $u_{k}=u_{k}^{*}$ are identical. Since the state trajectory resulting from $u_{k}=-F_{M}x_{k}$ is $\lambda^{k+M-N-1}x$ and the state trajectory associated with $u_{k}=u_{k}^{*}$ satisfies $\Delta x_{N+1}=0$, we conclude that $\Delta\lambda^{M}x=0$. We now have
\[
\begin{bmatrix}A&B_{2}\\ C_{1}&D_{12}\\ \Delta&0\end{bmatrix}
\begin{bmatrix}I\\ -F_{M}\end{bmatrix}\lambda^{M}x
=\begin{bmatrix}\lambda^{M+1}x\\ 0\\ 0\end{bmatrix}.
\]
Invoking the assumption that (B.6.4) has full column rank for all $|\lambda|\geq 1$, we conclude that $|\lambda|<1$ or $\lambda^{M}x=0$, which implies $\lambda=0$ since $x\neq 0$.

This completes the proof that $A-B_{2}F_{M}$ is asymptotically stable.

The cost of the control law $F_{M}$ is
\[
\|z\|_{2}^{2}=x_{0}^{*}P_{M}x_{0}-\|\Delta_{M}^{\frac12}x\|_{2}^{2}
\leq x_{0}^{*}P_{M}x_{0}.
\]
5. Since $F_{M}\to F=R^{-1}L$ as $M\to\infty$ and $|\lambda_{i}(A-B_{2}F_{M})|<1$ for all $i$ and all $M$, we must have that $|\lambda_{i}(A-B_{2}F)|\leq 1$ for all $i$. To prove that strict inequality holds, we must show that $(A-B_{2}F)x=e^{j\theta}x$ implies $x=0$.

Suppose $(A-B_{2}F)x=e^{j\theta}x$. Write the Riccati equation as
\[
X_{2}=(A-B_{2}F)^{*}X_{2}(A-B_{2}F)+(C_{1}-D_{12}F)^{*}(C_{1}-D_{12}F).
\]
Multiplying on the left by $x^{*}$ and on the right by $x$, we conclude that $(C_{1}-D_{12}F)x=0$. Hence
\[
\begin{bmatrix}A-e^{j\theta}I&B_{2}\\ C_{1}&D_{12}\end{bmatrix}
\begin{bmatrix}I\\ -F\end{bmatrix}x
=\begin{bmatrix}0\\ 0\end{bmatrix}.
\]
Consequently, if (B.6.7) has full column rank for all $\theta$, we must have $x=0$. Thus $A-B_{2}F$ is asymptotically stable.

Conversely, suppose
\[
\begin{bmatrix}A-e^{j\theta}I&B_{2}\\ C_{1}&D_{12}\end{bmatrix}
\begin{bmatrix}x\\ u\end{bmatrix}
=\begin{bmatrix}0\\ 0\end{bmatrix}\qquad\text{(B.4)}
\]
for some $\theta$ and some $x$, $u$ not both zero. Write the Riccati equation as
\[
\begin{bmatrix}A&B_{2}\\ C_{1}&D_{12}\end{bmatrix}^{*}
\begin{bmatrix}X_{2}&0\\ 0&I\end{bmatrix}
\begin{bmatrix}A&B_{2}\\ C_{1}&D_{12}\end{bmatrix}
=\begin{bmatrix}X_{2}&0\\ 0&0\end{bmatrix}
+\begin{bmatrix}L^{*}R^{-1}\\ I\end{bmatrix}R\begin{bmatrix}R^{-1}L&I\end{bmatrix}.
\]
Multiplying on the left by $\begin{bmatrix}x^{*}&u^{*}\end{bmatrix}$ and on the right by $\begin{bmatrix}x\\ u\end{bmatrix}$, we see that
\[
x^{*}X_{2}x=x^{*}X_{2}x+(R^{-1}Lx+u)^{*}R(R^{-1}Lx+u)
\]
and hence $u=-R^{-1}Lx$. Substituting into (B.4) gives $(A-B_{2}R^{-1}L)x=e^{j\theta}x$, and we conclude that $A-B_{2}R^{-1}L$ has an eigenvalue on the unit circle.
6. By completing the square with $X_{2}$, the cost associated with any stabilizing controller satisfies
\[
\|z\|_{2}^{2}=x_{0}^{*}X_{2}x_{0}+\|R^{\frac12}(u+R^{-1}Lx)\|_{2}^{2}.
\]
Hence $\|z\|_{2}^{2}\geq x_{0}^{*}X_{2}x_{0}$ for any stabilizing controller. The lower bound $x_{0}^{*}X_{2}x_{0}$ can only be achieved by the controller $u=-R^{-1}Lx$, which is stabilizing if and only if (B.6.7) has full column rank for all real $\theta$.
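The stationary solution and the stability of the optimal gain can be illustrated numerically (a scalar check added here, not from the manual):

```python
# scalar discrete LQ: x_{k+1} = a x_k + b u_k, z_k = (c x_k, u_k)
a, b, c = 1.2, 1.0, 1.0   # unstable open loop, stabilizable and detectable

# value iteration X <- c^2 + a^2 X - L^2/R converges to the stabilizing
# solution of the algebraic Riccati equation
X = 0.0
for _ in range(500):
    R = 1.0 + b * b * X
    L = b * a * X
    X = c * c + a * a * X - L * L / R

R = 1.0 + b * b * X
L = b * a * X
residual = c * c + a * a * X - L * L / R - X
assert abs(residual) < 1e-10

# the optimal gain u = -R^{-1} L x is stabilizing: |a - b R^{-1} L| < 1
assert abs(a - b * L / R) < 1.0
```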
Solution B.8.
1. Complete the square with X
2
and with X
2
(k, N + 1, ) to obtain
N

0
z

k
z
k
+x

N+1
X
2
x
N+1
= x

0
X
2
x
0
+
N

0
(u
k
u

k
)

R(u
k
u

k
)
N

0
z

k
z
k
+x

N+1
x
N+1
=
N

0
(u
k
u

k
)

R(u
k
u

k
)
+x

0
X
2
(0, N + 1, )x
0
.
Subtracting these gives the stated identity (B.6.8).
The minimization of the left-hand side of (B.6.8) is an LQ problem, although
the terminal penalty matrix may or may not be nonnegative denite and
consequently we do not know that a solution to such problems exists in general.
In this particular case, a solution does exist, because the right-hand side of
(B.6.8) shows that u
k
= u

k
is the optimal control and the optimal cost is
x

0
(X
2
(0, N + 1, ) X
2
)x
0
. The objective function and state dynamics for
the problem of minimizing the left-hand side of (B.6.8) can be written as
_
x
k+1
R
1
2
(u
k
u

k
)
_
=
_
AB
2
F B
2
0 R
1
2
_ _
x
k
u
k
u

k
_
,
in which F = R
1
L is the optimal feedback gain, (i.e., u

k
= Fx
k
). Hence,
the the Riccati equation associated with the minimization of the left-hand side
of (B.6.8) is (B.6.9) and the minimum cost is x

0
(0)x
0
. Since we concluded
from the right-hand side of (B.6.8) that the minimum cost is x

0
(X
2
(0, N +
1, )X
2
)x
0
, it follows that (0) = X
2
(0, N+1, )X
2
, since x
0
is arbitrary.
By time invariance, (k) = X
2
(k, N + 1, ) X
2
.
2. This can be quite tricky if you take a brute force approach, which is one reason
why the preceding argument is so delightful. It also gives a clue about the
manipulations, since the argument above is about optimizing something you
already know the optimal control for.
172 SOLUTIONS TO PROBLEMS IN APPENDIX B
Write the two Riccati equations as
_
A B
2
C
1
D
12
_

_
X
2
(k + 1) 0
0 I
_ _
A B
2
C
1
D
12
_
=
_
X
2
(k) 0
0 0
_
+
_
L

(k)
R(k)
_
R
1
(k)
_
L(k) R(k)

and
_
A B
2
C
1
D
12
_

_
X
2
0
0 I
_ _
A B
2
C
1
D
12
_
=
_
X
2
0
0 0
_
+
_
L

R
_
R
1
_
L R

.
Subtract them to obtain
_
A B
2
C
1
D
12
_

_
(k + 1) 0
0 0
_ _
A B
2
C
1
D
12
_
=
_
(k) 0
0 0
_

_
L

R
_
R
1
_
L

R
_

+
_
L

(k)
R(k)
_
R
1
(k)
_
L

(k)
R(k)
_

.
Multiply on the right by
_
I 0
R
1
L I
_
and by its transpose on the left to
obtain
_
ABR
1
L B
2
C
1
D
12
R
1
L D
12
_

_
(k + 1) 0
0 0
_ _
ABR
1
L B
2
C
1
D
12
R
1
L D
12
_
=
_
L

(k) L

R
1
R(k)
R(k)
_
R
1
(k)
_
L

(k) L

R
1
R(k)
R(k)
_

+
_
(k) 0
0 R
_
. (B.5)
Now R(k) = R +B

2
(k + 1)B
2
and
L(k) R(k)R
1
L = L +B

2
(k + 1)A(R +B
2
(k + 1)B
2
)R
1
L
= B

2
(k + 1)(AB
2
R
1
L).
Therefore, the (1, 1)-block of (B.5) is the desired Riccati equation for (k).
Solution B.9. As in the previous problem, the calculations can become horren-
dous if a brute force approach is adopted. The technique of the previous problem
provides the remedy (which is why that problem is there).
DISCRETE-TIME 1

SYNTHESIS THEORY 173


1. Use the Schur decomposition (B.2.7) to write the Riccati equation for X

as
X

= A

A+C

1
C
1
L

2
R
1
3
L
2
L

1
L

,
in which L = L
1
R

2
R
1
3
L
2
as before. Combining this with the denitions
of L and R, we may write
_
A B
2
C
1
D
12
_

_
X

0
0 I
_ _
A B
2
C
1
D
12
_
=
_
X

+L

1
L

0
0 0
_
+
_
L

2
R
3
_
R
1
3
_
L
2
R
3

.
We also have for the X
2
Riccati equation
_
A B
2
C
1
D
12
_

_
X
2
0
0 I
_ _
A B
2
C
1
D
12
_
=
_
X
2
0
0 0
_
+
_

L

R
_

R
1
_

L

R
_
,
in which

R = D

12
D
12
+B

2
X
2
B
2
and

L = D

12
C
1
+B

2
X
2
A. Subtracting these
gives
_
A B
2
C
1
D
12
_

_
0
0 0
_ _
A B
2
C
1
D
12
_
=
_
+L

1
L

0
0 0
_
+
_
L

2
R
3
_
R
1
3
_
L

2
R
3
_

_

L

R
_

R
1
_

L

R
_

.
Multiply by
_
I 0
R
1
3
L
2
I
_
on the right, and by its transpose on the left,
to obtain
_
AB
2
R
1
3
L
2
B
2
C
1
D
12
R
1
3
L
2
D
12
_

_
0
0 0
_ _
AB
2
R
1
3
L
2
B
2
C
1
D
12
R
1
3
L
2
D
12
_
=
_

L

2
R
1
3

R
_

R
1
_

L

2
R
1
3

R
_

+
_
+L

1
L

0
0 R
3
_
. (B.6)
Now

R = R
3
B

2
B
2
and

L = L
2
B

2
A. Therefore

L

RR
1
3
L
2
= L
2
B

2
A(R
3
B

2
B
2
)R
1
3
L
2
= B

2
(AB
2
R
1
3
L
2
).
Therefore, the (1, 1)-block of (B.6) is the desired Riccati equation.
174 SOLUTIONS TO PROBLEMS IN APPENDIX B
2. Recall, from the monotonicity property of the solution X

(k, N+1, X
2
), that
X

X
2
. Therefore, 0. Also, from (B.2.33), (A B
2
R
1
3
L
2
, L

) is de-
tectable. Hence, since < 0, we conclude that AB
2
R
1
3
L
2
is asymptotically
stable from the equation established in Part 1.
3. If x = 0, then the equation established in Part 1 gives L

x = 0. From
(B.2.33), we therefore conclude that (ABR
1
L)x = (AB
2
R
1
3
L
2
)x. (Use
equation (B.2.33).)
Solution B.10.
1. Write the two Riccati equations as
_
A B
C D
_

_
X

(k + 1) 0
0 J
_ _
A B
C D
_
=
_
X

(k) 0
0 0
_
+
_
L

(k)
R(k)
_
R
1
(k)
_
L(k) R(k)

and
_
A B
C D
_

_
X

0
0 J
_ _
A B
C D
_
=
_
X

0
0 0
_
+
_
L

R
_
R
1
_
L R

.
Subtract them to obtain
_
A B
C D
_

_
(k + 1) 0
0 0
_ _
A B
C D
_
=
_
(k) 0
0 0
_
+
_
L

R
_
R
1
_
L

R
_

_
L

(k)
R(k)
_
R
1
(k)
_
L

(k)
R(k)
_

Multiply on the right by


_
I 0
R
1
L I
_
and by its transpose on the left to
obtain
_
ABR
1
L B
2
C DR
1
L D
_

_
(k + 1) 0
0 0
_ _
ABR
1
L B
C DR
1
L D
_
=
_
L

(k) L

R
1
R(k)
R(k)
_
R
1
(k)
_
L

(k) L

R
1
R(k)
R(k)
_

+
_
(k) 0
0 R
_
. (B.7)
DISCRETE-TIME 1

SYNTHESIS THEORY 175


Now R(k) = R B

(k + 1)B and
L(k) R(k)R
1
L = L B

(k + 1)A(R B(k + 1)B)R


1
L
= B

(k + 1)(ABR
1
L).
Therefore, the (1, 1)-block of (B.7) is the desired Riccati equation for
2. This is an intricate argument similar to that used in the solution of Prob-
lem B.7. It involves guring out what happens at the terminal time N + 1
from something that happens at time k and using the properties of the ter-
minal condition.
Suppose x ,= 0 and (X

X(k, N + 1, X
2
))x = 0. That is, (k)x = 0. Let
the initial condition for the dynamics be x
k
= x. Complete the square with
X

to obtain
|z|
2
2,[k,N]

2
|w|
2
2,[k,N]
+x
N+1
X

x
N+1
= |r|
2
2,[k,N]

2
|s|
2
2,[k,N]
+x

x.
Also, complete the square with X

(k, N + 1, X
2
) to obtain
|z|
2
2,[k,N]

2
|w|
2
2,[k,N]
+x
N+1
X
2
x
N+1
= |r|
2
2,[k,N]

2
|s|
2
2,[k,N]
+x

X(k, N + 1, X
2
)x.
Now subtract to obtain
x
N+1
(X

X
2
)x
N+1
= |r|
2
2,[k,N]
+
2
|s|
2
2,[k,N]

2
|s|
2
2,[k,N]
|r|
2
2,[k,N]
.
If we choose u such that r = 0 (i.e., u = u

for the innite-horizon problem)


and w such that s = 0 (i.e., w = w

[k,N]
, the worst w for the nite-horizon
problem), then we have
x
N+1
(X

X
2
)x
N+1
=
2
|s|
2
2,[k,N]
|r|
2
2,[k,N]
.
Since X

X
2
0, we must have r = 0 and s = 0. This implies that the
optimal controls for the nite- and innite-horizon problems are identical on
[k, N]. We also see that (X

X
2
)x
N+1
= 0. Since u = u

and w = w

, the
state dynamics reduce to
x
i+1
= (ABR
1
L)x
i
, x
k
= x.
Multiplying the equation for X

X
2
, which is given in (B.2.39), on the left
by x

N+1
and on the right by x
N+1
we conclude that (AB
2
R
1
3
L
2
)x
N+1
is
also in the kernel of X

X
2
and that L

x
N+1
= 0, because the terms on
the right-hand side are all nonnegative and the left-hand side is zero. From
(B.2.33), we see that (AB
2
R
1
3
L
2
)x
N+1
= (ABR
1
L)x
N+1
. Therefore,
176 SOLUTIONS TO PROBLEMS IN APPENDIX B
there is an eigenvalue of ABR
1
L in the kernel of X

X
2
. This eigenvalue
must be asymptotically stable because AB
2
R
1
3
L
2
is.
Conclusion: If x ker
_
X

(k, N+1, X
2
)
_
, the application of the dynam-
ics ABR
1
L leads to x
N+1
being in an invariant subspace of ker(X

X
2
)
after a nite number of steps. This invariant subspace is asymptotically sta-
ble by the stability of A B
2
R
1
3
L
2
. Hence the subspace corresponding to
ker
_
X

(k, N + 1, X
2
)
_
is asymptotically stable.
It follows that if
X

X
2
=
_

1
0
0 0
_
in which
1
is nonsingular, then
X

(k, N + 1, X
2
) =
_

1
(k) 0
0 0
_
in which
1
(k) is nonsingular for all k. Furthermore,
ABR
1
L =
_

A
11
0

A
21

A
22
_
.
The argument of the text shows that

A
11
is asymptotically stable, while

A
22
is asymptotically stable due to the stabilizing properties of X
2
.
Solution B.11. The
2
[0, N] adjoint of G is the system G

that has the property


u, Gw = G

u, w for all w, u. The inner product is dened by


w, z =
N

0
z

k
w
k
.
Suppose z = Gw is generated by
_
x
k+1
z
k
_
=
_
A B
C D
_
(k)
_
x
k
w
k
_
, x
0
= 0,
and p
k
is an arbitrary
2
[0, N] sequence such that p
N
= 0. Then
u, z =
N

0
z

k
u
k
+x

k+1
p
k
x

k
p
k1
=
N

0
_
x
k+1
z
k
_

_
p
k
u
k
_
x

k
p
k1
=
N

0
_
x
k
w
k
_

_
A B
C D
_

(k)
_
p
k
u
k
_
x

k
p
k1
DISCRETE-TIME 1

SYNTHESIS THEORY 177


Therefore, if we choose the sequence p
k
such that p
k1
= A

(k)p
k
+ C

(k)u
k
and
dene y
k
by
_
p
k1
y
k
_
=
_
A B
C D
_

(k)
_
p
k
u
k
_
, p
N
= 0,
then
u, z =
N

0
_
x
k
w
k
_

_
p
k1
y
k
_
x

k
p
k1
=
N

0
w
k

y
k
= y, w.
Hence y = G

u.
Solution B.12.
x
k+1
= Ax
k
+B
1
w,
y
k
= C
2
x
k
+D
21
w
k
,
x
k+1
= A x
k
+H(y
k
C
2
x
k
),
in which H = M
2
S
1
3
. Subtracting these equations results in the error dynamics
x
k+1
x
k+1
= (AHC
2
)( x
k
x
k
) + (HD
21
B
1
)w
k
.
By standard results on linear systems driven by white noise processes (see, for
example, [12]),
c( x
k+1
x
k+1
)( x
k+1
x
k+1
)

= P(k)
in which P(k) is the solution to the linear matrix equation
P(k + 1) = (AHC
2
)(k)P(k)(AHC
2
)

(k)
+ (HD
21
B
1
)(k)(HD
21
B
1
)

(k), P(0) = 0.
(We have P(0) = 0 because the initial state is assumed to be know; otherwise, we
set P(0) to the initial error covariance.)
Now
Y

(k + 1) = B
1
B

1
+AY

(k)A

M
2
S
1
3
M

2
M

1
M

in which
(k) = S
1
(k) S
2
(k)S
1
3
(k)S

2
(k)
M

(k) = M
1
(k) S
2
(k)S
1
3
(k)M
2
(k).
178 SOLUTIONS TO PROBLEMS IN APPENDIX B
Re-write the Riccati equation as
Y (k + 1) = (AHC
2
)Y

(k)(AHC
2
)

+M
2
S
1
3
M

2
M

1
M

+ (HD
21
B
1
)(HD
21
B
1
)

.
Hence
Y

(k + 1) P(k + 1) = (AHC
2
)(Y

(k) P(k))(AHC
2
)

+M
2
S
1
3
M

2
M

1
M

.
Since < 0 and Y

(0) P(0) = 0, we have Y

(k) P(k) 0 for all k. Hence


c( x
k+1
x
k+1
)( x
k+1
x
k+1
)

= P(k) Y

(k).

You might also like