Linear System Theory, 2/E Solutions Manual

CHAPTER 1

Solution 1.1
(a) For $k = 2$, $(A + B)^2 = A^2 + AB + BA + B^2$. If $AB = BA$, then $(A + B)^2 = A^2 + 2AB + B^2$. In general if $AB = BA$, then the $k$-fold product $(A + B)^k$ can be written as a sum of terms of the form $A^j B^{k-j}$, $j = 0, \ldots, k$. The number of terms that can be written as $A^j B^{k-j}$ is given by the binomial coefficient $\binom{k}{j}$. Therefore $AB = BA$ implies
$$(A + B)^k = \sum_{j=0}^{k} \binom{k}{j} A^j B^{k-j}$$
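As a quick numerical check of this expansion (not part of the original solution), the following NumPy sketch builds two commuting matrices as polynomials in an arbitrary matrix M and compares both sides for k = 5:

```python
import numpy as np
from math import comb

# Two matrices that commute: polynomials in the same matrix M always do.
M = np.array([[1.0, 2.0],
              [0.0, 3.0]])
A = 2.0 * M + np.eye(2)
B = M @ M

k = 5
lhs = np.linalg.matrix_power(A + B, k)
rhs = sum(comb(k, j) * np.linalg.matrix_power(A, j) @ np.linalg.matrix_power(B, k - j)
          for j in range(k + 1))

print(np.allclose(A @ B, B @ A))   # True: A and B commute
print(np.allclose(lhs, rhs))       # True: the binomial expansion holds
```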

(b) Write
$$\det[\lambda I - A(t)] = \lambda^n + a_{n-1}(t)\lambda^{n-1} + \cdots + a_1(t)\lambda + a_0(t)$$
where invertibility of $A(t)$ implies $a_0(t) \neq 0$. The Cayley-Hamilton theorem implies
$$A^n(t) + a_{n-1}(t)A^{n-1}(t) + \cdots + a_0(t)I = 0$$
for all $t$. Multiplying through by $A^{-1}(t)$ yields
$$A^{-1}(t) = \frac{-a_1(t)I - \cdots - a_{n-1}(t)A^{n-2}(t) - A^{n-1}(t)}{a_0(t)}$$
for all $t$. Since $a_0(t) = \det[-A(t)]$, $|a_0(t)| = |\det A(t)|$. Assume $\varepsilon > 0$ is such that $|\det A(t)| \geq \varepsilon$ for all $t$. Since $\|A(t)\| \leq \alpha$ we have $|a_{ij}(t)| \leq \alpha$, and thus there exists a $\gamma$ such that $|a_j(t)| \leq \gamma$ for all $t$. Then, for all $t$,
$$\|A^{-1}(t)\| = \frac{\|a_1(t)I + \cdots + A^{n-1}(t)\|}{|\det A(t)|} \leq \frac{\gamma + \gamma\alpha + \cdots + \alpha^{n-1}}{\varepsilon} \triangleq \beta$$
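A minimal NumPy sketch (the 3×3 matrix is an arbitrary invertible example, not from the text) that forms the inverse from the characteristic-polynomial coefficients exactly as above and compares it with a directly computed inverse:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])
n = A.shape[0]

# np.poly(A) returns the characteristic polynomial coefficients
# [1, a_{n-1}, ..., a_1, a_0]; reverse so that a[k] multiplies lambda^k.
a = np.poly(A)[::-1]

# A^{-1} = -(a_1 I + a_2 A + ... + a_{n-1} A^{n-2} + A^{n-1}) / a_0
num = sum(a[k] * np.linalg.matrix_power(A, k - 1) for k in range(1, n))
num = num + np.linalg.matrix_power(A, n - 1)
A_inv = -num / a[0]

print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```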

Solution 1.2
(a) If $\lambda$ is an eigenvalue of $A$, then recursive use of $Ap = \lambda p$ shows that $\lambda^k$ is an eigenvalue of $A^k$. However to show multiplicities are preserved is more difficult, and apparently requires Jordan form, or at least results on similarity to upper triangular form.

(b) If $\lambda$ is an eigenvalue of invertible $A$, then $\lambda$ is nonzero and $Ap = \lambda p$ implies $A^{-1}p = (1/\lambda)p$. As in (a), addressing preservation of multiplicities is more difficult.
(c) $A^T$ has eigenvalues $\lambda_1, \ldots, \lambda_n$ since $\det(\lambda I - A^T) = \det\left[(\lambda I - A)^T\right] = \det(\lambda I - A)$.
(d) $A^H$ has eigenvalues $\bar{\lambda}_1, \ldots, \bar{\lambda}_n$ using (c) and the fact that the determinant (sum of products) of a conjugate is the conjugate of the determinant. That is,

$$\det(\bar{\lambda} I - A^H) = \det\left[(\lambda I - A)^H\right] = \overline{\det(\lambda I - A)}$$
(e) $\alpha A$ has eigenvalues $\alpha\lambda_1, \ldots, \alpha\lambda_n$ since $Ap = \lambda p$ implies $(\alpha A)p = (\alpha\lambda)p$.
(f) Eigenvalues of $A^T A$ are not nicely related to eigenvalues of $A$. Consider the example
$$A = \begin{bmatrix} 0 & \alpha \\ 0 & 0 \end{bmatrix}, \qquad A^T A = \begin{bmatrix} 0 & 0 \\ 0 & \alpha^2 \end{bmatrix}$$
where the eigenvalues of $A$ are both zero, and the eigenvalues of $A^T A$ are $0$ and $\alpha^2$. (If $A$ is symmetric, then (a) applies.)
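The discrepancy is easy to confirm numerically; the sketch below (with the arbitrary value α = 3, not from the text) prints both spectra:

```python
import numpy as np

alpha = 3.0
A = np.array([[0.0, alpha],
              [0.0, 0.0]])

print(np.linalg.eigvals(A))        # both eigenvalues of A are 0
print(np.linalg.eigvals(A.T @ A))  # eigenvalues 0 and alpha**2 = 9
```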

Solution 1.3
(a) If the eigenvalues of $A$ are all zero, then $\det(\lambda I - A) = \lambda^n$ and the Cayley-Hamilton theorem shows that $A$ is nilpotent. On the other hand if one eigenvalue, say $\lambda_1$, is nonzero, let $p$ be a corresponding eigenvector. Then $A^k p = \lambda_1^k p \neq 0$ for all $k \geq 0$, and $A$ cannot be nilpotent.
(b) Suppose $Q$ is real and symmetric, and $\lambda$ is an eigenvalue of $Q$. Then $\bar{\lambda}$ also is an eigenvalue. From the eigenvalue/eigenvector equation $Qp = \lambda p$ we get $p^H Q p = \lambda\, p^H p$. Also $Q\bar{p} = \bar{\lambda}\bar{p}$, and transposing gives $p^H Q p = \bar{\lambda}\, p^H p$. Subtracting the two results gives $(\lambda - \bar{\lambda})p^H p = 0$. Since $p \neq 0$, this gives $\lambda = \bar{\lambda}$, that is, $\lambda$ is real.
(c) If $A$ is upper triangular, then $\lambda I - A$ is upper triangular. Recursive Laplace expansion of the determinant about the first column gives
$$\det(\lambda I - A) = (\lambda - a_{11}) \cdots (\lambda - a_{nn})$$
which implies the eigenvalues of $A$ are the diagonal entries $a_{11}, \ldots, a_{nn}$.
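A short NumPy check of parts (a) and (c), using arbitrary example matrices rather than anything from the text:

```python
import numpy as np

# (a) All eigenvalues zero, so Cayley-Hamilton gives N^n = 0.
N = np.array([[0.0, 1.0, 5.0],
              [0.0, 0.0, 2.0],
              [0.0, 0.0, 0.0]])
print(np.allclose(np.linalg.eigvals(N), 0.0))          # True
print(np.allclose(np.linalg.matrix_power(N, 3), 0.0))  # True: N is nilpotent

# (c) Upper triangular: eigenvalues are the diagonal entries.
T = np.triu(np.arange(1.0, 10.0).reshape(3, 3))
print(np.sort(np.linalg.eigvals(T)))                   # [1. 5. 9.] = diagonal of T
```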

Solution 1.4
(a)
$$A = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \quad\text{implies}\quad A^T A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \quad\text{implies}\quad \|A\| = 1$$
(b)
$$A = \begin{bmatrix} 3 & 1 \\ 1 & 3 \end{bmatrix} \quad\text{implies}\quad A^T A = \begin{bmatrix} 10 & 6 \\ 6 & 10 \end{bmatrix}$$
Then
$$\det(\lambda I - A^T A) = (\lambda - 16)(\lambda - 4)$$
which implies $\|A\| = 4$.
(c)
$$A = \begin{bmatrix} 1-i & 0 \\ 0 & 1+i \end{bmatrix} \quad\text{implies}\quad A^H A = \begin{bmatrix} (1+i)(1-i) & 0 \\ 0 & (1-i)(1+i) \end{bmatrix} = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}$$
This gives $\|A\| = \sqrt{2}$.
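The three norms can be confirmed numerically; this sketch (not from the text) computes $\sqrt{\lambda_{\max}(A^H A)}$ and compares it with NumPy's built-in spectral norm:

```python
import numpy as np

A1 = np.array([[0.0, 0.0], [1.0, 0.0]])
A2 = np.array([[3.0, 1.0], [1.0, 3.0]])
A3 = np.array([[1 - 1j, 0], [0, 1 + 1j]])

for A in (A1, A2, A3):
    lam_max = np.max(np.linalg.eigvalsh(A.conj().T @ A))   # largest eigenvalue of A^H A
    print(np.sqrt(lam_max), np.linalg.norm(A, 2))          # agree: 1, 4, sqrt(2)
```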

Solution 1.5 Let
$$A = \begin{bmatrix} 1/\alpha & \alpha \\ 0 & 1/\alpha \end{bmatrix}, \qquad \alpha > 1$$
Then both eigenvalues are $1/\alpha$ and, using an inequality on text page 7,
$$\|A\| \geq \max_{1 \leq i,j \leq 2} |a_{ij}| = \alpha$$
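A numerical illustration with the arbitrary choice α = 10 (not part of the original solution):

```python
import numpy as np

alpha = 10.0
A = np.array([[1.0 / alpha, alpha],
              [0.0,         1.0 / alpha]])

print(np.linalg.eigvals(A))    # both eigenvalues are 1/alpha = 0.1
print(np.linalg.norm(A, 2))    # spectral norm is about 10, i.e. >= alpha
```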


Solution 1.6 By definition of the spectral norm, for any $\alpha \neq 0$ we can write
$$\|A\| = \max_{\|x\|=1} \|Ax\| = \max_{\|x\|=1} \frac{\|Ax\|}{\|x\|} = \max_{\|\alpha x\|=1} \frac{\|A\,\alpha x\|}{\|\alpha x\|} = \max_{\|x\|=1/|\alpha|} \frac{\|\alpha A x\|}{\|\alpha x\|}$$
Since this holds for any $\alpha \neq 0$,
$$\|A\| = \max_{\|x\| \neq 0} \frac{\|Ax\|}{\|x\|} = \max_{x \neq 0} \frac{\|Ax\|}{\|x\|}$$
Therefore
$$\|A\| \geq \frac{\|Ax\|}{\|x\|}$$
for any $x \neq 0$, which gives
$$\|Ax\| \leq \|A\|\,\|x\|$$

Solution 1.7 By definition of the spectral norm,
$$\|AB\| = \max_{\|x\|=1} \|(AB)x\| = \max_{\|x\|=1} \|A(Bx)\| \leq \max_{\|x\|=1} \left\{\|A\|\,\|Bx\|\right\}, \quad\text{by Exercise 1.6}$$
$$= \|A\| \max_{\|x\|=1} \|Bx\| = \|A\|\,\|B\|$$
If $A$ is invertible, then $A A^{-1} = I$ and the obvious $\|I\| = 1$ give
$$1 = \|A A^{-1}\| \leq \|A\|\,\|A^{-1}\|$$
Therefore
$$\|A^{-1}\| \geq \frac{1}{\|A\|}$$
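A quick NumPy sketch of both conclusions on randomly generated matrices (the seed and sizes are arbitrary choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

def norm(M):
    return np.linalg.norm(M, 2)   # induced (spectral) 2-norm

print(norm(A @ B) <= norm(A) * norm(B) + 1e-12)         # submultiplicativity
print(norm(np.linalg.inv(A)) >= 1.0 / norm(A) - 1e-12)  # from 1 = ||A A^{-1}||
```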

Solution 1.8 We use the following easily verified facts about partitioned vectors:
$$\left\| \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \right\| \geq \|x_1\|, \|x_2\|; \qquad \left\| \begin{bmatrix} x_1 \\ 0 \end{bmatrix} \right\| = \|x_1\|, \quad \left\| \begin{bmatrix} 0 \\ x_2 \end{bmatrix} \right\| = \|x_2\|$$
Write
$$Ax = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} A_{11}x_1 + A_{12}x_2 \\ A_{21}x_1 + A_{22}x_2 \end{bmatrix}$$
Then for $A_{11}$, for example,
$$\|A\| = \max_{\|x\|=1} \|Ax\| \geq \max_{\|x\|=1} \|A_{11}x_1 + A_{12}x_2\| \geq \max_{\|x_1\|=1} \|A_{11}x_1\| = \|A_{11}\|$$
The other partitions are handled similarly. The last part is easy from the definition of induced norm. For example if
$$A = \begin{bmatrix} 0 & A_{12} \\ 0 & 0 \end{bmatrix}$$
then partitioning the vector $x$ similarly we see that
$$\max_{\|x\|=1} \|Ax\| = \max_{\|x_2\|=1} \|A_{12}x_2\| = \|A_{12}\|$$
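The partition inequalities can be spot-checked numerically; in the sketch below the block sizes and random seed are arbitrary choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
A11, A12 = rng.standard_normal((2, 2)), rng.standard_normal((2, 3))
A21, A22 = rng.standard_normal((3, 2)), rng.standard_normal((3, 3))
A = np.block([[A11, A12], [A21, A22]])

def norm(M):
    return np.linalg.norm(M, 2)

# Every partition norm is bounded by the norm of the full matrix.
print(all(norm(Aij) <= norm(A) + 1e-12 for Aij in (A11, A12, A21, A22)))

# A block matrix with a single nonzero partition has exactly that partition's norm.
Z = np.block([[np.zeros((2, 2)), A12],
              [np.zeros((3, 2)), np.zeros((3, 3))]])
print(np.isclose(norm(Z), norm(A12)))
```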

Solution 1.9 By the Cauchy-Schwarz inequality, and $\|x^T\| = \|x\|$,
$$|x^T A x| \leq \|x^T A\|\,\|x\| = \|A^T x\|\,\|x\| \leq \|A^T\|\,\|x\|^2 = \|A\|\,\|x\|^2$$
This immediately gives
$$x^T A x \geq -\|A\|\,\|x\|^2$$
If $\lambda$ is an eigenvalue of $A$ and $x$ is a corresponding unity-norm eigenvector, then
$$|\lambda| = |\lambda|\,\|x\| = \|\lambda x\| = \|Ax\| \leq \|A\|\,\|x\| = \|A\|$$

Solution 1.10 Since $Q = Q^T$, $Q^T Q = Q^2$, and the eigenvalues of $Q^2$ are $\lambda_1^2, \ldots, \lambda_n^2$. Therefore
$$\|Q\| = \sqrt{\lambda_{\max}(Q^2)} = \max_{1 \leq i \leq n} |\lambda_i|$$
For the other equality Cauchy-Schwarz gives
$$|x^T Q x| \leq \|x^T\|\,\|Qx\| = \|Qx\|\,\|x\| \leq \|Q\|\,\|x\|^2 = \left[\max_{1 \leq i \leq n} |\lambda_i|\right] x^T x$$
Therefore $|x^T Q x| \leq \|Q\|$ for all unity-norm $x$. Choosing $x_a$ as a unity-norm eigenvector of $Q$ corresponding to the eigenvalue $\lambda_m$ that yields $\max_{1 \leq i \leq n} |\lambda_i|$ gives
$$|x_a^T Q x_a| = |\lambda_m|\, x_a^T x_a = \max_{1 \leq i \leq n} |\lambda_i|$$
Thus $\max_{\|x\|=1} |x^T Q x| = \|Q\|$.
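A NumPy sketch of both equalities for an arbitrary symmetric Q (not from the text); it also constructs the maximizing eigenvector $x_a$ used above:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
Q = (M + M.T) / 2.0          # real symmetric

lams = np.linalg.eigvalsh(Q)
print(np.isclose(np.linalg.norm(Q, 2), np.max(np.abs(lams))))   # ||Q|| = max |lambda_i|

# The maximum of |x^T Q x| over unit vectors is attained at an eigenvector
# belonging to the eigenvalue of largest magnitude.
vals, vecs = np.linalg.eigh(Q)
xa = vecs[:, np.argmax(np.abs(vals))]
print(np.isclose(abs(xa @ Q @ xa), np.linalg.norm(Q, 2)))        # True
```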

Solution 1.11 Since $\|Ax\| = \sqrt{(Ax)^T (Ax)} = \sqrt{x^T A^T A x}$,
$$\|A\| = \max_{\|x\|=1} \sqrt{x^T A^T A x} = \left[\max_{\|x\|=1} x^T A^T A x\right]^{1/2}$$
The Rayleigh-Ritz inequality gives, for all unity-norm $x$,
$$x^T A^T A x \leq \lambda_{\max}(A^T A)\, x^T x = \lambda_{\max}(A^T A)$$
and since $A^T A \geq 0$, $\lambda_{\max}(A^T A) \geq 0$. Choosing $x_a$ to be a unity-norm eigenvector corresponding to $\lambda_{\max}(A^T A)$ gives
$$x_a^T A^T A x_a = \lambda_{\max}(A^T A)$$
Thus
$$\max_{\|x\|=1} x^T A^T A x = \lambda_{\max}(A^T A)$$
so we have $\|A\| = \sqrt{\lambda_{\max}(A^T A)}$.

Solution 1.12 Since $A^T A > 0$ we have $\lambda_i(A^T A) > 0$, $i = 1, \ldots, n$, and $(A^T A)^{-1} > 0$. Then by Exercise 1.11,
$$\|A^{-1}\|^2 = \lambda_{\max}\left((A^T A)^{-1}\right) = \frac{1}{\lambda_{\min}(A^T A)} = \frac{\prod_{i=1}^{n} \lambda_i(A^T A)}{\lambda_{\min}(A^T A) \cdot \det(A^T A)} \leq \frac{\left[\lambda_{\max}(A^T A)\right]^{n-1}}{(\det A)^2} = \frac{\|A\|^{2(n-1)}}{(\det A)^2}$$
Therefore
$$\|A^{-1}\| \leq \frac{\|A\|^{n-1}}{|\det A|}$$
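A quick numerical check of the final bound on an arbitrary random matrix (not part of the original solution):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n))

lhs = np.linalg.norm(np.linalg.inv(A), 2)
rhs = np.linalg.norm(A, 2) ** (n - 1) / abs(np.linalg.det(A))
print(lhs <= rhs + 1e-12)    # ||A^{-1}|| <= ||A||^{n-1} / |det A|
```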

Solution 1.13 Assume $A \neq 0$, for the zero case is trivial. For any unity-norm $x$ and $y$,
$$|y^T A x| \leq \|y^T\|\,\|Ax\| \leq \|y\|\,\|A\|\,\|x\| = \|A\|$$
Therefore
$$\max_{\|x\|, \|y\| = 1} |y^T A x| \leq \|A\|$$
Now let unity-norm $x_a$ be such that $\|A x_a\| = \|A\|$, and let
$$y_a = \frac{A x_a}{\|A\|}$$
Then $\|y_a\| = 1$ and
$$y_a^T A x_a = \frac{x_a^T A^T A x_a}{\|A\|} = \frac{\|A x_a\|^2}{\|A\|} = \frac{\|A\|^2}{\|A\|} = \|A\|$$
Therefore
$$\max_{\|x\|, \|y\| = 1} |y^T A x| = \|A\|$$

Solution 1.14 The coefficients of the characteristic polynomial of a matrix are continuous functions of the matrix entries, since the determinant is a continuous function of the entries (sum of products). Also the roots of a polynomial are continuous functions of the coefficients. (A proof is given in Appendix A.4 of E.D. Sontag, Mathematical Control Theory, Springer-Verlag, New York, 1990.) Since a composition of continuous functions is a continuous function, the pointwise-in-$t$ eigenvalues of $A(t)$ are continuous in $t$.
This argument gives that the (nonnegative) eigenvalues of $A^T(t)A(t)$ are continuous in $t$. Then the maximum at each $t$ is continuous in $t$; to see this, plot two eigenvalue curves and consider their pointwise maximum. Finally, since the square root is a continuous function of nonnegative arguments, we conclude $\|A(t)\|$ is continuous in $t$.
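A numerical illustration of this continuity argument (not part of the original solution; the particular $A(t)$ below is an arbitrary continuously differentiable example):

```python
import numpy as np

# Evaluate ||A(t)|| = sqrt(lambda_max(A^T(t) A(t))) pointwise on a fine grid;
# neighbouring values differ only slightly, consistent with continuity in t.
def A(t):
    return np.array([[np.cos(t), t],
                     [0.0, np.sin(t)]])

ts = np.linspace(-2.0, 2.0, 2001)
norms = np.array([np.sqrt(np.max(np.linalg.eigvalsh(A(t).T @ A(t)))) for t in ts])
print(np.max(np.abs(np.diff(norms))))   # small, on the order of the grid spacing
```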
However for continuously differentiable $A(t)$, $\|A(t)\|$ need not be continuously differentiable in $t$. Consider the
