Linear System Theory and Design
Lecturer: Zhu Fanglai
Part 2

Chapter 4  State-Space Solutions and Realizations
4.1 Introduction
• We know that linear systems can be described by convolutions and, if lumped, by state-space equations.
• The main purpose of this chapter is to carry out a quantitative analysis of linear systems, that is, to discuss how to find the solutions of a linear system.
• We mainly discuss the solutions of systems described by state-space equations.
4.2 Solution of LTI State Equations
• Consider the linear time-invariant (LTI) state-space equation
    \dot{x}(t) = A x(t) + B u(t)          (4.2)
    y(t) = C x(t) + D u(t)                (4.3)
  where A \in R^{n \times n}, B \in R^{n \times p}, C \in R^{q \times n} and D \in R^{q \times p} are constant matrices.
• The purpose of this section is to find the solution excited by the initial state x(0) and the input u(t).
• To do this, we need the property
    \frac{d}{dt} e^{At} = A e^{At} = e^{At} A

Premultiplying e^{-At} on both sides of (4.2) yields
    e^{-At}\dot{x}(t) - e^{-At}Ax(t) = e^{-At}Bu(t)
which implies
    \frac{d}{dt}\big(e^{-At}x(t)\big) = e^{-At}Bu(t)
Integrating the above equation from 0 to t yields
    e^{-At}x(t) - x(0) = \int_0^t e^{-A\tau}Bu(\tau)\,d\tau
Premultiplying the above equation by e^{At}, we have
    x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau          (4.5)
This is the solution of (4.2). Substituting (4.5) into (4.3) yields the solution of (4.3) as
    y(t) = Ce^{At}x(0) + C\int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau + Du(t)          (4.7)

Obviously, to compute (4.5) or (4.7), we first need to compute e^{At}.
• Methods for computing e^{At}

1. Using Theorem 3.5: we are given f(\lambda) and an n \times n matrix A with characteristic polynomial
     \alpha(\lambda) = \prod_{i=1}^{m} (\lambda - \lambda_i)^{n_i}
   where n = \sum_{i=1}^{m} n_i. Define
     h(\lambda) = \beta_0 + \beta_1\lambda + \cdots + \beta_{n-1}\lambda^{n-1}
   The n unknown coefficients are determined by
     f^{(l)}(\lambda_i) = h^{(l)}(\lambda_i)   for l = 0, 1, \ldots, n_i - 1 and i = 1, 2, \ldots, m
   Here we say that h equals f on the spectrum of A. Then we have f(A) = h(A).
   If we set f(\lambda) = e^{\lambda t}, then e^{At} can be computed as
     e^{At} = f(A) = h(A)
• Example 3.7  Compute A^{100} with
    A = \begin{pmatrix} 0 & 1 \\ -1 & -2 \end{pmatrix}
  In order to compute A^{100}, we set f(\lambda) = \lambda^{100}. The characteristic polynomial of A is
    \alpha(\lambda) = (\lambda + 1)^2
  Let
    h(\lambda) = \beta_0 + \beta_1\lambda
  On the spectrum of A, we have
    f(-1) = h(-1):   (-1)^{100} = \beta_0 - \beta_1
    f'(-1) = h'(-1): 100\cdot(-1)^{99} = \beta_1
  So we obtain \beta_1 = -100 and \beta_0 = -99, that is,
    h(\lambda) = -99 - 100\lambda
  Then A^{100} is computed as
    A^{100} = f(A) = h(A) = \beta_0 I + \beta_1 A = -99I - 100A = \begin{pmatrix} -99 & -100 \\ 100 & 101 \end{pmatrix}
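As a quick numerical cross-check (a minimal sketch in NumPy, not part of the original example; the matrix and coefficients are those of Example 3.7), h(A) can be compared with direct matrix powering:

import numpy as np

# Matrix from Example 3.7
A = np.array([[0.0, 1.0],
              [-1.0, -2.0]])

# Coefficients obtained by matching f and h on the spectrum of A
beta0, beta1 = -99.0, -100.0
h_of_A = beta0 * np.eye(2) + beta1 * A      # h(A) = -99 I - 100 A

A_100 = np.linalg.matrix_power(A, 100)      # direct computation of A^100
print(np.allclose(h_of_A, A_100))           # expected: True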
• Example 3.8  Let
    A_1 = \begin{pmatrix} 0 & 0 & -2 \\ 0 & 1 & 0 \\ 1 & 0 & 3 \end{pmatrix}
  Compute e^{A_1 t}.
  The characteristic polynomial of A_1 is
    \alpha(\lambda) = (\lambda - 1)^2(\lambda - 2)
  Let
    h(\lambda) = \beta_0 + \beta_1\lambda + \beta_2\lambda^2
  then
    f(1) = h(1):   e^t = \beta_0 + \beta_1 + \beta_2
    f'(1) = h'(1): t e^t = \beta_1 + 2\beta_2
    f(2) = h(2):   e^{2t} = \beta_0 + 2\beta_1 + 4\beta_2
  Solving the above equations, we get
    \beta_0 = -2t e^t + e^{2t},   \beta_1 = 3t e^t + 2e^t - 2e^{2t},   \beta_2 = e^{2t} - e^t - t e^t
  Thus
    e^{A_1 t} = h(A_1) = \begin{pmatrix} 2e^t - e^{2t} & 0 & 2e^t - 2e^{2t} \\ 0 & e^t & 0 \\ e^{2t} - e^t & 0 & 2e^{2t} - e^t \end{pmatrix}

2. Using the Jordan form of A: A = Q\hat{A}Q^{-1}, where \hat{A} is in Jordan form. For a single Jordan block
     \hat{A} = \begin{pmatrix} \lambda_1 & 1 & & & \\ & \lambda_1 & 1 & & \\ & & \ddots & \ddots & \\ & & & \lambda_1 & 1 \\ & & & & \lambda_1 \end{pmatrix}_{m \times m}
   we have
     e^{At} = Q \begin{pmatrix} e^{\lambda_1 t} & t e^{\lambda_1 t} & t^2 e^{\lambda_1 t}/2! & \cdots & t^{m-1} e^{\lambda_1 t}/(m-1)! \\ & e^{\lambda_1 t} & t e^{\lambda_1 t} & \cdots & t^{m-2} e^{\lambda_1 t}/(m-2)! \\ & & e^{\lambda_1 t} & \cdots & t^{m-3} e^{\lambda_1 t}/(m-3)! \\ & & & \ddots & \vdots \\ & & & & e^{\lambda_1 t} \end{pmatrix} Q^{-1}          (3.46)
• Example 3.11  Consider
    A = \begin{pmatrix} \lambda_1 & 1 & & & \\ & \lambda_1 & 1 & & \\ & & \lambda_1 & & \\ & & & \lambda_2 & 1 \\ & & & & \lambda_2 \end{pmatrix}
  By (3.46),
    e^{At} = \begin{pmatrix} e^{\lambda_1 t} & t e^{\lambda_1 t} & t^2 e^{\lambda_1 t}/2! & 0 & 0 \\ 0 & e^{\lambda_1 t} & t e^{\lambda_1 t} & 0 & 0 \\ 0 & 0 & e^{\lambda_1 t} & 0 & 0 \\ 0 & 0 & 0 & e^{\lambda_2 t} & t e^{\lambda_2 t} \\ 0 & 0 & 0 & 0 & e^{\lambda_2 t} \end{pmatrix}
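As an illustration of (3.46) (a minimal numerical sketch; the eigenvalues λ1 = -1, λ2 = -2 and the time t = 0.7 are arbitrary choices, not from the text), the closed-form exponential of the matrix in Example 3.11 can be compared with scipy.linalg.expm:

import numpy as np
from scipy.linalg import expm

l1, l2, t = -1.0, -2.0, 0.7    # arbitrary eigenvalues and time instant

# Jordan-form matrix of Example 3.11: a 3x3 block for l1 and a 2x2 block for l2
A = np.array([[l1, 1, 0, 0, 0],
              [0, l1, 1, 0, 0],
              [0, 0, l1, 0, 0],
              [0, 0, 0, l2, 1],
              [0, 0, 0, 0, l2]])

# Closed-form e^{At} from (3.46), block by block
e1, e2 = np.exp(l1 * t), np.exp(l2 * t)
eAt = np.array([[e1, t * e1, t**2 * e1 / 2, 0, 0],
                [0, e1, t * e1, 0, 0],
                [0, 0, e1, 0, 0],
                [0, 0, 0, e2, t * e2],
                [0, 0, 0, 0, e2]])

print(np.allclose(eAt, expm(A * t)))   # expected: True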
3. Using e^{At} = \mathcal{L}^{-1}[(sI - A)^{-1}]
• The computation of the inverse of (sI − A)
  - Taking the inverse of (sI − A) by definition
  - Using Theorem 3.5
  - Using (sI − A)^{-1} = Q(sI − \hat{A})^{-1}Q^{-1} and (3.49)
  - Using the infinite power series in (3.57):
      (sI − A)^{-1} = s^{-1}\sum_{k=0}^{\infty}(s^{-1}A)^k          (3.57)
  - Using the Leverrier algorithm discussed in Problem 3.26
• Example 4.1  Compute (sI − A)^{-1}, where
    A = \begin{pmatrix} 0 & -1 \\ 1 & -2 \end{pmatrix}
  Method 1: By the definition of the inverse of a matrix, we have
    (sI - A)^{-1} = \frac{\mathrm{adj}(sI - A)}{\det(sI - A)} = \frac{1}{s^2 + 2s + 1}\begin{pmatrix} s+2 & -1 \\ 1 & s \end{pmatrix}
                 = \begin{pmatrix} (s+2)/(s+1)^2 & -1/(s+1)^2 \\ 1/(s+1)^2 & s/(s+1)^2 \end{pmatrix}
  Method 2: The eigenvalues of A are -1, -1. Let h(\lambda) = \beta_0 + \beta_1\lambda and f(\lambda) = (s - \lambda)^{-1}. From
    f(-1) = h(-1):   (s+1)^{-1} = \beta_0 - \beta_1
    f'(-1) = h'(-1): (s+1)^{-2} = \beta_1
  we obtain
    \beta_0 = (s+1)^{-1} + (s+1)^{-2},   \beta_1 = (s+1)^{-2}
  Thus we have
    h(\lambda) = [(s+1)^{-1} + (s+1)^{-2}] + (s+1)^{-2}\lambda
  and
    (sI - A)^{-1} = h(A) = [(s+1)^{-1} + (s+1)^{-2}]I + (s+1)^{-2}A
                 = \begin{pmatrix} (s+2)/(s+1)^2 & -1/(s+1)^2 \\ 1/(s+1)^2 & s/(s+1)^2 \end{pmatrix}
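Both methods can be reproduced symbolically (a sketch using SymPy, not part of the original example): invert (sI − A) directly and take the inverse Laplace transform entrywise to obtain e^{At}.

import sympy as sp

s, t = sp.symbols('s t', positive=True)
A = sp.Matrix([[0, -1],
               [1, -2]])

# (sI - A)^{-1} by direct matrix inversion
resolvent = (s * sp.eye(2) - A).inv().applyfunc(sp.simplify)
print(resolvent)   # [[(s+2)/(s+1)**2, -1/(s+1)**2], [1/(s+1)**2, s/(s+1)**2]]

# e^{At} = L^{-1}[(sI - A)^{-1}], entry by entry
eAt = resolvent.applyfunc(lambda g: sp.inverse_laplace_transform(g, s, t))
print(sp.simplify(eAt))   # entries (1+t)e^{-t}, -t e^{-t}, t e^{-t}, (1-t)e^{-t}
                          # (SymPy may attach a Heaviside(t) factor)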

• Example 4.2  Consider the equation
    \dot{x}(t) = \begin{pmatrix} 0 & -1 \\ 1 & -2 \end{pmatrix} x(t) + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u(t)
  Its solution is
    x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau
  The matrix function e^{At} can be computed as
    e^{At} = \mathcal{L}^{-1}[(sI - A)^{-1}]
           = \mathcal{L}^{-1}\begin{pmatrix} (s+2)/(s+1)^2 & -1/(s+1)^2 \\ 1/(s+1)^2 & s/(s+1)^2 \end{pmatrix}
           = \begin{pmatrix} (1+t)e^{-t} & -t e^{-t} \\ t e^{-t} & (1-t)e^{-t} \end{pmatrix}
  So we have
    x(t) = \begin{pmatrix} (1+t)e^{-t} & -t e^{-t} \\ t e^{-t} & (1-t)e^{-t} \end{pmatrix} x(0)
           + \begin{pmatrix} -\int_0^t (t-\tau)e^{-(t-\tau)}u(\tau)\,d\tau \\ \int_0^t [1-(t-\tau)]e^{-(t-\tau)}u(\tau)\,d\tau \end{pmatrix}
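As a numerical cross-check of (4.5) (a minimal sketch, not from the text; the unit-step input and the initial state x(0) = [1, -1]^T are arbitrary choices, and the closed-form expressions in the comments follow from inserting these choices into the formula above):

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0],
              [1.0, -2.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [-1.0]])     # arbitrary initial state
u = lambda tau: 1.0                # unit-step input (arbitrary choice)

def x_of_t(t, n=4000):
    # Evaluate (4.5): x(t) = e^{At} x(0) + int_0^t e^{A(t-tau)} B u(tau) dtau
    taus = np.linspace(0.0, t, n)
    vals = [expm(A * (t - tau)) @ B * u(tau) for tau in taus]
    h = taus[1] - taus[0]
    integral = (sum(vals) - 0.5 * (vals[0] + vals[-1])) * h   # trapezoidal rule
    return expm(A * t) @ x0 + integral

t = 2.0
# With u = 1 and x(0) = [1, -1]^T, the expression above gives
#   x1(t) = (2 + 3t) e^{-t} - 1,   x2(t) = (3t - 1) e^{-t}
x_exact = np.array([[(2 + 3 * t) * np.exp(-t) - 1.0],
                    [(3 * t - 1) * np.exp(-t)]])
print(np.allclose(x_of_t(t), x_exact, atol=1e-4))   # expected: True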

• The general property of the zero-input response e^{At}x(0)
  Consider a simple form of e^{At}:
    e^{At} = Q \begin{pmatrix} e^{\lambda_1 t} & t e^{\lambda_1 t} & t^2 e^{\lambda_1 t}/2 & 0 & 0 \\ 0 & e^{\lambda_1 t} & t e^{\lambda_1 t} & 0 & 0 \\ 0 & 0 & e^{\lambda_1 t} & 0 & 0 \\ 0 & 0 & 0 & e^{\lambda_1 t} & 0 \\ 0 & 0 & 0 & 0 & e^{\lambda_2 t} \end{pmatrix} Q^{-1}
  So the zero-input response is a linear combination of the terms {e^{\lambda_1 t}, t e^{\lambda_1 t}, t^2 e^{\lambda_1 t}, e^{\lambda_2 t}}.
• Conclusions for the zero-input response
  - If every eigenvalue of A, simple or repeated, has a negative real part, then every zero-input response approaches zero as t → ∞.
  - If A has an eigenvalue, simple or repeated, with a positive real part, then most zero-input responses grow unbounded as t → ∞.
  - If A has some eigenvalues with zero real part, all with index 1, and the remaining eigenvalues all have negative real parts, then no zero-input response grows unbounded.
4.2.1 Discretization
• Consider the continuous-time state equation
    \dot{x}(t) = A x(t) + B u(t)          (4.9)
    y(t) = C x(t) + D u(t)                (4.10)
  Because
    \dot{x}(t) = \lim_{T \to 0} \frac{x(t+T) - x(t)}{T}
  if T is very small, we get the approximation
    \dot{x}(t) \approx \frac{x(t+T) - x(t)}{T}
  So we can approximate (4.9) as
    x(t+T) = x(t) + A x(t) T + B u(t) T          (4.11)
  If we compute x(t) and y(t) only at t = kT for k = 0, 1, 2, ..., Equations (4.11) and (4.10) become
    x((k+1)T) = (I + TA) x(kT) + TB u(kT)
    y(kT) = C x(kT) + D u(kT)
  The above equations are easily computed by a computer but yield the least accurate results.
• Another discretization method
  Let
    u(t) = u(kT) =: u[k],   for kT \le t < (k+1)T          (4.12)
  for k = 0, 1, 2, .... Computing (4.5) at t = kT and t = (k+1)T yields
    x[k] := x(kT) = e^{AkT}x(0) + \int_0^{kT} e^{A(kT-\tau)}Bu(\tau)\,d\tau          (4.13)
  and
    x[k+1] := x((k+1)T) = e^{A(k+1)T}x(0) + \int_0^{(k+1)T} e^{A((k+1)T-\tau)}Bu(\tau)\,d\tau          (4.14)
  Equation (4.14) can be rewritten as
    x[k+1] = e^{A(k+1)T}x(0) + \int_0^{kT} e^{A((k+1)T-\tau)}Bu(\tau)\,d\tau + \int_{kT}^{(k+1)T} e^{A((k+1)T-\tau)}Bu(\tau)\,d\tau
           = e^{AT}\Big[e^{AkT}x(0) + \int_0^{kT} e^{A(kT-\tau)}Bu(\tau)\,d\tau\Big] + \int_{kT}^{(k+1)T} e^{A(kT+T-\tau)}Bu(\tau)\,d\tau
  By substituting (4.12) and (4.13) and introducing the new variable \alpha := kT + T - \tau, we have
    x[k+1] = e^{AT}x[k] + \Big(\int_0^T e^{A\alpha}\,d\alpha\Big) B u[k]
  Thus we obtain the discrete-time state-space equation form of (4.9) and (4.10) as follows:
    x[k+1] = A_d x[k] + B_d u[k]          (4.15)
    y[k] = C_d x[k] + D_d u[k]            (4.16)
  where
    A_d = e^{AT},   B_d = \Big(\int_0^T e^{A\tau}\,d\tau\Big) B,   C_d = C,   D_d = D          (4.17)
  If A is nonsingular, the integral in B_d can be evaluated in closed form:
    \int_0^T e^{A\tau}\,d\tau = \int_0^T \Big(\sum_{k=0}^{\infty}\frac{\tau^k}{k!}A^k\Big)d\tau = \sum_{k=0}^{\infty}\frac{A^k}{k!}\int_0^T \tau^k\,d\tau
                            = \sum_{k=0}^{\infty}\frac{T^{k+1}}{(k+1)!}A^k = A^{-1}\sum_{k=0}^{\infty}\frac{T^{k+1}}{(k+1)!}A^{k+1}
                            = A^{-1}\sum_{k=1}^{\infty}\frac{T^k}{k!}A^k = A^{-1}\Big(\sum_{k=0}^{\infty}\frac{T^k}{k!}A^k - I\Big) = A^{-1}(e^{AT} - I)
  So we have
    B_d = A^{-1}(e^{AT} - I)B          (4.18)
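A minimal numerical sketch of (4.17)-(4.18) (the matrices A, B, C, D and the sampling period T below are arbitrary choices, not from the text): compute A_d and B_d with the matrix exponential and compare with scipy.signal.cont2discrete under a zero-order hold.

import numpy as np
from scipy.linalg import expm
from scipy.signal import cont2discrete

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])     # arbitrary nonsingular A
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
T = 0.1                          # sampling period

Ad = expm(A * T)                                   # (4.17)
Bd = np.linalg.solve(A, Ad - np.eye(2)) @ B        # (4.18), valid since A is nonsingular

Ad_ref, Bd_ref, *_ = cont2discrete((A, B, C, D), T, method='zoh')
print(np.allclose(Ad, Ad_ref), np.allclose(Bd, Bd_ref))   # expected: True True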
4.2.2 Solution of Discrete-Time Equations
• Consider the discrete-time state-space equation
    x[k+1] = A x[k] + B u[k]
    y[k] = C x[k] + D u[k]          k = 0, 1, 2, ...          (4.19)
  The solution of the above discrete-time equations is
    x[k] = A^k x[0] + \sum_{m=0}^{k-1} A^{k-1-m} B u[m]          (4.20)
    y[k] = C A^k x[0] + \sum_{m=0}^{k-1} C A^{k-1-m} B u[m] + D u[k]          (4.21)
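The formulas (4.20)-(4.21) can be checked against a direct recursion of (4.19) (a minimal sketch; the matrices, the initial state and the input sequence are arbitrary choices, not from the text):

import numpy as np

A = np.array([[0.5, 1.0],
              [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
x0 = np.array([[1.0], [2.0]])
u = [np.array([[np.sin(0.3 * k)]]) for k in range(20)]   # arbitrary input sequence

k = 10
# Direct recursion of (4.19)
x_rec = x0
for m in range(k):
    x_rec = A @ x_rec + B @ u[m]

# Closed form (4.20) and (4.21) at step k
x_formula = np.linalg.matrix_power(A, k) @ x0 + sum(
    np.linalg.matrix_power(A, k - 1 - m) @ B @ u[m] for m in range(k))
y_k = C @ x_formula + D @ u[k]

print(np.allclose(x_rec, x_formula))   # expected: True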
• The general property of the zero-input response A^k x[0]
  Suppose
    A = Q \begin{pmatrix} \lambda_1 & 1 & & & \\ & \lambda_1 & 1 & & \\ & & \lambda_1 & & \\ & & & \lambda_1 & \\ & & & & \lambda_2 \end{pmatrix} Q^{-1}
  Then we have
    A^k = Q \begin{pmatrix} \lambda_1^k & k\lambda_1^{k-1} & k(k-1)\lambda_1^{k-2}/2 & 0 & 0 \\ 0 & \lambda_1^k & k\lambda_1^{k-1} & 0 & 0 \\ 0 & 0 & \lambda_1^k & 0 & 0 \\ 0 & 0 & 0 & \lambda_1^k & 0 \\ 0 & 0 & 0 & 0 & \lambda_2^k \end{pmatrix} Q^{-1}
  which implies that every entry of the zero-input response is a linear combination of
    \{\lambda_1^k,\; k\lambda_1^{k-1},\; k^2\lambda_1^{k-2},\; \lambda_2^k\}
• Zero-input response conclusions for discrete-time systems
  - If every eigenvalue of A, simple or repeated, has magnitude less than 1, then every zero-input response approaches zero as k approaches infinity.
  - If A has an eigenvalue, simple or repeated, with magnitude larger than 1, then most zero-input responses grow unbounded as k approaches infinity.
  - If A has some eigenvalues with magnitude 1, all with index 1, and the remaining eigenvalues all have magnitudes less than 1, then no zero-input response grows unbounded.
4.3 Equivalent State Equations
• An example for understanding the concept of equivalent state equations.
• Example 4.3  Consider the network shown in Fig. 4.1.
• Selecting
    the inductor current as state x_1
    the capacitor voltage as state x_2
  we have
    the voltage across the inductor: \dot{x}_1
    the current through the capacitor: \dot{x}_2
    the current through the resistor: x_2 / 1 = x_2
• Clearly we have
    x_1 = x_2 + \dot{x}_2
    u = \dot{x}_1 + x_2
• A state equation of the network is
    \begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \end{pmatrix} u          (4.22)
    y = \begin{pmatrix} 0 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
• Another selection of state variables
    the loop current of the left loop as state x_1
    the loop current of the right loop as state x_2
  Then
    the voltage across the inductor: \dot{x}_1
    the voltage across the resistor: x_1 \times 1 - x_2 \times 1 = x_1 - x_2
  From the left loop, we have
    u = \dot{x}_1 + x_1 - x_2   or   \dot{x}_1 = -x_1 + x_2 + u
  From
    the voltage across the capacitor = the voltage across the resistor
  we obtain
    \dot{x}_2 = \dot{x}_1 - x_2 = -x_1 + u
• Another state equation for the network is
    \begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \end{pmatrix} = \begin{pmatrix} -1 & 1 \\ -1 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \end{pmatrix} u          (4.23)
    y = \begin{pmatrix} 1 & -1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
• The concept of equivalent state equations
  Consider the n-dimensional state equation
    \dot{x}(t) = A x(t) + B u(t)
    y(t) = C x(t) + D u(t)          (4.24)
  where x(t) \in R^n, u(t) \in R^p, y(t) \in R^q.
  Definition 4.1  Let P be an n \times n real nonsingular matrix and let \bar{x} = Px. Then the state equation
    \dot{\bar{x}}(t) = \bar{A}\bar{x}(t) + \bar{B}u(t)
    y(t) = \bar{C}\bar{x}(t) + \bar{D}u(t)          (4.25)
  where
    \bar{A} = PAP^{-1},  \bar{B} = PB,  \bar{C} = CP^{-1},  \bar{D} = D          (4.26)
  is said to be (algebraically) equivalent to (4.24), and \bar{x} = Px is called an equivalence transformation.
• Result: An equivalence transformation changes neither the eigenvalues nor the transfer function matrix of the system.
  Proof:
    \bar{\Delta}(\lambda) = \det(\lambda I - \bar{A}) = \det(\lambda I - PAP^{-1}) = \det[P(\lambda I - A)P^{-1}]
                         = \det(P)\,\det(\lambda I - A)\,\det(P^{-1}) = \det(\lambda I - A) = \Delta(\lambda)
  and
    \bar{\hat{G}}(s) = \bar{C}(sI - \bar{A})^{-1}\bar{B} + \bar{D} = CP^{-1}(sI - PAP^{-1})^{-1}PB + D
                    = C(sI - A)^{-1}B + D = \hat{G}(s)
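A quick numerical illustration of (4.26) and of the result above (a minimal sketch; the system matrices, the transformation P and the test frequency are arbitrary choices): the eigenvalues and the transfer matrix evaluated at a test point are unchanged.

import numpy as np

rng = np.random.default_rng(0)
n, p, q = 3, 2, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, p))
C = rng.standard_normal((q, n))
D = rng.standard_normal((q, p))
P = rng.standard_normal((n, n))      # almost surely nonsingular

# Equivalent equation (4.26)
Pinv = np.linalg.inv(P)
Ab, Bb, Cb, Db = P @ A @ Pinv, P @ B, C @ Pinv, D

s0 = 1.0 + 2.0j                      # arbitrary test frequency
G  = C  @ np.linalg.solve(s0 * np.eye(n) - A,  B)  + D
Gb = Cb @ np.linalg.solve(s0 * np.eye(n) - Ab, Bb) + Db

print(np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(Ab))))   # same eigenvalues
print(np.allclose(G, Gb))                                                           # same transfer matrix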
• Definition: Two state equations are said to be zero-state equivalent if they have the same transfer function matrix.
• Theorem 4.1  Two linear time-invariant equations {A, B, C, D} and {\bar{A}, \bar{B}, \bar{C}, \bar{D}} are zero-state equivalent if and only if D = \bar{D} and
    CA^m B = \bar{C}\bar{A}^m\bar{B},   m = 0, 1, 2, ...
  Proof: Since
    (sI - A)^{-1} = s^{-1}(I - As^{-1})^{-1} = s^{-1}\sum_{m=0}^{\infty}(As^{-1})^m = \sum_{m=0}^{\infty} A^m s^{-m-1}
  we have
    D + C(sI - A)^{-1}B = D + \sum_{m=0}^{\infty} CA^m B\, s^{-m-1}
  Similarly, we have
    \bar{D} + \bar{C}(sI - \bar{A})^{-1}\bar{B} = \bar{D} + \sum_{m=0}^{\infty} \bar{C}\bar{A}^m\bar{B}\, s^{-m-1}
  So they are zero-state equivalent if and only if
    D + C(sI - A)^{-1}B = \bar{D} + \bar{C}(sI - \bar{A})^{-1}\bar{B}
  which is equivalent to
    D = \bar{D}   and   CA^m B = \bar{C}\bar{A}^m\bar{B},  m = 0, 1, 2, ...
• It should be noted that algebraic equivalence implies zero-state equivalence, but the converse is not true.
4.4 Realizations
• Every linear time-invariant system can be described by the input-output description
    \hat{y}(s) = \hat{G}(s)\hat{u}(s)
  and, if the system is lumped as well, by the state-space equation description
    \dot{x}(t) = A x(t) + B u(t)
    y(t) = C x(t) + D u(t)          (4.29)
  If the state equation is known, the transfer function matrix can be computed by
    \hat{G}(s) = C(sI - A)^{-1}B + D          (4.29')
• Now we study the converse problem, that is, to find a state-space equation from a given transfer function matrix; this is called the realization problem.
• Definition: A transfer function matrix \hat{G}(s) is said to be realizable if there exists a finite-dimensional state equation (4.29) such that (4.29') holds; (4.29) is then called a realization of \hat{G}(s).
• Remarks: Not every \hat{G}(s) is realizable. For example, an LTI distributed system has a transfer function matrix, but it cannot be described by a state equation. If \hat{G}(s) is realizable, the realization is not unique.
• Theorem 4.2  A transfer matrix \hat{G}(s) is realizable if and only if \hat{G}(s) is a proper rational matrix.
  Proof: Suppose that {A, B, C, D} is a realization of \hat{G}(s). Then we have
    \hat{G}(s) = C(sI - A)^{-1}B + D = \frac{C[\mathrm{adj}(sI - A)]B}{\det(sI - A)} + D
  which is proper.
  Next we show the converse. The q \times p proper rational matrix \hat{G}(s) can be decomposed into two parts:
    \hat{G}(s) = \hat{G}(\infty) + \hat{G}_{sp}(s)          (4.31)
  where \hat{G}_{sp}(s) is the strictly proper part of \hat{G}(s). Let
    d(s) = s^r + \alpha_1 s^{r-1} + \cdots + \alpha_{r-1}s + \alpha_r          (4.32)
  be the monic least common denominator of all entries of \hat{G}_{sp}(s).
  Then \hat{G}_{sp}(s) can be expressed as
    \hat{G}_{sp}(s) = \frac{1}{d(s)}N(s) = \frac{1}{d(s)}\big[N_1 s^{r-1} + N_2 s^{r-2} + \cdots + N_{r-1}s + N_r\big]          (4.33)
  where N(s) is a q \times p polynomial matrix in s and the N_i are q \times p constant matrices.
  Now we can verify that the following state equation
    \dot{x} = \begin{pmatrix} -\alpha_1 I_p & -\alpha_2 I_p & \cdots & -\alpha_{r-1} I_p & -\alpha_r I_p \\ I_p & 0 & \cdots & 0 & 0 \\ 0 & I_p & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & I_p & 0 \end{pmatrix} x + \begin{pmatrix} I_p \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix} u          (4.34)
    y = \begin{pmatrix} N_1 & N_2 & \cdots & N_{r-1} & N_r \end{pmatrix} x + \hat{G}(\infty) u
  is a realization of \hat{G}(s). Here the dimensions of A, B and C are rp \times rp, rp \times p and q \times rp, respectively. Let us define
    Z := \begin{pmatrix} Z_1 \\ Z_2 \\ \vdots \\ Z_r \end{pmatrix} := (sI - A)^{-1}B          (4.35)
  where each Z_i has dimension p \times p. The transfer matrix of (4.34) can then be expressed as
    C(sI - A)^{-1}B + \hat{G}(\infty) = N_1 Z_1 + N_2 Z_2 + \cdots + N_r Z_r + \hat{G}(\infty)          (4.36)
  Equation (4.35) can be rewritten as (sI - A)Z = B, or
    sZ = AZ + B
  Expanding the above equation by blocks, we have
    sZ_1 = -\alpha_1 Z_1 - \alpha_2 Z_2 - \cdots - \alpha_r Z_r + I_p          (4.37)
    sZ_2 = Z_1,  sZ_3 = Z_2,  \ldots,  sZ_r = Z_{r-1}          (4.37')
  The last r-1 equations imply
    Z_2 = \frac{1}{s}Z_1,  Z_3 = \frac{1}{s^2}Z_1,  \ldots,  Z_r = \frac{1}{s^{r-1}}Z_1
  Substituting these r-1 equations into (4.37) yields
    sZ_1 = -\Big(\alpha_1 + \frac{\alpha_2}{s} + \cdots + \frac{\alpha_r}{s^{r-1}}\Big)Z_1 + I_p
  or
    \Big(s + \alpha_1 + \frac{\alpha_2}{s} + \cdots + \frac{\alpha_r}{s^{r-1}}\Big)Z_1 = I_p
  Using the notation of (4.32), the above equation turns out to be
    \frac{d(s)}{s^{r-1}}Z_1 = I_p
  So we obtain
    Z_1 = \frac{s^{r-1}}{d(s)}I_p
  Inserting it into (4.37') leads to
    Z_1 = \frac{s^{r-1}}{d(s)}I_p,  Z_2 = \frac{s^{r-2}}{d(s)}I_p,  \ldots,  Z_r = \frac{1}{d(s)}I_p
  Substituting them into (4.36) yields
    C(sI - A)^{-1}B + \hat{G}(\infty) = \frac{1}{d(s)}\big[N_1 s^{r-1} + N_2 s^{r-2} + \cdots + N_r\big] + \hat{G}(\infty)
  This shows that (4.34) is a realization of \hat{G}(s).

• The realization in Theorem 4.2 is called the controllable canonical form.
• Example 4.6  Consider the proper rational matrix
    \hat{G}(s) = \begin{pmatrix} \dfrac{4s-10}{2s+1} & \dfrac{3}{s+2} \\ \dfrac{1}{(2s+1)(s+2)} & \dfrac{s+1}{(s+2)^2} \end{pmatrix}
              = \begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} \dfrac{-12}{2s+1} & \dfrac{3}{s+2} \\ \dfrac{1}{(2s+1)(s+2)} & \dfrac{s+1}{(s+2)^2} \end{pmatrix}          (4.38)
              = \hat{G}(\infty) + \hat{G}_{sp}(s)
  The monic least common denominator of \hat{G}_{sp}(s) is
    d(s) = (s+0.5)(s+2)^2 = s^3 + 4.5s^2 + 6s + 2
  Thus we have
    \hat{G}_{sp}(s) = \frac{1}{s^3 + 4.5s^2 + 6s + 2}\begin{pmatrix} -6(s+2)^2 & 3(s+2)(s+0.5) \\ 0.5(s+2) & (s+1)(s+0.5) \end{pmatrix}
                   = \frac{1}{d(s)}\left(\begin{pmatrix} -6 & 3 \\ 0 & 1 \end{pmatrix}s^2 + \begin{pmatrix} -24 & 7.5 \\ 0.5 & 1.5 \end{pmatrix}s + \begin{pmatrix} -24 & 3 \\ 1 & 0.5 \end{pmatrix}\right)
  and a realization of (4.38) is
    \dot{x} = \begin{pmatrix} -4.5 & 0 & -6 & 0 & -2 & 0 \\ 0 & -4.5 & 0 & -6 & 0 & -2 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \end{pmatrix} x + \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix} u          (4.39)
    y = \begin{pmatrix} -6 & 3 & -24 & 7.5 & -24 & 3 \\ 0 & 1 & 0.5 & 1.5 & 1 & 0.5 \end{pmatrix} x + \begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix} u
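The construction (4.34) is easy to automate. The sketch below (the helper name ccf_realization is hypothetical; the data are those of Example 4.6) builds the realization (4.39) from the coefficients of d(s) and the matrices N_i, and checks C(sI − A)^{-1}B + D against Ĝ(s) at an arbitrary test frequency.

import numpy as np

def ccf_realization(alphas, Ns, G_inf):
    # Controllable canonical form (4.34); alphas = [a1,...,ar], Ns = [N1,...,Nr]
    r, (q, p) = len(alphas), Ns[0].shape
    A = np.zeros((r * p, r * p))
    A[:p, :] = np.hstack([-a * np.eye(p) for a in alphas])   # first block row
    A[p:, :-p] = np.eye((r - 1) * p)                         # identity blocks below
    B = np.vstack([np.eye(p)] + [np.zeros((p, p))] * (r - 1))
    C = np.hstack(Ns)
    return A, B, C, G_inf

# Data of Example 4.6
alphas = [4.5, 6.0, 2.0]
Ns = [np.array([[-6.0, 3.0], [0.0, 1.0]]),
      np.array([[-24.0, 7.5], [0.5, 1.5]]),
      np.array([[-24.0, 3.0], [1.0, 0.5]])]
G_inf = np.array([[2.0, 0.0], [0.0, 0.0]])

A, B, C, D = ccf_realization(alphas, Ns, G_inf)

s = 1.5 + 0.5j                       # arbitrary test frequency
G_real = C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B) + D
G_true = np.array([[(4 * s - 10) / (2 * s + 1), 3 / (s + 2)],
                   [1 / ((2 * s + 1) * (s + 2)), (s + 1) / (s + 2) ** 2]])
print(np.allclose(G_real, G_true))   # expected: True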
• A special case of (4.31) and (4.34) with p = 1. To save space, we assume that r = 4 and q = 2. That is,
    \hat{G}(s) = \begin{pmatrix} d_1 \\ d_2 \end{pmatrix} + \frac{1}{s^4 + \alpha_1 s^3 + \alpha_2 s^2 + \alpha_3 s + \alpha_4}\begin{pmatrix} \beta_{11}s^3 + \beta_{12}s^2 + \beta_{13}s + \beta_{14} \\ \beta_{21}s^3 + \beta_{22}s^2 + \beta_{23}s + \beta_{24} \end{pmatrix}          (4.40)
  Thus its realization can be obtained directly from (4.34) as
    \dot{x} = \begin{pmatrix} -\alpha_1 & -\alpha_2 & -\alpha_3 & -\alpha_4 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} x + \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} u
    y = \begin{pmatrix} \beta_{11} & \beta_{12} & \beta_{13} & \beta_{14} \\ \beta_{21} & \beta_{22} & \beta_{23} & \beta_{24} \end{pmatrix} x + \begin{pmatrix} d_1 \\ d_2 \end{pmatrix} u          (4.41)

• Other ways of realization
  Let \hat{G}_{ci}(s) be the ith column of \hat{G}(s) and let u_i be the ith component of the input vector u. Then \hat{y}(s) = \hat{G}(s)\hat{u}(s) can be expressed as
    \hat{y}(s) = \hat{G}_{c1}(s)\hat{u}_1(s) + \hat{G}_{c2}(s)\hat{u}_2(s) + \cdots + \hat{G}_{cm}(s)\hat{u}_m(s)
              =: \hat{y}_{c1}(s) + \hat{y}_{c2}(s) + \cdots + \hat{y}_{cm}(s)
  as shown in Fig. 4.4(a). Suppose that
    \dot{x}_i = A_i x_i + b_i u_i
    y_{ci} = C_i x_i + d_i u_i          i = 1, 2, \ldots, m
  is a realization of \hat{y}_{ci}(s) = \hat{G}_{ci}(s)\hat{u}_i(s). Then
    \dot{x} = \begin{pmatrix} A_1 & & & \\ & A_2 & & \\ & & \ddots & \\ & & & A_m \end{pmatrix} x + \begin{pmatrix} b_1 & & & \\ & b_2 & & \\ & & \ddots & \\ & & & b_m \end{pmatrix} u
    y = \begin{pmatrix} C_1 & C_2 & \cdots & C_m \end{pmatrix} x + \begin{pmatrix} d_1 & d_2 & \cdots & d_m \end{pmatrix} u
  is a realization of \hat{G}(s). Thus we can realize each column of \hat{G}(s) and then combine them to yield a realization of \hat{G}(s).
• Example 4.7  Consider the proper rational matrix
    \hat{G}(s) = \begin{pmatrix} \dfrac{4s-10}{2s+1} & \dfrac{3}{s+2} \\ \dfrac{1}{(2s+1)(s+2)} & \dfrac{s+1}{(s+2)^2} \end{pmatrix}
  The first and second columns of \hat{G}(s) are
    \hat{G}_{c1}(s) = \begin{pmatrix} \dfrac{4s-10}{2s+1} \\ \dfrac{1}{(2s+1)(s+2)} \end{pmatrix} = \begin{pmatrix} 2 \\ 0 \end{pmatrix} + \frac{1}{s^2+2.5s+1}\begin{pmatrix} -6s-12 \\ 0.5 \end{pmatrix}
                   = \begin{pmatrix} 2 \\ 0 \end{pmatrix} + \frac{1}{s^2+2.5s+1}\left(\begin{pmatrix} -6 \\ 0 \end{pmatrix}s + \begin{pmatrix} -12 \\ 0.5 \end{pmatrix}\right)
  and
    \hat{G}_{c2}(s) = \begin{pmatrix} \dfrac{3}{s+2} \\ \dfrac{s+1}{(s+2)^2} \end{pmatrix} = \frac{1}{s^2+4s+4}\begin{pmatrix} 3s+6 \\ s+1 \end{pmatrix} = \frac{1}{s^2+4s+4}\left(\begin{pmatrix} 3 \\ 1 \end{pmatrix}s + \begin{pmatrix} 6 \\ 1 \end{pmatrix}\right)
  Combining the realizations of the two columns,
    \dot{x} = \begin{pmatrix} A_1 & \\ & A_2 \end{pmatrix} x + \begin{pmatrix} b_1 & \\ & b_2 \end{pmatrix} u
    y = \begin{pmatrix} C_1 & C_2 \end{pmatrix} x + \begin{pmatrix} d_1 & d_2 \end{pmatrix} u
  gives a realization of \hat{G}(s). That is,
    \dot{x} = \begin{pmatrix} -2.5 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & -4 & -4 \\ 0 & 0 & 1 & 0 \end{pmatrix} x + \begin{pmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix} u
    y = \begin{pmatrix} -6 & -12 & 3 & 6 \\ 0 & 0.5 & 1 & 1 \end{pmatrix} x + \begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix} u
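A short check of the combined realization above (a minimal sketch; the test frequency is an arbitrary choice): assemble the two column realizations block-diagonally and verify that the transfer matrix reproduces Ĝ(s).

import numpy as np
from scipy.linalg import block_diag

# Column realizations from Example 4.7
A1, b1 = np.array([[-2.5, -1.0], [1.0, 0.0]]), np.array([[1.0], [0.0]])
C1, d1 = np.array([[-6.0, -12.0], [0.0, 0.5]]), np.array([[2.0], [0.0]])
A2, b2 = np.array([[-4.0, -4.0], [1.0, 0.0]]), np.array([[1.0], [0.0]])
C2, d2 = np.array([[3.0, 6.0], [1.0, 1.0]]), np.array([[0.0], [0.0]])

# Combine the column realizations block-diagonally
A, B = block_diag(A1, A2), block_diag(b1, b2)
C, D = np.hstack([C1, C2]), np.hstack([d1, d2])

s = 0.7 + 1.3j                       # arbitrary test frequency
G_real = C @ np.linalg.solve(s * np.eye(4) - A, B) + D
G_true = np.array([[(4 * s - 10) / (2 * s + 1), 3 / (s + 2)],
                   [1 / ((2 * s + 1) * (s + 2)), (s + 1) / (s + 2) ** 2]])
print(np.allclose(G_real, G_true))   # expected: True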
4.5 Solution of Linear Time-Varying (LTV) State Equations
• Consider the linear time-varying (LTV) state equation
    \dot{x}(t) = A(t)x(t) + B(t)u(t)          (4.45)
    y(t) = C(t)x(t) + D(t)u(t)                (4.46)
  where x(t) \in R^n, u(t) \in R^p and y(t) \in R^q for any t. We assume that for any initial state and any input u(t), the state equation has a unique solution.
  We first consider the solutions of
    \dot{x}(t) = A(t)x(t)          (4.49)
  The solution of the scalar time-varying equation \dot{x} = a(t)x due to x(0) is
    x(t) = e^{\int_0^t a(\tau)\,d\tau}\, x(0)
  Can it be extended to the matrix case? That is, is
    x(t) = e^{\int_0^t A(\tau)\,d\tau}\, x(0)
  the solution of (4.49) due to x(0)? The answer is no. So we have to find another approach to develop the solution.
• Fundamental matrix
  For any n linearly independent initial states x_i(t_0) (i = 1, 2, ..., n), there exist n unique solutions x_i(t) (i = 1, 2, ..., n), one for each initial state. Arrange the n solutions and the n initial states as square matrices:
    X(t) = [x_1(t)  x_2(t)  ...  x_n(t)]
    X(t_0) = [x_1(t_0)  x_2(t_0)  ...  x_n(t_0)]
  Obviously, X(t_0) is nonsingular. The matrix X(t) is called a fundamental matrix of (4.49). Because the selection of the initial states is not unique, the fundamental matrix is not unique.
• Example 4.8  Consider the homogeneous equation
    \dot{x}(t) = \begin{pmatrix} 0 & 0 \\ t & 0 \end{pmatrix} x(t)
  That is,
    \dot{x}_1(t) = 0
    \dot{x}_2(t) = t\,x_1(t)
  For the initial state x(0) = [x_1(0)  x_2(0)]^T, the solution is
    x_1(t) = x_1(0)
    x_2(t) = 0.5t^2 x_1(0) + x_2(0)
  In particular, for the two linearly independent initial states x^1(0) = [1  0]^T and x^2(0) = [1  2]^T, the solutions are
    x^1(t) = \begin{pmatrix} 1 \\ 0.5t^2 \end{pmatrix}   and   x^2(t) = \begin{pmatrix} 1 \\ 0.5t^2 + 2 \end{pmatrix}
  respectively. Thus
    X(t) = \begin{pmatrix} 1 & 1 \\ 0.5t^2 & 0.5t^2 + 2 \end{pmatrix}
  is a fundamental matrix.
• A very important property of the fundamental matrix is that it is nonsingular for all t.
  Definition 4.2  Let X(t) be any fundamental matrix of (4.49). Then
    \Phi(t, t_0) := X(t)X^{-1}(t_0)
  is called the state transition matrix of (4.49). The state transition matrix is also the unique solution of
    \frac{\partial}{\partial t}\Phi(t, t_0) = A(t)\Phi(t, t_0)          (4.53)
  with the initial condition \Phi(t_0, t_0) = I.
• Properties of the state transition matrix
    \Phi(t, t) = I          (4.54)
    \Phi^{-1}(t, t_0) = \Phi(t_0, t)          (4.55)
    \Phi(t, t_0) = \Phi(t, t_1)\Phi(t_1, t_0)          (4.56)
  The above equations hold for any t, t_0, and t_1.
• Example 4.9  Consider the homogeneous equation in Example 4.8. A fundamental matrix is
    X(t) = \begin{pmatrix} 1 & 1 \\ 0.5t^2 & 0.5t^2 + 2 \end{pmatrix}
  Its inverse is
    X^{-1}(t) = \begin{pmatrix} 0.25t^2 + 1 & -0.5 \\ -0.25t^2 & 0.5 \end{pmatrix}
  Thus the state transition matrix is given by
    \Phi(t, t_0) = X(t)X^{-1}(t_0) = \begin{pmatrix} 1 & 1 \\ 0.5t^2 & 0.5t^2 + 2 \end{pmatrix}\begin{pmatrix} 0.25t_0^2 + 1 & -0.5 \\ -0.25t_0^2 & 0.5 \end{pmatrix}
                = \begin{pmatrix} 1 & 0 \\ 0.5(t^2 - t_0^2) & 1 \end{pmatrix}
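Since Φ(t, t0) is the unique solution of (4.53), it can also be obtained by numerical integration (a minimal sketch using scipy.integrate.solve_ivp; the time instants t0, t1 are arbitrary choices) and compared with the closed form of Example 4.9:

import numpy as np
from scipy.integrate import solve_ivp

def A_of_t(t):
    return np.array([[0.0, 0.0],
                     [t, 0.0]])

def rhs(t, phi_flat):
    # Matrix ODE (4.53): dPhi/dt = A(t) Phi, handled as a flattened vector
    Phi = phi_flat.reshape(2, 2)
    return (A_of_t(t) @ Phi).ravel()

t0, t1 = 1.0, 3.0                                    # arbitrary time instants
sol = solve_ivp(rhs, (t0, t1), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
Phi_num = sol.y[:, -1].reshape(2, 2)

Phi_exact = np.array([[1.0, 0.0],
                      [0.5 * (t1**2 - t0**2), 1.0]])
print(np.allclose(Phi_num, Phi_exact, atol=1e-6))    # expected: True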
• Now we claim that the solution of (4.45) excited by the initial state x(t_0) = x_0 and the input u(t) is given by
    x(t) = \Phi(t, t_0)x_0 + \int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau          (4.57)
         = \Phi(t, t_0)\Big[x_0 + \int_{t_0}^{t}\Phi(t_0, \tau)B(\tau)u(\tau)\,d\tau\Big]          (4.58)
  where \Phi(t, \tau) is the state transition matrix of (4.49).
• Here we need to prove that (4.57), or equivalently (4.58), is the solution of (4.45).
  First we show that (4.57) satisfies the initial condition:
    x(t_0) = \Phi(t_0, t_0)x_0 + \int_{t_0}^{t_0}\Phi(t_0, \tau)B(\tau)u(\tau)\,d\tau = Ix_0 + 0 = x_0

  Second, we verify that (4.57) satisfies the state equation:
    \frac{d}{dt}x(t) = \frac{\partial}{\partial t}\Phi(t, t_0)x_0 + \frac{\partial}{\partial t}\int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau
      = A(t)\Phi(t, t_0)x_0 + \int_{t_0}^{t}\frac{\partial}{\partial t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau + \Phi(t, t)B(t)u(t)
      = A(t)\Phi(t, t_0)x_0 + \int_{t_0}^{t}A(t)\Phi(t, \tau)B(\tau)u(\tau)\,d\tau + B(t)u(t)
      = A(t)\Big[\Phi(t, t_0)x_0 + \int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau\Big] + B(t)u(t)
      = A(t)x(t) + B(t)u(t)
  Thus (4.57) is the solution.


Substituting (4.57) into (4.46) leads to
    y(t) = C(t)\Phi(t, t_0)x_0 + C(t)\int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau + D(t)u(t)          (4.59)
If the input is identically zero, (4.57) becomes
    x(t) = \Phi(t, t_0)x_0
and the corresponding zero-input response is
    y(t) = C(t)\Phi(t, t_0)x_0
If the initial state is zero, (4.59) reduces to
    y(t) = C(t)\int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau + D(t)u(t)          (4.60)
If we are only concerned with the response for time between t_0 and t, we can regard the input u(t) as
    U(\tau) = \begin{cases} u(\tau) & t_0 \le \tau < t \\ 0 & \text{otherwise} \end{cases}
So we can write
    D(t)u(t) = D(t)U(t) = \int_{-\infty}^{\infty}D(\tau)U(\tau)\delta(\tau - t)\,d\tau = \int_{t_0}^{t}D(\tau)u(\tau)\delta(\tau - t)\,d\tau
and (4.60) can also be written as
    y(t) = \int_{t_0}^{t}\big[C(t)\Phi(t, \tau)B(\tau) + D(\tau)\delta(\tau - t)\big]u(\tau)\,d\tau          (4.61)
Equation (4.61) is the zero-state response. Comparing it with (2.5),
    y(t) = \int_{t_0}^{t}G(t, \tau)u(\tau)\,d\tau          (2.5)
leads to
    G(t, \tau) = C(t)\Phi(t, \tau)B(\tau) + D(\tau)\delta(\tau - t)          (4.62)
Equation (4.62) shows a relationship between the input-output and state-space descriptions.
• Computing the state transition matrix \Phi(t, \tau) is generally difficult, but for linear time-invariant systems we have
    X(t) = e^{At},   \Phi(t, \tau) = X(t)X^{-1}(\tau) = e^{A(t-\tau)} = \Phi(t - \tau)


4.5.1 Discrete-Time Case
• Consider the discrete-time state equation
    x[k+1] = A[k]x[k] + B[k]u[k]          (4.64)
    y[k] = C[k]x[k] + D[k]u[k]            (4.65)
• Define the discrete-time state transition matrix as the solution of
    \Phi[k+1, k_0] = A[k]\Phi[k, k_0]   with   \Phi[k_0, k_0] = I          (4.66)
  for k = k_0, k_0+1, .... The solution can be obtained directly as
    \Phi[k, k_0] = A[k-1]A[k-2]\cdots A[k_0]
  for k > k_0, and \Phi[k_0, k_0] = I.


• It should be pointed out that the matrix A[k] may be singular; thus the inverse of \Phi[k, k_0] may not be defined. Moreover, the transition property
    \Phi[k, k_0] = \Phi[k, k_1]\Phi[k_1, k_0]
  holds only for k_0 \le k_1 \le k.
• Using the discrete state transition matrix defined by (4.66), we can express the solution of (4.64) and (4.65) as
    x[k] = \Phi[k, k_0]x_0 + \sum_{m=k_0}^{k-1}\Phi[k, m+1]B[m]u[m]
    y[k] = C[k]\Phi[k, k_0]x_0 + C[k]\sum_{m=k_0}^{k-1}\Phi[k, m+1]B[m]u[m] + D[k]u[k]          (4.67)
• The zero-state response of (4.65) is
    y[k] = C[k]\sum_{m=k_0}^{k-1}\Phi[k, m+1]B[m]u[m] + D[k]u[k]
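A minimal sketch of (4.66)-(4.67) (the time-varying matrices, initial state and input below are arbitrary choices, not from the text): build Φ[k, m] as a product of A-matrices and compare the closed-form x[k] with a direct recursion of (4.64).

import numpy as np

def A_of(k):                         # arbitrary time-varying matrices
    return np.array([[0.9, 0.1 * k],
                     [0.0, 0.8]])

def B_of(k):
    return np.array([[0.0], [1.0 + 0.1 * k]])

def Phi(k, k0):
    # Phi[k, k0] = A[k-1] A[k-2] ... A[k0],  Phi[k0, k0] = I
    out = np.eye(2)
    for j in range(k0, k):
        out = A_of(j) @ out
    return out

k0, k = 0, 8
x0 = np.array([[1.0], [-1.0]])
u = [np.array([[np.cos(0.5 * m)]]) for m in range(k + 1)]

# Closed form (4.67)
x_formula = Phi(k, k0) @ x0 + sum(Phi(k, m + 1) @ B_of(m) @ u[m] for m in range(k0, k))

# Direct recursion of (4.64)
x_rec = x0
for m in range(k0, k):
    x_rec = A_of(m) @ x_rec + B_of(m) @ u[m]

print(np.allclose(x_formula, x_rec))   # expected: True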
4.6 Equivalent Time-Varying Equations
• Consider the n-dimensional linear time-varying state equation
    \dot{x} = A(t)x + B(t)u
    y = C(t)x + D(t)u          (4.69)
  and make the equivalence transformation
    \bar{x}(t) = P(t)x(t)
  where P(t) is nonsingular for all t. Then (4.69) becomes
    \dot{\bar{x}} = \bar{A}(t)\bar{x} + \bar{B}(t)u
    y = \bar{C}(t)\bar{x} + \bar{D}(t)u          (4.70)
  where
    \bar{A}(t) = [P(t)A(t) + \dot{P}(t)]P^{-1}(t)
    \bar{B}(t) = P(t)B(t)
    \bar{C}(t) = C(t)P^{-1}(t)
    \bar{D}(t) = D(t)
  Equation (4.70) is said to be (algebraically) equivalent to (4.69), and P(t) is called an (algebraic) equivalence transformation.
• Theorem 4.3  Let A_0 be an arbitrary constant matrix. Then there exists an equivalence transformation that transforms (4.69) into (4.70) with \bar{A}(t) = A_0.
  Proof: Let X(t) be a fundamental matrix of \dot{x} = A(t)x. Differentiating X^{-1}(t)X(t) = I yields
    \Big[\frac{d}{dt}X^{-1}(t)\Big]X(t) + X^{-1}(t)\dot{X}(t) = 0
  That is,
    \frac{d}{dt}X^{-1}(t) = -X^{-1}(t)\dot{X}(t)X^{-1}(t)
  Since \dot{X}(t) = A(t)X(t), inserting it into the above equation leads to
    \frac{d}{dt}X^{-1}(t) = -X^{-1}(t)A(t)          (4.72)
  For the constant matrix \bar{A}(t) = A_0, \bar{X}(t) = e^{A_0 t} is a fundamental matrix of \dot{\bar{x}} = \bar{A}(t)\bar{x} = A_0\bar{x}. If we define
    P(t) := \bar{X}(t)X^{-1}(t) = e^{A_0 t}X^{-1}(t)
  we have
    \bar{A}(t) = [P(t)A(t) + \dot{P}(t)]P^{-1}(t)
             = \Big[e^{A_0 t}X^{-1}(t)A(t) + A_0 e^{A_0 t}X^{-1}(t) + e^{A_0 t}\frac{d}{dt}X^{-1}(t)\Big]X(t)e^{-A_0 t}
             = e^{A_0 t}X^{-1}(t)A(t)X(t)e^{-A_0 t} + A_0 e^{A_0 t}X^{-1}(t)X(t)e^{-A_0 t} + e^{A_0 t}\Big[\frac{d}{dt}X^{-1}(t)\Big]X(t)e^{-A_0 t}
             = e^{A_0 t}X^{-1}(t)A(t)X(t)e^{-A_0 t} + A_0 + e^{A_0 t}[-X^{-1}(t)A(t)]X(t)e^{-A_0 t} = A_0
• Result 1: An algebraic equivalence transformation does not change the impulse response matrix of the system.
  Proof: The impulse response matrix of (4.69) is given by (4.62):
    G(t, \tau) = C(t)\Phi(t, \tau)B(\tau) + D(t)\delta(t - \tau)
              = C(t)X(t)X^{-1}(\tau)B(\tau) + D(t)\delta(t - \tau)
  Similarly, the impulse response matrix of (4.70) is
    \bar{G}(t, \tau) = \bar{C}(t)\bar{X}(t)\bar{X}^{-1}(\tau)\bar{B}(\tau) + \bar{D}(t)\delta(t - \tau)
  Using (4.71),
    \bar{X}(t) := P(t)X(t)          (4.71)
  we have
    \bar{G}(t, \tau) = C(t)P^{-1}(t)P(t)X(t)X^{-1}(\tau)P^{-1}(\tau)P(\tau)B(\tau) + D(t)\delta(t - \tau)
                    = C(t)X(t)X^{-1}(\tau)B(\tau) + D(t)\delta(t - \tau)
  Comparing the above equation with (4.62), we conclude that \bar{G}(t, \tau) = G(t, \tau).
• Result 2: The equivalence transformation in the time-invariant case is not a special case of the time-varying case, because properties of the A-matrix (for example, its eigenvalues) need not be preserved under a time-varying transformation.

  Definition 4.3  A matrix P(t) is called a Lyapunov transformation if P(t) is nonsingular, P(t) and \dot{P}(t) are continuous, and P(t) and P^{-1}(t) are bounded for all t. Equations (4.69) and (4.70) are said to be Lyapunov equivalent if P(t) is a Lyapunov transformation.
