Linear System Theory and Design (Part 2)
Lecturer: Zhu Fanglai (朱芳来)
Chapter 4 State-Space Solutions and Realizations
4.1 Introduction
We know that linear systems can be described by convolutions and, if lumped, by state-space equations.
The main purpose of this chapter is to carry out a quantitative analysis of linear systems, that is, to discuss how to find the solutions of a linear system.
We mainly discuss the solutions of systems described by state-space equations.
4.2 Solution of LTI State Equation
Consider the linear time-invariant (LTI) state-space equation
$$\dot{x}(t) = Ax(t) + Bu(t) \qquad (4.2)$$
$$y(t) = Cx(t) + Du(t) \qquad (4.3)$$
where $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times p}$, $C \in \mathbb{R}^{q\times n}$ and $D \in \mathbb{R}^{q\times p}$ are constant matrices.
The purpose of this section is to find the
solution excited by the initial state x(0) and
the input u(t).
To do this, we need the property
$$\frac{d}{dt}e^{At} = Ae^{At} = e^{At}A$$
which implies
$$\frac{d}{dt}\left(e^{-At}x(t)\right) = e^{-At}Bu(t)$$
Integrating both sides from 0 to $t$ and premultiplying by $e^{At}$ yields the solution
$$x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau$$
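As a sanity check, this formula can be evaluated numerically and compared with an ODE integrator. A minimal sketch follows; the matrices, initial state, and input are illustrative choices, not from the slides:

```python
# Sketch: evaluate x(t) = e^{At} x(0) + int_0^t e^{A(t-tau)} B u(tau) dtau
# by quadrature and compare with direct ODE integration (illustrative system).
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, trapezoid

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, -1.0])
u = lambda t: np.array([np.sin(t)])          # single input channel

def closed_form(t, n_grid=2001):
    """Solution formula evaluated with the trapezoid rule."""
    taus = np.linspace(0.0, t, n_grid)
    vals = np.stack([expm(A*(t - tau)) @ B @ u(tau) for tau in taus])
    return expm(A*t) @ x0 + trapezoid(vals, taus, axis=0)

sol = solve_ivp(lambda t, x: A @ x + B @ u(t), (0.0, 2.0), x0, rtol=1e-10)
print(closed_form(2.0))      # matches the integrated value below
print(sol.y[:, -1])
```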
Suppose $A$ has eigenvalues $\lambda_i$ with multiplicities $n_i$ ($i = 1,2,\ldots,m$), where $n = \sum_{i=1}^{m} n_i$. Define
$$h(\lambda) = \beta_0 + \beta_1\lambda + \cdots + \beta_{n-1}\lambda^{n-1}$$
and determine the $\beta_j$ from
$$f^{(l)}(\lambda_i) = h^{(l)}(\lambda_i) \quad \text{for } l = 0,1,\ldots,n_i-1 \text{ and } i = 1,2,\ldots,m$$
Then $f(A) = h(A)$; in particular, with $f(\lambda) = e^{\lambda t}$ we obtain $e^{At} = h(A)$.
Compute $e^{A_1 t}$.
Solving the above equations, we get
$$\begin{cases}\beta_0 = -2te^{t} + e^{2t}\\ \beta_1 = 3te^{t} + 2e^{t} - 2e^{2t}\\ \beta_2 = e^{2t} - e^{t} - te^{t}\end{cases}$$
Thus
$$e^{A_1 t} = h(A_1) = \begin{pmatrix} 2e^{t} - e^{2t} & 0 & 2e^{t} - 2e^{2t}\\ 0 & e^{t} & 0\\ e^{2t} - e^{t} & 0 & 2e^{2t} - e^{t} \end{pmatrix}$$
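The slide does not display $A_1$ itself. The sketch below checks $e^{A_1 t} = h(A_1)$ numerically, assuming $A_1 = \begin{bmatrix}0&0&-2\\0&1&0\\1&0&3\end{bmatrix}$, a matrix consistent with the displayed result (eigenvalues 1, 1, 2):

```python
# Sketch: verify h(A1) = expm(A1 t) for an assumed A1 (not shown on the slide)
# whose eigenvalues {1, 1, 2} match the beta coefficients above.
import numpy as np
from scipy.linalg import expm

A1 = np.array([[0.0, 0.0, -2.0],
               [0.0, 1.0,  0.0],
               [1.0, 0.0,  3.0]])
t = 0.7

b0 = -2*t*np.exp(t) + np.exp(2*t)
b1 =  3*t*np.exp(t) + 2*np.exp(t) - 2*np.exp(2*t)
b2 =  np.exp(2*t) - np.exp(t) - t*np.exp(t)

h_A1 = b0*np.eye(3) + b1*A1 + b2*(A1 @ A1)   # h(A1)
print(np.allclose(h_A1, expm(A1*t)))          # True
```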
2. Using the Jordan form of $A$: $A = Q\hat{A}Q^{-1}$, where $\hat{A}$ is in the Jordan form
$$\hat{A} = \begin{pmatrix} \lambda_1 & 1 & & & \\ & \lambda_1 & 1 & & \\ & & \ddots & \ddots & \\ & & & \lambda_1 & 1\\ & & & & \lambda_1 \end{pmatrix}_{m\times m}$$
then we have
$$e^{At} = Q\begin{pmatrix} e^{\lambda_1 t} & te^{\lambda_1 t} & t^2e^{\lambda_1 t}/2! & \cdots & t^{m-1}e^{\lambda_1 t}/(m-1)!\\ & e^{\lambda_1 t} & te^{\lambda_1 t} & \cdots & t^{m-2}e^{\lambda_1 t}/(m-2)!\\ & & e^{\lambda_1 t} & \cdots & t^{m-3}e^{\lambda_1 t}/(m-3)!\\ & & & \ddots & \vdots\\ & & & & e^{\lambda_1 t} \end{pmatrix}Q^{-1} \qquad (3.46)$$
Example 3.11 Consider
$$A = \begin{pmatrix} \lambda_1 & 1 & & & \\ & \lambda_1 & 1 & & \\ & & \lambda_1 & & \\ & & & \lambda_2 & 1\\ & & & & \lambda_2 \end{pmatrix}$$
By (3.46)
$$e^{At} = \begin{pmatrix} e^{\lambda_1 t} & te^{\lambda_1 t} & t^2e^{\lambda_1 t}/2! & 0 & 0\\ 0 & e^{\lambda_1 t} & te^{\lambda_1 t} & 0 & 0\\ 0 & 0 & e^{\lambda_1 t} & 0 & 0\\ 0 & 0 & 0 & e^{\lambda_2 t} & te^{\lambda_2 t}\\ 0 & 0 & 0 & 0 & e^{\lambda_2 t} \end{pmatrix}$$
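Formula (3.46) can be spot-checked numerically. The sketch below builds a matrix with the Jordan structure of Example 3.11 from assumed eigenvalues and a random similarity $Q$:

```python
# Sketch: compare expm(A t) with the block formula (3.46) for a matrix with
# a 3x3 Jordan block at l1 and a 2x2 block at l2 (assumed eigenvalues).
import numpy as np
from scipy.linalg import expm

l1, l2, t = -1.0, -2.0, 0.5
J = np.diag([l1, l1, l1, l2, l2])
J[0, 1] = J[1, 2] = J[3, 4] = 1.0     # superdiagonal ones of the Jordan blocks

e = np.exp
eJt = np.array([
    [e(l1*t), t*e(l1*t), t**2*e(l1*t)/2, 0,       0],
    [0,       e(l1*t),   t*e(l1*t),      0,       0],
    [0,       0,         e(l1*t),        0,       0],
    [0,       0,         0,              e(l2*t), t*e(l2*t)],
    [0,       0,         0,              0,       e(l2*t)],
])

rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 5))       # invertible with probability one
A = Q @ J @ np.linalg.inv(Q)
print(np.allclose(expm(A*t), Q @ eJt @ np.linalg.inv(Q)))   # True
```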
3. Using $e^{At} = \mathcal{L}^{-1}\left[(sI - A)^{-1}\right]$
The computation of the inverse of $(sI-A)$:
• Taking the inverse of $(sI-A)$ by definition
• Using Theorem 3.5
• Using $(sI-A)^{-1} = Q(sI-\hat{A})^{-1}Q^{-1}$ and (3.49)
• Using the infinite power series in (3.57):
$$(sI-A)^{-1} = s^{-1}\sum_{k=0}^{\infty}\left(s^{-1}A\right)^k \qquad (3.57)$$
we obtain
$$\begin{cases}\beta_0 = (s+1)^{-1} + (s+1)^{-2}\\ \beta_1 = (s+1)^{-2}\end{cases}$$
Thus we have
$$h(\lambda) = \left[(s+1)^{-1} + (s+1)^{-2}\right] + (s+1)^{-2}\lambda$$
and
$$(sI-A)^{-1} = h(A) = \left[(s+1)^{-1} + (s+1)^{-2}\right]I + (s+1)^{-2}A = \begin{bmatrix} (s+2)/(s+1)^2 & -1/(s+1)^2\\ 1/(s+1)^2 & s/(s+1)^2 \end{bmatrix}$$
So we have
$$x(t) = \begin{bmatrix} (1+t)e^{-t} & -te^{-t}\\ te^{-t} & (1-t)e^{-t} \end{bmatrix}x(0) + \begin{bmatrix} -\int_0^t (t-\tau)e^{-(t-\tau)}u(\tau)\,d\tau\\ \int_0^t \left[1-(t-\tau)\right]e^{-(t-\tau)}u(\tau)\,d\tau \end{bmatrix}$$
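Both the closed-form $e^{At}$ and the resolvent can be verified numerically; the matrix $A = \begin{bmatrix}0&-1\\1&-2\end{bmatrix}$ is read off from $(sI-A)^{-1} = [(s+1)^{-1}+(s+1)^{-2}]I + (s+1)^{-2}A$ above:

```python
# Sketch: check the zero-input matrix (the inverse Laplace transform of the
# resolvent) and the resolvent formula h(A) at a sample frequency.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0], [1.0, -2.0]])
t = 1.3
eAt_closed = np.array([[(1+t)*np.exp(-t), -t*np.exp(-t)],
                       [ t*np.exp(-t),    (1-t)*np.exp(-t)]])
print(np.allclose(expm(A*t), eAt_closed))                   # True

s = 2.0                                                     # sample point
resolvent = np.linalg.inv(s*np.eye(2) - A)
h_A = ((s+1)**-1 + (s+1)**-2)*np.eye(2) + (s+1)**-2 * A     # h(A) from above
print(np.allclose(resolvent, h_A))                          # True
```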
The general property of the zero-input response $e^{At}x(0)$: consider a simple form of $e^{At}$,
$$e^{At} = Q\begin{bmatrix} e^{\lambda_1 t} & te^{\lambda_1 t} & t^2e^{\lambda_1 t}/2! & 0 & 0\\ 0 & e^{\lambda_1 t} & te^{\lambda_1 t} & 0 & 0\\ 0 & 0 & e^{\lambda_1 t} & 0 & 0\\ 0 & 0 & 0 & e^{\lambda_1 t} & 0\\ 0 & 0 & 0 & 0 & e^{\lambda_2 t} \end{bmatrix}Q^{-1}$$
and
$$x[k+1] := x((k+1)T) = e^{A(k+1)T}x(0) + \int_0^{(k+1)T} e^{A((k+1)T-\tau)}Bu(\tau)\,d\tau \qquad (4.14)$$
We can rewrite (4.14) as
$$x[k+1] = e^{A(k+1)T}x(0) + \int_0^{kT} e^{A((k+1)T-\tau)}Bu(\tau)\,d\tau + \int_{kT}^{(k+1)T} e^{A((k+1)T-\tau)}Bu(\tau)\,d\tau$$
$$= e^{AT}\left[e^{AkT}x(0) + \int_0^{kT} e^{A(kT-\tau)}Bu(\tau)\,d\tau\right] + \int_{kT}^{(k+1)T} e^{A(kT+T-\tau)}Bu(\tau)\,d\tau$$
The bracketed term is $x[k]$. If $u(t) = u[k]$ for $kT \le t < (k+1)T$ (zero-order hold), the substitution $\sigma = kT+T-\tau$ turns the last integral into $\left(\int_0^{T} e^{A\sigma}\,d\sigma\right)Bu[k]$, and
$$\int_0^{T} e^{A\sigma}\,d\sigma = \sum_{k=0}^{\infty}\frac{T^{k+1}}{(k+1)!}A^k$$
If $A$ is nonsingular, then
$$\sum_{k=0}^{\infty}\frac{T^{k+1}}{(k+1)!}A^k = A^{-1}\sum_{k=0}^{\infty}\frac{T^{k+1}}{(k+1)!}A^{k+1} = A^{-1}\sum_{k=1}^{\infty}\frac{T^k}{k!}A^k = A^{-1}\left(\sum_{k=0}^{\infty}\frac{T^k}{k!}A^k - I\right) = A^{-1}\left(e^{AT}-I\right)$$
So we have
$$B_d = A^{-1}\left(e^{AT}-I\right)B \qquad (4.18)$$
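A quick check of the discretized pair $A_d = e^{AT}$ and $B_d$ from (4.18) against scipy's zero-order-hold discretization, with an illustrative invertible $A$ and sampling period $T$:

```python
# Sketch: the closed form Bd = A^{-1}(e^{AT} - I)B matches scipy's ZOH result.
import numpy as np
from scipy.linalg import expm
from scipy.signal import cont2discrete

A = np.array([[0.0, 1.0], [-2.0, -3.0]])     # invertible, as (4.18) requires
B = np.array([[0.0], [1.0]])
T = 0.1                                       # sampling period

Ad = expm(A*T)                                # discretized state matrix
Bd = np.linalg.inv(A) @ (Ad - np.eye(2)) @ B  # closed form (4.18)

Ad_ref, Bd_ref, _, _, _ = cont2discrete((A, B, np.eye(2), np.zeros((2, 1))),
                                        T, method='zoh')
print(np.allclose(Ad, Ad_ref), np.allclose(Bd, Bd_ref))   # True True
```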
4.2.2 Solution of Discrete-Time Equations
Consider the discrete-time state-space equation
$$x[k+1] = Ax[k] + Bu[k]$$
$$y[k] = Cx[k] + Du[k]$$
Its solution can be obtained by direct substitution:
$$x[k] = A^k x(0) + \sum_{m=0}^{k-1} A^{k-1-m}Bu[m]$$
For the Jordan-form matrix considered above, we have
$$A^k = Q\begin{pmatrix} \lambda_1^k & k\lambda_1^{k-1} & \frac{k(k-1)}{2}\lambda_1^{k-2} & 0 & 0\\ 0 & \lambda_1^k & k\lambda_1^{k-1} & 0 & 0\\ 0 & 0 & \lambda_1^k & 0 & 0\\ 0 & 0 & 0 & \lambda_1^k & 0\\ 0 & 0 & 0 & 0 & \lambda_2^k \end{pmatrix}Q^{-1}$$
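The binomial pattern in $A^k$ (including the $k(k-1)/2$ factor) can be confirmed directly on the Jordan matrix itself (illustrative eigenvalues; $Q = I$ for simplicity):

```python
# Sketch: check the A^k formula for a 3x3 Jordan block plus two 1x1 blocks.
import numpy as np

l1, l2, k = 0.9, -0.5, 7
J = np.diag([l1, l1, l1, l1, l2])
J[0, 1] = J[1, 2] = 1.0                      # one 3x3 Jordan block at l1

Jk = np.array([
    [l1**k, k*l1**(k-1), k*(k-1)/2*l1**(k-2), 0,     0],
    [0,     l1**k,       k*l1**(k-1),         0,     0],
    [0,     0,           l1**k,               0,     0],
    [0,     0,           0,                   l1**k, 0],
    [0,     0,           0,                   0,     l2**k],
])
print(np.allclose(np.linalg.matrix_power(J, k), Jk))   # True
```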
Select the state variables as
inductor current ----- state $x_1$
capacitor voltage ----- state $x_2$
So we have
the voltage across the inductor ----- $\dot{x}_1$
and the circuit equations
$$\begin{cases} x_1 = x_2 + \dot{x}_2 \\ \dot{x}_1 + x_2 - u = 0 \end{cases}$$
That is,
$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} = \begin{bmatrix}0 & -1\\ 1 & -1\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + \begin{bmatrix}1\\ 0\end{bmatrix}u, \qquad y = \begin{bmatrix}0 & 1\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} \qquad (4.22)$$
Another selection of state variables:
the loop current of the left loop ----- state $x_1$
the loop current of the right loop ----- state $x_2$
Then
the voltage across the inductor ----- $\dot{x}_1$
the voltage across the resistor ----- $x_1\cdot 1 - x_2\cdot 1 = x_1 - x_2$
so KVL around the left loop gives $u = \dot{x}_1 + (x_1 - x_2)$, or
$$\dot{x}_1 = -x_1 + x_2 + u$$
From
the voltage across the capacitor = that across the resistor
we obtain $\dot{x}_1 - \dot{x}_2 = x_2$ (the capacitor current is $x_2$), i.e.
$$\dot{x}_2 = \dot{x}_1 - x_2 = -x_1 + u$$
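Since both state choices describe the same network, their transfer functions from $u$ to $y$ must coincide. The sketch below completes the second model as $\dot{x} = \begin{bmatrix}-1 & 1\\ -1 & 0\end{bmatrix}x + \begin{bmatrix}1\\ 1\end{bmatrix}u$, $y = x_1 - x_2$; the output row is an assumption read off from $y$ = capacitor voltage = $x_1 - x_2$:

```python
# Sketch: the two circuit models share one transfer function; the second
# model's C-row (y = x1 - x2) is completed from the derivation above.
import numpy as np

def tf(A, B, C, s):
    """Evaluate C (sI - A)^{-1} B at a complex frequency s."""
    return C @ np.linalg.solve(s*np.eye(A.shape[0]) - A, B)

A1, B1, C1 = np.array([[0., -1], [1, -1]]), np.array([[1.], [0]]), np.array([[0., 1]])
A2, B2, C2 = np.array([[-1., 1], [-1, 0]]), np.array([[1.], [1]]), np.array([[1., -1]])

for s in (1j, 0.5 + 2j, 3.0 + 0j):
    print(np.allclose(tf(A1, B1, C1, s), tf(A2, B2, C2, s)))  # True each time
```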
$$\dot{\bar{x}}(t) = \bar{A}\bar{x}(t) + \bar{B}u(t), \qquad \bar{y}(t) = \bar{C}\bar{x}(t) + \bar{D}u(t) \qquad (4.24)$$
where $\bar{A} = PAP^{-1}$, $\bar{B} = PB$, $\bar{C} = CP^{-1}$, $\bar{D} = D$, and
$$\hat{\bar{G}}(s) = \bar{C}(sI-\bar{A})^{-1}\bar{B} + \bar{D} = CP^{-1}\left(sI - PAP^{-1}\right)^{-1}PB + D = C(sI-A)^{-1}B + D = \hat{G}(s)$$
Definition: Two state equations are said to be zero-state equivalent if they have the same transfer function matrix.
Theorem 4.1 Two linear time-invariant equations $\{A, B, C, D\}$ and $\{\bar{A}, \bar{B}, \bar{C}, \bar{D}\}$ are zero-state equivalent if and only if $D = \bar{D}$ and
$$CA^mB = \bar{C}\bar{A}^m\bar{B}, \qquad m = 0,1,2,\ldots$$
Proof: Since
$$(sI-A)^{-1} = s^{-1}\left(I - As^{-1}\right)^{-1} = s^{-1}\sum_{m=0}^{\infty}\left(As^{-1}\right)^m = \sum_{m=0}^{\infty} A^m s^{-m-1}$$
we have
$$D + C(sI-A)^{-1}B = D + \sum_{m=0}^{\infty} CA^mB\, s^{-m-1}$$
Similarly,
$$\bar{D} + \bar{C}(sI-\bar{A})^{-1}\bar{B} = \bar{D} + \sum_{m=0}^{\infty} \bar{C}\bar{A}^m\bar{B}\, s^{-m-1}$$
So they are zero-state equivalent if and only if
$$D + C(sI-A)^{-1}B = \bar{D} + \bar{C}(sI-\bar{A})^{-1}\bar{B}$$
that is, if and only if the two power series in $s^{-1}$ agree term by term: $D = \bar{D}$ and $CA^mB = \bar{C}\bar{A}^m\bar{B}$ for all $m$.
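A numerical illustration of Theorem 4.1: an algebraically equivalent pair $\{PAP^{-1}, PB, CP^{-1}, D\}$ shares all Markov parameters with $\{A, B, C, D\}$ (random matrices, arbitrary invertible $P$):

```python
# Sketch: equivalent realizations have identical Markov parameters C A^m B.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)); B = rng.standard_normal((3, 2))
C = rng.standard_normal((1, 3)); P = rng.standard_normal((3, 3))
Pinv = np.linalg.inv(P)
Ab, Bb, Cb = P @ A @ Pinv, P @ B, C @ Pinv   # the equivalent realization

ok = all(np.allclose(C @ np.linalg.matrix_power(A, m) @ B,
                     Cb @ np.linalg.matrix_power(Ab, m) @ Bb)
         for m in range(6))
print(ok)                                     # True
```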
This transfer function matrix is proper. Next we show the converse. The proper rational matrix $\hat{G}(s) \in \mathbb{R}^{q\times p}(s)$ can be decomposed into two parts:
$$\hat{G}(s) = \hat{G}(\infty) + \hat{G}_{sp}(s) \qquad (4.31)$$
where $\hat{G}_{sp}(s)$ is the strictly proper part of $\hat{G}(s)$. Let
$$d(s) = s^r + \alpha_1 s^{r-1} + \cdots + \alpha_{r-1}s + \alpha_r \qquad (4.32)$$
be the least common denominator of all entries of $\hat{G}_{sp}(s)$; it is monic.
Then $\hat{G}_{sp}(s)$ can be expressed as
$$\hat{G}_{sp}(s) = \frac{1}{d(s)}N(s) = \frac{1}{d(s)}\left[N_1 s^{r-1} + N_2 s^{r-2} + \cdots + N_{r-1}s + N_r\right] \qquad (4.33)$$
For example, for the $\hat{G}(s)$ of Example 4.7 below, $d(s) = (s+0.5)(s+2)^2$ and
$$\hat{G}(s) = \hat{G}(\infty) + \hat{G}_{sp}(s) = \hat{G}(\infty) + \frac{1}{d(s)}\left(\begin{bmatrix}-6 & 3\\ 0 & 1\end{bmatrix}s^2 + \begin{bmatrix}-24 & 7.5\\ 0.5 & 1.5\end{bmatrix}s + \begin{bmatrix}-24 & 3\\ 1 & 0.5\end{bmatrix}\right)$$
as shown in Fig. 4.4(a).
Suppose that
$$\begin{cases}\dot{x}_i = A_i x_i + b_i u_i\\ y_{ci} = C_i x_i + d_i u_i\end{cases} \qquad i = 1,2,\ldots,m$$
is a realization of the $i$th column, $\hat{y}_{ci}(s) = \hat{G}_{ci}(s)\hat{u}_i(s)$. Then
$$\dot{x} = \begin{bmatrix}A_1 & & & \\ & A_2 & & \\ & & \ddots & \\ & & & A_m\end{bmatrix}x + \begin{bmatrix}b_1 & & & \\ & b_2 & & \\ & & \ddots & \\ & & & b_m\end{bmatrix}u$$
$$y = \begin{bmatrix}C_1 & C_2 & \cdots & C_m\end{bmatrix}x + \begin{bmatrix}d_1 & d_2 & \cdots & d_m\end{bmatrix}u$$
will be a realization of $\hat{G}(s)$. Thus we can realize each column of $\hat{G}(s)$ and then combine the column realizations to yield a realization of $\hat{G}(s)$.
Example 4.7 Consider the proper rational matrix
$$\hat{G}(s) = \begin{bmatrix}\dfrac{4s-10}{2s+1} & \dfrac{3}{s+2}\\[2ex] \dfrac{1}{(2s+1)(s+2)} & \dfrac{s+1}{(s+2)^2}\end{bmatrix}$$
and
$$\hat{G}_{c2}(s) = \begin{bmatrix}\dfrac{3}{s+2}\\[2ex] \dfrac{s+1}{(s+2)^2}\end{bmatrix} = \frac{1}{s^2+4s+4}\begin{bmatrix}3s+6\\ s+1\end{bmatrix} = \frac{1}{s^2+4s+4}\left(\begin{bmatrix}3\\ 1\end{bmatrix}s + \begin{bmatrix}6\\ 1\end{bmatrix}\right)$$
The combined realization is
$$\dot{x} = \begin{bmatrix}A_1 & \\ & A_2\end{bmatrix}x + \begin{bmatrix}b_1 & \\ & b_2\end{bmatrix}u, \qquad y = \begin{bmatrix}C_1 & C_2\end{bmatrix}x + \begin{bmatrix}d_1 & d_2\end{bmatrix}u$$
That is,
$$\dot{x} = \begin{bmatrix}-2.5 & -1 & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & 0 & -4 & -4\\ 0 & 0 & 1 & 0\end{bmatrix}x + \begin{bmatrix}1 & 0\\ 0 & 0\\ 0 & 1\\ 0 & 0\end{bmatrix}u$$
$$y = \begin{bmatrix}-6 & -12 & 3 & 6\\ 0 & 0.5 & 1 & 1\end{bmatrix}x + \begin{bmatrix}2 & 0\\ 0 & 0\end{bmatrix}u$$
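The realization can be verified by evaluating $C(sI-A)^{-1}B + D$ against $\hat{G}(s)$ at a few sample frequencies:

```python
# Sketch: numerical check of the Example 4.7 realization against G_hat(s).
import numpy as np

A = np.array([[-2.5, -1, 0, 0],
              [ 1.0,  0, 0, 0],
              [ 0.0,  0, -4, -4],
              [ 0.0,  0,  1,  0]])
B = np.array([[1.0, 0], [0, 0], [0, 1], [0, 0]])
C = np.array([[-6.0, -12, 3, 6], [0, 0.5, 1, 1]])
D = np.array([[2.0, 0], [0, 0]])

def G_hat(s):
    return np.array([[(4*s - 10)/(2*s + 1),   3/(s + 2)],
                     [1/((2*s + 1)*(s + 2)),  (s + 1)/(s + 2)**2]])

for s in (1j, 2.0 + 0j, 0.3 + 0.7j):
    G_real = C @ np.linalg.solve(s*np.eye(4) - A, B) + D
    print(np.allclose(G_real, G_hat(s)))      # True each time
```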
4.5 Solution of Linear Time-Varying (LTV) State Equation
Consider the linear time-varying (LTV) state equation
$$\dot{x}(t) = A(t)x(t) + B(t)u(t) \qquad (4.45)$$
$$y(t) = C(t)x(t) + D(t)u(t) \qquad (4.46)$$
where $x(t) \in \mathbb{R}^n$, $u(t) \in \mathbb{R}^p$ and $y(t) \in \mathbb{R}^q$ for any $t$. We assume that for any initial state and any input u(t), the state equation has a unique solution.
We first consider the solutions of
$$\dot{x}(t) = A(t)x(t) \qquad (4.49)$$
The solution of the scalar time-varying equation $\dot{x} = a(t)x$ due to $x(0)$ is
$$x(t) = e^{\int_0^t a(\tau)\,d\tau}\, x(0)$$
Is the matrix counterpart
$$x(t) = e^{\int_0^t A(\tau)\,d\tau}\, x(0)$$
the solution of (4.49) due to $x(0)$? The answer is no, because in general
$$\frac{d}{dt}\,e^{\int_0^t A(\tau)\,d\tau} \neq A(t)\,e^{\int_0^t A(\tau)\,d\tau}$$
unless $A(t)$ commutes with $\int_0^t A(\tau)\,d\tau$. So we have to find another approach to develop the solution.
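A numerical illustration of why the naive formula fails, sketched with an assumed $A(t) = \begin{bmatrix}0 & 1\\ -t & 0\end{bmatrix}$ that does not commute with its integral:

```python
# Sketch: for non-commuting A(t), exp(int_0^t A) x(0) is NOT the solution.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = lambda t: np.array([[0.0, 1.0], [-t, 0.0]])
intA = lambda t: np.array([[0.0, t], [-t**2/2, 0.0]])   # int_0^t A(tau) dtau
x0 = np.array([1.0, 0.0])
t_end = 2.0

sol = solve_ivp(lambda t, x: A(t) @ x, (0.0, t_end), x0, rtol=1e-10)
print(sol.y[:, -1])                 # true solution at t_end
print(expm(intA(t_end)) @ x0)       # differs: the naive formula fails
```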
Fundamental matrix
For any $n$ linearly independent initial states $x_i(t_0)$ ($i = 1,2,\ldots,n$), there exist $n$ unique solutions $x_i(t)$ ($i = 1,2,\ldots,n$), one for each initial state.
Arrange the $n$ solutions and the $n$ initial states as square matrices
$$X(t) = \begin{bmatrix}x_1(t) & x_2(t) & \cdots & x_n(t)\end{bmatrix}, \qquad X(t_0) = \begin{bmatrix}x_1(t_0) & x_2(t_0) & \cdots & x_n(t_0)\end{bmatrix}$$
Obviously, $X(t_0)$ is nonsingular. Such an $X(t)$ is called a fundamental matrix of (4.49). Because the selection of the initial states is not unique, the fundamental matrix is not unique.
Example 4.8 Consider the homogeneous equation
$$\dot{x}(t) = \begin{bmatrix}0 & 0\\ t & 0\end{bmatrix}x(t)$$
That is,
$$\dot{x}_1(t) = 0, \qquad \dot{x}_2(t) = t\,x_1(t)$$
For the initial states $[1\ \ 0]^T$ and $[1\ \ 2]^T$, the solutions are $[1\ \ 0.5t^2]^T$ and $[1\ \ 0.5t^2+2]^T$, respectively. Thus
$$X(t) = \begin{bmatrix}1 & 1\\ 0.5t^2 & 0.5t^2+2\end{bmatrix}$$
is a fundamental matrix.
A very important property of the fundamental
matrix is that it is nonsingular for all t.
Definition 4.2 Let $X(t)$ be any fundamental matrix of (4.49). Then
$$\Phi(t, t_0) := X(t)X^{-1}(t_0)$$
is called the state transition matrix of (4.49).
For the fundamental matrix in Example 4.8, the inverse is
$$X^{-1}(t) = \begin{bmatrix}0.25t^2+1 & -0.5\\ -0.25t^2 & 0.5\end{bmatrix}$$
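A quick check that the $X(t)$ of Example 4.8 is indeed a fundamental matrix, and of the transition matrix $\Phi(t,t_0) = X(t)X^{-1}(t_0)$:

```python
# Sketch: verify X_dot = A(t) X and form Phi(t, t0) for Example 4.8.
import numpy as np

X    = lambda t: np.array([[1.0, 1.0], [0.5*t**2, 0.5*t**2 + 2]])
Xdot = lambda t: np.array([[0.0, 0.0], [t, t]])
A    = lambda t: np.array([[0.0, 0.0], [t, 0.0]])

t, t0 = 1.5, 0.5
print(np.allclose(Xdot(t), A(t) @ X(t)))        # True: X(t) solves (4.49)
Phi = X(t) @ np.linalg.inv(X(t0))               # state transition matrix
print(Phi)
```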
The solution of (4.45) is
$$x(t) = \Phi(t,t_0)x_0 + \int_{t_0}^{t}\Phi(t,\tau)B(\tau)u(\tau)\,d\tau \qquad (4.48)$$
Indeed, differentiating (4.48) and using $\frac{\partial}{\partial t}\Phi(t,\tau) = A(t)\Phi(t,\tau)$ and $\Phi(t,t) = I$ gives
$$\dot{x}(t) = A(t)\Phi(t,t_0)x_0 + \int_{t_0}^{t}\left(\frac{\partial}{\partial t}\Phi(t,\tau)B(\tau)u(\tau)\right)d\tau + \Phi(t,t)B(t)u(t)$$
$$= A(t)\Phi(t,t_0)x_0 + \int_{t_0}^{t} A(t)\Phi(t,\tau)B(\tau)u(\tau)\,d\tau + B(t)u(t)$$
$$= A(t)\left[\Phi(t,t_0)x_0 + \int_{t_0}^{t}\Phi(t,\tau)B(\tau)u(\tau)\,d\tau\right] + B(t)u(t) = A(t)x(t) + B(t)u(t)$$
Now substitute the solution into the output equation with zero initial state. Since $\delta(\tau - t)$ picks out $\tau = t \in [t_0, t]$,
$$D(t)u(t) = \int_{-\infty}^{\infty} D(\tau)u(\tau)\delta(\tau - t)\,d\tau = \int_{t_0}^{t} D(\tau)u(\tau)\delta(\tau - t)\,d\tau$$
so
$$y(t) = \int_{t_0}^{t}\left[C(t)\Phi(t,\tau)B(\tau) + D(t)\delta(t-\tau)\right]u(\tau)\,d\tau$$
which leads to
$$G(t,\tau) = C(t)\Phi(t,\tau)B(\tau) + D(t)\delta(t-\tau) \qquad (4.62)$$
Equation (4.62) shows the relationship between the input-output and the state-space descriptions.
Computing the state transition matrix $\Phi(t,\tau)$ is generally difficult, but for a linear time-invariant system we have $X(t) = e^{At}$ and hence $\Phi(t,\tau) = e^{A(t-\tau)}$.
Under the time-varying equivalence transformation $\bar{x}(t) = P(t)x(t)$,
$$\bar{A}(t) = \left[P(t)A(t) + \dot{P}(t)\right]P^{-1}(t)$$
$$\bar{B}(t) = P(t)B(t), \qquad \bar{C}(t) = C(t)P^{-1}(t), \qquad \bar{D}(t) = D(t)$$
In particular, choose $P(t) = e^{A_0 t}X^{-1}(t)$ for a constant matrix $A_0$. Using
$$\frac{d}{dt}\left(X^{-1}(t)\right) = -X^{-1}(t)\dot{X}(t)X^{-1}(t)$$
and $\dot{X}(t) = A(t)X(t)$, we get
$$\bar{A}(t) = e^{A_0 t}X^{-1}(t)A(t)X(t)e^{-A_0 t} + A_0 e^{A_0 t}X^{-1}(t)X(t)e^{-A_0 t} + e^{A_0 t}\frac{d}{dt}\left(X^{-1}(t)\right)X(t)e^{-A_0 t}$$
$$= e^{A_0 t}X^{-1}(t)A(t)X(t)e^{-A_0 t} + A_0 + e^{A_0 t}\left[-X^{-1}(t)A(t)\right]X(t)e^{-A_0 t} = A_0$$
so the transformed equation has the constant matrix $\bar{A} = A_0$.
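A numerical sketch of this construction using Example 4.8's fundamental matrix, with the choice $A_0 = 0$ (so $P(t) = X^{-1}(t)$) and a finite-difference approximation of $\dot{P}$:

```python
# Sketch: with P(t) = e^{A0 t} X^{-1}(t) and A0 = 0, the transformed
# A_bar(t) = [P(t)A(t) + P_dot(t)] P^{-1}(t) is the constant zero matrix.
import numpy as np

X = lambda t: np.array([[1.0, 1.0], [0.5*t**2, 0.5*t**2 + 2]])
A = lambda t: np.array([[0.0, 0.0], [t, 0.0]])
P = lambda t: np.linalg.inv(X(t))               # A0 = 0 => P(t) = X^{-1}(t)

t, h = 1.2, 1e-6
Pdot = (P(t + h) - P(t - h)) / (2*h)            # central-difference derivative
A_bar = (P(t) @ A(t) + Pdot) @ np.linalg.inv(P(t))
print(np.round(A_bar, 6))                        # ~ zero matrix = A0
```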
Result 1: Algebraic equivalence transformation
doesn’t change the impulse response matrix of
the system.
Proof: The impulse response matrix of (4.69) is given by (4.62):
$$G(t,\tau) = C(t)\Phi(t,\tau)B(\tau) + D(t)\delta(t-\tau) = C(t)X(t)X^{-1}(\tau)B(\tau) + D(t)\delta(t-\tau)$$
Similarly, the impulse response matrix of (4.70) is
$$\bar{G}(t,\tau) = \bar{C}(t)\bar{X}(t)\bar{X}^{-1}(\tau)\bar{B}(\tau) + \bar{D}(t)\delta(t-\tau)$$
Using (4.71),
$$\bar{X}(t) := P(t)X(t) \qquad (4.71)$$
we have
$$\bar{G}(t,\tau) = C(t)P^{-1}(t)P(t)X(t)X^{-1}(\tau)P^{-1}(\tau)P(\tau)B(\tau) + D(t)\delta(t-\tau) = C(t)X(t)X^{-1}(\tau)B(\tau) + D(t)\delta(t-\tau) = G(t,\tau)$$