
LMI Methods in Optimal and Robust Control

Matthew M. Peet
Arizona State University

Lecture 4: LMIs for State-Space Internal Stability


Solving the Equations
Find the output given the input

State-Space:

    ẋ(t) = Ax(t) + Bu(t)
    y(t) = Cx(t) + Du(t)

[Block diagram: input u → State-Space System, x(0) = 0 → output y]

Basic Question: Given an input function, u(t), what is the output?

Definition 1.
Define the Matrix Exponential:
    e^A = I + A + (1/2)A² + (1/6)A³ + ⋯ + (1/k!)A^k + ⋯
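A quick numerical sanity check of this definition (a sketch, assuming MATLAB; the matrix A and the truncation order are arbitrary choices):

> A = [0 1; -2 -3];                        % any square matrix
> S = eye(2); T = eye(2);
> for k = 1:20, T = T*A/k; S = S + T; end  % partial sum I + A + A^2/2! + ...
> norm(S - expm(A))                        % small: the series matches the built-in expm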



Properties of the Matrix Exponential
The matrix exponential is similar to the scalar exponential with important
differences.
• e^0 = I

    e^0 = I + 0 + (1/2)0² + (1/6)0³ + ⋯ = I

• e^{M^T} = (e^M)^T

    e^{M^T} = I + M^T + (1/2)(M^T)² + (1/6)(M^T)³ + ⋯ + (1/k!)(M^T)^k + ⋯

• (d/dt) e^{At} = A e^{At}

    (d/dt) e^{At} = (d/dt) [ I + (At) + (1/2)(At)² + ⋯ + (1/k!)(At)^k + ⋯ ]
                  = A [ I + (At) + (1/2)(At)² + ⋯ + (1/(k−1)!)(At)^{k−1} + ⋯ ]

• However, e^{M+N} ≠ e^M e^N unless M N = N M.
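The last point is easy to see numerically; a short sketch (the matrices M and N are arbitrary choices, one non-commuting pair and one commuting pair):

> M = [0 1; 0 0]; N = [0 0; 1 0];        % M*N ~= N*M
> norm(expm(M+N) - expm(M)*expm(N))      % clearly nonzero: the identity fails
> N2 = 2*M;                              % M*N2 == N2*M
> norm(expm(M+N2) - expm(M)*expm(N2))    % ~0: the identity holds for commuting matrices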
State-Space Solutions and the Jordan Decomposition
The equation ẋ(t) = Ax(t), x(0) = x_0 has solution

    x(t) = e^{At} x_0

Proof.
Let x(t) = e^{At} x_0. Then
• ẋ(t) = A e^{At} x_0 = Ax(t).
• x(0) = e^0 x_0 = x_0
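A numerical illustration of this solution formula (a sketch; A, x0 and the time horizon are arbitrary choices), comparing the matrix-exponential solution with an ODE solver:

> A = [0 1; -2 -3]; x0 = [1; 0];
> [t, X] = ode45(@(t,x) A*x, [0 5], x0);
> Xexp = zeros(numel(t), 2);
> for i = 1:numel(t), Xexp(i,:) = (expm(A*t(i))*x0)'; end
> max(max(abs(X - Xexp)))                % small: both give the same trajectory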

Question: So what does e^{At} look like? Let λ_i be the eigenvalues of A.


Definition 2.
A Jordan Block, J_i, is a matrix of the form

    J_i = λ_i I + N

where λ_i I is the diagonal part (λ_i in every diagonal entry) and N is the nilpotent matrix with ones on the first superdiagonal and zeros elsewhere.
State-Space Solutions and the Jordan Decomposition
Theorem 3.
For any A ∈ R^{n×n}, there exists an invertible T such that

    A = T J T^{-1},    J = diag(J_1, …, J_n)

where the J_i are Jordan Blocks.

• A^k = T J^k T^{-1}. Hence

    e^{At} = T e^{Jt} T^{-1} = T diag(e^{J_1 t}, …, e^{J_n t}) T^{-1}

• λ_i I and N commute, hence e^{λ_i I + N} = e^{λ_i I} e^N. Further, N^d = 0 for some d.

    e^{J_i t} = e^{λ_i t} ( I + Nt + (1/2)N²t² + ⋯ + (1/(k−1)!)N^{k−1}t^{k−1} ),   where k is the block size

• lim_{t→∞} t^i e^{λt} = 0 if and only if Re λ < 0.
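A quick numerical illustration of e^{At} = T e^{Jt} T^{-1} (a sketch; the matrix A is an arbitrary choice with distinct eigenvalues, so its Jordan form is diagonal and MATLAB's eig recovers T and J):

> A = [0 1; -2 -3];                 % distinct eigenvalues -1, -2: J is diagonal
> [T, J] = eig(A);                  % A = T*J/T with J = diag(lambda_i)
> t = 0.7;
> norm(expm(A*t) - T*expm(J*t)/T)   % ~0: e^{At} = T e^{Jt} T^{-1}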


Stability of Continuous and Discrete-Time Systems

Definition 4.
A is Hurwitz if Re λ_i(A) < 0 for all i.

Theorem 5.
ẋ(t) = Ax(t) is stable if and only if A is Hurwitz.

For Discrete-Time Systems: x_{k+1} = Ax_k,

    x_k = A^k x_0

Definition 6.
A is Schur if |λ_i(A)| < 1 for all i.

Theorem 7.
x_{k+1} = Ax_k is stable if and only if A is Schur.
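These eigenvalue tests are easy to run directly; a minimal MATLAB sketch (A is an arbitrary example, and Ad is introduced here only to obtain a discrete-time example):

> A = [0 1; -2 -3];
> all(real(eig(A)) < 0)      % 1: A is Hurwitz, so xdot = Ax is stable
> Ad = expm(A*0.1);          % exact discretization x_{k+1} = e^{Ah} x_k, h = 0.1
> all(abs(eig(Ad)) < 1)      % 1: Ad is Schur, so the discrete-time system is stable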
Lyapunov Functions

ẋ(t) = f (x(t))
Theorem 8 (Lyapunov).
V is a Lyapunov Function if V(0) = 0, V(x) > 0 for x ≠ 0, and lim_{‖x‖→∞} V(x) = ∞. If

    (d/dt) V(x(t)) < 0 for ẋ(t) = f(x(t)),

then for any x(0) ∈ R^n the system ẋ(t) = f(x(t)) has a unique solution which is
stable in the sense of Lyapunov.
Lyapunov Functions: Local Stability

ẋ(t) = f (x(t))

Theorem 9 (Lyapunov Stability).


Suppose there exists a continuous V and α, β, γ > 0 where

    β‖x‖² ≤ V(x) ≤ α‖x‖²

    V̇(x) = ∇V(x)^T f(x) ≤ −γ‖x‖²

for all x ∈ X. Then any sub-level set of V in X is a Domain of Attraction.



Lyapunov Functions

Mass-Spring:

    ẍ = −(c/m)ẋ − (k/m)x
    V(x) = (1/2)mẋ² + (1/2)kx²
    V̇(x) = ẋ(−cẋ − kx) + kxẋ = −cẋ² − kẋx + kxẋ = −cẋ² ≤ 0

Pendulum:

    ẋ_1 = x_2,    ẋ_2 = −(g/l) sin x_1
    V(x) = (1 − cos x_1)gl + (1/2)l²x_2²
    V̇(x) = glx_2 sin x_1 − glx_2 sin x_1 = 0
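As a quick numerical check of the mass-spring computation (a sketch; the values m, c, k and the initial condition are arbitrary):

> m = 1; c = 0.5; k = 2;
> f = @(t,x) [x(2); -(c/m)*x(2) - (k/m)*x(1)];   % x = [position; velocity]
> [t, X] = ode45(f, [0 20], [1; 0]);
> V = 0.5*m*X(:,2).^2 + 0.5*k*X(:,1).^2;         % V = (1/2)m xdot^2 + (1/2)k x^2
> max(diff(V))                                   % <= 0 (up to solver tolerance): V never increases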
An Example of Global Stability Analysis
A controlled model of a jet engine (Derived from
Moore-Greitzer).
    ẋ = −y − (3/2)x² − (1/2)x³
    ẏ = 3x − y

This is feasible with

    V(x) = 4.5819x² − 1.5786xy + 1.7834y² − 0.12739x³ + 2.5189x²y − 0.34069xy²
           + 0.61188y³ + 0.47537x⁴ − 0.052424x³y + 0.44289x²y² + 0.090723y⁴
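A numerical spot check, not a proof: simulating the system and evaluating the V above along the trajectory should show V decreasing (a sketch; the initial condition and time horizon are arbitrary):

> f = @(t,s) [-s(2) - 1.5*s(1)^2 - 0.5*s(1)^3; 3*s(1) - s(2)];   % s = [x; y]
> V = @(x,y) 4.5819*x.^2 - 1.5786*x.*y + 1.7834*y.^2 - 0.12739*x.^3 ...
>            + 2.5189*x.^2.*y - 0.34069*x.*y.^2 + 0.61188*y.^3 ...
>            + 0.47537*x.^4 - 0.052424*x.^3.*y + 0.44289*x.^2.*y.^2 + 0.090723*y.^4;
> [t, S] = ode45(f, [0 10], [2; -2]);
> max(diff(V(S(:,1), S(:,2))))      % <= 0 (up to solver tolerance): V decreases along the trajectory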



The Lyapunov Inequality (Our First LMI)
Lemma 10 (An LMI for Hurwitz Stability).
A is Hurwitz if and only if there exists a P > 0 such that

A^T P + P A < 0

Proof.
Suppose there exists a P > 0 such that A^T P + P A < 0.
• Define the Lyapunov function V(x) = x^T P x.
• Then V(x) > 0 for x ≠ 0 and V(0) = 0.
• Furthermore,
    V̇(x(t)) = ẋ(t)^T P x(t) + x(t)^T P ẋ(t)
            = x(t)^T A^T P x(t) + x(t)^T P A x(t)
            = x(t)^T (A^T P + P A) x(t)

• Hence V̇(x(t)) < 0 for all x ≠ 0. Thus the system is globally stable.
• Global stability implies A is Hurwitz.
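This LMI can be checked numerically; a minimal YALMIP sketch (assuming A and its dimension n are defined; eta is a small positivity margin, since solvers do not accept strict inequalities), mirroring the code given later for the discrete-time case:

> P = sdpvar(n); eta = .1;
> F = [P >= eta*eye(n)];
> F = [F; A'*P + P*A <= 0];
> optimize(F);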
The Lyapunov Inequality
Proof.
For the other direction, if A is Hurwitz, for any Q > 0, let
    P = ∫_0^∞ e^{A^T s} Q e^{As} ds

• Converges because A is Hurwitz.


• Furthermore,

    P A = ∫_0^∞ e^{A^T s} Q e^{As} A ds
        = ∫_0^∞ e^{A^T s} Q A e^{As} ds = ∫_0^∞ e^{A^T s} Q (d/ds e^{As}) ds
        = [ e^{A^T s} Q e^{As} ]_0^∞ − ∫_0^∞ (d/ds e^{A^T s}) Q e^{As} ds
        = −Q − A^T ∫_0^∞ e^{A^T s} Q e^{As} ds = −Q − A^T P

• Thus P A + A^T P = −Q < 0.
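For a numerical check of this construction, the Control System Toolbox (if available) solves the same Lyapunov equation directly; a sketch with an arbitrary Hurwitz A and Q = I:

> A = [0 1; -2 -3]; Q = eye(2);
> P = lyap(A', Q);                 % solves A'*P + P*A + Q = 0
> min(eig(P))                      % > 0: P is positive definite since A is Hurwitz
> norm(A'*P + P*A + Q)             % ~0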



Discrete-Time Lyapunov Functions

x_{k+1} = f(x_k)

Theorem 11 (Lyapunov).
V is a Lyapunov Function if V(0) = 0, V(x) > 0 for x ≠ 0, and lim_{‖x‖→∞} V(x) = ∞. If

    V(x_{k+1}) < V(x_k) where x_{k+1} = f(x_k),

then for any x_0 ∈ R^n the system x_{k+1} = f(x_k) is stable in the sense of
Lyapunov.



Discrete-Time Lyapunov Functions
Lemma 12 (An LMI for Schur Stability).
A is Schur if and only if there exists a P > 0 such that

A^T P A − P < 0

Proof.
Suppose there exists a P > 0 such that A^T P A − P < 0.
• Define the Lyapunov function V(x) = x^T P x.
• Then V(x) > 0 for x ≠ 0 and V(0) = 0.
• Furthermore,
    V(x_{k+1}) = x_{k+1}^T P x_{k+1}
               = x_k^T A^T P A x_k
               < x_k^T P x_k = V(x_k)

• Hence V(x_{k+1}) < V(x_k) for all k ≥ 0. Thus the system is stable.
• Stability implies A is Schur.
Lyapunov Functions
Proof.
For the other direction, if A is Schur, for any Q > 0, let

    P = Σ_{k=0}^∞ (A^T)^k Q A^k

Then

    A^T P A − P = Σ_{k=1}^∞ (A^T)^k Q A^k − Σ_{k=0}^∞ (A^T)^k Q A^k
                = −(A^T)^0 Q A^0 = −Q < 0

• Thus A^T P A − P < 0.

YALMIP Code:
> P = sdpvar(n); eta = .1;
> F = [P >= eta*eye(n)];
> F = [F; A'*P*A - P <= 0];
> optimize(F);
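A possible post-solve check (a sketch, assuming the snippet above has run with a Schur matrix A and an installed SDP solver):

> Psol = value(P);                  % extract the numerical solution
> min(eig(Psol))                    % > 0
> max(eig(A'*Psol*A - Psol))        % <= 0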
Pole Locations AKA D-stability
Some people still care about pole locations.
• For these people, we have D-stability.
To begin, you have to define the acceptable region of the complex plane using
inequality constraints.
• Rise Time: ωn ≤ 1.8/tr
• Settling Time: σ ≤ −4.6/ts
• Percent Overshoot: σ ≤ −(ln Mp/π)|ωd|
Recall that if z is the complex pole location:
• ωn² = ‖z‖² = z*z
• ωd = Im z = (z − z*)/2
• σ = Re z = (z + z*)/2
Which yields (see the numerical check below):
• Rise Time: z*z − 1.8²/tr² ≤ 0
• Settling Time: (z + z*) + 4.6/ts ≤ 0
• Percent Overshoot: (z − z*) + (π/ln Mp)|z + z*| ≤ 0
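Since the pole locations of ẋ = Ax are just the eigenvalues of A, the first two conditions are easy to test directly (a sketch; A, tr and ts are placeholders, and the overshoot condition can be checked the same way):

> z = eig(A);                           % pole locations
> tr = 1; ts = 2;                       % example specifications
> all(abs(z).^2 - (1.8/tr)^2 <= 0)      % rise-time region: z z* - 1.8^2/tr^2 <= 0
> all(2*real(z) + 4.6/ts <= 0)          % settling-time region: (z + z*) + 4.6/ts <= 0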



An LMI for Pole Locations
Gutman proposed a nice LMI for D-stability with a single constraint

Theorem 13 (Gutman).
The pole locations, z ∈ C of A satisfy
    z ∈ {z ∈ C : Σ_{k,l} c_{kl} z^k (z*)^l < 0}

if and only if there exists some P > 0 such that

    Σ_{k,l} c_{kl} A^k P (A^T)^l < 0

But this has some disadvantages


• There can only be one constraint.
• The LMI is not linear in A.
  – So controller synthesis is not an LMI.
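For analysis with a fixed A, though, a single region is easy to test. As a sketch (not from the slides): for the disk z z* − r² < 0 the only nonzero coefficients are c_{11} = 1 and c_{00} = −r², so Gutman's condition becomes A P A^T − r² P < 0. In YALMIP (with arbitrary radius r and margin eta):

> P = sdpvar(n); eta = .1; r = 2;
> F = [P >= eta*eye(n)];
> F = [F; A*P*A' - r^2*P <= 0];
> optimize(F);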



An LMI for Convex Regions of the Complex Plane

To get around the limitations of Gutman’s result, we introduce the concept of


LMI regions.
• These are regions which can be represented using LMIs in the z and z ∗
variables

Definition 14.
An LMI Region of the complex plane has the form

{z ∈ C : F0 + zF1 + z ∗ F2 < 0}

Such regions are hard to visualize, but


• Are convex
  – e.g. Minimum rise time is not allowed!
• Can combine multiple convex regions



An LMI for Convex Regions of the Complex Plane
Examples

Rise Time: z*z − 1.8²/tr² ≤ 0, i.e. z*z − r² ≤ 0 with r = 1.8/tr:

    [−r, z; z*, −r] = F0 + F1 z + F2 z* < 0,
    where F0 = [−r, 0; 0, −r], F1 = [0, 1; 0, 0], F2 = [0, 0; 1, 0].

Which by the Schur complement is equivalent to r − z* r^{-1} z > 0.

Settling Time: 4.6/ts + z + z* ≤ 0

Percent Overshoot: (z − z*) + (π/ln Mp)|z + z*| ≤ 0

    [ln Mp (z + z*), π(z − z*); π(z − z*)*, ln Mp (z + z*)] = F0 + F1 z + F2 z* < 0,
    where F0 = [0, 0; 0, 0], F1 = [ln Mp, π; −π, ln Mp], F2 = [ln Mp, −π; π, ln Mp].
Which by the Schur complement is equivalent to ln Mp (z + z*) < 0 and
(z + z*)² > (π/ln Mp)² |z − z*|².
An LMI for Convex Regions of the Complex Plane

Theorem 15 (Chilali + Gahinet).


The pole locations, z ∈ C of A satisfy

z ∈ {z ∈ C : F0 + zF1 + z ∗ F2 < 0}

if and only if there exists some P > 0 such that

F0 ⊗ P + F1 ⊗ (AP) + F2 ⊗ (AP)^T < 0

The notation F ⊗ P is the Kronecker product: in each element of Fz, the scalar z is replaced by the matrix P, so that entry f_ij becomes the block f_ij P. So, e.g.

    [f11, f12; f12, f22] ⊗ P := [f11 P, f12 P; f12 P, f22 P]
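A YALMIP sketch of this condition (assuming F0, F1, F2, A and n are defined, and that YALMIP's kron overload for matrix variables is available; eta is a small margin):

> P = sdpvar(n); eta = .1;
> M = kron(F0, P) + kron(F1, A*P) + kron(F2, (A*P)');
> optimize([P >= eta*eye(n), M <= 0]);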



An LMI for Sector Regions of the Complex Plane
Rise Time:

    [−r, z; z*, −r] = F0 + F1 z + F2 z* < 0,
    where F0 = [−r, 0; 0, −r], F1 = [0, 1; 0, 0], F2 = [0, 0; 1, 0],
becomes
Lemma 16.
The pole locations, z ∈ C of A satisfy z*z ≤ r² if and only if there exists some
P > 0 such that

    [−rP, AP; (AP)^T, −rP] < 0
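Lemma 16 translates directly into YALMIP; a minimal sketch (assuming A, its dimension n and the radius r are defined; eta is a small margin, and block concatenation of sdpvar objects works as in plain MATLAB):

> P = sdpvar(n); eta = .1;
> F = [P >= eta*eye(n)];
> F = [F; [-r*P, A*P; (A*P)', -r*P] <= 0];
> optimize(F);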

Settling Time: 4.6/ts + z + z* ≤ 0
becomes
Lemma 17.
The pole locations, z ∈ C of A satisfy 2 Re z ≤ −α if and only if there exists
some P > 0 such that

    AP + (AP)^T + αP < 0



An LMI for Sector Regions of the Complex Plane

Percent Overshoot: z + z* ≤ −(ln Mp/π)|z − z*|

Lemma 18.
The pole locations, z ∈ C of A satisfy z + z* ≤ −(ln Mp/π)|z − z*| if and only if
there exists some P > 0 such that

    [ln Mp (AP + (AP)^T), π(AP − (AP)^T); π(AP − (AP)^T)^T, ln Mp (AP + (AP)^T)] < 0



Discrete-Time Lyapunov Functions

Next Time: LMIs for Controllability and Observability

