
Control Systems II

Gioele Zardini
[email protected]
June 3, 2017


Abstract
This "Skript" is based on my notes from the lecture Control Systems II of Dr. Gregor Ochsner (which follows the literature of Prof. Dr. Lino Guzzella) and on my work as teaching assistant in 2017 for the lecture of Dr. Guillaume Ducard.
This document should give you the chance to review the contents of the lecture Control Systems II once more and to practice them through many examples and exercises.

A special thanks goes to Mr. Nicolas Lanzetti, a friend of mine and the best teaching assistant I ever had: he taught me this course and created part of the material, for example some of the drawings. The updated version of the Skript is available on n.ethz.ch/∼gzardini/.

I cannot guarantee the correctness of what is included in this Skript: small errors may occur. For this reason I am very grateful for feedback and corrections, which help to improve the quality of the material.
Enjoy your Control Systems II!

Gioele Zardini

Version Update:

Version 1: June 2017


Contents

1 Recapitulation Control Systems I 6


1.1 System Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2 Linear System Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.1 Stability (Lyapunov) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.2 Controllability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.3 Observability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Feedback System Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.1 Crossover Frequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.2 Phase Margin ϕ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3.3 Gain Margin γ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4 Nyquist theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5 Specifications for the closed-loop . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.6 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2 Synthesis of SISO Control Systems 15


2.1 Loop Shaping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1.1 Plant inversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1.2 Loop shaping for Nonminimumphase systems . . . . . . . . . . . . . . . 15
2.1.3 Loop shaping for unstable systems . . . . . . . . . . . . . . . . . . . . . 17
2.1.4 Feasibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2 Cascaded Control Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.1 Structure of cascaded systems . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.2 Time-Scale Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.3 Design Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3 Predictive Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.3.1 Why predictive control . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.3.2 The Smith Predictor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.3.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.3.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.4 Robustness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.4.1 Robust Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.4.2 Nominal Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.4.3 Robust Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.4.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

3 Implementation of Controllers 35
3.1 Setpoint Weighting for PID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.1.1 Recap Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.1.2 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.1.3 Application to Reference Tracking . . . . . . . . . . . . . . . . . . . . . . 36
3.1.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.2 Feed Forward . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.3 Saturation Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.3.1 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.4 Digital Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.4.1 Signals and Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.4.2 Discrete-Time State-Space Description . . . . . . . . . . . . . . . . . . . 46


3.4.3 Discrete-Time Control Systems . . . . . . . . . . . . . . . . . . . . . . . 47


3.4.4 Controller Discretization/Emulation . . . . . . . . . . . . . . . . . . . . . 49
3.4.5 Discrete Time Controller Synthesis . . . . . . . . . . . . . . . . . . . . . 51
3.4.6 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

4 MIMO Systems 66
4.1 System Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.1.1 State Space Description . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.1.2 Transfer Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.2 System Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.2.1 Lyapunov Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.2.2 Controllability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.2.3 Observability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.3 Poles and Zeros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.3.1 Zeros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.3.2 Poles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.3.3 Directions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.4 Nyquist Theorem for MIMO Systems . . . . . . . . . . . . . . . . . . . . . . . . 70
4.5 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.6 Relative-Gain Array (RGA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

5 Frequency Response of MIMO Systems 98


5.1 Singular Value Decomposition (SVD) . . . . . . . . . . . . . . . . . . . . . . . . 98
5.1.1 Preliminary Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
5.1.2 Singular Value Decomposition . . . . . . . . . . . . . . . . . . . . . . . . 99
5.2 Frequency Responses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
5.2.1 Maximal and minimal Gain . . . . . . . . . . . . . . . . . . . . . . . . . 109
5.2.2 Robustness and Disturbance Rejection . . . . . . . . . . . . . . . . . . . 110

6 Synthesis of MIMO Control Systems 117


6.1 The Linear Quadratic Regulator (LQR) . . . . . . . . . . . . . . . . . . . . . . . 117
6.1.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
6.1.2 Kochrezept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
6.1.3 Frequency Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
6.1.4 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
6.1.5 Pole Placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
6.1.6 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
6.2 Extensions of the LQR problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
6.2.1 The LQRI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
6.2.2 Finite Horizon LQR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
6.2.3 Feedforward LQR Control Systems . . . . . . . . . . . . . . . . . . . . . 139
6.3 Numerical Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
6.3.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
6.3.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
6.4 Model Predictive Control (MPC) . . . . . . . . . . . . . . . . . . . . . . . . . . 143
6.4.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
6.4.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
6.5 The Linear Quadratic Gaussian (LQG) . . . . . . . . . . . . . . . . . . . . . . . 147
6.5.1 Observer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
6.5.2 The LQG Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
6.5.3 Kochrezept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149


6.5.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150


6.6 Extension of LQG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
6.6.1 The LQGI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
6.7 LTR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
6.7.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
6.7.2 β method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
6.7.3 LQG/LQR for m ≥ p . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
6.7.4 LQG/LQR for p ≥ m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158

A Linear Algebra 159


A.1 Matrix-Inversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
A.2 Differentiation with Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
A.3 Matrix Inversion Lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159

B Rules 160
B.1 Trigo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
B.2 Euler-Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
B.3 Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
B.4 Logarithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
B.5 Magnitude and Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
B.6 dB-Skala . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161

C MATLAB 162
C.1 General Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
C.2 Control Systems Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
C.3 Plot and Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164


1 Recapitulation Control Systems I


In this section I have summarized all topics and concepts from the course Control Systems I that are also important for Control Systems II. This chapter gives you one more chance to review the fundamental concepts.

1.1 System Modeling


A whole course next semester will be focused on this section (System Modeling, Dr. Ducard).
The normalization and linearization (around the equilibrium x0, u0) of a model result in the state space description

d/dt x(t) = A · x(t) + b · u(t)
y(t) = c · x(t) + d · u(t),     (1.1)

where

A = ∂f/∂x |_(x=x0, u=u0), the n×n matrix with entries A_ij = ∂f_{0,i}/∂x_j |_(x=x0, u=u0),
b = ∂f/∂u |_(x=x0, u=u0), the column vector with entries b_i = ∂f_{0,i}/∂u |_(x=x0, u=u0),
c = ∂g/∂x |_(x=x0, u=u0) = (∂g_0/∂x_1 ... ∂g_0/∂x_n) |_(x=x0, u=u0),
d = ∂g/∂u |_(x=x0, u=u0) = ∂g_0/∂u |_(x=x0, u=u0).

The transfer function of the system is given by

P(s) = c(sI − A)^(−1) b + d = b_m · ((s − ζ1)·(s − ζ2)·...·(s − ζm)) / ((s − π1)·(s − π2)·...·(s − πn)).     (1.2)

• πi are the poles of the system.

• ζi are the zeros of the system.

Both zeros and poles strongly determine the system's behaviour.


Remark.

• The transfer function does not depend on the chosen system of coordinates.

• Fundamental system properties like stability and controllability do not depend on the chosen coordinates either.


1.2 Linear System Analysis


1.2.1 Stability (Lyapunov)
Lyapunov stability analyses the behaviour of a system near an equilibrium point for u(t) = 0 (no input). With the formula x(t) = e^{A·t} · x0 one can show that Lyapunov stability can be determined through the calculation of the eigenvalues λi of A. The outcomes are

• Stable: ||x(t)|| < ∞ ∀t ≥ 0 ⇔ Re(λi) ≤ 0 ∀i

• Asymptotically stable: lim_{t→∞} ||x(t)|| = 0 ⇔ Re(λi) < 0 ∀i

• Unstable: lim_{t→∞} ||x(t)|| = ∞ ⇔ Re(λi) > 0 for at least one i

1.2.2 Controllability
Controllable: is it possible to control all the states of a system with an input u(t)?
A system is said to be completely controllable if the Controllability Matrix R has full rank:

R = [b, A·b, A²·b, . . . , A^(n−1)·b].     (1.3)

1.2.3 Observability
Observable: is it possible to reconstruct the initial conditions of all the states of a system from
the output y(t)?
A system is said to be completely observable if the Observability Matrix O has full rank:

O = [c; c·A; c·A²; . . . ; c·A^(n−1)].     (1.4)

Detectability: A linear time-invariant system is detectable if all unstable modes are observable.
Stabilizability: A linear time-invariant system is stabilizable if all unstable modes are control-
lable.


1.3 Feedback System Analysis

Figure 1: Standard feedback control system structure, with reference r, error e, controller C(s), plant input u, plant P(s), output y, and disturbances w and d.

The loop gain L(s) is the open-loop transfer function from e → y defined by
L(s) = P (s) · C(s), (1.5)
where P (s) is the plant and C(s) is the controller.
The sensitivity S(s) is the closed-loop transfer function from d → y defined by

S(s) = 1 / (1 + L(s)).     (1.6)

The complementary sensitivity T(s) is the closed-loop transfer function from r → y defined by

T(s) = L(s) / (1 + L(s)).     (1.7)

It is quite easy to show that

S(s) + T(s) = 1/(1 + L(s)) + L(s)/(1 + L(s)) = 1     (1.8)

holds. One can define the transfer function of the output signal Y(s) with

Y(s) = T(s) · R(s) + S(s) · D(s) − T(s) · N(s) + S(s) · P(s) · W(s)     (1.9)

and the transfer function of the error signal with

E(s) = S(s) · (R(s) − D(s) − N(s) − P(s) · W(s)),     (1.10)

where R(s), D(s), W(s), N(s) are the Laplace transforms of the respective signals r(t), d(t), w(t), n(t).

1.3.1 Crossover Frequency


Normally, we want a high amplification at low frequencies for disturbance rejection and a low
amplification at high frequencies for noise attenuation.
The crossover frequency ωc is used as reference for the analysis of the system and is defined
as the frequency for which it holds
|L(jωc )| = 1 = 0 dB. (1.11)

• Bode: Frequency where L(s) crosses the 0 dB line.


• Nyquist: Frequency where L(s) crosses the unit circle.
• The bigger ωc is, the more aggressive the system is.


1.3.2 Phase Margin ϕ


180◦ ± ∠L(jωc ). (1.12)

• Bode: Distance from -180 ◦ line at crossover frequency ωc . See Figure 2

• Nyquist: Angle distance between the -1 point and the point where L(s) crosses the unit
circle. See Figure 3

1.3.3 Gain Margin γ


γ is the inverse of the magnitude of L(jω) at the frequency where ∠L(jω) = −180°. We call this frequency ωγ (also written ω∗).

1/γ = |Re(L(jωγ))|,  0 = |Im(L(jωγ))|.     (1.13)

• Bode: Distance from the 0 dB line at the frequency ωγ. See Figure 2.

• Nyquist: Inverse of the distance between the origin and the intersection point of L(s) with the real axis. See Figure 3.

You have to think of the phase and the gain margin as safety margins for your system. They indicate how close the system is to oscillation (instability). The gain margin and the phase margin are the gain and the phase that can be varied before the system becomes marginally stable, that is, just before instability.
This can be seen in Figure 2 and Figure 3.

Figure 2: Bodeplot with phase and gain margin


Figure 3: Nyquist plot with phase and gain margin

1.4 Nyquist theorem


A closed-loop system T (s) is asymptotically stable if
nc = n+ + n0/2     (1.14)
holds, where

• nc : Number of mathematically positive (counterclockwise) encirclements of L(s) around the critical point −1.

• n+ : Number of unstable poles of L(s) (Re(π) > 0).

• n0 : Number of marginally stable poles of L(s) (Re(π) = 0).

1.5 Specifications for the closed-loop


The main objective of control systems is to design a controller. Before doing this, one should be sure that the system is controllable. We learned that controllability and observability are powerful tools for that, but they are not enough: there are some other conditions on the crossover frequency ωc that should be fulfilled in order to design a controller for the system. It holds

max(10 · ωd , 2 · ωπ+ ) < ωc < min(0.5 · ωζ + , 0.1 · ωn , 0.5 · ωdelay , 0.2 · ω2 ), (1.15)

where


• ωπ+ = Re(π+): Dominant (biggest with Re(π) > 0) unstable pole;

• ωζ+ = Re(ζ+): Dominant (smallest with Re(ζ) > 0) nonminimumphase zero;

• ωd : Biggest disturbance frequency of the system;

• ωn : Smallest noise frequency of the system;

• ω2 : Frequency with 100% uncertainty (|W2(j · ω2)| = 1);

• ωdelay = 1/Tdelay : Biggest delay of the system.
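As a small numerical illustration of condition (1.15) (the numbers are assumed, not taken from the lecture): suppose ωd = 0.1 rad/s, ωπ+ = 2 rad/s, ωζ+ = 20 rad/s, ωn = 100 rad/s, and no delay or uncertainty constraints apply. Then max(10 · 0.1, 2 · 2) = 4 < ωc < min(0.5 · 20, 0.1 · 100) = 10, so any crossover frequency between 4 rad/s and 10 rad/s (e.g. ωc ≈ 6 rad/s) would be admissible.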


1.6 Examples
Example 1. The dynamic equations of a system are given as

ẋ1 (t) = x1 (t) − 5x2 (t) + u(t),


ẋ2 (t) = −2x1 (t),
ẋ3 (t) = −x2 (t) − 2x3 (t),
y(t) = 3x3 (t).

(a) Draw the Signal Diagram of the system.

(b) Find the state space description of the above system and write it in Matlab.

(c) Find out if the system is Lyapunov stable or not, in Matlab.

(d) Find the Controllability matrix and determine if the system is controllable, in Matlab.

(e) Find the Observability matrix and determine if the system is observable, in Matlab.

(f) Find the transfer function of the system with its poles and zeros, in Matlab.


Solution.

(a) The Signal Diagram reads

Figure 4: Signal Diagram of Example 1.

(b) The state space description has the following matrices:

A = [1 −5 0; −2 0 0; 0 −1 −2],  b = [1; 0; 0],  c = [0 0 3],  d = 0.
For the rest of solution (b) and solutions (c)-(f ) see the attached Matlab script: Example_1_EC1.m.
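The attached script is not reproduced here; a minimal Matlab sketch of solutions (b)-(f), assuming the Control System Toolbox is available, could look as follows.

% State space description of Example 1
A = [1 -5 0; -2 0 0; 0 -1 -2];
b = [1; 0; 0];
c = [0 0 3];
d = 0;
sys = ss(A, b, c, d);

% (c) Lyapunov stability: check the real parts of the eigenvalues of A
lambda = eig(A)

% (d) Controllability matrix and its rank
R = ctrb(A, b);
rank(R)

% (e) Observability matrix and its rank
O = obsv(A, c);
rank(O)

% (f) Transfer function with its poles and zeros
P = tf(sys)
zero(P)
pole(P)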


Example 2. Given are a plant P(s) = 1/((s + 1)·(s + 2)) and its controller C(s) = 4/(s + 1).

(a) Compute L(s) and T(s), with Matlab.

(b) Find the gain and the phase margin numerically and plot them, with Matlab.

(c) Draw the Bode plot of P(s), C(s), L(s), with Matlab.

(d) Draw the Nyquist plot of P(s), L(s), with Matlab.

(e) Plot the step response for the open and the closed loop, with Matlab.

(f) Plot the impulse response for the open and the closed loop, with Matlab.

Solution. See the attached Matlab script: Example_2_EC1.m.
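The script itself is not included here; a minimal Matlab sketch of (a)-(f), assuming the Control System Toolbox, could look like this.

% Plant and controller of Example 2
P = tf(1, conv([1 1], [1 2]));   % P(s) = 1/((s+1)(s+2))
C = tf(4, [1 1]);                % C(s) = 4/(s+1)

% (a) Loop gain and complementary sensitivity
L = P * C;
T = feedback(L, 1);              % T = L/(1+L)

% (b) Gain and phase margin
[gm, pm, wg, wp] = margin(L);
margin(L)                        % Bode plot annotated with the margins

% (c)-(d) Bode and Nyquist plots
figure; bode(P, C, L); grid on;
figure; nyquist(P, L);

% (e)-(f) Step and impulse responses, open and closed loop
figure; step(L, T);
figure; impulse(L, T);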


2 Synthesis of SISO Control Systems


2.1 Loop Shaping
2.1.1 Plant inversion
This method is not suitable for nonminimumphase plants or for unstable plants: in those cases it would lead to nonminimumphase or unstable controllers. This method is suitable for simple systems for which it holds
• Plant is asymptotically stable.
• Plant is minimumphase.
The method is then based on a simple step:
L(s) = C(s) · P (s) ⇒ C(s) = L(s) · P (s)−1 . (2.1)
The choice of the loop gain is free: it can be chosen such that it meets the specifications.
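As a small Matlab sketch of plant inversion (the example plant and the choice L(s) = ωc/s are assumptions, not taken from the lecture):

% Plant inversion: choose a desired loop gain and invert the plant
P    = tf(8, [1 6 8]);      % asymptotically stable, minimumphase example plant
wc   = 2;                   % desired crossover frequency [rad/s]
Ldes = tf(wc, [1 0]);       % desired loop gain: integrator crossing 0 dB at wc
C    = minreal(Ldes / P)    % resulting controller C(s) = L(s) * P(s)^(-1)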

2.1.2 Loop shaping for Nonminimumphase systems


A nonminimumphase system shows an initially wrong response: a step in the input first drives the output in the wrong direction (opposite sign), that is, the system initially lies. Our controller should therefore be patient, and for this reason we use a slow control system. This is obtained with a crossover frequency that is smaller than the nonminimumphase zero. One begins the design with a PI controller

C(s) = kp · (Ti · s + 1) / (Ti · s),     (2.2)

where the parameters kp and Ti can be chosen such that the loop gain L(s) meets the known specifications. One can reach better robustness with Lead/Lag elements of the form

C(s) = (T · s + 1) / (α · T · s + 1),     (2.3)

where α, T ∈ R+. One can understand the Lead and the Lag elements as

• α < 1: Lead element:
  → Phase margin increases.
  → Loop gain magnitude increases.

• α > 1: Lag element:
  → Phase margin decreases.
  → Loop gain magnitude decreases.

As one can see in Figure 5 and Figure 6, the maximal benefit is reached at the frequency ω̂, where the drawbacks are not yet fully developed.
The element's parameters can be calculated as

α = (√(tan²(ϕ̂) + 1) − tan(ϕ̂))² = (1 − sin(ϕ̂)) / (1 + sin(ϕ̂))     (2.4)

and

T = 1 / (ω̂ · √α),     (2.5)

where ω̂ is the desired center frequency and ϕ̂ = ϕ_new − ϕ is the desired maximum phase shift (in rad). The classic loop-shaping method reads:


Figure 5: Bodeplot of lead Element

Figure 6: Bodeplot of lag Element

1. Design of a PI(D) controller.

2. Add Lead/Lag elements where needed1

3. Set the gain of the controller kp such that we reach the desired crossover frequency.

¹ Without them, L(jω) often does not meet the learned requirements.
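As a small numerical illustration of equations (2.4) and (2.5) (the numbers are assumed, not from the lecture): suppose a lead element should add ϕ̂ = 45° ≈ 0.785 rad of phase at ω̂ = 10 rad/s. Then α = (1 − sin 45°)/(1 + sin 45°) ≈ 0.29/1.71 ≈ 0.17 and T = 1/(ω̂ · √α) ≈ 1/(10 · 0.41) ≈ 0.24 s, which gives the lead element C(s) = (0.24·s + 1)/(0.041·s + 1).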


2.1.3 Loop shaping for unstable systems


Since Nyquist's theorem should always hold, if it is not satisfied one has to design the controller such that nc = n+ + n0/2 becomes valid. Remember: stable poles decrease the phase by 90° and minimumphase zeros increase the phase by 90°.

2.1.4 Feasibility
Once the controller is designed, one has to check whether it is really feasible and possible to implement, that is, whether the number of poles of the controller is equal to or bigger than the number of its zeros. If that is not the case, one has to add poles at high frequencies, such that they don't affect the system near the crossover frequency. One could e.g. add to a PID controller a Roll-Off Term as

C(s) = kp · (1 + 1/(Ti · s) + Td · s) · 1/(τ · s + 1)²,     (2.6)

where the first factor is the PID controller and the second factor is the Roll-Off Term.
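A short Matlab sketch of a PID controller with such a Roll-Off Term (the numerical values are assumptions for illustration):

% PID controller with roll-off term, cf. equation (2.6)
kp = 2; Ti = 1; Td = 0.1; tau = 0.01;
s = tf('s');
C = kp*(1 + 1/(Ti*s) + Td*s) * 1/(tau*s + 1)^2   % now proper: 3 poles, 2 zeros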


2.2 Cascaded Control Systems


Single input single output (SISO) systems use, in feedback control, the output y as the only information to compute u. Clearly this doesn't cover a big part of the systems existing in the real world.
A first step towards a more general solution is to consider a situation where multiple outputs are available: the single input multiple output (SIMO) case. To analyze such systems rigorously we need the theory and the methods that will be introduced for the multiple input multiple output (MIMO) case. One can, however, already use the concepts learned so far to handle such a system, without introducing this theory.

2.2.1 Structure of cascaded systems


The structure of a cascaded system can be seen in Figure 7. Special cases that you could encounter in exercises can be seen in Figure 8 and Figure 9.

Figure 7: Control loop structure of a cascaded system (outer controller Cs(s), inner controller Cf(s), fast plant Pf(s), slow plant Ps(s)).

Figure 8: Control loop structure of a cascaded system (with us = yf).

The main idea here is to divide the system based on its time-scale. Usually, one sets an inner faster loop and an outer slower loop. One can identify these elements with the fast dynamics' plant Pf(s) and the slow dynamics' plant Ps(s). Basically, one wants to design a faster controller Cf(s) for the inner loop and a slower controller Cs(s) for the outer loop.

2.2.2 Time-Scale Estimation


How can we distinguish between a slower and a faster control loop? A good way to do it consists in looking at the poles and the delays of the plant. Let's have a look at the inverse Laplace transform of the output function Y(s):

y(t) = Σ_{i=1}^{p} Σ_{k=1}^{φi} (ρ_{i,k} / (k−1)!) · t^{k−1} · e^{πi·t} · h(t).     (2.7)

The dynamics of the plant are dominated by the pole with the smallest absolute value.


Example 3. We are given two plants P1(s) and P2(s). Their poles π1 and π2 can be seen in the following table. We have to distinguish between Ps(s) and Pf(s).

              π1     π2                       π1     π2
  1)  P1(s)   −1    −100         2)  P1(s)    −1    100
      P2(s)  −10     −50             P2(s)    −1     10

1) P1(s) has the pole with the smallest absolute value (the slowest dynamics): this means Ps(s) = P1(s) and Pf(s) = P2(s).

2) Both systems share the same dominant pole. This means that there is no time-scale separation, so the two systems should not be controlled with the cascaded control principle.

2.2.3 Design Process


Faster Loop
The faster controller Cf(s) is designed first (e.g. with Ziegler-Nichols), without taking into account the slower loop. In order to exploit most of the available bandwidth², one chooses a P(D) controller, without the integrator.

Slower Loop
The slower controller Cs(s) is designed second (e.g. with Ziegler-Nichols): here, one has to consider the faster closed loop as part of the new plant. Since this controller should provide accuracy, one usually includes the integrator here and chooses a PI(D) controller³.

² An integrator reduces the bandwidth. The bandwidth defines where our controller is working: we want the controller to have an action/reaction as big as possible.
³ The integrator reduces the static error!


2.2.4 Examples
Example 4. Figure 9 shows a single input multiple output (SIMO) loop, that should be
controlled with cascaded control. The given transfer functions are
Pf(s) = 1/(s + 1),
Ps(s) = 1/(5s² + 6s + 1),     (2.8)
Cf(s) = 2.

Figure 9: Control loop structure of the cascaded system (here uf drives both Ps(s), producing ys, and Pf(s), producing the inner feedback signal yf).

Compute the transfer function of Pout (s) : rf → ys .


Solution. To solve such problems, one has to work step by step. First, we use the frequency
domain to determine the signals’ dependencies. It holds:

Ys (s) = Ps (s) · Uf (s), (2.9)

where

Uf(s) = Cf(s) · (Rf(s) − Pf(s) · Uf(s))
Uf(s) · (1 + Pf(s) · Cf(s)) = Cf(s) · Rf(s)     (2.10)
Uf(s) = Cf(s) · Rf(s) / (1 + Pf(s) · Cf(s)).

Inserting (2.10) into (2.9) gives

Pout(s) = Ys(s)/Rf(s) = Ps(s) · Cf(s) / (1 + Pf(s) · Cf(s))
        = 1/(5s² + 6s + 1) · 2·(s + 1)/(s + 3)     (2.11)
        = 2 / ((5s + 1) · (s + 3)).
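A quick Matlab check of this result (a sketch, assuming the Control System Toolbox):

Pf = tf(1, [1 1]);
Ps = tf(1, [5 6 1]);
Cf = tf(2);
Pout = minreal(Ps * feedback(Cf, Pf))   % expected: 2/((5s+1)(s+3))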


Example 5. Consider the heated water tank depicted in Figure 10. The goal of this exercise is
to design a controller for this system. The valve position u should be controlled such that the
outlet water temperature Tw follows a given reference Tw,ref . The water outlet temperature,
as well as the steam mass flow ṁS , are measured. Cascaded control can be used and the
chosen controller structure is depicted in Figure 11.

Figure 10: Heating water in the water tank.

Figure 11: Control loop structure of the cascaded system.

(a) Which values of the system correspond to the signals rf , yf , rs and ys ?

(b) Derive the outer/slow closed-loop transfer function Trs →ys as a function of the closed-loop
transfer function Trf →yf .


Solution.

(a) The two values that describe this system are the steam mass flow ṁS and the water outlet
temperature Tw . The temperature changes significantly slower than the steam mass flow,
hence: the output of the outer/slower control loop ys corresponds to the water outlet
temperature Tw . Its reference signal rs is the reference of this temperature Tw,ref .
The output of the inner/fast control loop yf corresponds to the steam mass flow ṁS . Its
reference signal rf is the reference of the steam mass flow ṁS,ref .

(b) The transfer function of the inner/fast closed loop is

T_{rf→yf} = Tf = Lf / (1 + Lf) = Cf · Pf / (1 + Cf · Pf).

The transfer function of the outer/slow closed loop is

T_{rs→ys} = Ts = Ls / (1 + Ls) = Cs · Tf · Ps / (1 + Cs · Tf · Ps).


2.3 Predictive Control


2.3.1 Why predictive control
If a system has substantial delays, it is very difficult to control it with a normal PID controller. The I part of the controller causes impatience, that is, it keeps integrating the error over time. As a practical example, think of taking a shower in the morning: one lets the water flow and of course it doesn't have the desired temperature yet. For this reason one chooses warmer water by turning the temperature knob, the water becomes too hot, so one turns it back to colder water, and so on, resulting in a non-optimal strategy. Moreover, the D part of the controller is practically useless⁴. What does the expression substantial delays mean? As an indicative range, one can say that it is worth using predictive control if

T / (T + τ) > 0.3,     (2.12)

where T is the delay and τ is the time constant of the system. Other prerequisites are

• The plant must be asymptotically stable.

• A good model of the plant should be available.

2.3.2 The Smith Predictor


One can see the two equivalent structures of the Smith Predictor in Figure 12 and Figure 13.

Figure 12: Structure of the Smith predictor.

If the system has big delays, one can assume that it is possible to write the delay element
and the nondelayed plant as a product in the frequency domain: that’s what is done in the
upper right side of Figure 12. This means that the transfer function u → y can be written as

P (s) = Pr (s) · e−sT . (2.13)

Main Idea:
As long as we have no disturbance w (w = 0) and our model is good enough (this means Pr(s) = P̂r(s), T = T̂)⁵, we can model a non-delayed plant and get the non-delayed output ŷr(t) (one can see this in the lower part of Figure 12). The feedback signal results from the sum of ŷr(t) and the correction signal.
⁴ Taking the derivative of a delay element doesn't help to control it.
⁵ We use the hat notation (ˆ) to identify the parameters of the model.


Figure 13: Structure of the Smith predictor.

2.3.3 Analysis
The controller of the system is the transfer function e → u, which can be computed as

U(s) = Cr(s) · (R(s) − P̂r(s) · U(s) + P̂r(s) · e^{−sT̂} · U(s) − Y(s))
     = Cr(s) · (R(s) − Y(s)) − Cr(s) · P̂r(s) · (1 − e^{−sT̂}) · U(s),     (2.14)

where E(s) = R(s) − Y(s). If one solves for U(s):

U(s) = Cr(s) / (1 + Cr(s) · P̂r(s) · (1 − e^{−s·T̂})) · E(s)     (2.15)

and so the transfer function of the controller reads

C(s) = Cr(s) / (1 + Cr(s) · P̂r(s) · (1 − e^{−s·T̂})).     (2.16)

This means that the loop gain is

L(s) = P(s) · C(s) = Cr(s) · Pr(s) · e^{−s·T} / (1 + Cr(s) · P̂r(s) · (1 − e^{−s·T̂})).     (2.17)

If one assumes, as stated, that the model is good enough s.t. Pr(s) = P̂r(s), T = T̂, one gets

T(s) = L(s) / (1 + L(s))
     = Cr(s) · Pr(s) · e^{−s·T} / (1 + Cr(s) · Pr(s) · (1 − e^{−s·T}) + Cr(s) · Pr(s) · e^{−s·T})     (2.18)
     = Cr(s) · Pr(s) / (1 + Cr(s) · Pr(s)) · e^{−s·T}
     = Tref(s) · e^{−s·T}.


Remark.

• This result is very important: we have shown that the delay cannot be completely eliminated and that every transfer function (here T(s) but also S(s)) will have the same delay as the plant P(s).

• Advantages of the prediction are:

– Very fast.
– Same Robustness.

• Disadvantages of the prediction are:

– Very difficult to implement.


– Very difficult to analyze.
– Problems if there are model errors.
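A minimal Matlab sketch of equations (2.14)-(2.18) (the plant, delay and PI gains below are assumptions for illustration, not from the lecture):

% Smith predictor sketch
s  = tf('s');
Pr = 1/(s + 1);                          % undelayed part of the plant
T  = 2;                                  % delay [s]
P  = Pr * exp(-s*T);                     % full plant P(s) = Pr(s)*e^(-sT)
Cr = 2 + 1/s;                            % PI controller designed for Pr only

% Smith predictor controller, equation (2.16), assuming a perfect model
C  = feedback(Cr, Pr*(1 - exp(-s*T)));   % Cr/(1 + Cr*Pr*(1 - e^(-sT)))

Tcl  = feedback(P*C, 1);                 % closed loop with the Smith predictor
Tref = feedback(Pr*Cr, 1);               % delay-free reference loop
step(Tcl, Tref*exp(-s*T), 20);           % the two responses should coincide, cf. (2.18)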


2.3.4 Examples
Example 6. We want to control the plant P (s) = Pr (s) · e−sτ with a Smith Predictor. The
structure of the predictor can be seen in Figure 14.

Figure 14: Smith Predictor Structure

(a) Assume that the model isn’t that good and derive the transfer functions of

i) The controller.
ii) The loop gain.
iii) The complementary sensitivity.

(b) Let’s say the model is now perfect. Tr (s) should behave like a first order low-pass system
with cutoff frequency a. The gain should be chosen s.t. no static error is produced. Find
such a Tr (s).

(c) The plant has been modeled by a friend of yours: he had no time for a complete job because of a hockey playoff match and he derived just the state space description of the system. This reads

ẋ(t) = [0 1; 0 0] · x(t) + [0; 1/m] · u(t)

and

ŷ(t) = [1 0] · x(t),  y(t) = ŷ(t − τ).

Find the transfer function of the plant.

(d) Calculate the transfer function of the controller Cr (s).


Solution.

(a) i) Controller:
With the same derivation seen in the theory, one gets

C(s) = Cr(s) / (1 + Cr(s) · P̂r(s) · (1 − e^{−s·τ̂})).

ii) Loop gain:
With the same derivation seen in the theory, one gets

L(s) = Cr(s) · Pr(s) · e^{−s·τ} / (1 + Cr(s) · P̂r(s) · (1 − e^{−s·τ̂})).

iii) Complementary sensitivity:
With the same derivation seen in the theory, one gets

T(s) = L(s) / (1 + L(s))
     = Cr(s) · Pr(s) · e^{−sτ} / (1 + Cr(s) · P̂r(s) · (1 − e^{−sτ̂}) + Pr(s) · Cr(s) · e^{−sτ}).

(b) With a perfect model, one gets

T(s) = Pr(s) · Cr(s) / (1 + Pr(s) · Cr(s)) · e^{−sτ} = Tr(s) · e^{−sτ}.

The general form of a first order low-pass system is

Σ(s) = k / (τ·s + 1),

where ω = 1/τ is the cutoff frequency and k is the gain. If we don't want a static error, the gain should be 1 (k = 1). This last step can be shown by calculating the static error with the well-known formula. With the information about the cutoff frequency one gets

Tr(s) = 1 / (s/a + 1) = a / (s + a).

(c) Here one should use Laplace analysis. The two differential equations read

ẋ1(t) = x2(t),
ẋ2(t) = (1/m) · u(t),

with

ŷ(t) = x1(t),
y(t) = ŷ(t − τ).

The Laplace transform leads to

s·X1(s) = X2(s),
s·X2(s) = (1/m) · U(s)

and

Ŷ(s) = X1(s),  Y(s) = Ŷ(s) · e^{−sτ}.

Together:

s²·X1(s) = (1/m) · U(s)

and so

U(s) = m·s²·X1(s).

For the transfer function, it holds

P(s) = Y(s)/U(s)
     = X1(s) · e^{−sτ} / (X1(s) · m·s²)
     = (1/(m·s²)) · e^{−sτ}
     = Pr(s) · e^{−sτ}.

(d) Since

Tr(s) = Pr(s) · Cr(s) / (1 + Pr(s) · Cr(s)),

by solving for Cr(s) one gets

Cr(s) = Tr(s) / (Pr(s) · (1 − Tr(s)))
      = (a/(s + a)) / ((1/(m·s²)) · (1 − a/(s + a)))
      = a·m·s² / s
      = a·m·s.
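A quick Matlab check of result (d) (a sketch; the values a = 3 and m = 2 are assumptions):

a = 3; m = 2;
s  = tf('s');
Pr = 1/(m*s^2);
Cr = a*m*s;                        % controller derived in (d)
Tr = minreal(feedback(Pr*Cr, 1))   % expected: a/(s+a) = 3/(s+3)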


2.4 Robustness
2.4.1 Robust Stability
In order to determine the stability of the closed loop we have used the Nyquist theorem on the loop gain. That isn't enough: we have to consider that the model for L(s) could be imprecise. In order to ensure stability, it should hold

|W2 (j · ω) · L(j · ω)| < |1 + L(j · ω)| ∀ω ∈ [0, ∞]. (2.19)

This condition can be reformulated, in order to use the complementary sensitivity T(s):

|W2(j·ω) · L(j·ω)| < |1 + L(j·ω)|  ∀ω ∈ [0, ∞]
|L(j·ω)| / |1 + L(j·ω)| < 1 / |W2(j·ω)|
|T(j·ω)| < 1 / |W2(j·ω)|     (2.20)
|T(j·ω)| · |W2(j·ω)| < 1.

This can be interpreted as an upper bound for the complementary sensitivity.

2.4.2 Nominal Performance


The conditions for the sensitivity S(s) can be summarized in the transfer function W1(s), and so it must hold

|W1(j·ω) · S(j·ω)| < 1
(1 / |1 + L(j·ω)|) · |W1(j·ω)| < 1     (2.21)
|W1(j·ω)| < |1 + L(j·ω)|,

which can be used with the Nyquist plot: for each frequency ω the return difference should have a distance from the critical point −1 that is larger than |W1(j·ω)|. Such a system is referred to as satisfying the nominal performance condition.

2.4.3 Robust Performance


If a controller C(s) also satisfies

|W1 (j · ω) · S(j · ω)| + |W2 (j · ω) · T (j · ω)| < 1. (2.22)

then it satisfies the robust performance condition, where, as reminder

• |W1 (j · ω)|: Bound for the sensitivity, from specifications.

• |W2 (j · ω)|: Bound for the complementary sensitivity, from specifications.

Multiplying both sides by |1 + L(j · ω)| one gets the rearranged condition

|W1 (j · ω)| + |W2 (j · ω) · L(j · ω)| < |1 + L(j · ω)|. (2.23)

Remark. The nominal and the robust performance conditions don't guarantee the stability of the system: this should be determined separately, e.g. with the Nyquist theorem. These conditions can be easily checked with Figure 15.


Figure 15: Nyquist with conditions


2.4.4 Examples
Example 7. In order to design a controller for the plant P(s), a bound for the sensitivity of the control system is set:

W1(s) = 60 · (2·s + 1) / (200·s + 1).

Iteratively, one finds a good controller that results in the complementary sensitivity

T(s) = ω0² / (s² + 2·δ·ω0·s + ω0²),

where ω0 = 1 rad/s and δ = 1.

(a) One can show that the maximal value of the sensitivity |S(jω)| occurs at ω* = √2. Compute the value of |S(jω*)|.

(b) Show that the condition for nominal performance is fulfilled at ω*.

(c) What is the limit value of |W2(jω*)| in order to fulfill the condition for robust performance?
Hint: It holds |L(jω*)| = 0.29.


Solution.

(a) It holds

S(s) = 1 − T(s) = 1 − 1/(s² + 2·s + 1) = (s² + 2·s) / (s² + 2·s + 1).

S(jω) = (−ω² + 2jω) / (1 − ω² + 2jω)
      = (−ω² + 2jω)·(1 − ω² − 2jω) / ((1 − ω² + 2jω)·(1 − ω² − 2jω))
      = (ω⁴ + 3ω² + 2jω) / ((1 − ω²)² + 4ω²).

At ω* = √2:

|S(jω*)| = |10 + j·2√2| / 9 = √(108/81) = 1.16.

(b) The condition reads

|S(jω*)| · |W1(jω*)| < 1  ⇒  |S(jω*)| < 1/|W1(jω*)|.

It holds

|W1(jω*)| = 60 · |(2jω* + 1) / (200jω* + 1)|
          = 60 · |(2jω* + 1)·(1 − 200jω*)| / (1 + (200·ω*)²)
          = 60 · |1 + 400ω*² − 198jω*| / (1 + (200·ω*)²)
          = 60 · √((1 + 400ω*²)² + (198ω*)²) / (1 + (200·ω*)²)
          = 0.64.

Since

1.16 < 1/0.64 = 1.56,

the condition is fulfilled at ω*.

(c) The condition reads

|W1(j·ω*)| + |W2(j·ω*) · L(j·ω*)| < |1 + L(j·ω*)|
|W1(j·ω*)| + |W2(j·ω*) · L(j·ω*)| < 1/|S(jω*)|
|W2(j·ω*)| < (1/|L(j·ω*)|) · (1/|S(jω*)| − |W1(j·ω*)|)
|W2(j·ω*)| < (1/0.29) · (1/1.16 − 0.64)
|W2(j·ω*)| < 0.77.


3 Implementation of Controllers
3.1 Setpoint Weighting for PID
3.1.1 Recap Goals

Figure 16: Standard feedback control system's structure.

The two main goals of a control system are reference tracking and disturbance rejection.
Reference Tracking: Let y(t) track r(t).
Disturbance Rejection: Keep y(t) at r(t).

3.1.2 Method

Since these two goals are often not achievable with a single controller, we define a method to easily adapt the PID controller to different needs, by introducing the weights a, b, c (setpoint weighting).

Figure 17: Setpoint weighting PID structure. Usually b = 1 and c = 0, with a as an additional design parameter; the roll-off time constant τ is chosen substantially smaller than min{Td, Ti}.

By tuning the weights a, b, c one can achieve the desired control action. One can write, mathematically:

U(s) = kp · (a · R(s) − Y(s)) + kp/(Ti · s) · (b · R(s) − Y(s)) + kp · Td · s · (c · R(s) − Y(s)).     (3.1)

Remark. From this equation, one can show that the choice of a, b, c has no influence on the loop gain, that is, no influence on the stability of the system.


3.1.3 Application to Reference Tracking


Although the PID controller was designed with a disturbance rejection problem in mind, this method can also be used to ensure reference tracking. If disturbances are small compared to reference changes and the actuator is a limiting factor (see saturation, next chapters), one usually makes the following choice:

• P: The parameter a is chosen such that 0 < a < 1.

  – We can control the aggressiveness of the controller.

• I: The parameter b is chosen such that b = 1.

  – With r = y one has that e∞ → 0.

• D: The parameter c is chosen such that c = 0.

  – If there are disturbances, it is good not to have their influence on derivative terms.


3.1.4 Examples
Example 8. A plant P (s) should be controlled with a PID controller. One should consider
the setpoint weighting in order to better design the controller. The plant reads
P(s) = 10 / ((s + 1)(s + 2)(s + 3)).

(a) Find

i) The critical time constant T ∗ ,


ii) The critical gain kp∗ ,
iii) The static gain of the plant.

(b) Derive the transfer function between r → y.


Solution.
(a) With the rules learned in Control Systems I, it holds:

kp* · P(jω*) = −1 + 0·j,
T* = 2π/ω*.

With the defined plant:

kp* · P(jω*) = 10·kp* / ((jω* + 1)·(jω* + 2)·(jω* + 3))
             = 10·kp* / (−6ω*² + 6 + j·(−ω*³ + 11ω*)).

The imaginary part should be 0. It holds

−ω*³ + 11ω* = 0  ⇒  ω* = √11.

The real part should be −1. It holds

10·kp* / (−6ω*² + 6) = −1  ⇒  kp* = 6.

The static gain is

P(0) = 10/6.

(b) It holds

Y(s) = P(s) · U(s)
     = P(s) · ((a·R(s) − Y(s)) · kp + (b·R(s) − Y(s)) · kp/(Ti·s) + (c·R(s) − Y(s)) · kp·Td·s).

One can write

R(s) · P(s) · kp · (a + b/(Ti·s) + c·Td·s) = Y(s) · (1 + P(s) · kp · (1 + 1/(Ti·s) + Td·s)).

From the definition:

T(s) = Y(s)/R(s)
     = P(s) · kp · (a + b/(Ti·s) + c·Td·s) / (1 + P(s) · kp · (1 + 1/(Ti·s) + Td·s))
     = 10·kp · (a·Ti·s + b + c·Ti·Td·s²) / (Ti·s·(s + 1)·(s + 2)·(s + 3) + 10·kp·(Ti·s + 1 + Ti·Td·s²)).

Remark. From this equation one can see that the stability isn't affected by the choice of a, b, c.
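A small Matlab sketch of this setpoint weighting (the PID gains below are assumptions; the closed-loop formula is the one derived in (b)):

% Effect of the weight a on reference tracking, with b = 1 and c = 0
P  = tf(10, conv(conv([1 1],[1 2]),[1 3]));
kp = 3; Ti = 1; Td = 0.1; b = 1; c = 0;
s  = tf('s');
Cfb = kp*(1 + 1/(Ti*s) + Td*s);          % feedback part (acts on y)
for a = [1 0.5]
    Cff = kp*(a + b/(Ti*s) + c*Td*s);    % part acting on r
    T   = minreal(P*Cff/(1 + P*Cfb));    % closed loop r -> y
    step(T); hold on;
end
legend('a = 1','a = 0.5');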


3.2 Feed Forward


The concept of different controllers for different goals is generalized in the feed forward structure.

Figure 18: Feed forward control system's structure.

Such a structure increases the speed of the system, because r has a direct influence on u. This is really good if one wants to improve the reference tracking.
Mathematically, it holds

Y(s) = D(s) + P(s) · F(s) · R(s) + P(s) · C(s) · R(s) − P(s) · C(s) · (N(s) + Y(s))
Y(s) · (1 + P(s) · C(s)) = D(s) + (P(s) · F(s) + P(s) · C(s)) · R(s) − P(s) · C(s) · N(s)
⇒ Y(s) = (P(s) · S(s) · F(s) + T(s)) · R(s) − T(s) · N(s) + S(s) · D(s).     (3.2)

Remark.

• The closed loop stability is not affected by F(s).

• The tracking performance increases: the output response to reference changes is sped up.
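A small Matlab sketch of equation (3.2) (the plant, the controller and the choice of F(s) as a filtered plant inverse are assumptions for illustration):

% Compare reference tracking with and without feed forward
s = tf('s');
P = 1/((s+1)*(s+2));
C = 5*(1 + 1/(2*s));                    % some PI feedback controller
F = minreal((s+1)*(s+2)/(0.05*s+1)^2);  % approximate plant inverse, filtered to be proper
S = minreal(1/(1 + P*C));               % sensitivity
T = minreal(P*C/(1 + P*C));             % complementary sensitivity
Tff = minreal(P*S*F + T);               % r -> y with feed forward, cf. equation (3.2)
step(T, Tff);                           % the feed forward path speeds up the response
legend('without F','with F');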


3.3 Saturation Problem


The plants treated so far were linear systems: these of course represent just a small part of the existing real systems, where nonlinearities occur. An important and intuitive nonlinear effect is the limitation of some actuators at the plant's input.
In general, physical signals cannot take all desired values because of natural limitations: easy examples are the maximal opening area of a valve (0 ≤ A ≤ Amax) and the maximal torque of a motor (|T| ≤ |Tmax|). In order to take these limitations into account, one has to include a saturation element in the loop.

Figure 19: Actuator Saturation.

This element defines the input given to the plant as

ū(t) = umax,  if u(t) > umax
ū(t) = u(t),  if umax > u(t) > umin     (3.3)
ū(t) = umin,  if u(t) < umin.
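A saturation element of this form can be written in one line of Matlab (a small sketch; the test value is arbitrary):

% saturation element (3.3): clamp u between umin and umax
sat  = @(u, umin, umax) min(max(u, umin), umax);
ubar = sat(1.7, -1, 1)    % returns 1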

A critical situation occurs when the integrating controller's output is in saturation: further integration (which happens because of a non-zero error) cannot decrease the error any more, since u(t) is limited and cannot be increased/decreased further.
To understand this, let's take a step back: the integral part of a PID controller makes the controller generate an output until the reference value is reached. To do this, the integral part accumulates the remaining error in its memory after each cycle. It is clear that if this error cannot become zero because of the saturation, the accumulation becomes a problem.
One can solve the problem with the help of an Anti-Reset Windup (ARW) element. The idea behind such a structure is to stop accumulating the error and to empty the integrator.
Since the error keeps accumulating and we want

∫₀ᵗ e(τ) dτ ≈ 0,     (3.4)

we need a negative error for a fairly long amount of time: this phenomenon is called integral windup/reset windup. It causes oscillations that are not welcome (see Figure 22). With the ARW one can empty the integrator faster and the behaviour of the system is better.


Figure 20: Anti-Reset Windups’ structure.

Figure 21: Effects of the Anti-Reset Windups.


3.3.1 Examples
Example 9. The plant

P(s) = e^{−s}

should be controlled with the PI controller

C(s) = 0.2 + 1/(2s).

An input limitation/saturation is given as

ub(t) = 0.1,   if u(t) ≥ 0.1
ub(t) = u(t),  if |u(t)| < 0.1
ub(t) = −0.1,  if u(t) ≤ −0.1.

(a) Draw the signals y(t), u(t), ub (t) in the given diagram.

(b) Which problem can be noticed here? How could one solve it?

Figure 22: Diagram for the drawing. The signal r(t) is given.


Solution.
(a)
First of all, we refer to a classical feedback loop with saturation (see Figure 23). For the rest of the calculations, please refer to Figure 24.

Figure 23: Actuator Saturation.

First of all, let's define how one can get u(t). From the very definition:

U(s) = C(s) · E(s).

This holds in the frequency domain. If we want to draw a diagram in the time domain, we have to compute the inverse Laplace transform of U(s):

u(t) = L⁻¹(U(s)) = L⁻¹(0.2 · E(s) + (1/2) · (1/s) · E(s)).

With the known Laplace inversions it follows

u(t) = 0.2 · e(t) + (1/2) · ∫₀ᵗ e(τ) dτ.

Now, let's analyze and draw the signals: we do this in intervals, in order to track every signal's change.

• t ∈ [0, 1]
The reference signal r is zero, and so are all the other signals.

• t ∈ [1, 2]
Since the plant is a delay element of 1 second (e^{−s}), the output y(t) is still 0. This means that in this interval the error is e(t) = r(t) − y(t) = 1. We use this information to get the behaviour of u(t):

u(t) = 0.2 · e(t) + (1/2) · ∫₀ᵗ e(τ) dτ
     = 0.2 · 1 + (1/2) · ∫₁ᵗ 1 dτ
     = 0.2 + 0.5 · (t − 1).

In order to draw this, we evaluate u(t) at t = 1 and t = 2. It holds

u(1) = 0.2,
u(2) = 0.7.

So, u(t) is an increasing linear function from 0.2 to 0.7.
Since ub(t) is a function of u(t), we can now find its shape: since u(t) > 0.1 in the whole interval, we use the definition and say ub(t) = 0.1 (constant value).

• t ∈ [2, 3)
There is a step in y(t) here: this can be seen from

y(t) = ub(t − 1) = 0.1.

This means that the error is e(t) = r(t) − y(t) = −0.1. We use this information to get the behaviour of u(t):

u(t) = 0.2 · e(t) + (1/2) · ∫₀ᵗ e(τ) dτ
     = 0.2 · (−0.1) + (1/2) · (∫₁² 1 dτ + ∫₂ᵗ (−0.1) dτ)
     = −0.02 + 0.5 + 0.5 · (−0.1) · (t − 2)
     = 0.48 − 0.05 · (t − 2).

In order to draw this, we evaluate u(t) at t = 2 and t = 3. It holds

u(2) = 0.48,
u(3) = 0.43.

So, u(t) is a decreasing linear function from 0.48 to 0.43.
Since ub(t) is a function of u(t), we can now find its shape: since u(t) > 0.1 in the whole interval, we use the definition and say ub(t) = 0.1 (constant value).

• t ∈ [3, 5)
The error remains e(t) = r(t) − y(t) = −0.1, since y(t) = ub(t − 1) = 0.1. We use this information to get the behaviour of u(t):

u(t) = 0.2 · e(t) + (1/2) · ∫₀ᵗ e(τ) dτ
     = 0.2 · (−0.1) + (1/2) · (∫₁² 1 dτ + ∫₂ᵗ (−0.1) dτ)
     = −0.02 + 0.5 + 0.5 · (−0.1) · (t − 2)
     = 0.48 − 0.05 · (t − 2),

which reads exactly the same as before. In order to draw this, we evaluate u(t) at t = 3 and t = 5. It holds

u(3) = 0.43,
u(5) = 0.33.

So, u(t) is the same decreasing linear function as before.
Since ub(t) is a function of u(t), we can now find its shape: since u(t) > 0.1 in the whole interval, we use the definition and say ub(t) = 0.1 (constant value).


Figure 24: Solution.

(b)

This example shows the problem that is introduced by saturation. The input u(t) requested by the controller is bigger than the actually feasible input ub(t). This creates the windup situation, and we can use an ARW structure to avoid it.


3.4 Digital Control


3.4.1 Signals and Systems
A whole course is dedicated to this topic (see Signals and Systems of professor D’Andrea). A
signal is a function of time that represents a physical quantity.
Continuous-time signals are described by a function x(t) which is defined for every (continuous) time t.
Discrete-time signals are described by a function x[n] = x(n · Ts), where Ts is the sampling time. The sampling frequency is defined as fs = 1/Ts.
One can see the difference between the two descriptions in Figure 25.

Figure 25: Continuous-time versus discrete-time representation.

Advantages of Discrete-Time analysis


• Calculations are easier. Moreover, integrals become sums and differentiations become
finite differences.

• One can implement complex algorithms.

Disadvantages of Discrete-Time analysis


• The sampling introduces a delay in the signal (≈ e^{−s·Ts/2}).

• The information between two samples, that is between x[n] and x[n + 1], is lost.

Every real-world signal is some sort of discrete-time signal.

3.4.2 Discrete-Time State-Space Description


If in continuous-time we have to deal with differential equations, in discrete-time we have to
work with difference equations: the discrete-time state-space description is very similar to
the continuous-time one:
x[k + 1] = A · x[k] + b · u[k]
y[k] = c · x[k] + d · u[k].     (3.5)


Lyapunov Stability
A system is asymptotically stable if and only if

|λi | < 1 ∀i. (3.6)

where λi are the eigenvalues of A. In other words, this means that the eigenvalues should remain inside the unit circle.

3.4.3 Discrete-Time Control Systems


Nowadays, control systems are implemented on microcontrollers or microprocessors in discrete time and only really rarely (see the lecture Elektrotechnik II) in continuous time. As defined above, although the computations become faster and easier, the information is still sampled and there is a certain loss of data.

Aliasing
If the sampling frequency is chosen too low, i.e. one measures fewer times per second, the signal can become poorly determined and the loss of information is too big to reconstruct it uniquely. This situation is called aliasing and one can find many examples of it in the real world.
Let's take an easy example: you have finished the summer exam session and you are flying to Ibiza, to finally enjoy the sun after a summer at ETH. You decide to film the turbine of the plane because, although you are on holiday, you have an engineer's spirit. You arrive in Ibiza and, as you get into your hotel room, you have a look at your film. The rotation you observe looks different from what it is supposed to be, and since you haven't drunk yet, there must be some scientific reason. In fact, the sampling frequency of your phone camera is much lower than the turning frequency of the turbine: this results in a loss of information and hence in a wrong perception of what is going on.

Let's assume some signal

x1(t) = cos(ω · t).     (3.7)

After discretization, the sampled signal reads

x1[n] = cos(ω · Ts · n) = cos(Ω · n),  Ω = ω · Ts.     (3.8)

Let's assume a second signal

x2(t) = cos((ω + 2π/Ts) · t),     (3.9)

where the frequency

ω2 = ω + 2π/Ts     (3.10)

is given. The discretization of this second signal reads

x2[n] = cos((ω + 2π/Ts) · Ts · n)
      = cos(ω · Ts · n + 2π · n)     (3.11)
      = cos(ω · Ts · n)
      = x1[n].

Although the two signals have different frequencies, they are equal when discretized. For this reason, one has to define an interval of good frequencies where aliasing doesn't occur. In particular it holds

|ω| < π/Ts     (3.12)

or

f < 1/(2 · Ts)  ⇔  fs > 2 · fmax.     (3.13)

The maximal accepted frequency f = 1/(2 · Ts) is named the Nyquist frequency. In order to ensure good results, one uses in practice a factor of 10:

f < 1/(10 · Ts)  ⇔  fs > 10 · fmax.     (3.14)

For control systems the sampling frequency should be chosen according to the crossover frequency ωc:

fs ≥ 10 · ωc/(2π).     (3.15)
Anti Aliasing Filter (AAF)


The anti aliasing filter is an analog filter and not a discrete one. In fact, we want to eliminate unwanted frequencies before sampling, because afterwards it is too late (see Figure 26). But how can one define unwanted frequencies? Those are normally the higher frequencies of a signal⁶. Because of that, one normally uses a low-pass filter as AAF. This type of filter lets low frequencies pass and blocks higher ones⁷. The mathematical formulation of a first-order low-pass filter is given by

lp(s) = k / (τ · s + 1),     (3.16)

where k is the gain and τ is the time constant of the filter. The drawback of such a filter is problematic: it introduces additional unwanted phase that can lead to unstable behaviour.

Analog to Digital Converter (ADC)


At each discrete time step t = k · T the ADC converts a voltage e(t) to a digital number
following a sampling frequency.

Microcontroller (µP)
This is a discrete-time controller that uses the sampled discrete signal and gives back a discrete output.

Digital to Analog Converter (DAC)
In order to convert the signal back, the DAC applies a zero-order hold (ZOH). This introduces an extra delay of T/2 (see Figure 27).

⁶ Keep in mind: high signal frequencies mean problems with lower sampling frequencies!
⁷ This topic is treated exhaustively in the course Signals and Systems.


Figure 26: Control Loop with AAF.

Figure 27: Zero-Order-Hold.

3.4.4 Controller Discretization/Emulation


Discrete-time controllers can be defined by using the z-transform: this transform is to be understood as some kind of Laplace transform for discrete-time signals (see the course Analysis III). In general, it holds

y(t + Ts) = y[n + 1]  ⇔  z · Y(z).     (3.17)

The meaning of the variable z can change with respect to the chosen discretization approach. Here, just the discretization results are presented: if you are interested in the theory behind difference rules and discretization, have a look at courses like Computational Methods for Eng. Applications I/II. A list of the most used transformations is:

Exact:           s = (1/Ts) · ln(z),            z = e^{s·Ts}
Euler forward:   s = (z − 1)/Ts,                z = s · Ts + 1
Euler backward:  s = (z − 1)/(z · Ts),          z = 1/(1 − s · Ts)
Tustin:          s = (2/Ts) · (z − 1)/(z + 1),  z = (1 + s · Ts/2)/(1 − s · Ts/2)


The different approaches are the results of different Taylor approximations⁸:

• Forward Euler:
  z = e^{s·Ts} ≈ 1 + s · Ts.     (3.19)

• Backward Euler:
  z = e^{s·Ts} = 1/e^{−s·Ts} ≈ 1/(1 − s · Ts).     (3.20)

• Tustin:
  z = e^{s·Ts/2}/e^{−s·Ts/2} ≈ (1 + s · Ts/2)/(1 − s · Ts/2).     (3.21)

In practice, the most used approach is the Tustin transformation, but there are cases where
the other transformations could be useful.
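These substitutions do not have to be carried out by hand: the Control System Toolbox can discretize a transfer function directly. A minimal sketch (the first-order controller used here is only an assumed example):

Ts  = 0.1;                        % sampling time [s]
C_c = tf([2 1], [1 4]);           % assumed example controller C(s) = (2s+1)/(s+4)
C_d = c2d(C_c, Ts, 'tustin');     % Tustin (bilinear) emulation
C_z = c2d(C_c, Ts, 'zoh');        % zero-order-hold discretization, for comparison
bode(C_c, C_d, C_z, {1e-1, pi/Ts}); grid on;   % compare up to the Nyquist frequency pi/Ts
legend('C(s)', 'Tustin', 'ZOH');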

Stability
It is worth investigating how (and if) the stability conditions change for discretized systems. As a
reminder, continuous-time systems are asymptotically stable if their poles have a negative real part;
discrete-time systems are asymptotically stable if their poles lie inside the unit circle (see Figure 28).
With Euler backward and Tustin a stable continuous-time pole always maps to a stable discrete-time
pole; with Euler forward it is however possible that the discretized system becomes unstable (see Figure 28).

Figure 28: Pole mapping.
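The following small sketch illustrates this numerically for an assumed stable pole s = −10 and a deliberately coarse sampling time: only Euler forward can push the mapped pole outside the unit circle.

s_pole = -10;     % assumed stable continuous-time pole
Ts     = 0.3;     % deliberately coarse sampling time [s]
z_forward  = 1 + s_pole*Ts;                        % Euler forward
z_backward = 1/(1 - s_pole*Ts);                    % Euler backward
z_tustin   = (1 + s_pole*Ts/2)/(1 - s_pole*Ts/2);  % Tustin
abs([z_forward, z_backward, z_tustin])
% ans = 2.0  0.25  0.2  -> only the Euler-forward pole leaves the unit circle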

8 As a reminder: e^x ≈ 1 + x.


3.4.5 Discrete Time Controller Synthesis


As you learned in class, there are two ways to discretize systems: see Figure 29.
In the previous chapter, we learned how to emulate a system: in this chapter we learn the
discrete time controller synthesis (direct synthesis).

Figure 29: Emulation and Discrete time controller synthesis.

Figure 30: Discrete-time control loop.

The general situation is the following: a control loop is given as in Figure 30.
The continuous time transfer function G(s) is given.
We want to compute the equivalent discrete-time transfer function H(z). The loop is composed
from a Digital-to-Analog Converter (DAC), the continuous-time transfer function G(s) and an
Analog to Digital Converter (ADC).
We want to reach a structure that looks like Figure 31.
The first thing to do, is to consider an input to analyze. What is usually chosen, is a unit-step

u(kT ) = {. . . , 0, 1, 1, . . .}. (3.22)

Since the z-transform is defined as

X(z) = Σ_{n=0}^{∞} x(n) · z^(−n), (3.23)

we get for u

U(z) = 1 + z^(−1) + z^(−2) + z^(−3) + . . . (3.24)


This sum can be written as (see geometric series)

U(z) = 1/(1 − z^(−1)). (3.25)

For U(z) to be defined, this sum must converge. This can be verified by exploring the properties
of the geometric series.

Recall: sum of geometric series

Let Sn denote the sum over the first n elements of a geometric series:

Sn = U0 + U0 · a + U0 · a² + . . . + U0 · a^(n−1)
   = U0 · (1 + a + a² + . . . + a^(n−1)).

Then

a · Sn = U0 · (a + a² + a³ + . . . + a^n)

and

Sn − a · Sn = U0 · (1 − a^n),

which leads to

Sn = U0 · (1 − a^n)/(1 − a).

From here, it can be seen that the limit for n going to infinity converges if and only if the
absolute value of a is smaller than one:

lim_{n→∞} Sn = U0 · 1/(1 − a),  iff |a| < 1. (3.26)

The limiting case |a| = 1 =: r therefore defines the radius of convergence; the corresponding
convergence criterion is |a| < r. For U(z) above this means |z^(−1)| < 1, i.e. |z| > 1.
H(z) contains the converters: first comes the digital-to-analog converter. The Laplace
transform of the unit step reads

1/s. (3.27)

Hence, the transfer function seen before the analog-to-digital converter reads

G(s)/s. (3.28)

In order to account for the analog-to-digital converter, we apply the inverse Laplace transform
to get

y(t) = L^(−1){G(s)/s}. (3.29)

Through a z-transform one can now get Y(z). It holds

Y(z) = Z(y(kT)) = Z( L^(−1){G(s)/s} ). (3.30)

The transfer function is then given as

H(z) = Y(z)/U(z). (3.31)


Figure 31: Discrete synthesis.

3.4.6 Examples
Example 10. Show that the Tustin transformation preserves stability, i.e. that stable continuous-
time poles are mapped to discrete-time poles inside the unit circle.


Solution. If one looks at the Tustin transformation, it holds

z = (1 + s · Ts/2)/(1 − s · Ts/2).

Writing the continuous-time pole as s = x + i · y, one gets

z = (1 + (x + i · y) · Ts/2)/(1 − (x + i · y) · Ts/2)
  = (1 + x · Ts/2 + i · y · Ts/2)/(1 − x · Ts/2 − i · y · Ts/2).

If we compute the modulus, we get

|z| = sqrt( ((1 + x · Ts/2)² + y² · (Ts/2)²) / ((1 − x · Ts/2)² + y² · (Ts/2)²) ).

Since the continuous-time pole is assumed to be stable, it holds

x < 0.

This means

(1 + x · Ts/2)² + y² · (Ts/2)² < (1 − x · Ts/2)² + y² · (Ts/2)²,

and so

|z| < 1.

The discrete-time pole is therefore stable as well.


Example 11. One of your colleagues has developed the following continuous-time controller
for a continuous-time plant
C(s) = (2s + 1)/(s + α)
where α ∈ R is a tuning factor.

(a) You now want to implement the controller on a microprocessor using the Euler back-
ward emulation approach. What is the resulting function C(z) for a generic sampling
time T ∈ R+ ?

(b) What is the range of the tuning factor α that produces an asymptotically stable
discrete-time controller when applying the Euler backward emulation approach?

(c) What is the condition on α to obtain an asymptotically stable continuous-time controller


C(s) and an asymptotically stable discrete-time controller C(z) using the Euler backward
emulation approach?


Solution.

(a) The Euler backward emulation approach reads

    s = (z − 1)/(T · z).

    If one substitutes this into the transfer function of the continuous-time controller, one gets

    C(z) = ( 2 · (z − 1)/(T · z) + 1 ) / ( (z − 1)/(T · z) + α )
         = ( z · (2 + T) − 2 ) / ( z · (1 + α · T) − 1 ).

(b) The controller C(z) is asymptotically stable if its pole πd fulfills the condition

    |πd| < 1.

    The pole follows from

    z · (1 + α · T) − 1 = 0  ⇒  πd = 1/(1 + α · T).

    This, together with the condition for stability, gives

    −1 < 1/(1 + α · T) < 1  ⇒  α > 0 or α < −2/T.

(c) For C(s) to be asymptotically stable, its pole πc = −α must lie in the left half of the
complex plane:
Re{πc } < 0 ⇒ α > 0.
Together with the results from (b), the condition on α is α > 0.


Example 12.

(a) Choose all the signals that can be sampled without aliasing. The sampling time is Ts = 1s.

 x(t) = cos(4π · t).


 x(t) = cos(4π · t + π).
 x(t) = 2 · cos(4π · t + π).
 x(t) = cos(0.2π · t).
 x(t) = cos(0.2π · t + π).
 x(t) = 3 · cos(0.2π · t + π).
 x(t) = cos(π · t).
 x(t) = cos(π · t + π).
 x(t) = 2 · cos(π · t + π).
 x(t) = cos(0.2π · t) + cos(4π · t).
 x(t) = sin(0.2π · t) + sin(0.4π · t).
 x(t) = Σ_{i=1}^{100} cos(2π/(i + 1) · t).

 x(t) = Σ_{i=1}^{100} cos(2π/(i + 2) · t).

(b) The signal


x(t) = 2 · cos(20 · π · t + π) + cos(40 · π · t) + cos(30 · π · t).
is sampled with sampling frequency fs . What is the minimal fs such that no aliasing
occurs?


Solution.
(a)  x(t) = cos(4π · t).
     x(t) = cos(4π · t + π).
     x(t) = 2 · cos(4π · t + π).
    ✓ x(t) = cos(0.2π · t).
    ✓ x(t) = cos(0.2π · t + π).
    ✓ x(t) = 3 · cos(0.2π · t + π).
     x(t) = cos(π · t).
     x(t) = cos(π · t + π).
     x(t) = 2 · cos(π · t + π).
     x(t) = cos(0.2π · t) + cos(4π · t).
    ✓ x(t) = sin(0.2π · t) + sin(0.4π · t).
     x(t) = Σ_{i=1}^{100} cos(2π/(i + 1) · t).
    ✓ x(t) = Σ_{i=1}^{100} cos(2π/(i + 2) · t).
Explanation:
If one goes back to the definition of the ranges that ensure no aliasing occurs, one gets the formula

f < 1/(2 · Ts).

In this case the condition reads

f < 1/(2 · 1 s) = 0.5 Hz.

One can read the frequency of a signal from its formula: the value that multiplies t is ω, and

f = ω/(2π).

The first three signals have

f = 4π/(2π) = 2 Hz,

which is greater than 0.5 Hz; additional phase and gain don't play a role here. The next three
signals have a frequency of

f = 0.2π/(2π) = 0.1 Hz,

which is lower than 0.5 Hz and hence fine: no aliasing occurs. The following three signals have
exactly the critical frequency f = π/(2π) = 0.5 Hz; theoretically speaking, these are already
considered aliased, because the Nyquist theorem requires a strict < in the condition.
For the next two signals a special procedure applies: for a combination of signals, one has to
look at the highest frequency contained in the signal. In the first case this is 2 Hz, which exceeds
the limit frequency. In the second case this is 0.2 Hz, which is acceptable.
The last two cases are a general form of combination of signals. The frequency of the terms of the
first sum decreases with 1/(i + 1) and has its biggest value at i = 1, namely 0.5 Hz. This is exactly
the limit frequency, hence not acceptable. The frequency of the terms of the second sum decreases
with 1/(i + 2) and has its biggest value at i = 1, namely 0.33 Hz. This is lower than the limit
frequency, hence acceptable.


(b) The general formula reads

    fs > 2 · fmax.

    Here it holds

    fmax = 40π/(2π) = 20 Hz.

    It follows

    fs > 2 · 20 Hz = 40 Hz.


Example 13.

(a) For which A and B is the (discrete-time) system asymptotically stable?

 A = [1 2; 1 2],        B = [1; 2].
 A = [−1 −2; −1 −2],    B = [1; 2].
 A = [0 0; 0 0],        B = [0.1; 0].
 A = [−1 −2; 1 −0.5],   B = [1; 2].
 A = [−1 −2; 0 −0.5],   B = [1; 2].
 A = [1 −2; 0 0.5],     B = [1; 2].
 A = [−0.1 −2; 0 −0.5], B = [1; 2].
 A = [−0.1 −2; 0 −0.5], B = [0.1; 0].
 A = [0.1 −2; 0 0.5],   B = [1; 2].
 A = [0.1 −2; 0 0.5],   B = [0.1; 0].

(b) The previous exercise can be solved independently of B.

 True.
 False.


Solution.
(a)  A = [1 2; 1 2],        B = [1; 2].
     A = [−1 −2; −1 −2],    B = [1; 2].
    ✓ A = [0 0; 0 0],        B = [0.1; 0].
     A = [−1 −2; 1 −0.5],   B = [1; 2].
     A = [−1 −2; 0 −0.5],   B = [1; 2].
     A = [1 −2; 0 0.5],     B = [1; 2].
    ✓ A = [−0.1 −2; 0 −0.5], B = [1; 2].
    ✓ A = [−0.1 −2; 0 −0.5], B = [0.1; 0].
    ✓ A = [0.1 −2; 0 0.5],   B = [1; 2].
    ✓ A = [0.1 −2; 0 0.5],   B = [0.1; 0].

Explanation:
The eigenvalues of A should fulfill

|λi| < 1.

(b)
✓ True.
 False.


Example 14. A continuous-time system with the following transfer function is considered:

G(s) = 9/(s + 3).

(a) Calculate the equivalent discrete-time transfer function H(z). The latter is composed
    of a Digital-to-Analog Converter (DAC), the continuous-time transfer function G(s) and
    an Analog-to-Digital Converter (ADC). Both converters, i.e. the DAC and ADC, have a
    sampling time Ts = 1 s.

(b) Calculate the static error if a proportional controller K(z) = kp is used and the reference
    input yc,k is a step signal.
    Hint: Heaviside with amplitude equal to 1.


Solution.
(a) Rather than taking into account all the individual elements which make up the continuous-
    time part of the system (DAC, plant, ADC), in a first step these elements are lumped
    together and represented by the discrete-time description H(z). In this case, the
    discrete-time output of the system is given by

    Y(z) = H(z) · U(z),

    where U(z) is the z-transform of the discrete input uk given to the system. Therefore,
    the discrete-time representation of the plant is given by the ratio of the output to the input

    H(z) = Y(z)/U(z).

    For the sake of convenience, uk is chosen to be the discrete-time Heaviside function

    uk[k] = 1 for k ≥ 0,   uk[k] = 0 else.

    This input function needs to be z-transformed. Recall the definition of the z-transform

    X(z) = Σ_{n=0}^{∞} x(n) · z^(−n),

    and applying it to the input uk gives (uk[k] = 1 for k ≥ 0)

    U(z) = Z(uk) = Σ_{k=0}^{∞} z^(−k) = Σ_{k=0}^{∞} (z^(−1))^k.

    For U(z) to be defined, this sum must converge. Recalling the properties of the geometric
    series one can see

    U(z) = 1/(1 − z^(−1)),

    as long as the convergence criterion is satisfied, i.e. as long as |z^(−1)| < 1, or equivalently |z| > 1.
    This signal is then transformed to continuous time using a zero-order hold DAC. The
    output of this transformation is again a Heaviside function uh(t). Since the signal is
    now in continuous time, the Laplace transform is used for the analysis. The Laplace
    transform of the step function is well known to be

    L(uh(t))(s) = 1/s = U(s).

    The plant output in continuous time is given by

    Y(s) = G(s) · U(s) = G(s)/s.

    After the plant G(s), the signal is sampled and transformed into discrete time once more.
    Therefore, the z-transform of the output has to be calculated. However, the signal Y(s)


    cannot be transformed directly, since it is expressed in the frequency domain. Thus, it
    first has to be transformed back into the time domain (y(t)) using the inverse Laplace
    transform, where it is then sampled every t = k · T. The resulting series of samples {y[k]}
    is then transformed back into the z-domain, i.e.

    Y(z) = Z( { L^(−1)(G(s)/s)(kT) } ).

    To find the inverse Laplace transform of the output, its frequency-domain representation
    is decomposed into a sum of simpler functions:

    G(s)/s = 9/(s · (s + 3)) = α/s + β/(s + 3) = (s · (α + β) + 3 · α)/(s · (s + 3)).

    The comparison of the numerators yields

    α = 3,  β = −3,

    and thus

    G(s)/s = 3/s − 3/(s + 3) = 3 · (1/s − 1/(s + 3)).

    Now the terms can be individually transformed, with the result

    L^(−1){G(s)/s} = 3 · (1 − e^(−3t)) · uh(t) = y(t).

    The z-transform of the output sampled at discrete time instants y(kT) is given by

    Y(z) = Z({y(kT)}) = 3 · [ Σ_{k=0}^{∞} z^(−k) − Σ_{k=0}^{∞} e^(−3kT) · z^(−k) ]
         = 3 · [ Σ_{k=0}^{∞} (z^(−1))^k − Σ_{k=0}^{∞} (e^(−3T) · z^(−1))^k ].

    From above, the two necessary convergence criteria are known:

    |z^(−1)| < 1  ⇒  |z| > 1,
    |e^(−3T) · z^(−1)| < 1  ⇒  |z| > |e^(−3T)|.

    Using the above equations, the output transform converges to (given that the two
    convergence criteria are satisfied)

    Y(z) = 3 · ( 1/(1 − z^(−1)) − 1/(1 − e^(−3T) · z^(−1)) ).


    Finally, the target transfer function H(z) is given by

    H(z) = Y(z)/U(z)
         = (1 − z^(−1)) · Y(z)
         = 3 · ( 1 − (1 − z^(−1))/(1 − e^(−3T) · z^(−1)) )
         = 3 · (1 − e^(−3T)) · z^(−1) / (1 − e^(−3T) · z^(−1)).
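As a cross-check (a minimal sketch, assuming the Control System Toolbox is available), the same zero-order-hold discretization can be obtained numerically:

% Numerical cross-check of the ZOH discretization of G(s) = 9/(s+3) with Ts = 1 s
G  = tf(9, [1 3]);
Ts = 1;
H  = c2d(G, Ts, 'zoh')
% Expected (up to rounding): H(z) = 2.851/(z - 0.0498),
% i.e. 3*(1 - exp(-3))/(z - exp(-3)), which matches the expression above.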

(b) From the signal flow diagram, it can be seen that the error ε(z) is composed of

    ε(z) = Yc(z) − Y(z)
         = Yc(z) − H(z) · K(z) · ε(z)
         = Yc(z)/(1 + kp · H(z)).

    The input yc(t) is a discrete step signal, for which the z-transform was calculated in (a):

    Yc(z) = 1/(1 − z^(−1)).

    Therefore, the error signal reads

    ε(z) = [ 1/(1 − z^(−1)) ] / [ 1 + kp · 3 · (1 − e^(−3T)) · z^(−1)/(1 − e^(−3T) · z^(−1)) ].

    To calculate the steady-state error, i.e. the error after infinite time, the discrete-time final
    value theorem9 is used:

    lim_{t→∞} ε(t) = lim_{z→1} (1 − z^(−1)) · ε(z).

    The factor (1 − z^(−1)) cancels with Yc(z); substituting z = 1 in the remaining expression,
    the static error becomes

    ε∞ = 1/(1 + 3 · kp).

    Note that the error does not completely vanish, but it can be made smaller by increasing kp.
    This is the same behaviour which would have been expected from a purely proportional
    controller in continuous time. To drive the static error to zero, a discrete-time integrator
    of the form 1/(Ti · (1 − z^(−1))) would be necessary.

9 Compare the continuous-time final value theorem: lim_{t→∞} ε(t) = lim_{s→0} s · ε(s).


4 MIMO Systems
MIMO systems are systems with multiple inputs and multiple outputs. In this chapter we will
introduce some analytical tools to deal with those systems.

4.1 System Description


4.1.1 State Space Description
The state-space description of a MIMO system is very similar to the one of a SISO system.
For a linear, time invariant MIMO system with m input signals and p output signals, it holds
d/dt x(t) = A · x(t) + B · u(t),  x(t) ∈ Rn, u(t) ∈ Rm,
y(t) = C · x(t) + D · u(t),       y(t) ∈ Rp, (4.1)

where

x(t) ∈ Rn×1 , u(t) ∈ Rm×1 , y(t) ∈ Rp×1 , A ∈ Rn×n , B ∈ Rn×m , C ∈ Rp×n , D ∈ Rp×m . (4.2)

Remark. The dimensions of the matrices A, B, C, D are very important and they are a key
concept to understand problems.

The big difference from SISO systems is that u(t) and y(t) are now vectors and no longer just
scalars. For this reason B, C, D are now matrices.

4.1.2 Transfer Function


One can compute the transfer function of a MIMO system with the well known formula

P (s) = C · (s · 1 − A)−1 · B + D. (4.3)

This is no more a scalar, but a p×m-matrix. The elements of that matrix are rational functions.
Mathematically:
 
P11 (s) · · · P1m (s)
 ..  , bij (s)
P (s) =  ... ..
. .  Pij (s) = . (4.4)
aij (s)
Pp1 (s) · · · Ppm (s)

Here Pij (s) is the transfer function from the j-th input to the i-th output.
Remark. In the SISO case, the only matrix we had to care about was A. Since B, C are now
matrices, one has to pay attention to some mathematical properties: matrix multiplication is
not commutative (A · B ≠ B · A). Since P(s) and C(s) are now matrices, it holds

L(s) = P(s) · C(s) ≠ C(s) · P(s). (4.5)

Moreover, one can no longer define the complementary sensitivity and the sensitivity as

T(s) = L(s)/(1 + L(s)),   S(s) = 1/(1 + L(s)), (4.6)

because no matrix division is defined. There are however similar expressions to describe those
transfer functions: let’s try to derive them.


We start from the standard control system’s structure (see Figure 32).
To keep things general, let’s say the plant P (s) ∈ Cp×m and the controller C(s) ∈ Cm×p . The
reference r ∈ Rp , the input u ∈ Rm and the disturbance d ∈ Rp . The output Y (s) can as always
be written as
Y (s) = T (s) · R(s) + S(s) · D(s). (4.7)
where T (s) is the transfer function of the complementary sensitivity and S(s) is the transfer
function of the sensitivity.


Figure 32: Standard feedback control system structure.

Starting from the error E(s)


If one wants to determine the matrices of those transfer functions, one can start writing (by
paying attention to the direction of multiplications) with respect to E(s)

E(s) = R(s) − P (s) · C(s) · E(s) − D(s),


(4.8)
Y (s) = P (s) · C(s) · E(s) + D(s).

This gives in the first place

E(s) = (1 + P (s) · C(s))−1 · (R(s) − D(s)) . (4.9)

Inserting and writing the functions as F (s) = F for simplicity, one gets

Y = P · C · E + D
  = P · C · (1 + P · C)^(−1) · (R − D) + D
  = P · C · (1 + P · C)^(−1) · R − P · C · (1 + P · C)^(−1) · D + (1 + P · C) · (1 + P · C)^(−1) · D
  = P · C · (1 + P · C)^(−1) · R + (1 + P · C − P · C) · (1 + P · C)^(−1) · D          (4.10)
  = P · C · (1 + P · C)^(−1) · R + (1 + P · C)^(−1) · D.

Recalling the general equation (4.7), one gets the two transfer functions:

T1(s) = P(s) · C(s) · (1 + P(s) · C(s))^(−1),
S1(s) = (1 + P(s) · C(s))^(−1). (4.11)


Starting from the input U (s)


If one starts with respect to U (s), one gets

U (s) = C(s) · (R(s) − D(s)) − C(s) · P (s) · U (s),


(4.12)
Y (s) = P (s) · U (s) + D(s).

This gives in the first place

U (s) = (1 + C(s) · P (s))−1 · C(s) · (R(s) − D(s)) . (4.13)

Inserting and writing the functions as F (s) = F for simplicity, one gets

Y = P · U + D
  = P · (1 + C · P)^(−1) · C · (R − D) + D                                   (4.14)
  = P · (1 + C · P)^(−1) · C · R + (1 − P · (1 + C · P)^(−1) · C) · D.

Recalling the general equation (4.7), one gets the two transfer functions:

T2(s) = P(s) · (1 + C(s) · P(s))^(−1) · C(s),
S2(s) = 1 − P(s) · (1 + C(s) · P(s))^(−1) · C(s). (4.15)

It can be shown that these two different results are actually equivalent. It holds

S1 = S2
(1 + P · C)^(−1) = 1 − P · (1 + C · P)^(−1) · C
1 = 1 + P · C − P · (1 + C · P)^(−1) · C · (1 + P · C)
1 = 1 + P · C − P · (1 + C · P)^(−1) · (C + C · P · C)
1 = 1 + P · C − P · (1 + C · P)^(−1) · (1 + C · P) · C
1 = 1 + P · C − P · C
1 = 1,                                                                        (4.16)

T1 = T2
P · C · (1 + P · C)^(−1) = P · (1 + C · P)^(−1) · C
P · C = P · (1 + C · P)^(−1) · C · (1 + P · C)
P · C = P · (1 + C · P)^(−1) · (C + C · P · C)
P · C = P · (1 + C · P)^(−1) · (1 + C · P) · C
P · C = P · C.

Finally, one can show that

S(s) + T(s) = (1 + P · C)^(−1) + P · C · (1 + P · C)^(−1)
            = (1 + P · C) · (1 + P · C)^(−1)                                  (4.17)
            = 1.
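These relations can be formed directly in Matlab. A minimal sketch, assuming a square plant P and a controller C are already defined as compatible LTI models (the feedback command takes care of the matrix inversions):

% Closed-loop MIMO sensitivities for a given plant P(s) and controller C(s)
p = size(P, 1);                    % number of outputs
L = P * C;                         % loop gain: the order matters for MIMO!
T = feedback(L, eye(p));           % T(s) = P*C*(I + P*C)^-1
S = feedback(eye(p), L);           % S(s) = (I + P*C)^-1
% Sanity check: S + T should be (numerically) the identity
norm(freqresp(S + T, 1) - eye(p))  % evaluated at w = 1 rad/s, should be ~0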


4.2 System Analysis


4.2.1 Lyapunov Stability
The Lyapunov stability theorem analyses the behaviour of a system near to its equilibrium
points when u(t) = 0. Because of this, we don’t care if the system is MIMO or SISO. The three
cases read

• Asymptotically stable: limt→∞ ||x(t)|| = 0;

• Stable: ||x(t)|| < ∞ ∀ t ≥ 0;

• Unstable: limt→∞ ||x(t)|| = ∞.

As it was done for the SISO case, one can show by using x(t) = eA·t · x0 that the stability can
be related to the eigenvalues of A through:

• Asymptotically stable: Re(λi ) < 0 ∀ i;

• Stable: Re(λi ) ≤ 0 ∀ i;

• Unstable: Re(λi ) > 0 for at least one i.

4.2.2 Controllability
Controllable: is it possible to control all the states of a system with an input u(t)?
A system is said to be completely controllable, if the Controllability Matrix

R = [B, A · B, A² · B, . . . , A^(n−1) · B] ∈ R^(n×(n·m)) (4.18)

has full rank n.

4.2.3 Observability
Observable: is it possible to reconstruct the initial conditions of all the states of a system from
the output y(t)?
A system is said to be completely observable, if the Observability Matrix

O = [C; C · A; C · A²; . . . ; C · A^(n−1)] ∈ R^((n·p)×n) (4.19)

has full rank n.
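All three tests (stability, controllability, observability) are one-liners in Matlab. A minimal sketch, assuming the matrices A, B, C of a state-space model are already defined:

% Stability, controllability and observability checks for dx/dt = A x + B u, y = C x
n = size(A, 1);
isStable       = all(real(eig(A)) < 0);    % asymptotic stability (continuous time)
isControllable = rank(ctrb(A, B)) == n;    % full rank of [B, A*B, ..., A^(n-1)*B]
isObservable   = rank(obsv(A, C)) == n;    % full rank of [C; C*A; ...; C*A^(n-1)]
fprintf('stable: %d, controllable: %d, observable: %d\n', ...
        isStable, isControllable, isObservable);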


4.3 Poles and Zeros


Since we have to deal with matrices, one has to use the theory of minors (see Lineare Algebra
I/II ) in order to compute the zeros and the poles of a transfer function.
The first step of this computation is to calculate all the minors of the transfer function P (s).
The minors of a matrix F ∈ Rn×m are the determinants of all square submatrices. By maximal
minor it is meant the minor with the biggest dimension. From the minors one can calculate
the poles and the zeros as follows:

4.3.1 Zeros
The zeros are the zeros of the greatest common divisor of the numerators of the maximal minors,
after normalizing the minors with respect to a common denominator (the pole polynomial).

4.3.2 Poles
The poles are the zeros of the least common denominator of all the minors of P (s).

4.3.3 Directions
In MIMO systems, the poles and the zeros are associated with a direction. Moreover, a zero-pole
cancellation occurs only if zero and pole have the same magnitude and the same input-output direction.
The directions δ_{π,i}^{in,out} associated with a pole πi are defined by

P(s)|_{s=πi} · δ_{π,i}^{in} = ∞ · δ_{π,i}^{out}. (4.20)

The directions δ_{ξ,i}^{in,out} associated with a zero ξi are defined by

P(s)|_{s=ξi} · δ_{ξ,i}^{in} = 0 · δ_{ξ,i}^{out}. (4.21)

The directions can be computed with the singular value decomposition (see next week) of
the matrix P(s).
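Instead of going through all minors by hand, the poles and transmission zeros of a MIMO transfer function matrix can also be obtained numerically. A small sketch, assuming P is a proper LTI model already defined (the minor-based procedure above remains the reference for hand calculations):

% Poles and transmission zeros of a (proper) MIMO transfer function matrix P
p = pole(P)            % poles of the MIMO system
z = tzero(P)           % transmission zeros
% tzero works on a state-space realization; applying minreal(ss(P)) first removes
% uncontrollable/unobservable modes before the zero computation, if needed.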

4.4 Nyquist Theorem for MIMO Systems


The Nyquist theorem can also be formulated for MIMO systems. The closed loop T(s) is
asymptotically stable if

nc = n+ + (1/2) · n0, (4.22)

where

• nc : number of encirclements of the origin by

  N(i · ω) = det(1 + P(i · ω) · C(i · ω)). (4.23)

• n+ : number of unstable poles of the loop gain.

• n0 : number of marginally stable poles of the loop gain.
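The encirclement count can be checked graphically by plotting det(I + L(jω)) over frequency. A minimal sketch, assuming square LTI models P and C are defined:

% MIMO Nyquist check: plot det(I + P(jw)*C(jw)) and count encirclements of the origin
w = logspace(-2, 3, 2000);               % frequency grid [rad/s]
L = P * C;                               % loop gain
p = size(L, 1);
N = zeros(size(w));
for k = 1:numel(w)
    Lk   = freqresp(L, w(k));            % frequency response matrix at j*w(k)
    N(k) = det(eye(p) + Lk(:, :, 1));
end
plot(real(N), imag(N), real(N), -imag(N), '--'); grid on;  % dashed: mirror image for w < 0
xlabel('Re'); ylabel('Im'); title('det(I + L(j\omega))');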


4.5 Examples
Example 15. The minors of a given matrix

P(s) = [a b c; d e f]

are:
First order:
a, b, c, d, e, f.
Second order:
det([a b; d e]), det([a c; d f]), det([b c; e f]).


Example 16. One wants to find the poles and the zeros of the given transfer function

P(s) = [ (s+2)/(s+3),  0 ;  0,  (s+1)·(s+3)/(s+2) ].


Solution. First of all, we list all the minors of the transfer function:

Minors:

• First order: (s+2)/(s+3), (s+1)·(s+3)/(s+2), 0, 0;

• Second order: s + 1.

Poles:
The least common denominator of all the minors is

(s + 3) · (s + 2).

This means that the poles are

π1 = −2,
π2 = −3.

Zeros:
The maximal minor is s + 1 and we have to normalize it with respect to the pole polynomial
(s + 3) · (s + 2). It holds

s + 1  ⇒  (s + 1) · (s + 2) · (s + 3) / ((s + 2) · (s + 3)).

The numerator reads

(s + 1) · (s + 2) · (s + 3),

and so the zeros are

ζ1 = −1,
ζ2 = −2,
ζ3 = −3.


Example 17. One wants to find the poles and the zeros of the given transfer function

P(s) = [ 1/(s+1),  1/(s+2),  2·(s+1)/((s+2)·(s+3)) ;  0,  (s+3)/(s+1)²,  (s+4)/(s+1) ].


Solution. First of all, we list all the minors of the transfer function:

Minors:

• First order: 1/(s+1), 1/(s+2), 2·(s+1)/((s+2)·(s+3)), 0, (s+3)/(s+1)², (s+4)/(s+1);

• Second order: (s+3)/(s+1)³,
  (s+4)/((s+1)·(s+2)) − 2/((s+2)·(s+1)) = 1/(s+1),
  −(s+4)/(s+1)².

Poles:
The least common denominator of all the minors is

(s + 1)³ · (s + 2) · (s + 3).

This means that the poles are

π1 = −1, π2 = −1, π3 = −1, π4 = −2, π5 = −3.

Zeros:
The numerators of the maximal minors are (s + 3), 1 and −(s + 4). We have to normalize them
with respect to the pole polynomial (s + 1)³ · (s + 2) · (s + 3). It holds

(s + 3)   ⇒  (s + 3)² · (s + 2) / ((s + 1)³ · (s + 2) · (s + 3)),
1         ⇒  (s + 1)² · (s + 2) · (s + 3) / ((s + 1)³ · (s + 2) · (s + 3)),
−(s + 4)  ⇒  −(s + 4) · (s + 1) · (s + 2) · (s + 3) / ((s + 1)³ · (s + 2) · (s + 3)).

The greatest common divisor of the numerators is

(s + 3) · (s + 2).

Hence, the zeros are

ζ1 = −2,
ζ2 = −3.


Example 18. The dynamics of a system are given as

ẋ(t) = [4 1 0; −1 2 0; 0 0 2] · x(t) + [1 0; 0 0; 0 1] · u(t),
y(t) = [1 0 0; 0 1 1] · x(t). (4.24)

Moreover, the transfer function is given as

P(s) = [ (s−2)/(s²−6s+9),  0 ;  −1/(s²−6s+9),  1/(s−2) ]. (4.25)

(a) Is the system Lyapunov stable, asymptotically stable or unstable?

(b) Is the system completely controllable?

(c) Is the system completely observable?

(d) The poles of the system are π1 = 2 and π2,3 = 3. The zero of the system is ζ1 = 2. Are
    there any zero-pole cancellations?

(e) Write a Matlab code that computes the transfer function based on A, B, C, D.


Solution.

(a)

First of all, one identifies the matrices as

A = [4 1 0; −1 2 0; 0 0 2],  B = [1 0; 0 0; 0 1],
C = [1 0 0; 0 1 1],          D = [0 0; 0 0]. (4.26)

We have to compute the eigenvalues of A. It holds

det(A − λ · 1) = det([4−λ, 1, 0; −1, 2−λ, 0; 0, 0, 2−λ])
               = (2 − λ) · det([4−λ, 1; −1, 2−λ])
               = (2 − λ) · ((4 − λ) · (2 − λ) + 1)
               = (2 − λ) · (λ² − 6λ + 9)
               = (2 − λ) · (λ − 3)².

Since all three eigenvalues are positive, the system is unstable (in the sense of Lyapunov).

(b)

The controllability matrix can be found with the well-known multiplications:

A · B = [4 1 0; −1 2 0; 0 0 2] · [1 0; 0 0; 0 1] = [4 0; −1 0; 0 2],

A² · B = [4 1 0; −1 2 0; 0 0 2] · [4 0; −1 0; 0 2] = [15 0; −6 0; 0 4].

Hence, the controllability matrix reads

R = [1 0 4 0 15 0; 0 0 −1 0 −6 0; 0 1 0 2 0 4].

This has full rank 3: the system is completely controllable.


(c)

The observability matrix can be found with the well-known multiplications:

C · A = [1 0 0; 0 1 1] · [4 1 0; −1 2 0; 0 0 2] = [4 1 0; −1 2 2],

C · A² = [4 1 0; −1 2 2] · [4 1 0; −1 2 0; 0 0 2] = [15 6 0; −6 3 4].

Hence, the observability matrix reads

O = [1 0 0; 0 1 1; 4 1 0; −1 2 2; 15 6 0; −6 3 4].

This has full rank 3: the system is completely observable.

(d)

Although ζ1 = 2 and π1 = 2 have the same magnitude, they don't cancel out. Why? Since the
system is completely controllable and completely observable, we already have a minimal
realization of the system. This means that no further cancellation is possible. The reason is
that the directions of the two don't coincide. We will learn more about this in the next chapter.

(e)

The code reads

P=tf(ss(A,B,C,D));

or alternatively

s=tf('s');
P=C*inv(s*eye(3)-A)*B+D;


Example 19. Given is the system

ẋ(t) = [−2 0 0; 0 −2 5; 0 −1 0] · x(t) + [1 0; 0 0; 1 1] · u(t),
y(t) = [−1 0 1; 0 1 0] · x(t),

with two inputs u1(t) and u2(t) and two outputs y1(t) and y2(t). The transfer function of the
system reads

P(s) = [ (2s−1)/((s²+2s+5)·(s+2)),  (s+2)/(s²+2s+5) ;  5/(s²+2s+5),  5/(s²+2s+5) ].

(a) How many state variables are needed in order to describe the input/output behaviour of
    the system?

(b) How many outputs are needed in order to reconstruct the initial state x(0)?

(c) For every x(0) ≠ 0 we have to ensure lim_{t→∞} x(t) = 0. How many inputs are needed in
    order to ensure this condition?

Solution.

(a) We want to find the minimal order of the system, that is, the number of its poles.

    Minors of 1st order:

    (2s − 1)/((s² + 2s + 5) · (s + 2)),  (s + 2)/(s² + 2s + 5),  5/(s² + 2s + 5).

    Minor of 2nd order:

    (2s − 1)/((s² + 2s + 5) · (s + 2)) · 5/(s² + 2s + 5) − (s + 2)/(s² + 2s + 5) · 5/(s² + 2s + 5)
    = 5/(s² + 2s + 5) · [ (2s − 1)/((s² + 2s + 5) · (s + 2)) − (s + 2)/(s² + 2s + 5) ]
    = 5/(s² + 2s + 5) · (−s² − 2s − 5)/((s² + 2s + 5) · (s + 2))
    = −5/((s² + 2s + 5) · (s + 2)).

    The pole polynomial reads

    (s² + 2s + 5) · (s + 2),

    and the poles are

    π1 = −2,
    π2,3 = −1 ± 2j.

    The minimal order of the system is n = 3, since we have 3 poles. The given description
    is already a minimal realization.

    Alternative

    As an alternative one can compute the controllability and observability matrices as follows.
    For the controllability matrix it holds:

    A · B = [−2 0 0; 0 −2 5; 0 −1 0] · [1 0; 0 0; 1 1] = [−2 0; 5 5; 0 0],

    A² · B = [−2 0 0; 0 −2 5; 0 −1 0] · [−2 0; 5 5; 0 0] = [4 0; −10 −10; −5 −5].

    The controllability matrix reads

    R = [1 0 −2 0 4 0; 0 0 5 5 −10 −10; 1 1 0 0 −5 −5].

    This matrix has full rank r = 3 = n: the system is completely controllable.
    For the observability matrix it holds:

    C · A = [−1 0 1; 0 1 0] · [−2 0 0; 0 −2 5; 0 −1 0] = [2 −1 0; 0 −2 5],

    C · A² = [2 −1 0; 0 −2 5] · [−2 0 0; 0 −2 5; 0 −1 0] = [−4 2 −5; 0 −1 −10].

    The observability matrix reads

    O = [−1 0 1; 0 1 0; 2 −1 0; 0 −2 5; −4 2 −5; 0 −1 −10].

    This matrix has full rank r = 3 = n: the system is completely observable.
    Since the system is completely observable and controllable, the given state-space description
    is already a minimal realization, which means that the minimal order is n = 3.

(b) If we take into account just the first output y1(t), we get

    C1 = [−1 0 1],

    with its observability matrix

    [−1 0 1; 2 −1 0; −4 2 −5].

    This matrix has full rank and the partial system is completely observable: this means
    that x(0) can be reconstructed from the first output alone.
    Remark. This does not hold, e.g., for the second output y2(t). One would get

    C2 = [0 1 0],

    with its observability matrix

    [0 1 0; 0 −2 5; 0 −1 −10].

    This observability matrix has rank r = 2 and so the system is not completely observable:
    x(0) cannot be reconstructed from the second output only.


(c) None. The eigenvalues of the system are

    λ1 = −2,
    λ2,3 = −1 ± 2j.

    These eigenvalues are all asymptotically stable: no matter which input is applied (even
    u(t) = 0), the state will return to the origin by itself.
    Remark. If this were not the case, we would proceed as in task (b): if one takes into
    account just the input u1(t), one gets

    B1 = [1; 0; 1],

    and its controllability matrix

    [1 −2 4; 0 5 −10; 1 0 −5].

    This controllability matrix has full rank and would satisfy the condition. You can check
    for yourselves that this would not be the case for u2(t).


4.6 Relative-Gain Array (RGA)


Although we have introduced the structure of MIMO systems, one could still ask why we have
to worry about all these new concepts, and why we can't simply treat a MIMO system as a
superposition of several SISO systems. In some cases this reasoning is actually valid, but how
can one tell when this approach is allowed?
The RGA matrix tells us how the different subplants of a MIMO plant interact: it is a good
indicator of how "SISO" a system is.
This matrix can in general be calculated as
RGA(s) = P (s). × P (s)−T (4.27)
where
P (s)−T = (P (s)T )−1 . (4.28)
and A. × A represents the element-wise multiplication (A.*A in Matlab).
In general, each element of the matrix gives us a specific piece of information:

[RGA]ab = (gain from ua to yb with all other loops open) / (gain from ua to yb with all other loops closed (perfect control)). (4.29)
Remark. It’s intuitive to notice, that if
[RGA]ab ≈ 1 (4.30)
the numerator and the denominator are equal, i.e. SISO control is enough to bring ua at yb .
Remark. The theory behind the relative-gain array goes far beyond the aim of this course and
one should be happy with the given examples. If however you are interested in this topic, you
can have a look here.
Let’s take the example of a 2 × 2 plant: in order to compute the first element (1, 1) of the
RGA(s) we consider the system in Figure 33. We close with a SISO controller C22 (s) the loop
from y2 to u2 and try to compute the transfer function from u1 to y1 .
Everyone has his special way to decouple a MIMO system; I’ve always used this procedure:
starting from the general equation in frequency domain
     
Y1 (s) P11 (s) P12 (s) U1 (s)
= · . (4.31)
Y2 (s) P21 (s) P22 (s) U2 (s)
one can read
Y1 (s) = P11 (s) · U1 (s) + P12 (s) · U2 (s)
(4.32)
Y2 (s) = P21 (s) · U1 (s) + P22 (s) · U2 (s).
Since we want to relate u1 and y1 let’s express u2 as something we know. Using the controller
C22 (s) we see
U2 (s) = −C22 (s) · Y2 (s)
= −C22 (s) · P21 (s) · U1 (s) − C22 (s) · P22 (s) · U2 (s)
(4.33)
−C22 (s) · P21 (s) · U1 (s)
⇒ U2 (s) = .
1 + P22 (s) · C22 (s)
With the general equation one can then write
Y1 (s) = P11 (s) · U1 (s) + P12 (s) · U2 (s)
−C22 (s) · P21 (s) · U1 (s)
= P11 (s) · U1 (s) + P12 (s) ·
1 + P22 (s) · C22 (s) (4.34)
P11 (s) · (1 + P22 (s) · C22 (s)) − P12 (s) · C22 (s) · P21 (s)
= · U1 (s).
1 + P22 (s) · C22 (s)


We have found the general transfer function that relates u1 to y1. We now consider two extreme
cases:

• We assume open-loop conditions, i.e. all other loops open: C22 ≈ 0. One gets

  Y1(s) = P11(s) · U1(s). (4.35)

• We assume high controller gains, i.e. all other loops closed: P22(s) · C22(s) >> 1. One gets

  lim_{C22(s)→∞} [ P11(s) · (1 + P22(s) · C22(s)) − P12(s) · C22(s) · P21(s) ] / (1 + P22(s) · C22(s))
  = [ P11(s) · P22(s) − P12(s) · P21(s) ] / P22(s). (4.36)

As stated before, the first element of the RGA is the ratio of these two gains. It holds

[RGA]11 = P11(s) / ( [P11(s) · P22(s) − P12(s) · P21(s)] / P22(s) )
        = P11(s) · P22(s) / ( P11(s) · P22(s) − P12(s) · P21(s) ). (4.37)

Remark. As you can see, the definition of the elements of the RGA matrix does not depend on
the chosen controller C22(s). This makes the method extremely powerful.
By repeating the procedure one can find [RGA]22: in order to do that, one has to close the loop
from y1 to u1. The result will be exactly the same:

[RGA]11 = [RGA]22. (4.38)

Let's go a step further. In order to compute the element [RGA]21, one has to close the loop
from y1 to u2 and find the transfer function from u1 to y2.
Remark. This could be a nice exercise to test your understanding!
With a similar procedure one gets

[RGA]21 = −P12(s) · P21(s) / ( P22(s) · P11(s) − P21(s) · P12(s) ), (4.39)

and as before

[RGA]21 = [RGA]12. (4.40)

How can we now use this matrix to know whether SISO control would be enough? As already
stated, [RGA]ab ≈ 1 means SISO control is enough. Moreover, if the diagonal terms differ
substantially from 1, the MIMO interactions (also called cross couplings) are too important and
SISO control is no longer recommended.
If

RGA ≈ 1 (4.41)

evaluated at the relevant frequencies of the system, i.e. at ωc ± one decade, one can ignore
the cross couplings and control the system with SISO tools, one loop at a time. If this is not
the case, one has to design a MIMO controller. A few observations are useful for the
calculations:


• Rows and columns add up to 1. This means one can write the matrix as

  [ [RGA]11, [RGA]12 ; [RGA]21, [RGA]22 ] = [ [RGA]11, 1 − [RGA]11 ; 1 − [RGA]11, [RGA]11 ]. (4.42)

  This allows one to calculate just one element of the matrix.

• If one looks at RGA(s = 0) and the diagonal entries of the matrix are positive, SISO
  control is possible.

• The RGA of a triangular matrix P(s) is the identity matrix.

• The RGA is invariant to scaling, i.e. for every pair of diagonal matrices Di it holds

  [RGA](P(s)) = [RGA](D1 · P(s) · D2). (4.43)


Figure 33: Derivation of the RGA-Matrix for the 2 × 2 case.
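Numerically, the RGA at a frequency ω is just the element-wise product of the frequency response matrix with the transpose of its inverse, see (4.27). A minimal sketch, assuming a square LTI model P is already defined:

% Relative-gain array of a square MIMO plant P at a given frequency
w    = 1;                          % frequency of interest [rad/s]
Pjw  = freqresp(P, w);             % complex frequency response matrix
Pjw  = Pjw(:, :, 1);
RGA  = Pjw .* transpose(inv(Pjw)); % element-wise product with P^(-T), no conjugation
abs(RGA)                           % close to identity -> one-loop-at-a-time control is fine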


Example 20. For a MIMO system with two inputs and two outputs, just the first element of
the RGA matrix is given. It is a function of a system parameter p and reads

[RGA(s)]11 = 1/(p·s² + 2p·s + 1).

(a) Find the other elements of the RGA matrix.

(b) For which values of p is the system, for all frequencies ω ∈ [0, ∞), controllable with two
    independent SISO control loops (one loop at a time)?

Now, you are given the following transfer function of another MIMO system:

[ 1/s,  (s+2)/(s+1) ;  1,  −1/(s+1) ].

(c) Find the RGA matrix of this MIMO system.

(d) Use the computed matrix to decide whether, for frequencies in the range ω ∈ [3, 10] rad/s,
    the system can be controlled with two separate SISO controllers.


Solution.
(a) Using the theory we have learned, it holds

    [RGA(s)]11 = [RGA(s)]22 = 1/(p·s² + 2p·s + 1)

    and

    [RGA(s)]12 = [RGA(s)]21 = 1 − [RGA(s)]11
               = 1 − 1/(p·s² + 2p·s + 1)
               = p·s · (s + 2)/(p·s² + 2p·s + 1).

(b) In order to use two independent SISO control loops, the diagonal elements of the RGA
    matrix should be ≈ 1 and the off-diagonal elements should be ≈ 0. It's easy to see that
    this is the case for p = 0. In fact, if one sets p = 0 one gets

    RGA(s) = 1.

    Hence, independently of the frequency, i.e. for all ω ∈ [0, ∞), the control problem can
    be solved with two independent SISO controllers.

(c) Using the learned theory, it holds

    [RGA(s)]11 = [RGA(s)]22
               = P11(s) · P22(s) / (P11(s) · P22(s) − P12(s) · P21(s))
               = [−1/(s·(s+1))] / [−1/(s·(s+1)) − (s+2)/(s+1)]
               = 1/(1 + s · (s + 2))
               = 1/(s² + 2s + 1)
               = 1/(s + 1)²

    and

    [RGA(s)]12 = [RGA(s)]21 = 1 − [RGA(s)]11
               = 1 − 1/(s + 1)²
               = s · (s + 2)/(s + 1)².

(d) In order to evaluate the RGA matrix in this range, we have to express it with its frequency
    dependence, i.e. s = jω. For the magnitudes it holds

    |[RGA(jω)]11| = |[RGA(jω)]22| = 1/|jω + 1|² = 1/(1 + ω²)


    and

    |[RGA(jω)]12| = |[RGA(jω)]21| = |jω| · |jω + 2| / |jω + 1|² = ω · √(4 + ω²)/(1 + ω²).

    We can now insert the two limit values of the given range and get

    |[RGA(j·3)]11| = |[RGA(j·3)]22| = 1/10 = 0.10,
    |[RGA(j·3)]12| = |[RGA(j·3)]21| = 3 · √13/10 ≈ 1.08,

    and

    |[RGA(j·10)]11| = |[RGA(j·10)]22| = 1/101 ≈ 0.01,
    |[RGA(j·10)]12| = |[RGA(j·10)]21| = 10 · √104/101 ≈ 1.01.

    In both cases the diagonal elements are close to 0 and the off-diagonal elements are close
    to 1. The cross couplings therefore dominate, but one-loop-at-a-time SISO control is still
    permitted if the loops are paired correctly: since the off-diagonal elements are close to 1,
    we need to use u1 to control y2 and u2 to control y1.


Example 21. Figure 34 shows a 2 × 2 MIMO system. Unfortunately, we don't know anything
about the transfer functions Pij(s), except that

P12(s) = 0.

Your boss wants you to use a one-loop-at-a-time approach as shown in the picture.

(a) Why is your boss' suggestion correct?

(b) Only one reference ri affects both outputs yi. Which one?

(c) Compute the transfer function ri → yj for i ≠ j.

Figure 34: Structure of MIMO system.


Solution.

(a) To check whether the suggestion is correct, let's have a look at the RGA matrix. It holds

    [RGA]11 = [RGA]22 = P11(s) · P22(s) / (P11(s) · P22(s) − P12(s) · P21(s)) = 1,
    [RGA]12 = [RGA]21 = 1 − [RGA]11 = 0,

    since P12(s) = 0. This means that the RGA matrix is the identity matrix: the system is
    perfectly diagonally dominant and can be controlled with the one-loop-at-a-time approach.

(b) Let's analyze the signals in Figure 34. Since P12(s) = 0, the output y1 is not affected by
    u2. Moreover, this means that the reference signal r2, which only influences u2, cannot
    affect the output y1. The only reference that acts on both y1 and y2 is r1: directly through
    C1(s) on y1, and through the cross coupling P21(s) on y2.

(c) As usual, we set to 0 the reference we don't analyze: here r2 = 0. Starting from the
    general equation in the frequency domain

    [Y1(s); Y2(s)] = [P11(s), P12(s); P21(s), P22(s)] · [U1(s); U2(s)]
                   = [P11(s), 0; P21(s), P22(s)] · [U1(s); U2(s)],

    one can read

    Y1(s) = P11(s) · U1(s),
    Y2(s) = P21(s) · U1(s) + P22(s) · U2(s).

    Since we want to relate r1 and y2, let's express u1 through quantities we know.
    Using Figure 34 one gets

    R1(s) · C1(s) = U1(s) + P11(s) · C1(s) · U1(s)
    ⇒ U1(s) = R1(s) · C1(s) / (1 + P11(s) · C1(s)).

    Inserting this into the second equation, one gets

    Y2(s) = P21(s) · R1(s) · C1(s) / (1 + P11(s) · C1(s)) + P22(s) · U2(s).

    One still has to find an expression for U2(s). To do that, we look at the second loop in
    Figure 34 and see (with R2(s) = 0)

    R2(s) · C2(s) − Y2(s) · C2(s) = U2(s)
    ⇒ U2(s) = −Y2(s) · C2(s).

    Inserting this above, one gets

    Y2(s) = P21(s) · R1(s) · C1(s) / (1 + P11(s) · C1(s)) − P22(s) · C2(s) · Y2(s)
    Y2(s) · (1 + P22(s) · C2(s)) = P21(s) · R1(s) · C1(s) / (1 + P11(s) · C1(s))
    Y2(s) = P21(s) · C1(s) / [ (1 + P11(s) · C1(s)) · (1 + P22(s) · C2(s)) ] · R1(s) =: F(s) · R1(s),

    where F(s) is the transfer function we were looking for.


Example 22. Figure 35 shows the structure of a MIMO system, composed of three subsystems
P1(s), P2(s) and P3(s). It has inputs u1 and u2 and outputs y1 and y2. The three subsystems

Figure 35: Structure of MIMO system.

are given as

P1(s) = [ (s−5)/(s+3) ; 1/(s+4) ],   P2(s) = [ 1/(s+3),  (s+4)/(s−5) ],   P3(s) = [ (s+2)/(s+5),  1/(s+1) ].

Compute the transfer function of the whole system.


Solution. One should think in terms of matrix dimensions here. Let's rename the subsystems'
entries more generally:

P1(s) = [ P1^11 ; P1^21 ],   P2(s) = [ P2^11, P2^12 ],   P3(s) = [ P3^11, P3^12 ].

Together with the structure of the system one gets

Y1 = P2^11 · U1 + P2^12 · P1^11 · U1,
Y2 = P3^11 · P1^21 · U1 + P3^12 · U2.

This can be written as the general transfer function matrix:

P(s) = [ P2^11 + P2^12 · P1^11,  0 ;  P3^11 · P1^21,  P3^12 ]
     = [ 1/(s+3) + (s+4)/(s−5) · (s−5)/(s+3),  0 ;  (s+2)/(s+5) · 1/(s+4),  1/(s+1) ]
     = [ (s+5)/(s+3),  0 ;  (s+2)/((s+5)·(s+4)),  1/(s+1) ].


Example 23. Figure 36 shows the structure of a MIMO system, composed of two subsystems
P1(s), P2(s). It has inputs u1 and u2 and outputs y1 and y2. The subsystem P1(s) is given by
its state-space description

Figure 36: Structure of MIMO system.

A1 = [−3 0; 2 1],  B1 = [1 0; 2 1],  C1 = [0 2; 3 1],  D1 = [0 1; 0 0],

and the subsystem P2(s) is given as

P2(s) = [ 1/(s−2),  (s−1)/((s+4)·(s−2)) ].

Compute the transfer function of the whole system.


Solution. First of all, we compute the transfer function of the first subsystem P1(s). It holds

P1(s) = C1 · (s · 1 − A1)^(−1) · B1 + D1
      = [0 2; 3 1] · [s+3, 0; −2, s−1]^(−1) · [1 0; 2 1] + [0 1; 0 0]
      = [0 2; 3 1] · 1/((s+3)·(s−1)) · [s−1, 0; 2, s+3] · [1 0; 2 1] + [0 1; 0 0]
      = 1/((s+3)·(s−1)) · [0 2; 3 1] · [s−1, 0; 2s+8, s+3] + [0 1; 0 0]
      = 1/((s+3)·(s−1)) · [4s+16, 2s+6; 5s+5, s+3] + [0 1; 0 0]
      = [ (4s+16)/((s+3)·(s−1)),  (s+1)/(s−1) ;  (5s+5)/((s+3)·(s−1)),  1/(s−1) ].

One should think in terms of matrix dimensions here. Let's rename the subsystems' entries more
generally:

P1(s) = [ P1^11, P1^12 ; P1^21, P1^22 ],   P2(s) = [ P2^11, P2^12 ].

Together with the structure of the system one gets

Y1 = P2^11 · U1 + P1^11 · P2^12 · U1 + P1^12 · P2^12 · U2,
Y2 = P1^21 · U1 + P1^22 · U2.

This can be written as the general transfer function matrix:

P(s) = [ P2^11 + P1^11 · P2^12,  P1^12 · P2^12 ;  P1^21,  P1^22 ]
     = [ (s+7)/((s+3)·(s−2)),  (s+1)/((s+4)·(s−2)) ;  (5s+5)/((s+3)·(s−1)),  1/(s−1) ].


Example 24. The system in Figure 37 can be well controlled with two separate SISO con-
trollers.

Figure 37: Structure of MIMO system.

 True.

 False.


Solution.
✓ True.

 False.

Explanation:
One can observe that the input u2 affects only the output y2 . This means that the transfer
function matrix has a triangular form and hence, that the RGA matrix is identical to the
identity matrix: this means that we can reach good control with two separate SISO controllers.


5 Frequency Response of MIMO Systems


5.1 Singular Value Decomposition (SVD)
The Singular Value Decomposition plays a central role in MIMO frequency response analysis.
Let’s recall some concepts from the course Lineare Algebra I/II :

5.1.1 Preliminary Definitions


The induced norm ||A|| of a matrix that describes a linear function like

y = A · u (5.1)

is defined as

||A|| = max_{u≠0} ||y||/||u|| = max_{||u||=1} ||y||. (5.2)

Remark. In the course Lineare Algebra I/II you learned that there are many different norms,
so the expression ||A|| may look too generic. However, in this course we will always use the
Euclidean (spectral) norm ||A|| = ||A||2. This norm is defined as

||A||2 = √µmax = √(maximal eigenvalue of A∗ · A), (5.3)

where A∗ is the conjugate transpose or Hermitian transpose of the matrix A. This is


defined as
A∗ = (conj(A))T . (5.4)

Example 25. Let

A = [1, −2 − i; 1 + i, i].

Then it holds

A∗ = (conj(A))^T = [1, −2 + i; 1 − i, −i]^T = [1, 1 − i; −2 + i, −i].

Remark. If we deal with a real matrix A ∈ Rn×m, it of course holds

A∗ = A^T. (5.5)

One can list a few useful properties of the Euclidean norm:

(i) Remember: A^T · A is always a square matrix!

(ii) If A is orthogonal:
||A||2 = 1. (5.6)


(iii) If A is symmetric:
      ||A||2 = max(|ei|). (5.7)

(iv) If A is invertible:
      ||A^(−1)||2 = 1/√µmin = 1/√(minimal eigenvalue of A∗ · A). (5.8)

(v) If A is invertible and symmetric:
      ||A^(−1)||2 = 1/min(|ei|), (5.9)

where ei are the eigenvalues of A.


In order to define the SVD we have to go a step further. Let’s consider a Matrix A and the
linear function 5.1. It holds
||A||2 = max y ∗ · y
||u||=1

= max (A · u)∗ · (A · u)
||u||=1

= max u∗ · A∗ · A · u (5.10)
||u||=1

= max µ(A∗ · A)
i
= max σi2 .
i

where σi are the singular values of matrix A. They are defined as



σi = µi (5.11)

where µi are the eigenvalues of A∗ · A.


Combining 5.2 and 5.11 one gets

||y||
σmin (A) ≤ ≤ σmax (A). (5.12)
||u||

5.1.2 Singular Value Decomposition


Our goal is to write a general matrix A ∈ Cp×m as product of three matrices: U , Σ and V . It
holds

A = U · Σ · V ∗ with U ∈ Cp×p , Σ ∈ Rp×m , V ∈ Cm×m . (5.13)


Remark. U and V are unitary (orthogonal in the real case); Σ is a rectangular diagonal matrix.



Figure 38: Illustration of the singular values.

Kochrezept:
Let A ∈ Cp×m be given:

(I) Compute all the eigenvalues and eigenvectors of the matrix

    A∗ · A ∈ Cm×m

    and sort them as

    µ1 ≥ µ2 ≥ . . . ≥ µr > µr+1 = . . . = µm = 0. (5.14)

(II) Compute an orthogonal basis from the eigenvectors vi and write it in a matrix as

    V = (v1 . . . vm) ∈ Cm×m. (5.15)

(III) We have already found the singular values: they are defined as

    σi = √µi for i = 1, . . . , min{p, m}. (5.16)

    We can then write Σ as

    Σ = [ diag(σ1, . . . , σp), 0 ] ∈ Rp×m for p < m, (5.17)

    i.e. the diagonal matrix of the singular values padded with zero columns, and

    Σ = [ diag(σ1, . . . , σm) ; 0 ] ∈ Rp×m for p > m, (5.18)

    i.e. padded with zero rows.

(IV) One finds u1, . . . , ur from

    ui = (1/σi) · A · vi for all i = 1, . . . , r (for σi ≠ 0). (5.19)


(V) If r < p one has to complete the basis u1, . . . , ur (e.g. with Gram-Schmidt) to obtain an
    orthonormal basis, so that U is orthogonal.

(VI) If you followed the previous steps, you can write

    A = U · Σ · V ∗.

Motivation for the computation of Σ, U and V:

A∗ · A = (U · Σ · V ∗)∗ · (U · Σ · V ∗)
       = V · Σ∗ · U ∗ · U · Σ · V ∗
       = V · Σ∗ · Σ · V ∗                                      (5.20)
       = V · Σ² · V ∗.

This is nothing else than the diagonalization of the matrix A∗ · A. The columns of V are the
eigenvectors of A∗ · A and the σi² are the eigenvalues.
For U:

A · A∗ = (U · Σ · V ∗) · (U · Σ · V ∗)∗
       = U · Σ · V ∗ · V · Σ∗ · U ∗
       = U · Σ · Σ∗ · U ∗                                      (5.21)
       = U · Σ² · U ∗.

This is nothing else than the diagonalization of the matrix A · A∗. The columns of U are the
eigenvectors of A · A∗ and the σi² are the eigenvalues.

Remark. In order to derive the previous two equations I used that:

• The matrix A∗ · A is symmetric, i.e.

(A∗ · A)∗ = A∗ · (A∗ )∗


= A∗ · A.

• U^(−1) = U ∗ (because U is orthogonal/unitary).

• V^(−1) = V ∗ (because V is orthogonal/unitary).

Remark. Since the matrix A∗ · A is always symmetric and positive semidefinite, the singular
values are always real numbers.
Remark. The Matlab command for the singular value decomposition is

[U,S,V] = svd(A)

One can write A^T as A.' (transpose(A)) and A∗ as A' (conj(transpose(A))). The two are
equivalent for real matrices.
Remark. Although I’ve reported detailed informations about the calculation of U and V , this
won’t be relevant for the exam. It is however good and useful to know the reasons that
are behind this topic.
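As a numerical cross-check of the recipe, a short sketch using the matrix of Example 27 below:

% Singular value decomposition of the matrix from Example 27
A = [-3 0; 0 3; sqrt(3) 2];
[U, S, V] = svd(A);
diag(S)            % expected singular values: 4 and 3
norm(U*S*V' - A)   % reconstruction error, should be ~0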


Example 26. Let

u = [cos(x); sin(x)]

with ||u|| = 1. The matrix M is given as

M = [2, 0; 0, 1/2].

We know that the product of M and u defines a linear function

y = M · u = [2, 0; 0, 1/2] · [cos(x); sin(x)] = [2 · cos(x); 1/2 · sin(x)].

We look for the maximum of ||y||. In order to avoid square roots, one can use the fact that the
x that maximizes ||y|| also maximizes ||y||²:

||y||² = 4 · cos²(x) + 1/4 · sin²(x)

has its extrema where

d||y||²/dx = −8 · cos(x) · sin(x) + 1/2 · sin(x) · cos(x) = 0
⇒ x = 0, π/2, π, 3π/2.

Inserting back, one gets the extremal values of ||y||:

||y||max = 2,   ||y||min = 1/2.

The singular values can be calculated from M∗ · M:

M∗ · M = M^T · M = [4, 0; 0, 1/4]  ⇒  µi = 4, 1/4  ⇒  σi = 2, 1/2.

As stated before, one can see that ||y|| ∈ [σmin, σmax]. The matrix U has the eigenvectors of
M · M^T as columns and the matrix V has the eigenvectors of M^T · M as columns.
In this case

M · M^T = M^T · M,

hence the two matrices are equal. Since their product is a diagonal matrix, one should recall
from the theory that the eigenvectors are easy to determine: they are nothing else than the
standard basis vectors. This means

U = [1, 0; 0, 1],   Σ = [2, 0; 0, 1/2],   V = [1, 0; 0, 1].

Interpretation:
Referring to Figure 39, let's interpret these calculations. The maximal amplification occurs for
an input in the direction v = V(:,1) and produces an output in the direction u = U(:,1), i.e. the
vector is stretched by a factor of two (σmax). The minimal amplification occurs for an input in
the direction v = V(:,2) and produces an output in the direction u = U(:,2), i.e. the vector is
halved (σmin).


Figure 39: Illustration of the singular value decomposition.

Example 27. Let

A = [−3, 0; 0, 3; √3, 2]

be given.
Question: Find the singular values of A and write down the matrix Σ.

Solution. Let's compute A^T · A:

A^T · A = [−3, 0, √3; 0, 3, 2] · [−3, 0; 0, 3; √3, 2] = [12, 2√3; 2√3, 13].

One can easily see that the eigenvalues are

λ1 = 16, λ2 = 9.

The singular values are

σ1 = 4, σ2 = 3.

In this case one writes

Σ = [4, 0; 0, 3; 0, 0].


Example 28. A transfer function G(s) is given as

G(s) = [ 1/(s+3),  (s+1)/(s+3) ;  (s+1)/(s+3),  1/(s+3) ].

Find the singular values of G(s) at ω = 1 rad/s.


Solution. The transfer function G(s) evaluated at ω = 1 rad/s has the form

G(j) = [ 1/(j+3),  (j+1)/(j+3) ;  (j+1)/(j+3),  1/(j+3) ].

In order to calculate the singular values, we have to compute the eigenvalues of H = G∗ · G:

H = G∗ · G
  = [ 1/(−j+3),  (−j+1)/(−j+3) ;  (−j+1)/(−j+3),  1/(−j+3) ] · [ 1/(j+3),  (j+1)/(j+3) ;  (j+1)/(j+3),  1/(j+3) ]
  = [ 3/10,  2/10 ;  2/10,  3/10 ]
  = (1/10) · [3, 2; 2, 3].

For the eigenvalues it holds

det(H − λ · 1) = det([3/10 − λ,  2/10 ;  2/10,  3/10 − λ])
               = (3/10 − λ)² − (2/10)²
               = λ² − (6/10) · λ + 5/100
               = (λ − 1/10) · (λ − 5/10).

It follows

λ1 = 1/10,
λ2 = 1/2,

and so

σ1 = √(1/10) ≈ 0.3162,
σ2 = √(1/2) ≈ 0.7071.


Example 29. Let

A = [1, 2; 0, 1],
B = [j, 1]

be given. Find the singular values of the two matrices.


Solution.
• Let's begin with matrix A. It holds

  H = A∗ · A = [1, 0; 2, 1] · [1, 2; 0, 1] = [1, 2; 2, 5].

  In order to find the eigenvalues of H we compute

  det(H − λ · 1) = det([1 − λ, 2; 2, 5 − λ]) = (1 − λ) · (5 − λ) − 4 = λ² − 6λ + 1.

  This means that the eigenvalues are

  λ1 = 3 + 2√2,
  λ2 = 3 − 2√2.

  The singular values are then

  σ1 ≈ 2.4142,
  σ2 ≈ 0.4142.

• Let's look at matrix B. It holds

  F = B∗ · B = [−j; 1] · [j, 1] = [1, −j; j, 1].

  In order to find the eigenvalues of F we compute

  det(F − λ · 1) = det([1 − λ, −j; j, 1 − λ]) = (1 − λ)² − 1 = λ² − 2λ = λ · (λ − 2).

  This means that the eigenvalues are

  λ1 = 0,
  λ2 = 2.

  The singular values are then

  σ1 = 0,
  σ2 = √2.


5.2 Frequency Responses


As we learned for SISO systems, if one excites a system with a harmonic signal

u(t) = h(t) · cos(ω · t), (5.22)

the response after a long time is still a harmonic function with the same frequency ω:

y∞(t) = |P(j · ω)| · cos(ω · t + ∠(P(j · ω))). (5.23)

One can generalize this and apply it to MIMO systems. With the assumption p = m, i.e. an
equal number of inputs and outputs, one excites the system with

u(t) = [ µ1 · cos(ω · t + φ1) ; . . . ; µm · cos(ω · t + φm) ] · h(t) (5.24)

and gets

y∞(t) = [ ν1 · cos(ω · t + ψ1) ; . . . ; νm · cos(ω · t + ψm) ]. (5.25)

Let's define two diagonal matrices

Φ = diag(φ1, . . . , φm) ∈ Rm×m,   Ψ = diag(ψ1, . . . , ψm) ∈ Rm×m (5.26)

and two vectors

µ = (µ1 . . . µm)^T,   ν = (ν1 . . . νm)^T. (5.27)

With these one can compute the Laplace transforms of the two signals as

U(s) = e^(Φ·s/ω) · µ · s/(s² + ω²) (5.28)

and

Y(s) = e^(Ψ·s/ω) · ν · s/(s² + ω²). (5.29)

With the general system equation one gets

Y(s) = P(s) · U(s)
e^(Ψ·s/ω) · ν · s/(s² + ω²) = P(s) · e^(Φ·s/ω) · µ · s/(s² + ω²)          (5.30)
e^(Ψ·j) · ν = P(j · ω) · e^(Φ·j) · µ        (evaluated at s = j · ω).

We then recall the induced norm of the matrix of a linear transformation y = A · u from (5.2).
Here it holds

||P(j · ω)|| = max_{e^(Φ·j)·µ ≠ 0} ||e^(Ψ·j) · ν|| / ||e^(Φ·j) · µ||
             = max_{||e^(Φ·j)·µ|| = 1} ||e^(Ψ·j) · ν||. (5.31)


Since

||e^(Φ·j) · µ|| = ||µ|| (5.32)

and

||e^(Ψ·j) · ν|| = ||ν||, (5.33)

one gets

||P(j · ω)|| = max_{µ≠0} ||ν||/||µ|| = max_{||µ||=1} ||ν||. (5.34)

Here one should get a feeling for why we introduced the singular value decomposition. From the
theory we've learned, it is clear that (for ||µ|| = 1)

σmin(P(j · ω)) ≤ ||ν|| ≤ σmax(P(j · ω)), (5.35)

and if ||µ|| ≠ 1

σmin(P(j · ω)) ≤ ||ν||/||µ|| ≤ σmax(P(j · ω)), (5.36)

with σi the singular values of P(j · ω). These are worst-case bounds, and it is important to notice
that there is no exact formula for ν = f(µ): the actual gain depends on the input direction.
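The frequency-dependent bounds σmin(P(jω)) and σmax(P(jω)) can be plotted directly. A minimal sketch, assuming a MIMO LTI model P is already defined:

% Singular values of P(jw) over frequency: the MIMO counterpart of the Bode magnitude plot
sigma(P); grid on;                     % plots all singular values vs. frequency

% Bounds at one specific frequency
w0  = 5;                               % frequency of interest [rad/s]
sv  = svd(freqresp(P, w0));            % singular values of the complex matrix P(j*w0)
fprintf('sigma_max = %.4f, sigma_min = %.4f\n', max(sv), min(sv));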

5.2.1 Maximal and minimal Gain


You are given a singular value decomposition

P(j · ω) = U · Σ · V ∗. (5.37)

One can read several pieces of information out of this decomposition: the maximal/minimal gain
is reached with an excitation in the direction of the corresponding column vector of V. The
response of the system then points in the direction of the corresponding column vector of U.
Let's look at an example and try to understand how to use this information:
Example 30. We consider a system with m = 2 inputs and p = 3 outputs. We are given its
singular value decomposition at ω = 5 rad/s:

Σ = [0.4167, 0; 0, 0.2631; 0, 0],

V = [0.2908, 0.9568; 0.9443 − 0.1542·j, −0.2870 + 0.0469·j],

U = [−0.0496 − 0.1680·j, 0.1767 − 0.6831·j, −0.6621 − 0.1820·j;
     0.0146 − 0.9159·j, −0.1059 + 0.3510·j, −0.1624 + 0.0122·j;
     0.0349 − 0.3593·j, 0.1360 − 0.5910·j, 0.6782 + 0.2048·j].

For the singular value σmax = 0.4167 the corresponding columns are V(:,1) and U(:,1):

V1 = [0.2908; 0.9443 − 0.1542·j],   |V1| = [0.2908; 0.9568],   ∠(V1) = [0; −0.1618],

U1 = [−0.0496 − 0.1680·j; 0.0146 − 0.9159·j; 0.0349 − 0.3593·j],
|U1| = [0.1752; 0.9160; 0.3609],   ∠(U1) = [−1.8581; −1.5548; −1.4741].


The maximal gain is then reached with

u(t) = [ 0.2908 · cos(5 · t) ; 0.9568 · cos(5 · t − 0.1618) ].

The response of the system is then

y(t) = σmax · [ 0.1752 · cos(5 · t − 1.8581) ; 0.9160 · cos(5 · t − 1.5548) ; 0.3609 · cos(5 · t − 1.4741) ]
     = 0.4167 · [ 0.1752 · cos(5 · t − 1.8581) ; 0.9160 · cos(5 · t − 1.5548) ; 0.3609 · cos(5 · t − 1.4741) ].

Since the three signals y1(t), y2(t) and y3(t) are not in phase, the maximal gain is never attained
at a single time instant. One can show that

max_t ||y(t)|| ≈ 0.4160 < 0.4167 = σmax.

The reason for this difference lies in the phase deviation between y1(t), y2(t) and y3(t). The
same analysis can be carried out for σmin.

5.2.2 Robustness and Disturbance Rejection


Let’s redefine the matrix norm k · k∞ as

kG(s)k∞ = max(max(σi (G(i · ω))). (5.38)


ω i

Remark. It holds
kG1 (s) · G2 (s)k∞ ≤ kG1 (s)k∞ · kG2 (s)k∞ . (5.39)
As we did for SISO systems, we can resume some important indicators for robustness and
noise amplification:

SISO MIMO
Robustness µ = min (|1 + L(j · ω)|) µ = min (σmin (I + L(j · ω)))
ω ω

Noise Amplification ||S||∞ = max (|S(j · ω)|) kSk∞ = max (σmax (S(j · ω)))
ω ω
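Both indicators can be evaluated numerically over a frequency grid. A minimal sketch, assuming square LTI models P and C are already defined:

% Robustness margin and noise amplification for a MIMO loop L = P*C
w  = logspace(-2, 3, 1000);
L  = P * C;
p  = size(L, 1);
S  = feedback(eye(p), L);              % sensitivity S = (I + L)^-1
mu = inf;                              % min over frequency of sigma_min(I + L(jw))
for k = 1:numel(w)
    Lk = freqresp(L, w(k));
    mu = min(mu, min(svd(eye(p) + Lk(:, :, 1))));
end
Sinf = norm(S, inf);                   % ||S||_inf = max over frequency of sigma_max(S(jw))
fprintf('robustness margin mu = %.3f, ||S||_inf = %.3f\n', mu, Sinf);
% Note: mu = 1/||S||_inf, since sigma_min(I+L) = 1/sigma_max((I+L)^-1).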


Example 31. Given the MIMO system

P(s) = [ 1/(s+3),  1/(s+1) ;  1/(s+1),  3/(s+1) ].

Starting at t = 0, the system is excited with the following input signal:

u(t) = [ cos(t) ; µ2 · cos(t + ϕ2) ].

Find the parameters ϕ2 and µ2 such that, under steady-state conditions, the output signal

y(t) = [ y1(t) ; y2(t) ]

has y1(t) equal to zero.

Solution. For a system excited using a harmonic input signal


 
u(t) = [µ1 · cos(ωt + ϕ1);  µ2 · cos(ωt + ϕ2)],

the output signal y(t), after a transient phase, will also be a harmonic signal and hence have the form

y(t) = [ν1 · cos(ωt + ψ1);  ν2 · cos(ωt + ψ2)].

As we have learned, it holds

e^(Ψ·j) · ν = P(jω) · e^(Φ·j) · µ.

One gets

[e^(ψ1·j)  0;  0  e^(ψ2·j)] · [ν1;  ν2] = [P11(jω)  P12(jω);  P21(jω)  P22(jω)] · [e^(ϕ1·j)  0;  0  e^(ϕ2·j)] · [µ1;  µ2].

For the first component one gets

e^(ψ1·j) · ν1 = P11(jω) · e^(ϕ1·j) · µ1 + P12(jω) · e^(ϕ2·j) · µ2.
For y1(t) = 0 to hold we must have ν1 = 0. In the given case, some parameters can be read off directly from the given signals:
µ1 = 1
ϕ1 = 0
ω = 1.
With the given transfer functions, one gets

0 = 1/(j+3) + µ2 · 1/(j+1) · e^(ϕ2·j)
0 = (3−j)/10 + µ2 · (1−j)/2 · e^(ϕ2·j)
0 = (3−j)/10 + µ2 · (1−j)/2 · (cos(ϕ2) + j·sin(ϕ2))
0 = 3/10 + µ2 · (1/2) · (cos(ϕ2) + sin(ϕ2)) + j · (µ2 · (1/2) · (sin(ϕ2) − cos(ϕ2)) − 1/10).

Separating the real and the imaginary part, one gets two equations that are easily solvable:

µ2 · (1/2) · (cos(ϕ2) + sin(ϕ2)) + 3/10 = 0
µ2 · (1/2) · (sin(ϕ2) − cos(ϕ2)) − 1/10 = 0.

Adding and subtracting the two equations one reaches two simpler equations:

µ2 · sin(ϕ2) + 1/5 = 0
µ2 · cos(ϕ2) + 2/5 = 0.

One of the solutions (they repeat periodically) reads

µ2 = 1/√5
ϕ2 = arctan(1/2) + π.
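A short numerical cross-check of this result (a sketch only, not part of the original exercise):

s   = 1j*1;                       % omega = 1 rad/s
P11 = 1/(s+3);  P12 = 1/(s+1);    % relevant entries of P(j*1)

% y1 = 0 requires P11 + mu2*exp(j*phi2)*P12 = 0, i.e. mu2*exp(j*phi2) = -P11/P12
c    = -P11/P12;
mu2  = abs(c);                    % = 1/sqrt(5), approx. 0.447
phi2 = angle(c);                  % approx. -2.678 rad, i.e. arctan(1/2) + pi up to a multiple of 2*pi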


Example 32. A 2 × 2 linear time invariant MIMO system with transfer function
P(s) = [1/(s+1)          2/(s+1);
        (s²+1)/(s+10)    1/(s²+2)]

is excited with the signal

u(t) = [µ1 · cos(ω·t + ϕ1);  µ2 · cos(ω·t + ϕ2)].

Because we bought a cheap signal generator, we cannot know the constants µ1,2 and ϕ1,2 exactly. A friend of yours found out through some measurements that the excitation frequency is ω = 1 rad/s. The cheap generator cannot produce signals with a magnitude bigger than 10, i.e. √(µ1² + µ2²) ≤ 10, and it always works at maximal power, i.e. at 10. Choose all possible responses of the system after infinite time.
□ y∞(t) = [5 · sin(t + 0.114);  cos(t)].

□ y∞(t) = [5 · sin(t + 0.114);  cos(2·t)].

□ y∞(t) = [sin(t + 0.542);  sin(t + 0.459)].

□ y∞(t) = [19 · cos(t + 0.114);  cos(t + 1.124)].

□ y∞(t) = [5 · cos(t + 0.114);  5 · cos(t)].

□ y∞(t) = [10 · sin(t + 2.114);  11 · sin(t + 1.234)].


Solution.

☑ y∞(t) = [5 · sin(t + 0.114);  cos(t)].

□ y∞(t) = [5 · sin(t + 0.114);  cos(2·t)].

□ y∞(t) = [sin(t + 0.542);  sin(t + 0.459)].

□ y∞(t) = [19 · cos(t + 0.114);  cos(t + 1.124)].

☑ y∞(t) = [5 · cos(t + 0.114);  5 · cos(t)].

☑ y∞(t) = [10 · sin(t + 2.114);  11 · sin(t + 1.234)].

Explanation
We have to compute the singular values of the matrix P (j · 1). These are

σmax = 1.8305
σmin = 0.3863.

With what we have learned it follows

10 · σmin = 3.863 ≤ ||ν|| ≤ 18.305 = 10 · σmax .


The first response has ||ν|| = √26, which is in this range. The second response also has ||ν|| = √26, but the frequency of its second element changes, which is not possible for linear systems. The third response has ||ν|| = √2, which is too small to be in the range. The fourth response has ||ν|| = √362, which is too big to be in the range. The fifth response has ||ν|| = √50, which is in the range. The sixth response has ||ν|| = √221, which is in the range.
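The two singular values used above can be obtained, for instance, with the following Matlab sketch:

s   = 1j*1;                                    % omega = 1 rad/s
Pjw = [ 1/(s+1)         2/(s+1);
        (s^2+1)/(s+10)  1/(s^2+2) ];
sv  = svd(Pjw);                                % sv(1) approx. 1.83, sv(end) approx. 0.39
range = 10*[sv(end), sv(1)];                   % admissible interval for ||nu||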


Example 33. A 3 × 2 linear time invariant MIMO system is excited with the input

u(t) = [3 · sin(30·t);  4 · cos(30·t)].

You have forgotten your PC and you don't know the transfer function of the system. Before coming to school, however, you saved the Matlab plot of the singular values of the system on your phone (see Figure 40). Choose all the possible responses of the system.

Figure 40: Singular values behaviour.

 
□ y∞(t) = [0.5 · sin(30·t + 0.314);  0.5 · cos(30·t);  0.5 · cos(30·t + 1)].

□ y∞(t) = [4 · sin(30·t + 0.314);  3 · cos(30·t);  2 · cos(30·t + 1)].

□ y∞(t) = [0.1 · sin(30·t + 0.314);  0.1 · cos(30·t);  0.1 · cos(30·t + 1)].

□ y∞(t) = [0;  4 · cos(30·t);  2 · cos(30·t + 1)].

□ y∞(t) = [2 · cos(30·t + 0.243);  2 · cos(30·t + 0.142);  2 · cos(30·t + 0.252)].


Solution.

☑ y∞(t) = [0.5 · sin(30·t + 0.314);  0.5 · cos(30·t);  0.5 · cos(30·t + 1)].

□ y∞(t) = [4 · sin(30·t + 0.314);  3 · cos(30·t);  2 · cos(30·t + 1)].

□ y∞(t) = [0.1 · sin(30·t + 0.314);  0.1 · cos(30·t);  0.1 · cos(30·t + 1)].

☑ y∞(t) = [0;  4 · cos(30·t);  2 · cos(30·t + 1)].

☑ y∞(t) = [2 · cos(30·t + 0.243);  2 · cos(30·t + 0.142);  2 · cos(30·t + 0.252)].

Explanation
From the given input one can read

||µ|| = √(3² + 4²) = 5.

From the plot one can read, at ω = 30 rad/s, σmin = 0.1 and σmax = 1. It follows

5 · σmin = 0.5 ≤ ||ν|| ≤ 5 = 5 · σmax.

The first response has ||ν|| = √0.75, which is in the range. The second response has ||ν|| = √29, which is too big to be in the range. The third response has ||ν|| = √0.03, which is too small to be in the range. The fourth response has ||ν|| = √20, which is in the range. The fifth response has ||ν|| = √12, which is in the range.


6 Synthesis of MIMO Control Systems


We have learned that MIMO systems are more complicated than SISO systems, and so is their control. In fact, it is very difficult to apply intuitive concepts like loop shaping, which work well for SISO systems, because of the cross couplings/interactions between the inputs and the outputs. This means that systematic methods should be used in order to achieve good control properties. Two big groups of methods are used nowadays:

• H2 methods: optimization problems in the time domain which minimize the energy of the error signal, averaged over all frequencies.

• H∞ methods: optimization problems in the frequency domain which minimize the worst-case amplification of the error signal over all frequencies.

6.1 The Linear Quadratic Regulator (LQR)


We first look at one H2 method: the linear quadratic regulator.

6.1.1 Definition
With the given state-space description

d/dt x(t) = A · x(t) + B · u(t),   A ∈ R^(n×n), B ∈ R^(n×m), x(t) ∈ R^n, u(t) ∈ R^m   (6.1)
we are looking for a state-feedback controller

u(t) = f (x(t), t), (6.2)

that brings x(t) asymptotically to zero (with x(0) ≠ 0). In other words, it should hold

lim_{t→∞} x(t) = 0.   (6.3)

Remark. The control input u(t) depends only on x(t) and not on additional dynamic (controller) state variables.
In order to achieve the best control, we want to minimize the criterion

J(x, u) = ∫₀^∞ (xᵀ · Q · x + uᵀ · R · u) dt,   (6.4)

where the first term weights the error and the second term the spent control energy. Here

Q ∈ R^(n×n)   (6.5)

is a symmetric positive semidefinite matrix and

R ∈ R^(m×m)   (6.6)

is a symmetric positive definite matrix. These two matrices represent the weights, i.e. the importance, of two things: how fast the controller brings x(t) to zero and how much energy it should use.

Remark. In the SISO case, we deal with scalars and things are nicer:

J(x, u) = ∫₀^∞ (q · x² + r · u²) dt.   (6.7)


Example 34. Let's try to visualize this: a criterion could look like

J(x, u) = ∫₀^∞ (x1² + 6·x1·x2 + 100·x2² + 6·u1² + 10·u2²) dt.

The matrices Q and R are

Q = [1  3;  3  100]   and   R = [6  0;  0  10].

The solution10 u(t) = f(x(t), t) (the one that minimizes (6.4)) is given as

u(t) = −K · x(t), (6.8)

with
K = R−1 · B | · Φ. (6.9)
where Φ is the solution of the Riccati equation

Φ · B · R−1 · B | · Φ − Φ · A − A| · Φ − Q = 0. (6.10)

Remark. The actual procedure to derive these equations is not trivial. The main point you should be concerned with in this course is the following: we are trying to minimize the given criterion and we are dealing with matrices. This means that in the end, we calculate matrix derivatives and minimize, obtaining matrix equations. If you are interested in the minimization procedure have a look here.
Remark. Don't forget what we are trying to do! These equations give us the optimal controller K: optimal with respect to the chosen criterion, which does not mean that it will always be the best one in practice.
The feedback loop of such a system can be seen in Figure 41.


Figure 41: LQR Problem: Closed loop system.

10 Here, it is assumed that x(t) is already available. This is normally not the case and we will therefore introduce an element (observer) that gives us an estimate of it (see next chapters).


6.1.2 Kochrezept
(I) Choose the matrices Q and R:

• The matrix Q is usually chosen as follows:

Q = C̄ T · C̄, C̄ ∈ Rp×n (6.11)

where
p = rank(Q) ≤ n. (6.12)
Equation 6.11 represents the full-rank decomposition of Q. This makes sure that the term with Q in the criterion contains enough information about the state's behaviour. C̄ has, strictly speaking, nothing to do with the state space description's matrix C. We will however discover that the choice C̄ = C is really interesting.
• The matrix R is usually chosen as follows:

R = r · 1m×m . (6.13)

(II) Find the symmetric, positive definite solution Φ of the Riccati equation

Φ · B · R−1 · B | · Φ − Φ · A − A| · Φ − Q = 0. (6.14)

Remark. Such a solution exists if

• The pair {A, B} is completely controllable and


• The pair {A, C̄} is completely observable.

These conditions are sufficient, not necessary: they do not always have to be fulfilled for a solution to exist.

(III) Compute the matrix of the controller K with

K = R−1 · B | · Φ. (6.15)
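In Matlab the three steps of the Kochrezept can be carried out directly. A minimal sketch, assuming A, B, C_bar and the scalar r are already defined:

Q = C_bar' * C_bar;              % step (I): weight on the states
R = r * eye(size(B, 2));         %           weight on the inputs

Phi = care(A, B, Q, R);          % step (II): solve the Riccati equation (6.14)
K   = R \ (B' * Phi);            % step (III): controller gain (6.15)

% equivalently, in one call: K = lqr(A, B, Q, R);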

6.1.3 Frequency Domain


By looking at Figure 41 one can write

LLQR (s) = K · (s · 1 − A)−1 · B (6.16)

and
TLQR (s) = C · (s · 1 − (A − B · K))−1 · B. (6.17)

6.1.4 Properties
A bunch of nice properties can be listed:

• The matrix of the closed-loop system

A−B·K (6.18)

is guaranteed to be a Hurwitz matrix.


Remark. A Hurwitz matrix is a matrix that has all eigenvalues with negative real parts.


But wait, what does it mean? We have learned that eigenvalues with negative real parts contribute to the stability of the closed-loop system. In this case the closed-loop system is asymptotically stable.

• If one chooses R = r · 1 one gets for the minimal return difference

µ_LQR = min_ω {σmin(1 + L_LQR)} ≥ 1.   (6.19)

• If Q >> R we say that we have cheap control energy.

• If R >> Q we say that we have expensive control energy.

• Closed-loop behaviour:

max_ω {σmax(S(j·ω))} ≤ 1,   (6.20)
max_ω {σmax(T(j·ω))} ≤ 2.   (6.21)

• Of course you are not expected in real life to solve the Riccati equation by hand. Since these calculations contain many matrix multiplications, numerical algorithms have been developed. Matlab offers the command

K=lqr(A,B,Q,R);

• Remember that
Φ = Φᵀ.   (6.22)

6.1.5 Pole Placement


Another intuitive way of approaching this problem is the so-called pole placement. Since one knows that the closed-loop state matrix is A − B · K, one can choose the matrix K such that the desired poles are achieved. With this method one has direct influence on the speed of the control system. The disadvantage, however, is that one has no guarantee of robustness.
If one looks at Figure 42, one can derive the following complete state space description of the plant and of the controller:

ẋ(t) = A · x(t) + B · u(t),
y(t) = C · x(t) + D · u(t),
ż(t) = F · z(t) + G · e(t),        (6.23)
u(t) = H · z(t),

where e(t) is the error of the controller.

Open Loop Conditions


By assuming for this case D = 0 and plugging the information of the controller in the plant,
one gets

ẋ(t) = A · x(t) + B · Hz(t),


y(t) = C · x(t), (6.24)
ż(t) = F · z(t) + G · e(t).


   
By defining two new variables x̃(t) = [x(t); z(t)], ũ(t) = [u(t); e(t)] and referring to Figure 43 one can write

A_open loop = [A  B·H;  0  F],
B_open loop = [0;  G],            (6.25)
C_open loop = [C  0].

Remark. Because of the determinant rule for block-triangular matrices, we automatically know that the eigenvalues of A_open loop are the eigenvalues of A together with the eigenvalues of F. Mathematically

λ(A_open loop) = λ(A) ∪ λ(F).   (6.26)

Closed Loop Conditions


It is still assumed D = 0. If one closes the loop it holds

e(t) = −y(t) ⇒ e(t) = −C · x(t). (6.27)

This gives the state space description

ẋ(t) = A · x(t) + B · Hz(t),


y(t) = C · x(t), (6.28)
ż(t) = F · z(t) + G · (−C · x(t)).
 
By defining a new variable x̃(t) = [x(t); z(t)] and referring to Figure 44 one can write

A_closed loop = [A  B·H;  −G·C  F].   (6.29)

Remark. The eigenvalues of A_closed loop can no longer be read off block by block: they depend on F, G and H in a complicated way,

λ(A_closed loop) = f(F, G, H).   (6.30)

This makes a direct analysis much harder and the method not generally useful.

Figure 42: Signal Flow Diagram with Controller.


Figure 43: Open loop case.

Figure 44: Closed loop case.

6.1.6 Examples
Example 35. Your task is to keep a space shuttle on its trajectory. The deviations from the desired trajectory are well described by

d/dt x(t) = A · x(t) + b · u(t) = [0  1;  0  0] · x(t) + [0;  1] · u(t).

x1(t) is the position of the space shuttle and x2(t) is its velocity. Moreover, u(t) represents the propulsion. The position and the velocity are known at every moment and for this reason we can use a state-feedback regulator. In your first try, you want to implement an LQR with

Q = [1  0;  0  2.25],   R = 4.

(a) Show that the system is completely controllable.


(b) Find the state feedback matrix K.
After some simulations, you decide that the controller you've found is not good enough for your task. Your boss doesn't want you to change the matrices Q and R, and for this reason you decide to try direct pole placement. The specifications are
• The system should not overshoot.
• The error of the regulator should decrease with e−3·t .

(c) Find the poles such that the specifications are met.

(d) Find the new state feedback matrix K2 .


Solution.

(a) The controllability of the system is easy to show: the controllability matrix is

R2 = [b  A·b] = [0  1;  1  0].

The rank of R2 is clearly 2: the system is completely controllable.

(b) In order to find the matrix K, one has to compute the solution of the Riccati equation
related to this problem. First, one has to look at the form that this solution should have.
Here b is a 2 × 1 vector. This means that since Φ = ΦT we are dealing with a 2 × 2 Φ of
the form

Φ = [ϕ1  ϕ2;  ϕ2  ϕ3].

With the Riccati equation, it holds

Φ · B · R⁻¹ · Bᵀ · Φ − Φ · A − Aᵀ · Φ − Q = 0.

Plugging in the matrices one gets

(1/4) · [ϕ2²  ϕ2·ϕ3;  ϕ2·ϕ3  ϕ3²] − [0  ϕ1;  0  ϕ2] − [0  0;  ϕ1  ϕ2] − [1  0;  0  2.25] = [0  0;  0  0],

i.e.

[ϕ2²/4 − 1        ϕ2·ϕ3/4 − ϕ1;
 ϕ2·ϕ3/4 − ϕ1     ϕ3²/4 − 2·ϕ2 − 2.25] = [0  0;  0  0].

From this it follows:

• ϕ2²/4 − 1 = 0 ⇒ ϕ2 = ±2.

• From the (2,2) entry, we get two cases:
  – ϕ2 = 2: ϕ3² = 25 ⇒ ϕ3 = ±5.
  – ϕ2 = −2: ϕ3² = −7, i.e. ϕ3 would be complex and is therefore not admissible.

• Because of Sylvester's criterion, ϕ1 should be positive in order to get a positive definite matrix. Plugging into the off-diagonal entry: ϕ1 = ϕ2·ϕ3/4 = 2.5 is the only positive solution. This means that the only reasonable choice is ϕ2 = 2 and ϕ3 = 5.

We therefore get

Φ = [2.5  2;  2  5].

Because of Sylvester's criterion and the fact that Φ should be positive definite, this is a good matrix!


For the matrix K it holds

K = R⁻¹ · bᵀ · Φ = (1/4) · [0  1] · [2.5  2;  2  5] = (1/4) · [2  5].

(c) Overshoot, or oscillations in general, are due to the imaginary part of the poles. The prescribed error decay e^(−3·t) fixes the real part of the poles:

π1 = π2 = −3.

(d) The closed loop has feedback matrix

A − B · K2 .

We have to choose K2 such that the eigenvalues of the state feedback matrix are both
−3. The dimensions of K2 are the same as the dimensions of K, namely 1 × 2. It holds
   
A − b · K2 = [0  1;  0  0] − [0;  1] · [k1  k2]
           = [0  1;  0  0] − [0  0;  k1  k2]
           = [0  1;  −k1  −k2].

The eigenvalues of this matrix are

π1,2 = (−k2 ± √(k2² − 4·k1)) / 2.

Since the two eigenvalues should have the same value, the part under the square root must vanish. This means that −k2/2 = −3 ⇒ k2 = 6. Moreover:

k1 = k2²/4 = 9.

The matrix finally reads

K2 = [9  6].
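A quick Matlab cross-check of both designs (a sketch):

A = [0 1; 0 0];  b = [0; 1];
Q = [1 0; 0 2.25];  R = 4;

K  = lqr(A, b, Q, R);           % expected: [0.5 1.25] = (1/4)*[2 5]
K2 = acker(A, b, [-3 -3]);      % pole placement with a repeated pole, expected: [9 6]
eig(A - b*K2)                   % both eigenvalues at -3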


Example 36. A system is given as

ẋ1 = 3 · x2
ẋ2 = 3 · x1 − 2 · x2 + (1/2) · u
y  = 4 · x1 + (7/3) · x2.
(a) Solve the LQR problem for the criterion

J = ∫₀^∞ (7·x1² + 3·x2² + (1/4)·u²) dt

and find K.

(b) Find the eigenvalues of the closed-loop and the LQ regulator K.

(c) Does anything change if we use the new criterion

Jnew = ∫₀^∞ (70·x1² + 30·x2² + (10/4)·u²) dt ?


Solution.
(a) From the theory of quadratic forms (course Lineare Algebra I/II), one can read

Q = [7  0;  0  3]   and   R = 1/4.

The state-space description of the system can be rewritten as

[ẋ1;  ẋ2] = [0  3;  3  −2] · [x1;  x2] + [0;  1/2] · u,   with A = [0  3;  3  −2], B = [0;  1/2],

y = [4  7/3] · [x1;  x2] + 0 · u,   with C = [4  7/3], D = 0.

Plugging this into the Riccati equation one gets:

Φ · B · R⁻¹ · Bᵀ · Φ − Φ · A − Aᵀ · Φ − Q = 0

[ϕ2²     ϕ2·ϕ3;   ϕ2·ϕ3    ϕ3²] − [6·ϕ2 + 7                3·ϕ1 − 2·ϕ2 + 3·ϕ3;
                                    3·ϕ1 − 2·ϕ2 + 3·ϕ3      6·ϕ2 − 4·ϕ3 + 3] = [0  0;  0  0].

Hence, one gets 3 equations (two elements are equal because of symmetry):

ϕ2² − 6·ϕ2 − 7 = 0
ϕ2·ϕ3 − 3·ϕ1 + 2·ϕ2 − 3·ϕ3 = 0
ϕ3² − 6·ϕ2 + 4·ϕ3 − 3 = 0.

From here, with the same procedure we have used before, one gets

Φ = [34/3  7;  7  5].

This gives K as

K = R⁻¹ · Bᵀ · Φ = 4 · [0  1/2] · [34/3  7;  7  5] = [14  10].

(b) The matrix to analyze is

A − B · K = [0  3;  3  −2] − [0;  1/2] · [14  10]
          = [0  3;  3  −2] − [0  0;  7  5]
          = [0  3;  −4  −7].


The eigenvalues of the closed loop system are given with

det((A − B · K) − λ · 1) = λ² + 7·λ + 12 = 0,

from which it follows: λ1 = −3 and λ2 = −4.

(c) No. K remains the same, because it holds Jnew = 10 · J.
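This invariance is easy to verify numerically; a minimal sketch:

A = [0 3; 3 -2];  B = [0; 0.5];
Q = [7 0; 0 3];   R = 1/4;

K1 = lqr(A, B, Q, R)            % expected: [14 10]
K2 = lqr(A, B, 10*Q, 10*R)      % identical: scaling the whole criterion does not change the minimizer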


Example 37. You have to design a LQ regulator for a plant with 2 inputs, 3 outputs and 6
state variables.

(a) What are the dimensions of A, B, C and D?

(b) What is the dimension of the transfer function u → y?

(c) What is the dimension of the matrix Q of JLQR ?

(d) What is the dimension of the matrix R of JLQR ?

(e) What is the dimension of the matrix K?


Solution.

(a) One can find the solution by analyzing the meaning of the matrices:

• Since we are given 6 state variables, the matrix A should have 6 rows and 6 columns, i.e. A ∈ R^(6×6).
• Since we are given 2 inputs, the matrix B should have 2 columns and 6 rows, i.e.
B ∈ R6×2 .
• Since we are given 3 outputs, the matrix C should have 6 columns and 3 rows, i.e.
C ∈ R3×6 .
• Since we are given 2 inputs and 3 outputs, the matrix D should have 2 columns and
3 rows, i.e. D ∈ R3×2 .

(b) Since we are dealing with a system with 2 inputs and 3 outputs, P(s) ∈ R^(3×2). Moreover, P(s) has the same dimensions as D because of its formula.

(c) From the formulation of Q one can easily see that its dimensions are the same as the dimensions of A, i.e. Q ∈ R^(6×6).

(d) The matrix R weights the inputs, hence R ∈ R^(2×2).

(e) From
u(t) = −K · x(t)
we can see that K should have 6 columns and 2 rows, i.e. K ∈ R^(2×6).


Example 38. You design with Matlab a LQ Regulator:

1 A = [1 0 0 0; 1 1 0 0; 1 1 1 0; 0 0 1 1];
2 B = [1 1 1 1; 0 1 0 2];
3 C = [0 0 0 1; 0 0 1 1; 0 1 1 1];
4
5 nx = size(A,1); % Number of state variables of the plant, in Script: n
6 nu = size(B,2); % Number of input variables of the plant, in Script: m
7 ny = size(C,1); % Number of output variables of the plant, in Script: p
8
9 q = 1;
10 r = 1;
11 Q = q*eye(###);
12 R = r*eye(###);
13
14 K = lqr(A,B,Q,R);

Fill the following rows:

11 : ### =
12 : ### =


Solution. The matrix Q is a weight for the states and the matrix R is a weight for the inputs.
The correct filling is

11 : ### = nx
12 : ### = nu


6.2 Extensions of the LQR problem


6.2.1 The LQRI
This is an extension of what was presented in the previous section. In the previous formulation, no integral action was present: in this section we introduce it. The main idea is to introduce a (P)I element on each output channel, in order to remove the static (steady-state) error.

Definition
Two possible representations of the structure can be seen in Figure 45 and Figure 46. Let's analyze this system by referring to Figure 46. Let's define a new state vector

x̃(t) = [x(t);  v(t)] ∈ R^(n+m).   (6.31)

Two assumptions are then used:

• D = 0,

• we have the same number of inputs and outputs (square systems).

By introducing separate B matrices for u(t), w(t) and r(t) one gets the dynamics

d/dt x̃(t) = Ã · x̃(t) + B̃u · u(t) + B̃r · r(t) + B̃w · w(t),   (6.32)

where

Ã = [A  0;  −C  0],   B̃u = B̃w = [B;  0],   B̃r = [0;  1].   (6.33)
This can be solved using the LQR approach and the new weight matrices

C̃ = [C̄  0;  0  γ·1]   ⇒   Q̃ = C̃ᵀ · C̃ = [Q  0;  0  γ²·1]   (6.34)

and

R̃ = R.   (6.35)

The solution we want to find is then

K̃ = [K  −KI],   (6.36)

where K ∈ R^(m×n) and KI ∈ R^(m×p).


Remark.

• One could ask, why the matrix R is unchanged. Recalling the role of R, one sees that
it is the weight for the input, that remains unchanged with respect to the initial LQR
problem.

• γ > 0 is the tuning parameter that defines how strong the integral action will be.


Figure 45: Structure of LQRI.

Figure 46: Structure of LQRI.

Kochrezept
(I) Define the new system

d/dt x̃(t) = Ã · x̃(t) + B̃u · u(t) + B̃r · r(t) + B̃w · w(t),   x̃(t) = [x(t);  v(t)],   (6.37)

with

Ã = [A  0;  −C  0],   B̃u = B̃w = [B;  0],   B̃r = [0;  1].   (6.38)

(II) Choose the desired weight matrices

Q̃ = [Q  0;  0  γ²·1],   R̃ = R.   (6.39)

The controller is then

K̃ = [K  −KI].   (6.40)

(III) Find K̃ using the standard LQR formulation with {Ã, B̃u , Q̃, R̃} instead of {A, B, Q, R}.

(IV) Use the Matlab command

K_tilde=lqr(A_tilde,B_u_tilde,C_tilde’*C_tilde,r*eye(m,m)).
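A sketch of how the augmented matrices can be assembled before the lqr call; A, B, C, C_bar, gamma and r are assumed to be defined, and the system is assumed square with p outputs and m inputs:

[p, n] = size(C);   m = size(B, 2);

A_tilde   = [A,  zeros(n, p);
             -C, zeros(p, p)];
B_u_tilde = [B; zeros(p, m)];
C_tilde   = blkdiag(C_bar, gamma*eye(p));

K_tilde = lqr(A_tilde, B_u_tilde, C_tilde'*C_tilde, r*eye(m));
K  =  K_tilde(:, 1:n);           % state-feedback part
KI = -K_tilde(:, n+1:end);       % integral part, sign convention of (6.36)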


Examples
Example 39. The dynamics of a system are given as

ẋ(t) = 3 · x(t) + 2 · u(t)


y(t) = 2 · x(t).

First, you try to solve the problem by using the standard LQR problem statement. However, after some measurements, you notice that the controller you've found is actually not so good: there is a steady-state control error. You decide to use the LQRI formulation in order to eliminate this error. Find the new controller Ke using R = 1, C̄ = 2 and integral action γ = 1.


Solution. The matrices that describe the dynamics of the system are

A = (3),
B = (2),
C = (2).

The new matrices {Ã, B̃, C̃} are

Ã = [A  0;  −C  0] = [3  0;  −2  0],

B̃ = [B;  0] = [2;  0],

C̃ = [C̄  0;  0  γ·1] = [2  0;  0  1].

The matrix Q̃ reads

Q̃ = C̃ᵀ · C̃ = [2  0;  0  1]² = [4  0;  0  1].

With Φ = [ϕ1  ϕ2;  ϕ2  ϕ3] one gets for the Riccati equation:

Φ · B̃ · R̃⁻¹ · B̃ᵀ · Φ − Φ · Ã − Ãᵀ · Φ − Q̃ = 0

[4ϕ1²   4ϕ1ϕ2;  4ϕ1ϕ2   4ϕ2²] − [3ϕ1 − 2ϕ2   0;  3ϕ2 − 2ϕ3   0] − [3ϕ1 − 2ϕ2   3ϕ2 − 2ϕ3;  0   0] − [4  0;  0  1] = [0  0;  0  0]

[4ϕ1² − 6ϕ1 + 4ϕ2 − 4      4ϕ1ϕ2 − 3ϕ2 + 2ϕ3;
 4ϕ1ϕ2 − 3ϕ2 + 2ϕ3         4ϕ2² − 1] = [0  0;  0  0].

Let's now analyze the equations:

• From the (2,2) entry of the matrix one can compute ϕ2: ϕ2 = ±1/2.

• By plugging these two values into the (1,1) entry one gets the following results:
  – ϕ2 = 1/2, from which it follows ϕ1 = (3 ± √17)/4.
  – ϕ2 = −1/2, from which it follows ϕ1 = (3 ± √33)/4.


Because of Sylvester's criterion, one has to eliminate all negative solutions for ϕ1. This means we are left with two solutions:

  – ϕ2 = 1/2, ϕ1 = (3 + √17)/4.
  – ϕ2 = −1/2, ϕ1 = (3 + √33)/4.

• Let's plug these solutions into the off-diagonal entry, where ϕ3 = (−4·ϕ1·ϕ2 + 3·ϕ2)/2:

  – ϕ2 = 1/2, ϕ1 = (3 + √17)/4, from which it follows ϕ3 = −(3 + √17)/4 + 3/4 = −√17/4.
  – ϕ2 = −1/2, ϕ1 = (3 + √33)/4, from which it follows ϕ3 = (3 + √33)/4 − 3/4 = √33/4.
This means that the two possible solutions read

Φ1 = [(3 + √17)/4    1/2;   1/2    −√17/4],

Φ2 = [(3 + √33)/4   −1/2;  −1/2     √33/4].

Since these solutions already contain just the ϕ1 that could lead to a positive definite matrix
(ϕ1 > 0), we can have a look at the determinants of them. In order to get a positive definite
matrix, we should choose the matrix with a positive determinant.
It holds

det(Φ1) = (3 + √17)/4 · (−√17/4) − 1/4 = −(3√17 + 17)/16 − 1/4 < 0,

det(Φ2) = (3 + √33)/4 · (√33/4) − 1/4 = (3√33 + 33)/16 − 1/4 > 0.

From which it follows that the only positive definite and therefore correct matrix is Φ2 .


The new matrix Ke can now be computed:

Ke = [K  −KI] = R⁻¹ · B̃ᵀ · Φ2 = [2  0] · [(3 + √33)/4   −1/2;  −1/2   √33/4] = [(3 + √33)/2   −1].


6.2.2 Finite Horizon LQR


Definition
Assuming a time-dependent LQR problem, we can write a time-dependent description of the
system:
d/dt x(t) = A(t) · x(t) + B(t) · u(t),   x(t) ∈ Rⁿ, u(t) ∈ Rᵐ, t ∈ [ta, tb], x(ta) = xa,   (6.41)
with the same matrices as before, but now time-dependent. The criterion for the LQR formulation can be written as

J(u) = xᵀ(tb) · P · x(tb) + ∫_{ta}^{tb} (xᵀ(t) · Q(t) · x(t) + uᵀ(t) · R(t) · u(t)) dt,   (6.42)

with P ∈ R^(n×n) and P = Pᵀ ≥ 0.


Remark. This formulation isn't relevant for the course: you only have to make sure you understand the point of considering time-dependent systems.
The equations for u(t) and for K are the same as before, and the Riccati equation becomes a differential equation. Referring to Figure 47, we see that a memory element is present: the matrix K(t) is computed in advance, stored at every step in this memory and recovered every time it is needed.

Figure 47: Structure of the finite horizon LQR.




6.2.3 Feedforward LQR Control Systems


Definition
Referring to Figure 48, we can see the structure of the feedforward LQR. The problem is still set up on the interval [ta, tb]. The objective of the controller is to follow a known reference signal r(t), without caring how much energy this will cost. The feedforward input u_ff(t) can be computed in advance as well and stored in another memory.
The method to do this will be explained in the next section.

Figure 48: Structure of the feedforward LQR.




6.3 Numerical Optimization


6.3.1 Definition
The optimal input
u∗ (t ∈ [0, T ]) (6.43)
is computed numerically offline for the whole planning window and is used as feedforward signal, i.e. u_ff = u*(t). The numerical optimization is possible because the planning window has a finite length T.
The criterion we want to minimize is defined as

min_{u(t)} JT(x0, u(t)) = min_{u(t)} ∫₀ᵀ l(x(t), u(t)) dt + m(x(T))
                                     (stage cost)          (terminal cost)

s.t.  ẋ(t) = f(t, x(t), u(t)),                       (6.44)
      x(0) = x0,
      x(t) ∈ X,  u(t) ∈ U,
      x(T) ∈ Xf,

where

• ẋ(t) = f (t, x(t), u(t)): Model of the systems;

• x(0) = x0 : Initial Condition;

• x(t) ∈ X , u(t) ∈ U: “state and input constraint sets”;

• x(T ) ∈ Xf : “terminal state constraint set”.

Figure 49: Numerical Optimization.

Remark. A bunch of other courses is offered in order to learn how to approach such an optimization problem. The aim of its introduction in this course is only that of recognizing the elements that compose it and understanding how they connect to each other.


6.3.2 Examples
Example 40. You would like to autonomously drive a train from Zürich to Lugano without
stops in two and a half hours by consuming as little energy as possible. You therefore formulate
the optimal control problem as
min_{Pm(t ∈ [0,T])}  ∫₀ᵀ Pel(Pm(t)) dt

s.t.  ṡ(t) = √(2/m · Ekin(t)),
      Ėkin(t) = Pm(t) − Fdrag(s(t), Ekin(t)) · √(2/m · Ekin(t)),          (6.45)
      s(0) = Ekin(0) = Ekin(T) = 0,   s(T) = Send,
      |Pm(t)| ≤ Pmax,
      Ekin(t) ≤ (m/2) · vmax(s(t))²,
where Pm is the power generated by the electric motors, Pel : R → R a nonlinear function
describing the relation between mechanical power and power extracted from the electric grid,
s(t) is the current position of the train, Ekin (t) its current kinetic energy, Fdrag (s(t), Ekin (t))
the total drag force, Send the total distance between Zürich and Lugano, Pmax the maximum
power that can be exerted by the motors and vmax (s(t)) the position-dependent maximum
speed profile.

(a) Identify the state variables x(t) and the input variables u(t).

(b) Identify the system dynamics ẋ(t) = f (x(t), u(t)) and the initial conditions x0 . Is it a
nonlinear system?

(c) What is the stage cost function l(x(t), u(t))? Is it linear, quadratic or nonlinear? What
about the terminal cost function m(x(T )) and the objective function JT (x0 , u(t))?

(d) What is the value of T in seconds?

(e) Identify the state, input and terminal state constraint sets X , U, Xf .


Solution.

(a) The state variables are easy to see: they are the train position and its kinetic energy. The state vector reads

x(t) = [s(t);  Ekin(t)].

u(t) = Pm (t).

(b) The system dynamics are computed by taking the derivative of the state vector. Reading from the problem's description one gets

ẋ(t) = [√(2/m · x2(t));   u(t) − Fdrag(x1(t), x2(t)) · √(2/m · x2(t))],

with x1 (t), x2 (t) state variables. Of course, this represents a nonlinear dynamical system.
The initial conditions are x0 = 0.

(c) The stage cost function is


l(x(t), u(t)) = Pel (u(t))
and is nonlinear.
The terminal cost function is zero, i.e.

m(x(T )) = 0.

The objective function reads


Z T
JT (x0 , u(t)) = Pel (u(t))dt.
0

(d) Since we would like to complete our journey within 2.5 hours, it holds

T = 3600s/h · 2.5h = 9000s.

(e) The state constraint set is

X = {x ∈ R² : 0 ≤ x1 ≤ Send,  x2 ≤ (m/2) · vmax(x1)²}.
The input constraint set is
U = {u ∈ R : |u| ≤ Pmax }.
The terminal state constraint set is
 
Send
Xf = { }.
0


6.4 Model Predictive Control (MPC)


A whole course about this topic is offered in the master at IDSC (Prof. Zeilinger).

6.4.1 Definition
Basically, with MPC one solves in each time step the numerical optimization problem defined
in the previous section. The solution is always given as feedback to the system.

Method
For every ∆t this procedure is applied:
(I) Measure or estimate the actual state, i.e.
x(t) = z. (6.46)

(II) Find the optimal input for the whole planning window T:

u*([0, T]) = arg min_{u([0,T])} JT(z, u([0, T])),   (6.47)

with the model and the constraints introduced in (6.44).


Remark. arg min f (x) is the argument that minimizes f (x).
(III) Implement just the first part
u∗ ([0, ∆t]) (6.48)
of the calculated solution.

Figure 50: MPC method description.



One of the big advantages of Model Predictive Control is that this method can work with every model, i.e. also with nonlinear models and delay elements. Another advantage is that the criterion (objective function) can be freely chosen. However, stability is not guaranteed and good models are computationally demanding.
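As an illustration only, the following sketch implements the receding-horizon idea for the scalar integrator of Example 41 after a forward-Euler discretization. The sampling time, horizon, weights and bound are assumptions of this sketch (and quadprog requires the Optimization Toolbox); it is not part of the lecture material.

Ts = 0.1;  N = 20;                  % sampling time and horizon (assumed)
q = 1;  r = 1;  umax = 1;           % weights and input bound (assumed)
x = 1;                              % initial state, feasible since |x| <= umax*N*Ts

S = Ts * tril(ones(N));             % prediction: x_pred = x*ones(N,1) + S*U
H = 2 * (q*(S'*S) + r*eye(N));      % quadratic cost in U (the Ts factor in the cost is dropped for simplicity)

for k = 1:50                        % closed-loop simulation
    f = 2*q*S' * (x*ones(N,1));
    U = quadprog(H, f, [], [], S(N,:), -x, ...   % terminal constraint x_N = 0
                 -umax*ones(N,1), umax*ones(N,1));
    u = U(1);                       % apply only the first input (receding horizon)
    x = x + Ts*u;                   % plant update
end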


6.4.2 Examples
Example 41. Consider the Model Predictive Control scheme P(x0 )
min_{u(t ∈ [0,T])} ∫₀ᵀ (q · x²(t) + r · u²(t)) dt
s.t.  ẋ(t) = u(t),  x(0) = x0,
      u(t) ∈ U,  x(t) ∈ X,  x(T) ∈ Xf.

(a) How can we solve the problem for T → ∞ and X = U = Xf = R? What is the solution
for u∗ (t)?

(b) What happens if the value of r is decreased? And what if it is increased? Include the notion of cheap and expensive control in your reasoning.

(c) In this case would you need to iteratively solve the optimization problem? Why?

(d) Consider now a finite horizon T and the constraint sets

X =R
U = {u ∈ R : |u| ≤ umax }
Xf = {0}.

Define the feasible set of this MPC scheme, i.e. find the set

XT = {x0 ∈ R : P(x0 ) admits a feasible solution}.

What happens if x0 ∉ XT ?

(e) Now also consider


X = {x ∈ R : 0 ≤ x ≤ xmax }.
How does the XT change?


Solution.
(a) The MPC optimization problem becomes

min_{u(t)} ∫₀^∞ (q · x²(t) + r · u²(t)) dt   s.t.  ẋ(t) = u(t),  x(0) = x0.

This is a classic unconstrained infinite-horizon LQR problem that we can tackle by solving the scalar version of the Continuous-time Algebraic Riccati Equation (CARE):

(1/r) · B² · Φ² − 2 · A · Φ − q = 0.

Since in our case A = 0, B = 1 and Φ > 0, we obtain

Φ = √(q · r).

Therefore, the static gain for the optimal state-feedback regulator is

K = (B/r) · Φ = √(q/r).
We know that

u*(t) = −K · x(t),

where x(t) can be computed from the dynamics of the closed loop

ẋ(t) = −K · x(t).

This is an easy differential equation and the solution reads

x*(t) = x0 · e^(−K·t) = x0 · e^(−√(q/r)·t).

Plugging this into the wanted input we find

u*(t) = −K · x*(t) = −√(q/r) · x0 · e^(−√(q/r)·t).

(b) This case is interesting to analyze, since the control gain K equals the convergence rate of the state variable. If the value of r is decreased we have cheap control: the cheaper control energy results in a faster convergence rate K = √(q/r) at the expense of a larger control gain. If we increase r, making the control energy more expensive, the control gain will be reduced at the expense of a slower convergence rate.

(c) It is not necessary to solve the optimization problem iteratively, as for this special case the optimal input u*(t) can be computed as a closed-form solution. Typically, this would not be possible, as the presence of state and input constraints would not allow solving the optimization problem analytically. In that case, a numerical solver must be employed.


(d) The state dynamics are given by the open integrator ẋ(t) = u(t). Hence the maximum change achievable in x over the finite horizon T can be computed by integrating the minimum and maximum values allowed for the input, i.e. u(t) = ±umax. Since the input constraint is symmetric, we obtain for the maximum achievable distance in both the negative and the positive direction

∆xmax = ∫₀ᵀ umax dt = umax · T.
As we do not have state constraints, but we are only forcing the state variable to be at
the origin at the end of the horizon, i.e.

x(T ) ∈ Xf = {0} ⇒ x(T ) = 0,

the feasible set is defined by all the initial conditions whose distance from the origin is less than the maximum achievable distance ∆xmax, i.e.

XT = {x0 ∈ R : P(x0) admits a feasible solution}
   = {x0 ∈ R : |x0| ≤ umax · T}.

If at some point x0 ∉ XT, which means |x0| > ∆xmax, there will be no admissible control trajectory able to steer the state variable to the origin within the horizon T, and therefore the MPC problem will be infeasible and no solution will be found.

(e) Since we are now imposing the additional constraint

x(t) ∈ X = {x ∈ R : 0 ≤ x ≤ xmax},

x0 must always lie inside X. Hence, XT results from the intersection of the feasible set computed in part (d), XTold, and the state constraint set X as

XTold ∩ X = {x0 ∈ R : 0 ≤ x0 ≤ min{umax · T, xmax}}.


6.5 The Linear Quadratic Gaussian (LQG)


In the previous section we assumed that all the state variables of the system were given. In
non academic cases, however, this is not the case: one knows just the output y(t) and the
input u(t). Hence, one has to figure out how to get the actual state x(t). The idea is to use
an observer to get an estimate of x(t), also called x̂(t). A whole course about estimation is offered at IDSC in the master by Prof. D'Andrea: Recursive Estimation.

6.5.1 Observer
For the following explanations, I refer to Figure 51. The idea is to copy the internal description
of the plant and its dynamics, in order to predict x̂(t). This is done by connecting two feedback
loops through the observer gain L, that can be tuned for different applications. First of all,
let’s define the observation error as

x̄(t) = x(t) − x̂(t) ∈ Rn . (6.49)

The dynamics of the error are described by its derivative, namely


d/dt x̄(t) = d/dt x(t) − d/dt x̂(t)
           = A · x(t) + B · u(t) − (Â · x̂(t) + B̂ · u(t) + L · (C · x(t) − Ĉ · x̂(t)))   (6.50)
           = A · x̄(t) − L · C · x̄(t)
           = (A − L · C) · x̄(t),

by assuming the exact duplication of the system’s dynamics, i.e. A = Â, B = B̂ and C = Ĉ.
If the matrix A − L · C has asymptotically stable eigenvalues, the observation error will
converge asymptotically to 0. The speed of convergence can be tuned with the gain L.
The whole point here, is to find such a matrix with asymptotically stable eigenvalues. Since
with the LQR approach we are guaranteed to have asymptotically stable eigenvalues for the
matrix A − B · K, we can find similarities between the two: in fact the roles of B and C are
interchanged. By converting the problem, in order to get the same role for the matrices, one
gets
A − L · C ⇒ (A − L · C)T = AT − C T · LT (6.51)
This makes things easier: we can now, according to Table 1, approach this problem as an LQR problem with changed matrices. By plugging the new matrices into the Riccati equation, one gets

(1/q) · Ψ · Cᵀ · C · Ψ − Ψ · Aᵀ − A · Ψ − B̄ · B̄ᵀ = 0,   (6.52)

where Ψ is its solution. The matrix L is then given as

Lᵀ = (1/q) · C · Ψ.   (6.53)


LQR                      LQG
[A − B·K]                [(A − L·C)ᵀ] = [Aᵀ − Cᵀ·Lᵀ]
A                        Aᵀ
B                        Cᵀ
Q = C̄ᵀ·C̄                B̄·B̄ᵀ
R = r·1                  q·1

Table 1: LQG and LQR.

Figure 51: Structure of a state observer.



6.5.2 The LQG Controller


The LQG controller uses the LQR approach and the concept of the observer and connects the two into a unique system. Starting from the solution of the LQR problem, which contains the controller K, and the observed signal x̂(t) from the observer, one can write as learned

u(t) = −K · x̂(t).   (6.54)

Since we now have an estimate of x(t), we can approach the problem with a normal LQR method. Let's define

x̃ = [x(t);  x̂(t)].   (6.55)

By connecting the observer and the plant with the gain −K one can analyze the open loop and the closed loop system conditions:

Open Loop Conditions


For open loop conditions, we can write the state dynamics as
d/dt x̃(t) = Ã_open loop · x̃(t) + B̃_open loop · e(t).   (6.56)


By looking at Figure 52, we can derive

Ã_open loop = [A   −B·K;   0   A − B·K − L·C],

B̃_open loop = [0;  −L],            (6.57)

C̃_open loop = [C  0].

Closed Loop Conditions


For closed loop conditions, we can write the state dynamics as

d/dt x̃(t) = Ã_closed loop · x̃(t) + B̃_closed loop · e(t)   (6.58)

and analogously it follows from Figure 52 that

Ã_closed loop = [A   −B·K;   L·C   A − B·K − L·C],

B̃_closed loop = [0;  −L],          (6.59)

C̃_closed loop = [C  0].
If one analyzes this in the frequency domain, one gets

L_LQG(s) = P(s) · C(s) = C · (s·1 − A)⁻¹ · B · K · (s·1 − (A − B·K − L·C))⁻¹ · L,   (6.60)

where the first factor is the plant P(s) and the second factor is the controller C(s),

and
TLQG (s) = LLQG (s) · (1 + LLQG (s))−1 . (6.61)
Remark.
• If the LQ regulator and the observer are stable, then so is the LQG closed loop.
• If the LQ regulator and the observer are robust, nothing can be said about the robustness of the LQG closed loop. This means that we have no guarantee of robustness and it must be investigated a posteriori.

6.5.3 Kochrezept
(I) Find K: Solve a standard LQR problem with
{A, B, Q = C̄ T · C̄, R} (6.62)

(II) Find L: Solve a standard LQR problem with

{Aᵀ, Cᵀ, Q = B̄ · B̄ᵀ, R = q · 1}   (6.63)

(III) Use the Matlab command

K=lqr(A,B,C_tilde’*C_tilde,r*eye(m,m)),L=lqr(A’,C’,B_bar*B_bar’,q*eye(p,p))’.
Remark. It must hold
• {A, C} completely observable
• {A, B̄} completely controllable.
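A compact sketch of the two design steps and of the resulting controller (6.60) in Matlab; A, B, C, C_bar, B_bar, r and q are assumed to be given:

m = size(B, 2);  p = size(C, 1);

K = lqr(A,  B,  C_bar'*C_bar, r*eye(m));      % step (I): state-feedback gain
L = lqr(A', C', B_bar*B_bar', q*eye(p))';     % step (II): observer gain (dual problem)

% LQG controller C(s) = K*(sI - (A - B*K - L*C))^(-1)*L, cf. (6.60)
Ctrl  = ss(A - B*K - L*C, L, K, zeros(m, p));
L_LQG = ss(A, B, C, 0) * Ctrl;                % loop gain P(s)*C(s)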


Figure 52: Structure of LQG controller.



6.5.4 Examples
Example 42. The dynamics of a system are given as

ẋ1 (t) = x2 (t)


ẋ2 (t) = u(t)
y(t) = x1 (t).

You want to design a state observer. The observer should use the measurements for y(t) and
u(t) in order to estimate the state variables x̂(t) ∼ x(t).

(a) Which dimension has the observer matrix L? Draw the signal diagram of such an observer.

(b) Compute the observer matrix L for q = 1.



(c) You have already computed a state feedback matrix K = [1  1] for the system above. What is the complete transfer function of the controller C(s)? Use L = [√2;  1].


Solution.

(a) Since C is a 1 × 2 matrix and L·C must have the same dimensions as A (a 2 × 2 matrix), L is a 2 × 1 matrix, i.e.

L = [L1;  L2].

By adapting the general structure of an observer one gets Figure 53.

Figure 53: Structure of the state observer.



(b) First of all let's read the system matrices:

A = [0  1;  0  0],   B = [0;  1],   C = [1  0],   D = 0.

Plugging these matrices into the Riccati equation one gets:

(1/q) · Ψ · Cᵀ · C · Ψ − Ψ · Aᵀ − A · Ψ − B · Bᵀ = 0

[ψ1²     ψ1·ψ2;   ψ1·ψ2    ψ2²] − [2·ψ2   ψ3;   ψ3   0] − [0  0;  0  1] = [0  0;  0  0].

The matrix Ψ is symmetric and positive definite, and with this information we can compute its elements:

• From the last entry of the equation one gets ψ2² = 1 ⇒ ψ2 = ±1.

• By plugging this into the first entry one gets ψ1 = ±√2. Because of the positive definiteness condition, one gets ψ1 = √2, ψ2 = 1.

• Because of the form of C we don't care about ψ3.

151
Gioele Zardini Control Systems II FS 2017

From these calculations it follows

Lᵀ = (1/q) · C · Ψ = [1  0] · [√2  1;  1  ∗] = [√2  1],

and so

L = [√2;  1].

(c) As it is shown in the theory, the formula to calculate the controller’s transfer function
reads
C(s) = K · (s · 1 − (A − B · K − L · C))−1 · L.
By plugging in the found matrices one gets

(A − B·K − L·C) = [0  1;  0  0] − [0;  1]·[1  1] − [√2;  1]·[1  0]
                = [0  1;  0  0] − [0  0;  1  1] − [√2  0;  1  0]
                = [−√2  1;  −2  −1].

It follows

(s·1 − (A − B·K − L·C))⁻¹ = [s + √2   −1;   2   s + 1]⁻¹
                          = 1/((s + √2)·(s + 1) + 2) · [s + 1   1;   −2   s + √2].

By plugging this into the formula one gets

C(s) = K · (s·1 − (A − B·K − L·C))⁻¹ · L
     = [1  1] · 1/((s + √2)·(s + 1) + 2) · [s + 1   1;   −2   s + √2] · [√2;  1]
     = 1/((s + √2)·(s + 1) + 2) · [s − 1   s + 1 + √2] · [√2;  1]
     = 1/((s + √2)·(s + 1) + 2) · (√2·s − √2 + s + 1 + √2)
     = ((√2 + 1)·s + 1) / ((s + √2)·(s + 1) + 2).
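The result can be cross-checked numerically with a short Matlab sketch:

A = [0 1; 0 0];  B = [0; 1];  C = [1 0];
K = [1 1];       L = [sqrt(2); 1];

Ctrl = ss(A - B*K - L*C, L, K, 0);   % observer-based controller
zpk(tf(Ctrl))                        % matches ((sqrt(2)+1)*s + 1)/((s+sqrt(2))*(s+1) + 2)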


Example 43. You are working on a project for your semester thesis which includes a water reservoir. Your task is to determine the disturbance d(t) that acts on the reservoir. Figure 54 shows the situation. The only state of the system is the water volume x(t) = V(t). The volume flows into the reservoir are the known system input u(t) and the unknown disturbance d(t) = V*_in(t) > 0. The volume flow out of the system is assumed to depend only on the water volume, i.e.

Vout(t) = −β · x(t).

The system output y(t) is the water level h(t). The model of this reservoir reads

dx(t)/dt = −β · x(t) + u(t) + d(t),
y(t) = (1/α) · x(t),   α > 0, β > 0.   (6.64)

Figure 54: a) Drawing of the reservoir; b) Inputs and Outputs of the observer; c) Blocks for
signal flow diagram.

The goal is to determine d(t). Your supervisor has already tried to solve the model equations for d(t): he couldn't determine the change in volume dx(t)/dt with enough precision. Hence, you want to solve this problem with a state observer.

(a) Draw the signal flow diagram of such a state observer. Use the blocks of Figure 54c).

(b) The state feedback matrix L is in this case some scalar value. Which value can L be, in
order to get an asymptotically stable state observer?
(c) Introduce a new signal d̂(t) in the state observer. This should approximate the real disturbance d(t).

(d) Find the state space description of the observer with inputs u(t) and y(t) and output d̂(t).


Solution.

(a) The signal flow diagram can be seen in Figure 55.

Figure 55: Signal flow diagram of the state observer.



(b) The stability of the observer depends on the eigenvalues of A − L · C. In this case, since A − L · C is a scalar,

A − L · C < 0

should hold. This leads to L > A/C. With the given information (A = −β, C = 1/α) it follows

L > −α · β.

(c) The dashed line in Figure 55 represents the new output d̂(t). The integrator in Figure 55 now has 3 inputs. The downward arrow from the reservoir is Vout(t), the arrow from the left is the input flow u(t) = Vin1(t). If we simulate the system without the dashed arrow, there is a deviation between the measured y(t) and the simulated ŷ(t). This results from the extra inflow d(t) = V*_in(t), which is not considered in the simulation.

(d) The new state-space description reads

dx̂(t)/dt = (−β − L/α) · x̂(t) + (1) · u(t) + (L) · y(t)

d̂(t) = −(L/α) · x̂(t) + (0) · u(t) + (L) · y(t).


6.6 Extension of LQG


6.6.1 The LQGI
Just as the LQRI problem extends the LQR, one can introduce an LQGI as well. The concept is really similar and the new matrices to compute are K, KI and L. For the following computations please refer to Figure 56.

Kochrezept
(I) Find K and KI by solving a standard LQR problem (see LQRI) with

{Ã, B̃, C̃, D̃} (6.65)

(II) Find L by solving a standard LQR problem with

{ÃT , C̃ T , Q = B̄ · B̄ T , R = q · 1} (6.66)

This can be one more time analyzed with open loop and closed loop conditions:

Open Loop Conditions


The state space description for open loop conditions reads

d/dt [x(t);  x̂(t);  v(t)] = [A    −Bu·K    Bu·KI;
                             0    A − Bu·K − L·C    Bu·KI;
                             0    0    0] · [x(t);  x̂(t);  v(t)] + [0;  −L;  1] · e(t)   (6.67)

y(t) = [C  0  0] · [x(t);  x̂(t);  v(t)].

Closed Loop Conditions


The state space description for closed loop conditions reads

d/dt [x(t);  x̂(t);  v(t)] = [A      −Bu·K    Bu·KI;
                             L·C    A − Bu·K − L·C    Bu·KI;
                             −C     0    0] · [x(t);  x̂(t);  v(t)] + [Bu;  Bu;  1] · r(t)   (6.68)

y(t) = [C  0  0] · [x(t);  x̂(t);  v(t)].


Figure 56: Structure of LQGI controller.



6.7 LTR
With the methods we have learned so far, we have no guarantee about robustness. The goal
of the LTR method is to improve the robustness of the system, in particular if an observer is
introduced.

6.7.1 Definition
The main idea is to first design an LQR or an observer such that the system behaves well enough, and in a second step design an LQG with a tuning parameter to improve the robustness. This can be seen in Figure 57.

Figure 57: LTR.


6.7.2 β method
In order to improve the robustness, the β method is often chosen:

Kochrezept
(I) LQR: Compute K with
1/(r·β) · Φ · B · Bᵀ · Φ − Φ · A − Aᵀ · Φ − Q = 0,   (6.69)

K = (1/r) · Bᵀ · Φ.   (6.70)
Matlab: Phi = care(A,B,Q,beta*r*eye(nu)), K = 1/r*B’*Phi.

(II) LQG: Compute L with


1/(q·β) · Ψ · Cᵀ · C · Ψ − Ψ · Aᵀ − A · Ψ − B̄ · B̄ᵀ = 0,   (6.71)

Lᵀ = (1/q) · C · Ψ.   (6.72)
Matlab: Psi = care(A’,C’,B bar*B bar’,beta*q*eye(ny)), L = (1/q*C*Psi)’.

Remark. In order to compute K and L the command lqr was used so far. Since β appears
only in the Riccati equations for Φ and Ψ, one should first solve the Riccati equation with the
command care and then just in a second step compute K and L.
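A minimal sketch of the β-method in Matlab; the problem data and the tuning parameters r, q and beta are assumed to be given:

nu = size(B, 2);  ny = size(C, 1);

% LQR part, modified Riccati equation (6.69)
Phi = care(A, B, Q, beta*r*eye(nu));
K   = 1/r * B' * Phi;

% observer part, modified Riccati equation (6.71)
Psi = care(A', C', B_bar*B_bar', beta*q*eye(ny));
L   = (1/q * C * Psi)';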

6.7.3 LQG/LQR for m ≥ p


1. Modelization of the plant, linearization and normalization;

2. Definition of the specifications in frequency domain;

3. Design of the observer:

(a) Psi = care(A’,C’,B bar*B bar’,beta*q*eye(ny));


(b) L = (1/q*C*Psi)’;
(c) Find B bar*B bar’∈ Rn×n , q > 0, and β ≤ 1, s.t. the specifications are fulfilled.
(d) Compute Lobs with
Lobs (s) = C · (s · I − A)−1 · L; (6.73)

4. Design of the “state feedback regulator”:

(a) K = lqr(A,B,C’*Q y*C,rho*R u);


(b) Find Q y∈ Rp×p and R u∈ Rm×m (often Q y = eye(ny) and R u = eye(nu));
(c) Choose rho smaller until all the spezifications are fulfilled;
(d) Loop gain:

LLTR (s) = C · (s · I − A)−1 · B · K · (s · I − (A − B · K − L · C))−1 · L; (6.74)

5. Implement the controller.


6.7.4 LQG/LQR for p ≥ m


1. Modelization of the plant, linearization and normalization;

2. Definition of the specifications in frequency domain;

3. Design the LQR:

(a) Phi = care(A,B,Q,beta*r*eye(nu));


(b) K = 1/r*B’*Phi;
(c) Find Q∈ Rn×n , r > 0, and β ≤ 1, s.t. the specifications are fulfilled;
(d) Compute LLQR with
LLQR(s) = K · (s · I − A)⁻¹ · B;   (6.75)

4. Design the LTR “state feedback regulator” :

(a) L = lqr(A’,C’,B*Q u*B’,mu*R y)’;


(b) Find Q u∈ Rm×m and R y∈ Rp×p (often Q u = eye(nu) and R y = eye(ny));
(c) Choose mu smaller until all the specifications are fulfilled;
(d) Loop gain:

LLTR (s) = C · (s · 1 − A)−1 · B · K · (s · 1 − (A − B · K − L · C))−1 · L; (6.76)

5. Implement the controller.

158
Gioele Zardini Control Systems II FS 2017

A Linear Algebra
A.1 Matrix-Inversion
A⁻¹ = (1/det(A)) · adj(A),   {adj(A)}ij = (−1)^(i+j) · det(Aij)
Special Cases:

• n = 2:

A = [a  b;  c  d]   ⇒   A⁻¹ = 1/(a·d − b·c) · [d  −b;  −c  a]

• n = 3:

A = [a  b  c;  d  e  f;  g  h  i]   ⇒   A⁻¹ = 1/det(A) · [e·i − f·h   c·h − b·i   b·f − c·e;
                                                           f·g − d·i   a·i − c·g   c·d − a·f;
                                                           d·h − e·g   b·g − a·h   a·e − b·d]

• (a + b)3 = a3 + 3a2 b + 3ab2 + b3

A.2 Differentiation with Matrices


d/dx (A · x) = Aᵀ
d/dx (xᵀ · A · x) = (Aᵀ + A) · x

A.3 Matrix Inversion Lemma


[M + v · vᵀ]⁻¹ = M⁻¹ − 1/(1 + vᵀ · M⁻¹ · v) · M⁻¹ · v · vᵀ · M⁻¹


B Rules
B.1 Trigo
α[°]     0     30     45     60     90     120     180
α[rad]   0     π/6    π/4    π/3    π/2    2π/3    π
sin(α)   0     1/2    √2/2   √3/2   1      √3/2    0
cos(α)   1     √3/2   √2/2   1/2    0      −1/2    −1
tan(α)   0     √3/3   1      √3     ±∞     −√3     0
cot(α)   ±∞    √3     1      √3/3   0      −√3/3   ±∞

B.2 Euler-Forms
eix = cos(x) + i · sin(x)
a + i · b = |a + i · b| · ei·∠(a+i·b)
sin(x) = (1/(2i)) · (e^(ix) − e^(−ix))
cos(x) = (1/2) · (e^(ix) + e^(−ix))

B.3 Derivatives
(log_a|x|)' = (1/x) · log_a(e) = 1/(x · ln a)
(a^(cx))' = (c · ln a) · a^(cx)
(tan x)' = 1/cos²(x) = 1 + tan²(x)
(arcsin x)' = 1/√(1 − x²)
(arccos x)' = −1/√(1 − x²)
(arctan x)' = 1/(1 + x²)

B.4 Logarithms
C · ln|y| = ln|y^C|
−ln|r| = ln|r⁻¹|
ln(1) = log(1) = 0


B.5 Magnitude and Phase


In the Bode diagram, magnitude and phase are plotted separately. Magnitude and phase for complex fractions are:

|(a + i·b)/(c + i·d)| = √((a² + b²)/(c² + d²))

∠((a + i·b)/(c + i·d)) = arg((a + i·b)/(c + i·d)) = arctan((b·c − a·d)/(a·c + b·d))

arg{a + i·b} = arctan(b/a)

arg{(a + i·b)^c} = c · arg{a + i·b}

arg{c/(a + i·b)} = arg{c} − arg{a + i·b}

B.6 dB Scale
Typically the unit for the magnitude is dB:

|Σ(jω)|dB = 20 · log10 |Σ(jω)|


|Σ(jω)| = 10^(|Σ(jω)|dB / 20)

(1/X)|dB = −X|dB
(X · Y)|dB = X|dB + Y|dB

Value   0.001   0.01   0.1   0.5    1/√2   1   √2    2     10   100
dB      −60     −40    −20   ≈ −6   ≈ −3   0   ≈ 3   ≈ 6   20   40


C MATLAB
C.1 General Commands

Command Description
A(i,j) Element of A in position i (row) and j (column)
abs(X) Magnitude of all elements of X
angle(X) Phase of all elements of X
X’ Complex conjugate and transpose of X
X.’ Transpose, not complex conjugate of X
conj(X) Complex conjugate of all elements of X
real(X) Real part of all elements of X
imag(X) Imaginary part of all elements of X
eig(A) Eigenvalues of A
[V,D]=eig(A) Eigenvalues D (diagonal elements), eigenvectors V (column vectors)
s=svd(A) singular values of A
[U,Sigma,V]=svd(A) Singular Values Decomposition of A
rank(A) Rank of A
det(A) Determinant of A
inv(A) Inverse of A
diag([a1,...,an]) Diagonalmatrix with a1,...,an as diagonal elements
zeros(x,y) Zero matrix of dimension x×y
zeros(x) Zero matrix of dimension x×x
eye(x,y) Identity matrix of dimension x×y
eye(x) Identity matrix of dimension x×x
ones(x,y) One-Matrix (all elements = 1) of dimension x×y
ones(x) One-Matrix (all elements = 1) of dimension x×x
max(A) Largest element in vector A (A Matrix: Max in column vectors)
min(A) Smallest element in vector A (A Matrix: Max in column vectors)
sum(A) Sum of elements of A (A Matrix: Sum row pro row)
dim=size(A) Dimension of A (size=[#rows #columns])
dim=size(A,a) a=1: dim=#rows, a=2: dim=#columns, otherwise dim=1
t=a:i:b t=[a,a+i,a+2i,...,b-i,b] (row vector)
y=linspace(a,b) row vector with 100 “linear-spaced” points in range [a,b]
y=linspace(a,b,n) row vector with n “linear-spaced” points in range [a,b]
y=logspace(a,b) row vector with 50 “logarithmically-spaced” points in range [10^a,10^b]
y=logspace(a,b,n) row vectors with n “logarithmically-spaced” points in range [10^a,10^b]
I=find(A) I: Index of non zero elements of A
disp(A) Print on screen of A (String: ’name’)


C.2 Control Systems Commands

Command Description
sys=ss(A,B,C,D) State-Space M. with A,B,C,D in time domain
sys=ss(A,B,C,D,Ts) State-Space M. with A,B,C,D and sampling Ts (discrete-time)
sys=zpk(Z,P,K) State-Space M. with zeros Z, poles P and gain K
sys=zpk(Z,P,K,Ts) State-Space M. with zeros Z, poles P, gain K and sampling Ts
sys=tf([bm ...b0],[an ...a0]) Transfer function with bn in numerator and an in denom.
P=tf(sys) Transfer function of sys
P.iodelay=... Inserts a delay into P
pole(sys) Poles of System
zero(sys) Zeros of System
[z,p,k]=zpkdata(sys) z: Zeros, p: Poles, k: static gain
ctrb(sys) or ctrb(A,b) Controllability Matrix
obsv(sys) or obsv(A,c) Observability Matrix
series(sys1,sys2) series of sys1 and sys2
feedback(sys1,sys2) sys1 with sys2 as (negative) Feedback
[Gm,Pm,Wgm,Wpm]=margin(sys) Gm: gain margin, Pm: phase margin, Wpm: crossover freq.
[y,t]=step(sys,Tend) y: step response of sys until Tend, t: time
[y,t]=impulse(sys,Tend) y: impulse response of sys until Tend, t: time
y=lsim(sys,u,t) Simulation of sys with input u for the time t
sim(’Simulink model’,Tend) Simulation of ’Simulink model’ until Tend
p0=dcgain(sys) static gain (P (0))
K=lqr(A,B,Q,R) Gain Matrix K (solution of the LQR-Problem)
[X,L,K]=care(A,B,Q) X: solution of the Riccati equation, G: Gain matrix
Paug=augw(G,W1,W3,W2) State-Space M. for H∞
[K,Cl,gamma]=hinfsyn(Paug) H∞ : K: Controller
fr=evalfr(sys,f) sys evaluated in f (s = f )
sysd=c2d(sys,Ts,method) Discretization of sys with method with Sampling Time Ts


C.3 Plot and Diagrams

Command Description
nyquist(sys) Nyquist diagram of the system sys
nyquist(sys,{a,b}) Nyquist diagram in interval [a,b] of the system sys
bode(sys) Bode diagram of the system sys
bode(sys,{a,b}) Bode diagram in intervall [a,b] of the system sys
bodemag(sys) Bode diagram (just magnitude) of the system sys
bodemag(sys,{a,b}) Bode diagram (just magnitude) in interval [a,b] of the system sys
rlocus(sys) Root Locus diagram
impulse(sys) Impulse Response of the system sys
step(sys) Step response of the system sys
pzmap(sys) Poles and zeros mapping of the system sys
svd(sys) Singular values dynamics of the dystem sys
plot(X,Y) Plot of Y as function of X
plot(X,Y,...,Xn,Yn) Plot of Yn as function of Xn (for all n)
stem(X,Y) Discrete plot of Y as function of X
stem(X,Y,...,Xn,Yn) Discrete plot of Yn as function of Xn (for all n)
xlabel(’name’) Name of the x-Axis
ylabel(’name’) Name of the y-Axis
title(’name’) Title of the plot
xlim([a b]) Range for the x-Axis (Plot between a and b)
ylim([a b]) Range for the y-Axis (Plot between a and b)
grid on Grid
title(’name’) Title of the plot
legend(’name1’,...,’name’) Legend
subplot(m,n,p) Grid m×n, Plot in Position p
semilogx(X,Y) Plot with logarithmic x-axis and linear y-axis
