Fault Tolerant Process Control c2
Scalar comparison functions, known as class K, K∞, and KL functions, are defined as follows.
Definition 2.1 A continuous function α : [0, a) → [0, ∞) is said to be of class K if it is strictly increasing and α(0) = 0. It is said to be of class K∞ if a = ∞ and α(r) → ∞ as r → ∞.
Definition 2.2 A function β : [0, a) × [0, ∞) → [0, ∞) is said to be of class KL
if, for each fixed t ≥ 0, the mapping β(r, t) is of class K with respect to r and, for
each fixed r, the mapping β(r, t) is decreasing with respect to t and β(r, t) → 0 as
t → ∞.
We will write α ∈ K and β ∈ KL to indicate that α is a class K function and
β is a class KL function, respectively. As an immediate application of these function
classes, we can rewrite the stability definitions of the previous section in a more
compact way. For example, stability of the system of Eq. (2.5) is equivalent to the
property that there exist a δ > 0 and a class K function, α, such that all solutions
with ‖x(0)‖ ≤ δ satisfy:

‖x(t)‖ ≤ α(‖x(0)‖), ∀t ≥ 0. (2.9)
Asymptotic stability is equivalent to the existence of a δ > 0 and a class KL function, β, such that all solutions with ‖x(0)‖ ≤ δ satisfy:

‖x(t)‖ ≤ β(‖x(0)‖, t), ∀t ≥ 0. (2.10)
Global asymptotic stability amounts to the existence of a class KL function, β, such
that the inequality of Eq. (2.10) holds for all initial conditions. Exponential stability
means that the function β takes the form β(r, s) = c r e^(−λs) for some c, λ > 0.
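As a quick numerical sanity check (an illustration added here, not part of the text), the exponential-stability function β(r, s) = c r e^(−λs) can be verified to have the class KL properties; the constants c = 2 and λ = 0.5 are arbitrary:

```python
import math

c, lam = 2.0, 0.5          # arbitrary illustrative constants

def beta(r, s):
    return c * r * math.exp(-lam * s)

# For fixed t, r -> beta(r, t) is of class K: beta(0, t) = 0, strictly increasing.
assert beta(0.0, 1.0) == 0.0
rs = [0.1 * i for i in range(1, 20)]
assert all(beta(a, 1.0) < beta(b, 1.0) for a, b in zip(rs, rs[1:]))

# For fixed r, t -> beta(r, t) is decreasing and beta(r, t) -> 0 as t -> infinity.
ts = [0.5 * i for i in range(20)]
assert all(beta(1.0, a) > beta(1.0, b) for a, b in zip(ts, ts[1:]))
assert beta(1.0, 50.0) < 1e-6
print("beta(r, s) = c r exp(-lam s) behaves as a class KL function")
```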
2.3.3 Lyapunov's Direct (Second) Method
Having defined stability and asymptotic stability of equilibrium points, the next task
is to find ways to determine stability. To be of practical interest, stability conditions
must not require that we explicitly solve Eq. (2.5). The direct method of Lyapunov
aims at determining the stability properties of an equilibrium point from the properties of f(x) and its relationship with a positive-definite function V(x).
Definition 2.3 Consider a C¹ (i.e., continuously differentiable) function V : Rⁿ → R. It is called positive-definite if V(0) = 0 and V(x) > 0 for all x ≠ 0. If
V(x) → ∞ as ‖x‖ → ∞, then V is said to be radially unbounded.
If V is both positive-definite and radially unbounded, then there exist two class
K∞ functions α₁, α₂ such that V satisfies:

α₁(‖x‖) ≤ V(x) ≤ α₂(‖x‖) (2.11)
for all x. We write V̇ for the derivative of V along the solutions of the system of
Eq. (2.5), i.e.:

V̇(x) = (∂V/∂x) f(x). (2.12)
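For instance (an assumed example, not one from the text), for the scalar system ẋ = −x³ with V(x) = x²/2, Eq. (2.12) gives V̇(x) = x·(−x³) = −x⁴ ≤ 0; a short simulation confirms that V indeed decreases along solutions:

```python
def f(x):
    return -x**3            # assumed example system x' = -x**3

def V(x):
    return 0.5 * x**2       # candidate Lyapunov function

def Vdot(x):
    return x * f(x)         # Eq. (2.12): (dV/dx) f(x) = -x**4

x, dt, vals = 1.5, 1e-3, []
for _ in range(5000):       # forward-Euler trajectory
    vals.append(V(x))
    x += dt * f(x)

assert all(Vdot(0.1 * k) <= 0.0 for k in range(-20, 21))
assert all(a >= b for a, b in zip(vals, vals[1:]))   # V nonincreasing along solutions
print("V decreases from", vals[0], "to", vals[-1])
```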
The main result of Lyapunov's stability theory is expressed by the following statement.
Theorem 2.1 (Lyapunov) Let x = 0 be an equilibrium point for the system of
Eq. (2.5) and D ⊂ Rⁿ be a domain containing x = 0 in its interior. Suppose that
there exists a positive-definite C¹ function V : Rⁿ → R whose derivative along the
solutions of the system of Eq. (2.5) satisfies:

V̇(x) ≤ 0, ∀x ∈ D (2.13)

then x = 0 of the system of Eq. (2.5) is stable. If the derivative of V satisfies:
The intuition behind this result is that a trajectory entering a sublevel set Ω_c = {x ∈ Rⁿ : V(x) ≤ c} can never come out again. When V̇ < 0, the trajectory moves from one Lyapunov surface to an inner Lyapunov surface with smaller
c. As c decreases, the Lyapunov surface V(x) = c shrinks to the origin, showing
that the trajectory approaches the origin as time progresses. If we only know that
V̇(x) ≤ 0, we cannot be sure that the trajectory will approach the origin, but we can
conclude that the origin is stable since the trajectory can be contained inside any
ball, B_ε.
The system of Eq. (2.17) is said to be input-to-state stable (ISS) if there exist a class KL function β and a class K function γ such that, for each initial state and each bounded input θ(·), the solutions satisfy:

‖x(t)‖ ≤ β(‖x(0)‖, t) + γ(‖θ‖_[0,t]), (2.19)
where ‖θ‖_[0,t] := ess sup{‖θ(s)‖ : s ∈ [0, t]} (the supremum norm on [0, t] except for a
set of measure zero).
Since the system of Eq. (2.17) is time-invariant, the same property results if we
write

‖x(t)‖ ≤ β(‖x(t₀)‖, t − t₀) + γ(‖θ‖_[t₀,t]), ∀t ≥ t₀ ≥ 0. (2.20)
The ISS property admits the following Lyapunov-like equivalent characterization:
The system of Eq. (2.17) is ISS if and only if there exists a positive-definite, radially
unbounded C¹ function V : Rⁿ → R such that for some class K∞ functions α and γ
we have

(∂V/∂x) f(x, θ) ≤ −α(‖x‖) + γ(‖θ‖), ∀x, θ. (2.21)
This is, in turn, equivalent to the following gain margin condition:

‖x‖ ≥ ρ(‖θ‖) ⟹ (∂V/∂x) f(x, θ) ≤ −α(‖x‖), (2.22)

where ρ, α ∈ K∞.
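As a hedged illustration (the system below is an assumed example, not one from the text), the scalar system ẋ = −x + θ with V(x) = x²/2 satisfies the gain margin condition with ρ(r) = 2r and α(r) = r²/2, which can be spot-checked on random samples:

```python
import random

random.seed(0)
rho = lambda r: 2.0 * r            # assumed gain function rho in Eq. (2.22)
alpha_ = lambda r: 0.5 * r * r     # assumed decay function alpha

for _ in range(10000):
    theta = random.uniform(-1.0, 1.0)
    # sample a state with |x| >= rho(|theta|), as the condition requires
    mag = random.uniform(rho(abs(theta)) + 1e-9, 10.0)
    x = random.choice([-1.0, 1.0]) * mag
    Vdot = x * (-x + theta)        # (dV/dx) f(x, theta) with V = x**2 / 2
    assert Vdot <= -alpha_(abs(x)) + 1e-9
print("gain margin condition of Eq. (2.22) verified on 10000 samples")
```

The check works because |x| ≥ 2|θ| implies xθ ≤ x²/2, so −x² + xθ ≤ −x²/2.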
u = −((L_f V(x) + √((L_f V)²(x) + (L_g V)⁴(x))) / (L_g V)²(x)) L_g V(x),  if L_g V(x) ≠ 0,
u = 0,  if L_g V(x) = 0. (2.29)
It should be noted that Eq. (2.28) can be satisfied only if

L_g V(x) = 0 ⟹ L_f V(x) < 0, ∀x ≠ 0. (2.30)
The intuitive interpretation of the existence of a CLF is as follows: For any x such
that L_g V(x) ≠ 0, since there are no constraints on the input, V̇ can be made negative
by picking a large enough control action, with an appropriate sign, to counter the
effect of a possibly positive L_f V(x) term. For all x such that L_g V(x) = 0, the control
action has no effect on the Lyapunov-function derivative. For it to be possible to
show stability using the CLF V, it should therefore be true that whenever L_g V(x) = 0, we also have L_f V(x) < 0. This is the requirement that is formalized in
Eq. (2.30). With such a CLF, Eq. (2.29) results in
W(x) = √((L_f V)²(x) + (L_g V)⁴(x)) > 0, ∀x ≠ 0. (2.31)
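The formula of Eq. (2.29) is Sontag's universal formula, and it is straightforward to implement. The sketch below applies it to the assumed example ẋ = x² + u with CLF V(x) = x²/2 (so L_f V = x³ and L_g V = x); these choices are illustrative, not from the text:

```python
import math

def sontag(LfV, LgV):
    # universal formula of Eq. (2.29)
    if LgV == 0.0:
        return 0.0
    return -(LfV + math.sqrt(LfV**2 + LgV**4)) / LgV**2 * LgV

x, dt = 2.0, 1e-3
for _ in range(20000):                     # 20 time units, forward Euler
    LfV, LgV = x**3, x                     # V = x**2/2 for x' = x**2 + u
    u = sontag(LfV, LgV)
    Vdot = LfV + LgV * u                   # equals -sqrt(LfV**2 + LgV**4)
    assert Vdot <= 1e-12                   # the CLF decays, as in Eq. (2.31)
    x += dt * (x**2 + u)
assert abs(x) < 1e-2
print("state under Sontag's formula:", x)
```

Along closed-loop solutions V̇ = −√((L_f V)² + (L_g V)⁴) = −W(x), which is exactly the quantity of Eq. (2.31).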
A further characterization of a stabilizing control law k(x) for the system of
Eq. (2.27) with a given CLF V is that k(x) is continuous at x = 0 if and only if
the CLF satisfies the small control property: For each ε > 0 there is a δ(ε) > 0 such
that, if x ≠ 0 satisfies |x| ≤ δ, then there is some u with |u| < ε such that

L_f V(x) + L_g V(x)u < 0. (2.32)
The main deficiency of the CLF concept as a design tool is that for most nonlinear
systems a CLF is not known. The task of finding an appropriate CLF may be as complex as that of designing a stabilizing feedback law. In the next section, we review
one commonly used tool for designing a Lyapunov-based control law that utilizes
coordinate transformations. We also note that in the presence of input constraints,
the concept of a CLF needs to be revisited, and this issue is discussed in Sect. 2.6.
2.5 Feedback Linearization and Zero Dynamics
One of the popular methods for nonlinear control design (or, alternatively, one way
to construct a Lyapunov function for the purpose of control design) is feedback
linearization, which employs a change of coordinates and feedback control to transform a nonlinear system into a system whose dynamics are linear (at least partially).
This transformation allows the construction and use of a Lyapunov function for the
control design utilizing results from linear systems analysis. A great deal of research has been devoted to this subject over the last four decades, as evidenced by
the comprehensive books [72, 126] and the references therein. In this section, we
briefly review some of the basic geometric concepts that will be used in subsequent
chapters. While this book does not require the formalism of differential geometry,
we will employ Lie derivatives only for notational convenience. If f : Rⁿ → Rⁿ
is a vector field and h : Rⁿ → R is a scalar function, the notation L_f h is used for
(∂h/∂x) f(x). It is recursively extended to
L_f^k h(x) = L_f(L_f^{k−1} h(x)) = (∂(L_f^{k−1} h(x))/∂x) f(x).
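As a concrete sketch (the two-state system is an assumed example, not from the text), Lie derivatives and their recursion can be approximated numerically with central differences; for f(x) = (x₂, −x₁ − x₂) and h(x) = x₁ one can check by hand that L_f h = x₂ and L_f² h = −x₁ − x₂:

```python
EPS = 1e-6

def f(x):
    return [x[1], -x[0] - x[1]]   # assumed example vector field

def grad(phi, x):
    # central-difference gradient of the scalar function phi at x
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += EPS
        xm[i] -= EPS
        g.append((phi(xp) - phi(xm)) / (2 * EPS))
    return g

def lie(phi, x):
    # L_f phi = (d phi / d x) f(x)
    return sum(gi * fi for gi, fi in zip(grad(phi, x), f(x)))

h = lambda x: x[0]
x0 = [0.7, -0.3]
Lfh = lie(h, x0)                      # expect x2 = -0.3
L2fh = lie(lambda z: lie(h, z), x0)   # recursion: expect -x1 - x2 = -0.4
assert abs(Lfh - (-0.3)) < 1e-6
assert abs(L2fh - (-0.4)) < 1e-3
print("Lf h =", round(Lfh, 4), " Lf^2 h =", round(L2fh, 4))
```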
Let us consider the following nonlinear system:
ẋ = f(x) + g(x)u,
y = h(x), (2.33)

where x ∈ Rⁿ, u ∈ R, y ∈ R, and f, g, h are analytic (i.e., infinitely differentiable)
functions. The derivative of the output y = h(x) is given by

ẏ = (∂h/∂x)(x) f(x) + (∂h/∂x)(x) g(x)u = L_f h(x) + L_g h(x)u. (2.34)
If L_g h(x₀) ≠ 0, then the system of Eq. (2.33) is said to have relative degree one at
x₀ (note that since the functions are smooth, L_g h(x₀) ≠ 0 implies that there exists a
neighborhood of x₀ on which L_g h(x) ≠ 0). In our terminology, this implies that the
output y is separated from the input u by one integration only. If L_g h(x₀) = 0, there
are two cases:
(i) If there exist points arbitrarily close to x₀ such that L_g h(x) ≠ 0, then the system
of Eq. (2.33) does not have a well-defined relative degree at x₀.
(ii) If there exists a neighborhood B₀ of x₀ such that L_g h(x) = 0 for all x ∈ B₀,
then the relative degree of the system of Eq. (2.33) may be well-defined.
In case (ii), we define

ζ₁(x) = h(x), ζ₂(x) = L_f h(x) (2.35)
and compute the second derivative of y:

ÿ = (∂ζ₂/∂x)(x) f(x) + (∂ζ₂/∂x)(x) g(x)u = L_f² h(x) + L_g L_f h(x)u. (2.36)
If L_g L_f h(x₀) ≠ 0, then the system of Eq. (2.33) is said to have relative degree two
at x₀. If L_g L_f h(x) = 0 in a neighborhood of x₀, then we continue the differentiation
procedure.
Definition 2.7 The system of Eq. (2.33) is said to have relative degree r at the point
x₀ if there exists a neighborhood B₀ of x₀ on which

L_g h(x) = L_g L_f h(x) = ··· = L_g L_f^{r−2} h(x) = 0, (2.37)
L_g L_f^{r−1} h(x) ≠ 0. (2.38)

If Eqs. (2.37)–(2.38) are valid for all x ∈ Rⁿ, then the relative degree of the system
of Eq. (2.33) is said to be globally defined.
Suppose now that the system of Eq. (2.33) has relative degree r at x₀. Then
we can use a change of coordinates and feedback control to locally transform this
system into the cascade interconnection of an r-dimensional linear system and an
(n − r)-dimensional nonlinear system. In particular, after differentiating the output
y = h(x) r times, the control appears:

y^(r) = L_f^r h(x) + L_g L_f^{r−1} h(x)u. (2.39)
Since L_g L_f^{r−1} h(x) ≠ 0 in a neighborhood of x₀, we can linearize the input–output
dynamics of the system of Eq. (2.33) using feedback to cancel the nonlinearities in
Eq. (2.39):

u = (1 / (L_g L_f^{r−1} h(x))) (−L_f^r h(x) + v). (2.40)
Then the dynamics of y and its derivatives are governed by a chain of r integrators: y^(r) = v. Since our original system of Eq. (2.33) has dimension n, we need
to account for the remaining n − r states. Using differential geometry tools, it can
be shown that it is always possible to find n − r functions χ_{r+1}(x), ..., χ_n(x) with
(∂χ_i/∂x)(x) g(x) = 0, for i = r + 1, ..., n, such that the change of coordinates

ζ₁ = y = h(x), ζ₂ = ẏ = L_f h(x), ..., ζ_r = y^(r−1) = L_f^{r−1} h(x),
η₁ = χ_{r+1}(x), ..., η_{n−r} = χ_n(x) (2.41)
is locally invertible and transforms, along with the feedback law of Eq. (2.40), the
system of Eq. (2.33) into

ζ̇₁ = ζ₂,
...
ζ̇_r = v,
η̇₁ = Ψ₁(ζ, η),
...
η̇_{n−r} = Ψ_{n−r}(ζ, η),
y = ζ₁, (2.42)
where Ψ₁(ζ, η) = L_f^{r+1} h(x), ..., Ψ_{n−r}(ζ, η) = L_f^n h(x).
The states η₁, ..., η_{n−r} have been rendered unobservable from the output y by
the control of Eq. (2.40). Hence, feedback linearization in this case is the nonlinear equivalent of placing n − r poles of a linear system at the origin and canceling
the r zeros with the remaining poles. Of course, to guarantee stability, the canceled
zeros must be stable. In the nonlinear case, using the new control input v to stabilize the linear subsystem of Eq. (2.42) does not guarantee stability of the whole
system, unless the stability of the nonlinear part of the system of Eq. (2.42) has been
established separately.
When v is used to keep the output y equal to zero for all t > 0, that is, when
ζ₁ ≡ ··· ≡ ζ_r ≡ 0, the dynamics of η₁, ..., η_{n−r} are described by

η̇₁ = Ψ₁(0, η),
...
η̇_{n−r} = Ψ_{n−r}(0, η). (2.43)
They are called the zero dynamics of the system of Eq. (2.33) because they evolve
on the subset of the state space on which the output of the system is identically
zero. If the equilibrium at η₁ = ··· = η_{n−r} = 0 of the zero dynamics of Eq. (2.43) is
asymptotically stable, the system of Eq. (2.33) is said to be minimum phase.
Remark 2.2 Most nonlinear analytical controllers emanating from the area of geometric control are input–output linearizing and induce a linear input–output response in the absence of constraints [72, 81]. For the class of processes modeled by
equations of the form of Eq. (2.33) with relative order r and under the minimum
phase assumption, the appropriate linearizing state feedback controller is given by

u = (1 / (L_g L_f^{r−1} h(x))) (v − L_f^r h(x) − β₁ L_f^{r−1} h(x) − ··· − β_{r−1} L_f h(x) − β_r h(x)) (2.44)
and induces the linear rth-order response

d^r y/dt^r + β₁ d^{r−1}y/dt^{r−1} + ··· + β_{r−1} dy/dt + β_r y = v, (2.45)

where the tunable parameters β₁, ..., β_r are essentially closed-loop time constants
that influence and shape the output response. The nominal stability of the process is
guaranteed by placing the roots of the polynomial s^r + β₁ s^{r−1} + ··· + β_{r−1} s + β_r
in the open left half of the complex plane.
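To illustrate (with an assumed example, not one from the text), for ẋ₁ = x₂, ẋ₂ = x₁² + u, y = x₁ (relative degree r = 2, L_f h = x₂, L_f² h = x₁², L_g L_f h = 1), the controller of Eq. (2.44) with v = 0 reduces to u = −x₁² − β₁x₂ − β₂x₁ and induces the stable linear response of Eq. (2.45):

```python
b1, b2 = 2.0, 1.0                 # place s**2 + b1 s + b2 = (s + 1)**2, stable
x1, x2, dt = 1.0, 0.0, 1e-3
for _ in range(20000):            # 20 time units of forward Euler
    u = -x1**2 - b1 * x2 - b2 * x1     # Eq. (2.44) with v = 0 for this example
    x1, x2 = x1 + dt * x2, x2 + dt * (x1**2 + u)
assert abs(x1) < 1e-3 and abs(x2) < 1e-3
print("y and y' regulated to:", x1, x2)
```

The feedback cancels the nonlinearity x₁² exactly, so the closed loop obeys ÿ + 2ẏ + y = 0 and the output settles regardless of the plant nonlinearity.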
2.6 Input Constraints
The presence of input constraints requires revisiting the concept of the CLF for both
linear and nonlinear systems. To understand this, consider a scalar linear system of
the form ẋ = αx + u, with u_min ≤ u ≤ u_max. For the sake of simplicity and without
loss of generality, let us assume u_min < 0 < u_max. For the case of scalar
systems, it is possible to determine the entire set of initial conditions from where
the system can be driven to the origin subject to input constraints (regardless of the
choice of the control law). This set is generally referred to as the null controllable
region (NCR). An explicit computation of the NCR is possible in this case because
for scalar systems (as discussed earlier) there exists a unique direction in which the
system states need to move to achieve stability.

To determine this set, one can simply analyze the system trajectory to the left
and right of zero. Consider first x > 0, and the requirement that for x > 0, ẋ < 0.
If α < 0, ẋ < 0 ∀x > 0 (and also ẋ > 0 ∀x < 0). On the other hand, if α > 0,
ẋ < 0 can only be achieved for x < −u_min/α. The analysis reveals what was perhaps intuitive to begin with: For
linear systems, if the steady state is open-loop stable, the NCR is the entire state
space, while if the steady state is open-loop unstable, it has a finite NCR, which
in this case is {x : −u_max/α < x < −u_min/α}. With a saturating control law, u_c > u_max results in u = u_max, in
turn resulting in ẋ < 0. A similar result is obtained for −u_max/α < x < −|u_max|/k. The
analysis shows that for scalar systems, while the region of unconstrained operation
for a particular control law might depend on the specific control law chosen, the
stability region under the control law might still possibly be the entire NCR.
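The scalar NCR just derived can be written down directly; the numbers below (α = 1, u_min = −1, u_max = 2) are arbitrary illustrative choices:

```python
# For x' = alpha*x + u with alpha > 0 and u in [u_min, u_max], the null
# controllable region is the open interval (-u_max/alpha, -u_min/alpha).
alpha, u_min, u_max = 1.0, -1.0, 2.0       # assumed illustrative values
ncr = (-u_max / alpha, -u_min / alpha)     # here (-2, 1)

def can_decrease_abs(x):
    # is there an admissible u moving x toward the origin?
    if x > 0:
        return alpha * x + u_min < 0
    if x < 0:
        return alpha * x + u_max > 0
    return True

assert ncr == (-2.0, 1.0)
assert can_decrease_abs(0.9) and can_decrease_abs(-1.9)      # inside the NCR
assert not can_decrease_abs(1.1) and not can_decrease_abs(-2.1)  # outside
print("NCR =", ncr)
```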
The issue of directionality again crops up when considering non-scalar systems.
While it is relatively easy to determine the region of unconstrained operation for a
particular control law and, in certain cases, the region of attraction for the closed-loop system, it is not necessary that the region of attraction for the closed-loop
system match the NCR. This is due to the fact that it is in general difficult
to determine, for a particular value of the state, the unique direction in which the
inputs should saturate to achieve closed-loop stability. To achieve this objective, recent control designs have utilized the explicit characterization of the NCR [71] in
designing CCLF-based control laws that ensure stabilization from all initial conditions in the NCR [93, 94]. For nonlinear systems, where the characterization of
the NCR is still an open problem, a meaningful control objective is to be able to
explicitly account for the constraints in the control design and provide an explicit
characterization of the closed-loop stability region.
2.7 Model Predictive Control
One of the control methods useful for accounting for constraints and optimality simultaneously is model predictive control (MPC). MPC is an approach that
accounts for optimality considerations explicitly and is widely adopted in industry
as an effective approach to deal with large multivariable constrained optimal control problems. The main idea of MPC is to choose control actions by repeatedly
solving online a constrained optimization problem, which aims at minimizing a
performance index over a finite prediction horizon based on predictions obtained by
a system model. In general, an MPC design is composed of three components:
1. A model of the system. This model is used to predict the future evolution of the
system in open-loop, and the efficiency of the calculated control actions of an
MPC depends highly on the accuracy of the model.
2. A performance index over a finite horizon. This index is minimized subject to
constraints imposed by the system model, restrictions on control inputs and system state, and other considerations at each sampling time to obtain a trajectory
of future control inputs.
3. A receding horizon scheme. This scheme introduces the notion of feedback into
the control law to compensate for disturbances and modeling errors, whereby
only the first piece of the future input trajectory is implemented and the constrained optimization problem is resolved at the next sampling instance.
Consider the control of the system of Eq. (2.1) and assume that state measurements of the system of Eq. (2.1) are available at synchronous sampling time
instants {t_{k≥0}}; a standard MPC is formulated as follows [60]:
min_{u∈S(Δ)} ∫_{t_k}^{t_{k+N}} [‖x̃(τ)‖_{Q_c} + ‖u(τ)‖_{R_c}] dτ + F(x̃(t_{k+N})) (2.47)

s.t. dx̃(t)/dt = f(x̃(t), u(t)), (2.48)
u(t) ∈ U, (2.49)
x̃(t_k) = x(t_k), (2.50)
where S(Δ) is the family of piecewise constant functions with sampling period
Δ, N is the prediction horizon, Q_c and R_c are strictly positive-definite symmetric
weighting matrices, x̃ is the predicted trajectory of the system due to control input
u with initial state x(t_k) at time t_k, and F(·) denotes the terminal penalty.
The optimal solution to the MPC optimization problem defined by Eqs. (2.47)–(2.50) is denoted as u*(t|t_k), which is defined for t ∈ [t_k, t_{k+N}). The first step value
of u*(t|t_k) is applied to the closed-loop system for t ∈ [t_k, t_{k+1}). At the next sampling time t_{k+1}, when a new measurement of the system state x(t_{k+1}) is available,
the control evaluation and implementation procedure is repeated. The manipulated input of the system of Eq. (2.1) under the control of the MPC of Eqs. (2.47)–(2.50) is defined as follows:

u(t) = u*(t|t_k), ∀t ∈ [t_k, t_{k+1}), (2.51)

which is the standard receding horizon scheme.
In the MPC formulation of Eqs. (2.47)–(2.50), Eq. (2.47) defines a performance
index or cost index that should be minimized. In addition to penalties on the state
and control actions, the index may also include penalties on other considerations,
for example, the rate of change of the inputs. Equation (2.48) is the model of the
system of Eq. (2.1), which is used in the MPC to predict the future evolution of the
system. Equation (2.49) takes into account the constraint on the control input, and
Eq. (2.50) provides the initial state for the MPC, which is a measurement of the
actual system state. Note that in the above MPC formulation, state constraints are
not considered but can be readily taken into account.
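The receding horizon scheme can be sketched in a few lines. The discrete-time scalar model and brute-force grid search below are deliberate simplifications of the continuous formulation of Eqs. (2.47)–(2.50), chosen so the example stays self-contained:

```python
import itertools

# Assumed scalar model x+ = a*x + u with a gridded admissible input set.
a, N, q, r = 1.2, 3, 1.0, 0.1
U = [i * 0.1 for i in range(-10, 11)]      # admissible inputs in [-1, 1]

def cost(x, useq):
    # finite-horizon performance index, cf. Eq. (2.47) (no terminal penalty)
    J = 0.0
    for u in useq:
        x = a * x + u                      # model prediction, cf. Eq. (2.48)
        J += q * x * x + r * u * u
    return J

def mpc_step(x):
    best = min(itertools.product(U, repeat=N), key=lambda s: cost(x, s))
    return best[0]                         # receding horizon: first move only

x = 0.5
for _ in range(25):
    x = a * x + mpc_step(x)                # re-solve at every sampling instant
assert abs(x) < 1e-2
print("closed-loop state after 25 steps:", x)
```

Only the first element of the optimal input sequence is applied at each step, and the optimization is re-solved from the new measured state, which is the receding horizon idea of Eq. (2.51).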
It is well known that the MPC of Eqs. (2.47)–(2.50) is not necessarily stabilizing.
To understand this, let us consider a discrete-time version of the MPC implementation for a scalar system described by x(k + 1) = αx(k) + u(k), in the absence of
input constraints. Also, let N = 1, q, and r denote the horizon, the penalty on the state
deviation, and the penalty on the input deviation, respectively. The objective function then simplifies to
q(α²x(k)² + u(k)² + 2αx(k)u(k)) + ru(k)², and the minimizing control action is
u(k) = −qαx(k)/(q + r), resulting in the closed-loop system x(k + 1) = rαx(k)/(q + r). The minimizing solution will result in stabilizing control action only if q > r(α − 1). Note that
for α < 1, this trivially holds (i.e., the result trivially holds for stabilization around
an open-loop stable steady state). For α > 1, the result establishes how large the
penalty on the set-point deviation should be compared to the penalty on the control
action for the controller to be stabilizing. The analysis is meant to bring out the fact
that, generally speaking, the stability of the closed-loop system in the MPC depends
on the MPC parameters (penalties and the control horizon) as well as the system
dynamics. Note also that even though we have analyzed an unconstrained system,
the prediction horizon we used was finite (in comparison to linear quadratic regulator designs, where the infinite horizon cost is essentially captured in computing the
control action, and therefore results in a stabilizing controller in the absence of constraints). Finally, also note that for the case of an infinite horizon, the optimum solution
is also the stabilizing one, and it can be shown that such an MPC will stabilize the
system with the NCR as the stability region (albeit at an impractical computational
burden).
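The scalar example above can be verified directly (a plays the role of α in the text):

```python
# x(k+1) = a x(k) + u(k) with N = 1 and u(k) = -q a x(k) / (q + r) gives
# x(k+1) = r a x(k) / (q + r), stable iff q > r (a - 1) for a > 1.
def stable(a, q, r, steps=200):
    x = 1.0
    for _ in range(steps):
        u = -q * a * x / (q + r)
        x = a * x + u
    return abs(x) < 1.0

a = 2.0
assert stable(a, q=1.5, r=1.0)        # q > r(a - 1) = 1: stabilizing
assert not stable(a, q=0.5, r=1.0)    # q < r(a - 1): closed loop diverges
print("stability boundary q = r(a - 1) confirmed for a =", a)
```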
To achieve closed-loop stability without relying on the objective function parameters, different approaches have been proposed in the literature. One class of
approaches uses well-designed terminal penalty terms that capture infinite horizon costs; please see [16, 100] for surveys of these approaches. Another class
of approaches imposes stability constraints in the MPC optimization problem
[3, 14, 100]. There are also efforts focusing on obtaining explicit stabilizing MPC laws
using offline computations [92]. However, the implicit nature of the MPC control law
makes it very difficult to explicitly characterize, a priori, the admissible initial conditions starting from where the MPC is guaranteed to be feasible and stabilizing.
In practice, the initial conditions are usually chosen in an ad hoc fashion and tested
through extensive closed-loop simulations.
2.8 Lyapunov-Based MPC
In this section, we introduce the Lyapunov-based MPC (LMPC) designs proposed in
[93, 108, 110], which allow for an explicit characterization of the stability region
and guarantee controller feasibility and closed-loop stability.
For the predictive control of the system of Eq. (2.1), the key idea in LMPC-based
designs is to utilize a Lyapunov-function-based constraint and achieve immediate
decay of the Lyapunov function. The set of initial conditions for which it is possible
to achieve an instantaneous decay in the Lyapunov function value can be computed
explicitly, and picking the (preferably largest) level curve contained in this set can
provide the explicitly characterized feasibility and stability region for the LMPC.
The following example of the LMPC design is based on an existing explicit control law h(x) which is able to stabilize the closed-loop system [108, 110]. The formulation of the LMPC is as follows:
min_{u∈S(Δ)} ∫_{t_k}^{t_{k+N}} [‖x̃(τ)‖_{Q_c} + ‖u(τ)‖_{R_c}] dτ (2.52)

s.t. dx̃(t)/dt = f(x̃(t), u(t)), (2.53)
u(t) ∈ U, (2.54)
x̃(t_k) = x(t_k), (2.55)
(∂V(x(t_k))/∂x) f(x(t_k), u(t_k)) ≤ (∂V(x(t_k))/∂x) f(x(t_k), h(x(t_k))), (2.56)
where V(x) is a Lyapunov function associated with the nonlinear control law h(x).
The optimal solution to this LMPC optimization problem is denoted as u*_l(t|t_k),
which is defined for t ∈ [t_k, t_{k+N}). The manipulated input of the system of Eq. (2.1)
under the control of the LMPC of Eqs. (2.52)–(2.56) is defined as follows:

u(t) = u*_l(t|t_k), ∀t ∈ [t_k, t_{k+1}), (2.57)

which implies that this LMPC also adopts a standard receding horizon strategy.
In the LMPC defined by Eqs. (2.52)–(2.56), the constraint of Eq. (2.56) guarantees
that the value of the time derivative of the Lyapunov function, V̇(x), at time t_k is
smaller than or equal to the value obtained if the nonlinear control law u = h(x)
is implemented in the closed-loop system in a sample-and-hold fashion. This is a
constraint that allows one to prove (when state measurements are available at every
synchronous sampling time) that the LMPC inherits the stability and robustness
properties of the nonlinear control law h(x) when it is applied in a sample-and-hold
fashion; please see [30, 125] for results on sampled-data systems.
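A minimal sketch of the idea, under assumed choices (scalar dynamics ẋ = x + u, V(x) = x²/2, explicit law h(x) = −2x clipped to U = [−3, 3], Euler predictions, gridded inputs), none of which come from the text:

```python
import itertools

Delta, N = 0.05, 3
U = [i * 0.25 for i in range(-12, 13)]               # inputs in [-3, 3]

def h(x):
    # assumed explicit stabilizing law, clipped to the input set
    return max(-3.0, min(3.0, -2.0 * x))

def cost(x, seq):
    # finite-horizon index, cf. Eq. (2.52), with Euler model predictions
    J = 0.0
    for u in seq:
        x = x + Delta * (x + u)
        J += x * x + 0.1 * u * u
    return J

def lmpc(x):
    # only sequences whose first move satisfies the contractive constraint
    # x * (x + u) <= x * (x + h(x))   (cf. Eq. (2.56)) are admissible
    feas = [s for s in itertools.product(U, repeat=N)
            if x * (x + s[0]) <= x * (x + h(x)) + 1e-9]
    return min(feas, key=lambda s: cost(x, s))[0]

x = 1.0
for _ in range(100):
    u = lmpc(x)
    assert x * (x + u) <= x * (x + h(x)) + 1e-9      # V decays at least as fast
    x = x + Delta * (x + u)
assert abs(x) < 0.05
print("LMPC state after 100 samples:", x)
```

Because h(x) itself is always feasible for the constraint, the optimization never becomes infeasible, which is the mechanism behind the guaranteed feasibility claimed above.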
Let us denote the stability region of h(x) as Ω. The LMPC of Eqs. (2.52)–(2.56) preserves this stability region when the
sampling time is sufficiently small. Note that the region Ω can be explicitly
characterized; please refer to [110] for more discussion on this issue. The main
advantage of the LMPC approach with respect to the nonlinear control law h(x)
is that optimality considerations can be taken explicitly into account (as well as
constraints on the inputs and the states [110]) in the computation of the control
actions within an online optimization framework while improving the closed-loop
performance of the system. Since the closed-loop stability and feasibility of the
LMPC of Eqs. (2.52)–(2.56) are guaranteed by the nonlinear control law h(x), it is
unnecessary to use a terminal penalty term in the cost index (see Eq. (2.52) and
compare it with Eq. (2.47)), and the length of the horizon N does not affect the
stability of the closed-loop system, but it does affect the closed-loop performance.
2.9 Hybrid Systems
Hybrid systems are characterized by the co-existence of continuous modes of operation along with discrete switches between the distinct modes of operation, and they
arise frequently in the design and analysis of fault-tolerant control systems. The
class of hybrid systems of interest to the focus of this book, switched systems, can
be described by

ẋ = f_{i(x,t)}(x) + g_{i(x,t)}(x) u_{i(x,t)}, (2.58)

where x ∈ Rⁿ, u ∈ Rⁿ are the continuous variables and i ∈ N are the discrete variables indexing the mode of operation.
ables indexing the mode of operation. The nature of the function i(x, t ) and, in par-
ticular, its two specic forms i(x) and i(t ) result in the so-called state-dependent and
time-dependent switching. What is of more interest from a stability analysis and de-
sign point of view (both when considering the design of control laws and, in the case
of time-dependent switching, the switching signal) is the possibility of innitely
many switches where it becomes crucial to explicitly consider the switched nature
of the system in the stability analysis. In particular, when the possibility of innitely
many switches exists, establishing stability in the individual modes of operation
is not sufcient [19], and additional conditions on the behavior of the Lyapunov-
functions (used to establish stability in the individual modes of operation) during
the switching (as well as of sufcient dwell-time [68]) need to be satised for the
stability of the switched system. For the case of nite switches, the considerations
include ensuring stability requirements at the onset of a particular mode are satised
and, in particular, satised for the terminal (last) mode of operation.
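The need to go beyond mode-by-mode stability can be seen numerically. The two modes below (a standard textbook-style construction, not an example from this chapter) are each asymptotically stable, yet a state-dependent switching rule between them produces a diverging trajectory:

```python
import math

A1 = [[-0.1, 1.0], [-10.0, -0.1]]   # stable focus, level sets tall in x2
A2 = [[-0.1, 10.0], [-1.0, -0.1]]   # stable focus, level sets wide in x1

def step(A, x, dt):
    # one forward-Euler step of x' = A x
    return [x[0] + dt * (A[0][0] * x[0] + A[0][1] * x[1]),
            x[1] + dt * (A[1][0] * x[0] + A[1][1] * x[1])]

def norm(x):
    return math.hypot(x[0], x[1])

dt = 1e-3
# Each mode alone is asymptotically stable:
for A in (A1, A2):
    x = [1.0, 0.0]
    for _ in range(100000):            # 100 time units
        x = step(A, x, dt)
    assert norm(x) < 1e-3
# Switched system: pick the "wrong" mode in each quadrant -> divergence.
x = [1.0, 0.0]
for _ in range(10000):                 # 10 time units
    A = A1 if x[0] * x[1] < 0 else A2
    x = step(A, x, dt)
assert norm(x) > 1e3
print("each mode stable alone; switched trajectory norm:", norm(x))
```

Each mode traces ellipses of a different orientation; the switching rule always selects the mode whose ellipse carries the state farther from the origin, so the norm grows by roughly a constant factor per revolution despite both modes being stable.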
2.10 Conclusions
In this chapter, some fundamental results on nonlinear systems analysis and control
were briey reviewed. First, the class of nonlinear systems that will be considered
in this book was presented; then the denitions of stability of nonlinear systems
were introduced; and following that, techniques for stabilizing nonlinear systems,
for example, Lyapunov-based control, feedback linearization, handling constraints,
model predictive control and Lyapunov-based model predictive control and stability
of hybrid (switched) systems were discussed.
http://www.springer.com/978-1-4471-4807-4