Henrik Anfinsen
Ole Morten Aamo
Adaptive Control of Hyperbolic PDEs
Communications and Control Engineering
Series editors
Alberto Isidori, Roma, Italy
Jan H. van Schuppen, Amsterdam, The Netherlands
Eduardo D. Sontag, Boston, USA
Miroslav Krstic, La Jolla, USA
Communications and Control Engineering is a high-level academic monograph
series publishing research in control and systems theory, control engineering and
communications. It has worldwide distribution to engineers, researchers, educators
(several of the titles in this series find use as advanced textbooks although that is not
their primary purpose), and libraries.
The series reflects the major technological and mathematical advances that have a
great impact in the fields of communication and control. The range of areas to
which control and systems theory is applied is broadening rapidly with particular
growth being noticeable in the fields of finance and biologically-inspired control.
Books in this series generally pull together many related research threads in more
mature areas of the subject than the highly-specialised volumes of Lecture Notes in
Control and Information Sciences. This series’s mathematical and control-theoretic
emphasis is complemented by Advances in Industrial Control which provides a
much more applied, engineering-oriented outlook.
Publishing Ethics: Researchers should conduct their research from research
proposal to publication in line with best practices and codes of conduct of relevant
professional bodies and/or national and international regulatory bodies. For more
details on individual ethics matters please see:
https://www.springer.com/gp/authors-editors/journal-author/journal-author-helpdesk/publishing-ethics/14214
Adaptive Control
of Hyperbolic PDEs
Henrik Anfinsen
Department of Engineering Cybernetics
Norwegian University of Science and Technology
Trondheim, Norway

Ole Morten Aamo
Department of Engineering Cybernetics
Norwegian University of Science and Technology
Trondheim, Norway
MATLAB® is a registered trademark of The MathWorks, Inc., 1 Apple Hill Drive, Natick, MA
01760-2098, USA, http://www.mathworks.com.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
A few years ago, we came across an interesting problem related to oil well drilling.
By controlling pressure at the surface, the task was to attenuate pressure oscillations
at the bottom of the well, several kilometers below the surface. At the same time,
the 2011 CDC paper by Vazquez, Krstic, and Coron on “Backstepping boundary
stabilization and state estimation of a 2 × 2 linear hyperbolic system” was published, providing the tools necessary to solve the problem. Various applications in
the oil and gas industry, prone to uncertainty, subsequently led us to study the
adaptive control problem for hyperbolic partial differential equations relying
heavily on the infinite-dimensional backstepping technique.
Over the years that followed, we derived a fairly complete theory for adaptive
control of one-dimensional systems of coupled linear hyperbolic PDEs. The
material is presented in this book in a systematic manner, giving a clear overview of the state of the art. The book is divided into five parts, with Part I
devoted to introductory material and the remaining four parts distinguished by the
structure of the system of equations under consideration. Part II contains scalar
systems, while Part III deals with the simplest systems with bi-directional information flow. They constitute the bulk of the book with the most complete treatment in
terms of variations of the problem: collocated versus anti-collocated sensing and
control, swapping design, identifier-based design, and various constellations of
uncertainty. Parts IV and V extend (some of) the results from Part III to systems
with bi-directional information flow governed by several coupled transport equations in one or both directions.
The book should be of interest to researchers, practicing control engineers, and
students of automatic control. Readers having studied adaptive control for ODEs
will recognize the techniques used for developing adaptive laws and providing
closed-loop stability guarantees. The book can form the basis of a graduate course
focused on adaptive control of hyperbolic PDEs, or a supplemental text for a course
on adaptive control or control of infinite-dimensional systems.
The book contains many simulation examples designed not only to demonstrate performance of the various schemes but also to show how the numerical implementation of them is carried out. Since the theory is developed in infinite
We owe great gratitude to coauthors in works leading to this book: Miroslav Krstic,
Florent Di Meglio, Mamadou Diagne, and Timm Strecker. In addition, we have
benefited from support from or interaction with Ulf Jakob Flø Aarsnes, Anders Albert,
Delphine Bresch-Pietri, Anders Rønning Dahlen, Michael Demetriou, John-Morten
Godhavn, Espen Hauge, Haavard Holta, Glenn-Ole Kaasa, Ingar Skyberg Landet,
Henrik Manum, Ken Mease, Alexey Pavlov, Bjørn Rudshaug, Sigbjørn Sangesland,
Rafael Vazquez, Nils Christian Aars Wilhelmsen, and Jing Zhou.
We gratefully acknowledge the support that we have received from the
Norwegian Academy of Science and Letters, Equinor, and the Norwegian Research
Council.
The second author dedicates this book to his daughters Anna and Oline, and wife
Linda.
Contents
Part I Background
1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Linear Hyperbolic PDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 Classes of Linear Hyperbolic PDEs Considered . . . . . . . . . . . . 7
1.4.1 Scalar Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4.2 2 × 2 Systems . . . . . . . . . . . . . . . . . . . . . . 8
1.4.3 n + 1 Systems . . . . . . . . . . . . . . . . . . . . . . 8
1.4.4 n + m Systems . . . . . . . . . . . . . . . . . . . . . . 9
1.5 Collocated Versus Anti-collocated Sensing and Control . . . . . . 10
1.6 Stability of PDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.7 Some Useful Properties of Linear Hyperbolic PDEs . . . . . . . . . 13
1.8 Volterra Integral Transformations . . . . . . . . . . . . . . . . . . . . . . . 14
1.8.1 Time-Invariant Volterra Integral Transformations . . . . . 14
1.8.2 Time-Variant Volterra Integral Transformations . . . . . . 21
1.8.3 Affine Volterra Integral Transformations . . . . . . . . . . . 23
1.9 The Infinite-Dimensional Backstepping Technique for PDEs . . . 24
1.10 Approaches to Adaptive Control of PDEs . . . . . . . . . . . . . . . . 30
1.10.1 Lyapunov Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.10.2 Identifier-Based Design . . . . . . . . . . . . . . . . . . . . . . . . 32
1.10.3 Swapping-Based Design . . . . . . . . . . . . . . . . . . . . . . . 34
1.10.4 Discussion of the Three Methods . . . . . . . . . . . . . . . . 38
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Part IV n + 1 Systems
13 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
14 Non-adaptive Schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
14.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
14.2 State Feedback Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
14.3 State Observers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
14.3.1 Sensing Anti-collocated with Actuation . . . . . . . . . . . . 268
14.3.2 Sensing Collocated with Actuation . . . . . . . . . . . . . . . 271
14.4 Output Feedback Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . 276
14.4.1 Sensing Anti-collocated with Actuation . . . . . . . . . . . . 276
14.4.2 Sensing Collocated with Actuation . . . . . . . . . . . . . . . 276
14.5 Output Tracking Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . 277
14.6 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
14.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
15 Adaptive State-Feedback Controller . . . . . . . . . . . . . . . . . . . . . . . . 281
15.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
15.2 Swapping-Based Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
15.2.1 Filter Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
15.2.2 Adaptive Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
15.2.3 Control Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
15.2.4 Estimator Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . 287
15.2.5 Target System and Backstepping . . . . . . . . . . . . . . . . . 287
15.2.6 Proof of Theorem 15.2 . . . . . . . . . . . . . . . . . . . . . . . . 290
Part V n + m Systems
18 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
19 Non-adaptive Schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
19.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
19.2 State Feedback Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
19.2.1 Non-minimum-time Controller . . . . . . . . . . . . . . . . . . 350
19.2.2 Minimum-Time Controller . . . . . . . . . . . . . . . . . . . . . 353
1.1 Introduction
Systems of hyperbolic partial differential equations (PDEs) describe flow and transport phenomena. Typical examples are transmission lines (Curró et al. 2011), road traffic (Amin et al. 2008), heat exchangers (Xu and Sallet 2010), oil wells (Landet et al. 2013), multiphase flow (Di Meglio et al. 2011; Diagne et al. 2017), time-delays (Krstić and Smyshlyaev 2008b) and predator–prey systems (Wollkind 1986), to mention a few. These distributed parameter systems give rise to important estimation and control problems, with methods ranging from the use of control Lyapunov functions (Coron et al. 2007), Riemann invariants (Greenberg and Tsien 1984) and frequency domain approaches (Litrico and Fromion 2006) to active disturbance rejection control (ADRC) (Guo and Jin 2015). The approach taken in this book makes extensive use of Volterra integral transformations, and is known as the infinite-dimensional backstepping approach. The backstepping approach offers a systematic way of designing controllers and observers for linear PDEs, non-adaptive as well as adaptive. One of its key strengths is that the controllers and observers are derived for the infinite-dimensional system directly, and all analysis can therefore be done directly in the infinite-dimensional framework. Discretization is avoided until an eventual implementation on a computer.
While integral transformations were used as early as the 1970s and 1980s in order
to study solutions and controllability properties of PDEs (Colton 1977; Seidman
1984), the very first use of infinite-dimensional backstepping for controller design
of PDEs is usually credited to Weijiu Liu for his paper (Liu 2003) published in 2003,
in which a parabolic PDE is stabilized using this technique. Following (Liu 2003),
the technique was quickly expanded in numerous directions, particularly in the work
authored by Andrey Smyshlyaev and Miroslav Krstić, published between 2004 and
approximately 2010. The earliest publication is Smyshlyaev and Krstić (2004), in
which non-adaptive state-feedback control laws for a class of parabolic PDEs are
derived, followed by backstepping-based boundary observer design in Smyshlyaev
and Krstić (2005). Adaptive solutions are derived in Smyshlyaev and Krstić (2006)
and in their comprehensive work in three parts (Krstić and Smyshlyaev 2008a;
Smyshlyaev and Krstić 2007a, b). Most of their work is collected in two extensive
books on non-adaptive (Krstić and Smyshlyaev 2008c) and adaptive (Smyshlyaev
and Krstić 2010a) backstepping-based controller and observer design, respectively.
The first use of backstepping for control of linear hyperbolic PDEs, on the other
hand, was in 2008 in the paper (Krstić and Smyshlyaev 2008b) for a scalar 1-D
system. Extensions to more complicated systems of hyperbolic PDEs were derived
a few years later in Vazquez et al. (2011), for two coupled linear hyperbolic PDEs,
and in Di Meglio et al. (2013) and more recently, Hu et al. (2016) for an arbitrary
number of coupled PDEs.
The very first result on adaptive control of hyperbolic PDEs using backstepping
was published as late as 2014 (Bernard and Krstić 2014). In that paper, the results
for parabolic PDEs in Smyshlyaev and Krstić (2006) were extended in order to
adaptively stabilize a scalar 1-D linear hyperbolic PDE with an uncertain in-domain
parameter using boundary sensing only.
A series of papers then followed, developing a quite complete theory of adaptive control of systems of coupled linear hyperbolic PDEs. This book gives a systematic presentation of this body of work.
1.2 Notation
Domains
The following domains will be frequently used:
T = {(x, ξ) | 0 ≤ ξ ≤ x ≤ 1} (1.1a)
T1 = T × {t ≥ 0} (1.1b)
S = {(x, ξ) | 0 ≤ x ≤ ξ ≤ 1} (1.1c)
S1 = S × {t ≥ 0} . (1.1d)
For a vector-valued signal u(x, t) = [u1(x, t) u2(x, t) · · · un(x, t)]^T defined for x ∈ [0, 1], t ≥ 0:
Convergence
An arrow is used to denote asymptotic convergence, for instance
||z|| → 0 (1.6)
means that the L2-norm of the signal z(x, t) converges asymptotically to zero. Nothing, however, is said about the rate of convergence. Similarly, the notation
a→b (1.7)
denotes that the signal a(t) converges asymptotically to some (possibly constant)
signal b(t).
Derivatives
The partial derivative of a variable is usually denoted using a subscript, that is
u_x(x, t) = ∂u(x, t)/∂x. (1.8)
When the variable already has a subscript, we will use the notation ∂x to denote the
partial derivative, so that for instance
∂x u1(x, t) = ∂u1(x, t)/∂x. (1.9)
f′(x) = d f(x)/dx, (1.10)
for some function f of x. For a function in time only, we will use a dot to denote the
derivative, that is
η̇(t) = dη(t)/dt (1.11)
for some signal η of time t.
Other Notation
For a function of several variables, we will use · to indicate with respect to which
variable the norm is taken. For a signal u(x, t) defined for 0 ≤ x ≤ 1, t ≥ 0, we will
for instance let
u(x, ·) ∈ L2 (1.12)
1.3 Linear Hyperbolic PDEs

The simplest example of a linear hyperbolic PDE is the transport equation

u_t(x, t) + u_x(x, t) = 0 (1.14)

for a function u(x, t) defined in the spatial variable x ∈ R and time t ≥ 0. Equations
in the form (1.14) have an infinite number of solutions. In fact, any function u in the
form
u(x, t) = f (x − t) (1.15)
for some function f, is a solution. We consider (1.14) on the spatial domain x ∈ [0, 1], subject to an initial condition

u(x, 0) = u0(x) (1.16)

for some function u0(x) defined for x ∈ [0, 1]. The type of boundary conditions
considered in this book are of Dirichlet type, which are in the form

u(0, t) = g(t) (1.17)

for some function g(t) defined for t ≥ 0. By imposing initial condition (1.16) and
boundary condition (1.17), the solution to (1.14) is narrowed down to a unique one,
namely
u(x, t) = u0(x − t) for t < x, and u(x, t) = g(t − x) for t ≥ x. (1.18)
Hence, for t ≥ 1, the values of u(x, t) at time t are completely determined by the values of g in the domain [t − 1, t], and thus for t ≥ 1

u(x, t) = g(t − x), (1.19)

which clearly shows the transport property of the linear hyperbolic PDE (1.14) with
boundary condition (1.17): the values of g are transported without loss through the
domain [0, 1].
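The transport behavior in (1.18) and (1.19) can be checked numerically. The following is a minimal sketch (not from the book), assuming the illustrative choices u0(x) = sin(πx) and g(t) = cos(t):

```python
import numpy as np

def transport_solution(x, t, u0, g):
    # Explicit solution (1.18): for t < x the value at x still carries the
    # initial data; for t >= x it is the boundary input delayed by x.
    return np.where(t < x, u0(x - t), g(t - x))

u0 = lambda s: np.sin(np.pi * s)   # illustrative initial condition
g = lambda s: np.cos(s)            # illustrative boundary input

x = np.linspace(0.0, 1.0, 101)
u = transport_solution(x, 1.5, u0, g)
```

For t ≥ 1 the solution equals g(t − x) everywhere in [0, 1]: the boundary input is transported without loss through the domain.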
1.4 Classes of Linear Hyperbolic PDEs Considered

We will categorize linear hyperbolic PDEs into four types, which we will refer to as classes. We assume all of them to be defined over the unit spatial domain x ∈ [0, 1], which can always be achieved by scaling, and time t ≥ 0.
1.4.1 Scalar Systems

The first and simplest ones are scalar first-order linear hyperbolic partial (integral) differential equations, which we will refer to as scalar systems. They consist of a single P(I)DE, and are in the form
for the system state u(x, t) defined for x ∈ [0, 1], t ≥ 0, some functions μ, f, g, h,
with μ(x) > 0, ∀x ∈ [0, 1], some initial condition u 0 , and an actuation signal U .
1.4.2 2 × 2 Systems
The second class of systems consists of two coupled first order linear hyperbolic
partial differential equations with opposite signs on their transport speeds, so that
they convect information in opposite directions. This type of systems has in the
literature (Vazquez et al. 2011; Aamo 2013) been referred to as 2 × 2 systems. They
are in the form
for the system states u(x, t) and v(x, t) defined for x ∈ [0, 1], t ≥ 0, some functions
λ, μ, c11 , c12 , c21 , c22 , with λ(x), μ(x) > 0, ∀x ∈ [0, 1], some constants ρ, q, some
initial conditions u 0 , v0 , and an actuation signal U .
1.4.3 n + 1 Systems
The third class of systems consists of an arbitrary number of PDEs with positive
transport speeds, and a single one with negative transport speed. They are referred
to as n + 1 systems, and have the form
with λi (x), μ(x) > 0 for i = 1, 2, . . . , n, some functions Σ(x), ω, , π and vectors
q, ρ of appropriate sizes, initial conditions u 0 , v0 and an actuation signal U .
1.4.4 n + m Systems
1.5 Collocated Versus Anti-collocated Sensing and Control

Sensing is either distributed (that is, assuming the full state u(x, t) for all x ∈ [0, 1] is available), or taken at the boundaries. For boundary sensing, a distinction between collocated and anti-collocated sensing and control is often made for systems of the 2 × 2, n + 1 and n + m classes.
If the sensing is taken at the same boundary as the actuation, it is referred to
as collocated sensing and control. The collocated measurement for systems (1.21),
(1.22) and (1.25) is
1.6 Stability of PDEs

Systems of linear hyperbolic PDEs can, when left uncontrolled, be stable or unstable. When closing the loop with a control law, we want to establish as strong stability properties as possible for the closed-loop system. We will here list the stability properties we are concerned with in this book:
1. L 2 -stability: ||u|| ∈ L∞
2. Square integrability in the L 2 -norm: ||u|| ∈ L2
3. Boundedness pointwise in space: ||u||∞ ∈ L∞
4. Square integrability pointwise in space: ||u||∞ ∈ L2
5. Convergence to zero in the L 2 -norm: ||u|| → 0
6. Convergence to zero pointwise in space: ||u||∞ → 0.
If the PDE fails to be stable, it is unstable. The last of the above degrees of stability is the desired result for all controllers derived in this book. However, this is not always possible to achieve.
For the latter two, there is also a distinction between convergence to zero in finite time, convergence to zero in minimum time and asymptotic convergence to zero. For many of the non-adaptive schemes, convergence in finite time can usually be achieved. For the adaptive schemes, asymptotic convergence to zero is the best possible result.
The transport delays for system (1.25) are given as
t_{u,i} = ∫_0^1 dγ/λ_i(γ), t_{v,j} = ∫_0^1 dγ/μ_j(γ) (1.30)
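As a quick numerical sketch (not from the book), the delays in (1.30) can be computed by quadrature; the speed profiles below are illustrative assumptions:

```python
import numpy as np

def transport_delay(speed, n=100001):
    # t = ∫_0^1 dγ / speed(γ), cf. (1.30), by the trapezoidal rule.
    gamma = np.linspace(0.0, 1.0, n)
    f = 1.0 / speed(gamma)
    h = gamma[1] - gamma[0]
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))

d_const = transport_delay(lambda g: 2.0 * np.ones_like(g))  # λ(γ) = 2
d_affine = transport_delay(lambda g: 1.0 + g)               # λ(γ) = 1 + γ
```

A constant speed of 2 crosses the unit domain in time 1/2, while λ(γ) = 1 + γ gives ln 2.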
If a controller achieves

u ≡ 0, v ≡ 0 (1.32)

for t ≥ t_min for any arbitrary initial condition, the system is said to converge to zero in minimum time, and the controller is said to be a minimum-time controller. The
concept of minimum time convergence also applies to observers, but is only relevant
for the n + 1 and n + m classes of systems, where multiple states convect in the
same direction.
Convergence in minimum time implies convergence in finite time, which in turn
also implies asymptotic convergence.
We will now demonstrate the different degrees of stability on a simple PDE in
the following example, and give assumptions needed to ensure the different stability
properties.
Example 1.1 Consider the system

u_t(x, t) = u_x(x, t), u(1, t) = q u(0, t), u(x, 0) = u0(x), (1.33)

where u(x, t) is defined for x ∈ [0, 1] and t ≥ 0, q is a constant and u0 ∈ L2([0, 1])
is a function. It is straightforward to show that the solution to (1.33) is
u(x, t) = q^n u0(t − n + x), n ≤ t < n + 1 − x,
u(x, t) = q^{n+1} u0(t − n − 1 + x), n + 1 − x ≤ t < n + 1, (1.34)

for n = 0, 1, 2, . . .
We assume ||u 0 || is nonzero, and emphasize that u 0 ∈ L 2 ([0, 1]) does not imply
u 0 ∈ B([0, 1]).
1. L 2 -stability: If |q| ≤ 1, system (1.33) is stable in the L 2 -sense. This is seen from
(1.36). Hence ||u|| ∈ L∞ if |q| ≤ 1.
2. Square integrability of the L2-norm: We evaluate

∫_0^∞ ||u(t)||² dt = ∫_0^∞ ∫_0^1 u²(x, t) dx dt = ∫_0^1 ∫_0^∞ u²(x, t) dt dx
= ∫_0^1 Σ_{n=0}^∞ [ q^{2n} ∫_n^{n+1−x} u0²(t − n + x) dt + q^{2n+2} ∫_{n+1−x}^{n+1} u0²(t − n − 1 + x) dt ] dx
= M Σ_{n=0}^∞ q^{2n} (1.37)
where
M = ∫_0^1 [ q² ∫_0^x u0²(s) ds + ∫_x^1 u0²(s) ds ] dx (1.38)
is a bounded constant. It is clear that the expression (1.37) is bounded only for
|q| < 1. Hence, ||u|| ∈ L2 only if |q| < 1.
3. Boundedness pointwise in space: Boundedness pointwise in space cannot be
established for initial conditions in L 2 ([0, 1]). However, for u 0 ∈ B([0, 1]),
||u||∞ ∈ L∞ if |q| ≤ 1.
4. Square integrability pointwise in space: Since (1.33) is a pure transport equation,
it suffices to consider a single x ∈ [0, 1]. For simplicity, we choose x = 0 and
find from (1.34),
∫_0^∞ u²(0, t) dt = Σ_{n=0}^∞ q^{2n} ∫_n^{n+1} u0²(t − n) dt = Σ_{n=0}^∞ q^{2n} ∫_0^1 u0²(x) dx = ||u0||² Σ_{n=0}^∞ q^{2n}. (1.39)
The expression (1.39) is bounded for |q| < 1 only, and hence ||u||∞ ∈ L2 only
if |q| < 1.
5. Convergence to zero in the L 2 -norm: It is seen from (1.36) that ||u|| → 0 only
if |q| < 1. Moreover, if q = 0, then ||u|| = 0 for all t ≥ 1, and hence finite-time
convergence is achieved.
6. Convergence to zero pointwise in space: Pointwise convergence cannot be
established for initial conditions in L 2 ([0, 1]). However, if u 0 ∈ B([0, 1]), then
||u||∞ → 0, provided |q| < 1. If, in addition, q = 0, then pointwise finite-time
convergence is achieved.
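The degrees of stability above can be observed numerically from the exact solution (1.34). A small sketch, assuming the illustrative initial condition u0(x) = sin(πx):

```python
import numpy as np

def u_exact(x, t, q, u0):
    # Exact solution (1.34) of Example 1.1.
    n = int(np.floor(t))
    tau = t - n
    first = tau < 1.0 - x                         # branch n <= t < n + 1 - x
    return np.where(first,
                    q**n * u0(tau + x),
                    q**(n + 1) * u0(tau + x - 1.0))

u0 = lambda s: np.sin(np.pi * s)
x = np.linspace(0.0, 1.0, 2001)

# At integer times t = n the state is q^n u0, so ||u(n)|| = |q|^n ||u0||:
# decaying for |q| < 1, constant for |q| = 1, growing for |q| > 1.
snapshots = {q: u_exact(x, 2.0, q, u0) for q in (0.5, 1.0, 2.0)}
```

The snapshot at t = 2 equals q² u0(x), in line with the norm expression used in item 1.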
1.7 Some Useful Properties of Linear Hyperbolic PDEs

We will in the following list some useful properties of the types of systems of coupled 1-D linear hyperbolic PDEs with actuation laws considered in this book.
Theorem 1.1 Consider a system of linear first-order hyperbolic PDEs defined for
x ∈ [0, 1], t ≥ 0, with bounded system coefficients and bounded additive distur-
bances. Let w(x, t) be a vector containing all system states, with initial condition
w(x, 0) = w0(x), with w0 ∈ L2([0, 1]). Then

||w(t)|| ≤ A e^{ct}, t ≥ 0, (1.40)

where A depends on the initial condition norm ||w0|| and d̄, where d̄ is a constant
bounding all disturbances in the system, and c depends on the system parameters.
Moreover, if w0 ∈ B([0, 1]), then

||w(t)||∞ ≤ B e^{kt}, t ≥ 0, (1.41)

where B depends on the initial condition norm ||w0||∞ and d̄, where d̄ is a constant
bounding all disturbances in the system, and k depends on the system parameters.
The proof is given in Appendix E.1 for the most general type of systems considered
in the book.
An important consequence of Theorem 1.1 is that the system's L2-norm (or ∞-norm in the case of initial conditions in B([0, 1])) cannot diverge to infinity in finite time.
Corollary 1.1 A system of linear first-order hyperbolic PDEs with bounded system
coefficients and initial conditions in L 2 ([0, 1]) (respectively B([0, 1])) that converges
to zero in finite time in the L 2 -sense (respectively in the B-sense) is exponentially
stable at the origin and square integrable in the L 2 -sense (respectively in the B-
sense).
The proof is given in Appendix E.1. Several results on control of linear hyperbolic PDEs (Vazquez et al. 2011; Di Meglio et al. 2013; Chen et al. 2017) include proofs of exponential stability in the L2-sense in addition to proofs of convergence to zero in finite time. With the use of Corollary 1.1, the former is not necessary.
1.8 Volterra Integral Transformations

This book uses a particular change of variables as an essential tool for controller and observer design. By changing variables, the original system dynamics is transformed into a form which is more amenable to stability analysis. The change of variables is invertible, so that stability properties established for the transformed dynamics also apply to the original dynamics. A particularly favorable feature of the approach is that the change of variables provides the state feedback law for the controller design problem and the output injection gains for the observer design problem.
We refer to the change of variables as a Volterra integral transformation, since it
takes the form of a Volterra integral equation involving an integration kernel. In
this section, we introduce the variants of Volterra integral transformations used in
this book. First, we consider time-invariant transformations, where the integration
kernel is time-invariant. Such transformations are used for non-adaptive controller
design for systems with time-invariant coefficients. Then, we consider time-variant
transformations, where the integration kernel is allowed to vary with time. Such
transformations are needed for all adaptive solutions in the book. Finally, we consider
affine transformations, where an arbitrary function can be added to the transformation
in order to allow for shifting the origin. This transformation is used for controller
and observer design for coupled PDE-ODE systems.
1.8.1 Time-Invariant Volterra Integral Transformations

Consider functions u(x), v(x), w(x) and z(x) defined for x ∈ [0, 1]. The Volterra integral transformations used in this book take the form
v(x) = u(x) − ∫_0^x K(x, ξ)u(ξ)dξ (1.43)
and
z(x) = w(x) − ∫_x^1 M(x, ξ)w(ξ)dξ, (1.44)
for kernels K and M defined over the domains T and S, respectively. The inverse transformations are in the form

u(x) = v(x) + ∫_0^x L(x, ξ)v(ξ)dξ (1.47)

and
w(x) = z(x) + ∫_x^1 N(x, ξ)z(ξ)dξ, (1.48)
respectively, for some functions L and N defined over the same domains as K and
M, respectively. By inserting (1.47) into (1.43), we find
v(x) = v(x) + ∫_0^x L(x, ξ)v(ξ)dξ − ∫_0^x K(x, ξ)v(ξ)dξ − ∫_0^x K(x, ξ) ∫_0^ξ L(ξ, s)v(s) ds dξ. (1.49)
We have thus shown that (1.47) and (1.48) are the inverses of (1.43) and (1.44),
respectively, provided L and N satisfy (1.53) and (1.56), respectively. The following
lemma addresses the existence of a solution to a Volterra integral equation for a
vector-valued function, which will be used to prove that solutions L and N of (1.53)
and (1.56) do exist. Since the equations for L and N in (1.53) and (1.56) are column-wise independent, the lemma is applicable to (1.53) and (1.56) as well.

Lemma 1.1 Consider the Volterra integral equation

F(x, ξ) = f(x, ξ) + ∫_ξ^x G(x, s)F(s, ξ) ds, (1.58)
where the vector f (x, ξ) and matrix G(x, ξ) are given and bounded. Equation (1.58)
has a unique, bounded solution F(x, ξ), with a bound in the form

|F(x, ξ)|∞ ≤ f̄ e^{nḠ(x−ξ)}, (1.59)

where f̄ and Ḡ bound each element of f and G, respectively, and n is the dimension of f.
Proof (originally stated in Anfinsen and Aamo 2016) Define the operator
Ψ[F](x, ξ) = ∫_ξ^x G(x, s)F(s, ξ) ds (1.61)
and the successive approximations

F^0(x, ξ) = 0 (1.62a)
F^q(x, ξ) = f(x, ξ) + Ψ[F^{q−1}](x, ξ), q ≥ 1. (1.62b)
Define ΔF^q(x, ξ) = F^q(x, ξ) − F^{q−1}(x, ξ) for q ≥ 1, and consider the candidate solution

F(x, ξ) = Σ_{q=1}^∞ ΔF^q(x, ξ), (1.65)
which by construction satisfies (1.58). Recall that f¯ and Ḡ bound each element of
f and G, respectively, and suppose
|ΔF^q(x, ξ)|∞ ≤ f̄ n^{q−1} Ḡ^{q−1} (x − ξ)^{q−1} / (q − 1)!. (1.66)
Since ΔF^{q+1} = Ψ[ΔF^q], it follows that

|ΔF^{q+1}(x, ξ)|∞ ≤ nḠ ∫_ξ^x |ΔF^q(s, ξ)|∞ ds ≤ f̄ n^q Ḡ^q (x − ξ)^q / q!. (1.67)
Furthermore, (1.66) trivially holds for q = 1. Hence, an upper bound for (1.65) is
|F(x, ξ)|∞ ≤ Σ_{q=1}^∞ |ΔF^q(x, ξ)|∞ ≤ f̄ Σ_{q=0}^∞ n^q Ḡ^q (x − ξ)^q / q! ≤ f̄ e^{nḠ(x−ξ)}. (1.68)
This shows that the series is bounded and converges uniformly (Coron et al. 2013).
For uniqueness, consider two solutions F 1 (x, ξ) and F 2 (x, ξ), and consider their
difference F̃(x, ξ) = F 1 (x, ξ) − F 2 (x, ξ). Due to linearity, F̃(x, ξ) must also satisfy
(1.58), with f (x, ξ) ≡ 0. The upper bound (1.68) with f¯ = 0 then yields F̃(x, ξ) ≡
0, and hence F 1 ≡ F 2 .
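The successive approximation argument in the proof can be mirrored numerically. The sketch below (an illustration, not from the book) treats the scalar case with G ≡ 1 on a grid:

```python
import numpy as np

def cumtrap(y, h):
    # Cumulative trapezoidal integral of y along axis 0, starting at zero.
    c = np.zeros_like(y)
    c[1:] = np.cumsum(0.5 * h * (y[1:] + y[:-1]), axis=0)
    return c

def solve_volterra(f, n=401, iters=40):
    # Successive approximations F^q = f + Ψ[F^{q-1}] for the scalar equation
    # F(x, ξ) = f(x, ξ) + ∫_ξ^x F(s, ξ) ds   (Lemma 1.1 with G ≡ 1).
    s = np.linspace(0.0, 1.0, n)
    h = s[1] - s[0]
    X, XI = np.meshgrid(s, s, indexing="ij")   # F[i, j] ≈ F(x_i, ξ_j)
    fv = f(X, XI)
    F = np.zeros((n, n))
    for _ in range(iters):
        I = cumtrap(F, h)                      # I[i, j] = ∫_0^{x_i} F(s, ξ_j) ds
        F = fv + (I - np.diag(I)[None, :])     # Ψ[F](x, ξ) = ∫_ξ^x F(s, ξ) ds
    return F

# With f ≡ 1 the equation reads F = 1 + ∫_ξ^x F(s, ξ) ds, solved by e^{x-ξ}.
n = 401
F = solve_volterra(lambda x, xi: np.ones_like(x), n=n)
```

The iterates converge on the triangle ξ ≤ x at the factorial rate predicted by the bound (1.66).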
Theorem 1.2 The Volterra integral transformations (1.43) and (1.44) with bounded
kernels K and M are invertible, with inverses (1.47) and (1.48), respectively, where
the integration kernels L and N are given as the unique, bounded solutions to the
Volterra integral equations (1.53) and (1.56).
Moreover, for the transformation (1.43), the following bounds hold

||v|| ≤ A1||u||, ||v||∞ ≤ B1||u||∞ (1.69)

and

||u|| ≤ A2||v||, ||u||∞ ≤ B2||v||∞ (1.70)

for some positive constants A1, A2, B1 and B2.
Proof The fact that the inverses of (1.43) and (1.44) are (1.47) and (1.48) with L and
N given as the solution to (1.53) and (1.56) follows from the derivations (1.49)–(1.56)
and Lemma 1.1. To prove the bounds (1.69) and (1.70), we have
||v||² = ∫_0^1 v²(x) dx = ∫_0^1 ( u(x) − ∫_0^x K(x, ξ)u(ξ)dξ )² dx. (1.71)
where
||K||² = ∫_0^1 ∫_0^x K²(x, ξ) dξ dx. (1.74)
yielding

A2 = 1 + ||L|| (1.77)
where
||L||² = ∫_0^1 ∫_0^x L²(x, ξ) dξ dx. (1.78)
and hence |v(x)| ≤ (1 + ||K||∞)||u||∞ for every x ∈ [0, 1], which gives ||v||∞ ≤ B1||u||∞ with B1 = 1 + ||K||∞. A similar proof gives ||u||∞ ≤ B2||v||∞ with B2 = 1 + ||L||∞. Similar derivations give equivalent bounds for the transformation (1.44).
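A discrete sketch of Theorem 1.2 (illustrative, not from the book): on a grid, the Volterra transformation becomes a lower-triangular perturbation of the identity, which is always invertible, and the discrete L2 norms satisfy the claimed equivalence. The kernel K(x, ξ) = sin(x + ξ) is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
h = 1.0 / n
x = (np.arange(n) + 0.5) * h               # midpoint grid on [0, 1]

# Discretize (1.43): v = (I - h K_low) u, keeping the strictly lower
# triangle of the kernel matrix (the Volterra structure ξ < x).
K = np.sin(x[:, None] + x[None, :])
T = np.eye(n) - h * np.tril(K, k=-1)

u = rng.standard_normal(n)
v = T @ u
u_back = np.linalg.solve(T, v)             # discrete inverse transformation (1.47)

# Discrete analogue of ||v|| <= (1 + ||K||) ||u||:
normK = h * np.linalg.norm(np.tril(K, k=-1))   # ≈ L2 norm of the kernel
lhs = np.sqrt(h) * np.linalg.norm(v)
rhs = (1.0 + normK) * np.sqrt(h) * np.linalg.norm(u)
```

Since T has unit diagonal, its determinant is 1 regardless of the kernel, mirroring the unconditional invertibility in the theorem.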
Example 1.2 Consider the Volterra integral transformation from u(x) to w(x),
defined over x ∈ [0, 1]
w(x) = u(x) − θ ∫_0^x u(ξ)dξ, (1.81)
for some constant θ. Using the Volterra integral equation (1.53), we find the following
equation for L in the inverse transformation (1.47)
L(x, ξ) = θ + θ ∫_ξ^x L(s, ξ) ds, (1.82)

which has the solution L(x, ξ) = θe^{θ(x−ξ)}.
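One can verify directly that L(x, ξ) = θe^{θ(x−ξ)} solves (1.82); the quick numerical check below (a sketch, with an arbitrary θ and evaluation point) confirms it:

```python
import numpy as np

theta = 0.7                                     # arbitrary illustrative value

def L(x, xi):
    return theta * np.exp(theta * (x - xi))     # candidate inverse kernel

# Check (1.82): L(x, ξ) = θ + θ ∫_ξ^x L(s, ξ) ds at a sample point.
x, xi = 0.9, 0.2
s = np.linspace(xi, x, 20001)
ds = s[1] - s[0]
integral = np.sum(0.5 * ds * (L(s[1:], xi) + L(s[:-1], xi)))  # trapezoid
residual = L(x, xi) - (theta + theta * integral)
```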
1.8.2 Time-Variant Volterra Integral Transformations

In the time-variant case, the transformations take the form

v(x) = u(x) − ∫_0^x K(x, ξ, t)u(ξ)dξ (1.86)

and

z(x) = w(x) − ∫_x^1 M(x, ξ, t)w(ξ)dξ. (1.87)
In this case, K and M are functions of three variables including time, and are defined
over T1 and S1 , respectively, defined in (1.1b) and (1.1d).
Theorem 1.3 If the kernels K and M are bounded for every t, then the time-varying
Volterra integral transformations (1.86) and (1.87) are invertible for every t, with
inverses in the form
u(x) = v(x) + ∫_0^x L(x, ξ, t)v(ξ)dξ (1.88)

and

w(x) = z(x) + ∫_x^1 N(x, ξ, t)z(ξ)dξ (1.89)
respectively, where L and N depend on and are defined over the same domains as
K and M, respectively, and can uniquely be determined by solving the time-varying
Volterra integral equations
L(x, ξ, t) = K(x, ξ, t) + ∫_ξ^x K(x, s, t)L(s, ξ, t) ds (1.90)
and
N(x, ξ, t) = M(x, ξ, t) + ∫_x^ξ M(x, s, t)N(s, ξ, t) ds. (1.91)
Moreover, if the kernels are bounded uniformly in time, that is there exist constants
K̄ and M̄ such that ||K (t)||∞ ≤ K̄ and ||M(t)||∞ ≤ M̄ for every t ≥ 0, then there
exist constants G1, G2, H1 and H2 such that

||v(t)|| ≤ G1||u(t)||, ||u(t)|| ≤ G2||v(t)|| (1.92)

and

||v(t)||∞ ≤ H1||u(t)||∞, ||u(t)||∞ ≤ H2||v(t)||∞ (1.93)

for all t ≥ 0. Similar bounds hold for the transformation (1.87) with inverse (1.89).
Proof The proof of (1.88) and (1.89) being inverses of (1.86) and (1.87), respectively,
can be found using the same steps as for Theorem 1.2, and is therefore omitted.
For every fixed t, we have from Theorem 1.2 the following bounds
and
where
Choosing G 1 , G 2 , H1 , H2 as
we obtain the bounds (1.92)–(1.93). Similar derivations give equivalent bounds for
the transformation (1.87).
Time-varying Volterra transformations in the form (1.86) and (1.87) are also
invertible for every t, provided the kernels K and M are (uniformly) bounded for all
t. Volterra transformations in this form are typically used for adaptive schemes.
1.8.3 Affine Volterra Integral Transformations
Sometimes it is convenient to shift the origin when transforming into new variables.
This leads to an affine Volterra integral transformation, which involves a function that is added to or subtracted from the usual Volterra integral transformation. Examples
are the change of variables from u(x) to w(x) where the origin is shifted by F(x),
given as
α(x) = u(x) − ∫_0^x K(x, ξ)u(ξ)dξ − F(x) (1.98)
or
β(x) = w(x) − ∫_x^1 M(x, ξ)w(ξ)dξ − F(x). (1.99)
Theorem 1.4 The transformations (1.98) and (1.99) with bounded kernels K and
M are invertible, with inverses in the form
u(x) = α(x) + ∫_0^x L(x, ξ)α(ξ)dξ + G(x) (1.100)
and
w(x) = β(x) + ∫_x^1 N(x, ξ)β(ξ)dξ + H(x) (1.101)
respectively, where L and N are the solutions to the Volterra integral equations (1.53)
and (1.56), respectively, and
G(x) = F(x) + ∫_0^x L(x, ξ)F(ξ)dξ (1.102)
and
H(x) = F(x) + ∫_x^1 N(x, ξ)F(ξ)dξ. (1.103)
Proof Defining

v(x) = α(x) + F(x), (1.104)

the transformation (1.98) takes the form (1.43), whose inverse is, by Theorem 1.2,

u(x) = v(x) + ∫_0^x L(x, ξ)v(ξ)dξ. (1.105)

Substituting (1.104) into (1.105) gives (1.100) and (1.102). Similar steps, defining z(x) = β(x) + F(x), give (1.101) and (1.103).
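A discrete sketch of the affine transformation and its inverse (illustrative; it assumes the constant kernel K = θ of Example 1.2, its inverse kernel L(x, ξ) = θe^{θ(x−ξ)}, and an arbitrary shift F(x) = x²):

```python
import numpy as np

n, theta = 400, 0.5
h = 1.0 / n
x = (np.arange(n) + 0.5) * h
Klow = np.tril(theta * np.ones((n, n)), k=-1)
Llow = np.tril(theta * np.exp(theta * (x[:, None] - x[None, :])), k=-1)

F = x ** 2
G = F + h * Llow @ F                  # G(x) = F(x) + ∫_0^x L(x, ξ)F(ξ)dξ, (1.102)

u = np.sin(3.0 * x)
alpha = u - h * Klow @ u - F          # transformation (1.98)
u_rec = alpha + h * Llow @ alpha + G  # inverse (1.100)
```

The shift F cancels exactly in the round trip, so the recovery error is pure quadrature error of order h.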
Affine Volterra integral transformations in the form (1.98) and (1.99) are typically
used for controller and observer design for coupled ODE-PDE systems.
1.9 The Infinite-Dimensional Backstepping Technique for PDEs

When using infinite-dimensional backstepping (or backstepping for short) for control or observer design for PDEs, an invertible Volterra integral transformation, T, with a bounded integration kernel is introduced along with a control law F[u]; together, these map the system of interest into a carefully designed target system possessing some
desirable stability properties. This is illustrated in Fig. 1.1, where a backstepping
transformation T is used to map a system with dynamics in terms of u into a target
system with dynamics in terms of w. Due to the invertibility of the transformation,
the equivalence of norms as stated in Theorem 1.2 holds, which implies that the orig-
inal system is stabilized as well. We will demonstrate this in two examples. The first
example employs the transformation studied in Example 1.2.
Example 1.3 Consider the system
u_t(x, t) = u_x(x, t) + θu(0, t) (1.106a)
u(1, t) = U(t) (1.106b)
u(x, 0) = u_0(x) (1.106c)
for a signal u(x, t) defined for x ∈ [0, 1], t ≥ 0, where θ is a real constant, and the initial condition u_0(x) satisfies u_0 ∈ B([0, 1]). The state feedback control law
U(t) = −θ ∫_0^1 e^{θ(1−ξ)} u(ξ, t)dξ (1.107)
guarantees u ≡ 0 for t ≥ 1.
We prove this using the target system
w_t(x, t) = w_x(x, t) (1.108a)
w(1, t) = 0 (1.108b)
w(x, 0) = w_0(x) (1.108c)
for some initial condition w_0 ∈ B([0, 1]). System (1.108) can be solved explicitly to find
w(x, t) = w_0(x + t) for t < 1 − x, and w(x, t) = w(1, t − (1 − x)) for t ≥ 1 − x, (1.109)
and, since w(1, t) = 0, this implies that w ≡ 0 for t ≥ 1. The backstepping trans-
formation (that is: Volterra integral transformation) mapping u into w is
w(x, t) = u(x, t) + θ ∫_0^x e^{θ(x−ξ)} u(ξ, t)dξ = T[u(t)](x). (1.110)
We will now verify that the backstepping transformation (1.110) maps system (1.106)
into (1.108).
Firstly, rearranging (1.110) as
u(x, t) = w(x, t) − θ ∫_0^x e^{θ(x−ξ)} u(ξ, t)dξ, (1.111)
we differentiate (1.110) with respect to time, obtaining
w_t(x, t) = u_t(x, t) + θ ∫_0^x e^{θ(x−ξ)} u_t(ξ, t)dξ. (1.112)
Inserting the dynamics (1.106a) gives
w_t(x, t) = u_x(x, t) + θu(0, t) + θ ∫_0^x e^{θ(x−ξ)} u_ξ(ξ, t)dξ + θ²u(0, t) ∫_0^x e^{θ(x−ξ)} dξ. (1.113)
Consider the term containing u_ξ. Using integration by parts, we get
θ ∫_0^x e^{θ(x−ξ)} u_ξ(ξ, t)dξ = θ[e^{θ(x−ξ)} u(ξ, t)]_{ξ=0}^{ξ=x} − θ ∫_0^x (d/dξ)e^{θ(x−ξ)} u(ξ, t)dξ
= θu(x, t) − θe^{θx} u(0, t) + θ² ∫_0^x e^{θ(x−ξ)} u(ξ, t)dξ, (1.114)
while the last term in (1.113) evaluates to
θ²u(0, t) ∫_0^x e^{θ(x−ξ)} dξ = θu(0, t)(e^{θx} − 1). (1.115)
Inserting (1.114) and (1.115) into (1.113), the u(0, t)-terms cancel, leaving
w_t(x, t) = u_x(x, t) + θu(x, t) + θ² ∫_0^x e^{θ(x−ξ)} u(ξ, t)dξ. (1.116)
Similarly, differentiating (1.111) with respect to space, we find using Leibniz’s rule
u_x(x, t) = w_x(x, t) − θu(x, t) − θ² ∫_0^x e^{θ(x−ξ)} u(ξ, t)dξ. (1.117)
Inserting (1.117) into (1.116), the terms in u cancel and we obtain
w_t(x, t) = w_x(x, t), (1.118)
which proves that w obeys the dynamics (1.108a). Evaluating (1.110) at x = 1 gives
w(1, t) = u(1, t) + θ ∫_0^1 e^{θ(1−ξ)} u(ξ, t)dξ
= U(t) + θ ∫_0^1 e^{θ(1−ξ)} u(ξ, t)dξ. (1.119)
Inserting the control law (1.107) yields the boundary condition (1.108b).
As with all Volterra integral transformations, the transformation (1.110) is invert-
ible. The inverse is as stated in Theorem 1.2, and thus in the form (1.47) with L given
as the solution to the Volterra integral equation (1.53) with K (x, ξ) = −θeθ(x−ξ) . The
inverse is
u(x, t) = w(x, t) − θ ∫_0^x w(ξ, t)dξ = T^{−1}[w(t)](x). (1.120)
This can be verified by again differentiating with respect to time and space, giving
u_t(x, t) = w_t(x, t) − θ ∫_0^x w_t(ξ, t)dξ = w_x(x, t) − θw(x, t) + θw(0, t) (1.121)
and
w_x(x, t) = u_x(x, t) + θw(x, t), (1.122)
Using the fact that w(0, t) = u(0, t), we immediately find the dynamics (1.106a).
Evaluating (1.120) at x = 1, we find
u(1, t) = −θ ∫_0^1 w(ξ, t)dξ, (1.124)
where we used the fact that w(1, t) = 0. Inserting the transformation (1.110) gives
u(1, t) = −θ ∫_0^1 u(ξ, t)dξ − θ² ∫_0^1 ∫_0^ξ e^{θ(ξ−s)} u(s, t)ds dξ, (1.125)
which is the control law (1.107). Hence (1.120) is the inverse of (1.110), mapping
target system (1.108) into system (1.106).
From (1.120), it is obvious that since w ≡ 0 for t ≥ 1, we will also have u ≡ 0 for
t ≥ 1. Figure 1.2 illustrates the use of the backstepping transformation and control
law to map system (1.106) into the finite-time convergent stable target system (1.108).
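The finite-time property of Example 1.3 can be reproduced with a simple upwind discretization; the grid size, the value of θ and the tolerance below are illustrative choices. With CFL number 1 the transport part is advanced exactly, so the residual after t = 1 is due only to the source and control quadrature.

```python
import numpy as np

# Simulation sketch of Example 1.3:
#   u_t = u_x + theta*u(0,t),  u(1,t) = U(t),
# with the feedback (1.107). The closed loop should settle at u = 0
# shortly after t = 1, up to discretization error.
theta = 2.0
N = 500
h = 1.0 / N
dt = h                                    # CFL = 1: exact transport step
x = np.linspace(0.0, 1.0, N + 1)
u = x.copy()                              # initial condition u0(x) = x

def control(profile):
    # U(t) = -theta * int_0^1 e^{theta(1-xi)} u(xi,t) dxi; the integrand's
    # value at xi = 1 is U itself, so solve the scalar equation for U.
    v = np.exp(theta * (1.0 - x[:-1])) * profile[:-1]
    S = h * (0.5 * v[0] + v[1:].sum())
    return -theta * S / (1.0 + 0.5 * theta * h)

for _ in range(int(round(1.5 / dt))):
    unew = np.empty_like(u)
    unew[:-1] = u[1:] + dt * theta * u[0]   # exact transport + source term
    unew[-1] = control(unew)                # boundary u(1,t) = U(t)
    u = unew

final_norm = float(np.max(np.abs(u)))
print(final_norm)   # near zero for t > 1
```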
Example 1.4 The following example uses backstepping to design a controller for
an ordinary differential equation (ODE) system with actuator delay, following the
technique proposed in Krstić and Smyshlyaev (2008b). Consider the simple ODE
with actuator delay
η̇(t) = aη(t) + bU(t − d) (1.127)
for some scalar signal η(t) ∈ R, constants a ∈ R, b ∈ R\{0} and initial condition
η0 ∈ R. The actuator signal U is delayed by a known time d ≥ 0. Consider the
control law
U(t) = dk ∫_0^1 e^{da(1−ξ)} b u(ξ, t)dξ + k e^{da} η(t), (1.128)
where u(x, t) is a distributed actuator state defined over x ∈ [0, 1], t ≥ 0, which satisfies the transport equation
u_t(x, t) = μ u_x(x, t) (1.129a)
u(1, t) = U(t) (1.129b)
u(x, 0) = u_0(x) (1.129c)
for
μ = d^{−1} (1.130)
and k chosen such that
a + bk < 0. (1.131)
The control law (1.128) with k satisfying (1.131) guarantees exponential stability
of the origin η = 0.
To prove this, we first represent the time-delay in the ODE system (1.127) using the PDE (1.129), and obtain
η̇(t) = aη(t) + b u(0, t). (1.132)
We will show that the backstepping transformation
w(x, t) = u(x, t) − dk ∫_0^x e^{da(x−ξ)} b u(ξ, t)dξ − k e^{dax} η(t) (1.133)
and the control law (1.128) map the system consisting of (1.129) and (1.132) into the target system
η̇(t) = (a + bk)η(t) + b w(0, t) (1.134a)
w_t(x, t) = μ w_x(x, t) (1.134b)
w(1, t) = 0. (1.134c)
Differentiating (1.133) with respect to time and space and inserting (1.129a) and (1.132) shows that w satisfies w_t(x, t) = μw_x(x, t), which is the dynamics (1.134b). Moreover, inserting the transformation (1.133) evaluated at x = 0, that is w(0, t) = u(0, t) − kη(t), into (1.132), we obtain η̇(t) = (a + bk)η(t) + bw(0, t), which is (1.134a). Choosing u(1, t) = U(t) as (1.128) then gives the boundary condition (1.134c).
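A compact simulation of this predictor feedback follows. Since the actuator state solves the transport PDE (1.129), it equals the stored input history, u(x, t) = U(t − d(1 − x)), which is implemented below as a delay-line buffer; a, b, d and k are illustrative values satisfying a + bk < 0.

```python
import numpy as np

# Simulation sketch of Example 1.4: predictor feedback (1.128) for
#   eta' = a*eta + b*U(t - d),
# with the actuator state u(x,t) = U(t - d*(1-x)) stored as a buffer.
a, b, d = 1.0, 1.0, 0.5
k = -2.0                       # a + b*k = -1 < 0
M = 100                        # delay-line resolution
hx = 1.0 / M                   # grid step in x
dt = d / M                     # one delay-line cell per time step
x = np.linspace(0.0, 1.0, M + 1)
g = np.exp(d * a * (1.0 - x))  # gain kernel e^{da(1-x)}

eta = 1.0
buf = np.zeros(M + 1)          # buf[j] = u(x_j, t); zero input history
for _ in range(int(5.0 / dt)):
    # Control (1.128); the integral's upper endpoint equals U(t) itself,
    # so solve the resulting scalar equation for U.
    v = g * buf
    S = hx * (0.5 * v[0] + v[1:-1].sum())
    U = (k * np.exp(d * a) * eta + d * k * b * S) / (1.0 - 0.5 * d * k * b * hx)
    eta += dt * (a * eta + b * buf[0])   # delayed input u(0,t) = U(t-d)
    buf[:-1] = buf[1:]                   # transport the delay line
    buf[-1] = U
print(abs(eta))   # decays despite the open-loop instability a > 0
```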
1.10 Approaches to Adaptive Control of PDEs

In Smyshlyaev and Krstić (2010a), three main types of control design methods for adaptive control of PDEs are mentioned. These are
1. Lyapunov design.
2. Identifier-based design.
3. Swapping-based design.
We will briefly explain these next, and demonstrate the three methods by adaptively stabilizing the simple ODE system in the scalar state x,
ẋ = ax + u (1.142)
where a is an unknown constant and u is the control input. The steps needed for
applying the methods to (1.142) are in principle the same as for the PDE case,
although the details become more involved.
The Lyapunov approach directly addresses the problem of closed-loop stability, with
the controller and adaptive law designed simultaneously using Lyapunov analysis.
Consider the Lyapunov function candidates
V1(t) = (1/2)x²(t),  V2(t) = (1/(2γ1))ã²(t), (1.143)
where ã(t) = a − â(t), â(t) is an estimate of a, and γ1 > 0 is a design gain. Differentiating with respect to time and inserting the dynamics (1.142), we obtain
V̇1(t) = ax²(t) + x(t)u(t) = â(t)x²(t) + ã(t)x²(t) + x(t)u(t) (1.144a)
V̇2(t) = (1/γ1)ã(t)ã̇(t). (1.144b)
Choosing the control law u(t) = −â(t)x(t) − γ2x(t) for a design gain γ2 > 0, and selecting the update law
â̇(t) = −ã̇(t) = γ1x²(t), (1.146)
the sum V3 = V1 + V2 satisfies V̇3(t) = −γ2x²(t) ≤ 0. Hence V3 is nonincreasing and bounded, and we obtain
x, ã ∈ L∞ (1.149)
and hence
∫_0^∞ γ2x²(s)ds = V3(0) − V3,∞ ≤ V3(0) < ∞, (1.151)
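The Lyapunov design for (1.142) is easily simulated. The control law used below, u = −âx − γ2x, is the certainty-equivalence choice consistent with (1.144) and (1.151) (it yields V̇3 = −γ2x²); the true parameter, the gains and the horizon are illustrative.

```python
# Sketch of the Lyapunov design applied to (1.142), x' = a*x + u, with a
# unknown. Update law (1.146): a_hat' = gamma1*x^2. The control law
# u = -a_hat*x - gamma2*x is an assumed certainty-equivalence choice
# consistent with the analysis above.
a = 1.0                      # true (unknown) parameter
gamma1, gamma2 = 2.0, 1.0
x_state, a_hat = 1.0, 0.0
dt, T = 1e-3, 20.0
for _ in range(int(T / dt)):
    u = -a_hat * x_state - gamma2 * x_state
    x_state += dt * (a * x_state + u)
    a_hat += dt * gamma1 * x_state ** 2   # Lyapunov update law (1.146)
print(x_state, a_hat)
```

Note that a_hat need not converge to a; boundedness of ã and x → 0 are all the analysis guarantees.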
In the identifier-based design, an identifier consisting of a copy of the system dynamics with error injection is employed. For system (1.142), consider the identifier
x̂̇(t) = â(t)x(t) + u(t) + γ1e(t) + γ2e(t)x²(t),
where γ1 and γ2 are positive design gains. The error e(t) = x(t) − x̂(t) satisfies
ė(t) = ã(t)x(t) − γ1e(t) − γ2e(t)x²(t).
Consider the Lyapunov function candidate
V1(t) = (1/2)e²(t) + (1/(2γ3))ã²(t). (1.155)
Differentiating with respect to time, inserting the error dynamics and choosing the update law
â̇(t) = γ3e(t)x(t), (1.157)
we obtain V̇1(t) = −γ1e²(t) − γ2e²(t)x²(t) ≤ 0, from which
e, ã ∈ L∞ (1.158)
and
e, ex ∈ L2. (1.159)
Consider next the Lyapunov function candidate
V2(t) = (1/2)x̂²(t) + (1/2)e²(t), (1.162)
from which we find using Young’s inequality (Lemma C.3 in Appendix C)
a bound in terms of the constants
ρ1 = γ4/(3γ1),  ρ2 = 6/(γ4γ2),  ρ3 = 6a0²/(γ2γ4), (1.164)
where the perturbation terms l1 and l2 appearing in the bound
are integrable functions (i.e. l1 , l2 ∈ L1 ). It then follows from Lemma B.3 in Appendix B
that
V2 ∈ L1 ∩ L∞,  V2 → 0 (1.167)
and hence
x ∈ L2 ∩ L∞,  x → 0 (1.169)
follows.
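The identifier-based scheme can be sketched as follows. The identifier structure (including the nonlinear damping term γ2ex², which is what produces the bound ex ∈ L2 in (1.159)) and the certainty-equivalence control u = −âx − cx are assumptions chosen to be consistent with the surrounding analysis, not the text's exact equations; all gains are illustrative.

```python
# Sketch of the identifier-based design for (1.142). Assumed identifier:
#   x_hat' = a_hat*x + u + gamma1*e + gamma2*e*x^2,   e = x - x_hat,
# update law (1.157): a_hat' = gamma3*e*x, and assumed control
# u = -a_hat*x - c*x with c > 0.
a = 1.0
gamma1, gamma2, gamma3, c = 1.0, 1.0, 2.0, 1.0
x_state, x_hat, a_hat = 1.0, 1.0, 0.0
dt, T = 1e-3, 30.0
for _ in range(int(T / dt)):
    e = x_state - x_hat
    u = -a_hat * x_state - c * x_state
    dx = a * x_state + u
    dxh = a_hat * x_state + u + gamma1 * e + gamma2 * e * x_state ** 2
    da = gamma3 * e * x_state
    x_state += dt * dx        # plant
    x_hat += dt * dxh         # identifier (copy of the dynamics)
    a_hat += dt * da          # update law (1.157)
print(x_state, a_hat)
```

With x̂(0) = x(0), the Lyapunov argument above gives |ã(t)| ≤ |ã(0)|, so the state is nonincreasing for c ≥ |ã(0)| and converges to zero.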
When using swapping design, filters are carefully designed so that they can be used
to express the system states as linear, static combinations of the filters, the unknown
parameters and some error terms. The error terms are then shown to converge to zero.
From the static parameterization of the system states, standard parameter identifi-
cation laws can be used to estimate the unknown parameters. Then, by substituting
the system parameters in the static parameterization with their respective estimates,
adaptive estimates of the system states can be generated. A controller is designed
for stabilization of the adaptive state estimates, meaning that this method, like the
identifier-based method, is based on the certainty equivalence principle. The number
of filters required when using this method typically equals the number of unknown
parameters plus one. Consider the following swapping filters
ṗ(t) = −γ1 p(t) + x(t), (1.170a)
η̇(t) = −γ1 η(t) + γ1 x(t) + u(t), (1.170b)
for some positive design constant γ1 and some initial conditions p0 and η0. A non-adaptive estimate x̄ of the state x in (1.142) can then be generated as
x̄(t) = a p(t) + η(t), (1.171)
whose error e(t) = x(t) − x̄(t) is found to satisfy
ė(t) = −γ1 e(t), (1.173)
and hence
e ∈ L2 ∩ L∞,  e → 0, (1.174)
with e exponentially converging to zero. The state can therefore be expressed statically as
x(t) = a p(t) + η(t) + e(t). (1.175)
From the static relationship (1.175) with e converging to zero, commonly referred to as the linear parametric model, a wide
range of well-known adaptive laws can be applied, for instance those derived in
Ioannou and Sun (1995). We will here use the gradient law with normalization,
which takes the form
â̇(t) = γ2 ê(t)p(t)/(1 + p²(t)), (1.176)
for some positive design gain γ2, where ê(t) = x(t) − x̂(t) and x̂ is an adaptive estimate of the state x generated by simply substituting a in the non-adaptive estimate (1.171) with its estimate â, that is
x̂(t) = â(t)p(t) + η(t). (1.177)
Consider the Lyapunov function candidate
V1(t) = (1/(2γ1))e²(t) + (1/(2γ2))ã²(t), (1.179)
where ã(t) = a − â(t) is the estimation error. By differentiating and inserting the dynamics (1.173) and the adaptive law (1.176), recalling that ã̇(t) = −â̇(t), we find
and hence
V̇1(t) ≤ −(1/2)e²(t) − (1/2) ê²(t)/(1 + p²(t)), (1.184)
from which
e, ã ∈ L∞ (1.185)
and
e, ê/√(1 + p²) ∈ L2 (1.186)
follow. Since ê = ãp + e with ã and e bounded, we also have
ê/√(1 + p²) ∈ L∞. (1.188)
From the adaptive law (1.176), it then follows that
â̇ ∈ L2 ∩ L∞. (1.190)
Differentiating the adaptive state estimate (1.177) and inserting the filter dynamics, we find
x̂̇(t) = â(t)x(t) + u(t) + γ1ê(t) + â̇(t)p(t). (1.191)
Choosing the control law u(t) = −â(t)x(t) − γ3x̂(t) for a design gain γ3 > 0 then yields
x̂̇(t) = −γ3x̂(t) + γ1ê(t) + â̇(t)p(t). (1.193)
The Lyapunov function candidates V2(t) = (1/2)x̂²(t) and V3(t) = (1/2)p²(t) satisfy
V̇2(t) = −γ3x̂²(t) + γ1x̂(t)ê(t) + x̂(t)â̇(t)p(t) (1.195a)
V̇3(t) = −γ1p²(t) + p(t)x(t). (1.195b)
Using Young’s inequality and the relationship x(t) = x̂(t) + ê(t), we can bound
these as
V̇2(t) ≤ −(1/2)γ3x̂²(t) + (γ1²/γ3)ê²(t) + (1/γ3)â̇²(t)p²(t) (1.196a)
V̇3(t) ≤ −(1/2)γ1p²(t) + (1/γ1)x̂²(t) + (1/γ1)ê²(t). (1.196b)
Forming V4 = 4V2 + γ1γ3V3, we obtain
V̇4(t) ≤ −γ3x̂²(t) − (1/2)γ1²γ3p²(t) + (4γ1²/γ3 + γ3)ê²(t) + (4/γ3)â̇²(t)p²(t). (1.198)
Rewriting
ê²(t) = (ê²(t)/(1 + p²(t)))(1 + p²(t)) (1.199)
gives
V̇4(t) ≤ −γ3x̂²(t) − (1/2)γ1²γ3p²(t) + [(4γ1²/γ3 + γ3)(ê²(t)/(1 + p²(t))) + (4/γ3)â̇²(t)]p²(t)
+ (4γ1²/γ3 + γ3)(ê²(t)/(1 + p²(t))), (1.200)
where the terms multiplying p²(t), as well as the last term, are bounded and integrable since â̇, ê/√(1 + p²) ∈ L2 ∩ L∞ and γ1 and γ3 are bounded constants. It then follows from Lemma B.3 in Appendix B that V4 ∈ L1 ∩ L∞ and V4 → 0, resulting in
x̂, p ∈ L2 ∩ L∞ with x̂, p → 0. Since â is bounded, it then follows from (1.177) that
η ∈ L2 ∩ L∞, η → 0, (1.205)
and, from x = x̂ + ê,
x ∈ L2 ∩ L∞, x → 0. (1.206)
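The full swapping loop is sketched below. The filters ṗ = −γ1p + x and η̇ = −γ1η + γ1x + u are a standard choice consistent with the estimate (1.171) and the identity (1.191); the control u = −âx − γ3x̂ is exactly the choice that takes (1.191) into (1.193). Gains and horizon are illustrative.

```python
# Sketch of the swapping design for (1.142): filters (p, eta), adaptive
# estimate x_hat = a_hat*p + eta, normalized gradient law (1.176), and
# the control u = -a_hat*x - gamma3*x_hat implied by (1.191)->(1.193).
a = 1.0                       # true (unknown) parameter
gamma1, gamma2, gamma3 = 1.0, 5.0, 1.0
x_state, p, eta, a_hat = 1.0, 0.0, 0.0, 0.0
dt, T = 1e-3, 40.0
for _ in range(int(T / dt)):
    x_hat = a_hat * p + eta                   # adaptive state estimate
    e_hat = x_state - x_hat                   # prediction error
    u = -a_hat * x_state - gamma3 * x_hat
    dx = a * x_state + u
    dp = -gamma1 * p + x_state                # swapping filters
    deta = -gamma1 * eta + gamma1 * x_state + u
    da = gamma2 * e_hat * p / (1.0 + p * p)   # gradient law (1.176)
    x_state += dt * dx
    p += dt * dp
    eta += dt * deta
    a_hat += dt * da
e_static = x_state - (a * p + eta)            # non-adaptive error e -> 0
print(x_state, a_hat, e_static)
```

The static error e decays exactly like e^{−γ1 t} (also in the discrete loop), while x converges to zero as the analysis predicts.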
From applying the three methods for adaptive stabilization to the simple ODE (1.142), it is quite evident that the complexity of the stability proof increases from the Lyapunov method to the identifier-based method, with the swapping method involving the most complicated analysis. The dynamical order
also differs for the three methods, with the Lyapunov method having the lowest order
1.10 Approaches to Adaptive Control of PDEs 39
as it only employs a single ODE for the update law. The identifier method, on the
other hand, involves a copy of the system dynamics in addition to the ODE for the
update law. The swapping method has the highest order, as it employs a number of
filters equal to the number of unknowns plus one in addition to the adaptive law.
A clear benefit of the swapping method is that it brings the system to a parametric form which is linear in the uncertain parameter. This allows a range of already
established adaptive laws to be used, for instance the gradient law or the least squares
method. It also allows for normalization, so that the update laws are bounded, regard-
less of the boundedness properties of the system states. Normalization can be incor-
porated into the Lyapunov-based update law by choosing the Lyapunov function
V1 in (1.143) differently (for instance logarithmic), however, doing so adds other
complexities to the proof. The identifier method does not have this property. The
property of having bounded update laws is even more important for PDEs, where
there is a distinction between boundedness in L2 and pointwise boundedness. An
update law that employs for instance boundary measurements may fail to be bounded
even though the closed loop system is bounded in L 2 .
Although the Lyapunov method is quite simple and straightforward to use for
the design of an adaptive stabilizing control law for the ODE (1.142), rendering
the other two methods overly complicated, this is not the case for PDEs. This can
for instance be seen from the derivation of an adaptive controller for a scalar linear
hyperbolic PDE with an uncertain spatially varying interior parameter derived in Xu
and Liu (2016) using the Lyapunov method. Although the resulting control law is
simple and of a low dynamical order, the stability proof is not and constitutes the
majority of the 16-page paper (Xu and Liu 2016). Due to this increased complexity,
the Lyapunov method is seldom used for adaptive control of linear hyperbolic PDE
systems, with the result in Xu and Liu (2016) being, at the time of writing this
book, the only result using this method for adaptive stabilization of linear hyperbolic
PDEs. The identifier-based and swapping-based methods, on the other hand, extend to PDEs in a more straightforward manner. The identifier in the identifier-based method and the filters in the swapping-based method are themselves PDEs, however, making both types of controllers infinite-dimensional.
References
Aamo OM (2013) Disturbance rejection in 2 × 2 linear hyperbolic systems. IEEE Trans Autom
Control 58(5):1095–1106
Amin S, Hante FM, Bayen AM (2008) On stability of switched linear hyperbolic conservation laws
with reflecting boundaries. In: Hybrid systems computation and control. Springer, pp 602–605
Anfinsen H, Aamo OM (2018) A note on establishing convergence in adaptive systems. Automatica
93:545–549
Anfinsen H, Aamo OM (2016) Tracking in minimum time in general linear hyperbolic PDEs using
collocated sensing and control. In: 2nd IFAC workshop on control of systems governed by partial
differential equations. Bertinoro, Italy
Auriol J, Di Meglio F (2016) Minimum time control of heterodirectional linear coupled hyperbolic
PDEs. Automatica 71:300–307
Bernard P, Krstić M (2014) Adaptive output-feedback stabilization of non-local hyperbolic PDEs.
Automatica 50:2692–2699
Chen S, Vazquez R, Krstić M (2017) Stabilization of an underactuated coupled transport-wave PDE
system. In: American control conference. Seattle, WA, USA
Colton D (1977) The solution of initial-boundary value problems for parabolic equations by the
method of integral operators. J Differ Equ. 26:181–190
Coron J-M, d’Andréa Novel B, Bastin G (2007) A strict Lyapunov function for boundary control
of hyperbolic systems of conservation laws. IEEE Trans Autom Control 52(1):2–11
Coron J-M, Vazquez R, Krstić M, Bastin G (2013) Local exponential H 2 stabilization of a 2 × 2
quasilinear hyperbolic system using backstepping. SIAM J Control Optim 51(3):2005–2035
Curró C, Fusco D, Manganaro N (2011) A reduction procedure for generalized Riemann problems
with application to nonlinear transmission lines. J Phys A: Math Theor 44(33):335205
Di Meglio F (2011) Dynamics and control of slugging in oil production. Ph.D. thesis, MINES
ParisTech
Di Meglio F, Vazquez R, Krstić M (2013) Stabilization of a system of n + 1 coupled first-order
hyperbolic linear PDEs with a single boundary input. IEEE Trans Autom Control 58(12):3097–3111
Diagne A, Diagne M, Tang S, Krstić M (2017) Backstepping stabilization of the linearized Saint-
Venant-Exner model. Automatica 76:345–354
Guo B-Z, Jin F-F (2015) Output feedback stabilization for one-dimensional wave equation subject to boundary disturbance. IEEE Trans Autom Control 60(3):824–830
Greenberg JM, Tsien LT (1984) The effect of boundary damping for the quasilinear wave equation.
J Differ Equ 52(1):66–75
Hu L, Di Meglio F, Vazquez R, Krstić M (2016) Control of homodirectional and general heterodi-
rectional linear coupled hyperbolic PDEs. IEEE Trans Autom Control 61(11):3301–3314
Ioannou P, Sun J (1995) Robust adaptive control. Prentice-Hall Inc, Upper Saddle River, NJ, USA
Krstić M, Smyshlyaev A (2008a) Adaptive boundary control for unstable parabolic PDEs - Part I:
Lyapunov design. IEEE Trans Autom Control 53(7):1575–1591
Krstić M, Smyshlyaev A (2008b) Backstepping boundary control for first-order hyperbolic PDEs
and application to systems with actuator and sensor delays. Syst Control Lett 57(9):750–758
Krstić M, Smyshlyaev A (2008c) Boundary control of PDEs: a course on backstepping designs.
Soc Ind Appl Math
Landet IS, Pavlov A, Aamo OM (2013) Modeling and control of heave-induced pressure fluctuations
in managed pressure drilling. IEEE Trans Control Syst Technol 21(4):1340–1351
Litrico X, Fromion V (2006) Boundary control of hyperbolic conservation laws with a frequency
domain approach. In: 45th IEEE conference on decision and control. San Diego, CA, USA
Liu W (2003) Boundary feedback stabilization of an unstable heat equation. SIAM J Control Optim
42:1033–1043
Seidman TI (1984) Two results on exact boundary control of parabolic equations. Appl Math Optim
11:891–906
Smyshlyaev A, Krstić M (2004) Closed form boundary state feedbacks for a class of 1-D partial
integro-differential equations. IEEE Trans Autom Control 49:2185–2202
Smyshlyaev A, Krstić M (2005) Backstepping observers for a class of parabolic PDEs. Syst Control
Lett 54:613–625
Smyshlyaev A, Krstić M (2006) Output-feedback adaptive control for parabolic PDEs with spatially
varying coefficients. In: 45th IEEE conference on decision and control. San Diego, CA, USA
Smyshlyaev A, Krstić M (2007a) Adaptive boundary control for unstable parabolic PDEs - Part II:
estimation-based designs. Automatica 43:1543–1556
Smyshlyaev A, Krstić M (2007b) Adaptive boundary control for unstable parabolic PDEs - Part III:
output feedback examples with swapping identifiers. Automatica 43:1557–1564
Smyshlyaev A, Krstić M (2010) Adaptive control of parabolic PDEs. Princeton University Press,
Princeton
Vazquez R, Krstić M, Coron J-M (2011) Backstepping boundary stabilization and state estimation of a 2 × 2 linear hyperbolic system. In: 50th IEEE conference on decision and control and European control conference (CDC-ECC), December 2011, pp 4937–4942
Wollkind DJ (1986) Applications of linear hyperbolic partial differential equations: predator-prey
systems and gravitational instability of nebulae. Math Model 7:413–428
Xu Z, Liu Y (2016) Adaptive boundary stabilization for first-order hyperbolic PDEs with unknown
spatially varying parameter. Int J Robust Nonlinear Control 26(3):613–628
Xu C-Z, Sallet G (2010) Exponential stability and transfer functions of processes governed by
symmetric hyperbolic systems. ESAIM: Control Optim Calc Var 7:421–442
Part II
Scalar Systems
Chapter 2
Introduction
This part considers systems in the form (1.20), consisting of a single first order linear
hyperbolic PIDE, with local and non-local reaction terms, and with scaled actuation
and anti-collocated measurement. These can be stated as
In Chap. 4, we design the first adaptive control law of this book. It is based on an
identifier for estimation of the parameter θ in system (2.4), which is then combined
with an adaptive control law to stabilize the system. The resulting control law is state-
feedback, requiring measurements of the full state u(x, t) for all x ∈ [0, 1]. This is
relaxed in Chap. 5 where swapping design is used to solve the adaptive stabilization
problem using output-feedback, requiring the boundary measurement (2.4d), only.
In Part II’s last chapter, Chap. 6, we solve a model reference adaptive control
(MRAC) problem using output feedback. The goal is to make the measured signal
y(t) track a signal generated from a simple reference model from minimal knowledge
of system parameters. The problem of regulating the state to zero is covered by the
MRAC problem, by simply setting the reference signal to zero.
First off, we rescale the domain to get rid of the spatially varying transport speed. We will show that the mapping
ū(Φ(x), t) = u(x, t), (2.8)
where Φ is defined as
Φ(x) = μ ∫_0^x dγ/λ(γ), (2.9)
where
μ = (∫_0^1 dγ/λ(γ))^{−1},
transforms the system into one with the constant transport speed μ. We note from (2.9) that Φ is strictly increasing, and hence invertible, and that
Φ'(x) = μ/λ(x),  Φ(1) = 1,  Φ(0) = 0. (2.12)
and
u_x(x, t) = Φ'(x)ū_x(Φ(x), t) = (μ/λ(x))ū_x(Φ(x), t). (2.14)
A remapping of the domain x → Φ^{−1}(x) and a substitution ξ → Φ(ξ) in the integral gives (2.10a) with coefficients (2.11). The boundary condition, initial condition and measurement (2.10b)–(2.10d) follow immediately from insertion and using (2.12).
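The normalization (2.12) pins down μ: requiring Φ(1) = 1 forces μ = (∫_0^1 dγ/λ(γ))^{−1}. A quick numerical sketch, with the arbitrary example λ(x) = 1 + x (for which μ = 1/ln 2 and Φ(x) = ln(1 + x)/ln 2), confirms the properties of Φ:

```python
import numpy as np

# Numerical check of the rescaling (2.9): Phi(x) = mu * int_0^x ds/lambda(s)
# with mu = (int_0^1 ds/lambda(s))^{-1}, so that Phi(0) = 0, Phi(1) = 1 and
# Phi'(x) = mu/lambda(x) as in (2.12). lambda(x) = 1 + x is illustrative.
lam = lambda s: 1.0 + s
n = 2001
xs = np.linspace(0.0, 1.0, n)
h = xs[1] - xs[0]
integrand = 1.0 / lam(xs)
cum = np.concatenate(([0.0], np.cumsum(0.5 * h * (integrand[1:] + integrand[:-1]))))
mu = 1.0 / cum[-1]            # normalization forced by Phi(1) = 1
Phi = mu * cum                # strictly increasing rescaled coordinate
print(mu, Phi[-1])
```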
Next, we remove the source term in f¯ and scale the state so that the constant k2
in the measurement (2.10c) is removed. We will show that the mapping
ǔ(x, t) = k2ϕ(x)ū(x, t),  ū(x, t) = ǔ(x, t)/(k2ϕ(x)), (2.16)
where ϕ is defined as
ϕ(x) = exp(μ^{−1} ∫_0^x f(Φ^{−1}(ξ))dξ), (2.17)
maps system (2.10) into the form (2.18), where
ǧ(x) = ḡ(x)ϕ(x),  ȟ(x, ξ) = h̄(x, ξ)ϕ(x)/ϕ(ξ), (2.19a)
ρ = k1k2ϕ(1),  ǔ0(x) = k2ϕ(x)ū0(x). (2.19b)
This can be seen from differentiating (2.16) with respect to time and space, respec-
tively, to find
ū_t(x, t) = (1/(k2ϕ(x)))ǔ_t(x, t) (2.20a)
ū_x(x, t) = (1/(k2ϕ(x)))(ǔ_x(x, t) − μ^{−1}f̄(x)ǔ(x, t)). (2.20b)
Inserting (2.20) into (2.10), we obtain (2.18) with coefficients (2.19). Inserting t = 0
into (2.16) gives ǔ 0 from ū 0 .
Consider now the backstepping transformation
v̌(x, t) = ǔ(x, t) − ∫_0^x Ω(x, ξ)ǔ(ξ, t)dξ. (2.21)
We will show that the backstepping transformation (2.21) maps system (2.18) into
a pure transport PDE
where
σ(ξ) = −Φ(1, ξ),  v̌0(x) = ǔ0(x) − ∫_0^x Ω(x, ξ)ǔ0(ξ)dξ. (2.26)
From differentiating (2.21) with respect to time and space, and inserting the result into (2.18a), we find
0 = ǔ_t(x, t) − μǔ_x(x, t) − ǧ(x)ǔ(0, t) − ∫_0^x ȟ(x, ξ)ǔ(ξ, t)dξ
= v̌_t(x, t) − μv̌_x(x, t)
− [μΩ(x, 0) − μ ∫_0^x Ω(x, ξ)ǧ(ξ)dξ + ǧ(x)] ǔ(0, t)
− ∫_0^x [μΩ_x(x, ξ) + μΩ_ξ(x, ξ) + ȟ(x, ξ) − μ ∫_ξ^x Ω(x, s)ȟ(s, ξ)ds] ǔ(ξ, t)dξ, (2.27)
from which we find (2.25b) using the value of σ given in (2.26) and the boundary
condition (2.18b). The measurement (2.25d) follows directly from inserting x = 0
into (2.21) and using (2.18d). The value of v̌0 given in (2.26) follows from inserting
t = 0 into (2.21).
Lastly, we show that the backstepping transformation
v(x, t) = v̌(x, t) − ∫_0^x σ(1 − x + ξ)v̌(ξ, t)dξ (2.29)
maps the pure transport system (2.25) into system (2.4).
From differentiating (2.29) with respect to time and space, respectively, and insert-
ing the result into (2.25a), we obtain
which yields the dynamics (2.4a) provided θ is chosen according to (2.30). Moreover,
by inserting x = 1 into (2.29) and using the boundary condition (2.25b), we find
v(1, t) = ρU(t) + ∫_0^1 σ(ξ)v̌(ξ, t)dξ − ∫_0^1 σ(ξ)v̌(ξ, t)dξ = ρU(t), (2.32)
which gives (2.4b). Inserting t = 0 into (2.29) gives the expression (2.30) for v0 .
Remark 2.1 The proof of Lemma 2.1 can be shortened by choosing the boundary conditions of the kernel Ω in (2.21) differently, obtaining (2.4) directly from (2.21). However, we have chosen to include the intermediate pure transport system (2.25), as it will be used for designing a model reference adaptive controller in Chap. 6.
References
Haberman R (2004) Applied partial differential equations: with fourier series and boundary value
problems. Pearson Education, New Jersey
Korteweg D, de Vries G (1895) On the change of form of long waves advancing in a rectangular
canal and on a new type of long stationary waves. Philos Mag 39(240):422–443
Krstić M, Smyshlyaev A (2008) Backstepping boundary control for first-order hyperbolic PDEs
and application to systems with actuator and sensor delays. Syst Control Lett 57(9):750–758
Chapter 3
Non-adaptive Schemes
3.1 Introduction
We here consider the scalar system
v_t(x, t) = μv_x(x, t) + θ(x)v(0, t) (3.1a)
v(1, t) = ρU(t) (3.1b)
v(x, 0) = v0(x) (3.1c)
y(t) = v(0, t) (3.1d)
for x ∈ [0, 1], t ≥ 0.
A non-adaptive state feedback controller for system (3.1) is derived in Sect. 3.2,
based on Krstić and Smyshlyaev (2008).
In Sect. 3.3, we derive a state observer for system (3.1), assuming only the boundary measurement (3.1d) is available. The observer and state-feedback controller are then combined into an output-feedback controller in Sect. 3.4, which achieves stabilization of the system using boundary sensing only.
Section 3.5 proposes a state-feedback output tracking controller whose goal is to
make the measured output track some bounded reference signal r (t) of choice. The
proposed controller can straightforwardly be combined with the state observer to
solve the output-feedback tracking problem.
All derived schemes are implemented and simulation results can be found in
Sect. 3.6. Finally, some concluding remarks and discussion of the methods are offered
in Sect. 3.7.
Left uncontrolled (U ≡ 0), system (3.1) may be unstable, depending on the system
parameters. Stabilizing controllers will here be derived, assuming all system param-
eters are known, demonstrating the synthesis of a stabilizing controller for the simple
PDE (3.1) using backstepping. We propose the control law
U(t) = (1/ρ) ∫_0^1 k(1 − ξ)v(ξ, t)dξ, (3.4)
where the gain k is the solution to the Volterra integral equation
k(x) = d1 (∫_0^x θ(x − ξ)k(ξ)dξ − θ(x)). (3.5)
Theorem 3.1 Consider system (3.1). The control law (3.4) ensures that
v≡0 (3.6)
for t ≥ d1 , where
d1 = μ−1 . (3.7)
Notice that the control law (3.4) achieves convergence to zero in finite time, a property
that is not achieved for linear ODEs or linear parabolic PDEs. It is due to the particular
dynamics of transport equations. It is not immediately obvious why the state feedback control law (3.4) stabilizes the system, let alone how Eq. (3.5) for k is obtained. We hope to shed some light on this in the following proof of Theorem 3.1, which shows in detail the steps involved in the backstepping technique for control design.
Proof (Proof of Theorem 3.1) As the reader may recall, the idea of backstepping
is to find an invertible Volterra integral transformation and a corresponding control
law U that map the system of interest into an equivalent target system designed with
some desirable stability properties. We propose the following target system
α_t(x, t) = μα_x(x, t) (3.8a)
α(1, t) = 0 (3.8b)
α(x, 0) = α0(x), (3.8c)
where d1 is defined in (3.7), and α0 ∈ B([0, 1]) is the initial condition. It is clear that
for t ≥ d1 , we will have
α≡0 (3.10)
since α(1, t) = 0 for all t ≥ 0. Thus, we seek an invertible transformation that maps
system (3.1) into (3.8).
Consider the backstepping transformation
α(x, t) = v(x, t) − ∫_0^x K(x, ξ)v(ξ, t)dξ (3.11)
for some kernel K to be determined. Differentiating (3.11) with respect to time and inserting the dynamics (3.1a), we find
α_t(x, t) = μv_x(x, t) + θ(x)v(0, t) − μ ∫_0^x K(x, ξ)v_ξ(ξ, t)dξ − v(0, t) ∫_0^x K(x, ξ)θ(ξ)dξ. (3.13)
We now apply integration by parts to the first integral on the right hand side of (3.13),
obtaining
∫_0^x K(x, ξ)v_ξ(ξ, t)dξ = [K(x, ξ)v(ξ, t)]_{ξ=0}^{ξ=x} − ∫_0^x K_ξ(x, ξ)v(ξ, t)dξ
= K(x, x)v(x, t) − K(x, 0)v(0, t) − ∫_0^x K_ξ(x, ξ)v(ξ, t)dξ. (3.14)
Similarly, differentiating (3.11) with respect to space and using Leibniz’ rule, we
obtain
v_x(x, t) = α_x(x, t) + (d/dx) ∫_0^x K(x, ξ)v(ξ, t)dξ
= α_x(x, t) + K(x, x)v(x, t) + ∫_0^x K_x(x, ξ)v(ξ, t)dξ. (3.16)
Inserting (3.14) and (3.16) and choosing the kernel K to satisfy
K_x(x, ξ) + K_ξ(x, ξ) = 0 (3.19a)
μK(x, 0) = −θ(x) + ∫_0^x K(x, ξ)θ(ξ)dξ (3.19b)
defined over T given in (1.1a), we obtain the target system dynamics (3.8a). Substituting x = 1 into (3.11), we obtain
α(1, t) = v(1, t) − ∫_0^1 K(1, ξ)v(ξ, t)dξ = ρU(t) − ∫_0^1 K(1, ξ)v(ξ, t)dξ, (3.20)
where we have inserted the boundary condition (3.1b). Choosing the control law as
U(t) = (1/ρ) ∫_0^1 K(1, ξ)v(ξ, t)dξ, (3.21)
we obtain the boundary condition (3.8b). From (3.19a), it is evident that a solution K to Eq. (3.19) is in the form
K(x, ξ) = k(x − ξ). (3.22)
Using this, the Volterra integral equation (3.19b) reduces to (3.5), and the control law (3.21) becomes (3.4).
The inverse of (3.11) is in a similar form, as stated in Theorem 1.2, given as
v(x, t) = α(x, t) + ∫_0^x L(x, ξ)α(ξ, t)dξ (3.23)
for a function L = L(x, ξ) defined over T given in (1.1a). L can be found by eval-
uating the Volterra integral equation (1.53). However, we show here an alternative
way to derive the inverse transformation. Using a similar technique used in deriving
K , we differentiate (3.23) with respect to time and space, respectively, insert the
dynamics (3.8a) and integrate by parts to find
and
α_x(x, t) = v_x(x, t) − L(x, x)α(x, t) − ∫_0^x L_x(x, ξ)α(ξ, t)dξ. (3.25)
Choosing L to satisfy
L_x(x, ξ) + L_ξ(x, ξ) = 0,  μL(x, 0) = −θ(x) (3.27)
over T yields the original system dynamics (3.1a). The simple form of (3.27) yields the solution
L(x, ξ) = −d1θ(x − ξ) (3.28)
for d1 defined in (3.7). By simply using the Volterra integral equation (1.53), we obtain an equation for L as follows
L(x, ξ) = k(x − ξ) + ∫_ξ^x k(x − s)L(s, ξ)ds, (3.29)
where K is the solution to (3.19). However, it is not at all evident from the Volterra
integral equations (3.29) and (3.5) for k that the solution to (3.29) is as simple as
(3.28).
The Volterra integral equation (3.5) for the controller gain k does not in general have
a solution that can be found explicitly, and a numerical approximation is often used
instead. We will here give some examples where the controller gain can be found
explicitly. The integral in (3.5) can be recognized as a convolution, and applying the
Laplace transform with respect to x gives
k(s) = d1(θ(s)k(s) − θ(s)) (3.30)
and hence
k(s) = θ(s)/(θ(s) − μ), (3.31)
from which k(x) in some cases can be computed explicitly if θ(s) is known, illustrated
in the following examples.
First, consider a constant
θ(x) = θ, (3.32)
for which
θ(s) = θ/s. (3.33)
Using (3.30), we obtain
k(s) = −d1θ/(s − d1θ), (3.34)
with inverse Laplace transform
k(x) = −d1θe^{d1θx}. (3.35)
Next, consider the linear function
θ(x) = θx, (3.36)
for which θ(s) = θ/s² and
k(s) = −d1θ/(s² − d1θ), (3.37)
with inverse Laplace transform (for θ > 0)
k(x) = −√(d1θ) sinh(√(d1θ)x). (3.38)
The control law U = 0 for θ = 0 should not be surprising, as system (3.1) with θ ≡ 0
reduces to the target system, which is stable for the trivial control law U ≡ 0.
Finally, consider θ(x) = sin(ωx) for some ω > 0, for which θ(s) = ω/(s² + ω²), and (3.31) gives
k(s) = −d1ω/(s² + (ω² − d1ω)), (3.40)
with inverse Laplace transform
k(x) = −(d1ω/√(d1ω − ω²)) sinh(√(d1ω − ω²)x) if ω < d1,
k(x) = −ω²x if ω = d1, (3.41)
k(x) = −(d1ω/√(ω² − d1ω)) sin(√(ω² − d1ω)x) if ω > d1.
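When no closed form is available, (3.5) can be solved by successive approximations. The convolution form used below, k = d1(θ∗k − θ), is the one consistent with the Laplace relation (3.31); for constant θ the iteration can be checked against the closed form −d1θe^{d1θx} from (3.34). Grid size and iteration count are illustrative.

```python
import numpy as np

# Successive approximations for the gain equation (3.5),
#   k(x) = d1*( int_0^x theta(x - xi) k(xi) dxi - theta(x) ),  d1 = 1/mu,
# compared against the constant-theta closed form.
mu, theta0 = 0.75, 1.0
d1 = 1.0 / mu
n = 401
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
theta = np.full(n, theta0)               # constant theta(x) = theta0

k = -d1 * theta.copy()                   # zeroth approximation
for _ in range(40):
    conv = np.zeros(n)
    for i in range(1, n):
        v = theta[i::-1] * k[:i + 1]     # theta(x_i - x_j) * k(x_j)
        conv[i] = h * (0.5 * v[0] + v[1:i].sum() + 0.5 * v[i])
    k = d1 * (conv - theta)              # fixed-point iteration of (3.5)

k_exact = -d1 * theta0 * np.exp(d1 * theta0 * x)
err = float(np.max(np.abs(k - k_exact)))
print(err)
```

The iteration converges for any bounded θ, since the Neumann-series terms decay factorially on a finite interval.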
The state feedback controller derived in the above section requires distributed mea-
surements, which are rarely available in practice. Often, only boundary sensing in
the form (3.1d) is available, and a state observer is therefore needed. Consider the observer
v̂_t(x, t) = μv̂_x(x, t) + θ(x)y(t) (3.42a)
v̂(1, t) = ρU(t) (3.42b)
v̂(x, 0) = v̂0(x). (3.42c)
Theorem 3.2 Consider system (3.1) and the observer (3.42). Then, for t ≥ d1,
v̂ ≡ v. (3.43)
Proof The estimation error ṽ = v − v̂ satisfies
ṽ_t(x, t) = μṽ_x(x, t) (3.44a)
ṽ(1, t) = 0 (3.44b)
ṽ(x, 0) = ṽ0(x), (3.44c)
where ṽ0 = v0 − v̂0, which can be seen from subtracting (3.42) from (3.1) and using the fact that y(t) = v(0, t) as follows:
ṽ_t(x, t) − μṽ_x(x, t) = v_t(x, t) − v̂_t(x, t) − μv_x(x, t) + μv̂_x(x, t)
= μv_x(x, t) + θ(x)v(0, t) − μv̂_x(x, t) − θ(x)v(0, t) − μv_x(x, t) + μv̂_x(x, t) = 0, (3.45)
and
ṽ(1, t) = v(1, t) − v̂(1, t) = ρU(t) − ρU(t) = 0. (3.46)
The error ṽ governed by the dynamics (3.44) is clearly zero in finite time d1 , where
d1 is defined in (3.7), resulting in v̂ ≡ v.
Although the observer (3.42) for system (3.1) is only a copy of the system dynam-
ics and seems trivial to design, it is rarely the case that the resulting error dynamics
are trivial to stabilize. This will become evident in the design of observers for 2 × 2
systems in Sect. 8.3 where output injection terms have to be added to the observer
equations and carefully designed to achieve stability of the error dynamics.
As the state estimate converges to its true value in finite time, it is obvious that
simply substituting the state in the state feedback controller with the state estimate
will produce finite-time convergent output feedback controllers.
Theorem 3.3 Consider system (3.1), and let the controller be taken as
U(t) = (1/ρ) ∫_0^1 k(1 − ξ)v̂(ξ, t)dξ, (3.47)
where v̂ is generated using the observer of Theorem 3.2, and k is the solution to the Volterra integral equation (3.5). Then
v ≡ 0 (3.48)
for t ≥ 2d1.
Proof It was stated in Theorem 3.2 that v̂ ≡ v for t ≥ d1. Thus, for t ≥ d1, the control law (3.47) is the very same as (3.4), for which Theorem 3.1 states that v ≡ 0 after a finite time d1. Hence, after a total time of 2d1, v ≡ 0.
Consider the simple system (3.1) again. The goal in this section is to make the
measured output (3.1d) track a signal r (t), that is y → r . Consider the control law
U(t) = (1/ρ) ∫_0^1 k(1 − ξ)v(ξ, t)dξ + (1/ρ)r(t + d1) (3.49)
Theorem 3.4 Consider system (3.1), and let the control law be taken as (3.49). Then
y(t) = r(t) (3.50)
for t ≥ d1, and, if r ∈ L∞,
||v||∞ ∈ L∞. (3.51)
Proof It is shown in the proof of Theorem 3.1 that system (3.1) can be mapped using the backstepping transformation (3.11) into
α_t(x, t) = μα_x(x, t) (3.52a)
α(1, t) = ρU(t) − ∫_0^1 k(1 − ξ)v(ξ, t)dξ, (3.52b)
provided k is the solution to the Volterra integral equation (3.5). Inserting the control law (3.49) gives
α_t(x, t) = μα_x(x, t) (3.53a)
α(1, t) = r(t + d1). (3.53b)
Solving (3.53a) along characteristics gives α(0, t) = α(1, t − d1), and since y(t) = v(0, t) = α(0, t), we obtain y(t) = r(t) for t ≥ d1, which is the tracking goal. Moreover, if r ∈ L∞, we see from the simple dynamics (3.53a) and the boundary condition (3.53b) that ||α||∞ ∈ L∞. The invertibility of transformation (3.11) then gives ||v||∞ ∈ L∞ (Theorem 1.2).
3.6 Simulations
The one-parameter system (3.1) and the controllers of Theorems 3.1, 3.3 and 3.4 are
implemented using the system parameters
λ = 3/4,  ρ = 1,  θ(x) = (1/2)(1 + e^{−x} cosh(πx)) (3.55)
and initial condition
u_0(x) = x. (3.56)
The controller gain k, needed by all controllers, is computed from (3.5) by using suc-
cessive approximations (as described in Appendix F.1). The resulting gain is plotted
in Fig. 3.1. It is observed from Figs. 3.2 and 3.3 that the system state and observer
Fig. 3.1 The controller gain k over x ∈ [0, 1]
Fig. 3.2 Left: State during state feedback. Right: State during output feedback
Fig. 3.3 Left: State estimation error. Right: State during output tracking
0.6
1.5
||v − v̂||
1 0.4
||v||
0.5 0.2
0 0
0 1 2 3 4 5 0 1 2 3 4 5
Time [s] Time [s]
Fig. 3.4 Left: State norm during state feedback (solid red), output feedback (dashed-dotted blue)
and output tracking (dashed green). Right: State estimation error norm
state are bounded in all cases, and that the system state converges to zero when using
the controllers of Theorems 3.1 and 3.3, while standing oscillations are observed
for the case of using the controller of Theorem 3.4, which should be expected when
the reference signal is a sinusoid. The estimation error from using the observer in
Theorem 3.3 also converges to zero.
From the comparison plot of the state norms in Fig. 3.4, the finite-time convergence
property is evident for the controllers of Theorems 3.1 and 3.3, with the state feedback
64 3 Non-adaptive Schemes
1
0 2
and r
−1
U
1
−2
−3 0
0 1 2 3 4 5 0 1 2 3 4 5
Time [s] Time [s]
Fig. 3.5 Left: Actuation signal during state feedback (solid red), output feedback (dashed-dotted
blue) and output tracking (dashed green). Right: Measured signal (dashed red) and reference r
during tracking
4
d1 = λ−1 = ≈ 1.333 (3.58)
3
seconds, while convergence to zero for the output feedback controller of Theorem 3.3
is achieved for t ≥ 2d1 , since the estimation error takes d1 time to converge, as
observed from the figure. The control inputs are seen from Fig. 3.5 also to be zero
for t ≥ d1 and t ≥ 2d1 for the controllers of Theorems 3.1 and 3.3, respectively.
Lastly, the controller of Theorem 3.4 achieves the tracking objective for t ≥ d1 , in
accordance with the theory.
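The successive-approximation computation of the gain is simple enough to sketch directly. The following is a minimal illustration, not the book's Appendix F.1 code, assuming (3.5) shares the convolution form of its adaptive counterparts (4.20)/(5.29), λk(x) = −θ(x) + ∫₀ˣ k(x − ξ)θ(ξ)dξ; the grid size and tolerance are arbitrary choices:

```python
import numpy as np

def volterra_gain(theta, lam, n=201, tol=1e-10, max_iter=500):
    """Successive approximations for lam*k(x) = -theta(x) + int_0^x k(x-s)*theta(s) ds
    on a uniform grid over [0, 1], with the trapezoidal rule for the convolution."""
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    th = theta(x)
    k = -th / lam                           # zeroth iterate: drop the integral term
    for _ in range(max_iter):
        conv = np.empty(n)
        for i in range(n):
            f = k[i::-1] * th[: i + 1]      # k(x_i - s) * theta(s) on the s-grid
            conv[i] = dx * (f.sum() - 0.5 * (f[0] + f[-1]))
        k_new = (-th + conv) / lam
        if np.max(np.abs(k_new - k)) < tol:
            return x, k_new
        k = k_new
    return x, k

# parameters from (3.55): lambda = 3/4, theta(x) = (1 + exp(-x) cosh(pi x)) / 2
x, k = volterra_gain(lambda s: 0.5 * (1.0 + np.exp(-s) * np.cosh(np.pi * s)), 0.75)
```

Because the unknown enters under the integral only up to x, the Picard iteration converges for any continuous θ; in the adaptive schemes of the later chapters the same routine can simply be re-run with the current estimate θ̂(·, t) at each time step.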
3.7 Notes
The above results clearly show the strength of the backstepping technique in con-
troller and observer design. One of the key strengths, as demonstrated, is that spatial
discretization need not be performed in any way before the actual implementation in a
computer. When using the backstepping technique, one instead analyzes the infinite-
dimensional system directly, avoiding any artifacts that discretization methods can
introduce that may potentially cause stability problems. In infinite dimensions it is,
for instance, straightforward to prove convergence in finite time, a particular feature
of hyperbolic partial differential equations that is lost under spatial discretization.
The major challenge in the backstepping technique instead lies in the choice of
target system and backstepping transformation. In the above design, we start by
choosing a target system and a form for the backstepping transformation, and then
derive conditions on the backstepping kernel so the backstepping transformation
maps the system of interest into the target system. The existence of such a kernel
is the major challenge, and it may happen that the conditions required on the back-
stepping kernel constitute an ill-posed problem, in which case either a different
backstepping transformation or an alternative target system must be found. These
issues will become far more evident when we consider systems of coupled PDEs in
Part III and onwards.
One drawback of the above design is that the controller (and observer) gains can
rarely be expressed explicitly, but rather arise as the solution to a set of partial
differential equations in the form (3.19) that may be difficult or time-consuming to solve. This
is of minor concern when the equation is time-invariant, because then a solution
can be computed once and for all, prior to implementation. However, for adaptive
controllers, the gains typically depend on uncertain parameters that are continuously
updated by some adaptive law. This brings us to the topic of the next chapter, where
we use the backstepping technique to derive controllers for systems with uncertain
parameters. The resulting controllers then have time-varying gains which must be
computed at every time step.
Reference
Krstić M, Smyshlyaev A (2008) Backstepping boundary control for first-order hyperbolic PDEs
and application to systems with actuator and sensor delays. Syst Control Lett 57(9):750–758
Chapter 4
Adaptive State-Feedback Controller
4.1 Introduction
where
with
the identifier-based design is simpler to carry out than the Lyapunov design in Xu
and Liu (2016), but at the cost of an increased dynamic order of the controller due to
the identifier dynamics. The details of the design are given in Sect. 4.2, simulations
are presented in Sect. 4.3, while some concluding remarks are offered in Sect. 4.4.
Although θ is assumed unknown, we assume some a priori knowledge
of the parameter, formally stated in the following assumption.
Assumption 4.1 A bound on θ is known. That is, we know a constant
θ̄ so that
This assumption is not a limitation, since the bound θ̄ can be arbitrarily large.
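The projection operator projθ̄ used in the update laws below (Lemma A.1 in Appendix A) can be realized pointwise on a spatial grid. A minimal sketch of one common variant, which passes the update through unchanged except when the estimate sits on the bound and the update would push it further out (the function name and vector shapes here are illustrative assumptions, not the book's notation):

```python
import numpy as np

def proj(tau, theta_hat, bound):
    """Pointwise projection onto [-bound, bound]: zero out the components of the
    update tau that would drive theta_hat outside the admissible interval."""
    tau = np.atleast_1d(np.asarray(tau, dtype=float)).copy()
    theta_hat = np.atleast_1d(np.asarray(theta_hat, dtype=float))
    tau[(theta_hat >= bound) & (tau > 0.0)] = 0.0
    tau[(theta_hat <= -bound) & (tau < 0.0)] = 0.0
    return tau
```

With such an operator, a forward-Euler step θ̂ ← θ̂ + Δt·proj(τ, θ̂, θ̄) cannot be driven past the bound once the estimate reaches it, which is what delivers the uniform boundedness of θ̂ used throughout the proofs.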
v̂t(x, t) − μv̂x(x, t) = θ̂(x, t)v(0, t) + γ0(v(x, t) − v̂(x, t))v²(0, t) (4.5a)
v̂(1, t) = U (t) (4.5b)
v̂(x, 0) = v̂0 (x) (4.5c)
for some design gains γ0 > 0, γ̄ ≥ γ(x) ≥ γ > 0, x ∈ [0, 1] and initial conditions
satisfying
Lemma 4.1 Consider system (4.1). The identifier (4.5) and the update law (4.6)
with initial conditions satisfying (4.7) guarantee that
where
Proof The property (4.8a) follows from the projection operator and the initial condi-
tion (4.7b) (Lemma A.1 in Appendix A). The error signal (4.9) can straightforwardly
be shown to have dynamics
for which we find by differentiating with respect to time, inserting the dynamics
(4.10a) and integrating by parts
V̇1(t) = −μe²(0, t) − μ||e(t)||² + 2 ∫₀¹ (1 + x)e(x, t)θ̃(x, t)v(0, t)dx
 − 2 ∫₀¹ (1 + x)[γ0 e²(x, t)v²(0, t) + γ⁻¹(x)θ̃(x, t)θ̃t(x, t)]dx. (4.12)
Inserting the adaptive law (4.6), and using the property −θ̃(x, t)projθ̄ (τ , θ̂(x, t)) ≤
−θ̃(x, t)τ (Lemma A.1), we obtain
which shows that V1 (t) is non-increasing and hence bounded, and thus ||e|| ∈ L∞
follows. This also implies that the limit limt→∞ V1 (t) = V1,∞ exists. By integrating
(4.13) from zero to infinity, we obtain
∫₀^∞ V̇1(τ)dτ = V1,∞ − V1(0) ≤ −μ ∫₀^∞ e²(0, τ)dτ − μ ∫₀^∞ ||e(τ)||²dτ − 2γ0 ∫₀^∞ ||e(τ)||²v²(0, τ)dτ (4.14)
and hence
μ ∫₀^∞ e²(0, τ)dτ + μ ∫₀^∞ ||e(τ)||²dτ + 2γ0 ∫₀^∞ ||e(τ)||²v²(0, τ)dτ
 ≤ V1(0) − V1,∞ ≤ V1(0) < ∞ (4.15)
which, since μ, γ0 > 0, proves that all integrals in (4.15) are bounded, resulting in
||θ̂t || ∈ L2 . (4.18)
Using the identifier and adaptive law designed in the previous section, we are ready
to design a stabilizing control law. Consider the control law
U(t) = ∫₀¹ k̂(1 − ξ, t)v̂(ξ, t)dξ (4.19)
Theorem 4.1 The control law (4.19) in closed loop with system (4.1), identifier (4.5)
and adaptive law (4.6), guarantees that
Theorem 4.1 is proved in Sect. 4.2.4 using Lyapunov theory, facilitated by the back-
stepping transformation with accompanying target system, which are presented next.
and
or equivalently
μk̂(x, t) = −θ̂(x, t) + ∫_ξ^{x+ξ} k̂(x + ξ − s, t)θ̂(s − ξ, t)ds. (4.28)
A substitution τ = s − ξ in the integral yields (4.20). Consider also the target system
Lemma 4.2 The backstepping transformation (4.22) with k̂ satisfying (4.20) maps
identifier (4.5) into system (4.29).
Proof From differentiating (4.22) with respect to time, inserting the dynamics (4.5a)
and integrating by parts, we find
Substituting (4.31) and (4.32) into the identifier dynamics (4.5a), we find
Choosing k̂ as the solution to (4.20) yields the target system dynamics (4.29a).
Substituting x = 1 into (4.22) and inserting the boundary condition (4.5b), we find
w(1, t) = v̂(1, t) − ∫₀¹ k̂(1 − ξ, t)v̂(ξ, t)dξ
 = U(t) − ∫₀¹ k̂(1 − ξ, t)v̂(ξ, t)dξ. (4.34)
Choosing the control law (4.19) yields the boundary condition (4.29b).
We will here use the following inequalities that hold for all t ≥ 0
||k̂(t)|| ≤ Mk (4.35a)
||w(t)|| ≤ G 1 ||v̂(t)|| (4.35b)
||v̂(t)|| ≤ G 2 ||w(t)|| (4.35c)
||k̂t || ∈ L2 . (4.36)
The property (4.35a) follows from applying Lemma 1.1 to (4.20), and the fact that
θ̂ is uniformly bounded. Properties (4.35b)–(4.35c) follow from Theorem 1.3, while
for (4.36), we differentiate (4.20) with respect to time and find
μk̂t(x, t) = −θ̂t(x, t) + ∫₀ˣ k̂t(x − ξ, t)θ̂(ξ, t)dξ + ∫₀ˣ k̂(x − ξ, t)θ̂t(ξ, t)dξ
or
Hence
We will consider the three rightmost integrals in (4.43) individually. For the second
integral on the right-hand side, we obtain by applying Young's inequality to the cross
terms (Appendix C)
−2μ ∫₀¹ e^{δx} w(x, t)k̂(x, t)dx e(0, t)
 ≤ ρ1 ∫₀¹ e^{δx} w²(x, t)dx + (1/ρ1) μ² ∫₀¹ e^{δx} k̂²(x, t)dx e²(0, t)
 ≤ ρ1 V2(t) + (1/ρ1) μ² e^δ ∫₀¹ k̂²(x, t)dx e²(0, t)
 ≤ ρ1 V2(t) + (1/ρ1) μ² e^δ Mk² e²(0, t) (4.44)
for an arbitrary positive constant ρ1 . Similarly for the third and fourth integral, using
v(0, t) = v̂(0, t) + e(0, t) = w(0, t) + e(0, t), we find, using Cauchy–Schwarz’ and
Young’s inequalities (Appendix C)
2γ0 ∫₀¹ e^{δx} w(x, t)T[e](x, t)dx v²(0, t) ≤ 2γ0 e^δ ||w(t)|| ||T[e](t)|| v²(0, t)
 ≤ 2G1 γ0 e^δ ||w(t)|| ||e(t)|| |v(0, t)| |w(0, t) + e(0, t)|
 ≤ ρ2 G1² γ0² e^{2δ} ||w(t)||² ||e(t)||² v²(0, t) + (1/ρ2)(w(0, t) + e(0, t))²
 ≤ ρ2 G1² γ0² e^{2δ} ||w(t)||² ||e(t)||² v²(0, t) + (2/ρ2) w²(0, t) + (2/ρ2) e²(0, t) (4.45)
ρ1 = μ, ρ2 = 2/μ, ρ3 = μ (4.48)
yields
V̇2(t) ≤ −μ[δ − 2]V2(t) + μ(e^δ Mk² + 1)e²(0, t)
 + (2/μ) G1² γ0² e^{2δ} ||w(t)||² ||e(t)||² v²(0, t)
 + (1/μ) e^δ G2² ||k̂t(t)||² ||w(t)||². (4.49)
Now choosing
δ = 3 (4.50)
yields
V2 ∈ L1 ∩ L∞ , V2 → 0 (4.53)
and hence
In the non-adaptive case investigated in Sect. 3.2.1, it is shown that system (2.4) is,
through the invertible backstepping transformation (3.11), equivalent to the system
provided k satisfies the Volterra integral equation (3.5), and where we have inserted
the control law (4.19). Since ||v||, ||v̂|| ∈ L2 ∩ L∞, ||v||, ||v̂|| → 0 and k, k̂ are
bounded, it follows that
and hence
Thus, all signals in the closed loop system are pointwise bounded and converge to
zero.
4.3 Simulations
System (4.1), identifier (4.5) and the control law of Theorem 4.1 are implemented
using the same system parameters as in the simulation in Sect. 3.6, that is
μ = 3/4, θ(x) = (1/2)(1 + e⁻ˣ cosh(πx)) (4.63)
and initial condition
u0(x) = x. (4.64)
The initial condition for the identifier and parameter estimate are set to zero, and the
design gains are set to
Equation (4.20) is solved on-line for the controller gain k̂ using successive
approximations (as described in Appendix F.1).
It is observed from Fig. 4.1 that the system and identifier states are bounded and
converge asymptotically to zero. The error in the identifier (u − û) also converges
to zero, as does the actuation signal U seen in Fig. 4.2. The estimated parameter θ̂
is seen from Fig. 4.3 to be bounded and converge, although not to the true value θ.
Convergence of parameters to their true values requires persistent excitation, and is
therefore not compatible with the objective of regulation to zero.
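Although the design itself never discretizes the plant, the simulation of course must. A minimal explicit upwind scheme for a scalar plant of the form vt(x, t) − μvx(x, t) = θ(x)v(0, t), v(1, t) = U(t), is sketched below; the grid size and the constant test input are illustrative choices, not the book's simulation setup:

```python
import numpy as np

def upwind_step(v, theta, mu, U, dx, dt):
    """One explicit step for v_t = mu*v_x + theta(x)*v(0,t), v(1,t) = U.
    With mu > 0 the characteristics run from x = 1 towards x = 0, so the
    upwind difference uses the right neighbour; stability needs mu*dt/dx <= 1."""
    c = mu * dt / dx
    v_new = np.empty_like(v)
    v_new[:-1] = v[:-1] + c * (v[1:] - v[:-1]) + dt * theta[:-1] * v[0]
    v_new[-1] = U                      # boundary condition at x = 1
    return v_new

mu, N = 0.75, 100
x = np.linspace(0.0, 1.0, N + 1)
dx = x[1] - x[0]
dt = dx / mu                           # CFL number exactly one: exact transport
theta = np.zeros(N + 1)                # theta = 0 isolates the pure transport delay
v = np.zeros(N + 1)
for _ in range(N + 1):                 # simulate slightly past d1 = 1/mu seconds
    v = upwind_step(v, theta, mu, 1.0, dx, dt)
```

With θ ≡ 0 and CFL number one the scheme transports exactly, so after d1 = 1/μ seconds the state equals the boundary input everywhere, the finite-time property exploited throughout the designs.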
Fig. 4.1 Left: State (solid red) and identifier (dashed-dotted blue) norms. Right: Identifier error
norm
Fig. 4.2 Actuation signal U
Fig. 4.3 Left: Estimated parameter θ̂. Right: Actual value of θ (solid black) and final estimate θ̂
(dashed red)
4.4 Notes
Proving stability properties by the Lyapunov design in Xu and Liu (2016) is in gen-
eral more difficult than for the identifier-based design demonstrated in this chapter,
and the difference in complexity becomes more prominent as the complexity of the
system increases. On the other hand, Lyapunov designs in general result in adaptive
controllers of lower dynamical order than their identifier-based counterparts, and
are therefore simpler to implement in practice. This is of course due to the identi-
fier inheriting the dynamic order of the system, while the Lyapunov design gives a
dynamic order that only depends on the uncertain parameters. Both solutions assume
that measurements of the full state are available, which is unrealistic in most cases.
Reference
Xu Z, Liu Y (2016) Adaptive boundary stabilization for first-order hyperbolic PDEs with unknown
spatially varying parameter. Int J Robust Nonlinear Control 26(3):613–628
Chapter 5
Adaptive Output-Feedback Controller
5.1 Introduction
We consider again systems in the form (2.4) and recall the equations for the conve-
nience of the reader as
vt (x, t) − μvx (x, t) = θ(x)v(0, t) (5.1a)
v(1, t) = U (t) (5.1b)
v(x, 0) = v0 (x) (5.1c)
y(t) = v(0, t), (5.1d)
where
μ ∈ R, μ > 0, θ ∈ C⁰([0, 1]) (5.2)
with
v0 ∈ B([0, 1]). (5.3)
For simplicity, we again assume ρ = 1. (This assumption will be relaxed in Chap. 6).
A Lyapunov-based state-feedback controller for the special case μ = 1 is pre-
sented in Xu and Liu (2016), while an identifier-based state-feedback controller is
designed in Chap. 4. We will in this chapter derive an adaptive controller using the
third design method mentioned in Sect. 1.10: swapping-based design. This method
employs filters, carefully designed so that the system states can be expressed as lin-
ear, static combinations of the filter states, the unknown parameters and some error
terms. The error terms are shown to converge to zero. The static parameterization
of the system states is referred to as the linear parametric model, to which a range
of standard parameter estimation algorithms can be applied. The number of filters
required when using this method typically equals the number of unknown parameters
plus one.
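As a concrete illustration of that last step, one Euler update of a normalized gradient law with projection, given a sampled regressor and a scalar prediction error, might look as follows (the vector shapes, gain and bound are illustrative assumptions, not the book's tuning):

```python
import numpy as np

def adaptive_law_step(theta_hat, phi, e_hat, gamma, bound, dt):
    """One Euler step of theta_hat_dot = proj(gamma * e_hat * phi / (1 + ||phi||^2));
    the normalization keeps the update bounded regardless of the regressor size."""
    tau = gamma * e_hat * phi / (1.0 + np.dot(phi, phi))
    # projection: freeze components sitting on the bound that would leave it
    tau[(theta_hat >= bound) & (tau > 0.0)] = 0.0
    tau[(theta_hat <= -bound) & (tau < 0.0)] = 0.0
    return theta_hat + dt * tau

theta_hat = np.zeros(4)
phi = np.array([1.0, -0.5, 0.25, 0.0])
theta_hat = adaptive_law_step(theta_hat, phi, e_hat=2.0, gamma=1.0, bound=5.0, dt=0.01)
```

Any standard estimator (gradient, least-squares) can be substituted at this point, which is precisely the flexibility the linear parametric model buys.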
for some function v̂0 ∈ B([0, 1]). The corresponding prediction error is defined as
From the parametric model (5.7) and corresponding error (5.8), we also have
y(t) = ψ(0, t) + d1 ∫₀¹ θ(ξ)φ(1 − ξ, t)dξ + e(0, t), (5.13)
with e(0, t) = 0 for t ≥ d1 . From (5.13), we propose the following adaptive law with
normalization and projection
θ̂t(x, t) = projθ̄( γ1(x) ê(0, t)φ(1 − x, t) / (1 + ||φ(t)||²), θ̂(x, t) ), θ̂(x, 0) = θ̂0(x) (5.14)
where γ̄ ≥ γ1 (x) ≥ γ > 0 for all x ∈ [0, 1] is a design gain, and the initial guess is
chosen inside the feasible domain, i.e.
σ(t) = ê(0, t) / (1 + ||φ(t)||²). (5.17)
Proof The property (5.16a) follows from the projection operator and the condition
(5.15) (Lemma A.1 in Appendix A). Consider the Lyapunov function candidate
V1(t) = d1 ∫₀¹ e²(x, t)dx + (d1/2) ∫₀¹ γ1⁻¹(x)θ̃²(x, t)dx. (5.18)
Differentiating with respect to time and inserting the dynamics (5.9) and adaptive
law (5.14), we find
V̇1(t) = 2 ∫₀¹ e(x, t)ex(x, t)dx
 − d1 ∫₀¹ γ1⁻¹(x)θ̃(x, t) projθ̄( γ1(x) ê(0, t)φ(1 − x, t) / (1 + ||φ(t)||²), θ̂(x, t) ) dx. (5.19)
Since −θ̃(x, t)projθ̄ (τ (x, t), θ̂(x, t)) ≤ −θ̃(x, t)τ (x, t) (Lemma A.1), we get
V̇1(t) ≤ −e²(0, t) − σ²(t) + ê(0, t)e(0, t) / (1 + ||φ(t)||²). (5.22)
V̇1(t) ≤ −(1/2)e²(0, t) − (1/2)σ²(t), (5.23)
where we used definition (5.17). This proves that V1 (t) is bounded and non-
increasing, and hence has a limit as t → ∞. Integrating (5.23) in time from zero
to infinity gives
which proves
σ ∈ L∞ . (5.26)
≤ γ̄|σ(t)| (5.27a)
where v̂ is generated using (5.10), and k̂ is the on-line solution to the Volterra integral
equation
μk̂(x, t) = ∫₀ˣ k̂(x − ξ, t)θ̂(ξ, t)dξ − θ̂(x, t). (5.29)
Theorem 5.1 Consider system (5.1), filters (5.5), adaptive laws (5.14) and the state
estimate (5.10). The control law (5.28) guarantees
Before proving Theorem 5.1 in Sect. 5.2.5, we will, as we did for the identifier-based
design, introduce a target system and a backstepping transformation that facilitate
the proof.
Lemma 5.2 The backstepping transformation (5.31) and controller (5.28) map sys-
tem (5.11) into the target system (5.34).
Proof Differentiating (5.31) with respect to time and space, respectively, inserting
the dynamics (5.11a) and integrating by parts yield
and
v̂x(x, t) = wx(x, t) + k̂(0, t)v̂(x, t) + ∫₀ˣ k̂x(x − ξ, t)v̂(ξ, t)dξ. (5.36)
which can be rewritten as (5.34a) when using (5.29). The boundary condition (5.34b)
follows from inserting x = 1 into (5.31), and using (5.28).
As for the identifier-based solution, the following inequalities hold for all t ≥ 0 since
θ̂ is bounded by projection
||k̂(t)|| ≤ Mk (5.38a)
||w(t)|| ≤ G 1 ||v̂(t)|| (5.38b)
||v̂(t)|| ≤ G 2 ||w(t)|| (5.38c)
||k̂t || ∈ L2 . (5.39)
Differentiating (5.40a) with respect to time and inserting the dynamics (5.34a) and
integrating by parts, we obtain
V̇2(t) ≤ −μw²(0, t) − μ||w(t)||² − 2μ ∫₀¹ (1 + x)w(x, t)k̂(x, t)dx ê(0, t)
 + 2d1 ∫₀¹ (1 + x)w(x, t) T[∫ₓ¹ θ̂t(ξ, t)φ(1 − (ξ − x), t)dξ](x, t)dx
 − 2 ∫₀¹ (1 + x)w(x, t) ∫₀ˣ k̂t(x − ξ, t)T⁻¹[w](ξ, t)dξ dx (5.41)
where we have inserted for the boundary condition (5.34b). We now consider the
three integrals in (5.41) individually. Applying Young’s inequality, we obtain
−2μ ∫₀¹ (1 + x)w(x, t)k̂(x, t)dx ê(0, t)
 ≤ ρ1 ∫₀¹ w²(x, t)dx + (4/ρ1) μ² ∫₀¹ k̂²(x, t)dx ê²(0, t)
 ≤ ρ1 ||w(t)||² + (4/ρ1) μ² Mk² ê²(0, t) (5.42)
≤ ρ2 ||w(t)||² + (4/ρ2) d1² G1² ∫₀¹ ( ∫ₓ¹ θ̂t(ξ, t)φ(1 − (ξ − x), t)dξ )² dx
 ≤ ρ2 ||w(t)||² + (4/ρ2) d1² G1² ∫₀¹ ( ∫₀¹ |θ̂t(ξ, t)||φ(1 − ξ, t)|dξ )² dx
 ≤ ρ2 ||w(t)||² + (4/ρ2) d1² G1² ( ∫₀¹ |θ̂t(ξ, t)||φ(1 − ξ, t)|dξ )² (5.43)
V̇2(t) ≤ −μw²(0, t) − [μ − ρ1 − ρ2 − ρ3] ||w(t)||² + (4/ρ1) μ² Mk² ê²(0, t)
 + (4/ρ2) d1² G1² ||θ̂t(t)||² ||φ(t)||² + (4/ρ3) G2² ||k̂t(t)||² ||w(t)||². (5.46)
Next, from differentiating (5.40b) with respect to time, inserting the dynamics (5.5b)
and integrating by parts, we obtain
where we have inserted the boundary condition (5.5b), recalling that y(t) =
v(0, t) = v̂(0, t) + ê(0, t) = w(0, t) + ê(0, t). Now, forming the Lyapunov function
candidate
and choosing
ρ1 = ρ2 = ρ3 = μ/6 (5.49)
we find
to obtain
where
V̇4(t) ≤ −(μ/2)V2(t) − μV3(t) + l1(t)V2(t) + l2(t)V3(t) + l3(t), (5.54)
and in terms of V4 , we have
V̇4(t) ≤ −(μ/4)V4(t) + l4(t)V4(t) + l3(t), (5.55)
where
l4(t) = l1(t) + (1/4)l2(t) (5.56)
is a nonnegative, bounded and integrable function. Lemma B.3 in Appendix B gives
V4 ∈ L1 ∩ L∞ , V4 → 0, (5.57)
and hence
U ∈ L∞ ∩ L2 , U → 0, (5.62)
and
5.3 Simulations
The system (5.1), the filters (5.5) and the control law of Theorem 5.1 are implemented
using the same system parameters as for the simulation for the identifier-based design
in Chap. 4, that is
μ = 3/4, θ(x) = (1/2)(1 + e⁻ˣ cosh(πx)) (5.67)
and initial condition
u0(x) = x. (5.68)
All additional initial conditions are set to zero. The design gains are set to
Fig. 5.1 Left: State (solid red), filter ψ (dashed-dotted blue) and filter φ (dashed green) norms.
Right: Adaptive state estimate error norm
Fig. 5.2 Actuation signal U
Fig. 5.3 Left: Estimated parameter θ̂. Right: Actual value of θ (solid black) and final estimate θ̂
(dashed red)
5.4 Notes
In the next chapter, we will extend the swapping-based method to solve a model
reference adaptive control problem and output-feedback adaptive stabilization prob-
lem for system (2.1).
References
6.1 Introduction
The goal is to make y(t) track a signal yr (t) generated from a reference model.
Additionally, the system should be stabilized. The only required knowledge of the
system is stated in the following assumption.
for some T > 0, where the reference signal yr is generated using the reference model
for some initial condition b0 ∈ B([0, 1]) and a bounded reference signal r of choice.
We note that system (6.6) is simply a time delay, since yr (t) = r (t − d2 ) for t ≥ d2 .
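In discrete time, this reference model is just a FIFO buffer of length d2/Δt. A minimal sketch, with the sampling time as an illustrative choice and the buffer initialized to zero (i.e. b0 = 0):

```python
from collections import deque

def make_reference_model(d2, dt):
    """Realize y_r(t) = r(t - d2) as a delay line of n = d2/dt samples,
    initialized to zero so that y_r = 0 until the first samples flow through."""
    n = max(1, round(d2 / dt))
    buf = deque([0.0] * n, maxlen=n)
    def step(r_t):
        y_r = buf[0]            # the sample fed in n steps ago
        buf.append(r_t)         # appending discards the returned sample on the left
        return y_r
    return step

ref_step = make_reference_model(d2=0.5, dt=0.1)      # a 5-sample delay
outputs = [ref_step(float(k)) for k in range(8)]     # feed r = 0, 1, 2, ...
```

After the initial transient of n samples, the output reproduces the input exactly, delayed by d2, which is all the tracking objective (6.5) asks of the reference model.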
Regarding the reference signal r , we assume the following.
Assumption 6.2 The reference signal r (t) is known for all t ≥ 0, and there exists a
constant r̄ so that
|r (t)| ≤ r̄ (6.7)
for all t ≥ 0.
We proceed in Sect. 6.2 by solving the model reference adaptive control prob-
lem stated above. In Sect. 6.3, we solve the adaptive output-feedback stabilization
problem, which is covered by the MRAC by simply setting r ≡ 0, and prove some
additional stability and convergence properties. The controllers are demonstrated on
a linearized Korteweg de Vries-like equation in Sect. 6.4, before some concluding
remarks are offered in Sect. 6.5.
This model reference adaptive control problem was originally solved in Anfinsen
and Aamo (2017), and is based on the swapping-based adaptive output-feedback
stabilization scheme presented in Chap. 5.
Firstly, invertible mappings are introduced to bring system (6.1) into an equivalent,
simplified system, where the number of uncertain parameters is reduced to only
two. Then filters are designed so the state in the new system can be expressed as a
linear static parametrization of the filter states and the uncertain parameters,
facilitating the design of adaptive laws. The adaptive laws are then combined with a
backstepping-based adaptive control law that adaptively stabilizes the system, and
achieves the tracking goal (6.5).
Consider system (2.25), which we for the reader’s convenience restate here
Lemma 6.1 System (6.1) is equivalent to system (6.8), where ρ and σ are uncertain
parameters which are linear combinations of f, g, h and λ, while μ is known and
specified in Assumption 6.1.
which, using (6.8) and (6.6), can straightforwardly be shown to have the dynamics
and
z0(x) = ž0(x) − ∫₀ˣ σ(1 − x + ξ)ž0(ξ)dξ. (6.14)
Proof From differentiating (6.11) with respect to time and space, respectively, and
inserting the result into (6.10a), we obtain
which yields the dynamics (6.12a) provided θ is chosen according to (6.13). Evalu-
ating (6.11) at x = 1 and inserting the boundary condition (6.10b) gives
z(1, t) = ρU(t) − r(t + d2) + ∫₀¹ σ(ξ)(ž(ξ, t) + b(ξ, t))dξ − ∫₀¹ σ(ξ)ž(ξ, t)dξ
 = ρU(t) − r(t + d2) + ∫₀¹ θ(ξ)b(1 − ξ, t)dξ (6.16)
which gives (6.12b). Inserting t = 0 into (6.11) gives the expression (6.14) for z 0 .
The fact that ž(0, t) = z(0, t) immediately gives (6.12d).
Lemma 6.3 Consider system (6.12), filters (6.18) and the non-adaptive estimate z̄
generated from (6.20). Then
z̄ ≡ z (6.21)
ρ ≤ ρ ≤ ρ̄ θ ≤ θ(x) ≤ θ̄ (6.24)
0 ∉ [ρ, ρ̄]. (6.25)
This assumption is not a limitation, since the bounds are arbitrary. Assumption (6.25)
requires the sign of the product k1 k2 to be known (see (2.19b)), which is ensured by
Assumption 6.1. Now, motivated by (6.20), we construct an adaptive estimate of the
state by replacing the uncertain parameters by their estimates as follows
ẑ(x, t) = ρ̂(t)ψ(x, t) − b(x, t) + d2 ∫ₓ¹ θ̂(ξ, t)φ(1 − (ξ − x), t)dξ
 + d2 ∫₀¹ θ̂(ξ, t)M(x, ξ, t)dξ. (6.26)
where we have from Lemma 6.3 that e(0, t) = 0 after a finite time d2. We propose the
following adaptive laws
ρ̂˙(t) = projρ,ρ̄( γ1 ê(0, t)ψ(0, t) / (1 + f²(t)), ρ̂(t) ) (6.28a)
θ̂t(x, t) = projθ,θ̄( γ2(x) ê(0, t)(φ(1 − x, t) + m0(x, t)) / (1 + f²(t)), θ̂(x, t) ) (6.28b)
ρ̂(0) = ρ̂0 (6.28c)
θ̂(x, 0) = θ̂0(x) (6.28d)
where
and
with
and γ1 > 0, γ2 (x) > 0, ∀x ∈ [0, 1] are design gains. The initial guesses ρ̂0 , θ̂0 (x)
are chosen inside the feasible domain, i.e.
where ρ̃ = ρ − ρ̂, θ̃ = θ − θ̂, with f 2 given in (6.31), and where we have defined
ν(t) = ê(0, t) / (1 + f²(t)). (6.34)
Proof The property (6.33a) follows from the projection operator used in (6.28) and
the conditions (6.32). Consider the Lyapunov function candidate
V(t) = μ⁻¹ ∫₀¹ e²(x, t)dx + (1/(2γ1)) ρ̃²(t) + (d2/2) ∫₀¹ γ2⁻¹(x)θ̃²(x, t)dx. (6.35)
Differentiating with respect to time, inserting the adaptive laws (6.28) and using the
property −ρ̃(t)projρ,ρ̄(τ(t), ρ̂(t)) ≤ −ρ̃(t)τ(t) (Lemma A.1 in Appendix A), and
similarly for θ̃, we find
V̇(t) ≤ e²(1, t) − e²(0, t) − (ê(0, t) / (1 + f²(t))) [ ρ̃(t)ψ(0, t)
 + d2 ∫₀¹ θ̃(x, t)(φ(1 − x, t) + m0(x, t))dx ]. (6.36)
where we have used the definition of ν in (6.34). Young’s inequality now gives
V̇(t) ≤ −(1/2)e²(0, t) − (1/2)ν²(t). (6.39)
This proves that V is bounded, non-increasing, and hence has a limit as t → ∞.
Integrating (6.39) from zero to infinity gives e(0, ·), ν ∈ L2 . Using (6.37), we obtain,
for t ≥ d2
|ν(t)| = |ê(0, t)| / (1 + f²(t)) = |ρ̃(t)ψ(0, t) + ∫₀¹ θ̃(ξ, t)(φ(1 − ξ, t) + m0(ξ, t))dξ| / (1 + f²(t))
 ≤ |ρ̃(t)ψ(0, t)| / (1 + f²(t)) + ( |∫₀¹ θ̃(ξ, t)φ(1 − ξ, t)dξ| + |∫₀¹ θ̃(ξ, t)m0(ξ, t)dξ| ) / (1 + f²(t))
 ≤ |ρ̃(t)| |ψ(0, t)| / (1 + f²(t)) + ||θ̃(t)|| (||φ(t)|| + ||m0(t)||) / (1 + f²(t))
 ≤ |ρ̃(t)| + ||θ̃(t)|| (6.40)
where ẑ is generated using (6.26), and k̂ is the on-line solution to the Volterra integral
equation
μk̂(x, t) = ∫₀ˣ k̂(x − ξ, t)θ̂(ξ, t)dξ − θ̂(x, t), (6.43)
Theorem 6.1 Consider system (6.1), filters (6.18), reference model (6.6), and the
adaptive laws (6.28). Suppose Assumption 6.2 holds. Then, the control law (6.42)
guarantees (6.5), and
6.2.5 Backstepping
ẑt(x, t) − μẑx(x, t) = θ̂(x, t)z(0, t) + ρ̂˙(t)ψ(x, t)
 + d2 ∫ₓ¹ θ̂t(ξ, t)φ(1 − (ξ − x), t)dξ
 + d2 ∫₀¹ θ̂t(ξ, t)M(x, ξ, t)dξ (6.45a)
ẑ(1, t) = ρ̂(t)U(t) − r(t) + ∫₀¹ θ̂(ξ, t)b(1 − ξ, t)dξ, (6.45b)
ẑ(x, 0) = ẑ0(x) (6.45c)
where k̂ is the on-line solution to (6.43). Consider also the target system
ηt(x, t) − μηx(x, t) = −μk̂(x, t)ê(0, t) + ρ̂˙(t)T[ψ](x, t)
 + d2 T[∫ₓ¹ θ̂t(ξ, t)φ(1 − (ξ − x), t)dξ](x, t)
 + d2 T[∫₀¹ θ̂t(ξ, t)M(x, ξ, t)dξ](x, t)
 − ∫₀ˣ k̂t(x − ξ, t)T⁻¹[η](ξ, t)dξ (6.49a)
η(1, t) = 0 (6.49b)
η(x, 0) = η0(x). (6.49c)
Lemma 6.5 The transformation (6.47) with inverse (6.48) and controller (6.42) map
system (6.45) into (6.49).
Proof Differentiating (6.47) with respect to time and space, respectively, inserting
the dynamics (6.45a) and integrating by parts, yield
and
ẑx(x, t) = ηx(x, t) + k̂(0, t)ẑ(x, t) + ∫₀ˣ k̂x(x − ξ, t)ẑ(ξ, t)dξ. (6.51)
Inserting (6.50) and (6.51) into (6.45a), we obtain (6.49a). Inserting x = 1 into (6.47),
using (6.45b) and the control law (6.42) we obtain (6.49b).
||k̂(t)||∞ ≤ Mk , ∀t ≥ 0 (6.52)
for some constant Mk . Moreover, from the invertibility of the transformations (6.47)
and (6.48) and the fact that the estimate θ̂ and hence also k̂ are bounded by projection,
we have from Theorem 1.3 the following inequalities
or
||k̂t || ∈ L2 ∩ L∞ . (6.58)
From differentiating (6.60a) with respect to time, inserting the dynamics (6.49a) and
integrating by parts, we find
V̇1(t) = 2μη²(1, t) − μη²(0, t) − μ ∫₀¹ η²(x, t)dx
 − 2μ ∫₀¹ (1 + x)η(x, t)k̂(x, t)dx ê(0, t)
 + 2 ∫₀¹ (1 + x)η(x, t)ρ̂˙(t)T[ψ](x, t)dx
 + 2d2 ∫₀¹ (1 + x)η(x, t)T[∫ₓ¹ θ̂t(ξ, t)φ(1 − (ξ − x), t)dξ](x, t)dx
 + 2d2 ∫₀¹ (1 + x)η(x, t)T[∫₀¹ θ̂t(ξ, t)M(x, ξ, t)dξ](x, t)dx
 − 2 ∫₀¹ (1 + x)η(x, t) ∫₀ˣ k̂t(x − ξ, t)T⁻¹[η](ξ, t)dξ dx. (6.61)
Inserting the boundary condition (6.49b), and applying Young’s inequality to the
cross terms yield
V̇1(t) ≤ −μη²(0, t) − [μ/2 − ∑_{i=1}^{5} ρi] ∫₀¹ (1 + x)η²(x, t)dx
 + (2μ²/ρ1) ∫₀¹ k̂²(x, t)dx ê²(0, t) + (2/ρ2) ρ̂˙²(t) ∫₀¹ (T[ψ](x, t))²dx
 + (2/ρ3) d2² ∫₀¹ ( T[∫ₓ¹ θ̂t(ξ, t)φ(1 − (ξ − x), t)dξ](x, t) )² dx
 + (2/ρ4) d2² ∫₀¹ ( T[∫₀¹ θ̂t(ξ, t)M(x, ξ, t)dξ](x, t) )² dx
 + (2/ρ5) ∫₀¹ ( ∫₀ˣ k̂t(x − ξ, t)T⁻¹[η](ξ, t)dξ )² dx (6.62)
By expanding the term in ê2 (0, t), using the definition (6.34), as
we obtain
V̇1(t) ≤ −μη²(0, t) − (μ/4)V1(t) + h1 μν²(t)ψ²(0, t) + l1(t)V1(t)
 + l2(t)V2(t) + l3(t)V3(t) + l4(t) (6.66)
h1 = 40Mk² (6.67)
Lastly, consider (6.60c). From differentiating with respect to time and inserting
the dynamics (6.18a), integrating by parts and inserting the boundary condition in
(6.18a), we find
V̇3(t) = 2μψ²(1, t) − μψ²(0, t) − (μ/2) ∫₀¹ (1 + x)ψ²(x, t)dx
 = 2μ ( (1/ρ̂(t)) ( r(t) − ∫₀¹ θ̂(ξ, t)b(1 − ξ, t)dξ + ∫₀¹ k̂(1 − ξ, t)ẑ(ξ, t)dξ ) )² − μψ²(0, t) − (μ/2)V3(t)
 ≤ 6μMρ² ( r²(t) + Mθ²||b(t)||² + Mk²G2²||η(t)||² ) − μψ²(0, t) − (μ/2)V3(t) (6.72)
where
Mρ = 1 / min{|ρ|, |ρ̄|}. (6.73)
Now, forming
V̇4(t) ≤ −c1 V4(t) + l7(t)V4(t) + l8(t) − μ(1 − b1 ν²(t))ψ²(0, t) + h3 (6.78)
for some integrable functions l7 (t), l8 (t) and positive constants c1 and b1 . Moreover,
from (6.39), we have
V̇(t) ≤ −(1/2)ν²(t) (6.79)
while from (6.40) and (6.35), we have
ν²(t) ≤ 2|ρ̃(t)|² + 2||θ̃(t)||² ≤ 4γ1 (1/(2γ1))|ρ̃(t)|² + 4γ̄2 (1/2) ∫₀¹ γ2⁻¹(x)θ̃²(x, t)dx
 ≤ kV(t) (6.80)
with γ̄2 bounding γ2 from above, and where we have utilized that e ≡ 0. Lemma B.4
in Appendix B then gives V4 ∈ L∞ and thus
||ẑ|| ∈ L∞ . (6.83)
From the definition of the filter ψ in (6.18a) and the control law U in (6.42), we will
then have U ∈ L∞ , and
||ψ||∞ ∈ L∞ (6.84)
we find
V̇5(t) ≤ −2μV1(t) − (μ/2)V2(t) + 8l1(t)V1(t) + (8l2(t) + l5(t))V2(t)
 + 8l3(t)V3(t) + 8l4(t) + l6(t) + 4(2h1 + 1)ν²(t)ψ²(0, t). (6.86)
Since ψ(0, ·) ∈ L∞ and ν ∈ L2 , the latter term is integrable, and we can write
(6.86) as
for a positive constant c2 and integrable functions l9 (t) and l10 (t). It then immediately
follows from Lemma B.3 in Appendix B that
V5 ∈ L1 ∩ L∞ , V5 → 0, (6.88)
and hence
||φ||∞ ∈ L∞ . (6.93)
||u|| ∈ L∞ , (6.94)
and
||u||∞ ∈ L∞ . (6.95)
for some arbitrary T > 0, which from the definition of z implies (6.5).
6.3 Adaptive Output Feedback Stabilization
where ẑ is generated using (6.26), and k̂ is the on-line solution to the Volterra integral
equation
μk̂(x, t) = ∫₀ˣ k̂(x − ξ, t)θ̂(ξ, t)dξ − θ̂(x, t), (6.98)
From the control law (6.97) and the definition of the filter ψ in (6.18a), we will then
have U ∈ L∞ ∩ L2 , U → 0, and
6.4 Simulation
The controllers of Theorems 6.1 and 6.2 are implemented on the potentially unstable,
linearized Korteweg de Vries-like equation from Krstić and Smyshlyaev (2008), with
scaled actuation and boundary measurement. It is given as
ut(x, t) = ε ux(x, t) − γ (a/ε) sinh((a/ε)x) u(0, t) + γ (a/ε) ∫₀ˣ cosh((a/ε)(x − ξ)) u(ξ, t)dξ (6.105a)
u(1, t) = k1 U(t) (6.105b)
y(t) = k2 u(0, t) (6.105c)
for some constants ε, a and γ, with ε, a > 0. The Korteweg de Vries equation serves
as a model of shallow water waves and ion acoustic waves in plasma (Korteweg and
de Vries 1895). The goal is to make the measured output (6.105c) track the reference
signal
r(t) = 1 + sin(2πt) for 0 ≤ t ≤ 10, and r(t) = 0 for t > 10. (6.106)
The reference signal is intentionally set identically zero after ten seconds to demon-
strate the stabilization and convergence properties of Theorem 6.2.
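In code, the reference signal (6.106) is a one-liner:

```python
import math

def r(t):
    """Reference signal (6.106): unit-offset sinusoid for ten seconds, then zero."""
    return 1.0 + math.sin(2.0 * math.pi * t) if 0.0 <= t <= 10.0 else 0.0
```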
Figures 6.1, 6.2 and 6.3 show the simulation results from implementing system
(6.105) with system parameters
a = 1, ε = 0.2, γ = 4, k1 = 2, k2 = 2 (6.107)
using the controllers of Theorems 6.1 and 6.2 with tuning parameters
Fig. 6.2 Reference signal (solid black) and measured signal (dashed red)
u0(x) = x. (6.109)
ρ̂0 = 1. (6.110)
6.5 Notes
The result presented in this chapter is definitely the strongest result in Part II, show-
ing that system (6.1) can be stabilized using a single boundary measurement, with little
knowledge of the system parameters. One of the key steps in solving the model refer-
ence adaptive control problem for system (6.1) is the use of Lemma 2.1, which states
Fig. 6.3 Left: Estimated parameter θ̂. Right: Actual (solid black) and estimated parameter ρ̂ (dashed
red)
that system (6.1) is equivalent to system (2.4), the latter of which only contains two
uncertain parameters. A slightly modified version of the swapping-based controller
already established in Chap. 4 can then be applied.
We will now proceed to Part III, adding an additional PDE to the system, and con-
sider systems of two coupled PDEs, so-called 2 × 2 systems. Many of the techniques
presented in Part II extend to 2 × 2 systems.
References
Anfinsen H, Aamo OM (2017) Model reference adaptive control of an unstable 1–D hyperbolic
PDE. In: 56th conference on decision and control. Melbourne, Victoria, Australia
Krstić M, Smyshlyaev A (2008) Backstepping boundary control for first-order hyperbolic PDEs
and application to systems with actuator and sensor delays. Syst Control Lett 57(9):750–758
Korteweg D, de Vries G (1895) On the change of form of long waves advancing in a rectangular
canal and on a new type of long stationary waves. Philos Mag 39(240):422–443
Part III
2 × 2 Systems
Chapter 7
Introduction
The signal U (t) is an actuation signal. As mentioned in Chap. 1, systems in the form
(7.1) consist of two transport equations u, v convecting in opposite directions, with u
convecting from x = 0 to x = 1 and v from x = 1 to x = 0. They are coupled both in
the domain (c12 , c21 ) and at the boundaries (ρ, q), and additionally have reaction terms
(c11 , c22 ). This type of system can be used to model the pressure and flow profiles
in oil wells (Landet et al. 2013), current and voltage along electrical transmission
lines (Heaviside 1892) and propagation of water in open channels (the Saint-Venant
equations or shallow water equations) (Saint-Venant 1871), just to mention a few
examples.
An early result in Vazquez et al. (2011) on control and observer design for 2 × 2
systems considered a slightly simpler version of system (7.1), in the form
for

ū(x, t) = u(x, t) exp( −∫₀ˣ c11(s)/λ(s) ds )   (7.7a)

and

v̄(x, t) = v(x, t) exp( ∫₀ˣ c22(s)/μ(s) ds )   (7.7b)
from u, v into the new variables ū, v̄ (we have omitted the bars on u and v in (7.4)),
and scaling of the input. Moreover, the term in ρu(1, t) can be removed by defining
a new control signal U1 as
References
Heaviside O (1892) Electromagnetic induction and its propagation. In: Electrical papers, vol II, 2nd
edn. Macmillan and Co, London
Landet IS, Pavlov A, Aamo OM (2013) Modeling and control of heave-induced pressure fluctuations
in managed pressure drilling. IEEE Trans Control Syst Technol 21(4):1340–1351
Saint-Venant AJCBd (1871) Théorie du mouvement non permanent des eaux, avec application aux
crues des rivières et a l’introduction de marées dans leurs lits. Comptes Rendus des Séances de
l’Académie des Sciences 73:147–154
Vazquez R, Krstić M, Coron J-M (2011) Backstepping boundary stabilization and state estimation
of a 2 × 2 linear hyperbolic system. In: 2011 50th IEEE conference on decision and control and
European control conference (CDC-ECC). pp 4937–4942
Chapter 8
Non-adaptive Schemes
8.1 Introduction
In this chapter, non-adaptive controllers and observers will be derived. Most of the
results will concern systems in the form (7.4), which we restate here
where
with
In Sect. 8.2, we derive the state-feedback law from Vazquez et al. (2011) for system
(8.1), before state observers are derived in Sect. 8.3. Note that there will be a distinc-
tion between observers using sensing anti-collocated or collocated with the actuation
U , as defined in (7.9a) and (7.9b), respectively, which we restate here as
The observer using sensing collocated with actuation was originally derived in Vazquez et al. (2011), while the observer using anti-collocated actuation and sensing is based on a similar design for n + 1 systems in Di Meglio et al. (2013). The
controller and observers are combined into output-feedback controllers in Sect. 8.4.
An output tracking controller is derived in Sect. 8.5, whose aim is to make the
measurement anti-collocated with actuation track some bounded reference signal of
choice, achieving tracking in finite time. The design in Sect. 8.5 is different from
the output-feedback solution to the output tracking problem offered in Lamare and
Bekiaris-Liberis (2015), where a reference model is used to generate a reference
trajectory, before a backstepping transformation is applied “inversely” to the refer-
ence model to generate a reference trajectory u r , vr for the original state variables
u, v. The resulting controller contains no feedback from the actual states, so the
controller would only work if the initial conditions of the reference trajectory u r , vr
matched the initial conditions of u, v. To cope with this, a standard PI controller is
used to drive the output y0 (t) = v(0, t) to the generated reference output vr (0, t). A
weakness with this approach, apart from being far more complicated in both design
and accompanying stability proof than the design of Sect. 8.5, is that tracking is not
achieved in finite time due to the presence of the PI controller. Also, the PI imple-
mentation requires the signal v(0, t) to be measured, which is not necessary for the
design in Sect. 8.5 when the tracking controller is combined with an observer using
measurement collocated with actuation.
Most of the derived controllers and observers are implemented and simulated in
Sect. 8.6, before some concluding remarks are offered in Sect. 8.7.
As for the scalar system, system (8.1) may, depending on the system parameters, be unstable. However, if c2 ≡ 0 the system reduces to a cascade system from v into u, which is trivially stabilized using the control law U ≡ 0. For the case c2 ≢ 0, a stabilizing controller is needed. Such a controller for system (8.1) with q = 0 is
derived in Vazquez et al. (2011), where a state feedback control law in the form
U(t) = ∫₀¹ [ K^u(1, ξ)u(ξ, t) + K^v(1, ξ)v(ξ, t) ] dξ   (8.5)
K^v(x, 0) = q (λ(0)/μ(0)) K^u(x, 0),   (8.6d)
u=v≡0 (8.7)
for t ≥ t F , where
t_F = t1 + t2,   t1 = ∫₀¹ dγ/λ(γ),   t2 = ∫₀¹ dγ/μ(γ).   (8.8)
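The travel times in (8.8) are plain quadratures and are easy to approximate numerically; the sketch below uses the composite trapezoidal rule with hypothetical transport-speed profiles λ(x) = 1 + x and μ(x) = 2 (the function name and profiles are ours):

```python
def travel_time(speed, x0=0.0, x1=1.0, n=1000):
    """Approximate int_{x0}^{x1} dgamma / speed(gamma) by the composite
    trapezoidal rule; this is a transport time across the domain, as in (8.8)."""
    h = (x1 - x0) / n
    total = 0.5 * (1.0 / speed(x0) + 1.0 / speed(x1))
    for i in range(1, n):
        total += 1.0 / speed(x0 + i * h)
    return h * total

# t_F = t1 + t2 for the illustrative profiles lambda(x) = 1 + x, mu(x) = 2
t1 = travel_time(lambda x: 1.0 + x)   # exact value: ln 2
t2 = travel_time(lambda x: 2.0)       # exact value: 1/2
t_F = t1 + t2
```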
Proof We will offer two different proofs of this theorem. The two proofs are similar
and both employ the backstepping technique. The first one uses the simplest back-
stepping transformation, while the second one produces the simplest target system.
We include them both because the first one most closely resembles the similar proofs for the state feedback controller designs for the more general n + 1 and n + m systems,
while the second produces a target system which will be used when deriving adaptive
output-feedback schemes in later chapters.
To ease the derivations to follow, we state Eq. (8.1) in vector form as follows
where
w(x, t) = [u(x, t); v(x, t)],   Λ(x) = [λ(x) 0; 0 −μ(x)],   Π(x) = [0 c1(x); c2(x) 0]   (8.10a)
Q0 = [0 q; 0 1],   R1 = [1 0; 0 0],   Ū(t) = [0; U(t)]   (8.10b)
w0(x) = [u0(x); v0(x)].   (8.10c)
Solution 1:
Consider the target system
γ_t(x, t) + Λ(x)γ_x(x, t) = Ω(x)γ(x, t) + ∫₀ˣ B(x, ξ)γ(ξ, t)dξ   (8.11a)
γ(0, t) = Q0 γ(0, t)   (8.11b)
γ(1, t) = R1 γ(1, t)   (8.11c)
where
K(x, ξ) = [0 0; K^u(x, ξ) K^v(x, ξ)]   (8.14)
where
L(x, ξ) = [0 0; L^α(x, ξ) L^β(x, ξ)]   (8.16)
Inserting (8.18) and (8.19) into (8.9a), and inserting the boundary condition (8.9b),
we find
0 = w_t(x, t) + Λ(x)w_x(x, t) − Π(x)w(x, t)
  = γ_t(x, t) + Λ(x)γ_x(x, t) − Ω(x)w(x, t) + K(x, 0)Λ(0)Q0 w(0, t)
    + ∫₀ˣ [ Λ(x)K_x(x, ξ) + K_ξ(x, ξ)Λ(ξ) + K(x, ξ)Λ′(ξ) + K(x, ξ)Π(ξ) ] w(ξ, t)dξ

Λ(x)K_x(x, ξ) + K_ξ(x, ξ)Λ(ξ) = −K(x, ξ)Π(ξ) − K(x, ξ)Λ′(ξ)   (8.21a)
Λ(x)K(x, x) − K(x, x)Λ(x) = Π(x) − Ω(x)   (8.21b)
K(x, 0)Λ(0)Q0 = 0,   (8.21c)
where we changed the order of integration in the double integral. Using (8.17) yields
the target system dynamics (8.11).
The boundary condition (8.11b) follows immediately from the boundary condition
(8.9b) and the fact that w(0, t) = γ(0, t). Substituting (8.13) into (8.9c) gives
γ(1, t) = R1 γ(1, t) − ∫₀¹ [ K(1, ξ) − R1 K(1, ξ) ] w(ξ, t)dξ + Ū(t).   (8.24)
Choosing
Ū(t) = ∫₀¹ [ K(1, ξ) − R1 K(1, ξ) ] w(ξ, t)dξ   (8.25)
and is observed to be a cascade system from β into α. After a finite time t2 , defined
in (8.8), we will have β ≡ 0. Hence, for t ≥ t2 , system (8.26) reduces to
α_t(x, t) + λ(x)α_x(x, t) = ∫₀ˣ b1(x, ξ)α(ξ, t)dξ   (8.27a)
α(0, t) = 0   (8.27b)
α(x, t2) = α_{t2}(x).   (8.27c)
where F is defined over T , given in (1.1a), and is the solution to the PDE
F(0, ξ) = 0. (8.29b)
Differentiating (8.28) with respect to time and space, inserting the dynamics (8.27a),
integrating by parts, changing the order of integration in the double integral and using
the boundary condition (8.27b), we obtain
α_t(x, t) = η_t(x, t) − F(x, x)λ(x)α(x, t) + ∫₀ˣ F(x, ξ)λ′(ξ)α(ξ, t)dξ + ⋯

and

α_x(x, t) = η_x(x, t) + F(x, x)α(x, t) + ∫₀ˣ F_x(x, ξ)α(ξ, t)dξ,   (8.32)
and using (8.29a) gives (8.30a). Evaluating (8.28) at x = 0 and inserting the boundary
conditions (8.27b) and (8.29b) gives (8.30b). The initial condition follows immedi-
ately from evaluating (8.28) at t = t2 . From the structure of system (8.30), we have
η ≡ 0 for t ≥ t F = t1 + t2 , and from the invertibility of the transformations (8.28)
and (8.13), α = u = v ≡ 0 for t ≥ t F follows.
Lastly, we prove that the PDE (8.29) has a solution F. Consider the invertible
mapping
F(x, ξ) = G(x, ξ)ϕ(ξ),   ϕ(ξ) = exp( −∫₀^ξ λ′(s)/λ(s) ds ),   (8.34)
where
p1(x, ξ) = b1(x, ξ)/ϕ(ξ),   p2(x, ξ) = b1(x, ξ) ϕ(s)/ϕ(ξ).   (8.37)
where t1 is defined in (8.8). We note that φ is strictly increasing and hence invertible.
From (8.38), we find
G_x(x, ξ) = (t1/λ(x)) H_x(φ(x), φ(ξ)),   G_ξ(x, ξ) = (t1/λ(ξ)) H_ξ(φ(x), φ(ξ)).   (8.39)
where
Well-posedness and the existence of a unique solution H to the PDE (8.40) are now ensured by Lemma D.1 in Appendix D.1. The invertibility of the transformations (8.38) and (8.34) then proves that (8.29) has a unique solution.
Solution 2:
This second proof employs a slightly more involved backstepping transformation
that produces a simpler target system without the integral term in (8.11a). Consider
the target system
where
K(x, ξ) = [K^{uu}(x, ξ) K^{uv}(x, ξ); K^{vu}(x, ξ) K^{vv}(x, ξ)]   (8.45)
K^{vu} ≡ K^u,   K^{vv} ≡ K^v.   (8.47)
The backstepping transformation (8.44) is also invertible, with inverse in the form
w(x, t) = γ(x, t) + ∫₀ˣ L(x, ξ)γ(ξ, t)dξ   (8.48)
where
L(x, ξ) = [L^{αα}(x, ξ) L^{αβ}(x, ξ); L^{βα}(x, ξ) L^{ββ}(x, ξ)]   (8.49)
which once again can be found from solving the Volterra integral equation (1.53).
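Volterra integral equations of the second kind, such as (1.53), can be solved numerically by successive approximations; the following is a generic sketch (function names are ours, not from the book), verified on the standard example u(x) = 1 + ∫₀ˣ u(ξ)dξ, whose exact solution is eˣ:

```python
def solve_volterra(f, kernel, x_grid, iters=60):
    """Solve u(x) = f(x) + int_0^x kernel(x, xi) u(xi) d xi by successive
    approximations (Picard iteration) on a uniform grid, with the integral
    approximated by the trapezoidal rule."""
    h = x_grid[1] - x_grid[0]
    u = [f(x) for x in x_grid]
    for _ in range(iters):
        new = []
        for i, x in enumerate(x_grid):
            acc = 0.0
            for j in range(1, i + 1):  # trapezoid over [0, x_i]
                acc += 0.5 * h * (kernel(x, x_grid[j - 1]) * u[j - 1]
                                  + kernel(x, x_grid[j]) * u[j])
            new.append(f(x) + acc)
        u = new
    return u

# u(x) = 1 + int_0^x u(xi) d xi, exact solution exp(x)
xs = [i / 100 for i in range(101)]
u = solve_volterra(lambda x: 1.0, lambda x, xi: 1.0, xs)
```

Because the Volterra operator is a contraction on bounded intervals, the iteration converges for any bounded kernel; accuracy is then limited by the quadrature step.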
We will show that the backstepping transformation (8.44), with K satisfying the PDE (8.46) for an arbitrary k^{uu}, maps (8.9) into (8.42) with
From differentiating (8.44) with respect to time and space, respectively, inserting
the dynamics (8.9a), integrating by parts and inserting the boundary condition (8.9b),
we get
Λ(x)K_x(x, ξ) + K_ξ(x, ξ)Λ(ξ) + K(x, ξ)Λ′(ξ) + K(x, ξ)Π(ξ) = 0   (8.54a)
Λ(x)K(x, x) − K(x, x)Λ(x) − Π(x) = 0   (8.54b)
K(x, 0)Λ(0)Q0 + G(x) = 0   (8.54c)
gives the target dynamics (8.42a). The PDE (8.54) is under-determined, and we
impose the additional constraint
for some arbitrary function k^{uu} to ensure well-posedness. Equation (8.54) with (8.55) is equivalent to (8.46) and (8.50).
The boundary condition (8.42b) follows from (8.9b) and w(0, t) = γ(0, t). Sub-
stituting (8.44) into (8.9c) and choosing the controller Ū as
Ū(t) = ∫₀¹ [ K(1, ξ) − R1 K(1, ξ) ] w(ξ, t)dξ,   (8.56)

that is

U(t) = ∫₀¹ K^{vu}(1, ξ)u(ξ, t)dξ + ∫₀¹ K^{vv}(1, ξ)v(ξ, t)dξ   (8.57)
which from (8.47) is the same as (8.5), we obtain the boundary condition (8.42c).
Written out, target system (8.42) reads
once again a cascade system from β into α. After a finite time t2 given in (8.8),
β ≡ 0, after which system (8.58) reduces to
k^{uu}(x) = (μ(0)/(qλ(0))) K^{uv}(x, 0)   (8.60)
g≡0 (8.61)
The controller derived in the previous section requires that full-state measurements
of states u and v are available, which is often not the case in practice. An observer
is therefore needed.
Two boundary measurements are available for system (8.1), as stated in (8.4).
These are y1 (t) = u(1, t), which is referred to as the measurement collocated
with actuation, and y0 (t) = v(0, t), which is referred to as the measurement anti-
collocated with actuation. Only one of the measurements is needed for the design of
an observer estimating both states u and v. We present both designs here.
In Di Meglio et al. (2013), a state observer design for n + 1 systems is developed for
the case of sensing anti-collocated with actuation. Here, we present the design for
2 × 2 systems in the form (8.1) with sensing (8.4a), by letting n = 1. Consider the
observer equations
Theorem 8.2 Consider system (8.1) and observer (8.62), and let p1 and p2 be given by (8.64). Then
û ≡ u, v̂ ≡ v (8.66)
Proof By using (8.1) and (8.62), the observer errors ũ = u − û and ṽ = v − v̂ can
straightforwardly be shown to satisfy the dynamics
for some functions g1 and g2 to be determined, defined over the triangular domain
T . Consider also the backstepping transformation
w̃(x, t) = γ̃(x, t) + ∫₀ˣ M(x, ξ)γ̃(ξ, t)dξ   (8.72)
where
M(x, ξ) = [0 M^α(x, ξ); 0 M^β(x, ξ)]   (8.73)
satisfies
From differentiating (8.72) with respect to time, inserting the dynamics (8.70a),
integrating by parts and changing the order of integration in the double integral, we
get
Using Eqs. (8.74a)–(8.74b), the identity γ̃(0, t) = w̃(0, t), letting G be the solution to the Volterra integral equation (8.75), and choosing
It is noted from (8.81a) and (8.81c) that the dynamics of α̃ is independent of β̃, and
will converge to zero in a finite time t1 . Thus, for t ≥ t1 , target system (8.70) reduces to
for some function β̃t1 . This system is a pure transport equation whose state will be
identically zero after the additional time t2 , and hence α̃ = β̃ ≡ 0 for t ≥ t1 + t2 =
t F . Due to the invertibility of the transformation (8.72), ũ = ṽ ≡ 0 as well, and thus
û ≡ u, v̂ ≡ v for t ≥ t F .
An observer for system (8.1) using the sensing (8.4b) collocated with actuation is
presented in Vazquez et al. (2011) for the case q ≠ 0. It is claimed in Vazquez
et al. (2011) that it is necessary to use measurements of v at x = 0 to implement a
boundary observer for values of q near zero. It turns out, however, that the proof can
be accommodated to show that the observer proposed in Vazquez et al. (2011) also
works for q = 0, but requires a slightly modified target system.
Consider the observer equations
λ(x)P^α_x(x, ξ) + λ(ξ)P^α_ξ(x, ξ) = −λ′(ξ)P^α(x, ξ) + c1(x)P^β(x, ξ)   (8.86a)
μ(x)P^β_x(x, ξ) − λ(ξ)P^β_ξ(x, ξ) = −c2(x)P^α(x, ξ) + λ′(ξ)P^β(x, ξ)   (8.86b)
P^α(0, ξ) = q P^β(0, ξ)   (8.86c)
P^β(x, x) = −c2(x)/(λ(x) + μ(x))   (8.86d)
Theorem 8.3 Consider system (8.1) and observer (8.83) with injection gains p1 and
p2 given as (8.85). Then
û ≡ u, v̂ ≡ v (8.87)
Proof The observer errors ũ = u − û and ṽ = v − v̂ can, using (8.1) and (8.83), be
shown to satisfy the dynamics
for some functions d1 and d2 defined over S. Consider the following backstepping
transformation
w̃(x, t) = γ̃(x, t) − ∫ₓ¹ P(x, ξ)γ̃(ξ, t)dξ   (8.93)
where
P(x, ξ) = [P^α(x, ξ) 0; P^β(x, ξ) 0]   (8.94)
satisfies
which from Lemma 1.1 has a solution D, into the error system (8.89).
which is equivalent to (8.85), and choosing D as the solution to the Volterra integral
equation (8.96), we obtain the dynamics (8.89a). The existence of a solution D of
(8.96) is guaranteed by Lemma 1.1. Inserting (8.93) into (8.91b) gives
w̃(0, t) = Q0 w̃(0, t) + ∫₀¹ [ Q0 P(0, ξ) − P(0, ξ) ] γ̃(ξ, t)dξ   (8.101)
Using (8.95c) yields (8.89b). The identity γ̃(1, t) = w̃(1, t) immediately yields the
boundary condition (8.89c) from (8.91c).
which is a cascade system from β̃ into α̃. The β̃-subsystem is independent of α̃, and
converges to zero in a finite time given by the propagation time through the domain.
Hence for t ≥ t2 , β̃ ≡ 0, and the subsystem α̃ reduces to
As with scalar systems, the state estimates generated by the observers of Theorems 8.3
and 8.2 converge to their true values in finite time, and hence, designing output-
feedback controllers is almost trivial (separation principle). However, we formally
state these results in the following two theorems.
where (K u , K v ) is the solution to the PDE (8.6), and û and v̂ are generated using
the observer of Theorem 8.2. Then
u=v≡0 (8.105)
where (K u , K v ) is the solution to the PDE (8.6), and û and v̂ are generated using
the observer of Theorem 8.3. Then
u=v≡0 (8.107)
The goal here is to design a control law U so that the measurement y0 (t) = v(0, t)
of system (8.1) tracks a reference signal r (t).
Theorem 8.6 Consider system (8.1). Let the control law be taken as
U(t) = ∫₀¹ [ K^u(1, ξ)u(ξ, t) + K^v(1, ξ)v(ξ, t) ] dξ + r(t + t2),   (8.108)
Proof As part of the proof of Theorem 8.1, it is shown that the backstepping transfor-
mation (8.13) maps system (8.1) with measurement (8.4a) into system (8.11), which
we restate here
α_t(x, t) + λ(x)α_x(x, t) = c1(x)β(x, t) + ∫₀ˣ b1(x, ξ)α(ξ, t)dξ
where we have added the measurement (8.111g), which follows immediately from
substituting x = 0 into (8.13), resulting in β(0, t) = v(0, t), and hence y0 (t) =
β(0, t). In the state feedback stabilizing control design of Theorem 8.1, U is chosen
as (8.5), to obtain the boundary condition β(1, t) = 0, stabilizing the system.
From the structure of the subsystem in β consisting of (8.111b) and (8.111d), it
is clear that
for t ≥ t2. Choosing the control law as (8.108), the boundary condition (8.111d) becomes
β(1, t) = r (t + t2 ) (8.113)
and hence
Fig. 8.1 Left: Controller gains K vu (1, x) (solid red) and K vv (1, x) (dashed-dotted blue). Right:
Observer gains p1 (x) (solid red) and p2 (x) (dashed-dotted blue)
and by the invertibility of the transformation (8.13) (Theorem 1.2), ||u||∞, ||v||∞ ∈ L∞ follows.
The above tracking controller can also be combined with the observer of Theorem 8.3 or 8.2 to solve the tracking problem from output feedback in a finite time t_F + t2. Note that if the observer of Theorem 8.3 is used, the signal y0(t), for which tracking is achieved, need not be measured.
8.6 Simulations
System (8.1) with the state feedback controller of Theorem 8.1, the collocated observer of Theorem 8.3, the output feedback controller of Theorem 8.5 and the tracking controller of Theorem 8.6 is implemented using the system parameters
Fig. 8.2 Left: State norm during state feedback (solid red), output feedback (dashed-dotted blue)
and output tracking (dashed green). Right: State estimation error norm
Fig. 8.3 Left: Actuation signal during state feedback (solid red), output feedback (dashed-dotted
blue) and output tracking (dashed green). Right: Reference r (solid black) and measured signal
(dashed green) during tracking
The controller and observer gains are shown in Fig. 8.1. It is observed from Fig. 8.2 that the norm of the state estimation error from using the observer of Theorem 8.3 converges to zero in time t_F. Moreover, the state norm is zero for t ≥ t_F for the
state feedback case, zero for t ≥ 2t F for the output feedback case and bounded for
the tracking case, in accordance with the theory. The same is true for the respective
actuation signals as shown in Fig. 8.3. Finally, the tracking objective is achieved for
t ≥ t2 , as stated in Theorem 8.6.
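In an implementation, the integral in the control law (8.5) is evaluated from grid samples of the precomputed kernels and the current state; a minimal trapezoidal-rule sketch (function and argument names are ours):

```python
def control(Ku, Kv, u, v, x):
    """Evaluate the state-feedback law (8.5),
    U(t) = int_0^1 [K^u(1, xi) u(xi, t) + K^v(1, xi) v(xi, t)] d xi,
    by the trapezoidal rule. Ku, Kv sample K^u(1, .), K^v(1, .) and u, v
    sample the current state on the uniform grid x."""
    h = x[1] - x[0]
    g = [Ku[i] * u[i] + Kv[i] * v[i] for i in range(len(x))]
    return h * (0.5 * g[0] + sum(g[1:-1]) + 0.5 * g[-1])
```

Since the kernels are static, only the quadrature itself runs in the control loop.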
8.7 Notes
The solution (K^u, K^v) of the PDE (8.6) is required for implementation of the control law of Theorem 8.1. The kernel equations are generally non-trivial to solve, but since they are static, they can be solved once and for all prior to implementation. The execution
time of a solver is therefore of minor concern. For the special case of constant
system parameters in (7.1) (which can be transformed to the form (7.4) required
by Theorem 8.1 by the linear transformation (7.7), creating exponentially weighted
coefficients c1 and c2 ), explicit solutions to (8.6) are available in Vazquez and Krstić
(2014). The solutions are quite complicated, involving Bessel functions of the first
kind and the generalized first order Marcum Q-function (Marcum 1950).
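The linear transformation (7.7) mentioned above only requires accumulating the integrals of c11/λ and c22/μ along the domain; a sketch for grid samples, using trapezoidal accumulation (names are ours):

```python
import math

def scale_state(u, c11, lam, dx):
    """Apply the scaling (7.7a): u_bar(x) = u(x) * exp(-int_0^x c11(s)/lam(s) ds).

    u, c11, lam are samples on a uniform grid x_i = i * dx; the running
    integral is accumulated with the trapezoidal rule."""
    ubar, integral = [], 0.0
    for i, ui in enumerate(u):
        if i > 0:  # trapezoidal update of int_0^{x_i} c11/lam ds
            integral += 0.5 * dx * (c11[i - 1] / lam[i - 1] + c11[i] / lam[i])
        ubar.append(ui * math.exp(-integral))
    return ubar
```

The analogous scaling (7.7b) for v̄ uses the opposite sign in the exponent.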
In Sect. 8.5, we solved a tracking problem for the output y0 (t) = v(0, t) anti-
collocated with actuation. The tracking problem for the collocated output y1 (t) =
u(1, t), however, is much harder. It is solved in Deutscher (2017) for a restricted class
of reference signals, namely ones generated using an autonomous linear system,
particularly aimed at modeling biased harmonic oscillators. Tracking is achieved
subject to some assumptions on the systems parameters. The problem of making y1
track some arbitrary, bounded reference signal, however, is at present still an open
problem. The difficulty arises from the backstepping transformation (8.13). For the
anti-collocated case, the simple relationship (8.111g) between the measurement y0
and the new backstepping variable β can be utilized. For the collocated case, the
backstepping transformation (8.13) gives the equally simple relationship y1(t) = u(1, t) = α(1, t); however, any signal propagating in α, whose dynamics are given by (8.26a) and (8.26c), is distorted by the integral terms and source term in (8.26a).
When attempting to use the decoupling backstepping transformation (8.44), with
inverse (8.48), the relationship to the new variables is
y1(t) = α(1, t) + ∫₀¹ K^{uu}(1, ξ)u(ξ, t)dξ + ∫₀¹ K^{uv}(1, ξ)v(ξ, t)dξ
      = α(1, t) + ∫₀¹ L^{αα}(1, ξ)α(ξ, t)dξ + ∫₀¹ L^{αβ}(1, ξ)β(ξ, t)dξ   (8.118)
which contains weighted integrals of the states. So either way, complications occur
for the collocated case which are not present in the anti-collocated case.
The optimal control problem for (8.1) is investigated in Hasan et al. (2016). The
resulting controller requires the solution to a set of co-state equations propagating
backwards in time. It is hence non-causal and not possible to implement on-line.
However, it can be the basis for the derivation of a linear quadratic regulator (LQR)
state-feedback law for the infinite horizon, requiring the solution to non-linear, distributed Riccati equations. This is attempted in Hasan et al. (2016), but the validity of this controller is questionable as it does not involve any state-feedback from the
state u. In Anfinsen and Aamo (2017) a state-feedback inverse optimal controller
is derived for system (8.1) with constant transport speeds, which avoids the need to
solve Riccati equations often associated with optimal controllers, and exponentially
stabilizes the system in the L 2 -sense, while also minimizing a cost function that is
positive definite in the system states and control signal. However, the finite-time convergence property of the backstepping controller is lost. Some remarkable features of the resulting inverse optimal control law are that it is simply a scaled version of the backstepping controller of Theorem 8.1, and that it approaches the backstepping controller as the cost of actuation approaches zero.
References
9.1 Introduction
In this chapter, we present the book’s first adaptive stabilizing controllers for 2 × 2
systems. These are state-feedback solutions requiring full state measurements. The
first result on adaptive control of 2 × 2 systems is given in the back-to-back papers
Anfinsen and Aamo (2016a, b), for a system in the form (7.1), but with constant
in-domain parameters, that is
where
and
Extension to the case of spatially varying coefficients is straightforward for the identifier-based method, but more involved for the swapping method. One such solution is given in Anfinsen and Aamo (2017) for systems in the form (7.4), which
we restate here
u t (x, t) + λ(x)u x (x, t) = c1 (x)v(x, t) (9.4a)
vt (x, t) − μ(x)vx (x, t) = c2 (x)u(x, t) (9.4b)
u(0, t) = qv(0, t) (9.4c)
v(1, t) = U (t) (9.4d)
u(x, 0) = u 0 (x) (9.4e)
v(x, 0) = v0 (x) (9.4f)
where
and
The solution offered in Anfinsen and Aamo (2017) requires a substantially different
set of swapping filters, which in turn leads to a more comprehensive stability proof.
In this chapter, we present in Sect. 9.2 the identifier-based solution from Anfinsen
and Aamo (2018) for the constant-coefficient system (9.1). In Sect. 9.3, we present
the swapping-based solution for the spatially-varying coefficient system (9.4).
We emphasize that the controllers in this chapter require state-feedback. That is,
they assume that distributed measurements of the states in the domain are available,
which is rarely the case in practice. The more realistic case of taking measurements at the boundary of the domain only is treated in Chaps. 10 and 11.
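A minimal explicit upwind discretization of (9.4) (or equally of (8.1)) can be sketched as follows; stability of the explicit scheme requires the CFL condition dt ≤ dx / max(λ, μ), and all names are ours:

```python
def upwind_step(u, v, lam, mu, c1, c2, q, U, dt, dx):
    """One explicit upwind step for the 2x2 system (9.4):
        u_t + lam(x) u_x = c1(x) v   (rightward transport, u(0) = q v(0))
        v_t - mu(x)  v_x = c2(x) u   (leftward transport,  v(1) = U)
    u, v, lam, mu, c1, c2 are samples on a uniform grid of n + 1 points."""
    n = len(u) - 1
    un, vn = u[:], v[:]
    for i in range(1, n + 1):   # backward difference for rightward transport
        un[i] = u[i] - lam[i] * dt / dx * (u[i] - u[i - 1]) + dt * c1[i] * v[i]
    for i in range(n):          # forward difference for leftward transport
        vn[i] = v[i] + mu[i] * dt / dx * (v[i + 1] - v[i]) + dt * c2[i] * u[i]
    vn[n] = U                   # actuated boundary condition (9.4d)
    un[0] = q * vn[0]           # reflection at x = 0, condition (9.4c)
    return un, vn
```

With CFL number exactly 1 and zero couplings, each step reduces to an exact shift of the grid values, which makes the scheme easy to sanity-check.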
|c11 | ≤ c̄11 , |c12 | ≤ c̄12 , |c21 | ≤ c̄21 , |c22 | ≤ c̄22 , |q| ≤ q̄. (9.7)
where proj denotes the projection operator given in Appendix A, ρ, γ, γ5 > 0 are
scalar design gains, and
Define
b1 = [c11 c12]^T,   b2 = [c21 c22]^T   (9.13)
and let b̂1 and b̂2 be estimates of b1 and b2 , respectively, and let
b̄1 = [c̄11 c̄12]^T,   b̄2 = [c̄21 c̄22]^T   (9.14)
be bounds on b1 and b2 , respectively, where c̄11 , c̄12 , c̄21 , c̄22 , q̄ are given in Assump-
tion 9.1. The initial guesses b̂1,0 = [ĉ11,0 ĉ12,0]^T, b̂2,0 = [ĉ21,0 ĉ22,0]^T and q̂0 are
chosen inside the feasible domain, that is
|ĉ11,0 | ≤ c̄11 , |ĉ12,0 | ≤ c̄12 , |ĉ21,0 | ≤ c̄21 , |ĉ22,0 | ≤ c̄22 , |q̂0 | ≤ q̄. (9.15)
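The projection operator used in the adaptive laws keeps the parameter estimates inside these known bounds; a simple discontinuous scalar version can be sketched as follows (the operator in Appendix A of the book may differ in detail, e.g. by smoothing near the boundary):

```python
def proj(bound, tau, est):
    """Discontinuous projection for a scalar estimate confined to
    [-bound, bound]: the raw update tau is passed through unless it would
    push the estimate across the boundary, in which case it is zeroed."""
    if est >= bound and tau > 0.0:
        return 0.0
    if est <= -bound and tau < 0.0:
        return 0.0
    return tau

# Euler integration of est_dot = proj(...): the estimate stops at the bound
# (up to a single Euler-step overshoot) instead of drifting away.
est, dt = 0.9, 0.01
for _ in range(100):
    est += dt * proj(1.0, 5.0, est)   # constant positive raw update
```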
Lemma 9.1 Consider system (9.1). The identifier (9.8)–(9.10) with initial conditions
(9.15) guarantees
where
Proof Property (9.16a) follows trivially from projection in (9.10) and Lemma A.1
in Appendix A.1. The dynamics of (9.17) is
where
q̃(t) = q − q̂(t), b̃1 (t) = b1 − b̂1 (t), b̃2 (t) = b2 − b̂2 (t). (9.19)
V1(t) = V2(t) + b̃1^T(t)Γ1^{-1}b̃1(t) + b̃2^T(t)Γ2^{-1}b̃2(t) + (λ/(2γ5)) q̃²(t)   (9.20)
where
V2(t) = ∫₀¹ e^{−γx} e²(x, t)dx + ∫₀¹ e^{γx} ε²(x, t)dx.   (9.21)
Differentiating (9.20) with respect to time and inserting the dynamics (9.18a)–
(9.18b), integrating by parts and using the boundary condition (9.18d), we find
V̇1(t) = −λe^{−γ}e²(1, t) + λe²(0, t) − λγ ∫₀¹ e^{−γx} e²(x, t)dx
 + 2 ∫₀¹ e^{−γx} e(x, t)w^T(x, t)b̃1(t)dx − 2ρ ∫₀¹ e^{−γx} e²(x, t)||w(t)||²dx
 − με²(0, t) − μγ ∫₀¹ e^{γx} ε²(x, t)dx + 2 ∫₀¹ e^{γx} ε(x, t)w^T(x, t)b̃2(t)dx
 − 2ρ ∫₀¹ e^{γx} ε²(x, t)||w(t)||²dx + 2b̃1^T(t)Γ1^{-1}b̃˙1(t) + 2b̃2^T(t)Γ2^{-1}b̃˙2(t)
 + λγ5^{-1} q̃(t)q̃˙(t).   (9.22)
Inserting the adaptive laws (9.10), and using the property −b̃1T (t)Γ1 projb̄1 (τ (t), b̂1 (t))
≤ −b̃1T (t)Γ1 τ (t) (Lemma A.1 in Appendix A) and similarly for b̃2 and q̃, give
V̇1(t) ≤ −λe^{−γ}e²(1, t) + λe²(0, t) − λγ ∫₀¹ e^{−γx} e²(x, t)dx
 − 2ρ ∫₀¹ e^{−γx} e²(x, t)||w(t)||²dx − με²(0, t) − μγ ∫₀¹ e^{γx} ε²(x, t)dx
 − 2ρ ∫₀¹ e^{γx} ε²(x, t)||w(t)||²dx − λq̃(t)e(0, t)v(0, t).   (9.23)
0
V̇1(t) ≤ −λe^{−γ}e²(1, t) − λe²(0, t)v²(0, t) − λγe^{−γ}||e(t)||²
 − 2ρe^{−γ}||e(t)||²||w(t)||² − με²(0, t)
 − μγ||ε(t)||² − 2ρ||ε(t)||²||w(t)||²   (9.25)
which shows that V1 is bounded, and from the definitions of V1 and V2 that ||e||, ||ε|| ∈ L∞. Integrating (9.25) in time from zero to infinity gives ||e||, ||ε|| ∈ L2, (9.16c) and |e(1, ·)|, |ε(0, ·)|, |e(0, ·)v(0, ·)| ∈ L2. From the properties (9.16c), |e(0, ·)v(0, ·)| ∈ L2 and the adaptive laws (9.10), (9.16e) follow. Using the following Lyapunov function candidate
V3(t) = (1/(2γ5)) q̃²(t),   (9.26)
2γ5
and the property −q̃(t)γprojq̄ (τ (t), q̂(t)) ≤ −q̃(t)γτ (t) (Lemma A.1 in Appendix
A), we find
V̇3(t) ≤ −q̃(t)e(0, t)v(0, t) ≤ − q̃²(t)v²(0, t)/(1 + v²(0, t)).   (9.27)
This means that V3 is bounded from above, and hence V3 ∈ L∞ . Integrating (9.27)
from zero to infinity gives (9.16f). From (9.24) and (9.18c), we have
and from |e(0, ·)v(0, ·)| ∈ L2 and (9.16f), |e(0, ·)| ∈ L2 follows.
defined over T1, given in (1.1b). By Theorem D.1 in Appendix D.2, Eq. (9.29) has a unique, bounded solution for every time t, and since the set of admissible ĉ1, . . . , ĉ4, q̂ is compact due to projection, it follows that there exists a constant K̄ so that
Additionally, from differentiating Eq. (9.29) with respect to time, applying Theorem D.1 in Appendix D.2 on the resulting equations, and using (9.16e), we obtain
||K̂_t^u||, ||K̂_t^v|| ∈ L2.   (9.31)
Property (9.31) is crucial for the closed loop analysis that follows.
Consider now the control law
U(t) = ∫₀¹ K̂^u(1, ξ, t)û(ξ, t)dξ + ∫₀¹ K̂^v(1, ξ, t)v̂(ξ, t)dξ   (9.32)
where (K̂^u, K̂^v) is the solution to (9.29), and û, v̂ are the states of the identifier (9.8).
Theorem 9.1 Consider system (9.1) and identifier (9.8)–(9.10). The control law
(9.32) guarantees
Remark 9.1 The particular controller kernel equation (9.29) can, by a change of
variables, be brought into the form for which explicit solutions are given in Vazquez
and Krstić (2014).
Lemma 9.2 Transformation (9.34) along with control law (9.32) map identifier (9.8)
into (9.36) with
ω(x, ξ, t) = ĉ12(t)K̂^u(x, ξ, t) + ∫_ξ^x κ(x, s, t)K̂^u(s, ξ, t)ds   (9.37a)
κ(x, ξ, t) = ĉ12(t)K̂^v(x, ξ, t) + ∫_ξ^x κ(x, s, t)K̂^v(s, ξ, t)ds.   (9.37b)
Proof Differentiating (9.34b) with respect to time, inserting the dynamics (9.8a)–
(9.8b), integrating by parts, and inserting the boundary condition (9.8c) we find
v̂_t(x, t) = z_t(x, t) + ∫₀ˣ K̂_t^u(x, ξ, t)û(ξ, t)dξ + ∫₀ˣ K̂_t^v(x, ξ, t)v̂(ξ, t)dξ
 − λK̂^u(x, x, t)û(x, t) + λq̂(t)K̂^u(x, 0, t)v̂(0, t)
 + λq̂(t)K̂^u(x, 0, t)ε(0, t) + λK̂^u(x, 0, t)q̃(t)v(0, t)
 − λK̂^u(x, 0, t)e(0, t) + ∫₀ˣ K̂_ξ^u(x, ξ, t)λû(ξ, t)dξ
 + ∫₀ˣ K̂^u(x, ξ, t)ĉ11(t)û(ξ, t)dξ + ∫₀ˣ K̂^u(x, ξ, t)ĉ11(t)e(ξ, t)dξ + ⋯
Inserting (9.38) and (9.39) into (9.8b), using the Eq. (9.29), one obtains (9.36b).
Inserting (9.34) into (9.36a), changing the order of integration in the double integrals
and using (9.37), we obtain (9.8a). The boundary condition (9.36c) follows from
inserting (9.34) into (9.8c) and noting that
and

v(0, t) = v̂(0, t) + ε(0, t) = z(0, t) + ε(0, t).   (9.41)
Recall from Theorem 1.3 the following inequalities that hold since T is a backstep-
ping transformation with bounded integration kernels
Moreover, from applying Lemma 1.1 to (9.37), and using the fact that K̂ u , K̂ v and
ĉ12 are all uniformly bounded, there must exist constants ω̄, κ̄ so that
Consider now the following components that will eventually form a Lyapunov
function candidate
V4(t) = ∫₀¹ e^{−δx} w²(x, t)dx   (9.44a)
V5(t) = ∫₀¹ e^{kx} z²(x, t)dx.   (9.44b)
V̇4(t) ≤ h1 z²(0, t) − [λδ − h2]V4(t) + h3 V5(t) + l1(t)V4(t) + l2(t)   (9.45a)
V̇5(t) ≤ −[μ − e^k h4 q̃²(t)]z²(0, t) + h5 V4(t) − [kμ − h6]V5(t)
 + l3(t)V4(t) + l4(t)V5(t) + l5(t).   (9.45b)
for a positive constant a, differentiating with respect to time and using Lemma 9.3 (assuming δ ≥ 1), we find
V̇6(t) ≤ −[aμ − h1 − ae^k h4 q̃²(t)]z²(0, t) − [λδ − h2 − ah5]V4(t)
 − [akμ − ah6 − h3]V5(t) + (l1(t) + al3(t))V4(t)
 + al4(t)V5(t) + l2(t) + al5(t).   (9.47)
By choosing
a = (h1 + 1)/μ   (9.48)

δ > max{ 1, (h2 + ah5)/λ },   k > (h3 + ah6)/(aμ)   (9.49)
we obtain
V̇6(t) ≤ −[1 − bq̃²(t)]z²(0, t) − cV6(t) + l6(t)V6(t) + l7(t)   (9.50)
q̃²(t)z²(0, t) = q̃²(t) (1 + v²(0, t))/(1 + v²(0, t)) z²(0, t)
 = q̃²(t)v²(0, t)/(1 + v²(0, t)) z²(0, t) + q̃²(t)/(1 + v²(0, t)) z²(0, t)
 ≤ q̃²(t)v²(0, t)/(1 + v²(0, t)) z²(0, t) + 2 q̃²(t)/(1 + v²(0, t)) (v²(0, t) + ε²(0, t))
 ≤ q̃²(t)v²(0, t)/(1 + v²(0, t)) z²(0, t) + l8(t)   (9.51)

where

l8(t) = 2 q̃²(t)v²(0, t)/(1 + v²(0, t)) + 8q̄² ε²(0, t)   (9.52)
σ²(t) = q̃²(t)v²(0, t)/(1 + v²(0, t)).   (9.54)
and
V6 ∈ L1 ∩ L∞ (9.57)
and hence
Since ||z|| ∈ L∞ , it follows that z(x, t) must be bounded for almost all x ∈ [0, 1],
implying that
σ²z²(0, ·) ∈ L1   (9.59)
V6 → 0 (9.62)
and hence
for a positive constant c̄, and some nonnegative function l11, which is integrable since e(0, ·), ||u||, ||v||, ||e||, ||ε|| ∈ L2 ∩ L∞, and b̃1, b̃2 are bounded. Lemma B.2 in Appendix B gives
V2 → 0 (9.68)
and hence
where we have inserted the control law (9.32). Since ||u||, ||v||, ||û||, ||v̂|| ∈ L2 ∩ L∞ and the kernels K̂^{uv}, K̂^{vv}, K^{uv}, K^{vv} are all bounded, it follows that β(1, ·) ∈ L2 ∩ L∞. Since β and α are simple, cascaded transport equations, this implies
while the invertibility of the transformation (8.13) (Theorem 1.3) then yields
where η and φ are defined for x ∈ [0, 1], t ≥ 0, while M and N are defined over T
and S given by (1.1b) and (1.1d), respectively. The initial conditions are assumed to
satisfy
Using the filters (9.74), non-adaptive estimates of the states can be generated from
ū(x, t) = qη(x, t) + ∫₀ˣ θ(ξ)M(x, ξ, t)dξ   (9.77a)
v̄(x, t) = φ(x, t) + ∫ₓ¹ κ(ξ)N(x, ξ, t)dξ   (9.77b)
where
θ(x) = c1(x)/λ(x),   κ(x) = c2(x)/μ(x).   (9.78)
Lemma 9.4 Consider system (9.4) and the non-adaptive estimates (9.77) generated
using the filters (9.74). Then
ū ≡ u, v̄ ≡ v (9.79)
for t ≥ t0 , where
This assumption is equivalent to Assumption 9.1 for the constant coefficient case.
Since the bounds are arbitrary, the assumption is not a limitation. From the swapping
representations (9.77), we have
$$u(x,t) = q\eta(x,t) + \int_0^x \theta(\xi)M(x,\xi,t)\,d\xi + e(x,t) \tag{9.84a}$$
$$v(x,t) = \phi(x,t) + \int_x^1 \kappa(\xi)N(x,\xi,t)\,d\xi + \varepsilon(x,t) \tag{9.84b}$$
where
and
with
$$\hat u(x,t) = \hat q(t)\eta(x,t) + \int_0^x \hat\theta(\xi,t)M(x,\xi,t)\,d\xi \tag{9.88a}$$
$$\hat v(x,t) = \phi(x,t) + \int_x^1 \hat\kappa(\xi,t)N(x,\xi,t)\,d\xi. \tag{9.88b}$$
The projection operator is defined in Appendix A, and the initial guesses q̂0 , θ̂0 , κ̂0
are chosen inside the feasible domain
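For a scalar parameter, the projection operator used in these adaptive laws reduces to gating the update direction at the boundary of the feasible interval. A sketch (the function name and the symmetric interval [−bound, bound] are illustrative; the precise operator is defined in Appendix A):

```python
def proj(tau, theta_hat, bound):
    """Scalar projection: pass the update tau through unchanged unless
    theta_hat already sits on the boundary of [-bound, bound] and tau
    points further outward, in which case the update is zeroed."""
    if theta_hat >= bound and tau > 0:
        return 0.0
    if theta_hat <= -bound and tau < 0:
        return 0.0
    return tau
```

An Euler update `theta_hat += dt * proj(tau, theta_hat, bound)` started inside the interval then never leaves it, which is what gives the uniform bounds on q̂, θ̂ and κ̂ used throughout the analysis.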
Lemma 9.5 The adaptive laws (9.85) with initial conditions satisfying (9.89) have
the following properties
Proof The property (9.90a) follows from the conditions (9.89) and the projection
operator. Consider
$$V(t) = \int_0^1 (2-x)\lambda^{-1}(x)e^2(x,t)\,dx + \int_0^1 (1+x)\mu^{-1}(x)\varepsilon^2(x,t)\,dx + \frac{\tilde q^2(t)}{2\gamma_1} + \frac12\int_0^1 \frac{\tilde\theta^2(x,t)}{\gamma_2(x)}\,dx + \frac12\int_0^1 \frac{\tilde\kappa^2(x,t)}{\gamma_3(x)}\,dx, \tag{9.91}$$
9.3 Swapping-Based Design for a System with Spatially Varying Coefficients 163
from which we find, using the property −θ̃(x, t)γ(x)projθ̄ (τ (x, t), θ̂(x, t))
≤ −θ̃(x, t)γ(x)τ (x, t) (Lemma A.1), and similarly for q̃ and κ̃
$$\begin{aligned}
\dot V(t) \le{}& -e^2(1,t) + 2e^2(0,t) - \|e(t)\|^2 + 2\varepsilon^2(1,t) - \varepsilon^2(0,t) - \|\varepsilon(t)\|^2\\
&- \frac{1}{1+f^2(t)}\int_0^1 \hat e(x,t)\tilde q(t)\eta(x,t)\,dx\\
&- \frac{1}{1+f^2(t)}\int_0^1 \tilde\theta(x,t)\int_x^1 \hat e(\xi,t)M(\xi,x,t)\,d\xi\,dx\\
&- \frac{1}{1+\|N(t)\|^2}\int_0^1 \tilde\kappa(x,t)\int_0^x \hat\varepsilon(\xi,t)N(\xi,x,t)\,d\xi\,dx\\
&- \frac{1}{1+\|n_0(t)\|^2}\,\hat\varepsilon(0,t)\int_0^1 \tilde\kappa(x,t)n_0(x,t)\,dx.
\end{aligned} \tag{9.92}$$
Inserting the boundary conditions (9.82) and changing the order of integration in the
double integrals yield
Noticing that
$$\hat e(x,t) = e(x,t) + \tilde q(t)\eta(x,t) + \int_0^x \tilde\theta(\xi,t)M(x,\xi,t)\,d\xi \tag{9.94a}$$
$$\hat\varepsilon(x,t) = \varepsilon(x,t) + \int_x^1 \tilde\kappa(\xi,t)N(x,\xi,t)\,d\xi \tag{9.94b}$$
$$\hat\varepsilon(0,t) = \varepsilon(0,t) + \int_0^1 \tilde\kappa(\xi,t)n_0(\xi)\,d\xi \tag{9.94c}$$
we find
$$\begin{aligned}
\dot V(t) \le{}& -e^2(1,t) - \|e(t)\|^2 - \varepsilon^2(0,t) - \|\varepsilon(t)\|^2 - \frac{\|\hat e(t)\|^2}{1+f^2(t)} + \frac{\|\hat e(t)\|\|e(t)\|}{1+f^2(t)}\\
&- \frac{\|\hat\varepsilon(t)\|^2}{1+\|N(t)\|^2} + \frac{\|\hat\varepsilon(t)\|\|\varepsilon(t)\|}{1+\|N(t)\|^2} - \frac{\hat\varepsilon^2(0,t)}{1+\|n_0(t)\|^2} + \frac{\hat\varepsilon(0,t)\varepsilon(0,t)}{1+\|n_0(t)\|^2}
\end{aligned} \tag{9.95}$$
and similarly for $\frac{\|\hat\varepsilon(t)\|^2}{1+\|N(t)\|^2}$ and $\frac{\hat\varepsilon^2(0,t)}{1+\|n_0(t)\|^2}$, which give the remaining properties
and similarly for θ̂t and κ̂t , so using (9.90b)–(9.90c) gives (9.90d).
for the state estimates û, v̂ generated using (9.88), the filters (9.74) and the adaptive
laws (9.85), and where ( K̂ u , K̂ v ) is defined over T1 given in (1.1b) and is, for every
t, the solution to the PDE
$$\hat K^u(x,x,t) = -\frac{\mu(x)\hat\kappa(x,t)}{\lambda(x)+\mu(x)} \tag{9.101c}$$
$$\hat K^v(x,0,t) = \frac{\lambda(0)}{\mu(0)}\,\hat q(t)\hat K^u(x,0,t). \tag{9.101d}$$
By Theorem D.1 in Appendix D.2, Eq. (9.101) has a unique, bounded solution for
every time t, and since the set of admissible θ̂, κ̂ and q̂ is bounded due to projection,
it also follows that the set of admissible K̂ u , K̂ v is bounded as well. The kernels
K̂ u , K̂ v are therefore uniformly, pointwise bounded, and there exists a constant K̄
so that
Moreover,
$$\|\hat K^u_t\|, \|\hat K^v_t\| \in \mathcal L_2 \cap \mathcal L_\infty, \tag{9.103}$$
which follows from differentiating equations (9.101) with respect to time, applying
Theorem D.1 in Appendix D.2 and using (9.90d).
Theorem 9.2 Consider system (9.4). The control law (9.100) guarantees
The proof of Theorem 9.2 is given in Sect. 9.3.5, following the introduction of a
backstepping transformation in the next section that facilitates the Lyapunov
analysis.
9.3.4 Backstepping
It is straightforward to show that the state estimates (9.88) satisfy the dynamics
$$\hat u_t(x,t) + \lambda(x)\hat u_x(x,t) = \lambda(x)\hat\theta(x,t)v(x,t) + \dot{\hat q}(t)\eta(x,t) + \int_0^x \hat\theta_t(\xi,t)M(x,\xi,t)\,d\xi \tag{9.105a}$$
$$\hat v_t(x,t) - \mu(x)\hat v_x(x,t) = \mu(x)\hat\kappa(x,t)u(x,t) + \int_x^1 \hat\kappa_t(\xi,t)N(x,\xi,t)\,d\xi \tag{9.105b}$$
$$\hat u(0,t) = \hat q(t)v(0,t) \tag{9.105c}$$
for some functions û 0 , v̂0 ∈ B([0, 1]). Consider the backstepping transformation
where T −1 is an operator in the same form as (9.106b). Consider also the target
system
$$w_t(x,t) + \lambda(x)w_x(x,t) = \cdots + \dot{\hat q}(t)\eta(x,t) + \int_0^x \hat\theta_t(\xi,t)M(x,\xi,t)\,d\xi \tag{9.109a}$$
$$\begin{aligned}
z_t(x,t) - \mu(x)z_x(x,t) ={}& -\hat K^u(x,0,t)\lambda(0)\hat q(t)\hat\varepsilon(0,t)\\
&+ T\left[\dot{\hat q}\eta + \int_0^x \hat\theta_t(\xi)M(x,\xi,t)\,d\xi,\ \int_x^1 \hat\kappa_t(\xi)N(x,\xi,t)\,d\xi\right](x,t)\\
&- \int_0^x \hat K^u_t(x,\xi,t)w(\xi,t)\,d\xi - \int_0^x \hat K^v_t(x,\xi,t)T^{-1}[w,z](\xi,t)\,d\xi
\end{aligned} \tag{9.109b}$$
$$w(0,t) = \hat q(t)z(0,t) + \hat q(t)\hat\varepsilon(0,t) \tag{9.109c}$$
$$z(1,t) = 0 \tag{9.109d}$$
$$w(x,0) = w_0(x) \tag{9.109e}$$
$$z(x,0) = z_0(x) \tag{9.109f}$$
for some functions ω, b defined over T1 , and initial conditions w0 , z 0 ∈ B([0, 1]).
We seek a transformation mapping (9.105) into (9.109).
Lemma 9.6 Consider system (9.105). The backstepping transformation (9.106) and
the control law (9.100), with ( K̂ u , K̂ v ) satisfying (9.101), map (9.105) into (9.109),
where ω and b are given by
$$\omega(x,\xi,t) = \lambda(x)\hat\theta(x,t)\hat K^u(x,\xi,t) + \int_\xi^x b(x,s,t)\hat K^u(s,\xi,t)\,ds \tag{9.110a}$$
$$b(x,\xi,t) = \lambda(x)\hat\theta(x,t)\hat K^v(x,\xi,t) + \int_\xi^x b(x,s,t)\hat K^v(s,\xi,t)\,ds. \tag{9.110b}$$
Proof Differentiating (9.106b) with respect to time and space, respectively, inserting
the dynamics (9.105a)–(9.105b), integrating by parts and inserting the result into
(9.105b) yield
$$\begin{aligned}
\cdots &- \int_x^1 \hat\kappa_t(\xi,t)N(x,\xi,t)\,d\xi + \int_0^x \hat K^u(x,\xi,t)\dot{\hat q}(t)\eta(\xi,t)\,d\xi\\
&+ \int_0^x \hat K^u(x,\xi,t)\int_0^\xi \hat\theta_t(s,t)M(\xi,s,t)\,ds\,d\xi\\
&+ \int_0^x \hat K^v(x,\xi,t)\int_\xi^1 \hat\kappa_t(s,t)N(\xi,s,t)\,ds\,d\xi = 0.
\end{aligned} \tag{9.111}$$
Choosing K̂ u and K̂ v to satisfy (9.101) yields the target system dynamics (9.109b).
Inserting the transformations (9.106) into the w-dynamics (9.109a), using the dynam-
ics (9.105a) and changing the order of integration in the double integrals yield
$$\begin{aligned}
0 ={}& -\int_0^x \left(\omega(x,\xi,t) - \lambda(x)\hat\theta(x,t)\hat K^u(x,\xi,t) - \int_\xi^x b(x,s,t)\hat K^u(s,\xi,t)\,ds\right)\hat u(\xi,t)\,d\xi\\
&- \int_0^x \left(b(x,\xi,t) - \lambda(x)\hat\theta(x,t)\hat K^v(x,\xi,t) - \int_\xi^x b(x,s,t)\hat K^v(s,\xi,t)\,ds\right)\hat v(\xi,t)\,d\xi
\end{aligned} \tag{9.112}$$
which gives Eq. (9.110) for ω and b. The bounds (9.114a) follow from applying
Lemma 1.1 to (9.110) and (9.102).
Since the backstepping kernels K̂ u and K̂ v used in (9.106) are uniformly bounded,
by Theorem 1.3, there exist constants G 1 , G 2 , G 3 , G 4 so that
Moreover, from (9.110) and the fact that λ, μ, K̂ u , K̂ v , θ̂, κ̂ are all uniformly
bounded, there exist constants ω̄ and b̄ such that
Consider now the following components that will eventually form a Lyapunov func-
tion candidate
$$V_1(t) = \int_0^1 e^{-\delta x}\lambda^{-1}(x)w^2(x,t)\,dx \tag{9.116a}$$
$$V_2(t) = \int_0^1 (1+x)\mu^{-1}(x)z^2(x,t)\,dx \tag{9.116b}$$
$$V_3(t) = \int_0^1 (2-x)\lambda^{-1}(x)\eta^2(x,t)\,dx \tag{9.116c}$$
$$V_4(t) = \int_0^1 (2-x)\lambda^{-1}(x)\int_0^x M^2(x,\xi,t)\,d\xi\,dx \tag{9.116d}$$
$$V_5(t) = \int_0^1 (1+x)\mu^{-1}(x)\int_x^1 N^2(x,\xi,t)\,d\xi\,dx. \tag{9.116e}$$
Lemma 9.7 Let $\delta > 6 + \underline\lambda^{-2}\bar\omega^2$. Then there exist positive constants $h_1, h_2, \ldots, h_6$
and nonnegative, integrable functions l1 , l2 , . . . , l15 such that
where
$$\sigma^2(t) = \frac{\hat\varepsilon^2(0,t)}{1+\|n_0(t)\|^2} \tag{9.118}$$
Choosing
$$V_6(t) = V_1(t) + \frac{5}{\underline\mu}\max(h_2,h_5)V_2(t) + \frac14 h_1 V_3(t) + e^{-\delta}\min(h_4^{-1},h_5^{-1})V_4(t) + e^{-\delta}h_6^{-1}V_5(t) \tag{9.119}$$
$$\dot V_6(t) \le -cV_6(t) + l_{17}(t)V_6(t) + l_{18}(t) - a\left(1 - b\sigma^2(t)\right)\|n_0(t)\|^2 \tag{9.121}$$
for some integrable functions l17 and l18 , and positive constants a, b and c. We also
have from (9.96) that
where
$$V_6 \in \mathcal L_1 \cap \mathcal L_\infty, \tag{9.125}$$
and thus
implying
$$\|\phi\| \in \mathcal L_2 \cap \mathcal L_\infty, \tag{9.128}$$
The remaining properties can be shown using the same technique as in the proof
of Theorem 9.1.
9.4 Simulations
System (9.1) and the controller of Theorem 9.1 are implemented using the system
parameters
Fig. 9.1 Left: State norm. Right: Actuation signal for the controller of Theorem 9.1
Fig. 9.2 Actual (solid black) and estimated parameters (dashed red) using the adaptive controller
of Theorem 9.1
All additional initial conditions are set to zero. The design gains are set to
$$\gamma = \rho = 10^{-2}, \qquad \gamma_i = 1,\ i = 1,\ldots,5. \tag{9.132}$$
The controller kernel equations (9.29) are solved using the method described in
Appendix F.2. From Fig. 9.1, it is seen that the system states converge to zero, as
does the actuation signal U . The estimated parameters shown in Fig. 9.2 are bounded
and stagnate, but only one parameter (ĉ21 ) converges to its true value. Convergence of
the estimated parameters to their true values is not guaranteed by the control law. This
is common in adaptive control, since persistent excitation and set-point regulation
cannot in general be achieved simultaneously.
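The closed-loop responses in this section come from integrating the plant with the controller in the loop; the transport part alone can be reproduced with a first-order upwind scheme. A minimal sketch for constant transport speeds (an open-loop step with U = 0 by default; grid sizes, names and the explicit Euler choice are illustrative, not the implementation used in the book):

```python
import numpy as np

def step_2x2(u, v, dx, dt, lam, mu, c1, c2, q, U=0.0):
    """One explicit upwind step of
         u_t + lam * u_x = c1 * v,   v_t - mu * v_x = c2 * u,
       with boundary conditions u(0) = q * v(0) and v(1) = U.
       Stability requires the CFL condition dt <= dx / max(lam, mu)."""
    un, vn = u.copy(), v.copy()
    # u transports rightwards: backward (upwind) difference
    un[1:] = u[1:] - lam * dt / dx * (u[1:] - u[:-1]) + dt * c1 * v[1:]
    # v transports leftwards: forward (upwind) difference
    vn[:-1] = v[:-1] + mu * dt / dx * (v[1:] - v[:-1]) + dt * c2 * u[:-1]
    un[0] = q * vn[0]   # reflection at the uncontrolled boundary x = 0
    vn[-1] = U          # actuated boundary x = 1
    return un, vn
```

Feeding U with the backstepping control law, evaluated by quadrature over the state estimates, closes the loop.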
Fig. 9.3 Left: State norm. Right: Actuation signal for the controller of Theorem 9.2
Fig. 9.4 Estimated parameters using the adaptive controller of Theorem 9.2. Left: Actual (solid
black) and final estimated value (dashed red) of θ. Middle: Actual (solid black) and final estimated
value (dashed red) of κ. Right: Actual (solid black) and estimated value (dashed red) of q
Finally, system (9.4) in closed loop with the controller of Theorem 9.2 is implemented
using the system parameters
All additional initial conditions are set to zero. The design gains are set to
The controller kernel equations (9.101) are solved using the method described in
Appendix F.2.
The state norm and actuation signal both converge to zero, as shown in Fig. 9.3,
in accordance with the theory. All estimated parameters are seen in Fig. 9.4 to be
bounded, but do not converge to their true values. It is interesting to note that even
though the estimated functions θ̂ and κ̂ and the estimated parameter q̂ are quite
different from the actual functions, the adaptive controller manages to stabilize the
system.
9.5 Notes
The adaptive control laws derived in this chapter adaptively stabilize a system of
2 × 2 linear hyperbolic PDEs with uncertain in-domain cross terms and source terms,
and an uncertain boundary parameter. They assume that full-state measurements are
available. As mentioned in the introduction, this assumption can be questioned, as
distributed measurements in the domain rarely are available in practice. However,
the solutions offered here are some of the many steps towards a complete coverage
of adaptive control of linear hyperbolic PDEs.
We proceed in the next chapter by limiting the available measurements to be taken
at the boundaries, which is a more practically feasible problem, but also considerably
harder to solve. We start in Chap. 10 by solving adaptive control problems for the
case of known in-domain coefficients, but uncertainty in the boundary parameter q.
10.1 Introduction
The adaptive control laws of the previous chapter assumed distributed measurements,
which are rarely available in practice. This chapter presents adaptive output-feedback
control laws for system (7.4) with an uncertain parameter q in the boundary con-
dition anti-collocated with actuation. Only one boundary measurement is assumed
available, and designs for both sensing collocated with actuation and anti-collocated
with actuation are presented, since they require significantly different analysis. For
the convenience of the reader, we restate the system under consideration, which is
where
which is collocated with the uncertain parameter q. For this problem, swapping
design will be used to achieve stabilization along the lines of Anfinsen and Aamo
(2017a). In the collocated case, the measurement is taken as
which requires a fairly different adaptive observer design for estimation of the states
u, v and parameter q, originally presented in Anfinsen and Aamo (2016). In particular,
the output injection gains are static in the anti-collocated case, while time-varying
in the collocated case. Although the control law differs from the anti-collocated case
only in the way state- and parameter estimates are generated, closed-loop stability
analysis, which was originally presented in Anfinsen and Aamo (2017b), becomes
more involved as a consequence of the estimation scheme. For both cases, we assume
the following.
Assumption 10.1 A bound q̄ on q is known, so that
For system (10.1) with measurement (10.4), we define the input filters
and (M α , M β ) is the solution to the PDE (8.65). We propose the adaptive law
$$\dot{\hat q}(t) = \mathrm{proj}_{\bar q}\left(\gamma\,\frac{(y_0(t) - \hat v(0,t))\,r(0,t)}{1 + r^2(0,t)},\ \hat q(t)\right), \tag{10.11a}$$
$$\hat q(0) = \hat q_0 \tag{10.11b}$$
for some design gain γ > 0, and initial guess q̂0 satisfying
$$|\hat q_0| \le \bar q \tag{10.12}$$
with q̄ provided by Assumption 10.1, and where proj is the projection operator
defined in Appendix A. Finally, we define the adaptive state estimates
$$\hat u(x,t) = \eta(x,t) + \hat q(t)p(x,t), \qquad \hat v(x,t) = \phi(x,t) + \hat q(t)r(x,t), \tag{10.13}$$
Theorem 10.1 Consider system (10.1), filters (10.7)–(10.8), adaptive law (10.11a)
and the state estimates (10.13). Then
and
$$|\hat e(x,t)| \le |\tilde q(t)||p(x,t)|, \qquad |\hat\varepsilon(x,t)| \le |\tilde q(t)||r(x,t)|, \tag{10.16}$$
If additionally there exist constants T, k1 , k2 so that
$$k_1 \ge \frac1T\int_t^{t+T} r^2(0,\tau)\,d\tau \ge k_2, \tag{10.17}$$
then q̂ → q exponentially fast. If additionally p(x, t) and r (x, t) are bounded for
all x ∈ [0, 1], then ||û − u||∞ → 0 and ||v̂ − v||∞ → 0 exponentially fast.
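On logged data, the excitation condition (10.17) is a sliding-window average of r²(0, t) and can be checked directly; a sketch (the function name and the cumulative-sum formulation are illustrative):

```python
import numpy as np

def pe_level(r0, dt, T):
    """Return the windowed averages (1/T) * integral_t^{t+T} r(0,tau)^2 dtau
    for all window positions, given samples r0 of r(0, t) with spacing dt."""
    n = int(round(T / dt))
    r2 = np.asarray(r0, dtype=float) ** 2
    c = np.concatenate(([0.0], np.cumsum(r2) * dt))   # running integral
    return (c[n:] - c[:-n]) / T
```

Condition (10.17) holds on the record if every returned value lies between k₂ and k₁.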
Proof The property (10.15a) follows from the projection operator and Lemma A.1
in Appendix A. Non-adaptive state estimates ū, v̄ can be constructed from
where e0 , 0 ∈ B([0, 1]). The error dynamics has the same form as the error dynamics
(8.67) of Theorem 8.2, where it was shown that by choosing the injection gains as
(10.10), the system can be mapped by an invertible backstepping transformation in
the form (8.72), that is
$$\begin{pmatrix} e(x,t)\\ \varepsilon(x,t)\end{pmatrix} = \begin{pmatrix}\tilde\alpha(x,t)\\ \tilde\beta(x,t)\end{pmatrix} + \int_0^x M(x,\xi)\begin{pmatrix}\tilde\alpha(\xi,t)\\ \tilde\beta(\xi,t)\end{pmatrix} d\xi, \tag{10.21}$$
with g1 and g2 given by (8.71a) and (8.75). System (10.22) is a cascade from α̃ into
β̃, and will be zero in finite time t F . Hence, the following static relationships are
valid
for some constant δ ≥ 1, from which we find, using integration by parts, the boundary
condition (10.22c) and Young’s inequality,
$$\begin{aligned}
\frac{d}{dt}\int_0^1 e^{-\delta x}\lambda^{-1}(x)\tilde\alpha^2(x,t)\,dx ={}& 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\tilde\alpha(x,t)\tilde\alpha_t(x,t)\,dx\\
={}& -2\int_0^1 e^{-\delta x}\tilde\alpha(x,t)\tilde\alpha_x(x,t)\,dx\\
&+ 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\tilde\alpha(x,t)\int_0^x g_1(x,\xi)\tilde\alpha(\xi,t)\,d\xi\,dx\\
\le{}& -e^{-\delta}\tilde\alpha^2(1,t) + \tilde\alpha^2(0,t) - \delta\int_0^1 e^{-\delta x}\tilde\alpha^2(x,t)\,dx\\
&+ \bar g_1^2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\tilde\alpha^2(x,t)\int_0^x d\xi\,dx + \frac{1}{\underline\lambda}\int_0^1 e^{-\delta x}\int_0^x \tilde\alpha^2(\xi,t)\,d\xi\,dx\\
\le{}& -\delta\int_0^1 e^{-\delta x}\tilde\alpha^2(x,t)\,dx + \frac{\bar g_1^2}{\underline\lambda}\int_0^1 e^{-\delta x}\tilde\alpha^2(x,t)\,dx\\
&- \frac{1}{\delta\underline\lambda}\left[e^{-\delta x}\int_0^x \tilde\alpha^2(\xi,t)\,d\xi\right]_{x=0}^{x=1} + \frac{1}{\delta\underline\lambda}\int_0^1 e^{-\delta x}\tilde\alpha^2(x,t)\,dx\\
\le{}& -\left(\delta - \frac{\bar g_1^2}{\underline\lambda} - \frac{1}{\underline\lambda}\right)\int_0^1 e^{-\delta x}\tilde\alpha^2(x,t)\,dx
\end{aligned} \tag{10.26}$$
for some arbitrary positive constants ρ1 and ρ2 , and where c̄2 and ḡ2 bound c2 and
g2 , respectively. Choosing $\rho_1 = \rho_2 = \frac18\underline\mu$ yields
$$\begin{aligned}
\frac{d}{dt}\int_0^1 (1+x)\mu^{-1}(x)\tilde\beta^2(x,t)\,dx \le{}& -\tilde\beta^2(0,t)\\
&+ 16\,\frac{\bar c_2^2+\bar g_2^2}{\underline\mu^2}\,e^{\delta}\int_0^1 e^{-\delta x}\tilde\alpha^2(x,t)\,dx - \frac14\int_0^1 \tilde\beta^2(x,t)\,dx
\end{aligned} \tag{10.28}$$
and
we obtain
$$\begin{aligned}
\dot V(t) \le{}& -\tilde\beta^2(0,t) - e^{\delta}\left(\delta - \frac{\bar g_1^2}{\underline\lambda} - \frac{1}{\underline\lambda} - 16\,\frac{\bar c_2^2+\bar g_2^2}{\underline\mu^2}\right)\int_0^1 e^{-\delta x}\tilde\alpha^2(x,t)\,dx\\
&- \frac14\int_0^1 \tilde\beta^2(x,t)\,dx - \frac{\hat\varepsilon^2(0,t)}{1+r^2(0,t)} + \frac{\hat\varepsilon(0,t)\varepsilon(0,t)}{1+r^2(0,t)}.
\end{aligned} \tag{10.31}$$
Choosing
$$\delta > \max\left\{1,\ \frac{\bar g_1^2}{\underline\lambda} + \frac{1}{\underline\lambda} + 16\,\frac{\bar c_2^2+\bar g_2^2}{\underline\mu^2}\right\} \tag{10.32}$$
10.2 Anti-collocated Sensing and Control 181
and applying Young’s inequality to the last term, recalling that β̃(0, t) = (0, t), we
obtain
$$\dot V(t) \le -\frac12\,\frac{\hat\varepsilon^2(0,t)}{1+r^2(0,t)}, \tag{10.33}$$
which proves that V is bounded and nonincreasing, and hence has a limit as t → ∞.
Integrating (10.33) from zero to infinity, and using
$$|\dot{\hat q}(t)| \le \gamma\,\frac{|\hat\varepsilon(0,t)|\,|r(0,t)|}{1+r^2(0,t)} \tag{10.35}$$
from which (10.15b) yields (10.15c). The property (10.16) follows immediately from
the relationships
$$\hat K^u(x,x,t) = -\frac{c_2(x)}{\lambda(x)+\mu(x)} \tag{10.38c}$$
$$\hat K^v(x,0,t) = \frac{\lambda(0)}{\mu(0)}\,\hat q(t)\hat K^u(x,0,t) \tag{10.38d}$$
with q̂, û and v̂ generated using the adaptive law (10.11a) and the relationship (10.13).
As with the previous PDEs of this form, (9.29) and (9.101), Theorem D.1 in Appendix
D guarantees a unique solution to (10.38), and since |q̂| ≤ q̄ and q̂˙ ∈ L2 ∩ L∞ , we
also have
$$\|\hat K^u_t\|, \|\hat K^v_t\| \in \mathcal L_2 \cap \mathcal L_\infty \tag{10.40}$$
since q̂˙ ∈ L2 ∩ L∞ .
Theorem 10.2 Consider system (10.1), filters (10.7)–(10.8), adaptive law (10.11a)
and the adaptive state estimates (10.13). The control law (10.37) guarantees
The proof is given in Sect. 10.2.4, following some intermediate results in Sect.
10.2.3.
10.2.3 Backstepping
First, we derive the dynamics of the adaptive estimates (10.13). Using the filters
(10.7) and (10.8), it can straightforwardly be shown that
$$\hat u_t(x,t) + \lambda(x)\hat u_x(x,t) = c_1(x)\hat v(x,t) + k_1(x)\hat\varepsilon(0,t) + \dot{\hat q}(t)p(x,t) \tag{10.42a}$$
$$\hat v_t(x,t) - \mu(x)\hat v_x(x,t) = c_2(x)\hat u(x,t) + k_2(x)\hat\varepsilon(0,t) + \dot{\hat q}(t)r(x,t) \tag{10.42b}$$
$$\hat u(0,t) = \hat q(t)v(0,t) \tag{10.42c}$$
$$\hat v(1,t) = U(t) \tag{10.42d}$$
$$\hat u(x,0) = \hat u_0(x) \tag{10.42e}$$
$$\hat v(x,0) = \hat v_0(x), \tag{10.42f}$$
where û 0 , v̂0 ∈ B([0, 1]). Consider the backstepping transformation from û, v̂ into
the new variables α, β given by
$$\begin{aligned}
\beta_t(x,t) - \mu(x)\beta_x(x,t) ={}& \cdots + \dot{\hat q}(t)T[p,r](x,t) - \int_0^x \hat K^u_t(x,\xi,t)\alpha(\xi,t)\,d\xi\\
&- \int_0^x \hat K^v_t(x,\xi,t)T^{-1}[\alpha,\beta](\xi,t)\,d\xi
\end{aligned} \tag{10.45b}$$
$$\alpha(0,t) = \hat q(t)\beta(0,t) + \hat q(t)\hat\varepsilon(0,t) \tag{10.45c}$$
$$\beta(1,t) = 0 \tag{10.45d}$$
$$\alpha(x,0) = \alpha_0(x) \tag{10.45e}$$
$$\beta(x,0) = \beta_0(x) \tag{10.45f}$$
for initial conditions α0 (x) = û 0 (x), β0 (x) = T [û 0 , v̂0 ](x, 0) and parameters ω, κ
defined over T1 given in (1.1b).
Lemma 10.1 The backstepping transformation (10.43) and control law (10.37) with
( K̂ u , K̂ v ) satisfying (10.38) map system (10.42) into the target system (10.45) with
ω and κ given by
$$\omega(x,\xi,t) = c_1(x)\hat K^u(x,\xi,t) + \int_\xi^x \kappa(x,s,t)\hat K^u(s,\xi,t)\,ds \tag{10.46a}$$
$$\kappa(x,\xi,t) = c_1(x)\hat K^v(x,\xi,t) + \int_\xi^x \kappa(x,s,t)\hat K^v(s,\xi,t)\,ds. \tag{10.46b}$$
184 10 Adaptive Output-Feedback: Uncertain Boundary Condition
Proof By differentiating (10.43b) with respect to time and space, inserting the
dynamics (10.42a)–(10.42b), integrating by parts and inserting the boundary condi-
tion (10.42c), we find
and
Using Eqs. (10.38) and the definitions of T and T −1 in (10.43b) and (10.44b),
Eq. (10.50) can be written as (10.45b).
Inserting (10.43) into (10.45a) and changing the order of integration in the double
integrals give
from which (10.46) gives (10.42a). The boundary condition (10.45c) comes from
(10.42c) and noting that û(0, t) = α(0, t) and v(0, t) = v̂(0, t) + ε̂(0, t) = β(0, t) +
ε̂(0, t). Evaluating (10.43b) at x = 1, inserting the boundary condition (10.42d) and
the control law (10.37) yield (10.45d).
Lastly, Lemma 1.1 applied to (10.46) gives the bounds (10.47), since K̂ u , K̂ v and
c1 are all uniformly bounded.
We will also map the filter (10.8) in ( p, r ) into a target system that is easier to
analyze. Consider the backstepping transformation from a new set of variables (w, z)
into ( p, r ) given by
$$\begin{pmatrix} p(x,t)\\ r(x,t)\end{pmatrix} = \begin{pmatrix} w(x,t)\\ z(x,t)\end{pmatrix} + \int_0^x \begin{pmatrix} 0 & M^\alpha(x,\xi)\\ 0 & M^\beta(x,\xi)\end{pmatrix}\begin{pmatrix} w(\xi,t)\\ z(\xi,t)\end{pmatrix} d\xi \tag{10.52}$$
where M α and M β satisfy the PDEs (8.65). Consider also the target system
$$w_t(x,t) + \lambda(x)w_x(x,t) = \int_0^x g_1(x,\xi)w(\xi,t)\,d\xi \tag{10.53a}$$
$$z_t(x,t) - \mu(x)z_x(x,t) = c_2(x)w(x,t) + \int_0^x g_2(x,\xi)w(\xi,t)\,d\xi \tag{10.53b}$$
$$w(0,t) = \beta(0,t) + \hat\varepsilon(0,t) \tag{10.53c}$$
$$z(1,t) = 0 \tag{10.53d}$$
$$w(x,0) = w_0(x) \tag{10.53e}$$
$$z(x,0) = z_0(x) \tag{10.53f}$$
for some functions g1 , g2 defined over T , and initial conditions w0 , z 0 ∈ B([0, 1]).
Lemma 10.2 The backstepping transformation (10.52) maps filter (10.8) into target
system (10.53) with g1 and g2 given by (8.71a) and (8.75).
Proof The dynamics of the filter (10.8) has the same form as the error dynamics
(8.67) of Theorem 8.2, where it was shown that a backstepping transformation in
the form (10.52) with injection gains given as (10.10) maps the system into a target
system in the form (10.53). The boundary condition (10.53c) follows from the fact
that
and
Choosing
$$\delta > \max\left\{1,\ \frac{h_3}{\underline\lambda},\ \frac{h_6}{\underline\lambda}\right\} \tag{10.61}$$
$$a_3 > \frac{h_7 e^{\delta}}{\delta\underline\lambda - h_6}, \qquad a_2 > \max\left\{h_1 + 2a_3,\ \frac{4h_4}{\underline\mu}\right\} \tag{10.62}$$
we obtain
$$\dot V_5(t) \le -cV_5(t) + l_6(t)V_5(t) + b\hat\varepsilon^2(0,t) - z^2(0,t) \tag{10.63}$$
where
$$b = h_2 + a_2h_5 + 2a_3. \tag{10.64}$$
Writing
$$\hat\varepsilon^2(0,t) = (1 + r^2(0,t))\,\frac{\hat\varepsilon^2(0,t)}{1+r^2(0,t)} \tag{10.65}$$
where
$$l_7(t) = b\sigma^2(t), \qquad \sigma^2(t) = \frac{\hat\varepsilon^2(0,t)}{1+r^2(0,t)} \tag{10.67}$$
$$\dot V(t) \le -\frac12\sigma^2(t) \tag{10.68}$$
and
Since ||z|| ∈ L∞ , z 2 (x, t) must for all fixed t be bounded for x almost everywhere
in [0, 1]. This in turn implies that z 2 (0, t) must be bounded for almost all t ≥ 0,
implying that
$$\sigma^2 z^2(0,\cdot) \in \mathcal L_1 \tag{10.71}$$
From the invertibility of the transformations of Lemmas 10.1 and 10.2, it follows that
while (10.18) and (10.19) with bounded ||e|| and ||ε|| give
$$\hat u_t(x,t) + \lambda(x)\hat u_x(x,t) = c_1(x)\hat v(x,t) + \Gamma_1(x,t)(y_1(t) - \hat u(1,t)) \tag{10.76a}$$
$$\hat v_t(x,t) - \mu(x)\hat v_x(x,t) = c_2(x)\hat u(x,t) + \Gamma_2(x,t)(y_1(t) - \hat u(1,t)) \tag{10.76b}$$
$$\hat u(0,t) = \hat q(t)\hat v(0,t) \tag{10.76c}$$
$$\hat v(1,t) = U(t), \tag{10.76d}$$
$$\hat u(x,0) = \hat u_0(x) \tag{10.76e}$$
$$\hat v(x,0) = \hat v_0(x) \tag{10.76f}$$
where q̂ is an estimate of q generated from some adaptive law, Γ1 (x, t) and Γ2 (x, t)
are output injection gains to be designed, and the initial conditions û 0 , v̂0 satisfy
$$\tilde v(1,t) = 0, \tag{10.78d}$$
$$\tilde u(x,0) = \tilde u_0(x) \tag{10.78e}$$
$$\tilde v(x,0) = \tilde v_0(x) \tag{10.78f}$$
$$P_0^\alpha, P_0^\beta \in B(S), \tag{10.80}$$
where S is defined in (1.1c). From Theorem D.3 in Appendix D.3, Eq. (10.79) has
a bounded solution $(P^\alpha, P^\beta)$ with bounds depending on q̄ and the initial conditions
$P_0^\alpha, P_0^\beta$. In other words, there exist constants $\bar P^\alpha, \bar P^\beta$ so that
Lemma 10.4 The backstepping transformation (10.82) maps target system (10.83)
into system (10.78), where b1 and b2 are given by
$$b_1(x,\xi,t) = P^\alpha(x,\xi,t)c_1(\xi) - \int_x^\xi P^\alpha(x,s,t)b_1(s,\xi,t)\,ds \tag{10.84a}$$
$$b_2(x,\xi,t) = P^\beta(x,\xi,t)c_1(\xi) - \int_x^\xi P^\beta(x,s,t)b_1(s,\xi,t)\,ds \tag{10.84b}$$
Proof From time differentiating (10.82a), inserting the dynamics (10.83a), integrat-
ing by parts and changing the order of integration in the double integral, we find
$$\begin{aligned}
\tilde u_t(x,t) ={}& \alpha_t(x,t) + \int_x^1 P^\alpha_t(x,\xi,t)\alpha(\xi,t)\,d\xi - P^\alpha(x,1,t)\lambda(1)\alpha(1,t)\\
&+ P^\alpha(x,x,t)\lambda(x)\alpha(x,t) + \int_x^1 P^\alpha_\xi(x,\xi,t)\lambda(\xi)\alpha(\xi,t)\,d\xi\\
&+ \int_x^1 P^\alpha(x,\xi,t)\lambda'(\xi)\alpha(\xi,t)\,d\xi + \int_x^1 P^\alpha(x,\xi,t)c_1(\xi)\beta(\xi,t)\,d\xi\\
&- \int_x^1\int_x^\xi P^\alpha(x,s,t)b_1(s,\xi,t)\,ds\,\beta(\xi,t)\,d\xi.
\end{aligned} \tag{10.86}$$
Moreover, from differentiating the first element of (10.82a) with respect to space,
we obtain
$$\tilde u_x(x,t) = \alpha_x(x,t) - P^\alpha(x,x,t)\alpha(x,t) + \int_x^1 P^\alpha_x(x,\xi,t)\alpha(\xi,t)\,d\xi. \tag{10.87}$$
Using (10.79d) yields (10.83c). The last boundary condition (10.83d) follows from
inserting (10.82) into (10.78d).
$$\bar v(t) = \hat v(0,t-t_1) + \int_0^1 P^\beta(0,\xi,t-t_1)\left(y_1(t-h^\alpha(\xi)) - \hat u(1,t-h^\alpha(\xi))\right)d\xi, \tag{10.90a}$$
$$\vartheta(t) = y_1(t) - \hat u(1,t) + \hat q(t-t_1)\bar v(t) \tag{10.90b}$$
with t1 defined in (8.8). We note that t1 = h^α(1), and that the signals (10.90) can be
computed using available measurements and estimates only.
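Implementing (10.90) only requires buffering the boundary signals over one transport time; each delayed term can be realized with a fixed-lag buffer. A sketch (the class name and the sampling-based realization are illustrative):

```python
from collections import deque

class DelayLine:
    """Fixed-lag buffer: push the current sample, get back the sample from
    delay_steps samples ago. Terms such as v_hat(0, t - t1) or
    y1(t - h_alpha(xi)) are obtained with delay_steps ~ round(delay / dt)."""
    def __init__(self, delay_steps, initial=0.0):
        self.buf = deque([initial] * (delay_steps + 1), maxlen=delay_steps + 1)

    def push(self, value):
        self.buf.append(value)   # newest sample in, oldest falls out
        return self.buf[0]       # sample from delay_steps steps ago
```

The integral in (10.90a) then becomes a quadrature over one such buffer per grid point ξ, each with its own lag h^α(ξ).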
Lemma 10.5 Consider system (10.1) with measurement (10.5), observer (10.76)
and the signals (10.90). The relationship
10.3 Collocated Sensing and Control 193
Proof From (10.83b) and (10.83d), one has that β ≡ 0 for t ≥ t2 where t2 is given
in (8.8). Thus, for t ≥ t2 , system (10.83) reduces to
or
where h^α and t1 are given in (10.91). Moreover, from (10.82b) and β ≡ 0, one will
for t ≥ t2 have
$$v(0,t) = \hat v(0,t) + \int_0^1 P^\beta(0,\xi,t)\alpha(\xi,t)\,d\xi. \tag{10.97}$$
Given the linear parametric model of Lemma 10.5, a large number of adaptive laws
can be applied to generate estimates of q. The resulting estimate can then be combined
with the observer (10.76) to generate estimates of the system states. To best facilitate
the adaptive control law design, the gradient algorithm with normalization is
used here, which is given as
$$\dot{\hat q}(t) = \begin{cases} 0 & \text{for } 0 \le t < t_F\\[4pt] \gamma\,\dfrac{(\vartheta(t) - \hat q(t)\bar v(t))\,\bar v(t)}{1 + \bar v^2(t)} & \text{for } t \ge t_F \end{cases} \tag{10.101a}$$
$$\hat q(0) = \hat q_0, \qquad |\hat q_0| \le \bar q \tag{10.101b}$$
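In discrete time, (10.101) is a normalized gradient step that stays frozen until t F and is clipped by projection. A sketch of one Euler step (the names and the interval projection are illustrative):

```python
def q_hat_update(q_hat, theta_meas, v_bar, t, t_F, gamma, q_bar, dt):
    """One Euler step of the normalized gradient law (10.101).
    theta_meas and v_bar are the signals computed as in (10.90)."""
    if t < t_F:
        return q_hat   # adaptation frozen until the transport delay has passed
    tau = gamma * (theta_meas - q_hat * v_bar) * v_bar / (1.0 + v_bar ** 2)
    # projection: zero the update if it would push q_hat outside [-q_bar, q_bar]
    if (q_hat >= q_bar and tau > 0.0) or (q_hat <= -q_bar and tau < 0.0):
        tau = 0.0
    return q_hat + dt * tau
```

Driven with ϑ = q v̄, the estimate converges whenever v̄ is sufficiently exciting in the sense of (10.17); the normalization by 1 + v̄² keeps the update bounded regardless of signal size.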
Theorem 10.3 Consider system (10.1) with measurement (10.5), observer (10.76)
and the adaptive law (10.101), where ϑ and v̄ are generated using Lemma 10.5 and
t F is defined in (8.8). The adaptive law (10.101) has the following properties:
exponentially fast.
10.3 Collocated Sensing and Control 195
Proof By inserting ϑ(t) and using q̃(t) = q − q̂(t) and (10.92) one finds for t ≥ t F
$$\dot{\tilde q}(t) = \begin{cases} 0 & \text{for } 0 \le t < t_F\\[4pt] -\gamma\,\tilde q(t)\,\dfrac{v^2(0,t-t_1)}{1 + v^2(0,t-t_1)} & \text{for } t \ge t_F, \end{cases} \tag{10.107}$$
showing (10.102) by selecting t0 = t F . Equation (10.108) also shows that the decay
rate is at most exponential with rate γ. Next, form the Lyapunov function
$$V(t) = \frac{1}{2\gamma}\tilde q^2(t). \tag{10.109}$$
and hence
$$\dot V(t) \le \begin{cases} 0 & \text{for } 0 \le t < t_F\\[4pt] -e^{-2\gamma t_1}\,\dfrac{\tilde q^2(t-t_1)v^2(0,t-t_1)}{1 + v^2(0,t-t_1)} & \text{for } t \ge t_F, \end{cases} \tag{10.112}$$
which shows that V is non-increasing and bounded from above. Integrating (10.112)
from zero to infinity gives that the signal
$$s(t) = \frac{\tilde q^2(t-t_1)v^2(0,t-t_1)}{1 + v^2(0,t-t_1)} \tag{10.113}$$
is in L1 . This in turn means that the signal s(t + t1 ) also lies in L1 , and hence
$$\frac{\tilde q v(0,\cdot)}{\sqrt{1 + v^2(0,\cdot)}} \in \mathcal L_2 \tag{10.114}$$
$$s(t) = \frac{\tilde q^2(t-t_1)v^2(0,t-t_1)}{1 + v^2(0,t-t_1)} \le \tilde q^2(t-t_1) \le \tilde q^2(0) \tag{10.115}$$
$$\frac{\tilde q v(0,\cdot)}{\sqrt{1 + v^2(0,\cdot)}} \in \mathcal L_\infty. \tag{10.116}$$
where ( K̂ u , K̂ v ) is the on-line solution to the PDE (10.38) with q̂ generated using
the method of Theorem 10.3. Note that the bounds (10.39) and (10.40) still apply,
since q̂ is bounded and q̂˙ ∈ L2 ∩ L∞ (Theorem 10.3).
Theorem 10.4 Consider system (10.1) with measurement (10.5), observer (10.76)
and the adaptive law of Theorem 10.3. The control law (10.118) ensures
||u||, ||v||, ||û||, ||v̂|| ∈ L2 ∩ L∞ (10.119a)
||u||∞ , ||v||∞ , ||û||∞ , ||v̂||∞ ∈ L2 ∩ L∞ , (10.119b)
||u||, ||v||, ||û||, ||v̂|| → 0 (10.119c)
||u||∞ , ||v||∞ , ||û||∞ , ||v̂||∞ → 0. (10.119d)
10.3.6 Backstepping
where ( K̂ u , K̂ v ) is the on-line solution to the PDE (10.38). Its inverse is in the form
$$\hat u(x,t) = w(x,t) \tag{10.121a}$$
$$\hat v(x,t) = T^{-1}[w,z](x,t) \tag{10.121b}$$
for some initial conditions w0 , z 0 ∈ B([0, 1]), and where ω and κ are defined over
T1 given in (1.1b) and Ω is defined for x ∈ [0, 1], t ≥ 0.
Lemma 10.6 The backstepping transformation (10.120) maps observer (10.76) into
target system (10.122), where ω, κ are defined by (10.46) and satisfy the bounds
(10.47) for some constants κ̄, ω̄, while
and satisfies
Proof From differentiating (10.120b) with respect to time and space, inserting the
dynamics (10.76a)–(10.76b), and integrating by parts, we find
$$\begin{aligned}
\hat v_t(x,t) ={}& z_t(x,t) - \hat K^u(x,x,t)\lambda(x)\hat u(x,t) + \hat K^u(x,0,t)\lambda(0)\hat q(t)\hat v(0,t)\\
&+ \int_0^x \hat K^u_\xi(x,\xi,t)\lambda(\xi)\hat u(\xi,t)\,d\xi + \int_0^x \hat K^u(x,\xi,t)\lambda'(\xi)\hat u(\xi,t)\,d\xi\\
&+ \int_0^x \hat K^u(x,\xi,t)c_1(\xi)\hat v(\xi,t)\,d\xi + \int_0^x \hat K^u(x,\xi,t)\Gamma_1(\xi,t)\alpha(1,t)\,d\xi\\
&+ \hat K^v(x,x,t)\mu(x)\hat v(x,t) - \hat K^v(x,0,t)\mu(0)\hat v(0,t)\\
&- \int_0^x \hat K^v_\xi(x,\xi,t)\mu(\xi)\hat v(\xi,t)\,d\xi - \int_0^x \hat K^v(x,\xi,t)\mu'(\xi)\hat v(\xi,t)\,d\xi\\
&+ \int_0^x \hat K^v(x,\xi,t)c_2(\xi)\hat u(\xi,t)\,d\xi + \int_0^x \hat K^v(x,\xi,t)\Gamma_2(\xi,t)\alpha(1,t)\,d\xi\\
&+ \int_0^x \hat K^u_t(x,\xi,t)\hat u(\xi,t)\,d\xi + \int_0^x \hat K^v_t(x,\xi,t)\hat v(\xi,t)\,d\xi
\end{aligned} \tag{10.125}$$
and
$$\hat v_x(x,t) = z_x(x,t) + \hat K^u(x,x,t)\hat u(x,t) + \hat K^v(x,x,t)\hat v(x,t) + \int_0^x \hat K^u_x(x,\xi,t)\hat u(\xi,t)\,d\xi + \int_0^x \hat K^v_x(x,\xi,t)\hat v(\xi,t)\,d\xi. \tag{10.126}$$
Inserting (10.125)–(10.126) into the dynamics (10.76b) and using Eq. (10.38) we
obtain
$$\begin{aligned}
z_t(x,t) - \mu(x)z_x(x,t) ={}& \Gamma_2(x,t)\alpha(1,t) - \int_0^x \hat K^u(x,\xi,t)\Gamma_1(\xi,t)\,d\xi\,\alpha(1,t)\\
&- \int_0^x \hat K^v(x,\xi,t)\Gamma_2(\xi,t)\,d\xi\,\alpha(1,t) - \int_0^x \hat K^u_t(x,\xi,t)\hat u(\xi,t)\,d\xi\\
&- \int_0^x \hat K^v_t(x,\xi,t)\hat v(\xi,t)\,d\xi
\end{aligned}$$
which can be written as (10.122b). Inserting (10.120) into (10.122a), changing the
order of integration in the double integral, we find
Using Eqs. (10.46) yields the dynamics (10.76a), since α(1, t) = ũ(1, t). Substituting
the backstepping transformation (10.120) into the boundary condition (10.76c)
immediately yields (10.122c). Lastly, inserting x = 1 into (10.120b) gives
$$z(1,t) = U(t) - \int_0^1 \hat K^u(1,\xi,t)\hat u(\xi,t)\,d\xi - \int_0^1 \hat K^v(1,\xi,t)\hat v(\xi,t)\,d\xi. \tag{10.129}$$
The control law (10.118) then yields the last boundary condition (10.122d).
Since β in the observer dynamics is zero in finite time, it suffices to consider the state
α satisfying the dynamics (10.94), which we restate here
with P β being time-varying, bounded and satisfying (10.79). Consider the functions
$$V_1(t) = \int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha^2(x,t)\,dx \tag{10.132a}$$
$$V_2(t) = \int_0^1 e^{-\delta x}\lambda^{-1}(x)w^2(x,t)\,dx \tag{10.132b}$$
$$V_3(t) = \int_0^1 e^{kx}\mu^{-1}(x)z^2(x,t)\,dx \tag{10.132c}$$
Now forming
Choosing
$$a_3 = \bar q^2 + 1, \qquad a_1 > e^{\delta}(1 + a_3e^{k}), \qquad \delta > \frac{h_1}{\underline\lambda}, \qquad k > \frac{a_3h_2 + \bar\mu}{a_3\underline\mu} \tag{10.136}$$
we obtain
$$\dot V_4(t) \le -z^2(0,t) - cV_4(t) + l_3(t)V_4(t) + a_1\tilde q^2(t)v^2(0,t) \tag{10.137}$$
for some positive constant c and integrable function l3 . Inequality (10.137) can be
written as
$$\dot V_4(t) \le -z^2(0,t) - cV_4(t) + l_3(t)V_4(t) + a_1\sigma^2(t)(1 + v^2(0,t)) \tag{10.138}$$
where
$$\sigma^2(t) = \frac{\tilde q^2(t)v^2(0,t)}{1 + v^2(0,t)} \tag{10.139}$$
where P̄ β bounds the kernel P β , and inserting this into (10.138), we obtain
$$\dot V_4(t) \le -cV_4(t) + l_4(t)V_4(t) + l_5(t) - \left(1 - 2a_1\sigma^2(t)\right)z^2(0,t) \tag{10.141}$$
where
are integrable functions. Moreover, from (10.109) and (10.112), we have that
$$V(t) = \frac{1}{2\gamma}\tilde q^2(t) \tag{10.143}$$
satisfies
Define
$$V_5(t) = V(t+t_1) = \frac{1}{2\gamma}\tilde q^2(t+t_1) \tag{10.145}$$
Since |q̃| decays at a rate that is at most exponential with rate γ, we have
$$\sigma^2(t) = \tilde q^2(t)\,\frac{v^2(0,t)}{1 + v^2(0,t)} < \tilde q^2(t) \le e^{2\gamma t_1}\tilde q^2(t+t_1) = 2e^{2\gamma t_1}\gamma V_5(t) \tag{10.147}$$
$$V_4 \in \mathcal L_1 \cap \mathcal L_\infty, \tag{10.148}$$
and hence
This, in turn, implies that z(0, t) is bounded for almost all t ≥ 0, meaning that
$$\sigma^2 z^2(0,\cdot) \in \mathcal L_1 \tag{10.150}$$
$$V_4 \to 0 \tag{10.153}$$
and hence
The remaining properties can be proved using the same techniques as in the proof of
Theorem 10.2.
10.4 Simulations
System (10.1) and the observer of Theorem 10.1 are implemented using the param-
eters
$$u_0 \equiv 0, \qquad v_0 \equiv 1. \tag{10.156}$$
All additional initial conditions are set to zero. The parameters constitute a stable
system. To excite the system, the actuation signal is chosen as
The observer kernel equations are solved using the method described in Appendix F.2.
It is observed from Fig. 10.1 that the system norms stay bounded, while the actu-
ation signal excites the system. From Fig. 10.2, the estimated q̂ converges to its real
value after approximately 2 s, with the adaptive estimation errors converging to zero
as well in the same amount of time.
Fig. 10.1 Left: State norm. Right: Actuation signal
Fig. 10.2 Left: Adaptive estimation error norm. Right: Actual value of q (solid black) and estimated
value q̂ (dashed red)
System (10.1) and the controller of Theorem 10.2 are implemented using the same
system parameters and initial conditions as above, except that we set
$$q = 2, \tag{10.159}$$
$$\gamma = 1, \tag{10.160}$$
and the controller kernel equations are solved using the method described in
Appendix F.2. The simulation results are shown in Figs. 10.3–10.4. It is observed
that the adaptive controller successfully stabilizes the system, and makes the system
Fig. 10.3 Left: State norm. Right: Actuation signal
Fig. 10.4 Left: Adaptive estimation error norm. Right: Actual value of q (solid black) and estimated
value q̂ (dashed red)
norm and actuation signal converge to zero, even though the value of q is not
estimated correctly. Parameter convergence is not guaranteed by the control law, and
does not happen in this case.
System (10.1) and the observer of Theorem 10.3 are implemented using the system
parameters
All additional initial conditions are set to zero. The actuation signal is chosen as
$$U(t) = 1 + \sin t + \frac12\sin(\sqrt2\,t) + \sin(\pi t) \tag{10.163}$$
in order to excite the system, while the design gains are set to
$$\gamma = 1, \qquad \bar q = 10. \tag{10.164}$$
The observer kernel equations are implemented using the method described in
Appendix F.2.
Fig. 10.5 Left: State norm. Right: Actuation signal
Fig. 10.6 Left: Adaptive estimation error norm. Right: Actual value of q (solid black) and estimated
value q̂ (dashed red)
Fig. 10.7 Left: State norm. Right: Actuation signal
Again the system norm and actuation signal are bounded, as seen from Fig. 10.5,
while the estimate q̂ converges to its real value q, and the observer error norm
converges to zero, as seen from Fig. 10.6.
The controller of Theorem 10.4 is now implemented on system (10.1), using the
same system parameters and initial conditions as in the previous simulation, except
that
$$q = 2. \tag{10.165}$$
This now constitutes an unstable system. The controller kernel equations are solved
on-line using the spatial discretization method described in Appendix F.2.
206 10 Adaptive Output-Feedback: Uncertain Boundary Condition
Fig. 10.8 Left: Adaptive estimation error norm. Right: Actual value of q (solid black) and estimated
value q̂ (dashed red)
From Figs. 10.7 and 10.8, it is seen that the estimated parameter q̂ stagnates, but
does not converge to its true value q. However, the system state norm, state estimation
error norm and actuation converge to zero after approximately five seconds.
10.5 Notes
The second observer design, using sensing collocated with actuation, employs time-
varying injection gains as part of its observer design, which are given as the solution
to a set of time-varying kernel PDEs. This significantly complicates both design
and implementation. The method described in Appendix F.2 for implementing the kernel equations on-line was developed specifically for the observer kernels (10.79) in Anfinsen and Aamo (2017b). That paper is also the first to use time-varying kernels and time-varying injection gains in the design of adaptive observers for linear hyperbolic PDEs, clearly illustrating the complexity that arises from just a single uncertain boundary parameter when sensing is taken at the boundary anti-collocated with that parameter.
We will proceed in the next chapter to assume that the in-domain coefficients are
uncertain, and seek to adaptively stabilize the system from boundary sensing only.
References
Anfinsen H, Aamo OM (2016) Boundary parameter and state estimation in 2 × 2 linear hyperbolic
PDEs using adaptive backstepping. In: 55th IEEE conference on decision and control, Las Vegas,
NV, USA
Anfinsen H, Aamo OM (2017a) Adaptive stabilization of n + 1 coupled linear hyperbolic systems
with uncertain boundary parameters using boundary sensing. Syst Control Lett 99:72–84
Anfinsen H, Aamo OM (2017b) Adaptive stabilization of 2 × 2 linear hyperbolic systems with an
unknown boundary parameter from collocated sensing and control. IEEE Trans Autom Control
62(12):6237–6249
Ioannou P, Sun J (1995) Robust adaptive control. Prentice-Hall Inc., Upper Saddle River
Chapter 11
Adaptive Output-Feedback: Uncertain
In-Domain Parameters
11.1 Introduction
We once again consider system (7.4) with measurement restricted to the boundary
anti-collocated with actuation, that is
for
with
In Sect. 9.3, we assumed that the boundary parameter q and the in-domain parameters
c1 and c2 were uncertain and derived a state-feedback control law U that adaptively
stabilized the system, assuming distributed state measurements were available. In
Chap. 10, we restricted the sensing to be taken at the boundaries, and derived both state
observers and control laws for both the collocated and anti-collocated case, assuming
the boundary parameter q was uncertain, but assuming all in-domain parameters were
known. We will in Sect. 11.2 relax this assumption and assume that the in-domain
cross terms as well as the boundary parameter q are uncertain, and design an adaptive
control law that stabilizes the system from a single boundary sensing anti-collocated
with actuation. This solution was initially presented in Anfinsen and Aamo (2017),
and is based on the filter-based method proposed for hyperbolic 1–D systems in
Bernard and Krstić (2014). Note that the actual parameter values and system states
are not estimated directly, so that the proposed design is not suited for pure estimation
purposes.
In Chap. 12, we use the same filter-based technique to solve a more general prob-
lem, but we state here the solution to the problem of adaptively stabilizing the simpler
system (11.1) for illustrative purposes.
First, we decouple the system states u and v, mapping system (11.1) into the following system, which has a cascade structure in the domain
for some functions L 1 , L 2 , L 3 , and with initial conditions α̌0 , β̌0 ∈ B([0, 1]).
11.2 Anti-collocated Sensing and Control 209
for some functions σ1 , σ2 , σ3 , and parameters λ̄, μ̄ ∈ R, λ̄, μ̄ > 0, with initial con-
ditions α0 , β0 ∈ B([0, 1]).
Lemma 11.2 The invertible mapping

α(x, t) = α̌(h_α⁻¹(x), t),    β(x, t) = β̌(h_β⁻¹(x), t)    (11.6)

where

h_α(x) = (1/t₁) ∫₀ˣ dγ/λ(γ),    h_β(x) = (1/t₂) ∫₀ˣ dγ/μ(γ)    (11.7)

σ₁(x) = L₁(h_α⁻¹(x))    (11.9a)
σ₂(x) = −t₁ λ(h_α⁻¹(x)) L₂(h_α⁻¹(x))    (11.9b)
σ₃(x) = −t₂ μ(h_β⁻¹(x)) L₃(h_β⁻¹(x))    (11.9c)
Proof We note that the maps in (11.7) are strictly increasing and thus invertible. The invertibility of the transformation (11.6) therefore follows. The rest of the proof follows immediately from insertion and noting that

h_α′(x) = 1/(t₁λ(x)),    h_β′(x) = 1/(t₂μ(x))    (11.10a)
h_α(0) = h_β(0) = 0,    h_α(1) = h_β(1) = 1.    (11.10b)
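The maps (11.7) and their inverses are easy to evaluate numerically: integrate 1/λ cumulatively, normalize by the total transit time t₁, and invert by interpolation, which is valid since h_α is strictly increasing. The transport-speed profile in the sketch below is an assumed example, not taken from the simulation sections.

```python
import numpy as np

# Numerical sketch of the rescaling maps (11.7): h_alpha(x) is the
# normalized transit time (1/t1) * int_0^x dgamma / lambda(gamma), with t1
# the total transit time, so that h_alpha(0) = 0 and h_alpha(1) = 1. The
# speed profile lam below is an assumed example.

def normalized_travel_time(speed, n=1001):
    x = np.linspace(0.0, 1.0, n)
    integrand = 1.0 / speed(x)
    # cumulative trapezoidal integral of 1/speed from 0 to x
    h = np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))))
    t1 = h[-1]                              # total transit time t1
    h = h / t1                              # normalize so that h(1) = 1
    inverse = lambda y: np.interp(y, h, x)  # valid: h strictly increasing
    return x, h, t1, inverse

lam = lambda x: 1.0 + x                     # assumed transport-speed profile
x, h, t1, h_inv = normalized_travel_time(lam)
```

For λ(x) = 1 + x this gives t₁ = ln 2 and h_α(x) = ln(1 + x)/ln 2, so the numerical inverse can be validated against the closed form h_α⁻¹(y) = 2^y − 1.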
Proof Differentiating equation (11.12) with respect to time, inserting the dynamics
(11.5b) and integrating by parts, we obtain
Similarly, differentiating the latter equation in (11.12) with respect to space, we find
β_x(x, t) = z_x(x, t) + σ₃(1)β(x, t) − ∫₀ˣ σ₃′(1 − x + ξ)β(ξ, t)dξ.    (11.15)
Inserting (11.14) and (11.15) into (11.5b) and using β(0, t) = z(0, t), we obtain the
dynamics (11.11b). Inserting x = 1 and (11.5d) into (11.12) gives (11.11d).
where we have introduced a filter w, which is a pure transport equation of the signal
z(0, t). Here, ε is a signal defined for t ≥ 0, w0 ∈ B([0, 1]) and
κ(x) = qσ₂(x) + λ̄⁻¹ ∫ₓ¹ σ₂(ξ)σ₁(ξ − x)dξ.    (11.17)
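Since w is a pure transport of the measured signal, it can be realized on a grid with a first-order upwind scheme; at CFL number 1 the scheme is exact, shifting the stored profile one cell per time step. The transport direction, speed and inlet samples in this sketch are illustrative assumptions.

```python
import numpy as np

# Sketch of a pure transport filter of the measured signal: for
#     w_t(x, t) = -s * w_x(x, t),  w(0, t) = z0(t),
# the solution is w(x, t) = z0(t - x/s), a delayed copy of the input. A
# first-order upwind scheme at CFL number 1 (dt = dx/s) is exact: it shifts
# the profile one cell per step and feeds in the new boundary sample.
# Direction and speed here are illustrative assumptions.

def step_transport_filter(w, inlet_value):
    """Advance the filter one step with dt = dx/s (CFL = 1)."""
    w[1:] = w[:-1]          # shift profile one cell downstream
    w[0] = inlet_value      # feed the new boundary sample z0(t_k)
    return w

n = 11
w = np.zeros(n)
for k in range(n):          # illustrative inlet samples z0(t_k) = k
    w = step_transport_filter(w, float(k))
# w now holds the last n inlet samples, oldest at the far end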
Lemma 11.4 Consider systems (11.11) and (11.16). The signal ε(t), which is char-
acterized in the proof, is zero for t ≥ t1 . Moreover, stabilization of (11.16) implies
stabilization of (11.11). More precisely,
where e ≡ 0 for t ≥ t1 , which provides (11.18). Inserting this into (11.11d), we obtain
z(1, t) = U(t) + ∫₀¹ σ₂(ξ)q w(ξ, t)dξ + ∫₀¹ σ₂(ξ)λ̄⁻¹ ∫₀^ξ σ₁(ξ − s)w(s, t)ds dξ + ∫₀¹ σ₂(ξ)e(ξ, t)dξ.    (11.23)
We have thus shown that stabilizing system (11.1) is achieved by stabilizing system
(11.16). In deriving an adaptive control law for (11.16), we will use the filter-based
design presented in Bernard and Krstić (2014). However, as we will see, the additional
11.2 Anti-collocated Sensing and Control 213
term κ somewhat complicates the control design. We introduce the following filters
Note that w(ξ, t) used in the boundary condition for filter P is itself a filter and hence
known. Define also
Lemma 11.5 Consider system (11.16) and the non-adaptive estimate (11.29) gen-
erated using filters (11.26). Then,
z̄ ≡ z (11.30)
for t ≥ t1 + t2 .
and
z̄_x(x, t) = ψ_x(x, t) − θ(x)φ(1, t) + ∫ₓ¹ θ(ξ)φ_x(1 − (ξ − x), t)dξ + ∫₀¹ κ(ξ)P_x(x, ξ, t)dξ,    (11.34)
which, when using the definition of ϵ in (11.31), the dynamics (11.16b) and the boundary condition (11.26a), gives the dynamics (11.32). Substituting x = 1 into (11.31), using the definition of z̄ in (11.29), and inserting the boundary condition (11.16d) give

ϵ(1, t) = z(1, t) − z̄(1, t) = U(t) + ∫₀¹ κ(ξ)w(ξ, t)dξ + ε(t) − ψ(1, t) − ∫₀¹ κ(ξ)P(1, ξ, t)dξ.    (11.35)

Using the boundary conditions (11.26a) and (11.26c), we obtain the boundary condition (11.32).
This assumption should not be a limitation, since the bounds θ̄ and κ̄ can be made
arbitrarily large. Now, motivated by the parametrization (11.29), we generate an
estimate of z from
ẑ(x, t) = ψ(x, t) + ∫ₓ¹ θ̂(ξ, t)φ(1 − (ξ − x), t)dξ + ∫₀¹ κ̂(ξ, t)P(x, ξ, t)dξ    (11.37)
where θ̂ and κ̂ are estimates of θ and κ, respectively. The dynamics of (11.37) can
straightforwardly be found to satisfy
ẑ_t(x, t) − μ̄ẑ_x(x, t) = μ̄θ̂(x, t)z(0, t) + ∫ₓ¹ θ̂_t(ξ, t)φ(1 − (ξ − x), t)dξ + ∫₀¹ κ̂_t(ξ, t)P(x, ξ, t)dξ    (11.38a)
ẑ(1, t) = U(t) + ∫₀¹ κ̂(ξ, t)w(ξ, t)dξ    (11.38b)
ẑ(x, 0) = ẑ₀(x)    (11.38c)

for some initial condition ẑ₀ ∈ B([0, 1]). The corresponding prediction error is
defined as
From the parametric model (11.29) and corresponding error (11.31), we also have
y₀(t) = ψ(0, t) + ∫₀¹ θ(ξ)φ(1 − ξ, t)dξ + ∫₀¹ κ(ξ)p₀(ξ, t)dξ + ϵ(0, t),    (11.40)
where
for p0 defined in (11.28), and where γ1 (x), γ2 (x) > 0 for all x ∈ [0, 1] are some
bounded design gains. The initial guesses are chosen inside the feasible domain
Lemma 11.6 The adaptive law (11.41) with initial condition satisfying (11.43) has
the following properties
σ(t) = ϵ̂(0, t)/(1 + f²(t)).    (11.45)
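The adaptive laws (11.41) follow the standard normalized-gradient-with-projection pattern: the boundary prediction error ϵ̂(0, t), normalized by 1 + f²(t), drives the parameter estimates, which are then projected back onto their known bounds. A minimal sketch of one such update follows; the regressor, gains and bounds are illustrative assumptions, and simple clipping stands in for the smooth projection operator of Appendix A.

```python
import numpy as np

# Generic sketch of a normalized-gradient adaptive law with projection, the
# pattern used in (11.41): the normalized boundary prediction error drives
# the estimate, which is then projected back into its known bounds.
# Regressor, gains and bounds are illustrative assumptions, and np.clip is
# a crude stand-in for the smooth projection operator of Appendix A.

def adaptive_update(theta_hat, gamma, eps_hat_0, regressor, f_sq,
                    lower, upper, dt):
    grad = gamma * eps_hat_0 * regressor / (1.0 + f_sq)
    return np.clip(theta_hat + dt * grad, lower, upper)

# one illustrative update step
theta_hat = adaptive_update(theta_hat=np.zeros(5), gamma=1.0, eps_hat_0=2.0,
                            regressor=np.ones(5), f_sq=3.0,
                            lower=-1.0, upper=0.4, dt=1.0)
```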
Proof The property (11.44a) follows from the projection operator. Consider the Lyapunov function candidate

V(t) = a₁λ̄⁻¹ ∫₀¹ (2 − x)e²(x, t)dx + μ̄⁻¹ ∫₀¹ ϵ²(x, t)dx + (1/2) ∫₀¹ γ₁⁻¹(x)θ̃²(x, t)dx + (1/2) ∫₀¹ γ₂⁻¹(x)κ̃²(x, t)dx    (11.46)
Using property (A.5b) of Lemma A.1, inserting the dynamics (11.21) and (11.32)
and integrating by parts give
Inserting the boundary conditions (11.21) and (11.32) and using the Cauchy–Schwarz inequality, we find
We note that
ϵ̂(0, t) = ϵ(0, t) + ∫₀¹ θ̃(ξ, t)φ(1 − ξ, t)dξ + ∫₀¹ κ̃(ξ, t)p₀(ξ, t)dξ,    (11.51)
The latter term is zero for t ≥ t F , and hence σ ∈ L∞ follows. From the adaptation
laws (11.41), we have
where ẑ is generated using (11.37), and ĝ is the on-line solution to the Volterra integral
equation
ĝ(x, t) = ∫₀ˣ ĝ(x − ξ, t)θ̂(ξ, t)dξ − θ̂(x, t),    (11.56)
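Equation (11.56) is a Volterra equation of the second kind in the spatial variable, so at each time instant it can be solved by marching in x: the convolution only involves values of ĝ at points no greater than x. A minimal sketch with trapezoidal quadrature follows; the grid resolution is an assumption.

```python
import numpy as np

# Marching solution of the Volterra equation (11.56),
#     g(x) = int_0^x g(x - xi) theta(xi) dxi - theta(x),
# at a frozen time instant: on a uniform grid the convolution only uses
# values of g at points below x, so g is computed node by node; the implicit
# contribution of g(x_i) from the trapezoid endpoint xi = 0 is solved for
# algebraically.

def solve_volterra_g(theta, dx):
    n = theta.size
    g = np.empty(n)
    g[0] = -theta[0]                                  # empty integral at x = 0
    for i in range(1, n):
        # interior trapezoid nodes: sum_j g(x_i - x_j) * theta(x_j)
        interior = np.dot(g[i - 1:0:-1], theta[1:i])
        rhs = dx * (interior + 0.5 * g[0] * theta[i]) - theta[i]
        g[i] = rhs / (1.0 - 0.5 * dx * theta[0])
    return g
```

For a constant profile θ̂ ≡ θ₀ the exact solution is ĝ(x) = −θ₀e^{θ₀x}, which the scheme reproduces to second order in the grid spacing.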
11.2.5 Backstepping
Proof Differentiating (11.58) with respect to time and space, respectively, inserting
the dynamics (11.38a), and substituting the result into (11.38a), we obtain
η_t(x, t) − μ̄η_x(x, t) − μ̄[θ̂(x, t) − ∫₀ˣ ĝ(x − ξ, t)θ̂(ξ, t)dξ]ϵ̂(0, t)
+ ∫₀ˣ ĝ(x − ξ, t) ∫_ξ¹ θ̂_t(s, t)φ(1 − (s − ξ), t)ds dξ
+ ∫₀ˣ ĝ(x − ξ, t) ∫₀¹ κ̂_t(s, t)P(ξ, s, t)ds dξ
− ∫ₓ¹ θ̂_t(ξ, t)φ(1 − (ξ − x), t)dξ
− ∫₀¹ κ̂_t(ξ, t)P(x, ξ, t)dξ + ∫₀ˣ ĝ_t(x − ξ, t)ẑ(ξ, t)dξ = 0    (11.61)
which can be rewritten as (11.60a). The boundary condition (11.60b) follows from
inserting x = 1 into (11.58), and using (11.38b) and (11.55).
Since θ̂ is bounded by projection, we have from (11.56), (11.58) and Theorem 1.3
the following inequalities
||ĝt || ∈ L2 ∩ L∞ . (11.63)
V̇₁(t) ≤ 4η²(0, t) + 4ϵ̂²(0, t) − (1/2)λ̄V₁(t)    (11.65a)
V̇₂(t) ≤ −η²(0, t) − (1/4)μ̄V₂(t) + l₁(t)V₂(t) + l₂(t)V₃(t) + l₃(t)V₄(t) + 32ḡ²ϵ̂²(0, t)    (11.65b)
V̇₃(t) ≤ −(1/2)μ̄V₃(t) + 4η²(0, t) + 4ϵ̂²(0, t)    (11.65c)
V̇₄(t) ≤ −||p₀(t)||² + 2λ̄V₁(t) − (1/2)μ̄V₄(t).    (11.65d)
Now, forming the Lyapunov function candidate
V̇₅(t) ≤ −2λ̄V₁(t) − 9μ̄V₂(t) − (1/2)μ̄V₃(t) − (1/2)μ̄V₄(t) + 36l₁(t)V₂(t) + 36l₂(t)V₃(t) + 36l₃(t)V₄(t) + 36(1 + 32ḡ²)ϵ̂²(0, t) − ||p₀(t)||².    (11.67)

V̇₅(t) ≤ −2λ̄V₁(t) − 9μ̄V₂(t) − (1/2)μ̄V₃(t) − (1/2)μ̄V₄(t) + 36l₁(t)V₂(t) + (36l₂(t) + bσ²(t)μ̄)V₃(t) + 36l₃(t)V₄(t) + bσ²(t) − (1 − bσ²(t))||p₀(t)||²    (11.69)
or
V̇5 (t) ≤ −cV5 (t) + l4 (t)V5 (t) + l5 (t) − (1 − bσ 2 (t))|| p0 (t)||2 (11.70)
for the positive constants c and b, and some nonnegative, integrable functions l4 and
l5 . Moreover, from (11.52), (11.46) and (11.53) we have
V̇(t) ≤ −(1/2)σ²(t)    (11.71)

and for t ≥ t_F

σ²(t) = ϵ̂²(0, t)/(1 + f²(t)) ≤ 2||θ̃(t)||² + 2||κ̃(t)||² ≤ kV(t)    (11.72)

where

k = 4 max{ max_{x∈[0,1]} γ₁(x), max_{x∈[0,1]} γ₂(x) }.    (11.73)
Since ||P(t)|| is bounded, || p0 (t)||2 must be bounded for almost all t ≥ 0, implying
that σ 2 (t)|| p0 (t)||2 is integrable, since σ 2 ∈ L∞ . Inequality (11.70) can therefore be
written
From (11.37),
and
z(x, t) = ψ(x, t) + ∫ₓ¹ θ̂(ξ, t)φ(1 − (ξ − x), t)dξ + ∫₀¹ κ̂(ξ, t)P(x, ξ, t)dξ + ϵ̂(x, t)    (11.84)
From the filter structure (11.26a) and the control law (11.55), we have
U ∈ L∞ ∩ L2 , U →0 (11.86)
and
Lemma 11.4 and the invertibility of the transformations of Lemmas 11.1–11.3 then give
11.3 Simulations
System (11.1) and the controller of Theorem 11.1 are implemented using the system
parameters
λ(x) = (1/2)(1 + x),    μ(x) = e^{x/2}    (11.91a)
c₁(x) = 1 + x,    c₂(x) = 1 + sin(x),    q = 1    (11.91b)
constituting an unstable system. All additional initial conditions are set to zero. The
design gains are set to
From Fig. 11.1 it is observed that the state norm and the actuation signal both
converge to zero in approximately seven seconds, while from Fig. 11.2, the estimated
parameters are bounded.
[Fig. 11.1: left, state norm ||u|| + ||v||; right, actuation signal U, both versus Time [s]]
11.4 Notes
The adaptive controller of Theorem 11.1 is both simpler and easier to implement than the controllers of Chap. 10. However, neither the system parameters nor the system states are estimated directly.
The problem of stabilizing a system of 2 × 2 linear hyperbolic PDEs with uncer-
tain system parameters using boundary sensing only is also solved in Yu et al. (2017).
The solution in Yu et al. (2017), however, requires sensing to be taken at both bound-
aries (u(1, t) as well as v(0, t)), and the paper only concerns systems with constant
and equal transport speeds set to 1. The systems considered are on the other hand
allowed to have non-local source terms in the form of integrals similar to the term h
in (2.1a), but such a term can be removed by a transformation and the controller of
Theorem 11.1 can therefore be used directly on such systems as well.
In Chap. 12, we further develop the above adaptive output-feedback scheme in
a number of ways: we use it to solve a model reference adaptive control problem,
and to reject a biased harmonic disturbance with uncertain amplitudes, bias and
phases, and also allow the actuation and sensing to be scaled by arbitrary nonzero
constants.
Chapter 12
Model Reference Adaptive Control

12.1 Introduction
We will in this chapter show how the technique of Chap. 11 can be generalized to
solve a model reference adaptive control problem, as well as being used to reject the
effect of a biased harmonic disturbance affecting the system’s interior, boundaries
and measurement. Furthermore, we allow the actuation and anti-collocated sensing
to be scaled by arbitrary nonzero constants. The system under consideration is
with
Note that the exact profiles of λ and μ are not required to be known.
The goal of this chapter is to design an adaptive control law U (t) in (12.1d) so that
system (12.1) is adaptively stabilized subject to Assumption 12.1, and the following
tracking objective
lim_{t→∞} ∫_t^{t+T} (y₀(s) − y_r(s))² ds = 0    (12.5)
is obtained for some bounded constant T > 0, where yr is generated using the ref-
erence model
for some reference signal r of choice. The goal (12.5) should be achieved using the sensing (12.1g) only. Moreover, all additional variables in the closed loop
system should be bounded pointwise in space. We assume the reference signal r
and disturbances d1 , d2 , . . . , d5 are bounded, as formally stated in the following
assumption.
Assumption 12.2 The reference signal r (t) is known for all t ≥ 0, and there exist
constants r̄ , d̄ so that
as follows
where

A_i = [0, ω_i; −ω_i, 0]    (12.10)
12.2.2.1 Decoupling
Proof We will prove that system (12.1) with disturbance model (12.8) and system
(12.11) are connected through an invertible backstepping transformation. To ease the
derivations to follow, we write system (12.1) in vector form as follows
where

ζ(x, t) = [u(x, t); v(x, t)],    Λ(x) = [λ(x), 0; 0, −μ(x)]    (12.13a)
Π(x) = [0, c₁(x); c₂(x), 0],    G(x) = [g₁ᵀ(x); g₂ᵀ(x)]    (12.13b)
Q₀ = [0, q; 0, 1],    R₁ = [1, 0; 0, 0]    (12.13c)
Ū(t) = [0; U(t)],    G₃ = [g₃ᵀ; 0],    G₄ = [0; g₄ᵀ].    (12.13d)
where

γ(x, t) = [α̌(x, t); β̌(x, t)]    (12.15)
are specified shortly. Differentiating (12.14) with respect to time, inserting the
dynamics (12.12a) and (12.8c) and integrating by parts, we find
If K satisfies the PDE (8.54)–(8.55) with k uu chosen according to Remark 8.1, and
F satisfies the equation
Λ(x)F′(x) = −F(x)A + G(x) − ∫₀ˣ K(x, ξ)G(ξ)dξ − K(x, 0)Λ(0)G₃,    (12.20)
Choosing

f₁ᵀ(0) = −(q/k₂)g₅ᵀ + g₃ᵀ,    f₂ᵀ(0) = −(1/k₂)g₅ᵀ    (12.22)
we obtain (12.11c) and (12.11g). The equation consisting of (12.20) and (12.22) is a
standard matrix ODE which can be explicitly solved for F. From Theorem 1.4, the
transformation (12.14) is invertible, and the inverse is in the form
ζ(x, t) = γ(x, t) + ∫₀ˣ L(x, ξ)γ(ξ, t)dξ + R(x)X(t)    (12.23)
232 12 Model Reference Adaptive Control
where

L(x, ξ) = [L_αα(x, ξ), L_αβ(x, ξ); L_βα(x, ξ), L_ββ(x, ξ)],    R(x) = [r₁ᵀ(x); r₂ᵀ(x)]    (12.24)

are given from (1.53) and (1.102). From inserting x = 1 into (12.23), we obtain (12.11d), where

m₁(ξ) = L_βα(1, ξ),    m₂(ξ) = L_ββ(1, ξ),    m₃ᵀ = r₂ᵀ(1) − g₄ᵀ.    (12.25)
We now use a transformation to get rid of the spatially varying transport speeds in
(12.11), and also scale the variables to ease subsequent analysis.
Lemma 12.2 System (12.11) is equivalent to the system

α(x, t) = (k₂/q) α̌(h_α⁻¹(x), t),    β(x, t) = k₂ β̌(h_β⁻¹(x), t)    (12.27)

where

h_α(x) = λ̄ ∫₀ˣ dγ/λ(γ),    h_β(x) = μ̄ ∫₀ˣ dγ/μ(γ)    (12.28)
12.2 Model Reference Adaptive Control 233
with λ̄, μ̄ defined in Assumption 12.1, are strictly increasing and hence invertible
functions. The invertibility of the transformation (12.27) therefore follows. The rest
of the proof follows immediately from insertion and noting that
h_α′(x) = λ̄/λ(x),    h_β′(x) = μ̄/μ(x)    (12.29a)
h_α(0) = h_β(0) = 0,    h_α(1) = h_β(1) = 1    (12.29b)
In view of the structure of system (12.26), we augment the reference model (12.6)
with an auxiliary state a, and introduce the system
Lemma 12.3 Consider system (12.26) and the extended reference model (12.31).
The error variables
and
ž_x(x, t) = z_x(x, t) + σ(1)ž(x, t) − ∫₀ˣ σ′(1 − x + ξ)ž(ξ, t)dξ.    (12.39)
We have thus shown that stabilizing (12.35) is equivalent to stabilizing the original
system (12.1), because the reference system (12.31) itself is stable for any bounded
r . Moreover, the objective (12.5) can be stated in terms of z as
lim_{t→∞} ∫_t^{t+T} z²(0, s) ds = 0.    (12.44)
The goal is to design a control law U so that z and w converge in L 2 ([0, 1]) at least
asymptotically to zero, while at the same time ensuring pointwise boundedness of
all variables and convergence of z(0, t) to zero in the sense of (12.44).
where

χᵀ(t) = [1  sin(ω₁t)  cos(ω₁t)  …  sin(ω_n t)  cos(ω_n t)]    (12.46)

contains the unknown amplitudes and bias. This representation facilitates identification, since all the uncertain parameters are now in a single vector ν.
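Because the disturbance is linear in the unknown vector ν once the known frequencies are collected in χ(t), any linear estimator can in principle recover ν. The sketch below builds the regressor (12.46) and, purely for illustration, fits ν by ordinary least squares on synthetic samples; the chapter itself uses the adaptive laws (12.64) instead, and the frequencies and true parameter values here are assumptions.

```python
import numpy as np

# The disturbance is linear in nu once the known frequencies are collected
# in the regressor chi(t) of (12.46). Purely for illustration, nu is
# recovered from synthetic samples by ordinary least squares; the chapter
# uses the adaptive laws (12.64) instead. Frequencies and the true
# parameter vector below are assumptions.

def chi(t, omegas):
    parts = [np.ones_like(t)]
    for w in omegas:
        parts += [np.sin(w * t), np.cos(w * t)]
    return np.stack(parts)                 # shape (2n + 1, len(t))

omegas = [1.0, 2.5]                        # assumed known frequencies
nu_true = np.array([0.5, 1.0, -0.3, 0.2, 0.7])
t = np.linspace(0.0, 20.0, 400)
d = nu_true @ chi(t, omegas)               # synthetic disturbance d = nu^T chi
nu_ls, *_ = np.linalg.lstsq(chi(t, omegas).T, d, rcond=None)
```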
and define
and define
Lemma 12.5 Consider system (12.35) and state estimates (12.53) generated using
the filters (12.48) and (12.49). After a finite time t F given in (8.8), we have
w̄ ≡ w, z̄ ≡ z. (12.54)
e(0, t) = 0    (12.56c)
ϵ(1, t) = ∫₀¹ κ(ξ)e(ξ, t)dξ    (12.56d)
e(x, 0) = e₀(x)    (12.56e)
ϵ(x, 0) = ϵ₀(x)    (12.56f)

where e₀, ϵ₀ ∈ B([0, 1]). It can be shown, using the boundary condition P(x, 0, t) = φ(x, t) in (12.48d) and the dynamics of φ in (12.48b), that P_t(x, ξ, t) = μ̄P_x(x, ξ, t) for t ≥ t₁. Moreover, from (12.56a) and (12.56c), it is observed that e ≡ 0 for t ≥ t₁, and therefore (12.56b) and (12.56d) imply that ϵ ≡ 0 for t ≥ t_F, where t_F is given by (8.8).
ρ̲ ≤ ρ ≤ ρ̄    (12.57a)
θ̲ ≤ θ(x) ≤ θ̄,    ∀x ∈ [0, 1]    (12.57b)
κ̲ ≤ κ(x) ≤ κ̄,    ∀x ∈ [0, 1]    (12.57c)
ν̲ᵢ ≤ νᵢ ≤ ν̄ᵢ,    i = 1, …, 2n + 1    (12.57d)
and with
0 ∉ [ρ̲, ρ̄].    (12.59)
with ẑ 0 ∈ B([0, 1]), and where the term in the first integral of (12.62a) is zero in a
finite time t1 . Moreover, we have
where the error term (0, t) converges to zero in a finite time t F = t1 + t2 . From
(12.63), we propose the adaptive laws
where
and γ1 > 0, γ2 (x), γ3 (x) > 0 for all x ∈ [0, 1] and Γ4 > 0 are design gains. The
initial conditions are chosen inside the feasible domain
ρ̲ ≤ ρ̂₀ ≤ ρ̄    (12.66a)
θ̲ ≤ θ̂₀(x) ≤ θ̄,    ∀x ∈ [0, 1]    (12.66b)
κ̲ ≤ κ̂₀(x) ≤ κ̄,    ∀x ∈ [0, 1]    (12.66c)
ν̲ᵢ ≤ ν̂ᵢ,₀ ≤ ν̄ᵢ,    i = 1, …, 2n + 1    (12.66d)
for

ν̂₀ = [ν̂₁,₀  ν̂₂,₀  …  ν̂₂ₙ₊₁,₀]ᵀ    (12.67)
for t ≥ t2 .
Lemma 12.6 The adaptive laws (12.64) with initial conditions satisfying (12.66)
have the following properties
σ(t) = ϵ̂(0, t)/(1 + f²(t))    (12.70)
Proof The properties (12.69a)–(12.69d) follow from the projection operator used in
(12.64) and the conditions (12.66). Consider the Lyapunov function candidate
V(t) = (1/(2γ₁))ρ̃²(t) + (1/2) ∫₀¹ γ₂⁻¹(x)θ̃²(x, t)dx + (1/2) ∫₀¹ γ₃⁻¹(x)κ̃²(x, t)dx + (1/2)ν̃ᵀ(t)Γ₄⁻¹ν̃(t).    (12.71)
Differentiating with respect to time, inserting the adaptive laws (12.64) and using the property −ν̃ᵀ proj_{ν̲,ν̄}(τ, ν̂) ≤ −ν̃ᵀτ (Lemma A.1 in Appendix A), and similarly for ρ̂, θ̂ and κ̂, we get
V̇(t) ≤ −(ϵ̂(0, t)/(1 + f²(t)))[ρ̃(t)ψ(0, t) + ∫₀¹ (θ̃(x, t)(φ(1 − x, t) + n₀(x, t)) + κ̃(x, t)(p₀(x, t) + m₀(x, t))) dx + ϑᵀ(0, t)ν̃(t)].    (12.72)
We note that
ϵ̂(0, t) = ϵ(0, t) + ρ̃(t)ψ(0, t) + ∫₀¹ θ̃(ξ, t)(φ(1 − ξ, t) + n₀(ξ, t))dξ + ∫₀¹ κ̃(ξ, t)(p₀(ξ, t) + m₀(ξ, t))dξ + ϑᵀ(0, t)ν̃(t),    (12.73)
σ ∈ L2 . (12.75)
which gives
σ ∈ L∞ . (12.77)
where ẑ is generated using (12.60), and ĝ is the on-line solution to the Volterra integral
equation
ĝ(x, t) = ∫₀ˣ ĝ(x − ξ, t)θ̂(ξ, t)dξ − θ̂(x, t),    (12.80)
with ρ̂, θ̂, κ̂ and ν̂ generated from the adaptive laws (12.64).
Theorem 12.1 Consider system (12.1), filters (12.48) and (12.49), reference model
(12.31), and adaptive laws (12.64). Suppose Assumption 12.2 holds. Then the control
law (12.79) guarantees (12.5), and
12.2.7 Backstepping
Lemma 12.7 The transformation (12.82) and controller (12.79) map system (12.62)
into (12.84).
Proof Differentiating (12.82) with respect to time, inserting the dynamics (12.62a) and integrating by parts, we obtain
⋯ + ∫₀ˣ ĝ(x − ξ, t)ρ̂̇(t)ψ(ξ, t)dξ
+ ∫₀ˣ ĝ(x − ξ, t) ∫_ξ¹ θ̂_t(s, t)φ(1 − (s − ξ), t)ds dξ
+ ∫₀ˣ ĝ(x − ξ, t) ∫₀¹ κ̂_t(s, t)[P(ξ, s, t) + M(ξ, s, t)]ds dξ
+ ∫₀ˣ ĝ(x − ξ, t) ∫₀¹ θ̂_t(s, t)N(ξ, s, t)ds dξ
+ ∫₀ˣ ĝ(x − ξ, t)ϑᵀ(ξ, t)ν̂̇(t)dξ
+ ∫₀ˣ ĝ_t(x − ξ, t)ẑ(ξ, t)dξ.    (12.85)
⋯ − ϑᵀ(x, t)ν̂̇(t) + ∫₀ˣ ĝ(x − ξ, t)ϑᵀ(ξ, t)ν̂̇(t)dξ ⋯

which can be rewritten as (12.84a). The boundary condition (12.84b) follows from inserting x = 1 into (12.82), and using (12.62b) and (12.79).
for all t ≥ 0, and for some positive constants ḡ, G 1 and G 2 , and
||ĝt || ∈ L2 ∩ L∞ . (12.90)
V̇₁(t) ≤ −η²(0, t) − (μ̄/4)V₁(t) + h₁σ²(t)ψ²(0, t) + l₁(t)V₁(t) + l₂(t)V₂(t) + l₃(t)V₃(t) + l₄(t)V₄(t) + l₅(t)V₆(t) + l₆(t)    (12.92a)
V̇₂(t) ≤ −φ²(0, t) + 4η²(0, t) − (μ̄/2)V₂(t) + 4σ²(t)ψ²(0, t) + l₇(t)V₂(t) + l₈(t)V₄(t) + l₉(t)    (12.92b)
V̇₃(t) ≤ −(1/2)λ̄V₃(t) + 2μ̄V₂(t)    (12.92c)
V̇₄(t) ≤ 2φ²(0, t) − (λ̄/2)V₄(t)    (12.92d)
V̇₅(t) ≤ 4η²(0, t) − (λ̄/2)V₅(t) + 4σ²(t)ψ²(0, t) + l₇(t)V₂(t) + l₈(t)V₄(t) + l₉(t)    (12.92e)
V̇₆(t) ≤ −ψ²(0, t) − (μ̄/2)V₆(t) + h₂r²(t) + h₃V₁(t) + h₄V₅(t) + h₅||a(t)||² + h₆||b(t)||² + h₇||χ(t)||².    (12.92f)
Now forming
V7 (t) = 64V1 (t) + 8V2 (t) + V3 (t) + 4V4 (t) + 8V5 (t) + 2k1 V6 (t) (12.93)
where

k₁ = min{μ̄h₃⁻¹, λ̄h₄⁻¹},    (12.94)

V̇₇(t) ≤ −cV₇(t) + l₁₀(t)V₇(t) + l₁₁(t) − (2k₁ − 64(1 + h₁)σ²(t))ψ²(0, t)

for some positive constant c and integrable functions l₁₀ and l₁₁. The terms in r, ||a||, ||b|| and ||χ|| are all bounded by Assumption 12.2.
Moreover, from the inequality (12.76) and the definition of V in (12.71), we have,
for t ≥ t1
and from the invertibility of the transformation (12.82), we will also have
||ẑ|| ∈ L∞ . (12.98)
From the definition of the filter ψ in (12.48a) and the control law U in (12.79), we
then have U ∈ L∞ , and
||ψ||∞ ∈ L∞ (12.99)
V8 (t) = 64V1 (t) + 8V2 (t) + V3 (t) + 4V4 (t) + 8V5 (t) (12.100)
V̇8 (t) ≤ −c̄V8 (t) + l12 (t)V8 (t) + l13 (t) + 64(1 + h 1 )σ 2 (t)ψ 2 (0, t) (12.101)
for some positive constant c̄ and integrable functions l12 and l13 . Since σ 2 ∈ L1 and
ψ(0, ·) ∈ L∞ , the latter term is integrable, and hence
V̇8 (t) ≤ −c̄V8 (t) + l12 (t)V8 (t) + l14 (t) (12.102)
V8 ∈ L1 ∩ L∞ , V8 → 0 (12.103)
and hence
From the invertibility of the transformations, and the fact that ||a|| and ||b|| are
bounded, we obtain
||z||∞ ∈ L∞ , (12.108)
||w||∞ ∈ L∞ . (12.110)
From the invertibility of the transformations in Lemmas 12.1–12.4 and since a and
b are pointwise bounded, we finally get
Lastly, we prove that the tracking goal (12.5) is achieved. By solving (12.48b),
we find
for any T > 0, and from the definition of z(0, t) in (12.35g), this implies that
∫_t^{t+T} (y₀(s) − y_r(s))² ds → 0    (12.115)
where ẑ is generated using (12.60), and ĝ is the on-line solution to the Volterra integral equation (12.80), with ρ̂, θ̂ and κ̂ generated using the adaptive laws (12.64).
Theorem 12.2 Consider system (12.1), filters (12.48) and (12.49), and the adap-
tive laws (12.64). Suppose d1 = d2 ≡ 0, d3 = d4 = d5 ≡ 0. Then, the control law
(12.116) guarantees
From the control law (12.116) and the definition of the filter ψ in (12.48a), we
then have U ∈ L∞ ∩ L2 , U → 0, and
12.4 Simulations
System (12.1), reference model (12.31) and filters (12.48)–(12.51) are implemented
along with the adaptive laws (12.64) and the controller of Theorem 12.1. The system
parameters are set to
All initial conditions for the filters and parameter estimates are set to zero, except
ρ̂(0) = 1. (12.130)
[Fig. 12.1: left, state norm ||u|| + ||v||; right, actuation signal U, both versus Time [s]]

[Fig. 12.2: simulation results versus Time [s]]
Fig. 12.3 Reference model output yr (t) (solid black) and measured signal y0 (t) (dashed red)
γ1 = 5, γ2 = γ3 ≡ 5, Γ4 = 5I3 (12.131a)
Chapter 13
Introduction

We now generalize the class of systems considered, and allow an arbitrary number
of states convecting in one of the directions. They are referred to as n + 1 systems,
where the phrasing “n + 1” refers to the number of variables, with u being a vector containing n components convecting from x = 0 to x = 1, and v being a scalar convecting in the opposite direction. They are typically stated in the following form
defined over x ∈ [0, 1], t ≥ 0. The system parameters are in the form
Λ(x) = diag{λ₁(x), λ₂(x), …, λₙ(x)},    Σ(x) = {σᵢⱼ(x)}₁≤ᵢ,ⱼ≤ₙ    (13.3a)
ω(x) = [ω₁(x)  ω₂(x)  …  ωₙ(x)]ᵀ    (13.3b)
ϖ(x) = [ϖ₁(x)  ϖ₂(x)  …  ϖₙ(x)]ᵀ    (13.3c)
q = [q₁  q₂  …  qₙ]ᵀ,    c = [c₁  c₂  …  cₙ]ᵀ    (13.3d)
However, some of the designs to follow require the slightly more restrictive assump-
tion of
−μ(x) < 0 < λ1 (x) < λ2 (x) < · · · < λn (x), ∀x ∈ [0, 1]. (13.8)
ϖ ≡ 0    (13.9)
and
σii ≡ 0, i = 1, 2, . . . , n, (13.10)
for the terms in (13.1a)–(13.1b). This is not a restriction, since these terms can be
removed by scaling as demonstrated for 2 × 2 systems in Chap. 7. This assumption
13 Introduction 259
sometimes makes the analysis far easier. In addition, we will sometimes not allow
scaling in the inputs and outputs, and assume that
k1 = k2 = k3 = 1. (13.11)
Chapter 14
Non-adaptive Schemes
14.1 Introduction
where

Kᵘ(x, ξ) = [K₁ᵘ(x, ξ)  K₂ᵘ(x, ξ)  …  Kₙᵘ(x, ξ)],    Kᵛ(x, ξ)    (14.2)
Note that K u in this case is a row vector. Well-posedness of Eq. (14.3) is guaranteed
by Theorem D.4 in Appendix D.
Theorem 14.1 Consider system (13.1) subject to assumption (13.7). Let the con-
troller be taken as (14.1) where (K u , K v ) is the solution to (14.3). Then,
u ≡ 0, v≡0 (14.4)
for t ≥ t F , where
t_F = t_{u,1} + t_v,    t_{u,i} = ∫₀¹ dγ/λᵢ(γ),    t_v = ∫₀¹ dγ/μ(γ).    (14.5)
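The settling time in (14.5) is just a sum of transit-time integrals and is straightforward to evaluate by quadrature. In the sketch below, the speed profiles λ₁ and μ are assumptions chosen only so that the integrals have known closed forms.

```python
import numpy as np

# The settling time (14.5) is a sum of transit-time integrals,
# t_u1 = int_0^1 dgamma/lambda_1(gamma) and t_v = int_0^1 dgamma/mu(gamma),
# each evaluated here by trapezoidal quadrature. The speed profiles are
# assumptions with known closed-form integrals.

def transit_time(speed, n=10001):
    g = np.linspace(0.0, 1.0, n)
    y = 1.0 / speed(g)
    dx = g[1] - g[0]
    return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])  # trapezoid rule

lambda_1 = lambda g: 1.0 + g               # slowest rightward speed (assumed)
mu = lambda g: 2.0 * np.ones_like(g)       # leftward speed (assumed)

t_u1 = transit_time(lambda_1)              # analytically ln 2
t_v = transit_time(mu)                     # analytically 1/2
t_F = t_u1 + t_v
```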
Proof As for the 2 × 2 case in Sect. 8.2, we will here provide two proofs of this
Theorem, where the first one uses the simplest backstepping transformation, while the
second one produces the simplest target system. The first proof is the one originally
given in Di Meglio et al. (2013), while the second proof is included since it employs
a target system that facilitates the model reference adaptive controller design in
Chap. 17.
14.2 State Feedback Controller 263
Solution 1:
We will show that the backstepping transformation
where (K u , K v ) is the solution to the PDE (14.3) maps system (13.1) into the target
system
α_t(x, t) + Λ(x)α_x(x, t) = Σ(x)α(x, t) + ω(x)β(x, t) + ∫₀ˣ B₁(x, ξ)α(ξ, t)dξ
+ ∫₀ˣ [−K^v_ξ(x, ξ)μ(ξ) − K^v(x, ξ)μ′(ξ) + K^u(x, ξ)ω(ξ)] v(ξ, t)dξ
+ K^v(x, x)μ(x)v(x, t).    (14.11)
Using Eq. (14.3), we obtain the dynamics (14.8b). Inserting the backstepping trans-
formation (14.6) into the target system dynamics (14.8a), we find
Changing the order of integration in the double integrals, (14.14) can be written as
Using (14.10) yields (13.1a). The boundary condition (14.8c) follows trivially from
(13.1c) and the fact that u(0, t) = α(0, t) and v(0, t) = β(0, t). Evaluating (14.6b)
at x = 1 and inserting the boundary condition (13.1d), we get
β(1, t) = U(t) + cᵀu(1, t) − ∫₀¹ Kᵘ(1, ξ)u(ξ, t)dξ − ∫₀¹ Kᵛ(1, ξ)v(ξ, t)dξ,    (14.16)
from which the control law (14.1) gives the boundary condition (14.8d).
The target system (14.8) is a cascade from β into α. The subsystem in β will be
zero for t ≥ tv for tv defined in (14.5). System (14.8) is then reduced to
α_t(x, t) + Λ(x)α_x(x, t) = Σ(x)α(x, t) + ∫₀ˣ B₁(x, ξ)α(ξ, t)dξ    (14.17a)
α(0, t) = 0    (14.17b)
α(x, t_v) = α_{t_v}(x)    (14.17c)
for some function αtv ∈ B([0, 1]). System (14.17) will be zero after an additional time
tu,1 , corresponding to the slowest transport speed in Λ. Hence, for t ≥ tu,1 + tv = t F ,
we will have α ≡ 0 and β ≡ 0, and the result follows from the invertibility of the
backstepping transformation (14.6).
Solution 2:
This proof is based on Hu et al. (2015) and uses a somewhat more complicated backstepping transformation, with the advantage of producing a simpler target system, that
K^{uu}(x, ξ) = {K^{uu}_{ij}(x, ξ)}_{i,j=1,2,…,n}    (14.19a)
K^{uv}(x, ξ) = [K₁^{uv}(x, ξ)  K₂^{uv}(x, ξ)  …  K_n^{uv}(x, ξ)]ᵀ    (14.19b)

Λ(x)K^{uu}_x(x, ξ) + K^{uu}_ξ(x, ξ)Λ(ξ) = −K^{uu}(x, ξ)Λ′(ξ) − K^{uu}(x, ξ)Σ(ξ) − K^{uv}(x, ξ)ϖᵀ(ξ)    (14.20a)
Λ(x)K^{uv}_x(x, ξ) − K^{uv}_ξ(x, ξ)μ(ξ) = −K^{uu}(x, ξ)ω(ξ)    (14.20b)
Note that K uu is a matrix, while K uv is a column vector. The PDE (14.20) is under-
determined, and to ensure well-posedness, we add the boundary conditions
K^{uu}_{ij}(x, 0) = k^{uu,1}_{ij}(x),    1 ≤ j ≤ i ≤ n    (14.21a)
K^{uu}_{ij}(1, ξ) = k^{uu,2}_{ij}(ξ),    1 ≤ i < j ≤ n    (14.21b)
with g given as
⋯ + ∫₀ˣ [−K^{uv}_ξ(x, ξ)μ(ξ) + K^{uu}(x, ξ)ω(ξ) − K^{uv}(x, ξ)μ′(ξ)] v(ξ, t)dξ − [K^{uv}(x, 0)μ(0) − K^{uu}(x, 0)Λ(0)q] v(0, t).    (14.24)
Using Eq. (14.20) yields the target system dynamics (14.22a) with g given from (14.23). The rest of the proof follows the same steps as in Solution 1.
This observer design was originally presented in Di Meglio et al. (2013). Consider
the observer
for some initial conditions û 0 , v̂0 ∈ B([0, 1]), and the injection gains chosen as
where

M^α(x, ξ) = [M₁^α(x, ξ)  M₂^α(x, ξ)  …  M_n^α(x, ξ)]ᵀ,    M^β(x, ξ)    (14.29)
Theorem 14.2 Consider system (13.1) subject to assumption (13.7), and the
observer (14.27) with injection gains p1 and p2 given as (14.28). Then
û ≡ u, v̂ ≡ v (14.31)
and
0 = β̃_t(x, t) − μ(x)β̃_x(x, t) − ϖᵀ(x)α̃(x, t) − ∫₀ˣ d₂ᵀ(x, ξ)α̃(ξ, t)dξ
= ṽ_t(x, t) − μ(x)ṽ_x(x, t) − ϖᵀ(x)ũ(x, t) + M^β(x, 0)μ(0)ṽ(0, t)
+ ∫₀ˣ [μ(x)M^β_x(x, ξ) + M^β_ξ(x, ξ)μ(ξ) + M^β(x, ξ)μ′(ξ) + ϖᵀ(x)M^α(x, ξ)] β̃(ξ, t)dξ
− ∫₀ˣ [d₂ᵀ(x, ξ) + M^β(x, ξ)ϖᵀ(ξ) + ∫_ξˣ M^β(x, s)d₂ᵀ(s, ξ)ds] α̃(ξ, t)dξ.    (14.39)
Using (14.30d) gives (14.32d). The last boundary condition (14.32c) follows trivially
from inserting (14.33) into (14.34c).
The target system (14.34) is a cascade from α̃ to β̃. For t ≥ tu,1 , we will have
α̃ ≡ 0, and for t ≥ tu,1 + tv = t F , β̃ ≡ 0. The invertibility of the backstepping trans-
formation (14.33) then gives the desired result.
In the collocated case, we have to assume distinct transport speeds (13.8) in order to
ensure well-posedness of the kernel equations and continuous kernels. To ease the
analysis, we also assume (13.10). Consider the observer
272 14 Non-adaptive Schemes
for some initial conditions û 0 , v̂0 ∈ B([0, 1]), and injection gains chosen as
where
Λ(x)N^α_x(x, ξ) + N^α_ξ(x, ξ)Λ(ξ) = −N^α(x, ξ)Λ'(ξ) + Σ(x)N^α(x, ξ)
                                     + ω(x)N^β(x, ξ)   (14.44a)
μ(x)N^β_x(x, ξ) − N^β_ξ(x, ξ)Λ(ξ) = N^β(x, ξ)Λ'(ξ) − ϖ^T(x)N^α(x, ξ)   (14.44b)
N^α(x, x)Λ(x) − Λ(x)N^α(x, x) = Σ(x)   (14.44c)
N^β(x, x)Λ(x) + μ(x)N^β(x, x) = ϖ^T(x)   (14.44d)
N^α_{ij}(0, ξ) = q_i N^β_j(0, ξ),   for 1 ≤ i ≤ j ≤ n.   (14.44e)
Note that N α is a matrix and N β is a row vector. The PDE is under-determined, and
uniqueness can be ensured by imposing the following additional boundary conditions
N^α_{ij}(x, 1) = σ_{ij}(1) / (λ_j(1) − λ_i(1)),   ∀x ∈ [0, 1],   1 ≤ j < i ≤ n.   (14.45)
The boundary conditions N^α_{ij}(x, 1), 1 ≤ j < i ≤ n, can be chosen arbitrarily, but choosing
them as (14.45) ensures continuity of N^α_{ij}, 1 ≤ j < i ≤ n, at x = ξ = 1.
Well-posedness of (14.44)–(14.45) now follows from Theorem D.5 in Appendix D.5
following a change of coordinates (x, ξ) → (ξ, x).
Theorem 14.3 Consider system (13.1) subject to assumptions (13.8) and (13.10),
and the observer (14.41) with injection gains P1 and p2 given as (14.42). Then
14.3 State Observers 273
û ≡ u, v̂ ≡ v (14.46)
for t ≥ t0 where
t_0 = Σ_{i=1}^n t_{u,i} + t_v   (14.47)
for some functions g1 , g2 defined over S = {(x, ξ) | 0 ≤ x ≤ ξ ≤ 1}, and where the
matrix
h i j ≡ 0, for 1 ≤ i ≤ j ≤ n. (14.51)
where (N α , N β ) satisfies the PDE (14.44) maps system (14.49) into (14.48), provided
g1 and g2 are given by
g_1(x, ξ) = N^α(x, ξ)ω(ξ) − ∫_x^ξ N^α(x, s)g_1(s, ξ)ds   (14.53a)
g_2(x, ξ) = N^β(x, ξ)ω(ξ) − ∫_x^ξ N^β(x, s)g_1(s, ξ)ds   (14.53b)
and H is given by
Differentiating (14.52) with respect to time, inserting the dynamics (14.49a) and
integrating by parts, we find
and
from which (14.54) gives (14.49c). The boundary condition (14.49d) follows trivially
from (14.48d) by noting that ṽ(1, t) = β̃(1, t).
The target system (14.49) has a cascade structure from β̃ to α̃. For t ≥ tv , β̃ ≡ 0,
and system (14.49) reduces to
for some function αtv ∈ B([0, 1]). Due to the strictly lower triangular structure of
H , system (14.60) is also a cascade system, and will be zero for t ≥ t0 for t0 defined
in (14.47).
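The cascade argument above can be illustrated numerically. The following sketch (not from the book; the grid, speeds and initial data are illustrative assumptions) simulates a scalar version of the cascade with unit transport speeds and zeroed inflow, and confirms that both states settle exactly at the sum of the transport times, here t_0 = 2.

```python
import numpy as np

# Sketch (not the book's code): scalar cascade of transport equations
#   u_t + u_x = 0,  v_t - v_x = 0  on [0, 1],
# with controlled inflow v(1, t) = 0 and coupling u(0, t) = v(0, t).
# With unit speeds, v vanishes for t >= t_v = 1 and u for t >= t_u + t_v = 2.
N = 200
dx = 1.0 / N
dt = dx                                   # CFL = 1: the upwind update is an exact shift
grid = np.linspace(0.0, 1.0, N + 1)
u = np.sin(np.pi * grid)                  # arbitrary initial data
v = np.cos(np.pi * grid)

for _ in range(int(round(2.2 / dt))):     # simulate slightly past t_0 = 2
    v = np.append(v[1:], 0.0)             # v moves leftward; inflow v(1, t) = 0
    u = np.append(v[0], u[:-1])           # u moves rightward; inflow u(0, t) = v(0, t)

print(np.abs(u).max(), np.abs(v).max())   # both 0.0 once t >= 2
```

Because the CFL number is exactly 1, the upwind update is an exact shift, so the finite-time settling appears exactly rather than approximately.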
The state feedback controllers and state observers can straightforwardly be combined
into output feedback controllers, as we will do next. The proofs are straightforward
and omitted.
Combining the results of Theorems 14.1 and 14.2, the following result trivially
follows.
Theorem 14.4 Consider system (13.1), subject to assumption (13.7), and with mea-
surement (13.1g). Let the controller be taken as
U(t) = −c^T û(1, t) + ∫_0^1 K^u(1, ξ)û(ξ, t)dξ + ∫_0^1 K^v(1, ξ)v̂(ξ, t)dξ   (14.61)
where (K u , K v ) is the solution to the PDE (14.3), and û and v̂ are generated using
the observer of Theorem 14.2. Then,
u ≡ 0, v≡0 (14.62)
Similarly, combining the results of Theorems 14.1 and 14.3, the following result
trivially follows.
14.5 Output Tracking Controllers 277
Theorem 14.5 Consider system (13.1), subject to assumption (13.8), and with mea-
surement (13.1h). Let the controller be taken as
U(t) = −c^T y_1(t) + ∫_0^1 K^u(1, ξ)û(ξ, t)dξ + ∫_0^1 K^v(1, ξ)v̂(ξ, t)dξ   (14.63)
where (K u , K v ) is the solution to the PDE (14.3), and û and v̂ are generated using
the observer of Theorem 14.3. Then
u ≡ 0, v≡0 (14.64)
Theorem 14.6 Consider system (13.1). Let the control law be taken as
U(t) = ∫_0^1 [K^u(1, ξ)u(ξ, t) + K^v(1, ξ)v(ξ, t)] dξ + r(t + t_v),   (14.65)
Proof In the proof of Theorem 14.1, it is shown that system (13.1) can be mapped
using the backstepping transformation (14.6) into target system (14.8), that is
α_t(x, t) + Λ(x)α_x(x, t) = Σ(x)α(x, t) + ω(x)β(x, t) + ∫_0^x B_1(x, ξ)α(ξ, t)dξ
                            + ∫_0^x b_2(x, ξ)β(ξ, t)dξ
where we have inserted the control law (14.65), and added the measurement (14.68g)
which follows from (13.1g) and the fact that v(0, t) = β(0, t). It is clear from the
structure of the subsystem in β consisting of (14.68b) and (14.68d) that
for t ≥ tv , which is the tracking goal. System (14.68) is a cascade system from β to α.
For t ≥ tv, all values in β will be given by past values of r, while for t ≥ tu,1 + tv = t F ,
this will also be true for α. Due to the invertibility of the transformation (14.6), this
also holds for u and v.
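The mechanism behind the tracking goal — the actuation r(t + t_v) travelling one transport time before reaching the sensed boundary — can be checked on the scalar part alone. The sketch below (illustrative, not the book's code; μ, the grid and r are arbitrary choices) simulates v_t − μv_x = 0 with v(1, t) = r(t + t_v) and verifies that y_0(t) = v(0, t) = r(t) for t ≥ t_v.

```python
import numpy as np

# Sketch (illustrative): v_t - mu*v_x = 0 actuated at x = 1 with the
# time-advanced reference, v(1, t) = r(t + t_v); the outflow y0(t) = v(0, t)
# then reproduces r(t) exactly once one transport time t_v = 1/mu has passed.
mu = 1.0
t_v = 1.0 / mu
N = 400
dt = (1.0 / N) / mu                      # CFL = 1: exact transport
r = lambda t: np.sin(2.0 * t)            # reference signal of choice

v = np.zeros(N + 1)                      # zero initial condition
errors = []
for k in range(3 * N):                   # simulate up to t = 3 t_v
    t_next = (k + 1) * dt
    v = np.append(v[1:], r(t_next + t_v))    # shift left, inject future reference
    if k >= N:                               # one transport time has elapsed
        errors.append(abs(v[0] - r(t_next)))

print(max(errors))                       # zero up to rounding of the time grid
```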
The tracking controller of Theorem 14.6 can also be combined with the observers
of Theorems 14.2 and 14.3 into output-feedback tracking controllers.
14.6 Simulations
System (13.1) with the state feedback controller of Theorem 14.1, the output-
feedback controller of Theorem 14.4 using sensing anti-collocated with actuation
and the tracking controller of Theorem 14.6 are implemented for n = 2 using the
system parameters
Fig. 14.1 Left: Controller gains K_1^u(1, x) (solid red), K_2^u(1, x) (dashed-dotted blue) and K^v(1, x)
(dashed green). Right: Observer gains p_{1,1}(x) (solid red), p_{1,2}(x) (dashed-dotted blue) and p_2(x)
(dashed green)
Fig. 14.2 Left: State norm during state feedback (solid red), output feedback (dashed-dotted blue)
and output tracking (dashed green). Right: State estimation error norm
Fig. 14.3 Left: Actuation signal during state feedback (solid red), output feedback (dashed-dotted
blue) and output tracking (dashed green). Right: Reference r (solid black) and measured signal
(dashed red) during tracking
The controller and observer gains are shown in Fig. 14.1. In the state feedback
case, the system’s norm and actuation signal converge to zero in a finite time t = t F ,
as seen in Figs. 14.2 and 14.3. In the output feedback case, the state estimation error
norm converges to zero in finite time t = t F , while the state norm and actuation signal
converge to zero in t = 2t F . For the tracking case, the state norm and actuation signal
stay bounded, while the tracking goal is achieved for t ≥ tv , as seen in Fig. 14.3.
14.7 Notes
It is clear that the complexity now has increased considerably from the 2 × 2 designs
in Chap. 8, especially in the design of the observer of Theorem 14.3 using sensing
collocated with actuation. The number of kernels used in the design is n 2 + n, and
hence scales quadratically with the number of states n. Moreover, some assumptions
are needed on the system parameters; specifically, the transport speeds cannot
be arbitrary, but have to be ordered systematically.
It is, as in the 2 × 2 case, possible to perform a decoupling of the controller target
system, as we showed in the alternative proof of Theorem 14.1. This is utilized in
Chap. 17 where a model reference adaptive control law for systems in the form (13.1)
is derived.
References
Bin M, Di Meglio F (2017) Boundary estimation of boundary parameters for linear hyperbolic
PDEs. IEEE Trans Autom Control 62(8):3890–3904
Di Meglio F, Vazquez R, Krstić M (2013) Stabilization of a system of n + 1 coupled first-order
hyperbolic linear PDEs with a single boundary input. IEEE Trans Autom Control 58(12):3097–
3111
Hu L, Vazquez R, Di Meglio F, Krstić M (2015) Boundary exponential stabilization of 1-D inhomo-
geneous quasilinear hyperbolic systems. SIAM J Control Optim
Chapter 15
Adaptive State-Feedback Controller
15.1 Introduction
are unknown. Note that σiT , i = 1, . . . , n are the rows of the matrix Σ. The control
law employs full state-feedback, and the practical interest of the controller is therefore
limited, since distributed measurements are at best a coarse approximation in practice.
This problem was originally solved in Anfinsen and Aamo (2017).
Output feedback problems, which are significantly harder to solve, are considered
in Chaps. 16 and 17.
and φ(x, t), where 1 is a column vector of length n with all elements equal to one.
The initial conditions are assumed to satisfy
Note that pi (x, t), i = 1, . . . , n are the rows of the matrix P(x, t), each containing
n elements. Consider the non-adaptive state estimates
ū_i(x, t) = ϕ_i^T(x, t)κ_i,   v̄(x, t) = ϕ_0^T(x, t)κ_0 + φ(x, t)   (15.5)
where
ϕ_i(x, t) = [η_i(x, t)  p_i(x, t)  ν_i(x, t)]^T,   i = 1, . . . , n   (15.6a)
ϕ_0(x, t) = [ψ^T(x, t)  r^T(x, t)]^T   (15.6b)
t_S = max{μ^{−1}, λ_1^{−1}},   (15.8)
we have
ū ≡ u, v̄ ≡ v. (15.9)
By straightforward calculations, it can be verified that the error terms (15.10) satisfy
where
e(x, t) = [e_1(x, t)  e_2(x, t)  . . .  e_n(x, t)]^T   (15.12)
for all i, j = 1, . . . , n.
Using this assumption, consider now the adaptive laws
κ̂̇_i(t) = proj_{κ̄_i}( Γ_i [ ∫_0^1 ê_i(x, t)ϕ_i(x, t)dx / (1 + ||ϕ_i(t)||²)
                            + ê_i(1, t)ϕ_i(1, t) / (1 + |ϕ_i(1, t)|²) ], κ̂_i(t) )   (15.14a)
κ̂̇_0(t) = proj_{κ̄_0}( Γ_0 [ ∫_0^1 ε̂(x, t)ϕ_0(x, t)dx / (1 + ||ϕ_0(t)||²)
                            + ε̂(0, t)ϕ_0(0, t) / (1 + |ϕ_0(0, t)|²) ], κ̂_0(t) )   (15.14b)
κ̂_i(0) = κ̂_{i,0}   (15.14c)
κ̂_0(0) = κ̂_{0,0}   (15.14d)
and bounds
κ̄_i = [q̄  σ̄  ω̄]^T,   κ̄_0 = [c̄^T  ϖ̄^T]^T   (15.16)
û_i(x, t) = ϕ_i^T(x, t)κ̂_i(t),   v̂(x, t) = ϕ_0^T(x, t)κ̂_0(t) + φ(x, t),   (15.18)
Theorem 15.1 The adaptive laws (15.14) with initial conditions satisfying (15.19)
guarantee
for i = 1, . . . , n. Moreover, the prediction errors satisfy the following bounds, for
t ≥ tS
||ê_i(t)|| ≤ ||ϕ_i(t)|| |κ̃_i(t)|,   ||ε̂(t)|| ≤ ||ϕ_0(t)|| |κ̃_0(t)|,   (15.21)
for i = 1, . . . , n.
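The structure of the adaptive laws (15.14) — a gradient step, normalized by the regressor norm and passed through a projection that enforces the known bounds — can be illustrated on a static finite-dimensional regression. The following sketch is a hypothetical toy problem (parameter values, gain and regressor are invented for illustration), not the PDE setting itself.

```python
import numpy as np

# Sketch of the ingredients of (15.14) -- gradient direction, normalization
# by the regressor norm, and projection onto a known bound -- on a toy
# static regression y = phi^T kappa (all values here are invented).
kappa = np.array([1.0, -0.5])            # "true" parameters, inside the bound
kappa_bar = 2.0                          # known bound used by the projection
kappa_hat = np.zeros(2)                  # initial guess satisfying the bound
gamma = 0.5                              # adaptation gain

for k in range(20000):
    phi = np.array([np.sin(0.10 * k), np.cos(0.17 * k)])   # PE regressor
    e_hat = phi @ (kappa - kappa_hat)                      # prediction error
    kappa_hat = kappa_hat + gamma * e_hat * phi / (1.0 + phi @ phi)
    kappa_hat = np.clip(kappa_hat, -kappa_bar, kappa_bar)  # box projection

print(np.abs(kappa_hat - kappa).max())   # small: estimates converge under PE
```

With a persistently exciting regressor, the normalized, projected gradient iteration drives the parameter error to zero while never leaving the bound, mirroring the boundedness and convergence properties listed in Theorem 15.1.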
Proof The property (15.20a) follows from the projection operator. Consider the
Lyapunov function candidate
15.2 Swapping-Based Design 285
V(t) = (1/2) κ̃^T(t)Γ^{−1}κ̃(t)   (15.22)
where
κ̃(t) = [κ̃_1^T(t)  κ̃_2^T(t)  . . .  κ̃_0^T(t)]^T   (15.23)
and
By differentiating (15.22), inserting the adaptive laws (15.14) and using Lemma A.1
in Appendix A, we find
V̇(t) ≤ − Σ_{i=1}^n κ̃_i^T(t) [ ∫_0^1 ê_i(x, t)ϕ_i(x, t)dx / (1 + ||ϕ_i(t)||²)
                               + ê_i(1, t)ϕ_i(1, t) / (1 + |ϕ_i(1, t)|²) ]
        − κ̃_0^T(t) [ ∫_0^1 ε̂(x, t)ϕ_0(x, t)dx / (1 + ||ϕ_0(t)||²)
                     + ε̂(0, t)ϕ_0(0, t) / (1 + |ϕ_0(0, t)|²) ].   (15.25)
êi (x, t) = ϕiT (x, t)κ̃i (t) + ei (x, t), ˆ(x, t) = ϕ0T (x, t)κ̃0 (t) + (x, t) (15.26)
êi (x, t) = ei (x, t) + ϕiT (x, t)κ̃i (t), ˆ(x, t) = (x, t) + ϕ0T (x, t)κ̃0 (t), (15.31)
for some constant K̄ . Additionally, from differentiating (15.33) with respect to time,
applying Theorem D.4 in Appendix D to the resulting equations, and using (15.20d),
we obtain
||K̂^u_t||, ||K̂^v_t|| ∈ L_2.   (15.35)
Theorem 15.2 Consider system (13.1), filters (15.2) and the observer of Theorem
15.1. The control law (15.32) guarantees
||η||, ||ψ||, ||φ||, ||P||, ||ν||, ||r ||, ||û||, ||v̂||, ||u||, ||v|| ∈ L2 ∩ L∞ (15.36)
and
||η||, ||ψ||, ||φ||, ||P||, ||ν||, ||r ||, ||û||, ||v̂||, ||u||, ||v|| → 0. (15.37)
From straightforward calculations, one can verify that the adaptive state estimates
(15.18) have the dynamics
û_t(x, t) + Λû_x(x, t) = Σ̂(t)u(x, t) + ω̂(t)v(x, t) + (ϕ(x, t) ◦ κ̂̇(t))1   (15.38a)
v̂_t(x, t) − μv̂_x(x, t) = ϖ̂^T(t)u(x, t) + ϕ_0^T(x, t)κ̂̇_0(t)   (15.38b)
û(0, t) = q̂(t)v(0, t)   (15.38c)
v̂(1, t) = ĉ^T(t)u(1, t) + U(t)   (15.38d)
û(x, 0) = û_0(x)   (15.38e)
v̂(x, 0) = v̂_0(x)   (15.38f)
and
κ̂(t) = [κ̂_1(t)  κ̂_2(t)  . . .  κ̂_n(t)]^T,   (15.40)
β(x, t) = v̂(x, t) − ∫_0^x K̂^u(x, ξ, t)û(ξ, t)dξ
          − ∫_0^x K̂^v(x, ξ, t)v̂(ξ, t)dξ = T[û, v̂](x, t)   (15.41b)
0
where ( K̂ u , K̂ v ) is the on-line solution to the PDE (15.33). As with all backstepping
transformations with uniformly bounded integration kernels, transformation (15.41)
is invertible, with inverse in the form
where T −1 is a Volterra integral operator similar to T . Consider also the target system
α_t(x, t) + Λα_x(x, t) = Σ̂(t)α(x, t) + ω̂(t)β(x, t) + ∫_0^x B̂_1(x, ξ, t)α(ξ, t)dξ
    + ∫_0^x b̂_2(x, ξ, t)β(ξ, t)dξ + Σ̂(t)ê(x, t)
    + ω̂(t)ε̂(x, t) + (ϕ(x, t) ◦ κ̂̇(t))1   (15.43a)
β_t(x, t) − μβ_x(x, t) = −K̂^u(x, 0, t)Λq̂(t)ε̂(0, t) + T[Σ̂ê + ω̂ε̂, ϖ̂^T ê](x, t)
    − ∫_0^x K̂^u_t(x, ξ, t)α(ξ, t)dξ
    − ∫_0^x K̂^v_t(x, ξ, t)T^{−1}[α, β](ξ, t)dξ
    + T[(ϕ ◦ κ̂̇)1, ϕ_0^T κ̂̇_0](x, t)   (15.43b)
α(0, t) = q̂(t)β(0, t) + q̂(t)ε̂(0, t)   (15.43c)
β(1, t) = 0   (15.43d)
α(x, 0) = α_0(x)   (15.43e)
β(x, 0) = β_0(x)   (15.43f)
for α0 , β0 ∈ B([0, 1]), and for some functions B̂1 and b̂2 .
Lemma 15.2 Transformation (15.41) maps system (15.38) in closed loop with con-
trol law (15.32) into the target system (15.43) with B̂1 and b̂2 given as the solution
to the Volterra integral equation
B̂_1(x, ξ, t) = ω̂(t)K̂^u(x, ξ, t) + ∫_ξ^x b̂_2(x, s, t)K̂^u(s, ξ, t)ds   (15.44a)
b̂_2(x, ξ, t) = ω̂(t)K̂^v(x, ξ, t) + ∫_ξ^x b̂_2(x, s, t)K̂^v(s, ξ, t)ds.   (15.44b)
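Equation (15.44) is a Volterra integral equation of the second kind in (B̂_1, b̂_2), and such equations can be solved by successive approximations. The toy example below (an assumed kernel with known solution, not the kernels of (15.44)) shows the fixed-point iteration on a grid.

```python
import numpy as np

# Sketch: successive approximations for a Volterra equation of the second
# kind, f(x) = 1 + \int_0^x f(s) ds, whose exact solution is exp(x).
# The kernel here is a toy assumption; (15.44) has the same fixed-point form.
N = 200
x = np.linspace(0.0, 1.0, N + 1)

def cumtrapz(y, dx):
    """Cumulative trapezoid integral of y from 0 to each grid point."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1])) * dx
    return out

f = np.ones(N + 1)                       # initial guess f_0 = 1
for _ in range(60):                      # Picard iteration
    f = 1.0 + cumtrapz(f, 1.0 / N)

err = np.abs(f - np.exp(x)).max()
print(err)                               # only O(dx^2) quadrature error remains
```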
Proof From differentiating (15.41b) with respect to time, inserting the dynamics
(15.38b) and integrating by parts, we get
… − ∫_0^x K̂^v_ξ(x, ξ, t)μ v̂(ξ, t)dξ + ∫_0^x K̂^v(x, ξ, t)ϖ̂^T(t)û(ξ, t)dξ
  + ∫_0^x K̂^v(x, ξ, t)ϖ̂^T(t)ê(ξ, t)dξ + ∫_0^x K̂^v(x, ξ, t)ϕ_0^T(ξ, t)κ̂̇_0(t)dξ
  + ∫_0^x K̂^u_t(x, ξ, t)û(ξ, t)dξ + ∫_0^x K̂^v_t(x, ξ, t)v̂(ξ, t)dξ.   (15.45)
… + ∫_0^x [−K̂^v_ξ(x, ξ, t)μ + μK̂^v_x(x, ξ, t) − K̂^u(x, ξ, t)ω̂(t)] v̂(ξ, t)dξ
  − [K̂^u(x, x, t)Λ + μK̂^u(x, x, t) + ϖ̂^T(t)] û(x, t)
  − [K̂^v(x, 0, t)μ − K̂^u(x, 0, t)Λq̂(t)] v̂(0, t) + K̂^u(x, 0, t)Λq̂(t)ε̂(0, t)
  − ϖ̂^T(t)ê(x, t) + ∫_0^x K̂^u(x, ξ, t)Σ̂ ê(ξ, t)dξ − ϕ_0^T(x, t)κ̂̇_0(t)
  + ∫_0^x K̂^v(x, ξ, t)ϖ̂^T(t)ê(ξ, t)dξ + ∫_0^x K̂^u(x, ξ, t)ω̂(t)ε̂(ξ, t)dξ
  + ∫_0^x K̂^u_t(x, ξ, t)û(ξ, t)dξ + ∫_0^x K̂^v_t(x, ξ, t)v̂(ξ, t)dξ
  + ∫_0^x K̂^u(x, ξ, t)(ϕ(ξ, t) ◦ κ̂̇(t))1 dξ
  + ∫_0^x K̂^v(x, ξ, t)ϕ_0^T(ξ, t)κ̂̇_0(t)dξ.   (15.47)
Using Eqs. (15.33), the result can be written as (15.43b). Inserting (15.41b) into
(15.43a), we get
û_t(x, t) + Λû_x(x, t) = Σ̂(t)û(x, t) + ω̂(t)v̂(x, t) − ω̂(t)∫_0^x K̂^u(x, ξ, t)û(ξ, t)dξ
  − ω̂(t)∫_0^x K̂^v(x, ξ, t)v̂(ξ, t)dξ + ∫_0^x B̂_1(x, ξ, t)û(ξ, t)dξ
  + ∫_0^x b̂_2(x, ξ, t)v̂(ξ, t)dξ − ∫_0^x b̂_2(x, ξ, t) ∫_0^ξ K̂^u(ξ, s, t)û(s, t)ds dξ
  − ∫_0^x b̂_2(x, ξ, t) ∫_0^ξ K̂^v(ξ, s, t)v̂(s, t)ds dξ
  + Σ̂(t)ê(x, t) + ω̂(t)ε̂(x, t) + (ϕ(x, t) ◦ κ̂̇(t))1,   (15.48)
and using the û-dynamics (15.38a), and changing the order of integration in the
double integrals, we find
0 = ∫_0^x [B̂_1(x, ξ, t) − ω̂(t)K̂^u(x, ξ, t) − ∫_ξ^x b̂_2(x, s, t)K̂^u(s, ξ, t)ds] û(ξ, t)dξ
  + ∫_0^x [b̂_2(x, ξ, t) − ω̂(t)K̂^v(x, ξ, t) − ∫_ξ^x b̂_2(x, s, t)K̂^v(s, ξ, t)ds] v̂(ξ, t)dξ   (15.49)
V̇_2(t) ≤ −λ_1e^{−δ}|α(1, t)|² + h_1β²(0, t) + h_1ε̂²(0, t) − (δλ_1 − h_2)V_2(t)
    + V_3(t) + ||ê(t)||² + ||ε̂(t)||² + ||(ϕ(t) ◦ κ̂̇(t))1||²   (15.52a)
V̇_3(t) ≤ −μβ²(0, t) − [kμ − h_3]V_3(t) + e^kε̂²(0, t) + h_4||ê(t)||²
    + h_5||ε̂(t)||² + l_1(t)V_2(t) + l_2(t)V_3(t)
    + h_6e^k||(ϕ(t) ◦ κ̂̇(t))1||² + h_7e^k||ϕ_0^T(t)κ̂̇_0(t)||²   (15.52b)
V̇_4(t) ≤ −λ_1e^{−δ}|η(1, t)|² + h_8β²(0, t) + h_8ε̂²(0, t) − δλ_1V_4(t)   (15.52c)
V̇_5(t) ≤ h_9e^k|α(1, t)|² + h_9e^k|ê(1, t)|² − μ|ψ(0, t)|² − kμV_5(t)   (15.52d)
V̇_6(t) ≤ −λ_1e^{−δ}|P(1, t)|² − [δλ_1 − 1]V_6(t) + h_{10}V_2(t) + h_{10}||ê(t)||²   (15.52e)
V̇_7(t) ≤ −λ_1e^{−δ}|ν(1, t)|² − (δλ_1 − h_{11})V_7(t)

V_9(t) = Σ_{i=3}^9 a_iV_i(t).   (15.53)
If we let
a_3 = h_9/λ_1,   a_4 = (a_5h_8 + a_3h_1)/μ   (15.54a)
a_5 = a_7 = 1,   a_6 = a_9 = e^{−δ−k},   a_8 = e^{−δ}   (15.54b)
V̇_9(t) ≤ −e^{−δ}λ_1|η(1, t)|² − e^{−δ−k}μ|ψ(0, t)|² − λ_1e^{−δ}|P(1, t)|²
    − λ_1e^{−2δ}|ν(1, t)|² − e^{−δ−k}μ|r(0, t)|² − cV_9(t)
    + [a_3h_1 + a_4e^k + a_5h_8] ε̂²(0, t) + a_4l_1V_2(t) + a_4l_2V_3(t)
    + a_6h_9e^k|ê(1, t)|² + [a_3 + a_4h_4e^k + a_7h_{10} + a_9e^k] ||ê(t)||²
    + [a_3 + a_4h_5e^k + 2a_8] ||ε̂(t)||² + [a_3 + a_4h_6e^k] ||(ϕ(t) ◦ κ̂̇(t))1||²
and
||ε̂(t)||² = (||ε̂(t)||² / (1 + ||ϕ_0(t)||²)) (1 + ||ϕ_0(t)||²) = l_4(t) + l_4(t)||ϕ_0(t)||²
          = l_4(t) + l_4(t)(||ψ(t)||² + ||r(t)||²)
          ≤ l_4(t) + l_4(t)(V_5(t) + V_8(t))   (15.58)
where
l_3(t) = Σ_{i=1}^n ||ê_i(t)||² / (1 + ||ϕ_i(t)||²),   l_4(t) = ||ε̂(t)||² / (1 + ||ϕ_0(t)||²)   (15.59)
and
||ϕ_0^T(t)κ̂̇_0(t)||² ≤ |κ̂̇_0(t)|² ||ϕ_0(t)||² ≤ l_6(t)(V_5(t) + V_8(t))   (15.61)
where
l_5(t) = Σ_{i=1}^n |κ̂̇_i(t)|²,   l_6(t) = |κ̂̇_0(t)|²   (15.62)
V̇_9(t) ≤ −cV_9(t) − λ_1e^{−2δ}|ϕ(1, t)|² − e^{−δ−k}μ|ϕ_0(0, t)|²
    + b_1|ê(1, t)|² + b_2ε̂²(0, t) + l_7(t)V_9(t) + l_8(t)   (15.63)
b_1 = a_6h_9e^k,   b_2 = a_3h_1 + a_4e^k + a_5h_8,   (15.64)
Moreover, for t ≥ t S
|ê(1, t)|² = Σ_{i=1}^n |ê_i(1, t)|² = Σ_{i=1}^n |ê_i(1, t)|² / (1 + |ϕ_i(1, t)|²)
           + Σ_{i=1}^n (|ê_i(1, t)|² / (1 + |ϕ_i(1, t)|²)) |ϕ_i(1, t)|²
           ≤ ζ²(t) + ζ²(t)|ϕ(1, t)|²   (15.66a)
|ε̂(0, t)|² = (|ε̂(0, t)|² / (1 + |ϕ_0(0, t)|²)) (1 + |ϕ_0(0, t)|²)
           ≤ ζ²(t) + ζ²(t)|ϕ_0(0, t)|²   (15.66b)
where we have used the definition of ζ in (15.28). Substituting (15.66) into (15.63),
we obtain
V̇_9(t) ≤ −cV_9(t) − [λ_1e^{−2δ} − b_1ζ²(t)] |ϕ(1, t)|²
    − [μe^{−δ−k} − b_2ζ²(t)] |ϕ_0(0, t)|² + l_7(t)V_9(t) + l_9(t)   (15.67)
where
meaning that |ϕ(1, t)|2 and |ϕ0 (0, t)|2 must be bounded for almost all t ≥ 0, resulting
in
where
Due to the invertibility of the backstepping transformation (15.41), we then also have
15.3 Simulation
System (13.1) and the controller of Theorem 15.2 are implemented for n = 2 using
the system parameters
Λ = [1  0;  0  1.5],   μ = 2   (15.78a)
Σ = [−2  −1;  3  1],   ω = [−2  1]^T,   ϖ = [1  −2]^T   (15.78b)
q = [1  1]^T,   c = [−1  2]^T.   (15.78c)
All initial conditions for the filters and adaptive laws are set to zero. The kernel
equations (15.33) are solved on-line using the method described in Appendix F.2.
Figure 15.1 shows that the norm of the system states and the actuation signal are
bounded and converge to zero. All estimated parameters are seen to be bounded in
Fig. 15.2, as predicted by the theory.
15.4 Notes
The problematic issue is the term c̃ T (t)u(1, t) in (15.80d), due to which we can-
not ensure β(1, ·) ∈ L∞ and pointwise boundedness. If c T , however, is known, so
that c̃ T (t)u(1, t) = 0, then pointwise boundedness and convergence to zero can be
proved.
Reference
Anfinsen H, Aamo OM (2017) Adaptive state feedback stabilization of n + 1 coupled linear hyper-
bolic PDEs. In: 25th Mediterranean Conference on Control and Automation, Valletta, Malta
Chapter 16
Adaptive Output-Feedback: Uncertain
Boundary Condition
16.1 Introduction
We will now consider the n + 1 system (13.1) again, but with the parameter q in
the boundary condition at x = 0 anti-collocated with actuation uncertain. We allow
system (13.1) to have spatially varying coefficients, and assume (13.9) and (13.7), and
derive an observer estimating the system states and q from boundary sensing only.
The derived adaptive observer is also combined with a control law achieving closed-
loop adaptive stabilization from boundary sensing only. This adaptive observer design
was initially proposed in Anfinsen et al. (2016), while the observer was combined
with a control law in Anfinsen and Aamo (2017a).
Considering system (13.1) with the uncertain parameter q, we introduce the filters
and
where
η(x, t) = [η_1(x, t)  . . .  η_n(x, t)]^T   (16.3a)
P(x, t) = {p_{i,j}(x, t)}_{1≤i,j≤n}   (16.3b)
r(x, t) = [r_1(x, t)  . . .  r_n(x, t)]^T   (16.3c)
defined over the triangular domain T given in (1.1a). Note that Eq. (16.6) is the same
as (14.30) with c = 0, and is therefore well-posed. Using these filters, we define the
non-adaptive state estimates
Lemma 16.1 Consider system (13.1) subject to (13.11) and (13.9), and the non-
adaptive state estimates (16.7) generated using the filters (16.1)–(16.2). For t ≥ t F ,
with t F defined in (14.5), we have
ū ≡ u, v̄ ≡ v. (16.8)
for some initial conditions e0 , 0 ∈ B([0, 1]). The dynamics (16.10) has the same
form as the dynamics (14.32), the only difference being that c = 0. The result
then immediately follows from the proof of Theorem 14.2 and the fact that the kernel
equations (16.6) are identical to the kernel equations (14.30) with c = 0.
From the static form (16.7) and the result of Lemma 16.1, one can use standard
gradient or least squares update laws to estimate the unknown parameters in q. First,
we will assume we have some bounds on q.
Assumption 16.1 A bound q̄ on all elements in q is known, so that
where
ε(t) = [e(1, t); ε(0, t)],   h(t) = [u(1, t) − η(1, t); v(0, t) − φ(0, t)],
Ψ(t) = [P(1, t); r^T(0, t)]   (16.13a)
Note that all the elements of h(t) and Ψ (t) are either generated using filters or mea-
sured. We now propose a gradient law with normalization and projection, given as
q̂̇(t) = proj_{q̄}( ΓΨ^T(t)ε̂(t) / (1 + |Ψ(t)|²), q̂(t) ),   q̂(0) = q̂_0   (16.14)
for some gain matrix Γ > 0, some initial guess q̂0 satisfying
Theorem 16.1 Consider system (13.1) with filters (16.1) and (16.2) and injection
gains given by (16.5). The update law (16.14) guarantees
where
ζ(t) = ε̂(t) / √(1 + |Ψ(t)|²).   (16.18)
Moreover, if Ψ (t) and Ψ̇ (t) are bounded and Ψ (t) is persistently exciting (PE), then
q̂ → q (16.19)
exponentially fast.
Proof Property (16.17a) follows from Lemma A.1. Consider the Lyapunov function
candidate
V_1(t) = (1/2) q̃^T(t)Γ^{−1}q̃(t)   (16.20)
where q̃ = q − q̂. Differentiating with respect to time and inserting (16.14), and
using Lemma A.1, we find
V̇_1(t) ≤ −q̃^T(t) Ψ^T(t)ε̂(t) / (1 + |Ψ(t)|²)   (16.21)
16.2 Sensing at Both Boundaries 303
Using (16.12) and (16.16), and noticing from (16.10c) and (16.10d) the fact that
ε ≡ 0, we have
where we have used the definition (16.18). Inequality (16.23) shows that V1 is non-
increasing, and hence has a limit as t → ∞. Integrating from zero to infinity gives
ζ ∈ L2 . (16.24)
|ζ(t)| = |ε̂(t)| / √(1 + |Ψ(t)|²) = |Ψ(t)q̃(t)| / √(1 + |Ψ(t)|²) ≤ |q̃(t)|   (16.25)
|q̂̇(t)| ≤ |Γ| |Ψ^T(t)ε̂(t)| / (1 + |Ψ(t)|²) ≤ |Γ||ζ(t)|   (16.26)
which from (16.17b) proves (16.17c). The property (16.19) follows immediately
from Theorem 4.3.2, part (iii), of Ioannou and Sun (1995).
Using the filters derived above and the boundary parameter estimates generated from
Theorem 16.1, we can generate estimates of the system states u, v by simply replacing
q in (16.7) by its estimate q̂, as follows
Lemma 16.2 Consider system (13.1) and the adaptive state estimates (16.27) gen-
erated using the filters (16.1) and (16.2) and the update law of Theorem 16.1. The
corresponding prediction errors
Proof Using the definitions (16.7), (16.9), (16.27) and (16.28), one immediately
finds
where q̂ is generated using the adaptive observer of Theorem 16.1. The existence of
a unique solution ( K̂ u , K̂ v ) to (16.32) for every time t is guaranteed by Theorem D.4
in Appendix D. Moreover, since the coefficients are uniformly bounded, the solution
is bounded in the sense of
for some constant K̄ . Additionally, from differentiating (16.32) with respect to time,
and applying Theorem D.4, we obtain
||K̂^u_{i,t}||, ||K̂^v_t|| ∈ L_2,   i = 1, . . . , n.   (16.34)
Theorem 16.2 Consider system (13.1) with the update law of Theorem 16.1 and
state estimates û, v̂ generated using Lemma 16.2. The control law (16.31) guarantees
First, we will derive the dynamics of the state estimates (16.27). Their dynamics
are needed for the backstepping design in subsequent sections. By straightforward
differentiation using (16.27) and the filters (16.1) and (16.2), one can verify that the
dynamics satisfy
for initial conditions û 0 , v̂0 ∈ B([0, 1]), and where we have inserted for the measure-
ments (13.1g)–(13.1h).
Consider the backstepping transformation
with inverse
α_t(x, t) + Λ(x)α_x(x, t) = Σ(x)α(x, t) + ω(x)β(x, t) + ∫_0^x B̂_1(x, ξ, t)α(ξ, t)dξ
    + ∫_0^x b̂_2(x, ξ, t)β(ξ, t)dξ − k_1(x)ε̂(0, t)
    + P(x, t)q̂̇(t)   (16.39a)
β_t(x, t) − μ(x)β_x(x, t) = T[P, r^T](x, t)q̂̇(t) + T[k_1, k_2](x, t)ε̂(0, t)
    − K̂^u(x, 0, t)Λ(0)q̂(t)ε̂(0, t)
    − ∫_0^x K̂^u_t(x, ξ, t)α(ξ, t)dξ
    − ∫_0^x K̂^v_t(x, ξ, t)T^{−1}[α, β](ξ, t)dξ   (16.39b)
α(0, t) = q̂(t)β(0, t) + q̂(t)ε̂(0, t)   (16.39c)
β(1, t) = 0   (16.39d)
α(x, 0) = α_0(x)   (16.39e)
β(x, 0) = β_0(x)   (16.39f)
where ( B̂1 , b̂2 ) is the solution to the Volterra integral equation (15.44), and α0 , β0 ∈
B([0, 1]). The following holds.
Lemma 16.3 The backstepping transformation (16.37) and control law (16.31) map
system (16.36) into the target system (16.39).
Proof Differentiating (16.37b) with respect to time, inserting the dynamics (16.36a)–
(16.36b) and integrating by parts, we find
… − [K̂^v(x, 0, t)μ(0) − K̂^u(x, 0, t)Λ(0)q̂(t)] v̂(0, t)
  + K̂^u(x, 0, t)Λ(0)q̂(t)ε̂(0, t)
  + ∫_0^x K̂^u_t(x, ξ, t)û(ξ, t)dξ + ∫_0^x K̂^v_t(x, ξ, t)v̂(ξ, t)dξ.   (16.42)
Using Eq. (16.32) and the inverse transformation (16.38), we obtain (16.39b). Insert-
ing the transformation (16.37) into (16.39a), we find
… − ∫_0^x b̂_2(x, ξ, t) ∫_0^ξ K̂^u(ξ, s, t)û(s, t)ds dξ
  − ∫_0^x b̂_2(x, ξ, t) ∫_0^ξ K̂^v(ξ, s, t)v̂(s, t)ds dξ
  − k_1(x)ε̂(0, t) + P(x, t)q̂̇(t).   (16.43)
Changing the order of integration in the double integrals, (16.43) can be written as
û_t(x, t) + Λ(x)û_x(x, t) − Σ(x)û(x, t) − ω(x)v̂(x, t) + k_1(x)ε̂(0, t) − P(x, t)q̂̇(t)
  = ∫_0^x [B̂_1(x, ξ, t) − ω(x)K̂^u(x, ξ, t) − ∫_ξ^x b̂_2(x, s, t)K̂^u(s, ξ, t)ds] û(ξ, t)dξ
  + ∫_0^x [b̂_2(x, ξ, t) − ω(x)K̂^v(x, ξ, t) − ∫_ξ^x b̂_2(x, s, t)K̂^v(s, ξ, t)ds] v̂(ξ, t)dξ.   (16.44)
Using the Eqs. (15.44) yields the dynamics (16.36a). We find from inserting the back-
stepping transformation (16.37) into (16.36d) that the control law (16.31) produces
the boundary condition (16.39d). The last boundary condition (16.39c) results from
inserting (16.37) into (16.36c) and noting that v(0, t) = β(0, t) + ˆ(0, t).
z T (1, t) = 0 (16.46d)
W (x, 0) = W0 (x) (16.46e)
z(x, 0) = z 0 (x) (16.46f)
Proof The filters (16.2) have the same structure as the error dynamics (16.10), which
in turn have the same form as the error dynamics (14.32). The proof of the trans-
formation is therefore similar to the proof of Theorem 14.2, and is omitted. The
boundary condition (16.46c) follows from noting that v(0, t) = β(0, t) + ˆ(0, t).
First, we state the following inequalities, which result from the fact that the back-
stepping transformations are invertible and also act on the individual columns of P
and
Consider systems (16.39) and (16.46), and the Lyapunov function candidate
V_2(t) = Σ_{i=3}^6 a_iV_i(t)   (16.51)
then
V̇_2(t) ≤ −cV_2(t) − e^{−k−δ}|z(0, t)|² − e^{−δ}|W(1, t)|²
    + bε̂²(0, t) + l_5(t)V_2(t)   (16.57)
where
b = h 3 + a4 h 4 ek + 2n (16.58)
ε̂²(0, t) = (ε̂²(0, t) / (1 + |r(0, t)|² + |P(1, t)|²)) (1 + |r(0, t)|² + |P(1, t)|²).   (16.59)
where M̄ bounds the kernel M. Expressed using the Lyapunov function V6 , we find
ε̂²(0, t) = (ε̂²(0, t) / (1 + |Ψ(t)|²)) (|z(0, t)|² + 2|W(1, t)|²) + l_8(t)V_6(t) + l_9(t)   (16.62)
where l8 and l9 are integrable, and where we have used the definition of Ψ stated in
(16.13). Moreover, we have
and hence
ε̂²(0, t) ≤ ζ²(t)(|z(0, t)|² + 2|W(1, t)|²) + l_8(t)V_6(t) + l_9(t)   (16.64)
where we used the definition of ζ in (16.18). Now inserting (16.64) into (16.57), we
get
V̇_2(t) ≤ −cV_2(t) + l_{10}(t)V_2(t) + l_{11}(t) − [e^{−k−δ} − bζ²(t)] |z(0, t)|²
    − [e^{−δ} − bζ²(t)] |W(1, t)|²   (16.65)
ζ²(t) = |ε̂(t)|² / (1 + |Ψ(t)|²) = |Ψ(t)q̃(t)|² / (1 + |Ψ(t)|²) ≤ |q̃(t)|² ≤ 2γ̄V_1(t)   (16.66)
Since ||W ||, ||z|| ∈ L∞ ∩ L2 , z(0, t) and W (1, t) must be bounded almost every-
where, so that
V̇2 (t) ≤ −cV2 (t) + l10 (t)V2 (t) + l12 (t) (16.69)
where
l12 (t) = l11 (t) + bζ 2 (t)|z(0, t)|2 + bζ 2 (t)|W (1, t)|2 (16.70)
Due to the invertibility of the transformations (16.37) and (16.45), we then also have
From (16.27)
α_t(x, t) + Λ(x)α_x(x, t) = Σ(x)α(x, t) + ω(x)β(x, t) + ∫_0^x B_1(x, ξ)α(ξ, t)dξ
    + ∫_0^x b_2(x, ξ)β(ξ, t)dξ   (16.75a)
β_t(x, t) − μ(x)β_x(x, t) = 0   (16.75b)
α(0, t) = qβ(0, t)   (16.75c)
β(1, t) = ∫_0^1 K̂^u(1, ξ, t)û(ξ, t)dξ + ∫_0^1 K̂^v(1, ξ, t)v̂(ξ, t)dξ
    − ∫_0^1 K^u(1, ξ)u(ξ, t)dξ − ∫_0^1 K^v(1, ξ)v(ξ, t)dξ   (16.75d)
α(x, 0) = α_0(x)   (16.75e)
β(x, 0) = β_0(x)   (16.75f)
where we have inserted the control law (16.31). We observe that since ||u||, ||v||, ||û||,
||v̂|| ∈ L∞ ∩ L2 and ||u||, ||v||, ||û||, ||v̂|| → 0 in the boundary condition (16.75d),
we must have ||β||∞ ∈ L∞ ∩ L2 and ||β||∞ → 0. Due to the cascaded structure of
system (16.75), we must also ultimately have ||α||∞ ∈ L∞ ∩ L2 and ||α||∞ → 0.
Due to the invertibility of the transformation (14.6), we therefore also have
This also implies that all filters, being generated from measurements of u and v, are
bounded, square integrable and converge to zero pointwise in space.
16.3 Simulations
System (13.1), the observer of Theorem 16.1 and the adaptive control law of Theorem
16.2 are implemented using the transport speeds
λ1 = λ2 = μ = 1, (16.77)
[Fig. 16.1: left panel, state norm ||u|| + ||v|| vs. Time [s]; right panel, actuation signal U vs. Time [s]]
Fig. 16.2 Left: Actual (solid black) and estimated (dashed red) boundary parameter q1 . Right:
Actual (solid black) and estimated (dashed red) boundary parameter q2
The kernel equations (16.32) are solved online by mapping the equations to inte-
gral equations (details on how this is done can be found in the appendix of Anfinsen
and Aamo (2017a)).
In the closed-loop case, the system state norm and actuation signal are seen in
Fig. 16.1 to be bounded and to converge to zero. The estimated boundary parameters as
generated using Theorem 16.1 are shown in Fig. 16.2 to converge to their true values,
although this has not been proved for the closed loop case.
16.4 Notes
References
17.1 Introduction
We revisit system (13.1) again with assumptions (13.9) and (13.10), and sensing
(17.1g) anti-collocated with actuation, that is
Here, we also allow the measurement and the actuation signal U to be scaled by
arbitrary (nonzero) constants, k1 and k2 . The system parameters are in the form
Λ(x) = diag {λ_1(x), λ_2(x), . . . , λ_n(x)},   Σ(x) = {σ_{ij}(x)}_{1≤i,j≤n}   (17.2a)
ω(x) = [ω_1(x)  ω_2(x)  . . .  ω_n(x)]^T   (17.2b)
ϖ(x) = [ϖ_1(x)  ϖ_2(x)  . . .  ϖ_n(x)]^T   (17.2c)
q = [q_1  q_2  . . .  q_n]^T,   c = [c_1  c_2  . . .  c_n]^T   (17.2d)
The initial conditions are assumed to satisfy u 0 , v0 ∈ B([0, 1]). We assume that (13.8)
holds for the transport speeds, that is
−μ(x) < 0 < λ1 (x) < λ2 (x) < · · · < λn (x), ∀x ∈ [0, 1]. (17.4)
We now seek to solve the model reference adaptive control (MRAC) problem assum-
ing
are uncertain. However, as before, we assume that the transport delays and the sign of
the product k_1k_2 are known, as formally stated in the following assumption.
Assumption 17.1 The following quantities are known,
t_{u,i} = λ̄_i^{−1} = ∫_0^1 dγ/λ_i(γ),   t_v = μ̄^{−1} = ∫_0^1 dγ/μ(γ),   sign(k_1k_2),   (17.6)
for i = 1, 2, . . . , n.
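For a given speed profile, the quantities in (17.6) are plain quadratures. A small sketch (the speed λ(x) = 1 + x is an illustrative assumption, chosen so the delay is ln 2) is:

```python
import numpy as np

# Sketch: the transport delays in (17.6) computed by the trapezoid rule.
# With the illustrative speed lambda(x) = 1 + x (an assumption, not from
# the book), t_u = \int_0^1 dx/(1 + x) = ln 2.
x = np.linspace(0.0, 1.0, 2001)
integrand = 1.0 / (1.0 + x)
t_u = float(np.sum(0.5 * (integrand[1:] + integrand[:-1])) * (x[1] - x[0]))
print(abs(t_u - np.log(2.0)))            # small trapezoid-rule error
```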
Mathematically, the MRAC problem is stated as designing a control input U (t)
that achieves
lim_{t→∞} ∫_t^{t+T} (y_0(s) − y_r(s))² ds = 0   (17.7)
for some T > 0, where the signal yr is generated using the reference model
for some initial condition b0 ∈ B([0, 1]) and a reference signal r of choice. The
signal r is assumed to be bounded, as formally stated in the following assumption.
Assumption 17.2 The reference signal r (t) is known for all t ≥ 0, and there exists
a constant r̄ so that
|r (t)| ≤ r̄ (17.9)
for all t ≥ 0.
Moreover, all other signals, such as the system states and other auxiliary (filter)
states should be bounded in the L 2 -sense.
17.2 Model Reference Adaptive Control 319
for the states α̌, β̌ defined for x ∈ [0, 1], t ≥ 0, and for some new parameters
m 1 , m 2 , m 3 and initial conditions α̌0 , β̌0 ∈ B([0, 1]).
where
K uu (x, ξ) K uv (x, ξ)
K (x, ξ) = (17.14)
K u (x, ξ) K v (x, ξ)
Proof This result follows directly from the alternative proof of Theorem 14.1, where
the backstepping transformation
[α̌(x, t); β̌(x, t)] = [u(x, t); v(x, t)] − ∫_0^x K(x, ξ) [u(ξ, t); v(ξ, t)] dξ   (17.15)
is shown to map (17.1a)–(17.1c) into the form (17.10a)–(17.10c) with m 1 in the form
(17.11a). The inverse of (17.15) is from Theorem 1.2 given as
x
u(x, t) α̌(x, t) α̌(ξ, t)
= + L(x, ξ) dξ (17.16)
v(x, t) β̌(x, t) 0 β̌(ξ, t)
with L given by (17.13). Inserting (17.16) into (17.1d) immediately yields (17.10d)
with m 2 and m 3 given by (17.11b)–(17.11c).
We now use a transformation to get rid of the spatially varying transport speeds in
(17.10), and consider the system
α_t(x, t) + Λ̄α_x(x, t) = m_4(x)β(0, t)   (17.17a)
βt (x, t) − μ̄βx (x, t) = 0 (17.17b)
α(0, t) = qβ(0, t) (17.17c)
β(1, t) = c^Tα(1, t) + ρU(t) + ∫_0^1 m_5^T(ξ)α(ξ, t)dξ
          + ∫_0^1 m_6(ξ)β(ξ, t)dξ   (17.17d)
0
α(x, 0) = α0 (x) (17.17e)
β(x, 0) = β0 (x) (17.17f)
y0 (t) = β(0, t) (17.17g)
for the system states α, β, some new parameters ρ, m 4 , m 5 , m 6 and initial conditions
α0 , β0 ∈ B([0, 1]).
ρ = k_1k_2   (17.18a)
m_{4,i}(x) = m_{1,i}(h_{α,i}^{−1}(x))   (17.18b)
m_{5,i}(x) = −t_{u,i}λ_i(h_{α,i}^{−1}(x)) m_{2,i}(h_{α,i}^{−1}(x))   (17.18c)
m_6(x) = −t_vμ(h_β^{−1}(x)) m_3(h_β^{−1}(x))   (17.18d)
for j = 1, 2, 4, 5, where
x x
dγ dγ
h α,i (x) = λ̄i , h β (x) = μ̄ . (17.20)
0 λi (γ) 0 μ(γ)
αi (x, t) = k2 α̌i (h −1
α,i (x), t), β(x, t) = k2 β̌(h −1
β (x), t), (17.21)
for i = 1, . . . , n, which are invertible since the functions (17.20) are strictly increasing.
The rest of the proof follows immediately from insertion and noting that
h'_{α,i}(x) = λ̄_i/λ_i(x),    h'_β(x) = μ̄/μ(x)    (17.22a)
h_{α,i}(0) = h_β(0) = 0,    h_{α,i}(1) = h_β(1) = 1    (17.22b)
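In an implementation, the rescalings (17.20) and their inverses must be evaluated numerically. A minimal sketch (this is our illustration, not taken from the book; the grid size, trapezoidal quadrature and linear interpolation are arbitrary choices, and λ̄ is fixed by h(1) = 1 as in (17.22b)):

```python
import numpy as np

def space_rescaling(lam, n_grid=1001):
    """Build h(x) = lam_bar * int_0^x dg / lam(g) from (17.20) and a
    numerical inverse.  By (17.22b), h(1) = 1, which fixes
    lam_bar = 1 / int_0^1 dg / lam(g)."""
    x = np.linspace(0.0, 1.0, n_grid)
    integrand = 1.0 / lam(x)
    # cumulative trapezoidal rule for int_0^x dg / lam(g)
    cum = np.concatenate(
        ([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2.0 * np.diff(x))))
    lam_bar = 1.0 / cum[-1]
    h_vals = lam_bar * cum
    # h is strictly increasing, so both maps are monotone interpolations
    h = lambda s: np.interp(s, x, h_vals)
    h_inv = lambda s: np.interp(s, h_vals, x)
    return h, h_inv, lam_bar
```

For a constant speed profile, h reduces to the identity; for varying profiles, h "straightens" the transport so that the rescaled state travels at the constant speed λ̄.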
ζ_t(x, t) + Λ̄ ζ_x(x, t) = 0    (17.23a)
β_t(x, t) − μ̄ β_x(x, t) = 0    (17.23b)
ζ(0, t) = 1 β(0, t)    (17.23c)
β(1, t) = ν^T ζ(1, t) + ρ U(t) + ∫_0^1 κ(ξ) ζ_1(ξ, t) dξ + ∫_0^1 m_6(ξ) β(ξ, t) dξ + ε(t)    (17.23d)
intentionally chosen as
ζ0 ≡ 0, (17.26)
where
δ(x, a) = 1 for x ≤ a, and δ(x, a) = 0 otherwise,    (17.28)
and some signal ε(t) defined for t ≥ 0, and where 1 is a column vector of length n
with all elements equal to one.
Lemma 17.3 Consider systems (17.17) and (17.23). The signal ε(t), which is char-
acterized in the proof, is zero for t ≥ dα,1 . Moreover, stabilization of (17.23) implies
stabilization of (17.17). More precisely,
Consider the corresponding error e(x, t) = α(x, t) − ᾱ(x, t). It can straightfor-
wardly be proved that e satisfies the dynamics
e_t(x, t) + Λ̄ e_x(x, t) = 0,    e(0, t) = 0,    e(x, 0) = e_0(x),    (17.32)
where
ε_i(t) = c_i e_i(1, t) + ∫_0^1 m_{5,i}(ξ) e_i(ξ, t) dξ    (17.35)
is zero for t ≥ d_{α,i} since the e_i's are zero. Since all components of the filter ζ essentially are transport equations with the same input y_0, the one with the slowest transport speed, ζ_1, contains all the information in the other n − 1 states in ζ (recall that the initial conditions are intentionally set to zero). Thus, the integrals
∫_0^1 m_{7,i}(ξ) ζ_i(ξ, t) dξ    (17.36)
satisfy
∫_0^1 m_{7,i}(ξ) ζ_i(ξ, t) dξ = (λ̄_i/λ̄_1) ∫_0^{λ̄_1/λ̄_i} m_{7,i}((λ̄_i/λ̄_1) ξ) ζ_1(ξ, t) dξ    (17.37)
and hence
∫_0^1 m_7^T(ξ) ζ(ξ, t) dξ = Σ_{i=1}^n (λ̄_i/λ̄_1) ∫_0^1 δ(ξ, λ̄_1/λ̄_i) m_{7,i}((λ̄_i/λ̄_1) ξ) ζ_1(ξ, t) dξ.    (17.38)
Lemma 17.4 The variables (17.40) satisfy the dynamics (17.41) where w0 = ζ0 −
a0 , ž 0 = β0 − b0 .
Proof The proof follows straightforwardly from the dynamics (17.23) and (17.39)
and is therefore omitted.
which yields the new parameter θ as (17.43). The remaining details are omitted.
and define
p0 (x, t) = P(0, x, t), m 0 (x, t) = M(0, x, t), n 0 (x, t) = N (0, x, t). (17.47)
Lemma 17.6 Consider system (17.42) and the non-adaptive estimate (17.48) of the
state z. For t ≥ t F , where t F is defined in (14.5), we have
z̄ ≡ z. (17.49)
for some function 0 ∈ B([0, 1]). By differentiating (17.48) with respect to time and
space respectively, we find
and
which with (17.42b) yields the dynamics (17.51). Inserting x = 1 into (17.48), we
find
To ensure that the adaptive laws to be designed next generate bounded estimates of
the uncertain parameters, we assume the following.
Assumption 17.3 Bounds on the uncertain parameters ν, ρ, θ and κ are known. That is, we know constants ρ̲, ρ̄, θ̲, θ̄, κ̲, κ̄, ν̲_i, ν̄_i, i = 1, . . . , n so that
ν̲_i ≤ ν_i ≤ ν̄_i, i = 1, . . . , n,    ρ̲ ≤ ρ ≤ ρ̄    (17.56a)
θ̲ ≤ θ(x) ≤ θ̄,    κ̲ ≤ κ(x) ≤ κ̄,    ∀x ∈ [0, 1],    (17.56b)
where
ν = (ν_1 ν_2 . . . ν_n)^T,    ν̲ = (ν̲_1 ν̲_2 . . . ν̲_n)^T    (17.57a)
ν̄ = (ν̄_1 ν̄_2 . . . ν̄_n)^T    (17.57b)
with
0 ∉ [ρ̲, ρ̄].    (17.58)
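The adaptive laws below use a projection operator to keep the estimates inside these known bounds. As an illustration of the mechanism only (a simple elementwise zeroing variant — not necessarily the exact operator of Appendix A, though it shares the key property that components leaving the admissible interval are frozen):

```python
import numpy as np

def proj(tau, theta_hat, lo, hi):
    """Elementwise projection: the update tau passes through unchanged,
    except components that would push theta_hat outside [lo, hi] are
    zeroed.  Initialized inside the bounds, the estimate stays inside
    them for all time."""
    tau = np.asarray(tau, dtype=float).copy()
    theta_hat = np.asarray(theta_hat, dtype=float)
    tau[(theta_hat <= lo) & (tau < 0.0)] = 0.0
    tau[(theta_hat >= hi) & (tau > 0.0)] = 0.0
    return tau
```

This variant also satisfies the inequality −θ̃^T proj(τ, θ̂) ≤ −θ̃^T τ used in the Lyapunov analysis: a zeroed component only ever removes a term of favorable sign.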
where we have used (17.47), and defined the estimation errors ν̃ = ν − ν̂, ρ̃ = ρ − ρ̂,
θ̃ = θ − θ̂, κ̃ = κ − κ̂. We propose the following adaptive laws
for some design matrix Γ1 > 0, and design gains γ2 > 0 and γ3 (x), γ4 (x) > 0 for
all x ∈ [0, 1], where
where
ν̂_0 = (ν̂_{1,0} ν̂_{2,0} . . . ν̂_{n,0})^T    (17.65)
Lemma 17.7 The adaptive laws (17.62) with initial conditions satisfying (17.64)
guarantee the following properties
σ(t) = ε̂(0, t)/(1 + f²(t))    (17.67)
Proof The properties (17.66a)–(17.66d) follow from the projection operator and the
initial conditions (17.64). Consider the Lyapunov function candidate
V(t) = (1/2) ν̃^T(t) Γ_1^{−1} ν̃(t) + (1/(2γ_2)) ρ̃²(t) + (1/2) ∫_0^1 γ_3^{−1}(x) θ̃²(x, t) dx + (1/2) ∫_0^1 γ_4^{−1}(x) κ̃²(x, t) dx.    (17.68)
Differentiating with respect to time, inserting the adaptive laws and using the property
−ν̃^T proj_{ν̲,ν̄}(τ, ν̂) ≤ −ν̃^T τ (Lemma A.1 in Appendix A), and similarly for ρ̂, θ̂ and
κ̂, we get
V̇(t) ≤ − (ε̂(0, t)/(1 + f²(t))) [ (h(0, t) + ϑ(0, t))^T ν̃(t) + ρ̃(t) ψ(0, t) + ∫_0^1 θ̃(x, t)(φ(1 − x, t) + n_0(x, t)) dx + ∫_0^1 κ̃(x, t)(p_0(x, t) + m_0(x, t)) dx ].    (17.69)
Using (17.61) with ε(0, t) = 0 for t ≥ t_F, and inserting this into (17.69), we obtain
for t ≥ tu,1 + tv , where σ is defined in (17.67). This proves that V is bounded and
nonincreasing, and hence has a limit V∞ as t → ∞. Integrating (17.70) in time from
zero to infinity gives
∫_0^∞ σ²(t) dt = V(0) − V_∞ ≤ V(0) < ∞    (17.71)
and hence
σ ∈ L2 . (17.72)
where ẑ is generated using (17.59), and ĝ is the on-line solution to the Volterra
integral equation
ĝ(x, t) = ∫_0^x ĝ(x − ξ, t) θ̂(ξ, t) dξ − θ̂(x, t),    (17.76)
with ρ̂, θ̂, κ̂ and ν̂ generated from the adaptive laws (17.62).
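Since (17.76) must be solved on-line for each new estimate θ̂, an implementation needs a fast numerical scheme. A hedged sketch using successive approximations on a uniform grid (grid size, iteration count and trapezoidal quadrature are our illustrative choices; the Volterra convolution structure makes the fixed-point iteration convergent):

```python
import numpy as np

def solve_g(theta_hat, n_iter=50):
    """Successive-approximation solution of the Volterra equation (17.76),
        g(x) = int_0^x g(x - s) theta(s) ds - theta(x),
    on a uniform grid of [0, 1]."""
    n = len(theta_hat)
    dx = 1.0 / (n - 1)
    g = -np.asarray(theta_hat, dtype=float).copy()  # zeroth iterate
    for _ in range(n_iter):
        g_new = np.empty(n)
        g_new[0] = -theta_hat[0]  # at x = 0 the integral vanishes
        for i in range(1, n):
            # trapezoidal rule for int_0^{x_i} g(x_i - s) theta(s) ds
            conv = g[i::-1] * theta_hat[:i + 1]
            integral = dx * (conv.sum() - 0.5 * (conv[0] + conv[-1]))
            g_new[i] = integral - theta_hat[i]
        g = g_new
    return g
```

For constant θ ≡ c, the equation reduces to the ODE g' = c g with g(0) = −c, so the exact solution g(x) = −c e^{cx} provides a convenient check of the scheme.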
Theorem 17.1 Consider system (17.1), filters (17.45) and (17.47), the augmented
reference model (17.39), and the adaptive laws (17.62). Suppose Assumption 17.2
holds. Then, the control law (17.75) with ĝ generated from (17.76) guarantees (17.7)
and
17.2.5 Backstepping
ẑ_t(x, t) − μ̄ ẑ_x(x, t) = μ̄ θ̂(x, t) z(0, t) + ν̂˙^T(t)[h(x, t) + ϑ(x, t)] + ρ̂˙(t) ψ(x, t)
  + ∫_0^1 κ̂_t(ξ, t)[P(x, ξ, t) + M(x, ξ, t)] dξ
  + ∫_0^1 θ̂_t(ξ, t) N(x, ξ, t) dξ
  + ∫_x^1 θ̂_t(ξ, t) φ(1 − (ξ − x), t) dξ    (17.78a)
ẑ(1, t) = ν̂^T(t)[w(1, t) + a(1, t)] + ρ̂(t) U(t) − r(t)
  + ∫_0^1 κ̂(ξ, t)[w_1(ξ, t) + a_1(ξ, t)] dξ
  + ∫_0^1 θ̂(ξ, t) b(1 − ξ, t) dξ    (17.78b)
ẑ(x, 0) = ẑ_0(x)    (17.78c)
for some initial condition ẑ 0 ∈ B([0, 1]). Consider the backstepping transformation
η(x, t) = ẑ(x, t) − ∫_0^x ĝ(x − ξ, t) ẑ(ξ, t) dξ = T[ẑ](x, t)    (17.79)
where ĝ is the on-line solution to the Volterra integral equation (17.76). Consider
also the target system
Lemma 17.8 The backstepping transformation (17.79) and controller (17.75) with
ĝ satisfying (17.76) map (17.78) into (17.80).
Proof Differentiating (17.79) with respect to time and space, respectively, inserting
the dynamics (17.78a), integrating by parts and inserting the result into (17.78a) give
η_t(x, t) − μ̄ η_x(x, t) = μ̄ [ ĝ(x, t) + θ̂(x, t) − ∫_0^x ĝ(x − ξ, t) θ̂(ξ, t) dξ ] ẑ(0, t)
  + μ̄ θ̂(x, t) ε̂(0, t) − ∫_0^x ĝ(x − ξ, t) μ̄ θ̂(ξ, t) dξ ε̂(0, t)
  + ∫_x^1 θ̂_t(ξ, t) φ(1 − (ξ − x), t) dξ − ∫_0^x ĝ(x − ξ, t) ∫_ξ^1 θ̂_t(s, t) φ(1 − (s − ξ), t) ds dξ
  + ν̂˙^T(t)[h(x, t) + ϑ(x, t)] − ∫_0^x ĝ(x − ξ, t) ν̂˙^T(t)[h(ξ, t) + ϑ(ξ, t)] dξ
  + ρ̂˙(t) ψ(x, t) − ∫_0^x ĝ(x − ξ, t) ρ̂˙(t) ψ(ξ, t) dξ
  + ∫_0^1 κ̂_t(ξ, t)[P(x, ξ, t) + M(x, ξ, t)] dξ − ∫_0^x ĝ(x − ξ, t) ∫_0^1 κ̂_t(s, t)[P(ξ, s, t) + M(ξ, s, t)] ds dξ
  + ∫_0^1 θ̂_t(ξ, t) N(x, ξ, t) dξ − ∫_0^x ĝ(x − ξ, t) ∫_0^1 θ̂_t(s, t) N(ξ, s, t) ds dξ.
Using (17.76) and the notation T defined in (17.79), we obtain (17.80a). Substituting
x = 1 into (17.79), using (17.78b) and the control law (17.75) yield (17.80b).
Since r is bounded by Assumption 17.2, the signals of the reference model (17.39),
the filters M, N and ϑ and the derived filters m 0 and n 0 are all bounded pointwise in
space. Hence
||a||, ||b||, ||ϑ||, ||M||, ||N ||, ||m 0 ||, ||n 0 || ∈ L∞ (17.82a)
||a||∞ , ||b||∞ , ||ϑ||∞ , ||M||∞ , ||N ||∞ , ||m 0 ||∞ , ||n 0 ||∞ ∈ L∞ . (17.82b)
Moreover, we have
and
V̇_1(t) ≤ −η²(0, t) − (1/4) μ̄ V_1(t) + l_1(t) V_1(t) + l_2(t) V_3(t) + l_3(t) V_4(t) + l_4(t) V_5(t) + l_5(t) V_6(t) + l_6(t) + b_1 ε̂²(0, t)    (17.86a)
V̇_2(t) ≤ −|w(1, t)|² + 4n η²(0, t) + 4n ε̂²(0, t) − (1/2) λ̄_1 V_2(t)    (17.86b)
V̇_3(t) ≤ 4 η²(0, t) + 4 ε̂²(0, t) − (1/2) μ̄ V_3(t)    (17.86c)
V̇_4(t) ≤ 2 |w(1, t)|² − |h(0, t)|² − (1/2) μ̄ V_4(t)    (17.86d)
V̇_5(t) ≤ b_1 V_2(t) − ||p_0||² − (1/2) μ̄ V_5(t)    (17.86e)
V̇_6(t) ≤ b_2 V_1(t) + b_3 V_2(t) − (1/2) μ̄ V_6(t) + b_4 |w(1, t)|² − ψ²(0, t) + b_5.    (17.86f)
We start by proving boundedness of all signals, by forming
we obtain
V̇_7(t) ≤ −b_6 η²(0, t) − c V_7(t) + b_7 ε̂²(0, t) − |h(0, t)|² − ψ²(0, t) − ||p_0(t)||² + b_5 + l_7(t) V_7(t) + l_8(t)    (17.90)
for some positive constants c, b_6 and b_7. Consider the term in ε̂²(0, t), and expand it as follows
for some nonnegative, bounded, integrable functions l9 and l10 , and some constant
b5 . From (17.70), we have
and from the definition of V in (17.68) and the inequality (17.73), we have
for some positive constant k. It then follows from Lemma B.4 in Appendix B that
V7 ∈ L∞ and hence
||ẑ|| ∈ L∞ . (17.96)
||ψ|| ∈ L∞ , (17.97)
||α|| ∈ L∞ , (17.99)
while the invertibility of the transformations of Lemmas 17.1 and 17.2 yields
We now prove square integrability of the system states' L²-norms. Since ||w|| ∈ L∞, w²(x, t) for every fixed t must be bounded almost everywhere in the domain x ∈ [0, 1]. Specifically, w²(0, t) is bounded for almost all t ∈ [0, ∞), and hence
σ² ψ²(0, t) ∈ L_1    (17.101)
V̇_8(t) ≤ −b_8 η²(0, t) − c̄ V_8(t) + b_9 ε̂²(0, t) − |h(0, t)|² − ||p_0(t)||² + l_{11}(t) V_8(t) + l_{12}(t)    (17.104)
for some nonnegative, bounded integrable functions l11 , l12 , and a positive constant
b9 . Inserting (17.91) yields
for some positive constant c̄, and nonnegative integrable functions l13 and l14 , where
we utilized (17.101). Now Lemma B.4 in Appendix B yields V_8 ∈ L_1 ∩ L_∞ and thus
The above results then imply that h(0, t) and || p0 (t)|| are bounded for almost all t,
and hence
V̇8 (t) ≤ −c̄V8 (t) + l13 (t)V8 (t) + l15 (t) (17.108)
for t ≥ t_v, implying ∫_t^{t+T} z²(0, s) ds → 0 for any T > 0, which from the definition of z(0, t) is equivalent to the tracking goal (17.7).
The adaptive output feedback controller is obtained from the model reference adaptive controller of Theorem 17.1 by simply setting r ≡ 0. This controller also gives
the desirable property of square integrability and asymptotic convergence to zero of
the system states in the L 2 -sense. Consider the control law
U(t) = (1/ρ̂(t)) [ ∫_0^1 ĝ(1 − ξ, t) ẑ(ξ, t) dξ − ν̂^T(t) w(1, t) − ∫_0^1 κ̂(ξ, t) w_1(ξ, t) dξ ]    (17.113)
where ẑ is generated using (17.59), and ĝ is the on-line solution to the Volterra integral equation (17.76) with ρ̂, θ̂, κ̂ and ν̂ generated using the adaptive laws (17.62).
Theorem 17.2 Consider system (17.1), filters (17.45) and (17.47), and the adaptive
laws (17.62). Let r ≡ 0. Then, the control law (17.113) with ĝ generated from (17.76)
guarantees
17.3 Adaptive Output Feedback Stabilization 339
and
and
and
17.4 Simulations
System (17.1) and the controllers of Theorems 17.1 and 17.2 are implemented for
n = 2. The system parameters are set to
System (17.1) with the given parameters is unstable in the open loop case. The
adaptation gains are set to
Γ1 = I 2 , γ2 = 1, γ3 = γ4 ≡ 1, (17.122)
17.4.1 Tracking
The controller of Theorem 17.1 is here applied, with the reference signal r set to
r(t) = 1 + sin((π/10) t) + 2 sin((√2/2) t).    (17.124)
In the adaptive control case, the system state norm and actuation signal are seen in
Fig. 17.1 to be bounded. The estimated parameters are also seen to be bounded in
Fig. 17.2.
The tracking objective (17.7) is seen in Fig. 17.3 to be achieved after approximately 15 s. The violent transient observed from the onset of control at t = 0 until tracking is achieved at around t = 12 is due to the choice of initial conditions, which are deliberately chosen to induce transients so that the theoretical convergence results are clearly demonstrated. In practice, though, transients would be avoided by applying an appropriate start-up procedure to the system.
Fig. 17.1 State norm ||u_1|| + ||u_2|| + ||v|| (left) and actuation signal (right) for the tracking case of Theorem 17.1
Fig. 17.2 Estimated parameters ν̂_1, ν̂_2 and ρ̂ for the tracking case of Theorem 17.1
Fig. 17.3 Reference signal (solid black) and measured signal (dashed red)
17.4.2 Stabilization
To demonstrate the properties of Theorem 17.2, the reference signal is here set
identically zero. It is seen from Fig. 17.4 that the state norms and actuation signal in
this case are bounded and converge to zero. The convergence time is approximately
8 s.
17.5 Notes
The above result is arguably the strongest result concerning n + 1 systems, stabilizing a general class of n + 1 coupled linear hyperbolic PDEs from a single boundary measurement only.
Fig. 17.4 Left: State norm ||u_1|| + ||u_2|| + ||v|| for the stabilization case of Theorem 17.2. Right: Actuation signal for the stabilization case of Theorem 17.2
In Anfinsen and Aamo (2017), it is shown that under the assumption of having
the parameter c known, and u(1, t) measured, the above adaptive controller can be
slightly simplified and some additional, interesting properties can be proved. First,
it is shown that the n + 1 system is from the controller’s perspective equal to a 2 × 2
system. Hence, the controller order does not increase with n if c is known and u(1, t)
is measured. Secondly, none of the values tu,i are required to be known, only an upper
bound is required. Lastly, pointwise boundedness and convergence to zero can be
proved, none of which were proved for the controller of Theorem 17.2.
Reference
This part considers the most general class of PDEs treated in this book. It contains
systems of n + m linear coupled hyperbolic PDEs of which n equations convect
information from x = 1 to x = 0, and m equations convect information in the oppo-
site direction. Such systems are usually stated in the form (1.25), which we for the
reader’s convenience restate here
defined over x ∈ [0, 1], t ≥ 0. The system parameters are in the form
satisfy
The signal
U(t) = (U_1(t) U_2(t) . . . U_m(t))^T,    (18.7)
−μ_1(x) < −μ_2(x) < · · · < −μ_m(x) < 0 < λ_1(x) ≤ λ_2(x) ≤ · · · ≤ λ_n(x),    ∀x ∈ [0, 1]    (18.8)
and
−μ_1(x) < −μ_2(x) < · · · < −μ_m(x) < 0 < λ_1(x) < λ_2(x) < · · · < λ_n(x),    ∀x ∈ [0, 1]    (18.9)
for all x ∈ [0, 1]. In addition, we will frequently, without loss of generality, assume the diagonal elements of Σ^{++} and Σ^{−−} to be zero, hence
σ_{ii}^{++} ≡ 0, i = 1, 2, . . . , n,    σ_{jj}^{−−} ≡ 0, j = 1, 2, . . . , m.    (18.10)
In Chap. 19, non-adaptive state-feedback and boundary observers are derived, and
these are combined into output-feedback solutions. We also solve an output tracking
problem, where the measurement (18.11a) anti-collocated with actuation is sought
to track an arbitrary, bounded reference signal. The resulting state-feedback track-
ing controller can be combined with the boundary observers into output-feedback
tracking controllers.
The problem of stabilizing system (18.1) when the boundary parameters Q_0 and C_1 in (18.1c)–(18.1d) are uncertain is solved in Chap. 20. The method requires measurements to be taken at both boundaries. The problems solved for n + 1 systems in Chap. 15 and for 2 × 2 systems in Chap. 10 are straightforward to extend to n + m systems, and are therefore omitted.
Chapter 19
Non-adaptive Schemes
19.1 Introduction
where
Λ^−(x) K^u_x(x, ξ) − K^u_ξ(x, ξ) Λ^+(ξ) = K^u(x, ξ) Σ^{++}(ξ) + K^u(x, ξ)(Λ^+)'(ξ) + K^v(x, ξ) Σ^{−+}(ξ)    (19.3a)
Λ^−(x) K^v_x(x, ξ) + K^v_ξ(x, ξ) Λ^−(ξ) = K^u(x, ξ) Σ^{+−}(ξ) − K^v(x, ξ)(Λ^−)'(ξ) + K^v(x, ξ) Σ^{−−}(ξ)    (19.3b)
u ≡ 0, v≡0 (19.6)
19.2 State Feedback Controllers 351
for t ≥ t_F, where
t_F = t_{u,1} + t_{v,tot},    t_{v,tot} = Σ_{j=1}^m t_{v,j}    (19.7a)
t_{u,i} = ∫_0^1 dγ/λ_i(γ),    t_{v,j} = ∫_0^1 dγ/μ_j(γ).    (19.7b)
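The times in (19.7) are plain quadratures over the speed profiles. A minimal sketch of their computation (the function names and example profiles are ours; the constant speeds in the check below are hypothetical values chosen to reproduce the t_F ≈ 2.667 reported later in (19.98a), since the actual simulation parameters are not restated here):

```python
import numpy as np

def transit_time(speed, n_grid=2001):
    """t = int_0^1 dg / speed(g) by the trapezoidal rule, as in (19.7b)."""
    x = np.linspace(0.0, 1.0, n_grid)
    f = 1.0 / speed(x)
    return float(np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(x)))

def convergence_time(lams, mus):
    """t_F = t_{u,1} + sum_j t_{v,j} from (19.7a), for given transport
    speed profiles lambda_i and mu_j."""
    t_u1 = transit_time(lams[0])
    t_v = [transit_time(mu) for mu in mus]
    return t_u1 + sum(t_v)
```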
and the control law (19.1) with (K u , K v ) satisfying the PDE (19.3) map (18.1) into
the target system
for α0 , β0 ∈ B([0, 1]), where G has a triangular form (19.4), and is given from
(19.3e), while C + and C − are defined over the triangular domain T defined in
(1.1a), and given as the solution to the equation
C^+(x, ξ) = Σ^{+−}(x) K^u(x, ξ) + ∫_ξ^x C^−(x, s) K^u(s, ξ) ds    (19.10a)
C^−(x, ξ) = Σ^{+−}(x) K^v(x, ξ) + ∫_ξ^x C^−(x, s) K^v(s, ξ) ds.    (19.10b)
Using (19.3) and the fact that v(0, t) = β(0, t), we obtain (19.9b). Inserting (19.8)
into (19.9a) gives
where we have changed the order of integration in the double integrals. Using (19.10)
gives the dynamics (18.1a). The boundary condition (19.9c) follows trivially from
inserting (19.8) into (18.1c). Evaluating (19.8b) at x = 1 and inserting the boundary
condition (18.1d), we get
β(1, t) = C_1 u(1, t) + U(t) − ∫_0^1 K^u(1, ξ) u(ξ, t) dξ − ∫_0^1 K^v(1, ξ) v(ξ, t) dξ.    (19.13)
The control law (19.1) gives the boundary condition (19.9d). The initial conditions
α0 and β0 are expressed from u 0 , v0 by evaluating (19.8) at t = 0, giving
Due to boundary condition (19.9d) and the fact that G in (19.9b) is strictly lower triangular, we have ∂_t β_1 − μ_1 ∂_x β_1 = 0 so that β_1 ≡ 0 for t ≥ t_{v,1}. This fact reduces the next equation to ∂_t β_2 − μ_2 ∂_x β_2 = 0 for t ≥ t_{v,1} so that β_2 ≡ 0 for t ≥ t_{v,1} + t_{v,2}. Continuing this argument we obtain that β ≡ 0 for t ≥ t_{v,tot} = Σ_{i=1}^m t_{v,i}, and system (19.9) is reduced to
for some function αtv,tot . System (19.15) has the same form as system (14.17), and
will be zero after an additional time tu,1 . Hence, for t ≥ t F = tu,1 + tv,tot , we have
α ≡ 0, β ≡ 0. Due to the invertibility of the backstepping transformation (19.8), the
result follows.
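The sequential flushing mechanism used in this proof is easy to visualize numerically. A sketch for a single constant-speed equation (an illustration only, not the full cascade; the grid and exact-shift upwind scheme are our choices):

```python
import numpy as np

def simulate_flush(mu=1.0, n=200, t_end=1.5):
    """Upwind (exact-shift) simulation of beta_t - mu beta_x = 0 on [0,1]
    with beta(1,t) = 0: the state is transported out at x = 0 and is
    identically zero after the transport time t_v = 1/mu, illustrating
    the finite-time convergence mechanism used in the proof above."""
    dx = 1.0 / n
    dt = dx / mu              # CFL number 1: the scheme is an exact shift
    beta = np.ones(n + 1)     # nonzero initial condition
    t, norms = 0.0, []
    while t < t_end:
        beta[:-1] = beta[1:]  # information moves from x = 1 toward x = 0
        beta[-1] = 0.0        # boundary condition at x = 1
        t += dt
        norms.append((t, float(np.sqrt(np.sum(beta ** 2) * dx))))
    return norms
```

In the cascade of the proof, this flushing happens component by component, which is why the total time is the sum of the individual transport times.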
where
K^u_{min}(ξ) = K^u(1, ξ) − ∫_ξ^1 Θ(1, s) K^u(s, ξ) ds    (19.17a)
K^v_{min}(ξ) = K^v(1, ξ) + Θ(1, ξ) − ∫_ξ^1 Θ(1, s) K^v(s, ξ) ds    (19.17b)
and Θ is a strictly lower triangular matrix defined over the square domain [0, 1]2 and
given as the solution to the Fredholm integral equation
Θ(x, ξ) = −F(x, ξ) + ∫_0^1 F(x, s) Θ(s, ξ) ds    (19.18)
where F is a strictly lower triangular matrix defined over [0, 1]2 and given as the
solution to the PDE
Theorem 19.2 Consider system (18.1) subject to assumption (18.8). Let the controller be taken as (19.16) with (K^u_{min}, K^v_{min}) given by (19.17). Then
u ≡ 0, v≡0 (19.20)
Proof It is shown in the proof of Theorem 19.1 that system (18.1), subject to assump-
tion (18.8), can be mapped using the backstepping transformation (19.8) into the
target system (19.9) provided the control law is chosen as (19.1). If, however, we
choose the slightly modified control law
U(t) = −C_1 u(1, t) + ∫_0^1 K^u(1, ξ) u(ξ, t) dξ + ∫_0^1 K^v(1, ξ) v(ξ, t) dξ + U_a(t)    (19.22)
from a new variable η into β, and where F satisfies the PDE (19.19) and is strictly
lower triangular, hence
F(x) = { f_{ij}(x) }_{1≤i,j≤m},    f_{ij}(x) = 0 if 1 ≤ i ≤ j ≤ m.    (19.25)
with Θ satisfying (19.18). This can be verified from inserting (19.26) into (19.24), yielding
β(x, t) = β(x, t) − ∫_0^1 Θ(x, ξ) β(ξ, t) dξ − ∫_0^1 F(x, ξ) β(ξ, t) dξ + ∫_0^1 F(x, ξ) ∫_0^1 Θ(ξ, s) β(s, t) ds dξ,    (19.27)
Using (19.19) and (19.29b) gives (19.29a). Evaluating (19.26) at x = 1 and inserting
the boundary condition (19.23d) gives
η(1, t) = U_a(t) − ∫_0^1 Θ(1, ξ) β(ξ, t) dξ.    (19.34)
Choosing
U_a(t) = ∫_0^1 Θ(1, ξ) β(ξ, t) dξ    (19.35)
results in the boundary condition (19.29b). The initial condition η_0 is given from β_0 as
η_0(x) = β_0(x) − ∫_0^1 Θ(x, ξ) β_0(ξ) dξ    (19.36)
From the simple structure of the target system (19.29), it is evident that η ≡ 0 for
t ≥ tv,m , which corresponds to the slowest transport speed in η. From (19.24), we
then also have β ≡ 0 for t ≥ tv,m . The final result follows from the same reasoning
as in the proof of Theorem 19.1.
Inserting (19.8b) into (19.35) and substituting the result into (19.22), gives
U(t) = −C_1 u(1, t) + ∫_0^1 K^u(1, ξ) u(ξ, t) dξ + ∫_0^1 K^v(1, ξ) v(ξ, t) dξ + ∫_0^1 Θ(1, ξ) v(ξ, t) dξ − ∫_0^1 ∫_ξ^1 Θ(1, s) K^u(s, ξ) ds u(ξ, t) dξ − ∫_0^1 ∫_ξ^1 Θ(1, s) K^v(s, ξ) ds v(ξ, t) dξ    (19.37)
where we have changed the order of integration in the double integrals. Using the
definition (19.17) gives the control law (19.16).
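Numerically, the Fredholm equation (19.18) can exploit the strict lower triangularity of F: compositions of m or more copies of F vanish, so the Neumann series Θ = −F − F∘F − · · · terminates after m − 1 terms. A sketch under stated assumptions (the grid representation of the kernels and the quadrature weights are our illustrative choices):

```python
import numpy as np

def solve_theta(F, w):
    """Solve Theta(x,xi) = -F(x,xi) + int_0^1 F(x,s) Theta(s,xi) ds on a
    grid.  F has shape (N, N, m, m) with F[i, j] = F(x_i, x_j), and w
    holds quadrature weights for the s-integral.  Strict lower
    triangularity of F (as an m x m matrix) truncates the Neumann series
    after m - 1 terms."""
    N, _, m, _ = F.shape
    term = -F.copy()
    theta = term.copy()
    for _ in range(max(m - 2, 0)):
        # (F o term)(x_i, x_j) = sum_k w_k F(x_i, s_k) term(s_k, x_j)
        term = np.einsum('k,ikab,kjbc->ijac', w, F, term)
        theta = theta + term
    return theta

def trapezoid_weights(N):
    """Trapezoidal quadrature weights on a uniform grid of N points."""
    w = np.full(N, 1.0 / (N - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    return w
```

The truncation is the reason the Fredholm transformation stays computationally tractable even though, unlike a Volterra equation, (19.18) is not automatically invertible.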
19.3 Observers
with initial conditions û 0 , v̂0 ∈ B([0, 1]), and where the injection gains P + and
P − are given as
The matrices
Λ^+(x) M^α_x(x, ξ) − M^α_ξ(x, ξ) Λ^−(ξ) = M^α(x, ξ)(Λ^−)'(ξ) + Σ^{++}(x) M^α(x, ξ) + Σ^{+−}(x) M^β(x, ξ)    (19.41a)
Λ^−(x) M^β_x(x, ξ) + M^β_ξ(x, ξ) Λ^−(ξ) = −M^β(x, ξ)(Λ^−)'(ξ) − Σ^{−+}(x) M^α(x, ξ) − Σ^{−−}(x) M^β(x, ξ)    (19.41b)
Λ^+(x) M^α(x, x) + M^α(x, x) Λ^−(x) = Σ^{+−}(x)    (19.41c)
Λ^−(x) M^β(x, x) − M^β(x, x) Λ^−(x) = −Σ^{−−}(x)    (19.41d)
M^β(1, ξ) − C_1 M^α(1, ξ) = H(ξ)    (19.41e)
M^β_{ij}(x, 0) = m^β_{ij}(x),    1 ≤ i < j ≤ m    (19.43)
for some arbitrary functions m^β_{ij}(x), 1 ≤ i < j ≤ m, defined for x ∈ [0, 1].
Well-posedness of the PDE consisting of (19.41) and (19.43) then follows from
Theorem D.6 in Appendix D.6 following a coordinate transformation (x, ξ) → (1 −
ξ, 1 − x) and transposing the equations.
Theorem 19.3 Consider system (18.1) subject to assumption (18.8), and the
observer (19.38) with injection gains P + and P − given as (19.39). Then
û ≡ u, v̂ ≡ v (19.44)
where (M α (x, ξ), M β (x, ξ)) satisfies the PDE (19.41) maps the target system
α̃_t(x, t) + Λ^+(x) α̃_x(x, t) = Σ^{++}(x) α̃(x, t) + ∫_0^x D^+(x, ξ) α̃(ξ, t) dξ    (19.47a)
β̃_t(x, t) − Λ^−(x) β̃_x(x, t) = Σ^{−+}(x) α̃(x, t) + ∫_0^x D^−(x, ξ) α̃(ξ, t) dξ    (19.47b)
α̃(0, t) = 0    (19.47c)
β̃(1, t) = C_1 α̃(1, t) − ∫_0^1 H(ξ) β̃(ξ, t) dξ    (19.47d)
α̃(x, 0) = α̃_0(x)    (19.47e)
β̃(x, 0) = β̃_0(x)    (19.47f)
where H is the upper triangular matrix satisfying (19.41e) and D + and D − satisfy
the Volterra integral equation
D^+(x, ξ) = −M^α(x, ξ) Σ^{−+}(ξ) − ∫_ξ^x M^α(x, s) D^−(s, ξ) ds    (19.48a)
D^−(x, ξ) = −M^β(x, ξ) Σ^{−+}(ξ) − ∫_ξ^x M^β(x, s) D^−(s, ξ) ds,    (19.48b)
β̃_t(x, t) = ṽ_t(x, t) − M^β(x, x) Λ^−(x) β̃(x, t) + M^β(x, 0) Λ^−(0) β̃(0, t) + ∫_0^x M^β_ξ(x, ξ) Λ^−(ξ) β̃(ξ, t) dξ + ∫_0^x M^β(x, ξ)(Λ^−)'(ξ) β̃(ξ, t) dξ − ∫_0^x M^β(x, ξ) Σ^{−+}(ξ) α̃(ξ, t) dξ
and
α̃_x(x, t) = ũ_x(x, t) − M^α(x, x) β̃(x, t) − ∫_0^x M^α_x(x, ξ) β̃(ξ, t) dξ    (19.50a)
β̃_x(x, t) = ṽ_x(x, t) − M^β(x, x) β̃(x, t) − ∫_0^x M^β_x(x, ξ) β̃(ξ, t) dξ,    (19.50b)
respectively. Inserting (19.49) and (19.50) into the dynamics (19.47a), (19.47b) and
noting that β̃(0, t) = ṽ(0, t), we obtain
0 = α̃_t(x, t) + Λ^+(x) α̃_x(x, t) − Σ^{++}(x) α̃(x, t) − ∫_0^x D^+(x, ξ) α̃(ξ, t) dξ
 = ũ_t(x, t) + Λ^+(x) ũ_x(x, t) − Σ^{++}(x) ũ(x, t) − Σ^{+−}(x) ṽ(x, t) + M^α(x, 0) Λ^−(0) ṽ(0, t)
 − [Λ^+(x) M^α(x, x) + M^α(x, x) Λ^−(x) − Σ^{+−}(x)] β̃(x, t)
 − ∫_0^x [Λ^+(x) M^α_x(x, ξ) − M^α_ξ(x, ξ) Λ^−(ξ) − M^α(x, ξ)(Λ^−)'(ξ) − Σ^{++}(x) M^α(x, ξ) − Σ^{+−}(x) M^β(x, ξ)] β̃(ξ, t) dξ
 − ∫_0^x [D^+(x, ξ) + M^α(x, ξ) Σ^{−+}(ξ) + ∫_ξ^x M^α(x, s) D^−(s, ξ) ds] α̃(ξ, t) dξ    (19.51)
and similarly
0 = β̃_t(x, t) − Λ^−(x) β̃_x(x, t) − Σ^{−+}(x) α̃(x, t) − ∫_0^x D^−(x, ξ) α̃(ξ, t) dξ
Using Eqs. (19.41a)–(19.41d), (19.48) and the injection gains (19.39) gives (19.45a)–(19.45b).
The boundary condition (19.45c) follows immediately from (19.46a) and (19.47c).
Inserting (19.46) into the boundary condition (19.45d) gives
for some initial conditions û 0 , v̂0 ∈ B([0, 1]), where the injection gains P + and P −
are given as
The matrices
Λ+ (x)N xα (x, ξ) + Nξα (x, ξ)Λ+ (ξ) = −N α (x, ξ)(Λ+ ) (ξ) + ++ (x)N α (x, ξ)
+ +− (x)N β (x, ξ) (19.58a)
− β
Λ (x)N xβ (x, ξ) − Nξ (x, ξ)Λ+ (ξ) β +
= N (x, ξ)(Λ ) (ξ) − −+ α
(x)N (x, ξ)
− −− (x)N β (x, ξ) (19.58b)
Λ (x)N (x, x) − N (x, x)Λ (x) = − ++ (x)
+ α α +
(19.58c)
Λ− (x)N β (x, x) + N β (x, x)Λ+ (x) = −+ (x) (19.58d)
α β
N (0, ξ) − Q 0 N (0, ξ) = A(x) (19.58e)
As with the controller kernel equations and kernel equations for the anti-collocated
observer, these equations are under-determined. To ensure well-posedness, we add
the boundary conditions
for some arbitrary functions n^α_{ij}(x), 1 ≤ j < i ≤ m, defined for x ∈ [0, 1].
Well-posedness of the PDE consisting of (19.58) and (19.60) then follows from
Theorem D.6 in Appendix D.6.
Theorem 19.4 Consider system (18.1) subject to assumption (18.9), and the
observer (19.55) with injection gains P + and P − given as (19.56). Then
û ≡ u, v̂ ≡ v (19.61)
for t ≥ t_0, where
t_0 = t_{u,tot} + t_{v,m},    t_{u,tot} = Σ_{i=1}^n t_{u,i}    (19.62)
can be mapped into (19.63) with injection gains (19.56) using the backstepping
transformation
ũ(x, t) = α̃(x, t) + ∫_x^1 N^α(x, ξ) α̃(ξ, t) dξ    (19.66a)
ṽ(x, t) = β̃(x, t) + ∫_x^1 N^β(x, ξ) α̃(ξ, t) dξ,    (19.66b)
where N α , N β satisfy the PDEs (19.58). The derivation follows the same steps as in
the proof of Theorem 19.3, and is omitted.
The β̃-dynamics in (19.64) is independent of α̃ and will be zero for t ≥ t_{v,m}, corresponding to the slowest transport speed in β̃. The resulting system in α̃ is then a cascade system which will be zero after an additional time Σ_{i=1}^n t_{u,i} = t_{u,tot}, and hence α̃ ≡ 0 and β̃ ≡ 0 for t ≥ t_{v,m} + t_{u,tot} = t_0. From (19.66), ũ ≡ 0 and ṽ ≡ 0 for t ≥ t_0 follows, which gives the desired result.
The state feedback controller of Theorems 19.2 or 19.1 can be combined with the
observers of Theorems 19.3 or 19.4 into output feedback controllers. The proofs are
straightforward and omitted.
Combining the results of Theorems 19.2 and 19.3, we obtain the following theorem.
Theorem 19.5 Consider system (18.1) with measurement (18.11a). Let the con-
troller be taken as
19.5 Reference Tracking 365
U(t) = −C_1 û(1, t) + ∫_0^1 K^u_{min}(ξ) û(ξ, t) dξ + ∫_0^1 K^v_{min}(ξ) v̂(ξ, t) dξ    (19.67)
where K^u_{min}, K^v_{min} are given from (19.17), and û and v̂ are generated using the observer of Theorem 19.3. Then
u ≡ 0, v≡0 (19.68)
for t ≥ t F + tmin , where t F and tmin are defined in (19.7) and (19.21), respectively.
Combining the results of Theorems 19.2 and 19.4, we obtain the following theorem.
Theorem 19.6 Consider system (18.1) with measurement (18.11b). Let the con-
troller be taken as
U(t) = −C_1 û(1, t) + ∫_0^1 K^u_{min}(ξ) û(ξ, t) dξ + ∫_0^1 K^v_{min}(ξ) v̂(ξ, t) dξ    (19.69)
where K^u_{min}, K^v_{min} are given from (19.17), and û and v̂ are generated using the observer of Theorem 19.4. Then
u ≡ 0, v≡0 (19.70)
for t ≥ t0 + tmin , where t0 and tmin are defined in (19.62) and (19.21), respectively.
det(R_0 Q_0 + I_m) ≠ 0.    (19.73)
where K^u, K^v are given from the solution to the PDE (19.3) and (19.5), the state β is given from the system states u, v through (19.8b), while Θ is the solution to (19.18) with F given as the solution to the PDE (19.19), and
ω(t) = (ω_1(t) ω_2(t) ω_3(t) . . . ω_m(t))^T    (19.75)
is given recursively as
ω_i(t) = ν_i(t + φ_i(1)) − Σ_{k=1}^{i−1} ∫_0^1 (p_{ik}(τ)/μ_i(τ)) ω_k(t + φ_i(1) − φ_i(τ)) dτ    (19.76)
for i = 1, . . . , m, where
ν(t) = (ν_1(t) ν_2(t) ν_3(t) . . . ν_m(t))^T    (19.77)
and
φ_i(x) = ∫_0^x dγ/μ_i(γ).    (19.81)
Theorem 19.7 Consider system (18.1), and assume that R0 satisfies (19.73). Then,
the control law (19.74) guarantees that (19.71) holds for t ≥ tv,m with tv,m defined
in (19.21). Moreover, if r ∈ L∞ , then
Proof By modifying the control law used in the Fredholm transformation performed
in the proof of Theorem 19.2, and instead of choosing Ua as (19.35), we choose
U_a(t) = ∫_0^1 Θ(1, ξ) β(ξ, t) dξ + U_b(t),    (19.83)
for a new control signal Ub , we obtain a slightly modified version of the target system
(19.29) as
∂_t η_i(x, t) − μ_i(x) ∂_x η_i(x, t) = Σ_{k=1}^{i−1} p_{ik}(x) U_{b,k}(t)    (19.86a)
η_i(1, t) = U_{b,i}(t)    (19.86b)
η_i(x, 0) = η_{i,0}(x)    (19.86c)
for i = 1, . . . , m, where
η(x, t) = (η_1(x, t) η_2(x, t) . . . η_m(x, t))^T    (19.87a)
U_b(t) = (U_{b,1}(t) U_{b,2}(t) . . . U_{b,m}(t))^T    (19.87b)
η_0(x) = (η_{1,0}(x) η_{2,0}(x) . . . η_{m,0}(x))^T    (19.87c)
Equation (19.86) can be solved explicitly using the method of characteristics. Note that the φ_i defined in (19.81) are strictly increasing functions and hence invertible. Along the characteristic lines
we have
(d/ds) η_i(x_1(x, s), t_1(t, s)) = − Σ_{k=1}^{i−1} p_{ik}(x_1(x, s)) U_{b,k}(t_1(t, s)).    (19.89)
valid for t ≥ φi (1 − x). Using the substitution τ = φi−1 (φi (x) + s) in the integral,
(19.90) can be written
η_i(0, t) = U_{b,i}(t − φ_i(1)) + Σ_{k=1}^{i−1} ∫_0^1 (p_{ik}(τ)/μ_i(τ)) U_{b,k}(t − φ_i(τ)) dτ    (19.92)
valid for t ≥ φ_i(1). Hence, choosing the control laws U_{b,i} recursively as
U_{b,i}(t) = ν_i(t + φ_i(1)) − Σ_{k=1}^{i−1} ∫_0^1 (p_{ik}(τ)/μ_i(τ)) U_{b,k}(t − φ_i(τ) + φ_i(1)) dτ    (19.93)
with ω defined in (19.75) and (19.76), we obtain η_i(0, t) = ν_i(t) for t ≥ φ_i(1), and
for t ≥ tv,m .
From inserting (19.95) into the right-hand side of (19.85) and using the definition (19.78), it is verified that the control objective (19.85), which is equivalent to (19.71), holds for t ≥ t_{v,m}.
From (19.84) with Ub (t) = ω(t), it is clear that η will be pointwise bounded if
r is bounded. From the Fredholm transformation (19.24) and the cascade structure
of system (19.23), pointwise boundedness of α and β follows. From the invertibility
of the backstepping transformation (19.8), it is then clear that a bounded r implies
pointwise boundedness of u and v.
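The essence of the recursion (19.76) is delay compensation: each actuation component feeds the reference forward in time by its transport delay φ_i(1). A minimal sketch for the scalar, constant-speed case (an illustration only; the full recursion with the p_ik couplings is not implemented here, and the grid and exact-shift scheme are our choices):

```python
import numpy as np

def track_outlet(r, mu=1.0, n=200, t_end=4.0):
    """Scalar case of (19.86): beta_t - mu beta_x = 0 with actuation
    beta(1,t) = U(t), so the outlet satisfies beta(0,t) = U(t - 1/mu).
    Feeding the time-advanced reference U(t) = r(t + 1/mu) -- the role
    played by nu_i(t + phi_i(1)) in (19.76) -- gives beta(0,t) = r(t)
    once the initial data has left the domain.
    Returns (t, |beta(0,t) - r(t)|) samples."""
    dx = 1.0 / n
    dt = dx / mu
    beta = np.zeros(n + 1)
    t, err = 0.0, []
    while t < t_end:
        t += dt
        beta[:-1] = beta[1:]        # exact shift at CFL number 1
        beta[-1] = r(t + 1.0 / mu)  # delay-compensated input
        err.append((t, abs(beta[0] - r(t))))
    return err
```

After the transport time 1/μ, the tracking error vanishes identically, mirroring the finite-time tracking claim of Theorem 19.7.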
The state-feedback controller of Theorem 19.7 can also be combined with the
observers of Sect. 19.3 into output-feedback reference tracking controllers.
19.6 Simulations
Fig. 19.1 State norm ||u|| + ||v|| for the non-minimum time (dashed red) and minimum-time (dashed-dotted blue) controllers of Theorems 19.1 and 19.2. The theoretical convergence times t_F and t_min are indicated by black vertical lines
The controllers of Theorems 19.1 and 19.2 are here implemented to demonstrate
performance. The convergence times are computed to be
t_F = t_{u,1} + t_{v,tot} = ∫_0^1 dγ/λ_1(γ) + Σ_{i=1}^2 ∫_0^1 dγ/μ_i(γ) = 1 + 2/3 + 1 ≈ 2.667    (19.98a)
t_min = t_{u,1} + t_{v,2} = ∫_0^1 dγ/λ_1(γ) + ∫_0^1 dγ/μ_2(γ) = 1 + 1 = 2.000.    (19.98b)
It is seen from the state norms shown in Fig. 19.1 and actuation signals shown in Fig. 19.2 that both controllers achieve convergence to zero of the state norm and actuation signals in finite time, with the minimum-time convergent controller of Theorem 19.2 converging faster than the controller of Theorem 19.1. It is interesting to notice from Fig. 19.2 that the actuation signals are approximately the same for the first 1.2 s, but thereafter significantly different until convergence to zero is achieved.
Fig. 19.2 Left: Actuation signal U_1, and Right: Actuation signal U_2 for the non-minimum time (dashed red) and minimum-time (dashed-dotted blue) controllers of Theorems 19.1 and 19.2
The output-feedback controller of Theorem 19.5 and the tracking controller of Theorem 19.7 are implemented in this section to demonstrate performance. The matrix R_0 and reference signal r in (19.71) are set to
R_0 = 2I_2,    r(t) = (0  sin(πt))^T.    (19.99)
The observer used by the controller of Theorem 19.5 should converge to its true value
for
t ≥ t F = 2.667 (19.100)
while the state norm using the output-feedback controller should converge to zero
for
t ≥ tv,m = 1. (19.102)
From Fig. 19.3 it is observed that the state norms are bounded in both cases, and that the state estimation error actually converges faster than anticipated, with the estimates converging to their true values for approximately t ≥ 2 = t_min. Convergence to zero of the state norm during output tracking is therefore also faster than anticipated. The actuation signals are seen in Fig. 19.4 to be bounded.
Figure 19.5 shows the reference signal r(t) = (r_1(t)  r_2(t))^T and the right-hand side of (19.71) as
y_c(t) = (y_{c,1}(t)  y_{c,2}(t))^T = R_0 u(0, t) + v(0, t).    (19.103)
Fig. 19.3 Left: System norm ||u|| + ||v|| for the output-feedback controller of Theorem 19.5 and output tracking controller of Theorem 19.7. Right: State estimation error norm ||u − û|| + ||v − v̂||
Fig. 19.4 Left: Actuation signal U_1, and Right: Actuation signal U_2 for the output-feedback (dashed red) and output tracking (dashed-dotted blue) controllers of Theorems 19.5 and 19.7
Fig. 19.5 Left: Reference signal r_1(t), and Right: Reference signal r_2(t) for the output-feedback (dashed red) and output tracking (dashed-dotted blue) controllers of Theorems 19.5 and 19.7
It is observed from Fig. 19.5 that the tracking goal is achieved for t ≥ tv,m = 1, as
predicted by theory.
19.7 Notes
The complexity of the non-adaptive controller and observer designs increases further compared to the n + 1 designs of Chap. 14. The number of controller kernels required
for implementation of a stabilizing controller for an n + m system is m(n + m), so
that a 1 + 2 system results in 6 kernels to be computed, compared to only 3 for the
2 + 1 case. Also, the resulting controller of Theorem 19.1 is non-minimum-time-
convergent, and an additional transformation is needed to derive the minimum time-
convergent controller of Theorem 19.2. This transformation is a Fredholm integral
transformation, and the technique was originally proposed in Coron et al. (2017).
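The kernel counts quoted above can be checked directly; this tiny helper reproduces the $m(n+m)$ formula for the two cases mentioned.

```python
# Controller kernel count m(n + m) from the paragraph above: a 1 + 2 system
# (n = 1, m = 2) needs 6 kernels, while the 2 + 1 case (n = 2, m = 1) needs 3.
def kernel_count(n: int, m: int) -> int:
    return m * (n + m)

print(kernel_count(1, 2), kernel_count(2, 1))
```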
An alternative way of deriving minimum-time controllers is offered in Auriol and Di Meglio (2016), using a slightly altered target system. However, the resulting controller requires the solution to an even more complicated set of PDEs that are cascaded in structure, making the proof of well-posedness, as well as solving them numerically, considerably harder. On the other hand, a minimum-time-convergent anti-collocated observer is also proposed in Auriol and Di Meglio (2016), as opposed to all observers derived in Sect. 19.3, which are non-minimum-time-convergent.
References
Anfinsen H, Aamo OM (2018) Minimum time disturbance rejection and tracking control of n + m
linear hyperbolic PDEs. In American Control Conference 2018, Milwaukee, WI, USA
Auriol J, Di Meglio F (2016) Minimum time control of heterodirectional linear coupled hyperbolic
PDEs. Automatica 71:300–307
Coron J-M, Hu L, Olive G (2017) Finite-time boundary stabilization of general linear hyperbolic
balance laws via Fredholm backstepping transformation. Automatica 84:95–100
Hu L, Di Meglio F, Vazquez R, Krstić M (2016) Control of homodirectional and general heterodirectional linear coupled hyperbolic PDEs. IEEE Trans Autom Control 61(11):3301–3314
Chapter 20
Adaptive Output-Feedback: Uncertain
Boundary Condition
20.1 Introduction
We will now consider the n + m system (18.1), but for simplicity restrict ourselves
to constant coefficients, that is
satisfying
$$\lambda_i, \mu_j \in \mathbb{R}, \qquad \lambda_i, \mu_j > 0 \tag{20.6a}$$
$$\sigma^{++}_{ik}, \sigma^{+-}_{ij}, \sigma^{-+}_{ji}, \sigma^{--}_{jl} \in \mathbb{R}, \qquad q_{ij}, c_{ji} \in \mathbb{R}, \tag{20.6b}$$
for $i, k = 1, 2, \dots, n$, $j, l = 1, 2, \dots, m$.
Additionally, we assume (18.9), that is
$$\sigma^{++}_{ii} = 0,\ i = 1, 2, \dots, n, \qquad \sigma^{--}_{jj} = 0,\ j = 1, 2, \dots, m. \tag{20.8}$$
The boundary parameters $Q_0$ and $C_1$ are uncertain. We will consider both the estimation problem and the closed-loop adaptive control problem.
where
$$\eta(x,t) = \begin{bmatrix} \eta_1(x,t) & \dots & \eta_n(x,t) \end{bmatrix}^T \tag{20.11a}$$
$$\phi(x,t) = \begin{bmatrix} \phi_1(x,t) & \dots & \phi_m(x,t) \end{bmatrix}^T, \tag{20.11b}$$
and initial conditions η0 , φ0 ∈ B([0, 1]). The output injection gains P + and P − will
be specified later.
Furthermore, we design parameter filters that model how the boundary parameters
Q 0 and C1 influence the system states u and v. We define
$$\cdots - P^-(x)R(0,t) \tag{20.12b}$$
$$P(0,t) = y_0^T(t) \otimes I_n \tag{20.12c}$$
$$R(1,t) = 0 \tag{20.12d}$$
$$P(x,0) = P_0(x) \tag{20.12e}$$
$$R(x,0) = R_0(x) \tag{20.12f}$$
where
$$W(x,t) = \begin{bmatrix} W_1(x,t) & W_2(x,t) & \dots & W_{mn}(x,t) \end{bmatrix} = \{w_{ij}(x,t)\}_{1\le i\le n,\,1\le j\le mn} \tag{20.15a}$$
$$Z(x,t) = \begin{bmatrix} Z_1(x,t) & Z_2(x,t) & \dots & Z_{mn}(x,t) \end{bmatrix} = \{z_{ij}(x,t)\}_{1\le i\le m,\,1\le j\le mn} \tag{20.15b}$$
with initial conditions W0 , Z 0 ∈ B([0, 1]). The output injection gains P + and P −
are the same ones as in (20.10).
We define non-adaptive state estimates as
Lemma 20.1 Consider system (20.1) and the non-adaptive state estimates (20.16)
generated using filters (20.10), (20.12) and (20.14). If the output injection gains
P + and P − are selected as (19.39), where (M α , M β ) is the solution to equation
(19.41)–(19.43) with constant coefficients and C1 = 0, then
ū ≡ u, v̄ ≡ v (20.18)
The dynamics (20.20) has the same form as the dynamics (19.45) but with C1 = 0.
The rest of the proof therefore follows the same steps as the proof of Theorem 19.3
and is omitted.
From the static relationships (20.16) and the result of Lemma 20.1, any standard identification law can be applied to estimate the unknown parameters in q and c.
First, we will assume we have some bounds on the parameters q and c.
Assumption 20.1 Bounds q̄ and c̄ are known, so that
Next, we present the integral adaptive law with forgetting factor, normalization
and projection. Define
$$h(t) = \begin{bmatrix} u(1,t) - \eta(1,t) \\ v(0,t) - \phi(0,t) \end{bmatrix}, \quad \varphi(t) = \begin{bmatrix} P(1,t) & W(1,t) \\ R(0,t) & Z(0,t) \end{bmatrix}, \quad \theta = \begin{bmatrix} q \\ c \end{bmatrix} \tag{20.23}$$
where
$$\hat\theta(t) = \begin{bmatrix} \hat q^T(t) & \hat c^T(t) \end{bmatrix}^T, \qquad \bar\theta = \begin{bmatrix} \bar q^T & \bar c^T \end{bmatrix}^T, \tag{20.25}$$
with q̄ and c̄ given from Assumption 20.1, while R I L and Q I L are generated from
$$\begin{bmatrix} \dot R_{IL}(t) \\ \dot Q_{IL}(t) \end{bmatrix} = \begin{cases} 0 & \text{for } t < t_F \\[4pt] -\gamma \begin{bmatrix} R_{IL}(t) \\ Q_{IL}(t) \end{bmatrix} + \dfrac{\varphi^T(t)}{1 + |\varphi(t)|^2}\begin{bmatrix} \varphi(t) \\ -h(t) \end{bmatrix} & \text{for } t \ge t_F \end{cases} \tag{20.26}$$
and where the scalar γ > 0 and 2nm × 2nm symmetric gain matrix Γ > 0 are tuning
parameters.
Moreover, adaptive state estimates can be generated by substituting the parameters
in (20.16) with their respective estimates
Theorem 20.1 Consider system (20.1) with filters (20.10), (20.12) and (20.14) and output injection gains given by (19.39), where $(M^\alpha, M^\beta)$ is given as the solution to the PDE (19.41)–(19.43) with $C_1 = 0$. The adaptive law (20.24) guarantees that
$$\zeta(t) = \frac{|\hat\varepsilon(t)|}{\sqrt{1 + |\varphi(t)|^2}}. \tag{20.30}$$
with
θ̂ → θ (20.32)
for t ≥ t F , which follows from (20.16) and Lemma 20.1, we note from (20.26) that
R I L and Q I L are bounded for all t ≥ 0. Additionally, R I L is symmetric and positive
semidefinite. Solving for Q I L (t) and R I L (t), we have
$$Q_{IL}(t) = -\left( \int_0^t e^{-\gamma(t-\tau)}\frac{\varphi^T(\tau)\varphi(\tau)}{1 + |\varphi(\tau)|^2}\,d\tau \right)\theta = -R_{IL}(t)\theta \tag{20.36}$$
proving that
$$\dot{\hat\theta} \in \mathcal{L}_\infty \tag{20.38}$$
$$V_1(t) = \frac{1}{2}\tilde\theta^T(t)\Gamma^{-1}\tilde\theta(t) \tag{20.39}$$
from which we find, using the update law (20.37) and Lemma A.1 in Appendix A
$$\dot V_1(t) \le \begin{cases} 0 & \text{for } t < t_F \\ -\tilde\theta^T(t)R_{IL}(t)\tilde\theta(t) & \text{for } t \ge t_F \end{cases} \tag{20.40}$$
$$\dot{\hat\theta} \in \mathcal{L}_2. \tag{20.42}$$
Since $\dot{\hat\theta}, \dot R_{IL} \in \mathcal{L}_\infty$, it follows from (20.37) that $\ddot{\hat\theta} \in \mathcal{L}_\infty$, from which Lemma B.1 in Appendix B gives (20.29c), and
where we used Lemma A.1 in Appendix A. This gives, using ε̂(t) = ϕ(t)θ̃(t) that
$$\int_0^t \frac{\hat\varepsilon^T(\tau)\hat\varepsilon(\tau)}{1 + |\varphi(\tau)|^2}\,d\tau \le \gamma\int_0^t \tilde\theta^T(\tau)R_{IL}(\tau)\tilde\theta(\tau)\,d\tau + \tilde\theta^T(t)R_{IL}(t)\tilde\theta(t) + 2\int_0^t \tilde\theta^T(\tau)R_{IL}(\tau)\Gamma R_{IL}(\tau)\tilde\theta(\tau)\,d\tau. \tag{20.45}$$
ζ ∈ L2 . (20.46)
Moreover, we have
which proves
ζ ∈ L∞ . (20.48)
with $e \equiv 0$ for $t \ge t_F$.
We will in this section derive an adaptive control law that uses the parameter and state estimates generated from the adaptive law of Theorem 20.1 to stabilize system (20.1). We start by stating the main results. Consider the following time-varying PDEs defined over the domain $\mathcal{T}_1$ defined in (1.1b)
where the matrix G is strictly lower triangular, as defined in (19.4). As with the
kernel equations (19.3), the Eqs. (20.50) are also under-determined, and to ensure
well-posedness, we add the boundary condition
$$\|\hat K^u_t\|, \|\hat K^v_t\| \in \mathcal{L}_2 \cap \mathcal{L}_\infty. \tag{20.53}$$
Theorem 20.2 Consider system (20.1) and the state and boundary parameter esti-
mates generated from Theorem 20.1. Let the control law be taken as
$$U(t) = -\hat C_1(t)y_1(t) + \int_0^1 \hat K^u(1,\xi,t)\hat u(\xi,t)\,d\xi + \int_0^1 \hat K^v(1,\xi,t)\hat v(\xi,t)\,d\xi \tag{20.54}$$
where ( K̂ u , K̂ v ) is the solution to the PDE consisting of (20.50) and (20.57). Then,
||u||, ||v||, ||η||, ||φ||, ||P||, ||R||, ||W ||, ||Z ||, ||û||, ||v̂|| ∈ L2 ∩ L∞ . (20.55)
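Numerically, the control law (20.54) amounts to a boundary feedback term plus two quadratures over the spatial domain. The sketch below evaluates it on a grid; the kernel and estimate arrays are illustrative placeholders (in the actual scheme they come from solving (20.50) online and from Theorem 20.1).

```python
import numpy as np

# Evaluating the control law (20.54) on a spatial grid by simple quadrature.
# All arrays below are placeholder data, not the solution of (20.50).
m, n, N = 2, 2, 101
xi = np.linspace(0.0, 1.0, N)
dxi = xi[1] - xi[0]

C1_hat = 0.1 * np.ones((m, n))                 # placeholder parameter estimate
y1 = np.ones(n)                                # measured boundary value u(1, t)
Ku = np.ones((m, n, N))                        # placeholder kernel K^u(1, xi, t)
Kv = np.ones((m, m, N))                        # placeholder kernel K^v(1, xi, t)
u_hat = np.tile(np.sin(np.pi * xi), (n, 1))    # placeholder state estimates
v_hat = np.tile(np.cos(np.pi * xi), (m, 1))

U = (-C1_hat @ y1
     + np.einsum("ijk,jk->i", Ku, u_hat) * dxi   # int_0^1 K^u(1,xi,t) u_hat(xi,t) dxi
     + np.einsum("ijk,jk->i", Kv, v_hat) * dxi)  # int_0^1 K^v(1,xi,t) v_hat(xi,t) dxi
print(U)
```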
We will need the dynamics of the estimates û and v̂ generated using (20.28). By
straightforward calculations, we find the dynamics to be
where ( K̂ u , K̂ v ) is the online solution to the PDE (20.50). The inverse transformation
has the form
û(x, t) = α(x, t), v̂(x, t) = T −1 [α, β](x, t) (20.58)
Lemma 20.2 The backstepping transformation (20.57) maps between system (20.56)
in closed loop with the control law (20.54) and the following target system
$$\alpha_t(x,t) + \Lambda^+\alpha_x(x,t) = \Sigma^{++}\alpha(x,t) + \Sigma^{+-}\beta(x,t) + \int_0^x \hat C^+(x,\xi,t)\alpha(\xi,t)\,d\xi$$
for $\alpha_0, \beta_0 \in B([0,1])$, where G is the strictly lower triangular matrix given by
(20.50e) and Ĉ + and Ĉ − are given by
$$\hat C^+(x,\xi,t) = \Sigma^{+-}(x)\hat K^u(x,\xi,t) + \int_\xi^x \hat C^-(x,s,t)\hat K^u(s,\xi,t)\,ds \tag{20.60a}$$
$$\hat C^-(x,\xi,t) = \Sigma^{+-}(x)\hat K^v(x,\xi,t) + \int_\xi^x \hat C^-(x,s,t)\hat K^v(s,\xi,t)\,ds. \tag{20.60b}$$
Proof Differentiating (20.57b) with respect to time and space, respectively, inserting
the dynamics (20.56a) and (20.56b), integrating by parts and inserting the result into
(20.56b), we find
$$\begin{aligned}
0 ={}& \beta_t(x,t) - \Lambda^-\beta_x(x,t) \\
&+ \int_0^x \left[ \hat K^u_\xi(x,\xi,t)\Lambda^+ + \hat K^u(x,\xi,t)\Sigma^{++} + \hat K^v(x,\xi,t)\Sigma^{-+} - \Lambda^-\hat K^u_x(x,\xi,t) \right]\hat u(\xi,t)\,d\xi \\
&+ \int_0^x \left[ \hat K^u(x,\xi,t)\Sigma^{+-} - \hat K^v_\xi(x,\xi,t)\Lambda^- - \Lambda^-\hat K^v_x(x,\xi,t) + \hat K^v(x,\xi,t)\Sigma^{--} \right]\hat v(\xi,t)\,d\xi \\
&- \left[ P^-(x) - \int_0^x \hat K^u(x,\xi,t)P^+(\xi)\,d\xi - \int_0^x \hat K^v(x,\xi,t)P^-(\xi)\,d\xi \right]\hat e(0,t) \\
&+ \left[ \hat K^u(x,0,t)\Lambda^+\hat Q_0(t) - \hat K^v(x,0,t)\Lambda^- \right]\hat v(0,t) \\
&+ \int_0^x \hat K^v(x,\xi,t)R(\xi,t)\,d\xi\,\dot{\hat q}(t) + \int_0^x \hat K^v(x,\xi,t)Z(\xi,t)\,d\xi\,\dot{\hat c}(t) + \cdots
\end{aligned}$$
To ease the Lyapunov proof in the next section, we also perform backstepping trans-
formations of the parameter filters (20.12) and (20.14). Consider the target systems
$$A_t(x,t) + \Lambda^+A_x(x,t) = \Sigma^{++}A(x,t) + \int_0^x D^+(x,\xi)A(\xi,t)\,d\xi \tag{20.62a}$$
$$B_t(x,t) - \Lambda^-B_x(x,t) = \Sigma^{-+}A(x,t) + \int_0^x D^-(x,\xi)A(\xi,t)\,d\xi \tag{20.62b}$$
$$A(0,t) = (\beta(0,t) + \hat e(0,t))^T \otimes I_n \tag{20.62c}$$
$$B(1,t) = -\int_0^1 H(\xi)B(\xi,t)\,d\xi \tag{20.62d}$$
$$A(x,0) = A_0(x) \tag{20.62e}$$
$$B(x,0) = B_0(x) \tag{20.62f}$$
and
$$\Psi_t(x,t) + \Lambda^+\Psi_x(x,t) = \Sigma^{++}\Psi(x,t) + \int_0^x D^+(x,\xi)\Psi(\xi,t)\,d\xi \tag{20.63a}$$
$$\Omega_t(x,t) - \Lambda^-\Omega_x(x,t) = \Sigma^{-+}\Psi(x,t) + \int_0^x D^-(x,\xi)\Psi(\xi,t)\,d\xi \tag{20.63b}$$
$$\Psi(0,t) = 0 \tag{20.63c}$$
$$\Omega(1,t) = -\int_0^1 H(\xi)\Omega(\xi,t)\,d\xi + (\alpha(1,t) + \hat e(1,t))^T \otimes I_n \tag{20.63d}$$
$$\Psi(x,0) = \Psi_0(x) \tag{20.63e}$$
$$\Omega(x,0) = \Omega_0(x) \tag{20.63f}$$
for
$$A(x,t) = \begin{bmatrix} A_1(x,t) & A_2(x,t) & \dots & A_{mn}(x,t) \end{bmatrix} = \{a_{ij}(x,t)\}_{1\le i\le n,\,1\le j\le mn} \tag{20.64a}$$
$$B(x,t) = \begin{bmatrix} B_1(x,t) & B_2(x,t) & \dots & B_{mn}(x,t) \end{bmatrix} = \{b_{ij}(x,t)\}_{1\le i\le m,\,1\le j\le mn} \tag{20.64b}$$
$$\Psi(x,t) = \begin{bmatrix} \Psi_1(x,t) & \Psi_2(x,t) & \dots & \Psi_{mn}(x,t) \end{bmatrix} = \{\psi_{ij}(x,t)\}_{1\le i\le n,\,1\le j\le mn} \tag{20.64c}$$
$$\Omega(x,t) = \begin{bmatrix} \Omega_1(x,t) & \Omega_2(x,t) & \dots & \Omega_{mn}(x,t) \end{bmatrix} = \{\omega_{ij}(x,t)\}_{1\le i\le m,\,1\le j\le mn}. \tag{20.64d}$$
Lemma 20.3 Consider systems (20.12) and (20.14). The following backstepping
transformations
$$P(x,t) = A(x,t) + \int_0^x M^\alpha(x,\xi)B(\xi,t)\,d\xi \tag{20.65a}$$
$$R(x,t) = B(x,t) + \int_0^x M^\beta(x,\xi)B(\xi,t)\,d\xi \tag{20.65b}$$
and
$$W(x,t) = \Psi(x,t) + \int_0^x M^\alpha(x,\xi)\Omega(\xi,t)\,d\xi \tag{20.66a}$$
$$Z(x,t) = \Omega(x,t) + \int_0^x M^\beta(x,\xi)\Omega(\xi,t)\,d\xi \tag{20.66b}$$
Proof Column-wise, the proof is the same as the proof of Lemma 20.1, and is therefore omitted.
$$\|P_i(t)\| \le H_1\|A_i(t)\| + H_2\|B_i(t)\|, \qquad \|R_i(t)\| \le H_3\|B_i(t)\| \tag{20.68a}$$
$$\|A_i(t)\| \le H_4\|P_i(t)\| + H_5\|R_i(t)\|, \qquad \|B_i(t)\| \le H_6\|R_i(t)\| \tag{20.68b}$$
for some positive constants $H_1, \dots, H_6$. We are finally ready to prove Theorem 20.2.
Consider the functionals
$$V_2(t) = \int_0^1 e^{-\delta x}\alpha^T(x,t)\alpha(x,t)\,dx \tag{20.71a}$$
$$V_3(t) = \int_0^1 e^{kx}\beta^T(x,t)D\beta(x,t)\,dx \tag{20.71b}$$
$$V_4(t) = \sum_{i=1}^{nm}\int_0^1 e^{-\delta x}A_i^T(x,t)A_i(x,t)\,dx \tag{20.71c}$$
$$V_5(t) = \sum_{i=1}^{nm}\int_0^1 e^{kx}B_i^T(x,t)\Pi B_i(x,t)\,dx \tag{20.71d}$$
$$V_6(t) = \sum_{i=1}^{nm}\int_0^1 (1+x)\Omega_i^T(x,t)\Pi\Omega_i(x,t)\,dx \tag{20.71e}$$
for some positive constants δ, k and positive definite matrices D and Π to be decided.
The following result is proved in Appendix E.12.
Lemma 20.4 It is possible to choose D and Π so that there exist positive constants $h_1, h_2, \dots, h_9$ and nonnegative, integrable functions $l_1, l_2, \dots, l_8$ such that
$$\begin{aligned} \dot V_2(t) \le{}& -e^{-\delta}\lambda_1|\alpha(1,t)|^2 + h_1|\beta(0,t)|^2 - \left[\delta\lambda_1 - h_2\right]V_2(t) + 2d^{-1}V_3(t) \\ &+ h_3|\hat e(0,t)|^2 + l_1(t)V_4(t) + l_2(t)V_5(t) + l_3(t)V_6(t) \end{aligned} \tag{20.72a}$$
$$\begin{aligned} \dot V_3(t) \le{}& -h_4|\beta(0,t)|^2 - (k\lambda_1 - 7)V_3(t) + e^k\bar d h_5|\hat e(0,t)|^2 + l_4(t)V_2(t) \\ &+ l_5(t)V_3(t) + l_6(t)V_4(t) + l_7(t)V_5(t) + l_8(t)V_6(t) \end{aligned} \tag{20.72b}$$
$$\dot V_4(t) \le -\lambda_1 e^{-\delta}|A(1,t)|^2 + h_7|\beta(0,t)|^2 + h_7|\hat e(0,t)|^2 + \cdots$$
where $\pi$ and $\bar\pi$ are lower and upper bounds on the elements of $\Pi$, respectively, and $d$ and $\bar d$ are lower and upper bounds on the elements of $D$, respectively.
Consider now the Lyapunov function
$$V_9(t) = \sum_{i=2}^{6} a_i V_i(t) \tag{20.73}$$
with
$$a_2 = d, \qquad a_3 = h_4^{-1}(dh_1 + h_7), \qquad a_4 = 1, \tag{20.74a}$$
$$a_5 = \bar\pi^{-1}e^{-\delta-k}, \qquad a_6 = (8n\bar\pi)^{-1}de^{-\delta}\lambda_1 \tag{20.74b}$$
where
$$h_{11} = \max\left\{ dh_3 + e^k\bar d h_5 + h_7,\ \frac{dh_1 + h_7}{h_4}\,de^{-\delta}\lambda_1 \right\} \tag{20.77a}$$
$$h_{12} = \min\left\{ \frac{1}{2}\lambda_1 e^{-\delta},\ \frac{\pi}{\bar\pi}e^{-\delta-k}\mu_m,\ \frac{\pi\, d e^{-\delta}\lambda_1}{8n\bar\pi} \right\} \tag{20.77b}$$
$$|\hat\varepsilon(t)|^2 = \frac{|\hat\varepsilon(t)|^2\left(1 + |\varphi(t)|^2\right)}{1 + |\varphi(t)|^2} = \zeta^2(t)\left(1 + |P(1,t)|^2 + |W(1,t)|^2 + |R(0,t)|^2 + |Z(0,t)|^2\right) \tag{20.78}$$
where we have used the definition of ζ in (20.30). We note from (20.65) and (20.66)
that |P(1, t)|2 ≤ 2|A(1, t)|2 + 2 M̄ 2 ||B(t)||2 , |W (1, t)|2 ≤ M̄ 2 ||Ω(t)||2 , |R(0, t)|2
= |B(0, t)|2 , |Z (0, t)|2 = |Ω(0, t)|2 where M̄ bounds the kernel M α , and thus
$$|\hat\varepsilon(t)|^2 \le \zeta^2(t)\left( 1 + 2|A(1,t)|^2 + 2\bar M^2\|B(t)\|^2 + \bar M^2\|\Omega(t)\|^2 + |B(0,t)|^2 + |\Omega(0,t)|^2 \right). \tag{20.79}$$
for some bounded, integrable functions l10 , l11 (property (20.29b)). Moreover, we
have, for t ≥ t F
$$\zeta^2(t) = \frac{|\hat\varepsilon(t)|^2}{1 + |\varphi(t)|^2} = \frac{|\varphi(t)\tilde\theta(t)|^2}{1 + |\varphi(t)|^2} \le |\tilde\theta(t)|^2 \le \bar\gamma V_1(t) \tag{20.81}$$
This in turn, implies that |A(1, t)|2 , |B(0, t)|2 and |Ω(0, t)|2 must be bounded almost
everywhere, meaning that
V̇9 (t) ≤ −cV9 (t) + l10 (t)V9 (t) + l12 (t) (20.84)
for some integrable function l12 (t). Lemma B.3 in Appendix B then gives
V9 → 0 (20.85)
and hence
and
20.3 Simulations
System (20.1) and the adaptive observer of Theorem 20.1 are implemented for n =
m = 2, using the system parameters
Fig. 20.1 Actual (solid black) and estimated (dashed red) parameters Q̂ 0 and Ĉ1
System (20.1) with parameters (20.96) constitutes a stable system. The observer kernel equation (19.41) is solved using the method described in Appendix F.2, with the boundary condition (19.43) set to
$$m^\beta_{12} \equiv \frac{\sigma^{--}_{12}}{\mu_2 - \mu_1}, \tag{20.94}$$
so that the two boundary conditions of $m^\beta_{12}$ match at $x = \xi = 0$.
To excite the system, the actuation signals are set to
The estimated system parameters are seen in Fig. 20.1 to converge to their true
values after approximately 10 s of simulation.
Fig. 20.2 State and filter norms
System (20.1) and the controller of Theorem 20.2 are now implemented for n = m =
2, using the system parameters
System (20.1) with parameters (20.96) is open-loop unstable. The controller kernel
PDE (20.50) is solved using the method described in Appendix F.2, with the boundary
condition (20.57), set to
$$\hat k^v_{21} \equiv \frac{\sigma^{--}_{21}}{\mu_1 - \mu_2}, \tag{20.98}$$
so that the two boundary conditions of $\hat k^v_{21}$ match at $x = \xi = 1$.
It is seen from the norms shown in Fig. 20.2 that the controller successfully stabi-
lizes the system, with the state and filter norms converging asymptotically to zero as
predicted by theory. The control signals are also seen in Fig. 20.3 to converge to zero
and the estimated parameters are seen in Fig. 20.4 to converge to their true values,
although this was not proved.
Fig. 20.3 Control signals
Fig. 20.4 Actual (solid black) and estimated (dashed red) parameters Q̂ 0 and Ĉ1
20.4 Notes
It is evident that the adaptive observer of Theorem 20.1 and controller of Theorem
20.2 scale poorly. The number of required filters is (1 + 2nm)(n + m), so that for
the 2 + 2-case in the simulations in Sect. 20.3, a total of 36 filters is required. For the
4 + 4 case, the number of required filters is 264. The controller of Theorem 20.2 also
requires the kernel equations consisting of (20.50) and (20.57) to be solved at every time step, which quickly becomes a non-trivial task requiring substantial computational power.
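The filter counts quoted above follow directly from the formula $(1 + 2nm)(n + m)$; a one-line helper reproduces them.

```python
# Filter count (1 + 2nm)(n + m) from the discussion above: the 2 + 2 case of
# Sect. 20.3 needs 36 filters, and the 4 + 4 case needs 264.
def filter_count(n: int, m: int) -> int:
    return (1 + 2 * n * m) * (n + m)

for n, m in [(1, 1), (2, 2), (3, 3), (4, 4)]:
    print(n, m, filter_count(n, m))
```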
Appendix A
Projection Operators
In the case of vectors τ , ω and a, b, the operator acts element-wise. Often, the
shorthand notation for a one-parameter projection operator
is used.
Lemma A.1 Consider the projection operator (A.1). Assume τ is continuously dif-
ferentiable. Let
$$\dot{\hat\theta}(t) = \mathrm{proj}_{a,b}(\tau(t), \hat\theta(t)), \tag{A.3}$$
a ≤ θ̂0 ≤ b (A.4)
$$a \le \hat\theta(t) \le b \tag{A.5a}$$
$$-\tilde\theta^T(t)\,\mathrm{proj}_{a,b}(\tau(t), \hat\theta(t)) \le -\tilde\theta^T(t)\tau(t) \tag{A.5b}$$
where
and where the inequality (A.5a) is taken component-wise in the case of vector-valued
a, b, θ̂.
Proof For property (A.5b), we consider the three cases independently and
component-wise. In the first two cases, the projection operator is active, and the
left hand side of (A.5b) is zero. Moreover, if $\omega_i = a_i$ and $\tau_i \le 0$, then
$$-\tilde\theta_i(t)\tau_i(t) = -(\theta_i - \hat\theta_i(t))\tau_i(t) = -(\theta_i - a_i)\tau_i(t) \ge 0, \tag{A.7}$$
since $\theta_i \ge a_i$ and $\tau_i \le 0$. Similarly, if $\omega_i = b_i$ and $\tau_i \ge 0$, then
$$-\tilde\theta_i(t)\tau_i(t) = -(\theta_i - \hat\theta_i(t))\tau_i(t) = -(\theta_i - b_i)\tau_i(t) \ge 0, \tag{A.8}$$
since $\theta_i \le b_i$ and $\tau_i \ge 0$. Hence, the inequality holds for the first two cases. In the
last case, the projection is inactive, and inequality (A.5b) holds trivially with equality.
This proves (A.5b).
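A minimal element-wise implementation of the (discontinuous) projection operator can be sketched as follows; the precise variant in (A.1) is not reproduced in this excerpt, so the clipping conditions below follow the three cases treated in the proof.

```python
import numpy as np

# Element-wise projection proj_{a,b}(tau, omega): the update tau passes
# through unchanged unless omega sits on a bound of [a, b] and tau points
# out of the interval, in which case that component is zeroed.
def proj(tau, omega, a, b):
    tau = np.asarray(tau, dtype=float).copy()
    omega = np.asarray(omega, dtype=float)
    tau[(omega <= a) & (tau < 0)] = 0.0   # active at the lower bound
    tau[(omega >= b) & (tau > 0)] = 0.0   # active at the upper bound
    return tau
```

Integrating $\dot{\hat\theta} = \mathrm{proj}_{a,b}(\tau, \hat\theta)$ with this operator keeps $\hat\theta$ inside $[a, b]$ whenever $a \le \hat\theta(0) \le b$, which is property (A.5a).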
Appendix B
Lemmas for Proving Stability and Convergence
Lemma B.1 (Barbalat's Lemma) Consider the function $\phi : \mathbb{R}^+ \to \mathbb{R}$. If $\phi$ is uniformly continuous and $\lim_{t\to\infty}\int_0^t \phi(\tau)\,d\tau$ exists and is finite, then $\phi(t) \to 0$ as $t \to \infty$.
Lemma B.2 (Lemma 2.17 from Tao 2003) Consider a signal g satisfying
g ∈ L∞ (B.4)
and
Lemma B.3 Let v, l1 , l2 be real valued, nonnegative functions defined over R+ , and
let c be a positive constant. If l1 , l2 ∈ L1 , and v satisfies
© Springer Nature Switzerland AG 2019 399
H. Anfinsen and O. M. Aamo, Adaptive Control of Hyperbolic PDEs,
Communications and Control Engineering,
https://doi.org/10.1007/978-3-030-05879-1
then
v ∈ L1 ∩ L∞ , (B.7)
Proof Properties (B.6) and (B.8) were originally stated in Krstić et al. (1995), Lemma
B.6, while (B.9) was stated in Anfinsen and Aamo (2018), Lemma 2.
Introducing $w$ satisfying $\dot w(t) = -cw(t) + l_1(t)w(t) + l_2(t)$, $w(0) = v(0)$, the comparison principle gives $v(t) \le w(t)$.
We proceed by applying the variation of constants formula: multiplying by $\exp(ct - \int_0^t l_1(s)\,ds)$, we obtain
$$\frac{d}{dt}\left[ w(t)e^{ct - \int_0^t l_1(s)\,ds} \right] = l_2(t)\,e^{ct - \int_0^t l_1(s)\,ds}. \tag{B.11}$$
Integration from 0 to t gives
$$w(t) = w(0)e^{-ct + \int_0^t l_1(s)\,ds} + \int_0^t e^{-c(t-\tau) + \int_\tau^t l_1(s)\,ds}\,l_2(\tau)\,d\tau, \tag{B.12}$$
and hence
$$v(t) \le v(0)e^{-ct + \int_0^t l_1(s)\,ds} + \int_0^t e^{-c(t-\tau) + \int_\tau^t l_1(s)\,ds}\,l_2(\tau)\,d\tau, \tag{B.13}$$
and
which proves that v ∈ L∞ , and gives the bound (B.8a). Integrating (B.14) from 0 to
t, we obtain
$$\int_0^t v(\tau)\,d\tau \le \left[ \frac{1}{c}v(0)\left(1 - e^{-ct}\right) + \int_0^t\!\!\int_0^\tau e^{-c(\tau-s)}l_2(s)\,ds\,d\tau \right] e^{\|l_1\|_1}. \tag{B.16}$$
where
for all $t > T_1$. We will prove that such a $T_1$ exists by constructing it. Since $f \in \mathcal{L}_1$, there exists $T_0 > 0$ such that
$$\int_{T_0}^{\infty} f(s)\,ds < \epsilon_0 \tag{B.21}$$
and applying the comparison principle gives the following bound for $v(t)$:
$$v(t) \le v(0)e^{-ct} + \int_0^t e^{-c(t-\tau)}f(\tau)\,d\tau. \tag{B.23}$$
$$\epsilon_0 = \frac{1}{2}\epsilon_1, \tag{B.26}$$
we have
$$v(t) \le Me^{-ct} + \int_{T_0}^t f(\tau)\,d\tau < Me^{-ct} + \epsilon_0 = Me^{-ct} + \frac{1}{2}\epsilon_1. \tag{B.27}$$
Now, choosing $T_1$ as
$$T_1 = \max\left\{ T_0,\ \frac{1}{c}\log\frac{2M}{\epsilon_1} \right\} \tag{B.28}$$
we obtain
$$v(t) < \frac{1}{2}\epsilon_1 + \frac{1}{2}\epsilon_1 = \epsilon_1 \tag{B.29}$$
for all t > T1 , which proves (B.9).
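Lemma B.3 can be illustrated numerically. The sketch below integrates $\dot v = -cv + l_1(t)v + l_2(t)$ with integrable perturbations and checks that $v$ stays below the crude bound $(v(0) + \|l_2\|_1)e^{\|l_1\|_1}$ implied by (B.13), and converges to zero; the particular $c$, $l_1$, $l_2$ are illustrative choices.

```python
import numpy as np

# Numerical illustration of Lemma B.3: with l1, l2 integrable, solutions of
# v' = -c v + l1(t) v + l2(t) are bounded and converge to zero.
c, dt, T = 1.0, 1e-3, 50.0
v = v0 = 2.0
l1_int = l2_int = 0.0              # running L1 norms of l1 and l2
vmax = v
for k in range(int(T / dt)):
    t = k * dt
    l1 = l2 = 1.0 / (1.0 + t)**2   # integrable perturbations (L1 norm -> 1)
    v += dt * (-c * v + l1 * v + l2)
    l1_int += dt * l1
    l2_int += dt * l2
    vmax = max(vmax, v)

bound = (v0 + l2_int) * np.exp(l1_int)   # crude consequence of (B.13)
print(vmax, bound, v)
```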
Lemma B.4 (Lemma 12 from Anfinsen and Aamo 2017b) Let $v_1(t)$, $v_2(t)$, $\sigma(t)$, $l_1(t)$, $l_2(t)$, $h(t)$ and $f(t)$ be real-valued, nonnegative functions defined for $t \ge 0$. Suppose
l1 , l2 ∈ L1 (B.30a)
h ∈ L∞ (B.30b)
$$\int_0^t f(s)\,ds \le Ae^{Bt} \tag{B.30c}$$
Proof Proceeding as in the proof of Lemma B.3, using the comparison principle and applying the variation of constants formula, we find
$$\begin{aligned} v_2(t) \le{}& v_2(0)e^{-ct + \int_0^t l_1(s)\,ds} + \int_0^t e^{-c(t-s) + \int_s^t l_1(\tau)\,d\tau}\left[ l_2(s) + h(s) - a(1 - b\sigma(s))f(s) \right]ds \\ \le{}& \left[ v_2(0)e^{-ct} + \int_0^t e^{-c(t-s)}\left[ l_2(s) + h(s) - a(1 - b\sigma(s))f(s) \right]ds \right] e^{\|l_1\|_1} \end{aligned} \tag{B.31}$$
and
$$v_2(t)e^{-\|l_1\|_1} \le v_2(0)e^{-ct} + \|l_2\|_1 + \frac{1}{c}\|h\|_\infty - a\int_0^t e^{-c(t-s)}\left[1 - b\sigma(s)\right]f(s)\,ds. \tag{B.32}$$
Consider also the case where h ≡ 0, and integrate (B.32) from 0 to t, to obtain
$$e^{-\|l_1\|_1}\int_0^t v_2(\tau)\,d\tau \le \frac{1}{c}v_2(0) + \int_0^t\!\!\int_0^\tau e^{-c(\tau-s)}\left[ l_2(s) - a(1 - b\sigma(s))f(s) \right]ds\,d\tau. \tag{B.33}$$
For $v_2$ in (B.32) or $\lim_{t\to\infty}\int_0^t v_2(\tau)\,d\tau$ in (B.34) to be unbounded, the term in the last brackets of (B.32) and (B.34) must be negative on a set whose measure increases unboundedly as $t \to \infty$. Supposing this is the case, there must exist constants $T > 0$, $T_0 > 0$ and $\rho > 0$ so that
$$\int_t^{t+T_0} \sigma(\tau)\,d\tau \ge \rho \tag{B.35}$$
for $t > T$. Condition (B.35) is the requirement for persistence of excitation in (B.30e), meaning that $v_1$ and, from (B.30d), $\sigma$ converge exponentially to zero. There must therefore exist a time $T_1 > 0$ after which $\sigma(t) < \frac{1}{b}$ for all $t > T_1$, resulting in the expression in the brackets being positive for all $t > T_1$, contradicting the initial assumption. Hence $v_2 \in \mathcal{L}_\infty$, while $h \equiv 0$ results in $v_2 \in \mathcal{L}_1 \cap \mathcal{L}_\infty$.
Appendix C
Minkowski’s, Cauchy–Schwarz’
and Young’s Inequalities
Lemma C.1 (Minkowski’s inequality) For two scalar functions f (x), g(x) defined
for x ∈ [a, b], the version of Minkowski’s inequality used in this book is
$$\sqrt{\int_a^b (f(x) + g(x))^2\,dx} \le \sqrt{\int_a^b f^2(x)\,dx} + \sqrt{\int_a^b g^2(x)\,dx}. \tag{C.1}$$
Lemma C.2 (Cauchy–Schwarz’ inequality) For two vector functions f (x), g(x)
defined for x ∈ [a, b], the version of Cauchy–Schwarz’ inequality used in this book
is
$$\int_a^b f^T(x)g(x)\,dx \le \sqrt{\int_a^b f^T(x)f(x)\,dx}\,\sqrt{\int_a^b g^T(x)g(x)\,dx}. \tag{C.3}$$
This inequality is also a special case of Hölder’s inequality. A special case frequently
used, is for a scalar function h(x) defined for x ∈ [a, b],
$$\left( \int_a^b h(x)\,dx \right)^2 \le (b-a)\int_a^b h^2(x)\,dx \tag{C.4}$$
which follows from letting f = h and g ≡ 1, and squaring the result. For two vector
functions u, v and scalar w defined for x ∈ [0, 1], we have
and
$$\left( \int_0^1 w(x)\,dx \right)^2 \le \int_0^1 w^2(x)\,dx = \|w\|^2. \tag{C.6}$$
Lemma C.3 (Young’s inequality) For two vector functions f (x), g(x) defined for
x ∈ [a, b], the version of Young’s inequality used in this book is
$$\int_a^b f^T(x)g(x)\,dx \le \frac{\epsilon}{2}\int_a^b f^T(x)f(x)\,dx + \frac{1}{2\epsilon}\int_a^b g^T(x)g(x)\,dx \tag{C.7}$$
for some arbitrary positive constant $\epsilon$. For two vector functions $u, v$ defined for $x \in [0,1]$, we have
$$\int_0^1 u^T(x)v(x)\,dx \le \frac{\epsilon}{2}\int_0^1 u^T(x)u(x)\,dx + \frac{1}{2\epsilon}\int_0^1 v^T(x)v(x)\,dx = \frac{\epsilon}{2}\|u\|^2 + \frac{1}{2\epsilon}\|v\|^2. \tag{C.8}$$
Proof We have
$$0 \le \left( \sqrt{\epsilon}f(x) - \frac{1}{\sqrt{\epsilon}}g(x) \right)^T \left( \sqrt{\epsilon}f(x) - \frac{1}{\sqrt{\epsilon}}g(x) \right) = \epsilon f^T(x)f(x) - 2f^T(x)g(x) + \frac{1}{\epsilon}g^T(x)g(x) \tag{C.9}$$
which implies
$$2f^T(x)g(x) \le \epsilon f^T(x)f(x) + \frac{1}{\epsilon}g^T(x)g(x). \tag{C.10}$$
Integration from $a$ to $b$ and dividing by two yields the result.
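Since (C.10) holds pointwise, its integrated form (C.8) can be verified on a discretized interval; the test functions below are arbitrary choices.

```python
import numpy as np

# Numerical check of Young's inequality (C.8) on a discretized unit interval:
# int_0^1 u v dx <= (eps/2)||u||^2 + (1/(2 eps))||v||^2 for every eps > 0.
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
u = np.sin(3 * np.pi * x)
v = np.exp(-x) * np.cos(2 * np.pi * x)

lhs = np.sum(u * v) * dx                       # Riemann approximation of the integral
for eps in (0.1, 1.0, 10.0):
    rhs = 0.5 * eps * np.sum(u * u) * dx + 0.5 / eps * np.sum(v * v) * dx
    assert lhs <= rhs                          # pointwise 2ab <= eps a^2 + b^2/eps
print("Young's inequality verified for sampled eps")
```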
Appendix D
Well-Posedness of Kernel Equations
a, b ∈ C N (T ) (D.2a)
c, d ∈ C N ([0, 1]) (D.2b)
for some positive integer N , and T is the triangular domain defined in (1.1a).
Lemma D.1 The PIDE (D.1) has a unique solution F ∈ C N (T ). Moreover, a bound
on the solution is
where
Proof This proof is based on a similar proof given in Krstić and Smyshlyaev (2008).
We proceed by transforming (D.1) to integral equations using the method of charac-
teristics. Along the characteristic lines
xτ (τ ; x) = x − τ (D.5)
ξτ (τ ; ξ) = ξ − τ , (D.6)
$$\frac{d}{d\tau}F(x_\tau(\tau;x), \xi_\tau(\tau;\xi)) = \frac{d}{d\tau}F(x-\tau, \xi-\tau) = -F_x(x-\tau, \xi-\tau) - F_\xi(x-\tau, \xi-\tau) = -\int_{\xi-\tau}^{x-\tau} F(x-\tau, s)a(s, \xi-\tau)\,ds - b(x-\tau, \xi-\tau). \tag{D.7}$$
$$F(x,\xi) = F(x-\xi, 0) + \int_0^\xi\!\!\int_{\xi-\tau}^{x-\tau} F(x-\tau, s)a(s, \xi-\tau)\,ds\,d\tau + \int_0^\xi b(x-\tau, \xi-\tau)\,d\tau. \tag{D.8}$$
where
$$\Psi_0(x,\xi) = \int_0^\xi b(x-\tau, \xi-\tau)\,d\tau + d(x-\xi) \tag{D.10a}$$
$$\Psi[F](x,\xi) = \int_0^\xi\!\!\int_{\xi-\tau}^{x-\tau} F(x-\tau, s)a(s, \xi-\tau)\,ds\,d\tau + \int_0^{x-\xi} F(x-\xi, s)c(s)\,ds. \tag{D.10b}$$
This equation can be solved using successive approximations, similar to what was
done in the proof of Lemma 1.1. However, as the integral operator Ψ in this case
contains a double integral, the proof is naturally also more complicated. We form
the series
for n ∈ Z, n ≥ 1. Clearly,
provided the limit exists, and F ∈ C N (T ), since all the terms Fn ∈ C N (T ). Consider
the differences
and
and
$$F(x,\xi) = \sum_{n=0}^{\infty} \Delta F_n(x,\xi) \tag{D.16}$$
Therefore, the sum (D.16) uniformly converges to the solution F of (D.1). We proceed
by showing uniqueness. Consider two solutions F1 and F2 , and their difference
Due to linearity, F̃ must also satisfy the integral equation (D.9), but with Ψ0 (x, ξ) =
0, hence
Repeating the steps above, one can obtain a bound of the form
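The successive-approximation scheme used in the proof can be sketched numerically. Here it is applied to a stand-in Volterra equation $f(x) = 1 + \int_0^x f(s)\,ds$ with known solution $e^x$; the double-integral operator in (D.10b) is iterated in exactly the same way.

```python
import numpy as np

# Successive approximations F_{n+1} = Psi0 + Psi[F_n] for the toy Volterra
# equation f(x) = 1 + int_0^x f(s) ds, whose exact solution is exp(x).
x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
f = np.zeros_like(x)               # F_0 = 0
for n in range(30):                # each sweep adds one term of the series
    # cumulative trapezoidal integral int_0^x f(s) ds on the grid
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dx)))
    f = 1.0 + integral             # F_{n+1} = Psi0 + Psi[F_n]

err = np.max(np.abs(f - np.exp(x)))
print(err)
```

After 30 sweeps the iterate is the truncated exponential series, and the remaining error is dominated by the quadrature, not the iteration.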
$$\epsilon_1(x)F^1_x(x,\xi) + \epsilon_1(\xi)F^1_\xi(x,\xi) = g_1(x,\xi) + \sum_{i=1}^4 C_{1i}(x,\xi)F^i(x,\xi) \tag{D.25a}$$
$$\epsilon_1(x)F^2_x(x,\xi) - \epsilon_2(\xi)F^2_\xi(x,\xi) = g_2(x,\xi) + \sum_{i=1}^4 C_{2i}(x,\xi)F^i(x,\xi) \tag{D.25b}$$
$$\epsilon_2(x)F^3_x(x,\xi) - \epsilon_1(\xi)F^3_\xi(x,\xi) = g_3(x,\xi) + \sum_{i=1}^4 C_{3i}(x,\xi)F^i(x,\xi) \tag{D.25c}$$
$$\epsilon_2(x)F^4_x(x,\xi) + \epsilon_2(\xi)F^4_\xi(x,\xi) = g_4(x,\xi) + \sum_{i=1}^4 C_{4i}(x,\xi)F^i(x,\xi) \tag{D.25d}$$
$$F^1(x,0) = h_1(x) + q_1(x)F^2(x,0) + q_2(x)F^3(x,0) \tag{D.25e}$$
$$F^2(x,x) = h_2(x) \tag{D.25f}$$
$$F^3(x,x) = h_3(x) \tag{D.25g}$$
$$F^4(x,0) = h_4(x) + q_3(x)F^2(x,0) \tag{D.25h}$$
Theorem D.1 (Theorem A.1 from Coron et al. 2013) The PDEs (D.25) with parameters (D.26) have a unique $B(\mathcal{T})$ solution $(F^1, F^2, F^3, F^4)$. Moreover, there exist bounded constants $A, B$ so that
$$\|P_0^\alpha\|_\infty \le \bar P_0^\alpha, \quad \|P_0^\beta\|_\infty \le \bar P_0^\beta, \quad \forall (x,\xi) \in S, \quad P_0^\alpha, P_0^\beta \in C(S) \tag{D.31}$$
for some bounded constants $\bar P_0^\alpha, \bar P_0^\beta$, and where $S$ is defined in (1.1c).
Theorem D.3 (Lemma 4 from Anfinsen and Aamo 2016) The solution $(P^\alpha, P^\beta)$ to the PDE (10.79) is bounded in the $L_2$ sense for any bounded system parameters $\lambda, \mu, c_1, c_2$ and estimate $\hat q(t)$, and initial conditions $P_0^\alpha, P_0^\beta$ satisfying (10.80), and there exist constants $\bar P^\alpha, \bar P^\beta$ so that
$$\|P^\alpha\| \le \bar P^\alpha, \qquad \|P^\beta\| \le \bar P^\beta,$$
where $\bar P^\alpha, \bar P^\beta$ depend on the system parameters, $\hat q_0$, $P_0^\alpha$ and $P_0^\beta$. Moreover, if $\hat q(t)$ exponentially converges to $q$, then $(P^\alpha, P^\beta)$ converge exponentially in $L_2$ to the static solution given as the solution to (8.86).
Proof (Proof originally from Anfinsen and Aamo 2016) Let (M, N ) denote the
solution of the static equations (8.86). The difference between (P α , P β ), whose
dynamics is given in (10.79), and (M, N ), that is
where $\tilde M_0 = M - P_0^\alpha$, $\tilde N_0 = N - P_0^\beta$ are bounded, and $\tilde M_0, \tilde N_0 \in C(S)$. Consider the Lyapunov function candidate
where
for some constants a and b with a > 0 to be decided. The domain S is defined
in (1.1a). Differentiating (D.36a) with respect to time, and inserting the dynamics
(D.34a), we find
$$\begin{aligned} \dot V_1(t) ={}& -2\int_0^1\!\!\int_0^\xi e^{-b\xi}\lambda(x)\tilde M(x,\xi,t)\tilde M_x(x,\xi,t)\,dx\,d\xi \\ &-2\int_0^1\!\!\int_x^1 e^{-b\xi}\lambda(\xi)\tilde M(x,\xi,t)\tilde M_\xi(x,\xi,t)\,d\xi\,dx - 2\int_S e^{-b\xi}\lambda'(\xi)\tilde M^2(x,\xi,t)\,dS \end{aligned}$$
and, after integration by parts,
$$\begin{aligned} \dot V_1(t) ={}& -\int_0^1 e^{-b\xi}\lambda(\xi)\tilde M^2(\xi,\xi,t)\,d\xi + \int_0^1 e^{-b\xi}\lambda(0)\tilde M^2(0,\xi,t)\,d\xi \\ &+ \int_S e^{-b\xi}\lambda'(x)\tilde M^2(x,\xi,t)\,dS - \int_0^1 e^{-b}\lambda(1)\tilde M^2(x,1,t)\,dx \\ &+ \int_0^1 e^{-bx}\lambda(x)\tilde M^2(x,x,t)\,dx + \int_S e^{-b\xi}\lambda'(\xi)\tilde M^2(x,\xi,t)\,dS \end{aligned}$$
or, using Young's inequality on the last term and inserting the boundary condition (D.34d),
$$\begin{aligned} \dot V_1(t) \le{}& \int_0^1 e^{-b\xi}\lambda(0)\hat q^2(t)\tilde N^2(0,\xi,t)\,d\xi + \int_0^1 e^{-b\xi}\lambda(0)\tilde q^2(t)\bar N^2(0,\xi,t)\,d\xi \\ &+ \int_S e^{-b\xi}\left[\lambda'(x) - b\lambda(\xi) - \lambda'(\xi) + c_1(x)\right]\tilde M^2(x,\xi,t)\,dS \\ &- \int_0^1 e^{-b}\lambda(1)\tilde M^2(x,1,t)\,dx + \int_S e^{-b\xi}c_1(x)\tilde N^2(x,\xi,t)\,dS. \end{aligned} \tag{D.39}$$
Time differentiating (D.36b), using (D.34b) and (D.34c), yields in a similar way
$$\begin{aligned} \dot V_2(t) \le{}& -\int_0^1 e^{-b\xi}\mu(0)\tilde N^2(0,\xi,t)\,d\xi \\ &+ \int_S e^{-b\xi}\left[ c_2(x) - \lambda'(\xi) - \mu'(x) - b\lambda(\xi) \right]\tilde N^2(x,\xi,t)\,dS \\ &- \int_0^1 e^{-b}\lambda(1)\tilde N^2(x,1,t)\,dx + \int_S e^{-b\xi}c_2(x)\tilde M^2(x,\xi,t)\,dS. \end{aligned} \tag{D.40}$$
$$\begin{aligned} \dot V(t) \le{}& -\int_S e^{-b\xi}\left[ -\lambda'(x) + b\lambda(\xi) + \lambda'(\xi) - c_1(x) - ac_2(x) \right]\tilde M^2(x,\xi,t)\,dS \\ &- \int_S e^{-b\xi}\left[ -c_1(x) - ac_2(x) + a\lambda'(\xi) + a\mu'(x) + ab\lambda(\xi) \right]\tilde N^2(x,\xi,t)\,dS \\ &- \int_0^1 e^{-b\xi}\left[ a\mu(0) - \lambda(0)\hat q^2(t) \right]\tilde N^2(0,\xi,t)\,d\xi \\ &- \int_0^1 e^{-b}\lambda(1)\tilde M^2(x,1,t)\,dx - a\int_0^1 e^{-b}\lambda(1)\tilde N^2(x,1,t)\,dx \\ &+ \tilde q^2(t)\lambda(0)\int_0^1 e^{-b\xi}\bar N^2(0,\xi,t)\,d\xi. \end{aligned} \tag{D.41}$$
We require
$$a > 4\frac{\lambda(0)}{\mu(0)}\bar q_0^2 \tag{D.43}$$
where
$$\bar q_0 = \max\{|q|, |\bar q|\}. \tag{D.44}$$
Thus, choose
$$b > \max\left\{ \frac{\bar c_1 + a\bar c_2 + 2\bar\lambda_d}{\lambda},\ \frac{\bar c_1 + a\bar c_2 + a\bar\lambda_d + a\bar\mu_d}{a\lambda} \right\} \tag{D.47}$$
λ aλ
where
Additionally, we know that the last integral in (D.41) is well-defined and bounded.
We thus obtain
$$\begin{aligned} \dot V(t) \le{}& -k_1\int_S \tilde M^2(x,\xi,t)\,dS - k_2\int_S \tilde N^2(x,\xi,t)\,dS - k_3\int_0^1 \tilde N^2(0,\xi,t)\,d\xi \\ &- k_4\int_0^1 \tilde M^2(x,1,t)\,dx - k_5\int_0^1 \tilde N^2(x,1,t)\,dx + c_0\tilde q^2(t) \end{aligned} \tag{D.49}$$
for some positive constants $c_0$, $k_i$, $i = 1, \dots, 5$. This, along with the assumed boundedness of $\tilde q(t)$, proves that $V$, and hence $\tilde M$ and $\tilde N$, are bounded. Moreover, if $\tilde q(t)$ exponentially converges to zero, $\tilde M$ and $\tilde N$ will exponentially converge to zero. This can be seen from rewriting (D.49) into
for i = 1, . . . , n.
the Eqs. (D.51) admit a unique continuous solution on $\mathcal{T}$. Moreover, there exist bounded constants $A, B$ so that
Consider the PDEs defined over the triangular domain $\mathcal{T}$ defined in (1.1a)
$$\lambda_i(x)\partial_x K_{ij}(x,\xi) + \lambda_j(\xi)\partial_\xi K_{ij}(x,\xi) = \sum_{k=1}^n a_{kj}(x,\xi)K_{ik}(x,\xi) \tag{D.54a}$$
$$K_{ij}(x,x) = b_{ij}(x), \quad i, j = 1, 2, \dots, n,\ i \ne j \tag{D.54b}$$
$$K_{ij}(x,0) = \sum_{k=1}^{n-m} c_{kj}K_{i,m+k}(x,0), \quad 1 \le i \le j \le n \tag{D.54c}$$
$$K_{ij}(1,\xi) = d_{ij}(\xi), \quad 1 \le j < i \le m \ \text{ or } \ m+1 \le i < j \le n \tag{D.54d}$$
$$K_{ij}(x,0) = e_{ij}(x), \quad m+1 \le j \le i \le n \tag{D.54e}$$
with
$$\lambda_1(x) < \lambda_2(x) < \dots < \lambda_m(x) < 0, \quad \forall x \in [0,1], \tag{D.56a}$$
$$0 < \lambda_{m+1}(x) < \lambda_{m+2}(x) < \dots < \lambda_{m+n}(x), \quad \forall x \in [0,1]. \tag{D.56b}$$
Theorem D.5 There exists a unique piecewise continuous solution K to the PDEs
(D.54) with coefficients satisfying (D.55)–(D.56).
Proof This theorem is a slight variation of Hu et al. (2015), Theorem A.1. We omit
further details and instead refer the reader to Hu et al. (2015).
Consider the PDEs defined over the triangular domain $\mathcal{T}$ defined in (1.1a)
$$\Lambda^-(x)K^u_x(x,\xi) - K^u_\xi(x,\xi)\Lambda^+(\xi) = K^u(x,\xi)\Sigma^{++}(\xi) + K^u(x,\xi)(\Lambda^+)'(\xi) + K^v(x,\xi)\Sigma^{-+}(\xi) \tag{D.57a}$$
$$\Lambda^-(x)K^v_x(x,\xi) + K^v_\xi(x,\xi)\Lambda^-(\xi) = K^u(x,\xi)\Sigma^{+-}(\xi) - K^v(x,\xi)(\Lambda^-)'(\xi) + K^v(x,\xi)\Sigma^{--}(\xi) \tag{D.57b}$$
$$\Lambda^-(x)K^u(x,x) + K^u(x,x)\Lambda^+(x) = -\Sigma^{-+}(x) \tag{D.57c}$$
$$\Lambda^-(x)K^v(x,x) - K^v(x,x)\Lambda^-(x) = -\Sigma^{--}(x) \tag{D.57d}$$
$$K^v(x,0)\Lambda^-(0) - K^u(x,0)\Lambda^+(0)Q_0 = G(x) \tag{D.57e}$$
$$K^v_{ij}(1,\xi) = k^v_{ij}(\xi), \quad 1 \le j < i \le m \tag{D.57f}$$
where
$$K^u(x,\xi) = \{K^u_{ij}(x,\xi)\}_{1\le i\le m,\,1\le j\le n}, \qquad K^v(x,\xi) = \{K^v_{ij}(x,\xi)\}_{1\le i,j\le m} \tag{D.59}$$
and where
$$\Sigma^{-+}(x) = \{\sigma^{-+}_{ij}(x)\}_{1\le i\le m,\,1\le j\le n}, \qquad \Sigma^{--}(x) = \{\sigma^{--}_{ij}(x)\}_{1\le i,j\le m} \tag{D.60d}$$
$$Q_0 = \{q_{ij}\}_{1\le i\le m,\,1\le j\le n} \tag{D.60e}$$
with
$$-\mu_1(x) < -\mu_2(x) < \dots < -\mu_m(x) < 0 < \lambda_1(x) < \lambda_2(x) < \dots < \lambda_n(x) \tag{D.62}$$
Consider the PDE in F, evolving over the square domain $[0,1]^2$.
where
for all i, j = 1, 2, . . . , n.
Theorem D.7 There exists a unique solution F ∈ L 2 ([0, 1])n×n to the Eqs. (D.63).
Proof This was originally proved in Coron et al. (2017). Regarding $\xi$ as the time parameter, the PDE (D.63) is a standard time-dependent uncoupled hyperbolic system with only positive transport speeds $\lambda_i(x)/\lambda_j(\xi)$, and therefore admits a unique solution.
for all i, j = 1, 2, . . . , n.
Lemma D.2 There exists a unique solution G ∈ L 2 ([0, 1]2 )n×n to (D.66).
$$g_{ij}(x,\xi) = \begin{cases} 0 & \text{if } 1 \le i \le j \le n \\[4pt] a_{ij}(x,\xi) + \displaystyle\sum_{k=j+1}^{i-1}\int_0^1 b_{ik}(x,s)g_{kj}(s,\xi)\,ds & \text{otherwise.} \end{cases} \tag{D.69}$$
Specifically, each row of $G$ depends only on the rows above it. The components of $G$ can therefore be computed in cascade from the components of $A$ and $B$.
Appendix E
Additional Proofs
where
for some bounded kernels $K^u, K^v$, and bounded signal $f$. Let $\bar\sigma$ bound all elements in $\Sigma^{++}(x)$, $\Sigma^{+-}(x)$, $\Sigma^{-+}(x)$, $\Sigma^{--}(x)$, $\bar q$ bound all elements in $Q_0$, $\bar c$ bound all elements in $C_1$, $\bar d$ bound all elements in $d_1, d_2, d_3, d_4$ and $f$, $\bar\lambda$ bound all elements of $\Lambda^+$, $\bar\mu$ bound all elements of $\Lambda^-$, $\lambda$ bound all elements of $\Lambda^+$ from below, $\mu$ bound all elements of $\Lambda^-$ from below, and $\bar K$ bound all elements of $K^u$ and $K^v$.
Firstly, we prove (1.40). Consider the weighted sum of state norms
$$V_1(t) = \int_0^1 e^{\delta x}u^T(x,t)(\Lambda^+(x))^{-1}u(x,t)\,dx + a\int_0^1 v^T(x,t)(\Lambda^-(x))^{-1}v(x,t)\,dx \tag{E.4}$$
for some positive constants a and δ to be decided. Differentiating (E.4) with respect
to time and inserting the dynamics (E.1a)–(E.1b), we find
$$\begin{aligned} \dot V_1(t) ={}& -2\int_0^1 e^{\delta x}u^T(x,t)u_x(x,t)\,dx + 2\int_0^1 e^{\delta x}u^T(x,t)(\Lambda^+(x))^{-1}\Sigma^{++}(x)u(x,t)\,dx \\ &+ 2\int_0^1 e^{\delta x}u^T(x,t)(\Lambda^+(x))^{-1}\Sigma^{+-}(x)v(x,t)\,dx + 2\int_0^1 e^{\delta x}u^T(x,t)(\Lambda^+(x))^{-1}d_1(x,t)\,dx \\ &+ 2a\int_0^1 v^T(x,t)v_x(x,t)\,dx + 2a\int_0^1 v^T(x,t)(\Lambda^-(x))^{-1}\Sigma^{-+}(x)u(x,t)\,dx \\ &+ 2a\int_0^1 v^T(x,t)(\Lambda^-(x))^{-1}\Sigma^{--}(x)v(x,t)\,dx + 2a\int_0^1 v^T(x,t)(\Lambda^-(x))^{-1}d_2(x,t)\,dx. \end{aligned} \tag{E.5}$$
$$\begin{aligned} \cdots &+ \int_0^1 e^{\delta x}d_1^T(x,t)d_1(x,t)\,dx + av^T(1,t)v(1,t) - av^T(0,t)v(0,t) \\ &+ ap^2\bar\sigma^2\mu^{-2}\int_0^1 v^T(x,t)v(x,t)\,dx + a\int_0^1 u^T(x,t)u(x,t)\,dx \\ &+ ap\bar\sigma\mu^{-1}\int_0^1 v^T(x,t)v(x,t)\,dx + a\mu^{-1}\int_0^1 v^T(x,t)v(x,t)\,dx \\ &+ a\int_0^1 d_2^T(x,t)d_2(x,t)\,dx \end{aligned} \tag{E.6}$$
where
Inserting the boundary conditions (E.1c)–(E.1d) and the control law (E.3), we can
bound V̇1 (t) as
$$\begin{aligned} \dot V_1(t) \le{}& -\left[ e^{\delta} - 6ap^2(\bar c^2 + \bar g^2) \right]u^T(1,t)u(1,t) - \left[ a - 2p^2\bar q^2 \right]v^T(0,t)v(0,t) \\ &+ b_1\int_0^1 e^{\delta x}u^T(x,t)(\Lambda^+(x))^{-1}u(x,t)\,dx + b_2\int_0^1 v^T(x,t)(\Lambda^-(x))^{-1}v(x,t)\,dx + b_3\bar d^2 \end{aligned} \tag{E.8}$$
where we have inserted the boundary conditions, and defined the positive constants
Choosing
\[
a = 2 p^2 \bar q^2 + 1
\tag{E.10}
\]
and
we obtain
where
\[
k = \max\left\{ b_1,\ \frac{b_2}{a} \right\}
\tag{E.13}
\]
and hence
\[
V_1(t) \le V_1(0)\, e^{kt} + b_3 \bar d^2 \int_0^t e^{k(t-\tau)}\,d\tau
\tag{E.14}
\]
\[
V_1(t) \le \left( V_1(0) + \frac{b_3 \bar d^2}{k} \right) e^{kt}
\tag{E.15}
\]
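The step from (E.14) to (E.15) is a one-line check, evaluating the convolution integral under the standing assumption k > 0:

```latex
\int_0^t e^{k(t-\tau)}\, d\tau
  = e^{kt} \int_0^t e^{-k\tau}\, d\tau
  = \frac{e^{kt} - 1}{k}
  \le \frac{e^{kt}}{k},
\qquad\text{so}\qquad
V_1(t) \le V_1(0)\, e^{kt} + \frac{b_3 \bar d^2}{k}\, e^{kt}.
```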
where
\[
\phi_{u,i}(x) = \int_0^x \frac{d\gamma}{\lambda_i(\gamma)}
\tag{E.18a}
\]
\[
\phi_{v,i}(x) = \int_x^1 \frac{d\gamma}{\mu_i(\gamma)}.
\tag{E.18b}
\]
We note that φu,i (x) and φv,i (x) are strictly increasing and decreasing functions,
respectively, and therefore invertible. Along their characteristic lines, we have from
(E.1a)–(E.1b)
\[
\begin{aligned}
\frac{d}{ds}\, u_i(x_{u,i}(x,t,s), s) ={}& \sum_{j=1}^{n} \sigma^{++}_{ij}(x_{u,i}(x,t,s))\, u_j(x_{u,i}(x,t,s), s) \\
&+ \sum_{j=1}^{m} \sigma^{+-}_{ij}(x_{u,i}(x,t,s))\, v_j(x_{u,i}(x,t,s), s) + d_{1,i}(x_{u,i}(x,t,s), s)
\end{aligned}
\tag{E.19a}
\]
\[
\begin{aligned}
\frac{d}{ds}\, v_i(x_{v,i}(x,t,s), s) ={}& \sum_{j=1}^{n} \sigma^{-+}_{ij}(x_{v,i}(x,t,s))\, u_j(x_{v,i}(x,t,s), s) \\
&+ \sum_{j=1}^{m} \sigma^{--}_{ij}(x_{v,i}(x,t,s))\, v_j(x_{v,i}(x,t,s), s) + d_{2,i}(x_{v,i}(x,t,s), s).
\end{aligned}
\tag{E.19b}
\]
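Numerically, the transport time φ_{u,i}(x) in (E.18a) and its inverse (which exists since φ_{u,i} is strictly increasing, as noted below) can be evaluated by quadrature and bisection. A small sketch, assuming a hypothetical speed profile λ(γ) = 1 + γ, for which φ_u(x) = ln(1 + x):

```python
import numpy as np

def phi_u(x, lam, N=2001):
    """phi_u(x) = int_0^x d(gamma) / lam(gamma), trapezoidal rule (E.18a)."""
    g = np.linspace(0.0, x, N)
    f = 1.0 / lam(g)
    h = g[1] - g[0]
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))

def phi_u_inv(t, lam, tol=1e-10):
    """Invert phi_u on [0, 1] by bisection; valid because phi_u is
    strictly increasing whenever lam > 0."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi_u(mid, lam) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = lambda g: 1.0 + g   # hypothetical transport speed; phi_u(x) = ln(1 + x)
```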
we obtain
where
\[
\begin{aligned}
I_{u,i}[u,v](x,t) ={}& \sum_{j=1}^{n} \int_{\max\{0,\, s_{u,i}(x,t)\}}^{t} \sigma^{++}_{ij}(x_{u,i}(x,t,s))\, u_j(x_{u,i}(x,t,s), s)\,ds \\
&+ \sum_{j=1}^{m} \int_{\max\{0,\, s_{u,i}(x,t)\}}^{t} \sigma^{+-}_{ij}(x_{u,i}(x,t,s))\, v_j(x_{u,i}(x,t,s), s)\,ds
\end{aligned}
\tag{E.22a}
\]
\[
\begin{aligned}
I_{v,i}[u,v](x,t) ={}& \sum_{j=1}^{n} \int_{\max\{0,\, s_{v,i}(x,t)\}}^{t} \sigma^{-+}_{ij}(x_{v,i}(x,t,s))\, u_j(x_{v,i}(x,t,s), s)\,ds \\
&+ \sum_{j=1}^{m} \int_{\max\{0,\, s_{v,i}(x,t)\}}^{t} \sigma^{--}_{ij}(x_{v,i}(x,t,s))\, v_j(x_{v,i}(x,t,s), s)\,ds
\end{aligned}
\tag{E.22b}
\]
\[
D_{1,i}(x,t) = \int_{\max\{0,\, s_{u,i}(x,t)\}}^{t} d_{1,i}(x_{u,i}(x,t,s), s)\,ds
\tag{E.22c}
\]
\[
D_{2,i}(x,t) = \int_{\max\{0,\, s_{v,i}(x,t)\}}^{t} d_{2,i}(x_{v,i}(x,t,s), s)\,ds.
\tag{E.22d}
\]
\[
u_i(x,t) = H_{u,i}(x,t) + I_{u,i}[u,v](x,t) + D_{1,i}(x,t) + D_{3,i}(x,t) + C_{u,i}(x,t)
\tag{E.23a}
\]
\[
v_i(x,t) = H_{v,i}(x,t) + I_{v,i}[u,v](x,t) + D_{2,i}(x,t) + D_{4,i}(x,t) + C_{v,i}(x,t) + P_i(x,t) + F_i(x,t)
\tag{E.23b}
\]
where
\[
H_{u,i}(x,t) =
\begin{cases}
u_{0,i}(x_{u,i}(x,t,0)) & \text{if } t < \phi_{u,i}(x) \\
0 & \text{if } t \ge \phi_{u,i}(x)
\end{cases}
\tag{E.24a}
\]
\[
H_{v,i}(x,t) =
\begin{cases}
v_{0,i}(x_{v,i}(x,t,0)) & \text{if } t < \phi_{v,i}(x) \\
0 & \text{if } t \ge \phi_{v,i}(x)
\end{cases}
\tag{E.24b}
\]
and
\[
D_{3,i}(x,t) =
\begin{cases}
0 & \text{if } t < \phi_{u,i}(x) \\
d_{3,i}(t - \phi_{u,i}(x)) & \text{if } t \ge \phi_{u,i}(x)
\end{cases}
\tag{E.25a}
\]
\[
D_{4,i}(x,t) =
\begin{cases}
0 & \text{if } t < \phi_{v,i}(x) \\
d_{4,i}(t - \phi_{v,i}(x)) & \text{if } t \ge \phi_{v,i}(x)
\end{cases}
\tag{E.25b}
\]
with
\[
C_{u,i}(x,t) =
\begin{cases}
0 & \text{if } t < \phi_{u,i}(x) \\
\displaystyle\sum_{j=1}^{m} q_{ij}\, v_j(0,\, t - \phi_{u,i}(x)) & \text{if } t \ge \phi_{u,i}(x)
\end{cases}
\tag{E.26a}
\]
\[
C_{v,i}(x,t) =
\begin{cases}
0 & \text{if } t < \phi_{v,i}(x) \\
\displaystyle\sum_{j=1}^{n} \left( r_{ij} + g_{ij}(t - \phi_{v,i}(x)) \right) u_j(1,\, t - \phi_{v,i}(x)) & \text{if } t \ge \phi_{v,i}(x)
\end{cases}
\tag{E.26b}
\]
and
\[
P_i(x,t) =
\begin{cases}
0 & \text{if } t < \phi_{v,i}(x) \\
p_i(t - \phi_{v,i}(x)) & \text{if } t \ge \phi_{v,i}(x)
\end{cases}
\tag{E.27}
\]
where
\[
p_i(t) = \sum_{j=1}^{n} \int_0^1 K^u_{ij}(\xi,t)\, u_j(\xi,t)\,d\xi
 + \sum_{j=1}^{m} \int_0^1 K^v_{ij}(\xi,t)\, v_j(\xi,t)\,d\xi
\tag{E.28}
\]
and
\[
F_i(x,t) =
\begin{cases}
0 & \text{if } t < \phi_{v,i}(x) \\
f_i(t - \phi_{v,i}(x)) & \text{if } t \ge \phi_{v,i}(x).
\end{cases}
\tag{E.29}
\]
We now consider the terms in Cu,i and Cv,i , and insert (E.23), recalling that
t ≤ T = min{t¯u , t¯v }, to obtain
\[
C_{u,i}(x,t) =
\begin{cases}
0 & \text{if } t < \phi_{u,i}(x) \\
\displaystyle\sum_{j=1}^{m} q_{ij}\Big[ H_{v,j}(0,\, t - \phi_{u,i}(x)) + I_{v,j}[u,v](0,\, t - \phi_{u,i}(x)) + D_{2,j}(0,\, t - \phi_{u,i}(x)) \Big] & \text{if } t \ge \phi_{u,i}(x)
\end{cases}
\tag{E.30a}
\]
\[
C_{v,i}(x,t) =
\begin{cases}
0 & \text{if } t < \phi_{v,i}(x) \\
\displaystyle\sum_{j=1}^{n} \left( r_{ij} + g_{ij}(t - \phi_{v,i}(x)) \right)\Big[ H_{u,j}(0,\, t - \phi_{v,i}(x)) + I_{u,j}[u,v](0,\, t - \phi_{v,i}(x)) + D_{1,j}(0,\, t - \phi_{v,i}(x)) \Big] & \text{if } t \ge \phi_{v,i}(x).
\end{cases}
\tag{E.30b}
\]
\[
u_i(x,t) = H_{u,i}(x,t) + I_{u,i}[u,v](x,t) + D_{1,i}(x,t) + D_{3,i}(x,t) + J_{u,i}[u,v](x,t) + Q_{u,i}(x,t)
\tag{E.31a}
\]
\[
v_i(x,t) = H_{v,i}(x,t) + I_{v,i}[u,v](x,t) + D_{2,i}(x,t) + D_{4,i}(x,t) + J_{v,i}[u,v](x,t) + Q_{v,i}(x,t) + F_i(x,t) + P_i(x,t)
\tag{E.31b}
\]
where
\[
J_{u,i}[u,v](x,t) =
\begin{cases}
0 & \text{if } t < \phi_{u,i}(x) \\
\displaystyle\sum_{j=1}^{m} q_{ij}\, I_{v,j}[u,v](0,\, t - \phi_{u,i}(x)) & \text{if } t \ge \phi_{u,i}(x)
\end{cases}
\tag{E.32a}
\]
\[
J_{v,i}[u,v](x,t) =
\begin{cases}
0 & \text{if } t < \phi_{v,i}(x) \\
\displaystyle\sum_{j=1}^{n} \left( r_{ij} + g_{ij}(t - \phi_{v,i}(x)) \right) I_{u,j}[u,v](0,\, t - \phi_{v,i}(x)) & \text{if } t \ge \phi_{v,i}(x)
\end{cases}
\tag{E.32b}
\]
and
\[
Q_{u,i}(x,t) =
\begin{cases}
0 & \text{if } t < \phi_{u,i}(x) \\
\displaystyle\sum_{j=1}^{m} q_{ij}\Big[ H_{v,j}(0,\, t - \phi_{u,i}(x)) + D_{2,j}(0,\, t - \phi_{u,i}(x)) \Big] & \text{if } t \ge \phi_{u,i}(x)
\end{cases}
\tag{E.33a}
\]
\[
Q_{v,i}(x,t) =
\begin{cases}
0 & \text{if } t < \phi_{v,i}(x) \\
\displaystyle\sum_{j=1}^{n} \left( r_{ij} + g_{ij}(t - \phi_{v,i}(x)) \right)\Big[ H_{u,j}(0,\, t - \phi_{v,i}(x)) + D_{1,j}(0,\, t - \phi_{v,i}(x)) \Big] & \text{if } t \ge \phi_{v,i}(x).
\end{cases}
\tag{E.33b}
\]
Next, we define
\[
\begin{aligned}
w(x,t) &= \begin{bmatrix} u_1(x,t) & u_2(x,t) & \cdots & u_n(x,t) & v_1(x,t) & v_2(x,t) & \cdots & v_m(x,t) \end{bmatrix}^T \\
&= \begin{bmatrix} w_1(x,t) & w_2(x,t) & \cdots & w_{n+m}(x,t) \end{bmatrix}^T
\end{aligned}
\tag{E.34}
\]
where
\[
\psi(x,t) =
\begin{bmatrix}
[H_{u,1} + D_{1,1} + D_{3,1} + Q_{u,1}](x,t) \\
[H_{u,2} + D_{1,2} + D_{3,2} + Q_{u,2}](x,t) \\
\vdots \\
[H_{u,n} + D_{1,n} + D_{3,n} + Q_{u,n}](x,t) \\
[H_{v,1} + D_{2,1} + D_{4,1} + Q_{v,1} + F_1 + P_1](x,t) \\
\vdots \\
[H_{v,m} + D_{2,m} + D_{4,m} + Q_{v,m} + F_m + P_m](x,t)
\end{bmatrix}
\tag{E.36}
\]
and
\[
\Psi[w](x,t) =
\begin{bmatrix}
I_{u,1}[u,v](x,t) + J_{u,1}[u,v](x,t) \\
I_{u,2}[u,v](x,t) + J_{u,2}[u,v](x,t) \\
\vdots \\
I_{u,n}[u,v](x,t) + J_{u,n}[u,v](x,t) \\
I_{v,1}[u,v](x,t) + J_{v,1}[u,v](x,t) \\
\vdots \\
I_{v,m}[u,v](x,t) + J_{v,m}[u,v](x,t)
\end{bmatrix}.
\tag{E.37}
\]
We note that ψ(x, t) is bounded for all x ∈ [0, 1] and t ∈ [0, T ] since it is a function
of the bounded initial states, bounded system parameters, bounded source terms and
pi , which is a weighted L 2 norm of the system states, and hence bounded by (1.40).
Let ψ̄ be such that
We will prove that the series (E.42) converges by induction. Suppose that
\[
|\Delta w^q(x,t)|_\infty \le \bar\psi\, C^q\, \frac{t^q}{q!}
\tag{E.43}
\]
\[
\begin{aligned}
\left| I_{u,i}[\Delta w^q](x,t) \right|
\le{}& \sum_{j=1}^{n} \int_{\max\{0,\, s_{u,i}(x,t)\}}^{t} \left| \sigma^{++}_{ij}(x_{u,i}(x,t,s))\, \Delta u^q_j(x_{u,i}(x,t,s), s) \right| ds \\
&+ \sum_{j=1}^{m} \int_{\max\{0,\, s_{u,i}(x,t)\}}^{t} \left| \sigma^{+-}_{ij}(x_{u,i}(x,t,s))\, \Delta v^q_j(x_{u,i}(x,t,s), s) \right| ds \\
\le{}& \bar\sigma \sum_{j=1}^{n} \int_{\max\{0,\, s_{u,i}(x,t)\}}^{t} |\Delta u^q_j(x_{u,i}(x,t,s), s)|\,ds
 + \bar\sigma \sum_{j=1}^{m} \int_{\max\{0,\, s_{u,i}(x,t)\}}^{t} |\Delta v^q_j(x_{u,i}(x,t,s), s)|\,ds \\
\le{}& \bar\sigma (n+m) \int_{\max\{0,\, s_{u,i}(x,t)\}}^{t} |\Delta w^q(x_{u,i}(x,t,s), s)|_\infty\,ds
\end{aligned}
\tag{E.46}
\]
for all i = 1, . . . , n, x ∈ [0, 1] and t ∈ [0, T ]. Similar derivations for Iv,i [Δwq ](x, t)
and Jv,i [Δwq ](x, t) give the bounds
\[
|I_{v,i}[\Delta w^q](x,t)| \le \bar\psi\, C^q\, \bar\sigma (n+m)\, \frac{t^{q+1}}{(q+1)!}
\tag{E.49}
\]
\[
|J_{v,i}[\Delta w^q](x,t)| \le \bar\psi\, C^q\, (\bar c + \bar g)\bar\sigma\, n (n+m)\, \frac{t^{q+1}}{(q+1)!}
\tag{E.50}
\]
and hence
\[
|\Psi[\Delta w^q](x,t)|_\infty \le \bar\psi\, C^q (n+m)\bar\sigma \left[ 2 + \bar q m + (\bar c + \bar g) n \right] \frac{t^{q+1}}{(q+1)!}
 \le \bar\psi\, C^{q+1}\, \frac{t^{q+1}}{(q+1)!}
\tag{E.51}
\]
\[
|\Delta w^q(x,t)|_\infty \le \bar\psi\, C^q\, \frac{t^q}{q!}
\tag{E.52}
\]
for all q, and hence
\[
|w(x,t)|_\infty \le \sum_{q=0}^{\infty} |\Delta w^q(x,t)|_\infty \le \bar\psi \sum_{q=0}^{\infty} C^q \frac{t^q}{q!} = \bar\psi\, e^{Ct}
\tag{E.53}
\]
for all x ∈ [0, 1] and t ∈ [0, T], which proves that u(x, t) and v(x, t) are bounded
for all x ∈ [0, 1] and t ∈ [0, T].
The above result also implies that u i (x, T ) and v j (x, T ) for i = 1, 2, . . . , n, j =
1, 2, . . . , m, are bounded for all x ∈ [0, 1]. By now shifting time T units and repeating
the above line of reasoning, we obtain that u i (x, t) and v j (x, t) for i = 1, 2, . . . , n,
j = 1, 2, . . . , m, are bounded for all x ∈ [0, 1] and t ∈ [T, 2T]. Continuing in this
manner proves the theorem.
Proof We start by proving this for the L 2 -norm. Consider a system u(x, t) defined
for x ∈ [0, 1], t ≥ 0 with initial condition u(x, 0) = u 0 (x). Assume u ≡ 0 after a
finite time T . By Theorem 1.1, we have
which proves exponential convergence of u to zero in the L 2 -sense. The proof for
the ∞-norm is similar and omitted.
Bound on V̇4 :
From differentiating (9.44a) with respect to time, inserting the dynamics (9.36a), and
integrating by parts, we find
\[
\begin{aligned}
\dot V_4(t) ={}& -\lambda e^{-\delta} w^2(1,t) + \lambda w^2(0,t) - \lambda\delta \int_0^1 e^{-\delta x} w^2(x,t)\,dx \\
&+ 2\int_0^1 e^{-\delta x} w^2(x,t)\,\hat c_{11}(t)\,dx + 2\int_0^1 e^{-\delta x} w(x,t)\,\hat c_{12}(t)\, z(x,t)\,dx \\
&+ 2\int_0^1 e^{-\delta x} w(x,t) \int_0^x \omega(x,\xi,t)\, w(\xi,t)\,d\xi\,dx
 + 2\int_0^1 e^{-\delta x} w(x,t) \int_0^x \kappa(x,\xi,t)\, z(\xi,t)\,d\xi\,dx \\
&+ 2\int_0^1 e^{-\delta x} w(x,t)\,\hat c_{11}(t)\, e(x,t)\,dx
 + 2\int_0^1 e^{-\delta x} w(x,t)\,\hat c_{12}(t)\, \epsilon(x,t)\,dx \\
&+ 2\int_0^1 e^{-\delta x} w(x,t)\,\rho\, e(x,t)\, \|\epsilon(t)\|^2\,dx.
\end{aligned}
\tag{E.57}
\]
Using
\[
\begin{aligned}
\int_0^1 e^{-\delta x} \left( \int_0^x w(\xi,t)\,d\xi \right)^2 dx
&\le \int_0^1 e^{-\delta x} \int_0^x w^2(\xi,t)\,d\xi\,dx \\
&= -\frac{1}{\delta}\left[ e^{-\delta x} \int_0^x w^2(\xi,t)\,d\xi \right]_0^1 + \frac{1}{\delta}\int_0^1 e^{-\delta x} w^2(x,t)\,dx \\
&= \frac{1}{\delta}\int_0^1 \left( e^{-\delta x} - e^{-\delta} \right) w^2(x,t)\,dx
 \le \int_0^1 e^{-\delta x} w^2(x,t)\,dx
\end{aligned}
\tag{E.58}
\]
\[
\begin{aligned}
\dot V_4(t) \le{}& -\lambda e^{-\delta} w^2(1,t) + \lambda w^2(0,t)
 - \left[ \lambda\delta - 2\bar c_{11} - \bar\omega^2 - 5 \right] \int_0^1 e^{-\delta x} w^2(x,t)\,dx \\
&+ (\bar c_{12}^2 + \bar\kappa^2)\|z(t)\|^2 + \bar c_{11}^2\|e(t)\|^2 + \bar c_{12}^2\|\epsilon(t)\|^2
 + 2\rho\|w(t)\|\,\|e(t)\|\,\|\epsilon(t)\|^2.
\end{aligned}
\tag{E.59}
\]
Consider the last term. We have, using Young's and Minkowski's inequalities,
\[
\begin{aligned}
2\rho\|w(t)\|\,\|e(t)\|\,\|\epsilon(t)\|^2
&\le \rho_1\rho^2\|w(t)\|^2\|e(t)\|^2\|\epsilon(t)\|^2
 + \frac{1}{\rho_1}\left( \|w(t)\| + \|e(t)\| + \|T^{-1}[w,z](t)\| + \|\epsilon(t)\| \right)^2 \\
&\le \rho_1\rho^2\|w(t)\|^2\|e(t)\|^2\|\epsilon(t)\|^2
 + \frac{4}{\rho_1}\left( \|w(t)\|^2 + \|T^{-1}[w,z](t)\|^2 + \|e(t)\|^2 + \|\epsilon(t)\|^2 \right) \\
&\le \rho_1\rho^2\|w(t)\|^2\|e(t)\|^2\|\epsilon(t)\|^2
 + \frac{4}{\rho_1}\left( (1 + 2A_3^2)\|w(t)\|^2 + 2A_4^2\|z(t)\|^2 + \|e(t)\|^2 + \|\epsilon(t)\|^2 \right)
\end{aligned}
\tag{E.60}
\]
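For reference, the two elementary inequalities invoked here (and repeatedly in the bounds that follow) are Young's inequality and the quadratic mean bound, the latter stated here for N terms (N = 4 gives the factor 4/ρ₁ above):

```latex
2ab \le \rho_1 a^2 + \frac{1}{\rho_1} b^2 \quad (\rho_1 > 0),
\qquad
\Big( \sum_{i=1}^{N} a_i \Big)^{2} \le N \sum_{i=1}^{N} a_i^{2}.
```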
Defining
and
we obtain
V̇4 (t) ≤ h 1 z 2 (0, t) − [λδ − h 2 ] V4 (t) + h 3 V5 (t) + l1 (t)V4 (t) + l2 (t). (E.64)
Bound on V̇5:
From differentiating (9.44b) with respect to time, inserting the dynamics (9.36b),
and integrating by parts, we find
\[
\begin{aligned}
\dot V_5(t) ={}& \mu e^{k} z^2(1,t) - \mu z^2(0,t) - \mu k \int_0^1 e^{kx} z^2(x,t)\,dx
 + 2\int_0^1 e^{kx} z^2(x,t)\,\hat c_{22}(t)\,dx \\
&- 2\int_0^1 e^{kx} z(x,t)\,\lambda \hat K^u(x,0,t)\, q(t)\, \epsilon(0,t)\,dx
 - 2\int_0^1 e^{kx} z(x,t)\,\lambda \hat K^u(x,0,t)\, \tilde q(t)\, z(0,t)\,dx \\
&+ 2\int_0^1 e^{kx} z(x,t)\,\lambda \hat K^u(x,0,t)\, e(0,t)\,dx
 - 2\int_0^1 e^{kx} z(x,t) \int_0^x \hat K^u_t(x,\xi,t)\, w(\xi,t)\,d\xi\,dx \\
&- 2\int_0^1 e^{kx} z(x,t) \int_0^x \hat K^v_t(x,\xi,t)\, T^{-1}[w,z](\xi,t)\,d\xi\,dx \\
&+ 2\int_0^1 e^{kx} z(x,t)\, T[\hat c_{11} e + \hat c_{12}\epsilon,\ \hat c_{21} e + \hat c_{22}\epsilon](x,t)\,dx
 + 2\rho \int_0^1 e^{kx} z(x,t)\, T[e,\epsilon](x,t)\, \|\epsilon(t)\|^2\,dx.
\end{aligned}
\tag{E.65}
\]
and
Bound on V̇1 :
From differentiating (9.116a) with respect to time, inserting the dynamics (9.109a)
and integrating by parts, we find
\[
\begin{aligned}
\dot V_1(t) \le{}& w^2(0,t) - \delta \int_0^1 e^{-\delta x} w^2(x,t)\,dx
 + 2\int_0^1 e^{-\delta x} w(x,t)\,\hat\theta(x,t)\, z(x,t)\,dx \\
&+ 2\int_0^1 e^{-\delta x} w(x,t)\,\hat\theta(x,t)\,\hat\epsilon(x,t)\,dx
 + 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\, w(x,t) \int_0^x \omega(x,\xi,t)\, w(\xi,t)\,d\xi\,dx \\
&+ 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\, w(x,t) \int_0^x b(x,\xi,t)\, z(\xi,t)\,d\xi\,dx
 + 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\, w(x,t)\,\dot{\hat q}(t)\,\eta(x,t)\,dx \\
&+ 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\, w(x,t) \int_0^x \hat\theta_t(\xi,t)\, M(x,\xi,t)\,d\xi\,dx.
\end{aligned}
\tag{E.71}
\]
where we have inserted the boundary condition (9.109c). Applying Cauchy–Schwarz'
inequality to the double integrals yields
\[
\begin{aligned}
\dot V_1(t) \le{}& 2\bar q^2 z^2(0,t) + 2\bar q^2\hat\epsilon^2(0,t) + \bar\theta^2\|z(t)\|^2 + \bar\theta^2\|\hat\epsilon(t)\|^2
 - (\delta - 6 - \lambda^{-2}\bar\omega^2)\int_0^1 e^{-\delta x} w^2(x,t)\,dx \\
&+ \lambda^{-2}\bar b^2\|z(t)\|^2 + \lambda^{-2}\dot{\hat q}^2(t)\|\eta(t)\|^2 + \lambda^{-2}\|\hat\theta_t(t)\|^2\|M(t)\|^2
\end{aligned}
\tag{E.73}
\]
yield
\[
\begin{aligned}
\dot V_1(t) \le{}& 2\bar q^2 z^2(0,t) + 2\bar q^2 (1 + \|n_0(t)\|^2)\,\frac{\hat\epsilon^2(0,t)}{1 + \|n_0(t)\|^2}
 - (\delta - 6 - \lambda^{-2}\bar\omega^2)\int_0^1 e^{-\delta x} w^2(x,t)\,dx + \bar\theta^2\|z(t)\|^2 \\
&+ \bar\theta^2 (1 + \|N(t)\|^2)\,\frac{\|\hat\epsilon(t)\|^2}{1 + \|N(t)\|^2} + \lambda^{-2}\bar b^2\|z(t)\|^2
 + \lambda^{-2}\dot{\hat q}^2(t)\|\eta(t)\|^2 + \lambda^{-2}\|\hat\theta_t(t)\|^2\|M(t)\|^2.
\end{aligned}
\tag{E.76}
\]
Bound on V̇2:
Differentiating (9.116b) with respect to time, inserting the dynamics (9.109), integrating by parts and using Young's inequality yield
\[
\begin{aligned}
\dot V_2(t) \le{}& -z^2(0,t) - \|z(t)\|^2 + \rho_1 \int_0^1 (1+x)\mu^{-1}(x) z^2(x,t)\,dx
 + \frac{2}{\rho_1}\,\mu^{-1}\bar K^2\bar\lambda^2\,\hat q^2(t)\,\hat\epsilon^2(0,t) \\
&+ \rho_2 \int_0^1 (1+x)\mu^{-1}(x) z^2(x,t)\,dx
 + \frac{2}{\rho_2}\,\mu^{-1}\int_0^1 T\Big[ \dot{\hat q}\eta + \int_0^x \hat\theta_t(\xi,t) M(x,\xi,t)\,d\xi,\ \int_x^1 \hat\kappa_t(\xi,t) N(x,\xi,t)\,d\xi \Big]^2(x,t)\,dx \\
&+ \rho_3 \int_0^1 (1+x)\mu^{-1}(x) z^2(x,t)\,dx
 + \frac{2}{\rho_3}\,\mu^{-1}\int_0^1 \left( \int_0^x \hat K^u_t(x,\xi,t)\, w(\xi,t)\,d\xi \right)^2 dx \\
&+ \rho_4 \int_0^1 (1+x)\mu^{-1}(x) z^2(x,t)\,dx
 + \frac{2}{\rho_4}\,\mu^{-1}\int_0^1 \left( \int_0^x \hat K^v_t(x,\xi,t)\, T^{-1}[w,z](\xi,t)\,d\xi \right)^2 dx
\end{aligned}
\tag{E.80}
\]
\[
\begin{aligned}
\dot V_2(t) \le{}& -z^2(0,t) - \frac{1}{4}\mu \int_0^1 (1+x)\mu^{-1}(x) z^2(x,t)\,dx
 + 32\mu^{-2}\bar K^2\bar\lambda^2\bar q^2 (1 + \|n_0(t)\|^2)\,\frac{\hat\epsilon^2(0,t)}{1 + \|n_0(t)\|^2} \\
&+ 64\mu^{-2} G_1^2\,\dot{\hat q}^2(t)\|\eta(t)\|^2 + 64\mu^{-2} G_1^2\|\hat\theta_t(t)\|^2\|M(t)\|^2
 + 32\mu^{-2} G_2^2\|\hat\kappa_t(t)\|^2\|N(t)\|^2 \\
&+ 32\mu^{-2}\|\hat K^u_t(t)\|^2\|w(t)\|^2 + 32\mu^{-2} G_3^2\|\hat K^v_t(t)\|^2\|w(t)\|^2
 + 32\mu^{-2} G_4^2\|\hat K^v_t(t)\|^2\|z(t)\|^2.
\end{aligned}
\tag{E.81}
\]
Specifically, we used
\[
\begin{aligned}
\int_0^1 T\Big[ \dot{\hat q}\eta + \int_0^x \hat\theta_t(\xi) M(x,\xi)\,d\xi,\ \int_x^1 \hat\kappa_t(\xi) N(x,\xi)\,d\xi \Big]^2(x,t)\,dx
\le{}& G_1^2 \int_0^1 \dot{\hat q}^2(t)\,\eta^2(x,t)\,dx
 + G_1^2 \int_0^1 \left( \int_0^x \hat\theta_t(\xi,t) M(x,\xi,t)\,d\xi \right)^2 dx \\
&+ G_2^2 \int_0^1 \left( \int_x^1 \hat\kappa_t(\xi,t) N(x,\xi,t)\,d\xi \right)^2 dx \\
\le{}& G_1^2\,\dot{\hat q}^2(t)\|\eta(t)\|^2 + G_1^2\|\hat\theta_t(t)\|^2\|M(t)\|^2 + G_2^2\|\hat\kappa_t(t)\|^2\|N(t)\|^2
\end{aligned}
\tag{E.82}
\]
and
\[
\begin{aligned}
\int_0^1 \left( \int_0^x \hat K^v_t(x,\xi,t)\, T^{-1}[w,z](\xi,t)\,d\xi \right)^2 dx
&\le \int_0^1 \int_0^x (\hat K^v_t(x,\xi,t))^2\,d\xi \int_0^x (T^{-1}[w,z](\xi,t))^2\,d\xi\,dx \\
&\le \int_0^1 \int_0^x (\hat K^v_t(x,\xi,t))^2\,d\xi\,dx \int_0^1 (T^{-1}[w,z](\xi,t))^2\,d\xi \\
&\le \|\hat K^v_t(t)\|^2 \int_0^1 (T^{-1}[w,z](x,t))^2\,dx \\
&\le \|\hat K^v_t(t)\|^2 \left( G_3^2\|w(t)\|^2 + G_4^2\|z(t)\|^2 \right).
\end{aligned}
\tag{E.83}
\]
Bound on V̇3:
We find
\[
\dot V_3(t) \le -\|\eta(t)\|^2 + 4 z^2(0,t) + 4\hat\epsilon^2(0,t)
\tag{E.87}
\]
and hence
\[
\dot V_3(t) \le -\frac{1}{2}\mu V_3(t) + 4 z^2(0,t) + 4\,\frac{\hat\epsilon^2(0,t)}{1 + \|n_0(t)\|^2}\,\|n_0(t)\|^2 + l_{11}(t)
\tag{E.89}
\]
where
\[
l_{11}(t) = \frac{\hat\epsilon^2(0,t)}{1 + \|n_0(t)\|^2}
\tag{E.90}
\]
Bound on V̇4:
We find
\[
\begin{aligned}
\dot V_4(t) &= -2\int_0^1 \int_\xi^1 (2-x)\, M(x,\xi,t)\, M_x(x,\xi,t)\,dx\,d\xi \\
&= -\int_0^1 M^2(1,\xi,t)\,d\xi + \int_0^1 (2-\xi)\, M^2(\xi,\xi,t)\,d\xi - \|M(t)\|^2 \\
&\le 2\|v(t)\|^2 - \|M(t)\|^2 \le 4\|\hat v(t)\|^2 + 4\|\hat\epsilon(t)\|^2 - \|M(t)\|^2 \\
&\le 4 G_1^2\|w(t)\|^2 + 4 G_2^2\|z(t)\|^2 + 4\|\hat\epsilon(t)\|^2 - \|M(t)\|^2
\end{aligned}
\tag{E.91}
\]
and hence
\[
\dot V_4(t) \le -\frac{1}{2}\lambda V_4(t) + h_4 e^{\delta} V_1(t) + h_5 V_2(t) + l_{12}(t) V_5(t) + l_{13}(t)
\tag{E.93}
\]
for the positive constants
\[
h_4 = 4 G_1^2 \bar\lambda, \qquad h_5 = 4 G_2^2 \bar\mu
\tag{E.94}
\]
and
\[
l_{12}(t) = \bar\mu\, l_{13}(t), \qquad l_{13}(t) = 4\,\frac{\|\hat\epsilon(t)\|^2}{1 + \|N(t)\|^2}.
\tag{E.95}
\]
Bound on V̇5:
Finally, we find
\[
\begin{aligned}
\dot V_5(t) &= 2\int_0^1 \int_x^1 (1+x)\, N(x,\xi,t)\, N_x(x,\xi,t)\,d\xi\,dx \\
&= 2\|u(t)\|^2 - \|n_0(t)\|^2 - \|N(t)\|^2 \\
&\le 4\|\hat u(t)\|^2 + 4\|\hat e(t)\|^2 - \|n_0(t)\|^2 - \|N(t)\|^2 \\
&\le 4\|w(t)\|^2 + 4\|\hat e(t)\|^2 - \|n_0(t)\|^2 - \|N(t)\|^2
\end{aligned}
\tag{E.96}
\]
and hence
\[
\dot V_5(t) \le -\|n_0(t)\|^2 - \frac{1}{2}\mu V_5(t) + h_6 e^{\delta} V_1(t) + l_{14}(t) V_3(t) + l_{14}(t) V_4(t) + l_{15}(t)
\tag{E.98}
\]
where
\[
h_6 = 4\bar\lambda
\tag{E.99}
\]
and
\[
l_{14}(t) = l_{15}(t)\,\bar\lambda, \qquad l_{15}(t) = 4\,\frac{\|\hat e(t)\|^2}{1 + f^2(t)}
\tag{E.100}
\]
Bound on V̇1:
We find
\[
\begin{aligned}
\dot V_1(t) ={}& -2\int_0^1 e^{-\delta x}\alpha(x,t)\alpha_x(x,t)\,dx
 + 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha(x,t)\, c_1(x)\beta(x,t)\,dx \\
&+ 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha(x,t) \int_0^x \omega(x,\xi,t)\alpha(\xi,t)\,d\xi\,dx
 + 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha(x,t) \int_0^x \kappa(x,\xi,t)\beta(\xi,t)\,d\xi\,dx \\
&+ 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha(x,t)\, k_1(x)\hat\epsilon(0,t)\,dx
 + 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha(x,t)\,\dot{\hat q}(t)\, p(x,t)\,dx.
\end{aligned}
\tag{E.101}
\]
Integration by parts, inserting the boundary condition (10.45c) and using Young’s
inequality on the cross terms gives
where we used
\[
\begin{aligned}
\int_0^1 e^{-\delta x}\lambda^{-1}(x) \int_0^x \alpha^2(\xi,t)\,d\xi\,dx
&\le \frac{1}{\lambda}\int_0^1 e^{-\delta x} \int_0^x \alpha^2(\xi,t)\,d\xi\,dx \\
&= -\frac{1}{\delta\lambda}\left[ e^{-\delta x}\int_0^x \alpha^2(\xi,t)\,d\xi \right]_0^1 + \frac{1}{\delta\lambda}\int_0^1 e^{-\delta x}\alpha^2(x,t)\,dx \\
&= \frac{1}{\delta\lambda}\int_0^1 \left( e^{-\delta x} - e^{-\delta} \right)\alpha^2(x,t)\,dx
 \le \frac{1}{\lambda}\int_0^1 e^{-\delta x}\alpha^2(x,t)\,dx \\
&\le \frac{\bar\lambda}{\lambda}\int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha^2(x,t)\,dx
\end{aligned}
\tag{E.103}
\]
where the last inequality follows from assuming δ ≥ 1, and similarly for the double
integral in β. Inequality (E.102) can be written
\[
\dot V_1(t) \le h_1\beta^2(0,t) + h_2\hat\epsilon^2(0,t) - \left[ \delta\lambda - h_3 \right] V_1(t) + h_4 V_2(t) + l_1(t) V_3(t)
\tag{E.104}
\]
where
Bound on V̇2:
We find
\[
\begin{aligned}
\dot V_2(t) ={}& \int_0^1 (1+x)\beta(x,t)\beta_x(x,t)\,dx \\
&+ \int_0^1 (1+x)\mu^{-1}(x)\beta(x,t)\left[ \hat K^u(x,0,t)\lambda(0)\hat q(t) + T[k_1,k_2](x,t) \right]\hat\epsilon(0,t)\,dx \\
&+ \int_0^1 (1+x)\mu^{-1}(x)\beta(x,t)\,\dot{\hat q}(t)\, T[p,r](x,t)\,dx \\
&- \int_0^1 (1+x)\mu^{-1}(x)\beta(x,t) \int_0^x \hat K^u_t(x,\xi,t)\alpha(\xi,t)\,d\xi\,dx \\
&- \int_0^1 (1+x)\mu^{-1}(x)\beta(x,t) \int_0^x \hat K^v_t(x,\xi,t)\, T^{-1}[\alpha,\beta](\xi,t)\,d\xi\,dx.
\end{aligned}
\tag{E.107}
\]
\[
\begin{aligned}
\dot V_2(t) \le{}& \cdots + \frac{1}{\rho_1}\int_0^1 (1+x)\mu^{-1}(x)\left[ \hat K^u(x,0,t)\lambda(0)\hat q(t) + T[k_1,k_2](x,t) \right]^2 dx\ \hat\epsilon^2(0,t) \\
&+ \frac{1}{\rho_2}\,\dot{\hat q}^2(t)\int_0^1 (1+x)\mu^{-1}(x)\, T^2[p,r](x,t)\,dx
 + \frac{1}{\rho_3}\int_0^1 (1+x)\mu^{-1}(x)\left( \int_0^x \hat K^u_t(x,\xi,t)\alpha(\xi,t)\,d\xi \right)^2 dx \\
&+ \frac{1}{\rho_4}\int_0^1 (1+x)\mu^{-1}(x)\left( \int_0^x \hat K^v_t(x,\xi,t)\, T^{-1}[\alpha,\beta](\xi,t)\,d\xi \right)^2 dx
\end{aligned}
\tag{E.108}
\]
\[
\begin{aligned}
\dot V_2(t) \le{}& \cdots + \frac{32}{\mu^2}\,\|\hat K^u_t(t)\|^2\,\bar\lambda e^{\delta}\int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha^2(x,t)\,dx \\
&+ \frac{64}{\mu^2}\,\|\hat K^v_t(t)\|^2 A_3^2\,\bar\lambda e^{\delta}\int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha^2(x,t)\,dx
 + \frac{64}{\mu^2}\,\|\hat K^v_t(t)\|^2 A_4^2\,\bar\mu\int_0^1 (1+x)\mu^{-1}(x)\beta^2(x,t)\,dx,
\end{aligned}
\tag{E.109}
\]
\[
\begin{aligned}
\int_0^1 \left( \int_0^x \hat K^v_t(x,\xi,t)\, T^{-1}[\alpha,\beta](\xi,t)\,d\xi \right)^2 dx
&\le \int_0^1 \int_0^x (\hat K^v_t(x,\xi,t))^2\,d\xi \int_0^x (T^{-1}[\alpha,\beta](\xi,t))^2\,d\xi\,dx \\
&\le \int_0^1 \int_0^x (\hat K^v_t(x,\xi,t))^2\,d\xi \int_0^1 (T^{-1}[\alpha,\beta](\xi,t))^2\,d\xi\,dx \\
&\le \int_0^1 \int_0^x (\hat K^v_t(x,\xi,t))^2\,d\xi\,dx\ \|T^{-1}[\alpha,\beta](t)\|^2 \\
&\le 2\|\hat K^v_t(t)\|^2\left( A_3^2\|\alpha(t)\|^2 + A_4^2\|\beta(t)\|^2 \right) \\
&\le 2\|\hat K^v_t(t)\|^2 A_3^2\,\bar\lambda e^{\delta}\int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha^2(x,t)\,dx \\
&\quad + 2\|\hat K^v_t(t)\|^2 A_4^2\,\bar\mu\int_0^1 (1+x)\mu^{-1}(x)\beta^2(x,t)\,dx
\end{aligned}
\tag{E.110}
\]
\[
\dot V_2(t) \le -\beta^2(0,t) - \frac{1}{4}\mu V_2(t) + h_5\hat\epsilon^2(0,t)
 + l_2(t) V_1(t) + l_3(t) V_2(t) + l_4(t) V_3(t) + l_5(t) V_4(t)
\tag{E.111}
\]
where
\[
h_5 = \frac{64}{\mu^2}\bar K^2\bar\lambda^2\bar q^2 + 2\left( A_1^2\|k_1\|^2 + A_2^2\|k_2\|^2 \right)
\tag{E.112}
\]
\[
l_2(t) = \frac{32}{\mu^2}\left( \|\hat K^u_t(t)\|^2 + 2\|\hat K^v_t(t)\|^2 A_3^2 \right)\bar\lambda e^{\delta}
\tag{E.113a}
\]
\[
l_3(t) = \frac{64}{\mu^2}\|\hat K^v_t(t)\|^2 A_4^2\,\bar\mu
\tag{E.113b}
\]
\[
l_4(t) = \frac{64}{\mu^2}\,\dot{\hat q}^2(t)\left( A_1^2 B_1^2 + 2 A_2^2 B_2^2 \right)\bar\lambda e^{\delta}
\tag{E.113c}
\]
\[
l_5(t) = \frac{128}{\mu^2}\,\dot{\hat q}^2(t)\, A_2^2 B_3^2\,\bar\mu
\tag{E.113d}
\]
where
\[
h_6 = \bar g_1^2 + \frac{\bar\lambda}{\lambda}
\tag{E.115}
\]
where ḡ₂ bounds g₂, and ρ₁ and ρ₂ are arbitrary positive constants. Choosing ρ₁ =
ρ₂ = (1/8)μ and using the boundary condition (10.53d), we find
\[
\dot V_4(t) \le -z^2(0,t) - \frac{1}{4}\mu V_4(t) + h_7 e^{\delta} V_3(t)
\tag{E.117}
\]
where
\[
h_7 = \frac{16}{\mu^2}\left( \bar c_2^2 + \bar g_2^2 \right)\bar\lambda
\tag{E.118}
\]
Bound on V̇1:
Differentiating (10.132a), integrating by parts, inserting the boundary condition and
using Cauchy–Schwarz' inequality, we find
\[
\begin{aligned}
\dot V_1(t) &= -e^{-\delta}\alpha^2(1,t) + \alpha^2(0,t) - \delta\int_0^1 e^{-\delta x}\alpha^2(x,t)\,dx \\
&\le -e^{-\delta}\alpha^2(1,t) + \tilde q^2(t)\, v^2(0,t) - \delta\lambda V_1(t) \\
&\le -e^{-\delta}\alpha^2(1,t) + 2\tilde q^2(t)\, z^2(0,t)
 - \left[ \delta\lambda - 2 e^{\delta}\bar\lambda\,\tilde q^2(t)(\bar P^\beta)^2 \right] V_1(t)
\end{aligned}
\tag{E.119}
\]
Using Young’s and Cauchy–Schwarz’ inequalities on the cross terms and assuming
δ ≥ 1, give
V̇2 ≤ q̄ 2 z 2 (0, t) − δλ − (c̄12 λ−2 + 1 + ω̄ 2 λ−2 + κ̄2 λ−2 + Γ¯12 λ−2 )λ̄ V2 (t)
+ μ̄V3 (t) + α2 (1, t), (E.121)
where c̄1 , ω̄, κ̄, Γ¯1 , λ̄ and μ̄ upper bound c1 , ω, κ, Γ1 , λ and μ, respectively, and λ
lower bounds λ. Inequality (E.121) can be written
V̇2 (t) ≤ q̄ 2 z 2 (0, t) − δλ − h 1 V2 (t) + μ̄V3 (t) + α2 (1, t), (E.122)
Bound on V̇3:
Differentiating (10.132c), integrating by parts and inserting the boundary condition,
we find
\[
\begin{aligned}
\dot V_3(t) \le{}& -z^2(0,t) - k\int_0^1 e^{kx} z^2(x,t)\,dx
 + 2\int_0^1 e^{kx}\mu^{-1}(x)\, z(x,t)\,\Omega(x,t)\,\alpha(1,t)\,dx \\
&- 2\int_0^1 e^{kx}\mu^{-1}(x)\, z(x,t) \int_0^x \hat K^u_t(x,\xi,t)\, w(\xi,t)\,d\xi\,dx \\
&- 2\int_0^1 e^{kx}\mu^{-1}(x)\, z(x,t) \int_0^x \hat K^v_t(x,\xi,t)\, T^{-1}[w,z](\xi,t)\,d\xi\,dx
\end{aligned}
\tag{E.124}
\]
\[
h_2 = 2 + \frac{\bar\mu}{\mu^2}\,\bar\Omega^2
\tag{E.127}
\]
Bound on V̇1:
From differentiating V1 in (11.64a) with respect to time, inserting the dynamics
(11.16a) and integrating by parts, we find
\[
\dot V_1(t) = -w^2(1,t) + 2 w^2(0,t) - \int_0^1 w^2(x,t)\,dx.
\tag{E.129}
\]
Inserting the boundary condition (11.64c) and recalling that z(0, t) = ẑ(0, t) +
ε̂(0, t) = η(0, t) + ε̂(0, t), yields
\[
\dot V_1(t) \le 4\eta^2(0,t) + 4\hat\epsilon^2(0,t) - \frac{1}{2}\bar\lambda V_1(t).
\tag{E.130}
\]
Bound on V̇2:
From differentiating V2 in (11.64b) with respect to time and inserting the dynamics
(11.60a), we find
\[
\begin{aligned}
\dot V_2(t) ={}& 2\int_0^1 (1+x)\eta(x,t)\eta_x(x,t)\,dx
 + 2\bar\mu^{-1}\int_0^1 (1+x)\eta(x,t)\, T\Big[ \int_x^1 \hat\theta_t(\xi,t)\,\phi(1-(\xi-x),t)\,d\xi \Big](x,t)\,dx \\
&- 2\int_0^1 (1+x)\eta(x,t)\,\hat g(x,t)\,dx\ \hat\epsilon(0,t)
 + 2\bar\mu^{-1}\int_0^1 (1+x)\eta(x,t)\, T\Big[ \int_0^1 \hat\kappa_t(\xi,t)\, P(x,\xi,t)\,d\xi \Big](x,t)\,dx \\
&- 2\bar\mu^{-1}\int_0^1 (1+x)\eta(x,t) \int_0^x \hat g_t(x-\xi,t)\, T^{-1}[\eta](\xi,t)\,d\xi\,dx.
\end{aligned}
\tag{E.131}
\]
\[
\begin{aligned}
\dot V_2(t) \le{}& -\eta^2(0,t) - \left[ \frac{1}{2}\bar\mu - \rho_1 - \rho_2 - \rho_3 - \rho_4 \right] V_2(t)
 + \frac{1}{\rho_2\bar\mu^2}\int_0^1 (1+x)\, T\Big[ \int_x^1 \hat\theta_t(\xi,t)\,\phi(1-(\xi-x),t)\,d\xi \Big]^2(x,t)\,dx \\
&+ \frac{2\bar g^2}{\rho_1}\,\hat\epsilon^2(0,t)
 + \frac{1}{\rho_3\bar\mu^2}\int_0^1 (1+x)\, T\Big[ \int_0^1 \hat\kappa_t(\xi,t)\, P(x,\xi,t)\,d\xi \Big]^2(x,t)\,dx \\
&+ \frac{1}{\rho_4\bar\mu^2}\int_0^1 (1+x)\left( \int_0^x \hat g_t(x-\xi,t)\, T^{-1}[\eta](\xi,t)\,d\xi \right)^2 dx
\end{aligned}
\tag{E.132}
\]
for some arbitrary positive constants ρᵢ, i = 1, . . . , 4, and where we have used the
boundary condition (11.60b). Choosing ρ₁ = ρ₂ = ρ₃ = ρ₄ = 1/16, we further find
\[
\dot V_2(t) \le -\eta^2(0,t) - \frac{1}{4}\bar\mu V_2(t) + 32\bar g^2\hat\epsilon^2(0,t)
 + \frac{32}{\bar\mu^2} G_1^2\|\hat\theta_t(t)\|^2\|\phi(t)\|^2
 + \frac{32}{\bar\mu^2} G_1^2\|\hat\kappa_t(t)\|^2\|P(t)\|^2
 + \frac{32}{\bar\mu^2} G_2^2\|\hat g_t(t)\|^2\|\eta(t)\|^2.
\tag{E.133}
\]
which are all integrable from (11.44b), (11.44c) and (11.63), we obtain
\[
\dot V_2(t) \le -\eta^2(0,t) - \frac{1}{4}\bar\mu V_2(t) + l_1(t) V_2(t) + l_2(t) V_3(t) + l_3(t) V_4(t) + 32\bar g^2\hat\epsilon^2(0,t).
\tag{E.135}
\]
Bound on V̇3:
Similarly, differentiating V3 in (11.64c) with respect to time, inserting the dynamics
(11.26b), and integrating by parts, we find
\[
\dot V_3(t) = 2\int_0^1 (1+x)\phi(x,t)\phi_x(x,t)\,dx = 2\phi^2(1,t) - \phi^2(0,t) - \int_0^1 \phi^2(x,t)\,dx
 \le -\frac{1}{2}\bar\mu V_3(t) + 4\eta^2(0,t) + 4\hat\epsilon^2(0,t)
\tag{E.136}
\]
where we have inserted the boundary condition in (11.26b).
Bound on V̇4:
Differentiating V4 in (11.64d) with respect to time, inserting the dynamics (11.26c),
and integrating by parts, we find
\[
\dot V_4(t) = 2\int_0^1 P^2(1,\xi,t)\,d\xi - \int_0^1 P^2(0,\xi,t)\,d\xi - \int_0^1\int_0^1 P^2(x,\xi,t)\,d\xi\,dx
\tag{E.137}
\]
\[
\dot V_4(t) \le -\|p_0(t)\|^2 + 2\bar\lambda V_1(t) - \frac{1}{2}\bar\mu V_4(t).
\tag{E.138}
\]
Bound on V̇1:
From differentiating V1 in (12.91a) with respect to time and inserting the dynamics
(12.84a), we find, for t ≥ t₁,
\[
\begin{aligned}
\dot V_1(t) ={}& 2\int_0^1 (1+x)\eta(x,t)\eta_x(x,t)\,dx
 - 2\int_0^1 (1+x)\eta(x,t)\,\hat g(x,t)\,dx\ \hat\epsilon(0,t)
 + \frac{2}{\bar\mu}\int_0^1 (1+x)\eta(x,t)\,\dot{\hat\rho}(t)\, T[\psi](x,t)\,dx \\
&+ \frac{2}{\bar\mu}\int_0^1 (1+x)\eta(x,t)\, T\Big[ \int_x^1 \hat\theta_t(\xi,t)\,\phi(1-(\xi-x),t)\,d\xi \Big](x,t)\,dx
 + \frac{2}{\bar\mu}\int_0^1 (1+x)\eta(x,t)\, T\Big[ \int_0^1 \hat\kappa_t(\xi,t)\, P(x,\xi,t)\,d\xi \Big](x,t)\,dx \\
&+ \frac{2}{\bar\mu}\int_0^1 (1+x)\eta(x,t)\, T\Big[ \int_0^1 \hat\kappa_t(\xi,t)\, M(x,\xi,t)\,d\xi \Big](x,t)\,dx
 + \frac{2}{\bar\mu}\int_0^1 (1+x)\eta(x,t)\, T\Big[ \int_0^1 \hat\theta_t(\xi,t)\, N(x,\xi,t)\,d\xi \Big](x,t)\,dx \\
&+ \frac{2}{\bar\mu}\int_0^1 (1+x)\eta(x,t)\, T[\vartheta^T](x,t)\,\dot{\hat\nu}(t)\,dx
 - \frac{2}{\bar\mu}\int_0^1 (1+x)\eta(x,t) \int_0^x \hat g_t(x-\xi,t)\, T^{-1}[\eta](\xi,t)\,d\xi\,dx
\end{aligned}
\tag{E.139}
\]
where we have utilized that Pₜ − μ̄Pₓ is zero for t ≥ t₁. Using integration by parts
and Cauchy–Schwarz' inequality on the cross terms, we find the following upper
bound
\[
\begin{aligned}
\dot V_1(t) \le{}& -\eta^2(0,t) - \bar\mu\left[ \frac{1}{2} - \sum_{i=1}^{8}\rho_i \right] V_1(t)
 + \frac{1}{\rho_1\bar\mu^2}\int_0^1 (1+x)\,\dot{\hat\rho}^2(t)\, T[\psi]^2(x,t)\,dx \\
&+ \frac{1}{\rho_2\bar\mu^2}\int_0^1 (1+x)\, T\Big[ \int_x^1 \hat\theta_t(\xi,t)\,\phi(1-(\xi-x),t)\,d\xi \Big]^2(x,t)\,dx
 + \frac{1}{\rho_3}\int_0^1 (1+x)\,\hat g^2(x,t)\,dx\ \hat\epsilon^2(0,t) \\
&+ \frac{1}{\rho_4\bar\mu^2}\int_0^1 (1+x)\, T\Big[ \int_0^1 \hat\kappa_t(\xi,t)\, P(x,\xi,t)\,d\xi \Big]^2(x,t)\,dx
 + \frac{1}{\rho_5\bar\mu^2}\int_0^1 (1+x)\, T\Big[ \int_0^1 \hat\kappa_t(\xi,t)\, M(x,\xi,t)\,d\xi \Big]^2(x,t)\,dx \\
&+ \frac{1}{\rho_6\bar\mu^2}\int_0^1 (1+x)\, T\Big[ \int_0^1 \hat\theta_t(\xi,t)\, N(x,\xi,t)\,d\xi \Big]^2(x,t)\,dx
 + \frac{1}{\rho_7\bar\mu^2}\int_0^1 (1+x)\, T[\vartheta^T\dot{\hat\nu}]^2(x,t)\,dx \\
&+ \frac{1}{\rho_8\bar\mu^2}\int_0^1 (1+x)\left( \int_0^x \hat g_t(x-\xi,t)\, T^{-1}[\eta](\xi,t)\,d\xi \right)^2 dx,
\end{aligned}
\tag{E.140}
\]
Let
\[
\rho_i = \frac{1}{32}, \qquad i = 1, \ldots, 8,
\tag{E.142}
\]
then
\[
\begin{aligned}
\dot V_1(t) \le{}& -\eta^2(0,t) - \frac{\bar\mu}{4} V_1(t)
 + \frac{64}{\bar\mu^2} G_1^2\,\dot{\hat\rho}^2(t)\|\psi(t)\|^2
 + \frac{64}{\bar\mu^2} G_1^2\|\hat\theta_t(t)\|^2\|\phi(t)\|^2 \\
&+ 64 M_g^2\sigma^2(t) + 64 M_g^2\sigma^2(t)\,\psi^2(0,t) + 64 M_g^2\sigma^2(t)\|\phi(t)\|^2
 + 64 M_g^2\sigma^2(t)\|p_0(t)\|^2 \\
&+ 64 M_g^2\sigma^2(t)\|m_0(t)\|^2 + 64 M_g^2\sigma^2(t)\|n_0(t)\|^2 + 64 M_g^2\sigma^2(t)|\vartheta(0,t)|^2
 + \frac{64}{\bar\mu^2} G_1^2\|\hat\kappa_t(t)\|^2\|P(t)\|^2 \\
&+ \frac{64}{\bar\mu^2} G_1^2\|\hat\kappa_t(t)\|^2\|M(t)\|^2
 + \frac{64}{\bar\mu^2} G_1^2\|\hat\theta_t(t)\|^2\|N(t)\|^2
 + \frac{64}{\bar\mu^2} G_1^2 |\dot{\hat\nu}(t)|^2\|\vartheta(t)\|^2
 + \frac{64}{\bar\mu^2} G_2^2\|g_t(t)\|^2\|\eta(t)\|^2.
\end{aligned}
\tag{E.143}
\]
\[
l_1(t) = \frac{64}{\bar\mu^2} G_2^2\|\hat g_t(t)\|^2, \qquad
l_2(t) = \frac{64}{\bar\mu^2} G_1^2\|\hat\theta_t(t)\|^2 + 64\bar\mu M_g^2\sigma^2(t)
\tag{E.144a}
\]
\[
l_3(t) = \frac{64\bar\lambda}{\bar\mu^2} G_1^2\|\hat\kappa_t(t)\|^2, \qquad
l_4(t) = 64\bar\lambda M_g^2\sigma^2(t),
\tag{E.144b}
\]
\[
l_5(t) = \frac{64}{\bar\mu^2} G_1^2\,\dot{\hat\rho}^2(t)
\tag{E.144c}
\]
\[
\begin{aligned}
l_6(t) ={}& 64 M_g^2\sigma^2(t) + 64 M_g^2\sigma^2(t)\|m_0(t)\|^2 + 64 M_g^2\sigma^2(t)\|n_0(t)\|^2
 + 64 M_g^2\sigma^2(t)|\vartheta(0,t)|^2 \\
&+ \frac{64}{\bar\mu^2} G_1^2\|\hat\kappa_t(t)\|^2\|M(t)\|^2
 + \frac{64}{\bar\mu^2} G_1^2\|\hat\theta_t(t)\|^2\|N(t)\|^2
 + \frac{64}{\bar\mu^2} G_1^2 |\dot{\hat\nu}(t)|^2\|\vartheta(t)\|^2
\end{aligned}
\tag{E.144d}
\]
\[
\dot V_1(t) \le -\eta^2(0,t) - \frac{\bar\mu}{4} V_1(t) + h_1\sigma^2(t)\psi^2(0,t)
 + l_1(t) V_1(t) + l_2(t) V_2(t) + l_3(t) V_3(t) + l_4(t) V_4(t) + l_5(t) V_6(t) + l_6(t).
\tag{E.146}
\]
Bound on V̇2:
Similarly, differentiating V2 in (12.91b) with respect to time, inserting the dynamics
(12.48b), and integrating by parts, we find
\[
\dot V_2(t) = 2\int_0^1 (1+x)\phi(x,t)\phi_x(x,t)\,dx = 2\phi^2(1,t) - \phi^2(0,t) - \int_0^1 \phi^2(x,t)\,dx
 \le -\phi^2(0,t) + 4\eta^2(0,t) - \frac{1}{2}\bar\mu V_2(t) + 4\hat\epsilon^2(0,t)
\tag{E.147}
\]
where we have inserted the boundary condition in (12.48b). Inequality (E.147) can
be written as
\[
\dot V_2(t) \le -\phi^2(0,t) + 4\eta^2(0,t) - \frac{\bar\mu}{2} V_2(t)
 + 4\sigma^2(t)\left[ 1 + \psi^2(0,t) + \|\phi(t)\|^2 + \|p_0(t)\|^2 + \|m_0(t)\|^2 + \|n_0(t)\|^2 + |\vartheta(0,t)|^2 \right].
\tag{E.148}
\]
Bound on V̇3:
Differentiating V3 in (12.91c) with respect to time and inserting the dynamics
(12.48d), we find
\[
\dot V_3(t) = -2\int_0^1\int_0^1 (2-\xi)\, P(x,\xi,t)\, P_\xi(x,\xi,t)\,d\xi\,dx
 = -\int_0^1 P^2(x,1,t)\,dx + 2\int_0^1 P^2(x,0,t)\,dx - \int_0^1\int_0^1 P^2(x,\xi,t)\,d\xi\,dx.
\tag{E.151}
\]
\[
\dot V_3(t) \le -\frac{1}{2}\bar\lambda V_3(t) + 2\bar\mu V_2(t)
\tag{E.152}
\]
Bound on V̇4:
From differentiating V4 in (12.91d) with respect to time and inserting p₀'s dynamics
derived from the relationship given in (12.49), we find
\[
\dot V_4(t) = -2\int_0^1 (2-x)\, p_0(x,t)\,\partial_x p_0(x,t)\,dx
 \le -p_0^2(1,t) + 2 p_0^2(0,t) - \frac{\bar\lambda}{2} V_4(t).
\tag{E.153}
\]
Using (12.49) and (12.48d) yields
\[
\dot V_4(t) \le 2\phi^2(0,t) - \frac{\bar\lambda}{2} V_4(t).
\tag{E.154}
\]
Bound on V̇5:
Similarly, differentiating V5 in (12.91e) with respect to time and integrating by parts,
we find
\[
\dot V_5(t) \le -p_1^2(1,t) + 2 p_1^2(0,t) - \frac{\bar\lambda}{2} V_5(t).
\tag{E.155}
\]
Using (12.49) and (12.48d) yields
\[
\dot V_5(t) \le 2\phi^2(1,t) - \frac{\bar\lambda}{2} V_5(t)
\tag{E.156}
\]
\[
\le 4\eta^2(0,t) - \frac{\bar\lambda}{2} V_5(t)
 + 4\sigma^2(t)\left[ 1 + \psi^2(0,t) + \|\phi(t)\|^2 + \|p_0(t)\|^2 + \|m_0(t)\|^2 + \|n_0(t)\|^2 + |\vartheta(0,t)|^2 \right].
\tag{E.157}
\]
\[
\dot V_5(t) \le 4\eta^2(0,t) - \frac{\bar\lambda}{2} V_5(t) + 4\sigma^2(t)\psi^2(0,t)
 + l_7(t) V_2(t) + l_8(t) V_4(t) + l_9(t),
\tag{E.158}
\]
\[
\begin{aligned}
\dot V_6(t) \le{}& -\psi^2(0,t) - \frac{\bar\mu}{2} V_6(t) + 12 M_\rho^2 r^2(t)
 + 12 M_\rho^2 \int_0^1 \hat g^2(1-\xi,t)\,\hat z^2(\xi,t)\,d\xi \\
&+ 12 M_\rho^2 \int_0^1 \hat\kappa^2(\xi,t)\, p_1^2(\xi,t)\,d\xi
 + 12 M_\rho^2 \int_0^1 \hat\kappa^2(\xi,t)\, a^2(\xi,t)\,d\xi \\
&+ 12 M_\rho^2 \int_0^1 \hat\theta^2(\xi,t)\, b^2(1-\xi,t)\,d\xi
 + 12 M_\rho^2 (\chi^T(t)\hat\nu(t))^2
\end{aligned}
\tag{E.160}
\]
where
\[
M_\rho = \frac{1}{\min\{|\rho|,\ |\bar\rho|\}}.
\tag{E.161}
\]
\[
\begin{aligned}
\dot V_6(t) \le{}& -\psi^2(0,t) - \frac{\bar\mu}{2} V_6(t) + 12 M_\rho^2 r^2(t)
 + 12 M_\rho^2 M_g^2 G_2^2\|\eta(t)\|^2 + 12 M_\rho^2 M_\kappa^2\|p_1(t)\|^2 \\
&+ 12 M_\rho^2 M_\kappa^2\|a(t)\|^2 + 12 M_\rho^2 M_\theta^2\|b(t)\|^2
 + 12(2n+1) M_\rho^2 M_\nu^2\|\chi(t)\|^2
\end{aligned}
\tag{E.162}
\]
where
Bound on V̇2:
Differentiating V2, using the dynamics (15.43a), integrating by parts, inserting the
boundary condition (15.43c) and using Young's and Cauchy–Schwarz' inequalities
on the cross terms, assuming δ ≥ 1, we find
\[
\dot V_2(t) \le -\lambda_1 e^{-\delta}|\alpha(1,t)|^2 + h_1\beta^2(0,t) + h_1\hat\epsilon^2(0,t)
 - (\delta\lambda_1 - h_2) V_2(t) + V_3(t) + \|\hat e(t)\|^2 + \|\hat\epsilon(t)\|^2
 + \|(\varphi(t)\circ\dot{\hat\kappa}(t))\mathbf{1}\|^2
\tag{E.167}
\]
Bound on V̇3:
Similarly for V3, we find using (15.43b)
\[
\begin{aligned}
\dot V_3(t) \le{}& -\mu\beta^2(0,t) - k\mu\int_0^1 e^{kx}\beta^2(x,t)\,dx
 + \lambda_n^2\bar K^2\bar q^2\int_0^1 e^{kx}\beta^2(x,t)\,dx \\
&+ e^{k}\hat\epsilon^2(0,t) + \int_0^1 e^{kx}\beta^2(x,t)\,dx
 + \int_0^1 e^{kx}\, T[\hat\Sigma\hat e + \hat\omega\hat\epsilon,\ \hat\epsilon^T\hat e]^2(x,t)\,dx \\
&+ \int_0^1 e^{kx}\beta^2(x,t)\,dx
 + \int_0^1 e^{kx}\int_0^x (\hat K^u_t(x,\xi,t))^2\,d\xi \int_0^x \alpha^2(\xi,t)\,d\xi\,dx \\
&+ \int_0^1 e^{kx}\beta^2(x,t)\,dx
 + \int_0^1 e^{kx}\int_0^x (\hat K^v_t(x,\xi,t))^2\,d\xi \int_0^x T^{-1}[\alpha,\beta]^2(\xi,t)\,d\xi\,dx \\
&+ \int_0^1 e^{kx}\beta^2(x,t)\,dx
 + \int_0^1 e^{kx}\, T[(\varphi\circ\dot{\hat\kappa})\mathbf{1},\ \varphi_0^T\dot{\hat\kappa}_0]^2(x,t)\,dx
\end{aligned}
\tag{E.169}
\]
\[
h_3 = 4 + \lambda_n^2\bar K^2\bar q^2, \qquad h_4 = 2e^{k}\left( 2G_1^2 n\bar\sigma^2 + G_2^2 n\bar\epsilon^2 \right)
\tag{E.171}
\]
\[
h_5 = 4e^{k} G_1^2 n\bar\omega^2, \qquad h_6 = 2G_1^2, \qquad h_7 = 2G_2^2
\tag{E.172}
\]
Bound on V̇4:
Following the same steps as before, we obtain from (15.51c) and the filter (15.2a)
\[
\dot V_4(t) \le -\lambda_1 e^{-\delta}|\eta(1,t)|^2 + h_8\beta^2(0,t) + h_8\hat\epsilon^2(0,t) - \delta\lambda_1 V_4(t)
\tag{E.175}
\]
where
\[
h_8 = 2n\lambda_n
\tag{E.176}
\]
is a positive constant.
Bound on V̇5:
By straightforward calculations, we obtain
\[
\begin{aligned}
\dot V_5(t) &= \mu e^{k}\psi^T(1,t)\psi(1,t) - \mu\psi^T(0,t)\psi(0,t)
 - k\mu\int_0^1 e^{kx}\psi^T(x,t)\psi(x,t)\,dx \\
&\le h_9 e^{k}|\alpha(1,t)|^2 + h_9 e^{k}|\hat e(1,t)|^2 - \mu|\psi(0,t)|^2 - k\mu V_5(t)
\end{aligned}
\tag{E.177}
\]
\[
\begin{aligned}
\dot V_6(t) ={}& -2\sum_{i=1}^{n}\int_0^1 e^{-\delta x}\lambda_i\, p_i^T(x,t)\,\partial_x p_i(x,t)\,dx
 + 2\sum_{i=1}^{n}\int_0^1 e^{-\delta x} p_i^T(x,t)\, u(x,t)\,dx \\
&+ n^2\int_0^1 e^{-\delta x}\nu^T(x,t)\nu(x,t)\,dx + \int_0^1 e^{-\delta x} v^2(x,t)\,dx \\
\le{}& -\lambda_1 e^{-\delta}|\nu(1,t)|^2 - (\delta\lambda_1 - h_{11}) V_7(t)
 + h_{12} e^{\delta} V_2(t) + h_{13} V_3(t) + 2\|\hat\epsilon(t)\|^2
\end{aligned}
\tag{E.179}
\]
and where
\[
h_{11} = n^2, \qquad h_{12} = 4G_3^2, \qquad h_{13} = 4G_4^2.
\tag{E.181}
\]
and hence
\[
\dot V_8(t) \le -\mu|r(0,t)|^2 - \left[ k\mu - 2 \right] V_8(t) + e^{\delta+k} V_2(t) + e^{k}\|\hat e(t)\|^2
\tag{E.183}
\]
Bound on V̇3:
From (16.52a) and the dynamics (16.39a), we find
\[
\begin{aligned}
\dot V_3(t) ={}& -e^{-\delta}\alpha^T(1,t)\alpha(1,t) + \alpha^T(0,t)\alpha(0,t)
 - \delta\int_0^1 e^{-\delta x}\alpha^T(x,t)\alpha(x,t)\,dx \\
&+ 2\int_0^1 e^{-\delta x}\alpha^T(x,t)\Lambda^{-1}(x)\Sigma(x)\alpha(x,t)\,dx
 + 2\int_0^1 e^{-\delta x}\alpha^T(x,t)\Lambda^{-1}(x)\omega(x)\beta(x,t)\,dx \\
&+ 2\int_0^1 e^{-\delta x}\alpha^T(x,t)\Lambda^{-1}(x)\int_0^x \hat B_1(x,\xi,t)\alpha(\xi,t)\,d\xi\,dx
 + 2\int_0^1 e^{-\delta x}\alpha^T(x,t)\Lambda^{-1}(x)\int_0^x \hat b_2(x,\xi,t)\beta(\xi,t)\,d\xi\,dx \\
&- 2\int_0^1 e^{-\delta x}\alpha^T(x,t)\Lambda^{-1}(x)\Gamma_1(x)\hat\epsilon(0,t)\,dx
 + 2\int_0^1 e^{-\delta x}\alpha^T(x,t)\Lambda^{-1}(x)\, P(x,t)\,\dot{\hat q}(t)\,dx.
\end{aligned}
\tag{E.184}
\]
where ω̄, σ̄, b̄₁, b̄₂, γ̄₁, γ̄₂ and q̄ bound the absolute values of all elements in ω, Σ,
B̂₁, b̂₂, Γ₁, Γ₂ and q̂, respectively. Assuming δ ≥ 1, this can be shortened to
\[
\begin{aligned}
\dot V_3(t) \le{}& -e^{-\delta}\alpha^T(1,t)\alpha(1,t) + 2n\bar q^2\beta^2(0,t)
 - \left[ \delta\lambda - 2n\bar\sigma - n\bar b_1^2 - 7 \right] V_3(t) \\
&+ n(\bar\omega^2 + \bar b_2^2)\lambda^{-1}\bar\mu V_4(t)
 + (n\bar\gamma_1^2\lambda^{-1} + 2n\bar q^2)\hat\epsilon^2(0,t)
 + \dot{\hat q}^T(t)\dot{\hat q}(t)\sum_{i=1}^{n}\int_0^1 e^{-\delta x} P_i^T(x,t)\Lambda^{-1}(x) P_i(x,t)\,dx
\end{aligned}
\tag{E.186}
\]
where Pᵢ are the columns of P. Using (16.47a) and the property (16.17c), we can
write
\[
\dot V_3(t) \le 2n\bar q^2\beta^2(0,t) - \left[ \delta\lambda - h_1 \right] V_3(t) + h_2 V_4(t)
 + h_3\hat\epsilon^2(0,t) + l_1(t) V_5(t) + l_2(t) V_6(t)
\tag{E.187}
\]
\[
\dot V_5(t) \le -e^{-\delta}|W(1,t)|^2 + \sum_{i=1}^{n} W_i^T(0,t) W_i(0,t)
 - \left[ \lambda\delta - 2n\bar\sigma - 2n\bar b_1 \right] V_5(t)
\tag{E.191}
\]
where b̄₁ bounds the absolute values of all elements in B̂₁. Inserting the boundary
condition (16.46c), we obtain
where ε̄ and b̄₂ bound all elements in ε and b̂₂, respectively. Inequality (E.194)
can be written
\[
\dot V_6(t) \le -|z(0,t)|^2 + h_6 e^{k+\delta} V_5(t) - \left[ k\mu - 2 \right] V_6(t)
\tag{E.195}
\]
Bound on V̇1 :
Differentiating V1 in (17.85a) with respect to time, inserting the dynamics (17.80a),
integrating by parts and using Young’s inequality on the cross terms, one obtains
\[
\begin{aligned}
\dot V_1(t) \le{}& -\eta^2(0,t) - \left[ \frac{1}{2} - 9k \right]\int_0^1 (1+x)\eta^2(x,t)\,dx
 + \frac{2}{k}\bar g^2\hat\epsilon^2(0,t) \\
&+ \frac{2}{k}\bar\mu^{-2}\int_0^1 (\dot{\hat\nu}^T(t)\, T[h](x,t))^2\,dx
 + \frac{2}{k}\bar\mu^{-2}\int_0^1 (\dot{\hat\nu}^T(t)\, T[\vartheta](x,t))^2\,dx \\
&+ \frac{2}{k}\bar\mu^{-2}\int_0^1 \dot{\hat\rho}^2(t)\, T^2[\psi](x,t)\,dx
 + \frac{2}{k}\bar\mu^{-2}\int_0^1 T^2\Big[ \int_0^1 \hat\kappa_t(\xi,t)\, P(x,\xi,t)\,d\xi \Big](x,t)\,dx \\
&+ \frac{2}{k}\bar\mu^{-2}\int_0^1 T^2\Big[ \int_0^1 \hat\kappa_t(\xi,t)\, M(x,\xi,t)\,d\xi \Big](x,t)\,dx
 + \frac{2}{k}\bar\mu^{-2}\int_0^1 T^2\Big[ \int_0^1 \hat\theta_t(\xi,t)\, N(x,\xi,t)\,d\xi \Big](x,t)\,dx \\
&+ \frac{2}{k}\bar\mu^{-2}\int_0^1 T^2\Big[ \int_x^1 \hat\theta_t(\xi,t)\,\phi(1-(\xi-x),t)\,d\xi \Big](x,t)\,dx
 + \frac{2}{k}\bar\mu^{-2}\int_0^1 \left( \int_0^x \hat g_t(x-\xi,t)\, T^{-1}[\eta](\xi,t)\,d\xi \right)^2 dx
\end{aligned}
\tag{E.196}
\]
\[
\begin{aligned}
\dot V_1(t) \le{}& -\eta^2(0,t) - \frac{1}{4}\int_0^1 (1+x)\eta^2(x,t)\,dx + 72\bar g^2\hat\epsilon^2(0,t) \\
&+ 72\bar\mu^{-2} G_1^2|\dot{\hat\nu}(t)|^2\|h(t)\|^2 + 72\bar\mu^{-2} G_1^2|\dot{\hat\nu}(t)|^2\|\vartheta(t)\|^2 + \cdots
\end{aligned}
\]
Since ||ϑ||, ||M||, ||N || are all bounded (Assumption 17.2), this can be written
\[
\dot V_1(t) \le -\eta^2(0,t) - \frac{1}{4}\bar\mu V_1(t) + l_1(t) V_1(t) + l_2(t) V_3(t) + l_3(t) V_4(t)
 + l_4(t) V_5(t) + l_5(t) V_6(t) + l_6(t) + b_1\hat\epsilon^2(0,t)
\tag{E.198}
\]
where l1 . . . l6 are all bounded and integrable functions (Lemmas 17.7 and 17.8), and
b1 is a positive constant.
Bound on V̇2 :
Differentiating V2 in (17.85b) with respect to time, inserting the dynamics (17.42a)
and integrating by parts
Bound on V̇3:
Similarly, differentiating V3 in (17.85c) with respect to time, inserting the dynamics
(17.45b), integrating by parts and inserting the boundary condition (17.45b), we
obtain in a similar manner the upper bound
\[
\dot V_3(t) \le -\phi^2(0,t) + 4\eta^2(0,t) + 4\hat\epsilon^2(0,t) - \frac{1}{2}\bar\mu V_3(t).
\tag{E.202}
\]
Bound on V̇4:
Differentiating V4 in (17.85d) with respect to time, inserting the dynamics (17.45c),
integrating by parts and inserting the boundary condition (17.45c), yields
\[
\dot V_4(t) = 2|w(1,t)|^2 - |h(0,t)|^2 - \frac{1}{2}\bar\mu V_4(t)
\tag{E.203}
\]
Bound on V̇5:
For V5 in (17.85e), using the dynamics (17.45e), integrating by parts and inserting
the boundary condition (17.45e), yields
\[
\dot V_5(t) = 2\|w_1(t)\|^2 - \|p_0(t)\|^2 - \frac{1}{2}\bar\mu V_5(t).
\tag{E.204}
\]
Bound on V̇6:
Similarly, for V6 in (17.85f), the dynamics and boundary condition (17.45a) yield
\[
\dot V_6(t) = 2U^2(t) - \psi^2(0,t) - \frac{1}{2}\bar\mu V_6(t).
\tag{E.205}
\]
Inserting the control law (17.75) and using Young's and Cauchy–Schwarz' inequalities, we obtain
\[
\dot V_6(t) \le b_2 V_1(t) + b_3 V_2(t) - \frac{1}{2}\bar\mu V_6(t) + b_4|w(1,t)|^2 - \psi^2(0,t) + b_5
\tag{E.207}
\]
for some positive constants b2 . . . b5 , with b5 depending on r̄ .
Bound on V̇2:
Differentiating V2, inserting the dynamics (20.59a), integrating by parts, using
Cauchy–Schwarz' inequality on the cross terms, bounding all the coefficients, inserting the boundary conditions, and evaluating all the double integrals, we find, assuming δ > 1,
where σ̄ bounds all the elements of the matrices Σ, κ̄ bounds the elements of κ⁺ and
κ⁻, γ̄ bounds the elements of P⁺, and q̄ bounds q̂. Define the positive constants
\[
\dot V_2(t) \le -e^{-\delta}\lambda_1|\alpha(1,t)|^2 + h_1|\beta(0,t)|^2 - \left[ \delta\lambda_1 - h_2 \right] V_2(t)
 + 2d^{-1} V_3(t) + h_3|\hat\epsilon(0,t)|^2 + l_1(t) V_4(t) + l_2(t) V_5(t) + l_3(t) V_6(t).
\tag{E.211}
\]
Bound on V̇3:
Using the same steps as for V2, we obtain using (20.59b) and assuming k > 1
\[
\begin{aligned}
\dot V_3(t) \le{}& \cdots + 2\bar d e^{k}\,\dot{\hat c}^T(t)\dot{\hat c}(t)\, G_2^2\|Z(t)\|^2
 + \|\hat K^u_t(t)\|^2\,\bar d e^{k}\|\alpha(t)\|^2 \\
&+ 2\|\hat K^v_t(t)\|^2\,\bar d e^{k} G_3^2\|\alpha(t)\|^2
 + 2\|\hat K^v_t(t)\|^2\,\bar d e^{k} G_4^2\|\beta(t)\|^2
\end{aligned}
\tag{E.212}
\]
where d̄ bounds all the elements of D. Consider the third term on the right-hand
side. Written out, and using Cauchy–Schwarz' inequality, we can bound the term as
follows:
\[
\beta^T(0,t)\, G^T(x) D G(x)\,\beta(0,t) \le \sum_{i=1}^{m}\beta_i^2(0,t)\sum_{j=1}^{m}\sum_{k=\max(i+1,\,j+1)}^{m} d_k\,\bar g^2,
\tag{E.213}
\]
where ḡ bounds all the elements of G, and hence the first and the third terms can be
bounded as
\[
-\beta^T(0,t)\left[ \mu_m D - e^{k} G^T(x) D G(x) \right]\beta(0,t)
 \le -\sum_{i=1}^{m}\beta_i^2(0,t)\left[ \mu_m d_i - e^{k}\bar g^2\sum_{j=1}^{m}\sum_{k=\max(i+1,\,j+1)}^{m} d_k \right].
\tag{E.214}
\]
\[
d_m = 1
\tag{E.215}
\]
then choose
\[
\dot V_3(t) \le -h_4|\beta(0,t)|^2 - (k\lambda_1 - 7) V_3(t) + e^{k}\bar d\, h_5|\hat\epsilon(0,t)|^2
 + l_4(t) V_2(t) + l_5(t) V_3(t) + l_6(t) V_4(t) + l_7(t) V_5(t) + l_8(t) V_6(t)
\tag{E.217}
\]
for some positive constant h 4 depending on the chosen values of D, the positive
constant
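The recursion behind (E.214)–(E.215) — set d_m = 1, then work backwards, making each d_i large enough that μ_m d_i − e^k ḡ² Σ_j Σ_{k=max(i+1,j+1)}^m d_k stays positive — can be sketched as follows. The explicit update rule and the safety margin below are illustrative assumptions, not the book's formula:

```python
def choose_D(m, mu_m, ek, gbar, margin=1.0):
    """Pick diagonal weights d_1..d_m backwards so that, for every i,
    mu_m * d_i - ek * gbar**2 * sum_{j=1}^m sum_{k=max(i+1,j+1)}^m d_k > 0.
    The double sum only involves d_k with k > i, so starting from
    d_m = 1 and recursing downwards works."""
    d = [0.0] * (m + 1)          # 1-based indexing; d[0] unused
    d[m] = 1.0
    for i in range(m - 1, 0, -1):
        S = sum(d[k] for j in range(1, m + 1)
                     for k in range(max(i + 1, j + 1), m + 1))
        d[i] = (ek * gbar ** 2 * S + margin) / mu_m
    return d[1:]
```

For example, with m = 3 and μ_m = e^k = ḡ = 1 this produces d = (6, 3, 1), and each bracketed coefficient in (E.214) equals the chosen margin.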
Bound on V̇4:
Using the same steps as above, and assuming δ > 1, we find
\[
\dot V_4(t) \le -\lambda_1 e^{-\delta}\sum_{i=1}^{nm} A_i^T(1,t) A_i(1,t)
 + \lambda_n\sum_{i=1}^{nm} A_i^T(0,t) A_i(0,t)
 - \left[ \delta\lambda_1 - 2n\bar\sigma - 1 - n\bar D^2 \right] V_4(t),
\tag{E.220}
\]
\[
\dot V_4(t) \le -\lambda_1 e^{-\delta}|A(1,t)|^2 + 2\lambda_n mn\left( |\beta(0,t)|^2 + |\hat\epsilon(0,t)|^2 \right)
 - \left[ \delta\lambda_1 - 2n\bar\sigma - 1 - n\bar D^2 \right] V_4(t).
\tag{E.221}
\]
By defining
\[
h_6 = 2n\bar\sigma + 1 + n\bar D^2, \qquad h_7 = 2\lambda_n mn
\tag{E.222}
\]
we obtain
\[
\dot V_4(t) \le -\lambda_1 e^{-\delta}|A(1,t)|^2 + h_7|\beta(0,t)|^2 + h_7|\hat\epsilon(0,t)|^2
 - \left[ \delta\lambda_1 - h_6 \right] V_4(t).
\tag{E.223}
\]
Bound on V̇5 :
Differentiating V₅, inserting the dynamics, integrating by parts, inserting the boundary condition and using Young's inequality, one can obtain, when assuming k > 1,
V̇₅(t) ≤ μ₁e^{k} Σ_{i=1}^{nm} ∫₀¹ Bᵢᵀ(x, t)Hᵀ(x)ΠH(x)Bᵢ(x, t)dx
        − μₘ Σ_{i=1}^{nm} Bᵢᵀ(0, t)ΠBᵢ(0, t)
        − kμₘ Σ_{i=1}^{nm} ∫₀¹ e^{kx}Bᵢᵀ(x, t)ΠBᵢ(x, t)dx
        + mσ̄² Σ_{i=1}^{nm} ∫₀¹ e^{kx}Bᵢᵀ(x, t)ΠΛ⁻Bᵢ(x, t)dx
        + mD̄² Σ_{i=1}^{nm} ∫₀¹ e^{kx}Bᵢᵀ(x, t)ΠΛ⁻Bᵢ(x, t)dx
        + 2 Σ_{i=1}^{nm} ∫₀¹ e^{kx}Aᵢᵀ(x, t)ΠAᵢ(x, t)dx.   (E.224)
Since H has the same strictly triangular structure as G, one can use the same recursive
argument as for D in V3 for determining the coefficients of Π . This results in
V̇₆(t) ≤ ⋯ + 8nπ̄êᵀ(1, t)ê(1, t) − Σ_{i=1}^{nm} Ωᵢᵀ(0, t)ΠΩᵢ(0, t)
        − Σ_{i=1}^{nm} ∫₀¹ Ωᵢᵀ(x, t)ΠΩᵢ(x, t)dx   (E.226)
where π̄ is an upper bound for the elements of Π. Again, due to H having the same
structure as G, we can recursively choose the components of Π so that the sum of
the first and last terms is negative, and hence obtain

V̇₆(t) ≤ −h₉e^{k}V₆(t) + 8nπ̄|α(1, t)|² + 8nπ̄|ê(1, t)|² − π|Ω(0, t)|².   (E.227)
This method is suitable for Volterra (integral) equations. It iterates a sequence
similar to the sequence (1.62) used in the proof of existence of a solution to the
Volterra equation (1.58) in Lemma 1.1. Consider the Volterra equation
k(x) = f(x) + ∫₀ˣ G(x, ξ)k(ξ)dξ.   (F.1)

Starting from an initial guess and repeatedly inserting the previous iterate into the
right-hand side of (F.1), the solution is, after a sufficient number of iterations q,
approximated by

k ≈ k_q.   (F.3)
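As an illustration, the iteration can be sketched in Python as follows. The function name, the grid, and the example data f ≡ 1, G ≡ 1 are our own, not the book's, and the integral is approximated by the trapezoidal rule:

```python
import numpy as np

def solve_volterra(f, G, x, iterations=50):
    """Iterate k_{q+1}(x_i) = f(x_i) + int_0^{x_i} G(x_i, xi) k_q(xi) dxi
    on a uniform grid, approximating the integral by the trapezoidal rule."""
    dx = x[1] - x[0]
    k = f.copy()                               # initial guess k_0 = f
    for _ in range(iterations):
        k_new = f.copy()
        for i in range(1, len(x)):
            integrand = G[i, :i + 1] * k[:i + 1]
            # trapezoidal rule over [0, x_i]
            k_new[i] += 0.5 * dx * (integrand[0] + integrand[-1]) \
                        + dx * integrand[1:-1].sum()
        k = k_new
    return k

x = np.linspace(0.0, 1.0, 101)
f = np.ones_like(x)                            # example data: f = 1
G = np.ones((101, 101))                        # example kernel: G = 1
k = solve_volterra(f, G, x)
print(np.max(np.abs(k - np.exp(x))))           # small (trapezoidal error)
```

For f ≡ 1 and G ≡ 1 the exact solution of (F.1) is k(x) = eˣ, so the printed maximum error is of the order of the trapezoidal discretization error.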
F.2.1 Introduction
This method was originally proposed in Anfinsen and Aamo (2017a), and is based
on discretization of the domain into a uniformly spaced grid. We will demonstrate
the technique on the time-invariant PDE (F.4), which will be solved over the lower
triangular part of a uniformly spaced grid with N × N nodes. The method extends
straightforwardly to time-varying PDEs as well.
One well-known problem with solving Eqs. (F.4) is the numerical issue one faces
when evaluating the spatial derivatives K_x and K_ξ at the points (1, 1) and (0, 0),
respectively, since naively applying a finite difference scheme results in the need
to evaluate points outside the domain. The key to overcoming the numerical issues
faced at the sharp corners of the domain is to treat the two terms on the left-hand
side of (F.4a) as a directional derivative, and approximate the derivative of K at a
point (x, ξ) using a finite difference upwind scheme, using information from the
direction of flow. Intuitively, K represents information that convects from the bottom
boundary upwards to the right. This is depicted in Fig. F.1. The red boundary
represents the boundary at which a boundary condition is specified, while the blue
lines are characteristics indicating the direction of information flow.
For K in (F.4), we approximate the left-hand side of (F.4a) as an upwind finite
difference along the characteristic direction, where ν₁, ν₂ are the components of a
unit vector in the direction of the characteristic, that is,

ν(x, ξ) = [ν₁(x, ξ), ν₂(x, ξ)]ᵀ = (1/√(μ²(x) + λ²(ξ)))[μ(x), λ(ξ)]ᵀ.   (F.7)
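To make the directional-derivative viewpoint concrete, here is a small Python sketch (helper names and the test function are our own, not the book's) of the unit characteristic direction (F.7) and the corresponding first-order upwind estimate of the left-hand side of (F.4a):

```python
import numpy as np

def unit_characteristic(mu_x, lam_xi):
    """nu = (mu, lambda) / sqrt(mu^2 + lambda^2), cf. (F.7)."""
    norm = np.sqrt(mu_x ** 2 + lam_xi ** 2)
    return mu_x / norm, lam_xi / norm

def directional_derivative(K, x, xi, sigma, mu_x, lam_xi):
    """First-order upwind estimate of mu*K_x + lam*K_xi at (x, xi).
    The left-hand side of (F.4a) equals sqrt(mu^2 + lam^2) * dK/dnu,
    and dK/dnu is approximated by a backward difference of length sigma
    along the characteristic direction."""
    nu1, nu2 = unit_characteristic(mu_x, lam_xi)
    dK_dnu = (K(x, xi) - K(x - sigma * nu1, xi - sigma * nu2)) / sigma
    return np.sqrt(mu_x ** 2 + lam_xi ** 2) * dK_dnu

# check against K(x, xi) = x + 2*xi, where mu*K_x + lam*K_xi = mu + 2*lam
K = lambda x, xi: x + 2.0 * xi
val = directional_derivative(K, 0.5, 0.3, 1e-6, mu_x=1.0, lam_xi=3.0)
print(val)   # approximately 1 + 2*3 = 7
```

Because the test function is linear, the upwind difference is exact up to floating-point error, which makes the identity easy to verify.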
F.2.3 Discretization
The method starts by discretizing the domain T into the lower triangular part of an
N × N grid, with discrete nodes defined for
1 ≤ j ≤ i ≤ N, (F.8)
constituting a total of ½N(N + 1) nodes. One such grid is displayed in Fig. F.2 for
N = 4, with each node assigned a coordinate. The boundary condition (F.4b) is along
j = 1. Introducing the notation
Δ = 1/(N − 1),   xᵢ = (i − 1)Δ,   ξⱼ = (j − 1)Δ,   (F.9)
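For implementation purposes, a linear ordering of the nodes is convenient; the following row-major ordering over the lower triangle is one possible (assumed, not prescribed by the text) choice:

```python
# Row-major linear ordering of the 1/2 N (N+1) nodes satisfying (F.8).
# This particular ordering is an illustrative choice, not the book's.

def node_index(i, j):
    """Linear index of node (i, j) with 1 <= j <= i <= N."""
    return i * (i - 1) // 2 + (j - 1)

N = 4
indices = [node_index(i, j) for i in range(1, N + 1) for j in range(1, i + 1)]
print(indices)   # [0, 1, 2, ..., 9]: N(N+1)/2 = 10 consecutive indices
```

Enumerating all nodes with 1 ≤ j ≤ i ≤ N yields exactly ½N(N + 1) consecutive indices, which is handy when assembling the linear system described below.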
where

νᵢ,ⱼ = [νᵢ,ⱼ,₁, νᵢ,ⱼ,₂]ᵀ   (F.11)

is a unit vector in the direction of the characteristic at the point (xᵢ, ξⱼ), that is,

νᵢ,ⱼ = (1/√(μ²(xᵢ) + λ²(ξⱼ)))[μ(xᵢ), λ(ξⱼ)]ᵀ,   (F.12)
and σᵢ,ⱼ > 0 is the step length. Note that the evaluation point (xᵢ − σᵢ,ⱼνᵢ,ⱼ,₁,
ξⱼ − σᵢ,ⱼνᵢ,ⱼ,₂) usually is off-grid, and its value will have to be found by
interpolating neighboring points on the grid.
The performance of the proposed scheme depends on the step length σᵢ,ⱼ one chooses.
Clearly, one should choose σᵢ,ⱼ so that the evaluation point (xᵢ − σᵢ,ⱼνᵢ,ⱼ,₁,
ξⱼ − σᵢ,ⱼνᵢ,ⱼ,₂) is close to other points on the grid. Proposed here is a method for
choosing σᵢ,ⱼ. Depending on the values of the vector νᵢ,ⱼ, the extended vector
−σᵢ,ⱼνᵢ,ⱼ will either cut through the left-hand side of the square (blue arrow), or
the bottom side (red arrow), as depicted in Fig. F.3. In either case, the distance σᵢ,ⱼ
can be computed so that one of the sides is hit. In the case of the left-hand side being
hit, the value K(xᵢ − σᵢ,ⱼνᵢ,ⱼ,₁, ξⱼ − σᵢ,ⱼνᵢ,ⱼ,₂, t) can be evaluated by simple linear
interpolation of the points at (i − 1, j) and (i − 1, j − 1). Similarly, if the bottom
side is hit, the point is evaluated using linear interpolation of the points at
(i − 1, j − 1) and (i, j − 1).
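The step-length rule above can be sketched as follows (function and variable names are our own):

```python
import math

def step_and_weights(nu1, nu2, delta):
    """Choose sigma so that the back-traced point
    (x_i - sigma*nu1, xi_j - sigma*nu2) lands exactly on the left or bottom
    edge of the neighbouring grid cell of side delta.
    Returns (sigma, edge, theta). For edge == 'left', interpolate
    (1 - theta)*K[i-1, j] + theta*K[i-1, j-1]; for edge == 'bottom',
    interpolate (1 - theta)*K[i, j-1] + theta*K[i-1, j-1]."""
    if nu1 >= nu2:
        sigma = delta / nu1          # left edge x = x_i - delta is hit first
        theta = sigma * nu2 / delta  # fraction travelled down that edge
        return sigma, 'left', theta
    sigma = delta / nu2              # bottom edge xi = xi_j - delta is hit first
    theta = sigma * nu1 / delta
    return sigma, 'bottom', theta

# equal transport speeds: the characteristic is the diagonal, and the
# back-traced point is exactly the node (i - 1, j - 1)
s, edge, th = step_and_weights(1 / math.sqrt(2), 1 / math.sqrt(2), 0.05)
print(edge, round(th, 6))   # left 1.0
```

When θ = 1 the back-trace hits the corner node (i − 1, j − 1) and no interpolation is needed, which is the situation for μ = λ.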
Using the above discretization scheme, a linear set of equations can be built and solved
efficiently on a computer. In the case of adaptive schemes, most of the matrices can
be computed off-line prior to implementation, and updating the parts changing with
the adaptive laws should be a minor part of the implementation.
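To illustrate how such a linear system can be assembled, the following self-contained Python sketch treats the pure-transport special case μK_x + λK_ξ = 0 with constant μ = λ = 1 and a hypothetical boundary condition K(x, 0) = f(x); the source terms of (F.4) are omitted and all names are illustrative. Since the exact solution is then K(x, ξ) = f(x − ξ), the scheme can be checked directly:

```python
import numpy as np

N = 21                           # nodes per side; the triangle has N(N+1)/2 nodes
delta = 1.0 / (N - 1)
f = lambda x: 2.0 * x + 1.0      # hypothetical boundary data K(x, 0) = f(x)

def idx(i, j):
    """Row-major linear index of node (i, j), 1 <= j <= i <= N."""
    return i * (i - 1) // 2 + (j - 1)

n = N * (N + 1) // 2
A = np.zeros((n, n))
b = np.zeros(n)

for i in range(1, N + 1):
    for j in range(1, i + 1):
        r = idx(i, j)
        if j == 1:                       # boundary row: K(x_i, 0) = f(x_i)
            A[r, r] = 1.0
            b[r] = f((i - 1) * delta)
            continue
        nu1 = nu2 = 1.0 / np.sqrt(2.0)   # unit characteristic for mu = lam = 1
        if nu1 >= nu2:                   # back-trace hits the left cell edge
            theta = nu2 / nu1
            upstream = [(idx(i - 1, j), 1.0 - theta),
                        (idx(i - 1, j - 1), theta)]
        else:                            # back-trace hits the bottom cell edge
            theta = nu1 / nu2
            upstream = [(idx(i, j - 1), 1.0 - theta),
                        (idx(i - 1, j - 1), theta)]
        # upwind row: K at the node minus the interpolated upstream value is 0
        A[r, r] = 1.0
        for c, w in upstream:
            if w > 0.0:                  # skip zero weights (keeps indices valid)
                A[r, c] -= w

K = np.linalg.solve(A, b)
# with mu = lam, K convects unchanged along diagonals: K(x, xi) = f(x - xi)
```

In a suitable node ordering the matrix is lower triangular with unit diagonal, so the system is always solvable; in an adaptive setting only the rows affected by updated coefficients would need to be reassembled.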
References
Abramowitz M, Stegun IA (eds) (1975) Handbook of mathematical functions with formulas, graphs,
and mathematical tables. Dover Publications Inc, New York
Anfinsen H, Aamo OM (2016) Boundary parameter and state estimation in 2 × 2 linear hyperbolic
PDEs using adaptive backstepping. In: 2016 IEEE 55th conference on decision and control (CDC),
Las Vegas, NV, USA
Anfinsen H, Aamo OM (2017a) Adaptive stabilization of 2 × 2 linear hyperbolic systems with an
unknown boundary parameter from collocated sensing and control. IEEE Trans Autom Control
62(12):6237–6249
Anfinsen H, Aamo OM (2017b) Model reference adaptive control of n + 1 coupled linear hyperbolic
PDEs. Syst Control Lett 109:1–11
Anfinsen H, Aamo OM (2018) A note on establishing convergence in adaptive systems. Automatica
93:545–549
Coron J-M, Vazquez R, Krstić M, Bastin G (2013) Local exponential H² stabilization of a 2 × 2
quasilinear hyperbolic system using backstepping. SIAM J Control Optim 51(3):2005–2035
Coron J-M, Hu L, Olive G (2017) Finite-time boundary stabilization of general linear hyperbolic
balance laws via Fredholm backstepping transformation. Automatica 84:95–100
Di Meglio F, Vazquez R, Krstić M (2013) Stabilization of a system of n + 1 coupled first-order
hyperbolic linear PDEs with a single boundary input. IEEE Trans Autom Control 58(12):3097–
3111
Hu L, Vazquez R, Di Meglio F, Krstić M (2015) Boundary exponential stabilization of 1-D inhomo-
geneous quasilinear hyperbolic systems. SIAM J Control Optim (to appear)
Krstić M, Smyshlyaev A (2008) Backstepping boundary control for first-order hyperbolic PDEs
and application to systems with actuator and sensor delays. Syst Control Lett 57(9):750–758
Krstić M, Kanellakopoulos I, Kokotović PV (1995) Nonlinear and adaptive control design. Wiley,
New York
Tao G (2003) Adaptive control design and analysis. Wiley, New York
Index
A
Adaptive control
– identifier-based, 32, 70, 153
– Lyapunov, 30
– model reference, 103, 243, 332
– output feedback, 86, 111, 182, 196, 218, 250, 304, 338, 383
– state feedback, 70, 153, 165, 287
– swapping-based, 34, 86, 165, 182, 218, 243, 250, 287, 304, 332, 338, 383
Adaptive law, 68, 83, 100, 149, 161, 177, 194, 215, 240, 283, 302, 329, 379

B
Backstepping for PDEs, 24
Barbalat's lemma, 399
Bessel functions, 144

C
Canonical form, 97, 212, 234, 325
Cauchy–Schwarz' inequality, 405
Certainty equivalence, 32, 34
Classes of linear hyperbolic PDEs, 7
– 2 × 2 systems, 8, 117, 121, 147, 176, 207, 227
– n + 1 systems, 8, 257, 261, 281, 299, 317
– n + m systems, 9, 345, 349, 375
– scalar systems, 7, 45, 53, 67, 81, 95
Convergence, 10, 399
– minimum-time, 11, 354
– non-minimum time, 11, 350

D
Discretization, 473
Disturbance, 227
– parametrization, 228, 236
– rejection, 243
Drift flux model, 258

F
Filters, 34, 82, 98, 159, 176, 213, 236, 281, 299, 325, 376

H
Heat exchangers, 3

I
Identifier, 32, 68, 149

K
Korteweg de Vries equation, 46, 112

L
Laplace transform, 58
L²-stability, 10

M
Marcum Q-function, 144
Minkowski's inequality, 405
Model reference adaptive control, see adaptive control
Multiphase flow, 3, 258

N
Non-adaptive control, 53, 121, 261, 349
– output-feedback, 61, 140, 141, 276, 277, 364, 365
– state-feedback, 54, 123, 262, 350, 354
– tracking, 61, 141, 277, 367
Notation, 4

O
Observer, 60, 132, 268, 357
– anti-collocated, 133, 269, 358
– collocated, 137, 272, 363
Output feedback
– adaptive control, see adaptive control
– non-adaptive control, see non-adaptive control

P
Parabolic PDEs, 3
Persistency of excitation, 177
Predator–prey systems, 3
Projection, 68, 83, 100, 149, 161, 177, 215, 240, 283, 302, 329, 379, 397

R
Reference model, 96, 228, 318
Road traffic, 3, 46

S
Saint-Venant equations, 118
Saint-Venant–Exner model, 258
Square integrability, 10
Stability, 10, 399, 402
State feedback
– adaptive control, see adaptive control
– non-adaptive control, see non-adaptive control
Successive approximations, 17, 408, 471

T
Target system, 24
Time-delay, 3, 27
Transmission lines, 3, 118

U
Update law, see adaptive law

V
Volterra integral transformations, 14
– affine, 23
– invertibility, 18, 21, 23
– time-invariant, 14
– time-variant, 21

Y
Young's inequality, 406