
Communications and Control Engineering

Henrik Anfinsen
Ole Morten Aamo

Adaptive Control of Hyperbolic PDEs
Communications and Control Engineering

Series editors
Alberto Isidori, Roma, Italy
Jan H. van Schuppen, Amsterdam, The Netherlands
Eduardo D. Sontag, Boston, USA
Miroslav Krstic, La Jolla, USA
Communications and Control Engineering is a high-level academic monograph
series publishing research in control and systems theory, control engineering and
communications. It has worldwide distribution to engineers, researchers, educators
(several of the titles in this series find use as advanced textbooks although that is not
their primary purpose), and libraries.
The series reflects the major technological and mathematical advances that have a
great impact in the fields of communication and control. The range of areas to
which control and systems theory is applied is broadening rapidly with particular
growth being noticeable in the fields of finance and biologically-inspired control.
Books in this series generally pull together many related research threads in more
mature areas of the subject than the highly-specialised volumes of Lecture Notes in
Control and Information Sciences. This series’s mathematical and control-theoretic
emphasis is complemented by Advances in Industrial Control which provides a
much more applied, engineering-oriented outlook.
Publishing Ethics: Researchers should conduct their research from research
proposal to publication in line with best practices and codes of conduct of relevant
professional bodies and/or national and international regulatory bodies. For more
details on individual ethics matters please see:
https://www.springer.com/gp/authors-editors/journal-author/journal-author-helpdesk/publishing-ethics/14214

More information about this series at http://www.springer.com/series/61


Henrik Anfinsen
Ole Morten Aamo

Adaptive Control
of Hyperbolic PDEs

Henrik Anfinsen
Department of Engineering Cybernetics
Norwegian University of Science and Technology
Trondheim, Norway

Ole Morten Aamo
Department of Engineering Cybernetics
Norwegian University of Science and Technology
Trondheim, Norway

Additional material to this book can be downloaded from http://extras.springer.com.

ISSN 0178-5354          ISSN 2197-7119 (electronic)
Communications and Control Engineering
ISBN 978-3-030-05878-4          ISBN 978-3-030-05879-1 (eBook)
https://doi.org/10.1007/978-3-030-05879-1
Library of Congress Control Number: 2018964238

MATLAB® is a registered trademark of The MathWorks, Inc., 1 Apple Hill Drive, Natick, MA 01760-2098, USA, http://www.mathworks.com.

Mathematics Subject Classification (2010): 93-02, 93C20, 93C40, 93D21

© Springer Nature Switzerland AG 2019


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publi-
cation does not imply, even in the absence of a specific statement, that such names are exempt from the
relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made. The publisher remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

A few years ago, we came across an interesting problem related to oil well drilling: by controlling the pressure at the surface, attenuate pressure oscillations at the bottom of the well, several kilometers below the surface. At the same time, the 2011 CDC paper by Vazquez, Krstic, and Coron on “Backstepping boundary stabilization and state estimation of a 2 × 2 linear hyperbolic system” was published, providing the tools necessary to solve the problem. Various applications in the oil and gas industry, prone to uncertainty, subsequently led us to study the adaptive control problem for hyperbolic partial differential equations, relying heavily on the infinite-dimensional backstepping technique.
Over the years that followed, we derived a fairly complete theory for adaptive control of one-dimensional systems of coupled linear hyperbolic PDEs. The material is presented in this book in a systematic manner, giving a clear overview of the state of the art. The book is divided into five parts, with Part I devoted to introductory material and the remaining four parts distinguished by the structure of the system of equations under consideration. Part II treats scalar systems, while Part III deals with the simplest systems with bi-directional information flow. Together they constitute the bulk of the book, with the most complete treatment in terms of variations of the problem: collocated versus anti-collocated sensing and control, swapping design, identifier-based design, and various constellations of uncertainty. Parts IV and V extend (some of) the results from Part III to systems with bi-directional information flow governed by several coupled transport equations in one or both directions.
The book should be of interest to researchers, practicing control engineers, and
students of automatic control. Readers having studied adaptive control for ODEs
will recognize the techniques used for developing adaptive laws and providing
closed-loop stability guarantees. The book can form the basis of a graduate course
focused on adaptive control of hyperbolic PDEs, or a supplemental text for a course
on adaptive control or control of infinite-dimensional systems.
The book contains many simulation examples designed not only to demonstrate performance of the various schemes, but also to show how their numerical implementation is carried out. Since the theory is developed in infinite dimensions, spatial (as well as temporal) discretization is necessary for implementation. This is in itself a non-trivial task, so in order to lower the threshold for getting started using the designs offered in this book, computer code (MATLAB) is provided for many of the cases at http://extras.springer.com.
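To give a flavor of the kind of discretization involved, here is a minimal sketch, in Python rather than the MATLAB code supplied with the book, of a first-order upwind scheme for the scalar transport equation u_t + λ u_x = 0 on x ∈ [0, 1] with a boundary input at x = 0. All parameter values and names (`lam`, `N`, `step`) are illustrative choices, not taken from the book's code:

```python
import numpy as np

# Transport PDE u_t + lam * u_x = 0 on x in [0, 1], actuated at x = 0.
lam = 1.0            # transport speed (assumed constant for this sketch)
N = 100              # number of spatial grid points
dx = 1.0 / (N - 1)
dt = 0.5 * dx / lam  # time step satisfying the CFL condition lam*dt/dx <= 1
u = np.zeros(N)      # zero initial condition

def step(u, boundary_input):
    """Advance the state one time step with first-order upwind differencing."""
    u_new = u.copy()
    # Information travels to the right, so difference against the left neighbor.
    u_new[1:] = u[1:] - lam * dt / dx * (u[1:] - u[:-1])
    u_new[0] = boundary_input  # boundary condition u(0, t) = U(t)
    return u_new

# Apply a constant input; by t = 2/lam the profile has filled the domain.
n_steps = int(round(2.0 / lam / dt))
for _ in range(n_steps):
    u = step(u, 1.0)
```

The CFL restriction on `dt` is what makes the explicit scheme stable; first-order upwinding smears sharp fronts, which is one reason the discretization of these systems is, as noted above, non-trivial.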

Trondheim, Norway
Henrik Anfinsen
Ole Morten Aamo
Acknowledgements

We owe great gratitude to coauthors in works leading to this book: Miroslav Krstic,
Florent Di Meglio, Mamadou Diagne, and Timm Strecker. In addition, we have
benefited from support from or interaction with Ulf Jakob Flø Aarsnes, Anders Albert,
Delphine Bresch-Pietri, Anders Rønning Dahlen, Michael Demetriou, John-Morten
Godhavn, Espen Hauge, Haavard Holta, Glenn-Ole Kaasa, Ingar Skyberg Landet,
Henrik Manum, Ken Mease, Alexey Pavlov, Bjørn Rudshaug, Sigbjørn Sangesland,
Rafael Vazquez, Nils Christian Aars Wilhelmsen, and Jing Zhou.
We gratefully acknowledge the support that we have received from the
Norwegian Academy of Science and Letters, Equinor, and the Norwegian Research
Council.
The second author dedicates this book to his daughters Anna and Oline, and wife
Linda.

Contents

Part I Background
1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Linear Hyperbolic PDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 Classes of Linear Hyperbolic PDEs Considered . . . . . . . . . . . . 7
1.4.1 Scalar Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4.2 2 × 2 Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.3 n + 1 Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.4 n + m Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5 Collocated Versus Anti-collocated Sensing and Control . . . . . . 10
1.6 Stability of PDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.7 Some Useful Properties of Linear Hyperbolic PDEs . . . . . . . . . 13
1.8 Volterra Integral Transformations . . . . . . . . . . . . . . . . . . . . . . . 14
1.8.1 Time-Invariant Volterra Integral Transformations . . . . . 14
1.8.2 Time-Variant Volterra Integral Transformations . . . . . . 21
1.8.3 Affine Volterra Integral Transformations . . . . . . . . . . . 23
1.9 The Infinite-Dimensional Backstepping Technique for PDEs . . . 24
1.10 Approaches to Adaptive Control of PDEs . . . . . . . . . . . . . . . . 30
1.10.1 Lyapunov Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.10.2 Identifier-Based Design . . . . . . . . . . . . . . . . . . . . . . . . 32
1.10.3 Swapping-Based Design . . . . . . . . . . . . . . . . . . . . . . . 34
1.10.4 Discussion of the Three Methods . . . . . . . . . . . . . . . . 38
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39


Part II Scalar Systems


2 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.1 System Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.2 Proof of Lemma 2.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3 Non-adaptive Schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.2 State Feedback Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.2.1 Controller Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.2.2 Explicit Controller Gains . . . . . . . . . . . . . . . . . . . . . . 58
3.3 Boundary Observer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.4 Output Feedback Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.5 Output Tracking Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.6 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4 Adaptive State-Feedback Controller . . . . . . . . . . . . . . . . . . . . . . . . 67
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.2 Identifier-Based Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.2.1 Identifier and Update Law . . . . . . . . . . . . . . . . . . . . . 68
4.2.2 Control Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.2.3 Backstepping and Target System . . . . . . . . . . . . . . . . . 71
4.2.4 Proof of Theorem 4.1 . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.3 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.4 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5 Adaptive Output-Feedback Controller . . . . . . . . . . . . . . . . . . . . . . 81
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.2 Swapping-Based Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.2.1 Filter Design and Non-adaptive State Estimates . . . . . . 82
5.2.2 Adaptive Laws and State Estimation . . . . . . . . . . . . . . 83
5.2.3 Control Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
5.2.4 Backstepping and Target System . . . . . . . . . . . . . . . . . 86
5.2.5 Proof of Theorem 5.1 . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.3 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
5.4 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
6 Model Reference Adaptive Control . . . . . . . . . . . . . . . . . . . . . . . . . 95
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
6.2 Model Reference Adaptive Control . . . . . . . . . . . . . . . . . . . . . 96
6.2.1 Canonical Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
6.2.2 Filter Design and Non-adaptive State Estimate . . . . . . . 98
6.2.3 Adaptive Laws and State Estimates . . . . . . . . . . . . . . . 99
6.2.4 Control Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
6.2.5 Backstepping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
6.2.6 Proof of Theorem 6.1 . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.3 Adaptive Output Feedback Stabilization . . . . . . . . . . . . . . . . . . 111
6.4 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
6.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

Part III 2 × 2 Systems


7 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
8 Non-adaptive Schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
8.2 State Feedback Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
8.3 State Observers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
8.3.1 Sensing Anti-collocated with Actuation . . . . . . . . . . . . 132
8.3.2 Sensing Collocated with Actuation . . . . . . . . . . . . . . . 136
8.4 Output Feedback Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . 140
8.4.1 Sensing Anti-collocated with Actuation . . . . . . . . . . . . 140
8.4.2 Sensing Collocated with Actuation . . . . . . . . . . . . . . . 141
8.5 Output Tracking Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
8.6 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
8.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
9 Adaptive State Feedback Controllers . . . . . . . . . . . . . . . . . . . . . . . 147
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
9.2 Identifier-Based Design for a System with Constant
Coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
9.2.1 Identifier and Adaptive Laws . . . . . . . . . . . . . . . . . . . 148
9.2.2 Control Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
9.2.3 Backstepping Transformation . . . . . . . . . . . . . . . . . . . 153
9.2.4 Proof of Theorem 9.1 . . . . . . . . . . . . . . . . . . . . . . . . . 155
9.3 Swapping-Based Design for a System with Spatially Varying
Coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
9.3.1 Filter Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
9.3.2 Adaptive Laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
9.3.3 Control Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
9.3.4 Backstepping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
9.3.5 Proof of Theorem 9.2 . . . . . . . . . . . . . . . . . . . . . . . . . 168
9.4 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
9.4.1 Identifier-Based Controller . . . . . . . . . . . . . . . . . . . . . 170
9.4.2 Swapping-Based Controller with Spatially Varying
System Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
9.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
10 Adaptive Output-Feedback: Uncertain Boundary Condition . . . . . . 175
10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
10.2 Anti-collocated Sensing and Control . . . . . . . . . . . . . . . . . . . . 176
10.2.1 Filters and Adaptive Laws . . . . . . . . . . . . . . . . . . . . . 176
10.2.2 Control Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
10.2.3 Backstepping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
10.2.4 Proof of Theorem 10.2 . . . . . . . . . . . . . . . . . . . . . . . . 186
10.3 Collocated Sensing and Control . . . . . . . . . . . . . . . . . . . . . . . . 189
10.3.1 Observer Equations . . . . . . . . . . . . . . . . . . . . . . . . . . 189
10.3.2 Target System and Backstepping . . . . . . . . . . . . . . . . . 190
10.3.3 Analysis of the Target System . . . . . . . . . . . . . . . . . . . 192
10.3.4 Adaptive Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
10.3.5 Control Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
10.3.6 Backstepping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
10.3.7 Proof of Theorem 10.4 . . . . . . . . . . . . . . . . . . . . . . . . 199
10.4 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
10.4.1 Anti-collocated Sensing and Control . . . . . . . . . . . . . . 202
10.4.2 Collocated Sensing and Control . . . . . . . . . . . . . . . . . 204
10.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
11 Adaptive Output-Feedback: Uncertain In-Domain Parameters . . . . 207
11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
11.2 Anti-collocated Sensing and Control . . . . . . . . . . . . . . . . . . . . 208
11.2.1 Mapping to Observer Canonical Form . . . . . . . . . . . . . 208
11.2.2 Parametrization by Filters . . . . . . . . . . . . . . . . . . . . . . 212
11.2.3 Adaptive Law and State Estimation . . . . . . . . . . . . . . . 214
11.2.4 Closed Loop Adaptive Control . . . . . . . . . . . . . . . . . . 218
11.2.5 Backstepping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
11.2.6 Proof of Theorem 11.1 . . . . . . . . . . . . . . . . . . . . . . . . 219
11.3 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
11.4 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
12 Model Reference Adaptive Control . . . . . . . . . . . . . . . . . . . . . . . 227
12.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
12.2 Model Reference Adaptive Control . . . . . . . . . . . . . . . . . . . . . 228
12.2.1 Disturbance Parameterization . . . . . . . . . . . . . . . . . . . 228
12.2.2 Mapping to Canonical Form . . . . . . . . . . . . . . . . . . . . 229
12.2.3 Reparametrization of the Disturbance . . . . . . . . . . . . . 236
12.2.4 Filter Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
12.2.5 Adaptive Laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
12.2.6 Control Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
12.2.7 Backstepping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
12.2.8 Proof of Theorem 12.1 . . . . . . . . . . . . . . . . . . . . . . . . 245
12.3 Adaptive Output-Feedback Stabilization
in the Disturbance-Free Case . . . . . . . . . . . . . . . . . . . . . . . . . . 249
12.4 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251

Part IV n + 1 Systems
13 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
14 Non-adaptive Schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
14.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
14.2 State Feedback Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
14.3 State Observers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
14.3.1 Sensing Anti-collocated with Actuation . . . . . . . . . . . . 268
14.3.2 Sensing Collocated with Actuation . . . . . . . . . . . . . . . 271
14.4 Output Feedback Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . 276
14.4.1 Sensing Anti-collocated with Actuation . . . . . . . . . . . . 276
14.4.2 Sensing Collocated with Actuation . . . . . . . . . . . . . . . 276
14.5 Output Tracking Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . 277
14.6 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
14.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
15 Adaptive State-Feedback Controller . . . . . . . . . . . . . . . . . . . . . . . . 281
15.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
15.2 Swapping-Based Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
15.2.1 Filter Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
15.2.2 Adaptive Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
15.2.3 Control Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
15.2.4 Estimator Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . 287
15.2.5 Target System and Backstepping . . . . . . . . . . . . . . . . . 287
15.2.6 Proof of Theorem 15.2 . . . . . . . . . . . . . . . . . . . . . . . . 290
15.3 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
15.4 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
16 Adaptive Output-Feedback: Uncertain Boundary Condition . . . . . . 299
16.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
16.2 Sensing at Both Boundaries . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
16.2.1 Filter Design and Non-adaptive State Estimates . . . . . . 299
16.2.2 Parameter Update Law . . . . . . . . . . . . . . . . . . . . . . . . 301
16.2.3 State Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
16.2.4 Control Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
16.2.5 Backstepping of Estimator Dynamics . . . . . . . . . . . . . . 305
16.2.6 Backstepping of Regressor Filters . . . . . . . . . . . . . . . . 308
16.2.7 Proof of Theorem 16.2 . . . . . . . . . . . . . . . . . . . . . . . . 309
16.3 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
16.4 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
17 Model Reference Adaptive Control . . . . . . . . . . . . . . . . . . . . . . . . . 317
17.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
17.2 Model Reference Adaptive Control . . . . . . . . . . . . . . . . . . . . . 319
17.2.1 Mapping to Canonical Form . . . . . . . . . . . . . . . . . . . . 319
17.2.2 Filter Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
17.2.3 Adaptive Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
17.2.4 Control Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
17.2.5 Backstepping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
17.2.6 Proof of Theorem 17.1 . . . . . . . . . . . . . . . . . . . . . . . . 334
17.3 Adaptive Output Feedback Stabilization . . . . . . . . . . . . . . . . . . 338
17.4 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
17.4.1 Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
17.4.2 Stabilization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
17.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342

Part V n + m Systems
18 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
19 Non-adaptive Schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
19.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
19.2 State Feedback Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
19.2.1 Non-minimum-time Controller . . . . . . . . . . . . . . . . . . 350
19.2.2 Minimum-Time Controller . . . . . . . . . . . . . . . . . . . . . 353
19.3 Observers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
19.3.1 Anti-collocated Observer . . . . . . . . . . . . . . . . . . . . . . . 357
19.3.2 Collocated Observer . . . . . . . . . . . . . . . . . . . . . . . . . . 362
19.4 Output Feedback Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . 364
19.4.1 Sensing Anti-collocated with Actuation . . . . . . . . . . . . 364
19.4.2 Sensing Collocated with Actuation . . . . . . . . . . . . . . . 365
19.5 Reference Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
19.6 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
19.6.1 State-Feedback Control . . . . . . . . . . . . . . . . . . . . . . . . 370
19.6.2 Output-Feedback and Tracking Control . . . . . . . . . . . . 371
19.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
20 Adaptive Output-Feedback: Uncertain Boundary Condition . . . . . . 375
20.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
20.2 Sensing at Both Boundaries . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
20.2.1 Filter Design and Non-adaptive State Estimates . . . . . . 376
20.2.2 Adaptive Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
20.2.3 Output-Feedback Control Using Sensing at Both
Boundaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
20.2.4 Backstepping of Estimator Dynamics . . . . . . . . . . . . . . 384
20.2.5 Backstepping of Filters . . . . . . . . . . . . . . . . . . . . . . . . 386
20.2.6 Proof of Theorem 20.2 . . . . . . . . . . . . . . . . . . . . . . . . 388
20.3 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
20.3.1 Parameter Estimation . . . . . . . . . . . . . . . . . . . . . . . . . 391
20.3.2 Output-Feedback Adaptive Control . . . . . . . . . . . . . . . 393
20.4 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
Appendix A: Projection Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
Appendix B: Lemmas for Proving Stability and Convergence . . . . . . . . 399
Appendix C: Minkowski’s, Cauchy–Schwarz’ and Young’s
Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
Appendix D: Well-Posedness of Kernel Equations . . . . . . . . . . . . . . . . . . 407
Appendix E: Additional Proofs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
Appendix F: Numerical Methods for Solving Kernel Equations . . . . . . . 471
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
Part I
Background
Chapter 1
Background

1.1 Introduction

Systems of hyperbolic partial differential equations (PDEs) describe flow and transport phenomena. Typical examples are transmission lines (Curró et al. 2011), road traffic (Amin et al. 2008), heat exchangers (Xu and Sallet 2010), oil wells (Landet et al. 2013), multiphase flow (Di Meglio et al. 2011; Diagne et al. 2017), time-delays (Krstić and Smyshlyaev 2008b) and predator–prey systems (Wollkind 1986), to mention a few. These distributed parameter systems give rise to important estimation and control problems, with methods ranging from control Lyapunov functions (Coron et al. 2007), Riemann invariants (Greenberg and Tsien 1984) and frequency domain approaches (Litrico and Fromion 2006) to active disturbance rejection control (ADRC) (Guo and Jin 2015). The approach taken in this book makes extensive use of Volterra integral transformations and is known as the infinite-dimensional backstepping approach. The backstepping approach offers a systematic way of designing controllers and observers for linear PDEs, non-adaptive as well as adaptive. One of its key strengths is that the controllers and observers are derived for the infinite-dimensional system directly, and all analysis can therefore be done directly in the infinite-dimensional framework; discretization is deferred until an eventual implementation on a computer.
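In generic notation (not tied to any particular system class treated later in the book), the transformations in question take the Volterra form

```latex
w(x,t) = u(x,t) - \int_0^x k(x,\xi)\, u(\xi,t)\, \mathrm{d}\xi ,
```

where the kernel k is chosen so that the transformed state w satisfies a stable target system; since Volterra integral transformations of this type are invertible, stability of the target system carries over to the original state u.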
While integral transformations were used as early as the 1970s and 1980s in order
to study solutions and controllability properties of PDEs (Colton 1977; Seidman
1984), the very first use of infinite-dimensional backstepping for controller design
of PDEs is usually credited to Weijiu Liu for his paper (Liu 2003) published in 2003,
in which a parabolic PDE is stabilized using this technique. Following (Liu 2003),
the technique was quickly expanded in numerous directions, particularly in the work
authored by Andrey Smyshlyaev and Miroslav Krstić, published between 2004 and
approximately 2010. The earliest publication is Smyshlyaev and Krstić (2004), in
which non-adaptive state-feedback control laws for a class of parabolic PDEs are
derived, followed by backstepping-based boundary observer design in Smyshlyaev

© Springer Nature Switzerland AG 2019
H. Anfinsen and O. M. Aamo, Adaptive Control of Hyperbolic PDEs,
Communications and Control Engineering,
https://doi.org/10.1007/978-3-030-05879-1_1

and Krstić (2005). Adaptive solutions are derived in Smyshlyaev and Krstić (2006)
and in their comprehensive work in three parts (Krstić and Smyshlyaev 2008a;
Smyshlyaev and Krstić 2007a, b). Most of their work is collected in two extensive
books on non-adaptive (Krstić and Smyshlyaev 2008c) and adaptive (Smyshlyaev
and Krstić 2010a) backstepping-based controller and observer design, respectively.
The first use of backstepping for control of linear hyperbolic PDEs, on the other
hand, was in 2008 in the paper (Krstić and Smyshlyaev 2008b) for a scalar 1-D
system. Extensions to more complicated systems of hyperbolic PDEs were derived
a few years later in Vazquez et al. (2011), for two coupled linear hyperbolic PDEs,
and in Di Meglio et al. (2013) and more recently, Hu et al. (2016) for an arbitrary
number of coupled PDEs.
The very first result on adaptive control of hyperbolic PDEs using backstepping
was published as late as 2014 (Bernard and Krstić 2014). In that paper, the results
for parabolic PDEs in Smyshlyaev and Krstić (2006) were extended in order to
adaptively stabilize a scalar 1-D linear hyperbolic PDE with an uncertain in-domain
parameter using boundary sensing only.
A series of papers then followed developing a quite complete theory of adaptive
control of systems of coupled linear hyperbolic PDEs. This book gives a systematic
preparation of this body of work.

1.2 Notation

Domains
The following domains will be frequently used:

T = {(x, ξ) | 0 ≤ ξ ≤ x ≤ 1} (1.1a)
T1 = T × {t ≥ 0} (1.1b)
S = {(x, ξ) | 0 ≤ x ≤ ξ ≤ 1} (1.1c)
S1 = S × {t ≥ 0} . (1.1d)

Norms and Vector Spaces


For some real-valued matrix F = {F_ij}_{1≤i≤n, 1≤j≤m}:

|F|_∞ = max_{1≤i≤n, 1≤j≤m} |F_ij|. (1.2)

For a vector-valued signal u(x, t) = [u_1(x, t) u_2(x, t) . . . u_n(x, t)]^T defined for x ∈ [0, 1], t ≥ 0:

||u(t)||_∞ = sup_{x∈[0,1]} |u(x, t)|_∞ (1.3a)

||u(t)|| = √( ∫_0^1 u^T(x, t)u(x, t) dx ). (1.3b)

For vector-valued functions defined on x ∈ [0, 1] (i.e. time-invariant) we omit refer-


ence to time in the above notation, of course. We further define the function spaces

B([0, 1]) = {u(x) | ||u||∞ < ∞} (1.4a)


L 2 ([0, 1]) = {u(x) | ||u|| < ∞}. (1.4b)

For a function f (t) defined for t ≥ 0:


f ∈ L_p ⇔ ( ∫_0^∞ | f(t)|^p dt )^{1/p} < ∞ for p ∈ [0, ∞) (1.5a)
f ∈ L_∞ ⇔ sup_{t≥0} | f(t)| < ∞. (1.5b)
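As a quick numerical illustration of the norms in (1.3), the following sketch (the test function u(x) = sin(πx) and the grid resolution are illustrative choices) approximates ||u||_∞ by a grid maximum and ||u|| by trapezoidal quadrature:

```python
import math

# Illustrative evaluation of the norms (1.3) for the sampled scalar function
# u(x) = sin(pi*x) on [0, 1]; the function and grid size are arbitrary choices.
N = 1000
xs = [i / N for i in range(N + 1)]
u = [math.sin(math.pi * xi) for xi in xs]

sup_norm = max(abs(v) for v in u)          # ||u||_inf, approximated on the grid
# ||u|| = sqrt(int_0^1 u^2 dx), trapezoidal rule (the endpoint values vanish here)
l2 = math.sqrt(sum(v * v for v in u) / N)

print(sup_norm, l2)   # ~1.0 and ~sqrt(1/2) = 0.7071...
```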

Convergence
An arrow is used to denote asymptotic convergence, for instance

||z|| → 0 (1.6)

means that the L 2 -norm of the signal z(x, t) converges asymptotically to zero. Noth-
ing, however, is said about the rate of convergence. Similarly, the notation

a→b (1.7)

denotes that the signal a(t) converges asymptotically to some (possibly constant)
signal b(t).
Derivatives
The partial derivative of a variable is usually denoted using a subscript, that is

u_x(x, t) = ∂u(x, t)/∂x. (1.8)

When the variable already has a subscript, we will use the notation ∂_x to denote the partial derivative, so that for instance

∂_x u_1(x, t) = ∂u_1(x, t)/∂x. (1.9)

For a function in one variable, we use a prime (′) to denote differentiation, that is

f′(x) = df(x)/dx, (1.10)
for some function f of x. For a function in time only, we will use a dot to denote the
derivative, that is

η̇(t) = dη(t)/dt (1.11)
for some signal η of time t.
Other Notation
For a function of several variables, we will use · to indicate with respect to which
variable the norm is taken. For a signal u(x, t) defined for 0 ≤ x ≤ 1, t ≥ 0, we will
for instance let

u(x, ·) ∈ L2 (1.12)

denote that the signal u(x, t) belongs to L2 for any fixed x.


Estimates are usually denoted using a hat, so that e.g. θ̂(t) is an estimate of the
parameter θ. Such estimates are always time-varying. Estimation errors are usually
denoted using a tilde, that is: θ̃(t) = θ − θ̂(t) where θ̂(t) is the estimate of θ.
The boundaries x = 0 and x = 1 of the domain [0, 1] are sometimes referred to
as the left and right boundaries, respectively.
The n × n identity matrix is denoted In .
For two functions u, v ∈ B([0, 1]), we define the operator ≡ as

u ≡ v ⇔ ||u − v||∞ = 0 (1.13a)


u ≡ 0 ⇔ ||u||∞ = 0. (1.13b)

1.3 Linear Hyperbolic PDEs

One of the simplest linear hyperbolic PDEs is

u t (x, t) + u x (x, t) = 0 (1.14)

for a function u(x, t) defined in the spatial variable x ∈ R and time t ≥ 0. Equations
in the form (1.14) have an infinite number of solutions. In fact, any function u in the
form

u(x, t) = f (x − t) (1.15)

for some arbitrary function f defined on R is a solution to (1.14). The solutions of


interest are usually singled out by imposing additional constraints. First of all, we will
always limit the spatial variable to the unit domain, hence x ∈ [0, 1]. Additionally,
initial conditions (ICs) and boundary conditions (BCs), that is, conditions the solution
must satisfy at some given point (or set of points) in the domain or at the boundary,
are imposed. An example of an initial condition for (1.14) on the domain [0, 1] is

u(x, 0) = u 0 (x) (1.16)

for some function u 0 (x) defined for x ∈ [0, 1]. The type of boundary conditions
considered in this book are of Dirichlet type, which are in the form

u(0, t) = g(t) (1.17)

for some function g(t) defined for t ≥ 0. By imposing initial condition (1.16) and
boundary condition (1.17), the solution to (1.14) is narrowed down to a unique one,
namely

u(x, t) = u_0(x − t) for t < x, and u(x, t) = g(t − x) for t ≥ x. (1.18)

Hence, for t ≥ 1, the values of u(x, t) at time t are completely determined by the values of g on the interval [t − 1, t], and thus for t ≥ 1

u(x, t) = g(t − x) (1.19)

which clearly shows the transport property of the linear hyperbolic PDE (1.14) with
boundary condition (1.17): the values of g are transported without loss through the
domain [0, 1].
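The closed-form solution (1.18) and the transport property (1.19) are easy to check numerically; in the sketch below, u_0(x) = x and g(t) = sin(t) are illustrative choices:

```python
import math

# Evaluating the exact solution (1.18) of u_t + u_x = 0 with IC (1.16) and
# Dirichlet BC (1.17); u0 and g below are illustrative choices.
u0 = lambda x: x                 # initial condition on [0, 1]
g = lambda t: math.sin(t)        # boundary data at x = 0

def u(x, t):
    # (1.18): initial data for t < x, boundary data for t >= x
    return u0(x - t) if t < x else g(t - x)

# transport property (1.19): for t >= 1, u(x, t) = g(t - x) on the whole domain
for x in [0.0, 0.25, 0.5, 1.0]:
    assert u(x, 1.5) == g(1.5 - x)
print(u(0.5, 2.0))   # equals g(1.5) = sin(1.5)
```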

1.4 Classes of Linear Hyperbolic PDEs Considered

We will categorize linear hyperbolic PDEs into four types, which we will refer to as
classes. We assume all of them to be defined over the unit spatial domain x ∈ [0, 1],
which can always be achieved by scaling, and time t ≥ 0.

1.4.1 Scalar Systems

The first and simplest ones, are scalar first order linear hyperbolic partial (integral)
differential equations, which we will refer to as scalar systems. They consist of a
single P(I)DE, and are in the form

u_t(x, t) − λ(x)u_x(x, t) = f(x)u(x, t) + g(x)u(0, t) + ∫_0^x h(x, ξ)u(ξ, t)dξ (1.20a)
u(1, t) = U(t) (1.20b)
u(x, 0) = u_0(x) (1.20c)

for the system state u(x, t) defined for x ∈ [0, 1], t ≥ 0, some functions λ, f, g, h, with λ(x) > 0, ∀x ∈ [0, 1], some initial condition u_0, and an actuation signal U.

1.4.2 2 × 2 Systems

The second class of systems consists of two coupled first order linear hyperbolic
partial differential equations with opposite signs on their transport speeds, so that
they convect information in opposite directions. This type of systems has in the
literature (Vazquez et al. 2011; Aamo 2013) been referred to as 2 × 2 systems. They
are in the form

u t (x, t) + λ(x)u x (x, t) = c11 (x)u(x, t) + c12 (x)v(x, t) (1.21a)


vt (x, t) − μ(x)vx (x, t) = c21 (x)u(x, t) + c22 (x)v(x, t) (1.21b)
u(0, t) = qv(0, t) (1.21c)
v(1, t) = ρu(1, t) + U (t) (1.21d)
u(x, 0) = u 0 (x) (1.21e)
v(x, 0) = v0 (x) (1.21f)

for the system states u(x, t) and v(x, t) defined for x ∈ [0, 1], t ≥ 0, some functions
λ, μ, c11 , c12 , c21 , c22 , with λ(x), μ(x) > 0, ∀x ∈ [0, 1], some constants ρ, q, some
initial conditions u 0 , v0 , and an actuation signal U .

1.4.3 n + 1 Systems

The third class of systems consists of an arbitrary number of PDEs with positive
transport speeds, and a single one with negative transport speed. They are referred
to as n + 1 systems, and have the form

u t (x, t) + Λ(x)u x (x, t) = Σ(x)u(x, t) + ω(x)v(x, t) (1.22a)


vt (x, t) − μ(x)vx (x, t) = (x)u(x, t) + π(x)v(x, t) (1.22b)
u(0, t) = qv(0, t) (1.22c)
v(1, t) = ρT u(1, t) + U (t) (1.22d)
u(x, 0) = u 0 (x) (1.22e)
v(x, 0) = v0 (x) (1.22f)

for the system states


u(x, t) = [u_1(x, t) u_2(x, t) . . . u_n(x, t)]^T, v(x, t) (1.23)

defined for x ∈ [0, 1], t ≥ 0, the transport speeds

Λ(x) = diag{λ1 (x), λ2 (x), . . . , λn (x)}, μ(x) (1.24)

with λi (x), μ(x) > 0 for i = 1, 2, . . . , n, some functions Σ(x), ω, , π and vectors
q, ρ of appropriate sizes, initial conditions u 0 , v0 and an actuation signal U .

1.4.4 n + m Systems

The most general class of systems considered is referred to as n + m systems. Here,


an arbitrary number of states convect in each direction. They have the form

u t (x, t) + Λ+ (x)u x (x, t) = Σ ++ (x)u(x, t) + Σ +− (x)v(x, t) (1.25a)


− −+ −−
vt (x, t) − Λ (x)vx (x, t) = Σ (x)u(x, t) + Σ (x)v(x, t) (1.25b)
u(0, t) = Q 0 v(0, t) (1.25c)
v(1, t) = C1 u(1, t) + U (t) (1.25d)
u(x, 0) = u 0 (x) (1.25e)
v(x, 0) = v0 (x) (1.25f)

for the system states


u(x, t) = [u_1(x, t) u_2(x, t) . . . u_n(x, t)]^T (1.26a)
v(x, t) = [v_1(x, t) v_2(x, t) . . . v_m(x, t)]^T, (1.26b)

defined for x ∈ [0, 1], t ≥ 0, the transport speeds

Λ+ (x) = diag{λ1 (x), λ2 (x), . . . , λn (x)} (1.27a)


Λ− (x) = diag{μ1 (x), μ2 (x), . . . , μm (x)}, (1.27b)

with λi (x), μ j (x) > 0 for i = 1, 2, . . . , n, j = 1, 2, . . . , m, some functions Σ ++ (x),


Σ +− (x), Σ −+ (x), Σ −− (x) and matrices Q 0 , C1 of appropriate sizes, initial condi-
tions u 0 , v0 and an actuation signal U . Note that the actuation signal U in this case
is a vector with m components.
Clearly, the class of 2 × 2 systems is contained in the class of n + 1 systems,
which in turn is contained in the class of n + m systems. We distinguish between
these classes because the theory is more evolved, and the analysis sometimes easier,
for the class of simpler systems.

1.5 Collocated Versus Anti-collocated Sensing and Control

Sensing is either distributed (that is: assuming the full state u(x, t) for all x ∈ [0, 1]
is available), or taken at the boundaries. For boundary sensing, a distinction between
collocated and anti-collocated sensing and control is often made for systems of the
2 × 2, n + 1 and n + m classes.
If the sensing is taken at the same boundary as the actuation, it is referred to
as collocated sensing and control. The collocated measurement for systems (1.21),
(1.22) and (1.25) is

y1 (t) = u(1, t). (1.28)

If the sensing is taken at the opposite boundary of actuation, it is referred to as anti-


collocated sensing and control. The anti-collocated measurement for systems (1.20),
(1.21), (1.22) and (1.25) is

y0 (t) = v(0, t). (1.29)

1.6 Stability of PDEs

Systems of linear hyperbolic PDEs can, when left uncontrolled, be stable or unstable.
When closing the loop with a control law, we want to establish as strong stability
properties as possible for the closed-loop system. We will here list the stability
properties we are concerned with in this book:
1. L 2 -stability: ||u|| ∈ L∞
2. Square integrability in the L 2 -norm: ||u|| ∈ L2
3. Boundedness pointwise in space: ||u||∞ ∈ L∞
4. Square integrability pointwise in space: ||u||∞ ∈ L2
5. Convergence to zero in the L 2 -norm: ||u|| → 0
6. Convergence to zero pointwise in space: ||u||∞ → 0.

If the PDE fails to be stable, it is unstable. The latter of the above degrees of stability
is the desired result for all derived controllers in this book. However, this is not
always possible to achieve.
For the latter two, there is also a distinction between convergence in finite time,
convergence to zero in minimum time and asymptotic convergence to zero. For the
many non-adaptive schemes, convergence in finite time can usually be achieved. For
the adaptive schemes, asymptotic convergence to zero is the best possible result.
The transport delays for system (1.25) are given as
t_{u,i} = ∫_0^1 dγ/λ_i(γ), t_{v,j} = ∫_0^1 dγ/μ_j(γ) (1.30)

for i = 1, 2, . . . , n, j = 1, 2, . . . , m, where for instance t_{u,2} is the time it takes an arbitrary signal at u_2(0, t) to propagate to u_2(1, t).
According to Auriol and Di Meglio (2016), the theoretically smallest time an
n + m system (1.25) can converge to a steady state by applying a control signal,
for any arbitrary initial condition, is the sum of the slowest transport delays in each
direction, that is

t_min = min_{i∈{1,2,...,n}} t_{u,i} + min_{j∈{1,2,...,m}} t_{v,j}. (1.31)

Hence, if a control law U achieves

u ≡ 0, v≡0 (1.32)

for t ≥ tmin for any arbitrary initial condition, the system is said to converge to zero
in minimum-time, and the controller is said to be a minimum-time controller. The
concept of minimum time convergence also applies to observers, but is only relevant
for the n + 1 and n + m classes of systems, where multiple states convect in the
same direction.
Convergence in minimum time implies convergence in finite time, which in turn
also implies asymptotic convergence.
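The transport delays (1.30) and the resulting bound (1.31) can be computed by simple quadrature; in the sketch below the transport speeds (two rightward, one leftward) are illustrative choices:

```python
# Illustrative computation of the transport delays (1.30) and of t_min in (1.31)
# for assumed transport speeds; the speeds and grid size are arbitrary choices.
def transport_delay(speed, n=1000):
    # t = int_0^1 dgamma / speed(gamma), trapezoidal rule
    h = 1.0 / n
    interior = sum(1.0 / speed(i * h) for i in range(1, n))
    return h * (interior + 0.5 / speed(0.0) + 0.5 / speed(1.0))

lams = [lambda g: 1.0 + g, lambda g: 2.0]    # speeds of the rightward states
mus = [lambda g: 1.0]                        # speed of the leftward state
tu = [transport_delay(lam) for lam in lams]  # [ln 2, 1/2]
tv = [transport_delay(mu) for mu in mus]     # [1]
t_min = min(tu) + min(tv)                    # per (1.31)
print(t_min)   # 0.5 + 1.0 = 1.5
```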
We will now demonstrate the different degrees of stability on a simple PDE in
the following example, and give assumptions needed to ensure the different stability
properties.

Example 1.1 Consider

u t (x, t) − u x (x, t) = 0, u(1, t) = qu(0, t), u(x, 0) = u 0 (x) (1.33)

where u(x, t) is defined for x ∈ [0, 1] and t ≥ 0, q is a constant and u 0 ∈ L 2 ([0, 1])
is a function. It is straightforward to show that the solution to (1.33) is

u(x, t) = q^n u_0(t − n + x) for n ≤ t < n + 1 − x, and u(x, t) = q^{n+1} u_0(t − n − 1 + x) for n + 1 − x ≤ t < n + 1, (1.34)

for integers n ≥ 0. Specifically, for t = n, we have

u(x, n) = q^n u_0(x), (1.35)

from which we can derive



||u(n)|| = √( ∫_0^1 q^{2n} u_0^2(x) dx ) = |q|^n ||u_0||. (1.36)

We assume ||u 0 || is nonzero, and emphasize that u 0 ∈ L 2 ([0, 1]) does not imply
u 0 ∈ B([0, 1]).
1. L 2 -stability: If |q| ≤ 1, system (1.33) is stable in the L 2 -sense. This is seen from
(1.36). Hence ||u|| ∈ L∞ if |q| ≤ 1.
2. Square integrability of the L 2 -norm: We evaluate
∫_0^∞ ||u(t)||^2 dt = ∫_0^∞ ∫_0^1 u^2(x, t) dx dt = ∫_0^1 ∫_0^∞ u^2(x, t) dt dx
= ∫_0^1 Σ_{n=0}^∞ [ ∫_n^{n+1−x} q^{2n} u_0^2(t − n + x) dt + ∫_{n+1−x}^{n+1} q^{2n+2} u_0^2(t − n − 1 + x) dt ] dx
= M Σ_{n=0}^∞ q^{2n} (1.37)

where
M = ∫_0^1 [ q^2 ∫_0^x u_0^2(s) ds + ∫_x^1 u_0^2(s) ds ] dx (1.38)

is a bounded constant. It is clear that the expression (1.37) is bounded only for
|q| < 1. Hence, ||u|| ∈ L2 only if |q| < 1.
3. Boundedness pointwise in space: Boundedness pointwise in space cannot be
established for initial conditions in L 2 ([0, 1]). However, for u 0 ∈ B([0, 1]),
||u||∞ ∈ L∞ if |q| ≤ 1.
4. Square integrability pointwise in space: Since (1.33) is a pure transport equation,
it suffices to consider a single x ∈ [0, 1]. For simplicity, we choose x = 0 and
find from (1.34),

∫_0^∞ u^2(0, t) dt = Σ_{n=0}^∞ ∫_n^{n+1} q^{2n} u_0^2(t − n) dt = Σ_{n=0}^∞ q^{2n} ∫_0^1 u_0^2(x) dx
= ||u_0||^2 Σ_{n=0}^∞ q^{2n}. (1.39)

The expression (1.39) is bounded for |q| < 1 only, and hence ||u||∞ ∈ L2 only
if |q| < 1.
5. Convergence to zero in the L 2 -norm: It is seen from (1.36) that ||u|| → 0 only
if |q| < 1. Moreover, if q = 0, then ||u|| = 0 for all t ≥ 1, and hence finite-time
convergence is achieved.
6. Convergence to zero pointwise in space: Pointwise convergence cannot be
established for initial conditions in L 2 ([0, 1]). However, if u 0 ∈ B([0, 1]), then
||u||∞ → 0, provided |q| < 1. If, in addition, q = 0, then pointwise finite-time
convergence is achieved.
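The geometric norm evolution (1.36) in Example 1.1 can also be reproduced in simulation; the sketch below uses a CFL = 1 upwind scheme (exact for pure transport) with illustrative data q = 0.5 and u_0(x) = 1 − x/2, chosen so that the initial and boundary conditions are compatible:

```python
import math

# Simulating (1.33) with a CFL = 1 upwind scheme, which is nodally exact for this
# pure transport equation, and checking ||u(n)|| = |q|^n ||u0|| from (1.36).
# q and u0 are illustrative; u0 satisfies the compatibility u0(1) = q*u0(0).
q, N = 0.5, 200
dx = 1.0 / N
u = [1 - 0.5 * i * dx for i in range(N + 1)]   # u0 sampled on the grid

def l2_norm(u):
    # left-endpoint approximation of the L2 norm
    return math.sqrt(sum(v * v for v in u[:-1]) * dx)

norms = [l2_norm(u)]                  # t = 0
for step in range(1, 3 * N + 1):
    # u_t = u_x: values shift one cell to the left; BC u(1,t) = q u(0,t)
    u = u[1:] + [q * u[1]]
    if step % N == 0:                 # sample the norm at integer times t = 1, 2, 3
        norms.append(l2_norm(u))

ratios = [b / a for a, b in zip(norms, norms[1:])]
print(ratios)   # each ratio equals |q| = 0.5
```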

1.7 Some Useful Properties of Linear Hyperbolic PDEs

In the following, we list some useful properties of the type of systems of coupled 1-D linear hyperbolic PDEs with actuation laws considered in this book.

Theorem 1.1 Consider a system of linear first-order hyperbolic PDEs defined for
x ∈ [0, 1], t ≥ 0, with bounded system coefficients and bounded additive distur-
bances. Let w(x, t) be a vector containing all system states, with initial condition
w(x, 0) = w0 (x), with w0 ∈ L 2 ([0, 1]). Then

||w(t)|| ≤ Ae^{ct} (1.40)

where A depends on the initial condition norm ||w0 || and d̄, where d̄ is a constant
bounding all disturbances in the system, and c depends on the system parameters.
Moreover, if w0 ∈ B([0, 1]), then

||w(t)||_∞ ≤ Be^{kt}, (1.41)

where B depends on the initial condition norm ||w0 ||∞ and d̄, where d̄ is a constant
bounding all disturbances in the system, and k depends on the system parameters.

The proof is given in Appendix E.1 for the most general type of systems considered
in the book.
An important consequence of Theorem 1.1 is that the system's L_2-norm (or ∞-norm, in the case of initial conditions in B([0, 1])) cannot diverge to infinity in finite time.

Corollary 1.1 A system of linear first-order hyperbolic PDEs with bounded system
coefficients and initial conditions in L 2 ([0, 1]) (respectively B([0, 1])) that converges
to zero in finite time in the L 2 -sense (respectively in the B-sense) is exponentially
stable at the origin and square integrable in the L 2 -sense (respectively in the B-
sense).
The proof is given in Appendix E.1. Several results on control of linear hyperbolic
PDEs (Vazquez et al. 2011; Di Meglio et al. 2013; Chen et al. 2017) include proofs
of exponential stability in the L 2 -sense in addition to proof of convergence to zero
in finite time. With the use of Corollary 1.1 the former is not necessary.

1.8 Volterra Integral Transformations

This book uses a particular change of variables as an essential tool for controller and
observer design. By changing variables, the original system dynamics is transformed
into a form which is more amenable to stability analysis. The change of variables
is invertible, so that stability properties established for the transformed dynamics
also apply to the original dynamics. A particularly favorable feature of the approach,
is that the change of variables provides the state feedback law for the controller
design problem and the output injection gains for the observer design problem.
We refer to the change of variables as a Volterra integral transformation, since it
takes the form of a Volterra integral equation involving an integration kernel. In
this section, we introduce the variants of Volterra integral transformations used in
this book. First, we consider time-invariant transformations, where the integration
kernel is time-invariant. Such transformations are used for non-adaptive controller
design for systems with time-invariant coefficients. Then, we consider time-variant
transformations, where the integration kernel is allowed to vary with time. Such
transformations are needed for all adaptive solutions in the book. Finally, we consider
affine transformations, where an arbitrary function can be added to the transformation
in order to allow for shifting the origin. This transformation is used for controller
and observer design for coupled PDE-ODE systems.

1.8.1 Time-Invariant Volterra Integral Transformations

Consider four vector-valued functions u, v, w, z with n components, given as


u(x) = [u_1(x) u_2(x) . . . u_n(x)]^T (1.42a)
v(x) = [v_1(x) v_2(x) . . . v_n(x)]^T (1.42b)
w(x) = [w_1(x) w_2(x) . . . w_n(x)]^T (1.42c)
z(x) = [z_1(x) z_2(x) . . . z_n(x)]^T (1.42d)

defined for x ∈ [0, 1]. The Volterra integral transformations used in this book take
the form
v(x) = u(x) − ∫_0^x K(x, ξ)u(ξ)dξ (1.43)

and
z(x) = w(x) − ∫_x^1 M(x, ξ)w(ξ)dξ, (1.44)

for two matrix-valued functions

K(x, ξ) = {K^{ij}(x, ξ)}_{1≤i,j≤n} =
⎡ K^{11}(x, ξ) K^{12}(x, ξ) . . . K^{1n}(x, ξ) ⎤
⎢ K^{21}(x, ξ) K^{22}(x, ξ) . . . K^{2n}(x, ξ) ⎥
⎢ ⋮ ⋮ ⋱ ⋮ ⎥ (1.45)
⎣ K^{n1}(x, ξ) K^{n2}(x, ξ) . . . K^{nn}(x, ξ) ⎦

and

M(x, ξ) = {M^{ij}(x, ξ)}_{1≤i,j≤n} =
⎡ M^{11}(x, ξ) M^{12}(x, ξ) . . . M^{1n}(x, ξ) ⎤
⎢ M^{21}(x, ξ) M^{22}(x, ξ) . . . M^{2n}(x, ξ) ⎥
⎢ ⋮ ⋮ ⋱ ⋮ ⎥ (1.46)
⎣ M^{n1}(x, ξ) M^{n2}(x, ξ) . . . M^{nn}(x, ξ) ⎦

with components of K and M defined over T and S, respectively, defined in (1.1a)


and (1.1c). The mappings (1.43) and (1.44) are different due to the integration limits
in the integral terms.
Assume now that the Volterra transformations (1.43) and (1.44) are invertible,
with inverses in the form
u(x) = v(x) + ∫_0^x L(x, ξ)v(ξ)dξ (1.47)

and
w(x) = z(x) + ∫_x^1 N(x, ξ)z(ξ)dξ, (1.48)

respectively, for some functions L and N defined over the same domains as K and
M, respectively. By inserting (1.47) into (1.43), we find
v(x) = v(x) + ∫_0^x L(x, ξ)v(ξ)dξ − ∫_0^x K(x, ξ)v(ξ)dξ − ∫_0^x K(x, ξ) ∫_0^ξ L(ξ, s)v(s) ds dξ. (1.49)

Changing the order of integration in the double integral, yields


∫_0^x K(x, ξ) ∫_0^ξ L(ξ, s)v(s) ds dξ = ∫_0^x ∫_0^ξ K(x, ξ)L(ξ, s)v(s) ds dξ
= ∫_0^x ∫_s^x K(x, ξ)L(ξ, s)v(s) dξ ds. (1.50)

Interchanging the variables of integration (ξ, s) ⇒ (s, ξ), we get


∫_0^x K(x, ξ) ∫_0^ξ L(ξ, s)v(s) ds dξ = ∫_0^x ∫_ξ^x K(x, s)L(s, ξ) ds v(ξ) dξ. (1.51)

Substituting (1.51) into (1.49), we get


0 = ∫_0^x [ L(x, ξ) − K(x, ξ) − ∫_ξ^x K(x, s)L(s, ξ) ds ] v(ξ) dξ (1.52)

which gives a Volterra integral equation for L from K as


 x
L(x, ξ) = K (x, ξ) + K (x, s)L(s, ξ)ds. (1.53)
ξ

Similarly, by inserting (1.48) into (1.44), one obtains


z(x) = z(x) + ∫_x^1 N(x, ξ)z(ξ)dξ − ∫_x^1 M(x, ξ)z(ξ)dξ − ∫_x^1 M(x, ξ) ∫_ξ^1 N(ξ, s)z(s) ds dξ. (1.54)

Changing the order of integration in the double integral yields


0 = ∫_x^1 [ N(x, ξ) − M(x, ξ) − ∫_x^ξ M(x, s)N(s, ξ) ds ] z(ξ) dξ, (1.55)

which gives a Volterra integral equation for N in terms of M as


N(x, ξ) = M(x, ξ) + ∫_x^ξ M(x, s)N(s, ξ) ds. (1.56)

We have thus shown that (1.47) and (1.48) are the inverses of (1.43) and (1.44),
respectively, provided L and N satisfy (1.53) and (1.56), respectively. The following
lemma addresses the existence of a solution to a Volterra integral equation for a
vector-valued function, which will be used to prove that solutions L and N of (1.53)
and (1.56) do exist. Since the equations for L and N in (1.53) and (1.56) are column-
wise independent, the lemma is applicable to (1.53) and (1.56) as well.

Lemma 1.1 Consider a vector F of n functions


F(x, ξ) = [F_1(x, ξ) F_2(x, ξ) . . . F_n(x, ξ)]^T, (1.57)

and the Volterra integral equation


F(x, ξ) = f(x, ξ) + ∫_ξ^x G(x, s)F(s, ξ) ds (1.58)

where the vector f (x, ξ) and matrix G(x, ξ) are given and bounded. Equation (1.58)
has a unique, bounded solution F(x, ξ), with a bound in the form

|F(x, ξ)|_∞ ≤ f̄ e^{nḠ(x−ξ)} (1.59)

where f¯ and Ḡ bound each element of f and G, respectively, i.e.

f¯ = || f ||∞ , Ḡ = ||G||∞ . (1.60)

Proof (originally stated in Anfinsen and Aamo 2016) Define the operator
Ψ[F](x, ξ) = ∫_ξ^x G(x, s)F(s, ξ) ds (1.61)

and consider the sequence

F^0(x, ξ) = 0 (1.62a)
F^q(x, ξ) = f(x, ξ) + Ψ[F^{q−1}](x, ξ), q ≥ 1. (1.62b)

Next, define the differences

ΔF^q(x, ξ) = F^q(x, ξ) − F^{q−1}(x, ξ), q ≥ 1. (1.63)

From the linearity of the operator (1.61), we have

ΔF^{q+1}(x, ξ) = Ψ[ΔF^q](x, ξ), q ≥ 1. (1.64)

Consider the infinite series




F(x, ξ) = Σ_{q=1}^∞ ΔF^q(x, ξ), (1.65)

which by construction satisfies (1.58). Recall that f¯ and Ḡ bound each element of
f and G, respectively, and suppose

|ΔF^q(x, ξ)|_∞ ≤ f̄ n^{q−1} Ḡ^{q−1} (x − ξ)^{q−1} / (q − 1)!. (1.66)

Then it follows that

|ΔF^{q+1}(x, ξ)|_∞ = |Ψ[ΔF^q](x, ξ)|_∞ ≤ ∫_ξ^x |G(x, s)ΔF^q(s, ξ)|_∞ ds ≤ nḠ ∫_ξ^x |ΔF^q(s, ξ)|_∞ ds
≤ f̄ n^q Ḡ^q / (q − 1)! ∫_ξ^x (s − ξ)^{q−1} ds ≤ f̄ n^q Ḡ^q (x − ξ)^q / q!. (1.67)

Furthermore, (1.66) trivially holds for q = 1. Hence, an upper bound for (1.65) is

|F(x, ξ)|_∞ ≤ Σ_{q=1}^∞ |ΔF^q(x, ξ)|_∞ ≤ f̄ Σ_{q=1}^∞ n^{q−1} Ḡ^{q−1} (x − ξ)^{q−1} / (q − 1)! ≤ f̄ e^{nḠ(x−ξ)}. (1.68)

This shows that the series is bounded and converges uniformly (Coron et al. 2013).
For uniqueness, consider two solutions F 1 (x, ξ) and F 2 (x, ξ), and consider their
difference F̃(x, ξ) = F 1 (x, ξ) − F 2 (x, ξ). Due to linearity, F̃(x, ξ) must also satisfy
(1.58), with f (x, ξ) ≡ 0. The upper bound (1.68) with f¯ = 0 then yields F̃(x, ξ) ≡
0, and hence F 1 ≡ F 2 . 

Theorem 1.2 The Volterra integral transformations (1.43) and (1.44) with bounded
kernels K and M are invertible, with inverses (1.47) and (1.48), respectively, where
the integration kernels L and N are given as the unique, bounded solutions to the
Volterra integral equations (1.53) and (1.56).
Moreover, for the transformation (1.43), the following bounds hold

||v|| ≤ A1 ||u||, ||u|| ≤ A2 ||v|| (1.69)

and

||v||∞ ≤ B1 ||u||∞ , ||u||∞ ≤ B2 ||v||∞ (1.70)



for some bounded constants A1 , A2 , B1 , B2 , depending on K . Similar bounds hold


for the transformation (1.44) with inverse (1.48).

Proof The fact that the inverses of (1.43) and (1.44) are (1.47) and (1.48) with L and
N given as the solution to (1.53) and (1.56) follows from the derivations (1.49)–(1.56)
and Lemma 1.1. To prove the bounds (1.69) and (1.70), we have
||v|| = √( ∫_0^1 v^2(x) dx ) = √( ∫_0^1 [ u(x) − ∫_0^x K(x, ξ)u(ξ)dξ ]^2 dx ). (1.71)

By Minkowski’s inequality (Lemma C.1 in Appendix C), we find


||v|| ≤ √( ∫_0^1 u^2(x) dx ) + √( ∫_0^1 [ ∫_0^x K(x, ξ)u(ξ)dξ ]^2 dx ), (1.72)

while Cauchy–Schwarz’ inequality (Lemma C.2 in Appendix C) gives


 
||v|| ≤ √( ∫_0^1 u^2(x) dx ) + √( ∫_0^1 ∫_0^x K^2(x, ξ)dξ ∫_0^x u^2(ξ)dξ dx )
≤ √( ∫_0^1 u^2(x) dx ) + √( ∫_0^1 ∫_0^x K^2(x, ξ)dξ ∫_0^1 u^2(ξ)dξ dx )
≤ ( 1 + √( ∫_0^1 ∫_0^x K^2(x, ξ)dξ dx ) ) √( ∫_0^1 u^2(x) dx )
≤ (1 + ||K||) ||u|| (1.73)

where
||K||^2 = ∫_0^1 ∫_0^x K^2(x, ξ) dξ dx. (1.74)

Hence ||v|| ≤ A1 ||u|| holds with

A1 = 1 + ||K ||. (1.75)

The proof of ||u|| ≤ A2 ||v|| is similar, using the inverse transformation


u(x) = v(x) + ∫_0^x L(x, ξ)v(ξ)dξ, (1.76)

yielding

A2 = 1 + ||L|| (1.77)

where
||L||^2 = ∫_0^1 ∫_0^x L^2(x, ξ) dξ dx. (1.78)

Moreover, we have, for all x ∈ [0, 1]


|v(x)| = | u(x) − ∫_0^x K(x, ξ)u(ξ)dξ | ≤ |u(x)| + | ∫_0^x K(x, ξ)u(ξ)dξ |
≤ |u(x)| + ||K||_∞ ∫_0^x |u(ξ)|dξ ≤ |u(x)| + ||K||_∞ ||u||_∞ (1.79)

and hence

||v||∞ ≤ ||u||∞ + ||K ||∞ ||u||∞ , (1.80)

which gives ||v||∞ ≤ B1 ||u||∞ with B1 = 1 + ||K ||∞ . A similar proof gives ||u||∞ ≤
B2 ||v||∞ with B2 = 1 + ||L||∞ . Similar derivations give equivalent bounds for the
transformation (1.44). 

Given some arbitrary functions K or M, the Volterra integral equations (1.53)


and (1.56) rarely have solutions L and N that can be found explicitly. In practice,
an approximate solution can be found by iterating (1.62) a finite number of times, or
truncating the sum (1.65). However, there are some exceptions where the solution
can be found explicitly, which we consider in the following example.
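Such an iteration is easy to sketch numerically. The following (grid size, number of iterations, and the constant kernel K(x, ξ) = θ are illustrative choices) runs the successive approximations (1.62) for the scalar version of (1.53) and compares against the explicit kernel θe^{θ(x−ξ)} derived in the example below:

```python
import math

# Successive-approximation scheme (1.62) for the scalar Volterra equation (1.53)
# with the constant kernel K(x, xi) = theta, computed on a triangular grid with
# trapezoidal quadrature. theta, N, and the iteration count are illustrative.
theta, N, iters = 1.0, 100, 25
h = 1.0 / N

def picard_step(L):
    # L_new[i][j] ~ K(x_i, xi_j) + int_{xi_j}^{x_i} K(x_i, s) L(s, xi_j) ds
    new = [[0.0] * (i + 1) for i in range(N + 1)]
    for i in range(N + 1):
        for j in range(i + 1):
            integ = sum((h if j < k < i else h / 2) * L[k][j]
                        for k in range(j, i + 1)) if i > j else 0.0
            new[i][j] = theta + theta * integ
    return new

L = [[0.0] * (i + 1) for i in range(N + 1)]     # F^0 = 0 in (1.62a)
for _ in range(iters):
    L = picard_step(L)

# compare with the explicit solution theta * e^{theta (x - xi)}
err = max(abs(L[i][j] - theta * math.exp(theta * (i - j) * h))
          for i in range(N + 1) for j in range(i + 1))
print(err)   # small: dominated by the O(h^2) quadrature error
```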

Example 1.2 Consider the Volterra integral transformation from u(x) to w(x),
defined over x ∈ [0, 1]
w(x) = u(x) − θ ∫_0^x u(ξ)dξ, (1.81)

for some constant θ. Using the Volterra integral equation (1.53), we find the following
equation for L in the inverse transformation (1.47)
L(x, ξ) = θ + θ ∫_ξ^x L(s, ξ) ds, (1.82)

the solution to which is

L(x, ξ) = θeθ(x−ξ) . (1.83)

This can be verified by insertion:


L(x, ξ) = θ + θ ∫_ξ^x L(s, ξ) ds = θ + θ^2 ∫_ξ^x e^{θ(s−ξ)} ds = θ + θe^{θ(x−ξ)} − θ = θe^{θ(x−ξ)}. (1.84)

Hence, the inverse transformation of (1.81) is


u(x) = w(x) + θ ∫_0^x e^{θ(x−ξ)} w(ξ)dξ. (1.85)
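As a numerical sanity check of Example 1.2, the sketch below applies (1.81) and then the claimed inverse (1.85) on a grid and verifies that the original function is recovered up to quadrature error (θ, the test function, and the grid size are illustrative choices):

```python
import math

# Round-trip check of Example 1.2: apply (1.81), then the inverse (1.85), and
# verify the original function is recovered. theta, u, and the grid are
# illustrative choices.
theta, N = 0.7, 400
dx = 1.0 / N
xs = [i * dx for i in range(N + 1)]
u = [math.cos(3 * xi) for xi in xs]

def volterra(f, kernel):
    # g(x_i) = f(x_i) + int_0^{x_i} kernel(x_i, xi) f(xi) dxi (trapezoidal rule)
    out = []
    for i in range(N + 1):
        integ = sum((dx if 0 < j < i else dx / 2) * kernel(xs[i], xs[j]) * f[j]
                    for j in range(i + 1)) if i > 0 else 0.0
        out.append(f[i] + integ)
    return out

w = volterra(u, lambda x, xi: -theta)                                   # (1.81)
u_rec = volterra(w, lambda x, xi: theta * math.exp(theta * (x - xi)))   # (1.85)
err = max(abs(a - b) for a, b in zip(u, u_rec))
print(err)   # small: both transforms use O(dx^2) quadrature
```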

1.8.2 Time-Variant Volterra Integral Transformations

In adaptive systems, we will encounter time-varying Volterra integral transformations


in the form
v(x) = u(x) − ∫_0^x K(x, ξ, t)u(ξ)dξ, (1.86)

and
z(x) = w(x) − ∫_x^1 M(x, ξ, t)w(ξ)dξ. (1.87)

In this case, K and M are functions of three variables including time, and are defined
over T1 and S1 , respectively, defined in (1.1b) and (1.1d).

Theorem 1.3 If the kernels K and M are bounded for every t, then the time-varying
Volterra integral transformations (1.86) and (1.87) are invertible for every t, with
inverses in the form
u(x) = v(x) + ∫_0^x L(x, ξ, t)v(ξ)dξ (1.88)

and
w(x) = z(x) + ∫_x^1 N(x, ξ, t)z(ξ)dξ (1.89)

respectively, where L and N depend on and are defined over the same domains as
K and M, respectively, and can uniquely be determined by solving the time-varying
Volterra integral equations
L(x, ξ, t) = K(x, ξ, t) + ∫_ξ^x K(x, s, t)L(s, ξ, t) ds (1.90)

and
N(x, ξ, t) = M(x, ξ, t) + ∫_x^ξ M(x, s, t)N(s, ξ, t) ds. (1.91)

Moreover, if the kernels are bounded uniformly in time, that is there exist constants
K̄ and M̄ such that ||K (t)||∞ ≤ K̄ and ||M(t)||∞ ≤ M̄ for every t ≥ 0, then there
exist constants G 1 , G 2 , H1 and H2 such that

||v(t)|| ≤ G_1 ||u(t)||, ||u(t)|| ≤ G_2 ||v(t)|| (1.92)

and

||v(t)||∞ ≤ H1 ||u(t)||∞ , ||u(t)||∞ ≤ H2 ||v(t)||∞ (1.93)

for all t ≥ 0. Similar bounds hold for the transformation (1.87) with inverse (1.89).

Proof The proof of (1.88) and (1.89) being inverses of (1.86) and (1.87), respectively,
can be found using the same steps as for Theorem 1.2, and is therefore omitted.
For every fixed t, we have from Theorem 1.2 the following bounds

||v(t)|| ≤ A1 (t)||u(t)||, ||u(t)|| ≤ A2 (t)||v(t)|| (1.94)

and

||v(t)||∞ ≤ B1 (t)||u(t)||∞ , ||u(t)||∞ ≤ B2 (t)||v(t)||∞ (1.95)

where

A_1(t) = 1 + ||K(t)||, A_2(t) = 1 + ||L(t)|| (1.96a)
B_1(t) = 1 + ||K(t)||_∞, B_2(t) = 1 + ||L(t)||_∞. (1.96b)

Choosing G 1 , G 2 , H1 , H2 as

G_1 = sup_{t≥0} |A_1(t)|, G_2 = sup_{t≥0} |A_2(t)| (1.97a)
H_1 = sup_{t≥0} |B_1(t)|, H_2 = sup_{t≥0} |B_2(t)|, (1.97b)

we obtain the bounds (1.92)–(1.93). Similar derivations give equivalent bounds for
the transformation (1.87). 

Time-varying Volterra transformations in the form (1.86) and (1.87) are also
invertible for every t, provided the kernels K and M are (uniformly) bounded for all
t. Volterra transformations in this form are typically used for adaptive schemes.

1.8.3 Affine Volterra Integral Transformations

Sometimes it is convenient to shift the origin when transforming into new variables.
This leads to an affine Volterra integral transformation, which involves a function
that is added or subtracted to the usual Volterra integral transformation. Examples
are the changes of variables from u(x) to α(x) and from w(x) to β(x), where the origin is shifted by F(x), given as
α(x) = u(x) − ∫_0^x K(x, ξ)u(ξ)dξ − F(x) (1.98)

or
β(x) = w(x) − ∫_x^1 M(x, ξ)w(ξ)dξ − F(x). (1.99)

Theorem 1.4 The transformations (1.98) and (1.99) with bounded kernels K and
M are invertible, with inverses in the form
u(x) = α(x) + ∫_0^x L(x, ξ)α(ξ)dξ + G(x) (1.100)

and
w(x) = β(x) + ∫_x^1 N(x, ξ)β(ξ)dξ + H(x) (1.101)

respectively, where L and N are the solutions to the Volterra integral equations (1.53)
and (1.56), respectively, and
G(x) = F(x) + ∫_0^x L(x, ξ)F(ξ)dξ (1.102)

and
H(x) = F(x) + ∫_x^1 N(x, ξ)F(ξ)dξ. (1.103)

Proof Defining

v(x) = α(x) + F(x) (1.104)



(Fig. 1.1 The concept of backstepping: the transformation w(x, t) = T[u(t)](x) and control law U(t) = F[u(t)] map the original, potentially unstable system in u into a stable target system in w; the inverse u(x, t) = T^{−1}[w(t)](x) maps back.)

and applying Theorem 1.2 gives


u(x) = v(x) + ∫_0^x L(x, ξ)v(ξ)dξ. (1.105)

Substituting (1.104) into (1.105) gives (1.100) and (1.102). Similar steps, defining
z(x) = β(x) + F(x), give (1.101) and (1.103). 

Affine Volterra integral transformations in the form (1.98) and (1.99) are typically
used for controller and observer design for coupled ODE-PDE systems.

1.9 The Infinite-Dimensional Backstepping Technique for PDEs

When using infinite-dimensional backstepping (or backstepping for short) for control or observer design for PDEs, an invertible Volterra integral transformation, T, with a bounded integration kernel is introduced along with a control law F[u]; together they map the system of interest into a carefully designed target system possessing some
desirable stability properties. This is illustrated in Fig. 1.1, where a backstepping
transformation T is used to map a system with dynamics in terms of u into a target
system with dynamics in terms of w. Due to the invertibility of the transformation,
the equivalence of norms as stated in Theorem 1.2 holds, which implies that the orig-
inal system is stabilized as well. We will demonstrate this in two examples. The first
example employs the transformation studied in Example 1.2.

Example 1.3 (Stabilization of an unstable PDE) Consider the simple PDE

u t (x, t) − u x (x, t) = θu(0, t) (1.106a)


u(1, t) = U (t) (1.106b)
u(x, 0) = u 0 (x) (1.106c)

for a signal u(x, t) defined for x ∈ [0, 1], t ≥ 0, where θ is a real constant, and the
initial condition u 0 (x) satisfies u 0 ∈ B([0, 1]). The state feedback control law
U(t) = −θ ∫_0^1 e^{θ(1−ξ)} u(ξ, t)dξ (1.107)

guarantees u ≡ 0 for t ≥ 1.
We prove this using the target system

wt (x, t) − wx (x, t) = 0 (1.108a)


w(1, t) = 0 (1.108b)
w(x, 0) = w0 (x) (1.108c)

for some initial condition w0 ∈ B([0, 1]). System (1.108) can be solved explicitly to
find

w(x, t) = w0(x + t) for t < 1 − x, and w(x, t) = w(1, t − (1 − x)) for t ≥ 1 − x (1.109)

and, since w(1, t) = 0, this implies that w ≡ 0 for t ≥ 1. The backstepping trans-
formation (that is: Volterra integral transformation) mapping u into w is
w(x, t) = u(x, t) + θ ∫_0^x e^{θ(x−ξ)} u(ξ, t)dξ = T[u(t)](x). (1.110)

We will now verify that the backstepping transformation (1.110) maps system (1.106)
into (1.108).
Firstly, rearranging (1.110) as
u(x, t) = w(x, t) − θ ∫_0^x e^{θ(x−ξ)} u(ξ, t)dξ, (1.111)

and then differentiating with respect to time, we obtain


u_t(x, t) = w_t(x, t) − θ ∫_0^x e^{θ(x−ξ)} u_t(ξ, t)dξ. (1.112)

Inserting the dynamics (1.106a), we find


u_t(x, t) = w_t(x, t) − θ ∫_0^x e^{θ(x−ξ)} u_x(ξ, t)dξ − θ² ∫_0^x e^{θ(x−ξ)}dξ u(0, t). (1.113)

Consider the second term on the right. Using integration by parts, we get
   
θ ∫_0^x e^{θ(x−ξ)} u_x(ξ, t)dξ = θ [e^{θ(x−ξ)} u(ξ, t)]_{ξ=0}^{ξ=x} − θ ∫_0^x (d/dξ)e^{θ(x−ξ)} u(ξ, t)dξ
= θu(x, t) − θe^{θx} u(0, t) + θ² ∫_0^x e^{θ(x−ξ)} u(ξ, t)dξ. (1.114)

The second integral in (1.113) can be evaluated to obtain


θ² ∫_0^x e^{θ(x−ξ)}dξ u(0, t) = −[θe^{θ(x−ξ)}]_{ξ=0}^{ξ=x} u(0, t)
= −θu(0, t) + θe^{θx} u(0, t). (1.115)

Inserting (1.114) and (1.115) into (1.113), we get


u_t(x, t) = w_t(x, t) − θu(x, t) − θ² ∫_0^x e^{θ(x−ξ)} u(ξ, t)dξ + θu(0, t). (1.116)

Similarly, differentiating (1.111) with respect to space, we find using Leibniz’s rule
u_x(x, t) = w_x(x, t) − θu(x, t) − θ² ∫_0^x e^{θ(x−ξ)} u(ξ, t)dξ. (1.117)

Inserting (1.116) and (1.117) into the original dynamics (1.106a), gives

u t (x, t) − u x (x, t) − θu(0, t) = wt (x, t) − wx (x, t) = 0, (1.118)

which proves that w obeys the dynamics (1.108a). Evaluating (1.110) at x = 1, gives
w(1, t) = u(1, t) + θ ∫_0^1 e^{θ(1−ξ)} u(ξ, t)dξ
= U(t) + θ ∫_0^1 e^{θ(1−ξ)} u(ξ, t)dξ. (1.119)

Inserting the control law (1.107) yields the boundary condition (1.108b).
As with all Volterra integral transformations, the transformation (1.110) is invert-
ible. The inverse is as stated in Theorem 1.2, and thus in the form (1.47) with L given
as the solution to the Volterra integral equation (1.53) with K (x, ξ) = −θeθ(x−ξ) . The
inverse is
u(x, t) = w(x, t) − θ ∫_0^x w(ξ, t)dξ = T⁻¹[w(t)](x). (1.120)

This can be verified by again differentiating with respect to time and space, giving

wt (x, t) = u t (x, t) + θw(x, t) − θw(0, t) (1.121)

and
wx (x, t) = u x (x, t) + θw(x, t), (1.122)

respectively, and inserting into (1.108a), giving

w_t(x, t) − w_x(x, t) = u_t(x, t) − u_x(x, t) − θw(0, t) = 0. (1.123)

Using the fact that w(0, t) = u(0, t), we immediately find the dynamics (1.106a).
Evaluating (1.120) at x = 1, we find
u(1, t) = −θ ∫_0^1 w(ξ, t)dξ, (1.124)

where we used the fact that w(1, t) = 0. Inserting the transformation (1.110) gives
u(1, t) = −θ ∫_0^1 u(ξ, t)dξ − θ² ∫_0^1 ∫_0^ξ e^{θ(ξ−s)} u(s, t)ds dξ. (1.125)

Changing the order of integration in the double integral, we find


u(1, t) = −θ ∫_0^1 u(ξ, t)dξ − θ² ∫_0^1 ∫_ξ^1 e^{θ(s−ξ)}ds u(ξ, t)dξ
= −θ ∫_0^1 u(ξ, t)dξ − θ ∫_0^1 e^{θ(1−ξ)} u(ξ, t)dξ + θ ∫_0^1 u(ξ, t)dξ
= −θ ∫_0^1 e^{θ(1−ξ)} u(ξ, t)dξ, (1.126)

which is the control law (1.107). Hence (1.120) is the inverse of (1.110), mapping
target system (1.108) into system (1.106).
From (1.120), it is obvious that since w ≡ 0 for t ≥ 1, we will also have u ≡ 0 for
t ≥ 1. Figure 1.2 illustrates the use of the backstepping transformation and control
law to map system (1.106) into the finite-time convergent stable target system (1.108).
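The finite-time convergence of Example 1.3 can also be observed in simulation. The sketch below is our own illustration (the value of θ, the grid and the first-order upwind scheme are our choices, not the book's): with dt = dx the scheme transports exactly along the characteristics, and the state driven by the control law (1.107) is close to zero for t ≥ 1, up to discretization error.

```python
import numpy as np

# Simulation sketch of the closed loop of Example 1.3 (illustration only).
# Plant: u_t - u_x = theta*u(0,t), with u(1,t) = U(t) from (1.107).
theta = 1.0
N = 800
dx = 1.0 / N
dt = dx                        # CFL number 1: transport is exact
x = np.linspace(0.0, 1.0, N + 1)

def trapz(y):
    # Trapezoidal rule over the full grid.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * dx))

def control(u):
    # U(t) = -theta * int_0^1 exp(theta*(1 - xi)) u(xi, t) dxi
    return -theta * trapz(np.exp(theta * (1.0 - x)) * u)

u = np.ones(N + 1)             # initial condition u0(x) = 1
u[N] = control(u)
for _ in range(int(round(2.0 / dt))):   # simulate until t = 2 > 1
    u0 = u[0]
    u[:-1] = u[1:] + dt * theta * u0    # march along the characteristics
    u[N] = control(u)

final_max = float(np.max(np.abs(u)))    # near zero, as the theory predicts
```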

Example 1.4 The following example uses backstepping to design a controller for
an ordinary differential equation (ODE) system with actuator delay, following the
technique proposed in Krstić and Smyshlyaev (2008b). Consider the simple ODE
with actuator delay

η̇(t) = aη(t) + bU (t − d), η(0) = η0 (1.127)


[Fig. 1.2 The backstepping transformation of Example 1.3: the transformation w(x, t) = u(x, t) + θ ∫_0^x e^{θ(x−ξ)} u(ξ, t)dξ and the control law U(t) = −θ ∫_0^1 e^{θ(1−ξ)} u(ξ, t)dξ map the original system (1.106) into the target system (1.108); the inverse transformation is u(x, t) = w(x, t) − θ ∫_0^x w(ξ, t)dξ.]

for some scalar signal η(t) ∈ R, constants a ∈ R, b ∈ R\{0} and initial condition
η0 ∈ R. The actuator signal U is delayed by a known time d ≥ 0. Consider the
control law
U(t) = dk ∫_0^1 e^{da(1−ξ)} b u(ξ, t)dξ + k e^{da} η(t), (1.128)

where u(x, t) is a distributed actuator state defined over x ∈ [0, 1], t ≥ 0 which
satisfies

u t (x, t) − μu x (x, t) = 0 (1.129a)


u(1, t) = U (t) (1.129b)
u(x, 0) = u 0 (x) (1.129c)

for

μ = d⁻¹ (1.130)

and initial condition u0 ∈ B([0, 1]), and where k ∈ R is a constant chosen such that

a + bk < 0. (1.131)

The control law (1.128) with k satisfying (1.131) guarantees exponential stability
of the origin η = 0.
To prove this, we first represent the time-delay in the ODE system (1.127) using
the PDE (1.129), and obtain

η̇(t) = aη(t) + bu(0, t). (1.132)

We will show that the backstepping transformation


w(x, t) = u(x, t) − dk ∫_0^x e^{da(x−ξ)} b u(ξ, t)dξ − k e^{dax} η(t) (1.133)

and the control law (1.128) map the system consisting of (1.129) and (1.132) into
the target system

η̇(t) = (a + bk)η(t) + bw(0, t) (1.134a)


wt (x, t) − μwx (x, t) = 0 (1.134b)
w(1, t) = 0 (1.134c)
w(x, 0) = w0 (x) (1.134d)

from which it is observed that w ≡ 0 for t ≥ d, after which (1.134a) becomes an


exponentially stable autonomous system. The backstepping transformation (1.133)
is in the form described in Sect. 1.8.3, and is hence invertible with inverse in the
form (1.100) as stated in Theorem 1.4. Differentiating (1.133) with respect to time,
we find
u_t(x, t) = w_t(x, t) + dk ∫_0^x e^{da(x−ξ)} b u_t(ξ, t)dξ + k e^{dax} η̇(t). (1.135)

Inserting the dynamics (1.129a) and (1.132) gives


u_t(x, t) = w_t(x, t) + k ∫_0^x e^{da(x−ξ)} b u_x(ξ, t)dξ + k e^{dax} a η(t) + k e^{dax} b u(0, t). (1.136)

Integrating the second term on the right by parts yields

u_t(x, t) = w_t(x, t) + k e^{da(x−x)} b u(x, t) − k e^{dax} b u(0, t)
+ dk ∫_0^x e^{da(x−ξ)} a b u(ξ, t)dξ + k e^{dax} a η(t) + k e^{dax} b u(0, t). (1.137)

Similarly, differentiating (1.133) with respect to space, we find


u_x(x, t) = w_x(x, t) + dk b u(x, t) + d²k ∫_0^x e^{da(x−ξ)} a b u(ξ, t)dξ + dk e^{dax} a η(t). (1.138)

Inserting (1.137) and (1.138) into (1.129a) gives



0 = u_t(x, t) − μu_x(x, t) = w_t(x, t) + k e^{da(x−x)} b u(x, t) − k e^{dax} b u(0, t)
+ dk ∫_0^x e^{da(x−ξ)} a b u(ξ, t)dξ + k e^{dax} a η(t) + k e^{dax} b u(0, t)
− μw_x(x, t) − k b u(x, t) − dk ∫_0^x e^{da(x−ξ)} a b u(ξ, t)dξ − k e^{dax} a η(t)
= w_t(x, t) − μw_x(x, t), (1.139)

which is the dynamics (1.134b). Moreover, inserting the transformation (1.133) into
(1.132), we obtain

η̇(t) = aη(t) + b(w(0, t) + kη(t)), (1.140)

which gives (1.134a). Evaluating (1.133) at x = 1 yields


w(1, t) = u(1, t) − dk ∫_0^1 e^{da(1−ξ)} b u(ξ, t)dξ − k e^{da} η(t). (1.141)

Choosing u(1, t) = U (t) as (1.128) then gives the boundary condition (1.134c).
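A short simulation illustrates the predictor feedback of Example 1.4. The sketch below is our own (the values of a, b, d, k and the discretization are arbitrary choices): the actuator state is realized as the transport PDE (1.129) on a grid with CFL number 1, so the delay is represented exactly, and η first grows during the dead time and then decays at the rate a + bk.

```python
import numpy as np

# Simulation sketch of Example 1.4 (illustration only; a, b, d, k and the grid
# are our own choices). Plant: eta' = a*eta + b*U(t - d), realized through the
# actuator transport PDE (1.129), with predictor feedback (1.128).
a, b, d = 1.0, 1.0, 0.5
k = -2.0                        # a + b*k = -1 < 0, satisfying (1.131)
N = 200
dx = 1.0 / N
dt = d * dx                     # CFL number 1 for u_t = (1/d)*u_x
x = np.linspace(0.0, 1.0, N + 1)

def control(u, eta):
    # U(t) = d*k*int_0^1 exp(d*a*(1 - xi)) b*u(xi, t) dxi + k*exp(d*a)*eta(t)
    integrand = np.exp(d * a * (1.0 - x)) * b * u
    integral = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dx))
    return d * k * integral + k * np.exp(d * a) * eta

eta = 1.0                       # eta(0)
u = np.zeros(N + 1)             # no input was applied before t = 0
u[N] = control(u, eta)
peak = abs(eta)
for _ in range(int(round(5.0 / dt))):   # simulate until t = 5
    eta = eta + dt * (a * eta + b * u[0])   # (1.132)
    u[:-1] = u[1:]                          # exact transport at CFL 1
    u[N] = control(u, eta)
    peak = max(peak, abs(eta))

final_eta = abs(eta)
```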

1.10 Approaches to Adaptive Control of PDEs

In Smyshlyaev and Krstić (2010a), three main types of control design methods for
adaptive control of PDEs are mentioned. These are
1. Lyapunov design.
2. Identifier-based design.
3. Swapping-based design.
We will briefly explain these next, and demonstrate the three methods by applying
them to adaptive stabilization of the simple ODE system in the scalar state x

ẋ = ax + u (1.142)

where a is an unknown constant and u is the control input. The steps needed for
applying the methods to (1.142) are in principle the same as for the PDE case,
although the details become more involved.

1.10.1 Lyapunov Design

The Lyapunov approach directly addresses the problem of closed-loop stability, with
the controller and adaptive law designed simultaneously using Lyapunov analysis.

Consider the functions


V1(t) = (1/2)x²(t), V2(t) = (1/(2γ1))ã²(t) (1.143)

where ã(t) = a − â(t), â(t) is an estimate of a, and γ1 > 0 is a design gain. Differ-
entiating with respect to time and inserting the dynamics (1.142), we obtain

V̇1(t) = ax²(t) + x(t)u(t) = â(t)x²(t) + ã(t)x²(t) + x(t)u(t) (1.144a)
V̇2(t) = (1/γ1)ã(t)ã̇(t). (1.144b)

Now, choosing the control law

u(t) = −(â(t) + γ2 )x(t) (1.145)

for some design gain γ2 > 0, the adaptive law

â̇(t) = −ã̇(t) = γ1 x²(t), (1.146)

and forming the Lyapunov function candidate

V3 (t) = V1 (t) + V2 (t) (1.147)

we obtain

V̇3 (t) = −γ2 x 2 (t), (1.148)

which proves that V3 is non-increasing, and hence V3 ∈ L∞ and

x, ã ∈ L∞ . (1.149)

Moreover, since V3 is non-increasing and non-negative, V3 must have a limit V3,∞


as t → ∞. Integrating (1.148) from zero to infinity, we thus obtain
V3,∞ − V3(0) = −γ2 ∫_0^∞ x²(s)ds (1.150)

and hence
γ2 ∫_0^∞ x²(s)ds = V3(0) − V3,∞ ≤ V3(0) < ∞ (1.151)

which proves that x ∈ L2 , and therefore V1 ∈ L1 . Lastly, from (1.144) we have



V̇1 (t) ≤ (ã(t) − γ2 )x 2 (t), (1.152)

which proves that V̇1 ∈ L∞ . Since V1 ∈ L1 ∩ L∞ and V̇1 ∈ L∞ , it follows from


Corollary B.1 in Appendix B that V1 → 0, and hence x → 0.
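A short Euler simulation (our own illustration; the "unknown" a, the gains and the horizon are arbitrary choices) shows the Lyapunov design in action: x converges to zero while the estimate â adapts and remains bounded.

```python
# Euler simulation of the Lyapunov design (1.145)-(1.146) for plant (1.142).
# Illustration only: a is "unknown" to the controller and is used solely to
# propagate the plant; all numerical values are arbitrary choices.
a = 2.0
gamma1, gamma2 = 1.0, 1.0
dt, T = 1e-3, 20.0
x, a_hat = 1.0, 0.0            # initial state and parameter estimate

for _ in range(int(T / dt)):
    u = -(a_hat + gamma2) * x          # control law (1.145)
    a_hat += dt * gamma1 * x * x       # adaptive law (1.146)
    x += dt * (a * x + u)              # plant (1.142)
```

Note that â need not converge to the true a; it only needs to grow large enough that the closed-loop gain a − â − γ2 becomes negative, which is exactly what the Lyapunov analysis guarantees.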

1.10.2 Identifier-Based Design

When using identifier-based design, a dynamical system - the identifier - is introduced.
The identifier is usually a copy of the system dynamics with estimated system
parameters instead of the actual parameters, and with certain injection gains added
for the purpose of making the adaptive laws integrable. Boundedness of the identifier
error is then shown, before a control law is designed with the aim of stabilizing the
identifier. As the identifier error is bounded, the original system will be stabilized as
well. Since the control law is designed for the generated state estimates, this method
is based on certainty equivalence (CE). The designed identifier is sometimes termed
an observer, although its purpose is parameter estimation and not state estimation.
For system (1.142), we select the identifier (Anfinsen and Aamo 2018)

x̂̇(t) = −γ1(x̂(t) − x(t)) + â(t)x(t) + u(t) + γ2(x(t) − x̂(t))x²(t) (1.153)

where γ1 and γ2 are positive design gains. The error e(t) = x(t) − x̂(t) satisfies

ė(t) = −γ1 e(t) + ã(t)x(t) − γ2 e(t)x 2 (t). (1.154)

Consider the Lyapunov function candidate

V1(t) = (1/2)e²(t) + (1/(2γ3))ã²(t) (1.155)

for some design gain γ3 > 0. Its time derivative is

V̇1 (t) = −γ1 e2 (t) − γ2 e2 (t)x 2 (t) (1.156)

where we have chosen the adaptive law

â̇(t) = γ3 e(t)x(t). (1.157)

From (1.156) it is clear that V1 is non-increasing, and therefore

e, ã ∈ L∞ . (1.158)

Since V1 is non-increasing and bounded from below, V1 has a limit as t → ∞, and


so (1.156) can be integrated from t = 0 to infinity to obtain

e, ex ∈ L2 . (1.159)

Now, choosing the control law

u(t) = −â(t)x(t) − γ4 x̂(t) (1.160)

for a design gain γ4 > 0, and substituting into (1.153), we get

x̂̇(t) = −γ4 x̂(t) + γ1 e(t) + γ2 e(t)x²(t). (1.161)

Consider the Lyapunov function candidate

V2(t) = (1/2)x̂²(t) + (1/2)e²(t) (1.162)
from which we find using Young’s inequality (Lemma C.3 in Appendix C)

V̇2(t) = −γ4 x̂²(t) + γ1 x̂(t)e(t) + γ2 x̂(t)e(t)x²(t) − γ1 e²(t) + e(t)ã(t)x(t) − γ2 e²(t)x²(t)
≤ −γ4 x̂²(t) + (ρ1γ1/2)x̂²(t) + (γ1/(2ρ1))e²(t) + (γ2ρ2/2)x̂²(t)e²(t)x²(t)
+ (γ2/ρ2)x̂²(t) + (γ2/ρ2)e²(t) − γ1 e²(t) + (ρ3/2)e²(t)
+ (1/ρ3)ã²(t)x̂²(t) + (1/ρ3)ã²(t)e²(t) − γ2 e²(t)x²(t) (1.163)

for arbitrary positive constants ρ1 , ρ2 , ρ3 . Choosing

ρ1 = γ4/(3γ1), ρ2 = 6γ2/γ4, ρ3 = 6a0²/γ4, (1.164)

where a0 upper bounds |ã|, which exists since ã ∈ L∞ by (1.158), and recalling that e, ex ∈ L2 , we obtain

V̇2 (t) ≤ −cV2 (t) + l1 (t)V2 (t) + l2 (t) (1.165)

where c = min{γ4 , 2γ1 } is a positive constant and


 
l1(t) = (6γ2²/γ4) e²(t)x²(t), l2(t) = (3γ1²/(2γ4) + γ4/3 + 3a0²/γ4) e²(t) (1.166)

are integrable functions (i.e. l1 , l2 ∈ L1 ). It then follows from Lemma B.3 in Appendix B
that

V2 ∈ L1 ∩ L∞ , V2 → 0 (1.167)

and hence

x̂, e ∈ L2 ∩ L∞, x̂, e → 0 (1.168)

immediately follows. From the definition e = x − x̂,

x ∈ L2 ∩ L∞, x → 0 (1.169)

follows.
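The identifier-based design can be simulated in the same way as the Lyapunov design. The sketch below is our own illustration (the "unknown" a and all gains are arbitrary choices, with γ4 large enough to give a comfortable stability margin): the identifier error e and the state x both converge to zero.

```python
# Euler simulation of the identifier-based design (1.153), (1.157), (1.160)
# for plant (1.142). Illustration only; all numerical values are arbitrary.
a = 1.0                         # "unknown" plant parameter
g1, g2, g3, g4 = 1.0, 1.0, 1.0, 2.0
dt, T = 1e-3, 20.0
x, x_hat, a_hat = 1.0, 0.0, 0.0

for _ in range(int(T / dt)):
    e = x - x_hat
    u = -a_hat * x - g4 * x_hat                       # control law (1.160)
    dx_hat = g1 * e + a_hat * x + u + g2 * e * x * x  # identifier (1.153)
    da_hat = g3 * e * x                               # adaptive law (1.157)
    x += dt * (a * x + u)                             # plant (1.142)
    x_hat += dt * dx_hat
    a_hat += dt * da_hat
```

The boundedness of â observed here is consistent with the analysis: V1 in (1.155) is non-increasing, so ã² can never exceed its initial value e²(0) + ã²(0).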

1.10.3 Swapping-Based Design

When using swapping design, filters are carefully designed so that they can be used
to express the system states as linear, static combinations of the filters, the unknown
parameters and some error terms. The error terms are then shown to converge to zero.
From the static parameterization of the system states, standard parameter identifi-
cation laws can be used to estimate the unknown parameters. Then, by substituting
the system parameters in the static parameterization with their respective estimates,
adaptive estimates of the system states can be generated. A controller is designed
for stabilization of the adaptive state estimates, meaning that this method, like the
identifier-based method, is based on the certainty equivalence principle. The number
of filters required when using this method typically equals the number of unknown
parameters plus one. Consider the following swapping filters

ṗ(t) = −γ1 p(t) + x(t), p(0) = p0 (1.170a)


η̇(t) = −γ1 (η(t) − x(t)) + u(t), η(0) = η0 , (1.170b)

for some positive design constant γ1 and some initial conditions p0 and η0 . A non-
adaptive estimate x̄ of the state x in (1.142) can then be generated as

x̄(t) = ap(t) + η(t). (1.171)

The non-adaptive state estimation error

e(t) = x(t) − x̄(t) (1.172)

is found to satisfy

ė(t) = ẋ(t) − x̄̇(t) = ẋ(t) − a ṗ(t) − η̇(t)
= ax(t) + u(t) + aγ1 p(t) − ax(t) + γ1(η(t) − x(t)) − u(t)
= γ1(a p(t) + η(t) − x(t))
= −γ1 e(t) (1.173)

which is an exponentially stable system, meaning that

e ∈ L2 ∩ L∞, e → 0. (1.174)

This also means that

x(t) = ap(t) + η(t) + e(t) (1.175)

with e exponentially converging to zero. From the static relationship (1.175) with
e converging to zero, commonly referred to as the linear parametric model, a wide
range of well-known adaptive laws can be applied, for instance those derived in
Ioannou and Sun (1995). We will here use the gradient law with normalization,
which takes the form

â̇(t) = γ2 ê(t)p(t)/(1 + p²(t)), (1.176)

for some positive design gain γ2 , where x̂ is an adaptive estimate of the state x
generated by simply substituting a in the non-adaptive estimate (1.171) with its
estimate â, that is

x̂(t) = â(t) p(t) + η(t), (1.177)

and ê is the prediction error defined as

ê(t) = x(t) − x̂(t). (1.178)

Consider now the Lyapunov function candidate

V1(t) = (1/(2γ1))e²(t) + (1/(2γ2))ã²(t), (1.179)

where ã(t) = a − â(t) is the estimation error. By differentiating, inserting the
dynamics (1.173) and the adaptive law (1.176), and recalling that ã̇(t) = −â̇(t), we find

V̇1(t) = (1/γ1)e(t)ė(t) + (1/γ2)ã(t)ã̇(t) = −e²(t) − ê(t)ã(t)p(t)/(1 + p²(t)). (1.180)

From the relationships (1.171), (1.172), (1.177) and (1.178), we have



ê(t) − e(t) = ã(t) p(t) (1.181)

and inserting this, we obtain

V̇1(t) = −e²(t) − ê²(t)/(1 + p²(t)) + ê(t)e(t)/(1 + p²(t)). (1.182)

Applying Young’s inequality to the last term, we get

V̇1(t) ≤ −e²(t) − ê²(t)/(1 + p²(t)) + (1/2)ê²(t)/(1 + p²(t)) + (1/2)e²(t)/(1 + p²(t)) (1.183)

and hence

V̇1(t) ≤ −(1/2)e²(t) − (1/2)ê²(t)/(1 + p²(t)), (1.184)

which proves that V1 is non-increasing and thus bounded, from which

e, ã ∈ L∞ (1.185)

follows. Since V1 is non-negative and non-increasing, it must have a limit as t → ∞,


and it follows from (1.184) that


e, ê/√(1 + p²) ∈ L2. (1.186)

Moreover, using the relationship (1.181), we have

|ê(t)|/√(1 + p²(t)) = |e(t) + ã(t)p(t)|/√(1 + p²(t)) ≤ |e(t)| + |ã(t)| (1.187)

and since e, ã ∈ L∞ , it follows that


ê/√(1 + p²) ∈ L∞. (1.188)

Finally, from the adaptive law (1.176), we have

|â̇(t)| = γ2 |ê(t)p(t)|/(1 + p²(t)) = γ2 (|ê(t)|/√(1 + p²(t))) (|p(t)|/√(1 + p²(t))) ≤ γ2 |ê(t)|/√(1 + p²(t)). (1.189)

Since ê/√(1 + p²) ∈ L2 ∩ L∞, it follows that

â˙ ∈ L2 ∩ L∞ . (1.190)

Next, the dynamics of (1.177) can straightforwardly be shown to be

x̂̇(t) = â(t)x(t) + u(t) + γ1 ê(t) + â̇(t)p(t). (1.191)

Choosing the control law

u(t) = −â(t)x(t) − γ3 x̂(t) (1.192)

for some positive design gain γ3 , we obtain the closed-loop dynamics

x̂̇(t) = −γ3 x̂(t) + γ1 ê(t) + â̇(t)p(t). (1.193)

Consider now the functions


V2(t) = (1/2)x̂²(t), V3(t) = (1/2)p²(t), (1.194)
from which one finds

V̇2(t) = −γ3 x̂²(t) + γ1 x̂(t)ê(t) + x̂(t)â̇(t)p(t) (1.195a)
V̇3(t) = −γ1 p²(t) + p(t)x(t). (1.195b)

Using Young’s inequality and the relationship x(t) = x̂(t) + ê(t), we can bound
these as

V̇2(t) ≤ −(1/2)γ3 x̂²(t) + (γ1²/γ3)ê²(t) + (1/γ3)â̇²(t)p²(t) (1.196a)
V̇3(t) ≤ −(1/2)γ1 p²(t) + (1/γ1)x̂²(t) + (1/γ1)ê²(t). (1.196b)

Forming the Lyapunov function candidate

V4 (t) = 4V2 (t) + γ1 γ3 V3 (t) (1.197)

we obtain
 2 
V̇4(t) ≤ −γ3 x̂²(t) − (1/2)γ1²γ3 p²(t) + 4(γ1²/γ3 + γ3)ê²(t) + (4/γ3)â̇²(t)p²(t). (1.198)

Using the identity

ê²(t) = (1 + p²(t)) · ê²(t)/(1 + p²(t)) (1.199)

gives
 2 
V̇4(t) ≤ −γ3 x̂²(t) − (1/2)γ1²γ3 p²(t) + [4(γ1²/γ3 + γ3) ê²(t)/(1 + p²(t)) + (4/γ3)â̇²(t)] p²(t)
+ 4(γ1²/γ3 + γ3) ê²(t)/(1 + p²(t)) (1.200)

which can be written as

V̇4 (t) ≤ −cV4 (t) + l1 (t)V4 (t) + l2 (t) (1.201)

for the positive constant


 
c = min{(1/2)γ3, γ1} (1.202)

and the functions


   2 
2 4 ˙2 γ1 ê2 (t)
l1 (t) = l2 (t) + â (t) , l2 (t) = 4 + γ3 , (1.203)
γ1 γ3 γ3 γ3 1 + p 2 (t)

which are bounded and integrable since â̇, ê/√(1 + p²) ∈ L2 ∩ L∞, and γ1 and γ3 are positive constants. It then follows from Lemma B.3 in Appendix B that V4 ∈ L1 ∩ L∞ and V4 → 0, resulting in

x̂, p ∈ L2 ∩ L∞ , x̂, p → 0. (1.204)

The relationship (1.177) now gives

η ∈ L2 ∩ L∞ , η → 0, (1.205)

while (1.175) with (1.174) finally gives

x ∈ L2 ∩ L∞ , x → 0. (1.206)
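The complete swapping scheme can be simulated as follows. The sketch is our own illustration (the "unknown" a, all gains and the horizon are arbitrary choices): the filters, the normalized gradient law and the certainty-equivalence controller together drive x, p and the estimation error to zero.

```python
# Euler simulation of the swapping design: filters (1.170), gradient law
# (1.176), estimate (1.177) and control law (1.192) for plant (1.142).
# Illustration only; all numerical values are arbitrary choices.
a = 1.0                          # "unknown" plant parameter
g1, g2, g3 = 1.0, 1.0, 2.0
dt, T = 1e-3, 30.0
x, p, eta, a_hat = 1.0, 0.0, 0.0, 0.0

for _ in range(int(T / dt)):
    x_hat = a_hat * p + eta                   # adaptive estimate (1.177)
    e_hat = x - x_hat                         # prediction error (1.178)
    u = -a_hat * x - g3 * x_hat               # control law (1.192)
    dx = a * x + u                            # plant (1.142)
    dp = -g1 * p + x                          # filter (1.170a)
    deta = -g1 * (eta - x) + u                # filter (1.170b)
    da = g2 * e_hat * p / (1.0 + p * p)       # normalized gradient (1.176)
    x += dt * dx
    p += dt * dp
    eta += dt * deta
    a_hat += dt * da
```

Note the role of the normalization 1 + p²: the update rate of â stays bounded even during transients where x and p are large, which is exactly the property emphasized in the discussion below.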

1.10.4 Discussion of the Three Methods

From applying the three methods for adaptive stabilization to the simple ODE (1.142),
it is evident that the complexity of the stability proof increases from the Lyapunov
method to the identifier-based method, with the swapping method involving the
most complicated analysis. The dynamical order
also differs for the three methods, with the Lyapunov method having the lowest order

as it only employs a single ODE for the update law. The identifier method, on the
other hand, involves a copy of the system dynamics in addition to the ODE for the
update law. The swapping method has the highest order, as it employs a number of
filters equal to the number of unknowns plus one in addition to the adaptive law.
A clear benefit of the swapping method is that it brings the system to a parametric
form which is linear in the uncertain parameter. This allows a range of already
established adaptive laws to be used, for instance the gradient law or the least squares
method. It also allows for normalization, so that the update laws are bounded,
regardless of the boundedness properties of the system states. Normalization can be
incorporated into the Lyapunov-based update law by choosing the Lyapunov function
V1 in (1.143) differently (for instance logarithmically); doing so, however, adds other
complexities to the proof. The identifier method does not have this property. The
property of having bounded update laws is even more important for PDEs, where
there is a distinction between boundedness in L 2 and pointwise boundedness. An
update law that employs for instance boundary measurements may fail to be bounded
even though the closed loop system is bounded in L 2 .
Although the Lyapunov method is simple and straightforward to use for designing
an adaptive stabilizing control law for the ODE (1.142), making the other two
methods seem overly complicated in comparison, this is not the case for PDEs. This can
for instance be seen from the derivation of an adaptive controller for a scalar linear
hyperbolic PDE with an uncertain spatially varying interior parameter derived in Xu
and Liu (2016) using the Lyapunov method. Although the resulting control law is
simple and of a low dynamical order, the stability proof is not and constitutes the
majority of the 16-page paper (Xu and Liu 2016). Due to this increased complexity,
the Lyapunov method is seldom used for adaptive control of linear hyperbolic PDE
systems, with the result in Xu and Liu (2016) being, at the time of writing this
book, the only result using this method for adaptive stabilization of linear hyperbolic
PDEs. The identifier-based and swapping-based methods, on the other hand, extend
to PDEs in a more straightforward manner. However, the identifier in the
identifier-based method and the filters in the swapping-based method are themselves
PDEs, making both types of controllers infinite-dimensional.

References

Aamo OM (2013) Disturbance rejection in 2 × 2 linear hyperbolic systems. IEEE Trans Autom
Control 58(5):1095–1106
Amin S, Hante FM, Bayen AM (2008) On stability of switched linear hyperbolic conservation laws
with reflecting boundaries. In: Hybrid systems computation and control. Springer, pp 602–605
Anfinsen H, Aamo OM (2018) A note on establishing convergence in adaptive systems. Automatica
93:545–549
Anfinsen H, Aamo OM (2016) Tracking in minimum time in general linear hyperbolic PDEs using
collocated sensing and control. In: 2nd IFAC workshop on control of systems governed by partial
differential equations. Bertinoro, Italy

Auriol J, Di Meglio F (2016) Minimum time control of heterodirectional linear coupled hyperbolic
PDEs. Automatica 71:300–307
Bernard P, Krstić M (2014) Adaptive output-feedback stabilization of non-local hyperbolic PDEs.
Automatica 50:2692–2699
Chen S, Vazquez R, Krstić M (2017) Stabilization of an underactuated coupled transport-wave PDE
system. In: American control conference. Seattle, WA, USA
Colton D (1977) The solution of initial-boundary value problems for parabolic equations by the
method of integral operators. J Differ Equ. 26:181–190
Coron J-M, d’Andréa Novel B, Bastin G (2007) A strict Lyapunov function for boundary control
of hyperbolic systems of conservation laws. IEEE Trans Autom Control 52(1):2–11
Coron J-M, Vazquez R, Krstić M, Bastin G (2013) Local exponential H 2 stabilization of a 2 × 2
quasilinear hyperbolic system using backstepping. SIAM J Control Optim 51(3):2005–2035
Curró C, Fusco D, Manganaro N (2011) A reduction procedure for generalized Riemann problems
with application to nonlinear transmission lines. J Phys A: Math Theor 44(33):335205
Di Meglio F (2011) Dynamics and control of slugging in oil production. Ph.D. thesis, MINES
ParisTech
Di Meglio F, Vazquez R, Krstić M (2013) Stabilization of a system of n + 1 coupled first-order
hyperbolic linear PDEs with a single boundary input. IEEE Trans Autom Control 58(12):3097–
3111
Diagne A, Diagne M, Tang S, Krstić M (2017) Backstepping stabilization of the linearized Saint-
Venant-Exner model. Automatica 76:345–354
Guo B-Z, Jin F-F (2015) Output feedback stabilization for one-dimensional wave equation subject
to boundary disturbance. IEEE Trans Autom Control 60(3):824–830
Greenberg JM, Tsien LT (1984) The effect of boundary damping for the quasilinear wave equation.
J Differ Equ 52(1):66–75
Hu L, Di Meglio F, Vazquez R, Krstić M (2016) Control of homodirectional and general heterodi-
rectional linear coupled hyperbolic PDEs. IEEE Trans Autom Control 61(11):3301–3314
Ioannou P, Sun J (1995) Robust adaptive control. Prentice-Hall Inc, Upper Saddle River, NJ, USA
Krstić M, Smyshlyaev A (2008a) Adaptive boundary control for unstable parabolic PDEs - Part I:
Lyapunov design. IEEE Trans Autom Control 53(7):1575–1591
Krstić M, Smyshlyaev A (2008b) Backstepping boundary control for first-order hyperbolic PDEs
and application to systems with actuator and sensor delays. Syst Control Lett 57(9):750–758
Krstić M, Smyshlyaev A (2008c) Boundary control of PDEs: a course on backstepping designs.
Soc Ind Appl Math
Landet IS, Pavlov A, Aamo OM (2013) Modeling and control of heave-induced pressure fluctuations
in managed pressure drilling. IEEE Trans Control Syst Technol 21(4):1340–1351
Litrico X, Fromion V (2006) Boundary control of hyperbolic conservation laws with a frequency
domain approach. In: 45th IEEE conference on decision and control. San Diego, CA, USA
Liu W (2003) Boundary feedback stabilization of an unstable heat equation. SIAM J Control Optim
42:1033–1043
Seidman TI (1984) Two results on exact boundary control of parabolic equations. Appl Math Optim
11:891–906
Smyshlyaev A, Krstić M (2004) Closed form boundary state feedbacks for a class of 1-D partial
integro-differential equations. IEEE Trans Autom Control 49:2185–2202
Smyshlyaev A, Krstić M (2005) Backstepping observers for a class of parabolic PDEs. Syst Control
Lett 54:613–625
Smyshlyaev A, Krstić M (2006) Output-feedback adaptive control for parabolic PDEs with spatially
varying coefficients. In: 45th IEEE conference on decision and control. San Diego, CA, USA
Smyshlyaev A, Krstić M (2007a) Adaptive boundary control for unstable parabolic PDEs - Part II:
estimation-based designs. Automatica 43:1543–1556
Smyshlyaev A, Krstić M (2007b) Adaptive boundary control for unstable parabolic PDEs - Part III:
output feedback examples with swapping identifiers. Automatica 43:1557–1564

Smyshlyaev A, Krstić M (2010) Adaptive control of parabolic PDEs. Princeton University Press,
Princeton
Vazquez R, Krstić M, Coron J-M (2011) Backstepping boundary stabilization and state estimation
of a 2 × 2 linear hyperbolic system. In: 2011 50th IEEE conference on decision and control and
European control conference (CDC-ECC) December. pp 4937–4942
Wollkind DJ (1986) Applications of linear hyperbolic partial differential equations: predator-prey
systems and gravitational instability of nebulae. Math Model 7:413–428
Xu Z, Liu Y (2016) Adaptive boundary stabilization for first-order hyperbolic PDEs with unknown
spatially varying parameter. Int J Robust Nonlinear Control 26(3):613–628
Xu C-Z, Sallet G (2010) Exponential stability and transfer functions of processes governed by
symmetric hyperbolic systems. ESAIM: Control Optim Calc Var 7:421–442
Part II
Scalar Systems
Chapter 2
Introduction

2.1 System Equations

This part considers systems in the form (1.20), consisting of a single first order linear
hyperbolic PIDE, with local and non-local reaction terms, and with scaled actuation
and anti-collocated measurement. These can be stated as

u_t(x, t) − λ(x)u_x(x, t) = f(x)u(x, t) + g(x)u(0, t) + ∫_0^x h(x, ξ)u(ξ, t)dξ (2.1a)
u(1, t) = k1 U (t) (2.1b)
u(x, 0) = u 0 (x) (2.1c)
y(t) = k2 u(0, t) (2.1d)

for system parameters satisfying

λ ∈ C¹([0, 1]), λ(x) > 0, ∀x ∈ [0, 1] (2.2a)
f, g ∈ C⁰([0, 1]), h ∈ C⁰(T), k1, k2 ∈ R\{0}, (2.2b)

where T is defined in (1.1a), and initial condition u 0 satisfying

u 0 ∈ B([0, 1]). (2.3)

U(t) is an actuation signal, while y(t) is a boundary measurement. Systems in the
form (2.1) are in fact partial integro-differential equations (PIDEs) due to the
non-local term in h, but are often referred to as PDEs. We note that the systems in
Examples 1.3 and 1.4 are both of this form.
PDEs in the form (2.1) are often obtained from models consisting of coupled
PDE dynamics that incorporate at least one transport process, after various changes
© Springer Nature Switzerland AG 2019 45
H. Anfinsen and O. M. Aamo, Adaptive Control of Hyperbolic PDEs,
Communications and Control Engineering,
https://doi.org/10.1007/978-3-030-05879-1_2

of variables, linearization around an equilibrium profile, rescaling and a singular


perturbation reduction relative to all the PDEs except the slowest one. Examples are
road traffic flow (Haberman 2004, p. 562) or a linearized Korteweg de Vries equation
(Korteweg and de Vries 1895), (Krstić and Smyshlyaev 2008).
System (2.1) is the general type of scalar linear hyperbolic PDEs considered in
this book. However, for pedagogical and illustrative purposes we will derive much of
the theory for a simplified, yet potentially unstable class of scalar linear hyperbolic
PDEs. This simplified system is in the form

vt (x, t) − μvx (x, t) = θ(x)v(0, t) (2.4a)


v(1, t) = ρU (t) (2.4b)
v(x, 0) = v0 (x) (2.4c)
y(t) = v(0, t) (2.4d)

for the system parameters

μ ∈ R, μ > 0, ρ ∈ R\{0}, θ ∈ C⁰([0, 1]), (2.5)

and initial condition

v0 ∈ B([0, 1]). (2.6)

The signal y(t) is the measurement.
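System (2.4) is potentially unstable even though it is a pure transport equation with a boundary-value source. For μ = 1, U = 0 and constant θ(x) ≡ θ0, integrating along characteristics shows that v(0, t) = θ0 ∫_{t−1}^{t} v(0, s)ds for t ≥ 1, which admits exponentially growing solutions for θ0 > 1 and contracts for θ0 < 1. The open-loop simulation below (our own illustration; the scheme and all values are arbitrary choices) reproduces both regimes.

```python
import numpy as np

# Open-loop behaviour of (2.4) with U = 0, mu = 1 and constant theta(x) =
# theta0 (illustration only; discretization and values are our choices).
def simulate(theta0, T=4.0, N=400):
    dx = 1.0 / N
    dt = dx                      # CFL number 1: transport is exact
    v = np.ones(N + 1)           # initial condition v0(x) = 1
    v[N] = 0.0                   # boundary condition v(1, t) = U(t) = 0
    for _ in range(int(round(T / dt))):
        v0 = v[0]
        v[:-1] = v[1:] + dt * theta0 * v0
        v[N] = 0.0
    return float(np.max(np.abs(v)))

unstable = simulate(3.0)   # theta0 > 1: exponential growth
stable = simulate(0.5)     # theta0 < 1: decay
```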


It turns out that systems (2.1) and (2.4) are in fact equivalent following an invert-
ible transformation, scalings of the state and actuation signal and remapping of the
domain. This is formally stated in the following lemma.
Lemma 2.1 Systems (2.1) and (2.4) are equivalent, with θ, ρ and μ being continuous
functions of λ, f, g, h, k1 , k2 . Specifically, μ is given by
μ⁻¹ = ∫_0^1 dγ/λ(γ) (2.7)

with μ−1 being the propagation time from x = 1 to x = 0.


The proof of this lemma is given in Sect. 2.2. The significance of Lemma 2.1 is that
it suffices to derive controllers and observers for system (2.4), and the result will be
valid for the (seemingly) more general system (2.1). In other words, (2.1) and (2.4)
are two equivalent realizations of the input-output mapping U (t) → y(t).
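The interpretation of (2.7) as a propagation time can be checked numerically. The sketch below is our own illustration (λ(x) = 1 + x is an arbitrary choice): a characteristic of (2.1) leaving x = 1 moves according to dx/dt = −λ(x), and its arrival time at x = 0 should equal μ⁻¹ = ∫_0^1 dγ/λ(γ), which is ln 2 for λ(x) = 1 + x.

```python
import numpy as np

# Numerical check of (2.7) for lambda(x) = 1 + x (illustration only).
lam = lambda z: 1.0 + z

# Integrate the characteristic dx/dt = -lambda(x) from x = 1 down to x = 0
# with forward Euler and measure the travel time.
dt = 1e-5
xc, t_travel = 1.0, 0.0
while xc > 0.0:
    xc -= dt * lam(xc)
    t_travel += dt

# Quadrature of 1/mu = int_0^1 dgamma / lambda(gamma).
grid = np.linspace(0.0, 1.0, 2001)
f = 1.0 / lam(grid)
inv_mu = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(grid)))
```

Both quantities agree with ln 2 ≈ 0.6931, confirming that μ⁻¹ in (2.7) is the propagation time from x = 1 to x = 0.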
In Chap. 3, we assume that all parameters of (2.4) are known, and derive
(non-adaptive) state-feedback controllers and observers, and also combine the two
into output-feedback stabilizing controllers. Finally, output tracking controllers are
derived, for which the measured output tracks some arbitrary, bounded reference
signal r (t), while other signals are bounded.

In Chap. 4, we design the first adaptive control law of this book. It is based on an
identifier for estimation of the parameter θ in system (2.4), which is then combined
with an adaptive control law to stabilize the system. The resulting control law is state-
feedback, requiring measurements of the full state v(x, t) for all x ∈ [0, 1]. This is
relaxed in Chap. 5 where swapping design is used to solve the adaptive stabilization
problem using output-feedback, requiring the boundary measurement (2.4d), only.
In Part II’s last chapter, Chap. 6, we solve a model reference adaptive control
(MRAC) problem using output feedback. The goal is to make the measured signal
y(t) track a signal generated from a simple reference model from minimal knowledge
of system parameters. The problem of regulating the state to zero is covered by the
MRAC problem, by simply setting the reference signal to zero.

2.2 Proof of Lemma 2.1

First, we rescale the spatial domain to remove the spatially varying transport speed.
We will show that the mapping

ū(x, t) = u(ℓ⁻¹(x), t), u(x, t) = ū(ℓ(x), t) (2.8)

where ℓ is defined as

ℓ(x) = μ ∫_0^x dγ/λ(γ), (2.9)

maps (2.1) into

ū_t(x, t) − μū_x(x, t) = f̄(x)ū(x, t) + ḡ(x)ū(0, t) + ∫_0^x h̄(x, ξ)ū(ξ, t)dξ (2.10a)
ū(1, t) = k1 U (t) (2.10b)
ū(x, 0) = ū 0 (x) (2.10c)
y(t) = k2 ū(0, t) (2.10d)

where

f¯(x) = f (−1 (x)), ḡ(x) = g(−1 (x)) (2.11a)


−1
λ( (ξ))
h̄(x, ξ) = h(−1 (x), −1 (ξ)), ū 0 (x) = u 0 (−1 (x)). (2.11b)
μ

We note from (2.9) that  is strictly increasing, and hence invertible, and that
μ
 (x) = , (1) = 1, (0) = 0. (2.12)
λ(x)

Differentiating (2.8) with respect to time and space, respectively, we find

ut(x, t) = ūt(Ψ(x), t)  (2.13)

and

ux(x, t) = Ψ′(x)ūx(Ψ(x), t) = (μ/λ(x))ūx(Ψ(x), t).  (2.14)

Inserting (2.8), (2.13) and (2.14) into (2.1a) gives

0 = ut(x, t) − λ(x)ux(x, t) − f(x)u(x, t) − g(x)u(0, t) − ∫₀ˣ h(x, ξ)u(ξ, t)dξ
  = ūt(Ψ(x), t) − μūx(Ψ(x), t) − f(x)ū(Ψ(x), t) − g(x)ū(0, t)
    − ∫₀ˣ h(x, ξ)ū(Ψ(ξ), t)dξ.  (2.15)

A remapping of the domain x → Ψ⁻¹(x) and a substitution ξ → Ψ(ξ) in the integral
gives (2.10a) with coefficients (2.11). The boundary condition, initial condition and
measurement (2.10b)–(2.10d) follow immediately from insertion and using (2.12).
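The properties (2.12) are easy to check numerically. A minimal sketch (λ(x) = 1 + x and the resulting μ are assumed example values, not from the book; Ψ denotes the coordinate map of (2.9)):

```python
import numpy as np

# Sanity check of (2.9) and (2.12): with mu = (int_0^1 dgamma/lambda(gamma))^-1,
# the rescaled coordinate Psi satisfies Psi(0) = 0 and Psi(1) = 1.
def trap(y, dx):
    # composite trapezoidal rule on a uniform grid
    return dx * (np.sum(y) - 0.5 * (y[0] + y[-1]))

lam = lambda s: 1.0 + s          # assumed example transport speed
n = 100001
g = np.linspace(0.0, 1.0, n)
mu = 1.0 / trap(1.0 / lam(g), g[1] - g[0])    # = 1/ln(2) for this lambda

def Psi(x):
    # Psi(x) = mu * int_0^x dgamma / lambda(gamma)
    if x <= 0.0:
        return 0.0
    s = np.linspace(0.0, x, n)
    return mu * trap(1.0 / lam(s), s[1] - s[0])

print(mu, Psi(0.5), Psi(1.0))
```

For this λ, the integral is ln(1 + x), so Ψ(x) = ln(1 + x)/ln 2, confirming the monotonicity and endpoint conditions in (2.12).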
Next, we remove the source term in f̄ and scale the state so that the constant k₂
in the measurement (2.10d) is removed. We will show that the mapping

ǔ(x, t) = k₂ϕ(x)ū(x, t),  ū(x, t) = ǔ(x, t)/(k₂ϕ(x))  (2.16)

where ϕ is defined as

ϕ(x) = exp(μ⁻¹ ∫₀ˣ f(Ψ⁻¹(ξ))dξ),  (2.17)

maps (2.10) into

ǔt(x, t) − μǔx(x, t) = ǧ(x)ǔ(0, t) + ∫₀ˣ ȟ(x, ξ)ǔ(ξ, t)dξ  (2.18a)
ǔ(1, t) = ρU(t)  (2.18b)
ǔ(x, 0) = ǔ₀(x)  (2.18c)
y(t) = ǔ(0, t)  (2.18d)

where

ǧ(x) = ḡ(x)ϕ(x),  ȟ(x, ξ) = (ϕ(x)/ϕ(ξ))h̄(x, ξ)  (2.19a)
ρ = k₁k₂ϕ(1),  ǔ₀(x) = k₂ϕ(x)ū₀(x).  (2.19b)

This can be seen from differentiating (2.16) with respect to time and space, respectively, to find

ūt(x, t) = (1/(k₂ϕ(x))) ǔt(x, t)  (2.20a)
ūx(x, t) = (1/(k₂ϕ(x))) (ǔx(x, t) − μ⁻¹ f̄(x)ǔ(x, t)).  (2.20b)

Inserting (2.20) into (2.10), we obtain (2.18) with coefficients (2.19). Inserting t = 0
into (2.16) gives ǔ₀ from ū₀.
Consider now the backstepping transformation

v̌(x, t) = ǔ(x, t) − ∫₀ˣ Ω(x, ξ)ǔ(ξ, t)dξ  (2.21)

where Ω satisfies the PDE

μΩx(x, ξ) + μΩξ(x, ξ) = ∫_ξˣ Ω(x, s)ȟ(s, ξ)ds − ȟ(x, ξ)  (2.22a)
μΩ(x, 0) = ∫₀ˣ Ω(x, ξ)ǧ(ξ)dξ − ǧ(x).  (2.22b)

The existence of a unique solution Ω to (2.22) is ensured by Lemma D.1 in Appendix
D. By Theorem 1.2, the inverse of (2.21) is

ǔ(x, t) = v̌(x, t) + ∫₀ˣ Φ(x, ξ)v̌(ξ, t)dξ  (2.23)

where Φ satisfies the Volterra integral equation

Φ(x, ξ) = Ω(x, ξ) + ∫_ξˣ Φ(x, s)Ω(s, ξ)ds.  (2.24)

We will show that the backstepping transformation (2.21) maps system (2.18) into
a pure transport PDE

v̌t(x, t) − μv̌x(x, t) = 0  (2.25a)
v̌(1, t) = ρU(t) + ∫₀¹ σ(ξ)v̌(ξ, t)dξ  (2.25b)
v̌(x, 0) = v̌₀(x)  (2.25c)
y(t) = v̌(0, t)  (2.25d)

where

σ(ξ) = −Φ(1, ξ),  v̌₀(x) = ǔ₀(x) − ∫₀ˣ Ω(x, ξ)ǔ₀(ξ)dξ.  (2.26)

From differentiating (2.21) with respect to time and space, and inserting the result
into (2.18a) we find

0 = ǔt(x, t) − μǔx(x, t) − ǧ(x)ǔ(0, t) − ∫₀ˣ ȟ(x, ξ)ǔ(ξ, t)dξ
  = v̌t(x, t) − μv̌x(x, t)
    − (μΩ(x, 0) − ∫₀ˣ Ω(x, ξ)ǧ(ξ)dξ + ǧ(x)) ǔ(0, t)
    − ∫₀ˣ (μΩx(x, ξ) + μΩξ(x, ξ) + ȟ(x, ξ) − ∫_ξˣ Ω(x, s)ȟ(s, ξ)ds) ǔ(ξ, t)dξ.  (2.27)

Using (2.22), we obtain (2.25a). Evaluating (2.23) at x = 1 gives

v̌(1, t) = ǔ(1, t) − ∫₀¹ Φ(1, ξ)v̌(ξ, t)dξ  (2.28)

from which we find (2.25b) using the value of σ given in (2.26) and the boundary
condition (2.18b). The measurement (2.25d) follows directly from inserting x = 0
into (2.21) and using (2.18d). The value of v̌₀ given in (2.26) follows from inserting
t = 0 into (2.21).
Lastly, we show that the backstepping transformation

v(x, t) = v̌(x, t) − ∫₀ˣ σ(1 − x + ξ)v̌(ξ, t)dξ  (2.29)

maps system (2.25) into (2.4) with

θ(x) = μσ(1 − x),  v₀(x) = v̌₀(x) − ∫₀ˣ σ(1 − x + ξ)v̌₀(ξ)dξ.  (2.30)

From differentiating (2.29) with respect to time and space, respectively, and inserting the result into (2.25a), we obtain

0 = vt(x, t) − μvx(x, t) − μσ(1 − x)v̌(0, t)  (2.31)

which yields the dynamics (2.4a) provided θ is chosen according to (2.30). Moreover,
by inserting x = 1 into (2.29) and using the boundary condition (2.25b), we find

v(1, t) = ρU(t) + ∫₀¹ σ(ξ)v̌(ξ, t)dξ − ∫₀¹ σ(ξ)v̌(ξ, t)dξ = ρU(t)  (2.32)

which gives (2.4b). Inserting t = 0 into (2.29) gives the expression (2.30) for v₀. □

Remark 2.1 The proof of Lemma 2.1 could have been shortened by choosing the
boundary conditions of the kernel Ω in (2.21) differently, obtaining (2.4) directly from
(2.21). However, we have chosen to include the intermediate pure transport system
(2.25), as it will be used for designing a model reference adaptive controller in
Chap. 6.

References

Haberman R (2004) Applied partial differential equations: with Fourier series and boundary value
problems. Pearson Education, New Jersey
Korteweg D, de Vries G (1895) On the change of form of long waves advancing in a rectangular
canal and on a new type of long stationary waves. Philos Mag 39(240):422–443
Krstić M, Smyshlyaev A (2008) Backstepping boundary control for first-order hyperbolic PDEs
and application to systems with actuator and sensor delays. Syst Control Lett 57(9):750–758
Chapter 3
Non-adaptive Schemes

3.1 Introduction

This chapter contains non-adaptive state-feedback controller and boundary observer
designs for system (2.4), which we restate here for the convenience of the reader:

vt(x, t) − μvx(x, t) = θ(x)v(0, t)  (3.1a)
v(1, t) = ρU(t)  (3.1b)
v(x, 0) = v₀(x)  (3.1c)
y(t) = v(0, t)  (3.1d)

where

μ ∈ R, μ > 0,  ρ ∈ R\{0},  θ ∈ C⁰([0, 1])  (3.2)

with

v₀ ∈ B([0, 1]).  (3.3)

A non-adaptive state feedback controller for system (3.1) is derived in Sect. 3.2,
based on Krstić and Smyshlyaev (2008).
In Sect. 3.3, we derive a state observer for system (3.1), assuming only the bound-
ary measurement (3.1d) is available. The observer and state-feedback controller are
then combined into an output-feedback controller in Sect. 3.4, that achieves stabi-
lization of the system using boundary sensing only.
Section 3.5 proposes a state-feedback output tracking controller whose goal is to
make the measured output track some bounded reference signal r (t) of choice. The
proposed controller can straightforwardly be combined with the state observer to
solve the output-feedback tracking problem.

© Springer Nature Switzerland AG 2019
H. Anfinsen and O. M. Aamo, Adaptive Control of Hyperbolic PDEs,
Communications and Control Engineering,
https://doi.org/10.1007/978-3-030-05879-1_3

All derived schemes are implemented and simulation results can be found in
Sect. 3.6. Finally, some concluding remarks and discussion of the methods are offered
in Sect. 3.7.

3.2 State Feedback Controller

3.2.1 Controller Design

Left uncontrolled (U ≡ 0), system (3.1) may be unstable, depending on the system
parameters. Stabilizing controllers will here be derived, assuming all system param-
eters are known, demonstrating the synthesis of a stabilizing controller for the simple
PDE (3.1) using backstepping. We propose the control law

U(t) = (1/ρ) ∫₀¹ k(1 − ξ)v(ξ, t)dξ  (3.4)

where k is the solution to the Volterra integral equation

μk(x) = ∫₀ˣ k(x − ξ)θ(ξ)dξ − θ(x).  (3.5)

Theorem 3.1 Consider system (3.1). The control law (3.4) ensures that

v ≡ 0  (3.6)

for t ≥ d₁, where

d₁ = μ⁻¹.  (3.7)

Notice that the control law (3.4) achieves convergence to zero in finite time, a property
that is not achieved for linear ODEs or linear parabolic PDEs. It is due to the particular
dynamics of transport equations. It is not straightforwardly obvious why the state
feedback control law (3.4) stabilizes the system, let alone how the Eq. (3.5) for k is
obtained. We hope to shine some light on this in the following proof of Theorem 3.1,
which shows in detail the steps involved in the backstepping technique for control
design.

Proof (Proof of Theorem 3.1) As the reader may recall, the idea of backstepping
is to find an invertible Volterra integral transformation and a corresponding control
law U that map the system of interest into an equivalent target system designed with
some desirable stability properties. We propose the following target system

αt(x, t) − μαx(x, t) = 0  (3.8a)
α(1, t) = 0  (3.8b)
α(x, 0) = α₀(x)  (3.8c)

which is a simple transport equation, transporting the boundary value α(1, t) = 0
through the domain at the speed μ. In fact, the solution to (3.8) is

α(x, t) = α(1, t − d₁(1 − x)) for t ≥ d₁(1 − x),
α(x, t) = α₀(x + μt) for t < d₁(1 − x)  (3.9)

where d₁ is defined in (3.7), and α₀ ∈ B([0, 1]) is the initial condition. It is clear that
for t ≥ d₁, we will have

α ≡ 0  (3.10)

since α(1, t) = 0 for all t ≥ 0. Thus, we seek an invertible transformation that maps
system (3.1) into (3.8).
Consider the backstepping transformation

α(x, t) = v(x, t) − ∫₀ˣ K(x, ξ)v(ξ, t)dξ  (3.11)

from the original variable v to the auxiliary variable α, where K = K(x, ξ) is a
C¹-function to be determined, defined over the triangular domain T given in (1.1a).
From differentiating (3.11) with respect to time, we obtain

vt(x, t) = αt(x, t) + ∫₀ˣ K(x, ξ)vt(ξ, t)dξ.  (3.12)

Substituting the dynamics (3.1a) into (3.12) yields

vt(x, t) = αt(x, t) + μ ∫₀ˣ K(x, ξ)vx(ξ, t)dξ + ∫₀ˣ K(x, ξ)θ(ξ)dξ v(0, t).  (3.13)

We now apply integration by parts to the first integral on the right hand side of (3.13),
obtaining

∫₀ˣ K(x, ξ)vx(ξ, t)dξ = [K(x, ξ)v(ξ, t)]₀ˣ − ∫₀ˣ Kξ(x, ξ)v(ξ, t)dξ
  = K(x, x)v(x, t) − K(x, 0)v(0, t) − ∫₀ˣ Kξ(x, ξ)v(ξ, t)dξ.  (3.14)

Inserting (3.14) into (3.13) yields

vt(x, t) = αt(x, t) + μK(x, x)v(x, t) − μK(x, 0)v(0, t)
  − μ ∫₀ˣ Kξ(x, ξ)v(ξ, t)dξ + ∫₀ˣ K(x, ξ)θ(ξ)dξ v(0, t).  (3.15)

Similarly, differentiating (3.11) with respect to space and using Leibniz' rule, we
obtain

vx(x, t) = αx(x, t) + d/dx ∫₀ˣ K(x, ξ)v(ξ, t)dξ
  = αx(x, t) + K(x, x)v(x, t) + ∫₀ˣ Kx(x, ξ)v(ξ, t)dξ.  (3.16)

Substituting (3.15) and (3.16) into (3.1a) gives

0 = vt(x, t) − μvx(x, t) − θ(x)v(0, t)
  = αt(x, t) + μK(x, x)v(x, t) − μK(x, 0)v(0, t)
    − μ ∫₀ˣ Kξ(x, ξ)v(ξ, t)dξ + ∫₀ˣ K(x, ξ)θ(ξ)dξ v(0, t) − μαx(x, t)
    − μK(x, x)v(x, t) − μ ∫₀ˣ Kx(x, ξ)v(ξ, t)dξ − θ(x)v(0, t),  (3.17)

which can be written as

αt(x, t) − μαx(x, t) = μ ∫₀ˣ [Kx(x, ξ) + Kξ(x, ξ)]v(ξ, t)dξ
  + (μK(x, 0) − ∫₀ˣ K(x, ξ)θ(ξ)dξ + θ(x)) v(0, t).  (3.18)

By choosing K as the solution to the PDE

Kx(x, ξ) + Kξ(x, ξ) = 0  (3.19a)
μK(x, 0) − ∫₀ˣ K(x, ξ)θ(ξ)dξ + θ(x) = 0  (3.19b)

defined over T given in (1.1a), we obtain the target system dynamics (3.8a). Substituting x = 1 into (3.11), we obtain

α(1, t) = v(1, t) − ∫₀¹ K(1, ξ)v(ξ, t)dξ = ρU(t) − ∫₀¹ K(1, ξ)v(ξ, t)dξ  (3.20)

where we have inserted the boundary condition (3.1b). Choosing the control law as

U(t) = (1/ρ) ∫₀¹ K(1, ξ)v(ξ, t)dξ  (3.21)

we obtain the boundary condition (3.8b). From (3.19a), it is evident that a solution
K to the Eq. (3.19) is in the form

K(x, ξ) = k(x − ξ).  (3.22)

Using this, the Volterra integral equation (3.19b) reduces to (3.5), and the control
law (3.21) becomes (3.4).
The inverse of (3.11) is in a similar form, as stated in Theorem 1.2, given as

v(x, t) = α(x, t) + ∫₀ˣ L(x, ξ)α(ξ, t)dξ  (3.23)

for a function L = L(x, ξ) defined over T given in (1.1a). L can be found by evaluating the Volterra integral equation (1.53). However, we show here an alternative
way to derive the inverse transformation. Using a similar technique as in deriving
K, we differentiate (3.23) with respect to time and space, respectively, insert the
dynamics (3.8a) and integrate by parts to find

αt(x, t) = vt(x, t) − μL(x, x)α(x, t) + μL(x, 0)α(0, t) + μ ∫₀ˣ Lξ(x, ξ)α(ξ, t)dξ  (3.24)

and

αx(x, t) = vx(x, t) − L(x, x)α(x, t) − ∫₀ˣ Lx(x, ξ)α(ξ, t)dξ.  (3.25)

Inserting (3.24) and (3.25) into (3.8a), we obtain

0 = αt(x, t) − μαx(x, t)
  = vt(x, t) − μvx(x, t) − θ(x)v(0, t) + [μL(x, 0) + θ(x)] v(0, t)
    + μ ∫₀ˣ [Lx(x, ξ) + Lξ(x, ξ)]α(ξ, t)dξ.  (3.26)

Choosing L as the solution to

Lx(x, ξ) + Lξ(x, ξ) = 0  (3.27a)
μL(x, 0) + θ(x) = 0  (3.27b)

over T yields the original system dynamics (3.1a). The simple form of (3.27) yields
the solution

L(x, ξ) = −d₁θ(x − ξ)  (3.28)

for d₁ defined in (3.7). By simply using the Volterra integral equation (1.53), we
obtain an equation for L as follows

L(x, ξ) = k(x − ξ) + ∫_ξˣ k(x − s)L(s, ξ)ds  (3.29)

where k is the solution to (3.5). However, it is not at all evident from the Volterra
integral equations (3.29) and (3.5) for k that the solution to (3.29) is as simple as
(3.28). □

3.2.2 Explicit Controller Gains

The Volterra integral equation (3.5) for the controller gain k does not in general have
a solution that can be found explicitly, and a numerical approximation is often used
instead. We will here give some examples where the controller gain can be found
explicitly. The integral in (3.5) can be recognized as a convolution, and applying the
Laplace transform with respect to x gives

μk(s) = k(s)θ(s) − θ(s)  (3.30)

and hence

k(s) = θ(s)/(θ(s) − μ),  (3.31)

from which k(x) in some cases can be computed explicitly if θ(s) is known, as
illustrated in the following examples.

Example 3.1 Consider system (3.1), where

θ(x) = θ  (3.32)

is a constant. The Laplace transform of θ(x) is then

θ(s) = θ/s.  (3.33)

Using (3.30), we obtain

k(s) = −d₁θ/(s − d₁θ)  (3.34)

which yields the closed-form controller gain

k(x) = −d₁θ e^(d₁θx)  (3.35)

where d₁ is defined in (3.7). If μ = d₁ = 1, then the control law of Example 1.3 is
obtained.
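The closed-form gain above can be cross-checked by solving (3.5) numerically with successive approximations (the method of the book's Appendix F.1). A minimal sketch, with illustrative parameter values:

```python
import numpy as np

# Solve  mu*k(x) = int_0^x k(x - xi)*theta(xi) dxi - theta(x)  by iterating
# the fixed-point map on a uniform grid (trapezoidal quadrature), and compare
# with the closed-form gain of Example 3.1 for constant theta.
def trap(y, dx):
    # composite trapezoidal rule; a single sample contributes nothing
    return dx * (np.sum(y) - 0.5 * (y[0] + y[-1])) if y.size > 1 else 0.0

mu, theta0 = 0.75, 0.5           # illustrative parameters
N = 501
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
theta = np.full(N, theta0)

k = np.zeros(N)
for _ in range(60):
    k_new = np.empty(N)
    for i in range(N):
        # k[i::-1] gives k(x_i - xi_j) for xi_j = 0, ..., x_i
        conv = trap(k[i::-1] * theta[:i + 1], h)
        k_new[i] = (conv - theta[i]) / mu
    if np.max(np.abs(k_new - k)) < 1e-13:
        k = k_new
        break
    k = k_new

d1 = 1.0 / mu
k_exact = -d1 * theta0 * np.exp(d1 * theta0 * x)
print(np.max(np.abs(k - k_exact)))
```

Successive approximations always converge for Volterra equations of this type, so the loop terminates quickly; the remaining discrepancy is the O(h²) quadrature error.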

Example 3.2 Consider system (3.1), where

θ(x) = θx  (3.36)

for a constant θ. Using (3.30), we obtain

k(s) = −d₁θ/(s² − d₁θ)  (3.37)

which yields the closed-form controller gain

k(x) = √(−d₁θ) sin(√(−d₁θ) x) if θ < 0,
k(x) = 0 if θ = 0,  (3.38)
k(x) = −√(d₁θ) sinh(√(d₁θ) x) if θ > 0.

The control law U = 0 for θ = 0 should not be surprising, as system (3.1) with θ ≡ 0
reduces to the target system, which is stable for the trivial control law U ≡ 0.
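One can also verify numerically that the closed-form gain for θ > 0 indeed satisfies (3.5). A sketch with assumed parameter values:

```python
import numpy as np

# Residual check of Example 3.2: for theta(x) = theta*x and theta > 0,
# k(x) = -sqrt(d1*theta)*sinh(sqrt(d1*theta)*x) should solve
#   mu*k(x) = int_0^x k(x - xi)*theta*xi dxi - theta*x.
def trap(y, dx):
    return dx * (np.sum(y) - 0.5 * (y[0] + y[-1])) if y.size > 1 else 0.0

mu, theta = 2.0, 3.0             # illustrative parameters
d1 = 1.0 / mu
a = np.sqrt(d1 * theta)
N = 2001
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
k = -a * np.sinh(a * x)

# residual of  mu*k(x) - int_0^x k(x - xi)*theta*xi dxi + theta*x
res = np.array([mu * k[i] - trap(k[i::-1] * theta * x[:i + 1], h) + theta * x[i]
                for i in range(N)])
print(np.max(np.abs(res)))
```

The residual is on the order of the trapezoidal quadrature error, confirming the Laplace-domain computation above.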

Example 3.3 Consider system (3.1), where

θ(x) = sin(ωx)  (3.39)

for a positive constant ω. Using (3.30), we obtain

k(s) = −d₁ω/(s² + (ω² − d₁ω))  (3.40)

which yields the closed-form controller gain

k(x) = −(d₁ω/√(d₁ω − ω²)) sinh(√(d₁ω − ω²) x) if ω < d₁,
k(x) = −ω²x if ω = d₁,  (3.41)
k(x) = −(d₁ω/√(ω² − d₁ω)) sin(√(ω² − d₁ω) x) if ω > d₁.

3.3 Boundary Observer

The state feedback controller derived in the above section requires distributed measurements, which are rarely available in practice. Often, only boundary sensing in
the form (3.1d) is available, and a state observer is therefore needed. Consider the
observer

v̂t(x, t) − μv̂x(x, t) = θ(x)y(t)  (3.42a)
v̂(1, t) = ρU(t)  (3.42b)
v̂(x, 0) = v̂₀(x)  (3.42c)

for some initial condition v̂₀ ∈ B([0, 1]).

Theorem 3.2 Consider system (3.1) and observer (3.42). For t ≥ d₁, where d₁ is
defined in (3.7), we will have

v̂ ≡ v.  (3.43)

Proof The error dynamics, in terms of ṽ = v − v̂, satisfies

ṽt(x, t) − μṽx(x, t) = 0  (3.44a)
ṽ(1, t) = 0  (3.44b)
ṽ(x, 0) = ṽ₀(x)  (3.44c)

where ṽ₀ = v₀ − v̂₀, which can be seen from subtracting (3.42) from (3.1) and using
the fact that y(t) = v(0, t) as follows:

ṽt(x, t) − μṽx(x, t) = vt(x, t) − v̂t(x, t) − μvx(x, t) + μv̂x(x, t)
  = μvx(x, t) + θ(x)v(0, t) − μv̂x(x, t) − θ(x)v(0, t) − μvx(x, t) + μv̂x(x, t) = 0,  (3.45)

and

ṽ(1, t) = v(1, t) − v̂(1, t) = 0.  (3.46)

The error ṽ governed by the dynamics (3.44) is clearly zero in finite time d₁, where
d₁ is defined in (3.7), resulting in v̂ ≡ v. □
Although the observer (3.42) for system (3.1) is only a copy of the system dynam-
ics and seems trivial to design, it is rarely the case that the resulting error dynamics
are trivial to stabilize. This will become evident in the design of observers for 2 × 2
systems in Sect. 8.3 where output injection terms have to be added to the observer
equations and carefully designed to achieve stability of the error dynamics.

3.4 Output Feedback Controller

As the state estimate converges to its true value in finite time, it is obvious that
simply substituting the state in the state feedback controller with the state estimate
will produce finite-time convergent output feedback controllers.

Theorem 3.3 Consider system (3.1), and let the controller be taken as

U(t) = (1/ρ) ∫₀¹ k(1 − ξ)v̂(ξ, t)dξ  (3.47)

where v̂ is generated using the observer of Theorem 3.2, and k is the solution to the
Volterra integral equation (3.5). Then

v ≡ 0  (3.48)

for t ≥ 2d₁, where d₁ is defined in (3.7).

Proof It was stated in Theorem 3.2 that v̂ ≡ v for t ≥ d₁. Thus, for t ≥ d₁, the control
law (3.47) is the very same as (3.4), for which Theorem 3.1 states that v ≡ 0 after a
finite time d₁. Hence, after a total time of 2d₁, v ≡ 0. □

3.5 Output Tracking Controller

Consider the simple system (3.1) again. The goal in this section is to make the
measured output (3.1d) track a signal r(t), that is y → r. Consider the control law

U(t) = (1/ρ) ∫₀¹ k(1 − ξ)v(ξ, t)dξ + (1/ρ) r(t + d₁)  (3.49)

where k is the solution to the Volterra integral equation (3.5).

Theorem 3.4 Consider system (3.1), and let the control law be taken as (3.49). Then

y(t) = r(t)  (3.50)

for t ≥ d₁, where d₁ is defined in (3.7). Moreover, if r ∈ L∞, then

||v||∞ ∈ L∞.  (3.51)

Proof It is shown in the proof of Theorem 3.1 that system (3.1) can be mapped using
the backstepping transformation (3.11) into

αt(x, t) − μαx(x, t) = 0  (3.52a)
α(1, t) = ρU(t) − ∫₀¹ k(1 − ξ)v(ξ, t)dξ  (3.52b)
α(x, 0) = α₀(x)  (3.52c)
y(t) = α(0, t)  (3.52d)

provided k is the solution to the Volterra integral equation (3.5). Inserting the control
law (3.49) gives

αt(x, t) − μαx(x, t) = 0  (3.53a)
α(1, t) = r(t + d₁)  (3.53b)
α(x, 0) = α₀(x)  (3.53c)
y(t) = α(0, t).  (3.53d)

From the simple transport structure of system (3.53), it is clear that

y(t) = α(0, t) = α(1, t − d₁) = r(t)  (3.54)

for t ≥ d₁, which is the tracking goal. Moreover, if r ∈ L∞, we see from the simple
dynamics (3.53a) and the boundary condition (3.53b) that ||α||∞ ∈ L∞. The
invertibility of transformation (3.11) then gives ||v||∞ ∈ L∞ (Theorem 1.2). □

3.6 Simulations

The one-parameter system (3.1) and the controllers of Theorems 3.1, 3.3 and 3.4 are
implemented using the system parameters

μ = 3/4,  ρ = 1,  θ(x) = (1/2)(1 + e⁻ˣ cosh(πx))  (3.55)

and initial condition

v₀(x) = x.  (3.56)

For the controller of Theorem 3.4, the reference signal is set to

r(t) = 1 + sin(2πt).  (3.57)

The controller gain k, needed by all controllers, is computed from (3.5) by using suc-
cessive approximations (as described in Appendix F.1). The resulting gain is plotted
in Fig. 3.1. It is observed from Figs. 3.2 and 3.3 that the system state and observer
Fig. 3.1 Controller gain k(x)

Fig. 3.2 Left: State during state feedback. Right: State during output feedback

Fig. 3.3 Left: State estimation error. Right: State during output tracking

Fig. 3.4 Left: State norm during state feedback (solid red), output feedback (dashed-dotted blue)
and output tracking (dashed green). Right: State estimation error norm

state are bounded in all cases, and that the system state converges to zero when using
the controllers of Theorems 3.1 and 3.3, while standing oscillations are observed
for the case of using the controller of Theorem 3.4, which should be expected when
the reference signal is a sinusoid. The estimation error from using the observer in
Theorem 3.3 also converges to zero.
From the comparison plot of the state norms in Fig. 3.4, the finite-time convergence
property is evident for the controllers of Theorems 3.1 and 3.3, with the state feedback


Fig. 3.5 Left: Actuation signal during state feedback (solid red), output feedback (dashed-dotted
blue) and output tracking (dashed green). Right: Measured signal (dashed red) and reference r
during tracking

controller of Theorem 3.1 achieving this for t ≥ d₁, where

d₁ = μ⁻¹ = 4/3 ≈ 1.333  (3.58)

seconds, while convergence to zero for the output feedback controller of Theorem 3.3
is achieved for t ≥ 2d₁, since the estimation error takes d₁ time to converge, as
observed from the figure. The control inputs are seen from Fig. 3.5 also to be zero
for t ≥ d₁ and t ≥ 2d₁ for the controllers of Theorems 3.1 and 3.3, respectively.
Lastly, the controller of Theorem 3.4 achieves the tracking objective for t ≥ d₁, in
accordance with the theory.
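The finite-time convergence property can be reproduced with a simple first-order upwind discretization. The following sketch is not the book's implementation: it uses a constant θ = 1/2 together with the explicit gain of Example 3.1 (instead of the spatially varying θ in (3.55)), so that no numerical gain computation is needed; μ = 3/4 as above.

```python
import numpy as np

# Closed-loop simulation of v_t - mu*v_x = theta*v(0,t), v(1,t) = rho*U(t),
# with the state-feedback law (3.4). With CFL number 1 the transport part is
# reproduced exactly on the grid, and the L2 norm of the state should be
# numerically negligible for t >= d1 = 1/mu (Theorem 3.1).
def trap(y, dx):
    return dx * (np.sum(y) - 0.5 * (y[0] + y[-1]))

mu, theta, rho = 0.75, 0.5, 1.0
N = 800
x = np.linspace(0.0, 1.0, N + 1)
dx = x[1] - x[0]
dt = dx / mu                       # CFL = 1
d1 = 1.0 / mu
gain = -d1 * theta * np.exp(d1 * theta * (1.0 - x))   # k(1 - xi), Example 3.1

v = x.copy()                       # initial condition v0(x) = x
n0 = np.sqrt(trap(v * v, dx))      # initial L2 norm
t = 0.0
late_norms = []
while t < 2.0:
    U = trap(gain * v, dx) / rho                  # control law (3.4)
    vnew = np.empty_like(v)
    vnew[:-1] = v[1:] + dt * theta * v[0]         # exact shift plus Euler source
    vnew[-1] = rho * U                            # boundary condition (3.1b)
    v = vnew
    t += dt
    if t > d1 + 0.1:
        late_norms.append(np.sqrt(trap(v * v, dx)))

print(n0, max(late_norms))
```

The late-time norm is not exactly zero because the source term and the control integral are only approximated, but it is orders of magnitude below the initial norm, in line with the finite-time result.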

3.7 Notes

The above results clearly show the strength of the backstepping technique in con-
troller and observer design. One of the key strengths, as demonstrated, is that spatial
discretization need not be performed in any way before the actual implementation in a
computer. When using the backstepping technique, one instead analyzes the infinite-
dimensional system directly, avoiding any artifacts that discretization methods can
introduce that may potentially cause stability problems. In infinite dimensions, it is
straightforward to prove convergence in finite time, for instance, a particular feature
of hyperbolic partial differential equations which is lost by spatial discretization.
The major challenge in the backstepping technique instead lies in the choice of
target system and backstepping transformation. In the above design, we start by
choosing a target system and a form for the backstepping transformation, and then
derive conditions on the backstepping kernel so the backstepping transformation
maps the system of interest into the target system. The existence of such a kernel
is the main difficulty, and it may happen that the conditions required on the backstepping kernel constitute an ill-posed problem, in which case either a different
backstepping transformation or an alternative target system must be found. These

issues will become far more evident when we in Part III and onwards consider sys-
tems of coupled PDEs.
One drawback of the above design is that the controller (and observer) gains can
rarely be expressed explicitly, but rather as the solution to a set of partial differential
equations in the form (3.19) that may be difficult or time-consuming to solve. This
is of minor concern when the equation is time-invariant, because then a solution
can be computed once and for all, prior to implementation. However, for adaptive
controllers, the gains typically depend on uncertain parameters that are continuously
updated by some adaptive law. This brings us to the topic of the next chapter, where
we use the backstepping technique to derive controllers for systems with uncertain
parameters. The resulting controllers then have time-varying gains which must be
computed at every time step.

Reference

Krstić M, Smyshlyaev A (2008) Backstepping boundary control for first-order hyperbolic PDEs
and application to systems with actuator and sensor delays. Syst Control Lett 57(9):750–758
Chapter 4
Adaptive State-Feedback Controller

4.1 Introduction

Having derived state-feedback and output-feedback controllers for (2.4), we will
now proceed with adaptive solutions. In this chapter we present an adaptive state-
feedback controller for system (2.4), with the additional assumption that ρ = 1. For
the reader's convenience, we restate the system here

vt(x, t) − μvx(x, t) = θ(x)v(0, t)  (4.1a)
v(1, t) = U(t)  (4.1b)
v(x, 0) = v₀(x)  (4.1c)

where

μ ∈ R, μ > 0,  θ ∈ C⁰([0, 1])  (4.2)

with

v₀ ∈ B([0, 1]).  (4.3)

An adaptive state-feedback controller is derived using Lyapunov design in Xu
and Liu (2016) with μ = 1 and uncertain θ, and is, at the writing of this book, the
only result on adaptive control of linear hyperbolic PDEs using the Lyapunov design
approach. The stability proof is complicated, and spans the majority of the 16-page
paper. We will in this chapter construct an adaptive state-feedback controller for
system (2.4) with arbitrary μ > 0 by identifier-based design, which incorporates a
dynamical system referred to as an identifier. The identifier is usually a copy of
the system dynamics with certain output injection gains added for the purpose of
making the adaptive laws integrable. The identifier is sometimes termed an observer,
although its purpose is parameter estimation and not state estimation. As we will see,

the identifier-based design is simpler to carry out than the Lyapunov design in Xu
and Liu (2016), but at the cost of increasing dynamic order of the controller due to
the identifier dynamics. The details of the design are given in Sect. 4.2, simulations
are presented in Sect. 4.3, while some concluding remarks are offered in Sect. 4.4.
Although it is assumed unknown, we assume we have some a priori knowledge
of the parameter θ, formally stated in the following assumption.
Assumption 4.1 A bound on θ is known. That is, we know a constant θ̄ such that

||θ||∞ ≤ θ̄.  (4.4)

This assumption is not a limitation, since the bound θ̄ can be arbitrarily large.

4.2 Identifier-Based Design

4.2.1 Identifier and Update Law

We propose the following identifier for system (4.1)

v̂t(x, t) − μv̂x(x, t) = θ̂(x, t)v(0, t) + γ₀(v(x, t) − v̂(x, t))v²(0, t)  (4.5a)
v̂(1, t) = U(t)  (4.5b)
v̂(x, 0) = v̂₀(x)  (4.5c)

and the adaptive law

θ̂t(x, t) = proj_θ̄(γ(x)(v(x, t) − v̂(x, t))v(0, t), θ̂(x, t))  (4.6a)
θ̂(x, 0) = θ̂₀(x)  (4.6b)

for some design gains γ₀ > 0 and γ̄ ≥ γ(x) ≥ γ̲ > 0, x ∈ [0, 1], and initial conditions
satisfying

v̂₀ ∈ B([0, 1])  (4.7a)
||θ̂₀||∞ ≤ θ̄,  (4.7b)

where θ̄ is as stated in Assumption 4.1. The operator proj is defined in Appendix A.
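The book's exact definition of the projection operator is in its Appendix A (not reproduced here). A common pointwise form, shown only as an illustrative sketch, passes the nominal update through unless the estimate sits on the a priori bound and the update points outward:

```python
def proj(tau, theta_hat, bound):
    """Pointwise parameter projection (a common form; an assumption, since the
    book's precise definition is in its Appendix A). Returns the nominal update
    tau unless theta_hat lies on the boundary |theta_hat| = bound and tau would
    push the estimate outside, in which case the update is zeroed."""
    if (theta_hat >= bound and tau > 0) or (theta_hat <= -bound and tau < 0):
        return 0.0
    return tau

print(proj(1.0, 0.5, 1.0), proj(1.0, 1.0, 1.0), proj(-1.0, 1.0, 1.0))
# -> 1.0 0.0 -1.0
```

This form guarantees the boundedness property (4.8a) as well as the inequality −θ̃ proj(τ, θ̂) ≤ −θ̃τ used in the proof of Lemma 4.1, since zeroing the update only ever helps the Lyapunov decrease.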

Lemma 4.1 Consider system (4.1). The identifier (4.5) and the update law (4.6)
with initial conditions satisfying (4.7) guarantee that

||θ̂(t)||∞ ≤ θ̄, ∀t ≥ 0  (4.8a)
||e|| ∈ L∞ ∩ L2  (4.8b)
|e(0, ·)|, ||e|||v(0, ·)|, ||θ̂t|| ∈ L2  (4.8c)

where

e(x, t) = v(x, t) − v̂(x, t).  (4.9)

Proof The property (4.8a) follows from the projection operator and the initial condition (4.7b) (Lemma A.1 in Appendix A). The error signal (4.9) can straightforwardly
be shown to have dynamics

et(x, t) − μex(x, t) = θ̃(x, t)v(0, t) − γ₀e(x, t)v²(0, t)  (4.10a)
e(1, t) = 0  (4.10b)
e(x, 0) = e₀(x)  (4.10c)

where e₀ = v₀ − v̂₀ ∈ B([0, 1]). Consider the Lyapunov function candidate

V₁(t) = ∫₀¹ (1 + x)(e²(x, t) + γ⁻¹(x)θ̃²(x, t))dx,  (4.11)

for which we find by differentiating with respect to time, inserting the dynamics
(4.10a) and integrating by parts

V̇₁(t) = −μe²(0, t) − μ||e(t)||² + 2 ∫₀¹ (1 + x)e(x, t)θ̃(x, t)v(0, t)dx
  − 2 ∫₀¹ (1 + x)(γ₀e²(x, t)v²(0, t) + γ⁻¹(x)θ̃(x, t)θ̂t(x, t))dx.  (4.12)

Inserting the adaptive law (4.6), and using the property −θ̃(x, t)proj_θ̄(τ, θ̂(x, t)) ≤
−θ̃(x, t)τ (Lemma A.1), we obtain

V̇₁(t) ≤ −μe²(0, t) − μ||e(t)||² − 2γ₀||e(t)||²v²(0, t)  (4.13)

which shows that V₁(t) is non-increasing and hence bounded, and thus ||e|| ∈ L∞
follows. This also implies that the limit lim t→∞ V₁(t) = V₁,∞ exists. By integrating
(4.13) from zero to infinity, we obtain

∫₀^∞ V̇₁(τ)dτ = V₁,∞ − V₁(0) ≤ −μ ∫₀^∞ e²(0, τ)dτ − μ ∫₀^∞ ||e(τ)||²dτ
  − 2γ₀ ∫₀^∞ ||e(τ)||²v²(0, τ)dτ  (4.14)

and hence

μ ∫₀^∞ e²(0, τ)dτ + μ ∫₀^∞ ||e(τ)||²dτ + 2γ₀ ∫₀^∞ ||e(τ)||²v²(0, τ)dτ
  ≤ V₁(0) − V₁,∞ ≤ V₁(0) < ∞  (4.15)

which, since μ, γ₀ > 0, proves that all integrals in (4.15) are bounded, resulting in

|e(0, ·)|, ||e||, ||e|||v(0, ·)| ∈ L2.  (4.16)

From the adaptive law (4.6), we have

||θ̂t(t)|| ≤ γ̄||e(t)|||v(0, t)|  (4.17)

and since ||e|||v(0, ·)| ∈ L2, it follows that

||θ̂t|| ∈ L2.  (4.18)

4.2.2 Control Law

Using the identifier and adaptive law designed in the previous section, we are ready
to design a stabilizing control law. Consider the control law

U(t) = ∫₀¹ k̂(1 − ξ, t)v̂(ξ, t)dξ  (4.19)

where k̂ is the on-line solution to the Volterra integral equation

μk̂(x, t) = ∫₀ˣ k̂(x − ξ, t)θ̂(ξ, t)dξ − θ̂(x, t).  (4.20)

Theorem 4.1 The control law (4.19) in closed loop with system (4.1), identifier (4.5)
and adaptive law (4.6), guarantees that

||v||, ||v̂||, ||v||∞, ||v̂||∞ ∈ L2 ∩ L∞  (4.21a)
||v||, ||v̂||, ||v||∞, ||v̂||∞ → 0.  (4.21b)

Theorem 4.1 is proved in Sect. 4.2.4 using Lyapunov theory, facilitated by the backstepping transformation with accompanying target system, which are presented next.
4.2.3 Backstepping and Target System

Consider the backstepping transformation

w(x, t) = v̂(x, t) − ∫₀ˣ k̂(x − ξ, t)v̂(ξ, t)dξ = T[v̂](x, t)  (4.22)

where k̂ is the on-line solution to the Volterra integral equation

μk̂(x, t) = −T[θ̂](x, t)  (4.23)

which is equivalent to (4.20). As with all Volterra integral transformations, transformation (4.22) is invertible (Theorem 1.3), with inverse

v̂(x, t) = w(x, t) − μ⁻¹ ∫₀ˣ θ̂(x − ξ, t)w(ξ, t)dξ = T⁻¹[w](x, t).  (4.24)

This can be verified by inserting

K(x, ξ, t) = k̂(x − ξ, t)  (4.25)

and

L(x, ξ, t) = −μ⁻¹θ̂(x − ξ, t)  (4.26)

into (1.90), yielding

−μ⁻¹θ̂(x − ξ, t) = k̂(x − ξ, t) − μ⁻¹ ∫_ξˣ k̂(x − s, t)θ̂(s − ξ, t)ds  (4.27)

or equivalently

μk̂(x, t) = −θ̂(x, t) + ∫_ξ^{x+ξ} k̂(x + ξ − s, t)θ̂(s − ξ, t)ds.  (4.28)

A substitution τ = s − ξ in the integral yields (4.20). Consider also the target system

wt(x, t) − μwx(x, t) = −μk̂(x, t)e(0, t) + γ₀T[e](x, t)v²(0, t)
  − ∫₀ˣ k̂t(x − ξ, t)T⁻¹[w](ξ, t)dξ  (4.29a)
w(1, t) = 0  (4.29b)
w(x, 0) = w₀(x)  (4.29c)
for some initial condition

w₀ ∈ B([0, 1]).  (4.30)

Lemma 4.2 The backstepping transformation (4.22) with k̂ satisfying (4.20) maps
identifier (4.5) into system (4.29).

Proof From differentiating (4.22) with respect to time, inserting the dynamics (4.5a)
and integrating by parts, we find

v̂t(x, t) = wt(x, t) + ∫₀ˣ k̂t(x − ξ, t)v̂(ξ, t)dξ + μk̂(0, t)v̂(x, t) − μk̂(x, t)v̂(0, t)
  + μ ∫₀ˣ k̂x(x − ξ, t)v̂(ξ, t)dξ + ∫₀ˣ k̂(x − ξ, t)θ̂(ξ, t)dξ v(0, t)
  + γ₀ ∫₀ˣ k̂(x − ξ, t)e(ξ, t)dξ v²(0, t).  (4.31)

Similarly, differentiating (4.22) with respect to space yields

v̂x(x, t) = wx(x, t) + k̂(0, t)v̂(x, t) + ∫₀ˣ k̂x(x − ξ, t)v̂(ξ, t)dξ.  (4.32)

Substituting (4.31) and (4.32) into the identifier dynamics (4.5a), we find

0 = v̂t(x, t) − μv̂x(x, t) − θ̂(x, t)v(0, t) − γ₀e(x, t)v²(0, t)
  = wt(x, t) − μwx(x, t) − θ̂(x, t)e(0, t)
    − (μk̂(x, t) − ∫₀ˣ k̂(x − ξ, t)θ̂(ξ, t)dξ + θ̂(x, t)) v̂(0, t)
    + ∫₀ˣ k̂(x − ξ, t)θ̂(ξ, t)dξ e(0, t) + γ₀ ∫₀ˣ k̂(x − ξ, t)e(ξ, t)dξ v²(0, t)
    − γ₀e(x, t)v²(0, t) + ∫₀ˣ k̂t(x − ξ, t)v̂(ξ, t)dξ.  (4.33)

Choosing k̂ as the solution to (4.20) yields the target system dynamics (4.29a).
Substituting x = 1 into (4.22) and inserting the boundary condition (4.5b), we find

w(1, t) = v̂(1, t) − ∫₀¹ k̂(1 − ξ, t)v̂(ξ, t)dξ = U(t) − ∫₀¹ k̂(1 − ξ, t)v̂(ξ, t)dξ.  (4.34)

Choosing the control law (4.19) yields the boundary condition (4.29b). □
4.2.4 Proof of Theorem 4.1

We will here use the following inequalities that hold for all t ≥ 0

||k̂(t)|| ≤ Mk  (4.35a)
||w(t)|| ≤ G₁||v̂(t)||  (4.35b)
||v̂(t)|| ≤ G₂||w(t)||  (4.35c)

for some positive constants Mk, G₁ and G₂, and

||k̂t|| ∈ L2.  (4.36)

The property (4.35a) follows from applying Lemma 1.1 to (4.20), and the fact that
θ̂ is uniformly bounded. Properties (4.35b)–(4.35c) follow from Theorem 1.3, while
for (4.36), we differentiate (4.20) with respect to time and find

μk̂t(x, t) = −θ̂t(x, t) + ∫₀ˣ k̂t(x − ξ, t)θ̂(ξ, t)dξ + ∫₀ˣ k̂(x − ξ, t)θ̂t(ξ, t)dξ,  (4.37)

which can be rewritten as

k̂t(x, t) − μ⁻¹ ∫₀ˣ θ̂(x − ξ, t)k̂t(ξ, t)dξ
  = −μ⁻¹θ̂t(x, t) + μ⁻¹ ∫₀ˣ k̂(x − ξ, t)θ̂t(ξ, t)dξ  (4.38)

or

T⁻¹[k̂t](x, t) = −μ⁻¹T[θ̂t](x, t).  (4.39)

Hence

k̂t(x, t) = −μ⁻¹T[T[θ̂t]](x, t)  (4.40)

which gives the bound

||k̂t(t)|| ≤ μ⁻¹G₁²||θ̂t(t)||.  (4.41)

Since ||θ̂t|| ∈ L2 by Lemma 4.1, (4.36) follows.



Consider now the Lyapunov function candidate

V2 (t) = ∫₀¹ e^{δx} w²(x, t)dx (4.42)

for some positive constant δ to be determined. Differentiating (4.42) with respect to
time, inserting the dynamics (4.29a), and integrating by parts give

V̇2 (t) = μe^δ w²(1, t) − μw²(0, t) − μδ ∫₀¹ e^{δx} w²(x, t)dx
    − 2μ ∫₀¹ e^{δx} w(x, t)k̂(x, t)dx e(0, t)
    + 2γ0 ∫₀¹ e^{δx} w(x, t)T [e](x, t)dx v²(0, t)
    − 2 ∫₀¹ e^{δx} w(x, t) ∫₀ˣ k̂t (x − ξ, t)T⁻¹[w](ξ, t)dξ dx. (4.43)

We will consider the three rightmost integrals in (4.43) individually. For the second
integral on the right hand side, we obtain by applying Young's inequality to the cross
terms (Appendix C)

−2μ ∫₀¹ e^{δx} w(x, t)k̂(x, t)dx e(0, t)
    ≤ ρ1 ∫₀¹ e^{δx} w²(x, t)dx + (1/ρ1) μ² ∫₀¹ e^{δx} k̂²(x, t)dx e²(0, t)
    ≤ ρ1 V2 (t) + (1/ρ1) μ² e^δ ∫₀¹ k̂²(x, t)dx e²(0, t)
    ≤ ρ1 V2 (t) + (1/ρ1) μ² e^δ Mk² e²(0, t) (4.44)

for an arbitrary positive constant ρ1 . Similarly for the third and fourth integral, using
v(0, t) = v̂(0, t) + e(0, t) = w(0, t) + e(0, t), we find, using Cauchy–Schwarz' and
Young's inequalities (Appendix C),

2γ0 ∫₀¹ e^{δx} w(x, t)T [e](x, t)dx v²(0, t) ≤ 2γ0 e^δ ||w(t)|| ||T [e](t)|| v²(0, t)
    ≤ 2G1 γ0 e^δ ||w(t)|| ||e(t)|| |v(0, t)| |w(0, t) + e(0, t)|
    ≤ ρ2 G1² γ0² e^{2δ} ||w(t)||² ||e(t)||² v²(0, t) + (1/ρ2)(w(0, t) + e(0, t))²
    ≤ ρ2 G1² γ0² e^{2δ} ||w(t)||² ||e(t)||² v²(0, t) + (2/ρ2) w²(0, t) + (2/ρ2) e²(0, t) (4.45)

for an arbitrary positive constant ρ2 . Lastly,

−2 ∫₀¹ e^{δx} w(x, t) ∫₀ˣ k̂t (x − ξ, t)T⁻¹[w](ξ, t)dξ dx
    ≤ ρ3 ∫₀¹ e^{δx} w²(x, t)dx + (1/ρ3) e^δ ∫₀¹ (∫₀ˣ k̂t (x − ξ, t)T⁻¹[w](ξ, t)dξ)² dx
    ≤ ρ3 ∫₀¹ e^{δx} w²(x, t)dx + (1/ρ3) e^δ ∫₀¹ (∫₀¹ k̂t (1 − ξ, t)T⁻¹[w](ξ, t)dξ)² dx
    ≤ ρ3 ∫₀¹ e^{δx} w²(x, t)dx + (1/ρ3) e^δ ∫₀¹ (||k̂t (t)|| ||T⁻¹[w](t)||)² dx
    ≤ ρ3 ∫₀¹ e^{δx} w²(x, t)dx + (1/ρ3) e^δ ||k̂t (t)||² ||T⁻¹[w](t)||²
    ≤ ρ3 V2 (t) + (1/ρ3) e^δ G2² ||k̂t (t)||² ||w(t)||². (4.46)

Substituting (4.44)–(4.46) into (4.43) yields

V̇2 (t) ≤ −[μ − 2/ρ2] w²(0, t) − [μδ − ρ1 − ρ3] V2 (t)
    + [(1/ρ1) μ² e^δ Mk² + 2/ρ2] e²(0, t) + ρ2 G1² γ0² e^{2δ} ||w(t)||² ||e(t)||² v²(0, t)
    + (1/ρ3) e^δ G2² ||k̂t (t)||² ||w(t)||² (4.47)

where we have used the boundary condition (4.29b). Choosing

ρ1 = μ, ρ2 = 2/μ, ρ3 = μ (4.48)

yields

V̇2 (t) ≤ −μ [δ − 2] V2 (t) + μ(e^δ Mk² + 1) e²(0, t)
    + (2/μ) G1² γ0² e^{2δ} ||w(t)||² ||e(t)||² v²(0, t)
    + (1/μ) e^δ G2² ||k̂t (t)||² ||w(t)||². (4.49)

Now choosing

δ = 3 (4.50)

yields

V̇2 (t) ≤ −μV2 (t) + l1 (t)V2 (t) + l2 (t) (4.51)

where we have defined

l1 (t) = (2/μ) G1² γ0² e^{2δ} ||e(t)||² v²(0, t) + (1/μ) e^δ G2² ||k̂t (t)||² (4.52a)
l2 (t) = μ(e³ Mk² + 1) e²(0, t) (4.52b)

which are nonnegative, integrable functions (i.e. l1 , l2 ∈ L1 ), following Lemma 4.1
and (4.36). It then follows from Lemma B.3 in Appendix B that

V2 ∈ L1 ∩ L∞ , V2 → 0 (4.53)

and hence

||w|| ∈ L2 ∩ L∞ , ||w|| → 0. (4.54)

From the invertibility of the transformation (4.22), we have

||v̂|| ∈ L2 ∩ L∞ , ||v̂|| → 0, (4.55)

and since ||e|| ∈ L2 ∩ L∞ , we have from (4.9) that

||v|| ∈ L2 ∩ L∞ , ||v|| → 0. (4.56)

In the non-adaptive case investigated in Sect. 3.2.1, it is shown that system (2.4) is,
through the invertible backstepping transformation (3.11), equivalent to the system

αt (x, t) − μαx (x, t) = 0 (4.57a)
α(1, t) = ∫₀¹ k̂(1 − ξ, t)v̂(ξ, t)dξ − ∫₀¹ k(1 − ξ)v(ξ, t)dξ (4.57b)
α(x, 0) = α0 (x) (4.57c)

provided k satisfies the Volterra integral equation (3.5), and where we have inserted
for the control law (4.19). Since ||v||, ||v̂|| ∈ L2 ∩ L∞ , ||v||, ||v̂|| → 0 and k, k̂ are
bounded, it follows that

α(1, ·) ∈ L2 ∩ L∞ , α(1, ·) → 0, (4.58)

and hence

||α||∞ ∈ L2 ∩ L∞ , ||α||∞ → 0. (4.59)

Due to the invertibility of the transformation (3.11),

||v||∞ ∈ L2 ∩ L∞ , ||v||∞ → 0, (4.60)

while from the structure of v̂ in (4.5), with U, v(0, ·) ∈ L2 ∩ L∞ ,

||v̂||∞ ∈ L2 ∩ L∞ , ||v̂||∞ → 0, (4.61)

follows, and hence also

||w||∞ , ||e||∞ ∈ L2 ∩ L∞ , ||w||∞ , ||e||∞ → 0. (4.62)

Thus, all signals in the closed loop system are pointwise bounded and converge to
zero. □

4.3 Simulations

System (4.1), identifier (4.5) and the control law of Theorem 4.1 are implemented
using the same system parameters as in the simulation in Sect. 3.6, that is

μ = 3/4, θ(x) = (1/2)(1 + e^{−x} cosh(πx)) (4.63)

and initial condition

u0 (x) = x. (4.64)

The initial conditions for the identifier and parameter estimate are set to zero, and the
design gains are set to

ρ = 1, γ = 1, θ = −100, θ̄ = 100. (4.65)

Equation (4.20) is solved on-line for the controller gain k̂ using successive
approximations (as described in Appendix F.1).
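As a concrete illustration, the successive-approximation iteration for (4.20) is straightforward to implement on a uniform grid. The sketch below (Python with NumPy; the grid size, iteration count and trapezoidal quadrature are our own illustrative choices, not taken from Appendix F.1) iterates k̂ ← μ⁻¹(∫₀ˣ k̂(x − ξ)θ̂(ξ)dξ − θ̂(x)) to a fixed point:

```python
import numpy as np

def solve_gain_kernel(theta_hat, mu, n_iter=50):
    """Successive approximations for the Volterra equation (4.20):
    mu * k(x) = int_0^x k(x - xi) theta_hat(xi) dxi - theta_hat(x),
    with theta_hat sampled on a uniform grid over [0, 1]."""
    N = len(theta_hat)
    dx = 1.0 / (N - 1)
    k = -theta_hat / mu  # zeroth iterate: drop the integral term
    for _ in range(n_iter):
        conv = np.zeros(N)
        for i in range(1, N):
            # trapezoidal rule for int_0^{x_i} k(x_i - xi) theta_hat(xi) dxi
            f = k[i::-1] * theta_hat[:i + 1]
            conv[i] = dx * (np.sum(f) - 0.5 * (f[0] + f[-1]))
        k = (conv - theta_hat) / mu
    return k
```

Because the integral operator is of Volterra type, the iteration converges for any bounded θ̂; in the adaptive loop it would be rerun (warm-started from the previous k̂) each time θ̂ is updated.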
It is observed from Fig. 4.1 that the system and identifier states are bounded and
converge asymptotically to zero. The error in the identifier (u − û) also converges
to zero, as does the actuation signal U seen in Fig. 4.2. The estimated parameter θ̂
is seen from Fig. 4.3 to be bounded and to converge, although not to the true value θ.
Convergence of parameters to their true values requires persistent excitation, and is
therefore not compatible with the objective of regulation to zero.
Fig. 4.1 Left: State (solid red) and identifier (dashed-dotted blue) norms. Right: Identifier error norm

Fig. 4.2 Actuation signal

Fig. 4.3 Left: Estimated parameter θ̂. Right: Actual value of θ (solid black) and final estimate θ̂ (dashed red)

4.4 Notes

Proving stability properties by the Lyapunov design in Xu and Liu (2016) is in general
more difficult than for the identifier-based design demonstrated in this chapter,
and the difference in complexity becomes more prominent as the complexity of the
system increases. On the other hand, Lyapunov designs in general result in adaptive
controllers of lower dynamical order than their identifier-based counterparts, and
are therefore simpler to implement in practice. This is of course due to the identifier
inheriting the dynamic order of the system, while the Lyapunov design gives a
dynamic order that only depends on the number of uncertain parameters. Both solutions
assume that measurements of the full state are available, which is unrealistic in most
cases. We relax this assumption in the next chapter, where we derive an output-feedback
adaptive controller for (4.1) using swapping-based design.

Reference

Xu Z, Liu Y (2016) Adaptive boundary stabilization for first-order hyperbolic PDEs with unknown
spatially varying parameter. Int J Robust Nonlinear Control 26(3):613–628
Chapter 5
Adaptive Output-Feedback Controller

5.1 Introduction

We consider again systems in the form (2.4) and recall the equations for the convenience
of the reader as

vt (x, t) − μvx (x, t) = θ(x)v(0, t) (5.1a)
v(1, t) = U (t) (5.1b)
v(x, 0) = v0 (x) (5.1c)
y(t) = v(0, t), (5.1d)

where

μ ∈ R, μ > 0, θ ∈ C⁰([0, 1]) (5.2)

with

v0 ∈ B([0, 1]). (5.3)

For simplicity, we again assume ρ = 1. (This assumption will be relaxed in Chap. 6).
A Lyapunov-based state-feedback controller for the special case μ = 1 is pre-
sented in Xu and Liu (2016), while an identifier-based state-feedback controller is
designed in Chap. 4. We will in this chapter derive an adaptive controller using the
third design method mentioned in Sect. 1.10: swapping-based design. This method
employs filters, carefully designed so that the system states can be expressed as lin-
ear, static combinations of the filter states, the unknown parameters and some error
terms. The error terms are shown to converge to zero. The static parameterization
of the system states is referred to as the linear parametric model, to which a range
of standard parameter estimation algorithms can be applied. The number of filters
required when using this method typically equals the number of unknown parameters
plus one.

© Springer Nature Switzerland AG 2019
H. Anfinsen and O. M. Aamo, Adaptive Control of Hyperbolic PDEs,
Communications and Control Engineering,
https://doi.org/10.1007/978-3-030-05879-1_5

Swapping-based design is used in Bernard and Krstić (2014) to adaptively stabilize
system (2.4) with uncertain θ for the special case of μ = 1. We present the method for
an arbitrary μ > 0. The resulting controller possesses the highly desirable feature of
being an output-feedback controller, requiring only a boundary measurement (5.1d)
at x = 0. This is not the case in general when using swapping-based design, but a
nice feature achievable for system (2.4), since the uncertain parameter θ is multiplied
by a measured signal v(0, t) (see the right-hand side of (5.1a)).
We proceed in Sect. 5.2 to derive the swapping-based output-feedback adap-
tive controller. The controller and system (5.1) are implemented and simulated in
Sect. 5.3, before some concluding remarks are given in Sect. 5.4.
As for the identifier-based state-feedback solution of Chap. 4, we require a bound
on the parameter θ, formally stated in the following assumption.
Assumption 5.1 A bound on θ is known. That is, we know a constant θ̄ such that

||θ||∞ ≤ θ̄. (5.4)

5.2 Swapping-Based Design

5.2.1 Filter Design and Non-adaptive State Estimates

We introduce the filters

ψt (x, t) − μψx (x, t) = 0, ψ(1, t) = U (t), ψ(x, 0) = ψ0 (x) (5.5a)
φt (x, t) − μφx (x, t) = 0, φ(1, t) = y(t), φ(x, 0) = φ0 (x), (5.5b)

for some initial conditions satisfying

ψ0 , φ0 ∈ B([0, 1]). (5.6)
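Both filters in (5.5) are pure transport equations and are cheap to simulate. The sketch below (Python; the discretization choices are our own, not the book's) uses a first-order upwind scheme with the CFL number set to exactly one, for which the update reduces to shifting the grid values and the scheme is exact:

```python
import numpy as np

def simulate_filter(U, mu, N=101, T=4.0):
    """Simulate psi_t - mu*psi_x = 0, psi(1, t) = U(t), psi(x, 0) = 0
    (cf. (5.5a)) with an upwind scheme at CFL = 1 (mu*dt = dx),
    where each step shifts the profile one cell toward x = 0."""
    dx = 1.0 / (N - 1)
    dt = dx / mu
    psi = np.zeros(N)
    for n in range(int(round(T / dt))):
        psi[:-1] = psi[1:]         # transport toward x = 0
        psi[-1] = U((n + 1) * dt)  # boundary input enters at x = 1
    return psi
```

With this scheme ψ(0, t) = U(t − d1) exactly once t ≥ d1 = μ⁻¹, which is the delay structure exploited throughout the chapter.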

Then a non-adaptive estimate of the signal v can be generated from

v̄(x, t) = ψ(x, t) + d1 ∫ₓ¹ θ(ξ)φ(1 − (ξ − x), t)dξ (5.7)

where d1 = μ⁻¹, as defined in (3.7). Define the non-adaptive estimation error as

e(x, t) = v(x, t) − v̄(x, t). (5.8)



Straightforward calculations yield that e satisfies the dynamics

et (x, t) − μex (x, t) = 0, e(1, t) = 0, e(x, 0) = e0 (x) (5.9)

for which e ≡ 0 for t ≥ d1 , with d1 defined in (3.7).

5.2.2 Adaptive Laws and State Estimation

Motivated by the parametrization (5.7), we generate an adaptive estimate of v from

v̂(x, t) = ψ(x, t) + d1 ∫ₓ¹ θ̂(ξ, t)φ(1 − (ξ − x), t)dξ (5.10)
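On a uniform grid, the estimate (5.10) is a quadrature of θ̂ against a reversed segment of φ. A minimal sketch (Python; grid resolution and trapezoidal quadrature are our own choices):

```python
import numpy as np

def adaptive_state_estimate(psi, phi, theta_hat, mu):
    """Evaluate (5.10): v_hat(x) = psi(x)
    + (1/mu) * int_x^1 theta_hat(xi) * phi(1 - (xi - x)) dxi.
    For x = x_i and xi = x_j, phi(1 - (xi - x)) is phi at grid
    index N-1-(j-i), i.e. the tail of phi read in reverse."""
    N = len(psi)
    dx = 1.0 / (N - 1)
    d1 = 1.0 / mu
    v_hat = np.array(psi, dtype=float)
    for i in range(N - 1):
        f = theta_hat[i:] * phi[i:][::-1]
        # trapezoidal rule over [x_i, 1]
        v_hat[i] += d1 * dx * (np.sum(f) - 0.5 * (f[0] + f[-1]))
    return v_hat
```

Setting θ̂ ≡ 0 reproduces the input filter ψ alone, consistent with (5.10).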

where θ̂ is an estimate of θ. The dynamics of (5.10) can straightforwardly be found
to satisfy

v̂t (x, t) − μv̂x (x, t) = θ̂(x, t)v(0, t) + d1 ∫ₓ¹ θ̂t (ξ, t)φ(1 − (ξ − x), t)dξ (5.11a)
v̂(1, t) = U (t) (5.11b)
v̂(x, 0) = v̂0 (x) (5.11c)

for some function v̂0 ∈ B([0, 1]). The corresponding prediction error is defined as

ê(x, t) = v(x, t) − v̂(x, t). (5.12)

From the parametric model (5.7) and corresponding error (5.8), we also have

y(t) = ψ(0, t) + d1 ∫₀¹ θ(ξ)φ(1 − ξ, t)dξ + e(0, t), (5.13)

with e(0, t) = 0 for t ≥ d1 . From (5.13), we propose the following adaptive law with
normalization and projection

θ̂t (x, t) = projθ̄ {γ1 (x) ê(0, t)φ(1 − x, t)/(1 + ||φ(t)||²), θ̂(x, t)}, θ̂(x, 0) = θ̂0 (x) (5.14)

where γ̄ ≥ γ1 (x) ≥ γ > 0 for all x ∈ [0, 1] is a design gain, and the initial guess is
chosen inside the feasible domain, i.e.

||θ̂0 ||∞ ≤ θ̄, (5.15)



where θ̄ is as given in Assumption 5.1. The projection operator proj{·} is defined in
Appendix A.
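For reference, one common pointwise form of such a projection (we assume the standard interval projection here; the precise operator of Appendix A may differ in details such as boundary smoothing) simply zeroes any update that would push the estimate out of the admissible interval:

```python
def proj(tau, est, lo, hi):
    """Pointwise interval projection for gradient adaptive laws:
    pass the update rate tau through unless the estimate est sits
    on the boundary of [lo, hi] and tau points outward."""
    if est >= hi and tau > 0.0:
        return 0.0
    if est <= lo and tau < 0.0:
        return 0.0
    return tau
```

Applied pointwise in x with lo = −θ̄ and hi = θ̄, this keeps θ̂(x, t) inside the admissible set for all t while never discarding an inward-pointing update, which is what property (5.16a) and the inequality of Lemma A.1 require.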
Lemma 5.1 The adaptive law (5.14) with initial condition satisfying (5.15) has the
following properties

||θ̂(t)||∞ ≤ θ̄, ∀t ≥ 0 (5.16a)
||θ̃t || ∈ L∞ ∩ L2 (5.16b)
e(0, ·), σ ∈ L∞ ∩ L2 (5.16c)

where θ̃ = θ − θ̂, and

σ(t) = ê(0, t)/√(1 + ||φ(t)||²). (5.17)

Proof The property (5.16a) follows from the projection operator and the condition
(5.15) (Lemma A.1 in Appendix A). Consider the Lyapunov function candidate

V1 (t) = d1 ∫₀¹ e²(x, t)dx + (d1/2) ∫₀¹ γ1⁻¹(x)θ̃²(x, t)dx. (5.18)

Differentiating with respect to time and inserting the dynamics (5.9) and adaptive
law (5.14), we find

V̇1 (t) = 2 ∫₀¹ e(x, t)ex (x, t)dx
    − d1 ∫₀¹ γ1⁻¹(x)θ̃(x, t) projθ̄ {γ1 (x) ê(0, t)φ(1 − x, t)/(1 + ||φ(t)||²), θ̂(x, t)} dx. (5.19)

Since −θ̃(x, t)projθ̄ (τ (x, t), θ̂(x, t)) ≤ −θ̃(x, t)τ (x, t) (Lemma A.1), we get

V̇1 (t) ≤ e²(1, t) − e²(0, t) − d1 ê(0, t)/(1 + ||φ(t)||²) ∫₀¹ θ̃(x, t)φ(1 − x, t)dx. (5.20)

We note from (5.7), (5.8), (5.10) and (5.12) that

ê(0, t) = e(0, t) + d1 ∫₀¹ θ̃(x, t)φ(1 − x, t)dx (5.21)

and inserting this into (5.20), we obtain

V̇1 (t) ≤ −e²(0, t) − σ²(t) + ê(0, t)e(0, t)/(1 + ||φ(t)||²). (5.22)

Applying Young’s inequality to the last term gives

1 1
V̇1 (t) ≤ − e2 (0, t) − σ 2 (t), (5.23)
2 2
where we used definition (5.17). This proves that V1 (t) is bounded and non-
increasing, and hence has a limit as t → ∞. Integrating (5.23) in time from zero
to infinity gives

e(0, ·), σ ∈ L2 , (5.24)

while from the relationship (5.21) with e(0, t) = 0 for t ≥ d1 , we find

|σ(t)| = |ê(0, t)|/√(1 + ||φ(t)||²) ≤ d1 ||θ̃(t)|| ||φ(t)||/√(1 + ||φ(t)||²) ≤ d1 ||θ̃(t)||, (5.25)

which proves

σ ∈ L∞ . (5.26)

From the adaptation law (5.14), we have

||θ̂t (t)|| ≤ γ̄ |ê(0, t)| ||φ(t)||/(1 + ||φ(t)||²)
    ≤ γ̄ (|ê(0, t)|/√(1 + ||φ(t)||²))(||φ(t)||/√(1 + ||φ(t)||²))
    ≤ γ̄ |σ(t)|, (5.27)

which, along with (5.16c), gives (5.16b). □

5.2.3 Control Law

Consider the control law

U (t) = ∫₀¹ k̂(1 − ξ, t)v̂(ξ, t)dξ (5.28)

where v̂ is generated using (5.10), and k̂ is the on-line solution to the Volterra integral
equation

μk̂(x, t) = ∫₀ˣ k̂(x − ξ, t)θ̂(ξ, t)dξ − θ̂(x, t). (5.29)
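Evaluating (5.28) at each sample time is a single quadrature; a sketch on a shared uniform grid (trapezoidal rule is our own choice):

```python
import numpy as np

def control_signal(k_hat, v_hat):
    """U(t) = int_0^1 k_hat(1 - xi, t) * v_hat(xi, t) dxi  (eq. (5.28)),
    with both functions sampled on the same uniform grid over [0, 1];
    k_hat(1 - xi) is k_hat read in reverse order."""
    N = len(v_hat)
    dx = 1.0 / (N - 1)
    f = k_hat[::-1] * v_hat
    return dx * (np.sum(f) - 0.5 * (f[0] + f[-1]))
```

In a closed-loop simulation this is called once per time step, after k̂ has been refreshed from the current θ̂.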

Theorem 5.1 Consider system (5.1), filters (5.5), adaptive laws (5.14) and the state
estimate (5.10). The control law (5.28) guarantees

||v||, ||v̂||, ||ψ||, ||φ||, ||v||∞ , ||v̂||∞ , ||ψ||∞ , ||φ||∞ ∈ L2 ∩ L∞ (5.30a)
||v||, ||v̂||, ||ψ||, ||φ||, ||v||∞ , ||v̂||∞ , ||ψ||∞ , ||φ||∞ → 0. (5.30b)

Before proving Theorem 5.1 in Sect. 5.2.5, we will, as we did for the identifier-based
design, introduce a target system and a backstepping transformation that facilitate
the proof.

5.2.4 Backstepping and Target System

Consider the transformation

w(x, t) = v̂(x, t) − ∫₀ˣ k̂(x − ξ, t)v̂(ξ, t)dξ = T [v̂](x, t) (5.31)

where k̂ is the solution to

μk̂(x, t) = −T [θ̂](x, t), (5.32)

which is equivalent to (5.29). As with all Volterra integral transformations, transformation
(5.31) is invertible (Theorem 1.3), with an inverse in the form

v̂(x, t) = T⁻¹[w](x, t) (5.33)

for a Volterra integral operator T⁻¹. Consider also the target system

wt (x, t) − μwx (x, t) = −μk̂(x, t)ê(0, t)
    + d1 T [∫ₓ¹ θ̂t (ξ, t)φ(1 − (ξ − x), t)dξ](x, t)
    − ∫₀ˣ k̂t (x − ξ, t)T⁻¹[w](ξ, t)dξ (5.34a)
w(1, t) = 0 (5.34b)
w(x, 0) = w0 (x). (5.34c)

Lemma 5.2 The backstepping transformation (5.31) and controller (5.28) map sys-
tem (5.11) into the target system (5.34).

Proof Differentiating (5.31) with respect to time and space, respectively, inserting
the dynamics (5.11a) and integrating by parts yield

v̂t (x, t) = wt (x, t) + μk̂(0, t)v̂(x, t) − μk̂(x, t)v̂(0, t)
    + μ ∫₀ˣ k̂x (x − ξ, t)v̂(ξ, t)dξ + ∫₀ˣ k̂(x − ξ, t)θ̂(ξ, t)dξ v(0, t)
    + d1 ∫₀ˣ k̂(x − ξ, t) ∫_ξ^1 θ̂t (s, t)φ(1 − (s − ξ), t)ds dξ
    + ∫₀ˣ k̂t (x − ξ, t)v̂(ξ, t)dξ (5.35)

and

v̂x (x, t) = wx (x, t) + k̂(0, t)v̂(x, t) + ∫₀ˣ k̂x (x − ξ, t)v̂(ξ, t)dξ. (5.36)

Inserting the results into (5.11a), we obtain

wt (x, t) − μwx (x, t) − [μk̂(x, t) + θ̂(x, t) − ∫₀ˣ k̂(x − ξ, t)θ̂(ξ, t)dξ] v̂(0, t)
    − [θ̂(x, t) − ∫₀ˣ k̂(x − ξ, t)θ̂(ξ, t)dξ] ê(0, t)
    − d1 ∫ₓ¹ θ̂t (ξ, t)φ(1 − (ξ − x), t)dξ
    + d1 ∫₀ˣ k̂(x − ξ, t) ∫_ξ^1 θ̂t (s, t)φ(1 − (s − ξ), t)ds dξ
    + ∫₀ˣ k̂t (x − ξ, t)v̂(ξ, t)dξ = 0 (5.37)

which can be rewritten as (5.34a) when using (5.29). The boundary condition (5.34b)
follows from inserting x = 1 into (5.31), and using (5.28). □

5.2.5 Proof of Theorem 5.1

As for the identifier-based solution, the following inequalities hold for all t ≥ 0 since
θ̂ is bounded by projection

||k̂(t)|| ≤ Mk (5.38a)
||w(t)|| ≤ G 1 ||v̂(t)|| (5.38b)
||v̂(t)|| ≤ G 2 ||w(t)|| (5.38c)
88 5 Adaptive Output-Feedback Controller

for some positive constants G 1 , G 2 and Mk , and

||k̂t || ∈ L2 . (5.39)

Consider the Lyapunov-like functions

V2 (t) = ∫₀¹ (1 + x)w²(x, t)dx, (5.40a)
V3 (t) = ∫₀¹ (1 + x)φ²(x, t)dx. (5.40b)

Differentiating (5.40a) with respect to time, inserting the dynamics (5.34a) and
integrating by parts, we obtain

V̇2 (t) ≤ −μw²(0, t) − μ||w(t)||² − 2μ ∫₀¹ (1 + x)w(x, t)k̂(x, t)dx ê(0, t)
    + 2d1 ∫₀¹ (1 + x)w(x, t)T [∫ₓ¹ θ̂t (ξ, t)φ(1 − (ξ − x), t)dξ](x, t)dx
    − 2 ∫₀¹ (1 + x)w(x, t) ∫₀ˣ k̂t (x − ξ, t)T⁻¹[w](ξ, t)dξ dx (5.41)

where we have inserted for the boundary condition (5.34b). We now consider the
three integrals in (5.41) individually. Applying Young's inequality, we obtain

−2μ ∫₀¹ (1 + x)w(x, t)k̂(x, t)dx ê(0, t)
    ≤ ρ1 ∫₀¹ w²(x, t)dx + (4/ρ1) μ² ∫₀¹ k̂²(x, t)dx ê²(0, t)
    ≤ ρ1 ||w(t)||² + (4/ρ1) μ² Mk² ê²(0, t) (5.42)

where we have used (5.38a). Next, Young's inequality yields

2d1 ∫₀¹ (1 + x)w(x, t)T [∫ₓ¹ θ̂t (ξ, t)φ(1 − (ξ − x), t)dξ](x, t)dx
    ≤ ρ2 ∫₀¹ w²(x, t)dx + (4/ρ2) d1² ∫₀¹ (T [∫ₓ¹ θ̂t (ξ, t)φ(1 − (ξ − x), t)dξ](x, t))² dx
    ≤ ρ2 ||w(t)||² + (4/ρ2) d1² G1² ∫₀¹ (∫ₓ¹ θ̂t (ξ, t)φ(1 − (ξ − x), t)dξ)² dx
    ≤ ρ2 ||w(t)||² + (4/ρ2) d1² G1² ∫₀¹ (∫₀¹ |θ̂t (ξ, t)||φ(1 − ξ, t)|dξ)² dx
    ≤ ρ2 ||w(t)||² + (4/ρ2) d1² G1² (∫₀¹ |θ̂t (ξ, t)||φ(1 − ξ, t)|dξ)² (5.43)

where we used inequality (5.38b). Using Cauchy–Schwarz' inequality, we find

2d1 ∫₀¹ (1 + x)w(x, t)T [∫ₓ¹ θ̂t (ξ, t)φ(1 − (ξ − x), t)dξ](x, t)dx
    ≤ ρ2 ||w(t)||² + (4/ρ2) d1² G1² ||θ̂t (t)||² ||φ(t)||². (5.44)

For the last term of (5.41), we get

−2 ∫₀¹ (1 + x)w(x, t) ∫₀ˣ k̂t (x − ξ, t)T⁻¹[w](ξ, t)dξ dx
    ≤ ρ3 ∫₀¹ w²(x, t)dx + (4/ρ3) ∫₀¹ (∫₀ˣ k̂t (x − ξ, t)T⁻¹[w](ξ, t)dξ)² dx
    ≤ ρ3 ||w(t)||² + (4/ρ3) ∫₀¹ (∫₀¹ k̂t (1 − ξ, t)T⁻¹[w](ξ, t)dξ)² dx
    ≤ ρ3 ||w(t)||² + (4/ρ3) (∫₀¹ k̂t (1 − ξ, t)T⁻¹[w](ξ, t)dξ)²
    ≤ ρ3 ||w(t)||² + (4/ρ3) ||k̂t (t)||² ||T⁻¹[w](t)||²
    ≤ ρ3 ||w(t)||² + (4/ρ3) G2² ||k̂t (t)||² ||w(t)||² (5.45)

where we used Young’s and Cauchy–Schwarz’ inequalities, and (5.38c). Substituting


all this into (5.41), we obtain

4 2 2 2
V̇2 (t) ≤ −μw 2 (0, t) − [μ − ρ1 − ρ2 − ρ3 ] ||w(t)||2 + μ Mk ê (0, t)
ρ1
4 2 2 4
+ d G ||θ̂t (t)||2 ||φ(t)||2 + G 22 ||k̂t (t)||2 ||w(t)||2 . (5.46)
ρ2 1 1 ρ3

Next, from differentiating (5.40b) with respect to time, inserting the dynamics (5.5b)
and integrating by parts, we obtain

V̇3 (t) = 2μφ²(1, t) − μφ²(0, t) − μ||φ(t)||²
    ≤ 4μw²(0, t) + 4μê²(0, t) − μ||φ(t)||² (5.47)

where we have inserted for the boundary condition (5.5b), recalling that y(t) =
v(0, t) = v̂(0, t) + ê(0, t) = w(0, t) + ê(0, t). Now, forming the Lyapunov function
candidate

V4 (t) = 4V2 (t) + V3 (t) (5.48)

and choosing

ρ1 = ρ2 = ρ3 = μ/6 (5.49)

we find

V̇4 (t) ≤ −2μ||w(t)||² − μ||φ(t)||² + 4μ(24Mk² + 1)ê²(0, t)
    + (96/μ) G1² d1² ||θ̂t (t)||² ||φ(t)||² + (96/μ) G2² ||k̂t (t)||² ||w(t)||². (5.50)

Using the definition of σ in (5.17), we rewrite ê²(0, t) as

ê²(0, t) = σ²(t)(1 + ||φ(t)||²) (5.51)

to obtain

V̇4 (t) ≤ −2μ||w(t)||² − μ||φ(t)||² + l1 (t)||w(t)||² + l2 (t)||φ(t)||² + l3 (t) (5.52)

where

l1 (t) = (96/μ) G2² ||k̂t (t)||² (5.53a)
l2 (t) = (96/μ) G1² d1² ||θ̂t (t)||² + 4μ(24Mk² + 1)σ²(t) (5.53b)
l3 (t) = 4μ(24Mk² + 1)σ²(t) (5.53c)

are nonnegative, bounded and integrable functions. In terms of V2 and V3 , we have

V̇4 (t) ≤ −μV2 (t) − (1/2)μV3 (t) + l1 (t)V2 (t) + l2 (t)V3 (t) + l3 (t), (5.54)
and in terms of V4 , we have

V̇4 (t) ≤ −(1/4)μV4 (t) + l4 (t)V4 (t) + l3 (t), (5.55)

where

l4 (t) = (1/4)l1 (t) + l2 (t) (5.56)
is a nonnegative, bounded and integrable function. Lemma B.3 in Appendix B gives

V4 ∈ L1 ∩ L∞ , V4 → 0, (5.57)

and hence

||w||, ||φ|| ∈ L2 ∩ L∞ , ||w||, ||φ|| → 0. (5.58)

Furthermore, from the invertibility of the transformation (5.31), we get

||v̂|| ∈ L∞ ∩ L2 , ||v̂|| → 0, (5.59)

and from (5.10),

||ψ|| ∈ L∞ ∩ L2 , ||ψ|| → 0. (5.60)

From (5.7), (5.8) and the fact that e ≡ 0 for t ≥ d1 , we obtain

||v|| ∈ L∞ ∩ L2 , ||v|| → 0. (5.61)

We now proceed to show pointwise boundedness, square integrability and convergence
to zero of v for all x ∈ [0, 1]. From the filter structure (5.5a) and the control
law (5.28), we obtain

U ∈ L∞ ∩ L2 , U → 0, (5.62)

and

||ψ||∞ ∈ L∞ ∩ L2 , ||ψ||∞ → 0. (5.63)

Then, from (5.7) and (5.8), with e ≡ 0 for t ≥ d1 ,

||v||∞ ∈ L∞ ∩ L2 , ||v||∞ → 0, (5.64)

and in particular, v(0, ·) ∈ L∞ ∩ L2 , v(0, ·) → 0, and from (5.5b), we get

||φ||∞ ∈ L∞ ∩ L2 , ||φ||∞ → 0. (5.65)



From (5.10), and the invertibility of the transformation (5.31), we find

||v̂||∞ , ||w||∞ ∈ L∞ ∩ L2 , ||v̂||∞ , ||w||∞ → 0. (5.66)

5.3 Simulations

The system (5.1), the filters (5.5) and the control law of Theorem 5.1 are implemented
using the same system parameters as for the simulation for the identifier-based design
in Chap. 4, that is

μ = 3/4, θ(x) = (1/2)(1 + e^{−x} cosh(πx)) (5.67)

and initial condition

u0 (x) = x. (5.68)

All additional initial conditions are set to zero. The design gains are set to

γ ≡ 0, θ = −100, θ̄ = 100. (5.69)

Fig. 5.1 Left: State (solid red), filter ψ (dashed-dotted blue) and filter φ (dashed green) norms. Right: Adaptive state estimate error norm

Fig. 5.2 Actuation signal

Fig. 5.3 Left: Estimated parameter θ̂. Right: Actual value of θ (solid black) and final estimate θ̂ (dashed red)

Successive approximations are used to solve (5.29) for the gain k̂.
It is observed from Fig. 5.1 that the norms of the system state and filters converge
asymptotically to zero. The error in the adaptive state estimate (v − v̂) also converges
to zero, and so does the actuation signal U seen in Fig. 5.2. As in the identifier case,
the estimated parameter θ̂ is seen from Fig. 5.3 to be bounded and to converge,
although not to θ.

5.4 Notes

The swapping-based adaptive controller is more complicated than the identifier-based
controller of Chap. 4 in several ways. Firstly, it requires two filters, each of the same
dynamical order as the system itself. A rule of thumb is that the swapping method
requires m + 1 filters, where m is the number of unknowns. Secondly, the Lyapunov
proof is more complicated as some of the filters have to be included in the analysis
as well.
An advantage, however, is that the swapping method exploits the linearity of the
system and separates the information in different filters, effectively “decoupling”
the system. In this case, the information from the actuation signal is stored in the
filter ψ, while the information from the measurement y is stored in the filter φ.
Swapping also brings the system to a standard linear parametric form (Eq. (5.13)
above), opening up for applying a large family of already well-established adaptive
laws. Another advantage of the swapping-based controller is, of course, the fact
that it is an output-feedback adaptive controller, as opposed to the Lyapunov-based
controller of Xu and Liu (2016) and the identifier-based controller of Chap. 4, which
both are state-feedback controllers. This, however, is not a general feature of the
swapping method, but achievable for the above adaptive control problem since the
only uncertain parameter in (5.1) is multiplied by a measured signal.

In the next chapter, we will extend the swapping-based method to solve a model
reference adaptive control problem and output-feedback adaptive stabilization prob-
lem for system (2.1).

References

Bernard P, Krstić M (2014) Adaptive output-feedback stabilization of non-local hyperbolic PDEs.
Automatica 50:2692–2699
Xu Z, Liu Y (2016) Adaptive boundary stabilization for first-order hyperbolic PDEs with unknown
spatially varying parameter. Int J Robust Nonlinear Control 26(3):613–628
Chapter 6
Model Reference Adaptive Control

6.1 Introduction

We consider here an adaptive version of the output tracking results established in
Sect. 3.5, and solve a model reference adaptive control problem where the goal is
to make a measured signal track a reference signal from minimal knowledge of the
system parameters. Consider system (2.1), which we restate here

ut (x, t) − λ(x)ux (x, t) = f (x)u(x, t) + g(x)u(0, t) + ∫₀ˣ h(x, ξ)u(ξ, t)dξ (6.1a)
u(1, t) = k1 U (t) (6.1b)
u(x, 0) = u0 (x) (6.1c)
y(t) = k2 u(0, t), (6.1d)

for system parameters satisfying

λ ∈ C¹([0, 1]), λ(x) > 0, ∀x ∈ [0, 1] (6.2a)
f, g ∈ C⁰([0, 1]), h ∈ C⁰(T ), k1 , k2 ∈ R\{0}, (6.2b)

where T is defined in (1.1a), and initial condition u0 satisfying

u0 ∈ B([0, 1]). (6.3)

The goal is to make y(t) track a signal yr (t) generated from a reference model.
Additionally, the system should be stabilized. The only required knowledge of the
system is stated in the following assumption.

© Springer Nature Switzerland AG 2019
H. Anfinsen and O. M. Aamo, Adaptive Control of Hyperbolic PDEs,
Communications and Control Engineering,
https://doi.org/10.1007/978-3-030-05879-1_6

Assumption 6.1 The following quantities are known:

μ⁻¹ = d2 = ∫₀¹ dγ/λ(γ), sign(k1 k2 ). (6.4)
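Both quantities in (6.4) are easy to obtain in practice; for instance, d2 is a single quadrature of 1/λ. A sketch (Python; the example λ and the grid resolution are illustrative choices of ours):

```python
import numpy as np

def transport_delay(lam, n=10001):
    """d2 = int_0^1 dgamma / lambda(gamma) from (6.4), evaluated
    with the composite trapezoidal rule. lam must be positive on
    [0, 1] (assumption (6.2a)), so the integrand is well defined."""
    g = np.linspace(0.0, 1.0, n)
    f = 1.0 / lam(g)
    dg = g[1] - g[0]
    return dg * (np.sum(f) - 0.5 * (f[0] + f[-1]))
```

μ is then recovered as 1/d2.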

The tracking objective that we seek to achieve is mathematically stated as

lim_{t→∞} ∫ₜ^{t+T} (y(s) − yr (s))² ds = 0 (6.5)

for some T > 0, where the reference signal yr is generated using the reference model

bt (x, t) − μbx (x, t) = 0 (6.6a)
b(1, t) = r (t) (6.6b)
b(x, 0) = b0 (x) (6.6c)
yr (t) = b(0, t) (6.6d)

for some initial condition b0 ∈ B([0, 1]) and a bounded reference signal r of choice.
We note that system (6.6) is simply a time delay, since yr (t) = r (t − d2 ) for t ≥ d2 .
Regarding the reference signal r , we assume the following.
Assumption 6.2 The reference signal r (t) is known for all t ≥ 0, and there exists a
constant r̄ so that

|r (t)| ≤ r̄ (6.7)

for all t ≥ 0.
We proceed in Sect. 6.2 by solving the model reference adaptive control prob-
lem stated above. In Sect. 6.3, we solve the adaptive output-feedback stabilization
problem, which is covered by the MRAC by simply setting r ≡ 0, and prove some
additional stability and convergence properties. The controllers are demonstrated on
a linearized Korteweg de Vries-like equation in Sect. 6.4, before some concluding
remarks are offered in Sect. 6.5.
This model reference adaptive control problem was originally solved in Anfinsen
and Aamo (2017), and is based on the swapping-based adaptive output-feedback
stabilization scheme presented in Chap. 5.

6.2 Model Reference Adaptive Control

Firstly, invertible mappings are introduced to bring system (6.1) into an equivalent,
simplified system, where the number of uncertain parameters is reduced to only
two. Then filters are designed so the state in the new system can be expressed as a
linear static parametrization of the filter states and the uncertain parameters, facilitating
the design of adaptive laws. The adaptive laws are then combined with a
backstepping-based adaptive control law that adaptively stabilizes the system, and
achieves the tracking goal (6.5).

6.2.1 Canonical Form

Consider system (2.25), which we for the reader's convenience restate here

v̌t (x, t) − μv̌x (x, t) = 0 (6.8a)
v̌(1, t) = ρU (t) + ∫₀¹ σ(ξ)v̌(ξ, t)dξ (6.8b)
v̌(x, 0) = v̌0 (x) (6.8c)
y(t) = v̌(0, t). (6.8d)

Lemma 6.1 System (6.1) is equivalent to system (6.8), where ρ and σ are uncertain
parameters which are linear combinations of f, g, h and λ, while μ is known and
specified in Assumption 6.1.

Proof This is proved as part of the proof of Lemma 2.1.

Consider now the difference

ž(x, t) = v̌(x, t) − b(x, t) (6.9)

which can straightforwardly, using (6.8) and (6.6), be shown to have the dynamics

žt (x, t) − μžx (x, t) = 0 (6.10a)
ž(1, t) = ρU (t) − r (t) + ∫₀¹ σ(ξ)v̌(ξ, t)dξ (6.10b)
ž(x, 0) = ž0 (x) (6.10c)
y(t) = ž(0, t) + b(0, t) (6.10d)

where ž0 = v̌0 − b0 . Consider also the backstepping transformation

z(x, t) = ž(x, t) − ∫₀ˣ σ(1 − x + ξ)ž(ξ, t)dξ (6.11)

and the target system, which we refer to as the canonical form

zt (x, t) − μzx (x, t) = θ(x)z(0, t) (6.12a)
z(1, t) = ρU (t) − r (t) + d2 ∫₀¹ θ(ξ)b(1 − ξ, t)dξ (6.12b)

z(x, 0) = z0 (x) (6.12c)
y(t) = z(0, t) + b(0, t) (6.12d)

for some initial condition z0 ∈ B([0, 1]).

Lemma 6.2 The backstepping transformation (6.11) maps system (6.10) into (6.12),
with

θ(x) = μσ(1 − x) (6.13)

and

z0 (x) = ž0 (x) − ∫₀ˣ σ(1 − x + ξ)ž0 (ξ)dξ. (6.14)

Proof From differentiating (6.11) with respect to time and space, respectively, and
inserting the result into (6.10a), we obtain

0 = žt (x, t) − μžx (x, t) = zt (x, t) − μzx (x, t) − μσ(1 − x)ž(0, t) (6.15)

which yields the dynamics (6.12a) provided θ is chosen according to (6.13). Evaluating
(6.11) at x = 1 and inserting the boundary condition (6.10b) gives

z(1, t) = ρU (t) − r (t) + ∫₀¹ σ(ξ)(ž(ξ, t) + b(ξ, t))dξ − ∫₀¹ σ(ξ)ž(ξ, t)dξ
        = ρU (t) − r (t) + d2 ∫₀¹ θ(ξ)b(1 − ξ, t)dξ (6.16)

which gives (6.12b). Inserting t = 0 into (6.11) gives the expression (6.14) for z0 .
The fact that ž(0, t) = z(0, t) immediately gives (6.12d). □

The goal is now to design an adaptive controller that achieves

∫ₜ^{t+T} z²(0, s)ds → 0, (6.17)

which, from the definition of z, is equivalent to (6.5).
6.2.2 Filter Design and Non-adaptive State Estimate

We now design the following filters

ψt (x, t) − μψx (x, t) = 0, ψ(1, t) = U (t), ψ(x, 0) = ψ0 (x) (6.18a)
φt (x, t) − μφx (x, t) = 0, φ(1, t) = y(t) − b(0, t), φ(x, 0) = φ0 (x) (6.18b)
Mt (x, ξ, t) − μMx (x, ξ, t) = 0, M(1, ξ, t) = b(1 − ξ, t), M(x, ξ, 0) = M0 (x, ξ) (6.18c)

with initial conditions satisfying

ψ0 , φ0 ∈ B([0, 1]), M0 ∈ B([0, 1]²). (6.19)

We propose a non-adaptive estimate of the state z as follows

z̄(x, t) = ρψ(x, t) − b(x, t) + d2 ∫ₓ¹ θ(ξ)φ(1 − (ξ − x), t)dξ + d2 ∫₀¹ θ(ξ)M(x, ξ, t)dξ. (6.20)

Lemma 6.3 Consider system (6.12), filters (6.18) and the non-adaptive estimate z̄
generated from (6.20). Then

z̄ ≡ z (6.21)

for t ≥ d2 , with d2 given by (6.4).

Proof We construct the error signal

e(x, t) = z(x, t) − z̄(x, t), (6.22)

which can straightforwardly be verified to satisfy the dynamics

et (x, t) − μex (x, t) = 0, e(1, t) = 0, e(x, 0) = e0 (x) (6.23)

from which it follows that e ≡ 0 after the finite time d2 = μ⁻¹. □

6.2.3 Adaptive Laws and State Estimates

We start by assuming the following.

Assumption 6.3 Bounds on θ and ρ are known. That is, we know constants
θ, θ̄, ρ, ρ̄ such that

ρ ≤ ρ ≤ ρ̄, θ ≤ θ(x) ≤ θ̄ (6.24)

for all x ∈ [0, 1], where

0 ∉ [ρ, ρ̄]. (6.25)

This assumption is not a limitation, since the bounds are arbitrary. Condition (6.25)
requires the sign of the product k1 k2 to be known (see (2.19b)), which is ensured by
Assumption 6.1. Now, motivated by (6.20), we construct an adaptive estimate of the
state by replacing the uncertain parameters by their estimates as follows

ẑ(x, t) = ρ̂(t)ψ(x, t) − b(x, t) + d2 ∫ₓ¹ θ̂(ξ, t)φ(1 − (ξ − x), t)dξ + d2 ∫₀¹ θ̂(ξ, t)M(x, ξ, t)dξ. (6.26)

From (6.20), we also have

y(t) − b(0, t) = z(0, t) = ρψ(0, t) − b(0, t) + d2 ∫₀¹ θ(ξ)φ(1 − ξ, t)dξ
    + d2 ∫₀¹ θ(ξ)M(0, ξ, t)dξ + e(0, t) (6.27)

where we have from Lemma 6.3 that e(0, t) = 0 in a finite time d2 . We propose the
following adaptive laws

ρ̂̇(t) = projρ,ρ̄ {γ1 ê(0, t)ψ(0, t)/(1 + f²(t)), ρ̂(t)} (6.28a)
θ̂t (x, t) = projθ,θ̄ {γ2 (x) ê(0, t)(φ(1 − x, t) + m0 (x, t))/(1 + f²(t)), θ̂(x, t)} (6.28b)
ρ̂(0) = ρ̂0 (6.28c)
θ̂(x, 0) = θ̂0 (x) (6.28d)

where

ê(x, t) = z(x, t) − ẑ(x, t) (6.29)

and

m0 (x, t) = M(0, x, t) (6.30)

with

f²(t) = ψ²(0, t) + ||φ(t)||² + ||m0 (t)||² (6.31)



and γ1 > 0, γ2 (x) > 0, ∀x ∈ [0, 1] are design gains. The initial guesses ρ̂0 , θ̂0 (x)
are chosen inside the feasible domain, i.e.

ρ ≤ ρ̂0 ≤ ρ̄ θ ≤ θ̂0 (x) ≤ θ̄, ∀x ∈ [0, 1] (6.32)

and the projection operator is defined in Appendix A.
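In the scalar case, the projection operator of Appendix A reduces to the standard interval projection that discards updates pushing an estimate past its bounds. A minimal sketch (the function name and the explicit-Euler step with made-up signal values are illustrative assumptions, not the book's implementation):

```python
def proj(tau, theta, lo, hi):
    """Interval projection: pass the update tau through unless theta sits
    on a bound and tau would push it outside [lo, hi]."""
    if theta <= lo and tau < 0.0:
        return 0.0
    if theta >= hi and tau > 0.0:
        return 0.0
    return tau

# One explicit-Euler step of the rho-hat law (6.28a), with made-up values
# standing in for e-hat(0,t), psi(0,t) and the normalization f^2(t).
gamma1, dt = 20.0, 1e-3
rho_lo, rho_hi = 0.1, 100.0
rho_hat = 1.0
e_hat0, psi0, f2 = 0.5, 2.0, 4.0
rho_hat += dt * proj(gamma1 * e_hat0 * psi0 / (1.0 + f2), rho_hat, rho_lo, rho_hi)
```

With the initial guess chosen inside the feasible domain, as in (6.32), the update can never leave [lo, hi], which is exactly property (6.33a).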


Lemma 6.4 The adaptive laws (6.28) with initial conditions (6.32) provide the fol-
lowing properties

ρ ≤ ρ̂(t) ≤ ρ̄, θ ≤ θ̂(x, t) ≤ θ̄, ∀x ∈ [0, 1], t ≥ 0 (6.33a)


ρ̂˙, ||θ̃t|| ∈ L2 ∩ L∞ (6.33b)
ν ∈ L2 ∩ L∞ (6.33c)

where ρ̃ = ρ − ρ̂, θ̃ = θ − θ̂, with f 2 given in (6.31), and where we have defined

ν(t) = ê(0, t)/√(1 + f²(t)). (6.34)

Proof The property (6.33a) follows from the projection operator used in (6.28) and
the conditions (6.32). Consider the Lyapunov function candidate
V(t) = μ⁻¹ ∫₀¹ e²(x, t)dx + (1/(2γ1))ρ̃²(t) + (d2/2) ∫₀¹ γ2⁻¹(x)θ̃²(x, t)dx. (6.35)

Differentiating with respect to time, inserting the adaptive laws (6.28) and using the
property −ρ̃(t)projρ,ρ̄ (τ (t), ρ̂(t)) ≤ −ρ̃(t)τ (t) (Lemma A.1 in Appendix A), and
similarly for θ̃, we find

V̇(t) ≤ e²(1, t) − e²(0, t) − (ê(0, t)/(1 + f²(t))) [ ρ̃(t)ψ(0, t)
       + d2 ∫₀¹ θ̃(x, t)(φ(1 − x, t) + m0(x, t))dx ]. (6.36)

Using the relationship


ê(0, t) = ρ̃(t)ψ(0, t) + d2 ∫₀¹ θ̃(ξ, t)(φ(1 − ξ, t) + m0(ξ, t))dξ + e(0, t) (6.37)

and inserting this into (6.36), we obtain


V̇(t) ≤ −e²(0, t) − ν²(t) + ê(0, t)e(0, t)/(1 + f²(t)) (6.38)

where we have used the definition of ν in (6.34). Young’s inequality now gives
V̇(t) ≤ −(1/2)e²(0, t) − (1/2)ν²(t). (6.39)
This proves that V is bounded, non-increasing, and hence has a limit as t → ∞.
Integrating (6.39) from zero to infinity gives e(0, ·), ν ∈ L2 . Using (6.37), we obtain,
for t ≥ d2
|ν(t)| = |ê(0, t)|/√(1 + f²(t)) = |ρ̃(t)ψ(0, t) + ∫₀¹ θ̃(ξ, t)(φ(1 − ξ, t) + m0(ξ, t))dξ| / √(1 + f²(t))
       ≤ |ρ̃(t)ψ(0, t)|/√(1 + f²(t)) + ( |∫₀¹ θ̃(ξ, t)φ(1 − ξ, t)dξ| + |∫₀¹ θ̃(ξ, t)m0(ξ, t)dξ| ) / √(1 + f²(t))
       ≤ |ρ̃(t)| |ψ(0, t)|/√(1 + f²(t)) + ||θ̃(t)|| (||φ(t)|| + ||m0(t)||)/√(1 + f²(t))
       ≤ |ρ̃(t)| + ||θ̃(t)|| (6.40)

where we used Cauchy–Schwarz’ inequality. This proves that ν ∈ L∞ . From the


adaptive laws (6.28), we have

|ρ̂˙(t)| ≤ γ1 (|ê(0, t)|/√(1 + f²(t))) (|ψ(0, t)|/√(1 + f²(t))) ≤ γ1 |ν(t)| (6.41a)
||θ̂t(t)|| ≤ ||γ2|| (|ê(0, t)|/√(1 + f²(t))) (||φ(t)||/√(1 + f²(t))) ≤ ||γ2|| |ν(t)| (6.41b)

which, along with (6.33c) gives (6.33b).

6.2.4 Control Law

We state here the main theorem. Consider the control law


U(t) = (1/ρ̂(t)) [ r(t) + ∫₀¹ k̂(1 − ξ, t)ẑ(ξ, t)dξ
       − ∫₀¹ θ̂(ξ, t)b(1 − ξ, t)dξ ] (6.42)

where ẑ is generated using (6.26), and k̂ is the on-line solution to the Volterra integral
equation
μk̂(x, t) = ∫₀ˣ k̂(x − ξ, t)θ̂(ξ, t)dξ − θ̂(x, t), (6.43)

with ρ̂ and θ̂ generated from the adaptive laws (6.28).
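Equation (6.43) is a Volterra integral equation of the second kind in k̂ and can be solved on a grid by successive approximation; a sketch under arbitrary choices (uniform grid, fixed-point iteration, constant test profile θ̂ ≡ θ0, for which differentiating (6.43) gives the closed form k̂(x) = −(θ0/μ)e^{θ0 x/μ}):

```python
import numpy as np

def solve_kernel(theta, mu, n_iter=60):
    """Fixed-point iteration for (6.43):
    mu*k(x) = int_0^x k(x - xi) theta(xi) dxi - theta(x),
    with trapezoidal quadrature on a uniform grid over [0, 1]."""
    n = len(theta)
    dx = 1.0 / (n - 1)
    k = -theta / mu                         # zeroth iterate: neglect the integral
    for _ in range(n_iter):
        conv = np.zeros(n)
        for i in range(1, n):
            y = k[i::-1] * theta[: i + 1]   # k(x_i - xi_j) * theta(xi_j)
            conv[i] = dx * (np.sum(y) - 0.5 * (y[0] + y[-1]))
        k = (conv - theta) / mu
    return k

mu, theta0, n = 1.0, 0.8, 201
theta = np.full(n, theta0)
k = solve_kernel(theta, mu)
x = np.linspace(0.0, 1.0, n)
exact = -(theta0 / mu) * np.exp(theta0 * x / mu)   # closed form for constant theta
print(float(np.max(np.abs(k - exact))))            # small discretization error
```

The Volterra structure guarantees the iteration converges (its Neumann series terminates factorially fast), which is what makes re-solving (6.43) online at each time step feasible.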



Theorem 6.1 Consider system (6.1), filters (6.18), reference model (6.6), and the
adaptive laws (6.28). Suppose Assumption 6.2 holds. Then, the control law (6.42)
guarantees (6.5), and

||u||, ||û||, ||ψ||, ||φ||, ||u||∞ , ||û||∞ , ||ψ||∞ , ||φ||∞ ∈ L∞ . (6.44)

Before proving this theorem, we apply a backstepping transformation to the state


estimate (6.26) to facilitate the subsequent Lyapunov analysis.

6.2.5 Backstepping

By straightforward calculations, it can be verified that ẑ has the dynamics

ẑt(x, t) − μẑx(x, t) = θ̂(x, t)z(0, t) + ρ̂˙(t)ψ(x, t)
                       + d2 ∫ₓ¹ θ̂t(ξ, t)φ(1 − (ξ − x), t)dξ
                       + d2 ∫₀¹ θ̂t(ξ, t)M(x, ξ, t)dξ (6.45a)
ẑ(1, t) = ρ̂(t)U(t) − r(t) + ∫₀¹ θ̂(ξ, t)b(1 − ξ, t)dξ, (6.45b)
ẑ(x, 0) = ẑ0(x) (6.45c)

for some initial condition

ẑ 0 ∈ B([0, 1]). (6.46)

Consider the backstepping transformation


η(x, t) = ẑ(x, t) − ∫₀ˣ k̂(x − ξ, t)ẑ(ξ, t)dξ = T[ẑ](x, t) (6.47)

and the inverse


ẑ(x, t) = η(x, t) − d2 ∫₀ˣ θ̂(x − ξ, t)η(ξ, t)dξ = T⁻¹[η](x, t) (6.48)

where k̂ is the on-line solution to (6.43). Consider also the target system

ηt(x, t) − μηx(x, t) = −μk̂(x, t)ê(0, t) + ρ̂˙(t)T[ψ](x, t)
                       + d2 T[ ∫ₓ¹ θ̂t(ξ, t)φ(1 − (ξ − x), t)dξ ](x, t)
                       + d2 T[ ∫₀¹ θ̂t(ξ, t)M(x, ξ, t)dξ ](x, t)
                       − ∫₀ˣ k̂t(x − ξ, t)T⁻¹[η](ξ, t)dξ (6.49a)
η(1, t) = 0 (6.49b)
η(x, 0) = η0(x). (6.49c)

Lemma 6.5 The transformation (6.47) with inverse (6.48) and controller (6.42) map
system (6.45) into (6.49).

Proof Differentiating (6.47) with respect to time and space, respectively, inserting
the dynamics (6.45a) and integrating by parts, yield

ẑt(x, t) = ηt(x, t) + μk̂(0, t)ẑ(x, t) − μk̂(x, t)ẑ(0, t)
           + μ ∫₀ˣ k̂x(x − ξ, t)ẑ(ξ, t)dξ + ∫₀ˣ k̂(x − ξ, t)θ̂(ξ, t)dξ ẑ(0, t)
           + ∫₀ˣ k̂(x − ξ, t)θ̂(ξ, t)dξ ê(0, t) + ρ̂˙(t) ∫₀ˣ k̂(x − ξ, t)ψ(ξ, t)dξ
           + d2 ∫₀ˣ k̂(x − ξ, t) ∫_ξ¹ θ̂t(s, t)φ(1 − (s − ξ), t)ds dξ
           + d2 ∫₀ˣ k̂(x − ξ, t) ∫_ξ¹ θ̂t(s, t)M(ξ, s, t)ds dξ
           + ∫₀ˣ k̂t(x − ξ, t)ẑ(ξ, t)dξ (6.50)

and
ẑx(x, t) = ηx(x, t) + k̂(0, t)ẑ(x, t) + ∫₀ˣ k̂x(x − ξ, t)ẑ(ξ, t)dξ. (6.51)

Inserting (6.50) and (6.51) into (6.45a), we obtain (6.49a). Inserting x = 1 into (6.47),
using (6.45b) and the control law (6.42) we obtain (6.49b).
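The pair (6.47)–(6.48) can be spot-checked numerically: for a constant profile θ̂ ≡ θ0, (6.43) has the closed-form solution k̂(x) = −(θ0/μ)e^{θ0 x/μ}, and applying T followed by T⁻¹ should recover the original function. A sketch (the profiles, test function and grid are arbitrary choices):

```python
import numpy as np

# Closed-form kernel of (6.43) for constant theta-hat = theta0:
# differentiating (6.43) gives mu*k' = theta0*k with k(0) = -theta0/mu.
mu, theta0, n = 1.0, 0.8, 401
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
k = -(theta0 / mu) * np.exp(theta0 * x / mu)
theta = np.full(n, theta0)
d2 = 1.0 / mu

def volterra(f, kernel):
    """g(x) = f(x) - int_0^x kernel(x - xi) f(xi) dxi, trapezoidal rule."""
    g = f.copy()
    for i in range(1, n):
        y = kernel[i::-1] * f[: i + 1]
        g[i] = f[i] - dx * (np.sum(y) - 0.5 * (y[0] + y[-1]))
    return g

z = np.sin(3.0 * x) + x**2                 # arbitrary test function
eta = volterra(z, k)                       # direct transformation (6.47)
z_back = volterra(eta, d2 * theta)         # claimed inverse (6.48)
print(float(np.max(np.abs(z_back - z))))   # ~0 up to quadrature error
```

That the composition returns the identity up to quadrature error reflects the algebraic relation between (6.43) and the inverse kernel −d2 θ̂ in (6.48).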

6.2.6 Proof of Theorem 6.1

Since the parameter θ̂ is bounded by projection, it follows from (6.43) that

||k̂(t)||∞ ≤ Mk , ∀t ≥ 0 (6.52)

for some constant Mk . Moreover, from the invertibility of the transformations (6.47)
and (6.48) and the fact that the estimate θ̂ and hence also k̂ are bounded by projection,
we have from Theorem 1.3 the following inequalities

||T [u](t)||∞ ≤ G 1 ||u(t)||∞ , ||T −1 [u](t)||∞ ≤ G 2 ||u(t)||∞ (6.53)

for some positive constants G 1 and G 2 .


From differentiating (6.43) with respect to time, we find
μk̂t(x, t) − ∫₀ˣ θ̂(x − ξ, t)k̂t(ξ, t)dξ = ∫₀ˣ k̂(x − ξ, t)θ̂t(ξ, t)dξ − θ̂t(x, t), (6.54)

which, by using (6.47) and (6.48), can be written as

μT −1 [k̂t ](x, t) = −T [θ̂t ](x, t) (6.55)

or

k̂t (x, t) = −d2 T [T [θ̂t ]](x, t). (6.56)

This in turn implies that

||k̂t (t)||∞ ≤ d2 G 21 ||θ̂t (t)||∞ , (6.57)

and hence, by Lemma 6.4

||k̂t || ∈ L2 ∩ L∞ . (6.58)

Since r ∈ L∞ (Assumption 6.2), we have

||b||, ||M||, ||m 0 ||, ||b||∞ , ||M||∞ , ||m 0 ||∞ ∈ L∞ . (6.59)

Consider the functionals


V1(t) = ∫₀¹ (1 + x)η²(x, t)dx (6.60a)
V2(t) = ∫₀¹ (1 + x)φ²(x, t)dx (6.60b)
V3(t) = ∫₀¹ (1 + x)ψ²(x, t)dx. (6.60c)
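In a simulation, functionals like these are plain weighted quadratures of the discretized state; a small sketch (trapezoidal rule; the constant test profile is an arbitrary choice):

```python
import numpy as np

def weighted_l2(f, x):
    """V = int_0^1 (1 + x) f(x)^2 dx, trapezoidal rule on the grid x."""
    y = (1.0 + x) * f**2
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

x = np.linspace(0.0, 1.0, 1001)
print(weighted_l2(np.ones_like(x), x))   # int_0^1 (1 + x) dx = 1.5
```

The weight (1 + x) is what produces the strict negative boundary and domain terms after integration by parts in the estimates that follow.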

From differentiating (6.60a) with respect to time, inserting the dynamics (6.49a) and
integrating by parts, we find
V̇1(t) = 2μη²(1, t) − μη²(0, t) − μ ∫₀¹ η²(x, t)dx
        − 2μ ∫₀¹ (1 + x)η(x, t)k̂(x, t)dx ê(0, t)
        + 2 ∫₀¹ (1 + x)η(x, t)ρ̂˙(t)T[ψ](x, t)dx
        + 2d2 ∫₀¹ (1 + x)η(x, t)T[ ∫ₓ¹ θ̂t(ξ, t)φ(1 − (ξ − x), t)dξ ](x, t)dx
        + 2d2 ∫₀¹ (1 + x)η(x, t)T[ ∫₀¹ θ̂t(ξ, t)M(x, ξ, t)dξ ](x, t)dx
        − 2 ∫₀¹ (1 + x)η(x, t) ∫₀ˣ k̂t(x − ξ, t)T⁻¹[η](ξ, t)dξ dx. (6.61)

Inserting the boundary condition (6.49b), and applying Young’s inequality to the
cross terms yield

V̇1(t) ≤ −μη²(0, t) − (μ/2 − Σᵢ₌₁⁵ ρi) ∫₀¹ (1 + x)η²(x, t)dx
        + (2μ²/ρ1) ∫₀¹ k̂²(x, t)dx ê²(0, t) + (2/ρ2) ∫₀¹ ρ̂˙²(t)(T[ψ](x, t))²dx
        + (2d2²/ρ3) ∫₀¹ ( T[ ∫ₓ¹ θ̂t(ξ, t)φ(1 − (ξ − x), t)dξ ](x, t) )² dx
        + (2d2²/ρ4) ∫₀¹ ( T[ ∫₀¹ θ̂t(ξ, t)M(x, ξ, t)dξ ](x, t) )² dx
        + (2/ρ5) ∫₀¹ ( ∫₀ˣ k̂t(x − ξ, t)T⁻¹[η](ξ, t)dξ )² dx (6.62)

for some arbitrary constants ρi > 0, i = 1, . . . , 5. Choosing


ρ1 = ρ2 = ρ3 = ρ4 = ρ5 = μ/20 (6.63)
and applying Cauchy–Schwarz’s inequality to the double integrals, one can upper
bound V̇1 (t) as

V̇1(t) ≤ −μη²(0, t) − (μ/4) ∫₀¹ (1 + x)η²(x, t)dx + 40μMk²ê²(0, t)
        + 40d2ρ̂˙²(t)G1²||ψ(t)||² + 40d2³G1²||θ̂t(t)||²||φ(t)||²
        + 40d2³G1²||θ̂t(t)||²||M(t)||² + 40d2G2²||k̂t(t)||²||η(t)||². (6.64)

By expanding the term in ê2 (0, t), using the definition (6.34), as

ê2 (0, t) = ν 2 (t)(1 + ψ 2 (0, t) + ||φ(t)||2 + ||m 0 (t)||2 ) (6.65)

we obtain
V̇1(t) ≤ −μη²(0, t) − (μ/4)V1(t) + h1μν²(t)ψ²(0, t) + l1(t)V1(t)
        + l2(t)V2(t) + l3(t)V3(t) + l4(t) (6.66)

for the positive constant

h 1 = 40Mk2 (6.67)

and the nonnegative functions

l1(t) = 40d2G2²||k̂t(t)||² (6.68a)
l2(t) = 40μMk²ν²(t) + 40d2³G1²||θ̂t(t)||² (6.68b)
l3(t) = 40d2ρ̂˙²(t)G1² (6.68c)
l4(t) = 40μMk²ν²(t)(1 + ||m0(t)||²) + 40d2³G1²||θ̂t(t)||²||M(t)||² (6.68d)

which are integrable (Lemma 6.4 and (6.58)).


Next, consider (6.60b). From differentiating with respect to time and inserting
the dynamics (6.18b), integrating by parts and inserting the boundary condition in
(6.18b), we obtain
V̇2(t) = 2μ ∫₀¹ (1 + x)φ(x, t)φx(x, t)dx
       ≤ 2μφ²(1, t) − μφ²(0, t) − (μ/2) ∫₀¹ (1 + x)φ²(x, t)dx
       ≤ 4μη²(0, t) − (μ/2)V2(t) + 4μê²(0, t) (6.69)
Using (6.65), inequality (6.69) can be written as
V̇2(t) ≤ 4μη²(0, t) − (μ/2)V2(t) + l5(t)V2(t) + l6(t) + 4μν²(t)ψ²(0, t) (6.70)
for the integrable functions

l5 (t) = 4μν 2 (t), l6 (t) = 4μν 2 (t)(1 + ||m 0 (t)||2 ). (6.71a)

Lastly, consider (6.60c). From differentiating with respect to time and inserting
the dynamics (6.18a), integrating by parts and inserting the boundary condition in

(6.18a), we find

V̇3(t) ≤ 2μψ²(1, t) − μψ²(0, t) − (μ/2) ∫₀¹ (1 + x)ψ²(x, t)dx
       = 2μ( (1/ρ̂(t)) [ r(t) − ∫₀¹ θ̂(ξ, t)b(1 − ξ, t)dξ
         + ∫₀¹ k̂(1 − ξ, t)ẑ(ξ, t)dξ ] )² − μψ²(0, t) − (μ/2)V3(t)
       ≤ 6μMρ²( r²(t) + Mθ²||b(t)||² + Mk²G2²||η(t)||² )
         − μψ²(0, t) − (μ/2)V3(t) (6.72)
2
where
Mρ = 1/min{|ρ|, |ρ̄|}. (6.73)

Using Assumption 6.2, inequality (6.72) can be written as


V̇3(t) ≤ −μψ²(0, t) − (μ/2)V3(t) + h2μV1(t) + h3 (6.74)
for the positive constants

h 2 = 6Mρ2 Mk2 G 22 , h 3 = 6μMρ2 (1 + Mθ2 )r̄ 2 . (6.75)

Now, forming

V4 (t) = 8h 2 V1 (t) + h 2 V2 (t) + V3 (t) (6.76)

we find using (6.66), (6.70) and (6.74) that

V̇4(t) ≤ −4h2μη²(0, t) − μ(1 − 4h2(2h1 + 1)ν²(t))ψ²(0, t)
        − h2μV1(t) − (μ/2)h2V2(t) − (μ/2)V3(t) + 8h2l1(t)V1(t) + 8h2l2(t)V2(t)
        + h2l5(t)V2(t) + 8h2l3(t)V3(t) + 8h2l4(t) + h2l6(t) + h3 (6.77)

which can be written as

V̇4(t) ≤ −c1V4(t) + l7(t)V4(t) + l8(t) − μ(1 − b1ν²(t))ψ²(0, t) + h3 (6.78)

for some integrable functions l7 (t), l8 (t) and positive constants c1 and b1 . Moreover,
from (6.39), we have

V̇(t) ≤ −(1/2)ν²(t) (6.79)
while from (6.40) and (6.35), we have
ν²(t) ≤ 2|ρ̃(t)|² + 2||θ̃(t)||² ≤ 4γ1 (1/(2γ1))|ρ̃(t)|²
       + 4γ̄2 (1/2) ∫₀¹ γ2⁻¹(x)θ̃²(x, t)dx ≤ kV(t) (6.80)

for V defined in (6.35), and where

k = 4 max{γ1 , γ̄2 } (6.81)

with γ̄2 bounding γ2 from above, and where we have utilized that e ≡ 0. Lemma B.4
in Appendix B then gives V4 ∈ L∞ and thus

||η||, ||φ||, ||ψ|| ∈ L∞ (6.82)

and from the transformation (6.48), we will also have

||ẑ|| ∈ L∞ . (6.83)

From the definition of the filter ψ in (6.18a) and the control law U in (6.42), we will
then have U ∈ L∞ , and

||ψ||∞ ∈ L∞ (6.84)

and particularly, ψ(0, ·) ∈ L∞ . Now, constructing

V5 (t) = 8V1 (t) + V2 (t) (6.85)

we find
V̇5(t) ≤ −8(μ/4)V1(t) − (μ/2)V2(t) + 8l1(t)V1(t) + (8l2(t) + l5(t))V2(t)
        + 8l3(t)V3(t) + 8l4(t) + l6(t) + 4μ(2h1 + 1)ν²(t)ψ²(0, t). (6.86)

Since ψ(0, ·) ∈ L∞ and ν ∈ L2 , the latter term is integrable, and we can write
(6.86) as

V̇5 (t) ≤ −c2 V5 (t) + l9 (t)V5 (t) + l10 (t) (6.87)



for a positive constant c2 and integrable functions l9 (t) and l10 (t). It then immediately
follows from Lemma B.3 in Appendix B that

V5 ∈ L1 ∩ L∞ , V5 → 0, (6.88)

and hence

||η||, ||φ|| ∈ L2 ∩ L∞ , ||η||, ||φ|| → 0. (6.89)

From the invertibility of the transformation (6.47), it follows that

||ẑ|| ∈ L2 ∩ L∞ , ||ẑ|| → 0. (6.90)

Moreover from (6.20) and (6.22), we have


z(x, t) = ρψ(x, t) − b(x, t) + d2 ∫ₓ¹ θ(ξ)φ(1 − (ξ − x), t)dξ
          + d2 ∫₀¹ θ(ξ)M(x, ξ, t)dξ + e(x, t) (6.91)

where e ≡ 0 for t ≥ d2 , and hence

||z|| ∈ L∞ , ||z||∞ ∈ L∞ , (6.92)

which in turn means that z(0, ·) ∈ L∞ and hence

||φ||∞ ∈ L∞ . (6.93)

Since M is bounded, it follows from the invertibility of the transformations of


Lemmas 6.1–6.2 that

||u|| ∈ L∞ , (6.94)

and

||u||∞ ∈ L∞ . (6.95)

Hence, all signals are pointwise bounded.


From the definition of the filter φ in (6.18b), it follows from ||φ|| → 0 that
∫_t^{t+T} z²(0, s)ds → 0 (6.96)

for some arbitrary T > 0, which from the definition of z implies (6.5).

6.3 Adaptive Output Feedback Stabilization

Stabilization of the origin by adaptive output feedback is achieved by the model


reference adaptive controller of Theorem 6.1 by simply setting r ≡ 0, b0 ≡ 0 and
M0 ≡ 0. Moreover, this controller also gives the desirable property of square inte-
grability and asymptotic convergence to zero of the system states pointwise in space.
Consider the control law
U(t) = (1/ρ̂(t)) ∫₀¹ k̂(1 − ξ, t)ẑ(ξ, t)dξ (6.97)

where ẑ is generated using (6.26), and k̂ is the on-line solution to the Volterra integral
equation
μk̂(x, t) = ∫₀ˣ k̂(x − ξ, t)θ̂(ξ, t)dξ − θ̂(x, t), (6.98)

with ρ̂ and θ̂ generated using the adaptive laws (6.28).


Theorem 6.2 Consider system (6.1), filters (6.18) and the adaptive laws (6.28). The
control law (6.97) guarantees

||u||, ||û||, ||ψ||, ||φ||, ||u||∞ , ||û||∞ , ||ψ||∞ , ||φ||∞ ∈ L2 ∩ L∞ , (6.99a)


||u||, ||û||, ||ψ||, ||φ||, ||u||∞ , ||û||∞ , ||ψ||∞ , ||φ||∞ → 0. (6.99b)

Proof From the proof of Theorem 6.1, we already know that

||η||, ||φ||, ||ẑ|| ∈ L2 ∩ L∞ , ||η||, ||φ|| → 0. (6.100)

From the control law (6.97) and the definition of the filter ψ in (6.18a), we will then
have U ∈ L∞ ∩ L2 , U → 0, and

||ψ||, ||ψ||∞ ∈ L2 ∩ L∞ , ||ψ||, ||ψ||∞ → 0. (6.101)

Moreover, with r ≡ 0, b0 ≡ 0 and M0 ≡ 0, Eq. (6.91) reduces to


z(x, t) = ρψ(x, t) + d2 ∫ₓ¹ θ(ξ)φ(1 − (ξ − x), t)dξ + e(x, t) (6.102)

with e ≡ 0 for t ≥ d2 , which gives

||z||, ||z||∞ ∈ L2 ∩ L∞ , ||z||, ||z||∞ → 0. (6.103)



In particular z(0, ·) ∈ L2 ∩ L∞ , z(0, ·) → 0, which from the definition of the filter


φ in (6.18b) gives

||φ||, ||φ||∞ ∈ L2 ∩ L∞ , ||φ||, ||φ||∞ → 0. (6.104)

6.4 Simulation

The controllers of Theorems 6.1 and 6.2 are implemented on the potentially unstable,
linearized Korteweg de Vries-like equation from Krstić and Smyshlyaev (2008), with
scaled actuation and boundary measurement. It is given as
 
ut(x, t) = εux(x, t) − γ√(a/ε) sinh(√(a/ε) x) u(0, t)
           + γ(a/ε) ∫₀ˣ cosh(√(a/ε)(x − ξ)) u(ξ, t)dξ (6.105a)
u(1, t) = k1U(t) (6.105b)
y(t) = k2u(0, t) (6.105c)

for some constants ε, a and γ, with ε, a > 0. The Korteweg–de Vries equation serves
as a model of shallow water waves and ion acoustic waves in plasma (Korteweg and
de Vries 1895). The goal is to make the measured output (6.105c) track the reference
signal

r(t) = 1 + sin(2πt) for 0 ≤ t ≤ 10, and r(t) = 0 for t > 10. (6.106)
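In code, the reference (6.106) is a one-line piecewise function; a minimal sketch:

```python
import math

def r(t):
    """Reference signal (6.106): biased sinusoid on [0, 10], zero afterwards."""
    return 1.0 + math.sin(2.0 * math.pi * t) if t <= 10.0 else 0.0
```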

The reference signal is intentionally set identically zero after ten seconds to demon-
strate the stabilization and convergence properties of Theorem 6.2.
Figures 6.1, 6.2 and 6.3 show the simulation results from implementing system
(6.105) with system parameters

a = 1, ε = 0.2, γ = 4, k1 = 2, k2 = 2 (6.107)

using the controllers of Theorems 6.1 and 6.2 with tuning parameters

γ1 = γ2 (x) = 20, ∀x ∈ [0, 1] (6.108a)


ρ = 0.1, ρ̄ = 100, θ = −100, θ̄ = 100, (6.108b)

and initial condition



Fig. 6.1 Left: State norm. Right: Actuation signal

Fig. 6.2 Reference signal (solid black) and measured signal (dashed red)

u 0 (x) = x. (6.109)

All additional initial conditions are set to zero, except

ρ̂0 = 1. (6.110)

System (6.105) with parameters (6.107) is open-loop unstable, as demonstrated in a


simulation in Krstić and Smyshlyaev (2008). In the closed loop case, however, it is
noted from Fig. 6.1 that the state u is stabilized and the actuation signal is bounded.
Moreover, from Fig. 6.2, the measured output y successfully tracks the reference r
after only four seconds of simulation. The initial transients are due to initial conditions
in the system. At t = 10, after which r ≡ 0, the norm of the system state and the
actuation signal both converge to zero in accordance with Theorem 6.2. It is also
observed from Fig. 6.3 that the estimated parameters stagnate, and that the estimated
parameter ρ̂ is quite different from the actual value ρ = k1 k2 = 4.

6.5 Notes

The result presented in this chapter is definitely the strongest result in Part II, show-
ing that system (6.1) can be stabilized from a single boundary sensing, with little
knowledge of the system parameters. One of the key steps in solving the model refer-
ence adaptive control problem for system (6.1) is the use of Lemma 2.1, which states

Fig. 6.3 Left: Estimated parameter θ̂. Right: Actual (solid black) and estimated parameter ρ̂ (dashed
red)

that system (6.1) is equivalent to system (2.4), the latter of which only contains two
uncertain parameters. A slightly modified version of the swapping-based controller
already established in Chap. 4 can then be applied.
We will now proceed to Part III, adding an additional PDE to the system, and con-
sider systems of two coupled PDEs, so-called 2 × 2 systems. Many of the techniques
presented in Part II extend to 2 × 2 systems.

References

Anfinsen H, Aamo OM (2017) Model reference adaptive control of an unstable 1–D hyperbolic
PDE. In: 56th conference on decision and control. Melbourne, Victoria, Australia
Krstić M, Smyshlyaev A (2008) Backstepping boundary control for first-order hyperbolic PDEs
and application to systems with actuator and sensor delays. Syst Control Lett 57(9):750–758
Korteweg D, de Vries G (1895) On the change of form of long waves advancing in a rectangular
canal and on a new type of long stationary waves. Philos Mag 39(240):422–443
Part III
2 × 2 Systems
Chapter 7
Introduction

We now proceed by investigating systems of coupled linear hyperbolic PDEs. The


simplest type of such systems is referred to as 2 × 2 systems, and consists of two
PDEs convecting in opposite directions. They typically have the following form
(1.20), which we for the reader’s convenience restate here:

u t (x, t) + λ(x)u x (x, t) = c11 (x)u(x, t) + c12 (x)v(x, t) (7.1a)


vt (x, t) − μ(x)vx (x, t) = c21 (x)u(x, t) + c22 (x)v(x, t) (7.1b)
u(0, t) = qv(0, t) (7.1c)
v(1, t) = ρu(1, t) + U(t) (7.1d)
u(x, 0) = u 0 (x) (7.1e)
v(x, 0) = v0 (x) (7.1f)

for some system parameters assumed to satisfy

λ, μ ∈ C 1 ([0, 1]), λ(x), μ(x) > 0, ∀x ∈ [0, 1] (7.2a)


c11, c12, c21, c22 ∈ C⁰([0, 1]), q, ρ ∈ R, (7.2b)

with initial conditions

u 0 , v0 ∈ B([0, 1]). (7.3)

The signal U (t) is an actuation signal. As mentioned in Chap. 1, systems in the form
(7.1) consist of two transport equations u, v convecting in opposite directions, with u
convecting from x = 0 to x = 1 and v from x = 1 to x = 0. They are coupled both in
the domain (c12 , c21 ) and at the boundaries (ρ, q), and additionally have reaction terms
(c11 , c22 ). This type of systems can be used to model the pressure and flow profiles

© Springer Nature Switzerland AG 2019 117


H. Anfinsen and O. M. Aamo, Adaptive Control of Hyperbolic PDEs,
Communications and Control Engineering,
https://doi.org/10.1007/978-3-030-05879-1_7
118 7 Introduction

in oil wells (Landet et al. 2013), current and voltage along electrical transmission
lines (Heaviside 1892) and propagation of water in open channels (the Saint-Venant
equations or shallow water equations) (Saint-Venant 1871), just to mention a few
examples.
An early result in Vazquez et al. (2011) on control and observer design for 2 × 2
systems considered a slightly simpler version of system (7.1), in the form

u t (x, t) + λ(x)u x (x, t) = c1 (x)v(x, t) (7.4a)


vt (x, t) − μ(x)vx (x, t) = c2 (x)u(x, t) (7.4b)
u(0, t) = qv(0, t) (7.4c)
v(1, t) = U (t) (7.4d)
u(x, 0) = u 0 (x) (7.4e)
v(x, 0) = v0 (x) (7.4f)

for

λ, μ ∈ C 1 ([0, 1]), λ(x), μ(x) > 0, ∀x ∈ [0, 1] (7.5a)


c1, c2 ∈ C⁰([0, 1]), q ∈ R, (7.5b)

and

u 0 , v0 ∈ B([0, 1]), (7.6)

where c11 = c22 ≡ 0, c12 = c1 , c21 = c2 and ρ = 0. Developing controller and


observer designs for the simplified system (7.4) instead of (7.1) is justified by the
fact that (7.1) can be mapped to the form (7.4) by the invertible linear transformation

ū(x, t) = u(x, t) exp( − ∫₀ˣ (c11(s)/λ(s)) ds ) (7.7a)
v̄(x, t) = v(x, t) exp( ∫₀ˣ (c22(s)/μ(s)) ds ) (7.7b)

from u, v into the new variables ū, v̄ (we have omitted the bars on u and v in (7.4)),
and scaling of the input. Moreover, the term in ρu(1, t) can be removed by defining
a new control signal U1 as

U1 (t) = ρu(1, t) + U (t). (7.8)

In other words, (7.1) and (7.4) are equivalent.
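The scaling (7.7) amounts to pointwise multiplication by exponentials of cumulative integrals; a sketch (the constant coefficients c11 = c22 = 2 and unit transport speeds are arbitrary test choices, for which ū(x) = u(x)e^{−2x}):

```python
import numpy as np

def remove_diagonal_terms(u, v, x, c11, c22, lam, mu):
    """Apply the change of variables (7.7): exponentials of the cumulative
    trapezoidal integrals of c11/lambda and c22/mu from 0 to x."""
    def cumint(f):
        inc = 0.5 * (f[1:] + f[:-1]) * np.diff(x)
        return np.concatenate(([0.0], np.cumsum(inc)))
    u_bar = u * np.exp(-cumint(c11 / lam))
    v_bar = v * np.exp(cumint(c22 / mu))
    return u_bar, v_bar

x = np.linspace(0.0, 1.0, 101)
c = 2.0
ones = np.ones_like(x)
u_bar, v_bar = remove_diagonal_terms(ones, ones, x, c * ones, c * ones, ones, ones)
print(u_bar[-1], v_bar[-1])   # e^{-2} and e^{2} at x = 1
```

Since the transformation is a pointwise invertible scaling, boundedness and convergence results proved for (ū, v̄) transfer directly back to (u, v).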


Full-state measurements are seldom available in practice, so boundary sensing is
usually assumed. The measurements take the form

y0 (t) = v(0, t) (7.9a)


y1 (t) = u(1, t) (7.9b)

where the sensing y0 , taken at x = 0, is referred to as the sensing anti-collocated


with actuation, while y1 at x = 1 is the sensing collocated with actuation. It will
later become evident that sometimes quite different observer designs and adaptive
output feedback control design schemes must be applied for the two different cases.
In Chap. 8, we develop non-adaptive schemes for system (7.4). State feedback and
output-feedback stabilizing controllers are derived, both from using the measurement
(7.9a) anti-collocated with actuation, and from using the measurement (7.9b) collo-
cated with actuation. A tracking controller whose aim is to make the measurement
anti-collocated with actuation track a reference signal is also derived.
Adaptive state-feedback controllers are given in Chap. 9. Firstly, identifier-based
and swapping-based adaptive controllers are derived for system (7.1), with uncertain
constant coefficients, before a swapping-based adaptive controller is proposed for
system (7.4) with spatially-varying coefficients.
In Chap. 10, we assume that only the boundary parameter q in (7.4c) is uncer-
tain, but allow sensing (7.9) to be taken on the boundaries only. We derive adaptive
observers for the parameter q and states u and v, and also combine the observers
with controllers to establish closed loop adaptive control laws. Two different designs
are offered, one where sensing is taken anti-collocated with actuation (7.9a), and one
where the sensing (7.9b) is taken collocated with actuation.
An adaptive output-feedback controller for system (7.4) is derived in Chap. 11, that
adaptively stabilizes the system from sensing (7.9a) anti-collocated with actuation,
assuming only the transport delays in each direction are known.
In Part III’s last chapter, Chap. 12, we solve a model reference adaptive control
problem for the PDE system (7.4) with scaled actuation and sensing, affected by
a disturbance. The disturbance is allowed to enter anywhere in the domain, and is
modeled as an autonomous linear ODE system, typically used for representing biased
harmonic disturbances. The derived controller stabilizes the system, rejects the effect
the disturbance has on the measured signal (7.9a) anti-collocated with actuation, and
at the same time makes the measured signal track a signal generated from a reference
model.
All solutions offered in this part of the book assume ρ in (7.1d) to be zero. The
case of having a nonzero ρ is covered in the solutions for n + 1-systems in Part IV.

References

Heaviside O (1892) Electromagnetic induction and its propagation. In: Electrical papers, vol II, 2nd
edn. Macmillan and Co, London
Landet IS, Pavlov A, Aamo OM (2013) Modeling and control of heave-induced pressure fluctuations
in managed pressure drilling. IEEE Trans Control Syst Technol 21(4):1340–1351
Saint-Venant AJCBd (1871) Théorie du mouvement non permanent des eaux, avec application aux
crues des rivières et a l’introduction de marées dans leurs lits. Comptes Rendus des Séances de
l’Académie des Sciences 73:147–154
Vazquez R, Krstić M, Coron J-M (2011) Backstepping boundary stabilization and state estimation
of a 2 × 2 linear hyperbolic system. In: 2011 50th IEEE conference on decision and control and
European control conference (CDC-ECC). pp 4937–4942
Chapter 8
Non-adaptive Schemes

8.1 Introduction

In this chapter, non-adaptive controllers and observers will be derived. Most of the
results will concern systems in the form (7.4), which we restate here

u t (x, t) + λ(x)u x (x, t) = c1 (x)v(x, t) (8.1a)


vt (x, t) − μ(x)vx (x, t) = c2 (x)u(x, t) (8.1b)
u(0, t) = qv(0, t) (8.1c)
v(1, t) = U (t) (8.1d)
u(x, 0) = u 0 (x) (8.1e)
v(x, 0) = v0 (x) (8.1f)

where

λ, μ ∈ C 1 ([0, 1]), λ(x), μ(x) > 0, ∀x ∈ [0, 1] (8.2a)


c1, c2 ∈ C⁰([0, 1]), q ∈ R, (8.2b)

with

u 0 , v0 ∈ B([0, 1]). (8.3)

In Sect. 8.2, we derive the state-feedback law from Vazquez et al. (2011) for system
(8.1), before state observers are derived in Sect. 8.3. Note that there will be a distinc-
tion between observers using sensing anti-collocated or collocated with the actuation
U , as defined in (7.9a) and (7.9b), respectively, which we restate here as

y0 (t) = v(0, t) (8.4a)


y1 (t) = u(1, t). (8.4b)


The observer using sensing collocated with actuation was originally derived in
Vazquez et al. (2011), while the observer using anti-collocated actuation and sens-
ing is based on a similar design for n + 1 systems in Di Meglio et al. (2013). The
controller and observers are combined into output-feedback controllers in Sect. 8.4.
An output tracking controller is derived in Sect. 8.5, whose aim is to make the
measurement anti-collocated with actuation track some bounded reference signal of
choice, achieving tracking in finite time. The design in Sect. 8.5 is different from
the output-feedback solution to the output tracking problem offered in Lamare and
Bekiaris-Liberis (2015), where a reference model is used to generate a reference
trajectory, before a backstepping transformation is applied “inversely” to the refer-
ence model to generate a reference trajectory u r , vr for the original state variables
u, v. The resulting controller contains no feedback from the actual states, so the
controller would only work if the initial conditions of the reference trajectory u r , vr
matched the initial conditions of u, v. To cope with this, a standard PI controller is
used to drive the output y0 (t) = v(0, t) to the generated reference output vr (0, t). A
weakness with this approach, apart from being far more complicated in both design
and accompanying stability proof than the design of Sect. 8.5, is that tracking is not
achieved in finite time due to the presence of the PI controller. Also, the PI imple-
mentation requires the signal v(0, t) to be measured, which is not necessary for the
design in Sect. 8.5 when the tracking controller is combined with an observer using
measurement collocated with actuation.
Most of the derived controllers and observers are implemented and simulated in
Sect. 8.6, before some concluding remarks are offered in Sect. 8.7.

8.2 State Feedback Controller

As for the scalar system, system (8.1) may, depending on the system parameters,
be unstable. However, if c2 ≡ 0 the system reduces to a cascade system from v into
u, which is trivially stabilized using the control law U ≡ 0. For the case c2 ≢ 0,
a stabilizing controller is needed. Such a controller for system (8.1) with q ≠ 0 is
derived in Vazquez et al. (2011), where a state feedback control law in the form
U(t) = ∫₀¹ [ K^u(1, ξ)u(ξ, t) + K^v(1, ξ)v(ξ, t) ] dξ (8.5)

is proposed, where (K u , K v ) is defined over the triangular domain T given in (1.1a),


and satisfies the PDE

μ(x)K^u_x(x, ξ) − λ(ξ)K^u_ξ(x, ξ) = λ′(ξ)K^u(x, ξ) + c2(ξ)K^v(x, ξ) (8.6a)
μ(x)K^v_x(x, ξ) + μ(ξ)K^v_ξ(x, ξ) = c1(ξ)K^u(x, ξ) − μ′(ξ)K^v(x, ξ) (8.6b)
K^u(x, x) = −c2(x)/(λ(x) + μ(x)) (8.6c)
K^v(x, 0) = q (λ(0)/μ(0)) K^u(x, 0), (8.6d)

for which well-posedness is guaranteed by Theorem D.1 in Appendix D.2.


Theorem 8.1 Consider system (8.1). The control law (8.5) guarantees

u=v≡0 (8.7)

for t ≥ t F , where
t_F = t1 + t2, t1 = ∫₀¹ dγ/λ(γ), t2 = ∫₀¹ dγ/μ(γ). (8.8)
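The settling time (8.8) is just two quadratures of the reciprocal transport speeds; a sketch, with λ(x) = 1 + x and μ(x) = 2 as arbitrary examples (then t1 = ln 2 and t2 = 1/2):

```python
import numpy as np

def transit_time(speed, x):
    """t = int_0^1 dx / speed(x), trapezoidal rule on the grid x."""
    y = 1.0 / speed
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

x = np.linspace(0.0, 1.0, 2001)
t1 = transit_time(1.0 + x, x)                  # time for u to cross the domain
t2 = transit_time(2.0 * np.ones_like(x), x)    # time for v to cross back
print(t1 + t2)                                 # settling time t_F = t1 + t2
```

Physically, t1 and t2 are the transport delays of the two characteristic families, so t_F is the time for one full round trip through the domain.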

Proof We will offer two different proofs of this theorem. The two proofs are similar
and both employ the backstepping technique. The first one uses the simplest back-
stepping transformation, while the second one produces the simplest target system.
We include them both because the first one most closely resembles similar proof for
the state feedback controller designs for the more general n + 1 and n + m systems,
while the second produces a target system which will be used when deriving adaptive
output-feedback schemes in later chapters.
To ease the derivations to follow, we state the Eq. (8.1) in vector form as follows

wt (x, t) + Λ(x)wx (x, t) = Π (x)w(x, t) (8.9a)


w(0, t) = Q 0 w(0, t) (8.9b)
w(1, t) = R1 w(1, t) + Ū (t) (8.9c)
w(x, 0) = w0 (x) (8.9d)

where
     
w(x, t) = (u(x, t), v(x, t))ᵀ, Λ(x) = diag(λ(x), −μ(x)), Π(x) = [[0, c1(x)], [c2(x), 0]] (8.10a)
Q0 = [[0, q], [0, 1]], R1 = [[1, 0], [0, 0]], Ū(t) = (0, U(t))ᵀ (8.10b)
w0(x) = (u0(x), v0(x))ᵀ. (8.10c)

Solution 1:
Consider the target system
γt(x, t) + Λ(x)γx(x, t) = Ω(x)γ(x, t) + ∫₀ˣ B(x, ξ)γ(ξ, t)dξ (8.11a)
γ(0, t) = Q 0 γ(0, t) (8.11b)
γ(1, t) = R1 γ(1, t) (8.11c)

γ(x, 0) = γ0 (x) (8.11d)

for a new vector of variables γ and matrices Ω and B given as


   
γ(x, t) = (α(x, t), β(x, t))ᵀ, Ω(x) = [[0, c1(x)], [0, 0]] (8.12a)
B(x, ξ) = [[b1(x, ξ), b2(x, ξ)], [0, 0]] (8.12b)

for some functions b1 , b2 defined over T . Consider the backstepping transformation


 x
γ(x, t) = w(x, t) − K (x, ξ)w(ξ, t)dξ (8.13)
0

where
 
K(x, ξ) = [[0, 0], [K^u(x, ξ), K^v(x, ξ)]] (8.14)

with (K u , K v ) satisfying the PDE (8.6).


As with every Volterra integral transformation, transformation (8.13) is invertible,
with inverse in the form
w(x, t) = γ(x, t) + ∫₀ˣ L(x, ξ)γ(ξ, t)dξ (8.15)

where
 
L(x, ξ) = [[0, 0], [L^α(x, ξ), L^β(x, ξ)]] (8.16)

can be found from solving the Volterra integral equation (1.53).


We will show that transformation (8.13) and controller (8.5) map system (8.9)
into the target system (8.11) with B given as the solution to the Volterra integral
equation
B(x, ξ) = Ω(x)K(x, ξ) + ∫_ξ^x B(x, s)K(s, ξ)ds (8.17)

which from Lemma 1.1 has a solution B.


Differentiating (8.13) with respect to time, inserting the dynamics (8.9a), inte-
grating by parts and inserting the boundary condition (8.9b), we find

wt (x, t) = γt (x, t) − K (x, x)Λ(x)w(x, t) + K (x, 0)Λ(0)Q 0 w(0, t)


 x 

+ K ξ (x, ξ)Λ(ξ) + K (x, ξ)Λ (ξ) + K (x, ξ)Π (ξ) w(ξ, t)dξ. (8.18)
0

Equivalently, differentiating (8.13) with respect to space, we find


wx(x, t) = γx(x, t) + K(x, x)w(x, t) + ∫₀ˣ Kx(x, ξ)w(ξ, t)dξ. (8.19)

Inserting (8.18) and (8.19) into (8.9a), and inserting the boundary condition (8.9b),
we find
0 = wt(x, t) + Λ(x)wx(x, t) − Π(x)w(x, t)
  = γt(x, t) + Λ(x)γx(x, t) − Ω(x)w(x, t) + K(x, 0)Λ(0)Q0w(0, t)
    + ∫₀ˣ [ Λ(x)Kx(x, ξ) + Kξ(x, ξ)Λ(ξ) + K(x, ξ)Λ′(ξ) + K(x, ξ)Π(ξ) ] w(ξ, t)dξ
    − [ K(x, x)Λ(x) − Λ(x)K(x, x) + Π(x) − Ω(x) ] w(x, t). (8.20)

Choosing K as the solution to the PDE

Λ(x)Kx(x, ξ) + Kξ(x, ξ)Λ(ξ) = −K(x, ξ)Π(ξ) − K(x, ξ)Λ′(ξ) (8.21a)
Λ(x)K (x, x) − K (x, x)Λ(x) = Π (x) − Ω(x) (8.21b)
K (x, 0)Λ(0)Q 0 = 0, (8.21c)

which is equivalent to (8.6), we obtain

γt (x, t) + Λ(x)γx (x, t) − Ω(x)w(x, t) = 0. (8.22)

Inserting the transformation (8.13) yields


γt(x, t) + Λ(x)γx(x, t) − Ω(x)γ(x, t) − ∫₀ˣ B(x, ξ)γ(ξ, t)dξ
  = ∫₀ˣ Ω(x)K(x, ξ)w(ξ, t)dξ − ∫₀ˣ B(x, ξ)γ(ξ, t)dξ
  = ∫₀ˣ Ω(x)K(x, ξ)w(ξ, t)dξ
    − ∫₀ˣ B(x, ξ) [ w(ξ, t) − ∫₀^ξ K(ξ, s)w(s, t)ds ] dξ
  = ∫₀ˣ [ Ω(x)K(x, ξ) − B(x, ξ) + ∫_ξ^x B(x, s)K(s, ξ)ds ] w(ξ, t)dξ (8.23)

where we changed the order of integration in the double integral. Using (8.17) yields
the target system dynamics (8.11).

The boundary condition (8.11b) follows immediately from the boundary condition
(8.9b) and the fact that w(0, t) = γ(0, t). Substituting (8.13) into (8.9c) gives
γ(1, t) = R1 γ(1, t) − ∫_0^1 [K(1, ξ) − R1 K(1, ξ)] w(ξ, t)dξ + Ū(t). (8.24)

Choosing
Ū(t) = ∫_0^1 [K(1, ξ) − R1 K(1, ξ)] w(ξ, t)dξ (8.25)

which is equivalent to (8.5) gives the boundary condition (8.11c).


System (8.11), when written out in its components, is
αt(x, t) + λ(x)αx(x, t) = c1(x)β(x, t) + ∫_0^x b1(x, ξ)α(ξ, t)dξ
    + ∫_0^x b2(x, ξ)β(ξ, t)dξ (8.26a)
βt (x, t) − μ(x)βx (x, t) = 0 (8.26b)
α(0, t) = qβ(0, t) (8.26c)
β(1, t) = 0 (8.26d)
α(x, 0) = α0 (x) (8.26e)
β(x, 0) = β0 (x), (8.26f)

and is observed to be a cascade system from β into α. After a finite time t2 , defined
in (8.8), we will have β ≡ 0. Hence, for t ≥ t2 , system (8.26) reduces to
αt(x, t) + λ(x)αx(x, t) = ∫_0^x b1(x, ξ)α(ξ, t)dξ (8.27a)
α(0, t) = 0 (8.27b)
α(x, t2 ) = αt2 (x). (8.27c)

The variable α in (8.27) will be identically zero for t ≥ t F = t1 + t2 . This can be


seen as follows. Consider the additional backstepping transformation
η(x, t) = α(x, t) − ∫_0^x F(x, ξ)α(ξ, t)dξ (8.28)

where F is defined over the triangular domain T given in (1.1a) and is the solution to the PDE

λ(x)Fx(x, ξ) + λ(ξ)Fξ(x, ξ) = −λ′(ξ)F(x, ξ) + b1(x, ξ)
    − ∫_ξ^x F(x, s)b1(s, ξ)ds (8.29a)

F(0, ξ) = 0. (8.29b)

We will show that (8.28) maps (8.27) into

ηt (x, t) + λ(x)ηx (x, t) = 0 (8.30a)


η(0, t) = 0 (8.30b)
η(x, t2 ) = ηt2 (x). (8.30c)

Differentiating (8.28) with respect to time and space, inserting the dynamics (8.27a),
integrating by parts, changing the order of integration in the double integral and using
the boundary condition (8.27b), we obtain
αt(x, t) = ηt(x, t) − F(x, x)λ(x)α(x, t) + ∫_0^x F(x, ξ)λ′(ξ)α(ξ, t)dξ
    + ∫_0^x Fξ(x, ξ)λ(ξ)α(ξ, t)dξ
    + ∫_0^x F(x, ξ) ∫_0^ξ b1(ξ, s)α(s, t)ds dξ (8.31)

and
αx(x, t) = ηx(x, t) + F(x, x)α(x, t) + ∫_0^x Fx(x, ξ)α(ξ, t)dξ, (8.32)

respectively. Inserting (8.31) and (8.32) into (8.27a) yields


αt(x, t) + λ(x)αx(x, t) − ∫_0^x b1(x, ξ)α(ξ, t)dξ
= ηt(x, t) + λ(x)ηx(x, t)
    + ∫_0^x [λ(x)Fx(x, ξ) + λ(ξ)Fξ(x, ξ) + λ′(ξ)F(x, ξ) − b1(x, ξ)
    + ∫_ξ^x F(x, s)b1(s, ξ)ds] α(ξ, t)dξ = 0, (8.33)

and using (8.29a) gives (8.30a). Evaluating (8.28) at x = 0 and inserting the boundary
conditions (8.27b) and (8.29b) gives (8.30b). The initial condition follows immedi-
ately from evaluating (8.28) at t = t2 . From the structure of system (8.30), we have
η ≡ 0 for t ≥ t F = t1 + t2 , and from the invertibility of the transformations (8.28)
and (8.13), α = u = v ≡ 0 for t ≥ t F follows.
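The finite-time argument for (8.30) can be made concrete with the method of characteristics: along dx/dt = λ(x), the quantity ψ(x) = ∫_0^x ds/λ(s) grows at unit rate, so the zero boundary data at x = 0 fills the domain after t1 = ψ(1). A minimal numerical sketch for λ(x) = 1 + x (grid-based inversion of ψ; names are illustrative):

```python
import numpy as np

# Method of characteristics for eta_t + lambda(x) eta_x = 0 with eta(0, t) = 0:
# eta(x, t) = eta_0(psi^{-1}(psi(x) - t)) while psi(x) >= t, and 0 once the
# characteristic from the boundary has passed, so eta vanishes for t >= t1 = psi(1).
lam = lambda s: 1.0 + s
xs = np.linspace(0.0, 1.0, 2001)
g = 1.0 / lam(xs)
psi = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(xs))))
t1 = psi[-1]                                       # = ln 2 for lambda = 1 + x
eta0 = lambda x: np.sin(np.pi * x)

def eta(x, t):
    p = np.interp(x, xs, psi) - t                  # foot of the characteristic
    return np.where(p >= 0.0, eta0(np.interp(p, psi, xs)), 0.0)

print(t1, np.max(np.abs(eta(xs, t1))))             # ~ln 2, and 0 at t = t1
```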
Lastly, we prove that the PDE (8.29) has a solution F. Consider the invertible mapping

F(x, ξ) = G(x, ξ)ϕ(ξ),  ϕ(ξ) = exp(−∫_0^ξ λ′(s)/λ(s) ds), (8.34)

from which we find

Fx(x, ξ) = Gx(x, ξ)ϕ(ξ) (8.35a)
Fξ(x, ξ) = Gξ(x, ξ)ϕ(ξ) − (λ′(ξ)/λ(ξ)) G(x, ξ)ϕ(ξ). (8.35b)

Inserting (8.34) and (8.35) into (8.29) yields

λ(x)Gx(x, ξ) + λ(ξ)Gξ(x, ξ) = p1(x, ξ) − ∫_ξ^x G(x, s) p2(s, ξ)ds (8.36a)
G(0, ξ) = 0, (8.36b)

where
p1(x, ξ) = b1(x, ξ)/ϕ(ξ),  p2(x, ξ) = b1(x, ξ) ϕ(x)/ϕ(ξ). (8.37)

Consider now the mapping

G(x, ξ) = H(φ(x), φ(ξ)),  φ(x) = t1 ∫_0^x dγ/λ(γ), (8.38)

where t1 is defined in (8.8). We note that φ is strictly increasing and hence invertible.
From (8.38), we find

Gx(x, ξ) = (t1/λ(x)) Hx(φ(x), φ(ξ)),  Gξ(x, ξ) = (t1/λ(ξ)) Hξ(φ(x), φ(ξ)). (8.39)

Inserting (8.39) into (8.36), we find


Hx(x, ξ) + Hξ(x, ξ) = r1(x, ξ) − ∫_ξ^x H(x, s)r2(s, ξ)ds (8.40a)
H(0, ξ) = 0 (8.40b)

where

r1(x, ξ) = t1⁻¹ p1(φ⁻¹(x), φ⁻¹(ξ)) (8.41a)
r2(x, ξ) = λ(φ⁻¹(x)) p2(φ⁻¹(x), φ⁻¹(ξ)). (8.41b)

Well-posedness and the existence of a unique solution H to the PDE (8.40) is now
ensured by Lemma D.1 in Appendix D.1. The invertibility of the transformations
(8.38) and (8.34) then proves that (8.29) has a unique solution.
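The two rescalings are easy to evaluate numerically. For λ(x) = 1 + x, the scaling ϕ from (8.34) has the closed form ϕ(ξ) = 1/λ(ξ), and the map φ from (8.38) is strictly increasing, so it can be inverted by monotone interpolation; the sketch below (grid sizes and names are illustrative) checks both.

```python
import numpy as np

# Numerical evaluation of the rescalings (8.34)/(8.38) for lambda(x) = 1 + x.
lam = lambda s: 1.0 + s
dlam = lambda s: np.ones_like(s)
s = np.linspace(0.0, 1.0, 5001)

def cumtrapz0(y, x):
    """Cumulative trapezoidal integral, equal to 0 at x[0]."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))

phi = np.exp(-cumtrapz0(dlam(s) / lam(s), s))   # (8.34); equals 1/(1 + s) here
I = cumtrapz0(1.0 / lam(s), s)
t1 = I[-1]                                      # = ln 2
phi_map = t1 * I                                # strictly increasing

y = 0.37 * phi_map[-1]
xr = np.interp(y, phi_map, s)                   # monotone inverse of phi_map
print(phi[-1], np.interp(xr, s, phi_map) - y)   # ~0.5 and ~0 (round trip)
```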

Solution 2:
This second proof employs a slightly more involved backstepping transformation
that produces a simpler target system without the integral term in (8.11a). Consider
the target system

γt (x, t) + Λ(x)γx (x, t) = G(x)γ(0, t) (8.42a)


γ(0, t) = Q 0 γ(0, t) (8.42b)
γ(1, t) = R1 γ(1, t) (8.42c)
γ(x, 0) = γ0 (x) (8.42d)

for γ, Λ, Q 0 and R1 defined in (8.10) and (8.12), and

G(x) = [0 g(x); 0 0] (8.43)

for some function g. Consider also the backstepping transformation


γ(x, t) = w(x, t) − ∫_0^x K(x, ξ)w(ξ, t)dξ (8.44)

where

K(x, ξ) = [K^uu(x, ξ) K^uv(x, ξ); K^vu(x, ξ) K^vv(x, ξ)] (8.45)

satisfies the PDE

λ(x)Kx^uu(x, ξ) + λ(ξ)Kξ^uu(x, ξ) = −λ′(ξ)K^uu(x, ξ) − c2(ξ)K^uv(x, ξ) (8.46a)
λ(x)Kx^uv(x, ξ) − μ(ξ)Kξ^uv(x, ξ) = −c1(ξ)K^uu(x, ξ) + μ′(ξ)K^uv(x, ξ) (8.46b)
μ(x)Kx^vu(x, ξ) − λ(ξ)Kξ^vu(x, ξ) = λ′(ξ)K^vu(x, ξ) + c2(ξ)K^vv(x, ξ) (8.46c)
μ(x)Kx^vv(x, ξ) + μ(ξ)Kξ^vv(x, ξ) = c1(ξ)K^vu(x, ξ) − μ′(ξ)K^vv(x, ξ) (8.46d)
K^uu(x, 0) = k^uu(x) (8.46e)
K^uv(x, x) = c1(x)/(λ(x) + μ(x)) (8.46f)
K^vu(x, x) = −c2(x)/(λ(x) + μ(x)) (8.46g)
K^vv(x, 0) = q(λ(0)/μ(0))K^vu(x, 0), (8.46h)

for some arbitrary function k^uu. Well-posedness of (8.46) is guaranteed by Theorem D.1 in Appendix D.2.
We note that the PDE (8.46) consists of two independent groups, one in K^uu and K^uv, the other in K^vu and K^vv. Moreover, the PDE in K^vu and K^vv is exactly the same as the PDE (8.6), so that

K^vu ≡ K^u,  K^vv ≡ K^v. (8.47)

The backstepping transformation (8.44) is also invertible, with inverse in the form

w(x, t) = γ(x, t) + ∫_0^x L(x, ξ)γ(ξ, t)dξ (8.48)

where

L(x, ξ) = [L^αα(x, ξ) L^αβ(x, ξ); L^βα(x, ξ) L^ββ(x, ξ)] (8.49)

which once again can be found from solving the Volterra integral equation (1.53).
We will show that the backstepping transformation (8.44) with K satisfying the PDE (8.46) for an arbitrary k^uu maps (8.9) into (8.42) with

g(x) = μ(0)K^uv(x, 0) − qλ(0)K^uu(x, 0). (8.50)

From differentiating (8.44) with respect to time and space, respectively, inserting
the dynamics (8.9a), integrating by parts and inserting the boundary condition (8.9b),
we get

wt(x, t) = γt(x, t) − K(x, x)Λ(x)w(x, t) + K(x, 0)Λ(0)w(0, t)
    + ∫_0^x [Kξ(x, ξ)Λ(ξ) + K(x, ξ)Λ′(ξ) + K(x, ξ)Π(ξ)] w(ξ, t)dξ. (8.51)

Differentiating (8.44) with respect to space gives


wx(x, t) = γx(x, t) + K(x, x)w(x, t) + ∫_0^x Kx(x, ξ)w(ξ, t)dξ. (8.52)

Inserting (8.51) and (8.52) into (8.9a) we find

0 = wt(x, t) + Λ(x)wx(x, t) − Π(x)w(x, t)
= γt(x, t) + Λ(x)γx(x, t) + K(x, 0)Λ(0)Q0 w(0, t)
    + ∫_0^x [Λ(x)Kx(x, ξ) + Kξ(x, ξ)Λ(ξ) + K(x, ξ)Λ′(ξ) + K(x, ξ)Π(ξ)] w(ξ, t)dξ
    + [Λ(x)K(x, x) − K(x, x)Λ(x) − Π(x)] w(x, t). (8.53)

Choosing K and G to satisfy

Λ(x)Kx(x, ξ) + Kξ(x, ξ)Λ(ξ) + K(x, ξ)Λ′(ξ) + K(x, ξ)Π(ξ) = 0 (8.54a)
Λ(x)K(x, x) − K(x, x)Λ(x) − Π(x) = 0 (8.54b)
K(x, 0)Λ(0)Q0 + G(x) = 0 (8.54c)

gives the target dynamics (8.42a). The PDE (8.54) is under-determined, and we
impose the additional constraint

K^uu(x, 0) = k^uu(x) (8.55)

for some arbitrary function k^uu to ensure well-posedness. Equation (8.54) with (8.55) is equivalent to (8.46) and (8.50).
The boundary condition (8.42b) follows from (8.9b) and w(0, t) = γ(0, t). Sub-
stituting (8.44) into (8.9c) and choosing the controller Ū as
Ū(t) = ∫_0^1 [K(1, ξ) − R1 K(1, ξ)] w(ξ, t)dξ, (8.56)

that is
U(t) = ∫_0^1 K^vu(1, ξ)u(ξ, t)dξ + ∫_0^1 K^vv(1, ξ)v(ξ, t)dξ (8.57)

which from (8.47) is the same as (8.5), we obtain the boundary condition (8.42c).
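In an implementation, evaluating the control law (8.57) reduces to a pair of quadratures over sampled states. The sketch below uses the trapezoidal rule; the state snapshots and the kernel slices K^vu(1, ·), K^vv(1, ·) are placeholders, since the true kernel values come from solving (8.46).

```python
import numpy as np

# Quadrature evaluation of the feedback law (8.57) from sampled states.
x = np.linspace(0.0, 1.0, 1001)

def trap(y):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

u = np.cos(np.pi * x)            # placeholder state sample u(., t)
v = np.sin(np.pi * x)            # placeholder state sample v(., t)
Kvu = 0.5 * np.ones_like(x)      # placeholder kernel slice K^vu(1, .)
Kvv = -0.25 * x                  # placeholder kernel slice K^vv(1, .)

U = trap(Kvu * u) + trap(Kvv * v)
print(U)
```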
Written out, target system (8.42) reads

αt (x, t) + λ(x)αx (x, t) = g(x)β(0, t) (8.58a)


βt (x, t) − μ(x)βx (x, t) = 0 (8.58b)
α(0, t) = qβ(0, t) (8.58c)
β(1, t) = 0 (8.58d)
α(x, 0) = α0 (x) (8.58e)
β(x, 0) = β0 (x), (8.58f)

once again a cascade system from β into α. After a finite time t2 given in (8.8),
β ≡ 0, after which system (8.58) reduces to

αt (x, t) + λ(x)αx (x, t) = 0 (8.59a)


α(0, t) = 0 (8.59b)
α(x, t2 ) = αt2 (x) (8.59c)

which is also identically zero for an additional time t1 , resulting in α = β ≡ 0 and


hence u = v ≡ 0 for t ≥ t F . 
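The finite-time decay of the β-subsystem (8.58b), (8.58d) is easy to observe numerically. A minimal first-order upwind sketch for μ(x) = 1 + e^x (discretization choices are illustrative): the state is essentially zero once t exceeds t2 = ∫_0^1 dx/μ(x) ≈ 0.38.

```python
import numpy as np

# Upwind simulation of beta_t - mu(x) beta_x = 0 with beta(1, t) = 0.
mu = lambda x: 1.0 + np.exp(x)
N = 200
x = np.linspace(0.0, 1.0, N + 1)
dx = x[1] - x[0]
dt = dx / 4.0                                  # CFL: dt * max(mu) / dx < 1
beta = np.sin(x)                               # initial condition beta_0
t = 0.0
while t < 1.0:                                 # simulate well past t2 ~ 0.38
    beta[:-1] += dt * mu(x[:-1]) * (beta[1:] - beta[:-1]) / dx
    beta[-1] = 0.0                             # boundary condition at x = 1
    t += dt
print(np.max(np.abs(beta)))                    # essentially zero
```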

Remark 8.1 We note that if q ≠ 0, one can choose k^uu in (8.46e) as

k^uu(x) = (μ(0)/(qλ(0))) K^uv(x, 0) (8.60)

which from (8.50) results in

g≡0 (8.61)

in the target system (8.58).

8.3 State Observers

The controller derived in the previous section requires that full-state measurements
of states u and v are available, which is often not the case in practice. An observer
is therefore needed.
Two boundary measurements are available for system (8.1), as stated in (8.4).
These are y1 (t) = u(1, t), which is referred to as the measurement collocated
with actuation, and y0 (t) = v(0, t), which is referred to as the measurement anti-
collocated with actuation. Only one of the measurements is needed for the design of
an observer estimating both states u and v. We will here present the two designs.

8.3.1 Sensing Anti-collocated with Actuation

In Di Meglio et al. (2013), a state observer design for n + 1 systems is developed for
the case of sensing anti-collocated with actuation. Here, we present the design for
2 × 2 systems in the form (8.1) with sensing (8.4a), by letting n = 1. Consider the
observer equations

û t (x, t) + λ(x)û x (x, t) = c1 (x)v̂(x, t) + p1 (x)(y0 (t) − v̂(0, t)) (8.62a)


v̂t (x, t) − μ(x)v̂x (x, t) = c2 (x)û(x, t) + p2 (x)(y0 (t) − v̂(0, t)) (8.62b)
û(0, t) = qy0 (t) (8.62c)
v̂(1, t) = U (t) (8.62d)
û(x, 0) = û 0 (x) (8.62e)
v̂(x, 0) = v̂0 (x) (8.62f)

for initial conditions

û 0 , v̂0 ∈ B([0, 1]), (8.63)



and where p1 and p2 are given as

p1(x) = μ(0)M^α(x, 0) (8.64a)
p2(x) = μ(0)M^β(x, 0) (8.64b)

where (M α , M β ) is the solution to the PDE

λ(x)Mx^α(x, ξ) − μ(ξ)Mξ^α(x, ξ) = μ′(ξ)M^α(x, ξ) + c1(x)M^β(x, ξ) (8.65a)
μ(x)Mx^β(x, ξ) + μ(ξ)Mξ^β(x, ξ) = −c2(x)M^α(x, ξ) − μ′(ξ)M^β(x, ξ) (8.65b)
M^α(x, x) = c1(x)/(λ(x) + μ(x)) (8.65c)
M^β(1, ξ) = 0 (8.65d)

defined for T given in (1.1a). Well-posedness of (8.65) follows from a coordinate change (x, ξ) → (1 − ξ, 1 − x) and applying Theorem D.1 in Appendix D.2.

Theorem 8.2 Consider system (8.1) and observer (8.62), and let p1 and p2 be given by (8.64). Then

û ≡ u, v̂ ≡ v (8.66)

for t ≥ t F , with t F given in (8.8).

Proof By using (8.1) and (8.62), the observer errors ũ = u − û and ṽ = v − v̂ can
straightforwardly be shown to satisfy the dynamics

ũ t (x, t) + λ(x)ũ x (x, t) = c1 (x)ṽ(x, t) − p1 (x)ṽ(0, t) (8.67a)


ṽt (x, t) − μ(x)ṽx (x, t) = c2 (x)ũ(x, t) − p2 (x)ṽ(0, t) (8.67b)
ũ(0, t) = 0 (8.67c)
ṽ(1, t) = 0 (8.67d)
ũ(x, 0) = ũ 0 (x) (8.67e)
ṽ(x, 0) = ṽ0 (x), (8.67f)

which we write in vector form as

w̃t (x, t) + Λ(x)w̃x (x, t) = Π (x)w̃(x, t) − Pi (x)w̃(0, t) (8.68a)


w̃(0, t) = R0 w̃(0, t) (8.68b)
w̃(1, t) = R1 w̃(1, t) (8.68c)
w̃(x, 0) = w̃0 (x) (8.68d)

where we have used the definitions in (8.10) and additionally defined

w̃(x, t) = [ũ(x, t); ṽ(x, t)],  Pi(x) = [0 p1(x); 0 p2(x)],  R0 = [0 0; 0 1]. (8.69)

Consider the target system

γ̃t(x, t) + Λ(x)γ̃x(x, t) = Σ(x)γ̃(x, t) + ∫_0^x G(x, ξ)γ̃(ξ, t)dξ (8.70a)
γ̃(0, t) = R0 γ̃(0, t) (8.70b)
γ̃(1, t) = R1 γ̃(1, t) (8.70c)
γ̃(x, 0) = γ̃0 (x) (8.70d)

where we have defined

γ̃(x, t) = [α̃(x, t); β̃(x, t)],  γ̃0(x) = [α̃0(x); β̃0(x)] (8.71a)
Σ(x) = [0 0; c2(x) 0],  G(x, ξ) = [g1(x, ξ) 0; g2(x, ξ) 0] (8.71b)

for some functions g1 and g2 to be determined, defined over the triangular domain
T . Consider also the backstepping transformation
w̃(x, t) = γ̃(x, t) + ∫_0^x M(x, ξ)γ̃(ξ, t)dξ (8.72)

where

M(x, ξ) = [0 M^α(x, ξ); 0 M^β(x, ξ)] (8.73)

satisfies

Λ(x)Mx(x, ξ) + Mξ(x, ξ)Λ(ξ) = −M(x, ξ)Λ′(ξ) + Π(x)M(x, ξ) (8.74a)
Λ(x)M(x, x) − M(x, x)Λ(x) = Π(x) − Σ(x) (8.74b)
M(1, ξ) = R1 M(1, ξ), (8.74c)

which is equivalent to (8.65).

We will show that the backstepping transformation (8.72) maps the target system (8.70) into (8.68) with G given from the Volterra integral equation

G(x, ξ) = −M(x, ξ)Σ(ξ) − ∫_ξ^x M(x, s)G(s, ξ)ds (8.75)

which from Lemma 1.1 has a solution G.



From differentiating (8.72) with respect to time, inserting the dynamics (8.70a), integrating by parts and changing the order of integration in the double integral, we get

γ̃t(x, t) = w̃t(x, t) + M(x, x)Λ(x)γ̃(x, t) − M(x, 0)Λ(0)γ̃(0, t)
    − ∫_0^x [Mξ(x, ξ)Λ(ξ) + M(x, ξ)Λ′(ξ)] γ̃(ξ, t)dξ
    − ∫_0^x [M(x, ξ)Σ(ξ) + ∫_ξ^x M(x, s)G(s, ξ)ds] γ̃(ξ, t)dξ. (8.76)

Similarly, differentiating (8.72) with respect to space gives

γ̃x(x, t) = w̃x(x, t) − M(x, x)γ̃(x, t) − ∫_0^x Mx(x, ξ)γ̃(ξ, t)dξ (8.77)

Substituting (8.76) and (8.77) into (8.70a) gives

0 = γ̃t(x, t) + Λ(x)γ̃x(x, t) − Σ(x)γ̃(x, t) − ∫_0^x G(x, ξ)γ̃(ξ, t)dξ
= w̃t(x, t) + Λ(x)w̃x(x, t) − Π(x)w̃(x, t) − M(x, 0)Λ(0)γ̃(0, t)
    − ∫_0^x [Λ(x)Mx(x, ξ) + Mξ(x, ξ)Λ(ξ) + M(x, ξ)Λ′(ξ) − Π(x)M(x, ξ)] γ̃(ξ, t)dξ
    − ∫_0^x [G(x, ξ) + M(x, ξ)Σ(ξ) + ∫_ξ^x M(x, s)G(s, ξ)ds] γ̃(ξ, t)dξ
    − [Λ(x)M(x, x) − M(x, x)Λ(x) + Σ(x) − Π(x)] γ̃(x, t). (8.78)

Using the Eqs. (8.74a)–(8.74b), the identity γ̃(0, t) = w̃(0, t), letting G be the solu-
tion to the Volterra integral equation (8.75), and choosing

Pi (x) = −M(x, 0)Λ(0) (8.79)

which is equivalent to (8.64), yield the dynamics (8.68a).


Since w̃(0, t) = γ̃(0, t), the boundary condition (8.68b) immediately follows from (8.70b). Evaluating (8.72) at x = 1 and inserting into (8.70c) yield

w̃(1, t) = R1 w̃(1, t) + ∫_0^1 [M(1, ξ) − R1 M(1, ξ)] γ̃(ξ, t)dξ. (8.80)

Using (8.74c) gives (8.68c).



Expanding system (8.70) into its components gives

α̃t(x, t) + λ(x)α̃x(x, t) = ∫_0^x g1(x, ξ)α̃(ξ, t)dξ (8.81a)
β̃t(x, t) − μ(x)β̃x(x, t) = c2(x)α̃(x, t) + ∫_0^x g2(x, ξ)α̃(ξ, t)dξ (8.81b)
α̃(0, t) = 0 (8.81c)
β̃(1, t) = 0 (8.81d)
α̃(x, 0) = α̃0 (x) (8.81e)
β̃(x, 0) = β̃0 (x). (8.81f)

It is noted from (8.81a) and (8.81c) that the dynamics of α̃ is independent of β̃, and
will converge to zero in a finite time t1 . Thus, for t ≥ t1 , target system (8.70) reduces to

β̃t (x, t) − μ(x)β̃x (x, t) = 0 (8.82a)


β̃(1, t) = 0 (8.82b)
β̃(x, t1 ) = β̃t1 (x) (8.82c)

for some function β̃t1 . This system is a pure transport equation whose state will be
identically zero after the additional time t2 , and hence α̃ = β̃ ≡ 0 for t ≥ t1 + t2 =
t F . Due to the invertibility of the transformation (8.72), ũ = ṽ ≡ 0 as well, and thus
û ≡ u, v̂ ≡ v for t ≥ t F . 

8.3.2 Sensing Collocated with Actuation

An observer for system (8.1) using the sensing (8.4b) collocated with actuation is presented in Vazquez et al. (2011) for the case q ≠ 0. It is claimed in Vazquez et al. (2011) that it is necessary to use measurements of v at x = 0 to implement a boundary observer for values of q near zero. It turns out, however, that the proof can be accommodated to show that the observer proposed in Vazquez et al. (2011) also works for q = 0, but requires a slightly modified target system.
Consider the observer equations

û t (x, t) + λ(x)û x (x, t) = c1 (x)v̂(x, t) + p1 (x)(y1 (t) − û(1, t)) (8.83a)


v̂t (x, t) − μ(x)v̂x (x, t) = c2 (x)û(x, t) + p2 (x)(y1 (t) − û(1, t)) (8.83b)
û(0, t) = q v̂(0, t) (8.83c)
v̂(1, t) = U (t) (8.83d)
û(x, 0) = û 0 (x) (8.83e)
v̂(x, 0) = v̂0 (x) (8.83f)

for some initial conditions

û 0 , v̂0 ∈ B([0, 1]), (8.84)

and where p1 and p2 are injection gains given as

p1(x) = −λ(1)P^α(x, 1) (8.85a)
p2(x) = −λ(1)P^β(x, 1) (8.85b)

where (P α , P β ) is the solution to the PDE

λ(x)Px^α(x, ξ) + λ(ξ)Pξ^α(x, ξ) = −λ′(ξ)P^α(x, ξ) + c1(x)P^β(x, ξ) (8.86a)
μ(x)Px^β(x, ξ) − λ(ξ)Pξ^β(x, ξ) = −c2(x)P^α(x, ξ) + λ′(ξ)P^β(x, ξ) (8.86b)
P^α(0, ξ) = q P^β(0, ξ) (8.86c)
P^β(x, x) = −c2(x)/(λ(x) + μ(x)) (8.86d)

defined for S, given in (1.1c). Well-posedness of (8.86) is guaranteed by Theorem D.1 in Appendix D.2, following a domain flip (x, ξ) → (ξ, x).

Theorem 8.3 Consider system (8.1) and observer (8.83) with injection gains p1 and
p2 given as (8.85). Then

û ≡ u, v̂ ≡ v (8.87)

for t ≥ t F , where t F is defined in (8.8).

Proof The observer errors ũ = u − û and ṽ = v − v̂ can, using (8.1) and (8.83), be
shown to satisfy the dynamics

ũ t (x, t) + λ(x)ũ x (x, t) = c1 (x)ṽ(x, t) − p1 (x)ũ(1, t) (8.88a)


ṽt (x, t) − μ(x)ṽx (x, t) = c2 (x)ũ(x, t) − p2 (x)ũ(1, t) (8.88b)
ũ(0, t) = q ṽ(0, t) (8.88c)
ṽ(1, t) = 0, (8.88d)
ũ(x, 0) = ũ 0 (x) (8.88e)
ṽ(x, 0) = ṽ0 (x) (8.88f)

where ũ0 = u0 − û0, ṽ0 = v0 − v̂0, which can be written in vector form as

w̃t (x, t) + Λ(x)w̃x (x, t) = Π (x)w̃(x, t) − Pi (x)w̃(1, t) (8.89a)


w̃(0, t) = Q 0 w̃(0, t) (8.89b)
w̃(1, t) = R1 w̃(1, t) (8.89c)

w̃(x, 0) = w̃0 (x) (8.89d)

where Λ, Π, Q0, R1 are defined in (8.10), and

w̃(x, t) = [ũ(x, t); ṽ(x, t)],  Pi(x) = [p1(x) 0; p2(x) 0],  w̃0(x) = [ũ0(x); ṽ0(x)]. (8.90)

Consider the following target system

γ̃t(x, t) + Λ(x)γ̃x(x, t) = Ω(x)γ̃(x, t) − ∫_x^1 D(x, ξ)γ̃(ξ, t)dξ (8.91a)
γ̃(0, t) = Q 0 γ̃(0, t) (8.91b)
γ̃(1, t) = R1 γ̃(1, t) (8.91c)
γ̃(x, 0) = γ̃0 (x) (8.91d)

where Ω is defined in (8.12), and D is a matrix in the form

D(x, ξ) = [0 d1(x, ξ); 0 d2(x, ξ)] (8.92)

for some functions d1 and d2 defined over S. Consider the following backstepping transformation

w̃(x, t) = γ̃(x, t) − ∫_x^1 P(x, ξ)γ̃(ξ, t)dξ (8.93)

where

P(x, ξ) = [P^α(x, ξ) 0; P^β(x, ξ) 0] (8.94)

satisfies

Λ(x)Px(x, ξ) + Pξ(x, ξ)Λ(ξ) = Π(x)P(x, ξ) − P(x, ξ)Λ′(ξ) (8.95a)
Λ(x)P(x, x) − P(x, x)Λ(x) = Π(x) − Ω(x) (8.95b)
P(0, ξ) = Q0 P(0, ξ) (8.95c)

which is equivalent to (8.86). We will show that the backstepping transformation (8.93) maps the target system (8.91) into the error system (8.89), with D given from

D(x, ξ) = −P(x, ξ)Ω(ξ) + ∫_x^ξ P(x, s)D(s, ξ)ds (8.96)

which from Lemma 1.1 has a solution D.

By differentiating (8.93) with respect to time, inserting the dynamics (8.91a), integrating by parts, and changing the order of integration in the double integral, we find

γ̃t(x, t) = w̃t(x, t) − P(x, 1)Λ(1)γ̃(1, t) + P(x, x)Λ(x)γ̃(x, t)
    + ∫_x^1 [Pξ(x, ξ)Λ(ξ) + P(x, ξ)Λ′(ξ)] γ̃(ξ, t)dξ
    + ∫_x^1 [P(x, ξ)Ω(ξ) − ∫_x^ξ P(x, s)D(s, ξ)ds] γ̃(ξ, t)dξ. (8.97)

Similarly, differentiating (8.93) with respect to space yields

γ̃x(x, t) = w̃x(x, t) − P(x, x)γ̃(x, t) + ∫_x^1 Px(x, ξ)γ̃(ξ, t)dξ. (8.98)

Inserting (8.97) and (8.98) into (8.91a) yields

0 = γ̃t(x, t) + Λ(x)γ̃x(x, t) − Ω(x)γ̃(x, t) + ∫_x^1 D(x, ξ)γ̃(ξ, t)dξ
= w̃t(x, t) + Λ(x)w̃x(x, t) − Π(x)w̃(x, t) − P(x, 1)Λ(1)γ̃(1, t)
    + ∫_x^1 [Λ(x)Px(x, ξ) + Pξ(x, ξ)Λ(ξ) + P(x, ξ)Λ′(ξ) − Π(x)P(x, ξ)] γ̃(ξ, t)dξ
    + ∫_x^1 [D(x, ξ) + P(x, ξ)Ω(ξ) − ∫_x^ξ P(x, s)D(s, ξ)ds] γ̃(ξ, t)dξ
    − [Λ(x)P(x, x) − P(x, x)Λ(x) − Π(x) + Ω(x)] γ̃(x, t). (8.99)

Using (8.95a)–(8.95b), the identity γ̃(1, t) = w̃(1, t), setting

Pi (x) = −P(x, 1)Λ(1) (8.100)

which is equivalent to (8.85), and choosing D as the solution to the Volterra integral
equation (8.96), we obtain the dynamics (8.89a). The existence of a solution D of
(8.96) is guaranteed by Lemma 1.1. Inserting (8.93) into (8.91b) gives
w̃(0, t) = Q0 w̃(0, t) + ∫_0^1 [Q0 P(0, ξ) − P(0, ξ)] γ̃(ξ, t)dξ (8.101)

Using (8.95c) yields (8.89b). The identity γ̃(1, t) = w̃(1, t) immediately yields the
boundary condition (8.89c) from (8.91c).

The system (8.91) written out in its components is

α̃t(x, t) + λ(x)α̃x(x, t) = c1(x)β̃(x, t) − ∫_x^1 d1(x, ξ)β̃(ξ, t)dξ (8.102a)
β̃t(x, t) − μ(x)β̃x(x, t) = −∫_x^1 d2(x, ξ)β̃(ξ, t)dξ (8.102b)
α̃(0, t) = q β̃(0, t) (8.102c)
β̃(1, t) = 0 (8.102d)
α̃(x, 0) = α̃0 (x) (8.102e)
β̃(x, 0) = β̃0 (x), (8.102f)

which is a cascade system from β̃ into α̃. The β̃-subsystem is independent of α̃, and
converges to zero in a finite time given by the propagation time through the domain.
Hence for t ≥ t2 , β̃ ≡ 0, and the subsystem α̃ reduces to

α̃t (x, t) + λ(x)α̃x (x, t) = 0 (8.103a)


α̃(0, t) = 0 (8.103b)
α̃(x, t2 ) = α̃t2 (x) (8.103c)

for some function α̃t2 , and consequently, α̃ ≡ 0 for t ≥ t1 + t2 = t F . Due to the


invertibility of the transformation (8.93), ũ = ṽ ≡ 0 and hence û ≡ u, v̂ ≡ v for
t ≥ t1 + t2 = t F as well. 

8.4 Output Feedback Controllers

As with scalar systems, the state estimates generated by the observers of Theorems 8.3
and 8.2 converge to their true values in finite time, and hence, designing output-
feedback controllers is almost trivial (separation principle). However, we formally
state these results in the following two theorems.

8.4.1 Sensing Anti-collocated with Actuation

Theorem 8.4 Consider system (8.1). Let the controller be taken as

U(t) = ∫_0^1 [K^u(1, ξ)û(ξ, t) + K^v(1, ξ)v̂(ξ, t)] dξ (8.104)

where (K u , K v ) is the solution to the PDE (8.6), and û and v̂ are generated using
the observer of Theorem 8.2. Then

u=v≡0 (8.105)

for t ≥ 2t F , where t F is defined in (8.8).


Proof From Theorem 8.2, we have û ≡ u and v̂ ≡ v for t ≥ t F. The control law (8.104) then equals the controller of Theorem 8.1, which achieves u = v ≡ 0 after an additional time t F. Hence, after a total time 2t F, we have u = v ≡ 0. □

8.4.2 Sensing Collocated with Actuation

Theorem 8.5 Consider system (8.1). Let the controller be taken as


U(t) = ∫_0^1 [K^u(1, ξ)û(ξ, t) + K^v(1, ξ)v̂(ξ, t)] dξ (8.106)

where (K u , K v ) is the solution to the PDE (8.6), and û and v̂ are generated using
the observer of Theorem 8.3. Then

u=v≡0 (8.107)

for t ≥ 2t F , where t F is defined in (8.8).


Proof From Theorem 8.3, we have û ≡ u and v̂ ≡ v for t ≥ t F . The control law
(8.106) then equals the controller of Theorem 8.1, which achieves u = v ≡ 0 after
an additional time t F . Hence, after a total time 2t F , we have u = v ≡ 0. 

8.5 Output Tracking Controller

The goal here is to design a control law U so that the measurement y0 (t) = v(0, t)
of system (8.1) tracks a reference signal r (t).
Theorem 8.6 Consider system (8.1). Let the control law be taken as
U(t) = ∫_0^1 [K^u(1, ξ)u(ξ, t) + K^v(1, ξ)v(ξ, t)] dξ + r(t + t2), (8.108)

where (K u , K v ) is the solution to the PDE (8.6). Then

y0 (t) = v(0, t) = r (t) (8.109)



for t ≥ t2 , where t2 is defined in (8.8). Moreover, if r ∈ L∞ , then

||u||∞ , ||v||∞ ∈ L∞ . (8.110)

Proof As part of the proof of Theorem 8.1, it is shown that the backstepping transformation (8.13) maps system (8.1) with measurement (8.4a) into system (8.11), which we restate here

αt(x, t) + λ(x)αx(x, t) = c1(x)β(x, t) + ∫_0^x b1(x, ξ)α(ξ, t)dξ
    + ∫_0^x b2(x, ξ)β(ξ, t)dξ (8.111a)
βt(x, t) − μ(x)βx(x, t) = 0 (8.111b)
α(0, t) = qβ(0, t) (8.111c)
β(1, t) = U(t) − ∫_0^1 [K^u(1, ξ)u(ξ, t) + K^v(1, ξ)v(ξ, t)] dξ (8.111d)
α(x, 0) = α0(x) (8.111e)
β(x, 0) = β0(x) (8.111f)
y0(t) = β(0, t), (8.111g)

where we have added the measurement (8.111g), which follows immediately from
substituting x = 0 into (8.13), resulting in β(0, t) = v(0, t), and hence y0 (t) =
β(0, t). In the state feedback stabilizing control design of Theorem 8.1, U is chosen
as (8.5), to obtain the boundary condition β(1, t) = 0, stabilizing the system.
From the structure of the subsystem in β consisting of (8.111b) and (8.111d), it
is clear that

β(0, t) = β(1, t − t2 ) (8.112)

for t ≥ t2 . Choosing the control law as (8.108), the boundary condition (8.111d),
becomes

β(1, t) = r (t + t2 ) (8.113)

and hence

y0 (t) = v(0, t) = β(0, t) = r (t) (8.114)

for t ≥ t2 , which is the tracking goal.


Moreover, β is now a pure transport equation, with r as input. If r ∈ L∞ , then
||β||∞ ∈ L∞ . The cascade structure of (8.111) will then also imply ||α||∞ ∈ L∞ ,

Fig. 8.1 Left: Controller gains K^vu(1, x) (solid red) and K^vv(1, x) (dashed-dotted blue). Right: Observer gains p1(x) (solid red) and p2(x) (dashed-dotted blue)

and by the invertibility of the transformation (8.13) (Theorem 1.2), ||u||∞, ||v||∞ ∈ L∞ follows. □
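The delay relation (8.112) behind the tracking result can be illustrated on the transport part alone: feeding the boundary with the time-advanced reference makes the anti-collocated output reproduce r(t) after t2. A minimal upwind sketch (discretization parameters are illustrative):

```python
import numpy as np

# Upwind illustration of (8.112): with beta(1, t) = r(t + t2), the output
# beta(0, t) tracks r(t) for t > t2, up to discretization error.
mu = lambda x: 1.0 + np.exp(x)
r = lambda t: np.sin(2.0 * np.pi * t)

N = 800
x = np.linspace(0.0, 1.0, N + 1)
dx = x[1] - x[0]
dt = dx / 4.0                                  # CFL: dt * max(mu) / dx < 1
y = 1.0 / mu(x)
t2 = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))   # ~0.3799

beta = np.zeros(N + 1)
t = 0.0
ts, y0 = [], []
while t < 1.5:
    beta[:-1] += dt * mu(x[:-1]) * (beta[1:] - beta[:-1]) / dx
    t += dt
    beta[-1] = r(t + t2)                       # time-advanced reference input
    ts.append(t)
    y0.append(beta[0])

ts, y0 = np.array(ts), np.array(y0)
mask = ts > 0.6                                # past the initial transient
print(np.max(np.abs(y0[mask] - r(ts[mask]))))  # small tracking error
```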

The above tracking controller can also be combined with the observer of Theorem 8.3 or 8.2 to solve the tracking problem from output feedback in a finite time t F + t2. Note that if the observer of Theorem 8.3 is used, the signal y0(t) for which tracking is achieved need not be measured.

8.6 Simulations

System (8.1) with the state feedback controller of Theorem 8.1, the collocated
observer of Theorem 8.3, the output feedback controller of Theorem 8.5 and the
tracking controller of Theorem 8.6 are implemented using the system parameters

λ(x) = 1 + x,  μ(x) = 1 + e^x (8.115a)
c1(x) = 1 + cosh(x),  c2(x) = 1 + x,  q = 2 (8.115b)

and initial conditions

u 0 ≡ 1, v0 (x) = sin(x). (8.116)

From the above transport speeds, we compute

t1 = ∫_0^1 ds/λ(s) = ∫_0^1 ds/(1 + s) = ln(2) ≈ 0.6931 (8.117a)
t2 = ∫_0^1 ds/μ(s) = ∫_0^1 ds/(1 + e^s) = 1 − ln((1 + e)/2) ≈ 0.3799 (8.117b)
t F = t1 + t2 ≈ 1.0730. (8.117c)
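These values are easy to double-check by numerical quadrature before trusting t F in an implementation; a trapezoidal sum per integral suffices:

```python
import numpy as np

# Quadrature check of the propagation times (8.117) for lambda(x) = 1 + x
# and mu(x) = 1 + e^x (trapezoidal rule on a fine grid).
x = np.linspace(0.0, 1.0, 200001)

def trap(y):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

t1 = trap(1.0 / (1.0 + x))
t2 = trap(1.0 / (1.0 + np.exp(x)))
print(t1, t2, t1 + t2)   # ~0.6931, ~0.3799, ~1.0730
```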


Fig. 8.2 Left: State norm during state feedback (solid red), output feedback (dashed-dotted blue)
and output tracking (dashed green). Right: State estimation error norm


Fig. 8.3 Left: Actuation signal during state feedback (solid red), output feedback (dashed-dotted
blue) and output tracking (dashed green). Right: Reference r (solid black) and measured signal
(dashed green) during tracking

The controller and observer gains are shown in Fig. 8.1. It is observed from Fig. 8.2
that the norm of the state estimation error from using the observer of Theorem 8.3
converges to zero in t F time. Moreover, the state norm is zero for t ≥ t F for the
state feedback case, zero for t ≥ 2t F for the output feedback case and bounded for
the tracking case, in accordance with the theory. The same is true for the respective
actuation signals as shown in Fig. 8.3. Finally, the tracking objective is achieved for
t ≥ t2 , as stated in Theorem 8.6.

8.7 Notes

The solution (K u , K v ) of the PDE (8.6) is required for implementation of the control
law of Theorem 8.1. These are generally non-trivial to solve, but since they are
static, they can be solved once and for all prior to implementation. The execution
time of a solver is therefore of minor concern. For the special case of constant
system parameters in (7.1) (which can be transformed to the form (7.4) required
by Theorem 8.1 by the linear transformation (7.7), creating exponentially weighted
coefficients c1 and c2 ), explicit solutions to (8.6) are available in Vazquez and Krstić
(2014). The solutions are quite complicated, involving Bessel functions of the first
kind and the generalized first order Marcum Q-function (Marcum 1950).

In Sect. 8.5, we solved a tracking problem for the output y0 (t) = v(0, t) anti-
collocated with actuation. The tracking problem for the collocated output y1 (t) =
u(1, t), however, is much harder. It is solved in Deutscher (2017) for a restricted class
of reference signals, namely ones generated using an autonomous linear system,
particularly aimed at modeling biased harmonic oscillators. Tracking is achieved
subject to some assumptions on the systems parameters. The problem of making y1
track some arbitrary, bounded reference signal, however, is at present still an open
problem. The difficulty arises from the backstepping transformation (8.13). For the
anti-collocated case, the simple relationship (8.111g) between the measurement y0
and the new backstepping variable β can be utilized. For the collocated case, the
backstepping transformation (8.13) gives the equally simple relationship y1 (t) =
u(1, t) = α(1, t), however, any signal propagating in α whose dynamics is given by
(8.26a) and (8.26c), is distorted by the integral terms and source term in (8.26a).
When attempting to use the decoupling backstepping transformation (8.44), with
inverse (8.48), the relationship to the new variables is
y1(t) = α(1, t) + ∫_0^1 K^uu(1, ξ)u(ξ, t)dξ + ∫_0^1 K^uv(1, ξ)v(ξ, t)dξ
      = α(1, t) + ∫_0^1 L^αα(1, ξ)α(ξ, t)dξ + ∫_0^1 L^αβ(1, ξ)β(ξ, t)dξ (8.118)

which contains weighted integrals of the states. So either way, complications occur
for the collocated case which are not present in the anti-collocated case.
The optimal control problem for (8.1) is investigated in Hasan et al. (2016). The
resulting controller requires the solution to a set of co-state equations propagating
backwards in time. It is hence non-causal and not possible to implement on-line.
However, it can be the basis for the derivation of a linear quadratic regulator (LQR)
state-feedback law for the infinite horizon, requiring the solution to non-linear, distributed Riccati equations. This is attempted in Hasan et al. (2016), but the validity
of this controller is questionable as it does not involve any state-feedback from the
state u. In Anfinsen and Aamo (2017) a state-feedback inverse optimal controller
is derived for system (8.1) with constant transport speeds, which avoids the need to
solve Riccati equations often associated with optimal controllers, and exponentially
stabilizes the system in the L 2 -sense, while also minimizing a cost function that is
positive definite in the system states and control signal. However, the finite-time con-
vergence property of the backstepping controller is lost. Some remarkable features
of the resulting inverse optimal control law are that it is simply a scaled version of
the backstepping controller of Theorem 8.1, and that it approaches the backstepping
controller when the cost of actuation approach zero.

References

Anfinsen H, Aamo OM (2017) Inverse optimal stabilization of 2 × 2 linear hyperbolic partial differential equations. In: Mediterranean conference on control and automation (MED) 2017, Valletta, Malta
Deutscher J (2017) Finite-time output regulation for linear 2 × 2 hyperbolic systems using back-
stepping. Automatica 75:54–62
Di Meglio F, Vazquez R, Krstić M (2013) Stabilization of a system of n + 1 coupled first-order
hyperbolic linear PDEs with a single boundary input. IEEE Trans Autom Control 58(12):3097–
3111
Hasan A, Imsland L, Ivanov I, Kostova S, Bogdanova B (2016) Optimal boundary control of 2 × 2 linear hyperbolic PDEs. In: Mediterranean conference on control and automation (MED) 2016, Athens, Greece, pp 164–169
Lamare P-O, Bekiaris-Liberis N (2015) Control of 2 × 2 linear hyperbolic systems: backstepping-
based trajectory generation and PI-based tracking. Syst Control Lett 86:24–33
Marcum JI (1950) Table of Q functions. Technical report, U.S. Air Force RAND Research Memo-
randum M-339. Rand Corporation, Santa Monica, CA
Vazquez R, Krstić M (2014) Marcum Q-functions and explicit kernels for stabilization of 2 × 2
linear hyperbolic systems with constant coefficients. Syst Control Lett 68:33–42
Vazquez R, Krstić M, Coron JM (2011) Backstepping boundary stabilization and state estimation of a 2 × 2 linear hyperbolic system. In: 2011 50th IEEE conference on decision and control and European control conference (CDC-ECC), pp 4937–4942
Chapter 9
Adaptive State Feedback Controllers

9.1 Introduction

In this chapter, we present the book’s first adaptive stabilizing controllers for 2 × 2
systems. These are state-feedback solutions requiring full state measurements. The
first result on adaptive control of 2 × 2 systems is given in the back-to-back papers
Anfinsen and Aamo (2016a, b), for a system in the form (7.1), but with constant
in-domain parameters, that is

u t (x, t) + λu x (x, t) = c11 u(x, t) + c12 v(x, t) (9.1a)


vt (x, t) − μvx (x, t) = c21 u(x, t) + c22 v(x, t) (9.1b)
u(0, t) = qv(0, t) (9.1c)
v(1, t) = U (t) (9.1d)
u(x, 0) = u 0 (x) (9.1e)
v(x, 0) = v0 (x) (9.1f)

where

λ, μ, c11 , c12 , c21 , c22 , q ∈ R, λ, μ > 0, (9.2)

and

u 0 , v0 ∈ B([0, 1]). (9.3)

The problem considered in Anfinsen and Aamo (2016a, b) is stabilization to zero in L^2([0, 1]), assuming c_ij uncertain, and is solved using identifier-based and swapping-based design, respectively. In Anfinsen and Aamo (2018), both methods are extended
to also cover having an uncertain boundary parameter q, and the stabilization result
is strengthened to provide pointwise convergence to zero.

© Springer Nature Switzerland AG 2019 147


H. Anfinsen and O. M. Aamo, Adaptive Control of Hyperbolic PDEs,
Communications and Control Engineering,
https://doi.org/10.1007/978-3-030-05879-1_9

Extension to the case of spatially varying coefficients is straightforward for the identifier-based method, but more involved for the swapping method. One such solution is given in Anfinsen and Aamo (2017) for systems in the form (7.4), which we restate here
u t (x, t) + λ(x)u x (x, t) = c1 (x)v(x, t) (9.4a)
vt (x, t) − μ(x)vx (x, t) = c2 (x)u(x, t) (9.4b)
u(0, t) = qv(0, t) (9.4c)
v(1, t) = U (t) (9.4d)
u(x, 0) = u 0 (x) (9.4e)
v(x, 0) = v0 (x) (9.4f)

where

λ, μ ∈ C 1 ([0, 1]), λ(x), μ(x) > 0, ∀x ∈ [0, 1] (9.5a)
c1 , c2 ∈ C 0 ([0, 1]), q ∈ R (9.5b)

and

u 0 , v0 ∈ B([0, 1]). (9.6)

The solution offered in Anfinsen and Aamo (2017) requires a substantially different
set of swapping filters, which in turn leads to a more comprehensive stability proof.
In this chapter, we present in Sect. 9.2 the identifier-based solution from Anfinsen
and Aamo (2018) for the constant-coefficient system (9.1). In Sect. 9.3, we present
the swapping-based solution for the spatially-varying coefficient system (9.4).
We emphasize that the controllers in this chapter require state feedback. That is, they assume that distributed measurements of the states in the domain are available, which is rarely the case in practice. The more realistic case of taking measurements only at the boundary of the domain is treated in Chaps. 10 and 11.

9.2 Identifier-Based Design for a System with Constant Coefficients

9.2.1 Identifier and Adaptive Laws

We will use the following assumption in defining the adaptive laws.


Assumption 9.1 Bounds are known on all uncertain parameters, that is: constants
c̄11 , c̄12 , c̄21 , c̄22 , q̄ are known so that

|c11 | ≤ c̄11 , |c12 | ≤ c̄12 , |c21 | ≤ c̄21 , |c22 | ≤ c̄22 , |q| ≤ q̄. (9.7)

Since the bounds are arbitrary, this assumption is not a limitation.



Consider now the following identifier for system (9.1) consisting of

û t (x, t) + λû x (x, t) = Φ T (x, t)b̂1 (t) + ρe(x, t)||Φ(t)||2 (9.8a)
v̂t (x, t) − μv̂x (x, t) = Φ T (x, t)b̂2 (t) + ρε(x, t)||Φ(t)||2 (9.8b)
û(0, t) = [q̂(t)v(0, t) + u(0, t)v 2 (0, t)]/[1 + v 2 (0, t)] (9.8c)
v̂(1, t) = U (t) (9.8d)
û(x, 0) = û 0 (x) (9.8e)
v̂(x, 0) = v̂0 (x) (9.8f)

for some initial conditions

û 0 , v̂0 ∈ B([0, 1]), (9.9)

and the adaptive laws


b̂˙1 (t) = proj b̄1 (Γ1 ∫_0^1 e−γx (u(x, t) − û(x, t))Φ(x, t)d x, b̂1 (t)) (9.10a)
b̂˙2 (t) = proj b̄2 (Γ2 ∫_0^1 eγx (v(x, t) − v̂(x, t))Φ(x, t)d x, b̂2 (t)) (9.10b)
q̂˙(t) = proj q̄ (γ5 (u(0, t) − û(0, t))v(0, t), q̂(t)) (9.10c)
b̂1 (0) = b̂1,0 (9.10d)
b̂2 (0) = b̂2,0 (9.10e)
q̂(0) = q̂0 (9.10f)

where proj denotes the projection operator given in Appendix A, ρ, γ, γ5 > 0 are
scalar design gains, and

Γ1 = diag{γ1 , γ2 }, Γ2 = diag{γ3 , γ4 } (9.11)

are design matrices with components γ1 , γ2 , γ3 , γ4 > 0, and


Φ(x, t) = [u(x, t) v(x, t)]T . (9.12)

Define
b1 = [c11 c12 ]T , b2 = [c21 c22 ]T (9.13)

and let b̂1 and b̂2 be estimates of b1 and b2 , respectively, and let
b̄1 = [c̄11 c̄12 ]T , b̄2 = [c̄21 c̄22 ]T (9.14)

be bounds on b1 and b2 , respectively, where c̄11 , c̄12 , c̄21 , c̄22 , q̄ are given in Assumption 9.1. The initial guesses b̂1,0 = [ĉ11,0 ĉ12,0 ]T , b̂2,0 = [ĉ21,0 ĉ22,0 ]T and q̂0 are chosen inside the feasible domain, that is

|ĉ11,0 | ≤ c̄11 , |ĉ12,0 | ≤ c̄12 , |ĉ21,0 | ≤ c̄21 , |ĉ22,0 | ≤ c̄22 , |q̂0 | ≤ q̄. (9.15)
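For implementation, the projection in (9.10) can be realized componentwise. A minimal scalar sketch in Python (a deliberately crude, discontinuous version; the operator in Appendix A may be a smoother variant, and all names here are illustrative):

```python
def proj(tau, theta_hat, bound):
    """Pass the raw update tau through, unless theta_hat already sits on the
    boundary |theta_hat| = bound and tau would push it further outside."""
    if abs(theta_hat) >= bound and theta_hat * tau > 0:
        return 0.0
    return tau
```

Applied to (9.10c), the raw update γ5 (u(0, t) − û(0, t))v(0, t) is passed through proj with bound q̄, which is what keeps |q̂(t)| ≤ q̄ in (9.16a).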

Lemma 9.1 Consider system (9.1). The identifier (9.8)–(9.10) with initial conditions
(9.15) guarantees

|ĉi j (t)| ≤ c̄i j , i, j = 1, 2, |q̂(t)| ≤ q̄, ∀t ≥ 0 (9.16a)
||e||, ||ε|| ∈ L∞ ∩ L2 (9.16b)
||e|| ||Φ||, ||ε|| ||Φ|| ∈ L2 (9.16c)
e(0, ·), e(1, ·), ε(0, ·), |e(0, ·)v(0, ·)| ∈ L2 (9.16d)
b̂˙1 , b̂˙2 , q̂˙ ∈ L2 (9.16e)
q̃v(0, ·)/(1 + v 2 (0, ·)) ∈ L2 (9.16f)

where

e(x, t) = u(x, t) − û(x, t), ε(x, t) = v(x, t) − v̂(x, t). (9.17)

Proof Property (9.16a) follows trivially from projection in (9.10) and Lemma A.1
in Appendix A.1. The dynamics of (9.17) is

et (x, t) + λex (x, t) = Φ T (x, t)b̃1 (t) − ρe(x, t)||Φ(t)||2 (9.18a)
εt (x, t) − μεx (x, t) = Φ T (x, t)b̃2 (t) − ρε(x, t)||Φ(t)||2 (9.18b)
e(0, t) = q̃(t)v(0, t)/(1 + v 2 (0, t)) (9.18c)
ε(1, t) = 0 (9.18d)
e(x, 0) = e0 (x) (9.18e)
ε(x, 0) = ε0 (x) (9.18f)

where

q̃(t) = q − q̂(t), b̃1 (t) = b1 − b̂1 (t), b̃2 (t) = b2 − b̂2 (t). (9.19)

Consider the Lyapunov function candidate

V1 (t) = V2 (t) + b̃1T (t)Γ1−1 b̃1 (t) + b̃2T (t)Γ2−1 b̃2 (t) + (λ/(2γ5 ))q̃ 2 (t) (9.20)

where
V2 (t) = ∫_0^1 e−γx e2 (x, t)d x + ∫_0^1 eγx ε2 (x, t)d x. (9.21)

Differentiating (9.20) with respect to time and inserting the dynamics (9.18a)–
(9.18b), integrating by parts and using the boundary condition (9.18d), we find
V̇1 (t) = −λe−γ e2 (1, t) + λe2 (0, t) − λγ ∫_0^1 e−γx e2 (x, t)d x
+ 2 ∫_0^1 e−γx e(x, t)Φ T (x, t)b̃1 (t)d x − 2ρ ∫_0^1 e−γx e2 (x, t)||Φ(t)||2 d x
− με2 (0, t) − μγ ∫_0^1 eγx ε2 (x, t)d x + 2 ∫_0^1 eγx ε(x, t)Φ T (x, t)b̃2 (t)d x
− 2ρ ∫_0^1 eγx ε2 (x, t)||Φ(t)||2 d x + 2b̃1T (t)Γ1−1 b̃˙1 (t) + 2b̃2T (t)Γ2−1 b̃˙2 (t)
+ λγ5−1 q̃(t)q̃˙(t). (9.22)

Inserting the adaptive laws (9.10), and using the property −b̃1T (t)Γ1 projb̄1 (τ (t), b̂1 (t))
≤ −b̃1T (t)Γ1 τ (t) (Lemma A.1 in Appendix A) and similarly for b̃2 and q̃, give
V̇1 (t) ≤ −λe−γ e2 (1, t) + λe2 (0, t) − λγ ∫_0^1 e−γx e2 (x, t)d x
− 2ρ ∫_0^1 e−γx e2 (x, t)||Φ(t)||2 d x − με2 (0, t) − μγ ∫_0^1 eγx ε2 (x, t)d x
− 2ρ ∫_0^1 eγx ε2 (x, t)||Φ(t)||2 d x − λq̃(t)e(0, t)v(0, t). (9.23)

From the boundary condition (9.18c), we have the relationship

e(0, t) = q̃(t)v(0, t) − e(0, t)v 2 (0, t), (9.24)

and inserting this, we obtain

V̇1 (t) ≤ −λe−γ e2 (1, t) − λe2 (0, t)v 2 (0, t) − λγe−γ ||e(t)||2
− 2ρe−γ ||e(t)||2 ||Φ(t)||2 − με2 (0, t)
− μγ||ε(t)||2 − 2ρ||ε(t)||2 ||Φ(t)||2 (9.25)

which shows that V1 is bounded and, from the definitions of V1 and V2 , that ||e||, ||ε|| ∈ L∞ . Integrating (9.25) in time from zero to infinity gives ||e||, ||ε|| ∈ L2 , (9.16c) and |e(1, ·)|, |ε(0, ·)|, |e(0, ·)v(0, ·)| ∈ L2 . From the properties (9.16c), |e(0, ·)v(0, ·)| ∈ L2 and the adaptive laws (9.10), property (9.16e) follows. Using the following Lyapunov function candidate
V3 (t) = (1/(2γ5 ))q̃ 2 (t), (9.26)

and the property −q̃(t)γprojq̄ (τ (t), q̂(t)) ≤ −q̃(t)γτ (t) (Lemma A.1 in Appendix
A), we find

V̇3 (t) ≤ −q̃(t)e(0, t)v(0, t) ≤ −q̃ 2 (t)v 2 (0, t)/(1 + v 2 (0, t)). (9.27)

This means that V3 is bounded from above, and hence V3 ∈ L∞ . Integrating (9.27)
from zero to infinity gives (9.16f). From (9.24) and (9.18c), we have

e2 (0, t) = e(0, t)(q̃(t)v(0, t) − e(0, t)v 2 (0, t))
= q̃ 2 (t)v 2 (0, t)/(1 + v 2 (0, t)) − e2 (0, t)v 2 (0, t) (9.28)

and from |e(0, ·)v(0, ·)| ∈ L2 and (9.16f), |e(0, ·)| ∈ L2 follows.

9.2.2 Control Law

Consider the following equations in K̂ u (x, ξ, t), K̂ v (x, ξ, t)

μ K̂ xu (x, ξ, t) − λ K̂ ξu (x, ξ, t) = (ĉ11 (t) − ĉ22 (t)) K̂ u (x, ξ, t) + ĉ21 (t) K̂ v (x, ξ, t) (9.29a)
μ K̂ xv (x, ξ, t) + μ K̂ ξv (x, ξ, t) = ĉ12 (t) K̂ u (x, ξ, t) (9.29b)
K̂ u (x, x, t) = −ĉ21 (t)/(λ + μ) (9.29c)
K̂ v (x, 0, t) = (λ/μ)q̂(t) K̂ u (x, 0, t) (9.29d)

defined over T1 , given in (1.1b). By Theorem D.1 in Appendix D.2, Eq. (9.29) has a unique, bounded solution for every time t, and since the set of admissible ĉ11 , ĉ12 , ĉ21 , ĉ22 , q̂ is compact due to projection, it follows that there exists a constant K̄ so that

|| K̂ u (t)||∞ ≤ K̄ , || K̂ v (t)||∞ ≤ K̄ , ∀t ≥ 0. (9.30)



Additionally, from differentiating the Eq. (9.29) with respect to time, applying The-
orem D.1 in Appendix D.2 on the resulting equations, and using (9.16e), we obtain

|| K̂ tu ||, || K̂ tv || ∈ L2 . (9.31)

Property (9.31) is crucial for the closed loop analysis that follows.
Consider now the control law
U (t) = ∫_0^1 K̂ u (1, ξ, t)û(ξ, t)dξ + ∫_0^1 K̂ v (1, ξ, t)v̂(ξ, t)dξ (9.32)

where ( K̂ u , K̂ v ) is the solution to (9.29), and û, v̂ are the states of the identifier (9.8).
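Numerically, evaluating (9.32) at each sample time amounts to two quadratures along ξ at x = 1. A sketch using the trapezoidal rule (grid, array names and sizes are illustrative assumptions, not the book's implementation):

```python
import numpy as np

def control_signal(K_u_row, K_v_row, u_hat, v_hat, xi):
    """U(t) per (9.32): integrate K̂u(1,·,t)û(·,t) and K̂v(1,·,t)v̂(·,t) over [0,1].
    K_u_row, K_v_row hold the kernel rows at x = 1 sampled on the grid xi."""
    return np.trapz(K_u_row * u_hat, xi) + np.trapz(K_v_row * v_hat, xi)
```

Since the kernels depend on the parameter estimates, they are re-solved (or updated) whenever the estimates change, and only their rows at x = 1 enter the control signal.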

Theorem 9.1 Consider system (9.1) and identifier (9.8)–(9.10). The control law
(9.32) guarantees

||u||, ||v||, ||û||, ||v̂||, ||u||∞ , ||v||∞ , ||û||∞ , ||v̂||∞ ∈ L2 ∩ L∞ (9.33a)
||u||, ||v||, ||û||, ||v̂||, ||u||∞ , ||v||∞ , ||û||∞ , ||v̂||∞ → 0 (9.33b)

The proof of Theorem 9.1 is the subject of the next sections.

Remark 9.1 The particular controller kernel equation (9.29) can, by a change of
variables, be brought into the form for which explicit solutions are given in Vazquez
and Krstić (2014).

9.2.3 Backstepping Transformation

For every time t ≥ 0, consider the following adaptive backstepping transformation

w(x, t) = û(x, t) (9.34a)
z(x, t) = v̂(x, t) − ∫_0^x K̂ u (x, ξ, t)û(ξ, t)dξ − ∫_0^x K̂ v (x, ξ, t)v̂(ξ, t)dξ = T [û, v̂](x, t) (9.34b)

where ( K̂ u , K̂ v ) is the solution to (9.29). Since K̂ u and K̂ v are uniformly bounded,


the transformation (9.34) is an invertible backstepping transformation, with inverse
in the same form

û(x, t) = w(x, t) (9.35a)
v̂(x, t) = T −1 [w, z](x, t) (9.35b)

where T −1 is an operator similar to T . Consider also the target system


wt (x, t) + λwx (x, t) = ĉ11 (t)w(x, t) + ĉ12 (t)z(x, t) + ∫_0^x ω(x, ξ, t)w(ξ, t)dξ
+ ∫_0^x κ(x, ξ, t)z(ξ, t)dξ + ĉ11 (t)e(x, t)
+ ĉ12 (t)ε(x, t) + ρe(x, t)||Φ(t)||2 (9.36a)
z t (x, t) − μz x (x, t) = ĉ22 (t)z(x, t) − λ K̂ u (x, 0, t)qε(0, t)
− λ K̂ u (x, 0, t)q̃(t)z(0, t) + λ K̂ u (x, 0, t)e(0, t)
− ∫_0^x K̂ tu (x, ξ, t)w(ξ, t)dξ − ∫_0^x K̂ tv (x, ξ, t)T −1 [w, z](ξ, t)dξ
+ T [ĉ11 e + ĉ12 ε, ĉ21 e + ĉ22 ε](x, t)
+ ρT [e, ε](x, t)||Φ(t)||2 (9.36b)
w(0, t) = qz(0, t) + qε(0, t) − e(0, t) (9.36c)
z(1, t) = 0 (9.36d)

Lemma 9.2 Transformation (9.34) along with control law (9.32) map identifier (9.8)
into (9.36) with
ω(x, ξ, t) = ĉ12 (t) K̂ u (x, ξ, t) + ∫_ξ^x κ(x, s, t) K̂ u (s, ξ, t)ds (9.37a)
κ(x, ξ, t) = ĉ12 (t) K̂ v (x, ξ, t) + ∫_ξ^x κ(x, s, t) K̂ v (s, ξ, t)ds. (9.37b)

Proof Differentiating (9.34b) with respect to time, inserting the dynamics (9.8a)–
(9.8b), integrating by parts, and inserting the boundary condition (9.8c) we find
v̂t (x, t) = z t (x, t) + ∫_0^x K̂ tu (x, ξ, t)û(ξ, t)dξ + ∫_0^x K̂ tv (x, ξ, t)v̂(ξ, t)dξ
− λ K̂ u (x, x, t)û(x, t) + λq̂(t) K̂ u (x, 0, t)v̂(0, t)
+ λq̂(t) K̂ u (x, 0, t)ε(0, t) + λ K̂ u (x, 0, t)q̃(t)v(0, t)
− λ K̂ u (x, 0, t)e(0, t) + ∫_0^x K̂ ξu (x, ξ, t)λû(ξ, t)dξ
+ ∫_0^x K̂ u (x, ξ, t)ĉ11 (t)û(ξ, t)dξ + ∫_0^x K̂ u (x, ξ, t)ĉ11 (t)e(ξ, t)dξ
+ ∫_0^x K̂ u (x, ξ, t)ĉ12 (t)v̂(ξ, t)dξ + ∫_0^x K̂ u (x, ξ, t)ĉ12 (t)ε(ξ, t)dξ
+ ρ ∫_0^x K̂ u (x, ξ, t)e(ξ, t)dξ ||Φ(t)||2 + K̂ v (x, x, t)μv̂(x, t)
− K̂ v (x, 0, t)μv̂(0, t) − ∫_0^x K̂ ξv (x, ξ, t)μv̂(ξ, t)dξ
+ ∫_0^x K̂ v (x, ξ, t)ĉ21 (t)û(ξ, t)dξ + ∫_0^x K̂ v (x, ξ, t)ĉ21 (t)e(ξ, t)dξ
+ ∫_0^x K̂ v (x, ξ, t)ĉ22 (t)v̂(ξ, t)dξ + ∫_0^x K̂ v (x, ξ, t)ĉ22 (t)ε(ξ, t)dξ
+ ρ ∫_0^x K̂ v (x, ξ, t)ε(ξ, t)dξ ||Φ(t)||2 . (9.38)

Equivalently, differentiating (9.34b) with respect to space, we obtain


v̂x (x, t) = z x (x, t) + K̂ u (x, x, t)û(x, t) + ∫_0^x K̂ xu (x, ξ, t)û(ξ, t)dξ
+ K̂ v (x, x, t)v̂(x, t) + ∫_0^x K̂ xv (x, ξ, t)v̂(ξ, t)dξ. (9.39)

Inserting (9.38) and (9.39) into (9.8b), using the Eq. (9.29), one obtains (9.36b).
Inserting (9.34) into (9.36a), changing the order of integration in the double integrals
and using (9.37), we obtain (9.8a). The boundary condition (9.36c) follows from
inserting (9.34) into (9.8c) and noting that

w(0, t) = û(0, t) = q̂(t)v(0, t) + e(0, t)v 2 (0, t)
= q̂(t)v(0, t) + q̃(t)v(0, t) − e(0, t)
= qv(0, t) − e(0, t) (9.40)

and
v(0, t) = v̂(0, t) + ε(0, t) = z(0, t) + ε(0, t). (9.41)

9.2.4 Proof of Theorem 9.1

Recall from Theorem 1.3 the following inequalities that hold since T is a backstep-
ping transformation with bounded integration kernels

||T [u, v](t)|| ≤ A1 ||u(t)|| + A2 ||v(t)|| (9.42a)
||T −1 [u, v](t)|| ≤ A3 ||u(t)|| + A4 ||v(t)||. (9.42b)

Moreover, from applying Lemma 1.1 to (9.37), and using the fact that K̂ u , K̂ v and
ĉ12 are all uniformly bounded, there must exist constants ω̄, κ̄ so that

||ω(t)||∞ ≤ ω̄, ||κ(t)||∞ ≤ κ̄, ∀t ≥ 0. (9.43a)



Consider now the following components that will eventually form a Lyapunov
function candidate
V4 (t) = ∫_0^1 e−δx w 2 (x, t)d x (9.44a)
V5 (t) = ∫_0^1 ekx z 2 (x, t)d x. (9.44b)

The following result is proved in Appendix E.3.


Lemma 9.3 Let δ ≥ 1. There exist positive constants h 1 , h 2 , . . . , h 6 and nonnega-
tive, integrable functions l1 , l2 , . . . , l5 such that

V̇4 (t) ≤ h 1 z 2 (0, t) − [λδ − h 2 ] V4 (t) + h 3 V5 (t) + l1 (t)V4 (t) + l2 (t) (9.45a)
V̇5 (t) ≤ − [μ − ek h 4 q̃ 2 (t)] z 2 (0, t) + h 5 V4 (t) − [kμ − h 6 ] V5 (t)
+ l3 (t)V4 (t) + l4 (t)V5 (t) + l5 (t). (9.45b)

Constructing the Lyapunov function candidate

V6 (t) = V4 (t) + aV5 (t) (9.46)

for a positive constant a, differentiating by time and using Lemma 9.3 (assuming
δ ≥ 1), we find
 
V̇6 (t) ≤ − [aμ − h 1 − aek h 4 q̃ 2 (t)] z 2 (0, t) − [λδ − h 2 − ah 5 ] V4 (t)
− [akμ − ah 6 − h 3 ] V5 (t) + (l1 (t) + al3 (t))V4 (t)
+ al4 (t)V5 (t) + l2 (t) + al5 (t). (9.47)

By choosing

a = (h 1 + 1)/μ (9.48)

and then choosing

δ > max{1, (h 2 + ah 5 )/λ}, k > (h 3 + ah 6 )/(aμ) (9.49)

we obtain
 
V̇6 (t) ≤ − [1 − bq̃ 2 (t)] z 2 (0, t) − cV6 (t) + l6 (t)V6 (t) + l7 (t) (9.50)

for some positive constants b, c, and nonnegative, integrable functions l6 , l7 . Consider

q̃ 2 (t)z 2 (0, t) = q̃ 2 (t)[(1 + v 2 (0, t))/(1 + v 2 (0, t))]z 2 (0, t)
= [q̃ 2 (t)v 2 (0, t)/(1 + v 2 (0, t))]z 2 (0, t) + [q̃ 2 (t)/(1 + v 2 (0, t))]z 2 (0, t)
≤ [q̃ 2 (t)v 2 (0, t)/(1 + v 2 (0, t))]z 2 (0, t) + [q̃ 2 (t)/(1 + v 2 (0, t))]2(v 2 (0, t) + ε2 (0, t))
≤ [q̃ 2 (t)v 2 (0, t)/(1 + v 2 (0, t))]z 2 (0, t) + l8 (t) (9.51)

where

l8 (t) = 2q̃ 2 (t)v 2 (0, t)/(1 + v 2 (0, t)) + 8q̄ 2 ε2 (0, t) (9.52)

is an integrable function (Lemma 9.1). Inserting this, we obtain


 
V̇6 (t) ≤ − [1 − bσ 2 (t)] z 2 (0, t) − cV6 (t) + l6 (t)V6 (t) + l9 (t) (9.53)

for an integrable function l9 , and where we have defined

σ 2 (t) = q̃ 2 (t)v 2 (0, t)/(1 + v 2 (0, t)). (9.54)

Moreover, from (9.26) and (9.27), we have

σ 2 (t) ≤ 2γ5 V3 (t) (9.55)

and

V̇3 (t) ≤ −σ 2 (t). (9.56)

It then follows from Lemma B.4 in Appendix B that

V6 ∈ L1 ∩ L∞ (9.57)

and hence

||w||, ||z|| ∈ L2 ∩ L∞ . (9.58)

Since ||z|| ∈ L∞ , it follows that z(x, t) must be bounded for almost all x ∈ [0, 1],
implying that

σ 2 z 2 (0, ·) ∈ L1 (9.59)

since σ 2 ∈ L1 by Lemma 9.1. Inequality (9.53) can therefore be written

V̇6 (t) ≤ −cV6 (t) + l6 (t)V6 (t) + l10 (t) (9.60)

for the nonnegative, integrable function

l10 (t) = l9 (t) + bσ 2 (t)z 2 (0, t). (9.61)

Lemma B.3 in Appendix B then gives

V6 → 0 (9.62)

and hence

||w||, ||z|| → 0. (9.63)

Due to the invertibility of the backstepping transformation (9.34),

||û||, ||v̂|| ∈ L2 ∩ L∞ , ||û||, ||v̂|| → 0 (9.64)

follows. Since ||e||, |||| ∈ L2 ∩ L∞ , it follows that

||u||, ||v|| ∈ L2 ∩ L∞ . (9.65)

From (9.21), we have, using Cauchy–Schwarz’ inequality


V̇2 (t) ≤ λe2 (0, t) − λγ ∫_0^1 e−γx e2 (x, t)d x − μγ ∫_0^1 eγx ε2 (x, t)d x
+ ∫_0^1 e−γx e2 (x, t)d x + ∫_0^1 e−γx (Φ T (x, t)b̃1 (t))2 d x
+ ∫_0^1 eγx ε2 (x, t)d x + ∫_0^1 eγx (Φ T (x, t)b̃2 (t))2 d x (9.66)

which can be written

V̇2 (t) ≤ −c̄V2 (t) + l11 (t) (9.67)

for a positive constant c̄, and some nonnegative function l11 , which is integrable
since e(0, ·), ||u||, ||v||, ||e||, |||| ∈ L2 ∩ L∞ , and b̃1 , b̃2 are bounded. Lemma B.2
in Appendix B gives

V2 → 0 (9.68)

and hence

||e||, |||| → 0 (9.69)

from which we conclude

||u||, ||v|| → 0. (9.70)

We proceed by showing pointwise boundedness, square integrability and convergence to zero. As part of the proof of Theorem 7.4, it was shown that system (8.1) can be mapped into system (8.58), which we restate here:

αt (x, t) + λ(x)αx (x, t) = g(x)β(0, t) (9.71a)
βt (x, t) − μ(x)βx (x, t) = 0 (9.71b)
α(0, t) = qβ(0, t) (9.71c)
β(1, t) = ∫_0^1 [ K̂ uv (1, ξ, t)û(ξ, t) + K̂ vv (1, ξ, t)v̂(ξ, t)] dξ
− ∫_0^1 [K uv (1, ξ)u(ξ, t) + K vv (1, ξ)v(ξ, t)] dξ (9.71d)

where we have inserted for the control law (9.32). Since ||u||, ||v||, ||û||, ||v̂|| ∈
L2 ∩ L∞ and the kernels K̂ uv , K̂ vv , K uv , K vv are all bounded, it follows that β(1, ·) ∈
L2 ∩ L∞ . Since β and α are simple, cascaded transport equations, this implies

||α||∞ , ||β||∞ ∈ L2 ∩ L∞ , ||α||∞ , ||β||∞ → 0 (9.72)

while the invertibility of the transformation (8.13) (Theorem 1.3) then yields

||u||∞ , ||v||∞ ∈ L2 ∩ L∞ , ||u||∞ , ||v||∞ → 0. (9.73)

9.3 Swapping-Based Design for a System with Spatially Varying Coefficients

9.3.1 Filter Design

Consider the filters

ηt (x, t) + λ(x)ηx (x, t) = 0, η(0, t) = v(0, t), η(x, 0) = η0 (x) (9.74a)
φt (x, t) − μ(x)φx (x, t) = 0, φ(1, t) = U (t), φ(x, 0) = φ0 (x) (9.74b)
Mt (x, ξ, t) + λ(x)Mx (x, ξ, t) = 0, M(x, x, t) = v(x, t), M(x, ξ, 0) = M0 (x, ξ) (9.74c)
Nt (x, ξ, t) − μ(x)N x (x, ξ, t) = 0, N (x, x, t) = u(x, t), N (x, ξ, 0) = N0 (x, ξ) (9.74d)

where η and φ are defined for x ∈ [0, 1], t ≥ 0, while M and N are defined over T
and S given by (1.1b) and (1.1d), respectively. The initial conditions are assumed to
satisfy

η0 , φ0 ∈ B([0, 1]), M0 ∈ B(T ), N0 ∈ B(S). (9.75)

Consider also the derived filter

n 0 (x, t) = N (0, x, t). (9.76)

Using the filters (9.74), non-adaptive estimates of the states can be generated from
ū(x, t) = qη(x, t) + ∫_0^x θ(ξ)M(x, ξ, t)dξ (9.77a)
v̄(x, t) = φ(x, t) + ∫_x^1 κ(ξ)N (x, ξ, t)dξ (9.77b)

where

θ(x) = c1 (x)/λ(x), κ(x) = c2 (x)/μ(x). (9.78)
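Each filter in (9.74) is a pure transport equation and can be marched with a first-order upwind scheme. A minimal sketch for the η-filter (9.74a), assuming a uniform grid and a time step satisfying the CFL condition max λ(x) · dt ≤ dx (names and discretization choices are illustrative):

```python
import numpy as np

def step_eta(eta, lam, inlet, dt, dx):
    """One explicit upwind step of η_t + λ(x) η_x = 0, transported left to right."""
    new = eta.copy()
    new[1:] = eta[1:] - lam[1:] * (dt / dx) * (eta[1:] - eta[:-1])
    new[0] = inlet  # boundary condition η(0,t) = v(0,t)
    return new
```

The φ-filter is marched the same way in the opposite direction, and the kernel filters M, N can be handled row-wise with the corresponding boundary traces v(x, t) and u(x, t).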

Lemma 9.4 Consider system (9.4) and the non-adaptive estimates (9.77) generated
using the filters (9.74). Then

ū ≡ u, v̄ ≡ v (9.79)

for t ≥ t0 , where

t0 = max {t1 , t2 } (9.80)

with t1 , t2 defined in (8.8).

Proof Consider the corresponding non-adaptive state estimation errors

e(x, t) = u(x, t) − ū(x, t), ε(x, t) = v(x, t) − v̄(x, t). (9.81)



From straightforward calculations, it can be shown that the non-adaptive estimation errors (9.81) satisfy

et (x, t) + λ(x)ex (x, t) = 0, e(0, t) = 0, e(x, 0) = e0 (x) (9.82a)
εt (x, t) − μ(x)εx (x, t) = 0, ε(1, t) = 0, ε(x, 0) = ε0 (x) (9.82b)

where e0 , ε0 ∈ B([0, 1]). It is observed that e ≡ 0 for t ≥ t1 , while ε ≡ 0 for t ≥ t2 , which gives the desired result.

9.3.2 Adaptive Laws

We start by assuming the following.


Assumption 9.2 Bounds on θ, κ and q are known. That is, we know constants θ̄, κ̄, q̄ so that

||θ||∞ ≤ θ̄, ||κ||∞ ≤ κ̄, |q| ≤ q̄. (9.83)

This assumption is equivalent to Assumption 9.1 for the constant coefficient case.
Since the bounds are arbitrary, the assumption is not a limitation. From the swapping
representations (9.77), we have
u(x, t) = qη(x, t) + ∫_0^x θ(ξ)M(x, ξ, t)dξ + e(x, t) (9.84a)
v(x, t) = φ(x, t) + ∫_x^1 κ(ξ)N (x, ξ, t)dξ + ε(x, t) (9.84b)

where e, ε are zero for t ≥ t0 . We propose the following adaptive laws


q̂˙(t) = proj q̄ (γ1 [∫_0^1 ê(x, t)η(x, t)d x]/[1 + f 2 (t)], q̂(t)) (9.85a)
θ̂t (x, t) = proj θ̄ (γ2 (x)[∫_x^1 ê(ξ, t)M(ξ, x, t)dξ]/[1 + f 2 (t)], θ̂(x, t)) (9.85b)
κ̂t (x, t) = proj κ̄ (γ3 (x)[∫_0^x ε̂(ξ, t)N (ξ, x, t)dξ]/[1 + ||N (t)||2 ]
+ γ3 (x)[ε̂(0, t)n 0 (x, t)]/[1 + ||n 0 (t)||2 ], κ̂(x, t)) (9.85c)
q̂(0) = q̂0 (9.85d)
θ̂(x, 0) = θ̂0 (x) (9.85e)
κ̂(x, 0) = κ̂0 (x) (9.85f)

where

f 2 (t) = ||η(t)||2 + ||M(t)||2 , (9.86)

and

ê(x, t) = u(x, t) − û(x, t), ε̂(x, t) = v(x, t) − v̂(x, t) (9.87)

with
û(x, t) = q̂(t)η(x, t) + ∫_0^x θ̂(ξ, t)M(x, ξ, t)dξ (9.88a)
v̂(x, t) = φ(x, t) + ∫_x^1 κ̂(ξ, t)N (x, ξ, t)dξ. (9.88b)

The projection operator is defined in Appendix A, and the initial guesses q̂0 , θ̂0 , κ̂0
are chosen inside the feasible domain

||θ̂0 ||∞ ≤ θ̄, ||κ̂0 ||∞ ≤ κ̄, |q̂0 | ≤ q̄. (9.89)
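In a time-discretized implementation, (9.85a) is a normalized-gradient step: two quadratures and a projection. A sketch (the same crude projection as earlier; the ‖M‖² part of the normalization (9.86) is passed in as a precomputed argument M_norm_sq; all names and gains are illustrative):

```python
import numpy as np

def q_hat_rate(q_hat, e_hat, eta, M_norm_sq, x, gamma1, q_bar):
    """Right-hand side of (9.85a), to be integrated in time for q̂."""
    f2 = np.trapz(eta ** 2, x) + M_norm_sq        # f²(t) = ||η(t)||² + ||M(t)||²
    tau = gamma1 * np.trapz(e_hat * eta, x) / (1.0 + f2)
    if abs(q_hat) >= q_bar and q_hat * tau > 0:   # projection at |q̂| = q̄
        tau = 0.0
    return tau
```

The laws (9.85b)–(9.85c) are handled the same way pointwise in x, with the inner products over M and N evaluated by the same quadrature.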

Lemma 9.5 The adaptive laws (9.85) with initial conditions satisfying (9.89) have
the following properties

|q̂| ≤ q̄, ||θ̂(t)||∞ ≤ θ̄, ||κ̂(t)||∞ ≤ κ̄, ∀t ≥ 0 (9.90a)
||ê||/(1 + f 2 ), ||ε̂||/(1 + ||N ||2 ) ∈ L∞ ∩ L2 (9.90b)
ε̂(0, ·)/(1 + ||n 0 ||2 ) ∈ L∞ ∩ L2 (9.90c)
|q̂˙|, ||θ̂t ||, ||κ̂t || ∈ L∞ ∩ L2 (9.90d)

where q̃ = q − q̂, θ̃ = θ − θ̂, κ̃ = κ − κ̂.

Proof The property (9.90a) follows from the conditions (9.89) and the projection
operator. Consider
V (t) = ∫_0^1 (2 − x)λ−1 (x)e2 (x, t)d x + ∫_0^1 (1 + x)μ−1 (x)ε2 (x, t)d x
+ q̃ 2 (t)/(2γ1 ) + (1/2) ∫_0^1 [θ̃2 (x, t)/γ2 (x)]d x + (1/2) ∫_0^1 [κ̃2 (x, t)/γ3 (x)]d x, (9.91)

from which we find, using the property −θ̃(x, t)γ(x)projθ̄ (τ (x, t), θ̂(x, t))
≤ −θ̃(x, t)γ(x)τ (x, t) (Lemma A.1), and similarly for q̃ and κ̃

V̇ (t) ≤ −e2 (1, t) + 2e2 (0, t) − ||e(t)||2 + 2ε2 (1, t) − ε2 (0, t) − ||ε(t)||2
− [1/(1 + f 2 (t))] ∫_0^1 ê(x, t)q̃(t)η(x, t)d x
− [1/(1 + f 2 (t))] ∫_0^1 ∫_x^1 θ̃(x, t)ê(ξ, t)M(ξ, x, t)dξd x
− [1/(1 + ||N (t)||2 )] ∫_0^1 ∫_0^x κ̃(x, t)ε̂(ξ, t)N (ξ, x, t)dξd x
− [1/(1 + ||n 0 (t)||2 )] ε̂(0, t) ∫_0^1 κ̃(x, t)n 0 (x, t)d x. (9.92)

Inserting the boundary conditions (9.82) and changing the order of integration in the
double integrals yield

V̇ (t) ≤ −e2 (1, t) − ||e(t)||2 − ε2 (0, t) − ||ε(t)||2
− [1/(1 + f 2 (t))] ∫_0^1 ê(x, t)[q̃(t)η(x, t) + ∫_0^x θ̃(ξ, t)M(x, ξ, t)dξ]d x
− [1/(1 + ||N (t)||2 )] ∫_0^1 ε̂(x, t) ∫_x^1 κ̃(ξ, t)N (x, ξ, t)dξd x
− [1/(1 + ||n 0 (t)||2 )] ε̂(0, t) ∫_0^1 κ̃(x, t)n 0 (x, t)d x. (9.93)

Noticing that
ê(x, t) = e(x, t) + q̃(t)η(x, t) + ∫_0^x θ̃(ξ, t)M(x, ξ, t)dξ (9.94a)
ε̂(x, t) = ε(x, t) + ∫_x^1 κ̃(ξ, t)N (x, ξ, t)dξ (9.94b)
ε̂(0, t) = ε(0, t) + ∫_0^1 κ̃(ξ, t)n 0 (ξ, t)dξ (9.94c)

we find

V̇ (t) ≤ −e2 (1, t) − ||e(t)||2 − ε2 (0, t) − ||ε(t)||2 − ||ê(t)||2 /(1 + f 2 (t))
+ ||ê(t)|| ||e(t)||/(1 + f 2 (t)) − ||ε̂(t)||2 /(1 + ||N (t)||2 ) + ||ε̂(t)|| ||ε(t)||/(1 + ||N (t)||2 )
− ε̂2 (0, t)/(1 + ||n 0 (t)||2 ) + ε̂(0, t)ε(0, t)/(1 + ||n 0 (t)||2 ) (9.95)

and after applying Young’s inequality to the cross terms

V̇ (t) ≤ −(1/2)||ê(t)||2 /(1 + f 2 (t)) − (1/2)||ε̂(t)||2 /(1 + ||N (t)||2 ) − (1/2)ε̂2 (0, t)/(1 + ||n 0 (t)||2 ) (9.96)

which proves that V is nonincreasing and hence bounded, so V converges as t → ∞. Integrating (9.96) in time from zero to infinity gives

||ê||/(1 + f 2 ), ||ε̂||/(1 + ||N ||2 ), ε̂(0, ·)/(1 + ||n 0 ||2 ) ∈ L2 . (9.97)

Moreover, from (9.94a) with e ≡ 0 for t ≥ t0 , we have

||ê(t)||2 /(1 + f 2 (t)) ≤ 2|q̃(t)|2 ||η(t)||2 /(1 + f 2 (t)) + 2||θ̃(t)||2 ||M(t)||2 /(1 + f 2 (t))
≤ 2(|q̃(t)|2 + ||θ̃(t)||2 ) (9.98)

and similarly for ||ε̂(t)||2 /(1 + ||N (t)||2 ) and ε̂2 (0, t)/(1 + ||n 0 (t)||2 ), which give the remaining properties (9.90b)–(9.90c). From (9.85a), we have

(1/γ1 )|q̂˙(t)| ≤ ||ê(t)|| ||η(t)||/(1 + f 2 (t)) ≤ [||ê(t)||/√(1 + f 2 (t))][||η(t)||/√(1 + f 2 (t))] ≤ ||ê(t)||/√(1 + f 2 (t)) (9.99)

and similarly for θ̂t and κ̂t , so using (9.90b)–(9.90c) gives (9.90d).

9.3.3 Control Law

We propose the control law


U (t) = ∫_0^1 K̂ u (1, ξ, t)û(ξ, t)dξ + ∫_0^1 K̂ v (1, ξ, t)v̂(ξ, t)dξ (9.100)

for the state estimates û, v̂ generated using (9.88), the filters (9.74) and the adaptive
laws (9.85), and where ( K̂ u , K̂ v ) is defined over T1 given in (1.1b) and is for every
t, the solution to the PDE

μ(x) K̂ xu (x, ξ, t) − λ(ξ) K̂ ξu (x, ξ, t) = λ′(ξ) K̂ u (x, ξ, t) + μ(ξ)κ̂(ξ, t) K̂ v (x, ξ, t) (9.101a)
μ(x) K̂ xv (x, ξ, t) + μ(ξ) K̂ ξv (x, ξ, t) = λ(x)θ̂(ξ, t) K̂ u (x, ξ, t) − μ′(ξ) K̂ v (x, ξ, t) (9.101b)
K̂ u (x, x, t) = −μ(x)κ̂(x, t)/(λ(x) + μ(x)) (9.101c)
K̂ v (x, 0, t) = q̂(t)(λ(0)/μ(0)) K̂ u (x, 0, t). (9.101d)

By Theorem D.1 in Appendix D.2, Eq. (9.101) has a unique, bounded solution for
every time t, and since the set of admissible θ̂, κ̂ and q̂, is bounded due to projection,
it also follows that the set of admissible K̂ u , K̂ v is bounded as well. The kernels
K̂ u , K̂ v are therefore uniformly, pointwise bounded, and there exists a constant K̄
so that

|| K̂ u (t)||∞ ≤ K̄ , || K̂ v (t)||∞ ≤ K̄ , ∀t ≥ 0. (9.102)

Moreover,

|| K̂ tu ||, || K̂ tv || ∈ L2 ∩ L∞ , (9.103)

which follows from differentiating equations (9.101) with respect to time, applying
Theorem D.1 in Appendix D.2 and using (9.90d).

Theorem 9.2 Consider system (9.4). The control law (9.100) guarantees

||u||, ||v||, ||η||, ||φ||, ||M||, ||N || ∈ L2 ∩ L∞ (9.104a)


||u||∞ , ||v||∞ , ||η||∞ , ||φ||∞ , ||M||∞ , ||N ||∞ ∈ L2 ∩ L∞ (9.104b)
||u||, ||v||, ||η||, ||φ||, ||M||, ||N || → 0 (9.104c)
||u||∞ , ||v||∞ , ||η||∞ , ||φ||∞ , ||M||∞ , ||N ||∞ → 0 (9.104d)

The proof of Theorem 9.2 is given in Sect. 9.3.5, following the introduction of a backstepping transformation in the next section that facilitates the Lyapunov analysis.

9.3.4 Backstepping

It is straightforward to show that the state estimates (9.88) satisfy the dynamics

û t (x, t) + λ(x)û x (x, t) = λ(x)θ̂(x, t)v(x, t) + q̂˙(t)η(x, t) + ∫_0^x θ̂t (ξ, t)M(x, ξ, t)dξ (9.105a)
v̂t (x, t) − μ(x)v̂x (x, t) = μ(x)κ̂(x, t)u(x, t) + ∫_x^1 κ̂t (ξ, t)N (x, ξ, t)dξ (9.105b)
û(0, t) = q̂(t)v(0, t) (9.105c)
v̂(1, t) = U (t) (9.105d)
û(x, 0) = û 0 (x) (9.105e)
v̂(x, 0) = v̂0 (x) (9.105f)

for some functions û 0 , v̂0 ∈ B([0, 1]). Consider the backstepping transformation

w(x, t) = û(x, t) (9.106a)
z(x, t) = v̂(x, t) − ∫_0^x K̂ u (x, ξ, t)û(ξ, t)dξ − ∫_0^x K̂ v (x, ξ, t)v̂(ξ, t)dξ = T [û, v̂](x, t) (9.106b)

where ( K̂ u , K̂ v ) satisfies (9.101). The backstepping transformation (9.106) is invertible, with inverse in the form

û(x, t) = w(x, t) (9.107)
v̂(x, t) = T −1 [w, z](x, t) (9.108)

where T −1 is an operator in the same form as (9.106b). Consider also the target
system

wt (x, t) + λ(x)wx (x, t) = λ(x)θ̂(x, t)z(x, t) + λ(x)θ̂(x, t)ε̂(x, t)
+ ∫_0^x ω(x, ξ, t)w(ξ, t)dξ + ∫_0^x b(x, ξ, t)z(ξ, t)dξ
+ q̂˙(t)η(x, t) + ∫_0^x θ̂t (ξ, t)M(x, ξ, t)dξ (9.109a)
z t (x, t) − μ(x)z x (x, t) = − K̂ u (x, 0, t)λ(0)q̂(t)ε̂(0, t)
+ T [q̂˙η + ∫_0^x θ̂t (ξ, t)M(x, ξ, t)dξ, ∫_x^1 κ̂t (ξ, t)N (x, ξ, t)dξ](x, t)
− ∫_0^x K̂ tu (x, ξ, t)w(ξ, t)dξ − ∫_0^x K̂ tv (x, ξ, t)T −1 [w, z](ξ, t)dξ (9.109b)
w(0, t) = q̂(t)z(0, t) + q̂(t)ε̂(0, t) (9.109c)
z(1, t) = 0 (9.109d)
w(x, 0) = w0 (x) (9.109e)
z(x, 0) = z 0 (x) (9.109f)

for some functions ω, b defined over T1 , and initial conditions w0 , z 0 ∈ B([0, 1]).
We seek a transformation mapping (9.105) into (9.109).

Lemma 9.6 Consider system (9.105). The backstepping transformation (9.106) and
the control law (9.100), with ( K̂ u , K̂ v ) satisfying (9.101), map (9.105) into (9.109),
where ω and b are given by
ω(x, ξ, t) = λ(x)θ̂(x, t) K̂ u (x, ξ, t) + ∫_ξ^x b(x, s, t) K̂ u (s, ξ, t)ds (9.110a)
b(x, ξ, t) = λ(x)θ̂(x, t) K̂ v (x, ξ, t) + ∫_ξ^x b(x, s, t) K̂ v (s, ξ, t)ds. (9.110b)

Proof Differentiating (9.106b) with respect to time and space, respectively, inserting
the dynamics (9.105a)–(9.105b), integrating by parts and inserting the result into
(9.105b) yield

z t (x, t) − μ(x)z x (x, t) + K̂ u (x, 0, t)λ(0)q̂(t)ε̂(0, t)
+ ∫_0^x [ K̂ ξu (x, ξ, t)λ(ξ) + K̂ u (x, ξ, t)λ′(ξ) + K̂ v (x, ξ, t)μ(ξ)κ̂(ξ, t) − μ(x) K̂ xu (x, ξ, t)]û(ξ, t)dξ
+ ∫_0^x [ K̂ u (x, ξ, t)λ(x)θ̂(ξ, t) − K̂ ξv (x, ξ, t)μ(ξ) − K̂ v (x, ξ, t)μ′(ξ) − μ(x) K̂ xv (x, ξ, t)]v(ξ, t)dξ
− [ K̂ v (x, 0, t)μ(0) − K̂ u (x, 0, t)λ(0)q̂(t)]v̂(0, t)
+ ∫_0^x K̂ tu (x, ξ, t)û(ξ, t)dξ + ∫_0^x K̂ tv (x, ξ, t)v̂(ξ, t)dξ
− [μ(x)κ̂(x, t) + μ(x) K̂ u (x, x, t) + λ(x) K̂ u (x, x, t)]u(x, t)
− ∫_x^1 κ̂t (ξ, t)N (x, ξ, t)dξ + ∫_0^x K̂ u (x, ξ, t)q̂˙(t)η(ξ, t)dξ
+ ∫_0^x K̂ u (x, ξ, t) ∫_0^ξ θ̂t (s, t)M(ξ, s, t)dsdξ
+ ∫_0^x K̂ v (x, ξ, t) ∫_ξ^1 κ̂t (s, t)N (ξ, s, t)dsdξ = 0. (9.111)

Choosing K̂ u and K̂ v to satisfy (9.101) yields the target system dynamics (9.109b).
Inserting the transformations (9.106) into the w-dynamics (9.109a), using the dynam-
ics (9.105a) and changing the order of integration in the double integrals yield
0 = − ∫_0^x [ω(x, ξ, t) − λ(x)θ̂(x, t) K̂ u (x, ξ, t) − ∫_ξ^x b(x, s, t) K̂ u (s, ξ, t)ds]û(ξ, t)dξ
− ∫_0^x [b(x, ξ, t) − λ(x)θ̂(x, t) K̂ v (x, ξ, t) − ∫_ξ^x b(x, s, t) K̂ v (s, ξ, t)ds]v̂(ξ, t)dξ (9.112)
ξ

which gives the Eq. (9.110) for ω and b. The bounds (9.114a) follow from applying
Lemma 1.1 to (9.110) and (9.102).

9.3.5 Proof of Theorem 9.2

Since the backstepping kernels K̂ u and K̂ v used in (9.106) are uniformly bounded,
by Theorem 1.3, there exist constants G 1 , G 2 , G 3 , G 4 so that

||z(t)|| ≤ G 1 ||û(t)|| + G 2 ||v̂(t)||, ∀t ≥ 0 (9.113a)


||v̂(t)|| ≤ G 3 ||w(t)|| + G 4 ||z(t)||, ∀t ≥ 0. (9.113b)

Moreover, from (9.110) and the fact that λ, μ, K̂ u , K̂ v , θ̂, κ̂ are all uniformly
bounded, there exist constants ω̄ and b̄ such that

||ω(t)||∞ ≤ ω̄, ||b(t)||∞ ≤ b̄, ∀t ≥ 0. (9.114a)

We will let λ, μ, λ̄, μ̄ denote positive constants so that

λ ≤ λ(x) ≤ λ̄, μ ≤ μ(x) ≤ μ̄, ∀x ∈ [0, 1]. (9.115)

Consider now the following components that will eventually form a Lyapunov func-
tion candidate
V1 (t) = ∫_0^1 e−δx λ−1 (x)w 2 (x, t)d x (9.116a)
V2 (t) = ∫_0^1 (1 + x)μ−1 (x)z 2 (x, t)d x (9.116b)
V3 (t) = ∫_0^1 (2 − x)λ−1 (x)η 2 (x, t)d x (9.116c)
V4 (t) = ∫_0^1 ∫_0^x (2 − x)λ−1 (x)M 2 (x, ξ, t)dξd x (9.116d)
V5 (t) = ∫_0^1 ∫_x^1 (1 + x)μ−1 (x)N 2 (x, ξ, t)dξd x. (9.116e)

The following result is proved in Appendix E.4.

Lemma 9.7 Let δ > 6 + λ−2 ω̄ 2 . Then there exist positive constants h 1 , h 2 , . . . , h 6
and nonnegative, integrable functions l1 , l2 , . . . , l15 such that

V̇1 (t) ≤ h 1 z 2 (0, t) − (δ − 6 − λ−2 ω̄ 2 )λV1 (t) + h 2 V2 (t) + l1 (t)V3 (t)
+ l2 (t)V4 (t) + l3 (t)V5 (t) + l4 (t) + h 1 σ 2 (t)||n 0 (t)||2 (9.117a)
V̇2 (t) ≤ −z 2 (0, t) − (1/4)μV2 (t) + h 3 σ 2 (t)||n 0 (t)||2 + l5 (t)V1 (t) + l6 (t)V2 (t)
+ l7 (t)V3 (t) + l8 (t)V4 (t) + l9 (t)V5 (t) + l10 (t) (9.117b)
V̇3 (t) ≤ −(1/2)μV3 (t) + 4z 2 (0, t) + 4σ 2 (t)||n 0 (t)||2 + l11 (t) (9.117c)
V̇4 (t) ≤ −(1/2)λV4 (t) + h 4 eδ V1 (t) + h 5 V2 (t) + l12 (t)V5 (t) + l13 (t) (9.117d)
V̇5 (t) ≤ −||n 0 (t)||2 − (1/2)μV5 (t) + h 6 eδ V1 (t) + l14 (t)V3 (t)
+ l14 (t)V4 (t) + l15 (t) (9.117e)

where

σ 2 (t) = ε̂2 (0, t)/(1 + ||n 0 (t)||2 ) (9.118)

is a non-negative function, with σ 2 ∈ L1 .

Choosing

V6 (t) = V1 (t) + (5/μ) max(h 2 , h 5 )V2 (t) + (1/4)h 1 V3 (t)
+ e−δ min(h 4−1 , h 5−1 )V4 (t) + e−δ h 6−1 V5 (t) (9.119)

and then choosing

δ > 6 + 2λ−1 + λ−2 ω̄ 2 (9.120)

we have by Lemma 9.7 that

V̇6 (t) ≤ −cV6 (t) + l17 (t)V6 (t) + l18 (t) − a(1 − bσ 2 (t))||n 0 (t)||2 (9.121)

for some integrable functions l17 and l18 , and positive constants a, b and c. We also
have from (9.96) that

V̇ (t) ≤ −σ 2 (t) (9.122)



and from (9.91) that


σ 2 (t) = ε̂2 (0, t)/(1 + ||n 0 (t)||2 ) = (∫_0^1 κ̃(ξ, t)n 0 (ξ, t)dξ)2 /(1 + ||n 0 (t)||2 )
≤ ||κ̃(t)||2 ||n 0 (t)||2 /(1 + ||n 0 (t)||2 ) ≤ ||κ̃(t)||2 ≤ 2γ̄3 V (t) (9.123)

where

γ̄3 = max x∈[0,1] γ3 (x). (9.124)

It then follows from Lemma B.4 in Appendix B, that

V6 ∈ L1 ∩ L∞ , (9.125)

and thus

||w||, ||z||, ||η||, ||M||, ||N || ∈ L2 ∩ L∞ , (9.126)

implying

||û||, ||v̂|| ∈ L2 ∩ L∞ . (9.127)

From (9.88b), we then have

||φ|| ∈ L2 ∩ L∞ , (9.128)

while from (9.84) with e = ε ≡ 0 in finite time, we obtain

||u||, ||v|| ∈ L2 ∩ L∞ . (9.129)

The remaining properties can be shown using the same technique as in the proof
of Theorem 9.1. 

9.4 Simulations

9.4.1 Identifier-Based Controller

System (9.1) and the controller of Theorem 9.1 are implemented using the system
parameters

λ = μ = 1, c11 = −0.1, c12 = 1, c21 = 0.4, c22 = 0.2, q = 4 (9.130)



Fig. 9.1 Left: State norm. Right: Actuation signal for the controller of Theorem 9.1


Fig. 9.2 Actual (solid black) and estimated parameters (dashed red) using the adaptive controller
of Theorem 9.1

which constitute an open-loop unstable system, and initial conditions

u 0 (x) = sin(2πx), v0 (x) = x. (9.131)

All additional initial conditions are set to zero. The design gains are set to

γ = ρ = 10^{−2},  γi = 1,  i = 1, …, 5.   (9.132)

The controller kernel equations (9.29) are solved using the method described in
Appendix F.2. From Fig. 9.1, it is seen that the system states converge to zero, as
does the actuation signal U. The estimated parameters shown in Fig. 9.2 are bounded and settle, but only one parameter (ĉ21) converges to its true value. Convergence of the estimated parameters to their true values is not guaranteed by the control law. This is common in adaptive control, since persistent excitation and set-point regulation cannot, in general, be achieved simultaneously.
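For readers wishing to reproduce the qualitative open-loop behaviour, the plant can be integrated with a first-order upwind scheme. The Python sketch below is an illustration under stated assumptions: the structural form u_t + λu_x = c11u + c12v, v_t − μv_x = c21u + c22v with u(0, t) = qv(0, t) and v(1, t) = U(t) is inferred from the coefficient names in (9.130), the grid size and time step are arbitrary choices, and the adaptive controller itself is omitted (U = 0).

```python
import numpy as np

# Parameters from (9.130); the plant structure is an assumption inferred
# from the coefficient names, not taken from the text of (9.1) itself.
lam, mu = 1.0, 1.0
c11, c12, c21, c22, q = -0.1, 1.0, 0.4, 0.2, 4.0

N = 101                       # spatial grid points (arbitrary choice)
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
dt = 0.5 * dx / max(lam, mu)  # CFL-stable time step

u = np.sin(2 * np.pi * x)     # u0 from (9.131)
v = x.copy()                  # v0 from (9.131)

def step(u, v, U=0.0):
    # First-order upwind: u transports to the right, v to the left.
    un, vn = u.copy(), v.copy()
    un[1:] = u[1:] - dt * lam * (u[1:] - u[:-1]) / dx + dt * (c11 * u[1:] + c12 * v[1:])
    vn[:-1] = v[:-1] + dt * mu * (v[1:] - v[:-1]) / dx + dt * (c21 * u[:-1] + c22 * v[:-1])
    un[0] = q * vn[0]   # boundary reflection u(0,t) = q v(0,t)
    vn[-1] = U          # actuation boundary; open loop here
    return un, vn

for _ in range(int(5.0 / dt)):
    u, v = step(u, v)

norm = np.sqrt(dx * np.sum(u ** 2 + v ** 2))
print(norm)
```

Replacing U = 0 with the adaptive feedback of Theorem 9.1 would require the kernel solver of Appendix F.2, which is beyond this sketch.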


Fig. 9.3 Left: State norm. Right: Actuation signal for the controller of Theorem 9.2


Fig. 9.4 Estimated parameters using the adaptive controller of Theorem 9.2. Left: Actual (solid
black) and final estimated value (dashed red) of θ. Middle: Actual (solid black) and final estimated
value (dashed red) of κ. Right: Actual (solid black) and estimated value (dashed red) of q

9.4.2 Swapping-Based Controller with Spatially Varying System Parameters

Finally, system (9.4) in closed loop with the controller of Theorem 9.2 is implemented
using the system parameters

λ ≡ 1,  μ ≡ 2   (9.133a)
c1(x) = x sin(x) + 1,  c2(x) = cosh(x),  q = 2   (9.133b)

which also constitute an open-loop unstable system, and initial conditions

u 0 (x) = sin(x), v0 (x) = cosh(x) cos(2πx). (9.134)

All additional initial conditions are set to zero. The design gains are set to

γ1 = γ2 (x) = γ3 (x) = 1, ∀x ∈ [0, 1]. (9.135)

The controller kernel equations (9.101) are solved using the method described in
Appendix F.2.
The state norm and actuation signal both converge to zero, as shown in Fig. 9.3,
in accordance with the theory. All estimated parameters are seen in Fig. 9.4 to be
bounded, but do not converge to their true values. It is interesting to note that even

though the estimated functions θ̂ and κ̂ and the estimated parameter q̂ are quite
different from the actual functions, the adaptive controller manages to stabilize the
system.

9.5 Notes

The adaptive control laws derived in this chapter adaptively stabilize a system of
2 × 2 linear hyperbolic PDEs with uncertain in-domain cross terms and source terms,
and an uncertain boundary parameter. They assume that full-state measurements are
available. As mentioned in the introduction, this assumption can be questioned, as
distributed measurements in the domain are rarely available in practice. However,
the solutions offered here are some of the many steps towards a complete coverage
of adaptive control of linear hyperbolic PDEs.
We proceed in the next chapter by limiting the available measurements to be taken
at the boundaries, which is a more practically feasible problem, but also considerably
harder to solve. We start in Chap. 10 by solving adaptive control problems for the
case of known in-domain coefficients, but uncertainty in the boundary parameter q.

References

Anfinsen H, Aamo OM (2018) Adaptive control of linear 2 × 2 hyperbolic systems. Automatica 87:69–82
Anfinsen H, Aamo OM (2016a) Stabilization of linear 2 × 2 hyperbolic systems with uncertain coupling coefficients - Part I: identifier-based design. In: 2016 Australian control conference, Newcastle, New South Wales, Australia
Anfinsen H, Aamo OM (2016b) Stabilization of linear 2 × 2 hyperbolic systems with uncertain coupling coefficients - Part II: swapping design. In: 2016 Australian control conference, Newcastle, New South Wales, Australia
Anfinsen H, Aamo OM (2017) Adaptive stabilization of linear 2 × 2 hyperbolic PDEs with spatially varying coefficients using swapping. In: 2017 Asian control conference, Gold Coast, Queensland, Australia
Vazquez R, Krstić M (2014) Marcum Q-functions and explicit kernels for stabilization of 2 × 2 linear hyperbolic systems with constant coefficients. Syst Control Lett 68:33–42
Chapter 10
Adaptive Output-Feedback: Uncertain Boundary Condition

10.1 Introduction

The adaptive control laws of the previous chapter assumed distributed measurements,
which are rarely available in practice. This chapter presents adaptive output-feedback
control laws for system (7.4) with an uncertain parameter q in the boundary con-
dition anti-collocated with actuation. Only one boundary measurement is assumed
available, and designs for both sensing collocated with actuation and anti-collocated
with actuation are presented, since they require significantly different analysis. For
the convenience of the reader, we restate the system under consideration, which is

u t (x, t) + λ(x)u x (x, t) = c1 (x)v(x, t) (10.1a)


vt (x, t) − μ(x)vx (x, t) = c2 (x)u(x, t) (10.1b)
u(0, t) = qv(0, t) (10.1c)
v(1, t) = U (t) (10.1d)
u(x, 0) = u 0 (x) (10.1e)
v(x, 0) = v0 (x) (10.1f)

where

λ, μ ∈ C¹([0, 1]),  λ(x), μ(x) > 0, ∀x ∈ [0, 1]   (10.2a)
c1, c2 ∈ C⁰([0, 1]),  q ∈ ℝ   (10.2b)

with initial conditions

u 0 , v0 ∈ B([0, 1]). (10.3)

In the anti-collocated case, treated in Sect. 10.2, the available measurement is

© Springer Nature Switzerland AG 2019
H. Anfinsen and O. M. Aamo, Adaptive Control of Hyperbolic PDEs,
Communications and Control Engineering,
https://doi.org/10.1007/978-3-030-05879-1_10

y0 (t) = v(0, t), (10.4)

which is collocated with the uncertain parameter q. For this problem, swapping
design will be used to achieve stabilization along the lines of Anfinsen and Aamo
(2017a). In the collocated case, the measurement is taken as

y1 (t) = u(1, t) (10.5)

which requires a fairly different adaptive observer design for estimation of the states
u, v and parameter q, originally presented in Anfinsen and Aamo (2016). In particular,
the output injection gains are time-varying in the anti-collocated case, while static
in the collocated case. Although the control law differs from the anti-collocated case
only in the way state- and parameter estimates are generated, closed-loop stability
analysis, which was originally presented in Anfinsen and Aamo (2017b), becomes
more involved as a consequence of the estimation scheme. For both cases, we assume
the following.
Assumption 10.1 A bound q̄ on q is known, so that

|q| ≤ q̄. (10.6)

10.2 Anti-collocated Sensing and Control

10.2.1 Filters and Adaptive Laws

For system (10.1) with measurement (10.4), we define the input filters

ηt (x, t) + λ(x)ηx (x, t) = c1 (x)φ(x, t) + k1 (x)(y0 (t) − φ(0, t)) (10.7a)


φt (x, t) − μ(x)φx (x, t) = c2 (x)η(x, t) + k2 (x)(y0 (t) − φ(0, t)) (10.7b)
η(0, t) = 0 (10.7c)
φ(1, t) = U (t) (10.7d)
η(x, 0) = η0 (x) (10.7e)
φ(x, 0) = φ0 (x) (10.7f)

and the parameter filters

pt (x, t) + λ(x) px (x, t) = c1 (x)r (x, t) − k1 (x)r (0, t) (10.8a)


rt (x, t) − μ(x)r x (x, t) = c2 (x) p(x, t) − k2 (x)r (0, t) (10.8b)
p(0, t) = y0 (t) (10.8c)
r (1, t) = 0 (10.8d)
p(x, 0) = p0 (x) (10.8e)
r (x, 0) = r0 (x) (10.8f)

for some initial conditions

η0 , φ0 , p0 , r0 ∈ B([0, 1]) (10.9)

of choice, where k1 and k2 are injection gains given as

k1(x) = μ(0)M^α(x, 0)   (10.10a)
k2(x) = μ(0)M^β(x, 0)   (10.10b)

and (M^α, M^β) is the solution to the PDE (8.65). We propose the adaptive law

 
q̂˙(t) = proj( γ(y0(t) − v̂(0, t))r(0, t)/(1 + r²(0, t)), q̂(t) ),   (10.11a)
q̂(0) = q̂0   (10.11b)

for some design gain γ > 0, and initial guess q̂0 satisfying

|q̂0 | ≤ q̄ (10.12)

with q̄ provided by Assumption 10.1, and where proj is the projection operator
defined in Appendix A. Finally, we define the adaptive state estimates

û(x, t) = η(x, t) + q̂(t) p(x, t), v̂(x, t) = φ(x, t) + q̂(t)r (x, t), (10.13)

and state estimation errors

ê(x, t) = u(x, t) − û(x, t),  ε̂(x, t) = v(x, t) − v̂(x, t).   (10.14)
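A discrete-time sketch of the update (10.11a) is given below in Python. The clipping form of the projection operator, which freezes updates that would push |q̂| beyond q̄, is an assumption here, as proj is only defined in Appendix A.

```python
def proj(tau, qhat, qbar):
    # Clipping projection (assumed form): discard update directions that
    # would push the estimate beyond the known bound |q̂| ≤ q̄.
    if (qhat >= qbar and tau > 0) or (qhat <= -qbar and tau < 0):
        return 0.0
    return tau

def adaptive_law_step(qhat, y0, vhat0, r0, gamma, qbar, dt):
    # One explicit-Euler step of the normalized gradient law (10.11a).
    tau = gamma * (y0 - vhat0) * r0 / (1.0 + r0 ** 2)
    return qhat + dt * proj(tau, qhat, qbar)

# At the bound, an outward update is frozen by the projection:
qhat = adaptive_law_step(1.0, y0=2.0, vhat0=0.0, r0=1.0, gamma=10.0, qbar=1.0, dt=0.1)
print(qhat)  # 1.0
```

An inward update (one that decreases |q̂|) passes through the projection unchanged, which is what preserves property (10.15a).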

Theorem 10.1 Consider system (10.1), filters (10.7)–(10.8), adaptive law (10.11a)
and the state estimates (10.13). Then

|q̂(t)| ≤ q̄, ∀t ≥ 0   (10.15a)
ε̂(0, ·)/√(1 + r²(0, ·)) ∈ L2 ∩ L∞   (10.15b)
q̂˙ ∈ L2 ∩ L∞   (10.15c)

and

|ê(x, t)| ≤ |q̃(t)||p(x, t)|,  |ε̂(x, t)| ≤ |q̃(t)||r(x, t)|,   (10.16)

for all x ∈ [0, 1] and t ≥ t F , where t F is defined in (8.8). Moreover, if r (0, t) is


bounded and persistently exciting (PE), that is, if there exist positive constants

T, k1, k2 so that

k1 ≥ (1/T)∫_t^{t+T} r²(0, τ)dτ ≥ k2,   (10.17)

then q̂ → q exponentially fast. If additionally p(x, t) and r (x, t) are bounded for
all x ∈ [0, 1], then ||û − u||∞ → 0 and ||v̂ − v||∞ → 0 exponentially fast.
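The PE condition (10.17) can be checked numerically for a candidate regressor signal. The sketch below evaluates the windowed averages for r(0, t) = sin t, which satisfies (10.17) with T = 2π; the uniform time grid and Riemann-sum quadrature are assumptions of the illustration.

```python
import numpy as np

def pe_levels(r, t, T):
    # Windowed averages (1/T)∫_t^{t+T} r²(τ)dτ over all window start times,
    # approximated by a left Riemann sum on a uniform time grid.
    dt = t[1] - t[0]
    n = int(round(T / dt))
    sq = r ** 2
    vals = [sq[i:i + n].sum() * dt / T for i in range(len(t) - n)]
    return min(vals), max(vals)

t = np.linspace(0.0, 20.0, 20001)
lo, hi = pe_levels(np.sin(t), t, T=2 * np.pi)
print(lo, hi)  # both near 1/2, so sin t admits k1, k2 > 0 in (10.17)
```

A signal that decays to zero, by contrast, drives the lower windowed level to zero and fails the PE condition, which is exactly the regulation/identification conflict noted in Sect. 9.4.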

Proof The property (10.15a) follows from the projection operator and Lemma A.1
in Appendix A. Non-adaptive state estimates ū, v̄ can be constructed from

ū(x, t) = η(x, t) + qp(x, t), v̄(x, t) = φ(x, t) + qr (x, t). (10.18)

It can straightforwardly be shown that the corresponding non-adaptive state estimation errors

e(x, t) = u(x, t) − ū(x, t),  ε(x, t) = v(x, t) − v̄(x, t)   (10.19)

satisfy the dynamics

e_t(x, t) + λ(x)e_x(x, t) = c1(x)ε(x, t) − k1(x)ε(0, t)   (10.20a)
ε_t(x, t) − μ(x)ε_x(x, t) = c2(x)e(x, t) − k2(x)ε(0, t)   (10.20b)
e(0, t) = 0   (10.20c)
ε(1, t) = 0   (10.20d)
e(x, 0) = e0(x)   (10.20e)
ε(x, 0) = ε0(x)   (10.20f)

where e0, ε0 ∈ B([0, 1]). The error dynamics has the same form as the error dynamics
(8.67) of Theorem 8.2, where it was shown that by choosing the injection gains as
(10.10), the system can be mapped by an invertible backstepping transformation in
the form (8.72), that is
[e(x, t); ε(x, t)] = [α̃(x, t); β̃(x, t)] + ∫₀ˣ M(x, ξ)[α̃(ξ, t); β̃(ξ, t)] dξ,   (10.21)

into the target system


α̃_t(x, t) + λ(x)α̃_x(x, t) = ∫₀ˣ g1(x, ξ)α̃(ξ, t)dξ   (10.22a)
β̃_t(x, t) − μ(x)β̃_x(x, t) = c2(x)α̃(x, t) + ∫₀ˣ g2(x, ξ)α̃(ξ, t)dξ   (10.22b)
α̃(0, t) = 0   (10.22c)
β̃(1, t) = 0   (10.22d)
α̃(x, 0) = α̃0(x)   (10.22e)

β̃(x, 0) = β̃0 (x) (10.22f)

with g1 and g2 given by (8.71a) and (8.75). System (10.22) is a cascade from α̃ into β̃, and will be zero in finite time tF. Hence, the following static relationships are valid

u(x, t) = η(x, t) + qp(x, t) + e(x, t)   (10.23a)
v(x, t) = φ(x, t) + qr(x, t) + ε(x, t),   (10.23b)

with e ≡ 0 and ε ≡ 0 for t ≥ tF, and specifically

y0(t) = v(0, t) = φ(0, t) + qr(0, t) + ε(0, t)   (10.24)

with ε(0, t) = 0 for t ≥ tF. Consider


V(t) = e^δ ∫₀¹ e^{−δx}λ^{−1}(x)α̃²(x, t)dx + ∫₀¹ (1 + x)μ^{−1}(x)β̃²(x, t)dx + (1/(2γ))q̃²(t)   (10.25)
for some constant δ ≥ 1, from which we find, using integration by parts, the boundary
condition (10.22c) and Young’s inequality,
d/dt ∫₀¹ e^{−δx}λ^{−1}(x)α̃²(x, t)dx = 2∫₀¹ e^{−δx}λ^{−1}(x)α̃(x, t)α̃_t(x, t)dx
  = −2∫₀¹ e^{−δx}α̃(x, t)α̃_x(x, t)dx
    + 2∫₀¹ e^{−δx}λ^{−1}(x)α̃(x, t)∫₀ˣ g1(x, ξ)α̃(ξ, t)dξdx
  ≤ −e^{−δ}α̃²(1, t) + α̃²(0, t) − δ∫₀¹ e^{−δx}α̃²(x, t)dx
    + ḡ1²∫₀¹ e^{−δx}λ^{−1}(x)α̃²(x, t)∫₀ˣ dξdx + (1/λ)∫₀¹ e^{−δx}∫₀ˣ α̃²(ξ, t)dξdx
  ≤ −δ∫₀¹ e^{−δx}α̃²(x, t)dx + (ḡ1²/λ)∫₀¹ e^{−δx}α̃²(x, t)dx
    − (1/(δλ))[e^{−δx}∫₀ˣ α̃²(ξ, t)dξ]_{x=0}^{1} + (1/(δλ))∫₀¹ e^{−δx}α̃²(x, t)dx
  ≤ −(δ − ḡ1²/λ − 1/λ)∫₀¹ e^{−δx}α̃²(x, t)dx   (10.26)

where ḡ1 bounds g1 . Similarly, we have


d/dt ∫₀¹ (1 + x)μ^{−1}(x)β̃²(x, t)dx = 2∫₀¹ (1 + x)μ^{−1}(x)β̃(x, t)β̃_t(x, t)dx
  = 2∫₀¹ (1 + x)β̃(x, t)β̃_x(x, t)dx + 2∫₀¹ (1 + x)μ^{−1}(x)β̃(x, t)c2(x)α̃(x, t)dx
    + 2∫₀¹ (1 + x)μ^{−1}(x)β̃(x, t)∫₀ˣ g2(x, ξ)α̃(ξ, t)dξdx
  ≤ 2β̃²(1, t) − β̃²(0, t) − ∫₀¹ β̃²(x, t)dx + 2ρ1μ^{−1}∫₀¹ β̃²(x, t)dx
    + 2(c̄2²/(ρ1μ))∫₀¹ α̃²(x, t)dx + 2ρ2μ^{−1}∫₀¹ β̃²(x, t)dx
    + 2(ḡ2²/(ρ2μ))∫₀¹ ∫₀ˣ α̃²(ξ, t)dξdx   (10.27)

for some arbitrary positive constants ρ1 and ρ2, and where c̄2 and ḡ2 bound c2 and g2, respectively. Choosing ρ1 = ρ2 = (1/8)μ yields

d/dt ∫₀¹ (1 + x)μ^{−1}(x)β̃²(x, t)dx ≤ −β̃²(0, t)
  + 16((c̄2² + ḡ2²)/μ²)e^δ∫₀¹ e^{−δx}α̃²(x, t)dx − (1/4)∫₀¹ β̃²(x, t)dx.   (10.28)

Using (10.11a), (10.26), (10.28) and (10.24), Lemma A.1,

y0(t) − v̂(0, t) = ε̂(0, t)   (10.29)

and

ε̂(0, t) − ε(0, t) = q̃(t)r(0, t)   (10.30)

we obtain
 
δ ḡ 2 1 c̄2 + ḡ 2 1
V̇ (t) ≤ −β̃ (0, t) − e 2
δ − 1 − − 16 2 2 2 e−δx α̃2 (x, t)d x
λ λ μ 0

1 1
ˆ2 (0, t) ˆ(0, t)(0, t)
− β̃ 2 (x, t)d x − + . (10.31)
4 0 1 + r (0, t)
2 1 + r 2 (0, t)

Choosing

δ > max{1, ḡ1²/λ + 1/λ + 16(c̄2² + ḡ2²)/μ²}   (10.32)

and applying Young’s inequality to the last term, recalling that β̃(0, t) = (0, t), we
obtain

1 ˆ2 (0, t)
V̇ (t) ≤ − , (10.33)
2 1 + r 2 (0, t)

which proves that V is bounded and nonincreasing, and hence has a limit as t → ∞.
Integrating (10.33) from zero to infinity, and using

ε̂²(0, t)/(1 + r²(0, t)) = q̃²(t)r²(0, t)/(1 + r²(0, t)) ≤ q̃²(t)   (10.34)

for t ≥ t F gives (10.15b). From the adaptive law (10.11a), we have

|q̂˙(t)| ≤ γ|ε̂(0, t)||r(0, t)|/(1 + r²(0, t)) = γ(|ε̂(0, t)|/√(1 + r²(0, t)))(|r(0, t)|/√(1 + r²(0, t)))
        ≤ γ|ε̂(0, t)|/√(1 + r²(0, t))   (10.35)

from which (10.15b) yields (10.15c). The property (10.16) follows immediately from
the relationships

ê(x, t) = e(x, t) + q̃(t)p(x, t),  ε̂(x, t) = ε(x, t) + q̃(t)r(x, t)   (10.36)

with e = ε ≡ 0 for t ≥ tF. The last property regarding convergence in the presence of PE follows e.g. from part (iii) of Theorem 4.3.2 in Ioannou and Sun (1995). □

10.2.2 Control Law

Consider the control law


U(t) = ∫₀¹ K̂^u(1, ξ, t)û(ξ, t)dξ + ∫₀¹ K̂^v(1, ξ, t)v̂(ξ, t)dξ   (10.37)

where ( K̂ u , K̂ v ) is the on-line solution to the PDE

μ(x)K̂^u_x(x, ξ, t) − λ(ξ)K̂^u_ξ(x, ξ, t) = λ'(ξ)K̂^u(x, ξ, t) + c2(ξ)K̂^v(x, ξ, t)   (10.38a)
μ(x)K̂^v_x(x, ξ, t) + μ(ξ)K̂^v_ξ(x, ξ, t) = c1(ξ)K̂^u(x, ξ, t) − μ'(ξ)K̂^v(x, ξ, t)   (10.38b)
K̂^u(x, x, t) = −c2(x)/(λ(x) + μ(x))   (10.38c)
K̂^v(x, 0, t) = q̂(t)(λ(0)/μ(0))K̂^u(x, 0, t)   (10.38d)

with q̂, û and v̂ generated using the adaptive law (10.11a) and the relationship (10.13).
As with the previous PDEs of this form, (9.29) and (9.101), Theorem D.1 in Appendix
D guarantees a unique solution to (10.38), and since |q̂| ≤ q̄ and q̂˙ ∈ L2 ∩ L∞ , we
also have

|| K̂ u (t)||∞ ≤ K̄ , || K̂ v (t)||∞ ≤ K̄ , ∀t ≥ 0, (10.39)

for some nonnegative constant K̄ , and

|| K̂ tu ||, || K̂ tv || ∈ L2 ∩ L∞ (10.40)

since q̂˙ ∈ L2 ∩ L∞ .

Theorem 10.2 Consider system (10.1), filters (10.7)–(10.8), adaptive law (10.11a) and the adaptive state estimates (10.13). The control law (10.37) guarantees

||u||, ||v||, ||η||, ||φ||, || p||, ||r || ∈ L2 ∩ L∞ (10.41a)


||u||∞ , ||v||∞ , ||η||∞ , ||φ||∞ , || p||∞ , ||r ||∞ ∈ L2 ∩ L∞ , (10.41b)
||u||, ||v||, ||η||, ||φ||, || p||, ||r || → 0 (10.41c)
||u||∞ , ||v||∞ , ||η||∞ , ||φ||∞ , || p||∞ , ||r ||∞ → 0. (10.41d)

The proof is given in Sect. 10.2.4, following some intermediate results in Sect.
10.2.3.

10.2.3 Backstepping

First, we derive the dynamics of the adaptive estimates (10.13). Using the filters
(10.7) and (10.8), it can straightforwardly be shown that

û_t(x, t) + λ(x)û_x(x, t) = c1(x)v̂(x, t) + k1(x)ε̂(0, t) + q̂˙(t)p(x, t)   (10.42a)
v̂_t(x, t) − μ(x)v̂_x(x, t) = c2(x)û(x, t) + k2(x)ε̂(0, t) + q̂˙(t)r(x, t)   (10.42b)
û(0, t) = q̂(t)v(0, t)   (10.42c)
v̂(1, t) = U(t)   (10.42d)
û(x, 0) = û0(x)   (10.42e)
v̂(x, 0) = v̂0(x),   (10.42f)

where û 0 , v̂0 ∈ B([0, 1]). Consider the backstepping transformation from û, v̂ into
the new variables α, β given by

α(x, t) = û(x, t)   (10.43a)
β(x, t) = v̂(x, t) − ∫₀ˣ K̂^u(x, ξ, t)û(ξ, t)dξ − ∫₀ˣ K̂^v(x, ξ, t)v̂(ξ, t)dξ
        = T[û, v̂](x, t)   (10.43b)

for (K̂^u, K̂^v) satisfying (10.38). The inverse of (10.43) is

û(x, t) = α(x, t)   (10.44a)
v̂(x, t) = T^{−1}[α, β](x, t)   (10.44b)

where T^{−1} denotes a Volterra integral operator of similar form to T. Lastly, consider the target system

α_t(x, t) + λ(x)α_x(x, t) = c1(x)β(x, t) + ∫₀ˣ ω(x, ξ, t)α(ξ, t)dξ
    + ∫₀ˣ κ(x, ξ, t)β(ξ, t)dξ + k1(x)ε̂(0, t) + q̂˙(t)p(x, t)   (10.45a)
β_t(x, t) − μ(x)β_x(x, t) = (T[k1, k2](x, t) − K̂^u(x, 0, t)λ(0)q̂(t))ε̂(0, t)
    + q̂˙(t)T[p, r](x, t) − ∫₀ˣ K̂^u_t(x, ξ, t)α(ξ, t)dξ
    − ∫₀ˣ K̂^v_t(x, ξ, t)T^{−1}[α, β](ξ, t)dξ   (10.45b)
α(0, t) = q̂(t)β(0, t) + q̂(t)ε̂(0, t)   (10.45c)
β(1, t) = 0   (10.45d)
α(x, 0) = α0(x)   (10.45e)
β(x, 0) = β0(x)   (10.45f)

for initial conditions α0(x) = û0(x), β0(x) = T[û0, v̂0](x, 0) and coefficients ω, κ defined over T1 given in (1.1b).
Lemma 10.1 The backstepping transformation (10.43) and control law (10.37) with
( K̂ u , K̂ v ) satisfying (10.38) map system (10.42) into the target system (10.45) with
ω and κ given by
ω(x, ξ, t) = c1(x)K̂^u(x, ξ, t) + ∫_ξ^x κ(x, s, t)K̂^u(s, ξ, t)ds   (10.46a)
κ(x, ξ, t) = c1(x)K̂^v(x, ξ, t) + ∫_ξ^x κ(x, s, t)K̂^v(s, ξ, t)ds.   (10.46b)

Moreover, there exist constants ω̄, κ̄ so that

||ω(t)||∞ ≤ ω̄, ||κ(t)||∞ ≤ κ̄, ∀t ≥ 0. (10.47a)

Proof By differentiating (10.43b) with respect to time and space, inserting the
dynamics (10.42a)–(10.42b), integrating by parts and inserting the boundary condi-
tion (10.42c), we find

v̂_t(x, t) = β_t(x, t) − K̂^u(x, x, t)λ(x)û(x, t) + K̂^u(x, 0, t)λ(0)q̂(t)v(0, t)
    + ∫₀ˣ K̂^u_ξ(x, ξ, t)λ(ξ)û(ξ, t)dξ + ∫₀ˣ K̂^u(x, ξ, t)λ'(ξ)û(ξ, t)dξ
    + ∫₀ˣ K̂^u(x, ξ, t)c1(ξ)v̂(ξ, t)dξ + ∫₀ˣ K̂^u(x, ξ, t)k1(ξ)ε̂(0, t)dξ
    + ∫₀ˣ K̂^u(x, ξ, t)q̂˙(t)p(ξ, t)dξ + K̂^v(x, x, t)μ(x)v̂(x, t)
    − K̂^v(x, 0, t)μ(0)v̂(0, t) − ∫₀ˣ K̂^v_ξ(x, ξ, t)μ(ξ)v̂(ξ, t)dξ
    − ∫₀ˣ K̂^v(x, ξ, t)μ'(ξ)v̂(ξ, t)dξ + ∫₀ˣ K̂^v(x, ξ, t)c2(ξ)û(ξ, t)dξ
    + ∫₀ˣ K̂^v(x, ξ, t)k2(ξ)ε̂(0, t)dξ + ∫₀ˣ K̂^v(x, ξ, t)q̂˙(t)r(ξ, t)dξ
    + ∫₀ˣ K̂^u_t(x, ξ, t)û(ξ, t)dξ + ∫₀ˣ K̂^v_t(x, ξ, t)v̂(ξ, t)dξ   (10.48)

and

v̂_x(x, t) = β_x(x, t) + K̂^u(x, x, t)û(x, t) + K̂^v(x, x, t)v̂(x, t)
    + ∫₀ˣ K̂^u_x(x, ξ, t)û(ξ, t)dξ + ∫₀ˣ K̂^v_x(x, ξ, t)v̂(ξ, t)dξ.   (10.49)

Inserting (10.48) and (10.49) into (10.42b), we find

β_t(x, t) − μ(x)β_x(x, t) + K̂^u(x, 0, t)λ(0)q̂(t)ε̂(0, t)
  − (k2(x) − ∫₀ˣ K̂^u(x, ξ, t)k1(ξ)dξ − ∫₀ˣ K̂^v(x, ξ, t)k2(ξ)dξ)ε̂(0, t)
  − q̂˙(t)(r(x, t) − ∫₀ˣ K̂^u(x, ξ, t)p(ξ, t)dξ − ∫₀ˣ K̂^v(x, ξ, t)r(ξ, t)dξ)
  + (K̂^u(x, 0, t)λ(0)q̂(t) − K̂^v(x, 0, t)μ(0))v̂(0, t)
  − ∫₀ˣ (μ(x)K̂^u_x(x, ξ, t) − K̂^u_ξ(x, ξ, t)λ(ξ) − K̂^u(x, ξ, t)λ'(ξ) − K̂^v(x, ξ, t)c2(ξ))û(ξ, t)dξ
  − ∫₀ˣ (μ(x)K̂^v_x(x, ξ, t) + K̂^v(x, ξ, t)μ'(ξ) − K̂^u(x, ξ, t)c1(ξ) + K̂^v_ξ(x, ξ, t)μ(ξ))v̂(ξ, t)dξ
  − (λ(x)K̂^u(x, x, t) + μ(x)K̂^u(x, x, t) + c2(x))û(x, t)
  + ∫₀ˣ K̂^u_t(x, ξ, t)û(ξ, t)dξ + ∫₀ˣ K̂^v_t(x, ξ, t)v̂(ξ, t)dξ = 0.   (10.50)

Using the Eqs. (10.38) and the definitions of T and T −1 in (10.43b) and (10.44b),
Eq. (10.50) can be written as (10.45b).
Inserting (10.43) into (10.45a) and changing the order of integration in the double
integrals give

û_t(x, t) + λ(x)û_x(x, t) = c1(x)v̂(x, t)
  + ∫₀ˣ (ω(x, ξ, t) − c1(x)K̂^u(x, ξ, t) − ∫_ξ^x κ(x, s, t)K̂^u(s, ξ, t)ds)û(ξ, t)dξ
  + ∫₀ˣ (κ(x, ξ, t) − c1(x)K̂^v(x, ξ, t) − ∫_ξ^x κ(x, s, t)K̂^v(s, ξ, t)ds)v̂(ξ, t)dξ
  + k1(x)ε̂(0, t) + q̂˙(t)p(x, t),   (10.51)

from which (10.46) gives (10.42a). The boundary condition (10.45c) comes from (10.42c) and noting that û(0, t) = α(0, t) and v(0, t) = v̂(0, t) + ε̂(0, t) = β(0, t) + ε̂(0, t). Evaluating (10.43b) at x = 1, inserting the boundary condition (10.42d) and the control law (10.37) yield (10.45d).
Lastly, Lemma 1.1 applied to (10.46) gives the bounds (10.47), since K̂^u, K̂^v and c1 are all uniformly bounded. □
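Equations of the type (10.46) are Volterra equations of the second kind, which Lemma 1.1 solves by successive approximations. As a hedged scalar illustration of that technique (a 1-D analogue, not the actual kernel computation), the sketch below solves f(x) = 1 + ∫₀ˣ f(s)ds by fixed-point iteration; the exact solution is f(x) = eˣ.

```python
import numpy as np

def solve_volterra(g, K, x, iters=30):
    # Successive approximations f ← g + ∫₀ˣ K(x,s) f(s) ds for a scalar
    # Volterra equation of the second kind, trapezoidal quadrature.
    dx = x[1] - x[0]
    f = np.array([g(xi) for xi in x], dtype=float)
    for _ in range(iters):
        fn = np.empty_like(f)
        for i, xi in enumerate(x):
            w = np.array([K(xi, s) for s in x[:i + 1]]) * f[:i + 1]
            integral = dx * (w.sum() - 0.5 * (w[0] + w[-1])) if i > 0 else 0.0
            fn[i] = g(xi) + integral
        f = fn
    return f

x = np.linspace(0.0, 1.0, 201)
f = solve_volterra(lambda s: 1.0, lambda xi, s: 1.0, x)
err = np.max(np.abs(f - np.exp(x)))
print(err)
```

For Volterra kernels the Neumann series always converges, which is why the iteration count is uncritical; only the quadrature error remains.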

We will also map the filter (10.8) in ( p, r ) into a target system that is easier to
analyze. Consider the backstepping transformation from a new set of variables (w, z)
into ( p, r ) given by
[p(x, t); r(x, t)] = [w(x, t); z(x, t)] + ∫₀ˣ [0  M^α(x, ξ); 0  M^β(x, ξ)][w(ξ, t); z(ξ, t)]dξ   (10.52)

where M^α and M^β satisfy the PDEs (8.65). Consider also the target system

w_t(x, t) + λ(x)w_x(x, t) = ∫₀ˣ g1(x, ξ)w(ξ, t)dξ   (10.53a)
z_t(x, t) − μ(x)z_x(x, t) = c2(x)w(x, t) + ∫₀ˣ g2(x, ξ)w(ξ, t)dξ   (10.53b)
w(0, t) = β(0, t) + ε̂(0, t)   (10.53c)
z(1, t) = 0   (10.53d)
w(x, 0) = w0(x)   (10.53e)
z(x, 0) = z0(x)   (10.53f)

for some functions g1 , g2 defined over T , and initial conditions w0 , z 0 ∈ B([0, 1]).
Lemma 10.2 The backstepping transformation (10.52) maps filter (10.8) into target
system (10.53) with g1 and g2 given by (8.71a) and (8.75).
Proof The dynamics of the filter (10.8) has the same form as the error dynamics
(8.67) of Theorem 8.2, where it was shown that a backstepping transformation in
the form (10.52) with injection gains given as (10.10) maps the system into a target
system in the form (10.53). The boundary condition (10.53c) follows from the fact
that

w(0, t) = p(0, t) = y0(t) = v(0, t) = v̂(0, t) + ε̂(0, t) = β(0, t) + ε̂(0, t).   (10.54)

10.2.4 Proof of Theorem 10.2

We recall that, since the backstepping transformations are bounded uniformly in t, and invertible, the following inequalities hold (Theorem 1.3)

||β(t)|| = ||T [û, v̂](t)|| ≤ A1 ||û(t)|| + A2 ||v̂(t)|| (10.55a)


−1
||v̂(t)|| = ||T [α, β](t)|| ≤ A3 ||α(t)|| + A4 ||β(t)|| (10.55b)

and

|| p(t)|| ≤ B1 ||w(t)|| (10.56a)


||r (t)|| ≤ B2 ||w(t)|| + B3 ||z(t)|| (10.56b)

for some positive constants A1 , . . . , A4 and B1 , . . . , B3 . Consider the functions


V1(t) = ∫₀¹ e^{−δx}λ^{−1}(x)α²(x, t)dx   (10.57a)
V2(t) = ∫₀¹ (1 + x)μ^{−1}(x)β²(x, t)dx   (10.57b)
V3(t) = ∫₀¹ e^{−δx}λ^{−1}(x)w²(x, t)dx   (10.57c)
V4(t) = ∫₀¹ (1 + x)μ^{−1}(x)z²(x, t)dx,   (10.57d)

which will eventually contribute to a Lyapunov function.


The following result is proved in Appendix E.5.

Lemma 10.3 There exist positive constants h1, h2, …, h7 and nonnegative, integrable functions l1, l2, …, l5 such that
 
V̇1(t) ≤ h1β²(0, t) + h2ε̂²(0, t) − (δλ − h3)V1(t)
        + h4V2(t) + l1(t)V3(t)   (10.58a)
V̇2(t) ≤ −β²(0, t) − (1/4)μV2(t) + h5ε̂²(0, t) + l2(t)V1(t)
        + l3(t)V2(t) + l4(t)V3(t) + l5(t)V4(t)   (10.58b)
V̇3(t) ≤ 2β²(0, t) + 2ε̂²(0, t) − (δλ − h6)V3(t)   (10.58c)
V̇4(t) ≤ −z²(0, t) − (1/4)μV4(t) + h7e^δV3(t).   (10.58d)
4
Now construct

V5 (t) = V1 (t) + a2 V2 (t) + a3 V3 (t) + V4 (t) (10.59)

for some positive constants a2, a3 to be decided. Using Lemma 10.3, we have

V̇5(t) ≤ −(a2 − h1 − 2a3)β²(0, t) + (h2 + a2h5 + 2a3)ε̂²(0, t) − z²(0, t)
  − (δλ − h3)V1(t) − ((1/4)a2μ − h4)V2(t)
  − (a3(δλ − h6) − h7e^δ)V3(t) − (1/4)μV4(t) + a2l2(t)V1(t)
  + a2l3(t)V2(t) + (l1(t) + a2l4(t))V3(t) + a2l5(t)V4(t).   (10.60)

Choosing

δ > max{1, h3/λ, h6/λ}   (10.61)

and then choosing

a3 > h7e^δ/(δλ − h6),  a2 > max{h1 + 2a3, 4h4/μ}   (10.62)

we obtain

V̇5(t) ≤ −cV5(t) + l6(t)V5(t) + bε̂²(0, t) − z²(0, t)   (10.63)

for some positive constants c and

b = h 2 + a2 h 5 + 2a3 . (10.64)

Rewriting ε̂²(0, t) as

ε̂²(0, t) = (ε̂²(0, t)/(1 + r²(0, t)))(1 + r²(0, t))   (10.65)

and since z(0, t) = r(0, t), we obtain

V̇5(t) ≤ −cV5(t) + l6(t)V5(t) + l7(t) − (1 − bσ²(t))z²(0, t)   (10.66)

where

l7(t) = bσ²(t),  σ²(t) = ε̂²(0, t)/(1 + r²(0, t))   (10.67)

are integrable functions (Theorem 10.1). Moreover, from the definition of V in (10.25) and the bound (10.33) on its derivative, we have

V̇(t) ≤ −(1/2)σ²(t)   (10.68)

and

σ²(t) = ε̂²(0, t)/(1 + r²(0, t)) = q̃²(t)r²(0, t)/(1 + r²(0, t)) ≤ q̃²(t) ≤ 2γV(t).   (10.69)

Lemma B.4 in Appendix B then gives V5 ∈ L1 ∩ L∞ , and hence

||α||, ||β||, ||w||, ||z|| ∈ L2 ∩ L∞ . (10.70)

Since ||z|| ∈ L∞ , z 2 (x, t) must for all fixed t be bounded for x almost everywhere
in [0, 1]. This in turn implies that z 2 (0, t) must be bounded for almost all t ≥ 0,
implying that

σ 2 z 2 (0, ·) ∈ L1 (10.71)

since σ 2 ∈ L1 . Lemma B.3 in Appendix B.3 gives V5 → 0 and hence

||α||, ||β||, ||w||, ||z|| → 0. (10.72)



From the invertibility of the transformations (10.43) and (10.52), it follows that

||û||, ||v̂||, || p||, ||r || ∈ L2 ∩ L∞ , ||û||, ||v̂||, || p||, ||r || → 0. (10.73)

From (10.13), we then have

||η||, ||φ|| ∈ L2 ∩ L∞ , ||η||, ||φ|| → 0 (10.74)

while (10.18) and (10.19) with bounded ||e|| and ||ε|| give

||u||, ||v|| ∈ L2 ∩ L∞ , ||u||, ||v|| → 0. (10.75)

Pointwise boundedness, square integrability and convergence to zero of the system variables u and v can now be shown using the same technique as in the proof of Theorem 9.1, since U ∈ L2 ∩ L∞ and U → 0, in view of (10.37), ||û||∞, ||v̂||∞ ∈ L2 ∩ L∞ and ||û||∞, ||v̂||∞ → 0. □

10.3 Collocated Sensing and Control

10.3.1 Observer Equations

For system (10.1) with measurement (10.5), we propose the observer

û t (x, t) + λ(x)û x (x, t) = c1 (x)v̂(x, t) + Γ1 (x, t)(y1 (t) − û(1, t)) (10.76a)
v̂t (x, t) − μ(x)v̂x (x, t) = c2 (x)û(x, t) + Γ2 (x, t)(y1 (t) − û(1, t)) (10.76b)
û(0, t) = q̂(t)v̂(0, t) (10.76c)
v̂(1, t) = U (t), (10.76d)
û(x, 0) = û0(x)   (10.76e)
v̂(x, 0) = v̂0(x)   (10.76f)

where q̂ is an estimate of q generated from some adaptive law, Γ1 (x, t) and Γ2 (x, t)
are output injection gains to be designed, and the initial conditions û 0 , v̂0 satisfy

û 0 , v̂0 ∈ B([0, 1]). (10.77)

The state estimation errors ũ = u − û and ṽ = v − v̂ satisfy the dynamics

ũ t (x, t) + λ(x)ũ x (x, t) = c1 (x)ṽ(x, t) − Γ1 (x, t)ũ(1, t) (10.78a)


ṽt (x, t) − μ(x)ṽx (x, t) = c2 (x)ũ(x, t) − Γ2 (x, t)ũ(1, t) (10.78b)
ũ(0, t) = q̂(t)ṽ(0, t) + q̃(t)v(0, t) (10.78c)

ṽ(1, t) = 0, (10.78d)
ũ(x, 0) = ũ 0 (x) (10.78e)
ṽ(x, 0) = ṽ0 (x) (10.78f)

where q̃ = q − q̂, ũ 0 = u 0 − û 0 and ṽ0 = v0 − v̂0 .

10.3.2 Target System and Backstepping

Consider the following PDEs in P^α, P^β, defined over S1 given in (1.1d)

P^α_t(x, ξ, t) = −λ(x)P^α_x(x, ξ, t) − λ(ξ)P^α_ξ(x, ξ, t)
    − λ'(ξ)P^α(x, ξ, t) + c1(x)P^β(x, ξ, t)   (10.79a)
P^β_t(x, ξ, t) = μ(x)P^β_x(x, ξ, t) − λ(ξ)P^β_ξ(x, ξ, t)
    − λ'(ξ)P^β(x, ξ, t) + c2(x)P^α(x, ξ, t)   (10.79b)
P^β(x, x, t) = c2(x)/(λ(x) + μ(x))   (10.79c)
P^α(0, ξ, t) = q̂(t)P^β(0, ξ, t)   (10.79d)
P^α(x, ξ, 0) = P^α_0(x, ξ)   (10.79e)
P^β(x, ξ, 0) = P^β_0(x, ξ)   (10.79f)

for some initial conditions satisfying

P^α_0, P^β_0 ∈ B(S),   (10.80)

where S is defined in (1.1c). From Theorem D.3 in Appendix D.3, Eq. (10.79) has a bounded solution (P^α, P^β) with bounds depending on q̄ and the initial conditions P^α_0, P^β_0. In other words, there exist constants P̄^α, P̄^β so that

||P^α(t)||∞ ≤ P̄^α,  ||P^β(t)||∞ ≤ P̄^β,  ∀t ≥ 0.   (10.81)

Moreover, if q̂ converges exponentially to q, then the solution (P^α, P^β) converges exponentially to the equilibrium of (10.79) defined by setting P^α_t = P^β_t ≡ 0 and q̂ ≡ q. In other words, the solution of (10.79) converges in this case to the solution of (8.86).
Consider also the backstepping transformation

ũ(x, t) = α(x, t) + ∫ₓ¹ P^α(x, ξ, t)α(ξ, t)dξ   (10.82a)
ṽ(x, t) = β(x, t) + ∫ₓ¹ P^β(x, ξ, t)α(ξ, t)dξ   (10.82b)

for P^α, P^β satisfying (10.79), and the target system

α_t(x, t) + λ(x)α_x(x, t) = c1(x)β(x, t) − ∫ₓ¹ b1(x, ξ, t)β(ξ, t)dξ   (10.83a)
β_t(x, t) − μ(x)β_x(x, t) = −∫ₓ¹ b2(x, ξ, t)β(ξ, t)dξ   (10.83b)
α(0, t) = q̂(t)β(0, t) + q̃(t)v(0, t)   (10.83c)
β(1, t) = 0   (10.83d)
α(x, 0) = α0(x)   (10.83e)
β(x, 0) = β0(x)   (10.83f)

where b1 , b2 are time-varying coefficients, defined over S1 given in (1.1d).

Lemma 10.4 The backstepping transformation (10.82) maps target system (10.83)
where b1 and b2 are given by
b1(x, ξ, t) = P^α(x, ξ, t)c1(ξ) − ∫ₓ^ξ P^α(x, s, t)b1(s, ξ, t)ds   (10.84a)
b2(x, ξ, t) = P^β(x, ξ, t)c1(ξ) − ∫ₓ^ξ P^β(x, s, t)b1(s, ξ, t)ds   (10.84b)

into system (10.78) with

Γ1(x, t) = λ(1)P^α(x, 1, t)   (10.85a)
Γ2(x, t) = λ(1)P^β(x, 1, t).   (10.85b)

Proof From time differentiating (10.82a), inserting the dynamics (10.83a), integrating by parts and changing the order of integration in the double integral, we find
ũ_t(x, t) = α_t(x, t) + ∫ₓ¹ P^α_t(x, ξ, t)α(ξ, t)dξ − P^α(x, 1, t)λ(1)α(1, t)
    + P^α(x, x, t)λ(x)α(x, t) + ∫ₓ¹ P^α_ξ(x, ξ, t)λ(ξ)α(ξ, t)dξ
    + ∫ₓ¹ P^α(x, ξ, t)λ'(ξ)α(ξ, t)dξ + ∫ₓ¹ P^α(x, ξ, t)c1(ξ)β(ξ, t)dξ
    − ∫ₓ¹ ∫ₓ^ξ P^α(x, s, t)b1(s, ξ, t)ds β(ξ, t)dξ.   (10.86)

Moreover, from differentiating (10.82a) with respect to space, we obtain

ũ_x(x, t) = α_x(x, t) − P^α(x, x, t)α(x, t) + ∫ₓ¹ P^α_x(x, ξ, t)α(ξ, t)dξ.   (10.87)

Inserting (10.86) and (10.87) into (10.78a) yields

α_t(x, t) + λ(x)α_x(x, t) = c1(x)β(x, t) − ∫ₓ¹ (P^α_t(x, ξ, t) + P^α_ξ(x, ξ, t)λ(ξ)
    + P^α(x, ξ, t)λ'(ξ) + λ(x)P^α_x(x, ξ, t) − c1(x)P^β(x, ξ, t))α(ξ, t)dξ
  − ∫ₓ¹ (P^α(x, ξ, t)c1(ξ) − ∫ₓ^ξ P^α(x, s, t)b1(s, ξ, t)ds)β(ξ, t)dξ
  − (Γ1(x, t) − P^α(x, 1, t)λ(1))α(1, t).   (10.88)

Using (10.79a), (10.84a) and (10.85a) gives (10.83a). Existence of a solution to the Volterra integral equation (10.84) is ensured by Lemma 1.1. Similar derivations using (10.79b), (10.79c), (10.84b) and (10.85b) yield (10.83b). Inserting (10.82) into (10.78c), one obtains

α(0, t) = q̂(t)β(0, t) + q̃(t)v(0, t) + ∫₀¹ (q̂(t)P^β(0, ξ, t) − P^α(0, ξ, t))α(ξ, t)dξ.   (10.89)

Using (10.79d) yields (10.83c). The last boundary condition (10.83d) follows from
inserting (10.82) into (10.78d). 

10.3.3 Analysis of the Target System

Define the signals

v̄(t) = v̂(0, t − t1) + ∫₀¹ P^β(0, ξ, t − t1)(y1(t − h_α(ξ)) − û(1, t − h_α(ξ)))dξ,   (10.90a)
ϑ(t) = y1(t) − û(1, t) + q̂(t − t1)v̄(t)   (10.90b)

where P^β is the kernel in (10.82) satisfying (10.79), and

h_α(x) = ∫₀ˣ dγ/λ(γ)   (10.91)

with t1 defined in (8.8). We note that t1 = h α (1), and that the signals (10.90) can be
computed using available measurements and estimates only.
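The propagation delay (10.91) is easily evaluated numerically. In the sketch below, a trapezoidal rule is used, and λ(x) = 1 + x is an assumed example profile, for which t1 = h_α(1) = ln 2.

```python
import numpy as np

def h_alpha(x_eval, lam, n=2001):
    # h_α(x) = ∫₀ˣ dγ/λ(γ), trapezoidal rule on a uniform grid.
    g = np.linspace(0.0, x_eval, n)
    vals = 1.0 / lam(g)
    dg = g[1] - g[0]
    return dg * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

t1 = h_alpha(1.0, lambda g: 1.0 + g)  # λ(x) = 1 + x is an assumed example
print(t1)  # ≈ ln 2 ≈ 0.6931
```

In practice, h_α(ξ) is needed at every quadrature node of the integral in (10.90a), so its values would be precomputed once on the spatial grid.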

Lemma 10.5 Consider system (10.1) with measurement (10.5), observer (10.76)
and the signals (10.90). The relationship

v̄(t) = v(0, t − t1 ) (10.92)

and the linear relation

ϑ(t) = q v̄(t) (10.93)

are valid for t ≥ t F , where t F is defined in (8.8).

Proof From (10.83b) and (10.83d), one has that β ≡ 0 for t ≥ t2 where t2 is given
in (8.8). Thus, for t ≥ t2 , system (10.83) reduces to

αt (x, t) + λ(x)αx (x, t) = 0 (10.94a)


α(0, t) = q̃(t)v(0, t) (10.94b)
α(x, t2 ) = αt2 (x) (10.94c)

which can be solved to yield, for t ≥ t F

α(x, t) = α(0, t − h_α(x)) = q̃(t − h_α(x))v(0, t − h_α(x))   (10.95)

or

α(x, t) = α(1, t + t1 − h_α(x)) = y1(t + t1 − h_α(x)) − û(1, t + t1 − h_α(x))   (10.96)

where h_α and t1 are given in (10.91). Moreover, from (10.82b) and β ≡ 0, one will for t ≥ t2 have

v(0, t) = v̂(0, t) + ∫₀¹ P^β(0, ξ, t)α(ξ, t)dξ.   (10.97)

Substituting (10.96) into (10.97), we have

v(0, t) = v̂(0, t) + ∫₀¹ P^β(0, ξ, t)y1(t + t1 − h_α(ξ))dξ
    − ∫₀¹ P^β(0, ξ, t)û(1, t + t1 − h_α(ξ))dξ   (10.98)

and after a time shift of t1, assuming t ≥ t1 + t2 = tF, one finds (10.92). Inserting x = 1 into (10.95) and using (10.82a) gives

u(1, t) − û(1, t) = q̃(t − t1)v(0, t − t1)
    = qv(0, t − t1) − q̂(t − t1)v(0, t − t1)   (10.99)

and using (10.5) and (10.92) establishes that (10.90b) is equivalent to

ϑ(t) = qv(0, t − t1 ) (10.100)

for t ≥ t F . Combining (10.92) and (10.100) gives (10.93). 

10.3.4 Adaptive Law

Given the linear parametric model of Lemma 10.5, a large number of adaptive laws
can be applied to generate estimates of q. The resulting estimate can then be combined
with the observer (10.76) to generate estimates of the system states. To best facilitate the adaptive control law design, the gradient algorithm with normalization is used here, given as

q̂˙(t) = 0 for 0 ≤ t < t_F, and
q̂˙(t) = γ (ϑ(t) − q̂(t)v̄(t)) v̄(t)/(1 + v̄²(t)) for t ≥ t_F    (10.101a)

q̂(0) = q̂₀,  |q̂₀| ≤ q̄    (10.101b)

for some design gain γ > 0.
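Discretized in time, the law (10.101) amounts to freezing q̂ until t_F and then taking normalized gradient steps. A minimal forward-Euler sketch (names and step size are illustrative, not from the text):

```python
def qhat_step(qhat, theta, vbar, gamma, dt, t, t_F):
    """One Euler step of (10.101a): qhat is frozen for t < t_F, then
    follows the normalized gradient of the error theta - qhat*vbar."""
    if t < t_F:
        return qhat
    return qhat + dt * gamma * (theta - qhat * vbar) * vbar / (1.0 + vbar ** 2)
```

With ϑ(t) = q v̄(t) as in Lemma 10.5, repeated steps drive q̂ toward q whenever v̄ is exciting.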

Theorem 10.3 Consider system (10.1) with measurement (10.5), observer (10.76)
and the adaptive law (10.101), where ϑ and v̄ are generated using Lemma 10.5 and
t F is defined in (8.8). The adaptive law (10.101) has the following properties:

|q̂(t)| ≤ q̄, ∀t ≥ 0 (10.102)


q̃v(0, ·)/√(1 + v²(0, ·)) ∈ L₂ ∩ L∞    (10.103)
q̂˙ ∈ L₂ ∩ L∞.    (10.104)

Moreover, if v̄(t) = v(0, t − t1 ) is bounded for all t ≥ 0, then

||û||∞ , ||v̂||∞ ∈ L∞ . (10.105)

Lastly, if v̄(t) = v(0, t − t1 ) is persistently exciting (PE), then

q̂ → q, ||û − u|| → 0, ||v̂ − v|| → 0, (10.106)

exponentially fast.

Proof By inserting ϑ(t) and using q̃(t) = q − q̂(t) and (10.92), one finds

q̃˙(t) = 0 for 0 ≤ t < t_F, and
q̃˙(t) = −γ q̃(t) v²(0, t − t₁)/(1 + v²(0, t − t₁)) for t ≥ t_F,    (10.107)

which is straightforwardly solved to yield



q̃(t) = q̃(0) for 0 ≤ t < t_F, and
q̃(t) = q̃(t₀) exp(−γ ∫_{t₀}^{t} v²(0, τ − t₁)/(1 + v²(0, τ − t₁)) dτ) for t ≥ t₀ ≥ t_F,    (10.108)

showing (10.102) by selecting t0 = t F . Equation (10.108) also shows that the decay
rate is at maximum exponential with a rate γ. Next, form the Lyapunov function

V(t) = (1/(2γ)) q̃²(t).    (10.109)

Differentiating with respect to time and inserting (10.107), one finds



V̇(t) = 0 for 0 ≤ t < t_F, and
V̇(t) = −q̃²(t)v²(0, t − t₁)/(1 + v²(0, t − t₁)) for t ≥ t_F.    (10.110)

From (10.108) with t0 = t − t1 , we have

q̃ 2 (t) ≥ q̃ 2 (t − t1 ) exp(−2γt1 ), (10.111)

and hence

V̇(t) ≤ 0 for 0 ≤ t < t_F, and
V̇(t) ≤ −e^{−2γt₁} q̃²(t − t₁)v²(0, t − t₁)/(1 + v²(0, t − t₁)) for t ≥ t_F,    (10.112)

which shows that V is non-increasing and bounded from above. Integrating (10.112)
from zero to infinity gives that the signal

s(t) = q̃²(t − t₁)v²(0, t − t₁)/(1 + v²(0, t − t₁))    (10.113)

is in L1 . This in turn means that the signal s(t + t1 ) also lies in L1 , and hence

q̃v(0, ·)/√(1 + v²(0, ·)) ∈ L₂    (10.114)

follows. Moreover, we have

s(t) = q̃²(t − t₁)v²(0, t − t₁)/(1 + v²(0, t − t₁)) ≤ q̃²(t − t₁) ≤ q̃²(0)    (10.115)

meaning that s ∈ L∞ , which results in

q̃v(0, ·)/√(1 + v²(0, ·)) ∈ L∞.    (10.116)

From (10.101), one has for t ≥ t F


 
|q̂˙(t)| = γ |q̃(t)v̄²(t)|/(1 + v̄²(t)) ≤ γ (|q̃(t)v̄(t)|/√(1 + v̄²(t))) (|v̄(t)|/√(1 + v̄²(t))) ≤ γ |q̃(t)v̄(t)|/√(1 + v̄²(t)),    (10.117)

which from (10.103), gives (10.104).


Lastly, the two latter properties are proved. From (10.108), one observes that
q̃(t) converges exponentially to zero when v(0, t − t1 ) is PE. From (10.94), the fact
that q̂(t) is bounded and the assumption that v(0, t) is bounded, α(x, t) is bounded
(recall that β is identically zero after t F ). Boundedness of the kernels as stated in
Lemma 10.4 then provides boundedness of û(x, t) and v̂(x, t). The same line of
reasoning provides the statements about exponential convergence when v(0, t − t1 )
is PE. 

10.3.5 Control Law

We propose the control law


U(t) = ∫₀¹ K̂ᵘ(1, ξ, t)û(ξ, t)dξ + ∫₀¹ K̂ᵛ(1, ξ, t)v̂(ξ, t)dξ    (10.118)

where ( K̂ u , K̂ v ) is the on-line solution to the PDE (10.38) with q̂ generated using
the method of Theorem 10.3. Note that the bounds (10.39) and (10.40) still apply,
since q̂ is bounded and q̂˙ ∈ L2 ∩ L∞ (Theorem 10.3).

Theorem 10.4 Consider system (10.1) with measurement (10.5), observer (10.76)
and the adaptive law of Theorem 10.3. The control law (10.118) ensures
||u||, ||v||, ||û||, ||v̂|| ∈ L2 ∩ L∞ (10.119a)
||u||∞ , ||v||∞ , ||û||∞ , ||v̂||∞ ∈ L2 ∩ L∞ , (10.119b)
||u||, ||v||, ||û||, ||v̂|| → 0 (10.119c)
||u||∞ , ||v||∞ , ||û||∞ , ||v̂||∞ → 0. (10.119d)

This theorem is proved in Sect. 10.3.7, following some intermediate results.



10.3.6 Backstepping

Consider the backstepping transformation


w(x, t) = û(x, t)    (10.120a)
z(x, t) = v̂(x, t) − ∫₀ˣ K̂ᵘ(x, ξ, t)û(ξ, t)dξ − ∫₀ˣ K̂ᵛ(x, ξ, t)v̂(ξ, t)dξ = T[û, v̂](x, t)    (10.120b)

where ( K̂ u , K̂ v ) is the on-line solution to the PDE (10.38). Its inverse is in the form
û(x, t) = w(x, t) (10.121a)
−1
v̂(x, t) = T [w, z](x, t) (10.121b)

for an operator T −1 similar to T . Consider also the target system


w_t(x, t) + λ(x)w_x(x, t) = c₁(x)z(x, t) + ∫₀ˣ ω(x, ξ, t)w(ξ, t)dξ
  + ∫₀ˣ κ(x, ξ, t)z(ξ, t)dξ + Γ₁(x, t)α(1, t)    (10.122a)
z_t(x, t) − μ(x)z_x(x, t) = Ω(x, t)α(1, t) − ∫₀ˣ K̂ᵘ_t(x, ξ, t)w(ξ, t)dξ
  − ∫₀ˣ K̂ᵛ_t(x, ξ, t)T⁻¹[w, z](ξ, t)dξ    (10.122b)
w(0, t) = q̂(t)z(0, t)    (10.122c)
z(1, t) = 0    (10.122d)
w(x, 0) = w₀(x)    (10.122e)
z(x, 0) = z₀(x)    (10.122f)

for some initial conditions w0 , z 0 ∈ B([0, 1]), and where ω and κ are defined over
T1 given in (1.1b) and Ω is defined for x ∈ [0, 1], t ≥ 0.

Lemma 10.6 The backstepping transformation (10.120) maps observer (10.76) into
target system (10.122), where ω, κ are defined by (10.46) and satisfy the bounds
(10.47) for some constants κ̄, ω̄, while

Ω(x, t) = T [Γ1 , Γ2 ](x, t) (10.123)

and satisfies

||Ω(t)|| ≤ Ω̄, ∀t ≥ 0 (10.124)

for some constant Ω̄.



Proof From differentiating (10.120b) with respect to time and space, inserting the
dynamics (10.76a)–(10.76b), and integrating by parts, we find
v̂_t(x, t) = z_t(x, t) − K̂ᵘ(x, x, t)λ(x)û(x, t) + K̂ᵘ(x, 0, t)λ(0)q̂(t)v̂(0, t)
  + ∫₀ˣ K̂ᵘ_ξ(x, ξ, t)λ(ξ)û(ξ, t)dξ + ∫₀ˣ K̂ᵘ(x, ξ, t)λ′(ξ)û(ξ, t)dξ
  + ∫₀ˣ K̂ᵘ(x, ξ, t)c₁(ξ)v̂(ξ, t)dξ + ∫₀ˣ K̂ᵘ(x, ξ, t)Γ₁(ξ, t)α(1, t)dξ
  + K̂ᵛ(x, x, t)μ(x)v̂(x, t) − K̂ᵛ(x, 0, t)μ(0)v̂(0, t)
  − ∫₀ˣ K̂ᵛ_ξ(x, ξ, t)μ(ξ)v̂(ξ, t)dξ − ∫₀ˣ K̂ᵛ(x, ξ, t)μ′(ξ)v̂(ξ, t)dξ
  + ∫₀ˣ K̂ᵛ(x, ξ, t)c₂(ξ)û(ξ, t)dξ + ∫₀ˣ K̂ᵛ(x, ξ, t)Γ₂(ξ, t)α(1, t)dξ
  + ∫₀ˣ K̂ᵘ_t(x, ξ, t)û(ξ, t)dξ + ∫₀ˣ K̂ᵛ_t(x, ξ, t)v̂(ξ, t)dξ    (10.125)

and

v̂_x(x, t) = z_x(x, t) + K̂ᵘ(x, x, t)û(x, t) + K̂ᵛ(x, x, t)v̂(x, t)
  + ∫₀ˣ K̂ᵘ_x(x, ξ, t)û(ξ, t)dξ + ∫₀ˣ K̂ᵛ_x(x, ξ, t)v̂(ξ, t)dξ.    (10.126)

Inserting (10.125)–(10.126) into the dynamics (10.76b) and using Eq. (10.38) we
obtain
z_t(x, t) − μ(x)z_x(x, t) = Γ₂(x, t)α(1, t) − ∫₀ˣ K̂ᵘ(x, ξ, t)Γ₁(ξ, t)dξ α(1, t)
  − ∫₀ˣ K̂ᵛ(x, ξ, t)Γ₂(ξ, t)dξ α(1, t) − ∫₀ˣ K̂ᵘ_t(x, ξ, t)û(ξ, t)dξ
  − ∫₀ˣ K̂ᵛ_t(x, ξ, t)v̂(ξ, t)dξ    (10.127)
0

which can be written as (10.122b). Inserting (10.120) into (10.122a), changing the
order of integration in the double integral, we find

û_t(x, t) + λ(x)û_x(x, t) = c₁(x)v̂(x, t) + Γ₁(x, t)α(1, t)
  + ∫₀ˣ [ω(x, ξ, t) − c₁(x)K̂ᵘ(x, ξ, t) − ∫_ξ^x κ(x, s, t)K̂ᵘ(s, ξ, t)ds] û(ξ, t)dξ
  + ∫₀ˣ [κ(x, ξ, t) − c₁(x)K̂ᵛ(x, ξ, t) − ∫_ξ^x κ(x, s, t)K̂ᵛ(s, ξ, t)ds] v̂(ξ, t)dξ.    (10.128)

Using the Eqs. (10.46) yields the dynamics (10.76a), since α(1, t) = ũ(1, t). Substi-
tuting the backstepping transformation (10.120) into the boundary condition (10.76c)
immediately yields (10.122c). Lastly, inserting x = 1 into (10.120b) gives
z(1, t) = U(t) − ∫₀¹ K̂ᵘ(1, ξ, t)û(ξ, t)dξ − ∫₀¹ K̂ᵛ(1, ξ, t)v̂(ξ, t)dξ.    (10.129)

The control law (10.118) then yields the last boundary condition (10.122d). 

10.3.7 Proof of Theorem 10.4

Since β in the observer dynamics is zero in finite time, it suffices to consider the state
α satisfying the dynamics (10.94), which we restate here

αt (x, t) + λ(x)αx (x, t) = 0 (10.130a)


α(0, t) = q̃(t)v(0, t) (10.130b)
α(x, t2 ) = αt2 (x), (10.130c)

with v(0, t) given from (10.97), which can be written


v(0, t) = z(0, t) + ∫₀¹ Pᵝ(0, ξ, t)α(ξ, t)dξ    (10.131)

with P β being time-varying, bounded and satisfying (10.79). Consider the functions
V₁(t) = ∫₀¹ e^{−δx} λ⁻¹(x)α²(x, t)dx    (10.132a)
V₂(t) = ∫₀¹ e^{−δx} λ⁻¹(x)w²(x, t)dx    (10.132b)
V₃(t) = ∫₀¹ e^{kx} μ⁻¹(x)z²(x, t)dx    (10.132c)

for some positive constants k and δ to be decided.


The following result is proved in Appendix E.6.
Lemma 10.7 There exist positive constants h 1 , h 2 and nonnegative, integrable func-
tions l1 , l2 so that

V̇₁(t) ≤ −e^{−δ}α²(1, t) + q̃²(t)v²(0, t) − δλV₁(t)    (10.133a)
V̇₂(t) ≤ q̄²z²(0, t) − (δλ − h₁)V₂(t) + μ̄V₃(t) + α²(1, t)    (10.133b)
V̇₃(t) ≤ −z²(0, t) − (kμ − h₂)V₃(t) + e^k α²(1, t) + l₁(t)V₂(t) + l₂(t)V₃(t).    (10.133c)

Now forming

V4 (t) = a1 V1 (t) + V2 (t) + a3 V3 (t) (10.134)

for some positive constants a1 and a3 , we obtain using Lemma 10.7


   
V̇₄(t) ≤ −(a₁e^{−δ} − 1 − a₃e^k)α²(1, t) − (a₃ − q̄²)z²(0, t) + a₁q̃²(t)v²(0, t)
  − a₁δλV₁(t) − (δλ − h₁)V₂(t) − (a₃kμ − a₃h₂ − μ̄)V₃(t)
  + a₃l₁(t)V₂(t) + a₃l₂(t)V₃(t).    (10.135)

Choosing

a₃ = q̄² + 1,  a₁ > e^δ(1 + a₃e^k),  δ > h₁/λ,  k > (a₃h₂ + μ̄)/(a₃μ)    (10.136)

we obtain

V̇4 (t) ≤ −z 2 (0, t) − cV4 (t) + l3 (t)V4 (t) + a1 q̃ 2 (t)v 2 (0, t) (10.137)

for some positive constant c and integrable function l3 . Inequality (10.137) can be
written as

V̇4 (t) ≤ −z 2 (0, t) − cV4 (t) + l3 (t)V4 (t) + a1 σ 2 (t)(1 + v 2 (0, t)) (10.138)

where

σ²(t) = q̃²(t)v²(0, t)/(1 + v²(0, t))    (10.139)

is an integrable function (Theorem 10.3). From (10.131), we have

v 2 (0, t) ≤ 2z 2 (0, t) + 2( P̄ β )2 eδ λ̄V1 (t) (10.140)

where P̄ β bounds the kernel P β , and inserting this into (10.138), we obtain
 
V̇4 (t) ≤ −cV4 (t) + l4 (t)V4 (t) + l5 (t) − 1 − 2a1 σ 2 (t) z 2 (0, t) (10.141)

where

l4 (t) = l3 (t) + 2σ 2 (t)( P̄ β )2 eδ λ̄, l5 (t) = a1 σ 2 (t) (10.142)



are integrable functions. Moreover, from (10.109) and (10.112), we have that

V(t) = (1/(2γ)) q̃²(t)    (10.143)

satisfies

V̇ (t) ≤ −e−2γt1 σ 2 (t − t1 ). (10.144)

Define
V₅(t) = V(t + t₁) = (1/(2γ)) q̃²(t + t₁)    (10.145)

which means that

V̇5 (t) ≤ −e−2γt1 σ 2 (t). (10.146)

And since |q̃| is decaying with a rate that is at most exponential with a rate γ, we
have

σ²(t) = q̃²(t)v²(0, t)/(1 + v²(0, t)) < q̃²(t) ≤ e^{2γt₁} q̃²(t + t₁) = 2γ e^{2γt₁} V₅(t).    (10.147)

It then follows from Lemma B.4 in Appendix B that

V4 ∈ L1 ∩ L∞ , (10.148)

and hence

||α||, ||w||, ||z|| ∈ L2 ∩ L∞ . (10.149)

This, in turn, implies that z(0, t) is bounded for almost all t ≥ 0, meaning that

σ 2 z 2 (0, ·) ∈ L1 (10.150)

since σ 2 ∈ L1 , and (10.141) can be written as

V̇4 (t) ≤ −cV4 (t) + l4 (t)V4 (t) + l6 (t) (10.151)

for a nonnegative, integrable function

l6 (t) = l5 (t) + 2a1 σ 2 (t)z 2 (0, t). (10.152)



Lemma B.3 in Appendix B now gives

V4 → 0 (10.153)

and hence

||α||, ||w||, ||z|| → 0. (10.154)

The remaining properties can be proved using the same techniques as in the proof of
Theorem 10.2. 

10.4 Simulations

10.4.1 Anti-collocated Sensing and Control

10.4.1.1 Adaptive Observer

System (10.1) and the observer of Theorem 10.1 are implemented using the param-
eters

λ(x) = 2 + x cos(πx),  μ(x) = 3 − 2x    (10.155a)
c₁ ≡ 1,  c₂(x) = 2 − x,  q = −1    (10.155b)

and initial condition

u 0 ≡ 0, v0 ≡ 1. (10.156)

All additional initial conditions are set to zero. The parameters constitute a stable
system. To excite the system, the actuation signal is chosen as

U (t) = 1 + sin(πt), (10.157)

while the design gains are set to

γ = 10, q̄ = 10. (10.158)

The observer kernel equations are solved using the method described in Appendix F.2.
It is observed from Fig. 10.1 that the system norms stay bounded, while the actu-
ation signal excites the system. From Fig. 10.2, the estimated q̂ converges to its real
value after approximately 2 s, with the adaptive estimation errors converging to zero
as well in the same amount of time.


Fig. 10.1 Left: State norm. Right: Actuation signal


Fig. 10.2 Left: Adaptive estimation error norm. Right: Actual value of q (solid black) and estimated
value q̂ (dashed red)

10.4.1.2 Closed Loop Adaptive Control

System (10.1) and the controller of Theorem 10.2 are implemented using the same
system parameters and initial conditions as above, except that we set

q = 2, (10.159)

constituting an unstable system. The adaptation gain is set as

γ = 1, (10.160)

and the controller kernel equations are solved using the method described in
Appendix F.2. The simulation results are shown in Figs. 10.3–10.4. It is observed
that the adaptive controller successfully stabilizes the system, and makes the system


Fig. 10.3 Left: State norm. Right: Actuation signal




Fig. 10.4 Left: Adaptive estimation error norm. Right: Actual value of q (solid black) and estimated
value q̂ (dashed red)

norm and actuation signal converge to zero, even though the value of q is not estimated correctly. Parameter convergence is not guaranteed by the control law, and does not happen in this case.

10.4.2 Collocated Sensing and Control

10.4.2.1 Adaptive Observer

System (10.1) and the observer of Theorem 10.3 are implemented using the system
parameters

λ(x) = 1 + x,  μ(x) = 2 − x    (10.161a)
c₁(x) = 3x + 4,  c₂(x) = ½(1 + x),  q = −1    (10.161b)
and initial condition

u 0 ≡ 1, v0 (x) = sin(x). (10.162)

All additional initial conditions are set to zero. The actuation signal is chosen as

U(t) = 1 + sin(½t) + sin(√2 t) + sin(πt)    (10.163)

in order to excite the system, while the design gains are set to

γ = 1, q̄ = 10. (10.164)

The observer kernel equations are implemented using the method described in
Appendix F.2.


Fig. 10.5 Left: State norm. Right: Actuation signal


Fig. 10.6 Left: Adaptive estimation error norm. Right: Actual value of q (solid black) and estimated
value q̂ (dashed red)


Fig. 10.7 Left: State norm. Right: Actuation signal

Again the system norm and actuation signal are bounded, as seen from Fig. 10.5,
while the estimate q̂ converges to its real value q, and the observer error norm
converges to zero, as seen from Fig. 10.6.

10.4.2.2 Closed Loop Adaptive Control

The controller of Theorem 10.4 is now implemented on the system (10.3), using the
same system parameters and initial conditions as in the previous simulation, except
that

q = 2. (10.165)

This now constitutes an unstable system. The controller kernel equations are solved
on-line using the spatial discretization method described in Appendix F.2.


Fig. 10.8 Left: Adaptive estimation error norm. Right: Actual value of q (solid black) and estimated
value q̂ (dashed red)

From Figs. 10.7 and 10.8, it is seen that the estimated parameter q̂ stagnates, but
does not converge to its true value q. However, the system state norm, state estimation
error norm and actuation converge to zero after approximately five seconds.

10.5 Notes

The second observer design, using sensing collocated with actuation, employs time-varying injection gains, given as the solution to a set of time-varying kernel PDEs. This significantly complicates both design and implementation. The method described in Appendix F.2 for on-line implementation of the kernel equations was developed specifically for the observer kernels (10.79) in Anfinsen and Aamo (2017b). The result in Anfinsen and Aamo (2017b) is also the first use of time-varying kernels and time-varying injection gains in the design of adaptive observers for linear hyperbolic PDEs, clearly illustrating the complexity that arises from just a single uncertain boundary parameter when sensing is taken at the boundary anti-collocated with that parameter.
We will proceed in the next chapter to assume that the in-domain coefficients are
uncertain, and seek to adaptively stabilize the system from boundary sensing only.

References

Anfinsen H, Aamo OM (2016) Boundary parameter and state estimation in 2 × 2 linear hyperbolic
PDEs using adaptive backstepping. In: 55th IEEE conference on decision and control, Las Vegas,
NV, USA
Anfinsen H, Aamo OM (2017a) Adaptive stabilization of n + 1 coupled linear hyperbolic systems
with uncertain boundary parameters using boundary sensing. Syst Control Lett 99:72–84
Anfinsen H, Aamo OM (2017b) Adaptive stabilization of 2 × 2 linear hyperbolic systems with an
unknown boundary parameter from collocated sensing and control. IEEE Trans Autom Control
62(12):6237–6249
Ioannou P, Sun J (1995) Robust adaptive control. Prentice-Hall Inc., Upper Saddle River
Chapter 11
Adaptive Output-Feedback: Uncertain
In-Domain Parameters

11.1 Introduction

We once again consider system (7.4) with measurement restricted to the boundary
anti-collocated with actuation, that is

u t (x, t) + λ(x)u x (x, t) = c1 (x)v(x, t) (11.1a)


vt (x, t) − μ(x)vx (x, t) = c2 (x)u(x, t) (11.1b)
u(0, t) = qv(0, t) (11.1c)
v(1, t) = U (t) (11.1d)
u(x, 0) = u 0 (x) (11.1e)
v(x, 0) = v0 (x) (11.1f)
y0 (t) = v(0, t), (11.1g)

for

λ, μ ∈ C 1 ([0, 1]), λ(x), μ(x) > 0, ∀x ∈ [0, 1] (11.2a)


c1 , c2 ∈ C ([0, 1]), q ∈ R,
0
(11.2b)

with

u 0 , v0 ∈ B([0, 1]). (11.3)

In Sect. 9.3, we assumed that the boundary parameter q and the in-domain parameters
c1 and c2 were uncertain and derived a state-feedback control law U that adaptively
stabilized the system, assuming distributed state measurements were available. In
Chap. 10, we restricted the sensing to be taken at the boundaries, and derived both state
observers and control laws for both the collocated and anti-collocated case, assuming

© Springer Nature Switzerland AG 2019


H. Anfinsen and O. M. Aamo, Adaptive Control of Hyperbolic PDEs,
Communications and Control Engineering,
https://doi.org/10.1007/978-3-030-05879-1_11

the boundary parameter q was uncertain, but assuming all in-domain parameters were
known. We will in Sect. 11.2 relax this assumption and assume that the in-domain
cross terms as well as the boundary parameter q are uncertain, and design an adaptive
control law that stabilizes the system from a single boundary sensing anti-collocated
with actuation. This solution was initially presented in Anfinsen and Aamo (2017),
and is based on the filter-based method proposed for hyperbolic 1–D systems in
Bernard and Krstić (2014). Note that the actual parameter values and system states
are not estimated directly, so that the proposed design is not suited for pure estimation
purposes.
In Chap. 12, we use the same filter-based technique to solve a more general prob-
lem, but we state here the solution to the problem of adaptively stabilizing the simpler
system (11.1) for illustrative purposes.

11.2 Anti-collocated Sensing and Control

11.2.1 Mapping to Observer Canonical Form

Through a series of transformations (Lemmas 11.1–11.4), we will show that system


(11.1) can be transformed to a modified version of what in Bernard and Krstić (2014)
was referred to as “observer canonical form”.

11.2.1.1 Decoupling by Backstepping

First, we decouple the system states u and v by transforming system (11.1) into the following system, which has a cascade structure in the domain

α̌_t(x, t) + λ(x)α̌_x(x, t) = L₁(x)β̌(0, t)    (11.4a)
β̌_t(x, t) − μ(x)β̌_x(x, t) = 0    (11.4b)
α̌(0, t) = qβ̌(0, t)    (11.4c)
β̌(1, t) = U(t) − ∫₀¹ L₂(ξ)α̌(ξ, t)dξ − ∫₀¹ L₃(ξ)β̌(ξ, t)dξ    (11.4d)
α̌(x, 0) = α̌₀(x)    (11.4e)
β̌(x, 0) = β̌₀(x)    (11.4f)
y₀(t) = β̌(0, t)    (11.4g)

for some functions L 1 , L 2 , L 3 , and with initial conditions α̌0 , β̌0 ∈ B([0, 1]).

Lemma 11.1 System (11.1) is through an invertible backstepping transformation


equivalent to system (11.4), where L 1 , L 2 , L 3 are (continuous) functions of the
unknown parameters λ, μ, c1 , c2 , q.

Proof This is proved as part of the proof of Theorem 8.1. 

11.2.1.2 Mapping to Constant Transport Speeds

Consider the following constant transport-speeds system


α_t(x, t) + λ̄α_x(x, t) = σ₁(x)β(0, t)    (11.5a)
β_t(x, t) − μ̄β_x(x, t) = 0    (11.5b)
α(0, t) = qβ(0, t)    (11.5c)
β(1, t) = U(t) + ∫₀¹ σ₂(ξ)α(ξ, t)dξ + ∫₀¹ σ₃(ξ)β(ξ, t)dξ    (11.5d)
α(x, 0) = α₀(x)    (11.5e)
β(x, 0) = β₀(x)    (11.5f)
y₀(t) = β(0, t)    (11.5g)

for some functions σ1 , σ2 , σ3 , and parameters λ̄, μ̄ ∈ R, λ̄, μ̄ > 0, with initial con-
ditions α0 , β0 ∈ B([0, 1]).
Lemma 11.2 The invertible mapping
α(x, t) = α̌(h_α⁻¹(x), t),  β(x, t) = β̌(h_β⁻¹(x), t)    (11.6)

where

h_α(x) = (1/t₁) ∫₀ˣ dγ/λ(γ),  h_β(x) = (1/t₂) ∫₀ˣ dγ/μ(γ)    (11.7)

transforms (11.4) into (11.5), where

λ̄ = t1−1 , μ̄ = t2−1 (11.8)

are known parameters, and

σ₁(x) = L₁(h_α⁻¹(x))    (11.9a)
σ₂(x) = −t₁λ(h_α⁻¹(x))L₂(h_α⁻¹(x))    (11.9b)
σ₃(x) = −t₂μ(h_β⁻¹(x))L₃(h_β⁻¹(x))    (11.9c)

are unknown parameters, with t1 and t2 defined in (8.8).



Proof We note that the functions (11.7) are strictly increasing and thus invertible, so the invertibility of the transformation (11.6) follows. The rest of the proof follows immediately from insertion and noting that

h′_α(x) = 1/(t₁λ(x)),  h′_β(x) = 1/(t₂μ(x))    (11.10a)
h_α(0) = h_β(0) = 0,  h_α(1) = h_β(1) = 1.    (11.10b)
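Numerically, h_α and its inverse are easy to tabulate, since h_α is strictly increasing. A sketch using cumulative trapezoidal integration and linear interpolation (the function name and resolution are illustrative, not from the text):

```python
import numpy as np

def space_maps(lam, t1, n=1000):
    """Tabulate h_alpha from (11.7) and its inverse on a uniform grid.
    Since h_alpha is strictly increasing, the inverse is obtained by
    swapping the roles of x and h_alpha(x) in the interpolation table."""
    x = np.linspace(0.0, 1.0, n + 1)
    integrand = 1.0 / lam(x)
    h = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x)))) / t1
    h_alpha = lambda s: np.interp(s, x, h)
    h_alpha_inv = lambda s: np.interp(s, h, x)
    return h_alpha, h_alpha_inv
```

By construction h_α(0) = 0 and, with t₁ = ∫₀¹ dγ/λ(γ), h_α(1) = 1, matching (11.10b).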

11.2.1.3 Removing a Boundary Source Term

Consider the system

α_t(x, t) + λ̄α_x(x, t) = σ₁(x)z(0, t)    (11.11a)
z_t(x, t) − μ̄z_x(x, t) = μ̄θ(x)z(0, t)    (11.11b)
α(0, t) = qz(0, t)    (11.11c)
z(1, t) = U(t) + ∫₀¹ σ₂(ξ)α(ξ, t)dξ    (11.11d)
α(x, 0) = α₀(x)    (11.11e)
z(x, 0) = z₀(x)    (11.11f)
y₀(t) = z(0, t)    (11.11g)

for some function θ, and initial condition z 0 ∈ B([0, 1]).


Lemma 11.3 The invertible backstepping transformation
z(x, t) = β(x, t) − ∫₀ˣ σ₃(1 − x + ξ)β(ξ, t)dξ    (11.12)

maps system (11.5) into (11.11), with

θ(x) = σ3 (1 − x). (11.13)

Proof Differentiating equation (11.12) with respect to time, inserting the dynamics
(11.5b) and integrating by parts, we obtain

β_t(x, t) = z_t(x, t) + μ̄σ₃(1)β(x, t) − μ̄σ₃(1 − x)β(0, t)
  − μ̄ ∫₀ˣ σ₃′(1 − x + ξ)β(ξ, t)dξ.    (11.14)
0

Similarly, differentiating (11.12) with respect to space, we find

β_x(x, t) = z_x(x, t) + σ₃(1)β(x, t) − ∫₀ˣ σ₃′(1 − x + ξ)β(ξ, t)dξ.    (11.15)

Inserting (11.14) and (11.15) into (11.5b) and using β(0, t) = z(0, t), we obtain the
dynamics (11.11b). Inserting x = 1 and (11.5d) into (11.12) gives (11.11d). 

11.2.1.4 Mapping to Observable Canonical Form

Consider the system

w_t(x, t) + λ̄w_x(x, t) = 0    (11.16a)
z_t(x, t) − μ̄z_x(x, t) = μ̄θ(x)z(0, t)    (11.16b)
w(0, t) = z(0, t)    (11.16c)
z(1, t) = U(t) + ∫₀¹ κ(ξ)w(ξ, t)dξ + ε(t)    (11.16d)
w(x, 0) = w₀(x)    (11.16e)
z(x, 0) = z₀(x)    (11.16f)
y₀(t) = z(0, t)    (11.16g)

where we have introduced a filter w, which is a pure transport equation of the signal
z(0, t). Here, ε is a signal defined for t ≥ 0, w0 ∈ B([0, 1]) and
κ(x) = qσ₂(x) + λ̄⁻¹ ∫ₓ¹ σ₂(ξ)σ₁(ξ − x)dξ.    (11.17)

Lemma 11.4 Consider systems (11.11) and (11.16). The signal ε(t), which is char-
acterized in the proof, is zero for t ≥ t1 . Moreover, stabilization of (11.16) implies
stabilization of (11.11). More precisely,

||α(t)|| ≤ c||w(t)||, ||α(t)||∞ ≤ c||w(t)||∞ (11.18)

for t ≥ t1 and some constant c.


Proof A non-adaptive estimate of α in (11.11) can be generated using w as follows
ᾱ(x, t) = qw(x, t) + λ̄⁻¹ ∫₀ˣ σ₁(x − ξ)w(ξ, t)dξ.    (11.19)

It can straightforwardly be verified that the non-adaptive estimation error

e(x, t) = α(x, t) − ᾱ(x, t) (11.20)



satisfies the dynamics

et (x, t) + λ̄ex (x, t) = 0, e(0, t) = 0, e(x, 0) = e0 (x) (11.21)

which is trivially zero for t ≥ t1 . This means that


α(x, t) = qw(x, t) + λ̄⁻¹ ∫₀ˣ σ₁(x − ξ)w(ξ, t)dξ + e(x, t)    (11.22)

where e ≡ 0 for t ≥ t1 , which provides (11.18). Inserting this into (11.11d), we obtain
z(1, t) = U(t) + ∫₀¹ σ₂(ξ)q w(ξ, t)dξ + ∫₀¹ σ₂(ξ)λ̄⁻¹ ∫₀^ξ σ₁(ξ − s)w(s, t)ds dξ
  + ∫₀¹ σ₂(ξ)e(ξ, t)dξ.    (11.23)

Changing the order of integration in the double integral yields


 1   1 
z(1, t) = U (t) + qσ2 (ξ) + λ̄−1 σ2 (s)σ1 (s − ξ)ds w(ξ, t)dξ
0 ξ
 1
+ σ2 (ξ)e(ξ, t)dξ (11.24)
0

which gives (11.16d) with κ defined in (11.17), and ε given as


 1
ε(t) = σ2 (ξ)e(ξ, t)dξ (11.25)
0

which is zero for t ≥ t1 since e is zero. 
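On a uniform grid the estimate (11.19) is a discrete convolution, and can be sketched with the trapezoidal rule (the function name and sampling are illustrative, not from the text):

```python
import numpy as np

def alpha_bar(q, lam_bar, sigma1, w, h):
    """Non-adaptive estimate (11.19) on a grid of spacing h:
    alpha_bar(x_i) = q*w(x_i)
      + (1/lam_bar) * int_0^{x_i} sigma1(x_i - xi) w(xi) dxi.
    sigma1, w: numpy arrays of samples on the same grid."""
    out = q * np.asarray(w, dtype=float).copy()
    for i in range(1, len(w)):
        vals = sigma1[i::-1] * w[:i + 1]   # sigma1(x_i - xi) * w(xi)
        out[i] += (h / lam_bar) * (0.5 * vals[0]
                                   + vals[1:-1].sum() + 0.5 * vals[-1])
    return out
```

This reproduces α exactly once the transport error e has died out, i.e. for t ≥ t₁.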


Systems (11.11) and (11.16) are not equivalent in the sense that the systems can
be mapped using an invertible transformation, but in the sense that stabilizing the
latter also stabilizes the former, which is evident from the relationship (11.19), with
e = α − ᾱ ≡ 0 in finite time. The reverse implication, however, is not necessarily
true (e.g. if q = 0, σ1 ≡ 0).
For κ ≡ 0 and μ̄ = 1, system (11.16) reduces to the system which in Bernard and
Krstić (2014) is referred to as observer canonical form.

11.2.2 Parametrization by Filters

We have thus shown that stabilizing system (11.1) is achieved by stabilizing system
(11.16). In deriving an adaptive control law for (11.16), we will use the filter-based
design presented in Bernard and Krstić (2014). However, as we will see, the additional

term κ somewhat complicates the control design. We introduce the following filters

ψ_t(x, t) − μ̄ψ_x(x, t) = 0,  ψ(1, t) = U(t),  ψ(x, 0) = ψ₀(x)    (11.26a)
φ_t(x, t) − μ̄φ_x(x, t) = 0,  φ(1, t) = y₀(t),  φ(x, 0) = φ₀(x)    (11.26b)
P_t(x, ξ, t) − μ̄P_x(x, ξ, t) = 0,  P(1, ξ, t) = w(ξ, t),  P(x, ξ, 0) = P₀(x, ξ)    (11.26c)

for some initial conditions satisfying

ψ₀, φ₀ ∈ B([0, 1]),  P₀ ∈ B([0, 1]²).    (11.27)

Note that w(ξ, t) used in the boundary condition for filter P is itself a filter and hence
known. Define also

p0 (x, t) = P(0, x, t). (11.28)
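Each filter in (11.26) is a pure transport equation driven at x = 1, so its state can be advanced with a one-sided upwind step. A minimal sketch (names, grid, and time step are illustrative; the CFL condition μ̄Δt/Δx ≤ 1 is assumed):

```python
import numpy as np

def filter_step(psi, boundary_value, mu_bar, dx, dt):
    """One explicit upwind step of psi_t - mu_bar*psi_x = 0 with
    psi(1, t) = boundary_value. Information travels from x = 1
    toward x = 0, so the difference stencil looks to the right."""
    c = mu_bar * dt / dx
    assert c <= 1.0, "CFL condition violated"
    new = psi.copy()
    new[:-1] = psi[:-1] + c * (psi[1:] - psi[:-1])
    new[-1] = boundary_value
    return new
```

The same routine updates ψ, φ, and every column P(·, ξ, ·), with U(t), y₀(t), or w(ξ, t) as the respective boundary value.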

Then a non-adaptive estimate of the variable z can be generated from


z̄(x, t) = ψ(x, t) + ∫ₓ¹ θ(ξ)φ(1 − (ξ − x), t)dξ + ∫₀¹ κ(ξ)P(x, ξ, t)dξ.    (11.29)

Lemma 11.5 Consider system (11.16) and the non-adaptive estimate (11.29) gen-
erated using filters (11.26). Then,

z̄ ≡ z (11.30)

for t ≥ t1 + t2 .

Proof Consider the non-adaptive estimation error defined as

ϵ(x, t) = z(x, t) − z̄(x, t).    (11.31)

We will show that ϵ in (11.31) satisfies the dynamics

ϵ_t(x, t) − μ̄ϵ_x(x, t) = 0,  ϵ(1, t) = ε(t),  ϵ(x, 0) = ϵ₀(x).    (11.32)

Since ε(t) = 0 for t ≥ t₁, ϵ ≡ 0 for t ≥ t₁ + t₂ = t_F. We evaluate z̄_t and z̄_x from the definition of z̄ in (11.29), using the dynamics of the filters (11.26):
z̄_t(x, t) = ψ_t(x, t) + ∫ₓ¹ θ(ξ)φ_t(1 − (ξ − x), t)dξ + ∫₀¹ κ(ξ)P_t(x, ξ, t)dξ
  = μ̄ψ_x(x, t) + μ̄ ∫ₓ¹ θ(ξ)φ_x(1 − (ξ − x), t)dξ + μ̄ ∫₀¹ κ(ξ)P_x(x, ξ, t)dξ    (11.33)

and
z̄_x(x, t) = ψ_x(x, t) − θ(x)φ(1, t) + ∫ₓ¹ θ(ξ)φ_x(1 − (ξ − x), t)dξ + ∫₀¹ κ(ξ)P_x(x, ξ, t)dξ,    (11.34)

which, when using the definition of ϵ in (11.31), the dynamics (11.16b) and the boundary condition (11.26b), gives the dynamics (11.32). Substituting x = 1 into ϵ in (11.31), using the definition of z̄ in (11.29), and inserting the boundary condition
(11.16d) give
ϵ(1, t) = z(1, t) − z̄(1, t) = U(t) + ∫₀¹ κ(ξ)w(ξ, t)dξ + ε(t) − ψ(1, t) − ∫₀¹ κ(ξ)P(1, ξ, t)dξ.    (11.35)

Using the boundary conditions (11.26a) and (11.26c), we obtain the boundary condition (11.32).

11.2.3 Adaptive Law and State Estimation

We start by assuming the following.


Assumption 11.1 Bounds on θ and κ are known. That is, we are in knowledge of
some positive constants θ̄ and κ̄ so that

||θ||∞ ≤ θ̄, ||κ||∞ ≤ κ̄. (11.36)

This assumption should not be a limitation, since the bounds θ̄ and κ̄ can be made
arbitrarily large. Now, motivated by the parametrization (11.29), we generate an
estimate of z from
ẑ(x, t) = ψ(x, t) + ∫ₓ¹ θ̂(ξ, t)φ(1 − (ξ − x), t)dξ + ∫₀¹ κ̂(ξ, t)P(x, ξ, t)dξ    (11.37)

where θ̂ and κ̂ are estimates of θ and κ, respectively. The dynamics of (11.37) can
straightforwardly be found to satisfy
ẑ_t(x, t) − μ̄ẑ_x(x, t) = μ̄θ̂(x, t)z(0, t) + ∫ₓ¹ θ̂_t(ξ, t)φ(1 − (ξ − x), t)dξ
  + ∫₀¹ κ̂_t(ξ, t)P(x, ξ, t)dξ    (11.38a)
ẑ(1, t) = U(t) + ∫₀¹ κ̂(ξ, t)w(ξ, t)dξ    (11.38b)
ẑ(x, 0) = ẑ₀(x)    (11.38c)

for some initial condition ẑ 0 ∈ B([0, 1]). The corresponding prediction error is
defined as

ϵ̂(x, t) = z(x, t) − ẑ(x, t).    (11.39)

From the parametric model (11.29) and corresponding error (11.31), we also have
y₀(t) = ψ(0, t) + ∫₀¹ θ(ξ)φ(1 − ξ, t)dξ + ∫₀¹ κ(ξ)p₀(ξ, t)dξ + ϵ(0, t),    (11.40)

with ϵ(0, t) = 0 for t ≥ t_F. From (11.40), we propose the adaptive laws


 
θ̂_t(x, t) = proj_θ̄ {γ₁(x) ϵ̂(0, t)φ(1 − x, t)/(1 + f²(t)), θ̂(x, t)},  θ̂(x, 0) = θ̂₀(x)    (11.41a)
κ̂_t(x, t) = proj_κ̄ {γ₂(x) ϵ̂(0, t)p₀(x, t)/(1 + f²(t)), κ̂(x, t)},  κ̂(x, 0) = κ̂₀(x)    (11.41b)

where

f 2 (t) = ||φ(t)||2 + || p0 (t)||2 (11.42)

for p0 defined in (11.28), and where γ1 (x), γ2 (x) > 0 for all x ∈ [0, 1] are some
bounded design gains. The initial guesses are chosen inside the feasible domain

||θ̂0 ||∞ ≤ θ̄, ||κ̂0 ||∞ ≤ κ̄, (11.43)

and the projection operator is defined in Appendix A.
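One Euler step of (11.41a) on a spatial grid can be sketched as follows. For brevity, the smooth projection operator of Appendix A is replaced by a simple clipping onto |θ̂| ≤ θ̄ and γ₁ is taken constant; the names and these simplifications are ours, not the text's:

```python
import numpy as np

def theta_hat_step(theta_hat, eps0_hat, phi_rev, f2, gamma1, theta_bar, dt):
    """Euler step of the normalized gradient law (11.41a).
    phi_rev: samples of phi(1 - x, t); eps0_hat: prediction error at x = 0;
    f2: normalization signal (11.42). Clipping stands in for the smooth
    projection operator."""
    grad = gamma1 * eps0_hat * phi_rev / (1.0 + f2)
    return np.clip(theta_hat + dt * grad, -theta_bar, theta_bar)
```

A step that would leave the feasible set |θ̂| ≤ θ̄ is truncated at the bound, mirroring property (11.44a).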



Lemma 11.6 The adaptive law (11.41) with initial condition satisfying (11.43) has
the following properties

||θ̂(t)||∞ ≤ θ̄,  ||κ̂(t)||∞ ≤ κ̄,  ∀t ≥ 0    (11.44a)
||θ̂_t||, ||κ̂_t|| ∈ L∞ ∩ L₂    (11.44b)
σ ∈ L∞ ∩ L₂    (11.44c)

where θ̃ = θ − θ̂, κ̃ = κ − κ̂, and

σ(t) = ϵ̂(0, t)/√(1 + f²(t)).    (11.45)

Proof The property (11.44a) follows from the projection operator. Consider the Lya-
punov function candidate
V(t) = a₁λ̄⁻¹ ∫₀¹ (2 − x)e²(x, t)dx + μ̄⁻¹ ∫₀¹ (1 + x)ϵ²(x, t)dx
  + ½ ∫₀¹ γ₁⁻¹(x)θ̃²(x, t)dx + ½ ∫₀¹ γ₂⁻¹(x)κ̃²(x, t)dx    (11.46)

for some positive constant a1 . Differentiating with respect to time, we find


V̇(t) = 2a₁λ̄⁻¹ ∫₀¹ (2 − x)e(x, t)e_t(x, t)dx + 2μ̄⁻¹ ∫₀¹ (1 + x)ϵ(x, t)ϵ_t(x, t)dx
  − ∫₀¹ γ₁⁻¹(x)θ̃(x, t) proj_θ̄ {γ₁(x) ϵ̂(0, t)φ(1 − x, t)/(1 + f²(t)), θ̂(x, t)} dx
  − ∫₀¹ γ₂⁻¹(x)κ̃(x, t) proj_κ̄ {γ₂(x) ϵ̂(0, t)p₀(x, t)/(1 + f²(t)), κ̂(x, t)} dx.    (11.47)

Using property (A.5b) of Lemma A.1, inserting the dynamics (11.21) and (11.32)
and integrating by parts give

V̇(t) ≤ −a₁e²(1, t) + 2a₁e²(0, t) − a₁||e(t)||² + 2ϵ²(1, t) − ϵ²(0, t) − ||ϵ(t)||²
  − ∫₀¹ θ̃(x, t) ϵ̂(0, t)φ(1 − x, t)/(1 + f²(t)) dx
  − ∫₀¹ κ̃(x, t) ϵ̂(0, t)p₀(x, t)/(1 + f²(t)) dx.    (11.48)

Inserting the boundary conditions (11.21) and (11.32) and using Cauchy–Schwarz’
inequality, we find
V̇(t) ≤ −a₁e²(1, t) − a₁||e(t)||² + 2||σ₂||²||e(t)||² − ϵ²(0, t) − ||ϵ(t)||²
  − ∫₀¹ θ̃(x, t) ϵ̂(0, t)φ(1 − x, t)/(1 + f²(t)) dx
  − ∫₀¹ κ̃(x, t) ϵ̂(0, t)p₀(x, t)/(1 + f²(t)) dx.    (11.49)

Choosing a1 = 2||σ2 ||2 yields


V̇(t) ≤ −ϵ²(0, t) − ∫₀¹ θ̃(x, t) ϵ̂(0, t)φ(1 − x, t)/(1 + f²(t)) dx
  − ∫₀¹ κ̃(x, t) ϵ̂(0, t)p₀(x, t)/(1 + f²(t)) dx.    (11.50)

We note that
ϵ̂(0, t) = ϵ(0, t) + ∫₀¹ θ̃(ξ, t)φ(1 − ξ, t)dξ + ∫₀¹ κ̃(ξ, t)p₀(ξ, t)dξ,    (11.51)

and inserting this into (11.50), we obtain using Young’s inequality

V̇(t) ≤ −ϵ²(0, t) − ϵ̂²(0, t)/(1 + f²(t)) + ϵ̂(0, t)ϵ(0, t)/(1 + f²(t))
  ≤ −ϵ²(0, t) − ϵ̂²(0, t)/(1 + f²(t)) + ½ ϵ̂²(0, t)/(1 + f²(t)) + ½ ϵ²(0, t)/(1 + f²(t))
  ≤ −½ σ²(t)    (11.52)
for σ defined in (11.45). This proves that V is bounded and nonincreasing, and hence
has a limit as t → ∞. Integrating (11.52) from zero to infinity gives σ ∈ L2 . From
(11.51) and the triangle and Cauchy–Schwarz’ inequalities we have

σ(t) = ϵ̂(0, t)/√(1 + f²(t)) ≤ (||θ̃(t)|| ||φ(t)|| + ||κ̃(t)|| ||p₀(t)|| + |ϵ(0, t)|)/√(1 + f²(t))
  ≤ ||θ̃(t)|| + ||κ̃(t)|| + |ϵ(0, t)|/√(1 + f²(t)).    (11.53)

The latter term is zero for t ≥ t F , and hence σ ∈ L∞ follows. From the adaptation
laws (11.41), we have

\[
\|\hat\theta_t(t)\| \le \|\gamma_1\|\,\frac{|\hat\epsilon(0,t)|}{\sqrt{1+f^2(t)}}\,\frac{\|\phi(t)\|}{\sqrt{1+f^2(t)}} \le \|\gamma_1\|\,|\sigma(t)| \tag{11.54a}
\]
218 11 Adaptive Output-Feedback: Uncertain In-Domain Parameters

\[
\|\hat\kappa_t(t)\| \le \|\gamma_2\|\,\frac{|\hat\epsilon(0,t)|}{\sqrt{1+f^2(t)}}\,\frac{\|p_0(t)\|}{\sqrt{1+f^2(t)}} \le \|\gamma_2\|\,|\sigma(t)| \tag{11.54b}
\]
which, along with (11.44c), gives (11.44b). □

11.2.4 Closed Loop Adaptive Control

Consider the control law


\[
U(t) = \int_0^1 \hat g(1-\xi,t)\,\hat z(\xi,t)\,d\xi - \int_0^1 \hat\kappa(\xi,t)\,w(\xi,t)\,d\xi, \tag{11.55}
\]

where ẑ is generated using (11.37), and ĝ is the on-line solution to the Volterra integral equation
\[
\hat g(x,t) = \int_0^x \hat g(x-\xi,t)\,\hat\theta(\xi,t)\,d\xi - \hat\theta(x,t), \tag{11.56}
\]

with θ̂ and κ̂ generated from the adaptive laws (11.41).
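Equation (11.56) is a Volterra equation of the second kind, so successive substitution converges and it can be solved on a grid at every time step. The following minimal sketch (grid size, quadrature rule, and function names are our own assumptions, not the book's implementation) illustrates the idea:

```python
import numpy as np

def solve_volterra(theta_hat, iters=50):
    """Fixed-point iteration for g(x) = int_0^x g(x-s) theta(s) ds - theta(x)
    on a uniform grid over [0, 1]; theta_hat holds samples of theta at the nodes."""
    N = theta_hat.size
    x = np.linspace(0.0, 1.0, N)
    h = x[1] - x[0]
    g = -theta_hat.copy()                       # zeroth Neumann-series iterate
    for _ in range(iters):
        g_new = np.empty(N)
        for i in range(N):
            # left-rectangle quadrature of int_0^{x_i} g(x_i - s) theta(s) ds
            conv = h * np.sum(g[i:0:-1] * theta_hat[:i]) if i > 0 else 0.0
            g_new[i] = conv - theta_hat[i]
        g = g_new
    return x, g

x_grid, g = solve_volterra(np.ones(101))        # constant theta = 1
```

For constant θ̂ the exact solution of (11.56) is ĝ(x) = −θ̂ e^{θ̂x}, so the last grid value should be close to −e; the O(h) quadrature makes this a sketch rather than production code.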


Theorem 11.1 Consider system (11.1), filters (11.26), and the adaptive laws (11.41).
The control law (11.55) guarantees

\[ \|u\|, \|v\|, \|\psi\|, \|\phi\|, \|P\| \in \mathcal L_2 \cap \mathcal L_\infty \tag{11.57a} \]
\[ \|u\|_\infty, \|v\|_\infty, \|\psi\|_\infty, \|\phi\|_\infty, \|P\|_\infty \in \mathcal L_2 \cap \mathcal L_\infty \tag{11.57b} \]
\[ \|u\|, \|v\|, \|\psi\|, \|\phi\|, \|P\| \to 0 \tag{11.57c} \]
\[ \|u\|_\infty, \|v\|_\infty, \|\psi\|_\infty, \|\phi\|_\infty, \|P\|_\infty \to 0 \tag{11.57d} \]

Before proving Theorem 11.1, we introduce a transformation.

11.2.5 Backstepping

Consider the backstepping transformation
\[
\eta(x,t) = \hat z(x,t) - \int_0^x \hat g(x-\xi,t)\,\hat z(\xi,t)\,d\xi = T[\hat z](x,t) \tag{11.58}
\]
where ĝ is the solution to the Volterra integral equation
\[
\hat g(x,t) = -T[\hat\theta](x,t), \tag{11.59}
\]


11.2 Anti-collocated Sensing and Control 219

which is equivalent to (11.56). The transformation (11.58) is invertible. Consider also the target system

\[
\eta_t(x,t) - \bar\mu\eta_x(x,t) = -\bar\mu\hat g(x,t)\,\hat\epsilon(0,t)
+ T\!\left[\int_x^1 \hat\theta_t(\xi,t)\,\phi(1-(\xi-x),t)\,d\xi\right]\!(x,t)
+ T\!\left[\int_0^1 \hat\kappa_t(\xi,t)\,P(x,\xi,t)\,d\xi\right]\!(x,t)
- \int_0^x \hat g_t(x-\xi,t)\,T^{-1}[\eta](\xi,t)\,d\xi \tag{11.60a}
\]
\[ \eta(1,t) = 0 \tag{11.60b} \]
\[ \eta(x,0) = \eta_0(x) \tag{11.60c} \]
for an initial condition η0 ∈ B([0, 1]).


Lemma 11.7 The transformation (11.58) and controller (11.55) map system (11.38)
into (11.60).

Proof Differentiating (11.58) with respect to time and space, respectively, and inserting the results into the dynamics (11.38a), we obtain
\[
\eta_t(x,t) - \bar\mu\eta_x(x,t)
- \bar\mu\left[\hat\theta(x,t) - \int_0^x \hat g(x-\xi,t)\,\hat\theta(\xi,t)\,d\xi\right]\hat\epsilon(0,t)
+ \int_0^x \hat g(x-\xi,t)\int_\xi^1 \hat\theta_t(s,t)\,\phi(1-(s-\xi),t)\,ds\,d\xi
+ \int_0^x \hat g(x-\xi,t)\int_0^1 \hat\kappa_t(s,t)\,P(\xi,s,t)\,ds\,d\xi
- \int_x^1 \hat\theta_t(\xi,t)\,\phi(1-(\xi-x),t)\,d\xi
- \int_0^1 \hat\kappa_t(\xi,t)\,P(x,\xi,t)\,d\xi
+ \int_0^x \hat g_t(x-\xi,t)\,\hat z(\xi,t)\,d\xi = 0 \tag{11.61}
\]

which can be rewritten as (11.60a). The boundary condition (11.60b) follows from
inserting x = 1 into (11.58), and using (11.38b) and (11.55). 

11.2.6 Proof of Theorem 11.1

Since θ̂ is bounded by projection, we have from (11.56), (11.58) and Theorem 1.3 the following inequalities:
\[
\|\hat g(t)\| \le \bar g, \quad \|\eta(t)\| \le G_1\|\hat z(t)\|, \quad \|\hat z(t)\| \le G_2\|\eta(t)\|, \quad \forall t \ge 0 \tag{11.62}
\]


220 11 Adaptive Output-Feedback: Uncertain In-Domain Parameters

for some positive constants ḡ, G 1 and G 2 , and

||ĝt || ∈ L2 ∩ L∞ . (11.63)

Consider the functionals
\[ V_1(t) = \bar\lambda^{-1}\int_0^1 (2-x)\,w^2(x,t)\,dx, \tag{11.64a} \]
\[ V_2(t) = \bar\mu^{-1}\int_0^1 (1+x)\,\eta^2(x,t)\,dx, \tag{11.64b} \]
\[ V_3(t) = \bar\mu^{-1}\int_0^1 (1+x)\,\phi^2(x,t)\,dx, \tag{11.64c} \]
\[ V_4(t) = \bar\mu^{-1}\int_0^1 (1+x)\int_0^1 P^2(x,\xi,t)\,d\xi\,dx. \tag{11.64d} \]

The following result is proved in Appendix E.7.


Lemma 11.8 There exist nonnegative, integrable functions l1, l2, . . . , l5 such that
\[
\dot V_1(t) \le 4\eta^2(0,t) + 4\hat\epsilon^2(0,t) - \frac{1}{2}\bar\lambda V_1(t) \tag{11.65a}
\]
\[
\dot V_2(t) \le -\eta^2(0,t) - \frac{1}{4}\bar\mu V_2(t) + l_1(t)V_2(t) + l_2(t)V_3(t) + l_3(t)V_4(t) + 32\bar g^2\hat\epsilon^2(0,t) \tag{11.65b}
\]
\[
\dot V_3(t) \le -\frac{1}{2}\bar\mu V_3(t) + 4\eta^2(0,t) + 4\hat\epsilon^2(0,t) \tag{11.65c}
\]
\[
\dot V_4(t) = -\|p_0(t)\|^2 + 2\bar\lambda V_1(t) - \frac{1}{2}\bar\mu V_4(t). \tag{11.65d}
\]
Now, forming the Lyapunov function candidate

V5 (t) = 8V1 (t) + 36V2 (t) + V3 (t) + V4 (t), (11.66)

we find using Lemma 11.8 that
\[
\dot V_5(t) \le -2\bar\lambda V_1(t) - 9\bar\mu V_2(t) - \frac{1}{2}\bar\mu V_3(t) - \frac{1}{2}\bar\mu V_4(t)
+ 36\,l_1(t)V_2(t) + 36\,l_2(t)V_3(t) + 36\,l_3(t)V_4(t)
+ 36\left(1 + 32\bar g^2\right)\hat\epsilon^2(0,t) - \|p_0(t)\|^2. \tag{11.67}
\]

By expanding the term ε̂²(0, t) as
\[
\hat\epsilon^2(0,t) = \sigma^2(t)\left(1 + \|\phi(t)\|^2 + \|p_0(t)\|^2\right) \tag{11.68}
\]


11.2 Anti-collocated Sensing and Control 221

with σ defined in (11.45), (11.67) can be written
\[
\dot V_5(t) \le -2\bar\lambda V_1(t) - 9\bar\mu V_2(t) - \frac{1}{2}\bar\mu V_3(t) - \frac{1}{2}\bar\mu V_4(t)
+ 36\,l_1(t)V_2(t) + \left(36\,l_2(t) + b\,\sigma^2(t)\bar\mu\right)V_3(t) + 36\,l_3(t)V_4(t)
+ b\,\sigma^2(t) - \left(1 - b\,\sigma^2(t)\right)\|p_0(t)\|^2 \tag{11.69}
\]
or
\[
\dot V_5(t) \le -cV_5(t) + l_4(t)V_5(t) + l_5(t) - \left(1 - b\,\sigma^2(t)\right)\|p_0(t)\|^2 \tag{11.70}
\]

for the positive constants c and b, and some nonnegative, integrable functions l4 and
l5 . Moreover, from (11.52), (11.46) and (11.53) we have

\[
\dot V(t) \le -\frac{1}{2}\sigma^2(t) \tag{11.71}
\]
and for t ≥ tF
\[
\sigma^2(t) = \frac{\hat\epsilon^2(0,t)}{1+f^2(t)} \le 2\|\tilde\theta(t)\|^2 + 2\|\tilde\kappa(t)\|^2 \le kV(t) \tag{11.72}
\]
where
\[
k = 4\max\left\{\max_{x\in[0,1]}\gamma_1(x),\ \max_{x\in[0,1]}\gamma_2(x)\right\}. \tag{11.73}
\]

Lemma B.4 in Appendix B now gives V5 ∈ L1 ∩ L∞ and hence

||w||, ||η||, ||φ||, ||P|| ∈ L2 ∩ L∞ . (11.74)

Since ||P(t)|| is bounded, ||p0(t)||² must be bounded for almost all t ≥ 0, implying that σ²(t)||p0(t)||² is integrable, since σ ∈ L2. Inequality (11.70) can therefore be written

V̇5 (t) ≤ −cV5 (t) + l4 (t)V5 (t) + l6 (t) (11.75)

for the integrable function

l6 (t) = l5 (t) + bσ 2 (t)|| p0 (t)||2 . (11.76)

Lemma B.3 in Appendix B then gives V5 → 0 and thus

||w||, ||η||, ||φ||, ||P|| → 0. (11.77)


222 11 Adaptive Output-Feedback: Uncertain In-Domain Parameters

From Lemma 11.4 it follows that

||α|| ∈ L2 ∩ L∞ , ||α|| → 0, (11.78)

while from the invertibility of the transformation (11.58), we have

||ẑ|| ∈ L2 ∩ L∞ , ||ẑ|| → 0. (11.79)

From (11.37),

||ψ|| ∈ L2 ∩ L∞ , ||ψ|| → 0 (11.80)

follows, while (11.29) and Lemma 11.5 give

||z|| ∈ L2 ∩ L∞ , ||z|| → 0. (11.81)

From the invertibility of the transformations of Lemmas 11.1–11.3,

||u||, ||v|| ∈ L2 ∩ L∞ , ||u||, ||v|| → 0 (11.82)

follows. We now proceed to show pointwise boundedness, square integrability and convergence to zero of u and v. From (11.29), (11.31), (11.37) and (11.39), we find
\[
\hat\epsilon(x,t) = \epsilon(x,t) - \int_x^1 \tilde\theta(\xi,t)\,\phi(1-(\xi-x),t)\,d\xi - \int_0^1 \tilde\kappa(\xi,t)\,P(x,\xi,t)\,d\xi \tag{11.83}
\]

and
\[
z(x,t) = \psi(x,t) + \int_x^1 \hat\theta(\xi,t)\,\phi(1-(\xi-x),t)\,d\xi + \int_0^1 \hat\kappa(\xi,t)\,P(x,\xi,t)\,d\xi + \hat\epsilon(x,t). \tag{11.84}
\]

From (11.83) and the fact that ε ≡ 0 for t ≥ tF, we obtain
\[
\|\hat\epsilon\|_\infty \in \mathcal L_\infty \cap \mathcal L_2, \quad \|\hat\epsilon\|_\infty \to 0. \tag{11.85}
\]

From the filter structure (11.26a) and the control law (11.55), we have

U ∈ L∞ ∩ L2 , U →0 (11.86)
11.2 Anti-collocated Sensing and Control 223

and

||ψ||∞ ∈ L∞ ∩ L2 , ||ψ||∞ → 0 (11.87)

for all x ∈ [0, 1]. Then from (11.84), we obtain

||z||∞ ∈ L∞ ∩ L2 , ||z||∞ → 0. (11.88)

Specifically, we have z(0, ·) ∈ L∞ ∩ L2, and from (11.26b), (11.26c) and (11.28), we get

||w||∞ , ||φ||∞ , ||P||∞ ∈ L∞ ∩ L2 , ||w||∞ , ||φ||∞ , ||P||∞ → 0. (11.89)

Lemma 11.4 and the invertibility of the transformations of Lemmas 11.1–11.3, then
give

\[
\|u\|_\infty, \|v\|_\infty \in \mathcal L_\infty \cap \mathcal L_2, \quad \|u\|_\infty, \|v\|_\infty \to 0. \tag{11.90}
\]

11.3 Simulations

System (11.1) and the controller of Theorem 11.1 are implemented using the system
parameters

\[
\lambda(x) = \frac{1}{2}(1+x), \quad \mu(x) = e^{\frac{1}{2}x} \tag{11.91a}
\]
\[
c_1(x) = 1+x, \quad c_2(x) = 1+\sin(x), \quad q = 1 \tag{11.91b}
\]

and initial condition

u 0 (x) = x, v0 (x) = sin(2πx), (11.92)

constituting an unstable system. All additional initial conditions are set to zero. The
design gains are set to

γ1 = γ2 ≡ 100, θ̄ = κ̄ = 100. (11.93)
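The setup (11.91)–(11.93) can be encoded directly; the sketch below (variable names are ours) also computes the transport times t1 = 1/λ̄ and t2 = 1/μ̄, which set the finite times after which the non-adaptive estimation errors vanish:

```python
import numpy as np

# System parameters (11.91) and initial conditions (11.92)
lam = lambda x: 0.5 * (1.0 + x)          # lambda(x) = (1/2)(1 + x)
mu  = lambda x: np.exp(0.5 * x)          # mu(x)     = e^{x/2}
c1  = lambda x: 1.0 + x
c2  = lambda x: 1.0 + np.sin(x)
q   = 1.0

u0 = lambda x: x                         # u(x, 0)
v0 = lambda x: np.sin(2.0 * np.pi * x)   # v(x, 0)

# Design gains (11.93)
gamma1 = gamma2 = 100.0
theta_bar = kappa_bar = 100.0

# Transport times t1 = int_0^1 dx/lambda(x), t2 = int_0^1 dx/mu(x)
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
f1, f2 = 1.0 / lam(x), 1.0 / mu(x)
t1 = dx * (0.5 * (f1[0] + f1[-1]) + f1[1:-1].sum())   # trapezoid rule
t2 = dx * (0.5 * (f2[0] + f2[-1]) + f2[1:-1].sum())
```

Here t1 = 2 ln 2 ≈ 1.39 and t2 = 2(1 − e^{−1/2}) ≈ 0.79, both small compared with the roughly seven-second convergence observed in Fig. 11.1.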

From Fig. 11.1 it is observed that the state norm and the actuation signal both
converge to zero in approximately seven seconds, while from Fig. 11.2, the estimated
parameters are bounded.
224 11 Adaptive Output-Feedback: Uncertain In-Domain Parameters

Fig. 11.1 Left: State norm ||u|| + ||v|| versus time [s]. Right: Actuation signal U versus time [s]

Fig. 11.2 Left: Estimated parameter θ̂. Right: Estimated parameter κ̂

11.4 Notes

The above adaptive controller of Theorem 11.1 is both simpler and easier to implement than the controllers of Chap. 10. However, neither the system parameters nor the system states are estimated directly.
The problem of stabilizing a system of 2 × 2 linear hyperbolic PDEs with uncertain system parameters using boundary sensing only is also solved in Yu et al. (2017). The solution in Yu et al. (2017), however, requires sensing to be taken at both boundaries (u(1, t) as well as v(0, t)), and the paper only concerns systems with constant and equal transport speeds set to 1.
allowed to have non-local source terms in the form of integrals similar to the term h
in (2.1a), but such a term can be removed by a transformation and the controller of
Theorem 11.1 can therefore be used directly on such systems as well.
In Chap. 12, we further develop the above adaptive output-feedback scheme in
a number of ways: we use it to solve a model reference adaptive control problem,
and to reject a biased harmonic disturbance with uncertain amplitudes, bias and
phases, and also allow the actuation and sensing to be scaled by arbitrary nonzero
constants.

References

Anfinsen H, Aamo OM (2017) Adaptive output-feedback stabilization of linear 2 × 2 hyperbolic systems using anti-collocated sensing and control. Syst Control Lett 104:86–94
Bernard P, Krstić M (2014) Adaptive output-feedback stabilization of non-local hyperbolic PDEs. Automatica 50:2692–2699
Yu H, Vazquez R, Krstić M (2017) Adaptive output feedback for hyperbolic PDE pairs with non-local coupling. In: 2017 American control conference, Seattle, WA, USA
Chapter 12
Model Reference Adaptive Control

12.1 Introduction

We will in this chapter show how the technique of Chap. 11 can be generalized to
solve a model reference adaptive control problem, as well as being used to reject the
effect of a biased harmonic disturbance affecting the system’s interior, boundaries
and measurement. Furthermore, we allow the actuation and anti-collocated sensing
to be scaled by arbitrary nonzero constants. The system under consideration is

\[ u_t(x,t) + \lambda(x)u_x(x,t) = c_1(x)v(x,t) + d_1(x,t) \tag{12.1a} \]
\[ v_t(x,t) - \mu(x)v_x(x,t) = c_2(x)u(x,t) + d_2(x,t) \tag{12.1b} \]
\[ u(0,t) = qv(0,t) + d_3(t) \tag{12.1c} \]
\[ v(1,t) = k_1U(t) + d_4(t) \tag{12.1d} \]
\[ u(x,0) = u_0(x) \tag{12.1e} \]
\[ v(x,0) = v_0(x) \tag{12.1f} \]
\[ y_0(t) = k_2v(0,t) + d_5(t) \tag{12.1g} \]

where the parameters λ, μ, c1 , c2 , q, k1 , k2 are unknown but assumed to satisfy

\[
\lambda, \mu \in C^1([0,1]), \quad \lambda(x), \mu(x) > 0, \ \forall x \in [0,1] \tag{12.2a}
\]
\[
c_1, c_2 \in C^0([0,1]), \quad q, k_1, k_2 \in \mathbb R\setminus\{0\}, \tag{12.2b}
\]

with

u 0 , v0 ∈ B([0, 1]), (12.3)

and where d1, d2, d3, d4, d5 are disturbances containing biased harmonic oscillators. The signal y0 is a measurement taken anti-collocated with actuation U.
We assume the following quantities are known about the system.
© Springer Nature Switzerland AG 2019 227
H. Anfinsen and O. M. Aamo, Adaptive Control of Hyperbolic PDEs,
Communications and Control Engineering,
https://doi.org/10.1007/978-3-030-05879-1_12

Assumption 12.1 The following quantities are known:
\[
t_1 = \bar\lambda^{-1} = \int_0^1 \frac{d\gamma}{\lambda(\gamma)}, \quad
t_2 = \bar\mu^{-1} = \int_0^1 \frac{d\gamma}{\mu(\gamma)}, \quad
\mathrm{sign}(k_1k_2). \tag{12.4}
\]

Note that the exact profiles of λ and μ are not required to be known.
The goal of this chapter is to design an adaptive control law U (t) in (12.1d) so that
system (12.1) is adaptively stabilized subject to Assumption 12.1, and the following
tracking objective
\[
\lim_{t\to\infty}\int_t^{t+T} \left(y_0(s) - y_r(s)\right)^2 ds = 0 \tag{12.5}
\]

is obtained for some bounded constant T > 0, where yr is generated using the reference model

\[ b_t(x,t) - \bar\mu b_x(x,t) = 0 \tag{12.6a} \]
\[ b(1,t) = r(t) \tag{12.6b} \]
\[ b(x,0) = b_0(x) \tag{12.6c} \]
\[ y_r(t) = b(0,t) \tag{12.6d} \]
for some reference signal r of choice. The goal (12.5) should be achieved using the sensing (12.1g) only. Moreover, all additional variables in the closed loop system should be bounded pointwise in space. We assume the reference signal r and disturbances d1, d2, . . . , d5 are bounded, as formally stated in the following assumption.
Assumption 12.2 The reference signal r(t) is known for all t ≥ 0, and there exist constants r̄, d̄ so that
\[
|r(t)| \le \bar r, \quad \|d_i(t)\|_\infty \le \bar d, \quad |d_j(t)| \le \bar d \tag{12.7}
\]
for all t ≥ 0, x ∈ [0, 1], i = 1, 2 and j = 3, 4, 5.
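Because (12.6) is a pure transport equation, the reference model can be evaluated without discretizing the PDE: b is constant along characteristics moving from x = 1 to x = 0 at speed μ̄, so yr(t) = r(t − t2) once t ≥ t2 = 1/μ̄, while the initial profile drains out before that. A sketch under these assumptions (function names are ours):

```python
import numpy as np

def reference_output(r, t, mu_bar, b0=lambda x: 0.0):
    """y_r(t) = b(0, t) for b_t - mu_bar * b_x = 0, b(1, t) = r(t).
    The characteristic through (0, t) left x = 1 at time t - 1/mu_bar."""
    t2 = 1.0 / mu_bar
    if t >= t2:
        return r(t - t2)           # boundary data transported across the domain
    return b0(mu_bar * t)          # initial profile still draining out

y = reference_output(np.sin, 1.5, 2.0)   # mu_bar = 2 gives t2 = 0.5, so y = sin(1.0)
```

This delay view of the reference model is what makes the tracking objective (12.5) achievable with boundary actuation alone.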

12.2 Model Reference Adaptive Control

12.2.1 Disturbance Parameterization

In the transformations to follow, we will need a parametrization of the disturbance terms d1, d2, d3, d4, d5. Since they are all assumed to be biased, harmonic disturbances with a known number n of distinct frequencies, they can all be represented as outputs of an autonomous linear system. Hence, we parameterize the disturbances

as follows:
\[
d_1(x,t) = g_1^T(x)X(t), \quad d_2(x,t) = g_2^T(x)X(t) \tag{12.8a}
\]
\[
d_3(t) = g_3^TX(t), \quad d_4(t) = g_4^TX(t) \tag{12.8b}
\]
\[
d_5(t) = g_5^TX(t), \quad \dot X(t) = AX(t), \quad X(0) = X_0 \tag{12.8c}
\]
where the matrix A ∈ R^{(2n+1)×(2n+1)} is known and has the form
\[
A = \mathrm{diag}\{0, A_1, A_2, \ldots, A_n\} \tag{12.9}
\]
where
\[
A_i = \begin{bmatrix} 0 & \omega_i \\ -\omega_i & 0 \end{bmatrix} \tag{12.10}
\]
for i = 1 . . . n. The vectors g1, g2, g3, g4, g5 and the disturbance model's initial condition X(0) = X0 ∈ R^{2n+1}, however, are unknown.
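The exosystem (12.8c)–(12.10) is straightforward to realize numerically; the sketch below (helper names are ours) builds the block-diagonal A and propagates X(t) in closed form using the rotation structure of each 2 × 2 block:

```python
import numpy as np

def harmonic_generator(omegas):
    """A = diag{0, A_1, ..., A_n}, A_i = [[0, w_i], [-w_i, 0]], size (2n+1)."""
    n = len(omegas)
    A = np.zeros((2 * n + 1, 2 * n + 1))
    for i, w in enumerate(omegas):
        j = 1 + 2 * i
        A[j, j + 1] = w
        A[j + 1, j] = -w
    return A

def propagate(X0, omegas, t):
    """X(t) = exp(A t) X0: the bias state is constant, each pair rotates."""
    X = np.array(X0, dtype=float)
    for i, w in enumerate(omegas):
        j = 1 + 2 * i
        c, s = np.cos(w * t), np.sin(w * t)
        X[j], X[j + 1] = c * X0[j] + s * X0[j + 1], -s * X0[j] + c * X0[j + 1]
    return X

A = harmonic_generator([2.0, 5.0])        # n = 2 frequencies -> A is 5 x 5
```

Any gᵀX(t) is then a biased harmonic signal a0 + Σ ai sin(ωi t) + bi cos(ωi t), exactly the disturbance class assumed above.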

12.2.2 Mapping to Canonical Form

12.2.2.1 Decoupling

Lemma 12.1 System (12.1) is, through an invertible backstepping transformation which is characterized in the proof, equivalent to the system

\[ \check\alpha_t(x,t) + \lambda(x)\check\alpha_x(x,t) = 0 \tag{12.11a} \]
\[ \check\beta_t(x,t) - \mu(x)\check\beta_x(x,t) = 0 \tag{12.11b} \]
\[ \check\alpha(0,t) = q\check\beta(0,t) \tag{12.11c} \]
\[
\check\beta(1,t) = k_1U(t) - \int_0^1 m_1(\xi)\,\check\alpha(\xi,t)\,d\xi - \int_0^1 m_2(\xi)\,\check\beta(\xi,t)\,d\xi - m_3^TX(t) \tag{12.11d}
\]
\[ \check\alpha(x,0) = \check\alpha_0(x) \tag{12.11e} \]
\[ \check\beta(x,0) = \check\beta_0(x) \tag{12.11f} \]
\[ y_0(t) = k_2\check\beta(0,t) \tag{12.11g} \]

for some (continuous) functions m1, m2, m3 of the unknown parameters μ, λ, c1, c2, q, and with α̌0, β̌0 ∈ B([0, 1]).

Proof We will prove that system (12.1) with disturbance model (12.8) and system (12.11) are connected through an invertible backstepping transformation. To ease the derivations to follow, we write system (12.1) in vector form as follows:
\[ \zeta_t(x,t) + \Lambda(x)\zeta_x(x,t) = \Pi(x)\zeta(x,t) + G(x)X(t) \tag{12.12a} \]
\[ \zeta(0,t) = Q_0\,\zeta(0,t) + G_3X(t) \tag{12.12b} \]
\[ \zeta(1,t) = R_1\,\zeta(1,t) + k_1\bar U(t) + G_4X(t) \tag{12.12c} \]
\[ \zeta(x,0) = \zeta_0(x) \tag{12.12d} \]

where
\[
\zeta(x,t) = \begin{bmatrix} u(x,t) \\ v(x,t) \end{bmatrix}, \quad
\Lambda(x) = \begin{bmatrix} \lambda(x) & 0 \\ 0 & -\mu(x) \end{bmatrix} \tag{12.13a}
\]
\[
\Pi(x) = \begin{bmatrix} 0 & c_1(x) \\ c_2(x) & 0 \end{bmatrix}, \quad
G(x) = \begin{bmatrix} g_1^T(x) \\ g_2^T(x) \end{bmatrix} \tag{12.13b}
\]
\[
Q_0 = \begin{bmatrix} 0 & q \\ 0 & 1 \end{bmatrix}, \quad
R_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \tag{12.13c}
\]
\[
\bar U(t) = \begin{bmatrix} 0 \\ U(t) \end{bmatrix}, \quad
G_3 = \begin{bmatrix} g_3^T \\ 0 \end{bmatrix}, \quad
G_4 = \begin{bmatrix} 0 \\ g_4^T \end{bmatrix}. \tag{12.13d}
\]

Consider the backstepping transformation
\[
\gamma(x,t) = \zeta(x,t) - \int_0^x K(x,\xi)\,\zeta(\xi,t)\,d\xi - F(x)X(t) \tag{12.14}
\]

where
\[
\gamma(x,t) = \begin{bmatrix} \check\alpha(x,t) \\ \check\beta(x,t) \end{bmatrix} \tag{12.15}
\]
contains the new set of variables, and
\[
K(x,\xi) = \begin{bmatrix} K^{uu}(x,\xi) & K^{uv}(x,\xi) \\ K^{vu}(x,\xi) & K^{vv}(x,\xi) \end{bmatrix}, \quad
F(x) = \begin{bmatrix} f_1^T(x) \\ f_2^T(x) \end{bmatrix}, \tag{12.16}
\]

are specified shortly. Differentiating (12.14) with respect to time, inserting the dynamics (12.12a) and (12.8c) and integrating by parts, we find
\[
\zeta_t(x,t) = \gamma_t(x,t) - K(x,x)\Lambda(x)\zeta(x,t) + K(x,0)\Lambda(0)\zeta(0,t)
+ \int_0^x \left[K_\xi(x,\xi)\Lambda(\xi) + K(x,\xi)\Lambda'(\xi) + K(x,\xi)\Pi(\xi)\right]\zeta(\xi,t)\,d\xi
+ \int_0^x K(x,\xi)G(\xi)X(t)\,d\xi + F(x)AX(t). \tag{12.17}
\]
0

Equivalently, differentiating (12.14) with respect to space, we find
\[
\zeta_x(x,t) = \gamma_x(x,t) + K(x,x)\zeta(x,t) + \int_0^x K_x(x,\xi)\,\zeta(\xi,t)\,d\xi + F'(x)X(t). \tag{12.18}
\]

Inserting (12.17) and (12.18) into (12.12a), we find
\[
\gamma_t(x,t) + \Lambda(x)\gamma_x(x,t) + K(x,0)\Lambda(0)Q_0\,\zeta(0,t)
+ \left[\Lambda(x)K(x,x) - K(x,x)\Lambda(x) - \Pi(x)\right]\zeta(x,t)
+ \int_0^x \left[\Lambda(x)K_x(x,\xi) + K_\xi(x,\xi)\Lambda(\xi) + K(x,\xi)\Pi(\xi) + K(x,\xi)\Lambda'(\xi)\right]\zeta(\xi,t)\,d\xi
+ \left[\Lambda(x)F'(x) - G(x) + F(x)A + \int_0^x K(x,\xi)G(\xi)\,d\xi + K(x,0)\Lambda(0)G_3\right]X(t) = 0. \tag{12.19}
\]

If K satisfies the PDE (8.54)–(8.55) with k^{uu} chosen according to Remark 8.1, and F satisfies the equation
\[
\Lambda(x)F'(x) = -F(x)A + G(x) - \int_0^x K(x,\xi)G(\xi)\,d\xi - K(x,0)\Lambda(0)G_3, \tag{12.20}
\]

we obtain the target system equations (12.11a)–(12.11b). Inserting the transformation (12.14) into the boundary condition (12.1c) and the measurement (12.1g), we obtain
\[
\check\alpha(0,t) + f_1^T(0)X(t) = q\check\beta(0,t) + qf_2^T(0)X(t) + g_3^TX(t) \tag{12.21a}
\]
\[
y_0(t) = k_2\check\beta(0,t) + k_2f_2^T(0)X(t) + g_5^TX(t). \tag{12.21b}
\]

Choosing
\[
f_1^T(0) = -\frac{q}{k_2}g_5^T + g_3^T, \quad f_2^T(0) = -\frac{1}{k_2}g_5^T \tag{12.22}
\]
we obtain (12.11c) and (12.11g). The equation consisting of (12.20) and (12.22) is a standard matrix ODE which can be explicitly solved for F. From Theorem 1.4, the transformation (12.14) is invertible, and the inverse is in the form
\[
\zeta(x,t) = \gamma(x,t) + \int_0^x L(x,\xi)\,\gamma(\xi,t)\,d\xi + R(x)X(t) \tag{12.23}
\]

where
\[
L(x,\xi) = \begin{bmatrix} L^{\alpha\alpha}(x,\xi) & L^{\alpha\beta}(x,\xi) \\ L^{\beta\alpha}(x,\xi) & L^{\beta\beta}(x,\xi) \end{bmatrix}, \quad
R(x) = \begin{bmatrix} r_1^T(x) \\ r_2^T(x) \end{bmatrix} \tag{12.24}
\]
are given from (1.53) and (1.102). From inserting x = 1 into (12.23), we obtain (12.11d), where
\[
m_1(\xi) = L^{\beta\alpha}(1,\xi), \quad m_2(\xi) = L^{\beta\beta}(1,\xi), \quad m_3^T = r_2^T(1) - g_4^T. \tag{12.25}
\]

12.2.2.2 Scaling and Mapping to Constant Transport Speeds

We now use a transformation to get rid of the spatially varying transport speeds in
(12.11), and also scale the variables to ease subsequent analysis.
Lemma 12.2 System (12.11) is equivalent to the system

\[ \alpha_t(x,t) + \bar\lambda\alpha_x(x,t) = 0 \tag{12.26a} \]
\[ \beta_t(x,t) - \bar\mu\beta_x(x,t) = 0 \tag{12.26b} \]
\[ \alpha(0,t) = \beta(0,t) \tag{12.26c} \]
\[
\beta(1,t) = \rho U(t) - \int_0^1 \kappa(\xi)\,\alpha(\xi,t)\,d\xi - \int_0^1 \sigma(\xi)\,\beta(\xi,t)\,d\xi - m_4^TX(t) \tag{12.26d}
\]
\[ \alpha(x,0) = \alpha_0(x) \tag{12.26e} \]
\[ \beta(x,0) = \beta_0(x) \tag{12.26f} \]
\[ y_0(t) = \beta(0,t) \tag{12.26g} \]

where ρ, κ, σ, m4 are continuous functions of m1, m2, m3, k1 and k2, and α0, β0 ∈ B([0, 1]).

Proof Consider the invertible mapping
\[
\alpha(x,t) = \frac{k_2}{q}\,\check\alpha\bigl(h_\alpha^{-1}(x),t\bigr), \quad
\beta(x,t) = k_2\,\check\beta\bigl(h_\beta^{-1}(x),t\bigr) \tag{12.27}
\]
where
\[
h_\alpha(x) = \bar\lambda\int_0^x \frac{d\gamma}{\lambda(\gamma)}, \quad
h_\beta(x) = \bar\mu\int_0^x \frac{d\gamma}{\mu(\gamma)} \tag{12.28}
\]

with λ̄, μ̄ defined in Assumption 12.1, are strictly increasing and hence invertible functions. The invertibility of the transformation (12.27) therefore follows. The rest of the proof follows immediately from insertion and noting that
\[
h_\alpha'(x) = \frac{\bar\lambda}{\lambda(x)}, \quad h_\beta'(x) = \frac{\bar\mu}{\mu(x)} \tag{12.29a}
\]
\[
h_\alpha(0) = h_\beta(0) = 0, \quad h_\alpha(1) = h_\beta(1) = 1 \tag{12.29b}
\]
and is therefore omitted. The new parameters are given as
\[
\rho = k_1k_2, \quad \kappa(x) = q\,t_1\,\lambda\bigl(h_\alpha^{-1}(x)\bigr)\,m_1\bigl(h_\alpha^{-1}(x)\bigr) \tag{12.30a}
\]
\[
m_4 = k_2m_3, \quad \sigma(x) = t_2\,\mu\bigl(h_\beta^{-1}(x)\bigr)\,m_2\bigl(h_\beta^{-1}(x)\bigr). \tag{12.30b}
\]
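Since h_α and h_β in (12.28) are strictly increasing with h(0) = 0 and h(1) = 1, both the maps and their inverses can be tabulated numerically by cumulative quadrature and monotone interpolation; a sketch (grid size and names are our choices):

```python
import numpy as np

def space_rescaling(speed, N=2001):
    """Tabulate h(x) = bar_c * int_0^x dg/speed(g), bar_c = 1/int_0^1 dg/speed(g),
    and its inverse, as interpolation tables (h is monotone, so the table inverts)."""
    x = np.linspace(0.0, 1.0, N)
    dx = x[1] - x[0]
    f = 1.0 / speed(x)
    # cumulative trapezoid of 1/speed
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dx)))
    h_vals = cum / cum[-1]                       # h(0) = 0, h(1) = 1 by construction
    h = lambda s: np.interp(s, x, h_vals)
    h_inv = lambda s: np.interp(s, h_vals, x)    # swap abscissa and ordinate
    return h, h_inv

h, h_inv = space_rescaling(lambda x: 0.5 * (1.0 + x))   # lambda(x) of (11.91a)
```

For λ(x) = ½(1 + x) this gives h(x) = log₂(1 + x), so h_inv(½) = √2 − 1, which the table reproduces to interpolation accuracy.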

12.2.2.3 Extension of Reference Model and Error Dynamics

In view of the structure of system (12.26), we augment the reference model (12.6)
with an auxiliary state a, and introduce the system

\[ a_t(x,t) + \bar\lambda a_x(x,t) = 0 \tag{12.31a} \]
\[ b_t(x,t) - \bar\mu b_x(x,t) = 0 \tag{12.31b} \]
\[ a(0,t) = b(0,t) \tag{12.31c} \]
\[ b(1,t) = r(t) \tag{12.31d} \]
\[ a(x,0) = a_0(x) \tag{12.31e} \]
\[ b(x,0) = b_0(x) \tag{12.31f} \]
\[ y_r(t) = b(0,t) \tag{12.31g} \]

with initial conditions a0 , b0 ∈ B([0, 1]).

Lemma 12.3 Consider system (12.26) and the extended reference model (12.31).
The error variables

\[ w(x,t) = \alpha(x,t) - a(x,t) \tag{12.32a} \]
\[ \check z(x,t) = \beta(x,t) - b(x,t) \tag{12.32b} \]

satisfy the dynamics

\[ w_t(x,t) + \bar\lambda w_x(x,t) = 0 \tag{12.33a} \]
\[ \check z_t(x,t) - \bar\mu\check z_x(x,t) = 0 \tag{12.33b} \]
\[ w(0,t) = \check z(0,t) \tag{12.33c} \]
\[
\check z(1,t) = \rho U(t) - r(t) + \int_0^1 \kappa(\xi)\left(w(\xi,t) + a(\xi,t)\right)d\xi
+ \int_0^1 \sigma(\xi)\left(\check z(\xi,t) + b(\xi,t)\right)d\xi + m_4^TX(t) \tag{12.33d}
\]
\[ w(x,0) = w_0(x) \tag{12.33e} \]
\[ \check z(x,0) = \check z_0(x) \tag{12.33f} \]

with the measurement (12.26g) becoming

y0 (t) = ž(0, t) + b(0, t), (12.34)

and with w0 , ž 0 ∈ B([0, 1]).

Proof The proof is straightforward, and therefore omitted. 

12.2.2.4 Canonical Form

Lemma 12.4 System (12.33) is equivalent to the system

\[ w_t(x,t) + \bar\lambda w_x(x,t) = 0 \tag{12.35a} \]
\[ z_t(x,t) - \bar\mu z_x(x,t) = \bar\mu\theta(x)z(0,t) \tag{12.35b} \]
\[ w(0,t) = z(0,t) \tag{12.35c} \]
\[
z(1,t) = \rho U(t) - r(t) + \int_0^1 \kappa(\xi)\left(w(\xi,t) + a(\xi,t)\right)d\xi
+ \int_0^1 \theta(\xi)\,b(1-\xi,t)\,d\xi + m_4^TX(t) \tag{12.35d}
\]
\[ w(x,0) = w_0(x) \tag{12.35e} \]
\[ z(x,0) = z_0(x) \tag{12.35f} \]
\[ y_0(t) = z(0,t) + b(0,t) \tag{12.35g} \]
where w0, z0 ∈ B([0, 1]) and
\[
\theta(x) = \sigma(1-x). \tag{12.36}
\]

Proof Consider the backstepping transformation
\[
z(x,t) = \check z(x,t) - \int_0^x \sigma(1-x+\xi)\,\check z(\xi,t)\,d\xi. \tag{12.37}
\]
0

Differentiating (12.37) with respect to time and space, respectively, we find
\[
\check z_t(x,t) = z_t(x,t) + \bar\mu\,\sigma(1)\check z(x,t) - \bar\mu\,\sigma(1-x)\check z(0,t)
- \bar\mu\int_0^x \sigma'(1-x+\xi)\,\check z(\xi,t)\,d\xi \tag{12.38}
\]
and
\[
\check z_x(x,t) = z_x(x,t) + \sigma(1)\check z(x,t) - \int_0^x \sigma'(1-x+\xi)\,\check z(\xi,t)\,d\xi. \tag{12.39}
\]

Inserting (12.38) and (12.39) into (12.33b), we obtain
\[
\check z_t(x,t) - \bar\mu\check z_x(x,t) = z_t(x,t) - \bar\mu z_x(x,t) - \bar\mu\,\sigma(1-x)\check z(0,t) = 0 \tag{12.40}
\]
which gives (12.35b) with θ defined in (12.36), since
\[
\check z(0,t) = z(0,t). \tag{12.41}
\]

Lastly, using (12.37) and (12.33d), we have
\[
z(1,t) = \rho U(t) - r(t) + \int_0^1 \kappa(\xi)\left(w(\xi,t) + a(\xi,t)\right)d\xi
+ \int_0^1 \sigma(\xi)\left(\check z(\xi,t) + b(\xi,t)\right)d\xi
- \int_0^1 \sigma(1-1+\xi)\,\check z(\xi,t)\,d\xi + m_4^TX(t)
= \rho U(t) - r(t) + \int_0^1 \kappa(\xi)\left(w(\xi,t) + a(\xi,t)\right)d\xi
+ \int_0^1 \sigma(\xi)\,b(\xi,t)\,d\xi + m_4^TX(t) \tag{12.42}
\]

which gives (12.35d), in view of the identity
\[
\int_0^1 \sigma(\xi)\,b(\xi,t)\,d\xi = \int_0^1 \theta(1-\xi)\,b(\xi,t)\,d\xi = \int_0^1 \theta(\xi)\,b(1-\xi,t)\,d\xi. \tag{12.43}
\]

We have thus shown that stabilizing (12.35) is equivalent to stabilizing the original
system (12.1), because the reference system (12.31) itself is stable for any bounded
r . Moreover, the objective (12.5) can be stated in terms of z as
\[
\lim_{t\to\infty}\int_t^{t+T} z^2(0,s)\,ds = 0. \tag{12.44}
\]

The goal is to design a control law U so that z and w converge in L 2 ([0, 1]) at least
asymptotically to zero, while at the same time ensuring pointwise boundedness of
all variables and convergence of z(0, t) to zero in the sense of (12.44).

12.2.3 Reparametrization of the Disturbance

We reparameterize the disturbance term m4ᵀX as follows:
\[
m_4^TX(t) = \chi^T(t)\,\nu \tag{12.45}
\]
where
\[
\chi^T(t) = \begin{bmatrix} 1 & \sin(\omega_1t) & \cos(\omega_1t) & \cdots & \sin(\omega_nt) & \cos(\omega_nt) \end{bmatrix} \tag{12.46}
\]
contains known components, while
\[
\nu = \begin{bmatrix} a_0 & a_1 & b_1 & \cdots & a_n & b_n \end{bmatrix}^T \tag{12.47}
\]

contains the unknown amplitudes and bias. This representation facilitates identification, since all the uncertain parameters are now in a single vector ν.
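Concretely, the regressor (12.46) and the reconstruction χᵀ(t)ν can be sketched as follows (the numeric values of ν and the frequencies are only an illustration):

```python
import numpy as np

def chi(t, omegas):
    """chi(t) = [1, sin(w1 t), cos(w1 t), ..., sin(wn t), cos(wn t)]."""
    out = [1.0]
    for w in omegas:
        out += [np.sin(w * t), np.cos(w * t)]
    return np.array(out)

# nu = [a0, a1, b1, ..., an, bn]: bias and per-frequency amplitudes
omegas = [1.0, 3.0]
nu = np.array([0.5, 1.0, 0.0, 0.0, 2.0])
d = lambda t: chi(t, omegas) @ nu      # d(t) = a0 + sum a_i sin + b_i cos
```

The regressor is known on-line while ν is constant and unknown, which is exactly the linear-in-the-parameters structure the adaptive laws below exploit.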

12.2.4 Filter Design

We introduce slightly modified versions of the filters introduced in Sect. 11.2.2. Consider
\[
\psi_t(x,t) - \bar\mu\psi_x(x,t) = 0, \quad \psi(1,t) = U(t), \quad \psi(x,0) = \psi_0(x) \tag{12.48a}
\]
\[
\phi_t(x,t) - \bar\mu\phi_x(x,t) = 0, \quad \phi(1,t) = y_0(t) - b(0,t), \quad \phi(x,0) = \phi_0(x) \tag{12.48b}
\]
\[
\vartheta_t(x,t) - \bar\mu\vartheta_x(x,t) = 0, \quad \vartheta(1,t) = \chi(t), \quad \vartheta(x,0) = \vartheta_0(x) \tag{12.48c}
\]
\[
P_t(x,\xi,t) + \bar\lambda P_\xi(x,\xi,t) = 0, \quad P(x,0,t) = \phi(x,t), \quad P(x,\xi,0) = P_0(x,\xi) \tag{12.48d}
\]
and define
\[
p_0(x,t) = P(0,x,t), \quad p_1(x,t) = P(1,x,t). \tag{12.49}
\]



Consider also the filtered reference variables
\[
M_t(x,\xi,t) - \bar\mu M_x(x,\xi,t) = 0, \quad M(1,\xi,t) = a(\xi,t), \quad M(x,\xi,0) = M_0(x,\xi) \tag{12.50a}
\]
\[
N_t(x,\xi,t) - \bar\mu N_x(x,\xi,t) = 0, \quad N(1,\xi,t) = b(1-\xi,t), \quad N(x,\xi,0) = N_0(x,\xi) \tag{12.50b}
\]
and define
\[
n_0(x,t) = N(0,x,t), \quad m_0(x,t) = M(0,x,t). \tag{12.51}
\]

Let the initial conditions satisfy

ψ0 , φ0 , ϑ0 ∈ B([0, 1]), P0 , M0 , N0 ∈ B([0, 1]2 ). (12.52)

We can now construct non-adaptive estimates of the variables w and z as

w̄(x, t) = p1 (x, t) (12.53a)


 1
z̄(x, t) = ρψ(x, t) − b(x, t) + θ(ξ)φ(1 − (ξ − x), t)dξ
x
 1
+ κ(ξ) [P(x, ξ, t) + M(x, ξ, t)] dξ
0
 1
+ θ(ξ)N (x, ξ, t)dξ + ϑT (x, t)ν. (12.53b)
0

Lemma 12.5 Consider system (12.35) and state estimates (12.53) generated using
the filters (12.48) and (12.49). After a finite time t F given in (8.8), we have

w̄ ≡ w, z̄ ≡ z. (12.54)

Proof Consider the non-adaptive estimation errors
\[ e(x,t) = w(x,t) - \bar w(x,t) \tag{12.55a} \]
\[ \epsilon(x,t) = z(x,t) - \bar z(x,t). \tag{12.55b} \]

Then the dynamics can straightforwardly be shown to satisfy
\[ e_t(x,t) + \bar\lambda e_x(x,t) = 0 \tag{12.56a} \]
\[
\epsilon_t(x,t) - \bar\mu\epsilon_x(x,t) = \int_0^1 \kappa(\xi)\left[\bar\mu P_x(x,\xi,t) - P_t(x,\xi,t)\right]d\xi \tag{12.56b}
\]
\[ e(0,t) = 0 \tag{12.56c} \]
\[ \epsilon(1,t) = \int_0^1 \kappa(\xi)\,e(\xi,t)\,d\xi \tag{12.56d} \]
\[ e(x,0) = e_0(x) \tag{12.56e} \]
\[ \epsilon(x,0) = \epsilon_0(x) \tag{12.56f} \]

where e0, ε0 ∈ B([0, 1]). It can be shown, using the boundary condition P(x, 0, t) = φ(x, t) in (12.48d) and the dynamics of φ in (12.48b), that Pt(x, ξ, t) = μ̄Px(x, ξ, t) for t ≥ t1. Moreover, from (12.56a) and (12.56c), it is observed that e ≡ 0 for t ≥ t1, and therefore (12.56b) and (12.56d) imply that ε ≡ 0 for t ≥ tF, where tF is given by (8.8). □

12.2.5 Adaptive Laws

We start by assuming the following:


Assumption 12.3 Bounds on ρ, θ, κ, ν are known. That is, we are in knowledge of
some constants ρ, ρ̄, θ, θ̄, κ, κ̄, ν i , ν̄i , i = 1 . . . (2n + 1) so that

ρ ≤ ρ ≤ ρ̄ (12.57a)
θ ≤ θ(x) ≤ θ̄, ∀x ∈ [0, 1] (12.57b)
κ ≤ κ(x) ≤ κ̄, ∀x ∈ [0, 1] (12.57c)
ν i ≤ νi ≤ ν̄i , i = 1 . . . (2n + 1) (12.57d)

for all x ∈ [0, 1], where
\[
\nu = \begin{bmatrix} \nu_1 & \nu_2 & \cdots & \nu_{2n+1} \end{bmatrix}^T \tag{12.58a}
\]
\[
\underline\nu = \begin{bmatrix} \underline\nu_1 & \underline\nu_2 & \cdots & \underline\nu_{2n+1} \end{bmatrix}^T \tag{12.58b}
\]
\[
\bar\nu = \begin{bmatrix} \bar\nu_1 & \bar\nu_2 & \cdots & \bar\nu_{2n+1} \end{bmatrix}^T \tag{12.58c}
\]
and with
\[
0 \notin [\underline\rho, \bar\rho]. \tag{12.59}
\]

The assumption (12.59) is equivalent to knowing the sign of the product k1k2. The remaining assumptions should not be a limitation, since the bounds can be made arbitrarily large.

Motivated by the parametrization (12.53), we generate an estimate of z from
\[
\hat z(x,t) = \hat\rho(t)\psi(x,t) - b(x,t) + \int_x^1 \hat\theta(\xi,t)\,\phi(1-(\xi-x),t)\,d\xi
+ \int_0^1 \hat\kappa(\xi,t)\left[P(x,\xi,t) + M(x,\xi,t)\right]d\xi
+ \int_0^1 \hat\theta(\xi,t)\,N(x,\xi,t)\,d\xi + \vartheta^T(x,t)\,\hat\nu(t) \tag{12.60}
\]

and define the corresponding prediction error as
\[
\hat\epsilon(x,t) = z(x,t) - \hat z(x,t). \tag{12.61}
\]

The dynamics of (12.60) is
\[
\hat z_t(x,t) - \bar\mu\hat z_x(x,t) = \bar\mu\hat\theta(x,t)z(0,t)
+ \int_0^1 \hat\kappa(\xi,t)\left[P_t(x,\xi,t) - \bar\mu P_x(x,\xi,t)\right]d\xi
+ \dot{\hat\rho}(t)\psi(x,t)
+ \int_x^1 \hat\theta_t(\xi,t)\,\phi(1-(\xi-x),t)\,d\xi
+ \int_0^1 \hat\kappa_t(\xi,t)\left[P(x,\xi,t) + M(x,\xi,t)\right]d\xi
+ \int_0^1 \hat\theta_t(\xi,t)\,N(x,\xi,t)\,d\xi
+ \vartheta^T(x,t)\,\dot{\hat\nu}(t) \tag{12.62a}
\]
\[
\hat z(1,t) = \hat\rho(t)U(t) - r(t) + \int_0^1 \hat\kappa(\xi,t)\left(p_1(\xi,t) + a(\xi,t)\right)d\xi
+ \int_0^1 \hat\theta(\xi,t)\,b(1-\xi,t)\,d\xi \tag{12.62b}
\]
\[
\hat z(x,0) = \hat z_0(x) \tag{12.62c}
\]

with ẑ0 ∈ B([0, 1]), and where the term in the first integral of (12.62a) is zero after a finite time t1. Moreover, we have
\[
y_0(t) = z(0,t) + b(0,t)
= \rho\psi(0,t) + \int_0^1 \theta(\xi)\left[\phi(1-\xi,t) + n_0(\xi,t)\right]d\xi
+ \int_0^1 \kappa(\xi)\left[p_0(\xi,t) + m_0(\xi,t)\right]d\xi
+ \vartheta^T(0,t)\,\nu + \epsilon(0,t) \tag{12.63}
\]

where the error term ε(0, t) converges to zero in a finite time tF = t1 + t2. From (12.63), we propose the adaptive laws
\[
\dot{\hat\rho}(t) = \mathrm{proj}_{\underline\rho,\bar\rho}\!\left(\gamma_1\,\frac{\hat\epsilon(0,t)\,\psi(0,t)}{1+f^2(t)},\,\hat\rho(t)\right) \tag{12.64a}
\]
\[
\hat\theta_t(x,t) = \mathrm{proj}_{\underline\theta,\bar\theta}\!\left(\gamma_2(x)\,\frac{\hat\epsilon(0,t)\left(\phi(1-x,t) + n_0(x,t)\right)}{1+f^2(t)},\,\hat\theta(x,t)\right) \tag{12.64b}
\]
\[
\hat\kappa_t(x,t) = \mathrm{proj}_{\underline\kappa,\bar\kappa}\!\left(\gamma_3(x)\,\frac{\hat\epsilon(0,t)\left(p_0(x,t) + m_0(x,t)\right)}{1+f^2(t)},\,\hat\kappa(x,t)\right) \tag{12.64c}
\]
\[
\dot{\hat\nu}(t) = \mathrm{proj}_{\underline\nu,\bar\nu}\!\left(\Gamma_4\,\frac{\hat\epsilon(0,t)\,\vartheta(0,t)}{1+f^2(t)},\,\hat\nu(t)\right) \tag{12.64d}
\]
\[ \hat\rho(0) = \hat\rho_0 \tag{12.64e} \]
\[ \hat\theta(x,0) = \hat\theta_0(x) \tag{12.64f} \]
\[ \hat\kappa(x,0) = \hat\kappa_0(x) \tag{12.64g} \]
\[ \hat\nu(0) = \hat\nu_0 \tag{12.64h} \]

where
\[
\hat\epsilon(0,t) = z(0,t) - \hat z(0,t) = y_0(t) - b(0,t) - \hat z(0,t) \tag{12.65a}
\]
\[
f^2(t) = \psi^2(0,t) + \|\phi(t)\|^2 + \|p_0(t)\|^2 + \|m_0(t)\|^2 + \|n_0(t)\|^2 + |\vartheta(0,t)|^2 \tag{12.65b}
\]

and γ1 > 0, γ2(x), γ3(x) > 0 for all x ∈ [0, 1] and Γ4 > 0 are design gains. The initial conditions are chosen inside the feasible domain
\[ \underline\rho \le \hat\rho_0 \le \bar\rho \tag{12.66a} \]
\[ \underline\theta \le \hat\theta_0(x) \le \bar\theta, \ \forall x \in [0,1] \tag{12.66b} \]
\[ \underline\kappa \le \hat\kappa_0(x) \le \bar\kappa, \ \forall x \in [0,1] \tag{12.66c} \]
\[ \underline\nu_i \le \hat\nu_{i,0} \le \bar\nu_i, \ i = 1\ldots(2n+1) \tag{12.66d} \]
for
\[
\hat\nu_0 = \begin{bmatrix} \hat\nu_{1,0} & \hat\nu_{2,0} & \cdots & \hat\nu_{2n+1,0} \end{bmatrix}^T \tag{12.67}
\]

and the projection operator is defined in Appendix A. We note that
\[
|\vartheta(0,t)|^2 = n + 1 \tag{12.68}
\]
for t ≥ t2.
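A simple discretized version of the projection used in (12.64) zeroes any update that would push an estimate past its bound. The operator in Appendix A may be a smoothed variant; this stop-at-the-boundary form is only a sketch:

```python
import numpy as np

def proj(tau, x_hat, lo, hi):
    """Pass tau through unless x_hat sits at a bound and tau points outside it."""
    tau = np.atleast_1d(np.asarray(tau, dtype=float))
    x_hat = np.atleast_1d(np.asarray(x_hat, dtype=float))
    out = tau.copy()
    out[(x_hat >= hi) & (tau > 0.0)] = 0.0   # at upper bound, block increase
    out[(x_hat <= lo) & (tau < 0.0)] = 0.0   # at lower bound, block decrease
    return out

# e.g. a sampled theta-estimate saturated at its upper bound keeps only inward updates
update = proj([0.5, -0.5, 0.3], [100.0, 0.0, 50.0], 0.0, 100.0)
```

This construction preserves the key property −x̃ᵀ proj(τ, x̂) ≤ −x̃ᵀτ used in the proof of Lemma 12.6.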

Lemma 12.6 The adaptive laws (12.64) with initial conditions satisfying (12.66) have the following properties:
\[ \underline\rho \le \hat\rho(t) \le \bar\rho, \ t \ge 0 \tag{12.69a} \]
\[ \underline\theta \le \hat\theta(x,t) \le \bar\theta, \ \forall x \in [0,1], \ t \ge 0 \tag{12.69b} \]
\[ \underline\kappa \le \hat\kappa(x,t) \le \bar\kappa, \ \forall x \in [0,1], \ t \ge 0 \tag{12.69c} \]
\[ \underline\nu_i \le \hat\nu_i(t) \le \bar\nu_i, \ i = 1\ldots(2n+1), \ t \ge 0 \tag{12.69d} \]
\[ \dot{\hat\rho}, \|\hat\theta_t\|, \|\hat\kappa_t\|, \dot{\hat\nu} \in \mathcal L_\infty \cap \mathcal L_2 \tag{12.69e} \]
\[ \sigma \in \mathcal L_\infty \cap \mathcal L_2 \tag{12.69f} \]
where ρ̃ = ρ − ρ̂, θ̃ = θ − θ̂, κ̃ = κ − κ̂, ν̃ = ν − ν̂, and
\[
\sigma(t) = \frac{\hat\epsilon(0,t)}{\sqrt{1+f^2(t)}} \tag{12.70}
\]
with f² given in (12.65b).

Proof The properties (12.69a)–(12.69d) follow from the projection operator used in (12.64) and the conditions (12.66). Consider the Lyapunov function candidate
\[
V(t) = \frac{1}{2\gamma_1}\tilde\rho^2(t) + \frac{1}{2}\int_0^1 \gamma_2^{-1}(x)\,\tilde\theta^2(x,t)\,dx
+ \frac{1}{2}\int_0^1 \gamma_3^{-1}(x)\,\tilde\kappa^2(x,t)\,dx + \frac{1}{2}\tilde\nu^T(t)\Gamma_4^{-1}\tilde\nu(t). \tag{12.71}
\]
Differentiating with respect to time, inserting the adaptive laws (12.64) and using the property −ν̃ᵀ proj(τ, ν̂) ≤ −ν̃ᵀτ (Lemma A.1 in Appendix A), and similarly for ρ̂, θ̂ and κ̂, we get
\[
\dot V(t) \le -\frac{\hat\epsilon(0,t)}{1+f^2(t)}\left[\tilde\rho(t)\psi(0,t)
+ \int_0^1 \Bigl(\tilde\theta(x,t)\left(\phi(1-x,t) + n_0(x,t)\right)
+ \tilde\kappa(x,t)\left(p_0(x,t) + m_0(x,t)\right)\Bigr)dx
+ \vartheta^T(0,t)\,\tilde\nu(t)\right]. \tag{12.72}
\]

We note that
\[
\hat\epsilon(0,t) = \epsilon(0,t) + \tilde\rho(t)\psi(0,t)
+ \int_0^1 \tilde\theta(\xi,t)\left(\phi(1-\xi,t) + n_0(\xi,t)\right)d\xi
+ \int_0^1 \tilde\kappa(\xi,t)\left(p_0(\xi,t) + m_0(\xi,t)\right)d\xi
+ \vartheta^T(0,t)\,\tilde\nu(t), \tag{12.73}
\]

where (0, t) = 0 for t ≥ t1 + t2 = t F , and inserting this into (12.72), we obtain

V̇ (t) ≤ −σ 2 (t) (12.74)

for t ≥ t1 + t2 . This proves that V is bounded and nonincreasing for t ≥ t F , and


hence has a limit as t → ∞. Integrating (12.74) from zero to infinity gives

σ ∈ L2 . (12.75)

Using (12.73), we obtain, for t ≥ t1,
\[
|\sigma(t)| = \frac{|\hat\epsilon(0,t)|}{\sqrt{1+f^2(t)}}
\le |\tilde\rho(t)|\,\frac{|\psi(0,t)|}{\sqrt{1+f^2(t)}}
+ \|\tilde\theta(t)\|\,\frac{\|\phi(t)\| + \|n_0(t)\|}{\sqrt{1+f^2(t)}}
+ \|\tilde\kappa(t)\|\,\frac{\|p_0(t)\| + \|m_0(t)\|}{\sqrt{1+f^2(t)}}
+ |\tilde\nu(t)|\,\frac{|\vartheta(0,t)|}{\sqrt{1+f^2(t)}}
\le |\tilde\rho(t)| + \|\tilde\theta(t)\| + \|\tilde\kappa(t)\| + |\tilde\nu(t)| \tag{12.76}
\]
which gives
\[
\sigma \in \mathcal L_\infty. \tag{12.77}
\]

From the adaptive laws (12.64), we have
\[
|\dot{\hat\rho}(t)| \le \gamma_1\,\frac{|\hat\epsilon(0,t)|}{\sqrt{1+f^2(t)}}\,\frac{|\psi(0,t)|}{\sqrt{1+f^2(t)}} \le \gamma_1|\sigma(t)| \tag{12.78a}
\]
\[
\|\hat\theta_t(t)\| \le \|\gamma_2\|\,\frac{|\hat\epsilon(0,t)|}{\sqrt{1+f^2(t)}}\,\frac{\|\phi(t)\| + \|n_0(t)\|}{\sqrt{1+f^2(t)}} \le \|\gamma_2\||\sigma(t)| \tag{12.78b}
\]
\[
\|\hat\kappa_t(t)\| \le \|\gamma_3\|\,\frac{|\hat\epsilon(0,t)|}{\sqrt{1+f^2(t)}}\,\frac{\|p_0(t)\| + \|m_0(t)\|}{\sqrt{1+f^2(t)}} \le \|\gamma_3\||\sigma(t)| \tag{12.78c}
\]
\[
|\dot{\hat\nu}(t)| \le |\Gamma_4|\,\frac{|\hat\epsilon(0,t)|}{\sqrt{1+f^2(t)}}\,\frac{|\vartheta(0,t)|}{\sqrt{1+f^2(t)}} \le |\Gamma_4||\sigma(t)| \tag{12.78d}
\]
which, along with (12.69f), gives (12.69e). □

12.2.6 Control Law

Consider the control law
\[
U(t) = \frac{1}{\hat\rho(t)}\left[r(t) + \int_0^1 \hat g(1-\xi,t)\,\hat z(\xi,t)\,d\xi
- \int_0^1 \hat\kappa(\xi,t)\left(p_1(\xi,t) + a(\xi,t)\right)d\xi
- \int_0^1 \hat\theta(\xi,t)\,b(1-\xi,t)\,d\xi - \chi^T(t)\,\hat\nu(t)\right] \tag{12.79}
\]

where ẑ is generated using (12.60), and ĝ is the on-line solution to the Volterra integral equation
\[
\hat g(x,t) = \int_0^x \hat g(x-\xi,t)\,\hat\theta(\xi,t)\,d\xi - \hat\theta(x,t), \tag{12.80}
\]
with ρ̂, θ̂, κ̂ and ν̂ generated from the adaptive laws (12.64).

Theorem 12.1 Consider system (12.1), filters (12.48) and (12.49), reference model
(12.31), and adaptive laws (12.64). Suppose Assumption 12.2 holds. Then the control
law (12.79) guarantees (12.5), and

||u||, ||v||, ||ψ||, ||φ||, ||P|| ∈ L∞ (12.81a)


||u||∞ , ||v||∞ , ||ψ||∞ , ||φ||∞ , ||P||∞ ∈ L∞ . (12.81b)

This theorem is proved in Sect. 12.2.8, but first, we introduce a backstepping transformation which facilitates a Lyapunov analysis, and also establish some useful properties.

12.2.7 Backstepping

Consider the transformation
\[
\eta(x,t) = \hat z(x,t) - \int_0^x \hat g(x-\xi,t)\,\hat z(\xi,t)\,d\xi = T[\hat z](x,t) \tag{12.82}
\]
where ĝ is the solution to
\[
\hat g(x,t) = -T[\hat\theta](x,t) = \int_0^x \hat g(x-\xi,t)\,\hat\theta(\xi,t)\,d\xi - \hat\theta(x,t). \tag{12.83}
\]
The transformation (12.82) is invertible.


Consider also the target system

ηt (x, t) − μ̄ηx (x, t) = −μ̄ĝ(x, t)ˆ(0, t)


 1 
+T κ̂(ξ, t) [Pt (x, ξ, t) − μ̄Px (x, ξ, t)] dξ (x, t)
0
˙
+ ρ̂(t)T [ψ] (x, t)
 1 
+T θ̂t (ξ, t)φ(1 − (ξ − x), t)dξ (x, t)
x
244 12 Model Reference Adaptive Control
 1 
+T κ̂t (ξ, t) [P(x, ξ, t) + M(x, ξ, t)] dξ (x, t)
0
 
1  
+T ˙
θ̂t (ξ, t)N (x, ξ, t)dξ (x, t) + T ϑT (x, t)ν̂(t)
0
 x
− ĝt (x − ξ, t)T −1 [η](ξ, t)dξ (12.84a)
0
η(1, t) = 0 (12.84b)
η(x, 0) = η0 (x) (12.84c)

for some initial condition η0 ∈ B([0, 1]).

Lemma 12.7 The transformation (12.82) and controller (12.79) map system (12.62)
into (12.84).

Proof Differentiating (12.82) with respect to time, inserting the dynamics (12.62a) and integrating by parts, we obtain
\[
\hat z_t(x,t) = \eta_t(x,t) + \bar\mu\hat g(0,t)\hat z(x,t) - \bar\mu\hat g(x,t)\hat z(0,t)
+ \bar\mu\int_0^x \hat g_x(x-\xi,t)\,\hat z(\xi,t)\,d\xi
+ \bar\mu\int_0^x \hat g(x-\xi,t)\,\hat\theta(\xi,t)\,z(0,t)\,d\xi
+ \int_0^x \hat g(x-\xi,t)\int_0^1 \hat\kappa(s,t)\left[P_t(\xi,s,t) - \bar\mu P_x(\xi,s,t)\right]ds\,d\xi
+ \dot{\hat\rho}(t)\int_0^x \hat g(x-\xi,t)\,\psi(\xi,t)\,d\xi
+ \int_0^x \hat g(x-\xi,t)\int_\xi^1 \hat\theta_t(s,t)\,\phi(1-(s-\xi),t)\,ds\,d\xi
+ \int_0^x \hat g(x-\xi,t)\int_0^1 \hat\kappa_t(s,t)\left[P(\xi,s,t) + M(\xi,s,t)\right]ds\,d\xi
+ \int_0^x \hat g(x-\xi,t)\int_0^1 \hat\theta_t(s,t)\,N(\xi,s,t)\,ds\,d\xi
+ \int_0^x \hat g(x-\xi,t)\,\vartheta^T(\xi,t)\,\dot{\hat\nu}(t)\,d\xi
+ \int_0^x \hat g_t(x-\xi,t)\,\hat z(\xi,t)\,d\xi. \tag{12.85}
\]

Equivalently, differentiating (12.82) with respect to space, we find
\[
\hat z_x(x,t) = \eta_x(x,t) + \hat g(0,t)\hat z(x,t) + \int_0^x \hat g_x(x-\xi,t)\,\hat z(\xi,t)\,d\xi. \tag{12.86}
\]
0

Inserting these results into (12.62a) yields

ηt(x, t) − μ̄ηx(x, t) − [μ̄θ̂(x, t) − ∫_0^x ĝ(x − ξ, t)μ̄θ̂(ξ, t)dξ]ê(0, t)
 − ∫_0^1 κ(ξ)[Pt(x, ξ, t) − μ̄Px(x, ξ, t)]dξ
 + ∫_0^x ĝ(x − ξ, t) ∫_0^1 κ(s)[Pt(ξ, s, t) − μ̄Px(ξ, s, t)]ds dξ
 − ρ̂̇(t)ψ(x, t) + ∫_0^x ĝ(x − ξ, t)ρ̂̇(t)ψ(ξ, t)dξ
 − ∫_x^1 θ̂t(ξ, t)φ(1 − (ξ − x), t)dξ
 + ∫_0^x ĝ(x − ξ, t) ∫_ξ^1 θ̂t(s, t)φ(1 − (s − ξ), t)ds dξ
 − ∫_0^1 κ̂t(ξ, t)[P(x, ξ, t) + M(x, ξ, t)]dξ
 + ∫_0^x ĝ(x − ξ, t) ∫_0^1 κ̂t(s, t)[P(ξ, s, t) + M(ξ, s, t)]ds dξ
 − ∫_0^1 θ̂t(ξ, t)N(x, ξ, t)dξ + ∫_0^x ĝ(x − ξ, t) ∫_0^1 θ̂t(s, t)N(ξ, s, t)ds dξ
 − ϑᵀ(x, t)ν̂̇(t) + ∫_0^x ĝ(x − ξ, t)ϑᵀ(ξ, t)ν̂̇(t)dξ
 + ∫_0^x ĝt(x − ξ, t)ẑ(ξ, t)dξ = 0 (12.87)

which can be rewritten as (12.84a). The boundary condition (12.84b) follows from
inserting x = 1 into (12.82), and using (12.62b) and (12.79). 

12.2.8 Proof of Theorem 12.1

First, we note that since r, χ ∈ L∞ , we have

||a||∞, ||b||∞, ||m0||∞, ||n0||∞ ∈ L∞ (12.88a)
||M||∞, ||N||∞ ∈ L∞ (12.88b)
||a||, ||b||, ||M||, ||N||, ||m0||, ||n0|| ∈ L∞ (12.88c)
||ϑ||∞ ∈ L∞ (12.88d)
||ϑ|| ∈ L∞. (12.88e)

Moreover, since θ̂ is bounded by projection, we have from (12.82), (12.83) and Theorem 1.3 the following inequalities

||ĝ(t)|| ≤ ḡ, ||η(t)|| ≤ G 1 ||ẑ(t)||, ||ẑ(t)|| ≤ G 2 ||η(t)|| (12.89)

for all t ≥ 0, and for some positive constants ḡ, G 1 and G 2 , and

||ĝt || ∈ L2 ∩ L∞ . (12.90)

Consider the functionals


V1(t) = μ̄⁻¹ ∫_0^1 (1 + x)η²(x, t)dx (12.91a)
V2(t) = μ̄⁻¹ ∫_0^1 (1 + x)φ²(x, t)dx (12.91b)
V3(t) = λ̄⁻¹ ∫_0^1 ∫_0^1 (2 − ξ)P²(x, ξ, t)dξ dx (12.91c)
V4(t) = λ̄⁻¹ ∫_0^1 (2 − x)p0²(x, t)dx (12.91d)
V5(t) = λ̄⁻¹ ∫_0^1 (2 − x)p1²(x, t)dx (12.91e)
V6(t) = μ̄⁻¹ ∫_0^1 (1 + x)ψ²(x, t)dx. (12.91f)

The following result is proved in Appendix E.8.

Lemma 12.8 There exist positive constants h1, h2, ..., h7 and nonnegative, integrable functions l1, l2, ..., l9 such that

V̇1(t) ≤ −η²(0, t) − (μ̄/4)V1(t) + h1σ²(t)ψ²(0, t) + l1(t)V1(t) + l2(t)V2(t)
 + l3(t)V3(t) + l4(t)V4(t) + l5(t)V6(t) + l6(t) (12.92a)
V̇2(t) ≤ −φ²(0, t) + 4η²(0, t) − (μ̄/2)V2(t) + 4σ²(t)ψ²(0, t)
 + l7(t)V2(t) + l8(t)V4(t) + l9(t) (12.92b)
V̇3(t) ≤ −(λ̄/2)V3(t) + 2μ̄V2(t) (12.92c)
V̇4(t) ≤ 2φ²(0, t) − (λ̄/2)V4(t) (12.92d)
V̇5(t) ≤ 4η²(0, t) − (λ̄/2)V5(t) + 4σ²(t)ψ²(0, t)
 + l7(t)V2(t) + l8(t)V4(t) + l9(t) (12.92e)
V̇6(t) ≤ −ψ²(0, t) − (μ̄/2)V6(t) + h2r²(t) + h3V1(t) + h4V5(t)
 + h5||a(t)||² + h6||b(t)||² + h7||χ(t)||². (12.92f)

Now forming

V7 (t) = 64V1 (t) + 8V2 (t) + V3 (t) + 4V4 (t) + 8V5 (t) + 2k1 V6 (t) (12.93)

where

k1 = min{μ̄h3⁻¹, λ̄h4⁻¹}, (12.94)

and using Lemma 12.8, we obtain

V̇7(t) ≤ −cV7(t) + l10(t)V7(t) + l11(t) − [2k1 − 64(1 + h1)σ²(t)]ψ²(0, t)
 + 2k1h2r²(t) + 2k1h5||a(t)||² + 2k1h6||b(t)||² + 2k1h7||χ(t)||² (12.95)

for some positive constant c and integrable functions l10 and l11. The terms in
r, ||a||, ||b|| and ||χ|| are all bounded by Assumption 12.2.
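As a quick bookkeeping check (ours; the text leaves it implicit), one can tally how the weights in (12.93) combine the boundary terms of Lemma 12.8 into (12.95):

\begin{align*}
\eta^2(0,t):\quad & -64 + 8\cdot 4 + 8\cdot 4 = 0,\\
\phi^2(0,t):\quad & -8 + 4\cdot 2 = 0,\\
\psi^2(0,t):\quad & -2k_1 + \left(64 h_1 + 8\cdot 4 + 8\cdot 4\right)\sigma^2(t) = -\left(2k_1 - 64(1+h_1)\sigma^2(t)\right),
\end{align*}

while the choice of k1 in (12.94) gives 2k1h3V1(t) ≤ 2μ̄V1(t) and 2k1h4V5(t) ≤ 2λ̄V5(t), which are dominated by the decay terms contributed by 64V̇1 and 8V̇5, leaving the strictly negative term −cV7(t).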
Moreover, from the inequality (12.76) and the definition of V in (12.71), we have,
for t ≥ t1

σ 2 (t) ≤ ρ̃2 (t) ≤ 2γ1 V (t). (12.96)

Lemma B.4 in Appendix B then gives V7 ∈ L∞ and

||η||, ||φ||, ||P||, || p0 ||, || p1 ||, ||ψ|| ∈ L∞ (12.97)

and from the invertibility of the transformation (12.82), we will also have

||ẑ|| ∈ L∞ . (12.98)

From the definition of the filter ψ in (12.48a) and the control law U in (12.79), we
then have U ∈ L∞ , and

||ψ||∞ ∈ L∞ (12.99)

and particularly, ψ(0, ·) ∈ L∞ . Now forming

V8 (t) = 64V1 (t) + 8V2 (t) + V3 (t) + 4V4 (t) + 8V5 (t) (12.100)

we obtain in a similar way

V̇8(t) ≤ −c̄V8(t) + l12(t)V8(t) + l13(t) + 64(1 + h1)σ²(t)ψ²(0, t) (12.101)

for some positive constant c̄ and integrable functions l12 and l13 . Since σ 2 ∈ L1 and
ψ(0, ·) ∈ L∞ , the latter term is integrable, and hence

V̇8 (t) ≤ −c̄V8 (t) + l12 (t)V8 (t) + l14 (t) (12.102)

for an integrable function l14 . Lemma B.3 in Appendix B gives

V8 ∈ L1 ∩ L∞ , V8 → 0 (12.103)

and hence

||η||, ||φ||, ||P||, || p0 ||, || p1 || ∈ L∞ ∩ L2 (12.104a)


||η||, ||φ||, ||P||, || p0 ||, || p1 || → 0. (12.104b)

From the invertibility of (12.82), it then follows that

||ẑ|| ∈ L∞ ∩ L2 , ||ẑ|| → 0. (12.105)

From the invertibility of the transformations, and the fact that ||a|| and ||b|| are
bounded, we obtain

||u||, ||v|| ∈ L∞ . (12.106)

We proceed by proving pointwise boundedness. From (12.53b), (12.55b) and Lemma 12.5, we have

z(x, t) = ρψ(x, t) − b(x, t) + ∫_x^1 θ(ξ)φ(1 − (ξ − x), t)dξ
 + ∫_0^1 κ(ξ)[P(x, ξ, t) + M(x, ξ, t)]dξ
 + ∫_0^1 θ(ξ)N(x, ξ, t)dξ + ϑᵀ(x, t)ν + ε(x, t) (12.107)

where ε ≡ 0 for t ≥ t_F. From this we find

||z||∞ ∈ L∞ , (12.108)

and specifically z(0, ·) ∈ L∞ . The definition of the filters (12.48) yields

||φ||∞ , ||P||∞ , || p0 ||∞ , || p1 ||∞ ∈ L∞ . (12.109)



From (12.53a) and (12.55a), we get

||w||∞ ∈ L∞ . (12.110)

From the invertibility of the transformations in Lemmas 12.1–12.4 and since a and
b are pointwise bounded, we finally get

||u||∞ , ||v||∞ ∈ L∞ . (12.111)

Lastly, we prove that the tracking goal (12.5) is achieved. By solving (12.48b),
we find

φ(x, t) = φ(1, t − t2 (1 − x)) = z(0, t − t2 (1 − x)) (12.112)

for t ≥ t2(1 − x). Moreover, we have

||φ(t)||² = ∫_0^1 φ²(x, t)dx = ∫_0^1 z²(0, t − t2(1 − x))dx → 0 (12.113)

for t ≥ t2 (substituting s = t − t2(1 − x) turns this into a sliding-window integral of z²(0, ·)), which proves that


∫_t^{t+T} z²(0, s)ds → 0 (12.114)

for any T > 0, and from the definition of z(0, t) in (12.35g), this implies that

∫_t^{t+T} (y0(s) − yr(s))²ds → 0 (12.115)

for any T > 0. □

12.3 Adaptive Output-Feedback Stabilization in the Disturbance-Free Case

The adaptive output feedback controller in the disturbance-free case (d1 = d2 ≡


0, d3 = d4 = d5 ≡ 0) is obtained from the model reference adaptive controller of
Theorem 12.1 by simply setting r ≡ 0, b0 ≡ 0 and M0 ≡ 0. Moreover, this controller
also gives the desirable property of square integrability and asymptotic convergence
to zero of the system states pointwise in space. Consider the control law
U(t) = (1/ρ̂(t))[∫_0^1 ĝ(1 − ξ, t)ẑ(ξ, t)dξ − ∫_0^1 κ̂(ξ, t)p1(ξ, t)dξ] (12.116)

where ẑ is generated using (12.60), and ĝ is the on-line solution to the Volterra integral equation (12.80), with ρ̂, θ̂ and κ̂ generated using the adaptive laws (12.64).

Theorem 12.2 Consider system (12.1), filters (12.48) and (12.49), and the adap-
tive laws (12.64). Suppose d1 = d2 ≡ 0, d3 = d4 = d5 ≡ 0. Then, the control law
(12.116) guarantees

||u||, ||û||, ||ψ||, ||φ||, ||P||,
||u||∞, ||û||∞, ||ψ||∞, ||φ||∞, ||P||∞ ∈ L2 ∩ L∞ (12.117a)
||u||, ||û||, ||ψ||, ||φ||, ||P||,
||u||∞, ||û||∞, ||ψ||∞, ||φ||∞, ||P||∞ → 0. (12.117b)

Proof From the proof of Theorem 12.1, we already know that

||η||, ||φ||, ||P||, || p0 ||, || p1 || ∈ L∞ ∩ L2 (12.118a)


||η||, ||φ||, ||P||, || p0 ||, || p1 || → 0. (12.118b)

From the control law (12.116) and the definition of the filter ψ in (12.48a), we
then have U ∈ L∞ ∩ L2 , U → 0, and

||ψ||, ||ψ||∞ ∈ L2 ∩ L∞ , ||ψ||, ||ψ||∞ → 0. (12.119)

Moreover, with r ≡ 0 and χ ≡ 0, Eq. (12.107) reduces to


z(x, t) = ρψ(x, t) + ∫_x^1 θ(ξ)φ(1 − (ξ − x), t)dξ
 + ∫_0^1 κ(ξ)P(x, ξ, t)dξ + ε(x, t) (12.120)

with ε ≡ 0 for t ≥ t_F, which gives

||z||, ||z||∞ ∈ L2 ∩ L∞ , ||z||, ||z||∞ → 0. (12.121a)

In particular z(0, ·) ∈ L2 ∩ L∞ , z(0, ·) → 0, which from the definition of the filters


(12.48) yields

||φ||∞ , ||P||∞ , || p0 ||∞ , || p1 ||∞ ∈ L∞ (12.122a)


||φ||∞ , ||P||∞ , || p0 ||∞ , || p1 ||∞ → 0, (12.122b)

and from (12.53a) and (12.55a), we get

||w||∞ ∈ L2 ∩ L∞ , ||w||∞ → 0. (12.123)



From the invertibility of the transformations of Lemmas 12.1–12.4, this gives

||u||, ||v||, ||u||∞ , ||v||∞ ∈ L2 ∩ L∞ , ||u||, ||v||, ||u||∞ , ||v||∞ → 0. (12.124)

12.4 Simulations

System (12.1), reference model (12.31) and filters (12.48)–(12.51) are implemented
along with the adaptive laws (12.64) and the controller of Theorem 12.1. The system
parameters are set to

λ(x) = 1 + x, μ(x) = e^x (12.125a)
c1(x) = 1 + x, c2(x) = (1/2)(1 + sin(x)), q = 2 (12.125b)
and the disturbance terms to

d1(x, t) = (1/2)x[1 1 0]χ(t), d2(x, t) = (1/20)e^x[0 0 1]χ(t) (12.126a)
d3(t) = (1/4)[2 −1 1]χ(t), d4(t) = (1/4)[1 1 2]χ(t) (12.126b)
d5(t) = (1/4)[−1 −1 2]χ(t) (12.126c)

where

χ(t) = [1 sin(t) cos(t)]ᵀ. (12.127)

The signal r is set to

r(t) = 1 + sin((π/10)t) + 2 sin((√2/2)t), (12.128)

while the initial conditions of the system are set to

u 0 (x) = x, v0 (x) = sin(2πx). (12.129)

All initial conditions for the filters and parameter estimates are set to zero, except

ρ̂(0) = 1. (12.130)
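The book's simulation code is not listed; as a minimal illustration of how the transport terms in (12.1) can be marched in time, here is a first-order explicit upwind step for a scalar equation u_t + λ(x)u_x = 0 (the grid, CFL choice and function name are ours, not from the text):

```python
import numpy as np

def upwind_step(u, lam, dt, dx, inlet):
    """One explicit upwind step for u_t + lam(x) u_x = 0 on [0, 1].

    u     : samples of u(x, t) on a uniform grid
    lam   : samples of the (positive) transport speed lam(x)
    inlet : boundary value prescribed at x = 0, e.g. q*v(0, t) as in (12.1)
    """
    u_new = np.empty_like(u)
    # information travels rightwards, so use backward differences
    u_new[1:] = u[1:] - dt / dx * lam[1:] * (u[1:] - u[:-1])
    u_new[0] = inlet
    return u_new
```

With CFL number λdt/dx = 1 and constant λ the step is an exact grid shift, which is a convenient sanity check; source terms such as c1(x)v(x, t) can then be added explicitly.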

Fig. 12.1 Left: State norm. Right: Actuation signal

Fig. 12.2 Estimated parameters

Fig. 12.3 Reference model output yr(t) (solid black) and measured signal y0(t) (dashed red)

The adaptation gains are set to

γ1 = 5, γ2 = γ3 ≡ 5, Γ4 = 5I3 (12.131a)

with the bounds on ρ, θ, κ and ν set to

ρ̲ = 0.1, ρ̄ = 100, θ̲ = κ̲ = ν̲i = −100, θ̄ = κ̄ = ν̄i = 100 (12.132a)

for i = 1, ..., 3. The integral equation (12.80) is solved using successive approximations, as described in Appendix F.1.
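For concreteness, successive approximations for the Volterra equation (12.83)/(12.80) can be sketched as follows (the uniform grid, trapezoidal quadrature and function name are our choices; the book's actual routine is the one in Appendix F.1):

```python
import numpy as np

def solve_g(theta, n_iter=50):
    """Successive approximations for g(x) = ∫_0^x g(x-s) θ(s) ds − θ(x) on [0, 1].

    theta : samples of θ̂(x) on a uniform grid; returns samples of ĝ(x).
    """
    N = theta.size
    dx = 1.0 / (N - 1)
    g = -theta.copy()                      # initial guess g_0 = −θ̂
    for _ in range(n_iter):
        g_new = np.empty(N)
        g_new[0] = -theta[0]
        for i in range(1, N):
            w = g[i::-1] * theta[:i + 1]   # integrand g(x_i − s) θ(s), s = 0..x_i
            conv = dx * (w.sum() - 0.5 * (w[0] + w[-1]))  # trapezoidal rule
            g_new[i] = conv - theta[i]
        g = g_new
    return g
```

For constant θ̂ ≡ c the exact solution is ĝ(x) = −c e^{cx}, which the iteration reproduces to discretization accuracy.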
With the controller active, it is noted from Fig. 12.1 that the states and actuation signal are bounded. From Fig. 12.2, the estimated parameters ρ̂, θ̂, κ̂ and ν̂ are all bounded, but they do not stagnate and instead continuously adapt, even after the tracking goal is successfully reached at approximately 60 s, as seen in Fig. 12.3. Despite this, the controller manages to make the measured output track the output of the reference model. The reason for the non-stagnating estimates may be numerical issues from the discretization method used, but it is most likely that the values of θ and κ for which the tracking goal is achieved are not unique.
Part IV
n + 1 Systems
Chapter 13
Introduction

We now generalize the class of systems considered, and allow an arbitrary number of states convecting in one of the directions. These systems are referred to as n + 1 systems, where the phrase "n + 1" refers to the number of state variables: u is a vector containing n components convecting from x = 0 to x = 1, while v is a scalar convecting in the opposite direction. They are typically stated in the following form

u t (x, t) + Λ(x)u x (x, t) = Σ(x)u(x, t) + ω(x)v(x, t) (13.1a)


vt(x, t) − μ(x)vx(x, t) = ϖᵀ(x)u(x, t) + π(x)v(x, t) (13.1b)
u(0, t) = qv(0, t) (13.1c)
v(1, t) = cᵀu(1, t) + k1U(t) (13.1d)
u(x, 0) = u 0 (x) (13.1e)
v(x, 0) = v0 (x) (13.1f)
y0 (t) = k2 v(0, t) (13.1g)
y1 (t) = k3 u(1, t) (13.1h)

for the system states


u(x, t) = [u1(x, t) u2(x, t) ... un(x, t)]ᵀ, v(x, t) (13.2)

defined over x ∈ [0, 1], t ≥ 0. The system parameters are in the form

Λ(x) = diag{λ1(x), λ2(x), ..., λn(x)}, Σ(x) = {σij(x)}_{1≤i,j≤n} (13.3a)
ω(x) = [ω1(x) ω2(x) ... ωn(x)]ᵀ (13.3b)
ϖ(x) = [ϖ1(x) ϖ2(x) ... ϖn(x)]ᵀ (13.3c)
q = [q1 q2 ... qn]ᵀ, c = [c1 c2 ... cn]ᵀ (13.3d)

© Springer Nature Switzerland AG 2019 257


H. Anfinsen and O. M. Aamo, Adaptive Control of Hyperbolic PDEs,
Communications and Control Engineering,
https://doi.org/10.1007/978-3-030-05879-1_13

and are assumed to satisfy, for i, j = 1, 2, . . . , n

λi, μ ∈ C¹([0, 1]), λi(x), μ(x) > 0, ∀x ∈ [0, 1] (13.4a)
σij, ωi, ϖi ∈ C⁰([0, 1]), qi, ci ∈ ℝ (13.4b)
k1, k2, k3 ∈ ℝ \ {0}. (13.4c)

Moreover, the initial conditions


u0(x) = [u1,0(x) u2,0(x) ... un,0(x)]ᵀ, v0(x) (13.5)

are assumed to satisfy

u 0 , v0 ∈ B([0, 1]). (13.6)

The signal U (t) is an actuation signal.


Systems in the form (13.1) can be used to model systems of conservation laws
(Diagne et al. 2012), multi-phase flow phenomena (drift flux models Zuber 1965;
Di Meglio et al. 2012), gas-liquid flow in oil production systems (Di Meglio et al.
2011) and the linearized Saint-Venant–Exner model for open channels (Hudson and
Sweby 2003), to mention a few.
Full-state measurements are rarely available, and as with 2 × 2 systems, we dis-
tinguish between sensing taken at the boundary anti-collocated and collocated with
actuation, that is (13.1g) and (13.1h), respectively.
The designs offered in this part of the book are derived subject to some assumptions
on the transport speeds. In addition to the transport speeds λi (x), i = 1 . . . n, μ(x)
being positive for all x ∈ [0, 1], we will also always assume

−μ(x) < 0 < λ1 (x) ≤ λ2 (x) ≤ · · · ≤ λn (x), ∀x ∈ [0, 1] (13.7)

However, some of the designs to follow require the slightly more restrictive assump-
tion of

−μ(x) < 0 < λ1 (x) < λ2 (x) < · · · < λn (x), ∀x ∈ [0, 1]. (13.8)

Moreover, the following is frequently assumed

π≡0 (13.9)

and

σii ≡ 0, i = 1, 2, . . . , n, (13.10)

for the terms in (13.1a)–(13.1b). This is not a restriction, since these terms can be removed by scaling, as demonstrated for 2 × 2 systems in Chap. 7. This assumption sometimes makes the analysis far easier. In addition, we will sometimes not allow
scaling in the inputs and outputs, and assume that

k1 = k2 = k3 = 1. (13.11)
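To see how such a scaling works for the diagonal term π in (13.1b), note that (this computation, with our notation v̄, φ, is of the same type as those in Chap. 7) the invertible change of variable

\[
\bar v(x,t) = \phi(x)v(x,t), \qquad \phi(x) = \exp\left(\int_0^x \frac{\pi(s)}{\mu(s)}\,ds\right),
\]

satisfies, using \(\mu\phi' = \pi\phi\),

\[
\bar v_t(x,t) - \mu(x)\bar v_x(x,t) = \phi(x)\left(v_t - \mu v_x - \pi v\right) = \phi(x)\varpi^{T}(x)u(x,t),
\]

so the scaled system satisfies (13.9), with the coupling vector ϖ replaced by φϖ and the boundary parameters in (13.1d) rescaled by φ(1).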

We will proceed in the next chapter to derive non-adaptive state-feedback con-


trollers and state observers, for both sensing configurations, and combine these into
output-feedback controllers.

References

Diagne A, Bastin G, Coron J-M (2012) Lyapunov exponential stability of 1-D linear hyperbolic
systems of balance laws. Automatica 48:109–114
Di Meglio F, Kaasa G-O, Petit N, Alstad V (2011) Slugging in multiphase flow as a mixed initial-
boundary value problem for a quasilinear hyperbolic system. In: American control conference.
CA, USA, San Francisco
Di Meglio F, Vazquez R, Krstić M, Petit N (2012) Backstepping stabilization of an underactuated
3 × 3 linear hyperbolic system of fluid flow transport equations. In: American control conference.
Montreal, QC, Canada
Hudson J, Sweby P (2003) Formulations for numerically approximating hyperbolic systems gov-
erning sediment transport. J Sci Comput 19:225–252
Zuber N (1965) Average volumetric concentration in two-phase flow systems. J Heat Transf
87(4):453–468
Chapter 14
Non-adaptive Schemes

14.1 Introduction

In this chapter, a non-adaptive state feedback controller and boundary observers


will be derived for system (13.1), subject to assumptions (13.9) and (13.10). For
simplicity we also assume k1 = k2 = k3 = 1, which can be achieved by a scaling of
the actuation signal and measurements.
In Sect. 14.2, we present the state feedback stabilizing controller originally derived
in Di Meglio et al. (2013). It is derived under assumption (13.7). As with 2 × 2 sys-
tems, we will derive observers for system (13.1), and distinguish between observers
using boundary sensing collocated with actuation (13.1h) and anti-collocated with
actuation (13.1g). Only one of the measurements is needed to implement an observer.
It should, however, be noted that the measurement at x = 1, y1 (t) = u(1, t) is a vector
containing n components, while y0 (t) = v(0, t) is a scalar. Hence, using the sensing
y0 (t), only a single measurement is needed to estimate all the n + 1 distributed states
in the system.
The observer using sensing anti-collocated with actuation is given in Sect. 14.3.1.
It was originally derived in Di Meglio et al. (2013) and requires assumption (13.7).
Some time passed between the derivation of an observer using sensing anti-collocated with control and the development of results facilitating the design of an observer based on sensing collocated with control.
design, and these were first presented in Bin and Di Meglio (2017) for the case of
constant coefficients. This observer design is presented in Sect. 14.3.2, and requires
the slightly more restrictive assumption (13.8).
The observers are in Sect. 14.4 combined with the state-feedback controller
to obtain output-feedback controllers. An output-tracking problem is solved in
Sect. 14.5, providing a controller that makes the anti-collocated measurement (13.1g)
track an arbitrary, bounded reference signal. The performance of the controllers is
demonstrated in simulations in Sect. 14.6, and some concluding remarks are given
in Sect. 14.7.

© Springer Nature Switzerland AG 2019 261


H. Anfinsen and O. M. Aamo, Adaptive Control of Hyperbolic PDEs,
Communications and Control Engineering,
https://doi.org/10.1007/978-3-030-05879-1_14

14.2 State Feedback Controller

A stabilizing controller for system (13.1) is derived in Di Meglio et al. (2013). It


is based on a backstepping transformation and target system similar to (8.13) and
(8.26) used in the proof of Theorem 8.1.
Consider the control law
U(t) = −cᵀu(1, t) + ∫_0^1 K^u(1, ξ)u(ξ, t)dξ + ∫_0^1 K^v(1, ξ)v(ξ, t)dξ (14.1)

where
 
K^u(x, ξ) = [K^u_1(x, ξ) K^u_2(x, ξ) ... K^u_n(x, ξ)], K^v(x, ξ) (14.2)

satisfy the PDE

μ(x)K^u_x(x, ξ) − K^u_ξ(x, ξ)Λ(ξ) = K^u(x, ξ)Λ′(ξ) + K^u(x, ξ)Σ(ξ)
 + K^v(x, ξ)ϖᵀ(ξ) (14.3a)
μ(x)K^v_x(x, ξ) + K^v_ξ(x, ξ)μ(ξ) = K^u(x, ξ)ω(ξ) − K^v(x, ξ)μ′(ξ) (14.3b)
K^u(x, x)Λ(x) + μ(x)K^u(x, x) = −ϖᵀ(x) (14.3c)
μ(0)K^v(x, 0) = K^u(x, 0)Λ(0)q. (14.3d)

Note that K u in this case is a row vector. Well-posedness of Eq. (14.3) is guaranteed
by Theorem D.4 in Appendix D.
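Once the kernels have been computed, evaluating (14.1) on a grid amounts to a pair of quadratures; a minimal sketch (the grid, storage layout, trapezoidal rule and function name are our choices, not from the text):

```python
import numpy as np

def control_law(u, v, Ku, Kv, c):
    """Discretization of (14.1):
    U(t) = -c^T u(1,t) + ∫_0^1 K^u(1,ξ)u(ξ,t)dξ + ∫_0^1 K^v(1,ξ)v(ξ,t)dξ.

    u  : (n, N) samples of u(ξ, t);   v  : (N,) samples of v(ξ, t)
    Ku : (n, N) samples of K^u(1, ξ); Kv : (N,) samples of K^v(1, ξ)
    """
    dx = 1.0 / (v.size - 1)

    def trap(w):  # trapezoidal quadrature over [0, 1]
        return dx * (w.sum() - 0.5 * (w[0] + w[-1]))

    return -c @ u[:, -1] + trap((Ku * u).sum(axis=0)) + trap(Kv * v)
```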

Theorem 14.1 Consider system (13.1) subject to assumption (13.7). Let the con-
troller be taken as (14.1) where (K u , K v ) is the solution to (14.3). Then,

u ≡ 0, v≡0 (14.4)

for t ≥ t F , where
t_F = tu,1 + tv, tu,i = ∫_0^1 dγ/λi(γ), tv = ∫_0^1 dγ/μ(γ). (14.5)
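The propagation times in (14.5) are ordinary integrals and are easy to evaluate numerically; for example, with the speeds λ(x) = 1 + x and μ(x) = e^x used in the simulations of Sect. 12.4 (midpoint rule; the helper name is ours):

```python
import math

def propagation_time(speed, n=100000):
    """t = ∫_0^1 dγ / speed(γ), evaluated with the midpoint rule."""
    h = 1.0 / n
    return sum(h / speed((i + 0.5) * h) for i in range(n))

t_u = propagation_time(lambda g: 1.0 + g)      # exact value: ln 2
t_v = propagation_time(lambda g: math.exp(g))  # exact value: 1 - 1/e
```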

Proof As for the 2 × 2 case in Sect. 8.2, we will here provide two proofs of this
Theorem, where the first one uses the simplest backstepping transformation, while the
second one produces the simplest target system. The first proof is the one originally
given in Di Meglio et al. (2013), while the second proof is included since it employs
a target system that facilitates the model reference adaptive controller design in
Chap. 17.

Solution 1:
We will show that the backstepping transformation

α(x, t) = u(x, t) (14.6a)
β(x, t) = v(x, t) − ∫_0^x K^u(x, ξ)u(ξ, t)dξ − ∫_0^x K^v(x, ξ)v(ξ, t)dξ (14.6b)

from the variables u, v to the new variables


α(x, t) = [α1(x, t) α2(x, t) ... αn(x, t)]ᵀ, β(x, t), (14.7)

where (K^u, K^v) is the solution to the PDE (14.3), maps system (13.1) into the target system

αt(x, t) + Λ(x)αx(x, t) = Σ(x)α(x, t) + ω(x)β(x, t) + ∫_0^x B1(x, ξ)α(ξ, t)dξ
 + ∫_0^x b2(x, ξ)β(ξ, t)dξ (14.8a)
βt(x, t) − μ(x)βx(x, t) = 0 (14.8b)
α(0, t) = qβ(0, t) (14.8c)
β(1, t) = 0 (14.8d)
α(x, 0) = α0(x) (14.8e)
β(x, 0) = β0(x) (14.8f)

for some initial conditions


α0(x) = [α1,0(x) α2,0(x) ... αn,0(x)]ᵀ, β0(x) (14.9)

satisfying αi,0, β0 ∈ B([0, 1]), 1 ≤ i ≤ n, and system parameters B1 and b2, defined for (x, ξ) ∈ T = {(x, ξ) | 0 ≤ ξ ≤ x ≤ 1}, given as

B1(x, ξ) = ω(x)K^u(x, ξ) + ∫_ξ^x b2(x, s)K^u(s, ξ)ds (14.10a)
b2(x, ξ) = ω(x)K^v(x, ξ) + ∫_ξ^x b2(x, s)K^v(s, ξ)ds. (14.10b)

Differentiating (14.6b) with respect to time, inserting the dynamics (13.1a)–(13.1b), integrating by parts and using the boundary condition (13.1c), we find

vt(x, t) = βt(x, t) − K^u(x, x)Λ(x)u(x, t)
 + [K^u(x, 0)Λ(0)q − K^v(x, 0)μ(0)]v(0, t)
 + ∫_0^x [K^u_ξ(x, ξ)Λ(ξ) + K^u(x, ξ)Λ′(ξ)
 + K^u(x, ξ)Σ(ξ) + K^v(x, ξ)ϖᵀ(ξ)]u(ξ, t)dξ
 + ∫_0^x [−K^v_ξ(x, ξ)μ(ξ) − K^v(x, ξ)μ′(ξ) + K^u(x, ξ)ω(ξ)]v(ξ, t)dξ
 + K^v(x, x)μ(x)v(x, t). (14.11)

Similarly, differentiating (14.6b) with respect to space, we get

vx(x, t) = βx(x, t) + K^u(x, x)u(x, t) + K^v(x, x)v(x, t)
 + ∫_0^x K^u_x(x, ξ)u(ξ, t)dξ + ∫_0^x K^v_x(x, ξ)v(ξ, t)dξ. (14.12)

Substituting (14.11) and (14.12) into (13.1b) yields

0 = vt(x, t) − μ(x)vx(x, t) − ϖᵀ(x)u(x, t)
 = βt(x, t) − μ(x)βx(x, t) + [K^u(x, 0)Λ(0)q − K^v(x, 0)μ(0)]v(0, t)
 + ∫_0^x [−μ(x)K^u_x(x, ξ) + K^u_ξ(x, ξ)Λ(ξ) + K^u(x, ξ)Λ′(ξ)
 + K^u(x, ξ)Σ(ξ) + K^v(x, ξ)ϖᵀ(ξ)]u(ξ, t)dξ
 + ∫_0^x [−μ(x)K^v_x(x, ξ) − K^v_ξ(x, ξ)μ(ξ)
 − K^v(x, ξ)μ′(ξ) + K^u(x, ξ)ω(ξ)]v(ξ, t)dξ
 − [(Λ(x) + μ(x))K^u(x, x) + ϖᵀ(x)]u(x, t). (14.13)

Using Eq. (14.3), we obtain the dynamics (14.8b). Inserting the backstepping transformation (14.6) into the target system dynamics (14.8a), we find

0 = αt(x, t) + Λ(x)αx(x, t) − Σ(x)α(x, t) − ω(x)β(x, t)
 − ∫_0^x B1(x, ξ)α(ξ, t)dξ − ∫_0^x b2(x, ξ)β(ξ, t)dξ
 = ut(x, t) + Λ(x)ux(x, t) − Σ(x)u(x, t) − ω(x)v(x, t)
 + ω(x) ∫_0^x K^u(x, ξ)u(ξ, t)dξ + ω(x) ∫_0^x K^v(x, ξ)v(ξ, t)dξ
 − ∫_0^x B1(x, ξ)u(ξ, t)dξ − ∫_0^x b2(x, ξ)v(ξ, t)dξ
 + ∫_0^x b2(x, ξ) ∫_0^ξ K^u(ξ, s)u(s, t)ds dξ
 + ∫_0^x b2(x, ξ) ∫_0^ξ K^v(ξ, s)v(s, t)ds dξ. (14.14)

Changing the order of integration in the double integrals, (14.14) can be written as

0 = ut(x, t) + Λ(x)ux(x, t) − Σ(x)u(x, t) − ω(x)v(x, t)
 − ∫_0^x [B1(x, ξ) − ω(x)K^u(x, ξ) − ∫_ξ^x b2(x, s)K^u(s, ξ)ds]u(ξ, t)dξ
 − ∫_0^x [b2(x, ξ) − ω(x)K^v(x, ξ)
 − ∫_ξ^x b2(x, s)K^v(s, ξ)ds]v(ξ, t)dξ. (14.15)

Using (14.10) yields (13.1a). The boundary condition (14.8c) follows trivially from (13.1c) and the fact that u(0, t) = α(0, t) and v(0, t) = β(0, t). Evaluating (14.6b) at x = 1 and inserting the boundary condition (13.1d), we get

β(1, t) = U(t) + cᵀu(1, t) − ∫_0^1 K^u(1, ξ)u(ξ, t)dξ
 − ∫_0^1 K^v(1, ξ)v(ξ, t)dξ, (14.16)

from which the control law (14.1) gives the boundary condition (14.8d).
The target system (14.8) is a cascade from β into α. The subsystem in β will be zero for t ≥ tv, with tv defined in (14.5). System (14.8) is then reduced to

αt(x, t) + Λ(x)αx(x, t) = Σ(x)α(x, t) + ∫_0^x B1(x, ξ)α(ξ, t)dξ (14.17a)
α(0, t) = 0 (14.17b)
α(x, tv) = α_{tv}(x) (14.17c)

for some function α_{tv} ∈ B([0, 1]). System (14.17) will be zero after an additional time tu,1, corresponding to the slowest transport speed in Λ. Hence, for t ≥ tu,1 + tv = t_F, we will have α ≡ 0 and β ≡ 0, and the result follows from the invertibility of the backstepping transformation (14.6).
Solution 2:
This proof is based on Hu et al. (2015) and uses a somewhat more complicated backstepping transformation, with the advantage of producing a simpler target system that facilitates the adaptive controller designs in subsequent chapters. The proof requires assumptions (13.8) and (13.10), so it provides a proof of Theorem 14.1 with assumption (13.7) replaced by (13.8) and (13.10).
Consider the backstepping transformation
α(x, t) = u(x, t) − ∫_0^x K^{uu}(x, ξ)u(ξ, t)dξ − ∫_0^x K^{uv}(x, ξ)v(ξ, t)dξ (14.18a)
β(x, t) = v(x, t) − ∫_0^x K^u(x, ξ)u(ξ, t)dξ − ∫_0^x K^v(x, ξ)v(ξ, t)dξ (14.18b)

where (K u , K v ) satisfies (14.3) as before, and

K^{uu}(x, ξ) = {K^{uu}_{ij}(x, ξ)}_{i,j=1,2,...,n} (14.19a)
K^{uv}(x, ξ) = [K^{uv}_1(x, ξ) K^{uv}_2(x, ξ) ... K^{uv}_n(x, ξ)]ᵀ (14.19b)

satisfy the PDE

Λ(x)K^{uu}_x(x, ξ) + K^{uu}_ξ(x, ξ)Λ(ξ) = −K^{uu}(x, ξ)Λ′(ξ) − K^{uu}(x, ξ)Σ(ξ)
 − K^{uv}(x, ξ)ϖᵀ(ξ) (14.20a)
Λ(x)K^{uv}_x(x, ξ) − K^{uv}_ξ(x, ξ)μ(ξ) = −K^{uu}(x, ξ)ω(ξ)
 + K^{uv}(x, ξ)μ′(ξ) (14.20b)
Λ(x)K^{uu}(x, x) − K^{uu}(x, x)Λ(x) = Σ(x) (14.20c)
Λ(x)K^{uv}(x, x) + K^{uv}(x, x)μ(x) = ω(x). (14.20d)

Note that K^{uu} is a matrix, while K^{uv} is a column vector. The PDE (14.20) is under-determined, and to ensure well-posedness, we add the boundary conditions

K^{uu}_{ij}(x, 0) = k^{uu,1}_{ij}(x), 1 ≤ j ≤ i ≤ n (14.21a)
K^{uu}_{ij}(1, ξ) = k^{uu,2}_{ij}(ξ), 1 ≤ i < j ≤ n (14.21b)

for some arbitrary functions k^{uu,1}_{ij}, k^{uu,2}_{ij}. The well-posedness of (14.20)–(14.21) now
follows from Theorem D.5 in Appendix D.5. We will show that the backstepping
transformation (14.18) and control law (14.1) map system (13.1) into the target
system

αt (x, t) + Λ(x)αx (x, t) = g(x)β(0, t) (14.22a)


βt (x, t) − μ(x)βx (x, t) = 0 (14.22b)
α(0, t) = qβ(0, t) (14.22c)
β(1, t) = 0 (14.22d)
α(x, 0) = α0 (x) (14.22e)

β(x, 0) = β0 (x) (14.22f)

with g given as

g(x) = K uv (x, 0)μ(0) − K uu (x, 0)Λ(0)q. (14.23)

By differentiating (14.18a) with respect to time, inserting the dynamics (13.1a)–(13.1b), integrating by parts and inserting the boundary condition (13.1c), we get

ut(x, t) = αt(x, t) + K^{uv}(x, x)μ(x)v(x, t) − K^{uu}(x, x)Λ(x)u(x, t)
 + ∫_0^x [K^{uu}_ξ(x, ξ)Λ(ξ) + K^{uu}(x, ξ)Λ′(ξ)
 + K^{uu}(x, ξ)Σ(ξ) + K^{uv}(x, ξ)ϖᵀ(ξ)]u(ξ, t)dξ
 + ∫_0^x [−K^{uv}_ξ(x, ξ)μ(ξ) + K^{uu}(x, ξ)ω(ξ)
 − K^{uv}(x, ξ)μ′(ξ)]v(ξ, t)dξ
 − [K^{uv}(x, 0)μ(0) − K^{uu}(x, 0)Λ(0)q]v(0, t). (14.24)

Similarly, from differentiating with respect to space, we get

ux(x, t) = αx(x, t) + K^{uu}(x, x)u(x, t) + K^{uv}(x, x)v(x, t)
 + ∫_0^x K^{uu}_x(x, ξ)u(ξ, t)dξ + ∫_0^x K^{uv}_x(x, ξ)v(ξ, t)dξ. (14.25)

Inserting (14.24) and (14.25) into the dynamics (13.1a) gives

0 = αt(x, t) + Λ(x)αx(x, t)
 + [Λ(x)K^{uu}(x, x) − K^{uu}(x, x)Λ(x) − Σ(x)]u(x, t)
 − [K^{uv}(x, 0)μ(0) − K^{uu}(x, 0)Λ(0)q]v(0, t)
 + [Λ(x)K^{uv}(x, x) + K^{uv}(x, x)μ(x) − ω(x)]v(x, t)
 + ∫_0^x [Λ(x)K^{uu}_x(x, ξ) + K^{uu}_ξ(x, ξ)Λ(ξ) + K^{uu}(x, ξ)Λ′(ξ)
 + K^{uv}(x, ξ)ϖᵀ(ξ) + K^{uu}(x, ξ)Σ(ξ)]u(ξ, t)dξ
 + ∫_0^x [Λ(x)K^{uv}_x(x, ξ) − K^{uv}_ξ(x, ξ)μ(ξ) + K^{uu}(x, ξ)ω(ξ)
 − K^{uv}(x, ξ)μ′(ξ)]v(ξ, t)dξ. (14.26)

Using Eq. (14.20) yields the target system dynamics (14.22a) with g given by (14.23). The rest of the proof follows the same steps as in Solution 1. □

14.3 State Observers

14.3.1 Sensing Anti-collocated with Actuation

This observer design was originally presented in Di Meglio et al. (2013). Consider
the observer

ût(x, t) + Λ(x)ûx(x, t) = Σ(x)û(x, t) + ω(x)v̂(x, t)
 + p1(x)(y0(t) − v̂(0, t)) (14.27a)
v̂t(x, t) − μ(x)v̂x(x, t) = ϖᵀ(x)û(x, t) + p2(x)(y0(t) − v̂(0, t)) (14.27b)
(14.27b)
û(0, t) = qy0 (t) (14.27c)
v̂(1, t) = c T û(1, t) + U (t) (14.27d)
û(x, 0) = û 0 (x) (14.27e)
v̂(x, 0) = v̂0 (x) (14.27f)

for some initial conditions û0, v̂0 ∈ B([0, 1]), and the injection gains chosen as

p1(x) = μ(0)M^α(x, 0) (14.28a)
p2(x) = μ(0)M^β(x, 0) (14.28b)

where
M^α(x, ξ) = [M^α_1(x, ξ) M^α_2(x, ξ) ... M^α_n(x, ξ)]ᵀ, M^β(x, ξ) (14.29)

satisfy the PDE

Λ(x)M^α_x(x, ξ) − μ(ξ)M^α_ξ(x, ξ) = M^α(x, ξ)μ′(ξ) + Σ(x)M^α(x, ξ)
 + ω(x)M^β(x, ξ) (14.30a)
μ(x)M^β_x(x, ξ) + μ(ξ)M^β_ξ(x, ξ) = −M^β(x, ξ)μ′(ξ)
 − ϖᵀ(x)M^α(x, ξ) (14.30b)
Λ(x)M^α(x, x) + μ(x)M^α(x, x) = ω(x) (14.30c)
M^β(1, ξ) = cᵀM^α(1, ξ). (14.30d)

Note that M^α(x, ξ) is a column vector. Well-posedness of Eq. (14.30) is guaranteed by Theorem D.4 in Appendix D, following a coordinate change (x, ξ) → (1 − ξ, 1 − x).

Theorem 14.2 Consider system (13.1) subject to assumption (13.7), and the
observer (14.27) with injection gains p1 and p2 given as (14.28). Then

û ≡ u, v̂ ≡ v (14.31)

for t ≥ t F where t F is defined in (14.5).

Proof The observer estimation errors ũ = u − û, ṽ = v − v̂ can straightforwardly be shown to satisfy the dynamics

ũt(x, t) + Λ(x)ũx(x, t) = Σ(x)ũ(x, t) + ω(x)ṽ(x, t) − p1(x)ṽ(0, t) (14.32a)
ṽt(x, t) − μ(x)ṽx(x, t) = ϖᵀ(x)ũ(x, t) − p2(x)ṽ(0, t) (14.32b)
ũ(0, t) = 0 (14.32c)
ṽ(1, t) = cᵀũ(1, t) (14.32d)
ũ(x, 0) = ũ0(x) (14.32e)
ṽ(x, 0) = ṽ0(x) (14.32f)

where ũ0 = u0 − û0, ṽ0 = v0 − v̂0. We will show that the backstepping transformation

ũ(x, t) = α̃(x, t) + ∫_0^x M^α(x, ξ)β̃(ξ, t)dξ (14.33a)
ṽ(x, t) = β̃(x, t) + ∫_0^x M^β(x, ξ)β̃(ξ, t)dξ (14.33b)

where the kernels (M^α, M^β) satisfy (14.30), maps the target system

α̃t(x, t) + Λ(x)α̃x(x, t) = Σ(x)α̃(x, t) + ∫_0^x D1(x, ξ)α̃(ξ, t)dξ (14.34a)
β̃t(x, t) − μ(x)β̃x(x, t) = ϖᵀ(x)α̃(x, t) + ∫_0^x d2ᵀ(x, ξ)α̃(ξ, t)dξ (14.34b)
α̃(0, t) = 0 (14.34c)
β̃(1, t) = cᵀα̃(1, t) (14.34d)
α̃(x, 0) = α̃0(x) (14.34e)
β̃(x, 0) = β̃0(x) (14.34f)

with D1 and d2 given by

D1(x, ξ) = −M^α(x, ξ)ϖᵀ(ξ) − ∫_ξ^x M^α(x, s)d2ᵀ(s, ξ)ds (14.35a)
d2ᵀ(x, ξ) = −M^β(x, ξ)ϖᵀ(ξ) − ∫_ξ^x M^β(x, s)d2ᵀ(s, ξ)ds (14.35b)

into system (14.32).


Differentiating (14.33) with respect to time, inserting the dynamics (14.34b),
integrating by parts and changing the order of integration in the double integral, we
find

α̃t (x, t) = ũ t (x, t) − M α (x, x)μ(x)β̃(x, t) + M α (x, 0)μ(0)β̃(0, t)


 x
 α 
+ Mξ (x, ξ)μ(ξ) + M α (x, ξ)μ (ξ) β̃(ξ, t)dξ
0 x 
− M α (x, ξ) T (ξ)
0
 x 
α
+ M (x, s)d2 (s, ξ)ds α̃(ξ, t)dξ
T
(14.36a)
ξ

β̃t (x, t) = ṽt (x, t) − M β (x, x)μ(x)β̃(x, t) + M β (x, 0)μ(0)β̃(0, t)


 x 
β
+ Mξ (x, ξ)μ(ξ) + M β (x, ξ)μ (ξ) β̃(ξ, t)dξ
0 x 
− M β (x, ξ) T (ξ)
0
 x 
+ M β (x, s)d2T (s, ξ)ds α̃(ξ, t)dξ. (14.36b)
ξ

Differentiating (14.33) with respect to space gives

α̃x(x, t) = ũx(x, t) − M^α(x, x)β̃(x, t) − ∫_0^x M^α_x(x, ξ)β̃(ξ, t)dξ (14.37a)
β̃x(x, t) = ṽx(x, t) − M^β(x, x)β̃(x, t) − ∫_0^x M^β_x(x, ξ)β̃(ξ, t)dξ. (14.37b)

Inserting (14.36), (14.37) and (14.33) into (14.34a)–(14.34b) gives

0 = α̃t(x, t) + Λ(x)α̃x(x, t) − Σ(x)α̃(x, t) − ∫_0^x D1(x, ξ)α̃(ξ, t)dξ
 = ũt(x, t) + Λ(x)ũx(x, t) − Σ(x)ũ(x, t)
 − ω(x)ṽ(x, t) + M^α(x, 0)μ(0)ṽ(0, t)
 + ∫_0^x [−Λ(x)M^α_x(x, ξ) + M^α_ξ(x, ξ)μ(ξ) + M^α(x, ξ)μ′(ξ)
 + Σ(x)M^α(x, ξ) + ω(x)M^β(x, ξ)]β̃(ξ, t)dξ
 − ∫_0^x [D1(x, ξ) + M^α(x, ξ)ϖᵀ(ξ)
 + ∫_ξ^x M^α(x, s)d2ᵀ(s, ξ)ds]α̃(ξ, t)dξ
 − [Λ(x)M^α(x, x) + M^α(x, x)μ(x) − ω(x)]β̃(x, t) (14.38)

and

0 = β̃t(x, t) − μ(x)β̃x(x, t) − ϖᵀ(x)α̃(x, t) − ∫_0^x d2ᵀ(x, ξ)α̃(ξ, t)dξ
 = ṽt(x, t) − μ(x)ṽx(x, t) − ϖᵀ(x)ũ(x, t) + M^β(x, 0)μ(0)ṽ(0, t)
 + ∫_0^x [μ(x)M^β_x(x, ξ) + M^β_ξ(x, ξ)μ(ξ)
 + M^β(x, ξ)μ′(ξ) + ϖᵀ(x)M^α(x, ξ)]β̃(ξ, t)dξ
 − ∫_0^x [d2ᵀ(x, ξ) + M^β(x, ξ)ϖᵀ(ξ)
 + ∫_ξ^x M^β(x, s)d2ᵀ(s, ξ)ds]α̃(ξ, t)dξ. (14.39)

Using (14.28), (14.30a)–(14.30c) and (14.35) yields the error dynamics (14.32a)–(14.32b). Substituting the backstepping transformation (14.33) into the boundary condition (14.34d) yields
ṽ(1, t) = cᵀũ(1, t) + ∫_0^1 [M^β(1, ξ) − cᵀM^α(1, ξ)]β̃(ξ, t)dξ. (14.40)

Using (14.30d) gives (14.32d). The last boundary condition (14.32c) follows trivially
from inserting (14.33) into (14.34c).
The target system (14.34) is a cascade from α̃ to β̃. For t ≥ tu,1 , we will have
α̃ ≡ 0, and for t ≥ tu,1 + tv = t F , β̃ ≡ 0. The invertibility of the backstepping trans-
formation (14.33) then gives the desired result. 

14.3.2 Sensing Collocated with Actuation

In the collocated case, we have to assume distinct transport speeds (13.8) in order to
ensure well-posedness of the kernel equations and continuous kernels. To ease the
analysis, we also assume (13.10). Consider the observer
ût(x, t) + Λ(x)ûx(x, t) = Σ(x)û(x, t) + ω(x)v̂(x, t)
 + P1(x)(y1(t) − û(1, t)) (14.41a)
v̂t(x, t) − μ(x)v̂x(x, t) = ϖᵀ(x)û(x, t) + p2ᵀ(x)(y1(t) − û(1, t)) (14.41b)
û(0, t) = qv̂(0, t) (14.41c)
v̂(1, t) = cᵀy1(t) + U(t) (14.41d)
û(x, 0) = û0(x) (14.41e)
v̂(x, 0) = v̂0(x) (14.41f)

for some initial conditions û0, v̂0 ∈ B([0, 1]), and injection gains chosen as

P1(x) = N^α(x, 1)Λ(1) (14.42a)
p2ᵀ(x) = N^β(x, 1)Λ(1) (14.42b)

where

N^α(x, ξ) = {N^α_{ij}(x, ξ)}_{1≤i,j≤n} (14.43a)
N^β(x, ξ) = [N^β_1(x, ξ) N^β_2(x, ξ) ... N^β_n(x, ξ)] (14.43b)
N β (x, ξ) = N1β (x, ξ) N2β (x, ξ) . . . Nnβ (x, ξ) (14.43b)

satisfy the PDE

Λ(x)N^α_x(x, ξ) + N^α_ξ(x, ξ)Λ(ξ) = −N^α(x, ξ)Λ′(ξ) + Σ(x)N^α(x, ξ)
 + ω(x)N^β(x, ξ) (14.44a)
μ(x)N^β_x(x, ξ) − N^β_ξ(x, ξ)Λ(ξ) = N^β(x, ξ)Λ′(ξ) − ϖᵀ(x)N^α(x, ξ) (14.44b)
N^α(x, x)Λ(x) − Λ(x)N^α(x, x) = Σ(x) (14.44c)
N^β(x, x)Λ(x) + μ(x)N^β(x, x) = ϖᵀ(x) (14.44d)
N^α_{ij}(0, ξ) = qi N^β_j(0, ξ), for 1 ≤ i ≤ j ≤ n. (14.44e)

Note that N^α is a matrix and N^β is a row vector. The PDE is under-determined, and uniqueness can be ensured by imposing the following additional boundary conditions

N^α_{ij}(x, 1) = σij(1) / (λj(1) − λi(1)), ∀x ∈ [0, 1], 1 ≤ j < i ≤ n. (14.45)

The boundary conditions N^α_{ij}(x, 1), 1 ≤ j < i ≤ n, can be arbitrary, but choosing them as (14.45) ensures continuity of N^α_{ij}, 1 ≤ j < i ≤ n, at x = ξ = 1. Well-posedness of (14.44)–(14.45) now follows from Theorem D.5 in Appendix D.5, following a change of coordinates (x, ξ) → (ξ, x).

Theorem 14.3 Consider system (13.1) subject to assumptions (13.8) and (13.10), and the observer (14.41) with injection gains P1 and p2 given by (14.42). Then

û ≡ u,  v̂ ≡ v    (14.46)

for t ≥ t0, where

t0 = ∑ᵢ₌₁ⁿ tu,i + tv    (14.47)

with tu,i and tv defined in (14.5).

Proof The dynamics of the estimation errors ũ = u − û, ṽ = v − v̂ are

ũt(x, t) + Λ(x)ũx(x, t) = Σ(x)ũ(x, t) + ω(x)ṽ(x, t) − P1(x)ũ(1, t)    (14.48a)
ṽt(x, t) − μ(x)ṽx(x, t) = ℓT(x)ũ(x, t) − p2T(x)ũ(1, t)    (14.48b)
ũ(0, t) = q ṽ(0, t)    (14.48c)
ṽ(1, t) = 0    (14.48d)
ũ(x, 0) = ũ0(x)    (14.48e)
ṽ(x, 0) = ṽ0(x)    (14.48f)

for some initial conditions ũ0, ṽ0 ∈ B([0, 1]).


Consider the target system

α̃t(x, t) + Λ(x)α̃x(x, t) = ω(x)β̃(x, t) − ∫ₓ¹ g1(x, ξ)β̃(ξ, t)dξ    (14.49a)
β̃t(x, t) − μ(x)β̃x(x, t) = −∫ₓ¹ g2(x, ξ)β̃(ξ, t)dξ    (14.49b)
α̃(0, t) = q β̃(0, t) + ∫₀¹ H(ξ)α̃(ξ, t)dξ    (14.49c)
β̃(1, t) = 0    (14.49d)
α̃(x, 0) = α̃0(x)    (14.49e)
β̃(x, 0) = β̃0(x)    (14.49f)

for some functions g1, g2 defined over S = {(x, ξ) | 0 ≤ x ≤ ξ ≤ 1}, and where the matrix

H(x) = {hij(x)}1≤i,j≤n    (14.50)

is strictly lower triangular, hence

hij ≡ 0, for 1 ≤ i ≤ j ≤ n.    (14.51)
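The cascade role of (14.51) can be made concrete: with hij = 0 for i ≤ j, row i of the boundary coupling (14.49c) involves only components j < i. A small numpy sketch, with illustrative entry values:

```python
import numpy as np

# A strictly lower-triangular H as in (14.50)-(14.51): h_ij = 0 for
# 1 <= i <= j <= n, so row i only involves components j < i (a cascade).
# The nonzero entry values are illustrative only.
n = 3
H = np.tril(np.arange(1.0, n * n + 1).reshape(n, n), k=-1)
print(H)
# [[0. 0. 0.]
#  [4. 0. 0.]
#  [7. 8. 0.]]
```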

We will show that the backstepping transformation

ũ(x, t) = α̃(x, t) + ∫ₓ¹ Nα(x, ξ)α̃(ξ, t)dξ    (14.52a)
ṽ(x, t) = β̃(x, t) + ∫ₓ¹ Nβ(x, ξ)α̃(ξ, t)dξ    (14.52b)

where (Nα, Nβ) satisfies the PDE (14.44), maps system (14.49) into (14.48), provided g1 and g2 are given by

g1(x, ξ) = Nα(x, ξ)ω(ξ) − ∫ₓ^ξ Nα(x, s)g1(s, ξ)ds    (14.53a)
g2(x, ξ) = Nβ(x, ξ)ω(ξ) − ∫ₓ^ξ Nβ(x, s)g1(s, ξ)ds    (14.53b)

and H is given by

H(ξ) = q Nβ(0, ξ) − Nα(0, ξ).    (14.54)

Differentiating (14.52) with respect to time, inserting the dynamics (14.49a) and integrating by parts, we find

α̃t(x, t) = ũt(x, t) + Nα(x, 1)Λ(1)α̃(1, t) − Nα(x, x)Λ(x)α̃(x, t)
    − ∫ₓ¹ Nξα(x, ξ)Λ(ξ)α̃(ξ, t)dξ − ∫ₓ¹ Nα(x, ξ)Λ′(ξ)α̃(ξ, t)dξ
    − ∫ₓ¹ Nα(x, ξ)ω(ξ)β̃(ξ, t)dξ
    + ∫ₓ¹ Nα(x, ξ) ∫_ξ¹ g1(ξ, s)β̃(s, t)ds dξ    (14.55a)

β̃t(x, t) = ṽt(x, t) + Nβ(x, 1)Λ(1)α̃(1, t) − Nβ(x, x)Λ(x)α̃(x, t)
    − ∫ₓ¹ Nξβ(x, ξ)Λ(ξ)α̃(ξ, t)dξ − ∫ₓ¹ Nβ(x, ξ)Λ′(ξ)α̃(ξ, t)dξ
    − ∫ₓ¹ Nβ(x, ξ)ω(ξ)β̃(ξ, t)dξ
    + ∫ₓ¹ Nβ(x, ξ) ∫_ξ¹ g1(ξ, s)β̃(s, t)ds dξ.    (14.55b)

Similarly, differentiating (14.52) with respect to space, we get

α̃x(x, t) = ũx(x, t) + Nα(x, x)α̃(x, t) − ∫ₓ¹ Nxα(x, ξ)α̃(ξ, t)dξ    (14.56a)
β̃x(x, t) = ṽx(x, t) + Nβ(x, x)α̃(x, t) − ∫ₓ¹ Nxβ(x, ξ)α̃(ξ, t)dξ.    (14.56b)

Inserting (14.55) and (14.56) into (14.49a)–(14.49b), we find

0 = ũt(x, t) + Λ(x)ũx(x, t) − ω(x)ṽ(x, t) − Σ(x)ũ(x, t) + Nα(x, 1)Λ(1)α̃(1, t)
    + [Λ(x)Nα(x, x) − Nα(x, x)Λ(x) + Σ(x)]α̃(x, t)
    − ∫ₓ¹ [Λ(x)Nxα(x, ξ) + Nξα(x, ξ)Λ(ξ) + Nα(x, ξ)Λ′(ξ)
        − Σ(x)Nα(x, ξ) − ω(x)Nβ(x, ξ)]α̃(ξ, t)dξ
    + ∫ₓ¹ [g1(x, ξ) − Nα(x, ξ)ω(ξ) + ∫ₓ^ξ Nα(x, s)g1(s, ξ)ds]β̃(ξ, t)dξ    (14.57)

and

0 = ṽt(x, t) − μ(x)ṽx(x, t) − ℓT(x)ũ(x, t) + Nβ(x, 1)Λ(1)α̃(1, t)
    − [μ(x)Nβ(x, x) + Nβ(x, x)Λ(x) − ℓT(x)]α̃(x, t)
    + ∫ₓ¹ [μ(x)Nxβ(x, ξ) − Nξβ(x, ξ)Λ(ξ) − Nβ(x, ξ)Λ′(ξ)
        + ℓT(x)Nα(x, ξ)]α̃(ξ, t)dξ
    + ∫ₓ¹ [g2(x, ξ) − Nβ(x, ξ)ω(ξ) + ∫ₓ^ξ Nβ(x, s)g1(s, ξ)ds]β̃(ξ, t)dξ.    (14.58)

Using (14.44a)–(14.44d), (14.53) and (14.42) gives the dynamics (14.48a)–(14.48b).


Inserting (14.52) into (14.48c) yields
α̃(0, t) = q β̃(0, t) + ∫₀¹ [q Nβ(0, ξ) − Nα(0, ξ)]α̃(ξ, t)dξ    (14.59)

from which (14.54) gives (14.49c). The boundary condition (14.49d) follows trivially
from (14.48d) by noting that ṽ(1, t) = β̃(1, t).
The target system (14.49) has a cascade structure from β̃ to α̃. For t ≥ tv, β̃ ≡ 0, and system (14.49) reduces to

α̃t(x, t) + Λ(x)α̃x(x, t) = 0    (14.60a)
α̃(0, t) = ∫₀¹ H(ξ)α̃(ξ, t)dξ    (14.60b)
α̃(x, tv) = α̃tv(x)    (14.60c)

for some function α̃tv ∈ B([0, 1]). Due to the strictly lower triangular structure of H, system (14.60) is also a cascade system, and will be zero for t ≥ t0, with t0 defined in (14.47). □

14.4 Output Feedback Controllers

The state feedback controllers and state observers can readily be combined into output feedback controllers, as we do next. The proofs are straightforward and omitted.

14.4.1 Sensing Anti-collocated with Actuation

Combining the results of Theorems 14.1 and 14.2, the following result trivially
follows.
Theorem 14.4 Consider system (13.1), subject to assumption (13.7), and with measurement (13.1g). Let the controller be taken as

U(t) = −cTû(1, t) + ∫₀¹ Ku(1, ξ)û(ξ, t)dξ + ∫₀¹ Kv(1, ξ)v̂(ξ, t)dξ    (14.61)

where (Ku, Kv) is the solution to the PDE (14.3), and û and v̂ are generated using the observer of Theorem 14.2. Then

u ≡ 0,  v ≡ 0    (14.62)

for t ≥ 2tF, where tF is defined in (14.5).

14.4.2 Sensing Collocated with Actuation

Similarly, combining the results of Theorems 14.1 and 14.3, the following result trivially follows.

Theorem 14.5 Consider system (13.1), subject to assumption (13.8), and with measurement (13.1h). Let the controller be taken as

U(t) = −cT y1(t) + ∫₀¹ Ku(1, ξ)û(ξ, t)dξ + ∫₀¹ Kv(1, ξ)v̂(ξ, t)dξ    (14.63)

where (Ku, Kv) is the solution to the PDE (14.3), and û and v̂ are generated using the observer of Theorem 14.3. Then

u ≡ 0,  v ≡ 0    (14.64)

for t ≥ tF + t0, where tF is defined in (14.5) and t0 is defined in (14.47).

14.5 Output Tracking Controllers

Theorem 14.6 Consider system (13.1). Let the control law be taken as

U(t) = ∫₀¹ [Ku(1, ξ)u(ξ, t) + Kv(1, ξ)v(ξ, t)]dξ + r(t + tv),    (14.65)

where (Ku, Kv) is the solution to the PDE (14.3). Then

y0(t) = v(0, t) = r(t)    (14.66)

for t ≥ tv, where tv is defined in (14.5). Moreover, if r ∈ L∞, then

||u||∞, ||v||∞ ∈ L∞.    (14.67)

Proof In the proof of Theorem 14.1, it is shown that system (13.1) can be mapped using the backstepping transformation (14.6) into the target system (14.8), that is,

αt(x, t) + Λ(x)αx(x, t) = Σ(x)α(x, t) + ω(x)β(x, t) + ∫₀ˣ B1(x, ξ)α(ξ, t)dξ
    + ∫₀ˣ b2(x, ξ)β(ξ, t)dξ    (14.68a)
βt(x, t) − μ(x)βx(x, t) = 0    (14.68b)
α(0, t) = qβ(0, t)    (14.68c)
β(1, t) = r(t + tv)    (14.68d)
α(x, 0) = α0(x)    (14.68e)
β(x, 0) = β0(x)    (14.68f)
y0(t) = β(0, t)    (14.68g)

where we have inserted the control law (14.65), and added the measurement (14.68g), which follows from (13.1g) and the fact that v(0, t) = β(0, t). It is clear from the structure of the subsystem in β consisting of (14.68b) and (14.68d) that

β(x, t) = β(1, t − tv(1 − x)) = r(t + tv x)    (14.69)

for t ≥ tv(1 − x). Specifically,

y0(t) = β(0, t) = r(t)    (14.70)

for t ≥ tv, which is the tracking goal. System (14.68) is a cascade system from β to α. For t ≥ tv, all values in β will be given by past values of r, while for t ≥ tu,1 + tv = tF, this will also be true for α. Due to the invertibility of the transformation (14.6), this also holds for u and v. □
The tracking controller of Theorem 14.6 can also be combined with the observers
of Theorems 14.2 and 14.3 into output-feedback tracking controllers.
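The tracking property (14.70) is easy to verify numerically for the transport subsystem (14.68b), (14.68d). The sketch below assumes a constant speed μ (the theorem allows μ(x); constancy just makes a first-order upwind scheme at unit CFL an exact shift):

```python
import numpy as np

# beta_t - mu*beta_x = 0 on [0, 1], beta(1, t) = r(t + tv), zero initial data.
mu = 2.0
tv = 1.0 / mu                  # transport time from x = 1 to x = 0
N = 200
dt = (1.0 / N) / mu            # unit CFL: upwind reduces to an exact shift
r = lambda t: np.sin(2.0 * np.pi * t)

beta = np.zeros(N + 1)
t = 0.0
for _ in range(4 * N):         # run well past the transient t = tv
    beta[:-1] = beta[1:]       # transport one cell to the left
    t += dt
    beta[-1] = r(t + tv)       # boundary condition (14.68d)

print(abs(beta[0] - r(t)))     # y0(t) - r(t), ~ 0 for t >= tv
```

With the boundary value r(t + tv) injected at x = 1, the outlet reproduces the reference exactly once the initial data has been flushed out, i.e., for t ≥ tv.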

14.6 Simulations

System (13.1) with the state feedback controller of Theorem 14.1, the output-feedback controller of Theorem 14.4 using sensing anti-collocated with actuation, and the tracking controller of Theorem 14.6 are implemented for n = 2 using the system parameters

Λ(x) = diag{1 + x, 2 + sin(πx)},  μ(x) = eˣ    (14.71a)

Σ(x) = [ 1        x     ],   ω(x) = [ 1      ],   ℓ(x) = [ 1 + 2x ]    (14.71b)
       [ cosh(x)  1 − x ]            [ 1 + 2x ]            [ 1 − x  ]

q = [−1  −2]T,  c = [1  −1]T    (14.71c)

and initial conditions

u0(x) = [1  eˣ]T,  v0(x) = sin(πx).    (14.72)

From the transport speeds, we compute

tu,1 = ∫₀¹ ds/λ1(s) = ∫₀¹ ds/(1 + s) = ln(2) ≈ 0.6931    (14.73a)
tv = ∫₀¹ ds/μ(s) = ∫₀¹ e⁻ˢ ds = 1 − e⁻¹ ≈ 0.6321    (14.73b)
tF = tu,1 + tv ≈ 1.3253.    (14.73c)
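These integrals are easily reproduced numerically; a midpoint-rule sketch:

```python
import numpy as np

# Transport times (14.73) for lambda_1(s) = 1 + s and mu(s) = e^s,
# computed with the midpoint rule on a uniform grid over [0, 1].
N = 100000
s = (np.arange(N) + 0.5) / N       # midpoints
tu1 = np.mean(1.0 / (1.0 + s))     # = ln 2    ~ 0.6931
tv = np.mean(np.exp(-s))           # = 1 - 1/e ~ 0.6321
tF = tu1 + tv                      # ~ 1.3253
print(tu1, tv, tF)
```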


Fig. 14.1 Left: Controller gains K1u(1, x) (solid red), K2u(1, x) (dashed-dotted blue) and Kv(1, x) (dashed green). Right: Observer gains p1,1(x) (solid red), p1,2(x) (dashed-dotted blue) and p2(x) (dashed green)

Fig. 14.2 Left: State norm during state feedback (solid red), output feedback (dashed-dotted blue) and output tracking (dashed green). Right: State estimation error norm

Fig. 14.3 Left: Actuation signal during state feedback (solid red), output feedback (dashed-dotted blue) and output tracking (dashed green). Right: Reference r (solid black) and measured signal (dashed red) during tracking

The controller and observer gains are shown in Fig. 14.1. In the state feedback case, the system's norm and actuation signal converge to zero in finite time t = tF, as seen in Figs. 14.2 and 14.3. In the output feedback case, the state estimation error norm converges to zero in finite time t = tF, while the state norm and actuation signal converge to zero by t = 2tF. In the tracking case, the state norm and actuation signal stay bounded, while the tracking goal is achieved for t ≥ tv, as seen in Fig. 14.3.

14.7 Notes

It is clear that the complexity has now increased considerably from the 2 × 2 designs in Chap. 8, especially in the design of the observer of Theorem 14.3 using sensing collocated with actuation. The number of kernels used in the design is n² + n, and hence scales quadratically with the number of states n. Moreover, some assumptions are needed on the system parameters; specifically, the transport speeds cannot be arbitrary, but have to be ordered systematically.
It is, as in the 2 × 2 case, possible to perform a decoupling of the controller target system, as we showed in the alternative proof of Theorem 14.1. This is utilized in Chap. 17, where a model reference adaptive control law for systems in the form (13.1) is derived.

Chapter 15
Adaptive State-Feedback Controller

15.1 Introduction

In this chapter, we derive a swapping-based state-feedback controller for the n + 1 system (13.1) with constant coefficients, under assumptions (13.7), (13.9) and (13.11). The goal is to design a control law U(t) in (13.1d) so that system (13.1) is adaptively stabilized when the parameters

Σ = [σ1  σ2  . . .  σn]T, ω, ℓ, q, c    (15.1)

are unknown. Note that σiT, i = 1, . . . , n, are the rows of the matrix Σ. The control law employs full state-feedback, and the practical interest of the controller is therefore limited, since distributed measurements are at best a coarse approximation in practice.
This problem was originally solved in Anfinsen and Aamo (2017).
Output feedback problems, which are significantly harder to solve, are considered
in Chaps. 16 and 17.

15.2 Swapping-Based Design

15.2.1 Filter Design

Consider the filters

ηt(x, t) + Ληx(x, t) = 0,  η(0, t) = 1v(0, t),  η(x, 0) = η0(x)    (15.2a)
ψt(x, t) − μψx(x, t) = 0,  ψ(1, t) = u(1, t),  ψ(x, 0) = ψ0(x)    (15.2b)
© Springer Nature Switzerland AG 2019 281
H. Anfinsen and O. M. Aamo, Adaptive Control of Hyperbolic PDEs,
Communications and Control Engineering,
https://doi.org/10.1007/978-3-030-05879-1_15
φt(x, t) − μφx(x, t) = 0,  φ(1, t) = U(t),  φ(x, 0) = φ0(x)    (15.2c)
Pt(x, t) + ΛPx(x, t) = 1uT(x, t),  P(0, t) = 0,  P(x, 0) = P0(x)    (15.2d)
νt(x, t) + Λνx(x, t) = 1v(x, t),  ν(0, t) = 0,  ν(x, 0) = ν0(x)    (15.2e)
rt(x, t) − μrx(x, t) = u(x, t),  r(1, t) = 0,  r(x, 0) = r0(x)    (15.2f)

for the variables

η(x, t) = [η1(x, t)  η2(x, t)  . . .  ηn(x, t)]T    (15.3a)
ψ(x, t) = [ψ1(x, t)  ψ2(x, t)  . . .  ψn(x, t)]T    (15.3b)
P(x, t) = [p1(x, t)  p2(x, t)  . . .  pn(x, t)]T    (15.3c)
ν(x, t) = [ν1(x, t)  ν2(x, t)  . . .  νn(x, t)]T    (15.3d)
r(x, t) = [r1(x, t)  r2(x, t)  . . .  rn(x, t)]T    (15.3e)

and φ(x, t), where 1 is a column vector of length n with all elements equal to one. The initial conditions are assumed to satisfy

η0, ψ0, φ0, P0, ν0, r0 ∈ B([0, 1]).    (15.4)

Note that pi(x, t), i = 1, . . . , n, are the rows of the matrix P(x, t), each containing n elements. Consider the non-adaptive state estimates

ūi(x, t) = ϕiT(x, t)κi,  v̄(x, t) = ϕ0T(x, t)κ0 + φ(x, t)    (15.5)

where

ϕi(x, t) = [ηi(x, t)  pi(x, t)  νi(x, t)]T, i = 1, . . . , n    (15.6a)
ϕ0(x, t) = [ψT(x, t)  rT(x, t)]T    (15.6b)

are constructed from the filters, and

κi = [qi  σi  ωi]T,  κ0 = [cT  ℓT]T    (15.7)

for i = 1, . . . , n contain the unknown parameters. Recall that σi, i = 1, . . . , n, are the rows of the matrix Σ.
Lemma 15.1 Consider system (13.1), filters (15.2) and the non-adaptive state estimates (15.5). For t ≥ tS, where

tS = max{μ⁻¹, λ1⁻¹},    (15.8)

we have

ū ≡ u,  v̄ ≡ v.    (15.9)

Proof Consider the error signals

ei(x, t) = ui(x, t) − ϕiT(x, t)κi    (15.10a)
ε(x, t) = v(x, t) − ϕ0T(x, t)κ0 − φ(x, t).    (15.10b)

By straightforward calculations, it can be verified that the error terms (15.10) satisfy

et(x, t) + Λex(x, t) = 0,  e(0, t) = 0,  e(x, 0) = e0(x)    (15.11a)
εt(x, t) − μεx(x, t) = 0,  ε(1, t) = 0,  ε(x, 0) = ε0(x)    (15.11b)

where

e(x, t) = [e1(x, t)  e2(x, t)  . . .  en(x, t)]T    (15.12)

which will be identically zero for t ≥ tS. □

15.2.2 Adaptive Law

In deriving the adaptive laws, we will use the following assumption.

Assumption 15.1 Bounds are known on all uncertain parameters, that is, constants σ̄, ω̄, ℓ̄, c̄, q̄ are known so that

|σij| ≤ σ̄, |ωi| ≤ ω̄, |ℓi| ≤ ℓ̄, |ci| ≤ c̄, |qi| ≤ q̄    (15.13)

for all i, j = 1, . . . , n.

Using this assumption, consider now the adaptive laws

κ̂̇i(t) = projκ̄i{Γi[∫₀¹ êi(x, t)ϕi(x, t)dx / (1 + ||ϕi(t)||²)
    + êi(1, t)ϕi(1, t) / (1 + |ϕi(1, t)|²)], κ̂i(t)}    (15.14a)
κ̂̇0(t) = projκ̄0{Γ0[∫₀¹ ε̂(x, t)ϕ0(x, t)dx / (1 + ||ϕ0(t)||²)
    + ε̂(0, t)ϕ0(0, t) / (1 + |ϕ0(0, t)|²)], κ̂0(t)}    (15.14b)
κ̂i(0) = κ̂i,0    (15.14c)
κ̂0(0) = κ̂0,0    (15.14d)
for the parameter estimates

κ̂i(t) = [q̂i(t)  σ̂i(t)  ω̂i(t)]T,  κ̂0(t) = [ĉT(t)  ℓ̂T(t)]T    (15.15)

and bounds

κ̄i = [q̄  σ̄  ω̄]T,  κ̄0 = [c̄T  ℓ̄T]T    (15.16)

where i = 1, 2, . . . , n, with prediction errors

êi(x, t) = ui(x, t) − ûi(x, t),  ε̂(x, t) = v(x, t) − v̂(x, t)    (15.17)

and adaptive state estimates

ûi(x, t) = ϕiT(x, t)κ̂i(t),  v̂(x, t) = ϕ0T(x, t)κ̂0(t) + φ(x, t),    (15.18)

and where proj is the projection operator in Appendix A and Γi > 0, i = 0, . . . , n, are some gain matrices. The initial conditions are chosen inside the feasible domain, that is,

−κ̄i ≤ κ̂i,0 ≤ κ̄i    (15.19)

for i = 0, 1, . . . , n, where the inequalities in (15.19) act component-wise.
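To make the structure of (15.14) concrete, here is a minimal discretized sketch of one Euler step of (15.14a). The component-wise box projection below is a simplified stand-in for the operator of Appendix A, and all numerical values, grid sizes and function names are illustrative only:

```python
import numpy as np

def proj(bound, update, kappa_hat):
    # Simplified component-wise projection: zero each component of the
    # update that would push kappa_hat outside the box [-bound, bound].
    out = np.array(update, dtype=float)
    out[(kappa_hat >= bound) & (out > 0.0)] = 0.0
    out[(kappa_hat <= -bound) & (out < 0.0)] = 0.0
    return out

def adaptive_step(kappa_hat, bound, Gamma, e_hat, phi, e1, phi1, dx, dt):
    # e_hat, phi: grid samples of the prediction error and regressor;
    # e1, phi1: their boundary values at x = 1.  The two normalized
    # terms mirror the two terms inside the brackets of (15.14a).
    grad = ((phi.T @ e_hat) * dx / (1.0 + np.sum(phi**2) * dx)
            + e1 * phi1 / (1.0 + np.sum(phi1**2)))
    return kappa_hat + dt * proj(bound, Gamma @ grad, kappa_hat)

# An estimate sitting on its upper bound is not pushed further out,
# while interior components keep adapting.
bound = np.array([1.0, 1.0])
k = adaptive_step(np.array([1.0, 0.0]), bound, np.eye(2),
                  e_hat=np.ones(10), phi=np.ones((10, 2)),
                  e1=1.0, phi1=np.array([1.0, 1.0]), dx=0.1, dt=0.1)
print(k)
```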

Theorem 15.1 The adaptive laws (15.14) with initial conditions satisfying (15.19) guarantee

|κ̃i|, |κ̃0| ∈ L∞    (15.20a)
||êi||/√(1 + ||ϕi||²), ||ε̂||/√(1 + ||ϕ0||²) ∈ L2 ∩ L∞    (15.20b)
êi(1, ·)/√(1 + |ϕi(1, ·)|²), ε̂(0, ·)/√(1 + |ϕ0(0, ·)|²) ∈ L2 ∩ L∞    (15.20c)
κ̂̇i, κ̂̇0 ∈ L2 ∩ L∞    (15.20d)

for i = 1, . . . , n. Moreover, the prediction errors satisfy the following bounds for t ≥ tS:

||êi(t)|| ≤ ||ϕi(t)|| |κ̃i(t)|,  ||ε̂(t)|| ≤ ||ϕ0(t)|| |κ̃0(t)|,    (15.21)

for i = 1, . . . , n.

Proof The property (15.20a) follows from the projection operator. Consider the Lyapunov function candidate

V(t) = ½ κ̃T(t)Γ⁻¹κ̃(t)    (15.22)

where

κ̃(t) = [κ̃1T(t)  κ̃2T(t)  . . .  κ̃0T(t)]T    (15.23)

and

Γ = diag{Γ1, Γ2, . . . , Γn, Γ0}.    (15.24)

By differentiating (15.22), inserting the adaptive laws (15.14) and using Lemma A.1 in Appendix A, we find

V̇(t) ≤ −∑ᵢ₌₁ⁿ κ̃iT(t)[∫₀¹ êi(x, t)ϕi(x, t)dx / (1 + ||ϕi(t)||²)
        + êi(1, t)ϕi(1, t) / (1 + |ϕi(1, t)|²)]
    − κ̃0T(t)[∫₀¹ ε̂(x, t)ϕ0(x, t)dx / (1 + ||ϕ0(t)||²)
        + ε̂(0, t)ϕ0(0, t) / (1 + |ϕ0(0, t)|²)].    (15.25)

Inserting the relationships

êi(x, t) = ϕiT(x, t)κ̃i(t) + ei(x, t),  ε̂(x, t) = ϕ0T(x, t)κ̃0(t) + ε(x, t)    (15.26)

and utilizing that e ≡ 0, ε ≡ 0 for t ≥ tS, we obtain

V̇(t) ≤ −ζ²(t)    (15.27)

where we have defined

ζ²(t) = ∑ᵢ₌₁ⁿ ||êi(t)||²/(1 + ||ϕi(t)||²) + ∑ᵢ₌₁ⁿ êi²(1, t)/(1 + |ϕi(1, t)|²)
    + ||ε̂(t)||²/(1 + ||ϕ0(t)||²) + ε̂²(0, t)/(1 + |ϕ0(0, t)|²).    (15.28)

This gives ζ ∈ L2, and hence

||êi||/√(1 + ||ϕi||²), ||ε̂||/√(1 + ||ϕ0||²), êi(1, ·)/√(1 + |ϕi(1, ·)|²), ε̂(0, ·)/√(1 + |ϕ0(0, ·)|²) ∈ L2.    (15.29)

From (15.26), we find for t ≥ tS

||êi(t)||/√(1 + ||ϕi(t)||²) ≤ ||ϕiT(t)κ̃i(t)||/√(1 + ||ϕi(t)||²) ≤ |κ̃i(t)|    (15.30)

and similarly for ||ε̂||/√(1 + ||ϕ0||²), êi(1, ·)/√(1 + |ϕi(1, ·)|²) and ε̂(0, ·)/√(1 + |ϕ0(0, ·)|²), which give the remaining properties in (15.20b)–(15.20c).
Property (15.20d) follows from (15.20b)–(15.20c), the adaptive laws (15.14) and the relationships (15.17).
Lastly, we notice from (15.5), (15.10), (15.18) and (15.17) that

êi(x, t) = ei(x, t) + ϕiT(x, t)κ̃i(t),  ε̂(x, t) = ε(x, t) + ϕ0T(x, t)κ̃0(t),    (15.31)

for i = 1, . . . , n. Since e ≡ 0, ε ≡ 0 for t ≥ tS, the bounds (15.21) immediately follow. □

15.2.3 Control Law

Consider the control law

U(t) = −ĉT(t)u(1, t) + ∫₀¹ K̂u(1, ξ, t)û(ξ, t)dξ + ∫₀¹ K̂v(1, ξ, t)v̂(ξ, t)dξ    (15.32)

where (K̂u, K̂v) is the solution to the PDE

μK̂xu(x, ξ, t) − K̂ξu(x, ξ, t)Λ = K̂u(x, ξ, t)Σ̂(t) + K̂v(x, ξ, t)ℓ̂T(t)    (15.33a)
μK̂xv(x, ξ, t) + K̂ξv(x, ξ, t)μ = K̂u(x, ξ, t)ω̂(t)    (15.33b)
K̂u(x, x, t)Λ + μK̂u(x, x, t) = −ℓ̂T(t)    (15.33c)
μK̂v(x, 0, t) = K̂u(x, 0, t)Λq̂(t)    (15.33d)

defined over T1, given in (1.1b), where Σ̂, ℓ̂, ω̂, q̂, û and v̂ are estimates of the system parameters and system states generated from the adaptive law of Theorem 15.1. By Theorem D.4 in Appendix D and since all parameters in (15.33) are bounded by projection in (15.14), Eq. (15.33) has a unique, bounded solution for every time t, in the sense of

||K̂u(t)|| ≤ K̄, ∀t ≥ 0,  ||K̂v(t)|| ≤ K̄, ∀t ≥ 0    (15.34)

for some constant K̄. Additionally, from differentiating (15.33) with respect to time, applying Theorem D.4 in Appendix D to the resulting equations, and using (15.20d), we obtain

||K̂tu||, ||K̂tv|| ∈ L2.    (15.35)

Theorem 15.2 Consider system (13.1), filters (15.2) and the observer of Theorem
15.1. The control law (15.32) guarantees

||η||, ||ψ||, ||φ||, ||P||, ||ν||, ||r ||, ||û||, ||v̂||, ||u||, ||v|| ∈ L2 ∩ L∞ (15.36)

and
||η||, ||ψ||, ||φ||, ||P||, ||ν||, ||r ||, ||û||, ||v̂||, ||u||, ||v|| → 0. (15.37)

The proof of this theorem is given in Sect. 15.2.6.

15.2.4 Estimator Dynamics

From straightforward calculations, one can verify that the adaptive state estimates (15.18) have the dynamics

ût(x, t) + Λûx(x, t) = Σ̂(t)u(x, t) + ω̂(t)v(x, t) + (ϕ(x, t) ◦ κ̂̇(t))1    (15.38a)
v̂t(x, t) − μv̂x(x, t) = ℓ̂T(t)u(x, t) + ϕ0T(x, t)κ̂̇0(t)    (15.38b)
û(0, t) = q̂(t)v(0, t)    (15.38c)
v̂(1, t) = ĉT(t)u(1, t) + U(t)    (15.38d)
û(x, 0) = û0(x)    (15.38e)
v̂(x, 0) = v̂0(x)    (15.38f)

for û0, v̂0 ∈ B([0, 1]), where

ϕ(x, t) = [ϕ1(x, t)  ϕ2(x, t)  . . .  ϕn(x, t)]T    (15.39)

and

κ̂(t) = [κ̂1(t)  κ̂2(t)  . . .  κ̂n(t)]T,    (15.40)

and ◦ denotes the Hadamard (entrywise) product, while 1 is a vector of length n containing only ones.
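The notation (ϕ(x, t) ∘ κ̂̇(t))1 simply stacks the n scalar products ϕiT κ̂̇i into one vector. A small numpy sketch with illustrative values:

```python
import numpy as np

# Row i of phi holds phi_i^T and row i of kdot holds the corresponding
# parameter update rates; (phi ∘ kdot) 1 then has i-th entry phi_i^T kdot_i.
# Values are illustrative only.
n, m = 2, 4
phi = np.arange(float(n * m)).reshape(n, m)   # rows: phi_i^T
kdot = np.ones((n, m))                        # rows: kdot_i^T
term = (phi * kdot) @ np.ones(m)              # Hadamard product, then row sums
print(term)   # [ 6. 22.]
```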

15.2.5 Target System and Backstepping

Consider the following backstepping transformation

α(x, t) = û(x, t)    (15.41a)
β(x, t) = v̂(x, t) − ∫₀ˣ K̂u(x, ξ, t)û(ξ, t)dξ
    − ∫₀ˣ K̂v(x, ξ, t)v̂(ξ, t)dξ = T[û, v̂](x, t)    (15.41b)

where (K̂u, K̂v) is the on-line solution to the PDE (15.33). As with all backstepping transformations with uniformly bounded integration kernels, transformation (15.41) is invertible, with inverse of the form

û(x, t) = α(x, t),  v̂(x, t) = T⁻¹[α, β](x, t)    (15.42)

where T⁻¹ is a Volterra integral operator similar to T. Consider also the target system

αt(x, t) + Λαx(x, t) = Σ̂(t)α(x, t) + ω̂(t)β(x, t) + ∫₀ˣ B̂1(x, ξ, t)α(ξ, t)dξ
    + ∫₀ˣ b̂2(x, ξ, t)β(ξ, t)dξ + Σ̂(t)ê(x, t)
    + ω̂(t)ε̂(x, t) + (ϕ(x, t) ◦ κ̂̇(t))1    (15.43a)
βt(x, t) − μβx(x, t) = −K̂u(x, 0, t)Λq̂(t)ε̂(0, t) + T[Σ̂ê + ω̂ε̂, ℓ̂Tê](x, t)
    − ∫₀ˣ K̂tu(x, ξ, t)α(ξ, t)dξ
    − ∫₀ˣ K̂tv(x, ξ, t)T⁻¹[α, β](ξ, t)dξ
    + T[(ϕ ◦ κ̂̇)1, ϕ0Tκ̂̇0](x, t)    (15.43b)
α(0, t) = q̂(t)β(0, t) + q̂(t)ε̂(0, t)    (15.43c)
β(1, t) = 0    (15.43d)
α(x, 0) = α0(x)    (15.43e)
β(x, 0) = β0(x)    (15.43f)

for α0, β0 ∈ B([0, 1]), and for some functions B̂1 and b̂2.

Lemma 15.2 Transformation (15.41) maps system (15.38) in closed loop with control law (15.32) into the target system (15.43) with B̂1 and b̂2 given as the solution to the Volterra integral equations

B̂1(x, ξ, t) = ω̂(t)K̂u(x, ξ, t) + ∫_ξ^x b̂2(x, s, t)K̂u(s, ξ, t)ds    (15.44a)
b̂2(x, ξ, t) = ω̂(t)K̂v(x, ξ, t) + ∫_ξ^x b̂2(x, s, t)K̂v(s, ξ, t)ds.    (15.44b)

Proof From differentiating (15.41b) with respect to time, inserting the dynamics (15.38b) and integrating by parts, we get

v̂t(x, t) = βt(x, t) − K̂u(x, x, t)Λû(x, t) + K̂u(x, 0, t)Λû(0, t)
    + ∫₀ˣ K̂ξu(x, ξ, t)Λû(ξ, t)dξ + ∫₀ˣ K̂u(x, ξ, t)Σ̂(t)û(ξ, t)dξ
    + ∫₀ˣ K̂u(x, ξ, t)ω̂(t)v̂(ξ, t)dξ + ∫₀ˣ K̂u(x, ξ, t)Σ̂(t)ê(ξ, t)dξ
    + ∫₀ˣ K̂u(x, ξ, t)ω̂(t)ε̂(ξ, t)dξ + ∫₀ˣ K̂u(x, ξ, t)(ϕ(ξ, t) ◦ κ̂̇(t))1dξ
    + K̂v(x, x, t)μv̂(x, t) − K̂v(x, 0, t)μv̂(0, t)
    − ∫₀ˣ K̂ξv(x, ξ, t)μv̂(ξ, t)dξ + ∫₀ˣ K̂v(x, ξ, t)ℓ̂T(t)û(ξ, t)dξ
    + ∫₀ˣ K̂v(x, ξ, t)ℓ̂T(t)ê(ξ, t)dξ + ∫₀ˣ K̂v(x, ξ, t)ϕ0T(ξ, t)κ̂̇0(t)dξ
    + ∫₀ˣ K̂tu(x, ξ, t)û(ξ, t)dξ + ∫₀ˣ K̂tv(x, ξ, t)v̂(ξ, t)dξ.    (15.45)

Similarly, differentiating (15.41b) with respect to space, we obtain

v̂x(x, t) = βx(x, t) + K̂u(x, x, t)û(x, t) + K̂v(x, x, t)v̂(x, t)
    + ∫₀ˣ K̂xu(x, ξ, t)û(ξ, t)dξ + ∫₀ˣ K̂xv(x, ξ, t)v̂(ξ, t)dξ.    (15.46)

Inserting (15.45) and (15.46) into (15.38b), we find

0 = v̂t(x, t) − μv̂x(x, t) − ℓ̂T(t)u(x, t) − ϕ0T(x, t)κ̂̇0(t)
  = βt(x, t) − μβx(x, t)
    + ∫₀ˣ [K̂ξu(x, ξ, t)Λ + K̂u(x, ξ, t)Σ̂(t)
        + K̂v(x, ξ, t)ℓ̂T(t) − μK̂xu(x, ξ, t)]û(ξ, t)dξ
    − ∫₀ˣ [K̂ξv(x, ξ, t)μ + μK̂xv(x, ξ, t) − K̂u(x, ξ, t)ω̂(t)]v̂(ξ, t)dξ
    − [K̂u(x, x, t)Λ + μK̂u(x, x, t) + ℓ̂T(t)]û(x, t)
    − [K̂v(x, 0, t)μ − K̂u(x, 0, t)Λq̂(t)]v̂(0, t) + K̂u(x, 0, t)Λq̂(t)ε̂(0, t)
    − ℓ̂T(t)ê(x, t) + ∫₀ˣ K̂u(x, ξ, t)Σ̂(t)ê(ξ, t)dξ − ϕ0T(x, t)κ̂̇0(t)
    + ∫₀ˣ K̂v(x, ξ, t)ℓ̂T(t)ê(ξ, t)dξ + ∫₀ˣ K̂u(x, ξ, t)ω̂(t)ε̂(ξ, t)dξ
    + ∫₀ˣ K̂tu(x, ξ, t)û(ξ, t)dξ + ∫₀ˣ K̂tv(x, ξ, t)v̂(ξ, t)dξ
    + ∫₀ˣ K̂u(x, ξ, t)(ϕ(ξ, t) ◦ κ̂̇(t))1dξ
    + ∫₀ˣ K̂v(x, ξ, t)ϕ0T(ξ, t)κ̂̇0(t)dξ.    (15.47)

Using Eqs. (15.33), the result can be written as (15.43b). Inserting (15.41b) into (15.43a), we get

ût(x, t) + Λûx(x, t) = Σ̂(t)û(x, t) + ω̂(t)v̂(x, t) − ω̂(t)∫₀ˣ K̂u(x, ξ, t)û(ξ, t)dξ
    − ω̂(t)∫₀ˣ K̂v(x, ξ, t)v̂(ξ, t)dξ + ∫₀ˣ B̂1(x, ξ, t)û(ξ, t)dξ
    + ∫₀ˣ b̂2(x, ξ, t)v̂(ξ, t)dξ − ∫₀ˣ b̂2(x, ξ, t) ∫₀^ξ K̂u(ξ, s, t)û(s, t)ds dξ
    − ∫₀ˣ b̂2(x, ξ, t) ∫₀^ξ K̂v(ξ, s, t)v̂(s, t)ds dξ
    + Σ̂(t)ê(x, t) + ω̂(t)ε̂(x, t) + (ϕ(x, t) ◦ κ̂̇(t))1,    (15.48)

and using the û-dynamics (15.38a), and changing the order of integration in the double integrals, we find

0 = ∫₀ˣ [B̂1(x, ξ, t) − ω̂(t)K̂u(x, ξ, t) − ∫_ξ^x b̂2(x, s, t)K̂u(s, ξ, t)ds]û(ξ, t)dξ
    + ∫₀ˣ [b̂2(x, ξ, t) − ω̂(t)K̂v(x, ξ, t)
        − ∫_ξ^x b̂2(x, s, t)K̂v(s, ξ, t)ds]v̂(ξ, t)dξ    (15.49)

and hence B̂1 and b̂2 must satisfy (15.44). □

15.2.6 Proof of Theorem 15.2

In the derivations to follow, we will use

||T[u, v](t)|| ≤ G1||u(t)|| + G2||v(t)||    (15.50a)
||T⁻¹[u, v](t)|| ≤ G3||u(t)|| + G4||v(t)||    (15.50b)

for some positive constants G1, G2, G3, G4, which hold since the backstepping transformation is invertible with uniformly bounded integration kernels (Theorem 1.3).
Consider the functions

V2(t) = ∫₀¹ e−δx αT(x, t)α(x, t)dx    (15.51a)
V3(t) = ∫₀¹ ekx β²(x, t)dx    (15.51b)
V4(t) = ∫₀¹ e−δx ηT(x, t)η(x, t)dx    (15.51c)
V5(t) = ∫₀¹ ekx ψT(x, t)ψ(x, t)dx    (15.51d)
V6(t) = ∑ᵢ₌₁ⁿ ∫₀¹ e−δx piT(x, t)pi(x, t)dx    (15.51e)
V7(t) = ∫₀¹ e−δx νT(x, t)ν(x, t)dx    (15.51f)
V8(t) = ∫₀¹ ekx rT(x, t)r(x, t)dx    (15.51g)

for some positive constants δ and k, and weights ai, i = 3, . . . , 9, to be decided.


The following result is proved in Appendix E.9.

Lemma 15.3 There exist positive constants h1, h2, . . . , h13 and nonnegative, integrable functions l1, l2 such that

V̇2(t) ≤ −λ1e−δ|α(1, t)|² + h1β²(0, t) + h1ε̂²(0, t) − (δλ1 − h2)V2(t)
    + V3(t) + ||ê(t)||² + ||ε̂(t)||² + ||(ϕ(t) ◦ κ̂̇(t))1||²    (15.52a)
V̇3(t) ≤ −μβ²(0, t) − [kμ − h3]V3(t) + ekε̂²(0, t) + h4||ê(t)||²
    + h5||ε̂(t)||² + l1(t)V2(t) + l2(t)V3(t)
    + h6ek||(ϕ(t) ◦ κ̂̇(t))1||² + h7ek||ϕ0T(t)κ̂̇0(t)||²    (15.52b)
V̇4(t) ≤ −λ1e−δ|η(1, t)|² + h8β²(0, t) + h8ε̂²(0, t) − δλ1V4(t)    (15.52c)
V̇5(t) ≤ h9ek|α(1, t)|² + h9ek|ê(1, t)|² − μ|ψ(0, t)|² − kμV5(t)    (15.52d)
V̇6(t) ≤ −λ1e−δ|P(1, t)|² − [δλ1 − 1]V6(t) + h10V2(t) + h10||ê(t)||²    (15.52e)
V̇7(t) ≤ −λ1e−δ|ν(1, t)|² − (δλ1 − h11)V7(t)
    + h12eδV2(t) + h13V3(t) + 2||ε̂(t)||²    (15.52f)
V̇8(t) ≤ −μ|r(0, t)|² − [kμ − 2]V8(t) + eδ+kV2(t) + ek||ê(t)||²    (15.52g)
Now, construct the Lyapunov function candidate

V9(t) = a3V2(t) + a4V3(t) + a5V4(t) + a6V5(t) + a7V6(t) + a8V7(t) + a9V8(t).    (15.53)

If we let

a3 = h9/λ1,  a4 = (a5h8 + a3h1)/μ    (15.54a)
a5 = a7 = 1,  a6 = a9 = e−δ−k,  a8 = e−δ    (15.54b)

and then choose

δ > max{1, 1/λ1, h11/λ1, (a3h2 + h12 + 1 + h10)/(a3λ1)}    (15.55a)
k > max{1, 2/μ, (a4h3 + a3 + h13)/(a4μ)},    (15.55b)

we obtain by Lemma 15.3 the following bound

V̇9(t) ≤ −e−δλ1|η(1, t)|² − e−δ−kμ|ψ(0, t)|² − λ1e−δ|P(1, t)|²
    − λ1e−2δ|ν(1, t)|² − e−δ−kμ|r(0, t)|² − cV9(t)
    + [a3h1 + a4ek + a5h8]ε̂²(0, t) + a4l1(t)V2(t) + a4l2(t)V3(t)
    + a6h9ek|ê(1, t)|² + [a3 + a4h4ek + a7h10 + a9ek]||ê(t)||²
    + [a3 + a4h5ek + 2a8]||ε̂(t)||² + [a3 + a4h6ek]||(ϕ(t) ◦ κ̂̇(t))1||²
    + a4h7ek||ϕ0T(t)κ̂̇0(t)||²    (15.56)

for some positive constant c. We note that


||ê(t)||² = ∑ᵢ₌₁ⁿ ||êi(t)||² = ∑ᵢ₌₁ⁿ (1 + ||ϕi(t)||²) ||êi(t)||²/(1 + ||ϕi(t)||²)
    = ∑ᵢ₌₁ⁿ ||êi(t)||²/(1 + ||ϕi(t)||²) + ∑ᵢ₌₁ⁿ ||ϕi(t)||² ||êi(t)||²/(1 + ||ϕi(t)||²)
    ≤ l3(t) + l3(t)∑ᵢ₌₁ⁿ ||ϕi(t)||²
    ≤ l3(t) + l3(t)(||η(t)||² + ||P(t)||² + ||ν(t)||²)
    ≤ l3(t) + l3(t)eδ(V4(t) + V6(t) + V7(t))    (15.57)

and

||ε̂(t)||² = (1 + ||ϕ0(t)||²) ||ε̂(t)||²/(1 + ||ϕ0(t)||²) = l4(t) + l4(t)||ϕ0(t)||²
    = l4(t) + l4(t)(||ψ(t)||² + ||r(t)||²)
    ≤ l4(t) + l4(t)(V5(t) + V8(t))    (15.58)

where

l3(t) = ∑ᵢ₌₁ⁿ ||êi(t)||²/(1 + ||ϕi(t)||²),  l4(t) = ||ε̂(t)||²/(1 + ||ϕ0(t)||²)    (15.59)

are integrable functions. Moreover, we have


||(ϕ(t) ◦ κ̂̇(t))1||² = ∑ᵢ₌₁ⁿ ||ϕiT(t)κ̂̇i(t)||² ≤ ∑ᵢ₌₁ⁿ |κ̂̇i(t)|² ||ϕi(t)||² ≤ l5(t)||ϕ(t)||²
    ≤ l5(t)eδ(V4(t) + V6(t) + V7(t))    (15.60)

and

||ϕ0T(t)κ̂̇0(t)||² ≤ |κ̂̇0(t)|² ||ϕ0(t)||² ≤ l6(t)(V5(t) + V8(t))    (15.61)

where

l5(t) = ∑ᵢ₌₁ⁿ |κ̂̇i(t)|²,  l6(t) = |κ̂̇0(t)|²    (15.62)

are integrable functions. Inserting all this into (15.56), we obtain

V̇9(t) ≤ −cV9(t) − λ1e−2δ|ϕ(1, t)|² − e−δ−kμ|ϕ0(0, t)|²
    + b1|ê(1, t)|² + b2ε̂²(0, t) + l7(t)V9(t) + l8(t)    (15.63)

for some integrable functions l7 and l8 and positive constants

b1 = a6h9ek,  b2 = a3h1 + a4ek + a5h8,    (15.64)

and where we have used that

|ϕ(1, t)|² = |ν(1, t)|² + |η(1, t)|² + |P(1, t)|²    (15.65a)
|ϕ0(0, t)|² = |r(0, t)|² + |ψ(0, t)|².    (15.65b)

Moreover, for t ≥ tS,

|ê(1, t)|² = ∑ᵢ₌₁ⁿ |êi(1, t)|² = ∑ᵢ₌₁ⁿ |êi(1, t)|²/(1 + |ϕi(1, t)|²)
        + ∑ᵢ₌₁ⁿ |ϕi(1, t)|² |êi(1, t)|²/(1 + |ϕi(1, t)|²)
    ≤ ζ²(t) + ζ²(t)|ϕ(1, t)|²    (15.66a)
|ε̂(0, t)|² = (1 + |ϕ0(0, t)|²) |ε̂(0, t)|²/(1 + |ϕ0(0, t)|²)
    ≤ ζ²(t) + ζ²(t)|ϕ0(0, t)|²    (15.66b)

where we have used the definition of ζ in (15.28). Substituting (15.66) into (15.63), we obtain

V̇9(t) ≤ −cV9(t) − [λ1e−2δ − b1ζ²(t)]|ϕ(1, t)|²
    − [μe−δ−k − b2ζ²(t)]|ϕ0(0, t)|² + l7(t)V9(t) + l9(t)    (15.67)

where

l9(t) = l8(t) + (b1 + b2)ζ²(t)    (15.68)

is an integrable function. From (15.22), (15.28) and (15.26), we have for t ≥ tS

ζ²(t) = ∑ᵢ₌₁ⁿ ||ϕiT(t)κ̃i(t)||²/(1 + ||ϕi(t)||²) + ∑ᵢ₌₁ⁿ |ϕiT(1, t)κ̃i(t)|²/(1 + |ϕi(1, t)|²)
    + ||ϕ0T(t)κ̃0(t)||²/(1 + ||ϕ0(t)||²) + |ϕ0T(0, t)κ̃0(t)|²/(1 + |ϕ0(0, t)|²)
    ≤ 2∑ᵢ₌₁ⁿ |κ̃i(t)|² + 2|κ̃0(t)|² ≤ 4γ̄V(t)    (15.69)

where γ̄ is the largest eigenvalue of Γ. Lemma B.4 in Appendix B then gives V9 ∈ L1 ∩ L∞ and hence

||α||, ||β||, ||η||, ||ψ||, ||P||, ||ν||, ||r || ∈ L2 ∩ L∞ , (15.70)

meaning that |ϕ(1, t)|2 and |ϕ0 (0, t)|2 must be bounded for almost all t ≥ 0, resulting
in

ζ 2 |ϕ(1, ·)|2 , ζ 2 |ϕ0 (0, ·)|2 ∈ L1 (15.71)

since ζ 2 ∈ L1 . Hence (15.67) can be bounded as



V̇9 (t) ≤ −cV9 (t) + l7 (t)V9 (t) + l10 (t) (15.72)

where

l10 (t) = l9 (t) + b1 ζ 2 (t)|ϕ(1, t)|2 + b2 ζ 2 (t)|ϕ0 (0, t)|2 (15.73)

is integrable. Lemma B.3 in Appendix B then gives V9 → 0, and

||α||, ||β||, ||η||, ||ψ||, ||P||, ||ν||, ||r || → 0. (15.74)

Due to the invertibility of the backstepping transformation (15.41), we then also have

||û||, ||v̂|| ∈ L2 ∩ L∞ , ||û||, ||v̂|| → 0 (15.75)

while from (15.18)

||φ|| ∈ L2 ∩ L∞ , ||φ|| → 0. (15.76)

Lastly, from (15.5) and Lemma 15.1 we have

||u||, ||v|| ∈ L2 ∩ L∞ , ||u||, ||v|| → 0. (15.77)

15.3 Simulation

System (13.1) and the controller of Theorem 15.2 are implemented for n = 2 using the system parameters

Λ = diag{1, 1.5},  μ = 2    (15.78a)

Σ = [ −2  −1 ],   ω = [ −2 ],   ℓ = [  1 ]    (15.78b)
    [  3   1 ]         [  1 ]         [ −2 ]

q = [1  1]T,  c = [−1  2]T.    (15.78c)

The initial values for the plant are set to

u0(x) = [sin(πx)  eˣ − 1]T,  v0 ≡ 0.    (15.79)
Fig. 15.1 Left: State norm. Right: Actuation

Fig. 15.2 Estimated parameters

All initial conditions for the filters and adaptive laws are set to zero. The kernel equations (15.33) are solved on-line using the method described in Appendix F.2. Figure 15.1 shows that the norm of the system states and the actuation signal are bounded and converge to zero. All estimated parameters are seen to be bounded in Fig. 15.2, as predicted by the theory.

15.4 Notes

In previous chapters, we managed to prove pointwise boundedness of the states for both adaptive and non-adaptive schemes. This is not achieved for the adaptive state-feedback controller of Theorem 15.2. By inserting the control law (15.32) into the target system (14.8), which is the method used for proving pointwise boundedness in previous chapters, one obtains

αt(x, t) + Λ(x)αx(x, t) = Σ(x)α(x, t) + ω(x)β(x, t) + ∫₀ˣ B1(x, ξ)α(ξ, t)dξ
    + ∫₀ˣ b2(x, ξ)β(ξ, t)dξ    (15.80a)
βt(x, t) − μ(x)βx(x, t) = 0    (15.80b)
α(0, t) = qβ(0, t)    (15.80c)
β(1, t) = c̃T(t)u(1, t) + ∫₀¹ K̂u(1, ξ, t)û(ξ, t)dξ
    + ∫₀¹ K̂v(1, ξ, t)v̂(ξ, t)dξ − ∫₀¹ Ku(1, ξ)u(ξ, t)dξ
    − ∫₀¹ Kv(1, ξ)v(ξ, t)dξ    (15.80d)
α(x, 0) = α0(x)    (15.80e)
β(x, 0) = β0(x).    (15.80f)

The problematic issue is the term c̃T(t)u(1, t) in (15.80d), due to which we cannot ensure β(1, ·) ∈ L∞ and hence pointwise boundedness. If cT, however, is known, so that c̃T(t)u(1, t) = 0, then pointwise boundedness and convergence to zero can be proved.

Reference

Anfinsen H, Aamo OM (2017) Adaptive state feedback stabilization of n + 1 coupled linear hyperbolic PDEs. In: 25th Mediterranean Conference on Control and Automation, Valletta, Malta
Chapter 16
Adaptive Output-Feedback: Uncertain Boundary Condition

16.1 Introduction

We will now consider the n + 1 system (13.1) again, but with the parameter q in the boundary condition at x = 0, anti-collocated with actuation, uncertain. We allow system (13.1) to have spatially varying coefficients, assume (13.9) and (13.7), and derive an observer estimating the system states and q from boundary sensing only. The derived adaptive observer is also combined with a control law achieving closed-loop adaptive stabilization from boundary sensing only. This adaptive observer design was initially proposed in Anfinsen et al. (2016), while the observer was combined with a control law in Anfinsen and Aamo (2017a).

16.2 Sensing at Both Boundaries

16.2.1 Filter Design and Non-adaptive State Estimates

Considering system (13.1) with the uncertain parameter q, we introduce the filters

ηt(x, t) + Λ(x)ηx(x, t) = Σ(x)η(x, t) + ω(x)φ(x, t) + k1(x)(y0(t) − φ(0, t))   (16.1a)
φt(x, t) − μ(x)φx(x, t) = θᵀ(x)η(x, t) + k2(x)(y0(t) − φ(0, t))   (16.1b)
η(0, t) = 0   (16.1c)
φ(1, t) = cᵀy1(t) + U(t)   (16.1d)
η(x, 0) = η0(x)   (16.1e)
φ(x, 0) = φ0(x)   (16.1f)

© Springer Nature Switzerland AG 2019 299


H. Anfinsen and O. M. Aamo, Adaptive Control of Hyperbolic PDEs,
Communications and Control Engineering,
https://doi.org/10.1007/978-3-030-05879-1_16

and

Pt(x, t) + Λ(x)Px(x, t) = Σ(x)P(x, t) + ω(x)rᵀ(x, t) − k1(x)rᵀ(0, t)   (16.2a)
rᵀt(x, t) − μ(x)rᵀx(x, t) = θᵀ(x)P(x, t) − k2(x)rᵀ(0, t)   (16.2b)
P(0, t) = In v(0, t)   (16.2c)
rᵀ(1, t) = 0   (16.2d)
P(x, 0) = P0(x)   (16.2e)
rᵀ(x, 0) = r0ᵀ(x)   (16.2f)

where
η(x, t) = [η1(x, t) . . . ηn(x, t)]ᵀ   (16.3a)
P(x, t) = {pi,j(x, t)}1≤i,j≤n   (16.3b)
r(x, t) = [r1(x, t) . . . rn(x, t)]ᵀ   (16.3c)

and φ(x, t) is a scalar, with initial conditions satisfying

η0 , φ0 , P0 , r0 ∈ B([0, 1]), (16.4)

and In is the n × n identity matrix. The injection gains k1 , k2 are chosen as

k1(x) = μ(0)M^α(x, 0)   (16.5a)
k2(x) = μ(0)M^β(x, 0),   (16.5b)

where (M^α, M^β) is the solution to

Λ(x)M^α_x(x, ξ) − μ(ξ)M^α_ξ(x, ξ) = M^α(x, ξ)μ′(ξ) + Σ(x)M^α(x, ξ)
                                    + ω(x)M^β(x, ξ)   (16.6a)
μ(x)M^β_x(x, ξ) + μ(ξ)M^β_ξ(x, ξ) = −M^β(x, ξ)μ′(ξ) − θᵀ(x)M^α(x, ξ)   (16.6b)
Λ(x)M^α(x, x) + μ(x)M^α(x, x) = ω(x)   (16.6c)
M^β(1, ξ) = 0   (16.6d)

defined over the triangular domain T given in (1.1a). Note that Eq. (16.6) is the same
as (14.30) with c = 0, and is therefore well-posed. Using these filters, we define the
non-adaptive state estimates

ū(x, t) = η(x, t) + P(x, t)q,  v̄(x, t) = φ(x, t) + rᵀ(x, t)q.   (16.7)



Lemma 16.1 Consider system (13.1) subject to (13.11) and (13.9), and the non-
adaptive state estimates (16.7) generated using the filters (16.1)–(16.2). For t ≥ t F ,
with t F defined in (14.5), we have

ū ≡ u, v̄ ≡ v. (16.8)

Proof The non-adaptive estimation errors

e(x, t) = u(x, t) − ū(x, t),  ε(x, t) = v(x, t) − v̄(x, t)   (16.9)

can straightforwardly be shown to satisfy

et(x, t) + Λ(x)ex(x, t) = Σ(x)e(x, t) + ω(x)ε(x, t) − k1(x)ε(0, t)   (16.10a)
εt(x, t) − μ(x)εx(x, t) = θᵀ(x)e(x, t) − k2(x)ε(0, t)   (16.10b)
e(0, t) = 0   (16.10c)
ε(1, t) = 0   (16.10d)
e(x, 0) = e0(x)   (16.10e)
ε(x, 0) = ε0(x)   (16.10f)

for some initial conditions e0, ε0 ∈ B([0, 1]). The dynamics (16.10) have the same
form as the dynamics (14.32), the only difference being that c = 0. The result
then immediately follows from the proof of Theorem 14.2 and the fact that the kernel
equations (16.6) are identical to the kernel equations (14.30) with c = 0. 

16.2.2 Parameter Update Law

From the static form (16.7) and the result of Lemma 16.1, one can use standard
gradient or least squares update laws to estimate the unknown parameters in q. First,
we will assume we have some bounds on q.
Assumption 16.1 A bound q̄ on all elements in q is known, so that

|q|∞ ≤ q̄. (16.11)

Next, define the following vector of errors

ε(t) = h(t) − Ψ (t)q (16.12)

where, with rows stacked,

ε(t) = [e(1, t); ε(0, t)],  h(t) = [u(1, t) − η(1, t); v(0, t) − φ(0, t)],  Ψ(t) = [P(1, t); rᵀ(0, t)].   (16.13)

Note that all the elements of h(t) and Ψ(t) are either generated using filters or mea-
sured. We now propose a gradient law with normalization and projection, given as

q̂˙(t) = proj( ΓΨᵀ(t)ε̂(t) / (1 + |Ψ(t)|²), q̂(t) ),  q̂(0) = q̂0   (16.14)

for some gain matrix Γ > 0 and some initial guess q̂0 satisfying

|q̂0|∞ ≤ q̄,   (16.15)

and where ε̂(t) is the prediction error

ε̂(t) = h(t) − Ψ (t)q̂(t). (16.16)

The projection operator is defined in Appendix A.
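To make the construction concrete, the sketch below (our illustration, not code from the book; the function names and the simple box projection are assumptions — the book's projection operator of Appendix A is more general) performs one explicit-Euler step of the normalized gradient law (16.14), with projection realized as element-wise clipping to the box |q̂|∞ ≤ q̄:

```python
import numpy as np

def gradient_step(q_hat, h, Psi, Gamma, q_bar, dt):
    """One explicit-Euler step of the normalized, projected gradient law (16.14):
    q_hat' = proj( Gamma Psi^T eps_hat / (1 + |Psi|^2), q_hat ).

    q_hat : (n,) current estimate, h : (m,) regression output,
    Psi : (m, n) regressor, Gamma : (n, n) positive definite gain.
    """
    eps_hat = h - Psi @ q_hat                # prediction error (16.16)
    norm2 = 1.0 + np.sum(Psi ** 2)           # normalization 1 + |Psi|^2
    dq = Gamma @ (Psi.T @ eps_hat) / norm2   # unprojected update direction
    q_new = q_hat + dt * dq
    # element-wise projection onto the box |q_hat|_inf <= q_bar (keeps (16.17a))
    return np.clip(q_new, -q_bar, q_bar)
```

With persistently exciting data, iterating this step drives q̂ toward q, and the clipping keeps the bound (16.17a) satisfied at every step.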

Theorem 16.1 Consider system (13.1) with filters (16.1) and (16.2) and injection
gains given by (16.5). The update law (16.14) guarantees

|q̂(t)|∞ ≤ q̄, ∀t ≥ 0   (16.17a)
ζ ∈ L∞ ∩ L2   (16.17b)
q̂˙ ∈ L∞ ∩ L2   (16.17c)

where

ζ(t) = ε̂(t) / √(1 + |Ψ(t)|²).   (16.18)

Moreover, if Ψ (t) and Ψ̇ (t) are bounded and Ψ (t) is persistently exciting (PE), then

q̂ → q (16.19)

exponentially fast.
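For completeness, Ψ being persistently exciting means here (in the standard sense of the adaptive control literature, e.g. Ioannou and Sun 1995) that there exist constants α0, T0 > 0 such that

```latex
\int_{t}^{t+T_0} \Psi^{\mathsf{T}}(\tau)\,\Psi(\tau)\,\mathrm{d}\tau \;\succeq\; \alpha_0 T_0 I_n
\qquad \text{for all } t \ge 0,
```

i.e. the regressor excites all n directions of the parameter space on every window of length T0.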

Proof Property (16.17a) follows from Lemma A.1. Consider the Lyapunov function
candidate

V1(t) = (1/2) q̃ᵀ(t)Γ⁻¹q̃(t)   (16.20)

where q̃ = q − q̂. Differentiating with respect to time and inserting (16.14), and
using Lemma A.1, we find

V̇1(t) ≤ −q̃ᵀ(t) Ψᵀ(t)ε̂(t) / (1 + |Ψ(t)|²).   (16.21)

Using (16.12) and (16.16), and noticing from (16.10c) and (16.10d) the fact that
ε(t) ≡ 0, we have

ε̂(t) = Ψ(t)q̃(t),   (16.22)

from which we obtain

V̇1 (t) ≤ −ζ 2 (t) (16.23)

where we have used the definition (16.18). Inequality (16.23) shows that V1 is non-
increasing, and hence has a limit as t → ∞. Integrating from zero to infinity gives

ζ ∈ L2 . (16.24)

Using (16.22), we have from (16.18) that

|ζ(t)| = |ε̂(t)|/√(1 + |Ψ(t)|²) = |Ψ(t)q̃(t)|/√(1 + |Ψ(t)|²) ≤ |q̃(t)|   (16.25)

which proves the last part of (16.17b). From (16.14) we find

|q̂˙(t)| ≤ |Γ| |Ψᵀ(t)ε̂(t)|/(1 + |Ψ(t)|²) ≤ |Γ||ζ(t)|   (16.26)

which from (16.17b) proves (16.17c). The property (16.19) follows immediately
from part (iii) of Theorem 4.3.2 in Ioannou and Sun (1995). 

16.2.3 State Estimation

Using the filters derived above and the boundary parameter estimates generated from
Theorem 16.1, we can generate estimates of the system states u, v by simply replacing
q in (16.7) by its estimate q̂, as follows

û(x, t) = η(x, t) + P(x, t)q̂(t), v̂(x, t) = φ(x, t) + r T (x, t)q̂(t). (16.27)

Lemma 16.2 Consider system (13.1) and the adaptive state estimates (16.27) generated
using the filters (16.1) and (16.2) and the update law of Theorem 16.1. The
corresponding prediction errors

ê(x, t) = u(x, t) − û(x, t),  ε̂(x, t) = v(x, t) − v̂(x, t)   (16.28)

have the following properties, for t ≥ tF:

||ê(t)|| ≤ ||P(t)|| |q̃(t)|,  ||ε̂(t)|| ≤ ||r(t)|| |q̃(t)|.   (16.29)

Proof Using the definitions (16.7), (16.9), (16.27) and (16.28), one immediately
finds

ê(x, t) = e(x, t) + P(x, t)q̃(t),  ε̂(x, t) = ε(x, t) + rᵀ(x, t)q̃(t).   (16.30)

Since e ≡ 0 and ε ≡ 0 for t ≥ tF (Lemma 16.1), the bounds (16.29) immediately follow.



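The estimates (16.27) are purely algebraic in the filter states, which makes them cheap to form at each time step. A sketch on a spatial grid (our illustration; the array shapes and names are assumptions, not the book's notation):

```python
import numpy as np

def adaptive_estimates(eta, phi, P, r, q_hat):
    """Adaptive state estimates (16.27) sampled on a spatial grid.

    eta : (N, n) filter state, phi : (N,) filter state,
    P : (N, n, n) regressor filter, r : (N, n) regressor filter,
    q_hat : (n,) current parameter estimate.
    Returns u_hat : (N, n) and v_hat : (N,).
    """
    u_hat = eta + P @ q_hat   # û = η + P q̂ (row-wise matrix-vector product)
    v_hat = phi + r @ q_hat   # v̂ = φ + rᵀ q̂
    return u_hat, v_hat
```

Once the non-adaptive errors e, ε are zero (t ≥ tF), the identities (16.30) reduce to ê = P q̃ and ε̂ = rᵀq̃, which is exactly what the bound (16.29) expresses in norms.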

16.2.4 Control Law

We propose the control law

U(t) = −ρᵀy1(t) + ∫_0^1 K̂^u(1, ξ, t)û(ξ, t)dξ + ∫_0^1 K̂^v(1, ξ, t)v̂(ξ, t)dξ   (16.31)

where ( K̂ u , K̂ v ) is the on-line solution to the PDE

μ(x)K̂^u_x(x, ξ, t) − K̂^u_ξ(x, ξ, t)Λ(ξ) = K̂^u(x, ξ, t)Λ′(ξ) + K̂^u(x, ξ, t)Σ(ξ)
                                           + K̂^v(x, ξ, t)θᵀ(ξ)   (16.32a)
μ(x)K̂^v_x(x, ξ, t) + K̂^v_ξ(x, ξ, t)μ(ξ) = K̂^u(x, ξ, t)ω(ξ)
                                           − K̂^v(x, ξ, t)μ′(ξ)   (16.32b)
K̂^u(x, x, t)Λ(x) + μ(x)K̂^u(x, x, t) = −θᵀ(x)   (16.32c)
K̂^v(x, 0, t)μ(0) = K̂^u(x, 0, t)Λ(0)q̂(t)   (16.32d)

where q̂ is generated using the adaptive observer of Theorem 16.1. The existence of
a unique solution (K̂^u, K̂^v) to (16.32) for every time t is guaranteed by Theorem D.4
in Appendix D. Moreover, since the coefficients are uniformly bounded, the solution
is bounded in the sense of

||K̂^u_i(t)||∞ ≤ K̄,  ||K̂^v(t)||∞ ≤ K̄,  ∀t ≥ 0,  i = 1 . . . n   (16.33)

for some constant K̄. Additionally, from differentiating (16.32) with respect to time
and applying Theorem D.4, we obtain

||K̂^u_{i,t}||, ||K̂^v_t|| ∈ L2,  i = 1 . . . n.   (16.34)

Theorem 16.2 Consider system (13.1) with the update law of Theorem 16.1 and
state estimates û, v̂ generated using Lemma 16.2. The control law (16.31) guarantees

||u||, ||v||, ||η||, ||φ||, ||P||, ||r|| ∈ L∞ ∩ L2   (16.35a)
||u||, ||v||, ||η||, ||φ||, ||P||, ||r|| → 0   (16.35b)
||u||∞, ||v||∞, ||η||∞, ||φ||∞, ||P||∞, ||r||∞ ∈ L∞ ∩ L2   (16.35c)
||u||∞, ||v||∞, ||η||∞, ||φ||∞, ||P||∞, ||r||∞ → 0.   (16.35d)

The proof of this theorem is the subject of the next sections.

16.2.5 Backstepping of Estimator Dynamics

First, we will derive the dynamics of the state estimates (16.27). Their dynamics
are needed for the backstepping design in subsequent sections. By straightforward
differentiation using (16.27) and the filters (16.1) and (16.2), one can verify that the
dynamics satisfy

ût(x, t) + Λ(x)ûx(x, t) = Σ(x)û(x, t) + ω(x)v̂(x, t) + k1(x)ε̂(0, t)
                          + P(x, t)q̂˙(t)   (16.36a)
v̂t(x, t) − μ(x)v̂x(x, t) = θᵀ(x)û(x, t) + k2(x)ε̂(0, t) + rᵀ(x, t)q̂˙(t)   (16.36b)
û(0, t) = q̂(t)v(0, t)   (16.36c)
v̂(1, t) = ρᵀu(1, t) + U(t)   (16.36d)
û(x, 0) = û0(x)   (16.36e)
v̂(x, 0) = v̂0(x)   (16.36f)

for initial conditions û0, v̂0 ∈ B([0, 1]), and where we have inserted for the
measurements (13.1g)–(13.1h).
Consider the backstepping transformation

α(x, t) = û(x, t)   (16.37a)
β(x, t) = v̂(x, t) − ∫_0^x K̂^u(x, ξ, t)û(ξ, t)dξ
          − ∫_0^x K̂^v(x, ξ, t)v̂(ξ, t)dξ = T[û, v̂](x, t)   (16.37b)

with inverse

û(x, t) = α(x, t)   (16.38a)
v̂(x, t) = T⁻¹[α, β](x, t)   (16.38b)

where K̂ u , K̂ v satisfy the PDE (16.32), and the target system



αt(x, t) + Λ(x)αx(x, t) = Σ(x)α(x, t) + ω(x)β(x, t) + ∫_0^x B̂1(x, ξ, t)α(ξ, t)dξ
                          + ∫_0^x b̂2(x, ξ, t)β(ξ, t)dξ + k1(x)ε̂(0, t)
                          + P(x, t)q̂˙(t)   (16.39a)
βt(x, t) − μ(x)βx(x, t) = T[P, rᵀ](x, t)q̂˙(t) + T[k1, k2](x, t)ε̂(0, t)
                          − K̂^u(x, 0, t)Λ(0)q̂(t)ε̂(0, t)
                          − ∫_0^x K̂^u_t(x, ξ, t)α(ξ, t)dξ
                          − ∫_0^x K̂^v_t(x, ξ, t)T⁻¹[α, β](ξ, t)dξ   (16.39b)
α(0, t) = q̂(t)β(0, t) + q̂(t)ε̂(0, t)   (16.39c)
β(1, t) = 0   (16.39d)
α(x, 0) = α0(x)   (16.39e)
β(x, 0) = β0(x)   (16.39f)

where ( B̂1 , b̂2 ) is the solution to the Volterra integral equation (15.44), and α0 , β0 ∈
B([0, 1]). The following holds.
Lemma 16.3 The backstepping transformation (16.37) and control law (16.31) map
system (16.36) into the target system (16.39).
Proof Differentiating (16.37b) with respect to time, inserting the dynamics (16.36a)–
(16.36b) and integrating by parts, we find

v̂t(x, t) = βt(x, t) − K̂^u(x, x, t)Λ(x)û(x, t) + K̂^u(x, 0, t)Λ(0)û(0, t)
           + ∫_0^x K̂^u_ξ(x, ξ, t)Λ(ξ)û(ξ, t)dξ + ∫_0^x K̂^u(x, ξ, t)Λ′(ξ)û(ξ, t)dξ
           + ∫_0^x K̂^u(x, ξ, t)Σ(ξ)û(ξ, t)dξ + ∫_0^x K̂^u(x, ξ, t)ω(ξ)v̂(ξ, t)dξ
           + ∫_0^x K̂^u(x, ξ, t)k1(ξ)ε̂(0, t)dξ + ∫_0^x K̂^u(x, ξ, t)P(ξ, t)q̂˙(t)dξ
           + K̂^v(x, x, t)μ(x)v̂(x, t) − K̂^v(x, 0, t)μ(0)v̂(0, t)
           − ∫_0^x K̂^v_ξ(x, ξ, t)μ(ξ)v̂(ξ, t)dξ − ∫_0^x K̂^v(x, ξ, t)μ′(ξ)v̂(ξ, t)dξ
           + ∫_0^x K̂^v(x, ξ, t)θᵀ(ξ)û(ξ, t)dξ + ∫_0^x K̂^v(x, ξ, t)k2(ξ)ε̂(0, t)dξ
           + ∫_0^x K̂^v(x, ξ, t)rᵀ(ξ, t)q̂˙(t)dξ + ∫_0^x K̂^u_t(x, ξ, t)û(ξ, t)dξ
           + ∫_0^x K̂^v_t(x, ξ, t)v̂(ξ, t)dξ.   (16.40)

Similarly, differentiating (16.37b) with respect to space, we find

v̂x(x, t) = βx(x, t) + K̂^u(x, x, t)û(x, t) + ∫_0^x K̂^u_x(x, ξ, t)û(ξ, t)dξ
           + K̂^v(x, x, t)v̂(x, t) + ∫_0^x K̂^v_x(x, ξ, t)v̂(ξ, t)dξ.   (16.41)

Inserting (16.40) and (16.41) into the v̂-dynamics (16.36b), we find

0 = βt(x, t) − μ(x)βx(x, t)
    + ∫_0^x [ K̂^u_ξ(x, ξ, t)Λ(ξ) + K̂^u(x, ξ, t)Λ′(ξ) + K̂^u(x, ξ, t)Σ(ξ)
              + K̂^v(x, ξ, t)θᵀ(ξ) − μ(x)K̂^u_x(x, ξ, t) ] û(ξ, t)dξ
    + ∫_0^x [ K̂^u(x, ξ, t)ω(ξ) − K̂^v_ξ(x, ξ, t)μ(ξ)
              − K̂^v(x, ξ, t)μ′(ξ) − μ(x)K̂^v_x(x, ξ, t) ] v̂(ξ, t)dξ
    − [ rᵀ(x, t) − ∫_0^x K̂^u(x, ξ, t)P(ξ, t)dξ − ∫_0^x K̂^v(x, ξ, t)rᵀ(ξ, t)dξ ] q̂˙(t)
    − [ k2(x) − ∫_0^x K̂^u(x, ξ, t)k1(ξ)dξ − ∫_0^x K̂^v(x, ξ, t)k2(ξ)dξ ] ε̂(0, t)
    − [ μ(x)K̂^u(x, x, t) + K̂^u(x, x, t)Λ(x) + θᵀ(x) ] û(x, t)
    − [ K̂^v(x, 0, t)μ(0) − K̂^u(x, 0, t)Λ(0)q̂(t) ] v̂(0, t)
    + K̂^u(x, 0, t)Λ(0)q̂(t)ε̂(0, t)
    + ∫_0^x K̂^u_t(x, ξ, t)û(ξ, t)dξ + ∫_0^x K̂^v_t(x, ξ, t)v̂(ξ, t)dξ.   (16.42)

Using Eq. (16.32) and the inverse transformation (16.38), we obtain (16.39b). Inserting
the transformation (16.37) into (16.39a), we find

ût(x, t) + Λ(x)ûx(x, t) = Σ(x)û(x, t) + ω(x)v̂(x, t)
    − ω(x)∫_0^x K̂^u(x, ξ, t)û(ξ, t)dξ − ω(x)∫_0^x K̂^v(x, ξ, t)v̂(ξ, t)dξ
    + ∫_0^x B̂1(x, ξ, t)û(ξ, t)dξ + ∫_0^x b̂2(x, ξ, t)v̂(ξ, t)dξ
    − ∫_0^x ∫_0^ξ b̂2(x, ξ, t)K̂^u(ξ, s, t)û(s, t)ds dξ
    − ∫_0^x ∫_0^ξ b̂2(x, ξ, t)K̂^v(ξ, s, t)v̂(s, t)ds dξ
    + k1(x)ε̂(0, t) + P(x, t)q̂˙(t).   (16.43)

Changing the order of integration in the double integrals, (16.43) can be written as

ût(x, t) + Λ(x)ûx(x, t) − Σ(x)û(x, t) − ω(x)v̂(x, t) − k1(x)ε̂(0, t) − P(x, t)q̂˙(t)
    = ∫_0^x [ B̂1(x, ξ, t) − ω(x)K̂^u(x, ξ, t)
              − ∫_ξ^x b̂2(x, s, t)K̂^u(s, ξ, t)ds ] û(ξ, t)dξ
      + ∫_0^x [ b̂2(x, ξ, t) − ω(x)K̂^v(x, ξ, t)
              − ∫_ξ^x b̂2(x, s, t)K̂^v(s, ξ, t)ds ] v̂(ξ, t)dξ.   (16.44)

Using the Eqs. (15.44) yields the dynamics (16.36a). We find from inserting the back-
stepping transformation (16.37) into (16.36d) that the control law (16.31) produces
the boundary condition (16.39d). The last boundary condition (16.39c) results from
inserting (16.37) into (16.36c) and noting that v(0, t) = β(0, t) + ε̂(0, t). 

16.2.6 Backstepping of Regressor Filters

To ease the Lyapunov analysis, we will also perform a backstepping transformation
of the filters P, r, mapping (16.2) into a target system. The transformation is

P(x, t) = W(x, t) + ∫_0^x M^α(x, ξ)zᵀ(ξ, t)dξ   (16.45a)
rᵀ(x, t) = zᵀ(x, t) + ∫_0^x M^β(x, ξ)zᵀ(ξ, t)dξ   (16.45b)
where (M α , M β ) satisfies the PDE (16.6), while the target system is


x
Wt (x, t) + Λ(x)Wx (x, t) = Σ(x)W (x, t) + D1 (x, ξ)W (ξ, t)dξ (16.46a)
0
x
z tT (x, t) − μ(x)z xT (x, t) = θ T (x)W (x, t) + d2T (x, ξ)W (ξ, t)dξ (16.46b)
0
W (0, t) = In (β(0, t) + ˆ(0, t)) (16.46c)
16.2 Sensing at Both Boundaries 309

z T (1, t) = 0 (16.46d)
W (x, 0) = W0 (x) (16.46e)
z(x, 0) = z 0 (x) (16.46f)

where D1 and d2 are given from (14.35).


Lemma 16.4 Consider filters (16.2) with injection gains given by (16.5). The backstepping
transformation (16.45) maps the target system (16.46) into the filter (16.2).

Proof The filters (16.2) have the same structure as the error dynamics (16.10), which
in turn have the same form as the error dynamics (14.32). The proof of the trans-
formation is therefore similar to the proof of Theorem 14.2, and is omitted. The
boundary condition (16.46c) follows from noting that v(0, t) = β(0, t) + ε̂(0, t).

16.2.7 Proof of Theorem 16.2

First, we state the following inequalities, which result from the fact that the backstepping
transformations are invertible and also act on the individual columns of P:
||Pi(t)|| ≤ A1||Wi(t)|| + A2||zi(t)||   (16.47a)
||ri(t)|| ≤ A3||Wi(t)|| + A4||zi(t)||   (16.47b)

and

||Wi(t)|| ≤ B1||Pi(t)|| + B2||ri(t)||   (16.48a)
||zi(t)|| ≤ B3||Pi(t)|| + B4||ri(t)||   (16.48b)

for some positive constants A1 . . . A4 and B1 . . . B4, where Pi, Wi are the columns
of P, W and ri, zi are the elements of r, z, that is

P(x, t) = [P1(x, t) P2(x, t) . . . Pn(x, t)]   (16.49a)
rᵀ(x, t) = [r1(x, t) r2(x, t) . . . rn(x, t)]   (16.49b)
W(x, t) = [W1(x, t) W2(x, t) . . . Wn(x, t)]   (16.49c)
zᵀ(x, t) = [z1(x, t) z2(x, t) . . . zn(x, t)].   (16.49d)

Moreover, we also have

||T[u, v](t)|| ≤ G1||u(t)|| + G2||v(t)||   (16.50a)
||T⁻¹[u, v](t)|| ≤ G3||u(t)|| + G4||v(t)||   (16.50b)

for some positive constants G1, G2, G3 and G4.



Consider systems (16.39) and (16.46), and the Lyapunov function candidate

V2(t) = Σ_{i=3}^6 ai Vi(t)   (16.51)

where a3 . . . a6 are positive constants to be decided, and

V3(t) = ∫_0^1 e^(−δx) αᵀ(x, t)Λ⁻¹(x)α(x, t)dx   (16.52a)
V4(t) = ∫_0^1 e^(kx) μ⁻¹(x)β²(x, t)dx   (16.52b)
V5(t) = Σ_{i=1}^n ∫_0^1 e^(−δx) Wiᵀ(x, t)Λ⁻¹(x)Wi(x, t)dx   (16.52c)
V6(t) = Σ_{i=1}^n ∫_0^1 e^(kx) μ⁻¹(x)zi²(x, t)dx.   (16.52d)

The following result is proved in Appendix E.10.

Lemma 16.5 Subject to the assumption k, δ ≥ 1, there exist positive constants
h1, h2, . . . , h6 and nonnegative, integrable functions l1, l2, . . . , l6 such that

V̇3(t) ≤ 2n q̄²β²(0, t) − (δλ̲ − h1)V3(t) + h2V4(t)
        + h3ε̂²(0, t) + l1(t)V5(t) + l2(t)V6(t)   (16.53a)
V̇4(t) ≤ −β²(0, t) − (kμ̲ − 5)V4(t) + l3(t)V3(t) + l4(t)V4(t)
        + l5(t)V5(t) + l6(t)V6(t) + h4e^k ε̂²(0, t)   (16.53b)
V̇5(t) ≤ −e^(−δ)|W(1, t)|² + 2nβ²(0, t) + 2nε̂²(0, t) − (λ̲δ − h5)V5(t)   (16.53c)
V̇6(t) ≤ −|z(0, t)|² + h6e^(k+δ)V5(t) − (kμ̲ − 2)V6(t)   (16.53d)

where λ̲ and μ̲ denote lower bounds on λ1(x) and μ(x), respectively.


Now, let

a3 = a5 = 1,  a4 = 2n(1 + q̄²),  a6 = e^(−δ−k);   (16.54)

then, by Lemma 16.5,

V̇2(t) ≤ −(δλ̲ − h1)V3(t) − (a4kμ̲ − 5a4 − h2)V4(t) − (λ̲δ − h5 − h6)V5(t)
        − e^(−k−δ)(kμ̲ − 2)V6(t) + (h3 + a4h4e^k + 2n)ε̂²(0, t)
        − e^(−k−δ)|z(0, t)|² − e^(−δ)|W(1, t)|² + l7(t)V2(t)   (16.55)

for an integrable function l7. Now let

δ > max{1, h1/λ̲, (h5 + h6)/λ̲},  k > max{1, (5a4 + h2)/(a4μ̲), 2/μ̲};   (16.56)

then

V̇2(t) ≤ −cV2(t) − e^(−k−δ)|z(0, t)|² − e^(−δ)|W(1, t)|²
        + bε̂²(0, t) + l7(t)V2(t)   (16.57)

where

b = h3 + a4h4e^k + 2n   (16.58)

is a positive constant. Consider ε̂²(0, t), which can be written as

ε̂²(0, t) = [ε̂²(0, t)/(1 + |r(0, t)|² + |P(1, t)|²)] (1 + |r(0, t)|² + |P(1, t)|²).   (16.59)

Using the backstepping transformation (16.45a), we find

|P(1, t)|² ≤ 2|W(1, t)|² + 2M̄²||z(t)||²   (16.60)

where M̄ bounds the kernel M^α. Expressed using the Lyapunov function V6, we find

|P(1, t)|² ≤ 2|W(1, t)|² + 2M̄²μ̄V6(t).   (16.61)

Inserting (16.61) into (16.59), and noting from (16.45b) that r(0, t) = z(0, t), we obtain

ε̂²(0, t) ≤ [ε̂²(0, t)/(1 + |Ψ(t)|²)] (|z(0, t)|² + 2|W(1, t)|²) + l8(t)V6(t) + l9(t)   (16.62)

where l8 and l9 are integrable, and where we have used the definition of Ψ stated in
(16.13). Moreover, we have

ε̂²(0, t) ≤ ε̂²(0, t) + ê²(1, t) = |ε̂(t)|²   (16.63)

and hence

ε̂²(0, t) ≤ ζ²(t)(|z(0, t)|² + 2|W(1, t)|²) + l8(t)V6(t) + l9(t)   (16.64)

where we used the definition of ζ in (16.18). Now inserting (16.64) into (16.57), we
get

V̇2(t) ≤ −cV2(t) + l10(t)V2(t) + l11(t) − (e^(−k−δ) − bζ²(t))|z(0, t)|²
        − (e^(−δ) − bζ²(t))|W(1, t)|²   (16.65)

where l10 and l11 are integrable functions. Moreover, we have

ζ²(t) = |ε̂(t)|²/(1 + |Ψ(t)|²) = |Ψ(t)q̃(t)|²/(1 + |Ψ(t)|²) ≤ |q̃(t)|² ≤ 2γ̄V1(t)   (16.66)

where γ̄ is an upper bound on the eigenvalues of Γ, and V1 is defined in (16.20).
Lemma B.4 in Appendix B gives V2 ∈ L∞ ∩ L1, and hence

||α||, ||β||, ||W ||, ||z|| ∈ L∞ ∩ L2 . (16.67)

Since ||W||, ||z|| ∈ L∞ ∩ L2, z(0, t) and W(1, t) must be bounded almost everywhere,
so that

ζ²|z(0, ·)|², ζ²|W(1, ·)|² ∈ L1   (16.68)

since ζ ∈ L2. Thus, (16.65) can be written as

V̇2 (t) ≤ −cV2 (t) + l10 (t)V2 (t) + l12 (t) (16.69)

where

l12 (t) = l11 (t) + bζ 2 (t)|z(0, t)|2 + bζ 2 (t)|W (1, t)|2 (16.70)

is an integrable function. Lemma B.3 in Appendix B then gives V2 → 0, and hence

||α||, ||β||, ||W ||, ||z|| → 0. (16.71)

Due to the invertibility of the transformations (16.37) and (16.45), we then also have

||û||, ||v̂||, ||P||, ||r || ∈ L∞ ∩ L2 , ||û||, ||v̂||, ||P||, ||r || → 0. (16.72)

From (16.27)

||η||, ||φ|| ∈ L∞ ∩ L2 , ||η||, ||φ|| → 0 (16.73)

follows, while (16.7) and Lemma 16.1 give

||u||, ||v|| ∈ L∞ ∩ L2,  ||u||, ||v|| → 0.   (16.74)

We proceed by proving boundedness and square integrability pointwise in space.
From the proof of Theorem 14.1, we have that system (13.1) can be mapped by the
invertible backstepping transformation (14.6) into (14.8), which we restate here

αt(x, t) + Λ(x)αx(x, t) = Σ(x)α(x, t) + ω(x)β(x, t) + ∫_0^x B1(x, ξ)α(ξ, t)dξ
                          + ∫_0^x b2(x, ξ)β(ξ, t)dξ   (16.75a)
βt(x, t) − μ(x)βx(x, t) = 0   (16.75b)
α(0, t) = qβ(0, t)   (16.75c)
β(1, t) = ∫_0^1 K̂^u(1, ξ, t)û(ξ, t)dξ + ∫_0^1 K̂^v(1, ξ, t)v̂(ξ, t)dξ
          − ∫_0^1 K^u(1, ξ)u(ξ, t)dξ − ∫_0^1 K^v(1, ξ)v(ξ, t)dξ   (16.75d)
α(x, 0) = α0(x)   (16.75e)
β(x, 0) = β0(x)   (16.75f)

where we have inserted the control law (16.31). We observe that since ||u||, ||v||, ||û||,
||v̂|| ∈ L∞ ∩ L2 and ||u||, ||v||, ||û||, ||v̂|| → 0 in the boundary condition (16.75d),
we must have ||β||∞ ∈ L∞ ∩ L2 and ||β||∞ → 0. Due to the cascaded structure of
system (16.75), we must also ultimately have ||α||∞ ∈ L∞ ∩ L2 and ||α||∞ → 0.
Due to the invertibility of the transformation (14.6), we therefore also have

||u||∞ , ||v||∞ ∈ L∞ ∩ L2 , ||u||∞ , ||v||∞ → 0. (16.76)

This also implies that all filters, being generated from measurements of u and v, are
bounded, square integrable and converge to zero pointwise in space. 

16.3 Simulations

System (13.1), the observer of Theorem 16.1 and the adaptive control law of Theorem
16.2 are implemented using the transport speeds

λ1 = λ2 = μ = 1, (16.77)

the in-domain parameters


⎡ ⎤ ⎡ ⎤
σ1,1 σ1,2 ω1 0 0.4 0
⎣σ2,1 σ2,2 ω2 ⎦ = ⎣−0.7 0 0.1⎦ (16.78)
θ1 θ2 0 0.5 −0.1 0
314 16 Adaptive Output-Feedback: Uncertain Boundary Condition

6 0
||u|| + ||v||

4
−0.5

U
2
0 −1
0 2 4 6 8 10 0 2 4 6 8 10
Time [s] Time [s]

Fig. 16.1 Left: State norm. Right: Actuation

Fig. 16.2 Left: Actual (solid black) and estimated (dashed red) boundary parameter q1. Right:
Actual (solid black) and estimated (dashed red) boundary parameter q2

and the boundary parameters

[q1 ρ1; q2 ρ2] = [4 0; −3 0].   (16.79)

The initial conditions for the system are set to

u0(x) = [sin(2πx)  x]ᵀ,  v0 ≡ 0.   (16.80)

The kernel equations (16.32) are solved online by mapping the equations to integral
equations (details on how this is done can be found in the appendix of Anfinsen
and Aamo (2017a)).
In the closed-loop case, the system state norm and actuation signal are seen in
Fig. 16.1 to be bounded and converge to zero. The estimated boundary parameters
generated using Theorem 16.1 are shown in Fig. 16.2 to converge to their true values,
although this has not been proved for the closed-loop case.
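For readers who want to reproduce simulations of this kind, the sketch below (an illustrative toy with hypothetical parameters, not the book's full closed-loop code, which additionally runs the observer, update law and on-line kernel solver) integrates a 2 + 1 system of the form (13.1) with unit transport speeds by a first-order upwind scheme at CFL number one:

```python
import numpy as np

def simulate(T=2.0, N=100):
    """First-order upwind simulation of a 2+1 hyperbolic system
    u_t + u_x = S u + w v,  v_t - v_x = th^T u,
    u(0,t) = q v(0,t),  v(1,t) = 0 (zero input U),
    with unit speeds so that dt = dx is an exact characteristic step.
    All parameter values below are illustrative assumptions."""
    dx = 1.0 / N
    dt = dx                                      # CFL number 1 for unit speeds
    S = np.array([[0.0, 0.4], [-0.7, 0.0]])      # in-domain coupling Sigma
    w = np.array([0.0, 0.1])                     # omega
    th = np.array([0.5, -0.1])                   # theta
    q = np.array([0.4, -0.3])                    # boundary reflection at x = 0
    x = np.linspace(0.0, 1.0, N + 1)
    u = np.vstack([np.sin(2 * np.pi * x), x]).T  # initial condition u0
    v = np.zeros(N + 1)                          # initial condition v0 = 0
    for _ in range(int(T / dt)):
        src_u = u @ S.T + np.outer(v, w)         # Sigma u + omega v
        src_v = u @ th                           # theta^T u
        u[1:] = u[:-1] + dt * src_u[:-1]         # rightward transport
        v[:-1] = v[1:] + dt * src_v[1:]          # leftward transport
        u[0] = q * v[0]                          # boundary condition at x = 0
        v[-1] = 0.0                              # actuated boundary, U = 0
    return u, v
```

Calling simulate() returns the profiles of u and v after T seconds; the adaptive pieces of Theorems 16.1–16.2 would be updated inside the same time loop.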

16.4 Notes

The problem of estimating the parameter q, as well as an additive parameter d in the
boundary condition at x = 0, from sensing limited to the boundary anti-collocated
with the uncertain parameter, is solved in Anfinsen et al. (2016). Although this
observer manages to estimate the parameter q subject to some PE requirements,
it does not manage to produce real-time estimates of the system states. However, the
extension of the observer of Theorem 10.3 and controller of Theorem 10.4 to n + 1
systems using time-varying injection gains should be straightforward to solve.

References

Anfinsen H, Aamo OM (2017) Adaptive stabilization of n + 1 coupled linear hyperbolic systems
with uncertain boundary parameters using boundary sensing. Syst Control Lett 99:72–84
Anfinsen H, Diagne M, Aamo OM, Krstić M (2016) An adaptive observer design for n + 1 coupled
linear hyperbolic PDEs based on swapping. IEEE Trans Autom Control 61(12):3979–3990
Anfinsen H, Di Meglio F, Aamo OM (2016) Estimating the left boundary condition of coupled
1-D linear hyperbolic PDEs from right boundary sensing. In: 15th European Control Conference,
Aalborg, Denmark
Ioannou P, Sun J (1995) Robust adaptive control. Prentice-Hall Inc, Upper Saddle River
Chapter 17
Model Reference Adaptive Control

17.1 Introduction

We revisit system (13.1) again with assumptions (13.9) and (13.10), and sensing
(17.1g) anti-collocated with actuation, that is

ut(x, t) + Λ(x)ux(x, t) = Σ(x)u(x, t) + ω(x)v(x, t)   (17.1a)
vt(x, t) − μ(x)vx(x, t) = θᵀ(x)u(x, t)   (17.1b)
u(0, t) = qv(0, t)   (17.1c)
v(1, t) = cᵀu(1, t) + k1U(t)   (17.1d)
u(x, 0) = u0(x)   (17.1e)
v(x, 0) = v0(x)   (17.1f)
y0(t) = k2v(0, t).   (17.1g)

Here, we also allow the measurement and the actuation signal U to be scaled by
arbitrary (nonzero) constants, k1 and k2 . The system parameters are in the form

Λ(x) = diag{λ1(x), λ2(x), . . . , λn(x)},  Σ(x) = {σij(x)}1≤i,j≤n   (17.2a)
ω(x) = [ω1(x) ω2(x) . . . ωn(x)]ᵀ   (17.2b)
θ(x) = [θ1(x) θ2(x) . . . θn(x)]ᵀ   (17.2c)
q = [q1 q2 . . . qn]ᵀ,  c = [c1 c2 . . . cn]ᵀ   (17.2d)

and assumed to satisfy, for i, j = 1, 2, . . . , n,

λi, μ ∈ C¹([0, 1]),  λi(x), μ(x) > 0, ∀x ∈ [0, 1]   (17.3a)
σij, ωi, θi ∈ C⁰([0, 1]),  σii ≡ 0,  qi, ci ∈ R   (17.3b)
k1, k2 ∈ R\{0}.   (17.3c)


The initial conditions are assumed to satisfy u 0 , v0 ∈ B([0, 1]). We assume that (13.8)
holds for the transport speeds, that is

−μ(x) < 0 < λ1 (x) < λ2 (x) < · · · < λn (x), ∀x ∈ [0, 1]. (17.4)

We now seek to solve the model reference adaptive control (MRAC) problem assuming

Λ(x), μ(x), Σ(x), ω(x), θ(x), q, c, k1, k2   (17.5)

are uncertain. However, as before, we assume the transport delays and the sign of
the product k1k2 are known, as formally stated in the following assumption.

Assumption 17.1 The following quantities are known, for i = 1, 2, . . . , n:

tu,i = λ̄i⁻¹ = ∫_0^1 dγ/λi(γ),  tv = μ̄⁻¹ = ∫_0^1 dγ/μ(γ),  sign(k1k2).   (17.6)
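The quantities in (17.6) are ordinary integrals of the reciprocal transport speeds, so they are straightforward to evaluate numerically when λi and μ are only available as sampled profiles. A sketch (our illustration, assuming a callable speed profile):

```python
import numpy as np

def transport_delay(speed, N=1000):
    """Transport delay t = \\int_0^1 d(gamma)/speed(gamma), cf. (17.6),
    evaluated with the composite trapezoidal rule on a uniform grid.
    `speed` is a callable returning the (positive) speed on an array of points."""
    x = np.linspace(0.0, 1.0, N + 1)
    f = 1.0 / speed(x)                       # reciprocal speed samples
    # composite trapezoidal rule
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))
```

For a constant speed λ the delay is simply 1/λ; for varying profiles the quadrature error decays like O(1/N²).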
Mathematically, the MRAC problem is stated as designing a control input U(t)
that achieves

lim_{t→∞} ∫_t^{t+T} (y0(s) − yr(s))² ds = 0   (17.7)

for some T > 0, where the signal yr is generated using the reference model

bt(x, t) − μ̄bx(x, t) = 0   (17.8a)
b(1, t) = r(t)   (17.8b)
b(x, 0) = b0(x)   (17.8c)
yr(t) = b(0, t)   (17.8d)

for some initial condition b0 ∈ B([0, 1]) and a reference signal r of choice. The
signal r is assumed to be bounded, as formally stated in the following assumption.
Assumption 17.2 The reference signal r (t) is known for all t ≥ 0, and there exists
a constant r̄ so that

|r (t)| ≤ r̄ (17.9)

for all t ≥ 0.
Moreover, all other signals, such as the system states and other auxiliary (filter)
states, should be bounded in the L2-sense.
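Because (17.8) is a pure transport equation on the unit interval, the reference output is just the reference signal delayed by tv = 1/μ̄: yr(t) = r(t − tv) once the initial data b0 has left the domain. The following sketch (our illustration, not code from the book) encodes this method-of-characteristics solution:

```python
import numpy as np

def reference_output(r, t, mu_bar=1.0, b0=lambda x: 0.0 * x):
    """Output y_r(t) = b(0, t) of the transport reference model (17.8):
    b_t - mu_bar * b_x = 0, b(1, t) = r(t), b(x, 0) = b0(x).
    Along characteristics, b(x, t) = f(x + mu_bar * t), so
    y_r(t) = r(t - 1/mu_bar) for t >= 1/mu_bar, and b0(mu_bar * t) before."""
    tv = 1.0 / mu_bar                 # transport delay from x = 1 to x = 0
    t = np.asarray(t, dtype=float)
    return np.where(t >= tv, r(t - tv), b0(mu_bar * t))
```

This delayed-signal viewpoint also explains why the tracking objective (17.7) can only be enforced in an averaged sense over windows of length T.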

17.2 Model Reference Adaptive Control

17.2.1 Mapping to Canonical Form

17.2.1.1 System Decoupling

Consider the system

α̌t(x, t) + Λ(x)α̌x(x, t) = m1(x)β̌(0, t)   (17.10a)
β̌t(x, t) − μ(x)β̌x(x, t) = 0   (17.10b)
α̌(0, t) = qβ̌(0, t)   (17.10c)
β̌(1, t) = cᵀα̌(1, t) + k1U(t) − ∫_0^1 m2ᵀ(ξ)α̌(ξ, t)dξ
          − ∫_0^1 m3(ξ)β̌(ξ, t)dξ   (17.10d)
α̌(x, 0) = α̌0(x)   (17.10e)
β̌(x, 0) = β̌0(x)   (17.10f)
y0(t) = k2β̌(0, t)   (17.10g)

for the states α̌, β̌ defined for x ∈ [0, 1], t ≥ 0, and for some new parameters
m1, m2, m3 and initial conditions α̌0, β̌0 ∈ B([0, 1]).

Lemma 17.1 System (17.1) is equivalent to the system (17.10), with

m1(x) = μ(0)K^{uu}(x, 0) − K^{uv}(x, 0)Λ(0)q   (17.11a)
m2ᵀ(ξ) = L^{βα}(1, ξ) − cᵀL^{αα}(1, ξ)   (17.11b)
m3(ξ) = L^{ββ}(1, ξ) − cᵀL^{αβ}(1, ξ)   (17.11c)

where (K^{uu}, K^{uv}) is the solution to the PDE (14.20)–(14.21), and

L(x, ξ) = [L^{αα}(x, ξ)  L^{αβ}(x, ξ); L^{βα}(x, ξ)  L^{ββ}(x, ξ)]   (17.12)

is the solution to the Volterra integral equation (1.53), that is


 x
L(x, ξ) = K (x, ξ) + K (x, s)L(s, ξ)ds (17.13)
ξ
320 17 Model Reference Adaptive Control

where
 
K uu (x, ξ) K uv (x, ξ)
K (x, ξ) = (17.14)
K u (x, ξ) K v (x, ξ)

with (K u , K v ) satisfying the PDE (14.3).

Proof This result follows directly from the alternative proof of Theorem 14.1, where
the backstepping transformation

[α̌(x, t); β̌(x, t)] = [u(x, t); v(x, t)] − ∫_0^x K(x, ξ)[u(ξ, t); v(ξ, t)]dξ   (17.15)

is shown to map (17.1a)–(17.1c) into the form (17.10a)–(17.10c) with m1 in the form
(17.11a). The inverse of (17.15) is from Theorem 1.2 given as

[u(x, t); v(x, t)] = [α̌(x, t); β̌(x, t)] + ∫_0^x L(x, ξ)[α̌(ξ, t); β̌(ξ, t)]dξ   (17.16)

with L given by (17.13). Inserting (17.16) into (17.1d) immediately yields (17.10d)
with m2 and m3 given by (17.11b)–(17.11c). 

17.2.1.2 Constant Transport Speeds and Scaling

We now use a transformation to remove the spatially varying transport speeds in
(17.10), and consider the system

αt(x, t) + Λ̄αx(x, t) = m4(x)β(0, t)   (17.17a)
βt(x, t) − μ̄βx(x, t) = 0   (17.17b)
α(0, t) = qβ(0, t)   (17.17c)
β(1, t) = cᵀα(1, t) + ρU(t) + ∫_0^1 m5ᵀ(ξ)α(ξ, t)dξ
          + ∫_0^1 m6(ξ)β(ξ, t)dξ   (17.17d)
α(x, 0) = α0(x)   (17.17e)
β(x, 0) = β0(x)   (17.17f)
y0(t) = β(0, t)   (17.17g)

for the system states α, β, some new parameters ρ, m4, m5, m6 and initial conditions
α0, β0 ∈ B([0, 1]), where Λ̄ = diag{λ̄1, λ̄2, . . . , λ̄n}.

Lemma 17.2 System (17.10) is equivalent to the system (17.17), where

ρ = k1k2   (17.18a)
m4,i(x) = m1,i(h_{α,i}⁻¹(x))   (17.18b)
m5,i(x) = −tu,i λi(h_{α,i}⁻¹(x)) m2,i(h_{α,i}⁻¹(x))   (17.18c)
m6(x) = −tv μ(h_β⁻¹(x)) m3(h_β⁻¹(x))   (17.18d)

for the vectors of parameters

mj(x) = [mj,1(x) mj,2(x) . . . mj,n(x)]ᵀ   (17.19)

for j = 1, 2, 4, 5, where

h_{α,i}(x) = λ̄i ∫_0^x dγ/λi(γ),  h_β(x) = μ̄ ∫_0^x dγ/μ(γ).   (17.20)

Proof The proof is straightforward, using the mappings

αi(x, t) = k2 α̌i(h_{α,i}⁻¹(x), t),  β(x, t) = k2 β̌(h_β⁻¹(x), t),   (17.21)

for i = 1, . . . , n, which are invertible since the functions (17.20) are strictly increasing.
The rest of the proof follows immediately from insertion and noting that

h′_{α,i}(x) = λ̄i/λi(x),  h′_β(x) = μ̄/μ(x)   (17.22a)
h_{α,i}(0) = h_β(0) = 0,  h_{α,i}(1) = h_β(1) = 1   (17.22b)

for i = 1, . . . , n, which follow from (17.20), (14.5) and (17.6). 
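Since the integrands in (17.20) are positive, h_{α,i} and h_β are strictly increasing, so their inverses — needed in (17.18) and (17.21) — can be tabulated once and evaluated by monotone interpolation. A sketch (our illustration, with an assumed callable speed profile):

```python
import numpy as np

def h_map(speed, N=1000):
    """Tabulate h(x) = s_bar * int_0^x d(gamma)/speed(gamma), cf. (17.20),
    with s_bar = 1 / int_0^1 d(gamma)/speed(gamma), so that h(1) = 1.
    Returns the grid x, the values h(x), and an interpolated inverse h^{-1}."""
    x = np.linspace(0.0, 1.0, N + 1)
    f = 1.0 / speed(x)
    # cumulative trapezoidal integral of 1/speed from 0 to x
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(x))))
    h = integral / integral[-1]              # normalization gives h(0)=0, h(1)=1
    h_inv = lambda s: np.interp(s, h, x)     # inverse by monotone interpolation
    return x, h, h_inv
```

Monotonicity of h guarantees that np.interp inverts it uniquely, which mirrors the invertibility argument in the proof above.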

17.2.1.3 Simplification Using a Swapping Filter

Consider the system

ζt(x, t) + Λ̄ζx(x, t) = 0   (17.23a)
βt(x, t) − μ̄βx(x, t) = 0   (17.23b)
ζ(0, t) = 1β(0, t)   (17.23c)
β(1, t) = νᵀζ(1, t) + ρU(t) + ∫_0^1 κ(ξ)ζ1(ξ, t)dξ
          + ∫_0^1 m6(ξ)β(ξ, t)dξ + ε(t)   (17.23d)
ζ(x, 0) = ζ0(x)   (17.23e)
β(x, 0) = β0(x)   (17.23f)
y0(t) = β(0, t)   (17.23g)

for the variable

ζ(x, t) = [ζ1(x, t) ζ2(x, t) . . . ζn(x, t)]ᵀ,   (17.24)

with initial condition

ζ0(x) = [ζ1,0(x) ζ2,0(x) . . . ζn,0(x)]ᵀ   (17.25)

intentionally chosen as

ζ0 ≡ 0,   (17.26)

and the parameters

ν = [c1q1 c2q2 . . . cnqn]ᵀ   (17.27a)
m7,i(ξ) = qi m5,i(ξ) + ci λ̄i⁻¹ m4,i(1 − ξ)
          + λ̄i⁻¹ ∫_ξ^1 m5,i(s)m4,i(s − ξ)ds   (17.27b)
κ(ξ) = (1/λ̄1) Σ_{i=1}^n λ̄i δ(ξ, λ̄1/λ̄i) m7,i((λ̄i/λ̄1)ξ)   (17.27c)

where

δ(x, a) = 1 for x ≤ a, and δ(x, a) = 0 otherwise,   (17.28)

and some signal ε(t) defined for t ≥ 0, and where 1 is a column vector of length n
with all elements equal to one.
Lemma 17.3 Consider systems (17.17) and (17.23). The signal ε(t), which is characterized
in the proof, is zero for t ≥ dα,1. Moreover, stabilization of (17.23) implies
stabilization of (17.17). More precisely,

||α(t)|| ≤ c||ζ(t)||,  ||α(t)||∞ ≤ c||ζ(t)||∞   (17.29)

for t ≥ dα,1 and some constant c.

Proof Non-adaptive estimates of the states in

α(x, t) = [α1(x, t) α2(x, t) . . . αn(x, t)]ᵀ   (17.30)

can be generated from ζ as

ᾱi(x, t) = qi ζi(x, t) + λ̄i⁻¹ ∫_0^x m4,i(x − ξ)ζi(ξ, t)dξ.   (17.31)

Consider the corresponding error e(x, t) = α(x, t) − ᾱ(x, t). It can straightforwardly
be proved that e satisfies the dynamics

et(x, t) + Λ̄ex(x, t) = 0,  e(0, t) = 0,  e(x, 0) = e0(x),   (17.32)

for e0 ∈ B([0, 1]), from which (17.29) follows directly. By inserting

αi(x, t) = qi ζi(x, t) + λ̄i⁻¹ ∫_0^x m4,i(x − ξ)ζi(ξ, t)dξ + ei(x, t)   (17.33)

into the terms in α in (17.17d), we find, component-wise,

ci αi(1, t) + ∫_0^1 m5,i(ξ)αi(ξ, t)dξ
    = ci qi ζi(1, t) + ci ei(1, t) + ∫_0^1 m5,i(ξ)ei(ξ, t)dξ
      + ∫_0^1 [ qi m5,i(ξ) + ci λ̄i⁻¹ m4,i(1 − ξ)
                + λ̄i⁻¹ ∫_ξ^1 m5,i(s)m4,i(s − ξ)ds ] ζi(ξ, t)dξ
    = νi ζi(1, t) + ∫_0^1 m7,i(ξ)ζi(ξ, t)dξ + εi(t)   (17.34)

where

εi(t) = ci ei(1, t) + ∫_0^1 m5,i(ξ)ei(ξ, t)dξ   (17.35)

is zero for t ≥ dα,i since the ei's are zero. Since all components of the filter ζ essentially
are transport equations with the same input y0, the one with the slowest transport
speed, ζ1, contains all the information in the other n − 1 states in ζ (recall that the
initial conditions are intentionally set to zero). Thus, the integrals
∫_0^1 m7,i(ξ)ζi(ξ, t)dξ   (17.36)

for i = 1, . . . , n can all be expressed in terms of ζ1 as

∫_0^1 m_{7,i}(ξ) ζ_i(ξ, t) dξ = ∫_0^{λ̄_1/λ̄_i} (λ̄_i/λ̄_1) m_{7,i}((λ̄_i/λ̄_1) ξ) ζ_1(ξ, t) dξ    (17.37)

and hence
∫_0^1 m_7^T(ξ) ζ(ξ, t) dξ = Σ_{i=1}^n ∫_0^1 (λ̄_i/λ̄_1) δ(ξ, λ̄_1/λ̄_i) m_{7,i}((λ̄_i/λ̄_1) ξ) ζ_1(ξ, t) dξ.    (17.38)

Using the definition (17.27c), the result (17.23) follows. 


Systems (17.17) and (17.23) are not in general equivalent. That is, there exists in general no invertible change of variables that maps between (17.17) and (17.23): the relationship (17.33) shows that α_i(x, t) cannot in general be reconstructed from ζ_i(x, t) (e.g. if q_i = 0 and m_{4,i} ≡ 0). However, stabilization of system (17.23) implies stabilization of system (17.17).

17.2.1.4 Error from Reference

We now define a reference model. Motivated by the structure of the dynamics


(17.23a)–(17.23b), we augment the reference model (17.8) with an additional state
a as follows
a_t(x, t) + Λ̄ a_x(x, t) = 0,  a(0, t) = 1 b(0, t),  a(x, 0) = a_0(x)    (17.39a)
b_t(x, t) − μ̄ b_x(x, t) = 0,  b(1, t) = r(t),  b(x, 0) = b_0(x).    (17.39b)

Consider the tracking errors

w(x, t) = ζ(x, t) − a(x, t), ž(x, t) = β(x, t) − b(x, t), (17.40)

and the dynamics

w_t(x, t) + Λ̄ w_x(x, t) = 0    (17.41a)
ž_t(x, t) − μ̄ ž_x(x, t) = 0    (17.41b)
w(0, t) = 1 ž(0, t)    (17.41c)
ž(1, t) = ν^T [w(1, t) + a(1, t)] + ρU(t) − r(t)
          + ∫_0^1 κ(ξ)[w_1(ξ, t) + a_1(ξ, t)] dξ
          + ∫_0^1 m_6(ξ)[ž(ξ, t) + b(ξ, t)] dξ + ε(t)    (17.41d)
w(x, 0) = w_0(x)    (17.41e)
ž(x, 0) = ž_0(x)    (17.41f)
y_0(t) = ž(0, t) + b(0, t).    (17.41g)

Lemma 17.4 The variables (17.40) satisfy the dynamics (17.41) where w0 = ζ0 −
a0 , ž 0 = β0 − b0 .
Proof The proof follows straightforwardly from the dynamics (17.23) and (17.39)
and is therefore omitted. 

17.2.1.5 Canonical Form

Consider the system


w_t(x, t) + Λ̄ w_x(x, t) = 0    (17.42a)
z_t(x, t) − μ̄ z_x(x, t) = μ̄ θ(x) z(0, t)    (17.42b)
w(0, t) = 1 z(0, t)    (17.42c)
z(1, t) = ν^T [w(1, t) + a(1, t)] + ρU(t) − r(t)
          + ∫_0^1 κ(ξ)[w_1(ξ, t) + a_1(ξ, t)] dξ
          + ∫_0^1 θ(ξ) b(1 − ξ, t) dξ + ε(t)    (17.42d)
w(x, 0) = w_0(x)    (17.42e)
z(x, 0) = z_0(x)    (17.42f)
y_0(t) = z(0, t) + b(0, t)    (17.42g)

for a new variable z, and a new parameter θ.


Lemma 17.5 System (17.41) is equivalent to system (17.42), where

θ(x) = m 6 (1 − x). (17.43)

Proof The proof is straightforward, using the backstepping transformation


z(x, t) = ž(x, t) − ∫_0^x m_6(1 − x + ξ) ž(ξ, t) dξ,    (17.44)

which yields the new parameter θ as (17.43). The remaining details are omitted. 

17.2.2 Filter Design

In addition to the ζ-filter defined in (17.23), we introduce the following filters

ψ_t(x, t) − μ̄ ψ_x(x, t) = 0,  ψ(1, t) = U(t),  ψ(x, 0) = ψ_0(x)    (17.45a)
φ_t(x, t) − μ̄ φ_x(x, t) = 0,  φ(1, t) = z(0, t),  φ(x, 0) = φ_0(x)    (17.45b)
h_t(x, t) − μ̄ h_x(x, t) = 0,  h(1, t) = w(1, t),  h(x, 0) = h_0(x)    (17.45c)
ϑ_t(x, t) − μ̄ ϑ_x(x, t) = 0,  ϑ(1, t) = a(1, t),  ϑ(x, 0) = ϑ_0(x)    (17.45d)
P_t(x, ξ, t) − μ̄ P_x(x, ξ, t) = 0,  P(1, ξ, t) = w_1(ξ, t),  P(x, ξ, 0) = P_0(x, ξ)    (17.45e)
M_t(x, ξ, t) − μ̄ M_x(x, ξ, t) = 0,  M(1, ξ, t) = a_1(ξ, t),  M(x, ξ, 0) = M_0(x, ξ)    (17.45f)
N_t(x, ξ, t) − μ̄ N_x(x, ξ, t) = 0,  N(1, ξ, t) = b(1 − ξ, t),  N(x, ξ, 0) = N_0(x, ξ)    (17.45g)

with initial conditions satisfying

ψ0 , φ0 , h 0 , ϑ0 ∈ B([0, 1]), P0 , M0 , N0 ∈ B([0, 1]2 ), (17.46)

and define

p0 (x, t) = P(0, x, t), m 0 (x, t) = M(0, x, t), n 0 (x, t) = N (0, x, t). (17.47)
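Each filter in (17.45) is a pure transport equation convecting from x = 1 toward x = 0 at speed μ̄, driven only through its boundary at x = 1, so it can be simulated with a first-order upwind scheme. The sketch below is ours, not from the text; the grid, the speed and the constant boundary input are placeholder choices:

```python
import numpy as np

def step_filter(phi, mu_bar, dx, dt, boundary_value):
    """One upwind Euler step of phi_t - mu_bar * phi_x = 0 with phi(1, t) given.

    The characteristics travel from x = 1 toward x = 0, so the upwind
    difference is taken toward larger x.
    """
    assert mu_bar * dt / dx <= 1.0, "CFL condition violated"
    new = np.empty_like(phi)
    new[:-1] = phi[:-1] + mu_bar * dt / dx * (phi[1:] - phi[:-1])
    new[-1] = boundary_value  # e.g. psi(1, t) = U(t) or phi(1, t) = z(0, t)
    return new

# Example: a constant boundary input fills the whole domain after roughly
# one transport time t_v = 1 / mu_bar.
nx, mu_bar = 101, 1.0
dx = 1.0 / (nx - 1)
dt = 0.5 * dx / mu_bar
phi = np.zeros(nx)
for _ in range(int(2.0 / (mu_bar * dt))):  # run two transport times
    phi = step_filter(phi, mu_bar, dx, dt, 1.0)
print(phi[0])
```

The two-argument filters P, M and N in (17.45e)–(17.45g) propagate in x only, so the same step applies to each fixed value of ξ.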

Now, construct a non-adaptive estimate of the system state z as

z̄(x, t) = ν^T [h(x, t) + ϑ(x, t)] + ρ ψ(x, t) − b(x, t)
         + ∫_0^1 κ(ξ)[P(x, ξ, t) + M(x, ξ, t)] dξ
         + ∫_0^1 θ(ξ) N(x, ξ, t) dξ + ∫_x^1 θ(ξ) φ(1 − (ξ − x), t) dξ.    (17.48)
0 x

Lemma 17.6 Consider system (17.42) and the non-adaptive estimate (17.48) of the
state z. For t ≥ t F , where t F is defined in (14.5), we have

z̄ ≡ z. (17.49)

Proof We will prove that the non-adaptive estimation error

ϵ(x, t) = z(x, t) − z̄(x, t)    (17.50)

satisfies the dynamics

ϵ_t(x, t) − μ̄ ϵ_x(x, t) = 0,  ϵ(1, t) = ε(t),  ϵ(x, 0) = ϵ_0(x),    (17.51)


17.2 Model Reference Adaptive Control 327

for some function ϵ_0 ∈ B([0, 1]). By differentiating (17.48) with respect to time and
space respectively, we find

z̄_t(x, t) = ν^T [h_t(x, t) + ϑ_t(x, t)] + ρ ψ_t(x, t) − b_t(x, t)
          + ∫_0^1 κ(ξ)[P_t(x, ξ, t) + M_t(x, ξ, t)] dξ
          + ∫_0^1 θ(ξ) N_t(x, ξ, t) dξ + ∫_x^1 θ(ξ) φ_t(1 − (ξ − x), t) dξ
        = ν^T [μ̄ h_x(x, t) + μ̄ ϑ_x(x, t)] + ρ μ̄ ψ_x(x, t) − μ̄ b_x(x, t)
          + ∫_0^1 κ(ξ)[μ̄ P_x(x, ξ, t) + μ̄ M_x(x, ξ, t)] dξ
          + ∫_0^1 θ(ξ) μ̄ N_x(x, ξ, t) dξ + ∫_x^1 θ(ξ) μ̄ φ_x(1 − (ξ − x), t) dξ    (17.52)
0 x

and

z̄_x(x, t) = ν^T [h_x(x, t) + ϑ_x(x, t)] + ρ ψ_x(x, t) − b_x(x, t)
          + ∫_0^1 κ(ξ)[P_x(x, ξ, t) + M_x(x, ξ, t)] dξ
          + ∫_0^1 θ(ξ) N_x(x, ξ, t) dξ − θ(x) z(0, t)
          + ∫_x^1 θ(ξ) φ_x(1 − (ξ − x), t) dξ.    (17.53)
x

Using (17.52) and (17.53), we immediately find

z̄ t (x, t) − μ̄z̄ x (x, t) = μ̄θ(x)z(0, t), (17.54)

which with (17.42b) yields the dynamics (17.51). Inserting x = 1 into (17.48), we
find

z̄(1, t) = ν^T [w(1, t) + a(1, t)] + ρU(t) − r(t)
         + ∫_0^1 κ(ξ)[w_1(ξ, t) + a_1(ξ, t)] dξ + ∫_0^1 θ(ξ) b(1 − ξ, t) dξ,    (17.55)

which with (17.42d) gives the boundary condition in (17.51).


Since ε(t) = 0 for t ≥ t_{u,1}, it is clear that ϵ ≡ 0 and hence z̄ ≡ z for t ≥ t_{u,1} + t_v = t_F.

17.2.3 Adaptive Law

To ensure that the adaptive laws to be designed next generate bounded estimates of
the uncertain parameters, we assume the following.
Assumption 17.3 Bounds on the uncertain parameters ν, ρ, θ and κ are known. That is, we know constants ρ̱, ρ̄, θ̱, θ̄, κ̱, κ̄, ν̱_i, ν̄_i, i = 1, …, n, so that

ν i ≤ νi ≤ ν̄i , i = 1, . . . n, ρ ≤ ρ ≤ ρ̄ (17.56a)
θ ≤ θ(x) ≤ θ̄, κ ≤ κ(x) ≤ κ̄, ∀x ∈ [0, 1], (17.56b)

where
ν = [ν_1 ν_2 … ν_n]^T,  ν̱ = [ν̱_1 ν̱_2 … ν̱_n]^T    (17.57a)
ν̄ = [ν̄_1 ν̄_2 … ν̄_n]^T    (17.57b)

with

0 ∉ [ρ̱, ρ̄].    (17.58)

The assumption (17.58) is equivalent to knowing the sign of k1 k2 , which is known


by Assumption 17.1. We propose an adaptive estimate of z by substituting all uncer-
tain parameters in (17.48) with estimates as follows

ẑ(x, t) = ν̂^T(t)[h(x, t) + ϑ(x, t)] + ρ̂(t) ψ(x, t) − b(x, t)
         + ∫_0^1 κ̂(ξ, t)[P(x, ξ, t) + M(x, ξ, t)] dξ
         + ∫_0^1 θ̂(ξ, t) N(x, ξ, t) dξ + ∫_x^1 θ̂(ξ, t) φ(1 − (ξ − x), t) dξ.    (17.59)
0 x

To derive the adaptive laws, we evaluate (17.48) and (17.50) at x = 0 to obtain

z(0, t) = ν^T [h(0, t) + ϑ(0, t)] + ρ ψ(0, t) − b(0, t)
         + ∫_0^1 κ(ξ)[p_0(ξ, t) + m_0(ξ, t)] dξ
         + ∫_0^1 θ(ξ)[φ(1 − ξ, t) + n_0(ξ, t)] dξ + ϵ(0, t)    (17.60)
0

with ϵ(0, t) = 0 in finite time, and define

ϵ̂(0, t) = z(0, t) − ẑ(0, t) = (h(0, t) + ϑ(0, t))^T ν̃(t) + ρ̃(t) ψ(0, t)
         + ∫_0^1 θ̃(ξ, t)(φ(1 − ξ, t) + n_0(ξ, t)) dξ
         + ∫_0^1 κ̃(ξ, t)(p_0(ξ, t) + m_0(ξ, t)) dξ + ϵ(0, t),    (17.61)

where we have used (17.47), and defined the estimation errors ν̃ = ν − ν̂, ρ̃ = ρ − ρ̂,
θ̃ = θ − θ̂, κ̃ = κ − κ̂. We propose the following adaptive laws

ν̂˙(t) = proj_{ν̱,ν̄}( Γ_1 ϵ̂(0, t)(h(0, t) + ϑ(0, t)) / (1 + f²(t)), ν̂(t) )    (17.62a)
ρ̂˙(t) = proj_{ρ̱,ρ̄}( γ_2 ϵ̂(0, t) ψ(0, t) / (1 + f²(t)), ρ̂(t) )    (17.62b)
θ̂_t(x, t) = proj_{θ̱,θ̄}( γ_3(x) ϵ̂(0, t)(φ(1 − x, t) + n_0(x, t)) / (1 + f²(t)), θ̂(x, t) )    (17.62c)
κ̂_t(x, t) = proj_{κ̱,κ̄}( γ_4(x) ϵ̂(0, t)(p_0(x, t) + m_0(x, t)) / (1 + f²(t)), κ̂(x, t) )    (17.62d)
ν̂(0) = ν̂_0    (17.62e)
ρ̂(0) = ρ̂_0    (17.62f)
θ̂(x, 0) = θ̂_0(x)    (17.62g)
κ̂(x, 0) = κ̂_0(x)    (17.62h)

for some design matrix Γ1 > 0, and design gains γ2 > 0 and γ3 (x), γ4 (x) > 0 for
all x ∈ [0, 1], where

f²(t) = |h(0, t)|² + |ϑ(0, t)|² + ψ²(0, t) + ||φ(t)||² + ||n_0(t)||² + ||p_0(t)||² + ||m_0(t)||².    (17.63)

The initial conditions are chosen inside the feasible domain

ν̱_i ≤ ν̂_{i,0} ≤ ν̄_i, i = 1, 2, …, n,  ρ̱ ≤ ρ̂_0 ≤ ρ̄    (17.64a)
θ̱ ≤ θ̂_0(x) ≤ θ̄, ∀x ∈ [0, 1],  κ̱ ≤ κ̂_0(x) ≤ κ̄, ∀x ∈ [0, 1],    (17.64b)

where
ν̂_0 = [ν̂_{1,0} ν̂_{2,0} … ν̂_{n,0}]^T    (17.65)

and the projection operator is defined in Appendix A.
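In a discrete-time implementation, each law in (17.62) becomes an explicit Euler update passed through the projection, which freezes any component sitting on its bound whenever the raw update points outward. The simple scalar projection below is only a stand-in for the operator of Appendix A, and all numerical values are placeholders:

```python
import numpy as np

def proj(tau, est, lo, hi):
    """Scalar projection: discard an update that pushes est past its bound."""
    if (est <= lo and tau < 0.0) or (est >= hi and tau > 0.0):
        return 0.0
    return tau

def rho_step(rho_hat, eps_hat0, psi0, f2, gamma2, dt, rho_lo, rho_hi):
    """One explicit Euler step of the law (17.62b),
    rho_hat' = proj(gamma2 * eps_hat(0,t) * psi(0,t) / (1 + f^2(t)), rho_hat)."""
    tau = gamma2 * eps_hat0 * psi0 / (1.0 + f2)
    rho_hat += dt * proj(tau, rho_hat, rho_lo, rho_hi)
    return float(np.clip(rho_hat, rho_lo, rho_hi))  # guard Euler overshoot

# At the lower bound, an outward-pointing update is frozen ...
rho_a = rho_step(0.1, eps_hat0=-5.0, psi0=1.0, f2=0.0,
                 gamma2=1.0, dt=0.01, rho_lo=0.1, rho_hi=100.0)
# ... while an inward-pointing update is applied normally.
rho_b = rho_step(0.1, eps_hat0=5.0, psi0=1.0, f2=0.0,
                 gamma2=1.0, dt=0.01, rho_lo=0.1, rho_hi=100.0)
print(rho_a, rho_b)
```

The same step, applied pointwise on a spatial grid, realizes the function-valued laws (17.62c)–(17.62d).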

Lemma 17.7 The adaptive laws (17.62) with initial conditions satisfying (17.64)
guarantee the following properties

ν̱_i ≤ ν̂_i(t) ≤ ν̄_i,  t ≥ 0, i = 1, …, n    (17.66a)
ρ̱ ≤ ρ̂(t) ≤ ρ̄,  t ≥ 0    (17.66b)
θ̱ ≤ θ̂(x, t) ≤ θ̄,  t ≥ 0, ∀x ∈ [0, 1]    (17.66c)
κ̱ ≤ κ̂(x, t) ≤ κ̄,  t ≥ 0, ∀x ∈ [0, 1]    (17.66d)
σ ∈ L_∞ ∩ L_2    (17.66e)
ν̂˙, ρ̂˙, ||θ̂_t||, ||κ̂_t|| ∈ L_∞ ∩ L_2    (17.66f)

where ν̃ = ν − ν̂, ρ̃ = ρ − ρ̂, θ̃ = θ − θ̂, κ̃ = κ − κ̂ and

σ(t) = ϵ̂(0, t) / √(1 + f²(t))    (17.67)

with f 2 given in (17.63).

Proof The properties (17.66a)–(17.66d) follow from the projection operator and the
initial conditions (17.64). Consider the Lyapunov function candidate
V(t) = (1/2) ν̃^T(t) Γ_1^{−1} ν̃(t) + (1/(2γ_2)) ρ̃²(t)
     + (1/2) ∫_0^1 γ_3^{−1}(x) θ̃²(x, t) dx + (1/2) ∫_0^1 γ_4^{−1}(x) κ̃²(x, t) dx.    (17.68)

Differentiating with respect to time, inserting the adaptive laws and using the property
−ν̃ T projν,ν̄ (τ , ν̂) ≤ −ν̃ T τ (Lemma A.1 in Appendix A), and similarly for ρ̂, θ̂ and
κ̂, we get

V̇(t) ≤ − (ϵ̂(0, t)/(1 + f²(t))) [ (h(0, t) + ϑ(0, t))^T ν̃(t) + ρ̃(t) ψ(0, t)
       + ∫_0^1 θ̃(x, t)(φ(1 − x, t) + n_0(x, t)) dx
       + ∫_0^1 κ̃(x, t)(p_0(x, t) + m_0(x, t)) dx ].    (17.69)
0

Using (17.61) with ϵ(0, t) = 0 for t ≥ t_F, and inserting this into (17.69), we obtain

V̇ (t) ≤ −σ 2 (t) (17.70)

for t ≥ tu,1 + tv , where σ is defined in (17.67). This proves that V is bounded and
nonincreasing, and hence has a limit V∞ as t → ∞. Integrating (17.70) in time from
zero to infinity gives
17.2 Model Reference Adaptive Control 331
∫_0^∞ σ²(t) dt ≤ V(0) − V_∞ ≤ V(0) < ∞    (17.71)

and hence

σ ∈ L2 . (17.72)

Using (17.61), we obtain, for t ≥ t_{u,1},

|σ(t)| = |ϵ̂(0, t)| / √(1 + f²(t))
       ≤ |ν̃(t)| |h(0, t) + ϑ(0, t)| / √(1 + f²(t)) + |ρ̃(t)| |ψ(0, t)| / √(1 + f²(t))
         + ||θ̃(t)|| (||φ(t)|| + ||n_0(t)||) / √(1 + f²(t)) + ||κ̃(t)|| (||p_0(t)|| + ||m_0(t)||) / √(1 + f²(t))
       ≤ |ν̃(t)| + |ρ̃(t)| + ||θ̃(t)|| + ||κ̃(t)||    (17.73)

which gives σ ∈ L_∞. From the adaptive laws (17.62), we have

|ν̂˙(t)| ≤ |Γ_1| (|ϵ̂(0, t)| / √(1 + f²(t))) (|h(0, t) + ϑ(0, t)| / √(1 + f²(t))) ≤ |Γ_1| |σ(t)|    (17.74)

˙ θ̂t and κ̂t , which, along with (17.66d), gives (17.66f).


and similarly for ρ̂, 

17.2.4 Control Law

Consider the control law


U(t) = (1/ρ̂(t)) [ ∫_0^1 ĝ(1 − ξ, t) ẑ(ξ, t) dξ − ν̂^T(t)[w(1, t) + a(1, t)] + r(t)
       − ∫_0^1 κ̂(ξ, t)[w_1(ξ, t) + a_1(ξ, t)] dξ − ∫_0^1 θ̂(ξ, t) b(1 − ξ, t) dξ ]    (17.75)

where ẑ is generated using (17.59), and ĝ is the on-line solution to the Volterra
integral equation
ĝ(x, t) = ∫_0^x ĝ(x − ξ, t) θ̂(ξ, t) dξ − θ̂(x, t),    (17.76)

with ρ̂, θ̂, κ̂ and ν̂ generated from the adaptive laws (17.62).
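Because (17.76) is a Volterra equation of the second kind with a convolution kernel, the frozen-time kernel ĝ(·, t) can be computed by marching in x on a uniform grid, at O(N²) cost per time instant. The sketch below is ours; the constant-kernel test case is only a convenient example with a closed-form solution:

```python
import numpy as np

def solve_volterra(theta, dx):
    """March-in-x solver for g(x) = int_0^x g(x - s) theta(s) ds - theta(x),
    the form of (17.76) at a frozen time instant.

    With left-endpoint quadrature, each step uses only earlier values,
    except the s = 0 term, which is moved to the left-hand side.
    """
    n = len(theta)
    g = np.zeros(n)
    g[0] = -theta[0]
    for k in range(1, n):
        conv = dx * sum(g[k - j] * theta[j] for j in range(1, k))
        g[k] = (conv - theta[k]) / (1.0 - dx * theta[0])
    return g

# Sanity check on a constant kernel theta = c, where the exact solution
# is g(x) = -c * exp(c * x).
c, n = 0.5, 201
dx = 1.0 / (n - 1)
g = solve_volterra(c * np.ones(n), dx)
print(g[-1], -c * np.exp(c))
```

In the adaptive loop, this solve is repeated each time θ̂(·, t) is updated, and the resulting ĝ enters the control law through the quadrature of the first integral in (17.75).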

Theorem 17.1 Consider system (17.1), filters (17.45) and (17.47), the augmented
reference model (17.39), and the adaptive laws (17.62). Suppose Assumption 17.2
holds. Then, the control law (17.75) with ĝ generated from (17.76) guarantees (17.7)
and

||u||, ||v||, ||ψ|| ∈ L∞ (17.77a)


||φ||, ||h||, ||P|| ∈ L2 ∩ L∞ . (17.77b)

This theorem is proved in Sect. 17.2.6, but first, we introduce a backstepping


transformation which facilitates a Lyapunov analysis, and also establish some useful
properties regarding ĝ.

17.2.5 Backstepping

By straightforward differentiation, one can verify that ẑ in (17.59) satisfies the


dynamics

ẑ_t(x, t) − μ̄ ẑ_x(x, t) = μ̄ θ̂(x, t) z(0, t) + ν̂˙^T(t)[h(x, t) + ϑ(x, t)] + ρ̂˙(t) ψ(x, t)
        + ∫_0^1 κ̂_t(ξ, t)[P(x, ξ, t) + M(x, ξ, t)] dξ
        + ∫_0^1 θ̂_t(ξ, t) N(x, ξ, t) dξ
        + ∫_x^1 θ̂_t(ξ, t) φ(1 − (ξ − x), t) dξ    (17.78a)
ẑ(1, t) = ν̂^T(t)[w(1, t) + a(1, t)] + ρ̂(t) U(t) − r(t)
        + ∫_0^1 κ̂(ξ, t)[w_1(ξ, t) + a_1(ξ, t)] dξ
        + ∫_0^1 θ̂(ξ, t) b(1 − ξ, t) dξ    (17.78b)
ẑ(x, 0) = ẑ_0(x)    (17.78c)

for some initial condition ẑ 0 ∈ B([0, 1]). Consider the backstepping transformation
η(x, t) = ẑ(x, t) − ∫_0^x ĝ(x − ξ, t) ẑ(ξ, t) dξ = T[ẑ](x, t)    (17.79)

where ĝ is the on-line solution to the Volterra integral equation (17.76). Consider
also the target system

η_t(x, t) − μ̄ η_x(x, t) = −μ̄ ĝ(x, t) ϵ̂(0, t) + ν̂˙^T(t) T[h + ϑ](x, t) + ρ̂˙(t) T[ψ](x, t)
        + T[ ∫_0^1 κ̂_t(ξ, t)[P(x, ξ, t) + M(x, ξ, t)] dξ ](x, t)
        + T[ ∫_0^1 θ̂_t(ξ, t) N(x, ξ, t) dξ ](x, t)
        + T[ ∫_x^1 θ̂_t(ξ, t) φ(1 − (ξ − x), t) dξ ](x, t)
        − ∫_0^x ĝ_t(x − ξ, t) T^{−1}[η](ξ, t) dξ    (17.80a)
η(1, t) = 0    (17.80b)
η(x, 0) = η_0(x)    (17.80c)

for some initial condition η0 ∈ B([0, 1]).

Lemma 17.8 The backstepping transformation (17.79) and controller (17.75) with
ĝ satisfying (17.76) map (17.78) into (17.80).

Proof Differentiating (17.79) with respect to time and space, respectively, inserting the dynamics (17.78a), integrating by parts and inserting the result into (17.78a) give

η_t(x, t) − μ̄ η_x(x, t) = μ̄ [ ĝ(x, t) + θ̂(x, t) − ∫_0^x ĝ(x − ξ, t) θ̂(ξ, t) dξ ] ẑ(0, t)
    + μ̄ θ̂(x, t) ϵ̂(0, t) − ∫_0^x ĝ(x − ξ, t) μ̄ θ̂(ξ, t) dξ ϵ̂(0, t)
    + ∫_x^1 θ̂_t(ξ, t) φ(1 − (ξ − x), t) dξ
    − ∫_0^x ĝ(x − ξ, t) ∫_ξ^1 θ̂_t(s, t) φ(1 − (s − ξ), t) ds dξ
    + ν̂˙^T(t)[h(x, t) + ϑ(x, t)] − ∫_0^x ĝ(x − ξ, t) ν̂˙^T(t)[h(ξ, t) + ϑ(ξ, t)] dξ
    + ρ̂˙(t) ψ(x, t) − ∫_0^x ĝ(x − ξ, t) ρ̂˙(t) ψ(ξ, t) dξ
    + ∫_0^1 κ̂_t(ξ, t)[P(x, ξ, t) + M(x, ξ, t)] dξ
    − ∫_0^x ĝ(x − ξ, t) ∫_0^1 κ̂_t(s, t)[P(ξ, s, t) + M(ξ, s, t)] ds dξ
    + ∫_0^1 θ̂_t(ξ, t) N(x, ξ, t) dξ − ∫_0^x ĝ(x − ξ, t) ∫_0^1 θ̂_t(s, t) N(ξ, s, t) ds dξ
    − ∫_0^x ĝ_t(x − ξ, t) ẑ(ξ, t) dξ.    (17.81)

Using (17.76) and the notation T defined in (17.79), we obtain (17.80a). Substituting x = 1 into (17.79) and using (17.78b) and the control law (17.75) yields (17.80b).


17.2.6 Proof of Theorem 17.1

Since r is bounded by Assumption 17.2, the signals of the reference model (17.39),
the filters M, N and ϑ and the derived filters m 0 and n 0 are all bounded pointwise in
space. Hence

||a||, ||b||, ||ϑ||, ||M||, ||N ||, ||m 0 ||, ||n 0 || ∈ L∞ (17.82a)
||a||∞ , ||b||∞ , ||ϑ||∞ , ||M||∞ , ||N ||∞ , ||m 0 ||∞ , ||n 0 ||∞ ∈ L∞ . (17.82b)

Moreover, we have

||η(t)|| ≤ G 1 ||ẑ(t)||, ||ẑ(t)|| ≤ G 1 ||η(t)|| (17.83)

and

||ĝ(t)||∞ ≤ ḡ, ||ĝt || ∈ L2 ∩ L∞ (17.84)

for some positive constant ḡ.


Consider the functionals
V_1(t) = μ̄^{−1} ∫_0^1 (1 + x) η²(x, t) dx    (17.85a)
V_2(t) = ∫_0^1 (2 − x) w^T(x, t) Λ̄^{−1} w(x, t) dx    (17.85b)
V_3(t) = μ̄^{−1} ∫_0^1 (1 + x) φ²(x, t) dx    (17.85c)
V_4(t) = μ̄^{−1} ∫_0^1 (1 + x) h^T(x, t) h(x, t) dx    (17.85d)
V_5(t) = μ̄^{−1} ∫_0^1 (1 + x) ∫_0^1 P²(x, ξ, t) dξ dx    (17.85e)
V_6(t) = μ̄^{−1} ∫_0^1 (1 + x) ψ²(x, t) dx.    (17.85f)

The following result is proved in Appendix E.11.


Lemma 17.9 There exist positive constants b_1, b_2, …, b_5 and nonnegative, integrable functions l_1, l_2, …, l_6 such that

V̇_1(t) ≤ −η²(0, t) − (1/4) μ̄ V_1(t) + l_1(t)V_1(t) + l_2(t)V_3(t)
        + l_3(t)V_4(t) + l_4(t)V_5(t) + l_5(t)V_6(t) + l_6(t) + b_1 ϵ̂²(0, t)    (17.86a)
V̇_2(t) ≤ −|w(1, t)|² + 4n η²(0, t) + 4n ϵ̂²(0, t) − (1/2) λ̄_1 V_2(t)    (17.86b)
V̇_3(t) ≤ 4η²(0, t) + 4ϵ̂²(0, t) − (1/2) μ̄ V_3(t)    (17.86c)
V̇_4(t) ≤ 2|w(1, t)|² − |h(0, t)|² − (1/2) μ̄ V_4(t)    (17.86d)
V̇_5(t) ≤ b_1 V_2(t) − ||p_0||² − (1/2) μ̄ V_5(t)    (17.86e)
V̇_6(t) ≤ b_2 V_1(t) + b_3 V_2(t) − (1/2) μ̄ V_6(t) + b_4 |w(1, t)|² − ψ²(0, t) + b_5.    (17.86f)
2
We start by proving boundedness of all signals, by forming

V_7(t) = a_1 V_1(t) + a_2 V_2(t) + V_3(t) + V_4(t) + V_5(t) + V_6(t)    (17.87)

for some positive constants a1 and a2 to be decided. We straightforwardly, using


Lemma 17.9, find
 
V̇_7(t) ≤ −[a_1 − 4na_2 − 4] η²(0, t) − [(1/4) a_1 μ̄ − b_2] V_1(t)
        − [(1/2) a_2 λ̄_1 − b_1 − b_3] V_2(t) − (1/2) μ̄ V_3(t) − (1/2) μ̄ V_4(t) − (1/2) μ̄ V_5(t)
        − (1/2) μ̄ V_6(t) + [a_1 b_1 + 4na_2 + 4] ϵ̂²(0, t) − (a_2 − 2 − b_4)|w(1, t)|²
        − |h(0, t)|² − ψ²(0, t) − ||p_0(t)||² + b_5 + l_7(t)V_7(t) + l_8(t)    (17.88)

for some nonnegative integrable functions l7 and l8 . Choosing


   
a_2 > max{ 2 + b_4, 2(b_3 + b_1)/λ̄_1 },  a_1 > max{ 4na_2 + 4, 4b_2/μ̄ }    (17.89)

we obtain

V̇_7(t) ≤ −b_6 η²(0, t) − c V_7(t) + b_7 ϵ̂²(0, t) − |h(0, t)|² − ψ²(0, t)
        − ||p_0(t)||² + b_5 + l_7(t)V_7(t) + l_8(t)    (17.90)

for some positive constants c, b_6 and b_7. Consider the term in ϵ̂²(0, t), and expand it as follows

ϵ̂²(0, t) = σ²(t)(1 + |h(0, t)|² + |ϑ(0, t)|² + ψ²(0, t) + ||φ(t)||² + ||n_0(t)||² + ||p_0(t)||² + ||m_0(t)||²)    (17.91)

where σ is defined in (17.67) and σ² is a nonnegative, bounded, integrable function


(Lemma 17.7). Since ϑ, n 0 , m 0 are bounded, we find, by inserting (17.91) into (17.90),
that

V̇_7(t) ≤ −c V_7(t) + l_9(t)V_7(t) + l_{10}(t)
        − [1 − b_7 σ²(t)](|h(0, t)|² + ψ²(0, t) + ||p_0(t)||²) + b_5    (17.92)

for some nonnegative, bounded, integrable functions l9 and l10 , and some constant
b5 . From (17.70), we have

V̇7 (t) ≤ −σ 2 (t) (17.93)

and from the definition of V in (17.68) and the inequality (17.73), we have

σ 2 (t) ≤ kV7 (t) (17.94)

for some positive constant k. It then follows from Lemma B.4 in Appendix B that
V7 ∈ L∞ and hence

||η||, ||w||, ||φ||, ||h||, ||P||, ||ψ|| ∈ L∞ . (17.95)

From the invertibility of the transformation (17.79), we also have

||ẑ|| ∈ L∞ . (17.96)

From (17.59), it follows that

||ψ|| ∈ L∞ , (17.97)

while from (17.40),

||ζ||, ||β|| ∈ L∞ (17.98)

follows. Lemma 17.3 gives

||α|| ∈ L∞ , (17.99)

while the invertibility of the transformations of Lemmas 17.1 and 17.2, yields

||u||, ||v|| ∈ L∞ . (17.100)



We now prove square integrability of the system states' L_2-norms. Since ||ψ|| ∈ L_∞, ψ²(x, t) for each fixed t is bounded almost everywhere in the domain x ∈ [0, 1]. Specifically, ψ²(0, t) is bounded for almost all t ∈ [0, ∞), and hence

σ² ψ²(0, t) ∈ L_1    (17.101)

since σ 2 ∈ L1 . Now forming

V8 (t) = a1 V1 (t) + a2 V2 (t) + V3 (t) + V4 (t) + V5 (t) (17.102)

we similarly find using Lemma 17.9


 
V̇_8(t) ≤ −[a_1 − 4na_2 − 4] η²(0, t) − (1/4) a_1 μ̄ V_1(t) − [(1/2) a_2 λ̄_1 − b_1] V_2(t)
        − (1/2) μ̄ V_3(t) − (1/2) μ̄ V_4(t) − (1/2) μ̄ V_5(t) + [a_1 b_1 + 4na_2 + 4] ϵ̂²(0, t)
        − (a_2 − 2)|w(1, t)|² − |h(0, t)|² − ||p_0(t)||² + l_7(t)V_7(t) + l_8(t).    (17.103)

Choosing a1 and a2 according to (17.89), we obtain

V̇_8(t) ≤ −b_8 η²(0, t) − c̄ V_8(t) + b_9 ϵ̂²(0, t) − |h(0, t)|²
        − ||p_0(t)||² + l_{11}(t)V_8(t) + l_{12}(t)    (17.104)

for some nonnegative, bounded integrable functions l11 , l12 , and a positive constant
b9 . Inserting (17.91) yields

V̇_8(t) ≤ −c̄ V_8(t) + l_{13}(t)V_8(t) + l_{14}(t)
        − [1 − b_9 σ²(t)](|h(0, t)|² + ||p_0(t)||²)    (17.105)

for some positive constant c̄, and nonnegative integrable functions l13 and l14 , where
we utilized (17.101). Now Lemma B.4 in Appendix B yields V_8 ∈ L_1 ∩ L_∞ and thus

||η||, ||w||, ||φ||, ||h||, ||P|| ∈ L2 ∩ L∞ . (17.106)

The above results then imply that h(0, t) and || p0 (t)|| are bounded for almost all t,
and hence

σ 2 h 2 (0, ·), σ 2 || p0 ||2 ∈ L1 (17.107)

follows, implying that (17.105) can be written as

V̇8 (t) ≤ −c̄V8 (t) + l13 (t)V8 (t) + l15 (t) (17.108)

for a nonnegative, integrable function

l15 (t) = l14 (t) + b9 σ 2 (t)(|h(0, t)|2 + || p0 (t)||2 ). (17.109)

Lemma B.3 in Appendix B then gives V8 (t) → 0, and thus

||η||, ||w||, ||φ||, ||h||, ||P|| → 0. (17.110)

From the invertibility of the transformation (17.79), ||ẑ|| ∈ L_2 ∩ L_∞ and ||ẑ|| → 0 immediately follow.
Finally, we prove that the tracking goal (17.7) is achieved. From the definition of
the filter φ in (17.45b), we can explicitly solve for φ to obtain

φ(x, t) = z(0, t − tv (1 − x)) (17.111)

for t ≥ tv (1 − x), and hence


lim_{t→∞} ||φ(t)||² = lim_{t→∞} ∫_0^1 φ²(x, t) dx = lim_{t→∞} ∫_0^1 z²(0, t − t_v(1 − x)) dx
                    = lim_{t→∞} μ̄ ∫_{t−t_v}^t z²(0, τ) dτ = 0    (17.112)

for t ≥ t_v, implying ∫_t^{t+T} z²(0, s) ds → 0 for any T > 0, which from the definition of z(0, t) is equivalent to the tracking goal (17.7). □
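The key step in (17.112) is the substitution τ = t − t_v(1 − x), which (with t_v = 1/μ̄) turns the spatial L²-norm of φ into a sliding time-window integral of z²(0, ·). This identity is easy to confirm numerically; the boundary signal below is an arbitrary placeholder:

```python
import numpy as np

mu_bar = 2.0
t_v = 1.0 / mu_bar
z0 = lambda tau: np.sin(3.0 * tau)   # placeholder boundary signal z(0, tau)
t = 5.0

# Left-hand side: int_0^1 z(0, t - t_v (1 - x))^2 dx  (trapezoidal rule)
x = np.linspace(0.0, 1.0, 20001)
fx = z0(t - t_v * (1.0 - x)) ** 2
lhs = np.sum(0.5 * (fx[1:] + fx[:-1]) * np.diff(x))

# Right-hand side: mu_bar * int_{t - t_v}^{t} z(0, tau)^2 dtau
tau = np.linspace(t - t_v, t, 20001)
ft = z0(tau) ** 2
rhs = mu_bar * np.sum(0.5 * (ft[1:] + ft[:-1]) * np.diff(tau))

print(lhs, rhs)
```

Since the substitution is linear, the two quadrature sums sample the same integrand and agree to machine precision.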

17.3 Adaptive Output Feedback Stabilization

The adaptive output feedback controller is obtained from the model reference adaptive controller of Theorem 17.1 by simply setting r ≡ 0. This controller also gives
the desirable property of square integrability and asymptotic convergence to zero of
the system states in the L 2 -sense. Consider the control law
U(t) = (1/ρ̂(t)) [ ∫_0^1 ĝ(1 − ξ, t) ẑ(ξ, t) dξ − ν̂^T(t) w(1, t) − ∫_0^1 κ̂(ξ, t) w_1(ξ, t) dξ ]    (17.113)

where ẑ is generated using (17.59), and ĝ is the on-line solution to the Volterra integral equation (17.76) with ρ̂, θ̂ and κ̂ generated using the adaptive laws (17.62).

Theorem 17.2 Consider system (17.1), filters (17.45) and (17.47), and the adaptive
laws (17.62). Let r ≡ 0. Then, the control law (17.113) with ĝ generated from (17.76)
guarantees

||u||, ||v||, ||ψ||, ||φ||, ||h||, ||P|| ∈ L2 ∩ L∞ (17.114)

and

||u||, ||v||, ||ψ||, ||φ||, ||h||, ||P|| → 0. (17.115)

Proof We already know from the proof of Theorem 17.1 that

||η||, ||w||, ||φ||, ||h||, ||P|| ∈ L2 ∩ L∞ , (17.116)

and

||η||, ||w||, ||φ||, ||h||, ||P|| → 0. (17.117)

If r ≡ 0, then a = b = ϑ ≡ 0, M = N ≡ 0, and (17.59) gives ||ψ|| ∈ L2 ∩ L∞


and ||ψ|| → 0, while from (17.40), ||ζ||, ||β|| ∈ L2 ∩ L∞ and ||ζ||, ||β|| → 0 fol-
low. Lemma 17.3 gives ||α|| ∈ L2 ∩ L∞ , and ||α|| → 0. The invertibility of the
transformations of Lemmas 17.1 and 17.2, yields

||u||, ||v|| ∈ L2 ∩ L∞ (17.118)

and

||u||, ||v|| → 0. (17.119)

17.4 Simulations

System (17.1) and the controllers of Theorems 17.1 and 17.2 are implemented for
n = 2. The system parameters are set to

Λ(x) = diag{sin(πx) + 1, 2 + x},  μ(x) = e^x    (17.120a)
k_1 = 0.75,  k_2 = 2    (17.120b)
Σ(x) = [ −x  −sin(x) ; cos(πx)  −sinh(x) ],  ω(x) = [ 1 ; e^x ],  ϖ(x) = [ x − 1 ; x + 1 ]    (17.120c)
q = [ 0 ; 1 ],  c = [ −0.5 ; 1 ]    (17.120d)

with initial conditions for the system set to

u 1,0 (x) = u 2,0 (x) = x, v0 (x) = sin(2πx). (17.121)
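For reference, the simulation parameters (17.120)–(17.121) translate directly into code; the sketch below is ours (the Python names are placeholders, and n = 2):

```python
import numpy as np

# Parameter functions of (17.120); the Python names are ours.
Lambda = lambda x: np.diag([np.sin(np.pi * x) + 1.0, 2.0 + x])   # Lambda(x)
mu = lambda x: np.exp(x)                                          # mu(x)
k1, k2 = 0.75, 2.0
Sigma = lambda x: np.array([[-x, -np.sin(x)],
                            [np.cos(np.pi * x), -np.sinh(x)]])    # Sigma(x)
omega = lambda x: np.array([1.0, np.exp(x)])                      # omega(x)
varpi = lambda x: np.array([x - 1.0, x + 1.0])                    # varpi(x)
q = np.array([0.0, 1.0])
c = np.array([-0.5, 1.0])

# Initial conditions (17.121) on a grid.
xg = np.linspace(0.0, 1.0, 5)
u0 = np.stack([xg, xg])          # u_{1,0}(x) = u_{2,0}(x) = x
v0 = np.sin(2.0 * np.pi * xg)    # v_0(x)

print(Lambda(0.5))
```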



System (17.1) with the given parameters is unstable in the open loop case. The
adaptation gains are set to

Γ_1 = I_2,  γ_2 = 1,  γ_3 = γ_4 ≡ 1,    (17.122)

while the parameter bounds are set to

ν̱_1 = ν̱_2 = θ̱ = κ̱ = −100,  ρ̱ = 0.1    (17.123a)
ν̄_1 = ν̄_2 = θ̄ = κ̄ = 100,  ρ̄ = 100.    (17.123b)

All additional initial conditions are set to zero.

17.4.1 Tracking

The controller of Theorem 17.1 is here applied, with the reference signal r set to
r(t) = 1 + sin(πt/10) + 2 sin((√2/2) t).    (17.124)

In the adaptive control case, the system state norm and actuation signal are seen in Fig. 17.1 to be bounded. The estimated parameters are also seen to be bounded in Fig. 17.2.
The tracking objective (17.7) is seen in Fig. 17.3 to be achieved after approximately 15 s. The violent transient observed from the onset of control at t = 0 until tracking is achieved at around t = 12 is due to the choice of initial conditions, which are deliberately chosen to induce transients so that the theoretical convergence results are clearly demonstrated. In practice, though, transients would be avoided by applying an appropriate start-up procedure to the system.
Fig. 17.1 State norm ||u_1|| + ||u_2|| + ||v|| (left) and actuation signal U (right) for the tracking case of Theorem 17.1

Fig. 17.2 Estimated parameters ν̂_1, ν̂_2 and ρ̂ for the tracking case of Theorem 17.1

Fig. 17.3 Reference signal r (solid black) and measured signal y_0 (dashed red)

17.4.2 Stabilization

To demonstrate the properties of Theorem 17.2, the reference signal is here set
identically zero. It is seen from Fig. 17.4 that the state norms and actuation signal in
this case are bounded and converge to zero. The convergence time is approximately
8 s.

17.5 Notes

The above result is arguably the strongest result concerning n + 1 systems, stabilizing
a general type of n + 1 coupled linear hyperbolic PDEs from a single boundary
measurement only.

Fig. 17.4 Left: State norm ||u_1|| + ||u_2|| + ||v|| for the stabilization case of Theorem 17.2. Right: Actuation signal for the stabilization case of Theorem 17.2

In Anfinsen and Aamo (2017), it is shown that under the assumption of having
the parameter c known, and u(1, t) measured, the above adaptive controller can be
slightly simplified and some additional, interesting properties can be proved. First,
it is shown that the n + 1 system is from the controller’s perspective equal to a 2 × 2
system. Hence, the controller order does not increase with n if c is known and u(1, t)
is measured. Secondly, none of the values tu,i are required to be known, only an upper
bound is required. Lastly, pointwise boundedness and convergence to zero can be
proved, none of which were proved for the controller of Theorem 17.2.

Reference

Anfinsen H, Aamo OM (2017) Adaptive stabilization of a system of n + 1 coupled linear hyperbolic PDEs from boundary sensing. In: Australian and New Zealand control conference, Gold Coast, Queensland, Australia
Part V
n + m Systems
Chapter 18
Introduction

This part considers the most general class of PDEs treated in this book. It contains
systems of n + m linear coupled hyperbolic PDEs of which n equations convect
information from x = 1 to x = 0, and m equations convect information in the oppo-
site direction. Such systems are usually stated in the form (1.25), which we for the
reader’s convenience restate here

u_t(x, t) + Λ^+(x) u_x(x, t) = Σ^{++}(x) u(x, t) + Σ^{+−}(x) v(x, t)    (18.1a)
v_t(x, t) − Λ^−(x) v_x(x, t) = Σ^{−+}(x) u(x, t) + Σ^{−−}(x) v(x, t)    (18.1b)
u(0, t) = Q_0 v(0, t)    (18.1c)
v(1, t) = C_1 u(1, t) + U(t)    (18.1d)
u(x, 0) = u_0(x)    (18.1e)
v(x, 0) = v_0(x)    (18.1f)

for the system states


u(x, t) = [u_1(x, t) u_2(x, t) … u_n(x, t)]^T    (18.2a)
v(x, t) = [v_1(x, t) v_2(x, t) … v_m(x, t)]^T,    (18.2b)

defined over x ∈ [0, 1], t ≥ 0. The system parameters are in the form

Λ^+(x) = diag{λ_1(x), λ_2(x), …, λ_n(x)}    (18.3a)
Λ^−(x) = diag{μ_1(x), μ_2(x), …, μ_m(x)}    (18.3b)
Σ^{++}(x) = {σ^{++}_{ij}(x)}_{1≤i,j≤n},  Σ^{+−}(x) = {σ^{+−}_{ij}(x)}_{1≤i≤n, 1≤j≤m}    (18.3c)
Σ^{−+}(x) = {σ^{−+}_{ij}(x)}_{1≤i≤m, 1≤j≤n},  Σ^{−−}(x) = {σ^{−−}_{ij}(x)}_{1≤i,j≤m}    (18.3d)
Q_0 = {q_{ij}}_{1≤i≤n, 1≤j≤m},  C_1 = {c_{ji}}_{1≤j≤m, 1≤i≤n}    (18.3e)

© Springer Nature Switzerland AG 2019 345


H. Anfinsen and O. M. Aamo, Adaptive Control of Hyperbolic PDEs,
Communications and Control Engineering,
https://doi.org/10.1007/978-3-030-05879-1_18

and assumed to satisfy, for i, k = 1, 2, . . . , n, j, l = 1, 2, . . . , m

λ_i, μ_j ∈ C^1([0, 1]),  λ_i(x), μ_j(x) > 0, ∀x ∈ [0, 1]    (18.4a)
σ^{++}_{ik}, σ^{+−}_{ij}, σ^{−+}_{ji}, σ^{−−}_{jl} ∈ C^0([0, 1]),  q_{ij}, c_{ji} ∈ R,    (18.4b)

while the initial conditions


u_0(x) = [u_{1,0}(x) u_{2,0}(x) … u_{n,0}(x)]^T    (18.5a)
v_0(x) = [v_{1,0}(x) v_{2,0}(x) … v_{m,0}(x)]^T,    (18.5b)

satisfy

u 0 , v0 ∈ B([0, 1]). (18.6)

The signal
U(t) = [U_1(t) U_2(t) … U_m(t)]^T,    (18.7)

is a vector of actuation signals.


As with the solutions to n + 1-systems in Part IV, the methods in this part of the
book will be derived subject to some restrictions on the transport speeds. Specifically,
one of the following two assumptions will be used

−μ1 (x) < −μ2 (x) < · · · < −μm (x) < 0 < λ1 (x) ≤ λ2 (x) ≤
. . . ≤ λn (x), ∀x ∈ [0, 1] (18.8)

and

−μ1 (x) < −μ2 (x) < · · · < −μm (x) < 0 < λ1 (x) < λ2 (x) <
. . . < λn (x), ∀x ∈ [0, 1] (18.9)

for all x ∈ [0, 1]. In addition, we will frequently, without loss of generality, assume
the diagonal elements of  ++ and  −− to be zero, hence

σ^{++}_{ii} ≡ 0, i = 1, 2, …, n,  σ^{−−}_{jj} ≡ 0, j = 1, 2, …, m.    (18.10)

Boundary measurements are either taken at the boundary anti-collocated or col-


located with actuation, hence

y0 (t) = v(0, t) (18.11a)


y1 (t) = u(1, t). (18.11b)

In Chap. 19, non-adaptive state-feedback and boundary observers are derived, and
these are combined into output-feedback solutions. We also solve an output tracking
problem, where the measurement (18.11a) anti-collocated with actuation is sought
to track an arbitrary, bounded reference signal. The resulting state-feedback track-
ing controller can be combined with the boundary observers into output-feedback
tracking controllers.
The problem of stabilizing system (18.1) when the boundary parameters Q 0 and
C1 in (18.1c)–(18.1d) are uncertain, is solved in Chap. 20. The method requires
measurements to be taken at both boundaries. The problems solved for n + 1-systems
in Chap. 15 and for 2 × 2 systems in Chap. 10, are straightforward to extend to n + m-
systems, and are therefore omitted.
Chapter 19
Non-adaptive Schemes

19.1 Introduction

We start by deriving non-adaptive state-feedback controllers and boundary observers for systems in the form (18.1). In Sect. 19.2, we derive state-feedback controllers.
Firstly, in Sect. 19.2.1, a controller originally proposed in Hu et al. (2016) (although
for the constant-coefficient case) is derived that achieves convergence to zero in a
finite time that involves the sum of all the transport delays in the state v in (18.1b). As
the convergence time depends on the number of states m in v, this is a non-minimum
time convergent controller. In Sect. 19.2.2 the controller from Sect. 19.2.1 is slightly
altered so that regulation to zero is achieved in minimum time. Such a minimum
time controller was originally proposed in Auriol and Di Meglio (2016). However,
we will state the more compact solution originally proposed in Coron et al. (2017)
which involves the use of an invertible Fredholm integral transformation. Next, in
Sect. 19.3, we derive observers for system (18.1). Two observer designs are proposed.
One uses sensing (18.11a) anti-collocated with the actuation, while the other one only employs sensing (18.11b) collocated with the actuation. The former of these observers was originally proposed in Hu et al. (2016). Both observers converge in finite time. However, neither of them is a minimum-time convergent observer. The observers are
combined with the minimum-time convergent controller to establish output-feedback
controllers in Sect. 19.4.
In Sect. 19.5, we solve a reference tracking problem, where the goal is to make
an output signal taken as a linear combination of the states at the boundary anti-
collocated with actuation track an arbitrary, bounded reference signal. This tracking
problem was originally solved in (Anfinsen and Aamo 2018). The resulting state-
feedback controller can also be combined with the boundary observers into output-
feedback controllers.
Most of the derived controllers and observers are implemented and simulated in
Sect. 19.6, before some concluding remarks are given in Sect. 19.7.


19.2 State Feedback Controllers

19.2.1 Non-minimum-time Controller

Consider the control law


U(t) = −C_1 u(1, t) + ∫_0^1 K^u(1, ξ) u(ξ, t) dξ + ∫_0^1 K^v(1, ξ) v(ξ, t) dξ    (19.1)

where

K^u(x, ξ) = {K^u_{ij}(x, ξ)}_{1≤i≤m, 1≤j≤n}    (19.2a)
K^v(x, ξ) = {K^v_{ij}(x, ξ)}_{1≤i,j≤m}    (19.2b)

are defined over the triangular domain T defined in (1.1a), and satisfy the PDE

Λ^−(x) K^u_x(x, ξ) − K^u_ξ(x, ξ) Λ^+(ξ) = K^u(x, ξ) Σ^{++}(ξ) + K^u(x, ξ)(Λ^+)'(ξ)
                                        + K^v(x, ξ) Σ^{−+}(ξ)    (19.3a)
Λ^−(x) K^v_x(x, ξ) + K^v_ξ(x, ξ) Λ^−(ξ) = K^u(x, ξ) Σ^{+−}(ξ) − K^v(x, ξ)(Λ^−)'(ξ)
                                        + K^v(x, ξ) Σ^{−−}(ξ)    (19.3b)
Λ^−(x) K^u(x, x) + K^u(x, x) Λ^+(x) = −Σ^{−+}(x)    (19.3c)
Λ^−(x) K^v(x, x) − K^v(x, x) Λ^−(x) = −Σ^{−−}(x)    (19.3d)
K^v(x, 0) Λ^−(0) − K^u(x, 0) Λ^+(0) Q_0 = G(x)    (19.3e)

where G is a strictly lower triangular matrix in the form

G(x) = {g_{ij}(x)}_{1≤i,j≤m},  g_{ij}(x) = 0 if 1 ≤ i ≤ j ≤ m, and arbitrary otherwise.    (19.4)

These equations are under-determined, and to ensure well-posedness, we add the additional boundary conditions

K^v_{ij}(1, ξ) = k^v_{ij}(ξ),  1 ≤ j < i ≤ m    (19.5)

for some arbitrary functions k^v_{ij}(ξ), 1 ≤ j < i ≤ m.


Well-posedness of the PDE consisting of (19.3) and (19.5) then follows from
Theorem D.6 in Appendix D.6.
Theorem 19.1 Consider system (18.1) subject to assumption (18.8). Let the controller be taken as (19.1), where (K^u, K^v) is the solution to (19.3), (19.5). Then

u ≡ 0,  v ≡ 0    (19.6)

for t ≥ t_F, where

t_F = t_{u,1} + t_{v,tot},   t_{v,tot} = Σ_{j=1}^m t_{v,j}    (19.7a)
t_{u,i} = ∫_0^1 dγ/λ_i(γ),   t_{v,j} = ∫_0^1 dγ/μ_j(γ).    (19.7b)
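The times in (19.7) are plain quadratures of the reciprocal transport speeds. A minimal numerical sketch (the helper name is ours; the constant speed profiles are those later chosen in the simulation study of Sect. 19.6):

```python
import numpy as np

def transport_time(speed, n=2001):
    """t = int_0^1 dgamma / speed(gamma), by the composite trapezoidal rule."""
    g = np.linspace(0.0, 1.0, n)
    f = 1.0 / speed(g)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * (g[1:] - g[:-1])))

# Constant profiles lambda_1 = 1, lambda_2 = 3, mu_1 = 1.5, mu_2 = 1 (Sect. 19.6).
t_u = [transport_time(lambda g: 1.0 + 0.0 * g),
       transport_time(lambda g: 3.0 + 0.0 * g)]
t_v = [transport_time(lambda g: 1.5 + 0.0 * g),
       transport_time(lambda g: 1.0 + 0.0 * g)]

t_F = t_u[0] + sum(t_v)     # (19.7): non-minimum-time convergence bound
t_min = t_u[0] + t_v[-1]    # (19.21): minimum-time convergence bound
print(t_F, t_min)
```

For these speeds this reproduces the values t_F ≈ 2.667 and t_min = 2 computed in (19.98).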

Proof We will show that the backstepping transformation

α(x,t) = u(x,t)    (19.8a)
β(x,t) = v(x,t) − ∫_0^x K^u(x,ξ)u(ξ,t)dξ − ∫_0^x K^v(x,ξ)v(ξ,t)dξ    (19.8b)

and the control law (19.1) with (K u , K v ) satisfying the PDE (19.3) map (18.1) into
the target system

α_t(x,t) + Λ⁺(x)α_x(x,t) = Σ⁺⁺(x)α(x,t) + Σ⁺⁻(x)β(x,t) + ∫_0^x C⁺(x,ξ)α(ξ,t)dξ + ∫_0^x C⁻(x,ξ)β(ξ,t)dξ    (19.9a)
β_t(x,t) − Λ⁻(x)β_x(x,t) = G(x)β(0,t)    (19.9b)
α(0,t) = Q₀β(0,t)    (19.9c)
β(1,t) = 0    (19.9d)
α(x,0) = α₀(x)    (19.9e)
β(x,0) = β₀(x)    (19.9f)

for α₀, β₀ ∈ B([0,1]), where G has the triangular form (19.4) and is given from (19.3e), while C⁺ and C⁻ are defined over the triangular domain T defined in (1.1a), and given as the solution to the equations

C⁺(x,ξ) = Σ⁺⁻(x)K^u(x,ξ) + ∫_ξ^x C⁻(x,s)K^u(s,ξ)ds    (19.10a)
C⁻(x,ξ) = Σ⁺⁻(x)K^v(x,ξ) + ∫_ξ^x C⁻(x,s)K^v(s,ξ)ds.    (19.10b)
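Equations of the type (19.10) are Volterra integral equations of the second kind and can be solved by successive approximations. Below is a scalar stand-in (the actual kernels are matrix-valued; the grid solver and the test kernels are our own illustrative choices):

```python
import numpy as np

def solve_volterra(f, k, n=101, iters=30):
    """Successive approximations for the scalar Volterra equation
    C(x, xi) = f(x, xi) + int_xi^x C(x, s) k(s, xi) ds on 0 <= xi <= x <= 1,
    a stand-in for (19.10b), discretized with the trapezoidal rule."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    F = f(x[:, None], x[None, :])
    K = k(x[:, None], x[None, :])
    C = F.copy()
    for _ in range(iters):
        Cn = F.copy()
        for i in range(n):              # fixed x = x[i]
            for j in range(i + 1):      # xi = x[j] <= x[i]
                seg = C[i, j:i + 1] * K[j:i + 1, j]
                Cn[i, j] = F[i, j] + float(np.sum(0.5 * (seg[1:] + seg[:-1]))) * h
        C = Cn
    return x, C

# Sanity check: for f = 1 and k = 1 the solution is C(x, xi) = exp(x - xi).
x, C = solve_volterra(lambda x, xi: 1.0 + 0.0 * (x + xi),
                      lambda s, xi: 1.0 + 0.0 * (s + xi))
print(abs(C[-1, 0] - np.e))
```

The Neumann series underlying this iteration always converges for Volterra equations, which is why no smallness of the kernel is needed.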

By differentiating (19.8b) with respect to time and space, respectively, inserting the dynamics (18.1a)–(18.1b), integrating by parts, using the boundary condition (18.1c), and substituting the result back into the dynamics (18.1b), we find

0 = v_t(x,t) − Λ⁻(x)v_x(x,t) − Σ⁻⁺(x)u(x,t) − Σ⁻⁻(x)v(x,t)
  = β_t(x,t) − Λ⁻(x)β_x(x,t)
    − [K^v(x,0)Λ⁻(0) − K^u(x,0)Λ⁺(0)Q₀] v(0,t)
    − [K^u(x,x)Λ⁺(x) + Λ⁻(x)K^u(x,x) + Σ⁻⁺(x)] u(x,t)
    − [Λ⁻(x)K^v(x,x) − K^v(x,x)Λ⁻(x) + Σ⁻⁻(x)] v(x,t)
    − ∫_0^x [Λ⁻(x)K^u_x(x,ξ) − K^u_ξ(x,ξ)Λ⁺(ξ) − K^u(x,ξ)Σ⁺⁺(ξ) − K^u(x,ξ)(Λ⁺)′(ξ) − K^v(x,ξ)Σ⁻⁺(ξ)] u(ξ,t)dξ
    − ∫_0^x [Λ⁻(x)K^v_x(x,ξ) + K^v_ξ(x,ξ)Λ⁻(ξ) + K^v(x,ξ)(Λ⁻)′(ξ) − K^u(x,ξ)Σ⁺⁻(ξ) − K^v(x,ξ)Σ⁻⁻(ξ)] v(ξ,t)dξ.    (19.11)

Using (19.3) and the fact that v(0, t) = β(0, t), we obtain (19.9b). Inserting (19.8)
into (19.9a) gives

0 = α_t(x,t) + Λ⁺(x)α_x(x,t) − Σ⁺⁺(x)α(x,t) − Σ⁺⁻(x)β(x,t) − ∫_0^x C⁺(x,ξ)α(ξ,t)dξ − ∫_0^x C⁻(x,ξ)β(ξ,t)dξ
  = u_t(x,t) + Λ⁺(x)u_x(x,t) − Σ⁺⁺(x)u(x,t) − Σ⁺⁻(x)v(x,t)
    − ∫_0^x [C⁺(x,ξ) − Σ⁺⁻(x)K^u(x,ξ) − ∫_ξ^x C⁻(x,s)K^u(s,ξ)ds] u(ξ,t)dξ
    − ∫_0^x [C⁻(x,ξ) − Σ⁺⁻(x)K^v(x,ξ) − ∫_ξ^x C⁻(x,s)K^v(s,ξ)ds] v(ξ,t)dξ    (19.12)

where we have changed the order of integration in the double integrals. Using (19.10)
gives the dynamics (18.1a). The boundary condition (19.9c) follows trivially from
inserting (19.8) into (18.1c). Evaluating (19.8b) at x = 1 and inserting the boundary
condition (18.1d), we get
β(1,t) = C₁u(1,t) + U(t) − ∫_0^1 K^u(1,ξ)u(ξ,t)dξ − ∫_0^1 K^v(1,ξ)v(ξ,t)dξ.    (19.13)
0

The control law (19.1) gives the boundary condition (19.9d). The initial conditions
α0 and β0 are expressed from u 0 , v0 by evaluating (19.8) at t = 0, giving

α₀(x) = u₀(x)    (19.14a)
β₀(x) = v₀(x) − ∫_0^x K^u(x,ξ)u₀(ξ)dξ − ∫_0^x K^v(x,ξ)v₀(ξ)dξ.    (19.14b)

Due to boundary condition (19.9d) and the fact that G in (19.9b) is strictly lower triangular, we have ∂_tβ₁ − μ₁∂_xβ₁ = 0, so that β₁ ≡ 0 for t ≥ t_{v,1}. This fact reduces the next equation to ∂_tβ₂ − μ₂∂_xβ₂ = 0 for t ≥ t_{v,1}, so that β₂ ≡ 0 for t ≥ t_{v,1} + t_{v,2}. Continuing this argument, we obtain β ≡ 0 for t ≥ t_{v,tot} = Σ_{i=1}^m t_{v,i}, and system (19.9) is reduced to

α_t(x,t) + Λ⁺(x)α_x(x,t) = Σ⁺⁺(x)α(x,t) + ∫_0^x C⁺(x,ξ)α(ξ,t)dξ    (19.15a)
α(0,t) = 0    (19.15b)
α(x, t_{v,tot}) = α_{t_{v,tot}}(x)    (19.15c)

for some function α_{t_{v,tot}}. System (19.15) has the same form as system (14.17), and will be zero after an additional time t_{u,1}. Hence, for t ≥ t_F = t_{u,1} + t_{v,tot}, we have α ≡ 0, β ≡ 0. Due to the invertibility of the backstepping transformation (19.8), the result follows. □
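The sequential "emptying" argument can be checked on a toy instance of the β-subsystem (19.9b), (19.9d) with m = 2, solved exactly by the method of characteristics. The constant speeds and coupling gain below are illustrative choices of ours, not values from the text:

```python
import numpy as np

# Toy instance of (19.9b)/(19.9d): strictly lower triangular G = [[0, 0], [g, 0]],
#   b1_t - mu1 b1_x = 0,           b1(1, t) = 0
#   b2_t - mu2 b2_x = g b1(0, t),  b2(1, t) = 0
mu1, mu2, g = 2.0, 1.0, 0.5

def b1(x, t):
    z = x + mu1 * t                     # characteristic through (x, t)
    return np.sin(np.pi * z) if z <= 1.0 else 0.0

def b2(x, t, n=2001):
    z = x + mu2 * t
    free = np.sin(np.pi * z) if z <= 1.0 else 0.0   # initial-data contribution
    S = min(t, (1.0 - x) / mu2)         # time the characteristic spends inside
    s = np.linspace(0.0, S, n)
    vals = np.array([b1(0.0, t - si) for si in s])
    forced = g * float(np.sum(0.5 * (vals[1:] + vals[:-1])) * (s[1] - s[0]))
    return free + forced

t_v1, t_v2 = 1.0 / mu1, 1.0 / mu2
print(b2(0.0, t_v1 + 0.5))              # still nonzero: driven by b1's history
print(b2(0.0, t_v1 + t_v2 + 0.1))       # extinguished after t_v1 + t_v2
```

As the proof predicts, b1 dies out after t_v1 and b2 only after the sum t_v1 + t_v2 of both propagation times.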

19.2.2 Minimum-Time Controller

The controller of Theorem 19.1 is not minimum-time convergent, since the convergence time involves the sum of the propagation times of all the states in v. This is addressed next, where an additional transformation is used to create a target system of the form (19.9), but with G ≡ 0.
Consider the control law
U(t) = −C₁u(1,t) + ∫_0^1 K^u_min(ξ)u(ξ,t)dξ + ∫_0^1 K^v_min(ξ)v(ξ,t)dξ    (19.16)

where
K^u_min(ξ) = K^u(1,ξ) − ∫_ξ^1 Θ(1,s)K^u(s,ξ)ds    (19.17a)
K^v_min(ξ) = K^v(1,ξ) + Θ(1,ξ) − ∫_ξ^1 Θ(1,s)K^v(s,ξ)ds    (19.17b)

and Θ is a strictly lower triangular matrix defined over the square domain [0,1]² and given as the solution to the Fredholm integral equation

Θ(x,ξ) = −F(x,ξ) + ∫_0^1 F(x,s)Θ(s,ξ)ds    (19.18)

where F is a strictly lower triangular matrix defined over [0,1]² and given as the solution to the PDE

Λ⁻(x)F_x(x,ξ) + F_ξ(x,ξ)Λ⁻(ξ) = −F(x,ξ)(Λ⁻)′(ξ)    (19.19a)
F(x,0) = G(x)(Λ⁻(0))⁻¹    (19.19b)
F(0,ξ) = 0.    (19.19c)

The existence of a unique solution F to (19.19) and a unique solution Θ to (19.18)


are guaranteed by Theorem D.7 and Lemma D.2 in Appendix D, respectively.
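A scalar stand-in for the Fredholm equation (19.18) can be solved on a grid by fixed-point iteration. One caveat: this simple iteration needs a contractive (small) kernel, whereas Lemma D.2 exploits the strictly triangular structure of F and needs no smallness; the kernel below is a hypothetical contractive choice of ours:

```python
import numpy as np

# Fixed-point iteration for a scalar stand-in for (19.18):
#   Theta(x, xi) = -F(x, xi) + int_0^1 F(x, s) Theta(s, xi) ds,
# discretized with a rectangle rule on a uniform grid.
n = 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
F = 0.5 * np.outer(np.exp(-x), np.ones(n))   # F(x, s) = 0.5 exp(-x): norm < 1

Theta = -F.copy()
for _ in range(200):
    Theta = -F + h * F @ Theta               # discretized integral operator

residual = np.max(np.abs(Theta - (-F + h * F @ Theta)))
print(residual)
```

Since the discrete operator has norm about 0.5 here, the iterates contract geometrically and the residual of the discretized equation is driven to machine precision.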

Theorem 19.2 Consider system (18.1) subject to assumption (18.8). Let the controller be taken as (19.16) with (K^u_min, K^v_min) given by (19.17). Then

u ≡ 0,  v ≡ 0    (19.20)

for t ≥ t_min, where

t_min = t_{u,1} + t_{v,m},    (19.21)

and where t_{u,1} and t_{v,m} are defined in (19.7).

Proof It is shown in the proof of Theorem 19.1 that system (18.1), subject to assump-
tion (18.8), can be mapped using the backstepping transformation (19.8) into the
target system (19.9) provided the control law is chosen as (19.1). If, however, we
choose the slightly modified control law
U(t) = −C₁u(1,t) + ∫_0^1 K^u(1,ξ)u(ξ,t)dξ + ∫_0^1 K^v(1,ξ)v(ξ,t)dξ + U_a(t)    (19.22)

we obtain the target system

α_t(x,t) + Λ⁺(x)α_x(x,t) = Σ⁺⁺(x)α(x,t) + Σ⁺⁻(x)β(x,t) + ∫_0^x C⁺(x,ξ)α(ξ,t)dξ + ∫_0^x C⁻(x,ξ)β(ξ,t)dξ    (19.23a)
β_t(x,t) − Λ⁻(x)β_x(x,t) = G(x)β(0,t)    (19.23b)
α(0,t) = Q₀β(0,t)    (19.23c)
β(1,t) = U_a(t)    (19.23d)
α(x,0) = α₀(x)    (19.23e)
β(x,0) = β₀(x)    (19.23f)

where G still has the lower triangular form (19.4).
Consider now the Fredholm integral transformation

β(x,t) = η(x,t) − ∫_0^1 F(x,ξ)η(ξ,t)dξ    (19.24)
0

from a new variable η into β, and where F satisfies the PDE (19.19) and is strictly
lower triangular, hence

0 if 1 ≤ i ≤ j ≤ n
F(x) = { f i j (x)}1≤i, j≤n = (19.25)
f i j (x) otherwise.

The transformation (19.24) has inverse

η(x,t) = β(x,t) − ∫_0^1 Θ(x,ξ)β(ξ,t)dξ    (19.26)
0

with Θ satisfying (19.18). This can be verified by inserting (19.26) into (19.24), yielding

β(x,t) = β(x,t) − ∫_0^1 Θ(x,ξ)β(ξ,t)dξ − ∫_0^1 F(x,ξ)β(ξ,t)dξ + ∫_0^1 F(x,ξ) ∫_0^1 Θ(ξ,s)β(s,t)ds dξ,    (19.27)

which can be written as

0 = −∫_0^1 [ Θ(x,ξ) + F(x,ξ) − ∫_0^1 F(x,s)Θ(s,ξ)ds ] β(ξ,t)dξ,    (19.28)

which holds due to (19.18).


We will show that transformation (19.24) maps the target system

η_t(x,t) − Λ⁻(x)η_x(x,t) = 0    (19.29a)
η(1,t) = 0    (19.29b)
η(x,0) = η₀(x)    (19.29c)

into the β-system given by (19.23b), (19.23d) and (19.23f).


Differentiating (19.24) with respect to time, inserting the dynamics (19.29a), and
integrating by parts, we find

β_t(x,t) = η_t(x,t) − F(x,1)Λ⁻(1)η(1,t) + F(x,0)Λ⁻(0)η(0,t) + ∫_0^1 F_ξ(x,ξ)Λ⁻(ξ)η(ξ,t)dξ + ∫_0^1 F(x,ξ)(Λ⁻)′(ξ)η(ξ,t)dξ,    (19.30)

while differentiating (19.24) with respect to space gives


β_x(x,t) = η_x(x,t) − ∫_0^1 F_x(x,ξ)η(ξ,t)dξ.    (19.31)

Inserting (19.30) and (19.31) into (19.23b), we obtain

0 = β_t(x,t) − Λ⁻(x)β_x(x,t) − G(x)β(0,t)    (19.32)
  = η_t(x,t) − Λ⁻(x)η_x(x,t) − F(x,1)Λ⁻(1)η(1,t) − [G(x) − F(x,0)Λ⁻(0)] η(0,t)
    + ∫_0^1 [Λ⁻(x)F_x(x,ξ) + F_ξ(x,ξ)Λ⁻(ξ) + F(x,ξ)(Λ⁻)′(ξ) + G(x)F(0,ξ)] η(ξ,t)dξ.    (19.33)

Using (19.19) and (19.29b) gives (19.29a). Evaluating (19.26) at x = 1 and inserting the boundary condition (19.23d) gives

η(1,t) = U_a(t) − ∫_0^1 Θ(1,ξ)β(ξ,t)dξ.    (19.34)

Choosing

U_a(t) = ∫_0^1 Θ(1,ξ)β(ξ,t)dξ    (19.35)

results in the boundary condition (19.29b). The initial condition η₀ is given from β₀ as

η₀(x) = β₀(x) − ∫_0^1 Θ(x,ξ)β₀(ξ)dξ    (19.36)

found by inserting t = 0 into (19.26).



From the simple structure of the target system (19.29), it is evident that η ≡ 0 for
t ≥ tv,m , which corresponds to the slowest transport speed in η. From (19.24), we
then also have β ≡ 0 for t ≥ tv,m . The final result follows from the same reasoning
as in the proof of Theorem 19.1.
Inserting (19.8b) into (19.35) and substituting the result into (19.22), gives
U(t) = −C₁u(1,t) + ∫_0^1 K^u(1,ξ)u(ξ,t)dξ + ∫_0^1 K^v(1,ξ)v(ξ,t)dξ + ∫_0^1 Θ(1,ξ)v(ξ,t)dξ
     − ∫_0^1 ∫_ξ^1 Θ(1,s)K^u(s,ξ)ds u(ξ,t)dξ − ∫_0^1 ∫_ξ^1 Θ(1,s)K^v(s,ξ)ds v(ξ,t)dξ    (19.37)

where we have changed the order of integration in the double integrals. Using the
definition (19.17) gives the control law (19.16). 

19.3 Observers

19.3.1 Anti-collocated Observer

Consider the observer

û_t(x,t) + Λ⁺(x)û_x(x,t) = Σ⁺⁺(x)û(x,t) + Σ⁺⁻(x)v̂(x,t) + P⁺(x)(y₀(t) − v̂(0,t))    (19.38a)
v̂_t(x,t) − Λ⁻(x)v̂_x(x,t) = Σ⁻⁺(x)û(x,t) + Σ⁻⁻(x)v̂(x,t) + P⁻(x)(y₀(t) − v̂(0,t))    (19.38b)
û(0,t) = Q₀y₀(t)    (19.38c)
v̂(1,t) = C₁û(1,t) + U(t)    (19.38d)
û(x,0) = û₀(x)    (19.38e)
v̂(x,0) = v̂₀(x)    (19.38f)

with initial conditions û₀, v̂₀ ∈ B([0,1]), and where the injection gains P⁺ and P⁻ are given as

P⁺(x) = M^α(x,0)Λ⁻(0)    (19.39a)
P⁻(x) = M^β(x,0)Λ⁻(0).    (19.39b)

The matrices

M^α(x,ξ) = {M^α_ij(x,ξ)}_{1≤i≤n,1≤j≤m}    (19.40a)
M^β(x,ξ) = {M^β_ij(x,ξ)}_{1≤i,j≤m}    (19.40b)

are defined over T (see (1.1a)), and satisfy the PDE

Λ⁺(x)M^α_x(x,ξ) − M^α_ξ(x,ξ)Λ⁻(ξ) = M^α(x,ξ)(Λ⁻)′(ξ) + Σ⁺⁺(x)M^α(x,ξ) + Σ⁺⁻(x)M^β(x,ξ)    (19.41a)
Λ⁻(x)M^β_x(x,ξ) + M^β_ξ(x,ξ)Λ⁻(ξ) = −M^β(x,ξ)(Λ⁻)′(ξ) − Σ⁻⁺(x)M^α(x,ξ) − Σ⁻⁻(x)M^β(x,ξ)    (19.41b)
Λ⁺(x)M^α(x,x) + M^α(x,x)Λ⁻(x) = Σ⁺⁻(x)    (19.41c)
Λ⁻(x)M^β(x,x) − M^β(x,x)Λ⁻(x) = −Σ⁻⁻(x)    (19.41d)
M^β(1,ξ) − C₁M^α(1,ξ) = H(ξ)    (19.41e)

where H is a strictly upper triangular matrix of the form

H(x) = {h_ij(x)}_{1≤i,j≤m},  with h_ij(x) = 0 if 1 ≤ j ≤ i ≤ m.    (19.42)

As with the controller kernel equations, these equations are under-determined, so to ensure well-posedness we add the additional boundary conditions

M^β_ij(x,0) = m^β_ij(x),  1 ≤ i < j ≤ m    (19.43)

for some arbitrary functions m^β_ij(x), 1 ≤ i < j ≤ m, defined for x ∈ [0,1].
Well-posedness of the PDE consisting of (19.41) and (19.43) then follows from
Theorem D.6 in Appendix D.6 following a coordinate transformation (x, ξ) → (1 −
ξ, 1 − x) and transposing the equations.

Theorem 19.3 Consider system (18.1) subject to assumption (18.8), and the observer (19.38) with injection gains P⁺ and P⁻ given by (19.39). Then

û ≡ u,  v̂ ≡ v    (19.44)

for t ≥ t_F, where t_F is defined in (19.7).

Proof The observer errors ũ = u − û and ṽ = v − v̂ satisfy the dynamics

ũ_t(x,t) + Λ⁺(x)ũ_x(x,t) = Σ⁺⁺(x)ũ(x,t) + Σ⁺⁻(x)ṽ(x,t) − P⁺(x)ṽ(0,t)    (19.45a)
ṽ_t(x,t) − Λ⁻(x)ṽ_x(x,t) = Σ⁻⁺(x)ũ(x,t) + Σ⁻⁻(x)ṽ(x,t) − P⁻(x)ṽ(0,t)    (19.45b)
ũ(0,t) = 0    (19.45c)
ṽ(1,t) = C₁ũ(1,t)    (19.45d)
ũ(x,0) = ũ₀(x)    (19.45e)
ṽ(x,0) = ṽ₀(x)    (19.45f)

where ũ₀ = u₀ − û₀, ṽ₀ = v₀ − v̂₀. We will show that the backstepping transformation

ũ(x,t) = α̃(x,t) + ∫_0^x M^α(x,ξ)β̃(ξ,t)dξ    (19.46a)
ṽ(x,t) = β̃(x,t) + ∫_0^x M^β(x,ξ)β̃(ξ,t)dξ    (19.46b)

where (M^α, M^β) satisfies the PDE (19.41), maps the target system

α̃_t(x,t) + Λ⁺(x)α̃_x(x,t) = Σ⁺⁺(x)α̃(x,t) + ∫_0^x D⁺(x,ξ)α̃(ξ,t)dξ    (19.47a)
β̃_t(x,t) − Λ⁻(x)β̃_x(x,t) = Σ⁻⁺(x)α̃(x,t) + ∫_0^x D⁻(x,ξ)α̃(ξ,t)dξ    (19.47b)
α̃(0,t) = 0    (19.47c)
β̃(1,t) = C₁α̃(1,t) − ∫_0^1 H(ξ)β̃(ξ,t)dξ    (19.47d)
α̃(x,0) = α̃₀(x)    (19.47e)
β̃(x,0) = β̃₀(x)    (19.47f)

where H is the upper triangular matrix satisfying (19.41e), and D⁺ and D⁻ satisfy the Volterra integral equations

D⁺(x,ξ) = −M^α(x,ξ)Σ⁻⁺(ξ) − ∫_ξ^x M^α(x,s)D⁻(s,ξ)ds    (19.48a)
D⁻(x,ξ) = −M^β(x,ξ)Σ⁻⁺(ξ) − ∫_ξ^x M^β(x,s)D⁻(s,ξ)ds,    (19.48b)

into the error dynamics (19.45).


By differentiating (19.46) with respect to time and space, inserting the dynamics
(19.47a), (19.47b) and integrating by parts, we obtain

α̃_t(x,t) = ũ_t(x,t) − M^α(x,x)Λ⁻(x)β̃(x,t) + M^α(x,0)Λ⁻(0)β̃(0,t)
          + ∫_0^x M^α_ξ(x,ξ)Λ⁻(ξ)β̃(ξ,t)dξ + ∫_0^x M^α(x,ξ)(Λ⁻)′(ξ)β̃(ξ,t)dξ
          − ∫_0^x M^α(x,ξ)Σ⁻⁺(ξ)α̃(ξ,t)dξ − ∫_0^x ∫_ξ^x M^α(x,s)D⁻(s,ξ)ds α̃(ξ,t)dξ    (19.49a)
β̃_t(x,t) = ṽ_t(x,t) − M^β(x,x)Λ⁻(x)β̃(x,t) + M^β(x,0)Λ⁻(0)β̃(0,t)
          + ∫_0^x M^β_ξ(x,ξ)Λ⁻(ξ)β̃(ξ,t)dξ + ∫_0^x M^β(x,ξ)(Λ⁻)′(ξ)β̃(ξ,t)dξ
          − ∫_0^x M^β(x,ξ)Σ⁻⁺(ξ)α̃(ξ,t)dξ − ∫_0^x ∫_ξ^x M^β(x,s)D⁻(s,ξ)ds α̃(ξ,t)dξ    (19.49b)

and

α̃_x(x,t) = ũ_x(x,t) − M^α(x,x)β̃(x,t) − ∫_0^x M^α_x(x,ξ)β̃(ξ,t)dξ    (19.50a)
β̃_x(x,t) = ṽ_x(x,t) − M^β(x,x)β̃(x,t) − ∫_0^x M^β_x(x,ξ)β̃(ξ,t)dξ,    (19.50b)

respectively. Inserting (19.49) and (19.50) into the dynamics (19.47a), (19.47b) and
noting that β̃(0, t) = ṽ(0, t), we obtain
0 = α̃_t(x,t) + Λ⁺(x)α̃_x(x,t) − Σ⁺⁺(x)α̃(x,t) − ∫_0^x D⁺(x,ξ)α̃(ξ,t)dξ
  = ũ_t(x,t) + Λ⁺(x)ũ_x(x,t) − Σ⁺⁺(x)ũ(x,t) − Σ⁺⁻(x)ṽ(x,t) + M^α(x,0)Λ⁻(0)ṽ(0,t)
    − [Λ⁺(x)M^α(x,x) + M^α(x,x)Λ⁻(x) − Σ⁺⁻(x)] β̃(x,t)
    − ∫_0^x [Λ⁺(x)M^α_x(x,ξ) − M^α_ξ(x,ξ)Λ⁻(ξ) − M^α(x,ξ)(Λ⁻)′(ξ) − Σ⁺⁺(x)M^α(x,ξ) − Σ⁺⁻(x)M^β(x,ξ)] β̃(ξ,t)dξ
    − ∫_0^x [D⁺(x,ξ) + M^α(x,ξ)Σ⁻⁺(ξ) + ∫_ξ^x M^α(x,s)D⁻(s,ξ)ds] α̃(ξ,t)dξ    (19.51)

0 = β̃_t(x,t) − Λ⁻(x)β̃_x(x,t) − Σ⁻⁺(x)α̃(x,t) − ∫_0^x D⁻(x,ξ)α̃(ξ,t)dξ
  = ṽ_t(x,t) − Λ⁻(x)ṽ_x(x,t) − Σ⁻⁺(x)ũ(x,t) − Σ⁻⁻(x)ṽ(x,t) + M^β(x,0)Λ⁻(0)ṽ(0,t)
    + [Λ⁻(x)M^β(x,x) − M^β(x,x)Λ⁻(x) + Σ⁻⁻(x)] β̃(x,t)
    + ∫_0^x [Λ⁻(x)M^β_x(x,ξ) + M^β_ξ(x,ξ)Λ⁻(ξ) + M^β(x,ξ)(Λ⁻)′(ξ) + Σ⁻⁻(x)M^β(x,ξ) + Σ⁻⁺(x)M^α(x,ξ)] β̃(ξ,t)dξ
    − ∫_0^x [D⁻(x,ξ) + ∫_ξ^x M^β(x,s)D⁻(s,ξ)ds + M^β(x,ξ)Σ⁻⁺(ξ)] α̃(ξ,t)dξ.    (19.52)

Using the Eqs. (19.41a)–(19.41d), (19.48) and the injection gains (19.39) gives (19.45a)–(19.45b).
The boundary condition (19.45c) follows immediately from (19.46a) and (19.47c). Inserting (19.46) into the boundary condition (19.45d) gives

ṽ(1,t) − C₁ũ(1,t) = β̃(1,t) − C₁α̃(1,t) + ∫_0^1 [M^β(1,ξ) − C₁M^α(1,ξ)] β̃(ξ,t)dξ = 0.    (19.53)

Using (19.41e) results in (19.47d). The initial conditions (19.47e)–(19.47f) and (19.45e)–(19.45f) are linked through (19.46) by evaluating (19.46) at t = 0.
The α̃-dynamics in (19.47) is independent of β̃ and will be zero for t ≥ t_{u,1}, corresponding to the slowest transport speed in α̃. For t ≥ t_{u,1}, system (19.47) reduces to

β̃_t(x,t) − Λ⁻(x)β̃_x(x,t) = 0    (19.54a)
β̃(1,t) = −∫_0^1 H(ξ)β̃(ξ,t)dξ    (19.54b)
β̃(x, t_{u,1}) = β̃_{t_{u,1}}(x)    (19.54c)

for some function β̃_{t_{u,1}} ∈ B([0,1]).
Due to the strictly upper triangular structure of H in the boundary condition (19.54b), we have ∂_tβ̃_m − μ_m∂_xβ̃_m = 0, β̃_m(1,t) = 0, so that β̃_m ≡ 0 for t ≥ t_{u,1} + t_{v,m}. This fact reduces equation number m − 1 to ∂_tβ̃_{m−1} − μ_{m−1}∂_xβ̃_{m−1} = 0, β̃_{m−1}(1,t) = 0 for t ≥ t_{u,1} + t_{v,m}, and hence β̃_{m−1} ≡ 0 for t ≥ t_{u,1} + t_{v,m} + t_{v,m−1}. Continuing this argument, we obtain β̃ ≡ 0 for t ≥ t_{u,1} + t_{v,tot} = t_{u,1} + Σ_{i=1}^m t_{v,i} = t_F. From (19.46) it is clear that ũ ≡ 0 and ṽ ≡ 0 for t ≥ t_F, which gives the desired result. □

19.3.2 Collocated Observer

Consider the observer

û_t(x,t) + Λ⁺(x)û_x(x,t) = Σ⁺⁺(x)û(x,t) + Σ⁺⁻(x)v̂(x,t) + P⁺(x)(y₁(t) − û(1,t))    (19.55a)
v̂_t(x,t) − Λ⁻(x)v̂_x(x,t) = Σ⁻⁺(x)û(x,t) + Σ⁻⁻(x)v̂(x,t) + P⁻(x)(y₁(t) − û(1,t))    (19.55b)
û(0,t) = Q₀v̂(0,t)    (19.55c)
v̂(1,t) = C₁y₁(t) + U(t)    (19.55d)
û(x,0) = û₀(x)    (19.55e)
v̂(x,0) = v̂₀(x)    (19.55f)

for some initial conditions û₀, v̂₀ ∈ B([0,1]), where the injection gains P⁺ and P⁻ are given as

P⁺(x) = N^α(x,1)Λ⁺(1)    (19.56a)
P⁻(x) = N^β(x,1)Λ⁺(1).    (19.56b)

The matrices

N^α(x,ξ) = {N^α_ij(x,ξ)}_{1≤i,j≤n}    (19.57a)
N^β(x,ξ) = {N^β_ij(x,ξ)}_{1≤i≤m,1≤j≤n}    (19.57b)

are defined over S (see (1.1c)), and satisfy the PDE

Λ⁺(x)N^α_x(x,ξ) + N^α_ξ(x,ξ)Λ⁺(ξ) = −N^α(x,ξ)(Λ⁺)′(ξ) + Σ⁺⁺(x)N^α(x,ξ) + Σ⁺⁻(x)N^β(x,ξ)    (19.58a)
Λ⁻(x)N^β_x(x,ξ) − N^β_ξ(x,ξ)Λ⁺(ξ) = N^β(x,ξ)(Λ⁺)′(ξ) − Σ⁻⁺(x)N^α(x,ξ) − Σ⁻⁻(x)N^β(x,ξ)    (19.58b)
Λ⁺(x)N^α(x,x) − N^α(x,x)Λ⁺(x) = −Σ⁺⁺(x)    (19.58c)
Λ⁻(x)N^β(x,x) + N^β(x,x)Λ⁺(x) = Σ⁻⁺(x)    (19.58d)
N^α(0,ξ) − Q₀N^β(0,ξ) = A(ξ)    (19.58e)

where A is a strictly lower triangular matrix of the form

A(x) = {a_ij(x)}_{1≤i,j≤n},  with a_ij(x) = 0 if 1 ≤ i ≤ j ≤ n.    (19.59)

As with the controller kernel equations and the kernel equations for the anti-collocated observer, these equations are under-determined. To ensure well-posedness, we add the boundary conditions

N^α_ij(x,1) = n^α_ij(x),  1 ≤ j < i ≤ n    (19.60)

for some arbitrary functions n^α_ij(x), 1 ≤ j < i ≤ n, defined for x ∈ [0,1].
Well-posedness of the PDE consisting of (19.58) and (19.60) then follows from Theorem D.6 in Appendix D.6.

Theorem 19.4 Consider system (18.1) subject to assumption (18.9), and the observer (19.55) with injection gains P⁺ and P⁻ given by (19.56). Then

û ≡ u,  v̂ ≡ v    (19.61)

for t ≥ t₀, where

t₀ = t_{u,tot} + t_{v,m},   t_{u,tot} = Σ_{i=1}^n t_{u,i}    (19.62)

with t_{u,i}, t_{v,m} defined in (19.7).
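For constant transport speeds the integrals in (19.62) reduce to reciprocals. A quick evaluation with the speeds used in the simulation study of Sect. 19.6:

```python
import numpy as np

# Observer convergence time (19.62), t0 = t_{u,tot} + t_{v,m}, for the constant
# speeds of Sect. 19.6: lambda = (1, 3), mu = (1.5, 1).
lam = np.array([1.0, 3.0])    # lambda_1 <= lambda_2
mu = np.array([1.5, 1.0])     # mu_1 > mu_2
t_u = 1.0 / lam               # t_{u,i} = int_0^1 dgamma/lambda_i (constant speed)
t_v = 1.0 / mu
t0 = t_u.sum() + t_v[-1]
print(t0)                     # 1 + 1/3 + 1
```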

Proof The observer errors ũ = u − û and ṽ = v − v̂ satisfy the dynamics

ũ_t(x,t) + Λ⁺(x)ũ_x(x,t) = Σ⁺⁺(x)ũ(x,t) + Σ⁺⁻(x)ṽ(x,t) − P⁺(x)ũ(1,t)    (19.63a)
ṽ_t(x,t) − Λ⁻(x)ṽ_x(x,t) = Σ⁻⁺(x)ũ(x,t) + Σ⁻⁻(x)ṽ(x,t) − P⁻(x)ũ(1,t)    (19.63b)
ũ(0,t) = Q₀ṽ(0,t)    (19.63c)
ṽ(1,t) = 0    (19.63d)
ũ(x,0) = ũ₀(x)    (19.63e)
ṽ(x,0) = ṽ₀(x)    (19.63f)

where ũ₀ = u₀ − û₀, ṽ₀ = v₀ − v̂₀. It can be shown that the target system

α̃_t(x,t) + Λ⁺(x)α̃_x(x,t) = Σ⁺⁻(x)β̃(x,t) + ∫_x^1 B⁺(x,ξ)β̃(ξ,t)dξ    (19.64a)
β̃_t(x,t) − Λ⁻(x)β̃_x(x,t) = Σ⁻⁻(x)β̃(x,t) + ∫_x^1 B⁻(x,ξ)β̃(ξ,t)dξ    (19.64b)
α̃(0,t) = Q₀β̃(0,t) − ∫_0^1 A(ξ)α̃(ξ,t)dξ    (19.64c)
β̃(1,t) = 0    (19.64d)
α̃(x,0) = α̃₀(x)    (19.64e)
β̃(x,0) = β̃₀(x)    (19.64f)

where B⁺ and B⁻ are given by the Volterra integral equations

B⁺(x,ξ) = −N^α(x,ξ)Σ⁺⁻(ξ) − ∫_x^ξ N^α(x,s)B⁺(s,ξ)ds    (19.65a)
B⁻(x,ξ) = −N^β(x,ξ)Σ⁺⁻(ξ) − ∫_x^ξ N^β(x,s)B⁺(s,ξ)ds    (19.65b)

can be mapped into (19.63) with injection gains (19.56) using the backstepping transformation

ũ(x,t) = α̃(x,t) + ∫_x^1 N^α(x,ξ)α̃(ξ,t)dξ    (19.66a)
ṽ(x,t) = β̃(x,t) + ∫_x^1 N^β(x,ξ)α̃(ξ,t)dξ,    (19.66b)

where N α , N β satisfy the PDEs (19.58). The derivation follows the same steps as in
the proof of Theorem 19.3, and is omitted.
The β̃-dynamics in (19.64) is independent of α̃ and will be zero for t ≥ t_{v,m}, corresponding to the slowest transport speed in β̃. The resulting system in α̃ is then a cascade system which will be zero after an additional time Σ_{i=1}^n t_{u,i} = t_{u,tot}, and hence α̃ ≡ 0 and β̃ ≡ 0 for t ≥ t_{v,m} + t_{u,tot} = t₀. From (19.66), ũ ≡ 0 and ṽ ≡ 0 for t ≥ t₀ follows, which gives the desired result. □

19.4 Output Feedback Controllers

The state feedback controller of Theorems 19.2 or 19.1 can be combined with the
observers of Theorems 19.3 or 19.4 into output feedback controllers. The proofs are
straightforward and omitted.

19.4.1 Sensing Anti-collocated with Actuation

Combining the results of Theorems 19.2 and 19.3, we obtain the following theorem.

Theorem 19.5 Consider system (18.1) with measurement (18.11a). Let the controller be taken as

U(t) = −C₁û(1,t) + ∫_0^1 K^u_min(ξ)û(ξ,t)dξ + ∫_0^1 K^v_min(ξ)v̂(ξ,t)dξ    (19.67)

where (K^u_min, K^v_min) are given from (19.17), and û and v̂ are generated using the observer of Theorem 19.3. Then

u ≡ 0,  v ≡ 0    (19.68)

for t ≥ t_F + t_min, where t_F and t_min are defined in (19.7) and (19.21), respectively.

19.4.2 Sensing Collocated with Actuation

Combining the results of Theorems 19.2 and 19.4, we obtain the following theorem.

Theorem 19.6 Consider system (18.1) with measurement (18.11b). Let the controller be taken as

U(t) = −C₁û(1,t) + ∫_0^1 K^u_min(ξ)û(ξ,t)dξ + ∫_0^1 K^v_min(ξ)v̂(ξ,t)dξ    (19.69)

where (K^u_min, K^v_min) are given from (19.17), and û and v̂ are generated using the observer of Theorem 19.4. Then

u ≡ 0,  v ≡ 0    (19.70)

for t ≥ t0 + tmin , where t0 and tmin are defined in (19.62) and (19.21), respectively.

19.5 Reference Tracking

As opposed to the controller of Theorem 14.6 for n + 1 systems, where we designed a tracking controller for the measured signal y₀(t) = v(0,t), we now allow the signal to be manipulated to be a linear combination of the states at x = 0. That is, we seek to design U so that the following tracking goal is achieved after a finite time:

r(t) = R₀u(0,t) + v(0,t)    (19.71)



where R₀ is a constant matrix with parameters

R₀ = {r_ij}_{1≤i≤m,1≤j≤n},    (19.72)

subject to the restriction that

det(R₀Q₀ + I_m) ≠ 0.    (19.73)

Consider the control law

U(t) = −C₁u(1,t) + ∫_0^1 K^u(1,ξ)u(ξ,t)dξ + ∫_0^1 K^v(1,ξ)v(ξ,t)dξ + ∫_0^1 Θ(1,ξ)β(ξ,t)dξ + ω(t)    (19.74)

where K^u, K^v are given from the solution to the PDE (19.3) and (19.5), the state β is given from the system states u, v through (19.8b), while Θ is the solution to (19.18) with F given as the solution to the PDE (19.19), and

ω(t) = [ω₁(t)  ω₂(t)  ω₃(t)  …  ω_m(t)]^T    (19.75)

is given recursively as

ω_i(t) = ν_i(t + φ_i(1)) − Σ_{k=1}^{i−1} ∫_0^1 (p_ik(τ)/μ_i(τ)) ω_k(t + φ_i(1) − φ_i(τ)) dτ    (19.76)

for i = 1, …, m, where

ν(t) = [ν₁(t)  ν₂(t)  ν₃(t)  …  ν_m(t)]^T    (19.77)

is generated from r, under the assumption (19.73), as

ν(t) = (R₀Q₀ + I_m)⁻¹ r(t),    (19.78)

p_ij are the components of the strictly lower triangular matrix

P(x) = {p_ij(x)}_{1≤i,j≤m},  with p_ij(x) = 0 if 1 ≤ i ≤ j ≤ m,    (19.79)

given as the solution to the Fredholm integral equation

P(x) = F(x,1)Λ⁻(1) + ∫_0^1 F(x,ξ)P(ξ)dξ,    (19.80)

and

φ_i(x) = ∫_0^x dγ/μ_i(γ).    (19.81)

The existence of a solution P to (19.80) is guaranteed by Lemma D.2 in Appendix D.
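The invertibility condition (19.73) and the map (19.78) are easily checked numerically; here with the values later used in Sect. 19.6 (R₀ = 2I₂ and Q₀ from (19.96g)):

```python
import numpy as np

# Check condition (19.73) and form nu = (R0 Q0 + I)^{-1} r per (19.78).
R0 = 2.0 * np.eye(2)
Q0 = np.array([[3.0, -2.0], [2.0, 2.0]])
M = R0 @ Q0 + np.eye(2)

assert abs(np.linalg.det(M)) > 1e-12    # condition (19.73)

def nu(r):
    return np.linalg.solve(M, r)        # avoids forming the inverse explicitly

print(nu(np.array([0.0, 1.0])))
```

Solving the linear system instead of inverting M is the standard numerically preferable choice.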

Theorem 19.7 Consider system (18.1), and assume that R₀ satisfies (19.73). Then, the control law (19.74) guarantees that (19.71) holds for t ≥ t_{v,m}, with t_{v,m} defined in (19.7). Moreover, if r ∈ L_∞, then

||u||_∞, ||v||_∞ ∈ L_∞.    (19.82)

Proof By modifying the control law used in the Fredholm transformation performed in the proof of Theorem 19.2, choosing, instead of (19.35),

U_a(t) = ∫_0^1 Θ(1,ξ)β(ξ,t)dξ + U_b(t)    (19.83)

for a new control signal U_b, we obtain a slightly modified version of the target system (19.29) as

η_t(x,t) − Λ⁻(x)η_x(x,t) = P(x)U_b(t)    (19.84a)
η(1,t) = U_b(t)    (19.84b)
η(x,0) = η₀(x)    (19.84c)

where P is the strictly lower triangular matrix given from (19.80).
Inserting the boundary condition (18.1c) and the transformations (19.8b) and (19.24) with the boundary condition (19.19c), the tracking objective (19.71) can be expressed as

r(t) = (R₀Q₀ + I_m)η(0,t).    (19.85)

The target system (19.84), in component form, reads

∂_tη_i(x,t) − μ_i(x)∂_xη_i(x,t) = Σ_{k=1}^{i−1} p_ik(x)U_{b,k}(t)    (19.86a)
η_i(1,t) = U_{b,i}(t)    (19.86b)
η_i(x,0) = η_{i,0}(x)    (19.86c)

for i = 1, …, m, where

η(x,t) = [η₁(x,t)  η₂(x,t)  …  η_m(x,t)]^T    (19.87a)
U_b(t) = [U_{b,1}(t)  U_{b,2}(t)  …  U_{b,m}(t)]^T    (19.87b)
η₀(x) = [η_{1,0}(x)  η_{2,0}(x)  …  η_{m,0}(x)]^T.    (19.87c)

The equations (19.86) can be solved explicitly using the method of characteristics. Note that the φ_i defined in (19.81) are strictly increasing functions and hence invertible. Along the characteristic lines

x₁(x,s) = φ_i⁻¹(φ_i(x) + s),   t₁(t,s) = t − s    (19.88)

we have

(d/ds) η_i(x₁(x,s), t₁(t,s)) = −Σ_{k=1}^{i−1} p_ik(x₁(x,s)) U_{b,k}(t₁(t,s)).    (19.89)

Integrating from s = 0 to s = φ_i(1) − φ_i(x), we obtain

η_i(x,t) = η_i(1, t − φ_i(1) + φ_i(x)) + Σ_{k=1}^{i−1} ∫_0^{φ_i(1)−φ_i(x)} p_ik(x₁(x,s)) U_{b,k}(t₁(t,s)) ds    (19.90)

valid for t ≥ φ_i(1) − φ_i(x). Using the substitution τ = φ_i⁻¹(φ_i(x) + s) in the integral, (19.90) can be written

η_i(x,t) = U_{b,i}(t − φ_i(1) + φ_i(x)) + Σ_{k=1}^{i−1} ∫_x^1 (p_ik(τ)/μ_i(τ)) U_{b,k}(t + φ_i(x) − φ_i(τ)) dτ,    (19.91)

valid for t ≥ φ_i(1) − φ_i(x), and specifically

η_i(0,t) = U_{b,i}(t − φ_i(1)) + Σ_{k=1}^{i−1} ∫_0^1 (p_ik(τ)/μ_i(τ)) U_{b,k}(t − φ_i(τ)) dτ    (19.92)

valid for t ≥ φ_i(1). Hence, choosing the control laws U_{b,i} recursively as

U_{b,i}(t) = ν_i(t + φ_i(1)) − Σ_{k=1}^{i−1} ∫_0^1 (p_ik(τ)/μ_i(τ)) U_{b,k}(t − φ_i(τ) + φ_i(1)) dτ,    (19.93)

which is equivalent to choosing

U_b(t) = ω(t)    (19.94)


with ω defined in (19.75) and (19.76), we obtain η_i(0,t) = ν_i(t) for t ≥ φ_i(1), and

η(0,t) = ν(t)    (19.95)

for t ≥ t_{v,m}.
Inserting (19.95) into the right-hand side of (19.85) and using the definition (19.78), it is verified that the control objective (19.85), which is equivalent to (19.71), holds for t ≥ t_{v,m}.
From (19.84) with U_b(t) = ω(t), it is clear that η will be pointwise bounded if r is bounded. From the Fredholm transformation (19.24) and the cascade structure of system (19.23), pointwise boundedness of α and β follows. From the invertibility of the backstepping transformation (19.8), it is then clear that a bounded r implies pointwise boundedness of u and v. □
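For constant speeds, φ_i(x) = x/μ_i and the recursion (19.76)/(19.93) can be evaluated directly by quadrature. A sketch for m = 2 with a hypothetical constant gain p₂₁ and a toy reference ν (in the theorem, ν comes from (19.78)):

```python
import numpy as np

# Recursion (19.76) for constant speeds (phi_i(x) = x / mu_i), m = 2, and a
# hypothetical strictly lower triangular P with constant entry p_21 = 0.3.
mu = [1.5, 1.0]
phi1 = [1.0 / m for m in mu]                 # phi_i(1)
p21 = 0.3

def nu(i, t):                                # toy reference components
    return np.cos(np.pi * t) if i == 0 else np.sin(np.pi * t)

def omega(i, t, nq=401):
    """omega_i(t) per (19.76): omega_1 needs no quadrature,
    omega_2 integrates the already-known omega_1 (triangular cascade)."""
    val = nu(i, t + phi1[i])
    tau = np.linspace(0.0, 1.0, nq)
    for k in range(i):                       # strictly lower triangular sum
        w = np.array([omega(k, t + phi1[i] - ta / mu[i]) for ta in tau])
        f = p21 / mu[i] * w                  # p_ik(tau)/mu_i(tau), constant here
        val -= float(np.sum(0.5 * (f[1:] + f[:-1])) * (tau[1] - tau[0]))
    return float(val)

print(omega(1, 0.25))
```

The triangular structure is what makes the recursion explicit: ω₁ depends only on ν₁, ω₂ only on ν₂ and the already-computed ω₁, and so on.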

The state-feedback controller of Theorem 19.7 can also be combined with the
observers of Sect. 19.3 into output-feedback reference tracking controllers.

19.6 Simulations

System (18.1) is implemented using the system parameters

Λ⁺(x) = diag{1, 3},  ∀x ∈ [0,1]    (19.96a)
Λ⁻(x) = diag{1.5, 1},  ∀x ∈ [0,1]    (19.96b)
Σ⁺⁺(x) = (1/4) [ 0,  1 + e^x ;  4 + 2x,  0 ]    (19.96c)
Σ⁺⁻(x) = (1/4) [ 0,  e^x ;  2 + 2x,  0 ]    (19.96d)
Σ⁻⁺(x) = (1/4) [ 0,  e^x ;  2x − 4,  8x − 4 ]    (19.96e)
Σ⁻⁻(x) = (1/4) [ 0,  2x + 2 ;  cosh(x) + 1,  0 ]    (19.96f)
Q₀ = [ 3, −2 ;  2, 2 ],   C₁ = [ −3, 3 ;  −1, −3 ]    (19.96g)

and initial conditions

u₀(x) = [1  e^x]^T,   v₀(x) = [sin(πx)  sin(πx)]^T.    (19.97)
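For reference, the data (19.96)–(19.97) written out as NumPy objects, as a starting point for anyone reproducing the simulation study:

```python
import numpy as np

# System data (19.96)-(19.97) for n = m = 2.
Lam_p = np.diag([1.0, 3.0])      # Lambda^+
Lam_m = np.diag([1.5, 1.0])      # Lambda^-

def Sig_pp(x): return 0.25 * np.array([[0.0, 1.0 + np.exp(x)], [4.0 + 2.0 * x, 0.0]])
def Sig_pm(x): return 0.25 * np.array([[0.0, np.exp(x)], [2.0 + 2.0 * x, 0.0]])
def Sig_mp(x): return 0.25 * np.array([[0.0, np.exp(x)], [2.0 * x - 4.0, 8.0 * x - 4.0]])
def Sig_mm(x): return 0.25 * np.array([[0.0, 2.0 * x + 2.0], [np.cosh(x) + 1.0, 0.0]])

Q0 = np.array([[3.0, -2.0], [2.0, 2.0]])
C1 = np.array([[-3.0, 3.0], [-1.0, -3.0]])

def u0(x): return np.array([1.0 + 0.0 * x, np.exp(x)])
def v0(x): return np.array([np.sin(np.pi * x), np.sin(np.pi * x)])

# The zero-diagonal assumption (20.8) holds for Sigma^++ and Sigma^--:
print(np.diag(Sig_pp(0.3)), np.diag(Sig_mm(0.7)))
```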

Fig. 19.1 System norm ||u|| + ||v|| versus time for the non-minimum-time (dashed red) and minimum-time (dashed-dotted blue) controllers of Theorems 19.1 and 19.2. The theoretical convergence times t_F and t_min are indicated by black vertical lines

19.6.1 State-Feedback Control

The controllers of Theorems 19.1 and 19.2 are here implemented to demonstrate
performance. The convergence times are computed to be
t_F = t_{u,1} + t_{v,tot} = ∫_0^1 dγ/λ₁(γ) + Σ_{i=1}^2 ∫_0^1 dγ/μ_i(γ) = 1 + 2/3 + 1 ≈ 2.667    (19.98a)
t_min = t_{u,1} + t_{v,2} = ∫_0^1 dγ/λ₁(γ) + ∫_0^1 dγ/μ₂(γ) = 1 + 1 = 2.000.    (19.98b)

It is seen from the state norms shown in Fig. 19.1 and the actuation signals shown in Fig. 19.2 that both controllers achieve convergence to zero of the state norm and actuation signals in finite time, with the minimum-time convergent controller of Theorem 19.2 converging faster than the controller of Theorem 19.1. It is interesting to notice from Fig. 19.2 that the actuation signals are approximately the same for the first 1.2 s, but thereafter significantly different until convergence to zero is achieved.

Fig. 19.2 Left: Actuation signal U₁, and Right: Actuation signal U₂ for the non-minimum-time (dashed red) and minimum-time (dashed-dotted blue) controllers of Theorems 19.1 and 19.2

19.6.2 Output-Feedback and Tracking Control

The output-feedback controller of Theorem 19.5 and the tracking controller of Theorem 19.7 are implemented in this section to demonstrate performance. The matrix R₀ and reference signal r in (19.71) are set to

R₀ = 2I₂,   r(t) = [0  sin(πt)]^T.    (19.99)

The observer used by the controller of Theorem 19.5 should converge to the true state for

t ≥ t_F = 2.667    (19.100)

while the state norm using the output-feedback controller should converge to zero for

t ≥ t_F + t_min = 2.667 + 2.000 = 4.667.    (19.101)

Lastly, the tracking goal (19.71) should be achieved for

t ≥ t_{v,m} = 1.    (19.102)

From Fig. 19.3 it is observed that the state norms are bounded in both cases, and that the state estimation error actually converges faster than anticipated, with the estimates converging to their true values for approximately t ≥ 2 = t_min. Convergence to zero of the state norm during output tracking is therefore also faster than anticipated. The actuation signals are seen in Fig. 19.4 to be bounded.
Figure 19.5 shows the reference signal r(t) = [r₁(t)  r₂(t)]^T and the right-hand side of (19.71),

y_c(t) = [y_{c,1}(t)  y_{c,2}(t)]^T = R₀u(0,t) + v(0,t).    (19.103)
Fig. 19.3 Left: System norm ||u|| + ||v|| for the output-feedback controller of Theorem 19.5 and the output tracking controller of Theorem 19.7. Right: State estimation error norm ||u − û|| + ||v − v̂||

Fig. 19.4 Left: Actuation signal U₁, and Right: Actuation signal U₂ for the output-feedback (dashed red) and output tracking (dashed-dotted blue) controllers of Theorems 19.5 and 19.7

Fig. 19.5 Left: Reference signal r₁(t), and Right: Reference signal r₂(t) for the output-feedback (dashed red) and output tracking (dashed-dotted blue) controllers of Theorems 19.5 and 19.7

It is observed from Fig. 19.5 that the tracking goal is achieved for t ≥ tv,m = 1, as
predicted by theory.

19.7 Notes

The complexity of non-adaptive controller and observer designs increases further compared to the n + 1 designs of Chap. 14. The number of controller kernels required for the implementation of a stabilizing controller for an n + m system is m(n + m), so that a 1 + 2 system results in 6 kernels to be computed, compared to only 3 for the 2 + 1 case. Also, the resulting controller of Theorem 19.1 is non-minimum-time convergent, and an additional transformation is needed to derive the minimum-time convergent controller of Theorem 19.2. This transformation is a Fredholm integral transformation, and the technique was originally proposed in Coron et al. (2017).
An alternative way of deriving minimum-time controllers is offered in Auriol and Di Meglio (2016), using a slightly altered target system. However, the resulting controller requires the solution to an even more complicated set of PDEs that are cascaded in structure, making the proof of well-posedness, as well as numerically solving them, considerably harder. On the other hand, a minimum-time convergent anti-collocated observer is also proposed in Auriol and Di Meglio (2016). This is opposed to all observers derived in Sect. 19.3, which are non-minimum-time convergent. The

minimum-time convergent observer in Auriol and Di Meglio (2016), as with the controller design, requires the solution to a fairly complicated set of cascaded kernel equations. Extending the Fredholm-based transformation used to derive the minimum-time convergent controller of Theorem 19.2 to a minimum-time convergent observer is an unsolved problem.

References

Anfinsen H, Aamo OM (2018) Minimum time disturbance rejection and tracking control of n + m linear hyperbolic PDEs. In: 2018 American Control Conference, Milwaukee, WI, USA
Auriol J, Di Meglio F (2016) Minimum time control of heterodirectional linear coupled hyperbolic PDEs. Automatica 71:300–307
Coron J-M, Hu L, Olive G (2017) Finite-time boundary stabilization of general linear hyperbolic balance laws via Fredholm backstepping transformation. Automatica 84:95–100
Hu L, Di Meglio F, Vazquez R, Krstić M (2016) Control of homodirectional and general heterodirectional linear coupled hyperbolic PDEs. IEEE Trans Autom Control 61(11):3301–3314
Chapter 20
Adaptive Output-Feedback: Uncertain Boundary Condition

20.1 Introduction

We will now consider the n + m system (18.1), but for simplicity restrict ourselves to constant coefficients, that is

u_t(x,t) + Λ⁺u_x(x,t) = Σ⁺⁺u(x,t) + Σ⁺⁻v(x,t)    (20.1a)
v_t(x,t) − Λ⁻v_x(x,t) = Σ⁻⁺u(x,t) + Σ⁻⁻v(x,t)    (20.1b)
u(0,t) = Q₀v(0,t)    (20.1c)
v(1,t) = C₁u(1,t) + U(t)    (20.1d)
u(x,0) = u₀(x)    (20.1e)
v(x,0) = v₀(x)    (20.1f)
y₀(t) = v(0,t)    (20.1g)
y₁(t) = u(1,t)    (20.1h)

for the system states


u(x, t) = [u1(x, t) u2(x, t) . . . un(x, t)]^T (20.2a)
v(x, t) = [v1(x, t) v2(x, t) . . . vm(x, t)]^T, (20.2b)

defined over x ∈ [0, 1], t ≥ 0, and with initial conditions


u0(x) = [u1,0(x) u2,0(x) . . . un,0(x)]^T (20.3a)
v0(x) = [v1,0(x) v2,0(x) . . . vm,0(x)]^T, (20.3b)

satisfying

u 0 , v0 ∈ B([0, 1]). (20.4)

© Springer Nature Switzerland AG 2019
H. Anfinsen and O. M. Aamo, Adaptive Control of Hyperbolic PDEs,
Communications and Control Engineering,
https://doi.org/10.1007/978-3-030-05879-1_20

The system parameters

Λ+ = diag{λ1, λ2, . . . , λn}, Λ− = diag{μ1, μ2, . . . , μm} (20.5a)
Σ++ = {σ++_ij}1≤i,j≤n, Σ+− = {σ+−_ij}1≤i≤n, 1≤j≤m (20.5b)
Σ−+ = {σ−+_ij}1≤i≤m, 1≤j≤n, Σ−− = {σ−−_ij}1≤i,j≤m (20.5c)
Q0 = {qij}1≤i≤n, 1≤j≤m, C1 = {cij}1≤i≤m, 1≤j≤n (20.5d)

are now assumed to satisfy

λi, μj ∈ R, λi, μj > 0 (20.6a)
σ++_ik, σ+−_ij, σ−+_ji, σ−−_jl ∈ R, qij, cji ∈ R, (20.6b)

for i, k = 1, 2, . . . , n, j, l = 1, 2, . . . , m.
Additionally, we assume (18.9), that is

−μ1 < −μ2 < · · · < −μm < 0 < λ1 ≤ λ2 ≤ · · · ≤ λn (20.7)

and that the diagonal terms of Σ ++ and Σ −− are zero, hence

σ++_ii = 0, i = 1, 2, . . . , n, σ−−_jj = 0, j = 1, 2, . . . , m. (20.8)

The goal is to design a stabilizing control law


U(t) = [U1(t) U2(t) . . . Um(t)]^T, (20.9)

when the boundary parameters Q 0 and C1 are uncertain. We will consider the esti-
mation problem and closed loop adaptive control problem.

20.2 Sensing at Both Boundaries

20.2.1 Filter Design and Non-adaptive State Estimates

We introduce the input filters

ηt(x, t) + Λ+ ηx(x, t) = Σ++ η(x, t) + Σ+− φ(x, t) + P+(x)(y0(t) − φ(0, t)) (20.10a)
φt(x, t) − Λ− φx(x, t) = Σ−+ η(x, t) + Σ−− φ(x, t) + P−(x)(y0(t) − φ(0, t)) (20.10b)
η(0, t) = 0 (20.10c)
φ(1, t) = U(t) (20.10d)
η(x, 0) = η0(x) (20.10e)
φ(x, 0) = φ0(x) (20.10f)

where
η(x, t) = [η1(x, t) . . . ηn(x, t)]^T (20.11a)
φ(x, t) = [φ1(x, t) . . . φm(x, t)]^T, (20.11b)

and initial conditions η0 , φ0 ∈ B([0, 1]). The output injection gains P + and P − will
be specified later.
Furthermore, we design parameter filters that model how the boundary parameters
Q 0 and C1 influence the system states u and v. We define

Pt(x, t) + Λ+ Px(x, t) = Σ++ P(x, t) + Σ+− R(x, t) − P+(x)R(0, t) (20.12a)
Rt(x, t) − Λ− Rx(x, t) = Σ−+ P(x, t) + Σ−− R(x, t) − P−(x)R(0, t) (20.12b)
P(0, t) = y0^T(t) ⊗ In (20.12c)
R(1, t) = 0 (20.12d)
P(x, 0) = P0(x) (20.12e)
R(x, 0) = R0(x) (20.12f)

where ⊗ denotes the Kronecker product, and

P(x, t) = [P1(x, t) P2(x, t) . . . Pmn(x, t)] = {pij(x, t)}1≤i≤n, 1≤j≤mn (20.13a)
R(x, t) = [R1(x, t) R2(x, t) . . . Rmn(x, t)] = {rij(x, t)}1≤i≤m, 1≤j≤mn (20.13b)

with initial conditions P0 , R0 ∈ B([0, 1]), and

Wt(x, t) + Λ+ Wx(x, t) = Σ++ W(x, t) + Σ+− Z(x, t) − P+(x)Z(0, t) (20.14a)
Zt(x, t) − Λ− Zx(x, t) = Σ−+ W(x, t) + Σ−− Z(x, t) − P−(x)Z(0, t) (20.14b)
W(0, t) = 0 (20.14c)
Z(1, t) = y1^T(t) ⊗ Im (20.14d)
W(x, 0) = W0(x) (20.14e)
Z(x, 0) = Z0(x) (20.14f)

where

W(x, t) = [W1(x, t) W2(x, t) . . . Wmn(x, t)] = {wij(x, t)}1≤i≤n, 1≤j≤mn (20.15a)
Z(x, t) = [Z1(x, t) Z2(x, t) . . . Zmn(x, t)] = {zij(x, t)}1≤i≤m, 1≤j≤mn (20.15b)

with initial conditions W0 , Z 0 ∈ B([0, 1]). The output injection gains P + and P −
are the same ones as in (20.10).
We define non-adaptive state estimates as

ū(x, t) = η(x, t) + P(x, t)q + W(x, t)c, (20.16a)
v̄(x, t) = φ(x, t) + R(x, t)q + Z(x, t)c (20.16b)

where q contains the components of Q0 and c contains the components of C1, but
stacked column-wise, i.e.

q = [q1^T q2^T . . . qm^T]^T, c = [c1^T c2^T . . . cn^T]^T. (20.17)
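This column-wise stacking is what makes the Kronecker-product boundary condition (20.12c) work: for an n × m matrix Q0 and an m-vector y0, one has (y0^T ⊗ In)vec(Q0) = Q0 y0. A quick numerical check of this identity (the dimensions and random data below are illustrative only, not from the book):

```python
import numpy as np

n, m = 3, 2                       # n "u"-states, m "v"-states
rng = np.random.default_rng(0)
Q0 = rng.standard_normal((n, m))  # unknown boundary parameter, n x m
y0 = rng.standard_normal(m)       # boundary measurement y0(t) = v(0, t)

q = Q0.flatten(order="F")         # columns of Q0 stacked, as in (20.17)
P0 = np.kron(y0, np.eye(n))       # P(0, t) = y0^T kron I_n, an n x (nm) matrix

# the filter boundary condition reproduces the unknown boundary term:
assert np.allclose(P0 @ q, Q0 @ y0)
```

The same identity, with y1^T ⊗ Im, underlies the boundary condition (20.14d) for the filters associated with C1.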

Lemma 20.1 Consider system (20.1) and the non-adaptive state estimates (20.16)
generated using filters (20.10), (20.12) and (20.14). If the output injection gains
P + and P − are selected as (19.39), where (M α , M β ) is the solution to equation
(19.41)–(19.43) with constant coefficients and C1 = 0, then

ū ≡ u, v̄ ≡ v (20.18)

for t ≥ t F , where t F is defined in (19.7).

Proof The non-adaptive estimation errors, defined as

e(x, t) = u(x, t) − ū(x, t) (20.19a)
ε(x, t) = v(x, t) − v̄(x, t) (20.19b)

can straightforwardly be shown to satisfy the dynamics

et(x, t) + Λ+ ex(x, t) = Σ++ e(x, t) + Σ+− ε(x, t) − P+(x)ε(0, t), (20.20a)
εt(x, t) − Λ− εx(x, t) = Σ−+ e(x, t) + Σ−− ε(x, t) − P−(x)ε(0, t), (20.20b)
e(0, t) = 0 (20.20c)
ε(1, t) = 0 (20.20d)
e(x, 0) = e0(x) (20.20e)
ε(x, 0) = ε0(x) (20.20f)

with initial conditions

e0, ε0 ∈ B([0, 1]). (20.21)

The dynamics (20.20) has the same form as the dynamics (19.45) but with C1 = 0.
The rest of the proof therefore follows the same steps as the proof of Theorem 19.3
and is omitted. 

20.2.2 Adaptive Law

From the static relationships (20.16) and the result of Lemma 20.1 any standard
identification law can be applied to estimate the unknown parameters in q and c.
First, we will assume we have some bounds on the parameters q and c.
Assumption 20.1 Bounds q̄ and c̄ are known, so that

|q|∞ ≤ q̄, |c|∞ ≤ c̄. (20.22)

Next, we present the integral adaptive law with forgetting factor, normalization
and projection. Define

h(t) = [u(1, t) − η(1, t); v(0, t) − φ(0, t)], ϕ(t) = [P(1, t) W(1, t); R(0, t) Z(0, t)], θ = [q^T c^T]^T (20.23)

and consider the adaptive law

θ̂̇(t) = 0 for t < tF,
θ̂̇(t) = proj_θ̄{−Γ(R_IL(t)θ̂(t) + Q_IL(t)), θ̂(t)} for t ≥ tF (20.24)

where

θ̂(t) = [q̂^T(t) ĉ^T(t)]^T, θ̄ = [q̄^T c̄^T]^T, (20.25)

with q̄ and c̄ given from Assumption 20.1, while R_IL and Q_IL are generated from

Ṙ_IL(t) = 0, Q̇_IL(t) = 0 for t < tF,
Ṙ_IL(t) = −γ R_IL(t) + ϕ^T(t)ϕ(t)/(1 + |ϕ(t)|²),
Q̇_IL(t) = −γ Q_IL(t) − ϕ^T(t)h(t)/(1 + |ϕ(t)|²) for t ≥ tF (20.26)

with initial conditions

R_IL(0) = 0_{2nm}, Q_IL(0) = 0_{2nm×1}, (20.27)



and where the scalar γ > 0 and 2nm × 2nm symmetric gain matrix Γ > 0 are tuning
parameters.
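In discrete time, the law above amounts to Euler-integrating the two filters R_IL and Q_IL and taking a projected gradient step; since Q_IL = −R_IL θ, the step −Γ(R_IL θ̂ + Q_IL) is a descent direction for the filtered prediction error. The following toy sketch (scalar measurement, invented regressor and gains, element-wise projection as in Appendix A; not the book's implementation) shows the estimate converging under persistent excitation:

```python
import numpy as np

def proj(tau, omega, bound):
    """Element-wise projection (A.2): freeze components pushing past +/- bound."""
    tau = tau.copy()
    tau[(omega <= -bound) & (tau <= 0)] = 0.0
    tau[(omega >= bound) & (tau >= 0)] = 0.0
    return tau

theta = np.array([0.5, -0.3])      # "true" parameters (toy data)
theta_hat = np.zeros(2)            # estimate
R = np.zeros((2, 2))               # filter R_IL
Q = np.zeros(2)                    # filter Q_IL
gamma, Gamma, bound, dt = 0.1, 1.0, 1.0, 0.01

for k in range(int(100 / dt)):
    t = k * dt
    phi = np.array([np.sin(t), np.cos(t)])  # persistently exciting regressor
    h = phi @ theta                          # static relation h = phi theta
    ns = 1.0 + phi @ phi                     # normalization 1 + |phi|^2
    R += dt * (-gamma * R + np.outer(phi, phi) / ns)  # forgetting factor gamma
    Q += dt * (-gamma * Q - phi * h / ns)
    theta_hat += dt * proj(-Gamma * (R @ theta_hat + Q), theta_hat, bound)
```

With this persistently exciting regressor, theta_hat converges to theta, illustrating the exponential convergence claim (20.32).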
Moreover, adaptive state estimates can be generated by substituting the parameters
in (20.16) with their respective estimates

û(x, t) = η(x, t) + P(x, t)q̂(t) + W (x, t)ĉ(t) (20.28a)


v̂(x, t) = φ(x, t) + R(x, t)q̂(t) + Z (x, t)ĉ(t) (20.28b)

Theorem 20.1 Consider system (20.1) with filters (20.10), (20.12) and (20.14) and
output injection gains given by (19.39), where (Mα, Mβ) is given as the solution to
the PDEs (19.41)–(19.43) with C1 = 0. The adaptive law (20.24) guarantees that

|q̂|∞ ≤ q̄, |ĉ|∞ ≤ c̄ (20.29a)
θ̂̇, ζ, |R_IL^{1/2} θ̃| ∈ L2 ∩ L∞ (20.29b)
lim_{t→∞} |θ̂̇(t)| = 0 (20.29c)

for all i = 1, 2, . . . , n, j = 1, 2, . . . , m, and where θ̃ = θ − θ̂, and

ζ(t) = |ε̂(t)| / (1 + |ϕ(t)|²), (20.30)

with

ε̂(t) = h(t) − ϕ(t)θ̂(t). (20.31)

Moreover, if ϕ and ϕ̇ are bounded and ϕ^T is PE, then

θ̂ → θ (20.32)

exponentially fast. Furthermore, the prediction errors

ê(x, t) = u(x, t) − û(x, t), ε̂(x, t) = v(x, t) − v̂(x, t) (20.33)

satisfy the bounds

||ê(t)|| ≤ ||P(t)|| |q̃(t)| + ||W(t)|| |c̃(t)| + ||e(t)|| (20.34a)
||ε̂(t)|| ≤ ||R(t)|| |q̃(t)| + ||Z(t)|| |c̃(t)| + ||ε(t)|| (20.34b)

with ||e|| = ||ε|| = 0 for all t ≥ tF.



Proof Using the fact that

h(t) = ϕ(t)θ (20.35)

for t ≥ tF, which follows from (20.16) and Lemma 20.1, we note from (20.26) that
R_IL and Q_IL are bounded for all t ≥ 0. Additionally, R_IL is symmetric and positive
semidefinite. Solving for Q_IL(t) and R_IL(t), we have

Q_IL(t) = −(∫₀ᵗ e^{−γ(t−τ)} ϕ^T(τ)ϕ(τ)/(1 + |ϕ(τ)|²) dτ) θ = −R_IL(t)θ (20.36)

which means from (20.24) that

θ̂̇(t) = 0 for t < tF,
θ̂̇(t) = proj_θ̄{Γ R_IL(t)θ̃(t), θ̂(t)} for t ≥ tF (20.37)

proving that

θ̂̇ ∈ L∞ (20.38)

since R_IL is bounded by design, and θ̃ is bounded by projection. Forming

V1(t) = ½ θ̃^T(t)Γ^{−1}θ̃(t) (20.39)

we find, using the update law (20.37) and Lemma A.1 in Appendix A,

V̇1(t) ≤ 0 for t < tF, V̇1(t) ≤ −θ̃^T(t)R_IL(t)θ̃(t) for t ≥ tF (20.40)

proving that V1 is bounded and non-increasing. It thus has a limit as t → ∞. Inte-
grating (20.40) from zero to ∞, and noting that R_IL(t) = 0 for 0 ≤ t < tF gives

|R_IL^{1/2} θ̃| ∈ L2, (20.41)

which also immediately, from (20.37), gives

θ̂̇ ∈ L2. (20.42)

Since θ̂̇, Ṙ_IL ∈ L∞, it follows from (20.37) that θ̂̈ ∈ L∞, from which Lemma B.1 in
Appendix B gives (20.29c), and

lim_{t→∞} |R_IL(t)θ̃(t)| = 0. (20.43)

Finally, we have that d/dt(θ̃^T(t)R_IL(t)θ̃(t)) is zero for t < tF, while for t ≥ tF, we
have

d/dt(θ̃^T(t)R_IL(t)θ̃(t)) = θ̃^T(t)Ṙ_IL(t)θ̃(t) + 2θ̃^T(t)R_IL(t)θ̃̇(t)
≤ −γ θ̃^T(t)R_IL(t)θ̃(t) + θ̃^T(t)ϕ^T(t)ϕ(t)θ̃(t)/(1 + |ϕ(t)|²) − 2θ̃^T(t)R_IL(t)Γ R_IL(t)θ̃(t) (20.44)

where we used Lemma A.1 in Appendix A. This gives, using ε̂(t) = ϕ(t)θ̃(t), that

∫₀ᵗ ε̂^T(τ)ε̂(τ)/(1 + |ϕ(τ)|²) dτ ≤ θ̃^T(t)R_IL(t)θ̃(t) + γ ∫₀ᵗ θ̃^T(τ)R_IL(τ)θ̃(τ) dτ
+ 2 ∫₀ᵗ θ̃^T(τ)R_IL(τ)Γ R_IL(τ)θ̃(τ) dτ. (20.45)

Using (20.41) and (20.43) gives

ζ ∈ L2 . (20.46)

Moreover, we have

ε̂^T(t)ε̂(t)/(1 + |ϕ(t)|²) = θ̃^T(t)ϕ^T(t)ϕ(t)θ̃(t)/(1 + |ϕ(t)|²) ≤ |θ̃(t)|² (20.47)

which proves

ζ ∈ L∞. (20.48)

The inequalities (20.34) follow from noting that

ê(x, t) = e(x, t) + P(x, t)q̃(t) + W(x, t)c̃(t) (20.49a)
ε̂(x, t) = ε(x, t) + R(x, t)q̃(t) + Z(x, t)c̃(t) (20.49b)

with e = ε ≡ 0 for t ≥ tF. □

20.2.3 Output-Feedback Control Using Sensing at Both


Boundaries

We will in this section derive an adaptive control law that uses the parameter and
state estimates generated from the adaptive law of Theorem 20.1 to stabilize system
(20.1). We start by stating the main results. Consider the following time-varying
PDEs defined over T1, with T1 defined in (1.1b):

Λ− K̂u_x(x, ξ, t) − K̂u_ξ(x, ξ, t)Λ+ = K̂u(x, ξ, t)Σ++ + K̂v(x, ξ, t)Σ−+ (20.50a)
Λ− K̂v_x(x, ξ, t) + K̂v_ξ(x, ξ, t)Λ− = K̂u(x, ξ, t)Σ+− + K̂v(x, ξ, t)Σ−− (20.50b)
Λ− K̂u(x, x, t) + K̂u(x, x, t)Λ+ = −Σ−+ (20.50c)
Λ− K̂v(x, x, t) − K̂v(x, x, t)Λ− = −Σ−− (20.50d)
K̂v(x, 0, t)Λ− − K̂u(x, 0, t)Λ+ Q̂0(t) = G(x) (20.50e)

where the matrix G is strictly lower triangular, as defined in (19.4). As with the
kernel equations (19.3), the Eqs. (20.50) are also under-determined, and to ensure
well-posedness, we add the boundary condition

K̂v_ij(1, ξ) = k̂v_ij(ξ), 1 ≤ j < i ≤ m (20.51)

for some arbitrary functions k̂v_ij(ξ), 1 ≤ j < i ≤ m.
From Theorem D.6 in Appendix D.6, the PDE (20.50)–(20.51) has a unique
solution for any bounded Q̂0. Moreover, since the coefficients are bounded uniformly
in time, the solution is bounded in the sense of

||K̂u(t)||∞ ≤ K̄, ||K̂v(t)||∞ ≤ K̄, ∀t ≥ 0, (20.52)

for some nonnegative constant K̄. Moreover, if |Q̂̇0| ∈ L2 ∩ L∞, then

||K̂u_t||, ||K̂v_t|| ∈ L2 ∩ L∞. (20.53)

Theorem 20.2 Consider system (20.1) and the state and boundary parameter esti-
mates generated from Theorem 20.1. Let the control law be taken as

U(t) = −Ĉ1(t)y1(t) + ∫₀¹ K̂u(1, ξ, t)û(ξ, t)dξ + ∫₀¹ K̂v(1, ξ, t)v̂(ξ, t)dξ (20.54)

where (K̂u, K̂v) is the solution to the PDE consisting of (20.50) and (20.51). Then,

||u||, ||v||, ||η||, ||φ||, ||P||, ||R||, ||W||, ||Z||, ||û||, ||v̂|| ∈ L2 ∩ L∞. (20.55)

The proof of this theorem is given in Sect. 20.2.6.
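Numerically, evaluating the control law (20.54) at each sample time is just two spatial quadratures plus a boundary term. A hedged sketch using trapezoidal integration on a uniform grid (the array shapes and the constant test data below are my own choices for illustration, not from the book):

```python
import numpy as np

def trapezoid(f, x):
    """Trapezoidal rule along axis 0 for samples f on the uniform grid x."""
    dx = x[1] - x[0]
    return dx * (0.5 * f[0] + f[1:-1].sum(axis=0) + 0.5 * f[-1])

def control_law(C1_hat, y1, Ku, Kv, u_hat, v_hat, x):
    """U(t) = -C1_hat y1 + int_0^1 Ku(1,xi,t) u_hat dxi + int_0^1 Kv(1,xi,t) v_hat dxi.

    Ku: (N, m, n) kernel samples, Kv: (N, m, m), u_hat: (N, n), v_hat: (N, m).
    """
    fu = np.einsum("kij,kj->ki", Ku, u_hat)  # pointwise kernel-state products
    fv = np.einsum("kij,kj->ki", Kv, v_hat)
    return -C1_hat @ y1 + trapezoid(fu, x) + trapezoid(fv, x)

# sanity check: all-ones kernels and states give contribution n + m per channel
n, m, N = 2, 2, 101
x = np.linspace(0.0, 1.0, N)
U = control_law(np.zeros((m, n)), np.ones(n),
                np.ones((N, m, n)), np.ones((N, m, m)),
                np.ones((N, n)), np.ones((N, m)), x)
```

In a full simulation this routine would be called at every time step with the current kernel solution and the adaptive state estimates.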



20.2.4 Backstepping of Estimator Dynamics

We will need the dynamics of the estimates û and v̂ generated using (20.28). By
straightforward calculations, we find the dynamics to be

ût(x, t) + Λ+ ûx(x, t) = Σ++ û(x, t) + Σ+− v̂(x, t) + P+(x)ε̂(0, t) + P(x, t)q̂̇(t) + W(x, t)ĉ̇(t) (20.56a)
v̂t(x, t) − Λ− v̂x(x, t) = Σ−+ û(x, t) + Σ−− v̂(x, t) + P−(x)ε̂(0, t) + R(x, t)q̂̇(t) + Z(x, t)ĉ̇(t) (20.56b)
û(0, t) = Q̂0(t)v(0, t) (20.56c)
v̂(1, t) = Ĉ1(t)u(1, t) + U(t) (20.56d)
û(x, 0) = û0(x) (20.56e)
v̂(x, 0) = v̂0(x). (20.56f)

We will use an invertible backstepping transformation to bring system (20.56)
into an equivalent system for which the stability analysis is easier. Consider the
backstepping transformation

α(x, t) = û(x, t) (20.57a)
β(x, t) = v̂(x, t) − ∫₀ˣ K̂u(x, ξ, t)û(ξ, t)dξ − ∫₀ˣ K̂v(x, ξ, t)v̂(ξ, t)dξ = T[û, v̂](x, t) (20.57b)

where (K̂u, K̂v) is the online solution to the PDE (20.50). The inverse transformation
has the form

û(x, t) = α(x, t), v̂(x, t) = T⁻¹[α, β](x, t) (20.58)

where T⁻¹ is an integral operator in the same form as (20.57b).

Lemma 20.2 The backstepping transformation (20.57) maps between system (20.56)
in closed loop with the control law (20.54) and the following target system

αt(x, t) + Λ+ αx(x, t) = Σ++ α(x, t) + Σ+− β(x, t) + ∫₀ˣ Ĉ+(x, ξ, t)α(ξ, t)dξ
+ ∫₀ˣ Ĉ−(x, ξ, t)β(ξ, t)dξ + P+(x)ε̂(0, t) + P(x, t)q̂̇(t) + W(x, t)ĉ̇(t) (20.59a)
βt(x, t) − Λ− βx(x, t) = G(x)β(0, t) + T[P+, P−](x, t)ε̂(0, t)
− K̂u(x, 0, t)Λ+ Q̂0(t)ε̂(0, t) + T[P, R](x, t)q̂̇(t) + T[W, Z](x, t)ĉ̇(t)
− ∫₀ˣ K̂u_t(x, ξ, t)α(ξ, t)dξ − ∫₀ˣ K̂v_t(x, ξ, t)T⁻¹[α, β](ξ, t)dξ (20.59b)
α(0, t) = Q̂0(t)(β(0, t) + ε̂(0, t)) (20.59c)
β(1, t) = 0 (20.59d)
α(x, 0) = α0(x) (20.59e)
β(x, 0) = β0(x) (20.59f)

for α0, β0 ∈ B([0, 1]), where G is the strictly lower triangular matrix given by
(20.50e) and Ĉ+ and Ĉ− are given by

Ĉ+(x, ξ, t) = Σ+− K̂u(x, ξ, t) + ∫_ξˣ Ĉ−(x, s, t)K̂u(s, ξ, t)ds (20.60a)
Ĉ−(x, ξ, t) = Σ+− K̂v(x, ξ, t) + ∫_ξˣ Ĉ−(x, s, t)K̂v(s, ξ, t)ds. (20.60b)

Proof Differentiating (20.57b) with respect to time and space, respectively, inserting
the dynamics (20.56a) and (20.56b), integrating by parts and inserting the result into
(20.56b), we find

0 = βt(x, t) − Λ− βx(x, t)
+ ∫₀ˣ [K̂u_ξ(x, ξ, t)Λ+ + K̂u(x, ξ, t)Σ++ + K̂v(x, ξ, t)Σ−+ − Λ− K̂u_x(x, ξ, t)] û(ξ, t)dξ
+ ∫₀ˣ [K̂u(x, ξ, t)Σ+− − K̂v_ξ(x, ξ, t)Λ− − Λ− K̂v_x(x, ξ, t) + K̂v(x, ξ, t)Σ−−] v̂(ξ, t)dξ
− [P−(x) − ∫₀ˣ K̂u(x, ξ, t)P+(ξ)dξ − ∫₀ˣ K̂v(x, ξ, t)P−(ξ)dξ] ε̂(0, t)
+ [K̂u(x, 0, t)Λ+ Q̂0(t) − K̂v(x, 0, t)Λ−] v̂(0, t)
+ K̂u(x, 0, t)Λ+ Q̂0(t)ε̂(0, t)
− [Λ− K̂u(x, x, t) + K̂u(x, x, t)Λ+ + Σ−+] û(x, t)
+ [K̂v(x, x, t)Λ− − Λ− K̂v(x, x, t) − Σ−−] v̂(x, t)
− R(x, t)q̂̇(t) − Z(x, t)ĉ̇(t)
+ ∫₀ˣ K̂u(x, ξ, t)P(ξ, t)q̂̇(t)dξ + ∫₀ˣ K̂u(x, ξ, t)W(ξ, t)ĉ̇(t)dξ
+ ∫₀ˣ K̂v(x, ξ, t)R(ξ, t)q̂̇(t)dξ + ∫₀ˣ K̂v(x, ξ, t)Z(ξ, t)ĉ̇(t)dξ
+ ∫₀ˣ K̂u_t(x, ξ, t)û(ξ, t)dξ + ∫₀ˣ K̂v_t(x, ξ, t)v̂(ξ, t)dξ. (20.61)

Using (20.50a)–(20.50d) and (20.58) we obtain (20.59b). Inserting (20.57) into
(20.59a), changing the order of integration in the double integrals, and using (20.60)
gives (20.56a). Inserting (20.57) into (20.56c) immediately gives the boundary con-
dition (20.59c). Inserting (20.57b) into (20.56d) and using the control law (20.54)
results in (20.59d). □

20.2.5 Backstepping of Filters

To ease the Lyapunov proof in the next section, we also perform backstepping trans-
formations of the parameter filters (20.12) and (20.14). Consider the target systems

At(x, t) + Λ+ Ax(x, t) = Σ++ A(x, t) + ∫₀ˣ D+(x, ξ)A(ξ, t)dξ (20.62a)
Bt(x, t) − Λ− Bx(x, t) = Σ−+ A(x, t) + ∫₀ˣ D−(x, ξ)A(ξ, t)dξ (20.62b)
A(0, t) = (β(0, t) + ε̂(0, t))^T ⊗ In (20.62c)
B(1, t) = −∫₀¹ H(ξ)B(ξ, t)dξ (20.62d)
A(x, 0) = A0(x) (20.62e)
B(x, 0) = B0(x) (20.62f)

and

Ψt(x, t) + Λ+ Ψx(x, t) = Σ++ Ψ(x, t) + ∫₀ˣ D+(x, ξ)Ψ(ξ, t)dξ (20.63a)
Ωt(x, t) − Λ− Ωx(x, t) = Σ−+ Ψ(x, t) + ∫₀ˣ D−(x, ξ)Ψ(ξ, t)dξ (20.63b)
Ψ(0, t) = 0 (20.63c)
Ω(1, t) = −∫₀¹ H(ξ)Ω(ξ, t)dξ + (α(1, t) + ê(1, t))^T ⊗ Im (20.63d)
Ψ(x, 0) = Ψ0(x) (20.63e)
Ω(x, 0) = Ω0(x) (20.63f)

for

A(x, t) = [A1(x, t) A2(x, t) . . . Amn(x, t)] = {aij(x, t)}1≤i≤n, 1≤j≤mn (20.64a)
B(x, t) = [B1(x, t) B2(x, t) . . . Bmn(x, t)] = {bij(x, t)}1≤i≤m, 1≤j≤mn (20.64b)
Ψ(x, t) = [Ψ1(x, t) Ψ2(x, t) . . . Ψmn(x, t)] = {ψij(x, t)}1≤i≤n, 1≤j≤mn (20.64c)
Ω(x, t) = [Ω1(x, t) Ω2(x, t) . . . Ωmn(x, t)] = {ωij(x, t)}1≤i≤m, 1≤j≤mn. (20.64d)

Lemma 20.3 Consider systems (20.12) and (20.14). The following backstepping
transformations

P(x, t) = A(x, t) + ∫₀ˣ Mα(x, ξ)B(ξ, t)dξ (20.65a)
R(x, t) = B(x, t) + ∫₀ˣ Mβ(x, ξ)B(ξ, t)dξ (20.65b)

and

W(x, t) = Ψ(x, t) + ∫₀ˣ Mα(x, ξ)Ω(ξ, t)dξ (20.66a)
Z(x, t) = Ω(x, t) + ∫₀ˣ Mβ(x, ξ)Ω(ξ, t)dξ (20.66b)

where (Mα, Mβ) satisfies equations (19.41)–(19.43) with C1 = 0, map (20.62) and
(20.63) into (20.12) and (20.14), respectively.

Proof Column-wise, the proof is the same as the proof of Lemma 20.1, and is there-
fore skipped. □

We note that the subsystem in Ψ consisting of (20.63a) and (20.63c) is autonomous,
and will be zero in finite time λ1⁻¹, after which (20.63) is reduced to

Ωt(x, t) − Λ− Ωx(x, t) = 0 (20.67a)
Ω(1, t) = −∫₀¹ H(ξ)Ω(ξ, t)dξ + (α(1, t) + ê(1, t))^T ⊗ Im (20.67b)
Ω(x, λ1⁻¹) = Ω_{λ1⁻¹}(x) (20.67c)

20.2.6 Proof of Theorem 20.2

Due to the invertibility of the transformations, we will for i = 1 . . . nm have

||Pi(t)|| ≤ H1||Ai(t)|| + H2||Bi(t)||, ||Ri(t)|| ≤ H3||Bi(t)|| (20.68a)
||Ai(t)|| ≤ H4||Pi(t)|| + H5||Ri(t)||, ||Bi(t)|| ≤ H6||Ri(t)|| (20.68b)

for some positive constants Hj, j = 1 . . . 6, while, for t ≥ λ1⁻¹, we have

||Wi(t)|| ≤ H2||Ωi(t)|| (20.69a)
||Zi(t)|| ≤ H3||Ωi(t)|| (20.69b)
||Ωi(t)|| ≤ H6||Zi(t)||. (20.69c)

Lastly, for the operator T defined in (20.57b), we have

||T[u, v](t)|| ≤ G1||u(t)|| + G2||v(t)|| (20.70a)
||T⁻¹[u, v](t)|| ≤ G3||u(t)|| + G4||v(t)|| (20.70b)

for some positive constants G1 . . . G4. We are finally ready to prove Theorem 20.2.
Consider the functionals

V2(t) = ∫₀¹ e^{−δx} α^T(x, t)α(x, t)dx (20.71a)
V3(t) = ∫₀¹ e^{kx} β^T(x, t)Dβ(x, t)dx (20.71b)
V4(t) = Σ_{i=1}^{nm} ∫₀¹ e^{−δx} Ai^T(x, t)Ai(x, t)dx (20.71c)
V5(t) = Σ_{i=1}^{nm} ∫₀¹ e^{kx} Bi^T(x, t)Π Bi(x, t)dx (20.71d)
V6(t) = Σ_{i=1}^{nm} ∫₀¹ (1 + x)Ωi^T(x, t)Π Ωi(x, t)dx (20.71e)

for some positive constants δ, k and positive definite matrices D and Π to be decided.
The following result is proved in Appendix E.12.

Lemma 20.4 It is possible to choose D and Π so that there exist positive constants
h1, h2, . . . , h9 and nonnegative, integrable functions l1, l2, . . . , l8 such that

V̇2(t) ≤ −e^{−δ}λ1|α(1, t)|² + h1|β(0, t)|² − [δλ1 − h2]V2(t) + 2d^{−1}V3(t)
+ h3|ε̂(0, t)|² + l1(t)V4(t) + l2(t)V5(t) + l3(t)V6(t) (20.72a)
V̇3(t) ≤ −h4|β(0, t)|² − (kλ1 − 7)V3(t) + e^k d̄ h5|ε̂(0, t)|² + l4(t)V2(t)
+ l5(t)V3(t) + l6(t)V4(t) + l7(t)V5(t) + l8(t)V6(t) (20.72b)
V̇4(t) ≤ −λ1 e^{−δ}|A(1, t)|² + h7|β(0, t)|² + h7|ε̂(0, t)|² − [δλ1 − h6]V4(t) (20.72c)
V̇5(t) ≤ −h8 e^{δ+k} V5(t) − μm π|B(0, t)|² + 2π̄ e^k V4(t) (20.72d)
V̇6(t) ≤ −h9 e^k V6(t) + 8n π̄|α(1, t)|² + 8n π̄|ê(1, t)|² − π|Ω(0, t)|², (20.72e)

where π and π̄ are lower and upper bounds on the elements of Π, respectively, and
d and d̄ are lower and upper bounds on the elements of D, respectively.
Consider now the Lyapunov function

V9(t) = Σ_{i=2}^{6} ai Vi(t) (20.73)

for some positive constants ai, i = 2, 3, . . . , 6. Choosing

a2 = d, a3 = h4⁻¹(dh1 + h7), a4 = 1 (20.74a)
a5 = π̄⁻¹ e^{−δ−k}, a6 = (8nπ̄)⁻¹ d e^{−δ} λ1 (20.74b)

and then choosing

δ > max{1, h2/λ1, (h6 + 2)/λ1} (20.75a)
k > max{1, λ1⁻¹(2h4/(dh1 + h7) + 7)}, (20.75b)

we find by Lemma 20.4

V̇9(t) ≤ −cV9(t) + l9(t)V9(t) + h11|ε̂(t)|² − h12(2|A(1, t)|² + |B(0, t)|² + |Ω(0, t)|²) (20.76)

where

h11 = max{dh3 + h4⁻¹(dh1 + h7) e^k d̄ h5 + h7, d e^{−δ} λ1} (20.77a)
h12 = min{½ λ1 e^{−δ}, (π/π̄) e^{−δ−k} μm, (8nπ̄)⁻¹ d e^{−δ} λ1 π} (20.77b)

and c is a positive constant, l9 is a bounded, integrable function, and we have defined
ε̂ = h − ϕθ̂, and used the fact that |ε̂(t)|² = |ê(1, t)|² + |ε̂(0, t)|². Now, rewrite
|ε̂(t)|² as follows

|ε̂(t)|² = |ε̂(t)|²(1 + |ϕ(t)|²)/(1 + |ϕ(t)|²)
= ζ²(t)(1 + |P(1, t)|² + |W(1, t)|² + |R(0, t)|² + |Z(0, t)|²) (20.78)

where we have used the definition of ζ in (20.30). We note from (20.65) and (20.66)
that |P(1, t)|² ≤ 2|A(1, t)|² + 2M̄²||B(t)||², |W(1, t)|² ≤ M̄²||Ω(t)||², |R(0, t)|²
= |B(0, t)|², |Z(0, t)|² = |Ω(0, t)|², where M̄ bounds the kernel Mα, and thus

|ε̂(t)|² ≤ ζ²(t)[1 + 2|A(1, t)|² + 2M̄²||B(t)||² + M̄²||Ω(t)||² + |B(0, t)|² + |Ω(0, t)|²]. (20.79)

Inserting (20.79) into (20.76), we obtain

V̇9(t) ≤ −cV9(t) + l10(t)V9(t) + l11(t)
− [h12 − h11 ζ²(t)](2|A(1, t)|² + |B(0, t)|² + |Ω(0, t)|²) (20.80)

for some bounded, integrable functions l10, l11 (property (20.29b)). Moreover, we
have, for t ≥ tF

ζ²(t) = |ε̂(t)|²/(1 + |ϕ(t)|²) = |ϕ(t)θ̃(t)|²/(1 + |ϕ(t)|²) ≤ |θ̃(t)|² ≤ γ̄ V1(t) (20.81)

where γ̄ is the largest eigenvalue of Γ, and V1 is defined in (20.39).
It then follows from Lemma B.4 in Appendix B that V9 ∈ L1 ∩ L∞ , and hence

||α||, ||β||, ||A||, ||B||, ||Ω|| ∈ L2 ∩ L∞ . (20.82)

This in turn, implies that |A(1, t)|2 , |B(0, t)|2 and |Ω(0, t)|2 must be bounded almost
everywhere, meaning that

ζ 2 |A(1, ·)|2 , ζ 2 |B(0, ·)|2 , ζ 2 |Ω(0, ·)|2 ∈ L1 (20.83)

since ζ 2 ∈ L1 . Inequality (20.80) then reduces to

V̇9 (t) ≤ −cV9 (t) + l10 (t)V9 (t) + l12 (t) (20.84)

for some integrable function l12 (t). Lemma B.3 in Appendix B then gives

V9 → 0 (20.85)

and hence

||α||, ||β||, ||A||, ||B||, ||Ω|| → 0. (20.86)

Due to the invertibility of the transformations, we then have (Theorem 1.3)

||û||, ||v̂||, ||P||, ||R||, ||W ||, ||Z || ∈ L2 ∩ L∞ (20.87)

and

||û||, ||v̂||, ||P||, ||R||, ||W ||, ||Z || → 0. (20.88)

From (20.34) and (20.33a), we have

||u||, ||v|| ∈ L2 ∩ L∞ , ||u||, ||v|| → 0, (20.89)

while from (20.28), we have

||η||, ||φ|| ∈ L2 ∩ L∞ , ||η||, ||φ|| → 0. (20.90)

From (20.28), we then have

||û||, ||v̂|| ∈ L2 ∩ L∞ , ||û||, ||v̂|| → 0. (20.91)

20.3 Simulations

20.3.1 Parameter Estimation

System (20.1) and the adaptive observer of Theorem 20.1 are implemented for
n = m = 2, using the system parameters

Λ+ = diag{1, 3}, Λ− = diag{1.5, 1} (20.92a)
Σ++ = (1/5)[0 1; 2 0], Σ+− = (1/10)[0 1; 3 0] (20.92b)
Σ−+ = (1/10)[0 1; 2 4], Σ−− = (1/10)[0 3; 1 0] (20.92c)
Q0 = (1/10)[2 5; 2 −10], C1 = (1/4)[−2 1; −1 −2] (20.92d)

Fig. 20.1 Actual (solid black) and estimated (dashed red) parameters Q̂ 0 and Ĉ1

and initial conditions


u0(x) = [1 e^x]^T, v0(x) = [sin(πx) sin(πx)]^T. (20.93)

System (20.1) with parameters (20.92) constitutes a stable system. The observer kernel
equation (19.41) is solved using the method described in Appendix F.2, with the
boundary condition (19.43) set to

m^β_12 ≡ σ−−_12/(μ2 − μ1), (20.94)

so that the two boundary conditions of m^β_12 match at x = ξ = 0.
To excite the system, the actuation signals are set to

U1 (t) = sin(t) U2 (t) = 2 sin(πt). (20.95)

The estimated system parameters are seen in Fig. 20.1 to converge to their true
values after approximately 10 s of simulation.

Fig. 20.2 Left: System state norm. Right: Filter norms

20.3.2 Output-Feedback Adaptive Control

System (20.1) and the controller of Theorem 20.2 are now implemented for
n = m = 2, using the system parameters

Λ+ = diag{1, 3}, Λ− = diag{1.5, 1} (20.96a)
Σ++ = [0 1; 2 0], Σ+− = (1/2)[0 1; 3 0] (20.96b)
Σ−+ = (1/2)[0 1; 2 4], Σ−− = (1/4)[0 3; 1 0] (20.96c)
Q0 = (1/2)[2 −1; 4 −2], C1 = (1/2)[−2 1; −1 −2] (20.96d)

and initial conditions


u0(x) = [1 e^x]^T, v0(x) = [sin(πx) sin(πx)]^T. (20.97)

System (20.1) with parameters (20.96) is open-loop unstable. The controller kernel
PDE (20.50) is solved using the method described in Appendix F.2, with the boundary
condition (20.51) set to

k̂v_21 ≡ σ−−_21/(μ1 − μ2), (20.98)

so that the two boundary conditions of k̂v_21 match at x = ξ = 1.
It is seen from the norms shown in Fig. 20.2 that the controller successfully stabi-
lizes the system, with the state and filter norms converging asymptotically to zero as
predicted by theory. The control signals are also seen in Fig. 20.3 to converge to zero
and the estimated parameters are seen in Fig. 20.4 to converge to their true values,
although this was not proved.

Fig. 20.3 Left: Actuation signal U1 . Right: Actuation signal U2


Fig. 20.4 Actual (solid black) and estimated (dashed red) parameters Q̂ 0 and Ĉ1

20.4 Notes

It is evident that the adaptive observer of Theorem 20.1 and the controller of Theorem
20.2 scale poorly. The number of required filters is (1 + 2nm)(n + m), so that for
the 2 + 2 case in the simulations in Sect. 20.3, a total of 36 filters is required. For the
4 + 4 case, the number of required filters is 264. The controller of Theorem 20.2 also
requires the kernel equations consisting of (20.50) and (20.51) to be solved at every
time step, which quickly scales into a non-trivial task requiring much computational
power.
Appendix A
Projection Operators

A projection operator is frequently used in this book. It is stated as

proj_{a,b}(τ, ω) = 0 if ω ≤ a and τ ≤ 0,
proj_{a,b}(τ, ω) = 0 if ω ≥ b and τ ≥ 0, (A.1)
proj_{a,b}(τ, ω) = τ otherwise.

In the case of vectors τ, ω and a, b, the operator acts element-wise. Often, the
shorthand notation for a one-parameter projection operator

proj_a(τ, ω) = proj_{−a,a}(τ, ω) (A.2)

is used.
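A direct element-wise implementation of (A.1)–(A.2) can read as follows (a minimal sketch; vector-valued bounds broadcast in the obvious way):

```python
import numpy as np

def proj(tau, omega, a, b):
    """Projection operator (A.1): zero the components of tau that would
    push omega below the lower bound a or above the upper bound b."""
    tau = np.array(tau, dtype=float, copy=True)
    omega = np.asarray(omega, dtype=float)
    tau[(omega <= a) & (tau <= 0)] = 0.0
    tau[(omega >= b) & (tau >= 0)] = 0.0
    return tau

def proj1(tau, omega, a):
    """One-parameter shorthand (A.2) with symmetric bounds [-a, a]."""
    return proj(tau, omega, -a, a)
```

Used inside an update law θ̂̇ = proj_{a,b}(τ, θ̂), an estimate starting inside [a, b] never leaves it, which is property (A.5a) below.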

Lemma A.1 Consider the projection operator (A.1). Assume τ is continuously dif-
ferentiable. Let

θ̂̇(t) = proj_{a,b}(τ(t), θ̂(t)), (A.3)

for t > 0, where the initial condition θ̂(0) = θ̂0 satisfies

a ≤ θ̂0 ≤ b (A.4)

and where the inequality is taken component-wise in the case of vector-valued
a, b, θ̂0. Then, for all t > 0, we have

a ≤ θ̂(t) ≤ b (A.5a)
−θ̃^T(t)proj_{a,b}(τ(t), θ̂(t)) ≤ −θ̃^T(t)τ(t) (A.5b)


where

θ̃(t) = θ − θ̂(t) (A.6)

and where the inequality (A.5a) is taken component-wise in the case of vector-valued
a, b, θ̂.

Proof For property (A.5b), we consider the three cases independently and
component-wise. In the first two cases, the projection operator is active, and the
left hand side of (A.5b) is zero. Moreover, if ωi = ai and τi ≤ 0, then

−θ̃i (t)τi (t) = −(θi − θ̂i (t))τi (t) = −(θi − ai )τi (t) ≥ 0, (A.7)

since θi ≥ ai , and τi ≤ 0. Similarly, if ωi = bi and τi ≥ 0, then

−θ̃i (t)τi (t) = −(θi − θ̂i (t))τi (t) = −(θi − bi )τi (t) ≥ 0, (A.8)

since θi ≤ bi , and τi ≥ 0. Hence, the inequality holds for the first two cases. In the
last case, the projection is inactive, and inequality (A.5b) holds trivially with equality.
This proves (A.5b). 
Appendix B
Lemmas for Proving Stability and Convergence

Lemma B.1 (Barbalat's Lemma) Consider the function φ : R+ → R. If φ is uni-
formly continuous and lim_{t→∞} ∫₀ᵗ φ(τ)dτ exists and is finite, then

lim_{t→∞} φ(t) = 0. (B.1)

Proof See e.g. Krstić et al. (1995), Lemma A.6. □

Corollary B.1 (Corollary to Barbalat's Lemma) Consider the function φ : R+ →
R. If φ, φ̇ ∈ L∞, and φ ∈ Lp for some p ∈ [1, ∞), then

lim_{t→∞} φ(t) = 0. (B.2)

Proof See e.g. Krstić et al. (1995), Corollary A.7. □

Lemma B.2 (Lemma 2.17 from Tao 2003) Consider a signal g satisfying

ġ(t) = −ag(t) + bh(t) (B.3)

for a signal h ∈ L1 and some constants a > 0, b > 0. Then

g ∈ L∞ (B.4)

and

lim_{t→∞} g(t) = 0. (B.5)

Proof See Tao (2003), Lemma 2.17. □

Lemma B.3 Let v, l1, l2 be real-valued, nonnegative functions defined over R+, and
let c be a positive constant. If l1, l2 ∈ L1, and v satisfies

v̇(t) ≤ −cv(t) + l1(t)v(t) + l2(t) (B.6)

then

v ∈ L1 ∩ L∞, (B.7)

with the following bounds

v(t) ≤ (v(0)e^{−ct} + ||l2||1)e^{||l1||1} (B.8a)
||v||1 ≤ (1/c)(v(0) + ||l2||1)e^{||l1||1}, (B.8b)

and

lim_{t→∞} v(t) = 0. (B.9)
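The explicit bound (B.8a) can be sanity-checked numerically; here with c = 1 and the integrable choices l1(t) = l2(t) = e^{−t} (so ||l1||1 = ||l2||1 = 1), taking (B.6) with equality (my own test case, simple Euler integration):

```python
import numpy as np

c, dt, T = 1.0, 1e-3, 10.0
v = 1.0                               # v(0)
t = 0.0
worst = -np.inf                       # largest observed v(t) - bound(t)
for _ in range(int(T / dt)):
    l1 = l2 = np.exp(-t)              # nonnegative, each with L1 norm 1
    v += dt * (-c * v + l1 * v + l2)  # (B.6) taken with equality
    t += dt
    # (B.8a): (v(0) e^{-ct} + ||l2||_1) e^{||l1||_1}
    bound = (1.0 * np.exp(-c * t) + 1.0) * np.e
    worst = max(worst, v - bound)

# worst stays negative: the trajectory never exceeds the bound (B.8a)
```

The trajectory overshoots v(0) briefly because of the l-terms, but stays well below the bound, and decays toward zero as (B.9) predicts.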

Proof Properties (B.7) and (B.8) were originally stated in Krstić et al. (1995), Lemma
B.6, while (B.9) was stated in Anfinsen and Aamo (2018), Lemma 2.
Let w satisfy ẇ(t) = −cw(t) + l1(t)w(t) + l2(t) with w(0) = v(0), so that
v(t) ≤ w(t) by the comparison principle. We rewrite

ẇ(t) + cw(t) − l1(t)w(t) = l2(t). (B.10)

We proceed by applying the variation of constants formula, multiplying with
exp(ct − ∫₀ᵗ l1(s)ds) to obtain

d/dt [w(t)e^{ct − ∫₀ᵗ l1(s)ds}] = l2(t)e^{ct − ∫₀ᵗ l1(s)ds}. (B.11)

Integration from 0 to t gives

w(t) = w(0)e^{−ct + ∫₀ᵗ l1(s)ds} + ∫₀ᵗ e^{−c(t−τ) + ∫_τᵗ l1(s)ds} l2(τ)dτ, (B.12)

and by the comparison principle, this gives

v(t) ≤ v(0)e^{−ct + ∫₀ᵗ l1(s)ds} + ∫₀ᵗ e^{−c(t−τ) + ∫_τᵗ l1(s)ds} l2(τ)dτ, (B.13)

which can be bounded as

v(t) ≤ [v(0)e^{−ct} + ∫₀ᵗ e^{−c(t−τ)} l2(τ)dτ] e^{||l1||1}. (B.14)

and

v(t) ≤ v(0)e−ct + ||l2 ||1 e||l1 ||1 . (B.15)

which proves that v ∈ L∞ , and gives the bound (B.8a). Integrating (B.14) from 0 to
t, we obtain
t t τ
1
v(τ )dτ ≤ v(0)(1 − e−ct ) + e−c(τ −s) l2 (s)dsdτ e||l1 ||1 . (B.16)
0 c 0 0

Changing the order of integration in the double integral yields


t
1 t  
v(τ )dτ ≤ v(0)(1 − e−ct ) + 1 − e−c(t−τ ) l2 (τ )dτ e||l1 ||1 . (B.17)
0 c 0

which, when t → ∞ can be bounded as (B.8b), and also proves v ∈ L1 .


To prove (B.9), we rewrite (B.6) as

v̇(t) ≤ −cv(t) + f (t) (B.18)

where

f (t) = l1 (t)v(t) + l2 (t) (B.19)

satisfies f ∈ L1 and f (t) ≥ 0, ∀t ≥ 0 since l1 , l2 ∈ L1 , l1 (t), l2 (t) ≥ 0, ∀t ≥ 0 and


v ∈ L∞ . Lemma B.2 can be invoked for (B.18) with equality. The result (B.9) then
follows from the comparison lemma.
An alternative, direct proof of (B.9) goes as follows. For (B.9) to hold, we must
show that for every 1 > 0, there exist T1 > 0 such that

v(t) < 1 (B.20)

for all t > T1 . We will prove that such a T1 exists by constructing it. Since f ∈ L1 ,
there exists T0 > 0 such that

f (s)ds < 0 (B.21)
T0

for any 0 > 0. Solving

ẇ(t) = −cw(t) + f (t), (B.22)

and applying the comparison principle, gives the following bound for v(t)
402 Appendix B: Lemmas for Proving Stability and Convergence

t
v(t) ≤ v(0)e−ct + e−c(t−τ ) f (τ )dτ . (B.23)
0

Splitting the integral at τ = T_0 gives

v(t) ≤ v(0)e^{−ct} + e^{−c(t−T_0)} ∫_0^{T_0} e^{−c(T_0−τ)} f(τ)dτ + ∫_{T_0}^t e^{−c(t−τ)} f(τ)dτ
     ≤ Me^{−ct} + ∫_{T_0}^t f(τ)dτ  (B.24)

for t > T_0, where

M = v(0) + e^{cT_0} ∫_0^{T_0} f(τ)dτ ≤ v(0) + e^{cT_0} ||f||_1  (B.25)

is a finite, positive constant. Using (B.21) with

ε_0 = (1/2)ε_1,  (B.26)

we have

v(t) ≤ Me^{−ct} + ∫_{T_0}^t f(τ)dτ < Me^{−ct} + ε_0 = Me^{−ct} + (1/2)ε_1.  (B.27)

Now, choosing T_1 as

T_1 = max{T_0, (1/c) log(2M/ε_1)}  (B.28)

we obtain

v(t) < (1/2)ε_1 + (1/2)ε_1 = ε_1  (B.29)

for all t > T_1, which proves (B.9). □
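The constructive argument above is easy to check numerically. The following sketch is an illustration only (the coefficients c = 1, l_1(t) = 0.5e^{−2t}, l_2(t) = e^{−t} and the initial value are hypothetical choices, with ||l_1||_1 = 1/4 known in closed form): it integrates (B.6) with equality by forward Euler and verifies both the bound (B.14) and the convergence claim (B.9).

```python
import numpy as np

# Hypothetical data: c > 0 and integrable, nonnegative l1, l2 with known L1 norms.
c = 1.0
l1 = lambda s: 0.5 * np.exp(-2.0 * s)   # ||l1||_1 = 1/4
l2 = lambda s: np.exp(-s)               # ||l2||_1 = 1
norm_l1 = 0.25

dt, T = 1e-3, 20.0
t = np.arange(0.0, T, dt)
v = np.empty_like(t)                    # worst case of (B.6), taken with equality
I = np.empty_like(t)                    # I(t) = int_0^t e^{-c(t-tau)} l2(tau) dtau
v[0], I[0] = 2.0, 0.0

for k in range(len(t) - 1):
    v[k + 1] = v[k] + dt * (-c * v[k] + l1(t[k]) * v[k] + l2(t[k]))
    I[k + 1] = I[k] + dt * (-c * I[k] + l2(t[k]))   # I solves I' = -c I + l2

# Right-hand side of (B.14)
bound = (v[0] * np.exp(-c * t) + I) * np.exp(norm_l1)

assert np.all(v <= bound + 1e-6)   # the comparison bound (B.14) holds
assert v[-1] < 1e-3                # v(t) -> 0, consistent with (B.9)
```

The convolution integral in (B.14) is propagated through its own scalar ODE rather than re-quadratured at every step, which keeps the check linear in the number of time steps.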

Lemma B.4 (Lemma 12 from Anfinsen and Aamo 2017b) Let v_1(t), v_2(t), σ(t), l_1(t), l_2(t), h(t) and f(t) be real-valued, nonnegative functions defined for t ≥ 0. Suppose

l_1, l_2 ∈ L_1  (B.30a)
h ∈ L_∞  (B.30b)
∫_0^t f(s)ds ≤ Ae^{Bt}  (B.30c)
σ(t) ≤ kv_1(t)  (B.30d)
v̇_1(t) ≤ −σ(t)  (B.30e)
v̇_2(t) ≤ −cv_2(t) + l_1(t)v_2(t) + l_2(t) + h(t) − a(1 − bσ(t))f(t)  (B.30f)

for t ≥ 0, where k, A, B, a, b and c are positive constants. Then v_2 ∈ L_∞. Moreover, if h ≡ 0, then v_2 ∈ L_1 ∩ L_∞.

Proof Proceeding as in the proof of Lemma B.3, using the comparison principle and applying the variation of constants formula, we find

v_2(t) ≤ v_2(0)e^{−ct + ∫_0^t l_1(s)ds} + ∫_0^t e^{−c(t−s) + ∫_s^t l_1(τ)dτ} [l_2(s) + h(s) − a(1 − bσ(s))f(s)] ds
      ≤ [v_2(0)e^{−ct} + ∫_0^t e^{−c(t−s)} (l_2(s) + h(s) − a(1 − bσ(s))f(s)) ds] e^{||l_1||_1}  (B.31)

and

v_2(t)e^{−||l_1||_1} ≤ v_2(0)e^{−ct} + ||l_2||_1 + (1/c)||h||_∞ − a∫_0^t e^{−c(t−s)} [1 − bσ(s)] f(s)ds.  (B.32)

Consider also the case where h ≡ 0, and integrate (B.32) from 0 to t, to obtain

e^{−||l_1||_1} ∫_0^t v_2(τ)dτ ≤ (1/c)v_2(0) + ∫_0^t ∫_0^τ e^{−c(τ−s)} [l_2(s) − a(1 − bσ(s))f(s)] ds dτ.  (B.33)

Changing the order of integration in the double integral yields

e^{−||l_1||_1} ∫_0^t v_2(τ)dτ ≤ (1/c)v_2(0) + (1/c)||l_2||_1 − (a/c)∫_0^t (1 − e^{−c(t−s)})(1 − bσ(s))f(s)ds.  (B.34)

For v_2 in (B.32), or lim_{t→∞} ∫_0^t v_2(τ)dτ in (B.34), to be unbounded, the term in the last brackets of (B.32) and (B.34) must be negative on a set whose measure increases unboundedly as t → ∞. Supposing this is the case, there must exist constants T > 0, T_0 > 0 and ρ > 0 so that

∫_t^{t+T_0} σ(τ)dτ ≥ ρ  (B.35)

for t > T. Condition (B.35) is the requirement for persistence of excitation in (B.30e), meaning that v_1 and, from (B.30d), σ converge exponentially to zero. There must therefore exist a time T_1 > 0 after which σ(t) < 1/b for all t > T_1, resulting in the expression in the brackets being positive for all t > T_1, contradicting the initial assumption. Hence v_2 ∈ L_∞, while h ≡ 0 results in v_2 ∈ L_1 ∩ L_∞. □
Appendix C: Minkowski's, Cauchy–Schwarz' and Young's Inequalities

Lemma C.1 (Minkowski's inequality) For two scalar functions f(x), g(x) defined for x ∈ [a, b], the version of Minkowski's inequality used in this book is

√(∫_a^b (f(x) + g(x))^2 dx) ≤ √(∫_a^b f^2(x)dx) + √(∫_a^b g^2(x)dx).  (C.1)

For two vector functions u, v defined for x ∈ [0, 1], we have

||u + v|| ≤ ||u|| + ||v||.  (C.2)

Proof See e.g. Abramowitz and Stegun (1975), Page 11. □

Lemma C.2 (Cauchy–Schwarz' inequality) For two vector functions f(x), g(x) defined for x ∈ [a, b], the version of Cauchy–Schwarz' inequality used in this book is

∫_a^b f^T(x)g(x)dx ≤ √(∫_a^b f^T(x)f(x)dx) √(∫_a^b g^T(x)g(x)dx).  (C.3)

This inequality is also a special case of Hölder's inequality. A frequently used special case, for a scalar function h(x) defined for x ∈ [a, b], is

(∫_a^b h(x)dx)^2 ≤ (b − a)∫_a^b h^2(x)dx,  (C.4)

which follows from letting f = h and g ≡ 1, and squaring the result. For two vector functions u, v and a scalar function w defined for x ∈ [0, 1], we have

© Springer Nature Switzerland AG 2019 405


H. Anfinsen and O. M. Aamo, Adaptive Control of Hyperbolic PDEs,
Communications and Control Engineering,
https://doi.org/10.1007/978-3-030-05879-1
∫_0^1 u^T(x)v(x)dx ≤ √(∫_0^1 u^T(x)u(x)dx) √(∫_0^1 v^T(x)v(x)dx) = ||u|| ||v||  (C.5)

and

(∫_0^1 w(x)dx)^2 ≤ ∫_0^1 w^2(x)dx = ||w||^2.  (C.6)

Proof See e.g. Abramowitz and Stegun (1975), Page 11. □

Lemma C.3 (Young's inequality) For two vector functions f(x), g(x) defined for x ∈ [a, b], the version of Young's inequality used in this book is

∫_a^b f^T(x)g(x)dx ≤ (ε/2)∫_a^b f^T(x)f(x)dx + (1/(2ε))∫_a^b g^T(x)g(x)dx  (C.7)

for some arbitrary positive constant ε. For two vector functions u, v defined for x ∈ [0, 1], we have

∫_0^1 u^T(x)v(x)dx ≤ (ε/2)∫_0^1 u^T(x)u(x)dx + (1/(2ε))∫_0^1 v^T(x)v(x)dx = (ε/2)||u||^2 + (1/(2ε))||v||^2.  (C.8)

Proof We have

0 ≤ (√ε f(x) − (1/√ε)g(x))^T (√ε f(x) − (1/√ε)g(x)) = ε f^T(x)f(x) − 2f^T(x)g(x) + (1/ε)g^T(x)g(x),  (C.9)

which implies

2f^T(x)g(x) ≤ ε f^T(x)f(x) + (1/ε)g^T(x)g(x).  (C.10)

Integrating from a to b and dividing by two yields the result. □
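The three inequalities above are easy to spot-check numerically. The grid, the functions f, g and the trapezoid-rule quadrature below are arbitrary illustrative choices, not from the book; the checks cover (C.1), the special case (C.4) with b − a = 1, and the pointwise form (C.10) of Young's inequality.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 1001)
f = np.sin(3 * x) + 0.1 * rng.standard_normal(x.size)
g = np.cos(5 * x)

def integral(y):
    # trapezoid rule on the grid x (the weights are nonnegative and sum to 1)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2))

# Minkowski (C.1): ||f + g|| <= ||f|| + ||g||
assert np.sqrt(integral((f + g) ** 2)) <= np.sqrt(integral(f ** 2)) + np.sqrt(integral(g ** 2)) + 1e-12

# Cauchy-Schwarz special case (C.4), with b - a = 1
assert integral(f) ** 2 <= integral(f ** 2) + 1e-12

# Young (C.10), pointwise, for several values of the free constant eps
for eps in (0.1, 1.0, 10.0):
    assert np.all(2 * f * g <= eps * f ** 2 + g ** 2 / eps + 1e-12)
```

Since the discretized integrals are themselves weighted inner products, the discrete analogues of (C.1) and (C.4) hold exactly; the tolerances only absorb floating-point round-off.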
Appendix D: Well-Posedness of Kernel Equations

D.1 Solution to a 1-D PIDE

Consider a PIDE of the form

F_x(x, ξ) + F_ξ(x, ξ) = ∫_ξ^x F(x, s)a(s, ξ)ds + b(x, ξ)  (D.1a)
F(x, 0) = ∫_0^x F(x, s)c(s)ds + d(x)  (D.1b)

where a, b, c, d are functions assumed to satisfy

a, b ∈ C^N(T)  (D.2a)
c, d ∈ C^N([0, 1])  (D.2b)

for some positive integer N, and T is the triangular domain defined in (1.1a).

Lemma D.1 The PIDE (D.1) has a unique solution F ∈ C^N(T). Moreover, a bound on the solution is

|F(x, ξ)| ≤ (b̄ + d̄)e^{(ā + c̄)(x − ξ)}  (D.3)

where

ā = max_{(x,ξ)∈T} |a(x, ξ)|,  b̄ = max_{(x,ξ)∈T} |b(x, ξ)|  (D.4a)
c̄ = max_{x∈[0,1]} |c(x)|,  d̄ = max_{x∈[0,1]} |d(x)|.  (D.4b)


Proof This proof is based on a similar proof given in Krstić and Smyshlyaev (2008). We proceed by transforming (D.1) into integral equations using the method of characteristics. Along the characteristic lines

x_τ(τ; x) = x − τ  (D.5)
ξ_τ(τ; ξ) = ξ − τ,  (D.6)

the PIDE (D.1) reduces to an ODE, in the sense that

(d/dτ)F(x_τ(τ; x), ξ_τ(τ; ξ)) = (d/dτ)F(x − τ, ξ − τ)
  = −F_x(x − τ, ξ − τ) − F_ξ(x − τ, ξ − τ)
  = −∫_{ξ−τ}^{x−τ} F(x − τ, s)a(s, ξ − τ)ds − b(x − τ, ξ − τ).  (D.7)

Integration with respect to τ from τ = 0 to τ = ξ yields

F(x, ξ) = F(x − ξ, 0) + ∫_0^ξ ∫_{ξ−τ}^{x−τ} F(x − τ, s)a(s, ξ − τ)ds dτ + ∫_0^ξ b(x − τ, ξ − τ)dτ.  (D.8)

Inserting (D.1b), we obtain

F(x, ξ) = Ψ_0(x, ξ) + Ψ[F](x, ξ)  (D.9)

where

Ψ_0(x, ξ) = ∫_0^ξ b(x − τ, ξ − τ)dτ + d(x − ξ)  (D.10a)
Ψ[F](x, ξ) = ∫_0^ξ ∫_{ξ−τ}^{x−τ} F(x − τ, s)a(s, ξ − τ)ds dτ + ∫_0^{x−ξ} F(x − ξ, s)c(s)ds.  (D.10b)

This equation can be solved using successive approximations, similar to what was done in the proof of Lemma 1.1. However, as the integral operator Ψ in this case contains a double integral, the proof is naturally also more complicated. We form the series

F^0(x, ξ) = Ψ_0(x, ξ)  (D.11a)
F^n(x, ξ) = Ψ_0(x, ξ) + Ψ[F^{n−1}](x, ξ)  (D.11b)
for n ∈ Z, n ≥ 1. Clearly,

F(x, ξ) = lim_{n→∞} F^n(x, ξ)  (D.12)

provided the limit exists, and F ∈ C^N(T), since all the terms F^n ∈ C^N(T). Consider the differences

ΔF^n(x, ξ) = F^n(x, ξ) − F^{n−1}(x, ξ)  (D.13)

for n ∈ Z, n ≥ 1, and where we define

ΔF^0(x, ξ) = Ψ_0(x, ξ).  (D.14)

Due to the linearity of the operator Ψ, we have

ΔF^n(x, ξ) = Ψ[ΔF^{n−1}](x, ξ),  (D.15)

and

F(x, ξ) = Σ_{n=0}^∞ ΔF^n(x, ξ)  (D.16)

provided the sum is bounded. Assume that

|ΔF^n(x, ξ)| ≤ (b̄ + d̄)(ā + c̄)^n (x − ξ)^n / n!  (D.17)

where we have used the bounds (D.4). We prove (D.17) by induction. For n = 0, we have

|ΔF^0(x, ξ)| = |Ψ_0(x, ξ)| = |∫_0^ξ b(x − τ, ξ − τ)dτ + d(x − ξ)| ≤ b̄ξ + d̄ ≤ b̄ + d̄  (D.18)

and (D.17) holds. Consider now

|ΔF^{n+1}(x, ξ)| = |Ψ[ΔF^n](x, ξ)|
  ≤ |∫_0^ξ ∫_{ξ−τ}^{x−τ} ΔF^n(x − τ, s)a(s, ξ − τ)ds dτ| + |∫_0^{x−ξ} ΔF^n(x − ξ, s)c(s)ds|.  (D.19)
Now using the assumption (D.17),

|ΔF^{n+1}(x, ξ)| ≤ (b̄ + d̄)((ā + c̄)^n / n!) ā |∫_0^ξ ∫_{ξ−τ}^{x−τ} (x − τ − s)^n ds dτ| + (b̄ + d̄)((ā + c̄)^n / n!) c̄ |∫_0^{x−ξ} (x − ξ − s)^n ds|
  ≤ (b̄ + d̄)((ā + c̄)^n / n!)(ā/(n + 1)) ξ(x − ξ)^{n+1} + (b̄ + d̄)((ā + c̄)^n / n!)(c̄/(n + 1))(x − ξ)^{n+1}
  ≤ (b̄ + d̄)((ā + c̄)^n / (n + 1)!)(āξ + c̄)(x − ξ)^{n+1}
  ≤ (b̄ + d̄)((ā + c̄)^{n+1} / (n + 1)!)(x − ξ)^{n+1}  (D.20)

which proves (D.17) by induction, since ξ ≤ 1 implies āξ + c̄ ≤ ā + c̄. Using (D.17), F can be pointwise bounded as

|F(x, ξ)| ≤ Σ_{n=0}^∞ |ΔF^n(x, ξ)| ≤ (b̄ + d̄) Σ_{n=0}^∞ (ā + c̄)^n (x − ξ)^n / n! = (b̄ + d̄)e^{(ā + c̄)(x − ξ)}.  (D.21)

Therefore, the sum (D.16) converges uniformly to the solution F of (D.1). We proceed by showing uniqueness. Consider two solutions F_1 and F_2, and their difference

F̃(x, ξ) = F_1(x, ξ) − F_2(x, ξ).  (D.22)

Due to linearity, F̃ must also satisfy the integral equation (D.9), but with Ψ_0(x, ξ) ≡ 0, hence

F̃(x, ξ) = Ψ[F̃](x, ξ).  (D.23)

Repeating the steps above, one can obtain a bound of the form

|F̃(x, ξ)| ≤ Me^{(ā + c̄)(x − ξ)}  (D.24)

where M = 0, and hence F̃ ≡ 0, which implies F_1 ≡ F_2. □
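The successive-approximation scheme (D.11) can be illustrated on a simplified scalar analogue (a hypothetical 1-D Volterra equation with constant coefficients a and d, not the 2-D PIDE itself): for F(x) = d + a ∫_0^x F(s)ds, the Picard iterates add one term of the exponential series per pass, mirroring the factorial decay of the increments in (D.17).

```python
import numpy as np

# Scalar analogue of the fixed point (D.9): F = Psi_0 + Psi[F] with
# Psi_0 = d (constant) and Psi[F](x) = a * int_0^x F(s) ds. Exact: F = d e^{ax}.
a, d = 2.0, 1.5
x = np.linspace(0.0, 1.0, 1001)
h = x[1] - x[0]

F = np.full_like(x, d)                                  # F^0 = Psi_0
for n in range(30):                                     # Picard iterations as in (D.11)
    cumint = np.concatenate(([0.0], np.cumsum((F[1:] + F[:-1]) * h / 2)))
    F = d + a * cumint                                  # F^{n+1} = Psi_0 + Psi[F^n]

exact = d * np.exp(a * x)
assert np.max(np.abs(F - exact)) < 1e-3                 # converged to d e^{ax}
```

After n passes the iterate equals the degree-n Taylor polynomial of de^{ax} (up to quadrature error), so the increments shrink like (ax)^n/n!, which is exactly the mechanism behind the bound (D.17).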

D.2 Existence of Solution to a 4 × 4 System of PDEs

Consider the following system of 4 × 4 coupled linear hyperbolic PDEs

ε_1(x)F_x^1(x, ξ) + ε_1(ξ)F_ξ^1(x, ξ) = g_1(x, ξ) + Σ_{i=1}^4 C_{1i}(x, ξ)F^i(x, ξ)  (D.25a)
ε_1(x)F_x^2(x, ξ) − ε_2(ξ)F_ξ^2(x, ξ) = g_2(x, ξ) + Σ_{i=1}^4 C_{2i}(x, ξ)F^i(x, ξ)  (D.25b)
ε_2(x)F_x^3(x, ξ) − ε_1(ξ)F_ξ^3(x, ξ) = g_3(x, ξ) + Σ_{i=1}^4 C_{3i}(x, ξ)F^i(x, ξ)  (D.25c)
ε_2(x)F_x^4(x, ξ) + ε_2(ξ)F_ξ^4(x, ξ) = g_4(x, ξ) + Σ_{i=1}^4 C_{4i}(x, ξ)F^i(x, ξ)  (D.25d)
F^1(x, 0) = h_1(x) + q_1(x)F^2(x, 0) + q_2(x)F^3(x, 0)  (D.25e)
F^2(x, x) = h_2(x)  (D.25f)
F^3(x, x) = h_3(x)  (D.25g)
F^4(x, 0) = h_4(x) + q_3(x)F^2(x, 0) + q_4(x)F^3(x, 0)  (D.25h)

evolving over T defined in (1.1a), with parameters assumed to satisfy

q_i, h_i ∈ B([0, 1]), g_i, C_{ij} ∈ B(T), i, j = 1 . . . 4,  (D.26a)
ε_1, ε_2 ∈ C([0, 1]), ε_1(x), ε_2(x) > 0, ∀x ∈ [0, 1].  (D.26b)

Theorem D.1 (Theorem A.1 from Coron et al. 2013) The PDEs (D.25) with parameters (D.26) have a unique B(T) solution (F^1, F^2, F^3, F^4). Moreover, there exist bounded constants A, B so that

|F^i(x, ξ)| ≤ Ae^{Bx}, i = 1 . . . 4  (D.27)

where A depends continuously on q_i, h_i and g_i, i = 1 . . . 4, while B depends continuously on ε_1, ε_2 and C_{ij}, i, j = 1 . . . 4.

Theorem D.2 (Theorem A.2 from Coron et al. 2013) Consider the PDEs (D.25) with parameters (D.26). Under the additional assumptions

ε_i, q_i, h_i ∈ C^N([0, 1]), g_i, C_{ij} ∈ C^N(T)  (D.28)

there exists a unique C^N(T) solution (F^1, F^2, F^3, F^4).

The proofs of these theorems, given in Coron et al. (2013), use a technique similar to the one used to prove existence of a solution to the kernel equations in Theorem D.1: characteristic lines are formed, transforming the PDEs into Volterra equations, which are approximated using successive iterations proved to converge. A bound of the form (D.27) emerges in the process. The N-th order derivatives of (D.25) were then proved to be continuous and unique using a similar technique. We skip the proofs here, and instead refer the interested reader to Coron et al. (2013).
D.3 Existence of Solution to Time-Varying Observer PDEs

Consider the PDEs defined over S_1 defined in (1.1d)

P_t^α(x, ξ, t) = −λ(x)P_x^α(x, ξ, t) − λ(ξ)P_ξ^α(x, ξ, t) − λ'(ξ)P^α(x, ξ, t) + c_1(x)P^β(x, ξ, t)  (D.29a)
P_t^β(x, ξ, t) = μ(x)P_x^β(x, ξ, t) − λ(ξ)P_ξ^β(x, ξ, t) − λ'(ξ)P^β(x, ξ, t) + c_2(x)P^α(x, ξ, t)  (D.29b)
P^β(x, x, t) = c_2(x)/(λ(x) + μ(x))  (D.29c)
P^α(0, ξ, t) = q̂(t)P^β(0, ξ, t)  (D.29d)
P^α(x, ξ, 0) = P_0^α(x, ξ)  (D.29e)
P^β(x, ξ, 0) = P_0^β(x, ξ)  (D.29f)

for some parameters

λ, μ ∈ C^1([0, 1]), c_1, c_2 ∈ C^0([0, 1]), q̂(t), q̄ ∈ R, |q̂(t)| ≤ q̄, ∀t ≥ 0,  (D.30)

and some initial conditions satisfying

||P_0^α||_∞ ≤ P̄_0^α, ||P_0^β||_∞ ≤ P̄_0^β, ∀(x, ξ) ∈ S, P_0^α, P_0^β ∈ C(S)  (D.31)

for some bounded constants P̄_0^α, P̄_0^β, and where S is defined in (1.1c).

Theorem D.3 (Lemma 4 from Anfinsen and Aamo 2016) The solution (P^α, P^β) to the PDE (10.79) is bounded in the L_2 sense for any bounded system parameters λ, μ, c_1, c_2 and estimate q̂(t), and initial conditions P_0^α, P_0^β satisfying (10.80), and there exist constants P̄^α, P̄^β so that

||P^α(t)||_∞ ≤ P̄^α, ||P^β(t)||_∞ ≤ P̄^β, ∀t ≥ 0  (D.32)

where P̄^α, P̄^β depend on the system parameters, q̂_0, P_0^α and P_0^β. Moreover, if q̂(t) converges exponentially to q, then (P^α, P^β) converges exponentially in L_2 to the static solution given as the solution to (8.86).

Proof (Proof originally from Anfinsen and Aamo 2016) Let (M, N) denote the solution of the static equations (8.86). The difference between (P^α, P^β), whose dynamics is given in (10.79), and (M, N), that is

M̃(x, ξ, t) = M(x, ξ) − P^α(x, ξ, t)  (D.33a)
Ñ(x, ξ, t) = N(x, ξ) − P^β(x, ξ, t),  (D.33b)

can straightforwardly be shown to satisfy

M̃_t(x, ξ, t) = −λ(x)M̃_x(x, ξ, t) − λ(ξ)M̃_ξ(x, ξ, t) − λ'(ξ)M̃(x, ξ, t) + c_1(x)Ñ(x, ξ, t)  (D.34a)
Ñ_t(x, ξ, t) = μ(x)Ñ_x(x, ξ, t) − λ(ξ)Ñ_ξ(x, ξ, t) − λ'(ξ)Ñ(x, ξ, t) + c_2(x)M̃(x, ξ, t)  (D.34b)
Ñ(x, x, t) = 0  (D.34c)
M̃(0, ξ, t) = q̂(t)Ñ(0, ξ, t) + q̃(t)N̄(0, ξ, t)  (D.34d)
M̃(x, ξ, 0) = M̃_0(x, ξ)  (D.34e)
Ñ(x, ξ, 0) = Ñ_0(x, ξ)  (D.34f)

where M̃_0 = M − P_0^α, Ñ_0 = N − P_0^β are bounded, and M̃_0, Ñ_0 ∈ C(S). Consider the Lyapunov function candidate

V(t) = V_1(t) + aV_2(t)  (D.35)

where

V_1(t) = ∫_S e^{−bξ} M̃^2(x, ξ, t)dS  (D.36a)
V_2(t) = ∫_S e^{−bξ} Ñ^2(x, ξ, t)dS  (D.36b)

for some constants a and b with a > 0 to be decided. The domain S is defined in (1.1a). Differentiating (D.36a) with respect to time, and inserting the dynamics (D.34a), we find

V̇_1(t) = −2∫_0^1 ∫_0^ξ e^{−bξ} λ(x)M̃(x, ξ, t)M̃_x(x, ξ, t)dx dξ − 2∫_0^1 ∫_x^1 e^{−bξ} λ(ξ)M̃(x, ξ, t)M̃_ξ(x, ξ, t)dξ dx − 2∫_S e^{−bξ} λ'(ξ)M̃^2(x, ξ, t)dS + 2∫_S e^{−bξ} c_1(x)M̃(x, ξ, t)Ñ(x, ξ, t)dS.  (D.37)

Integration by parts yields


V̇_1(t) = −∫_0^1 e^{−bξ}λ(ξ)M̃^2(ξ, ξ, t)dξ + ∫_0^1 e^{−bξ}λ(0)M̃^2(0, ξ, t)dξ + ∫_S e^{−bξ}λ'(x)M̃^2(x, ξ, t)dS − e^{−b}λ(1)∫_0^1 M̃^2(x, 1, t)dx + ∫_0^1 e^{−bx}λ(x)M̃^2(x, x, t)dx + ∫_S e^{−bξ}λ'(ξ)M̃^2(x, ξ, t)dS − b∫_S e^{−bξ}λ(ξ)M̃^2(x, ξ, t)dS − 2∫_S e^{−bξ}λ'(ξ)M̃^2(x, ξ, t)dS + 2∫_S e^{−bξ}c_1(x)M̃(x, ξ, t)Ñ(x, ξ, t)dS  (D.38)

or, using Young's inequality on the last term and inserting the boundary condition (D.34d),

V̇_1(t) ≤ ∫_0^1 e^{−bξ}λ(0)q̂^2(t)Ñ^2(0, ξ, t)dξ + ∫_0^1 e^{−bξ}λ(0)q̃^2(t)N̄^2(0, ξ, t)dξ + ∫_S e^{−bξ}[λ'(x) − bλ(ξ) − λ'(ξ) + c_1(x)]M̃^2(x, ξ, t)dS − e^{−b}λ(1)∫_0^1 M̃^2(x, 1, t)dx + ∫_S e^{−bξ}c_1(x)Ñ^2(x, ξ, t)dS.  (D.39)

Time differentiating (D.36b), using (D.34b) and (D.34c), yields in a similar way

V̇_2(t) ≤ −∫_0^1 e^{−bξ}μ(0)Ñ^2(0, ξ, t)dξ + ∫_S e^{−bξ}[c_2(x) − λ'(ξ) − μ'(x) − bλ(ξ)]Ñ^2(x, ξ, t)dS − e^{−b}λ(1)∫_0^1 Ñ^2(x, 1, t)dx + ∫_S e^{−bξ}c_2(x)M̃^2(x, ξ, t)dS.  (D.40)

Using (D.39) and (D.40), the time derivative of V(t) satisfies

V̇(t) ≤ −∫_S e^{−bξ}[−λ'(x) + bλ(ξ) + λ'(ξ) − c_1(x) − ac_2(x)]M̃^2(x, ξ, t)dS − ∫_S e^{−bξ}[−c_1(x) − ac_2(x) + aλ'(ξ) + aμ'(x) + abλ(ξ)]Ñ^2(x, ξ, t)dS − ∫_0^1 e^{−bξ}[aμ(0) − λ(0)q̂^2(t)]Ñ^2(0, ξ, t)dξ − e^{−b}λ(1)∫_0^1 M̃^2(x, 1, t)dx − ae^{−b}λ(1)∫_0^1 Ñ^2(x, 1, t)dx + q̃^2(t)λ(0)∫_0^1 e^{−bξ}N̄^2(0, ξ, t)dξ.  (D.41)

We require

0 < −λ'(x) + bλ(ξ) + λ'(ξ) − c_1(x) − ac_2(x)  (D.42a)
0 < −c_1(x) − ac_2(x) + aλ'(ξ) + aμ'(x) + abλ(ξ)  (D.42b)
0 < aμ(0) − λ(0)q̂^2(t).  (D.42c)

Firstly, from (D.42c), we choose

a > 4(λ(0)/μ(0))q̄_0^2  (D.43)

where

q̄_0 = max{|q|, |q̄|}.  (D.44)

Then, from (D.42a), we require

b > (c_1(x) + ac_2(x) + λ'(x) − λ'(ξ))/λ(ξ)  (D.45)

while from (D.42b), we require

b > (c_1(x) + ac_2(x) − aλ'(ξ) − aμ'(x))/(aλ(ξ)).  (D.46)

Thus, choose

b > max{(c̄_1 + ac̄_2 + 2λ̄_d)/λ̲, (c̄_1 + ac̄_2 + aλ̄_d + aμ̄_d)/(aλ̲)}  (D.47)

where

λ̲ = min_{x∈[0,1]} λ(x),  (D.48a)
λ̄_d = max_{x∈[0,1]} |λ'(x)|,  μ̄_d = max_{x∈[0,1]} |μ'(x)|,  (D.48b)
c̄_1 = max_{x∈[0,1]} |c_1(x)|,  c̄_2 = max_{x∈[0,1]} |c_2(x)|.  (D.48c)

Additionally, we know that the last integral in (D.41) is well-defined and bounded. We thus obtain
V̇(t) ≤ −k_1∫_S M̃^2(x, ξ, t)dS − k_2∫_S Ñ^2(x, ξ, t)dS − k_3∫_0^1 Ñ^2(0, ξ, t)dξ − k_4∫_0^1 M̃^2(x, 1, t)dx − k_5∫_0^1 Ñ^2(x, 1, t)dx + c_0q̃^2(t)  (D.49)

for some positive constants c_0, k_i, i = 1, . . . , 5. This, along with the assumed boundedness of q̃(t), proves that V, and hence M̃ and Ñ, are bounded. Moreover, if q̃(t) converges exponentially to zero, then M̃ and Ñ will converge exponentially to zero. This can be seen from rewriting (D.49) as

V̇(t) ≤ −cV(t) + c_0q̃^2(t)  (D.50)

for some positive constant c.

Lastly, since the PDEs (D.34) are linear hyperbolic and the coefficients and initial conditions are continuous, the solution to (D.34) will also stay continuous, which, together with the L_2-boundedness proved above, implies bounds of the form (10.81). □


D.4 Existence of Solution to Coupled n + 1 Kernel Equations

Consider the set of n + 1 coupled PDEs, evolving over T defined in (1.1a)

μ̄(x)F_x^i(x, ξ) − λ̄_i(ξ)F_ξ^i(x, ξ) = g_i(x, ξ) + a_i(x, ξ)G(x, ξ) + Σ_{j=1}^n b_{i,j}(x, ξ)F^j(x, ξ)  (D.51a)
μ̄(x)G_x(x, ξ) + μ̄(ξ)G_ξ(x, ξ) = k(x, ξ) + d(x, ξ)G(x, ξ) + Σ_{j=1}^n e_j(x, ξ)F^j(x, ξ)  (D.51b)
F^i(x, x) = h_i(x)  (D.51c)
G(x, 0) = l(x) + Σ_{i=1}^n q_i(x)F^i(x, 0)  (D.51d)

for i = 1, . . . , n.

Theorem D.4 Under the following assumptions

a_i, b_{i,j}, d, e_{i,j}, g_i, k ∈ C(T), h_i, l, q_i ∈ C([0, 1]), i, j = 1, . . . , n,  (D.52a)
λ̄_i, μ̄ ∈ C^1([0, 1]), λ̄_i(x), μ̄(x) > 0, ∀x ∈ [0, 1], i = 1, . . . , n  (D.52b)

the Eqs. (D.51) admit a unique continuous solution on T. Moreover, there exist bounded constants A, B so that

|F^i(x, ξ)| ≤ Ae^{Bx}, i = 1, . . . , n,  |G(x, ξ)| ≤ Ae^{Bx}, ∀(x, ξ) ∈ T,  (D.53)

where A depends continuously on g_i, k, h_i, l and q_i, i = 1, . . . , n, while B depends continuously on λ̄_i, μ̄, a_i, b_{i,j}, d and e_j, i, j = 1, . . . , n.

Proof This theorem was stated in Di Meglio et al. (2013) for g_i = k ≡ 0 and h_i = l (Di Meglio et al. 2013, Theorem 5.3). However, the proof straightforwardly extends to the case of nonzero g_i, h_i, q_i, giving bounds of the form (D.53). □

D.5 Existence of Solution to Coupled (n + 1) × (n + 1) Kernel Equations

Consider the PDEs defined over the triangular domain T defined in (1.1a)

λ_i(x)∂_x K_{ij}(x, ξ) + λ_j(x)∂_ξ K_{ij}(x, ξ) = Σ_{k=1}^n a_{kj}(x, ξ)K_{ik}(x, ξ)  (D.54a)
K_{ij}(x, x) = b_{ij}(x), i, j = 1, 2, . . . , n, i ≠ j  (D.54b)
K_{ij}(x, 0) = Σ_{k=1}^{n−m} c_{kj}K_{i,m+k}(x, 0), 1 ≤ i ≤ j ≤ n  (D.54c)
K_{ij}(1, ξ) = d_{ij}(ξ), 1 ≤ j < i ≤ m ∪ m + 1 ≤ i < j ≤ n  (D.54d)
K_{ij}(x, 0) = e_{ij}(x), m + 1 ≤ j ≤ i ≤ n  (D.54e)

for some coefficients

λ_i ∈ C^0([0, 1]), i = 1, 2, . . . , n  (D.55a)
a_{ij} ∈ C^0(T), i, j = 1, 2, . . . , n  (D.55b)
b_{ij} ∈ C^0([0, 1]), i, j = 1, 2, . . . , n, i ≠ j  (D.55c)
c_{kj} ∈ R, k = 1, 2, . . . , n − m, j = 1, 2, . . . , n  (D.55d)
d_{ij} ∈ C^0([0, 1]), 1 ≤ j < i ≤ m ∪ m + 1 ≤ i < j ≤ n  (D.55e)
e_{ij} ∈ C^0([0, 1]), m + 1 ≤ j ≤ i ≤ n  (D.55f)
with

λ_1(x) < λ_2(x) < · · · < λ_m(x) < 0, ∀x ∈ [0, 1],  (D.56a)
0 < λ_{m+1}(x) < λ_{m+2}(x) < · · · < λ_{m+n}(x), ∀x ∈ [0, 1].  (D.56b)

Theorem D.5 There exists a unique piecewise continuous solution K to the PDEs (D.54) with coefficients satisfying (D.55)–(D.56).

Proof This theorem is a slight variation of Hu et al. (2015), Theorem A.1. We omit further details and instead refer the reader to Hu et al. (2015). □

D.6 Existence of Solution to Coupled n + m Kernel Equations

Consider the PDEs defined over the triangular domain T defined in (1.1a)

Λ^−(x)K_x^u(x, ξ) − K_ξ^u(x, ξ)Λ^+(ξ) = K^u(x, ξ)Σ^{++}(ξ) + K^u(x, ξ)(Λ^+)'(ξ) + K^v(x, ξ)Σ^{−+}(ξ)  (D.57a)
Λ^−(x)K_x^v(x, ξ) + K_ξ^v(x, ξ)Λ^−(ξ) = K^u(x, ξ)Σ^{+−}(ξ) − K^v(x, ξ)(Λ^−)'(ξ) + K^v(x, ξ)Σ^{−−}(ξ)  (D.57b)
Λ^−(x)K^u(x, x) + K^u(x, x)Λ^+(x) = −Σ^{−+}(x)  (D.57c)
Λ^−(x)K^v(x, x) − K^v(x, x)Λ^−(x) = −Σ^{−−}(x)  (D.57d)
K^v(x, 0)Λ^−(0) − K^u(x, 0)Λ^+(0)Q_0 = G(x)  (D.57e)
K_{ij}^v(1, ξ) = k_{ij}^v(ξ), 1 ≤ j < i ≤ m  (D.57f)

where G is a strictly lower triangular matrix of the form

G(x) = {g_{ij}(x)}_{1≤i,j≤n}, with g_{ij}(x) = 0 if 1 ≤ i ≤ j ≤ n,  (D.58)

where

K^u(x, ξ) = {K_{ij}^u(x, ξ)}_{1≤i≤m, 1≤j≤n},  K^v(x, ξ) = {K_{ij}^v(x, ξ)}_{1≤i,j≤m}  (D.59)

and where

Λ^+(x) = diag{λ_1(x), λ_2(x), . . . , λ_n(x)}  (D.60a)
Λ^−(x) = diag{μ_1(x), μ_2(x), . . . , μ_m(x)}  (D.60b)
Σ^{++}(x) = {σ_{ij}^{++}(x)}_{1≤i,j≤n},  Σ^{+−}(x) = {σ_{ij}^{+−}(x)}_{1≤i≤n, 1≤j≤m}  (D.60c)
Σ^{−+}(x) = {σ_{ij}^{−+}(x)}_{1≤i≤m, 1≤j≤n},  Σ^{−−}(x) = {σ_{ij}^{−−}(x)}_{1≤i,j≤m}  (D.60d)
Q_0 = {q_{ij}}_{1≤i≤m, 1≤j≤n}  (D.60e)

are assumed to satisfy, for i, k = 1, 2, . . . , n, j, l = 1, 2, . . . , m,

λ_i, μ_j ∈ C^1([0, 1]), λ_i(x), μ_j(x) > 0, ∀x ∈ [0, 1]  (D.61a)
σ_{ik}^{++}, σ_{ij}^{+−}, σ_{ji}^{−+}, σ_{jl}^{−−} ∈ C^0([0, 1]), q_{ij} ∈ R,  (D.61b)

with

−μ_1(x) < −μ_2(x) < · · · < −μ_m(x) < 0 < λ_1(x) < λ_2(x) < · · · < λ_n(x),  (D.62)

while k_{ij}^v for 1 ≤ j < i ≤ m are some arbitrary functions.

Theorem D.6 There exists a unique piecewise continuous solution K to the PDEs (D.57) with coefficients satisfying (D.58)–(D.62).

Proof This theorem is a slight variation of Hu et al. (2015), Theorem A.1. We omit further details and instead refer the reader to Hu et al. (2015). □

D.7 Existence of Solution to a Fredholm PDE

Consider the PDE in F, evolving over the quadratic domain [0, 1]^2,

Λ(x)F_x(x, ξ) + F_ξ(x, ξ)Λ(ξ) = F(x, ξ)A(ξ)  (D.63a)
F(x, 0) = B(x)  (D.63b)
F(0, ξ) = C(ξ),  (D.63c)

where

F(x, ξ) = {f_{ij}(x, ξ)}_{1≤i,j≤n},  A(x) = {a_{ij}(x)}_{1≤i,j≤n}  (D.64a)
B(x) = {b_{ij}(x)}_{1≤i,j≤n},  C(x) = {c_{ij}(x)}_{1≤i,j≤n}  (D.64b)
Λ(x) = diag{λ_1(x), λ_2(x), . . . , λ_n(x)}  (D.64c)

with parameters assumed to satisfy

a_{ij}, b_{ij}, c_{ij} ∈ L_2([0, 1]), λ_i ∈ C^0([0, 1]), λ_i(x) > 0, ∀x ∈ [0, 1]  (D.65)

for all i, j = 1, 2, . . . , n.

Theorem D.7 There exists a unique solution F ∈ L_2([0, 1])^{n×n} to the Eqs. (D.63).

Proof This was originally proved in Coron et al. (2017). Regarding ξ as the time parameter, the PDE (D.63) is a standard time-dependent uncoupled hyperbolic system with only positive transport speeds λ_i(x)/λ_j(ξ), and therefore admits a unique solution. □

D.8 Invertibility of a Fredholm Equation

Consider a Fredholm integral equation in the form

G(x, ξ) = A(x, ξ) + ∫_0^1 B(x, s)G(s, ξ)ds  (D.66)

where G, A, and B are all strictly lower triangular, hence

G(x) = {g_{ij}(x)}_{1≤i,j≤n}, with g_{ij}(x) = 0 if 1 ≤ i ≤ j ≤ n  (D.67a)
A(x) = {a_{ij}(x)}_{1≤i,j≤n}, with a_{ij}(x) = 0 if 1 ≤ i ≤ j ≤ n  (D.67b)
B(x) = {b_{ij}(x)}_{1≤i,j≤n}, with b_{ij}(x) = 0 if 1 ≤ i ≤ j ≤ n  (D.67c)

with the parameters assumed to satisfy

a_{ij}, b_{ij} ∈ L_2([0, 1]^2)  (D.68)

for all i, j = 1, 2, . . . , n.

Lemma D.2 There exists a unique solution G ∈ L_2([0, 1]^2)^{n×n} to (D.66).

Proof Written in component form, Eq. (D.66) reads

g_{ij}(x, ξ) = 0 for 1 ≤ i ≤ j ≤ n, and g_{ij}(x, ξ) = a_{ij}(x, ξ) + Σ_{k=j+1}^{i−1} ∫_0^1 b_{ik}(x, s)g_{kj}(s, ξ)ds otherwise.  (D.69)
Specifically,

g_{21}(x, ξ) = a_{21}(x, ξ)  (D.70)
g_{31}(x, ξ) = a_{31}(x, ξ) + ∫_0^1 b_{32}(x, s)g_{21}(s, ξ)ds  (D.71)
g_{32}(x, ξ) = a_{32}(x, ξ)  (D.72)
g_{41}(x, ξ) = a_{41}(x, ξ) + Σ_{k=2}^3 ∫_0^1 b_{4k}(x, s)g_{k1}(s, ξ)ds  (D.73)
g_{42}(x, ξ) = a_{42}(x, ξ) + ∫_0^1 b_{43}(x, s)g_{32}(s, ξ)ds  (D.74)
g_{43}(x, ξ) = a_{43}(x, ξ)  (D.75)
...
g_{ij}(x, ξ) = a_{ij}(x, ξ) + Σ_{k=j+1}^{i−1} ∫_0^1 b_{ik}(x, s)g_{kj}(s, ξ)ds, 1 ≤ j < i ≤ n.  (D.76)

Each row of G depends only on rows with smaller row index. The components of G can therefore be computed in cascade from the components of A and B. □
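A minimal numerical sketch of this cascade for n = 3 (the separable coefficients a_{21}(x, ξ) = x + ξ, a_{31} ≡ 0 and b_{32}(x, s) = xs are hypothetical, chosen so that g_{31} has the closed form x(1/3 + ξ/2)):

```python
import numpy as np

s = np.linspace(0.0, 1.0, 2001)
x, xi = 0.7, 0.3

a21 = lambda x, xi: x + xi          # hypothetical coefficients
b32 = lambda x, s: x * s

def trapz(y):
    # trapezoid rule over the grid s
    return float(np.sum((y[1:] + y[:-1]) * (s[1] - s[0]) / 2))

# Cascade per (D.70)-(D.71): g21 needs no integrals; g31 only needs g21.
g21 = lambda x, xi: a21(x, xi)
g31 = 0.0 + trapz(b32(x, s) * g21(s, xi))   # a31 = 0 here

assert abs(g31 - x * (1.0 / 3.0 + xi / 2.0)) < 1e-6
```

Rows with larger row index only ever integrate against already-computed rows, so no linear system needs to be solved.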
Appendix E: Additional Proofs

E.1 Proof of Theorem 1.1

Proof We will prove this for the following general system

u_t(x, t) + Λ^+(x)u_x(x, t) = Σ^{++}(x)u(x, t) + Σ^{+−}(x)v(x, t) + d_1(x, t)  (E.1a)
v_t(x, t) − Λ^−(x)v_x(x, t) = Σ^{−+}(x)u(x, t) + Σ^{−−}(x)v(x, t) + d_2(x, t)  (E.1b)
u(0, t) = Q_0v(0, t) + d_3(t)  (E.1c)
v(1, t) = C_1u(1, t) + U(t) + d_4(t)  (E.1d)
u(x, 0) = u_0(x)  (E.1e)
v(x, 0) = v_0(x)  (E.1f)

where

Λ^+(x) = diag{λ_1(x), λ_2(x), . . . , λ_n(x)},  (E.2a)
Λ^−(x) = diag{μ_1(x), μ_2(x), . . . , μ_m(x)},  (E.2b)

with λ_i(x), μ_j(x) > 0 for i = 1, 2, . . . , n, j = 1, 2, . . . , m, some functions Σ^{++}(x), Σ^{+−}(x), Σ^{−+}(x), Σ^{−−}(x) and matrices Q_0, C_1 of appropriate sizes, initial conditions u_0, v_0 ∈ L_2([0, 1]) and an actuation signal U. The actuation signal is in the form

U(t) = G(t)u(1, t) + ∫_0^1 K^u(ξ, t)u(ξ, t)dξ + ∫_0^1 K^v(ξ, t)v(ξ, t)dξ + f(t)  (E.3)

for some bounded kernels K^u, K^v, and a bounded signal f. Let σ̄ bound all elements in Σ^{++}(x), Σ^{+−}(x), Σ^{−+}(x), Σ^{−−}(x), let q̄ bound all elements in Q_0, c̄ bound all elements in C_1, ḡ bound all elements in G, d̄ bound all elements in d_1, d_2, d_3, d_4 and f, λ̄ bound all elements of Λ^+ from above, μ̄ bound all elements of Λ^− from above, λ̲ bound all elements of Λ^+ from below, μ̲ bound all elements of Λ^− from below, and K̄ bound all elements in K^u and K^v.

Firstly, we prove (1.40). Consider the weighted sum of state norms

V_1(t) = ∫_0^1 e^{δx} u^T(x, t)(Λ^+(x))^{−1}u(x, t)dx + a∫_0^1 v^T(x, t)(Λ^−(x))^{−1}v(x, t)dx  (E.4)

for some positive constants a and δ to be decided. Differentiating (E.4) with respect to time and inserting the dynamics (E.1a)–(E.1b), we find
V̇_1(t) = −2∫_0^1 e^{δx} u^T(x, t)u_x(x, t)dx + 2∫_0^1 e^{δx} u^T(x, t)(Λ^+(x))^{−1}Σ^{++}(x)u(x, t)dx + 2∫_0^1 e^{δx} u^T(x, t)(Λ^+(x))^{−1}Σ^{+−}(x)v(x, t)dx + 2∫_0^1 e^{δx} u^T(x, t)(Λ^+(x))^{−1}d_1(x, t)dx + 2a∫_0^1 v^T(x, t)v_x(x, t)dx + 2a∫_0^1 v^T(x, t)(Λ^−(x))^{−1}Σ^{−+}(x)u(x, t)dx + 2a∫_0^1 v^T(x, t)(Λ^−(x))^{−1}Σ^{−−}(x)v(x, t)dx + 2a∫_0^1 v^T(x, t)(Λ^−(x))^{−1}d_2(x, t)dx.  (E.5)

Integration by parts and Young's inequality on the cross terms give

V̇_1(t) ≤ −e^δ u^T(1, t)u(1, t) + u^T(0, t)u(0, t) + δ∫_0^1 e^{δx}u^T(x, t)u(x, t)dx + pσ̄λ̲^{−1}∫_0^1 e^{δx}u^T(x, t)u(x, t)dx + p^2σ̄^2λ̲^{−2}∫_0^1 e^{δx}u^T(x, t)u(x, t)dx + ∫_0^1 e^{δx}v^T(x, t)v(x, t)dx + λ̲^{−2}∫_0^1 e^{δx}u^T(x, t)u(x, t)dx + ∫_0^1 e^{δx}d_1^T(x, t)d_1(x, t)dx + av^T(1, t)v(1, t) − av^T(0, t)v(0, t) + ap^2σ̄^2μ̲^{−2}∫_0^1 v^T(x, t)v(x, t)dx + a∫_0^1 u^T(x, t)u(x, t)dx + apσ̄μ̲^{−1}∫_0^1 v^T(x, t)v(x, t)dx + aμ̲^{−1}∫_0^1 v^T(x, t)v(x, t)dx + a∫_0^1 d_2^T(x, t)d_2(x, t)dx  (E.6)

where

p = max(m, n).  (E.7)

Inserting the boundary conditions (E.1c)–(E.1d) and the control law (E.3), we can bound V̇_1(t) as

V̇_1(t) ≤ −(e^δ − 6ap^2(c̄^2 + ḡ^2))u^T(1, t)u(1, t) − (a − 2p^2q̄^2)v^T(0, t)v(0, t) + b_1∫_0^1 e^{δx}u^T(x, t)(Λ^+(x))^{−1}u(x, t)dx + b_2∫_0^1 v^T(x, t)(Λ^−(x))^{−1}v(x, t)dx + b_3d̄^2  (E.8)

where we have defined the positive constants

b_1 = λ̄(δ + pσ̄λ̲^{−1} + p^2σ̄^2λ̲^{−2} + λ̲^{−2} + a + 6ap^2K̄^2)  (E.9a)
b_2 = μ̄(e^δ + ap^2σ̄^2μ̲^{−2} + apσ̄μ̲^{−1} + aμ̲^{−1} + 6ap^2K̄^2)  (E.9b)
b_3 = e^δ p + 13ap + 2p.  (E.9c)

Choosing

a = 2p^2q̄^2 + 1  (E.10)

and

δ = log(6ap^2(c̄^2 + ḡ^2) + 1)  (E.11)

we obtain

V̇_1(t) ≤ −u^T(1, t)u(1, t) − v^T(0, t)v(0, t) + kV_1(t) + b_3d̄^2  (E.12)


where

k = max{b_1, b_2/a},  (E.13)

and hence

V_1(t) ≤ V_1(0)e^{kt} + b_3d̄^2∫_0^t e^{k(t−τ)}dτ  (E.14)

yielding a bound on the weighted norms as

V_1(t) ≤ (V_1(0) + b_3d̄^2/k)e^{kt}  (E.15)

which proves (1.40).


We now proceed by assuming pointwise bounded initial conditions, i.e. u_0, v_0 ∈ B([0, 1]), and prove (1.41). Firstly, we consider the case where t ∈ [0, T], where

T = min{t̄_u, t̄_v},  t̄_u = min_{i∈{1,...,n}} t_{u,i},  t̄_v = min_{i∈{1,...,m}} t_{v,i}  (E.16a)
t_{u,i} = ∫_0^1 dγ/λ_i(γ),  t_{v,i} = ∫_0^1 dγ/μ_i(γ).  (E.16b)
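The transport times in (E.16b) are simple quadratures of the reciprocal transport speeds. A small numerical illustration (the speed λ(x) = 1 + x is a hypothetical choice, for which the exact crossing time is log 2), which also inverts the strictly increasing travel-time function by interpolation:

```python
import numpy as np

# Hypothetical transport speed lambda(x) = 1 + x; the exact crossing time
# int_0^1 dx/(1 + x) from (E.16b) is log 2.
x = np.linspace(0.0, 1.0, 100001)
w = 1.0 / (1.0 + x)

# phi(x) = int_0^x dgamma/lambda(gamma), cumulative trapezoid rule
phi = np.concatenate(([0.0], np.cumsum((w[1:] + w[:-1]) * np.diff(x) / 2)))

t_u = phi[-1]                        # transport time t_u = phi(1)
assert abs(t_u - np.log(2.0)) < 1e-9

# phi is strictly increasing, so it can be inverted by interpolation
x_of = lambda p: np.interp(p, phi, x)
assert abs(x_of(np.log(1.5)) - 0.5) < 1e-6   # phi(0.5) = log(3/2)
```

Evaluating the characteristic curves used later in the proof amounts to exactly this kind of inversion of the travel-time function.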

Consider the characteristic curves

x_{u,i}(x, t, s) = φ_{u,i}^{−1}(φ_{u,i}(x) − t + s)  (E.17a)
x_{v,i}(x, t, s) = φ_{v,i}^{−1}(φ_{v,i}(x) − t + s)  (E.17b)

where

φ_{u,i}(x) = ∫_0^x dγ/λ_i(γ)  (E.18a)
φ_{v,i}(x) = ∫_x^1 dγ/μ_i(γ).  (E.18b)

We note that φ_{u,i}(x) and φ_{v,i}(x) are strictly increasing and decreasing functions, respectively, and therefore invertible. Along their characteristic lines, we have from (E.1a)–(E.1b)

(d/ds)u_i(x_{u,i}(x, t, s), s) = Σ_{j=1}^n σ_{ij}^{++}(x_{u,i}(x, t, s))u_j(x_{u,i}(x, t, s), s) + Σ_{j=1}^m σ_{ij}^{+−}(x_{u,i}(x, t, s))v_j(x_{u,i}(x, t, s), s) + d_{1,i}(x_{u,i}(x, t, s), s)  (E.19a)
(d/ds)v_i(x_{v,i}(x, t, s), s) = Σ_{j=1}^n σ_{ij}^{−+}(x_{v,i}(x, t, s))u_j(x_{v,i}(x, t, s), s) + Σ_{j=1}^m σ_{ij}^{−−}(x_{v,i}(x, t, s))v_j(x_{v,i}(x, t, s), s) + d_{2,i}(x_{v,i}(x, t, s), s).  (E.19b)
j=1

Integrating (E.19a) and (E.19b) from s = s_{u,i}^0(x, t) and s = s_{v,i}^0(x, t), respectively, to s = t, where

s_{u,i}^0(x, t) = max{0, t − φ_{u,i}(x)}  (E.20a)
s_{v,i}^0(x, t) = max{0, t − φ_{v,i}(x)},  (E.20b)

we obtain

u_i(x, t) = u_i(x_{u,i}(x, t, s_{u,i}^0(x, t)), s_{u,i}^0(x, t)) + I_{u,i}[u, v](x, t) + D_{1,i}(x, t)  (E.21a)
v_i(x, t) = v_i(x_{v,i}(x, t, s_{v,i}^0(x, t)), s_{v,i}^0(x, t)) + I_{v,i}[u, v](x, t) + D_{2,i}(x, t)  (E.21b)

where

I_{u,i}[u, v](x, t) = ∫_{s_{u,i}^0(x,t)}^t Σ_{j=1}^n σ_{ij}^{++}(x_{u,i}(x, t, s))u_j(x_{u,i}(x, t, s), s)ds + ∫_{s_{u,i}^0(x,t)}^t Σ_{j=1}^m σ_{ij}^{+−}(x_{u,i}(x, t, s))v_j(x_{u,i}(x, t, s), s)ds  (E.22a)
I_{v,i}[u, v](x, t) = ∫_{s_{v,i}^0(x,t)}^t Σ_{j=1}^n σ_{ij}^{−+}(x_{v,i}(x, t, s))u_j(x_{v,i}(x, t, s), s)ds + ∫_{s_{v,i}^0(x,t)}^t Σ_{j=1}^m σ_{ij}^{−−}(x_{v,i}(x, t, s))v_j(x_{v,i}(x, t, s), s)ds  (E.22b)
D_{1,i}(x, t) = ∫_{s_{u,i}^0(x,t)}^t d_{1,i}(x_{u,i}(x, t, s), s)ds  (E.22c)
D_{2,i}(x, t) = ∫_{s_{v,i}^0(x,t)}^t d_{2,i}(x_{v,i}(x, t, s), s)ds.  (E.22d)
Using the boundary and initial conditions (E.1c)–(E.1f), we find

u_i(x, t) = H_{u,i}(x, t) + I_{u,i}[u, v](x, t) + D_{1,i}(x, t) + D_{3,i}(x, t) + C_{u,i}(x, t)  (E.23a)
v_i(x, t) = H_{v,i}(x, t) + I_{v,i}[u, v](x, t) + D_{2,i}(x, t) + D_{4,i}(x, t) + C_{v,i}(x, t) + P_i(x, t) + F_i(x, t)  (E.23b)

where

H_{u,i}(x, t) = u_{0,i}(x_{u,i}(x, t, 0)) if t < φ_{u,i}(x), and 0 if t ≥ φ_{u,i}(x)  (E.24a)
H_{v,i}(x, t) = v_{0,i}(x_{v,i}(x, t, 0)) if t < φ_{v,i}(x), and 0 if t ≥ φ_{v,i}(x)  (E.24b)

and

D_{3,i}(x, t) = 0 if t < φ_{u,i}(x), and d_{3,i}(t − φ_{u,i}(x)) if t ≥ φ_{u,i}(x)  (E.25a)
D_{4,i}(x, t) = 0 if t < φ_{v,i}(x), and d_{4,i}(t − φ_{v,i}(x)) if t ≥ φ_{v,i}(x)  (E.25b)

with

C_{u,i}(x, t) = 0 if t < φ_{u,i}(x), and Σ_{j=1}^m q_{ij}v_j(0, t − φ_{u,i}(x)) if t ≥ φ_{u,i}(x)  (E.26a)
C_{v,i}(x, t) = 0 if t < φ_{v,i}(x), and Σ_{j=1}^n (r_{ij} + g_{ij}(t − φ_{v,i}(x)))u_j(1, t − φ_{v,i}(x)) if t ≥ φ_{v,i}(x)  (E.26b)

and

P_i(x, t) = 0 if t < φ_{v,i}(x), and p_i(t − φ_{v,i}(x)) if t ≥ φ_{v,i}(x)  (E.27)

where

p_i(t) = Σ_{j=1}^n ∫_0^1 K_{ij}^u(ξ, t)u_j(ξ, t)dξ + Σ_{j=1}^m ∫_0^1 K_{ij}^v(ξ, t)v_j(ξ, t)dξ  (E.28)
and

F_i(x, t) = 0 if t < φ_{v,i}(x), and f_i(t − φ_{v,i}(x)) if t ≥ φ_{v,i}(x).  (E.29)

We now consider the terms in C_{u,i} and C_{v,i}, and insert (E.23), recalling that t ≤ T = min{t̄_u, t̄_v}, to obtain

C_{u,i}(x, t) = 0 if t < φ_{u,i}(x), and, for t ≥ φ_{u,i}(x),
C_{u,i}(x, t) = Σ_{j=1}^m q_{ij}[H_{v,j}(0, t − φ_{u,i}(x)) + I_{v,j}[u, v](0, t − φ_{u,i}(x)) + D_{2,j}(0, t − φ_{u,i}(x))]  (E.30a)

C_{v,i}(x, t) = 0 if t < φ_{v,i}(x), and, for t ≥ φ_{v,i}(x),
C_{v,i}(x, t) = Σ_{j=1}^n (r_{ij} + g_{ij}(t − φ_{v,i}(x)))[H_{u,j}(0, t − φ_{v,i}(x)) + I_{u,j}[u, v](0, t − φ_{v,i}(x)) + D_{1,j}(0, t − φ_{v,i}(x))].  (E.30b)

Inserting (E.30a) into (E.23) yields

u_i(x, t) = H_{u,i}(x, t) + I_{u,i}[u, v](x, t) + D_{1,i}(x, t) + D_{3,i}(x, t) + J_{u,i}[u, v](x, t) + Q_{u,i}(x, t)  (E.31a)
v_i(x, t) = H_{v,i}(x, t) + I_{v,i}[u, v](x, t) + D_{2,i}(x, t) + D_{4,i}(x, t) + J_{v,i}[u, v](x, t) + Q_{v,i}(x, t) + F_i(x, t) + P_i(x, t)  (E.31b)

where

J_{u,i}[u, v](x, t) = 0 if t < φ_{u,i}(x), and Σ_{j=1}^m q_{ij}I_{v,j}[u, v](0, t − φ_{u,i}(x)) if t ≥ φ_{u,i}(x)  (E.32a)
J_{v,i}[u, v](x, t) = 0 if t < φ_{v,i}(x), and Σ_{j=1}^n (r_{ij} + g_{ij}(t − φ_{v,i}(x)))I_{u,j}[u, v](0, t − φ_{v,i}(x)) if t ≥ φ_{v,i}(x)  (E.32b)

and

Q_{u,i}(x, t) = 0 if t < φ_{u,i}(x), and Σ_{j=1}^m q_{ij}[H_{v,j}(0, t − φ_{u,i}(x)) + D_{2,j}(0, t − φ_{u,i}(x))] if t ≥ φ_{u,i}(x)  (E.33a)
Q_{v,i}(x, t) = 0 if t < φ_{v,i}(x), and Σ_{j=1}^n (r_{ij} + g_{ij}(t − φ_{v,i}(x)))[H_{u,j}(0, t − φ_{v,i}(x)) + D_{1,j}(0, t − φ_{v,i}(x))] if t ≥ φ_{v,i}(x).  (E.33b)

Next, we define
 T
w(x, t) = u 1 (x, t) u 2 (x, t) . . . u n (x, t) v1 (x, t) v2 (x, t) . . . vm (x, t)
 T
= w1 (x, t) w2 (x, t) . . . wn+m (x, t) (E.34)

and write (E.31) as

w(x, t) = ψ(x, t) + Ψ [w](x, t) (E.35)

where
\[
\psi(x,t) = \begin{bmatrix} [H_{u,1} + D_{1,1} + D_{3,1} + Q_{u,1}](x,t) \\ [H_{u,2} + D_{1,2} + D_{3,2} + Q_{u,2}](x,t) \\ \vdots \\ [H_{u,n} + D_{1,n} + D_{3,n} + Q_{u,n}](x,t) \\ [H_{v,1} + D_{2,1} + D_{4,1} + Q_{v,1} + F_1 + P_1](x,t) \\ \vdots \\ [H_{v,m} + D_{2,m} + D_{4,m} + Q_{v,m} + F_m + P_m](x,t) \end{bmatrix} \tag{E.36}
\]

and
\[
\Psi[w](x,t) = \begin{bmatrix} I_{u,1}[u,v](x,t) + J_{u,1}[u,v](x,t) \\ I_{u,2}[u,v](x,t) + J_{u,2}[u,v](x,t) \\ \vdots \\ I_{u,n}[u,v](x,t) + J_{u,n}[u,v](x,t) \\ I_{v,1}[u,v](x,t) + J_{v,1}[u,v](x,t) \\ \vdots \\ I_{v,m}[u,v](x,t) + J_{v,m}[u,v](x,t) \end{bmatrix}. \tag{E.37}
\]

We note that ψ(x, t) is bounded for all x ∈ [0, 1] and t ∈ [0, T ] since it is a function
of the bounded initial states, bounded system parameters, bounded source terms and
pi , which is a weighted L 2 norm of the system states, and hence bounded by (1.40).
Let ψ̄ be such that
\[
|\psi(x,t)|_\infty \le \bar\psi \tag{E.38}
\]



for all x ∈ [0, 1] and t ∈ [0, T ], and define the sequence

\[
w^0(x,t) = \psi(x,t) \tag{E.39a}
\]
\[
w^{q+1}(x,t) = \psi(x,t) + \Psi[w^q](x,t), \quad q \ge 0 \tag{E.39b}
\]
and the differences
\[
\Delta w^0(x,t) = \psi(x,t) \tag{E.40a}
\]
\[
\Delta w^{q+1}(x,t) = w^{q+1}(x,t) - w^q(x,t), \quad q \ge 0 \tag{E.40b}
\]

for which the following holds
\[
\Delta w^{q+1}(x,t) = \Psi[\Delta w^q](x,t) \tag{E.41}
\]
due to linearity. Then, by construction,
\[
w(x,t) = \lim_{q\to\infty} w^q(x,t) = \sum_{q=0}^{\infty} \Delta w^q(x,t). \tag{E.42}
\]

We will prove that the series (E.42) converges by induction. Suppose that
\[
|\Delta w^q(x,t)|_\infty \le \bar\psi C^q \frac{t^q}{q!} \tag{E.43}
\]

for all q ≥ 0, x ∈ [0, 1] and t ∈ [0, T], where
\[
C = \bar\sigma(n+m)\big(2 + \bar q m + (\bar c + \bar g) n\big). \tag{E.44}
\]

Clearly, (E.43) holds for q = 0, since
\[
|\Delta w^0(x,t)|_\infty = |w^0(x,t)|_\infty = |\psi(x,t)|_\infty \le \bar\psi \tag{E.45}
\]

by construction. Assume now that (E.43) holds for some q ≥ 0. We find
\[
\begin{aligned}
|I_{u,i}[\Delta w^q](x,t)| &\le \bigg| \sum_{j=1}^{n} \int_{s_{u,i}(x,t)}^{t} \sigma^{++}_{ij}(x_{u,i}(x,t,s))\, \Delta u^q_j(x_{u,i}(x,t,s), s)\, ds \\
&\qquad + \sum_{j=1}^{m} \int_{s_{u,i}(x,t)}^{t} \sigma^{+-}_{ij}(x_{u,i}(x,t,s))\, \Delta v^q_j(x_{u,i}(x,t,s), s)\, ds \bigg| \\
&\le \bar\sigma \sum_{j=1}^{n} \int_{s_{u,i}(x,t)}^{t} |\Delta u^q_j(x_{u,i}(x,t,s), s)|\, ds + \bar\sigma \sum_{j=1}^{m} \int_{s_{u,i}(x,t)}^{t} |\Delta v^q_j(x_{u,i}(x,t,s), s)|\, ds \\
&\le \bar\sigma (n+m) \int_{s_{u,i}(x,t)}^{t} |\Delta w^q(x_{u,i}(x,t,s), s)|_\infty\, ds. \tag{E.46}
\end{aligned}
\]

Using assumption (E.43), we obtain
\[
|I_{u,i}[\Delta w^q](x,t)| \le \bar\psi C^q \bar\sigma(n+m) \int_{s_{u,i}(x,t)}^{t} \frac{s^q}{q!}\, ds \le \bar\psi C^q \bar\sigma(n+m) \frac{t^{q+1}}{(q+1)!} \tag{E.47}
\]

for all i = 1, ..., n, x ∈ [0, 1] and t ∈ [0, T]. Moreover, from the definition of J_{u,i} in (E.32a), we have
\[
\begin{aligned}
|J_{u,i}[\Delta w^q](x,t)| &= \bigg| \sum_{j=1}^{m} q_{ij}\, I_{v,j}[\Delta w^q](0, t-\phi_{u,i}(x)) \bigg| \le \bar q \sum_{j=1}^{m} \big| I_{v,j}[\Delta w^q](0, t-\phi_{u,i}(x)) \big| \\
&\le \bar q \sum_{j=1}^{m} \bar\psi C^q \bar\sigma(n+m) \frac{(t-\phi_{u,i}(x))^{q+1}}{(q+1)!} \le \bar\psi C^q \bar q\,\bar\sigma\, m (n+m) \frac{t^{q+1}}{(q+1)!} \tag{E.48}
\end{aligned}
\]

for all i = 1, ..., n, x ∈ [0, 1] and t ∈ [0, T]. Similar derivations for I_{v,i}[Δw^q](x,t) and J_{v,i}[Δw^q](x,t) give the bounds
\[
|I_{v,i}[\Delta w^q](x,t)| \le \bar\psi C^q \bar\sigma(n+m) \frac{t^{q+1}}{(q+1)!} \tag{E.49}
\]
\[
|J_{v,i}[\Delta w^q](x,t)| \le \bar\psi C^q (\bar c + \bar g)\bar\sigma\, n(n+m) \frac{t^{q+1}}{(q+1)!}. \tag{E.50}
\]

Combining all this, we obtain the bound
\[
|\Psi[\Delta w^q](x,t)|_\infty \le \bar\psi C^q (n+m)\bar\sigma \big[2 + \bar q m + (\bar c + \bar g) n\big] \frac{t^{q+1}}{(q+1)!} \le \bar\psi C^{q+1} \frac{t^{q+1}}{(q+1)!} \tag{E.51}
\]

for all x ∈ [0, 1] and t ∈ [0, T]. Since Δw^{q+1} = Ψ[Δw^q] by (E.41), this proves the claim, and hence (E.43) holds for all q ≥ 0, that is,
\[
|\Delta w^q(x,t)|_\infty \le \bar\psi C^q \frac{t^q}{q!} \tag{E.52}
\]

and from (E.42),
\[
|w(x,t)|_\infty = \bigg| \sum_{q=0}^{\infty} \Delta w^q(x,t) \bigg|_\infty \le \sum_{q=0}^{\infty} |\Delta w^q(x,t)|_\infty \le \bar\psi \sum_{q=0}^{\infty} C^q \frac{t^q}{q!} \le \bar\psi e^{Ct} \tag{E.53}
\]

for all x ∈ [0, 1] and t ∈ [0, T ], which proves that u(x, t) and v(x, t) are bounded
for all x ∈ [0, 1] and t ∈ [0, T ].
The above result also implies that u_i(x,T) and v_j(x,T), for i = 1, 2, ..., n, j = 1, 2, ..., m, are bounded for all x ∈ [0, 1]. By shifting time by T units and repeating the above line of reasoning, we obtain that u_i(x,t) and v_j(x,t), for i = 1, 2, ..., n, j = 1, 2, ..., m, are bounded for all x ∈ [0, 1] and t ∈ [T, 2T]. Continuing in this manner proves the theorem. □
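The successive-approximation argument can be checked numerically on a scalar toy analogue of (E.39). The sketch below is an illustration, not code from the book: it takes ψ ≡ ψ̄ constant and Ψ[w](t) = C∫₀ᵗ w(s)ds, for which the increments Δw^q equal ψ̄C^q t^q/q! exactly, matching (E.43), and the series sums to ψ̄e^{Ct} as in (E.53). The names psi_bar, C, T and the grid size N are illustrative choices.

```python
import math

# Scalar toy analogue of the successive approximations (E.39):
# psi(t) = psi_bar (constant) and Psi[w](t) = C * int_0^t w(s) ds.
# All constants are illustrative, not from the book.
psi_bar, C, T, N = 1.0, 2.0, 1.0, 400
ts = [T * k / (N - 1) for k in range(N)]

def Psi(w):
    """Psi[w](t) = C * int_0^t w(s) ds via a cumulative trapezoid rule."""
    out, acc = [0.0], 0.0
    for k in range(1, N):
        acc += 0.5 * (w[k - 1] + w[k]) * (ts[k] - ts[k - 1])
        out.append(C * acc)
    return out

w = [psi_bar] * N       # w^0 = psi
delta = [psi_bar] * N   # Delta w^0 = psi
for q in range(20):     # w^{q+1} = psi + Psi[w^q], Delta w^{q+1} = Psi[Delta w^q]
    delta = Psi(delta)
    w = [psi_bar + y for y in Psi(w)]
    bound = psi_bar * C ** (q + 1) * T ** (q + 1) / math.factorial(q + 1)
    # the factorial bound (E.43), up to a small quadrature error
    assert max(abs(d) for d in delta) <= 1.01 * bound

print(abs(w[-1] - psi_bar * math.exp(C * T)))  # iterates approach psi_bar * e^{CT}
```

Because the increments decay like t^q/q!, twenty iterations already reproduce ψ̄e^{CT} to within quadrature accuracy.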

E.2 Proof of Corollary 1.1

Proof We start by proving this for the L 2 -norm. Consider a system u(x, t) defined
for x ∈ [0, 1], t ≥ 0 with initial condition u(x, 0) = u 0 (x). Assume u ≡ 0 after a
finite time T. By Theorem 1.1, we have
\[
\|u\| \le M\|u_0\| e^{ct} \tag{E.54}
\]

for some positive constants M and c. At time T,
\[
\|u(T)\| \le M\|u_0\| e^{cT} = M\|u_0\| e^{(c+k)T} e^{-kT} \le G\|u_0\| e^{-kT} \tag{E.55}
\]
where G = M e^{(c+k)T}, for some constant k > 0. Since u ≡ 0 for t ≥ T, it also follows that

\[
\|u(t)\| \le G\|u_0\| e^{-kt} \tag{E.56}
\]

which proves exponential convergence of u to zero in the L 2 -sense. The proof for
the ∞-norm is similar and omitted. 
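The argument in this proof is easy to sanity-check numerically: on a grid of times, the worst case allowed by (E.54) together with u ≡ 0 for t ≥ T never exceeds the exponential envelope (E.56). The constants below are arbitrary test values, not from the book.

```python
import math

# Check: ||u(t)|| <= M ||u0|| e^{ct} for t < T and u = 0 for t >= T implies
# ||u(t)|| <= G ||u0|| e^{-kt} with G = M e^{(c+k)T}, for any k > 0.
M, c, T, k, u0 = 2.0, 1.5, 3.0, 0.7, 1.0
G = M * math.exp((c + k) * T)

ok = True
for i in range(1001):
    t = 6.0 * i / 1000  # samples both t < T and t >= T
    norm_u = M * u0 * math.exp(c * t) if t < T else 0.0  # worst case allowed by (E.54)
    ok = ok and norm_u <= G * u0 * math.exp(-k * t) + 1e-9
print("bound (E.56) holds on the grid:", ok)
```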

E.3 Proof of Lemma 9.3

Bound on V̇4 :
From differentiating (9.44a) with respect to time, inserting the dynamics (9.36a), and integrating by parts, we find
\[
\begin{aligned}
\dot V_4(t) ={}& -\lambda e^{-\delta} w^2(1,t) + \lambda w^2(0,t) - \lambda\delta \int_0^1 e^{-\delta x} w^2(x,t)\,dx \\
&+ 2\int_0^1 e^{-\delta x} w^2(x,t)\,\hat c_{11}(t)\,dx + 2\int_0^1 e^{-\delta x} w(x,t)\,\hat c_{12}(t)\, z(x,t)\,dx \\
&+ 2\int_0^1 e^{-\delta x} w(x,t) \int_0^x \omega(x,\xi,t)\, w(\xi,t)\,d\xi\,dx + 2\int_0^1 e^{-\delta x} w(x,t) \int_0^x \kappa(x,\xi,t)\, z(\xi,t)\,d\xi\,dx \\
&+ 2\int_0^1 e^{-\delta x} w(x,t)\,\hat c_{11}(t)\, e(x,t)\,dx + 2\int_0^1 e^{-\delta x} w(x,t)\,\hat c_{12}(t)\,\epsilon(x,t)\,dx \\
&+ 2\rho \int_0^1 e^{-\delta x} w(x,t)\, e(x,t)\,\|\epsilon(t)\|^2\,dx. \tag{E.57}
\end{aligned}
\]

Using
\[
\begin{aligned}
\int_0^1 e^{-\delta x} \Big( \int_0^x w(\xi,t)\,d\xi \Big)^2 dx &\le \int_0^1 e^{-\delta x} \int_0^x w^2(\xi,t)\,d\xi\,dx \\
&\le -\frac{e^{-\delta}}{\delta} \int_0^1 w^2(\xi,t)\,d\xi + \frac{1}{\delta} \int_0^1 e^{-\delta x} w^2(x,t)\,dx \\
&\le \frac{1}{\delta} \int_0^1 \big( e^{-\delta x} - e^{-\delta} \big) w^2(x,t)\,dx \le \int_0^1 e^{-\delta x} w^2(x,t)\,dx \tag{E.58}
\end{aligned}
\]

for δ ≥ 1, and Young's inequality, we get
\[
\begin{aligned}
\dot V_4(t) \le{}& -\lambda e^{-\delta} w^2(1,t) + \lambda w^2(0,t) - \big(\lambda\delta - 2\bar c_{11} - \bar\omega^2 - 5\big) \int_0^1 e^{-\delta x} w^2(x,t)\,dx \\
&+ (\bar c_{12}^2 + \bar\kappa^2)\|z(t)\|^2 + \bar c_{11}^2\|e(t)\|^2 + \bar c_{12}^2\|\epsilon(t)\|^2 + 2\rho\|w(t)\|\|e(t)\|\|\epsilon(t)\|^2. \tag{E.59}
\end{aligned}
\]

Consider the last term. Using Young's and Minkowski's inequalities, we have
\[
\begin{aligned}
2\rho\|w(t)\|\|e(t)\|\|\epsilon(t)\|^2 &\le 2\rho\|w(t)\|\|e(t)\|\|\epsilon(t)\|\big(\|u(t)\| + \|v(t)\|\big) \\
&\le \rho_1\rho^2\|w(t)\|^2\|e(t)\|^2\|\epsilon(t)\|^2 + \frac{1}{\rho_1}\big(\|u(t)\| + \|v(t)\|\big)^2 \\
&\le \rho_1\rho^2\|w(t)\|^2\|e(t)\|^2\|\epsilon(t)\|^2 + \frac{1}{\rho_1}\big(\|\hat u(t) + e(t)\| + \|\hat v(t) + \epsilon(t)\|\big)^2 \\
&\le \rho_1\rho^2\|w(t)\|^2\|e(t)\|^2\|\epsilon(t)\|^2 + \frac{1}{\rho_1}\big(\|w(t)\| + \|e(t)\| + \|T^{-1}[w,z](t)\| + \|\epsilon(t)\|\big)^2 \\
&\le \rho_1\rho^2\|w(t)\|^2\|e(t)\|^2\|\epsilon(t)\|^2 + \frac{4}{\rho_1}\big(\|w(t)\|^2 + \|T^{-1}[w,z](t)\|^2 + \|e(t)\|^2 + \|\epsilon(t)\|^2\big) \\
&\le \rho_1\rho^2\|w(t)\|^2\|e(t)\|^2\|\epsilon(t)\|^2 + \frac{4}{\rho_1}\big((1 + 2A_3^2)\|w(t)\|^2 + 2A_4^2\|z(t)\|^2 + \|e(t)\|^2 + \|\epsilon(t)\|^2\big) \tag{E.60}
\end{aligned}
\]

for some arbitrary ρ₁ > 0. Choosing ρ₁ = e^δ, we find
\[
\begin{aligned}
\dot V_4(t) \le{}& -\lambda e^{-\delta} w^2(1,t) + 3\lambda\bar q^2 z^2(0,t) + 3\lambda\bar q^2\epsilon^2(0,t) + 3\lambda e^2(0,t) \\
&- \big(\lambda\delta - 2\bar c_{11} - \bar\omega^2 - 9 - 8A_3^2\big)\int_0^1 e^{-\delta x} w^2(x,t)\,dx + (\bar c_{12}^2 + \bar\kappa^2)\|z(t)\|^2 \\
&+ \bar c_{11}^2\|e(t)\|^2 + \bar c_{12}^2\|\epsilon(t)\|^2 + e^\delta\rho^2\|w(t)\|^2\|e(t)\|^2\|\epsilon(t)\|^2 \\
&+ 8e^{-\delta}A_4^2\|z(t)\|^2 + 4e^{-\delta}\|e(t)\|^2 + 4e^{-\delta}\|\epsilon(t)\|^2. \tag{E.61}
\end{aligned}
\]

Defining
\[
h_1 = 3\lambda\bar q^2, \qquad h_2 = 2\bar c_{11} + \bar\omega^2 + 9 + 8A_3^2 \tag{E.62a}
\]
\[
h_3 = \bar c_{12}^2 + \bar\kappa^2 + 8e^{-\delta}A_4^2 \tag{E.62b}
\]
and
\[
l_1(t) = e^{2\delta}\rho^2\|e(t)\|^2\|\epsilon(t)\|^2 \tag{E.63a}
\]
\[
l_2(t) = (\bar c_{11}^2 + 4e^{-\delta})\|e(t)\|^2 + (\bar c_{12}^2 + 4e^{-\delta})\|\epsilon(t)\|^2 + 3\lambda e^2(0,t) + 3\lambda\bar q^2\epsilon^2(0,t), \tag{E.63b}
\]
we obtain
\[
\dot V_4(t) \le h_1 z^2(0,t) - [\lambda\delta - h_2] V_4(t) + h_3 V_5(t) + l_1(t)V_4(t) + l_2(t). \tag{E.64}
\]

Bound on V̇5 :
From differentiating (9.44b) with respect to time, inserting the dynamics (9.36b), and integrating by parts, we find
\[
\begin{aligned}
\dot V_5(t) ={}& \mu e^k z^2(1,t) - \mu z^2(0,t) - \mu k \int_0^1 e^{kx} z^2(x,t)\,dx + 2\int_0^1 e^{kx} z^2(x,t)\,\hat c_{22}(t)\,dx \\
&- 2\int_0^1 e^{kx} z(x,t)\,\lambda\hat K^u(x,0,t)\, q(t)\,\epsilon(0,t)\,dx - 2\int_0^1 e^{kx} z(x,t)\,\lambda\hat K^u(x,0,t)\,\tilde q(t)\, z(0,t)\,dx \\
&+ 2\int_0^1 e^{kx} z(x,t)\,\lambda\hat K^u(x,0,t)\, e(0,t)\,dx - 2\int_0^1 e^{kx} z(x,t) \int_0^x \hat K^u_t(x,\xi,t)\, w(\xi,t)\,d\xi\,dx \\
&- 2\int_0^1 e^{kx} z(x,t) \int_0^x \hat K^v_t(x,\xi,t)\, T^{-1}[w,z](\xi,t)\,d\xi\,dx \\
&+ 2\int_0^1 e^{kx} z(x,t)\, T[\hat c_{11}e + \hat c_{12}\epsilon,\ \hat c_{21}e + \hat c_{22}\epsilon](x,t)\,dx \\
&+ 2\rho \int_0^1 e^{kx} z(x,t)\, T[e,\epsilon](x,t)\,\|\epsilon(t)\|^2\,dx. \tag{E.65}
\end{aligned}
\]

Using Young's inequality
\[
\begin{aligned}
\dot V_5(t) \le{}& \mu e^k z^2(1,t) - \mu z^2(0,t) - [k\mu - 2\bar c_{22} - 6] \int_0^1 e^{kx} z^2(x,t)\,dx \\
&+ \lambda^2\bar K^2\bar q^2 e^k \epsilon^2(0,t) + \lambda^2\bar K^2 e^k \tilde q^2(t) z^2(0,t) + \lambda^2\bar K^2 e^k e^2(0,t) \\
&+ \int_0^1 e^{kx} \Big( \int_0^x \hat K^u_t(x,\xi,t)\, w(\xi,t)\,d\xi \Big)^2 dx + 2\int_0^1 e^{kx} \Big( \int_0^x \hat K^v_t(x,\xi,t)\, T^{-1}[w,z](\xi,t)\,d\xi \Big)^2 dx \\
&+ \int_0^1 e^{kx}\, T^2[\hat c_{11}e + \hat c_{12}\epsilon,\ \hat c_{21}e + \hat c_{22}\epsilon](x,t)\,dx \\
&+ \rho^2 e^{\delta+2k} \int_0^1 \big( z(x,t)\, T[e,\epsilon](x,t) \big)^2\,\|\epsilon(t)\|^2\,dx + e^{-\delta}\big(\|u(t)\| + \|v(t)\|\big)^2 \tag{E.66}
\end{aligned}
\]

and Cauchy–Schwarz' and Minkowski's inequalities
\[
\begin{aligned}
\dot V_5(t) \le{}& -\mu z^2(0,t) - [k\mu - 2\bar c_{22} - 6] \int_0^1 e^{kx} z^2(x,t)\,dx + \lambda^2\bar K^2\bar q^2 e^k \epsilon^2(0,t) \\
&+ \lambda^2\bar K^2 e^k \tilde q^2(t) z^2(0,t) + \lambda^2\bar K^2 e^k e^2(0,t) + e^k\|\hat K^u_t(t)\|^2\|w(t)\|^2 \\
&+ 2e^k\|\hat K^v_t(t)\|^2 A_3^2\|w(t)\|^2 + 2e^k\|\hat K^v_t(t)\|^2 A_4^2\|z(t)\|^2 \\
&+ 2e^k A_1^2\bar c_{11}^2\|e(t)\|^2 + 2e^k A_1^2\bar c_{12}^2\|\epsilon(t)\|^2 + 2e^k A_2^2\bar c_{21}^2\|e(t)\|^2 + 2e^k A_2^2\bar c_{22}^2\|\epsilon(t)\|^2 \\
&+ 2\rho^2 e^{\delta+2k}\|z(t)\|^2\big(A_1^2\|e(t)\|^2 + A_2^2\|\epsilon(t)\|^2\big)\|\epsilon(t)\|^2 \\
&+ 4e^{-\delta}\big((1+A_3)^2\|w(t)\|^2 + A_4^2\|z(t)\|^2 + \|e(t)\|^2 + \|\epsilon(t)\|^2\big). \tag{E.67}
\end{aligned}
\]

Defining the positive constants
\[
h_4 = \lambda^2\bar K^2, \qquad h_5 = 4(1+A_3)^2, \qquad h_6 = 2\bar c_{22} + 4A_4^2 + 6 \tag{E.68}
\]
and
\[
l_3(t) = e^{\delta+k}\|\hat K^u_t(t)\|^2 + 2e^{\delta+k}\|\hat K^v_t(t)\|^2 A_3^2 \tag{E.69a}
\]
\[
l_4(t) = 2e^k\|\hat K^v_t(t)\|^2 A_4^2 + 2\rho^2 e^{\delta+2k}\big(A_1^2\|e(t)\|^2 + A_2^2\|\epsilon(t)\|^2\big)\|\epsilon(t)\|^2 \tag{E.69b}
\]
\[
\begin{aligned}
l_5(t) ={}& \lambda^2\bar K^2\bar q^2 e^k \epsilon^2(0,t) + \lambda^2\bar K^2 e^k e^2(0,t) + 2e^k A_1^2\bar c_{11}^2\|e(t)\|^2 + 2e^k A_1^2\bar c_{12}^2\|\epsilon(t)\|^2 \\
&+ 2e^k A_2^2\bar c_{21}^2\|e(t)\|^2 + 2e^k A_2^2\bar c_{22}^2\|\epsilon(t)\|^2 + 4e^{-\delta}\|e(t)\|^2 + 4e^{-\delta}\|\epsilon(t)\|^2, \tag{E.69c}
\end{aligned}
\]
all of which are integrable, we obtain
\[
\dot V_5(t) \le -\big[\mu - e^k h_4 \tilde q^2(t)\big] z^2(0,t) + h_5 V_4(t) - [k\mu - h_6] V_5(t) + l_3(t)V_4(t) + l_4(t)V_5(t) + l_5(t). \tag{E.70}
\]
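Inequalities of the form (E.64) and (E.70) are closed by a standard comparison argument: V̇ ≤ −cV + l(t)V + b(t) with c > 0 and l, b nonnegative and integrable implies that V remains bounded and decays. The sketch below is illustrative only; the choice l = b = 1/(1+t)² and all constants are assumptions for the demo, not from the book. It integrates the worst case (equality) with forward Euler:

```python
# Worst-case integration of V'(t) = -c V + l(t) V + b(t) with integrable l, b.
c, V, dt, T = 1.0, 5.0, 1e-3, 40.0
Vmax, t = V, 0.0
while t < T:
    l = b = 1.0 / (1.0 + t) ** 2  # nonnegative and integrable on [0, inf)
    V += dt * (-c * V + l * V + b)
    Vmax = max(Vmax, V)
    t += dt
print(Vmax, V)  # V stays bounded throughout and ends near zero
```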

E.4 Proof of Lemma 9.7

Bound on V̇1 :
From differentiating (9.116a) with respect to time, inserting the dynamics (9.109a) and integrating by parts, we find
\[
\begin{aligned}
\dot V_1(t) \le{}& w^2(0,t) - \delta \int_0^1 e^{-\delta x} w^2(x,t)\,dx + 2\int_0^1 e^{-\delta x} w(x,t)\,\hat\theta(x,t)\, z(x,t)\,dx \\
&+ 2\int_0^1 e^{-\delta x} w(x,t)\,\hat\theta(x,t)\,\hat\epsilon(x,t)\,dx + 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\, w(x,t) \int_0^x \omega(x,\xi,t)\, w(\xi,t)\,d\xi\,dx \\
&+ 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\, w(x,t) \int_0^x b(x,\xi,t)\, z(\xi,t)\,d\xi\,dx + 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\, w(x,t)\,\dot{\hat q}(t)\,\eta(x,t)\,dx \\
&+ 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\, w(x,t) \int_0^x \hat\theta_t(\xi,t)\, M(x,\xi,t)\,d\xi\,dx. \tag{E.71}
\end{aligned}
\]

Applying Young's inequality to the cross terms, we obtain
\[
\begin{aligned}
\dot V_1(t) \le{}& 2\bar q^2 z^2(0,t) + 2\bar q^2\hat\epsilon^2(0,t) - (\delta - 6) \int_0^1 e^{-\delta x} w^2(x,t)\,dx \\
&+ \bar\theta^2 \int_0^1 e^{-\delta x} z^2(x,t)\,dx + \bar\theta^2 \int_0^1 e^{-\delta x}\hat\epsilon^2(x,t)\,dx \\
&+ \underline\lambda^{-2} \int_0^1 e^{-\delta x} \Big( \int_0^x \omega(x,\xi,t)\, w(\xi,t)\,d\xi \Big)^2 dx + \underline\lambda^{-2} \int_0^1 e^{-\delta x} \Big( \int_0^x b(x,\xi,t)\, z(\xi,t)\,d\xi \Big)^2 dx \\
&+ \underline\lambda^{-2}\dot{\hat q}^2(t) \int_0^1 e^{-\delta x}\eta^2(x,t)\,dx + \underline\lambda^{-2} \int_0^1 e^{-\delta x} \Big( \int_0^x \hat\theta_t(\xi,t)\, M(x,\xi,t)\,d\xi \Big)^2 dx \tag{E.72}
\end{aligned}
\]

where we have inserted the boundary condition (9.109c). Applying Cauchy–Schwarz' inequality to the double integrals yields
\[
\begin{aligned}
\dot V_1(t) \le{}& 2\bar q^2 z^2(0,t) + 2\bar q^2\hat\epsilon^2(0,t) + \bar\theta^2\|z(t)\|^2 + \bar\theta^2\|\hat\epsilon(t)\|^2 \\
&- (\delta - 6 - \underline\lambda^{-2}\bar\omega^2) \int_0^1 e^{-\delta x} w^2(x,t)\,dx + \underline\lambda^{-2}\bar b^2\|z(t)\|^2 \\
&+ \underline\lambda^{-2}\dot{\hat q}^2(t)\|\eta(t)\|^2 + \underline\lambda^{-2}\|\hat\theta_t(t)\|^2\|M(t)\|^2 \tag{E.73}
\end{aligned}
\]
where ω̄ and b̄ are defined in (9.114). Expanding ‖ε̂(t)‖² as
\[
\|\hat\epsilon(t)\|^2 = \frac{\|\hat\epsilon(t)\|^2}{1 + \|N(t)\|^2}\big(1 + \|N(t)\|^2\big) \tag{E.74}
\]
and ε̂²(0,t) as
\[
\hat\epsilon^2(0,t) = \frac{\hat\epsilon^2(0,t)}{1 + \|n_0(t)\|^2}\big(1 + \|n_0(t)\|^2\big) \tag{E.75}
\]

yield
\[
\begin{aligned}
\dot V_1(t) \le{}& 2\bar q^2 z^2(0,t) + 2\bar q^2 \frac{\hat\epsilon^2(0,t)}{1 + \|n_0(t)\|^2}\big(1 + \|n_0(t)\|^2\big) - (\delta - 6 - \underline\lambda^{-2}\bar\omega^2) \int_0^1 e^{-\delta x} w^2(x,t)\,dx \\
&+ \bar\theta^2\|z(t)\|^2 + \bar\theta^2 \frac{\|\hat\epsilon(t)\|^2}{1 + \|N(t)\|^2}\big(1 + \|N(t)\|^2\big) + \underline\lambda^{-2}\bar b^2\|z(t)\|^2 \\
&+ \underline\lambda^{-2}\dot{\hat q}^2(t)\|\eta(t)\|^2 + \underline\lambda^{-2}\|\hat\theta_t(t)\|^2\|M(t)\|^2. \tag{E.76}
\end{aligned}
\]
Selecting $\delta > 6 + \underline\lambda^{-2}\bar\omega^2$, we find
\[
\begin{aligned}
\dot V_1(t) \le{}& h_1 z^2(0,t) - (\delta - 6 - \underline\lambda^{-2}\bar\omega^2)\underline\lambda V_1(t) + h_2 V_2(t) + h_1\|n_0(t)\|^2 \frac{\hat\epsilon^2(0,t)}{1 + \|n_0(t)\|^2} \\
&+ l_1(t)V_3(t) + l_2(t)V_4(t) + l_3(t)V_5(t) + l_4(t) \tag{E.77}
\end{aligned}
\]
for the positive constants
\[
h_1 = 2\bar q^2, \qquad h_2 = \bar\theta^2\bar\mu + \underline\lambda^{-2}\bar b^2\bar\mu \tag{E.78}
\]
and non-negative integrable functions
\[
l_1(t) = \underline\lambda^{-2}\dot{\hat q}^2(t)\bar\lambda, \qquad l_2(t) = \underline\lambda^{-2}\|\hat\theta_t(t)\|^2\bar\mu \tag{E.79a}
\]
\[
l_3(t) = \bar\mu\,\bar\theta^2 \frac{\|\hat\epsilon(t)\|^2}{1 + \|N(t)\|^2} \tag{E.79b}
\]
\[
l_4(t) = 2\bar q^2 \frac{\hat\epsilon^2(0,t)}{1 + \|n_0(t)\|^2} + \bar\theta^2 \frac{\|\hat\epsilon(t)\|^2}{1 + \|N(t)\|^2}. \tag{E.79c}
\]

Bound on V̇2 :
Differentiating (9.116b) with respect to time, inserting the dynamics (9.109), integrating by parts and using Young's inequality yield
\[
\begin{aligned}
\dot V_2(t) \le{}& -z^2(0,t) - \|z(t)\|^2 + \rho_1 \int_0^1 (1+x)\mu^{-1}(x) z^2(x,t)\,dx + \frac{2}{\rho_1}\underline\mu^{-1}\bar K^2\bar\lambda^2\bar q^2\,\hat\epsilon^2(0,t) \\
&+ \rho_2 \int_0^1 (1+x)\mu^{-1}(x) z^2(x,t)\,dx \\
&+ \frac{2}{\rho_2}\underline\mu^{-1} \int_0^1 T\Big[\dot{\hat q}\eta + \int_0^x \hat\theta_t(\xi,t)\, M(x,\xi,t)\,d\xi,\ \int_x^1 \hat\kappa_t(\xi,t)\, N(x,\xi,t)\,d\xi\Big]^2(x,t)\,dx \\
&+ \rho_3 \int_0^1 (1+x)\mu^{-1}(x) z^2(x,t)\,dx + \frac{2}{\rho_3}\underline\mu^{-1} \int_0^1 \Big( \int_0^x \hat K^u_t(x,\xi,t)\, w(\xi,t)\,d\xi \Big)^2 dx \\
&+ \rho_4 \int_0^1 (1+x)\mu^{-1}(x) z^2(x,t)\,dx + \frac{2}{\rho_4}\underline\mu^{-1} \int_0^1 \Big( \int_0^x \hat K^v_t(x,\xi,t)\, T^{-1}[w,z](\xi,t)\,d\xi \Big)^2 dx \tag{E.80}
\end{aligned}
\]

for some arbitrary positive constants ρ₁, ..., ρ₄. Using Cauchy–Schwarz' inequality, choosing $\rho_1 = \rho_2 = \rho_3 = \rho_4 = \frac{1}{16}\underline\mu$, and expanding the term in ε̂²(0,t) yield
\[
\begin{aligned}
\dot V_2(t) \le{}& -z^2(0,t) - \frac{1}{4}\underline\mu \int_0^1 (1+x)\mu^{-1}(x) z^2(x,t)\,dx \\
&+ 32\underline\mu^{-2}\bar K^2\bar\lambda^2\bar q^2 \frac{\hat\epsilon^2(0,t)}{1 + \|n_0(t)\|^2}\big(1 + \|n_0(t)\|^2\big) + 64\underline\mu^{-2} G_1^2\dot{\hat q}^2(t)\|\eta(t)\|^2 \\
&+ 64\underline\mu^{-2} G_1^2\|\hat\theta_t(t)\|^2\|M(t)\|^2 + 32\underline\mu^{-2} G_2^2\|\hat\kappa_t(t)\|^2\|N(t)\|^2 \\
&+ 32\underline\mu^{-2}\|\hat K^u_t(t)\|^2\|w(t)\|^2 + 32\underline\mu^{-2} G_3^2\|\hat K^v_t(t)\|^2\|w(t)\|^2 + 32\underline\mu^{-2} G_4^2\|\hat K^v_t(t)\|^2\|z(t)\|^2. \tag{E.81}
\end{aligned}
\]

Specifically, we used
\[
\begin{aligned}
\int_0^1 & T\Big[\dot{\hat q}\eta + \int_0^x \hat\theta_t(\xi)\, M(x,\xi)\,d\xi,\ \int_x^1 \hat\kappa_t(\xi)\, N(x,\xi)\,d\xi\Big]^2(x,t)\,dx \\
&\le G_1^2 \int_0^1 \dot{\hat q}^2(t)\eta^2(x,t)\,dx + G_1^2 \int_0^1 \Big( \int_0^x \hat\theta_t(\xi,t)\, M(x,\xi,t)\,d\xi \Big)^2 dx + G_2^2 \int_0^1 \Big( \int_x^1 \hat\kappa_t(\xi,t)\, N(x,\xi,t)\,d\xi \Big)^2 dx \\
&\le G_1^2\dot{\hat q}^2(t)\|\eta(t)\|^2 + G_1^2\|\hat\theta_t(t)\|^2\|M(t)\|^2 + G_2^2\|\hat\kappa_t(t)\|^2\|N(t)\|^2 \tag{E.82}
\end{aligned}
\]

and
\[
\begin{aligned}
\int_0^1 \Big( \int_0^x \hat K^v_t(x,\xi,t)\, T^{-1}[w,z](\xi,t)\,d\xi \Big)^2 dx &\le \int_0^1 \int_0^x (\hat K^v_t(x,\xi,t))^2\,d\xi \int_0^x (T^{-1}[w,z](\xi,t))^2\,d\xi\,dx \\
&\le \int_0^1 \int_0^x (\hat K^v_t(x,\xi,t))^2\,d\xi\,dx \int_0^1 (T^{-1}[w,z](\xi,t))^2\,d\xi \\
&\le \|\hat K^v_t(t)\|^2 \int_0^1 (T^{-1}[w,z](x,t))^2\,dx \\
&\le \|\hat K^v_t(t)\|^2 \big( G_3^2\|w(t)\|^2 + G_4^2\|z(t)\|^2 \big). \tag{E.83}
\end{aligned}
\]

Inequality (E.81) can be written as
\[
\begin{aligned}
\dot V_2(t) \le{}& -z^2(0,t) - \frac{1}{4}\underline\mu V_2(t) + h_3\|n_0(t)\|^2 \frac{\hat\epsilon^2(0,t)}{1 + \|n_0(t)\|^2} + l_5(t)V_1(t) \\
&+ l_6(t)V_2(t) + l_7(t)V_3(t) + l_8(t)V_4(t) + l_9(t)V_5(t) + l_{10}(t) \tag{E.84}
\end{aligned}
\]
for the positive constant
\[
h_3 = 32\underline\mu^{-2}\bar K^2\bar\lambda^2\bar q^2 \tag{E.85}
\]
and the nonnegative, integrable functions
\[
l_5(t) = 32\underline\mu^{-2}\big(\|\hat K^u_t(t)\|^2 + G_3^2\|\hat K^v_t(t)\|^2\big) e^\delta\bar\lambda \tag{E.86a}
\]
\[
l_6(t) = 32\underline\mu^{-2} G_4^2\|\hat K^v_t(t)\|^2\bar\mu \tag{E.86b}
\]
\[
l_7(t) = 64\underline\mu^{-2} G_1^2\dot{\hat q}^2(t)\bar\lambda \tag{E.86c}
\]
\[
l_8(t) = 64\underline\mu^{-2} G_1^2\|\hat\theta_t(t)\|^2\bar\lambda \tag{E.86d}
\]
\[
l_9(t) = 32\underline\mu^{-2} G_2^2\|\hat\kappa_t(t)\|^2\bar\mu \tag{E.86e}
\]
\[
l_{10}(t) = 32\underline\mu^{-2}\bar K^2\bar\lambda^2\bar q^2 \frac{\hat\epsilon^2(0,t)}{1 + \|n_0(t)\|^2}. \tag{E.86f}
\]

Bound on V̇3 :
We find
\[
\dot V_3(t) \le -\|\eta(t)\|^2 + 4z^2(0,t) + 4\hat\epsilon^2(0,t). \tag{E.87}
\]
Expanding ε̂²(0,t) yields
\[
\dot V_3(t) \le -\|\eta(t)\|^2 + 4z^2(0,t) + 4\frac{\hat\epsilon^2(0,t)}{1 + \|n_0(t)\|^2}\big(1 + \|n_0(t)\|^2\big) \tag{E.88}
\]
and hence
\[
\dot V_3(t) \le -\frac{1}{2}\underline\mu V_3(t) + 4z^2(0,t) + 4\frac{\hat\epsilon^2(0,t)}{1 + \|n_0(t)\|^2}\|n_0(t)\|^2 + l_{11}(t) \tag{E.89}
\]
where the non-negative integrable function
\[
l_{11}(t) = 4\frac{\hat\epsilon^2(0,t)}{1 + \|n_0(t)\|^2} \tag{E.90}
\]
has been defined.



Bound on V̇4 :
We find
\[
\begin{aligned}
\dot V_4(t) &= -2\int_0^1 \int_\xi^1 (2-x)\, M(x,\xi,t)\, M_x(x,\xi,t)\,dx\,d\xi \\
&= -\int_0^1 M^2(1,\xi,t)\,d\xi + \int_0^1 (2-\xi)\, M^2(\xi,\xi,t)\,d\xi - \|M(t)\|^2 \\
&\le 2\|v(t)\|^2 - \|M(t)\|^2 \le 4\|\hat v(t)\|^2 + 4\|\hat\epsilon(t)\|^2 - \|M(t)\|^2 \\
&\le 4G_1^2\|w(t)\|^2 + 4G_2^2\|z(t)\|^2 + 4\|\hat\epsilon(t)\|^2 - \|M(t)\|^2. \tag{E.91}
\end{aligned}
\]
Expanding ‖ε̂(t)‖² yields
\[
\dot V_4(t) \le -\|M(t)\|^2 + 4G_1^2\|w(t)\|^2 + 4G_2^2\|z(t)\|^2 + 4\frac{\|\hat\epsilon(t)\|^2}{1 + \|N(t)\|^2}\big(1 + \|N(t)\|^2\big) \tag{E.92}
\]
and hence
\[
\dot V_4(t) \le -\frac{1}{2}\underline\lambda V_4(t) + h_4 e^\delta V_1(t) + h_5 V_2(t) + l_{12}(t)V_5(t) + l_{13}(t) \tag{E.93}
\]
for the positive constants
\[
h_4 = 4G_1^2\bar\lambda, \qquad h_5 = 4G_2^2\bar\mu \tag{E.94}
\]
and non-negative integrable functions
\[
l_{12}(t) = \bar\mu\, l_{13}(t), \qquad l_{13}(t) = 4\frac{\|\hat\epsilon(t)\|^2}{1 + \|N(t)\|^2}. \tag{E.95}
\]

Bound on V̇5 :
Finally, we find
\[
\begin{aligned}
\dot V_5(t) &= 2\int_0^1 \int_x^1 (1+x)\, N(x,\xi,t)\, N_x(x,\xi,t)\,d\xi\,dx = 2\|u(t)\|^2 - \|n_0(t)\|^2 - \|N(t)\|^2 \\
&\le 4\|\hat u(t)\|^2 + 4\|\hat e(t)\|^2 - \|n_0(t)\|^2 - \|N(t)\|^2 \\
&\le 4\|w(t)\|^2 + 4\|\hat e(t)\|^2 - \|n_0(t)\|^2 - \|N(t)\|^2. \tag{E.96}
\end{aligned}
\]
Expanding ‖ê(t)‖² yields
\[
\dot V_5(t) \le -\|n_0(t)\|^2 - \|N(t)\|^2 + 4\|w(t)\|^2 + 4\frac{\|\hat e(t)\|^2}{1 + f^2(t)}\big(1 + \|\eta(t)\|^2 + \|M(t)\|^2\big) \tag{E.97}
\]
and hence
\[
\dot V_5(t) \le -\|n_0(t)\|^2 - \frac{1}{2}\underline\mu V_5(t) + h_6 e^\delta V_1(t) + l_{14}(t)V_3(t) + l_{14}(t)V_4(t) + l_{15}(t) \tag{E.98}
\]
where
\[
h_6 = 4\bar\lambda \tag{E.99}
\]
is a positive constant, and
\[
l_{14}(t) = l_{15}(t)\bar\lambda, \qquad l_{15}(t) = 4\frac{\|\hat e(t)\|^2}{1 + f^2(t)} \tag{E.100}
\]
are nonnegative, integrable functions. □
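The ρᵢ-splittings used repeatedly in these estimates are instances of Young's inequality, 2ab ≤ ρa² + b²/ρ for any ρ > 0, applied under the integral sign. A quick numerical spot-check of the integral form (the test functions and the quadrature grid are arbitrary illustrative choices, not from the book):

```python
import math

N = 1000
xs = [k / N for k in range(N)]

def integral(h):
    """Left Riemann sum of h over [0, 1]."""
    return sum(h(x) for x in xs) / N

f = lambda x: math.sin(7 * x) + 0.3            # arbitrary test function
g = lambda x: math.exp(-x) * math.cos(3 * x)   # arbitrary test function

for rho in (0.1, 1.0, 16.0):
    lhs = 2 * integral(lambda x: f(x) * g(x))
    rhs = rho * integral(lambda x: f(x) ** 2) + integral(lambda x: g(x) ** 2) / rho
    assert lhs <= rhs + 1e-12  # 2fg <= rho f^2 + g^2/rho holds pointwise
print("2*int(f*g) <= rho*int(f^2) + int(g^2)/rho for all tested rho")
```

Since the scalar inequality holds at every grid point, it holds for the sums exactly; the ρ in the proofs is then tuned (e.g. ρᵢ = μ/16) to keep the damping term dominant.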

E.5 Proof of Lemma 10.3

Bound on V̇1 :
We find
\[
\begin{aligned}
\dot V_1(t) ={}& -2\int_0^1 e^{-\delta x}\alpha(x,t)\alpha_x(x,t)\,dx + 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\,\alpha(x,t)\, c_1(x)\,\beta(x,t)\,dx \\
&+ 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\,\alpha(x,t) \int_0^x \omega(x,\xi,t)\,\alpha(\xi,t)\,d\xi\,dx \\
&+ 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\,\alpha(x,t) \int_0^x \kappa(x,\xi,t)\,\beta(\xi,t)\,d\xi\,dx \\
&+ 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\,\alpha(x,t)\, k_1(x)\,\hat\epsilon(0,t)\,dx + 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\,\alpha(x,t)\,\dot{\hat q}(t)\, p(x,t)\,dx. \tag{E.101}
\end{aligned}
\]

Integrating by parts, inserting the boundary condition (10.45c) and using Young's inequality on the cross terms gives

\[
\begin{aligned}
\dot V_1(t) \le{}& -e^{-\delta}\alpha^2(1,t) + 2\bar q^2\beta^2(0,t) + 2\bar q^2\hat\epsilon^2(0,t) - \delta\underline\lambda \int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha^2(x,t)\,dx \\
&+ \bar c_1^2 \int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha^2(x,t)\,dx + \frac{\bar\mu}{\underline\lambda} \int_0^1 (1+x)\mu^{-1}(x)\beta^2(x,t)\,dx \\
&+ \bar\omega^2 \int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha^2(x,t)\,dx + \frac{\bar\lambda}{\underline\lambda} \int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha^2(x,t)\,dx \\
&+ \bar\kappa^2 \int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha^2(x,t)\,dx + \frac{\bar\mu}{\underline\lambda} \int_0^1 (1+x)\mu^{-1}(x)\beta^2(x,t)\,dx \\
&+ \frac{\bar k_1^2}{\underline\lambda} \int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha^2(x,t)\,dx + \hat\epsilon^2(0,t) + \frac{1}{\underline\lambda} \int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha^2(x,t)\,dx \\
&+ \dot{\hat q}^2(t) \int_0^1 p^2(x,t)\,dx \tag{E.102}
\end{aligned}
\]
where we used
\[
\begin{aligned}
\int_0^1 e^{-\delta x}\lambda^{-1}(x) \int_0^x \alpha^2(\xi,t)\,d\xi\,dx &\le \underline\lambda^{-1} \int_0^1 e^{-\delta x} \int_0^x \alpha^2(\xi,t)\,d\xi\,dx \\
&\le -\frac{e^{-\delta}}{\delta\underline\lambda} \int_0^1 \alpha^2(\xi,t)\,d\xi + \frac{1}{\delta\underline\lambda} \int_0^1 e^{-\delta x}\alpha^2(x,t)\,dx \\
&\le \frac{1}{\delta\underline\lambda} \int_0^1 \big(e^{-\delta x} - e^{-\delta}\big)\alpha^2(x,t)\,dx \le \frac{1}{\underline\lambda} \int_0^1 e^{-\delta x}\alpha^2(x,t)\,dx \\
&\le \frac{\bar\lambda}{\underline\lambda} \int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha^2(x,t)\,dx \tag{E.103}
\end{aligned}
\]

where the last inequality follows from assuming δ ≥ 1, and similarly for the double
integral in β. Inequality (E.102) can be written
 
\[
\dot V_1(t) \le h_1\beta^2(0,t) + h_2\hat\epsilon^2(0,t) - \big[\delta\underline\lambda - h_3\big]V_1(t) + h_4 V_2(t) + l_1(t)V_3(t) \tag{E.104}
\]
where
\[
h_1 = 2\bar q^2, \qquad h_2 = 1 + h_1, \tag{E.105a}
\]
\[
h_3 = \bar c_1^2 + \bar\omega^2 + \frac{\bar\lambda}{\underline\lambda} + \bar\kappa^2 + \frac{\bar k_1^2}{\underline\lambda} + \frac{1}{\underline\lambda}, \qquad h_4 = \frac{2\bar\mu}{\underline\lambda} \tag{E.105b}
\]
are positive constants independent of δ, and
\[
l_1(t) = \dot{\hat q}^2(t)\, B_1^2\,\bar\lambda e^\delta \tag{E.106}
\]
is an integrable function (Theorem 10.1).



Bound on V̇2 :
We find
\[
\begin{aligned}
\dot V_2(t) ={}& \int_0^1 (1+x)\beta(x,t)\beta_x(x,t)\,dx \\
&+ \int_0^1 (1+x)\mu^{-1}(x)\beta(x,t)\big[\hat K^u(x,0,t)\lambda(0)\hat q(t) + T[k_1,k_2](x,t)\big]\hat\epsilon(0,t)\,dx \\
&+ \int_0^1 (1+x)\mu^{-1}(x)\beta(x,t)\,\dot{\hat q}(t)\, T[p,r](x,t)\,dx \\
&- \int_0^1 (1+x)\mu^{-1}(x)\beta(x,t) \int_0^x \hat K^u_t(x,\xi,t)\,\alpha(\xi,t)\,d\xi\,dx \\
&- \int_0^1 (1+x)\mu^{-1}(x)\beta(x,t) \int_0^x \hat K^v_t(x,\xi,t)\, T^{-1}[\alpha,\beta](\xi,t)\,d\xi\,dx. \tag{E.107}
\end{aligned}
\]

Integrating by parts and using the boundary condition and Young's inequality give
\[
\begin{aligned}
\dot V_2(t) \le{}& -\beta^2(0,t) - \bigg[\frac{1}{2}\underline\mu - \sum_{i=1}^{4}\rho_i\bigg] \int_0^1 (1+x)\mu^{-1}(x)\beta^2(x,t)\,dx \\
&+ \frac{1}{\rho_1} \int_0^1 (1+x)\mu^{-1}(x)\big[\hat K^u(x,0,t)\lambda(0)\hat q(t) + T[k_1,k_2](x,t)\big]^2 dx\ \hat\epsilon^2(0,t) \\
&+ \frac{1}{\rho_2}\dot{\hat q}^2(t) \int_0^1 (1+x)\mu^{-1}(x)\, T^2[p,r](x,t)\,dx \\
&+ \frac{1}{\rho_3} \int_0^1 (1+x)\mu^{-1}(x) \Big( \int_0^x \hat K^u_t(x,\xi,t)\,\alpha(\xi,t)\,d\xi \Big)^2 dx \\
&+ \frac{1}{\rho_4} \int_0^1 (1+x)\mu^{-1}(x) \Big( \int_0^x \hat K^v_t(x,\xi,t)\, T^{-1}[\alpha,\beta](\xi,t)\,d\xi \Big)^2 dx \tag{E.108}
\end{aligned}
\]
where ρ₁, ..., ρ₄ are arbitrary positive constants. Choosing $\rho_1 = \rho_2 = \rho_3 = \rho_4 = \frac{1}{16}\underline\mu$ now gives
\[
\begin{aligned}
\dot V_2(t) \le{}& -\beta^2(0,t) - \frac{1}{4}\underline\mu \int_0^1 (1+x)\mu^{-1}(x)\beta^2(x,t)\,dx \\
&+ \frac{64}{\underline\mu^2}\big[\bar K^2\bar\lambda^2\bar q^2 + 2(A_1^2\|k_1\|^2 + A_2^2\|k_2\|^2)\big]\hat\epsilon^2(0,t) \\
&+ \frac{64}{\underline\mu^2}\dot{\hat q}^2(t)\big(A_1^2 B_1^2 + 2A_2^2 B_2^2\big)\bar\lambda e^\delta \int_0^1 e^{-\delta x}\lambda^{-1}(x) w^2(x,t)\,dx \\
&+ \frac{128}{\underline\mu^2}\dot{\hat q}^2(t) A_2^2 B_3^2\bar\mu \int_0^1 (1+x)\mu^{-1}(x) z^2(x,t)\,dx \\
&+ \frac{32}{\underline\mu^2}\|\hat K^u_t(t)\|^2\bar\lambda e^\delta \int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha^2(x,t)\,dx \\
&+ \frac{64}{\underline\mu^2}\|\hat K^v_t(t)\|^2 A_3^2\bar\lambda e^\delta \int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha^2(x,t)\,dx \\
&+ \frac{64}{\underline\mu^2}\|\hat K^v_t(t)\|^2 A_4^2\bar\mu \int_0^1 (1+x)\mu^{-1}(x)\beta^2(x,t)\,dx, \tag{E.109}
\end{aligned}
\]

where we used Cauchy–Schwarz' inequality to derive
\[
\begin{aligned}
\int_0^1 \Big( \int_0^x \hat K^v_t(x,\xi,t)\, T^{-1}[\alpha,\beta](\xi,t)\,d\xi \Big)^2 dx &\le \int_0^1 \int_0^x (\hat K^v_t(x,\xi,t))^2\,d\xi \int_0^x (T^{-1}[\alpha,\beta](\xi,t))^2\,d\xi\,dx \\
&\le \int_0^1 \int_0^x (\hat K^v_t(x,\xi,t))^2\,d\xi\,dx\ \|T^{-1}[\alpha,\beta](t)\|^2 \\
&\le 2\|\hat K^v_t(t)\|^2\big(A_3^2\|\alpha(t)\|^2 + A_4^2\|\beta(t)\|^2\big) \\
&\le 2\|\hat K^v_t(t)\|^2 A_3^2\bar\lambda e^\delta \int_0^1 e^{-\delta x}\lambda^{-1}(x)\alpha^2(x,t)\,dx \\
&\quad + 2\|\hat K^v_t(t)\|^2 A_4^2\bar\mu \int_0^1 (1+x)\mu^{-1}(x)\beta^2(x,t)\,dx \tag{E.110}
\end{aligned}
\]

and similarly for the term in $\hat K^u_t$. Inequality (E.109) can be written as
\[
\dot V_2(t) \le -\beta^2(0,t) - \frac{1}{4}\underline\mu V_2(t) + h_5\hat\epsilon^2(0,t) + l_2(t)V_1(t) + l_3(t)V_2(t) + l_4(t)V_3(t) + l_5(t)V_4(t) \tag{E.111}
\]
where
\[
h_5 = \frac{64}{\underline\mu^2}\big[\bar K^2\bar\lambda^2\bar q^2 + 2(A_1^2\|k_1\|^2 + A_2^2\|k_2\|^2)\big] \tag{E.112}
\]
is a positive constant, and
\[
l_2(t) = \frac{32}{\underline\mu^2}\big(\|\hat K^u_t(t)\|^2 + 2\|\hat K^v_t(t)\|^2 A_3^2\big)\bar\lambda e^\delta \tag{E.113a}
\]
\[
l_3(t) = \frac{64}{\underline\mu^2}\|\hat K^v_t(t)\|^2 A_4^2\bar\mu \tag{E.113b}
\]
\[
l_4(t) = \frac{64}{\underline\mu^2}\dot{\hat q}^2(t)\big(A_1^2 B_1^2 + 2A_2^2 B_2^2\big)\bar\lambda e^\delta \tag{E.113c}
\]
\[
l_5(t) = \frac{128}{\underline\mu^2}\dot{\hat q}^2(t) A_2^2 B_3^2\bar\mu \tag{E.113d}
\]
are integrable functions (Theorem 10.1 and (10.40)).


Bound on V̇3 :
Using the same steps as for the Lyapunov function V in the proof of Theorem 10.1, we find
\[
\dot V_3(t) \le 2\beta^2(0,t) + 2\hat\epsilon^2(0,t) - \big[\delta\underline\lambda - h_6\big]V_3(t) \tag{E.114}
\]
where
\[
h_6 = \bar g_1^2 + \frac{\bar\lambda}{\underline\lambda} \tag{E.115}
\]
is a positive constant, with ḡ₁ bounding g₁.


Bound on V̇4 :
Again, using the same steps as for the Lyapunov function V in the proof of Theorem 10.1, we find
\[
\begin{aligned}
\dot V_4(t) \le{}& 2z^2(1,t) - z^2(0,t) - \bigg[\frac{1}{2}\underline\mu - \rho_1 - \rho_2\bigg] \int_0^1 (1+x)\mu^{-1}(x) z^2(x,t)\,dx \\
&+ \frac{\bar c_2^2}{\rho_1} \int_0^1 (1+x)\mu^{-1}(x) w^2(x,t)\,dx + \frac{\bar g_2^2}{\rho_2} \int_0^1 (1+x)\mu^{-1}(x) \int_0^x w^2(\xi,t)\,d\xi\,dx \tag{E.116}
\end{aligned}
\]
where ḡ₂ bounds g₂, and ρ₁ and ρ₂ are arbitrary positive constants. Choosing $\rho_1 = \rho_2 = \frac{1}{8}\underline\mu$ and using the boundary condition (10.53d), we find
\[
\dot V_4(t) \le -z^2(0,t) - \frac{1}{4}\underline\mu V_4(t) + h_7 e^\delta V_3(t) \tag{E.117}
\]
where
\[
h_7 = \frac{16}{\underline\mu^2}\big(\bar c_2^2 + \bar g_2^2\big)\bar\lambda \tag{E.118}
\]
is a positive constant independent of δ. □
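Steps like (E.83) and (E.110) rest on the Cauchy–Schwarz bound ∫₀¹(∫₀ˣ K(x,ξ)f(ξ)dξ)² dx ≤ ‖K‖²‖f‖², with ‖K‖² = ∫₀¹∫₀ˣ K² dξ dx. A numerical spot-check (the kernel, function and grid are arbitrary illustrative choices, not from the book):

```python
import math

N = 200
h = 1.0 / N
K = lambda x, xi: math.sin(5 * x * xi) + x - xi  # arbitrary test kernel
f = lambda xi: math.cos(4 * xi) + 0.5            # arbitrary test function

lhs, K2 = 0.0, 0.0
for i in range(N):
    x = (i + 0.5) * h
    inner = sum(K(x, (j + 0.5) * h) * f((j + 0.5) * h) for j in range(i)) * h
    lhs += inner ** 2 * h                                          # int (int_0^x K f)^2 dx
    K2 += sum(K(x, (j + 0.5) * h) ** 2 for j in range(i)) * h * h  # ||K||^2
f2 = sum(f((j + 0.5) * h) ** 2 for j in range(N)) * h              # ||f||^2
assert lhs <= K2 * f2 + 1e-12
print(lhs, "<=", K2 * f2)
```

The discrete inequality holds exactly, because Cauchy–Schwarz applies to the inner Riemann sums just as it does to the integrals.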



E.6 Proof of Lemma 10.7

Bound on V̇1 :
Differentiating (10.132a), integrating by parts, inserting the boundary condition and using Cauchy–Schwarz' inequality, we find
\[
\begin{aligned}
\dot V_1(t) &= -e^{-\delta}\alpha^2(1,t) + \alpha^2(0,t) - \delta \int_0^1 e^{-\delta x}\alpha^2(x,t)\,dx \\
&\le -e^{-\delta}\alpha^2(1,t) + \tilde q^2(t)\, v^2(0,t) - \delta\underline\lambda V_1(t) \\
&\le -e^{-\delta}\alpha^2(1,t) + 2\tilde q^2(t)\, z^2(0,t) - \big[\delta\underline\lambda - 2e^\delta\bar\lambda\tilde q^2(t)(\bar P^\beta)^2\big]V_1(t) \tag{E.119}
\end{aligned}
\]
where we used (10.131) and the bound $\bar P^\beta$ given in (10.81).


Bound on V̇2 :
Differentiating (10.132b), integrating by parts and inserting the boundary condition, we find
\[
\begin{aligned}
\dot V_2(t) \le{}& \hat q^2(t)\, z^2(0,t) - \delta \int_0^1 e^{-\delta x} w^2(x,t)\,dx + 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\, w(x,t)\, c_1(x)\, z(x,t)\,dx \\
&+ 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\, w(x,t) \int_0^x \omega(x,\xi,t)\, w(\xi,t)\,d\xi\,dx \\
&+ 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\, w(x,t) \int_0^x \kappa(x,\xi,t)\, z(\xi,t)\,d\xi\,dx \\
&+ 2\int_0^1 e^{-\delta x}\lambda^{-1}(x)\, w(x,t)\,\Gamma_1(x,t)\,dx\ \alpha(1,t). \tag{E.120}
\end{aligned}
\]
0

Using Young’s and Cauchy–Schwarz’ inequalities on the cross terms and assuming
δ ≥ 1, give
 
V̇2 ≤ q̄ 2 z 2 (0, t) − δλ − (c̄12 λ−2 + 1 + ω̄ 2 λ−2 + κ̄2 λ−2 + Γ¯12 λ−2 )λ̄ V2 (t)
+ μ̄V3 (t) + α2 (1, t), (E.121)

where c̄1 , ω̄, κ̄, Γ¯1 , λ̄ and μ̄ upper bound c1 , ω, κ, Γ1 , λ and μ, respectively, and λ
lower bounds λ. Inequality (E.121) can be written
 
V̇2 (t) ≤ q̄ 2 z 2 (0, t) − δλ − h 1 V2 (t) + μ̄V3 (t) + α2 (1, t), (E.122)
Appendix E: Additional Proofs 449

for the positive constant

h 1 = (c̄12 λ−2 + 1 + ω̄ 2 λ−2 + κ̄2 λ−2 + Γ¯12 λ−2 )λ̄. (E.123)

Bound on V̇3 :
Differentiating (10.132c), integrating by parts and inserting the boundary condition, we find
\[
\begin{aligned}
\dot V_3(t) \le{}& -z^2(0,t) - k \int_0^1 e^{kx} z^2(x,t)\,dx + 2\int_0^1 e^{kx}\mu^{-1}(x)\, z(x,t)\,\Omega(x,t)\,\alpha(1,t)\,dx \\
&- 2\int_0^1 e^{kx}\mu^{-1}(x)\, z(x,t) \int_0^x \hat K^u_t(x,\xi,t)\, w(\xi,t)\,d\xi\,dx \\
&- 2\int_0^1 e^{kx}\mu^{-1}(x)\, z(x,t) \int_0^x \hat K^v_t(x,\xi,t)\, T^{-1}[w,z](\xi,t)\,d\xi\,dx. \tag{E.124}
\end{aligned}
\]

From Young's and Cauchy–Schwarz' inequalities, we obtain
\[
\begin{aligned}
\dot V_3(t) \le{}& -z^2(0,t) - \big[k\underline\mu - \underline\mu^{-2}\bar\Omega^2\bar\mu - 2\underline\mu^{-2}\bar\mu\big]V_3(t) + e^k\alpha^2(1,t) \\
&+ e^{\delta+k}\bar\lambda\big(\|\hat K^u_t(t)\|^2 + 2\|\hat K^v_t(t)\|^2 B_1^2\big)V_2(t) + 2e^k\|\hat K^v_t(t)\|^2 B_2^2\bar\mu V_3(t) \tag{E.125}
\end{aligned}
\]
which can be written as
\[
\dot V_3(t) \le -z^2(0,t) - \big[k\underline\mu - h_2\big]V_3(t) + e^k\alpha^2(1,t) + l_1(t)V_2(t) + l_2(t)V_3(t) \tag{E.126}
\]
for the positive constant
\[
h_2 = \frac{\bar\mu}{\underline\mu^2}\big(2 + \bar\Omega^2\big) \tag{E.127}
\]
and integrable functions
\[
l_1(t) = e^{\delta+k}\bar\lambda\big(\|\hat K^u_t(t)\|^2 + 2\|\hat K^v_t(t)\|^2 B_1^2\big) \tag{E.128a}
\]
\[
l_2(t) = 2e^k\|\hat K^v_t(t)\|^2 B_2^2\bar\mu. \tag{E.128b}
\]



E.7 Proof of Lemma 11.8

Bound on V̇1 :
From differentiating V₁ in (11.64a) with respect to time, inserting the dynamics (11.16a) and integrating by parts, we find
\[
\dot V_1(t) = -w^2(1,t) + 2w^2(0,t) - \int_0^1 w^2(x,t)\,dx. \tag{E.129}
\]
Inserting the boundary condition (11.64c) and recalling that $z(0,t) = \hat z(0,t) + \hat\epsilon(0,t) = \eta(0,t) + \hat\epsilon(0,t)$ yields
\[
\dot V_1(t) \le 4\eta^2(0,t) + 4\hat\epsilon^2(0,t) - \frac{1}{2}\bar\lambda V_1(t). \tag{E.130}
\]

Bound on V̇2 :
From differentiating V₂ in (11.64b) with respect to time and inserting the dynamics (11.60a), we find
\[
\begin{aligned}
\dot V_2(t) ={}& 2\int_0^1 (1+x)\eta(x,t)\eta_x(x,t)\,dx \\
&+ 2\bar\mu^{-1} \int_0^1 (1+x)\eta(x,t)\, T\bigg[\int_x^1 \hat\theta_t(\xi,t)\,\phi(1-(\xi-x),t)\,d\xi\bigg](x,t)\,dx \\
&- 2\int_0^1 (1+x)\eta(x,t)\,\hat g(x,t)\,dx\ \hat\epsilon(0,t) \\
&+ 2\bar\mu^{-1} \int_0^1 (1+x)\eta(x,t)\, T\bigg[\int_0^1 \hat\kappa_t(\xi,t)\, P(x,\xi,t)\,d\xi\bigg](x,t)\,dx \\
&- 2\bar\mu^{-1} \int_0^1 (1+x)\eta(x,t) \int_0^x \hat g_t(x-\xi,t)\, T^{-1}[\eta](\xi,t)\,d\xi\,dx. \tag{E.131}
\end{aligned}
\]

Using integration by parts and Cauchy–Schwarz' inequality on the cross terms, we find the following upper bound
\[
\begin{aligned}
\dot V_2(t) \le{}& -\eta^2(0,t) - \bigg[\frac{1}{2}\bar\mu - \rho_1 - \rho_2 - \rho_3 - \rho_4\bigg]V_2(t) \\
&+ \frac{1}{\rho_2\bar\mu^2} \int_0^1 (1+x)\, T\bigg[\int_x^1 \hat\theta_t(\xi,t)\,\phi(1-(\xi-x),t)\,d\xi\bigg]^2(x,t)\,dx + \frac{2\bar g^2}{\rho_1}\hat\epsilon^2(0,t) \\
&+ \frac{1}{\rho_3\bar\mu^2} \int_0^1 (1+x)\, T\bigg[\int_0^1 \hat\kappa_t(\xi,t)\, P(x,\xi,t)\,d\xi\bigg]^2(x,t)\,dx \\
&+ \frac{1}{\rho_4\bar\mu^2} \int_0^1 (1+x) \bigg( \int_0^x \hat g_t(x-\xi,t)\, T^{-1}[\eta](\xi,t)\,d\xi \bigg)^2 dx \tag{E.132}
\end{aligned}
\]
for some arbitrary positive constants ρᵢ, i = 1, ..., 4, and where we have used the boundary condition (11.60b). Choosing $\rho_1 = \rho_2 = \rho_3 = \rho_4 = \frac{1}{16}$, we further find
\[
\begin{aligned}
\dot V_2(t) \le{}& -\eta^2(0,t) - \frac{1}{4}\bar\mu V_2(t) + 32\bar g^2\hat\epsilon^2(0,t) + \frac{32}{\bar\mu^2}G_1^2\|\hat\theta_t(t)\|^2\|\phi(t)\|^2 \\
&+ \frac{32}{\bar\mu^2}G_1^2\|\hat\kappa_t(t)\|^2\|P(t)\|^2 + \frac{32}{\bar\mu^2}G_2^2\|\hat g_t(t)\|^2\|\eta(t)\|^2. \tag{E.133}
\end{aligned}
\]
Define the functions
\[
l_1(t) = \frac{2}{\rho_4\bar\mu^2}G_2^2\|\hat g_t(t)\|^2, \qquad l_2(t) = \frac{2}{\rho_2\bar\mu^2}G_1^2\|\hat\theta_t(t)\|^2 \tag{E.134a}
\]
\[
l_3(t) = \frac{2\bar\lambda}{\rho_3\bar\mu^2}G_1^2\|\hat\kappa_t(t)\|^2, \tag{E.134b}
\]
which are all integrable from (11.44b), (11.44c) and (11.63); we obtain
\[
\dot V_2(t) \le -\eta^2(0,t) - \frac{1}{4}\bar\mu V_2(t) + l_1(t)V_2(t) + l_2(t)V_3(t) + l_3(t)V_4(t) + 32\bar g^2\hat\epsilon^2(0,t). \tag{E.135}
\]

Bound on V̇3 :
Similarly, differentiating V₃ in (11.64c) with respect to time, inserting the dynamics (11.26b), and integrating by parts, we find
\[
\begin{aligned}
\dot V_3(t) &= 2\int_0^1 (1+x)\phi(x,t)\phi_x(x,t)\,dx = 2\phi^2(1,t) - \phi^2(0,t) - \int_0^1 \phi^2(x,t)\,dx \\
&\le -\frac{1}{2}\bar\mu V_3(t) + 4\eta^2(0,t) + 4\hat\epsilon^2(0,t) \tag{E.136}
\end{aligned}
\]
where we have inserted the boundary condition in (11.26b).
Bound on V̇4 :
Differentiating V₄ in (11.64d) with respect to time, inserting the dynamics (11.26c), and integrating by parts, we find
\[
\dot V_4(t) = 2\int_0^1 P^2(1,\xi,t)\,d\xi - \int_0^1 P^2(0,\xi,t)\,d\xi - \int_0^1 \int_0^1 P^2(x,\xi,t)\,d\xi\,dx. \tag{E.137}
\]
Inserting the boundary condition in (11.26c), we obtain
\[
\dot V_4(t) \le -\|p_0(t)\|^2 + 2\bar\lambda V_1(t) - \frac{1}{2}\bar\mu V_4(t). \tag{E.138}
\]
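Identities such as (E.136) follow from integrating by parts along the transport direction. For unit-speed transport φₜ = φₓ, the solution is φ(x,t) = φ₀(x+t), and V(t) = ∫₀¹(1+x)φ² dx satisfies V′(t) = 2φ²(1,t) − φ²(0,t) − ∫₀¹φ² dx. A numerical spot-check of this identity (the profile, grid and time step are illustrative choices, not from the book):

```python
import math

phi0 = lambda s: math.sin(3 * s) + 0.2 * s  # smooth initial profile
N, t, dt = 4000, 0.37, 1e-5

def V(tt):
    """V(tt) = int_0^1 (1 + x) phi0(x + tt)^2 dx by the midpoint rule."""
    return sum((1 + (k + 0.5) / N) * phi0((k + 0.5) / N + tt) ** 2
               for k in range(N)) / N

dVdt = (V(t + dt) - V(t - dt)) / (2 * dt)  # centered difference in time
rhs = (2 * phi0(1 + t) ** 2 - phi0(t) ** 2
       - sum(phi0((k + 0.5) / N + t) ** 2 for k in range(N)) / N)
assert abs(dVdt - rhs) < 1e-4
print(dVdt, rhs)
```

The boundary terms 2φ²(1,t) and −φ²(0,t) are exactly what the proofs trade against the boundary conditions of the target system.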


E.8 Proof of Lemma 12.8

Bound on V̇1 :
From differentiating V₁ in (12.91a) with respect to time and inserting the dynamics (12.84a), we find, for t ≥ t₁,
\[
\begin{aligned}
\dot V_1(t) ={}& 2\int_0^1 (1+x)\eta(x,t)\eta_x(x,t)\,dx - 2\int_0^1 (1+x)\eta(x,t)\,\hat g(x,t)\,dx\ \hat\epsilon(0,t) \\
&+ \frac{2}{\bar\mu} \int_0^1 (1+x)\eta(x,t)\,\dot{\hat\rho}(t)\, T[\psi](x,t)\,dx \\
&+ \frac{2}{\bar\mu} \int_0^1 (1+x)\eta(x,t)\, T\bigg[\int_x^1 \hat\theta_t(\xi,t)\,\phi(1-(\xi-x),t)\,d\xi\bigg](x,t)\,dx \\
&+ \frac{2}{\bar\mu} \int_0^1 (1+x)\eta(x,t)\, T\bigg[\int_0^1 \hat\kappa_t(\xi,t)\, P(x,\xi,t)\,d\xi\bigg](x,t)\,dx \\
&+ \frac{2}{\bar\mu} \int_0^1 (1+x)\eta(x,t)\, T\bigg[\int_0^1 \hat\kappa_t(\xi,t)\, M(x,\xi,t)\,d\xi + \int_0^1 \hat\theta_t(\xi,t)\, N(x,\xi,t)\,d\xi\bigg](x,t)\,dx \\
&+ \frac{2}{\bar\mu} \int_0^1 (1+x)\eta(x,t)\, T\big[\vartheta^T(x,t)\dot{\hat\nu}(t)\big](x,t)\,dx \\
&- \frac{2}{\bar\mu} \int_0^1 (1+x)\eta(x,t) \int_0^x \hat g_t(x-\xi,t)\, T^{-1}[\eta](\xi,t)\,d\xi\,dx \tag{E.139}
\end{aligned}
\]

where we have utilized that $P_t - \bar\mu P_x$ is zero for t ≥ t₁. Using integration by parts and Cauchy–Schwarz' inequality on the cross terms, we find the upper bound
\[
\begin{aligned}
\dot V_1(t) \le{}& -\eta^2(0,t) - \bar\mu\bigg[\frac{1}{2} - \sum_{i=1}^{8}\rho_i\bigg]V_1(t) + \frac{1}{\rho_1\bar\mu^2} \int_0^1 (1+x)\dot{\hat\rho}^2(t)\, T[\psi]^2(x,t)\,dx \\
&+ \frac{1}{\rho_2\bar\mu^2} \int_0^1 (1+x)\, T\bigg[\int_x^1 \hat\theta_t(\xi,t)\,\phi(1-(\xi-x),t)\,d\xi\bigg]^2(x,t)\,dx \\
&+ \frac{1}{\rho_3} \int_0^1 (1+x)\,\hat g^2(x,t)\,dx\ \hat\epsilon^2(0,t) + \frac{1}{\rho_4\bar\mu^2} \int_0^1 (1+x)\, T\bigg[\int_0^1 \hat\kappa_t(\xi,t)\, P(x,\xi,t)\,d\xi\bigg]^2(x,t)\,dx \\
&+ \frac{1}{\rho_5\bar\mu^2} \int_0^1 (1+x)\, T\bigg[\int_0^1 \hat\kappa_t(\xi,t)\, M(x,\xi,t)\,d\xi\bigg]^2(x,t)\,dx \\
&+ \frac{1}{\rho_6\bar\mu^2} \int_0^1 (1+x)\, T\bigg[\int_0^1 \hat\theta_t(\xi,t)\, N(x,\xi,t)\,d\xi\bigg]^2(x,t)\,dx \\
&+ \frac{1}{\rho_7\bar\mu^2} \int_0^1 (1+x)\, T\big[\vartheta^T\dot{\hat\nu}\big]^2(x,t)\,dx + \frac{1}{\rho_8\bar\mu^2} \int_0^1 (1+x) \bigg( \int_0^x \hat g_t(x-\xi,t)\, T^{-1}[\eta](\xi,t)\,d\xi \bigg)^2 dx, \tag{E.140}
\end{aligned}
\]
for some arbitrary positive constants ρᵢ, i = 1, ..., 8. $\dot V_1$ can be further upper bounded by
\[
\begin{aligned}
\dot V_1(t) \le{}& -\eta^2(0,t) - \bar\mu\bigg[\frac{1}{2} - \sum_{i=1}^{8}\rho_i\bigg]V_1(t) + \frac{2}{\rho_1\bar\mu^2}G_1^2\dot{\hat\rho}^2(t)\|\psi(t)\|^2 \\
&+ \frac{2}{\rho_2\bar\mu^2}G_1^2\|\hat\theta_t(t)\|^2\|\phi(t)\|^2 + \frac{2\bar g^2}{\rho_3}\hat\epsilon^2(0,t) + \frac{2}{\rho_4\bar\mu^2}G_1^2\|\hat\kappa_t(t)\|^2\|P(t)\|^2 \\
&+ \frac{2}{\rho_5\bar\mu^2}G_1^2\|\hat\kappa_t(t)\|^2\|M(t)\|^2 + \frac{2}{\rho_6\bar\mu^2}G_1^2\|\hat\theta_t(t)\|^2\|N(t)\|^2 \\
&+ \frac{2}{\rho_7\bar\mu^2}G_1^2|\dot{\hat\nu}(t)|^2\|\vartheta(t)\|^2 + \frac{2}{\rho_8\bar\mu^2}G_2^2\|\hat g_t(t)\|^2\|\eta(t)\|^2. \tag{E.141}
\end{aligned}
\]

Let
\[
\rho_i = \frac{1}{32}, \qquad i = 1, \ldots, 8; \tag{E.142}
\]
then
\[
\begin{aligned}
\dot V_1(t) \le{}& -\eta^2(0,t) - \frac{\bar\mu}{4}V_1(t) + \frac{64}{\bar\mu^2}G_1^2\dot{\hat\rho}^2(t)\|\psi(t)\|^2 + \frac{64}{\bar\mu^2}G_1^2\|\hat\theta_t(t)\|^2\|\phi(t)\|^2 \\
&+ 64M_g^2\sigma^2(t) + 64M_g^2\sigma^2(t)\psi^2(0,t) + 64M_g^2\sigma^2(t)\|\phi(t)\|^2 + 64M_g^2\sigma^2(t)\|p_0(t)\|^2 \\
&+ 64M_g^2\sigma^2(t)\|m_0(t)\|^2 + 64M_g^2\sigma^2(t)\|n_0(t)\|^2 + 64M_g^2\sigma^2(t)|\vartheta(0,t)|^2 \\
&+ \frac{64}{\bar\mu^2}G_1^2\|\hat\kappa_t(t)\|^2\|P(t)\|^2 + \frac{64}{\bar\mu^2}G_1^2\|\hat\kappa_t(t)\|^2\|M(t)\|^2 + \frac{64}{\bar\mu^2}G_1^2\|\hat\theta_t(t)\|^2\|N(t)\|^2 \\
&+ \frac{64}{\bar\mu^2}G_1^2|\dot{\hat\nu}(t)|^2\|\vartheta(t)\|^2 + \frac{64}{\bar\mu^2}G_2^2\|\hat g_t(t)\|^2\|\eta(t)\|^2. \tag{E.143}
\end{aligned}
\]
Define the bounded, integrable functions
\[
l_1(t) = \frac{64}{\bar\mu^2}G_2^2\|\hat g_t(t)\|^2, \qquad l_2(t) = \frac{64}{\bar\mu^2}G_1^2\|\hat\theta_t(t)\|^2 + 64\bar\mu M_g^2\sigma^2(t) \tag{E.144a}
\]
\[
l_3(t) = \frac{64\bar\lambda}{\bar\mu^2}G_1^2\|\hat\kappa_t(t)\|^2, \qquad l_4(t) = 64\bar\lambda M_g^2\sigma^2(t), \tag{E.144b}
\]
\[
l_5(t) = \frac{64}{\bar\mu^2}G_1^2\dot{\hat\rho}^2(t) \tag{E.144c}
\]
\[
\begin{aligned}
l_6(t) ={}& 64M_g^2\sigma^2(t) + 64M_g^2\sigma^2(t)\|m_0(t)\|^2 + 64M_g^2\sigma^2(t)\|n_0(t)\|^2 + 64M_g^2\sigma^2(t)|\vartheta(0,t)|^2 \\
&+ \frac{64}{\bar\mu^2}G_1^2\|\hat\kappa_t(t)\|^2\|M(t)\|^2 + \frac{64}{\bar\mu^2}G_1^2\|\hat\theta_t(t)\|^2\|N(t)\|^2 + \frac{64}{\bar\mu^2}G_1^2|\dot{\hat\nu}(t)|^2\|\vartheta(t)\|^2 \tag{E.144d}
\end{aligned}
\]
and the positive constant
\[
h_1 = 64M_g^2; \tag{E.145}
\]
then (E.143) can be written as
\[
\begin{aligned}
\dot V_1(t) \le{}& -\eta^2(0,t) - \frac{\bar\mu}{4}V_1(t) + h_1\sigma^2(t)\psi^2(0,t) + l_1(t)V_1(t) + l_2(t)V_2(t) \\
&+ l_3(t)V_3(t) + l_4(t)V_4(t) + l_5(t)V_6(t) + l_6(t). \tag{E.146}
\end{aligned}
\]

Bound on V̇2 :
Similarly, differentiating V₂ in (12.91b) with respect to time, inserting the dynamics (12.48b), and integrating by parts, we find
\[
\begin{aligned}
\dot V_2(t) &= 2\int_0^1 (1+x)\phi(x,t)\phi_x(x,t)\,dx = 2\phi^2(1,t) - \phi^2(0,t) - \int_0^1 \phi^2(x,t)\,dx \\
&\le -\phi^2(0,t) + 4\eta^2(0,t) - \frac{1}{2}\bar\mu V_2(t) + 4\hat\epsilon^2(0,t) \tag{E.147}
\end{aligned}
\]
where we have inserted the boundary condition in (12.48b). Inequality (E.147) can be written as
\[
\begin{aligned}
\dot V_2(t) \le{}& -\phi^2(0,t) + 4\eta^2(0,t) - \frac{\bar\mu}{2}V_2(t) + 4\sigma^2(t)\big(1 + \psi^2(0,t) + \|\phi(t)\|^2 \\
&+ \|p_0(t)\|^2 + \|m_0(t)\|^2 + \|n_0(t)\|^2 + |\vartheta(0,t)|^2\big). \tag{E.148}
\end{aligned}
\]
Defining the functions
\[
l_7(t) = 4\bar\mu\sigma^2(t), \qquad l_8(t) = 4\bar\lambda\sigma^2(t) \tag{E.149a}
\]
\[
l_9(t) = 4\sigma^2(t)\big(1 + \|m_0(t)\|^2 + \|n_0(t)\|^2 + |\vartheta(0,t)|^2\big), \tag{E.149b}
\]
which from (12.69f) are bounded and integrable, we obtain
\[
\dot V_2(t) \le -\phi^2(0,t) + 4\eta^2(0,t) - \frac{\bar\mu}{2}V_2(t) + 4\sigma^2(t)\psi^2(0,t) + l_7(t)V_2(t) + l_8(t)V_4(t) + l_9(t). \tag{E.150}
\]

Bound on V̇3 :
Differentiating V₃ in (12.91c) with respect to time and inserting the dynamics (12.48d), we find
\[
\begin{aligned}
\dot V_3(t) &= -2\int_0^1 \int_0^1 (2-\xi)\, P(x,\xi,t)\, P_\xi(x,\xi,t)\,d\xi\,dx \\
&= -\int_0^1 P^2(x,1,t)\,dx + 2\int_0^1 P^2(x,0,t)\,dx - \int_0^1 \int_0^1 P^2(x,\xi,t)\,d\xi\,dx. \tag{E.151}
\end{aligned}
\]
Inserting the boundary condition in (12.48d), we obtain
\[
\dot V_3(t) \le -\frac{1}{2}\bar\lambda V_3(t) + 2\bar\mu V_2(t). \tag{E.152}
\]

Bound on V̇4 :
From differentiating V₄ in (12.91d) with respect to time and inserting p₀'s dynamics derived from the relationship given in (12.49), we find
\[
\dot V_4(t) = -2\int_0^1 (2-x)\, p_0(x,t)\,\partial_x p_0(x,t)\,dx \le -p_0^2(1,t) + 2p_0^2(0,t) - \frac{\bar\lambda}{2}V_4(t). \tag{E.153}
\]
Using (12.49) and (12.48d) yields
\[
\dot V_4(t) \le 2\phi^2(0,t) - \frac{\bar\lambda}{2}V_4(t). \tag{E.154}
\]

Bound on V̇5 :
Similarly, differentiating V₅ in (12.91e) with respect to time and integrating by parts, we find
\[
\dot V_5(t) \le -p_1^2(1,t) + 2p_1^2(0,t) - \frac{\bar\lambda}{2}V_5(t). \tag{E.155}
\]
Using (12.49) and (12.48d) yields
\[
\begin{aligned}
\dot V_5(t) &\le 2\phi^2(1,t) - \frac{\bar\lambda}{2}V_5(t) \\
&\le 4\eta^2(0,t) - \frac{\bar\lambda}{2}V_5(t) + 4\sigma^2(t)\big(1 + \psi^2(0,t) + \|\phi(t)\|^2 + \|p_0(t)\|^2 + \|m_0(t)\|^2 + \|n_0(t)\|^2 + |\vartheta(0,t)|^2\big), \tag{E.156–E.157}
\end{aligned}
\]
which can be written as
\[
\dot V_5(t) \le 4\eta^2(0,t) - \frac{\bar\lambda}{2}V_5(t) + 4\sigma^2(t)\psi^2(0,t) + l_7(t)V_2(t) + l_8(t)V_4(t) + l_9(t) \tag{E.158}
\]
for the integrable functions defined in (E.149).


Bound on V̇6 :
Lastly, from differentiating V₆ in (12.91f) with respect to time and inserting the dynamics (12.48a), we find
\[
\dot V_6(t) = 2\int_0^1 (1+x)\psi(x,t)\psi_x(x,t)\,dx \le 2\psi^2(1,t) - \psi^2(0,t) - \frac{\bar\mu}{2}V_6(t). \tag{E.159}
\]
Inserting the boundary condition (12.48a) and the control law (12.79), we can bound this as
\[
\begin{aligned}
\dot V_6(t) \le{}& -\psi^2(0,t) - \frac{\bar\mu}{2}V_6(t) + 12M_\rho^2 r^2(t) + 12M_\rho^2 \int_0^1 \hat g^2(1-\xi,t)\,\hat z^2(\xi,t)\,d\xi \\
&+ 12M_\rho^2 \int_0^1 \hat\kappa^2(\xi,t)\, p_1^2(\xi,t)\,d\xi + 12M_\rho^2 \int_0^1 \hat\kappa^2(\xi,t)\, a^2(\xi,t)\,d\xi \\
&+ 12M_\rho^2 \int_0^1 \hat\theta^2(\xi,t)\, b^2(1-\xi,t)\,d\xi + 12M_\rho^2\big(\chi^T(t)\hat\nu(t)\big)^2 \tag{E.160}
\end{aligned}
\]
where
\[
M_\rho = \frac{1}{\min\{|\underline\rho|, |\bar\rho|\}}. \tag{E.161}
\]
Inequality (E.160) can be bounded as
\[
\begin{aligned}
\dot V_6(t) \le{}& -\psi^2(0,t) - \frac{\bar\mu}{2}V_6(t) + 12M_\rho^2 r^2(t) + 12M_\rho^2 M_g^2 G_2^2\|\eta(t)\|^2 \\
&+ 12M_\rho^2 M_\kappa^2\|p_1(t)\|^2 + 12M_\rho^2 M_\kappa^2\|a(t)\|^2 + 12M_\rho^2 M_\theta^2\|b(t)\|^2 + 12(2n+1)M_\rho^2 M_\nu^2\|\chi(t)\|^2 \tag{E.162}
\end{aligned}
\]
where
\[
M_\kappa = \max\{|\underline\kappa|, |\bar\kappa|\}, \qquad M_\nu = \max_{i=1,\ldots,(2n+1)}\{|\underline\nu_i|, |\bar\nu_i|\}. \tag{E.163}
\]
Defining the positive constants
\[
h_2 = 12M_\rho^2, \qquad h_3 = 12M_\rho^2 M_g^2 G_2^2\bar\mu, \qquad h_4 = 12M_\rho^2 M_\kappa^2\bar\lambda \tag{E.164a}
\]
\[
h_5 = 12M_\rho^2 M_\kappa^2, \qquad h_6 = 12M_\rho^2 M_\theta^2, \qquad h_7 = 12(2n+1)M_\rho^2 M_\nu^2, \tag{E.164b}
\]
(E.162) can be written as
\[
\dot V_6(t) \le -\psi^2(0,t) - \frac{\bar\mu}{2}V_6(t) + h_2 r^2(t) + h_3 V_1(t) + h_4 V_5(t) + h_5\|a(t)\|^2 + h_6\|b(t)\|^2 + h_7\|\chi(t)\|^2. \tag{E.165}
\]

E.9 Proof of Lemma 15.3

Bound on V̇2:
Differentiating $V_2$, using the dynamics (15.43a), integration by parts, inserting the boundary condition (15.43c) and using Young's and Cauchy–Schwarz' inequalities on the cross terms, assuming $\delta\ge1$, we find
$$\begin{aligned}\dot V_2(t) \le{}& -\lambda_1e^{-\delta}|\alpha(1,t)|^2 + 2n\bar q^2\lambda_n\beta^2(0,t) + 2n\bar q^2\lambda_n\hat\epsilon^2(0,t)\\
&- \delta\lambda_1\int_0^1e^{-\delta x}\alpha^T(x,t)\alpha(x,t)\,dx + 2n\bar\sigma\int_0^1e^{-\delta x}\alpha^T(x,t)\alpha(x,t)\,dx\\
&+ 2n\bar\omega^2\int_0^1e^{-\delta x}\alpha^T(x,t)\alpha(x,t)\,dx + \frac12\int_0^1e^{-\delta x}\beta^2(x,t)\,dx\\
&+ \bar n\bar B_1^2\int_0^1e^{-\delta x}\alpha^T(x,t)\alpha(x,t)\,dx + \int_0^1e^{-\delta x}\alpha^T(x,t)\alpha(x,t)\,dx\\
&+ 2n\bar b_2^2\int_0^1e^{-\delta x}\alpha^T(x,t)\alpha(x,t)\,dx + \frac12\int_0^1e^{-\delta x}\beta^2(x,t)\,dx\\
&+ n\bar\sigma^2\int_0^1e^{-\delta x}\alpha^T(x,t)\alpha(x,t)\,dx + \int_0^1e^{-\delta x}\hat e^2(x,t)\,dx\\
&+ n\bar\omega^2\int_0^1e^{-\delta x}\alpha^T(x,t)\alpha(x,t)\,dx + \int_0^1e^{-\delta x}\hat\epsilon^2(x,t)\,dx\\
&+ \int_0^1e^{-\delta x}\alpha^T(x,t)\alpha(x,t)\,dx\\
&+ \int_0^1e^{-\delta x}\big((\varphi(x,t)\circ\dot{\hat\kappa}(t))\mathbf 1\big)^T\big((\varphi(x,t)\circ\dot{\hat\kappa}(t))\mathbf 1\big)\,dx,\end{aligned} \tag{E.166}$$
which can be written as
$$\dot V_2(t) \le -\lambda_1e^{-\delta}|\alpha(1,t)|^2 + h_1\beta^2(0,t) + h_1\hat\epsilon^2(0,t) - (\delta\lambda_1-h_2)V_2(t) + V_3(t) + \|\hat e(t)\|^2 + \|\hat\epsilon(t)\|^2 + \|(\varphi(t)\circ\dot{\hat\kappa}(t))\mathbf 1\|^2 \tag{E.167}$$
for the positive constants
$$h_1 = 2n\bar q^2\lambda_n,\qquad h_2 = 2n\bar\sigma + 3n\bar\omega^2 + \bar n\bar B_1^2 + 2 + 2n\bar b_2^2 + n\bar\sigma^2. \tag{E.168}$$

Bound on V̇3:
Similarly for $V_3$, we find using (15.43b)
$$\begin{aligned}\dot V_3(t) \le{}& -\mu\beta^2(0,t) - k\mu\int_0^1e^{kx}\beta^2(x,t)\,dx + \lambda_n^2\bar K^2\bar q^2\int_0^1e^{kx}\beta^2(x,t)\,dx\\
&+ e^k\hat\epsilon^2(0,t) + \int_0^1e^{kx}\beta^2(x,t)\,dx + \int_0^1e^{kx}T[\hat\Sigma\hat e + \hat\omega\hat\epsilon,\ \hat{}^T\hat e]^2(x,t)\,dx\\
&+ \int_0^1e^{kx}\beta^2(x,t)\,dx + \int_0^1e^{kx}\int_0^x\big(\hat K_t^u(x,\xi,t)\big)^2d\xi\int_0^x\alpha^2(\xi,t)\,d\xi\,dx\\
&+ \int_0^1e^{kx}\beta^2(x,t)\,dx\\
&+ \int_0^1e^{kx}\int_0^x\big(\hat K_t^v(x,\xi,t)\big)^2d\xi\int_0^xT^{-1}[\alpha,\beta]^2(\xi,t)\,d\xi\,dx\\
&+ \int_0^1e^{kx}\beta^2(x,t)\,dx + \int_0^1e^{kx}T[(\varphi\circ\dot{\hat\kappa})\mathbf 1,\ \varphi_0^T\dot{\hat\kappa}_0]^2(x,t)\,dx\end{aligned} \tag{E.169}$$
which can be written
$$\begin{aligned}\dot V_3(t) \le{}& -\mu\beta^2(0,t) - \big[k\mu - h_3\big]V_3(t) + e^k\hat\epsilon^2(0,t) + h_4\|\hat e(t)\|^2\\
&+ h_5\|\hat\epsilon(t)\|^2 + l_1(t)V_2(t) + l_2(t)V_3(t)\\
&+ h_6e^k\|(\varphi(t)\circ\dot{\hat\kappa}(t))\mathbf 1\|^2 + h_7e^k\|\varphi_0^T(t)\dot{\hat\kappa}_0(t)\|^2\end{aligned} \tag{E.170}$$
for the positive constants
$$h_3 = 4 + \lambda_n^2\bar K^2\bar q^2,\qquad h_4 = 2e^k\big(2G_1^2n\bar\sigma^2 + G_2^2n\bar{}^2\big) \tag{E.171}$$
$$h_5 = 4e^kG_1^2n\bar\omega^2,\qquad h_6 = 2G_1^2,\qquad h_7 = 2G_2^2 \tag{E.172}$$
and integrable functions
$$l_1(t) = \big(\|\hat K_t^u(t)\|^2 + 2\|\hat K_t^v(t)\|^2G_3^2\big)e^{k+\delta},\qquad l_2(t) = 2\|\hat K_t^v(t)\|^2e^kG_4^2 \tag{E.173}$$

Bound on V̇4:
Following the same steps as before, we obtain from (15.51c) and the filter (15.2a)
$$\dot V_4(t) \le -\lambda_1e^{-\delta}\eta^T(1,t)\eta(1,t) + n\lambda_nv^2(0,t) - \delta\lambda_1\int_0^1e^{-\delta x}\eta^T(x,t)\eta(x,t)\,dx \tag{E.174}$$
which can be written
$$\dot V_4(t) \le -\lambda_1e^{-\delta}|\eta(1,t)|^2 + h_8\beta^2(0,t) + h_8\hat\epsilon^2(0,t) - \delta\lambda_1V_4(t) \tag{E.175}$$
where
$$h_8 = 2n\lambda_n \tag{E.176}$$
is a positive constant.
Bound on V̇5:
By straightforward calculations, we obtain
$$\begin{aligned}\dot V_5(t) &= \mu e^k\psi^T(1,t)\psi(1,t) - \mu\psi^T(0,t)\psi(0,t) - k\mu\int_0^1e^{kx}\psi^T(x,t)\psi(x,t)\,dx\\
&\le h_9e^k|\alpha(1,t)|^2 + h_9e^k|\hat e(1,t)|^2 - \mu|\psi(0,t)|^2 - k\mu V_5(t)\end{aligned} \tag{E.177}$$
where $h_9 = 2\mu$ is a positive constant.


Bound on V̇6:
Similarly, we find from (15.51e) and the filter (15.2d)
$$\begin{aligned}\dot V_6(t) &= -2\sum_{i=1}^n\int_0^1e^{-\delta x}\lambda_ip_i^T(x,t)\partial_xp_i(x,t)\,dx + 2\sum_{i=1}^n\int_0^1e^{-\delta x}p_i^T(x,t)u(x,t)\,dx\\
&\le -\lambda_1e^{-\delta}|P(1,t)|^2 - \big[\delta\lambda_1 - 1\big]V_6(t) + h_{10}V_2(t) + h_{10}\|\hat e(t)\|^2\end{aligned} \tag{E.178}$$
with $h_{10} = 2n$ as a positive constant, where we used the relationship $u = \alpha + \hat e$.


Bound on V̇7:
From (15.51f) and (15.2e),
$$\begin{aligned}\dot V_7(t) \le{}& -e^{-\delta}\nu^T(1,t)\Lambda\nu(1,t) + \nu^T(0,t)\Lambda\nu(0,t) - \delta\int_0^1e^{-\delta x}\nu^T(x,t)\Lambda\nu(x,t)\,dx\\
&+ n^2\int_0^1e^{-\delta x}\nu^T(x,t)\nu(x,t)\,dx + \int_0^1e^{-\delta x}v^2(x,t)\,dx\\
\le{}& -\lambda_1e^{-\delta}|\nu(1,t)|^2 - (\delta\lambda_1 - h_{11})V_7(t) + h_{12}e^\delta V_2(t) + h_{13}V_3(t) + 2\|\hat\epsilon(t)\|^2\end{aligned} \tag{E.179}$$
where we used the relationship
$$v(x,t) = \hat v(x,t) + \hat\epsilon(x,t) = T^{-1}[\alpha,\beta](x,t) + \hat\epsilon(x,t) \tag{E.180}$$
and where
$$h_{11} = n^2,\qquad h_{12} = 4G_3^2,\qquad h_{13} = 4G_4^2 \tag{E.181}$$
are positive constants.


Bound on V̇8:
Lastly, from (15.51g) and (15.2f),
$$\dot V_8(t) \le \mu e^kr^T(1,t)r(1,t) - \mu r^T(0,t)r(0,t) - \big[k\mu-2\big]\int_0^1e^{kx}r^T(x,t)r(x,t)\,dx + \frac12\int_0^1e^{kx}u^T(x,t)u(x,t)\,dx \tag{E.182}$$
and hence
$$\dot V_8(t) \le -\mu|r(0,t)|^2 - \big[k\mu-2\big]V_8(t) + e^{\delta+k}V_2(t) + e^k\|\hat e(t)\|^2 \tag{E.183}$$
□
E.10 Proof of Lemma 16.5

Bound on V̇3:
From (16.52a) and the dynamics (16.39a), we find
$$\begin{aligned}\dot V_3(t) ={}& -e^{-\delta}\alpha^T(1,t)\alpha(1,t) + \alpha^T(0,t)\alpha(0,t) - \delta\int_0^1e^{-\delta x}\alpha^T(x,t)\alpha(x,t)\,dx\\
&+ 2\int_0^1e^{-\delta x}\alpha^T(x,t)\Lambda^{-1}(x)\Sigma(x)\alpha(x,t)\,dx\\
&+ 2\int_0^1e^{-\delta x}\alpha^T(x,t)\Lambda^{-1}(x)\omega(x)\beta(x,t)\,dx\\
&+ 2\int_0^1e^{-\delta x}\alpha^T(x,t)\Lambda^{-1}(x)\int_0^x\hat B_1(x,\xi,t)\alpha(\xi,t)\,d\xi\,dx\\
&+ 2\int_0^1e^{-\delta x}\alpha^T(x,t)\Lambda^{-1}(x)\int_0^x\hat b_2(x,\xi,t)\beta(\xi,t)\,d\xi\,dx\\
&- 2\int_0^1e^{-\delta x}\alpha^T(x,t)\Lambda^{-1}(x)\Gamma_1(x)\hat\epsilon(0,t)\,dx\\
&+ 2\int_0^1e^{-\delta x}\alpha^T(x,t)\Lambda^{-1}(x)P(x,t)\dot{\hat q}(t)\,dx.\end{aligned} \tag{E.184}$$

Using Young's inequality on the cross terms, we find
$$\begin{aligned}\dot V_3(t) \le{}& -e^{-\delta}\alpha^T(1,t)\alpha(1,t) + 2n\bar q^2\beta^2(0,t) + 2n\bar q^2\hat\epsilon^2(0,t)\\
&- \delta\underline\lambda V_3(t) + 2n\bar\sigma V_3(t) + V_3(t) + n\bar\omega^2\underline\lambda^{-1}\int_0^1e^{-\delta x}\beta^2(x,t)\,dx\\
&+ \int_0^1\!\!\int_0^xe^{-\delta x}\alpha^T(x,t)\Lambda^{-1}(x)\alpha(x,t)\,d\xi\,dx\\
&+ n\bar b_1^2\int_0^1\!\!\int_0^xe^{-\delta x}\alpha^T(\xi,t)\Lambda^{-1}(x)\alpha(\xi,t)\,d\xi\,dx\\
&+ \int_0^1\!\!\int_0^xe^{-\delta x}\alpha^T(x,t)\Lambda^{-1}(x)\alpha(x,t)\,d\xi\,dx\\
&+ n\bar b_2^2\underline\lambda^{-1}\int_0^1\!\!\int_0^xe^{-\delta x}\beta^2(\xi,t)\,d\xi\,dx\\
&+ \int_0^1e^{-\delta x}\alpha^T(x,t)\Lambda^{-1}(x)\alpha(x,t)\,dx\\
&+ \int_0^1e^{-\delta x}\alpha^T(x,t)\Lambda^{-1}(x)\alpha(x,t)\,dx + n\bar\gamma_1^2\underline\lambda^{-1}\int_0^1e^{-\delta x}\hat\epsilon^2(0,t)\,dx\\
&+ \int_0^1e^{-\delta x}\alpha^T(x,t)\Lambda^{-1}(x)\alpha(x,t)\,dx\\
&+ \underline\lambda^{-1}\int_0^1e^{-\delta x}\dot{\hat q}^T(t)P^T(x,t)P(x,t)\dot{\hat q}(t)\,dx,\end{aligned} \tag{E.185}$$
where $\bar\omega$, $\bar\sigma$, $\bar b_1$, $\bar b_2$, $\bar\gamma_1$, $\bar\gamma_2$ and $\bar q$ bound the absolute values of all elements in $\omega$, $\Sigma$, $\hat B_1$, $\hat b_2$, $\Gamma_1$, $\Gamma_2$ and $\hat q$, respectively. Assuming $\delta\ge1$, one can shorten it down to
$$\begin{aligned}\dot V_3(t) \le{}& -e^{-\delta}\alpha^T(1,t)\alpha(1,t) + 2n\bar q^2\beta^2(0,t) - \big[\delta\underline\lambda - 2n\bar\sigma - n\bar b_1^2 - 7\big]V_3(t)\\
&+ n(\bar\omega^2+\bar b_2^2)\underline\lambda^{-1}\bar\mu V_4(t) + \big(n\bar\gamma_1^2\underline\lambda^{-1} + 2n\bar q^2\big)\hat\epsilon^2(0,t)\\
&+ \dot{\hat q}^T(t)\dot{\hat q}(t)\sum_{i=1}^n\int_0^1e^{-\delta x}P_i^T(x,t)\Lambda^{-1}(x)P_i(x,t)\,dx\end{aligned} \tag{E.186}$$
where $P_i$ are the columns of $P$. Using (16.47a) and the property (16.17c), we can write
$$\dot V_3(t) \le 2n\bar q^2\beta^2(0,t) - \big[\delta\underline\lambda - h_1\big]V_3(t) + h_2V_4(t) + h_3\hat\epsilon^2(0,t) + l_1(t)V_5(t) + l_2(t)V_6(t) \tag{E.187}$$
for some positive constants $h_1$, $h_2$, $h_3$ independent of $\delta$ and $k$, and integrable functions $l_1$ and $l_2$.
Bound on V̇4:
From (16.52b) we find
$$\begin{aligned}\dot V_4(t) ={}& e^k\beta^2(1,t) - \beta^2(0,t) - k\int_0^1e^{kx}\beta^2(x,t)\,dx\\
&+ 2\int_0^1e^{kx}\mu^{-1}(x)\beta(x,t)T[P,r^T](x,t)\dot{\hat q}(t)\,dx\\
&- 2\int_0^1e^{kx}\mu^{-1}(x)\beta(x,t)T[\Gamma_1,\Gamma_2](x,t)\hat\epsilon(0,t)\,dx\\
&- 2\int_0^1e^{kx}\mu^{-1}(x)\beta(x,t)\hat K^u(x,0,t)\Lambda(0)\hat q(t)\hat\epsilon(0,t)\,dx\\
&- 2\int_0^1e^{kx}\mu^{-1}(x)\beta(x,t)\int_0^x\hat K_t^u(x,\xi,t)\alpha(\xi,t)\,d\xi\,dx\\
&- 2\int_0^1e^{kx}\mu^{-1}(x)\beta(x,t)\int_0^x\hat K_t^v(x,\xi,t)T^{-1}[\alpha,\beta](\xi,t)\,d\xi\,dx.\end{aligned} \tag{E.188}$$

Using Cauchy–Schwarz' inequality on the cross terms gives
$$\begin{aligned}\dot V_4(t) \le{}& -\beta^2(0,t) - \big[k\underline\mu - 5\big]V_4(t) + \int_0^1e^{kx}\mu^{-1}(x)\big(T[P,r^T](x,t)\dot{\hat q}(t)\big)^2dx\\
&+ 2e^k\underline\mu^{-1}G_1^2\|\Gamma_1\|^2\hat\epsilon^2(0,t) + 2e^k\underline\mu^{-1}G_2^2\|\Gamma_2\|^2\hat\epsilon^2(0,t)\\
&+ n^2\bar q^2\bar K^2\bar\lambda^2\underline\mu^{-1}e^k\hat\epsilon^2(0,t) + \|\hat K_t^u(t)\|^2\underline\mu^{-1}e^k\|\alpha(t)\|^2\\
&+ 2\|\hat K_t^v(t)\|^2\underline\mu^{-1}e^kG_3^2\|\alpha(t)\|^2 + 2\|\hat K_t^v(t)\|^2\underline\mu^{-1}e^kG_4^2\|\beta(t)\|^2\end{aligned} \tag{E.189}$$
where we have used $\bar K$ as defined in (16.33), $\underline\mu$ is a lower bound on $\mu$, and $\bar\lambda$ bounds all elements in $\Lambda$. In view of Theorem 16.1 and property (16.34), inequality (E.189) can be written
$$\dot V_4(t) \le -\beta^2(0,t) - \big[k\underline\mu - 5\big]V_4(t) + l_3(t)V_3(t) + l_4(t)V_4(t) + l_5(t)V_5(t) + l_6(t)V_6(t) + h_4e^k\hat\epsilon^2(0,t) \tag{E.190}$$
for some positive constant $h_4$ independent of $\delta$, $k$ and integrable functions $l_3$, $l_4$, $l_5$ and $l_6$.
Bound on V̇5:
From (16.52c) we find
$$\dot V_5(t) \le -e^{-\delta}|W(1,t)|^2 + \sum_{i=1}^nW_i^T(0,t)W_i(0,t) - \big[\underline\lambda\delta - 2n\bar\sigma - 2n\bar b_1\big]V_5(t) \tag{E.191}$$
where $\bar b_1$ bounds the absolute values of all elements in $\hat B_1$. Inserting the boundary condition (16.46c), we obtain
$$\dot V_5(t) \le -e^{-\delta}|W(1,t)|^2 + 2n\beta^2(0,t) + 2n\hat\epsilon^2(0,t) - \big[\underline\lambda\delta - 2n\bar\sigma - 2n\bar a\big]V_5(t), \tag{E.192}$$
which can be written
$$\dot V_5(t) \le -e^{-\delta}|W(1,t)|^2 + 2n\beta^2(0,t) + 2n\hat\epsilon^2(0,t) - \big[\underline\lambda\delta - h_5\big]V_5(t) \tag{E.193}$$
for some positive constant $h_5$ independent of $\delta$ and $k$.


Bound on V̇6:
From (16.52d) we find
$$\dot V_6(t) \le -|z(0,t)|^2 - \big[k\underline\mu - 2\big]V_6(t) + n\underline\mu^{-1}\bar\lambda e^{k+\delta}\big(\bar{}^2 + \bar b_2^2\big)V_5(t) \tag{E.194}$$
where $\bar{}$ and $\bar b_2$ bound all elements in $$ and $\hat b_2$, respectively. Inequality (E.194) can be written
$$\dot V_6(t) \le -|z(0,t)|^2 + h_6e^{k+\delta}V_5(t) - \big[k\underline\mu - 2\big]V_6(t) \tag{E.195}$$
for some positive constant $h_6$ independent of $\delta$ and $k$. □

E.11 Proof of Lemma 17.9

Bound on V̇1:
Differentiating $V_1$ in (17.85a) with respect to time, inserting the dynamics (17.80a), integrating by parts and using Young's inequality on the cross terms, one obtains
$$\begin{aligned}\dot V_1 \le{}& -\eta^2(0,t) - \Big(\frac12 - 9k\Big)\int_0^1(1+x)\eta^2(x,t)\,dx + \frac{2}{k}\bar g^2\hat\epsilon^2(0,t)\\
&+ \frac{2}{k}\bar\mu^{-2}\int_0^1\big(\dot{\hat\nu}^T(t)T[h](x,t)\big)^2dx + \frac{2}{k}\bar\mu^{-2}\int_0^1\big(\dot{\hat\nu}^T(t)T[\vartheta](x,t)\big)^2dx\\
&+ \frac{2}{k}\bar\mu^{-2}\int_0^1\dot{\hat\rho}^2(t)T^2[\psi](x,t)\,dx\\
&+ \frac{2}{k}\bar\mu^{-2}\int_0^1T^2\Big[\int_0^1\hat\kappa_t(\xi,t)P(x,\xi,t)\,d\xi\Big](x,t)\,dx\\
&+ \frac{2}{k}\bar\mu^{-2}\int_0^1T^2\Big[\int_0^1\hat\kappa_t(\xi,t)M(x,\xi,t)\,d\xi\Big](x,t)\,dx\\
&+ \frac{2}{k}\bar\mu^{-2}\int_0^1T^2\Big[\int_0^1\hat\theta_t(\xi,t)N(x,\xi,t)\,d\xi\Big](x,t)\,dx\\
&+ \frac{2}{k}\bar\mu^{-2}\int_0^1T^2\Big[\int_x^1\hat\theta_t(\xi,t)\phi(1-(\xi-x),t)\,d\xi\Big](x,t)\,dx\\
&+ \frac{2}{k}\bar\mu^{-2}\int_0^1\Big(\int_0^x\hat g_t(x-\xi,t)T^{-1}[\eta](\xi,t)\,d\xi\Big)^2dx\end{aligned} \tag{E.196}$$
for some arbitrary positive constant $k$. Choosing $k=\frac{1}{36}$ and using Cauchy–Schwarz' inequality gives
$$\begin{aligned}\dot V_1(t) \le{}& -\eta^2(0,t) - \frac14\int_0^1(1+x)\eta^2(x,t)\,dx + 72\bar g^2\hat\epsilon^2(0,t)\\
&+ 72\bar\mu^{-2}G_1^2|\dot{\hat\nu}(t)|^2\|h(t)\|^2 + 72\bar\mu^{-2}G_1^2|\dot{\hat\nu}(t)|^2\|\vartheta(t)\|^2\\
&+ 72\bar\mu^{-2}\dot{\hat\rho}^2(t)G_1^2\|\psi(t)\|^2 + 72\bar\mu^{-2}G_1^2\|\hat\kappa_t(t)\|^2\|P(t)\|^2\\
&+ 72\bar\mu^{-2}G_1^2\|\hat\kappa_t(t)\|^2\|M(t)\|^2 + 72\bar\mu^{-2}G_1^2\|\hat\theta_t(t)\|^2\|N(t)\|^2\\
&+ 72\bar\mu^{-2}G_1^2\|\hat\theta_t(t)\|^2\|\phi(t)\|^2 + 72\bar\mu^{-2}G_2^2\|\hat g_t(t)\|^2\|\eta(t)\|^2.\end{aligned} \tag{E.197}$$
Since $\|\vartheta\|$, $\|M\|$, $\|N\|$ are all bounded (Assumption 17.2), this can be written
$$\dot V_1(t) \le -\eta^2(0,t) - \frac{\bar\mu}{4}V_1(t) + l_1(t)V_1(t) + l_2(t)V_3(t) + l_3(t)V_4(t) + l_4(t)V_5(t) + l_5(t)V_6(t) + l_6(t) + b_1\hat\epsilon^2(0,t) \tag{E.198}$$
where $l_1\ldots l_6$ are all bounded and integrable functions (Lemmas 17.7 and 17.8), and $b_1$ is a positive constant.
Bound on V̇2:
Differentiating $V_2$ in (17.85b) with respect to time, inserting the dynamics (17.42a) and integrating by parts,
$$\dot V_2(t) \le -|w(1,t)|^2 + 2|w(0,t)|^2 - \|w(t)\|^2. \tag{E.199}$$
Inserting the boundary condition (17.42a), and noting that
$$z(0,t) = \hat z(0,t) + \hat\epsilon(0,t) = \eta(0,t) + \hat\epsilon(0,t), \tag{E.200}$$
we obtain the bound
$$\dot V_2(t) \le -|w(1,t)|^2 + 4n\eta^2(0,t) + 4n\hat\epsilon^2(0,t) - \frac12\bar\lambda_1V_2(t). \tag{E.201}$$

Bound on V̇3:
Similarly, differentiating $V_3$ in (17.85c) with respect to time, inserting the dynamics (17.45b), integrating by parts and inserting the boundary condition (17.45b), we obtain in a similar manner the upper bound
$$\dot V_3(t) \le -\phi^2(0,t) + 4\eta^2(0,t) + 4\hat\epsilon^2(0,t) - \frac{\bar\mu}{2}V_3(t). \tag{E.202}$$

Bound on V̇4:
Differentiating $V_4$ in (17.85d) with respect to time, inserting the dynamics (17.45c), integrating by parts and inserting the boundary condition (17.45c), yields
$$\dot V_4(t) = 2|w(1,t)|^2 - |h(0,t)|^2 - \frac{\bar\mu}{2}V_4(t) \tag{E.203}$$

Bound on V̇5:
For $V_5$ in (17.85e), using the dynamics (17.45e), integration by parts and inserting the boundary condition (17.45e), yields
$$\dot V_5(t) = 2\|w_1(t)\|^2 - \|p_0(t)\|^2 - \frac{\bar\mu}{2}V_5(t). \tag{E.204}$$

Bound on V̇6:
Similarly, for $V_6$ in (17.85f), the dynamics and boundary condition (17.45a) yield
$$\dot V_6(t) = 2U^2(t) - \psi^2(0,t) - \frac{\bar\mu}{2}V_6(t). \tag{E.205}$$
Inserting the control law (17.75) and using Young's and Cauchy–Schwarz' inequalities, we obtain
$$\begin{aligned}\dot V_6(t) \le{}& 14M_\rho^2\|\hat g(t)\|^2G_2^2\|\eta(t)\|^2 + 14M_\rho^2|\hat\nu(t)|^2|w(1,t)|^2\\
&+ 14M_\rho^2|\hat\nu(t)|^2|a(1,t)|^2 + 14M_\rho^2\|\hat\kappa(t)\|^2\|w_1(t)\|^2\\
&+ 14M_\rho^2\|\hat\kappa(t)\|^2\|a_1(t)\|^2 + 14M_\rho^2\|\hat\theta(t)\|^2\|b(t)\|^2\\
&+ 14M_\rho^2\bar r^2 - \psi^2(0,t) - \|\psi(t)\|^2\end{aligned} \tag{E.206}$$
where we have used Assumption 17.2, and defined $M_\rho = \frac{1}{\min(|\underline\rho|,|\bar\rho|)}$. Inequality (E.206) can be written
$$\dot V_6(t) \le b_2V_1(t) + b_3V_2(t) - \frac{\bar\mu}{2}V_6(t) + b_4|w(1,t)|^2 - \psi^2(0,t) + b_5 \tag{E.207}$$
for some positive constants $b_2\ldots b_5$, with $b_5$ depending on $\bar r$. □

E.12 Proof of Lemma 20.4

Bound on V̇2:
Differentiating $V_2$, inserting the dynamics (20.59a), integrating by parts and using Cauchy–Schwarz' inequality on the cross terms, bounding all the coefficients, inserting the boundary conditions, and evaluating all the double integrals, we find, when assuming $\delta>1$,
$$\begin{aligned}\dot V_2(t) \le{}& -e^{-\delta}\lambda_1\alpha^T(1,t)\alpha(1,t) + 2mn\bar q^2\lambda_n\beta^T(0,t)\beta(0,t)\\
&- \big[\delta\lambda_1 - 2n\bar\sigma - n\bar\sigma^2 - n\bar\kappa^2 - 3 - n\bar\kappa^2 - n\bar\gamma^2\big]\int_0^1e^{-\delta x}\alpha^T(x,t)\alpha(x,t)\,dx\\
&+ 2\int_0^1e^{-\delta x}\beta^T(x,t)\beta(x,t)\,dx + \big(1 + 2mn\bar q^2\lambda_n\big)\hat\epsilon^T(0,t)\hat\epsilon(0,t)\\
&+ 2\dot{\hat q}^T(t)\dot{\hat q}(t)H_1^2\sum_{i=1}^{nm}\|A_i(t)\|^2 + 2\dot{\hat q}^T(t)\dot{\hat q}(t)H_2^2\sum_{i=1}^{nm}\|B_i(t)\|^2\\
&+ \dot{\hat c}^T(t)\dot{\hat c}(t)H_2^2\sum_{i=1}^{nm}\|\Omega_i(t)\|^2\end{aligned} \tag{E.208}$$
where $\bar\sigma$ bounds all the elements of the matrices $\Sigma$, $\bar\kappa$ bounds the elements of $\kappa^+$ and $\kappa^-$, $\bar\gamma$ bounds the elements of $P^+$, and $\bar q$ bounds $\hat q$. Define the positive constants
$$h_1 = 2mn\bar q^2\lambda_n,\qquad h_2 = 2n\bar\sigma + n\bar\sigma^2 + n\bar\kappa^2 + 3 + n\bar\kappa^2 + n\bar\gamma^2 \tag{E.209a}$$
$$h_3 = 2mn\bar q^2\lambda_n + 1 \tag{E.209b}$$
and the integrable functions
$$l_1(t) = 2\dot{\hat q}^T(t)\dot{\hat q}(t)H_1^2e^\delta,\qquad l_2(t) = 2\dot{\hat q}^T(t)\dot{\hat q}(t)H_2^2\underline\pi^{-1} \tag{E.210a}$$
$$l_3(t) = \dot{\hat c}^T(t)\dot{\hat c}(t)H_2^2\underline\pi^{-1}, \tag{E.210b}$$
where $\underline\pi$ is a lower bound on the elements of $\Pi$, we obtain
$$\dot V_2(t) \le -e^{-\delta}\lambda_1|\alpha(1,t)|^2 + h_1|\beta(0,t)|^2 - \big[\delta\lambda_1 - h_2\big]V_2(t) + 2d^{-1}V_3(t) + h_3|\hat\epsilon(0,t)|^2 + l_1(t)V_4(t) + l_2(t)V_5(t) + l_3(t)V_6(t). \tag{E.211}$$

Bound on V̇3:
Using the same steps as for $V_2$, we obtain using (20.59b) and assuming $k>1$
$$\begin{aligned}\dot V_3(t) \le{}& -\mu_m\beta^T(0,t)D\beta(0,t) - (k\mu_m - 7)V_3(t)\\
&+ \int_0^1e^{kx}\beta^T(0,t)G^T(x)DG(x)\beta(0,t)\,dx\\
&+ 2\bar de^kG_1^2\|P^+\|^2\hat\epsilon^T(0,t)\hat\epsilon(0,t) + 2\bar de^kG_2^2\|P^-\|^2\hat\epsilon^T(0,t)\hat\epsilon(0,t)\\
&+ \bar d\bar K^2m^2n^2\lambda_n^2\bar q^2e^k\hat\epsilon^T(0,t)\hat\epsilon(0,t) + 2\bar de^k\dot{\hat q}^T(t)\dot{\hat q}(t)G_1^2\|P(t)\|^2\\
&+ 2\bar de^k\dot{\hat q}^T(t)\dot{\hat q}(t)G_2^2\|R(t)\|^2 + 2\bar de^k\dot{\hat c}^T(t)\dot{\hat c}(t)G_1^2\|W(t)\|^2\\
&+ 2\bar de^k\dot{\hat c}^T(t)\dot{\hat c}(t)G_2^2\|Z(t)\|^2 + \|\hat K_t^u(t)\|^2\bar de^k\|\alpha(t)\|^2\\
&+ 2\|\hat K_t^v(t)\|^2\bar de^kG_3^2\|\alpha(t)\|^2 + 2\|\hat K_t^v(t)\|^2\bar de^kG_4^2\|\beta(t)\|^2\end{aligned} \tag{E.212}$$
where $\bar d$ bounds all the elements of $D$. Consider the third term on the right-hand side. Written out, and using Cauchy–Schwarz' inequality, we can bound the term as follows:
$$\beta^T(0,t)G^T(x)DG(x)\beta(0,t) \le \sum_{i=1}^m\beta_i^2(0,t)\sum_{j=1}^m\sum_{k=\max(i+1,j+1)}^md_k\bar g^2, \tag{E.213}$$
where $\bar g$ bounds all the elements of $G$, and hence the first and the third terms can be bounded as
$$-\beta^T(0,t)\Big[\mu_mD - e^kG^T(x)DG(x)\Big]\beta(0,t) \le -\sum_{i=1}^m\beta_i^2(0,t)\Big[\mu_md_i - e^k\bar g^2\sum_{j=1}^m\sum_{k=\max(i+1,j+1)}^md_k\Big]. \tag{E.214}$$
Thus, we can recursively determine the coefficients $d_i$. Initially, choose
$$d_m = 1 \tag{E.215}$$
then choose
$$d_{m-1} > e^k\mu_m^{-1}\bar g^2(m-1)d_m \tag{E.216a}$$
$$d_{m-2} > e^k\mu_m^{-1}\bar g^2(m-2)d_{m-1} + d_{m-1} \tag{E.216b}$$
$$d_{m-3} > e^k\mu_m^{-1}\bar g^2(m-3)d_{m-2} + d_{m-2} \tag{E.216c}$$
and so on. Choosing D like this, we can obtain

$$\dot V_3(t) \le -h_4|\beta(0,t)|^2 - (k\lambda_1 - 7)V_3(t) + e^k\bar dh_5|\hat\epsilon(0,t)|^2 + l_4(t)V_2(t) + l_5(t)V_3(t) + l_6(t)V_4(t) + l_7(t)V_5(t) + l_8(t)V_6(t) \tag{E.217}$$
for some positive constant $h_4$ depending on the chosen values of $D$, the positive constant
$$h_5 = 2G_1^2\big(\|P^+\|^2 + \|P^-\|^2\big) + \bar K^2m^2n^2\lambda_n^2\bar q^2 \tag{E.218}$$
independent of $k$, and the integrable functions
$$l_4(t) = \|\hat K_t^u(t)\|^2\bar de^{\delta+k} + 2\|\hat K_t^v(t)\|^2\bar de^{\delta+k}G_3^2 \tag{E.219a}$$
$$l_5(t) = 2\|\hat K_t^v(t)\|^2\bar de^kG_4^2 \tag{E.219b}$$
$$l_6(t) = 4\bar de^k\dot{\hat q}^T(t)\dot{\hat q}(t)G_1^2H_1^2e^\delta \tag{E.219c}$$
$$l_7(t) = 2\bar de^k\dot{\hat q}^T(t)\dot{\hat q}(t)G_1^2\big(2H_2^2 + H_3^2\big)\underline\pi^{-1} \tag{E.219d}$$
$$l_8(t) = 2\bar de^k\dot{\hat c}^T(t)\dot{\hat c}(t)G_1^2\big(H_2^2 + H_3^2\big)\underline\pi^{-1}. \tag{E.219e}$$

Bound on V̇4:
Using the same steps as above, and assuming $\delta>1$, we find
$$\dot V_4(t) \le -\lambda_1e^{-\delta}\sum_{i=1}^{nm}A_i^T(1,t)A_i(1,t) + \lambda_n\sum_{i=1}^{nm}A_i^T(0,t)A_i(0,t) - \big[\delta\lambda_1 - 2n\bar\sigma - 1 - n\bar D^2\big]V_4(t), \tag{E.220}$$
where $\bar D$ bounds the elements of $D^+$ and $D^-$. Inserting the boundary condition (20.62c), we obtain
$$\dot V_4(t) \le -\lambda_1e^{-\delta}|A(1,t)|^2 + 2\lambda_nmn\big(|\beta(0,t)|^2 + |\hat\epsilon(0,t)|^2\big) - \big[\delta\lambda_1 - 2n\bar\sigma - 1 - n\bar D^2\big]V_4(t). \tag{E.221}$$
By defining
$$h_6 = 2n\bar\sigma + 1 + n\bar D^2,\qquad h_7 = 2\lambda_nmn \tag{E.222}$$
we obtain
$$\dot V_4(t) \le -\lambda_1e^{-\delta}|A(1,t)|^2 + h_7|\beta(0,t)|^2 + h_7|\hat\epsilon(0,t)|^2 - \big[\delta\lambda_1 - h_6\big]V_4(t). \tag{E.223}$$

Bound on V̇5:
Differentiating $V_5$, inserting the dynamics, integrating by parts, inserting the boundary condition and using Young's inequality, one can obtain, when assuming $k>1$,
$$\begin{aligned}\dot V_5(t) \le{}& \mu_1e^k\sum_{i=1}^{nm}\int_0^1B_i^T(x,t)H^T(x)\Pi H(x)B_i(x,t)\,dx - \mu_m\sum_{i=1}^{nm}B_i^T(0,t)\Pi B_i(0,t)\\
&- k\mu_m\sum_{i=1}^{nm}\int_0^1e^{kx}B_i^T(x,t)\Pi B_i(x,t)\,dx + m\bar\sigma^2\sum_{i=1}^{nm}\int_0^1e^{kx}B_i^T(x,t)\Pi\Lambda^-B_i(x,t)\,dx\\
&+ m\bar D^2\sum_{i=1}^{nm}\int_0^1e^{kx}B_i^T(x,t)\Pi\Lambda^-B_i(x,t)\,dx + 2\sum_{i=1}^{nm}\int_0^1e^{kx}A_i^T(x,t)\Pi A_i(x,t)\,dx.\end{aligned} \tag{E.224}$$
Since $H$ has the same strictly triangular structure as $G$, one can use the same recursive argument as for $D$ in $V_3$ for determining the coefficients of $\Pi$. This results in
$$\dot V_5(t) \le -h_8e^kV_5(t) - \mu_m\underline\pi|B(0,t)|^2 + 2\bar\pi e^{\delta+k}V_4(t) \tag{E.225}$$
for some positive constant $h_8$.


Bound on V̇6:
Following the same steps as for $V_5$, inserting the boundary condition and using Young's inequality, we obtain
$$\begin{aligned}\dot V_6(t) \le{}& 4\sum_{i=1}^{nm}\int_0^1\Omega_i^T(x,t)H^T(x)\Pi H(x)\Omega_i(x,t)\,dx + 8n\bar\pi\alpha^T(1,t)\alpha(1,t)\\
&+ 8n\bar\pi\hat e^T(1,t)\hat e(1,t) - \sum_{i=1}^{nm}\Omega_i^T(0,t)\Pi\Omega_i(0,t) - \sum_{i=1}^{nm}\int_0^1\Omega_i^T(x,t)\Pi\Omega_i(x,t)\,dx\end{aligned} \tag{E.226}$$
where $\bar\pi$ is an upper bound for the elements of $\Pi$. Again, due to $H$ having the same structure as $G$, we can recursively choose the components of $\Pi$ so that the sum of the first and last components is negative, and hence obtain
$$\dot V_6(t) \le -h_9e^kV_6(t) + 8n\bar\pi|\alpha(1,t)|^2 + 8n\bar\pi|\hat e(1,t)|^2 - \underline\pi|\Omega(0,t)|^2 \tag{E.227}$$
for some positive constant $h_9$. □


Appendix F
Numerical Methods for Solving Kernel
Equations

F.1 Method 1: Successive Approximations

This method is suitable for Volterra (integral) equations. It iterates a sequence similar to the sequence (1.62) used in the proof of existence of a solution to the Volterra equation (1.58) in Lemma 1.1. Consider the Volterra equation
$$k(x) = f(x) + \int_0^xG(x,\xi)k(\xi)\,d\xi \tag{F.1}$$
in the variable $k$, and consider the sequence $\{k^0, k^1, k^2, \ldots, k^q, k^{q+1}, \ldots\}$ generated using
$$k^0(x) = f(x) \tag{F.2a}$$
$$k^q(x) = f(x) + \int_0^xG(x,\xi)k^{q-1}(\xi)\,d\xi,\qquad q\ge1. \tag{F.2b}$$
An approximate solution is then taken as
$$k \approx k^q \tag{F.3}$$
for a sufficiently large q (typically 30–100).
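The iteration (F.2) is straightforward to implement numerically. The following sketch represents k, f and G on a uniform grid and evaluates the Volterra integral in (F.2b) with the trapezoidal rule; the grid size, tolerance and the particular choices of f and G in the example are illustrative assumptions, not taken from the text.

```python
import numpy as np

def successive_approximations(f, G, N=101, q_max=100, tol=1e-12):
    """Iterate (F.2) for k(x) = f(x) + int_0^x G(x, xi) k(xi) dxi
    on [0, 1], with the integral evaluated by the trapezoidal rule."""
    x = np.linspace(0.0, 1.0, N)
    dx = x[1] - x[0]
    fx = f(x)                          # k^0 = f
    Gm = G(x[:, None], x[None, :])     # G(x_i, xi_j) on the full grid
    k = fx.copy()
    for _ in range(q_max):
        integral = np.zeros(N)
        for i in range(1, N):          # int_0^{x_i} G(x_i, xi) k(xi) dxi
            w = np.full(i + 1, dx)
            w[0] = w[-1] = 0.5 * dx
            integral[i] = w @ (Gm[i, :i + 1] * k[:i + 1])
        k_next = fx + integral         # (F.2b)
        if np.max(np.abs(k_next - k)) < tol:
            k = k_next
            break
        k = k_next
    return x, k

# For G = 1 and f = 1, the exact solution of (F.1) is k(x) = exp(x).
x, k = successive_approximations(lambda x: np.ones_like(x),
                                 lambda x, xi: np.ones_like(x * xi))
print(np.max(np.abs(k - np.exp(x))))   # O(dx^2) quadrature error
```

Since the Volterra operator is a contraction (Lemma 1.1), the successive differences decay factorially, so the tolerance is typically reached well within the 30–100 iterations quoted above.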

© Springer Nature Switzerland AG 2019
H. Anfinsen and O. M. Aamo, Adaptive Control of Hyperbolic PDEs,
Communications and Control Engineering,
https://doi.org/10.1007/978-3-030-05879-1

F.2 Method 2: Uniformly Gridded Discretization

F.2.1 Introduction

This method was originally proposed in Anfinsen and Aamo (2017a), and is based on discretization of the domain into a uniformly spaced grid. We will demonstrate the technique on the time-invariant PDE
$$\mu(x)K_x(x,\xi) + \lambda(\xi)K_\xi(x,\xi) = f(x,\xi)K(x,\xi) + g(x,\xi) \tag{F.4a}$$
$$K(x,0) = \int_0^xh(\xi)K(x,\xi)\,d\xi + m(x) \tag{F.4b}$$
defined over $\mathcal T$, with bounded parameters
$$\mu,\lambda\in C^1([0,1]),\qquad \lambda(x),\mu(x)>0,\ \forall x\in[0,1] \tag{F.5a}$$
$$f,g\in C(\mathcal T),\qquad h,m\in C([0,1]), \tag{F.5b}$$
which will be solved over the lower triangular part of a uniformly spaced grid with N × N nodes. The method extends straightforwardly to time-varying PDEs as well.

F.2.2 Main Idea

One well-known problem with solving Eqs. (F.4) is the numerical issue one faces when evaluating the spatial derivatives $K_x$ and $K_\xi$ at the points (1, 1) and (0, 0), respectively, as naively applying a finite-difference scheme results in the need to evaluate points outside the domain. The key to overcoming the numerical issues faced at the sharp corners of the domain is to treat both terms on the left-hand side of (F.4a) as a directional derivative, and to approximate the derivative of $K$ at a point $(x,\xi)$ using a finite-difference upwind scheme, using information from the direction of flow. Intuitively, $K$ represents information that convects from the bottom boundary and upwards to the right. This is depicted in Fig. F.1: the red boundary represents the boundary at which a boundary condition is specified, while the blue lines are characteristics indicating the direction of information flow.

For $K$ in (F.4), we approximate the left-hand side of (F.4a) as
$$\mu(x)K_x(x,\xi) + \lambda(\xi)K_\xi(x,\xi) \approx \frac{\sqrt{\mu^2(x)+\lambda^2(\xi)}}{\sigma(x,\xi)}\Big[K(x,\xi) - K\big(x-\sigma(x,\xi)\nu_1(x,\xi),\ \xi-\sigma(x,\xi)\nu_2(x,\xi)\big)\Big] \tag{F.6}$$

Fig. F.1 Boundary condition (red) and characteristics (blue)

where $\nu_1,\nu_2$ are the components of a unit vector in the direction of the characteristic, that is,
$$\nu(x,\xi) = \begin{bmatrix}\nu_1(x,\xi)\\ \nu_2(x,\xi)\end{bmatrix} = \frac{1}{\sqrt{\mu^2(x)+\lambda^2(\xi)}}\begin{bmatrix}\mu(x)\\ \lambda(\xi)\end{bmatrix} \tag{F.7}$$
and $\sigma(x,\xi)>0$ is a step length.

F.2.3 Discretization

The method starts by discretizing the domain $\mathcal T$ into the lower triangular part of an N × N grid, with discrete nodes defined for
$$1\le j\le i\le N, \tag{F.8}$$
constituting a total of $\frac12N(N+1)$ nodes. One such grid is displayed in Fig. F.2 for N = 4, with each node assigned a coordinate. The boundary condition (F.4b) is imposed along j = 1. Introducing the notation
$$\Delta = \frac{1}{N-1},\qquad x_i = \Delta i,\qquad \xi_j = \Delta j, \tag{F.9}$$
the discrete version of (F.6) can be stated as follows:
$$\mu(x_i)K_x(x_i,\xi_j) + \lambda(\xi_j)K_\xi(x_i,\xi_j) \approx \frac{\sqrt{\mu^2(x_i)+\lambda^2(\xi_j)}}{\sigma_{i,j}}\Big(K(x_i,\xi_j) - K\big(x_i-\sigma_{i,j}\nu_{i,j,1},\ \xi_j-\sigma_{i,j}\nu_{i,j,2}\big)\Big) \tag{F.10}$$

Fig. F.2 Discretization grid for N = 4, with nodes (i, j), 1 ≤ j ≤ i ≤ 4, and the boundary condition imposed along j = 1

where
$$\nu_{i,j} = \begin{bmatrix}\nu_{i,j,1}\\ \nu_{i,j,2}\end{bmatrix} \tag{F.11}$$
is a unit vector in the direction of the characteristic at the point $(x_i,\xi_j)$, that is,
$$\nu_{i,j} = \frac{1}{\sqrt{\mu^2(x_i)+\lambda^2(\xi_j)}}\begin{bmatrix}\mu(x_i)\\ \lambda(\xi_j)\end{bmatrix} \tag{F.12}$$
and $\sigma_{i,j}>0$ is the step length. Note that the evaluation point $K(x_i-\sigma_{i,j}\nu_{i,j,1},\ \xi_j-\sigma_{i,j}\nu_{i,j,2})$ is usually off-grid, and its value will have to be found by interpolating neighboring points on the grid.

F.2.4 Step Length

The performance of the proposed scheme depends on the step length $\sigma_{i,j}$ one chooses. One should choose $\sigma_{i,j}$ so that the evaluation point $K(x_i-\sigma_{i,j}\nu_{i,j,1},\ \xi_j-\sigma_{i,j}\nu_{i,j,2})$ is close to other points on the grid. A method for choosing $\sigma_{i,j}$ is proposed here. Depending on the components of the vector $\nu_{i,j}$, the back-traced vector $-\sigma_{i,j}\nu_{i,j}$ will cut through either the left-hand side of the square (blue arrow in Fig. F.3) or the bottom side (red arrow). In either case, the distance $\sigma_{i,j}$ can be computed so that one of the sides is hit. If the left-hand side is hit, the value $K(x_i-\sigma_{i,j}\nu_{i,j,1},\ \xi_j-\sigma_{i,j}\nu_{i,j,2})$ can be evaluated by simple linear interpolation of the points at $(i-1,j)$ and $(i-1,j-1)$. Similarly, if the bottom side is hit, the point is evaluated using linear interpolation of the points at $(i-1,j-1)$ and $(i,j-1)$.
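In code, the geometric construction above amounts to comparing the back-trace distances to the two cell edges. The helper below is an illustrative sketch with hypothetical names, not from the text: it returns the step length σ, which side of the cell is hit, and the two linear-interpolation weights.

```python
import numpy as np

def step_and_neighbors(mu_i, lam_j, delta):
    """For a node with local speeds mu_i, lam_j on a grid with spacing
    delta, return the step length sigma along the characteristic, which
    cell edge the back-trace hits, and the interpolation weights:
      'left'   -> weights for nodes (i-1, j) and (i-1, j-1)
      'bottom' -> weights for nodes (i, j-1) and (i-1, j-1)"""
    nrm = np.hypot(mu_i, lam_j)
    nu1, nu2 = mu_i / nrm, lam_j / nrm   # unit vector as in (F.12)
    sigma_left = delta / nu1             # step needed to reach the left edge
    sigma_bottom = delta / nu2           # step needed to reach the bottom edge
    if sigma_left <= sigma_bottom:
        t = sigma_left * nu2 / delta     # fraction travelled down the edge
        return sigma_left, 'left', (1.0 - t, t)
    t = sigma_bottom * nu1 / delta
    return sigma_bottom, 'bottom', (1.0 - t, t)

sigma, side, (w_a, w_b) = step_and_neighbors(1.0, 1.0, 0.1)
# With mu = lam the back-trace hits the corner (i-1, j-1) exactly,
# so the weights degenerate to (0, 1).
```

The weight pair always sums to one, and the degenerate corner case shows why the scheme stays inside the grid even at the sharp corners of the domain.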
Fig. F.3 Choosing the step length: the back-traced point falls in the cell with corner nodes (i − 1, j), (i, j), (i − 1, j − 1) and (i, j − 1)

F.2.5 Solving the System of Equations

Using the above discretization scheme, a linear set of equations can be built and solved efficiently on a computer. In the case of adaptive schemes, most of the matrices can be computed off-line prior to implementation, and updating the parts that change with the adaptive laws should be a minor part of the implementation.
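Putting Sects. F.2.2–F.2.5 together, a minimal sketch of the assembled linear system might look as follows. It uses zero-based node indices with the boundary condition at j = 0, dense matrices, and a fallback to the corner node on the diagonal j = i; all function and variable names are illustrative assumptions, and a production implementation would exploit sparsity and precompute the time-invariant parts, as discussed above.

```python
import numpy as np

def solve_kernel(mu, lam, f, g, h, m, N=21):
    """Upwind solution of mu(x) K_x + lam(xi) K_xi = f K + g with
    K(x, 0) = int_0^x h(xi) K(x, xi) dxi + m(x), on the triangle
    0 <= xi <= x <= 1, nodes x_i = i*D, xi_j = j*D, 0 <= j <= i < N."""
    D = 1.0 / (N - 1)
    M = N * (N + 1) // 2
    idx = lambda i, j: i * (i + 1) // 2 + j   # row-wise node numbering
    A, b = np.zeros((M, M)), np.zeros(M)
    for i in range(N):
        x = i * D
        # Boundary row (F.4b), trapezoidal quadrature along xi:
        r = idx(i, 0)
        A[r, r] += 1.0
        if i > 0:
            for j in range(i + 1):
                w = D if 0 < j < i else 0.5 * D
                A[r, idx(i, j)] -= w * h(j * D)
        b[r] = m(x)
        # Interior rows: upwind directional derivative (F.10):
        for j in range(1, i + 1):
            xi = j * D
            r = idx(i, j)
            nrm = np.hypot(mu(x), lam(xi))
            nu1, nu2 = mu(x) / nrm, lam(xi) / nrm
            sL, sB = D / nu1, D / nu2     # back-step to left / bottom edge
            s = min(sL, sB)
            c = nrm / s
            A[r, r] += c - f(x, xi)
            b[r] = g(x, xi)
            if abs(sL - sB) < 1e-14 or (j == i and sL < sB):
                # corner (i-1, j-1) hit exactly; on the diagonal we also
                # fall back to the corner, since (i-1, j) is off-domain
                A[r, idx(i - 1, j - 1)] -= c
            elif sL < sB:                  # left edge: (i-1, j), (i-1, j-1)
                t = s * nu2 / D
                A[r, idx(i - 1, j)] -= c * (1.0 - t)
                A[r, idx(i - 1, j - 1)] -= c * t
            else:                          # bottom edge: (i, j-1), (i-1, j-1)
                t = s * nu1 / D
                A[r, idx(i, j - 1)] -= c * (1.0 - t)
                A[r, idx(i - 1, j - 1)] -= c * t
    return D, idx, np.linalg.solve(A, b)

# Pure-transport check: mu = lam = 1, f = g = h = 0, m(x) = x has the
# exact solution K(x, xi) = x - xi, which this grid reproduces exactly.
D, idx, K = solve_kernel(lambda x: 1.0, lambda xi: 1.0,
                         lambda x, xi: 0.0, lambda x, xi: 0.0,
                         lambda xi: 0.0, lambda x: x)
print(max(abs(K[idx(i, j)] - (i - j) * D)
          for i in range(21) for j in range(i + 1)))  # round-off level
```

In an adaptive setting, only the rows involving the estimated coefficients f, g, h and m change between time steps, which is what makes the off-line precomputation mentioned above worthwhile.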

References

Abramowitz M, Stegun IA (eds) (1975) Handbook of mathematical functions with formulas, graphs, and mathematical tables. Dover Publications Inc, New York
Anfinsen H, Aamo OM (2016) Boundary parameter and state estimation in 2 × 2 linear hyperbolic PDEs using adaptive backstepping. In: 2016 IEEE 55th conference on decision and control (CDC), Las Vegas, NV, USA
Anfinsen H, Aamo OM (2017a) Adaptive stabilization of 2 × 2 linear hyperbolic systems with an unknown boundary parameter from collocated sensing and control. IEEE Trans Autom Control 62(12):6237–6249
Anfinsen H, Aamo OM (2017b) Model reference adaptive control of n + 1 coupled linear hyperbolic PDEs. Syst Control Lett 109:1–11
Anfinsen H, Aamo OM (2018) A note on establishing convergence in adaptive systems. Automatica 93:545–549
Coron J-M, Vazquez R, Krstić M, Bastin G (2013) Local exponential H² stabilization of a 2 × 2 quasilinear hyperbolic system using backstepping. SIAM J Control Optim 51(3):2005–2035
Coron J-M, Hu L, Olive G (2017) Finite-time boundary stabilization of general linear hyperbolic balance laws via Fredholm backstepping transformation. Automatica 84:95–100
Di Meglio F, Vazquez R, Krstić M (2013) Stabilization of a system of n + 1 coupled first-order hyperbolic linear PDEs with a single boundary input. IEEE Trans Autom Control 58(12):3097–3111
Hu L, Vazquez R, Di Meglio F, Krstić M (2015) Boundary exponential stabilization of 1-D inhomogeneous quasilinear hyperbolic systems. SIAM J Control Optim (to appear)
Krstić M, Smyshlyaev A (2008) Backstepping boundary control for first-order hyperbolic PDEs and application to systems with actuator and sensor delays. Syst Control Lett 57(9):750–758
Krstić M, Kanellakopoulos I, Kokotović PV (1995) Nonlinear and adaptive control design. Wiley, New York
Tao G (2003) Adaptive control design and analysis. Wiley, New York
Index

A
Adaptive control
– identifier-based, 32, 70, 153
– Lyapunov, 30
– model reference, 103, 243, 332
– output feedback, 86, 111, 182, 196, 218, 250, 304, 338, 383
– state feedback, 70, 153, 165, 287
– swapping-based, 34, 86, 165, 182, 218, 243, 250, 287, 304, 332, 338, 383
Adaptive law, 68, 83, 100, 149, 161, 177, 194, 215, 240, 283, 302, 329, 379

B
Backstepping for PDEs, 24
Barbalat's lemma, 399
Bessel functions, 144

C
Canonical form, 97, 212, 234, 325
Cauchy–Schwarz' inequality, 405
Certainty equivalence, 32, 34
Classes of linear hyperbolic PDEs, 7
– 2 × 2 systems, 8, 117, 121, 147, 176, 207, 227
– n + 1 systems, 8, 257, 261, 281, 299, 317
– n + m systems, 9, 345, 349, 375
– scalar systems, 7, 45, 53, 67, 81, 95
Convergence, 10, 399
– minimum-time, 11, 354
– non-minimum time, 11, 350

D
Discretization, 473
Disturbance, 227
– parametrization, 228, 236
– rejection, 243
Drift flux model, 258

F
Filters, 34, 82, 98, 159, 176, 213, 236, 281, 299, 325, 376

H
Heat exchangers, 3

I
Identifier, 32, 68, 149

K
Korteweg de Vries equation, 46, 112

L
Laplace transform, 58
L2-stability, 10

M
Marcum Q-function, 144
Minkowski's inequality, 405
Model reference adaptive control, see adaptive control
Multiphase flow, 3, 258

N
Non-adaptive control, 53, 121, 261, 349
– output-feedback, 61, 140, 141, 276, 277, 364, 365
– state-feedback, 54, 123, 262, 350, 354
– tracking, 61, 141, 277, 367
Notation, 4

O
Observer, 60, 132, 268, 357
– anti-collocated, 133, 269, 358
– collocated, 137, 272, 363
Output feedback
– adaptive control, see adaptive control
– non-adaptive control, see non-adaptive control

P
Parabolic PDEs, 3
Persistency of excitation, 177
Predator–prey systems, 3
Projection, 68, 83, 100, 149, 161, 177, 215, 240, 283, 302, 329, 379, 397

R
Reference model, 96, 228, 318
Road traffic, 3, 46

S
Saint-Venant equations, 118
Saint-Venant–Exner model, 258
Square integrability, 10
Stability, 10, 399, 402
State feedback
– adaptive control, see adaptive control
– non-adaptive control, see non-adaptive control
Successive approximations, 17, 408, 471

T
Target system, 24
Time-delay, 3, 27
Transmission lines, 3, 118

U
Update law, see adaptive law

V
Volterra integral transformations, 14
– affine, 23
– invertibility, 18, 21, 23
– time-invariant, 14
– time-variant, 21

Y
Young's inequality, 406
Reference model, 96, 228, 318 Y
Road traffic, 3, 46 Young’s inequality, 406
