
SPRINGERBRIEFS IN ELECTRICAL AND COMPUTER ENGINEERING · CONTROL, AUTOMATION AND ROBOTICS

Claudia Califano
Claude H. Moog

Nonlinear Time-Delay Systems
A Geometric Approach
SpringerBriefs in Electrical and Computer Engineering

Control, Automation and Robotics

Series Editors
Tamer Başar, Coordinated Science Laboratory, University of Illinois at
Urbana-Champaign, Urbana, IL, USA
Miroslav Krstic, La Jolla, CA, USA
SpringerBriefs in Control, Automation and Robotics presents concise summaries of theoretical research and practical applications. Featuring compact, authored volumes of 50 to 125 pages, the series covers a range of research, report and instructional content. Typical topics might include:
• a timely report of state-of-the art analytical techniques;
• a bridge between new research results published in journal articles and a contextual literature review;
• a novel development in control theory or state-of-the-art development in
robotics;
• an in-depth case study or application example;
• a presentation of core concepts that students must understand in order to make
independent contributions; or
• a summation/expansion of material presented at a recent workshop, symposium
or keynote address.
SpringerBriefs in Control, Automation and Robotics allows authors to present their ideas and readers to absorb them with minimal time investment. Briefs are published as part of Springer's eBook collection, with millions of users worldwide, and are also available for individual print and electronic purchase.
Springer Briefs in a nutshell
• 50–125 published pages, including all tables, figures, and references;
• softcover binding;
• publication within 9–12 weeks after acceptance of complete manuscript;
• copyright is retained by author;
• authored titles only – no contributed titles; and
• versions in print, eBook, and MyCopy.
Indexed by Engineering Index.
Publishing Ethics: Researchers should conduct their research from research
proposal to publication in line with best practices and codes of conduct of relevant
professional bodies and/or national and international regulatory bodies. For more
details on individual ethics matters please see: https://www.springer.com/gp/
authors-editors/journal-author/journal-author-helpdesk/publishing-ethics/14214

More information about this subseries at http://www.springer.com/series/10198


Claudia Califano · Claude H. Moog

Nonlinear Time-Delay Systems
A Geometric Approach

Claudia Califano
Dipartimento di Ingegneria Informatica, Automatica e Gestionale “Antonio Ruberti”
Università di Roma La Sapienza
Rome, Italy

Claude H. Moog
Laboratoire des Sciences du Numérique de Nantes
CNRS
Nantes, France

ISSN 2191-8112 ISSN 2191-8120 (electronic)


SpringerBriefs in Electrical and Computer Engineering
ISSN 2192-6786 ISSN 2192-6794 (electronic)
SpringerBriefs in Control, Automation and Robotics
ISBN 978-3-030-72025-4 ISBN 978-3-030-72026-1 (eBook)
https://doi.org/10.1007/978-3-030-72026-1

Mathematics Subject Classification: 93C10, 93B05, 93B27, 93B18, 93B50, 93B52

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The publisher remains neutral with regard
to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

This book is devoted to nonlinear time-delay control systems. Although they include the class of linear time-delay systems, the specific mathematical tools valid (only) for this subclass of systems will not be developed or used herein. Thus, in this introductory chapter, a sketch is given of what can be found elsewhere (Richard 2003) and which will not be considered herein.

Linear Time-Delay Systems

For the subclass of linear time-delay systems in continuous time, the use of the Laplace transform yields quasi-polynomials in the Laplace variable s and in e^(−τs) (Gu et al. 2003; Michiels et al. 2007; Niculescu 2001). Among those systems, one may distinguish between the so-called retarded systems (Fridman 2014), described by differential equations where the highest differentiation order of the output, or the state, is not delayed, and the so-called neutral systems (Fridman 2001), whose equations involve delayed values of the highest differentiation order of the output, or the state.
Thus, the Laplace transform still enables an input–output analysis of linear time-delay systems (Olgac and Sipahi 2002; Sipahi et al. 2011). This approach can hardly be extended to the more general nonlinear time-delay systems which are the main focus of this book. Finite-dimensional approximations may be appealing, but are of limited interest due to stability issues (Insperger 2015).
Discrete-time linear systems with unknown delays are considered in Shi et al. (1999). Discrete-time linear systems with varying delays are treated with an ad hoc predictor design in Mazenc (2008).
Stability analysis and stabilization of linear time-delay systems require general
tools derived from the Lyapunov theory as in Fridman (2001), Kharitonov and
Zhabko (2003).
Stability of linear systems with switching delays is tackled in Mazenc (2021)
using trajectory-based methods and the so-called sup-delay inequalities.


Stability and Stabilization of Nonlinear Time-Delay Systems

Some of the most significant historic results obtained for the general class of
nonlinear time-delay systems are about the analysis of their stability, thanks to a
generalization of the Lyapunov theory, the so-called Krasovskii-type approach (Gu
et al. 2003). This approach can hardly be circumvented even in the case of linear
time-delay systems (Fridman 2001).
Early design problems for nonlinear time-delay control systems focus on the
stabilization. Considering a positive definite functional, the stabilizing control has
to render its time derivative negative definite. Advanced control methods such as backstepping make use of Lyapunov–Razumikhin–Krasovskii-type theorems, as in Battilotti (2020), Krstic and Bekiaris-Liberis (2012), Mazenc and Bliman (2006), Pepe and Jiang (2006).
Pepe and Jiang (2006). Also, delay-free nonlinear systems may be subject to a
delayed state feedback whose stability is tackled in Mazenc et al. (2008). Nonlinear
observer designs for systems subject to delayed output measurements are found in
Battilotti (2020), Van Assche (2011).
Discrete-time nonlinear systems including delays on the input are considered in Ushio (1996). A reduction process defining a delay-free system with equivalent stability properties is introduced in Mattioni et al. (2018, 2021).

Content

This book is essentially devoted to the so-called structure of nonlinear time-delay systems. Thanks to adapted (non-commutative) algebraic tools, one is able to follow the main features valid for delay-free systems displayed in Conte (2007), which are extended to the time-delay case of interest.
The structure of a system includes notions such as controllability, observability, and the related decompositions. The so-called structural control problems are based on inversion techniques, like feedback linearization (Oguchi et al. 2002), disturbance decoupling (Moog et al. 2000; Velasco et al. 1997), or noninteracting control.
More precisely, geometric tools such as the Lie Bracket are adapted for the class
of nonlinear time-delay systems involving a finite number of commensurable
delays. Thanks to these tools, the very first criterion for accessibility is provided for
this class of systems. Specialized to linear time-delay systems, it corresponds to
weak controllability (Fliess 1998).
The realization problem was partially solved for delay-free nonlinear systems, either in the continuous-time case in Crouch (1995) or in the discrete-time case (Kotta et al. 2001; Monaco and Normand-Cyrot 1984). The realization problem is
tackled in this book in relation to strongly or weakly observable retarded-type or
neutral-type state equations or input–output equations. The notions of order of a
system and whether it is of neutral or retarded type are questioned since a higher
order retarded-type realization may admit a lower order neutral-type realization as well, both being either weakly observable or strongly observable.
As for control design problems, exact linearization using feedback is one mainstream approach to the control of delay-free nonlinear systems. The same path is followed in this book, although causality of the feedback adds a new constraint and new conditions for the solvability of the problem.

Rome, Italy          Claudia Califano
Nantes, France       Claude H. Moog
Contents

1 Preliminaries . . . 1
  1.1 The Class of Systems . . . 1
  1.2 Integrability . . . 5
  1.3 Geometric Behavior . . . 5
  1.4 Accessibility and Observability Properties . . . 6
  1.5 Notation . . . 8
  1.6 Recalls on Non-commutative Algebra . . . 10
2 Geometric Tools for Time-Delay Systems . . . 15
  2.1 The Initialization of the Time-Delay System Versus the Initialization of the Delay-Free Extended System . . . 16
  2.2 Non-independence of the Inputs of the Extended System . . . 20
  2.3 The Differential Form Representation . . . 21
  2.4 Generalized Lie Derivative and Generalized Lie Bracket . . . 23
  2.5 Some Remarks on the Polynomial Lie Bracket . . . 27
  2.6 The Action of Changes of Coordinates . . . 31
  2.7 The Action of Static State Feedback Laws . . . 34
  2.8 Problems . . . 36
3 The Geometric Framework—Results on Integrability . . . 37
  3.1 Some Remarks on Left and Right Integrability . . . 38
  3.2 Integrability of a Right-Submodule . . . 39
    3.2.1 Involutivity of a Right-Submodule Versus its Integrability . . . 40
    3.2.2 Smallest 0-Integrable Right-Submodule Containing Δ(δ] . . . 43
    3.2.3 p-Integrability . . . 45
    3.2.4 Bicausal Change of Coordinates . . . 46
  3.3 Integrability of a Left-Submodule . . . 48
  3.4 Problems . . . 55
4 Accessibility of Nonlinear Time-Delay Systems . . . 57
  4.1 The Accessibility Submodules in the Delay Context . . . 60
  4.2 A Canonical Decomposition with Respect to Accessibility . . . 63
  4.3 On the Computation of the Accessibility Submodules . . . 67
  4.4 On t-Accessibility of Time-Delay Systems . . . 70
  4.5 Problems . . . 72
5 Observability . . . 75
  5.1 Decomposing with Respect to Observability . . . 77
    5.1.1 The Case of Autonomous Systems . . . 78
  5.2 On Regular Observability for Time-Delay Systems . . . 79
  5.3 Problems . . . 83
6 Applications of Integrability . . . 85
  6.1 Characterization of the Chained Form with Delays . . . 85
  6.2 Input–Output Feedback Linearization . . . 89
    6.2.1 Introductory Examples . . . 89
    6.2.2 Static Output Feedback Solutions . . . 90
    6.2.3 Hybrid Output Feedback Solutions . . . 92
  6.3 Input-State Linearization . . . 93
    6.3.1 Introductory Example . . . 94
    6.3.2 Solution . . . 94
  6.4 Normal Form . . . 95
  6.5 Problems . . . 96

Series Editor Biographies . . . 99
References . . . 101
Chapter 1
Preliminaries

Time-delay systems are modeled by ordinary differential equations which involve delayed variables (Gu et al. 2003; Krstic 2009). They are frequently encountered in many applications, for instance, in biology or biomedical systems (Timmer et al. 2004), in telerobotics and teleoperation (Islam et al. 2013; Kim et al. 2013), and in networked control systems.
The literature on time-delay systems is extensive and mainly concerns stabilization problems. The analysis of the structural properties of this class of systems is less developed, both with respect to the delay-free case and to the linear case. Even fundamental properties such as accessibility or observability, and the related design problems, are far from being completely understood.
The aim of this book is to introduce the reader to a new methodology, recently introduced in the literature, which has made it possible to obtain interesting results in the analysis of the structural properties of time-delay systems affected by constant commensurate delays. In order to give the reader a flavor of the important issues that arise in the delay context, we give hereafter an overview of the different problems that will be analyzed later on.
Such an overview will show that the results in this book feature the fundamentals of a novel approach to tackle nonlinear time-delay systems. They include useful algebraic results which are independent of any system dynamics.

1.1 The Class of Systems

Let us for the moment focus our attention on the class of single-input nonlinear time-
delay systems which can be described through the ordinary differential equation



ẋ(t) = F(x(t), . . . , x(t − sD)) + Σ_{i=0}^{l} G_i(x(t), . . . , x(t − sD)) u(t − iD),   (1.1)

where x(t) ∈ IR^n and u(t) ∈ IR represent, respectively, the current values of the state and of the control; D is a constant delay; s, l ≥ 0 are finite integers; and finally the functions G_i(x(t), . . . , x(t − sD)), i ∈ [0, l], and F(x(t), . . . , x(t − sD)) are analytic in their arguments. It is easy to see that such a class of systems covers the case of constant multiple commensurate delays as well (Gu et al. 2003).
Without loss of generality, in order to simplify the notation, we will assume D = 1. Then, system (1.1) reads as follows, for t ≥ 0:


ẋ(t) = F(x(t), . . . , x(t − s)) + Σ_{i=0}^{l} G_i(x(t), . . . , x(t − s)) u(t − i).   (1.2)

The differential equation (1.2) is influenced by the delayed state variables x(t − i), i ∈ [1, s], which, for t > s, can be recovered as solutions of the differential equations obtained by shifting (1.2), and thus given, for the shifts 1, . . . , s, by

ẋ(t − 1) = F(x(t − 1), . . . , x(t − s − 1)) + Σ_{i=0}^{l} G_i(x(t − 1), . . . , x(t − s − 1)) u(t − i − 1)
⋮   (1.3)
ẋ(t − s) = F(x(t − s), . . . , x(t − 2s)) + Σ_{i=0}^{l} G_i(x(t − s), . . . , x(t − 2s)) u(t − i − s).

The extended system (1.2), (1.3) now displays s additional delays on the state and input variables. Virtually, one can continue this process, adding an infinite number of differential equations which allow one to go back (or forward) in time, thus leading naturally to an infinite dimensional system.
As is well known, from a practical point of view, time-delay systems are at a certain point initialized with some arbitrary functions, which may not necessarily be recovered as a solution of the differential equations describing the system. The consequence is that any trajectory of the extended system (1.2), (1.3) is also a trajectory of (1.2), but conversely any trajectory of (1.2) is, in general, not a trajectory of (1.2), (1.3).
Example 1.1 Consider the scalar delay differential equation

ẋ(t) = −x(t − 1) (1.4)



defined for t ≥ 0. The computation of its trajectory requires a suitable initial condition, say the function ϕ0(τ) defined for τ ∈ [−1, 0).
The addition of the extension

ẋ(t − 1) = −x(t − 2) (1.5)

implies that with the initial condition ϕ0 (τ ) on the interval τ ∈ [−1, 0), the system
(1.4), (1.5) is well defined for t ≥ 1. Alternatively, if (1.4), (1.5) is considered for
t ≥ 0, then an additional initial condition ϕ1 (τ ) should be given on the interval
τ ∈ [−2, −1). However, it should be noted that the trajectory of (1.4) corresponding
to the initial condition ϕ0 (τ ) cannot be reproduced, in general, as a trajectory of
the extended system (1.4), (1.5), as there does not necessarily exist an initialization
ϕ1 (τ ) of (1.5) such that x(t − 1) = ϕ0 (t − 1) for t ∈ [0, 1).
The trajectories of system (1.4), (1.5) are special trajectories generated by
   
ż1(t) = −z2(t)
ż2(t) = −v(t)

as long as z 2 (t) = z 1 (t − 1) and v(t) = z 2 (t − 1) = z 1 (t − 2).
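Numerically, the solution of (1.4) from a given initial function can be computed by exactly this shifting mechanism, known as the method of steps: on each unit interval, the delayed value x(t − 1) is already known from the previous interval. A minimal sketch (the step size, the horizon, and the choice ϕ0 ≡ 1 are ours):

```python
# Method of steps for the scalar delay equation x'(t) = -x(t - 1):
# the unit delay spans an integer number of Euler steps, so the delayed
# value x(t - 1) is a lookup into the already-computed trajectory.
dt = 1e-3
N = int(round(1.0 / dt))     # steps spanning the unit delay

# History: phi0 = 1 on [-1, 0), and x(0) = 1 (our choice of initial function).
traj = [1.0] * (N + 1)       # traj[i] approximates x(-1 + i*dt)

# Integrate over t in [0, 2].
for k in range(N, 3 * N):
    traj.append(traj[k] - dt * traj[k - N])

x_at_1, x_at_2 = traj[2 * N], traj[3 * N]
# On [0, 1] the exact solution is x(t) = 1 - t, so x(1) = 0; continuing
# the steps gives x(2) = -1/2.
```

Each unit interval only requires the trajectory one delay earlier, which is precisely the role played by the shifted equations (1.5) in the extended system.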


More generally, consider again the class of systems (1.2), (1.3). It is natural to rename the delayed state variables as x(t − i) = z_i, as well as the delayed control variables as u(t − i) = v_i. In this way, one gets the cascade system


ż_0 = F(z_0, . . . , z_s) + Σ_{i=0}^{l} G_i(z_0, . . . , z_s) v_i
ż_1 = F(z_1, . . . , z_{s+1}) + Σ_{i=0}^{l} G_i(z_1, . . . , z_{s+1}) v_{i+1}   (1.6)
⋮

which has the nice block representation given in Fig. 1.1, where Σ_0 is the system described by z_0, with entries V_0 = (v_0, . . . , v_l) and (z_1, . . . , z_s); Σ_1 is the system described by the dynamics of z_1, with entries V_1 = (v_1, . . . , v_{l+1}) = V_0(t − 1) and (z_2, . . . , z_{s+1}); and Σ_2 is the system described by the dynamics of z_2, with entries V_2 = (v_2, . . . , v_{l+2}) and (z_3, . . . , z_{s+2}), . . .
It is immediately understood, however, that the block scheme in Fig. 1.1, as well as the differential equations (1.6), represents a broader class of systems, not necessarily delayed and generated by Eq. (1.2). In fact, in order to represent the time-delay system (1.2), one cannot neglect that the variables z_i(t) represent pieces of the same trajectory, which means that they cannot be initialized in an arbitrary way: for any real ℓ and integer h such that h ≤ ℓ ≤ h + 1, at any time t they have to satisfy the relation z_i(t − ℓ) = z_{i+j}(t − ℓ + j) for any integer j ∈ [1, h]. In a similar

Fig. 1.1 Block scheme of system (1.2), (1.3)

vein, the inputs V0 , V1 , . . . are generated by a unique signal u by considering also its
repeated delays. They are thus not independent.
Consider, for instance, the nonlinear time-delay system

ẋ(t) = [x1(t − τ), 1]^T u(t)   (1.7)

with τ a constant delay. As the single delayed variable is x1(t − τ), one may extend the dynamics (1.7) with

ẋ1 (t − τ ) = x1 (t − 2τ )u(t − τ )

and further by

ẋ1 (t − 2τ ) = x1 (t − 3τ )u(t − 2τ )
..
.

Note that for some special cases, such as

ẋ(t) = [x2(t − τ), 1]^T u(t),   (1.8)

it is sufficient to include the following extension:

ẋ2 (t − τ ) = u(t − τ ) (1.9)

to describe the full behavior of the states.


Nevertheless, it will be seen later in this book that both systems (1.7) and (1.8) are fully controllable. This is a major paradox with respect to delay-free driftless systems. In fact, the underlying mathematical point is about the left-annihilator [1, −x1(t − τ)] of the input matrix of system (1.7), or about the left-annihilator [1, −x2(t − τ)] of the input matrix of system (1.8). These left-annihilators are commonly denoted as dx1(t) − x1(t − τ)dx2(t) (respectively, dx1(t) − x2(t − τ)dx2(t)). Due to the delay, both one-forms dx1(t) − xi(t − τ)dx2(t), for i = 1, 2, fail to be integrable and thus they are not representative of a noncontrollable state. More precisely, this fact gives rise to issues on the characterization of integrability, as discussed further hereafter.

1.2 Integrability

A major difficulty in analyzing time-delay systems is their infinite dimensionality. Thus, in the nonlinear case, the integrability results provided by the Poincaré lemma or the Frobenius theorem have to be revised.
Consider, for instance, the one-form derived from system (1.8) for τ = 1:

ω(x(t), x(t − 1)) = dx1 − x2(t − 1)dx2.

It is not integrable, in the sense that there is no finite index j, no function λ(x(t), x(t − 1), . . . , x(t − j)), and no coefficient α(x(t), x(t − 1), . . . , x(t − j)) such that dλ(·) = α(·)ω(x(t), x(t − 1)).
The approach proposed in this book allows us to successfully address this kind of problem. This issue is analyzed in detail in Chap. 3.
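The obstruction can be checked mechanically in the simplest setting: treat the delayed variable x2(t − 1) as an extra coordinate z, so that ω = dx1 − z dx2 lives in the three variables (x1, x2, z). A one-form a dx + b dy + c dz admits an integrating factor if and only if (a, b, c) · curl(a, b, c) = 0, the classical Frobenius condition. A sketch with sympy (the variable names are ours; note this only tests integrability in these three fixed coordinates, while the notion above also allows further delayed variables, which is what Chap. 3 handles):

```python
import sympy as sp

# Coordinates: x1, x2, and z standing for the delayed variable x2(t - 1).
x1, x2, z = sp.symbols('x1 x2 z')

# omega = 1*dx1 + (-z)*dx2 + 0*dz
a, b, c = sp.Integer(1), -z, sp.Integer(0)

# Frobenius condition in three variables: omega . curl(omega) must vanish
# for omega to admit an integrating factor (i.e. be proportional to some dλ).
curl = (sp.diff(c, x2) - sp.diff(b, z),
        sp.diff(a, z) - sp.diff(c, x1),
        sp.diff(b, x1) - sp.diff(a, x2))
obstruction = sp.simplify(a * curl[0] + b * curl[1] + c * curl[2])
# obstruction = 1, which is nonzero: no λ and α with dλ = α·ω exist
# in these three variables.
```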

1.3 Geometric Behavior

Consider again the nonlinear time-delay system (1.8)

ẋ(t) = [x2(t − τ), 1]^T u(t).

In Fig. 1.2, the trajectory of the system is shown for a switching sequence of the input signal: the input switches from 1 to −1, and the sequence includes five such forward and backward cycles. Differently from what would happen in the delay-free case, when the input switches the trajectory does not stay on the same integral manifold of one single vector field. A new direction is taken in the motion, which shows that the delay adds some additional freedom for the control direction and yields accessibility for the example under consideration. This is a surprising property of single-input driftless nonlinear time-delay systems and contradicts preconceived ideas, as it could not happen for delay-free systems. As will be argued in Sect. 2.5, the motion in the x1
direction of the final point of each cycle has to be interpreted as the motion along
the nonzero Lie Bracket of the delayed control vector field with itself. For instance,
system (1.8) with its extension (1.9) reads
ẋ(t) = [x2(t − τ), 1, 0]^T u(t) + [0, 0, 1]^T u(t − τ).

The Lie Bracket of the two vector fields generates a third independent direction. These
general intuitive considerations are discussed formally in the book using precise
definitions.
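The net motion can be reproduced in simulation. A minimal sketch with our own parameters (τ = 1, x2 initialized at −10, one forward/backward input cycle u = +1 then u = −1): after the cycle, x2 returns exactly to its initial value, yet x1 has moved, whereas the same input applied to the delay-free analog ẋ1 = x2 u, ẋ2 = u brings the state back:

```python
# System (1.8) with tau = 1: x1' = x2(t - 1) u(t), x2' = u(t),
# driven by one forward/backward input cycle u = +1 on [0,1), u = -1 on [1,2).
dt = 1e-3
Nd = int(round(1.0 / dt))          # the delay spans Nd Euler steps
c = -10.0                          # x2 history and initial value

u = lambda t: 1.0 if t < 1.0 else -1.0

x1, x2 = 0.0, c
x2_past = []                       # stored values of x2 along the run
for k in range(2 * Nd):
    t = k * dt
    x2_past.append(x2)
    x2_delayed = c if k < Nd else x2_past[k - Nd]   # history is constant c
    x1 += dt * x2_delayed * u(t)
    x2 += dt * u(t)

# Delay-free comparison: y1' = y2 u, y2' = u, same input cycle.
y1, y2 = 0.0, c
for k in range(2 * Nd):
    t = k * dt
    y1 += dt * y2 * u(t)
    y2 += dt * u(t)

# x2 and y2 both return to c, but only the delayed system shows a net
# displacement in the x1 direction (here close to -1/2).
```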

Fig. 1.2 Forward and backward integration yields a motion in an additional specific direction. On the left, the state trajectory of the system initialized with x2 = −10 for t ∈ [−2, 0). The delay is fixed to τ = 2 s

1.4 Accessibility and Observability Properties

The accessibility/controllability of systems (1.7) and (1.8) has already been discussed above in relation to the topic of integrability. This property is unexpected for a driftless single-input system.
Now, consider, for instance,
ẋ(t) = [x2(t − τ), x3(t), 1]^T u(t).   (1.10)

System (1.10) is not fully controllable, as for τ ≠ 0 there is one autonomous element λ = x3²/2 − x2.¹ Define the change of variables (z1(t), z2(t), z3(t)) = (x1(t), x3(t), x3²(t)/2 − x2(t)). The dynamics in the new system of coordinates decomposes system (1.10) into the fully controllable subsystem

ż1(t) = (z2²(t − τ)/2 − z3(t − τ)) u(t)
ż2(t) = u(t)

and the noncontrollable subsystem

ż 3 (t) = 0.
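That λ = x3²/2 − x2 is autonomous can be verified by differentiating it along (1.10): its time derivative vanishes identically, for every input and every value of the delayed variable, while the second delay-free element of the footnote fails as soon as x2(t − τ) ≠ x2(t). A sketch with sympy (symbol names are ours; x2d stands for x2(t − τ)):

```python
import sympy as sp

# Current state, the delayed component x2(t - tau) (written x2d), and input.
x1, x2, x3, x2d, u = sp.symbols('x1 x2 x3 x2d u')

# Dynamics (1.10): xdot = (x2(t - tau), x3, 1)^T u
xdot = (x2d * u, x3 * u, u)

def time_derivative(lam):
    # Chain rule along the dynamics; lam depends only on current variables.
    return sp.expand(sum(sp.diff(lam, v) * f for v, f in zip((x1, x2, x3), xdot)))

lam1 = x3**2 / 2 - x2                    # candidate autonomous element
lam2 = -x3**3 / 3 + x2 * x3 - x1         # autonomous only in the delay-free case

d1 = time_derivative(lam1)               # identically zero
d2 = time_derivative(lam2)               # equals (x2 - x2d)*u: zero only when
                                         # x2d = x2, i.e. only for tau = 0
```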

More generally, it will be shown that a decomposition with respect to accessibility can always be carried out, and a full characterization of accessibility can be given in terms of a rank condition for nonlinear time-delay systems, as shown in Chap. 4. This result is a continuation of the celebrated geometric approach for delay-free systems;
¹ For τ = 0 one would reduce to the delay-free case with two autonomous elements, λ1 = x3²/2 − x2 and λ2 = −x3³/3 + x2x3 − x1.

the work (Hermann 1963) on accessibility has certainly been “the seminal paper
inspiring the geometric approach that started to be developed by Lobry, Jurdjevic,
Sussmann, Hermes, Krener, Sontag, Brockett in the early 1970’s” (quoted from Sallet
2007).
The above discussion also highlights that in the delay context two different notions of accessibility should be considered, and accordingly two different criteria. A first notion, which can be considered as an immediate generalization of Kalman's result, is based on the consideration of autonomous elements for the given system. Such autonomous elements may be functions of the current state and its delays. Looking instead at accessibility as the possibility of defining a control which moves the system from some initial point x0 at time t0 to a given final point x_f at some time t leads to a different notion and characterization of accessibility, actually a peculiarity of time-delay systems only. This new notion will be referred to as t-accessibility. The difference between the two cases is illustrated by the following simple linear example.
Example 1.2 Consider the linear system

ẋ1(t) = u(t)
ẋ2(t) = u(t − 1).

Such a system is not fully accessible due to the existence of the autonomous element λ = x2(t) − x1(t − 1). This means that the point reachable at time t is linked to the point reached at time t − 1 through the relation x2(t) − x1(t − 1) = constant. However, we may reach a given fixed point at a different time t̄.
Assume, for instance, that x1(t) = x10, x2(t) = x20, u(t) = 0 for t ∈ [−1, 0), and let x_f = (x1f, x2f)^T be the final point to be reached. Then on the interval [0, 1)

ẋ1 (t) = u(t)


ẋ2 (t) = 0,

so that x2(t) = x20 and, for a constant value of the control u(t) = u0, x1(t) = u0 t + x10 for t ∈ [0, 1). If x2f ≠ x20, it is immediately clear that one cannot reach the given final point for any t ∈ [0, 1). Now let the control change to u(t) = u1 on the interval t ∈ [1, 2). Then on such an interval

ẋ1(t) = u1
ẋ2(t) = u0

and x1(t) = (t − 1)u1 + x1(1) = (t − 1)u1 + u0 + x10 = x1f and x2(t) = (t − 1)u0 + x20 = x2f, so that once t > 1 is fixed, one gets

u0 = (x2f − x20)/(t − 1)
u1 = (x1f − x10 − u0)/(t − 1).
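The two-phase construction can be checked numerically. A minimal sketch (all numbers are our own choices: x0 = (0, 0), x_f = (3, 4), final time t̄ = 2, which gives u0 = 4 and u1 = −1, with zero input on [−1, 0)):

```python
# Example 1.2: x1' = u(t), x2' = u(t - 1), with u = 0 on [-1, 0),
# u = u0 on [0, 1) and u = u1 on [1, tbar). The formulas above give the
# piecewise-constant controls steering (x10, x20) to (x1f, x2f) at t = tbar.
x10, x20 = 0.0, 0.0
x1f, x2f = 3.0, 4.0
tbar = 2.0

u0 = (x2f - x20) / (tbar - 1.0)
u1 = (x1f - x10 - u0) / (tbar - 1.0)

def u(t):
    if t < 0.0:
        return 0.0
    return u0 if t < 1.0 else u1

dt = 1e-3
x1, x2 = x10, x20
for k in range(int(round(tbar / dt))):
    t = k * dt
    x1 += dt * u(t)
    x2 += dt * u(t - 1.0)          # the input enters x2 with a unit delay

# (x1, x2) lands on (x1f, x2f) up to the integration error.
```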

The situation is quite different when considering the property of observability. For instance, consider the single-output time-delay system

ẋ1 (t) = 0
ẋ2 (t) = 0
y(t) = x1 (t)x1 (t − τ ) + x2 (t)x2 (t − τ ).

As any time derivative of the output is zero, for t ≥ 0, the two state variables of
the above system cannot be estimated independently and the system is not fully
observable. As a matter of fact, differently from the delay-free case, there is no
invertible change of state coordinates which decomposes the system into an observ-
able subsystem and a nonobservable one. This contradicts common beliefs on this
matter. Additional assumptions are required (Zheng et al. 2011) to ensure that such
a decomposition still exists. This topic is addressed in Chap. 5.

1.5 Notation

This section is devoted to the notation which will be used in the book. In general, we will refer to the class of multi-input multi-output nonlinear time-delay systems


ẋ(t) = F(x(t), . . . , x(t − sD)) + Σ_{i=0}^{l} Σ_{j=1}^{m} G_{ji}(x(t), . . . , x(t − sD)) u_j(t − iD)   (1.11)
y(t) = H(x(t), . . . , x(t − sD)),

where x(t) ∈ IR^n and u(t) ∈ IR^m are the current values of the state and control variables; D is a constant delay; s, l ≥ 0 are integers; and the functions G_{ji}(x(t), . . . , x(t − sD)), j ∈ [1, m], i ∈ [0, l], F(x(t), . . . , x(t − sD)), and H(x(t), . . . , x(t − sD)) are analytic in their arguments. It is easy to see that such a class of systems includes the case of constant multiple commensurate delays as well (Gu et al. 2003).
The following notation taken from Califano and Moog (2017), Xia et al. (2002)
will be extensively used:
• x_[p,s]^T = (x^T(t + pD), . . . , x^T(t − sD)) ∈ IR^((p+s+1)n) denotes the vector consisting of the np future values x(t + iD), i ∈ [1, p], of the state, together with the first (s + 1)n components of the state of the infinite dimensional system associated to (1.11). When p = 0, the simpler notation x_[s]^T = x_[0,s]^T ∈ IR^((s+1)n) is used, with x_[0] = [x_{1,[0]}, . . . , x_{n,[0]}]^T = x(t) ∈ IR^n and u_[0] = [u_{1,[0]}, . . . , u_{m,[0]}]^T = u(t) ∈ IR^m the current values of the state and input variables.


• x_[p,s]^T(−i) = (x^T(t + pD − iD), . . . , x^T(t − sD − iD)). Accordingly, x_[s](−i) = x_[0,s](−i); x_{j,[0]}(−i) := x_j(t − iD) and u_{ℓ,[0]}(−i) := u_ℓ(t − iD) denote, respectively, the j-th and ℓ-th components of the instantaneous values of the state and input variables delayed by τ = iD. When no confusion is possible the subscript will be omitted, so that x will stand for x_[p,s], while x(−i) will stand for x_[p,s](−i).
• u_[j] := (u^T, u̇^T, . . . , (u^(j))^T)^T, where u_[−1] = ∅.
• K∗ denotes the field of meromorphic functions f(x_[p,s], u^[k]_[q,j]), with p, s, k, q, j ∈ IN. The subfield K of K∗, consisting of causal meromorphic functions, is obtained for p = q = 0.
• Given a function f(x_[p,s], u^[k]_[q,j]), the following notation is in order:

f(−l) := f(x_[p,s](−l), u^[k]_[q,j](−l)).

As an example, the function f(x) = x1(t)x2(t − D) − x1²(t − 2D) in the previous notation is written as f(x) = x_{1,[0]} x_{2,[0]}(−1) − x_{1,[0]}²(−2), and accordingly

f(−5) = x_{1,[0]}(−5) x_{2,[0]}(−6) − x_{1,[0]}²(−7).

• $d$ is the standard differential operator.
• $\delta$ represents the backward time-shift operator: for $a(\cdot), f(\cdot) \in \mathcal{K}^*$, $\delta[a\, df] = a(-1)\,\delta\, df = a(-1)\, df(-1)$.
• For $i \in [1, j]$, let $\tau_i(x_{[l]})$ be vector fields defined on an open set $\Omega_l \subseteq \mathbb{R}^{n(l+1)}$. Then $\Delta = \mathrm{span}\{\tau_i(x_{[l]}),\, i = 1, \ldots, j\}$ represents the distribution generated by the vector fields $\tau_i(\cdot)$ and defined on $\mathbb{R}^{n(l+1)}$. $\bar{\Delta}$ represents its involutive closure; that is, for any two vector fields $\tau_i(\cdot), \tau_j(\cdot) \in \bar{\Delta}$, the Lie Bracket
$$[\tau_i, \tau_j] = \frac{\partial \tau_i}{\partial x_{[l]}}\,\tau_j - \frac{\partial \tau_j}{\partial x_{[l]}}\,\tau_i \in \bar{\Delta}$$
(Isidori 1995). $\Delta_{[p,q]}$ will denote a distribution in $\mathrm{span}_{\mathcal{K}^*}\{\frac{\partial}{\partial x_{[0]}(p)}, \ldots, \frac{\partial}{\partial x_{[0]}(-q)}\}$.
• Let $\mathcal{E}$ denote the vector space spanned by the differentials $\{dx(t-i);\, i \in \mathbb{Z}\}$ over the field $\mathcal{K}^*$. The elements of $\mathcal{E}$ are called one-forms. A one-form $\omega$ is said to be exact if there exists $\varphi \in \mathcal{K}^*$ such that $\omega = d\varphi$. The use of exterior differentiation and of the wedge product allows one to state in a concise manner both the Poincaré lemma and the Frobenius theorem (Choquet-Bruhat et al. 1989):
  – The one-form $\omega$ is locally exact if and only if $d\omega = 0$.
  – The codistribution $\mathrm{span}_{\mathcal{K}}\{\omega_1, \ldots, \omega_q\}$ is integrable if and only if the $(q+2)$-forms $d\omega_i \wedge \omega_1 \wedge \cdots \wedge \omega_q$ are zero for $i = 1, \ldots, q$, where $\wedge$ denotes the wedge product of differential forms (Choquet-Bruhat et al. 1989).
  – The following notation is also used:
$$d\omega = 0 \mod \mathrm{span}_{\mathcal{K}}\{\bar{\omega}_1, \ldots, \bar{\omega}_q\},$$
which means that $d\omega \wedge \bar{\omega}_1 \wedge \cdots \wedge \bar{\omega}_q = 0$.


With such a notation, the given system (1.1) is rewritten as
$$\Sigma: \begin{cases} \dot{x}_{[0]} = F(x_{[s]}) + \sum_{j=1}^{m}\sum_{i=0}^{l} G_{ji}(x_{[s]})\, u_{j,[0]}(-i) \\ y_{[0]} = H(x_{[s]}). \end{cases} \tag{1.12}$$

1.6 Recalls on Non-commutative Algebra

Non-commutative algebra is used throughout the book to address the study of time-delay systems. In this section, the mathematics and definitions behind this method are introduced (see, for example, Cohn 1985; Banks 2002).
Let us now consider the time-shift operator $\delta$ recalled in the previous paragraph, and let us denote by $\mathcal{K}^*(\delta]$ the (left) ring of polynomials in $\delta$ with coefficients in $\mathcal{K}^*$ (analogously, $\mathcal{K}(\delta]$ will denote the (left) ring of polynomials in $\delta$ with coefficients in $\mathcal{K}$). Every element of $\mathcal{K}^*(\delta]$ may be written as
$$\alpha(\delta] = \alpha_0(x) + \alpha_1(x)\delta + \cdots + \alpha_{r_\alpha}(x)\delta^{r_\alpha}, \qquad \alpha_j(\cdot) \in \mathcal{K}^*.$$
$r_\alpha = \deg(\alpha(\delta])$ is the polynomial degree of $\alpha(\delta]$ in $\delta$. Let
$$\beta(\delta] = \beta_0(x) + \beta_1(x)\delta + \cdots + \beta_{r_\beta}(x)\delta^{r_\beta}$$
be another element of $\mathcal{K}^*(\delta]$, of polynomial degree $r_\beta$. Then addition and multiplication on this ring are defined by (Xia et al. 2002):
$$\alpha(\delta] + \beta(\delta] = \sum_{i=0}^{\max\{r_\alpha, r_\beta\}} (\alpha_i + \beta_i)\,\delta^i$$
and
$$\alpha(\delta]\,\beta(\delta] = \sum_{i=0}^{r_\alpha}\sum_{j=0}^{r_\beta} \alpha_i\, \beta_j(-i)\,\delta^{i+j}.$$
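The product rule above can be implemented directly: a polynomial in $\mathcal{K}^*(\delta]$ is a collection of coefficient functions of $t$, and multiplying on the left by $\delta^i$ shifts the right factor's coefficients back by $i$ delays. The following sketch (the helper names `poly_mul` and `x` are illustrative, not from the book; delay $D=1$) illustrates the resulting non-commutativity:

```python
# A polynomial in K*(delta] is represented as {degree: coefficient}, where each
# coefficient is a function of t. Delay D = 1: multiplying on the left by
# delta^i shifts the right factor's coefficient by i.

def poly_mul(alpha, beta):
    """Non-commutative ring product: alpha_i delta^i * beta_j delta^j
    contributes alpha_i(t) * beta_j(t - i) at degree i + j."""
    result = {}
    for i, a in alpha.items():
        for j, b in beta.items():
            def term(t, a=a, b=b, i=i):
                return a(t) * b(t - i)
            if i + j in result:
                prev = result[i + j]
                result[i + j] = lambda t, p=prev, q=term: p(t) + q(t)
            else:
                result[i + j] = term
    return result

x = lambda t: t                      # sample coefficient trajectory x(t) = t

# x(t)delta * x(t)delta = x(t) x(t-1) delta^2
ab = poly_mul({1: x}, {1: x})

# x(t) * delta = x(t) delta,  but  delta * x(t) = x(t-1) delta:
left = poly_mul({0: x}, {1: lambda t: 1.0})
right = poly_mul({1: lambda t: 1.0}, {0: x})
print(left[1](5.0), right[1](5.0))   # 5.0 4.0 -- the product is non-commutative
```

Evaluating at a sample time makes the asymmetry concrete: the coefficient of $\delta$ in $x(t)\cdot\delta$ is $x(t)$, while in $\delta\cdot x(t)$ it is $x(t-1)$.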

Analogously, we can consider vectors, covectors, and matrices whose entries are in the ring. The standard operations of sum and product are well defined once the previous rules for the sum and product of ring elements are applied. As for matrices, it should be noted that in this case the full-rank property of a square matrix does not automatically imply the existence of its inverse. If the inverse exists, a stronger property is satisfied, namely the unimodularity of the matrix. We give hereafter the formal definition of a unimodular matrix, which will play a fundamental role in the definitions of changes of coordinates. Some examples will clarify the difference from matrices which have full rank but are not unimodular.

Definition 1.1 (Cohn 1985) A matrix $A(\delta) \in \mathcal{K}^*(\delta]^{k \times k}$ is unimodular if it is invertible within the ring of polynomial matrices, i.e. if there exists a $B(\delta) \in \mathcal{K}^*(\delta]^{k \times k}$ such that $A(\delta)B(\delta) = B(\delta)A(\delta) = I$.

Example 1.3 The matrix
$$A(\delta) = \begin{pmatrix} 1 & x(t-1)\delta \\ \delta & 1 + x(t-2)\delta^2 \end{pmatrix}$$
is unimodular, since it admits an inverse, which is given by
$$A^{-1}(\delta) = \begin{pmatrix} 1 + x(t-1)\delta^2 & -x(t-1)\delta \\ -\delta & 1 \end{pmatrix}.$$
In fact, $A(\delta)A^{-1}(\delta) = A^{-1}(\delta)A(\delta) = I$. Note that while any unimodular matrix has full rank, the converse is not true. For example, there is no polynomial inverse for $A(\delta) = (1 + \delta)$.
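Unimodularity in Example 1.3 can be checked numerically with the same coefficient-shifting product rule: the matrix entries are $\delta$-polynomials whose coefficients are functions of $t$. A minimal sketch, assuming delay $D=1$ and a sample trajectory $x(t)=t$ (the helper names `poly_mul`, `poly_add`, `mat_mul` are illustrative, not from the book):

```python
# Verifying Example 1.3 numerically: entries are delta-polynomials
# {degree: coefficient-function-of-t}; delay D = 1; sample trajectory x(t) = t.
from functools import reduce

def poly_mul(a, b):
    out = {}
    for i, ai in a.items():
        for j, bj in b.items():
            def term(t, ai=ai, bj=bj, i=i):
                return ai(t) * bj(t - i)      # delta^i shifts the right factor
            if i + j in out:
                prev = out[i + j]
                out[i + j] = lambda t, p=prev, q=term: p(t) + q(t)
            else:
                out[i + j] = term
    return out

def poly_add(a, b):
    out = dict(a)
    for j, bj in b.items():
        if j in out:
            prev = out[j]
            out[j] = lambda t, p=prev, q=bj: p(t) + q(t)
        else:
            out[j] = bj
    return out

def mat_mul(A, B):
    n = len(A)
    return [[reduce(poly_add, (poly_mul(A[r][k], B[k][c]) for k in range(n)))
             for c in range(n)] for r in range(n)]

one = {0: lambda t: 1.0}
x = lambda t: t

A = [[one,                {1: lambda t: x(t - 1)}],
     [{1: lambda t: 1.0}, poly_add(one, {2: lambda t: x(t - 2)})]]
Ainv = [[poly_add(one, {2: lambda t: x(t - 1)}), {1: lambda t: -x(t - 1)}],
        [{1: lambda t: -1.0},                    one]]

P = mat_mul(A, Ainv)
# Evaluated at any sample time, all off-diagonal and higher-degree
# coefficients of P cancel, leaving the identity matrix.
vals = [[{d: c(7.0) for d, c in P[r][cc].items()} for cc in range(2)] for r in range(2)]
print(vals)
```

The cancellation only happens because the product rule delays the right factor's coefficients: dropping the shift `bj(t - i)` breaks it, which is precisely the non-commutative structure of the ring.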

Let us now consider a set of one-forms. It is immediately clear that such a set has both the structure of a vector space $\mathcal{E}$ over the field $\mathcal{K}^*$ and the structure of a module, denoted $\mathcal{M}$, over the ring $\mathcal{K}^*(\delta]$, i.e.
$$\mathcal{M} = \mathrm{span}_{\mathcal{K}^*(\delta]}\{dx(t)\}.$$
As an example, the one-form $\omega = x(t)\,dx(t-1) - x^2(t-1)\,dx(t)$ can also be rewritten as $\omega = \big(x_{[0]}\delta - x_{[0]}^2(-1)\big)\, dx_{[0]}$. This twofold possibility allows us to interpret the given one-forms in two different ways, as shown in the next example.

Example 1.4 The one-forms dx1 (t) and dx1 (t − 1) are independent over the field
K, while they are dependent over the ring K(δ], since δdx1 (t) − dx1 (t − 1) = 0.
This simple example shows that the action of time delay is taken into account in M,
but not in E. This motivates the definition of the module M.

A left-submodule of $\mathcal{M}$ consists of all possible linear combinations of given one-forms (viewed as row vectors) $\{\omega_1, \ldots, \omega_k\}$ with coefficients in the ring $\mathcal{K}^*(\delta]$. The left-submodule generated by $\{\omega_1, \ldots, \omega_j\}$ is denoted by $\Omega = \mathrm{span}_{\mathcal{K}^*(\delta]}\{\omega_1, \ldots, \omega_j\}$.
Any $\omega(x,\delta)dx_{[0]} \in \Omega(\delta]$ can be expressed as a linear combination of the generators of $\Omega(\delta]$, that is,
$$\omega(x,\delta)dx_{[0]} = \sum_{i=1}^{j} \alpha_i(x,\delta)\,\omega_i(x,\delta)\,dx_{[0]}.$$

An important property of the considered modules is given by the so-called closure,


introduced in Conte and Perdon (1995). Before giving the formal definition we will
illustrate it through an example.

Example 1.5 Consider the left-submodule $\Omega(\delta] = \mathrm{span}_{\mathcal{K}^*(\delta]}\{dx_1(t-1), dx_2(t)\}$. Clearly the rank of $\Omega(\delta]$ is 2, and any $\omega \in \Omega(\delta]$ reads $\omega = \alpha_1(x,\delta)dx_1(t-1) + \alpha_2(x,\delta)dx_2(t)$. However, there are some $\bar{\omega}$ such that $\bar{\omega} \notin \Omega(\delta]$ while $\alpha_0(x,\delta)\bar{\omega} \in \Omega(\delta]$. This is, for example, the case of $\bar{\omega} = dx_1(t)$, for which $\delta\, dx_1(t) = dx_1(t-1) \in \Omega(\delta]$; we will then say that $\Omega(\delta]$ is not left-closed.

We thus can give the following definition from Conte and Perdon (1995).

Definition 1.2 Let $\Omega(\delta] = \mathrm{span}_{\mathcal{K}^*(\delta]}\{\omega_1(x,\delta)dx_{[0]}, \ldots, \omega_j(x,\delta)dx_{[0]}\}$ be a left-submodule of rank $j$ with $\omega_i \in \mathcal{K}^{*(1 \times n)}(\delta]$. The left closure of $\Omega(\delta]$ is the largest left-submodule $\Omega_c(\delta]$ of rank $j$ containing $\Omega(\delta]$.
The left closure of the left-submodule Ω is thus the largest left-submodule, containing
Ω, with the same rank as Ω.
Similar conclusions can be obtained for right-submodules. More precisely, a right-submodule of $\hat{\mathcal{M}}$ (Califano et al. 2011a) consists of all possible linear combinations of column vectors $\tau_1, \ldots, \tau_j$, $\tau_i \in \mathcal{K}^{*(n \times 1)}(\delta]$, and is denoted by $\Delta = \mathrm{span}_{\mathcal{K}^*(\delta]}\{\tau_1, \ldots, \tau_j\}$.
Let $\Delta(\delta] = \mathrm{span}_{\mathcal{K}^*(\delta]}\{\tau_1(x,\delta), \ldots, \tau_j(x,\delta)\}$ be a right-submodule of rank $j$ with $\tau_i \in \mathcal{K}^{*(n \times 1)}(\delta]$. Then any $\tau(x,\delta) \in \Delta(\delta]$ can be expressed as $\tau(x,\delta) = \sum_{i=1}^{j} \tau_i(x,\delta)\,\alpha_i(x,\delta)$.
Accordingly, the following definition of the right closure of a right-submodule
can be given.

Definition 1.3 Let Δ(δ] = spanK∗ (δ] {τ1 (x, δ), . . . τ j (x, δ)} be a right-submodule of
rank j with τi ∈ K∗(n×1) (δ]. The right closure of Δ(δ] is the largest right-submodule
Δc (δ] of rank j containing Δ(δ].

Definition 1.4 The right closure of a right-submodule $\Delta$ of $\hat{\mathcal{M}}$, denoted by $\mathrm{cl}_{\mathcal{K}^*(\delta]}(\Delta)$, is defined as $\mathrm{cl}_{\mathcal{K}^*(\delta]}(\Delta) = \{X \in \hat{\mathcal{M}} \mid \exists\, q(\delta) \in \mathcal{K}^*(\delta],\ X q(\delta) \in \Delta\}$.

The right closure of the right-submodule Δ is the largest right-submodule, containing


Δ, with the same rank as Δ.
Of course, one may consider the right-annihilator of a left-submodule or the left-
annihilator of a right-submodule. In both cases, it is easily seen that one ends up on
a closed submodule. In fact

Definition 1.5 The right-kernel (right-annihilator) of the left-submodule $\Omega$ is the right-submodule $\Delta$ containing all vectors $\tau(\delta) \in \hat{\mathcal{M}}$ such that $\omega(\delta)\tau(\delta) = 0$ for every $\omega(\delta) \in \Omega$.

And it is easily verified that by definition the right-kernel is necessarily closed.


Analogously

Definition 1.6 The left-kernel (left-annihilator) of Δ is the left-submodule Ω con-


taining all one forms ω(δ) ∈ M such that ω(δ)Δ = 0.

Again, by definition, the left-kernel is necessarily closed.


An immediate consequence is that the right-kernel of a left-submodule and of its
closure coincide. Analogously, the left-kernel of a right-submodule and of its closure
coincide.
We end the chapter by recalling the relations between the degrees of a submodule
and its left-annihilator which were shown in Califano and Moog (2017).

Lemma 1.1 Consider the matrix
$$\Gamma(x_{[p,s]}, \delta) = \big(\tau_1(x_{[p,s]}, \delta), \ldots, \tau_j(x_{[p,s]}, \delta)\big).$$
Let $\bar{s} = \deg(\Gamma(x_{[p,s]}, \delta))$. The left-annihilator $\Omega(x_{[\bar{p},\alpha]}, \delta)$ satisfies the following relations:

(i) $\deg(\Omega(x_{[\bar{p},\alpha]}, \delta)) \le j\,\deg(\Gamma(x,\delta))$;
(ii) $\bar{p}, \alpha$ can be chosen such that $\alpha \le s + \deg(\Omega(x,\delta))$ and $\bar{p} \le p$.

Consequently, if $\Gamma(x,\delta)$ is causal, then $\Omega(x,\delta)$ is also causal.

Proof Without loss of generality, assume that the first $j$ rows of $\Gamma(x_{[p,s]}, \delta)$ are linearly independent over $\mathcal{K}(\delta]$. Then $\Omega(x_{[\bar{p},\alpha]}, \delta)$ must satisfy
$$\Omega(x,\delta)\Gamma(x,\delta) = [\Omega_1(x,\delta), \Omega_2(x,\delta)] \begin{pmatrix} \Gamma_1(x,\delta) \\ \Gamma_2(x,\delta) \end{pmatrix} = 0,$$
where $\Gamma_1(x,\delta)$ is a $j \times j$ full-rank matrix, $\Gamma_2(x,\delta)$ is accordingly an $(n-j) \times j$ matrix, $\Omega_1(x,\delta)$ is an $(n-j) \times j$ matrix, and $\Omega_2(x,\delta)$ is an $(n-j) \times (n-j)$ matrix. Let $r_{\Omega_1} = \deg(\Omega_1(x,\delta))$, $r_{\Omega_2} = \deg(\Omega_2(x,\delta))$, $r_{\Gamma_1} = \deg(\Gamma_1(x,\delta))$, and $r_{\Gamma_2} = \deg(\Gamma_2(x,\delta))$. Then we have that $r_{\Omega_1} + r_{\Gamma_1} = r_{\Omega_2} + r_{\Gamma_2}$, and $\Omega(x,\delta)$ must satisfy the following relations:
$$\begin{aligned}
\Omega_1^0 \Gamma_1^0 + \Omega_2^0 \Gamma_2^0 &= 0 \\
\Omega_1^0 \Gamma_1^1 + \Omega_1^1 \Gamma_1^0(-1) + \Omega_2^0 \Gamma_2^1 + \Omega_2^1 \Gamma_2^0(-1) &= 0 \\
&\;\;\vdots \\
\sum_{i=1}^{2} \sum_{j=0}^{r_{\Omega_i}} \Omega_i^j\, \Gamma_i^{\,r_{\Omega_i}+r_{\Gamma_i}-j}(-j) &= 0.
\end{aligned} \tag{1.13}$$
We have $(n-j)j(r_{\Omega_1}+1)$ unknowns for $\Omega_1$, $(n-j)^2(r_{\Omega_2}+1)$ unknowns for $\Omega_2$, and $(n-j)j(r_{\Omega_1}+r_{\Gamma_1}+1)$ equations. In order to be sure to get a solution,
$$(n-j)j(r_{\Omega_1}+1) + (n-j)^2(r_{\Omega_2}+1) \ge (n-j)j(r_{\Omega_1}+r_{\Gamma_1}+1),$$
that is, $(n-j)(r_{\Omega_2}+1) \ge j\, r_{\Gamma_1}$. Once $r_{\Omega_2}$ is fixed, we get that $r_{\Omega_1} = r_{\Omega_2} + r_{\Gamma_2} - r_{\Gamma_1}$. In the worst case, $r_{\Gamma_2} = r_{\Gamma_1} = r$ and $n - j = 1$, which proves (i).
From the set of equations (1.13), fixing the independent parameters as functions of $x_{[0]}$ only, the maximum delay is given by the largest of $(s + r_{\Omega_1}, s + r_{\Omega_2})$, while $\bar{p} \le p$, which proves (ii). Consequently, if $\Gamma(x,\delta)$ is causal, then $p = 0$, so that $\bar{p} \le 0$, which shows that $\Omega(x,\delta)$ is also causal.
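For intuition, the linear system (1.13) can be solved directly in a toy case. The following sketch assumes constant coefficients, so the shifts $(-j)$ act trivially and ordinary polynomial arithmetic suffices; the helper names `pmul` and `padd` are illustrative, not from the book. For $\Gamma(\delta) = (1, \delta)^T$, with $n = 2$ and $j = 1$, the row $\Omega(\delta) = (\delta, -1)$ annihilates $\Gamma$:

```python
# Left-annihilator computation behind Lemma 1.1, specialized to constant
# coefficients. A delta-polynomial is a list of coefficients by degree.

def pmul(a, b):
    """Product of two delta-polynomials given as coefficient lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0.0) + (b[i] if i < len(b) else 0.0)
            for i in range(n)]

gamma = [[1.0], [0.0, 1.0]]      # Gamma = (1, delta)^T
omega = [[0.0, 1.0], [-1.0]]     # candidate annihilator Omega = (delta, -1)

residual = padd(pmul(omega[0], gamma[0]), pmul(omega[1], gamma[1]))
print(residual)   # → [0.0, 0.0]: Omega * Gamma = 0 in the ring
```

Here $\deg(\Omega) = 1 \le j \cdot \deg(\Gamma) = 1$, in line with bound (i) of the lemma.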
Chapter 2
Geometric Tools for Time-Delay Systems

In this chapter, we introduce the main tools that will be used in the book to deal with
nonlinear time-delay systems affected by constant commensurate delays. We will
introduce basic notions such as the Extended Lie derivative and the Polynomial Lie
Bracket (Califano et al. 2011a; Califano and Moog 2017) which generalize to the
time-delay context the standard definitions of Lie derivative and Lie Bracket used to
deal with nonlinear systems. We will finally show how changes of coordinates and
feedback laws act on the class of systems considered.
Before going into the technical details, let us first recall that, with the notation introduced in Chap. 1, we can rewrite system (1.1) as
$$\dot{x}_{[0]} = F(x_{[s]}) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]})\, u_{j,[0]}(-i) \tag{2.1}$$
$$y_{j,[0]} = H_j(x_{[s]}), \quad j \in [1, p]. \tag{2.2}$$

As it was already discussed, the following infinite-dimensional dynamics is naturally associated with the dynamics (2.1):
$$\begin{aligned}
\dot{x}_{[0]} &= F(x_{[s]}) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]})\, u_{j,[0]}(-i) \\
\dot{x}_{[0]}(-1) &= F(x_{[s]}(-1)) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]}(-1))\, u_{j,[0]}(-i-1) \\
&\;\;\vdots
\end{aligned} \tag{2.3}$$

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
C. Califano and C. H. Moog, Nonlinear Time-Delay Systems, SpringerBriefs in Control, Automation and Robotics, https://doi.org/10.1007/978-3-030-72026-1_2

The advantage in the representation (2.3) is that it shows that the given time-delay
system can be represented as an interconnection of subsystems and that these subsys-
tems are coupled through the action of the control. Nevertheless caution must be used
when referring to this last representation since the variables x(−i) are connected to
each other through time. This will be further discussed in this chapter.

2.1 The Initialization of the Time-Delay System Versus the Initialization of the Delay-Free Extended System

Consider a truncation of system (2.3):
$$\begin{aligned}
\dot{x}_{[0]} &= F(x_{[s]}) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]})\, u_{j,[0]}(-i) \\
\dot{x}_{[0]}(-1) &= F(x_{[s]}(-1)) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]}(-1))\, u_{j,[0]}(-i-1) \\
&\;\;\vdots \\
\dot{x}_{[0]}(-k) &= F(x_{[s]}(-k)) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]}(-k))\, u_{j,[0]}(-i-k).
\end{aligned} \tag{2.4}$$

Rename $v_{j,0} = u_{j,[0]}$ and $v_{j,\ell} = u_{j,[0]}(-\ell)$ for $\ell = 1, \ldots, l+k$, so that system (2.4) reads
$$\begin{aligned}
\dot{x}_{[0]} &= F(x_{[s]}) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]})\, v_{j,i} \\
\dot{x}_{[0]}(-1) &= F(x_{[s]}(-1)) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]}(-1))\, v_{j,i+1} \\
&\;\;\vdots \\
\dot{x}_{[0]}(-k) &= F(x_{[s]}(-k)) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]}(-k))\, v_{j,i+k} \\
\dot{x}_{[0]}(-k-1) &= \phi_1 \\
&\;\;\vdots \\
\dot{x}_{[0]}(-k-s) &= \phi_s.
\end{aligned} \tag{2.5}$$

While system (2.1) requires an initial condition on the time interval [−s, 0) to com-
pute its trajectories for t ≥ 0, system (2.5) requires an initialization on the time
interval [−(k + s), 0). As a consequence, any trajectory of (2.5) is also a trajectory
of (2.1), provided the latter is correctly initialized on the interval [−s, 0).
On the contrary, considering a given trajectory of system (2.1), there may not
necessarily exist a corresponding initialization function defined on the time interval
[−(k + s), 0) for system (2.5) which reproduces the given trajectory of system (2.1).
This is a direct consequence of the fact that the initialization of the system is not
necessarily a solution of the set of differential equations. This point is further clarified
through the next example.

Example 2.1 Consider the dynamics

ẋ = −x(t − 1) (2.6)

with x(τ ) = 1 for τ ∈ [−1, 0). System (2.6) is extended to

ẋ(t) = −x(t − 1)
(2.7)
ẋ(t − 1) = −x(t − 2).

There is no initialization function for system (2.7) over the time interval [−2, −1) such that the trajectory of (2.7) coincides with the trajectory of (2.6) for any t ≥ −1. In this specific example, an ideal Dirac impulse at time t = −1 would be required to reproduce the trajectory.

Coming back to the extended system (2.5), note that the further trick which consists in renaming $x_0 = x_{[0]}$, $x_\ell = x_{[0]}(-\ell)$ for $\ell = 1, \ldots, s+k$ may be misleading, as $x_0, \ldots, x_{s+k}$ are not independent:
$$\begin{aligned}
\dot{x}_0 &= F(x_0, \ldots, x_s) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_0, \ldots, x_s)\, v_{j,i} \\
\dot{x}_1 &= F(x_1, \ldots, x_{s+1}) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_1, \ldots, x_{s+1})\, v_{j,i+1} \\
&\;\;\vdots \\
\dot{x}_k &= F(x_k, \ldots, x_{s+k}) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_k, \ldots, x_{s+k})\, v_{j,i+k} \\
\dot{x}_{k+1} &= \phi_1 \\
&\;\;\vdots \\
\dot{x}_{k+s} &= \phi_s.
\end{aligned} \tag{2.8}$$

From a practical point of view, given a nonlinear time-delay system, its represen-
tation (2.8) can be used to compute the solution of the system by referring to the
so-called step method as shown in the example hereafter.
Example 2.2 Consider the dynamics
$$\begin{cases} \dot{x}(t) = f(x(t), x(t-1)), & t \ge 0 \\ x(t) = \vartheta_0(t), & -1 \le t < 0. \end{cases} \tag{2.9}$$
To compute the solution starting from the initialization $x(t) = \vartheta_0(t)$, for $-1 \le t < 0$, the following steps are taken according to Garcia-Ramirez et al. (2016a):
• The solution $\vartheta_1(t)$ of (2.9) on the time interval $0 \le t < 1$ is found as the solution of the delay-free system (2.8) subject to the appropriate initial condition $\vartheta_0(t)$:
$$\begin{cases} \dot{x}(t) = f(x(t), \vartheta_0(t-1)), & 0 \le t < 1 \\ x(t) = \vartheta_0(t), & -1 \le t < 0. \end{cases} \tag{2.10}$$
• The solution $\vartheta_2(t)$ of (2.9) on the time interval $1 \le t < 2$ is found as the solution of the delay-free system (2.8) subject to the initial condition $\vartheta_1(t)$ computed at the previous step:
$$\begin{cases} \dot{x}(t) = f(x(t), \vartheta_1(t-1)), & 1 \le t < 2 \\ x(t) = \vartheta_1(t), & 0 \le t < 1. \end{cases} \tag{2.11}$$
• More generally, for $k \ge 0$, the solution $\vartheta_{k+1}(t)$ of (2.9) on the time interval $k \le t < k+1$ is found as the solution of the delay-free system (2.8) subject to the initial condition $\vartheta_k(t)$:
$$\begin{cases} \dot{x}(t) = f(x(t), \vartheta_k(t-1)), & k \le t < k+1 \\ x(t) = \vartheta_k(t), & k-1 \le t < k. \end{cases} \tag{2.12}$$
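The steps above can be sketched numerically: on each unit interval the delayed argument is a known function (the previous step's solution), so an ordinary ODE integrator suffices. A minimal forward-Euler sketch (the name `solve_dde` is illustrative, not the authors' code), applied to $\dot{x}(t) = -x(t-1)$ with $\vartheta_0 \equiv 1$, whose exact solution is $1-t$ on $[0,1)$ and $1-t+(t-1)^2/2$ on $[1,2)$:

```python
# Method-of-steps sketch with forward Euler; unit delay,
# initialization theta0 on [-1, 0).
def solve_dde(f, theta0, n_intervals, h=1e-3):
    """Integrate x'(t) = f(x(t), x(t-1)) on [0, n_intervals)."""
    n = round(1.0 / h)
    prev = [theta0(-1.0 + i * h) for i in range(n)]  # known history segment
    x = theta0(0.0)
    history = []
    for _ in range(n_intervals):
        cur = []
        for i in range(n):
            cur.append(x)
            x = x + h * f(x, prev[i])   # the delayed value is already known
        history.append(cur)
        prev = cur                      # theta_k becomes the next history
    return history

hist = solve_dde(lambda x, xd: -xd, lambda t: 1.0, n_intervals=2)
print(round(hist[0][500], 6), round(hist[1][500], 3))   # x(0.5), x(1.5) → 0.5 -0.375
```

Each pass through the outer loop is one step of the scheme: the current interval's solution becomes the "initial condition" driving the next interval, exactly as in (2.10)–(2.12).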

Note that shifting (2.10) into the interval $1 \le t < 2$ yields
$$\dot{x}(t-1) = f(x(t-1), \vartheta_0(t-2)), \quad 1 \le t < 2.$$
Thus, the set of equations (2.5) defined on the interval $(k-1) \le t < k$ becomes, in the case of this example,
$$\begin{aligned}
\dot{x}(t) &= f(x(t), \vartheta_{k-1}(t-1)), \\
\dot{x}(t-1) &= f(x(t-1), \vartheta_{k-2}(t-2)), \\
&\;\;\vdots \\
\dot{x}(t-(k-1)) &= f(x(t-(k-1)), \vartheta_0(t-k)).
\end{aligned} \tag{2.13}$$
System (2.13) defines the solution on the first $k$ units of time of Eq. (2.9), shifted into the interval $(k-1) \le t < k$, with initialization $x(\tau) = \vartheta_0(\tau)$ for $\tau \in [k-1, k)$.

Moreover, through the change of variables $x_0(t) = x(t)$, $x_1(t) = x(t-1)$, …, $x_k(t) = x(t-k)$, system (2.13) can also be represented as in (2.8):
$$\begin{aligned}
\dot{x}_0(t) &= f(x_0(t), x_1(t)), \\
\dot{x}_1(t) &= f(x_1(t), x_2(t)), \\
&\;\;\vdots \\
\dot{x}_{k-1}(t) &= f(x_{k-1}(t), \vartheta_0(t-1)),
\end{aligned}$$
with initial conditions $x_0(0) = x_1(1)$, $x_1(0) = x_2(1)$, $x_2(0) = x_3(1)$, …, $x_{k-1}(0) = \vartheta_0(1)$.
Now, consider the following hypothesis:

H0. System (2.1), under the restriction $u(t) = \nu(t)$, has a unique solution
$$x(t_0 + \theta) = \varphi(\theta, \varphi_0, \nu), \quad \theta \in [0, T] \tag{2.14}$$
in the interval $[0, T]$.

Under H0, the following result was proven in Garcia-Ramirez et al. (2016a), using as basic argument that the solution of part of the equations can be recovered as delayed copies of the solutions of (2.1).

Theorem 2.1 Consider system (2.1), with initial condition $\varphi_0(\theta)$, subject to the restriction $u(t) = \nu(t)$, $t \ge 0$, and assume that hypothesis H0 is satisfied. Then the solution (2.14) is obtained from the solution of system (2.5) in the interval $[t_0 + s, t_0 + T]$ if $T > k + s$,
$$z_i(t_0 + s) = \varphi(s - i, \varphi_0, \nu), \quad i = 0, \ldots, k+s,$$
and
$$\phi_i(t) = \dot{\varphi}(t - t_0 + i), \quad i = 0, \ldots, s-1.$$

As a corollary, one can consider the special case of a single delay as done in
Example 2.2, thus getting the following result.

Corollary 2.1 Given (2.9), consider the associated extended system (2.13). There
always exists a proper initialization of (2.13) so that the trajectories of both systems
coincide on the time interval (k − 1) ≤ t < k.

This means that it is possible to recover the solution of (2.9) on the interval $(k-1) \le t < k$.

2.2 Non-independence of the Inputs of the Extended System

It has to be noticed that u(t − k) and x(t − k) are not independent of u(t) and x(t),
so that attention must be paid when referring to the representation (2.3). An example
is given hereafter.

Example 2.3 Consider, for instance, the two systems $\Sigma_1$ and $\Sigma_2$:
$$\Sigma_1: \begin{cases} \dot{x}_{1,[0]} = (x_{2,[0]}^2 + 1)\, u_{[0]} \\ \dot{x}_{2,[0]} = u_{[0]}(-1), \end{cases} \qquad \Sigma_2: \begin{cases} \dot{x}_{1,[0]} = (x_{2,[0]}^2 + 1)\, u_1 \\ \dot{x}_{2,[0]} = u_2. \end{cases}$$
It is easily seen that they represent the same dynamics whenever $u_1 = u_{[0]}$ and $u_2 = u_{[0]}(-1)$. So, $\Sigma_2$ has fewer constraints and better properties than $\Sigma_1$ regarding, for instance, feedback linearization: as a matter of fact, while $\Sigma_2$ can be linearized via a regular static state feedback by setting
$$u_1 = \frac{1}{x_{2,[0]}^2 + 1}\, v_1, \qquad u_2 = v_2, \tag{2.15}$$
there is no regular static state feedback which fully linearizes $\Sigma_1$, since $u_{[0]}(-1)$ is no longer independent of $u_{[0]}$. In fact, setting
$$u_{[0]} = \frac{1}{x_{2,[0]}^2 + 1}\, v_{[0]}$$
would immediately imply that
$$u_{[0]}(-1) = \frac{1}{x_{2,[0]}^2(-1) + 1}\, v_{[0]}(-1).$$

Still, the feedback (2.15) can be implemented on the time-delay system $\Sigma_1$ on any time interval $[2kT, (2k+1)T)$, switching to $u(t) = v(t)$ for $t \in [(2k+1)T, (2k+2)T)$, with the initialization $u_{[0]}(-1) = 0$. This switching scheme ensures a linear behavior for the given dynamics on any interval $[2kT, (2k+1)T]$, $k \ge 0$.

The conclusion at this stage is that one cannot neglect the links between the con-
trol/state variables and their delayed signals. As shown hereafter, this is one of the
problems that can be overcome by considering the differential representation of the
given time-delay system. Such a representation naturally takes into account through
the shift operator the link of a given variable with its delayed terms.

2.3 The Differential Form Representation

One of the peculiarities of nonlinear time-delay systems is the fact that, when analyzing their dynamics, one has to refer to two different kinds of operations with respect to time: time differentiation and time shift.
The simultaneous action of shift and differentiation determines difficulties which
are peculiar to time-delay systems. A simple case is illustrated through the following
example.

Example 2.4 Consider the following two nonlinear time-delay systems:
$$\Sigma_1: \begin{cases} \dot{x}_{1,[0]} = x_{2,[0]}^3(-1) + x_{2,[0]} + x_{2,[0]}^3 + x_{2,[0]}(-1) \\ \dot{x}_{2,[0]} = u_{1,[0]} \\ y_{1,[0]} = x_{1,[0]}, \end{cases} \qquad \Sigma_2: \begin{cases} \dot{x}_{1,[0]} = x_{2,[0]}(-1)\, x_{2,[0]} \\ \dot{x}_{2,[0]} = u_{2,[0]} \\ y_{2,[0]} = x_{1,[0]}. \end{cases}$$
The input–output behavior is obtained by considering, in these two cases, the second-order derivatives of the output maps. We easily get that
$$\begin{aligned}
\ddot{y}_{1,[0]} &= \big(3x_{2,[0]}^2(-1) + 1\big)\, u_{1,[0]}(-1) + \big(3x_{2,[0]}^2 + 1\big)\, u_{1,[0]} \\
\ddot{y}_{2,[0]} &= x_{2,[0]}(-1)\, u_{2,[0]} + u_{2,[0]}(-1)\, x_{2,[0]}.
\end{aligned} \tag{2.16}$$
While in the first case the feedback $u_{1,[0]} = \frac{1}{3x_{2,[0]}^2+1}\, v_{1,[0]}$ linearizes the input–output behavior, that is, $\ddot{y}_{1,[0]} = v_{1,[0]} + v_{1,[0]}(-1)$, there is instead no regular static state feedback which allows one to solve the same problem for the second system.

The difference between these two cases can be understood through the use of the
differential form representation, where the shifts are represented by the δ operator,
and the representation becomes linear.
More precisely, consider the time-delay system (2.1), (2.2), and recall that, using the notation introduced in Sect. 1.5, for any $k \ge 0$, $dx(t-k) = dx_{[0]}(-k) = \delta^k dx_{[0]}$ and, similarly, for any $\ell \ge 0$, $du(t-\ell) = du_{[0]}(-\ell) = \delta^\ell du_{[0]}$. Through standard computations one gets that such a differential form representation is given by
$$\begin{aligned}
d\dot{x}_{[0]} &= f(x_{[s]}, u_{[s]}, \delta)\, dx_{[0]} + \sum_{j=1}^{m} g_{1,j}(x_{[s]}, \delta)\, du_{j,[0]} \\
dy_{[0]} &= h(x_{[s]}, \delta)\, dx_{[0]},
\end{aligned} \tag{2.17}$$
where $f(x_{[s]}, u_{[s]}, \delta)$ is an $n \times n$ matrix representing the differential with respect to the state variables and is given by
$$f(x_{[s]}, u_{[s]}, \delta) = \sum_{\ell=0}^{s} \frac{\partial F(x_{[s]})}{\partial x_{[0]}(-\ell)}\, \delta^\ell + \sum_{j=1}^{m} \sum_{\ell=0}^{s} \sum_{i=0}^{l} \frac{\partial G_{ji}(x_{[s]})}{\partial x_{[0]}(-\ell)}\, u_{j,[0]}(-i)\, \delta^\ell,$$
$g_{1,j}(x_{[s]}, \delta) = \sum_{i=0}^{l} G_{j,i}(x_{[s]})\, \delta^i$, $j \in [1, m]$, is an $n \times 1$ polynomial vector representing the differential of the dynamics with respect to the control $u_j$. Finally,
$$h_j(x_{[s]}, \delta) = \sum_{i=0}^{s} \frac{\partial H_j(x_{[s]})}{\partial x_{[0]}(-i)}\, \delta^i, \quad j \in [1, p],$$
is a $1 \times n$ polynomial row vector representing the differential of the output.

Consider again Example 2.4. The differential form representation of $\Sigma_1$ is
$$\begin{cases} d\dot{x}_{1,[0]} = \big(3x_{2,[0]}^2 + 1 + 3x_{2,[0]}^2(-1)\delta + \delta\big)\, dx_{2,[0]} \\ d\dot{x}_{2,[0]} = du_{1,[0]} \\ dy_{1,[0]} = dx_{1,[0]} \end{cases}$$
and the differentials of the derivatives of the output are
$$\begin{aligned}
d\dot{y}_{1,[0]} = d\dot{x}_{1,[0]} &= \big(3x_{2,[0]}^2 + 1 + 3x_{2,[0]}^2(-1)\delta + \delta\big)\, dx_{2,[0]} \\
d\ddot{y}_{1,[0]} &= \big(6x_{2,[0]}\dot{x}_{2,[0]} + 6x_{2,[0]}(-1)\dot{x}_{2,[0]}(-1)\delta\big)\, dx_{2,[0]} \\
&\quad + \big(3x_{2,[0]}^2 + 1 + 3x_{2,[0]}^2(-1)\delta + \delta\big)\, d\dot{x}_{2,[0]}.
\end{aligned}$$
With some technical manipulations, using the fact that $f(-1)\delta = \delta f(0)$, one then gets that
$$d\ddot{y}_{1,[0]} = (1 + \delta)\big[6u_{1,[0]}\,x_{2,[0]}\, dx_{2,[0]} + (3x_{2,[0]}^2 + 1)\, du_{1,[0]}\big].$$
Since the left-hand side is an exact differential, the right-hand side is an exact differential as well, and it is possible to find the solution of
$$6u_{1,[0]}\,x_{2,[0]}\, dx_{2,[0]} + (3x_{2,[0]}^2 + 1)\, du_{1,[0]} = d\big[(3x_{2,[0]}^2 + 1)\, u_{1,[0]}\big] = dv_{1,[0]},$$
that is,
$$u_{1,[0]} = \frac{1}{3x_{2,[0]}^2 + 1}\, v_{1,[0]}.$$
With such a feedback, $d\ddot{y}_{1,[0]} = dv_{1,[0]} + dv_{1,[0]}(-1)$, as expected.



Consider now system $\Sigma_2$. Its differential representation is
$$\begin{cases} d\dot{x}_{1,[0]} = \big(x_{2,[0]}(-1) + x_{2,[0]}\delta\big)\, dx_{2,[0]} \\ d\dot{x}_{2,[0]} = du_{2,[0]} \\ dy_{2,[0]} = dx_{1,[0]} \end{cases}$$
and the differentials of the output derivatives are
$$\begin{aligned}
d\dot{y}_{2,[0]} = d\dot{x}_{1,[0]} &= \big(x_{2,[0]}(-1) + x_{2,[0]}\delta\big)\, dx_{2,[0]} \\
d\ddot{y}_{2,[0]} &= \big(\dot{x}_{2,[0]}(-1) + \dot{x}_{2,[0]}\delta\big)\, dx_{2,[0]} + \big(x_{2,[0]}(-1) + x_{2,[0]}\delta\big)\, d\dot{x}_{2,[0]} \\
&= \big(x_{2,[0]}(-1) + x_{2,[0]}\delta\big)\, du_{2,[0]} + \big(u_{2,[0]}(-1) + u_{2,[0]}\delta\big)\, dx_{2,[0]}.
\end{aligned}$$
Since the coefficient of $du_{2,[0]}$ cannot be factorized as $c_0(\delta)c_1(x)$, there is no static state feedback which can achieve input–output linearization.

2.4 Generalized Lie Derivative and Generalized Lie


Bracket

When dealing with nonlinear systems, Lie derivatives and Lie Brackets are standard tools used in many contexts (Isidori 1995). As is well known, the Lie derivative represents the derivative of a function along a given trajectory. When moving to the time-delay context, however, several aspects must be taken into account.
As a first comment, consider again the dynamics (2.1) and the polynomial vector $g_{1,j}(x,\delta)$ in (2.17), which is thus associated with the differential representation of (2.1).
Accordingly, it is immediate to understand that $\delta^k g_{1,j}(x,\delta)$ will be associated with the differential representation of
$$\dot{x}_{[0]}(-k) = F(x_{[s]}(-k)) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]}(-k))\, u_{j,[0]}(-i-k). \tag{2.18}$$
Such a reasoning can be generalized to any element $r(x,\delta) = \sum_{\ell=0}^{s} r^\ell(x)\,\delta^\ell$, so that if $r(x,\delta)$ is associated with the differential representation of (2.1), then $\delta^k r(x,\delta) = \sum_{\ell=0}^{s} r^\ell(x_{[0]}(-k))\,\delta^{\ell+k}$ will be associated with the differential representation of (2.18).
One thus gets the following infinite-dimensional matrix:
$$\begin{array}{l} \frac{\partial}{\partial x_{[0]}} \to \\ \frac{\partial}{\partial x_{[0]}(-1)} \to \\ \quad\vdots \\ \frac{\partial}{\partial x_{[0]}(-s)} \to \\ \quad\vdots \end{array} \left\{ \begin{array}{cccccc} r^0 & r^1 & \cdots & r^s & 0 & \cdots \\ 0 & r^0(-1) & \cdots & r^{s-1}(-1) & r^s(-1) & \ddots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \cdots \\ 0 & \cdots & 0 & r^0(-s) & r^1(-s) & \cdots \\ \vdots & & \ddots & \ddots & \ddots & \cdots \end{array} \right\} \tag{2.19}$$

Despite the infinite dimensionality of the state space $x_e = (x^T(t), x^T(t-1), x^T(t-2), \ldots)^T$, the columns are generated by a finite number of elements, depending on a finite number of variables. This fact will turn out to play a crucial role in the definitions of Lie derivative and Lie Bracket for time-delay systems introduced in Califano et al. (2011a) and Califano and Moog (2017), which are given hereafter.

Definition 2.1 (Generalized Lie derivative) Given the function $\tau(x_{[p,s]})$ and the submodule element $r(x,\delta) = \sum_{j=0}^{\bar{s}} r^j(x)\,\delta^j \in \mathcal{K}^{*n}(\delta]$, the Generalized Lie derivative $L_{r^\mu(x)}\tau(x_{[p,s]})$, $\mu \in [0, \bar{s}]$, is defined as
$$L_{r^\mu(x)}\tau(x_{[p,s]}) = \sum_{l=-p}^{\mu} \frac{\partial \tau(x_{[p,s]})}{\partial x_{[0]}(-l)}\, r^{\mu-l}(x(-l)). \tag{2.20}$$

Remark 2.1 In a delay-free context, one would have $p = s = \bar{s} = 0$, and the Generalized Lie derivative would reduce to
$$L_{r^0(x)}\tau(x) = \frac{\partial \tau(x)}{\partial x}\, r^0(x),$$
which is exactly the standard Lie derivative of $\tau$ along $r^0$.
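Definition 2.1 can be checked numerically in the simplest scalar causal case ($p = 0$, $s = 1$, $n = 1$): take a $\tau$ depending on $(x(t), x(t-1))$ and $r(x,\delta) = r^0(x) + r^1(x)\delta$, and approximate the partial derivatives by central differences. The names below (`gen_lie_derivative`, `tau`, `r_coeffs`) are illustrative, not from the book.

```python
# Numerical check of (2.20) in the scalar causal case p = 0, s = 1, n = 1.

def gen_lie_derivative(tau, r_coeffs, x_now, x_del, mu, eps=1e-6):
    """L_{r^mu} tau = sum_{l=0}^{mu} dtau/dx(-l) * r^{mu-l}(x(-l)),
    with partial derivatives approximated by central differences."""
    def partial(l):
        if l == 0:
            return (tau(x_now + eps, x_del) - tau(x_now - eps, x_del)) / (2 * eps)
        return (tau(x_now, x_del + eps) - tau(x_now, x_del - eps)) / (2 * eps)
    args = [x_now, x_del]                  # x(0), x(-1)
    total = 0.0
    for l in range(mu + 1):
        k = mu - l
        if 0 <= k < len(r_coeffs):
            total += partial(l) * r_coeffs[k](args[l])
    return total

tau = lambda x0, x1: x0 * x1               # tau = x(0) * x(-1)
r_coeffs = [lambda x: x, lambda x: 1.0]    # r(x, delta) = x + delta

# Analytically, for mu = 1:
#   dtau/dx(0) * r^1(x(0)) + dtau/dx(-1) * r^0(x(-1))
#   = x(-1) * 1 + x(0) * x(-1) = 3 + 6 = 9 at x(0) = 2, x(-1) = 3.
val = gen_lie_derivative(tau, r_coeffs, x_now=2.0, x_del=3.0, mu=1)
print(round(val, 6))   # → 9.0
```

Note how the coefficient index $\mu - l$ decreases as the delay index $l$ increases: each delayed component of the state is driven by a correspondingly lower-degree coefficient of $r(x,\delta)$, which is exactly the diagonal structure of the matrix (2.19).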

Definition 2.2 (Generalized Lie Bracket) Let $r_q(x,\delta) = \sum_{j=0}^{\bar{s}} r_q^j(x)\,\delta^j \in \mathcal{K}^{*n}(\delta]$, $q = 1, 2$. For any $k, l \ge 0$, the Generalized Lie Bracket $[r_1^k(\cdot), r_2^l(\cdot)]_{E_i}$, on $\mathbb{R}^{(i+1)n}$, $i \ge 0$, is defined as
$$\big[r_1^k(\cdot), r_2^l(\cdot)\big]_{E_i} = \sum_{j=0}^{i} \Big(\big[r_1^{k-j}, r_2^{l-j}\big]_E (x(-j))\Big)^T \frac{\partial}{\partial x_{[0]}(-j)}, \tag{2.21}$$
where
$$\big[r_1^k(\cdot), r_2^l(\cdot)\big]_E = L_{r_1^k(x)}\, r_2^l(x) - L_{r_2^l(x)}\, r_1^k(x). \tag{2.22}$$

Remark 2.2 The Generalized Lie derivative as defined by (2.20) is the Lie derivative of $\tau(x_{[p,s]})$ along
$$\big(r^{\mu+p}(+p), \ldots, r^\mu(0), r^{\mu-1}(-1), \ldots, r^0(-\mu), 0\big)^T.$$
The latter is embedded in
$$\Delta_{[p,q]} = \mathrm{span}_{\mathcal{K}^*} \begin{pmatrix} r^0(x(p)) & \cdots & r^{\bar{s}}(x(p)) & & 0 & 0 \\ 0 & \ddots & & \ddots & & 0 \\ 0 & & 0 & r^0(x(-q)) & \cdots & r^{\bar{s}}(x(-q)) \end{pmatrix},$$
where $r^i(x) = (r_1^i, \ldots, r_j^i)$ and $q > \mu$. Accordingly, assuming without loss of generality $k \ge l$, the Generalized Lie Bracket $[r_1^k(\cdot), r_2^l(\cdot)]_{E_i}$ is defined starting from the standard Lie Bracket
$$\left[ \begin{pmatrix} 0 \\ r_1^s(s-k) \\ \vdots \\ r_1^k(0) \\ \vdots \\ r_1^0(-k) \\ 0 \end{pmatrix}, \begin{pmatrix} r_2^s(s-l) \\ \vdots \\ \vdots \\ r_2^l(0) \\ \vdots \\ r_2^0(-l) \\ 0 \end{pmatrix} \right] = \begin{pmatrix} \tau^{k+s-l}(s-l) \\ \vdots \\ \tau^k(0) \\ \vdots \\ \tau^0(-k) \\ 0 \end{pmatrix}.$$
In fact, $[r_1^k(\cdot), r_2^l(\cdot)]_{E_i} = \sum_{j=0}^{\min(k,i)} \big(\tau^{k-j}(-j)\big)^T \frac{\partial}{\partial x(-j)}$.

The Generalized Lie Brackets (2.21) are associated with the $\Delta_{[p,q]}$ defined above. In the special case of causal submodules (which lead one to consider $\Delta_{[0,q]}$), they have been shown to characterize the 0-integrability conditions, that is, when the $\Delta^\perp(\delta]$ of rank $n-j$ is generated by $n-j$ exact and independent differentials $d\lambda_\mu(x) = \Lambda_\mu(x,\delta)\,dx_{[0]}$, $\mu \in [1, n-j]$ (Califano et al. 2011a). In order to give the integrability conditions directly on the submodule
$$\Delta(\delta] = \mathrm{span}_{\mathcal{K}^*(\delta]}\{r_1(x,\delta), \ldots, r_j(x,\delta)\},$$
we need to refer to the definition of the Polynomial Lie Bracket and, accordingly, to a more general definition of Lie Bracket.

Definition 2.3 (Lie Bracket) Given $r_i(x_{[s_i,s]}, \delta) \in \mathcal{K}^{*n}(\delta]$, $i = 1, 2$, the Lie Bracket
$$[r_1(x_{[s_1,s]}, \delta),\, r_2(x_{[s_2,s]}, \delta)]$$
is a $(4s + s_1 + s_2 + 1)$-tuple of polynomial vectors $r_{12,j}(x,\delta)$, defined as
$$r_{12,j}(x,\delta) = \sum_{\ell=-s_1}^{2s+s_1} \big[r_1^{\ell+s_1-j}, r_2^{\ell}\big]_{E_0}\, \delta^{\ell+s_1}, \quad j \in [-2s,\, 2s+s_1+s_2]. \tag{2.23}$$

Recalling that a polynomial vector $r_1(x_{[s_1,s]}, \delta)$ acts on a function $\ell(t)$, and denoting its image as $R_1(x_{[s_1,s]}, \ell) := \sum_{j=0}^{s} r_1^j(x)\,\ell(-j)$, the Polynomial Lie Bracket is then defined as follows:

Definition 2.4 (Polynomial Lie Bracket) Given $r_i(x_{[s_i,s]}, \delta) \in \mathcal{K}^{*n}(\delta]$, $i = 1, 2$, the Polynomial Lie Bracket $[R_1(x,\ell), r_2(x,\delta)]$ is defined as
$$[R_1(x,\ell), r_2(x,\delta)] := \mathrm{ad}_{R_1(x_{[s_1,s]},\ell)}\, r_2(x_{[s_2,s]}, \delta) = \dot{r}_2(x,\delta)\big|_{\dot{x}_{[0]} = R_1(x,\ell)}\, \delta^{s_1} - \sum_{k=0}^{s_1+s} \frac{\partial R_1(x_{[s_1,s]}, \ell)}{\partial x(s_1-k)}\, \delta^k\, r_2(x(s_1), \delta).$$

With some abuse, the Polynomial Lie Bracket and the standard Lie Bracket are both
denoted by [., .]. No confusion is possible, since in the Polynomial Lie Bracket, some
(i) will always be present inside the brackets.
Some comments
As noted in Califano and Moog (2017), the link between the Lie Bracket (2.23) and the Generalized Lie Bracket (2.21) can be easily established by noting that $r_{12,j}(x,\delta)$ in (2.23) is given by
$$r_{12,j}(x,\delta) = \mathcal{I}(\delta)\, \Big( \big[r_1^{2(s+s_1)-j}, r_2^{2s+s_1}\big]_{E_{2s+s_1}} \big|_{x(2(s+s_1))} \Big),$$
where
$$\mathcal{I}(\delta) = \big(I_n \delta^{2(s+s_1)}, \cdots, I_n \delta, I_n\big).$$
Furthermore, the $r_{12,j}(x,\delta)$'s also characterize the Polynomial Lie Bracket, since one easily gets that
$$[R_1(x,\ell), r_2(x,\delta)] = \sum_{j=-2s}^{2s+s_1+s_2} r_{12,j}(x,\delta)\,\ell(j). \tag{2.24}$$
Finally, in the delay-free case, the Polynomial Lie Bracket reduces (up to $\ell(0)$) to the standard Lie Bracket. In fact,
$$[R_1(x,\ell), r_2(x,\delta)] = [r_1^0(x)\,\ell(0),\, r_2^0(x)] = [r_1^0, r_2^0]\,\ell(0).$$
Instead, if delays are present, $[R_1(x,\ell), r_2(x,\delta)]$ immediately highlights some important differences with respect to the delay-free case, such as the loss of validity of the Straightening theorem. In fact, since the term depending on $\delta$ undergoes a different kind of operation than the term depending on $\ell$, starting from $r(x,\delta)$ and its corresponding image $R(x,\ell)$, in general
$$\dot{r}(x,\delta)\big|_{\dot{x}_{[0]} = R(x,\ell)}\, \delta^{s_1} \ne \sum_{k=0}^{s_1+s} \frac{\partial R(x_{[s_1,s]}, \ell)}{\partial x(s_1-k)}\, \delta^k\, r(x(s_1), \delta),$$
which yields that, in general, $[r(x,\delta), r(x,\delta)] \ne 0$. For instance, consider
$$r(x,\delta) = \begin{pmatrix} x_2(-1) \\ 1 \end{pmatrix}. \quad \text{Then} \quad R(x,\ell) = \begin{pmatrix} x_2(-1) \\ 1 \end{pmatrix} \ell(0)$$
and
$$[R(x,\ell), r(x,\delta)] = \begin{pmatrix} \ell(-1) - \ell(0)\delta \\ 0 \end{pmatrix} \ne 0.$$
Accordingly,
$$[r(x,\delta), r(x,\delta)] = \left( \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} -\delta \\ 0 \end{pmatrix} \right).$$

2.5 Some Remarks on the Polynomial Lie Bracket

Let us first examine some properties of the Polynomial Lie Bracket discussed in Califano and Moog (2017).

Property 2.1 (Anticommutativity) Assume, without loss of generality, $s_2 \ge s_1$. Then, for any integer $j$,
$$\frac{\partial [R_1(x,\ell), r_2(x,\delta)]}{\partial \ell(s_1 - j)}\, \delta^{s_2-s_1+j+|j|} = - \frac{\partial [R_2(x,\ell), r_1(x,\delta)]}{\partial \ell(s_2 + j)}\, \delta^{|j|}. \tag{2.25}$$

Property 2.2 Given, for $i = 1, 2$, $\bar{r}_i(x_{[\bar{s}_i,s]}, \delta) = r_i(x_{[s_i,s]}, \delta)\, \beta_i(x_{[s_i,s]}, \delta)$, then
$$[\bar{R}_1(x,\ell), \bar{r}_2(x,\delta)]\, \delta^{s_1-\bar{s}_1} = [R_1(x,\bar{\ell}), r_2(x,\delta)]\big|_{\bar{\ell}=\beta_1(x,\ell)}\, \hat{\beta}_2 + r_2(x,\delta)\,\alpha_2 - r_1(x,\delta)\,\alpha_1 \tag{2.26}$$
with $\hat{\beta}_2 = \beta_2(x(s_1), \delta)$, $\alpha_1 = \sum_{k=0}^{s+s_1} \frac{\partial \beta_1(x,\ell)}{\partial x(s_1-k)}\, \delta^k\, \bar{r}_2(x(s_1), \delta)$, and $\alpha_2 = \dot{\beta}_2(x,\delta)\big|_{\dot{x}=\bar{R}_1(x,\ell)}\, \delta^{s_1}$.

Property 2.3 The repeated Polynomial Lie Bracket obtained by setting $\ell = 1$ is given by
$$\mathrm{ad}^k_{R_1(x,1)}\big(r_2(x,\delta)\,\alpha\big) = \sum_{j=0}^{k} \binom{k}{j}\, \mathrm{ad}^{k-j}_{R_1(x,1)}\big(r_2(x,\delta)\big)\, \alpha^{(j)}\big|_{\dot{x}=R_1(x,1)}. \tag{2.27}$$

It is important to point out that for delay-free systems one recovers the standard properties of Lie Brackets. In fact, if $r_i(x,\delta) = r_i^0(x)$ for $i = 1, 2$, then $R_i(x,\ell) = r_i^0(x)\,\ell(0)$ and
$$\frac{\partial [R_1(x,\ell), r_2(x,\delta)]}{\partial \ell(0)} = [r_1^0, r_2^0] = -[r_2^0, r_1^0] = - \frac{\partial [R_2(x,\ell), r_1(x,\delta)]}{\partial \ell(0)},$$
whereas, letting $\bar{r}_i(x,\delta) = r_i^0(x)\,\beta_i(x)$, then $\bar{R}_i(x,\ell) = r_i^0(x)\,\beta_i(x)\,\ell(0)$ and
$$[\bar{R}_1(x,\ell), \bar{r}_2(x,\delta)] = [r_1^0(x)\beta_1(x)\ell(0),\, r_2^0(x)\beta_2(x)] = \big([r_1^0, r_2^0]\,\beta_2\beta_1 + r_2^0\alpha_2 - r_1^0\alpha_1\big)\,\ell(0)$$
with $\alpha_1 = \beta_2\,(L_{r_2^0}\beta_1)$ and $\alpha_2 = \beta_1\,(L_{r_1^0}\beta_2)$.

Example 2.5 Consider, for i = 1, 2, ri(x, δ) given by

    r1(x, δ) = (x1(1), x2 δ)^T,   r2(x, δ) = (x2 δ, x1)^T.

Then

    R1(x, ∇) = (x1(1)∇(0), x2∇(−1))^T,   R2(x, ∇) = (x2∇(−1), x1∇(0))^T.

Accordingly, since s1 = 1, s2 = s = 0,

    [R1(x, ∇), r2(x, δ)] = (x2∇(−1)δ, x1(1)∇(0))^T δ − (∇(0)x2(1)δ, ∇(−1)x1δ)^T
                        = −(0, x1δ)^T ∇(0) + (x2δ² − x2(1)δ, x1(1)δ)^T ∇(1)
                        = r12,0(x, δ)∇(0) + r12,1(x, δ)∇(1).

One can easily verify that

    r12,0(x, δ) = −(0, x1)^T δ = Σ_{ℓ=−1}^{1} [r1^{ℓ+1}, r2^{ℓ}]_{E0} δ^{ℓ+1}
    r12,1(x, δ) = (−x2(1), x1(1))^T δ + (x2, 0)^T δ² = Σ_{ℓ=−1}^{1} [r1^{ℓ}, r2^{ℓ+1}]_{E0} δ^{ℓ+1},

which confirms Eq. (2.23).


   
Analogously, [R2(x, ∇), r1(x, δ)] = (x2(1) − x2δ, −x1(1))^T ∇(0) + (0, x1δ)^T ∇(1), and it is
again easily verified that (2.25) holds true (with the indices exchanged, since s1 > s2).
In fact,

    (∂[R2(x, ∇), r1(x, δ)]/∂∇(0)) δ = (x2(1) − x2δ, −x1(1))^T δ = −∂[R1(x, ∇), r2(x, δ)]/∂∇(1)
    (∂[R2(x, ∇), r1(x, δ)]/∂∇(1)) δ = (0, x1δ)^T δ = −(∂[R1(x, ∇), r2(x, δ)]/∂∇(0)) δ.

Since the derivative of a differential equals the differential of the derivative,
one can easily compute the differential of the kth derivative.
As already underlined, the Polynomial Lie Bracket is intrinsically linked to the
standard Lie Bracket when we consider delay-free systems. Since the standard
Lie Bracket has been given a precise geometric interpretation, one may wonder whether
something similar could be said in the delay case. Let us, in fact, recall that in the delay-free
case the geometric interpretation of the Lie Bracket can be easily obtained by
considering a simple example, given in Spivak (1999), of a two-input driftless system
of the form
ẋ(t) = g1 (x(t))u 1 (t) + g2 (x(t))u 2 (t).

If the system were linear, that is, if g1(x) and g2(x) were constant vectors, the application
of an input sequence of the form [(0, 1), (1, 0), (0, −1), (−1, 0)], where each
control acts exactly for a time h, would bring the state back to the starting point. In
the nonlinear case, instead, it was shown that such a sequence brings the system to
a final point x_f different from the starting one x_0, and that the Lie Bracket [g2, g1]
exactly identifies the direction which should be taken to go back to x_0 from x_f. In
fact, carrying out the computation, it turns out that the first-order derivative of
the flow at the origin is zero, while its second-order derivative, evaluated again at the
origin, is exactly twice the bracket [g2, g1]. As a by-product, in the special case of a
single-input driftless and delay-free system, using a constant control allows one to
move forward or backward on a unique integral manifold of the considered control
vector field, which can be easily proven by considering that [g, g] = 0.
In the time-delay context, already for a single-input system the Polynomial Lie
Bracket is not, in general, identically zero. Using arguments analogous to the delay-free
case, as already discussed in Sect. 1.3, it follows that, even in the single-input
case, it is not true, in general, that one moves forward and backward on a
unique integral manifold when delays are present. To formally show this, let us go
back to the single-input time-delay system (1.8)

    ẋ(t) = g(x(t), x(t − τ))u(t) = (x2(t − τ), 1)^T u(t)   (2.28)

and consider the dynamics over four steps of magnitude τ when the control sequence
[1, 0, −1, 0] is applied and the switches occur every τ. Then one gets that over the
four steps the dynamics reads

ẋ(t) = g(x(t), x(t − τ ))u(t)


ẋ(t − τ ) = g(x(t − τ ), x(t − 2τ ))u(t − τ )
(2.29)
ẋ(t − 2τ ) = g(x(t − 2τ ), x(t − 3τ ))u(t − 2τ )
ẋ(t − 3τ ) = g(x(t − 3τ ), x(t − 4τ ))u(t − 3τ )

and, due to the input sequence, it can be rewritten in the form

    ż(t) = g1(z(t))u1(t) + g2(z(t))u2(t),   (2.30)

where z1(t) = x(t), z2(t) = x(t − τ), z3(t) = x(t − 2τ), z4(t) = x(t − 3τ), u1(t) =
u(t − τ) = −u(t − 3τ), and u2(t) = u(t) = −u(t − 2τ). In (2.30),

    g1(z) = (0, g(z2, z3), 0, −g(z4, c0))^T,   g2(z) = (g(z1, z2), 0, −g(z3, z4), 0)^T,

with c0 the initial condition of x on the interval [−4τ , −3τ ). Of course not all the
trajectories of z 1 (t) in (2.30) will be trajectories of x(t) in (2.29), whereas all the
trajectories of x(t) for t ∈ [0, 4τ ) in (2.29) can be recovered as trajectories of z 1 (t)
in (2.30) for t ∈ [0, 4τ ), whenever the system is initialized with constant initial
conditions.

Mimicking the delay-free case, one should then apply the input sequence [(0, 1),
(1, 0), (0, −1), (−1, 0)] to the system. This can be achieved by considering u(t) = 1
for t ∈ [0, τ ), u(t) = 0 for t ∈ [τ , 2τ ), u(t) = −1 for t ∈ [2τ , 3τ ), and u(t) = 0 for
t ∈ [3τ , 4τ ), with the initialization u(t) = 0 for t ∈ [−τ , 0). Such an example shows
immediately that the second-order derivative at 0 is characterized by

    [g1, g2] = [ (0, g(z2, z3), 0, −g(z4, c0))^T , (g(z1, z2), 0, −g(z3, z4), 0)^T ]
             = [ (0, g(x(t − τ), x(t − 2τ)), 0, −g(x(t − 3τ), x(t − 4τ)))^T ,
                 (g(x(t), x(t − τ)), 0, −g(x(t − 2τ), x(t − 3τ)), 0)^T ].


It is straightforward to note that the ∂/∂x(t)-component of the Lie Bracket is given by

    (∂g(x(t), x(t − τ))/∂x(t − τ)) g(x(t − τ), x(t − 2τ)),

which, in general, is nonzero, thus showing that the presence of a delay may generate a new direction that can be taken.

Such a result can be easily recovered by using the Polynomial Lie Bracket. In fact,
starting from g1(x, δ) = g(x(t), x(t − τ)), one considers G1(x, ∇) =
g(x(t), x(t − τ))∇(0). Accordingly, the associated Polynomial Lie Bracket is

    [G1(x, ∇), g1(x, δ)] = ġ(x, δ)|_{ẋ[0]=g(x(0),x(−1))∇(0)} − Σ_{j=0}^{1} (∂g(x(0), x(−1))/∂x(−j)) ∇(0) δ^j g(x(−j), x(−j−1))
                         = (∂g(x(0), x(−1))/∂x(−1)) g(x(−1), x(−2)) (∇(−1) − ∇(0)δ)
                         = (1, 0)^T (∇(−1) − ∇(0)δ)

which is different from zero. Figure 1.2 shows the trajectories of the single-input
system (2.28) controlled with a piecewise-constant input which varies from 1 to −1 every
10 s, highlighting the difference with the delay-free case, as discussed. It is also clear
that in this framework the delay can be used as an additional control variable. This
is left as an exercise for the reader, who may investigate the effect of different
delays on the system with the same output, as well as a system with a fixed delay
where the input changes its period.
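The net displacement produced by the four-step experiment can also be checked numerically. The sketch below is our own illustration, not taken from the book: it uses a plain Euler discretization with a circular buffer for the delayed sample, and all names (simulate, hist_x2, u_seq) are hypothetical. Starting from the zero initial function, system (2.28) with τ = 1 under the sequence [1, 0, −1, 0] ends near (−1, 0) instead of returning to the origin:

```python
def simulate(tau=1.0, dt=1e-3):
    """Euler integration of (2.28): dx1 = x2(t - tau) u(t), dx2 = u(t)."""
    u_seq = (1.0, 0.0, -1.0, 0.0)      # each piece of the input acts for exactly tau
    n_hist = int(round(tau / dt))      # number of samples spanning one delay interval
    x1, x2 = 0.0, 0.0                  # constant initial condition x = 0 for t <= 0
    hist_x2 = [0.0] * n_hist           # circular buffer: x2 over the last tau seconds

    for k in range(4 * n_hist):        # simulate t in [0, 4*tau)
        x2_delayed = hist_x2[k % n_hist]   # sample of x2 taken tau seconds ago
        uk = u_seq[k // n_hist]
        x1 += dt * x2_delayed * uk
        x2 += dt * uk
        hist_x2[k % n_hist] = x2       # overwrite the sample that is now tau old
    return x1, x2

x1f, x2f = simulate()
print(x1f, x2f)   # approximately (-1, 0): the trajectory does not come back to 0
```

Moving "forward" for τ and then "backward" for τ thus leaves a net displacement along the direction (1, 0)^T generated by the delay, in accordance with the nonzero Polynomial Lie Bracket computed above.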

2.6 The Action of Changes of Coordinates

Changes of coordinates play a fundamental role in the study of the structural prop-
erties of a given system. In the delay-free case, a classical example of their use
is displayed by the decomposition in observable/reachable subsystems, following
Kalman intuition. When dealing with time-delay systems, several problems arise
when considering changes of coordinates.

Example 2.6 Consider, for instance, the system

ẋ(t) = x(t − 1) + u(t).

The map z(t) = x(t) + x(t − 1) does not define a change of coordinates, since we
are not able to express x(t) as a function of z(t) and a finite number of its delays.
Nevertheless we can compute

ż(t) = z(t − 1) + u(t) + u(t − 1).
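The closed-loop identity above can be checked symbolically. In the sketch below (our own illustration; the symbols x0, x1, … are a hypothetical encoding of x(t), x(t − 1), …), the dynamics ẋ(t) = x(t − 1) + u(t) is expressed as a rule for differentiating delayed samples:

```python
import sympy as sp

x = sp.symbols('x0:4')   # x(t), x(t-1), x(t-2), x(t-3)
u = sp.symbols('u0:2')   # u(t), u(t-1)

def xdot(k):
    # d/dt x(t - k) = x(t - k - 1) + u(t - k), from xdot(t) = x(t-1) + u(t)
    return x[k + 1] + u[k]

# z(t) = x(t) + x(t-1), hence zdot(t) = xdot(t) + xdot(t-1)
zdot = xdot(0) + xdot(1)
z_delayed = x[1] + x[2]   # z(t-1)
assert sp.simplify(zdot - (z_delayed + u[0] + u[1])) == 0
```

The map is nevertheless not bicausal: the polynomial 1 + δ is not unimodular, so x(t) cannot be recovered from z(t) and finitely many of its delays, as stated in the example.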

Example 2.7 Consider, for instance, the system

ẋ1 (t) = x1 (t − 1)x2 (t)


ẋ2 (t) = x1 (t).

The map z 1 (t) = x1 (t − 1), z 2 (t) = x2 (t) leads to the system

ż 1 (t) = z 1 (t − 1)z 2 (t − 1)
ż 2 (t) = z 1 (t + 1),

where the causality property of the system has not been preserved.

The previous examples show that changes of coordinates must be defined with
care. To this end, we will consider bicausal changes of coordinates as defined
in Márquez-Martínez et al. (2002), that is, changes of coordinates which are causal
and admit a causal inverse map. Let us thus consider the mapping

    z[0] = ϕ(x[α]),   (2.31)

where α ∈ IN and ϕ ∈ K^n.
Definition 2.5 Consider a system Σ in the state coordinates x. The mapping (2.31)
is a local bicausal change of coordinates for Σ if there exist an integer ℓ ∈ IN and
a function ψ(z[ℓ]) ∈ K^n such that, assuming z[0] and x[0] defined for t ≥ −(α + ℓ),
then ψ(ϕ(x[α]), …, ϕ(x[α](−ℓ))) = x[0] for t ≥ 0.
Furthermore, if we consider the differential form representation of (2.31), which is
given by

    dz[0] = T(x[γ], δ)dx[0],   T(x[γ], δ) = Σ_{j=0}^{s} T^j(x)δ^j,   (2.32)

then the polynomial matrix T(x[γ], δ) ∈ K^{n×n}(δ] is unimodular and γ ≤ α, whereas
its inverse T^{−1}(z, δ) has polynomial degree ℓ ≤ (n − 1)α.
Under the change of coordinates (2.31), with differential representation (2.32),
the differential representation (2.17) of the given system is transformed in the new
coordinates into

    d ż[0] = f̃(z, u, δ)dz[0] + Σ_{j=1}^{m} g̃1,j(z, δ)du_j[0]
    dy[0] = h̃(z, δ)dz[0]   (2.33)

with

    f̃(z, u, δ) = [T(x, δ) f(x, u, δ) + Ṫ(x, δ)] T^{−1}(x, δ)|_{x[0]=φ^{−1}(z)}
    g̃1,j(z, δ) = T(x, δ) g1,j(x, δ)|_{x[0]=φ^{−1}(z)},   (2.34)
    h̃(z, δ) = h(x, δ) T^{−1}(x, δ)|_{x[0]=φ^{−1}(z)}.

More generally, the effect of a bicausal change of coordinates on a submodule


element is defined by the next result.

Proposition 2.1 Under the bicausal change of coordinates (2.31), the causal submodule
element r(x, δ) is transformed into r̃(z, δ) given by

    r̃(z, δ) = [T(x, δ) r(x, δ)]_{x[0]=φ^{−1}(z)} = Σ_{ℓ=0}^{s+s̄} ( Σ_{j=0}^{ℓ} T^j(x) r^{ℓ−j}(x(−j)) ) δ^ℓ |_{x[0]=φ^{−1}(z)}

with s = deg(T(x, δ)), s̄ = deg(r(x, δ)).

As a consequence, we are now able to characterize the effect of a bicausal change
of coordinates on the Extended Lie Bracket and the Polynomial Lie Bracket. To this
end, starting from the given change of coordinates and its differential representation
(2.32), we have to consider the matrices

    T_{l,i}(x) = ( T^0(x)   ···   ···   ···   T^l(x)
                    0        ⋱                  ⋮
                    0        0   T^0(x(−i))  ···  T^{l−i}(x(−i)) ),

where T^j(x) = 0 for j > s. The following results hold true.

Lemma 2.1 Let r_β(x, δ) = Σ_{j=0}^{s̄} r_β^j(x)δ^j, β = 1, 2. Under the bicausal change of
coordinates (2.31) with differential representation (2.32), one has, for k ≤ l,
0 ≤ i ≤ l,

    [r̃1^k(z), r̃2^l(z)]_{Ei} = T_{l,i}(x) [r1^k(x), r2^l(x)]_{El} |_{x=φ^{−1}(z)}.

Proposition 2.2 Under the bicausal change of coordinates (2.31), the Polynomial
Lie Bracket [R1(x, ∇), r2(x, δ)], defined starting from the causal submodule elements
r1(x, δ), r2(x, δ), is transformed into

    [R̃1(z, ∇), r̃2(z, δ)] = Σ_{j=−2s̃}^{2s̃} r̃12,j(z, δ)∇(j),

where

    r̃12,j(z, δ) = Σ_{ℓ=0}^{2s̃} [r̃1^{ℓ−j}, r̃2^{ℓ}]_{E0} δ^ℓ = Σ_{ℓ=0}^{2s̃−j} [r̃1^{ℓ}, r̃2^{ℓ+j}]_{E0} δ^{j+ℓ}
                = Σ_{ℓ=0}^{2s̃−j} T_{ℓ+j,0}(x) [r1^{ℓ}(x), r2^{ℓ+j}(x)]_{E ℓ+j} δ^{j+ℓ} |_{x=φ^{−1}(z)}.

2.7 The Action of Static State Feedback Laws

Definition 2.6 Consider a system (2.1); an invertible instantaneous static state feedback
is defined as

    u(x(t), v(t)) = α(x(t)) + β(x(t))v(t),   (2.35)

where v(t) is a new input of dimension m and β is a square invertible matrix whose
entries are meromorphic functions, so that (2.35) is invertible almost everywhere and
one recovers v(t) as a function of u(t), that is,

    v(x(t), u(t)) = [β(x(t))]^{−1} (−α(x(t)) + u(t)).

In general, the class of instantaneous static state feedback laws is not rich enough to
cope with the complexity of time-delay systems and to solve the respective control
problems. Thus, delay-dependent state feedback laws are considered as well and
have the same level of complexity as the system to be controlled.

Definition 2.7 Given the system (2.1), consider the feedback

    u(x, v) = α(x(t), …, x(t − ℓ)) + Σ_{i=0}^{ℓ} βi(x(t), …, x(t − ℓ)) v(t − i),   (2.36)

which can be written in the compact form

    u(x, v) = α(·) + β(x(t), …, x(t − ℓ), δ) v(t),   (2.37)

where v is an m-valued new input and

    β(x, δ) = Σ_{i=0}^{ℓ} βi(x(t), …, x(t − ℓ)) δ^i

is a δ-polynomial matrix.
The feedback (2.37) is said to be an invertible bicausal static state feedback if β
is a unimodular polynomial matrix, i.e., it admits a polynomial inverse β^{−1}(x, δ].

It follows that v is a function of u as follows:

    v(x, u) = β^{−1}(·, δ] (−α(·) + u(t)).   (2.38)
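When the entries of β do not depend on x, the operator δ behaves as an ordinary commuting indeterminate, and unimodularity reduces to a determinant test: β(δ] is unimodular iff det β(δ) is a nonzero constant. The sketch below illustrates this on a hypothetical constant-coefficient matrix of our own choosing (not an example from the book):

```python
import sympy as sp

d = sp.symbols('delta')                # the delay operator; commutes with constants

beta = sp.Matrix([[1, d**2],
                  [d, 1 + d**3]])      # hypothetical constant-coefficient beta(delta)

det = sp.expand(beta.det())            # (1 + d^3) - d^2 * d = 1
assert det.is_constant() and det != 0  # nonzero constant => unimodular

beta_inv = beta.inv()                  # the inverse has polynomial entries in delta
print(sp.simplify(beta_inv))
```

In the state-dependent case, δ no longer commutes with the coefficients (δ a(x(t)) = a(x(t − 1)) δ), so unimodularity must be checked in the non-commutative ring K(δ]; the determinant shortcut above is valid only for constant matrices.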

Note that in the special case where m = 1, the invertibility of (2.36) necessarily
yields

    u(x(t)) = α(x(t), …, x(t − ℓ)) + β0(x(t), …, x(t − ℓ)) v(t),   (2.39)
that is, the feedback law can depend only on v(t). Only in the multi-input case, several
time instants of v(·) may be involved. Referring to the differential representation,
one gets that the differential of the feedback (2.36) is

    du[0] = [ Σ_{j=0}^{ℓ} ( ∂α(x)/∂x(−j) + Σ_{i=0}^{ℓ} (∂βi(x)/∂x(−j)) v(−i) ) δ^j ] dx[0] + Σ_{i=0}^{ℓ} βi(x) δ^i dv[0]
          = α(x, v, δ) dx[0] + β(x, δ) dv[0]   (2.40)

so that the inverse feedback is

    dv[0] = α̂(x, u, δ) dx[0] + β̂(x, δ) du[0].

As the matrix β(x, δ) is unimodular, one gets β̂(x, δ) = β −1 (x, δ). Accordingly, the
differential representation of the closed-loop system, given by the dynamics

d ẋ[0] = f (x, u, δ)dx[0] + g(x, δ)du[0]

with the feedback du[0] = α(x, v, δ)dx[0] + β(x, δ)dv[0] , reads

d ẋ[0] = fˆ(x, v, δ)dx[0] + ĝ(x, δ)dv[0]

with

fˆ(x, v, δ) = f (x, u, δ)|u=α(·)+β(·)v + g(x, δ)α(x, v, δ),


ĝ(x, δ) = g(x, δ)β(x, δ).

Example 2.8 The feedback

    u1(t) = v1(t)
    u2(t) = x2(t − 1) v1(t − 2) + v2(t)

involves delays and is obviously invertible, since

    v1(t) = u1(t)
    v2(t) = −x2(t − 1) u1(t − 2) + u2(t).
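The round trip can be verified symbolically. In the sketch below (our own illustration), the state-dependent coefficient x2(t − 1) is abstracted into a single symbol c, and the delayed input samples are treated as independent symbols with hypothetical names:

```python
import sympy as sp

c = sp.Symbol('c')                       # stands for x2(t - 1)
v1, v1m2, v2 = sp.symbols('v1 v1m2 v2')  # v1(t), v1(t-2), v2(t)

# the feedback of Example 2.8
u1   = v1
u1m2 = v1m2                              # the same law delayed by 2: u1(t-2) = v1(t-2)
u2   = c * v1m2 + v2

# candidate inverse
v1_rec = u1
v2_rec = -c * u1m2 + u2

assert sp.simplify(v1_rec - v1) == 0
assert sp.simplify(v2_rec - v2) == 0
```

Invertibility holds here because the associated matrix β(x, δ] is lower triangular with units on the diagonal, hence unimodular.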

2.8 Problems

1. Consider the dynamics

       ẋ(t) = (x1(t)x2(t − D) + x2(t), x1(t) + x2(t)x2(t − D))^T + (x1(t), 1)^T u(t − 2D)
       y(t) = x1(t) + x2(t − D)

   with D a constant delay. Find the associated differential representation.


2. Given the submodule elements r1(x, δ) = (δ, x1(t − 1), x2²(t))^T,
   r2(x, δ) = (x2(t), x1(t − 2)δ, 1)^T, compute the Polynomial Lie Bracket
   [R1(x, ∇), r2(x, δ)].

3. Given r1(x, δ) = (δ, x1(t − 1), x2²(t))^T, r2(x, δ) = (x2(t), x1(t − 2)δ, 1)^T, compute the
   Generalized Lie derivative L_{r1^1} r2^0(x).

4. Verify Properties 2.1, 2.2 and 2.3 in Sect. 2.5 for

       r1(x, δ) = (δ², x2(t), x1(t − 2))^T,   r2(x, δ) = (x2(t), x1(t − 2)δ², δ)^T.

5. Prove Property 2.1 and Property 2.2 in Sect. 2.5.

6. Consider the (linear) delay-dependent input transformation

u 1 (t) = v1 (t) − v2 (t − 1)
u 2 (t) = v1 (t − 1) + v2 (t) − v2 (t − 2).

Is this transformation invertible? If yes, then write v1 (t) and v2 (t) in terms of
u 1 (t), u 2 (t) and their delays.

7. Consider the delay-dependent state feedback

       u1(t) = x1²(t) + x2(t − 1)v1(t − 1) + v2(t)
       u2(t) = x2³(t − 1) + v1(t).

Is this state feedback invertible? If yes, is the inverse state feedback causal?
Chapter 3
The Geometric Framework—Results on
Integrability

In this chapter, we will focus our attention on the solvability of a set of first-order partial
differential equations or, equivalently, on the integrability problem for a
set of one-forms when the given variables are affected by a constant delay. As
is well known, in the delay-free case such a problem is addressed by using the Frobenius
theorem, and the necessary and sufficient conditions for integrability can be
stated equivalently by referring to involutive distributions or involutive codistributions.
The Frobenius theorem is used quite frequently in the nonlinear delay-free context
because it is at the basis of the solution of many control problems. This is why it is
fundamental to understand how it works in the delay context.
When dealing with time-delay systems, in fact, things become more involved.
A first attempt to deal with the problem can be found in Márquez-Martínez (2000),
where it is shown that for a single one-form one gets results which are similar to the
delay-free case, while these results cannot be extended to the general context.
As shown in Chap. 1, a first important characteristic of one-forms in the time-delay
context is that they have to be viewed as elements of a module over a certain
non-commutative polynomial ring.
In the present chapter, it will also be shown that two notions of integrability have to
be defined in the delay context, strong and weak integrability, which instead coincide
in the delay-free case. As will be pointed out, these main differences are linked
to the notion of closure of a submodule, introduced in Conte and Perdon (1995) for
linear systems over rings and recalled in Chap. 1. Finally, it will also be shown that
the concept of involutivity can be appropriately extended to this context through the
use of the Polynomial Lie Bracket.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

C. Califano and C. H. Moog, Nonlinear Time-Delay Systems,
SpringerBriefs in Control, Automation and Robotics,
https://doi.org/10.1007/978-3-030-72026-1_3

3.1 Some Remarks on Left and Right Integrability

Let us first underline what is meant by integrability of a left- or right-submodule:

1. Integrating a given left-submodule

       Ω(δ] = span_{K*(δ]} {ω1(x, δ), …, ωk(x, δ)},

   generated by k one-forms independent over K*(δ], consists in finding k independent
   functions ϕ1, …, ϕk such that

       span_{K*(δ]} {ω1, …, ωk} ⊆ span_{K*(δ]} {dϕ1, …, dϕk}.   (3.1)

2. Integrating a given right-submodule

       Δ(δ] = span_{K*(δ]} {r1(x, δ), …, rj(x, δ)},

   generated by j independent elements over K*(δ], consists in the computation of
   a set of n − j exact differentials dλμ(x) = Λμ(x, δ)dx[0](p), independent over
   K*(δ], which define a basis for the left-kernel of Δ(δ].
Already these definitions enlighten two important differences with respect to the
delay-free case.
First, Eq. (3.1) states that integrability of Ω(δ] is equivalent to finding k independent
functions ϕi, i ∈ [1, k], such that

    (ω1, …, ωk)^T = A(x, δ)(dϕ1, …, dϕk)^T.

If the matrix A(x, δ) is unimodular, then this means that

    span_{K*(δ]} {ω1, …, ωk} ≡ span_{K*(δ]} {dϕ1, …, dϕk},

so that the differentials dϕj can be expressed in terms of the one-forms ωi's, which
is exactly what happens in the delay-free case. We will talk in this case of Strong
Integrability. If instead the matrix A(x, δ) is not unimodular, then we cannot express
the dϕj's in terms of the one-forms ωi, since

    span_{K*(δ]} {ω1, …, ωk} ⊂ span_{K*(δ]} {dϕ1, …, dϕk}.

We will talk in this case of Weak Integrability, which is then peculiar to the delay
context only and is directly linked to the concept of closure of a submodule.
Secondly, such a difference does not arise if one works on the right-submodule Δ(δ],
since its left-annihilator is always closed, so that one will always find that
its left-annihilator is strongly integrable. Instead, as will be clarified later on, when
talking of the integrability of a right-submodule, a new notion of p-integrability needs
to be introduced, which characterizes the index p such that dλ = Λ(x, δ)dx[0](p)
satisfies Λ(x, δ)Δ(δ] = 0.

Example 3.1 The one-form ω1 = dx(t) + x(t − 1)dx(t − 1) can be written in the
two following forms:

    ω1 = (1 + x(t − 1)δ)dx(t)   (3.2)

and

    ω1 = d(x(t) + (1/2)x(t − 1)²).   (3.3)

Equation (3.2) suggests that the given one-form is just weakly integrable; however,
(3.3) shows that it is even strongly integrable.
Instead, the one-form ω2 = dx1(t) + x2(t)dx1(t − 1) = (1 + x2(t)δ)dx1(t) is weakly
integrable, but not strongly integrable, because the polynomial 1 + x2(t)δ is not
invertible.
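Strong integrability of ω1 can be confirmed with a short symbolic check: treating x(t) and x(t − 1) as independent coordinates, the gradient of the candidate function of (3.3) must reproduce the coefficients of the one-form. A sketch, with our own names x0, x1 standing for x(t), x(t − 1):

```python
import sympy as sp

x0, x1 = sp.symbols('x0 x1')            # x(t), x(t-1) as independent coordinates

phi = x0 + sp.Rational(1, 2) * x1**2    # candidate function from (3.3)
dphi = (sp.diff(phi, x0), sp.diff(phi, x1))

omega1 = (sp.Integer(1), x1)            # omega1 = 1*dx(t) + x(t-1)*dx(t-1)
assert dphi == omega1
```

For ω2 the analogous check fails: in the coordinates x1(t), x1(t − 1), x2(t), exactness would require a function with ∂φ/∂x1(t − 1) = x2(t) and ∂φ/∂x2(t) = 0, which contradicts the equality of mixed partial derivatives.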

3.2 Integrability of a Right-Submodule

The following section is devoted to analyzing in more detail the concept of integrability
of a right-submodule. As underlined at the beginning of this chapter,
differently from the delay-free case, in this context it will be necessary to introduce
a more general definition of integrability, namely, p-integrability. Another concept
strictly linked to the integrability problem is that of involutivity. Such a notion is also
introduced in this context, though it will be shown that it is not a straightforward
generalization to the delay context of the standard definitions. The main features of
delay systems, in fact, fully characterize these topics.
Let us now consider the right-submodule

    Δ(δ] = span_{K*(δ]} {r1(x, δ), …, rj(x, δ)}   (3.4)

of rank j, with the polynomial vector ri(x, δ) = Σ_{ℓ=0}^{s̄} ri^ℓ(x) δ^ℓ ∈ K*^n(δ]. By
assumption, ri^{s̄+ℓ} = 0 ∀ℓ > 0; by convention, ri^{−k} = 0 ∀k > 0.
As we have already underlined, integrating Δ(δ] consists in the computation of a
set of n − j exact differentials dλμ(x) = Λμ(x, δ)dx[0](p), independent over K*(δ],
which define a basis for the left-kernel of Δ(δ].
As is immediately evident, one key point stands in the computation of the
correct p. We will thus talk of p-integrability of a right-submodule, which is defined
as follows.

Definition 3.1 (p-integrability of a Right-Submodule) The right-submodule

    Δ(δ] = span_{K*(δ]} {r1(x, δ), …, rj(x, δ)}

of rank j is said to be p-integrable if there exist n − j independent exact differentials

    dλμ(x) = Λμ(x, δ)dx[0](p),   μ ∈ [1, n − j],

such that the dλμ(x)'s lie in the left-kernel of Δ(δ], that is, dλμ(x)ri(x, δ) = 0 for
i ∈ [1, j] and μ ∈ [1, n − j], and any other exact differential dλ̄(x) ∈ Δ⊥(δ] can be
expressed as a linear combination over K*(δ] of such dλμ(x)'s.

Definition 3.2 (Integrability of a Right-Submodule) The right-submodule Δ(δ] of
rank j, given by (3.4), is said to be integrable if there exists some finite integer p
such that Δ(δ] is p-integrable.

Example 3.2 Consider, for instance,

    Δ(δ] = span_{K*(δ]} { (−x1(2)δ, x2(2))^T }.

According to the above definition, Δ(δ] is 2-integrable, since

    dλ = d(x1(2)x2(1)) = (x2(1), x1(2)δ) dx(2) ⊥ Δ(δ].

How to check the existence of such a solution and how to compute it is the topic
of the present chapter.

To this end, starting from the definitions of Generalized Lie Bracket, Lie Bracket, and
Polynomial Lie Bracket given in Chap. 2, the notions of Involutivity and Involutive
Closure of a right-submodule are introduced next. They represent the nontrivial
generalization of the standard definitions used in the delay-free context, which can
be recovered as a special case. These definitions play a fundamental role in the
integrability conditions.

3.2.1 Involutivity of a Right-Submodule Versus its Integrability

As already recalled, in order to deal with the integrability of right-submodules, the
concept of involutivity must be defined.

Definition 3.3 (Involutivity) Consider the right-submodule

    Δ(δ] = span_{K*(δ]} {r1(x, δ), …, rj(x, δ)}

of rank j, with ri(x, δ) = Σ_{l=0}^{s} ri^l(x[si,s]) δ^l, and let Δc(δ] be its right closure. Then
Δ(δ] is said to be involutive if, for any pair of indices i, ℓ ∈ [1, j], the Lie Bracket
[ri(x, δ), rℓ(x, δ)] satisfies

    span_{K*(δ]} {[ri(x, δ), rℓ(x, δ)]} ⊆ Δc(δ].   (3.5)

Remark 3.1 Definition 3.3 includes as a special case the notion of involutivity of
a distribution. The main feature is that, starting from a given right-submodule, its
involutivity implies that a vector obtained through the Lie Bracket of two elements
of the submodule may not be a linear combination of the generators of the
given submodule, but it is a linear combination of the generators of its right closure.
For finite dimensional delay-free systems, distributions are closed by definition, so
there is no such difference.
The definition of involutivity of a submodule is crucial for the integrability problem,
as enlightened in the next theorem.
Theorem 3.1 The right-submodule

    Δ(δ] = span_{K*(δ]} {r1(x, δ), …, rj(x, δ)}

of rank j is 0-integrable if and only if it is involutive and its left-annihilator is causal.

Hereafter, the proof is reported in order to make the reader familiar with the
Polynomial Lie Bracket. The necessity part simply shows that if the left-annihilator
of Δ(δ] is integrable, then all the Polynomial Lie Brackets of the vectors in
Δ(δ] must necessarily be in Δ(δ]. The sufficiency part, instead, starts by associating
a finite involutive distribution to Δ(δ], and then shows that its integrability allows
us to compute the exact one-forms that span the left-annihilator of Δ(δ].
Proof Necessity. Assume that there exist n − j causal exact differentials
dλi(x) = Λi(x, δ)dx[0], independent over K*(δ], which are in Δ⊥(δ]. Let ρ denote
the maximum between the delay in the state variables and the degree in δ. Then

    Λμ(x[ρ], δ) rℓ(x, δ) = 0,   ∀μ ∈ [1, n − j], ∀ℓ ∈ [1, j].   (3.6)

The time derivative of (3.6) along Rq(x[s1,s], ∇) yields, ∀μ ∈ [1, n − j], ∀ℓ ∈ [1, j],

    Λ̇μ(x, δ)|_{ẋ[0]=Rq(x,∇)} rℓ(x, δ) + Λμ(x, δ) ṙℓ(x, δ)|_{ẋ[0]=Rq(x,∇)} = 0.

Multiplying on the right by δ^{s1}, one gets

    Σ_{i,k=0}^{ρ} (∂Λμ^i(x)/∂x(−k)) Rq(x(−k), ∇(−k)) δ^i rℓ(x, δ) δ^{s1} + Λμ(x, δ) ṙℓ(x, δ)|_{ẋ[0]=Rq(x,∇)} δ^{s1} = 0,

that is, recalling that ∂Λμ^i(x)/∂x(−k) = ∂Λμ^k(x)/∂x(−i),

    Σ_{i,k=0}^{ρ} (∂Λμ^k(x)/∂x(−i)) Rq(x(−k), ∇(−k)) δ^i rℓ(x, δ) δ^{s1}
    + Λμ(x, δ) Σ_{k=0}^{s+s1} (∂Rq(x, ∇)/∂x(s1 − k)) δ^k rℓ(x(s1), δ) = −Λμ(x, δ)[Rq(x, ∇), rℓ(x, δ)].   (3.7)

Moreover, since λμ(x) is causal, then ∂Λμ^k(x)/∂x(s1 − i) = 0 for i ∈ [0, s1 − 1]; since
Λμ(x, δ) rq(x, δ) = 0, then also Σ_{k=0}^{s} Λμ^k(x) Rq(x(−k), ∇(−k)) = 0, so that, for
i ∈ [0, s + s1],

    Σ_{k=0}^{s} (∂Λμ^k(x)/∂x(−i)) Rq(x(−k), ∇(−k)) + Σ_{k=0}^{s} Λμ^k(x) (∂Rq(x(−k), ∇(−k))/∂x(−i)) = 0.

It follows, through standard computations, that

    Σ_{i=0}^{s+s1} Σ_{k=0}^{s} (∂Λμ^k(x)/∂x(s1 − i)) Rq(x(−k), ∇(−k)) δ^i = −Σ_{i=0}^{s+s1} Λμ(x, δ) (∂Rq(x, ∇)/∂x(s1 − i)),

which, substituted in (3.7), leads to

    Λμ(x, δ)[Rq(x, ∇), rℓ(x, δ)] = 0,   ∀∇.

Since the previous relation must be satisfied ∀μ ∈ [1, n − j] and ∀ℓ, q ∈ [1, j],
and recalling the link between the Polynomial Lie Bracket and the Lie Bracket
highlighted in Eq. (2.24), then necessarily Δ(δ] must be involutive.
Sufficiency. Let ω(x[ŝ], δ) = (ω1^T(x[ŝ], δ), …, ω_{n−j}^T(x[ŝ], δ))^T be the left-annihilator
of (r1(x[s1,s], δ), …, rj(x[sk,s], δ)). Let s̄ = max{s1, …, sk} and
ρ = max{ŝ, deg(ω(x, δ))}, that is, for k ∈ [1, n − j], ωk(x, δ) = Σ_{ℓ=0}^{ρ} ωk^ℓ(x[ρ]) δ^ℓ.
Set Ω = (0, …, 0, ω^0(x[ρ]), …, ω^ρ(x[ρ]), 0, …, 0), where ω^0 is preceded by s̄ 0-blocks,
and set Δi := Δ[s̄,i+s] ⊂ span{∂/∂x[0](s̄), …, ∂/∂x[0](−i − s)} as

    Δi = span_{K*} ⎛ I_{ns̄}   ∗        ∗          0   ···   0     ⎞
                   ⎜  0     r^0(x)    ···       r^s(x)            ⎟
                   ⎜  ⋮        ⋱                    ⋱             ⎟   (3.8)
                   ⎜  ⋮     0    r^0(x(−i))   ···   r^s(x(−i))    ⎟
                   ⎝  0     0    ···    0     ···   0    I_{ns}   ⎠

where the central block of columns, spanned by the shifted vectors r^0, …, r^s, is denoted by Δi^0.

By assumption, ω(x, δ) is causal and, for any two vector fields τℓ ∈ Δi^0, ℓ = 1, 2,
i ≥ ρ, one has Ωτℓ = 0 and Ω[τ1, τ2] = 0. Moreover, since i ≥ ρ, Ω ∂/∂xℓ(−i − p) = 0, ∀ℓ ∈
[1, n], ∀p ∈ [1, s]. It follows that Ω[τ1, ∂/∂xℓ(−i − p)] = 0, since ∂(Ωτ1)/∂xℓ(−i − p) = Ω ∂τ1/∂xℓ(−i − p)
= 0. Analogously, since Ω is causal, then for any p ∈ [1, s̄], ∂(Ωτ1)/∂xℓ(+p) = Ω ∂τ1/∂xℓ(+p) = 0,
which shows that Ω[τ1, ∂/∂xℓ(+p)] = 0, so that Ω ⊥ Δ̄i. As a consequence, there exist
at least n − j causal exact differentials, independent over K*, which lie in the left-annihilator
of Δ̄i. It remains to show that there are also n − j causal exact differentials,
independent over K*(δ], which lie in the left-annihilator of Δ(δ]. This follows
immediately by noting that if dλ1, …, dλμ, μ ≤ n − j, is a basis for Δ⊥(δ], since
Ω is 0-integrable, then ωi(x, δ)dx[0] = Σ_{i=1}^{μ} αi(x, δ) dλi. Since the ωi(x, δ)dx[0]'s
are n − j and by assumption they are independent over K*(δ], then necessarily
μ = n − j.
A direct consequence of the proof of Theorem 3.1 is the definition of an upper bound
on the maximum delay appearing in the exact differentials which generate a basis
for the left-annihilator of Δ(δ]. This is pointed out in the next corollary.
for the left-annihilator of Δ(δ]. This is pointed out in the next corollary.
Corollary 3.1 Let the right-submodule

    Δ(δ] = span_{K*(δ]} {r1(x, δ), …, rj(x, δ)}

of rank j, with ri(x, δ) = Σ_{l=0}^{s} ri^l(x[si,s]) δ^l, be 0-integrable. Then the maximum delay
which characterizes the exact differentials generating the left-annihilator of Δ(δ] is
not greater than j s̄ + s.
Proof The proof of Theorem 3.1 shows that if ρ is the maximum between the
degree in δ and the largest delay affecting the state variables in the left-annihilator
Ω(x[p̄], δ) of Δ(δ], then the exact differentials are affected by a maximum delay
which is not greater than ρ. On the other hand, according to Lemma 1.1 in Chap. 1,
deg(Ω(x, δ)) ≤ j s̄, whereas p̄ ≤ s + j s̄, which shows that ρ ≤ j s̄ + s.
The result stated by Theorem 3.1, which is itself an important achievement, also plays
a key role in proving a series of fundamental results which are highlighted
hereafter.

3.2.2 Smallest 0-Integrable Right-Submodule Containing Δ(δ]

If the given submodule Δ(δ] is not 0-integrable, one may be interested in computing
the smallest 0-integrable right-submodule containing it. This in turn will allow one to
identify the maximum number of independent exact one-forms which stand in the
left-annihilator of Δ(δ]. The following definition needs to be introduced, which
generalizes the notion of involutive closure of a distribution to the present context.

Definition 3.4 (Involutive closure) Given the right-submodule

    Δ(δ] = span_{K*(δ]} {r1(x, δ), …, rj(x, δ)}

of rank j, with ri(x, δ) = Σ_{l=0}^{s} ri^l(x[si,s]) δ^l, let Δc(δ] be its right closure. Then its
involutive closure Δ̄(δ] is the smallest submodule which contains Δc(δ] and which
is involutive.
Accordingly, the following result can be stated.
Theorem 3.2 Consider the right-submodule

    Δ(δ] = span_{K*(δ]} {r1(x, δ), …, rj(x, δ)}

of rank j, let Δ̄(δ] be its involutive closure, and assume that the left-annihilator of
Δ̄(δ] is causal. Then Δ̄(δ] is the smallest 0-integrable right-submodule containing
Δ(δ].
Example 3.3 Let us consider

    Δ(δ] = span_{K*(δ]} { (x1x1(1)δ², −x2x1(2)δ − x1δ²)^T }.

Δ(δ] is not closed. Clearly, its right closure is given by

    Δc(δ] = span_{K*(δ]} { (x1x1(1)δ, −x2x1(2) − x1δ)^T }.

To check the involutivity, we set

    r(x, δ) = (x1x1(1)δ, −x2x1(2) − x1δ)^T,   R(x, ∇) = (x1x1(1)∇(−1), −x2x1(2)∇(0) − x1∇(−1))^T.

We then compute the Polynomial Lie Bracket

    [R(x, ∇), r(x, δ)] = ṙ(x, δ)|_{ẋ[0]=R(x,∇)} δ² − Σ_{ℓ=0}^{2} (∂R(x, ∇)/∂x(2 − ℓ)) δ^ℓ r(x(2), δ).

Carrying out the computation, all the resulting terms can be collected as

    [R(x, ∇), r(x, δ)] = (x1x1(1)δ, −x2x1(2) − x1δ)^T x1(3) (δ²∇(3) − δ∇(1))
                       = r(x, δ) x1(3) (δ²∇(3) − δ∇(1)).

Thus, Δc(δ] is involutive and is the smallest 0-integrable right-submodule containing
Δ(δ].

3.2.3 p-Integrability

The approach presented in this book allows us to state a more general result concerning
p-integrability, which is independent of any control system. This is done
hereafter.
Theorem 3.3 The right-submodule

    Δ(δ] = span_{K*(δ]} {r1(x, δ), …, rj(x, δ)}

of rank j is p-integrable if and only if

    Δ̂(δ] = Δ(x(−p), δ] = span_{K*(δ]} {r1(x(−p), δ), …, rj(x(−p), δ)}

is 0-integrable.

Proof Assume that Δ(δ] is p-integrable. Then there exist n − j independent exact
differentials dλi(x) = Λi(x, δ)dx[0](p) such that, denoting by Λ(x, δ) =
(Λ1^T(x, δ), …, Λ_{n−j}^T(x, δ))^T, then Λ(x, δ)Δ(δ] = 0. Consequently, for i ∈ [1, j],

    δ^p Λ(x, δ) ri(x, δ) = Λ(x(−p), δ) ri(x(−p), δ) δ^p = 0,

that is, Λ(x(−p), δ)Δ̂(δ] = 0. On the other hand,

    δ^p Λ(x, δ)dx[0](p) = Λ(x(−p), δ)dx[0],

which thus proves that Δ̂(δ] is 0-integrable. Conversely, if Δ̂(δ] is 0-integrable, there
exist n − j exact differentials dλ̄i(x) = Λ̄i(x, δ)dx[0] such that
Λ̄(x, δ)Δ̂(δ] = 0. As a consequence, also Λ̄(x, δ)Δ̂(δ] δ^p = 0, which shows that
Δ̂(x(p), δ] = Δ(δ] is p-integrable.

Example 3.4 Let us consider again the submodule

$$ \Delta(\delta] = \operatorname{span}_{\mathcal K^*(\delta]}\begin{pmatrix} -x_1(2)\delta\\ x_2(2)\end{pmatrix} $$

of Example 3.2. We first check whether it is 0-integrable. To this end, we set

$$ R(x,\zeta)=\begin{pmatrix} -x_1(2)\zeta(-1)\\ x_2(2)\zeta(0)\end{pmatrix} $$

and compute the Polynomial Lie Bracket

$$ [R(x,\zeta),r(x,\delta)] = \dot r(x,\delta)\Big|_{\dot x_{[0]}=R(x,\zeta)}\,\delta^2 - \frac{\partial R(x,\zeta)}{\partial x(2)}\, r(x(2),\delta) = \begin{pmatrix} x_1(4)\zeta(1)\delta\\ x_2(4)\zeta(2)\end{pmatrix}\delta^2 - \begin{pmatrix} \zeta(-1)x_1(4)\delta\\ \zeta(0)x_2(4)\end{pmatrix} = \begin{pmatrix} x_1(4)\delta\\ x_2(4)\end{pmatrix}\left(\delta^2\zeta(4)-\zeta(0)\right), $$

which is not in Δ(δ]. To check 2-integrability, we have to consider

$$ \hat\Delta(\delta] = \Delta(x(-2),\delta] = \operatorname{span}_{\mathcal K^*(\delta]}\begin{pmatrix} -x_1\delta\\ x_2\end{pmatrix}. $$

We set

$$ R(x(-2),\zeta)=\begin{pmatrix} -x_1\zeta(-1)\\ x_2\zeta(0)\end{pmatrix} $$

and compute the Polynomial Lie Bracket

$$ [R(x(-2),\zeta),\, r(x(-2),\delta)] = \dot r(x(-2),\delta)\Big|_{\dot x_{[0]}=R(x(-2),\zeta)} - \frac{\partial R(x(-2),\zeta)}{\partial x(0)}\, r(x(-2),\delta) = \begin{pmatrix} x_1(0)\zeta(-1)\delta\\ x_2(0)\zeta(0)\end{pmatrix} - \begin{pmatrix} \zeta(-1)x_1(0)\delta\\ \zeta(0)x_2(0)\end{pmatrix} = 0, $$

which shows that Δ(δ] is 2-integrable. In fact, one gets that

$$ \mathrm d\lambda = \mathrm d\big(x_1(2)x_2(1)\big) = \big(x_2(1)\ \ x_1(2)\delta\big)\,\mathrm dx(2) $$

is in the left-kernel of Δ(δ].
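The annihilation claim above can be machine-checked. The sketch below is our own tooling (an assumption, not part of the book): it implements the non-commutative product of δ-polynomials, using the rule δ a(x(k)) = a(x(k−1)) δ, with shift-indexed symbols such as `x1_2` for x1(t+2) and `x1_m1` for x1(t−1), and verifies that the row (x2(1), x1(2)δ) annihilates the generator (−x1(2)δ, x2(2))^T:

```python
import sympy as sp

def sym(name, k):
    # shift-indexed symbol: sym("x1", -1) stands for x1(t-1), sym("x1", 2) for x1(t+2)
    return sp.Symbol(f"{name}_{k}".replace("-", "m"))

def shift(expr, j):
    # apply delta^j to a coefficient: every x(k) becomes x(k-j)
    expr = sp.sympify(expr)
    subs = {}
    for s in expr.free_symbols:
        base, _, idx = s.name.partition("_")
        k = -int(idx[1:]) if idx.startswith("m") else int(idx)
        subs[s] = sym(base, k - j)
    return expr.subs(subs, simultaneous=True)

def mul(p, q):
    # product of delta-polynomials {power: coeff}, using delta * a = shift(a, 1) * delta
    out = {}
    for i, a in p.items():
        for j, b in q.items():
            out[i + j] = sp.expand(out.get(i + j, 0) + a * shift(b, i))
    return out

# dlambda = (x2(1)  x1(2) delta) and the generator r = (-x1(2) delta, x2(2))^T
Lam = [{0: sym("x2", 1)}, {1: sym("x1", 2)}]
r = [{1: -sym("x1", 2)}, {0: sym("x2", 2)}]
pairing = mul(Lam[0], r[0])
for k, c in mul(Lam[1], r[1]).items():
    pairing[k] = sp.expand(pairing.get(k, 0) + c)
print(pairing)  # every coefficient reduces to 0
```

The crossed term x1(2)δ x2(2) becomes x1(2)x2(1)δ under the shift rule, which cancels the other term exactly.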

3.2.4 Bicausal Change of Coordinates

A major problem in control theory lies in the possibility of describing the given system in different coordinates which may highlight particular structural properties. As already underlined in Chap. 1, in the delay context it is fundamental to be able to compute bicausal changes of coordinates, that is, diffeomorphisms which are causal and admit a causal inverse, as defined by Definition 1.1.
As will be shown in this section, the previous results on the 0-integrability of a right-submodule also have important outcomes in the definition of a bicausal change of coordinates. Lemma 3.1 is instrumental for determining whether or not a given set of one-forms can be used to obtain a bicausal change of coordinates.

Lemma 3.1 Given n − k independent functions on ℝⁿ, λ_i(x_[α]), i ∈ [1, n − k], such that

$$ \operatorname{span}_{\mathcal K(\delta]}\{\mathrm d\lambda_1,\ldots,\mathrm d\lambda_{n-k}\} $$

is closed and its right-annihilator is causal, there exists a dθ_1(x_[ᾱ]) independent of the dλ_i(x_[α])'s, i ∈ [1, n − k], over K(δ] and such that

$$ \operatorname{span}_{\mathcal K(\delta]}\{\mathrm d\lambda_1,\ldots,\mathrm d\lambda_{n-k},\mathrm d\theta_1\} $$

is closed and its right-annihilator is causal.

The proof, which is omitted here, can be found in Califano and Moog (2017); starting from it, the following result is immediate.

Theorem 3.4 Given k functions λ_i(x_[α]), i ∈ [1, k], whose differentials are independent over K(δ], there exist n − k functions θ_j(x_[ᾱ]), j ∈ [1, n − k], such that

$$ \operatorname{span}_{\mathcal K(\delta]}\{\mathrm d\lambda_1,\ldots,\mathrm d\lambda_k,\mathrm d\theta_1,\ldots,\mathrm d\theta_{n-k}\} \equiv \operatorname{span}_{\mathcal K(\delta]}\{\mathrm dx_{[0]}\} $$

if and only if

$$ \operatorname{span}_{\mathcal K(\delta]}\{\mathrm d\lambda_1,\ldots,\mathrm d\lambda_k\} $$

is closed and its right-annihilator is causal. As a consequence, dz_[0] = (dλ_1^T, …, dλ_k^T, dθ_1^T, …, dθ_{n−k}^T)^T defines a bicausal change of coordinates.

Proof If the k exact differentials dλ_i(x) can be completed to span all of dx_[0] over K(δ], then necessarily span_{K(δ]}{dλ_1, …, dλ_k} must be closed and its right-annihilator must be causal. Conversely, due to Lemma 3.1, if span_{K(δ]}{dλ_1, …, dλ_k} is closed and its right-annihilator is causal, then one can compute an exact differential dθ_1, independent over K(δ] of the dλ_i's, such that span_{K(δ]}{dλ_1, …, dλ_k, dθ_1} is closed and its right-annihilator is causal. Iterating, one gets the result.

Example 3.5 Consider the continuous-time system

$$ \begin{aligned} \dot x_1(t) &= x_2(t)+x_2(t-1)\\ \dot x_2(t) &= u(t)\\ \dot x_3(t) &= x_3(t)-x_1(t-1)\\ y(t) &= x_1(t). \end{aligned} $$

The output and its derivatives are given by

$$ y(t)=x_1(t),\qquad \dot y(t) = x_2(t)+x_2(t-1),\qquad \ddot y(t) = u(t)+u(t-1). $$

The functions y and ẏ could be used to define a change of coordinates if they satisfied the conditions of Lemma 3.1, that is, if

$$ Y = \operatorname{span}_{\mathcal K(\delta]}\{\mathrm dy,\mathrm d\dot y\} = \operatorname{span}_{\mathcal K(\delta]}\{\mathrm dx_{1,[0]},\ (1+\delta)\mathrm dx_{2,[0]}\} $$

were closed and its right-annihilator were causal. Unfortunately, while the right-annihilator of Y is causal, as it is given by (0, 0, 1)^T, Y itself is not closed: it is easily verified that its closure is

$$ Y_c = \operatorname{span}_{\mathcal K(\delta]}\{\mathrm dx_{1,[0]},\mathrm dx_{2,[0]}\}. $$

As a consequence, y and ẏ cannot be used directly to define a change of coordinates.

Example 3.6 Consider the function λ = x1(t − 1)x2(t) + x2²(t) on ℝ². In this case,

$$ L = \operatorname{span}_{\mathcal K(\delta]}\{x_2\delta\,\mathrm dx_{1,[0]} + (x_1(-1)+2x_2)\,\mathrm dx_{2,[0]}\} = \operatorname{span}_{\mathcal K(\delta]}\{\big(x_2\delta\ \ x_1(-1)+2x_2\big)\,\mathrm dx_{[0]}\}. $$

The right-annihilator is given by

$$ r(x,\delta) = \begin{pmatrix} -2x_2(1)-x_1\\ x_2\delta \end{pmatrix}, $$

which is not causal. Thus, the function λ cannot be used as a basis for a change of coordinates.
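This annihilation can be double-checked with a few lines of symbolic algebra (a sketch under our own naming convention: `x1_m1` stands for x1(t−1), `x2_1` for the forward value x2(t+1)); only the coefficient of δ appears in the pairing, since δ a(x(k)) = a(x(k−1)) δ:

```python
import sympy as sp

# x1(t), x1(t-1), x2(t), x2(t+1)
x1_0, x1_m1, x2_0, x2_1 = sp.symbols("x1_0 x1_m1 x2_0 x2_1")

# dlambda = (x2 delta, x1(-1) + 2 x2) and r = (-2 x2(1) - x1, x2 delta)^T
# x2 delta acting on -2 x2(1) - x1 shifts the coefficient back by one step:
term1 = x2_0 * (-2 * x2_0 - x1_m1)
# (x1(-1) + 2 x2) times the coefficient of x2 delta:
term2 = (x1_m1 + 2 * x2_0) * x2_0
pairing = sp.expand(term1 + term2)
print(pairing)  # 0: dlambda indeed annihilates r
# non-causality is visible in r itself: its first entry involves the forward value x2(t+1)
```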

3.3 Integrability of a Left-Submodule

We now address the problem of integrability working directly on one-forms. In the present paragraph, a set of one-forms {ω1, …, ωk} independent over K*(δ] is considered. Considering one-forms as elements of M naturally leads to two different notions of integrability. Instead, if one-forms are considered as elements of the vector space E, there is one single notion of integrability.
In fact, as in the delay-free case, if the one-forms {ω1, …, ωk} are considered over K*, then they are said to be integrable if there exist an invertible matrix A ∈ K*^{k×k} and functions ϕ = (ϕ1, …, ϕk)^T such that ω = A dϕ. The full rank of A guarantees the invertibility of A, since K* is a field. Instead, if the one-forms {ω1, …, ωk} are viewed as elements of the module M, then the matrix A belongs to K*^{k×k}(δ] instead of K*^{k×k}. Since A(δ) may be of full rank but not unimodular, it is necessary to distinguish two cases. Accordingly, one has the following two definitions of integrability.
Definition 3.5 A set of k one-forms {ω1, …, ωk}, independent over K*(δ], is said to be strongly integrable if there exist k independent functions {ϕ1, …, ϕk} such that

$$ \operatorname{span}_{\mathcal K^*(\delta]}\{\omega_1,\ldots,\omega_k\} = \operatorname{span}_{\mathcal K^*(\delta]}\{\mathrm d\varphi_1,\ldots,\mathrm d\varphi_k\}. $$

A set of k one-forms {ω1, …, ωk}, independent over K(δ], is said to be weakly integrable if there exist k independent functions {ϕ1, …, ϕk} such that

$$ \operatorname{span}_{\mathcal K^*(\delta]}\{\omega_1,\ldots,\omega_k\} \subseteq \operatorname{span}_{\mathcal K^*(\delta]}\{\mathrm d\varphi_1,\ldots,\mathrm d\varphi_k\}. $$

If the one-forms ω = (ω_1^T, …, ω_k^T)^T are strongly (respectively weakly) integrable, then the left-submodule span_{K*(δ]}{ω1, …, ωk} is said to be strongly (respectively weakly) integrable.
Clearly, strong integrability yields weak integrability. Also, the one-forms ω are weakly integrable if and only if there exist a full-rank matrix A(δ) ∈ K(δ]^{k×k} and functions ϕ = (ϕ1, …, ϕk)^T such that ω = A(δ)dϕ. If in addition the matrix A(δ) can be chosen to be unimodular, then the one-forms ω are also strongly integrable.
Remark 3.2 It should be noted that the weak integrability of a closed left-submodule span_{K(δ]}{ω1, …, ωk} always implies strong integrability. As a consequence, the two notions of strong and weak integrability coincide in the case of delay-free one-forms.
The integrability of a set of k one-forms {ω1, …, ωk} is tested thanks to the so-called Derived Flag Algorithm (DFA). Starting from a given I0, the algorithm computes

$$ I_i = \operatorname{span}_{\mathcal K}\{\omega\in I_{i-1}\ |\ \mathrm d\omega = 0\ \operatorname{mod} I_{i-1}\}. \qquad (3.9) $$

The sequence (3.9) converges as it defines a strictly decreasing sequence of vector spaces I_i, and by the standard Frobenius theorem the limit I_∞ has an exact basis, which represents the largest integrable codistribution contained in I0.
In order to define I0, one has to note that when considering a set of k one-forms {ω1, …, ωk}, some shifts of the ω_i are required for integration. It follows that the initialization

$$ I_0^p = \operatorname{span}_{\mathcal K}\{\omega_1,\ldots,\omega_k,\ \omega_1(-1),\ldots,\omega_k(-1),\ \ldots,\ \omega_1(-p),\ldots,\omega_k(-p)\} \qquad (3.10) $$

allows one to compute the smallest number of time shifts of the given one-forms required for the maximal integration of the submodule. More precisely, the sequence I_i^p defined by (3.9) converges to an integrable vector space

$$ I_\infty^p = \operatorname{span}_{\mathcal K}\{\mathrm d\varphi_1^p,\ldots,\mathrm d\varphi_{\gamma_p}^p\} \qquad (3.11) $$

for some γ_p ≥ 0. By definition, dϕ_i^p ∈ span_{K(δ]}{ω1, …, ωk} for i = 1, …, γ_p and p ≥ 0. The exact one-forms dϕ_i^p, i = 1, …, γ_p, are independent over K, but may not be independent over K(δ]. A basis for span_{K(δ]}{dϕ_1^p, …, dϕ_{γ_p}^p} is obtained by computing a basis for

$$ I_\infty^0 \cup \bigcup_{i=1}^{p} I_\infty^{i}\ \operatorname{mod}\big(I_\infty^{i-1},\ \delta I_\infty^{i-1}\big), $$

as I_∞^i + δI_∞^i ⊂ I_∞^{i+1}.

Remark 3.3 A different initialization of the derived flag algorithm is

$$ \tilde I_0^p = \operatorname{span}_{\mathcal K}\{\operatorname{span}_{\mathcal K(\delta]}\{\omega_1,\ldots,\omega_k\}\cap \operatorname{span}_{\mathcal K}\{\mathrm dx(t),\ldots,\mathrm dx(t-p)\}\}, \qquad (3.12) $$

which allows one to compute, for each p ≥ 0, the exact differentials contained in the given submodule which depend on x(t), …, x(t − p) only. Both initializations lead the algorithm to converge toward the same integrable submodule over K(δ], but through different steps, as shown in the next example.

Example 3.7 Consider span_{K(δ]}{dx(t − 2)}. On one hand, the initialization (3.10) is completed for p = 0, as no time shift of dx(t − 2) is required for its integration. On the other hand, initialization (3.12) yields a zero limit for p = 0 and p = 1, as the exact differential involves larger delays than x(t) and x(t − 1). The final result is obtained for p = 2.

Assume that the maximum delay that appears in {ω1, …, ωk} (either in the coefficients or in the differentials) is s. The necessary and sufficient condition for strong integrability of the one-forms {ω1, …, ωk} is given by the following theorem in terms of the limit I_∞^p.

Theorem 3.5 A set of one-forms {ω1, …, ωk}, independent over K(δ], is strongly integrable if and only if there exists an index p ≤ s(k − 1) such that, starting from I_0^p defined by (3.10), the derived flag algorithm (3.9) converges to I_∞^p given by (3.11) with

$$ \omega_i \in \operatorname{span}_{\mathcal K(\delta]}\{\mathrm d\varphi_1^p,\ldots,\mathrm d\varphi_{\gamma_p}^p\} \qquad (3.13) $$

for i = 1, …, k.

Proof Necessity. If a set of one-forms {ω1, …, ωk}, independent over K(δ], is strongly integrable, then there exist k functions ϕ_i, i = 1, …, k, such that span_{K(δ]}{ω1, …, ωk} = span_{K(δ]}{dϕ1, …, dϕk}. Thus, ω_i ∈ span_{K(δ]}{dϕ1, …, dϕk} and

$$ \mathrm d\varphi_i \in \operatorname{span}_{\mathcal K}\{\omega_1,\ldots,\omega_k,\ldots,\omega_1(-p),\ldots,\omega_k(-p)\} $$

for i = 1, …, k and some p ≥ 0. Clearly, dϕ_i ∈ I_∞^p and Condition (3.13) is satisfied.

It remains to show that p ≤ s(k − 1). Note that there exist infinitely many pairs (A(δ), ϕ) that satisfy ω = A(δ)dϕ. Since the degree of unimodular matrices A(δ) has a lower bound, one can find a pair (A(δ), ϕ) where the degree of the matrix A(δ) is minimal among all possible pairs. Let A(δ) be such a unimodular matrix for some functions ϕ = (ϕ1, …, ϕk)^T. Note that A(δ) and ϕ are not unique.
We show that the degree of A(δ) is less than or equal to s. By contradiction, assume that the degree of A(δ) is larger than s, for example, s + 1. Then for some i

$$ \omega_i = a_1^i(\delta)\,\mathrm d\varphi_1 + \cdots + a_k^i(\delta)\,\mathrm d\varphi_k, \qquad (3.14) $$

where a_j^i(δ) ∈ K(δ], j = 1, …, k, and at least one polynomial a_j^i(δ) has degree s + 1. Let a_j^i(δ) = Σ_{ℓ=0}^{s+1} a_{j,ℓ}^i δ^ℓ, j = 1, …, k. From (3.14), one gets

$$ \omega_i = \sum_{j=1}^{k}\sum_{\ell=0}^{s+1} a_{j,\ell}^i\,\mathrm d\varphi_j(-\ell), \qquad (3.15) $$

where at least one coefficient a_{j,s+1}^i ∈ K is nonzero. For simplicity, assume that a_{1,s+1}^i ≠ 0 and a_{γ,s+1}^i = 0 for γ = 2, …, k. We have assumed that the maximum delay in ω_i is s, but the maximum delay in dϕ1(−s−1) is at least s + 1.
Note that dϕ1, …, dϕ1(−s−1), …, dϕk(−s−1) are independent over K. Therefore, to eliminate dϕ1(−s−1) from (3.15), one must have

$$ \mathrm d\varphi_1(-s-1) = \sum_{j=1}^{k} b_j(\delta)\,\mathrm d\varphi_j + \bar\omega \qquad (3.16) $$

for some coefficients b_j(δ) ∈ K(δ], with ℓ_j = deg b_j(δ) ≤ s and the one-form ω̄ ∈ span_K{dx, dx(−1), …, dx(−s)}. Let ℓ := min{ℓ_j}. For clarity, let ℓ = ℓ_2 and b_2(δ) = δ^ℓ. We show that ω̄ can be chosen such that it is integrable. By contradiction, assume that ω̄ cannot be chosen integrable. Then the coefficients of ω̄ must depend on delays larger than s. Since ω̄ is not integrable, the coefficients of a_{1,s+1}^i ω̄ also depend on delays larger than s. Now, substitute a_{1,s+1}^i dϕ1(−s−1) into (3.15). One gets that ω_i depends on a_{1,s+1}^i ω̄ and thus also on delays larger than s. This is a contradiction, and thus ω̄ can be chosen integrable.
Let ω̄ = a dφ(−ℓ) for some a, φ ∈ K. Then span_{K(δ]}{dϕ1, …, dϕk} = span_{K(δ]}{dϕ1, dφ, dϕ3, …, dϕk}, and there exists a unimodular matrix Ā(δ) with smaller degree than A(δ), and functions ϕ̄ = (ϕ1, φ, ϕ3, …, ϕk)^T, such that ω = Ā(δ)dϕ̄, which leads to a contradiction. Thus, the degree of A(δ) must be less than or equal to s, and the degree of A^{−1}(δ) is less than or equal to s(k − 1), i.e., p ≤ s(k − 1). The general case requires a more technical proof.

Sufficiency. Let I_∞^p = span_K{dϕ}, where p ≤ s(k − 1). By construction, I_∞^p ⊂ span_{K(δ]}{ω1, …, ωk} and, by (3.13), ω_i ∈ span_{K(δ]}{dϕ} for i = 1, …, k. Thus, span_{K(δ]}{ω1, …, ωk} = span_{K(δ]}{dϕ}.
Since I_∞^p ⊆ I_∞^{p+1} for any p ≥ 0, one can check condition (3.13) step by step, increasing the value of p at every step. When for some p = p̄ the condition (3.13) is satisfied, then it is satisfied for all p > p̄.
Given the set of one-forms {ω1, …, ωk}, independent over K(δ], the basis of the vector space I_∞^{s(k−1)} defines the basis for the largest integrable left-submodule contained in span_{K(δ]}{ω1, …, ωk}.

Lemma 3.2 A set of one-forms {ω1, …, ωk} is weakly integrable if and only if the left closure of the left-submodule generated by {ω1, …, ωk} is (strongly) integrable.

Proof Necessity. By the definitions of weak integrability and left closure, there exist functions ϕ = (ϕ1, …, ϕk)^T such that dϕ = A(δ)ω̄, where ω̄ is the basis of the closure of the left-submodule generated by {ω1, …, ωk}. Choose {dϕ1, …, dϕk} such that for i = 1, …, k

$$ \mathrm d\varphi_i \neq a\,\mathrm d\phi + \sum_{j=1,\, j\neq i}^{k} b_j(\delta)\,\mathrm d\varphi_j \qquad (3.17) $$

for any φ ∈ K and b_j(δ) ∈ K(δ]. It remains to show that one can choose ϕ such that ω̄_i ∈ span_{K(δ]}{dϕ}.
By contradiction, assume that one cannot choose ϕ such that ω̄_i ∈ span_{K(δ]}{dϕ}. Then ω̄_k ∉ span_{K(δ]}{dϕ} and also ω̄_k(−j) ∉ span_{K(δ]}{dϕ1, …, dϕk} for j ≥ 1 and any ϕ. Indeed, if

$$ \bar\omega_k(-j) = \sum_i c_i(\delta)\,\mathrm d\varphi_i, \qquad (3.18) $$

then, since on the left-hand side of (3.18) everything is delayed at least j times, everything that is delayed less than j times on the right-hand side should cancel out. Therefore, one is able to find functions φ_i, ψ_i ∈ K, i = 1, …, k, such that dϕ_i = dφ_i + dψ_i and

$$ \sum_i c_i(\delta)\,\mathrm d\phi_i \in \operatorname{span}_{\mathcal K(\delta]}\{\mathrm dx(-j)\},\qquad \sum_i c_i(\delta)\,\mathrm d\psi_i = 0. $$

Now, because of (3.17), ψ_i = 0 and φ_i = ϕ_i for i = 1, …, k; thus δ^j ω̄_k = δ^j Σ_i c̄_i(δ) dϕ_i(+j), which yields ω̄_k = Σ_i c̄_i(δ) dϕ_i(+j). Clearly, the one-forms dϕ_i(+j) have to belong to span_{K(δ]}{ω̄}, because dϕ_i ∈ span_{K(δ]}{ω̄}. Now one has a contradiction, and therefore ω̄_k(−j) ∉ span_{K(δ]}{dϕ} for j ≥ 1. Then, by construction, span_{K(δ]}{dϕ1, …, dϕk} ⊂ span_{K(δ]}{ω1, …, ω_{k−1}}, which is impossible. Thus, the assumption that one cannot choose ϕ such that ω̄_i ∈ span_{K(δ]}{dϕ} must be wrong.

Sufficiency. Sufficiency follows directly from the definitions of strong and weak integrability.
Example 3.8 Consider the following one-forms:

$$ \begin{aligned} \omega_1 &= x_3(t-1)\,\mathrm dx_2(t) + x_2(t)\,\mathrm dx_3(t-1) + x_2(t-1)\,\mathrm dx_1(t-1)\\ \omega_2 &= x_3(t-2)\,\mathrm dx_2(t-1) + x_2(t-1)\,\mathrm dx_3(t-2) + \mathrm dx_1(t) + x_2(t-2)\,\mathrm dx_1(t-2). \end{aligned} \qquad (3.19) $$

One gets, for s(k − 1) = 2,

$$ I_\infty^2 = \operatorname{span}_{\mathcal K}\{\mathrm dx_1(t),\ \mathrm dx_1(t-1),\ \mathrm d(x_2(t)x_3(t-1))\}. $$

When one eliminates the basis elements which are dependent over K(δ], one gets that the rank of span_{K(δ]}{dx1(t), dx1(t−1), d(x2(t)x3(t−1))} is 2. To check condition (3.13), one has to check whether there exists a matrix A(δ) such that ω = A(δ)dϕ, where ω = (ω1, ω2)^T, ϕ = (ϕ1, ϕ2)^T, ϕ1 = x2(t)x3(t−1), and ϕ2 = x1(t). In fact, ω = A(δ)dϕ, where the unimodular matrix A(δ) is

$$ A(\delta) = \begin{pmatrix} 1 & x_2(t-1)\delta\\ \delta & 1+x_2(t-2)\delta^2 \end{pmatrix}. $$

Thus, the one-forms (3.19) are strongly integrable.
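Unimodularity of A(δ) can be verified mechanically. The sketch below is our own tooling, and the candidate polynomial inverse is our own hand computation (both assumptions, not taken from the book): it multiplies A(δ) by the candidate inverse in the non-commutative ring where δ a(x(k)) = a(x(k−1)) δ and checks that the product is the identity:

```python
import sympy as sp

def sym(name, k):
    return sp.Symbol(f"{name}_{k}".replace("-", "m"))

def shift(expr, j):
    # apply delta^j to a coefficient: every x(k) becomes x(k-j)
    expr = sp.sympify(expr)
    subs = {}
    for s in expr.free_symbols:
        base, _, idx = s.name.partition("_")
        k = -int(idx[1:]) if idx.startswith("m") else int(idx)
        subs[s] = sym(base, k - j)
    return expr.subs(subs, simultaneous=True)

def mul(p, q):
    # product of delta-polynomials {power: coeff}
    out = {}
    for i, a in p.items():
        for j, b in q.items():
            out[i + j] = sp.expand(out.get(i + j, 0) + a * shift(b, i))
    return out

def add(p, q):
    out = dict(p)
    for k, c in q.items():
        out[k] = sp.expand(out.get(k, 0) + c)
    return out

one, x2m1, x2m2 = sp.Integer(1), sym("x2", -1), sym("x2", -2)
A = [[{0: one}, {1: x2m1}], [{1: one}, {0: one, 2: x2m2}]]
# candidate inverse, found by hand (an assumption to be verified):
Ainv = [[{0: one, 2: x2m1}, {1: -x2m1}], [{1: -one}, {0: one}]]
prod = [[add(mul(A[i][0], Ainv[0][j]), mul(A[i][1], Ainv[1][j]))
         for j in range(2)] for i in range(2)]
ok = all(c == (1 if (i == j and k == 0) else 0)
         for i in range(2) for j in range(2) for k, c in prod[i][j].items())
print(ok)  # True: A(delta) has a polynomial inverse, hence is unimodular
```

Note that naive commutative determinant arguments do not apply here; the product has to be evaluated with the shift rule, which is exactly what the helper does.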


Example 3.9 Consider the following one-forms:

ω1 = dx2 (t)
ω2 = x4 (t − 1)dx1 (t) + x2 (t)dx2 (t − 1) + x1 (t)dx4 (t − 1)
ω3 = x3 (t)x4 (t)dx2 (t) + x2 (t)x4 (t)dx3 (t) (3.20)
+x3 (t − 1)dx2 (t − 1) + x2 (t − 1)dx3 (t − 1).

For s(k − 1) = 2:
2 = span {dx (t), d(x (t − 1)x (t)), dx (t − 1), dx (t − 2), d(x (t − 2)x (t − 1))}.
I∞ K 2 4 1 2 2 4 1

Now, ω1 ∈ I∞ 2
and ω2 ∈ I∞ 2
, but ω3 ∈/ I∞2
. Thus, the one-forms (3.20) are not
strongly integrable, and spanK(δ] {dx2 (t), d(x4 (t − 1)x1 (t))} is the largest integrable
left-submodule, contained in A = spanK(δ] {ω1 , ω2 , ω3 }.
Now, one can check whether the one-forms (3.20) are weakly integrable. For that, one has to compute the left closure of A and check whether it is strongly integrable. In practice, the left closure of a left-submodule A can be computed as the left-kernel of its right-kernel Δ. Thus, the right-kernel of A is Δ = span_{K(δ]}{q(δ)}, where q(δ) = (x1(t)δ, 0, 0, −x4(t))^T. The left-kernel of Δ is

$$ \operatorname{cl}_{\mathcal K(\delta]}(A) = \operatorname{span}_{\mathcal K(\delta]}\{\mathrm dx_2(t),\ \mathrm dx_3(t),\ \mathrm d(x_4(t-1)x_1(t))\}. $$

Therefore, the one-forms (3.20) are weakly integrable.
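That q(δ) is indeed in the right-kernel of A can be verified with the same δ-polynomial arithmetic (a sketch; the shift-indexed symbols such as `x4_m1` for x4(t−1) are our own naming convention, not the book's):

```python
import sympy as sp

def sym(name, k):
    return sp.Symbol(f"{name}_{k}".replace("-", "m"))

def shift(expr, j):
    # apply delta^j to a coefficient: every x(k) becomes x(k-j)
    expr = sp.sympify(expr)
    subs = {}
    for s in expr.free_symbols:
        base, _, idx = s.name.partition("_")
        k = -int(idx[1:]) if idx.startswith("m") else int(idx)
        subs[s] = sym(base, k - j)
    return expr.subs(subs, simultaneous=True)

def mul(p, q):
    out = {}
    for i, a in p.items():
        for j, b in q.items():
            out[i + j] = sp.expand(out.get(i + j, 0) + a * shift(b, i))
    return out

# rows of the one-forms (3.20) in the basis (dx1, dx2, dx3, dx4)
w1 = [{}, {0: sp.Integer(1)}, {}, {}]
w2 = [{0: sym("x4", -1)}, {1: sym("x2", 0)}, {}, {1: sym("x1", 0)}]
w3 = [{}, {0: sym("x3", 0) * sym("x4", 0), 1: sym("x3", -1)},
      {0: sym("x2", 0) * sym("x4", 0), 1: sym("x2", -1)}, {}]
q = [{1: sym("x1", 0)}, {}, {}, {0: -sym("x4", 0)}]  # q(delta)

def pair(w):
    out = {}
    for we, qe in zip(w, q):
        for k, c in mul(we, qe).items():
            out[k] = sp.expand(out.get(k, 0) + c)
    return out

results = [pair(w) for w in (w1, w2, w3)]
print(results)  # all coefficients reduce to 0: q(delta) lies in the right-kernel of A
```

The only nontrivial cancellation happens in ω2·q, where x4(t−1)·x1(t)δ meets x1(t)δ·(−x4(t)), and the shift rule turns the latter coefficient into −x1(t)x4(t−1).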

We finally end this section by highlighting that the integrability of right-submodules and of one-forms are connected through the following corollary, which follows from Lemma 3.2 and by noting that, given a left-submodule A, its right-kernel and the right-kernel of its closure coincide; analogously, given a right-submodule Δ, its left-kernel and the left-kernel of its closure coincide.

Corollary 3.2 Weak integrability of a set of one-forms is equivalent to the integrability of its right-kernel.

To show more explicitly how the integrability of right-submodules and the weak integrability of one-forms are related, consider Algorithm (3.9) initialized with (3.12). The left-kernel of Δ_i, defined above, is equal to I_∞^i, where I_∞^i is computed with respect to the closure of the given submodule.
The next example shows the importance of working on K* and K*(δ]. In fact, while the one-forms considered are causal, the right-kernel is not, thus requiring one to consider the Polynomial Lie Bracket on K*(δ], as defined in Definition 2.4, to check the integrability.

Example 3.10 Consider the one-forms

$$ \begin{aligned} \omega_1 &= x_1(t-1)\,\mathrm dx_1(t) + x_1(t)\,\mathrm dx_1(t-1) - x_3(t)\,\mathrm dx_2(t-1) + \mathrm dx_3(t-1)\\ \omega_2 &= \mathrm dx_2(t) + x_3(t)\,\mathrm dx_2(t-1). \end{aligned} \qquad (3.21) $$

The one-forms ω = (ω1, ω2)^T can be written as

$$ \omega = \begin{pmatrix} x_1(t-1)+x_1(t)\delta & -x_3(t)\delta & \delta\\ 0 & 1+x_3(t)\delta & 0 \end{pmatrix}\,\mathrm dx(t). $$

The right-kernel of the left-submodule span_{K(δ]}{ω1, ω2} is not causal (i.e., one needs forward shifts of the variables x(t) to represent it), and is given by

$$ \Delta = \operatorname{span}_{\mathcal K^*(\delta]}\left\{\begin{pmatrix} x_1(-1)\delta\\ 0\\ -x_1^2(0)-x_1(1)x_1(-1)\delta \end{pmatrix}\right\} = \operatorname{span}_{\mathcal K^*(\delta]}\{r_1(x,\delta)\}. $$

To check the involutivity of Δ, we have to consider

$$ R_1(x,\zeta) = \begin{pmatrix} x_1(-1)\zeta(-1)\\ 0\\ -x_1^2(0)\zeta(0)-x_1(1)x_1(-1)\zeta(-1) \end{pmatrix}, $$

from which, since s_1 = 1, we compute the Polynomial Lie Bracket

$$ [R_1(x,\zeta), r_1(x,\delta)] = \dot r_1(x,\delta)\Big|_{\dot x_{[0]}=R_1(x,\zeta)}\,\delta - \sum_{k=0}^{2}\frac{\partial R_1(x,\zeta)}{\partial x(1-k)}\,\delta^{k}\, r_1(x(1),\delta) = \begin{pmatrix} x_1(-2)\delta^2\\ 0\\ -x_1(0)x_1(-1)\delta - x_1(1)x_1(-2)\delta^2 \end{pmatrix}\big(\zeta(0)-\delta\zeta(2)\big). $$

Since

$$ \begin{pmatrix} x_1(-2)\delta^2\\ 0\\ -x_1(0)x_1(-1)\delta - x_1(1)x_1(-2)\delta^2 \end{pmatrix} = r_1(x,\delta)\,\frac{x_1(-1)}{x_1}\,\delta, $$

Δ is involutive, which shows that the closure of ω is strongly integrable. In fact, one gets that the left-annihilator of Δ is generated by

$$ \Delta^{\perp} = \operatorname{span}_{\mathcal K^*(\delta]}\{\mathrm d(x_1(t)x_1(t-1)+x_3(t-1)),\ \mathrm dx_2(t)\} = \operatorname{span}_{\mathcal K^*(\delta]}\{\mathrm d\lambda_1, \mathrm d\lambda_2\}. $$

On the other hand,

$$ \begin{pmatrix}\omega_1\\ \omega_2\end{pmatrix} = \begin{pmatrix} 1 & -x_3\delta\\ 0 & 1+x_3\delta \end{pmatrix}\begin{pmatrix}\mathrm d\lambda_1\\ \mathrm d\lambda_2\end{pmatrix}, $$

which shows that span_{K*(δ]}{ω1, ω2} ⊂ span_{K*(δ]}{dλ1, dλ2}, that is, weak integrability.
The problem can also be addressed by working directly on one-forms. In this case, starting from ω1 and ω2, we first consider the left closure, which is given by

$$ \Omega_c = \operatorname{span}_{\mathcal K^*(\delta]}\{\omega_1,\ \mathrm dx_2\}, $$

and then we can apply the derived flag algorithm, thus getting

$$ \Omega_c = \operatorname{span}_{\mathcal K^*(\delta]}\{\mathrm d(x_1(t)x_1(t-1)+x_3(t-1)),\ \mathrm dx_2(t)\} = \operatorname{span}_{\mathcal K^*(\delta]}\{\mathrm d\lambda_1, \mathrm d\lambda_2\}, $$

as expected.

3.4 Problems

1. Consider the one-form ω = dx(t) + x(t)dx(t − 1). Check whether it is weakly integrable and/or strongly integrable.
2. Check if the right-submodule

$$ \Delta(\delta] = \operatorname{span}_{\mathcal K^*(\delta]}\begin{pmatrix} x_1(0)x_1(1)\delta\\ -x_1(0)x_1(2)-x_1^2(0)\delta \end{pmatrix} $$

is 0-integrable.
3. Check if the right-submodule

$$ \Delta(\delta] = \operatorname{span}_{\mathcal K^*(\delta]}\begin{pmatrix} x_1(2)x_1(3)\delta\\ -x_1(2)x_1(4)-x_1^2(2)\delta \end{pmatrix} $$

is p-integrable for some p ≥ 0.
4. Prove the following lemma:

Lemma 3.3 Consider the distribution Δ_i defined by (3.8), and let ρ_i = dim(Δ_i), with ρ_{−1} = ns. Then

(i) If dλ(x) is such that span{dλ(x)} = Δ̄_{i−1}^⊥, then span{dλ(x), dλ(x(−1))} ⊂ Δ̄_i^⊥.
(ii) A canonical basis for Δ̄_i^⊥ is defined for i ≥ 0 as follows:
Pick dλ_0(x_[0]) such that span{dλ_0(x_[0])} = Δ̄_0^⊥, with rank d(λ_0) = μ_0 = ρ_0 − ρ_{−1}.
At step ℓ ≤ i, pick dλ_ℓ(x_[ℓ]) such that span{dλ_k(x_[k](−j)), k ∈ [0, ℓ], j ∈ [0, ℓ − k]} = Δ̄_ℓ^⊥ and dλ_ℓ(x_[ℓ]) ∉ Δ̄_{ℓ−1}^⊥, with rank d(λ_i) = μ_i = ρ_i − 2ρ_{i−1} + ρ_{i−2}.
Chapter 4
Accessibility of Nonlinear Time-Delay
Systems

In this chapter, the accessibility properties of a nonlinear time-delay system affected


by constant commensurate delays are fully characterized in terms of absence of
non-constant autonomous functions, that is, functions which are non-constant and
whose derivatives of any order are never affected by the control. Using an algebraic
terminology, the accessibility property is characterized by the accessibility module
Rn introduced in Márquez-Martínez (1999) and defined by (4.8) to be torsion free
over the ring K(δ]. This has been worked out in Fliess and Mounier (1998) for the
special case of linear time-delay systems. While controllability for linear time-delay
systems was first addressed in Buckalo (1968), a complete characterization in the
linear case was instead given in Sename (1995).
In order to understand the peculiarities of time-delay systems, let us first analyze
the following example:

Example 4.1 Consider the delay-free second-order nonlinear system in chained form

$$ \begin{aligned} \dot x_1(t) &= x_2(t)u(t)\\ \dot x_2(t) &= u(t). \end{aligned} $$

It is well known that such a system is not locally accessible. The accessibility distribution associated to it is

$$ R_2 = \operatorname{span}\begin{pmatrix} x_2\\ 1 \end{pmatrix}, $$

which has dimension 1 for any x. As a matter of fact, the function ϕ = x1(t) − ½x2²(t) is an autonomous function for the given system, and it is computed starting from R2.

Surprisingly, a delay on x2 renders the system locally accessible. In fact, as shown in Califano et al. (2013), the nonlinear system

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
C. Califano and C. H. Moog, Nonlinear Time-Delay Systems, SpringerBriefs in Control, Automation and Robotics, https://doi.org/10.1007/978-3-030-72026-1_4

$$ \begin{aligned} \dot x_1(t) &= x_2(t-1)u(t)\\ \dot x_2(t) &= u(t) \end{aligned} \qquad (4.1) $$

is locally accessible (there is no way to compute an autonomous function for such a system). This is discussed in Example 4.3. Using the results obtained in this chapter, it is shown that the rank of the accessibility submodule associated to a given delay system determines the dimension of its accessible subsystem and, consequently, the dimension of its non-accessible part.
It should be noted that a different slight modification of the dynamics (4.1) yields again a non-accessible system.
Example 4.2 Consider, for instance,

$$ \begin{aligned} \dot x_1(t) &= x_2(t-1)u(t-1)\\ \dot x_2(t) &= u(t). \end{aligned} \qquad (4.2) $$

Now, an autonomous function is computed as ϕ = x1(t) − ½x2²(t − 1), since ϕ̇ = 0. As a matter of fact, in the coordinates

$$ \begin{pmatrix} z_1(t)\\ z_2(t)\end{pmatrix} = \begin{pmatrix} x_1(t)-\tfrac12 x_2^2(t-1)\\ x_2(t)\end{pmatrix} \qquad (4.3) $$

the system reads

$$ \begin{aligned} \dot z_1(t) &= 0\\ \dot z_2(t) &= u(t), \end{aligned} $$

which, by the way, is also delay free.
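That ż1 ≡ 0 can be confirmed with elementary symbolic computation (a sketch; the names `x2d1`, `ud1` are hypothetical stand-ins for the delayed values x2(t−1) and u(t−1)):

```python
import sympy as sp

x2d1, ud1 = sp.symbols("x2d1 ud1")  # x2(t-1) and u(t-1)
# along (4.2): x1dot(t) = x2(t-1) u(t-1)
x1dot = x2d1 * ud1
# chain rule: d/dt [x2(t-1)**2 / 2] = x2(t-1) u(t-1), since d/dt x2(t-1) = u(t-1)
z1dot = x1dot - x2d1 * ud1
print(sp.simplify(z1dot))  # 0: z1 is a first integral, unaffected by the control
```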


This means that the points reachable at time t are linked to the point reached at
time t − 1 through the relation x1 (t) − 21 x22 (t − 1) = constant: the trajectory of the
state is thus constrained by the initialization. However, it is still possible to reach
a given fixed point in IR 2 at a different time t¯, even if the previous link is present
between the point reached at time t − 1 and that one reached at time t. In fact, assume
to start the system with x1 (t) = x10 , x2 (t) = x20 , u(t) = 0 for t ∈ [−1, 0), and let
x f = (x1 f , x2 f )T be the final point to be reached.
• On the interval [0, 1), by setting u(t) = u 0 , one gets that

x1 (t) = x10
x2 (t) = u 0 t + x20 .

If x1 f = x10 it is immediately clear that any arbitrary final point x f cannot be


reached for t ∈ [0, 1). The set of reachable points from x0 , within some time
t ∈ [0, 1), is not open in IR 2 .

• Now let the control switch to u(t) = u1 on the interval t ∈ [1, 2). The dynamics on such an interval becomes

$$ \begin{aligned} \dot x_1(t) &= x_2(t-1)u_0 = ((t-1)u_0 + x_{20})u_0\\ \dot x_2(t) &= u_1. \end{aligned} $$

The solution is obtained as

$$ \begin{aligned} x_1(t) &= \tfrac12 (t-1)^2 u_0^2 + (t-1)x_{20}u_0 + x_{10}\\ x_2(t) &= (t-1)u_1 + x_2(1) = (t-1)u_1 + u_0 + x_{20}, \end{aligned} $$

so that, once the time t̄ ∈ [1, 2) at which one wants to reach the desired state x_f is fixed, one gets

$$ \begin{aligned} x_{1f}(\bar t) &= \tfrac12 (\bar t-1)^2 u_0^2 + (\bar t-1)x_{20}u_0 + x_{10}\\ x_{2f}(\bar t) &= (\bar t-1)u_1 + u_0 + x_{20}. \end{aligned} $$

Accordingly, with the control sequence

$$ u_0 = \frac{-x_{20} \pm \sqrt{x_{20}^2 - 2(x_{10}-x_{1f})}}{\bar t - 1},\qquad u_1 = \frac{x_{2f} - x_{20} - u_0}{\bar t - 1}, $$

one can reach any final point x_f from the initial point x_0, as long as x_{20}² − 2(x_{10} − x_{1f}) ≥ 0.
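The two-phase construction can be checked numerically. The sketch below (with assumed example values x0 = (0, 0), x_f = (2, 3), t̄ = 2, which are not from the book) computes u0 and u1 from the formulas above and integrates system (4.2) by a forward Euler scheme, storing the history of x2 to realize the unit delay:

```python
import math

x10, x20 = 0.0, 0.0      # initial state (assumed example values)
x1f, x2f = 2.0, 3.0      # desired final state
tbar = 2.0
T = tbar - 1.0

disc = x20**2 - 2.0 * (x10 - x1f)
u0 = (-x20 + math.sqrt(disc)) / T      # take the "+" root
u1 = (x2f - x20 - u0) / T

dt, n = 1e-3, 2000                     # Euler steps covering [0, 2]
d = 1000                               # number of steps in one delay unit
x1, x2 = x10, x20
hist = []                              # past samples of x2 for the unit delay
for i in range(n):
    t = i * dt
    u = u0 if t < 1.0 else u1
    u_del = 0.0 if t < 1.0 else u0     # u(t-1): u = 0 on [-1, 0)
    x2_del = hist[i - d] if i >= d else x20   # x2(t-1): x2 = x20 on [-1, 0)
    hist.append(x2)
    x1 += dt * x2_del * u_del          # x1dot = x2(t-1) u(t-1)
    x2 += dt * u                       # x2dot = u(t)
print(round(x1, 2), round(x2, 2))  # 2.0 3.0 up to discretization error
```

The state passes through x_f at t̄ = 2, but, as observed above, it cannot stay there: ż1 = 0 pins the combination x1 − ½x2²(t−1) for all later times.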
Of course, using non-constant controls may allow one to reach a greater region of the plane. However, already this simple control shows, on the one hand, that the system is accessible in a weaker sense (the set of reachable points is open) and, on the other hand, that while it is possible to pass through the desired final point x_f, it is not possible to stay at x_f forever.
This example shows the necessity of introducing a weaker notion of accessibility that we call t-accessibility. We end this introductory paragraph by giving the formal definitions of accessible and t-accessible systems, which will be used later on in the chapter.
Consider the dynamics

$$ \dot x_{[0]} = F(x_{[s]}) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]})\,u_{j,[0]}(-i). \qquad (4.4) $$

Definition 4.1 The state x_f ∈ ℝⁿ is said to be reachable from the initial condition ϕ(t), t ∈ [−sτ, 0), if there exist a time t_f and a Lebesgue-measurable input u(t) defined for t ∈ [0, t_f] such that the solution x(t_f, ϕ) of (4.4) equals x_f.

Definition 4.2 System (4.4) is said to be t-accessible from the initial condition ϕ(t), t ∈ [−sτ, 0), if the closure of the set of its reachable states has a nonempty interior.

There may exist some singular initial conditions from which the system is not t-accessible. Thus, system (4.4) is simply said to be t-accessible if it is t-accessible from almost any initial condition.

Definition 4.3 System (4.4) is fully accessible if there does not exist any autonomous function for the system, that is, a non-constant function λ(x) whose time derivatives of any order along the dynamics of the system are never affected by the control.

4.1 The Accessibility Submodules in the Delay Context

To tackle the accessibility problem for a given dynamics of the form (4.4), we are essentially looking for a bicausal change of coordinates z_[0] = φ(x) such that in the new coordinates the system is split into two subsystems

$$ \begin{aligned} \dot z_{1,[0]} &= \tilde F_1(z_{1,[\tilde s]})\\ \dot z_{2,[0]} &= \tilde F_2(z) + \sum_{i=0}^{\tilde l}\sum_{j=1}^{m} \tilde G_{2,ji}(z)\,u_{j,[0]}(-i), \end{aligned} $$

with the subsystem S2 completely accessible, that is, satisfying Definition 4.3.
In order to address this problem, we need to refer to the differential representation of the given dynamics (4.4), which was introduced in Sect. 2.3. Thus, by applying the differential operator d to both sides of (4.4), its differential form representation is derived as

$$ \mathrm d\dot x_{[0]} = f(x_{[s]}, u_{[0]}, \delta)\,\mathrm dx_{[0]} + g_1(x_{[s]},\delta)\,\mathrm du_{[0]}, \qquad (4.5) $$

where we recall that

$$ f(x,u,\delta) = \sum_{i=0}^{s}\left( \frac{\partial F(x)}{\partial x_{[0]}(-i)} + \sum_{j=1}^{m}\sum_{k=0}^{l} \frac{\partial G_{jk}(x)}{\partial x_{[0]}(-i)}\,u_{j,[0]}(-k) \right)\delta^{i} \qquad (4.6) $$

$$ g_1(x,\delta) = (g_{11},\ldots,g_{1m}),\qquad g_{1i} = \sum_{k=0}^{l} G_{ik}(x_{[s]})\,\delta^{k},\quad i\in[1,m]. \qquad (4.7) $$

We will assume, without loss of generality, that rank_{K(δ]}(g1(x, δ)) = m (the number of inputs), that is, each input acts independently on the system.

The first step consists in finding the maximum number of independent autonomous functions for the given system, and then in showing that these functions can be used to define a bicausal change of coordinates. To this end, we need to characterize the notion of relative degree since, as will be stated later on and in accordance with the delay-free case, autonomous functions actually have infinite relative degree. Starting from the g1i's defined by (4.7), we thus consider the accessibility submodule generators, introduced in Márquez-Martínez (1999, 2000), defined (up to the sign) as

$$ g_{i+1,j}(x, u_{[i-1]}, \delta) = \dot g_{i,j}(x, u_{[i-2]}, \delta) - f(x,u,\delta)\,g_{i,j}(x, u_{[i-2]}, \delta). $$

The accessibility submodules R_i of Σ, again introduced in Márquez-Martínez (1999), are then defined as

$$ R_i(x, u_{[i-2]}, \delta) = \operatorname{span}_{\mathcal K(\delta]}\{g_1(x,\delta),\ \ldots,\ g_i(x, u_{[i-2]}, \delta)\}. \qquad (4.8) $$

Let us now recall that a function λ(x_[s̄]) has finite relative degree k if, ∀l ∈ [1, m] and ∀i ∈ [1, k − 1],

$$ L_{g^{j}_{il}(x,u_{[i-2]})}\,\lambda(x_{[\bar s]}) = 0,\qquad \forall j\in[0, \bar s+\beta_i],\ \forall u_{[i-2]}, \qquad (4.9) $$

and there exists an index l ∈ [1, m] such that

$$ L_{g^{j}_{kl}(x,u_{[k-2]})}\,\lambda(x_{[\bar s]}) \neq 0\quad \text{for some } j\in[0, \bar s+\beta_k]. \qquad (4.10) $$

It immediately follows that a function λ(x) has relative degree k > 1 if and only if

$$ \mathrm d\lambda(x) \perp R_{k-1}(x, u_{[k-3]}, \delta)\quad\text{and}\quad \mathrm d\lambda(x)\,g_{k,\ell}(x, u_{[k-2]}, \delta) \neq 0\ \text{for some } \ell\in[1,m]. \qquad (4.11) $$

The following result, which is an immediate consequence of the expression of the g_{il}(x, u, δ)'s given later on by (4.22), gives conditions, independent of the control u, for a function to have relative degree k.

Proposition 4.1 A function λ(x) has relative degree k > 1 if and only if, ∀l ∈ [1, m],

$$ \mathrm d\lambda(x)\,g_{il}(x, 0, \delta) = 0,\qquad \forall i \le k-1, \qquad (4.12) $$

and, for some l ∈ [1, m],

$$ \mathrm d\lambda(x)\,g_{kl}(x, 0, \delta) \neq 0. \qquad (4.13) $$

A straightforward consequence of the definition of relative degree and of the accessibility submodules is that a non-constant function has infinite relative degree if and only if its relative degree is greater than n, which also allows one to characterize autonomous functions. We have, in fact, the following results, which allow us to derive an accessibility criterion.

Lemma 4.1 Given the dynamics (4.4), the relative degree of a non-constant function
λ(x[s̄] ) ∈ K is greater than n if and only if it is infinite.

Theorem 4.1 The dynamics (4.4) is locally accessible if and only if the following
equivalent statements hold true:
• Rn (x, u[n−2] , δ) is torsion free over K(δ],
• rank K(δ] Rn (x, u[n−2] , δ) = n for some u[n−2] , and
• rank R̄n (x, 0, δ) = n.

Proof If R_n(x, u_[n−2], δ) is torsion free over K(δ], then there is no nonzero element which annihilates R_n(x, u_[n−2], δ), that is, rank_{K(δ]} R_n(x, u_[n−2], δ) = n. Consequently, there cannot exist any function with infinite relative degree, rank R̄_n(x, 0, δ) = n, and the given system is accessible. As for the converse, assume that R_n(x, u_[n−2], δ) is not torsion free over K(δ]. Then rank_{K(δ]} R_n(x, u_[n−2], δ) = k < n for all possible choices of u_[n−2]. Accordingly (see Proposition 4.4), R̄_n(x, 0, δ], the involutive closure of R_n(x, 0, δ], has rank k, so that there exist n − k exact differentials in the left-annihilator, independent over K(δ], which would contradict the assumption that the system is fully accessible since, by Proposition 4.1, the corresponding functions have infinite relative degree.

Example 4.3 (The Chained Form Model) Consider again the two-dimensional system (1.8):

$$ \dot x_{[0]} = g(x_{[1]})u_{[0]} = \begin{pmatrix} x_{2,[0]}(-1)\\ 1 \end{pmatrix} u_{[0]}. $$

As already discussed in Sect. 2.5 and shown in Fig. 1.2, when the delay τ ≠ 0 the system becomes fully accessible, as opposed to the delay-free case (Bloch 2003; Murray and Sastry 1993; Sørdalen 1993). In fact, through standard computations, one has that

$$ g_1(x,\delta) = \begin{pmatrix} x_{2,[0]}(-1)\\ 1 \end{pmatrix},\qquad g_2(x,u,\delta) = \begin{pmatrix} u_{[0]}(-1) - u_{[0]}\delta\\ 0 \end{pmatrix}, $$

which shows that the accessibility matrix

$$ R_2(x,u,\delta) = \begin{pmatrix} x_{2,[0]}(-1) & u_{[0]}(-1) - u_{[0]}\delta\\ 1 & 0 \end{pmatrix} $$

has full rank whenever the control is different from 0, which, according to Theorem 4.1, ensures the full accessibility of the given system. An extensive discussion on this topic can be found in Califano et al. (2013), Li et al. (2011).

We end this section by noting that the accessibility property of a system can also be characterized in terms of one-forms. In fact, starting from the definition of infinite relative degree of a one-form, it turns out that a system is completely accessible if and only if the set of integrable one-forms with infinite relative degree is empty. To this end, we first define the relative degree of a one-form.

Definition 4.4 A one-form ω ∈ span K(δ] {dx(t)} has relative degree r if r is the smallest integer such that ω^{(r)} ∉ span Ku(δ] {dx(t)}. A function ϕ ∈ K is said to have relative degree r if the one-form dϕ has relative degree r.

Denoting by M = span K(δ] {dx(t), du^{(k)}(t); k ≥ 0}, let us consider the sequence of left submodules H1 ⊃ H2 ⊃ ⋯ of M defined as follows:

$$H_1 = \operatorname{span}_{K(\delta]}\{dx(t)\},\qquad H_i = \operatorname{span}_{K(\delta]}\{\omega \in H_{i-1}\mid \dot\omega \in H_{i-1}\}.\tag{4.14}$$

Since H1 has finite rank and all the left submodules Hi are closed, it was shown in Xia et al. (2002) that the sequence (4.14) converges. Let H∞ be the limit of the sequence (4.14), and let Ĥi denote the largest integrable left submodule contained in Hi. The left submodule Hi contains all the one-forms with relative degree greater than or equal to i. Thus, H∞ contains all the one-forms which have infinite relative degree. As a consequence, the accessibility of system (4.4) can be characterized in the following way.
Theorem 4.2 System (4.4) is accessible if and only if Ĥ∞ = ∅.

4.2 A Canonical Decomposition with Respect to Accessibility

Theorem 4.1 gives a criterion to test the accessibility of a given system. If rank K(δ] Rn = k < n, the system is not accessible and there exist n − k independent functions ϕ1 (x), …, ϕ_{n−k} (x) which are characterized by an infinite relative degree.
We are thus interested in characterizing the non-accessible part of the system, that is, in defining a bicausal change of coordinates which decomposes the given system, in the new coordinates, into two parts, one of which represents the non-accessible subsystem.
The basic idea is that the autonomous functions linked to the system, which can be computed through Rn (x, 0, δ), represent the starting point for defining the correct change of coordinates. The main issues solved in the next two results are, first, to show how to compute a basis over K(δ] of all the autonomous functions and, second, to show that such a basis is closed and can be used to define a bicausal change of coordinates.

Let us then consider Rn (x, 0, δ) = span K(δ] {g1 (x, δ), …, gn (x, 0, δ)} and, since the elements of the submodule are by construction causal, consider for i ≥ 0 the sequence of distributions G_i ⊂ span{∂/∂x_{[0]}, …, ∂/∂x_{[0]}(−i − s)} defined as

$$G_i = \operatorname{span}\begin{pmatrix} g^0(x_{[s]}) & \cdots & g^{\ell}(x_{[s]}) & & 0 & 0\\ 0 & \ddots & & \ddots & & \vdots\\ \vdots & 0 & g^0(x_{[s]}(-i)) & \cdots & g^{\ell}(x_{[s]}(-i)) & 0\\ 0 & & \cdots & & 0 & I_{ns}\end{pmatrix},\tag{4.15}$$

where ℓ represents the maximum degree in δ and s the maximum delay in x which are present in the g_{i,j}'s. G_i is a distribution in ℝ^{n(s+i+1)}, as is its involutive closure Ḡ_i. Let ρ_i = rank(Ḡ_i), with ρ_{−1} = ns. The following result can be stated.

Proposition 4.2 Assume that the system Σ, given by (4.4), is not accessible, i.e., rank Rn (x, u, δ) = k < n. Then the following facts hold true:
(i) The system Σ possesses n − k independent (over K(δ]) autonomous exact differentials.
(ii) A canonical basis for Ḡ_i^⊥ is defined for i ≥ 0 as follows.
Let dλ0 (x[0]) be such that span{dλ0 (x[0])} = Ḡ_0^⊥, with rank(dλ0) = μ0 = ρ0 − ρ−1.
Let dλ1 (x[1]) ∉ Ḡ_0^⊥, with rank(dλ1) = μ1 = ρ1 − 2ρ0 + ρ−1, be such that

span{dλ0 (x[0]), dλ0 (x[0] (−1)), dλ1 (x[1])} = Ḡ_1^⊥.

More generally, let dλi (x[i]) ∉ Ḡ_{i−1}^⊥, with rank(dλi) = μi = ρi − 2ρ_{i−1} + ρ_{i−2}, be such that span{dλ_μ (x[μ] (−j)), μ ∈ [0, i], j ∈ [0, i − μ]} = Ḡ_i^⊥.
(iii) Let ℓ̄ represent the maximum degree in δ and s̄ the maximum delay in x in Rn (x, u, δ). Then there exists γ ≤ s̄ + k ℓ̄ such that any other autonomous function λ(x) satisfies

dλ(x) ∈ span K(δ] {dλ0 (x), …, dλγ (x)},

that is, Ḡγ characterizes completely all the independent autonomous functions of Σ.
Proof (i) is a direct consequence of Proposition 4.4. (ii) is a direct consequence of Lemma 3.3 in the appendix, where Δi = Gi is causal by assumption, thus ensuring that the left-annihilator is also causal. Finally, (iii) is a direct consequence of Lemma 1.1.
Theorem 4.3 Consider the continuous-time system (4.4). Let γ be the smallest index such that any autonomous function λ(x) associated to the given system satisfies

dλ(x) ∈ span K(δ] {dλ0 (x[0]), …, dλγ (x[γ])},

where

span{dλ0 (x)} = Ḡ_0^⊥
span{dλ0, dλ0 (x(−1)), dλ1 (x)} = Ḡ_1^⊥,  dλ1 (x[1]) ∉ Ḡ_0^⊥
⋮
span{dλi (x(−j)), i ∈ [0, γ], j ∈ [0, γ − i]} = Ḡ_γ^⊥,  dλγ (x[γ]) ∉ Ḡ_{γ−1}^⊥

then
(1.) there exists dλ_{γ+1}(x) such that

$$dz_{[0]} = \begin{pmatrix} dz_{1,[0]}\\ \vdots\\ dz_{\gamma+1,[0]}\\ dz_{\gamma+2,[0]}\end{pmatrix} = \begin{pmatrix} d\lambda_0(x_{[0]})\\ \vdots\\ d\lambda_\gamma(x_{[\gamma]})\\ d\lambda_{\gamma+1}(x)\end{pmatrix} = T(x,\delta)\,dx_{[0]}$$

defines a bicausal change of coordinates.


(2.) In the above-defined coordinates z[0] = φ(x), such that dz[0] = T(x, δ)dx[0], the system reads

$$\begin{aligned} \dot z_{1,[0]} &= f_1(z_{1,[\bar s]},\ldots,z_{\gamma+1,[\bar s]})\\ &\;\;\vdots\\ \dot z_{\gamma+1,[0]} &= f_{\gamma+1}(z_{1,[\bar s]},\ldots,z_{\gamma+1,[\bar s]})\\ \dot z_{\gamma+2,[0]} &= f_{\gamma+2}(z) + \sum_{i=0}^{\bar s}\sum_{j=1}^{m}\tilde G_{ji}(z)\,u_{j,[0]}(-i). \end{aligned}\tag{4.16}$$

Moreover, the dynamics associated to (z1, …, z_{γ+1})^T represents the largest non-accessible dynamics.

Proof By construction, span K(δ] {dλ0 (x), …, dλγ (x)} is closed and its right-annihilator is causal so that, according to Proposition 4.2, it is possible to compute λ_{γ+1}(x) such that

$$dz_{[0]} = \begin{pmatrix} dz_{1,[0]}\\ \vdots\\ dz_{\gamma+1,[0]}\\ dz_{\gamma+2,[0]}\end{pmatrix} = \begin{pmatrix} d\lambda_0(x_{[0]})\\ \vdots\\ d\lambda_\gamma(x_{[\gamma]})\\ d\lambda_{\gamma+1}(x)\end{pmatrix} = T(x,\delta)\,dx_{[0]}\tag{4.17}$$

is a bicausal change of coordinates.
Consider λ̇i (x) for i ∈ [0, γ]. By construction,

dλi (x) g_{1,j}(x, δ) = 0, i ∈ [0, γ].

Consequently, if α is the maximum delay in λi (x), that is, λi := λi (x[α]), then

$$\dot\lambda_i(x_{[\alpha]}) = \sum_{j=0}^{\alpha}\frac{\partial \lambda_i(x_{[\alpha]})}{\partial x(-j)}\,F(x(-j)),\qquad i\in[0,\gamma].$$

Let dλi (x) = Λi (x, δ)dx[0]; then

$$d\dot\lambda_i(x) = \dot\Lambda_i(x,\delta)dx_{[0]} + \Lambda_i(x,\delta)d\dot x_{[0]} = \dot\Lambda_i(x,\delta)dx_{[0]} + \Lambda_i(x,\delta)f(x,u,\delta)dx_{[0]} = \Gamma(x,\delta)dx_{[0]}.\tag{4.18}$$

On the other hand, by assumption, for any k ≥ 1 and any j ∈ [1, m],

Λi (x, δ) g_{k,j}(x, u, δ) = 0,

so that, differentiating both sides, one gets, ∀ k ≥ 1, j ∈ [1, m],

$$0 = \dot\Lambda_i(x,\delta)g_{k,j}(x,u,\delta) + \Lambda_i(x,\delta)\dot g_{k,j}(x,u,\delta) = \dot\Lambda_i(x,\delta)g_{k,j}(x,u,\delta) + \Lambda_i(x,\delta)f(x,u,\delta)g_{k,j}(x,u,\delta).\tag{4.19}$$

It follows that for any k ≥ 1 and any j ∈ [1, m], since dλ̇i (x) is given by (4.18), then, due to (4.19),

$$\Gamma(x,\delta)g_{k,j}(x,u,\delta) = \dot\Lambda_i(x,\delta)g_{k,j}(x,u,\delta) + \Lambda_i(x,\delta)f(x,u,\delta)g_{k,j}(x,u,\delta) = 0.$$

As a consequence, dλ̇i ∈ span K(δ] {dλ0 (x[0]), …, dλγ (x[γ])} for any i ∈ [0, γ]. Accordingly, in the coordinates (4.17), the system necessarily reads (4.16).

Example 4.2 contin'd. Consider again system (4.2). Its differential representation is

$$d\dot x_{[0]} = \begin{pmatrix} 0 & u_{[0]}(-1)\,\delta\\ 0 & 0\end{pmatrix}dx_{[0]} + \begin{pmatrix} x_{2,[0]}(-1)\,\delta\\ 1\end{pmatrix}du_{[0]}.$$

In this case, again through standard computations, one gets that

$$g_1(x,\delta)=\begin{pmatrix} x_{2,[0]}(-1)\,\delta\\ 1\end{pmatrix},\qquad g_2(x,u,\delta)=\begin{pmatrix}0\\0\end{pmatrix}.$$

The accessibility matrix

$$R_2(x,u,\delta)=\begin{pmatrix} x_{2,[0]}(-1)\,\delta & 0\\ 1 & 0\end{pmatrix}$$

has clearly rank 1, which according to Theorem 4.1 implies that the system is not
completely accessible.
According to Proposition 4.2, there exists an autonomous function, which can be
computed by considering the distributions Gi . Starting from G0 we get
$$G_0 = \operatorname{span}_{K(\delta]}\begin{pmatrix} g^0 & g^1 & 0\\ 0 & 0 & I\end{pmatrix} = \operatorname{span}_{K(\delta]}\begin{pmatrix} 0 & x_{2,[0]}(-1) & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{pmatrix}.$$

Standard computations show that

$$\bar G_0=\operatorname{span}_{K(\delta]}\left\{\frac{\partial}{\partial x_{[0]}},\ \frac{\partial}{\partial x_{[0]}(-1)}\right\}.$$

Let us now consider

$$G_1 = \operatorname{span}_{K(\delta]}\begin{pmatrix} g^0 & g^1 & 0 & 0\\ 0 & g^0(-1) & g^1(-1) & 0\\ 0 & 0 & 0 & I\end{pmatrix} = \operatorname{span}_{K(\delta]}\begin{pmatrix} 0 & x_{2,[0]}(-1) & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0\\ 0 & 0 & x_{2,[0]}(-2) & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1\end{pmatrix}.$$

One thus gets that

$$\bar G_1=\operatorname{span}_{K(\delta]}\left\{\frac{\partial}{\partial x_{2,[0]}},\ x_{2,[0]}(-1)\frac{\partial}{\partial x_{1,[0]}}+\frac{\partial}{\partial x_{2,[0]}(-1)},\ \frac{\partial}{\partial x_{1,[0]}(-1)},\ \frac{\partial}{\partial x_{[0]}(-2)}\right\}.$$
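The involutive closure just computed can be cross-checked with a computer algebra system. The sketch below (sympy; the coordinate names x1_0, …, x2_2 for x1(t), …, x2(t−2) are ours) stacks the columns of G1 together with their pairwise Lie brackets and confirms that the closure has rank 5, leaving exactly one exact differential in the annihilator:

```python
import sympy as sp

# coordinates x1(t), x2(t), x1(t-1), x2(t-1), x1(t-2), x2(t-2)
x1_0, x2_0, x1_1, x2_1, x1_2, x2_2 = sp.symbols('x1_0 x2_0 x1_1 x2_1 x1_2 x2_2')
X = sp.Matrix([x1_0, x2_0, x1_1, x2_1, x1_2, x2_2])

def bracket(f, g):
    """Lie bracket [f, g] of column vector fields on R^6."""
    return g.jacobian(X)*f - f.jacobian(X)*g

# generators of G1, read column-wise from the matrix above
cols = [sp.Matrix([0, 1, 0, 0, 0, 0]),      # d/dx2(t)
        sp.Matrix([x2_1, 0, 0, 1, 0, 0]),   # x2(t-1) d/dx1(t) + d/dx2(t-1)
        sp.Matrix([0, 0, x2_2, 0, 0, 0]),   # x2(t-2) d/dx1(t-1)
        sp.Matrix([0, 0, 0, 0, 1, 0]),      # d/dx1(t-2)
        sp.Matrix([0, 0, 0, 0, 0, 1])]      # d/dx2(t-2)

# one round of brackets already closes the distribution here
new = [bracket(f, g) for f in cols for g in cols]
M = sp.Matrix.hstack(*(cols + new))
print(M.rank())    # 5: exactly one exact differential spans the annihilator
```

The bracket [∂/∂x2(t−2), x2(t−2)∂/∂x1(t−1)] = ∂/∂x1(t−1) is what enlarges G1 to the closure displayed above.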

4.3 On the Computation of the Accessibility Submodules

The accessibility generators g_i(x, u_{[i−2]}, δ) are strictly linked to the Polynomial Lie Bracket, thus generalizing to the delay context their definition in the nonlinear delay-free case.
In fact, starting from the dynamics (4.4), we can consider

$$\bar F(x,u,\ell) := \Big(F(x) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x)\,u_{[0],j}(-i)\Big)\,\ell(0).$$

For a given vector τ(x, u, δ), let us define

$$\operatorname{ad}_{\bar F(x,u,1)}\tau(x,u,\delta) := \operatorname{ad}_{\bar F(x,u,\ell)}\tau(x,u,\delta)\big|_{\ell(0)=1} = \dot\tau(x,u,\delta) - f(x,u,\delta)\,\tau(x,u,\delta)\tag{4.20}$$

and, iteratively, for any i > 1,

$$\operatorname{ad}^i_{\bar F(x,u,1)}\tau(x,u,\delta) = \operatorname{ad}^{i-1}_{\bar F(x,u,1)}\big(\operatorname{ad}_{\bar F(x,u,1)}\tau(x,u,\delta)\big).$$

Accordingly, the accessibility submodule generators, introduced in Márquez-Martínez (1999, 2000) and defined (up to the sign) as

$$g_{i+1,j}(x,u_{[i-1]},\delta) = \dot g_{i,j}(x,u_{[i-2]},\delta) - f(x,u,\delta)\,g_{i,j}(x,u_{[i-2]},\delta),$$

are then given by

$$g_{i+1,j}(x,u_{[i-1]},\delta) = \operatorname{ad}^i_{\bar F(x,u,1)}\, g_{1,j}(x,\delta),\tag{4.21}$$

which implies that they can be expressed in terms of Extended Lie Brackets. In fact, standard but tedious computations show that, setting

$$\bar F_0(x,\delta) = \sum_{j=0}^{ns}\bar F_0^j(x)\,\delta^j = \sum_{j=0}^{ns} F(x)\,\delta^j,$$

for i ≤ n, and g_{il}(x, 0, δ) = Σ_{p=0}^{is} g_{il}^p(x, 0)δ^p, and denoting g_{il}^p(x, 0) by g_{il}^p(0), then

$$g_{il}(x,0,\delta) = \operatorname{ad}^{i-1}_{\bar F(x,0,1)}\, g_{1l}(x,\delta) = \sum_{p=0}^{is}[\bar F_0^{is},\, g_{i-1,l}^p(0)]_{E_0}\,\delta^p = \sum_{p=0}^{is}[\bar F_0^{is},\ldots,[\bar F_0^{is},\, g_{1l}^p]_{E_{is}}]_{E_0}\,\delta^p.$$

Analogously, g_{il}(x, u, δ) = ad^{i−1}_{F̄(x,u,1)} g_{1l}(x, δ) is given by

$$g_{il}(x,u,\delta) = g_{il}(x,0,\delta) + \sum_{j=1}^{m}\sum_{q=0}^{i-2}\sum_{\mu=1}^{i-1-q}\sum_{k=-p-is}^{p+is}\sum_{\ell=0}^{is}\binom{i-1}{\mu+q}\, c_\mu^q\,[g_{\mu,j}^{k+\ell}(0),\, g_{i-\mu-q,l}(0)]_{E_0}\,\delta^{\ell}\, u_j^{(q)}(-k) + m_i(x,u_{[i-3]},\delta),\tag{4.22}$$

where c_μ^0 = c_1^q = 1 and, for μ > 1, q > 0, c_μ^q = c_{μ−1}^q + c_μ^{q−1}, and m_i(x, u_{[i−3]}, δ) is given by the linear combination, through real coefficients, of terms of the form

$$[g_{\mu_1,j_1}^{i_1+\nu_1}(0),\ldots,[g_{\mu_\nu,j_\nu}^{i_\nu+\nu_\nu}(0),\, g_{i-q,l}^{\ell}(0)]_{E_{is}}]_{E_0}\,\delta^{\ell}\prod_{\mu=1}^{\nu}u_{j_\mu}^{(\ell_\mu)}(-i_\mu),$$

where ν ∈ [2, i − 1], j_μ ∈ [1, m], and q = k + Σ_{k=1}^{ν}μ_k ≤ i − 1.
The following result can be easily proven.
Proposition 4.3 If for some coefficient α(x, u, δ), g_{i+1,j}(·)α(x, u, δ) ∈ Ri, then ∀ k ≥ 0 there exist coefficients ᾱk (x, u, δ) such that g_{i+k+1,j}(·)ᾱk (x, u, δ) ∈ Ri.
The possibility of expressing the accessibility generators g_{il}(·, δ) as a linear combination of Extended Lie Brackets of the g_{kj}'s for j ∈ [1, l − 1] makes it possible to prove that, once the dimension of the accessibility modules Ri stabilizes, it cannot grow any further, so that Rn has maximal dimension over K(δ]. One can thus state the next result, whose proof can be found in Califano and Moog (2017) and which is also at the basis of Theorem 4.1.
Proposition 4.4 Let k = rank K(δ] (Rn (x, u, δ)) for almost all u, and set

Rn (x, 0, δ) = span K(δ] {g1 (x, δ), …, gn (x, 0, δ)}.

Then R̄n (x, 0, δ), the involutive closure of Rn (x, 0, δ), has rank k.
Example 4.3 cont'd: For the dynamics (2.28), we have shown that the accessibility matrix has full rank 2 for u ≠ 0, since it is given by

$$R_2(x,u,\delta) = \begin{pmatrix} x_{2,[0]}(-1) & \big(u_{[0]}(-1)-u_{[0]}\big)\,\delta\\ 1 & 0\end{pmatrix}.$$

R2 (x, 0, δ) instead has dimension 1, while its involutive closure has again dimension 2. In fact, using the Polynomial Lie Bracket, we have that

$$R_2(x,0,\delta)=\operatorname{span}\begin{pmatrix} x_{2,[0]}(-1)\\ 1\end{pmatrix},\qquad \bar R_2(x,0,\delta)=\operatorname{span}\begin{pmatrix} x_{2,[0]}(-1) & 1\\ 1 & 0\end{pmatrix},$$

as expected.

4.4 On t-Accessibility of Time-Delay Systems

As already underlined through examples, even if a system admits some autonomous functions, it may still be possible to move from some initial state to a final state in an open subset of ℝ^n, though some time constraints on the trajectories have to be satisfied. This property is called t-accessibility.
Of course, if a system is accessible it is also t-accessible, so that the problem arises when the submodule

Rn (x, u, δ) = span K(δ] {g1 (x, δ), …, gn (x, u, δ)}

has rank n − j < n. In the general case, there exist j autonomous functions λi (x[s]), i ∈ [1, j], such that

λ̇i = ϕi (λ1, …, λj, …, λ1 (−l), …, λj (−l)), i ∈ [1, j].

A first simple result can be immediately deduced when all the autonomous functions depend on non-delayed state variables only. In this case, the next result can be stated.
Theorem 4.4 Consider the dynamics (4.4) and assume that the system is not fully
accessible with rank Rn (x, u, δ) = n − j, j > 0. If Ḡ0 given by (4.15) satisfies
dim Ḡ0⊥ = j, then the given system is also not t-accessible.

Proof Clearly, if dim Ḡ_0^⊥ = j, then all the independent autonomous functions of the system are delay free. Let λi (x(t)), i ∈ [1, j], be such independent functions. Then, under the delay-free change of coordinates

$$\begin{pmatrix} z_1(t)\\ \vdots\\ z_j(t)\\ \chi_1(t)\\ \vdots\\ \chi_{n-j}(t)\end{pmatrix} = \begin{pmatrix} \lambda_1(x(t))\\ \vdots\\ \lambda_j(x(t))\\ \varphi_1(x(t))\\ \vdots\\ \varphi_{n-j}(x(t))\end{pmatrix},$$

where the functions ϕi, i ∈ [1, n − j], are chosen to define any basis completion, the given system reads

$$\begin{aligned} \dot z_{[0]} &= \tilde F_z(z_{[s]})\\ \dot\chi_{[0]} &= \tilde F_\chi(z_{[s]},\chi_{[s]}) + \sum_{i=0}^{l}\tilde G_{i,\chi}(z_{[s]},\chi_{[s]})\,u(-i).\end{aligned}\tag{4.23}$$

Once the initial condition φ(t) over the interval [−sτ , 0) is fixed, the z-variables
evolve along fixed trajectories defined by the initial condition, which proves that the
system is not t-accessible.

The driftless case is a particular one: if λ(x) is an autonomous element, then λ̇ = 0. As a consequence, in order for a driftless system to be t-accessible, its autonomous functions cannot depend on x(t) only, as underlined hereafter.
Corollary 4.1 The dynamics (4.4) with F(x[s]) = 0 is t-accessible only if Ḡ_0^⊥ = 0.
Things become more involved when only part of the autonomous functions are delay free. In this case, a necessary condition can be obtained by identifying whether there is a subset of the autonomous functions which are delay free and whose derivatives fall in the same subset. To this end, the following algorithm was proposed in Gennari and Califano (2018).
Let Ω = span{dλ1 (x(t)), …, dλj (x(t))}.

Algorithm 4.1
Start:
  Set Ω0 = Ω
  Let Ω1 = span{ω ∈ Ω0 : ω̇ ∈ Ω0}
  Let k := 1
  For k ≤ dim(Ω0):
    If Ωk = Ωk−1, goto found
    Else:
      Set Ωk+1 = span{ω ∈ Ωk : ω̇ ∈ Ωk}
      Set k ← k + 1
  End
found:
  Set Ω = maxintegrable(Ωk)
  If dim(span{Ωk}) = dim(span{Ω}), goto close
  Else goto Start
close:
End

Proposition 4.5 Let j be the dimension of Ω. Then Algorithm 4.1 ends after k ≤ j
steps.
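In the linear delay-free case each one-form in Ω is a constant row vector w whose derivative along ẋ = Ax is wA, so Algorithm 4.1 collapses to computing the largest A-invariant subspace of the row space of Ω (no integrability step is needed, since constant codistributions are integrable). A minimal sympy sketch of this special case — the function name and data layout are ours:

```python
import sympy as sp

def largest_invariant_subspace(W, A):
    """Iterate Omega_{k+1} = {w in Omega_k : w*A in Omega_k} (the linear,
    delay-free shadow of Algorithm 4.1) until it stabilizes; the rows of the
    returned matrix span the limit (possibly a 0-row matrix)."""
    basis = W.rowspace()
    while basis:
        B = sp.Matrix.vstack(*basis)
        ker = B.nullspace()                  # v lies in rowspace(B) iff v*ker = 0
        if not ker:                          # Omega_k is the whole space
            return B                         # and hence trivially invariant
        N = sp.Matrix.hstack(*ker)
        C = (B * A * N).T.nullspace()        # coefficients c with (c^T B) A in rowspace(B)
        rows = [c.T * B for c in C]
        Bn = sp.Matrix.vstack(*rows) if rows else sp.zeros(0, B.cols)
        if Bn.rank() == B.rank():            # stabilized: Omega_{k+1} = Omega_k
            return B
        basis = Bn.rowspace()
    return sp.zeros(0, A.rows)

A = sp.Matrix([[0, 1], [0, 0]])              # x1' = x2, x2' = 0
print(largest_invariant_subspace(sp.Matrix([[0, 1]]), A))   # Matrix([[0, 1]])
print(largest_invariant_subspace(sp.Matrix([[1, 0]]), A))   # empty (0 x 2) matrix
```

Here dx2 survives (its derivative is zero), while dx1 is eliminated because its derivative dx2 leaves the span.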

Based on the previous algorithm, the following result was also obtained.
Theorem 4.5 Suppose that system (4.4) is not accessible. Let rank Rn (x, u, δ) = n − j and, accordingly, R_n^⊥ (x, u, δ) = span K(δ] {dλ1 (·), …, dλj (·)}. Let the first j̄ < j functions be independent of the delayed variables, that is, λi = λi (x(t)), i ∈ [1, j̄]. If Algorithm 4.1 applied to Ω = span{dλ1, …, dλ_j̄} ends with Ω nonzero, then system (4.4) is not t-accessible.

The proof is based on the observation that the algorithm identifies a subset of the autonomous functions which depend only on x(t), and thereby a non-accessible subsystem with special characteristics. Of course, it is not, in general, the maximal non-accessible subsystem. Furthermore, since the algorithm works on one-forms which are delay free, the maximal integrable codistribution contained in Ωk can be computed using standard results.
Example 4.2 contin'd. Let us consider again the dynamics (4.2). We have already shown that the system is not fully accessible, but it is t-accessible. As a matter of fact, Ḡ0 has full rank, so that the autonomous function was obtained from Ḡ1 and is λ(x) = x1(t) − ½ x2²(t − 1).
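This can be double-checked symbolically. The sketch below assumes — consistently with the differential representation of (4.2) displayed in Sect. 4.2, though not written out explicitly there — that the dynamics reads ẋ1(t) = x2(t−1)u(t−1), ẋ2(t) = u(t), and verifies that λ̇ ≡ 0:

```python
import sympy as sp

# trailing 'd' marks one delay step: x2d = x2(t-1), ud = u(t-1)
x2d, ud = sp.symbols('x2d ud')

x1_dot = x2d*ud          # x1' = x2(-1) u(-1), read off the differential representation
x2d_dot = ud             # x2'(t-1) = u(t-1), the second equation delayed once

lam_dot = x1_dot - x2d*x2d_dot      # d/dt [x1 - x2(-1)^2 / 2]
print(sp.simplify(lam_dot))         # 0: lambda is an autonomous function
```

Since λ̇ = 0 but λ depends on the delayed variable x2(t−1), this is consistent with Corollary 4.1: the driftless system remains t-accessible.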

4.5 Problems

1. Given τ1 (x_{[p,s]}, δ), let

   τi (x, u, δ) := ad^{i−1}_{F̄(x,u,1)} τ1 (x_{[p,s]}, δ), for i > 1.

Show that the following result holds true:

Proposition 4.6 Given α(x, u, δ),

   ad_{F̄(x,u,1)}(τ1 (x, δ)α) = τ2 (x, u, δ)α + τ1 (x, δ)α̇,

and more generally

$$\operatorname{ad}^k_{\bar F(x,u,1)}\big(\tau_1(x,\delta)\,\alpha\big) = \sum_{j=0}^{k}\binom{k}{j}\,\tau_{k-j+1}(x,u,\delta)\,\alpha^{(j)}.\tag{4.24}$$
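For a quick consistency check of (4.24), one can work out the scalar, delay-free shadow of the definition, where ad v = v̇ − f v with scalar functions of t; a sympy sketch for k = 2 (names are ours):

```python
import sympy as sp

t = sp.symbols('t')
tau1, alpha, f = (sp.Function(n)(t) for n in ('tau1', 'alpha', 'f'))

def ad(v):
    """Scalar, delay-free shadow of (4.20): ad v = v' - f v."""
    return sp.diff(v, t) - f*v

tau = [tau1]
for _ in range(2):
    tau.append(sp.expand(ad(tau[-1])))     # tau_{i+1} = ad tau_i

k = 2
lhs = sp.expand(ad(ad(tau1*alpha)))
rhs = sp.expand(sum(sp.binomial(k, j)*tau[k - j]*sp.diff(alpha, t, j)
                    for j in range(k + 1)))
print(sp.simplify(lhs - rhs))              # 0: (4.24) holds for k = 2
```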

2. DUPLICATION OF THE DYNAMICS.


(a) Find an autonomous element for the following linear time-delay system:

   ẋ1(t) = x1(t − 1) + u(t)
   ẋ2(t) = x2(t − 1) + u(t).

(b) Find an autonomous element for the following delay-free nonlinear system:

   ẋ1(t) = x1(t)u(t)
   ẋ2(t) = x2(t)u(t).

(c) Show that the following nonlinear time-delay system is fully accessible or, equivalently, that there is no autonomous element:

   ẋ1(t) = x1(t − 1)u(t)
   ẋ2(t) = x2(t − 1)u(t).
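As a sanity check for parts (a) and (b), the natural candidates are the difference x1 − x2 in (a) and the ratio x1/x2 in (b); a sympy sketch verifying that the input never enters their derivatives (symbol names are ours):

```python
import sympy as sp

t = sp.symbols('t')
x1, x2, u = (sp.Function(n)(t) for n in ('x1', 'x2', 'u'))

# (b) delay-free: x1' = x1 u, x2' = x2 u; candidate lambda = x1/x2
lam_b = x1/x2
lam_b_dot = sp.diff(lam_b, t).subs({sp.diff(x1, t): x1*u,
                                    sp.diff(x2, t): x2*u})
print(sp.simplify(lam_b_dot))        # 0: the input never enters

# (a) with delays: d/dt (x1 - x2) = x1(-1) - x2(-1), again u-free
x1d, x2d, u_ = sp.symbols('x1d x2d u_')
lam_a_dot = (x1d + u_) - (x2d + u_)  # x1' - x2'
print(sp.expand(lam_a_dot))          # x1d - x2d = lambda(t-1)
```

For (c) the same candidates fail, since u(t) multiplies delayed states and no longer cancels.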

3. PRACTICAL CHECK OF ACCESSIBILITY.

Consider the system

$$\Sigma_0:\ \begin{cases} \dot x_1(t) = x_2(t) + u(t)\\ \dot x_2(t) = x_3(t-\tau)\,u(t)\\ \dot x_3(t) = u(t),\end{cases}$$

where the initial condition for x3 is some smooth function c(t) defined for t ∈ [−τ, 0].
Checking accessibility amounts to computing combinations of state variables with infinite relative degree, i.e., those which are not affected by the control input.
(a) Set c(t) = x3(t − τ), which is considered as a time-varying parameter not affected by the control input:

$$\tilde\Sigma_0:\ \begin{cases} \dot x_1(t) = x_2(t) + u(t)\\ \dot x_2(t) = c(t)\,u(t)\\ \dot x_3(t) = u(t).\end{cases}$$

Compute all functions depending on x(t) whose relative degree is larger than
or equal to 2.
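As a hint for part (a), one candidate can be screened symbolically; the sketch below (names are ours) checks that x1 − x3 has relative degree at least 2 for Σ̃0:

```python
import sympy as sp

t = sp.symbols('t')
x1, x2, x3, u, c = (sp.Function(n)(t) for n in ('x1', 'x2', 'x3', 'u', 'c'))
dyn = {sp.diff(x1, t): x2 + u, sp.diff(x2, t): c*u, sp.diff(x3, t): u}

lam = x1 - x3                        # candidate not affected by u at first order
lam_dot = sp.diff(lam, t).subs(dyn)
print(lam_dot)                       # x2(t): u-free, so relative degree >= 2
```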
(b) For t ≥ τ, set x4(t) = x3(t − τ), u(t − τ) = v(t), and consider the extended system

$$\Sigma_1:\ \begin{cases} \dot x_1(t) = x_2(t) + u(t)\\ \dot x_2(t) = x_4(t)\,u(t)\\ \dot x_3(t) = u(t)\\ \dot x_4(t) = v(t).\end{cases}$$

Compute all functions depending on x(t) whose relative degree is larger than
or equal to 2 for the dynamics Σ1 . Check its accessibility.
Compare the number of autonomous elements of Σ1 with the number of
autonomous elements of Σ̃0 .
Conclude about the accessibility of Σ0.
4. Consider the system

ẋ(t) = f (x(t), x(t − 1), . . . , x(t − k), u(t)),

where the initial condition for x is some smooth function c1 (t) for t ∈ [−1, 0],
c2 (t) for t ∈ [−2, −1[,…, ck (t) for t ∈ [−k, −(k − 1)[. Define the system

Σ0 : ẋ(t) = f (x(t), c1 , . . . , ck , u(t)),

where c1 (t), . . . , ck (t) are time-varying parameters.



Any autonomous element for Σ0 is also an autonomous element for the initial
system, at least for t ∈ [0, 1], i.e. it will not depend on the control input for
t ∈ [0, 1].
Now, denote ξ(t) = x(t − 1) and define the extended system

$$\begin{aligned} \dot x(t) &= f(x(t), \xi(t), x(t-2),\ldots,x(t-k), u(t))\\ \dot\xi(t) &= f(\xi(t), x(t-2),\ldots,x(t-k-1), u(t-1)).\end{aligned}\tag{4.25}$$

The latter is well defined for t ≥ 1. Define the system

$$\Sigma_1:\ \begin{cases} \dot x(t) = f(x(t), \xi(t), c_2,\ldots,c_k, u(t))\\ \dot\xi(t) = f(\xi(t), c_2,\ldots,c_k, c_{k+1}, u(t-1))\end{cases}$$

for some time-varying parameter ck+1 (t).


(a) Compare the number of autonomous elements of Σ1 with the number of
autonomous elements of Σ0 .
Any autonomous element for Σ1 is also an autonomous element for the initial
system, at least for t ∈ [0, 2], i.e. it will not depend on the control input for
t ∈ [0, 2].
More generally, denote ξi(t) = x(t − i), i ∈ [1, k], and define the extended system

$$\begin{aligned} \dot x(t) &= f(x(t), \xi_1(t), \xi_2(t),\ldots,\xi_k(t), u(t))\\ \dot\xi_1(t) &= f(\xi_1(t), \xi_2(t),\ldots,\xi_k(t), x(t-k-1), u(t-1))\\ &\;\;\vdots\\ \dot\xi_k(t) &= f(\xi_k(t), x(t-k-1),\ldots,x(t-2k), u(t-k)).\end{aligned}\tag{4.26}$$

The latter is well defined for t ≥ k. Define the system

$$\Sigma_k:\ \begin{cases} \dot x(t) = f(x(t), \xi_1(t), \xi_2(t),\ldots,\xi_k(t), u(t))\\ \dot\xi_1(t) = f(\xi_1(t), \xi_2(t),\ldots,\xi_k(t), c_{k+1}, u(t-1))\\ \quad\vdots\\ \dot\xi_k(t) = f(\xi_k(t), c_{k+1},\ldots,c_{2k}, u(t-k))\end{cases}$$

for some parameters c_{k+1}, …, c_{2k}.


(b) Compare the number of autonomous elements of Σk with the number of
autonomous elements of Σk−1 .
Any autonomous element for Σk is also an autonomous element for the initial
system, at least for t ∈ [0, k], i.e. it will not depend on the control input for
t ∈ [0, k].
Chapter 5
Observability

Observability is an important characteristic of a system, which makes it possible to address several problems when only the output function can be measured. Many authors have focused on the observer design problem, starting from linear systems and moving to nonlinear ones, and then adding delays. Several differences already arise between continuous-time and discrete-time nonlinear delay-free systems (see, for example, Andrieu and Praly 2006; Besançon 1999; Bestle and Zeitz 1983; Califano et al. 2009; Krener and Isidori 1983; Xia and Gao 1989). When dealing with systems affected by delays, additional features have to be taken into account. Some aspects are discussed in Anguelova and Wennberg (2010), Germani et al. (1996, 2002).
The observability property of a system is considered as the dual geometric aspect of accessibility. This is true in the linear and nonlinear delay-free case. However, when delays are present, some differences arise, as has already been underlined in Sect. 1.4. In the present chapter, we will highlight the main aspects.
There are two main features, in particular, which should be examined. The first one lies in the fact that it is not, in general, possible to decompose the system into an observable subsystem and a nonobservable one, as shown in Sect. 1.4, which is instead possible with respect to the accessibility property. The second feature is instead linked to the fact that the two notions of weak and strong observability introduced in the linear delay context are not sufficient when dealing with nonlinear systems. This becomes particularly clear when dealing with the realization problem. It is sufficient to note that, as shown in Halas and Anguelova (2013), a retarded-type single-input single-output (SISO), weakly observable linear time-delay system of order n

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
C. Califano and C. H. Moog, Nonlinear Time-Delay Systems,
SpringerBriefs in Control, Automation and Robotics,
https://doi.org/10.1007/978-3-030-72026-1_5


$$\begin{aligned} \dot x(t) &= \sum_{j=0}^{s}A_j\, x(t-j\tau) + \sum_{j=0}^{s}B_j\, u(t-j\tau)\\ y(t) &= \sum_{j=0}^{s}C_j\, x(t-j\tau),\end{aligned}\tag{5.1}$$

with τ constant, always admits a strongly observable realization of the same order n, since the input–output equation is again of order n and of retarded type. Surprisingly, this is no longer true in the nonlinear case, as will be discussed later on.
Let us now consider the nonlinear time-delay system

$$\Sigma:\ \begin{cases} \dot x_{[0]} = F(x_{[s]}) + \displaystyle\sum_{j=1}^{m}\sum_{i=0}^{l} G_{ji}(x_{[s]})\,u_{[0],j}(-i)\\ y_{[0]} = H(x_{[s]}).\end{cases}\tag{5.2}$$

The following definition is in order (Califano and Moog 2020).

Definition 5.1 System (5.2) with x(t) ∈ ℝ^n is said to be weakly observable if, setting

$$\begin{pmatrix} dy\\ d\dot y\\ \vdots\\ dy^{(n-1)}\end{pmatrix} = O(\cdot,\delta)\,dx + G(\cdot,\delta)\begin{pmatrix} du\\ d\dot u\\ \vdots\\ du^{(n-2)}\end{pmatrix},$$

the observability matrix O(x, u, …, u^{(n−2)}, δ) has rank n over K(δ].
System (5.2) is said to be strongly observable if the observability matrix O(x, u, …, u^{(n−2)}, δ) is unimodular, that is, if it has a polynomial inverse in δ.
Based on this definition, let us examine the next example, borrowed from Garcia-Ramirez et al. (2016a).
Example 5.1 Consider the system

   ẋ(t) = x(t − 1)u(t)
   y(t) = x(t) + x(t − 1).    (5.3)

The observability matrix is obtained from dy and is given by

   dy = (1 + δ)dx = O(x, δ)dx,

which shows that, according to Definition 5.1, the given system is weakly observable but not strongly observable, since 1 + δ does not have a polynomial inverse.
The main difference between the definitions of strong and weak observability lies in the fact that the former ensures that the state at time t can be reconstructed from the observation output and its first n − 1 time derivatives at time t. However, referring to the example, which shows that the system is not strongly observable, it is also immediate to notice that considering a larger number of derivatives allows the state to be reconstructed for almost all input sequences. In fact, if one considers also the first-order derivative

   ẏ = x(−1)u + x(−2)u(−1), (5.4)

then after standard computations one gets that x(t) can be recovered as a function of the output, its first-order derivative, and the input, whenever u(t) ≠ u(t − τ). In fact, one gets that

$$x(0) = y(0) + \frac{\dot y(0) - u(-1)\,y(-1)}{u(-1) - u}.\tag{5.5}$$
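The recovery formula (5.5) can be verified symbolically by substituting y = x + x(−1) and (5.4) into its right-hand side; in the sketch below, trailing d's mark delay steps (naming is ours):

```python
import sympy as sp

# x(t), x(t-1), x(t-2) and u(t), u(t-1)
x, xd, xdd, u, ud = sp.symbols('x xd xdd u ud')

y0 = x + xd                      # y(0)
y1 = xd + xdd                    # y(-1)
ydot = xd*u + xdd*ud             # (5.4)

rhs = y0 + (ydot - ud*y1)/(ud - u)
print(sp.simplify(rhs))          # x: the state at time t is recovered
```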

It then becomes necessary to introduce the notion of regular observability in order to characterize this property, which will be further examined in Sect. 5.2.

5.1 Decomposing with Respect to Observability

While in the delay-free case it is always possible to decompose a nonlinear system into two subsystems, one completely observable and the other representing the unobservable part, this cannot be achieved, in general, in the delay case, since the output function and its derivatives may not be suitable for defining a bicausal change of coordinates. In this section, we focus on the conditions under which such a decomposition exists. Before going into the details, we first recall the conditions which determine whether a system is weakly or strongly observable. The following results hold true (Califano and Moog 2020).
Proposition 5.1 System (5.2) with x(t) ∈ ℝ^n is weakly observable if and only if

   rank (span K(δ] {dx} ∩ span K(δ] {dy, …, dy^{(n−1)}, du, …, du^{(n−2)}}) = n.

Proposition 5.2 System (5.2) with x(t) ∈ ℝ^n is strongly observable if and only if

   span K(δ] {dx} ⊂ span K(δ] {dy, …, dy^{(n−1)}, du, …, du^{(n−2)}}.

Assume now that the system is neither strongly nor weakly observable. This means that there exists an index k < n such that

   rank K(δ] O(x, ·, δ) = k < n.

If such a condition is verified, then one may try to decompose the system into an observable and a nonobservable subsystem.

5.1.1 The Case of Autonomous Systems

Let us thus consider the autonomous system (that is, without control)

$$\dot x_{[0]} = F(x_{[s]}),\qquad y_{[0]} = h(x),\tag{5.6}$$

and let us assume that for the given system the observability matrix has rank k. Then

$$T_1(x,\delta)\,dx = \begin{pmatrix} dy(x)\\ \vdots\\ dy^{(k-1)}(x)\end{pmatrix}\tag{5.7}$$

has full row rank, and one may wonder whether the output and its derivatives could be used to define a bicausal change of coordinates. The following result holds true.
Proposition 5.3 Let k be the rank of the observability matrix for the dynamics (5.6). Then there exists a basis completion χ = ψ(x) such that

$$\begin{pmatrix} z_1\\ \vdots\\ z_k\\ \chi\end{pmatrix} = \begin{pmatrix} y\\ \vdots\\ y^{(k-1)}\\ \psi\end{pmatrix}$$

is a bicausal change of coordinates if and only if T1(x, δ) in (5.7) is closed and its right-annihilator is causal.

Example 5.2 Let us consider again the dynamics

   ẋ1 = 0
   ẋ2 = 0
   y = x1 x2(−1) + x2 x1(−1),

with observability matrix

$$O(x,\delta) = \begin{pmatrix} x_2(-1) + x_2\,\delta & x_1(-1) + x_1\,\delta\\ 0 & 0\end{pmatrix}.$$

The matrix has clearly rank 1, so the system is not completely observable. Let us now check the right-annihilator of dy to see whether it can be used to define a bicausal change of coordinates. As a matter of fact, one easily gets that
 
$$\ker dy = \operatorname{span}\begin{pmatrix} -x_1(-1) - x_1(+1)\,\dfrac{x_2\,x_1(-2) - x_1\,x_2(-2)}{x_2(+1)\,x_1(-1) - x_2(-1)\,x_1(+1)}\,\delta\\[2mm] \hphantom{-}x_2(-1) + x_2(+1)\,\dfrac{x_2\,x_1(-2) - x_1\,x_2(-2)}{x_2(+1)\,x_1(-1) - x_2(-1)\,x_1(+1)}\,\delta\end{pmatrix},$$

which is not causal and cannot be rendered causal. As a consequence, dy cannot be used to define a bicausal change of coordinates.
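That the displayed vector indeed annihilates dy can be checked mechanically. The sketch below represents each entry as a δ-polynomial coefficient list and encodes the skew rule δ·c(t) = c(t−1)·δ through a substitution map (the symbols x1_k for x1(t+k) and the helper delay are ours):

```python
import sympy as sp

# x_j[k] stands for x_j(t+k); k runs from +1 (advance) down to -3
ks = range(1, -4, -1)
X1 = {k: sp.Symbol(f'x1_{k}') for k in ks}
X2 = {k: sp.Symbol(f'x2_{k}') for k in ks}
shift = {X1[k]: X1[k - 1] for k in range(1, -3, -1)}
shift.update({X2[k]: X2[k - 1] for k in range(1, -3, -1)})

def delay(e):
    """Apply delta to a coefficient: every x(t+k) becomes x(t+k-1)."""
    return e.subs(shift, simultaneous=True)

# dy = (x2(-1) + x2 d) dx1 + (x1(-1) + x1 d) dx2, coefficients listed by degree
dy = [[X2[-1], X2[0]], [X1[-1], X1[0]]]

Q = (X2[0]*X1[-2] - X1[0]*X2[-2]) / (X2[1]*X1[-1] - X2[-1]*X1[1])
ker = [[-X1[-1], -X1[1]*Q], [X2[-1], X2[1]*Q]]     # the kernel vector above

# skew product (a0 + a1 d)(b0 + b1 d) = a0 b0 + (a0 b1 + a1 b0(-1)) d + a1 b1(-1) d^2
total = [sp.Integer(0)]*3
for a, b in zip(dy, ker):
    total[0] += a[0]*b[0]
    total[1] += a[0]*b[1] + a[1]*delay(b[0])
    total[2] += a[1]*delay(b[1])
print([sp.simplify(c) for c in total])             # [0, 0, 0]
```

All three δ-coefficients vanish, confirming the annihilation; the advances x(+1) in the coefficients are precisely what makes this kernel non-causal.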

Example 5.3 Let us consider the dynamics

   ẋ1 = x1(−1) + x2(−1) − x2(−2) + x1(−1)x2²(−2) − x2³(−2)
   ẋ2 = x1 x2²(−1) + x2 − x2³(−1)
   y = x1 − x2(−1).

In this case, the observability matrix is

$$O(x,\delta) = \begin{pmatrix} 1 & -\delta\\ \delta & -\delta^2\end{pmatrix},$$

which has rank 1. dy is closed and

$$\ker dy = \begin{pmatrix} \delta\\ 1\end{pmatrix},$$

which is causal. It can thus be used to define the desired bicausal change of coordinates. A possible solution is
   
$$\begin{pmatrix} z\\ \chi\end{pmatrix} = \begin{pmatrix} x_1 - x_2(-1)\\ x_2\end{pmatrix}.$$

In the new coordinates, the given system reads

   ż = z(−1)
   χ̇ = z χ²(−1) + χ
   y = z,

and we see that the system is split into an observable subsystem and an unobservable one.
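The decomposition can be confirmed with a few symbolic substitutions; trailing d's mark delay steps (naming is ours):

```python
import sympy as sp

# x1(t), x2(t) and their one- and two-step delays
x1, x2, x1d, x2d, x1dd, x2dd = sp.symbols('x1 x2 x1d x2d x1dd x2dd')

f1 = x1d + x2d - x2dd + x1d*x2dd**2 - x2dd**3   # x1'
f2 = x1*x2d**2 + x2 - x2d**3                    # x2'
f2d = x1d*x2dd**2 + x2d - x2dd**3               # x2'(t-1), i.e. f2 delayed once

# z = y = x1 - x2(-1): its derivative should equal z(t-1)
z_dot = f1 - f2d
print(sp.simplify(z_dot - (x1d - x2dd)))        # 0

# chi = x2: its derivative should read z*chi(-1)^2 + chi
print(sp.simplify(f2 - ((x1 - x2d)*x2d**2 + x2)))   # 0
```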

5.2 On Regular Observability for Time-Delay Systems

As already underlined in the introductory section of this chapter, the notion of regular observability is introduced to account for the case in which the state can be reconstructed from the output, the input, and their derivatives up to an order which is greater than the state dimension. As will be shown, this has important implications for the realization problem.

Definition 5.2 System (5.2) is said to be regularly observable if there exists an integer N ≥ n such that, setting
$$\begin{pmatrix} dy\\ d\dot y\\ \vdots\\ dy^{(N-1)}\end{pmatrix} = O_e(\cdot,\delta)\,dx + G_e(\cdot,\delta)\begin{pmatrix} du\\ d\dot u\\ \vdots\\ du^{(N-2)}\end{pmatrix},\tag{5.8}$$

the extended observability matrix O_e(x, u, …, u^{(N−2)}, δ) has rank n and admits a polynomial left inverse.

Let us underline that the previous definition of regular observability implies not only that the state can be recovered from the output, the input, and their derivatives, but also that the obtained function is causal. As an example, consider the system

   ẋ(t) = u(t)
   y(t) = x(t − 1).

Such a system is weakly observable, but not regularly observable, since x(t) = y(t + 1), which is not causal.
Note that strong observability implies regular observability, and the latter yields weak observability.
Necessary and sufficient conditions can now be given to test regular observability. One has the following result, whose proof is found in Califano and Moog (2020).
Proposition 5.4 System (5.2) is regularly observable if and only if there exists an integer N ≥ n such that

   span K(δ] {dx(t)} ⊂ span K(δ] {dy(t), …, dy^{(N−1)}(t), du(t), …, du^{(N−2)}(t)}.

Example 5.1 contin'd. For system (5.3), N = 2 since using Eqs. (5.3) and (5.4) yields

$$O_e(\cdot,\delta) = \begin{pmatrix} 1+\delta\\ u\delta + u(-1)\delta^2\end{pmatrix}$$

and, accordingly, for u ≠ u(−1),

$$T_0(\cdot,\delta) = \begin{pmatrix} 1 + \dfrac{u(-1)}{u-u(-1)}\,\delta & \ \dfrac{1}{u(-1)-u}\end{pmatrix}$$

represents a polynomial left inverse of O_e, thus proving that (5.3) is regularly observable, as Proposition 5.4 holds true. In fact, the state at time t can be reconstructed

through (5.5) whenever u = u(−1). Of course, u = u(−1) is a singular input. One


can easily verify that the system is only weakly observable.
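The left-inverse property T0·Oe = 1 involves the non-commutative product rule δ·c(t) = c(t−1)·δ, which is easy to mechanize; the sketch below (coefficient-list representation and symbol names u0, …, u3 for u(t), …, u(t−3) are ours) confirms the claim:

```python
import sympy as sp

u0, u1, u2, u3 = sp.symbols('u0 u1 u2 u3')     # u(t), u(t-1), u(t-2), u(t-3)
shift = {u0: u1, u1: u2, u2: u3}               # one delay step on coefficients

def polymul(a, b):
    """Skew product of delta-polynomials given as coefficient lists
    (lowest degree first), with  delta * c(t) = c(t-1) * delta."""
    out = [sp.Integer(0)]*(len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            for _ in range(i):                 # bj gets delayed i times
                bj = bj.subs(shift, simultaneous=True)
            out[i + j] += ai*bj
    return out

Oe_rows = ([sp.Integer(1), sp.Integer(1)],     # 1 + delta
           [sp.Integer(0), u0, u1])            # u*delta + u(-1)*delta^2
T0_row = ([sp.Integer(1), u1/(u0 - u1)],       # 1 + u(-1)/(u - u(-1)) delta
          [1/(u1 - u0)])                       # constant second entry

prods = [polymul(t, o) for t, o in zip(T0_row, Oe_rows)]
total = [sp.simplify(sum(col)) for col in zip(*prods)]
print(total)                                   # [1, 0, 0]: T0 * Oe = 1
```

Note the delayed coefficient u(−2) that would appear in a longer product: the skew rule, not ordinary polynomial multiplication, is what makes the δ²-terms cancel.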
Finally, one may inquire about the difference between strong and regular observability. This is clarified by the following result (Califano and Moog 2020).
Corollary 5.1 System (5.2) is regularly observable and not strongly observable only if

   dy^{(n)} ∉ span K(δ] {dy, …, dy^{(n−1)}, du, …, du^{(n−1)}}.

The different properties of weak, strong, and regular observability have important implications for the input–output representation. In fact, it is easily verified that the following result holds true.
Theorem 5.1 An observable SISO system with a retarded-type state-space realization of order n admits a retarded-type input–output equation of order n if and only if

   dy^{(n)} ∈ span K(δ] {dy, …, dy^{(n−1)}, du, …, du^{(n−1)}}, (5.9)

that is, if the system is strongly observable, or weakly observable but not regularly observable.
Theorem 5.1 completes the statement of Theorem 2 in Califano and Moog (2020). Proposition 5.5 follows from Theorem 5.1 and is peculiar to nonlinear time-delay systems, as already anticipated.
Proposition 5.5 A weakly observable SISO system with a retarded-type state-space realization of order n, such that

   dy^{(n)} ∉ span K(δ] {dy, …, dy^{(n−1)}, du, …, du^{(n−1)}},

admits a neutral-type input–output equation of order n and a retarded-type input–output equation of order n + 1 (and not smaller).
Example 5.1 (cont'd) The dynamics (5.3) is strongly controllable for x(−1) ≠ 0 and weakly observable. For the given retarded-type system of dimension n = 1, one has

dy = (1 + δ)dx
dẏ = (1 + δ)uδ dx + (1 + δ)x(−1)du.

Clearly, dẏ ∉ spanK(δ] {dy, du}; thus, according to Proposition 5.5, the system admits a neutral-type input–output equation of order one, which for u ≠ u(−1) is

(u − u(−1))ẏ(−1) − (u − u(−1))y(−2)u(−2) + (ẏ − y(−1)u)(u(−1) − u(−2)) = 0,   (5.10)
and a second-order retarded-type input–output equation given by

ÿ = x(−1)u̇ + x(−2)u̇(−1) + x(−2)u(−1)u + x(−3)u(−2)u(−1)
  = y(−2)u(−1)u + (u̇(−1) − u̇)(ẏ − y(−1)u)/(u(−1) − u)
    + y(−1)u̇ + u(−1)[u(−2) − u](ẏ(−1) − y(−2)u(−1))/(u(−2) − u(−1)).
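As a sanity check, both input–output equations can be verified numerically. Since (5.3) itself lies outside this excerpt, the sketch below assumes the dynamics ẋ(t) = x(t−1)u(t) with y(t) = x(t) + x(t−1) — an assumption inferred from the differentials dy and dẏ above — and treats the delayed samples as independent real numbers.

```python
# Numerical spot-check of (5.10) and of the second-order retarded
# equation, ASSUMING (5.3) is x'(t) = x(t-1)u(t), y = x(t) + x(t-1)
# (inferred from dy and dy' in the text, not restated here).
import random

rng = random.Random(0)
x = {k: rng.uniform(-2, 2) for k in range(0, 4)}    # x[k] = x(t-k)
u = {k: rng.uniform(-2, 2) for k in range(0, 3)}    # u[k] = u(t-k)
du = {k: rng.uniform(-2, 2) for k in range(0, 2)}   # du[k] = u'(t-k)

y = {k: x[k] + x[k+1] for k in range(0, 3)}                  # y(t-k)
dy = {k: x[k+1]*u[k] + x[k+2]*u[k+1] for k in range(0, 2)}   # y'(t-k)
# y''(t), obtained by differentiating y'(t) once more along the dynamics
ddy = x[1]*du[0] + x[2]*du[1] + x[2]*u[1]*u[0] + x[3]*u[2]*u[1]

# first-order neutral-type equation (5.10): identically zero
lhs = ((u[0]-u[1])*dy[1] - (u[0]-u[1])*y[2]*u[2]
       + (dy[0]-y[1]*u[0])*(u[1]-u[2]))
assert abs(lhs) < 1e-12

# second-order retarded-type input-output equation for y''(t)
rhs = (y[2]*u[1]*u[0] + (du[1]-du[0])*(dy[0]-y[1]*u[0])/(u[1]-u[0])
       + y[1]*du[0] + u[1]*(u[2]-u[0])*(dy[1]-y[2]*u[1])/(u[2]-u[1]))
assert abs(ddy - rhs) < 1e-12
```

The random samples stand in for an arbitrary trajectory; the two assertions hold because both relations are algebraic identities along the assumed dynamics.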

The previous example shows an important issue: to the one-dimensional retarded-type system (5.3) one can associate either a neutral-type first-order input–output equation or a second-order retarded-type input–output equation. In the second case, using, for example, the procedure in Kaldmäe and Kotta (2018), one can compute a second-order retarded-type system associated with the input–output equation. To obtain it, one has to consider the filtration of submodules

Hi+1 = {ω ∈ Hi | ω̇ ∈ Hi}

with H1 = spanK(δ] {dy^(j), du^(j), j ∈ [0, n − 1]}, and check whether Hn+1 is integrable. In our case n = 2, so we have to verify whether H3 is integrable. For u ≠ u(−1),

H1 = spanK(δ] {dy, dẏ, du, du̇},
H2 = spanK(δ] {dy, d[(ẏ − y(−1)u)/(u(−1) − u)], du},
H3 = spanK(δ] {dy, d[(ẏ − y(−1)u)/(u(−1) − u)]}.
Since H3 is integrable, the state variable candidates result from its integration. So we can set x1 = y and x2 = (ẏ − uy(−1))/(u(−1) − u), and we get the second-order retarded-type system

ẋ1 = x1(−1)u + x2 [u(−1) − u]
ẋ2 = x2(−1)u(−2)   (5.11)
y = x1.
The associated observability matrix is unimodular provided again that u(−1) ≠ u, that is, the system is not solicited with a periodic input of period τ. As for the accessibility property, standard computations show that the system is weakly accessible.

We thus started with a one-dimensional retarded system, strongly accessible for x(−1) ≠ 0 and weakly observable for u ≠ u(−1), and we ended up with a two-dimensional system which is weakly accessible and strongly observable, under appropriate conditions. This apparent contradiction is due to the fact that we are neglecting the relation established by the first-order neutral-type input–output equation, which links the state variables through the relation

p(x) = x2 + x2(−1) − x1(−2) = 0.   (5.12)
If this condition is satisfied, while u ≠ 0 and α ≠ 0, then the accessibility matrix has rank 1, and, after standard computations, one gets that dp(x) = (1 + δ)dx2 − δ²dx1 is in the left annihilator of R. Using the results of Chap. 4, we can then consider the bicausal change of coordinates z1 = x2 + x2(−1) − x1(−2), z2 = x1 − x1(−1) + x2. In these coordinates, the system then reads

ż1 = z1(−1)u(−2)
ż2 = (z2(−1) − z1)u + z1 u(−1)
y = z2 + z2(−1) − z1,

and since by assumption z1 = 0, we get

ż1 = 0
ż2 = z2(−1)u
y = z2 + z2(−1),

which highlights our weakly (and, for u ≠ u(−1), also regularly) observable and strongly accessible subsystem of dimension one.
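The bicausal change of coordinates above can also be spot-checked numerically. The sketch below treats delayed samples as independent numbers, shifts the right-hand sides of (5.11) accordingly, and verifies the two transformed equations.

```python
# Numerical spot-check that z1 = x2 + x2(-1) - x1(-2) and
# z2 = x1 - x1(-1) + x2 map system (5.11) into the form of the text;
# delayed values are treated as independent samples.
import random

rng = random.Random(1)
x1 = {k: rng.uniform(-2, 2) for k in range(0, 4)}   # x1(t-k)
x2 = {k: rng.uniform(-2, 2) for k in range(0, 3)}   # x2(t-k)
u = {k: rng.uniform(-2, 2) for k in range(0, 4)}    # u(t-k)

# right-hand sides of (5.11), shifted by k time units
dx1 = {k: x1[k+1]*u[k] + x2[k]*(u[k+1] - u[k]) for k in range(0, 3)}
dx2 = {k: x2[k+1]*u[k+2] for k in range(0, 2)}

z1 = {k: x2[k] + x2[k+1] - x1[k+2] for k in range(0, 2)}
z2 = {k: x1[k] - x1[k+1] + x2[k] for k in range(0, 2)}

dz1 = dx2[0] + dx2[1] - dx1[2]
dz2 = dx1[0] - dx1[1] + dx2[0]

assert abs(dz1 - z1[1]*u[2]) < 1e-12                          # ż1 = z1(-1)u(-2)
assert abs(dz2 - ((z2[1] - z1[0])*u[0] + z1[0]*u[1])) < 1e-12  # ż2 = (z2(-1)-z1)u + z1 u(-1)
```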

Further Note on the Observer Design

The transformation of the dynamics (5.2) by a change of coordinates and a nonlinear input–output injection is solved in Califano et al. (2013). These transformations allow one to obtain delayed linear dynamics (up to nonlinear input–output injections) which are weakly observable. The design of a state observer can then be achieved using all the techniques available for linear time-delay systems (Garcia-Ramirez et al. 2016b).

5.3 Problems

1. Let the system

ẋ1(t) = x2(t) + x3(t)u(t)
ẋ2(t) = x1(t)
ẋ3(t) = 0
ẋ4(t) = 0
y(t) = x1(t) + x1(t − 1).

Check the strong, regular, and weak observability of the given system.
2. Consider the system

ẋ1(t) = u(t)
ẋ2(t) = x1(t)
ẋ3(t) = 0
y(t) = x1(t)x2(t − 1) + x1(t − 1)x2(t).

Compute ẏ(t) and ÿ(t), and the Jacobian ∂(y(t), ẏ(t), ÿ(t))/∂(x1(t), x2(t), x3(t)).
Is this system weakly observable?
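A computational aid for Problem 2 can be sketched as follows. The closed-form ẏ(t) in the code was derived by hand by differentiating y along the dynamics; the code re-derives it via the chain rule with finite-difference partials as a consistency check. It is a sketch, not the book's solution, and delayed samples are treated as independent numbers.

```python
# Hedged aid for Problem 2: check the hand-derived y'(t) by the chain
# rule, then observe that x3(t) never enters y, y', or y''.
def y(x1, x1d, x2, x2d):
    # x1d = x1(t-1), x2d = x2(t-1); y does not involve x3(t)
    return x1 * x2d + x1d * x2

def partial(f, args, i, h=1e-6):
    # central finite-difference partial derivative in argument i
    a = list(args)
    a[i] += h; up = f(*a)
    a[i] -= 2 * h; dn = f(*a)
    return (up - dn) / (2 * h)

# a sample point; u, ud are u(t), u(t-1); du0, du1 are u'(t), u'(t-1)
x1, x1d, x2, x2d = 0.3, -1.2, 0.7, 1.9
u, ud, du0, du1 = 1.1, -0.4, 0.8, 2.2
args = (x1, x1d, x2, x2d)

# chain rule: the time derivatives of (x1, x1(t-1), x2, x2(t-1)) are
# (u, u(t-1), x1, x1(t-1)), since x1' = u and x2' = x1
dy_numeric = (partial(y, args, 0) * u + partial(y, args, 1) * ud
              + partial(y, args, 2) * x1 + partial(y, args, 3) * x1d)
dy_closed = u * x2d + ud * x2 + 2 * x1 * x1d
assert abs(dy_numeric - dy_closed) < 1e-6

# differentiating once more gives y''(t); x3(t) appears nowhere, so the
# third column of the Jacobian d(y, y', y'')/d(x1, x2, x3) is zero and
# x3 cannot be reconstructed from the output.
ddy_closed = du0 * x2d + du1 * x2 + 3 * u * x1d + 3 * ud * x1
```

Since the Jacobian column associated with x3(t) vanishes identically, the finite-difference check above already suggests the answer to the weak observability question.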
Chapter 6
Applications of Integrability

In this chapter, it will be shown how the tools and methodologies introduced in the previous chapters can be used to address classical control problems, underlining also the main differences in the results with respect to the delay-free case.
Consider the nonlinear time-delay system

ẋ(t) = f (x(t − i), u(t − i); i = 0, . . . , dmax ), (6.1)

where x(t) ∈ R^n and u(t) ∈ R^m. Also, assume that the function f is meromorphic. To simplify the presentation, the following notation is used: x(·) := (x(t), x(t − 1), …). The notation ϕ(x(·)) means that the function ϕ can depend on x(t), …, x(t − i) for some finite i ≥ 0. The same notation is used for the other variables.

6.1 Characterization of the Chained Form with Delays

One classical model considered in the delay-free context is the chained form, first proposed by Murray and Sastry (1993): it has been shown that many nonholonomic systems in mobile robotics, such as the unicycle, n-trailers, etc., can be converted into this form. It also turned out that some effective control strategies could be developed for these systems because of the special structure of the chained system. A characterization of its existence in the delay-free case is found in Sluis et al. (1994). We may thus be interested in understanding the characterization of such a form in the delay context. In particular, hereafter we will define the conditions under which a two-input driftless time-delay nonlinear system of the form

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
C. Califano and C. H. Moog, Nonlinear Time-Delay Systems,
SpringerBriefs in Control, Automation and Robotics,
https://doi.org/10.1007/978-3-030-72026-1_6
ẋ[0] = Σ_{j=0}^{s} g1^j(x) u1(t − j) + Σ_{j=0}^{s} g2^j(x) u2(t − j)   (6.2)

can be put, under a bicausal change of coordinates z[0] = φ(x[α]), in the following chained form Σc:

ż1(t) = ui(t)
ż2(t) = z3(t − k3) ui(t − r2)
⋮
żn−1(t) = zn(t − kn) ui(t − rn−1)   (6.3)
żn(t) = uj(t),

where kℓ and rℓ−1, for ℓ ∈ [3, n], are non-negative integers, i, j ∈ [1, 2], and i ≠ j.
While the results recalled here are taken from Califano et al. (2013) and Li et al. (2011), more general Goursat forms, as well as the use of feedback laws, will instead need further investigation. A first important result, stated hereafter, is that the chained form (6.3) is actually accessible; hence, to investigate whether a system can be put in such a form, a first issue to be verified is accessibility.
Proposition 6.1 The chained form with delays Σc is accessible for u = 0.
The proof, which is left as an exercise, can be easily carried out by considering the
accessibility matrix (Li et al. 2016).
Let us now consider the differential form representation of (6.2), which is equal to

dẋ[0] = f(x, u, δ)dx[0] + g1,1(x, δ)du1,[0] + g1,2(x, δ)du2,[0]   (6.4)

and let gℓ,j(x, u, δ) = ad_{f(x,u,δ)}^{ℓ−1} g1,j(x, δ). The following conditions are necessary.
Theorem 6.1 System (6.2) can be transformed into the chained form (6.3) under a bicausal change of coordinates z[0] = φ(x) only if there exists j ∈ [1, 2] such that, setting u = ū = const and denoting by Gℓ,j(x, λ) the image of gℓ,j(x, ū, δ) when it acts on the function λ(t), the following conditions are satisfied:
1. [Gℓ,j(x, λ), gk,j(x, δ)] = 0, for k, ℓ ∈ [1, n − 1];
2. rankK(δ] Ln−2 = n, where, setting u = ū, the sequence L0 ⊂ L1 ⊂ · · · is defined as

L0 = spanK(δ] {g1,1(x, δ), g1,2(x, δ)},
Lk+1 = Lk + spanK(δ] {gk+2,j(x, ū, δ)}, k ≥ 0, j ≠ i.
The previous theorem, whose proof is found in Li et al. (2016), shows, in particular, that in the delay case only one of the input channels, the jth one, has to satisfy the straightening theorem, that is, [G1,j(x, λ), g1,j(x, δ)] = 0, while the ith channel does not have to satisfy it, still allowing a solution to the problem. This is in itself an important difference with respect to the delay-free case.
Starting from Theorem 6.1, we are now ready to give necessary and sufficient
conditions for the solvability of the problem.
Theorem 6.2 System (6.2) can be transformed into the chained form (6.3) under a bicausal change of coordinates z[0] = φ(x) if and only if the conditions of Theorem 6.1 are satisfied and additionally

(a) there exist integers 0 = l1 ≤ l2 ≤ · · · ≤ ln−1 such that, for u = ū = const,

gk,j(x, ū, δ) = ḡk,j(x, δ)δ^{lk} for k ∈ [1, n − 1],   (6.5)

while gn,j(x, ū, δ) = 0, and the submodules Δk, defined as

Δℓ = spanK(δ] {ḡ1,j(x, δ), …, ḡℓ,j(x, δ)}, ℓ ∈ [1, n − 1],
Δn = spanK(δ] {ḡ1,j(x, δ), …, ḡn−1,j(x, δ), g1,i(x, δ)},

are closed and locally of constant rank for k ∈ [1, n];

(b) denoting by G1,i(x, λ) the image of g1,i(x, δ) acting on the function λ(t), the Polynomial Lie Bracket satisfies

[G1,i(x, λ), g1,i(x, δ)] ∈ Δn−1

for all possible values of λ(k);

(c) letting Q(x, λ̄) be the image of [G1,i(x, λ), g1,i(x, δ)]|_{λ=λ̄}, then, for ℓ ∈ [1, n − 1],

[Q(x, λ̄), gℓ,j(x, δ)] ∈ spanR(δ] {gℓ+2,j(x, ū, δ)}.
While the proof, which essentially uses the properties of the Polynomial Lie Bracket, can be found in Califano et al. (2013), we prefer to discuss the interpretation of the conditions. More precisely, condition (a), together with Theorem 6.1, guarantees that the Δℓ's are involutive and that a change of coordinates connected to them can be defined. Condition (b) is weaker than the straightening theorem expected in the delay-free case, but, together with condition (c), it guarantees the particular structure of the chained form with delays.
Example 6.1 (Califano et al. 2013) Consider the dynamics

ẋ1(0) = u1(0)
ẋ2(0) = [x3(−2) − x2(−3) + x1²(−5)] u1(−3) + 2x1(−2)u1(−2)
ẋ3(0) = u2(0) + [x3(−3) − x2(−4) + x1²(−6)] u1(−4).
From the computation of its differential representation, we get that

g1,1 = (1, [x3(−2) − x2(−3) + x1²(−5)]δ³ + 2x1(−2)δ², [x3(−3) − x2(−4) + x1²(−6)]δ⁴)ᵀ,
g1,2 = (0, 0, 1)ᵀ,

f(x, u, δ) = ⎡ 0   0   0
              2u1(−3)x1(−5)δ⁵ + 2u1(−2)δ²   −u1(−3)δ³   u1(−3)δ²
              2u1(−4)x1(−6)δ⁶   −u1(−4)δ⁴   u1(−4)δ³ ⎤.
Accordingly, for the constant input ū = 1,

g2,2 = −(0, δ², δ³)ᵀ = −(0, 1, δ)ᵀ δ²,   g3,2 = 0.

We thus have that

G1,2(x, λ) = (0, 0, 1)ᵀ λ(0),   G2,2(x, λ) = −(0, λ(−2), λ(−3))ᵀ,
whereas

G1,1(x, λ) = (λ(0), [x3(−2) − x2(−3) + x1²(−5)]λ(−3) + 2x1(−2)λ(−2), [x3(−3) − x2(−4) + x1²(−6)]λ(−4))ᵀ.
It is immediate to verify that [Gℓ,2(x, λ), gk,2(x, δ)] = 0 for ℓ, k ∈ [1, 2], and all the conditions of Theorem 6.1 are fulfilled, so that one can check the remaining conditions of Theorem 6.2. We have already seen that g2,2 = ḡ2,2 δ² and that g3,2 = 0. Since also [G1,1(x, λ), g1,1(x, δ)] = 0, all the necessary and sufficient conditions are satisfied. The desired change of coordinates is then

z[0] = (x1, x2 − x1²(−2), x3 − x2(−1) + x1²(−3))ᵀ.
In these new coordinates, the system reads

ż1,[0] = u1(0)
ż2,[0] = z3(−2)u1(−3)
ż3,[0] = u2(0).
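The change of coordinates of Example 6.1 can be spot-checked numerically: the sketch below treats delayed samples as independent numbers, differentiates z2 and z3 along the dynamics, and verifies the chained-form equations.

```python
# Numerical spot-check of Example 6.1: the bicausal coordinates
# z1 = x1, z2 = x2 - x1(-2)^2, z3 = x3 - x2(-1) + x1(-3)^2 send the
# dynamics into the chained form; delayed values are independent samples.
import random

rng = random.Random(2)
x1 = {k: rng.uniform(-2, 2) for k in range(0, 7)}
x2 = {k: rng.uniform(-2, 2) for k in range(0, 5)}
x3 = {k: rng.uniform(-2, 2) for k in range(0, 4)}
u1 = {k: rng.uniform(-2, 2) for k in range(0, 5)}
u2 = rng.uniform(-2, 2)

# right-hand sides of the dynamics, shifted by k time units where needed
def dx1(k): return u1[k]
def dx2(k): return ((x3[2+k] - x2[3+k] + x1[5+k]**2) * u1[3+k]
                    + 2 * x1[2+k] * u1[2+k])
dx3 = u2 + (x3[3] - x2[4] + x1[6]**2) * u1[4]

dz2 = dx2(0) - 2 * x1[2] * dx1(2)          # d/dt [x2 - x1(-2)^2]
dz3 = dx3 - dx2(1) + 2 * x1[3] * dx1(3)    # d/dt [x3 - x2(-1) + x1(-3)^2]
z3m2 = x3[2] - x2[3] + x1[5]**2            # z3(t-2)

assert abs(dz2 - z3m2 * u1[3]) < 1e-12     # ż2 = z3(-2) u1(-3)
assert abs(dz3 - u2) < 1e-12               # ż3 = u2
```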
6.2 Input–Output Feedback Linearization

Consider the nonlinear single-input single-output (SISO) time-delay system

ẋ(t) = f(x(t − i), u(t − i); i = 0, …, dmax),
y(t) = h(x(t − i), u(t − i); i = 0, …, dmax),   (6.6)

where the input u(t) ∈ R^m and the output y(t) ∈ R^p. A causal state feedback

u(t) = α(x(t − i), v(t − i); i = 0, …, k)

is sought for some k, so that the input–output relation from the new input v(t) to the output y(t) is linear (possibly involving some delays).
6.2.1 Introductory Examples

Consider the second-order system

ẋ1(t) = f1(x(t), …, x(t − k)) + u(t)
ẋ2(t) = f2(x(t − i), u(t − i); i = 0, …, dmax)   (6.7)
y(t) = x1(t).

Whatever the functions f1 and f2 are, it is always possible to cancel f1, with a maximal loss of observability, by the causal feedback u(t) = −f1(x(t), …, x(t − k)) + v(t) to get the linear delay-free closed-loop system ẏ(t) = v(t). This mimics what is done in the delay-free case and what is known as the computed torque method in robotics.
Consider now

ẋ1(t) = a1x1(t) + a2x1(t − 1) + f1(x(t − δ), …, x(t − k)) + u(t − δ)
ẋ2(t) = f2(x(t − i), u(t − i); i = 0, …, dmax)   (6.8)
y(t) = x1(t),

where a1 and a2 are real numbers and δ is an integer greater than or equal to 2. Whatever the functions f1 and f2 are, it is still possible to cancel f1(·), again with a maximal loss of observability, by the causal feedback u(t) = −f1(x(t), …, x(t − k + δ)) + v(t) to get the linear time-delay closed-loop system ẏ(t) = a1y(t) + a2y(t − 1) + v(t − δ). This solution is different from the standard one known for delay-free nonlinear systems, as not all terms in ẏ(t) are cancelled out by feedback: the intrinsic linear terms do not need to be cancelled out.
Consider further

ẋ1(t) = a1x1(t) + a2x2(t − 1) + f1(x(t − δ), …, x(t − k)) + u(t − δ)
ẋ2(t) = a3x1(t) + f1(x(t − δ − 1), …, x(t − k)) + u(t − δ − 1)   (6.9)
y(t) = x1(t),

where a1, a2, and a3 are real numbers and δ ≥ 2 is an integer. Due to the causality of the feedback, there is no way to cancel out the term a2x2(t − 1). Thus, there will be no loss of observability under any causal state feedback. Still, the causal feedback u(t) = −f1(x(t), …, x(t − k + δ)) + v(t) yields the linear time-delay closed-loop system ÿ(t) = a1ẏ(t) + a2a3y(t − 1) + v̇(t − δ) + a2v(t − δ − 1).

6.2.2 Static Output Feedback Solutions

In this section, we are looking for a causal static output feedback

u(t) = α(y(t − i), v(t − i); i = 0, …, k)

and a bicausal change of coordinates z[0] = φ(x[α]) which transform the system (6.6) into

ż[0] = Σ_{j=0}^{s̄} Aj z[0](−j) + B v[s̄]
y[0] = Σ_{j=k}^{s̄+k} Cj−k z[0](−j).   (6.10)
To solve this problem, it is necessary that the dynamics (6.6) are intrinsically
linear up to nonlinear input–output injections ψ(y(·), u(·)) in the sense of Califano
et al. (2013). Further conditions are, however, to be fulfilled so that the number of
nonlinearities to be cancelled out is smaller than the number of available control
variables. This is an algebraic condition which is formalized as follows.
The solvability of the nonlinear equations involving the input requires

dim cls IR[δ] {dy, dψ} ≤ m + p, (6.11)

where cls{·} denotes the closure of a given submodule. Recall that the closure of
a given submodule M of rank r is the largest submodule of rank r containing M
according to Definition 1.2.
Consider, for instance,

M = span{dx1(t) + x3(t)dx2(t − 1) + dx1(t − 2) + x3(t − 2)dx2(t − 3)},
which has rank r = 1. Its closure N is spanned by dx1(t) + x3(t)dx2(t − 1) and thus has rank 1 as well. Clearly, M ⊂ N but N ⊄ M.

The solvability of those equations in u(t), say ψ(y(·), u(·)) = v(t), does not yet ensure that the solution is causal.
Causality of the static output feedback is obtained from the condition

cls IR[δ] {dy, dψ} = span IR[δ] {dy, dϕ(u(t), y(t − j))}   (6.12)

so that {dy(t), dϕ(u(t), y(−τj))} is a basis of the module cls IR[δ] {dy, dψ}, for a finite number of j's and with ∂ϕ/∂u(t) ≢ 0.
The above discussion is summarized as follows.
Theorem 6.3 System (6.6) admits a static output feedback solution to the input–
output linearization problem if and only if
(i) (6.6) is linearizable by input–output injections ψ,
(ii) and conditions (6.11) and (6.12) are satisfied.
The following illustrative examples are in order.
Example 6.2 Consider the system

ẋ1(t) = x2(t − 3) − sin x1(t − 1) + u(t)
ẋ2(t) = 2 sin x1(t − 1) − 2u(t)   (6.13)
y(t) = x1(t).

The static output feedback u(t) = sin y(t − 1) + v(t) yields the closed-loop system

ẋ1(t) = x2(t − 3) + v(t)
ẋ2(t) = −2v(t)   (6.14)
y(t) = x1(t),

which is a linear time-delay system. Note that system (6.13) is already in the form (6.10), and (6.11) is fulfilled as well with ψ = u(t) − sin y(t − 1).

Example 6.3 Consider now

ẋ1(t) = x2(t − 3) − sin x1(t − 1) + u(t − 1)
ẋ2(t) = 2 sin x1(t − 1) − 2u(t)   (6.15)
y(t) = x1(t).

Though the system (6.15) is in the form (6.10), two input–output injections are required: ψ1 = u(t − 1) − sin y(t − 1) and ψ2 = 2 sin y(t − 1) − 2u(t). Thus, (6.11) is not fulfilled and there is no static output feedback solution to the input–output linearization problem.
Example 6.4 A third example is as follows:

ẋ1(t) = x2(t − 3) − sin x1(t − 1) − 3 sin x1(t − 2) + u(t) + 3u(t − 1)
ẋ2(t) = 2 sin x1(t − 1) − 2u(t)   (6.16)
y(t) = x1(t).

System (6.16) is again in the form (6.10) thanks to the two input–output injections ψ1 = u(t) + 3u(t − 1) − sin y(t − 1) − 3 sin y(t − 2) and ψ2 = 2 sin y(t − 1) − 2u(t). However,

cls IR[δ] {dy, dψ1, dψ2} = span IR[δ] {dy, d[u(t) − sin y(t − 1)]},

so that (6.11) is fulfilled and the linearizing output feedback solution is again u(t) = sin y(t − 1) + v(t). The linear closed loop reads

ẋ1(t) = x2(t − 3) + v(t) + 3v(t − 1)
ẋ2(t) = −2v(t)   (6.17)
y(t) = x1(t).
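The cancellation in Example 6.4 can be checked numerically: substituting the feedback into the two injections leaves only linear terms in v, giving exactly the closed loop (6.17). Delayed samples are treated as independent numbers.

```python
# Numerical spot-check of Example 6.4: with u(t) = sin y(t-1) + v(t),
# psi1 collapses to v(t) + 3v(t-1) and psi2 to -2v(t).
import math
import random

rng = random.Random(3)
y1, y2 = rng.uniform(-2, 2), rng.uniform(-2, 2)   # y(t-1), y(t-2)
v0, v1 = rng.uniform(-2, 2), rng.uniform(-2, 2)   # v(t), v(t-1)

u0 = math.sin(y1) + v0                             # u(t)
u1 = math.sin(y2) + v1                             # u(t-1), same law shifted

psi1 = u0 + 3 * u1 - math.sin(y1) - 3 * math.sin(y2)
psi2 = 2 * math.sin(y1) - 2 * u0

assert abs(psi1 - (v0 + 3 * v1)) < 1e-12   # x1-equation: x2(t-3) + v(t) + 3v(t-1)
assert abs(psi2 - (-2 * v0)) < 1e-12       # x2-equation: -2v(t)
```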

Example 6.5 The next example is as follows:

ẋ(t) = x(t − 3) − sin x(t − 1) + u(t) + u(t − 1)
y(t) = x(t).   (6.18)

Condition (6.12) is not fulfilled, and there is no static output feedback which cancels out the nonlinearity in the system.

6.2.3 Hybrid Output Feedback Solutions

Whenever the conditions above are not fulfilled, there is no static output feedback able to cancel out all nonlinearities. Nevertheless, a broader class of output feedbacks may still offer a solution (Márquez-Martínez and Moog 2004).

Consider, for instance, the system

ẋ(t) = −sin x(t − 2) + u(t) + u(t − 1)
y(t) = x(t).

Condition (6.12) is not fulfilled, as ϕ involves the control variable at different time instants. The previous elementary tools suggest implementing the implicit control law u(t) = −u(t − 1) + sin y(t − 2) + v(t), which would just yield ẏ(t) = v(t). A practical realization of this control law is obtained by setting z(t) = u(t − 1), in the form of a hybrid output feedback, since it also involves the shift operator. Note that z(t) = u(t − 1) is nothing but a buffer. The resulting compensator is named a pure shift dynamic output feedback in Márquez-Martínez and Moog (2004), and for the example above it reads as

z(t + 1) = −z(t) + sin y(t − 2) + v(t)
u(t) = −z(t) + sin y(t − 2) + v(t),

which yields ẏ(t + 1) = v(t + 1), i.e., linearity of the input–output relation is obtained after some time only.
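The buffer mechanism above can be illustrated with a minimal simulation. Step size, horizon, and the history functions are arbitrary choices for illustration; with the compensator in the loop, the increment of x at each Euler step is exactly h·v, so the output grows linearly with the new input.

```python
# Minimal simulation sketch of the pure shift dynamic output feedback:
# the buffer z(t) = u(t-1) realizes u(t) = -u(t-1) + sin y(t-2) + v(t)
# causally, and the closed loop integrates y' = v exactly.
# Explicit Euler with h = 0.01; step size and histories are arbitrary.
import math

h = 0.01
d1, d2 = int(1 / h), int(2 / h)     # delays of 1 and 2 time units, in steps
N = int(3 / h)                       # simulate over [0, 3]
v = 0.7                              # constant new input v(t)

x = [math.sin(0.5 * k * h) for k in range(-d2, 1)]  # arbitrary x history on [-2, 0]
u = [0.3 for _ in range(-d1, 0)]                     # arbitrary u history on [-1, 0)

for n in range(N):                  # current time t = n*h
    xd2 = x[n]                      # x(t-2): histories stored with offset d2
    ud1 = u[n]                      # u(t-1): the buffered value z(t)
    un = -ud1 + math.sin(xd2) + v   # compensator output u(t)
    u.append(un)
    # plant: x'(t) = -sin x(t-2) + u(t) + u(t-1), which reduces to x' = v
    x.append(x[n + d2] + h * (-math.sin(xd2) + un + ud1))

# y(3) = y(0) + 3v, up to floating-point round-off
assert abs(x[-1] - (x[d2] + N * h * v)) < 1e-9
```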
Accepting this broader class of output feedbacks allows one to weaken the conditions of Theorem 6.3.

Theorem 6.4 System (6.6) admits a hybrid output feedback solution to the input–output linearization problem if and only if
(i) (6.6) is linearizable by input–output injections;
(ii) condition (6.11) is satisfied;
(iii) cls IR[δ] {dy, dψ} = span IR[δ] {dy, dϕ(u(t), …, u(t − i), y(t − j))}, so that {dy(t), dϕ(u(t), …, u(t − i), y(t − j))} is a basis of the module cls IR[δ] {dy(t), dψ}, for a finite number of j's and with ∂ϕ/∂u(t) ≢ 0.

Note that the digital implementation of the stabilizing control in Monaco et al.
(2017) goes through a trick similar to a buffer equation (see Eq. (18) in Monaco et al.
2017).

Further solutions to input–output feedback linearization, including causal static state feedbacks and causal dynamic feedbacks, are available in the literature (Germani et al. 1996; Márquez-Martínez and Moog 2004; Oguchi et al. 2002) and weaken the conditions of solvability of the problem.

6.3 Input-State Linearization

In general, input–output linearization just partially linearizes the system. This is known to be a drawback for delay-free systems which have some unstable zero dynamics. The situation is obviously the same in the case of nonlinear time-delay systems.

In such a case, input-to-state feedback linearization, or full linearization, is especially helpful because the full state remains "under control" in the closed loop and can be further controlled using the more standard linear techniques available for linear time-delay systems. Note that full linearization via dynamic state feedback was popularized as, and is also known as, flatness of control systems.
The input-to-state linearization via static state feedback is stated as follows. Given

ẋ(t) = f(x(t − i), u(t − i); i = 0, …, dmax),   (6.19)

find, if possible, a causal state feedback u(t) = α(x(t − i), v(t − i); i = 0, …, k), a bicausal change of coordinates z(t) = ϕ(x(t − i); i = 0, …, ℓ), and K ∈ IN such that the dynamics is linear in the z coordinates after some time K:

ż(t) = A(δ)z + B(δ)v

for all t ≥ K, where the pair (A(δ), B(δ)) is weakly controllable.

6.3.1 Introductory Example

Consider the system

ẋ1(t) = x2(t − 1)
ẋ2(t) = 3x1(t − 2) + cos(x1²(t − 2)) + x2(t − 1)u(t − 1).

Pick the "output function" y(t) = x1(t), which has a full relative degree equal to 2. A standard method valid for delay-free systems consists in cancelling by feedback the full right-hand side of ÿ(t). Due to the causality constraint, this cannot be done in this special case. Nevertheless, one takes advantage of some pre-existing suitable structure and cancels only the nonlinear terms. The obvious solution is

u(t) = [v(t) − cos(x1²(t − 1))]/x2(t),

and the closed-loop system reads

ẋ(t) = ⎡ 0   δ ⎤ x(t) + ⎡ 0 ⎤ v(t),
       ⎣ 3δ²  0 ⎦       ⎣ δ ⎦

which is a weakly controllable linear system.
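The cancellation behind this closed loop can be spot-checked numerically: evaluating the feedback one step in the past and substituting it into ẋ2 leaves only the linear terms. Delayed samples are treated as independent numbers, with x2(t−1) taken nonzero so the feedback is well defined.

```python
# Numerical spot-check of the introductory example: with
# u(t) = (v(t) - cos(x1(t-1)^2))/x2(t), the term x2(t-1)u(t-1)
# reduces to v(t-1) - cos(x1(t-2)^2), so x2'(t) = 3x1(t-2) + v(t-1).
import math
import random

rng = random.Random(4)
x1m2 = rng.uniform(-2, 2)        # x1(t-2)
x2m1 = rng.uniform(0.5, 2)       # x2(t-1), kept away from zero
vm1 = rng.uniform(-2, 2)         # v(t-1)

# the feedback law evaluated one time unit in the past gives u(t-1)
um1 = (vm1 - math.cos(x1m2**2)) / x2m1

dx2 = 3 * x1m2 + math.cos(x1m2**2) + x2m1 * um1
assert abs(dx2 - (3 * x1m2 + vm1)) < 1e-12
```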

6.3.2 Solution

Focus on the single-input case (Baibeche and Moog 2016). Following the methodology valid for delay-free systems, one has to search for an output function candidate which has full relative degree, i.e., equal to n. Then, apply the results valid for input–output linearization. Such an output function is called a "linearizing output".
This methodology will, however, yield conditions which are only sufficient for feedback linearization in the case of time-delay nonlinear systems.
Nevertheless, a necessary and sufficient condition for the existence of a full-relative-degree output candidate can be derived thanks to the Polynomial Lie Bracket.
Lemma 6.1 There exists h(x(·)) with full relative degree n if and only if
(i) rank R̄n−1 (x, 0, δ) = n − 1,
(ii) rank R̄n (x, 0, δ) = n,
where Ri is defined by (4.8).
Condition (ii) ensures the accessibility of the system of interest, so that (i) yields the existence of an output function candidate with finite relative degree n.
Theorem 6.5 Given system (6.19), there exists a causal static state feedback which solves the input-state linearization problem if the conditions of Lemma 6.1 are fulfilled and, for some dh ⊥ R̄n−1(x, 0, δ),
(i) dh^(n) ∈ span IR[δ] {d[a(x(·)) + b(x(·))u(t − i)]} for some i ∈ IN and a, b ∈ K;
(ii) ∂[a(x(·)) + b(x(·))u(t − i)]/∂x(t − τ) = 0 for 0 ≤ τ < i; and
(iii) there exists a polynomial matrix D(δ) in IR[δ]^{n×n} such that

(dh, dḣ, …, dh^(n−1))ᵀ = D(δ)M(x, δ)dx

for some unimodular matrix M(x, δ).


Note that condition (i) in Theorem 6.5 restricts the class of systems under interest as
the control variable is subject to one single delay i. Then, condition (ii) ensures that
the linearizing feedback
v(t) − a(x(·))
u(t) =
b(x(·))

is causal. The closed loop then yields a linear time-delay differential input–output
equation.

6.4 Normal Form

In this section, with reference to the SISO case, we seek a bicausal change of coordinates z(t) = ϕ(x(·)) and a regular causal feedback u(t) = α(x(·), v(·)) which allow us to transform the given dynamics (6.1), with output y = h(x), into the form

ż1(t) = Σ_{j=0}^{s̄} Aj z1(−j) + Σ_{j=0}^{s̄} Bj v(−j)
ż2(t) = f2(z1(·), z2(·)),   (6.20)
y = Σ_{j=0}^{s̄} Cj z1(−j),
where the subsystem defined by the z1 variable is linear, weakly or strongly observable, and weakly or strongly accessible.

As a matter of fact, one has to find a bicausal change of coordinates and a regular and causal static feedback such that the input–output behavior is linearized and the residual dynamics does not depend on the control.

The solution thus requires, on the one hand, that the input–output linearization can be achieved via a bicausal change of coordinates and a causal feedback and, on the other, that one can choose the basis completion so as to guarantee that the residual dynamics is independent of the control. This can be formulated by checking some conditions on the Polynomial Lie Bracket [G1(x, λ), g1(x, δ)]. While the result can be found in the literature (Califano and Moog 2016), we leave the formulation of the solution and its proof as an exercise for the reader.

6.5 Problems

1. Show that system (6.3) is accessible.

2. Consider the system

ẋ1(t) = x2(t − 1)
ẋ2(t) = cos(x1²(t − 1)) + x2(t − 1)u(t).

Compute a linearizing output, if any, and the corresponding linearizing state feedback.
3. Check whether there exists an output function candidate for the following system:

ẋ1(t) = x2(t − 1) + 3[cos(x1²(t − 2)) + x2(t − 2)u(t − 1)]
ẋ2(t) = x3(t) + cos(x1²(t − 1)) + x2(t − 1)u(t)
ẋ3(t) = 2[cos(x1²(t − 3)) + x2(t − 3)u(t − 2)].

Compute an output function candidate whose relative degree equals 3, if any.
Compute the closed-loop system under the action of the static state feedback

u(t) = −cos(x1²(t − 1))/x2(t − 1).

Check its accessibility.


4. Let the system

ẋ1(t) = x2(t) + u(t − 1) sin x1(t − 2)
ẋ2(t) = x1(t) + u(t) sin x1(t − 1)
y(t) = x1(t).

Compute a causal static output feedback, if any, which linearizes the input–output system.
Write the closed-loop system and check its weak accessibility.
5. Let the system

ẋ1(t) = x2(t − 2) + u(t − 1) sin x1(t − 2)
ẋ2(t) = x1(t) + u(t) sin x1(t − 1)
y(t) = x1(t).

Compute a causal static output feedback, if any, which linearizes the input–output system.
Write the closed-loop system and check its weak accessibility.
Series Editor Biographies

Tamer Başar is with the University of Illinois at Urbana-Champaign, where he holds the academic positions of Swanlund Endowed Chair, Center for Advanced Study (CAS) Professor of Electrical and Computer Engineering, Professor at the
Study (CAS) Professor of Electrical and Computer Engineering, Professor at the
Coordinated Science Laboratory, Professor at the Information Trust Institute, and
Affiliate Professor of Mechanical Science and Engineering. He is also the Direc-
tor of the Center for Advanced Study—a position he has been holding since 2014.
At Illinois, he has also served as Interim Dean of Engineering (2018) and Interim
Director of the Beckman Institute for Advanced Science and Technology (2008–
2010). He received the B.S.E.E. degree from Robert College, Istanbul, and the
M.S., M.Phil., and Ph.D. degrees from Yale University. He has published exten-
sively in systems, control, communications, networks, optimization, learning, and
dynamic games, including books on non-cooperative dynamic game theory, robust
control, network security, wireless and communication networks, and stochastic net-
works, and has current research interests that address fundamental issues in these
areas along with applications in multi-agent systems, energy systems, social net-
works, cyber-physical systems, and pricing in networks.
In addition to his editorial involvement with these Briefs, Başar is also the Editor
of two Birkhäuser series on Systems & Control: Foundations & Applications and
Static & Dynamic Game Theory: Foundations & Applications, the Managing Editor
of the Annals of the International Society of Dynamic Games (ISDG), and member
of editorial and advisory boards of several international journals in control, wireless
networks, and applied mathematics. Notably, he was also the Editor-in-Chief of
Automatica between 2004 and 2014. He has received several awards and recognitions
over the years, among which are the Medal of Science of Turkey (1993); Bode Lecture
Prize (2004) of IEEE CSS; Quazza Medal (2005) of IFAC; Bellman Control Heritage
Award (2006) of AACC; Isaacs Award (2010) of ISDG; Control Systems Technical
Field Award of IEEE (2014); and a number of international honorary doctorates
and professorships. He is a member of the US National Academy of Engineering,
a Life Fellow of IEEE, Fellow of IFAC, and Fellow of SIAM. He has served as
an IFAC Advisor (2017-), a Council Member of IFAC (2011–2014), President of


AACC (2010–2011), President of CSS (2000), and Founding President of ISDG (1990–1994).
Miroslav Krstic is Distinguished Professor of Mechanical and Aerospace Engineer-
ing, holds the Alspach Endowed Chair, and is the Founding Director of the Cymer
Center for Control Systems and Dynamics at UC San Diego. He also serves as Senior
Associate Vice Chancellor for Research at UCSD. As a graduate student, he won
the UC Santa Barbara best dissertation award and student best paper awards at CDC
and ACC. He has been elected as Fellow of IEEE, IFAC, ASME, SIAM, AAAS, IET
(UK), AIAA (Assoc. Fellow), and as a foreign member of the Serbian Academy of
Sciences and Arts and of the Academy of Engineering of Serbia. He has received the
SIAM Reid Prize, ASME Oldenburger Medal, Nyquist Lecture Prize, Paynter Out-
standing Investigator Award, Ragazzini Education Award, IFAC Nonlinear Control
Systems Award, Chestnut textbook prize, Control Systems Society Distinguished
Member Award, the PECASE, NSF Career, and ONR Young Investigator awards,
the Schuck (’96 and ’19) and Axelby paper prizes, and the first UCSD Research
Award given to an engineer. He has also been awarded the Springer Visiting Profes-
sorship at UC Berkeley, the Distinguished Visiting Fellowship of the Royal Academy
of Engineering, and the Invitation Fellowship of the Japan Society for the Promo-
tion of Science. He serves as Editor-in-Chief of Systems & Control Letters and has
been serving as Senior Editor for Automatica and IEEE Transactions on Automatic
Control, as Editor of two Springer book series—Communications and Control Engi-
neering and SpringerBriefs in Control, Automation and Robotics—and has served
as Vice President for Technical Activities of the IEEE Control Systems Society and
as Chair of the IEEE CSS Fellow Committee. He has coauthored 13 books on adap-
tive, nonlinear, and stochastic control, extremum seeking, control of PDE systems
including turbulent flows, and control of delay systems.
References

Alvarez-Aguirre A, van de Wouw N, Oguchi T, Kojima K, Nijmeijer H (2011) Remote tracking control of unicycle robots with network-induced delays. In: Cetto JA et al (eds) Informatics in Control, Automation and Robotics. LNEE 89. Springer, New York, pp 225–238
Andrieu V, Praly L (2006) On the existence of a Kazantzis-Kravaris/Luenberger observer. SIAM J
Cont Opt 45(2):432–456
Anguelova M, Wennberg B (2010) On analytic and algebraic observability of nonlinear delay
systems. Automatica 46:682–686
Baibeche K, Moog CH (2016) Input-state feedback linearization for a class of single-input non-
linear time-delay systems. J Math Control Inf 33:873–891
Banks PS (2002) Nonlinear delay systems, Lie algebras and Lyapunov transformations. J Math
Control Inf 19:59–72
Battilotti S (2020) Continuous-time and sampled-data stabilizers for nonlinear systems with input
and measurement delays. IEEE Trans Autom Control 65:1568–1583
Besançon G (1999) On output transformations for state linearization up to output injection. IEEE
Trans Autom Control 44:1975–1981
Bestle D, Zeitz M (1983) Canonical form observer design for non-linear time-variable systems. Int
J Control 38:419–431
Bloch M (2003) Nonholonomic mechanics and control. Springer, New York
Buckalo AF (1968) Explicit conditions for controllability of linear systems with time lag. IEEE
Trans Autom Control 13:193–195
Califano C, Li S, Moog CH (2013) Controllability of driftless nonlinear time-delay systems. Syst
Contr Lett 62:294–301. https://doi.org/10.1016/j.sysconle.2012.11.023
Califano C, Márquez-Martínez LA, Moog CH (2010) On linear equivalence for time-delay systems. In: Proceedings of the 2010 American control conference, Baltimore, USA, art no 5531404, pp 6567–6572
Califano C, Márquez-Martínez LA, Moog CH (2011) Extended Lie brackets for nonlinear time-delay systems. IEEE Trans Autom Control 56:2213–2218. https://doi.org/10.1109/TAC.2011.2157405
Califano C, Márquez-Martínez LA, Moog CH (2011) On the observer canonical form for time-delay systems. In: Proceedings of the 18th IFAC world congress, Milan, Italy, pp 3855–3860. https://doi.org/10.3182/20110828-6-IT-1002.00729
Califano C, Márquez-Martínez LA, Moog CH (2013) Linearization of time-delay systems by input-
output injection and output transformation. Automatica 49:1932–1940

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
C. Califano and C. H. Moog, Nonlinear Time-Delay Systems,
SpringerBriefs in Control, Automation and Robotics,
https://doi.org/10.1007/978-3-030-72026-1
Califano C, Monaco S, Normand-Cyrot D (2009) Canonical observer forms for multi-output systems up to coordinate and output transformations in discrete time. Automatica 45:2483–2490. https://doi.org/10.1016/j.automatica.2009.07.003
Califano C, Moog CH (2014) Coordinates transformations in nonlinear time-delay systems. In:
53rd IEEE conference on decision and control. Los Angeles, USA, pp 475–480
Califano C, Moog CH (2016) On the existence of the normal form for nonlinear delay systems.
In: Karafyllis I, Malisoff M, Mazenc F, Pepe P (eds) Recent results on nonlinear delay control
systems, vol 4. Advances in delays and dynamics. Springer, Berlin Heidelberg New York, pp
113–142
Califano C, Moog CH (2017) Accessibility of nonlinear time-delay systems. IEEE Trans Autom Control 62(7). https://doi.org/10.1109/TAC.2016.2581701
Califano C, Moog CH (2020) Observability of nonlinear time-delay systems and its application to their state realization. IEEE Control Syst Lett 4:803–808. https://doi.org/10.1109/LCSYS.2020.2992715
Choquet-Bruhat Y, DeWitt-Morette C, Dillard-Bleick M (1989) Analysis, manifolds and physics,
part I: basics. North-Holland, Amsterdam
Cohn PM (1985) Free rings and their relations. Academic Press, London
Conte G, Moog CH, Perdon AM (2007) Algebraic methods for nonlinear control systems, 2nd edn. Springer, London
Conte G, Perdon AM (1995) The disturbance decoupling problem for systems over a ring. SIAM,
J Contr Optimiz 33:750–764
Crouch PE, Lamnabhi-Lagarrigue F, Pinchon D (1995) Some realizations of algorithms for nonlinear
input-output systems. Int J Contr 62:941–960
Fliess M, Mounier H (1998) Controllability and observability of linear delay systems: an algebraic
approach. ESAIM COCV 3:301–314
Fridman E (2001) New Lyapunov-Krasovskii functionals for stability of linear retarded and neutral
type systems. Syst Control Lett 43:309–319
Fridman E (2014) Introduction to time-delay systems. Birkhäuser, Basel
Fridman E, Shaked U (2002) A descriptor system approach to H-infinity control of linear time-delay
systems. IEEE Trans Aut Contr 47:253–270
Garcia-Ramirez E, Moog CH, Califano C, Marquez-Martinez LA (2016) Linearisation via input-output injection of time-delay systems. Int J Control 89:1125–1136
Garcia-Ramirez E, Califano C, Marquez-Martinez LA, Moog CH (2016) Observer design based on linearization via input-output injection of time-delay systems. In: Proceedings IFAC NOLCOS, Monterey, CA, USA. IFAC-PapersOnLine 49-18:672–677
Gennari F, Califano C (2018) T-accessibility for nonlinear time-delay systems: the general case. In:
IEEE conference on decision and control (CDC), pp 2950–2955
Germani A, Manes C, Pepe P (1996) Linearization of input-output mapping for nonlinear delay
systems via static state feedback. In: CESA ’96 IMACS multiconference, pp 599–602
Germani A, Manes C, Pepe P (1998) Linearization and decoupling of nonlinear delay systems. In:
Proceedings of the American control conference, Philadelphia, USA, pp 1948–1952
Germani A, Manes C, Pepe P (2002) A new approach to state observation of nonlinear systems
with delayed output. IEEE Trans Autom Control 47:96–101
Glumineau A, Moog CH, Plestan F (1996) New algebro-geometric conditions for the linearization
by input-output injection. IEEE Trans Autom Control 41:598–603
Gu K, Kharitonov VL, Chen J (2003) Stability of time-delay systems. Birkhäuser, Boston
Halas M, Anguelova M (2013) When retarded nonlinear time-delay systems admit an input-output representation of neutral type. Automatica 49:561–567
Hermann R (1963) On the accessibility problem in control theory. In: LaSalle JP, Lefschetz S (eds) International symposium on nonlinear differential equations and nonlinear mechanics. Academic Press, pp 325–332
Hespanha JP, Naghshtabrizi P, Xu Y (2007) A survey of recent results in networked control systems.
Proc IEEE 95:138–162
Hou M, Pugh AC (1999) Observer with linear error dynamics for nonlinear multi-output systems.
Syst Contr Lett 37:1–9
Insperger T (2015) On the approximation of delayed systems by Taylor series expansion. ASME J
Comput Nonlinear Dyn 10:1–4
Isidori A (1995) Nonlinear control systems, 3rd edn. Springer, New York
Islam S, Liu XP, El Saddik A (2013) Teleoperation systems with symmetric and unsymmetric time varying communication delay. IEEE Trans Instrum Meas 62:2943–2953
Kaldmäe A, Kotta Ü (2018) Realization of time-delay systems. Automatica 90:317–320
Kaldmäe A, Moog CH, Califano C (2015) Towards integrability for nonlinear time-delay systems. In: MICNON 2015, St Petersburg, Russia. IFAC-PapersOnLine 48:900–905. https://doi.org/10.1016/j.ifacol.2015.09.305
Kaldmäe A, Califano C, Moog CH (2016) Integrability for nonlinear time-delay systems. IEEE
Trans Autom Control 61(7):1912–1917. https://doi.org/10.1109/TAC.2015.2482003
Kazantzis N, Kravaris C (1998) Nonlinear observer design using Lyapunov’s auxiliary theorem.
Syst Contr Lett 34:241–247
Keller H (1987) Non-linear observer design by transformation into a generalized observer canonical
form. Int J Control 46:1915–1930
Kharitonov VL, Zhabko AP (2003) Lyapunov-Krasovskii approach to the robust stability analysis
of time-delay systems. Automatica 39:15–20
Kim J, Chang PH, Park HS (2013) Two-channel transparency-optimized control architectures in bilateral teleoperation with time delay. IEEE Trans Control Syst Technol 21:40–51
Kotta Ü, Zinober ASI, Liu P (2001) Transfer equivalence and realization of nonlinear higher order
input-output difference equations. Automatica 37:1771–1778
Krener AJ, Isidori A (1983) Linearization by output injection and nonlinear observers. Syst Contr
Lett 3:47–52
Krener AJ, Respondek W (1985) Nonlinear observers with linearizable error dynamics. SIAM J
Contr Opt 23:197–216
Krstic M (2009) Delay compensation for nonlinear, adaptive, and PDE systems. Systems & Control: Foundations & Applications. Birkhäuser, Boston
Krstic M, Bekiaris-Liberis N (2012) Control of nonlinear delay systems: a tutorial. In: 51st IEEE conference on decision and control (CDC), Maui, HI, pp 5200–5214
Lee EB, Olbrot A (1981) Observability and related structural results for linear hereditary systems.
Int J Control 34:1061–1078
Li SJ, Moog CH, Califano C (2011) Characterization of accessibility for a class of nonlinear time-
delay systems. In: CDC 2011. Orlando, pp 1068–1073
Li SJ, Califano C, Moog CH (2016) Characterization of the chained form with delays. In: IFAC NOLCOS, Monterey, CA, USA. IFAC-PapersOnLine 49-18:808–813
Mattioni M, Monaco S, Normand-Cyrot D (2018) Nonlinear discrete-time systems with delayed
control: a reduction. Syst Control Lett 114:31–37
Mattioni M, Monaco S, Normand-Cyrot D (2021) IDA-PBC for LTI dynamics under input delays:
a reduction approach. IEEE Control Syst Lett 5:1465–1470
Márquez-Martínez LA (1999) Note sur l’accessibilité des systèmes non linéaires à retards. Comptes
Rendus de l’Académie des Sciences-Series I - Mathematics 329(6):545–550
Márquez-Martínez LA (2000) Analyse et commande de systèmes non linéaires à retards. PhD thesis, Université de Nantes / Ecole Centrale de Nantes, Nantes, France
Márquez-Martínez LA, Moog CH (2004) Input-output feedback linearization of time-delay systems.
IEEE Trans Autom Control 49:781–786
Márquez-Martínez LA, Moog CH (2007) New insights on the analysis of nonlinear time-delay
systems: application to the triangular equivalence. Syst Contr Lett 56:133–140
Márquez-Martínez LA, Moog CH, Velasco-Villa M (2002) Observability and observers for nonlinear systems with time delay. Kybernetika 38:445–456
Mazenc F, Bliman PA (2006) Backstepping design for time-delay nonlinear systems. IEEE Trans
Autom Control 51:149–154
Mazenc F, Malisoff M, Krstic M (2021) Stability analysis using generalized sup-delay inequalities.
IEEE Control Syst Lett 5:1411–1416
Mazenc F, Malisoff M, Lin Z (2008) Further results on input-to-state stability for nonlinear systems
with delayed feedbacks. Automatica 44:2415–2421
Mazenc F, Malisoff M, Bhogaraju INS (2020) Sequential predictors for delay compensation for discrete time systems with time-varying delays. Automatica 122, art no 109188. https://doi.org/10.1016/j.automatica.2020.109188
Michiels W, Niculescu S-I (2007) Stability and stabilization of time-delay systems: an eigenvalue-based approach. Advances in design and control, vol 12. SIAM, Philadelphia
Monaco S, Normand-Cyrot D (1984) On the realization of nonlinear discrete-time systems. Syst
Control Lett 5:145–152
Monaco S, Normand-Cyrot D (2008) Controller and observer normal forms in discrete time. In: Astolfi A, Marconi L (eds) Analysis and design of nonlinear control systems: in honor of Alberto Isidori. Springer, pp 377–395
Monaco S, Normand-Cyrot D (2009) Linearization by output injection under approximate sampling.
EJC 15:205–217
Monaco S, Normand-Cyrot D, Mattioni M (2017) Sampled-data stabilization of nonlinear dynamics
with input delays through immersion and invariance. IEEE Trans Autom Control 62(5):2561–
2567
Moraal PE, Grizzle JW (1995) Observer design for nonlinear systems with discrete-time measurements. IEEE Trans Autom Control 40:395–404
Moog CH, Castro-Linares R, Velasco-Villa M, Márquez-Martínez LA (2000) The disturbance decoupling problem for time-delay nonlinear systems. IEEE Trans Autom Control 45(2):305–309
Murray R, Sastry S (1993) Nonholonomic motion planning: steering using sinusoids. IEEE Trans
Autom Control 38:700–716
Nijmeijer H, van der Schaft A (1990) Nonlinear dynamical control systems. Springer, New York
Niculescu SI (2001) Delay effects on stability: a robust control approach. Lecture notes in control and information sciences, vol 269. Springer, Heidelberg
Oguchi T (2007) A finite spectrum assignment for retarded non-linear systems and its solvability
condition. Int J Control 80(6):898–907
Oguchi T, Watanabe A, Nakamizo T (2002) Input-output linearization of retarded non-linear systems
by using an extension of Lie derivative. Int J Control 75:582–590
Olbrot AW (1972) On controllability of linear systems with time delay in control. IEEE Trans
Autom Control 17:664–666
Olgac N, Sipahi R (2002) An exact method for the stability analysis of time-delayed linear time-
invariant (LTI) systems. IEEE Trans Autom Control 47:793–797
Pepe P, Jiang ZP (2006) A Lyapunov-Krasovskii methodology for ISS and iISS of time-delay
systems. Syst Control Lett 55:1006–1014
Plestan F, Glumineau A (1997) Linearization by generalized input-output injection. Syst Contr Lett
31:115–128
Richard JP (2003) Time-delay systems: an overview of some recent advances and open problems.
Automatica 39(10):1667–1694
Sallet G (2008) Lobry Claude: un mathématicien militant. In: Proceedings of 2007 international conference in honor of Claude Lobry, ARIMA 9, pp 5–13. http://arima.inria.fr/009/pdf/arima00902.pdf
Sename O, Lafay JF, Rabah R (1995) Controllability indices of linear systems with delays. Kybernetika 6:559–580
Shi P, Boukas EK, Agarwal RK (1999) Control of Markovian jump discrete-time systems with
norm bounded uncertainty and unknown delay. IEEE Trans Autom Control 44:2139–2144
Sipahi R, Niculescu SI, Abdallah CT, Michiels W, Gu K (2011) Stability and stabilization of systems
with time delay. IEEE Control Syst Mag 31(1):38–65
Sluis W, Shadwick W, Grossman R (1994) Nonlinear normal forms for driftless control systems. In: Proceedings of 1994 IEEE CDC, Lake Buena Vista, FL, pp 320–325
Sørdalen OJ (1993) Conversion of the kinematics of a car with n trailers into a chained form. In: Proceedings of 1993 IEEE international conference on robotics and automation, Atlanta, GA, pp 382–387
Souleiman I, Glumineau A, Schreier G (2003) Direct transformation of nonlinear systems into state affine MISO form and nonlinear observers design. IEEE Trans Autom Control 48:2191–2196
Spivak M (1999) A comprehensive introduction to differential geometry, 3rd edn. Publish or Perish,
Houston
Timmer J, Müller T, Swameye I, Sandra O, Klingmüller U (2004) Modeling the nonlinear dynamics
of cellular signal transduction. Int J Bifurc Chaos 14:2069–2079
Ushio T (1996) Limitation of delayed feedback control in nonlinear discrete-time systems. IEEE Trans Circuits Syst I 43:815–816
Van Assche V, Ahmed-Ali T, Hann CAB, Lamnabhi-Lagarrigue F (2011) High gain observer design
for nonlinear systems with time varying delayed measurements. In: Proceedings 18th IFAC world
congress, Milano, vol 44, pp 692–696
Velasco M, Alvarez JA, Castro R (1997) Disturbance decoupling for time delay systems. Asian J Control 7:847–864
Xia X-H, Gao W-B (1989) Nonlinear observer design by observer error linearization. SIAM J Cont
Opt 27:199–216
Xia X, Márquez-Martínez LA, Zagalak P, Moog CH (2002) Analysis of nonlinear time-delay systems using modules over non-commutative rings. Automatica 38:1549–1555
Zhang H, Wang C, Wang G (2014) Finite-time stabilization for nonholonomic chained form systems
with communication delay. J Robot Netw Artif Life 1:39–44
Zheng G, Barbot JP, Boutat D, Floquet T, Richard JP (2011) On observation of time-delay systems
with unknown inputs. IEEE Trans Autom Control 56(8):1973–1978
Zheng G, Barbot J-P, Boutat D (2013) Identification of the delay parameter for nonlinear time-delay
systems with unknown inputs. Automatica 49(6):1755–1760
