Nonlinear Time-Delay Systems A Geometric Approach: Claudia Califano, Claude H. Moog
Claudia Califano · Claude H. Moog

Nonlinear Time-Delay Systems
A Geometric Approach
SpringerBriefs in Electrical and Computer Engineering
Series Editors
Tamer Başar, Coordinated Science Laboratory, University of Illinois at
Urbana-Champaign, Urbana, IL, USA
Miroslav Krstic, La Jolla, CA, USA
SpringerBriefs in Control, Automation and Robotics presents concise summaries of theoretical research and practical applications. Featuring compact, authored volumes of 50 to 125 pages, the series covers a range of research, report and instructional content. Typical topics might include:
• a timely report of state-of-the art analytical techniques;
• a bridge between new research results published in journal articles and a contextual literature review;
• a novel development in control theory or state-of-the-art development in
robotics;
• an in-depth case study or application example;
• a presentation of core concepts that students must understand in order to make
independent contributions; or
• a summation/expansion of material presented at a recent workshop, symposium
or keynote address.
SpringerBriefs in Control, Automation and Robotics allows authors to present their ideas and readers to absorb them with minimal time investment. Briefs are published as part of Springer’s eBook collection, with millions of users worldwide, and are also available for individual print and electronic purchase.
Springer Briefs in a nutshell
• 50–125 published pages, including all tables, figures, and references;
• softcover binding;
• publication within 9–12 weeks after acceptance of complete manuscript;
• copyright is retained by author;
• authored titles only – no contributed titles; and
• versions in print, eBook, and MyCopy.
Indexed by Engineering Index.
Publishing Ethics: Researchers should conduct their research from research
proposal to publication in line with best practices and codes of conduct of relevant
professional bodies and/or national and international regulatory bodies. For more
details on individual ethics matters please see: https://www.springer.com/gp/
authors-editors/journal-author/journal-author-helpdesk/publishing-ethics/14214
Claudia Califano
Dipartimento di Ingegneria Informatica, Automatica e Gestionale “Antonio Ruberti”
Università di Roma La Sapienza
Rome, Italy

Claude H. Moog
Laboratoire des Sciences du Numérique de Nantes, CNRS
Nantes, France
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
For the subclass of linear time-delay systems in continuous time, the use of the Laplace transform yields quasi-polynomials in the Laplace variable s and in e^{-s} (Gu et al. 2003; Michiels et al. 2007; Niculescu 2001). Among those systems, one may distinguish between the so-called retarded systems (Fridman 2014), described by differential equations in which the highest differentiation order of the output, or of the state, is not delayed, and the so-called neutral systems (Fridman 2001), whose equations involve delayed values of the highest differentiation order of the output, or of the state.
Thus, the Laplace transform still enables an input–output analysis of linear time-delay systems (Olgac and Sipahi 2002; Sipahi et al. 2011). This approach can hardly be extended to the more general nonlinear time-delay systems which are the main focus of this book. Finite-dimensional approximations may be appealing, but they are of limited interest due to stability issues (Insperger 2015).
Discrete-time linear systems with unknown delays are considered in Shi et al. (1999), while discrete-time linear systems with varying delays are addressed through an ad hoc predictor design in Mazenc (2008).
Stability analysis and stabilization of linear time-delay systems require general
tools derived from the Lyapunov theory as in Fridman (2001), Kharitonov and
Zhabko (2003).
Stability of linear systems with switching delays is tackled in Mazenc (2021)
using trajectory-based methods and the so-called sup-delay inequalities.
Some of the most significant historic results obtained for the general class of
nonlinear time-delay systems are about the analysis of their stability, thanks to a
generalization of the Lyapunov theory, the so-called Krasovskii-type approach (Gu
et al. 2003). This approach can hardly be circumvented even in the case of linear
time-delay systems (Fridman 2001).
Early design problems for nonlinear time-delay control systems focus on stabilization. Considering a positive definite functional, the stabilizing control has to render its time derivative negative definite. Advanced control methods such as backstepping make use of Lyapunov–Razumikhin–Krasovskii-type theorems, as in Battilotti (2020), Krstic and Bekiaris-Liberis (2012), Mazenc and Bliman (2006), Pepe and Jiang (2006). Also, delay-free nonlinear systems may be subject to a delayed state feedback, whose stability is tackled in Mazenc et al. (2008). Nonlinear observer designs for systems subject to delayed output measurements are found in Battilotti (2020), Van Assche (2011).
Discrete-time nonlinear systems including delays on the input are considered in
Ushio (1996). A reduction process is introduced to define a delay-free system with
equivalent stability properties in Mattioni et al. (2018), Mattioni et al. (2021).
Contents
1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 The Class of Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Integrability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Geometric Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Accessibility and Observability Properties . . . . . . . . . . . . . . . . . . 6
1.5 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.6 Recalls on Non-commutative Algebra . . . . . . . . . . . . . . . . . . . . . 10
2 Geometric Tools for Time-Delay Systems . . . . . . . . . . . . . . . . . . . . . 15
2.1 The Initialization of the Time-Delay System Versus
the Initialization of the Delay-Free Extended System . . . . . . . . . . 16
2.2 Non-independence of the Inputs of the Extended System . . . . . . . 20
2.3 The Differential Form Representation . . . . . . . . . . . . . . . . . . . . . . 21
2.4 Generalized Lie Derivative and Generalized Lie Bracket . . . . . . . . 23
2.5 Some Remarks on the Polynomial Lie Bracket . . . . . . . . . . . . . . . 27
2.6 The Action of Changes of Coordinates . . . . . . . . . . . . . . . . . . . . 31
2.7 The Action of Static State Feedback Laws . . . . . . . . . . . . . . . . . . 34
2.8 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3 The Geometric Framework—Results on Integrability . . . . . . . . . . . . 37
3.1 Some Remarks on Left and Right Integrability . . . . . . . . . . . . . . 38
3.2 Integrability of a Right-Submodule . . . . . . . . . . . . . . . . . . . . . . 39
3.2.1 Involutivity of a Right-Submodule Versus its
Integrability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.2.2 Smallest 0-Integrable Right-Submodule Containing
Δ(δ] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.2.3 p-Integrability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.2.4 Bicausal Change of Coordinates . . . . . . . . . . . . . . . . . . . . 46
3.3 Integrability of a Left-Submodule . . . . . . . . . . . . . . . . . . . . . . . 48
3.4 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Let us for the moment focus our attention on the class of single-input nonlinear time-delay systems which can be described through the ordinary differential equation
\[
\dot x(t) = F(x(t), \ldots, x(t-sD)) + \sum_{i=0}^{l} G_i(x(t), \ldots, x(t-sD))\,u(t-iD), \tag{1.1}
\]
where x(t) ∈ IR^n and u(t) ∈ IR represent, respectively, the current values of the state and of the control; D is a constant delay; s, l ≥ 0 are finite integers; and finally the functions G_i(x(t), …, x(t−sD)), i ∈ [0, l], and F(x(t), …, x(t−sD)) are analytic in their arguments. It is easy to see that such a class of systems covers the case of constant multiple commensurate delays as well (Gu et al. 2003).
Without loss of generality, in order to simplify the notation, we will assume D = 1. Then, system (1.1) reads as follows, for t ≥ 0:
\[
\dot x(t) = F(x(t), \ldots, x(t-s)) + \sum_{i=0}^{l} G_i(x(t), \ldots, x(t-s))\,u(t-i). \tag{1.2}
\]
The differential equation (1.2) is influenced by the delayed state variables x(t−i), i ∈ [1, s], which, for t > s, can be recovered as solutions of the differential equations obtained by shifting (1.2), and thus given, for ℓ ∈ [1, s], by
\[
\dot x(t-\ell) = F(x(t-\ell), \ldots, x(t-\ell-s)) + \sum_{i=0}^{l} G_i(x(t-\ell), \ldots, x(t-\ell-s))\,u(t-i-\ell). \tag{1.3}
\]
The extended system (1.2), (1.3) now displays s additional delays on the state and input variables. Virtually, one can continue this process by adding an infinite number of differential equations which allow one to go back (or forward) in time, thus leading naturally to an infinite dimensional system.
As well known, from a practical point of view, time-delay systems are at a cer-
tain point initialized with some arbitrary functions, which may not necessarily be
recovered as a solution of the differential equations describing the system. The con-
sequence is that any trajectory of the extended system (1.2), (1.3) is also a trajectory
of (1.2), but conversely any trajectory of (1.2) is, in general, not a trajectory of (1.2),
(1.3).
Example 1.1 Consider the scalar delay differential equation
\[
\dot x(t) = -x(t-1), \tag{1.4}
\]
together with its shifted version
\[
\dot x(t-1) = -x(t-2). \tag{1.5}
\]
The additional equation (1.5) implies that, with the initial condition ϕ_0(τ) on the interval τ ∈ [−1, 0), the system (1.4), (1.5) is well defined for t ≥ 1. Alternatively, if (1.4), (1.5) is considered for
t ≥ 0, then an additional initial condition ϕ1 (τ ) should be given on the interval
τ ∈ [−2, −1). However, it should be noted that the trajectory of (1.4) corresponding
to the initial condition ϕ0 (τ ) cannot be reproduced, in general, as a trajectory of
the extended system (1.4), (1.5), as there does not necessarily exist an initialization
ϕ1 (τ ) of (1.5) such that x(t − 1) = ϕ0 (t − 1) for t ∈ [0, 1).
The trajectories of system (1.4), (1.5) are special trajectories generated by
\[
\begin{pmatrix} \dot z_1(t) \\ \dot z_2(t) \end{pmatrix} = \begin{pmatrix} -z_2(t) \\ -v(t) \end{pmatrix}.
\]
More generally, the extension of (1.2) is generated by
\[
\begin{aligned}
\dot z_0 &= F(z_0, \ldots, z_s) + \sum_{i=0}^{l} G_i(z_0, \ldots, z_s)\,v_i \\
\dot z_1 &= F(z_1, \ldots, z_{s+1}) + \sum_{i=0}^{l} G_i(z_1, \ldots, z_{s+1})\,v_{i+1} \\
&\ \ \vdots
\end{aligned} \tag{1.6}
\]
which has the nice block representation given in Fig. 1.1, where Σ_0 is the system described by z_0, with entries V_0 = (v_0, …, v_l) and (z_1, …, z_s); Σ_1 is the system described by the dynamics of z_1, with entries V_1 = (v_1, …, v_{l+1}) = V_0(t−1) and (z_2, …, z_{s+1}); and Σ_2 is the system described by the dynamics of z_2, with entries V_2 = (v_2, …, v_{l+2}) and (z_3, …, z_{s+2}), and so on.
It is immediately understood, however, that the block scheme in Fig. 1.1, as well as the differential equations (1.6), represents a broader class of systems, not necessarily delayed and generated by Eq. (1.2). In fact, in order to represent the time-delay system (1.2), one cannot neglect that the variables z_i(t) represent pieces of the same trajectory, which means that they cannot be initialized in an arbitrary way: for any real ℓ and integer h such that h ≤ ℓ ≤ h + 1, at any time t they have to satisfy the relation z_i(t − ℓ) = z_{i+j}(t − ℓ + j) for any integer j ∈ [1, h]. In a similar
vein, the inputs V0 , V1 , . . . are generated by a unique signal u by considering also its
repeated delays. They are thus not independent.
Consider, for instance, the nonlinear time-delay system
\[
\dot x(t) = \begin{pmatrix} x_1(t-\tau) \\ 1 \end{pmatrix} u(t), \tag{1.7}
\]
whose right-hand side is influenced by the delayed variable x_1(t−τ), governed by the shifted equation
\[
\dot x_1(t-\tau) = x_1(t-2\tau)\,u(t-\tau)
\]
and further by
\[
\dot x_1(t-2\tau) = x_1(t-3\tau)\,u(t-2\tau)
\]
and so on.
1.2 Integrability
In Fig. 1.2, the trajectory of the system is shown for a switching sequence of the input signal: the input switches from 1 to −1, and the simulation includes five such forward and backward cycles. Differently from what would happen in the delay-free case when the input
switches, the trajectory does not stay on the same integral manifold of one single
vector field. A new direction is taken in the motion, which shows that the delay adds
some additional freedom for the control direction and yields accessibility for the
example under consideration. This is a surprising property of single-input driftless
nonlinear time-delay systems and contradicts pre-conceived ideas as it could not
happen for delay-free systems. As it will be argued in Sect. 2.5, the motion in the x1
direction of the final point of each cycle has to be interpreted as the motion along
the nonzero Lie Bracket of the delayed control vector field with itself. For instance,
system (1.8) with its extension (1.9) reads
\[
\dot x(t) = \begin{pmatrix} x_2(t-\tau) \\ 1 \\ 0 \end{pmatrix} u(t) + \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} u(t-\tau).
\]
The Lie Bracket of the two vector fields generates a third independent direction. These
general intuitive considerations are discussed formally in the book using precise
definitions.
Fig. 1.2 Forward and backward integration yields a motion in an additional specific direction. On the left, the state trajectory of the system initialized with x_2 = −10 for t ∈ [−2, 0). The delay is fixed to τ = 2 s
1.4 Accessibility and Observability Properties

The accessibility/controllability of systems (1.7) and (1.8) has already been discussed above in relation with the topic of integrability. This property is unexpected for a driftless single-input system.
Now, consider, for instance,
\[
\dot x(t) = \begin{pmatrix} x_2(t-\tau) \\ x_3(t) \\ 1 \end{pmatrix} u(t). \tag{1.10}
\]
System (1.10) is not fully controllable, as for τ ≠ 0 there is one autonomous element λ = x_3^2/2 − x_2.¹ Define the change of variables (z_1(t), z_2(t), z_3(t)) = (x_1(t), x_3(t), x_3^2(t)/2 − x_2(t)). The dynamics in the new system of coordinates decomposes system (1.10) into the fully controllable subsystem
\[
\begin{pmatrix} \dot z_1(t) \\ \dot z_2(t) \end{pmatrix} = \begin{pmatrix} z_2^2(t-\tau)/2 - z_3(t-\tau) \\ 1 \end{pmatrix} u(t)
\]
and the autonomous dynamics
\[
\dot z_3(t) = 0.
\]
More generally, it will be shown that a decomposition with respect to accessibility can
always be carried out and a full characterization of accessibility can be given in terms
of a rank condition for nonlinear time-delay systems as shown in Chap. 4. This result
is in the continuation of the celebrated geometric approach for delay-free systems;
¹ For τ = 0 one would reduce to the delay-free case with two autonomous elements, λ_1 = x_3^2/2 − x_2 and λ_2 = −x_3^3/3 + x_2 x_3 − x_1.
the work (Hermann 1963) on accessibility has certainly been “the seminal paper
inspiring the geometric approach that started to be developed by Lobry, Jurdjevic,
Sussmann, Hermes, Krener, Sontag, Brockett in the early 1970’s” (quoted from Sallet
2007).
The above discussion also enlightens that, in the delay context, two different notions of accessibility should be considered, and accordingly two different criteria. A first notion, which can be considered as an immediate generalization of Kalman's result, is based on the consideration of autonomous elements for the given system. Such autonomous elements may be functions of the current state and its delays. Looking instead at accessibility as the possibility to define a control which allows one to move from some initial point x_0 at time t_0 to a given final point x_f at some time t leads to a different notion and characterization of accessibility, actually a peculiarity of time-delay systems only. This new notion will be referred to as t-accessibility. The difference between the two cases is illustrated by the following simple linear example.
Example 1.2 Consider the linear system
\[
\dot x_1(t) = u(t), \qquad \dot x_2(t) = u(t-1).
\]
Such a system is not fully accessible due to the existence of the autonomous element λ = x_2(t) − x_1(t−1). This means that the point reachable at time t is linked to the point reached at time t − 1 through the relation x_2(t) − x_1(t−1) = constant. However, we may reach a given fixed point at a different time t̄.
Assume, for instance, that x_1(t) = x_{10}, x_2(t) = x_{20}, u(t) = 0 for t ∈ [−1, 0), and let x_f = (x_{1f}, x_{2f})^T be the final point to be reached. Then on the interval [0, 1)
\[
\dot x_1(t) = u(t), \qquad \dot x_2(t) = u(t-1) = 0,
\]
so that x_2(t) = x_{20} and, for a constant value of the control u(t) = u_0, x_1(t) = u_0 t + x_{10} for t ∈ [0, 1). If x_{2f} ≠ x_{20}, it is immediately clear that one cannot reach the given final point for any t ∈ [0, 1). Now let the control change to u(t) = u_1 on the interval t ∈ [1, 2). Then on such an interval
\[
\dot x_1(t) = u_1, \qquad \dot x_2(t) = u_0,
\]
so that the given final point x_f is reached at some time t ∈ (1, 2) by choosing
\[
u_0 = \frac{x_{2f} - x_{20}}{t - 1}, \qquad u_1 = \frac{x_{1f} - x_{10} - u_0}{t - 1}.
\]
Consider now the system
\[
\dot x_1(t) = 0, \qquad \dot x_2(t) = 0, \qquad y(t) = x_1(t)\,x_1(t-\tau) + x_2(t)\,x_2(t-\tau).
\]
As any time derivative of the output is zero, for t ≥ 0, the two state variables of
the above system cannot be estimated independently and the system is not fully
observable. As a matter of fact, differently from the delay-free case, there is no
invertible change of state coordinates which decomposes the system into an observ-
able subsystem and a nonobservable one. This contradicts common beliefs on this
matter. Additional assumptions are required (Zheng et al. 2011) to ensure that such
a decomposition still exists. This topic is addressed in Chap. 5.
1.5 Notation
This paragraph is devoted to the notation which will be used in the book. In general,
we will refer to the class of multi-input multi-output nonlinear time-delay systems
\[
\begin{aligned}
\dot x(t) &= F(x(t), \ldots, x(t-sD)) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x(t), \ldots, x(t-sD))\,u_j(t-iD) \\
y(t) &= H(x(t), \ldots, x(t-sD)),
\end{aligned} \tag{1.11}
\]
where x(t) ∈ IR n and u(t) ∈ IR m are the current values of the state and con-
trol variables; D is a constant delay; s, l ≥ 0 are integers; and the functions
G ji (x(t), . . . , x(t − s D)), j ∈ [1, m], i ∈ [0, l], F(x(t), . . . , x(t − s D)), and
H (x(t), . . . , x(t − s D)) are analytic in their arguments. It is easy to see that such a
class of systems includes the case of constant multiple commensurate delays as well
(Gu et al. 2003).
The following notation taken from Califano and Moog (2017), Xia et al. (2002)
will be extensively used:
• x_{[p,s]}^T = (x^T(t+pD), …, x^T(t−sD)) ∈ IR^{(p+s+1)n} denotes the vector consisting of the np future values x(t+iD), i ∈ [1, p], of the state, together with the first (s+1)n components of the state of the infinite dimensional system associated with (1.11). When p = 0, the simpler notation x_{[s]}^T = x_{[0,s]}^T ∈ IR^{(s+1)n} is used, with x_{[0]} = [x_{1,[0]}, …, x_{n,[0]}]^T = x(t) ∈ IR^n and u_{[0]} = [u_{1,[0]}, …, u_{m,[0]}]^T = u(t) ∈ IR^m.
• For a function f(x_{[p,s]}, u_{[q,j]}), the notation f(−l) := f(x_{[p,s]}(−l), u_{[q,j]}(−l)) denotes the same function evaluated on arguments delayed by l units of time.
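As a purely illustrative instance of this notation (ours, not the book's): for n = 2, p = 1, s = 1 and D = 1, x_{[1,1]}^T = (x^T(t+1), x^T(t), x^T(t−1)) ∈ IR^6, while x_{[1]}^T = (x^T(t), x^T(t−1)) ∈ IR^4 and x_{[0]} = x(t) ∈ IR^2.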
1.6 Recalls on Non-commutative Algebra

Non-commutative algebra is used throughout the book to address the study of time-delay systems. In this section, the mathematics and definitions behind this method are introduced (see, for example, Cohn 1985; Banks 2002).
Let us now consider the time-shift operator δ recalled in the previous paragraph, and let us denote by K∗(δ] the (left) ring of polynomials in δ with coefficients in K∗ (analogously, K(δ] will denote the (left) ring of polynomials in δ with coefficients in K). Every element of K∗(δ] may be written as
\[
\alpha(\delta] = \sum_{i=0}^{r_\alpha} \alpha_i\,\delta^{i}, \qquad \alpha_i \in \mathcal K^*,
\]
and, given α(δ], β(δ] ∈ K∗(δ], addition and multiplication are defined by
\[
\alpha(\delta] + \beta(\delta] = \sum_{i=0}^{\max\{r_\alpha, r_\beta\}} (\alpha_i + \beta_i)\,\delta^{i}
\]
and
\[
\alpha(\delta]\,\beta(\delta] = \sum_{i=0}^{r_\alpha}\sum_{j=0}^{r_\beta} \alpha_i\,\beta_j(-i)\,\delta^{i+j}.
\]
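To fix ideas, the following minimal Python sketch (ours, not part of the book; the helper names are illustrative) implements the product rule above for polynomials in δ whose coefficients depend on x(t) and its delays, using the fact that δ shifts the argument of every coefficient it passes over:

```python
# A minimal sketch of the twisted product rule in K(delta]:
# delta * a(x(t)) = a(x(t-1)) * delta.
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')

def shift(expr, i):
    """Delay every occurrence of x(t - k) by i units: x(t - k) -> x(t - k - i)."""
    return expr.subs(t, t - i)

def mul(alpha, beta):
    """Product of two polynomials in delta, given as dicts {power: coefficient}."""
    prod = {}
    for i, ai in alpha.items():
        for j, bj in beta.items():
            # alpha_i delta^i * beta_j delta^j = alpha_i * beta_j(-i) * delta^(i+j)
            prod[i + j] = prod.get(i + j, 0) + ai * shift(bj, i)
    return prod

# (x(t) + delta) * (x(t) delta) = x(t)^2 delta + x(t-1) delta^2
alpha = {0: x(t), 1: sp.Integer(1)}
beta  = {1: x(t)}
print(mul(alpha, beta))   # {1: x(t)**2, 2: x(t - 1)}
```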
Analogously, we can consider vectors, covectors, and matrices whose entries are in the ring. The standard operations of sum and product are well defined once the previous rules on the sum and product of the elements of the ring are applied. As for matrices, it should be noted that in this case the full rank of a square matrix does not automatically imply the existence of its inverse. If the inverse exists, a stronger property is satisfied, namely the unimodularity of the matrix. We give hereafter the formal definition of a unimodular matrix, which will play a fundamental role in the definition of changes of coordinates. Some examples will clarify the difference with matrices which have full rank but are not unimodular.

Definition 1.1 A square matrix A(x, δ] ∈ K^{n×n}(δ] is said to be unimodular if it admits an inverse A^{-1}(x, δ] which is itself a polynomial matrix in K^{n×n}(δ].

In fact, A(δ)A^{-1}(δ) = A^{-1}(δ)A(δ) = I. Note that while any unimodular matrix has full rank, the converse is not true. For example, there is no polynomial inverse for A(δ) = (1 + δ).
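As a small illustration (our example, not the book's): the matrix
\[
A(\delta] = \begin{pmatrix} 1 & \delta \\ 0 & 1 \end{pmatrix}
\]
is unimodular, since A^{-1}(δ] = \begin{pmatrix} 1 & -\delta \\ 0 & 1 \end{pmatrix} is again polynomial, whereas the formal inverse of 1 + δ is the infinite series 1 − δ + δ² − ⋯, so (1 + δ) has full rank over K(δ] but is not unimodular.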
Let us now consider a set of one-forms. It is immediately clear that such a set has both the structure of a vector space E over the field K∗ and the structure of a module, denoted M, over the ring K∗(δ].
Example 1.4 The one-forms dx1 (t) and dx1 (t − 1) are independent over the field
K, while they are dependent over the ring K(δ], since δdx1 (t) − dx1 (t − 1) = 0.
This simple example shows that the action of time delay is taken into account in M,
but not in E. This motivates the definition of the module M.
Any element of a left-submodule Ω(δ] = span_{K∗(δ]}{ω_1(x,δ)dx_{[0]}, …, ω_j(x,δ)dx_{[0]}} can thus be written as
\[
\omega(x,\delta)\,dx_{[0]} = \sum_{i=1}^{j} \alpha_i(x,\delta)\,\omega_i(x,\delta)\,dx_{[0]}.
\]
Example 1.5 Consider the left-submodule Ω(δ] = span_{K∗(δ]}{dx_1(t−1), dx_2(t)}. Clearly the rank of Ω(δ] is 2, and any ω ∈ Ω(δ] can be written as ω = α_1(x,δ)dx_1(t−1) + α_2(x,δ)dx_2(t). However, there are some ω̄ such that ω̄ ∉ Ω(δ], while α_0(x,δ)ω̄ ∈ Ω(δ]. This is, for example, the case of ω̄ = dx_1(t), since δ dx_1(t) = dx_1(t−1) ∈ Ω(δ]; we will then say that Ω(δ] is not left-closed.
We thus can give the following definition from Conte and Perdon (1995).
Definition 1.2 Let Ω(δ] = spanK∗ (δ] {ω1 (x, δ)dx[0] , . . . ω j (x, δ)dx[0] } be a left-
submodule of rank j with ωi ∈ K∗(1×n) (δ]. The left closure of Ω(δ] is the largest
left-submodule Ωc (δ] of rank j containing Ω(δ] .
The left closure of the left-submodule Ω is thus the largest left-submodule, containing
Ω, with the same rank as Ω.
Similar conclusions can be obtained on right-submodules. More precisely, a
right-module of M̂ (Califano et al. 2011a) consists of all possible linear com-
binations of column vectors τ1 , . . . , τ j , τi ∈ Kn×1 (δ], and is denoted by Δ =
spanK∗ (δ] {τ1 , . . . , τ j }.
Let Δ(δ] = span_{K∗(δ]}{τ_1(x,δ), …, τ_j(x,δ)} be a right-submodule of rank j with τ_i ∈ K^{∗(n×1)}(δ]. Then any τ(x,δ) ∈ Δ(δ] can be expressed as
\[
\tau(x,\delta) = \sum_{i=1}^{j} \tau_i(x,\delta)\,\alpha_i(x,\delta).
\]
Accordingly, the following definition of the right closure of a right-submodule
can be given.
Definition 1.3 Let Δ(δ] = spanK∗ (δ] {τ1 (x, δ), . . . τ j (x, δ)} be a right-submodule of
rank j with τi ∈ K∗(n×1) (δ]. The right closure of Δ(δ] is the largest right-submodule
Δc (δ] of rank j containing Δ(δ].
Definition 1.4 The right closure of a right-submodule Δ of M̂, denoted by clK(ϑ] (Δ),
is defined as clK(ϑ] (Δ) = {X ∈ M̂ | ∃q(ϑ) ∈ K∗ (ϑ], Xq(ϑ) ∈ Δ}.
Let s̄ = deg(Γ(x_{[p,s]}, δ)). The left-annihilator Ω(x_{[p̄,α]}, δ) satisfies the following relations:

Proof Without loss of generality, assume that the first j rows of Γ(x_{[p,s]}, δ) are linearly independent over K(δ]. Then Ω(x_{[p̄,α]}, δ) must satisfy
\[
\Omega(x,\delta)\,\Gamma(x,\delta) = \big[\Omega_1(x,\delta),\ \Omega_2(x,\delta)\big]\begin{pmatrix} \Gamma_1(x,\delta) \\ \Gamma_2(x,\delta) \end{pmatrix} = 0,
\]
that is, (n − j)(r_{Ω_2} + 1) ≥ j r_{Γ_1}. Once r_{Ω_2} is fixed, we get that r_{Ω_1} = r_{Ω_2} + r_{Γ_2} − r_{Γ_1}. In the worst case, r_{Γ_2} = r_{Γ_1} = r and n − j = 1, which proves (i).
From the set of equations (1.13), fixing the independent parameters as functions of x_{[0]} only, the maximum delay is given by the largest between s + r_{Ω_1} and s + r_{Ω_2}, while p̄ ≤ p, which proves (ii). Consequently, if Γ(x, δ) is causal, then p = 0, so that p̄ ≤ 0, which shows that Ω(x, δ) is also causal.
Chapter 2
Geometric Tools for Time-Delay Systems
In this chapter, we introduce the main tools that will be used in the book to deal with
nonlinear time-delay systems affected by constant commensurate delays. We will
introduce basic notions such as the Extended Lie derivative and the Polynomial Lie
Bracket (Califano et al. 2011a; Califano and Moog 2017) which generalize to the
time-delay context the standard definitions of Lie derivative and Lie Bracket used to
deal with nonlinear systems. We will finally show how changes of coordinates and
feedback laws act on the class of systems considered.
Before going into the technical details, let us first recall that, with the notation introduced in Chap. 1, we can rewrite system (1.1) as
\[
\dot x_{[0]} = F(x_{[s]}) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]})\,u_{j,[0]}(-i) \tag{2.1}
\]
\[
y_{j,[0]} = H_j(x_{[s]}), \qquad j \in [1, p]. \tag{2.2}
\]
Considering also the dynamics of the delayed state variables, one gets the extended representation
\[
\begin{aligned}
\dot x_{[0]} &= F(x_{[s]}) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]})\,u_{j,[0]}(-i) \\
\dot x_{[0]}(-1) &= F(x_{[s]}(-1)) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]}(-1))\,u_{j,[0]}(-i-1) \\
&\ \ \vdots
\end{aligned} \tag{2.3}
\]
The advantage in the representation (2.3) is that it shows that the given time-delay
system can be represented as an interconnection of subsystems and that these subsys-
tems are coupled through the action of the control. Nevertheless caution must be used
when referring to this last representation since the variables x(−i) are connected to
each other through time. This will be further discussed in this chapter.
Consider then the extended system obtained from (2.1) by adding the equations of the first k delayed state variables:
\[
\begin{aligned}
\dot x_{[0]} &= F(x_{[s]}) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]})\,u_{j,[0]}(-i) \\
\dot x_{[0]}(-1) &= F(x_{[s]}(-1)) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]}(-1))\,u_{j,[0]}(-i-1) \\
&\ \ \vdots \\
\dot x_{[0]}(-k) &= F(x_{[s]}(-k)) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]}(-k))\,u_{j,[0]}(-i-k).
\end{aligned} \tag{2.4}
\]
Renaming the delayed inputs u_{j,[0]}(−i) as new inputs v_{j,i}, and completing the equations of the last s delayed state variables through arbitrary functions φ_1, …, φ_s, one gets the delay-free system
\[
\begin{aligned}
\dot x_{[0]} &= F(x_{[s]}) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]})\,v_{j,i} \\
\dot x_{[0]}(-1) &= F(x_{[s]}(-1)) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]}(-1))\,v_{j,i+1} \\
&\ \ \vdots \\
\dot x_{[0]}(-k) &= F(x_{[s]}(-k)) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]}(-k))\,v_{j,i+k} \\
\dot x_{[0]}(-k-1) &= \varphi_1 \\
&\ \ \vdots \\
\dot x_{[0]}(-k-s) &= \varphi_s.
\end{aligned} \tag{2.5}
\]
While system (2.1) requires an initial condition on the time interval [−s, 0) to com-
pute its trajectories for t ≥ 0, system (2.5) requires an initialization on the time
interval [−(k + s), 0). As a consequence, any trajectory of (2.5) is also a trajectory
of (2.1), provided the latter is correctly initialized on the interval [−s, 0).
On the contrary, considering a given trajectory of system (2.1), there may not
necessarily exist a corresponding initialization function defined on the time interval
[−(k + s), 0) for system (2.5) which reproduces the given trajectory of system (2.1).
This is a direct consequence of the fact that the initialization of the system is not
necessarily a solution of the set of differential equations. This point is further clarified
through the next example.
Example 2.1 Consider the scalar dynamics
\[
\dot x(t) = -x(t-1) \tag{2.6}
\]
and its extension
\[
\begin{aligned}
\dot x(t) &= -x(t-1) \\
\dot x(t-1) &= -x(t-2).
\end{aligned} \tag{2.7}
\]
There is no initialization function for system (2.7) over the time interval [−2, −1) such that the trajectory of (2.7) coincides with the trajectory of (2.6) for any t ≥ −1. In this specific example, an ideal Dirac impulse at time t = −1 would be required to achieve the reproduction of the trajectory.

Coming back to the extended system (2.5), note that the further trick which consists in renaming x_0 = x_{[0]}, x_ℓ = x_{[0]}(−ℓ) for ℓ = 1, …, s+k may be misleading, as x_0, …, x_{s+k} are not independent. In these new variables, system (2.5) reads
\[
\begin{aligned}
\dot x_0 &= F(x_0, \ldots, x_s) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_0, \ldots, x_s)\,v_{j,i} \\
\dot x_1 &= F(x_1, \ldots, x_{s+1}) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_1, \ldots, x_{s+1})\,v_{j,i+1} \\
&\ \ \vdots \\
\dot x_k &= F(x_k, \ldots, x_{s+k}) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_k, \ldots, x_{s+k})\,v_{j,i+k} \\
\dot x_{k+1} &= \varphi_1 \\
&\ \ \vdots \\
\dot x_{k+s} &= \varphi_s.
\end{aligned} \tag{2.8}
\]
From a practical point of view, given a nonlinear time-delay system, its represen-
tation (2.8) can be used to compute the solution of the system by referring to the
so-called step method as shown in the example hereafter.
Example 2.2 Consider the dynamics
To compute the solution starting from the initialization x(t) = ϑ0 (t), for −1 ≤ t < 0,
the following steps are taken according to Garcia-Ramirez et al. (2016a):
• The solution ϑ1 (t) of (2.9) on the time interval 0 ≤ t < 1 is found as the solution
of the delay-free system (2.8) subject to the appropriate initial condition ϑ0 (t):
• The solution ϑ2 (t) of (2.9) on the time interval 1 ≤ t < 2 is found as the solution
of the delay-free system (2.8) subject to the initial condition ϑ1 (t) computed at
the previous step:
• More generally, for k ≥ 0, the solution ϑk+1 (t) of (2.9) on the time interval k ≤
t < k + 1 is found as the solution of the delay-free system (2.8) subject to the
initial condition ϑk (t):
Thus, the set of equations (2.5) defined on the interval (k − 1) ≤ t < k becomes
in the case of this example
System (2.13) defines the solution on the first k units of time of Eq. (2.9) shifted
into the interval (k − 1) ≤ t < k, with initialization x(τ ) = ϑ0 (τ ) for τ ∈ [k − 1, k).
2.2 Non-independence of the Inputs of the Extended System 19
Moreover, through the change of variables x_0(t) = x(t), x_1(t) = x(t−1), …, x_k(t) = x(t−k), system (2.13) can also be represented as (2.8), with initial conditions x_0(0) = x_1(1), x_1(0) = x_2(1), x_2(0) = x_3(1), …, x_{k−1}(0) = ϑ_0(1).
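Since the dynamics (2.9) is not reproduced here, the following minimal Python sketch (ours, with illustrative names) applies the step method to the scalar equation ẋ(t) = −x(t−1) of Example 2.1 under the assumed constant history x(t) = 1 on [−1, 0): on each interval [k, k+1) the delayed term is the already computed segment ϑ_k, so a delay-free ODE is integrated, exactly as in the representation (2.8).

```python
import numpy as np
from scipy.integrate import solve_ivp

# step method for xdot(t) = -x(t-1) with history theta_0(t) = 1 on [-1, 0)
theta = [lambda t: 1.0]                      # theta_0: the (assumed) initial function
x_start = 1.0                                # x(0) = theta_0(0^-)

for k in range(3):                           # compute theta_1, theta_2, theta_3
    prev = theta[-1]
    # on [k, k+1) the delayed argument t-1 lies in [k-1, k), where the solution
    # is the already known segment theta_k: the equation becomes delay-free
    sol = solve_ivp(lambda t, x: [-prev(t - 1.0)],
                    (k, k + 1.0), [x_start], dense_output=True, max_step=0.01)
    theta.append(lambda t, s=sol: float(s.sol(t)[0]))
    x_start = sol.y[0, -1]                   # continuity of x at t = k+1

print(theta[1](0.5), theta[2](1.5), theta[3](2.5))   # samples of the solution
```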
Now, consider the following hypothesis:
H0. System (2.1), under the restriction u(t) = ν(t), has a unique solution.

Theorem 2.1 Consider system (2.1), with initial condition ϕ_0(θ), subject to the restriction u(t) = ν(t), t ≥ 0, and assume that Hypothesis H0 is satisfied. Then the solution (2.14) is obtained from the solution of system (2.5) on the interval [t_0 + s, t_0 + T] if T > k + s, and
\[
\varphi_i(t) = \dot\varphi(t - t_0 + i), \qquad i = 0, \ldots, s-1.
\]
As a corollary, one can consider the special case of a single delay as done in
Example 2.2, thus getting the following result.
Corollary 2.1 Given (2.9), consider the associated extended system (2.13). There
always exists a proper initialization of (2.13) so that the trajectories of both systems
coincide on the time interval (k − 1) ≤ t < k.
This means that it is possible to group the solution of (2.9) on the interval (k − 1) ≤
t < k.
2.2 Non-independence of the Inputs of the Extended System
It has to be noticed that u(t − k) and x(t − k) are not independent of u(t) and x(t),
so that attention must be paid when referring to the representation (2.3). An example
is given hereafter.
It is easily seen that they represent the same dynamics whenever u_1 = u_{[0]} and u_2 = u_{[0]}(−1). So, Σ_2 has fewer constraints and better properties than Σ_1 regarding, for instance, feedback linearization: as a matter of fact, while Σ_2 can be linearized via a regular static state feedback by setting
\[
u_1 = \frac{1}{x_{2,[0]}^2 + 1}\, v_1, \qquad u_2 = v_2, \tag{2.15}
\]
there is no regular static state feedback which fully linearizes Σ_1, since u_{[0]}(−1) is no longer independent of u_{[0]}. In fact, setting
\[
u_{[0]} = \frac{1}{x_{2,[0]}^2 + 1}\, v_{[0]}
\]
necessarily enforces
\[
u_{[0]}(-1) = \frac{1}{x_{2,[0]}^2(-1) + 1}\, v_{[0]}(-1).
\]
Still, the feedback (2.15) can be implemented on the time-delay system Σ_1 on any time interval [2kT, (2k+1)T), switching to u(t) = v(t) for t ∈ [(2k+1)T, (2k+2)T), with the initialization u_{[0]}(−1) = 0. This switching scheme ensures a linear behavior for the given dynamics on any interval [2kT, (2k+1)T], k ≥ 0.
The conclusion at this stage is that one cannot neglect the links between the con-
trol/state variables and their delayed signals. As shown hereafter, this is one of the
problems that can be overcome by considering the differential representation of the
given time-delay system. Such a representation naturally takes into account through
the shift operator the link of a given variable with its delayed terms.
2.3 The Differential Form Representation

One of the peculiarities of nonlinear time-delay systems is the fact that, when analyzing their dynamics, one has to refer to two different kinds of operations with respect to time: time differentiation and time shift.
The simultaneous action of shift and differentiation determines difficulties which
are peculiar to time-delay systems. A simple case is illustrated through the following
example.
The input–output behavior is obtained by considering, in these two cases, the second-order derivatives of the output maps. We easily get that
\[
\begin{aligned}
\ddot y_{1,[0]} &= \big(3x_{2,[0]}^2(-1) + 1\big)\,u_{1,[0]}(-1) + \big(3x_{2,[0]}^2 + 1\big)\,u_{1,[0]} \\
\ddot y_{2,[0]} &= x_{2,[0]}(-1)\,u_{2,[0]} + u_{2,[0]}(-1)\,x_{2,[0]}.
\end{aligned} \tag{2.16}
\]
While in the first case the feedback u_{1,[0]} = \frac{1}{3x_{2,[0]}^2 + 1}\, v_{1,[0]} linearizes the input–output behavior, that is, \ddot y_{1,[0]} = v_{1,[0]} + v_{1,[0]}(-1), there is instead no regular static state feedback which allows one to solve the same problem for the second system.
The difference between these two cases can be understood through the use of the
differential form representation, where the shifts are represented by the δ operator,
and the representation becomes linear.
More precisely, consider the time-delay system (2.1), (2.2), and recall that, using the notation introduced in Sect. 1.5, for any k ≥ 0, dx(t−k) = dx_{[0]}(−k) = δ^k dx_{[0]} and similarly, for any ℓ ≥ 0, du(t−ℓ) = du_{[0]}(−ℓ) = δ^ℓ du_{[0]}. Through standard computations one gets that such a differential form representation is given by
\[
\begin{aligned}
d\dot x_{[0]} &= f(x_{[s]}, u_{[s]}, \delta)\,dx_{[0]} + \sum_{j=1}^{m} g_{1,j}(x_{[s]}, \delta)\,du_{[0],j} \\
dy_{[0]} &= h(x_{[s]}, \delta)\,dx_{[0]},
\end{aligned} \tag{2.17}
\]
where the n × n polynomial matrix f(x_{[s]}, u_{[s]}, δ) is given by
\[
f(x_{[s]}, u_{[s]}, \delta) = \sum_{\ell=0}^{s} \frac{\partial F(x_{[s]})}{\partial x_{[0]}(-\ell)}\,\delta^{\ell} + \sum_{j=1}^{m}\sum_{\ell=0}^{s}\sum_{i=0}^{l} \frac{\partial G_{ji}(x_{[s]})}{\partial x_{[0]}(-\ell)}\, u_{j,[0]}(-i)\,\delta^{\ell},
\]
g_{1,j}(x_{[s]}, δ) = \sum_{\ell=0}^{l} g_{1,j}^{\ell}(x)\,\delta^{\ell} is an n × 1 polynomial vector representing the differential of the dynamics with respect to the control u_j, and is given by
\[
g_{1,j}(x_{[s]}, \delta) = \sum_{i=0}^{l} G_{ji}(x_{[s]})\,\delta^{i}, \qquad j \in [1, m].
\]
Finally, h_j(x_{[s]}, δ) = \sum_{\ell=0}^{s} h_j^{\ell}(x)\,\delta^{\ell} is a 1 × n polynomial vector representing the differential of the output, and is given by
\[
h_j(x_{[s]}, \delta) = \sum_{i=0}^{s} \frac{\partial H_j(x_{[s]})}{\partial x_{[0]}(-i)}\,\delta^{i}, \qquad j \in [1, p].
\]
With some technical manipulations, using the fact that f(−1)δ = δ f(0), one then gets that
\[
d\ddot y_{1,[0]} = (1+\delta)\Big(6 u_{1,[0]}\, x_{2,[0]}\, dx_{2,[0]} + \big(3x_{2,[0]}^2 + 1\big)\, du_{1,[0]}\Big).
\]
Since the left-hand side is an exact differential, the right-hand side is an exact differential as well, and it is possible to find the solution of
\[
6 u_{1,[0]}\, x_{2,[0]}\, dx_{2,[0]} + \big(3x_{2,[0]}^2 + 1\big)\, du_{1,[0]} = dv_{1,[0]},
\]
that is,
\[
u_{1,[0]} = \frac{1}{3x_{2,[0]}^2 + 1}\, v_{1,[0]}.
\]
Since the coefficient of du_{2,[0]} cannot be factorized as c_0(δ)c_1(x), there is no static state feedback which can achieve input–output linearization.
2.4 Generalized Lie Derivative and Generalized Lie Bracket

When dealing with nonlinear systems, Lie derivatives and Lie Brackets are standard tools used in many contexts (Isidori 1995). As is well known, the Lie derivative represents the derivative of a function along a given trajectory. When moving to the time-delay context, however, several aspects should be taken into account.
As a first comment, consider again the dynamics (2.1) and consider g_{1,j}(x, δ) in (2.17), which is thus associated to the differential representation of (2.1). Accordingly, it is immediate to understand that δ^k g_{1,j}(x, δ) will be associated to the differential representation of
\[
\dot x_{[0]}(-k) = F(x_{[s]}(-k)) + \sum_{i=0}^{l}\sum_{j=1}^{m} G_{ji}(x_{[s]}(-k))\,u_{j,[0]}(-i-k). \tag{2.18}
\]
Such a reasoning can be generalized to any element r(x, δ) = \sum_{\ell=0}^{s} r^{\ell}(x)\,\delta^{\ell}, so that if r(x, δ) is associated to the differential representation of (2.1), then δ^k r(x, δ) = \sum_{\ell=0}^{s} r^{\ell}(x_{[0]}(-k))\,\delta^{\ell+k} will be associated to the differential representation of (2.18). One thus gets the following infinite dimensional matrix:
\[
\begin{matrix}
\partial/\partial x_{[0]} \rightarrow \\ \partial/\partial x_{[0]}(-1) \rightarrow \\ \vdots \\ \partial/\partial x_{[0]}(-s) \rightarrow \\ \vdots
\end{matrix}
\begin{Bmatrix}
r^0 & r^1 & \cdots & r^s & 0 & \cdots \\
0 & r^0(-1) & \cdots & r^{s-1}(-1) & r^s(-1) & \cdots \\
\vdots & \ddots & \ddots & \ddots & \ddots & \cdots \\
0 & \cdots & 0 & r^0(-s) & r^1(-s) & \cdots \\
\vdots & & \ddots & \ddots & \ddots & \cdots
\end{Bmatrix} \tag{2.19}
\]
Definition 2.1 (Generalized Lie derivative) Given the function τ(x_{[p,s]}) and the submodule element r(x, δ) = \sum_{j=0}^{\bar s} r^{j}(x)\,\delta^{j} ∈ K^{∗n}(δ], the Generalized Lie derivative L_{r^{\mu}(x)}τ(x_{[p,s]}), μ ∈ [0, \bar s], is defined as
\[
L_{r^{\mu}(x)}\,\tau(x_{[p,s]}) = \sum_{l=-p}^{\mu} \frac{\partial \tau(x_{[p,s]})}{\partial x_{[0]}(-l)}\; r^{\mu-l}(x(-l)). \tag{2.20}
\]

Remark 2.1 In a delay-free context, one would have p = s = \bar s = 0 and the Generalized Lie derivative would reduce to
\[
L_{r^{0}(x)}\,\tau(x) = \frac{\partial \tau(x)}{\partial x}\, r^{0}(x),
\]
which is exactly the standard Lie derivative of τ along r^0.
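As a small worked illustration (our choice of data, not an example from the book): take n = 2, p = 0, s = \bar s = 1, τ(x_{[1]}) = x_{1,[0]}\, x_{2,[0]}(−1) and r(x, δ) = r^0(x) + r^1(x)δ. Then (2.20) with μ = 1 gives
\[
L_{r^{1}(x)}\,\tau = \frac{\partial \tau}{\partial x_{[0]}}\, r^{1}(x) + \frac{\partial \tau}{\partial x_{[0]}(-1)}\, r^{0}(x(-1)) = \big(x_{2,[0]}(-1),\ 0\big)\, r^{1}(x) + \big(0,\ x_{1,[0]}\big)\, r^{0}(x(-1)),
\]
so that the coefficient r^0 enters evaluated at the delayed state, exactly as prescribed by the shift structure of (2.19).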
Definition 2.2 (Generalized Lie Bracket) Let r_q(x, δ) = \sum_{j=0}^{\bar s} r_q^{j}(x)\,\delta^{j} ∈ K^{∗n}(δ], q = 1, 2. For any k, l ≥ 0, the Generalized Lie Bracket [r_1^k(·), r_2^l(·)]_{E_i}, on IR^{(i+1)n}, i ≥ 0, is defined as
\[
\big[r_1^{k}(\cdot),\, r_2^{l}(\cdot)\big]_{E_i} = \sum_{j=0}^{i} \Big(\big[r_1^{k-j},\, r_2^{l-j}\big]_{E}\Big)^{T}\!(x(-j))\; \frac{\partial}{\partial x_{[0]}(-j)}, \tag{2.21}
\]
where
\[
\big[r_1^{k}(\cdot),\, r_2^{l}(\cdot)\big]_{E} = L_{r_1^{k}(x)}\, r_2^{l}(x) - L_{r_2^{l}(x)}\, r_1^{k}(x). \tag{2.22}
\]
Remark 2.2 The Generalized Lie derivative as defined by (2.20) is the Lie derivative of τ(x_{[p,s]}) along the extended vector obtained by stacking the shifted coefficients of r(x, δ), where r^{i}(x) = (r_1^{i}, \ldots, r_n^{i})^{T} and r^{q} = 0 for q > μ. Accordingly, assuming without loss of generality k ≥ l, the Generalized Lie Bracket [r_1^k(·), r_2^l(·)]_{E_i} is defined starting from the standard Lie Bracket
\[
\left[\begin{pmatrix} 0 \\ r_1^{s}(s-k) \\ \vdots \\ r_1^{k}(0) \\ \vdots \\ r_1^{0}(-k) \\ 0 \end{pmatrix},\ \begin{pmatrix} r_2^{s}(s-l) \\ \vdots \\ r_2^{l}(0) \\ \vdots \\ r_2^{0}(-l) \\ 0 \\ 0 \end{pmatrix}\right] = \begin{pmatrix} \tau^{k+s-l}(s-l) \\ \vdots \\ \tau^{k}(0) \\ \vdots \\ \tau^{0}(-k) \\ 0 \end{pmatrix}.
\]
In fact, [r_1^k(·), r_2^l(·)]_{E_i} = \sum_{j=0}^{\min(k,i)} (\tau^{k-j}(-j))^{T}\, \frac{\partial}{\partial x(-j)}.
The Generalized Lie Brackets (2.21) are associated to Δ_{[p,q]} defined above. In the special case of causal submodules (which lead one to consider Δ_{[0,q]}), they have been shown to characterize the 0-integrability conditions, that is, when the left-annihilator Δ^⊥(δ] of rank n − j is generated by n − j exact and independent differentials dλ_μ(x) = Λ_μ(x, δ)dx_{[0]}, μ ∈ [1, n−j] (Califano et al. 2011a). In order to give the conditions on integrability directly on the submodule, the following Lie Bracket is introduced.
Definition 2.3 (Lie Bracket) Given r_i(x_{[s_i,s]}, δ) ∈ K^{∗n}(δ], i = 1, 2, the Lie Bracket [r_1(x,δ), r_2(x,δ)] is characterized by the polynomial vectors
\[
r_{12,j}(x, \delta) = \sum_{\ell=-s_1}^{2s+s_1} \big[r_1^{\ell+s_1-j},\, r_2^{\ell+s_1}\big]_{E_0}\, \delta^{\ell+s_1}, \qquad j \in [-2s,\ 2s+s_1+s_2]. \tag{2.23}
\]
Recalling that a polynomial vector r_1(x_{[s_1,s]}, δ) acts on a function ℓ(t), and denoting its image as R_1(x_{[s_1,s]}, ℓ) := \sum_{j=0}^{s} r_1^{j}(x)\,ℓ(-j), the Polynomial Lie Bracket is then defined as follows:

Definition 2.4 (Polynomial Lie Bracket) Given r_i(x_{[s_i,s]}, δ) ∈ K^{∗n}(δ], i = 1, 2, the Polynomial Lie Bracket [R_1(x, ℓ), r_2(x, δ)] is defined as
\[
[R_1(x, \ell),\, r_2(x, \delta)] = \dot r_2(x,\delta)\big|_{\dot x_{[0]} = R_1(x,\ell)}\, \delta^{s_1} \;-\; \sum_{k=0}^{s+s_1} \frac{\partial R_1(x_{[s_1,s]}, \ell)}{\partial x(s_1-k)}\, \delta^{k}\, r_2(x(s_1), \delta).
\]
With some abuse, the Polynomial Lie Bracket and the standard Lie Bracket are both denoted by [·, ·]. No confusion is possible, since in the Polynomial Lie Bracket some ℓ(i) will always be present inside the brackets.
Some comments

As noted in Califano and Moog (2017), the link between the Lie Bracket (2.23) and the Generalized Lie Bracket (2.21) can be easily established by noting that r_{12,j}(x, δ) in (2.23) is given by
\[
r_{12,j}(x, \delta) = I(\delta)\, \big[r_1^{2(s+s_1)-j}(\cdot),\, r_2^{2s+s_1}(\cdot)\big]_{E_{2s+s_1}}\Big|_{x(2(s+s_1))},
\]
where
\[
I(\delta) = \big(I_n\,\delta^{2(s+s_1)},\ \cdots,\ I_n\,\delta,\ I_n\big).
\]
Furthermore, the r_{12,j}(x, δ)'s also characterize the Polynomial Lie Bracket, since one easily gets that
\[
[R_1(x, \ell),\, r_2(x, \delta)] = \sum_{j=-2s}^{2s+s_1+s_2} r_{12,j}(x, \delta)\, \ell(j). \tag{2.24}
\]
Finally, in the delay-free case, the Polynomial Lie Bracket reduces (up to ℓ(0)) to the standard Lie Bracket. In fact,
\[
[R_1(x, \ell),\, r_2(x, \delta)] = [r_1^{0}(x)\,\ell(0),\, r_2^{0}(x)] = [r_1^{0},\, r_2^{0}]\,\ell(0).
\]
Instead, if delays are present, [R_1(x, ℓ), r_2(x, δ)] immediately enlightens some important differences with respect to the delay-free case, such as the loss of validity of the Straightening theorem. In fact, since the term depending on δ undergoes a different kind of operation with respect to the term depending on ℓ, starting from r(x, δ) and its corresponding image R(x, ℓ), in general,
\[
\dot r(x, \delta)\big|_{\dot x_{[0]} = R(x, \ell)}\, \delta^{s_1} \;\ne\; \sum_{k=0}^{s_1+s} \frac{\partial R(x_{[s_1,s]}, \ell)}{\partial x(s_1-k)}\, \delta^{k}\, r(x(s_1), \delta),
\]
so that, in general, [r(x, δ), r(x, δ)] ≠ 0. For instance, consider
\[
r(x, \delta) = \begin{pmatrix} x_2(-1) \\ 1 \end{pmatrix}, \qquad \text{which yields} \qquad R(x, \ell) = \begin{pmatrix} x_2(-1) \\ 1 \end{pmatrix} \ell(0).
\]
Then
\[
[R(x, \ell),\, r(x, \delta)] = \begin{pmatrix} \ell(-1) - \ell(0)\,\delta \\ 0 \end{pmatrix} \ne 0.
\]
Accordingly,
\[
[r(x, \delta),\, r(x, \delta)] = \left(\begin{pmatrix} 1 \\ 0 \end{pmatrix},\ \begin{pmatrix} -\delta \\ 0 \end{pmatrix}\right) \ne 0.
\]
Let us first examine some properties of the Polynomial Lie Bracket discussed in Califano and Moog (2017).

Property 2.1
\[
\frac{\partial [R_1(x, \ell),\, r_2(x, \delta)]}{\partial \ell(s_1-j)}\; \delta^{s_2-s_1+j+|j|} = -\,\frac{\partial [R_2(x, \ell),\, r_1(x, \delta)]}{\partial \ell(s_2+j)}\; \delta^{|j|}. \tag{2.25}
\]

Property 2.2 Given, for i = 1, 2, \bar r_i(x_{[\bar s_i,s]}, δ) = r_i(x_{[s_i,s]}, δ)\,β_i(x_{[s_i,s]}, δ), then
\[
[\bar R_1(x, \ell),\, \bar r_2(x, \delta)] = [R_1(x, \bar\ell),\, r_2(x, \delta)]\big|_{\bar\ell = \beta_1(x,\ell)}\; \hat\beta_2 + r_2(x, \delta)\,\alpha_2 - r_1(x, \delta)\,\alpha_1 \tag{2.26}
\]
with \hat\beta_2 = \beta_2(x(s_1), \delta), \ \alpha_1 = \sum_{k=0}^{s+s_1} \frac{\partial \beta_1(x, \ell)}{\partial x(s_1-k)}\, \delta^{k}\, \bar r_2(x(s_1), \delta), and \alpha_2 = \dot\beta_2(x, \delta)\big|_{\dot x = \bar R_1(x, \ell)}\, \delta^{s_1}.

Property 2.3 Given r_2(x, δ) and a function α, for any k ≥ 0,
\[
\mathrm{ad}^{k}_{R_1(x,1)}\big(r_2(x, \delta)\,\alpha\big) = \sum_{j=0}^{k} \binom{k}{j}\, \mathrm{ad}^{\,k-j}_{R_1(x,1)}\big(r_2(x, \delta)\big)\, \alpha^{(j)}\big|_{\dot x = R_1(x,1)}. \tag{2.27}
\]

It is important to point out that for delay-free systems one recovers the standard properties of Lie Brackets. In fact, if r_i(x, δ) = r_i^0(x), for i = 1, 2, then R_i(x, ℓ) = r_i^0(x)ℓ(0) and
\[
[R_1(x, \ell),\, r_2(x, \delta)] = [r_1^{0},\, r_2^{0}]\,\ell(0),
\]
whereas letting \bar r_i(x, δ) = r_i^0(x)β_i(x), then \bar R_i(x, ℓ) = r_i^0(x)β_i(x)ℓ(0) and
\[
[\bar R_1(x, \ell),\, \bar r_2(x, \delta)] = [r_1^{0}(x)\beta_1(x)\,\ell(0),\, r_2^{0}(x)\beta_2(x)] = \big([r_1^{0}, r_2^{0}]\beta_2\beta_1 + r_2^{0}\alpha_2 - r_1^{0}\alpha_1\big)\,\ell(0).
\]
Then
\[
R_1(x, \ell) = \begin{pmatrix} x_1(1) \\ x_2(-1) \end{pmatrix}\ell(0), \qquad R_2(x, \ell) = \begin{pmatrix} x_2(-1) \\ x_1(0) \end{pmatrix}\ell(0).
\]
Accordingly, since s_1 = 1 and s_2 = s = 0, a direct computation of the Polynomial Lie Bracket gives
\[
[R_1(x, \ell),\, r_2(x, \delta)] = r_{12,0}(x, \delta)\,\ell(0) + r_{12,1}(x, \delta)\,\ell(1).
\]
If the system were linear, that is, g1 (x) and g2 (x) were constant vectors, the appli-
cation of an input sequence of the form [(0, 1), (1, 0), (0, −1), (−1, 0)] where each
control acts exactly for a time h, would bring the state back to the starting point. In
the nonlinear case instead it was shown that such a sequence brings the system into
a final point x f different from the starting one x0 and that the Lie Bracket [g2 , g1 ]
exactly identifies the direction which should be taken to go back to x0 from x f . In
fact, if one carries out the computation it turns out that the first-order derivative of
the flow in the origin is zero, while its second-order derivative evaluated again in the
origin is exactly twice the bracket [g_2, g_1]. As a by-product, in the special case of a single-input driftless and delay-free system, using a constant control allows one to move forward or backward on a unique integral manifold of the considered control vector field, and this can be easily proven by considering that [g, g] = 0.
In the time-delay context, already for a single-input system the Polynomial Lie Bracket is not, in general, identically zero. Using arguments analogous to the delay-free case, as already discussed in Sect. 1.3, it follows that already in the single-input case it is not, in general, true that one moves forward and backward on a unique integral manifold when delays are present. To show this formally, let us go back to the single-input time-delay system (1.8),
\[
\dot x(t) = g(x(t), x(t-\tau))\,u(t) = \begin{pmatrix} x_2(t-\tau) \\ 1 \end{pmatrix} u(t), \tag{2.28}
\]
and consider the dynamics over four steps of magnitude τ, when the control sequence [1, 0, −1, 0] is applied and the switches occur every τ. One then gets the extended dynamics (2.29) and the associated delay-free system (2.30),
with c0 the initial condition of x on the interval [−4τ , −3τ ). Of course not all the
trajectories of z 1 (t) in (2.30) will be trajectories of x(t) in (2.29), whereas all the
trajectories of x(t) for t ∈ [0, 4τ ) in (2.29) can be recovered as trajectories of z 1 (t)
in (2.30) for t ∈ [0, 4τ ), whenever the system is initialized with constant initial
conditions.
Mimicking the delay-free case, one should then apply the input sequence [(0, 1),
(1, 0), (0, −1), (−1, 0)] to the system. This can be achieved by considering u(t) = 1
for t ∈ [0, τ ), u(t) = 0 for t ∈ [τ , 2τ ), u(t) = −1 for t ∈ [2τ , 3τ ), and u(t) = 0 for
t ∈ [3τ , 4τ ), with the initialization u(t) = 0 for t ∈ [−τ , 0). Such an example shows
immediately that the second-order derivative in 0 is characterized by
⎡⎛ ⎞ ⎛ ⎞⎤
0 g(z 1 , z 2 )
⎢⎜ g(z 2 , z 3 ) ⎟ ⎜ 0 ⎟⎥
[g1 , g2 ] = ⎢
⎣⎝
⎜ ⎟,⎜
⎠ ⎝
⎟⎥
0 −g(z 3 , z 4 )⎠⎦
−g(z 4 , c0 ) 0
⎡⎛ ⎞⎛ ⎞⎤
0 g(x(t), x(t − τ ))
⎢⎜ g(x(t − τ ), x(t − 2τ )) ⎟⎜ 0 ⎟⎥
=⎢ ⎜
⎣⎝
⎟,⎜
⎠ ⎝
⎟⎥.
0 −g(x(t − 2τ ), x(t − 3τ ))⎠⎦
−g(x(t − 3τ ), x(t − 4τ )) 0
∂
It is straightforward to note that the ∂x(t)
component of the Lie Bracket is given by
∂g(x(t),x(t−τ ))
∂x(t−τ )
g(x(t
− τ ), x(t − 2τ )) which, in general, is nonzero, thus showing that
the presence of a delay may generate a new direction that can be taken.
Such a result can be easily recovered by using the Polynomial Lie Bracket. In fact, starting from g_1(x, δ) = g(x(t), x(t−τ)), one considers G_1(x, ℓ) = g(x(t), x(t−τ))\,ℓ(0). Accordingly, the associated Polynomial Lie Bracket is
\[
\begin{aligned}
[G_1(x, \ell),\, g_1(x, \delta)]
&= \dot g\big|_{\dot x_{[0]} = g(x(0), x(-1))\,\ell(0)} - \sum_{j=0}^{1} \frac{\partial g(x(0), x(-1))}{\partial x(-j)}\,\ell(0)\, g(-j)\,\delta^{j} \\
&= \frac{\partial g(x(0), x(-1))}{\partial x(-1)}\; g(x(-1), x(-2))\,\big(\ell(-1) - \ell(0)\,\delta\big) \\
&= \begin{pmatrix} 1 \\ 0 \end{pmatrix}\big(\ell(-1) - \ell(0)\,\delta\big),
\end{aligned}
\]
which is different from zero. Figure 1.2 shows the trajectories of the single-input system (2.28) controlled with a piecewise constant input which switches between 1 and −1 every 10 s, highlighting the difference with the delay-free case as discussed. It is also clear that, in this framework, the delay can be used as an additional control variable. This is left as an exercise to the reader, who can investigate the effect of different delays on the system with the same output, as well as a system with a fixed delay where the input changes its period.
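The following short Python sketch (ours; the history value and delay are taken from the caption of Fig. 1.2, the rest is an assumption) integrates (2.28) with the input sequence [1, 0, −1, 0], each piece lasting τ, and confirms numerically that the cycle produces a net displacement in the x_1 direction while x_2 returns to its initial value:

```python
import numpy as np

tau, dt = 2.0, 1e-3
n_hist = int(tau / dt)
u_seq = [1.0, 0.0, -1.0, 0.0]            # one piece of length tau each

N = 4 * n_hist
x1 = np.zeros(N + n_hist + 1)
x2 = np.full(N + n_hist + 1, -10.0)      # constant history x2 = -10 on [-tau, 0)

for k in range(n_hist, n_hist + N):
    u = u_seq[(k - n_hist) // n_hist]
    x1[k + 1] = x1[k] + dt * x2[k - n_hist] * u   # x1' = x2(t - tau) u(t)
    x2[k + 1] = x2[k] + dt * u                    # x2' = u(t)

print("net x1 displacement:", x1[-1] - x1[n_hist])  # nonzero: the delay opens a new direction
print("net x2 displacement:", x2[-1] - x2[n_hist])  # returns (close) to 0
```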
2.6 The Action of Changes of Coordinates

Changes of coordinates play a fundamental role in the study of the structural properties of a given system. In the delay-free case, a classical example of their use is displayed by the decomposition into observable/reachable subsystems, following Kalman's intuition. When dealing with time-delay systems, several problems arise when considering changes of coordinates.
The map z(t) = x(t) + x(t − 1) does not define a change of coordinates, since we
are not able to express x(t) as a function of z(t) and a finite number of its delays.
Nevertheless we can compute
ż 1 (t) = z 1 (t − 1)z 2 (t − 1)
ż 2 (t) = z 1 (t + 1),
where the causality property of the system has not been preserved.
The previous examples show that changes of coordinates should be defined with care. To this end, we will consider bicausal changes of coordinates as defined in Márquez-Martínez et al. (2002), that is, changes of coordinates which are causal and admit a causal inverse map. Let us thus consider the mapping
\[
z(t) = \varphi(x(t), \ldots, x(t-\alpha)), \tag{2.31}
\]
where α ∈ IN and ϕ ∈ K^n.

Definition 2.5 Consider a system Σ in the state coordinates x. The mapping (2.31) is a local bicausal change of coordinates for Σ if there exist an integer ℓ ∈ IN and a function ψ(z_{[ℓ]}) ∈ K^n such that, assuming z_{[0]} and x_{[0]} defined for t ≥ −(α + ℓ), then ψ(ϕ(x_{[α]}), …, ϕ(x_{[α]}(−ℓ))) = x_{[0]} for t ≥ 0.

Furthermore, if we consider the differential form representation of (2.31), which is given by
\[
dz_{[0]} = T(x_{[\gamma]}, \delta)\,dx_{[0]}, \qquad T(x_{[\gamma]}, \delta) = \sum_{j=0}^{s} T_j(x)\,\delta^{j}, \tag{2.32}
\]
then the polynomial matrix T(x_{[γ]}, δ) ∈ K^{n×n}(δ] is unimodular and γ ≤ α, whereas its inverse T^{-1}(z, δ) has polynomial degree not greater than (n−1)α.
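As a simple illustration (our example): the map z_1(t) = x_1(t), z_2(t) = x_2(t) + x_1(t−1) is a bicausal change of coordinates, since x_1(t) = z_1(t), x_2(t) = z_2(t) − z_1(t−1) is again causal; its differential form representation is dz_{[0]} = T(δ)dx_{[0]} with
\[
T(\delta) = \begin{pmatrix} 1 & 0 \\ \delta & 1 \end{pmatrix}, \qquad T^{-1}(\delta) = \begin{pmatrix} 1 & 0 \\ -\delta & 1 \end{pmatrix},
\]
a unimodular matrix, in contrast with the map z(t) = x(t) + x(t−1) considered above.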
Under the change of coordinates (2.31), with differential representation (2.32), the differential representation (2.17) of the given system is transformed in the new coordinates into
\[
\begin{aligned}
d\dot z_{[0]} &= \tilde f(z, u, \delta)\,dz_{[0]} + \sum_{j=1}^{m} \tilde g_{1,j}(z, \delta)\,du_{[0],j} \\
dy_{[0]} &= \tilde h(z, \delta)\,dz_{[0]}
\end{aligned} \tag{2.33}
\]
with
\[
\begin{aligned}
\tilde f(z, u, \delta) &= \Big(T(x, \delta)\, f(x, u, \delta) + \dot T(x, \delta)\Big)\, T^{-1}(x, \delta)\,\Big|_{x_{[0]} = \phi^{-1}(z)} \\
\tilde g_{1,j}(z, \delta) &= T(x, \delta)\, g_{1,j}(x, \delta)\,\Big|_{x_{[0]} = \phi^{-1}(z)}, \\
\tilde h(z, \delta) &= h(x, \delta)\, T^{-1}(x, \delta)\,\Big|_{x_{[0]} = \phi^{-1}(z)}.
\end{aligned} \tag{2.34}
\]

Proposition 2.1 Under the bicausal change of coordinates (2.31), the causal submodule element r(x, δ) is transformed into \tilde r(z, δ) given by
\[
\tilde r(z, \delta) = \big[T(x, \delta)\, r(x, \delta)\big]_{x_{[0]} = \phi^{-1}(z)} = \sum_{\ell=0}^{s+\bar s}\ \sum_{j=0}^{\ell} T_j(x)\, r^{\ell-j}(x(-j))\,\delta^{\ell}\ \Big|_{x_{[0]} = \phi^{-1}(z)}.
\]
Proposition 2.2 Under the bicausal change of coordinates (2.31), the Polynomial Lie Bracket [R_1(x, ℓ), r_2(x, δ)] defined starting from the causal submodule elements r_1(x, δ), r_2(x, δ) is transformed into
\[
[\tilde R_1(z, \ell),\, \tilde r_2(z, \delta)] = \sum_{j=-2\tilde s}^{2\tilde s} \tilde r_{12,j}(z, \delta)\,\ell(j),
\]
where
\[
\tilde r_{12,j}(z, \delta) = \sum_{\ell=0}^{2\tilde s} \big[\tilde r_1^{\ell},\, \tilde r_2^{\ell-j}\big]_{E_0}\, \delta^{\ell} = \sum_{\ell=0}^{2\tilde s - j} \big[\tilde r_1^{\ell+j},\, \tilde r_2^{\ell}\big]_{E_0}\, \delta^{j+\ell}
= \sum_{\ell=0}^{2\tilde s - j} T_{\ell+j,0}(x)\,\big[r_1(x),\, r_2(x)\big]_{E_{\ell+j}}\, \delta^{j+\ell}\ \Big|_{x = \phi^{-1}(z)}.
\]
2.7 The Action of Static State Feedback Laws

Definition 2.6 Consider a system (2.1); an invertible instantaneous static state feedback is defined as
\[
u(x(t), v(t)) = \alpha(x(t)) + \beta(x(t))\,v(t), \tag{2.35}
\]
where v(t) is a new input of dimension m and β is a square invertible matrix whose entries are meromorphic functions, so that (2.35) is invertible almost everywhere and one recovers v(t) as a function of u(t), that is,
\[
v(t) = \beta^{-1}(x(t))\,\big(u(t) - \alpha(x(t))\big).
\]
In general, the class of instantaneous static state feedback laws is not rich enough to
cope with the complexity of time-delay systems and to solve the respective control
problems. Thus, delay-dependent state feedback laws are considered as well and
have the same level of complexity as the system to be controlled.
A delay-dependent static state feedback is thus defined as
\[
u(x, v) = \alpha(x(t), \ldots, x(t-\ell)) + \sum_{i=0}^{\ell} \beta_i(x(t), \ldots, x(t-\ell))\,v(t-i), \tag{2.36}
\]
which can be written compactly as
\[
u_{[0]} = \alpha(x_{[\ell]}) + \beta(x, \delta)\,v_{[0]}, \tag{2.37}
\]
where
\[
\beta(x, \delta) = \sum_{i=0}^{\ell} \beta_i(x(t), \ldots, x(t-\ell))\,\delta^{i}
\]
is a δ-polynomial matrix. The feedback (2.37) is said to be an invertible bicausal static state feedback if β is a unimodular polynomial matrix, i.e. it admits an inverse polynomial matrix \hatβ(x, δ).
Note that in the special case where m = 1, the invertibility of (2.36) necessarily yields β(x, δ) = β_0(x(t), …, x(t−ℓ)), that is, the feedback law can depend only on v(t). Only in the multi-input case may several time instants of v(·) be involved. Referring to the differential representation, one gets that the differential of the feedback (2.36) is
\[
du_{[0]} = \Bigg[\sum_{j=0}^{\ell}\bigg(\frac{\partial \alpha(x)}{\partial x(-j)} + \sum_{i=0}^{\ell} \frac{\partial \beta_i(x)}{\partial x(-j)}\, v(-i)\bigg)\delta^{j}\Bigg] dx_{[0]} + \sum_{i=0}^{\ell} \beta_i(x)\,\delta^{i}\, dv_{[0]} = \alpha(x, v, \delta)\,dx_{[0]} + \beta(x, \delta)\,dv_{[0]}. \tag{2.40}
\]
As the matrix β(x, δ) is unimodular, one gets \hatβ(x, δ) = β^{-1}(x, δ). Accordingly, the differential representation of the closed-loop system is obtained by replacing du_{[0]} in (2.17) with (2.40). As an example, the feedback
\[
\begin{aligned}
u_1(t) &= v_1(t) \\
u_2(t) &= x^2(t-1)\,v_1(t-2) + v_2(t)
\end{aligned}
\]
is an invertible bicausal static state feedback, with inverse
\[
\begin{aligned}
v_1(t) &= u_1(t) \\
v_2(t) &= -x^2(t-1)\,u_1(t-2) + u_2(t).
\end{aligned}
\]
2.8 Problems
Consider the static state feedback
\[
\begin{aligned}
u_1(t) &= v_1(t) - v_2(t-1) \\
u_2(t) &= v_1(t-1) + v_2(t) - v_2(t-2).
\end{aligned}
\]
Is this transformation invertible? If yes, then write v_1(t) and v_2(t) in terms of u_1(t), u_2(t) and their delays.
Is this state feedback invertible? If yes, is the inverse state feedback causal?
Chapter 3
The Geometric Framework—Results on
Integrability
In this chapter, we will focus our attention on the solvability of a set of first-order partial differential equations or, equivalently, on the integrability problem for a set of one-forms when the given variables are affected by a constant delay. As is well known, in the delay-free case such a problem is addressed by using the Frobenius theorem, and the necessary and sufficient conditions for integrability can be stated equivalently by referring to involutive distributions or involutive codistributions. The Frobenius theorem is used quite frequently in the nonlinear delay-free context because it is at the basis of the solution of many control problems. This is why it is fundamental to understand how it works in the delay context.
When dealing with time-delay systems, in fact, things become more involved. A first attempt to deal with the problem can be found in Márquez-Martínez (2000), where it is shown that for a single one-form one gets results which are similar to the delay-free case, while these results cannot be extended to the general context.
As shown in Chap. 1, a first important characteristic of one-forms in the time-delay context is that they have to be viewed as elements of a module over a certain non-commutative polynomial ring.
In the present chapter, it will also be shown that two notions of integrability have to be defined in the delay context, strong and weak integrability, which instead coincide in the delay-free case. As it will be pointed out, these main differences are linked to the notion of closure of a submodule introduced in Conte and Perdon (1995) for linear systems over rings and recalled in Chap. 1. Finally, it will also be shown that the concept of involutivity can be appropriately extended to this context through the use of the Polynomial Lie Bracket.
3.1 Some Remarks on Left and Right Integrability

Given a set of one-forms ω_1, …, ω_j, their integrability concerns the existence of j independent functions ϕ_1, …, ϕ_j and of a polynomial matrix A(x, δ) linking the ω_i's to the differentials dϕ_i's. Whenever A(x, δ) is unimodular, the differentials dϕ_j can be expressed in terms of the one-forms ω_i's, and this is exactly what happens in the delay-free case. We will talk in this case of Strong Integrability. If instead the matrix A(x, δ) is not unimodular, then we cannot express the dϕ_j's in terms of the one-forms ω_i's. We will talk in this case of Weak Integrability, which is then peculiar to the delay context only and is directly linked to the concept of closure of a submodule.
Secondly, such a difference does not arise if one works on the right-submodule Δ(δ], since its left-annihilator is always closed, so that one will always find that its left-annihilator is strongly integrable. Instead, as it will be clarified later on, when talking of integrability of a right-submodule, a new notion of p-integrability needs to be introduced, which characterizes the index p such that dλ = Λ(x, δ)dx_{[0]}(p) satisfies Λ(x, δ)Δ(δ] = 0.
Example 3.1 The one-form ω_1 = dx(t) + x(t−1)\,dx(t−1) can be written in the two following forms:
\[
\omega_1 = \big(1 + x(t-1)\,\delta\big)\,dx(t) \tag{3.2}
\]
and
\[
\omega_1 = d\big(x(t) + \tfrac{1}{2}x(t-1)^2\big). \tag{3.3}
\]
Equation (3.2) suggests that the given one-form is just weakly integrable; however, (3.3) shows that it is in fact strongly integrable.
Instead, the one-form ω2 = dx1 (t) + x2 (t)dx1 (t − 1) = (1 + x2 (t)δ)dx1 (t) is weakly
integrable, but not strongly integrable, because the polynomial 1 + x2 (t)δ is not
invertible.
The following section is devoted to analyzing in more detail the concept of integrability of a right-submodule. As has been underlined at the beginning of this chapter, differently from the delay-free case, in this context it will be necessary to introduce a more general definition of integrability, namely p-integrability. Another concept strictly linked to the integrability problem is that of involutivity. Such a notion is also introduced in this context, though it will be shown that it is not a straightforward generalization to the delay context of the standard definitions. The main features of delay systems, in fact, fully characterize these topics.

3.2 Integrability of a Right-Submodule

Let us now consider the right-submodule
\[
\Delta(\delta] = \mathrm{span}_{\mathcal K^*(\delta]}\{r_1(x, \delta), \ldots, r_j(x, \delta)\}
\]
of rank j, with the polynomial vectors r_i(x, δ) = \sum_{\ell=0}^{\bar s} (r_i^{\ell}(x))^{T}\,\frac{\partial}{\partial x_{[p]}}\,\delta^{\ell} ∈ K^{∗n}(δ]. By assumption r_i^{\bar s+\ell} = 0, ∀ℓ > 0; by convention r_i^{-k} = 0, ∀k > 0.
As we have already underlined, integrating Δ(δ] consists in the computation of a
set of n − j exact differentials dλμ (x) = Λμ (x, δ)dx[0] ( p) independent over K∗ (δ],
which define a basis for the left-kernel of Δ(δ].
As it is immediately evident, one key point stands in the computation of the
correct p. We will thus talk of p-integrability of a right-submodule, which is defined
as follows.
The right-submodule Δ(δ] of rank j is said to be p-integrable if there exist n − j exact differentials dλ_μ(x) = Λ_μ(x, δ)\,dx_{[0]}(p), independent over K^*(δ], such that the dλ_μ(x)'s lie in the left-kernel of Δ(δ], that is, dλ_μ(x)\,r_i(x, δ) = 0, for i ∈ [1, j] and μ ∈ [1, n − j], and any other exact differential d\barλ(x) ∈ Δ^⊥(δ] can be expressed as a linear combination over K^*(δ] of such dλ_μ(x)'s.
\[
\Delta(\delta] = \mathrm{span}_{\mathcal K^*(\delta]}\left\{\begin{pmatrix} -x_1(2)\,\delta \\ x_2(2) \end{pmatrix}\right\}.
\]
How to check the existence of such a solution and how to compute it is the topic
of the present chapter.
To this end, starting from the definitions of Generalized Lie Bracket, Lie Bracket, and
Polynomial Lie Bracket given in Chap. 2, the notions of Involutivity and Involutive
Closure of a right-submodule are introduced next. They represent the nontrivial
generalization of the standard definitions used in the delay-free context, which can
be recovered as a special case. These definitions play a fundamental role in the
integrability conditions.
Definition 3.3 (Involutivity) Consider the right-submodule
\[
\Delta(\delta] = \mathrm{span}_{\mathcal K^*(\delta]}\{r_1(x, \delta), \ldots, r_j(x, \delta)\}
\]
of rank j, with r_i(x, δ) = \sum_{l=0}^{s} r_i^{l}(x_{[s_i,s]})\,\delta^{l}, and let Δ_c(δ] be its right closure. Then Δ(δ] is said to be involutive if for any pair of indices i, ℓ ∈ [1, j] the Lie Bracket [r_i(x, δ), r_ℓ(x, δ)] satisfies r_{iℓ,k}(x, δ) ∈ Δ_c(δ] for every index k.

Remark 3.1 Definition 3.3 includes as a special case the notion of involutivity of a distribution. The main feature is that, starting from a given right-submodule, its involutivity does not require that the vectors obtained through the Lie Bracket of two of its elements be linear combinations of the generators of the given submodule, but only that they be linear combinations of the generators of its right closure. For finite dimensional delay-free systems, distributions are closed by definition, so there is no such difference.
The definition of involutivity of a submodule is crucial for the integrability problem, as enlightened in the next theorem.

Theorem 3.1 The right-submodule
\[
\Delta(\delta] = \mathrm{span}_{\mathcal K^*(\delta]}\{r_1(x, \delta), \ldots, r_j(x, \delta)\}
\]
of rank j, whose left-annihilator is causal, is 0-integrable if and only if it is involutive.

Proof Necessity. If Δ(δ] is 0-integrable, its left-annihilator is generated by n − j exact differentials dλ_μ(x) = Λ_μ(x, δ)\,dx_{[0]} satisfying Λ_μ(x, δ)\,r_ℓ(x, δ) = 0, ℓ ∈ [1, j]. (3.6)
The time derivative of (3.6) along R_q(x_{[s_1,s]}, ℓ) yields, ∀μ ∈ [1, n − j], ∀ℓ ∈ [1, j],
\[
\dot\Lambda_\mu(x, \delta)\big|_{\dot x_{[0]} = R_q(x, \ell)}\, r_\ell(x, \delta) + \Lambda_\mu(x, \delta)\,\dot r_\ell(x, \delta)\big|_{\dot x_{[0]} = R_q(x, \ell)} = 0,
\]
that is,
\[
\sum_{i,k=0}^{\rho} \frac{\partial \Lambda_\mu^{k}(x)}{\partial x(-i)}\, R_q(x(-k), \ell(-k))\,\delta^{i}\, r_\ell(x, \delta)\,\delta^{s_1}
+ \Lambda_\mu(x, \delta)\sum_{k=0}^{s+s_1} \frac{\partial R_q(x, \ell)}{\partial x(s_1-k)}\,\delta^{k}\, r_\ell(x(s_1), \delta)
= -\Lambda_\mu(x, \delta)\,[R_q(x, \ell),\, r_\ell(x, \delta)]. \tag{3.7}
\]
Moreover, since λ_μ(x) is causal, then ∂Λ_μ^{k}(x)/∂x(s_1−i) = 0 for i ∈ [0, s_1−1]; since Λ_μ(x, δ)r_q(x, δ) = 0, then also \sum_{k=0}^{s} Λ_μ^{k}(x)\,R_q(x(−k), ℓ(−k)) = 0, so that for i ∈ [0, s+s_1],
\[
\sum_{k=0}^{s} \frac{\partial \Lambda_\mu^{k}(x)}{\partial x(-i)}\, R_q(x(-k), \ell(-k)) + \sum_{k=0}^{s} \Lambda_\mu^{k}(x)\, \frac{\partial R_q(x(-k), \ell(-k))}{\partial x(-i)} = 0.
\]
Consequently,
\[
\sum_{i=0}^{s+s_1} \sum_{k=0}^{s} \frac{\partial \Lambda_\mu^{k}(x)}{\partial x(s_1-i)}\, R_q(x(-k), \ell(-k))\,\delta^{i} = -\,\Lambda_\mu(x, \delta) \sum_{i=0}^{s+s_1} \frac{\partial R_q(x, \ell)}{\partial x(s_1-i)}\,\delta^{i},
\]
so that, substituting into (3.7), one gets Λ_μ(x, δ)\,[R_q(x, ℓ), r_ℓ(x, δ)] = 0.
Since the previous relation must be satisfied ∀μ ∈ [1, n − j] and ∀ℓ, q ∈ [1, j], and recalling the link between the Polynomial Lie Bracket and the Lie Bracket highlighted in Eq. (2.24), then necessarily Δ(δ] must be involutive.
Sufficiency. Let ω(x_{[\hat s]}, δ) = (ω_1^{T}(x_{[\hat s]}, δ), …, ω_{n-j}^{T}(x_{[\hat s]}, δ))^{T} be the left-annihilator of (r_1(x_{[s_1,s]}, δ), …, r_j(x_{[s_k,s]}, δ)). Let \bar s = \max\{s_1, …, s_k\} and ρ = \max\{\hat s, \deg(ω(x, δ))\}, that is, for k ∈ [1, n − j],
\[
\omega_k(x, \delta) = \sum_{\ell=0}^{\rho} \omega_k^{\ell}(x_{[\rho]})\,\delta^{\ell}.
\]
Set Ω = 0, . . . , 0, ω0 (x[ρ] ), . . . , ωρ (x[ρ] ), 0, . . . , 0 , where ω0 is preceded by s̄ 0-
blocks and set Δi := Δ[s̄,i+s] ⊂ span{ ∂x[0]∂ (s̄) , . . . , ∂x[0] (−i−s) ∂
} as
Δ_i = span_{K∗} [ I_{ns̄}   ∗        ⋯   ∗              0   ⋯   0
                  0        r⁰(x)    ⋯   r^ℓ(x)                 ⋮
                  ⋮            ⋱              ⋱                0
                  0        ⋯  r⁰(x(−i))  ⋯   r^ℓ(x(−i))        0
                  0        0        ⋯   0    ⋯   0        I_{ns} ],   (3.8)

where the central block of rows (the one built from the r^k's) is denoted by Δ_i⁰.
By assumption ω(x, δ) is causal and, for any two vector fields τ_ℓ ∈ Δ_i⁰, ℓ = 1, 2, and i ≥ ρ, Ωτ_ℓ = 0 and Ω[τ₁, τ₂] = 0. Moreover, since i ≥ ρ, Ω ∂/∂x_ℓ(−i − p) = 0, ∀ℓ ∈ [1, n], ∀p ∈ [1, s]. It follows that Ω[τ₁, ∂/∂x(−i − p)] = 0, since ∂(Ωτ₁)/∂x(−i − p) = Ω ∂τ₁/∂x(−i − p) = 0. Analogously, since Ω is causal, then for any p ∈ [1, s̄], ∂(Ωτ₁)/∂x(+p) = Ω ∂τ₁/∂x(+p) = 0, which shows that Ω[τ₁, ∂/∂x(+p)] = 0, so that Ω ⊥ Δ̄_i. As a consequence, there exist at least n − j causal exact differentials, independent over K∗, which lie in the left-annihilator of Δ̄_i. It remains to show that there are also n − j causal exact differentials, independent over K∗(δ], which lie in the left-annihilator of Δ(δ]. This follows immediately by noting that if dλ₁, …, dλ_μ, μ ≤ n − j, is a basis for Δ⊥(δ], since Ω is 0-integrable then ω(x, δ)dx_[0] = Σ_{i=1}^{μ} α_i(x, δ)dλ_i. Since the ω_i(x, δ)dx_[0]'s are n − j and by assumption they are independent over K∗(δ], then necessarily μ = n − j.
A direct consequence of the proof of Theorem 3.1 is the definition of an upper bound
on the maximum delay appearing in the exact differentials which generate a basis
for the left-annihilator of Δ(δ]. This is pointed out in the next corollary.
Corollary 3.1 Let the right-submodule
Δ(δ] = spanK∗ (δ] r1 (x, δ), . . . , r j (x, δ)
of rank j, with r_i(x, δ) = Σ_{l=0}^{s̄} r_i^l(x_[s_i,s]) δ^l, be 0-integrable. Then the maximum delay which characterizes the exact differentials generating the left-annihilator of Δ(δ] is not greater than j s̄ + s.
Proof The proof of Theorem 3.1 shows that if ρ is the maximum between the
degree in δ and the largest delay affecting the state variables in the left-annihilator
Ω(x[ p̄] , δ) of Δ(δ], then the exact differentials are affected by a maximum delay
which is not greater than ρ. On the other hand, according to Lemma 1.1 in Chap. 1,
deg(Ω(x, δ)) ≤ j s̄, whereas p̄ ≤ s + j s̄, which shows that ρ ≤ j s̄ + s.
The result stated by Theorem 3.1, which is itself an important achievement, also plays a key role in proving a series of fundamental results, highlighted hereafter.
If the given submodule Δ(δ] is not 0-integrable, one may be interested in computing the smallest 0-integrable right-submodule containing it. This in turn makes it possible to identify the maximum number of independent exact one-forms which lie in the left-annihilator of Δ(δ]. The following definition, which generalizes the notion of involutive closure of a distribution to the present context, needs to be introduced.
of rank j, with r_i(x, δ) = Σ_{l=0}^{s} r_i^l(x_[s_i,s]) δ^l, let Δ_c(δ] be its right closure. Then its
involutive closure Δ̄(δ] is the smallest submodule, which contains Δc (δ] and which
is involutive.
Accordingly, the following result can be stated.
Theorem 3.2 Consider the right-submodule
Δ(δ] = spanK∗ (δ] r1 (x, δ), . . . , r j (x, δ)
of rank j and let Δ̄(δ] be its involutive closure and assume that the left-annihilator of
Δ̄(δ] is causal. Then Δ̄(δ] is the smallest 0-integrable right-submodule containing
Δ(δ].
Example 3.3 Let us consider

Δ(δ] = span_{K∗(δ]} { ( x₁x₁(1)δ²,  −x₂x₁(2)δ − x₁δ² )^T }.

Its right closure is

Δ_c(δ] = span_{K∗(δ]} { ( x₁x₁(1)δ,  −x₂x₁(2) − x₁δ )^T }.
[R(x, ℓ), r(x, δ)] = ṙ(x, δ)|_{ẋ_[0]=R(x,ℓ)} δ² − Σ_{k=0}^{2} ∂R(x, ℓ)/∂x(2 − k) δ^k r(x(2), δ)

= ( x₁x₁²(1)ℓ(−1)δ + x₁x₁(1)x₁(2)ℓ(0)δ,
    x₁(2)(x₂x₁(2)ℓ(0) − x₁ℓ(−1)) − x₂x₁(2)x₁(3)ℓ(1) − x₁x₁(1)ℓ(−1)δ )^T δ²

  − ( x₁(1)ℓ(−1)δ x₁(2)x₁(3)δ + x₁ℓ(−1)δ² x₁(2)x₁(3)δ,
      −x₂ℓ(0)x₁(2)x₁(3)δ + ℓ(0)x₁(2)δ²(x₂(2)x₁(4) − x₁(2)δ) − ℓ(−1)δ(x₁(2)x₁(3)δ) )^T

= ( x₁x₁(1)δ,  −x₂x₁(2) − x₁δ )^T x₁(3)( δ²ℓ(3) − δℓ(1) )

= r(x, δ) x₁(3)( δ²ℓ(3) − δℓ(1) ).
3.2.3 p-Integrability
The approach presented in this book allows us to state a more general result con-
cerning p-integrability which is independent of any control system. This is done
hereafter.
Theorem 3.3 The right-submodule

Δ(δ] = span_{K∗(δ]} { r₁(x, δ), …, r_j(x, δ) }

is p-integrable if and only if the right-submodule Δ̂(δ] = Δ(x(−p), δ) is 0-integrable.
Proof Assume that Δ(δ] is p-integrable. Then there exist n − j independent exact differentials dλ_i(x) = Λ_i(x, δ)dx_[0](p) such that, denoting by Λ(x, δ) = (Λ₁^T(x, δ), …, Λ_{n−j}^T(x, δ))^T, then Λ(x, δ)Δ(δ] = 0. Consequently, for i ∈ [1, j],
which thus proves that Δ̂(δ] is 0-integrable. Of course, if Δ̂(δ] is 0-integrable, there
exist n− j exact differentials d λ̄i (x) = Λ̄i (x, δ)dx[0] such that
Λ̄(x, δ)Δ̂(δ] = 0. As a consequence also Λ̄(x, δ)Δ̂(δ]δ p = 0, which shows that
Δ̂(x( p), δ) = Δ(δ] is p-integrable.
Δ(δ] = span_{K∗(δ]} { ( −x₁(2)δ,  x₂(2) )^T }.
[R(x, ℓ), r(x, δ)] = ṙ(x, δ)|_{ẋ_[0]=R(x,ℓ)} δ² − ∂R(x, ℓ)/∂x(2) r(x(2), δ)

= ( x₁(4)ℓ(1)δ²,  x₂(4)ℓ(2) )^T δ − ( ℓ(−1)x₁(4)δ,  ℓ(0)x₂(4) )^T

= ( x₁(4)δ,  x₂(4) )^T ( δ²ℓ(4) − ℓ(0) ).

Δ̂(δ] = Δ(x(−2), δ) = span_{K∗(δ]} { ( −x₁δ,  x₂ )^T }.
We set

R(x(−2), ℓ) = ( −x₁ℓ(−1),  x₂ℓ(0) )^T,

so that

[R(x(−2), ℓ), r(x(−2), δ)] = ṙ(x(−2), δ)|_{ẋ_[0]=R(x(−2),ℓ)} − ∂R(x(−2), ℓ)/∂x(0) r(x(−2), δ)
A major problem in control theory is the possibility of describing the given system in different coordinates which may highlight particular structural properties. As already underlined in Chap. 1, in the delay context it is fundamental to be able to compute bicausal changes of coordinates, that is, diffeomorphisms which are causal and admit a causal inverse, as defined by Definition 1.1.
As will be shown in this section, the previous results on the 0-integrability of a right-submodule also have important consequences for the definition of a bicausal change of coordinates. Lemma 3.1 is instrumental for determining whether or not a given set of one-forms can be used to obtain a bicausal change of coordinates.
is closed and its right-annihilator is causal, there exists a dθ₁(x_[ᾱ]) independent of the dλ_i(x_[α])'s, i ∈ [1, n − k], over K(δ] and such that
The proof, which is omitted, can be found in Califano and Moog (2017); starting from it, the following result is immediate.
Theorem 3.4 Given k functions λ_i(x_[α]), i ∈ [1, k], whose differentials are independent over K(δ], there exist n − k functions θ_j(x_[ᾱ]), j ∈ [1, n − k], such that

span_{K(δ]} { dλ₁, …, dλ_k, dθ₁, …, dθ_{n−k} } = span_{K(δ]} { dx_[0] }

if and only if span_{K(δ]} {dλ₁, …, dλ_k} is closed and its right-annihilator is causal.
Proof If the k exact differentials dλ_i(x) can be completed to span all of dx_[0] over K(δ], then necessarily span_{K(δ]}{dλ₁, …, dλ_k} must be closed and its right-annihilator must be causal. Conversely, due to Lemma 3.1, if span_{K(δ]}{dλ₁, …, dλ_k} is closed and its right-annihilator is causal, then one can compute an exact differential dθ₁, independent over K(δ] of the dλ_i's, such that span_{K(δ]}{dλ₁, …, dλ_k, dθ₁} is closed and its right-annihilator is causal. Iterating, one gets the result.
y(t) = x1 (t)
ẏ(t) = x2 (t) + x2 (t − 1)
ÿ(t) = u(t) + u(t − 1).
The functions y and ẏ could be used to define a change of coordinates if they satisfied
the conditions of Lemma 3.1, that is
Example 3.6 Consider the function λ = x₁(t − 1)x₂(t) + x₂²(t) on IR². In this case,

r(x, δ) = ( −2x₂(1) − x₁,  x₂δ )^T,

which is not causal. Thus, the function λ cannot be used as a basis for a change of coordinates.
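The computation in Example 3.6 can be double-checked with a few lines of symbolic algebra. The following sketch (assuming SymPy is available) represents dλ and r(x, δ) as polynomials in δ whose coefficients are functions of time, multiplies them with the skew rule δ·a(t) = a(t−1)δ, and confirms that the product vanishes while r involves the advanced argument x₂(t+1), i.e. it is not causal.

import sympy as sp

t = sp.Symbol('t')
x1, x2 = sp.Function('x1'), sp.Function('x2')

def shift(expr, i):
    # apply the delay operator i times: replace t by t - i
    return expr.subs(t, t - i)

# dλ = x2(t)·δ·dx1 + (x1(t-1) + 2 x2(t))·dx2, written as δ-polynomial coefficients of (dx1, dx2)
dlam = [{1: x2(t)}, {0: x1(t - 1) + 2*x2(t)}]
# candidate right-annihilator from the example (note the advanced argument x2(t+1))
r = [{0: -2*x2(t + 1) - x1(t)}, {1: x2(t)}]

# product dλ · r in the skew ring where δ·a(t) = a(t-1)·δ
prod = {}
for comp_row, comp_col in zip(dlam, r):
    for i, a in comp_row.items():
        for j, b in comp_col.items():
            prod[i + j] = sp.simplify(prod.get(i + j, 0) + a*shift(b, i))
print({k: sp.simplify(v) for k, v in prod.items()})   # every coefficient is 0
# r annihilates dλ on the right, but it depends on x2(t+1): it is not causal.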
necessary to distinguish two cases. Accordingly, one has the following two definitions
of integrability.
Definition 3.5 A set of k one-forms {ω1 , . . . , ωk }, independent over K∗ (δ], is said
to be strongly integrable if there exist k independent functions {ϕ1 , . . . , ϕk }, such
that
spanK∗ (δ] {ω1 , . . . , ωk } = spanK∗ (δ] {dϕ1 , . . . , dϕk }.
can be chosen to be unimodular, then the one-forms ω are also strongly integrable.
Remark 3.2 It should be noted that the integrability of a closed left-submodule
spanK(δ] {ω1 , . . . , ωk } always implies strong integrability. As a consequence, the two
notions of strong and weak integrability coincide in case of delay-free one forms.
Integrability of a set of k one-forms {ω1 , . . . , ωk } is tested thanks to the so-called
Derived Flag Algorithm (DFA):
Starting from a given I0 the algorithm computes
allows one to compute the smallest number of time shifts of the given one-forms required for the maximal integration of the submodule. More precisely, the sequence I_i^p defined by (3.9) converges to an integrable vector space

I_∞^p = span_K { dϕ₁^p, …, dϕ_{γ_p}^p }   (3.11)
for some γ_p ≥ 0. By definition, dϕ_i^p ∈ span_{K(δ]}{ω₁, …, ω_k} for i = 1, …, γ_p and p ≥ 0. The exact one-forms dϕ_i^p, i = 1, …, γ_p, are independent over K, but may not be independent over K(δ]. A basis for span_{K(δ]}{dϕ₁^p, …, dϕ_{γ_p}^p} is obtained by computing a basis for

I_∞^0 ∪ ⋃_{i=1}^{p} I_∞^i mod (I_∞^{i−1}, δ I_∞^{i−1}),

as I_∞^i + δ I_∞^i ⊂ I_∞^{i+1}.
which allows one to compute, for each p ≥ 0, the exact differentials contained in the given submodule which depend on x(t), …, x(t − p) only. Both initializations allow the algorithm to converge toward the same integrable submodule over K(δ], but following different steps, as shown in the next example.
Example 3.7 Consider span_{K(δ]}{dx(t − 2)}. On one hand, the initialization (3.10) is completed for p = 0 as no time shift of dx(t − 2) is required for its integration. On the
other hand, initialization (3.12) yields a 0 limit for p = 0 and p = 1 as the exact
differential involves larger delays than x(t) and x(t − 1). The final result is obtained
for p = 2.
Assume that the maximum delay that appears in {ω1 , . . . , ωk } (either in the coef-
ficients or differentials) is s. The necessary and sufficient condition for strong inte-
grability of the one-forms {ω₁, …, ω_k} is given by the following theorem in terms of the limit I_∞^p.
Theorem 3.5 A set of one-forms {ω₁, …, ω_k}, independent over K(δ], is strongly integrable if and only if there exists an index p ≤ s(k − 1) such that, starting from I_0^p defined by (3.10), the derived flag algorithm (3.9) converges to I_∞^p given by (3.11) with

ω_i ∈ span_{K(δ]} { dϕ₁^p, …, dϕ_{γ_p}^p }   (3.13)

for i = 1, …, k.
It remains to show that p ≤ s(k − 1). Note that there exist infinitely many pairs
(A(δ), ϕ) that satisfy ω = A(δ)dϕ. Since the degree of unimodular matrices A(δ)
has a lower bound, then one can find a pair (A(δ), ϕ), where the degree of matrix
A(δ) is minimal among all possible pairs. Let A(δ) be such a unimodular matrix for
some functions ϕ = (ϕ1 , . . . , ϕk )T . Note that A(δ) and ϕ are not unique.
We show that the degree of A(δ) is less than or equal to s. By contradiction, assume that the degree of A(δ) is larger than s, for example, s + 1. Then for some i

where a_j^i(δ) ∈ K(δ], j = 1, …, k, and at least one polynomial a_j^i(δ) has degree s + 1. Let a_j^i(δ) = Σ_{l=0}^{s+1} a_{j,l}^i δ^l, j = 1, …, k. From (3.14), one gets

ω_i = Σ_{j=1}^{k} Σ_{l=0}^{s+1} a_{j,l}^i dϕ_j(−l),   (3.15)
where at least one coefficient a_{j,s+1}^i ∈ K is nonzero. For simplicity, assume that a_{1,s+1}^i ≠ 0 and a_{γ,s+1}^i = 0 for γ = 2, …, k. We have assumed that the maximum delay in ω_i is s, but the maximum delay in dϕ₁(−s − 1) is at least s + 1. Note that dϕ₁, …, dϕ₁(−s − 1), …, dϕ_k(−s − 1) are independent over K.
Therefore, to eliminate dϕ₁(−s − 1) from (3.15), write

dϕ₁(−s − 1) = Σ_{j=1}^{k} b_j(δ)dϕ_j + ω̄   (3.16)
for some coefficients b_j(δ) ∈ K(δ]. Let l_j = deg b_j(δ) ≤ s and let the one-form ω̄ ∈ span_K{dx, dx⁻, …, dx⁻ˢ}. Let l := min{l_j}. For clarity, let l = l₂ and b₂(δ) = δ^l. We show that ω̄ can be chosen such that it is integrable. By contradiction, assume that ω̄ cannot be chosen integrable. Then the coefficients of ω̄ must depend on delays larger than s. Since ω̄ is not integrable, the coefficients of a_{1,s+1}^i ω̄ also depend on delays larger than s. Now, substitute a_{1,s+1}^i dϕ₁(−s − 1) into (3.15). One gets that ω_i depends on a_{1,s+1}^i ω̄ and thus also on delays larger than s. This is a contradiction and thus ω̄ can be chosen integrable.
Let ω̄ = a dφ(−l) for some a, φ ∈ K. Then span_{K(δ]}{dϕ₁, …, dϕ_k} = span_{K(δ]}{dϕ₁, dφ, dϕ₃, …, dϕ_k} and there exists a unimodular matrix Ā(δ) with smaller degree than A(δ), and functions ϕ̄ = (ϕ₁, φ, ϕ₃, …, ϕ_k)^T, such that ω = Ā(δ)dϕ̄, which leads to a contradiction. Thus, the degree of A(δ) must be less than or equal to s and the degree of A⁻¹(δ) is less than or equal to s(k − 1), i.e. p ≤ s(k − 1). The general case requires a more technical proof.
Sufficiency. Let I_∞^p = span_K{dϕ}, where p ≤ s(k − 1). By construction, I_∞^p ⊂ span_{K(δ]}{ω₁, …, ω_k} and, by (3.13), ω_i ∈ span_{K(δ]}{dϕ} for i = 1, …, k. Thus, span_{K(δ]}{ω₁, …, ω_k} = span_{K(δ]}{dϕ}.
Since I_∞^p ⊆ I_∞^{p+1} for any p ≥ 0, one can check condition (3.13) step by step, increasing the value of p at every step. When for some p = p̄ condition (3.13) is satisfied, then it is satisfied for all p > p̄.
Given the set of one-forms {ω₁, …, ω_k}, independent over K(δ], the basis of the vector space I_∞^{s(k−1)} defines the basis for the largest integrable left-submodule contained in span_{K(δ]}{ω₁, …, ω_k}.
Lemma 3.2 A set of one-forms {ω1 , . . . , ωk } is weakly integrable if and only if the
left closure of the left-submodule, generated by {ω1 , . . . , ωk }, is (strongly) integrable.
Proof Necessity. By definitions of weak integrability and left closure, there exist
functions ϕ = (ϕ1 , . . . , ϕk )T such that dϕ = A(δ)ω̄, where ω̄ is the basis of the
closure of the left-submodule, generated by {ω1 , . . . , ωk }. Choose {dϕ1 , . . . , dϕk }
such that for i = 1, …, k

dϕ_i ≠ a dφ + Σ_{j=1, j≠i}^{k} b_j(δ)dϕ_j   (3.17)

for any φ ∈ K and b_j(δ) ∈ K(δ]. It remains to show that one can choose ϕ such that
ω̄i ∈ spanK(δ] {dϕ}.
By contradiction, assume that one cannot choose ϕ such that ω̄i ∈ spanK(δ] {dϕ}.
Then ω̄_k ∉ span_{K(δ]}{dϕ} and also ω̄_k(−j) ∉ span_{K(δ]}{dϕ₁, …, dϕ_k} for j ≥ 1 and any ϕ. Indeed, if

ω̄_k(−j) = Σ_i c_i(δ)dϕ_i,   (3.18)

then, since on the left-hand side of (3.18) everything is delayed at least j times, everything that is delayed less than j times on the right-hand side should cancel out. Therefore, one is able to find functions φ_i, ψ_i ∈ K, i = 1, …, k, such that dϕ_i = dφ_i + dψ_i and

Σ_i c_i(δ)dφ_i ∈ span_{K(δ]}{dx(−j)},   Σ_i c_i(δ)dψ_i = 0.
When one eliminates the basis elements, which are dependent over K(δ], one gets
that the rank of spanK(δ] {dx1 (t), dx1 (t − 1), d(x2 (t)x3 (t − 1))} is 2. To check the
condition (3.13), one has to check whether there exists a matrix A(δ) such that ω =
A(δ)dϕ, where ω = (ω1 , ω2 )T , ϕ = (ϕ1 , ϕ2 )T , ϕ1 = x2 (t)x3 (t − 1), and ϕ2 = x1 (t).
In fact, ω = A(δ)dϕ, where the unimodular matrix A(δ) is

A(δ) = [ 1,  x₂(t − 1)δ ;  δ,  1 + x₂(t − 2)δ² ].
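Unimodularity of A(δ) can be verified by exhibiting a polynomial inverse. The following sketch (assuming SymPy; the candidate inverse B(δ) is a guess made here for illustration, not taken from the text) multiplies skew-polynomial matrices with the rule δ·a(t) = a(t−1)δ and checks that A(δ)B(δ) = B(δ)A(δ) = I.

import sympy as sp

t = sp.Symbol('t')
x2 = sp.Function('x2')

def shift(expr, i):
    # apply the delay operator i times: replace t by t - i in the coefficient
    return expr.subs(t, t - i)

def sum_polys(polys):
    out = {}
    for p in polys:
        for k, v in p.items():
            out[k] = sp.simplify(out.get(k, 0) + v)
    return {k: v for k, v in out.items() if sp.simplify(v) != 0}

def poly_mul(p, q):
    # product of two skew polynomials (dicts: power of δ -> coefficient)
    out = {}
    for i, a in p.items():
        for j, b in q.items():
            out[i + j] = sp.simplify(out.get(i + j, 0) + a*shift(b, i))
    return {k: v for k, v in out.items() if sp.simplify(v) != 0}

def mat_mul(P, Q):
    n, m, r = len(P), len(Q[0]), len(Q)
    return [[sum_polys([poly_mul(P[i][k], Q[k][j]) for k in range(r)]) for j in range(m)]
            for i in range(n)]

# A(δ) from the example and an assumed candidate inverse B(δ)
A = [[{0: sp.Integer(1)}, {1: x2(t - 1)}],
     [{1: sp.Integer(1)}, {0: sp.Integer(1), 2: x2(t - 2)}]]
B = [[{0: sp.Integer(1), 2: x2(t - 1)}, {1: -x2(t - 1)}],
     [{1: sp.Integer(-1)}, {0: sp.Integer(1)}]]

print(mat_mul(A, B))   # [[{0: 1}, {}], [{}, {0: 1}]]
print(mat_mul(B, A))   # identity as well: A(δ) is unimodular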
ω1 = dx2 (t)
ω2 = x4 (t − 1)dx1 (t) + x2 (t)dx2 (t − 1) + x1 (t)dx4 (t − 1)
ω3 = x3 (t)x4 (t)dx2 (t) + x2 (t)x4 (t)dx3 (t) (3.20)
+x3 (t − 1)dx2 (t − 1) + x2 (t − 1)dx3 (t − 1).
For s(k − 1) = 2:

I_∞² = span_K { dx₂(t), d(x₄(t − 1)x₁(t)), dx₂(t − 1), dx₂(t − 2), d(x₄(t − 2)x₁(t − 1)) }.

Now, ω₁ ∈ I_∞² and ω₂ ∈ I_∞², but ω₃ ∉ I_∞². Thus, the one-forms (3.20) are not strongly integrable, and span_{K(δ]}{dx₂(t), d(x₄(t − 1)x₁(t))} is the largest integrable left-submodule contained in A = span_{K(δ]}{ω₁, ω₂, ω₃}.
Now, one can check if the one-forms (3.20) are weakly integrable. For that,
one has to compute the left closure of A and check if it is strongly integrable.
In practice, the left closure of a left-submodule A can be computed as the left-
kernel of its right-kernel Δ. Thus, the right-kernel of A is Δ = spanK(δ] {q(δ)}, where
q(δ) = (x1 (t)δ, 0, 0, −x4 (t))T . The left-kernel of Δ is
clK(δ] (A) = spanK(δ] {dx2 (t), dx3 (t), d(x4 (t − 1)x1 (t))}.
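That q(δ) indeed generates the right-kernel of A can be checked symbolically. The sketch below (assuming SymPy) multiplies each of the one-forms (3.20), written as rows of δ-polynomials in (dx₁, …, dx₄), by q(δ) with the skew rule δ·a(t) = a(t−1)δ, and verifies that all products vanish.

import sympy as sp

t = sp.Symbol('t')
x1, x2, x3, x4 = (sp.Function(n) for n in ('x1', 'x2', 'x3', 'x4'))

def shift(e, i):
    return e.subs(t, t - i)

def skew_mul(p, q):
    # product of two polynomials in δ (dicts: power -> coefficient)
    out = {}
    for i, a in p.items():
        for j, b in q.items():
            out[i + j] = sp.simplify(out.get(i + j, 0) + a*shift(b, i))
    return out

# the one-forms (3.20) as δ-polynomial coefficients of (dx1, dx2, dx3, dx4)
w1 = [{}, {0: sp.Integer(1)}, {}, {}]
w2 = [{0: x4(t - 1)}, {1: x2(t)}, {}, {1: x1(t)}]
w3 = [{}, {0: x3(t)*x4(t), 1: x3(t - 1)}, {0: x2(t)*x4(t), 1: x2(t - 1)}, {}]
# the right-kernel generator q(δ) = (x1(t)δ, 0, 0, -x4(t))^T
q = [{1: x1(t)}, {}, {}, {0: -x4(t)}]

for w in (w1, w2, w3):
    prod = {}
    for wc, qc in zip(w, q):
        for k, v in skew_mul(wc, qc).items():
            prod[k] = sp.simplify(prod.get(k, 0) + v)
    print({k: v for k, v in prod.items() if v != 0})   # {} for every one-form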
To show more explicitly how the integrability of right-submodules and weak inte-
grability of one-forms are related, consider Algorithm (3.9) initialized with (3.12).
The left-kernel of Δ_i, defined above, is equal to I_∞^i, where I_∞^i is computed with respect to the closure of a given submodule.
The next example shows the importance of working on K∗ and K∗(δ]. In fact, while the one-forms considered are causal, the right-kernel is not, thus requiring one to consider the Polynomial Lie Bracket on K∗(δ], as defined in Definition 2.4, to check integrability.
The right-kernel of the left-submodule spanK(δ] {ω1 , ω2 } is not causal (i.e. one needs
forward-shifts of variables x(t) to represent it), and is given by
Δ = span_{K∗(δ]} { ( x₁(−1)δ,  0,  −x₁²(0) − x₁(1)x₁(−1)δ )^T } = span_{K∗(δ]} { r₁(x, δ) }.
[R₁(x, ℓ), r₁(x, δ)] = ṙ₁(x, δ)|_{ẋ_[0]=R₁(x,ℓ)} δ − Σ_{k=0}^{2} ∂R₁(x, ℓ)/∂x(1 − k) δ^k r₁(x(s₁), δ)

= ( x₁(−2)δ²,  0,  −x₁(0)x₁(−1)δ − x₁(1)x₁(−2)δ² )^T ( ℓ(0) − δℓ(2) ).
Since

( x₁(−2)δ²,  0,  −x₁(0)x₁(−1)δ − x₁(1)x₁(−2)δ² )^T = r₁(x, δ) ( x₁(−1)/x₁ ) δ,
Δ is involutive, which shows that the closure of ω is strongly integrable. In fact, one gets that the left-annihilator of Δ is generated by

Δ⊥ = span_{K∗(δ]} { d(x₁(t)x₁(t − 1) + x₃(t − 1)), dx₂(t) } = span_{K∗(δ]} { dλ₁, dλ₂ },

which shows that span_{K∗(δ]}{ω₁, ω₂} ⊂ span_{K∗(δ]}{dλ₁, dλ₂}, that is, weak integrability.
The problem can be addressed by working directly on one-forms. In this case,
starting from ω1 and ω2 , we first consider the left closure which is given by
and then we can apply the derived flag algorithm, thus getting
Ωc = spanK∗ (δ] {d(x1 (t)x1 (t − 1) + x3 (t − 1)), dx2 (t)} = spanK∗ (δ] {dλ1 , dλ2 }
as expected.
3.4 Problems
Δ(δ] = span_{K∗(δ]} { ( x₁(0)x₁(1)δ,  −x₁(0)x₁(2) − x₁²(0)δ )^T }
is 0-integrable.
Δ(δ] = span_{K∗(δ]} { ( x₁(2)x₁(3)δ,  −x₁(2)x₁(4) − x₁²(2)δ )^T }
Lemma 3.3 Consider the distribution Δ_i defined by (3.8), and let ρ_i = dim(Δ̄_i), with ρ_{−1} = ns. Then
(i) If dλ(x) is such that span{dλ(x)} = Δ̄_{i−1}^⊥, then span{dλ(x), dλ(x(−1))} ⊂ Δ̄_i^⊥.
(ii) A canonical basis for Δ̄_i^⊥ is defined for i ≥ 0 as follows:
Pick dλ₀(x_[0]) such that span{dλ₀(x_[0])} = Δ̄₀^⊥, with rank d(λ₀) = μ₀ = ρ₀ − ρ_{−1}.
At step ℓ ≤ i pick dλ_ℓ(x_[ℓ]) such that span{dλ_k(x_[k](−j)), k ∈ [0, ℓ], j ∈ [0, ℓ − k]} = Δ̄_ℓ^⊥ and dλ_ℓ(x_[ℓ]) ∉ Δ̄_{ℓ−1}^⊥, with rank d(λ_ℓ) = μ_ℓ = ρ_ℓ − 2ρ_{ℓ−1} + ρ_{ℓ−2}.
Chapter 4
Accessibility of Nonlinear Time-Delay Systems
ż 1 (t) = 0
ż 2 (t) = u(t)
x1 (t) = x10
x2 (t) = u 0 t + x20 .
• Now let the control switch to u(t) = u 1 on the interval t ∈ [1, 2). The dynamics
on such interval becomes
so that once the time t¯ ∈ [1, 2) at which one wants to reach the desired state x f is
fixed, one gets that
x_{1f}(t̄) = ½ (t̄ − 1)² u₀² + (t̄ − 1) x₂₀ u₀ + x₁₀
x_{2f}(t̄) = (t̄ − 1) u₁ + u₀ + x₂₀.

one can reach any final point x_f from the initial point x₀ as long as

x₂₀² − 2(x₁₀ − x_{1f}) ≥ 0.
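These two relations can also be inverted numerically. The minimal sketch below (with t̄ = 2 chosen for concreteness; all names are illustrative) solves the quadratic for u₀ and then recovers u₁, returning None exactly when the reachability condition above fails.

import math

def piecewise_constant_controls(x10, x20, x1f, x2f, tbar=2.0):
    h = tbar - 1.0                        # length of the second control interval
    disc = x20**2 - 2.0*(x10 - x1f)       # reachability condition from the text
    if disc < 0:
        return None                       # target not reachable with this control class
    u0 = (-x20 + math.sqrt(disc)) / h     # root of 0.5*h**2*u0**2 + h*x20*u0 + x10 - x1f = 0
    u1 = (x2f - u0 - x20) / h             # then x2f = h*u1 + u0 + x20
    return u0, u1

print(piecewise_constant_controls(x10=0.0, x20=1.0, x1f=2.0, x2f=0.5))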
Of course, using non-constant controls may allow one to reach a larger region of the plane. However, this simple control already shows, on the one hand, that the system is accessible in a weaker sense (the set of reachable points is open) and, on the other hand, that while it is possible to pass through the desired final point x_f, it is not possible to stay at x_f forever.
This example shows the necessity of introducing a weaker notion of accessibility, which we call t-accessibility. We end this introductory paragraph by giving the formal definitions of accessible and t-accessible systems, which will be used later on in the chapter.
Consider the dynamics
ẋ_[0] = F(x_[s]) + Σ_{i=0}^{l} Σ_{j=1}^{m} G_{ji}(x_[s]) u_{j,[0]}(−i).   (4.4)
Definition 4.1 The state x f ∈ IR n is said to be reachable from the initial condition
ϕ(t), t ∈ [−sτ , 0), if there exist a time t f and a Lebesgue measurable input u(t)
defined for t ∈ [0, t f ] such that the solution x(t f , ϕ) of (4.4) equals x f .
Definition 4.2 System (4.4) is said to be t-accessible from the initial condition ϕ(t), t ∈ [−sτ, 0), if the closure of the set of its reachable states has a nonempty interior.
There may exist some singular initial conditions where the system is not t-
accessible. Thus, system (4.4) is just said to be t-accessible if it is t-accessible
from almost any initial condition.
Definition 4.3 System (4.4) is fully accessible if there does not exist any autonomous
function for the system, that is, a non-constant function λ(x) whose time derivative
of any order along the dynamics of the system is never affected by the control.
To tackle the accessibility problem for a given dynamics of the form (4.4), we are
essentially looking for a bicausal change of coordinates z[0] = φ(x) such that in the
new coordinates the system is split into two subsystems
with the subsystem S2 completely accessible, that is, satisfying definition 4.3.
In order to address this problem, we need to refer to the differential representation
of the given dynamics (4.4), which was introduced in Sect. 2.3. Thus, by applying
the differential operator d to both sides of (4.4), its differential form representation
is derived as
d ẋ[0] = f (x[s] , u[0] , δ)dx[0] + g1 (x[s] , δ)du[0] , (4.5)
l
g1 (x, δ) =(g11 , . . . , g1m ), g1i = G ik (x[s] )δ k, i ∈ [1, m]. (4.7)
k=0
We will assume, without loss of generality, that rank K(δ] (g1 (x, δ)) = m (the number
of inputs), that is, each input acts independently on the system.
4.1 The Accessibility Submodules in the Delay Context 61
The first step consists in finding out the maximum number of independent
autonomous functions for the given system and then in showing that these func-
tions can be used to define a bicausal change of coordinates. To this end, we need
to characterize the notion of relative degree, since as it will be stated later on, and
in accordance with the delay-free case, autonomous functions have actually infinite
relative degree. Starting from the g1i ’s defined by (4.7), we thus consider the accessi-
bility submodule generators, introduced in Márquez-Martínez (1999, 2000), defined
(up to the sign) as
gi+1, j (x, u[i−1], δ) = ġi, j (x, u[i−2], δ) − f (x, u, δ)gi, j (x, u[i−2], δ).
Let us now recall that a function λ(x[s̄] ) has finite relative degree k if ∀l ∈ [1, m],
and ∀i ∈ [1, k − 1]
It immediately follows that a function λ(x) has relative degree k > 1 if and only if
Proposition 4.1 A function λ(x) has relative degree k > 1 if and only if ∀l ∈ [1, m],
if its relative degree is greater than n, which also allows to characterize autonomous
functions. We have, in fact, the following results which allow to derive an accessi-
bility criterion.
Lemma 4.1 Given the dynamics (4.4), the relative degree of a non-constant function
λ(x[s̄] ) ∈ K is greater than n if and only if it is infinite.
Theorem 4.1 The dynamics (4.4) is locally accessible if and only if the following
equivalent statements hold true:
• Rn (x, u[n−2] , δ) is torsion free over K(δ],
• rank K(δ] Rn (x, u[n−2] , δ) = n for some u[n−2] , and
• rank R̄n (x, 0, δ) = n.
Proof If R_n(x, u_[n−2], δ) is torsion free over K(δ], then there is no nonzero element which annihilates R_n(x, u_[n−2], δ), that is, rank_{K(δ]} R_n(x, u_[n−2], δ) = n. Consequently, there cannot exist any function with infinite relative degree, rank R̄_n(x, 0, δ) = n, and the given system is accessible. As for the converse, assume that R_n(x, u_[n−2], δ) is not torsion free over K(δ]. Then rank_{K(δ]} R_n(x, u_[n−2], δ) = k < n for all possible choices of u_[k−2]. Accordingly (see Proposition 4.4), R̄_n(x, 0, δ], the involutive closure of R_n(x, 0, δ], has rank k, so that there exist n − k exact differentials in the left-annihilator, independent over K(δ], which would contradict the assumption that the system is fully accessible, since by Proposition 4.1 the corresponding functions have infinite relative degree.
Example 4.3 (The Chained Form Model) Consider again the two-dimensional sys-
tem (1.8):
ẋ_[0] = g(x_[1]) u_[0] = ( x_{2,[0]}(−1),  1 )^T u_[0].
As already discussed in Sect. 2.5 and shown in Fig. 1.2, when the delay τ ≠ 0 the system becomes fully accessible, as opposed to the delay-free case (Bloch 2003; Murray and Sastry 1993; Sørdalen 1993). In fact, through standard computations, one has that

g₁(x, δ) = ( x_{2,[0]}(−1),  1 )^T,   g₂(x, u, δ) = ( u_[0](−1) − u_[0]δ,  0 )^T,

so that the accessibility matrix R₂(x, u, δ) = ( g₁(x, δ), g₂(x, u, δ) )
has full rank whenever the control is different from 0, which, according to Theo-
rem 4.1, ensures the full accessibility of the given system. An extensive discussion
on this topic can be found in Califano et al. (2013), Li et al. (2011).
We end this section by noting that the accessibility property of a system can also be characterized by working on one-forms. In fact, starting from the definition of infinite relative degree of a one-form, it turns out that a system is completely accessible if and only if the set of integrable one-forms with infinite relative degree is empty. To get the result, we first have to define the relative degree of a one-form, which is given hereafter.
Denoting by M = span_{K(δ]}{dx(t), du^(k)(t); k ≥ 0}, let us now consider the sequence of left-submodules H₁ ⊃ H₂ ⊃ ⋯ of M defined as follows:

H₁ = span_{K(δ]}{dx(t)}
H_i = span_{K(δ]}{ω ∈ H_{i−1} | ω̇ ∈ H_{i−1}}.   (4.14)
Since H1 has finite rank and all the left submodule Hi are closed it was shown in
Xia et al. (2002) that the sequence (4.14) converges. Let now H∞ be the limit of
sequence (4.14) and let Ĥi denote the largest integrable left submodule contained
in Hi . A left submodule Hi contains all the one-forms with relative degree equal to
or greater than i. Thus, H∞ contains all the one-forms which have infinite relative
degree. As a consequence, the accessibility of system (4.4) can be characterized in
the following way.
Theorem 4.2 System (4.4) is accessible if and only if Ĥ∞ = ∅.
Let us then consider Rn (x, 0, δ) = spanK(δ] {g1 (x, δ), . . . , gn (x, 0, δ)} and, since
the elements of the submodule are by construction causal, consider for i ≥ 0, the
sequence of distributions G_i ⊂ span{∂/∂x_[0], …, ∂/∂x_[0](−i − s)} defined as

G_i = span [ g⁰(x_[s])  ⋯  g^ℓ(x_[s])   0                              0
             0              ⋱                  ⋱                       ⋮
             ⋮              0   g⁰(x_[s](−i))  ⋯  g^ℓ(x_[s](−i))       0
             0              ⋯                  0                  I_{ns} ],   (4.15)

where ℓ represents the maximum degree in δ and s the maximum delay in x present in the g_{i,j}'s. G_i is a distribution in IR^{n(s+i+1)}, as is its involutive closure Ḡ_i. Let ρ_i = rank(Ḡ_i), with ρ_{−1} = ns. The following result can be stated.
Proposition 4.2 Assume that the system Σ, given by (4.4), is not accessible, i.e. rank R_n(x, u, δ) = k < n. Then the following facts hold true:
(i) The system Σ possesses n − k independent (over K(δ]) autonomous exact differentials.
(ii) A canonical basis for Ḡ_i^⊥ is defined for i ≥ 0 as follows.
Let dλ₀(x_[0]) be such that span{dλ₀(x_[0])} = Ḡ₀^⊥, with rank(dλ₀) = μ₀ = ρ₀ − ρ_{−1}.
Let dλ₁(x_[1]) ∉ Ḡ₀^⊥, with rank(dλ₁) = μ₁ = ρ₁ − 2ρ₀ + ρ_{−1}, be such that
More generally, let dλ_i(x_[i]) ∉ Ḡ_{i−1}^⊥, with rank(dλ_i) = μ_i = ρ_i − 2ρ_{i−1} + ρ_{i−2}, be such that span{dλ_μ(x_[μ](−j)), μ ∈ [0, i], j ∈ [0, i − μ]} = Ḡ_i^⊥.
(iii) Let ℓ̄ represent the maximum degree in δ and s̄ the maximum delay in x in R_n(x, u, δ). Then there exists γ ≤ s̄ + k ℓ̄ such that any other autonomous function λ(x) satisfies
that is, Ḡ_γ characterizes completely all the independent autonomous functions of Σ.
Proof (i) is a direct consequence of Proposition 4.4. (ii) is a direct consequence of
Lemma 3.3 in the appendix, where Δi = Gi is causal by assumption, thus ensur-
ing that the left-annihilator is causal also. Finally, (iii) is a direct consequence of
Lemma 1.1.
Theorem 4.3 Consider the continuous-time system (4.4). Let γ be the smallest index
such that any autonomous function λ(x) associated to the given system satisfies
where
then
(1.) ∃ dλγ+1 (x) such that
dz_[0] = ( dz_{1,[0]}, …, dz_{γ+1,[0]}, dz_{γ+2,[0]} )^T = ( dλ₀(x_[0]), …, dλ_γ(x_[γ]), dλ_{γ+1}(x) )^T = T(x, δ)dx_[0]
Proof By construction spanK(δ] {dλ0 (x), . . . , dλγ (x)} is closed and its
right-annihilator is causal so that, according to Proposition 4.2, it is possible to
compute λγ+1 (x) such that
dz_[0] = ( dz_{1,[0]}, …, dz_{γ+1,[0]}, dz_{γ+2,[0]} )^T = ( dλ₀(x_[0]), …, dλ_γ(x_[γ]), dλ_{γ+1}(x) )^T = T(x, δ)dx_[0]   (4.17)
On the other hand, by assumption for any k ≥ 1 and any j ∈ [1, m],
It follows that for any k ≥ 1 and any j ∈ [1, m], by considering that d λ̇i (x) is given
by (4.18), then, due to (4.19),
Γ (x, δ)gk, j (x, u, δ) = Λ̇i (x, δ)gk, j (x, u, δ) + Λi (x, δ) f (x, u, δ)gk, j (x, u, δ) = 0.
As a consequence, d λ̇i ∈ spanK(δ] {dλ0 (x[0] ), . . . , dλγ (x[γ] )} for any i ∈ [0, γ].
Accordingly, in the coordinates (4.17), the system necessarily reads (4.16).
Example 4.2 continued. Consider again system (4.2). Its differential representation is

d ẋ_[0] = [ 0,  u_[0](−1)δ ;  0,  0 ] dx_[0] + ( x_{2,[0]}(−1)δ,  1 )^T du_[0],

so that

g₁(x, δ) = ( x_{2,[0]}(−1)δ,  1 )^T,   g₂(x, u, δ) = ( 0,  0 )^T.

The accessibility matrix R₂(x, u, δ) = ( g₁(x, δ), g₂(x, u, δ) )
has clearly rank 1, which according to Theorem 4.1 implies that the system is not
completely accessible.
According to Proposition 4.2, there exists an autonomous function, which can be
computed by considering the distributions Gi . Starting from G0 we get
G₀ = span_{K(δ]} [ g⁰  g¹  0 ;  0  0  I ] = span_{K(δ]} [ x_{2,[0]}(−1)  0  0  0 ;  1  0  0  0 ;  0  0  1  0 ;  0  0  0  1 ].
Ḡ₁ = span_{K(δ]} { ∂/∂x_{2,[0]},  x_{2,[0]}(−1) ∂/∂x_{1,[0]} + ∂/∂x_{2,[0]}(−1),  ∂/∂x_{1,[0]}(−1),  ∂/∂x_[0](−2) }.
The accessibility generators gi (x, u[i−2] , δ)’s are strictly linked to the Polynomial
Lie Bracket, thus generalizing to the delay context their definition in the nonlinear
delay-free case.
In fact, starting from the dynamics (4.4), we can consider
F̄(x, u, ℓ) := ( F(x) + Σ_{i=0}^{l} Σ_{j=1}^{m} G_{ji}(x) u_{[0],j}(−i) ) ℓ(0).
gi+1, j (x, u[i−1], δ) = ġi, j (x, u[i−2], δ) − f (x, u, δ)gi, j (x, u[i−2], δ)
which implies that they can be expressed in terms of Extended Lie Brackets. In fact,
standard but tedious computations show that setting
F̄₀(x, δ) = Σ_{j=0}^{ns} F̄₀^j(x)δ^j = Σ_{j=0}^{ns} F(x)δ^j,

for i ≤ n, and g_{il}(x, 0, δ) = Σ_{p=0}^{is} g_{il}^p(x, 0)δ^p, and denoting g_{il}^p(x, 0) by g_{il}^p(0), then

g_{il}(x, 0, δ) = ad^{i−1}_{F̄(x,0,1)} g_{1l}(x, δ) = Σ_{p=0}^{is} [F̄₀^{is}, g_{i−1,l}(0)]^p_{E₀} δ^p = Σ_{p=0}^{is} [F̄₀^{is}, …, [F̄₀^{is}, g_{1l}]_{E_{is}}]^p_{E₀} δ^p.
where c_μ^0 = c_1^q = 1 and, for μ > 1, q > 0, c_μ^q = c_{μ−1}^q + c_μ^{q−1}, and m_i(x, u_{[i−3]}, δ) is given by a linear combination, through real coefficients, of terms of the form

[g_{μ₁, j₁}(0), …, [g_{μ_ν, j_ν}(0), g_{i−q,l}(0)]_{E_{is}}]^{(ℓ)}_{E₀} δ^ℓ Π_{μ=1}^{ν} u_{j_μ}(−i_μ),

where ν ∈ [2, i − 1], j_μ ∈ [1, m], and q = k + Σ_{k=1}^{ν} μ_k ≤ i − 1.
The following result can be easily proven.
Proposition 4.3 If for some coefficient α(x, u, δ), gi+1, j (·)α(x, u, δ) ∈ Ri , then
∀ k ≥ 0 there exist coefficients ᾱk (x, u, δ) such that gi+k+1, j (·)ᾱk (x, u, δ) ∈ Ri .
The possibility of expressing the accessibility generators g_{il}(·, δ) as a linear combination of Extended Lie Brackets of the g_{kj}'s for j ∈ [1, l − 1] allows one to prove that once the dimension of the accessibility submodules R_i stabilizes, it cannot grow any further, so that R_n has maximal dimension over K(δ]. One can thus state the next result, whose proof can be found in Califano and Moog (2017) and which is also at the basis of Theorem 4.1.
Proposition 4.4 Let k = rankK(δ] (Rn (x, u, δ)) for almost all u, and set
Then R̄n (x, 0, δ), the involutive closure of Rn (x, 0, δ) has rank k.
Example 4.3 cont'd: For the dynamics (2.28), we have shown that the accessibility matrix has full rank 2 for u ≠ 0 since it is given by

R₂(x, u, δ) = [ x_{2,[0]}(−1),  u_[0](−1) − u_[0]δ ;  1,  0 ].

R₂(x, 0, δ) instead has dimension 1, while its involutive closure has again dimension 2. In fact, using the Polynomial Lie Bracket, we have that

R₂(x, 0, δ) = [ x_{2,[0]}(−1) ;  1 ],   R̄₂(x, 0, δ) = [ x_{2,[0]}(−1),  1 ;  1,  0 ],
as expected.
has rank n − j < n. In the general case, there exist j autonomous functions λ_i(x_[s]), i ∈ [1, j], such that
A first simple result can be immediately deduced when all the autonomous functions
depend on non-delayed state variables only. In this case, the next result can be stated.
Theorem 4.4 Consider the dynamics (4.4) and assume that the system is not fully
accessible with rank Rn (x, u, δ) = n − j, j > 0. If Ḡ0 given by (4.15) satisfies
dim Ḡ0⊥ = j, then the given system is also not t-accessible.
Proof Clearly, if dim Ḡ0⊥ = j then all the independent autonomous functions of the
system are delay free. Let λi (x(t)), i ∈ [1, j] be such independent functions. Then
under the delay-free change of coordinates
( z₁(t), …, z_j(t), χ₁(t), …, χ_{n−j}(t) )^T = ( λ₁(x(t)), …, λ_j(x(t)), ϕ₁(x(t)), …, ϕ_{n−j}(x(t)) )^T,
where the functions ϕi , i ∈ [1, n − j], are chosen to define any basis completion,
the given system reads
Once the initial condition φ(t) over the interval [−sτ , 0) is fixed, the z-variables
evolve along fixed trajectories defined by the initial condition, which proves that the
system is not t-accessible.
The driftless case represents a particular case, since one has that if λ(x) is an
autonomous element, then λ̇ = 0. As a consequence in order to be t-accessible, the
autonomous functions for a driftless system cannot depend on x(t) only, as underlined
hereafter.
Corollary 4.1 The dynamics (4.4) with F(x[s] ) = 0 is t-accessible only if Ḡ0⊥ = 0.
Things become more involved when only part of the autonomous functions are delay free. In this case, a necessary condition can be obtained by identifying whether there is a subset of the autonomous functions which is characterized by functions that are delay free and whose derivatives fall in the same subset. To this end, the following algorithm was proposed in Gennari and Califano (2018).
Let Ω = span { dλ₁(x(t)), …, dλ_j(x(t)) }.
Algorithm 4.1
start:
  Set Ω₀ = Ω
  Let Ω₁ = span {ω ∈ Ω₀ : ω̇ ∈ Ω₀}
  Let k := 1
  For k ≤ dim(Ω₀)
    If Ω_k = Ω_{k−1} goto found
    Else
      Set Ω_{k+1} = span {ω ∈ Ω_k : ω̇ ∈ Ω_k}
      Set k ← k + 1
  End
found:
  Set Ω = maxintegrable(Ω_k)
  If dim(span{Ω_k}) = dim(span{Ω})
    goto close
  Else goto start
close:
End
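The control flow of Algorithm 4.1 can be transcribed in a few lines. The sketch below is purely structural: the symbolic operations (the derivative of a one-form along the dynamics, the maximal integrable codistribution, membership and dimension tests) are passed in as user-supplied callables and are assumptions of this sketch, not part of the algorithm's statement.

def algorithm_4_1(Omega, time_derivative, max_integrable, contains, dim):
    while True:
        # inner loop: keep only the one-forms whose derivative stays in the current set
        Omega_k = Omega
        for _ in range(dim(Omega)):
            Omega_next = [w for w in Omega_k if contains(Omega_k, time_derivative(w))]
            if dim(Omega_next) == dim(Omega_k):
                break                      # the sequence has stabilized ("found")
            Omega_k = Omega_next
        # replace Omega by the maximal integrable part of the stabilized set
        Omega_new = max_integrable(Omega_k)
        if dim(Omega_new) == dim(Omega_k):
            return Omega_new               # nothing was lost: stop ("close")
        Omega = Omega_new                  # otherwise restart from the smaller set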
Proposition 4.5 Let j be the dimension of Ω. Then Algorithm 4.1 ends after k ≤ j
steps.
Based on the previous algorithm the following result was also obtained.
Theorem 4.5 Suppose that system (4.4) is not accessible. Let rank R_n(x, u, δ) = n − j and, accordingly, R_n^⊥(x, u, δ) = span_{K(δ]} { dλ₁(·), …, dλ_j(·) }. Let the first j̄ < j functions be independent of the delayed variables, that is, λ_i = λ_i(x(t)), i ∈ [1, j̄]. If Algorithm 4.1 applied to Ω = span{ dλ₁, …, dλ_{j̄} } ends with Ω nonzero, then system (4.4) is not t-accessible.
The proof is based on the consideration that the algorithm identifies a subset of the
autonomous functions which depend only on x(t) and identifies a non-accessible sub-
system with special characteristics. Of course, it is not the maximal non-accessible
subsystem in general. Furthermore, since the algorithm works on one-forms which
are delay-free, the maximal integrable codistribution contained in Ωk can be com-
puted by using standard results.
Example 4.2 continued. Let us consider again the dynamics (4.2). We have already shown that the system is not fully accessible, but that it is t-accessible. As a matter of fact, Ḡ₀ has full rank, so that the autonomous function was obtained from Ḡ₁ and is λ(x) = x₁(t) − ½ x₂²(t − 1).
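This can be confirmed symbolically. The sketch below (assuming SymPy, and assuming the state equations ẋ₁(t) = x₂(t − 1)u(t − 1), ẋ₂(t) = u(t), which are consistent with the differential representation of (4.2) given above) checks that the time derivative of λ vanishes identically, so that no derivative of λ is ever affected by the control.

import sympy as sp

t = sp.Symbol('t')
u = sp.Function('u')
x1, x2 = sp.Function('x1'), sp.Function('x2')

# assumed state equations, consistent with the differential representation of (4.2)
x1dot = x2(t - 1) * u(t - 1)
x2dot = u(t)

# d/dt [ x1(t) - x2(t-1)**2 / 2 ] by the chain rule
lam_dot = x1dot - x2(t - 1) * x2dot.subs(t, t - 1)
print(sp.simplify(lam_dot))   # 0: lambda is an autonomous function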
4.5 Problems
τ_i(x, u, δ) := ad^{i−1}_{F̄(x,u,1)} τ₁(x_[p,s], δ), for i > 1.

ad^k_{F̄(x,u,1)} ( τ₁(x, δ)α ) = Σ_{j=0}^{k} (k choose j) τ_{k−j+1}(x, u, δ) α^{(j)}.   (4.24)
(b) Find an autonomous element for the following delay-free nonlinear system:
ẋ₁(t) = x₁(t)u(t)
ẋ₂(t) = x₂(t)u(t).
(c) Show that the following nonlinear time-delay system is fully accessible or,
equivalently, there is no autonomous element:
ẋ₁(t) = x₁(t − 1)u(t)
ẋ₂(t) = x₂(t − 1)u(t).
where the initial condition for x3 is some smooth function c(t) defined for t ∈
[−τ , 0].
Checking accessibility amounts to computing combinations of state variables with infinite relative degree, i.e. which are not affected by the control input.
(a) Set c(t) = x3 (t − τ ) which is considered as a time-varying parameter which
is not affected by the control input:
Σ̃₀ :  ẋ₁(t) = x₂(t) + u(t)
       ẋ₂(t) = c(t)u(t)
       ẋ₃(t) = u(t).
Compute all functions depending on x(t) whose relative degree is larger than
or equal to 2.
(b) For t ≥ τ , set x4 (t) = x3 (t − τ ), u(t − τ ) = v(t) and the extended system
Σ₁ :  ẋ₁(t) = x₂(t) + u(t)
      ẋ₂(t) = x₄(t)u(t)
      ẋ₃(t) = u(t)
      ẋ₄(t) = v(t).
Compute all functions depending on x(t) whose relative degree is larger than
or equal to 2 for the dynamics Σ1 . Check its accessibility.
Compare the number of autonomous elements of Σ1 with the number of
autonomous elements of Σ̃0 .
Conclude about the accessibility of Σ0
4. Consider the system
where the initial condition for x is some smooth function c1 (t) for t ∈ [−1, 0],
c2 (t) for t ∈ [−2, −1[,…, ck (t) for t ∈ [−k, −(k − 1)[. Define the system
Any autonomous element for Σ0 is also an autonomous element for the initial
system, at least for t ∈ [0, 1], i.e. it will not depend on the control input for
t ∈ [0, 1].
Now, denote ξ(t) = x(t − 1) and define the extended system
ẋ(t) = Σ_{j=0}^{s} A_j x(t − jτ) + Σ_{j=0}^{s} B_j u(t − jτ)
y(t) = Σ_{j=0}^{s} C_j x(t − jτ),   (5.1)
with τ constant, always admits a strongly observable realization of the same order n, since the input–output equation is again of order n and of retarded type. Surprisingly, this is no longer true in the nonlinear case, as will be discussed later on.
Let us now consider the nonlinear time-delay system

Σ :  ẋ_[0] = F(x_[s]) + Σ_{j=1}^{m} Σ_{i=0}^{l} G_{ji}(x_[s]) u_{[0],j}(−i)
     y_[0] = H(x_[s]).   (5.2)
which shows that according to Definition 5.1 the given system is weakly observable,
but not strongly observable, since 1 + δ does not have a polynomial inverse.
The main difference between the definitions of strong and weak observability lies in the fact that the former ensures that the state at time t can be reconstructed from
the observation output and its first n − 1 time derivatives at time t. However, if one refers to the example, which shows that the system is not strongly observable, it is also immediate to notice that considering a larger number of derivatives allows one to reconstruct the state for almost all input sequences. In fact, if one considers also the first-order derivative

ẏ = x(−1)u + x(−2)u(−1),   (5.4)

after standard computations one gets that x(t) can be recovered as a function of the output, its first-order derivative and the input, whenever u(t) ≠ u(t − τ). In fact, one gets that

x(0) = y(0) + ( ẏ(0) − u(−1)y(−1) ) / ( u(−1) − u ).   (5.5)
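Formula (5.5) can be verified with a quick symbolic check. The sketch below (assuming SymPy and assuming, as in Example 5.1, that y = x + x(−1)) treats the delayed values as independent symbols and confirms that the right-hand side of (5.5) reduces to x(0), provided u(0) ≠ u(−1).

import sympy as sp

x0, xm1, xm2, u, um1 = sp.symbols('x0 xm1 xm2 u um1')   # x(0), x(-1), x(-2), u(0), u(-1)

y0 = x0 + xm1            # y(0)  = x(0) + x(-1)   (assumption of this sketch)
ym1 = xm1 + xm2          # y(-1) = x(-1) + x(-2)
ydot0 = xm1*u + xm2*um1  # equation (5.4)

rhs = y0 + (ydot0 - um1*ym1) / (um1 - u)   # right-hand side of (5.5)
print(sp.simplify(rhs - x0))               # 0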
Proposition 5.2 System (5.2) with x(t) ∈ IR n is strongly observable if and only if
Assume now that the system is neither strongly nor weakly observable. This means
that there exists an index k < n such that
If such a condition is verified, then one may try to decompose the system into an observable and a non-observable subsystem.
Let us thus consider the autonomous system (that is, without control)
ẋ[0] = F(x[s] )
(5.6)
y[0] = h(x),
and let us assume that for the given system the observability matrix has rank k. Then
T₁(x, δ)dx = ( dy(x), …, dy^{(k−1)}(x) )^T   (5.7)
has full row rank and one may wonder if the output and its derivatives could be used
to define a bicausal change of coordinates. The following result holds true.
Proposition 5.3 Let k be the rank of the observability matrix for the dynamics (5.6).
Then there exists a basis completion χ = ψ(x) such that
( z₁, …, z_k, χ )^T = ( y, …, y^{(k−1)}, ψ )^T
is a bicausal change of coordinates, if and only if T1 (x, δ) in (5.7) is closed and its
right-annihilator is causal.
ẋ1 = 0
ẋ2 = 0
y = x1 x2 (−1) + x2 x1 (−1).
The matrix has clearly rank 1 so the system is not completely observable. Let us now
check the right annihilator of dy to see if it can be used to define a bicausal change
of coordinates. As a matter of fact, one easily gets that
ker dy = span { ( −x₁(−1) − x₁(+1) [x₂x₁(−2) − x₁x₂(−2)]/[x₂(+1)x₁(−1) − x₂(−1)x₁(+1)] δ,
                  x₂(−1) + x₂(+1) [x₂x₁(−2) − x₁x₂(−2)]/[x₂(+1)x₁(−1) − x₂(−1)x₁(+1)] δ )^T },
which is causal. It can thus be used to define the desired bicausal change of coordi-
nates. A possible solution is
( z, χ )^T = ( x₁ − x₂(−1),  x₂ )^T.
ż = z(−1)
χ̇ = zχ2 (−1) + χ
y = z,
and we see that the system is split into an observable subsystem and an unobservable
one.
As already underlined in the introductory section of this chapter, the notion of regular observability is introduced to take into account the case in which the state can be reconstructed from the output, the input, and their derivatives up to an order which is greater
than the state dimension. As it will be shown, this has important implications in the
realization problem.
the extended observability matrix Oe (x, u, . . . , u (N −2) , δ) has rank n and admits a
polynomial left inverse.
Let us underline that the previous definition of regular observability not only implies
that the state can be recovered from the output, the input, and their derivatives, but
also that the obtained function is causal. As an example consider the system
ẋ(t) = u(t)
y(t) = x(t − 1).
Such a system is weakly observable, but not regularly observable, since x(t) =
y(t + 1), which is not causal.
Note that strong observability implies regular observability and the latter yields
weak observability.
Necessary and sufficient conditions can now be given to test regular observability.
One has the following result whose proof is found in Califano and Moog (2020).
Proposition 5.4 System (5.2) is regularly observable if and only if there exists an
integer N ≥ n such that
Example 5.1 continued. For system (5.3), N = 2 since using Eqs. (5.3) and (5.4) yields

O_e(·, δ) = ( 1 + δ,  uδ + u(−1)δ² )^T,

which admits a polynomial left inverse, thus proving that (5.3) is regularly observable, as Proposition 5.4 holds true. In fact, the state at time t can be reconstructed
The different properties of weak, strong, and regular observability have important
implications on the input–output representation. In fact, it is easily verified that the
following result holds true.
Theorem 5.1 An observable SISO system with retarded-type state-space realization
of order n admits a retarded-type input–output equation of order n if and only if
that is, the system is strongly observable or it is weakly observable but not regularly
observable.
Theorem 5.1 completes the statement of Theorem 2 in Califano and Moog (2020).
Proposition 5.5 follows from Theorem 5.1 and is peculiar to nonlinear time-delay systems, as already anticipated.
Proposition 5.5 A weakly observable SISO system with retarded-type state-space realization of order n, such that
dy (n) ∈
/ spanK(δ] {dy, . . . , dy (n−1) , du, . . . du (n−1) }
dy = (1 + δ)d x
d ẏ = (1 + δ)uδd x + (1 + δ)x(−1)du.
The previous example highlights an important issue. In fact, to the one-dimensional retarded-type system (5.3) one can associate either a neutral-type first-order input–output equation or a second-order retarded-type input–output equation. In the second case, using, for example, the procedure in Kaldmäe and Kotta (2018), one can compute a second-order retarded-type system associated to the input–output equation. To obtain it, one has to consider the filtration of submodules
input–output equation. To obtain it, one has to consider the filtration of submodules
Hi+1 = {ω ∈ Hi |ω̇ ∈ Hi }
Since H3 is integrable, the state variable candidates result from its integration.
So, we can set x₁ = y, x₂ = ( ẏ − u y(−1) ) / ( u(−1) − u ), and we get the second-order retarded-type system

ẋ₁ = x₁(−1)u + x₂ [u(−1) − u]
ẋ₂ = x₂(−1)u(−2)   (5.11)
y = x₁.
If this condition is satisfied, while u = 0 and α = 0, then the accessibility matrix has
rank 1, and, after standard computations, one gets that dp(x) = (1 + δ)d x2 − δ 2 d x1
is in the left annihilator of R. Using the results of Chap. 4, we can then consider the
bicausal change of coordinates z 1 = x2 + x2 (−1) − x1 (−2), z 2 = x1 − x1 (−1)+x2 .
In these coordinates, the system then reads
ż 1 = z 1 (−1)u(−2)
ż 2 = (z 2 (−1) − z 1 )u + z 1 u(−1)
y = z 2 + z 2 (−1) − z 1
ż 1 = 0
ż 2 = z 2 (−1)u
y = z 2 + z 2 (−1),
which highlights our weakly (and, for u ≠ u(−1), also regularly) observable and strongly accessible subsystem of dimension one.
5.3 Problems
Check the strong, regular, and weak observability of the given system.
In this chapter, it will be shown how the tools and methodologies introduced in the previous chapters can be used to address classical control problems, underlining also the main differences in the results with respect to the delay-free case.
Consider the nonlinear time-delay system
where x(t) ∈ Rn and u(t) ∈ Rm . Also, assume that the function f is meromor-
phic. To simplify the presentation, the following notation is used: x(·) := (x(t),
x(t − 1), . . .). The notation ϕ(x(·)) means that function ϕ can depend on
x(t), . . . , x(t − i) for some finite i ≥ 0. The same notation is used for other variables.
One classical model which is considered in the delay-free context is the chained form,
which was first proposed by Murray and Sastry (1993), since it has been shown that
many nonholonomic systems in mobile robotics, such as the unicycle, n-trailers,
etc., can be converted into this form. It also turned out that some effective control
strategies could be developed for these systems because of the special structure of the
chained system. A characterization of its existence in the delay-free case is found in
Sluis et al. (1994). We may thus be interested in understanding the characterization of
such a form in the delay context. In particular, hereafter, we will define the conditions
under which a two-input driftless time-delay nonlinear system of the form
ẋ_[0] = Σ_{j=0}^{s} g₁^j(x) u₁(t − j) + Σ_{j=0}^{s} g₂^j(x) u₂(t − j)   (6.2)
can be put, under a bicausal change of coordinates z_[0] = φ(x_[α]), in the following chained form:

Σ_c :  ż₁(t) = u_i(t)
       ż₂(t) = z₃(t − k₃) u_i(t − r₂)
       ⋮
       ż_{n−1}(t) = z_n(t − k_n) u_i(t − r_{n−1})
       ż_n(t) = u_j(t),   (6.3)

where k_ℓ and r_{ℓ−1}, for ℓ ∈ [3, n], are non-negative integers, i, j ∈ [1, 2], and i ≠ j.
While the results recalled here are taken from Califano et al. (2013) and Li et al. (2011), more general Goursat forms, as well as the use of feedback laws, will instead need further investigation. A first important result, stated hereafter, is that the chained form (6.3) is actually accessible, so that, in order to investigate whether a system can be put in such a form, a first issue that must be verified is accessibility.
Proposition 6.1 The chained form with delays Σ_c is accessible for u ≠ 0.
The proof, which is left as an exercise, can be easily carried out by considering the accessibility matrix (Li et al. 2016).
Let us now consider the differential form representation of (6.2) which is equal
to
d ẋ[0] = f (x, u, δ)dx[0] + g11 (x, δ)du 1,[0] + g12 (x, δ)du 2,[0] (6.4)
Theorem 6.1 System (6.2) can be transformed into the chained form (6.3) under a bicausal change of coordinates z_[0] = φ(x) only if there exists j ∈ [1, 2] such that, setting u = 1 = const and denoting by G_{ℓ,j}(x, ℓ) the image of g_{ℓ,j}(x, ū, δ) when it acts on the function ℓ(t), the following conditions are satisfied:
1. [G_{ℓ,j}(x, ℓ), g_{k,j}(x, δ)] = 0, for k, ℓ ∈ [1, n − 1];
2. rank_{K(δ]} L_{n−2} = n, where, setting u = 1, the sequence L₀ ⊂ L₁ ⊂ ⋯ is defined as
The previous theorem, whose proof is found in Li et al. (2016), shows, in particular, that in the delay case only one of the input channels, the jth one, has to satisfy the straightening theorem, that is, [G_{1j}(x, ℓ), g_{1j}(x, δ)] = 0, while the ith channel does not have to satisfy it, still allowing a solution to the problem. This is in itself an important difference with respect to the delay-free case.
Starting from Theorem 6.1, we are now ready to give necessary and sufficient
conditions for the solvability of the problem.
Theorem 6.2 System (6.2) can be transformed into the chained form (6.3) under a bicausal change of coordinates z_[0] = φ(x) if and only if the conditions of Theorem 6.1 are satisfied and additionally
(a) there exist integers 0 = l₁ ≤ l₂ ≤ ⋯ ≤ l_{n−1} such that for u = 1 = const.,
While the proof, which uses essentially the properties of the Polynomial Lie Bracket, can be found in Califano et al. (2013), we prefer to discuss the interpretation of the conditions. More precisely, condition (a), together with Theorem 6.1, guarantees that the Δ_ℓ's are involutive and that a change of coordinates connected to them can be defined. Condition (b) is weaker than the straightening theorem expected in the delay-free case, but, together with condition (c), it guarantees the particular structure of the chained form with delays.
whereas
G_{1,1}(x, ℓ) = ( ℓ(0),
                  ( x₃(−2) − x₂(−3) + x₁²(−5) ) ℓ(−3) + 2x₁(−2)ℓ(−2),
                  ( x₃(−3) − x₂(−4) + x₁²(−6) ) ℓ(−4) )^T.
It is immediate to verify that [G_{ℓ,2}(x, ℓ), g_{k,2}(x, δ)] = 0 for ℓ, k ∈ [1, 2] and that all the conditions of Theorem 6.1 are fulfilled, so that one can check the remaining conditions of Theorem 6.2. We have already seen that g_{2,2} = ḡ_{2,2}δ² and that g_{3,2} = 0. Since also [G_{1,1}(x, ℓ), g_{1,1}(x, δ)] = 0, all the necessary and sufficient conditions are satisfied.
The desired change of coordinates is then
z_[0] = ( x₁,  x₂ − x₁²(−2),  x₃ − x₂(−1) + x₁²(−3) )^T.
ż 1,[0] = u 1 (0)
ż 2,[0] = z 3 (−2)u 1 (−3)
ż 3,[0] = u 2 (0).
where both the input u(t) ∈ R^m and the output y(t) ∈ R^p. A causal state feedback

u(t) = α(x(t − i), v(t − i); i = 0, …, k)

is sought for some k, so that the input–output relation from the new input v(t) to the output y(t) is linear (possibly involving some delays).
Whatever the functions f₁ and f₂ are, it is always possible to cancel f₁, with a maximal loss of observability, by the causal feedback u(t) = −f₁(x(t), …, x(t − k)) + v(t), to get the linear delay-free closed-loop system ẏ(t) = v(t). This mimics what is done in the delay-free case and is known as the computed torque method in robotics.
Consider now
ẋ₁(t) = a₁x₁(t) + a₂x₁(t − 1) + f₁(x(t − δ), …, x(t − k)) + u(t − δ)
x₂(t) = f₂(x(t − i), u(t − i); i = 0, …, d_max)   (6.8)
y(t) = x₁(t),
where a₁ and a₂ are real numbers and δ is an integer greater than or equal to 2. Whatever the functions f₁ and f₂ are, it is still possible to cancel f₁(·), again with a maximal loss of observability, by the causal feedback u(t) = −f₁(x(t), …, x(t − k + δ)) + v(t), to get the linear time-delay closed-loop system ẏ(t) = a₁y(t) + a₂y(t − 1) + v(t − δ). This solution differs from the standard one known for delay-free nonlinear systems, as not all terms in ẏ(t) are cancelled out by feedback: the intrinsic linear terms do not need to be cancelled out.
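The cancellation mechanism can be checked symbolically. The sketch below (assuming SymPy; for concreteness it takes δ = 2, k = δ + 1 and lets f₁ depend on x₁ only, which are assumptions made here for illustration) substitutes the delayed feedback into ẏ and confirms that only the linear terms survive.

import sympy as sp

t = sp.Symbol('t')
a1, a2 = sp.symbols('a1 a2')
x1, u, v, f1 = (sp.Function(n) for n in ('x1', 'u', 'v', 'f1'))

delta = 2
# open-loop output dynamics of (6.8) with the simplifying assumptions above
ydot = a1*x1(t) + a2*x1(t - 1) + f1(x1(t - delta), x1(t - delta - 1)) + u(t - delta)
# causal feedback u(t) = -f1(x1(t), x1(t-1)) + v(t), hence, delayed by delta,
u_shifted = -f1(x1(t - delta), x1(t - delta - 1)) + v(t - delta)

closed_loop = ydot.subs(u(t - delta), u_shifted)
print(sp.simplify(closed_loop))   # a1*x1(t) + a2*x1(t-1) + v(t-2): linear, as claimed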
Consider further
ẋ₁(t) = a₁x₁(t) + a₂x₂(t − 1) + f₁(x(t − δ), …, x(t − k)) + u(t − δ)
ẋ₂(t) = a₃x₁(t) + f₁(x(t − δ − 1), …, x(t − k)) + u(t − δ − 1)   (6.9)
y(t) = x₁(t),
and a bicausal change of coordinates z_[0] = φ(x_[α]) which transforms the system (6.6) into

ż_[0] = Σ_{j=0}^{s̄} A_j z_[0](−j) + B v_[s̄]
y_[0] = Σ_{j=k}^{s̄+k} C_{j−k} z_[0](−j).   (6.10)
To solve this problem, it is necessary that the dynamics (6.6) are intrinsically
linear up to nonlinear input–output injections ψ(y(·), u(·)) in the sense of Califano
et al. (2013). Further conditions are, however, to be fulfilled so that the number of
nonlinearities to be cancelled out is smaller than the number of available control
variables. This is an algebraic condition which is formalized as follows.
The solvability of the nonlinear equations involving the input requires
where cls{·} denotes the closure of a given submodule. Recall that the closure of
a given submodule M of rank r is the largest submodule of rank r containing M
according to Definition 1.2.
Consider, for instance,
which has rank r = 1. Its closure N is spanned by dx₁(t) + x₃(t)dx₂(t − 1) and thus has rank 1 as well. Clearly, M ⊂ N but N ⊄ M.
The solvability of those equations in u(t), say ψ(y(·), u(·)) = v(t), does not yet
ensure the solution is causal.
Causality of the static state feedback is obtained from the condition
cls IR[δ] {dy, dψ} = span IR[δ] {dy, dϕ(u(t), y(t − j))} (6.12)
so that {dy(t), dϕ(u(t), y(−τ_j))} is a basis of the module cls_{IR[δ]}{dy, dψ}, for a finite number of j's and with ∂ϕ/∂u(t) ≢ 0.
The above discussion is summarized as follows.
Theorem 6.3 System (6.6) admits a static output feedback solution to the input–
output linearization problem if and only if
(i) (6.6) is linearizable by input–output injections ψ,
(ii) and conditions (6.11) and (6.12) are satisfied.
The following illustrative examples are in order.
Example 6.2

ẋ₁(t) = x₂(t − 3) − sin x₁(t − 1) + u(t)
ẋ₂(t) = 2 sin x₁(t − 1) − 2u(t)   (6.13)
y(t) = x₁(t).
The static output feedback u(t) = sin y(t − 1) + v(t) yields the closed-loop system
ẋ₁(t) = x₂(t − 3) + v(t)
ẋ₂(t) = −2v(t)   (6.14)
y(t) = x₁(t),
which is a linear time-delay system. Note that system (6.13) is already in the form
(6.10) and (6.11) is fulfilled as well with ψ = u(t) − sin y(t − 1).
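The closed loop (6.14) can be recovered by direct symbolic substitution. The sketch below (assuming SymPy) substitutes the static output feedback into (6.13) and confirms that the nonlinearity disappears.

import sympy as sp

t = sp.Symbol('t')
x1, x2, u, v = (sp.Function(n) for n in ('x1', 'x2', 'u', 'v'))

x1dot = x2(t - 3) - sp.sin(x1(t - 1)) + u(t)   # first equation of (6.13), y = x1
x2dot = 2*sp.sin(x1(t - 1)) - 2*u(t)           # second equation of (6.13)

feedback = sp.sin(x1(t - 1)) + v(t)            # u(t) = sin y(t-1) + v(t)
print(sp.simplify(x1dot.subs(u(t), feedback)))  # x2(t - 3) + v(t)
print(sp.simplify(x2dot.subs(u(t), feedback)))  # -2*v(t)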
Though the system (6.15) is in the form (6.10), two input–output injections are
required ψ1 = u(t − 1) − sin y(t − 1) and ψ2 = 2 sin y(t − 1) − 2u(t). Thus, (6.11)
is not fulfilled and there is no static output feedback solution to the input–output lin-
earization problem.
System (6.16) is again in the form (6.10) thanks to the two input–output injections
ψ1 = u(t) + 3u(t − 1) − sin y(t − 1) − 3 sin y(t − 2) and ψ2 = 2 sin y(t − 1)
− 2u(t). However,
cls IR[δ] {dy, dψ1 , dψ2 } = span IR[δ] {dy, d[u(t) − sin y(t − 1)]}
so that (6.11) is fulfilled and the linearizing output feedback solution is again u(t) =
sin y(t − 1) + v(t). The linear closed loop reads
ẋ₁(t) = x₂(t − 3) + v(t) + 3v(t − 1)
ẋ₂(t) = −2v(t)   (6.17)
y(t) = x₁(t).
Condition (6.12) is not fulfilled and there is no static output feedback which cancels
out the nonlinearity in the system.
Whenever the conditions above are not fulfilled, there is no static output feedback
which is able to cancel out all nonlinearities. Nevertheless, a broader class of output
feedbacks may still offer a solution (Márquez-Martínez and Moog 2004).
Consider, for instance, the system
Condition (6.12) is not fulfilled as ϕ involves the control variable at different time instants. The previous elementary tools suggest implementing the implicit control law u(t) = −u(t − 1) + sin y(t − 2) + v(t), which would just yield ẏ(t) = v(t). A practical realization of this control law is obtained by setting z(t) = u(t − 1), in the form of a hybrid output feedback since it also involves the shift operator. Note that z(t) = u(t − 1) is nothing but a buffer. The resulting compensator is named a pure shift dynamic output feedback in Márquez-Martínez and Moog (2004) and for the example above it reads as
which yields ẏ(t + 1) = v(t + 1), i.e. linearity of the input–output relation is
obtained after some time only.
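The buffer realization of the implicit control law can be sketched in a few lines of code. The fragment below is a discrete-time illustration only (it assumes a sampling step equal to the unit delay and zero initial history, both assumptions made here): the state z stores the previous control value, playing the role of the buffer z(t) = u(t − 1).

from collections import deque
import math

class PureShiftCompensator:
    def __init__(self):
        self.z = 0.0                                 # buffer: z(t) = u(t-1)
        self.y_hist = deque([0.0, 0.0], maxlen=2)    # stores y(t-2), y(t-1)

    def step(self, y, v):
        y_delay2 = self.y_hist[0]                    # y(t-2)
        u = -self.z + math.sin(y_delay2) + v         # u(t) = -u(t-1) + sin y(t-2) + v(t)
        self.z = u                                   # update the buffer
        self.y_hist.append(y)                        # shift the output history
        return u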
Accepting this broader class of output feedbacks allows one to weaken the above conditions, as stated in Theorem 6.4.
Theorem 6.4 System (6.6) admits a hybrid output feedback solution to the input–output linearization problem if and only if
(i) (6.6) is linearizable by input–output injections;
(ii) condition (6.11) is satisfied;
(iii) and cls_{IR[δ]}{dy, dψ} = span_{IR[δ]}{dy, dϕ(u(t), …, u(t − i), y(t − j))}, so that {dy(t), dϕ(u(t), …, u(t − i), y(t − j))} is a basis of the module cls_{IR[δ]}{dy(t), dψ}, for a finite number of j's and with ∂ϕ/∂u(t) ≢ 0.
Note that the digital implementation of the stabilizing control in Monaco et al.
(2017) goes through a trick similar to a buffer equation (see Eq. (18) in Monaco et al.
2017).
find, if possible, a causal state feedback u(t) = α(x(t − i), v(t − i); i = 0, …, k), a bicausal change of coordinates z(t) = ϕ(x(t − i); i = 0, …, ℓ), and K ∈ IN such that the dynamics is linear in the z coordinates after some time K:
for all t ≥ K and where the pair A(δ), B(δ) is weakly controllable.
ẋ1 (t) = x2 (t − 1)
ẋ2 (t) = 3x1 (t − 2) + cos (x12 (t − 2)) + x2 (t − 1)u(t − 1).
Pick the “output function” y(t) = x1 (t) which has a full relative degree equal to 2.
A standard method valid for delay-free systems consists in cancelling by feedback
the full right-hand side of ÿ(t). Due to the causality constraint this cannot be done
in this special case. Nevertheless, one may take advantage of the pre-existing structure and cancel only the nonlinear terms. The obvious solution is

u(t) = (v(t) − cos (x1²(t − 1)))/x2(t),

which is causal and cancels the nonlinearity entering ẋ2(t) through x2(t − 1)u(t − 1).
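A minimal symbolic sketch of this partial cancellation, using placeholder sympy functions for the delayed signals, shows that the second derivative of the output becomes linear (note that ẏ(t) = x2(t − 1) contains no control, so the relative degree is indeed 2):

# Symbolic check: with y = x1, yddot(t) = 3*x1(t-3) + cos(x1(t-3)**2) + x2(t-2)*u(t-2);
# substituting the (delayed) feedback above leaves only linear terms.
import sympy as sp

t = sp.symbols('t')
x1, x2, u, v = (sp.Function(n) for n in ('x1', 'x2', 'u', 'v'))

yddot = 3*x1(t - 3) + sp.cos(x1(t - 3)**2) + x2(t - 2)*u(t - 2)

# feedback u(t) = (v(t) - cos(x1(t-1)**2)) / x2(t), written at time t-2
fb = {u(t - 2): (v(t - 2) - sp.cos(x1(t - 3)**2)) / x2(t - 2)}

print(sp.simplify(yddot.subs(fb)))   # 3*x1(t - 3) + v(t - 2)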
6.3.2 Solution
Focus on the single-input case (Baibeche and Moog 2016). Following the methodol-
ogy valid for delay-free systems, one has to search for an output function candidate
which has full relative degree, i.e. equal to n. Then, apply the results valid for input–
output linearization. Such an output function is called a “linearizing output”.
This methodology will, however, yield conditions which are only sufficient for
feedback linearization in the case of time-delay nonlinear systems.
Nevertheless, a necessary and sufficient condition for the existence of a full relative
degree output candidate can be derived thanks to the Polynomial Lie Bracket.
Lemma 6.1 There exists h(x(·)) with full relative degree n if and only if
(i) rank R̄n−1 (x, 0, δ) = n − 1,
(ii) rank R̄n (x, 0, δ) = n,
where Ri is defined by (4.8).
Condition (ii) ensures the accessibility of the system of interest, so that condition (i) yields the existence of an output function candidate with full relative degree n.
Theorem 6.5 Given system (6.19), there exists a causal static state feedback which
solves the input-state linearization problem if the conditions of Lemma 6.1 are ful-
filled and for some dh ⊥ R̄n−1 (x, 0, δ)
(i) dh^(n) ∈ span IR[δ] {d[a(x(·)) + b(x(·))u(t − i)]} for some i ∈ IN and a, b ∈ K;
(ii) ∂[a(x(·)) + b(x(·))u(t − i)]/∂x(t − τ) = 0 for 0 ≤ τ < i; and
(iii) there exists a polynomial matrix D(δ) in IR[δ]^{n×n} such that

(dh, dḣ, . . . , dh^(n−1))^T = D(δ)M(x, δ)dx
is causal. The closed loop then yields a linear time-delay differential input–output
equation.
In this section, with reference to the SISO case, we seek a bicausal change of coordinates z(t) = ϕ(x(·)) and a regular causal feedback u(t) = α(x(·), v(·)) which transform the given dynamics (6.1) with output y = h(x) into the form
ż1(t) = Σ_{j=0}^{s̄} A_j z1(t − j) + Σ_{j=0}^{s̄} B_j v(t − j)
where the subsystem defined by the z 1 variable is linear, weakly or strongly observ-
able, and weakly or strongly accessible.
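For readability, the sought overall form may be sketched as follows, where the equation for the complementary variables z2 is only indicative of the requirement that the residual dynamics be independent of the control (the precise structure is given in Califano and Moog (2016)):

ż1(t) = Σ_{j=0}^{s̄} A_j z1(t − j) + Σ_{j=0}^{s̄} B_j v(t − j)
ż2(t) = φ2(z1(·), z2(·))
y(t) = C(δ)z1(t).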
As a matter of fact, one has to find a bicausal change of coordinates and a regular
and causal static feedback such that the input–output behavior is linearized and the
residual dynamics does not depend on the control.
The solution thus requires, on the one hand, that the input–output linearization can be achieved via a bicausal change of coordinates and a causal feedback and, on the other hand, that the basis completion can be chosen so as to guarantee that the residual dynamics is independent of the control. This can be formulated by checking some conditions on the Polynomial Lie Bracket [G1(x, δ), g1(x, δ)]. While the result can be found in the literature (Califano and Moog 2016), we leave the formulation of the solution and its proof as an exercise for the reader.
6.5 Problems
ẋ1(t) = x2(t − 1)
ẋ2(t) = cos (x1²(t − 1)) + x2(t − 1)u(t).
Compute a linearizing output, if any, and the corresponding linearizing state feed-
back.
3. Check whether there exists an output function candidate for the following system:
ẋ1(t) = x2(t − 1) + 3[cos (x1²(t − 2)) + x2(t − 2)u(t − 1)]
ẋ2(t) = x3(t) + cos (x1²(t − 1)) + x2(t − 1)u(t)
ẋ3(t) = 2[cos (x1²(t − 3)) + x2(t − 3)u(t − 2)].
Compute a causal static output feedback, if any, which linearizes the input–output
system.
Write the closed-loop system and check its weak accessibility.
Compute a causal static output feedback, if any, which linearizes the input–output
system.
Write the closed-loop system and check its weak accessibility.
References
Califano C, Monaco S, Normand-Cyrot D (2009) Canonical observer forms for multi-output systems
up to coordinate and output transformations in discrete time. Automatica 45:2483–2490. https://
doi.org/10.1016/j.automatica.2009.07.003
Califano C, Moog CH (2014) Coordinates transformations in nonlinear time-delay systems. In:
53rd IEEE conference on decision and control. Los Angeles, USA, pp 475–480
Califano C, Moog CH (2016) On the existence of the normal form for nonlinear delay systems.
In: Karafyllis I, Malisoff M, Mazenc F, Pepe P (eds) Recent results on nonlinear delay control
systems, vol 4. Advances in delays and dynamics. Springer, Berlin Heidelberg New York, pp
113–142
Califano C, Moog CH (2017) Accessibility of nonlinear time-delay systems. IEEE Trans Autom
Control 62:7. https://doi.org/10.1109/TAC.2016.2581701
Califano C, Moog CH (2020) Observability of nonlinear time-delay systems and its application to
their state realization. IEEE Control Syst Lett 4: 803–808, https://doi.org/10.1109/LCSYS.2020.
2992715
Choquet-Bruhat Y, DeWitt-Morette C, Dillard-Bleick M (1989) Analysis, manifolds and physics,
part I: basics. North-Holland, Amsterdam
Cohn PM (1985) Free rings and their relations. Academic Press, London
Conte G, Moog CH, Perdon A (2007) Algebraic methods for nonlinear control systems, 2nd edn. Springer, London
Conte G, Perdon AM (1995) The disturbance decoupling problem for systems over a ring. SIAM,
J Contr Optimiz 33:750–764
Crouch PE, Lamnabhi-Lagarrigue F, Pinchon D (1995) Some realizations of algorithms for nonlinear
input-output systems. Int J Contr 62:941–960
Fliess M, Mounier H (1998) Controllability and observability of linear delay systems: an algebraic
approach. ESAIM COCV 3:301–314
Fridman E (2001) New Lyapunov-Krasovskii functionals for stability of linear retarded and neutral
type systems. Syst Control Lett 43:309–319
Fridman E (2014) Introduction to time-delay systems. Birkhäuser, Basel
Fridman E, Shaked U (2002) A descriptor system approach to H-infinity control of linear time-delay
systems. IEEE Trans Aut Contr 47:253–270
Garcia-Ramirez E, Moog CH, Califano C, Marquez-Martinez LA (2016) Linearisation via input-output injection of time delay systems. Int J Control 89:1125–1136
Garcia-Ramirez E, Califano C, Marquez-Martinez LA, Moog CH (2016) Observer design based
on linearization via input-output injection of time-delay systems. Proceedings IFAC NOLCOS,
Monterey, CA, USA. IFAC-PapersOnLine 49(18):672–677
Gennari F, Califano C (2018) T-accessibility for nonlinear time-delay systems: the general case. In:
IEEE conference on decision and control (CDC), pp 2950–2955
Germani A, Manes C, Pepe P (1996) Linearization of input-output mapping for nonlinear delay
systems via static state feedback. In: CESA ’96 IMACS multiconference, pp 599–602
Germani A, Manes C, Pepe P (1998) Linearization and decoupling of nonlinear delay systems. In:
Proceedings of the American control conference, Philadelphia, USA, pp 1948–1952
Germani A, Manes C, Pepe P (2002) A new approach to state observation of nonlinear systems
with delayed output. IEEE Trans Autom Control 47:96–101
Glumineau A, Moog CH, Plestan F (1996) New algebro-geometric conditions for the linearization
by input-output injection. IEEE Trans Autom Control 41:598–603
Gu K, Kharitonov VL, Chen J (2003) Stability of time-delay systems. Birkhäuser, Boston
Hermann R (1963) On the accessibility problem in control theory. In: LaSalle JP, Lefschetz S (eds) International symposium on nonlinear differential equations and nonlinear mechanics. Academic Press, pp 325–332
Halas M, Anguelova M (2013) When retarded nonlinear time-delay systems admit an input-output
representation of neutral type. Automatica 49:561–567
Hespanha JP, Naghshtabrizi P, Xu Y (2007) A survey of recent results in networked control systems.
Proc IEEE 95:138–162
Hou M, Pugh AC (1999) Observer with linear error dynamics for nonlinear multi-output systems.
Syst Contr Lett 37:1–9
Insperger T (2015) On the approximation of delayed systems by Taylor series expansion. ASME J
Comput Nonlinear Dyn 10:1–4
Isidori A (1995) Nonlinear control systems, 3rd edn. Springer, New York
Islam S, Liu XP, El Saddik A (2013) Teleoperation systems with symmetric and unsymmetric time
varying communication delay. IEEE Trans Instrum Meas 21:40–51
Kaldmäe A, Kotta Ü (2018) Realization of time-delay systems. Automatica 90:317–320
Kaldmäe A, Moog CH, Califano C (2015) Towards integrability for nonlinear time-delay systems.
In: MICNON 2015. St Petersburg, Russia, IFAC-PapersOnLine 48, pp 900–905. https://doi.org/
10.1016/j.ifacol.2015.09.305
Kaldmäe A, Califano C, Moog CH (2016) Integrability for nonlinear time-delay systems. IEEE
Trans Autom Control 61(7):1912–1917. https://doi.org/10.1109/TAC.2015.2482003
Kazantzis N, Kravaris C (1998) Nonlinear observer design using Lyapunov’s auxiliary theorem.
Syst Contr Lett 34:241–247
Keller H (1987) Non-linear observer design by transformation into a generalized observer canonical
form. Int J Control 46:1915–1930
Kharitonov VL, Zhabko AP (2003) Lyapunov-Krasovskii approach to the robust stability analysis
of time-delay systems. Automatica 39:15–20
Kim J, Chang PH, Park HS (2013) Two-channel transparency-optimized control architectures in
bilateral teleoperation with time delay. IEEE Trans Control Syst Technol 62:2943–2953
Kotta Ü, Zinober ASI, Liu P (2001) Transfer equivalence and realization of nonlinear higher order
input-output difference equations. Automatica 37:1771–1778
Krener AJ, Isidori A (1983) Linearization by output injection and nonlinear observers. Syst Contr
Lett 3:47–52
Krener AJ, Respondek W (1985) Nonlinear observers with linearizable error dynamics. SIAM J
Contr Opt 23:197–216
Krstic M (2009) Delay compensation for nonlinear, adaptive, and PDE systems. Systems & Control: Foundations & Applications. Birkhäuser, Boston
Krstic M, Bekiaris-Liberis N (2012) Control of nonlinear delay systems: a tutorial. In: 51st IEEE conference on decision and control (CDC), Maui, HI, pp 5200–5214
Lee EB, Olbrot A (1981) Observability and related structural results for linear hereditary systems.
Int J Control 34:1061–1078
Li SJ, Moog CH, Califano C (2011) Characterization of accessibility for a class of nonlinear time-
delay systems. In: CDC 2011. Orlando, pp 1068–1073
Li SJ, Califano C, Moog CH (2016) Characterization of the chained form with delays. In: IFAC NOLCOS, Monterey, CA, USA. IFAC-PapersOnLine 49(18):808–813
Mattioni M, Monaco S, Normand-Cyrot D (2018) Nonlinear discrete-time systems with delayed
control: a reduction. Syst Control Lett 114:31–37
Mattioni M, Monaco S, Normand-Cyrot D (2021) IDA-PBC for LTI dynamics under input delays:
a reduction approach. IEEE Control Syst Lett 5:1465–1470
Márquez-Martínez LA (1999) Note sur l’accessibilité des systèmes non linéaires à retards. Comptes
Rendus de l’Académie des Sciences-Series I - Mathematics 329(6):545–550
Márquez-Martínez LA (2000). Analyse et commande de systèmes non linéaires à retards. PhD
thesis, Université de Nantes / Ecole Centrale de Nantes, Nantes, France
Márquez-Martínez LA, Moog CH (2004) Input-output feedback linearization of time-delay systems.
IEEE Trans Autom Control 49:781–786
Márquez-Martínez LA, Moog CH (2007) New insights on the analysis of nonlinear time-delay
systems: application to the triangular equivalence. Syst Contr Lett 56:133–140
Márquez-Martínez LA, Moog CH, Velasco-Villa M (2002) Observability and observers for nonlin-
ear systems with time delay. Kybernetika 38:445–456
Mazenc F, Bliman PA (2006) Backstepping design for time-delay nonlinear systems. IEEE Trans
Autom Control 51:149–154
Mazenc F, Malisoff M, Krstic M (2021) Stability analysis using generalized sup-delay inequalities.
IEEE Control Syst Lett 5:1411–1416
Mazenc F, Malisoff M, Lin Z (2008) Further results on input-to-state stability for nonlinear systems
with delayed feedbacks. Automatica 44:2415–2421
Mazenc F, Malisoff M, Bhogaraju INS (2020) Sequential predictors for delay compensation for discrete time systems with time-varying delays. Automatica 122:109188. https://doi.org/10.1016/j.automatica.2020.109188
Michiels W, Niculescu S-I (2007) Stability and stabilization of time-delay systems: an eigenvalue-based approach. Advances in design and control, vol 12. SIAM, Philadelphia
Monaco S, Normand-Cyrot D (1984) On the realization of nonlinear discrete-time systems. Syst
Control Lett 5:145–152
Monaco S, Normand-Cyrot D (2008) Controller and observer normal forms in discrete time. In: Astolfi A, Marconi L (eds) Analysis and design of nonlinear control systems: in honor of Alberto Isidori. Springer, pp 377–395
Monaco S, Normand-Cyrot D (2009) Linearization by output injection under approximate sampling.
EJC 15:205–217
Monaco S, Normand-Cyrot D, Mattioni M (2017) Sampled-data stabilization of nonlinear dynamics
with input delays through immersion and invariance. IEEE Trans Autom Control 62(5):2561–
2567
Moraal PE, Grizzle JW (1995) Observer design for nonlinear systems with discrete-time measure-
ment. IEEE Trans Autom Control 40:395–404
Moog CH, Castro-Linares R, Velasco-Villa M, Márquez-Martínez LA (2000) The disturbance decoupling problem for time-delay nonlinear systems. IEEE Trans Autom Control 45(2):305–309
Murray R, Sastry S (1993) Nonholonomic motion planning: steering using sinusoids. IEEE Trans
Autom Control 38:700–716
Nijmeijer H, van der Schaft A (1990) Nonlinear dynamical control systems. Springer, New York
Niculescu SI (2001) Delay effects on stability: a robust control approach, vol 269. Lecture notes in
control and information sciences, Springer, Heidelberg
Oguchi T (2007) A finite spectrum assignment for retarded non-linear systems and its solvability
condition. Int J Control 80(6):898–907
Oguchi T, Watanabe A, Nakamizo T (2002) Input-output linearization of retarded non-linear systems
by using an extension of Lie derivative. Int J Control 75:582–590
Olbrot AW (1972) On controllability of linear systems with time delay in control. IEEE Trans
Autom Control 17:664–666
Olgac N, Sipahi R (2002) An exact method for the stability analysis of time-delayed linear time-
invariant (LTI) systems. IEEE Trans Autom Control 47:793–797
Pepe P, Jiang ZP (2006) A Lyapunov-Krasovskii methodology for ISS and iISS of time-delay
systems. Syst Control Lett 55:1006–1014
Plestan F, Glumineau A (1997) Linearization by generalized input-output injection. Syst Contr Lett
31:115–128
Richard JP (2003) Time-delay systems: an overview of some recent advances and open problems.
Automatica 39(10):1667–1694
Sallet G (2008) Lobry Claude : un mathématicien militant. In: Proceedings of 2007 international con-
ference in honor of Claude Lobry, ARIMA, 9, pp 5–13. http://arima.inria.fr/009/pdf/arima00902.
pdf
Sename O, Lafay JF, Rabah R (1995) Controllability indices of linear systems with delays. Kyber-
netika 6:559–580
Shi P, Boukas EK, Agarwal RK (1999) Control of Markovian jump discrete-time systems with
norm bounded uncertainty and unknown delay. IEEE Trans Autom Control 44:2139–2144
Sipahi R, Niculescu SI, Abdallah CT, Michiels W, Gu K (2011) Stability and stabilization of systems
with time delay. IEEE Control Syst Mag 31(1):38–65
Sluis W, Shadwick W, Grossman R (1994) Nonlinear normal forms for driftless control systems.
In: Proceedings of 1994 IEEE CDC, Lake Buena Vista, FL, 320–325
Sørdalen OJ (1993) Conversion of the kinematics of a car with n trailers into a chained form. In: Proceedings of 1993 international conference on robotics and automation, Atlanta, GA, pp 382–387
Souleiman I, Glumineau A, Schreirer G (2003) Direct transformation of nonlinear systems into state
affine MISO form and nonlinear observers design. IEEE Trans Autom Control 48:2191–2196
Spivak M (1999) A comprehensive introduction to differential geometry, 3rd edn. Publish or Perish,
Houston
Timmer J, Müller T, Swameye I, Sandra O, Klingmüller U (2004) Modeling the nonlinear dynamics
of cellular signal transduction. Int J Bifurc Chaos 14:2069–2079
Ushio T (1996) Limitation of delayed feedback control in nonlinear discrete-time systems. IEEE
Trans Circuits Syst I(43):815–816
Van Assche V, Ahmed-Ali T, Hann CAB, Lamnabhi-Lagarrigue F (2011) High gain observer design
for nonlinear systems with time varying delayed measurements. In: Proceedings 18th IFAC world
congress, Milano, vol 44, pp 692–696
Velasco M, Alvarez JA, Castro R (1997) Disturbance decoupling for time delay systems. Asian J Control 7:847–864
Xia X-H, Gao W-B (1989) Nonlinear observer design by observer error linearization. SIAM J Cont
Opt 27:199–216
Xia X, Márquez-Martínez LA, Zagalak P, Moog CH (2002) Analysis of nonlinear time-delay sys-
tems using modules over non-commutative rings. Automatica 38:1549–1555
Zhang H, Wang C, Wang G (2014) Finite-time stabilization for nonholonomic chained form systems
with communication delay. J Robot Netw Artif Life 1:39–44
Zheng G, Barbot JP, Boutat D, Floquet T, Richard JP (2011) On observation of time-delay systems
with unknown inputs. IEEE Trans Autom Control 56(8):1973–1978
Zheng G, Barbot J-P, Boutat D (2013) Identification of the delay parameter for nonlinear time-delay
systems with unknown inputs. Automatica 49(6):1755–1760