
Delay Differential Equations
Recent Advances and New Directions

Balakumar Balachandran · Tamás Kalmár-Nagy · David E. Gilsinn
Editors
Editors

Balakumar Balachandran
Department of Mechanical Engineering
University of Maryland
2181 Glenn L. Martin Hall
College Park, MD 20742
USA
[email protected]

Tamás Kalmár-Nagy
Department of Aerospace Engineering
Texas A&M University
609C H. R. Bright Building
College Station, TX 77843
USA
[email protected]

David E. Gilsinn
Mathematical & Computational Sciences
Division
National Institute of Standards
and Technology (NIST)
100 Bureau Drive, Stop 8910
Gaithersburg, MD 20899
USA
[email protected]

ISBN: 978-0-387-85594-3 e-ISBN: 978-0-387-85595-0


DOI: 10.1007/978-0-387-85595-0

Library of Congress Control Number: 2008935618

© Springer Science+Business Media, LLC 2009


All rights reserved. This work may not be translated or copied in whole or in part without the written
permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York,
NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in
connection with any form of information storage and retrieval, electronic adaptation, computer software,
or by similar or dissimilar methodology now known or hereafter developed is forbidden.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are
not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject
to proprietary rights.

Printed on acid-free paper

springer.com
Preface

Delay differential equations (DDEs) are important in many areas of engineering and science. The aim of this book is to bring together contributions from leading
experts on the theory and applications of functional and DDEs. The editors have
strived throughout to point out the interdisciplinary nature of these contributions.
For example, advances in stability analysis, computational techniques (symbolic
and numerical), and results from many otherwise disconnected fields (automotive
engineering, manufacturing, neuroscience, and control theory) have been included
in this book. The different contributions are not intended as a comprehensive survey
of the many areas in which functional and DDEs have been useful, but rather, they
are meant to help a reader bridge the gap between theoretical work and applications
and extend the results and methods to other fields.
This book contains eleven chapters. These chapters have been organized into five
groups, covering control systems and stability analysis methods, Hopf bifurcations
and center manifold analysis, numerical computations of DDE solutions, neural systems, and stochastic DDEs.
The first group on control systems and stability analysis methods consists of five
chapters. In the first three chapters, delay effects in control systems are addressed.
In Chap. 1, Hongfei Li and Keqin Gu provide an exposition of the Lyapunov–
Krasovskii functional approach for studying the stability of equilibrium points and
other solutions of coupled differential–difference equations with multiple delays.
Such coupled differential–difference systems occur in the context of control processes, propagation in electrical lines, and fluid and gaseous flows through pipes.
They show that retarded and neutral systems can be considered as special cases of
coupled differential–difference systems. For linear systems, the authors show how
a quadratic Lyapunov functional can be used to convert the stability problem to a
set of linear matrix inequalities, which can be numerically studied using MATLAB
and other software. This stability treatment is also used in Chap. 2, where Wenjuan
Jiang, Alexandre Kruszewski, Jean-Pierre Richard, and Armand Toguyeni consider
the stability and control of networked systems. They discuss the different sources of
delays in these systems, including network-induced hardware-dependent delays and
packet losses. The challenging aspect of the control of these networked systems is


that the delays are varying, unpredictable, and can also be unbounded. Control of a
remote system through the Internet is examined; controller and observer designs are
examined on the basis of available theoretical results for systems with time-varying
delays, and it is demonstrated that a gain scheduling strategy can be used to achieve
an exponential stabilization of this system. An experimental platform is also proposed to explore the control architecture. The discussion presented in this chapter is
valid for networks operated through the Internet as well as the Ethernet. In Chap. 3,
Mrdjan Jankovic and Ilya Kolmanovsky examine automotive powertrain control, discuss the different possible sources of delays in these systems, and treat idle speed control and air-to-fuel ratio control in gasoline engines. Through experiments and
simulations, the authors bring forth the power of applying different control tools
for robust and stochastic stabilization of these systems. Various models, controller
and observer designs are presented in this work. It is shown that a careful consideration of time delays is important for enhancing the fuel economy, emissions, and
drivability of automotive vehicles.
Rounding out the first group, in Chaps. 4 and 5, the use of discretization methods for the stability analysis of delayed systems with periodic coefficients as well as constant or periodically varying delays is illustrated. Eric Butcher and Brian Mann elaborate on methods based on Chebyshev polynomial expansion and temporal finite element analysis in Chap. 4. They consider different applications, including optimal and delayed state feedback control of systems with time periodic delays and chatter vibrations in machine-tool systems. In Chap. 5, Xinhua Long, Tamás Insperger, and Balakumar Balachandran consider the semidiscretization method at length, use a variety of examples, and show how stability information for periodic solutions of autonomous and nonautonomous delay differential systems with constant as well as time periodic delays can be obtained by constructing an approximation to the infinite dimensional monodromy matrix. As in Chap. 4, stability charts are used to identify stable and unstable regions in the considered parameter space. The possibilities for different local bifurcations of periodic solutions, including cyclic-fold, period-doubling, and Hopf bifurcations, are also discussed in this chapter.
The second group consists of Chaps. 6–8. In these chapters, the basis for analytical and symbolic bifurcation studies in nonlinear systems with constant time delays is carefully introduced. David Gilsinn presents a detailed discussion of how the center manifold and normal form analysis commonly used for studying local bifurcations in ordinary differential systems can be used for examining Hopf bifurcations of fixed points of delay differential systems in Chap. 6. It is shown how such an analysis is important for determining the nature of a bifurcation, that is, whether it is subcritical or supercritical. Machine-tool chatter is used as an illustrative example. In a complementary effort, in Chap. 7, Siming Zhao and Tamás Kalmár-Nagy rigorously examine the existence of a Hopf bifurcation of the zero solution of the delayed Liénard equation. They also illustrate how the continuation package DDE-BIFTOOL and MATLAB's dde23 integrator can be used to numerically investigate such systems. Sue Ann Campbell demonstrates how symbolic algebra packages such as MAPLE can be used to carry out center manifold analysis in Chap. 8. As in Chap. 6, first, the

basis for ordinary differential systems is discussed for a reader to relate the approach
used for delay differential systems with the one followed for ordinary differential
equations.
The third group includes Chap. 9, in which Larry Shampine and Skip Thompson
focus on numerical solutions of delay differential systems. They make comparisons
with existing methods commonly used for ordinary differential systems and elaborate on how the popular Runge–Kutta methods can be extended to delay differential systems. They detail at length how algorithms written in MATLAB and Fortran 90/95 can be used to study solutions of a range of delay differential systems.
The fourth group consists of Chap. 10. In this chapter, Qishao Lu, Qingyun
Wang, and Xia Shi study the effects of delays in coupled neuronal systems. They
present different models, including the models of Chay, Hodgkin–Huxley, FitzHugh–Nagumo, and Hindmarsh–Rose, and show how an appropriate consideration of the
delays in these systems is important to uncover the rich dynamics exhibited by
these systems. Observed behaviors include bifurcations, bursting, chaos, and wave
dynamics. The implications of these phenomena for synchronization of neurons are
discussed, and the importance of considering them for characterizing the neural
activity of the central nervous system is brought forth in this chapter.
Chapter eleven fills out the fifth group. In this chapter, Toru Ohira and John
Milton examine the interplay between delay and noise in the context of postural
actions and demonstrate how an appropriate formulation of the delayed random
walk problem can serve as an alternative as well as complementary approach for examining stochastic delay differential systems such as the delayed Langevin equation.
The findings of this chapter are relevant for understanding feedback control mechanisms that are common in physiology as well as in other systems from biology and
economics.
Through the eleven contributions, the editors have attempted to provide a glimpse
of the recent advances in the area of delay differential systems and new directions
being pursued in this area. While it is recognized that the material presented in this
edited book is by no means complete, it is our sincere hope that this book will allow
a reader to appreciate the ubiquitous presence of delays in various systems, learn
about new and exciting areas, and develop a mastery of tools available for treating
delay differential systems.
We thank the different contributors for their spirited participation in this endeavor
and Ms. Elaine Tham and Ms. Lauren Danahy of Springer for helping us see this
collaborative effort to fruition.

College Park, MD        Balakumar Balachandran
College Station, TX     Tamás Kalmár-Nagy
Gaithersburg, MD        David E. Gilsinn
Contents

1 Lyapunov–Krasovskii Functional Approach for Coupled
Differential-Difference Equations with Multiple Delays . . . . . . . . . . . . 1
Hongfei Li and Keqin Gu
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Coupled Differential-Functional Equations . . . . . . . . . . . . . . . . . . . . 5
1.3 Stability of Continuous Time Difference Equations . . . . . . . . . . . . . 10
1.4 Linear Coupled Differential-Difference Equations . . . . . . . . . . . . . . 13
1.4.1 Quadratic Lyapunov-Krasovskii Functional . . . . . . . . . . . . 13
1.4.2 Discretization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.5 Discussion and Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.6 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2 Networked Control and Observation for Master–Slave Systems . . . . . 31
Wenjuan Jiang, Alexandre Kruszewski, Jean-Pierre Richard,
and Armand Toguyeni
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.2 Exponential Stability of a Remote System Controlled Through
Internet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.2.1 The Three Delay Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.2.2 Transmission and Receipt of the Control Data . . . . . . . . . . 34
2.2.3 Problem Formulation and Preliminaries . . . . . . . . . . . . . . . 34
2.2.4 Observer Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.2.5 Control Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.2.6 Global Stability of the Remote System . . . . . . . . . . . . . . . . 37
2.3 Architecture of the Global Control System . . . . . . . . . . . . . . . . . . . . 38
2.3.1 Features of the Remote System . . . . . . . . . . . . . . . . . . . . . . 38
2.3.2 Synchronization of the Time Clocks . . . . . . . . . . . . . . . . . . 38
2.3.3 Transmission and Receipt of the Control Data . . . . . . . . . 39
2.3.4 The Structure of the Master . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.3.5 The Structure of the Slave . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.3.6 Experimental Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42


2.4 Performance Enhancement by a Gain Scheduling Strategy . . . . . . . 45


2.4.1 Effects of Time-Delay on the Performance
and the System Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.4.2 Uniform Stability with Gain Scheduling . . . . . . . . . . . . . . 46
2.4.3 Gain Scheduling Experiments . . . . . . . . . . . . . . . . . . . . . . . 48
2.4.4 Result of Remote Experiment . . . . . . . . . . . . . . . . . . . . . . . 48
2.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

3 Developments in Control of Time-Delay Systems for Automotive
Powertrain Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Mrdjan Jankovic and Ilya Kolmanovsky
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.2 Idle Speed Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.2.1 A Model for ISC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.2.2 Treating ISC Using Tools for Time-Delay Systems . . . . . . 60
3.3 Air-to-Fuel Ratio Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.3.1 Feed-gas Air-to-Fuel Ratio Control Block Diagram . . . . . 65
3.3.2 Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.3.3 Constructing Stability Charts and Gain Scheduling . . . . . 69
3.3.4 Using Robust and Stochastic Stabilization Tools
for Time-Delay Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.3.5 Air-to-Fuel Ratio Control Using a Switching HEGO
Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.4 Observer Design for a Diesel Engine Model with Time Delay . . . . 78
3.4.1 Diesel Engine Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3.4.2 Observer Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.4.3 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.4.4 Summary and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.5 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

4 Stability Analysis and Control of Linear Periodic Delayed Systems
Using Chebyshev and Temporal Finite Element Methods . . . . . . . . . . 93
Eric Butcher and Brian Mann
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.2 Stability of Autonomous and Periodic DDEs . . . . . . . . . . . . . . . . . . 96
4.3 Temporal Finite Element Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.3.1 Application to a Scalar Autonomous DDE . . . . . . . . . . . . . 97
4.3.2 TFEA Approach Generalization . . . . . . . . . . . . . . . . . . . . . 101
4.3.3 Application to Time-Periodic DDEs . . . . . . . . . . . . . . . . . . 102
4.4 Chebyshev Polynomial Expansion and Collocation . . . . . . . . . . . . . 104
4.4.1 Expansion in Chebyshev Polynomials . . . . . . . . . . . . . . . . 105
4.4.2 Estimating the Number of Polynomials . . . . . . . . . . . . . . . 108
4.4.3 Chebyshev Collocation Method . . . . . . . . . . . . . . . . . . . . . . 109
4.5 Application to Milling Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

4.6 Control of Periodic Systems with Delay using Chebyshev
Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
4.6.1 Variation of Parameters Formulation . . . . . . . . . . . . . . . . . . 116
4.6.2 Finite Horizon Optimal Control via Quadratic Cost
Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
4.6.3 Optimal Control Using Convergence Conditions . . . . . . . . 118
4.6.4 Example: Optimal Control of a Delayed Mathieu
Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
4.6.5 Delayed State Feedback Control . . . . . . . . . . . . . . . . . . . . . 120
4.6.6 Example: Delayed State Feedback Control of the
Delayed Mathieu Equation . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.7 Discussion of Chebyshev and TFEA Approaches . . . . . . . . . . . . . . . 123
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

5 Systems with Periodic Coefficients and Periodically Varying Delays:
Semidiscretization-Based Stability Analysis . . . . . . . . . . . . . . . . . . . . 131
Xinhua Long, Tamás Insperger, and Balakumar Balachandran
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
5.2 Stability Analysis of Systems with Periodically Varying Delays . . . 133
5.3 Approximation of the Monodromy Matrix by using
the Semidiscretization Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
5.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
5.4.1 Scalar Delay Differential Equation with Periodic
Coefficient and One Delay . . . . . . . . . . . . . . . . . . . . . . . . . . 138
5.4.2 Scalar Autonomous Delay Differential Equation
with Two Delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
5.4.3 Damped and Delayed Mathieu Equation . . . . . . . . . . . . . . 142
5.4.4 CSS Milling Process: Nonautonomous DDE
with Time-Periodic Delays . . . . . . . . . . . . . . . . . . . . . . . . . . 144
5.4.5 VSS Milling Process: Nonautonomous DDE
with Time-Periodic Delay . . . . . . . . . . . . . . . . . . . . . . . . . . 148
5.5 Closure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

6 Bifurcations, Center Manifolds, and Periodic Solutions . . . . . . . . . . . . 155


David E. Gilsinn
6.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
6.2 Decomposing Ordinary Differential Equations Using Adjoints . . . . 160
6.2.1 Step 1: Form the Vector Equation . . . . . . . . . . . . . . . . . . . . 160
6.2.2 Step 2: Define the Adjoint Equation . . . . . . . . . . . . . . . . . . 161
6.2.3 Step 3: Define a Natural Inner Product by way
of an Adjoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
6.2.4 Step 4: Get the Critical Eigenvalues . . . . . . . . . . . . . . . . . . 162
6.2.5 Step 5: Apply Orthogonal Decomposition . . . . . . . . . . . . . 163
6.3 An Example Application in Ordinary Differential Equations . . . . . 164
6.3.1 Step 1: Form the Vector Equation . . . . . . . . . . . . . . . . . . . . 165

6.3.2 Step 2: Define the Adjoint Equation . . . . . . . . . . . . . . . . . . 165


6.3.3 Step 3: Define a Natural Inner Product by Way
of an Adjoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
6.3.4 Step 4: Get the Critical Eigenvalues . . . . . . . . . . . . . . . . . . 166
6.3.5 Step 5: Apply Orthogonal Decomposition . . . . . . . . . . . . . 166
6.4 Delay Differential Equations as Operator Equations . . . . . . . . . . . . 167
6.4.1 Step 1: Form the Operator Equation . . . . . . . . . . . . . . . . . . 167
6.4.2 Step 2: Define an Adjoint Operator . . . . . . . . . . . . . . . . . . . 169
6.4.3 Step 3: Define a Natural Inner Product by Way
of an Adjoint Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
6.4.4 Step 4: Get the Critical Eigenvalues . . . . . . . . . . . . . . . . . . 171
6.4.5 Step 5: Apply Orthogonal Decomposition . . . . . . . . . . . . . 172
6.5 A Machine Tool DDE Example: Part 1 . . . . . . . . . . . . . . . . . . . . . . . 176
6.5.1 Step 1: Form the Operator Equation . . . . . . . . . . . . . . . . . . 176
6.5.2 Step 2: Define the Adjoint Operator . . . . . . . . . . . . . . . . . . 178
6.5.3 Step 3: Define a Natural Inner Product by Way
of an Adjoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
6.5.4 Step 4: Get the Critical Eigenvalues . . . . . . . . . . . . . . . . . . 179
6.5.5 Step 5: Apply Orthogonal Decomposition . . . . . . . . . . . . . 185
6.6 Computing the Bifurcated Periodic Solution
on the Center Manifold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
6.6.1 Step 1: Compute the Center Manifold Form . . . . . . . . . . . 187
6.6.2 Step 2: Develop the Normal Form
on the Center Manifold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
6.6.3 Step 3: Form the Periodic Solution on the Center
Manifold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
6.7 A Machine Tool DDE Example: Part 2 . . . . . . . . . . . . . . . . . . . . . . . 190
6.7.1 Step 1: Compute the Center Manifold Form . . . . . . . . . . . 190
6.7.2 Step 2: Develop the Normal Form
on the Center Manifold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
6.7.3 Step 3: Form the Periodic Solution on the Center
Manifold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
6.8 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201

7 Center Manifold Analysis of the Delayed Liénard Equation . . . . . . . . 203


Siming Zhao and Tamás Kalmár-Nagy
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
7.2 Linear Stability Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
7.3 Operator Differential Equation Formulation . . . . . . . . . . . . . . . . . . . 206
7.4 Center Manifold Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
7.5 Hopf Bifurcation Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
7.6 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
7.7 Hopf Bifurcation in the Sunflower Equation . . . . . . . . . . . . . . . . . . . 214
7.8 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218

8 Calculating Center Manifolds for Delay Differential Equations
Using Maple™ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Sue Ann Campbell
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
8.2 Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
8.2.1 Linearization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
8.2.2 Nonlinear Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
8.3 Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
8.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242

9 Numerical Solution of Delay Differential Equations . . . . . . . . . . . . . . . 245


Larry F. Shampine and Sylvester Thompson
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
9.2 DDEs are not ODEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
9.3 Numerical Methods and Software Issues . . . . . . . . . . . . . . . . . . . . . . 252
9.3.1 Explicit Runge–Kutta Methods . . . . . . . . . . . . . . . . . . . . . . 253
9.3.2 Error Estimation and Control . . . . . . . . . . . . . . . . . . . . . . . . 255
9.3.3 Event Location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
9.3.4 Software Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
9.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
9.4.1 El Niño Southern Oscillation Variability Model . . . . . . . . 259
9.4.2 Rocking Suitcase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
9.4.3 Time-Dependent DDE with Impulses . . . . . . . . . . . . . . . . . 268
9.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
9.6 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269

10 Effects of Time Delay on Synchronization and Firing Patterns
in Coupled Neuronal Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Qishao Lu, Qingyun Wang, and Xia Shi
10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
10.2 Basic Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
10.2.1 Firing Patterns of a Single Neuron . . . . . . . . . . . . . . . . . . . 276
10.2.2 Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
10.3 Synchronization and Firing Patterns in Electrically Coupled
Neurons with Time Delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
10.4 Synchronization and Firing Patterns in Coupled Neurons
with Delayed Inhibitory Synapses . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
10.5 Synchronization and Firing Patterns in Coupled Neurons
with Delayed Excitatory Synapses . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
10.6 Delay Effect on Multistability and Spatiotemporal Dynamics
of Coupled Neuronal Activity [14] . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
10.7 Closure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303

11 Delayed Random Walks: Investigating the Interplay
Between Delay and Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
Toru Ohira and John Milton
11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
11.2 Simple Random Walk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
11.2.1 Probability Distribution Function . . . . . . . . . . . . . . . . . . . . 309
11.2.2 Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
11.2.3 Fokker–Planck Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
11.2.4 Auto-correlation Function: Special Case . . . . . . . . . . . . . . 313
11.2.5 Auto-correlation Function: General Case . . . . . . . . . . . . . . 315
11.3 Random Walks on a Quadratic Potential . . . . . . . . . . . . . . . . . . . . . . 316
11.3.1 Auto-correlation Function: Ehrenfest Random Walk . . . . 318
11.3.2 Auto-correlation Function: Langevin Equation . . . . . . . . . 320
11.4 Delayed Random Walks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
11.4.1 Delayed Fokker–Planck Equation . . . . . . . . . . . . . . . . . . . . 321
11.4.2 Auto-correlation Function: Delayed Random Walk . . . . . 322
11.4.3 Auto-correlation Function: Delayed Langevin Equation . . 324
11.5 Postural Sway . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
11.5.1 Transient Auto-correlation Function . . . . . . . . . . . . . . . . . . 329
11.5.2 Balance Control with Positive Feedback . . . . . . . . . . . . . . 330
11.6 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
Contributors

B. Balachandran Department of Mechanical Engineering, University of Maryland, College Park, MD 20742, USA, [email protected]
Eric Butcher Department of Mechanical and Aerospace Engineering, New Mexico
State University, Las Cruces, NM 88003, USA, [email protected]
Sue Ann Campbell Department of Applied Mathematics, University of Waterloo,
Waterloo, ON N2L 3G1, Canada;
Centre for Nonlinear Dynamics in Physiology and Medicine, McGill University,
Montreal, QC H3A 2T5, Canada, [email protected]
David E. Gilsinn National Institute of Standards and Technology, 100 Bureau
Drive, Stop 8910, Gaithersburg, MD 20899-8910, USA, [email protected]
Keqin Gu Department of Mechanical and Industrial Engineering, Southern Illinois
University Edwardsville, Edwardsville, IL 62026-1805, USA, [email protected]
T. Insperger Department of Applied Mechanics, Budapest University of Technology and Economics, Budapest H-1521, Hungary, [email protected]
Mrdjan Jankovic Ford Research and Advanced Engineering, Dearborn, MI, USA,
[email protected]
Wenjuan Jiang LAGIS CNRS UMR 8146, Ecole Centrale de Lille, BP 48, 59651
Villeneuve d’Ascq, Cedex, France, [email protected]
Tamás Kalmár-Nagy Department of Aerospace Engineering, Texas A&M Univer-
sity, College Station, TX 77843, USA, [email protected]
Ilya Kolmanovsky Ford Research and Advanced Engineering, Dearborn, MI,
USA, [email protected]
Alexandre Kruszewski LAGIS CNRS UMR 8146, Ecole Centrale de Lille, BP 48,
59651 Villeneuve d’Ascq, Cedex, France, [email protected]


Hongfei Li Department of Mathematics, Yulin College, Yulin City, Shaanxi Province 719000, P.R. China, [email protected]
X.-H. Long The State Key Lab of Mechanical System and Vibration, Shanghai
Jiaotong University, Shanghai 200240, P.R. China, [email protected]
Qishao Lu School of Science, Beijing University of Aeronautics and Astronautics,
Beijing 100191, China, [email protected]
Brian Mann Department of Mechanical Engineering and Materials Science, Duke
University, Durham, NC 27708, USA, [email protected]
John Milton Joint Science Department, The Claremont Colleges, Claremont, CA,
USA, [email protected]
Toru Ohira Sony Computer Science Laboratories, Inc., Tokyo 141-0022, Japan,
[email protected]
Jean-Pierre Richard LAGIS CNRS UMR 8146, Ecole Centrale de Lille, BP 48, 59651 Villeneuve d'Ascq, Cedex, France; Equipe-Projet ALIEN, INRIA (l'Institut National de Recherche en Informatique et en Automatique), jean-pierre.richard@ec-lille.fr
L.F. Shampine Mathematics Department, Southern Methodist University, Dallas,
TX 75275, USA, [email protected]
Xia Shi School of Science, Beijing University of Posts and Telecommunications,
Beijing 100876, China, [email protected]
S. Thompson Department of Mathematics and Statistics, Radford University,
Radford, VA 24142, USA, [email protected]
Armand Toguyeni LAGIS CNRS UMR 8146, Ecole Centrale de Lille, BP 48,
59651 Villeneuve d’Ascq, Cedex, France, [email protected]
Qingyun Wang School of Statistics and Mathematics, Inner Mongolia Finance and
Economics College, Huhhot 010051, China, [email protected]
Siming Zhao Department of Aerospace Engineering, Texas A&M University, Col-
lege Station, TX 77843, USA, [email protected]
Chapter 1
Lyapunov–Krasovskii Functional Approach
for Coupled Differential-Difference Equations
with Multiple Delays

Hongfei Li and Keqin Gu

Abstract Coupled differential-difference equations (coupled DDEs) represent a very general class of time-delay systems. Indeed, traditional DDEs of retarded or
neutral type, as well as singular systems, can all be considered as special cases of
coupled DDEs. The coupled DDE formulation is especially effective when a system
has a large number of state variables, but only a few of them involve time delays. In
this chapter, the stability of such systems is studied by using a Lyapunov-Krasovskii
functional method. For linear systems, a quadratic Lyapunov-Krasovskii functional
is discretized to reduce the stability problem to a set of linear matrix inequalities
for which effective numerical algorithms are available and widely implemented in software packages such as MATLAB.

Keywords: Coupled differential-difference equations · Time delay · Lyapunov-Krasovskii functional · Linear matrix inequality

1.1 Introduction

Coupled differential-difference equations (coupled DDEs)

ẋ(t) = f (t, x(t), y(t − r1 ), y(t − r2 ), . . . , y(t − rK )), (1.1)


y(t) = g(t, x(t), y(t − r1 ), y(t − r2 ), . . . , y(t − rK )), (1.2)

have received substantial attention from researchers since the early 1970s. Another
seemingly more general form of coupled DDEs often studied in the literature is

ẋ(t) = f (t, x(t), xd (t), yd (t)), (1.3)


y(t) = g(t, x(t), xd (t), yd (t)), (1.4)


where

xd (t) = (x(t − r1 ), x(t − r2 ), . . . , x(t − rK )),


yd (t) = (y(t − r1 ), y(t − r2 ), . . . , y(t − rK )).

Early motivation for studying such systems arose from some systems described
by hyperbolic partial differential equations, with main examples from lossless propagation models of electrical lines and of gas, steam, and water pipes [2, 23, 26, 29, 36, 44, 45, 47]. To illustrate the typical process, consider a lossless transmission line studied
in [2] described by the partial differential equations

L ∂i/∂t = −∂v/∂x, (1.5)

C ∂v/∂t = −∂i/∂x, (1.6)
for 0 ≤ x ≤ 1, with boundary conditions

E − v(0, t) − R i(0, t) = 0, (1.7)

C1 dv(1, t)/dt = i(1, t) − g(v(1, t)). (1.8)
The general solution of (1.5) and (1.6) is

v(x,t) = φ (x − t/τ ) + ψ (x + t/τ ), (1.9)


i(x,t) = k[φ (x − t/τ ) − ψ (x + t/τ )], (1.10)
where τ = √(LC) and k = √(C/L). Letting x = 1 in (1.9) and (1.10) and solving for φ and ψ, with a scaling and shift of the time variable, the authors obtain
φ(α) = (1/2)[p((1 − α)τ) + (1/k) q((1 − α)τ)], (1.11)

ψ(β) = (1/2)[p((β − 1)τ) − (1/k) q((β − 1)τ)], (1.12)
where

p(t) = v(1,t), (1.13)


q(t) = i(1,t). (1.14)

After using (1.11) and (1.12) in (1.9) and (1.10) for x = 0, one arrives at

v(0, t) = (1/2)[p(t + τ) + p(t − τ) + (1/k) q(t + τ) − (1/k) q(t − τ)], (1.15)

i(0, t) = (k/2)[p(t + τ) − p(t − τ) + (1/k) q(t + τ) + (1/k) q(t − τ)]. (1.16)

By using (1.15), (1.16), (1.13), and (1.14) in the boundary conditions (1.7) and (1.8),
with a shift of time, one obtains

E = K1 p(t) + K2 p(t − 2τ) + (K1/k) q(t) − (K2/k) q(t − 2τ),

ṗ(t) = (1/C1) q(t) − (1/C1) g(p(t)),

where K1 = (1 + kR)/2 and K2 = (1 − kR)/2. This is in the form of (1.3) and (1.4).
A transformation of variables (p(t), q(t)) → (p(t), r(t)) with r(t) = p(t) − q(t)/k
allows us to transform the above to
ṗ(t) = (k/C1) p(t) − (1/C1) g(p(t)) − (k/C1) r(t),

r(t) = 2p(t) + (K2/K1) r(t − 2τ) − E/K1,
which is in the standard form of (1.1) and (1.2).
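To make the reduction concrete, the coupled form above can be integrated numerically by the method of steps, advancing the difference part and the differential part together on a grid whose spacing divides the delay 2τ. The following is a minimal Python sketch; all parameter values and the load characteristic g are illustrative assumptions, not data from [2].

```python
import numpy as np

# Minimal method-of-steps integration of the coupled DDE derived above:
#   p'(t) = (k/C1) p(t) - (1/C1) g(p(t)) - (k/C1) r(t)
#   r(t)  = 2 p(t) + (K2/K1) r(t - 2*tau) - E/K1
# All numerical values and the load g are illustrative assumptions.
L_, C_ = 1.0, 1.0                      # line inductance and capacitance
R_, C1, E = 0.5, 1.0, 1.0              # boundary resistor, capacitor, source
tau, k = np.sqrt(L_ * C_), np.sqrt(C_ / L_)
K1, K2 = (1 + k * R_) / 2, (1 - k * R_) / 2
g = lambda v: 0.1 * v**3               # hypothetical nonlinear load

h = 0.01                               # step size; 2*tau is a multiple of h
n_delay = int(round(2 * tau / h))      # grid points per delay interval
N = int(40.0 / h)                      # number of time steps

p = np.zeros(N + 1)
r = np.zeros(N + 1 + n_delay)          # r with prepended initial history
p[0] = 0.2                             # initial value psi for the ODE part
r[:n_delay] = 0.0                      # initial function phi on [-2*tau, 0)

for i in range(N):
    j = i + n_delay                    # array index of time t_i
    r[j] = 2 * p[i] + (K2 / K1) * r[j - n_delay] - E / K1
    # explicit Euler step for the differential part
    p[i + 1] = p[i] + h * ((k / C1) * p[i] - g(p[i]) / C1 - (k / C1) * r[j])

print("p at final time:", p[-1])
```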
Most early studies transform coupled DDEs to a set of regular DDEs of neutral
type [21, 35]. A notable exception is [44]. In recent years, there has been renewed interest in coupled DDEs. A substantial effort has been devoted to the Lyapunov-
Krasovskii functional approach [9, 27, 28, 41–43, 45]. The reader is also referred to
the plenary lecture by Rasvan [46] for a comprehensive review. Most studies are
carried out on the seemingly more general form of (1.3) and (1.4). On the other
hand, it should be pointed out that the description (1.1) and (1.2) is not less general
than (1.3) and (1.4). Indeed, by introducing an additional variable z(t) = x(t), (1.3)
and (1.4) can be written as

ẋ(t) = f (t, x(t), zd (t), yd (t)),


y(t) = g(t, x(t), zd (t), yd (t)),
z(t) = x(t),

which is obviously in the form of (1.1) and (1.2).


It was suggested in [15,16] that it is also of interest to reformulate regular DDEs,
even of retarded type, in the form of coupled DDEs (1.1) and (1.2). Indeed, many
practical systems may have a large number of state variables, but only involve a
rather small number of delayed variables. A process similar to "pulling out uncertainties" as described in [6, 7] allows the authors to "pull out delays" and write the system in a form with all delays appearing in the feedback channel. In [15–17], the case of a single delay or commensurate delays was described by the forward system
from u to y,

ẋ(t) = fˆ(t, x(t), u(t)),


y(t) = ĝ(t, x(t), u(t)),

and feedback consisting of the delay

u(t) = y(t − r).

It should be pointed out that the “feedback” does not have to be the active action
of any controller. Rather, it is often the nature of the component. Having the delay
in the feedback path is a choice of modeling. In this chapter, the more general case
of multiple delays is considered:

u(t) = h(y(t − r1 ), y(t − r2 ), . . . , y(t − rK )).

This results in a standard form of coupled DDEs (1.1) and (1.2) where the dimension
of x may be significantly larger than that of y. As will be shown later on, at least in
the case of linear systems, the stability analysis of the system in this form using the Lyapunov-Krasovskii functional approach may require much less computation than
if it is formulated in a regular DDE form of Hale’s type
(d/dt) ḡ(t, x(t), x(t − r1), x(t − r2), . . . , x(t − rK))
= f̄(t, x(t), x(t − r1), x(t − r2), . . . , x(t − rK)). (1.17)
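The computational advantage of the coupled formulation is a matter of dimension bookkeeping: only the n-dimensional channel y carries a history, while the m-dimensional state x does not. A small Python sketch of this "pulling out delays" bookkeeping follows; the matrices and dimensions are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical linear plant with m = 50 states but a single delayed channel
# (n = 1), written in the coupled form
#   x'(t) = A x(t) + B1 y(t - r),    y(t) = C x(t).
# Only the scalar y carries a history; rewriting the same model as a retarded
# DDE in x would delay the full 50-dimensional state instead.
rng = np.random.default_rng(0)
m, n = 50, 1
A = rng.standard_normal((m, m)) - 5.0 * np.eye(m)  # illustrative plant matrix
B1 = rng.standard_normal((m, n))                   # gain of the delayed channel
C = rng.standard_normal((n, m))                    # the single delayed output

print("delayed variables, coupled DDE form :", n)
print("delayed variables, retarded DDE form:", m)
```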

Not surprisingly, the known stability results of coupled DDEs (1.1) and (1.2) (or
its more general form of coupled differential-functional equations) parallel those of
regular DDEs of neutral type (1.17) (or its more general form of functional differen-
tial equations) discussed in [22, 31, 32]. Its flexibility in modeling practical systems
makes it an ideal form in many practical applications, and warrants a substantial
effort in developing further results relevant to such systems. In this chapter, some
important stability results of coupled DDEs (1.1) and (1.2) and its generalization
and specializations to linear systems are covered. This chapter and the next four
chapters cover material pertaining to control and stability analysis of DDEs.
This first chapter is organized as follows. Notation is described in the remaining
part of this section. In Sect. 2, the general stability theory of coupled differential-
functional equations by using the Lyapunov-Krasovskii functional approach is dis-
cussed. In Sect. 3, the stability problem of continuous time difference equations is
presented. Following that, in Sect. 4, the quadratic Lyapunov-Krasovskii functional
for linear systems and the discretization process are detailed. Examples are included and discussed in Sect. 5. Finally, concluding remarks are presented in Sect. 6.
Here, R is the set of real numbers, and Rn and Rm×n represent the sets of real n-vectors and m×n matrices, respectively. R+ and R̄+ denote the sets of positive and nonnegative real numbers, respectively. For a given r ∈ R+, which usually corresponds to the delay, PC is the set of bounded, right continuous, and piecewise continuous Rn-valued functions defined on [−r, 0). For a given bounded, right continuous, and piecewise continuous function y : [σ − r, ∞) → Rn and τ ≥ σ, the authors define yτ ∈ PC by yτ(θ) = y(τ + θ), θ ∈ [−r, 0). || · || denotes the 2-norm for vectors and the corresponding induced norm for matrices. For φ ∈ PC and ψ ∈ Rm, define ||φ|| ≜ sup_{−r≤θ<0} ||φ(θ)|| and ||(ψ, φ)|| ≜ max{||φ||, ||ψ||}. For

x : [σ, ∞) → Rm and σ ≤ s < τ, x[s,τ) denotes the restriction of x to [s, τ), and ||x[s,τ)|| = sup_{s≤t<τ} ||x(t)||. For A ∈ Rm×n, AT denotes the transpose of A. If A is square, then ρ(A) represents the greatest absolute value of the eigenvalues of A. If A = AT, the authors write A > 0 and A ≥ 0 to denote that A is positive definite or positive semidefinite, respectively. Similar notation is used to denote negative definiteness and semidefiniteness.

1.2 Coupled Differential-Functional Equations

In this section, the authors present the basic stability theory of coupled differential-
functional equations

ẋ(t) = f (t, x(t), yt ), (1.18)


y(t) = g(t, x(t), yt ), (1.19)

where x(t) ∈ Rm, y(t) ∈ Rn. The notation yt ∈ PC represents a section of the function y(t) from [t − r, t), with the notation

yt (θ ) = y(t + θ ), θ ∈ [−r, 0),

and r ∈ R+ is the maximum delay. Obviously, the system (1.18) and (1.19) includes
(1.1) and (1.2) as a special case. It is also interesting to point out that (1.18) and
(1.19) and (1.1) and (1.2) may also be used to represent many singular systems.
The smallest possible initial time σ is given. The initial conditions may be
defined for any t0 ≥ σ as
x(t0 ) = ψ ,
yt0 = φ ,
where ψ ∈ Rm and φ ∈ PC. As is conventional in stability studies, it is assumed that
the real functions f and g satisfy f (t, 0, 0) = g(t, 0, 0) = 0, and the stability of the
trivial solution is studied. The material in this section is mainly adapted from [16].
The setting (1.18) and (1.19) is very general. If f and g are linear, then according to the Riesz representation theorem, it is always possible to write

f(t, x(t), yt) = A(t)x(t) + ∫_{−r}^{0} dθ μ(t, θ) y(t + θ), (1.20)

g(t, x(t), yt) = C(t)x(t) + ∫_{−r}^{0} dθ η(t, θ) y(t + θ), (1.21)

where the subscript θ indicates that θ is the integration variable, and the matrix
functions μ and η are of bounded variation with respect to θ for any fixed t, and the
integration is in the Riemann-Stieltjes sense. In practice, it is often sufficient to restrict attention to linear systems of the form

f(t, x(t), yt) = A(t)x(t) + ∑_{i=1}^{K} Bi(t) y(t − ri) + ∫_{−r}^{0} B(t, θ) y(t + θ) dθ, (1.22)

g(t, x(t), yt) = C(t)x(t) + ∑_{i=1}^{K} Di(t) y(t − ri) + ∫_{−r}^{0} D(t, θ) y(t + θ) dθ, (1.23)

where 0 < ri ≤ r, i = 1, 2, . . . , K, and

∫_{−r}^{0} ||B(t, θ)|| dθ < ∞, ∫_{−r}^{0} ||D(t, θ)|| dθ < ∞.

As bounded variation implies at most countably many discontinuities, the system expressed by (1.20) and (1.21) would be equivalent to (1.22) and (1.23) if the number of discrete delays K is allowed to be ∞ with the restriction

∑_{i=1}^{∞} ||Bi(t)|| < ∞ and ∑_{i=1}^{∞} ||Di(t)|| < ∞.

It is assumed that there exists a unique solution (x(t), yt) to the equations in [t0, +∞) for any initial conditions t0 ≥ σ, ψ ∈ Rm, and φ ∈ PC. Such solutions are
often denoted as x(t;t0 , ψ , φ ) and yt (t0 , ψ , φ ) when the explicit dependence on the
initial conditions is important.
Although it is not the purpose of this chapter to study the existence and unique-
ness of the solution, it can be shown using a similar procedure to [20] that the equa-
tions have a unique solution if f (t, ψ , φ ) and g(t, ψ , φ ) are continuous with respect
to t and Lipschitz with respect to ψ and φ, and the function g is uniformly nonatomic at zero. The function g is uniformly nonatomic at zero if there exist μ > 0, s0 > 0, and a strictly increasing function ζ(·), 0 ≤ ζ(s) < 1 for all s ∈ [0, s0), such that

||g(t, x, φ) − g(t, x, φ̃)|| ≤ ζ(s) ||φ − φ̃||

for all φ, φ̃ ∈ PC with ||φ − φ̃|| ≤ μ and φ̃(θ) = φ(θ) for θ ∈ [−r, −s).
In this chapter, the state (x(t), yt) evolves in Rm × PC. Let C be the set of continuous bounded Rn-valued functions defined on [−r, 0). In the literature, it is not uncommon to constrain the state to evolve within the set

{(ψ, φ) ∈ Rm × C | lim_{θ→0} φ(θ) = lim_{θ→0} g(t, ψ, φ(θ))}.

Indeed, if the initial condition satisfies the above constraint, then the state also satisfies the above constraint [21]. Almost all the conclusions in this chapter apply to this case.
Another possibility is to relax the state to evolve in Rm × L2 such as was done
in [41, 42]. The theory discussed in this chapter may not be directly applicable to
this possibility mainly due to different norms used.

The definition of stability is very similar to that for regular time-delay systems, and is presented in the following.

Definition 1.1. The trivial solution x(t) = y(t) = 0 is said to be stable if for any
t0 ≥ σ and any ε > 0, there exists a δ = δ (t0 , ε ) > 0 such that ||(x(t0 ), yt0 )|| < δ
implies ||(x(t), yt )|| < ε for all t > t0 . It is said to be asymptotically stable if it is
stable, and for any t0 ∈ R there exists a δa = δa (t0 ) > 0 such that ||(x(t0 ), yt0 )|| <
δa implies x(t) → 0, y(t) → 0 as t → ∞. It is said to be uniformly stable if it is
stable and δ (t0 , ε ) can be chosen to be independent of t0 . It is said to be uniformly
asymptotically stable if it is uniformly stable, and there exists a δa > 0 independent
of t0 such that for any η > 0, there exists a T = T (δa , η ) such that ||(x(t0 ), yt0 )|| < δa
implies ||(x(t), yt )|| < η for all t ≥ t0 + T and t0 ≥ σ . It is globally (uniformly)
asymptotically stable if it is (uniformly) asymptotically stable and δa can be an
arbitrarily large finite number.

An important component of studying the stability of the complete system consisting of (1.18) and (1.19) is understanding the subsystem (1.19). Taken alone, (1.19)
can be considered as a system described by the function g with x as the input and
yt as the state. From this point of view, yt depends on the initial condition yt0 = φ
and the input x[t0 ,t) , and can be denoted as yt (t0 , φ , x). The following definition is
introduced.

Definition 1.2. The function g or the subsystem (1.19) defined by g is said to be


input-to-state stable if for any t0, there exist a KL function β (i.e., β : R̄+ × R̄+ → R̄+, β(α, t) is continuous, strictly increasing with respect to α, strictly decreasing with respect to t, β(0, t) = 0, and lim_{t→∞} β(α, t) = 0) and a K function γ (i.e., γ : R̄+ → R̄+ is continuous, strictly increasing, and γ(0) = 0), such that the solution
yt (t0 , φ , x) corresponding to the initial condition yt0 = φ and input x(t) satisfies

||yt (t0 , φ , x)|| ≤ β (||φ ||,t − t0 ) + γ (||x[t0 ,t) ||). (1.24)

If β and γ can be chosen to be independent of t0, then it is said to be uniformly input-to-state stable.

This definition is along the lines of the literature on input-to-state stability for continuous time systems [49, 50], discrete time systems [25], and time-delay systems [52]. A well-known example of an input-to-state stable system is

g(t, x(t), yt ) = Cx(t) + Dy(t − r),


with D satisfying ρ(D) < 1 [25]. In this case [22], for any 0 < δ < −ln ρ(D)/r, there exists an M > 0 such that

||yt(t0, x, φ)|| ≤ M[||yt0|| e^{−δ(t−t0)} + sup_{t0≤τ<t} ||x(τ)||],

and the input-to-state stability is established by letting



β(α, t) = M α e^{−δt},
γ(α) = M α.
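The geometric decay driven by ρ(D) < 1 can be observed numerically. The following Python sketch simulates the scalar version of this example on a uniform grid (with the delay r an exact multiple of the step size); the numerical values are assumptions for illustration only.

```python
import numpy as np

# Numerical illustration (not a proof) of input-to-state stability of the
# continuous time difference equation y(t) = C x(t) + D y(t - r) with
# rho(D) < 1.  Scalar case with assumed values; the input x is a bounded sine.
Cc, D, r = 1.0, 0.6, 1.0             # rho(D) = 0.6 < 1
h = 0.001                            # grid step; r is an exact multiple of h
n_delay = int(round(r / h))
N = int(20.0 / h)

x = lambda t: 0.3 * np.sin(2.0 * t)  # bounded input
y = np.zeros(N + n_delay)
y[:n_delay] = 1.0                    # initial function phi on [-r, 0)

for i in range(N):
    # index i + n_delay is time i*h; index i is the delayed time i*h - r
    y[i + n_delay] = Cc * x(i * h) + D * y[i]

# The initial-condition term decays geometrically (factor rho(D) per delay
# interval), leaving a band set by the input, as in the bound (1.24).
print("max |y| on the last delay interval:", np.abs(y[-n_delay:]).max())
```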

With the above background, the Lyapunov-Krasovskii functional method is now presented to study the stability of systems described by (1.18) and (1.19). Let V(t, ψ, φ) be differentiable, and define
V̇(τ, ψ, φ) ≜ (d/dt) V(t, x(t), yt)|_{t=τ, x(τ)=ψ, yτ=φ}
= lim sup_{t→τ+} [V(t, x(t; τ, ψ, φ), yt(τ, ψ, φ)) − V(τ, ψ, φ)] / (t − τ).

Then the following theorem can be presented.

Theorem 1.1. Suppose that f and g map R × (bounded sets in Rm × PC) into bounded sets in Rm and Rn, respectively, and that g is uniformly input-to-state stable; u, v, w : R̄+ → R̄+ are continuous nondecreasing functions, where additionally u(s) and v(s) are positive for s > 0, and u(0) = v(0) = 0. If there exists a continuously differentiable functional V : R × Rm × PC → R such that

u(||ψ ||) ≤ V (t, ψ , φ ) ≤ v(||(ψ , φ )||), (1.25)

and
V̇ (t, ψ , φ ) ≤ −w(||ψ ||), (1.26)
then the trivial solution of the coupled differential-functional equations (1.18) and
(1.19) is uniformly stable. If w(s) > 0 for s > 0, then it is uniformly asymptotically
stable. If, in addition, lim_{s→∞} u(s) = ∞, then it is globally uniformly asymptotically
stable.

Proof. To prove uniform stability, for any given ε > 0, one may find a corresponding
δ = δ (ε ) as follows: Choose ε̂ > 0, ε̂ < ε , such that β (ε̂ , 0) < ε /2, γ (ε̂ ) < ε /2, and
let δ > 0 be such that δ ≤ ε̂ and v(δ ) < u(ε̂ ). Then, for any ||(ψ , φ )|| < δ , due to
(1.25) and (1.26), one has

u(||x(t)||) ≤ V (t, x(t), yt ) ≤ V (t0 , ψ , φ )


≤ v(||(ψ , φ )||) ≤ v(δ ) < u(ε̂ ),

which implies ||x(t)|| < ε̂ < ε . Furthermore,

||yt(t0, φ, x)|| ≤ β(||φ||, t − t0) + γ(||x[t0,t)||)


< ε /2 + ε /2
= ε.

Thus ||(x(t), yt )|| < ε and uniform stability is proven.

Now let εa = 1. Because of uniform stability, there exists a δa = δ (εa ) such that
||(x(t), yt )|| < εa for any t0 ≥ σ and ||(ψ , φ )|| < δa . To show uniform asymptotic

stability, one needs to show further that for any η > 0, one can find a T = T (δa , η )
such that ||(x(t), yt )|| < η for all t ≥ t0 + T and ||(ψ , φ )|| < δa . Because of uniform
stability, let δ = δ (η ) as was done in proving uniform stability, it is sufficient to
show that there exists a t ∈ (t0 ,t0 + T ] such that ||(x(t), yt )|| < δ .
Let Ta be such that
β (εa , Ta ) ≤ δ /2, (1.27)
and
α = min{γ −1 (δ /2), δ }. (1.28)
As f maps bounded sets to bounded sets, there exists an L > 0 such that

|| f (t, x(t), yt )|| < L for all t ≥ t0 and ||(x(t), yt )|| ≤ εa . (1.29)

One may increase L if necessary to satisfy

α /L < Ta .

Then, T may be chosen as

T = (2K + 1)Ta,

where K is the smallest integer satisfying K > L v(εa) / (α w(α/2)). Suppose the statement is not true; that is,

||(x(t), yt)|| ≥ δ, (1.30)

for all t ∈ (t0, t0 + T]. Then, it can be shown that there exists an sk ∈ [tk − Ta, tk], tk = t0 + 2kTa, k = 1, 2, ..., K, such that

||x(sk)|| ≥ α. (1.31)

Indeed, (1.30) implies either


||x(tk )|| ≥ δ ,
in which case (1.31) is satisfied, or for s = tk − Ta

δ ≤ ||ytk||
≤ β(||ys||, tk − s) + γ(||x[s,tk)||)
≤ β(εa, Ta) + γ(||x[s,tk)||)
≤ δ/2 + γ(||x[s,tk)||),

which implies γ(||x[s,tk)||) ≥ δ/2, or ||x[s,tk)|| ≥ α; i.e., there is an sk ∈ [tk − Ta, tk) such that (1.31) is satisfied.
From (1.31), (1.29), and (1.18), it can be concluded that, for t ∈ Ik = [sk − α/(2L), sk + α/(2L)],

||x(t)|| ≥ ||x(sk)|| − L · α/(2L) ≥ α/2.

This implies that V̇(t, x(t), yt) ≤ −w(α/2) for t ∈ Ik, k = 1, 2, ..., K, and V̇(t, x(t), yt) ≤ 0 elsewhere, and the intervals Ik do not overlap for distinct k. It follows that for t = t0 + T,

V (t, x(t), yt ) ≤ V (t0 , ψ , φ ) − Kw(α /2)α /L


≤ v(εa ) − K α w(α /2)/L
< 0,

which contradicts the fact that V(t, x(t), yt) ≥ 0. This proves uniform asymptotic stability.
Finally, if lim_{s→∞} u(s) = ∞, then δa above may be arbitrarily large, and εa can be chosen after δa is given to satisfy v(δa) < u(εa); therefore, global uniform asymptotic stability can be concluded.
This theorem is due to Gu and Liu [16], and is very similar to the counterpart of
functional differential equations of neutral type shown in [5]. In [15], asymptotic,
but not uniform, stability was proved under the assumption that g is input-to-state
stable. Asymptotic stability was also established in [43] for systems with multiple
discrete delays.
The rest of this first chapter is devoted to linear systems. For linear systems,
uniform stability is equivalent to exponential stability.

1.3 Stability of Continuous Time Difference Equations

This section considers the stability problem of a system described by


y(t) = ∑_{i=1}^{K} Di y(t − ri) + h(t), (1.32)

and the corresponding homogeneous equation


y(t) = ∑_{i=1}^{K} Di y(t − ri), (1.33)

where y(t) ∈ Rn and Di ∈ Rn×n . Let

r = max_{1≤i≤K} ri.

Then, initial condition can be expressed as

yt0 = φ ,

where yt is defined by

yt (θ ) = y(t + θ ), − r ≤ θ < 0.

The characteristic equation of the system is


Δ(λ) = det[I − ∑_{i=1}^{K} e^{−ri λ} Di] = 0. (1.34)

It is well known [22] that

aD = sup{Re(λ ) | Δ (λ ) = 0} < ∞. (1.35)

For any δ > aD , there exists an M > 0 such that the solution to (1.32) satisfies

||yt(t0, φ, h)|| ≤ M[||φ|| e^{δ(t−t0)} + sup_{t0≤τ≤t} ||h(τ)||].

See Theorem 3.4 and the discussion before the theorem in Chap. 9 of [22]. A consequence of this is that (1.33) is exponentially stable and (1.32) is uniformly input-to-state stable if and only if aD < 0.
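For the single-delay case K = 1, the characteristic roots can be written down explicitly: det(I − e^{−rλ} D1) = 0 forces e^{rλ} to equal an eigenvalue of D1, so aD = ln ρ(D1)/r. A short Python sketch of this computation follows; the coefficient matrix and delay are assumed values for illustration.

```python
import numpy as np

# Single-delay case K = 1: det(I - exp(-r*lam) D1) = 0 forces exp(r*lam) to
# equal an eigenvalue mu of D1, so Re(lam) = ln|mu| / r and
#   a_D = ln(rho(D1)) / r.
# D1 and r below are assumed values for illustration.
D1 = np.array([[0.4, 0.3],
               [0.0, 0.5]])
r = 2.0
rho = np.abs(np.linalg.eigvals(D1)).max()
a_D = np.log(rho) / r
print("rho(D1) =", rho, ", a_D =", a_D)  # a_D < 0 exactly when rho(D1) < 1
```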
Unlike time-delay systems of retarded type, the stability of difference equations may change drastically under small deviations of the delay ratios. First, the following concepts are introduced.

Definition 1.3. A set of positive real scalars r1, r2, . . . , rK is rationally independent if the equation

∑_{i=1}^{K} αi ri = 0

can be satisfied for rational numbers αi, i = 1, 2, . . . , K, only if αi = 0 for all i.

Obviously, αi can be replaced by integers without loss of generality.

Theorem 1.2. With given coefficient matrices Di, i = 1, 2, . . . , K, and "stable" meaning aD < 0, the following statements are equivalent about the system (1.33):
(i) The system is stable for a set of rationally independent delays ri , i = 1, 2, . . . , K;
(ii) For given nominal delays ri0 > 0, i = 1, 2, . . . , K, and an arbitrarily small fixed deviation bound ε > 0, the system is stable for any set of delays ri, i = 1, 2, . . . , K, that satisfy

|ri − ri0| < ε;
(iii) The system is stable for all

ri > 0, i = 1, 2, . . . , K;

(iv) The coefficient matrices satisfy

sup { ρ( ∑_{i=1}^{K} e^{jθi} Di ) | θi ∈ [0, 2π], i = 1, 2, . . . , K } < 1,

where j is the imaginary unit.



The above theorem is due to Hale [19], and can be found in [48] and as Theorem 6.1 of Chap. 9 in [22]. It indicates that if a difference equation is exponentially stable under arbitrarily small independent deviations of the delays, then it must also be stable for arbitrary delays.
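Condition (iv) of Theorem 1.2 can be probed numerically by gridding the phase angles θi, as in the following Python sketch. A grid search of this kind provides evidence, not a certificate, of delay-independent stability; the two coefficient matrices are assumed examples.

```python
import numpy as np
from itertools import product

# Grid search over the phase angles in condition (iv) of Theorem 1.2:
#   sup { rho(sum_i exp(j*theta_i) D_i) : theta_i in [0, 2*pi] } < 1.
# Numerical evidence only, not a certificate; D below is an assumed example.
D = [np.array([[0.3, 0.1], [0.0, 0.2]]),
     np.array([[0.2, 0.0], [0.1, 0.3]])]

thetas = np.linspace(0.0, 2.0 * np.pi, 60)
worst = 0.0
for ths in product(thetas, repeat=len(D)):
    M = sum(np.exp(1j * th) * Di for th, Di in zip(ths, D))
    worst = max(worst, np.abs(np.linalg.eigvals(M)).max())

print("estimated sup of spectral radius:", worst)  # < 1 suggests stability
```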
On the other hand, there are indeed practical cases where the rational dependence of the delays is not subject to error, due to the structure of the problem. In these cases, the system can usually be recast in a different form, so that only rationally independent parameters appear as delays, as will be illustrated later in this chapter.
The following theorem presents the stability condition in a Lyapunov-like form.

Theorem 1.3. If there exist symmetric positive definite matrices S1, S2, . . . , SK such that

[D1 D2 ⋯ DK]ᵀ (∑_{i=1}^{K} Si) [D1 D2 ⋯ DK] − diag(S1, S2, . . . , SK) < 0 (1.36)

is satisfied, then aD < 0 for the system (1.33).

It is easy to see that the above theorem is equivalent to Theorem 6.1 of [3].
The proof there uses (iv) of Theorem 1.2. In the following, a more direct proof is
provided.

Proof. If (1.36) is true, then

[D1 D2 ⋯ DK]ᵀ (∑_{i=1}^{K} Si) [D1 D2 ⋯ DK] − diag(S1, S2, . . . , SK) ≤ −ε diag(S1, S2, . . . , SK) (1.37)

for some sufficiently small ε > 0. Let λ be a solution to the characteristic equation
(1.34). Then there exists a ζ = 0 such that
K
[I − ∑ e−ri λ Di ]ζ = 0,
i=1

or
K
∑ e−ri λ Di ζ = ζ . (1.38)
i=1

Right multiply (1.37) by ⎛ ⎞


e−r1 λ ζ
⎜ e−r2 λ ζ ⎟
⎜ ⎟
⎜ . ⎟,
⎝ .. ⎠
e−rK λ ζ
1 Lyapunov–Krasovskii Functional Approach for Coupled DDEs 13

and left multiply by its conjugate transpose; one obtains


 ∗   
K K K K
∑ e−ri λ Di ζ ∑ Si ∑ e−ri λ Di ζ − ∑ e−2ri σ ζ ∗ Si ζ
i=1 i=1 i=1 i=1
K
≤ −ε ∑ e−2ri σ ζ ∗ Si ζ .
i=1

After using (1.38), and letting σ = Re(λ ), the above becomes


 
K K
ζ∗ ∑ Si ζ − (1 − ε ) ∑ e−2ri σ ζ ∗ Si ζ ≤ 0,
i=1 i=1

or  
K
ζ∗ ∑ (1 − (1 − ε )e−2ri σ )Si ζ ≤ 0.
i=1

As ζ = 0 and Si > 0 for all i, the above can be satisfied only if

1 − (1 − ε )e−2ri σ < 0 for some i,

or
ln( 1−1 ε ) Δ
σ ≤ − min = −δ .
1≤ri ≤K 2ri
As the above must be satisfied for all solutions of the characteristic equation (1.34),
it can be concluded that aD ≤ −δ .

1.4 Linear Coupled Differential-Difference Equations

1.4.1 Quadratic Lyapunov-Krasovskii Functional

In this section, the system described by the following coupled linear differential-
difference equations is discussed
K
ẋ(t) = Ax(t) + ∑ Bi y(t − ri ), (1.39)
i=1
K
y(t) = Cx(t) + ∑ Di y(t − ri ), (1.40)
i=1

where x(t) ∈ Rm and y(t) ∈ Rn are the state variables; A ∈ Rm×m , C ∈ Rn×m , Bi ∈
Rm×n , Di ∈ Rm×m , i = 1, 2, ..., K are the coefficient matrices. The initial conditions
are defined as
14 H. Li and K. Gu

x(t0 ) = ψ ,
yt0 = φ .

Without loss of generality, the delays are ordered such that

0 < r1 < r2 < · · · < rK = r.

Similar to [12] and Sect. 7.5 of [14], choose Lyapunov-Krasovskii functional in


the form of
K  0
V (ψ , φ ) = ψ T Pψ + 2ψ T ∑ Qi (η )φ (η ) dη
i=1 −ri
K K  0  0
+∑∑ φ T (ξ )Rij (ξ , η )φ (η ) dξ dη
i=1 j=1 −ri −rj
K  0
+∑ φ T (η )Si (η )φ (η ) dη , (1.41)
i=1 −ri

where

P = PT ∈ Rm×m ,
Qi (η ) ∈ Rm×n ,
Rij (ξ , η ) = RjiT (η , ξ ) ∈ Rn×n ,
Si (η ) = SiT (η ) ∈ Rn×n , (1.42)

for i, j = 1, 2, ..., K; ξ , η ∈ [−r, 0]. The functions Qi (ξ ), Si (ξ ), Rij (ξ , η ), and


RiK (ξ , η ), i, j = 1, 2, ..., K − 1 are introduced to account for discontinuities, and
therefore, it is sufficient to constrain these functions to the following special forms
without loss of generality:

Qi (ξ ) = Qi = constant,
Si (ξ ) = Si = constant,
Rij (ξ , η ) = Rij = constant,
RiK (ξ , η ) = RiK (η ) = RKiT (η ) independent of ξ , (1.43)

for i, j = 1, 2, ..., K − 1. It should be pointed out that no such constraint is applied


to RKK .
The derivative of V (ψ , φ ) in (1.41) along the system trajectory can be calcu-
lated as
K K
V̇ (t, ψ , φ ) = − ∑ ∑ ϕ T (−ri )Δ̄ij ϕ (−rj )
i=0 j=0
K K  0
+2 ∑ ∑ ϕ T (−ri ) Π ij (η )φ (η ) dη
i=0 j=1 −rj
1 Lyapunov–Krasovskii Functional Approach for Coupled DDEs 15

K  0  0
−2 ∑ φ T (ξ )d ξ ṘiK (η )φ (η ) dη
i=1 −ri −r
 0 0
∂ KK ∂ KK
− φ T (ξ )[ R (ξ , η ) + R (ξ , η )]φ (η ) dη dξ
−r −r ∂ξ ∂η
 0
− φ T (η )ṠK (η )φ (η ) dη , (1.44)
−r

where 
ψ, i = 0;
ϕ (−ri ) =
φ (−ri ), 1 ≤ i ≤ K;

K−1
Δ̄00 = −[AT P + PA + ∑ (QlC +CT QlT +CT SlC)
l=1
+QK (0)C +CT QKT (0) +CT SK (0)C], (1.45)
K−1
Δ̄0j = −PBj − ∑ Ql Dj + Qj − QK (0)Dj
l=1
K−1
− ∑ CT Sl Dj −CT SK (0)Dj ,
l=1
K−1
Δ̄0K = −PBK − ∑ Ql DK + QK (−r) − QK (0)DK
l=1
K−1
− ∑ CT Sl DK −CT SK (0)DK ,
l=1

 
K−1
Δ̄ij = −DTi ∑ S +S l K
(0) Dj , 1 ≤ i, j ≤ K, i = j,
l=1
 
K−1
Δ̄ii = Si − DTi ∑ Sl + SK (0) Di , 1 ≤ i ≤ K − 1,
l=1
 
K−1
Δ̄KK = S K
(−r) − DTK ∑ S +S l K
(0) DK ; (1.46)
l=1

and
K−1
Π0 j = AT Qj + ∑ CT Rlj +CT RjKT (0),
l=1
K−1
Π0K = AT QK (η ) + ∑ CT RlK (η ) +CT RKK (0, η ) − Q̇K (η )
l=1
K−1
Πij = BTi Qj + DTi ∑ Rlj − Rij + DTi RjKT (0),
l=1
16 H. Li and K. Gu

K−1
ΠKj = BTK Qj + DTK ∑ Rlj + DTK RjKT (0) − RjKT (−r),
l=1
K−1
ΠiK = BTi QK (η ) + DTi ∑ RlK (η ) − RiK (η ) + DTi RKK (0, η ),
l=1
K−1
ΠKK = BTK QK (η ) + DTK ∑ RlK (η ) + DTK RKK (0, η ) − RKK (−r, η ),
l=1
1 ≤ i ≤ K − 1, 1 ≤ j ≤ K − 1.

According to Theorem 1.1, one can conclude


Theorem 1.4. The system described by (1.39) and (1.40) is exponentially stable if
(1.40) is uniformly input-to-state stable, and the Lyapunov-Krasovskii functional
(1.41) satisfies
ε ||ψ ||2 ≤ V (t, ψ , φ ), (1.47)
and its derivative (1.44) satisfies

V̇ (t, ψ , φ ) ≤ −ε ||ψ ||2 . (1.48)

for some ε > 0.


Proof. Obviously, for the expression (1.41), there exists a sufficiently large M such
that
V (t, ψ , φ ) ≤ M||(ψ , φ )||2 .
This together with (1.47), (1.48), and the uniform input-to-state stability of (1.40),
implies the uniform asymptotic stability of the system according to Theorem 1.1.
As this is a linear system, uniform asymptotic stability is equivalent to exponential
stability. 2
The above theorem reduces the stability problem to checking the satisfaction of
(1.47) and (1.48), and the uniform input-to-state stability of (1.40). For the special
case of Di = 0, for all i = K, combining the ideas of [16] and Chap. 7 of [14],
it can be shown that the existence of a quadratic Lyapunov-Krasovskii functional
in the form of (1.41) to satisfy the conditions of Theorem 1.4 is also necessary
for exponential stability. In the general case, such a theoretical result has not been
established. However, as is shown in [13], it is of interest to transform the system
to a more standard form such that y is partitioned (y1 , y2 , . . . , yK ), and each yi is
associated with one delay. More definitive theoretical results are available for such
standard form in [13].

1.4.2 Discretization

To render the conditions in Theorem 1.4 computable, the authors introduce a dis-
cretization similar to [12] by restricting the functions Qi , Rij , and Si to be piecewise
1 Lyapunov–Krasovskii Functional Approach for Coupled DDEs 17

linear. Specially, divide the interval [−r, 0] into N segments compatible with the
delays such that −ri , i = 1, 2, ..., K −1 are among the dividing points. In other words,
let θp , p = 0, 1, ..., N be the dividing points,

0 = θ0 > θ1 > · · · > θN = −r,

then
−ri = θNi , i = 1, 2, ..., K.
Thus, the interval [−ri , 0] is divided into Ni smaller intervals. Let hp be the length of
the pth segment,
hp = θp−1 − θp .
For the sake of convenience, define

N0 = 0, h0 = 0, hN+1 = 0, θN+1 = θN = −r.

Then, one has


0 = N0 < N1 < · · · < NK = N
and
Ni
ri = ∑ hp , i = 1, 2, ..., K.
p=1

The functions QK (ξ ), SK (ξ ), RiK (ξ ), and RKK (ξ , η ) are chosen to be piecewise


linear as follows:

QK (θp + α hp ) = (1 − α )QK
p + α Qp−1 ,
K
(1.49)
SK (θp + α hp ) = (1 − α )SpK + α Sp−1
K
, (1.50)
R iK
(θp + α hp ) = (1 − α )RiK
p + α Rp−1
iK
(1.51)

for 0 ≤ α ≤ 1; p = 1, 2, ..., N, i = 1, 2, ..., K − 1; and

RKK (θp + α hp , θq + β hq )

(1 − α )RKK
pq + β Rp−1,q−1 + (α − β )Rp−1,q , for α ≥ β ;
KK KK
= (1.52)
(1 − β )RKK
pq + α Rp−1,q−1 + (β − α )Rp,q−1 , for α < β ;
KK KK

for 0 ≤ α ≤ 1, 0 ≤ β ≤ 1; and p, q = 1, 2, ..., N. Then, V (ψ , φ ) is completely


determined by the matrices P, Qi , QK i K ij iK and RKK , i, j = 1, ..., K − 1,
p , S , Sp , R , R pq
p, q = 0, 1, ..., N. The stability problem becomes one of determining the input-to-
state stability of (1.40), and the existence of these matrices such that both (1.47) and
(1.48) are satisfied. The following theorem establishes the conditions for (1.47).

Theorem 1.5. The Lyapunov-Krasovskii functional V in (1.41), with QK , SK , RiK


and RKK piecewise linear as expressed in (1.49)–(1.52), satisfies (1.47) if
18 H. Li and K. Gu

Si > 0, i = 0, 1, ..., K − 1, (1.53)


SpK > 0, p = 0, 1, ..., N, (1.54)

and ⎛ ⎞
P Q̄ Q̃K
⎝ Q̄T R̄ + S̄ R̂K − S̄F ⎠ > 0, (1.55)
K T KK
Q̃ (R̂ − S̄F) R̃ + S̃ + F S̄F
KT T

where
 
Q̄ = Q1 Q2 . . . QK−1 ,
 
Q̃K = QK K K
0 Q1 . . . QN ,

⎛ ⎞
R11 R12 · · · R1,K−1
⎜ R21 R22 · · · R2,K−1 ⎟
⎜ ⎟
R̄ = ⎜ .. .. .. .. ⎟,
⎝ . . . . ⎠
RK−1,1 RK−1,2 ··· R K−1,K−1
⎛ ⎞
R1K
0 R1K
1 · · · R1KN
⎜ R2K R2K 2,K ⎟
· · · RN ⎟
⎜ 0 1
R̂K = ⎜ .. .. .. .. ⎟ ,
⎝ . . . . ⎠
RK−1,K
0 R1K−1,K · · · RN
K−1,K
⎛ KK KK ⎞
R00 R01 · · · RKK
0N
⎜ RKK RKK · · · RKK ⎟
⎜ 10 11 1N ⎟
R̃KK =⎜ . .. . . .. ⎟ ,
⎝ .. . . . ⎠
N0 RN1 · · · RNN
RKK KK KK

⎛ ⎞
f01 f11 . . . fN1
⎜ f02 f12 . . . fN2 ⎟
⎜ ⎟
F=⎜ .. .. . . .. ⎟ ,
⎝ . . . . ⎠
f0K−1 K−1
f1 . . . fNK−1

I, p ≤ Ni − 1, (or equivalently i ≥ Mp+1 ),
fpi =
0, otherwise,
 
1 1 1 2 1 K−1
S̄ = diag hN 1 S hN 2 S ... hNK−1 S ,
 
1 1 1
Ŝ = diag S S
h̃0 0 h̃1 1
... S
h̃N N ,
 
h̃p = max hp , hp+1 , p = 1, 2, ..., N − 1,
h̃0 = h1 , h̃N = hN ,
K−1
Sp = SpK + ∑ i
S, p = 0, 1, ..., N.
i=Mp+1
1 Lyapunov–Krasovskii Functional Approach for Coupled DDEs 19

Proof. The proof is very similar to that of Lemma 7.5 of Chap. 7 of [14]. 2

The next theorem established the conditions for (1.48). To derive the LKF deriva-
tive condition, it is noted that
1
Q̇(η ) = (Qp−1 − Qp ),
hp
1 K
ṠK (η ) = (Sp−1 − SpK ),
hp
1
ṘiK (η ) = (RiK − RiK
p )
hp p−1

∂ ∂
( + )RKK (ξ , η )
∂ξ ∂η
1 KK
hp (Rp−1,q−1 − Rp,q−1 ) + hq (Rp,q−1 − Rp,q ), α ≤ β ;
KK 1 KK KK
= .
hq (Rp−1,q−1 − Rp−1,q ) + hp (Rp−1,q − Rp,q ), α > β;
1 KK KK 1 KK KK

Theorem 1.6. The derivative of the Lyapunov-Krasovskii functional V̇ in (1.44),


with QK , SK , RiK , and RKK piecewise linear as expressed in (1.49)–(1.52), satisfies
(1.48) if
⎛ ⎞
Δ̄ Ys Ya
⎝ R·K ⎠>0
ds + Rds + Sd −W
KK K 0 (1.56)
Symmetric 3(Sd −W )
and  
W RKK
ds > 0, (1.57)
RKK
ds W
where
⎛ ⎞
Δ̄00 Δ̄01 · · · Δ̄0K
⎜ Δ̄10 Δ̄11 · · · Δ̄1K ⎟
⎜ ⎟
Δ̄ = ⎜ . .. . . .. ⎟ ,
⎝ .. . . . ⎠
Δ̄K0 Δ̄K1 · · · Δ̄KK
with Δ̄ij , i = 0, 1, ..., K, j = 0, 1, ..., K, definited by (1.45) and (1.46);
⎛s Ys s

Y01 02 ... Y0N
⎜ Ys Ys ... s
Y1N ⎟
s ⎜ 11 12 ⎟
Y =⎜ . . .. .. ⎟, (1.58)
⎝ .. .. . . ⎠
s Ys
YK1 K2 · · · YKN
s
20 H. Li and K. Gu

K K−1
s
Y0p = ∑ hp (AT Qj + ∑ CT Rlj +CT RjKT
0 )
j=Mp l=1

hp T K K−1
p−1 ) + ∑ C (Rp + Rp−1 )
A (Qp + QK T lK lK
+
2 l=1

0p + R0,p−1 ) + (Qp − Qp−1 ),
+CT (RKK KK K K

K K−1
Yips = ∑ hp (BTi Qj + DTi ∑ Rlj + DTi RjKT
0 −R )
ij
j=Mp l=1

hp T K K−1
p−1 ) + Di ∑ (Rp + Rp−1 )
Bi (Qp + QK T lK lK
+
2 l=1

0p + R0,p−1 ) − (Rp + Rp−1 ) ,
+DTi (RKK KK iK iK

K K−1
s
YKp = ∑ hp (BTK Qj + DTK ∑ Rlj + DTK RjKT
0 − RN )
iKT
j=Mp l=1

hp T K K−1
p−1 ) + Di ∑ (Rp + Rp−1 )
BK (Qp + QK T lK lK
+
2 l=1

0p + R0,p−1 ) − (RNp + RN,p−1 ) ;
+DTi (RKK KK KK KK


a Ya a

Y01 02 ... Y0N
⎜ Ya Ya ... a ⎟
Y1N
a ⎜ 11 12 ⎟
Y =⎜ . . .. .. ⎟ ,
⎝ .. .. . . ⎠
a Ya
YK1 K2 · · · YKN
a

 
hp T K K−1
a
Y0p = A (Qp − Qp−1 ) + ∑ C (Rp − Rp−1 ) +C (R0p − R0,p−1 ) ,
K T lK lK T KK KK
2 l=1

hp T K K−1
Yipa = p−1 ) + Di ∑ (Rp − Rp−1 )
Bi (Qp − QK T lK lK
2 l=1

0p − R0,p−1 ) − (Rp − Rp−1 ) ,
+DTi (RKK KK iK iK


hp T K K−1
p−1 ) + Di ∑ (Rp − Rp−1 )
BK (Qp − QK
a T lK lK
YKp =
2 l=1

0p − R0,p−1 ) − (RNp − RN,p−1 ) ;
+DTi (RKK KK KK KK
1 Lyapunov–Krasovskii Functional Approach for Coupled DDEs 21
 K K 
SdK = diag Sd1 K
Sd2 . . . SdN ,

K
Sdp K
= Sp−1 − SpK , 1 ≤ p ≤ N; (1.59)
⎛ ⎞
R·K ·K
ds11 Rds12 . . . R·K
ds1N
⎜ R·K R·K . . . R·K ⎟
⎜ ds21 ds22 ds2N ⎟
R·K
ds = ⎜ . ⎟,
⎝ ... ..
.
..
. .. ⎠
R·K R·K
dsN1 dsN2 . . . R·K
dsNN
 
K−1
R·K
dspq = ∑ q−1 − Rq ) + hq (Rp−1 − Rp )
hp (RiK iK iKT iKT
;
i=Mp

⎛ ⎞
RKK KK
ds11 Rds12 . . . RKK
ds1N
⎜ RKK RKK . . . RKK ⎟
⎜ ds21 ds22 ds2N ⎟
RKK
ds = ⎜ ⎟,
⎝ ... ..
.
.. .
. .. ⎠
RKK RKK
dsN1 dsN2 . . . RKK
dsNN
 
1
RKK
dspq = (hp + hq )(Rp−1,q−1 − Rpq ) + (hp − hq )(Rp,q−1 − Rp−1,q ) ;
KK KK KKT KKT
2
⎛ KK KK ⎞
Rda11 Rda12 . . . RKK
da1N
⎜ KK KK ⎟
⎜ Rda21 Rda22 . . . RKK
da2N ⎟
RKK = ⎜ ⎟
da ⎜ .. .. . . . ⎟,
⎝ . . . .. ⎠
RKK KK KK
daN1 RdaN2 . . . RdaNN
1
dapq = (hp − hq )(Rp−1,q−1 − Rp−1,q − Rp,q−1 + Rpq ).
RKK KK KK KK KK
(1.60)
2
Proof. Again, since the expression of V̇ is similar to that in [14] or [12] , one may
follow the same steps for the proof of Proposition 7.8 in [14]. See also [11, 24]. 2

From the above, the following can be concluded.

Theorem 1.7. The system expressed by (1.39) and (1.40) is asymptotically stable if
there exist m × m matrix P = PT , m × n matrices Qi , QKp , and n × n matrices S =
i

S , Sp = Sp , R = R , Rp , Rpq = Rqp ; i, j = 0, 1, ..., K − 1; p, q = 0, 1, ..., N;


iT K KT ij jiT iK KK KKT

and Nn × Nn matrix W = W T , such that (1.55), (1.57), and


⎛ ⎞
Δ Ys Ya Z
⎜ R·K 0⎟
ds + Rds + Sd −W
KK K 0
⎜ ⎟>0 (1.61)
⎝ 3(Sd −W ) 0 ⎠
Symmetric Ŝ
22 H. Li and K. Gu

are satisfied, where ⎛ ⎞


Δ00 Δ01 · · · Δ0K
⎜ Δ10 Δ11 · · · Δ1K ⎟
⎜ ⎟
Δ =⎜ . .. . . .. ⎟ , (1.62)
⎝ .. . . . ⎠
ΔK0 ΔK1 · · · ΔKK

K−1
Δ00 = −[AT P + PA + ∑ (QlC +CT QlT ) + QK0C +CT QKT
0 ],
l=1
K−1
Δ0 j = −PBj − ∑ Ql Dj + Qj − QK0 Dj ,
l=1
K−1
Δ0K = −PBK − ∑ Ql DK + QK (−r) − QK0 DK ,
l=1
Δij = 0, 1 ≤ i, j ≤ K, i = j,
Δii = S , 1 ≤ i ≤ K − 1,
i

ΔKK = SNK
, (1.63)

and
⎛ ⎞
CT Ŝ
⎜ DT Ŝ ⎟
⎜ 1 ⎟
Z = ⎜ . ⎟, (1.64)
⎝ .. ⎠
DTK Ŝ
K−1
Ŝ = ∑ Si + S0K . (1.65)
i=1

Proof. First, it is observed that (1.61) implies (1.53), (1.54), and (1.56). Further-
more, (1.61) also implies
⎛ ⎞
S1 0 ··· 0 0 DT1 Ŝ
⎜ 0 S 2 ··· 0 0 DT2 Ŝ ⎟
⎜ ⎟
⎜ .. .. . . .. .. .. ⎟
⎜ . . . . . . ⎟
⎜ ⎟ > 0,
⎜ 0 0 · · · SK−1 0
⎜ DK−1 Ŝ ⎟
T

⎝ 0 0 ··· 0 S0K DTK Ŝ ⎠
ŜD1 ŜD2 · · · ŜDK−1 ŜDK Ŝ

which is equivalent to
  K−1
diag S1 · · · SK−1 S0K − DT ( ∑ Si + S0K )D > 0, (1.66)
i=1
1 Lyapunov–Krasovskii Functional Approach for Coupled DDEs 23
 
where D = D1 D2 · · · DK . Therefore, the subsystem (1.40) is uniformly input-
to-state stable according to Theorem 1.3. Thus, all of the conditions in Theorem 1.4
has been established, and therefore, the system (1.39) and (1.40) is exponentially
stable. 2
Stability analysis based on the above theorem is known as the Discretized
Lyapunov Functional Method. The idea can also be extended to uncertain systems.
Indeed, let
ω = (A, B1 , B2 , . . . , BK ,C, D1 , D2 , . . . , DK ).
If the system matrices ω is not known exactly, but is known to be within a bounded
closed set Ω , then the system is obviously asymptotically stable provided that
Theorem 1.7 is satisfied for all ω ∈ Ω . Indeed, this is also true for ω = ω (t) provided
that Di , i = 1, 2, . . . K are independent of time. If Ω is polytopic, then let its vertices
be ωk , k = 1, 2, ..., nv , then Ω is the convex hull of {ωk , k = 1, 2, ..., nv }, and the con-
ditions in Theorem 1.7 only need to be satisfied in the nv vertices ωk , k = 1, 2, ..., nv .
The idea is very similar to the corresponding results for retarded time-delay systems
discussed in [14]. Notice, however, that the case of time-varying Di is excluded as
the corresponding stability results for the difference equations have not yet been
established.

1.5 Discussion and Examples

Many practical systems have a high dimension, but only a small number of the
elements have time delays. In this case, the coupled DDE formulation offers a sig-
nificant advantage over the traditional model even for time-delay systems of the
retarded type. Indeed, as suggested in the introduction, if one rewrites the system by
“pulling out the delays,” one arrives as a system with forward system described by

ẋ(t) = Ax(t) + Bu(t), (1.67)


y(t) = Cx(t) + Du(t), (1.68)

and the feedback system consisting of pure delays


K
u(t) = ∑ Fi y(t − ri ), i = 1, 2, ..., K., (1.69)
i=1

where x(t) ∈ Rm , y(t) ∈ Rn , and u (t) ∈ Rn . Obviously, a substitution of (1.67) and


(1.68) by (1.69) results in a standard form of coupled DDEs (1.39) and (1.40). Typ-
ically, for a large system with a small number of delay elements, m
n. The special
case of Di = 0 (i = 1, 2, ..., k) can also be written as
K
ẋ(t) = Ax(t) + ∑ BFiCx(t − ri ), (1.70)
i=1
24 H. Li and K. Gu

which is a time-delay system of the retarded type. Even in this case, the model
(1.39) and (1.40) offers a significant advantage over (1.70) in stability analy-
sis using the Discretized Lyapunov Functional Method. Indeed, the dimensions
of LMIs (1.55), (1.61), and (1.57) applicable to (1.39) and (1.40) are m + (N +
K)n, m + (2N + K + 1)n, and 2Nn, respectively, which are much smaller than
(K + N + 1)m, (K + 2N + 1)m, and 2Nm, the dimensions of LMIs (20),(27), and
(28) in [12] applicable to (1.70).
In the remaining part of this chapter, three numerical examples are presented to
illustrate the effectiveness of the method. All numerical calculations were carried
out by using the LMI Toolbox [10] of MATLAB 7.0 on a laptop PC with a 1.86
GHz processor and with 512 MB RAM. The time is measured in seconds. It is also
defined that
Ndi = Ni − Ni−1 .
Example 1.1. Consider the system

ẍ(t) − 0.1ẋ(t) + x(t) + x(t − r/2) − x(t − r) = 0.

The above example is studied in [12] and it is shown that the system is unstable for
r = 0. For small r, as a finite difference, the last two terms can approximate rẋ(t)/2,
and as r increases, it may improve stability. In [12], the system is written in the
state-space form
       
ẋ1 (t) 0 1 x1 (t) 0 0 x1 (t − r/2)
= +
ẋ2 (t) −1 0.1 x2 (t) −1 0 x2 (t − r/2)
  
00 x1 (t − r)
+ . (1.71)
10 x2 (t − r)

It is known that for r = 0.2025, the system has a pair of imaginary poles at ±1.0077i.
For r = 1.3723, the system has a pair of imaginary poles ±1.3386i. The system is
stable for r ∈ (0.2025, 1.3723). In [12], the discretized Lyapunov-Krasovskii func-
tional method was used to estimate the delay interval such that the system remains
stable, and it was found that [rmin , rmax ] = [0.204, 1.350] for Nd1 = Nd2 = 1, and
[rmin , rmax ] = [0.203, 1.372] for Nd1 = Nd2 = 2. Here, the system is written as an
equivalent couple different-difference equation with two delays
        
ẋ1 (t) 0 1 x1 (t) 0 0
= − y(t − r/2) + y(t − r), (1.72)
ẋ2 (t) −1 0.1 x2 (t) 1 1
 
  x1 (t)
y(t) = 1 0 . (1.73)
x2 (t)

It can be verified that the dimensions of LMIs (1.55), (1.61), and (1.57) applica-
ble to (1.72) and (1.73) are N + 4, 2N + 6, and 2N, respectively, which are much
smaller than 2N + 6, 4N + 6, and 4N, the dimensions of LMIs (20), (27), and (28)
in [12] applicable to (1.71). A bisection process was used. An initial interval con-
taining rmax or rmin of length 2 is subdivided 15 times until the interval size is less
than 6.135 × 10−5 . The estimated results for Nd1 = Nd2 = 1, 2, 3 are listed in the
1 Lyapunov–Krasovskii Functional Approach for Coupled DDEs 25

following table. The numerical result is virtually identical, which was achieved with
substantially reduced computational cost.

(Nd1 , Nd2 ) (1, 1) (2, 2) (3, 3)


rmin 0.2032 0.2026 0.2025
Time 29.6875 291.1875 1, 236.9
rmax 1.3246 1.3719 1.3723
Time 29.2813 226.1094 630.0469

Example 1.2. Consider the following system

ẋ(t) = Ax(t) + B1 (t)y(t − 2r) + Bu(t),


y(t) = Cx(t),

where
⎛ ⎞
− 61
2 1 0 0
⎜ −200 0 1 0 ⎟
A=⎜ ⎟
⎝ −305 0 0 1 ⎠ ,
−100 0 0 0
⎛ ⎞
0
⎜ 0 ⎟
B1 = ⎜ ⎟
⎝ −200 ⎠ ,
−250
⎛ ⎞
0 0
⎜ 20 20 ⎟
B=⎜ ⎟
⎝ 305 105 ⎠ ,
350 100
 
C= 1000 .

This system represents the model for a tele-operator robotics system studied in [33]
where the transfer function was used.

It is supposed the following feedback control is applied


   
ρ00 0 0.5 √
u(t) = x(t) − y(t − 2r),
000σ 0.5

where ρ , σ are time-invarying parameters satisfying

|ρ | ≤ ρ0 , |σ | ≤ σ0 .

Again, the formulation in this chapter is applied with Nd1 = Nd2 = 2, and the results
are listed in the following table. From the table, it is observed that the uncertainties
significantly affect the maximum delay allowed for the system to remain stable.
26 H. Li and K. Gu

(ρ0 , σ0 ) (0, 0) (0, 0.015) (0, 0.018)


rN 0.2209 0.0309 0.0060
Time 597.8594 1,159.9 1,423.8
(ρ0 , σ0 ) (0.1, 0) (0.50, 0) (1.00, 0)
rN 0.1957 0.1149 0.0283
Time 1,160.4 1,238 1,229.8
(ρ0 , σ0 ) (0.1, 0.01) (0.4, 0.01) (0.4, 0.011)
rN 0.0635 0.0134 0.0036
Time 3,087 2,927.2 3,053.2

The above two examples are of retarded type. The following example is of neutral
type.

Example 1.3. Consider the neutral time-delay system

d
[x(t) − 0.8x(t − s) − 0.4x(t − r) + 0.32x(t − s − r)]
dt
= −0.848x(t) + 0.72x(t − s) + 0.128x(t − s − r).

This example is similar to the example discussed on page 289 of [22]. It is impor-
tant to observe that the three delays depend on two independent parameters, and the
dependence is guaranteed by the structure, and is not subject to any small deviations
even though the parameters r and s may contain small errors. It is therefore impor-
tant to transform the system to a form such that only r and s appear as delays in
view of the sensitivity of difference equations to small delays. Let

z(t) = x(t) − 0.8x(t − s) − 0.4x(t − r) + 0.32x(t − s − r),


 
x(t)
y(t) = .
x(t − r)

Then the system can be written in a standard form with two independent delays
   
ż(t) = −0.848z(t) + 0.0416 0.3994 y(t − s) + −0.3392 0 y(t − r),
     
1 0.8 −0.32 0.4 0
y(t) = z(t) + y(t − s) + y(t − r).
0 0 0 1 0

Let
D(φ )=φ (0)−D1 φ (−s) − D2 φ (−r)

be the difference operator, where


   
0.8 −0.32 0.4 0
D1 = , D2 = .
0 0 1 0
1 Lyapunov–Krasovskii Functional Approach for Coupled DDEs 27

After using Theorem 1.3, one may conclude that the difference equation D(φ ) is
stable. Indeed, the matrices
 
2.3051 −0.9730
S1 = ,
−0.9730 0.7110
 
0.9578 −0.2109
S2 = ,
−0.2109 0.4055

satisfy
   
S1 0 DT1  
− (S1 + S2 ) D1 D2 > 0.
0 S2 DT2
Next, the allowable maximum r is estimated for the system to retain stability if
it is let that s = cr. The discretized Lyapunov-Krasovskii functional method with
Nd1 = Nd2 = 2 is used, and the results are listed for different c.

√ √ √
c 2 7 3 13
rN 2.2874 2.8759 0.7318 0.5977
Time 2,029.8 1,768.9 1,606.8 1,740

1.6 Concluding Remarks

A Lyapunov-Krasovskii functional approach of systems described by coupled DDEs


is presented in this chapter. For linear systems, a discretization is applied to render
the condition in the form of linear matrix inequality. Even in the case of retarded
delay, the method presented offers significant advantages over the existing method
as the dimension of the LMI system is significantly lower.
It is interesting to discuss a number of possible alternatives. In the Lyapunov-
Krasovskii functional approach, as mentioned earlier, an alternative formulation
with one delay in each yi was proposed in [13] with more complete theoretical
results. The corresponding discretized Lyapunov functional method is presented in
[34]. It is also conceivable that other methods of reducing the quadratic Lyapunov-
Krasovskii functional to a finite dimensional LMI, such as the sum-of-square for-
mulation pursued in [40], or alternative discretization pursued in [37, 38], can also
be formulated.
In the frequency-domain approach, typical results are in the form of “stability
charts” describing the parameter regions in which the system is stable [51]. Some
examples of these charts can be found in Chaps. 4 and 5. As these results are based
on the characteristic quasipolynomial of the system, the results obtained for time-
delay systems of neutral type apply equally well to coupled DDEs. At present, a
number of methods are well established for systems with single delay or com-
mensurate delays [39, 53]. For two incommensurate delays in the characteristic
28 H. Li and K. Gu

quasipolynomial, a very systematic method is available in [18]. Unfortunately, this


does not solve the problem of systems with two incommensurate delays τ1 and τ2
for general linear systems, as the resulting characteristic quasipolynomial is in the
form of
K
∑ p(s)e−rk s , (1.74)
k=1

where p(s) are polynomials, and rk , k = 1, 2, . . . , K are integer combinations of τ1


and τ2 . It is interesting to mention that [4] discussed the stability of systems with
the characteristic quasipolynomial

p(s) + q(s)e−rs , (1.75)

where p(s) and q(s) are quasipolynomials. This gives the possibility of solving the
stability problem of (1.74) by considering one rk at a time. However, the computa-
tional implementation is not easy as it requires a good knowledge of, for example,
the zero distribution of the coefficient quasipolynomials along the imaginary axis. A
more recent interesting paper on the stability problem of (1.74) is [8], even though
the procedure is far from systematic and not easy to implement for a general case.

Acknowledgments This work was completed while Dr. Hongfei Li was visiting Southern Illi-
nois University Edwardsville. Dr. Li’s work was supported by the Natural Science Foundation of
Shaanxi Province, China (Grant number 2006A13) and the Foundation for Scientific Research,
Department of Education of Shaanxi Province, China (Grant number 06JK149), and the State
Scholarship Fund of China.

References

1. Boyd, S., El Ghaoui, L., Feron E., & Balakrishnan, V. (1994). Linear matrix inequalities in
system and control theory. Philadelphia: SIAM.
2. Brayton, R. (1976). Nonlinear oscillations in a distributed network. Quartely of Applied Math-
ematics, 24, 289–301.
3. Carvalho, L. A. V. (1996). On quadratic Liapunov functionals for linear difference equations.
Linear Algebra and its Applications, 240, 41–64.
4. Cooke, K. L., & van den Driessche, P., 1986. On zeros of some transcendental equations.
Funkcialaj Ekvacioj, 29, 77–90.
5. Cruz, M. A., & Hale, J. K. (1970). Stability of functional differential equations of neutral type.
Journal of Differential Equations, 7, 334–355.
6. Doyle, J. C. (1982). Analysis of feedback systems with structured uncertainties. IEE Proceed-
ings Part D, 129(6), 242–250.
7. Doyle, J. C., Wall, J., & Stein, G. (1982). Performance and robustness analysis for structured
uncertainty. 20th IEEE Conference on Decision and Control, 629–636.
8. Fazelinia, H., Sipahi, R., & Olgac, H. (2007). Stability robustness analysis of multiple time-
delayed systems using “building block” concept. IEEE Transactions on Automatic Control,
52(5), 799–810.
9. Fridman, E. (2002). Stability of linear descriptor systems with delay: a Lyapunov-based
approach. Journal of Mathematical Analysis and Applications, 273, 14–44.
1 Lyapunov–Krasovskii Functional Approach for Coupled DDEs 29

10. Gahinet, P., Nemirovski, A., Laub, A., & Chilali, M. (1995). LMI control toolbox for use with
MATLAB. Natick, MA: Mathworks.
11. Gu, K. (2001). A further refinement of discretized Lyapunov functional method for the stabil-
ity of time-delay systems. International Journal of Control, 74(10), 967–976.
12. Gu, K. (2003). Refine discretized Lyapunov functional method for systems with multiple
delays. International Journal of Robust Nonlinear Control, 13, 1017–1033.
13. Gu, K. (2008). Large systems with multiple low-dimensional delay channels. Semi-plenary
lecture, Proceedings of 2008 Chinese Control and Decision Conference, Yantai, China, July
2–4.
14. Gu, K., Kharitonov, V. L., & Chen, J. (2003). Stability of Time-Delay Systems. Boston:
Birkhäuser.
15. Gu, K., & Liu, Y. (2007). Lyapunov-Krasovskii functional for coupled differential-functional
equations. 46th Conference on Decision and Control, New Orleans, LA, December 12–14.
16. Gu, K., & Liu, Y. (2008). Lyapunov-Krasovskii functional for uniform stability of coupled
differential-functional equations. Accepted for publication in Automatica.
17. Gu, K., & Niculescu, S.-I. (2006). Stability analysis of time-delay systems: A Lyapunov
approach. In: Lorı́a, A., Lamnabhi-Lagarrigue, F., & Panteley, E. (Eds.), Advanced Topics
in Control Systems Theory, Lecture Notes from FAP 2005 (pp. 139–170). London: Springer.
18. Gu, K., Niculescu, S.-I., & Chen, J. (2005). On stability of crossing curves for general systems
with two delays. Journal of Mathematical Analysis and Applications, 311, 231–253.
19. Hale, J. K. (1975). Parametric stability in difference equations. Bolletting Univerdella Mathe-
matica Italiana, 4, Suppl. 11.
20. Hale, J. K., & Cruz, M. A. (1970). Existence, uniqueness and continuous dependence for
hereditary. systems. Annali di Matematica Pura ed Applicata, 85(1), 63–81.
21. Hale, J., & Huang, W. (1993). Variation of constants for hybrid systems of functional differ-
ential equations. Proceedings of Royal Society of Edinburgh, 125A, 1–12.
22. Hale, J. K., & Verduyn Lunel, S. M. (1993). Introduction to functional differential equations.
New York: Springer.
23. Hale, J. K., & Verduyn Lunel, S. M. (2002). Strong stabilization of neutral functional differ-
ential equaitons. IMA Journal of Mathematical Control and Information, 19, 5–23.
24. Han Q., & Yu, X. (2002). A discretized Lyapunov functional approach to stability of linear
delay-differential systems of neutral type. Proceedings of the 15th IFAC World Congress on
Automatic Control, Barcelona, Spain, July 21–26, 2002.
25. Jiang, Z.-P. & Wang, Y. (2001). Input-to-state stability for discrete-time nonlinear systems.
Automatica, 37, 857–869.
26. Kabaov, I. P. (1946). On steam pressure control [Russian]. Inzhenerno Sbornik, 2, 27–46.
27. Karafyllis, I., Pepe, P., & Jiang, Z.-P. (2007). Stability results for systems described by cou-
pled retarded functional differential equations and functional difference equations. Seventh
Workshop on Time-Delay Systems, Nantes, France, September 17–19.
28. Karafyllis, I., Pepe, P., & Jiang, Z.-P. (2007). Stability Results for Systems Described by
Retarded Functional Differential Equations. 2007 European Control Conference, Kos, Greece,
July 2–5.
29. Karaev, R. I. (1978). Transient processes in long distance transmission lines [Russian].
Moscow: Energia Publishing House.
30. Kharitonov, V. L., & Zhabko, A. P. (2003). Lyapunov-Krasovskii approach to the robust sta-
bility analysis of time-delay systems. Automatica, 39, 15–20.
31. Kolmanovskii V., & Myshkis, A. (1999). Introduction to the theory and applications of func-
tional differential equations. Dordrecht, the Netherlands: Kluwer.
32. Krasovskii, N. N. (1959). Stability of motion [Russian], Moscow [English translation, 1963].
Stanford, CA: Stanford University Press.
33. Lee, S., & Lee, H. S. (1993). Modeling, design, and evaluation of advanced teleoperator con-
trol systems with short time delay. IEEE Transactions on Robotics and Automation, 9(5),
607–623.
30 H. Li and K. Gu

34. Li, H., & Gu, K. (2008). Discretized Lyapunov-Krasovskii functional for systems with mul-
tiple delay channels. In the Invited Session “Time-Delay Systems: From Theory to Appli-
cations,” 2008 ASME Dynamic Systems and Control Conference, Ann Arbor, MI, October
20–22.
35. Martinez-Amores, P. (1979). Periodic solutions of coupled systems of differential and differ-
ence equations. Annali di Matematica Pura ed Applicata, 121(1), 171–186.
36. Niculescu, S.-I. (2001). Delay effects on stability—A robust control approach, Lecture Notes
in Control and Information Science, 269. London: Springer.
37. Ochoa, G., & Kharitonov, V. L. (2005). Lyapunov matrices for neutral type of time delay sys-
tems. Second International Conference on Electrical & Electronics Engineering and Eleventh
Conference on Electrical Engineering, Mexico City, September 7–9.
38. Ochoa, G., & Mondie, S. (2007). Approximations of Lyapunov-Krasovskii functionals of
complete type with given cross terms in the derivative for the stability of time-delay systems.
46th Conference on Decision and Control, New Orleans, LA, December 12–14.
39. Olgac, N., & Sipahi, R. (2002) An exact method for the stability analysis of time-delayed LTI
systems. IEEE Transactions on Automatic Control, 47(5): 793–797.
40. Peet, M., Papachristodoulou A., & Lall, S. (2006). On positive forms and the stability of linear
time-delay systems. 45th Conference on Decision and Control, San Diego, CA, December
13–15.
41. Pepe, P., & Verriest, E. I. (2003). On the stability of coupled delay differential and continuous
time difference equations. IEEE Transactions on Automatic Control, 48(8), 1422–1427.
42. Pepe, P. (2005). On the asymptotic stability of coupled delay differential and continuous time
difference equations. Automatica, 41(1), 107–112.
43. Pepe, P., Jiang, Z.-P., & Fridman, E. (2007). A new Lyapunov-Krasovskii methodology for
coupled delay differential and difference equations. International Journal of Control, 81(1),
107–115.
44. Rasvan, V. (1973). Absolute stability of a class of control proceses described by functional dif-
ferential equations of neutral type. In: Janssens, P., Mawhin, J., & Rouche N. (Eds), Equations
Differentielles et Fonctionelles Nonlineaires. Paris: Hermann.
45. Rǎsvan, V. (1998). Dynamical systems with lossless propagation and neutral functional differ-
ential equations. Proceedings of MTNS 98, Padoue, Italy, 1998, 527–531.
46. Rǎsvan, V. (2006). Functional differential equations of lossless propagation and almost linear
behavior, Plenary Lecture. Sixth IFAC Workshop on Time-Delay Systems, L’Aquila, Italy.
47. Rǎsvan, V., & Niculescu, S.-I. (2002). Oscillations in lossless propagation models: a
Liapunov-Krasovskii approach. IMA Journal of Mathematical Control and Information, 19,
157–172.
48. Silkowskii, R. A. (1976). Star-shaped regions of stability in hereditary systems. Ph.D. thesis,
Brown University, Providence, RI, June.
49. Sontag, E. D. (1989). Smooth stabilization implies coprime factorization. IEEE Transactions
on Automatic Control, 34, 435–443.
50. Sontag, E. D. (1990). Further facts about input to state stabilization. IEEE Transactions on
Automatic Control, 35, 473–476.
51. Stépán, G. (1989). Retarded dynamical systems: stability and characteristic function. New
York: Wiley.
52. Teel, A. R. (1998). Connections between Razumikhin-type theorems and the ISS nonlinear
small gain theorem. IEEE Transactions on Automatic Control, 43(7), 960–964.
53. Walton, K., & Marshall, J. E. (1987). Direct Method for TDS Stability Analysis. IEE Proceed-
ings, Part D, 134(2), 101–107.
Chapter 2
Networked Control and
Observation for Master–Slave Systems

Wenjuan Jiang, Alexandre Kruszewski, Jean-Pierre Richard,


and Armand Toguyeni

Abstract This chapter concerns the design of a remote control loop that combines
a Slave system (with computing and energy limitations) and a Master computer,
communicating via an Internet connection. In such a situation, the communica-
tion cost is reduced but the quality of service (QoS) of the Internet connection
is not guaranteed. In particular, when the Slave dynamics are expected to be fast
enough, the network induces perturbations (delays, jitters, packet dropouts and sam-
pling) that may damage the performance. Here, the proposed solution relies on a
delay-dependent, state-feedback control computed by the Master on the basis of an
observer. This last estimates the present Slave’s state from its past sampled outputs,
despite the various delays. Then, the computing task is concentrated in the Mas-
ter. The theoretical results are based on the Lyapunov-Krasovskii functional and the
approach of LMI, which guarantee the stabilization performance with respect to the
expected maximum delay of the connection. Two strategies are applied: one is a con-
stant controller/observer gain strategy, which takes into account a fixed upperbound
for the communication delay. The second strategy aims at improving the perfor-
mance by adapting the gains to the available network QoS (here, with two possible
upperbounds).

Keywords: Remote control · Switching signal · Exponential stability · Linear time-


delay system · LMIs · Internet · UDP · Robot

2.1 Introduction

A networked control system (NCS) is a type of closed-loop control system with


real-time communication networks imported into the control channel and feed-
back channel. The control and feedback signals in a NCS exchanged among the
system’s components are in the form of information packets through a network.
The network-induced delay is brought into the control systems, which may create
unstable behaviors.

B. Balachandran et al. (eds.), Delay Differential Equations: Recent Advances 31


and New Directions, DOI 10.1007/978-0-387-85595-0 2,
c Springer Science+Business Media LLC 2009
32 W. Jiang et al.

Network induced delays vary depending on the network hardware, on the differ-
ent protocols, on the traffic load, etc. In some cases, such as in a token ring local
area network, the time-delay is bounded; for other networks such as Ethernet and
Internet, the time-delay is unbounded and varying.
As Internet and Ethernet are well developed, remote control systems have been
widely used in industrial, communicational, and medical systems, to cite a few.
However, alongside the advantage of low costs, the Internet inevitably brings prob-
lems to a closed-loop controlled system, such as delay variation, data-packets loss
[1] and disorder, which can cause poor performance, instability, or danger (see for
instance Chap. 5 of [2, 3] and the references therein).
How to diminish the effect of time delay in the remote system is critical in the
system design. The main solution can split into two (combinable) strategies [2, 4]:
(1) Increase the network performances quality of service or (2) design an adapted
control that can compensate the network influence. In this chapter, we consider this
last approach for the network controlled system via Internet or Ethernet. The exper-
iments we propose, in the last part, use the Internet, but the control strategy holds
for both network standards.
A variety of stability and control techniques have been developed for general
time delay systems [5–8]. Applications of these techniques to NCS were also
derived [1, 9–14]. But some of these results are based on simplifying assumptions
(for instance, the delays are constant) or lead to technical solutions that decrease
the performance (for instance, a “buffer strategy” allows the communication delays
to become constant by waiting enough after the data are received). In fact, to con-
sider the time delay as constant [10, 15–17] is actually unrealistic because of the
dynamic character of the network. A delay maximizing strategy [8, 11] (“virtual
delay,” “buffer,” or “waiting” strategy) can be carried out so as to make the delay
constant and known. This requires the knowledge of the maximum delay values hm .
However, it is obvious that maximizing the delay up to its largest value decreases the
speed performance of the remote system. Several results are limited to a time-delay,
whose value is less than the sensor and controller sampling periods [18]. In the
Internet case, this constraint leads to an increase of the sampling periods up to the
maximal network delay, which may be constraining for high dynamic applications.
Note that, in the Internet case, the network delays cannot be modeled or pre-
dicted. Moreover, the (variable) transmission delays are asymmetric, which means
that the delay h1 (t) from Master to Slave (shortly, M-to-S), and the return one
(S-to-M) h2 (t) normally satisfy h1 (t) = h2 (t). Because of this lack of knowledge,
predictor-based control laws [12] cannot be applied.
Our aim is to ensure suitable stabilization and speed performances, i.e. exponen-
tial stabilization, despite the dynamic variations of the network.
Our solution relies on the theoretical results of [19] (exponential stabilization
of systems with unknown, varying delays), as well as [13] (networked control),
the main lines of which will be given in the next section. It allows one to apply a
waiting strategy only to the M-to-S communication, whereas the S-to-M commu-
nication takes the sensor measurements into account as soon as received. In order
to enhance the performance of the system, a gain scheduling strategy is adapted
2 Networked Control and Observation for Master-Slave Systems 33

according to the variable time-delay of the network involved. In our application, we


use Internet for a long-distance remote control. The Master–Slave system is based
on the UDP protocol and involves lists as buffers. The choice of UDP is preferred
to TCP because in our NCS situation re-emitting packets is not needed and setting
up the TCP connection between two PCs is time-consuming.

2.2 Exponential Stability of a Remote System Controlled


Through Internet

We consider the remote system based on the Master–Slave structure. For energy sav-
ing reasons, the work of the Slave PC is simplified, while the control and observation
complexity is concentrated on the Master. For the main features of the system, refer
to Fig. 2.1.

2.2.1 The Three Delay Sources

In such a situation, the variable delays come from (1) the communication through
the Internet, (2) the data-sampling; and (3) the possible packet losses. In the sequel,
h1 (t) and h2 (t) denote the communication delays and τ1 (t) and τ2 (t) include the
sampling delays and possible packet losses. The total delay δi (t) between Master
and Slave results from the addition of hi (t) and τi (t) for i ∈ {1, 2}.
1. Both computers’ dates are automatically synchronized in the system. The strat-
egy of NTP (Network Time Protocol) [20] is used in the program to calculate the
time clock difference between the two sides. By this way, whenever the Master
or the Slave receives the data, including the time stamp, it knows the instant tk of
data sending out and the resulting transmission delay hi (tk ), i = 1, 2.
2. The real remote system, including Master, Slave, and network, must involve
some data sampling. However, following [21, 22], this phenomenon is equiva-
lent to a time-varying, discontinuous delay τi (t) (defined in Section (2.1)), which
allows one to keep a continuous-time model. If the packets exchange between

Fig. 2.1 Structure of general M/S based remote system


34 W. Jiang et al.

the Master and the Slave is of high speed, then τi (t) constitutes a disturbance
that should be considered in the stabilization design [1]. τi (t) is variable but it is
assumed there is a known T (maximum sampling period) so that τi (t) ≤ T .
3. If some packet ptk , containing the sample at tk , is lost, or arrives later than the
packet ptk+1 , then the Master only considers the most recent data (i.e., those from
ptk+1 ). If it is assumed that the maximum number of successive packets that can
be lost is N, then the maximum resulting delay is NT . The same conditions hold
for the control packets.

From assumptions (2) and (3) above, the delay δi (t) has a known maximum
δim (t) = (N + 1)T + hm and the delay variation satisfies δ̇i (t) ≤ 1. To keep simple
expressions, the notation T will be kept preferable to T = (N + 1)T .
Summarizing, given a signal g(t) and the global delay δ (t), which represents
the combination of the sampling and packet delay h(tk ) that the transmission line
imposes on the packet containing the kth sample at time tk , then g(t) can be written
as:
g(tk − h(tk )) = g(t − h(tk ) − (t − tk )),
= g(t − δ (t)),
(2.1)
tk ≤ t < tk+1 , δ (t)  h(tk ) + τk (t),
τk (t) = t − tk .

2.2.2 Transmission and Receipt of the Control Data


The kth data sent by the Master to Slave includes the control u(t1,k ) together with the
time stamp when the packet is sent. At time t1,kr , when the Slave receives the data it

can calculate the time delay because of the time stamp. If the delay equals h1m , the
Slave should apply immediately the control command.
The control u, sent out by the Master at time t1,k , is received by the Slave at time
r > t . It will be injected into the Slave input only at the predefined “target time”
t1,k 1,k
target
t1,k = t1,k + h1m . The corresponding waiting time t1,k + h1m − t1,k r is depicted in

Fig. 2.2. This is realistic because the transmission delay is bounded by a known
value h1m . By this way, the Master knows the time t1,k + h1m when this control
u(t1,k ) will be injected into the Slave input.

2.2.3 Problem Formulation and Preliminaries

Consider the Slave as a linear system. It is described in the following form, in which
(A, B, C) is controllable and observable.

ẋ(t) = Ax(t) + Bu(t − δ1 (t)),
(2.2)
y(t) = Cx(t),

where δ1 (t) = δ1 + η1 (t), η1 (t) ≤ μ1 .


2 Networked Control and Observation for Master-Slave Systems 35

Fig. 2.2 Control data processing

In order to guarantee the closed-loop stability, whatever the delay variation, an


exponential stability, with the rate α , must be achieved. In other words, there must
be a real κ ≥ 1 so that the solution x(t;t0 , φ ), starting at any time t0 , from any initial
function φ satisfies: x(t;t0 , φ ) ≤ κ φ e−α (t−t0 ) . In this chapter, it is achieved using
a state observer and a state feedback.
To ensure this exponential global stabilization, one can use the results of [13],
which considers a Lyapunov-Krasovskii functional with descriptor representation:
 
V (t) = x̄αT (t)EPx̄α (t) + −0 δ1 t+
t
ẋT (s)Rẋα (s) dsdθ +
t  μθ1 α t (2.3)
t−δ xα (s)Sxα (s) ds + −μ1 t+θ −δ1 ẋα (s)Ra ẋα (s) dsdθ ,
T T

where x̄α (t) = col{xα (t), ẋα (t)}, xα (t) = x(t)eα t , and E = diag{I, 0(2×2) }.
Because of the separation principle, one can divide the analysis of the global sta-
bilization into two smaller problems: the observer design and the controller design.
The results are recalled below using an observer/controller.

2.2.4 Observer Design

For a given k and for any t ∈ [t1,k + h1m , t1,k+1 + h1m [, there exists a k such that the
proposed observer is of the form:

˙ = Ax̂(t) + Bu(t1,k ) − L(y(t2,k ) − ŷ(t2,k )),
x̂(t)
(2.4)
ŷ(t) = Cx̂(t).

The index k corresponds to the most recent output information that the Master
has received. Note that the Master knows the time t1,k and the control u(t1,k ) (see
Sect. 2.3.3), which makes this observer realizable.
Using the delay (2.1) rewrite (2.4) as:

˙ = Ax̂(t) + Bu(t − δ1 (t)) − L(y(t − δ2 (t)) − ŷ(t − δ2 (t))),
x̂(t)
(2.5)
ŷ(t) = Cx̂(t).
36 W. Jiang et al.

with δ1 (t)  t − t1,k and δ2 (t)  t − t2,k . In other words, the observer is realizable
because the times t1,k and t2,k defining the observer delays are known, thanks to the
time stamps. The system features lead to δ1 (t) ≤ h1m + T and δ2 (t) ≤ h2m + T .
We define the error vector between the estimated state x̂(t) and the present system
state x(t) as e(t) = x(t) − x̂(t). From (2.2) and (2.5), this error is given by:

ė(t) = Ae(t) − LCe(t − δ2 (t)). (2.6)

Theorem 2.1. [13] Suppose that, for some positive scalars α and ε , there exists
n × n matrices 0 < P1 , P, S, Y1 , Y2 , Z1 , Z2 , Z3 , R, Ra and a matrix W with appropriate
dimensions such that the following LMI conditions are satisfied for j = 1, 2:
⎡    ⎤
β2jWC −Y1 WC
⎢ Ψ μ β

2
εβ2jWC −Y2 2 2j
ε WC ⎥
⎥ < 0,
⎣ ∗ −S 0 ⎦
∗ ∗ −μ2 Ra
 
RY
≥ 0,
∗Z
where β2j are defined by:

β11 = eα (δ1 −μ1 ) , β12 = eα (δ1 +μ1 ) ,


(2.7)
β21 = eα (δ2 −μ2 ) , β22 = eα (δ2 +μ2 ) ,

and the matrices Y , Z, and Ψ2 are given by:


 
Z1 Z2
Y = [Y1 Y2 ], Z= ∗ Z3 , (2.8)

Ψ211 = PT (A0 + α I) + (A0 + α I)T P + S


+δ2 Z1 +Y1 +Y1T ,
Ψ2 = P1 − P + ε PT (A0 + α I)T + δ2 Z2 +Y2 ,
12

Ψ222 = −ε (P + PT ) + δ2 Z3 + 2μ2 Ra + δ2 R.
Then, the gain:
L = (PT )−1W, (2.9)
makes the error (2.6) of observer (2.5) exponentially convergent to the solution
e(t)=0, with a decay rate α .
In the following, the solution of the LMI problem corresponding to this theorem
is written:
L = LMI obs (μ2 , δ2 , α ) (2.10)

2.2.5 Control Design

We first consider a controller u = Kx, i = 1, 2, i.e., the ideal situation e(t) = 0,


x(t) = x̂(t), and:
2 Networked Control and Observation for Master-Slave Systems 37

ẋ(t) = Ax(t) + BKx(t − δ1 (t)). (2.11)


Theorem 2.2. [13] Suppose that, for some positive numbers α and ε , there exists
a positive definite matrix P̄1 , matrices of size n × n: P̄, Ū, Z̄1 , Z̄2 , Z̄3 , Ȳ1 , Ȳ2 similar
to (2.8) and an n × m matrix W , such that the following LMI conditions hold:
⎡    ⎤
β1i BW − Ȳ1T β1i BW
⎢ Ψ3 εβ1i BW − Ȳ T μ1 εβ1i BW ⎥
Γ3i = ⎢⎣ ∗
2 ⎥ < 0, ∀i = 1, 2,

−S̄ 0
∗ ∗ −μ1 R̄a
⎡ ⎤
R̄ Ȳ1 Ȳ2
⎣ ∗ Z̄1 Z̄2 ⎦ ≥ 0,
∗ ∗ Z̄3
where β1i , for i = 1, 2, are defined by (2.7) and

Ψ̄311 = (A0 + α I)P̄ + P̄T (A0 + α I)T + S̄


+δ1 Z̄1 + Ȳ1 + Ȳ1T ,
Ψ̄3 = P̄1 − P̄ + ε P̄T (A0 + α I)T + δ1 Z̄2 + Ȳ2 ,
12

Ψ̄322 = −ε (P̄ + P̄T ) + δ1 Z̄3 + 2μ1 R̄n a + δ1 R̄.

Then, the gain:


K = W P̄−1 , (2.12)
exponentially stabilizes the system (2.11) with the decay rate α for all delay δ1 (t).
In the following, the solution of the LMI problem corresponding this theorem is
written:
K = LMI con (μ1 , δ1 , α ) (2.13)

2.2.6 Global Stability of the Remote System

The gains K and L have to be computed in such a way they exponentially stabi-
lize the global Master–Slave–Observer system despite the variable delays δ1 (t) and
δ2 (t). This global system is:

ẋ(t) = Ax(t) + BKx(t − δ1 (t)) + BKe(t − δ1 (t)),
(2.14)
ė(t) = Ae(t) + LCe(t − δ2 (t)).

A separation principle is then applied to the previous system. Then if L and K


are chosen with respect to Sections 2.2.4 and 2.2.5, respectively, the global system
is exponentially stable with the lowest decay rate.
38 W. Jiang et al.

2.3 Architecture of the Global Control System

2.3.1 Features of the Remote System

The remote system is based on a Master–Slave structure. To simplify the work of


the Slave PC, the control and observation complexity is concentrated on the Master.
For the main features of the system refer to Fig. 2.3. In the system, the robot Miabot
of the company Merlin Systems Corp. Ltd., together with a PC, serves as the Slave.
The Miabot works as an accessory device that communicates with the PC by the
Bluetooth port and so we cannot use a buffer strategy directly on the robot. Because
there is no time information included in the command of Miabot, we do not know
when the control is applied. That means we cannot use a time stamp on Miabot. To
simplify our problem and apply the theory of [13], we treat the time-delay of the
Bluetooth between the PC and Miabot as a constant one. We add the delays d1 , d2
into the respectively variable delays h1 (t), h2 (t).
The transmission protocol UDP is applied to communicate the data between
Master and Slave. To know the instant that the data was sent, time stamps are added
to the data packets. The data structure of a list that serves as a buffer is introduced
for the program to search for the data at the right instance. In all the lists, the control
data are stored in a decreasing order of its transmission time. That is to say, the most
recent sent data is always at the top of the list.

2.3.2 Synchronization of the Time Clocks

To synchronize the different time in the two PCs, we can add GPS to the system [13],
but this increases the cost and is inflexible. Another way is to use a certain protocol
such as NTP [20]. Because of different time clock times of the PCs, we have to make
synchronize them from time to time. Our solution is to directly adapt the strategy of
NTP in the program to calculate the time differences.
As shown in Fig. 2.4, k is the sequential number of the packets sent from the
Master and k is the number sent back. h1 (k) and h2 (k ) refer to the respective delays

Fig. 2.3 Structure of the global system


2 Networked Control and Observation for Master-Slave Systems 39

Master Slave
e
tm;k
e )
u(k;tm;k
h1 (k)
r
ts;k

e
ts;k
h2 (k )
r
tm;k e ;tr ;te ; k)
y(k ;tm,k s;k s;k

Fig. 2.4 Packets communication between the M/S

of the communication on Internet. To simplify the problem, we assume that h1 (k) =


h2 (k ). If we define the time clock difference between the M/S as follows (ts ,tm are
the respectively time of the Slave and the Master):

θ = tm − ts ; (2.15)

Then, we can calculate the time difference between the two PCs using the fol-
lowing equations ( is the process time for Slave):

θ (k, k ) = (tm,k
r
− ts,k + tm,k − ts,k )/2;
e e r
(2.16)

h1 (k) = h2 (k ) = (ts,k
r
− tm,k
e r
+ tm,k − ts,k )/2;
e
(2.17)
That is to say, every time the Master receives a packet, the time clock difference
between the M/S and the time delay of the Internet can both be measured. The
values of θ and hm are contained in the control packets, so whenever the Slave
receives a control, it can calculate the “target” time for applying the control.

2.3.3 Transmission and Receipt of The Control Data

The kth data sent by the Master to Slave includes the control u(tm,k e ) together with
r
the time stamp when the packet is sent. At time ts,k , when the Slave receives the
data it can calculate the time delay because of the time stamp. If the delay equals
h1m − d1 , the Slave should immediately apply the command.
e , is received by the Slave at time
The control u, sent out by the Master at time tm,k
ts,k > tm,k − θ . It will be inserted into the Slave input only at the predefined “target
r e
target target
time”: ts,k = tm,k − θ = tm,k e +h
1m − θ . The corresponding waiting time h1m is
depicted in Fig. 2.5. This is realistic because the transmission delay is bounded by a
known value h1m . In this way, the Master knows the time when this control u(tm,k e )

will be inserted into the Slave input.


40 W. Jiang et al.

Fig. 2.5 Control data processing

Fig. 2.6 Structure of the Master

2.3.4 The Structure of the Master

In order to implement the model for the remote control system, four-thread programs
were designed to fulfill the functions of Controller and Observer in Fig. 2.3.
These four threads concurrently work as shown in Fig. 2.6. There are two buffers,
list U and list X, which respectively keep the data sent out from the Master and the
data of the estimated state of the Slave. The most recent calculated data is inserted
at the beginning of the lists so that it is easier to find the right data we need and
delete the obsolete ones.
Here we consider the following notation. p is the index of the different commands
given by the ConsThread. k is the index of the control u given by the SenderThread.
q is the index of estimation state given by the ObserverThread and the k is the index
of the measure given by the ReceiverThread.

(a) ConsThread is a periodic thread that gets the tasks (it is the position where the
user wants the motor to arrive) from a file given by the user. In this way, the user
can freely change the task. The time period T3 , for this thread to continuously
operate is set before the system begins to work. Its value should be greater than
the response time of the Slave.
2 Networked Control and Observation for Master-Slave Systems 41

(b) SenderThread is also a periodic thread that first gets the task (c(t3,p )) designed
by the user. Considering the mechanical character of the Miabot, we choose
0.1 s as the time period. Then, it calculates the control data to send to the Slave.
The most recent x̂(t0,q ) can be found at the beginning of the list X; then, the
command data, together with the system time, is sent to the slave through a
socket. While, at the same time, it is inserted into the list U for the use by the
ObserverThread.
(c) ReceiverThread is an event-driven thread. As data arrives from the slave, it first
checks whether there are any packets lost. The maximum number of packets
that can be lost without implying instability of the system is an open issue in
our research. Then, according to the time stamp, the time clock difference and
the time-delay are calculated; meanwhile, the most recent data is sent to the
thread of the ObserverThread.
(d) ObserverThread is the most important part of the program. It mainly serves as
the Observer in the system model. The main task is to estimate the present posi-
tion and speed of the motor. To estimate the continuous state of the Miabot, the
time period of this thread should be small enough. In our program, we choose
0.01 s. As we can see from the result of the experiment (Sect. 2.3.6), this value is
suitable. To accomplish the function of the Observer, it must find out the com-
mand u which has been applied to the slave system and the estimated motor
position at the time when the information is sent from the slave.
As it is illustrated in Fig. 2.7, in order to determine ŷ(t2,k ), it is necessary to find,
in the list X, the closest state estimation x̂ with regard to the date ts,k − d1 . And we
can get the control data u in the list U with the time stamp of time h1m before. So,
according to the equation (2.4), the estimated state can be obtained. As we can see
from Fig. 2.7, in order to find the state x̂(t0,q ) at the time nearest to ts,k − d1 , the
time period of this thread should be small enough.

2.3.5 The Structure of the Slave

The Slave does not need power computation abilities, because it just needs to com-
municate with the Master and the Miabot. As we can see from Fig. 2.8, this program
is divided into two threads: ReceiveThread and SendThread. As we need to apply
the control data with the time delay of h1m after the time stamp, a list Y is used to
contain the control data temporarily, in which all the nodes are sorted in the order

Fig. 2.7 Packet Sequences


42 W. Jiang et al.

Fig. 2.8 Structure of the Slave

according to the time stamps. That means the most recent control is inserted at the
end of the list.
(a) ReceiveThread is an event-driven thread that is activated by the control data
arriving from the Master. The control data is inserted into the proper position
in the list list Y according to its time stamp. If the time stamp falls before the
oldest data in the list, the packets are out of order throughout the Internet and the
data is discarded. If there are several packets lost, the stability of the system is
not affected, since the Master’s SendThread has a sufficiently high transmission
frequency.
(b) SendThread is used to apply the control to the Miabot as well as to get its real position and send the data back to the Master. First, the thread takes the packet at the beginning of the list Y. Thanks to the Master–Slave clock-difference value carried in the packet, we can calculate the “target” time to apply the control. The control is then sent to the Miabot by the Bluetooth port at the right time, at which point the Miabot sends back the measure y(ts,k ) of its position. This value is sent to the Master with the time stamp ts,k − d1 , where ts,k is the reception time at the Slave PC.
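The handling of the list Y described in (a) and (b) can be summarized by the following minimal sketch, which assumes a simple list of (time stamp, control) pairs; the class and method names are ours and do not appear in the platform code.

import bisect, time

class ControlBuffer:
    # Time-stamp-ordered buffer playing the role of the list Y.
    def __init__(self):
        self.stamps, self.controls = [], []

    def insert(self, stamp, u):
        # Insert a control at its proper position; a packet older than the
        # oldest buffered one has arrived out of order and is discarded.
        if self.stamps and stamp < self.stamps[0]:
            return False
        i = bisect.bisect(self.stamps, stamp)
        self.stamps.insert(i, stamp)
        self.controls.insert(i, u)
        return True

    def pop_due(self, clock_offset, h1m):
        # Release the oldest control once its target application time
        # (stamp corrected by the Master-Slave clock difference, plus h1m)
        # has been reached; otherwise return None.
        if self.stamps and time.time() >= self.stamps[0] + clock_offset + h1m:
            self.stamps.pop(0)
            return self.controls.pop(0)
        return None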

2.3.6 Experimental Study

After identification of the Miabot, we get the following model:


   
        ẋ(t) = [ 0  1 ; 0  −10 ] x(t) + [ 0 ; 0.014 ] u(t − δ1 (t)),   (2.18)
y(t) = [ 1 0 ] x(t).

Fig. 2.9 The RTT between the two PCs by Internet (40 km away); mean = 52.37 ms, max. = 577.4 ms, min. = 4.1 ms

We have continuously tested the RTT (round-trip time) between the two PCs by the ICMP (Internet Control Message Protocol), as shown in Fig. 2.9. From these tests, considering also the Bluetooth transmission delays and the sampling delays, we take the values δ1 = δ2 = 0.4 s and μ1 = μ2 = 0.1 s. If at any time a time-delay is greater than 0.5 s, we treat the packet as being lost. The gains K and L have to be computed in such a way that they exponentially stabilize the global Master–Slave–Observer system despite the variable delays δ1 (t) and δ2 (t). According to [13], we get α = 0.96 when the gain L is chosen as:

        L = [ −1.4 ; −0.13 ].   (2.19)

The gain K is as follows:

        K = [ −702   −70 ].   (2.20)
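As a quick plausibility check of these gains, the following sketch simulates the identified model (2.18) in closed loop with the state feedback (2.20) under the worst-case constant delay δ1 = 0.4 s, using an explicit Euler scheme with a history buffer; the observer, sampling, and packet effects are deliberately ignored, so this is only an illustration of delayed-feedback stability, not a reproduction of the experiment.

import numpy as np

A = np.array([[0.0, 1.0], [0.0, -10.0]])   # model (2.18)
B = np.array([[0.0], [0.014]])
K = np.array([[-702.0, -70.0]])            # gain (2.20)

dt, delay = 1e-3, 0.4                      # step size and delay (s)
nd = round(delay / dt)
x = np.array([1.0, 0.0])                   # 1 m initial position error
hist = [x.copy()] * (nd + 1)               # x(t - delay), ..., x(t)

for _ in range(10_000):                    # simulate 10 s
    u = (K @ hist[0]).item()               # u(t) = K x(t - delay)
    x = x + dt * (A @ x + B.ravel() * u)
    hist.append(x.copy()); hist.pop(0)

print("state after 10 s:", x)              # decays toward the origin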

The experiment is done on two computers about 40 km apart. The Master program runs on the remote computer, which has an advanced computing capability, and the slave program on the local one. We get the result shown in Fig. 2.10, in which the dash-dot line represents the set values, the solid lines represent the robot's estimated state (the position and the speed), and the dotted line corresponds to the real position. Fig. 2.11 represents the sampled control data sent to the Slave.
Note that all the data in the figure are obtained from the Master, so the data of
the real position of the Miabot (dotted line) lags behind the estimated one. This
illustrates the fact that, despite the time delays of the Internet and the Bluetooth, the
Master predicts an estimate of the Slave’s state.

Fig. 2.10 Results of remote experiment

Fig. 2.11 The corresponding Slave control of centralization experiment



2.4 Performance Enhancement by a Gain Scheduling Strategy

2.4.1 Effects of Time-Delay on the Performance and the System Stability

The time-delay of the Internet varies greatly between the rush hour and the idle time period (Fig. 2.9). To guarantee the exponential stabilization, we have to design for the maximum time-delay, whereas most of the time the time-delay is much smaller. In other words, the performance is decreased. Comparing the two Matlab simulation results, Figs. 2.12 and 2.13, we can see that when hm is smaller, the value of α is larger and the task is accomplished more quickly. In the table in Fig. 2.14, we list several corresponding values of hm and α . It is clear that increasing hm means decreasing the performance of the system.
To enhance the performance and make the system adaptable to the varying time-delay of the Internet, we have designed two controllers corresponding to two bounds of the time-delay. The controller switches as a function of the time-delay. The switching signal is given by σ (t) = γ (t − ξ ), where ξ is the time-delay of the signal due to the Internet and calculation. In order to guarantee the uniform exponential stability, our solution is to find a common Lyapunov function for both closed loops [23]. Of course, for greater delay values, the performance cannot be guaranteed anymore and an alternative solution has to be considered. In our system, we give a command for the robot to stop until the communication comes back to an adequate level.

Fig. 2.12 hm = 0.05 s, α = 8.74

Fig. 2.13 hm = 0.5 s, α = 0.96

Fig. 2.14 The relation between hm (s) and α

2.4.2 Uniform Stability with Gain Scheduling

In order to reach a larger value of the exponential convergence rate, we propose to switch both the controller and the observer gains. The switching signals σ1 (t) and σ2 (t) are chosen as functions of the time delays δ1 (t) and δ2 (t). For the sake of simplicity, they can only take two values:

        σi (t) = j,   if δi (t) ∈ [hMin^ij , hMax^ij [,   i, j = 1, 2.   (2.21)
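In implementation terms, the rule (2.21) is simply a zone look-up on the most recently measured delay; the following minimal sketch illustrates it, using as an example the two zones separated at 0.08 s and the gains reported in the experiments of Sect. 2.4.3 (the function name and data layout are ours).

def switching_signal(delta, zones):
    # Return the zone index j such that delta lies in [h_min, h_max[, per (2.21).
    for j, (h_min, h_max) in enumerate(zones, start=1):
        if h_min <= delta < h_max:
            return j
    return None        # delay beyond the last bound: packet treated as lost

zones = [(0.0, 0.08), (0.08, 0.5)]                  # two delay zones (s)
K = {1: [-1659.0, -260.0], 2: [-1015.0, -100.0]}    # gains of Sect. 2.4.3
j = switching_signal(0.05, zones)                   # -> zone 1 for 50 ms
active_gain = K[j]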

Considering every time-delay zone, we have to compute the gains K1 , K2 and L1 , L2 in such a way that they exponentially stabilize the global Master–Slave–Observer system despite the variable delays δ1 (t) and δ2 (t). This global system is:

        ẋ(t) = Ax(t) + BKσ1 (t) x(t − δ1 (t)) + BKσ1 (t) e(t − δ1 (t)),
        ė(t) = Ae(t) + Lσ2 (t)Ce(t − δ2 (t)),                              (2.22)
        σ1 (t) = γ1 (t − δ1 (t) − δ2 (t)),
        σ2 (t) = γ2 (t − δ2 (t)).

Fig. 2.15 Uniform stability of the system

γi (t) are the detection functions of δi (t). These functions are delayed because the Master needs to receive the last packet to calculate the delays. Therefore, each gain is activated with a certain latency (δ1 (t) + δ2 (t) for the controller, δ2 (t) for the observer). Each gain is computed using

        Kj = LMIcon ((hMax^1j − hMin^1j )/2, (hMax^1j + hMin^1j )/2, αcon ),
        Lj = LMIobs ((hMax^2j − hMin^2j )/2, (hMax^2j + hMin^2j )/2, αobs ).
A sufficient condition to prove the uniform stability of the switching closed loop is to find a common Lyapunov–Krasovskii functional for all gains. This functional has to take into account all admissible delays, i.e., ∀δi (t) ∈ [hMin^i1 , hMax^i2 ]. For each gain, Fig. 2.15 shows the regions where the exponential stability occurs.
The following theorems give sufficient conditions to prove the uniform stability
of the switching closed loop.
Theorem 2.3. Suppose that, for given switching observer gains Lσ2 (t) and for some positive scalars α and ε , there exist n × n matrices 0 < P1 , P, S, Y1 , Y2 , Z1 , Z2 , Z3 , R, Ra such that the LMI conditions 2.1 and the following ones hold for i = 1, 2, j = 1, 2:

        ⎡ Ψ2    β2j PT LiC − Y1      μ2 β2j PT LiC  ⎤
        ⎢       εβ2j PT LiC − Y2     μ2 εβ2j PT LiC ⎥  < 0,
        ⎢ ∗     −S                   0               ⎥
        ⎣ ∗     ∗                    −μ2 Ra          ⎦
where the matrices Ψ2 and β2j are the same as in Theorem 2.1. Then the observer error converges exponentially to the solution e(t) = 0, with a decay rate α .
Proof. Consider the Lyapunov–Krasovskii functional (2.3). Following the same proof as in [13], one gets the following sufficient conditions for j = 1, 2:

        ⎡ Ψ2    β2j PT Lσ2 (t)C − Y1      μ2 β2j PT Lσ2 (t)C  ⎤
        ⎢       εβ2j PT Lσ2 (t)C − Y2     μ2 εβ2j PT Lσ2 (t)C ⎥  < 0.
        ⎢ ∗     −S                        0                    ⎥
        ⎣ ∗     ∗                         −μ2 Ra               ⎦

Then, by convexity, one obtains the conditions of the theorem. □




Theorem 2.4. Suppose that, for a given switching state feedback Kσ1 (t) and for some positive numbers α and ε , there exist a positive definite matrix P̄1 and matrices of size n × n: P̄, Ū, Z̄1 , Z̄2 , Z̄3 , Ȳ1 , Ȳ2 , similar to (2.8), such that the LMI conditions 2.1 and the following ones hold for i = 1, 2, j = 1, 2:

        ⎡ Ψ3    β1i BKj P̄ − Ȳ1T      μ1 β1i BKj P̄  ⎤
        ⎢       εβ1i BKj P̄ − Ȳ2T     μ1 εβ1i BKj P̄ ⎥  < 0,
        ⎢ ∗     −S̄                   0               ⎥
        ⎣ ∗     ∗                     −μ1 R̄a         ⎦

where the matrices Ψ3 and β1i are the same as in Theorem 2.1. Then the closed loop is exponentially stable with the decay rate α for all delays δ1 (t).

Proof. Same as in the observer case. □

2.4.3 Gain Scheduling Experiments

Considering the initial approach, i.e., without switching gains, the maximum exponential convergence is obtained at αcontrol = αobserver = 0.96.
Consider two zones of delay with δ1^1 = δ2^1 = 0.04 s, μ1^1 = μ2^1 = 0.04 s, and δ1^2 = δ2^2 = 0.29 s, μ1^2 = μ2^2 = 0.21 s. This means that the gains switch when the delay crosses the value of 0.08 s. According to Theorems 2.3 and 2.4, the maximum exponential convergence ensuring the global stability occurs when αcontrol1 = 2.1, αobserver1 = 2.2, and αcontrol2 = αobserver2 = 1.
Note that because the global stability is checked after the computation of the gains, these values are not optimal. To get optimal values, one needs to find the control/observer gains ensuring the exponential convergence and the global stability at the same time via an LMI problem.
The gains Ki and Li (i = 1, 2) are:
 
        L1 = [ −3.01 ; −0.77 ],   K1 = [ −1,659   −260 ],
        L2 = [ −1.4 ; −0.16 ],    K2 = [ −1,015   −100 ].

2.4.4 Result of Remote Experiment

The experiment is done in the same situation as mentioned in the preceding sections.
The result is shown in Fig. 2.16, in which the dash-dot line represents the set values; the solid lines represent, respectively, the robot's estimated position and speed; the dotted line corresponds to the real position of the Miabot.

Fig. 2.16 Results of remote experiment

Fig. 2.17 shows the corresponding variable time-delays, which comprise the time-delay of sampling and communication of the Bluetooth (we consider it as a constant time-delay; here we take the value of 40 ms). In Fig. 2.18, the solid line represents the sampled control sent to the Slave, and the dash-dot line and the dotted one represent the command for zone one and zone two, respectively. Fig. 2.19 shows the time points at which the system switches between the values for the two zones.
Because the maximal speed of the Miabot is 3.5 m/s [24], the corresponding command value is 2,000 (in open loop). But to preserve the linear character of the Miabot, we limit the command value to 1,000 and the speed to 1.7 m/s. The controller gains are those of the last section. Despite their high values, one can notice that neither the control signal (Fig. 2.18) nor the speed (Fig. 2.16, solid line) exceeds its limit value, which validates the linearity assumption.
In Fig. 2.16, one can notice three kinds of step response. The first one corresponds to the case where the control switches often during the response; in that case, only the global stability is guaranteed. During the second step, only the second zone is active, i.e., only the gains K2 and L2 are active (α = 1); in this case, some performance is guaranteed. In the last kind of response, only the first zone is active because the delays are small. In that case, the performance is better (α = 2.1): the response time is smaller and the damping is greater.
As clearly shown in Fig. 2.16, the global stability of the closed loop is maintained despite the fact that some assumptions are not satisfied. The Bluetooth delay was assumed constant, whereas in reality it varies (the minimum recorded delay was less than 4 ms). And with respect to synchronization, symmetric delays were needed, which was clearly not the case in the experiment.

Fig. 2.17 The corresponding variable time-delays

Fig. 2.18 The corresponding Slave control



Fig. 2.19 The switching plan

2.5 Conclusion

In addition to some fundamental results, an experimental platform has been developed to illustrate the results of the network-based control theory. This platform is able to control a slave through a network and combines skills in automatic control, computer science, and networks.
The experimental results confirm the theory: (1) the exponential stability is obtained in both time-delay zones and uniform stability is guaranteed; (2) the experimental performance is shown to be better when considering two zones of time delay instead of one.
Considering the variation of the time-delays, switching signals with more than two zones can be selected in order to enhance the performance of the global system. The LMI conditions in that case have an increased size and follow straightforwardly from Theorems 2.3 and 2.4, but they are not investigated here.
A way to improve the presented results is to propose a “one-shot algorithm” that allows finding the optimal gains in terms of exponential convergence. Another trend is to investigate a solution without the input buffer. Without the buffer, the input delay will be smaller, ensuring better performance, and the slave will need less memory to run [25].
A last perspective is to consider the improvement of the network communication
by, for example, developing dedicated protocols that minimize the time delays and
enhance the clock synchronization.

References

1. Yu M., Wang L., and Chu T., “An LMI approach to network control systems with data packet
dropout and transmission delays,” MTNS ’04 Proceedings of Mathematical Theory Networks
and Systems, Leuven, Belgium, 2004.
2. Richard J.P. and Divoux T., Systèmes commandés en réseau. Hermes-Lavoisier, IC2, Systèmes
Automatisés, 2007.
3. Georges J.-P., Divoux T., and Rondeau E., “Confronting the performances of a switched ether-
net network with industrial constraints by using the network calculus,” International Journal
of Communication Systems (IJCS), vol. 18, no. 9, pp. 877–903, 2005.
4. Chiasson J. and Loiseau J.J., Applications of time delay systems. Springer, New York, vol. 352,
2007.
5. Hale J. and Lunel S., Introduction to functional differential equations, Springer, New York,
1993.
6. Kolmanovskii V. and Myshkis A., Applied theory of functional differential equations.
Dordrecht: Kluwer, 1999.
7. Niculescu S.-I., Delay effects on stability: A robust control approach. Springer, New York,
vol. 269, 2007.
8. Richard J.P., “Time delay systems: an overview of some recent advances and open problems,”
Automatica, vol. 39, pp. 1667–1694, 2003.
9. Almutairi N.B., Chow M.Y., and Tipsuwan Y., “Network-based controlled dc motor with
fuzzy compensation,” Proceedings of the Annual Conference on Industrial Electronics Society,
Denver, pp. 1844–1849, 2001.
10. Huang J.Q. and Lewis F.L., “Neural-network predictive control for nonlinear dynamic systems
with time delays,” IEEE Transactions on Neural Networks, vol. 14, no. 2, pp. 377–389, 2003.
11. Leleve A., Fraisse P., and Dauchez P., “Telerobotics over IP networks: Towards a low-level
real-time architecture,” IROS’01 International Conference on Intelligent Robots and Systems,
Maui, Hawaii, October 2001.
12. Witrant E., Canudas-De-Wit C., and Georges D., “Remote stabilization via communication
networks with a distributed control law,” IEEE Transactions on Automatic Control, 2007.
13. Seuret A., Michaut F., Richard J.P., and Divoux T., “Networked control using gps synchroniza-
tion,” Proceedings of ACC06, American Control Conference, Minneapolis, USA, June 2006.
14. Sandoval-Rodriguez R., Abdallah C., Jerez H., Lopez-Hurtado I., Martinez-Palafox O., and Lee D., “Networked control systems: Algorithms and experiments,” in Applications of Time Delay Systems, vol. 352, Springer, New York, pp. 37–56, 2007.
15. Azorin J.M., Reinoso O., Sabater J.M., Neco R.P., and Aracil R., “Dynamic analysis for a
teleoperation system with time delay,” Conference on Control Applications, June 2003.
16. Fattouh A. and Sename O., “H∞ -based impedance control of teleoperation systems with time
delay,” Fourth Workshop on Time Delay Systems, September 2003.
17. Niemeyer G. and Slotine J.-J., “Towards force-reflecting teleoperation over the internet,” IEEE
International Conference on Robotics & Automation, 1998.
18. Chen Z., Liu L., and Yin X., “Networked control system with network time-delay compensation,” Industry Applications Conference, Fortieth IAS Annual Meeting, vol. 4, pp. 2435–2440, 2005.
19. Seuret A., Dambrine M., and Richard J.P., “Robust exponential stabilization for systems with
time-varying delays,” Proceedings of TDS04, Fifth IFAC Workshop on Time Delay Systems,
Leuven, Belgium, September 2004.
20. Mills D.L., “Improved algorithms for synchronizing computer network clocks,” IEEE/ACM
Transactions On Networking, vol. 3, no. 3, pp. 245–254, June 1995.
21. Seuret A., Fridman E., and Richard J.P., “Sampled-data exponential stabilization of neutral
systems with input and state delays,” Proceedings of IEEE MED 2005, Thirteenth Mediterranean Conference on Control and Automation, Cyprus, 2005.
22. Fridman E., Seuret A., and Richard J.P., “Robust sampled-data stabilization of linear systems:
An input delay approach,” Automatica, vol. 40, no. 8, pp. 1441–1446, 2004.

23. Liberzon D., Switching in Systems and Control, T. Basar, Ed., Birkhäuser, Boston, 2003.
24. Merlin Systems Corporation Ltd., Miabot PRO BT v2 User Manual, rev. 1.3 ed., 2004.
25. Seuret A. and Richard J.P., “Control of a remote system over network including delays and
packet dropout,” IFAC World Congress, Seoul, Korea, July, 2008.
Chapter 3
Developments in Control of Time-Delay Systems
for Automotive Powertrain Applications

Mrdjan Jankovic and Ilya Kolmanovsky

Abstract In this chapter, the authors provide an overview of several application problems in the area of automotive powertrain control, which can greatly benefit from applications of analysis and control design methods developed for time-delay systems. The first two applications considered concern, respectively, the idle speed control (ISC) and air-to-fuel ratio control in gasoline engines. The third application concerns an estimation problem in modern diesel engines with exhaust gas recirculation and variable geometry turbocharging. The nature of the delays and the role played by them in these applications is highlighted, and the imposed performance limitations are discussed. Links are provided with theoretical literature on the analysis and control of time-delay systems, and modeling details are discussed to a level that will permit other researchers to use the associated models for simulation case studies.

Keywords: Air-to-fuel ratio control · Automotive powertrain control · Linear matrix inequality · Lyapunov function · Observer design · Idle speed control

3.1 Introduction

In this chapter, the authors focus on application problems in the area of automotive
powertrain control, where the issues related to the treatment of time-delays con-
stitute important design considerations. New engines and transmissions are intro-
duced every year into production, and a broad range of advanced technologies are
being continuously developed for future applications. Powertrain control is play-
ing an increasingly important role in the automotive development cycle, and is a
key enabler for future improvements in fuel economy and emissions. Since delays
are common in powertrain systems, effective techniques for controlling time-delay
systems can have a substantial impact on these applications.


Here, three powertrain control applications with delays are discussed. The first
two applications concern, respectively, the idle speed control (ISC) and air-to-fuel
ratio control in gasoline engines. The third application concerns an estimation prob-
lem in modern diesel engines with exhaust gas recirculation, and variable geometry
turbocharging. In all of the applications, the authors’ objective will be to highlight
the character and the role the delay is playing and discuss performance limitations
that it imposes. We will also provide links with theoretical literature on analysis and
control of time-delay systems and discuss modeling details to a level that will permit
others to use the associated models for the simulation case studies. The linear matrix
inequality (LMI) approach used in the first two chapters will also be used here.
In the ISC case, the delay arises because of the finite time between the intake and power strokes of the engine and is generally assumed to be half of the engine cycle. The fueling rate is determined according to the predicted airflow at the time of the intake stroke [39], and this prediction can be made so accurate that the torque can be assumed to be proportional to the airflow delayed only by the intake-to-power delay but unaffected by the delays in the fueling system. In the air-to-fuel ratio control problem, the delays in the fueling system must be considered explicitly, as this loop compensates for the air-to-fuel ratio disturbances. Finally, in the diesel engine, the delay is due to the finite time between the intake and exhaust strokes of the engine and can be assumed to be about three-fourths of the engine cycle.

3.2 Idle Speed Control

ISC is one of the key control functionalities of modern gasoline and diesel engines
[19]. It keeps the engine running when the driver's foot is off the gas pedal.
A schematic block diagram of the ISC closed-loop system for a gasoline engine is presented in Fig. 3.1.

Fig. 3.1 Block diagram for ISC

The position of the electronic throttle and the spark timing



are control inputs and the engine speed is a controlled output. The engine speed
is assumed to be measured (in reality, it is fairly accurately estimated from the
crankshaft position sensor signal). The objective of ISC is to maintain engine speed
at a given set-point (around 625 rpm in drive gear and 650 rpm in neutral gear dur-
ing fully warm operation) despite measured and unmeasured torque disturbances
due to power steering, air-conditioning turn on and off, transmission engagement,
alternator load changes, and the like. Both throttle and spark have an effect on the
engine torque and can compensate the effects of disturbances. Tight performance
requirements are imposed to contain the engine speed dip (maximum engine speed
decrease when disturbance hits) within 80 rpm and engine speed flare (maximum
engine speed increase when disturbance hits) within 120 rpm.
Increasing engine throttle opening causes air flow through the throttle to increase.
The increase in the throttle flow causes the intake manifold pressure to increase,
which in turn increases the airflow into the engine cylinders. Larger engine torque is
produced as the cylinder air flow increase is proportionally matched by the increase
in the engine fueling rate. Adjusting the spark timing shifts the start of combustion
relative to the revolution of the crankshaft. The spark timing (in degree of crankshaft
revolution) at which the maximum torque is produced is referred to as maximum
brake torque (MBT) spark timing. Typically, the spark timing can only be retarded
relative to the MBT spark timing, which causes engine torque to decrease. The
maximum feasible spark retard is limited by combustion stability constraints. To
ensure that spark has a bidirectional authority over the engine torque, the set-point
for spark-timing in steady-state is retarded from MBT. This steady-state spark retard
is referred to as spark reserve.
The need to maintain spark reserve results in fuel consumption increase as com-
pared with the case when spark timing is maintained at MBT. As we will see shortly,
the delay in this system is a key factor that prevents making throttle to engine speed
control loop faster and eliminating the spark reserve. Another opportunity to reduce
fuel consumption during idling is to lower idle speed set-point, provided this can
be sustained by vehicle accessories. Lowering idle speed set-point makes the prob-
lem harder in two ways. First, the delay, which, as we will see shortly, is inversely
proportional to engine speed, becomes larger. Second, the engine speed excursions,
especially dips, must be much tighter controlled as engine speed decrease below
the so-called “fishhook” point results in a rapid increase in engine friction (due to
changing properties of the lubricant) and can easily produce an engine stall. Engine
stall has a negative customer impact and the number of engine stalls is one of the
key quality indicators (TGW–things gone wrong).
Historically, ISC is related to one of the oldest closed-loop systems discussed in the controls literature, the so-called Watt's governor (1787), which may be viewed as a speed controller for a steam engine. In older gasoline engines with a mechanical throttle (i.e., a throttle mechanically connected to the driver's gas pedal), a dedicated actuator called the air-by-pass valve was used for ISC [20], but in modern engines the introduction of an electronic throttle, decoupled from the driver's gas pedal, has made it possible to eliminate the air-by-pass valve. This did not make the control problem easier, as an electronic throttle designed for regular driving operates only in a narrow

range during idle, where small imperfections may have a large impact. Robustness to changes due to aging and manufacturing tolerances, operation across a wide range of environmental conditions (ambient temperature, pressure, humidity, etc.), and constant pressure to lower the cost of the hardware components (and achieve the same quality of control with less capable components) are among the other sources of continuing challenges for ISC.
For these reasons, ISC still remains a relevant case study for application of
advanced control methods despite numerous published controls approaches (see,
e.g., [20] for a survey). In particular, it is clear from the above discussion that
more effective techniques for controlling time-delay systems can have a substan-
tial impact in improving fuel consumption during idling.
In what follows, we will first discuss a simple model suitable for initial studies
of ISC control designs and the effects of the delay. We will then comment on some
of the practical and advanced approaches proposed for dealing with this control
problem in the literature.

3.2.1 A Model for ISC

A simplified model, based on the work of [28], is now discussed to illustrate the
delay-related aspects of ISC. The model is a continuous-time, mean-value model in
which cyclic fluctuations of engine operating variables are replaced by their cycle
averages. In addition, several approximations are made that are valid locally around
idle operating point. With these approximations, the engine speed dynamics have
the form

        Ṅ = (Me − ML ) / (J/(30/π )),   (3.1)
where N is the engine speed in rpm, J is the engine inertia in kg m2 , (30/π ) is a
conversion factor from rad/s to rpm, Me is the torque produced by the engine (Nm),
and ML is the load torque in Nm. The engine torque is represented as a product of
engine torque at MBT, Me0 , and torque sensitivity to spark (also referred to as the
torque ratio), uδ , which is a function of spark retard:

Me (t) = Me0 (t − td )uδ (t). (3.2)

Here td is the delay between the intake stroke of the engine and torque production.
Hereafter, we assume that we can manipulate uδ as if it was a control input and we
neglect the delay between spark timing and torque production, noting that omitting
the spark to torque delay, while reasonable, is not uniformly agreed, see e.g., [15].
The engine torque at MBT spark timing, Me0 , is a function of cylinder flow, Wac
(kg/s), and engine speed, N:

        Me0 = (k1 /N) Wac ,   (3.3)

where k1 is a parameter. The cylinder flow is a function of the intake manifold pressure, p1 (kPa), and engine speed, N:

        Wac = (k2 /k1 ) p1 N + k0 ,   (3.4)
and k0 , k2 are constant parameters. The intake manifold filling dynamics are based on the ideal gas law and a constant gas temperature (isothermal) assumption:

        ṗ1 = (RT1 /V1 )(Wth − Wac ),   (3.5)
where R is the ideal gas constant in kJ/kg/K, T1 is the temperature of the air in the
intake manifold in K, V1 is the intake manifold volume, and Wth is the throttle flow
in kg/s. Near idle, the flow through the throttle is choked and one can approximate

Wth = k3 uth , (3.6)

where uth is the throttle position in degree and k3 is a constant. From (3.3) and (3.4)
it follows that Me0 = k2 p1 + (k1 /N)k0 . Differentiating this expression and using (3.4)–(3.6) leads to

        Ṁe0 = −k2 (RT1 /V1 )(N/k1 ) Me0 + k2 k3 (RT1 /V1 ) uth − (k0 k1 /N 2 ) Ṅ.   (3.7)

The term (k0 k1 /N 2 ) Ṅ is small and can be omitted, so that

        Ṁe0 = −k2 (RT1 /V1 )(N/k1 ) Me0 + k2 k3 (RT1 /V1 ) uth .   (3.8)
From (3.1) and (3.2) it now follows that

        Ṅ = (Me0 (t − td )uδ − ML ) / (J/(30/π )).   (3.9)

The complete model is now defined by (3.8) and (3.9).


The delay td is between the intake stroke of the engine and torque production, and
is about 360◦ of crankshaft revolution. Consequently, td is inversely proportional to
engine speed and is given by

        td (t) = 60/N(t).   (3.10)
Note that the delay is state-dependent and time-varying since N(t) varies with time.
Nevertheless, in the practical treatment of the problem a constant delay assumption,
with the delay value corresponding to that at the idle speed set-point, is often made.
Some references (see, e.g., [18]) cite a range between 230◦ and 360◦ for the delay duration in the crank-angle domain rather than a fixed value of 360◦ ; others (see, e.g., [14, 44]) also include in td the throttle delay and the computing delay in the engine control module. For instance, in [14] the value of the delay is estimated as 450◦ for a prototype engine, due to computation scheduling in the engine control module.

It is also interesting that transforming the model to the crank-angle domain by defining

        θ = (N(t)/60) t,   (3.11)

and using the relation d(·)/dθ = (60/N) d(·)/dt renders (3.8)–(3.9) as

        dMe0 /dθ = (60/N) (−k2 (RT1 /V1 )(N/k1 ) Me0 + k2 k3 (RT1 /V1 ) uth ),   (3.12)

        dN/dθ = (60/(JN/(30/π ))) (Me0 (θ − 1)uδ − ML ).   (3.13)
This is a plant model with a constant delay equal to 1.
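Because the delay is exactly one cycle in the crank-angle domain, (3.12)–(3.13) can be simulated with a plain history buffer; the minimal sketch below does so with an explicit Euler scheme, using the Ford F-150 parameter values quoted in the next paragraph, a fixed torque ratio uδ chosen to balance the load torque, and an illustrative throttle step (the step sizes and the scenario are our choices, not the authors').

k1, k2, k3 = 3961.0, 102.0, 2.02      # parameters quoted below
RT1_V1, Jc = 0.0750, 0.027            # RT1/V1 and J/(30/pi)
u_delta, M_L = 0.989, 31.15           # torque ratio (assumed) and load torque

dth = 0.01                            # crank-angle step (engine cycles)
nd = round(1.0 / dth)                 # unit delay = one engine cycle
N, Me0 = 800.0, 31.5                  # start near the idle equilibrium
hist = [Me0] * (nd + 1)               # buffer holding Me0(theta - 1)

for k in range(5000):                 # 50 engine cycles
    u_th = 3.15 if k * dth < 5 else 3.3        # throttle step at cycle 5
    dMe0 = (60.0 / N) * (-k2 * RT1_V1 * (N / k1) * Me0
                         + k2 * k3 * RT1_V1 * u_th)      # (3.12)
    dN = (60.0 / (Jc * N)) * (hist[0] * u_delta - M_L)   # (3.13)
    Me0 += dth * dMe0
    N += dth * dN
    hist.append(Me0); hist.pop(0)

print("engine speed after the throttle step:", round(N, 1))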
For a Ford F-150 vehicle, the model parameters in the above model have been estimated as follows [28, 47]: k1 = 3,961, k2 = 102, k3 = 2.02, RT1 /V1 = 0.0750, and J/(30/π ) = 0.027, while the nominal throttle position, load torque, and engine speed are, respectively, uth,0 = 3.15, ML = 31.15, and N = 800. It is of interest to examine the linearized transfer function at idle operating conditions from throttle position in degree to engine speed in rpm. It can be shown that this transfer function has the following form,

        H(s) = (256.867 × 2.228 e−td s ) / (s2 + 1.545s + 2.228e−td s ).

The step response of this transfer function has a lightly damped character (damping ratio about 0.5). The linearized transfer function at idle operating conditions from load torque in Nm to engine speed in rpm has the form

        HL (s) = (−37.04s − 57.22) / (s2 + 1.545s + 2.228e−td s ).
A first-order transfer function with a unit static gain may additionally be augmented to the above model to represent the dynamics of the actual throttle position, uth , in response to the commanded throttle position, uth,c , set by the idle speed controller. Reasonable parameter estimates for this throttle dynamics model are a static gain of 1, a time constant of about 0.05 s, and a delay of about 0.05 s (if not already accounted for in td ).

3.2.2 Treating ISC Using Tools for Time-Delay Systems

We now comment on some of the approaches proposed in the literature to treat the
delay-related aspects of ISC.
In practical applications, the treatment of the delay using Padé approximations is
common. A first-order Padé approximation has the form

        e−std = e−(td /2)s / e(td /2)s ≈ (−(td /2)s + 1) / ((td /2)s + 1).   (3.14)
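The quality of this approximation is easy to check numerically; the short sketch below compares the phase of the exact delay with that of the first-order Padé term for td = 60/800 = 0.075 s, i.e., the intake-to-power delay at the 800 rpm idle point per (3.10) (both factors have unit magnitude, so only the phase differs).

import numpy as np

td = 60.0 / 800.0                      # delay at 800 rpm, per (3.10)
w = np.logspace(-1, 1, 100)            # frequencies up to 10 rad/s
exact = np.exp(-1j * w * td)
pade = (1 - 1j * w * td / 2) / (1 + 1j * w * td / 2)

phase_err = np.abs(np.angle(exact) - np.angle(pade))
print("max phase error up to 10 rad/s: %.4f rad" % phase_err.max())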

In ISC, such a Padé approximation can be used to represent the delay td between the
intake stroke of the engine and torque production. With this approximation, a pole-
zero pair is added to the delay-free transfer function model of the plant, thereby
permitting the resulting plant model to be treated using conventional control design
methods. Note that Padé approximation introduces a right-half plane zero, which
renders the system nonminimum phase and tends to guard against excessively high
feedback gains that are problematic because of the delay. On the other hand, with the
Padé approximation the treatment of the delay is only approximate and the range of
the stabilizing gains may be over-predicted [25].
In [38] a controller architecture for two inputs, throttle and spark, is proposed
in which the spark input compensates for the delay in the throttle channel. Specifi-
cally, a controller is designed first for the throttle to control engine speed based on
a nondelayed plant model. Then another controller for the spark timing is proposed
to fill in for the difference in engine torque delivery due to the delay, assuming that
the spark to torque production delay is neglected. Since the effect of the delay in the
throttle to speed channel is compensated by the spark timing, from the perspective
of the throttle to engine speed control loop design the delay can be ignored. The
effectiveness of this approach has been demonstrated in simulations in [40]; how-
ever, the use of spark timing may be restricted in applications where the reduction
in spark reserve is sought to improve fuel economy.
In [6] a second-order plus time-delay ISC model is considered, in which the
delay td affects throttle to engine speed control channel while the delay in the spark
to engine speed control channel is neglected. The model is essentially the same as
(3.8)–(3.9), but the physical states in [6] are engine speed and manifold pressure.
In addition, time-varying parameter uncertainties (with known bounds) are consid-
ered. Since only the engine speed is assumed to be measured, an observer for the
full state vector is proposed, in which the output injection term is composed of
two gains multiplying, respectively, the error between the measured and estimated outputs and the error between the measured and estimated outputs delayed by td . A state feedback controller is designed which uses two gains
multiplying, respectively, the current and delayed deviations in the state estimate.
The design of observer and controller gains is based on the method of Lyapunov-
Krasovskii functionals, and it leads to the sufficient conditions, in the form of LMIs,
that guarantee the convergence of control and observer errors to zero despite param-
eter variations. This approach is tested in simulations based on a model of a Ford
F-150 vehicle with a 4.6 L V8 engine, in which the delay is assumed to be a constant
at 100 ms.
An approach based on the finite spectrum assignment is investigated in [15] for
an ISC problem with air-bypass valve and spark timing as inputs. Both delays in air-
bypass valve channel and in the spark channel are considered. The model is recast
in the form
ẋ(t) = A0 x(t) + A1 x(t − td ) + B0 u(t) + B1 u(t − ts ),

where td is the delay in the air-bypass valve channel and ts is the delay in the spark
channel. The state vector x(t) is four dimensional, with the two physical states

(engine speed and manifold pressure) being similar to the states of the model (3.8)–
(3.9), and two additional states appearing because of augmentation of an integrator
on the engine speed to ensure zero steady-state error and due to adding a derivative
action in the spark channel. The proposed control law has the form

        u(t) = −Kx(t) − K ∫_{t−td}^{t} e^{Ã(t−td −η )} A1 x(η ) dη − K ∫_{t−ts}^{t} e^{Ã(t−ts −η )} B1 u(η ) dη ,

with the integrals being discretized for implementation using a trapezoidal integration rule. The matrices Ã and K can be determined, under a spectral stabilizability
assumption, using a procedure discussed in [15]. The approach was tested in simu-
lations.
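To make the discretization step concrete, the following sketch evaluates one distributed-delay term of the above law by the trapezoidal rule, given histories of x or u sampled every dt seconds; the function names are ours, and the matrices Ã, K, A1, and B1 are assumed to be supplied by the design procedure of [15].

import numpy as np
from scipy.linalg import expm

def distributed_term(Atil, M, hist, dt):
    # Trapezoidal approximation of
    #   int_{t-d}^{t} exp(Atil*(t - d - eta)) M z(eta) d eta,
    # with hist[0] = z(t - d), ..., hist[-1] = z(t); note that on the
    # grid eta_i = t - d + i*dt the kernel exponent is -i*dt.
    m = len(hist) - 1
    terms = [expm(-Atil * i * dt) @ (M @ hist[i]) for i in range(m + 1)]
    return dt * (0.5 * terms[0] + sum(terms[1:m]) + 0.5 * terms[m])

def fsa_control(K, Atil, A1, B1, x_hist, u_hist, dt):
    # Finite-spectrum-assignment law with both integrals discretized.
    return (-K @ x_hist[-1]
            - K @ distributed_term(Atil, A1, x_hist, dt)
            - K @ distributed_term(Atil, B1, u_hist, dt))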
Recent work [44] featured an application of Adaptive Posicast controller for
time-delay systems developed in [34]. The control law for the throttle has been
designed in the form

        uth (t) = θ1T (t)ω1 (t) + θ2T (t)ω2 (t) + ∫_{−td}^{0} λ (t, η )uth (t + η ) dη + θ4 (t)r,
        ∂θ /∂t = −Γ (y(t) − ym (t))ω (t − td ),
        θ = [θ1 θ2 λ θ4 ]T ,   ω = [ω1 ω2 u r]T ,
        ω̇1 = Λ0 ω1 + lu(t − td ),
        ω̇2 = Λ0 ω2 + ly(t),

where y is the deviation of the engine speed from the nominal engine speed, r is
the set-point for y, ym is the output of the reference model, Λ0 and l are obtained
in the design [44] and Γ is an adaptation gain. The Adaptive Posicast controller
guarantees that the closed-loop system is stable and that the output of the plant
follows the output of the reference model. The integral term has been discretized
according to

        ∫_{−td}^{0} λ (t, η )u(t + η ) dη ≈ λ1 uth (t − dt) + · · · + λk uth (t − k dt),

where dt is the sampling interval and k dt = td . The controller was implemented


experimentally in a Ford F-150 vehicle and its capability to outperform the exist-
ing production controller in set-point following and disturbance rejection has been
demonstrated. This improved performance comes at a price of increased compu-
tational complexity. Potential opportunities to reduce calibration time and effort
through the application of this adaptive control approach have been also highlighted
in [44].
As yet another application of the recent results in the literature on time-delay
systems, [49] considered an observer design problem for the states of a system
affected by random and unknown delays. The delay is modeled by a Markov chain
with a finite number of states, and the observer structure has been proposed based
on a probabilistically averaged copy of the state space model. The observer was

successfully applied as a fault detector for an engine speed control system in which
the control input (requested engine torque) is computed in a module connected to
engine control module via a controller area network (CAN).

3.3 Air-to-Fuel Ratio Control

Air-to-fuel ratio control is imperative for conventional gasoline vehicles (see Fig. 3.2) in order to achieve emission reductions to mandated levels. The origi-
nal introduction of the electronic powertrain control in the late 1970s has, in fact,
been largely driven by the air-to-fuel ratio control requirement stemming from the
introduction of tighter emission standards for the passenger vehicles. The conver-
sion efficiency of a three-way catalyst (TWC) as a function of the tailpipe air-to-fuel
ratio1 is rapidly degraded when the tailpipe air-to-fuel ratio deviates from the sto-
ichiometry [17]. Because of oxygen storage in the catalyst, the tailpipe air-to-fuel
ratio (after the catalyst) can remain at stoichiometry even if the feed-gas air-to-fuel
ratio (before the catalyst) deviates from stoichiometry in transients. In steady-state
conditions the two air-to-fuel ratios equilibrate. Thus the catalyst provides a capabil-
ity to withstand some transient fluctuations in the feed-gas air-to-fuel ratio without
leading to emission performance degradation.
The feed-gas air-to-fuel ratio can be disturbed by engine airflow transients, when
the driver tips-in or tips-out, or by periodic canister vapor purges, which result in
extra air and fuel vapor flow from the carbon canister that stores fuel vapors origi-
nating in the fuel tank. Tight control of the feed-gas air-to-fuel ratio to its set-point
(this set-point may depend on an estimate of the stored oxygen in the catalyst or on
an indication/measurement of the tailpipe air-to-fuel ratio) is imperative to reduce
emissions or to achieve reductions in the cost and size of the catalyst. The feed-gas
air-to-fuel ratio is measured by a wide-range air-to-fuel ratio sensor, which is also
often referred to as the universal exhaust gas oxygen (UEGO) sensor. The tailpipe
air-to-fuel ratio may be measured either by a wide-range sensor or, more typically,
by a switching sensor, called heated exhaust gas oxygen (HEGO) sensor, which
provides an indication if the tailpipe air-to-fuel ratio is richer or leaner than stoi-
chiometry. The HEGO sensor also has some resolution in a narrow range around
the stoichiometry, i.e., its voltage reading is sensitive to the actual air-to-fuel ratio
value but in a narrow range and with a steep slope. The HEGO sensor may be used
to measure the tailpipe air-to-fuel ratio because it is normally cheaper, faster, and is
less subject to drift as compared with the UEGO sensor. Moreover, if the engine is
properly controlled, the tailpipe air-to-fuel ratio tends to vary in a much narrower
range than the feed-gas air-to-fuel ratio.
1 The air-to-fuel ratio is a quantity reflecting the exhaust gas composition in terms of equivalent amounts of air and fuel which need to be burnt.

Fig. 3.2 Schematics of a gasoline engine

Fig. 3.3 Delay from fuel injection step to feed-gas air-to-fuel ratio change at 1,000 rpm for an experimental engine

The delay between fuel injection and UEGO sensor measurement (see Fig. 3.3 for an illustration) can be a limiting factor seriously degrading the achievable
performance of the air-to-fuel ratio feedback loop. Thus more effective techniques
for controlling time delay systems can have a substantial impact in reducing con-
ventional gasoline vehicle tailpipe emissions. While our subsequent treatment is
focused on conventional gasoline engines, we note that the delay also plays an
important role in the air-to-fuel ratio control for advanced lean-burn engines and
for diesel engines. We refer the reader to [1] and [48] for the design of the air-to-
fuel ratio controller while accounting for the delay.
In the remainder of this section we first discuss a typical air-to-fuel ratio control
loop for a gasoline engine based on a wide range air-to-fuel ratio sensor, with special

attention paid to the delay modeling. We will then illustrate an application of certain
tools for characterizing the range of stable gains in this control loop when the airflow is constant, and we will discuss links with recently developed theoretical tools for control of time-delay systems that permit treatment of the case of time-varying air flow. The case of an air-to-fuel ratio control loop based on a switching air-to-fuel ratio (HEGO) sensor rather than the wide-range air-to-fuel ratio (UEGO) sensor will be briefly discussed at the end.

3.3.1 Feed-gas Air-to-Fuel Ratio Control Block Diagram

A typical arrangement of a closed-loop air-to-fuel ratio system, based on proportional-plus-integral control and a wide-range air-to-fuel ratio sensor, is shown
in Fig. 3.4. As in our treatment of ISC, cyclic fluctuations of engine operating vari-
ables are replaced by cycle averages using a mean-value approximation so that the
resulting plant and control system dynamics can be treated in continuous-time. The
injected fuel flow rate (control input) is denoted by Wfi . A fraction of the injected
fuel enters the cylinder during the intake event in vapor form, while the remainder
in liquid form replenishes a fuel puddle formed in the intake ports of the engine.
The liquid fuel in the puddle evaporates and enters the cylinder in vapor form.
Thus, in steady-state conditions the amount of fuel entering the cylinder is equal to
the amount of injected fuel, but in transient conditions these two quantities can be
different. The transient fuel dynamics are shown by the block “TF” on the diagram
in Fig. 3.4. Additional fuel and air can enter as a disturbance, d, due to canister vapor purge. The in-cylinder fuel-to-air ratio is μ = Wfc /Wac , where Wac denotes the
, where Wac denotes the
air flow into the engine cylinders. A first-order transfer function plus time delay
represents the dynamics between actual fuel-to-air ratio and measured fuel-to-air
ratio. The time constant, τe , accounts for the fact that individual cylinders do not
fire simultaneously within the cycle [27], for the exhaust mixing and for the UEGO
sensor time constant. The delay, td , mainly accounts for the intake to exhaust stroke

Fig. 3.4 Block diagram of gasoline engine air-to-fuel ratio control system

engine cycle delay and for the transport delay to exhaust sensor location; it will
be considered in Sect. 3.3.2 in more detail. As will be detailed below, the delay td
varies with engine operating conditions and it is dependent on engine speed and
engine air flow.
The closed-loop regulation to a desired fuel-to-air ratio, μd , is typically per-
formed by a proportional-plus integral (PI) controller. The value of μd is around
1/14.64 in applications to gasoline stoichiometric engines and may be adjusted
by another (outer) control loop based on either voltage reading from a tailpipe
air-to-fuel ratio sensor (HEGO) or based on estimated oxygen storage in the cata-
lyst. Other control designs, such as an H∞ controller (see e.g., [1, 36]) or Adaptive
Posicast Control (see [45, 46]) can, of course, be also used. The PI controller gains
kp and ki can be coordinated to cancel the plant pole at −1/τe [27], and for that
reason, the term (kp /ki )s + 1 is factored out. The estimate of the airflow multiplied by
desired fuel-to-air ratio, μdŴac scales the PI controller output, u, i.e., u provides
a multiplicative correction to the feedforward fuel quantity, μdŴac . The desired
in-cylinder fueling rate is thus Wfc,d = μdŴac (1 + u). Since the engine airflow varies
over a broad range, a multiplicative correction, which scales automatically with the
airflow, results in less transient air-to-fuel ratio excursions in fast transients. Fur-
thermore, it renders the static (dc) gain of the plant constant across operating range.
The transient fuel model inverse block, “T̂F−1 ”, backtracks the required injected
fueling rate, Wfi , to realize Wfc,d .

3.3.2 Modeling

While most of the blocks in Fig. 3.4 are self-explanatory and can thus be easily
modeled, the model of the delay, td , and of the transient fuel dynamics deserve
further discussion.

3.3.2.1 Delay

The delay between fuel injection and fuel-to-air measurement is comprised of the
delay between fuel injection and intake event, tfi , delay between intake and exhaust
event, tie , transport delay to the A/F sensor location, ttr , sensor delay, ts and compu-
tational delay in the engine control module, tc . Assuming two events between fuel
injection on the closed valve and engine intake event and a four cylinder engine, the
following approximations can be made
        tfi = 2 (30/N),   tie = 3 (30/N),
where N denotes the engine speed in rpm. If the length of travel of the exhaust gas
to the A/F sensor location is L, then

        ttr = L/v,

where v is the gas velocity. From the ideal gas law and using the definition of the
mass flow rate of the exhaust, Wex , we obtain
        Wex = (p2 /(RT2 )) A v,
where p2 is the exhaust pressure, R is the gas constant, T2 is the exhaust gas temper-
ature, and A is the flow area. Thus,
        ttr = L p2 A / (RT2 Wex ).
In a naturally aspirated stoichiometric port-fuel injected engine, p2 is approximately
constant while the heat flux, T2Wex , is essentially proportional to the mass fuel rate,
which in turn is proportional to the mass flow rate of air, Wac . Thus the total delay,
tc + tfi + tie + ttr + ts , is mainly a function of engine speed, N, and the mass flow rate
of air, Wac .
In contrast to ISC, the delay in the air-to-fuel ratio control problem cannot be
assumed to be constant since engine speed and air flow, on which it depends, are
varying widely across engine operating range. On the other hand, the delay is not
dependent on the states of the air-to-fuel ratio control problem itself.
Recent measurements taken from a Ford vehicle with a 5.4 L V8 engine [32]
have been fitted into an aggregate model derived from the above considerations:

        td = α0 + α1 /N + α2 /(MN),   (3.15)
where N is the engine speed, M is the load, and α0 = 0.0254, α1 = −14.7798,
α2 = 46.4974. The load M is defined as the ratio of Wac /N to maximum possible
Wac /N at a given engine speed, if the engine is operated at sea level and the flow
of air into the cylinder is at normal ambient (25◦ C) temperature. For the engine on
which the delay measurements were conducted, the mass flow rate of air, Wac (in
kg/s) as a function of engine speed and load is given by Wac = 5.53 × 10−5 MN.
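The fit (3.15) is straightforward to evaluate; the minimal sketch below reproduces the trend of Fig. 3.5 from the quoted coefficients (the function names are ours, chosen for illustration).

def af_delay(N, M, a0=0.0254, a1=-14.7798, a2=46.4974):
    # Total injection-to-measurement delay (s), per (3.15),
    # at engine speed N (rpm) and dimensionless load M.
    return a0 + a1 / N + a2 / (M * N)

def air_flow(N, M):
    # Mass flow rate of air (kg/s) for the engine the fit was made on.
    return 5.53e-5 * M * N

for N, M in [(600, 0.16), (1500, 0.25), (2600, 0.5)]:
    print(N, M, round(af_delay(N, M), 3), round(air_flow(N, M), 4))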
Figure 3.5 illustrates the delay dependence on engine speed and load.
With τe about 0.2 (a typical value), the product of the pole at a = 1/τe and the delay, td , is larger than 0.5 if the delay is larger than 0.1. In the process control literature, first-order plus time-delay systems with atd ≥ 0.5 are considered to be difficult to control using only feedback, and feedforward compensation is typically required [26]. Per-
haps as a manifestation of this difficulty, in air-to-fuel ratio control applications
accurate air charge estimation, accurate transient fuel compensation, and estima-
tion of purge flow disturbance are employed to realize effective control. The devel-
opment of these feedforward estimates entails calibration time and effort, which
may be reduced if more effective methods are applied to the design of the feedback
compensation. Note also that in other cases, e.g., in lean-burn engine applications
considered in [48], the control system uses tailpipe air-to-fuel ratio measurement
directly, with the need to treat a significantly larger time delay.

Fig. 3.5 The delay as a function of engine speed for different loads (M = 0.16, 0.25, 0.5). Asterisks indicate experimental engine measurements

3.3.2.2 Transient Fuel Modeling

Several models of varying complexity have been developed in the literature to model
transient fuel behavior. The original mean-value model [3] has the following form:
        dmfp /dt = −mfp /τ + XWfi ,
        Wfc = (1 − X)Wfi + mfp /τ ,   (3.16)
        λ = Wac /Wfc ,
where mfp denotes the mass of the fuel in the liquid fuel puddle formed in the ports
(total for all cylinders), τ > 0 is the fuel evaporation rate from the liquid puddle,
and X is a fraction of the injected fuel that replenishes the puddle (0 < X < 1).
The parameter X is primarily a function of the engine coolant temperature (which
changes slowly and is a measured quantity) but τ exhibits a more complicated
dependence on the engine operating variables, for instance, it may depend on the
intake manifold pressure, air flow and engine speed. The inverse of this transient
fuel model has the following form,

        Wfi = (Wfc,d − m̂fp /τ ) / (1 − X),   (3.17)
where m̂fp is an estimate of liquid fuel in the puddle. For more elaborate transient
fuel models, see, for instance, [24] and references therein.
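The interplay of (3.16) and its inverse (3.17) is illustrated by the following minimal sketch: when the puddle estimate m̂fp matches the true puddle mass, the delivered fuel flow Wfc tracks the desired Wfc,d exactly, even through a step; the parameter values used here (X = 0.3, τ = 0.5 s) and the fueling scenario are purely illustrative assumptions.

X, tau, dt = 0.3, 0.5, 0.001          # puddle fraction, evap. time (s), step
m_fp = m_hat = 0.0                    # true and estimated puddle mass (kg)
for k in range(2000):                 # simulate 2 s
    W_fcd = 0.002 if k * dt > 0.5 else 0.001   # desired fuel step (kg/s)
    W_fi = (W_fcd - m_hat / tau) / (1.0 - X)   # inverse model (3.17)
    W_fc = (1.0 - X) * W_fi + m_fp / tau       # delivered fuel, per (3.16)
    m_fp += dt * (-m_fp / tau + X * W_fi)      # puddle dynamics (3.16)
    m_hat += dt * (-m_hat / tau + X * W_fi)    # estimator copy of the puddle
print("final tracking error:", abs(W_fc - W_fcd))   # ~0 for a perfect estimate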

3.3.3 Constructing Stability Charts and Gain Scheduling

The analysis of stability and convergence rates for the air-to-fuel ratio control sys-
tem in Fig. 3.4 in the case when the engine speed, N, and air flow into the engine,
Wac are constant is of interest as it characterizes the air-to-fuel ratio behavior after a
transition (such as a tip-in or a tip-out) has occurred to a new operating condition.
In this case, td and τe are constant. Assuming accurate transient fuel compensation
and in absence of the canister vapor purge disturbances, the error, e, can be shown
to satisfy the following equations,

        ż = e,
        ė = −(1/τe ) e(t) − (1/τe ) (kp e(t − td ) + ki z(t − td )).   (3.18)

To determine the range of the gains, kp and ki , for which the closed-loop stability
and a specified convergence rate are guaranteed, the numerical method of [9] may
be used. This method is applicable to general linear time-periodic delay-differential
equations with multiple delays commensurate to the period. The system (3.18) is
a linear time-invariant system with a single delay and can thus be handled by the
method in [9] with ease.
To apply the approach of [9] it is convenient to transform (3.18) into the form

        dx/dθ = A(ε )x(θ ) + B(ε )x(θ − 1),   (3.19)
        x = [z, e]T ,   ε = [td , τe , kp , ki ]T ,   θ = t/td ,
where θ is the scaled time and ε is a parameter vector. Consistently with the method
of steps [30], (3.19) may be viewed as defining a linear operator on C[−1, 0] that
maps past solution segments to future solution segments. By approximating func-
tions on [−1, 0] using interpolation through Chebyshev extreme points, a finite-
dimensional linear operator, U(ε ), is obtained, which maps the coefficients of the
associated Chebyshev interpolating polynomials. The Chebyshev basis is selected
as it combines spectral accuracy with simplicity of computations. To compute an
approximation to the stability region in a parameter space, one needs to choose a
grid for ε and compute the eigenvalues and spectral radius, ρ (ε ), of U(ε ), for each
grid point. The stability region approximately corresponds to the values of ε for
which ρ (ε ) < 1 and its boundary corresponds to ρ (ε ) = 1.
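A simplified variant of this computation is sketched below: instead of Chebyshev interpolation, the solution operator of (3.19) over one delay interval is approximated by marching an explicit Euler scheme on a uniform grid, and its spectral radius is then evaluated. This is cruder than the spectral method of [9], but it illustrates the construction of U(ε ) and the stability test ρ (ε ) < 1; the grid size and the sample gain pair are our choices.

import numpy as np

def solution_operator(A, B, m=60):
    # Approximate the linear map taking the solution segment of
    #   dx/dtheta = A x(theta) + B x(theta - 1)
    # on [-1, 0] to the segment on [0, 1] (uniform grid, explicit Euler).
    n, h = A.shape[0], 1.0 / m
    dim = n * (m + 1)
    U = np.zeros((dim, dim))
    for col in range(dim):
        seg = np.zeros(dim); seg[col] = 1.0
        samples = [seg[i * n:(i + 1) * n].copy() for i in range(m + 1)]
        for _ in range(m):                    # march one delay interval
            xk, xdel = samples[-1], samples[-1 - m]
            samples.append(xk + h * (A @ xk + B @ xdel))
        U[:, col] = np.concatenate(samples[-(m + 1):])
    return U

def rho(td, tau_e, kp, ki, m=60):
    # Spectral radius of U(eps) for the scaled error system (3.19).
    A = td * np.array([[0.0, 1.0], [0.0, -1.0 / tau_e]])
    B = td * np.array([[0.0, 0.0], [-ki / tau_e, -kp / tau_e]])
    return max(abs(np.linalg.eigvals(solution_operator(A, B, m))))

print(rho(0.2, 0.2, 1.0, 5.0))   # < 1: inside the stability region of Fig. 3.6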
The guaranteed convergence rate regions can be defined using ρ (ε ) as an indi-
cator of the convergence rate, i.e., ρ (ε ) ≤ ρ0 for a fixed ρ0 < 1. The smaller ρ0 is,
the faster the convergence rate. Since a time transformation is employed to arrive at
(3.19), a better indicator of the convergence rate in time can be defined as follows:

        ρ̃ (ε ) = −td / log ρ (ε ).   (3.20)

Fig. 3.6 The stability and guaranteed convergence rate regions (given in terms of ρ ) in the kp − ki parameter space for different values of td (0.1, 0.2, 0.3) and τe = 0.2

The ρ̃ (ε ) may be viewed as an equivalent time constant of the closed-loop system. The Matlab2 software implementing the computations is located at http://www.cs.uaf.edu/~bueler/ddepage.htm.
Figure 3.6 illustrates the stability and guaranteed convergence rate regions in
terms of kp and ki for td = 0.1, 0.2, 0.3 and for τe = 0.2. We used 25 Chebyshev
extreme points in constructing U(ε ). Notice that the regions shrink considerably
with the increase in the delay. Figure 3.7–left presents the values of kp and ki , which
minimize ρ as a function of td and suggests that the values of kp and ki can be gain-
scheduled as a function of td to preserve the stability and maximize the convergence
rate. Figure 3.7–right compares the minimum values of ρ and ρ̃ for optimum gains
as a function of the delay td . Note that the achievable values of ρ̃ increase with td , an indication that a larger delay limits the performance of the closed-loop system more.
In [4] a similar air-to-fuel ratio control system is analyzed and it is shown that
the numerical boundaries of the stability and guaranteed convergence rate regions
are very close to the analytically predicted boundaries.

2 Matlab is a registered trademark of the Mathworks, Inc., Natick, MA.



Fig. 3.7 Left: The values of kp and ki , which minimize ρ as functions of td . Right: The values of ρ and ρ̃ as functions of td

3.3.4 Using Robust and Stochastic Stabilization Tools for Time-Delay Systems

Our treatment in Sect. 3.3.3 has been based on the constant air flow assumption.
Robust and stochastic stability analysis tools for time-delay systems can be used
to treat a more general case, and we now provide a brief illustration of this for an
air-to-fuel ratio controller of feedback linearization type originally proposed in [4].
With μ = Wfc /Wac denoting the in-cylinder fuel-to-air ratio, it can be shown from
(3.16) and the assumption that X does not vary with time (since X is mainly a func-
tion of slowly varying engine coolant temperature) that
μ̇ = ((1 − X)/Wac ) Ẇfi + Wfi /(τ Wac ) − μδ + (1 − X)Wfi τ̇ /(Wac τ ),    (3.21)
where
 
δ = 1/τ0 + δ̃ ,   δ̃ = 1/τ − 1/τ0 + τ̇ /τ + Ẇac /Wac ,
and τ0 is a nominal value of τ (t). With μm denoting the measured fuel-to-air ratio,
μd being the desired fuel-to-air ratio and defining ξ = μ − μd , ξm = μm − μd , we
obtain that
ξ̇m + aξm = aξ (t − td ),   a = 1/τe .    (3.22)
While the air flow, Wac , and fuel evaporation rate parameter τ are considered
as time-varying, two simplifying assumptions, of constant td and constant a, are
now made. These two assumptions may be justified as approximations in the anal-
ysis of the engine operating only in a part of the full operating range (e.g., during
constant speed cruise with limited load changes); further, a conservative approach
of augmenting a time-varying delay in software in the engine control module
can effectively render td constant. The treatment of a more general model (with
time-dependent td and a) is possible along similar lines but the notations and the
required conditions are considerably more cumbersome [12] and are not consid-
ered here.
With these assumptions, it follows that
 
ξ̈m (t + td ) + aξ̇m (t + td ) = a [ ((1 − X)/Wac ) Ẇfi + Wfi /(τ Wac ) − μd δ − ξ δ + (1 − X)Wfi τ̇ /(Wac τ ) ].

We note that this equation can be used as a basis for system identification of a,
X, τ , and td , see [16]. Since, per process control guidelines, we established that
the air–fuel ratio plant is difficult to control using feedback only, accurate system
identification is very important and there is much room for the development and
application of effective system identification procedures for time-delay systems. In
this regard, and given that the delay in the air-to-fuel ratio control problem is an out-
put delay and not the state delay, the convolution methods in [5] appear promising
for fast time-delay identification. Further treatment of identification issues for time
delay systems falls beyond the scope of this chapter.
With a feedback-linearizing type controller for the fuel-injection rate, Wfi , defined
according to
((1 − X)/Wac ) Ẇfi + Wfi /(τ Wac ) − μd δ + (1 − X)Wfi τ̇ /(Wac τ ) = (k/a) ξm ,    (3.23)
the closed-loop dynamics are
   
ξ̈m (t) + (a + 1/τ0 + δ̃ (t − td )) ξ̇m (t) + (a/τ0 + aδ̃ (t − td )) ξm − k ξm (t − td ) = 0.    (3.24)
Figure 3.8 shows the experimental response of the controller (3.23) operating
the engine at 1,500 rpm and 20◦ C engine coolant temperature, where because of the
cold temperature the air-to-fuel ratio control is particularly challenging. The air flow
into the engine is changing consistently with the torque ramp up and ramp down and
the performance is evaluated with an estimate of Ẇac included into (3.23) but with τ
set to a constant value. As the responses show, including an estimate of Ẇac and the
feedback due to k ≠ 0 enhance the controller disturbance rejection properties.
Note that the term δ̃ (t − td ) may be viewed as a time-varying uncertainty in the
coefficients of (3.24), so that the robust or stochastic stability of (3.24) may be
analyzed by the LMI type methods developed in the literature, see e.g. [7] and [30].
With x1 = ξm , x2 = ξ̇m , and

A = [0 1 ; −a/τ0 −(a + 1/τ0 )],   B = [0 0 ; k 0],   F = [0 0 ; −aθ −θ ],

and

δ̃ (t − td ) = θ Δ (t),   |Δ (t)| ≤ 1,
Fig. 3.8 Experimental results: Feedback linearization controller performance with the air flow
disturbance for k = 0, Ẇ̂ac = 0 (solid, blue) and for k/a = −2.5 and Ẇ̂ac ≈ Ẇac (dashed, red), where
Ẇ̂ac is the estimate of the air flow time rate of change. The commanded normalized air-to-fuel ratio
is 1 and the 5% deviation band is shown by dashed green lines

the closed-loop model can be recast as

ẋ = (A + Δ (t)F)x(t) + Bx(t − td ), (3.25)

where A, B, and F are system matrices and Δ (t) is a scalar time-varying uncertainty
satisfying |Δ (t)| ≤ 1.
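As a quick illustration, the following Python sketch simulates (3.25) by a fixed-step Euler scheme with a history buffer for the delayed state; the numerical values of a, τ0 , k, θ , and td are illustrative only and are not taken from the text.

```python
import numpy as np

# Euler simulation of x' = (A + Delta(t) F) x(t) + B x(t - td), eq. (3.25)
a, tau0, k, theta = 5.0, 0.6, -12.5, 0.3     # assumed, for illustration
td, dt, t_end = 0.1, 1e-3, 5.0

A = np.array([[0.0, 1.0], [-a / tau0, -(a + 1.0 / tau0)]])
B = np.array([[0.0, 0.0], [k, 0.0]])
F = np.array([[0.0, 0.0], [-a * theta, -theta]])

n_delay = int(round(td / dt))
hist = [np.array([0.05, 0.0])] * (n_delay + 1)   # constant initial history
x = hist[-1].copy()
for step in range(int(t_end / dt)):
    t = step * dt
    delta = np.sin(2.0 * np.pi * t)              # any |Delta(t)| <= 1 works
    x_delayed = hist[0]                          # x(t - td)
    x = x + dt * ((A + delta * F) @ x + B @ x_delayed)
    hist.pop(0)
    hist.append(x.copy())
print("final |x|:", np.linalg.norm(x))           # decays if robustly stable
```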
The LMI-based conditions, reviewed in the Appendix, can be used to conserva-
tively estimate the maximum size of θ , which preserves closed-loop stability for
different k and td . The maximum tolerable θ and the gain, k, for which tolerable θ
is maximized are shown in Fig. 3.9.
We may instead choose to view δ̃ as a stochastic process and proceed to char-
acterize the mean-square stochastic stability of (3.24). The conditions are given by
(3.52) in the Appendix where we assume3 that δ̃ ∼ θ w, and w is a standard Wiener
process. We estimate numerically the maximum tolerable value of θ as a function
of k and td . Figure 3.10 indicates that the use of feedback helps to accommodate
larger θ , although larger delays significantly decrease the maximum tolerable value
3 This means that the change in δ̃ from one sample to the next, divided by the square root of the
sampling period, is assumed to behave approximately as a zero-mean Gaussian random variable
with variance θ .
Fig. 3.9 Left: Maximum tolerable deterministic uncertainty size, θ , as a function of the gain k for
different values of td . Right: The gain k that accommodates the maximum deterministic uncertainty
size as a function of the delay, td

Fig. 3.10 Solid lines show the maximum admissible value of θ in the stochastic case. A slight
“blip” in the td = 0.2 trajectory is due to numerical problems

of θ . For a standard driving cycle used in vehicle emissions and fuel economy cer-
tification, further analysis has revealed that the rate of change of δ̃ may be larger
than conservative sufficient conditions in Fig. 3.9 permit, but it appears to satisfy
conditions of Fig. 3.10.
Continuing now with the analysis of the closed-loop behavior, let X̂, τ̂ , Ŵac
denote, respectively, the estimates of X, τ , Wac used in controller implementation.
Then in steady-state conditions,
μ = μd (1 − τ̂ k/a) / (Wac /Ŵac − τ̂ k/a).

Thus, in terms of steady-state operation, air flow estimation errors are more troublesome
for the controller, while differences of X̂, τ̂ from X, τ do not lead to a steady-state
offset of μ from μd provided Ŵac = Wac . A large magnitude of k helps to reduce the
steady-state offset due to air flow estimation errors, but the gain increase is restricted
because of the delay and sensor noise.
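The effect of the gain on this steady-state offset is easy to tabulate; the sketch below evaluates the displayed formula for a few gain values, with an assumed 20% air flow estimation error (all numbers illustrative).

```python
def mu_ss(mu_d, k_over_a, tau_hat, Wac_over_Wac_hat):
    # steady-state mu from the displayed formula, with g = tau_hat * k / a
    g = tau_hat * k_over_a
    return mu_d * (1.0 - g) / (Wac_over_Wac_hat - g)

mu_d = 1.0 / 14.3
for k_over_a in (0.0, -2.5, -10.0):          # illustrative gains
    ratio = mu_ss(mu_d, k_over_a, 0.3, 1.2) / mu_d
    print(f"k/a = {k_over_a:6.1f}:  mu/mu_d = {ratio:.3f}")
# larger |k| pulls mu/mu_d back toward 1 despite the air flow error
```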

3.3.5 Air-to-Fuel Ratio Control Using a Switching HEGO Sensor

The air-to-fuel ratio regulation can be performed using a switching (HEGO) air-to-
fuel ratio sensor in place of a wide range air-to-fuel ratio sensor before the catalyst,
see Fig. 3.2. The advantages of a HEGO sensor are lower cost, faster response, and
higher tolerance to drift, while the main disadvantage is that, for air-to-fuel ratios
outside of a narrow range, it only provides an indication of whether the air-to-fuel
ratio is rich or lean of stoichiometry, but not by how much.
A model relevant to the air-to-fuel ratio regulation with the HEGO sensor has the
following form:
 
ṡ + as = a (Wfc /Wac − μd ) + d,   a = 1/τe ,
ym = sign(s(t − td )),    (3.26)
where ym is the measurement by HEGO sensor, μd is the fuel-to-air ratio set-point,
and d denotes the disturbance. We treat Wfc = Wfc,d (commanded cylinder fueling
rate) as a control input, recognizing that the fuel injection rate Wfi is backtracked
from commanded Wfc using transient fuel inverse model while the errors in realizing
Wfc are lumped into d. In the subsequent analysis, we will focus on the air-to-fuel
ratio regulation under the assumption of air flow being a constant parameter.
The controller has the following form,
   
Wfc = Wac (μ̂d + ŝ + v/a),   ŝ˙ + aŝ = a (Wfc /Wac − μ̂d ),
v = −γ ym = −γ sign(s(t − td )),    (3.27)

where γ > 0 is a control gain and in the analysis we assume that4 μ̂d = μd . It is
straightforward to see that (3.27) is a proportional-plus-integral controller for Wfc
applied to the measurement ym and augmented with a feedforward control. The
proportional gain of the controller is equal to Wac γ /a and the integral gain is equal

4 The HEGO sensor switches around the true stoichiometry μd , while μ̂d is used by the controller
in (3.27). Depending on the composition, values of stoichiometric air-to-fuel ratio may vary for
gasoline or gasoline-ethanol blend.
to Wac γ so that the gains are coordinated to achieve pole-zero cancellation of the
time constant 1/a = τe in the plant model with the term ((kp /ki )s + 1) of the PI controller,
cf. Fig. 3.4.
We illuminate the behavior of the closed-loop system via simulations where
large parameter uncertainty has been introduced. The engine parameters are X =
0.3, τ = 0.2, td = 0.2, μd = 1/14.3, and the air flow into the engine Wac increases
from 40 kg/h to 80 kg/h at the time instant t = 10.5 s. The controller uses estimates
of the parameters, X̂ = 0.2, τ̂ = 0.3, tˆd = 0.23, μ̂d = 1/14.64, and the estimate of the
air flow into the engine, Ŵa,c , which increases from 40 kg/h to 60 kg/h. Figure 3.11
shows the closed-loop system behavior for two values of the feedback gain. Even
though the parameters are not estimated correctly, the controller is able to cope with
parameter uncertainties. The smaller value of the gain results in a smaller amplitude
of the limit cycle but is much slower in compensating the transient at t = 10.5 s.
On the other hand, the larger value of the gain quickly compensates the transient
at t = 10.5 s, but results in a large amplitude limit cycle in steady-state conditions.
Thus, neither choice of the gain is ideal. Better performance may be
attained if the gain is made time-varying via adaptation as discussed next.
From (3.26)-(3.27), we obtain

ṡ(t) = −γ sign(s(t − td )) + w(t),
w(t) = d(t) + a (ŝ − s).    (3.28)

Fig. 3.11 Time history of the air-to-fuel ratio for the controller with the high fixed gain (solid),
for the controller with the low fixed gain (dashed) and for the controller with the adaptive gain
(dash-dot)
Note that if e = ŝ − s, then ė = −ae − d. Letting z = ė, we have ż = −az − ḋ, and d
slowly varying implies that w = −z = d + ae is ultimately bounded in the interval
[−1, 1] · supt |ḋ(t)|/a. If d is fast-varying but bounded, then w(t) is ultimately bounded in the
interval supt |d(t)|(1 + 1/a)[−1, 1]. In either case, we can treat w(t) as a bounded
signal, i.e.,
supt |w(t)| ≤ ε ,    (3.29)

where ε > 0 is a known bound.


It is interesting to note that (3.28) is a relay-type time-delay system of the kind
recently studied in the time-delay literature, see e.g., [10]. Unlike the case treated
in [10], here the dynamics with γ = 0 are marginally stable and not unstable and
there is a bounded disturbance. Nevertheless, ideas similar to those in [10], basing the gain
reduction on predicted zero crossings [11], can be exploited here.
To reduce the amplitude of the steady-state limit cycle in the air-to-fuel ratio
trajectory the basic idea is to make the gain γ time-varying, γ = γ (t), and reduce its
values at time instants when s(t) is predicted to cross zero. Specifically, if at time
tk−1 a zero crossing by sm (t) = s(t − td ) is detected, we can estimate a predicted
time of next zero crossing by s(t) as tk−1 + τk , where
 
τk ∈ [ ((γk − ε )/(γk + ε )) td , ((γk + ε )/(γk − ε )) td ],

and γ (t) = γk is the gain used for tk−2 + τk−1 ≤ t < tk−1 + τk . In fact, τk can be
estimated as the mid-point of the interval [((γk − ε )/(γk + ε )) td , ((γk + ε )/(γk − ε )) td ], which is

τk = τ̂k = ((γk² + ε ²)/(γk² − ε ²)) td .

Note that τ̂k = td if ε = 0.


Thus at the time instant tk−1 + τ̂k , s(t) is predicted to cross zero, and to reduce the limit
cycle amplitude we reduce the gain according to the rule, γk+1 = max{γk α , γmin },
where 0 < α < 1 and γmin > ε . The reduction of the gain at the time instants where
s(t) is close to zero is targeted since when s(t) is away from zero, reducing the gain
only serves to extend the transient. Note that the next zero crossing of sm (t) may
be detected at a time instant tk that occurs earlier than tk−1 + τ̂k ; in this case, γk
should still be reduced at tk−1 + τ̂k and the detected zero crossing at the time instant
tk should be ignored.
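A minimal simulation sketch of this gain-adaptation logic is given below; the parameter values, the disturbance, and the omission of the gain-reset safeguard are all simplifying assumptions for illustration, not values from the text.

```python
import numpy as np

# Relay system (3.28) with the adaptive gain: gamma is cut by the factor
# alpha at the predicted zero crossing of s(t), using tau_hat_k above.
td, dt = 0.2, 1e-3
gamma0, gamma_min, alpha, eps = 1e-3, 1e-4, 0.5, 5e-5   # assumed values
n = int(round(td / dt))
s_hist = [2e-3] * (n + 1)            # constant initial history for s
gamma, s = gamma0, s_hist[-1]
t_reduce = None                      # scheduled gain-reduction time
prev_sign = np.sign(s_hist[0])
for step in range(int(20.0 / dt)):
    t = step * dt
    s_meas = s_hist[0]               # s_m(t) = s(t - td)
    if np.sign(s_meas) != prev_sign:             # crossing of s_m detected
        tau_hat = td * (gamma**2 + eps**2) / (gamma**2 - eps**2)
        t_reduce = t + tau_hat       # predicted crossing of s(t)
        prev_sign = np.sign(s_meas)
    if t_reduce is not None and t >= t_reduce:
        gamma = max(alpha * gamma, gamma_min)    # the reduction rule
        t_reduce = None
    w = 4e-5 * np.sin(0.5 * t)       # bounded disturbance with |w| <= eps
    s = s + dt * (-gamma * np.sign(s_meas) + w)
    s_hist.pop(0)
    s_hist.append(s)
print("final gain:", gamma, "  |s|:", abs(s))    # |s| ~ (gamma_min + eps) td
```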
Note that the gain is not allowed to drop below the value γmin so that γk > ε . We
can prove the following
Proposition: s(t) → [−γmin − ε , γmin + ε ] · td as t → ∞.
Proof: It can be shown that there exists a time instant, tk , such that sm (tk ) = 0.
Then s(tk − td ) = 0. The proof follows from the claim that if γ (t) ≤ γ̄k for t ≥ tk − td
then |s(t)| ≤ (ε + γ̄k )td for t ≥ tk − td . To prove this claim, we define r = s − (ε + γ̄k )td
and assume, without loss of generality, that r(t) > 0 for some t > tk − td while
r(tk − td ) < 0. Since r is continuous, we can find t̄ ≥ tk − td such that r(t̄ ) = 0,
r((t̄ , t]) > 0. Given that ṡ = −γ (t) · sign(s(t − td )) + w(t), |w(t)| ≤ ε , this is only
possible if s((t̄ − td , t − td ]) > 0. The latter implies, in turn, that ṙ((t̄ , t]) ≤ 0, which
contradicts r((t̄ , t]) > 0. The proof is complete. □
Thus the gain reduction procedure guarantees that the limit cycle amplitude can
be reduced asymptotically to or below (γmin + ε )td . Note further that in the above
calculation of τ̂k , ε can be replaced by εk = sup_{tk−2 ≤ t ≤ tk−1 +τk} |w(t)|, which permits
tighter bounding of w based on the most recent information about w(t).
Note that large unmodeled uncertainties, e.g., due to selecting ε to bound w(t) at
most but not all conditions, may cause sm (t) not to switch for longer than expected.
If no switch in sm (t) is detected for more than the threshold time n((γk + ε )/(γk − ε ) + 1)td ,
where n > 1, then γk+1 should be reinstituted to its original, largest value, γ0 . This
forces the feedback to respond rapidly to unmodeled uncertainties.
The simulation results for the controller with the adaptive gain are shown in
Fig. 3.11 by the dash-dotted line. Here γ0 was set to the high gain value (= 10⁻³)
of the fixed gain controller and γmin to the low gain value (= 10⁻⁴) of the fixed
gain controller. The threshold time for increasing the gain was set to 3t̂d and ε to
1 × 10⁻⁴/2. In comparison, the controller with the adaptive gain achieves a slightly
slower response than the controller with the high fixed gain when the air
flow changes at t = 10.5 s and is not correctly estimated. At the same time, the
amplitude of the limit cycle very quickly settles to a much lower value.

3.4 Observer Design for a Diesel Engine Model with Time Delay

In this section we consider the problem of observer design for a model of the air-
supply system in a diesel engine with an exhaust gas recirculation (EGR) valve
and a turbo-compressor with a variable geometry turbine (VGT). The model has a
time delay equal to the interval from the air intake into the cylinder to the exhaust
of the combusted mixture. Based on the available results, one could argue that the
delay does not have a detrimental effect on the performance and robustness of the
feedback control system because it can be either dominated [21] or removed from
the input–output dynamics by a linearizing feedback [25]. This conclusion applies
if all the states of the system are measured. Here we consider the case when only
one of the system's states is available for measurement. We also use the model to discuss
issues associated with observer design for systems with time delay.

3.4.1 Diesel Engine Model

A schematic diagram of a diesel engine is shown in Fig. 3.12. The EGR valve is
used to recirculate the burned gas from the exhaust into the intake manifold and
subsequently into the cylinders in order to reduce NOx emissions. The VGT actuator
Fig. 3.12 Schematic diagram of a diesel engine with a variable geometry turbine and an exhaust
gas recirculation valve.

controls the flow of the exhaust gas through the turbine out of the exhaust manifold.
This actuator is employed to improve steady state fuel economy and the system’s
speed of response. Indeed, fast response is one of the critical objectives of the air-
supply system of a diesel engine. A slow response is associated with the driver’s
perception of a “turbo-lag,” the cause of which is the vehicle acceleration being
limited by inadequate air supply.
Our starting point is the three-state model of the EGR-VGT diesel engine adopted
from [23]. Figure 3.12 shows the compressor air-flow Wc going through the inter-
cooler (that cools the compressed gas) and filling the intake manifold. The EGR
flow Wegr , controlled by the EGR valve, is also cooled by the EGR cooler before
entering the intake manifold. The model uses the intake manifold pressure p1 as
one of the system states. The flow of gas from the intake manifold into the engine
cylinders is approximately proportional to the manifold pressure. That gas exits the
cylinders and enters the exhaust manifold after the “transport” time-delay td approx-
imately equal to 3/4 of an engine cycle (from the middle of the intake stroke to the
middle of the exhaust stroke). The delay is inversely proportional to engine speed
N and is approximately equal to td = 90/N with td in seconds and N in revolutions per
minute. The second state of the system is the exhaust manifold pressure p2 . While
the engine gas flow fills the exhaust manifold, the EGR and turbine flows empty it.
The EGR flow from the exhaust manifold into the intake manifold depends on p1
and p2 and the opening of the EGR valve. Similarly, the flow through the turbine Wt
depends on exhaust pressure p2 , the turbine downstream pressure (assumed constant
and equal to the ambient), and the position of the VGT actuator. The flow Wt drives
the turbine, that powers the compressor. The third state of the system is, hence, the
compressor power Pc .
The dynamics for the three-state diesel engine model proposed in [23] is given by

ṗ1 = k1 (Wc + Wegr − ke p1 ),
ṗ2 = k2 (ke p1 (t − td ) − Wegr − Wt + Wf ),    (3.30)
Ṗc = (1/τ )(ηm Pt − Pc ),
where the intake and exhaust manifold pressures are normalized by the ambient
pressure, Wf is the fuel mass flow rate, and
 
Wc = (ηc /(Ta cp )) Pc /(p1^μ − 1),   Pt = ηt cp T2 (1 − 1/p2^μ ) Wt .    (3.31)

The coefficients ηc , ηt , and ηm represent the compressor, turbine, and mechanical
efficiencies, respectively, cp is the specific heat at constant pressure equal to 1
kJ/(kg K), ke is the engine pumping coefficient, τ is the time constant of turbine-to-
compressor power transfer, and ki = R Ti /Vi , with Vi and Ti , i = 1, 2, being the intake
and exhaust manifold volumes and gas temperatures. R is the specific gas constant
for air, equal to 0.287 kJ/(kg K). The exponent μ is equal to (γ − 1)/γ , where γ is the
temperature-dependent ratio of specific heats, equal to 1.4 at T = 300 K. The values
The ambient and intake manifold temperatures Ta and T1 are assumed constant,
while the exhaust temperature T2 and the parameters ηt and ηc are assumed to
depend on the fuel flow rate Wf and the engine speed N. For the purpose of con-
trol design, the EGR mass flow rate Wegr and turbine mass flow rate Wt have been
considered the control inputs in [21, 23, 25]. In other words, the desired turbine and
EGR flows are generated by the closed-loop controller and the actuator positions
are then manipulated to maintain the flows, based on the upstream and downstream
pressure signals for each orifice.
Even when the delay is neglected, designing an effective control system for the
diesel engine model (3.30) is a difficult task. The system is multivariable and highly
nonlinear. For example, it is obvious from the model that, when the normalized pres-
sure p2 is close to 1, the control Wt has no effect on compressor power. Moreover, it
has been shown in [29] that, due to the nonlinearities, the steady-state input–output
gain matrix has entries that change sign within a typical operating region of the
engine. Such a DC-gain sign reversal is a major obstacle to using integral action in
the controller.
By employing the control Lyapunov function (CLF) concept while assuming
td = 0, a proportional multivariable controller was proposed in [23]. The CLF con-
troller, which uses the feedback of the compressor flow Wc (also referred to as the
mass air flow – MAF) and the exhaust pressure p2 , was compared with several other
methods in [41]. The measurement of the intake pressure p1 was also used by the
controller to obtain the EGR valve opening by inverting the EGR flow. The same
type of feedback has been shown analytically in [21] to be stabilizing when td = 0.
Fig. 3.13 Experimentally measured response of the compressor mass air flow to a step command
(left) and the actions of the EGR and VGT actuators (right)

The compressor mass air flow signal Wc is the most relevant, directly measured
variable when the air-supply speed of response is considered. Experimental results
for this control law, tuned for fast Wc response, have been reported in [23, 41]. The
performance of this full state feedback system5 is illustrated in Fig. 3.13. From
the plot on the left one can estimate the time constant governing the Wc response
to be about 150 ms. The plot on the right shows the percent opening of the EGR
and VGT actuators and illustrates the difference between the commanded positions
of the actuators (as computed by the feedback controller) and the actual positions
attenuated by the actuator dynamics. Despite the noticeable actuator phase lag, the
response of Wc is fast.
The paper [41] also discussed using a reduced set of sensors for the controller.
From the cost point of view, it is particularly beneficial to remove the mass air flow
sensor for Wc and use measurements of p1 and p2 to estimate it. In the next section,
we revisit the sensor selection issue and consider the observer design in the case
when only one of the two pressure sensors is available. In particular, we address the
issue of observer design in the presence of nonnegligible time-delay td .

5Pc is not directly measured, but can be computed using the first equality in (3.31) from Wc and
p1 measurements.
3.4.2 Observer Design

To preserve the fast system response, it is desirable to place the observer poles to be
faster than those of the closed-loop system with full state feedback. This means that
the observer time constants need to be close to the value of the time delay, which
is equal to 60 ms at N = 1,500 rpm. Thus, the delay may affect the observer per-
formance and robustness more significantly than it affected the full-state-feedback
controller.
When deciding which pressure sensor to use for observer design, there is an
interesting trade-off to be considered:
1. Selecting y = p1 allows direct cancellation of the delay term k2 ke p1 (t − td ) from
the observer error dynamics. This effectively removes the delay as an issue for
finding the observer gains that achieve the desired bandwidth. On the other hand,
p1 is a nonminimum phase output with respect to the control input with lower
relative degree (EGR). Thus, while the delay effect is avoided, a potential loss of
robustness due to the right half plane zero may outweigh the benefits of removing
the delay by cancellation.
2. The output y = p2 is the minimum phase output with respect to both inputs, but
the delay term cannot be cancelled and it stays in the dynamics of the observer
error. In this case, finding observer gains that achieve the desired performance
becomes more difficult.
At this point we select the exhaust pressure as the measured signal, y = p2 , and
design an observer that uses finite time (predictor) integrals to achieve a finite spec-
trum of the observer error dynamics.
Because the downstream EGR valve pressure p1 is not measured any more, the
first step in the observer design is to express the EGR flow rate Wegr in the model in
terms of p1 , p2 , and the EGR valve flow coefficient α (EGR):
 
Wegr = α (EGR) p2 Ψ (p2 /p1 ),

where Ψ (·) is the standard subsonic flow correction factor given by equation (2.7)
in [23]. The dependence of Wegr on the upstream temperature T2 is included in α (·).
Hence, the model of a diesel engine air supply system takes the form
   
ṗ1 = k1 [ (ηc /(Ta cp )) Pc /(p1^μ − 1) + α (EGR) p2 Ψ (p2 /p1 ) − ke p1 ]
ṗ2 = k2 [ ke p1 (t − td ) − α (EGR) p2 Ψ (p2 /p1 ) − Wt + Wf ]    (3.32)
Ṗc = (1/τ ) [ ηm ηt cp T2 (1 − 1/p2^μ ) Wt − Pc ]
y = p2 .
The fuel mass flow rate Wf is known and the turbine mass flow rate Wt is a function
of known variables: p2 and the VGT actuator position. Note that only the p1 state
impacts p2 (Wt and Wf are not directly dependent on Pc ). Thus, the states p1 and Pc
will have to be estimated based on the impact that p1 makes on the measured signal
p2 through the nondelay term α (EGR)p2 Ψ (p2 /p1 ) and the delay term ke p1 (t − td ). At a
typical operating point, the magnitude of the impact of the delayed value of p1 on p2
will dominate that of the direct (nondelayed) value, as we shall see in the numerical
example given below.
The next step in the observer design is to linearize the model (3.32) around an
equilibrium operating point (p1e , p2e , Pce ) (the subscript “e” identifies the value of
a variable at the equilibrium point). By defining x1 = p1 − p1e , x2 = p2 − p2e , x3 =
Pc −Pce , u1 = α (EGR)− α (EGRe ), u2 = Wt −Wte , and d = Wf , a known disturbance,
we can compute the Jacobian linearization of (3.32) around the equilibrium point
and express it in the form:

ẋ = A0 x + A1 x(t − td ) + B1 u + B2 d,
y = Cx,    (3.33)

where x = [x1 x2 x3 ]T , u = [u1 u2 ]T , and C = [0 1 0]. In general, the entries of matrices


A0 , A1 , and B depend on the equilibrium point at which the system is linearized.
Their form
A0 = [−a11 a12 a13 ; a21 −a22 0 ; 0 a32 −a33 ],   A1 = [0 0 0 ; a121 0 0 ; 0 0 0 ],
B1 = [b11 0 ; −b21 −b22 ; 0 b32 ],   B2 = [0 ; b22 ; 0 ]    (3.34)

reveals the structure of the system; semicolons separate matrix rows (note: all the
coefficients aij , bij , as well as a121 and b22 , are nonnegative at all equilibrium points).
To estimate the state x of the system (3.32), we consider the observer of the form
x̂˙ = A0 x̂ + A1 x̂(t − td ) + B1 u + B2 d + ∑_{i=0}^{l} Li [y(t − i td ) − Cx̂(t − i td )]
    + ∫_0^{l td} Λ (θ )[y(t + θ − l td ) − Cx̂(t + θ − l td )] dθ ,    (3.35)

where l is a positive integer. The dynamics of the observer error e = x − x̂ is, there-
fore, given by
ė = A0 e + A1 e(t − td ) − ∑_{i=0}^{l} Li Ce(t − i td ) − ∫_0^{l td} Λ (θ )Ce(t + θ − l td ) dθ .    (3.36)

In general, the observer error dynamics will be infinite dimensional. We would


like to find values of Li and Λ (·) so that (3.36) has a finite set of stable poles πj
that are well damped and are faster than the controller dynamics. Considering the
desired (and achievable) speed of response of the closed-loop system, we require
Re{πj } < −10 (rad/s). This spectrum assignment problem is solvable if the pair
(C, A0 + A1 e−std ) is spectrally observable [42], that is, if
 
rank [ sI − A0 − A1 e^{−s td} ; C ] = n,   ∀s ∈ C,
where C denotes the set of complex numbers and n is the dimension of the system (in
this case n = 3). One can check that the spectral observability condition is satisfied
for the model (3.33) because the (2,1) entries of A0 and A1 have the same sign
regardless of the operating point.
To find the observer matrices Li and Λ (·) that satisfy the requirements we propose
to use the duality between the controller and observer design (see, for example,
[37] for the controller–observer duality in delay systems). In this way, the observer
design problem is transformed into finding a control law for v that achieves the finite
spectrum assignment for the system of the form

ż = A0^T z + A1^T z(t − td ) + C^T v.    (3.37)

This problem can be tackled by several design methods, including the Laplace trans-
form based [8,33,43] and the time domain based [22]. In this chapter we shall use the
latter because the transformation into the appropriate form is obvious, after which
the computation of the control law can follow a well-defined process.
The coordinate transformation consists of reordering the states and splitting them
into the strongly controllable part (denoted by ξ ) and the part that has to be con-
trolled through delay terms (denoted by χ ): [χ1 , χ2 , ξ ] = T z = [z3 , z1 , z2 ]. Hence,
the coordinate transformation matrix T is given by
T = [0 0 1 ; 1 0 0 ; 0 1 0].

In the (χ , ξ ) coordinates the system takes the form:


χ̇ = F χ + h0 ξ + h1 ξ (t − td ),
ξ̇ = a32 χ1 + a12 χ2 − a22 ξ + v,    (3.38)
where F = [−a33 a13 ; 0 −a11 ],   h0 = [0 ; a21 ],   h1 = [0 ; a121 ],
and the coefficients aij correspond to the notation in (3.34). The system (3.38)
belongs to the class considered in the “forwarding” section (Sect. 3.2) of [22].
Instead of the more complex Lyapunov-Krasovskii construction of the stabilizing
control, here we shall use the simpler spectrum-equivalence result from the same
paper. The design proceeds in several steps:
1. Use the control input transformation to remove the χ -state terms that affect the
ξ -subsystem: v = v0 − a32 χ1 − a12 χ2 .
2. Replace the delay term h1 ξ (t − td ) in the χ -dynamics with the nondelay term
e−Ftd h1 ξ (t). After the first two steps, the system (3.38) is converted into the
system without time delay:
 
χ̇ = F χ + (h0 + e^{−F td} h1 ) ξ ,
ξ̇ = −a22 ξ + v0 .    (3.39)

3. Find a feedback law


v0 = −Kχ χ − kξ ξ (3.40)
that stabilizes (3.39). Any of the standard control design methods could be used,
including pole placement, LQR, and H∞ .
4. The gains Kχ = [kχ 1 , kχ 2 ] and kξ are substituted into the formula ((2.16) in [22])
v = −a32 χ1 − a12 χ2 − kξ ξ − Kχ [ χ + ∫_0^{td} e^{−F θ} h1 ξ (t + θ − td ) dθ ].    (3.41)

It has been shown in [22] that the spectrum of the nondelay system (3.39) with
the feedback law (3.40) is the same as that of the delay system (3.38) with the
feedback law (3.41). Hence the latter has a finite spectrum.
To obtain the observer gains from the above controller we first revert to the orig-
inal z-coordinates of the system (3.37):
v = −K0 z − Kχ ∫_0^{td} e^{−F θ} h1 Cz(t + θ − td ) dθ ,    (3.42)

where K0 = [kχ 2 + a12 , kξ , kχ 1 + a32 ]. Hence, for the observer (3.35), l = 1 and the
observer gains Li , i = 0, 1 and Λ (·) are obtained by transposing the controller gains
in (3.42):
L0 = K0^T ,   L1 = 0,   Λ (θ ) = C^T h1^T e^{−F^T θ} Kχ^T .    (3.43)
Thus, the observer is given by

x̂˙ = A0 x̂ + A1 x̂(t − td ) + B1 u + B2 d + K0^T (y − Cx̂)
    + ∫_0^{td} C^T h1^T e^{−F^T θ} Kχ^T [y(t + θ − td ) − Cx̂(t + θ − td )] dθ .    (3.44)

The spectrum of the observer error dynamics (3.36) with the gains (3.43) is also
finite and is the same as the spectrum of the closed-loop system (3.39) with the
control (3.40).

3.4.3 Numerical Results

To illustrate the performance of the observer in simulations, we pick an operating
point for linearization: N = 1,500 (rpm) (hence, the delay is td = 60 ms), p1e = 1.3
(bar), p2e = 1.5 (bar), and Pc = 1 (kW). The parameters of the model are selected
to represent a generic engine, turbine, and compressor hardware: ke = 26 (g/s/bar),
ηc = ηt = 0.6, ηm = 0.85, τ = 0.2 (s). The engine parameters have been assumed
independent of the state values. The operating point corresponds to the compressor
air flow Wc of about 26 (g/s) with 30% EGR flow rate. Finally, we have selected the
temperature values of Ta = 300 (K), T2 = 700 (K) and assumed they are indepen-
dent of the state x. With typical sizes of the intake and exhaust manifold volumes,
the chosen temperatures result in k1 = 0.23 and k2 = 0.8. At this operating point,
the matrices that define the dynamics of the linearized system are
A0 = [−27 3.6 6 ; 9.6 −12.5 0 ; 0 9 −5],   A1 = [0 0 0 ; 21 0 0 ; 0 0 0],
B1 = [0.26 0 ; −0.9 −0.8 ; 0 0.18],   B2 = [0 ; 0.8 ; 0],   C = [0 1 0].    (3.45)
We note that our choice to treat Wt , rather than the position of the VGT actuator, as
a control input has resulted in a system that is open loop unstable at this operating
point. The instability is not caused by the delay – the matrix A0 + A1 , corresponding
to the system matrix for td = 0, is unstable with eigenvalues at −29, −16, and 1.
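This claim is easy to verify numerically; the following snippet checks the eigenvalues of A0 + A1 from (3.45).

```python
import numpy as np

A0 = np.array([[-27.0, 3.6, 6.0], [9.6, -12.5, 0.0], [0.0, 9.0, -5.0]])
A1 = np.array([[0.0, 0.0, 0.0], [21.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
# eigenvalues of the td = 0 system matrix; one is positive (unstable)
print(np.sort(np.linalg.eigvals(A0 + A1).real))  # approx. -29, -16, 1
```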
Following the procedure described in the previous subsection, we transform the
observer design problem into the controller design for the nondelay system (3.39).
This system is given by
   
χ̇ = [−5 6 ; 0 −27] χ + [−21 ; 116] ξ ,
ξ̇ = −12.5 ξ + v0 .    (3.46)

We find the controller for this system by employing the LQR method [2] that mini-
mizes the cost

J = ∫_0^∞ ( [χ ^T ξ ] Q [χ ; ξ ] + v0² ) dt.
To obtain the poles of the closed-loop system faster than −10 (rad/s), we had to use
a nondiagonal matrix Q. Selecting
Q = [1000 100 0 ; 100 15 0 ; 0 0 1]

provided the controller gains Kχ = [10.6 3.2], kξ = 8.9, and the closed-loop poles at
−16.7 ± j4.9 and −20. The observer gains L0 and Λ (θ ) are computed from (3.43).
The integral in the observer (3.44) is computed using the standard trapezoidal rule
with Δt = 5 ms.
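A sketch of this design pipeline in Python is given below: the LQR gain for (3.46) is obtained from the continuous-time algebraic Riccati equation, and the distributed kernel Λ (θ ) of (3.43) is evaluated on the θ -grid used by the trapezoidal quadrature. The use of scipy here is an implementation choice, not the authors'; small differences in the computed gains may arise from the Riccati solver.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, expm

F = np.array([[-5.0, 6.0], [0.0, -27.0]])
h1 = np.array([[0.0], [21.0]])
C = np.array([[0.0, 1.0, 0.0]])
# delay-free design system (3.46): state [chi1, chi2, xi], input v0
Ac = np.array([[-5.0, 6.0, -21.0],
               [0.0, -27.0, 116.0],
               [0.0, 0.0, -12.5]])
Bc = np.array([[0.0], [0.0], [1.0]])
Q = np.array([[1000.0, 100.0, 0.0],
              [100.0, 15.0, 0.0],
              [0.0, 0.0, 1.0]])
R = np.array([[1.0]])
P = solve_continuous_are(Ac, Bc, Q, R)
K = np.linalg.solve(R, Bc.T @ P)        # [K_chi, k_xi] ~ [10.6, 3.2, 8.9]
K_chi = K[:, :2]                        # 1 x 2 block acting on chi

td, dt = 0.06, 0.005                    # delay and quadrature step
thetas = np.arange(0.0, td + dt / 2, dt)
# observer kernel Lambda(theta) = C^T h1^T exp(-F^T theta) K_chi^T, (3.43)
Lam = [C.T @ h1.T @ expm(-F.T * th) @ K_chi.T for th in thetas]
print("LQR gain K =", np.round(K, 2))
print("Lambda(0)  =", Lam[0].ravel())
```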
The performance of the observer is shown in Figure 3.14. The inputs for this
simulation run are selected randomly: u1 is a square wave with amplitude 0.5 and
frequency 0.5 (Hz) and u2 is a sine wave with amplitude 3 and frequency 0.07 (Hz).
Fig. 3.14 Convergence of the state estimates x̂ (dash curves) to the state x (solid curves): top plot
– x1 and x̂1 ; middle plot – x2 and x̂2 ; bottom plot – x3 and x̂3

3.4.4 Summary and Discussion

The control design for the air-supply system in an EGR-VGT turbocharged diesel
engine is a difficult problem primarily due to significant system nonlinearities. In
this section we have examined the problem of observer design for this system, in
which the presence of time delay may also present a problem. The degree of diffi-
culty depends on the selection of the measured output. We have linearized the sys-
tem and used the controller–observer duality and a recent control design method to
find observer gains that satisfy the design objective. The performance is illustrated
by numerical simulations.
One can point out several outstanding issues with respect to observer design for
a system such as (3.32):
• The observer (3.44) is very easy to design and test in simulations, but an experi-
mental implementation would be somewhat more demanding because of the need
to approximately compute the finite time integral.
• An observer design method with partial spectrum assignment, such as [31], does
not use finite time integrals and may be easier to implement, provided that the
number of undesirable open loop poles is small. On the other hand, computation
of open loop poles of an infinite dimensional (open loop) system increases design
complexity and may reduce robustness of the resulting observer.
• The authors are not aware of a comparison, or guidelines regarding robustness
properties of different observer design methods for systems with time delay.
• As mentioned earlier, the dominant feature of the system (3.32) is significant
nonlinearity. Computing Jacobian linearizations at many equilibrium points in
the operating region, designing linear observers at each one of them, and imple-
menting the observer by gain scheduling seems prohibitively complex.
• On the other hand, it is not clear if any of the existing observers for nonlin-
ear, time-delay systems, such as the one proposed in [13], are applicable to this
problem.

3.5 Concluding Remarks

In this chapter, the authors have discussed several application problems in the area of
powertrain control from the perspective of control of time-delay systems. The asso-
ciated models and control problems for ISC and air-to-fuel ratio control in gasoline
engines and for state estimation in diesel engines with exhaust gas recirculation and
variable geometry turbocharging have been reviewed. Modern engines and trans-
missions rely on many control functions that are impacted by the delays and in this
light we have only covered a small (but representative) subset of relevant application
problems.
Nevertheless, the authors’ broader point has been that more effective analysis
and synthesis methods for time-delay systems can lead to improvements in fuel
economy, emissions, and drivability of automotive vehicles. Several connections
with the theoretical literature on control of time delay systems have been discussed.
It is hoped that this chapter will stimulate further interest in applying new theoretical
results to powertrain control and in the development of new theoretical results that
address special features of these problems.

Acknowledgments V. Averina introduced the authors to the method of [9] for constructing the sta-
bility charts, and some of the results in Sect. 3.3 are based on the collaboration with her. G. Song
has assisted in obtaining some of the results featured in Sect. 3.3 and S. Magner has conducted
experimental measurements of the air-to-fuel ratio delay in the vehicle. The authors also acknowl-
edge A. Annaswamy, S. Diop, T.L. Maizenberg, Y. Orlov, V. Winstead, D. Yanakiev, and Y. Yildiz
for past collaborations on the topic of time-delay systems, and D. Hrovat, A. Gibson, J. Michelini,
G. Ulsoy, and S. Yi for valuable suggestions that are reflected in the manuscript.
Appendix: Robust and Stochastic Stability

In this section, we discuss linear matrix inequality (LMI) based techniques for the
analysis of robust and stochastic stability, which were used in this chapter.
Consider an uncertain system with a time delay (3.25) and a functional,

V = V1 + V2 ,
V1 = [x(t) + B ∫_{t−td}^{t} x(s) ds]^T P [x(t) + B ∫_{t−td}^{t} x(s) ds],   P > 0,
V2 = ∫_{t−td}^{t} (s − t + td ) x^T (s)Rx(s) ds,   R > 0.

After straightforward algebraic manipulations, its time rate of change satisfies



V̇ ≤ x^T (t) [ P(A + B + Δ (t)F) + (A + B + Δ (t)F)^T P
    + td (A + B + Δ (t)F)^T PBR^{−1} B^T P(A + B + Δ (t)F) + td R ] x(t).

Consequently, a sufficient condition for the asymptotic stability of (3.25) is


P(A + B + Δ (t)F) + (A + B + Δ (t)F)^T P + td R
    + td (A + B + Δ (t)F)^T PBR^{−1} B^T P(A + B + Δ (t)F) < 0,    (3.47)
for some P > 0, R > 0, and for all t > 0.
By using Schur's complement, (3.47) can be replaced by

Θ = [ (A + B + Δ (t)F)^T P + P(A + B + Δ (t)F) + td R ,   td B^T P(A + B + Δ (t)F) ;
      td (A + B + Δ (t)F)^T PB ,   −td R ] < 0.    (3.48)

Since Δ (t) is a scalar, we can rearrange (3.48) as


 
Θ = [ (A + B)^T P + P(A + B) + td R ,   td B^T P(A + B) ;   td (A + B)^T PB ,   −td R ]
    + [ P ,  td B^T P ;  0 ,  0 ] [ Δ (t)I ,  0 ;  0 ,  Δ (t)I ] [ F ,  0 ;  0 ,  F ]    (3.49)
    + [ F ,  0 ;  0 ,  F ]^T [ Δ (t)I ,  0 ;  0 ,  Δ (t)I ] [ P ,  0 ;  td PB ,  0 ].

Applying Lemma A.56 from [7] and using Schur's complement, the following sufficient
condition in the form of an LMI is obtained:

6 In the notation of [7], suppose Γ ^T Γ ≤ I and Y is symmetric. Then Y + H Γ E + E^T Γ ^T H^T < 0 if and
only if there exists ε > 0 such that Y + ε HH^T + (1/ε )E^T E < 0.
[ (A + B)^T P + P(A + B) + td R + ε F^T F ,   td B^T P(A + B) ,   P ,   0 ;
  td (A + B)^T PB ,   −td R + ε F^T F ,   td PB ,   0 ;
  P ,   td B^T P ,   −ε I ,   0 ;
  0 ,   0 ,   0 ,   −ε I ] < 0    (3.50)
for some P > 0, R > 0, and ε > 0.
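The feasibility of such LMIs is easy to check with off-the-shelf semidefinite programming tools. The sketch below poses (3.50) in CVXPY (an implementation choice for illustration; the chapter itself used SeDuMi) for an assumed operating point, with illustrative numerical values.

```python
import numpy as np
import cvxpy as cp

def lmi_3_50_feasible(A, B, F, td, margin=1e-6):
    # Feasibility of the LMI (3.50): a feasible (P, R, eps) certifies
    # robust asymptotic stability of (3.25) for all |Delta(t)| <= 1.
    n = A.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    P = cp.Variable((n, n), symmetric=True)
    R = cp.Variable((n, n), symmetric=True)
    eps = cp.Variable(nonneg=True)
    T11 = (A + B).T @ P + P @ (A + B) + td * R + eps * (F.T @ F)
    T12 = td * B.T @ P @ (A + B)
    M = cp.bmat([
        [T11,   T12,                        P,           Z],
        [T12.T, -td * R + eps * (F.T @ F),  td * P @ B,  Z],
        [P,     td * B.T @ P,               -eps * I,    Z],
        [Z,     Z,                          Z,           -eps * I],
    ])
    cons = [P >> margin * I, R >> margin * I, eps >= margin,
            M << -margin * np.eye(4 * n)]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status in ("optimal", "optimal_inaccurate")

a, tau0, k, theta, td = 5.0, 0.6, -10.0, 0.2, 0.1   # assumed values
A = np.array([[0.0, 1.0], [-a / tau0, -(a + 1.0 / tau0)]])
B = np.array([[0.0, 0.0], [k, 0.0]])
F = np.array([[0.0, 0.0], [-a * theta, -theta]])
print("robustly stable:", lmi_3_50_feasible(A, B, F, td))
```

Increasing θ until feasibility is lost reproduces, in spirit, the maximum tolerable uncertainty curves of Fig. 3.9.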
Consider next a linear stochastic system with a delay,
dx(t) = [Ax(t) + Bx(t − td )] dt + Cx(t) dw(t),    (3.51)

where w is a scalar standard Wiener process. A sufficient condition ( [30], p. 419)


for the delay-dependent mean-square stochastic stability of this system is
[ (A + B)^T P + P(A + B) + C^T PC + td R ,   td (A + B)^T PB ;
  td B^T P(A + B) ,   −td R ] < 0,    (3.52)
for P > 0, R > 0.


The software package SeDuMi [35] was used in this chapter for the numerical
solution of the linear matrix inequalities.

References

1. Alfieri E, Amstutz A, Onder C, and Guzzella L (2007). Automatic design and parameter-
ization of a model-based controller applied to the AF-ratio control of a diesel engine. In:
Proceedings of the fifth IFAC Symposium on Advances in Automotive Control, Seascape,
Monterey Coast, CA.
2. Anderson B and Moore J (1990). Optimal control: Linear quadratic methods. Prentice-Hall,
Englewood Cliffs, NJ.
3. Aquino C (1981). Transient A/F control characteristics of 5-liter central fuel injection engine.
SAE Paper 810494.
4. Averina V, Kolmanovsky I, Gibson A, Song G, and Bueler E (2005). Analysis and control
of delay-dependent behavior of engine air-to-fuel ratio. In: Proceedings of the 2005 IEEE
Conference on Control Applications, Toronto, Canada, pp. 1222–1227.
5. Belkoura L, and Richard J (2006). A distribution framework for the fast identification of linear
systems with delays. In: Proceedings of the 6th IFAC Workshop on Time Delay Systems,
L’Aquila, Italy.
6. Bengea S, Li X, and DeCarlo R (2004). Combined controller–observer design for uncertain
time delay systems with applications to engine idle speed control. ASME Journal of Dynamic
Systems, Measurement and Control, 126:772–780.
7. Boukas E and Liu Z (2002). Deterministic and stochastic time delay systems. Birkhauser,
Boston.
8. Brethé D and Loiseau J (1998). An effective algorithm for finite spectrum assignment of
single-input systems with delay. Mathematics and Computers in Simulation 45:339–348
9. Butcher E, Ma H, Bueler E, Averina V, and Szabo Z (2004). Stability of linear time-periodic
delay-differential equations via Chebyshev polynomials. International Journal for Numerical
Methods in Engineering, 59:895–922.
10. Fridman E, Fridman L, and Shustin E (2000). Steady modes in relay control systems with
time delay and periodic disturbances. ASME Journal of Dynamic Systems, Measurement and
Control, 122:732–737.
11. Fridman L (2003). Private communications.
12. Fridman E and Niculescu S (2008). On complete Lyapunov-Krasovskii functional techniques
for uncertain systems with fast-varying delays. International Journal of Robust and Nonlinear
Control, 18(3):364–374.
13. Germani A, Manes C, and Pepe P (2001). An asymptotic state observer for a class of nonlinear
delay systems. Kybernetika, 37:459–478.
14. Gibson A, Kolmanovsky I, and Hrovat D (2006). Application of disturbance observers to
automotive engine idle speed control for fuel economy improvement. In: Proceedings of the
2006 American Control Conference, Minneapolis, Minnesota, pp. 1197–1202.
15. Glielmo L, Santini S, and Cascella I (2000). Idle speed control through output feedback sta-
bilization for finite time delay systems. In: Proceedings of the American Control Conference,
Chicago, Illinois, pp. 45–49.
16. Gomez O, Orlov Y, and Kolmanovsky I (2007). On-line identification of SISO linear time-
delay systems from output measurements: Identifiablity analysis, identifier synthesis, and
application to engine transient fuel identification. Automatica, 43(12):2060–2069.
17. Heywood J (1988). Internal combustion engine fundamentals. McGraw-Hill, New York.
18. Hrovat D, Dobbins C, and Powell B (1998). Comments on applications of some new tools to
robust stability analysis of spark ignition engine: A case study. IEEE Transactions on Control
Systems Technology, 6(3):435–436.
19. Hrovat D and Powers W (1998). Computer control systems for automotive powertrains. IEEE
Control Systems Magazine, August 3–10.
20. Hrovat D and Sun J (1997). Models and control methodologies for IC engine idle speed
control design. Control Engineering Practice, 5(8):1093–1100.
21. Jankovic M (2001). Control design for a diesel engine model with time delay. In: Proceedings
of the Conference on Decision and Control, Orlando, Florida.
22. Jankovic M (2007). Forwarding, backstepping, and finite spectrum assignment for time delay
systems. In: Proceedings of the American Control Conference, New York.
23. Jankovic M, Jankovic M, and Kolmanovsky I (2000). Constructive Lyapunov control design
for turbocharged diesel engines. IEEE Transactions on Control Systems Technology, 8:
288–299.
24. Jankovic M, Magner S, Hagner D, and Wang Y (2007). Multi-input transient fuel control with
auto-calibration. In: Proceedings of American Control Conference, New York.
25. Jankovic M, and Kolmanovsky I (1999). Controlling nonlinear systems through time delays:
An automotive perspective. In: Proceedings of 1999 European Control Conference, Karsruhe,
Germany, paper F1027-1.
26. Khan B and Lehman B (1996). Setpoint PI controllers for systems with large normalized
dead-time. IEEE Transactions on Control Systems Technology, 4(4):459-466.
27. Kiencke U and Nielsen L (2000). Automotive control systems for engine, driveline, and vehi-
cle. Springer, New York.
28. Kolmanovsky I and Yanakiev D (2008). Speed gradient control of nonlinear systems and its
applications to automotive engine control. Journal of SICE, Japan 47(3).
29. Kolmanovsky I, Moraal P, van Nieuwstadt M, and Stefanopoulou A (1999). Issues in mod-
elling and control of intake flow in variable geometry turbocharged engines. In: System Mod-
elling and Optimization, Proceedings of 1997 IFIP Conference, Edited by M. Polis et al.
Chapman and Hall/CRC Research Notes in Mathematics, pp. 436–445.
30. Kolmanovskii V and Myshkis A (1999). Introduction to the theory and applications of func-
tional differential equations. Kluwer, Dordrecht.
31. Leyva-Ramos J and Pearson A (1995). An asymptotic modal observer for linear autonomous
time lag systems. IEEE Transactions on Automatic Control, 40:1291–1294.
32. Magner S (2008). Private communications.
33. Manitius A and Olbrot A (1979). Finite spectrum assignment problem for systems with
delays. IEEE Transactions on Automatic Control, 24:541–553.
34. Niculescu S and Annaswamy A (2003). An adaptive Smith controller for time delay systems
with relative degree n∗ ≤ 2. Systems and Control Letters, 49(5):347–358.
35. Peaucelle D, Henrion D, and Labit Y (2001). User’s guide for SeDuMi interface 1.01: solving
LMI problems with SeDuMi.
36. Roduner C, Onder C, and Geering H (1997). Automated design of an air/fuel controller for an
SI engine considering the three-way catalytic converter in the H∞ approach. In: Proceedings
of the fifth IEEE Mediterranean Conference on Control and Systems, Paphos, Cyprus.
37. Salamon D (1980). Observers and duality between observation and state feedback for time
delay systems. IEEE Transactions on Automatic Control, 25:1187–1192.
38. Stotsky A, Egardt B, and Eriksson S (2000). Variable structure control of engine idle speed
with estimation of unmeasurable disturbances. ASME Journal of Dynamic Systems, Measure-
ment, and Control, 122:599–603.
39. Stotsky A, Kolmanovsky I, and Eriksson S (2004). Composite adaptive and input observer-
based approaches to the cylinder flow estimation in spark ignition automotive engines. Inter-
national Journal of Adaptive Control and Signal Processing, 18(2):125–144.
40. Stotsky A, Egardt B, and Eriksson S (2000). Variable structure control of engine idle speed
with estimation of unmeasurable disturbances. ASME Journal of Dynamic Systems, Measure-
ment, and Control, 122:599–603.
41. van Nieuwstadt M, Kolmanovsky I, Moraal P, Stefanopoulou A, and Jankovic M (2000).
Experimental comparison of EGR-VGT control schemes for a high speed diesel engine. IEEE
Control Systems Magazine, 20:63–79.
42. Watanabe K (1986). Finite spectrum assignment and observer for multivariable systems with
commensurate delays. IEEE Transactions on Automatic Control, 31:543–550.
43. Watanabe K, Nobuyama E, Kitamori T, and Ito M (1992). A new algorithm for finite spectrum
assignment of single-input systems with time delay. IEEE Transactions on Automatic Control,
37:1377–1383.
44. Yildiz Y, Annaswamy A, Yanakiev D, and Kolmanovsky I (2007). Adaptive idle speed control
for internal combustion engines. In: Proceedings of the 2007 American Control Conference,
New York, pp. 3700–3705.
45. Yildiz Y, Annaswamy A, Yanakiev D, and Kolmanovsky I (2008). Adaptive air-fuel ratio
control for internal combustion engines. In: Proceedings of the 2008 American Control Con-
ference, Seattle, pp. 2058–2063.
46. Yildiz Y, Annaswamy A, Yanakiev D, and Kolmanovsky I (2008). Automotive powertrain
control problems involving time delay: An adaptive control approach. In: Proceedings of the
ASME 2008 Dynamic Systems and Control Conference, Ann Arbor, to appear.
47. Yanakiev D (2008). Private communications.
48. Zhang F, Grigoriadis K, Franchek M, and Makki I (2007). Linear parameter-varying lean
burn air-fuel ratio control for a spark ignition engine. ASME Journal of Dynamic Systems,
Measurement and Control, 129:404–414.
49. Winstead V and Kolmanovsky I (2004). Observers for fault detection in networked sys-
tems with random delays. In: Proceedings of American Control Conference, Boston,
pp. 2457–2462.
Chapter 4
Stability Analysis and Control of Linear
Periodic Delayed Systems Using Chebyshev
and Temporal Finite Element Methods

Eric Butcher and Brian Mann

Abstract In this chapter, a brief literature review is provided together with detailed
descriptions of the authors’ work on the stability and control of systems represented
by linear time-periodic delay-differential equations using the Chebyshev and tem-
poral finite element analysis (TFEA) techniques. Here, the analysis and examples
assume that there is a single fixed discrete delay, which is equal to the principal
period. Two Chebyshev-based methods, Chebyshev polynomial expansion and col-
location, are developed. After the computational techniques are explained in detail
with illustrative examples, the TFEA and Chebyshev collocation techniques are both
applied for comparison purposes to determine the stability boundaries of a single
degree-of-freedom model of chatter vibrations in the milling process. Subsequently,
it is shown how the Chebyshev polynomial expansion method is utilized for both
optimal and delayed state feedback control of periodic delayed systems.

Keywords: Periodic delay systems · Stability · Temporal finite element analysis · Chebyshev polynomials and collocation · Milling process · Optimal control · Delayed state feedback control

4.1 Introduction

It has been known for quite some time that many systems in science and engineer-
ing can be described by models that include past effects. These systems, where the
rate of change in a state is determined by both the past and the present states, are
described by delay differential equations (DDEs). Examples from recent literature
include applications in robotics, biology, human response time, economics, digital
force control, and manufacturing processes [3, 5, 33, 74, 75].


There has been a large amount of recent literature on time-periodic systems with
time delay, in both the mathematical and engineering literature. The vast majority
of papers in the latter category have been concerned with regenerative chatter in
machine tool vibrations. The milling problem, [10,14,16,34,37,45,51–54,61,66,78,
84] in particular, has been the main application in the area of machining processes
for this type of mathematical model, namely periodic delay-differential equations
(DDEs). Other machining applications have included modulated spindle speed [39,
42, 45, 69] or impedance [67, 68] and honing [26]. These problems have motivated
much of the recent work, which has primarily focused on stability [13, 24, 38] and
bifurcation analysis [15, 19, 53, 79] of time-periodic delayed systems, as well as the
related issue of controller design [18, 50, 83]. Other interesting problems related
to response [48], optimization [83], eigenvalue, and parameter identification [55]
have also been recently investigated. Our goal here is not to review all of these
areas, but to briefly highlight the recent work of the authors in this area during the
last few years. This work has been mainly concerned with the stability problem of
time-periodic delayed systems, while response and bifurcation analysis as well as
problems in delayed feedback control, optimal control, and parameter identification
have also received attention. Chebyshev-based methods (polynomial expansion and
collocation) and temporal finite element methods are two of the numerical tools
that have been used in all of these problems, including applications in machining
dynamics (milling and impedance-modulated turning) and the stability of columns
with periodic retarded follower forces [49]. The qualitative study of these types
of dynamical systems often involves a stability analysis, which is presented in the
form of stability charts that show the system stability over a range of parameters
[6, 60, 76].
Time-periodic delayed systems (either linear or nonlinear) simultaneously con-
tain two different types of effects in time: namely time-periodic coefficients due to
parametric excitation and terms that contain time-delay in which the state is evalu-
ated at a previous time instead of the present time. Although both of these effects
have been studied for a long time and have been extensively analyzed separately in
the literature for many years, the existing literature that concerns their simultaneous
presence in a system is much smaller and more recent. The majority of researchers
who work in this area have therefore had an initial interest in one of them (time
delay or parametric excitation) before studying their simultaneous presence in a
system. For instance, several researchers have investigated a well-known example
of a time-periodic ODE known as Mathieu’s equation with its accompanying Strutt-
Ince stability chart [59] while others have investigated a simple second order delay
equation and its accompanying Hsu-Bhatt-Vyshnegradskii stability chart [32]. Pro-
fessor Stépán and his students subsequently incorporated parametric excitation into
their time-delay systems and produced the first stability charts of the Mathieu equa-
tion with time-delay [36] – a prototypical second order system which incorporates
both time-periodic coefficients and time delay. In this stability diagram, one can
easily see the features of both the Strutt-Ince and Hsu-Bhatt-Vyshnegradskii stabil-
ity charts.
Mathematically, the main feature of delayed systems and DDEs is that they are
infinite dimensional and map initial functions to produce the solution, in contrast
to ODEs which are finite-dimensional and map initial values to obtain the solu-
tion. Hence, the language and notation of functional analysis is often employed
in delayed systems [28, 29, 74]. For linear periodic DDEs, the Floquet theory of
linear periodic ODEs can be partially extended [27, 77]. Hence, the Floquet tran-
sition matrix (or monodromy matrix) whose eigenvalues, the Floquet multipliers,
determine the stability of periodic ODEs becomes a compact infinite dimensional
monodromy operator (whose eigenvalues or Floquet multipliers determine the sta-
bility) for periodic DDEs. Although the operator theoretically exists in a function
space independent of any chosen basis, it can be approximated in a finite number
of dimensions as a square matrix (and its eigenvalues computed for stability pre-
diction) only after a certain basis of expansion is chosen, which of course is not
unique. Although we use either orthogonal polynomials or collocation points for
this basis, many other choices are possible. A larger number of terms included in
the basis leads to a larger matrix and hence more eigenvalues and more accuracy.
The neglected eigenvalues are generally clustered about the origin due to the com-
pact nature of the operator and hence do not influence the stability if a sufficiently
large basis is used to ensure convergence. However, although the freedom in choos-
ing the type and size of the basis used implies that the finite matrix approximation
to the monodromy operator is not unique, its approximate eigenvalues converge to
the largest (in absolute value) exact Floquet multipliers as the size of the basis is
increased. Different bases result in different rates of convergence, so that one basis
results in a smaller matrix (and hence more efficient computation) than does another
basis that results in a larger matrix for the same desired level of accuracy.
The majority of the existing techniques for stability analysis of periodic DDEs
are concerned with finding a finite dimensional approximation to the monodromy
operator. (Other techniques are concerned with approximating the largest Flo-
quet multipliers without approximating the monodromy operator.) Other than the
Chebyshev-based and temporal finite element methods, which we believe to be
very competitive for their rates of convergence and ease of use, the main alterna-
tive for stability analysis via computation of the approximate monodromy matrix is
the semi-discretization method [4, 35, 38, 44, 46]. We will not describe this method
or its efficiency here but refer the reader to the above references for more informa-
tion. We wish to point out that the set of MATLAB codes applicable to periodic
DDEs includes the well-known numerical integrator DDE23 [70] and two newer
suites of codes: PDDE-CONT, which performs numerical bifurcation and continuation
for nonlinear periodic DDEs [79], and DDEC, which yields stability charts and other
information for linear periodic DDEs with multiple fixed delays using Chebyshev
collocation [11].
4.2 Stability of Autonomous and Periodic DDEs

A linear autonomous system of n DDEs with a single delay can be described in
state-space form by

ẋ(t) = A1 x(t) + A2 x(t − τ),
x(t) = φ(t), −τ ≤ t ≤ 0,        (4.1)
where x(t) is the n-dimensional state vector, A1 and A2 are square n × n matrices,
and τ > 0. The characteristic equation for the above system, which is obtained by
assuming an exponential solution, becomes

|λI − A1 − A2 e^{−λτ}| = 0 .        (4.2)

When compared with the characteristic equation for an autonomous ordinary
differential equation (ODE), (4.2) has an infinite number of complex roots, which
are the eigenvalues of (4.1). The necessary and sufficient condition for asymptotic
stability is that all the roots (called characteristic exponents) must have negative real
parts [29].
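For the scalar case n = 1, the roots of (4.2) are available in closed form through the Lambert W function, a fact frequently exploited in the DDE literature. The short Python sketch below is an illustration added here (it is not part of the original development, and the set of branches scanned is a heuristic); it locates the rightmost characteristic exponent of ẋ(t) = αx(t) + βx(t − τ).

import numpy as np
from scipy.special import lambertw

def rightmost_exponent(alpha, beta, tau, branches=range(-5, 6)):
    """Scalar case of (4.2): lambda - alpha - beta*exp(-lambda*tau) = 0, with
    roots lambda_k = alpha + W_k(beta*tau*exp(-alpha*tau))/tau on each branch
    k of the Lambert W function; a few branches are scanned for the root of
    largest real part."""
    z = beta * tau * np.exp(-alpha * tau)
    roots = [alpha + lambertw(z, k) / tau for k in branches]
    return max(roots, key=lambda lam: lam.real)

# asymptotic stability holds iff the rightmost root has negative real part
print(rightmost_exponent(-4.0, 2.0, 1.0).real < 0)   # True

For systems (n > 1) no such closed form is available in general, which is one motivation for the discretization methods discussed in this chapter.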
Another general case to consider is the stability of a time periodic system with a
single delay. The general expression for this type of system is

ẋ(t) = A1(t)x(t) + A2(t)x(t − τ),
x(t) = φ(t), −τ ≤ t ≤ 0,        (4.3)
where T is the principal period, i.e., A1 (t + T ) = A1 (t) and A2 (t + T ) = A2 (t).


For convenience, we only consider the case where the single fixed delay τ = T .
However, the following analyses also extend to the case of multiple discrete delays,
which are rationally related to each other and to the period T . Unfortunately, because
the system matrices vary with time, it is not possible to obtain a characteristic equa-
tion similar to (4.2). Analogous to the time-periodic ODE case, the solution can be
written in the form x(t) = p(t)eλ t , where p(t) = p(t + T ) and λj are the character-
istic exponents. However, a primary difference exists between the dynamic map of
a time periodic ODE and a time periodic DDE, since the monodromy operator U for
the DDE (either autonomous or nonautonomous) is infinite dimensional, while the
time-periodic ODE system has a finite dimensional monodromy (Floquet transition)
matrix [36].
A discrete solution form for (4.3) that maps the n-dimensional initial vector func-
tion φ (t) in the interval [−τ , 0] to the state of the system in the first period [T − τ , T ]
(which for the current case where T = τ becomes [0, τ ]), and subsequently to each
period thereafter, can be written in operator form as

mx(i) = U mx(i − 1) ,        (4.4)
where mx is an expansion of the solution x(t) in some basis during either the current
or previous period and mx (0) = mφ represents the expansion of φ (t). Dropping the
subscript x, the state in the interval [0, τ ] is thus m1 = Umφ . Thus, the condition
for asymptotic stability requires that the infinite number of characteristic multipli-
ers ρj = e^{λj τ}, or eigenvalues of U, must have modulus less than one, which
guarantees that the associated characteristic exponents have negative real part.
Since we will study the stability via the eigenvalues (Floquet multipliers) of U,
it is essential that U act from a vector space of functions back to the
same vector space of functions. Note that if A2 (t) ≡ 0, then U becomes the n × n
Floquet transition matrix for a periodic ODE system. Thus, in contrast to the ODE
case, the periodic DDE system has an infinite number of characteristic multipliers,
which must all have modulus less than one for asymptotic stability. The fact that
the monodromy operator is infinite dimensional prohibits a closed-form solution.
In spite of this, one can approach this problem from a practical standpoint – by
constructing a finite dimensional monodromy matrix U that closely approximates
the stability characteristics of the infinite dimensional monodromy operator U. This
is the underlying approach that is followed throughout this fourth chapter. Thus,
(4.4) becomes mi = Umi−1 , where mi is a finite-dimensional expansion vector of the
state in any desired basis, and only a finite number of characteristic multipliers must
be calculated. If a sufficiently large expansion basis is chosen to ensure convergence,
then the infinite number of neglected multipliers are guaranteed to be clustered about
the origin of the complex plane by the compactness of U, and thus do not influence
the stability.

4.3 Temporal Finite Element Analysis
A number of prior works have used time finite elements to predict either the stabil-
ity or temporal evolution of a system [1,9,23,63]. However, temporal finite element
analysis was first used to determine the stability of delay equations in reference [6].
The authors examined a second order delay equation that was piecewise continu-
ous with constant coefficients. Following a similar methodology, the approach was
adapted to examine second-order delay equations with time-periodic coefficients
and piecewise continuity in references [52, 53, 55]. While these prior works were
limited to second order DDEs, Mann and Patel [56] developed a more general frame-
work that could be used to determine the stability of DDEs that are in the form of
a state space model – thus extending the usefulness of the temporal finite element
method to a broader class of systems with time delays.
This section describes the temporal finite element approach of Mann and Patel
[56] and applies this technique to a variety of example problems.

4.3.1 Application to a Scalar Autonomous DDE

A distinguishing feature of autonomous systems is that time does not explicitly
appear in the governing equations. Some application areas where autonomous DDEs
arise are in robotics, biology, and control using sensor fusion. In an effort to improve
the clarity of this section, we first consider the analysis of a scalar DDE before
describing the generalized approach. Thus, the stability analysis of the scalar DDE
is followed by the analysis of a nonautonomous DDE with multiple states.
Time finite element analysis (TFEA) is a discretization approach that divides the
time interval of interest into a finite number of temporal elements. The approach
allows the original DDE to be transformed into the form of a discrete map. The
asymptotic stability of the system is then analyzed from the characteristic multipli-
ers or eigenvalues of the map. We first consider the following time delay system,
originally examined by Hayes [30], which has a single state variable

ẋ(t) = αx(t) + βx(t − τ) ,        (4.5)
where α and β are scalar parameters and τ = 1 is the time delay. Since (4.5)
does not have a closed form solution, the first step in the analysis is to consider an
approximate solution for the jth element of the nth period as a linear combination
of polynomials or trial functions. The assumed solution for the state and the delayed
state are
x_j(t) = \sum_{i=1}^{3} a_{ji}^{n} \phi_i(\sigma) ,   (4.6a)
x_j(t - \tau) = \sum_{i=1}^{3} a_{ji}^{n-1} \phi_i(\sigma) ,   (4.6b)

where a superscript is used to denote the nth and (n − 1)th delay periods for the current
and delayed state variable, respectively. Each trial function, φi (σ ), is written as a
function of the local time, σ , within the jth element and the local time is allowed to
vary from zero to the time for each element, denoted by tj . The introduction of a local
time variable is beneficial because it allows the trial functions to remain orthogonal
on the interval 0 ≤ σ ≤ tj once they have been normalized. To further clarify the
local time concept, assume that E elements are used in the analysis and that the time
for each element is taken to be uniform, then the time interval for a single element
is tj = τ /E and the local time would vary from zero to tj . Furthermore, a set of trial
functions, orthogonal on the interval from zero to one can be made orthogonal over
any 0 ≤ σ ≤ tj interval by simply replacing the original independent variable of the
polynomials with σ/tj. The polynomials used in all the examples that follow are

\phi_1(\sigma) = 1 - 23(\sigma/t_j)^2 + 66(\sigma/t_j)^3 - 68(\sigma/t_j)^4 + 24(\sigma/t_j)^5 ,   (4.7a)
\phi_2(\sigma) = 16(\sigma/t_j)^2 - 32(\sigma/t_j)^3 + 16(\sigma/t_j)^4 ,   (4.7b)
\phi_3(\sigma) = 7(\sigma/t_j)^2 - 34(\sigma/t_j)^3 + 52(\sigma/t_j)^4 - 24(\sigma/t_j)^5 .   (4.7c)
Fig. 4.1 Timeline for the state variable, x, over a time interval of 2τ. Dots denote the locations where the coefficients of the assumed solution are equivalent to the state variable. The beginning and end of each temporal element is marked with dotted lines

The above trial functions are orthogonal on the interval 0 ≤ σ ≤ tj and are
obtained through interpolation. The interpolated trial functions are constructed such
that the coefficients of the assumed solution directly represent the state variable at
the beginning σ = 0, middle σ = tj/2, and end σ = tj of each temporal element.
The graph of Fig. 4.1 is provided to illustrate the important fact that the coefficients
of the assumed solution take on the values of the state variables at specific instances
in time. Thus, these functions satisfy the natural and essential boundary conditions
(i.e., the states at the end of one element match those at the beginning of the follow-
ing element).
Substituting (4.6a) and (4.6b) into (4.5) results in the following

\sum_{i=1}^{3} \left( a_{ji}^{n}\dot{\phi}_i(\sigma) - \alpha a_{ji}^{n}\phi_i(\sigma) - \beta a_{ji}^{n-1}\phi_i(\sigma) \right) = \text{error} ,   (4.8)
which shows a nonzero error associated with the approximate solutions of (4.6a)
and (4.6b). To minimize this error, the assumed solution is weighted by multiply-
ing by a set of test functions, or so called weighting functions, and the integral of
the weighted error is set to zero. This is called the method of weighted residuals
and requires that the weighting functions be linearly independent [65]. The weight-
ing functions used for the presented analysis were shifted Legendre polynomials.
These polynomials were used because they satisfy the required condition of linear
independence. Here, we have chosen to only use the first two shifted Legendre poly-
nomials ψ1 (σ ) = 1 and ψ2 (σ ) = 2(σ /tj ) − 1 to keep the matrices of (4.10) square.
The weighted error expression becomes

\sum_{i=1}^{3} \int_0^{t_j} \left( a_{ji}^{n}\dot{\phi}_i(\sigma) - \alpha a_{ji}^{n}\phi_i(\sigma) - \beta a_{ji}^{n-1}\phi_i(\sigma) \right) \psi_p(\sigma)\, d\sigma = 0 .   (4.9)
After applying each weighting function, a global matrix equation can be obtained
by combining the resulting equations for each element. To provide a representative
expression, we assume two elements are sufficient and write the global matrix of
(4.10). This equation relates the states of the system in the current period to the
states of the system in the previous period,
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
N_{11}^{1} & N_{12}^{1} & N_{13}^{1} & 0 & 0 \\
N_{21}^{1} & N_{22}^{1} & N_{23}^{1} & 0 & 0 \\
0 & 0 & N_{11}^{2} & N_{12}^{2} & N_{13}^{2} \\
0 & 0 & N_{21}^{2} & N_{22}^{2} & N_{23}^{2}
\end{bmatrix}
\begin{bmatrix} a_{11} \\ a_{12} \\ a_{21} \\ a_{22} \\ a_{23} \end{bmatrix}^{n}
=
\begin{bmatrix}
0 & 0 & 0 & 0 & 1 \\
P_{11}^{1} & P_{12}^{1} & P_{13}^{1} & 0 & 0 \\
P_{21}^{1} & P_{22}^{1} & P_{23}^{1} & 0 & 0 \\
0 & 0 & P_{11}^{2} & P_{12}^{2} & P_{13}^{2} \\
0 & 0 & P_{21}^{2} & P_{22}^{2} & P_{23}^{2}
\end{bmatrix}
\begin{bmatrix} a_{11} \\ a_{12} \\ a_{21} \\ a_{22} \\ a_{23} \end{bmatrix}^{n-1} .   (4.10)

The terms inside the matrices of (4.10) are the following scalar terms

N_{pi}^{j} = \int_0^{t_j} \left( \dot{\phi}_i(\sigma) - \alpha\phi_i(\sigma) \right) \psi_p(\sigma)\, d\sigma ,   (4.11a)
P_{pi}^{j} = \int_0^{t_j} \beta\, \phi_i(\sigma)\psi_p(\sigma)\, d\sigma .   (4.11b)
If the time interval for each element is identical, the superscript in these expressions
can be dropped for autonomous systems since the expressions for each element
would be identical. However, the use of nonuniform time elements or the examina-
tion of nonautonomous systems will typically require the superscript notation.
Equation (4.10) describes a discrete time system or a dynamic map that can be
written in the more compact form Ra^n = Ha^{n−1}, where the elements of the R matrix
are defined by the N^j_{pi} terms of (4.11a). Correspondingly, the elements of the H
matrix are defined by the P^j_{pi} terms from (4.11b). Multiplying the dynamic map
expression by R^{−1} results in a^n = Ua^{n−1}, where U = R^{−1}H. Evaluating the
chosen trial functions at the beginning, midpoint, and end of each element allows
us to replace a^n and a^{n−1} with mx(n) and mx(n − 1), respectively. Here, mx(n) is the
vector that represents the variable x at the beginning, middle, and end of each temporal
finite element. Thus, the final expression becomes

mx(n) = U mx(n − 1) ,        (4.12)
which represents a map of the variable x over a single delay period (i.e., the U matrix
relates the state variable at time instances that correspond to the beginning, middle,
and end of each element to the state variable one period into the future).
The eigenvalues of the monodromy matrix U are called characteristic multipliers.
The criterion for asymptotic stability requires that all of the characteristic
multipliers have modulus less than one for a given combination
of the control parameters. Figure 4.2a shows the boundaries between stable and
unstable regions as a function of the control parameters α and β. The characteristic
multiplier trajectories of Fig. 4.2b show how changes in a single control param-
eter can cause the characteristic multipliers to exit the unit circle in the complex
plane. The resulting stability chart is identical to those obtained by Kalmár-Nagy [43].

Fig. 4.2 A converged stability chart (graph (a)) for (4.5) is obtained when using a single temporal element and τ = 1. Stable domains are shaded and unstable parameter domains are unshaded. Graph (b) shows the characteristic multiplier trajectories in the complex plane for α = 4.9 and a range of values for β
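To make the assembly procedure concrete, the following minimal Python sketch (our illustration, not the authors' code; the helper name tfea_matrices is ours) builds the element matrices of (4.11) with the trial functions (4.7) and the first two shifted Legendre weights, assembles the global matrices R and H of (4.10) for E uniform elements, and evaluates the spectral radius of U at one stable point of the chart in Fig. 4.2a.

import numpy as np
from scipy.integrate import quad

def tfea_matrices(alpha, beta, tau=1.0, E=1):
    """Global TFEA matrices R, H for x'(t) = alpha*x(t) + beta*x(t - tau)."""
    tj = tau / E                       # uniform element length
    r = lambda s: s / tj               # local coordinate in [0, 1]
    phi = [lambda s: 1 - 23*r(s)**2 + 66*r(s)**3 - 68*r(s)**4 + 24*r(s)**5,
           lambda s: 16*r(s)**2 - 32*r(s)**3 + 16*r(s)**4,
           lambda s: 7*r(s)**2 - 34*r(s)**3 + 52*r(s)**4 - 24*r(s)**5]
    dphi = [lambda s: (-46*r(s) + 198*r(s)**2 - 272*r(s)**3 + 120*r(s)**4)/tj,
            lambda s: (32*r(s) - 96*r(s)**2 + 64*r(s)**3)/tj,
            lambda s: (14*r(s) - 102*r(s)**2 + 208*r(s)**3 - 120*r(s)**4)/tj]
    psi = [lambda s: 1.0, lambda s: 2*r(s) - 1.0]   # shifted Legendre weights
    # element matrices (4.11a) and (4.11b); identical for each uniform element
    N = np.array([[quad(lambda s: (dphi[i](s) - alpha*phi[i](s))*psi[p](s),
                        0, tj)[0] for i in range(3)] for p in range(2)])
    P = np.array([[quad(lambda s: beta*phi[i](s)*psi[p](s),
                        0, tj)[0] for i in range(3)] for p in range(2)])
    d = 2*E + 1                        # node count; element ends are shared
    R = np.zeros((d, d)); H = np.zeros((d, d))
    R[0, 0] = 1.0; H[0, -1] = 1.0      # continuity across the period boundary
    for j in range(E):                 # stack element rows as in (4.10)
        R[1 + 2*j:3 + 2*j, 2*j:2*j + 3] = N
        H[1 + 2*j:3 + 2*j, 2*j:2*j + 3] = P
    return R, H

R, H = tfea_matrices(alpha=-4.0, beta=2.0)
U = np.linalg.solve(R, H)              # the dynamic map U = R^{-1}H of (4.12)
print(np.abs(np.linalg.eigvals(U)).max() < 1.0)   # True: a stable point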

4.3.2 TFEA Approach Generalization

Although the scalar case was used to provide an introductory example, it is more likely
that the practitioner will encounter the case of a first order DDE with multiple states.
Thus, this section describes the generalized analysis and its application to some
illustrative problems. The general analysis assumes a state space system in the form
of (4.3). The expressions for state and the delayed state variables are now written as
vectors
\mathbf{x}_j(t) = \sum_{i=1}^{3} \mathbf{a}_{ji}^{n}\phi_i(\sigma) ,   (4.13a)
\mathbf{x}_j(t-\tau) = \sum_{i=1}^{3} \mathbf{a}_{ji}^{n-1}\phi_i(\sigma) ,   (4.13b)
during the jth element. After substituting the assumed solution forms into (4.1) and
applying the method of weighted residuals, a global matrix can be obtained that
relates the states of the system in the current period to those in the previous period,
\begin{bmatrix}
\mathbf{I} & 0 & 0 & 0 & 0 \\
\mathbf{N}_{11}^{1} & \mathbf{N}_{12}^{1} & \mathbf{N}_{13}^{1} & 0 & 0 \\
\mathbf{N}_{21}^{1} & \mathbf{N}_{22}^{1} & \mathbf{N}_{23}^{1} & 0 & 0 \\
0 & 0 & \mathbf{N}_{11}^{2} & \mathbf{N}_{12}^{2} & \mathbf{N}_{13}^{2} \\
0 & 0 & \mathbf{N}_{21}^{2} & \mathbf{N}_{22}^{2} & \mathbf{N}_{23}^{2}
\end{bmatrix}
\begin{bmatrix} \mathbf{a}_{11} \\ \mathbf{a}_{12} \\ \mathbf{a}_{21} \\ \mathbf{a}_{22} \\ \mathbf{a}_{23} \end{bmatrix}^{n}
=
\begin{bmatrix}
0 & 0 & 0 & 0 & \mathbf{\Phi} \\
\mathbf{P}_{11}^{1} & \mathbf{P}_{12}^{1} & \mathbf{P}_{13}^{1} & 0 & 0 \\
\mathbf{P}_{21}^{1} & \mathbf{P}_{22}^{1} & \mathbf{P}_{23}^{1} & 0 & 0 \\
0 & 0 & \mathbf{P}_{11}^{2} & \mathbf{P}_{12}^{2} & \mathbf{P}_{13}^{2} \\
0 & 0 & \mathbf{P}_{21}^{2} & \mathbf{P}_{22}^{2} & \mathbf{P}_{23}^{2}
\end{bmatrix}
\begin{bmatrix} \mathbf{a}_{11} \\ \mathbf{a}_{12} \\ \mathbf{a}_{21} \\ \mathbf{a}_{22} \\ \mathbf{a}_{23} \end{bmatrix}^{n-1} ,   (4.14)
where I is an identity matrix and the terms N^j_{pi} and P^j_{pi} now become the following square matrices

\mathbf{N}_{pi}^{j} = \int_0^{t_j} \left( \mathbf{I}\dot{\phi}_i(\sigma) - \mathbf{A}_1\phi_i(\sigma) \right) \psi_p(\sigma)\, d\sigma ,   (4.15a)
\mathbf{P}_{pi}^{j} = \int_0^{t_j} \mathbf{A}_2\phi_i(\sigma)\psi_p(\sigma)\, d\sigma .   (4.15b)
Here, we note that the Φ matrix is the identity matrix when the delay terms are
always present. However, Φ may not always be the identity matrix for systems that
are piecewise continuous. The dimensions of the matrices I, Φ, N^j_{pi}, and P^j_{pi} are
the same as the dimensions of A1 and A2.
The dynamic map of (4.14) is then written in the more compact form Ra^n = Ha^{n−1},
where the elements of the R matrix are defined by the N^j_{pi} terms of (4.15a).
Correspondingly, the elements of the H matrix are defined by the P^j_{pi} terms from
(4.15b). Evaluating the chosen trial functions at the beginning, midpoint, and end
of each element allows the coefficients of the assumed solution, an and an−1 , to be
replaced with mx (n) and mx (n − 1), respectively. An eigenvalue problem is then
formed by multiplying the dynamic map expression by R−1 and taking the eigen-
values of matrix U= R−1 H. Alternatively, one may wish to avoid the complication
of inverting R, as in the case that R is close to singular, and would instead prefer
to solve for the ρ values (the characteristic multipliers) that solve the characteristic
equation that is obtained by setting the determinant of H − ρ R equal to zero.
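For instance, reusing the (hypothetical) tfea_matrices helper from the sketch in Sect. 4.3.1, this determinant condition is exactly the generalized eigenvalue problem Ha = ρRa, which standard libraries solve directly:

import numpy as np
from scipy.linalg import eig

# det(H - rho*R) = 0 is the generalized eigenproblem H a = rho R a
R, H = tfea_matrices(alpha=-4.0, beta=2.0, tau=1.0, E=1)
rho = eig(H, R, right=False)           # characteristic multipliers
print(np.abs(rho).max() < 1.0)         # True: stable, and R was never inverted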

4.3.3 Application to Time-Periodic DDEs

In this section, we consider the Mathieu equation as the case of a damped and
delayed oscillator. The damped delayed Mathieu equation (DDME) provides a rep-
resentative system with the combined effects of parametric excitation and a single
time delay. The original version of Mathieu’s Equation did not contain either damp-
ing or a time delay and was discussed first in 1868 by Mathieu [58] to study the
vibration of an elliptical membrane. Bellman and Cook [8] and Hsu and Bhatt [32]
both made attempts to lay out the criteria for stability using D-subdivision method
combined with the theorem of Pontryagin [64]. Insperger and Stépán used analyt-
ical and semidiscretization approaches in references [35, 36, 38] and Garg et al. [24]
used a second order temporal finite element analysis to investigate the stability of
the DDME. The equation of interest is

ẍ(t) + κẋ(t) + [δ + ε cos(ωt)] x(t) = b x(t − τ),   (4.16)
where the equation has a period of T = 2π/ω, a damping coefficient κ, and a constant
time delay τ = 2π. The parameter b acts much like the gain in a state variable
feedback system to scale the influence of the delayed term. For the results of this
section, the parameter ω is set to one. According to the extended Floquet theory for
DDEs, this requires the monodromy matrix to be constructed over the period of the
time-periodic matrices T = 2π /ω = 2π .
The first step in the analysis is to rewrite (4.16) as a state space equation,

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -(\delta + \varepsilon\cos(\omega t)) & -\kappa \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ b & 0 \end{bmatrix} \begin{bmatrix} x_1(t-\tau) \\ x_2(t-\tau) \end{bmatrix} ,   (4.17)

where x1 = x and x2 = ẋ. Once the equation is written in the form of a state space
model, it becomes apparent that the more generalized form is (4.3). This nonau-
tonomous case has two matrices, A1(t) and A2, which are given by

A_1(t) = \begin{bmatrix} 0 & 1 \\ -\delta - \varepsilon\cos(\omega t) & -\kappa \end{bmatrix} , \quad \text{and} \quad A_2 = \begin{bmatrix} 0 & 0 \\ b & 0 \end{bmatrix} .   (4.18)

Once again, the solution process starts by substituting (4.13a) and (4.13b) into
(4.3). The solution for the jth element then requires a slight alteration to the time-
periodic terms inside the matrices. Assuming that E uniform temporal elements are
applied, the time duration for each element would be tj = T /E. Next, we substitute
t = σ + ( j − 1)tj into the matrix A1 (t) so that the cosine term takes on the correct
values over the entire period T = 2π /ω . These terms are then substituted into (4.3)
and the method of weighted residuals is applied – as in the previous sections. The
expressions that populate the matrix of (4.14) are

\mathbf{N}_{pi}^{j} = \int_0^{t_j} \left( \mathbf{I}\dot{\phi}_i(\sigma) - \mathbf{A}_1\!\left(\sigma + (j-1)t_j\right)\phi_i(\sigma) \right) \psi_p(\sigma)\, d\sigma ,   (4.19a)
\mathbf{P}_{pi}^{j} = \int_0^{t_j} \mathbf{A}_2\phi_i(\sigma)\psi_p(\sigma)\, d\sigma ,   (4.19b)

and Φ is taken to be the identity matrix. Here, we point out that the superscript j
may not be dropped in the above expressions since the time-periodic terms of
A1(σ + (j − 1)tj) will assume different values within each element.
Figure 4.3 shows a series of stability charts for ω = 1, τ = 2π , and different val-
ues of ε and κ . While the presented results are for the case of T being equivalent to
the time delay, additional cases, such as those with a nonequivalent time interval,
can be found in [24, 38]. The stability charts of Fig. 4.3 show that as the damping is
increased the stable parameter space grows. It can also be observed that the stable
parameter space begins to unify as the damping term is increased. Finally, when the
amplitude of the parametric excitation is increased to larger values of ε , the stability
regions again become disjoint.
Fig. 4.3 Stability charts for (4.16) using ω = 1, τ = 2π, and five elements. Stability results for each graph are for the following parameters: (a) ε = 0 and κ = 0; (b) ε = 0 and κ = 0.1; (c) ε = 0 and κ = 0.2; (d) ε = 1 and κ = 0; (e) ε = 1 and κ = 0.1; (f) ε = 1 and κ = 0.2; (g) ε = 2 and κ = 0; (h) ε = 2 and κ = 0.1; (i) ε = 2 and κ = 0.2

4.4 Chebyshev Polynomial Expansion and Collocation

The use of Chebyshev polynomials to obtain the solutions of differential equations
dates back to the works of Clenshaw [17], Elliot [20], and Wright [82] and was sum-
marized in the books by Fox and Parker [22] and Snyder [73]. As stated in [22], a
simple power series solution is not the best convergent solution on a finite interval.
Instead, if the solution is expressed in terms of Chebyshev polynomials, higher accu-
racy and better convergence are achieved. Although other orthogonal polynomials
have also been used, Chebyshev polynomials are optimal in terms of minimizing the
uniform error over the entire interval [73]. Although in these early works recurrence
relations are explicitly used, in later vector/matrix formulations various operational
matrices associated with the Chebyshev polynomials (or their shifted versions) are
utilized instead. Such formulations were developed for several specific problems,
including the response and stability analysis of linear time-periodic ODEs [71] and
the response of constant-coefficient DDEs [31].

4.4.1 Expansion in Chebyshev Polynomials

The standard Chebyshev polynomials are defined as [22]:

Tr (x) = cos rθ , cos θ = x, −1 ≤ x ≤ 1 (4.20)

Using the change of variable x = 2t − 1, 0 ≤ t ≤ 1, the shifted Chebyshev polynomials
are defined in the interval t ∈ [0, 1] as

T^*_r(t) = T_r(2t - 1) .   (4.21)
Note |T^*_r(t)| ≤ 1. Suppose f(t) is a continuous scalar function, which can be
expanded in shifted Chebyshev polynomials:

f(t) = \sum_{j=0}^{\infty} b_j T^*_j(t) , \quad 0 \le t \le 1 .   (4.22)

Using the orthogonality property of the polynomials, the coefficients b_j are given by

b_j = \frac{2}{\pi} \int_0^1 f(t)\, T^*_j(t)\, w(t)\, dt , \quad j = 1, 2, 3, \ldots ,
b_0 = \frac{1}{\pi} \int_0^1 f(t)\, w(t)\, dt ,   (4.23)

where w(t) = (t − t²)^{−1/2}. Our notation for a finite expansion in the first m shifted
Chebyshev polynomials is

f(t) = \sum_{j=0}^{m-1} a_j T^*_j(t) = \mathbf{T}(t)^T \mathbf{a} ,   (4.24)

where T(t) = \{T^*_0(t)\; T^*_1(t)\; \cdots\; T^*_{m-1}(t)\}^T and a are m × 1 column vectors of the polynomials and coefficients, respectively. Linear operations on functions can now be writ-
ten as matrix operations on vectors of polynomials and coefficients, respectively. To
build a square monodromy matrix whose eigenvalues will determine the stability of
the DDE, we employ square matrix approximations to these operations.
Integration can be represented with a small error by a square matrix [2]:

\int_0^t \mathbf{T}(\tau)\, d\tau = G\,\mathbf{T}(t) + O(m^{-1}) ,   (4.25)
where

G = \begin{bmatrix}
1/2 & 1/2 & 0 & 0 & \cdots & 0 \\
-1/8 & 0 & 1/8 & 0 & \cdots & 0 \\
-1/6 & -1/4 & 0 & 1/12 & \cdots & 0 \\
\vdots & \vdots & \ddots & \ddots & \ddots & 1/4(m-1) \\
(-1)^m/2m(m-2) & 0 & \cdots & 0 & -1/4(m-2) & 0
\end{bmatrix}   (4.26)

is an m × m integration operational matrix. Using the relation T^*_r(t)T^*_k(t) = \left( T^*_{r+k}(t) + T^*_{|r-k|}(t) \right)/2 for the product of two shifted Chebyshev polynomials, the operation of multiplying two Chebyshev-expanded scalar functions f(t) = T(t)^T a and g(t) = T(t)^T b is approximated as

f(t)g(t) = \mathbf{T}(t)^T Q_a \mathbf{b} ,   (4.27)

using the square m × m product operational matrix
Q_a = \begin{bmatrix}
a_0 & a_1/2 & a_2/2 & a_3/2 & \cdots & a_{m-1}/2 \\
a_1 & a_0 + a_2/2 & (a_1 + a_3)/2 & (a_2 + a_4)/2 & \cdots & a_{m-2}/2 \\
a_2 & (a_1 + a_3)/2 & a_0 + a_4/2 & (a_1 + a_5)/2 & \cdots & a_{m-3}/2 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
a_{m-1} & a_{m-2}/2 & a_{m-3}/2 & a_{m-4}/2 & \cdots & a_0
\end{bmatrix} .   (4.28)
This approximation involves dropping m(m − 1)/2 terms from the product. If f , g
are at least twice differentiable then the order of the total error in using Qa goes to
zero as m increases [13].
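Both operational matrices are straightforward to generate programmatically. The sketch below is our own illustration (the helper names are ours); it builds G of (4.26) from the shifted Chebyshev integration recurrence and Qa of (4.28) from the product identity quoted before (4.27).

import numpy as np

def G_matrix(m):
    """Integration operational matrix (4.26), from the recurrence
    int_0^t T*_r = (T*_{r+1}/(r+1) - T*_{r-1}/(r-1))/4 + (-1)^{r+1}/(2(r^2-1)),
    truncated to the first m polynomials."""
    G = np.zeros((m, m))
    G[0, 0] = 0.5
    if m > 1:
        G[0, 1] = 0.5
        G[1, 0] = -0.125
    if m > 2:
        G[1, 2] = 0.125
    for r in range(2, m):
        G[r, 0] = (-1.0)**(r + 1) / (2.0 * (r*r - 1))
        G[r, r - 1] = -1.0 / (4.0 * (r - 1))
        if r + 1 < m:
            G[r, r + 1] = 1.0 / (4.0 * (r + 1))
    return G

def Q_matrix(a):
    """Product operational matrix (4.28) for the coefficient vector a, using
    T*_r T*_k = (T*_{r+k} + T*_{|r-k|})/2 and dropping terms of index >= m."""
    m = len(a)
    Q = np.zeros((m, m))
    for r in range(m):
        for k in range(m):
            if r + k < m:
                Q[r + k, k] += a[r] / 2.0
            Q[abs(r - k), k] += a[r] / 2.0
    return Q

For example, G_matrix(3) reproduces the upper-left corner displayed in (4.26).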
Suppose A(t) is any n×n matrix-valued function (such as A1 (t) or A2 (t) in (4.3))
whose entries aij (t) can be expanded in m shifted Chebyshev polynomials. Then
A(t) = \hat{\mathbf{T}}(t)^T \bar{A} = \left( I_n \otimes \mathbf{T}(t)^T \right) \begin{bmatrix} \mathbf{a}_{11} & \cdots & \mathbf{a}_{1n} \\ \vdots & \ddots & \vdots \\ \mathbf{a}_{n1} & \cdots & \mathbf{a}_{nn} \end{bmatrix} ,   (4.29)
where I_n is the n × n identity, ⊗ is the Kronecker product, and a_{ij} is the m × 1 vector
of the coefficients of the matrix entry A_{ij}(t). Therefore, the integral operation is
written

\int_0^t A(\tau)\, d\tau = \hat{\mathbf{T}}(t)^T \hat{G}^T \bar{A} ,   (4.30)

where Ĝ = I_n ⊗ G is an nm × nm matrix. Suppose B(t) = \hat{\mathbf{T}}(t)^T \bar{B} is another
matrix function. Then the product is written

A(t)B(t) = \hat{\mathbf{T}}(t)^T \hat{Q}_A \bar{B} ,   (4.31)
where

\hat{Q}_A = \begin{bmatrix} Q_{a_{11}} & \cdots & Q_{a_{1n}} \\ \vdots & \ddots & \vdots \\ Q_{a_{n1}} & \cdots & Q_{a_{nn}} \end{bmatrix}   (4.32)

is an nm × nm matrix.
Now consider (4.3) in the special case where T = τ . Here we present a “direct”
formulation for the approximate U matrix using Chebyshev polynomials. By inte-
grating once as

x(t) = x(t_0) + \int_{t_0}^{t} \left( A_1(s)x(s) + A_2(s)x(s-\tau) \right) ds ,   (4.33)

and normalizing the period to T = τ = 1, we obtain the solution vector x_1(t) in the
first interval [0, 1] as

x_1(t) = x_1(0) + \int_0^{t} \left( A_1(s)x(s) + A_2(s)\phi(s-1) \right) ds .   (4.34)
Next, we expand x1 (t), A1 (t), and A2 (t) and the initial function φ (t − 1) in shifted
Chebyshev polynomials as

x_1(t) = \hat{\mathbf{T}}(t)^T \mathbf{m}_1 , \quad A_1(t) = \hat{\mathbf{T}}(t)^T \bar{A}_1 , \quad A_2(t) = \hat{\mathbf{T}}(t)^T \bar{A}_2 ,
\phi(t-1) = \hat{\mathbf{T}}(t)^T \mathbf{m}_0 , \quad x(0) = \hat{\mathbf{T}}(t)^T \bar{T}(1)\mathbf{m}_0 ,   (4.35)

where m_1 and m_0 are the nm × 1 Chebyshev coefficient vectors of the solution
vector x_1(t) and the initial function φ(t − 1). The nm × nm matrix T̄(1) is defined as
T̄(1) = Î T̂(1)^T where I = T̂(t)^T Î. Using the Chebyshev expansions in (4.35), (4.34)
takes the form

\hat{\mathbf{T}}(t)^T \mathbf{m}_1 = \hat{\mathbf{T}}(t)^T \bar{T}(1)\mathbf{m}_0 + \int_0^t \left( \hat{\mathbf{T}}(s)^T \bar{A}_1 \hat{\mathbf{T}}(s)^T \mathbf{m}_1 + \hat{\mathbf{T}}(s)^T \bar{A}_2 \hat{\mathbf{T}}(s)^T \mathbf{m}_0 \right) ds .   (4.36)
Applying the operational matrices and simplifying, we obtain

T̂(t)T [I − ĜT Q̂A1 ]m1 = T̂(t)T [T̄(1) + ĜT Q̂A2 ]m0 . (4.37)

For the ith interval [i - 1, i], the linear map U which relates the Chebyshev coefficient
vector mi to that in the previous interval is therefore given by

mi = [I − ĜT Q̂A1 ]−1 [T̄(1) + ĜT Q̂A2 ]mi−1 , (4.38)

which is equivalent to (4.4). Hence, the stability matrix U is the linear map in
(4.38). The matrix U can be considered as a finite-dimensional approximation to
the infinite-dimensional operator, which maps continuous functions from the inter-
val [0, 1] back to the same interval. For asymptotic stability, all the eigenvalues
ρi of U must lie within, or, for the neutral stability, on the unit circle. Alterna-
tively, the inversion of [I − ĜT Q̂A1 ] can be avoided by setting the determinant of
 
[T̄(1) + ĜT Q̂A2 ] − ρ [I − ĜT Q̂A1 ] to zero.
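As a quick check on this formulation, the scalar (n = 1) case of (4.38) can be assembled in a few lines. The sketch below reuses the G_matrix and Q_matrix helpers defined above, obtains expansion coefficients by Gauss-Chebyshev quadrature (a standard identity; this shortcut is ours, not the chapter's), and recovers the stability of the Hayes equation at the same test point used in Sect. 4.3.1.

import numpy as np

def cheb_coeffs(f, m):
    """Shifted Chebyshev coefficients (4.23) of f on [0, 1], evaluated by
    Gauss-Chebyshev quadrature at the mapped Chebyshev nodes."""
    k = np.arange(m)
    theta = np.pi * (k + 0.5) / m
    nodes = 0.5 * (np.cos(theta) + 1.0)           # Chebyshev nodes mapped to [0, 1]
    fv = f(nodes)
    a = np.array([(2.0 / m) * np.sum(fv * np.cos(j * theta)) for j in range(m)])
    a[0] /= 2.0                                    # b_0 carries the factor 1/pi
    return a

def monodromy_scalar(a_fun, b_fun, m=16):
    """Scalar version of the map (4.38) for x'(t) = a(t)x(t) + b(t)x(t-1),
    with T = tau = 1; reuses G_matrix and Q_matrix from the earlier sketch."""
    G = G_matrix(m)
    Qa = Q_matrix(cheb_coeffs(a_fun, m))
    Qb = Q_matrix(cheb_coeffs(b_fun, m))
    Tbar1 = np.zeros((m, m)); Tbar1[0, :] = 1.0    # T-bar(1), since T*_j(1) = 1
    return np.linalg.solve(np.eye(m) - G.T @ Qa, Tbar1 + G.T @ Qb)

# constant coefficients reduce (4.3) to the Hayes equation (4.5)
U = monodromy_scalar(lambda t: -4.0 + 0.0*t, lambda t: 2.0 + 0.0*t)
print(np.abs(np.linalg.eigvals(U)).max() < 1.0)    # expected True (stable point)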

4.4.2 Estimating the Number of Polynomials

Since one can determine the exponential growth rate of a system from easily calculated
norms of the coefficient matrices, one can use this information to choose a
sufficient value for the important numerical parameter m (number of shifted Cheby-
shev polynomials) to give a desired accuracy in a particular example. Suppose that
p(t) is the best shifted Chebyshev polynomial expansion of f(t) using only the
m polynomials T^*_0(t), ..., T^*_{m-1}(t). We depend on the following relationship,
which relates the accuracy of the Chebyshev approximation p to the size of the mth
continuous derivative of f:

\max_{0 \le t \le 1} | f(t) - p(t) | \le \frac{1}{2^{2m-1}\, m!} \max_{0 \le t \le 1} | f^{(m)}(t) | .   (4.39)
Suppose we want to apply these methods to the scalar constant coefficient equation

ẋ = 3x − 2x(t − 1), (4.40)

and with x(s) = φ(s), −1 ≤ s ≤ 0. Suppose we want to have an error of at most 10^{−6}.
Since the solution is a function with exponential growth at most e^{3t}, from (4.39) we
need 3^m e^3/(2^{2m−1} m!) < 10^{−6}, that is, m ≥ 10. The heuristic x(t) ∼ e^{ρ̄t} along with
(4.39) can be used in the general case, where

\bar{\rho} = \max_{0 \le s \le 1} \rho(A_1(s)) ,   (4.41)

is the maximum spectral radius over the normalized period.
Fig. 4.4 The stability diagrams of the delayed Mathieu equation (4.42) with (a) ε = 0, (b) b = 0, (c) ε = 1, and (d) ε = 2. Reprinted from International Journal for Numerical Methods in Engineering, vol. 59, pp. 895–922, “Stability of linear time-periodic delay-differential equations via Chebyshev polynomials” by E. A. Butcher, et al., 2004. © John Wiley & Sons Limited. Reproduced with permission

For example, for the delayed Mathieu equation [36],

ẍ(t) + (δ + 2ε cos 2t)x(t) = bx(t − π ) (4.42)

after normalizing, we suppose that the solution has the approximate exponential form
e^{ρ̄t} where ρ̄ = π√(|δ| + 2|ε|). (We are disregarding the parameter b because it only
contributes a subexponential term.) Suppose, as in Fig. 4.4, that we wish to determine
stability for the parameter ranges −5 ≤ δ ≤ 20, −4 ≤ b ≤ 4, 0 ≤ ε ≤ 20 with an
error of at most 10^{−6}. Then ρ̄ = π√60 ≈ 24.3 is the worst case, and we seek m so
that (ρ̄^m e^{ρ̄})/(2^{2m−1} m!) ≤ 10^{−6}, that is, m ≥ 41. We consider four special cases of
the delayed Mathieu equation (4.42) to plot in Figure 4.4 using Chebyshev poly-
nomial expansion. These include (a) ε = 0 (first obtained in [32]); (b) b = 0 (first
obtained in [81]); (c) ε = 1; and (d) ε = 2 (first obtained in [36]).
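Inverting the bound numerically is a one-line loop; the helper below (ours) returns the smallest sufficient m for a given growth rate and reproduces both estimates quoted above.

import math

def min_terms(rho_bar, tol=1e-6):
    """Smallest m with rho^m exp(rho) / (2^(2m-1) m!) <= tol, per (4.39)."""
    m = 1
    while rho_bar**m * math.exp(rho_bar) / (2**(2*m - 1) * math.factorial(m)) > tol:
        m += 1
    return m

print(min_terms(3.0))                       # 10, as found for (4.40)
print(min_terms(math.pi * math.sqrt(60)))   # 41, the worst case of Fig. 4.4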

4.4.3 Chebyshev Collocation Method

We now illustrate a different Chebyshev-based method, which uses a collocation
expansion of the solution at the extrema of the Chebyshev polynomials. In fact,
the collocation method is more efficient than the method of polynomial expansion.
Unlike that method, it can easily be applied to DDEs with nonsmooth coefficients. It
generalizes and extends the collocation and pseudospectral techniques for constant-coefficient
DDEs and boundary value problems introduced in [7, 40, 80] to approximate
the compact monodromy operator of a periodic DDE, whose eigenvalues
exponentially converge to the exact Floquet multipliers. It is flexible for systems
with multiple degrees of freedom and it produces stability charts with high speed
and accuracy in a given parameter range. It should be noted that, unlike in [21], in
which collocation methods were used to find periodic solutions to nonlinear DDEs,
the method proposed here explicitly computes stability properties of linear periodic
DDEs, which may have been obtained through linearization about a periodic
solution. An a priori proof of convergence was sketched in [25], and computable
uniform a posteriori bounds were given for this method in [12].

Fig. 4.5 Diagrams of (a) Chebyshev collocation points as defined by projections from the unit circle and (b) collocation vectors on successive intervals
The Chebyshev collocation method is based on the properties of the Chebyshev
polynomials. The Chebyshev collocation points are unevenly spaced points in the
domain [−1, 1] corresponding to the extremum points of the Chebyshev polynomials.
As seen in Fig. 4.5a, we can also define these points as the projections of equis-
paced points on the upper half of the unit circle as tj = cos( jπ /N), j = 0, 1, ..., N.
Hence, the number of collocation points used is m = N + 1. A spectral differen-
tiation matrix for the Chebyshev collocation points is obtained by interpolating a
polynomial through the collocation points, differentiating that polynomial, and then
evaluating the resulting polynomial at the collocation points [12]. We can find the
differentiation matrix D for any order m as follows: Let the rows and columns of
the m × m Chebyshev spectral differentiation matrix D be indexed from 0 to N. The
entries of this matrix are

D_{00} = \frac{2N^2 + 1}{6} , \quad D_{NN} = -\frac{2N^2 + 1}{6} , \quad D_{jj} = \frac{-t_j}{2(1 - t_j^2)} , \quad j = 1, \ldots, N-1 ,
D_{ij} = \frac{c_i}{c_j} \frac{(-1)^{i+j}}{(t_i - t_j)} , \quad i \ne j , \quad i, j = 0, \ldots, N , \qquad c_i = \begin{cases} 2, & i = 0, N \\ 1, & \text{otherwise} \end{cases}   (4.43)
The dimension of D is m × m. Also let the mn × mn differential operator D be
defined as D = D ⊗ I_n.
Now let us approximate (4.3) using the Chebyshev collocation method, in which
the approximate solution is defined by the function values at the collocation points
in any given interval. (Note that for a collocation expansion on an interval of
length T = τ , the standard interval [−1, 1] for the Chebyshev polynomials is easily
rescaled). As shown in Fig. 4.5b, let m1 be the set of m values of x(t) in the interval
t ∈ [0, T ] and mφ be the set of m values of the initial function φ (t) in t ∈ [−T, 0].
Recalling that the points are numbered right to left by convention, the matching con-
dition in Fig. 4.5b is seen to be that m1N = mφ 0 . Writing (4.3) in the algebraic form
representing the Chebyshev collocation expansion vectors mφ and m1 , we obtain

D̂m1 = M̂A1 m1 + M̂A2 mφ . (4.44)

To enforce the n matching conditions, the matrix D̂ is obtained from D by (1) scaling
to account for the shift [−1,1]→ [0, T ] by multiplying the resulting matrix by 2/T ,
and (2) modifying the last n rows as [0n 0n ... In ] where 0n and In are n × n null and
identity matrices, respectively. The pattern of the product operational matrices is
\hat{M}_{A_1} = \begin{bmatrix} A_1(t_0) & & & & \\ & A_1(t_1) & & & \\ & & \ddots & & \\ & & & A_1(t_{N-1}) & \\ 0_n & 0_n & \cdots & 0_n & 0_n \end{bmatrix} ,   (4.45)
where A1 (ti ) (calculated at the ith point on the interval of length τ ) and elements 0n
are n × n matrices. Similarly,
\hat{M}_{A_2} = \begin{bmatrix} A_2(t_0) & & & & \\ & A_2(t_1) & & & \\ & & \ddots & & \\ & & & A_2(t_{N-1}) & \\ I_n & 0_n & \cdots & 0_n & 0_n \end{bmatrix} .   (4.46)
Here the hat ( ˆ ) above the operator refers to the fact that the matrices are modified
by altering the last n rows to account for the matching conditions.
In the case of nonsmooth coefficients where the matrix A2 (t) vanishes for a
percentage σ of the period T , then the DDE of (4.3) in this interval reduces to
the ODE system ẋ = A1 (t)x for which a transition matrix Φ(t) may be approxi-
mated using the technique of Chebyshev polynomial expansion in [71], for exam-
ple. (In certain problems such as interrupted milling, the system further reduces to
one with constant coefficients, i.e. ẋ = A0 x, such that the transition matrix in the
subinterval where the delay vanishes is simply Φ(t) = eA0 t ). To utilize the solu-
tion Φ(t), we rescale the Chebyshev collocation points to account for the shift
[−1, 1] → [0, (1 − σ )T ], while doing the same for matrix D̂ by multiplying D
by 2/((1 − σ)T). Since the matching condition between successive intervals now
becomes m_{1N} = Φ(σT)m_{φ0} (Fig. 4.6), the last n rows of M̂_{A_2} in (4.46)
are modified to [Φ(σT) 0_n ... 0_n].

Fig. 4.6 Chebyshev collocation vectors on successive intervals for the case where A2(t) vanishes for a percentage σ of the period T
Therefore, since U is defined as the mapping of the solution at the collocation
points to successive intervals as m1 = Umφ (compare with (4.4)), we obtain the
approximation to the monodromy operator from (4.44) as

U = [D̂ − M̂A1 ]−1 M̂A2 . (4.47)

Alternatively, the inversion of [D̂ − M̂_{A_1}] can be avoided by setting the determinant
of [M̂_{A_2} − ρ(D̂ − M̂_{A_1})] to zero. It is seen that if m is the number of collocation
points in each interval and n is the order of the original delay differential equa-
tion, then the size of the U matrix (whose eigenvalues represent the approximate
Floquet multipliers which are largest in absolute value) will be mn × mn. We can
achieve higher accuracy of the Floquet multipliers by increasing the value of m.
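The entire construction (4.43)-(4.47) fits comfortably in a few dozen lines of code. The following sketch is a minimal illustration in the spirit of, but independent from, the DDEC suite mentioned below; it builds U for the delayed Mathieu equation (4.42), for which T = τ = π and n = 2.

import numpy as np

def cheb_diff(N):
    """Spectral differentiation matrix on t_j = cos(j*pi/N); the diagonal is
    set by negative row sums, an equivalent form of the entries in (4.43)."""
    t = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1); c[0] = c[N] = 2.0
    c *= (-1.0)**np.arange(N + 1)
    dt = t[:, None] - t[None, :] + np.eye(N + 1)
    D = np.outer(c, 1.0 / c) / dt
    D -= np.diag(D.sum(axis=1))
    return D, t

def mathieu_U(delta, eps, b, m=32):
    """Approximate monodromy matrix (4.47) of (4.42) with T = tau = pi."""
    n, N, T = 2, m - 1, np.pi
    D, t = cheb_diff(N)
    s = T * (t + 1.0) / 2.0                   # collocation points mapped to [0, T]
    Dhat = np.kron((2.0 / T) * D, np.eye(n))  # scaled operator D = D (x) I_n
    MA1 = np.zeros((m*n, m*n)); MA2 = np.zeros((m*n, m*n))
    for i in range(N):                        # equation rows at t_0, ..., t_{N-1}
        MA1[n*i:n*i+n, n*i:n*i+n] = [[0.0, 1.0],
                                     [-(delta + 2*eps*np.cos(2*s[i])), 0.0]]
        MA2[n*i:n*i+n, n*i:n*i+n] = [[0.0, 0.0], [b, 0.0]]
    Dhat[-n:, :] = 0.0; Dhat[-n:, -n:] = np.eye(n)   # matching rows [0 ... 0 I]
    MA2[-n:, :] = 0.0;  MA2[-n:, :n] = np.eye(n)     # matching rows [I 0 ... 0]
    return np.linalg.solve(Dhat - MA1, MA2)

rho = np.abs(np.linalg.eigvals(mathieu_U(delta=3.0, eps=1.0, b=0.5))).max()
print(rho)   # spectral radius; a value below one indicates asymptotic stability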
A MATLAB suite of codes called DDEC, which computes stability boundaries for
linear periodic DDEs with multiple discrete delays, has been written and is available
for download [11].
collocation method has also been used for center manifold reduction of nonlinear
periodic DDEs [15]. The Chebyshev collocation method will be illustrated in the
next section through the analysis of the stability of the milling process.

4.5 Application to Milling Stability

This section investigates the stability of a manufacturing process known as milling.
We restrict our analysis to the case of a compliant workpiece and rigid tool (Fig. 4.7).
For the sake of brevity, only the salient features of the model physical parameters
are described. The interested reader is directed to references [37, 52, 55, 57, 62] for
a more comprehensive model description and for the comparison of theory with
experiments. The dynamics of the system under consideration can be represented as a
single degree of freedom model

ẍ(t) + 2ζωẋ(t) + ω²x(t) = −K_s(t) b [x(t) − x(t − τ)] ,   (4.48)

where ζ is the damping ratio, ω is the circular natural frequency, and b is the axial
depth of cut. The terms of K_s(t) are

K_s(t) = \frac{K_t}{2m}\, g(t) \left[ \sin(2\Omega t) + 0.3\left( 1 - \cos(2\Omega t) \right) \right] ,   (4.49)

where Ω is the spindle rotational speed, g(t) is a unit step function (i.e., g(t) = 1
for 0 < mod(Ωt, 2π) < 2πρ and is otherwise zero), and ρ is the fraction of the spindle

Fig. 4.7 The single degree of freedom milling system investigated in this section and in references
[37, 55]. Schematic (a) is a top view and (b) is a side view
period spent cutting. The constant Kt is a cutting coefficient that scales the cutting
forces in relation to the uncut chip area. The forthcoming stability results are gener-
alized by the introduction of the following non-dimensional parameters,

\tilde{t} = \omega t ,   (4.50a)
\tilde{\tau} = \omega\tau ,   (4.50b)
\tilde{\Omega} = \Omega/\omega ,   (4.50c)
\tilde{b} = \frac{b K_t}{2 m \omega^2} ,   (4.50d)

into (4.48). The state-space representation for the revised equation of motion is

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -1 - \tilde{b}K_c(\tilde{t}) & -2\zeta \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ \tilde{b}K_c(\tilde{t}) & 0 \end{bmatrix} \begin{bmatrix} x_1(\tilde{t} - \tilde{\tau}) \\ x_2(\tilde{t} - \tilde{\tau}) \end{bmatrix} ,   (4.51)
where the expression for K_c(t̃) is

K_c(\tilde{t}) = g(\tilde{t}) \left[ \sin\!\left( 2\tilde{\Omega}\tilde{t} \right) + 0.3\left( 1 - \cos\!\left( 2\tilde{\Omega}\tilde{t} \right) \right) \right] .   (4.52)
Investigating the stability of (4.51) also requires a solution for the time interval of
free vibration, tf , or the time interval when g(t˜) = 0. For the TFEA method, the state
transition matrix Φ in (4.14) is used for this purpose, while it is also used in the
Chebyshev collocation method in (4.46). Thus it maps the states of the tool as it
exits out of the cut to the states of the tool as it reenters into the cut. The terms that
populate the state transition matrix are

\Phi_{11} = \frac{\lambda_1 e^{\lambda_2 \tilde{t}_f} - \lambda_2 e^{\lambda_1 \tilde{t}_f}}{\lambda_1 - \lambda_2} ,   (4.53)
\Phi_{12} = \frac{e^{\lambda_1 \tilde{t}_f} - e^{\lambda_2 \tilde{t}_f}}{\lambda_1 - \lambda_2} ,   (4.54)
\Phi_{21} = \frac{\lambda_1\lambda_2 e^{\lambda_2 \tilde{t}_f} - \lambda_1\lambda_2 e^{\lambda_1 \tilde{t}_f}}{\lambda_1 - \lambda_2} ,   (4.55)
\Phi_{22} = \frac{\lambda_1 e^{\lambda_1 \tilde{t}_f} - \lambda_2 e^{\lambda_2 \tilde{t}_f}}{\lambda_1 - \lambda_2} ,   (4.56)

where \tilde{t}_f = \omega t_f and the subscripts of each Φ term denote the row and column within the state transition matrix. The remaining undefined terms are \lambda_1 = -\zeta + \sqrt{\zeta^2 - 1} and \lambda_2 = -\zeta - \sqrt{\zeta^2 - 1}.
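In code, (4.53)-(4.56) are conveniently evaluated in complex arithmetic, since the radicand ζ² − 1 is negative for the lightly damped case; the imaginary parts cancel in the final matrix. A short sketch (ours) follows.

import numpy as np

def free_flight(zeta, tf):
    """State transition matrix (4.53)-(4.56) over a non-cutting interval of
    nondimensional length tf = omega*t_f."""
    rad = np.sqrt(zeta**2 - 1 + 0j)
    l1, l2 = -zeta + rad, -zeta - rad
    e1, e2 = np.exp(l1 * tf), np.exp(l2 * tf)
    Phi = np.array([[l1*e2 - l2*e1,        e1 - e2],
                    [l1*l2*(e2 - e1), l1*e1 - l2*e2]]) / (l1 - l2)
    return Phi.real                    # imaginary parts cancel analytically

# sanity check: at tf = 0 the matrix reduces to the identity
print(np.allclose(free_flight(0.003, 0.0), np.eye(2)))   # True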

Figure 4.8 shows several stability charts computed using both the TFEA method
and Chebyshev collocation method for several different values of ρ , while Fig. 4.9
shows the corresponding specific cutting forces at these same values. The stability
charts were computed using three finite elements and 25 collocation points with a
900 × 50 and 300 × 300 grid, respectively, which results in similar computational
Fig. 4.8 Milling process stability charts for ζ = 0.003 and (a) ρ = 0.10, (b) ρ = 0.15, (c) ρ = 0.25,
and (d) ρ = 0.33. The TFEA (green) and Chebyshev collocation (red) results are almost identical.
Unstable regions are shaded and stable regions are left unshaded

Fig. 4.9 Plots of the piecewise continuous term, from (4.52), over a single period. Various cutting
to noncutting times are indicated by (a) ρ = 0.10, (b) ρ = 0.15, (c) ρ = 0.25, and (d) ρ = 0.33
times (around one minute on a modern laptop). It is seen that the collocation and
TFEA results match very well. Further investigation reveals that the two methods
are very similar in both accuracy and convergence.

4.6 Control of Periodic Systems with Delay using Chebyshev Polynomials
Consider the following linear time periodic delay differential system:

ẋ(t) = A1(t)x(t) + A2(t)x(t − τ) + B(t)u(t) ,
x(t) = φ(t), −τ ≤ t ≤ 0 ,        (4.57)
which is identical to (4.3), but with the addition of a given n× p periodic matrix B(t)
and a p × 1 control vector u(t) (to be designed). In this section we use Chebyshev
polynomial expansion for the problems of optimal (open-loop) control and delayed
state feedback (closed-loop) control of 4.57. Additional details on these techniques
can be found in [2, 18, 47, 50]. This work is partially based on previous work on the
control of non-delay systems using Chebyshev polynomials [41, 72].

4.6.1 Variation of Parameters Formulation

In this section, instead of the direct method shown in Sect. 4.4.1, we use an alterna-
tive method which is based on the use of a variation of parameters formula (convo-
lution integral). After normalizing the period to T = τ = 1, the solution of (4.57) in
the interval [0, 1] can be computed as

x_1(t) = \Phi(t)x(0) + \Phi(t) \int_0^t \Psi^T(s)\left[ A_2(s)\phi(s-1) + B(s)u(s) \right] ds ,   (4.58)

with Ψ^T(0) = I. Let A1(t), A2(t), x(t), φ(t) be expanded in shifted Chebyshev poly-
nomials as in Sect. 4.4.1. In addition, we also expand the matrices and vectors

\Phi(t) = \hat{\mathbf{T}}^T(t) F , \quad \Psi(t) = \hat{\mathbf{T}}^T(t) P ,
B(t) = \hat{\mathbf{T}}^T(t) \bar{B} , \quad u(t) = \hat{\mathbf{T}}^T(t) \mathbf{q}_1 .   (4.59)

Using the product and integration operational matrices defined in Section (4.4.1),
Eq. (4.58) is converted into an algebraic form as

m1 = [FT̂T (1) + Q̂F ĜT Q̂PT Q̂A2 ]m0 + [Q̂F ĜT Q̂PT Q̂B ]q1 , (4.60)

where m0 is the Chebyshev coefficient vector of φ (t) in the delay interval [-1, 0],
m1 is the Chebyshev coefficient vector of x(t) in the interval [0, 1] and q1 is the
Chebyshev coefficient vector of u(t) in the interval [0, 1].
Thus, the Chebyshev coefficients of the state mi+1 are obtained on the interval
[i, i + 1] in terms of Chebyshev coefficients of the state mi on the interval [i − 1, i]
and the control input vector qi+1 as

mi+1 = Umi + Lqi+1 (4.61)

where U and L are the matrices in (4.60). Equation (4.61) defines a recursive rela-
tionship between the Chebyshev coefficients of the state in a particular interval in
terms of the Chebyshev coefficients of the state in the previous interval and the
Chebyshev coefficients of the control input vector in that particular interval. The
matrix U is the monodromy matrix which is a finite approximation of the infinite-
dimensional operator, and whose eigenvalues are identical to those of the map in
(4.38). Therefore, the stability can be obtained using Chebyshev polynomial expan-
sion of the U matrix in either (4.38) or (4.60), or from Chebyshev collocation using
(4.47), or from temporal finite elements using (4.14).

4.6.2 Finite Horizon Optimal Control via Quadratic Cost Function

We now consider the problem of finite horizon optimal control of (4.57) by mini-
mizing the cost function

J = \frac{1}{2}\left[ x^T(t_f) S_f x(t_f) + \int_0^{t_f} \left[ x^T(t)Q(t)x(t) + u^T(t)R(t)u(t) \right] dt \right] .   (4.62)

Here, tf is the final time, Q(t) and R(t) are n × n and p × p symmetric posi-
tive semidefinite and symmetric positive definite periodic matrices with period T ,
respectively. Let tf lie between the normalized interval of time [N, N + 1]. Sf is an
n × n (terminal penalty) symmetric positive semidefinite matrix. If the matrices Q(t)
and R(t) in the ith interval are represented as Q(t) = T̂T (t)Q1 and R(t) = T̂T (t)R1
then utilizing product and integration operational matrices, the cost function in
(4.62) can be written in the form of a quadratic function in terms of the unknown
Chebyshev coefficients qi of the control input vectors over each of the defined
sequence of time intervals and the known Chebyshev coefficients of the initial con-
dition function.
The Chebyshev coefficients of the optimal control vector in each of the intervals are
computed by equating ∂J/∂qi to 0, ∀i = 1, 2, ..., N, N + 1, using the fact that

\mathbf{m}_{i+1} = U^{i+1} \mathbf{m}_0 + \sum_{k=0}^{i} U^{i-k} L\, \mathbf{q}_{k+1} .   (4.63)

This results in a set of linear algebraic equations Zq̄ = b that are solved for the
unknown Chebyshev coefficients of the control vector q̄ = [qT1 qT2 ... qTN+1 ]T .
4.6.3 Optimal Control Using Convergence Conditions

The performance of the controlled state trajectories can be improved by imposing
convergence constraints on the Chebyshev coefficients of the state vector in the
instances when the objective of the control input is to suppress the overall oscillation
of the state vector in finite time. In particular, a quadratic convergence condition of
the form

\mathbf{m}_{i+1}^T \mathbf{m}_{i+1} - \mathbf{m}_i^T \mathbf{m}_i \le -\varepsilon^2\, \mathbf{m}_i^T \mathbf{m}_i , \quad 0 < \varepsilon \le 1 ,   (4.64)

is of great utility. Such constraints are very popular in the model predictive control
methodology. It can be seen that the convergence condition forces the L2 norm of
the Chebyshev coefficients of the state to decrease in successive intervals. Also,
maximizing ε 2 or minimizing −ε 2 increases the speed of decay. Substituting (4.61)
in (4.64) yields

(Umi + Lqi+1 )T (Umi + Lqi+1 ) ≤ (1 − ε 2 )mTi mi , (4.65)

The Chebyshev coefficients qi+1 of the control vector and ε are to be computed
by optimizing a quadratic cost to maximize the decay rate. In particular, a nonlinear
programming (NLP) problem is formulated and solved in each interval. We minimize
a quadratic cost −ε 2 , subject to the quadratic convergence condition (4.65) and a
linear matching condition given by

T̂T (0)mi+1 = T̂T (1)mi , (4.66)

which enforces the continuity of the controlled state vector in adjacent intervals.
Thus, an NLP is solved in each interval to compute the optimized values of Cheby-
shev coefficients of the control vector qi+1 and ε . The Chebyshev coefficients of the
state vector in each interval are computed from (4.61). It is proved in [18] that this
NLP problem optimizes the state trajectories of system (4.57) to achieve zero state
regulation.
Alternatively, NLP can be avoided by imposing yet another convergence condi-
tion of the form

\|\mathbf{m}_{i+1}\|_\infty - \|\mathbf{m}_i\|_\infty \le -\varepsilon \|\mathbf{m}_i\|_\infty , \quad 0 < \varepsilon \le 1 ,   (4.67)

where ‖·‖∞ represents the L∞ norm of the argument. Therefore, a linear program
(LP) can be formulated and solved in every interval instead of an NLP. We minimize
−ε, which in turn
maximizes the decay rate of the L∞ norm of the state vector in successive intervals
subject to the convergence condition obtained by substituting (4.61) into (4.67) and
a matching condition of the form of (4.66). Thus, an LP is solved in each interval
to compute the Chebyshev coefficients of the control vector qi+1 and ε . The Cheby-
shev coefficients of the state in each interval are then computed by using (4.61). It is
proved in [18] that the LP problem optimizes the state trajectories of system (4.57)
to achieve zero state regulation.
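To indicate how the LP might be posed numerically, the following sketch is an illustration only: the solver choice, the variable ordering z = [q; ε], and the helper name are ours, and the maps U and L of (4.61) are assumed to be available. It uses T*_r(1) = 1 and T*_r(0) = (−1)^r to express the matching condition (4.66) on the coefficient vectors.

import numpy as np
from scipy.optimize import linprog

def lp_step(U, L, m_i, n, m):
    """One interval of the LP of Sect. 4.6.3: minimize -eps subject to
    |(U m_i + L q)_j| <= (1 - eps)*||m_i||_inf and the matching condition."""
    p = L.shape[1]
    t0 = np.kron(np.eye(n), (-1.0)**np.arange(m))   # picks out x(0): T*_r(0) = (-1)^r
    t1 = np.kron(np.eye(n), np.ones(m))             # picks out x(1): T*_r(1) = 1
    base, nrm = U @ m_i, np.abs(m_i).max()
    c = np.zeros(p + 1); c[-1] = -1.0               # objective: minimize -eps
    col = nrm * np.ones((len(base), 1))
    A_ub = np.block([[L, col], [-L, col]])          # linearized form of (4.67)
    b_ub = np.concatenate([nrm - base, nrm + base])
    A_eq = np.hstack([t0 @ L, np.zeros((n, 1))])    # continuity condition (4.66)
    b_eq = t1 @ m_i - t0 @ base
    res = linprog(c, A_ub, b_ub, A_eq, b_eq,
                  bounds=[(None, None)] * p + [(0.0, 1.0)])
    q = res.x[:p]
    return base + L @ q, q, res.x[-1]               # m_{i+1}, q_{i+1}, eps

Each call advances the map by one interval; iterating until the L∞ norm of the coefficient vector falls below a tolerance mirrors the zero state regulation described above.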
4.6.4 Example: Optimal Control of a Delayed Mathieu Equation

Consider the controlled delayed Mathieu equation in state-space form

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -(a + b\cos(2\pi t)) & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ -c\cos(2\pi t) & 0 \end{bmatrix} \begin{bmatrix} x_1(t-1) \\ x_2(t-1) \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t) ,
\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} , \quad -1 \le t \le 0 ,   (4.68)

where the quadratic cost function to be minimized is

J = \frac{1}{2}\left[ 10^4\, x^T(2)x(2) + \int_0^2 \left[ x^T(t)x(t) + u^2(t) \right] dt \right] ,   (4.69)

with x(t) = [x1 (t) x2 (t)]T . For a = b = 0.8π 2 , c = 2π 2 , the uncontrolled system is
unstable as the spectral radius of the monodromy matrix is larger than one. It is
desired that the system state be zero at the final time tf = 2.0. Thus, there are two
distinct intervals [0, 1] and [1, 2], to be considered in the optimization procedure
which is carried out only once for the entire length of time. The uncontrolled and
controlled system response (displacement x1 (t)) and control effort obtained on the
two intervals using 8 Chebyshev polynomials are as shown in Fig. 4.10.

Fig. 4.10 (a) Uncontrolled displacement for (4.68), (b) Controlled displacement and (c) control
force u(t) for (4.68) using quadratic cost function. Reprinted from Optimal Control Applications
and Methods, vol. 27, pp. 123–136, “Optimal Control of Parametrically Excited Linear Delay
Differential Systems via Chebyshev Polynomials,” by V. Deshmukh, et al., 2006.  c John Wiley &
Sons Limited. Reproduced with permission
Fig. 4.11 (a) Controlled displacement and (b) control force u(t) for (4.68) using the NLP. Reprinted from Optimal Control Applications and Methods, vol. 27, pp. 123–136, “Optimal Control of Parametrically Excited Linear Delay Differential Systems via Chebyshev Polynomials,” by V. Deshmukh, et al., 2006. © John Wiley & Sons Limited. Reproduced with permission
Fig. 4.12 (a) Controlled displacement and (b) control force u(t) for (4.68) using the LP. Reprinted from Optimal Control Applications and Methods, vol. 27, pp. 123–136, “Optimal Control of Parametrically Excited Linear Delay Differential Systems via Chebyshev Polynomials,” by V. Deshmukh, et al., 2006. © John Wiley & Sons Limited. Reproduced with permission

The NLP is now solved for (4.68) and optimized values of qi+1 and ε are com-
puted until the displacement x1 (t) approaches zero for the entire interval. The con-
trolled response (displacement x1 (t)) and control effort are as shown in Fig. 4.11.
Also, the LP is solved for (4.68) and the optimized values of qi+1 and ε are com-
puted until the displacement x1 (t) approaches zero for the entire interval. The
controlled state response (displacement x1 (t)) and control effort are as shown in
Fig. 4.12. It can be seen that the control vector obtained using the LP formulation
renders faster decaying controlled trajectories than does the one obtained using the
NLP formulation. Both the NLP and LP formulations implicitly minimize a cost
function including the state vector.

4.6.5 Delayed State Feedback Control

The next problem we address is the symbolic delayed feedback control of the system
given in (4.57). Assuming the uncontrolled system is unstable, it is desired to obtain
a delayed state feedback control given by

u(t) = K(t, k̂)x(t − τ) ,        (4.70)

where K(t, k̂) = K(t + T, k̂) is a k × n periodic gain matrix to be found symbolically
in terms of a vector k̂ of control gains which asymptotically stabilize (4.57). It is
assumed that the present state x(t) is not available for feedback. Such a situation
arises in practical control problems involving the delay in sensing, measurement or
reconstruction of the state. Expanding B(t)K(t, k̂) = T̂T (t)V(k̂) in terms of shifted
Chebyshev polynomials valid on the interval [0,1] and inserting into (4.58) yields
the solution in the ith interval as

x_i(t) = \hat{T}^T(t)\left[ F\hat{T}^T(1)\mathbf{m}_{i-1} + \hat{Q}_F \hat{G}^T \hat{Q}_{P^T}\left( \hat{Q}_{A_2} + \hat{Q}_V(\hat{k}) \right)\mathbf{m}_{i-1} \right] = \hat{T}^T(t)\mathbf{m}_i , \quad 0 \le \tilde{t} \le 1 , \; i-1 \le t \le i ,   (4.71)

where m_i is the Chebyshev coefficient vector in the interval [i − 1, i] and x_i(i − 1) =
x_{i−1}(i). Defining the closed-loop monodromy matrix as

U(\hat{k}) = F\hat{T}^T(1) + \hat{Q}_F \hat{G}^T \hat{Q}_{P^T}\left( \hat{Q}_{A_2} + \hat{Q}_V(\hat{k}) \right) ,   (4.72)

(4.71) reduces to

\mathbf{m}_i = U(\hat{k})\mathbf{m}_{i-1} .   (4.73)

Matrix U(k̂) in (4.73) is the monodromy matrix of the closed-loop controlled system
and is dependent on the vector k̂ of control gains. It advances the solution forward by
one period, and its spectral radius must be less than one [13] for asymptotic
stability of (4.57). The problem is to find the control input in the form of delayed
state feedback, u(t) = K(t, k̂)x(t − τ), so that the closed-loop monodromy matrix
U(k̂) has spectral radius less than one. The advantage of using delayed state
feedback over present state feedback is that the symbolic monodromy matrix of
the closed loop system is linear with respect to the controller parameters. Such a
property would not be obtained if one uses present state feedback.
The above procedure of forming matrix U(k̂) can be performed in symbolic
software such as Mathematica. To compute the unknown parameters k̂, one has
to convert the characteristic polynomial of discrete time map (4.72) to the char-
acteristic polynomial of the equivalent continuous time system. By applying the
Routh-Hurwitz criterion to that characteristic polynomial, a Routh-Hurwitz matrix
is constructed. For asymptotic stability, all the determinants of leading minors of this
Routh-Hurwitz matrix have to be positive. Imposing this condition on each of the
determinants, one obtains nonlinear inequalities describing regions in the parameter
space. The common region of intersection between these individual regions is the
desired stability region in the parameter space. To choose the values of the controller
parameters, an appropriate point in the stable region of the parameter space must be
selected.
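The unit-circle-to-left-half-plane step is easily automated in a computer algebra system. The fragment below (a sketch; the 2 × 2 matrix is a hypothetical placeholder, since building (4.72) symbolically is lengthy) applies the bilinear map ρ = (1 + s)/(1 − s), which sends |ρ| < 1 to Re(s) < 0, and extracts the polynomial to which the Routh-Hurwitz criterion is then applied.

import sympy as sp

k1, k2, rho, s = sp.symbols('k1 k2 rho s')

# hypothetical stand-in for the symbolic closed-loop monodromy matrix (4.72)
U_sym = sp.Matrix([[sp.Rational(1, 2) + k1, k2],
                   [k1 - k2, sp.Rational(1, 3)]])

p = U_sym.charpoly(rho).as_expr()                 # characteristic polynomial in rho
q = sp.together(p.subs(rho, (1 + s) / (1 - s)))   # bilinear map of the unit disk
hurwitz_poly = sp.expand(sp.numer(q))             # Routh-Hurwitz applies to this
print(sp.Poly(hurwitz_poly, s).all_coeffs())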
4.6.6 Example: Delayed State Feedback Control of the Delayed Mathieu Equation

Reconsider the delayed Mathieu equation in (4.68) where a = 0.2, b = 0.1, and
c = −0.5. Following the procedure described in Sect. 4.4.1, the monodromy matrix
U of the uncontrolled version of (4.68) is computed, which is found to be unstable,
as its spectral radius is larger than unity. In order to obtain the asymptotic stability of
the controlled system, we design a delayed state feedback controller with a constant
gain matrix given as
K(k̂) = [k1 k2 ]. (4.74)
The closed-loop monodromy matrix U(k1, k2) is formed symbolically by using the
methodology described above. By using m = 4 shifted Chebyshev polynomials and
applying the continuous time stability criterion (Routh-Hurwitz) after transforming
the characteristic equation from unit circle stability to left-half plane stability, we
obtain the stability boundary. The stability region in the parameter space [k1 , k2 ] is
shown in Fig. 4.13a. There are nm = 8 determinants of the nm leading minors of
the symbolic Routh-Hurwitz matrix. The stability boundary is determined by all
the nm curves in the parameter space given by the nm determinants. In Fig. 4.13a,
however, we plot only the region common to nm curves, which is the desired region
of stability in the parameter space. We select the parameters k1 = −0.49, k2 = −0.5 from the stable region and apply the controller to the system. The asymptotic stability of the dynamic system is clearly seen from the spectral radius of the closed-loop monodromy matrix, which is 0.604125.
We can enlarge the stability region of the system by designing a time-periodic
gain matrix as

K(t, k̂) = [−0.49 + k3 cos(2π t) − 0.5 + k4 sin(2π t)], (4.75)

Fig. 4.13 The stability region for (4.68) with (a) constant gain matrix and (b) periodic gain matrix with k1 = −0.49, k2 = −0.5. Reprinted from Communications in Nonlinear Science and Numerical Simulation, vol. 10, pp. 479–497, “Delayed State Feedback and Chaos Control for Time-Periodic Systems via a Symbolic Approach,” by H. Ma et al. © 2005, with permission from Elsevier
Fig. 4.14 (a) Uncontrolled response of (4.68), (b) controlled response, and (c) control force u(t) of (4.68) using delayed state feedback control with a time-periodic gain matrix. Reprinted from Communications in Nonlinear Science and Numerical Simulation, vol. 10, pp. 479–497, “Delayed State Feedback and Chaos Control for Time-Periodic Systems via a Symbolic Approach,” by H. Ma et al. © 2005, with permission from Elsevier

where the average parts are the values of k1 and k2 selected above. Applying the same procedure, we obtain the stability region shown in Fig. 4.13b. Again, we plot only the intersection of the nm = 8 curves given by the determinants of the symbolic Routh–Hurwitz matrix obtained for this case. With the parameters k3 = −0.49 and k4 = −0.5, the spectral radius of the closed-loop monodromy matrix is improved (reduced) to 0.5786, and the stability region is clearly enlarged. The uncontrolled and controlled responses and the control effort for (4.68) obtained by applying the periodic gain matrix in (4.75) are illustrated in Fig. 4.14.

4.7 Discussion of Chebyshev and TFEA Approaches

In this fourth chapter, the authors have presented two different approaches, based on
Chebyshev polynomials and temporal finite elements, respectively, for investigating
the stability behavior of linear time-periodic DDEs. Both approaches are applicable
to a broader class of systems that may be written in the form of a state space model.
Two different Chebyshev-based methods, that of polynomial expansion and collo-
cation, were explained. Both Chebyshev and TFEA methods were used to produce
stability charts for delayed Mathieu equations as well as a model for the milling pro-
cess. Chebyshev polynomial expansion was also used for the optimal and delayed
state feedback control of periodic systems with delay. The discretization methods
presented in this chapter complement the semi-discretization method discussed in
the fifth chapter.

Specifically, compared with traditional numerical simulation methods for the stability analysis of delayed systems, the present techniques are much simpler, as all of the computation is implemented through matrix manipulation. Moreover, since an approximation to the monodromy operator is found, they are also useful for designing controllers to stabilize the original system. In addition, much of the information needed to set up the problem can be stored in the computer in advance. The “product” and “integration” matrices associated with the shifted Chebyshev polynomials can be readily constructed from the general expressions, as sketched below. In general, the periodic terms in A(t) have the forms sin(nπt/T) and/or cos(nπt/T); the expansions of these quantities can be made part of the subroutine, so one does not have to compute the expansion coefficients each time. The entire computation process can be automated rather easily.
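As an illustration of how such operational matrices can be pre-assembled, the following sketch builds the integration matrix for the shifted Chebyshev basis T*_n(t) = T_n(2t − 1) on [0, 1] numerically with NumPy. The truncation size m and the coefficient-vector convention are assumptions of this sketch, not the chapter's symbolic construction.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def shifted_cheb_integration_matrix(m):
    """Matrix G such that, if c holds the first m shifted-Chebyshev
    coefficients of f on [0, 1], then G @ c approximates the coefficients
    of the running integral int_0^t f(s) ds in the same basis."""
    G = np.zeros((m, m))
    for j in range(m):
        e = np.zeros(j + 1)
        e[j] = 1.0
        # Antiderivative of T_j vanishing at x = -1; the factor 0.5 is the
        # chain rule for the change of variable x = 2t - 1.
        col = 0.5 * C.chebint(e, lbnd=-1)
        G[:min(m, len(col)), j] = col[:m]
    return G

# Quick check: int_0^t 1 ds = t = 0.5 T*_0(t) + 0.5 T*_1(t).
print(shifted_cheb_integration_matrix(4)[:, 0])   # ~ [0.5, 0.5, 0, 0]
```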
Two different formulas for the computation of the approximate U, whose size
is determined by the number of polynomials employed, were given. The first one
used the direct integral form of the original system in state space form while the
other one used a convolution integral (variation of parameters) formulation. An error
analysis was presented, which allows the number of polynomials employed in the
approximation to be selected in advance for a desired tolerance. An extension of the
Chebyshev-based methods to the case where the delay and parametric periods are
commensurate was not shown here but its implementation is straightforward as in
the DDEC collocation code [11].
When one compares the two Chebyshev-based methods, however, (i.e., polyno-
mial expansion and collocation), one can quickly see that the collocation method is
more efficient. This can be seen by comparing the forms for the monodromy matrix
U in (4.38), (4.47), (4.60) while observing the differences in the sparseness of the
corresponding operational matrices. In [12] it is shown that the eigenvalues of the U
matrix obtained via collocation are spectrally convergent to the exact multipliers of
the monodromy operator. Also, in that paper it is shown that new eigenvalue pertur-
bation and a posteriori estimation techniques give computable error bounds on the
eigenvalue approximation error. One would obviously like an a priori proof of the
spectral convergence of the collocation method when the coefficients of the DDE
are analytic, but this is open. That is, one would want to show spectral convergence,
as the degree of polynomial collocation increases, for the approximate solutions of
DDEs and for the approximate eigenvalues of the monodromy operator.
An important consideration for the TFEA approach is the choice of trial and
weighting functions. In particular, the illustration of Fig. 4.1 highlights the fact that
the trial functions were chosen so that a single coefficient of the assumed solution
would equal the state variable at the beginning, middle, and end of the temporal
element. This provides three constraints for interpolating the polynomials with the
remaining constraints being used to ensure orthogonality and meet user-preferences.
We have found (4.7a–4.7c) to be a particularly useful set of polynomials because higher-order polynomials sometimes caused round-off errors for smaller temporal elements (i.e., seventh-order polynomials would require that t_j^7 remain within the machine precision). Our choice of Legendre polynomials for weighting func-
tions was based upon a few factors. First, we made comparisons with the weighting
functions used in recent works [6] and also implemented a Galerkin procedure. In
both cases, we found that converged stability boundaries were obtained with fewer
elements when using Legendre polynomials. However, we make no claim that the current set of trial and weighting functions is the optimal set of polynomi-
als. Another consideration is the number of weighting functions to apply. While our
preference was to use a combination of two weighting functions and three trial func-
tions, which will always keep the R and H matrices square, it is certainly possible
to use more weighting functions.
Since the TFEA method is a discretization approach, a brief discussion of solu-
tion convergence seems necessary. Following the work in spatial finite elements,
we recognize three different approaches for solution convergence: (1) the number
of temporal elements can be increased (h-convergence); (2) the polynomial order
may be increased (p-convergence); or (3) both the polynomial order and number
of elements can be increased (hp-convergence). While a much more comprehensive
discussion is offered in reference [24], let it suffice to say that tracking the character-
istic multipliers while increasing the number of elements or polynomial order may
be used to assess convergence. Here, the authors simply increased the number of temporal elements for each figure until a converged result was obtained; a minimal sketch of such an h-convergence loop follows.
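The following sketch indicates one way such a check can be automated; build_monodromy is a hypothetical user-supplied routine that assembles the approximate monodromy matrix for a given number of temporal elements.

```python
import numpy as np

def converged_spectral_radius(build_monodromy, tol=1e-4, n0=4, n_max=128):
    """h-convergence loop: double the number of temporal elements until
    the spectral radius of the approximate monodromy matrix settles."""
    prev, n = None, n0
    while n <= n_max:
        rho = max(abs(np.linalg.eigvals(build_monodromy(n))))
        if prev is not None and abs(rho - prev) < tol:
            return rho, n
        prev, n = rho, 2 * n
    raise RuntimeError("spectral radius did not converge by n_max elements")
```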
Computational efficiency and implementation are also issues worthy of some discussion. With regard to implementation, the integrals of (4.11a), (4.11b), (4.15a), (4.15b) were all performed symbolically with the software package MATLAB. These terms were then inserted into stability algorithms that varied the control parameters while recording the largest characteristic multiplier of the formed matrices. With regard to computational efficiency, the time to produce stability results over a 900 × 50 (TFEA) or 300 × 300 (Chebyshev collocation) grid of control parameters was typically around one minute on a modern laptop computer.
In summary, the presented Chebyshev and TFEA methods provide computation-
ally efficient ways to query the asymptotic stability of periodic DDEs written in the
form of a state space model. We have some questions for further research about the
monodromy operator U itself. For example, under what conditions does U actually
diagonalize? Also, what a priori estimates can be computed for the conditioning
of its eigenvalue problem? What can generally be said about the pseudospectra of
U? Also, are the eigenfunctions of U relatively normal or can non-normality lead
to instability even if all multipliers have less than unity modulus? These and other
questions are for future research.
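As a pointer toward the pseudospectra question, the quantity usually examined is σ_min(zI − U) over a grid in the complex plane; its sublevel sets are the ε-pseudospectra. The sketch below computes this indicator (the grid size and window are arbitrary choices); small values well outside the spectrum flag the non-normal transient growth asked about above.

```python
import numpy as np

def pseudospectrum_indicator(U, n_grid=200, box=1.5):
    """Return sigma_min(z I - U) on an n_grid x n_grid mesh of the square
    |Re z|, |Im z| <= box; contours of this field at level eps bound the
    eps-pseudospectra of U."""
    x = np.linspace(-box, box, n_grid)
    X, Y = np.meshgrid(x, x)
    S = np.empty_like(X)
    I = np.eye(U.shape[0])
    for i in range(n_grid):
        for j in range(n_grid):
            z = X[i, j] + 1j * Y[i, j]
            S[i, j] = np.linalg.svd(z * I - U, compute_uv=False)[-1]
    return X, Y, S
```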

Acknowledgments As one might expect, the results of this chapter required the support, assis-
tance, and efforts of many individuals. Thus we would like to recognize those who have influenced
and shaped our work in this area. First, we thank Gábor Stépán, who has put forth many interesting
questions, influenced, and given inspiration to the work of both authors. Next, we wish to extend
our deepest gratitude to our Ph.D. advisors Professors S. C. Sinha and P. V. Bayly, who are great
colleagues, have become wonderful friends, and continue to share their intellectual insights. Lastly,
both authors would like to thank their colleagues and graduate students. Dr. Butcher recognizes the
contributions of colleagues Ed Bueler, Venkatesh Deshmukh, and Zsolt Szabó, the helpful advice
of David Gilsinn and Gábor Stépán, as well as the results of students Victoria Averina, Haitao Ma,
Praveen Nindujarla and Oleg Bobrenkov that were invaluable in this research. In addition, Dr.
Mann would like to thank colleagues Tamás Insperger, Gábor Stépán, Tony Schmitz, Keith Young,
Amy Helvey, and Ryan Hanks as well as former students Nitin Garg, Mike Koplow, Ryan Carter,
Jian Liu, Bhavin Patel, Ben Edes, and Firas Khasawneh for their contributions.

References

1. Argyris, J. H., and Scharpf, D. W., ‘Finite Elements in Time and Space’, Aeronaut. J. Roy.
Soc. 73, 1969 , 1041–1044.
2. Averina, V., Symbolic stability of delay differential equations, MS Thesis, Dept. of Mathemat-
ical Sciences, University of Alaska Fairbanks, 2002.
3. Balachandran, B., ‘Non-Linear Dynamics of Milling Process’, Proceedings of the Royal Soci-
ety of London A 359, 2001, 793–819.
4. Balachandran, B. and Gilsinn, D., ‘Nonlinear Oscillations of Milling’, Mathematical and
Computer Modelling of Dynamical Systems 11, 2005, 273–290.
5. Batzel, J. J., and Tran, H. T., ‘Stability of the Human Respiratory Control System. Part I: Anal-
ysis of a Two-Dimensional Delay State-Space Model’, J. Mathemat. Biol. 41, 2000, 45–79.
6. Bayly P. V., Halley, J. E., Mann, B. P., and Davis, M. A., ‘Stability of Interrupted Cutting by
Temporal Finite Element Analysis’, J. Manufact. Sci. Eng. 125, 2003, 220–225.
7. Bellen, A., ‘One-step collocation for delayed differential equations’, J. Comp. Appl. Math. 10,
1984, 275–283.
8. Bellman, R., and Cooke, K., Differential-Difference Equations, Academic, 1963.
9. Borri, M., Bottasso, C., and Mantegazza, P., ‘Basic Features of the Time Finite Element
Approach for Dynamics’, Meccanica 27, 1992, 119–130.
10. Budak, E., and Altintas, Y., ‘Analytical Prediction of Chatter Stability in Milling – Parts I and
II’, J. Dynam. Sys. Measure. Cont. 120, 1998, 22–36.
11. Bueler, E., ‘Guide to DDEC: Stability of linear, periodic DDEs using the DDEC suite of
Matlab codes’, http://www.dms.uaf.edu/~bueler/DDEcharts.htm, 2005.
12. Bueler, E., ‘Error bounds for approximate eigenvalues of periodic-coefficient linear delay dif-
ferential equations’, SIAM J. Num. Anal. 45, 2007, 2510–2536.
13. Butcher, E. A., Ma, H., Bueler, E., Averina, V., and Szabó, Z., ‘Stability of Linear Time-
Periodic Delay-Differential Equations via Chebyshev Polynomials’, International J. Num.
Meth. Eng. 59, 2004, 895–922.
14. Butcher, E. A., Nindujarla, P., and Bueler, E., ‘Stability of Up- and Down-Milling Using
Chebyshev Collocation Method’, proceedings of 5th International Conference on Multibody
Systems, Nonlin. Dynam. Cont. ASME DETC05, Long Beach, CA, Sept. 24–28, 2005.
15. Butcher, E. A., Deshmukh, V., and Bueler, E., ‘Center Manifold Reduction of Periodic Delay
Differential Systems’, proceedings of 6th International Conference on Multibody Systems,
Nonlinear Dynamics, and Control, ASME DETC07, Las Vegas, NV, Sept. 4–7, 2007.
16. Butcher, E. A., Bobrenkov, O. A., Bueler, E., and Nindujarla, P., ‘Analysis of Milling Stability
by the Chebyshev Collocation Method: Algorithm and Optimal Stable Immersion Levels’,
J. Comput. Nonlin. Dynam., in press.
17. Clenshaw, C. W., ‘The Numerical Solution of Linear Differential Equations in Chebyshev
Series’, Proc. Cambridge Philosophical Society 53, 1957, 134–149.
18. Deshmukh, V., Ma, H., and Butcher, E. A., ‘Optimal Control of Parametrically Excited Linear
Delay Differential Systems via Chebyshev Polynomials’, Opt. Cont. Appl. Meth. 27, 2006,
123–136.
19. Deshmukh, V., Butcher, E. A., and Bueler, E., ‘Dimensional Reduction of Nonlinear Delay
Differential Equations with Periodic Coefficients using Chebyshev Spectral Collocation’,
Nonlin. Dynam. 52, 2008, 137–149.
20. Elliot, D., ‘A Chebyshev Series Method for the Numerical Solution of Fredholm Integral Equa-
tions’, Comp. J. 6, 1963, 102–111.
21. Engelborghs, K., Luzyanina, T., in ’t Hout, K. J., and Roose, D., ‘Collocation Methods for the
Computation of Periodic Solutions of Delay Differential Equations’, SIAM J. Sci. Comput. 22,
2000, 1593–1609.
22. Fox, L., and Parker, I.B., Chebyshev Polynomials in Numerical Analysis, Oxford Univ. Press,
London, 1968.
23. Fried, I., ‘Finite Element Analysis of Time Dependent Phenomena’, Am. Inst. Aeron. Astron.
J. 7, 1969, 1170–1173.
24. Garg, N. K., Mann, B. P., Kim, N. H., Kurdi, M. H., ‘Stability of a Time-Delayed System With
Parametric Excitation’, J. Dynam. Syst. Measurem. Cont. 129, 2007, 125–135.
25. Gilsinn, D. E., and Potra, F. A., ‘Integral Operators and Delay Differential Equations’, J. Int.
Equ. Appl. 18, 2006, 297–336.
26. Gouskov, A. M., Voronov, S. A., Butcher, E. A., and Sinha, S. C., ‘Nonconservative Oscil-
lations of a Tool for Deep Hole Honing’, Comm. Nonlin. Sci. Num. Simulation 11, 2006,
685–708.
27. Hahn, W., ‘On difference differential equations with periodic coefficients’, J. Mathemat. Anal.
Appl. 3, 1961, 70–101.
28. Halanay, A., Differential Equations: Stability, Oscillations, Time Lags, Academic Press, New
York, 1966.
29. Hale, J. K., and Verduyn Lunel, S. M., Introduction to Functional Differential Equations,
Springer, New York, 1993.
30. Hayes, N. D., ’Roots of the Transcendental Equations Associated with Certain Differential-
Difference Equations’, J. Lond. Mathemat. Soci. 25, 1950, 226–232.
31. Horng, I.-R., and Chou, J.-H., ‘Analysis, Parameter Estimation and Optimal Control of Time-
Delay Systems via Chebyshev Series’, Int. J. Cont. 41, 1985, 1221–1234.
32. Hsu, C. S., and Bhatt, S. J., ’Stability Charts for Second-Order Dynamical Systems with Time
Lag’, J. Appl. Mech. 33E, 1966, 119–124.
33. Insperger, T., and Stépán, G., ‘Remote Control of Periodic Robot Motion’, Proceedings of
13th CISM-IFToMM Symposium on Theory and Practice of Robots and Manipulators, 2000,
197–203.
34. Insperger, T., and Stépán, G., ’Comparison of the Stability Lobes for Up- and Down-Milling’,
Proceedings of Dynamics and Control of Mechanical Processing Workshop, 2nd Workshop,
2001, 53–57, Budapest University of Technology and Economics, Budapest.
35. Insperger, T., and Stépán, G., ‘Semi-Discretization Method for Delayed Systems’, Int. J. Num.
Meth. Engr. 55, 2002, 503–518.
36. Insperger, T., and Stépán, G., ‘Stability chart for the delayed Mathieu equation’, Proc. R. Soc.,
Math. Physic. Eng. Sci. 458, 2002, 1989–1998.
37. Insperger T., Mann, B. P., Stépán, G., and Bayly, P. V., ‘Stability of Up-Milling and Down-
Milling, Part 1: Alternative Analytical Methods’, Int. J. Machine Tools Manufacture 43, 2003,
25–34.
38. Insperger T., and Stépán, G., ‘Updated Semi-Discretization Method for Periodic Delay-
Differential Equations with Discrete Delay’, Int. J. Num. Meth. Eng. 61, 2004, 117–141.
39. Insperger, T., and Stépán, G., ‘Stability Analysis of Turning with Periodic Spindle Speed Mod-
ulation via Semi-Discretization’, J. Vibr. Cont. 10, 2004, 1835–1855.
40. Ito, K., Tran, H. T., and Manitius, A., ‘A Fully-Discrete Spectral Method for Delay-Differential
Equations’, SIAM J. Numer. Anal. 28, 1991, 1121–1140.
41. Jaddu, H., and Shimemura, E., ‘Computation of Optimal Control Trajectories Using Chebyshev
Polynomials: Parameterization and Quadratic Programming’, Opt. Cont. Appl. Meth. 20, 1999,
24–42.
42. Jemielniak, K., and Widota, A., ‘Suppression of Self-Excited Vibration by the Spindle Speed
Variation Method’, Int. J. Mach. Tool Desi. Res., 24, 1984, 207–214.
43. Kalmár-Nagy, T., ‘A New Look at the Stability Analysis of Delay Differential Equations’,
Proceedings of International Design Engineering Technical Conferences and Computers and
Information in Engineering Conference, Long Beach, California, DETC2005-84740, ASME,
2005.
44. Long, X.-H., and Balachandran, B., ‘Milling dynamics with a variable time delay’. Proceed-
ings of ASME, IMECE 2004, Anaheim, CA, November 13 - 19, Paper No. IMECE 2004-
59207.
45. Long, X.-H., and Balachandran, B., ‘Stability analysis of a variable spindle speed milling
process’, proceedings of 5th International Conference on Multibody Systems, Nonlin. Dynam.
Contr., ASME DETC05, Long Beach, CA, Sep. 24-28, 2005.
46. Long, X.-H., and Balachandran, B., ‘Stability Analysis for Milling Process’, Nonlin. Dynam.,
49, 2007, 349–359.
47. Ma, H., Analysis and Control of Time-periodic Systems with Time Delay via Chebyshev Poly-
nomials, MS Thesis, Dept. of Mechanical Engineering, University of Alaska Fairbanks, 2003.
48. Ma, H., Butcher, E. A., and Bueler, E., ‘Chebyshev Expansion of Linear and Piecewise Linear
Dynamic Systems with Time Delay and Periodic Coefficients Under Control Excitations’,
J. Dynam. Syst. Measurement Cont. 125, 2003, 236–243.
49. Ma, H., and Butcher, E. A., ‘Stability of Elastic Columns Subjected to Periodic Retarded
Follower Forces’, J. Sound Vibration 286, 2005, 849–867.
50. Ma, H., Deshmukh, V., Butcher, E. A., and Averina, V., ‘Delayed State Feedback and Chaos
Control for Time-Periodic Systems via a Symbolic Approach’, Comm. Nonlin. Sci. Num. Sim-
ula. 10, 2005, 479–497.
51. Mann, B. P., Insperger, T., Bayly, P. V., and Stépán, G., ‘Stability of Up-Milling and Down-
Milling, Part 2: Experimental Verification’, International J. of Machine Tools and Manufacture
43, 2003, 35–40.
52. Mann, B. P., Bayly, P. V., Davies, M. A., and Halley, J. E., ‘Limit Cycles, Bifurcations, and
Accuracy of the Milling Process’, J. Sound Vibration 277, 2004, 31–48.
53. Mann, B. P., Garg, N. K., Young, K. A., and Helvey, A. M., ‘Milling bifurcations from struc-
tural asymmetry and nonlinear regeneration’, Nonlin. Dynam. 42, 2005, 319–337.
54. Mann, B. P., Young, K. A., Schmitz, T. L., and Dilley, D. N., ‘Simultaneous stability and
surface location error predictions in milling’, J. Manufact. Sci. Eng. 127, 2005, 446–453.
55. Mann, B. P., and Young, K. A., ‘An empirical approach for delayed oscillator stability and
parametric identification’, Proc. R. Soc. A 462, 2006, 2145–2160.
56. Mann, B. P., and Patel, B., ‘Stability of delay equations written as state space models’,
J. Vibra. Cont., invited paper for a special issue on delay systems, accepted in 2007.
57. Mann, B. P., Edes, B. T., Easley, S. J., Young, K. A., and Ma, K., ‘Surface location error and chatter prediction for helical end mills’, Int. J. Machine Tools Manufacture, 48, 2008, 350–361.
58. Mathieu, E., ‘Mémoire sur le Mouvement Vibratoire d’une Membrane de Forme Elliptique’, J. Math. Pures Appl. 13, 1868, 137–203.
59. Nayfeh, A. H., and Mook, D. T., Nonlinear Oscillations, Wiley, New York, 1979.
60. Olgac, N., and Sipahi, R., ‘A Unique Methodology for Chatter Stability Mapping in Simultaneous Machining’, J. Manufact. Sci. Eng. 127, 2005, 791–800.
61. Olgac, N., and Sipahi, R., ‘Dynamics and Stability of Variable-Pitch Milling’, J. Vibrat. Cont.
13, 2007, 1031–1043.
62. Patel, B., Mann, B. P., and Young, K. A., ‘Uncharted islands of chatter instability in milling’,
Int. J. of Machine Tools Manufact. 48, 2008, 124–134.
63. Peters, D. A., and Izadpanah, A. P., ‘hp-Version Finite Elements for the Space-Time Domain’, Comput. Mech. 3, 1988, 73–78.
64. Pontryagin, L. S., ‘On the zeros of some elementary transcendental functions’, Izv. Akad. Nauk
SSSR 6, 1942, 115–134.
65. Reddy, J. N., An Introduction To The Finite Element Method, McGraw-Hill, Inc., New York,
NY, 2nd edition, 1993.
66. Schmitz, T. L., Mann, B. P., ‘Closed Form Solutions for the Prediction of Surface Location
Error in Milling’, Int. Journal Mach. Tools Manufact. 46, 2006, 1369–1377.
67. Segalman, D. J., and Redmond, J., ‘Chatter suppression through variable impedance and smart
fluids’, Proc. SPIE Symposium on Smart Structures and Materials, SPIE 2721, 1996, 353–363.
68. Segalman, D. J., and Butcher, E. A., ‘Suppression of Regenerative Chatter via Impedance
Modulation’, J. Vibra. Cont. 6, 2000, 243–256.
69. Sexton, J. S., and Stone, B. J., ‘The stability of machining with continuously varying spindle
speed’, Annals of the CIRP 27, 1978, 321–326.
70. Shampine, L. F., and Thompson, S., ‘Solving DDEs in MATLAB’, Applied Numerical Math-
ematics 37, 2001, 441–458.
71. Sinha, S. C., and Wu, D.-H., ‘An efficient computational scheme for the analysis of periodic
systems’, J. Sound Vibration 151, 1991, 91–117.
72. Sinha, S. C., Gourdon, E., and Zhang, Y., ‘Control of Time-Periodic Systems via Symbolic Computation with Application to Chaos Control’, Communications in Nonlinear Science and Numerical Simulation 10, 2005, 835–854.
73. Snyder, M. A., Chebyshev Methods in Numerical Approximation, Prentice-Hall, Englewood
Cliffs, N. J., 1986.
74. Stépán, G., Retarded Dynamical Systems: Stability and Characteristic Functions, Longman
Scientific and Technical, Harlow, UK, 1989.
75. Stépán, G., ‘Vibrations of Machines Subjected to Digital Force Control’, International
J. Solids and Structures 38, 2001, 2149–2159.
76. Stépán, G., ‘Modelling Nonlinear Regenerative Effects in Metal Cutting’, Philosophical
Transactions of the Royal Society of London A 359, 2001, 739–757.
77. Stokes, A., ’A Floquet Theory for Functional Differential Equations’, Proc. Nat. Acad. Sci.
U.S.A. 48, 1962, 1330–1334.
78. Szalai, R., and Stépán, G., ‘Lobes and Lenses in the Stability Chart of Interrupted Turning’,
J. Computational and Nonlinear Dynamics 1, 2006, 205–211.
79. Szalai, R., Stépán, G., and Hogan, S. J., ‘Continuation of Bifurcations in Periodic Delay-
Differential Equations Using Characteristic Matrices’, SIAM J. Sci. Comp. 28, 2006,
1301–1317.
80. Trefethen, L. N., Spectral Methods in Matlab, SIAM Press, Philadelphia, 2000.
81. Van der Pol, B., and Strutt, M. J. O., ‘On the Stability of the Solutions of Mathieu’s Equation’, Philos. Mag. J. Sci., 5, 1928, 18–38.
82. Wright, K., ‘Chebyshev collocation methods for ordinary differential equations’, Computer
Journal 6, 1964, 358–363.
83. Zhang, J., and Sun, J.-Q., ‘Robustness analysis of optimally designed feedback control of lin-
ear periodic systems with time-delay’, proceedings of 6th International Conference on Multi-
body Systems, Nonlinear Dynamics, and Control, ASME DETC07, Las Vegas, NV, Sept. 4-7,
2007.
84. Zhao, M.-X., and Balachandran, B., ‘Dynamics and Stability of the Milling Process’, Int. J.
Solids Struct. 38, 2001, 2233–2248.
Chapter 5
Systems with Periodic Coefficients
and Periodically Varying Delays:
Semidiscretization-Based Stability Analysis

Xinhua Long, Tamás Insperger, and Balakumar Balachandran

Abstract In this chapter, delay differential equations with constant and time-
periodic coefficients are considered. The time delays are either constant or peri-
odically varying. The stability of periodic solutions of these systems is analyzed
by using the semidiscretization method. By employing this method, the periodic
coefficients and the delay terms are approximated as constants over a time interval,
and the delay differential system is reduced to a set of linear differential equations
in this time interval. This process helps to define a Floquet transition matrix that is
an approximation to the infinite-dimensional monodromy operator. Information on
the stability of periodic solutions of the delay differential system is obtained from
analysis of the eigenvalues of the finite linear operator. As illustrative examples,
stability charts are constructed for systems with constant delays as well as time-
varying delays. The results indicate that a semidiscretization-method-based stabil-
ity analysis is effective for studying delay differential systems with time-periodic
coefficients and periodically varying delays. The stability analysis also helps bring
forth the benefits of variable spindle speed milling operations compared to constant
spindle speed milling operations.

Keywords: Constant spindle speed milling · Delay differential equations · Floquet


analysis · Semidiscretization method · Stability · Time periodic delays · Variable
spindle speed milling

5.1 Introduction

Time delays have been used to develop models of many practical systems where the
rate of change of the state depends not only on the current states but also on the past
states. The mathematical model of a system with a time lag can be formulated as a
set of delay differential equations (DDEs). As also discussed in other parts of this
book, time-delay systems have been used to model manufacturing processes, neu-
ral networks, mechanical systems with viscoelasticity, nuclear reactors, distributed

networks, internal combustion engines, species interaction in microbiology, learn-


ing, epidemiology, and physiology [1]. These practical applications have spurred a
considerable number of studies on the dynamics of systems with time delays. In
the context of stability, considerable research has been carried out on systems with
constant delays [2–5]. For a system with constant coefficients and a constant delay,
one can determine the sufficient conditions for the stability and the instability of
the system by using the method of Lyapunov functionals [6]. However, because suitable Lyapunov functionals are difficult to construct, this type of stability analysis is of limited use in practical problems. Frequency-domain methods provide an alternative for the stability analysis of
systems with constant time delays and constant coefficients [2, 3].
As in the case of a system without a delay, an equilibrium point of DDEs is
asymptotically stable if and only if all the roots (λi , i = 1, 2, ...) of the characteris-
tic equation have negative real parts. The stability of periodic solutions of ordinary
differential equations can be determined by using Floquet theory: if all the eigenval-
ues of the Floquet transition matrix (or also called monodromy matrix) are inside
the unit circle of the complex plane, then the system is asymptotically stable. Flo-
quet theory can be extended to DDEs with time-periodic coefficients [7,8]. Through
this extension, one can define a monodromy operator UT that can be viewed as
an infinite-dimensional Floquet transition matrix and determine the stability of a
periodic solution of the time-delay system by investigating the infinite spectrum
of eigenvalues. Recently, numerical methods have been proposed to approximate
the infinite dimensional monodromy operator with a finite dimensional one. Exam-
ples of numerical methods based on Floquet theory include the methods based
on Chebyshev polynomials [9], temporal finite element method [10–12], full dis-
cretization method [13], averaged coefficients method [4,14], and semidiscretization
method [15–17].
Compared to the types of systems considered thus far, the situation is more
complex when the system has time-varying delays. Such delays can be potentially
disastrous in terms of stability and oscillations in some systems, while also being
beneficial in other systems; for example, metal cutting process [18], neural net-
works [19], and biology [20, 21]. As a representative form of such systems, the
following form of linear DDEs with periodically varying time delays and periodic
coefficients is considered:
X̂̇(t) = A(t)X̂(t) + ∑_{j=1}^{n} Bj(t)X̂(t − τj(t)) + F(t). (5.1)

Here, X̂ ∈ Rn , A(t) and Bj (t) are n × n matrices with period T0 , τj (t) is the time-
varying delay with period Tj , and F(t) is a forcing vector with period T0 . It is
assumed that the periods T0 and T1, T2, . . . , Tn are commensurate; otherwise, the system is quasiperiodic and Floquet analysis does not apply. In the following analysis, the least period of the system (i.e., the least common multiple of T0, T1, . . . , Tn) is denoted by T. In system (5.1), there are two different types of time effects, one
due to the time-varying coefficient matrices and another due to the time-varying
delay. Thus, this system is different from a system either with only time-varying
coefficient matrices and a constant time delay or with constant coefficient matrices
and a periodically varying delay. Hence, system (5.1) requires a stability analysis

approach different from what has been used for other systems. Liu and Liao [19],
Jiang et al. [22], and Zhou et al. [23] determined sufficient conditions for the stabil-
ity of the periodic solutions of cellular neural networks and bidirectional associative
memory neural networks with time-varying delays by constructing suitable Lya-
punov functionals. Schley and Gourley [21] considered the effect of daily, seasonal,
or annual fluctuations on the population and obtained system models with period-
ically perturbed delays. The two-timing method was used to investigate how the
stability of the system differs from that with a constant delay.
Apart from the use of DDEs with time-periodic delays to model neural networks
and systems in biology, the dynamics of a milling process can also be described
by DDEs with time-periodic delays and -periodic coefficients. In particular, such
systems can be used to model a variable spindle speed (VSS) milling process or a
constant spindle speed (CSS) milling process with consideration of feed-rate effect
on the delay. Altintas and Chan [24] and Radulescu et al. [25, 26] investigated
the stability of a VSS milling process with periodic time-varying delay through
time-domain simulations. When compared to purely simulation driven schemes,
numerical schemes with an analytical basis can provide faster and reliable stabil-
ity predictions. Tsao et al. [27] used the angular position as an independent variable
instead of time, and they employed a full discretization scheme to analyze the sta-
bility of the resulting system. Sastry et al. [28] directly analyzed the stability of a
milling process with a sinusoidal spindle speed variation by using the full discretiza-
tion scheme. Yilmaz et al. [13] also used the full discretization scheme to analyze the
stability of a milling process with random spindle speed variations. Sastry et al. [29]
used Floquet theory to carry out stability analysis for a variable speed face-milling
process. Long and Balachandran [30] modeled a VSS milling process by using an
inhomogeneous set of DDEs, and they used the semidiscretization method to ana-
lyze the stability of periodic solutions of these DDEs with time-varying coefficients
and a time-varying delay. The material presented in this chapter complements the
discretization methods discussed in Chap. 4.
The rest of this chapter is organized as follows. First, the stability analysis of peri-
odic orbits and the tools available for carrying out this analysis are briefly discussed.
Following that, the construction of the approximate finite dimensional monodromy
matrix by using the semidiscretization method is presented. Then, different exam-
ples of systems with constant and periodically varying delays are considered and
the application of the semidiscretization method is illustrated. One of the detailed
examples is on milling dynamics. Finally, some remarks are collected together at
the end.

5.2 Stability Analysis of Systems with Periodically Varying Delays

As discussed in [21], periodic delays can have either a stabilizing effect or a


destabilizing effect. There are many criteria and methods that can be used to
determine the stability of a periodic solution of a delay differential system. The

Lyapunov–Krasovskii functional-based method is one of them. Liu and Liao [19],


Jiang et al. [22], and Zhou et al. [23] used this method to determine sufficient con-
ditions for the stability of periodic solutions of neural networks with time-varying
delays. Krasovskii [31] was the first to realize that such a “Lyapunov functional” should depend on the delayed state. The use of Lyapunov functional methods is limited because of the difficulty in constructing these functionals. The frequency-
domain method is convenient to use for the stability analysis of linear time-invariant
systems with delays.
Here, for the linear periodic delayed systems of the form of (5.1), the extended
Floquet theory [7,8] is used to determine the stability. Let the nominal periodic orbit
of system (5.1) be represented by X0 (t). Then, a perturbation X(t) is provided to this
nominal orbit resulting in
X̂(t) = X0 (t) + X(t). (5.2)

After substituting (5.2) into (5.1), the equation governing the perturbation is deter-
mined as
n
Ẋ(t) = A(t)X(t) + ∑ Bj (t)X(t − τj (t)). (5.3)
j=1

Let τmax = max(τj(t)) and let C denote the space of continuous functions from [−τmax, 0] to R^n, with the norm in C given by ‖φ‖ = max |φ(θ)| for −τmax ≤ θ ≤ 0. The function Xt ∈ C is defined as Xt(θ) = X(t + θ) with θ ∈ [−τmax, 0]. For any s ∈ R and φ ∈ C, there is a unique solution X(·, s, φ) of (5.3) defined on [s, ∞), such that the associated function Xt(·, s, φ), defined as Xt(θ, s, φ) = X(t + θ, s, φ), θ ∈ [−τmax, 0], is continuous in t, s, and φ, and it satisfies the initial condition Xs(·, s, φ) = φ. Here, s
is the initial instant and φ is the initial function. One can define the solution operator
U : C → C as
Xt (·, s, φ ) = U(t, s)φ . (5.4)
Without loss of generality, the initial instant can be set to s = 0. Considering the
periodicity of (5.3), the monodromy operator can be defined by setting t = T

XT (·, 0, φ ) = UT φ , (5.5)

where UT := U(T, 0) is a short notation for the monodromy operator associated


with the initial instant s = 0. The spectrum of the monodromy operator, denoted
by σ (UT ), is at most countable, a compact set of the complex plane with the only
possible limit point being zero. If μ ∈ σ(UT) and μ ≠ 0, then there exists an initial
function φ ∈ C such that UT φ = μφ . The nonzero elements of σ (UT ) are called
characteristic multipliers or Floquet multipliers of the system. If all of the Floquet
multipliers are within the unit circle of the complex plane, then the corresponding
periodic solution of (5.1) is stable. If one or more of the Floquet multipliers are on
the unit circle, while the rest of them are inside the unit circle, the corresponding
periodic solution may undergo a bifurcation [32].
In Fig. 5.1, three possible ways for the loss of stability of a periodic solution
are shown. When a real Floquet multiplier crosses the unit circle at +1, then the

Fig. 5.1 Scenarios showing how the Floquet multipliers leave the unit circle for different bifurcations: (a) cyclic-fold bifurcation, (b) period-doubling (flip) bifurcation, and (c) secondary Hopf bifurcation

corresponding bifurcation is called a cyclic-fold bifurcation. On the other hand,


when a real Floquet multiplier crosses the unit circle at −1, the corresponding
bifurcation is called a period-doubling or flip bifurcation. If a pair of complex Flo-
quet multipliers leaves the unit circle, then the corresponding bifurcation is called
a secondary Hopf bifurcation. For time-periodic delay differential systems such as (5.3), the monodromy operator is not available in closed form. In practical applications, a finite-dimensional monodromy matrix is constructed to approximate the infinite-dimensional monodromy operator. In Sect. 5.3,
the semidiscretization method is presented to construct an approximate monodromy
matrix for system (5.3).
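In numerical work, this classification can be automated directly from the dominant eigenvalue of an approximate monodromy matrix; the sketch below (the tolerance is an arbitrary choice) mirrors the three scenarios of Fig. 5.1.

```python
import numpy as np

def classify_stability_loss(Phi, tol=1e-2):
    """Classify the critical Floquet multiplier of an approximate
    monodromy matrix Phi, following the three routes of Fig. 5.1."""
    mu = max(np.linalg.eigvals(Phi), key=abs)
    if abs(mu) < 1.0:
        return "stable"
    if abs(mu.imag) > tol:
        return "secondary Hopf bifurcation"
    return ("cyclic-fold bifurcation" if mu.real > 0
            else "period-doubling (flip) bifurcation")
```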

5.3 Approximation of the Monodromy Matrix by Using the Semidiscretization Method

In this section, the steps underlying the semidiscretization method are shown for
constructing a finite dimensional approximation to the infinite dimensional mon-
odromy operator. The eigenvalues of the approximate monodromy matrix are used
to examine the local stability of the considered periodic solution.
As a first step, the time period T of the periodic orbit is divided into K equal
intervals of length Δt = T /K and the discrete time scale ti = i Δt is introduced. For
t ∈ [ti ,ti+1 ], the time-periodic coefficient matrices in (5.3) are approximated as
Ai ≈ (1/Δt) ∫_{ti}^{ti+1} A(s) ds, (5.6)
Bj,i ≈ (1/Δt) ∫_{ti}^{ti+1} Bj(s) ds, (5.7)

and the time delays are approximated as

τj,i = τj (ti + 0.5Δt), (5.8)



Fig. 5.2 Approximation of the time-varying delay τj(t)

as shown in Fig. 5.2. The relationship between Δt and the time delay τj,i is given by

τj,i = (lj,i + lrj,i + 0.5)Δt, (5.9)

where
lrj,i = mod(τj,i − 0.5 Δt, Δt)/Δt (5.10)
and
lj,i = τj,i/Δt − lrj,i − 0.5. (5.11)
The delayed state is then approximated as

X(t − τj(t)) ≈ X(ti + 0.5 Δt − τj,i) ≈ (1 − lrj,i)X(ti−lj,i) + lrj,i X(ti−lj,i−1). (5.12)

The sketch of this approximation is provided in Fig. 5.3. Then, over each time inter-
val t ∈ [ti, ti+1] for i = 0, 1, 2, . . . , K − 1, system (5.3) can be approximated as

Ẋ(t) = Ai X(t) + ∑_{j=1}^{n} Bj,i [(1 − lrj,i)X(ti−lj,i) + lrj,i X(ti−lj,i−1)], t ∈ [ti, ti+1]. (5.13)

Thus, the infinite-dimensional system (5.3) has been approximated by a series of


autonomous, ordinary differential equations in each discretization interval. It is
noted that the autonomous system has a constant excitation or forcing term that
arises due to the delay effect in each interval. To proceed further, the matrix Ai is
assumed to be invertible for all i. Then, for t ∈ [ti ,ti+1 ] , the solution of (5.13) takes
the form

X(t) = eAi (t−ti ) Xi


n
+ (eAi (t−ti ) − I) ∑ A−1
i Bj,i [(1 − lrj,i )X(tj−lj,i ) + lrj,i X(tj−lj,i −1 )], (5.14)
j=1

Fig. 5.3 Approximation of the delayed term

where Xi = X(ti) is used for compact notation. When t = ti+1, (5.14) leads to

Xi+1 = Mi,0 Xi + ∑_{j=1}^{n} (Mj,lj,i Xi−lj,i + Mj,lj,i+1 Xi−lj,i−1), (5.15)

where the associated matrices are given by

Mi,0 = e^{Ai Δt}, (5.16)
Mj,lj,i = (e^{Ai Δt} − I) Ai^{−1} Bj,i (1 − lrj,i), (5.17)
and
Mj,lj,i+1 = (e^{Ai Δt} − I) Ai^{−1} Bj,i lrj,i. (5.18)
Next, the augmented state vector

Yi = (Xi , Xi−1 , . . . , Xi−lmax )T , (5.19)

where
lmax = τmax/Δt − mod(τmax, Δt)/Δt + 1 (5.20)
is introduced. Combining (5.15)–(5.19), one can construct the linear discrete map

Yi+1 = Di Yi , (5.21)

where the matrix Di is given by


D_i = \begin{bmatrix}
M_{i,0} & 0 & \cdots & 0 & 0 & 0 \\
I & 0 & \cdots & 0 & 0 & 0 \\
0 & I & \cdots & 0 & 0 & 0 \\
\vdots & & \ddots & & & \vdots \\
0 & 0 & \cdots & I & 0 & 0 \\
0 & 0 & \cdots & 0 & I & 0
\end{bmatrix}
+ \sum_{j=1}^{n} \begin{bmatrix}
0 & \cdots & M_{j,l_{j,i}} & M_{j,l_{j,i}+1} & \cdots & 0 \\
0 & \cdots & 0 & 0 & \cdots & 0 \\
\vdots & & & & & \vdots \\
0 & \cdots & 0 & 0 & \cdots & 0
\end{bmatrix}. (5.22)

Here, the second subscript of submatrices Mj,lj,i and Mj,lj,i +1 refers to the corre-
sponding location in Di .
Consecutive applications of the discrete map (5.21) over a period T = K Δt
results in
YK = DK−1 · · · D1 D0 Y0 , (5.23)
where the transition matrix can be identified as

Φ = DK−1 · · · D1 D0 . (5.24)

This matrix Φ represents a finite-dimensional approximation of the monodromy


operator associated with the periodic orbit X0 (t) of (5.1) and the trivial solution
X(t) = 0 of (5.3). If the eigenvalues of this matrix are all within the unit circle, then
the trivial fixed point of (5.3) is stable, and hence, the associated periodic orbit of
(5.1) is stable. At a bifurcation point, one or more of the eigenvalues of the transition
matrix are on the unit circle of the complex plane as illustrated in Fig. 5.1.
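The assembly of Di in (5.22) and of the transition matrix (5.24) is mechanical, and a compact numerical sketch is given below. The routine name and interface are hypothetical, Ai is assumed invertible as in the derivation above, and lj,i + 1 ≤ lmax is assumed so that all delayed blocks fit.

```python
import numpy as np
from scipy.linalg import expm

def build_Di(Ai, B_list, l_list, lr_list, lmax, dt):
    """Assemble the step matrix D_i of the discrete map (5.21)-(5.22).
    Ai and the entries of B_list are the interval-averaged matrices of
    (5.6)-(5.7); (l_list[j], lr_list[j]) resolve delay j as in (5.9)-(5.11)."""
    n = Ai.shape[0]
    dim = n * (lmax + 1)
    D = np.zeros((dim, dim))
    P = expm(Ai * dt)                          # e^{A_i dt}, cf. (5.16)
    G = (P - np.eye(n)) @ np.linalg.inv(Ai)    # common factor of (5.17)-(5.18)
    D[:n, :n] = P
    for Bji, l, lr in zip(B_list, l_list, lr_list):
        D[:n, n * l:n * (l + 1)] += G @ Bji * (1 - lr)    # M_{j, l_{j,i}}
        D[:n, n * (l + 1):n * (l + 2)] += G @ Bji * lr    # M_{j, l_{j,i}+1}
    D[n:, :dim - n] = np.eye(n * lmax)         # shift the stored history
    return D
```

The transition matrix (5.24) is then the ordered product of the K step matrices over one period, and its eigenvalues approximate the Floquet multipliers.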

5.4 Applications

In this section, different DDEs are analyzed by using the semidiscretization method.
First, a scalar DDE is considered with a periodic coefficient and a single delay is
considered, followed by an autonomous scalar system with two constant delays.
Then, the damped delayed Mathieu equation with a periodic coefficient in the delay
term is analyzed. For each of these systems, it is shown as to how the stable and
unstable regions can be identified in the considered parameter space. Finally, CSS
milling and VSS milling are considered, and the associated stability diagrams are
constructed and discussed.

5.4.1 Scalar Delay Differential Equation with Periodic Coefficient and One Delay

As a special case of (5.3), consider the scalar differential system with a constant
coefficient, a periodic coefficient, and a constant delay term

ẋ(t) = (a + ε cos(ω t))x(t) + bx(t − τ ), (5.25)

where a, ε , and b are scalar parameters; T = 2π /ω is the period of the system;


and τ is the time delay. This system (5.25) has a parametric excitation term, and
in one of the later examples, the delayed term also has a parametric excitation
effect. The solution of system (5.25) and the associated monodromy operator can-
not be constructed in closed form. An approximation for this monodromy operator

is obtained by using the semidiscretization method and used to construct the stabil-
ity charts for this system. Here, for convenience, the authors fix the time period to
T = 2 and the time delay to τ = 1, and construct stability charts in the a − b plane
for different values of ε .
First, the authors divide the period T into K = 20 equal intervals of length Δt =
T /K = 0.1 and introduce the discrete time scale ti = i Δt. For t ∈ [ti ,ti+1 ], the periodic
coefficient is approximated by the piecewise constant values
ai ≈ a + (1/Δt) ∫_{ti}^{ti+1} ε cos(ω s) ds (5.26)

and the delayed term is approximated as

x(t − τ ) ≈ x(ti + 0.5 Δt − τ ) ≈ (1 − lr)x(ti−l ) + lr x(ti−l−1 ), (5.27)

where
lr = mod(τ − 0.5 Δt, Δt)/Δt = mod(0.95, 0.1)/0.1 = 0.5 (5.28)
and
l = τ/Δt − lr − 0.5 = 1/0.1 − 0.5 − 0.5 = 9. (5.29)
Then, over each time interval t ∈ [ti, ti+1] for i = 0, 1, 2, . . . , K − 1, system (5.25) can be approximated as

ẋ(t) = ai x(t) + b[(1 − lr)x(ti−l) + lr x(ti−l−1)]. (5.30)

On substitution of the values for lr and l, one obtains

ẋ(t) = ai x(t) + 0.5b x(ti−9) + 0.5b x(ti−10). (5.31)

For t ∈ [ti, ti+1], the solution of (5.31) takes the form

x(t) = e^{ai(t−ti)} xi + 0.5 ai^{−1} b (e^{ai(t−ti)} − 1)[xi−9 + xi−10]. (5.32)

When t = ti+1, (5.32) leads to

xi+1 = x(ti+1) = e^{ai Δt} xi + 0.5 ai^{−1} b (e^{ai Δt} − 1)(xi−9 + xi−10). (5.33)

After introducing the augmented state vector

Yi = (xi , xi−1 , . . . , xi−10 )T , (5.34)

and combining (5.33) and (5.34), one can construct the linear discrete map

Yi+1 = Di Yi , (5.35)

where the matrix Di is given by


D_i = \begin{bmatrix}
e^{a_i Δt} & 0 & \cdots & 0 & m_i & m_i \\
1 & 0 & \cdots & 0 & 0 & 0 \\
0 & 1 & \cdots & 0 & 0 & 0 \\
\vdots & & \ddots & & & \vdots \\
0 & 0 & \cdots & 1 & 0 & 0 \\
0 & 0 & \cdots & 0 & 1 & 0
\end{bmatrix} (5.36)

with m_i = 0.5 a_i^{−1} b (e^{a_i Δt} − 1).

After K consecutive applications of (5.35), one ends up with

YK = DK−1 · · · D1 D0 Y0, (5.37)

from which the transition matrix can be identified as

Φ = DK−1 · · · D1 D0. (5.38)

Here, K = 20; a minimal numerical sketch of the whole construction is given below.
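The construction just described condenses into a few lines of numerical code; the following sketch (parameter defaults as in the text, and the averaged coefficient ai assumed nonzero) returns the spectral radius of Φ, so a point (a, b) is stable when the returned value is below one.

```python
import numpy as np

def spectral_radius(a, b, eps, T=2.0, tau=1.0, K=20):
    """Spectral radius of the approximate monodromy matrix of (5.25)."""
    w = 2.0 * np.pi / T
    dt = T / K
    lr = ((tau - 0.5 * dt) % dt) / dt            # = 0.5 here, cf. (5.28)
    l = int(round(tau / dt - 0.5 - lr))          # = 9 here, cf. (5.29)
    Phi = np.eye(l + 2)
    for i in range(K):
        # exact interval average of a + eps*cos(w s), cf. (5.26)
        ai = a + eps * (np.sin(w * (i + 1) * dt) - np.sin(w * i * dt)) / (w * dt)
        mi = b * (np.exp(ai * dt) - 1.0) / ai    # (1-lr)*mi, lr*mi give (5.36)
        D = np.zeros((l + 2, l + 2))
        D[0, 0] = np.exp(ai * dt)
        D[0, l] = (1.0 - lr) * mi
        D[0, l + 1] = lr * mi
        D[1:, :-1] = np.eye(l + 1)               # shift the stored history
        Phi = D @ Phi
    return max(abs(np.linalg.eigvals(Phi)))

# Example: sweep b at fixed a and eps to bracket the stability boundary.
print([round(spectral_radius(-1.0, b, 5.0), 3) for b in (-2.0, 0.0, 2.0)])
```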
Equation (5.37) provides an approximate discrete map of system (5.25). One can
determine the stability of the system by checking the eigenvalues of the monodromy
matrix Φ. The boundaries between stable and unstable regions in the parameter
space of a and b for different ε are shown in Fig. 5.4. The three possible routes for

Fig. 5.4 Stability charts for system (5.25) in the (a, b) plane for ε = 0, 2, 5, and 10, obtained by the semidiscretization method for T = 2, τ = 1, and K = 20. Stable regions are shaded and unstable regions are not shaded

Fig. 5.5 The trajectories of Floquet multipliers in the complex plane for b = −4, −2, and 2. The markers ◦, ∗, and + correspond to secondary Hopf, period-doubling, and cyclic-fold bifurcations, respectively

stability loss are demonstrated for the case ε = 5. The parameter b is fixed at −4 (◦),
−2 (∗) and 2 (+), while the parameter a is increased. The corresponding locations
of the critical Floquet multipliers are shown in Fig. 5.5. For the case b = −4 (◦), a
secondary Hopf bifurcation occurs. For the case b = −2 (∗), a period doubling (flip)
bifurcation occurs, and for the case b = 2 (+), a cyclic-fold bifurcation occurs.

5.4.2 Scalar Autonomous Delay Differential Equation with Two Delays

In this section, the authors consider a DDE with two delays. Such a system is used to describe the dynamics of many biological and physical systems in which feedback acts with delays. The equation of interest is

ẋ(t) = ax(t) + bx(t − τ1 ) + cx(t − τ2 ). (5.39)

This system is autonomous; that is, it does not contain any time-periodic coefficients or terms, and the time delays are also constant. Therefore, the period T and the length of the discretization interval Δt = T/K can be chosen arbitrarily. Let Δt = 2π/K and the discrete time scale be ti = i Δt. For t ∈ [ti, ti+1], the delay terms are approximated as

x(t − τ1) ≈ x(ti + 0.5 Δt − τ1) ≈ (1 − lr1) xi+1−l1 + lr1 xi−l1,
x(t − τ2) ≈ x(ti + 0.5 Δt − τ2) ≈ (1 − lr2) xi+1−l2 + lr2 xi−l2, (5.40)

where lr1 = mod(τ1 − 0.5 Δt, Δt)/Δt, lr2 = mod(τ2 − 0.5 Δt, Δt)/Δt, l1 = τ1/Δt − 0.5 − lr1, and l2 = τ2/Δt − 0.5 − lr2.

Similar to the system with one delay treated previously, one can construct the
discrete map
Yi+1 = Di Yi , (5.41)
where the matrix Di is given by
D_i = \begin{bmatrix}
e^{a Δt} & 0 & \cdots & 0 & 0 & 0 \\
1 & 0 & \cdots & 0 & 0 & 0 \\
0 & 1 & \cdots & 0 & 0 & 0 \\
\vdots & & \ddots & & & \vdots \\
0 & 0 & \cdots & 1 & 0 & 0 \\
0 & 0 & \cdots & 0 & 1 & 0
\end{bmatrix}
+ \sum_{j=1}^{2} \begin{bmatrix}
0 & \cdots & M_{j,l_j} & M_{j,l_j+1} & \cdots & 0 \\
0 & \cdots & 0 & 0 & \cdots & 0 \\
\vdots & & & & & \vdots \\
0 & \cdots & 0 & 0 & \cdots & 0
\end{bmatrix} (5.42)

with

M_{1,l_1} = (1 − lr_1) · a^{−1} b (e^{a Δt} − 1),
M_{1,l_1+1} = lr_1 · a^{−1} b (e^{a Δt} − 1),
M_{2,l_2} = (1 − lr_2) · a^{−1} c (e^{a Δt} − 1),
M_{2,l_2+1} = lr_2 · a^{−1} c (e^{a Δt} − 1).

In each discrete step, the transition matrix Di is the same as given above (the ele-
ments of Di do not depend on i). Hence, the stability information can be obtained
from an analysis of the eigenvalues of matrix Φ = Di .
The stability of system (5.39) can be determined by investigating the eigenvalues
of the transition matrix Φ. The corresponding stability charts in the parameter space
of τ1 and τ2 are presented in Fig. 5.6 for different values of the coefficients a, b,
and c.
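A standalone sketch for testing one point of such a chart follows; the parameter values are hypothetical, and the delay resolution follows the indexing convention of (5.12) rather than the shifted indices of (5.40). Because the system is autonomous, a single step matrix D suffices, and ρ(D) < 1 is equivalent to ρ(Φ) < 1.

```python
import numpy as np

a, b, c = 0.5, 0.5, 0.5           # hypothetical coefficients of (5.39)
tau1, tau2, dt = 1.0, 2.0, 0.05   # one point of the (tau1, tau2) plane

def resolve(tau):                 # (l, lr) as in (5.9)-(5.11)
    lr = ((tau - 0.5 * dt) % dt) / dt
    return int(round(tau / dt - 0.5 - lr)), lr

(l1, lr1), (l2, lr2) = resolve(tau1), resolve(tau2)
dim = max(l1, l2) + 2
D = np.zeros((dim, dim))
g = (np.exp(a * dt) - 1.0) / a    # a^{-1}(e^{a dt} - 1), scalar form of (5.17)
D[0, 0] = np.exp(a * dt)
D[0, l1] += (1 - lr1) * b * g
D[0, l1 + 1] += lr1 * b * g
D[0, l2] += (1 - lr2) * c * g
D[0, l2 + 1] += lr2 * c * g
D[1:, :-1] = np.eye(dim - 1)      # shift the stored history
print(max(abs(np.linalg.eigvals(D))) < 1.0)   # True if this point is stable
```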

5.4.3 Damped and Delayed Mathieu Equation

In order to investigate the combined effect of parametric excitation and a single


time delay, the damped and delayed Mathieu equation has been studied for differ-
ent parameter combinations in [16, 33]. Here, the authors consider the case, where
the coefficient of the delayed term has a time-periodic variation. The equation of
interest is

ẍ(t) + cẋ(t) + (a + ε1 cos(ω t))x(t) = (b + ε2 cos(ω t))x(t − τ ), (5.43)

where c is the damping coefficient, τ is the constant time delay, and the time
period is T = 2π /ω .
The system (5.43) can be rewritten in the state-space form


Fig. 5.6 Stability chart of system (5.39) obtained by semidiscretization method: (a) a = b = c =
0.5, (b) a = c = 0.5, b = 1, (c) a = c = 2, b = 1, and (d) b = c = 2, a = 1. Stable regions are shaded
and unstable regions are not shaded

Ẋ(t) = A(t)X(t) + B(t)X(t − τ) (5.44)

with

X = \begin{bmatrix} x \\ ẋ \end{bmatrix}, A(t) = \begin{bmatrix} 0 & 1 \\ −(a + ε1 cos(ωt)) & −c \end{bmatrix}, (5.45)

B(t) = \begin{bmatrix} 0 & 0 \\ b + ε2 cos(ωt) & 0 \end{bmatrix}. (5.46)
The application of the semidiscretization method to system (5.44) is similar to that for the autonomous DDE with one delay. The coefficient matrices A(t) and B(t) are approximated according to (5.6) and (5.7), and the procedure presented in Sect. 5.3 can be followed to construct the Floquet transition matrix Φ (a sketch of the interval averaging is given below). Then, the solution stability can be determined by analyzing the eigenvalues of Φ. The stability charts are presented in Fig. 5.7 for T = τ = 2π and different values of the coefficients c, ε1, and ε2. It is noted that c = 0 corresponds to an undamped system and c > 0 corresponds to a damped system. For fixed ε1 and ε2, the region of stability increases with increasing damping.
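For instance, the interval averages (5.6)–(5.7) can be evaluated in closed form for the cosine-modulated coefficients of (5.45)–(5.46). The sketch below (function name hypothetical) produces the pair (Ai, Bi) that would then be fed into a step-matrix assembly such as the one sketched at the end of Sect. 5.3; Ai is invertible whenever a + ε1⟨cos⟩ ≠ 0 on the interval.

```python
import numpy as np

def averaged_matrices(i, dt, a, b, c, eps1, eps2, w):
    """Interval averages of A(t) and B(t) from (5.45)-(5.46) over
    [t_i, t_{i+1}], using the exact integral of cos(w t)."""
    t0, t1 = i * dt, (i + 1) * dt
    avg_cos = (np.sin(w * t1) - np.sin(w * t0)) / (w * dt)
    Ai = np.array([[0.0, 1.0],
                   [-(a + eps1 * avg_cos), -c]])
    Bi = np.array([[0.0, 0.0],
                   [b + eps2 * avg_cos, 0.0]])
    return Ai, Bi
```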
In the first three subsections of Sect. 5.4, a scalar nonautonomous system, a scalar autonomous system, and a two-dimensional nonautonomous delay differential system have been considered, and the use of the semidiscretization method to determine the stability domains in each case has been illustrated. Next, two systems that arise in

Fig. 5.7 Stability charts of system (5.43) obtained by the semidiscretization method with τ = T = 2π, for (c, ε1 = ε2) = (0, 0), (0, 1), (0.1, 0), (0.1, 1), (0.1, 2), and (0.2, 2). Stable regions are shaded and unstable regions are not shaded

the context of milling dynamics are considered. These systems are more complex than the three considered previously, but the application of the semidiscretization method follows along the lines of Sect. 5.3.

5.4.4 CSS Milling Process: Nonautonomous DDE with Time-Periodic Delays

In this section, the model developed by Long and Balachandran [17] is briefly
revisited and the stability analysis, carried out for a CSS milling process based on
the semidiscretization method, is presented. In Fig. 5.8, a multi-degree-of-freedom
configuration representative of a workpiece–tool system is illustrated for milling
operations with a cylindrical end mill. For convenience, the X-direction is oriented
along the feed direction of the cutter. The vertical axis of the tool is oriented along
the Z-direction. The spindle rotational speed in rad s−1 is represented by Ω and the
angular position is represented by θ . The quantities θs and θe represent the entry
cutting angle and exit cutting angle, respectively, and these angles define the static
cutting zone. The forces Fx and Fy act on the cutter, and the forces Fu and Fv act on
the workpiece. The natural frequencies associated with the torsion modes and the
Z-direction vibration modes are expected to be higher than those associated with

Fig. 5.8 Workpiece–tool system model

the primary bending vibration modes along the X-direction and the Y -direction. For
this reason, only the vibration modes in the horizontal plane are considered here.
In developing these models, the modal properties of the tool and the workpiece are
assumed to be known, and hence, a system with a flexible tool and a flexible work-
piece can be represented by an equivalent lumped parameter system.
The governing equations are of the form

mx q̈x(t) + cx q̇x(t) + kx qx(t) = Fx(t; τ(t, j, z)),
my q̈y(t) + cy q̇y(t) + ky qy(t) = Fy(t; τ(t, j, z)),
mu q̈u(t) + cu q̇u(t) + ku qu(t) = Fu(t; τ(t, j, z)), (5.47)
mv q̈v(t) + cv q̇v(t) + kv qv(t) = Fv(t; τ(t, j, z)),

where both the tool and the workpiece have two degrees of freedom. The vari-
ables qx and qy , respectively, represent the tool dynamic displacements measured
along the X and the Y directions in a reference frame, whose origin is located
on the tool center and shares the rigid-body translation of the tool due to a con-
stant feed rate. The variables qu and qv represent the workpiece displacements
measured, respectively, along the U and V directions in a fixed reference frame.
The quantities mx , my , mu , mv are the modal masses, the quantities cx , cy , cu , cv are
the modal damping coefficients, and the quantities kx , ky , ku , kv are the modal stiff-
ness parameters associated with the motions along the X, Y , U, and V directions,

respectively. The cutting force components, which appear on the right-hand side
of the equations, are time-periodic functions. Furthermore, considering the feed
motion effect, a variable time delay is introduced in the governing equations through
the cutting-force components. A variable time delay is associated with each cutting
tooth and the number of variable time delays is equal to the number of flutes. This
variable time delay depends on the feed rate, the radius of the tool, the spatial loca-
tion along the Z-direction, and the spindle rotation speed.
System (5.47) can be put into the state-space form

Q̇(t) = A(t)Q(t) + ∑_{j=1}^{N} Bj(t)Q(t − τj(t)) + F(t), (5.48)

where Q = {qx qy qu qv q̇x q̇y q̇u q̇v}^T, A(t) is the coefficient matrix for the vector of present states

A(t) = \begin{bmatrix} 0 & I \\ −M^{−1}(K − K̂(t)) & −M^{−1}C \end{bmatrix} (5.49)

and Bj(t) is the coefficient matrix associated with the vectors of delayed states. These matrices are given by

Bj(t) = ∫_{z1(t,j)}^{z2(t,j)} \begin{bmatrix} 0 & 0 \\ −M^{−1}K̂^j(t, z) & 0 \end{bmatrix} dz, (5.50)

and the last term, associated with the inhomogeneous part, is

F(t) = \begin{bmatrix} 0 \\ K̄(t) f \end{bmatrix}, (5.51)

where M, C, and K are the mass matrix, the damping matrix, and the stiffness matrix, respectively. K̂(t), K̂^j(t, z), and K̄(t) are periodic coefficient matrices associated with the tooth pass excitation effect, and they take the following forms:
K̂(t) = ∑_{j=1}^{N} ∫_{z1(t,j)}^{z2(t,j)} \begin{bmatrix}
k^j_{11}(t,z) & k^j_{12}(t,z) & k^j_{11}(t,z) & k^j_{12}(t,z) \\
k^j_{21}(t,z) & k^j_{22}(t,z) & k^j_{21}(t,z) & k^j_{22}(t,z) \\
k^j_{11}(t,z) & k^j_{12}(t,z) & k^j_{11}(t,z) & k^j_{12}(t,z) \\
k^j_{21}(t,z) & k^j_{22}(t,z) & k^j_{21}(t,z) & k^j_{22}(t,z)
\end{bmatrix} dz, (5.52)

K̂^j(t, z) = \begin{bmatrix}
k^j_{11}(t,z) & k^j_{12}(t,z) & k^j_{11}(t,z) & k^j_{12}(t,z) \\
k^j_{21}(t,z) & k^j_{22}(t,z) & k^j_{21}(t,z) & k^j_{22}(t,z) \\
k^j_{11}(t,z) & k^j_{12}(t,z) & k^j_{11}(t,z) & k^j_{12}(t,z) \\
k^j_{21}(t,z) & k^j_{22}(t,z) & k^j_{21}(t,z) & k^j_{22}(t,z)
\end{bmatrix}, (5.53)

K̄(t) = \begin{bmatrix} k^j_{11}(t,z) & k^j_{21}(t,z) & k^j_{11}(t,z) & k^j_{21}(t,z) \end{bmatrix}^T (5.54)

with
\begin{bmatrix} k^j_{11}(t,z) & k^j_{12}(t,z) \\ k^j_{21}(t,z) & k^j_{22}(t,z) \end{bmatrix}
= \begin{bmatrix} −sin θ(t,j,z) & −cos θ(t,j,z) \\ cos θ(t,j,z) & sin θ(t,j,z) \end{bmatrix}
\begin{bmatrix} k_1 k_t \\ k_2 k_t \end{bmatrix}
\begin{bmatrix} −sin θ(t,j,z) & cos θ(t,j,z) \end{bmatrix} (5.55)

Due to the feed motion, the regenerative delay associated with each tooth is time
varying in the form
    τj(t) = 2πR / (N[ΩR + f cos θj]).          (5.56)
Matrices K̂(t) and K̂j(t, z) and the delays τj(t) are time periodic with the tooth
passing period T = 2π/(NΩ); hence, the period of system (5.48) is the tooth pass
period T, and the time step is Δt = T/K = 2π/(KNΩ). System (5.48) is identical to
(5.1), and the same approach discussed previously can be used to determine the stability.
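
To make the construction concrete, the following Python sketch assembles an approximate monodromy matrix by zeroth-order semidiscretization for a scalar test equation ẋ(t) = a(t)x(t) + b(t)x(t − τ(t)) with period-T coefficients and delay. The scalar setting and the function names are illustrative assumptions; this is not the code used for the computations reported below.

    import numpy as np

    def monodromy(a, b, tau, T, K, m):
        """Zeroth-order semidiscretization of x'(t) = a(t)x(t) + b(t)x(t - tau(t)).
        T is the common period, K the number of steps per period, and m the number
        of stored delayed values (m*T/K must cover the largest delay). Assumes
        a(t) != 0 on the grid."""
        dt = T / K
        Phi = np.eye(m + 1)
        for i in range(K):
            t = i * dt
            ai, bi = a(t), b(t)
            ri = int(round(tau(t) / dt))      # delay resolution in time steps
            P = np.zeros((m + 1, m + 1))
            P[1:, :-1] = np.eye(m)            # shift the stored history down
            P[0, 0] = np.exp(ai * dt)         # exact step, delayed term frozen
            P[0, ri] += (np.exp(ai * dt) - 1.0) / ai * bi
            Phi = P @ Phi
        return Phi

The motion is locally stable when all eigenvalues of the returned matrix lie inside the unit circle, that is, when max(abs(np.linalg.eigvals(Phi))) < 1.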
Stability charts are determined with a single degree-of-freedom system for down-milling
operations at a 25% immersion rate. The corresponding workpiece–tool system
modal parameters are given in Table 5.1. The tool and the cutting parameters
are provided in Table 5.2. Since the tool helix angle is zero in this case, the cutting
forces along the X-direction and the Y-direction do not depend on the normal rake
angle and the friction coefficient, neither of which is therefore shown in Table 5.2. The
numerical results are compared to experimental results published in [34].
In Fig. 5.9, the stability charts are presented for 25% immersion down-milling
operations. Dashed lines denote the stability boundaries obtained by the semidis-
cretization method. To obtain the stability lobes, the axial depth of cut and the spin-
dle speed are chosen as the control parameters. For a fixed spindle speed, the axial
depth of cut is increased gradually in a quasistatic manner. The corresponding pseu-
domonodromy matrix Φ is determined for each pair of the chosen spindle speed
and axial depth of cut values. The label “◦” (experiment) denotes that the milling
process is stable for the chosen experimental control parameters, namely, the axial
depth of cut and the spindle speed. The label “+” (experiment) denotes that the
milling process is unstable for the corresponding experimental control parameters.
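
In outline, the sweep just described can be organized as in the following sketch. The routine build_monodromy(speed, adoc), which assembles (5.48) for the chosen control parameters and returns the pseudomonodromy matrix, is a hypothetical helper, and the parameter grids are placeholders rather than the values used for Fig. 5.9.

    import numpy as np

    speeds = np.linspace(2.8e3, 4.4e3, 60)   # spindle speed grid (rpm), placeholder
    adocs = np.linspace(0.1, 5.0, 50)        # axial depth of cut grid (mm), placeholder
    stable = np.zeros((len(adocs), len(speeds)), dtype=bool)

    for j, speed in enumerate(speeds):
        for i, adoc in enumerate(adocs):     # quasistatic increase of the ADOC
            Phi = build_monodromy(speed, adoc)   # hypothetical assembler for (5.48)
            stable[i, j] = np.max(np.abs(np.linalg.eigvals(Phi))) < 1.0

The boundary of the stable region of this Boolean array traces out the stability lobes.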

Table 5.1 Modal parameters of the workpiece–tool system

    Mode            Frequency (Hz)   Damping (%)   Stiffness (N m⁻¹)   Mass (kg)
    Workpiece (X)   146.6            0.32          2.18 × 10⁶          2.573

Table 5.2 Tool and cutting parameters

    Helix angle (η)   Tooth number   Radius (mm)   Kt (MPa)   kn
    0°                1              9.53          550        0.364

[Figure: stability chart of axial depth of cut, ADOC (mm), versus spindle speed (krpm) in the range 2.8–4.4 krpm; the legend marks stable cuts (experiment), unstable cuts (experiment), and the semidiscretization boundary.]

Fig. 5.9 Stability predictions for 25% immersion down-milling operations

From this figure, one can see that the stability boundaries determined by the
semidiscretization method agree well with the experimental results.

5.4.5 VSS Milling Process: Nonautonomous DDE with Time-Periodic Delay

VSS milling can be used to suppress chatter that develops during conventional, con-
stant speed machining. During a VSS milling operation, the spindle speed is vari-
able and the time delay is time varying, as well. In this effort, the time variation is a
sinusoidal modulation of spindle speed given by

Ω (t) = Ω0 + Ω1 sin(ωmt) = Ω0 [1 + RVA sin(RVF · Ω0t)], (5.57)

where Ω0 is the nominal spindle speed, Ω1 is the amplitude of the speed variation,
ωm is the frequency of the speed variation, RVA = Ω1/Ω0 is the ratio of the speed
variation amplitude to the nominal spindle speed, and RVF = ωm/Ω0 is the ratio of
the speed variation frequency to the nominal spindle speed. In the case of a VSS
milling operation, the angular position θ(t, j, z) of tooth j at axial location z and
time t, introduced in (5.55), is determined by

    θ(t, j, z) = ∫₀ᵗ Ω(s) ds − (j − 1)(2π/N) − (tan η/R) z + θ0.          (5.58)
5 Stability of Systems with Time Periodic Delays 149

After substituting (5.57) into (5.58), one can obtain


    θ(t, j, z) = Ω0 t + (RVA/RVF)[1 − cos(ωm t)] − (j − 1)(2π/N) − (tan η/R) z + θ0.          (5.59)
The time delay, which was introduced in (5.47), is determined from

    ∫_{t−τ(t)}^{t} Ω(s) ds = 2π/N.          (5.60)

By substituting (5.57) into (5.60) and integrating, one can obtain

    Ω0 τ(t) + (Ω1/ωm) cos(ωm(t − τ(t))) − (Ω1/ωm) cos(ωm t) = 2π/N.          (5.61)

From (5.61), one cannot obtain a closed-form solution for τ(t). However, for “small”
RVA and “small” RVF, τ(t) can be approximated as

τ (t) ≈ τ0 [1 − (1 − RVA sin(ωmt − φ ))RVA sin(ωmt − φ )], (5.62)

where

    τ0 = 2π/(N Ω0).          (5.63)
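
Since (5.61) defines τ(t) only implicitly, the quality of the approximation (5.62) can be checked numerically. A minimal sketch, assuming the modulation is small enough for a fixed-point iteration on (5.61) to converge, is:

    import numpy as np

    def tau_implicit(t, Omega0, Omega1, wm, N, iters=50):
        """Solve (5.61) for tau(t) by fixed-point iteration, starting from tau0."""
        tau = 2.0 * np.pi / (N * Omega0)     # tau0 of (5.63)
        for _ in range(iters):
            tau = (2.0 * np.pi / N
                   - (Omega1 / wm) * (np.cos(wm * (t - tau)) - np.cos(wm * t))
                   ) / Omega0
        return tau

Evaluating tau_implicit over one modulation period and comparing it with (5.62) shows directly where the small-RVA approximation begins to degrade.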
Similar to the system presented for the CSS milling process, one can write the sys-
tem for the VSS milling process in compact form as

Q̇(t) = A(t)Q(t) + B1(t)Q(t − τ(t)) + F(t),          (5.64)

where A(t) is the coefficient matrix associated with present states and B1 is the
coefficient matrix associated with delayed states.
As in system (5.48), the coefficient matrices A(t), B1(t), and F(t) are piecewise
periodic functions with the period T. For the CSS milling process, this period is
T = 2π/(NΩ0). For the VSS milling process, this period is equal to the period of the
spindle speed modulation, T = 2π/ωm. Here, it is assumed that the modulation period
is an integer multiple of the nominal tooth passing period τ0 = 2π/(NΩ0) (otherwise,
the system is quasiperiodic). Referring to Fig. 5.8, the workpiece–tool system modal
parameters and the tool and cutting parameters are chosen as shown in Tables 5.3
and 5.4.
In Fig. 5.10, the stability charts are presented for 5% immersion down-milling
operations.

Table 5.3 Modal parameters of the workpiece–tool system

    Mode       Frequency (Hz)   Damping (%)   Stiffness (N m⁻¹)   Mass (kg)
    Tool (X)   729.07           1.07          9.14 × 10⁵          4.36 × 10⁻²

Table 5.4 Tool and cutting parameters

    Normal rake angle (φn)   Helix angle (η)   Tooth number   Radius (mm)   Kt (MPa)   kn     Cutting friction coefficient (μ)
    6°                       40°               2              6.35          600        0.42   0.2

[Figure: six stability charts, panels (a)–(f), of ADOC (mm) versus nominal spindle speed (krpm) over the range 2–5 krpm.]

Fig. 5.10 Stability predictions for 5% immersion down-milling operations: (a) RVA = 0.1, RVF =
0.1, (b) RVA = 0.1, RVF = 0.2, (c) RVA = 0.2, RVF = 0.1, (d) RVA = 0.2, RVF = 0.2, (e) RVA =
0.2, RVF = 0.3, and (f) RVA = 0.3, RVF = 0.3; (—) for CSS and (- - -) for VSS

Throughout the considered spindle speed range, the range of stable axial depths of
cut (ADOC) for VSS milling is larger than that obtained for the corresponding CSS
milling operations. The results also show the robustness of the stability of the VSS
milling process with respect to the spindle speed. One can discern the differences
between the stability lobes of VSS and CSS operations for spindle speeds up to
5,000 rpm (a tooth pass frequency to first natural frequency ratio of 0.23), beyond
which the stability lobes are close to each other. For higher spindle speeds, the
results are as shown in Fig. 5.11. From this figure, one can infer that the stability
lobes for VSS milling are close to those obtained for CSS milling when RVA = 0.1
(Fig. 5.11a, b). For higher values of RVA, that is, 0.2 and 0.3, the stability charts
are as shown in Fig. 5.11c–f. The stability range is improved and this improvement
also extends to the high-speed range. However, the stable range of ADOC for
VSS operations is lower than that obtained for CSS operations in certain spindle
speed ranges.

[Figure: six stability charts, panels (a)–(f), of ADOC (mm) versus nominal spindle speed (krpm) over the range 8–20 krpm, each comparing the CSS and VSS boundaries.]

Fig. 5.11 Stability predictions for 5% immersion down-milling operations: (a) RVA = 0.1, RVF =
0.1, (b) RVA = 0.1, RVF = 0.2, (c) RVA = 0.2, RVF = 0.1, (d) RVA = 0.2, RVF = 0.2, (e) RVA =
0.2, RVF = 0.3, and (f) RVA = 0.3, RVF = 0.3; (—) for CSS and (- - -) for VSS

5.5 Closure

The first general discussion on Floquet theory for periodic DDEs is due to Stokes
[35]. Halanay [8] obtained a general form of the operator UT for a linear periodic
DDE with constant delay. For DDEs with periodic coefficients and time-varying
delays, there is no general form for the operator UT . Numerical schemes [9, 10, 36]
with an analytical basis are often used to approximate the monodromy matrix UT .
The semidiscretization method [15] is a numerical method that can be used to
construct the approximate monodromy matrix UT, as illustrated in this chapter.
This method can be applied to linear DDEs (including piecewise linear DDEs) with
periodic coefficients and periodically varying time delays. Since periodic solutions
are of interest, the ratio of the period of the coefficients to the period of the time
delay, T0/Tj in (5.1), needs to be a rational number. The monodromy matrix then
needs to be constructed over the larger period, and the information obtained from
the analysis can be used to assess local stability. The cost of the computations can
be high when the eigenvalues of a large monodromy matrix are to be computed. In
order to improve the efficiency, Elbeyly and Sun [37] presented the improved
zeroth-order and first-order
semidiscretization methods, while Insperger et al. [38] have discussed the rate of
convergence for zeroth-, first-, and higher order semidiscretization schemes. If the

form of the operator UT (or the monodromy operator) for DDEs with time-periodic
coefficients and delays can be represented in a general form as in the case of time-
periodic DDEs with constant delays [8], one can use this general form to construct
an approximate monodromy matrix and study the convergence and efficiency of
computations.

Acknowledgments X.-H. Long gratefully acknowledges the support received through 973 Grant
No. 2005CB724101. T. Insperger was supported by the HNSF under grant no. OTKA K72911 and
the János Bolyai Research Scholarship of the HAS.

References

1. Hale, J.K., History of delay equations, In: Arino, O., Hbid, M.L., and Ait Dads, E. (eds) Delay
differential equations and applications. Springer, Berlin, 2006, 1–28.
2. Bellman, R. and Cooke, K.L., Differential–difference equations, Academic, New York, NY,
1963.
3. Stépán, G., Retarded dynamical systems, Longman: Harlow, UK, 1989.
4. Altintas, Y. and Budak, E., Analytical prediction of stability lobes in milling, Annals of CIRP
44, 1995, 357–362.
5. Zhao, M.X. and Balachandran, B., Dynamics and stability of milling process, International
Journal of Solids and Structures 38, 2001, 2233–2248.
6. Hale, J.K. and Lunel, S. M., Introduction to functional differential equations, Springer,
New York, NY, 1993.
7. Hahn, W., On difference–differential equations with periodic coefficients, Journal of Mathe-
matical Analysis and Applications 3, 1961, 70–101.
8. Halanay, A., Differential equations: stability, oscillations, time lags, Academic, New York,
NY, 1966.
9. Butcher, E.A., Ma, H.T., Bueler, E., Averina, V., and Szabo, Z., Stability of linear time-
periodic delay-differential equations via Chebyshev polynomials, International Journal for
Numerical Methods in Engineering 59, 2004, 895–922.
10. Bayly, P.V., Halley, J.E., Mann, B.P., and Davis, M.A., Stability of interrupted cutting by
temporal finite element analysis, Journal of Manufacturing Science and Engineering 125,
2003, 220–225.
11. Insperger, T., Mann, B.P., Stépán, G., and Bayly, P.V., Stability of up-milling and down-
milling, part 1: alternative analytical methods, International Journal of Machine Tools and
Manufacture 43, 2003, 25–34.
12. Mann, B.P., Bayly, P.V., Davies, M.A., and Halley, J.E., Limit cycles, bifurcations, and accu-
racy of the milling process, Journal of Sound and Vibration, 277, 2004, 31–48.
13. Yilmaz, A., Al-Regib, E., and Ni, J., Machine tool chatter suppression by multi-level random
spindle speed variation, ASME Journal of Manufacturing Science and Engineering 124, 2002,
208–216.
14. Minis, I. and Yanushevsky, R., A new theoretical approach for the prediction of machine tool
chatter in milling, ASME Journal of Engineering for Industry 115, 1993, 1–8.
15. Insperger, T. and Stépán, G., Semi-discretization method for delayed systems, International
Journal of Numerical Methods in Engineering 55, 2002, 503–518.
16. Insperger, T. and Stépán, G., Updated semi-discretization method for periodic delay-
differential equations with discrete delay, International Journal of Numerical Methods in
Engineering 61(1), 2004, 117–141.
17. Long, X.H. and Balachandran, B. Stability of milling process, Nonlinear Dynamics 49, 2007,
349–359.

18. Long, X.H., Balachandran, B., and Mann, B.P. Dynamics of milling processes with variable
time delays, Nonlinear Dynamics 47, 2007, 49–63.
19. Liu, Z. and Liao, L., Existence and global exponential stability of periodic solution of cellular
neural networks with time-varying delays, Journal of Mathematical Analysis and Applica-
tions 290, 2004, 247–262.
20. Ruan, S., Delay differential equations in single species dynamics, In: Arino, O., Hbid, M.L.,
and Ait Dads, E. (eds) Delay differential equations and applications. Springer, Berlin Heidelberg
New York, 2006, 477–518.
21. Schley, D. and Gourley, S.A., Linear stability criteria for population models with periodically
perturbed delays, Journal of Mathematical Biology 40, 2000, 500–524.
22. Jiang, M.H., Shen, Y., and Liao, X.X., Global stability of periodic solution for bidirectional
associative memory neural networks with varying-time delay, Applied Mathematics and Com-
putation 182, 2006, 509–520.
23. Zhou, Q.H., Sun, J.H., and Chen, G.R., Global exponential stability and periodic oscillations
of reaction-diffusion BAM neural networks with periodic coefficients and general delays,
International Journal of Bifurcation and Chaos 17, 2007, 129–142.
24. Altintas, Y. and Chan, P.K., In-process detection and suppression of chatter in milling, Inter-
national Journal of Machine Tools and Manufacture 32, 1992, 329–347.
25. Radulescu, R., Kapoor, S.G., and DeVor, R.E., An investigation of variable spindle speed face
milling for tool–work structures with complex dynamics, part 1: simulation results, ASME
Journal of Manufacturing Science and Engineering 119, 1997, 266–272.
26. Radulescu, R., Kapoor, S.G., and DeVor, R.E., An investigation of variable spindle speed face
milling for tool–work structures with complex dynamics, part 2: physical explanation, ASME
Journal of Manufacturing Science and Engineering 119, 1997, 273–280.
27. Tsao, T.C., McCarthy, M.W., and Kapoor, S.G., A new approach to stability analysis of vari-
able speed machining systems, International Journal of Machine Tools and Manufacture 33,
1993, 791–808.
28. Sastry, S., Kapoor, S.G., DeVor, R.E., and Dullerud, G.E., Chatter stability analysis of the
variable speed face-milling process, ASME Journal of Manufacturing Science and Engineer-
ing 123, 2001, 753–756.
29. Sastry, S. Kapoor, S.G., and DeVor, R.E., Floquet theory based approach for stability analy-
sis of the variable speed face-milling process, ASME Journal of Manufacturing Science and
Engineering 124, 2002, 10–17.
30. Long, X.H. and Balachandran, B., Stability of up-milling and down-milling operations with
variable spindle speed, Journal of Vibration and Control, 2008, accepted for publication.
31. Krasovskii, N.N., Stability of motion, Stanford University Press, Palo Alto, CA, 1963.
32. Nayfeh, A.H. and Balachandran, B., Applied nonlinear dynamics: Analytical, computational,
and experimental methods, Wiley, New York, NY, 1995.
33. Insperger, T. and Stépán, G., Stability of the damped Mathieu equation with time delay, Jour-
nal of Dynamics System, Measurement and Control, 125, 2003, 166–171.
34. Mann, B.P., Insperger, T., Bayly, P.V., and Stépán, G., Stability of up-milling and down-
milling, Part 2: Experimental verification, International Journal of Machine Tools and Man-
ufacture 43, 2003, 35–40.
35. Stokes, A.P., A Floquet theory for functional differential equations, The Proceedings of the
National Academy of Sciences U.S.A. 48, 1962, 1330–1334.
36. Gilsinn, D.E. and Potra, F.A., Integral operators and delay differential equations, Journal
of Integral Equations and Applications 18, 2006, 297–336.
37. Elbeyly, O. and Sun, J.Q., On the semi-discretization method for feedback control design of
linear systems with time delay, Journal of Sound and Vibration 273, 2004, 429–440.
38. Insperger, T., Stépán, G., and Turi, J., On the higher-order semi-discretizations for periodic
delayed systems, Journal of Sound and Vibration 313, 2008, 334–341.
Chapter 6
Bifurcations, Center Manifolds, and Periodic
Solutions

David E. Gilsinn

Contribution of the National Institute of Standards and Technology, a Federal Agency. Not subject to copyright.

Abstract Nonlinear time-delay differential equations are well known to have arisen
in models in physiology, biology, and population dynamics. These delay differential
equations (DDEs) usually have parameters in their formulation. How the nature of
the solutions change as the parameters vary is crucial to understanding the underly-
ing physical processes. When the DDE is reduced, at an equilibrium point, to leading
linear terms and the remaining nonlinear terms, the eigenvalues of the leading coef-
ficients indicate the nature of the solutions in the neighborhood of the equilibrium
point. If there are any eigenvalues with zero real parts, periodic solutions can arise.
One way in which this can happen is through a bifurcation process called a Hopf
bifurcation in which a parameter passes through a critical value and the solutions
change from equilibrium solutions to periodic solutions. This chapter describes a
method of decomposing the DDE into a form that isolates the study of the periodic
solutions arising from a Hopf bifurcation to the study of a reduced size differential
equation on a surface, called a center manifold. The method will be illustrated by
Hopf bifurcation that arises in machine tool dynamics, which leads to a machining
instability called regenerative chatter.

Keywords: Center manifolds · Delay differential equations · Exponential polynomials · Hopf bifurcation · Limit cycle · Machine tool chatter · Normal form · Semigroup of operators · Subcritical bifurcation

6.1 Background

With the advent of new technologies for measurement instrumentation, it has


become possible to detect time delays in feedback signals that affect a physical

system’s performance. System models in a number of fields have been investi-


gated in which time delays have been introduced in order to have the output of the
models more closely reflect the measured performance. These fields have included
physiology, biology, and population dynamics (see an der Heiden [2], Kuang [23],
and MacDonald [24]).
In recent years, time delays have arisen in models of machine tool dynamics.
In particular, a phenomenon called regenerative chatter is being heavily studied.
Regenerative chatter can be recognized on a manufacturing plant floor by a
characteristic high-pitched squealing sound, distinctive marks on the workpiece, and by
undulated or dissected chips (see Tlusty [32]). It is a self-excited oscillation of the
cutting tool relative to the workpiece during machining. Self-excited oscillations,
mathematically called limit cycles or isolated periodic solutions, reflect the fact that
there are nonlinearities in the physical system being modeled that have to be taken
into account. For further reading on delay differential equations (DDEs) in turning
or numerically controlled lathe operations the reader is referred to Kalmár-Nagy
et al. [20] and [21]. For problems in drilling see Stone and Askari [29] and Stone
and Campbell [30]. Finally, for problems in milling operations see Balachandran [5]
and Balachandran and Zhao [6]. The modeling of regenerative chatter arose in work
at the National Institute of Standards and Technology (NIST) in conjunction with
work related to error control and measurement for numerically controlled machining.
Along with the nonlinearities, the differential equations, whether ordinary, partial,
or delay equations, that model the physical processes usually depend on parameters
that have physical significance, such as mass, fundamental system frequency,
nonlinear gains, and levels of external excitation. Changes, even small ones, in many
of these system parameters can drastically change the qualitative nature of the sys-
tem model solutions. In this chapter, we will examine the effect that variations of
these parameters have on the nature and the number of solutions to a class of non-
linear DDEs. The changing nature of solutions to a differential equation is often
referred to as a bifurcation, although formally the concept of bifurcation refers to
parameter space analysis. The term bifurcation means a qualitative change in the
number and types of solutions of a system depending on the variation of one or more
parameters on which the system depends. In this chapter, we will be concerned with
bifurcations in the nature of solutions to a DDE that occur at certain points in the
space of parameters, called Hopf bifurcation points. The bifurcations that arise will
be called Hopf bifurcations.
From an assumed earlier course in differential equations, it should be clear to
the reader that the eigenvalues of the linear portion of the state equations are an
indicator of the nature of the solutions. For example, if all of the eigenvalues have
negative real parts then we can expect the solutions to be stable in some sense and if
any of them have positive real parts then we can expect some instabilities in the sys-
tem. What happens if any of the eigenvalues have zero real parts? This is where, one
might say, the mathematical fun begins, because these eigenvalues indicate that there
are likely to be oscillatory effects showing up in the solutions. The game then
is to first determine those system parameters that lead to eigenvalues with zero real
parts. The next step in analyzing a system of differential equations, that depends on

parameters, is to write the system in terms of its linear part and the remaining non-
linear part and then to decompose it in order to isolate those equation components
most directly affected by the eigenvalues with zero real parts and those equations
affected by the eigenvalues with nonzero real parts. This same approach applies
to problems both in ordinary differential equations as well as in DDEs. Once this
decomposition has been developed we can then concentrate our effort on studying
the component equations related to the eigenvalues with zero real parts and apply
methods to simplify them.
We will assume at this point that a time-delay differential equation modeling a
physical phenomenon has been written with linear and nonlinear terms. Although
DDEs come in many forms, in this chapter we will only consider delay equations of
the form
    dz/dt (t, μ) = U(μ) z(t, μ) + V(μ) z(t − σ, μ) + f (z(t, μ), z(t − σ), μ),          (6.1)
where z ∈ Rn , the space of n-dimensional real numbers; U and V , the coefficient
matrices of the linear terms, are n × n matrices; f ∈ Rn , f (0, 0, μ ) = 0, is a nonlin-
ear function; μ is a system parameter; and σ , s ∈ R. For most practical problems
we can assume the f function in (6.1) is sufficiently differentiable with respect to
the first and second variables and with respect to the parameter μ . Let C0 be the
class of continuous functions on [−σ , 0] and let z(0) = z0 ∈ Rn . Then, there exists,
at least locally, a unique solution to (6.1) that is not only continuous but is also
differentiable with respect to μ . For a full discussion of the existence, uniqueness,
and continuity questions for DDEs the reader is referred to Hale and Lunel [15].
For ordinary differential equations see the comparable results in Cronin [10]. For
the rest of this chapter we will assume that, for both ordinary and delay differen-
tial equations, unique solutions exist and that they are continuous with respect to
parameters.
Although the results discussed in this chapter can be extended to higher dimen-
sion spaces, we will mainly be interested in problems with n = 2 that are dependent
on a single parameter. We will also concentrate on bifurcations in the neighborhood
of the z(t) ≡ 0 solution. This is clearly an equilibrium point of (6.1) and is referred
to as a local bifurcation point. For a discussion of various classes of bifurcations
see Nayfeh and Balachandran [26]. Since machine tool chatter occurs when self-
oscillating solutions emanate from equilibrium points, i.e., stable cutting, we will
concentrate in this chapter on a class of bifurcations called Hopf (more properly
referenced as Poincaré–Andronov–Hopf in Wiggins [34]) bifurcations. These are
bifurcations in which a family of isolated periodic solutions arises as the system
parameters change and the eigenvalues cross the imaginary axis. As earlier noted,
the equilibrium points at which Hopf bifurcations occur are sometimes referred to
as Hopf points. The occurrence of Hopf bifurcations depends on the eigenvalues of
the linear portion of (6.1), given by
    dz/dt (t, μ) = U(μ) z(t, μ) + V(μ) z(t − σ, μ),          (6.2)
in which at least one of the eigenvalues of this problem has a zero real part.

As in ordinary differential equations, the eigenvalues of the linear system tell


the nature of the stability of solutions of both (6.1) and (6.2). In ordinary differen-
tial equations the eigenvalues are computed from the characteristic polynomial and
in DDEs the eigenvalues arise from an equation called the characteristic equation
associated with the linear equation (6.2). This equation is a transcendental equation
with an infinite number of solutions called the eigenvalues of (6.2). We will assume
that z(t) ≡ 0 is an equilibrium point of (6.1) and that, in the neighborhood of this
equilibrium point, (6.2) has a family of pairs of eigenvalues, λ (μ ), λ (μ ), of (6.2)
such that λ (μ ) = α (μ )+iω (μ ), where α , ω are real, α (0) = 0, ω (μ ) > 0, α (0) =
0. This last condition is called a transversality condition and implies that the family
of eigenvalues λ (μ ) = α (μ ) + iω (μ ) is passing across the imaginary axis. Under
these conditions a Hopf bifurcation occurs, where periodic solutions arise from the
equilibrium point. These conditions also apply in the case of ordinary differential
equations.
In both ordinary and delay differential equations, the nature of the stability of
these bifurcating periodic solutions at Hopf points can be more easily studied if
(6.1), in the case of DDEs, can be reduced to a simpler form in the vicinity of the
bifurcation point. We will develop a simplification technique comparable to that
used in ordinary differential equations. It reduces the study of the stability of the
bifurcating periodic solutions to the study of periodic solutions of a simplified sys-
tem, called a normal form, on a surface, called a center manifold. Center manifolds
arise when the real parts of some eigenvalues are zero. A center manifold is an
invariant manifold in that, if solutions to (6.1) begin on the manifold, they remain
on the manifold. This manifold is usually of lower dimension than the space and
the nature of the stability of the equilibrium point depends on the projected form
of (6.1) on the manifold. The projection of (6.1) onto the center manifold usually
leads to a lower order system and the conversion of that system to a normal form
provides a means of studying the stability of the bifurcating periodic solutions. In
fact we will see that, for the example machining problem, the analysis will reduce
to a problem of solving a simplified approximate ordinary differential equation on a
center manifold of dimension two. The reduction process is carried out in a number
of steps based on the approaches of Hassard et al. [17] and Wiggins [34].
The bifurcation analysis developed in this chapter depends on three simplifica-
tion steps. In the first step, we will show how the DDE (6.1) can be transformed
into three equations, where the eigenvalues of the first two have zero real parts and
the eigenvalues of the third have negative real parts. In the process of developing
these equations, we will show how the transformations involved are analogous to
those used in ordinary differential equations. In the second simplification step, we
will show the form that (6.1) takes on the center manifold and finally, in the third
step, we will see what the normal form looks like and, from this, how the bifurcating
periodic solutions are developed. We will develop the first decomposition more thor-
oughly because it exemplifies the analogies between ordinary and delay differential
equations. The other simplifications will be given as formulas, but an example will
be given at the end in which the details of the transformations in a particular case
are developed and used to predict stability of the bifurcating solutions. For a more

automated approach to computing a center manifold, using a symbolic manipulation


program, the reader is referred to Campbell [8] as well as to the eighth chapter of
this book.
This is a note to the reader. In this sixth chapter, we will be dealing with differ-
ential equations, both ordinary and delay, that have complex eigenvalues and eigen-
vectors. Although the differential equations associated with the real-world models
are most often formulated in the real space Rn , these differential equations can be
viewed as being embedded in the complex space Cn . This process is called complex-
ification and allows the differential equation to be studied or solved in the complex
space. At the end, appropriate parts of the complex solution are identified with solu-
tions to the original differential equation in the real domain. A thorough discussion
of this process is given in Hirsch and Smale [19]. We will not be concerned with
the formal embedding of the real differential equations in the complex domain, but
just to note that it is possible and causes no difficulties with what we will be talking
about in this chapter. We only bring this up so that the reader may understand why
we seem to flip back and forth between real and complex equations. We are simply
working with this complexification process behind the scenes without writing out
the complete details. As we work through the examples, the reader will see that it is
quite natural to exist in both domains and see that the final result leads to the desired
approximate solution of the original DDE.
The chapter is divided as follows. In Sect. 6.2, we will show how the adjoint to
a linear ordinary differential equation can be used to naturally generate a bilinear
form that acts as an inner product substitute and introduces geometry to a function
space. This bilinear form is then used to define an orthogonality property that is used
to decompose the differential equations into those equations that have eigenvalues
with zero real parts and those that have nonzero real parts. We first introduce these
ideas for ordinary differential equations in order to show that the ideas are models
for the analogous ideas for DDEs. In Sect. 6.3, we will show how this decomposi-
tion works for an ordinary differential equation example. In Sect. 6.4, we will show
how a DDE can be formulated as an operator equation that has a form similar to an
ordinary differential equation. Within that section we will also introduce a bilinear
form by way of an adjoint equation that is analogous to the one in ordinary dif-
ferential equation and is used in a similar manner to decompose the DDE into a
system of operator equations that have components dependent on the eigenvalues
of the characteristic equation with zero real parts and those with nonzero real parts.
In Sect. 6.5, we start by introducing the main example of a DDE that we will
consider in this chapter. We will show how it is reduced to an operator equation
and decomposed into components dependent on eigenvalues with zero real parts and
those dependent on nonzero real parts. In Sect. 6.6, we will introduce the general
formulas needed in order to compute the center manifold, the normal form of the
DDE on the center manifold, and the bifurcated periodic solution for the DDE on
the manifold. In Sect. 6.7 we will continue with the main example and develop the
form of the center manifold, the normal form for the example on the center mani-
fold, and finally the resulting periodic solution on the center manifold. In Sect. 6.8,
we show the results of numerical simulations of the example DDE in the vicinity

of the bifurcation points and show that there is a possibility of unstable behavior
for values of system parameters that extend into parameter regions, which would
otherwise be considered stable.

6.2 Decomposing Ordinary Differential Equations Using Adjoints

Many results for DDEs are direct analogies of results in ordinary differential equa-
tions. In this section, we will review some properties of ordinary differential equa-
tions that motivate analogous properties in DDEs. We will break the decomposition
process down into five basic steps. Later we will show that analogous five steps can
be used to decompose a DDE.

6.2.1 Step 1: Form the Vector Equation

We begin by considering the differential equation


    dz/dt (t) = A z(t) + f (z(t), μ),          (6.3)
z ∈ Rn , A an n × n real matrix, f ∈ Rn with locally bounded derivatives, −∞ < t < ∞.
The homogeneous part is given by
    dz/dt (t) = A z(t).          (6.4)
We will stay as much as we can with real variables since many of the problems
leading to DDEs are formulated with real variables.
The solution of the linear system (6.4) with constant coefficients can be repre-
sented as a parametric operator of the form

T (t)q = zt (q) = eAt q, (6.5)

acting on a vector q ∈ Rn . T (t) is said to be a group of operators since T (t1 + t2 ) =


T (t1 )T (t2 ) and T (t)T (−t) = I, the identity. The family of operators would be called
a semigroup if an identity exists but there are no inverse elements. In Fig. 6.1 we
show two points of view about solutions to differential equations. Traditionally we
look at solutions to ordinary differential equations as trajectories beginning with an
initial vector q and taking on the value z(t) at some time t > 0, say. There is a sense
of physical meaning here such as the trajectory of a ball. However, the solutions can
also be viewed as t-dependent maps T (t) in Rn of initial vectors q. They can then be
thought of as a flow in Rn space. This is a point of view taken by Arnold [3]. It will
be this point of view that will turn out more fruitful when we get to DDEs.
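
The group properties of (6.5) are easy to verify numerically. The following short check, for an arbitrarily chosen test matrix A, is included only to make the operator point of view concrete:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # arbitrary test matrix
    T = lambda t: expm(A * t)                 # the solution operator T(t) = e^{At}

    t1, t2 = 0.7, 1.3
    assert np.allclose(T(t1 + t2), T(t1) @ T(t2))   # T(t1 + t2) = T(t1)T(t2)
    assert np.allclose(T(t1) @ T(-t1), np.eye(2))   # T(t)T(-t) = I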

Fig. 6.1 The solution operator maps functions in Rn to functions in Rn

In most ordinary differential equation textbooks, a linear transformation is used


to write the matrix A in a form that separates it into blocks with eigenvalues having
positive, zero, or negative real parts, called a Jordan normal form. However, in this
section, we will introduce a change of coordinates procedure that accomplishes the
decomposition by way of the adjoint equation since this translates to an analogous
method for DDEs. This process allows us to introduce a geometric point of view
to the decomposition of (6.3). In particular, we can use the geometric property of
orthogonality as a tool to decompose (6.3).

6.2.2 Step 2: Define the Adjoint Equation

The adjoint equation to (6.4) is given by


    dy/dt (t) = −A^T y(t).          (6.6)

6.2.3 Step 3: Define a Natural Inner Product by way of an Adjoint

Equations (6.4) and (6.6) are related by the Lagrange identity


    y^T Θz + (Ωy)^T z = (d/dt)(y^T z),          (6.7)
where Θz = ż − Az, Ωy = ẏ + A^T y. If z and y are solutions of (6.4) and (6.6),
respectively, then it is clear that (d/dt)(y^T z) = 0, which implies y^T z is constant and is the

natural inner product of Rn. We note here that, in the case where y and z are
complex, the inner product would be ȳ^T z. Thus, the use of an
adjoint equation leads naturally to an inner product definition. It might then seem
reasonable that using adjoint properties could geometrically lead to some form of
orthogonal decomposition of a system of differential equations in a similar manner
to the process of orthogonal decomposition of a vector in Rn . In fact this is what we
will show in this section and in Sect. 6.4 we will show that the same idea extends
to DDEs.
We begin by stating some results on eigenvalues and eigenvectors from linear
algebra that have direct analogs in the delay case. To be general, we will state them
in the complex case. Let A be an n × n matrix with elements in Cn and A∗ the usual
conjugate transpose matrix. Let (·, ·) be the ordinary inner product in Cn . Then the
following hold.
1. (ψ, Aφ) = (A∗ψ, φ).
2. λ is an eigenvalue of A if and only if λ̄ is an eigenvalue of A∗.
3. The dimensions of the eigenspaces of A and A∗ are equal.
4. Let φ1, . . . , φd be a basis for the right eigenspace of A associated with eigenvalues
   λ1, . . . , λd and let ψ1, . . . , ψd be a basis for the right eigenspace of A∗ associated
   with the eigenvalues λ̄1, . . . , λ̄d. Construct the matrices Φ = (φ1, . . . , φd), Ψ =
   (ψ1, . . . , ψd). The matrices Φ and Ψ are n × d. If we define the bilinear form

    ⟨Ψ, Φ⟩ = ⎛ (ψ1, φ1)  · · ·  (ψ1, φd) ⎞
             ⎜    ..      ..       ..   ⎟ ,          (6.8)
             ⎝ (ψd, φ1)  · · ·  (ψd, φd) ⎠

then ⟨Ψ, Φ⟩ is nonsingular and can be chosen so that ⟨Ψ, Φ⟩ = I.


Although ⟨·, ·⟩ is defined in terms of Ψ and Φ, the definition is general and can
be applied to any two matrices U and Z. Thus ⟨U, Z⟩, where U and Z are matrices,
is a bilinear form that satisfies the properties of an inner product. In particular,

    ⟨U, αZ1 + βZ2⟩ = α⟨U, Z1⟩ + β⟨U, Z2⟩,
    ⟨αU1 + βU2, Z⟩ = ᾱ⟨U1, Z⟩ + β̄⟨U2, Z⟩,
    ⟨UM, Z⟩ = M∗⟨U, Z⟩,          (6.9)
    ⟨U, ZM⟩ = ⟨U, Z⟩M,

where α, β are complex constants and M is a compatible matrix.
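
These properties can also be exercised numerically. The sketch below, for a generic test matrix A (an arbitrary assumption), takes eigenvector bases of A and A∗, forms ⟨Ψ, Φ⟩, and rescales Φ so that ⟨Ψ, Φ⟩ = I, exactly as in item 4 above:

    import numpy as np

    A = np.array([[0.0, 1.0], [-2.0, -0.3]])  # arbitrary test matrix
    lam, Phi = np.linalg.eig(A)               # right eigenvectors of A
    mu, Psi = np.linalg.eig(A.conj().T)       # right eigenvectors of A*
    # pair each column of Psi with the conjugate of the matching eigenvalue of A
    Psi = Psi[:, [int(np.argmin(np.abs(mu - l.conj()))) for l in lam]]
    G = Psi.conj().T @ Phi                    # the bilinear form <Psi, Phi>
    Phi = Phi @ np.linalg.inv(G)              # change of basis so <Psi, Phi> = I
    assert np.allclose(Psi.conj().T @ Phi, np.eye(2))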

6.2.4 Step 4: Get the Critical Eigenvalues

Although A in (6.3) is real, it can have multiple real and complex eigenvalues. The
complex ones will appear in conjugate pairs. To reduce computational complexity, we will
assume that A has two eigenvalues iω, −iω with associated eigenvectors φ, φ̄. We
will assume that all other eigenvalues are distinct from these and have negative real
parts. The associated eigenvalues with zero real parts of A∗ are −iω, iω with right
eigenvectors ψ, ψ̄. We will use properties of the adjoint to decompose (6.3) into
three equations in which the first two will have eigenvalues with zero real parts.

6.2.5 Step 5: Apply Orthogonal Decomposition

We begin by defining the matrices

Φ = (φ, φ̄),    Ψ = (ψ, ψ̄)^T.          (6.10)

Here we take d = 2 in (6.8) and note that Φ and Ψ are n × 2 matrices.


Let z(t, μ ) be the unique family of solutions of (6.3). This is possible due to the
standard existence, uniqueness, and continuity theorems. Define
 
    Y(t, μ) = ⟨Ψ, z(t, μ)⟩ = ( (ψ, z(t, μ)), (ψ̄, z(t, μ)) )^T,          (6.11)

where Y(t, μ) ∈ C². Set Y(t, μ) = (y1(t, μ), y2(t, μ))^T, where y1(t, μ) = (ψ, z(t, μ))
and y2(t, μ) = (ψ̄, z(t, μ)).
Define the matrices
   
    B = ⎡ iω    0  ⎤        B∗ = ⎡ −iω   0  ⎤
        ⎣ 0   −iω ⎦ ,            ⎣  0   iω ⎦ .          (6.12)
They satisfy AΦ = Φ B, A∗Ψ = Ψ B∗ . If we join (6.9), (6.11), and (6.12) with the
fact that B∗∗ = B then we have
    dY/dt (t, μ) = B Y(t, μ) + ⟨Ψ, f (z(t, μ), μ)⟩.          (6.13)
This can be written as
    dy/dt (t, μ) = iω y(t, μ) + F(t, μ),
    dȳ/dt (t, μ) = −iω ȳ(t, μ) + F̄(t, μ),          (6.14)

where F(t, μ) = (ψ, f (z(t, μ), μ)). These are the first two of the three equations.
In order to develop the third equation, we will use a notion involved with decom-
posing a vector into two orthogonal components. Here is where geometry enters
the picture. This decomposition process arises, for example, when one applies the
Gram–Schmidt orthogonalization method numerically to vectors. The general
idea is that the difference between a vector and its orthogonal projection on a lin-
ear space, formed from previous orthogonalized vectors, generates an orthogonal
decomposition of the original vector. This idea will be generalized here to function
spaces, but the basic methodology holds true.

Begin by defining the difference between z(t, μ ) and its projection onto the linear
space formed by the columns of Φ as
w(t, μ ) = z(t, μ ) − Φ Y (t, μ ), (6.15)

which makes sense in terms of dimensionality, since Φ is an n × 2 matrix,


Y (t, μ ) ∈ C2 , and z ∈ Rn . This is a vector orthogonal to Φ Y (t, μ ) in function space.
We note that w(t, μ ) is real because Φ Y (t, μ ) = (ψ , z(t, μ )) φ + (ψ , z(t, μ )) φ =
2Re{(ψ , z(t, μ )) φ }.
To show that this function is orthogonal to the space formed by the columns of
Φ we note that
    w(t, μ) = z(t, μ) − Φ⟨Ψ, z(t, μ)⟩,
    ⟨Ψ, w(t, μ)⟩ = ⟨Ψ, z(t, μ)⟩ − ⟨Ψ, Φ⟨Ψ, z(t, μ)⟩⟩          (6.16)
                 = ⟨Ψ, z(t, μ)⟩ − ⟨Ψ, Φ⟩⟨Ψ, z(t, μ)⟩
                 = ⟨Ψ, z(t, μ)⟩ − ⟨Ψ, z(t, μ)⟩ = 0,

where we have used a property from (6.9) and the fact that ⟨Ψ, Φ⟩ = I.
Now let f (z, μ ) = f (z(t, μ ), μ ), and use (6.3), (6.11), and (6.13) to show
    dw/dt (t, μ) = dz/dt (t, μ) − Φ dY/dt (t, μ)
                 = A z(t, μ) + f (z, μ) − ΦB Y(t, μ) − Φ⟨Ψ, f (z, μ)⟩.          (6.17)

Substitute z(t, μ ) = w(t, μ ) + Φ Y (t, μ ) into (6.17) and use AΦ = Φ B to get


    dw/dt (t, μ) = A w(t, μ) + f (z, μ) − Φ⟨Ψ, f (z, μ)⟩.          (6.18)
Therefore we have the final decomposition as
    dy/dt (t, μ) = iω y(t, μ) + F(t, μ),
    dȳ/dt (t, μ) = −iω ȳ(t, μ) + F̄(t, μ),          (6.19)
    dw/dt (t, μ) = A w(t, μ) − Φ⟨Ψ, f (z, μ)⟩ + f (z, μ).
After defining some operators in Sect. 6.4, that will take the place of the matrices
used here, we will see that there is an analogous decomposition for DDEs.

6.3 An Example Application in Ordinary Differential Equations

In this example, we will consider a simple ordinary differential equation and work
through the details of the decomposition described in this section. In Sect. 6.4, we
will begin working out a more extensive example in DDEs and show how the

decomposition in the time-delay case is analogous to the decomposition in the ordi-


nary differential equations case. In this section, we will follow the decomposition
steps given in Sect. 6.2.
Start with the equation
ẍ + x = μ x2 . (6.20)
From the existence and uniqueness theorem we know there exists a unique solution
x(t, μ ), given the initial conditions x(0, μ ) = x0 , ẋ(0, μ ) = x1 , that is continuous
with respect to μ .

6.3.1 Step 1: Form the Vector Equation

If we let z1 = x, z2 = ẋ then (6.20) can be written in vector form as

    ż(t, μ) = A z(t, μ) + f (z(t, μ), μ),          (6.21)

where z(t, μ) = (z1(t, μ), z2(t, μ))^T, f (z(t, μ), μ) = μ (0, z1(t, μ)²)^T, and

    A = ⎡ 0    1 ⎤
        ⎣ −1   0 ⎦ .          (6.22)

The linear part of (6.21) is

ż(t, μ ) = Az(t, μ ). (6.23)

6.3.2 Step 2: Define the Adjoint Equation

The adjoint equation of (6.23) is given by

ẏ(t, μ) = −A^T y(t, μ).          (6.24)

6.3.3 Step 3: Define a Natural Inner Product by Way of an Adjoint

The inner product is developed, as in Sect. 6.2, as the natural inner product of vec-
tors. We will go directly to forming the basis vectors.
A basis for the right eigenspace of A can easily be computed as
    φ = ⎡ 1/√2 ⎤        φ̄ = ⎡  1/√2 ⎤
        ⎣ i/√2 ⎦ ,           ⎣ −i/√2 ⎦ ,          (6.25)

These are associated, respectively, with the eigenvalues λ = i and λ̄ = −i. The
related eigenvalues and eigenvectors for A^T are λ = −i and λ = i with the respective
eigenvectors φ, φ̄. The factor 1/√2 is a normalization factor. Now define
eigenvectors φ , φ . The factor 1/ 2 is a normalization factor. Now define


    Φ = (φ, φ̄) = ⎡ 1/√2    1/√2 ⎤
                  ⎣ i/√2   −i/√2 ⎦ ,          (6.26)

and

    Ψ = (φ̄, φ)^T = ⎡ 1/√2   −i/√2 ⎤
                    ⎣ 1/√2    i/√2 ⎦ .          (6.27)

Then ⟨Ψ, Φ⟩ = ΨΦ = I.

6.3.4 Step 4: Get the Critical Eigenvalues

The eigenvalues of A are ±i.

6.3.5 Step 5: Apply Orthogonal Decomposition

Let z(t, μ ) be a unique family of solutions of (6.21), where we will write z(t, μ ) =
(z1 (t, μ ), z2 (t, μ ))T . Now define
    Y(t, μ) = ⟨Ψ, z(t, μ)⟩ = ⎡ z1(t,μ)/√2 − i z2(t,μ)/√2 ⎤
                             ⎣ z1(t,μ)/√2 + i z2(t,μ)/√2 ⎦ .          (6.28)

If we let    
    B = ⎡ i    0 ⎤        B∗ = ⎡ −i   0 ⎤
        ⎣ 0   −i ⎦ ,           ⎣ 0    i ⎦ ,          (6.29)
then
    dY/dt (t, μ) = B Y(t, μ) + ⟨Ψ, f (z(t, μ), μ)⟩,          (6.30)
or in an equivalent form
    dy/dt (t, μ) = i y(t, μ) + (φ, f (z(t, μ), μ)),
    dȳ/dt (t, μ) = −i ȳ(t, μ) + (φ̄, f (z(t, μ), μ)),          (6.31)

where f (z(t, μ), μ) = μ (0, z1(t, μ)²)^T.
We now develop the orthogonal function w(t, μ ) as

w(t, μ ) = z(t, μ ) − Φ Y (t, μ ). (6.32)



However, if we form Φ Y (t, μ ) from (6.26) and (6.28) it is clear that Φ Y (t, μ ) =
(z1 (t, μ ), z2 (t, μ ))T = z(t, μ ) and therefore from (6.32) that w(t, μ ) = 0 as expected,
since there are no other eigenvalues of A than i and −i. Thus (6.31) is the decom-
posed form of (6.21).
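
This conclusion can be confirmed numerically: integrate (6.21) directly, form Y(t, μ) = ⟨Ψ, z(t, μ)⟩, and check that ΦY(t, μ) reproduces z(t, μ), so that w(t, μ) = 0. A minimal sketch, with arbitrary initial data and a small μ, is:

    import numpy as np
    from scipy.integrate import solve_ivp

    mu = 0.1
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    rhs = lambda t, z: A @ z + mu * np.array([0.0, z[0] ** 2])   # system (6.21)
    sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)

    phi = np.array([1.0, 1j]) / np.sqrt(2.0)
    Phi = np.column_stack([phi, phi.conj()])   # (6.26)
    Psi = np.linalg.inv(Phi)                   # (6.27), so that Psi @ Phi = I

    Y = Psi @ sol.y                            # (6.28) along the computed solution
    assert np.allclose((Phi @ Y).real, sol.y)  # hence w(t, mu) = z - Phi Y = 0
    assert np.allclose(Y[1], Y[0].conj())      # the components are conjugates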

6.4 Delay Differential Equations as Operator Equations

As discussed earlier, the solutions to differential equations can be thought of in terms


of trajectories or in terms of mappings of initial conditions. This same dichoto-
mous point of view can be applied to DDEs. But, in the case of DDEs, the mapping
or operator approach provides very fruitful qualitative results and, therefore, that
approach will be used for the remainder of this chapter.

6.4.1 Step 1: Form the Operator Equation

The principal difference between ordinary and delay differential equations is that,
in ordinary differential equations, the initial condition space is finite dimensional
and in DDEs it is infinite dimensional. The DDEs (6.1) and (6.2) can be thought of
as maps of entire functions. In particular, we will start with the class of continuous
functions defined on the interval [−σ , 0] with values in Rn and refer to this as class
C0 . The maps are constructed by defining a family of solution operators for the linear
DDE (6.2) by
(T (t)φ ) (θ ) = (zt (φ )) (θ ) = z(t + θ ) (6.33)
for φ ∈ C0, θ ∈ [−σ, 0], t ≥ 0. This is a mapping of a function in C0 to another
function in C0 . Then (6.1) and (6.2) can be thought of as maps from C0 to C0 . The
norm on the space is taken as

    ‖φ‖ = max_{−σ ≤ t ≤ 0} |φ(t)|,          (6.34)

where | · | is the ordinary Euclidean 2-norm.


Figure 6.2 shows the mapping of an initial function φ in C0 to another
function, zt(φ), in C0. zt(φ) is the projection of the portion of the trajectory z(t) from
t − σ to t back to C0 . This figure exhibits two approaches to looking at solutions of
DDEs. One way is to consider a solution as a trajectory of z as a function of t with an
initial condition function in C0 . The value of the solution at t on the trajectory graph
depends on the values of the function on the trajectory from t − σ to t. Another way
to look at the solutions of a DDE is to consider them as parameterized mappings
T (t)φ of functions φ in C0 . From Fig. 6.2, this would be the portion of the trajectory
from t − σ to t projected back to C0 . In the figure it is represented as zt (φ ), which
is the function in C0 to which φ is mapped under T (t). The mapped function relates

Fig. 6.2 The solution operator maps functions in C0 to functions in C0

to the trajectory as follows. The value of zt (φ )(θ ) for θ ∈ [−σ , 0] is given as the
trajectory value z(t + θ ). This idea is not new, in that, in ordinary differential equa-
tions, solutions can be thought of in terms of either trajectories or maps of initial
condition vectors in Rn , say. Traditionally, we usually think of solving ordinary dif-
ferential equations in terms of trajectories. For some qualitative analyses, however,
the mapping approach is useful.
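
One way to make the map T(t)φ computable is the classical method of steps: on each interval of length σ the delayed argument is already known, so the DDE reduces to an ordinary differential equation. The sketch below applies this to the scalar test equation ẋ(t) = −x(t − σ), which is an illustrative stand-in rather than an equation from this chapter:

    import numpy as np
    from scipy.integrate import solve_ivp

    sigma = 1.0
    phi = lambda th: 1.0 + 0.0 * th           # initial function on [-sigma, 0]

    def advance(history, t0):
        """One application of T(sigma): given x on [t0 - sigma, t0] as a callable,
        return x on [t0, t0 + sigma] for x'(t) = -x(t - sigma)."""
        rhs = lambda t, x: [-history(t - sigma)]    # the delayed term is known data
        sol = solve_ivp(rhs, (t0, t0 + sigma), [history(t0)],
                        dense_output=True, rtol=1e-10, atol=1e-12)
        return lambda t: float(sol.sol(t)[0])

    x, t0 = phi, 0.0
    for _ in range(3):                        # z_t(phi) on three successive intervals
        x = advance(x, t0)
        t0 += sigma

Each pass of advance is exactly one application of the solution operator with t = σ.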
To determine what properties this operator must satisfy, we look at the basic
properties that (6.5) satisfies. In particular, for each t, T(t) is a bounded linear
transformation for φ ∈ Rn. The boundedness comes from ‖T(t)φ‖ ≤ ‖e^{At}‖ |φ|. For
t = 0, T(0)φ = φ, i.e., T(0) = I. Finally,

    lim_{t→t0} ‖T(t)φ − T(t0)φ‖ = 0,          (6.35)

since ‖T(t)φ − T(t0)φ‖ ≤ ‖e^{A(t−t0)} − I‖ ‖e^{At0}‖ |φ|.


Based on these properties, we formulate the following definition for a family of
operators. A Strongly Continuous Semigroup satisfies

    T(t) is bounded and linear for t ≥ 0,
    T(0)φ = φ, or T(0) = I,          (6.36)
    lim_{t→t0} ‖T(t)φ − T(t0)φ‖ = 0,

where ‖ · ‖ is an appropriate operator norm and φ ∈ C0. The family of operators,


T (t), t ≥ 0, is called a semigroup since the inverse property does not hold (see [18]
and [35]).
If we take the derivative with respect to t in (6.5) we see that T (t)φ satisfies (6.4).
It is also easy to see that
    A = lim_{t→0} (1/t) (e^{At} − I).          (6.37)

We can call the matrix A the infinitesimal generator of the family T (t) in (6.5). The
term infinitesimal generator can be thought of as arising from the formulation

dz = Az dt, (6.38)

where for each infinitesimal increment, dt, in t, A produces an infinitesimal


increment, dz, in z at the point z.
In terms of operators, we define an operator called the infinitesimal generator.
An infinitesimal generator of a semigroup T (t) is defined by
    Aφ = lim_{t→0+} (1/t) [T(t)φ − φ],          (6.39)
for φ ∈ C0 .
We will state the properties of the family of operators (6.33) without proofs, since
the proofs require extensive knowledge of operator theory and they are not essential
for the developments in this chapter. To begin with, the mapping (6.33) satisfies the
semigroup properties on C0 . In the case of the linear system (6.2) the infinitesimal
generator can be constructed as

    (A(μ)φ)(θ) = { dφ/dθ (θ),                      −σ ≤ θ < 0,
                  { U(μ)φ(0) + V(μ)φ(−σ),           θ = 0,          (6.40)
where the parameter μ is included in the definition of A. Then T (t)φ satisfies
    (d/dt) T(t)φ = A(μ) T(t)φ,          (6.41)

where

    (d/dt) T(t)φ = lim_{h→0} (1/h) (T(t + h) − T(t)) φ.          (6.42)
Finally, the operator form for the nonlinear DDE (6.1) can be written as
    (d/dt) zt(φ) = A(μ) zt(φ) + F(zt(φ), μ),          (6.43)

where

    (F(φ, μ))(θ) = { 0,            −σ ≤ θ < 0,
                    { f (φ, μ),     θ = 0.          (6.44)
For μ = 0, write f (φ ) = f (φ , 0), F(φ ) = F(φ , 0). We note the analogy between the
ordinary differential equation notation and the operator notation for the DDE.

6.4.2 Step 2: Define an Adjoint Operator

We can now construct a formal adjoint operator associated with (6.40). Let C0∗ =
C([0, σ], Rn) be the space of continuous functions from [0, σ] to Rn with ‖ψ‖ =
max_{0 ≤ θ ≤ σ} |ψ(θ)| for ψ ∈ C0∗. The formal adjoint equation associated with the linear
DDE (6.2) is given by

    du/dt (t, μ) = −U(μ)^T u(t, μ) − V(μ)^T u(t + σ, μ).          (6.45)
If we define
(T ∗ (t)ψ ) (θ ) = (ut (ψ )) (θ ) = u(t + θ ), (6.46)
for θ ∈ [0, σ], t ≤ 0, where ut ∈ C0∗ and ut(ψ) is the image of T∗(t)ψ. Then (6.46)
defines a strongly continuous semigroup with infinitesimal generator

    (A∗(μ)ψ)(θ) = { −dψ/dθ (θ),                                     0 < θ ≤ σ,
                   { −dψ/dθ (0) = U(μ)^T ψ(0) + V(μ)^T ψ(σ),         θ = 0.          (6.47)

Note that, although the formal infinitesimal generator for (6.46) is defined as
    A∗0 ψ = lim_{t→0−} (1/t) [T∗(t)ψ − ψ],          (6.48)
Hale [12], for convenience, takes A∗ = −A∗0 in (6.47) as the formal adjoint to (6.40).
This family of operators (6.46) satisfies
    (d/dt) T∗(t)ψ = −A∗ T∗(t)ψ.          (6.49)

6.4.3 Step 3: Define a Natural Inner Product by Way of an Adjoint Operator

In contrast to Rn , the space C0 does not have a natural inner product associated with
its norm. However, following Hale [12], one can introduce a substitute device that
acts like an inner product in C0 . This is an approach that is often taken when a func-
tion space does not have a natural inner product associated with its norm. Spaces
of functions that have natural inner products are called Hilbert spaces. Throughout,
we will be assuming the complexification of the spaces so that we can work with
complex eigenvalues and eigenvectors.
In analogy to (6.7) we start by constructing a Lagrange identity as follows. If

    Θz(t) = z′(t) − U(μ)z(t) − V(μ)z(t − σ),
    Ωu(t) = u′(t) + U(μ)^T u(t) + V(μ)^T u(t + σ),          (6.50)

then
    u^T(t) Θz(t) + (Ωu)^T(t) z(t) = (d/dt)⟨u, z⟩(t),          (6.51)
where

    ⟨u, z⟩(t) = u^T(t) z(t) + ∫_{t−σ}^{t} u^T(s + σ) V(μ) z(s) ds.          (6.52)

Deriving the natural inner product for Rn from the Lagrange identity (6.7) moti-
vates the derivation of (6.52). Again, if z and u satisfy Θ z(t) = 0 and Ω u(t) = 0
then, from (6.52), (d/dt)⟨u, z⟩(t) = 0, which implies ⟨u, z⟩(t) is constant and one
can set t = 0 in (6.52) and define the form

    ⟨u, z⟩ = u^T(0) z(0) + ∫_{−σ}^{0} u^T(s + σ) V(μ) z(s) ds.          (6.53)
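
For later numerical work it is convenient to have (6.53) available as a routine. The following sketch evaluates the bilinear form by a trapezoid rule for callables u on [0, σ] and z on [−σ, 0]; the vector-valued callables and the quadrature resolution are illustrative assumptions:

    import numpy as np

    def bilinear(u, z, V, sigma, m=400):
        """<u, z> = u(0)^T z(0) + integral from -sigma to 0 of u(s+sigma)^T V z(s) ds."""
        s = np.linspace(-sigma, 0.0, m)
        vals = np.array([u(si + sigma) @ (V @ z(si)) for si in s])
        ds = sigma / (m - 1)
        integral = ds * (0.5 * vals[0] + vals[1:-1].sum() + 0.5 * vals[-1])
        return u(0.0) @ z(0.0) + integral

For complex-valued u, the complexified form replaces u^T by its conjugate transpose.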

One can now state some properties of (6.40), (6.47), and (6.53) that are analogs
of the properties given in Sect. 6.2 for ordinary differential equations.
1. For φ ∈ C0, ψ ∈ C0∗,

    ⟨ψ, A(μ)φ⟩ = ⟨A∗(μ)ψ, φ⟩.          (6.54)

2. λ is an eigenvalue of A(μ) if and only if λ̄ is an eigenvalue of A∗(μ).
3. The dimensions of the eigenspaces of A(μ) and A∗(μ) are finite and equal.
4. If ψ1, . . . , ψd is a basis for the right eigenspace of A∗(μ) and the associated
   φ1, . . . , φd is a basis for the right eigenspace of A(μ), construct the matrices
   Ψ = (ψ1, . . . , ψd) and Φ = (φ1, . . . , φd). Define the bilinear form between Ψ and
   Φ by

    ⟨Ψ, Φ⟩ = ⎛ ⟨ψ1, φ1⟩  . . .  ⟨ψ1, φd⟩ ⎞
             ⎜    ..      ..       ..    ⎟ .          (6.55)
             ⎝ ⟨ψd, φ1⟩  . . .  ⟨ψd, φd⟩ ⎠

This matrix is nonsingular and can be chosen so that ⟨Ψ, Φ⟩ = I. Note that if
(6.55) is not the identity then a change of coordinates can be performed by setting
K = ⟨Ψ, Φ⟩⁻¹ and Φ = ΦK. Then ⟨Ψ, Φ⟩ = ⟨Ψ, ΦK⟩ = ⟨Ψ, Φ⟩K = I. Equation
(6.55) also satisfies the inner product properties (6.9).

6.4.4 Step 4: Get the Critical Eigenvalues

The eigenvalues for (6.40) are given by the λ solutions of the transcendental
equation

    det(λI − U(μ) − e^{−λσ} V(μ)) = 0.          (6.56)

This form of characteristic equation, sometimes called an exponential polynomial,


has been studied in Avellar and Hale [4], Bellman and Cooke [7], Hale and Lunel
[15], Kuang [23], and Pinney [27]. The solutions are called the eigenvalues of (6.2)
and, in general, there are an infinite number of them. For a discussion of the general
expansion of solutions of (6.2) in terms of the eigenvalues see Bellman and Cooke
[7] or Pinney [27]. The actual computation of these eigenvalues can become very
involved as the reader will see in the example that will be considered later in this
chapter. Here, though, we will only be concerned with conditions for the existence
of eigenvalues of the form iω and −iω and we further limit ourselves to the case
in which there are only two eigenvalues iω and −iω and all other eigenvalues have
negative real parts. The significance of this is that we will be looking for conditions

for which the family of eigenvalues, as a function of the parameter μ , passes across
the imaginary axis. These conditions will be the Hopf conditions referred to earlier
in this chapter. The value of ω is related to the natural frequency of oscillation of
the linear part of the DDE system.
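
Although the chapter treats iω and −iω analytically, individual roots of (6.56) are easy to locate numerically by a complex Newton iteration on h(λ) = det(λI − U(μ) − e^{−λσ}V(μ)). The matrices in the sketch below are arbitrary placeholders:

    import numpy as np

    sigma = 1.0
    U = np.array([[0.0, 1.0], [-1.0, -0.1]])  # placeholder U(mu)
    V = np.array([[0.0, 0.0], [-0.5, 0.0]])   # placeholder V(mu)

    def h(lam):
        """The exponential polynomial of (6.56)."""
        return np.linalg.det(lam * np.eye(2) - U - np.exp(-lam * sigma) * V)

    def newton(lam, tol=1e-12, eps=1e-7):
        """Newton iteration with a central finite-difference derivative."""
        for _ in range(100):
            step = h(lam) * (2.0 * eps) / (h(lam + eps) - h(lam - eps))
            lam -= step
            if abs(step) < tol:
                break
        return lam

    root = newton(0.1 + 1.0j)   # different seeds pick out different eigenvalues

Since (6.56) has infinitely many roots, the iteration is typically run from a grid of seeds and the rightmost roots are retained.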

6.4.5 Step 5: Apply Orthogonal Decomposition

For the sake of notation, let A = A(μ ), A∗ = A∗ (μ ), U = U(μ ), V = V (μ ), ω =


ω(μ). The basis eigenvectors for A and A∗ associated with the eigenvalues λ =
iω, λ̄ = −iω will be denoted as φC, φ̄C and φD, φ̄D, respectively, where the sub-
scripts C and D refer to parameters defining the basis vectors and depend on U
and V .
We define the matrix
Φ = (φC, φ̄C).          (6.57)

The two eigenvectors for A, associated with the eigenvalues λ = iω, λ̄ = −iω, are
given by

    φC(θ) = e^{iωθ} C,
    φ̄C(θ) = e^{−iωθ} C̄,          (6.58)

where C is a 2 × 1 vector. With these functions defined, it is clear that Φ is a


function of θ and should formally be written as Φ(θ). Note that Φ(0) = (C, C̄).
However, in order to simplify notation we will write Φ = Φ (θ ) but we will some-
times refer to Φ (0). These functions follow from (6.40). If −σ ≤ θ < 0 then
dφ /dθ = iωφ implies φ (θ ) = exp(iωθ )C where C = (c1 , c2 )T . For θ = 0, (6.40)
implies (U +V exp(−iωσ ))C = iω C or (U − iω I +V exp(−iωσ ))C = 0. Since iω
is an eigenvalue, (6.56) implies that there is a nonzero solution C.
Similarly, the eigenvectors for A∗ associated with the eigenvalues −iω , iω are
also given by

    φD(θ) = e^{iωθ} D,
    φ̄D(θ) = e^{−iωθ} D̄,          (6.59)

where D = (d1, d2)^T. Again, this follows from (6.47) since, for 0 < θ ≤ σ,

    −dψ/dθ (θ) = −iω ψ(θ),          (6.60)

we can compute the solutions given in (6.59). Define the matrix

Ψ = (φD, φ̄D),          (6.61)

where D is computed as follows. At θ = 0 we have from (6.47) that


 
U T +V T eiωσ + iω I D = 0. (6.62)

The determinant of the matrix on the left is the characteristic equation so that there
is a nonzero D.
From (6.57), (6.58), (6.59), and (6.61) one seeks to solve for D so that
   
φD , φC  φD , φ C  10
Ψ , Φ  = = . (6.63)
φ D , φC  φ D , φ C  01

Due to symmetry we only need to satisfy φD , φD  = 1 and φD , φ D  = 0.


On eigenspaces, the infinitesimal generators can be represented by matrices. In
fact A and A∗ satisfy AQ = QB, A∗Ψ = Ψ B∗ where Φ , Ψ are given by (6.57) and
(6.61) and the matrices B, B∗ are also given by (6.12).
Now that we have constructed the adjoint and given some of its properties we can
decompose the nonlinear operator equation (6.43) into a two-dimensional system
with eigenvalues iω and −iω and another operator equation with eigenvalues having
negative real parts. The procedure is based on Hale [12] and is similar to Step 5 of
Section 6.2.
We will decompose the nonlinear system (6.43) for the case μ = 0, since we will
not need to develop approximations for μ = 0 in this chapter. High order approxi-
mations have been developed in Hassard and Wan [16], but these will not be needed
in order to develop the approximate bifurcating periodic solution studied here.
Based on standard existence and uniqueness theorems for DDEs, let zt ∈ C0 be
the unique family of solutions of (6.43), where the μ notation has been dropped
since we are only working with μ = 0. Define
 
φD , zt 
Y (t) = Ψ , zt  = , (6.64)
φ D , zt 

where Y (t) ∈ C2 for t ≥ 0, and set Y (t) = (y(t), y(t))T where y(t) = φD , zt  and
y(t) = φ D , zt . The reader should note the similarity to (6.11).
By differentiating (6.53), and using zt (0) = z(t), zt (θ ) = z(t + θ ), we have
d dzt
Ψ , zt  = Ψ , . (6.65)
dt dt
Use (6.9), (6.43), (6.54), (6.65), and A∗Ψ = Ψ B∗ to write
d
Y (t) = BY (t) + Ψ , F(zt ). (6.66)
dt
T T
Using (6.44) and (6.53), compute φD , F(zt ) = φ D (0)(F(zt ))(0) = D f (zt ).
Similarly φ D , F(zt ) = DT f (zt ). Then
   
φD , F(zt ) T
D f (zt )
Ψ , F(zt ) = = , (6.67)
φ D , F(zt ) DT f (zt )
174 D.E. Gilsinn

which yields the first two equations in (6.83). They can be written as
d T
y(t) = iω y(t) + D f (zt ),
dt
d
y(t) = −iω y(t) + DT f (zt ). (6.68)
dt
If one defines the orthogonal family of functions

wt = zt − Φ Y (t), (6.69)

where Φ is given by (6.57) and Y (t) is given by (6.64), then Ψ , wt  = 0, where Ψ


is given by (6.61). Now apply the infinitesimal generator (6.40) to

zt = wt + Φ Y (t), (6.70)

to get

(Azt )(θ ) = (Awt )(θ ) + (AΦ )(θ )Y (t) = (Awt )(θ ) + Φ BY (t). (6.71)

We will need this relation below.


One can now construct the third equation. There are two cases: θ = 0 and
θ ∈ [−σ , 0). From (6.69), for the case with θ = 0,

w(t) = wt (0) = zt (0) − Φ (0)Y (t) = z(t) − Φ (0)Y (t). (6.72)

The reader is reminded here of the notation wt (θ ) = w(t + θ ). It is easy to


show that W (t) ∈ R2 , since x(t) ∈ R2 and Φ (0)Y (t) = φD , zt C + φ D , zt C =
2Re {φD , zt C} ∈ R2 .
From (6.40) and (6.43)
d dzt
z(t) = (0) = (Azt )(0) + (F(zt ))(0) = Uz(t) +V z(t − σ ) + f (zt ). (6.73)
dt dt
Differentiate (6.72) and combine it with (6.66) and (6.73) to give

d
w(t) = {Uz(t) +V z(t − σ ) + f (zt )} − Φ (0) {BY (t) − Ψ , F(zt )} . (6.74)
dt
If θ = 0 in (6.71) then, from (6.40),

Uz(t) +V z(t − σ ) = Uw(t) +V w(t − σ ) + Φ (0)BY (t). (6.75)

Now substitute (6.75) into (6.74) to get


d
w(t) = Uw(t) +V w(t − σ ) + f (zt ) − Φ (0)Ψ , F(zt ),
dt
= Uw(t) +V w(t − σ ) + f (zt ) − 2Re {φD , zt C} . (6.76)
6 Bifurcations, Center Manifolds, and Periodic Solutions 175

For the case with θ = 0 we can apply a similar argument to that used to create
(6.76). We start by differentiating (6.69) to get

dwt dzt
= − Φ Y (t),
dt dt
dzt
= − Φ {BY (t) + Ψ , F (zt )}, (6.77)
dt
dzt
= − Φ BY (t) − Φ Ψ , F (zt )}.
dt
For θ = 0 in (6.43) and (6.44)
dzt
= Azt . (6.78)
dt
Then, using (6.71) and (6.78), we have
dwt
= Azt − Φ BY (t) − Φ Ψ , F (zt )},
dt
= Awt + Φ BY (t) − Φ BY (t) − Φ Ψ , F (zt )}, (6.79)
= Awt − Φ Ψ , F (zt )}. (6.80)

This can then be written as


dwt
= Awt − 2Re{φD , zt φC }. (6.81)
ds
Use (6.44) to finally write the equation
dwt
= Awt − 2Re{φD , zt φC } + F(zt ). (6.82)
dt
Equation (6.43) has now been decomposed as

d T
y(t) = iω y(t) + D f (zt ),
dt
d
y(t) = −iω y(t) + DT f (zt ), (6.83)
dt 
d (Awt )(θ ) − 2Re{φD , zt φC (θ )} −σ ≤ θ < 0,
wt (θ ) =
dt (Aw t )(0) − 2Re{ φ , z
D t Cφ (0)} + f (zt ) θ = 0.

In order to simplify the notation write (6.83) in the form


dy
(t) = iω y(t) + F1 (Y, wt ),
dt
dy
(t) = −iω y(t) + F 1 (Y, wt ), (6.84)
dt
dwt
= Awt + F2 (Y, wt ),
dt
176 D.E. Gilsinn

where
T
F1 (Y, wt ) = D f (zt ),

−2Re{φD , zt φC (θ )} −σ ≤ θ < 0,
F2 (Y, wt ) = (6.85)
−2Re{φD , zt φC (0)} + f (zt ) θ = 0.
Note that (6.84) is a coupled system. The center manifold and normal forms will be
used as a tool to partially decouple this system.

6.5 A Machine Tool DDE Example: Part 1

This example will be discussed in multiple parts. In the first part, we will formulate
the operator form for the example of DDE, describe a process of determining the
critical eigenvalues for the problem, and formulate the adjoint operator equation.

6.5.1 Step 1: Form the Operator Equation

The example we will consider involves a turning center and workpiece combination.
For readers unfamiliar with turning centers they can be thought of as numerically
controlled lathes. The machine tool model, used only for illustration in this chapter,
is taken from Kalmár-Nagy et al. [20] and can be written as
  α 
k f0 f
ẍ + 2ξ ωn ẋ + ωn2 x = 1− , (6.86)
mα f0

where ωn = r/m is the natural frequency of the undamped free oscillating sys-
tem and ξ = c/2mωn is the relative damping factor and k is the cutting force
coefficient that is related to the slope of the power-law curve used to define the
right-hand side of (6.86). The parameters m, r, c, and α are taken as m = 10 kg,
r = 3.35 MN m−1 , c = 156 kg s−1 , and α = 0.41 and were obtained from measure-
ments of the machine–tool response function (see Kalmár-Nagy et al. [20]). The
parameter α was obtained from a cutting force model in Taylor [31]. These then
imply that ωn = 578.791/s, ξ = 0.0135. The nominal chip width is taken as f0 and
the time varying chip width is

f = f0 + x(t) − x(t − τ ), (6.87)

where the delay τ = 2π /Ωτ is the time for one revolution of the turning center
spindle. The cutting force parameter k will be taken as the bifurcation parameter
since we will be interested in the qualitative change in the displacement, x(t), as
the cutting force changes. The parameters m, r, c, and f are shown in Fig. 6.3. The
displacement x(t) is directed positively into the workpiece and the tool is assumed
not to leave the workpiece.
6 Bifurcations, Center Manifolds, and Periodic Solutions 177

Fig. 6.3 One degree-of-freedom-model for single-point turning

The model is simplified by introducing a nondimensional time s and displace-


ment z by
s = ωnt,
x
z= , (6.88)
A
where the lengthscale is computed as
3 f0
A= , (6.89)
2−α
a new bifurcation parameter p is set to
k
p= (6.90)
mωn2
and the delay parameter becomes

σ = ωn τ . (6.91)

The dimensionless model then becomes, after expanding the right-hand side of
(6.86) to the third order,
d2 x dx  
2
+ 2ξ + x = p Δx + E(Δx2 + Δx3 ) , (6.92)
ds ds
where
Δx = x(s − σ ) − x(s),
3(1 − α )
E= . (6.93)
2(2 − α )
178 D.E. Gilsinn

We will now consider x as a function of the dimensionless s instead of t. The linear


part of the model is given by
d2 x dx
+ 2ξ + x = pΔx, (6.94)
ds2 ds
Since the Hopf bifurcation studied in this chapter is local, the bifurcation parameter
will be written as
p = μ + pc , (6.95)
where pc is a critical value at which bifurcation occurs. Then (6.92) can be put into
vector form (6.1) by letting z1 (s) = x(s), z2 (s) = x (s). Then
dz
(s) = U(μ )z(s) +V (μ )z(s − σ ) + f (z(s), z(s − σ ), μ ), (6.96)
ds
where
 
z1 (s)
z(s) = ,
z2 (s)
 
0 1
U(μ ) = , (6.97)
−1 − (μ + pc ) −2ξ
 
0 0
V (μ ) = ,
μ + pc 0
and

f (z(s) , z(s − σ ), μ ) (6.98)


 
0
= .
(μ + pc )E (z1 (s − σ ) − z1 (s))2 + (μ + pc )E (z1 (s − σ ) − z1 (s))3

The linear portion of this equation is given by


dz
(s) = U(μ )z(s) +V (μ )z(s − σ ) (6.99)
ds
and the infinitesimal generator is given by
 dφ
(A(μ )φ ) = dθ (θ ) −σ ≤ θ < 0
. (6.100)
U(μ )φ (0) +V (μ )φ (−σ ) θ =0
Then (6.96) can easily be put into the operator form (6.43).

6.5.2 Step 2: Define the Adjoint Operator

Here we can follow the lead of Step 2 of Sect. 6.4 and define the formal adjoint as

dz
(s, μ ) = −U(μ )T z(s, μ ) −V (μ )T z(s + σ , μ ). (6.101)
ds
6 Bifurcations, Center Manifolds, and Periodic Solutions 179

As in Sect. 6.4, if we define

(T ∗ (s)ψ ) (θ ) = (zs (ψ )) (θ ) = z(s + θ ) (6.102)

for θ ∈ [0, σ ], s ≤ 0, us ∈ C0∗ , and zs (ψ ) as the image of T ∗ (s)ψ , then (6.102) defines
a strongly continuous semigroup with infinitesimal generator

∗ − ddψθ (θ ) 0 < θ ≤ σ,
(A (μ )ψ ) = dψ (6.103)
− dθ (0) = U(μ )T ψ (0) +V (μ )T ψ (σ ) θ = 0.

Note that, although, as before, the formal infinitesimal generator for (6.102) is
defined as
1
A∗0 ψ = lim [T ∗ (s)ψ − ψ ] . (6.104)
s→0− s

Hale [12], for convenience, takes A∗ = −A∗0 in (6.103) as the formal adjoint to
(6.100).

6.5.3 Step 3: Define a Natural Inner Product by Way of an Adjoint

This step follows simply by defining the inner product in the same manner as in
(6.53).

6.5.4 Step 4: Get the Critical Eigenvalues

In this step, the reader will begin to see some of the complexity of dealing with the
transcendental characteristic equation. The eigenvalues will depend on the parame-
ters in (6.94) and only certain parameter combinations will lead to eigenvalues of the
form iω and −iω . We will also establish the connection of the critical eigenvalues
with the Hopf bifurcation conditions.
Following Hale [14], introduce the trial solution

z(s) = ceλ s , (6.105)

where c ∈ C2 , and U(μ ) and V (μ ) are given by (6.97), into the linear system (6.94)
and set the determinant of the resulting system to zero. This yields the transcendental
characteristic equation
χ (λ ) = λ 2 + 2ξ λ + (1 + p) − pe−λ σ = 0. (6.106)
Before developing the families of conjugate eigenvalues, we wish to characterize
certain critical eigenvalues of (6.106) of the form λ = iω . However, the eigenvalues
for (6.106) of the form λ = iω exist only for special combinations of p and σ . We
will say that a triple (ω , σ , p), where ω , σ , p are real, will be called a critical eigen
180 D.E. Gilsinn

triple of (6.106) if λ = iω , σ , p simultaneously satisfy (6.106). The discussion


below points out the significant computational difficulties involved with estimating
the eigenvalues for a characteristic equation or exponential polynomial related to a
linear DDE.
The following properties characterize the critical eigen triples for linear delay
equations of the form (6.2) with coefficients from (6.97):
1. (ω , σ , p) is a critical eigen triple of (6.106) if and only if (−ω , σ , p) also is a
critical eigen triple of (6.106).
2. For ω > 1 there is a uniquely defined sequence σr = σr (ω ), r = 0, 1, 2, . . . , and
a uniquely defined p = p(ω ) such that (ω , σr , p), r = 0, 1, 2, . . . , are critical
eigen triples.
3. If (ω , σ , p) is a critical eigen triple, with ω > 1, then p ≥ 2ξ (1 + ξ ). That is,
no critical eigen triple for (6.106) exists for p < 2ξ (1 + ξ ).
4. For
pm = 2ξ (1 + ξ ), (6.107)
the minimum p value, there is a unique ω > 1 and a unique sequence σr , r = 0,
1, 2, . . . , such that (ωm , σr , pm ) is a critical eigen triple for (6.106) for r = 0,
1, 2, . . . . The frequency at the minimum is

ωm = 1 + 2ξ . (6.108)

5. For p > 2ξ (1 + ξ ) there exist two ω s, ω > 1, designated ω+ , ω− and uniquely


associated sequences σr+ = σr (ω+ ), σr− = σr (ω− ), r = 0, 1, 2, . . . such
that (ω+ , σr+ , p), (ω− , σr− , p) are critical eigen triples for (6.106) for
r = 0, 1, 2, . . . . ω+ , ω− are given by
,
ω+2 = (1 + p − 2ξ 2 ) + p2 − 4ξ 2 p + (4ξ 4 − 4ξ 2 ), (6.109)
,
ω−2 = (1 + p − 2ξ 2 ) − p2 − 4ξ 2 p + (4ξ 4 − 4ξ 2 ). (6.110)

σr+ , σr− are given by

2(ψ+ + rπ ) + 3π
σr+ = , (6.111)
ω+
2(ψ− + rπ ) + 3π
σr− = , (6.112)
ω−
where
 
2ξ ω+
ψ+ = −π + tan−1 , (6.113)
ω+2 − 1
 
−1 2ξ ω−
ψ− = −π + tan , (6.114)
ω−2 − 1

6. There do not exist critical eigen triples for 0 ≤ ω ≤ 1.


6 Bifurcations, Center Manifolds, and Periodic Solutions 181

Fig. 6.4 Stability chart with sample critical eigen triples identified

We will not prove these results (for proofs see Gilsinn [11]) but we briefly discuss
their significance graphically by examining Fig. 6.4 where the plots are based on the
value of ξ = 0.0135. The entire development of the periodic solutions on the center
manifold depends on knowing the critical bifurcation parameter p in (6.92). This
parameter is linked to the rotation rate, Ωr , of the turning center spindle. One can
plot p against Ωr = 1/σr , where Ωr is the rotation rate of the turning center spindle,
for r = 0, 1, 2, . . . where each r indexes a lobe in Fig. 6.4 moving from right to left
in the figure. Call the right most lobe, lobe 0, the next on the left lobe 1, etc. For
each r the pairs (Ωr , p) are computed for a vector of ω values. When these families
of pairs are plotted they form a family of N lobes. Each lobe is parameterized by the
same vector of ω s so that each point on a lobe boundary represents an eigenvalue
of (6.106) for a given p and σr = 1/Ωr . The minimum of each lobe is asymptotic
to a line often called the stability limit. The second property above states that for a
given ω there is associated a unique value on the vertical axis, called p(ω ), but an
infinite number of σr (ω )s, one for each lobe, depicted graphically on the horizontal
axis as 1/ωr , for rotation rate. The minimum  value on each lobe occurs at p =
2ξ (1 + ξ ) with an associated unique ω = 1 + 2ξ . Finally, for each lobe there are
two ω s associated with each p, denoted by ω− and ω+ , where ω− is the parameter
associated with the left side of the lobe and ω+ with the right side of the lobe. At
the minimum ω− = ω+ .
The significance of the stability chart is that the lobe boundaries divide the plane
into regions of stable and unstable machining. In particular, the regions below
182 D.E. Gilsinn

the lobes are considered stable and those above are considered unstable in the
manufacturing sense. This will be a result of the Hopf bifurcation at the lobe bound-
aries. Since the parameter p is proportional to material removal, the regions between
lobes represent areas that can be exploited for material removal above the stabil-
ity limit line. This figure, called a stability chart, graphically shows the meanings
of properties (2)–(5) above and was introduced by Tobias and Fishwick [33]. The
structure of stability charts for DDE can be very complex. The current machine
tool example exhibits one of the simpler ones. To see some examples of different
stability charts the reader is referred to the book by Stépán [28].
We will use an argument modeled after Altintas and Budak [1] to develop neces-
sary conditions for σr and p and show how they relate to ω . These conditions are in
fact used to graphically display the stability lobes. Set

1
Φ (λ ) = . (6.115)
λ 2 + 2ξ λ + 1

Then (6.106) becomes


1 + p(1 − e−λ σ )Φ (λ ) = 0. (6.116)
Set λ = iω and write
Φ (iω ) = G(ω ) + iH(ω ), (6.117)
where

1 − ω2
G(ω ) = , (6.118)
(1 − ω 2 )2 + (2ξ ω )2
−2ξ ω
H(ω ) = . (6.119)
(1 − ω 2 )2 + (2ξ ω )2

Substitute (6.117) into (6.116) and separate real and imaginary parts to get

1 + p[(1 − cos ωσ )G(ω ) − (sin ωσ )H(ω )] = 0, (6.120)


p[G(ω ) sin ωσ + H(ω )(1 − cos ωσ )] = 0. (6.121)

From (6.118) and (6.119)

H(ω ) sin ωσ
=− . (6.122)
G(ω ) 1 − cos ωσ

From the definition of G, H, and the fact that ω > 1, (6.122) falls in the third
quadrant so that one can introduce the phase angle for (6.117), using (6.118) and
(6.119), as
   
H(ω ) 2ξ ω
ψ = tan−1 = −π + tan−1 . (6.123)
G(ω ) ω2 − 1
6 Bifurcations, Center Manifolds, and Periodic Solutions 183

Clearly, −π ≤ ψ ≤ π . Using half-angle formulas,


sin ωσ
tan ψ = − ,
1 − cos ωσ
 ωσ 
cos 2
= −  ωσ ,
sin 2
 ωσ 
= − cot , (6.124)
 π 2 ωσ 
= tan + ± nπ ,
2 2
for n = 0, 1, 2, . . . . Therefore
π ωσ
ψ= + ± nπ , (6.125)
2 2
where ωσ > 0 must be satisfied for all n. In order to satisfy this and the condition
that −π ≤ ψ ≤ π , select the negative sign and
n = 2 + r, (6.126)
for r = 0, 1, 2, . . . . Therefore, from (6.125), the necessary sequence, σr , is given by
ω
σr = , (6.127)
2(ψ + rπ ) + 3π
where ψ is given by (6.123). Finally, substituting (6.122) into (6.120), one has the
necessary condition for p as
1
p=− , (6.128)
2G(ω )
where, p > 0 since ω > 1. Therefore (6.127) and (6.128) are the necessary condi-
tions for (ω , σr , p), r = 0, 1, 2, . . . , to be critical eigen triples for (6.106). Note that
this also implies uniqueness. Equations (6.127) and (6.128) show how p = p(ω ) and
1/σr uniquely relate in Fig. 6.4.
The lobes in Fig. 6.4 are plotted by the following algorithm. Since p must be
positive, select any set of values ω > 1 such that G(ω ) < 0. Given a set of ω > 1
values, pick r = 0, 1, 2, . . . for as many lobes as desired. Compute 1/σr from (6.127)
and p from (6.128), then plot the pairs (1/σr , p).
The following is a consequence of properties (1)–(6) for critical eigen triples.
If (ω0 , σ , p) is a critical eigen triple, ω0 > 0, then there cannot be another crit-
ical eigen triple (ω1 , σ , p), ω1 > 0, ω1 = ω0 . Furthermore, since (−ω0 , σ , p)
is also a critical eigen triple, there can be no critical eigen triple (ω2 , σ , p),
ω2 < 0, ω2 = −ω0 . This does not preclude two or more lobes crossing. It only
refers to a fixed lobe.
Finally we can state the Hopf criteria. That is, there is a family of simple, conju-
gate eigenvalues λ (μ ), λ̄ (μ ), of (6.106), such that

λ (μ ) = α (μ ) + iω0 (μ ), (6.129)
184 D.E. Gilsinn

where α , ω0 are real, ω0 (μ ) = ω (μ ) + ωc where ω (μ ) is a perturbation of a critical


frequency ωc , and
α (0) = 0,
ω0 (0) > 0, (6.130)
α (0) > 0.

The proof of this result depends on the Implicit Function Theorem and is given
in Gilsinn [11]. As a consequence of the Implicit Function Theorem the conditions
α (0) = 0 and ω (0) = 0 and thus ω0 (0) > 0 follows. The last condition, that α (0) >
0, follows from following relations, valid for the current machine tool model, that
are also shown in Gilsinn [11]
[2ξ − σ ωc2 + σ (1 + pc )][1 − ωc2 ] + [2ωc (1 + σ ξ )][2ξ ωc ]
α (0) = ,
pc [2ξ − σ ωc2 + σ (1 + pc )]2 + pc [2ωc (1 + σ ξ )]2
[2ξ − σ ωc2 + σ (1 + pc )][2ξ ωc ] − [1 − ωc2 ][2ωc (1 + σ ξ )]
ω (0) = , (6.131)
pc [2ξ − σ ωc2 + σ (1 + pc )]2 + pc [2ωc (1 + σ ξ )]2
The numerator of α (0), divided by pc , can be expanded to give
2ξ (1 + ωc2 ) + σ (1 − ωc2 )2 + 4σ ωc2 ξ 2 + σ pc (1 − ωc2 ). (6.132)

The only term in (6.132) that can potentially cause (6.132) to become negative is
the last one. However, pc and ωc are related by (6.128) which implies that
(1 − ωc2 )2 + (2ξ ωc )2
pc = . (6.133)
2(ωc2 − 1)
If we substitute (6.133) into (6.132) one gets
σ
2ξ (1 + ωc2 ) + (1 − ωc2 )2 + 2σ ωc2 ξ 2 , (6.134)
2
which is clearly positive so that α (0) > 0. To compute the bifurcating periodic
solutions and determine their periods later we will need to use both α (0) and ω (0).
Finally, the last Hopf condition is also shown in Gilsinn [11] and that is that all other
eigenvalues than the two critical ones have negative real parts. The proof of this part
of the Hopf result involves a contour integration.
We are now in a position to determine the nature of the Hopf bifurcation that
occurs at a critical eigen triple for the machine tool model. To simplify the calcu-
lations only the bifurcation at the minimum points of the lobes, pm and ωm , given
by (6.107) and (6.108), will be examined. Any other point on a lobe would involve
more complicated expressions for any p greater than pm and obscure the essential
arguments. We will see that a bifurcation, called a subcritical bifurcation occurs
at this point, which implies that, depending on the initial amplitude used to inte-
grate (6.92), the solution can become unstable in a region that otherwise might be
considered a stable region.
6 Bifurcations, Center Manifolds, and Periodic Solutions 185

The rotation rate, Ωm , at pm can be computed from (6.108) and (6.123) as


 
ψm = −π + tan−1 1 + 2ξ ,
1 ωm
Ωm = = , (6.135)
σm 2(ψm + rπ ) + 3π

for r = 0, 1, 2, . . . . When ξ = 0.0135, as computed for the current machining model,


one has that ψm = −2.3495. When r = 0, Ωm = 0.2144 (σm = 4.6642), which is the
dimensionless rotation rate at the minimum of the first lobe to the right in Fig. 6.4.
This point is selected purely in order to illustrate the calculations. The stability limit
in Fig. 6.4 is given by (6.107) as

pm = 2ξ (ξ + 1) = 0.027365. (6.136)

The frequency at this limit is given by (6.108) as



ωm = 1 + 2ξ = 1.01341. (6.137)
Then from (6.131)
1
α (0) = ,
2(1 + ξ )2 (1 + ξ σm )

1 + 2ξ
ω (0) = . (6.138)
2(1 + ξ )2 (1 + ξ σm )

6.5.5 Step 5: Apply Orthogonal Decomposition

We will now follow the steps needed to compute the bifurcating periodic solu-
tions on the center manifold. The first step is to compute the eigenvectors for the
infinitesimal generators A and A∗ . The general forms for these eigenvectors are
given by (6.58) and (6.59). We wish to compute the constant vectors C and D. To
compute C we note that for θ = 0, (6.40) implies (U +V exp(−iωσ ))C = iω C
or (U − iω I +V exp(−iωσ ))C = 0. Since iω is an eigenvalue, (6.56), (6.97), and
(6.106) imply that there is a nonzero solution C. If we set c1 = 1 it is easy to compute
c2 = iω . The eigenvectors of A are then given by
 
iωθ 1
φC (θ ) = e ,

 
−iωθ 1
φ C (θ ) = e . (6.139)
−iω
 
To compute D we have, at θ = 0, from (6.47), that UT + VT eiωσ + iω I D = 0. The
determinant of the matrix on the left is the characteristic equation so that there is a
nonzero D. From (6.58), (6.59), (6.10), and (6.63) one seeks to solve for D so that
186 D.E. Gilsinn
   
φD , φC  φD , φ C  10
Ψ , Φ  = = . (6.140)
φ D , φC  φ D , φ C  01

Due to symmetry one only needs to satisfy φD , φC  = 1 and φD , φ C  = 0. From
(6.53), (6.58), and (6.59) compute d1 and d2 to satisfy

1 = d 1 + [σ pc cos ωσ + i (ω − σ pc sin ωσ )] d 2 ,
p 
c
0 = d1 + sin ωσ − iω d 2 , (6.141)
ω
from which the eigenvectors of A∗ associated with the eigenvalues −iω , iω can be
computed as
φD (θ ) = eiωθ D,
φ D (θ ) = e−iωθ D, (6.142)
T
where D = (d1 , d2 ) and
p 
c
d1 = − sin ωσ + iω d2 ,
 ω 2   
σ pc ω cos ωσ − pc ω sin ωσ + i 2ω 3 − σ pc ω 2 sin ωσ
d2 = , (6.143)
(σ pc ω cos ωσ − pc sin ωσ )2 + (2ω 2 − σ pc ω sin ωσ )2

From (6.143) the value of d 2 can be calculated as



−ξ − i 1 + 2ξ
d2 = , (6.144)
2(1 + ξ σc )(1 + ξ )2

where σ = σm , pc = pm , ω = ωm in (6.143).
Once these eigenvectors have been computed the decomposition then can be writ-
ten by a straightforward use of (6.98), (6.84), and (6.85).

6.6 Computing the Bifurcated Periodic Solution


on the Center Manifold

This section will be written in such a manner that it could apply to both ordinary
and delay differential equations. We know from ordinary differential equations that
in the case of the homogeneous portion having two eigenvalues with zero real parts
and all of the others negative real parts there are two manifolds of solutions. One
manifold, called the stable manifold, is an invariant manifold of solutions that decay
to the equilibrium point. The other manifold, called the center manifold, is an invari-
ant manifold on which the essential behavior of the solution in the neighborhood of
the equilibrium point is determined. The step numbers in the next sections should
not be confused with the decomposition step numbers. The step numbers in these
sections refer to the steps needed to form the center manifold, the normal forms, and
the periodic solutions.
6 Bifurcations, Center Manifolds, and Periodic Solutions 187

6.6.1 Step 1: Compute the Center Manifold Form

To begin the construction of a center manifold we start with the equations (6.84)

dy
(t) = iω y(t) + F1 (Y, wt ),
dt
dy
(t) = −iω y(t) + F 1 (Y, wt ), (6.145)
dt
dwt
= Awt + F2 (Y, wt ),
dt
where
T
F1 (Y, wt ) = φ D (0) f (zt ),

−2Re{φD , zt C} −σ ≤ θ < 0,
F2 (Y, wt ) = (6.146)
−2Re{φD , zt C} + f (zt ) θ = 0.

This system has been decomposed into two equations that have eigenvalues with
zero real parts and one that has eigenvalues with negative real parts. For notation,
let E c be the subspace formed by the eigenvectors φC , φC . We can now define a
center manifold by a function w = w(y, y) for |y|, |y| sufficiently small such that
w(0, 0) = 0, Dw(0, 0) = 0, where D is the total derivative operator. Note that w =
w(y, y) is only defined locally and the conditions on w make it tangent to E c at (0, 0).
According to Wiggins [34] the center manifold for (6.84) can then be specified as

W c (0) = {(y, y, w) ∈ C3 |w = w(y, y), |y|, |y| < δ , w(0, 0) = 0, Dw(0, 0) = 0}.
(6.147)
for δ sufficiently small.
Since the center manifold is invariant, the dynamics of the first two equations in
(6.145) must be restricted to the center manifold and satisfy

dy
= iω y + F1 (y, w(y, y)),
ds
dy
= −iω y + F 1 (y, w(y, y)). (6.148)
ds
There is also one more condition on the dynamics that must be satisfied by w =
w(y, y) and that is, it must satisfy the last equation in (6.145). Then, by taking appro-
priate partial derivatives we must have

Dy w(y, y){iω y + F1 (y, w(y, y))} + Dy w(y, y){−iω y + F 1 (y, w(y, y))}
= Aw(y, y) + F2 (y, w(y, y)), (6.149)

where D represents the derivative with respect to the subscripted variable. The argu-
ment of Kazarinoff et al. [22] (see also Carr [9]) can be used to look for a center
manifold w(y, y) that approximately solves (6.149). We will not need to reduplicate
188 D.E. Gilsinn

the argument here but to note that in fact an approximate center manifold, satisfying
(6.149), can be given as a quadratic form in y and y with coefficients as functions
of θ
y2 y2
w(y, y)(θ ) = w20 (θ ) + w11 (θ )yy + w02 (θ ) . (6.150)
2 2
Then the projected equation (6.148) on the center manifold takes the form
dy
= iω y + g(y, y),
ds
dy
= −iω y + g(y, y), (6.151)
ds
where the g(y, y) is given by

y2 y2 y2 y
g(y, y) = g20 + g11 yy + g02 + g21 . (6.152)
2 2 2
Since w02 = w20 , one only needs to solve for w20 and w11 . It can be shown that
w20 , w11 take the form
w20 (θ ) = c1 φ (θ ) + c2 φ (θ ) + Me2iωθ ,
w11 (θ ) = c3 φ (θ ) + c4 φ (θ ) + N, (6.153)

where ci , i = 1, . . . , 4 are constants and M, N are vectors. We will show how the
coefficients and the vectors can be computed in a specific example below.

6.6.2 Step 2: Develop the Normal Form on the Center Manifold

The equations on the center manifold can be further simplified. Normal form the-
ory, following the argument of Wiggins [34] (see also Nayfeh [25]), can be used to
reduce (6.151) to the simpler form (6.154) below on the center manifold. In fact,
(6.148) is reduced to a normal form by a transformation of variables, y → v, so that
the new system takes the form
dv
= iω v + c21 v2 v,
ds
dv
= −iω v + c21 v2 v, (6.154)
ds
where the higher order terms have been dropped. The derivation of this formula is
complex and is not needed for this chapter. The interested reader should consult
Gilsinn [11] and Wiggins [34] for the essential ideas involved. The formula needed
is one that links (6.154) with (6.152) and is given by
 +
i |g02 |2 g21
c21 = g11 g20 − 2|g11 |2 − + . (6.155)
2ω 3 2
6 Bifurcations, Center Manifolds, and Periodic Solutions 189

For a more general discussion of the normal form on the center manifold when
μ = 0 see Hassard et al. [17]. Up to this point one has only needed μ = 0. But, to
compute the periodic solution we will reintroduce μ = 0. A formula for the periodic
solutions of (6.1) can be computed. The argument is based on that of Hassard et al.
[17] and will not be given but the references Gilsinn [11] and Hassard et al. [17] can
be consulted by the interested reader. What is important here is a formula that can
be used in specific examples.

6.6.3 Step 3: Form the Periodic Solution on the Center Manifold

To begin with let ε > 0 and an initial condition for a periodic solution of (6.154) be
given as
v(0; ε ) = ε . (6.156)
Then, there exists a family of periodic solutions v(s, μ (ε )) of (6.194) with

μ (ε ) = μ2 ε 2 + · · · ,
β (ε ) = β2 ε 2 + · · · , (6.157)
T (ε ) = T0 (1 + τ2 ε 2 + · · · ),

where T (ε ) is the period of v(s, μ (ε )), β (ε ) is the nonzero characteristic exponent,


and
Re {c21 }
μ2 = − ,
α (0)
β2 = 2Re {c21 (0)} ,
1 
τ2 = − μ2 ω (0) + Im {c21 } , (6.158)
ω

T0 = .
ω
Furthermore, v(s, μ (ε )) can be transformed into a family of periodic solutions for
(6.1) given by
- . - .
z(s) = P(s, μ (ε )) = 2ε Re φ (0)eiω s + ε 2 Re Me2iω s + N . (6.159)

with ε = (μ /μ2 )1/2 . For μ2 > 0 the Hopf bifurcation is called supercritical and
for μ2 < 0 it is called subcritical. Finally, note that since μ ≈ μ2 ε 2 one can take
ε = (μ /μ2 )1/2 which allows one to associate Z(s) with the parameter p = μ + pc .
From Floquet theory if β (ε ) < 0 the periodic solution is stable and if β (ε ) > 0 it is
unstable.
190 D.E. Gilsinn

6.7 A Machine Tool DDE Example: Part 2

In this section, we will show how the formulas in Sect. 6.6 are computed for the
specific example of the turning center model. We will conclude this section with a
formula approximating the periodic solution of (6.96).

6.7.1 Step 1: Compute the Center Manifold Form

With the eigenvectors computed we can proceed to approximate the center manifold
and the projected equations on the center manifold. We begin by assuming that an
approximate center manifold, satisfying (6.149), can be given as a quadratic form in
y and y with coefficients as functions of θ
y2 y2
w(y, y)(θ ) = w20 (θ ) + w11 (θ )yy + w02 (θ ) . (6.160)
2 2
The object of this section is to compute the constants c1 , c2 , c3 , c4 , and the vectors
M, N in (6.153) in order to create the coefficients for (6.150).
Using (6.70) we can introduce coordinates on the center manifold by

z = w + φ y + φ y. (6.161)

From (6.83), (6.84), (6.148), and (6.161) we have on the center manifold
dy ∗T
= iω y + φ (0) f (w(y, y) + φ y + φ y). (6.162)
ds
Define ∗T
g(y, y) = φ (0) f (w(y, y) + φ y + φ y), (6.163)
∗T  
where φ (0) = d 1 , d 2 from (6.142) and
⎛ ⎞
 # 0
⎜ pc E w(y, y)1 (−σ ) + yφ1 (−σ ) + yφ 1 (σ ) ⎟
⎜ $2 ⎟
⎜ −w(y, y)1 (0) − yφ1 (0) − yφ 1 (0) ⎟
f (w(y, y) + φ y + φ y) = ⎜ # ⎟.
⎜ +E w(y, y) (−σ ) + yφ (−σ ) + yφ (σ ) ⎟
⎝ 1 1
$13  ⎠
−w(y, y)1 (0) − yφ1 (0) − yφ 1 (0)
(6.164)
From (6.150)
y2 y2
w(y, y)(0) = w20 (0) + w11 (0)yy + w02 (0) ,
2 2
y2 y2
w(y, y)(−σ ) = w20 (−σ ) + w11 (−σ )yy + w02 (−σ ) , (6.165)
2 2
 T
where wij (θ ) = w1ij (θ ), w2ij (θ ) .
6 Bifurcations, Center Manifolds, and Periodic Solutions 191

Note here that in order to compute μ2 , τ2 , β2 one need only determine g(y, y)
in the form (6.152). To find the coefficients for (6.152) begin by expanding the
nonlinear terms of (6.164) up to cubic order, keeping only the cubic term y2 y. To
help simplify the notation let
γ = e−iωσ − 1. (6.166)
Then, using (6.58), (6.165), and (6.166),
# $2
E w(y, y)1 (−σ ) + yφ1 (−σ ) + yφ 1 (σ ) − w(y, y)1 (0) − yφ1 (0) − yφ 1 (0)
= Ey2 γ 2 + 2Eyyγγ
# $ # $ 
+Ey2 γ 2 + E w120 (−σ ) − w120 (0) γ + 2 w111 (−σ ) − w111 (0) γ y2 y, (6.167)
# $3
E w(y, y)1 (−σ ) + yφ1 (−σ ) + yφ 1 (σ ) − w(y, y)1 (0) − yφ1 (0) − yφ 1 (0)
= 3E γ 2 γ y2 y.

If we define

g20 = 2E γ 2 d 2 pc ,
g11 = 2E γγ d 2 pc , (6.168)
g02 = 2E γ 2 d 2 pc ,

and
 # $ # $ 
g21 = 2pc E w120 (−σ ) − w120 (0) γ + 2E w111 (−σ ) − w111 (0) γ + 3E γ 2 γ d 2 ,
(6.169)
we can use (6.163)–(6.169) to write
 + 
g20 y2 g11 yy g02 y2 g21 y2 y 0
f (w(y, y) + φ y + φ y) = + + + . (6.170)
2d 2 d2 2d 2 2d 2 1

In order to complete the computation of g21 one needs to compute the center mani-
fold coefficients w20 , w11 .
Since we are looking for the center manifold as a quadratic form, we need only
expand functions in terms of y2 , yy, y2 . From the definition of F2 in (6.85), (6.152),
and (6.163) write F2 as

  y2
F2 (y, y)(θ ) = − g20 φ (θ ) + g02 φ (θ )
 2
− g11 φ (θ ) + g11 φ (θ ) yy (6.171)
  y2
− g02 φ (θ ) + g20 φ (θ ) ,
2
for −σ ≤ θ < 0 and for θ = 0
192 D.E. Gilsinn
  + 2
g20 0 y
F2 (y, y)(0) = − g20 φ (0) + g02 φ (0) −
d2 1 2
  +
g11 0
− g11 φ (0) + g11 φ (0) − yy (6.172)
d2 1
  + 2
g02 0 y
− g02 φ (0) + g20 φ (0) − .
d2 1 2

Note that, to compute the coefficients of the center manifold, one only needs to work
to the second order.
Since g02 /d 2 = g20 /d2 write the coefficients of F2 (y, y) as
⎧  
⎨ − g20 φ (θ ) + g02 φ (θ )   −σ ≤ θ < 0,
2
F20 (θ ) = g20 0
⎩ − g20 φ (0) + g02 φ (0) − d θ = 0,
2 1
⎧  
⎨ − g11 φ (θ ) + g11 φ (θ )   −σ ≤ θ < 0,
2
F11 (θ ) = g11 0 (6.173)
⎩ − g11 φ (0) + g11 φ (0) − d θ = 0,
2 1
2
2
F02 (θ ) = F 20 (θ ).

One can now set up equation (6.149) to approximate the center manifold. On this
manifold one must have
W (s) = w(y(s), y(s)). (6.174)
By taking derivatives, the equation for the manifold becomes

wy (y, y)y (s) + wy (y, y)y (s) = Aw(y(s), y(s)) + F2 (y(s), y(s)), (6.175)

where w(y, y) is given by (6.150). The partial derivatives are given by

wy (y, y) = w20 y + w11 y,


wy (y, y) = w11 y + w02 y. (6.176)

Using (6.151) and (6.176) expand the terms of (6.175) to second order as

wy (y, y)(θ )y (s) = iω w20 (θ )y2 (s) + iω w11 (θ )y(s)y(s),


wy (y, y)(θ )y (s) = −iω w11 (θ )y(s)y(s) − iω w02 (θ )y2 (s),
y2 y2
Aw(y, y)(θ ) = (Aw20 )(θ ) + (Aw11 )(θ )yy + (Aw02 )(θ ) , (6.177)
2 2
y2 y2
F2 (y, y)(θ ) = F20
2
(θ ) 2
+ F11 (θ )yy + F20
2
(θ ) .
2 2
Substitute (6.177) into (6.175) and equate coefficients to get
6 Bifurcations, Center Manifolds, and Periodic Solutions 193

2iω w20 (θ ) − Aw20 = F20


2
, (θ )
−Aw11 = F11 (θ ),
2
(6.178)
−2iω w02 (θ ) − Aw02 = F02
2
(θ ).

2 =F 2
Since F02 20 and w02 = w20 , one only needs to solve for w20 and w11 .
To compute c3 , c4 , N, use the second equation in (6.178), the definition of A in
(6.40) and (6.173). Then for −σ ≤ θ < 0
dw11
(θ ) = g11 φ (θ ) + g11 φ (θ ). (6.179)

Integrate (6.179) and use (6.58) to get
g11 g
w11 (θ ) = φ (θ ) − 11 φ (θ ) + N. (6.180)
iω iω
Clearly
g11
c3 = ,

g
c4 = − 11 . (6.181)

To determine N we will use (6.40) for θ = 0, μ = 0 and the fact that φ (0), φ (0)
are eigenvectors of A with eigenvalues iω , −iω at θ = 0. The eigenvector property
implies

U φ (0) +V φ (−σ ) = iωφ (0),


U φ (0) +V φ (−σ ) = −iωφ (0). (6.182)

If we combine this with (6.173), then it is straightforward to show that


 
g11 0
(U + V)N = − , (6.183)
d2 1
which can be solved for  
g11 −1
N=− . (6.184)
d2 0
To solve for c1 , c2 , M, use the definition of A in (6.40) for −σ ≤ θ < 0, (6.173),
(6.178) to get
dw20
= 2iω w20 (θ ) + g20 φ (θ ) + g02 φ (θ ). (6.185)

This nonhomogeneous system has the solution
g20 g
w20 (θ ) = − φ (θ ) − 02 φ (θ ) + Me2iωθ . (6.186)
iω 3iω
194 D.E. Gilsinn

Again, clearly
g20
c1 = − ,

g
c2 = − 02 . (6.187)
3iω
To solve for M use the definition of (6.40) for θ = 0, μ = 0, (6.173), and again the
fact that φ (0), φ (0) are eigenvectors of A with eigenvalues iω , −iω at θ = 0 to
show that    
−2iωσ g20 0
2iω I −U −Ve M= , (6.188)
d2 1
which can be solved for M as
 
g20 1
M= , (6.189)
d2Δ 2iω

where  
Δ = −4ω 2 + 4iξ ω + pc 1 − e2iωσ . (6.190)

One can now return to (6.169) and use (6.180)–(6.184) and (6.186)–(6.190) to
construct g21 in (6.191), which concludes the construction of (6.152) and thus the
projected equation (6.151) on the center manifold. Then the projected equation
(6.148) on the center manifold takes the form (6.151) and (6.152). The coefficients
gij for (6.152) are given by

g20 = 2E γ 2 d 2 pc ,
g11 = 2E γγ d 2 pc ,
g02 = 2E γ 2 d 2 pc , (6.191)
  
g20 γ g g20 −1 e−2iωσ
g21 = 2pc E − − 02 + γ
iω 3iω d2Δ
  +
g11 γ g11 γ
+2E − γ + 3E γ 2 γ d 2 ,
iω iω

where

γ = e−iωσ − 1,
 
Δ = −4ω 2 + 4iξ ω + pc 1 − e2iωσ . (6.192)

6.7.2 Step 2: Develop the Normal Form on the Center Manifold

Once the constants (6.191) have been developed then the normal form on the center
manifold is computed as (6.154) using (6.155). Normal form theory, following the
6 Bifurcations, Center Manifolds, and Periodic Solutions 195

argument of Wiggins [34] (see also Nayfeh [25]), can be used to reduce (6.151) to
the simpler form (6.154) on the center manifold. In fact, the system (6.151) can be
reduced by a near identity transformation to (6.154) where
 +
i |g02 |2 g21
c21 = g11 g20 − 2|g11 | −
2
+ . (6.193)
2ω 3 2

The proof of the result is detailed and the reader is referred to Gilsinn [11] and
Wiggins [34]. As shown in Hassard et al. [17] the general normal form for the case
μ = 0 is given by
dv
= λ (μ )v + c21 (μ )v2 v (6.194)
ds
where λ (0) = iω and c21 (0) is given by (6.155).
One can now compute g20 , g11 , g02 , and g21 from (6.191). Then from (6.155)
one computes c21 and finally, from (6.158), one can compute μ2 , τ2 , β2 as

μ2 = −0.09244,
τ2 = 0.002330, (6.195)
β2 = 0.08466.

This implies that at the lobe boundary the DDE bifurcates into a family of unstable
periodic solutions in a subcritical manner.

6.7.3 Step 3: Form the Periodic Solution on the Center Manifold

Using (6.159), one can compute the form of the bifurcating solutions for (6.96) as
⎛ ⎞
2ε cos 1.01341s + ε 2 ((−2.5968e − 6) cos 2.02682s
⎜ −0.014632 sin 2.02682s + 0.060113) ⎟
z(s) = ⎜
⎝ −2ε sin 1.01341s + ε 2 (−0.02966 cos 2.02682s
⎟.
⎠ (6.196)
+(5.2632e − 6) sin 2.02682s)

As noted at the end of Sect. 6.6 one can take as an approximation


 1/2
μ
ε= . (6.197)
μ2

It is clear from (6.195) that μ must be negative. Thus select

(−μ )1/2
ε= . (6.198)
0.3040395
The period of the solution can be computed as
2π    
T (ε ) = 1 + τ2 ε 2 = 6.2000421 1 + 0.002330 ε 2 , (6.199)
ωm
196 D.E. Gilsinn

and the characteristic exponent is given by

β = 0.08466 ε 2 . (6.200)

6.8 Simulation Results

To compare with the theoretical results, simulations were performed by direct inte-
gration of (6.96). The first of two sets of simulations was initialized at the points A
through F in Fig. 6.5 along the vertical line Ω = 0.2144 (selected for ease of calcu-
lation only). This line crosses the minimum of the first lobe in Fig. 6.5. The simu-
lations numerically demonstrate that there are three branches of periodic solutions
emanating from the critical bifurcation point pm = 0.027365. The three branches are
shown in Fig. 6.6. The amplitudes in this figure were computed using (6.196) with
ε given by (6.197). Two of the branches are unstable and one is stable in the fol-
lowing sense. Solutions initialized below the subcritical branch converge to the zero
solution. Those initialized above the subcritical branch grow in amplitude. The solu-
tions initialized above zero for bifurcation parameter values greater than the critical
value grow in amplitude. Similar results would be obtained along the lines crossing
at other critical points on the lobes.

Fig. 6.5 Locations of sample simulated solutions


6 Bifurcations, Center Manifolds, and Periodic Solutions 197

Fig. 6.6 Amplitudes of bifurcating branches of solutions at subcritical point

The Hopf bifurcation result is very local around the boundary and only for very
small initial amplitudes is it possible to track the unstable limit cycles along the
branching amplitude curve. This is shown in Fig. 6.7 where initial simulation loca-
tions were selected along the subcritical curve and the DDE was integrated forward
over five delay intervals. Note that nearer the critical bifurcation point the solution
amplitude remains near the initial value, whereas further along the curve the solution
amplitude drops away significantly from the subcritical curve.
Since the subcritical bifurcation curve in Fig. 6.6 is itself an approximation to the
true subcritical curve, solutions initialized on the curve tend to decay to zero. This
occurs at points A, B, and C in Fig. 6.5.
The decay at point B, when initialized on the subcritical curve is similar to the
result at point A, but with a less rapid decay (Fig. 6.8). However, one can show
the effect of initializing a solution above the curve at point B. That is shown in
Fig. 6.9. Point B is given by (Ω , p) = (0.2144, 0.020365). For p = 0.020365 the
subcritical curve amplitude value is 0.550363 (Fig. 6.10). Figures 6.11 and 6.12
show the solution growing when the simulation amplitude is initialized at 0.8.
At point C, when the solution is initialized on the subcritical bifurcation curve,
the phase plot remains very close to a periodic orbit, indicating that the Hopf results
are very local in being able to predict the unstable periodic solution.
The behavior at points D, E, and F of Fig. 6.5 are similar in that all of the solutions
initialized above zero experience growth and eventually explode numerically.
198 D.E. Gilsinn

Fig. 6.7 Theoretical and simulated subcritical solution amplitudes

Fig. 6.8 Stable amplitude of solution at point A


6 Bifurcations, Center Manifolds, and Periodic Solutions 199

Fig. 6.9 Unstable amplitude of solution at point B

Fig. 6.10 Nearly stable amplitude of solution at point C


200 D.E. Gilsinn

Fig. 6.11 Unstable amplitude of solution at point E

Fig. 6.12 Stable amplitude of solution at point M


6 Bifurcations, Center Manifolds, and Periodic Solutions 201

The second set of simulations, initialized at points G through M along the line
Ω = 0.15, in Fig. 6.5, shows the stability of solutions for parameters falling between
lobes in that the solutions of the DDE (6.92) all decay to zero. The gaps between
lobes are significant for machining. Since the parameter p is proportional to chip
width, the larger the p value for which the system is stable the more material
can be removed without chatter, where chatter can destroy the surface finish of
the workpiece. These large gaps tend to appear between the lobes in high-speed
machining with spindle rotation rates of the order of 2, 094.4 rad s−1 (20,000 rpm)
or greater. Figure 6.12 illustrates this stability with one time plot for point M,
(Ω , p) = (0.15, 0.08). The solution for this plot is initialized at amplitude 0.08.

Acknowledgments The author wishes to thank Dr. Tim Burns of NIST and Prof. Dianne O’Leary
of the University of Maryland for their careful reading and gracious assistance in preparing this
chapter.

References

1. Altintas, Y. and Budak, E., Analytic prediction of stability lobes in milling, Ann. CIRP 44,
1995, 357–362.
2. an der Heiden, U. Delays in physiological systems, J. Math. Biol. 8, 1979, 345–364.
3. Arnold, V.I., Ordinary Differential Equations, Springer, Berlin, 1992.
4. Avellar, C.E. and Hale, J.K., On the zeros of exponential polynomials, J. Math. Anal. Appl.
73, 1980, 434–452.
5. Balachandran, B., Nonlinear dynamics of milling processes, Philos. Trans. R. Soc. Lond.
A 359, 2001, 793–819.
6. Balachandran, B. and Zhao, M.X., A mechanics based model for study of dynamics of milling
operations’, Meccanica 35, 2000, 89–109.
7. Bellman, R. and Cooke, K.L., Differential–Difference Equations, Academic, New York, NY,
1963.
8. Campbell, S.A., Calculating centre manifolds for delay differential equations using maple, In:
Balachandran, B., Gilsinn, D.E., Kalmár-Nagy, T. (eds), Delay Differential Equations: Recent
Advances and New Directions, Springer, New York, NY, p. TBD
9. Carr, J., Applications of Centre Manifold Theory, Springer, New York, NY, 1981.
10. Cronin, J., Differential Equations: Introduction and Qualitative Theory, Marcel Dekker,
New York, NY, 1980.
11. Gilsinn, D.E., Estimating critical Hopf bifurcation parameters for a second-order differential
equation with application to machine tool chatter, Nonlinear Dynam. 30, 2002, 103–154.
12. Hale, J.K., Linear functional-differential equations with constant coefficients, Contrib. Diff.
Eqns. II, 1963, 291–317.
13. Hale, J.K., Ordinary Differential Equations, Wiley, New York, NY, 1969.
14. Hale, J., Functional Differential Equations, Springer, New York, NY, 1971.
15. Hale, J.K. and Lunel, S.M.V., Introduction to Functional Differential Equations, Springer,
New York, NY, 1993.
16. Hassard, B. and Wan, Y.H., Bifurcation formulae derived from center manifold theory,
J. Math. Anal. Appl. 63, 1978, 297–312.
17. Hassard, B.D., Kazarinoff, N.D., and Wan, Y.H., Theory and Applications of Hopf Bifurca-
tions, Cambridge University Press, Cambridge, 1981.
18. Hille, E. and Phillips, R. S., Functional Analysis and Semi-Groups, American Mathematical
Society, Providence, RI, 1957.
202 D.E. Gilsinn

19. Hirsch, M.W. and Smale, S., Differential Equations, Dynamical Systems, and Linear Algebra,
Academic, New York, NY, 1974.
20. Kalmár-Nagy, T., Pratt, J.R., Davies, M.A., and Kennedy, M.D., Experimental and analytical
investigation of the subcritical instability in metal cutting, Proceedings of DETC’99, 17th
ASME Biennial Conference on Mechanical Vibration and Noise, Las Vegas, Nevada, Sept.
12–15, 1999, 1–9.
21. Kalmár-Nagy, T., Stépán G., Moon, F.C., Subcritical Hopf bifurcation in the delay equation
model for machine tool vibrations. Nonlinear Dynam. 26, 2001, 121–142
22. Kazarinoff, N.D., Wan, Y.-H., and van den Driessche, P., Hopf bifurcation and stability of
periodic solutions of differential-difference and integro-differential equations’, J. Inst. Math.
Appl. 21, 1978, 461–477.
23. Kuang, Y., Delay Differential Equations with Applications in Population Dynamics, Aca-
demic, Boston, MA, 1993.
24. MacDonald, N. Biological Delay Systems: Linear Stability Theory, Cambridge University
Press, Cambridge, 1989.
25. Nayfeh, A.H., Method of Normal Forms, Wiley, New York, NY, 1993.
26. Nayfeh, A.H. and Balachandran, B. Applied Nonlinear Dynamics, Wiley, New York,
NY, 1995.
27. Pinney, E., Ordinary Difference–Differential Equations, University of California Press,
Berkeley, CA, 1958.
28. Stépán, G., Retarded Dynamical Systems: Stability and Characteristic Functions, Longman
Scientific & Technical, Harlow, England, 1989.
29. Stone, E. and Askari A., Nonlinear models of chatter in drilling processes. Dynam. Syst. 17(1),
2002, 65–85.
30. Stone E, Campbell SA (2004) Stability and bifurcation analysis of a nonlinear DDE model for
drilling. Journal of Nonlinear Science 14 (1), 27–57.
31. Taylor, J.R., ‘On the art of cutting metals’, American Society of Mechanical Engineers, New
York, 1906.
32. Tlusty, J., ‘Machine Dynamics’, Handbook of High-speed Machine Technology, Robert I.
King, Editor, Chapman and Hall, New York, 1985, 48-153.
33. Tobias, S.A. and Fishwick, W., ‘The Chatter of Lathe Tools Under Orthogonal Cutting Con-
ditions’, Transactions of the ASME 80, 1958, 1079-1088.
34. Wiggins, S., Introduction to Applied Nonlinear Dynamical Systems and Chaos, Springer-
Verlag, New York, 1990.
35. Yosida, K., Functional Analysis, Springer-Verlag, Berlin, 1965.
Chapter 7
Center Manifold Analysis of the Delayed
Liénard Equation

Siming Zhao and Tamás Kalmár-Nagy

Abstract In this chapter, the authors show the existence of the Hopf bifurcation in
the delayed Liénard equation. The criterion for the criticality of the Hopf bifurcation
is established based on the reduction of the infinite-dimensional problem onto a two-
dimensional center manifold. Numerics based on DDE-Biftool are given to compare
with the authors’ theoretical calculation. The Liénard type sunflower equation is
discussed as an illustrative example based on our method.

Keywords: Delayed Liénard Equation · Hopf bifurcation · Center manifold ·


Poincaré-Lyapunov constant

7.1 Introduction

Time delay is very common in many natural and engineering systems. It has been
widely studied in fields as diverse as biology [1], population dynamics [2], neural
networks [3], feedback controlled mechanical systems [4], machine tool vibrations
[5,6], and lasers [7]. Delay effects can also be exploited to control nonlinear systems
[8]. A good exposition of delay equations can be found in [9].
Several methods developed to study dynamical systems can be well applied to
delay equations, these include the method of multiple scales [10], the Lindstedt-
Poincaré method [11, 12], harmonic balance [13], and the averaging method [14].
Center manifold theory [15] is one of the rigorous mathematical tools to study
bifurcations of delay-differential equations [16, 17]. In this chapter, we consider the
Hopf bifurcation of the delayed Liénard equation

ẍ(t) + f (x(t))ẋ + g(x(t − τ )) = 0, (7.1)

where f , g ∈ C4 , f (0) = K > 0, g(0) = 0, g (0) = 1, and τ > 0 is a finite time-delay.

B. Balachandran et al. (eds.), Delay Differential Equations: Recent Advances 203


and New Directions, DOI 10.1007/978-0-387-85595-0 7,
c Springer Science+Business Media LLC 2009
204 S. Zhao and T. Kalmár-Nagy

Luk [18] and Zhang [19] derived necessary and sufficient conditions for the
boundedness of all solutions and their derivatives of (7.1) using a Lyapunov-
functional approach. Zhang [20] and Omari and Zangfin [21] showed the existence
of periodic orbits for this equation. Metzen [22] proved the existence of periodic
orbits in a duffing-type equation with delay and studied the effects of delay on the
solvability of the problem. Acosta and Lizana [23] and Xu and Lu [24] investigated
the linear stability of (7.1). Xu and Lu [24] performed a center manifold based
Hopf bifurcation analysis. Campbell et al. [25] also used center manifold analysis
of single and double Hopf bifurcations for a similar second-order delay differential
equation. In these analysis, however, the curvature of the center manifold caused
by quadratic terms was not accounted for. Colonius [26] studied optimal periodic
control of (7.1).
As shown in [16], (7.1) can be rewritten in state space form as:

ẋ(t) = y(t) − S(x(t)),


(7.2)
ẏ(t) = −g(x(t − τ )),

where S(x) = 0x f (δ ) dδ . Expanding (7.2) in the neighborhood of the null solution
up to third order yields:

ẋ(t) = y(t) − Kx(t) + ax2 (t) + bx3 (t),


(7.3)
ẏ(t) = −x(t − τ ) + cx2 (t − τ ) + dx3 (t − τ ),

with a = − 12 f (0), b = − 16 f (0), c = − 12 g (0), and d = − 16 g (0). By defining a


new vector z = (x, y)T , the above system can be written as:

ż(t) = Lz(t) + Rz(t − τ ) + f (z) , (7.4)

where
     
−K 1 0 0 ax2 + bx3
L= , R= , f (z) = . (7.5)
0 0 −1 0 cx2 (t − T ) + dx3 (t − T )

Note that there are many ways to write (7.1) in state space form. The reason why the
form (7.2) is used is that when calculating the center manifold, the nonlinearity will
only contain the first component of the state vector; this will simplify the calculation.

7.2 Linear Stability Analysis

The linear stability analysis of the z = 0 solution of (7.4) is based on the linear
equation [24]
ż(t) = Lz(t) + Rz(t − τ ). (7.6)
7 Center Manifold Analysis of the Delayed Liénard Equation 205

The characteristic equation of (7.6) can be obtained by substituting the trial solution
z(t) = c expλ t into (7.6)
λ 2 + K λ + e−λ τ = 0. (7.7)
On the stability boundary the characteristic equation has a pair of pure imaginary
roots. To find these roots, we substitute λ = iω , ω > 0 into (7.7) and separate the
real and imaginary part to obtain

ω 2 = cos ωτ ,
(7.8)
K ω = sin ωτ .
Using simple trigonometry, (7.8) can be reduced to
/
1 − ω4
K= ,

2
 (7.9)
1 K
τ= 2nπ + arctan ,
ω ω
where n is an nonnegative integer. Since K > 0, we have 0 < ω < 1.
Note that (7.9) describes infinitely many curves in the (τ , K) plane, but the first
branch (n = 0) (shown in Fig. 7.1) actually separates the stable and unstable regions
(this is a consequence of a theorem by Hale [16]).
For a fixed time-delay τ , the critical value of the bifurcation parameter K (i.e., on
the stability boundary) will be denoted by k. A necessary condition for the existence
of periodic orbits is that by changing K through k, the critical characteristic roots
cross the imaginary axis with nonzero velocity, i.e., dμ /dK |K=k = 0. Taking the first
derivative with respect to K in (7.7) and using (7.9) gives

dλ ω 2 (2 + kτ )
γ = Re |K=k = > 0. (7.10)
dK (k − τω 2 )2 + (2ω + kτω )2

2.5

2 Stable region
K
1.5

1
Unstable region
0.5

0
0 1 2 3 4
t

Fig. 7.1 Linear stability boundary of the delayed Liénard equation.


206 S. Zhao and T. Kalmár-Nagy

Therefore, the characteristic roots cross the imaginary axis with positive velocity.
This velocity will later be used to calculate the estimation of the vibration amplitude.
In the following sections, we will use center manifold theory to investigate
the Hopf bifurcation that might occur on the stability boundary and calculate the
Poincaré-Lyapunov constant to characterize the criticality of the bifurcation. The
solution method closely follows Kalmár-Nagy et al. [6].

7.3 Operator Differential Equation Formulation

Generally delay differential equations can be expressed as abstract evolution


equations on the Banach space H of continuously differentiable functions
μ : [−τ , 0] → R2
żt = Dzt + F(zt ), (7.11)
where the shift of time zt (ϕ ) ∈ H is defined as:

zt (ϕ ) = z(t + ϕ ), ϕ ∈ [−τ , 0]. (7.12)

The linear operator D at the critical bifurcation parameter assumes the form
 d
u(ϕ ), ϕ ∈ [−τ , 0)
Du(ϕ ) = dϕ ,
Lu(0) + Ru(−τ ), ϕ = 0

while the nonlinear operator is written as



0, ϕ ∈ [−τ , 0),
F(u)(ϕ ) =
f(u), ϕ = 0,
 
au21 (0) + bu31 (0)
f(u(ϕ )) = .
cu21 (−τ ) + du31 (−τ )

To calculate the center manifold, we also need to define the adjoint space H∗ of con-
tinuously differentiable functions θ : [0, τ ] → R2 together with the adjoint operator
 d
− dϕ u(ϕ ), ϕ ∈ (0, τ ]
D∗ u(θ ) = ,
L∗ u(0) + R∗ u(τ ), ϕ = 0

with respect to the bilinear form (·, ·) : H∗ × H → R


 0

(υ , μ ) = υ (0)μ (0) + υ ∗ (δ + τ )Rμ (δ ) dδ . (7.13)
−τ

These operators are used to construct a projection from the infinite-dimensional


space onto the two-dimensional center manifold by using the eigenfunctions of the
linear operator D and its adjoint D∗ . The center subspace is spanned by the real
and imaginary parts of the complex eigenfunction p(ϕ ) of D corresponding to the
7 Center Manifold Analysis of the Delayed Liénard Equation 207

critical characteristic roots ±iω . The complex eigenfunction p(ϕ ) and the eigen-
functions q(θ ) of the adjoint operator D∗ can be found from:

Dp(ϕ ) = iω p(ϕ ),
(7.14)
D∗ q(θ ) = −iω q(θ ).

Since the critical eigenvalues of the linear operator D just coincide with the critical
characteristic roots of the characteristic function D(λ , K), the Hopf bifurcation can
be studied at the two-dimensional center manifold embedded in the infinite dimen-
sional phase space.
The general solutions to (7.14) are of the form:

p(ϕ ) = ceiωϕ ,
(7.15)
q(θ ) = deiωθ ,

and the constants p, q are found by using the boundary conditions embedded in the
operator equations (7.14)

(iω I − L − e−iωτ R)c = 0,


(−iω I − LT − eiωτ RT )d = 0.

The vectors p, q should not be aligned, i.e., the bilinear form of q and p should be
nonzero
(q, p) = β = 0. (7.16)
The constant β can be freely chosen. Using (7.13) we have:
 0
(q, p) = q∗ (0)p(0) + q∗ (ξ + τ )Rp(ξ ) dξ
−τ
 0
∗ ∗ −iωτ
= d c + d Rce dξ (7.17)
−τ
= d∗ (I + τ e−iωτ R)c.

To summarize, the vectors c, d are found from the following equations

(iω I − L − e−iωτ R)c = 0, (7.18)

(iω I + LT + eiωτ RT )d = 0, (7.19)


d∗ (I + τ e−iωτ R)c = 2. (7.20)
There are three independent complex equations for four complex unknowns. (7.18)
and (7.19) result in (here c1 , d2 are complex!)
 
1
c= c1 , (7.21)
k + iω
208 S. Zhao and T. Kalmár-Nagy
 
−iω
d= d2 . (7.22)
1
Then from (7.20) we obtain:

c1 d2∗ [k − ω 2 τ + i(2ω + kωτ )] = 2. (7.23)

We have the freedom to fix 1 unknown, and so we choose c1 = 1, which in turn


yields
2
d2∗ = . (7.24)
k − ω 2 τ + i(2ω + kωτ )
After separating the real and imaginary part
   
c11 1
c1 = Re c = = ,
c12 k
   
c21 0
c2 = Im c = = ,
c22 ω
   2 
d11 ω (1 + 12 kτ )
d1 = Re d = =Ω 1 ,
2 (k − ω τ )
d12 2

   ω 
d21 − 2 (k − ω 2 τ )
d2 = Im d = =Ω ,
d22 ω (1 + 12 kτ )

with Ω = ω 2 (2+k τ)
.
Let us decompose the solution zt (ϕ ) into two components y1 (t) and y2 (t) lying
in the center subspace and the infinite-dimensional component w transverse to the
center subspace:

zt (ϕ ) = y1 (t)p1 (ϕ ) + y2 (t)p2 (ϕ ) + w(t)(ϕ ), (7.25)

y1 (t) = (q1 , zt ) |ϕ =0 , y2 (t) = (q2 , zt ) |ϕ =0 .


With these new coordinates the operator differential equation (7.11) is be trans-
formed into:
y˙1 = ω y2 + qT1 (0)F, (7.26)
y˙2 = −ω y1 + qT2 (0)F, (7.27)
ẇ = Dw + F(zt ) − qT1 (0)Fp1 − qT2 (0)Fp2 . (7.28)
Note that the nonlinear operator in (7.28) should be written as:

0, ϕ ∈ [−T, 0)
F(y1 p1 + y2 p2 + w) = ,
F, ϕ = 0

where F =( f1 , f2 )T and f1 and f2 are given as (neglecting terms higher than third
order):

f1 = a(w1 (0) + y1 )2 + b(w1 (0) + y1 )3 = a(y21 + 2y1 w1 (0)) + by31 , (7.29)


7 Center Manifold Analysis of the Delayed Liénard Equation 209

f2 = c(w1 (−τ ) + cos ωτ y1 − sin ωτ y2 )2 + d(w1 (−τ ) + cos ωτ y1 − sin ωτ y2 )3


= c(ω 4 y21 + k2 ω 2 y22 − 2kω 3 y1 y2 + 2ω 2 w1 (−τ )y1 − 2kω w1 (−τ )y2 )
+ d(ω 6 y31 − k3 ω 3 y32 − 3kω 5 y21 y2 + 3k2 ω 4 y1 y22 ). (7.30)

In the next section we will approximate w(y1 , y2 )(ϕ ) assuming that it only contains
quadratic terms (higher order terms of w are not relevant for local Hopf bifurcation
analysis).

7.4 Center Manifold Reduction

The center manifold is tangent to y1 ,y2 plane at the origin, and it is locally invariant
and attractive to the flow of the system (7.11). Notice that when a = c = 0 (symmet-
ric nonlinearities), the center manifold coincides with the center subspace, which
is spanned by the eigenfunctions calculated earlier. Since the nonlinearities consid-
ered here are not always symmetric, the second-order expansion of the center man-
ifold needs to be computed. Neglecting higher order terms for the local bifurcation
analysis
1 
w(y1 , y2 )(ϕ ) = h1 (ϕ )y21 + 2h2 (ϕ )y1 y2 + h3 (ϕ )y22 . (7.31)
2
The time derivative of ϕ can be expressed by differentiating the right-hand side of
the above equation via substituting (7.26) and (7.27):

ẇ = h1 y1 y˙1 + h2 y2 y˙1 + h2 y1 y˙2 + h3 y2 y˙2


= ẏ1 (h1 y1 + h2 y2 ) + ẏ2 (h2 y1 + h3 y2 ) (7.32)
= −ω h2 y21 + ω (h1 − h3 )y1 y2 + ω h2 y22 + O(y3 ).

Comparing the coefficients of y21 , y1 y2 , y22 with another form of ẇ from (7.28) we
arrive at the following boundary value problem:

1
ḣ1 = −ω h2 + m f111 + n f211 ,
2
ḣ2 = ω h1 − ω h3 + m f112 + n f212 , (7.33)
1
ḣ3 = ω h2 + m f122 + n f222 ,
2

1 
Lh1 (0) + Rh1 (−τ ) = −ω h2 (0) + m(0) f111 + n(0) f211 − s1 ,
2
Lh2 (0) + Rh2 (−τ ) = ω h1 (0) − ω h3 (0) + m(0) f112 + n(0) f212 − s2 , (7.34)
1  
Lh3 (0) + Rh3 (−τ ) = ω h2 (0) + m(0) f122 + n(0) f222 − s3 ,
2
210 S. Zhao and T. Kalmár-Nagy

with
m(ϕ ) = d11 p1 (ϕ ) + d21 p2 (ϕ ),
(7.35)
n(ϕ ) = d12 p1 (ϕ ) + d22 p2 (ϕ ).
The coefficients of the quadratic terms can be extracted from (7.29) and (7.30) as:

f111 = a, f211 = cω 4 ,
f112 = 0, f212 = −2ckω 3 ,
f122 = 0, f222 = ck2 ω 2 .
By introducing the following notation
⎛ ⎞ ⎛ ⎞
h1 0 −2I 0
h = ⎝ h2 ⎠ , C6×6 = ω ⎝ I 0 −I ⎠ ,
h3 0 2I 0
⎛ ⎞ ⎛ ⎞
2S0 s1 2N0 s1
s = ⎝ S0 s2 ⎠ , n = ⎝ N0 s2 ⎠ ,
2S0 s3 2N0 s3
     
f111 f112 f122
s1 = , s2 = , s3 = ,
f211 f212 f222
 
d11 d12
S0 = ,
kd11 + ω d21 kd12 + ω d22
 
d21 d22
N0 = .
kd21 − ω d11 kd22 − ω d12
Eq. (7.33) is written as the inhomogeneous differential equation:
d
h (ϕ ) = Ch + s cos ωϕ + n sin ωϕ . (7.36)

This ODE has the general solution form:
h(ϕ ) = eCϕ K + M cos ωϕ + N sin ωϕ . (7.37)
After substituting this solution form into (7.36) we get the following equations for
solving matrix M and N together with the corresponding boundary value problem
for solving K     
C6×6 −ω I6×6 M s
=− , (7.38)
ω I6×6 C6×6 N n
Ph(0) + Qh(−τ ) = s − r, (7.39)
where ⎛ ⎞
L0 0
P = ⎝ 0 L 0 ⎠ − C6×6 ,
0 0L
7 Center Manifold Analysis of the Delayed Liénard Equation 211
⎛ ⎞ ⎛ ⎞
R 0 0 2s1
Q = ⎝ 0 R 0 ⎠, r = ⎝ s2 ⎠ .
0 0 R 2s3
The expressions for w1 (0) and w1 (−τ ) are given as:

1 
w1 (0) = (M1 + K1 )y21 + 2(M3 + K3 )y1 y2 + (M5 + K5 )y22
2 (7.40)
= h110 y21 + h210 y1 y2 + h310 y22 ,

1  −Cτ
w1 (−τ ) = (e K |1 +M1 cos ωτ − N1 sin ωτ )y21
2
+2(e−Cτ K |3 +M3 cos ωτ − N3 sin ωτ )y1 y2
 (7.41)
+(e−Cτ K |5 +M5 cos ωτ − N5 sin ωτ )y22
= h11τ y21 + h21τ y1 y2 + h31τ y22 .
From (7.38) and (7.39) we can calculate the first, third, and fifth component of M,
N, K, e−Cτ K
⎛ ⎞ ⎛ ⎞
M1 ad21 + cω 2 (−2d12 kω + d22 (2k2 + ω 2 ))
⎝ M3 ⎠ = − 2 ⎝ −ad11 + cω 2 (d22 kω + d12 (k2 − ω 2 )) ⎠ ,

M5 2ad21 + cω 2 (2d12 kω + d22 (k2 + 2ω 2 ))
⎛ ⎞ ⎛ ⎞
N1 ad11 + cω 2 (2d22 kω + d12 (2k2 + ω 2 ))
⎝ N3 ⎠ = 2 ⎝ ad21 + cω 2 (d12 kω + d22 (−k2 + ω 2 )) ⎠ ,

N5 2ad11 + cω 2 (−2d22 kω + d12 (k2 + 2ω 2 ))
⎛ ⎞ ⎛ ⎞
K1 ω (−2ak(ω 2 − 1) + 3cω 2 (4 + k4 − 2ω 2 − ω 4 ))
⎝ K3 ⎠ = ζ ⎝ a(1 − 4ω 2 − 2k2 ω 2 ) + ck(1 + 2ω 4 ) ⎠,
K5 ω (2ak(−1 + ω ) + c(2k + 8ω − 2ω ))
2 2 2 4

⎛ ⎞
e−Cτ K |1
⎝ e−Cτ K |3 ⎠ =
e−Cτ K |5
⎛ ⎞
ω (−2ak(1 + 2ω 4 ) + cω 2 (9 + 2k2 + 2k4 + (2k2 − 6)ω 2 + 8k2 ω 4 )
ζ⎝ a(1 − 4ω 6 ) + ckω 2 (−1 − 5k2 + (7 + 4k4 )ω 2 − 4k2 ω 4 ) ⎠,
ω (2ak(1 + 2ω 4 ) + c(3k2 − 2 + (8 − 8k2 )ω 2 + 8k4 ω 4 ))

where ζ = 5+12ω 4 −8ω 6
.
212 S. Zhao and T. Kalmár-Nagy

7.5 Hopf Bifurcation Analysis

To restrict a third-order approximation of system (7.26) and (7.27) to the two-


dimensional center manifold calculated in the previous section, the dynamics of
y1 and y2 is assumed to have the form:

ẏ1 = ω y2 + a20 y21 + a11 y1 y2 + a02 y22 + a30 y31 + a21 y21 y2 + a12 y1 y22 + a03 y32 ,
(7.42)
ẏ2 = −ω y1 + b20 y21 + b11 y1 y2 + b02 y22 + b30 y31 + b21 y21 y2 + b12 y1 y22 + b03 y32 .
Using the 10 out of these 14 coefficients a jk , b jk , the so called Poincaré-Lyapunov
constant can be calculated as shown in [15]
1
= ((a20 + a02 )(−a11 + b20 − b02 ) + (b20 + b02 )(a20 − a02 + b11 ))
8ω (7.43)
1
+ (3a30 + a12 + b21 + 3b03 ).
8
Based on the center manifold calculation, the ten coefficients are as follows:
a20 = ad11 + cω 4 d12 ,
b20 = ad21 + cω 4 d22 ,
a11 = −2ckω 3 d12 ,
b11 = −2ckω 3 d22 ,
a02 = ck2 ω 2 d12 ,
b02 = ck2 ω 2 d22 ,
a30 = (2ah110 + b)d11 + (2cω 2 h11τ + d ω 6 )d12 ,
a12 = 2ah310 d11 + (2cω 2 h31τ − 2ckω h21τ + 3dk2 ω 4 )d12 ,
b21 = 2ah210 d21 + (2cω 2 h21τ − 2ckω h11τ − 3dkω 5 )d22 ,
b03 = −(2ckω h31τ + dk3 ω 3 )d22 .
Substitute all these coefficients into (7.43) and tedious simplification can lead us to:

= l1 d12 + l2 d22 , (7.44)

where we have
3 a2 ac
l1 = d ω 2 + ωζ (1 + 4ω 2 − 2ω 4 ) − kωζ (1 + ω 2 + ω 4 )
8 4 2
c2 11
+ ωζ ( + k + 2ω + 12ω − 12ω 6 ),
2 2 4
4 2 (7.45)
3 3 a2 ac 7
l2 = bω − dkω + kω 2 ζ (1 − ω 2 ) + ζ ( + ω 2 + 10ω 4 − 10ω 6 )
8 8 2 4 2
c2 11
+ kζ (− + ω − 12ω + 12ω ).
2 4 6
4 2
7 Center Manifold Analysis of the Delayed Liénard Equation 213

The negative or positive sign of determines whether the Hopf bifurcation is super-
critical or subcritical.

7.6 Numerical Results

To validate the results of the center manifold calculations, we used the continuation-
based DDE-Biftool [27, 28] as well as the MATLAB numerical solver DDE-23. By
γ
defining α = , the vibration amplitude in the neighborhood of the bifurcation point
is estimated as: 
r = α (K − k). (7.46)
The delayed Liénard equation

ẋ(t) = y(t) − Kx(t) + ax2 (t) + bx3 (t),


(7.47)
ẏ(t) = −x(t − τ ) + cx2 (t − τ ) + dx3 (t − τ ),

with three different sets of parameters (see Table 7.1) was solved by continuation
(DDE-Biftool) and numerical integration (DDE-23). For τ = 1, 2, 3, 4 Table 7.2
shows the value of k at the bifurcation point, the critical frequency ω , and the percent
error ε = 100| α −ααnum |. The numerical approximation αnum has been obtained from
the DDE-Biftool results (using amplitudes corresponding to values of the bifurca-
tion parameter K such that |K − k| ≤ 0.0005k) by least-squares fit.
Figures 7.2–7.4 show the bifurcation diagrams of (7.47) for the above parameter
values. The solid line, dots, and rectangles show the analytical amplitude estimate
(7.46), the DDE-Biftool results, and numerical results by DDE-23, respectively.
The DDE-23 results are obtained by combining numerical integration, estimation
of amplitude decay/growth, and bisection to locate periodic orbits. Finally, by

Table 7.1 Sets of parameter values of three different delayed Liénard equations
a b c d
I 1 −2 5 −2
II −3 −2 1 −4
III −3 −1 4 3

Table 7.2 Value of the bifurcation parameter K on the stability boundary, critical frequency ω , and
error ε evaluated at different time delays τ
τ 1 2 3 4
k 0.891 1.552 2.154 2.753
ω 0.824 0.601 0.454 0.360
εI 0.13% 0.10% 0.15% 0.11%
εII 0.91% 0.94% 0.30% 0.25%
εIII 0.29% 0.15% 0.09% 0.15%
214 S. Zhao and T. Kalmár-Nagy

0.03 0.025

0.025 a=1, b=−2, c=5, d=−2, t=1 0.02 a=1, b=−2, c=5, d=−2, t=2

0.02

Amplitude
Amplitude

0.015
0.015
0.01 Stable limit cycle
0.01 Stable limit cycle

0.005
0.005

0 0
0.882 0.884 0.886 0.888 0.89 0.892 1.54 1.545 1.55 1.555
K K
(a) τ = 1 (b) τ = 2
0.025 0.025

0.02 a=1, b=−2, c=5, d=−2, t=3 0.02 a=1, b=−2, c=5, d=−2, t=4
Amplitude

Amplitude

0.015 0.015

0.01 Stable limit cycle 0.01 Stable limit cycle

0.005 0.005

0 0
2.135 2.14 2.145 2.15 2.725 2.73 2.735 2.74 2.745 2.75 2.755
K K
(c) τ = 3 (d) τ = 4

Fig. 7.2 Bifurcation diagram for K for case I of Table 7.2. The solid line is the estimated vibration
amplitude based on (7.46), the dots and rectangles are results obtained from DDE-Biftool and
DDE-23, respectively. The bifurcation is subcritical.

randomly choosing a, b, c, d from [−10, 10] and τ from [0, 5], 1,000 DDE-Biftool
simulations were performed and the amplitude results were compared with the ana-
lytical ones. We found that α agrees very well with αnum (approximation is again
based on amplitudes within 0.05% of the critical value k): the mean error is 0.8%,
with a small variance of 5.43 × 10−4 .
Having verified our analytical results, we can also utilize them to study the so-
called Sunflower equation.

7.7 Hopf Bifurcation in the Sunflower Equation

Israelson and Johnson [29] proposed the equation:


A B
y + y + sin y(t˜ − ε ) = 0 (7.48)
ε ε
7 Center Manifold Analysis of the Delayed Liénard Equation 215

0.4 0.14

0.35 0.12
0.3 Unstable limit cycle
0.1
0.25 a=−3, b=−2, c=1, d=−4, t=1
Amplitude

Amplitude
0.08
0.2
0.06
0.15
0.04
0.1
a=−3, b=−2, c=1, d=−4, t=2
0.05 0.02
Stable limit cycle
0 0
0.86 0.865 0.87 0.875 0.88 0.885 0.89 1.55 1.555 1.56 1.565
K K
(a) τ = 1 (b) τ = 2
0.1 0.1

0.08 Unstable limit cycle 0.08


Unstable limit cycle
Amplitude
Amplitude

0.06 0.06

0.04 0.04

0.02 a=−3, b=−2, c=1, d=−4, t=3 0.02 a=−3, b=−2, c=1, d=−4, t=4

0 0
2.15 2.155 2.16 2.165 2.17 2.175 2.75 2.755 2.76 2.765 2.77 2.775 2.78
K K
(c) τ = 3 (d) τ = 4

Fig. 7.3 Bifurcation diagram for K for case II of Table 7.2. The solid line is the estimated vibration
amplitude based on (7.46) the dots and rectangles are results obtained from DDE-Biftool and DDE-
23, respectively. The bifurcation is subcritical.

to explain the helical movement of the tip of a growing plant. The upper part of
the stem of the sunflower performs a rotating movement. y(t˜) is the angle of the
plant with respect to the vertical line, the delay factor ε is corresponding to a
geotropic reaction time in the effect due to accumulation of the growth hormone
alternatively on both side of the plant. The parameters A and B can be obtained
experimentally.
Somolinos [30] proved the existence of periodic solutions for (7.48); this result
covers both small amplitude limit cycle generated by Hopf bifurcation and large
amplitude limit cycles. Casal and Freedman [11] computed a perturbation expan-
sion based on the Lindstedt-Poincaré method. MacDonald [13] performed first-
order harmonic balance of (7.48). Recently, Liu and Kalmár-Nagy [31] used the
high-dimensional harmonic balance technique to compute limit cycle amplitudes
and frequencies.
216 S. Zhao and T. Kalmár-Nagy

0.025 0.025

0.02 0.02
Stable limit cycle
Stable limit cycle
Amplitude

Amplitude
0.015 0.015

0.01 0.01

a=−3, b=−1, c=4, d=3, t=2


0.005 a=−3, b=−1, c=4, d=3, τ=1 0.005

0 0
0.882 0.884 0.886 0.888 0.89 1.54 1.545 1.55
K K
(a) τ = 1 (b) τ = 2
0.025 0.025

0.02 0.02
Stable limit cycle Stable limit cycle
Amplitude
Amplitude

0.015 0.015

0.01 0.01

a=−3, b=−1, c=4, d=3, t=3 a=−3, b=−1, c=4, d=3, t=4
0.005 0.005

0 0
2.135 2.14 2.145 2.15 2.155 2.725 2.73 2.735 2.74 2.745 2.75 2.755
K K
(c) τ = 3 (d) τ = 4

Fig. 7.4 Bifurcation diagram for K for case III of Table 7.2. The solid line is the estimated vibration
amplitude based on (7.46) the dots and rectangles are results obtained from DDE-Biftool and DDE-
23, respectively. The bifurcation is supercritical.
,
Introducing a time scaling t → B˜
ε t and expanding (7.48) about the null solution
up to third order yields:
A 1
ẍ + ẋ + x(t − τ ) − x3 (t − τ ) = 0, (7.49)
τ 6
√ ,
where τ = Bε , x(t) = y( Bε t˜). This equation is in the same form as (7.1) with
K = Aτ , a = b = c = 0, and d = 16 ; therefore, our previous results can be directly
applied. The characteristic equation is:
A
λ 2 + λ + e−λ τ = 0. (7.50)
τ
On the stability boundary

ω 2 = cos ωτ ,
Acr (7.51)
ω = sin ωτ .
τ
7 Center Manifold Analysis of the Delayed Liénard Equation 217

16

14

12

10
A Stable Region
8

4 Unstable Region
2

0
0 1 2 3 4 5
t
Fig. 7.5 Linear stability boundary of the Sunflower equation.

The parametric curve (τ (ω ) , Acr (ω )) describes the stability boundary (see Fig. 7.5).
We now substitute Aτcr = k into our result for the Poincaré-Lyapunov formula (7.44)
to yield
Ω
=− (Acr ω 2 + τ 2 ) < 0, (7.52)
32τ
4τ 2
where Ω = (A −τ 2 ω 2 )2 +(2 τω +Acr τω )2
. To obtain the amplitude estimate, we also need
cr
to calculate root crossing velocity through bifurcating A:

dλ ω2
γ = Re |A=Acr ,T=τ = − (2 + Acr )Ω < 0. (7.53)
dA 4τ
We conclude that the Hopf bifurcation of the Sunflower equation is always super-
critical. This conclusion is in full agreement with that from the earlier studies
referred to. From (7.46), (7.52), and (7.53), the amplitude of the limit cycle can
be estimated as: 0
−2(2 + Acr )(A − Acr )
r = 2ω . (7.54)
Acr ω 2 + τ 2

Figure 7.6a shows the amplitude estimate for B = 4 and ε = 1. The solid line denotes
the plot of the analytical result (7.54), while the dots and triangles correspond to the
numerical results from DDE-Biftool based on the original equation (7.48) and the
Taylor expanded one (7.49). Figure 7.6b shows the x − ẋ plot corresponding to point
C (A = 3, B = 4, ε = 1) in Fig. 7.6a.
218 S. Zhao and T. Kalmár-Nagy

1
0.7

0.6 C=(3, 0.5479)


0.5479 0.5
0.5 Stable Limit Cycle
Amplitude

0.4 x’ 0
0.3

0.2 −0.5

0.1
−1
0
2.95 3 3.05 3.1 3.15 −1 −0.5 0 0.5 1
A x
(a) bifurcation diagram (b) x − ẋ plot

Fig. 7.6 (a) shows the analytical amplitude estimate and DDE-Biftool simulation of (7.49) and
(7.48) when B = 4, ε = 1. (b) shows the x − ẋ plot of the point C in (a). The initial function is
chosen as x(t) = 0.1, t ∈ (−1, 0].

7.8 Concluding Remarks

In this chapter, we proved the existence of the Hopf bifurcation of the null solution of
the delayed Liénard equation using center manifold analysis. Based on the reduction
onto a two-dimensional system, we have presented a closed-form criterion for the
criticality of the Hopf bifurcation. The amplitude estimate for the bifurcating limit
cycle was obtained by using the calculated root crossing velocity (γ ) and Poincaré-
Lyapunov constant ( ). The analytical result agrees well with the numerical results
obtained from DDE-Biftool and DDE-23. The Sunflower equation is investigated as
a special case of the delayed Liénard equation.

References

1. N. MacDonald. Biological Delay Systems. Cambridge University Press, New York, 1989.
2. Y. Kuang. Delay Differential Equations with Applications in Population Dynamics. Mathe-
matics in Science and Engineering, 191, 1993.
3. A. Beuter, J. Bélair, C. Labrie, and J. Bélair. Feedback and Delays in Neurological Dis-
eases: A Modeling Study Using Gynamical Systems. Bulletin of Mathematical Biology, 55(3):
525–541, 1993.
4. H.Y. Hu. Dynamics of Controlled Mechanical Systems with Delayed Feedback. Springer,
New York, 2002.
5. G. Stépán. Modelling Nonlinear Regenerative Effects in Metal Cutting. Philosophical Trans-
actions: Mathematical, Physical and Engineering Sciences, 359(1781):739–757, 2001.
6. T. Kalmár-Nagy, G. Stépán, and F.C. Moon. Subcritical Hopf Bifurcation in the Delay Equa-
tion Model for Machine Tool Vibrations. Nonlinear Dynamics, 26(2):121–142, 2001.
7. D. Pieroux, T. Erneux, T. Luzyanina, and K. Engelborghs. Interacting Pairs of Periodic Solu-
tions Lead to Tori in Lasers Subject to Delayed Feedback. Physical Review E, 63(3):36211,
2001.
7 Center Manifold Analysis of the Delayed Liénard Equation 219

8. K. Pyragas. Continuous Control of Chaos by Self-controlling Feedback. Physics Letters A,


170(6):421–428, 1992.
9. G. Stépán. Retarded Dynamical Systems: Stability and Characteristic Functions. Longman
Scientific & Technical, Marlow, New York, 1989.
10. H.Y. Hu, E.H. Dowell, and L.N. Virgin. Resonances of a Harmonically Forced Duffing Oscil-
lator with Time Delay State Feedback. Nonlinear Dynamics, 15(4):311–327, 1998.
11. A. Casal and M.M. Freedman. A Poincaré-Lindstedt Approach to Bifurcation Problems for
Differential-delay Equations. IEEE Transactions on Automatic Control, 25(5):967–973, 1980.
12. H.C. Morris. A Perturbative Approach to Periodic Solutions of Delay-differential Equations.
IMA Journal of Applied Mathematics, 18(1):15–24, 2001.
13. N. MacDonald. Harmonic Balance in Delay-differential Equations. Journal of Sound and
Vibration, 186(4):649–656, 1995.
14. B. Lehman and S. Weibel. Moving Averages for Periodic Delay Differential and Difference
equations. Lecture Notes in Control and Information Science, 228:158–183.
15. B.D. Hassard, N.D. Kazarinoff, and Y.H. Wan. Theory and Applications of Hopf Bifurcation.
London Mathematical Society Lecture Note Series, 41, 1981.
16. J.K. Hale. Theory of Functional Differential Equations. Applied Mathematical Sciences,
vol. 3, 1977.
17. J.K. Hale. Nonlinear Oscillations in Equations with Time delay, in Nonlinear Oscillations in
Biology, ed. Hoppenstadt, K. (AMS, Providence, RI) 157–185, 1979.
18. W. Luk. Some Results Concerning the Boundedness of Solutions of Liénard Equations With
Delay. SIAM Journal on Applied Mathematics, 30(4):768–774, 1976.
19. B. Zhang. On the Retarded Liénard Equation. Proceedings of the American Mathematical
Society, 115(3):779–785, 1992.
20. B. Zhang. Periodic Solutions of the Retarded Liénard Equation. Annali di Matematica Pura
ed Applicata, 172(1):25–42, 1997.
21. P. Omari and F. Zanolin. Periodic Solutions of Liénard Equations. Rendiconti del Seminario
Matematico della Università di Padova, 72:203–230, 1984.
22. G. Metzen. Existence of Periodic Solutions of Second Order Differential Equations with
Delay. Proceedings of the American Mathematical Society, 103(3):765–772, 1988.
23. A. Acosta and M. Lizana. Hopf Bifurcation for the Equation x (t) + f (x(t))x (t) + g(x(t −
r)) = 0. Divulgaciones Matematicas, 13:35–43, 2005.
24. J. Xu and Q.S. Lu. Hopf Bifurcation of Time-delay Liénard Equations. International Journal
of Bifurcation and Chaos, 9(5):939–951, 1999.
25. S.A. Campbell, J. Bélair, T. Ohira, and J. Milton. Limit cycles, Tori, and Complex Dynamics in
a Second-order Differential Equation with Delayed Negative Feedback. Journal of Dynamics
and Differential Equations, 7(1):213–236, 1995.
26. F. Colonius and F. Box. Optimal Periodic Control of Retarded Liénard Equation. Distributed
Parameter Systems, Proceedings of Science International Conference, Vorau/Austria, 1984.
27. K. Engelborghs, T. Luzyanina, and D. Roose. Numerical Bifurcation Analysis of Delay Dif-
ferential Equations Using DDE-BIFTOOL. ACM Transactions on Mathematical Software,
28(1):1–21, 2002.
28. K. Engelborghs, T. Luzyanina, and G. Samaey. DDE-BIFTOOL v. 2.00: A Matlab Package
for Bifurcation Analysis of Delay Differential Equations. Report TW, 330, 2001.
29. D. Israelson and A. Johnson. Theory of Circumnutations in Helianthus Annus. Physiology
Plant, 20:957–976, 1967.
30. A.S. Somolinos. Periodic Solutions of the Sunflower Equation: x + (a/r)x + (b/r) sin x(t −
r) = 0. Q. Appl. Math. (Quarterly of Applied Mathematics) 35:465–478, 1978.
31. L.P. Liu and T. Kalmár-Nagy. High Dimensional Harmonic Balance Analysis for Second-
Order Delay-Differential Equations. Journal of Vibration and Control, in press.
Chapter 8
Calculating Center Manifolds for Delay
Differential Equations Using Maple TM

Sue Ann Campbell

Abstract In this chapter, we demonstrate how a symbolic algebra package may


be used to compute the center manifold of a nonhyperbolic equilibrium point of a
delay differential equation. We focus on a specific equation, a model for high speed
drilling due to Stone and Askari, but the generalization to other equations should be
clear. We begin by reviewing the literature on center manifolds and delay differential
equations. After outlining the theoretical setting for calculating center manifolds, we
show how the computations may be implemented in the symbolic algebra package
MapleTM . We conclude by discussing extensions of the center manifold approach as
well as alternate approaches.

Keywords: Delay differential equation · Center manifold · Symbolic computation

8.1 Introduction

To begin, we briefly review some results and terminology from the theory of
ordinary differential equations (ODEs). Consider an autonomous ODE

x = f(x), (8.1)

which admits an equilibrium point, x∗ . The linearization of (8.1) about x∗ is given by

x = Ax, (8.2)

where A = Df(x∗ ). Recall that x∗ is called nonhyperbolic if at least one of the eigen-
values of A has zero real part. Given a complete set of generalized eigenvectors
for the eigenvalues of A with zero real part, one can construct a basis for the sub-
space solutions of (8.2) corresponding to these eigenvalues. This subspace is called
the center eigenspace of (8.2). Nonhyperbolic equilibrium points are important as
they often occur at bifurcation points of a differential equation. The center manifold
B. Balachandran et al. (eds.), Delay Differential Equations: Recent Advances 221
and New Directions, DOI 10.1007/978-0-387-85595-0 8,
c Springer Science+Business Media LLC 2009
222 S.A. Campbell

is a powerful tool for studying the behavior of solutions (and hence the nature of
the bifurcation) of (8.1) in a neighborhood of a nonhyperbolic equilibrium point.
It is a nonlinear manifold, which is tangent to the center eigenspace at x∗ . For a
more detailed review of the theory and construction of center manifolds for ODES
see [17, Sect. 3.2], [49, Sect. 2.1] or [33, Sect. 2.12].
In this eighth chapter, we study the center manifolds for nonhyperbolic equilib-
rium points of delay differential equations (DDEs). In general, one cannot find the
center manifold exactly, thus one must construct an approximation. Some authors
have performed the construction by hand, e.g., [11, 12, 14, 18–20, 23–25, 28, 30–32,
38,46,47,51,53]. However, the construction generally involves a lot of computation
and is most easily accomplished either numerically or with the aid of a symbolic
algebra package. Here we focus on the symbolic algebra approach, which has been
used by several authors, e.g., [3, 7, 34–36, 45, 48, 54–56]. Unfortunately, there is
rarely space in journal articles to give details of the implementation of such compu-
tations. Thus, the purpose here is to give these details, for a particular example DDE
and for the symbolic algebra package MapleTM , so that other authors may reproduce
them in other contexts.
In the following section, we will outline the theoretical setting for calculating
center manifolds similar to the discussion of the sixth chapter. In the second section,
we will show how the computations may be implemented in the symbolic algebra
package MapleTM by applying the theory to a model due to Stone and Askari [44].
In the final section, we will discuss extensions of this approach as well as alternate
approaches.

8.2 Theory

In this section, we briefly outline the theoretical setting for calculating center mani-
folds. More detail on the theory can be found in [2, 15, 22].
Consider the general delay differential equation

x (t) = g(x(t), x(t − τ1 ), . . . , x(t − τp ); μ ), (8.3)

where x ∈ IRn , g : IRn × IRn × . . . × IRn × IRk → IRn , p is a positive integer and
μ ∈ IRk and τj > 0, j = 1, . . . , p are parameters of the model. We shall assume that
g is as smooth as necessary for our subsequent computations (i.e., g ∈ Cr for r large
enough) and the equation admits an equilibrium solution x(t) = x∗ . In general x∗
may depend on μ , but not on the τj . Shifting the equilibrium to zero and separating
the linear and nonlinear terms gives
p
x (t) = A0 (μ ) x(t) + ∑ Aj (μ ) x(t − τj ) + f(x(t), x(t − τ1 ), . . . , x(t − τp ); μ ), (8.4)
j=1

where
Aj (μ ) = Dj+1 g(x∗ , . . . , x∗ ; μ ), (8.5)
8 Calculating Center Manifolds for DDEs Using MapleTM 223

and
f(x(t), x(t − τ1 ), . . . , x(t − τp ); μ ) = g(x(t), x(t − τ1 ), . . . , x(t − τp ); μ )
p (8.6)
−A0 (μ ) x(t) − ∑j=1 Aj (μ ) x(t − τj ).

Here Dj g means the Jacobian of g with respect to its jth argument.


Let τ = maxj τj . To pose an initial value problem at t = t0 for this DDE, one must
specify the value of x(t) not just at t = t0 , but on the whole interval [t0 − τ ,t0 ]. Thus
an appropriate initial condition is

x(t0 + θ ) = ζ0 (θ ), −τ ≤ θ ≤ 0, (8.7)

where ζ0 : [−τ , 0) → IRn is a given function. It can be shown (see e.g. [22, Sect.
2.2] that if ζ0 is continuous and f is Lipschitz there exists a unique solution to the
initial value problem (8.4)–(8.7), which is defined and continuous on a (maximal)
interval [t0 − τ , β ), β > 0. In the following, we will assume that ζ0 and f satisfy
these conditions.
To define an appropriate phase space for the solutions of the DDE, make the
following definition
def
xt (θ ) = x(t + θ ), −τ ≤ θ ≤ 0.
Note that the initial condition can now be expressed as xt0 = ζ0 and that xt will be a
continuous mapping from [−τ , 0] → IRn for each t ∈ [t0 , β ).
With this in mind, it is usual [22] to take the phase space for (8.4) to be the
def
Banach space C = C([−τ , 0], IRn ) of continuous mappings from [−τ , 0] into IRn ,
equipped with the norm
ζ τ = sup ζ (θ ) ,
θ ∈[−τ ,0]

where · is the usual Euclidean norm on IRn . We can then define the flow for the
DDE as a mapping on C, which takes the initial function ζ0 into the function xt .
The equation (8.4) for x(t) can be expressed as a functional differential equation
(FDE)
x (t) = L(xt ; μ ) + F(xt ; μ ), (8.8)
where L : C × IRk → IRn is a linear mapping defined by
p
L(φ ; μ ) = A0 (μ ) φ (0) + ∑ Aj (μ ) φ (−τj ), (8.9)
j=1

and F : C × IRk → IRn is a nonlinear functional defined by

F(φ ; μ ) = f(φ (0), φ (−τ1 ), . . . , φ (−τp ); μ ). (8.10)

As shown in, e.g., [11, 12, 51] one may extend (8.8) to a differential equation for
xt (θ ) as follows
224 S.A. Campbell

⎨ dθ (xt (θ )) −τ ≤ θ < 0
d
,
d
xt (θ ) = . (8.11)
dt ⎩
L(xt ; μ ) + F(xt ; μ ) , θ =0,

This equation will be important for the center manifold construction.

8.2.1 Linearization

Clearly (8.4) and (8.8) admit the trivial solution x(t) = 0, ∀t, which corresponds to
the equilibrium solution x(t) = x∗ of (8.3). The stability of this equilibrium solution
can be studied via the linearization of (8.8) about the trivial solution:

x (t) = L(xt ; μ ) (8.12)

or, in the DDE form,


p
x (t) = A0 (μ ) x(t) + ∑ Aj (μ ) x(t − τj ). (8.13)
j=1

Substituting the ansatz x(t) = eλ t v, v ∈ IRn into (8.13) yields the matrix vector
equation  
p
λ I − A0 (μ ) − ∑ Aj (μ )e−λ τj v = 0, (8.14)
j=1

which we will sometimes write in the compact form Δ(λ ; μ )v = 0. Requiring non-
trivial solutions (v = 0) yields the constraint det(Δ(λ ; μ )) = 0, i.e., that λ is a root
of the characteristic equation
 
p
det λ I − A0 (μ ) − ∑ Aj (μ )e−λ τj = 0. (8.15)
j=1

It can be shown [22, Corollary 7.6.1] that the trivial solution of (8.12) (or (8.13))
will be asymptotically stable (and hence the equilibrium solution of (8.33) will be
locally asymptotically stable) if all the roots of (8.15) negative real parts. We will
call these roots the eigenvalues of the equilibrium point.
Consider a point, μ = μc , in the parameter space where the characteristic
equation (8.15) has m roots with zero real parts and the rest of the roots have
negative real parts. The following results are shown in [22]. At such a point there
exists a decomposition of the solution space for the linear FDE (8.12) as C = N ⊕ S,
where N is an m-dimensional subspace spanned by the solutions to (8.12) corre-
sponding to the eigenvalues with zero real part, S is infinite dimensional and N and
S are invariant under the flow associated with (8.12). N and S are analogous to the
center and stable eigenspaces for ODEs.
8 Calculating Center Manifolds for DDEs Using MapleTM 225

For simplicity, we will assume that all the eigenvalues with zero real part have
multiplicity one. This includes the most common cases studied: single Hopf
bifurcation, double Hopf bifurcation (with nonidentical frequencies) and zero-
Hopf bifurcation. For a discussion of DDEs with a zero eigenvalue of multiplicity
two (Bogdanov Takens bifurcation) or three, see [6] or [36]. For a discussion of
DDEs where complex conjugate eigenvalues with higher multiplicity arise due to
symmetry, see [7, 18–20, 26, 31, 52–54]. For a general discussion of eigenspaces
associated with eigenvalues of higher multiplicity in DDEs see [22, Sect. 7.4].
Let {φ1 (t), φ2 (t), . . . , φm (t)} be a basis for N and {λ1 , λ2 , . . . , λm } the correspond-
ing eigenvalues. This basis can be constructed in a similar manner to that for ODEs.
Since Re(λk ) = 0 for each k, either λk = 0 or λk = iωk . In the latter case, it is
easy to check that −iωk is also a root, and we will order the eigenvalues so that
λk+1 = −iωk . With the restriction of simple eigenvalues, the construction of the
basis functions is straight forward. If λk = 0, then φk = vk where vk a solution of
Δ(0; μc )vk = 0. If λk = iω , then φk = Re(eiωk t vk ) and φk+1 = Im(eiωk t vk ), where vk
a solution of Δ (iωk ; μc )vk = 0. In the following, we will usually write the basis as
an n × m matrix, with the kth column given by φk , viz.:

Φ(t) = [ φ1 (t) | φ2 (t) | . . . | φm (t) ] . (8.16)

A simple calculation then shows that φ satisfies the following matrix ordinary dif-
ferential equation:
Φ = ΦB, (8.17)
where B is a block diagonal matrix, with block [ 0 ] for each zero eigenvalue and
block  
0 ωk
−ωk 0
for each pair of complex conjugate eigenvalues, ±iωk .
Note that the basis functions may also be treated as functions on C, by changing
their argument to θ ∈ [−τ , 0]. Now consider
p
L(eλk θ vk ; μc ) = A0 (μc )vk + ∑ Aj (μc )e−λk τj vk
j=1
= λk vk ,

which follows from (8.14). This implies that L(φk ; μc ) = 0 when λk = 0, and when
λk = iωk , L(φk ; μc ) = ωφk+1 and L(φk+1 ; μc ) = −ωφk . We then have the following
result:
L(Φ; μc ) = Φ(0)B. (8.18)
As for ODEs, the decomposition of the solution space may be accomplished via
the introduction of the adjoint equation for (8.12). However, a different equation,
which is closely related to the adjoint equation, may also be used to decompose the
solution space. It turns out that this latter equation is useful for the center manifold
construction, so we focus on it.
226 S.A. Campbell

Let Rn∗ be the n-dimensional row vectors and C ∗ = C([0, r], IRn∗ ). For ψ ∈ C ∗
and φ ∈ C, define the following bilinear form
n p  0
ψ , φ  = ∑ ψj (0)φj (0) + ∑ ψ (σ + τj ) Aj φ (σ )dσ . (8.19)
i=1 j=1 −τ

As shown in [22, Sect. 7.5], this can be used to define a system dual to (8.12)
given by
y (t) = LT (yt ; μ ), s ≤ 0, (8.20)
where ys = y(s + ξ ), 0 ≤ ξ ≤ τ and LT is a linear mapping on C ∗ × IRk given by
p
LT (ψ ; μ ) = −ψ (0) A0 (μ ) − ∑ ψ (τj ) Aj (μ ). (8.21)
j=1

Equation (8.20) is called the transposed system by [22]. In the literature, it is some-
times called the formal adjoint. The corresponding differential equation is
p
y (s) = −y(s) A0 (μ ) − ∑ y(s + τj ) Aj (μ ), s ≤ 0. (8.22)
j=1

Using the ansatz y(s) = we−λ s , w ∈ IRn∗ and proceeding as for (8.12), shows
that w must satisfy wΔ(λ ; μ ) = 0. Thus the characteristic equation of (8.22) is just
(8.15). It follows that the trivial solutions of (8.22) and (8.13) have the same eigen-
values.
Let ⎡ ⎤
ψ1 (s)
⎢ ⎥
Ψ(s) = ⎣ ... ⎦ ,
ψm (s)
be a basis for the solutions of (8.20) (or, equivalently, (8.22)) corresponding to the m
eigenvalues with zero real part (i.e. the “center eigenspace” of (8.22)). Note that the
ψj are row vectors and that they can be considered as functions on C ∗ if we change
their argument to ξ ∈ [0, τ ]. The fundamental result used in the center manifold con-
struction is that Ψ may be used to decompose the solution space. See [22, Sect. 7.5]
for details and proofs. In particular, for any ζ ∈ S,

ψj , ζ  = 0, j = 1, . . . , m.

Further, we can choose a basis so that Ψ, Φ = I, where Ψ, Φ is the m × m matrix
with i, j elements ψi , φj  and I is the m × m identity matrix. Thus for any ζ ∈ N we
have ζ = Φ u where u = Ψ, ζ  ∈ IRm . Finally, one can show that

Ψ = −BΨ and LT (Ψ; μc ) = −BΨ(0), (8.23)

where B is the same block diagonal matrix as in (8.17).


8 Calculating Center Manifolds for DDEs Using MapleTM 227

8.2.2 Nonlinear Equation

Now let us return to the nonlinear equation (8.8). For the rest of this section we
will assume that μ = μc and hence that the characteristic equation (8.15) has m
eigenvalues with zero real parts and all other eigenvalues have negative real parts.
In this situation [22, Chap. 10] has shown that there exists, in the solution space C
for the nonlinear FDE (8.8), an m dimensional center manifold. Since all the other
eigenvalues have negative real parts, this manifold is attracting and the long term
behavior of solutions to the nonlinear equation is well approximated by the flow
on this manifold. In particular, studying the flow on this manifold will enable us to
characterize the bifurcation, which occurs as a μ passes μc . Below, we outline the
steps involved in computing this manifold. The approach we take follows the work
of [21] and [51] (scalar case) and of [2] (vector case). Since all our computations
will be done for μ = μc , we not write the dependence on μ explicitly.
To begin, we note that points on the local center manifold of 0 can be expressed
as the sum of a linear part belonging to N and a nonlinear part belonging to S, i.e.,
c
Wloc (0) = {φ ∈ C | φ = Φu + h(u)} ,

where Φ(θ ), θ ∈ [−τ , 0] is the basis for N introduced above, u ∈ IRm , h(u) ∈ S
and u is sufficiently small. The solutions of (8.8) on this center manifold are then
given by x(t) = xt (0), where xt (θ ) is a solution of (8.11) satisfying

xt (θ ) = Φ(θ )u(t) + h(θ , u(t)) . (8.24)

To find the center manifold and the solutions on it, we proceed as follows. Sub-
stituting (8.24) into (8.11) yields

⎪ ∂h
  ⎪


Φ (θ )u(t) + , −τ ≤ θ < 0
∂h ∂ θ
Φ(θ ) + u̇(t) = (8.25)
∂u ⎪

⎩ L(Φ(θ ))u(t) + L(h(θ , u(t)))

+F[Φ(θ )u(t) + h(θ , u(t))] , θ =0.

Using (8.17) and (8.18) in (8.25) we obtain



⎪ ∂h
  ⎪
⎪ Φ(θ )Bu(t) + , −τ ≤ θ < 0,
∂h ⎨ ∂ θ
Φ(θ ) + u̇(t) = (8.26)
∂u ⎪

⎪ Φ(0)Bu(t) + L(h(θ , u(t)))

+F[Φ(θ )u(t) + h(θ , u(t))] , θ = 0.

This coupled system must be solved for u(t) and h(θ , u(t)).
To derive the equation for u(t) we will use the bilinear form (8.19). First, we note
some useful results. Since h(θ , u) ∈ S for any u,

Ψ(ξ ), h(θ , u(t)) = 0.


228 S.A. Campbell

It then follows from the definition of the partial derivative that

∂h
Ψ(ξ ), (θ , u(t)) = 0.
∂u
Finally, using (8.23) we have
p  0
∂h
Ψ(0)L(h(θ , u)) + ∑ Ψ(σ + τj ) Aj dσ
j=1 −τ ∂σ
p  0
= −LT (Ψ(ξ ))h(0, u(t)) − ∑ Ψ (σ + τj ) Aj h(σ , u(t)) dσ
j=1 −τ
p  0
= BΨ(0)h(0, u(t)) + ∑ BΨ(σ + τj ) Aj h(σ , u(t)) dσ
j=1 −τ
= BΨ(ξ ), h(θ , u)
= 0.

Applying the bilinear form to Ψ and (8.26) and using these results gives the
following system of ODEs for u(t):

u̇(t) = Bu(t) + Ψ(0)F[Φ(θ )u(t) + h(θ , u(t))] . (8.27)

Using (8.27) in (8.26) then yields the following system of partial differential equa-
tions for h(θ , u):

∂h  
Bu + Ψ(0)F[Φ(θ )u + h(θ , u)] + Φ(θ )Ψ(0)F[Φ(θ )u + h(θ , u)]
∂u

∂h
, −τ ≤ θ < 0 (8.28)
= ∂θ
L(h(θ , u)) + F[Φ(θ )u + h(θ , u)] , θ =0.

Thus, the evolution of solutions on the center manifold is determined by solv-


ing (8.28) for h(θ , u) and then (8.27) for u(t). To solve (8.28), one uses a stan-
dard approach in center manifold theory, namely, one assumes that h(θ , u) may be
expanded in power series in u:

h(θ , u) = h2 (θ , u) + h3 (θ , u) + · · · , (8.29)

where
⎡ ⎤
h111 (θ )u21 + · · · h11m (θ )u1 um + h122 (θ )u22 + · · · + h1mm (θ )u2m
⎢ .. ⎥
h2 (θ , u) = ⎣ . ⎦,
hn11 (θ )u21 + · · · hn1m (θ )u1 um + hn22 (θ )u22 + · · · + hnmm (θ )u2m

and similarly for h3 and the higher order terms.


8 Calculating Center Manifolds for DDEs Using MapleTM 229

Before proceeding to solve for h, we would like to note that to determine the
terms of (8.27) to O( u(t) l ), one only needs the terms which are O( u(t) l−1 ) in
the series for h. To see this, write F in series form

F = F2 + F3 + · · · (8.30)

and hence rewrite (8.27) as



u̇ = Bu + Ψ(0) F2 [Φ(θ )u + h2 (θ , u) + h3 (θ , u) + O( u 4 )]

+F3 [Φ(θ )u + h2 (θ , u) + h3 (θ , u) + O( u 4 )] + O( u 4 ) .

Expanding each Fj in a Taylor series about φ (θ )u yields

u̇ = Bu + Ψ(0) [F2 (Φ(θ )u) + DF2 (Φ(θ )u)h2 (θ , u) + F3 (Φ(θ )u)] + O( u 4 ).

Thus we see that h2 is only needed to calculate the third-order terms not the second-
order terms. A similar result holds for the higher order terms. Of particular note is
the fact that if the lowest order terms, we need in the center manifold are the same as
the lowest order terms in F, then there is no need to calculate h at all! This is the case
for a Hopf bifurcation when F2 = 0. Examples of this can be found in [27, 43, 51].
This is also the case when the normal form for a particular bifurcation is determined
at second order, such as for the Bogdanov-Takens bifurcation (see, e.g., [6]) or a
double Hopf bifurcation with 1:2 resonance [4].
Now let us return to solving (8.28). Substituting (8.29) and (8.30) into the first
part of (8.28) and expanding the Fj about Φ(θ )u yields

∂ h2 ∂ h2
+ O( u 3 ) = (θ , u)Bu + Φ(θ )Ψ(0)F2 (Φ(θ )u) + O( u 3 ). (8.31)
∂θ ∂u
Equating terms with like powers of u1 , . . . , um in this equation yields a system of
ODEs for the hijk (θ ). The system is linear and is easily solved to find the general
solutions for the hijk (θ ) in terms of arbitrary constants.
These arbitrary constants may be determined as follows. Substituting (8.29) and
(8.30) into the second part of (8.28) and expanding the Fj about φ (θ )u yields

∂ h2 
Bu + Φ(0)Ψ(0)F2 (Φ(θ )u) + O( u 3 )
∂ u θ =0 (8.32)
= L(h2 (θ , u)) + F2 (Φ(θ )u) + O( u 3 )

Equating terms with like powers of u1 , . . . , um in this equation yields a set of bound-
ary conditions for the arbitrary constants.
Once one has determined h2 one may proceed to the next order of approxima-
tion and calculate h3 . As discussed above, however, for most applications this is
unnecessary.
230 S.A. Campbell

8.3 Application

Now consider the model of [44]:

η + δ η + η − β (1 − μ (η − η (t − τ )))(p0 + p1 η + p2 η 2 ) = 0, (8.33)

which was developed to study the vibrations in drilling. This model is in dimension-
less form, the model in physical variables can be found in [44] or [45]. The variable
η corresponds to the amplitude of the vibrations and to derivative with respect to
time. The parameter 1/τ is proportional to the speed of rotation of the drill and β to
the width of cut. Since these two parameters can be varied in practice, [45] chose
these as the bifurcation parameters. The other parameters, μ , p0 , p1 , p2 , can be
related to other physical parameters [45].
Note that this equation has an equilibrium solution η (t) = β p0 , which corre-
sponds to the steady cutting solution. The drilling process may exhibit chatter,
which is a self excited oscillation of the drill. The emergence of chatter in the physi-
cal system corresponds to a Hopf bifurcation in the model (8.33). In [45] the critical-
ity of this bifurcation was studied using the center manifold construction described
in Sect. 1. We will reproduce the essence of the analysis here, including the relevant
commands in MapleTM 111 used to perform the computations symbolically. Com-
mands will be written in typewriter font and preceded by a >. Each command will
be followed by the output produced when it is executed. If a command ends with
a colon then no output is printed. More information on Maple can be found in the
manual [29].
Shifting the equilibrium to the origin and rewriting the equation as a first order
vector equation puts it in the form (8.4):

x (t) = A0 x(t) + A1 x(t − τ ) + f(x(t), x(t − τ )), (8.34)

where    
x(t) η (t) − β p0
x(t) = = , (8.35)
x (t) η (t)
   
0 1 0 0
A0 = , A1 = , (8.36)
−1 − β p0 μ β p1 − δ β p0 μ 0
and  
0
f= . (8.37)
β p2 x (t)2 − β μ (x(t) − x(t − τ ))(p1 x (t) + p2 x (t)2 )
Thus, in this example, the linear mapping of (8.8) is given by

L(φ (θ )) = A0 φ (0) + A1 φ (−τ ), (8.38)

and F : C → IR2 is a nonlinear functional defined by

F(φ (θ )) = f(φ (0), φ (−τ )). (8.39)


1 The commands used are backward compatible to at least MapleTM 9.5.
8 Calculating Center Manifolds for DDEs Using MapleTM 231

The characteristic matrix and equation for this example can be defined in Maple
as follows.
> A0:=matrix(2,2,[[0,1],[-1-beta*p0*mu,
beta*p1-delta]]);
 
0 1
A0 :=
−1 − β p0 μ β p1 − δ
> A1:=matrix(2,2,[[0,0],[beta*p0*mu,0]]);
 
0 0
A1 :=
β p0 μ 0
> ident:=evalm(array(1..2,1..2,identity));
 
1 0
ident :=
0 1
> Delta:=evalm(lambda*ident-A0-exp(-lambda*tau)*A1);
 
λ −1
Δ := −λ τ
1 + β p0μ − β p0μ e λ − β p1 + δ
> char_eq:=collect(det(Delta),lambda);

char eq := λ 2 + (−β p1 + δ )λ + 1 + β p0 μ − e(−λ τ ) β p0 μ


The work of [45] described curves, in the τ , β parameter space, along which
the equilibrium solution of (8.33) loses stability. At each point on these curves, the
characteristic (8.15) has a pair of pure imaginary roots and the rest of the roots have
negative real parts. Equations describing where the characteristic equation has pure
imaginary roots can be easily found in Maple:
> eq_im:=evalc(subs(lambda=I*omega,char_eq)):
eq_Re:=coeff(eq_im,I,0);
eq_Im:=coeff(eq_im,I,1);

eq Re := −ω 2 + 1 + β p0 μ − cos(ω τ ) β p0 μ
eq Im := −ω β p1 + ω δ + sin(ω τ ) β p0 μ

[45] solved these equations to find expressions for τ and β in terms of ω and the
other parameters. For fixed values of the other parameters, these expressions deter-
mined curves in the τ , β parameter space, which are parametrized by ω .
It is straightforward to check that the FDE (8.33) satisfies the conditions for a
Hopf bifurcation to occur as one passes through a point on these curves (see [22,
pp. 331–333] or [14, Sect. 8.2] for a statement of the Hopf bifurcation Theorem
232 S.A. Campbell

for FDE’s). To determine the criticality of this Hopf bifurcation, we compute the
center manifold of the equilibrium point at the Hopf bifurcation, following the steps
outlined in Sect. 8.2.
To begin, we calculate a basis for the “center eigenspace”, N. To do this we need
to find the eigenfunctions corresponding to the eigenvalues ±iω . It Maple this may
be done as follows. First, solve Δ(iω )v = 0, where Δ(iω ) is the characteristic matrix
with λ = iω .
> v:=matrix([[v1],[v2]]);
Dv:=subs(lambda=I*omega,evalm(multiply(Delta,v)));
v2res:=v2=solve(Dv[1,1], v2);
 
v1
v :=
v2
 
I ω v1 − v2
Dv :=  ω τ

1 + β p0 μ − e β p0 μ v1 + (I ω − β p1 + δ ) v2
I

v2res := v2 = I ω v1.
Then define the complex eigenfunction and take the real and imaginary parts.
> yy:=map(evalc,subs(v2res,v1=1,
evalm(exp(I*omega*theta)*v))):
Phi:=array(1..2,1..2,[[coeff(yy[1,1],I,0),
coeff(yy[1,1],I,1)],
[coeff(yy[2,1],I,0),coeff(yy[2,1],I,1)]]);
 
cos(ωθ ) sin(ωθ )
Φ := − sin(ωθ ) ω cos(ωθ ) ω
Similarly we define u and Φ u.
u:=matrix([[u1],[u2]]);
 
u1
u :=
u2
Phiu:=multiply(Phi,u);
 
cos (ω θ ) u1 + sin (ω θ ) u2
Phiu :=
− sin(ω θ ) ω u1 + cos(ω θ ) ω u2
Next define the matrix B.
> B:=matrix([[0,omega],[-omega,0]]);
 
0 ω
B := .
−ω 0
8 Calculating Center Manifolds for DDEs Using MapleTM 233

We need to define the basis, Ψ(ξ ), ξ ∈ [0, τ ], for the “center eigenspace” of the
transpose system. First, we calculate a general basis Ψg in the same way as we set
up the basis Φ.
> w:=array(1..2):
wD:=subs(lambda=I*omega,multiply(w,Delta)):
w1res:=w[1]=solve(wD[2],w[1]):
yy:=map(evalc,subs(w1res,w[2]=1,lambda=I*omega,
evalm(w*exp(-lambda*xi)))):
Psi_g:=array(1..2,1..2,[[coeff(yy[1],I,0),
coeff(yy[2],I,0)],
[coeff(yy[1],I,1),coeff(yy[2],I,1)]]);
 
cos(ω ξ )(−β p1 + δ ) + sin(ω ξ )ω cos(ω ξ )
Ψg :=
− sin(ω ξ )(−β p1 + δ ) + cos(ω ξ )ω − sin(ω ξ )
We now wish to find a basis Ψ such that Ψ, Φ = I. The elements of Ψ will be
linear combinations of those of Ψg , i.e., Ψ = KΨg , where K is a 2 × 2 matrix of
constants. Thus we have

I = Ψ, Φ
= KΨg , Φ
= KΨg , Φ,

Which implies that K = Ψg , Φ−1 .


For this example the bilinear form (8.19) becomes
2  0
ψ , φ  = ∑ ψj (0)φj (0) + −τ
ψ (σ + τ )A1 φ (σ ) dσ
j=1
 0
= ψ (0)φ (0) + β μ p0 ψ2 (σ + τ )φ1 (σ ) dσ .
−τ

We define this bilinear form as a procedure as follows:


> bilinear_form:=proc(rowv,colv)
local pstemp;
pstemp:=subs(xi=0,theta=0,innerprod(rowv,colv))
+int(subs(xi=sigma+tau,theta=sigma,
innerprod(rowv,A1,colv)),sigma=-tau .. 0);
RETURN(pstemp)
end:
Note that the command innerprod calculates the dot product when given two
vectors and the vector-matrix-vector product when given two vectors and a matrix.
234 S.A. Campbell

Next we apply the bilinear form to each row of Ψg and each column of Φ and
store the result in the matrix produit. We then invert produit and multiply the
result by Ψg . 2
> rowvec:=array(1..2): colvec:=array(1..2):
produit:=array(1..2):
> for I1 from 1 to 2 do
for I2 from 1 to 2 do
rowvec:=row(Psi_g,I1);
colvec:=col(Phi,I2);
produit[I1,I2]:=eval(bilinear_form
(rowvec,colvec));
od;
od;
> K:=inverse(produit):
> PPsi:=map(simplify,multiply(K,Psi_g)):
In fact, all we need for subsequent calculations is Ψ(0). To keep the expressions
from getting too large, we will define an empty matrix Ψ 0 to use as a place holder.
We will store the actual values of Ψ(0) in the list Psi0 vals.
Psi0:=matrix(2,2);
Psi0_res:=map(simplify,map(eval,
subs(xi=0,evalm(PPsi)))):
Psi0_vals:=[Psi0[1,1]=Psi0_res[1,1],
Psi0[1,2]=Psi0_res[1,2],
Psi0[2,1]=Psi0_res[2,1],
Psi0[2,2]=Psi0_res[2,2]]:

Ψ 0 = array(1 .. 2, 1 .. 2, []).
Now, to determine the criticality of the Hopf bifurcation, one need only find the
terms up to and including those which are O( u(t) 3 ) in (8.27). Thus, as discussed
in the previous section, we only need the quadratic terms in the series for h. We thus
define
h:=matrix([[h1_11(theta)*u1ˆ2+h1_12(theta)*u1*u2
+h1_22(theta)*u2ˆ2],
[h2_11(theta)*u1ˆ2+h2_12(theta)*u1*u2+h2_22
(theta)*u2ˆ2]]);
 
h1 11(θ ) u12 + h1 12(θ ) u1 u2 + h1 22(θ ) u22
h := (8.40)
h2 11(θ ) u12 + h2 12(θ ) u1 u2 + h2 22(θ ) u22

2 Note that Psi is a reserved word, so we use PPsi instead.


8 Calculating Center Manifolds for DDEs Using MapleTM 235

We define the linear and nonlinear parts of the DE as follows


> x:=matrix([[x1],[x2]]);
xt:=matrix([[x1t],[x2t]]);
lin:=evalm(multiply(A0,x)+multiply(A1,xt));
f:= beta*p2*x2ˆ2-beta*mu*p1*x1*x2+beta*mu*p1*x1t*x2
-beta*mu*p2*x1*x2ˆ2+beta*mu*p2*x1t*x2ˆ2:
nlin:=matrix([[0],[f]]);
 
x1
x :=
x2
 
x1t
xt :=
x2t
 
x2
lin :=
(−1 − β μ p0) x1 + (β p1 − δ ) x2 + β μ p0 x1t
 
0
nlin := .
β p2 x22 − β μ p1 x1 x2 + β μ p1 x1t x2 − β μ p2 x1 x22 + β μ p2 x1t x22

Then we define the expressions, in terms of the coordinates u, for points on the
centre eigenspace, x ce, and on the centre manifold, x cm.
> Phiu0:=map(eval,subs(theta=0,evalm(Phiu))):
Phiut:=map(eval,subs(theta=-tau,evalm(Phiu))):
x_ce:=[x1=Phiu0[1,1],x2=Phiu0[2,1],x1t=Phiut[1,1],
x2t=Phiut[2,1]];
Phiuh0:=map(eval,subs(theta=0,evalm(Phiu+h))):
Phiuht:=map(eval,subs(theta=-tau,evalm(Phiu+h))):
x_cm:=[x1=Phiuh0[1,1],x2=Phiuh0[2,1],
x1t=Phiuht[1,1],x2t=Phiuht[2,1]];

x ce := [x1 = u1, x2 = ω u2, x1t = cos(ω τ ) u1 − sin(ω τ ) u2,


x2t = sin(ω τ ) ω u1 + cos(ω τ ) ω u2]
x cm := [x1 = u1 + h1 11(0) u12 + h1 12(0) u1 u2 + h1 22(0) u22 ,
x2 = ω u2 + h2 11(0) u12 + h2 12(0) u1 u2 + h2 22(0) u22 ,
x1t = cos(ω τ ) u1 − sin(ω τ ) u2 + h1 11(−τ ) u12
+h1 12(−τ ) u1 u2 + h1 22(−τ ) u22 ,
x2t = sin(ω τ ) ω u1 + cos(ω τ ) ω u2 + h2 11(−τ ) u12
+h2 12(−τ ) u1 u2 + h2 22(−τ ) u22 ]
We can now define differential equations for the hijk . First define the left hand
side of (8.31).
> delhs:=map(diff,h,theta);
236 S.A. Campbell
⎡      ⎤
d d d
h1 11(θ ) u1 +
2
h1 12(θ ) u1 u2 + h1 22(θ ) u22
⎢ dθ dθ dθ ⎥
delhs := ⎢
⎣ d      ⎥

d d
h2 11(θ ) u1 +
2
h2 12(θ ) u1 u2 + h2 22(θ ) u22
dθ dθ dθ
Now define the right-hand side of (8.31).
> dhdu:=matrix([[diff(h[1,1],u1),diff(h[1,1],u2)],
[diff(h[2,1],u1),diff(h[2,1],u2)]]);

 
2 h1 11(θ ) u1 + h1 12(θ ) u2 h1 12(θ ) u1 + 2 h1 22(θ ) u2
dhdu :=
2 h2 11(θ ) u1 + h2 12(θ ) u2 h2 12(θ ) u1 + 2 h2 22(θ ) u2
> derhs:=map(collect,map(expand,evalm
multiply(dhdu,multiply(B,u))+
multiply(Phi,multiply(Psi0,
[0,subs(x_ce,f)])))),
[u2,u2],distributed,factor):
The expression for derhs is quite long, so we do not display it. Now we put
together the right-hand side and left-hand side. The coefficient of each distinct
monomial, u1k u2j , j + k = 2, determines one differential equation. We display two
of them as examples.
> hdes:=delhs-derhs:
de1:=coeff(coeff(hdes[1,1],u1ˆ2),u2,0);
de2:=coeff(coeff(hdes[1,1],u1),u2);
de3:=coeff(coeff(hdes[1,1],u2ˆ2),u1,0):
de4:=coeff(coeff(hdes[2,1],u1ˆ2),u2,0):
de5:=coeff(coeff(hdes[2,1],u1),u2):
de6:=coeff(coeff(hdes[2,1],u2ˆ2),u1,0):
 
d
de1 := h1 11(θ ) + ω h1 12(θ )

 
d
de2 := h1 12(θ ) − 2ω (h1 11(θ ) − h1 22(θ )) + ω β μ p1(1 − cos(ωτ ))

(cos(ωθ ) Ψ1,2 (0) + sin(ωθ ) Ψ2,2 (0))

Now define the list of differential equations and functions to solve for.

> des:={de1,de2,de3,de4,de5,de6}:
fns:={coeff(h[1,1],u1ˆ2),coeff(coeff(h[1,1],u1),u2),
coeff(h[1,1],u2ˆ2), coeff(h[2,1],u1ˆ2),
coeff(coeff(h[2,1],u1),u2),coeff(h[2,1],u2ˆ2)};
8 Calculating Center Manifolds for DDEs Using MapleTM 237

f ns := {h1 11(θ ), h1 12(θ ), h1 22(θ ), h2 11(θ ), h2 12(θ ), h2 22(θ )}


The differential equations are linear and are easily solved to find the general solu-
tions for the hijk (θ ) in terms of six arbitrary constants using the command dsolve.
For convenience, we rename the arbitrary constants.
> temp:=dsolve(des,fns):
changeC:=[_C1=C1,_C2=C2,_C3=C3,_C4=C4,
_C5=C5,_C6=C6];
hsoln:=simplify(expand(evalc(subs(changeC,
value(temp))))):
The solutions are quite long, so we show only one example.
> collect(hsoln[6],[Psi0[1,1],Psi0[1,2],Psi0[2,1],
Psi0[2,2],p1,p2],factor);

1
h1 22(θ ) = − β μ (cos(ωθ ) + sin(ωθ ) sin(ωτ ) − cos(ωθ ) cos(ωτ )) p1
3
 
1 1
+ β sin(ωθ ) ω p2 Ψ01,2 + β μ (cos(ωθ ) sin(ωτ )
3 3

1
− sin(ωθ ) + sin(ωθ ) cos(ωτ )) p1 − β cos(ωθ ) ω p2 Ψ02,2
3
1
−C6 cos(ωθ )2 +C2 + C6 +C5 sin(ωθ ) cos(ωθ )
2

Recall that the values for Ψ0i,j are stored in the list Psi0 vals. Later we will need
the values of hijk (0) and hijk (−τ ), so we store them in the sets hsoln0 and hsolnt.
> hsoln0:=simplify(eval(subs(theta=0,hsoln))):
hsolnt:=simplify(eval(subs(theta=-tau,hsoln))):
We now set up the boundary conditions to solve for the arbitrary constants,
C1,C2, . . .. Note that the left-hand side of (8.32) is just the right-hand side of
(8.31) with θ = 0.
> bclhs:=map(eval,subs(simpres,theta=0,evalm(derhs))):
bcrhs:=map(collect,evalm(subs(x_cm,evalm(lin))+
subs(x_ce,evalm(nonlin))),[u1,u2]);
Now we put together the right-hand side and left-hand side. The coefficient of each
distinct monomial, u1k u2j , j + k = 2, determines one boundary condition.
> consts:=[C1,C2,C3,C4,C5,C6];
bceq:=subs(hsoln0,hsolnt,evalm(bclhs-bcrhs)):
bc1:=collect(coeff(coeff(bceq[1,1],u1,2),u2,0),
consts);
238 S.A. Campbell

bc2:=collect(coeff(coeff(bceq[1,1],u1,1),u2,1),
consts):
bc3:=collect(coeff(coeff(bceq[1,1],u1,0),u2,2),
consts):
bc4:=collect(coeff(coeff(bceq[2,1],u1,2),u2,0),
consts);
bc5:=collect(coeff(coeff(bceq[2,1],u1,1),u2,1),
consts):
bc6:=collect(coeff(coeff(bceq[2,1],u1,0),u2,2),
consts):
Form the list of boundary conditions and solve using solve.
> bcs:={bc1,bc2,bc3,bc4,bc5,bc6}:
consts:=convert(const,set);
Csoln:=map(simplify,solve(bcs,consts)):
The solutions are quite long, so we show only one example.
> collect(Csoln[2],[Psi0[1,1],Psi0[1,2],Psi0[2,1],
Psi0[2,2],p1,p2],factor);
 
1 1 2 2 1
C2 := − β 2 ω μ sin(ω τ )p12 + β ω p2 + μ (δ ω sin(ω τ ) + β p0 μ
2 2 2
 
1
−β p0 μ cos(ω τ )2 ) β p1 − ω (β p0 μ sin (ω τ ) + δ ω ) β p2 Ψ 01,2
2

1  
+ μ sin(ω τ ) −1 − β p0 μ + β p0 μ cos(ω τ ) + ω 2 β p1
2

1  
− ω −1 − β p0 μ + β p0 μ cos(ω τ ) + ω 2 β p2 Ψ 02,2
2
1 1
− β p1 μ sin(ω τ ) ω + β ω 2 p2
2 2
The final step is to use the expressions for Ψ(0), Φ, and h to calculate the nonlin-
ear terms of (8.27).
> fu:=collect(expand(subs(x_cm,f)),[u1,u2],
distributed,factor);
nonlinu:=matrix([[0],[fu]]):
ODE_nonlin:=multiply(Psi0,nonlinu):
Note that we have used the fact that the first component of the nonlinearity in our
example (8.34) is 0.
Recalling our expression for the matrix B, we can see that for our example, the
general equation on the centre manifold (8.27) becomes (to O( u 3 ))
8 Calculating Center Manifolds for DDEs Using MapleTM 239

u̇1 = ω u2 + f11
1 u2 + f 1 u u + f 1 u2 + f 1 u3 + f 1 u2 u
1 12 1 2 22 2 111 1 112 1 2
1 u u2 + f 1 u3 ,
+ f122 1 2 222 2 (8.41)
u̇2 = −ω u1 + f111 u2 + f 1 u u + f 1 u2 + f 1 u3 + f 1 u2 u
1 12 1 2 22 2 111 1 112 1 2
1 u u2 + f 1 u3 .
+ f122 1 2 222 2

i are functions of the parameters β , τ , δ , θ , p , p , p , the Hopf fre-


The fjki and fjkl 0 1 2
quency ω , and the center manifold coefficients hijk (0) and hijk (−τ ). As should be
expected, (8.41) is an ODE at a Hopf bifurcation. The criticality of this bifurcation
(and hence of the Hopf bifurcation in the original system of DDE’s) may be deter-
mined by applying standard approaches. For example, one can show that the critical-
ity of the Hopf bifurcation of (8.41) is determined by the sign of the quantity [17, p.
152]
 1 
a = 18 3 f111 + f1221 + f2 +3f2
112 222  (8.42)
− 81ω f12 1 ( f 1 + f 1 ) − f 2 ( f 2 +2 ) − 2 f 1 f 2 + 2 f 1 f 2 .
11 22 12 11 22 11 11 22 22

To evaluate this expression, we first find the coefficients of the quadratic terms.
> quad:=array(1..2,1..3):
> quad[1,1]:=coeff(coeff(ODE_nonlin[1,1],u1,2),u2,0);
> quad[1,2]:=coeff(coeff(ODE_nonlin[1,1],u1,1),u2,1);
> quad[1,3]:=coeff(coeff(ODE_nonlin[1,1],u1,0),u2,2);
> quad[2,1]:=coeff(coeff(ODE_nonlin[2,1],u1,2),u2,0);
> quad[2,2]:=coeff(coeff(ODE_nonlin[2,1],u1,1),u2,1);
> quad[2,3]:=coeff(coeff(ODE_nonlin[2,1],u1,0),u2,2);

quad1,1 := 0
quad1,2 := Ψ 01,2 ω β μ p1(cos(ωτ ) − 1)
quad1,3 := Ψ 01,2 β ω (ω p2 − μ p1 sin(ωτ ))
quad2,1 := 0
quad1,2 := Ψ 02,2 ω β μ p1(cos(ωτ ) − 1)
quad1,3 := Ψ 02,2 β ω (ω p2 − μ p1 sin(ωτ ))

The necessary cubic coefficients are found in a similar way.


>cub:=array(1..2,1..4):
>cub[1,1]:=coeff(coeff(ODE_nonlin[1,1],u1,3),u2,0);
>cub[1,3]:=coeff(coeff(ODE_nonlin[1,1],u1,1),u2,2):
>cub[2,2]:=coeff(coeff(ODE_nonlin[2,1],u1,2),u2,1):
>cub[2,4]:=coeff(coeff(ODE_nonlin[2,1],u1,0),u2,3);

cub1,1 := Ψ 01,2 β μ p1 h2 11(0)(cos(ωτ ) − 1)



cub1,3 := Ψ 01,2 β μ p2 cos(ω τ )ω 2 − μ p2 ω 2 + p1 μ h1 12(−τ ) ω − p1 μ h2 22(0)
+2 p2 h2 12(0) ω − p1 μ h1 12(0) ω − p1 μ sin(ω τ ) h2 12(0)
+p1 μ cos(ω τ ) h2 22(0))
240 S.A. Campbell

cub2,2 := −Ψ 02,2 β (p1 μ h2 12(0) − p1 μ h1 11(−τ ) ω p1 μ sin(ω τ ) h2 11(0)


−p1 μ cos(ω τ ) h2 12(0) + p1 μ h1 11(0) ω − 2 p2 h2 11(0) ω )

cub2,4 := −Ψ 02,2 β μ p2 sin(ω τ ) ω 2 + p1 μ sin(ω τ ) h2 22(0) + p1μ h1 22(0) ω
−p1 μ h1 22(−τ ) ω − 2 p2 h2 22(0) ω )

Note that only the cubic terms depend on the hijk , as expected. The quantity a is
evaluated using the formula of (8.42)
a:=collect(simplify(1/8*(3*cub[1,1]+cub[1,3]+cub[2,2]
+3*cub[2,4])-1/(8*omega)*(quad[1,2]*(quad[1,1]
+quad[1,3])-quad[2,2]*(quad[2,1]+quad[2,3])
-2*quad[1,1]*quad[2,1]+2*quad[1,3]*quad[2,3]))),
[Psi0[1,2],Psi0[2,2]],distributed,factor);
1 2 1
a := β ω p1 μ (cos (ω τ ) − 1) (−ω p2 + μ p1 sin (ω τ )) Ψ 021,2 + β 2 ω
 64  32
2
−ω p2 − μ p1 + 2 ω p2 μ p1 sin (ω τ ) + μ p1 (cos (ω τ )) Ψ 01,2Ψ 02,2
2 2 2 2 2 2

1 
+ β p1 μ cos (ω τ ) h2 22(0) − 3 μ p1 h2 11(0)
8
+3 μ p1 h2 11(0) cos (ω τ ) + 2 p2 h2 12(0)ω − p1 μ h2 22(0)
+p1 μ h1 12(−τ )ω − p1 μ sin (ω τ ) h2 12(0) − μ p2 ω 2 − p1 μ h1 12(0)ω
 1 
+μ p2 cos (ω τ ) ω 2 Ψ01,2 − β 2 ω p1 μ (cos (ω τ ) − 1) − ω p2
64
 1 
+μ p1 sin (ω τ ) Ψ 022,2 − β − 2 p2 h2 11(0)ω + p1 μ h2 12(0)
8
−6 p2 h2 22(0)ω + p1 μ h1 11(0)ω + p1 μ sin (ω τ ) h2 11(0)
−p1 μ cos (ω τ ) h2 12(0) − p1 μ h1 11(−τ )ω − 3 p1 μ h1 22(−τ )ω

+3 μ p2 sin (ω τ ) ω 2 + 3 p1 μ h1 22(0)ω + 3 p1 μ sin (ω τ ) h2 22(0) Ψ 02,2

To get the final expression for a we need to substitute in the actual values for Ψ 0,
hijk (0) and hijk (τ ). The expression is very large, so we do not print it out.
afinal:=subs(Psi0_vals,simplify(subs(Csoln,
simplify(subs(hsoln0,hsolnt,a))))):

8.4 Discussion

In this chapter, we have shown how the symbolic algebra package MapleTM can be
used to calculate the center manifold for a delay differential equation at a Hopf bifur-
cation. The commands involved are fairly simple, and thus it should be fairly easily
to adapt them to other computer algebra systems.
8 Calculating Center Manifolds for DDEs Using MapleTM 241

The emphasis of this chapter was on a system at a Hopf bifurcation. The imple-
mentation for other bifurcations is similar, and just requires the modifying the fol-
lowing parts:
1. The calculation of the basis functions for the center eigenspace for the original
and transpose systems (Φ, Ψ).
2. The calculation of the quantity that determines the criticality. This depends on the
normal form for the bifurcation involved.
Also, as discussed in Sect. 8.2, for some systems and some bifurcations, it may not
be necessary to compute the nonlinear terms of the center manifold.
This chapter has focused on systems at a bifurcation. This means that our pre-
dictions of the stability of the bifurcating limit cycle will only be valid in some
neighborhood of the bifurcation point. To get predictions which are valid in a larger
region, one can use the approach of parameter-dependent center manifolds. For a
general outline of this approach and some specific examples for ordinary differential
equations see [17, p. 134] or [49, p. 198]. For the application of the approach to
DDEs with a Hopf bifurcation see [12] or [35]. For the application of this approach
to DDEs with a Bogdanov-Takens singularity see [11] or [36]. For the application of
this approach to delay differential equations with a Fold-Hopf singularity see [34].
Note that the papers of Qesmi et al. [34–36] have some discussion of the implemen-
tation of their algorithms in Maple.
There are other approaches for studying the dynamics of a delay differential
equation near a nonhyperbolic equilibrium point. Perturbation techniques (multi-
ple scales, Poincaré-Lindstedt, averaging) have been used to study Hopf bifurca-
tion [5, 8, 9, 16, 30, 37, 39, 50]. Often the computations for such methods are as
extensive as those for the center manifold; however, the mathematical theory is more
approachable. The Liapunov-Schmidt reduction has also been implemented for delay
differential equations, both numerically [1, 40–42] and using symbolic algebra [13].
However, this method only determines existence of the bifurcating solutions. Some
other method must be used to determine stability.
Finally, while this chapter has focused on using center manifolds to study non-
hyperbolic equilibrium points of autonomous delay differential equations (DDEs),
other applications are possible. In particular, in [10,38,46] center manifolds are used
to study DDEs with time periodic coefficients.

Acknowledgments Maple is a trademark of Waterloo Maple Inc. This work was supported by
a grant from the Natural Sciences and Engineering Research Council of Canada. I acknowl-
edge the contributions of my collaborators on the center manifold computations. In particular,
Jacques Bélair introduced me to delay differential equations and wrote the first version of the
implementation of the center manifold calculations in Maple. Emily Stone introduced me to the
drilling application studied in this chapter, which motivated further development of center manifold
Maple code.

References

1. Aboud N., Sathaye A., Stech H.W. (1988) BIFDE: software for the investigation of the Hopf
bifurcation problem in functional differential equations. In: Proceedings of the 27th IEEE
Conference on Decision and Control. Vol. 1. pp. 821–824
2. Ait Babram M., Arino O., Hbid M.L. (1997) Approximation scheme of a center manifold
for functional differential equations. Journal of Mathematical Analysis and Applications 213,
554–572
3. Bélair J., Campbell S.A. (1994) Stability and bifurcations of equilibria in a multiple-delayed
differential equation. SIAM Journal on Applied Mathematics 54(5), 1402–1424
4. Campbell S.A., LeBlanc V.G. (1998) Resonant Hopf-Hopf interactions in delay differential
equations. Journal of Dynamics and Differential Equations 10, 327–346
5. Campbell S.A., Ncube I., Wu J. (2006) Multistability and stable asynchronous periodic oscil-
lations in a multiple-delayed neural system. Physica D 214(2), 101–119
6. Campbell S.A., Yuan Y. (2008) Zero singularities of codimension two and three in delay
differential equations. Nonlinearity 21, 2671–2691
7. Campbell S.A., Yuan Y., Bungay S.D. (2005) Equivariant Hopf bifurcation in a ring of iden-
tical cells with delayed coupling. Nonlinearity 18, 2827–2846
8. Chow S.-N., Mallet-Paret J. (1977) Integral averaging and Hopf bifurcation. Journal of Dif-
ferential Equations 26, 112–159
9. Das S.L., Chatterjee A. (2002) Multiple scales without center manifold reductions for delay
differential equations near Hopf bifurcations. Nonlinear Dynamics 30(4), 323–335
10. Deshmukh V., Butcher E.A., Bueler E. (2008) Dimensional reduction of nonlinear delay dif-
ferential equations with periodic coefficients using Chebyshev spectral collocation. Nonlinear
Dynamics 52, 137–149
11. Faria T., Magalhães L. (1995a) Normal forms for retarded functional differential equations
with parameters and applications to Bogdanov-Takens singularity. Journal of Differential
Equations 122, 201–224
12. Faria T., Magalhães L. (1995b) Normal forms for retarded functional differential equa-
tions with parameters and applications to Hopf bifurcation. Journal of Differential Equations
122, 181–200
13. Franke J.M., Stech H.W. (1991) Extensions of an algorithm for the analysis of nongeneric
Hopf bifurcations, with applications to delay-difference equations. In: Busenberg S, Martelli
M (eds), Delay Differential Equations and Dynamical Systems. Vol. 1475 of Springer Lecture
Notes in Mathematics. Springer-Verlag, Berlin, pp. 161–175
14. Gilsinn D.E. (2002) Estimating critical Hopf bifurcation parameters for a second order
delay differential equation with application to machine tool chatter. Nonlinear Dynamics
30, 103–154
15. Gilsinn D.E. (2008) Bifurcations, center manifolds, and periodic solutions. In: Balachandran
B, Gilsinn DE, Kalmár-Nagy T (eds), Delay Differential Equations: Recent Advances and
New Directions. Springer Verlag, New York, pp. 157–204
16. Gopalsamy K., Leung I. (1996) Delay induced periodicity in a neural netlet of excitation and
inhibition. Physica D 89, 395–426
17. Guckenheimer J., Holmes P.J. (1983) Nonlinear Oscillations, Dynamical Systems and Bifur-
cations of Vector Fields. Springer-Verlag, New York
18. Guo S. (2005) Spatio-temporal patterns of nonlinear oscillations in an excitatory ring network
with delay. Nonlinearity 18, 2391–2407
19. Guo S., Huang L. (2003) Hopf bifurcating periodic orbits in a ring of neurons with delays.
Physica D 183, 19–44
20. Guo S., Huang L., Wang L. (2004) Linear stability and Hopf bifurcation in a two neuron
network with three delays. International Journal of Bifurcation and Chaos 14, 2799–2810
21. Hale J.K. (1985) Flows on center manifolds for scalar functional differential equations. Pro-
ceedings of the Royal Society of Edinburgh 101A, 193–201

22. Hale J.K., Verduyn Lunel S.M. (1993) Introduction to Functional Differential Equations.
Springer Verlag, New York
23. Jiang M., Shen Y., Jian J., Liao X. (2006) Stability, bifurcation and a new chaos in the logistic
differential equation with delay. Physics Letters A 350(3–4), 221–227
24. Kalmár-Nagy T., Pratt J.R., Davies M.A., Kennedy M.D. (1999) Experimental and analyti-
cal investigation of the subcritical instability in turning. In: Proceedings of the 1999 ASME
Design Engineering Technical Conferences, 17th ASME Biennial Conference on Mechanical
Vibration and Noise. DECT99/VIB-8060
25. Kalmár-Nagy T., Stépán G., Moon F.C. (2001) Subcritical Hopf bifurcation in the delay equa-
tion model for machine tool vibrations. Nonlinear Dynamics 26, 121–142
26. Krawcewicz W., Wu J. (1999) Theory and applications of Hopf bifurcations in symmet-
ric functional-differential equations. Nonlinear Analysis 35 (7, Series A: Theory Methods),
845–870
27. Landry M., Campbell S.A., Morris K.A., Aguilar C. (2005) Dynamics of an inverted pen-
dulum with delayed feedback control. SIAM Journal on Applied Dynamical Systems 4 (2),
333–351
28. Liu Z., Yuan R. (2005) Stability and bifurcation in a harmonic oscillator with delays. Chaos,
Solitons and Fractals 23, 551–562
29. Maple 9.5 Getting Started Guide (2004) Maplesoft, a division of Waterloo Maple Inc.,
Toronto, Canada
30. Nayfeh A.H. (2008) Order reduction of retarded nonlinear systems – the method of multiple
scales versus center-manifold reduction. Nonlinear Dynamics 51, 483–500
31. Orosz G., Stépán G. (2004) Hopf bifurcation calculations in delayed systems with transla-
tional symmetry. Journal of Nonlinear Science 14(6), 505–528
32. Orosz G., Stépán G. (2006) Subcritical Hopf bifurcations in a car-following model with
reaction-time delay. Proceedings of the Royal Society of London, series A 462(2073),
2643–2670
33. Perko L. (1996) Differential Equations and Dynamical Systems. Springer-Verlag, New York
34. Qesmi R., Ait Babram M., Hbid M.L. (2006a) Center manifolds and normal forms for a
class of retarded functional differential equations with parameter associated with Fold-Hopf
singularity. Applied Mathematics and Computation 181(1), 220–246
35. Qesmi R., Ait Babram M., Hbid M.L. (2006b) Computation of terms of center manifolds and
normal elements of bifurcations for a class of functional differential equations associated with
Hopf singularity. Applied Mathematics and Computation 175(2), 932–968
36. Qesmi R., Ait Babram M., Hbid M.L. (2007) Symbolic computation for center manifolds and
normal forms of Bogdanov bifurcation in retarded functional differential equations. Nonlinear
Analysis 66, 2833–2851
37. Rand R., Verdugo A. (2007) Hopf bifurcation formula for first order differential-delay equa-
tions. Communications in Nonlinear Science and Numerical Simulation 12(6), 859–864
38. Sri Namachchivaya N., van Roessel H.J. (2003) A centre-manifold analysis of variable speed
machining. Dynamical Systems 18(3), 245–270
39. Stech H.W. (1979) The Hopf bifurcation: a stability result and application. Journal of Mathe-
matical Analysis and Applications 71, 525–546
40. Stech H.W. (1985a) Hopf bifurcation analysis in a class of scalar functional differential equa-
tions. In: Lighthourne J, Rankin S (eds), Physical mathematics and nonlinear partial differen-
tial equations. Marcel Dekker, New York, pp. 175–186
41. Stech H.W. (1985b) Hopf bifurcation calculations for functional differential equations. Jour-
nal of Mathematical Analysis and Applications 109, 472–491
42. Stech H.W. (1985c) Nongeneric Hopf bifurcations in functional differential equations. SIAM
Journal on Mathematical Analysis 16, 1134–1151
43. Stépán G., Haller G. (1995) Quasiperiodic oscillations in robot dynamics. Nonlinear Dynam-
ics 8, 513–528
44. Stone E., Askari A. (2002) Nonlinear models of chatter in drilling processes. Dynamical
Systems 17(1), 65–85

45. Stone E., Campbell S.A. (2004) Stability and bifurcation analysis of a nonlinear DDE model
for drilling. Journal of Nonlinear Science 14(1), 27–57
46. Szalai R., Stépán G. (2005) Period doubling bifurcation and center manifold reduction in a
time-periodic and time-delayed model of machining, preprint
47. Verdugo A., Rand R. (2008) Center manifold analysis of a DDE model of gene expression.
Communications in Nonlinear Science and Numerical Simulation 13(6), 1112–1120
48. Wei J.J., Yuan Y. (2005) Synchronized Hopf bifurcation analysis in a neural network model
with delays. Journal of Mathematical Analysis and Applications 312(1), 205–229
49. Wiggins S. (1990) Introduction to Applied Nonlinear Dynamic Systems and Chaos. Springer
Verlag, New York
50. Wirkus S., Rand R. (2004) The dynamics of two coupled van der Pol oscillators with delay
coupling. Nonlinear Dynamics 30(3), 205–221
51. Wischert W., Wunderlin A., Pelster A., Olivier M., Groslambert J. (1994) Delay-induced insta-
bilities in nonlinear feedback systems. Physical Review E 49(1), 203–219
52. Wu J. (1998) Symmetric functional-differential equations and neural networks with memory.
Transactions of the American Mathematical Society 350(12), 4799–4838
53. Wu J., Faria T., Huang Y.S. (1999) Synchronization and stable phase-locking in a network of
neurons with memory. Mathematical and Computer Modelling 30(1–2), 117–138
54. Yuan Y., Campbell S.A. (2004) Stability and synchronization of a ring of identical cells with
delayed coupling. Journal of Dynamics and Differential Equations 16(1), 709–744
55. Yuan Y., Wei J.J. (2005) Multiple bifurcation analysis in a neural network model with delays.
International Journal of Bifurcation and Chaos 16(10), 2903–2913
56. Yuan Y., Yu P., Librescu L., Marzocca P. (2004) Aeroelasticity of time-delayed feedback
control of two-dimensional supersonic lifting surfaces. Journal of Guidance, Control, and
Dynamics 27(5), 795–803
Chapter 9
Numerical Solution of Delay Differential
Equations

Larry F. Shampine and Sylvester Thompson

Abstract After some introductory examples, in this chapter, some of the ways in
which delay differential equations (DDEs) differ from ordinary differential equa-
tions (ODEs) are considered. Then, numerical methods for DDEs are discussed, and
in particular, how the Runge–Kutta methods that are so popular for ODEs can be
extended to DDEs. The treatment of these topics is complete, but it is necessarily
brief, so it would be helpful to have some background in the theory of ODEs and
their numerical solution. The chapter goes on to consider software issues special
to the numerical solution of DDEs and concludes with some substantial numerical
examples. Both topics are discussed in concrete terms using the programming lan-
guages MATLAB and Fortran 90/95, so a familiarity with one or both languages
would be helpful.

Keywords: DDEs · Propagated discontinuities · Vanishing delays · Numerical methods · Runge–Kutta · Continuous extension · Event location · MATLAB · Fortran 90/95

9.1 Introduction

Ordinary differential equations (ODEs) have been used to model physical phenom-
ena since the concept of differentiation was first developed, and nowadays compli-
cated ODE models can be solved numerically with a high degree of confidence. It
was recognized early that phenomena may have a delayed effect in a differential
equation, leading to what is called a delay differential equation (DDE). For instance,
fishermen along the west coast of South America have long observed a sporadic and
abrupt warming of the cold waters that support the food chain. The recent investi-
gation [10] of this El-Niño/Southern Oscillation (ENSO) phenomenon discusses the
history of models starting in the 1980s that account for delayed feedbacks. An early
model of this kind,
T′(t) = T(t) − α T(t − τ)                    (9.1)


for constant α > 0, is simple enough to study analytically. It is the term involving
a constant lag or delay τ > 0 in the independent variable that makes this a DDE.
An obvious distinction between this DDE and an ODE is that specifying the initial
value T (0) is not enough to determine the solution for t ≥ 0; it is necessary to specify
the history T (t) for −τ ≤ t ≤ 0 for the differential equation even to be defined for
0 ≤ t ≤ τ . The paper [10] goes on to develop and study a more elaborate nonlinear
model with periodic forcing and a number of physical parameters of the form

h′(t) = −a tanh[κ h(t − τ)] + b cos(2πω t).                    (9.2)

These models exemplify DDEs with constant delays. The first mathematical software
for solving DDEs is dmrode [17], which did not appear until 1975. Today a good
many programs that can solve reliably first-order systems of DDEs with constant
delays are available, though the paper [10] makes clear that even for this relatively
simple class of DDEs, there can be serious computational difficulties. This is not just
a matter of developing software, rather that DDEs have more complex behavior than
ODEs. Some numerical results for (9.2) are presented in Sect. 9.4.1.
Some models have delays τ j (t) that depend on time. Provided that the delays are
bounded away from zero, the models behave similarly to those with constant delays
and they can be solved with some confidence. However, if a delay goes to zero, the
differential equation is said to be singular at that time. Such singular problems with
vanishing delays present special difficulties in both theory and practice. As a concrete
example of a problem with two time-dependent delays, we mention one that arises
from delayed cellular neural networks [31]. The fact that the delays are sometimes
very small and even vanish periodically during the integration makes this a relatively
difficult problem.

y1′(t) = −6y1(t) + sin(2t) f(y1(t)) + cos(3t) f(y2(t))
         + sin(3t) f(y1(t − (1 + cos(t))/2)) + sin(t) f(y2(t − (1 + sin(t))/2))
         + 4 sin(t),

y2′(t) = −7y2(t) + (cos(t)/3) f(y1(t)) + (cos(2t)/2) f(y2(t))
         + cos(t) f(y1(t − (1 + cos(t))/2)) + cos(2t) f(y2(t − (1 + sin(t))/2))
         + 2 cos(t).

Here f (x) = (|x + 1| − |x − 1|)/2. The problem is defined by this differential equation
and the history y1 (t) = −0.5 and y2 (t) = 0.5 for t ≤ 0. A complication with this
particular example is that there are time-dependent impulses, but we defer discussion
of that issue to §9.4.3 where we solve it numerically as in [6]. Some models have
delays that depend on the solution itself as well as time, τ (t, y). Not surprisingly, it is
more difficult to solve such problems because only an approximation to the solution
is available for defining the delays.
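Although §9.4.3 treats this example in full, the basic integration (ignoring the impulses) is easy to set up. The following MATLAB sketch is our own illustration, using the solver ddesd discussed later in this chapter, whose second argument is a function returning the delayed arguments t − τj(t); the interval [0, 10] is an arbitrary choice.

f = @(x) (abs(x+1) - abs(x-1))/2;
dely = @(t,y) [t - (1+cos(t))/2; t - (1+sin(t))/2];   % delayed arguments
ddefun = @(t,y,Z) [ -6*y(1) + sin(2*t)*f(y(1)) + cos(3*t)*f(y(2)) ...
                    + sin(3*t)*f(Z(1,1)) + sin(t)*f(Z(2,2)) + 4*sin(t);
                    -7*y(2) + (cos(t)/3)*f(y(1)) + (cos(2*t)/2)*f(y(2)) ...
                    + cos(t)*f(Z(1,1)) + cos(2*t)*f(Z(2,2)) + 2*cos(t) ];
% Z(:,j) holds the solution evaluated at the jth delayed argument.
sol = ddesd(ddefun, dely, [-0.5; 0.5], [0 10]);   % constant history
plot(sol.x, sol.y)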

Now that we have seen some concrete examples of DDEs, let us state more for-
mally the equations that we discuss in this ninth chapter. In a first-order system
of ODEs
y′(t) = f(t, y(t)),                    (9.3)
the derivative of the solution depends on the solution at the present time t. In a first-
order system of DDEs, the derivative also depends on the solution at earlier times.
As seen in the extensive bibliography [2], such problems arise in a wide variety of
fields. In this chapter, we consider DDEs of the form

y′(t) = f(t, y(t), y(t − τ1), y(t − τ2), . . . , y(t − τk)).                    (9.4)

Commonly the delays τ j here are positive constants. There is, however, consider-
able and growing interest in systems with time-dependent delays τj (t) and systems
with state-dependent delays τj (t, y(t)). Generally, we suppose that the problem is
non-singular in the sense that the delays are bounded below by a positive constant,
τj ≥ τ > 0. We shall see that it is possible to adapt methods for the numerical solution
of initial value problems (IVPs) for ODEs to the solution of initial value problems
for DDEs. This is not straightforward because DDEs and ODEs differ in impor-
tant ways. Equations of the form (9.4), even with time- and state-dependent delays,
do not include all the problems that arise in practice. Notably absent are equations
that involve a derivative with delayed argument like y′(t − τm) on the right-hand
side. Equations with such terms are said to be of neutral type. Though we com-
ment on neutral equations in passing, we study in this ninth chapter just DDEs of
the form (9.4). We do this because neutral DDEs can have quite different behav-
ior that is numerically challenging. Although we cite programs that have reasonable
prospects for solving a neutral DDE, the numerical solution of neutral DDEs is still
a research area.

9.2 DDEs are not ODEs

In this section, we consider some of the most important differences between DDEs
and ODEs. A fundamental technique for solving a system of DDEs is to reduce it to
a sequence of ODEs. This technique and other important methods for solving DDEs
are illustrated.
The simple ENSO model (9.1) is a constant coefficient, homogeneous differential
equation. If it were an ODE, we might solve it by looking for solutions of the form
T(t) = e^{λt}. Substituting this form into the ODE leads to an algebraic equation, the
characteristic equation, for values λ that provide a solution. For a first-order equa-
tion, there is only one such value. The same approach can be applied to DDEs. Here
it leads first to
λ e^{λt} = −α e^{λ(t−τ)} + e^{λt}
and then to the characteristic equation

λ = −α e^{−λτ} + 1.

In contrast to the situation with a first-order ODE, this algebraic equation has
infinitely many roots λ . Asymptotic expressions for the roots of large modulus are
derived in [9]. They show that the equation can have solutions that oscillate rapidly.
To make the point more concretely, we consider an example from [9] for which it is
easy to determine the roots analytically, even with a parameter a, namely the DDE
of neutral type
y′(t) = y′(t − τ) + a(y(t) − y(t − τ)).                    (9.5)
Substituting y(t) = e^{λt} into this equation leads to

(λ − a)(1 − e^{−λτ}) = 0.

The two real roots 0 and a are obvious. They correspond to a constant solution and
an exponential solution y(t) = eat , respectively. These solutions are not surprising
because they are like what we might find with an ODE. However, any λ for which
e^{λτ} = 1, that is, λ = 2πni/τ, also provides a solution. Written in terms of real functions, we find
then that there are solutions cos(2π nt/τ ) and sin(2π nt/τ ) for any integer n. This is
surprising because it is so different from the behavior possible with an ODE.
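One can check such claims numerically. The following MATLAB sketch, an illustration of our own, applies Newton's method to g(λ) = λ + α e^{−λτ} − 1, which is the characteristic equation of (9.1) written as g(λ) = 0. The parameter values and complex starting guesses are assumptions; different guesses converge to different members of the infinite family of roots, and the printed residual confirms that a root was found.

alpha = 2; tau = 1;                            % illustrative parameters
g  = @(lam) lam + alpha*exp(-lam*tau) - 1;     % characteristic function
dg = @(lam) 1 - alpha*tau*exp(-lam*tau);       % its derivative
for lam0 = [1+2i, 1+8i, 1+15i]                 % assumed starting guesses
    lam = lam0;
    for k = 1:50                               % Newton iteration
        lam = lam - g(lam)/dg(lam);
    end
    fprintf('root %9.5f %+9.5fi  |g| = %.1e\n', real(lam), imag(lam), abs(g(lam)))
end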
These observations about homogeneous DDEs with constant coefficients and con-
stant delays show that they can have solutions that behave quite differently from
ODEs. The method of steps is a basic technique for studying DDEs that reduces
them to a sequence of ODEs. To show how it goes and to illustrate other differences
between ODEs and DDEs, we solve

y′(t) = y(t − 1)                    (9.6)

with history S(t) = 1 for t ≤ 0. On the interval 0 ≤ t ≤ 1, the function y(t − 1) in (9.6)
has the known value S(t − 1) = 1 because t − 1 ≤ 0. Accordingly, the DDE on this
interval reduces to the ODE y′(t) = 1 with initial value y(0) = S(0) = 1. We solve this
IVP to obtain y(t) = t + 1 for 0 ≤ t ≤ 1. Notice that the solution of the DDE exhibits
a typical discontinuity in its first derivative at t = 0 because S′(0) = 0 = y′(0−) and
y′(0+) = 1. Now that we know the solution for t ≤ 1, we can reduce the DDE on
the interval 1 ≤ t ≤ 2 to an ODE y′(t) = (t − 1) + 1 = t with initial value y(1) = 2
and solve this IVP to find that y(t) = 0.5t^2 + 1.5 on this interval. The first derivative
is continuous at t = 1, but there is a discontinuity in the second derivative. It is
straightforward to see that the solution of the DDE on the interval [k, k + 1] is a
polynomial of degree k + 1 and it has a discontinuity of order k + 1 at time t = k. By
a discontinuity of order k + 1 at a time t = t∗, we mean that y^{(k+1)} has a jump there.
Figure 9.1 illustrates these observations. The upper (continuous) curve with square
markers at integers is the solution y(t) and the lower curve with circle markers is the
derivative y (t). The jump from a constant value of 0 in the derivative for t < 0 to a
value of 1 at t = 0 leads to a sharp change in the solution there. The discontinuity
propagates to t = 1 where the derivative has a sharp change and the solution has a
less obvious change in its concavity. The jump in the third derivative at t = 2 is not
noticeable in the plot of y(t).


Fig. 9.1 Solution smoothing
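The method of steps is easy enough to automate for this example. The sketch below is our own illustration, not code from the chapter: it integrates y′(t) = y(t − 1) one unit interval at a time with the ODE solver ode45, evaluating the delayed term by linear interpolation of the solution stored so far. A production DDE code would use the continuous extension of the Runge–Kutta formula instead of interp1, for the reasons discussed in Sect. 9.3.

S.t = [-1 0]; S.y = [1 1];                  % history y = 1 for t <= 0
for k = 0:2                                 % intervals [0,1], [1,2], [2,3]
    lagval = @(t) interp1(S.t, S.y, t-1);   % y(t-1) from the stored solution
    sol = ode45(@(t,y) lagval(t), [k k+1], S.y(end));
    tk = linspace(k, k+1, 21);              % append this interval's solution
    S.t = [S.t tk(2:end)]; S.y = [S.y deval(sol, tk(2:end))];
end
plot(S.t, S.y)                              % compare with the solution curve of Fig. 9.1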

In principle we can proceed in a similar way with the general equation (9.4) for
delays that are bounded away from zero, τj ≥ τ > 0. With the history function S(t)
defined for t ≤ t0 , the DDEs reduce to ODEs on the interval [t0 ,t0 + τ ] because for
each j, the argument t − τj ≤ t − τ ≤ t0 and the y(t − τj ) have the known values
S(t − τj ). Thus, we have an IVP for a system of ODEs with initial value y(t0 ) =
S(t0 ). We solve this problem on [t0 ,t0 + τ ] and extend the definition of S(t) to this
interval by taking it to be the solution of this IVP. Now that we know the solution for
t ≤ t0 + τ , we can move on to the interval [t0 + τ ,t0 + 2τ ], and so forth. In this way,
we can see that the DDEs have a unique solution on the whole interval of interest by
solving a sequence of IVPs for ODEs. As with the simple example, there is generally
a discontinuity in the first derivative at the initial point. If a solution of (9.4) has a
discontinuity at the time t ∗ of order k, then as the variable t moves through t ∗ + τj ,
there is a discontinuity in y^{(k+1)} because of the term y(t − τj) in the DDEs. With
multiple delays, a discontinuity at the time t ∗ is propagated to the times

t ∗ + τ1 , t ∗ + τ2 , . . . , t ∗ + τk

and each of these discontinuities is in turn propagated. If there is a discontinuity at
the time t∗ of order k, the discontinuity at each of the times t∗ + τj is of order at
least k + 1, and so on. This is a fundamental distinction between DDEs and ODEs:
There is normally a discontinuity in the first derivative at the initial point and it is
propagated throughout the interval of interest. Fortunately, for problems of the form
(9.4) the solution becomes smoother as the integration proceeds. That is not the case
with neutral DDEs, which is one reason that they are so much more difficult.

Neves and Feldstein [18] characterize the propagation of derivative discontinuities.
The times at which discontinuities occur form a discontinuity tree. If there is
a derivative discontinuity at T, the equation (9.4) shows that there will generally
be a discontinuity in the derivative of one higher order if for some j, the argument
t − τj(t, y(t)) = T because the term y(t − τj(t, y(t))) has a derivative discontinuity at T.
Accordingly, the times at which discontinuities occur are zeros of the functions

t − τj(t, y(t)) − T = 0.                    (9.7)

It is required that the zeros have odd multiplicity so that the delayed argument actu-
ally crosses the previous jump point and in practice, it is always assumed that the
multiplicity is one. Although the statement in [18] is rather involved, the essence
is that if the delays are bounded away from zero and delayed derivatives are not
present, a derivative discontinuity is propagated to a discontinuity in (at least) the
next higher derivative. In contrast, smoothing does not necessarily occur for neutral
problems nor for problems with vanishing delays. For constant and time-dependent
delays, the discontinuity tree can be constructed in advance. If the delay depends on
the state y, the points in the tree are located by solving the algebraic equations (9.7).
A very important practical matter is that the solvers have to track only discontinuities
with order lower than the order of the integration method because the behavior of the
method is not affected directly by discontinuities in derivatives of higher order.
Multiple delays cause special difficulties. Suppose, for example, that one delay
is 1 and another is 0.001. A discontinuity in the first derivative at the initial point
t = 0 propagates to 0.001, 0.002, 0.003, . . . because of the second delay. These
“short” delays are troublesome, but the orders of the discontinuities increase and
soon they do not trouble numerical methods. However, the other delay propagates
the initial discontinuity to t = 1 and the discontinuity there is then propagated to
1.001, 1.002, 1.003, . . . because of the second delay. That is, the effects of the short
delay die out, but they recur because of the longer delay. Another difficulty is that
discontinuities can cluster. Suppose that one delay is 1 and another is 1/3. The sec-
ond delay causes an initial discontinuity at t = 0 to propagate to 1/3, 2/3, 3/3, . . .
and the first delay causes it to propagate to t = 1, . . .. In principle the discontinuity
at 3/3 occurs at the same time as the one at 1, but 1/3 is not represented exactly
in finite precision arithmetic, so it is found that in practice there are two disconti-
nuities that are extremely close together. This simple example is an extreme case,
but it shows how innocuous delays can lead to clustering of discontinuities, clearly a
difficult situation for numerical methods.
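For constant delays, the discontinuity tree is easy to generate in advance, which is essentially what dde23 does. The fragment below is a sketch of our own with an assumed interval and tolerance, not the solver's actual code; a real solver would also stop propagating a discontinuity once its order exceeds the order of the method, as noted above.

lags = [1 1/3]; tf = 3; tol = 10*eps;   % assumed interval and tolerance
tree = 0;                               % first-derivative discontinuity at t0 = 0
level = 0;
while ~isempty(level)
    next = [];
    for tau = lags                      % propagate each point by every delay
        next = [next, level + tau];
    end
    level = unique(next(next <= tf));
    tree = [tree, level];
end
tree = uniquetol(sort(tree), tol);      % 3*(1/3) and 1 agree only to rounding
disp(tree)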
There is no best way to solve DDEs and as a result, a variety of methods that
have been used for ODEs have been modified for DDEs and implemented in modern
software. It is possible, though awkward in some respects, to adapt linear multistep
methods for DDEs. This approach is used in the snddelm solver [15], which is
based on modified Adams methods [24]. There are a few solvers based on implicit
Runge–Kutta methods. Several are described in [12, 14, 25], including two codes,
radar5 and ddesd, that are based on Radau IIA collocation methods. By far the
most popular approach to nonstiff problems is to use explicit Runge–Kutta methods
[3, 13]. Widely used solvers include archi [21, 22], dde23 [4, 29], ddverk [7, 8],

and dde_solver [5, 30]. Because the approach is so popular, it is the one that we
discuss in this chapter. Despite sharing a common approach, the codes cited deal
with important issues in quite different ways.
Before taking up numerical issues and how they are resolved, we illustrate the
use of numerical methods by solving the model (9.1) over [0, 6] for α = 2 and τ = 1
with two history functions, T (t) = 1 − t and T (t) = 1. By exploiting features of
the language, the MATLAB [16] solvers dde23 and ddesd and the Fortran 90/95
solver dde_solver make it nearly as easy to solve DDEs as ODEs. Because of
this and because we are very familiar with them, we use these solvers for all our
numerical examples. A program that solves the DDE (9.1) with both histories and
plots the results is
function Ex1
lags = 1; tspan = [0 6];
sol1 = dde23(@dde,lags,@history,tspan);
sol2 = dde23(@dde,lags,1,tspan);
tplot = linspace(0,6,100);
T1 = deval(sol1,tplot);
T2 = deval(sol2,tplot);
% Add the histories at t = -1 to the plots:
tplot = [-1 tplot]; T1 = [2 T1]; T2 = [1 T2];
plot(tplot,T1,tplot,T2,0,1,'o')
%--Subfunctions---------------------
function dydt = dde(t,T,Z)
dydt = T - 2*Z;
function s = history(t)
s = 1 - t;
The output of this program is displayed as Fig. 9.2. Although MATLAB displays the
output in color, all the figures of this chapter are monochrome. Except for the addi-
tional information needed to define a delay differential equation, solving a DDE with
constant delays using dde23 is nearly the same as solving an ODE with ode23.
The first argument tells the solver of the function for evaluating the DDEs (9.4). The
lags argument is a vector of the lags τ1 , . . . , τk . There is only one lag in the example
DDE, so the argument is here a scalar. The argument tspan specifies the interval
[t0 ,t f ] of interest. Unlike the ODE solvers of MATLAB, the DDE solvers require
that t0 < t f . The third argument is a function for evaluating the history, i.e., the solu-
tion for t ≤ t0 . A constant history function is so common that the solver allows users
to supply a constant vector instead of a function and that was done in computing the
second solution. The first two arguments of the function for evaluating the DDEs are
the same as for a system of ODEs, namely the independent variable t and a col-
umn vector of dependent variables approximating y(t). Here the latter is called T
and is a scalar. A challenging task for the user interface is to accommodate multiple
delays. This is done with the third argument Z, which is an array of k columns. The
first column approximates y(t − τ1) with τ1 defined as the first delay in lags. The
second column corresponds to the second delay, and so forth. Here there is only one


Fig. 9.2 The simple ENSO model (9.1) for two histories

delay and only one dependent variable, so Z is a scalar. A notable difference between
dde23 and ode23 is the output, here called sol. The ODE solvers of MATLAB
optionally return solutions as a complex data structure called a structure, but that is
the only form of output from the DDE solvers. The solution structure sol1 returned
by the first integration contains the mesh that was selected as the field sol1.x and
the solution at these points as the field sol1.y. With this information the first solu-
tion can be plotted by plot(sol1.x,sol1.y). Often this is satisfactory, but this
particular problem is so easy that values of the solution at mesh points alone does
not provide a smooth graph. Approximations to the solution can be obtained any-
where in the interval of interest by means of the auxiliary function deval. In the
example program, the two solutions are approximated at 100 equally spaced points
for plotting. A marker is plotted at (0, T (0)) to distinguish the two histories from the
corresponding computed solutions of (9.1). The discontinuity in the first derivative
at the initial point is clear for the history T (t) = 1, but the discontinuity in the second
derivative at t = 1 is scarcely visible.

9.3 Numerical Methods and Software Issues

The method of steps and modifications of methods for ODEs can be used to solve
DDEs. Because they are so popular, we study in this chapter only explicit Runge–
Kutta methods. In addition to discussing basic algorithms, we take up important
issues that arise in designing software for DDEs.

9.3.1 Explicit Runge–Kutta Methods

Given yn ≈ y(tn), an explicit Runge–Kutta method for a first-order system of ODEs
(9.3) takes a step of size hn to form yn+1 ≈ y(tn+1) at tn+1 = tn + hn as follows. It first
evaluates the ODEs at the beginning of the step,

yn,1 = yn,   fn,1 = f(tn, yn,1),

and then for j = 2, 3, . . . , s forms

yn,j = yn + hn ∑_{k=1}^{j−1} βj,k fn,k,   fn,j = f(tn + αj hn, yn,j).

It finishes with

yn+1 = yn + hn ∑_{k=1}^{s} γk fn,k.

The constants s, βj,k, αj, and γk are chosen to make yn+1 an accurate approximation
to y(tn+1 ). The numerical integration of a system of ODEs starts with given initial
values y0 = y(t0 ) and then forms a sequence of approximations y0 , y1 , y2 , . . . to the
solution at times t0 ,t1 ,t2 , . . . that span the interval of interest, [t0 ,t f ]. For some pur-
poses, it is important to approximate the solution at times t that are not in the mesh
when solving a system of ODEs, but this is crucial to the solution of DDEs. One of
the most significant developments in the theory and practice of Runge–Kutta meth-
ods is a way to approximate the solution accurately and inexpensively throughout
the span of a step, i.e., anywhere in [tn ,tn+1 ]. This is done with a companion formula
called a continuous extension.
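To make this concrete, here is a sketch of a single step of the Bogacki–Shampine (2,3) pair on which ode23 and dde23 are built, together with the cubic Hermite interpolant over [tn, tn + h] that serves as its continuous extension. The formula coefficients are the published ones, but the packaging, names, and the restriction to a scalar problem are our own; for a DDE, f would in addition be evaluated with delayed solution values obtained from previously stored interpolants.

function [ynew, err, yint] = bs23_step(f, t, y, h, k1)
% One step of the BS(2,3) pair; the caller supplies k1 = f(t,y), and k4
% below can be reused as k1 on the next step (the FSAL property).
k2 = f(t + h/2,   y + (h/2)*k1);
k3 = f(t + 3*h/4, y + (3*h/4)*k2);
ynew = y + (h/9)*(2*k1 + 3*k2 + 4*k3);        % third-order result
k4 = f(t + h, ynew);
err = (h/72)*(-5*k1 + 6*k2 + 8*k3 - 9*k4);    % estimate of the error in the
                                              % second-order result
% Continuous extension: cubic Hermite interpolant matching y, k1 at t
% and ynew, k4 at t + h, valid for any s in [t, t + h].
yint = @(s) hermite(t, y, k1, t+h, ynew, k4, s);

function v = hermite(t0, y0, f0, t1, y1, f1, s)
h = t1 - t0; th = (s - t0)/h;
v = (1-th).^2.*(1+2*th).*y0 + th.^2.*(3-2*th).*y1 ...
    + h*th.*(1-th).^2.*f0 - h*th.^2.*(1-th).*f1;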
A natural and effective way to solve DDEs can be based on an explicit Runge–
Kutta method and the method of steps. The first difficulty is that in taking a step,
the function f must be evaluated at times tn + αj hn , which for the delayed arguments
means that we need values of the solution at times (tn + αj hn ) − τm , which may be
prior to tn . Generally, these times do not coincide with mesh points, so we obtain
the solution values from an interpolant. For this purpose, linear multistep methods
are attractive because the popular methods all have natural interpolants that provide
accurate approximate solutions between mesh points. For other kinds of methods, it
is natural to use Hermite interpolation for this purpose. However, though formally
correct as the step size goes to zero, the approach does not work well with Runge–
Kutta methods. That is because these methods go to considerable expense to form an
accurate approximation at the end of the step and as a corollary, an efficient step size
is often too large for accurate results from direct interpolation of solution and first
derivative at several previous steps. Nowadays, codes based on Runge–Kutta formu-
las use a continuous extension of the basic formula that supplements the function
evaluations formed in taking the step to obtain accurate approximations throughout
[tn ,tn+1 ]. This approach uses only data from the current interval. With any of these
approaches, we obtain a polynomial interpolant that approximates the solution over
the span of a step and in aggregate, the interpolants form a piecewise-polynomial

function that approximates y(t) from t0 to the current tn . Care must be taken to ensure
that the accuracy of the interpolant reflects that of the basic formula and that vari-
ous polynomials connect at mesh points in a sufficiently smooth way. Details can be
found in [3] and [13]. It is important to appreciate that interpolation is used for other
purposes, too. We have already seen it used to get the smooth graph of Fig. 9.2. It is
all but essential when solving (9.7). Capable solvers also use interpolation for event
location, a matter that we discuss in Sect. 9.3.3.
Short delays arise naturally and as we saw by example in §9.1, it may even hap-
pen that a delay vanishes. If no delayed argument occurs prior to the initial point,
the DDE has no history, so it is called an initial value DDE. A simple example is
y′(t) = y(t^2) for t ≥ 0. A delay that vanishes can lead to quite different behavior
– the solution may not extend beyond the singular point or it may extend, but not
be unique. Even when the delays are all constant, “short” delays pose an important
practical difficulty: Discontinuities smooth out as the integration progresses, so we
may be able to use a step size much longer than a delay. In this situation, an explicit
Runge–Kutta formula needs values of the solution at “delayed” arguments that are
in the span of the current step. That is, we need an approximate solution at points
in [tn ,tn+1 ] before yn+1 has been computed. Early codes simply restrict the step size
so as to avoid this difficulty. However, the most capable solvers predict a solution
throughout [tn ,tn+1 ] using the continuous extension from the preceding step. They
compute a tentative yn+1 using predicted values for the solution at delayed argu-
ments and then repeat using a continuous extension for the current step until the
values for yn+1 converge. This is a rather interesting difference between ODEs and
DDEs: If the step size is bigger than a delay, a Runge–Kutta method that is explicit
for ODEs is generally an implicit formula for yn+1 for DDEs. The iterative scheme
for taking a step resembles a predictor–corrector iteration for evaluating an implicit
linear multistep method. More details are available in [1, 29].
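A tiny self-contained illustration of this implicitness, our own rather than anything from a production code, uses the explicit midpoint rule for y′(t) = y(t − τ) with history y = 1 and a step h > τ. The stage at tn + h/2 needs y(tn + h/2 − τ), which lies inside the current step, so that value is predicted, the step is taken, and the prediction is corrected until it settles; crude linear interpolation stands in here for a genuine continuous extension.

tau = 0.05; h = 0.2; tn = 0; yn = 1;        % the step size exceeds the delay
targ = tn + h/2 - tau;                      % delayed argument of the stage
ynew = yn;                                  % initial prediction of y(tn + h)
for iter = 1:20
    ylag = yn + (targ - tn)/h*(ynew - yn);  % interpolate over the current step
    ynext = yn + h*ylag;                    % midpoint step for y'(t) = y(t - tau)
    if abs(ynext - ynew) < 1e-12, break, end
    ynew = ynext;                           % correct and try again
end
fprintf('converged in %d iterations, ynew = %.10f\n', iter, ynew)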
The order of accuracy of a Runge–Kutta formula depends on the smoothness of
the solution in the span of the step. This is not usually a problem with ODEs, but we
have seen that discontinuities are to be expected when solving DDEs. To maintain the
order of the formula, we have to locate and step to discontinuities. The popular codes
handle propagated discontinuities in very different ways. The solver dde23 allows
only constant delays, so before starting the integration, it determines all the discon-
tinuities in the interval of interest and arranges for these points to be included in the
mesh t0 ,t1 , . . .. Something similar can be done if the delays are time-dependent, but
this is awkward, especially if there are many delays. The approach is not applicable
to problems with state-dependent delays. For problems of this generality, disconti-
nuities are located by solving (9.7) as the integration proceeds. If the function (9.7)
changes sign between tn and tn+1 , the algebraic equation is solved for the location
of the discontinuity with the polynomial interpolant for this step, P(t), replacing y(t)
in (9.7). After locating the first time t∗ at which t∗ − τj(t∗, P(t∗)) − T = 0, the step to
tn+1 is rejected and a shorter step is taken from tn to a new tn+1 = t ∗ . This is what
dde_solver does for general delays, but it uses the more efficient approach of
building the discontinuity tree in advance for problems with constant delays. archi
also solves (9.7), but tracking discontinuities is an option in this solver. ddverk

does not track discontinuities explicitly, rather it uses the defect or residual of the
solution to detect discontinuities. The residual r(t) of an approximate solution S(t)
is the amount by which it fails to satisfy the differential equation:

S′(t) = f(t, S(t), S(t − τ1), S(t − τ2), . . . , S(t − τk)) + r(t).

On discovering a discontinuity, ddverk steps across with special interpolants.


ddesd does not track propagated discontinuities. Instead it controls the residual,
which is less sensitive to the effects of propagated discontinuities. radar5 treats
the step size as a parameter to locate discontinuities detected by error test failures;
details are found in [11].
When solving a system of DDEs, it is generally necessary to supply an initial his-
tory function to provide solution values for t ≤ t0 . Of course the history function must
provide values for t as far back from t0 as the maximum delay. This is straightfor-
ward, but some DDEs have discontinuities at times prior to the initial point or even
at the initial point. A few solvers, including dde23, ddesd, and dde_solver,
provide for this, but most codes do not because it complicates both the user interface
and the program.

9.3.2 Error Estimation and Control

Several quite distinct approaches to the vital issue of error estimation and control
are seen in popular solvers. In fact, this issue most closely delineates the differ-
ences between DDE solvers. Popular codes like archi, dde23, dde_solver,
and ddverk use pairs of formulas for this purpose. The basic idea is to take each
step with two formulas and estimate the error in the lower order result by compari-
son. The cost is kept down by embedding one formula in the other, meaning that one
formula uses only fn,k that were formed in evaluating the other formula or at most
a few extra function evaluations. The error of the lower order formula is estimated
and controlled, but it is believed that the higher order formula is more accurate, so
most codes advance the integration with the higher order result. This is called local
extrapolation. In some ways an embedded pair of Runge–Kutta methods is rather
like a predictor–corrector pair of linear multistep methods.
If the pair is carefully matched, an efficient and reliable estimate of the error is
obtained when solving a problem with a smooth solution. Indeed, most ODE codes
rely on the robustness of the error estimate and step size selection procedures to han-
dle derivative discontinuities. Codes that provide for event location and those based
on control of the defect (residual) further improve the ability of a code to detect and
resolve discontinuities. Because discontinuities in low-order derivatives are almost
always present when solving DDEs, the error estimates are sometimes questionable.
This is true even if a code goes to great pains to avoid stepping across discontinuities.
For instance, ddverk monitors repeated step failures to discern whether the error
estimate is not behaving as it ought for a smooth solution. If it finds a discontinuity in

this way, it uses special interpolants to get past the discontinuity. The solver generally
handles discontinuities quite well since the defect does a good job of reflecting dis-
continuities. Similarly, radar5 monitors error test failures to detect discontinuities
and then treats the step size as a parameter to locate the discontinuity. Despite these
precautions, the codes may still use questionable estimates near discontinuities. The
ddesd solver takes a different approach. It exploits relationships between the resid-
ual and the error to obtain a plausible estimate of the error even when discontinuities
are present; details may be found in [25].

9.3.3 Event Location

Just as with ODEs, it is often important to find out when something happens. For
instance, we may need to find the first time a solution component attains a prescribed
value because the problem changes then. Mathematically, this is formulated as find-
ing a time t ∗ for which one of a collection of functions

g1 (t, y(t)), g2 (t, y(t)), . . . , gk (t, y(t))

vanishes. We say that an event occurs at time t ∗ and the task is called event loca-
tion [28]. As with locating discontinuities, the idea is to monitor the event functions
for a change of sign between tn and tn+1 . When a change is encountered in, say,
equation m, the algebraic equation gm (t, P(t)) = 0 is solved for t ∗ . Here P(t) is the
polynomial continuous extension that approximates y(t) on [tn ,tn+1 ]. Event location
is a valuable capability that we illustrate with a substantial example in Sect. 9.4.
A few remarks about this example will illustrate some aspects of the task. A sys-
tem of two differential equations is used to model a two-wheeled suitcase that may
wobble from one wheel to the other. If the event y1 (t) − π /2 = 0 occurs, the suitcase
has fallen over and the computation comes to an end. If the event y1 (t) = 0 occurs,
a wheel has hit the floor. In this situation, we stop integrating and restart with ini-
tial conditions that account for the wheel bouncing. As this example makes clear,
if there are events at all, we must find the first one if we are to model the physical
situation properly. For both these event functions, the integration is to terminate at
an event, but it is common that we want to know when an event occurs and the value
of the solution at that time, but we want the integration to continue. A practical dif-
ficulty is illustrated by the event of a wheel bouncing – the integration is to restart
with y1 (t) = 0, a terminal event! In the example we deal with this by using another
capability, namely that we can tell the solver that we are interested only in events
for which the function decreases through zero or increases through zero, or it does
not matter how the function changes sign. Most DDE codes do not provide for event
location. Among the codes that do are dde23, ddesd, and dde_solver.
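In outline, event location on a single step looks like the following MATLAB fragment. The interpolant P(t), the step, and the event function g are illustrative assumptions here; a production solver must in addition handle several event functions at once, direction filters, and terminal events as described above.

P = @(t) sin(t);                 % stand-in for the continuous extension
g = @(t,y) y - 0.5;              % event: the solution passes through 0.5
tn = 0.4; tnp1 = 0.7;            % assumed current step
if sign(g(tn,P(tn))) ~= sign(g(tnp1,P(tnp1)))   % sign change on the step?
    tstar = fzero(@(t) g(t,P(t)), [tn tnp1]);   % solve g(t,P(t)) = 0
    fprintf('event at t* = %.6f, y(t*) = %.6f\n', tstar, P(tstar))
end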

9.3.4 Software Issues

A code needs values from the past, so it must use either a fixed mesh that matches
the delays or some kind of continuous extension. The former approach, used in early
codes, is impractical or impossible for most DDEs. Modern codes adopt the latter
approach. Using some kind of interpolation, they evaluate the solution at delayed
arguments, but this requires that they store all the information needed for the inter-
polation. This information is saved in a solution history queue. The older Fortran
codes had available only static storage and using a static queue poses significant
complications. Since the size of the queue is not known in advance, users must either
allocate excessively large queues or live with the fact that the code will be unsuccess-
ful for problems when the queue fills. The difficulty may be avoided to some extent
by using circular queues in which the oldest solution is replaced by new informa-
tion when necessary. Of course this approach fails if discarded information is needed
later. The dynamic storage available in modern programming languages like MAT-
LAB and Fortran 90/95 is vital to modern programs for the solution of DDEs. The
dde23, ddesd, and dde_solver codes use dynamic memory allocation to make
the management of the solution queue transparent to the user and to allow the solu-
tion queue to be used on return from the solver. Each of the codes trims the solution
queue to the amount actually used at the end of the integration. The latest version of
dde_solver has an option for trimming the solution queue during the integration
while allowing the user to save the information conveniently if desired.
The more complex data structures available in modern languages are very help-
ful. The one used in the DDE solvers of MATLAB is called a structure and the
equivalent in Fortran 90/95 is called a derived type. By encapsulating all information
about the solution in a structure, the user is relieved of the details about how some
things are accomplished. An example is evaluation of the solution anywhere in the
interval of interest. The mesh and the details of how the solution is interpolated are
unobtrusive when stored in a solution structure. A single function deval is used to
evaluate the solution computed by any of the differential equation solvers of MAT-
LAB. If the solution structure is called sol, there is a field, sol.solver, that is
the name of the solver as a string, e.g., "dde23". With this deval knows how the
data for the interpolant is stored and how to evaluate it. There are, in fact, a good
many possibilities. For example, the ODE solvers that are based on linear multistep
methods vary the order of the formula used from step to step, so stored in the struc-
ture is the order of the polynomial and the data defining it for each [tn ,tn+1 ]. All the
interpolants found in deval are polynomials, but several different representations
are used because they are more natural to the various solvers. By encapsulating this
information in a structure, the user need give no thought to the matter. A real dividend
for libraries is that it is easy to add another solver to the collection.
The event location capability discussed in Sect. 9.3.3 requires output in addition to
the solution itself, viz., the location of events, which event function led to each event
reported, and the solution at each event. It is difficult to deal properly with event
location without modern language capabilities because the number of events is not
known in advance. In the ODE solvers of MATLAB, this information is available

in output arguments since the language provides for optional output arguments. Still,
it is convenient to return the information as fields in a solution structure since a user
may want to view only some of the fields. The equivalent in Fortran 90/95 is to return
the solution as a derived type. This is especially convenient because the language
does not provide for optional output. The example of §9.4.2 illustrates what a user
interface might look like in both languages.
In the numerical example of §9.4.2, the solver returns after an event, changes the
solution, and continues the integration. This is easy with an ODE because continua-
tion can be treated as a new problem. Not so with DDEs because they need a history.
Output as a structure is crucial to a convenient implementation of this capability. To
continue an integration, the solver is called with the output structure from the prior
integration instead of the usual history function or vector. The computation proceeds
as usual, but approximate solutions at delayed arguments prior to the starting point
are taken from the previously computed solution. If the delays depend on time and/or
state, they might extend as far back as the initial data. This means that we must save
the information needed to interpolate the solution from the initial point on. Indeed,
the delays might be sufficiently long that values are taken from the history function
or vector supplied for the first integration of the problem. This means that the history
function or vector, as the case may be, must be held as a field in the solution structure
for this purpose. A characteristic of the data structure is that fields do not have to be
of the same type or size. Indeed, we have mentioned a field that is a string, fields that
are arrays of length not known in advance, and a field that is a function handle.

9.4 Examples

Several collections of test problems are available to assess how well a particular
DDE solver handles the issues and tasks described above. Each of the references
[5, 7, 11, 19, 20, 25, 27, 29] describes a variety of test problems. In this section, the
two problems of §9.1 are solved numerically to show that a modern DDE solver may
be used to investigate complex problems. With such a solver, it is not much more
difficult to solve a first-order system of DDEs than ODEs. Indeed, a design goal of
the dde23, ddesd, and dde_solver codes was to exploit capabilities in MAT-
LAB and Fortran 90/95 to make them as easy as possible to use, despite exceptional
capabilities. We also supplement the discussion of event location in §9.3.3 with an
example. Although it is easy enough to solve the DDEs, the problem changes at
events and dealing with this is somewhat involved. We provide programs in both
MATLAB and Fortran 90/95 that show even complex tasks can be solved conve-
niently with codes like dde23 and dde solver. Differences in the design of these
codes are illustrated by this example. Further examples of the numerical solution of
DDEs can be found in the documentation for the codes cited throughout the chapter.
They can also be found in a number of the references cited and in particular, the
references of Sect. 9.6.

9.4.1 El-Niño Southern Oscillation Variability Model

Equation (9.2) is a DDE from [10] that models the El-Niño Southern Oscillation
(ENSO) variability. It combines two key mechanisms that participate in ENSO
dynamics, delayed negative feedback, and seasonal forcing. They suffice to generate
very rich behavior that illustrates several important features of more detailed models
and observational data sets. In [10] a stability analysis of the model is performed in
a three-dimensional space of its strength of seasonal forcing b, atmosphere–ocean
coupling κ , and propagation period τ of oceanic waves across the Tropical Pacific.
The physical parameters a, κ , τ , b, and ω are all real and positive.
Figure 9.3 depicts typical solutions computed with dde_solver and constant
history h(t) = 1 for t ≤ 0. It shows six solutions obtained by fixing b = 1, κ = 100
and varying the delay τ over two orders of magnitude, from τ = 10−2 to τ = 1, with
τ increasing from bottom to top in the figure. The sequence of changes in solution
type as τ increases seen in this figure is typical for any choice of (b, κ ).
For a small delay, τ < π /(2 κ ), we have a periodic solution with period 1 (curve
a); here the internal oscillator is completely dominated by the seasonal forcing. When
the delay increases, the effect of the internal oscillator becomes visible: small wig-
gles, in the form of amplitude-modulated oscillations with a period of 4 τ , emerge as
the trajectory crosses the zero line. However, these wiggles do not affect the overall
period, which is still 1. The wiggle amplitude grows with τ (curve b) and eventually
wins over the seasonal oscillations, resulting in period doubling (curve c). Further
increase of τ results in the model passing through a sequence of bifurcations that pro-
duce solution behavior of considerable interest for understanding ENSO variability.
Although solution of this DDE is straightforward with a modern solver, it is quite
demanding for some parameters. For this reason, the compiled computation of the


Fig. 9.3 Examples of DDE model solutions. Model parameters are κ = 100 and b = 1, while τ
increases from curve (a) to curve (f) as follows: (a) τ = 0.01, (b) τ = 0.025, (c) τ = 0.15, (d)
τ = 0.45, (e) τ = 0.995, and (f) τ = 1

Fortran dde solver was much more appropriate for the numerical study of [10]
than the interpreted computation of the MATLAB dde23. The curves in Fig. 9.3
provide an indication of how much the behavior of the solution depends on the delay
τ . Further investigation of the solution behavior for different values of the other prob-
lem parameters requires the solution over extremely long intervals. The intervals
are so long that it is impractical to retain all the information needed to evaluate an
approximate solution anywhere in the interval. These problems led to the option of
trimming the solution queue in dde_solver.
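Readers who want to experiment with curves like those of Fig. 9.3 in MATLAB can do so in a few lines. The sketch below is our own: the values a = 1 and ω = 1 are assumptions, since they are not stated here, and the tolerances are tightened because κ = 100 makes the tanh term act nearly like a switch.

a = 1; b = 1; kappa = 100; omega = 1; tau = 0.15;  % tau as for curve (c)
enso = @(t,h,Z) -a*tanh(kappa*Z) + b*cos(2*pi*omega*t);   % the DDE (9.2)
opts = ddeset('RelTol',1e-6,'AbsTol',1e-8);
sol = dde23(enso, tau, 1, [0 10], opts);           % constant history h = 1
plot(sol.x, sol.y), xlabel('Time'), ylabel('h(t)')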

9.4.2 Rocking Suitcase

To illustrate event location for a DDE, we consider the following example from [26].
A two–wheeled suitcase may begin to rock from side to side as it is pulled. When
this happens, the person pulling it attempts to return it to the vertical by applying
a restoring moment to the handle. There is a delay in this response that can affect
significantly the stability of the motion. This may be modeled with the DDE

θ″(t) + sign(θ(t)) γ cos(θ(t)) − sin(θ(t)) + β θ(t − τ) = A sin(Ω t + η),

where θ(t) is the angle of the suitcase to the vertical. This equation is solved on the
interval [0, 12] as a pair of first-order equations with y1(t) = θ(t) and y2(t) = θ′(t).
Parameter values of interest are

γ = 0.248, β = 1, τ = 0.1, A = 0.75, Ω = 1.37, η = arcsin(γ/A),
and the initial history is the constant vector zero. A wheel hits the ground (the suit-
case is vertical) when y1 (t) = 0. The integration is then to be restarted with y1 (t) = 0
and y2 (t) multiplied by the coefficient of restitution, here chosen to be 0.913. The
suitcase is considered to have fallen over when |y1(t)| = π/2 and the run is then termi-
nated. This problem is solved using dde23 with the following MATLAB program.

function sol = suitcase
state = +1;
opts = ddeset('RelTol',1e-5,'Events',@events);
sol = dde23(@ddes,0.1,[0; 0],[0 12],opts,state);
ref = [4.516757065, 9.751053145, 11.670393497];
fprintf('Kind of Event:             dde23    reference\n');
event = 0;
while sol.x(end) < 12
    event = event + 1;
    if sol.ie(end) == 1
        fprintf('A wheel hit the ground. %10.4f %10.6f\n',...
                sol.x(end),ref(event));
        state = - state;
        opts = ddeset(opts,'InitialY',[0; 0.913*sol.y(2,end)]);
        sol = dde23(@ddes,0.1,sol,[sol.x(end) 12],opts,state);
    else
        fprintf('The suitcase fell over. %10.4f %10.6f\n',...
                sol.x(end),ref(event));
        break;
    end
end
plot(sol.y(1,:),sol.y(2,:))
xlabel('\theta(t)')
ylabel('\theta''(t)')
%===================================================
function dydt = ddes(t,y,Z,state)
gamma = 0.248; beta = 1; A = 0.75; omega = 1.37;
ylag = Z(1,1);
dydt = [y(2); 0];
dydt(2) = sin(y(1)) - state*gamma*cos(y(1)) - beta*ylag ...
          + A*sin(omega*t + asin(gamma/A));
%===================================================
function [value,isterminal,direction] = events(t,y,Z,state)
value = [y(1); abs(y(1))-pi/2];
isterminal = [1; 1];
direction = [-state; 0];

The program produces the phase plane plot depicted in Fig. 9.4. It also reports what kind of event occurred and the location of the event. The reference values displayed were computed with the dde_solver code and much more stringent tolerances.

Kind of Event:            dde23    reference
A wheel hit the ground.   4.5168   4.516757
A wheel hit the ground.   9.7511   9.751053
The suitcase fell over.   11.6704  11.670393

This is a relatively complicated model, so we will elaborate on some aspects of the program. Coding of the DDE is straightforward except for evaluating properly the discontinuous coefficient sign(y1(t)). This is accomplished by initializing a parameter state to +1 and changing its sign whenever dde23 returns because y1(t) vanished. Handling state in this manner ensures that dde23 does not need to deal

[Figure 9.4 near here; axes: θ(t) (horizontal) vs. θ′(t) (vertical)]
Fig. 9.4 Two-wheeled suitcase problem

with the discontinuities it would otherwise see if the derivative were coded in a man-
ner that allowed state to change before the integration is restarted; see [28] for a
discussion of this issue. After a call to dde23 we must consider why it has returned.
One possibility is that it has reached the end of the interval of integration, as indicated
by the last point reached, sol.x(end), being equal to 12. Another is that the suit-
case has fallen over, as indicated by sol.ie(end) being equal to 2. Both cases
cause termination of the run. More interesting is a return because a wheel hit the
ground, y1 (t) = 0, which is indicated by sol.ie(end) being equal to 1. The sign
of state is then changed and the integration restarted. Because the wheel bounces, the solution at the end of the current integration, sol.y(:,end), must be modified for use as the initial value of the next integration. The InitialY option is used to deal with an initial value that is different from the history. The event y1(t) = 0 that terminates one integration occurs at the initial point of the next integration. As with the MATLAB IVP solvers, dde23 does not terminate the run in this special situation of an event at the initial point. No special action is necessary, but the solver does locate and report an event at the initial point, so it is better practice to avoid this by defining the event function more carefully. When the indicator state is +1, respec-
tively −1, we are interested in locating where the solution component y1 (t) vanishes
only if it decreases, respectively increases, through zero. We inform the solver of this
by setting the first component of the argument direction to -state. Notice that
ddeset is used to alter an existing options structure in the while loop. This is a
convenient capability also present in odeset, the corresponding function for IVPs.
The rest of the program is just a matter of reporting the results of the computations.

Default tolerances give an acceptable solution, though the phase plane plot would benefit from plotting more solution values. Reducing the relative error tolerance to 1e-5 gives better agreement with the reference values.

It is instructive to compare solving the problem with dde23 to solving it with dde_solver. In contrast to the numerical study of the ENSO model, solving this problem is inexpensive in either computing environment, so the rather simpler MATLAB program outweighs the speed advantage of the Fortran program.
We begin by contrasting briefly the user interface of the solvers. A simple problem
is solved simply by defining it with a call like

SOL = DDE_SOLVER(NVAR,DDES,BETA,HISTORY,TSPAN)    (9.8)

Here NVAR is an integer array of two or three entries. The first is NEQN, the number of DDEs; the second is NLAGS, the number of delays; the third, if present, is NEF, the number of event functions. DDES is the name of a subroutine for evaluating the DDEs. It has the form

SUBROUTINE DDES(T,Y,Z,DY)    (9.9)

The input arguments are the independent variable T, a vector Y of NEQN components approximating y(T), and an array Z that is NEQN × NLAGS. Column j of this array is an approximation to y(βj(T, y(T))). The subroutine evaluates the DDEs with these arguments and returns y′(T) as the vector DY of NEQN components.

BETA is the name of a subroutine for evaluating the delays, and HISTORY is the name of a subroutine for evaluating the initial history function. More precisely, the functions are defined by subroutines for general problems, but they are defined in a simpler way in the very common situations of constant lags and/or constant history. dde_solver returns the numerical solution in the output structure SOL. The input vector TSPAN is used to inform the solver of the interval of integration and where approximate solutions are desired. TSPAN has at least two entries. The first entry is the initial point of the integration, t0, and the last is the final point, tf. If TSPAN has only two entries, approximate solutions are returned at all the mesh points selected by the solver itself. These points generally produce a smooth graph when the numerical solution is plotted. If TSPAN has entries t0 < t1 < ... < tf, the solver returns approximate solutions at (only) these points.
The call list of (9.8) resembles closely that of dde23. The design provides for a
considerable variety of additional capabilities. This is accomplished in two ways. F90
provides for optional arguments that can be supplied in any order if associated with
a keyword. This is used, for example, to pass to the solver the name of a subroutine
EF for evaluating event functions and the name of a subroutine CHNG in which
necessary problem changes are made when event times are located, with a call like
SOL = DDE_SOLVER(NVAR, DDES, BETA, HISTORY, TSPAN, &
EVENT_FCN=EF, CHANGE_FCN=CHNG)

This ability is precisely what is needed to solve this problem. EF is used to define the residuals g1 = y1 and g2 = |y1| − π/2. CHNG is used to apply the coefficient of restitution and to handle the STATE flag as in the dde23 solution. One of the optional arguments is a structure containing options. This structure is formed by a function called DDE_SET that is analogous to the function ddeset used by dde23 (and ddesd). The call list of (9.8) uses defaults for important quantities such as error tolerances, but of course, the user has the option of specifying quantities appropriate to the problem at hand. A more detailed discussion of these and other design issues may be found in [30]. Here is a Fortran 90 program that uses dde_solver to solve this problem.
MODULE define_DDEs

  IMPLICIT NONE
  INTEGER, PARAMETER :: NEQN=2, NLAGS=1, NEF=2
  INTEGER :: STATE

CONTAINS

  SUBROUTINE DDES(T, Y, Z, DY)
    DOUBLE PRECISION :: T
    DOUBLE PRECISION, DIMENSION(NEQN) :: Y, DY
    DOUBLE PRECISION :: YLAG
    DOUBLE PRECISION, DIMENSION(NEQN,NLAGS) :: Z
    ! Physical parameters
    DOUBLE PRECISION, PARAMETER :: gamma=0.248D0, beta=1D0, &
                                   A=0.75D0, omega=1.37D0
    YLAG = Z(1,1)
    DY(1) = Y(2)
    DY(2) = SIN(Y(1)) - STATE*gamma*COS(Y(1)) - beta*YLAG &
            + A*SIN(omega*T + ASIN(gamma/A))
    RETURN
  END SUBROUTINE DDES

  SUBROUTINE EF(T, Y, DY, Z, G)
    DOUBLE PRECISION :: T
    DOUBLE PRECISION, DIMENSION(NEQN) :: Y, DY
    DOUBLE PRECISION, DIMENSION(NEQN,NLAGS) :: Z
    DOUBLE PRECISION, DIMENSION(NEF) :: G
    G = (/ Y(1), ABS(Y(1)) - ASIN(1D0) /)
    RETURN
  END SUBROUTINE EF

  SUBROUTINE CHNG(NEVENT, TEVENT, YEVENT, DYEVENT, HINIT, &
                  DIRECTION, ISTERMINAL, QUIT)
    INTEGER :: NEVENT
    INTEGER, DIMENSION(NEF) :: DIRECTION
    DOUBLE PRECISION :: TEVENT, HINIT
    DOUBLE PRECISION, DIMENSION(NEQN) :: YEVENT, DYEVENT
    LOGICAL :: QUIT
    LOGICAL, DIMENSION(NEF) :: ISTERMINAL
    INTENT(IN) :: NEVENT, TEVENT
    INTENT(INOUT) :: YEVENT, DYEVENT, HINIT, DIRECTION, &
                     ISTERMINAL, QUIT
    IF (NEVENT == 1) THEN
       ! Restart the integration with initial values
       ! that correspond to a bounce of the suitcase.
       STATE = -STATE
       YEVENT(1) = 0.0D0
       YEVENT(2) = 0.913*YEVENT(2)
       DIRECTION(1) = - DIRECTION(1)
    ! ELSE
    !    The suitcase fell over, NEVENT = 2.  The integration
    !    could be terminated by QUIT = .TRUE., but this
    !    event is already a terminal event.
    ENDIF
    RETURN
  END SUBROUTINE CHNG

END MODULE define_DDEs

!****************************************************

PROGRAM suitcase

! The DDE is defined in the module define_DDEs.  The problem
! is solved here with dde_solver and its output written to
! a file.  The auxiliary function suitcase.m imports the data
! into Matlab and plots it.

  USE define_DDEs
  USE DDE_SOLVER_M

  IMPLICIT NONE

  ! The quantities
  !   NEQN  = number of equations
  !   NLAGS = number of delays
  !   NEF   = number of event functions
  ! are defined in the module define_DDEs as PARAMETERs so
  ! they can be used for dimensioning arrays here.  They are
  ! passed to the solver in the array NVAR.
  INTEGER, DIMENSION(3) :: NVAR = (/NEQN,NLAGS,NEF/)

  TYPE(DDE_SOL) :: SOL
  ! The fields of SOL are expressed in terms of the
  ! number of differential equations, NEQN, and the
  ! number of output points, NPTS:
  !   SOL%NPTS         -- NPTS, number of output points.
  !   SOL%T(NPTS)      -- values of independent variable, T.
  !   SOL%Y(NPTS,NEQN) -- values of dependent variable, Y,
  !                       corresponding to values of SOL%T.
  ! When there is an event function, there are fields
  !   SOL%NE           -- NE, number of events.
  !   SOL%TE(NE)       -- locations of events.
  !   SOL%YE(NE,NEQN)  -- values of solution at events.
  !   SOL%IE(NE)       -- identifies which event occurred.
  TYPE(DDE_OPTS) :: OPTS

  ! Local variables:
  INTEGER :: I,J

  ! Prepare output points.
  INTEGER, PARAMETER :: NOUT=1000
  DOUBLE PRECISION, PARAMETER :: T0=0D0, TFINAL=12D0
  DOUBLE PRECISION, DIMENSION(NOUT) :: TSPAN = &
    (/ (T0+(I-1)*((TFINAL-T0)/(NOUT-1)), I=1,NOUT) /)

  ! Initialize the global variable that governs the
  ! form of the DDEs.
  STATE = 1

  ! Set desired integration options.
  OPTS = DDE_SET(RE=1D-5,DIRECTION=(/-1,0/),&
                 ISTERMINAL=(/ .FALSE.,.TRUE. /))

  ! Perform the integration.
  SOL = DDE_SOLVER(NVAR,DDES,(/0.1D0/),(/0D0,0D0/),&
        TSPAN,OPTIONS=OPTS,EVENT_FCN=EF,CHANGE_FCN=CHNG)

  ! Was the solver successful?
  IF (SOL%FLAG == 0) THEN
     ! Write the solution to a file for subsequent
     ! plotting in Matlab.
     OPEN(UNIT=6, FILE='suitcase.dat')
     DO I = 1,SOL%NPTS
        WRITE(UNIT=6,FMT='(3D12.4)') SOL%T(I),(SOL%Y(I,J),J=1,NEQN)
     ENDDO
     PRINT *,' Normal return from DDE_SOLVER with results'
     PRINT *," written to the file 'suitcase.dat'."
     PRINT *,' '
     PRINT *,' These results can be accessed in Matlab'
     PRINT *,' and plotted in a phase plane by'
     PRINT *,' '
     PRINT *," >> [t,y] = suitcase;"
     PRINT *,' '
     PRINT *,' Kind of Event:'
     DO I = 1,SOL%NE
        IF (SOL%IE(I) == 1) THEN
           PRINT *,' A wheel hit the ground at', SOL%TE(I)
        ELSE
           PRINT *,' The suitcase fell over at', SOL%TE(I)
        END IF
     END DO
     PRINT *,' '
  ELSE
     PRINT *,' Abnormal return from DDE_SOLVER. FLAG = ',&
             SOL%FLAG
  ENDIF

  STOP
END PROGRAM suitcase

9.4.3 Time-Dependent DDE with Impulses

In Sect. 9.1 we stated a first-order system of DDEs that arises in modeling cellular neural networks [31] and commented that it is relatively difficult to solve numerically because the delays vanish periodically during the integration. It is also difficult because the system is subject to impulse loading. Specifically, at each tk = 2k an impulse is applied by replacing y1(tk) with 1.2 y1(tk) and y2(tk) with 1.3 y2(tk). Because the impulses are applied at specific times, this problem might be solved in several ways. When using dde_solver, it is convenient to define an event function g(t) = t − Te, where Te = 2 initially and Te = 2(k + 1) once the event time T = 2k has been located. This could be done with ddesd, too, but the design of the MATLAB solver makes it more natural simply to integrate to tk, return to the calling program where the solution is altered because of the impulse, and call the solver to continue the integration.
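A minimal MATLAB sketch of this restart strategy follows. The DDE here is a trivial stand-in, not the neural network model of [31]; the point is only the loop that applies the impulse and restarts ddesd with the previous solution as history:

% Sketch: impulses at tk = 2k handled by restarting ddesd.
% The right-hand side below is a placeholder system, purely for
% illustration of the restart pattern.
ddefun  = @(t,y,Z) [-y(1) + 0.5*Z(2,1); -y(2) + 0.5*Z(1,1)];
delays  = @(t,y) t - 1;                       % one constant lag of 1
history = @(t) [1; -0.5];
tfinal  = 20;
sol = ddesd(ddefun,delays,history,[0 2]);     % integrate to t1 = 2
for k = 1:(tfinal/2 - 1)
    yplus = [1.2*sol.y(1,end); 1.3*sol.y(2,end)];    % apply the impulse
    opts  = ddeset('InitialY',yplus);
    sol = ddesd(ddefun,delays,sol,[2*k 2*(k+1)],opts); % sol serves as history
end
plot(sol.y(1,:),sol.y(2,:))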

[Figure 9.5 near here; axes: y1 (horizontal) vs. y2 (vertical)]
Fig. 9.5 Neural network DDE with time-dependent impulses



This is much like the computation of the suitcase example. Figure 9.5 shows the phase plane for the solution computed with impulses using dde_solver. Similar results were obtained with ddesd.

9.5 Conclusion

We believe that the use of models based on DDEs has been hindered by the limited availability of quality software; indeed, the first mathematical software for this purpose [17] appeared only in 1975. As software has become available that makes it not greatly harder to integrate differential equations with delays, there has been a substantial and growing interest in DDE models in all areas of science. Although the software benefited significantly from advances in ODE software technology, we have seen in this chapter that there are numerical difficulties and issues peculiar to DDEs that must be considered. As interest has grown in DDE models, so has interest in both developing algorithms and quality software. There are classes of problems that remain challenging, but the examples of Sect. 9.4 show that it is not hard to use quality DDE solvers to solve realistic and complex DDE models.

9.6 Further Reading

A short but good list of sources is

1. Baker C T H, Paul C A H, and Willé D R [2], A Bibliography on the Numerical Solution of Delay Differential Equations
2. Bellen A and Zennaro M [3], Numerical Methods for Delay Differential Equations
3. Shampine L F, Gladwell I, and Thompson S [26], Solving ODEs with MATLAB
4. http://www.radford.edu/~thompson/ffddes/index.html, a web site devoted to DDEs

In addition, the numerical analysis section of Scholarpedia [23] contains several very readable general articles devoted to the numerical solution of delay differential equations.

References

1. Baker C T H and Paul C A H (1996), A Global Convergence Theorem for a Class of Parallel Continuous Explicit Runge–Kutta Methods and Vanishing Lag Delay Differential Equations, SIAM J Numer Anal 33:1559–1576
2. Baker C T H, Paul C A H, and Willé D R (1995), A Bibliography on the Numerical Solution of Delay Differential Equations, Numerical Analysis Report 269, Mathematics Department, University of Manchester, U.K.
3. Bellen A and Zennaro M (2003), Numerical Methods for Delay Differential Equations, Oxford Science, Clarendon Press
4. Bogacki P and Shampine L F (1989), A 3(2) Pair of Runge–Kutta Formulas, Appl Math Lett 2:1–9
5. Corwin S P, Sarafyan D, and Thompson S (1997), DKLAG6: A Code Based on Continuously Imbedded Sixth Order Runge–Kutta Methods for the Solution of State Dependent Functional Differential Equations, Appl Numer Math 24:319–333
6. Corwin S P, Thompson S, and White S M (2008), Solving ODEs and DDEs with Impulses, JNAIAM 3:139–149
7. Enright W H and Hayashi H (1997), A Delay Differential Equation Solver Based on a Continuous Runge–Kutta Method with Defect Control, Numer Alg 16:349–364
8. Enright W H and Hayashi H (1998), Convergence Analysis of the Solution of Retarded and Neutral Differential Equations by Continuous Methods, SIAM J Numer Anal 35:572–585
9. El'sgol'ts L E and Norkin S B (1973), Introduction to the Theory and Application of Differential Equations with Deviating Arguments, Academic Press, New York
10. Ghil M, Zaliapin I, and Thompson S (2008), A Differential Delay Model of ENSO Variability: Parametric Instability and the Distribution of Extremes, Nonlin Processes Geophys 15:417–433
11. Guglielmi N and Hairer E (2008), Computing Breaking Points in Implicit Delay Differential Equations, Adv Comput Math
12. Guglielmi N and Hairer E (2001), Implementing Radau IIa Methods for Stiff Delay Differential Equations, Computing 67:1–12
13. Hairer E, Nørsett S P, and Wanner G (1987), Solving Ordinary Differential Equations I, Springer–Verlag, Berlin, Germany
14. Jackiewicz Z (2002), Implementation of DIMSIMs for Stiff Differential Systems, Appl Numer Math 42:251–267
15. Jackiewicz Z and Lo E (2006), Numerical Solution of Neutral Functional Differential Equations by Adams Methods in Divided Difference Form, J Comput Appl Math 189:592–605
16. MATLAB 7 (2006), The MathWorks, Inc., 3 Apple Hill Dr., Natick, MA 01760
17. Neves K W (1975), Automatic Integration of Functional Differential Equations: An Approach, ACM Trans Math Softw 1:357–368
18. Neves K W and Feldstein A (1976), Characterization of Jump Discontinuities for State Dependent Delay Differential Equations, J Math Anal Appl 56:689–707
19. Neves K W and Thompson S (1992), Software for the Numerical Solution of Systems of Functional Differential Equations with State Dependent Delays, Appl Numer Math 9:385–401
20. Paul C A H (1994), A Test Set of Functional Differential Equations, Numerical Analysis Report 243, Mathematics Department, University of Manchester, U.K.
21. Paul C A H (1995), A User–Guide to ARCHI, Numerical Analysis Report 283, Mathematics Department, University of Manchester, U.K.
22. Paul C A H (1992), Developing a Delay Differential Equation Solver, Appl Numer Math 9:403–414
23. Scholarpedia, http://www.scholarpedia.org/
24. Shampine L F (1994), Numerical Solution of Ordinary Differential Equations, Chapman & Hall, New York
25. Shampine L F (2005), Solving ODEs and DDEs with Residual Control, Appl Numer Math 52:113–127
26. Shampine L F, Gladwell I, and Thompson S (2003), Solving ODEs with MATLAB, Cambridge Univ. Press, New York
27. Shampine L F and Thompson S, Web Support Page for dde_solver, http://www.radford.edu/~thompson/ffddes/index.html
28. Shampine L F and Thompson S (2000), Event Location for Ordinary Differential Equations, Comp Maths Appls 39:43–54
29. Shampine L F and Thompson S (2001), Solving DDEs in MATLAB, Appl Numer Math 37:441–458
30. Thompson S and Shampine L F (2006), A Friendly Fortran DDE Solver, Appl Numer Math 56:503–516
31. Yongqing Y and Cao J (2007), Stability and Periodicity in Delayed Cellular Neural Networks with Impulsive Effects, Nonlinear Anal Real World Appl 8:362–374
Chapter 10
Effects of Time Delay on Synchronization and Firing Patterns in Coupled Neuronal Systems

Qishao Lu, Qingyun Wang, and Xia Shi

Abstract Synchronization and firing patterns of neurons play a significant role in neural signal encoding and in the transduction of information processing in nervous systems. For real neurons, signal transmission delay is inherent due to the finite propagation speed of action potentials through axons, as well as to both dendritic and synaptic processing. Time delay is universal and inevitable, and it results in complex dynamical behavior, such as multistability and bifurcations leading to chaos. In this chapter, the authors provide a survey of some recent developments on the effects of time delay on synchronization and firing patterns in coupled neuronal systems. Basic concepts of firing patterns of neurons and of complete and phase synchronization of oscillators are introduced. Synchronization and firing patterns in electrically coupled neurons, as well as in neurons coupled by inhibitory and excitatory delayed synapses, are presented. Delay effects on the spatiotemporal dynamics of coupled neuronal activities are discussed. Some well-known neuron models are also included for the benefit of the readers.

Keywords: Firing pattern · Time delay · Neuronal system · Synchronization

10.1 Introduction

Neurons are the basic building blocks of nervous systems, and there are about 10¹¹ neurons in the human brain. These specialized cells are the information processing units of the brain, responsible for receiving and transmitting information. Neurons are coupled in an extremely complex neural network, and neural information is mainly encoded and integrated through various firing patterns of neurons [1–4]. To understand the essence of the collective behavior of coupled neurons, various firing patterns of coupled neurons should be studied theoretically and numerically by means of neural models.


Synchronization of neurons plays a significant role in neural signal encoding and in the transduction of information processing in nervous systems. Physiological experiments have indicated the existence of synchronous motions of neurons in different areas of the brain. Quasiperiodic synchronous rhythms have been shown in the cortex and in small neural systems such as central pattern generators. Synchronous firing of neurons was also observed in cat visual cortex. The observations of synchronous neural activity in the central nervous system have stimulated a great deal of theoretical work on synchronization in coupled neural networks.
Electrophysiological and anatomical data indicate that the cerebral cortex is spatially and functionally organized. These patterns of connectivity must influence the intrinsic dynamics of cortical circuits, which can now be visualized using optical techniques. The relationship between the spatial profile of neural interactions and spatiotemporal patterns of neuronal activity can be investigated in modeling studies. An important property of neural interactions is that they involve delays. We know that the information flow in neural systems is not instantaneous in general. The signal transmission delay is inherent due to the finite propagation speed of action potentials through axons, as well as to both dendritic and synaptic processing. For example, conduction velocities range up to about 10 m s⁻¹, leading to nonnegligible transmission times, from milliseconds to hundreds of milliseconds, for propagation through the cortical network. Effective delays can also be induced by the spike generation dynamics. Hence, for real neurons the effect of time delay is universal and inevitable in the behavior of synchronization and firing patterns of coupled neuronal systems. Behaviors due to time delays, such as multistability and bifurcations leading to chaos, have been analytically and numerically investigated by using some simple neuronal models (such as integrate-and-fire (IF), FitzHugh–Nagumo (FHN), Hindmarsh–Rose (HR), and Hodgkin–Huxley (HH) models) with delayed coupling.
The effect of time delays on coupled oscillators has been studied extensively [5–9]. Recently, the synchronization dynamics for a system of two HH neurons with delayed diffusive and pulsed coupling was investigated in [10]. The stability of a network of coupled HH neurons was analyzed and the stability regions in the parameter space were obtained. With diffusive coupling, there are three regions in the parameter space, corresponding to qualitatively distinct behavior of the coupled dynamics. In particular, the two neurons can synchronize in two of the regions and desynchronize in the third. Reference [11] studied the enhancement of neural synchrony by time delay, that is, a stable synchronized state existing at low coupling strengths for significant time delays, in a time-delayed system of two chaotic HR neurons. It was shown that in the delayed system there was always an extended region of stable synchronous activity corresponding to low coupling strengths, which could be achieved only by much higher coupling strengths without delay. Bursting behavior in a pair of HR neurons with delayed coupling was studied in [12], in which bifurcations due to time lag and coupling, as well as the stability of the stationary state corresponding to quiescence, were analyzed. It was found that bursting is created by the coupling and that its properties depend strongly on the time lag. In particular, there is a domain of values of time lags which renders the bursting of the two neurons exactly synchronous. Moreover, phenomena of suppression of oscillations and synchronization

are important in understanding the physiology of neuronal systems, and have been intensively studied. In [13], a method for the suppression of synchrony in a globally coupled oscillator network was suggested, based on time-delayed feedback via the mean field. A theory was developed based on the consideration of the synchronization transition as a Hopf bifurcation. Also, time-delay-induced switching between in-phase and antiphase oscillations has been observed in several systems [8, 9].

When interactions are spatially structured, delays can induce a wealth of dynamical states with different spatiotemporal properties and domains of multistability. For the dynamics of large networks of neurons, it was shown in [14] that delays give rise to a wealth of bifurcations and to a rich phase diagram, which includes oscillatory bumps, traveling waves, lurching waves, standing waves arising via a period-doubling bifurcation, aperiodic regimes, and regimes of multistability. The existence and stability of various dynamical patterns were studied analytically and numerically in a simplified firing-rate model as a function of the interaction parameters.
As is known, noise can have quite different qualitative effects on the deterministic dynamics, depending on the values of time lag and coupling. It has been shown elsewhere that such delays may lead to homogeneous oscillations in neural networks with random connectivity. The dynamics of FHN neuron ensembles with time-delayed couplings subject to white noise was studied by using both direct simulations and a semianalytical augmented moment method in [15]. The effects of the coupling strength, delay, noise intensity, and ensemble size on the emergence of oscillation and on synchronization in FHN neuron ensembles were studied. The synchronization showed a fluctuation-induced enhancement at the transition between nonoscillating and oscillating states. Reference [16] explored the dynamics of an HH-type model for thermally sensitive neurons with time-delayed feedback that exhibit intrinsic oscillatory activity. The dynamics of the neuron, depending on the temperature, the synaptic strength, and the delay time, was investigated. The parameter regions where the effect of the recurrent connection is excitatory, inducing spikes or trains of spikes, and the regions where it is inhibitory, reducing or eliminating the spiking behavior completely, were found. The complex interplay of the intrinsic dynamics of the neuron with the recurrent feedback input and a noisy input was revealed.

The influence of white noise on the dynamics of a pair of stochastically perturbed HR bursting neurons with delayed electrical coupling was studied in [17, 18]. The possibility of stochastically stable exact synchronization with sufficiently strong coupling was proved for arbitrary time lags and sufficiently small noise. In particular, a simple method to predict the intensity of noise that can destabilize the quiescent state was proposed and compared with numerical computations. Furthermore, it was demonstrated that quite small noise may completely destroy the exact synchronization of bursting dynamics.
In this chapter, we survey some recent developments on the effects of time delay on synchronization and firing patterns in coupled neuronal systems. In Sect. 10.2, basic concepts of firing patterns of neurons and of complete and phase synchronization of oscillators are introduced. Section 10.3 contains a discussion of synchronization and firing patterns in electrically coupled neurons with time delay. Synchronization and firing patterns in coupled neurons with inhibitory and excitatory delayed synapses are presented in Sects. 10.4 and 10.5, respectively. Delay effects on the spatiotemporal dynamics of coupled neuronal activity are discussed in Sect. 10.6. Concluding remarks are given in Sect. 10.7, and some well-known neuron models are introduced in the Appendices for the convenience of the reader.

10.2 Basic Concepts

10.2.1 Firing Patterns of a Single Neuron

A neuron consists of the dendrites, the soma, and the axon. Each part of the neuron plays a role in the communication of information throughout the body. The dendrites receive information from other neurons and transmit electrical stimulation to the soma; the signals are integrated in the soma and passed on to the axon, which transmits the neural signal to other neurons.

The junction between two neurons is called a synapse. It is common to refer to the sending neuron as the presynaptic cell and to the receiving neuron as the postsynaptic cell. A single neuron in vertebrate cortex often connects to more than 10⁴ postsynaptic neurons. There are two common types of synapses in the vertebrate brain, namely, electrical and chemical synapses. According to these junctions, the couplings among neurons are divided into electrical and chemical ones in coupled neuronal systems. Furthermore, a chemical coupling is classified as either excitatory or inhibitory according to its function.
Neural signals are expressed by action potentials, which are also called spikes or impulses. Action potentials are generated and sustained by ionic currents through the cell membrane. A neuron receives inputs from other neurons through the synapses, and the inputs produce electrical transmembrane currents that change the membrane potential of the neuron. The synaptic currents produce postsynaptic potentials that can be amplified by the voltage-sensitive channels embedded in the neuronal membrane and lead to the generation of action potentials or spikes, that is, abrupt and transient changes of membrane voltage that propagate to other neurons via the axon. Dynamical models illustrating the generation of spikes in a neuron are presented in the Appendices.

A neuron is quiescent if its membrane potential is at rest or exhibits small amplitude ("subthreshold") oscillations. From the point of view of dynamics, this corresponds to the system residing at an equilibrium or at a small amplitude limit cycle attractor, respectively. A neuron is said to be excitable if a small perturbation away from a quiescent state can result in a large excursion of its action potential before returning to quiescence. The neuron can fire spikes when there is a large amplitude limit cycle attractor, which may coexist with the quiescent state. Neuronal firing is crucial to information processing in the nervous system, and many complex firing patterns are observed in neural experiments and numerical simulations.

[Figure 10.1 near here; a periodic spiking, b periodic bursting; membrane potential vs. time, with the bifurcation of the rest state and the bifurcation of the limit cycle marked in b]
Fig. 10.1 Neuronal spiking and bursting

There are two basic types of neuronal firing patterns, namely, spiking and bursting. Repetitive spiking comes from a large amplitude limit cycle, which corresponds to oscillatory dynamics of neurons (Fig. 10.1a). Bursting, on the other hand, depends on two important bifurcations: the bifurcation of a quiescent state that leads to repetitive spiking, and the bifurcation of a spiking attractor that results in quiescence, as shown in Fig. 10.1b. Bursting appears when the dynamical neural system contains two timescales and the slowly changing variable visits the spiking and quiescent states alternately. The neuron may fire several spikes in the duration of each burst, depending on the time the slowly changing variable spends in the spiking area relative to the period of a single spike. A classification of bursting was presented by Izhikevich [19] according to the bifurcations between the quiescent state and that of repetitive spiking. The bifurcation scenarios of the interspike interval (ISI) series can often be used to identify the bursting type by the number of spikes per burst and to show whether the bursting pattern is periodic or chaotic (Fig. 10.2).
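As an aside, ISI sequences of the kind plotted in Fig. 10.2 are easily extracted from a simulated voltage trace; a minimal MATLAB sketch follows, where the threshold value is an assumption made purely for illustration:

% Sketch: interspike intervals from a sampled membrane potential.
% t and V are vectors of sample times and voltages; Vth is an assumed
% spike-detection threshold.
Vth = 0;
up = find(V(1:end-1) < Vth & V(2:end) >= Vth);  % upward threshold crossings
tspike = t(up);                                 % approximate spike times
ISI = diff(tspike);                             % interspike interval series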

10.2.2 Synchronization

In a classical context, synchronization means the adjustment of rhythms of self-sustained periodic oscillators due to weak interaction, and it can be described in terms of phase locking and frequency entrainment. Nowadays, it is well known that self-sustained oscillators can generate rather complex, chaotic signals. Recent studies have revealed that such systems, being coupled, are able to undergo synchronization. Therefore, the modern concept of synchronization also covers coupled chaotic systems. Chaotic synchronization can be defined as the adjustment of the varying rhythms of coupled chaotic oscillators. One distinguishes different forms of synchronization, such as complete, phase, lag, and generalized synchronization.

Complete synchronization of coupled identical systems means the coincidence of state variables during time evolution, starting from different initial points in the state space. This kind of synchronization usually can be reached by strong coupling. Let

[Figure 10.2 near here; bifurcation diagram of ISI showing regions of periodic bursting, chaotic bursting, and spiking]
Fig. 10.2 Some bifurcation scenarios of ISI of neurons

us begin with a general system with temporal evolution governed by the equation

ẇ = f(w).

Here w = (w1, w2, ..., wn) is an n-dimensional state vector and f : Rⁿ → Rⁿ is a vector function. Through bidirectional coupling, we obtain a coupled system as follows:

v̇ = f(v) + C(w − v),
ẇ = f(w) + C(v − w),    (10.1)


in which C is the coupling matrix. For the coupled neurons, this would mimic the
electrical coupling. In this framework, complete synchronization is defined as the
identity between the trajectories of two systems. Then the existence of complete
synchronization requires that the synchronization manifold v ≡ w is asymptotically
stable, or equivalently, lim e(t) = 0, where e(t) is the synchronization error defined
t→∞
by e(t) ≡ v − w. This property can be justified in general by using the stability anal-
ysis of the following linearized system for small e:

ė = f (w) − f (v) +C(v − w) −C(w − v)


= (D f (w) + 2C)e, (10.2)

where D f (w) is the Jacobian matrix of the vector field f evaluated onto the trajectory
of any one of the coupled systems.
10 Effects of Delay on Neuronal Synchronization and Firing Patterns 279

Let J = Df(w) − 2C. Three types of complete synchronization can be treated theoretically as follows. First, if the trajectory w(t) is a fixed point, then the stability problem of system (10.2) can be studied by evaluating the eigenvalues of the matrix J. Therefore, a condition for complete synchronization of fixed points of the coupled system (10.1) is that the real parts of all eigenvalues of J be smaller than zero. Second, when the trajectory w(t) is periodic, (10.2) is a linear ordinary differential equation with periodic coefficients. According to the Floquet theorem, the stability problem of (10.2) can be solved by calculating the Floquet multipliers. A condition for asymptotic stability of the trivial solution of (10.2), that is, a condition for complete synchronization of periodic motions of the coupled system (10.1), is that the moduli of all characteristic multipliers be smaller than 1. Finally, if the trajectory w(t) is chaotic, then we need to calculate the Lyapunov exponents of system (10.2), that is, the conditional Lyapunov exponents. A condition for complete synchronization of chaotic motions of the coupled system (10.1) is that all conditional Lyapunov exponents be smaller than zero.
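For the fixed-point case, the eigenvalue test is a one-liner; a minimal MATLAB sketch follows, with a hypothetical Jacobian (not taken from any particular neuron model) and a coupling acting on the first variable only:

% Sketch: fixed-point synchronization test for the error system (10.2).
% Df is an assumed example Jacobian at the fixed point.
Df = [0 1; -1 -0.5];
c  = 0.8;                        % coupling strength
C  = [c 0; 0 0];                 % coupling through the first variable
J  = Df - 2*C;                   % matrix governing the error dynamics
stable = all(real(eig(J)) < 0)   % true if the synchronized state is stable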
If the coupling is weak, another kind of synchronization, phase synchronization,
can be reached for two coupled nonidentical systems. Many physical, chemical, and
biological systems can produce self-sustained oscillations, which correspond mathe-
matically to a stable limit cycle in the state space of an autonomous continuous-time
dynamical system, and the phase φ can be introduced as the variable parameterizing
the motion along this cycle. Usually, the phase can be chosen in such a way that
it grows uniformly in time, that is, dφ /dt = ω , where ω is the natural frequency
of oscillations. Then for two coupled periodic oscillators, n:m phase locking or fre-
quency locking is represented by |nφ1 − mφ2 | < constant or nω1 = mω2 .
For phase synchronization of coupled chaotic oscillators, the primary problem is to determine the time-dependent phase φ(t). A few approaches have been proposed [20] to calculate the phases of chaotic oscillators. For chaotic neuronal systems, a Poincaré section-based method is usually used, in which the rotation angle of a radius vector projection on a certain plane of variables is used to calculate the instantaneous phase.
Choose a Poincaré section defined as Σ. A trajectory of the system successively intersects Σ, and each intersection is associated with a phase increase of 2π. Then the phase can be defined by

φ(t) = 2π (t − Ti)/(Ti+1 − Ti) + 2π i    (Ti < t < Ti+1),    (10.3)

where t is the time and Ti is the time of the ith intersection of the trajectory with the Poincaré section. With this definition of phase, the mean frequency is

⟨ω⟩ = lim_{N→∞} (1/N) Σ_{i=1}^{N} 2π/(Ti+1 − Ti).
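A short MATLAB sketch of this construction follows; the crossing times Ti are assumed to have been detected already, for example by an event function during the integration. Because (10.3) grows linearly between crossings and equals 2π(i − 1) at Ti, interp1 evaluates it directly:

% Sketch: instantaneous phase (10.3) and mean frequency from a vector
% Ti of successive crossing times of the Poincare section.
phi = @(t,Ti) interp1(Ti, 2*pi*(0:numel(Ti)-1), t);  % phase at times t
omega_mean = @(Ti) mean(2*pi./diff(Ti));             % mean frequency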

Similar to phase locking of coupled periodic oscillators, the condition of phase synchronization of two coupled chaotic oscillators can be formalized as |mφ1(t) − nφ2(t)| < const, where m and n are integers. Phase synchronization is also defined as frequency entrainment, provided that the mean frequencies of the oscillators are in rational relation, that is, |m⟨ω1⟩ − n⟨ω2⟩| = 0, with m and n as above. In particular, it is called in-phase synchronization when |φ1(t) − φ2(t)| = 0 and antiphase synchronization when |φ1(t) − φ2(t)| = π. For more results about weak coupling, one can refer to the book [21].

10.3 Synchronization and Firing Patterns in Electrically Coupled Neurons with Time Delay

In [22], we considered the synchronization and firing patterns of two electrically coupled ML neurons with time delay. For a ML neuron (see Appendix 10.2), VCa is considered as a control parameter. To generate rich firing patterns in the single ML neuron model, an external linear slow subsystem is added as follows:

dI/dt = μ(0.2 + V),    (10.4)

where μ is a constant, taken as μ = 0.005 below. When the parameter VCa changes, the ML neuron model can exhibit rich firing behavior, such as various periodic and chaotic patterns. The bifurcation diagram of ISI and the two largest Lyapunov exponents are shown in Fig. 10.3, in which the various firing behaviors of a single ML neuron are exhibited as the parameter VCa changes.
Two coupled ML neurons with a time-delayed gap junction are described by the following equations:

dV1,2/dt = gCa m∞ (VCa − V1,2) + gk ω1,2 (Vk − V1,2) + gl (Vl − V1,2) − I1,2 + C(V2,1(t − τ) − V1,2),

[Figure 10.3 near here]
Fig. 10.3 a The bifurcation diagram of ISI vs. the parameter VCa in the ML neuron model; b the corresponding two largest Lyapunov exponents λ1,2 of a

dω1,2/dt = λ∞(V1,2)(ω∞(V1,2) − ω1,2),    (10.5)

dI1,2/dt = μ(0.2 + V1,2),
where C is the coupling strength and τ is the time delay in the gap junction; the subscript 1 (or 2) refers to neuron 1 (or 2). Choose the parameter VCa = 0.845 mV, at which a single ML neuron exhibits chaotic firing behavior, as shown in Fig. 10.3.
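Systems of this form can be integrated with any of the solvers of Chap. 9. The following MATLAB sketch shows only the coupling pattern of (10.5), with a simple damped oscillator standing in for the single-neuron dynamics; it is a pattern for how delayed voltages enter the right-hand side, not the code used in [22]:

% Sketch: two subsystems coupled through delayed "voltages".
% f is a placeholder 2-variable system, purely for illustration.
f = @(y) [y(2); -y(1) - 0.1*y(2)];
tau = 4; C = 0.03;
ddefun = @(t,y,Z) [ f(y(1:2)) + [C*(Z(3)-y(1)); 0];
                    f(y(3:4)) + [C*(Z(1)-y(3)); 0] ];
sol = dde23(ddefun,tau,@(t)[1;0;0.5;0],[0 200]);
plot(sol.x,sol.y([1 3],:))       % the two coupled "voltages"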
It was shown in [22] that when the coupling strength is below C = 0.6, the coupled neurons cannot realize synchronization. In what follows, we focus on the effect of time-delayed coupling on the synchronization and firing patterns of two coupled chaotic ML neurons. To investigate the synchronization of two coupled neurons with time delay, the similarity function is introduced as

S(τ1) = { ⟨(V1(t) − V2(t − τ1))²⟩ / [⟨V1²(t)⟩⟨V2²(t)⟩]^{1/2} }^{1/2},    (10.6)
where ⟨·⟩ denotes the temporal average. In numerical simulations, this value is computed over a sufficiently long time after discarding transients. This function measures the temporal correlation of the two signals V1(t), V2(t), with the following property: if the two signals are independent, S(τ1) does not vanish for any τ1; if S(τ1) ≈ 0 for a certain τ1, then there exists a time lag τ1 between the two signals. Hence, for complete synchronization the similarity function S(τ1) has a minimum of zero at τ1 = 0. Now we consider the case of complete synchronization of (10.5) and calculate S(0) as the time delay τ and the coupling strength C vary. A contour graph is plotted in the (C, τ)-parameter plane, as shown in Fig. 10.4a. One can see clearly in Fig. 10.4b that there exist extensive regions where S(0) vanishes, and this implies the occurrence of complete synchronization of the two coupled ML neurons.
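Numerically, S(0) is a one-line computation once two voltage traces on a common time grid are available; a sketch, with transients assumed already discarded:

% Sketch: similarity function S(0) of (10.6) for sampled traces V1, V2.
S0 = sqrt( mean((V1-V2).^2) / sqrt(mean(V1.^2)*mean(V2.^2)) );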
However, it is worth noting that when the two coupled chaotic neurons achieve complete synchronization in the presence of time delay, they exhibit regular firing patterns

[Figure 10.4 near here]
Fig. 10.4 a The contour plot of S(0) in the (τ,C)-parameter plane, where the gray scale on the bar shows its variation; b S(0) vs. the coupling strength C for the delay τ = 4

[Figure 10.5 near here]
Fig. 10.5 a The bifurcation diagram of ISI of the first neuron with respect to the coupling strength C without time delay; b the bifurcation diagram of ISI of the first neuron with respect to the coupling strength C for the delay τ = 4

[Figure 10.6 near here]
Fig. 10.6 a The bifurcation diagram of ISI of the first neuron with respect to the delay τ; b S(0) vs. the delay τ for the coupling strength C = 0.12

instead of the original chaotic behavior (see Fig. 10.5). Although the neurons can exhibit rich firing patterns as the delay is changed, complete synchronization can only occur for periodic behavior.

Further investigation shows that delay can destroy synchronization or, for certain delays, induce antiphase synchronization. When we take the coupling strength C = 0.12, for which the coupled neurons achieve synchronization without delay, the bifurcation diagram of ISI and the similarity function S(0) with respect to the delay are as shown in Fig. 10.6a, b, respectively. It is obvious that the synchronization of the coupled neurons is destroyed over large ranges of delay, and different firing patterns may occur. Moreover, for some delays the coupled neurons can realize antiphase synchronization. One can see antiphase synchronization with periodic firing patterns in Fig. 10.7a, b for the delays τ = 14 and 13, respectively.
[Figure 10.7 near here]
Fig. 10.7 a Antiphase synchronization with period two for the delay τ = 14; b antiphase synchronization with period three for the delay τ = 13, with the coupling strength C = 0.12

10.4 Synchronization and Firing Patterns in Coupled Neurons with Delayed Inhibitory Synapses

In [23], it was shown that as the control parameter Vc changes, the Chay neuron model (see Appendix 10.3) exhibits rich firing behavior, such as periodic spiking, bursting, and chaotic patterns. The bifurcation diagram of ISI and the corresponding two largest Lyapunov exponents with respect to the parameter Vc for a single Chay neuron model are shown in Fig. 10.8.

The dynamics of two coupled Chay neurons with delayed inhibitory synapses are governed by the following set of differential equations:

dV1,2/dt = gI m∞³ h∞ (VI − V1,2) + gkv n1,2⁴ (Vk − V1,2) + gkc [C1,2/(1 + C1,2)] (Vk − V1,2) + gl (Vl − V1,2) + Hsyn (Vsyn − V1,2)/(1 + exp(−σ(V2,1(t − τ) − θ))),

dn1,2/dt = (n∞ − n1,2)/τn,

dC1,2/dt = ρ[m∞³ h∞ (Vc − V1,2) − Kc C1,2],
where Hsyn is the coupling strength and τ is the time delay between the two coupled Chay neurons with synaptic connection. The subscript 1 (or 2) refers to neuron 1 (or 2). Vsyn is the synaptic reversal potential, which depends on the type of synaptic transmitter released from a presynaptic neuron and on its receptors. The coupling is excitatory or inhibitory according as Vsyn > Ve or Vsyn < Ve, where Ve is the equilibrium potential of a neuron. θ is the synaptic threshold, above which the postsynaptic neuron is affected by the presynaptic one, and σ represents a constant rate of onset of excitation or inhibition.
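The delayed sigmoidal coupling term is straightforward to code; a brief MATLAB sketch follows, with the parameter values chosen below in the text (the Hsyn value is just one of the cases considered there):

% Sketch: delayed inhibitory synaptic current of the coupled Chay model.
% Vdel is the presynaptic membrane potential at time t - tau.
Vsyn = -50; theta = -30; sigma = 10; Hsyn = 3.5;
Isyn = @(V,Vdel) Hsyn*(Vsyn - V)./(1 + exp(-sigma*(Vdel - theta)));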
[Figure 10.8 near here]
Fig. 10.8 a The bifurcation diagram of ISI vs. the parameter Vc in the Chay neuron model; b the corresponding two largest Lyapunov exponents λ1,2 of a

Here, we choose the synaptic reversal potential Vsyn = −50 mV, so that the interaction of the two neurons is inhibitory, with the synaptic threshold θ = −30 and the rate constant σ = 10.

To begin with, we take a look at the synchronization of two synaptically coupled chaotic Chay neurons without conduction delay (i.e., τ = 0). When we choose the control parameter Vc = 136 mV, a single Chay neuron exhibits chaotic firing behavior, as shown in Fig. 10.8. As the coupling strength Hsyn increases from zero, the two coupled chaotic Chay neurons gradually pass from nonsynchronization to out-of-phase bursting synchronization. Meanwhile, the two chaotic neurons become more and more regular in the process of realizing out-of-phase synchronization. This results from the effect of inhibition, which, under sufficiently strong coupling, forces one neuron to be active and the other to be quiescent, alternately. In order to observe the transition to out-of-phase synchronization clearly as the coupling strength Hsyn increases, we take the coupling strength Hsyn = 1, 3.5, and 6, respectively. Temporal evolutions of the corresponding membrane potentials and the phase portraits on the (V1,V2)-plane of the two synaptically coupled chaotic Chay neurons are shown in Fig. 10.9. It is obvious that out-of-phase bursting synchronization with period-5 firing can be achieved when the coupling strength Hsyn is above a certain critical value.

In what follows, we focus on the effect of the conduction delay on synchronization in the two coupled Chay neurons. Three cases are considered in terms of the states of the coupled neurons. Typically, when we take the coupling strength Hsyn = 1, 3.5, and 6, the two coupled neurons lie in the states of nonsynchronization, nearly out-of-phase synchronization, and out-of-phase synchronization, respectively. The similarity function SV(0) is also introduced, as in Sect. 10.3, to investigate the effect of the conduction delay.
At first, it is shown that the conduction delay cannot efficiently enhance the synchronization of the two coupled Chay neurons when they are in a nonsynchronous state. From the results of the numerical simulation for Hsyn = 1
[Figure 10.9 near here]
Fig. 10.9 The time series of two coupled Chay neurons, denoted by solid and dash-dot curves, respectively, and the corresponding phase portraits on the (V1,V2) plane, for the coupling strength a Hsyn = 1; b Hsyn = 3.5; c Hsyn = 6

illustrated in Fig. 10.10, it is obvious that the similarity function SV(0) cannot be reduced drastically by the conduction delay; the two coupled neurons are not synchronous and remain in chaotic firing within the range of conduction delay considered. There are only several isolated points at which SV(0) is slightly reduced, with a small enhancement of synchronization of the two coupled neurons.

Next, if the two coupled neurons lie in a nearly out-of-phase synchronization state for Hsyn = 3.5, as shown in Fig. 10.9b, what influence does the conduction delay have
[Figure 10.10 near here]
Fig. 10.10 a The similarity function SV(0) vs. the conduction delay τ; b the bifurcation diagram of ISI of the first neuron vs. the conduction delay τ; c the bifurcation diagram of ISI of the second neuron vs. the conduction delay τ, where the coupling strength Hsyn = 1

on the synchronization of the neurons? The bifurcation diagrams of ISI of the two neurons and the similarity function SV(0) with respect to the conduction delay τ are shown in Fig. 10.11. There is a clear window in which the similarity function SV(0) is drastically reduced while the two coupled Chay neurons fire periodically. Hence, the conduction delay can enhance synchronization when the coupled Chay neurons are nearly in out-of-phase synchronization. However, it is noted that the effective range of delay for enhancing synchronization is very narrow.

Finally, if the two coupled Chay neurons achieve out-of-phase synchronization without conduction delay, for Hsyn = 6 as shown in Fig. 10.9c, it turns out that the firing behavior can be greatly changed by the conduction delay. The bifurcation diagrams of ISI and the corresponding similarity function SV(0) vs. the conduction delay are illustrated in Fig. 10.12. It is shown that the similarity function SV(0) can be drastically reduced for more values of the conduction delay. This implies the enhancement of in-phase synchronization of the two synaptically coupled neurons due to the conduction delay. Moreover, we can see in Fig. 10.12 that the firings of the two coupled
[Figure 10.11 near here]
Fig. 10.11 a The similarity function SV(0) vs. the conduction delay τ; b the bifurcation diagram of ISI of the first neuron vs. the conduction delay τ; c the bifurcation diagram of ISI of the second neuron vs. the conduction delay τ, where the coupling strength Hsyn = 3.5

[Figure 10.12 near here]
Fig. 10.12 a The similarity function SV(0) vs. the conduction delay τ; b the bifurcation diagram of ISI of the first neuron vs. the conduction delay τ; c the bifurcation diagram of ISI of the second neuron vs. the conduction delay τ, where the coupling strength Hsyn = 6
[Figure 10.13 near here]
Fig. 10.13 a The time series of the membrane potentials of two coupled Chay neurons; b the phase portrait on the (V1,V2) plane; c the phase portrait on the (n1,n2) plane; d the phase portrait on the (C1,C2) plane, where the conduction delay τ = 2 and the coupling strength Hsyn = 6

neurons become periodic during the enhancement of synchronization. There are two clear windows in which the synchronization and regularization of the coupled neurons are greatly enhanced due to the conduction delay.

However, it is noted that the enhanced synchronization is actually a bursting synchronization, with the spikes only weakly correlated, as shown in Fig. 10.13. Furthermore, it is illustrated by Fig. 10.13b–d that the synchronization of the slow subsystems of the coupled neurons is stronger than that of the fast subsystems, because the degree of correlation of the two slow variables C1 and C2 is higher than that of the corresponding fast variables V1 and V2 or n1 and n2.

For stronger coupling strength Hsyn, a similar investigation can be conducted (see Fig. 10.14). It is shown that as the coupling strength Hsyn increases, the number of windows in which the synchronization and regularization of the coupled neurons are enhanced increases. Moreover, it is seen that synchronization cannot be enhanced at high or low conduction delays.
[Figure 10.14 near here]
Fig. 10.14 a The similarity function SV(0) with respect to the conduction delay τ; b the bifurcation diagram of ISI of neuron 1 vs. the conduction delay τ; c the bifurcation diagram of ISI of neuron 2 vs. the conduction delay τ, where the coupling strength Hsyn = 8

10.5 Synchronization and Firing Patterns in Coupled Neurons with Delayed Excitatory Synapses

For the fast spiking (FS) neuron model (see Appendix 10.4), the dynamics of the voltage is given by

C dV/dt = gNa m³h (VNa − V) + gkv n² (Vk − V) + gl (Vl − V) + Iext + Isyn,

where Isyn is the synaptic current resulting from all presynaptic neurons; its dynamics is described by a model of receptor binding [24]:
Isyn = −gsyn r(V − Vsyn),    (10.7)

dr/dt = αT(1 − r) − βr,    (10.8)

T = 1/(1 + exp(−Vpre)),    α = 1/τrise − β,    β = 1/τdecay,    (10.9)
[Figure 10.15 near here]
Fig. 10.15 a The bifurcation diagram of equilibria vs. the parameter Iext in the FS neuron model; b the corresponding F–Iext curve

where Vsyn is the synaptic reversal potential, Vpre is the membrane potential of the presynaptic neuron, τrise is the rise time constant of the synapse, τdecay is the decay time constant of the synapse, and gsyn is the coupling strength. We set τrise = 0.01 ms and Vsyn = 40 mV for the excitatory chemical connections. Iext, τdecay, and gsyn are used as the control parameters.
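A small MATLAB sketch of the receptor-binding synapse (10.7)–(10.9) follows; the numerical values of τdecay and gsyn are the ones used below and are repeated here only for illustration:

% Sketch: receptor-binding synapse of (10.7)-(10.9).
tau_rise = 0.01; tau_decay = 6.26; g_syn = 1.53; V_syn = 40;
beta  = 1/tau_decay;
alpha = 1/tau_rise - beta;
T     = @(Vpre) 1./(1 + exp(-Vpre));             % transmitter concentration
drdt  = @(r,Vpre) alpha*T(Vpre).*(1-r) - beta*r; % right-hand side of (10.8)
Isyn  = @(r,V) -g_syn*r.*(V - V_syn);            % synaptic current (10.7)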
For a single FS neuron, as the parameter Iext is changed, periodic firing arises at Iext ≈ 0.35 through a saddle-node bifurcation. At Iext = 127.5, the periodic firing dies out through an inverse Hopf bifurcation (see Fig. 10.15a). Hence, the FS neuron exhibits type-I excitability. The corresponding frequency vs. the external stimulus Iext, that is, the F–Iext curve, is shown in Fig. 10.15b; it starts from small frequencies and varies continuously.
In the following, the effect of the time delay on synchronization in coupled FS
neurons will be studied. Suppose that the function that describes the influence of the
ith unit on the jth unit at time t depends on the state of the ith unit at some earlier
time t − τ . For example, the dynamics of two coupled FS neurons with a synaptic
delay is described by

C dV1,2/dt = gNa m³1,2 h1,2 (VNa − V1,2) + gkv n²1,2 (Vk − V1,2) + gl (Vl − V1,2) + Iext + gsyn r1,2(t − τ)(Vsyn − V1,2),

dx1,2/dt = (x∞ − x1,2)/τx(V1,2), (x = m, h, n),

dr1,2/dt = αT(1 − r1,2) − βr1,2,
where τ is the delay of information transmission through the synapse between the two neurons, and the subscripts 1 and 2 refer to neurons 1 and 2, respectively.
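As an illustration, the following minimal Python sketch (ours, not from the chapter) integrates this delayed system with a forward Euler scheme, keeping the recent history of the gating variable r in a ring buffer so that r(t − τ) can be read back. The step size, the initial conditions, the zero initial history, and the reading that r1,2 is the receptor gate on neuron 1, 2 driven by the partner's membrane potential are assumptions:

    import numpy as np

    # Parameters from Sect. 10.5 and Appendix 10.4
    Cm, gNa, gKv, gL = 1.0, 112.0, 224.0, 0.1
    VNa, VK, VL, Vsyn = 55.0, -97.0, -70.0, 40.0
    Iext, gsyn = 4.0, 1.53
    tau_rise, tau_decay = 0.01, 6.26
    beta_r = 1.0 / tau_decay
    alpha_r = 1.0 / tau_rise - beta_r          # Eq. (10.9)
    tau, dt, t_end = 1.0, 0.01, 500.0          # delay, step, duration (assumed)

    def rates(V):
        # Rate functions of the FS model (Appendix 10.4)
        am = 40.0 * (75.0 - V) / (np.exp((75.0 - V) / 13.5) - 1.0)
        bm = 1.2262 * np.exp(-V / 42.248)
        ah = 0.0035 * np.exp(-V / 24.186)
        bh = 0.017 * (-51.25 - V) / (np.exp((-51.25 - V) / 5.2) - 1.0)
        an = (95.0 - V) / (np.exp((95.0 - V) / 11.8) - 1.0)
        bn = 0.025 * np.exp(-V / 22.222)
        return am, bm, ah, bh, an, bn

    nd = int(round(tau / dt))                  # delay measured in steps
    V = np.array([-70.0, -55.0])               # slightly different initial states
    m = np.zeros(2); h = np.ones(2); n = np.zeros(2); r = np.zeros(2)
    r_hist = np.zeros((nd + 1, 2))             # ring buffer for r over the last tau
    for k in range(int(t_end / dt)):
        r_hist[k % (nd + 1)] = r               # record r(t)
        r_del = r_hist[(k - nd) % (nd + 1)]    # read back r(t - tau)
        am, bm, ah, bh, an, bn = rates(V)
        Isyn = gsyn * r_del * (Vsyn - V)       # delayed synaptic current
        dV = (gNa * m**3 * h * (VNa - V) + gKv * n**2 * (VK - V)
              + gL * (VL - V) + Iext + Isyn) / Cm
        T = 1.0 / (1.0 + np.exp(-V[::-1]))     # gate driven by the partner's voltage
        r = r + dt * (alpha_r * T * (1.0 - r) - beta_r * r)
        m += dt * (am * (1.0 - m) - bm * m)
        h += dt * (ah * (1.0 - h) - bh * h)
        n += dt * (an * (1.0 - n) - bn * n)
        V = V + dt * dV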
To begin with, we fix the parameters Iext = 4, τdecay = 6.26 and gsyn = 1.53.
The effect of delay on the coupled neurons is shown in Fig. 10.16. It can be seen
clearly that as τ increases, the coupled neurons fire periodically or chaotically. More
Fig. 10.16 a The bifurcation diagram of Vmax of neuron 1 vs. the delay τ in the coupled FS neuron
model; b an enlargement of a part of a
importantly, it is shown that there are two kinds of transition processes within the
same periodic pattern, namely, the transition from one period-3 firing to another one
for τ between 0.26 and 0.28 (see Fig. 10.16a, b) as well as that from one period-
1 firing to another one for τ between 2.5 and 3.0 (see Fig. 10.16a), which will be
discussed in what follows.
To carry out further analysis, define the phase of a neuron as follows:

φ_k(t) = 2π (t − t_n^k)/(t_{n+1}^k − t_n^k) + 2nπ, t_n^k ≤ t ≤ t_{n+1}^k (k = 1, 2), (10.10)
where {t_n^k} represents the spike sequence of the kth neuron. The phase variation of two neurons can be studied through the instantaneous phase difference between them, ΔΦ(t) = |φ1(t) − φ2(t)|. If ΔΦ = lim_{t→+∞} ΔΦ(t) = 0 (or π), then the coupled neurons realize in-phase (or anti-phase) synchronization.
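A minimal sketch (with hypothetical helper names) of evaluating (10.10) and ΔΦ(t) from recorded spike times is:

    import numpy as np

    def phase(t, spikes):
        # Eq. (10.10): the phase grows by 2*pi between successive spikes
        k = np.searchsorted(spikes, t, side='right') - 1
        if k < 0 or k + 1 >= len(spikes):
            return np.nan                      # t lies outside the recorded train
        return 2*np.pi*(t - spikes[k])/(spikes[k+1] - spikes[k]) + 2*np.pi*k

    def delta_phi(t, spikes1, spikes2):
        return abs(phase(t, spikes1) - phase(t, spikes2))

    # Example: two periodic trains shifted by half a period are anti-phase
    s1 = np.arange(0.0, 100.0, 2.0)
    s2 = s1 + 1.0
    ts = np.linspace(10.0, 90.0, 200)
    print(np.mean([delta_phi(t, s1, s2) for t in ts]))   # approximately pi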
It can be shown from Figs. 10.16 and 10.17 that there are two essentially different
transition processes as follows:

1. Transition from in-phase synchronization of one period-3 firing to antiphase synchronization of another one for τ between 0.26 and 0.28 (see Figs. 10.16b and 10.17a). This process is somewhat complicated and includes some intermediate nonsynchronous states; here we call it a "continuous transition." This transition process is clearly displayed in Fig. 10.18, where τ = 0.265, 0.27, 0.275, and 0.28, respectively.
2. Transition from antiphase synchronization of one period-1 firing to in-phase synchronization of another one. There is no intermediate nonsynchronous state during the transition but a sudden jump between them; hence it is called a "jump transition." The corresponding numerical simulations are shown in Fig. 10.19.

Fig. 10.17 Two cases of the variation of ΔΦ with respect to the time delay τ: a continuous transition; b jump transition

It should be noted that there exist even more complex transitions in other ranges
of τ , including chaotic and periodic firing modes between the antiphase synchronous
states of period-3 and period-1 firings.
For other combinations of parameters, similar investigations can be conducted, and it is found that the transition modes derived above are generic in two synaptically coupled FS neurons with time delay. For example, taking Iext = 4, τdecay = 5.5, and gsyn = 1.53, the corresponding numerical simulations are shown in Fig. 10.20, which are similar to Fig. 10.16. Moreover, it is shown in Fig. 10.20b that chaotic firing appears in a narrow range 0.50 < τ < 0.51 between two antiphase synchronous period-1 firings.

10.6 Delay Effect on Multistability and Spatiotemporal Dynamics of Coupled Neuronal Activity [14]

The relationship between spatial profiles of neural interactions and spatiotemporal patterns of neuronal activity can be investigated in modeling studies. It was
shown that delays might lead to homogeneous oscillations in inhibitory networks
with homogeneous, random connectivity. It was also shown that when interactions
were spatially structured, delays induced a wealth of dynamical states with different
spatiotemporal properties and domains of multistability in [14].
In [14], the authors considered the following equation:

α ṁ = −m + Φ[I(x) + ∫ dy J(|x − y|) m(y, t − τ)], (10.11)

where m(x) is the activity of neurons at a one-dimensional location x on a periodic ring [−π, π], α is the time constant of the rate dynamics (α = 1 in the following), and Φ is the steady-state current-to-rate transfer function with Φ(I) = I if I > 0 and Φ(I) = 0 otherwise. The total synaptic input is split into the external current I(x)

Fig. 10.18 The transition process from in-phase synchronization to antiphase synchronization in
period-3 firings. The phase plots in (V1 ,V2 )-plane with the attractors of neuron 1 are: a and b
in-phase synchronization for τ = 0.265; c and d intermediate nonsynchronous state for τ = 0.27;
e and f intermediate nonsynchronous state for τ = 0.275; g and h antiphase synchronization for
τ = 0.28

Fig. 10.19 The transition process from antiphase synchronization to in-phase synchronization in
period-1 firings. The phase plots in (V1 ,V2 )-plane are: a antiphase synchronization for τ = 2.65; b
in-phase synchronization for τ = 2.7


Fig. 10.20 a The bifurcation diagram of Vmax of neuron 1 vs. the delay τ in the coupled FS neuron
model; b an enlargement of a part of a

and the synaptic current due to the presynaptic activity at a location y with a weight J(|x − y|) and a delay τ. In the absence of delay (i.e., τ = 0), the dynamics of (10.11) for a stationary, homogeneous external input converge to a stable fixed point, for which the neuronal activity is either homogeneous or localized, depending on the spatial modulation of the interactions. For the case of τ > 0, the stability of the stationary uniform state with respect to perturbations of wave number n is given by the dispersion relation
with respect to perturbations of wave number n is given by the dispersion relation

λ = −1 + Jn e−λ τ ,

where  π
1
Jn = dyJ(y) × cos ny.
2π −π

A steady instability of the nth mode occurs for Jn = 1, while for Jn cos(τω ) = 1
there is an oscillatory instability with frequency ω = − tan(τω ). Hence, four types
Fig. 10.21 (Cited from [14]) Phase diagram of the rate model (10.11) for τ = 0.1. The states correspond to the patterns in Fig. 10.22. Regions of bistability are indicated by hyphens, e.g., OU-SW. Reprinted with permission from [14]. Copyright (2005) by the American Physical Society

of linear instability of the stationary uniform state are possible, including (1) a firing-rate instability (ω = 0, n = 0), (2) a Hopf instability (ω ≠ 0, n = 0), (3) a Turing instability (ω = 0, n ≠ 0), and (4) a Turing–Hopf instability (ω ≠ 0, n ≠ 0). In what follows, the characteristics and stability of nonlinear firing patterns arising from these instabilities are studied. For simplicity, assume that the interaction has only two nonzero Fourier components: J(|x − y|) = J0 + J1 cos(x − y). For J0 > |J1|, the interaction is purely excitatory, while for J0 < −|J1| it is purely inhibitory. For J1 > |J0|, the connectivity is locally excitatory and inhibitory at larger distances, while for J1 < −|J0| the inverse is true.
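The oscillatory-instability threshold can be located numerically from the two conditions above. In the following sketch (the parameter values are illustrative, not those of [14]) the critical frequency and coupling are computed for a given τ, and the two nonzero modes of J(|x − y|) = J0 + J1 cos(x − y) are classified; under the cosine-transform definition of J_n given above, J0 enters mode n = 0 and J1/2 enters mode n = 1:

    import numpy as np
    from scipy.optimize import brentq

    tau = 0.1
    # Oscillatory instability: J_n cos(omega*tau) = 1 and omega = -tan(omega*tau).
    # The first positive root of omega + tan(omega*tau) = 0 lies in (pi/(2*tau), pi/tau).
    w_c = brentq(lambda w: w + np.tan(w * tau),
                 np.pi / (2 * tau) + 1e-6, np.pi / tau - 1e-6)
    Jn_c = 1.0 / np.cos(w_c * tau)          # critical (negative) value of J_n
    print('omega_c =', w_c, ' Jn_c =', Jn_c)

    J0, J1 = -80.0, -46.0                   # one of the points used in Fig. 10.22
    for n, Jn in {0: J0, 1: J1 / 2.0}.items():
        print('mode', n,
              'steady instability:', Jn > 1.0,
              'oscillatory instability:', Jn < Jn_c)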
Analytical and numerical investigation of (10.11) reveals a phase diagram on (J0 ,
J1 )-plane shown in Fig. 10.21, in which one can discern eight states of activity:
stationary uniform (SU), stationary bump (SB), oscillatory bump (OB), oscillatory
uniform (OU), traveling waves (TW), standing waves (SW), lurching waves (LW),
and aperiodic dynamics (A). Figure 10.22 gives the space–time plots of typical pat-
terns of activity in the different regions shown in Fig. 10.21.
Fig. 10.22 (Cited from [14]) Space–time plots of typical patterns of activity in the different regions of Fig. 10.21 shown over five units of time. In the left-hand column from top to bottom, J0 = −80 and J1 = 15, 5, −46, −86, respectively, corresponding to the states OB, OU, SW, and A in Fig. 10.21. In the right-hand column from top to bottom, J0 = −10 and J1 = 5, −38, −70, −80, respectively, corresponding to the states SB, TW, SW, and A. τ = 0.1 and I is varied to maintain the mean firing rate at 0.1. Dark regions indicate higher levels of activity in gray scale. Reprinted with permission from [14]. Copyright (2005) by the American Physical Society

The results presented are for a threshold-linear transfer function and simplified
connectivity. While the simplicity of (10.11) allows for analysis, firing-rate models
do not necessarily provide an accurate description of the dynamics of more realistic
networks of spiking neurons (NSN). To what extent are the dynamics in (10.11) rel-
evant for understanding the patterns of activity observed in the NSN? Now consider
a one-dimensional network of conductance-based neurons with periodic boundary
conditions, composed of two populations of n neurons: excitatory E and inhibitory I.
All neurons are described by a Hodgkin–Huxley type model (see Appendix 10.1)
with one somatic compartment, in which Na and K ionic currents shape the action
potentials. The probability of connection from a neuron in the population A ∈ (E, I)
to a neuron in another population B is pBA, where pBA depends on the distance r between them as pBA = p0^BA + p1^BA cos(r). Synaptic currents are modeled as

Isyn,A = −gA s(t)(V − VA), A ∈ (E, I),

where V is the voltage of the postsynaptic neuron, VA is the reversal potential of the synapse (0 mV for A = E and −80 mV for A = I), gA is the maximum conductance change, and s(t) is a variable which, given a presynaptic spike at time t* − δ, takes the form

s(t) = [1/(τ1 − τ2)] (e^{−(t−t*)/τ1} − e^{−(t−t*)/τ2}),
where δ is the delay and τ1 and τ2 are the rise and decay times. Each neuron receives an external, excitatory synaptic input as a Poisson process with a rate νext, modeled as the synaptic currents above with a maximum conductance change of gext. Choose the probabilities pIE = pEE = pE, pEI = pII = pI and identical synaptic time constants for excitatory and inhibitory connections, τ1 = 1 ms and τ2 = 3 ms. This creates an effective one-population network with an effective coupling similar to that of the rate model. Figure 10.23 shows eight typical firing patterns in the NSN. The figures have been arranged to allow comparison with those in Fig. 10.22. Of the eight regions of bistability displayed by (10.11), at least one (OU-SW) is also present in the NSN in an analogous parameter regime (see Fig. 10.24).
10.7 Closure

In modeling realistic neuronal networks, it is important to consider time delays explicitly in the description of the transfer of information. This enables us to model the complicated processes that take place in real synapses by interaction terms with an explicit time lag in the dynamical equations of coupled neurons. It is known that time delays in a realistic range can change the qualitative properties of the dynamics; for example, they can introduce or destroy stable oscillations, enhance or suppress synchronization between different neurons, and generate spatiotemporal patterns. Some recent results on the complex dynamical behavior of delay-coupled neurons have been presented in this chapter. Most of the phenomena are illustrated by numerical simulations; however, the underlying physiological mechanisms remain unclear, and further qualitative analyses are necessary.
Although the importance of delay effects on neural firing patterns has been recognized, there is increasing interest in more complex delay-related behavior. For example, it is attractive to study the spatiotemporal behavior of networks with a larger number of delay-coupled neurons, possibly with additive or multiplicative noise and complex topological structures. The results derived in that framework allow us to understand the origin of the diversity of dynamical states observed in large networks of spiking neurons. The phenomenon of
Fig. 10.23 (Cited from [14]) Typical firing patterns in a NSN. The network consists of two pop-
ulations of 2,000 neurons. In the left-hand column from top to bottom: a localized bump with
oscillations, homogeneous oscillations, a period-doubled state of oscillating bumps, and a chaotic
state. In the right-hand column from top to bottom: a steady and localized bump, the stationary uni-
form state, oscillating bumps and a chaotic state. Reprinted with permission from [14]. Copyright
(2005) by the American Physical Society

enhanced neural synchrony by delay has important implications, in particular, for understanding the synchronization of distant neurons and information processing in the brain. Moreover, synchronization of neurons has been reported to play a crucial role in the emergence of pathological rhythmic brain activity in Parkinson's disease, essential tremor, and epilepsies. Consequently, techniques for suppressing undesired neural synchrony by means of delay will be an important clinical topic. Therefore, in spite of successful experimental and clinical studies, followed by biophysical experiments and medical applications, the development of effective theoretical and simulation techniques for the study of synchronization and firing
Fig. 10.24 (Cited from [14]) Bistability between uniform oscillations and an oscillating bump
state in a purely inhibitory network. A 30-ms inhibitory pulse is applied to neurons 500–1,500
around time 200. This induces a state in which two groups of neurons oscillate out of phase with
one another; cf. voltage trace of neuron 1,000 (middle) and neuron 1 (bottom). Reprinted with
permission from [14]. Copyright (2005) by the American Physical Society

patterns in coupled neuronal systems with time delay remains a challenging problem in computational neuroscience and mathematical biology.

Acknowledgments This work was supported by the National Natural Science Foundation of China (Nos. 10432010 and 10872014) and the China Postdoctoral Science Foundation (No. 20070410022). The authors also acknowledge permission to reprint material from Roxin A, Brunel N, Hansel D (2005) Phys. Rev. Lett. 94: 238103.

Appendix 10.1 The Hodgkin–Huxley (HH) Model

Hodgkin and Huxley performed experiments on the giant axon of the squid and found
three different types of ionic current: sodium, potassium, and leak currents. Specific
voltage-dependent ion channels, one for sodium and another one for potassium, con-
trol the flow of those ions through the cell membrane. The HH neuron model [25] is
described by the following differential equations:

C dV/dt = gNa m³h(VNa − V) + gK n⁴(VK − V) + gL(VL − V) + Iext,

dm/dt = αm(V)(1 − m) − βm(V)m,
dh/dt = αh(V)(1 − h) − βh(V)h,
dn/dt = αn(V)(1 − n) − βn(V)n,
where C is the membrane capacitance, V is the membrane potential, and Iext is an externally applied current. VNa, VK, and VL are the reversal potentials for Na+, K+, and the leakage ions, respectively, and gNa, gK, and gL are the maximal conductances. m and h are the activation and inactivation variables of the sodium channel, and n is the activation variable of the potassium channel, with

αn = 0.01(V + 10)/(e^{0.1V+1} − 1), βn = 0.125 e^{V/80},
αm = 0.1(25 + V)/(e^{0.1V+2.5} − 1), βm = 4 e^{V/18},
αh = 0.07 e^{V/20}, βh = 1/(e^{0.1V+3} + 1).
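A minimal sketch of integrating these equations with a forward Euler scheme follows. The chapter does not list parameter values; the classical HH values (C = 1 μF cm⁻², gNa = 120, gK = 36, gL = 0.3 mS cm⁻², VNa = −115 mV, VK = 12 mV, VL = −10.613 mV) in the original HH sign convention implied by the rate functions above are assumed here, a convention in which the resting state is V ≈ 0 and a depolarizing stimulus corresponds to a negative Iext:

    import numpy as np

    Cm, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3    # classical HH values (assumed)
    VNa, VK, VL = -115.0, 12.0, -10.613        # original HH sign convention
    Iext = -10.0                               # depolarizing in this convention

    def rates(V):
        an = 0.01*(V + 10.0)/(np.exp(0.1*V + 1.0) - 1.0)
        bn = 0.125*np.exp(V/80.0)
        am = 0.1*(V + 25.0)/(np.exp(0.1*V + 2.5) - 1.0)
        bm = 4.0*np.exp(V/18.0)
        ah = 0.07*np.exp(V/20.0)
        bh = 1.0/(np.exp(0.1*V + 3.0) + 1.0)
        return an, bn, am, bm, ah, bh

    dt, steps = 0.01, 50000
    V, m, h, n = 0.0, 0.053, 0.596, 0.318      # resting values at V = 0
    for k in range(steps):
        an, bn, am, bm, ah, bh = rates(V)
        dV = (gNa*m**3*h*(VNa - V) + gK*n**4*(VK - V) + gL*(VL - V) + Iext)/Cm
        m += dt*(am*(1.0 - m) - bm*m)
        h += dt*(ah*(1.0 - h) - bh*h)
        n += dt*(an*(1.0 - n) - bn*n)
        V += dt*dV
    # V now traces repetitive firing (spikes are downward in this convention)
    print(V)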

Appendix 10.2 The Morris–Lecar (ML) Model

The ML neuron model [26], a model for electrical activity in the barnacle muscle fiber, is a simplified version of the HH neuron model that describes the firing and refractory properties of real neurons. The dynamics of the reduced ML neuron are described by the following differential equations:

dV/dt = gCa m∞(V)(VCa − V) + gK ω(VK − V) + gL(VL − V) − I,

dω/dt = λ∞(V)(ω∞(V) − ω),

with

m∞(V) = 0.5[1 + tanh((V − Va)/Vb)],
ω∞(V) = 0.5[1 + tanh((V − Vc)/Vd)],
λ∞(V) = (1/3) cosh((V − Vc)/(2Vd)),
where t is the time variable, V is the membrane action potential of the ML neuron
model, and ω is the probability of potassium (K+ ) channel activation. The values of
the parameters and their detailed explanation are given in [26].

Appendix 10.3 The Chay Model

Based on the HH model, the Chay neuron model [27] contains HH-type ion channel dynamics (sodium (Na+) and potassium (K+)) as well as that of calcium (Ca2+). It has been shown that the Chay model can simulate many discharge patterns of β cells, neuronal pacemakers, and cold receptors. The Chay model is described by the following differential equations in three variables:

dV/dt = gI m∞³h∞(VI − V) + gkv n⁴(Vk − V) + gkc [C/(1 + C)](Vk − V) + gl(Vl − V),

dn/dt = (n∞ − n)/τn,

dC/dt = ρ[m∞³h∞(Vc − V) − Kc C],
dt
where t is the time variable, V represents the membrane action potential of the Chay
neuron, n is the probability of potassium (K+ ) channel activation, and C is the dimen-
sionless intracellular concentration of calcium ion (Ca2+ ); and VI , Vk , Vc and Vl are
the reversal potentials for the mixed Na+ – Ca2+ , K+ , Ca2+ channels, and the leakage
ions, respectively. gI , gkv , gkc and gl are the maximal conductances divided by the
membrane capacitance. ρ is a proportional constant. Kc is the rate constant for efflux
of intracellular Ca2+ ion. τn is the relaxation time of the voltage-gated K+ channel;
n∞ is the steady state value of n; m∞ and h∞ are the probability of activation and
inactivation of the mixed channel. The explicit expressions for m∞ , h∞ , and n∞ can
be written as y∞ = αy /(αy + βy ), where y stands for m, n, and h, and

αm = 0.1(25 + V)/(1 − e^{−0.1V−2.5}), βm = 4 e^{−(V+50)/18},
αh = 0.07 e^{−0.05V−2.5}, βh = 1/(1 + e^{−0.1V−2}),
αn = 0.01(20 + V)/(1 − e^{−0.1V−2}), βn = 0.125 e^{−(V+30)/80}.

In the numerical simulations of Sect. 10.4, the parameter values are chosen as gI = 1800, gkv = 1700, gkc = 10, gl = 7, ρ = 0.27, Kc = 3.3/18, VI = 100, Vk = −75, and Vl = −40, and Vc is used as a control parameter.
Appendix 10.4 The Fast-Spiking (FS) Model

Many models of neurons have been proposed in the past with the aim of characterizing their electrophysiological properties and elucidating their roles in various cortical functions. Erisir et al. [28] carried out pharmacological experiments on the interneurons of the mouse somatosensory cortex and found that Kv3.1/3.2 voltage-gated K+ channels play a significant role in creating the characteristic features of fast-spiking (FS) cells. To describe these features, the following model was proposed:

C dV/dt = gNa m³h(VNa − V) + gkv n²(Vk − V) + gl(Vl − V) + Iext + Isyn,

dx/dt = (x∞ − x)/τx(V), (x = m, h, n),

x∞ = αx/(αx + βx), τx(V) = 1/(αx + βx),

αm = 40(75 − V)/[exp((75 − V)/13.5) − 1],
βm = 1.2262 exp(−V/42.248),
αh = 0.0035 exp(−V/24.186),
βh = 0.017(−51.25 − V)/[exp((−51.25 − V)/5.2) − 1],
αn = (95 − V)/[exp((95 − V)/11.8) − 1],
βn = 0.025 exp(−V/22.222),
where V is the membrane potential, m and h are the activation and inactivation vari-
ables of the sodium channel, respectively, and n is the activation variable of the potas-
sium channel.
In Sect. 10.5, the parameter values are chosen as VNa = 55.0 mV, Vk = −97.0 mV, Vl = −70.0 mV, gNa = 112 mS cm−2, gkv = 224 mS cm−2, gl = 0.1 mS cm−2, and C = 1 μF cm−2. Iext is an external stimulus current, and Isyn is the synaptic current resulting from all presynaptic neurons.
References

1. Scott A (2002) Neuroscience – A Mathematical Primer. Springer, New York, NY
2. Bear MF, Connors BW, Paradiso MA (2002) Neuroscience – Exploring the Brain, 2nd edn. Lippincott Williams & Wilkins, Baltimore, MD
3. Borisyuk A, Friedman A, Ermentrout B, Terman D (2005) Mathematical Neuroscience. Springer, Berlin
4. Koch C, Laurent G (1999) Science 284: 96–98
5. Crook SM, Ermentrout GB, Vanier MC, Bower JM (1997) J. Comp. Neurosci. 4: 161–172
6. Campbell SA (2007) Time delays in neural systems. In: McIntosh AR, Jirsa VK (eds) Handbook of Brain Connectivity. Springer, New York, NY
7. Foss J, Milton JG (2000) J. Neurophysiol. 84: 975–985
8. Reddy DV, Sen A, Johnston GL (2000) Phys. Rev. Lett. 85: 3381–3384
9. Wirkus S, Rand R (2002) Nonlinear Dynam. 30: 205–221
10. Rossoni E, Chen YH, Ding MZ, Feng JF (2005) Phys. Rev. E 71: 061904
11. Dhamala M, Jirsa VK, Ding MZ (2004) Phys. Rev. Lett. 92: 074104
12. Burić N, Ranković D (2007) Phys. Lett. A 363: 282–289
13. Rosenblum M, Pikovsky A (2004) Phys. Rev. E 70: 041904
14. Roxin A, Brunel N, Hansel D (2005) Phys. Rev. Lett. 94: 238103
15. Hasegawa H (2004) Phys. Rev. E 70: 021912
16. Sainz-Trápaga M, Masoller C, Braun HA, Huber MT (2004) Phys. Rev. E 70: 031904
17. Burić N, Todorović K, Vasović N (2007) Chaos, Solitons and Fractals, doi:10.1016/j.chaos.2007.08.067
18. Burić N, Todorović K, Vasović N (2007) Phys. Rev. E 75: 067204
19. Izhikevich EM (2000) Int. J. Bifur. Chaos 10: 1171–1266
20. Boccaletti S, Kurths J, Osipov G (2002) Phys. Rep. 366: 1–101
21. Hoppensteadt FC, Izhikevich EM (1997) Weakly Connected Neural Networks. Springer, New York, NY
22. Wang QY, Lu QS (2005) Chinese Phys. Lett. 22: 543–546
23. Wang QY, Lu QS, Zheng YH (2005) Acta Biophys. Sinica 21: 449–455
24. Destexhe A, Mainen ZF, Sejnowski TJ (1994) Neural Comput. 6: 14–18
25. Hodgkin AL, Huxley AF (1952) J. Physiol. 117: 500–544
26. Morris C, Lecar H (1981) Biophys. J. 35: 193–213
27. Chay TR (1985) Phys. D 16: 233–242
28. Erisir A, Lau D, Rudy B, Leonard CS (1999) J. Neurophysiol. 82: 2476–2489
Chapter 11
Delayed Random Walks: Investigating
the Interplay Between Delay and Noise

Toru Ohira and John Milton

Abstract A model for a 1-dimensional delayed random walk is developed by generalizing the Ehrenfest model of a discrete random walk evolving on a quadratic, or harmonic, potential to the case of non-zero delay. The Fokker–Planck equation derived from this delayed random walk (DRW) is identical to that obtained starting from the delayed Langevin equation, i.e. a first-order stochastic delay differential equation (SDDE). Thus this DRW and SDDE provide alternate, but complementary ways for describing the interplay between noise and delay in the vicinity of a fixed point. The DRW representation lends itself to determinations of the joint probability function and, in particular, to the auto-correlation function for both the stationary and the transient states. Thus the effects of delay are manifested through experi-
mentally measurable quantities such as the variance, the correlation time, and the
power spectrum. Our findings are illustrated through applications to the analysis of
the fluctuations in the center of pressure that occur during quiet standing.

Keywords: Delay · Random walk · Stochastic delay differential equation · Fokker–Planck equation · Auto-correlation function · Postural sway

11.1 Introduction

Feedback control mechanisms are ubiquitous in physiology [2, 8, 17, 22, 34, 41,
47, 48, 60, 63, 65–67]. There are two important intrinsic features of these control
mechanisms: (1) all of them contain time delays and (2) all of them are continually
subjected to the effects of random, uncontrolled fluctuations (herein referred to as
“noise”). The presence of time delays is a consequence of the simple fact that the
different sensors that detect changes in the controlled variable and the effectors that
act on this variable are spatially distributed. Since transmission and conduction times
are finite, time delays are unavoidable. As a consequence, mathematical models for
feedback control take the form of stochastic delay differential equations (SDDE);


an example is the delayed Langevin equation or first-order SDDE with additive noise [20, 25, 33, 36, 37, 42, 43, 53, 59],

dx(t) = −kx(t − τ)dt + dW, (11.1)

where x(t), x(t − τ) are, respectively, the values of the state variable at times t and t − τ, τ is the time delay, k is a constant, and W describes the Wiener process. In order to obtain a solution of (11.1) it is necessary to define an initial function, x(t) = Φ(t), t ∈ [−τ, 0], denoted herein as Φ0(t).
Understanding the properties of SDDEs is an important first step for interpret-
ing the nature of the fluctuations in physiological variables measured experimen-
tally [11, 37, 53]. However, an increasingly popular way to analyze these fluctuations
has been to replace the SDDE by a delayed random walk, i.e., a discrete random
walk for which the transition probabilities at the n-th step depend on the position of
the walker τ steps before [55–59]. Examples include human postural sway [49, 56],
eye movements [46], neolithic transitions [18], econophysics [23, 58], and stochastic
resonance-like phenomena [57]. What is the proper way to formulate the delayed
random walk so that its properties are equivalent to those predicted by (11.1)?
An extensive mathematical literature has been devoted to addressing issues related
to the existence and uniqueness of solutions of (11.1) and their stability [51, 52].
These fundamental mathematical studies have formed the basis for engineering
control theoretic studies of the effects of the interplay between noise and delay on the
stability of man-made feedback control mechanisms [6, 54]. Lost in these mathematical discussions of SDDEs is the fact that the nearly 100 years of careful experimental observation and physical insight that established the correspondence between (11.1) and an appropriately formulated random walk when τ = 0 [15, 16, 21, 31, 40, 45, 61] does not have its counterpart for the case when τ ≠ 0. Briefly the current state of
affairs is as follows. The continuous time model described by (11.1), referred to as
the delayed Langevin equation, and the delayed random walk must be linked by a
Fokker–Planck equation; i.e. a partial differential equation which describes the time
evolution of the probability density function. This is because all of these models
describe the same phenomenon and hence they must be equivalent in some sense.
When τ = 0 it has been well demonstrated that the Langevin equation and the random walk lead to the same Fokker–Planck equation provided that the random walk occurs in a harmonic, or quadratic, potential (the Ehrenfest model) [31]. Although it has been possible to derive the Fokker–Planck equation from (11.1) when τ ≠ 0 [19, 59], the form of the Fokker–Planck equation obtained from the delayed random walk has not yet been obtained.
Fokker–Planck equation for the random walk can be readily obtained by generalizing
the Ehrenfest model on a quadratic potential to non-zero delay. The importance of
this demonstration is that it establishes that (11.1) and this delayed random walk
give two different, but complementary views of the same process.
Since (11.1) indicates that the dynamics observed at time t depend on what hap-
pened at time t − τ , it is obvious that the joint probability function must play a
fundamental role in understanding the interplay between noise and delay. Moreover,
the auto-correlation function, c(Δ) ≡ ⟨x(t)x(t + Δ)⟩, is essential for the experimental descriptions of real dynamical systems [4, 14, 30]. This follows from the fact that
three measurements are required to fully describe a noisy signal: (1) its probability
density function (e.g., uniform, Gaussian); (2) its intensity; and (3) its correlation
time (e.g., white, colored). From a knowledge of c(Δ ) we can obtain an estimate of
the variance (Δ = 0) which provides a measure of signal intensity, the correlation
time, and the power spectrum. Armed with these quantities the experimentalist can
directly compare experimental observation with prediction. Surprisingly little atten-
tion has been devoted to the subject of the joint probability functions in the SDDE
literature.
The organization of this chapter is as follows. First, we review the simple random
walk that appears in standard introductory textbooks [1,5,40,45,62]. We use this sim-
ple random walk to introduce a variety of techniques that are used in the subsequent
discussion including the concepts of the generating and characteristic functions, joint
probability, and the inter-relationship between the auto-correlation function and the
power spectral density (Wiener–Khintchine theorem). The Fokker–Planck equation
for the simple random walk is the familiar diffusion equation [45]. Second, we dis-
cuss the Ehrenfest model for a discrete random walk in a quadratic, or harmonic,
potential in order to introduce the concept of stability into a random walk. Third,
we introduce a discrete delay into the Ehrenfest model. The Fokker–Planck equation
is obtained and is shown to be identical to that obtained starting from (11.1). In all
cases, particular attention is given to obtaining an estimate of c(Δ ) and to demonstrat-
ing how the presence of τ influences the correlation time and the power spectrum.
In the final section, we review the application of delayed random walk models to the
analysis of the fluctuations recorded during human postural sway.

11.2 Simple Random Walk

Analyses of random walks in their various forms lie at the core of statistical physics
[16,40,45,61,64] and their applications ranging from biology [5] to economics [44].
The simplest case describes a walker who is confined to move along a line by taking
identical discrete steps at identical discrete time intervals (Fig. 11.1). Let X(n) be the position of the walker after the n-th step. Assume that at zero time all walkers start at the origin, i.e., the initial condition is X(0) = 0, and that the probability, p, that the walker takes a step of unit length, ℓ, to the right (i.e., X(n + 1) − X(n) = +ℓ) is the same as the probability, q, that it takes a step of unit length to the left (i.e., X(n + 1) − X(n) = −ℓ), i.e., p = q = 0.5.
The total displacement, X, after n steps is

X = Σ_{i=1}^{n} ℓᵢ, (11.2)

where ℓᵢ = ±ℓ. Since steps to the right and to the left are equally probable, then after
a large number of steps, we have
Fig. 11.1 Conceptual view of simple random walks. The probability to take a step to the right is
p; a step to the left is q: (a) p > q; (b) p < q; and (c) p = q = 0.5

⟨X⟩ = Σ_{i=1}^{n} ⟨ℓᵢ⟩ = 0, (11.3)

where the notation ⟨···⟩ signifies the "ensemble" average.


Of course each time we construct, or realize, a particular random walk in this
manner, the relationship given by (11.3) provides us no information as to how far
a given walker is displaced from the origin after n steps. One way to consider the
displacement of each realization of a random walk is to compute X², i.e.,

X² = (ℓ₁ + ℓ₂ + ··· + ℓₙ)(ℓ₁ + ℓ₂ + ··· + ℓₙ) (11.4)
   = Σ_{i=1}^{n} ℓᵢ² + Σ_{i≠j} ℓᵢℓⱼ. (11.5)

If we now average over many realizations of the random walk we obtain

⟨X²⟩ = Σ_{i=1}^{n} ⟨ℓᵢ²⟩ + Σ_{i≠j} ⟨ℓᵢℓⱼ⟩. (11.6)

The first term is obviously nℓ². The second term vanishes since the direction of the step that a walker takes at a given instant does not depend on the direction of previous steps. In other words, the directions of the steps taken by the walker are uncorrelated and ℓᵢ and ℓⱼ are independent for i ≠ j. Hence we have

⟨X²⟩ = nℓ². (11.7)

The problem with this method of analysis of the random walk is that it is not readily transferable to more complex types of random walks. In particular, in order to use the notion of a random walk to investigate the properties of a stochastic delay differential equation, such as (11.1), it is necessary to introduce more powerful tools such as the characteristic, generating, and auto-correlation functions. Without loss of generality we assume that the walker takes a step of unit length, i.e., |ℓ| = 1.
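A minimal sketch verifying (11.3) and (11.7) by averaging over an ensemble of simulated walks:

    import numpy as np

    rng = np.random.default_rng(0)
    n_steps, n_walkers = 1000, 20000
    steps = rng.choice([-1, 1], size=(n_walkers, n_steps))  # p = q = 0.5, unit steps
    X = steps.sum(axis=1)                 # displacement of each walker after n steps
    print(X.mean())                       # ensemble average <X>, close to 0, Eq. (11.3)
    print((X**2).mean())                  # <X^2>, close to n_steps, Eq. (11.7)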

11.2.1 Probability Distribution Function

The probability that after n steps the walker attains a position r (X(n) = r) is P(X = r, n) = P(r, n), where P(r, n) is the probability distribution function and satisfies

Σ_{r=−∞}^{+∞} P(r, n) = 1.

By analogy with the use of the characteristic function in continuous dynamical systems [7, 14], the discrete characteristic function,

R(θ, n) = Σ_{r=−∞}^{+∞} P(r, n) e^{jθr}, (11.8)

where θ is the continuous "frequency" parameter, can be used to calculate P(r, n). Since R(θ, n) is defined in terms of a Fourier series whose coefficients are the P(r, n), the probability distribution after n steps can be represented in integral form as

P(r, n) = (1/(2π)) ∫_{−π}^{π} R(θ, n) e^{−jθr} dθ. (11.9)

In order to use R(θ, n) to calculate P(r, n) we first write down an equation that describes the dynamics of the changes in P(r, n) as a function of the number of steps, i.e.,

P(r, 0) = δ_{r,0}, (11.10)
P(r, n) = pP(r − 1, n − 1) + qP(r + 1, n − 1),

where δ_{r,0} is the Kronecker delta function defined by

δ_{r,0} = 1, (r = 0),
δ_{r,0} = 0, (r ≠ 0).

Second, we multiply both sides of (11.10) by e^{jθr}, and sum over r to obtain

R(θ, 0) = 1,
R(θ, n) = (pe^{jθ} + qe^{−jθ})R(θ, n − 1).

The solution of these equations is

R(θ, n) = (pe^{jθ} + qe^{−jθ})^n. (11.11)

Taking the inverse Fourier transform of (11.11) we eventually obtain

P(r, n) = \binom{n}{(r+n)/2} p^{(r+n)/2} q^{(n−r)/2}, (r + n = 2m), (11.12)
P(r, n) = 0, (r + n ≠ 2m),

where m is a nonnegative integer. For the special case that p = q = 0.5 this expression for P(r, n) simplifies to

P(r, n) = \binom{n}{(r+n)/2} (1/2)^n, (r + n = 2m), (11.13)
P(r, n) = 0, (r + n ≠ 2m),

and we obtain

⟨X(n)⟩ = 0, (11.14)
σ 2 (n) = n. (11.15)

When p > q the walker drifts toward the right (Fig. 11.1a) and when p < q toward
the left (Fig. 11.1b). The evolution of P(r, n) as a function of time when p = 0.6 is
shown in Fig. 11.2 (for the special case of Brownian motion, i.e., p = q = 0.5, see
Fig. 11.3).
Fig. 11.2 The probability density function, P(r, n), as a function of time for a simple random walk that starts at the origin with p = 0.6: (closed circle) n = 10, (open circle) n = 30, (open square) n = 100. We have plotted P(r, n) for even r only: for these values of n, P(r, n) = 0 when r is odd

Fig. 11.3 The probability density function, P(r, n), as a function of time for Brownian motion, i.e., a simple random walk that starts at the origin with p = 0.5: (closed circle) n = 10, (open circle) n = 30, (open square) n = 100. We have plotted P(r, n) for even r only: for these values of n, P(r, n) = 0 when r is odd
11.2.2 Variance

The importance of P(r, n) is that averaged quantities of interest, or expectations, such as the mean and variance can be readily determined from it. For a discrete variable, X, the moments can be determined by using the generating function, Q(s, n), i.e.,

Q(s, n) = Σ_{r=−∞}^{+∞} s^r P(r, n). (11.16)

The averaged quantities of interest are calculated by differentiating Q(s, n) with respect to s. By repeating arguments analogous to those we used to obtain P(r, n) from the characteristic function, we obtain

Q(s, n) = (ps + q/s)^n. (11.17)
The mean is obtained by differentiating Q(s, n) with respect to s:

∂Q(s, n)/∂s |_{s=1} ≡ ⟨X(n)⟩ = n(p − q). (11.18)

The second differentiation leads to

∂²Q(s, n)/∂s² |_{s=1} = ⟨X(n)²⟩ − ⟨X(n)⟩. (11.19)

The variance can be calculated from these two equations as σ²(n) = 4npq. The variance gives the intensity of the varying component of the random process, i.e., its AC component. The positive square root of the variance is the standard deviation, which is typically referred to as the root-mean-square (rms) value of the AC component of the random process. When ⟨X(n)⟩ = 0, the variance equals the mean square displacement.
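Since the number of right steps is binomially distributed, the walk can be written as X = 2K − n with K ~ Binomial(n, p), and the moments above can be checked directly; a short sketch:

    import numpy as np
    from scipy.stats import binom

    n, p = 200, 0.6
    q = 1.0 - p
    k = np.arange(n + 1)
    P = binom.pmf(k, n, p)               # P(r, n) of Eq. (11.12) with r = 2k - n
    r = 2*k - n
    mean = np.sum(r*P)
    var = np.sum(r**2*P) - mean**2
    print(mean, n*(p - q))               # mean, Eq. (11.18)
    print(var, 4*n*p*q)                  # variance, 4npq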

11.2.3 Fokker–Planck Equation

We note here briefly that the random walk presented here has a correspondence with the continuous time stochastic differential equation

dx/dt = μ + ξ(t), (11.20)

where ξ(t) is Gaussian white noise and μ is a "drift constant." Both from the random walk presented above and from this differential equation, we can obtain the Fokker–Planck equation [21, 45]:
∂P(x, t)/∂t = −v ∂P(x, t)/∂x + D ∂²P(x, t)/∂x², (11.21)

where v and D are constants. When there is no bias, μ = 0, p = q = 1/2, and we have v = 0, leading to the diffusion equation. This establishes a link between the Wiener process and the simple symmetric random walk.

11.2.4 Auto-correlation Function: Special Case

The stationary discrete auto-correlation function, C(Δ), provides a measure of how much average influence random variables separated Δ steps apart have on each other. Typically little attention is given to determining the auto-correlation function for a simple random walk. However, it is useful to have a baseline knowledge of what the auto-correlation looks like for a simple random process.
Suppose we measure the direction that the simple random walker moves each
step. Designate a step to the right as R, a step to the left as L, and a “flip” as an abrupt
change in the direction that the walker moves. Then the time series for a simple
random walker takes the form
R···R L···L R···R L···L ···, (11.22)

where each boundary between a run of Rs and a run of Ls marks a flip.
Assume that the step interval, δn, is so small that the probability that two flips occur within the same step is approximately zero. The auto-correlation function, C(Δ), where |Δ| ≥ |δn|, for this process will be

C(Δ) ≡ ⟨X(n)X(n + |Δ|)⟩ = A²(p0(Δ) − p1(Δ) + p2(Δ) − p3(Δ) + ···), (11.23)

where pκ(Δ) is the probability that in a time interval Δ exactly κ flips occur and A is the length of each step.
In order to calculate the pκ we proceed as follows. The probability that a flip
occurs in δ n is λ δ n, where λ is some suitably defined parameter. Hence the proba-
bility that no flip occurs is 1 − λ δ n. If n > 0 then the state involving precisely κ flips
in the interval (n, n + δ n) arises from either κ − 1 events in the interval (0, n) with
one flip in time δ n, or from κ events in the interval (0, n) and no new flips in δ n.
Thus
pκ (n + δ n) = pκ −1 (n)λ δ n + pκ (n)(1 − λ δ n) (11.24)

and hence we have

lim_{δn→0} [pκ(n + δn) − pκ(n)]/δn ≡ dpκ(n)/dn = λ[pκ−1(n) − pκ(n)] (11.25)
314 T. Ohira and J. Milton

for κ > 0. When κ = 0 we have

dp0(n)/dn = −λp0(n) (11.26)
and at n = 0
p0 (0) = 1. (11.27)
Equations (11.25)–(11.27) describe an iterative procedure to determine pκ. In particular we have

pκ(Δ) = (λ|Δ|)^κ e^{−λ|Δ|}/κ!. (11.28)

The required auto-correlation function C(Δ) is obtained by combining (11.23) and (11.28) as

C(Δ) = A² e^{−2λ|Δ|}. (11.29)
The auto-correlation function, C(Δ), and the power spectrum, W(f), are intimately connected. In particular, they form a Fourier transform pair,

W(f) = Δ Σ_{m=−(n−1)}^{n−1} C(mΔ) e^{−j2πfmΔ}, −1/(2Δ) ≤ f < 1/(2Δ), (11.30)

and

C(Δ) = ∫_{−1/(2Δ)}^{1/(2Δ)} W(f) e^{j2πfΔ} df, −NΔ ≤ Δ ≤ NΔ. (11.31)

Our interest is to compare C(Δ) and W(f) calculated for a discrete random walk to those that would be observed for a continuous random walk, respectively c(Δ) and w(f). Thus we assume in the discussion that follows that the length of the step taken by the random walker can be small enough so that we can replace (11.30) and (11.31) by, respectively,

w(f) = ∫_{−∞}^{∞} c(Δ) e^{−j2πfΔ} dΔ (11.32)

and

c(Δ) = ∫_{−∞}^{∞} w(f) e^{j2πfΔ} df. (11.33)

Together, (11.32) and (11.33) (or (11.30) and (11.31)) are referred to as the Wiener–Khintchine theorem.
The power spectrum, w(f), describes how the energy (or variance) of a signal is distributed with respect to frequency. Since the energy must be the same whether we are in the time or the frequency domain, it must necessarily be true that

∫_{−∞}^{∞} |g(t)|² dt = ∫_{−∞}^{∞} w(f) df. (11.34)

This result is sometimes referred to as Parseval's formula. If g(t) is a continuous signal, the power spectrum w(f) of the signal is the square of the magnitude of the continuous Fourier transform of the signal, i.e.,
w(f) = |∫_{−∞}^{∞} g(t) e^{−j2πft} dt|² = |G(f)|² = G(f)G*(f), (11.35)

where G(f) is the continuous Fourier transform of g(t) and G*(f) is its complex conjugate.
Thus we can determine w(f) for this random process as

w(f) = (2A²/λ) [1/(1 + (πf/λ)²)]. (11.36)

Figure 11.4 shows c(Δ) and w(f) for this random process, where f is measured in units of Δ⁻¹. We can see that the noise spectrum for this random process is essentially "flat" for f ≪ λ and thereafter decays rapidly to zero with a power law 1/f².
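A short sketch estimating C(Δ) from a single long realization of this dichotomous process (the step size A, rate λ, and discretization are illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    A, lam, dt, N = 1.0, 0.5, 0.01, 2_000_000
    flips = rng.random(N) < lam * dt                  # at most one flip per small step
    x = A * np.where(np.cumsum(flips) % 2 == 0, 1.0, -1.0)
    for lag in (0.0, 1.0, 2.0, 4.0):
        m = int(round(lag / dt))
        est = np.mean(x[:N - m] * x[m:])              # time-average estimate of C(lag)
        print(lag, est, A**2 * np.exp(-2.0 * lam * lag))   # compare with Eq. (11.29)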

11.2.5 Auto-correlation Function: General Case

The important point is that the Fourier transform pairs given by (11.32) and (11.33)
are valid whether X(t) represents a deterministic time series or a realization of a
stochastic process [30]. Unfortunately, it is not possible to reliably estimate w( f )
from a finitely long time series (see pp. 211–213 in [30]). The problem is that

Fig. 11.4 (a) Auto-correlation function, c(Δ ) and (b) power spectrum, w( f ), for the random pro-
cess described by (11.22). Parameters: A = 1, λ = 0.5
w(f) measured for a finite-length stochastic process does not converge in any statistical sense to a limiting value as the sample length becomes infinitely long. This observation should not be particularly surprising. Fourier analysis is based on the assumption of fixed amplitudes, frequencies, and phases, but time series of stochastic processes are characterized by random changes of amplitudes, frequencies, and phases. Unlike w(f), a reliable estimate of c(Δ) can be obtained from a long time series. These observations emphasize the critical importance of the Wiener–Khintchine theorem for describing the properties of a stochastic dynamical system in the frequency domain.
In order to define C(Δ ) for a random walk it is necessary to introduce the concept
of a joint probability, P(r, n1 ; m, n2 ). The joint probability is a probability density
function that describes the occurrence X(n1 ) = r and X(n2 ) = m, i.e. the probability
that X(n1 ) is at r at n1 when X(n2 ) is at site m at n2 . The importance of P(r, n1 ; m, n2 )
is that we can use it to determine the probability that each of these events occurs
separately. In particular, the probability, P(r, n1 ), that X(n1 ) is at site r irrespective of
the location of X(n2 ) is
P(r, n1) = Σ_{m=−∞}^{+∞} P(r, n1; m, n2).

Similarly, the probability, P(m, n2), that X(n2) is at site m irrespective of the location of X(n1) is

P(m, n2) = Σ_{r=−∞}^{+∞} P(r, n1; m, n2).

Using these definitions and the fact that P(r, n1; m, n2) is a joint probability distribution function, we can determine C(Δ, n) to be

C(Δ, n) ≡ ⟨X(n)X(n − Δ)⟩ = Σ_{r=−∞}^{+∞} Σ_{m=−∞}^{+∞} rm P(r, n; m, n − Δ). (11.37)

The stationary auto-correlation function can be defined by taking the long time limit as follows:

C(Δ) ≡ ⟨X(n)X(n − Δ)⟩_s ≡ lim_{n→∞} C(Δ, n) = lim_{n→∞} Σ_{r=−∞}^{+∞} Σ_{m=−∞}^{+∞} rm P(r, n; m, n − Δ). (11.38)

11.3 Random Walks on a Quadratic Potential

The major limitation for applying simple random walk models to questions related
to feedback is that they lack the notion of stability, i.e., the resistance of dynamical
systems to the effects of perturbations. In order to understand how stability can be
incorporated into a random walk, it is useful to review the concept of a potential function, φ(x), in continuous time dynamical systems. For τ = 0 the deterministic version of (11.1) can be written as [26]

ẋ(t) = −kx(t) = −dφ(x)/dx (11.39)

and hence

φ(x) = ∫_0^x g(s) ds = kx²/2,

where g(x) = kx, describes a quadratic, or harmonic, function. The bottom of this well corresponds to the fixed-point attractor x = 0; the well is its basin of attraction. If x(t) is a solution of (11.39) then

dφ(x(t))/dt = [dφ(x(t))/dx] · [dx(t)/dt] = −[g(x(t))]² ≤ 0.

In other words φ is always decreasing along the solution curves and in this sense is
analogous to the “potential functions” in physical systems.
The urn model developed by Paul and Tatyana Ehrenfest showed how a quadratic potential could be incorporated into a discrete random walk [15, 31, 32]. The demonstration that the Fokker–Planck equation obtained for the Ehrenfest random walk is the same as that for Langevin's equation, i.e., (11.1), is due to Kac [31]. Here we derive the auto-correlation function, C(Δ), for the Ehrenfest random walk and the Langevin equation. The derivation of the Fokker–Planck equation for τ = 0 and τ ≠ 0 is identical. Therefore we present the Fokker–Planck equation for the Ehrenfest random walk as a special case of that for a delayed random walk in Sect. 11.4.1.
By analogy with the above observations, we can incorporate the influence of a
quadratic-shaped potential on a random walker by assuming that the transition prob-
ability toward the origin increases linearly with distance from the origin (of course
up to a point). In particular, the transition probability for the walker to move toward
the origin increases linearly at a rate of β as the distance increases from the origin
up to the position ±a beyond which it is constant (since the transition probability is
between 0 and 1). Equation (11.10) becomes

P(r, 0) = δr,0 , (11.40)


P(r, n) = g(r − 1)P(r − 1, n − 1)
+ f (r + 1)P(r + 1, n − 1),

where a and d are positive parameters, β = 2d/a, and


⎧ 1+2d

⎨ 2 x>a
f (x) = 1+2β n −a ≤ x ≤ a,

⎩ 1−2d
2 x < −a
318 T. Ohira and J. Milton
⎧ 1−2d

⎨ 2 x>a
g(x) = 1−2β n −a ≤ x ≤ a

⎩ 1+2d
2 x < −a,
where f (x), g(x) are, respectively, the transition probabilities to take a step in the
negative and positive directions at position x such that

f (x) + g(x) = 1. (11.41)

The random walk is symmetric with respect to the origin provided that

f (−x) = g(x) (∀x). (11.42)

We classify random walks by their tendency to move toward the origin. The random
walk is said to be attractive when

f (x) > g(x) (x > 0). (11.43)

and repulsive when


f (x) < g(x) (x > 0). (11.44)
We note that when

f(x) = g(x) = 1/2 (∀x), (11.45)

the general random walk given by (11.41) reduces to the simple random walk discussed in Sect. 11.2. In this section, we consider only the attractive case.

11.3.1 Auto-correlation Function: Ehrenfest Random Walk

Assume that with sufficiently large a, we can ignore the probability that the walker is outside of the range (−a, a). In this case, the probability distribution function P(r, n) approximately satisfies the equation

P(r, n) = (1/2)(1 − β(r − 1))P(r − 1, n − 1) + (1/2)(1 + β(r + 1))P(r + 1, n − 1). (11.46)
By symmetry, we have

P(r, n) = P(−r, n), ⟨X(n)⟩ = 0.

The variance, obtained by multiplying (11.46) by r² and summing over all r, is

σ²(n) = ⟨X²(n)⟩ = (1/(2β))(1 − (1 − 2β)^n). (11.47)
Thus the variance in the stationary state is

σs² = ⟨X²⟩s = 1/(2β). (11.48)
The auto-correlation function for this random walk can be obtained by rewriting (11.46) in terms of joint probabilities to obtain

P(r, n; m, n − Δ) = (1/2)(1 − β(r − 1))P(r − 1, n − 1; m, n − Δ) + (1/2)(1 + β(r + 1))P(r + 1, n − 1; m, n − Δ). (11.49)

By defining

Ps(r; m, Δ) ≡ lim_{n→∞} P(r, n; m, n − Δ),

the joint probability in the stationary state, i.e., the long time limit, becomes

Ps(r; m, Δ) = (1/2)(1 − β(r − 1))Ps(r − 1; m, Δ − 1) + (1/2)(1 + β(r + 1))Ps(r + 1; m, Δ − 1). (11.50)
Multiplying by rm and summing over all r and m yields

C(Δ) = (1 − β)C(Δ − 1), (11.51)

which we can rewrite as

C(Δ) = (1 − β)^Δ C(0) = (1 − β)^Δ/(2β), (11.52)

where C(0) is equal to the mean-square displacement, ⟨X²⟩s. When β ≪ 1, we can approximate

C(Δ) ≈ (1/(2β)) e^{−β|Δ|}. (11.53)

Then, from the Wiener–Khintchine theorem we have

w(f) ≈ (2/β²) [1/(1 + (2πf/β)²)]. (11.54)

These expressions for C(Δ) and w(f) are of the same form as those obtained for the random process of Sect. 11.2.4, i.e., (11.29) and (11.36). Hence C(Δ) and w(f) are qualitatively the same as those shown in Fig. 11.4.
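A sketch simulating the attractive walk of (11.46) and checking (11.48) and (11.52); note that for |x| > 1/β the acceptance probability saturates, which plays the role of the cutoff at ±a:

    import numpy as np

    rng = np.random.default_rng(2)
    beta, n_steps, burn = 0.05, 1_000_000, 10_000
    x = 0
    xs = np.empty(n_steps, dtype=np.int64)
    for k in range(n_steps):
        p_neg = 0.5 * (1.0 + beta * x)   # f(x): probability of a step toward -1
        x += -1 if rng.random() < p_neg else 1
        xs[k] = x
    xs = xs[burn:]                       # discard the transient
    print(xs.var(), 1.0 / (2.0 * beta))  # stationary variance, Eq. (11.48)
    for d in (1, 5, 10, 20):
        print(d, np.mean(xs[:-d] * xs[d:]), (1.0 - beta)**d / (2.0 * beta))  # Eq. (11.52)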
11.3.2 Auto-correlation Function: Langevin Equation

When τ = 0, (11.1) describes the effects of random perturbations on a dynamical system confined to move within a quadratic potential. It is well known that the variance, c(0), is

c(0) = 1/(2k), (11.55)

which is identical to (11.53) if we identify k with β and take Δ = 0 [21, 35].
In order to determine c(Δ) and w(f) we note that when τ = 0, (11.1) describes the effects of a low-pass filter on δ-correlated (white) noise. Thus we can write the Fourier transform of (11.1) when τ = 0 as

X(f) = H(f)I(f), (11.56)

where the frequency response, H(f), is given by

H(f) = 1/(j2πf + k) (11.57)

and

I(f) = σ², (11.58)

where σ is a constant. Hence we obtain

w(f) = σ²/((2πf)² + k²) (11.59)

and, applying the Wiener–Khintchine theorem,

c(Δ) = (σ²/(2k)) e^{−k|Δ|}. (11.60)

Equation (11.54) is the same as (11.59) and (11.53) is the same as (11.60).

11.4 Delayed Random Walks

For a delayed random walk, the transition probability depends on the past state of the walk. If we generalize the random walk in a quadratic potential developed in Sect. 11.3 to non-zero delay we obtain the following definition for the transition probability:

P(r, n + 1) = Σ_m g(m)P(r − 1, n; m, n − τ) + Σ_m f(m)P(r + 1, n; m, n − τ), (11.61)

where the position of the walker at time n is X(n), P(r, n) is the probability that X(n) = r, and P(r, n; m, n − τ) is the joint probability that X(n) = r and X(n − τ) = m. f(x) and g(x) are the transition probabilities for the walker to take a step in the negative (−1) and positive (+1) directions, respectively, and are the same as those used for the random walk in a quadratic potential described in Sect. 11.3.

11.4.1 Delayed Fokker–Planck Equation

Here we derive the Fokker–Planck equation for the delayed random walk in a quadratic potential. The method is a direct application of the procedure used by Kac to obtain the Fokker–Planck equation for the case τ = 0. For f and g as defined in Sect. 11.3, (11.61) becomes

P(r, n + 1) = Σ_m (1/2)(1 − βm)P(r − 1, n; m, n − τd) + Σ_m (1/2)(1 + βm)P(r + 1, n; m, n − τd). (11.62)

To make the connection with continuous time, we assume that the random walk takes a step of size Δx at time intervals of Δt, both of which are very small compared to the scales of space and time we are interested in. We take x = rΔx, y = mΔx, t = nΔt, and τ = τd Δt. With this stipulation, we can rewrite (11.62) as follows:

[P(x, t + Δt) − P(x, t)]/Δt
= (1/2)[(P(x − Δx, t) + P(x + Δx, t) − 2P(x, t))/(Δx)²][(Δx)²/Δt]
+ (1/2) Σ_y y (β/Δt)[P(x, t; y, t − τ) − P(x − Δx, t; y, t − τ)]/Δx
+ (1/2) Σ_y y (β/Δt)[P(x + Δx, t; y, t − τ) − P(x, t; y, t − τ)]/Δx. (11.63)

In the limits

Δx → 0, Δt → 0, β → 0,
(Δx)²/(2Δt) → D, β/Δt → γ, nΔt → t, τd Δt → τ,
rΔx → x, mΔx → y, (11.64)
the difference equation (11.63) goes over to the following integro-partial differential equation:

(∂/∂t) ∫_{−∞}^{∞} P(x, t; y, t − τ) dy = γ ∫_{−∞}^{∞} (∂/∂x)[yP(x, t; y, t − τ)] dy + D ∫_{−∞}^{∞} (∂²/∂x²)P(x, t; y, t − τ) dy. (11.65)
This is the same Fokker–Planck equation that is obtained from the delayed Langevin equation, (11.1) [19]. When τ = 0 we have

P(x, t; y, t − τ) → P(x, t)δ(x − y), (11.66)

and the above Fokker–Planck equation reduces to the following familiar form:

∂P(x, t)/∂t = γ (∂/∂x)(xP(x, t)) + D ∂²P(x, t)/∂x². (11.67)
Thus, the correspondence between Ehrenfest’s model and Langevin equation car-
ries over to that between the delayed random walk model and the delayed Langevin
equation.
We can rewrite (11.65) by making the definition

Pτ(x, t) ≡ ∫_{−∞}^{∞} P(x, t; y, t − τ) dy

to obtain

∂Pτ(x, t)/∂t = γ ∫_{−∞}^{∞} (∂/∂x)[yP(x, t; y, t − τ)] dy + D ∂²Pτ(x, t)/∂x². (11.68)
In this form we can more clearly see the effect of the delay on the drift of the random
walker.

11.4.2 Auto-correlation Function: Delayed Random Walk

Three properties of delayed random walks are particularly important for the discussion that follows. First, by the symmetry with respect to the origin, the average position of the walker is 0. In particular, for an attractive delayed random walk, in the stationary state (i.e., when n → ∞)

P(r, n + 1; r + 1, n) = P(r + 1, n + 1; r, n). (11.69)

We can show this as follows. By the definition of stationarity, we have

P(r, n + 1; r + 1, n) + P(r, n + 1; r − 1, n) (11.70)
= P(r + 1, n + 1; r, n) + P(r − 1, n + 1; r, n). (11.71)
For r = 0, we note that due to the symmetry, we have

P(0, n + 1; 1, n) = P(1, n + 1; 0, n). (11.72)

Using these two equations inductively leads us to the desired relation (11.69).
Second, the generating function satisfies

⟨cos(αX(n))⟩ = cos(α)⟨cos(αX(n))⟩ + sin(α)⟨sin(αX(n)){f(X(n − τ)) − g(X(n − τ))}⟩, (11.73)

which can be obtained by multiplying (11.61) for the stationary state by cos(αr) and then summing over r and m. Finally, we have the following invariant relationship with respect to the delay:

1/2 = ⟨X(n){f(X(n − τ)) − g(X(n − τ))}⟩. (11.74)
When we choose f, g as before, this invariant relation (11.74) becomes

⟨X(n + τ)X(n)⟩ = C(τ) = 1/(2β). (11.75)

This invariance with respect to τ of the correlation function at lag τ is a simple characteristic of this delayed random walk in a quadratic potential, and it is the key to obtaining the analytical expression for the correlation function. Below we discuss C(Δ) for the stationary state. Observations pertaining to C(Δ) in the transient state are presented in Sect. 11.5.1.
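A sketch checking the invariant (11.75), together with the small-β variance (11.86) derived below, by direct simulation of the delayed walk, with the delayed positions held in a short queue:

    from collections import deque
    import numpy as np

    rng = np.random.default_rng(3)
    beta, tau, n_steps, burn = 0.02, 10, 1_000_000, 10_000
    hist = deque([0]*(tau + 1), maxlen=tau + 1)   # hist[0] = X(n - tau), hist[-1] = X(n)
    xs = np.empty(n_steps, dtype=np.int64)
    for k in range(n_steps):
        p_neg = 0.5*(1.0 + beta*hist[0])          # f evaluated at X(n - tau)
        x_new = hist[-1] + (-1 if rng.random() < p_neg else 1)
        hist.append(x_new)                        # the deque discards the oldest entry
        xs[k] = x_new
    xs = xs[burn:]
    print(np.mean(xs[:-tau]*xs[tau:]), 1.0/(2.0*beta))                 # Eq. (11.75)
    print(xs.var(), (1 + np.sin(beta*tau))/(2*beta*np.cos(beta*tau)))  # Eq. (11.86)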
For the stationary state and 0 ≤ Δ ≤ τ, the following is obtained from the definition (11.61):

Ps(r, n + Δ; m, n) = Σ_ℓ g(ℓ)Ps(r − 1, n + Δ − 1; m, n; ℓ, n + Δ − 1 − τ) + Σ_ℓ f(ℓ)Ps(r + 1, n + Δ − 1; m, n; ℓ, n + Δ − 1 − τ). (11.76)

We can derive the following equation for the correlation function by multiplying this equation by rm and summing over r, m, and ℓ:

C(Δ ) = C(Δ − 1) − β C(τ + 1 − Δ ), (0 ≤ Δ ≤ τ ). (11.77)

A similar argument can be given for τ < Δ ,

C(Δ ) = C(Δ − 1) − β C(Δ − 1 − τ ), (τ < Δ ). (11.78)

Equations (11.77) and (11.78) can be solved explicitly using (11.75). In particular,
for 0 ≤ Δ ≤ τ we obtain
Fig. 11.5 Stationary auto-correlation function, C(Δ) (left-hand column) and power spectra, W(f) (right-hand column) for an attractive delayed random walk on a quadratic potential for different time delays, τ. The stationary correlation function C(Δ) was calculated using (11.79) and (11.80), and W(f) was calculated from it using the Wiener–Khintchine theorem. The parameters were a = 50, d = 0.4, and τ was 0 for (a) and (b), 40 for (c) and (d), and 80 for (e) and (f)

C(Δ) = C(0)[(z+^Δ − z+^{Δ−1}) − (z−^Δ − z−^{Δ−1})]/(z+ − z−) − (1/2)(z+^Δ − z−^Δ)/(z+ − z−),

C(0) = (1/(2β))[(z+ − z−) + β(z+^τ − z−^τ)]/[(z+^τ − z+^{τ−1}) − (z−^τ − z−^{τ−1})], (11.79)

where

z± = (1 − β²/2) ± (β/2)√(β² − 4).
For τ < Δ, it is possible to write C(Δ) in a multiple summation form, though the expression becomes rather complex. For example, with τ < Δ ≤ 2τ,

C(Δ) = 1/(2β) − β Σ_{i=0}^{Δ−τ−1} C(i), (11.80)

where the C(i) are given by (11.79).


Figure 11.5 compares C(Δ) and W(f) for different values of the delay. As we increase τ, oscillatory behavior of the correlation function appears. The decay of the peak envelope is found numerically to be exponential; the decay rate of the envelope for small Δ is approximately 1/(2C(0)).

11.4.3 Auto-correlation Function: Delayed Langevin Equation

The statistical properties of (11.1) have been extensively studied previously [20, 25,
33, 42, 43]. Two approaches can be used to calculate c(Δ ).
First, we can take the Fourier transform of (11.1) to determine w( f ) and then
use the Wiener–Khintchine theorem to obtain c(Δ ). From (11.56) we obtain the fre-
quency response, H( f ),

H( f ) = 1/(j2πf + μe^(−j2πfτ)).

As before we assume a “white noise” input and hence the power spectrum, w( f ), is

w( f ) = |H( f )σ|² = σ²/[(2πf)² − 4πfμ sin(2πfτ) + μ²].
Using the Wiener–Khintchine theorem we obtain

c(Δ) = (σ²/2π) ∫₀^∞ cos(ωΔ)/[ω² − 2ωμ sin(ωτ) + μ²] dω, (11.81)

where we have defined ω = 2πf to simplify the notation. The variance, σx², is equal
to the value of c(Δ) at Δ = 0, i.e.,

σx² = (σ²/2π) ∫₀^∞ dω/[μ² + ω² − 2μω sin(ωτ)]. (11.82)

When (11.82) is integrated numerically, the result agrees, for 0 ≤ Δ ≤ τ, with the
results obtained by Küchler and Mensch (discussed below) [25].
Second, Küchler and Mensch showed that the stationary correlation function for
Δ < τ could be obtained directly from (11.1) and is equal to

c(Δ) = c(0) cos(μΔ) − (1/(2μ)) sin(μΔ), (11.83)

where

c(0) = [1 + sin(μτ)]/[2μ cos(μτ)]. (11.84)
It should be noted that for small delay the variance increases linearly with τ; this
observation is confirmed in [25].
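A quick consistency check of (11.83) and (11.84) is to integrate the delayed Langevin
equation directly. The sketch below is a minimal Euler–Maruyama discretization,
assuming that (11.1) has the form dx/dt = −μx(t − τ) + σξ(t) with ⟨ξ(t)ξ(t′)⟩ = δ(t − t′),
the convention under which (11.84) reduces to σx² = 1/(2μ) at τ = 0; the parameter
values are illustrative.

import numpy as np

rng = np.random.default_rng(1)
mu, tau, sigma = 1.0, 0.5, 1.0        # illustrative; mu*tau < pi/2 for stability
dt = 1e-3
m = int(round(tau / dt))              # delay measured in grid points
n_steps = 2_000_000

x = np.zeros(n_steps)
kick = sigma * np.sqrt(dt) * rng.standard_normal(n_steps - 1)
for n in range(n_steps - 1):
    x_del = x[n - m] if n >= m else 0.0        # zero initial function
    x[n + 1] = x[n] - mu * x_del * dt + kick[n]

x = x[n_steps // 2:]                  # keep the (approximately) stationary half
c0_exact = (1.0 + np.sin(mu * tau)) / (2.0 * mu * np.cos(mu * tau))   # (11.84)
print(np.mean(x * x), c0_exact)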
We now show that the expression for C(Δ) obtained from the delayed random
walk (11.80) is equivalent to that given by (11.83) for 0 ≤ Δ ≤ τ. In particular, for
small β, we have

[(z₊^Δ − z₊^(Δ−1)) − (z₋^Δ − z₋^(Δ−1))]/(z₊ − z₋) ∼ cos(βΔ),

β(z₊^Δ − z₋^Δ)/(z₊ − z₋) ∼ sin(βΔ).

Thus

C(Δ) ∼ C(0) cos(βΔ) − (1/(2β)) sin(βΔ) (11.85)

with

C(0) ∼ [1 + sin(βτ)]/[2β cos(βτ)]. (11.86)

11.5 Postural Sway

Postural sway refers to the fluctuations in the center of pressure (COP) that occur as
a subject stands quietly with eyes closed on a force platform [13, 53]. Mathematical
models for balance control identify three essential components [17, 49, 50, 56]: (1)
feedback, (2) time delays, and (3) the effects of random, uncontrolled perturbations
(“noise”). Thus it is not surprising that the first application of a delayed random walk
was to investigate the fluctuations in COP [56]. In this study it was assumed that
the probability p+ (n) for the walker to take a step at time n to the right (positive
direction) was given by

        ⎧ p      if X(n − τ) > 0
p₊(n) = ⎨ 0.5    if X(n − τ) = 0     (11.87)
        ⎩ 1 − p  if X(n − τ) < 0,

where 0 < p < 1. The origin is attractive when p < 0.5. By symmetry with respect
to the origin we have ⟨X(n)⟩ = 0. As shown in Fig. 11.6, this simple model was
remarkably capable of reproducing some features of the COP fluctuations observed
for certain human subjects.

Fig. 11.6 Comparison of the two-point correlation function, K(s) = ⟨(X(n) − X(n − s))²⟩, for
the fluctuations in the center-of-pressure observed for two healthy subjects (solid line) with that
predicted using a delayed random walk model (open circle). In (a) the parameters for the delayed
random walk were p = 0.35 and τ = 1 with an estimated unit step length of 1.2 mm and a unit time
of 320 ms. In (b) the parameters were p = 0.40 and τ = 10 with an estimated step length and unit
time step of, respectively, 1.4 mm and 40 ms. For more details see [56]

There are a number of interesting properties of this random walk (Fig. 11.7).
First, for all choices of τ ≥ 0, ⟨X²(n)⟩^(1/2) approaches a limiting value, Ψ. Second,
the qualitative nature of the approach of ⟨X²(n)⟩^(1/2) to Ψ depends on the value of τ.
In particular, for short τ there is a nonoscillatory approach to Ψ, whereas for longer τ
damped oscillations occur (Fig. 11.7) whose period is approximately twice the delay.
Numerical simulations of this random walk led to the approximation

Ψ(τ) ∼ (0.59 − 1.18p)τ + 1/(√2(1 − 2p)). (11.88)

This approximation was used to fit the delayed random walk model to the
experimentally measured fluctuations in postural sway shown in Fig. 11.6.
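The model (11.87) is also easy to explore numerically. The sketch below, our own
illustration, simulates the walk and compares the stationary root mean square
displacement with the fit (11.88); here we read Ψ as the limiting root mean square
value, which is consistent with its τ = 0 value 1/(√2(1 − 2p)) and with the quantity
plotted in Fig. 11.7.

import numpy as np

rng = np.random.default_rng(2)

def sway_walk(p, tau, n_steps, rng):
    """Delayed random walk (11.87): step +1 with probability p when the
    delayed position is positive, 1 - p when negative, 1/2 at the origin."""
    x = np.zeros(n_steps + 1)
    for n in range(n_steps):
        x_del = x[n - tau] if n >= tau else 0.0
        p_plus = 0.5 if x_del == 0 else (p if x_del > 0 else 1.0 - p)
        x[n + 1] = x[n] + (1.0 if rng.random() < p_plus else -1.0)
    return x

p, tau = 0.35, 10
x = sway_walk(p, tau, 500_000, rng)
rms = np.sqrt(np.mean(x[100 * tau:] ** 2))
psi = (0.59 - 1.18 * p) * tau + 1.0 / (np.sqrt(2.0) * (1.0 - 2.0 * p))  # (11.88)
print(rms, psi)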
In the context of a generalized delayed random walk (11.61) introduced in
Sect. 11.4, (11.87) corresponds to choosing f (x) and g(x) to be

f (x) = (1/2)[1 + ηθ(x)], (11.89)
g(x) = (1/2)[1 − ηθ(x)], (11.90)

where
η = 1 − 2p

Fig. 11.7 Examples of dynamics of the root mean square position C(0, t)^(1/2) for various choices of
τ when p = 0.25

and θ is a step-function defined by

       ⎧  1   if x > 0
θ(x) = ⎨  0   if x = 0     (11.91)
       ⎩ −1   if x < 0.

In other words, the delayed random walk occurs on a V-shaped potential (which, of
course, is simply a linear approximation to a quadratic potential). Below we briefly
describe the properties of this delayed random walk (for more details see [59]).
By using symmetry arguments it can be shown that the stationary probability
distribution Ps(X) when τ = 0 can be obtained by solving, in the long-time limit, the
system of equations

P(0, n + 1) = 2(1 − p)P(1, n),
P(1, n + 1) = (1/2)P(0, n) + (1 − p)P(2, n), (11.92)
P(r, n + 1) = pP(r − 1, n) + (1 − p)P(r + 1, n) (2 ≤ r),

where P(r, n) is the probability of being at position r at time n, using the trial function
Ps(r) = Z^r, where

Ps(r) = lim_{n→∞} P(r, n).
In this way we obtain

Ps(0) = 2C₀p,
Ps(r) = C₀ (p/(1 − p))^r (1 ≤ r),

where
C₀ = (1 − 2p)/[4p(1 − p)].

Since we know the p.d.f. we can easily calculate the variance when τ = 0, σ²(0), as

σ²(0) = 1/[2(1 − 2p)²]. (11.93)
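The geometric form of Ps(r) makes both the normalization and the variance (11.93)
easy to verify numerically. The following lines, our own illustration, simply truncate
the geometric tail at a large cutoff:

import numpy as np

p = 0.35
q = p / (1.0 - p)
C0 = (1.0 - 2.0 * p) / (4.0 * p * (1.0 - p))

r = np.arange(1, 2000)                  # truncation of the geometric tail
Ps = C0 * q**r                          # Ps(r) for r >= 1; Ps(-r) = Ps(r)
print(2.0 * C0 * p + 2.0 * Ps.sum())    # normalization: should print 1.0
print(2.0 * (r**2 * Ps).sum(),          # variance from the p.d.f. ...
      1.0 / (2.0 * (1.0 - 2.0 * p)**2)) # ... versus the closed form (11.93)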

The stationary probability distributions when τ > 0 can be obtained by solving,
in the long-time limit, the set of equations

for 0 ≤ r < τ + 2,

P(r, n + 1) = pP(r − 1, n; X > 0, n − τ) + (1/2)P(r − 1, n; X = 0, n − τ)
+ (1 − p)P(r − 1, n; X < 0, n − τ) + pP(r + 1, n; X < 0, n − τ)
+ (1/2)P(r + 1, n; X = 0, n − τ) + (1 − p)P(r + 1, n; X > 0, n − τ),

for τ + 2 ≤ r,

P(r, n + 1) = pP(r − 1, n) + (1 − p)P(r + 1, n).

These equations are very tedious to solve and not very illuminating. Indeed we have
only been able to obtain the following results: for τ = 1,

⟨X²⟩ = [1/(2(1 − 2p)²)] · (7 − 24p + 32p² − 16p³)/(3 − 4p),

and for τ = 2,

⟨X²⟩ = [1/(2(1 − 2p)²)] · (25 − 94p + 96p² + 64p³ − 160p⁴ + 64p⁵)/(5 + 2p − 24p² + 16p³).

11.5.1 Transient Auto-correlation Function

In Sect. 11.5, we assumed that the fluctuations in COP were realizations of a sta-
tionary stochastic dynamical system. However, it is by no means clear that this
assumption holds. An advantage of a delayed random walk model is that it is possible
to gain some insight into the nature of the auto-correlation function for the transient
state, Ct(Δ). In particular, for the transient state we can derive, in a manner similar
to (11.77) and (11.78), the set of coupled dynamical equations

Ct(0, n + 1) = Ct(0, n) + 1 − 2βCt(τ, n − τ), (11.94)
Ct(Δ, n + 1) = Ct(Δ − 1, n + 1) − βCt(τ − (Δ − 1), n + Δ − τ), (1 ≤ Δ ≤ τ),
Ct(Δ, n + 1) = Ct(Δ − 1, n + 1) − βCt((Δ − 1) − τ, n + 1), (Δ > τ).

For the initial condition, we need to specify the correlation function for the interval
of the initial τ steps. When the random walker begins at the origin we have a simple
symmetric random walk for n ∈ (1, τ). This translates into the initial condition for the
correlation function

Ct(0, n) = n (0 ≤ n ≤ τ), Ct(u, 0) = 0 (∀u). (11.95)

The solution can be iteratively generated using (11.94) and this initial condition.
We have plotted some examples of the dynamics of the mean square displacement
Ct(0, n) in Fig. 11.8. Again, oscillatory behavior arises with increasing τ. Hence
the model discussed here shows oscillatory behavior with increasing delay in both
its stationary and transient states.
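To illustrate how (11.94) and (11.95) can be iterated in practice, the sketch below,
our own illustration, fills a table of Ct(Δ, n) forward in n, treating Ct(·, m) as zero
for m ≤ 0 (which automatically reproduces the simple random walk behavior
Ct(0, n) = n for n ≤ τ); the row Ct(0, n) then generates curves of the type shown in
Fig. 11.8.

import numpy as np

def transient_corr(beta, tau, n_max, d_max):
    """Iterate the coupled recursions (11.94) for Ct(d, n), with the
    initial condition (11.95) and Ct(., m) = 0 for m <= 0."""
    assert d_max >= tau
    C = np.zeros((d_max + 1, n_max + 1))
    get = lambda d, m: C[d, m] if m > 0 else 0.0
    for n in range(n_max):
        C[0, n + 1] = C[0, n] + 1.0 - 2.0 * beta * get(tau, n - tau)
        for d in range(1, d_max + 1):
            if d <= tau:
                C[d, n + 1] = C[d - 1, n + 1] - beta * get(tau - d + 1, n + d - tau)
            else:
                C[d, n + 1] = C[d - 1, n + 1] - beta * C[d - 1 - tau, n + 1]
    return C

Ct = transient_corr(beta=0.008, tau=40, n_max=20000, d_max=80)
print(Ct[0, -1])   # mean square displacement; approaches the stationary C(0)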
We also note that from (11.94) we can infer the corresponding set of equations for
the transient auto-correlation function of (11.1) with continuous time, ct(Δ, t). They
are given as follows:

∂ct(0, t)/∂t = −2k ct(τ, t − τ) + 1,
∂ct(Δ, t)/∂Δ = −k ct(τ − Δ, t + Δ − τ) (0 < Δ ≤ τ), (11.96)
∂ct(Δ, t)/∂Δ = −k ct(Δ − τ, t) (τ < Δ).

Fig. 11.8 Examples of the dynamics of the transient variance Ct(0, t) for different delays τ. Data
averaged over 10,000 simulations (closed circles) are compared with the analytical result obtained
from (11.94) and (11.95) (solid line). The parameters were a = 50 and d = 0.45

Studies on these coupled partial differential equations with delay are yet to
be done.

11.5.2 Balance Control with Positive Feedback

Up to this point we have assumed that the feedback for balance control is continuous
and negative. However, careful experimental observations for postural sway [38, 39]
and stick balancing at the fingertip [11, 12, 28] indicate that the feedback is on aver-
age positive and that it is administered in a pulsatile, or ballistic, manner. Recently
the following switch-type discontinuous model for postural sway that incorporates
positive feedback has been introduced in an attempt to resolve this paradox [49, 50]:

α x(t − τ ) + η 2 ξ (t) + K if x(t − τ ) < −Π
dx ⎨
= α x(t − τ ) + η 2 ξ (t) if −Π ≤ x(t − τ ) ≤ Π (11.97)
dt ⎩
α x(t − τ ) + η 2 ξ (t) − K if x(t − τ ) > Π ,

where α, K, and Π are positive constants. This model is a simple extension of
a model proposed by Eurich and Milton [17] and states that the nervous system
allows the controlled variable to drift under the effects of noise (ξ(t)) and positive
feedback (α > 0), with corrective actions (“negative feedback”) taken only when
x(t − τ) exceeds a certain threshold, Π. It is assumed that α is small and that

τ < 3π/(2α). This means that there is one real positive eigenvalue and an infinite
number of complex eigenvalue pairs whose real parts are negative. The magnitude of
the real positive eigenvalue decreases as τ increases for 0 < τ < 3π/(2α). “Safety
net” type controllers also arise in the design of strategies to control attractors that
have shallow basins of attraction [24].
The cost to operate this discontinuous balance controller is directly proportional
to the number of times it is activated. Thus the mathematical problem becomes that
of determining the first passage times for a repulsive delayed random walk, i.e., the
times it takes a walker starting at the origin to cross the thresholds ±Π. Numerical
simulations of a repulsive delayed random walk indicate that the mean first passage
time depends on the choice of the initial function Φ₀(t) (see Fig. 11.9).
In principle, the most probable, or mean, first passage time can be calculated from
the backward Kolmogorov equation for (11.1); we have been unable to complete this
calculation. However, simulations of (11.97) indicate that this discontinuous balance
controller takes advantage of two intrinsic properties of a repulsive delayed random
walk: (1) the time delay, which slows escape, and (2) the fact that the distribution of
first passage times is bimodal. Since the distribution of first passage times is bimodal,
it is possible that a reset due to the activation of the negative feedback controller can
lead to a transient confinement of the walker near the origin, thus further slowing
escape.
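Although the mean first passage time has so far resisted analytical treatment, it is
straightforward to estimate by simulation. The sketch below, our own illustration
with illustrative parameter values, integrates the repulsive drift of (11.97) inside the
thresholds with an Euler–Maruyama step and records the first time |x| reaches Π,
starting from the zero initial function Φ₀(t) = 0.

import numpy as np
from collections import deque

rng = np.random.default_rng(3)

def first_passage(alpha, eta, tau, Pi, dt, rng, t_max=1e4):
    """First time |x(t)| reaches Pi for dx/dt = alpha*x(t - tau) + eta*xi(t),
    Euler-Maruyama, starting from the zero initial function on [-tau, 0]."""
    m = int(round(tau / dt))
    buf = deque(np.zeros(m + 1), maxlen=m + 1)   # buf[0] is x(t - tau)
    x, t = 0.0, 0.0
    while abs(x) < Pi and t < t_max:
        x += alpha * buf[0] * dt + eta * np.sqrt(dt) * rng.standard_normal()
        buf.append(x)                            # the oldest value drops out
        t += dt
    return t

times = [first_passage(alpha=0.05, eta=1.0, tau=2.0, Pi=30.0, dt=0.01, rng=rng)
         for _ in range(200)]
print(np.mean(times))   # mean first passage time for this choice of Phi_0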

Fig. 11.9 The mean first passage time, L̂, for a repulsive delayed random walk as a function of
τ for three different choices of the initial function, Φ₀(t): (1) a constant zero function (solid line,
closed circle); (2) an initial function constructed from a simple random walk with τ = 0 (solid line,
closed triangle); (3) a linearly decreasing initial function with end points x = τ at t = −τ and x = 0
at t = 0 (solid line, closed square). In all cases Φ₀(0) = 0. The dashed line is equal to L̂_(τ=0) + τ.
For each choice of Φ₀(t), 500 realizations were calculated with X* = ±30, with d = 0.4 and a = 30

11.6 Concluding Remarks

In this chapter, we have shown that a properly formulated delayed random walk
can provide an alternative and complementary approach to the analysis of stochas-
tic delay differential equations such as (11.1). By placing the study of a stochastic
differential equation into the context of a random walk it is possible to draw upon
the large arsenal of analytical tools previously developed for the study of random
walks and apply them to the study of these complex dynamical systems. The advan-
tages of this approach to the analysis of SDDEs include the following: (1) it avoids
issues inherent in the Ito vs. Stratonovich stochastic calculus, (2) it provides insight
into transient dynamics, and (3) it is, in principle, applicable to the study of delayed
stochastic dynamical systems in the setting of complex potential surfaces such as
those that arise in the setting of multistability. In general, the use of the delayed
random walk involves solutions in the form of equations which must be solved iter-
atively. However, this procedure causes no practical problem given the ready avail-
ability of symbolic manipulation computer software programs such as MAPLE c
and
c
MATHEMATICA . The use of these computer programs compliment the variety of
numerical methods [3,27,29] that have been developed to investigate SDDEs, such as
(11.1). However, much remains to be done, particularly the important cases in which
the noise enters the dynamics in a multiplicative, or parametric, fashion [9, 10, 52].
Thus we anticipate that the subject of delayed random walks will continue to gain in
importance.

Acknowledgments We thank Sue Ann Campbell for useful discussions. We acknowledge support
from the William R. Kenan, Jr. Foundation (JM) and the National Science Foundation
(Grant 0617072) (JM, TO).

References

1. Bailey, N. T., The Elements of Stochastic Processes, Wiley, New York, NY, 1990.
2. Bechhoefer, J., Feedback for physicists: A tutorial essay on control, Rev. Mod. Phys. 77, 2005,
783–836.
3. Bellen, A. and Zennaro, M., Numerical Methods for Delay Differential Equations, Oxford
University Press, New York, NY, 2003.
4. Bendat, J. S. and Piersol, A. G., Random Data: Analysis and Measurement Procedures, 2nd
Edition, Wiley, New York, NY, 1986.
5. Berg, H. C., Random Walks in Biology, Expanded Edition, Princeton University Press,
Princeton, NJ, 1993.
6. Boukas, E-K. and Liu Z-K., Deterministic and Stochastic Time Delay Systems, Birkhäuser,
Boston, MA, 2002.
7. Bracewell, R. N., The Fourier Transform and its Applications, 2nd Edition, McGraw-Hill,
New York, NY, 1986.
8. Bratsun, D., Volfson, D., Tsimring, L. S. and Hasty, J., Delay-induced stochastic oscillations
in gene regulation, Proc. Natl Acad. Sci. USA 102, 2005, 14593–14598.

9. Cabrera, J. L. and Milton, J. G., On-off intermittency in a human balancing task, Phys. Rev.
Lett. 89, 2002, 158702.
10. Cabrera, J. L. and Milton, J. G., Human stick balancing: Tuning Lévy flights to improve
balance control, Chaos 14, 2004, 691–698.
11. Cabrera, J. L., Bormann, R., Eurich, C. W., Ohira, T. and Milton, J., State-dependent noise
and human balance control, Fluct. Noise Lett. 4, 2004, L107–L117.
12. Cabrera, J. L., Luciani, C., and Milton, J., Neural control on multiple time scales: Insights
from human stick balancing, Condens. Matter Phys. 2, 2006, 373–383.
13. Collins, J. J. and De Luca, C. J., Random walking during quiet standing, Phys. Rev. Lett. 73,
1994, 907–912.
14. Davenport, W. B. and Root, W. L., An Introduction to the Theory of Random Signals and
Noise, IEEE, New York, NY, 1987.
15. Ehrenfest, P. and Ehrenfest, T., Über zwei bekannte Einwände gegen das Boltzmannsche
H-Theorem, Phys. Zeit. 8, 1907, 311–314.
16. Einstein, A., Zur Theorie der Brownschen Bewegung, Annalen der Physik 19, 1905, 371–381.
17. Eurich, C. W. and Milton, J. G., Noise-induced transitions in human postural sway, Phys. Rev.
E 54, 1996, 6681–6684.
18. Fort, J., Jana, D. and Humet, J., Multidelayed random walk: Theory and application to the
neolithic transition in Europe, Phys. Rev. E. 70, 2004, 031913.
19. Frank, T. D., Delay Fokker-Planck equations, Novikov’s theorem, and Boltzmann distribu-
tions as small delay approximations, Phys. Rev. E 72, 2005, 011112.
20. Frank, T. D. and Beek, P. J., Stationary solutions of linear stochastic delay differential equa-
tions: Applications to biological systems, Phys. Rev. E 64, 2001, 021917.
21. Gardiner, C. W., Handbook of Stochastic Methods for Physics, Chemistry and the Natural
Sciences, Springer, New York, NY, 1994.
22. Glass, L. and Mackey, M. C., From Clocks to Chaos: The Rhythms of Life, Princeton Univer-
sity Press, Princeton, NJ, 1988.
23. Grassia, P. S., Delay, feedback and quenching in financial markets, Eur. Phys. J. B 17, 2000,
347–362.
24. Guckhenheimer, J., A robust hybrid stabilization strategy for equilibria, IEEE Trans. Autom.
Control 40, 1995, 321–326.
25. Guillouzic, S., L’Heureux, I. and Longtin, A., Small delay approximation of stochastic delay
differential equation, Phys. Rev. E 59, 1999, 3970–3982.
26. Hale, J. and Koçak, H., Dynamics and Bifurcations, Springer, New York, NY, 1991.
27. Hofmann, N. and Müller-Gronbach, T., A modified Milstein scheme for approximation of
stochastic delay differential equation with constant time lag, J. Comput. Appl. Math. 197,
2006, 89–121.
28. Hosaka, T., Ohira, T., Luciani, C., Cabrera, J. L. and Milton, J. G., Balancing with noise and
delay, Prog. Theor. Phys. Suppl. 161, 2006, 314–319.
29. Hu, Y., Mohammed, S.-E. A. and Yan, F., Discrete time approximations of stochastic delay
equations: the Milstein scheme, Ann. Probab. 32, 2004, 265–314.
30. Jenkins, G. M. and Watts, D. G., Spectral Analysis and its Applications, Holden-Day, San
Francisco, CA, 1968.
31. Kac, M., Random walk and the theory of Brownian motion, Am. Math. Monthly 54, 1947,
369–391.
32. Karlin, S. and McGregor, J., Ehrenfest urn models, J. Appl. Prob. 2, 1965, 352–376.
33. Küchler, U. and Mensch, B., Langevin's stochastic differential equation extended by a time-
delayed term, Stoch. Stoch. Rep. 40, 1992, 23–42.
34. Landry, M., Campbell, S. A., Morris, K. and Aguilar, C. O., Dynamics of an inverted pendu-
lum with delayed feedback control, SIAM J. Appl. Dynam. Syst. 4, 2005, 333–351.
35. Lasota, A. and Mackey, M. C., Chaos, Fractals and Noise: Stochastic Aspects of Dynamics,
Springer, New York, NY, 1994.
36. Longtin, A., Noise-induced transitions at a Hopf bifurcation in a first-order delay-differential
equation, Phys. Rev. A 44, 1991, 4801–4813.

37. Longtin, A., Milton, J. G., Bos, J. E., and Mackey, M. C., Noise and critical behavior of the
pupil light reflex at oscillation onset, Phys. Rev. A 41, 1990, 6992–7005.
38. Loram, I. D. and Lakie, M., Human balancing of an inverted pendulum: position control by
small, ballistic-like, throw and catch movements, J. Physiol. 540, 2002, 1111–1124.
39. Loram, I. D., Maganaris,C. N. and Lakie, M., Active, non-spring-like muscle movements in
human postural sway: how might paradoxical changes in muscle length be produced? J. Phys-
iol. 564.1, 2005, 281–293.
40. MacDonald, D. K. C., Noise and Fluctuations: An Introduction, Wiley, New York, NY, 1962.
41. MacDonald, N., Biological Delay Systems: Linear Stability Theory, Cambridge University
Press, New York, NY, 1989.
42. Mackey, M. C. and Nechaeva, I. G., Noise and stability in differential delay equations,
J. Dynam. Diff. Eqns. 6, 1994, 395–426.
43. Mackey, M. C. and Nechaeva, I. G., Solution moment stability in stochastic differential delay
equations, Phys. Rev. E 52, 1995, 3366–3376.
44. Malkiel, B. G., A Random Walk Down Wall Street, W. W. Norton & Company, New York,
NY, 1993.
45. Mazo, R. M., Brownian Motion: Fluctuation, Dynamics and Applications, Clarendon,
Oxford, 2002.
46. Mergenthaler, K. and Engbert, R., Modeling the control of fixational eye movements with
neurophysiological delays, Phys. Rev. Lett. 98, 2007, 138104.
47. Milton, J. and Foss, J., Oscillations and multistability in delayed feedback control. In: Case
Studies in Mathematical Modeling: Ecology, Physiology, and Cell Biology (H. G. Othmer,
F. R. Adler, M. A. Lewis and J. C. Dallon (eds). Prentice Hall, Upper Saddle River, NJ,
pp. 179–198, 1997.
48. Milton, J. G., Longtin, A., Beuter, A., Mackey, M. C. and Glass, L., Complex dynamics and
bifurcations in neurology, J. Theor. Biol. 138, 1989, 129–147.
49. Milton, J. G., Cabrera, J. L. and Ohira, T., Unstable dynamical systems: Delays, noise and
control, Europhys. Lett. 83, 2008, 48001.
50. Milton, J., Townsend, J. L., King, M. A. and Ohira, T., Balancing with positive feedback: The
case for discontinuous control, Philos. Trans. R. Soc. (submitted).
51. Mohammed, S.-E. A., Stochastic Functional Differential Equations, Pitman, Boston, MA,
1984.
52. Mohammed, S.-E. A. and Scheutzow, M. K. R., Lyapunov exponents of linear stochastic
functional differential equations. Part II. Examples and case studies, Ann. Probab. 25, 1997,
1210–1240.
53. Newell, K. M., Slobounov, S. M., Slobounova, E. S. and Molenaar, P. C. M., Stochastic pro-
cesses in postural center-of-pressure profiles, Exp. Brain Res. 113, 1997, 158–164.
54. Niculescu, S.-I. and Gu, K., Advances in Time-Delay Systems, Springer, New York, NY, 2004.
55. Ohira, T., Oscillatory correlation of delayed random walks, Phys. Rev. E 55, 1997,
R1255–R1258.
56. Ohira, T. and Milton, J., Delayed random walks, Phys. Rev. E 52, 1995, 3277–3280.
57. Ohira, T. and Sato, Y., Resonance with noise and delay, Phys. Rev. Lett. 82, 1999, 2811–2815.
58. Ohira, T. and Yamane, T., Delayed stochastic systems, Phys. Rev. E 61, 2000, 1247–1257.
59. Ohira, T., Sazuka, N., Marumo, K., Shimizu, T., Takayasu, M. and Takayasu, H., Predictabil-
ity of currency market exchange, Phys. A 308, 2002, 368–374.
60. Patanarapeelert, K., Frank, T. D., Friedrich, R., Beek, P. J. and Tang, I. M., Theoretical anal-
ysis of destabilization resonances in time-delayed stochastic second-order dynamical systems
and some implications for human motor control, Phys. Rev. E 73, 2006, 021901.
61. Perrin, J., Brownian Movement and Molecular Reality, Taylor & Francis, London, 1910.
62. Rudnick, J. and Gaspari, G., Elements of the Random Walk, Cambridge University Press, New
York, NY, 2004.
63. Santillan, M. and Mackey, M. C., Dynamic regulation of the tryptophan operon: A modeling
study and comparison with experimental data, Proc. Natl Acad. Sci. 98, 2001, 1364–1369.
64. Weiss, G. H., Aspects and Applications of the Random Walk, North-Holland, New York, NY,
1994.

65. Wu, D. and Zhu, S., Brownian motor with time-delayed feedback, Phys. Rev. E 73, 2006,
051107.
66. Yao, W., Yu, P. and Essex, C., Delayed stochastic differential equation model for quiet stand-
ing, Phys. Rev. E 63, 2001, 021902.
67. Yildirim, N., Santillan, M., Horik, D. and Mackey, M. C., Dynamics and stability in a reduced
model of the lac operon, Chaos 14, 2004, 279–292.