Stability Analysis of Nonlinear Systems
Second Edition

Vangipuram Lakshmikantham
Srinivasa Leela
Anatoly A. Martynyuk
Srinivasa Leela
State University of New York
Geneseo, NY, USA
Anatoly A. Martynyuk
National Academy of Sciences of Ukraine
Kiev, Ukraine
ISSN 2324-9749
ISSN 2324-9757 (electronic)
Systems & Control: Foundations & Applications
ISBN 978-3-319-27199-6
ISBN 978-3-319-27200-9 (eBook)
DOI 10.1007/978-3-319-27200-9
Library of Congress Control Number: 2015958665
Mathematics Subject Classification (2010): 34Dxx, 37C75, 93Dxx, 34Gxx, 34Kxx, 70K20
Springer Cham Heidelberg New York Dordrecht London
1st edition Taylor & Francis Group LLC (successor of Marcel Dekker, Inc.), Boca Raton, Florida, 1989
Springer International Publishing Switzerland 2015
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or
dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained
herein or for any errors or omissions that may have been made.
Printed on acid-free paper
Springer International Publishing AG Switzerland is part of Springer Science+Business Media
(www.birkhauser-science.com)
PREFACE
The problems of modern society are both complex and interdisciplinary. Despite the apparent diversity of problems, however, tools developed in one context are often adaptable to an entirely different situation. For example, consider Lyapunov's second method. This interesting and fruitful technique has gained increasing significance and has given decisive impetus for the modern development of the stability theory of differential equations. A manifest advantage of this method is that it does not require the knowledge of solutions and therefore has great power in applications. There are several books available expounding the main ideas of Lyapunov's second method, including some extensions and generalizations.

It is now recognized that the concept of a Lyapunov-like function and the theory of differential and integral inequalities can be utilized to study qualitative and quantitative properties of nonlinear differential equations. A Lyapunov-like function serves as a vehicle to transform a given complicated differential system into a relatively simpler one, and therefore it is enough to investigate the properties of this simpler differential system. It is also being realized that the same versatile tools are adaptable to entirely different nonlinear systems, and that other tools, such as the method of variation of parameters and the monotone iterative technique, provide equally effective methods for investigating problems of a similar nature. Moreover, interesting new notions and ideas have been introduced which seem to possess great potential. Due to the increased interdependency and cooperation among the mathematical sciences across traditional boundaries and the accomplishments achieved thus far, there is every reason to believe that many breakthroughs are waiting, offering an exciting prospect for this versatile technique to advance further.

It is in this spirit that we see the importance of our monograph.

S. Leela
A. A. Martynyuk
CONTENTS

Preface  vii

1  Inequalities  1
   1.0  Introduction  1
   1.1  Gronwall-Type Inequalities  2
   1.2  Wendroff-Type Inequalities  12
   1.3  Bihari-Type Inequalities  16
   1.4  Multivariate Inequalities  24
   1.5  Differential Inequalities  26
   1.6  Integral Inequalities  31
   1.7  General Integral Inequalities  35
   1.8  Integro-Differential Inequalities  38
   1.9  Difference Inequalities  46
   1.10 Interval-Valued Integral Inequalities  52
   1.11 Inequalities for Piecewise Continuous Functions  56
   1.12 Reaction-Diffusion Inequalities  61
   1.13 Notes  65

2  Variation of Parameters and Monotone Technique  67

3  Stability of Motion in Terms of Two Measures  135

Index  327
1
INEQUALITIES
1.0
Introduction
1.1
Gronwall-Type Inequalities
We begin with one of the simplest and most useful integral inequalities.

Theorem 1.1.1 Let m, v ∈ C[R₊, R₊], where R₊ denotes the nonnegative real line. Suppose further that, for some c ≥ 0, we have

m(t) ≤ c + ∫_{t₀}^{t} v(s) m(s) ds,  t ≥ t₀ ≥ 0.  (1.1.1)

Then,

m(t) ≤ c exp( ∫_{t₀}^{t} v(s) ds ),  t ≥ t₀.  (1.1.2)

Proof Suppose first that c > 0 and set p(t) = c + ∫_{t₀}^{t} v(s) m(s) ds. Then p(t₀) = c, m(t) ≤ p(t) and p′(t) = v(t) m(t) ≤ v(t) p(t), so that

log( c + ∫_{t₀}^{t} v(s) m(s) ds ) − log c ≤ ∫_{t₀}^{t} v(s) ds.

Hence m(t) ≤ p(t) ≤ c exp( ∫_{t₀}^{t} v(s) ds ), which is (1.1.2). The case c = 0 follows on letting c → 0⁺.
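As a quick numerical sanity check on Theorem 1.1.1 (an illustrative sketch added here, not part of the original text), the script below integrates the extremal case m′ = v m, m(t₀) = c, for an arbitrary choice of c and v, and verifies that it stays below the exponential bound (1.1.2).

```python
import numpy as np

# Numerical check of the Gronwall estimate (1.1.2).
# We take m as the extremal case m(t) = c + int_{t0}^t v(s) m(s) ds,
# i.e. m' = v m, m(t0) = c, and verify m(t) <= c * exp(int_{t0}^t v ds).
t0, c = 0.0, 2.0
v = lambda t: 1.0 + 0.5 * np.sin(t)         # arbitrary nonnegative v
t = np.linspace(t0, 5.0, 2001)
dt = t[1] - t[0]

m = np.empty_like(t); m[0] = c
for k in range(len(t) - 1):                  # Euler step for m' = v m
    m[k + 1] = m[k] + dt * v(t[k]) * m[k]

bound = c * np.exp(np.cumsum(v(t)) * dt)     # c * exp(int_{t0}^t v ds)
assert np.all(m <= bound * (1 + 1e-6)), "Gronwall bound violated"
print(m[-1], bound[-1])
```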
A differential version of this estimate is obtained as follows. Suppose that m ∈ C¹[R₊, R₊], v, h ∈ C[R₊, R₊] and

m′(t) ≤ v(t) m(t) + h(t),  m(t₀) = c ≥ 0,  t ≥ t₀.

Then

m(t) ≤ c exp( ∫_{t₀}^{t} v(s) ds ) + ∫_{t₀}^{t} h(s) exp( ∫_{s}^{t} v(τ) dτ ) ds,  t ≥ t₀.

Proof Set q(t) = m(t) exp( −∫_{t₀}^{t} v(s) ds ), so that q(t₀) = c and

q′(t) = [ m′(t) − v(t) m(t) ] exp( −∫_{t₀}^{t} v(s) ds ) ≤ h(t) exp( −∫_{t₀}^{t} v(s) ds ).

Integrating, we obtain

q(t) ≤ c + ∫_{t₀}^{t} h(s) exp( −∫_{t₀}^{s} v(τ) dτ ) ds,  t ≥ t₀,

and the stated estimate follows on multiplying by exp( ∫_{t₀}^{t} v(s) ds ).
Theorem 1.1.2 Let m, v, h ∈ C[R₊, R₊] and

m(t) ≤ h(t) + ∫_{t₀}^{t} v(s) m(s) ds,  t ≥ t₀.  (1.1.3)

Then,

m(t) ≤ h(t) + ∫_{t₀}^{t} v(s) h(s) exp( ∫_{s}^{t} v(τ) dτ ) ds,  t ≥ t₀.  (1.1.4)

If h is differentiable, then

m(t) ≤ h(t₀) exp( ∫_{t₀}^{t} v(s) ds ) + ∫_{t₀}^{t} h′(s) exp( ∫_{s}^{t} v(τ) dτ ) ds,  t ≥ t₀.  (1.1.5)

Proof Set p(t) = ∫_{t₀}^{t} v(s) m(s) ds, so that p(t₀) = 0 and

p′(t) = v(t) m(t) ≤ v(t) h(t) + v(t) p(t),  t ≥ t₀.

Setting q(t) = p(t) exp( −∫_{t₀}^{t} v(s) ds ), we get

q′(t) = [ p′(t) − v(t) p(t) ] exp( −∫_{t₀}^{t} v(s) ds ) ≤ h(t) v(t) exp( −∫_{t₀}^{t} v(s) ds ),

which implies

q(t) ≤ ∫_{t₀}^{t} h(s) v(s) exp( −∫_{t₀}^{s} v(τ) dτ ) ds,  t ≥ t₀.

Consequently, we obtain

p(t) ≤ ∫_{t₀}^{t} v(s) h(s) exp( ∫_{s}^{t} v(τ) dτ ) ds,  t ≥ t₀,

which, since m(t) ≤ h(t) + p(t), yields (1.1.4).

If h is differentiable, set instead p(t) = h(t) + ∫_{t₀}^{t} v(s) m(s) ds, so that p(t₀) = h(t₀) and p′(t) = h′(t) + v(t) m(t) ≤ h′(t) + v(t) p(t). Proceeding as before, we get

m(t) ≤ p(t) ≤ h(t₀) exp( ∫_{t₀}^{t} v(s) ds ) + ∫_{t₀}^{t} h′(s) exp( ∫_{s}^{t} v(τ) dτ ) ds,  t ≥ t₀,

which is (1.1.5). On the other hand, integrating by parts,

∫_{t₀}^{t} h′(s) exp( ∫_{s}^{t} v(τ) dτ ) ds = h(t) − h(t₀) exp( ∫_{t₀}^{t} v(τ) dτ ) + ∫_{t₀}^{t} h(s) v(s) exp( ∫_{s}^{t} v(τ) dτ ) ds,

which, in view of (1.1.5), yields (1.1.4). Thus, it is clear that assuming differentiability of h does not offer anything new.

If, in Theorem 1.1.2, h is assumed to be nondecreasing and positive, then the estimate (1.1.4) reduces to

m(t) ≤ h(t) exp( ∫_{t₀}^{t} v(s) ds ),  t ≥ t₀.  (1.1.6)

Indeed, setting w(t) = m(t)/h(t), we get from (1.1.3)

w(t) ≤ 1 + ∫_{t₀}^{t} v(s) w(s) ds,  t ≥ t₀,

so that Theorem 1.1.1 gives w(t) ≤ exp( ∫_{t₀}^{t} v(s) ds ), and (1.1.6) follows.

We can also get (1.1.6) from (1.1.4) because

h(t) + ∫_{t₀}^{t} v(s) h(s) exp( ∫_{s}^{t} v(τ) dτ ) ds ≤ h(t) [ 1 + ∫_{t₀}^{t} v(s) exp( ∫_{s}^{t} v(τ) dτ ) ds ]

= h(t) [ 1 − ∫_{t₀}^{t} (d/ds) exp( ∫_{s}^{t} v(τ) dτ ) ds ] = h(t) exp( ∫_{t₀}^{t} v(τ) dτ ),  t ≥ t₀.
Example 1.1.1 Let m ∈ C[R₊, R₊] and m(t) ≤ ∫_{t₀}^{t} ( A + B m(s) ) ds, t ≥ t₀, where A ≥ 0 and B > 0. Then,

m(t) ≤ (A/B) [ exp( B(t − t₀) ) − 1 ],  t ≥ t₀.

Similarly, if m(t) ≤ A + ∫_{t₀}^{t} ( B + c m(s) ) ds, t ≥ t₀, with A, B ≥ 0 and c > 0, then

m(t) ≤ (B/c) [ exp( c(t − t₀) ) − 1 ] + A exp( c(t − t₀) ),  t ≥ t₀.
m(s) ds,
t t0 .
t0
Then,
t t0 .
More generally, suppose that m, v, h, q ∈ C[R₊, R₊] and

m(t) ≤ h(t) + q(t) ∫_{t₀}^{t} v(s) m(s) ds,  t ≥ t₀.  (1.1.7)

Then,

m(t) ≤ h(t) + q(t) ∫_{t₀}^{t} v(s) h(s) exp( ∫_{s}^{t} v(τ) q(τ) dτ ) ds,  t ≥ t₀.  (1.1.8)

Proof We set p(t) = ∫_{t₀}^{t} v(s) m(s) ds, so that p(t₀) = 0 and

p′(t) = v(t) m(t) ≤ v(t) h(t) + v(t) q(t) p(t),  t ≥ t₀,

and consequently

p(t) ≤ ∫_{t₀}^{t} v(s) h(s) exp( ∫_{s}^{t} v(τ) q(τ) dτ ) ds,  t ≥ t₀,

which, together with (1.1.7), yields (1.1.8).
Inequalities
n
i=1
t
gi (t)
vi (s)m(s) ds,
t t0 .
t0
Then,
t
t
m(t) h(t) + G(t)
V (s)h(s) exp
V ()G() d ds,
t t0 ,
t0
vi (t).
i=1
g1 (s)m(s) ds + 2 (t)
t0
n
i=1
t
ci
gi (s)m(s) ds,
t0
t [t0 , T ],
Gronwalltype inequalities
where
t
t
K1 (t) = h(t) + 1 (t)
g1 (s)h(s) exp
t
t
g1 (s)2 (s) exp
n
ci
i=1
1
n
i=1
ci
ti
1 ()g1 () d ds,
ti
gi (s)K1 (s) ds
t0
n
i=1
provided
t0
M=
1 ()g1 () d ds,
s
t0
ti
ci
,
gi (s)K2 (s) ds
t0
t0
An integral inequality with a non-separable kernel that is differentiable can be reduced to an integro-differential inequality, which can further be reduced to a differential inequality.

Theorem 1.1.4 Suppose that h, m ∈ C[R₊, R₊], K ∈ C[R₊², R₊] and K_t(t, s) exists, is continuous and nonnegative. Let

m(t) ≤ h(t) + ∫_{t₀}^{t} K(t, s) m(s) ds,  t ≥ t₀.  (1.1.9)

Then,

m(t) ≤ h(t) + ∫_{t₀}^{t} σ(s, t₀) exp( ∫_{s}^{t} A(τ, t₀) dτ ) ds,  t ≥ t₀,  (1.1.10)

where

A(t, t₀) = K(t, t) + ∫_{t₀}^{t} K_t(t, s) ds

and

σ(t, t₀) = K(t, t) h(t) + ∫_{t₀}^{t} K_t(t, s) h(s) ds.

Proof Set p(t) = ∫_{t₀}^{t} K(t, s) m(s) ds, so that p(t₀) = 0 and p′(t) = K(t, t) m(t) + ∫_{t₀}^{t} K_t(t, s) m(s) ds. Using the fact that p(t) is nondecreasing and m(t) ≤ h(t) + p(t), we arrive at

p′(t) ≤ [ K(t, t) + ∫_{t₀}^{t} K_t(t, s) ds ] p(t) + K(t, t) h(t) + ∫_{t₀}^{t} K_t(t, s) h(s) ds,  t ≥ t₀,

and (1.1.10) follows as before.
t0
t
v2 (s) ds
t0
t
t
+
v1 (s) exp
t0
v2 () d
s
ds ,
11
Gronwalltype inequalities
where
t
v1 (t) = c3 c1 v(t) +
w(t, )d
t0
and
t
v2 (t) = c3 (t)v(t) +
w(t, )( ) d .
t0
2 , R ],
Corollary 1.1.5 Suppose that m C[R+ , R+ ], K C[R+
+
3
g C[R+ , R+ ] and Kt (t, s), gt (t, s, ) exist, are continuous and nonnegative. If
t
m(t) m(t0 )+
t s
K(t, s)m(s) ds+
t0
t t0 ,
t0 t0
then,
t
m(t) m(t0 ) exp
t t0 ,
A(s, t0 ) ds ,
t0
where
t
A(t, t0 ) = K(t, t)+
t
Kt (t, s) ds+
t0
t s
g(t, t, ) d +
t0
gt (t, s, ) d ds.
t0 t0
T
v(s)m(s) ds +
t0
t0 t T . Then, if exp
v(s)m(s) ds,
(1.1.11)
t0
T
v(s) ds < 2, we have
t0
t
m(t0 )
v(s) ds ,
m(t)
T
exp
t0
2 exp
v(s) ds
t0
t0 t T.
12
Inequalities
Proof Set p(t) equal to the right hand side of (1.1.11). We obtain
T
p (t) v(t)p(t)
v(s)m(s) ds,
t0
which yields
t
p(t) p(t0 ) exp
v(s) ds ,
t0 t T.
t0
T
T
v(s) ds , and
But p(T ) = m(t0 ) + 2 v(s)m(s) ds p(t0 ) exp
t0
t0
T
v(s)m(s) ds = p(t0 ) m(t0 ).
t0
Consequently, p(t0 )
1.2
2exp
m(t0 )
T
v(s) ds
t0
Wendroff-Type Inequalities
x y
m(x, y) c +
x x0 , y y 0 .
(1.2.1)
x 0 y0
Then,
x y
m(x, y) c exp
v(s, t) ds dt ,
x 0 y0
x x0 , y y 0 .
(1.2.2)
13
Wendorfftype inequalities
y
v(x, t)m(x, t) dt
px (x, y) =
y0
v(x, t) dt p(x, y),
y0
x y
m(x, y) p(x, y) c exp
v(s, t) ds dt ,
x x0 , y y 0 .
x 0 y0
x y
m(x, y) h(x, y) +
x x0 , y y 0 .
x 0 y0
(1.2.3)
Then, for x x0 , y y0 ,
m(x, y) h(x, y)
x y
x y
v(s, t)h(s, t) exp
+
x 0 y0
v(, ) d d ds dt.
(1.2.4)
v(s, t) ds dt ,
x x0 , y y 0 .
x 0 y0
(1.2.5)
14
Inequalities
Rx Ry
x 0 y0
and
Zy
px (x, y)
v(x, t) dt p(x, y) +
y0
Zy
y0
using the fact that m(x, t) h(x, t) + p(x, t) and p(x, t) is nondecreasing in x and y. Hence, we get
p(x, y)
Zx Zy
x 0 y0
Zx Zy
s
v(, ) d d ds dt,
Zx Zy
x 0 y0
Zx Zy
v(s, t) ds dt .
x 0 y0
x0
v(s) d(s) =
Zx1 Zx2
x01 x02
...
Zxn
x0n
15
Wendorfftype inequalities
2 , R ] and for x x ,
Theorem 1.2.3 Let m, h, v C[R+
+
0
m(x) h(s) +
Zx
v(s)m(s) ds.
(1.2.6)
x0
Then
m(x) h(s) +
Zx
v(s)h(s) exp
x0
Zx
x x0 .
v() d ds,
(1.2.7)
x x0 .
v() d ds,
(1.2.8)
x0
px1 (x) =
Zx2
...
x02
Zxn
x0n
Using the relation m(x) h(x) + p(x) and the nondecreasing character of p(x), we get
Px1
+
Zx2
x02
x
2
Z
...
...
x02
Zxn
x0n
x
n
Z
x0n
Consequently, we obtain
p(x)
Zx1 "
x01
exp
Zx
s
v() d
! Zx2
x02
...
Zxn
x0n
v(x1 , s2 , . . . , sn )
16
Inequalities
Zx
v(s)K(s) ds,
x0
m(x)
h(x) ,
x x0
K(x) exp
v(s) ds ,
x0
1.3
Bihari-Type Inequalities

The theory of Gronwall-type integral inequalities discussed in Section 1.1 can be extended to nonlinear integral inequalities of separable type, which are known as Bihari-type inequalities. In this section, we prove several such results that correspond to the results of Section 1.1.

Theorem 1.3.1 Let m, v ∈ C[R₊, R₊], g ∈ C[(0, ∞), (0, ∞)] and let g(u) be nondecreasing in u. Suppose, for some c > 0,

m(t) ≤ c + ∫_{t₀}^{t} v(s) g(m(s)) ds,  t ≥ t₀ > 0.  (1.3.1)

Then

m(t) ≤ G⁻¹( G(c) + ∫_{t₀}^{t} v(s) ds ),  t₀ ≤ t < T,

holds, where

G(u) − G(u₀) = ∫_{u₀}^{u} ds / g(s),

G⁻¹(u) is the inverse of G(u) and T = sup{ t ≥ t₀ : G(c) + ∫_{t₀}^{t} v(s) ds ∈ dom G⁻¹ }.

Proof Denote the right-hand side of (1.3.1) by p(t), so that p(t₀) = c and p′(t) = v(t) g(m(t)). Since g is nondecreasing and m(t) ≤ p(t), we get

p′(t) ≤ v(t) g(p(t)),  p(t₀) = c.

This yields

∫_{p(t₀)}^{p(t)} dz / g(z) ≤ ∫_{t₀}^{t} v(s) ds,

and consequently

m(t) ≤ p(t) ≤ G⁻¹( G(c) + ∫_{t₀}^{t} v(s) ds ),  t₀ ≤ t < T.
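For a concrete feel for Theorem 1.3.1 (a hypothetical illustration with arbitrarily chosen data, not from the book), take g(u) = u²; then, with u₀ = c, G(u) = 1/c − 1/u and G⁻¹(w) = 1/(1/c − w), so the estimate becomes m(t) ≤ c / ( 1 − c ∫_{t₀}^{t} v(s) ds ), valid as long as the denominator stays positive. The script below checks this against the extremal case m′ = v m².

```python
import numpy as np

# Bihari estimate (Theorem 1.3.1) for g(u) = u**2.
# With u0 = c we have G(u) = 1/c - 1/u, G^{-1}(w) = 1/(1/c - w), and the
# estimate reads m(t) <= c / (1 - c * int_{t0}^t v ds) while the denominator
# stays positive (this corresponds to the time T of the theorem).
t0, c = 0.0, 0.5
v = lambda t: np.exp(-t)                    # arbitrary nonnegative v
t = np.linspace(t0, 3.0, 3001)
dt = t[1] - t[0]

m = np.empty_like(t); m[0] = c              # extremal case: m' = v m**2
for k in range(len(t) - 1):
    m[k + 1] = m[k] + dt * v(t[k]) * m[k] ** 2

V = np.cumsum(v(t)) * dt
bound = c / (1.0 - c * V)                   # finite here because c * V < 1
assert np.all(m <= bound + 1e-9)
print(m[-1], bound[-1])
```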
v(s)g(m(s)) ds,
t t0 > 0.
t0
Then,
(i) if g(u) is subadditive,
1
m(t) h(t) + G
t
G(c) +
v(s) ds ,
t0
t0 t T0 < T, (1.3.2)
18
Inequalities
ZT0
v(s)g(h(s)) ds ;
t0
(ii) if h is nondecreasing,
1
m(t) h(t0 ) + G
"
G(h(t0 )) +
Zt
v(s) ds ,
t0
Rt
t0 t < T. (1.3.3)
t0
Rt
v(s)g(h(s)) ds is non-
t0
Zt
t0
v(s)g(p(s)) ds,
t0 t T0 < T.
w(t0 ) = h(t0 ).
19
Biharitype inequalities
K(t, s)g(m(s)) ds .
t0
Then
(i) if g is subadditive,
1
t
t0 t T0 < T,
v1 (s) ds ,
G(c) +
t0
t
v1 (s)g(v2 (s)) ds,
c=
v1 (t) = K(t, t) +
t0
Kt (t, s) ds
t0
and
t
v2 (t) = K(t, t)h(t) +
Kt (t, s)g(h(s)) ds ;
t0
(ii) if h is nonincreasing,
m(t) h(t) h(t0 ) + G1 G(h(t0 )) +
t
v1 (s) ds ,
t0 t < T.
t0
2 , R ] such
Theorem 1.3.4 Let m, v C[R+ , R+ ] and w C[R+
+
that
t
m(t) c +
t t0 ,
(1.3.4)
20
Inequalities
t
w(t, z exp
v(s) ds ,
t0
(1.3.5)
t0
t0
t
v(s) ds , so that we have, using (1.3.4) and (1.3.5),
p(t) exp
t0
t
v(s) ds
t0
exp
v(s) ds
t0
t
t
v(s) ds .
t0
t
v(s) ds , we get
t0
p (t) (t)g(p(t)) ,
Hence, by Theorem 1.3.1,
1
p(t) G
t
G(c) +
p(t0 ) = c .
(s) ds ,
t0
t0 t < T ,
to
21
Biharitype inequalities
t
h(s)(m(s))p ds,
v(s)m(s) ds +
t0
t t0 .
t0
Then,
s
t
m(t) cq + q
h(s) exp
t0
v() d
1
t
ds
exp
t0
v(s) ds ,
t0
for t t0 , where q = 1 p.
If the kernel K(t, s) in Theorem 1.3.3 is such that K_t(t, s) ≤ 0, then the arguments of that theorem offer only a crude estimate. However, a different approach gives a better bound, which we discuss in the next result.
Theorem 1.3.5 Suppose m C[R+ , R+ ], g C[(0, ), (0, )],
g(u) is nondecreasing in u and for some c > 0, > 0,
t
m(t) c +
t t0 .
(1.3.7)
t0
Then,
m(t) (1 + 0 )c ,
t t0 ,
and
(1.3.8)
[0, 0 ).
Proof Set the right hand side of (1.3.7) equal to p(t) so that
p(t0 ) = c and
p (t) = g(m(t)) (p(t) c) g(p(t)) p(t) + c .
(1.3.9)
22
Inequalities
g((1 + z)c) z,
d
c
= 0.
ds
0 s
z( )
0
1
c g((1
ds
0 ,
+ s)c) s
t t0 .
T0
v(s)g(m(s)) ds +
t0
v(s)g(m(s)) ds,
(1.3.10)
t0 t T0 ,
(1.3.11)
t0
t [t0 , T0 ]. Then,
m(t) G1 G(c0 ) +
t
v(s) ds ,
t0
23
Biharitype inequalities
p (t) v(t)g(p(t)) .
v(s)g(m(s)) ds,
t0
ds
= .
g(s)
t
p(t) G1 G(p(t0 )) +
v(s) ds ,
t0 t T0 ,
(1.3.12)
t0
v(s)g(m(s)) ds
t0
= p(T0 ) G1 G(p(t0 )) +
T0
v(s) ds .
t0
As a result, we get
T0
G(2p(t0 ) c) G(p(t0 ))
v(s) ds .
t0
T0
t0
Z (s) > 0 for s > 2c and Z(Kc) > 0 for suciently large K > 0.
Thus, it is clear that there exists a c0 such that 2c < c0 < Kc which
is a solution of Z(s) = 0 and therefore, p(t0 ) c0 . Hence, by (1.3.12),
it follows that the estimate (1.3.11) holds.
24
Inequalities
1
t
m(t) cq0 + q
v(s) ds
t0 t T0 ,
q = 1 p.
t0
1.4
Multivariate Inequalities
x y
m(x, y) c +
x x0 , y y0 , (1.4.1)
x 0 y0
m(x, y) G
x y
v(s, t) ds dt ,
G(c) +
(1.4.2)
x 0 y0
u
u0
given by
ds
g(s)
x y
and a, b are
v(s, t) ds dt dom G
x 0 y0
Proof Denoting the right hand side of (1.4.1) by p(x, y), we obtain
2 p(x, y)
= v(x, y)g(m(x, y)) v(x, y)g(p(x, y))
xy
(1.4.3)
25
Multivariate inequalities
and
p(x0 , y) = p(x, y0 ) = p(x0 , y0 ) = c .
Since
(1.4.4)
2p
p p
2 G(p)
= G (p)
+ G (p)
,
xy
xy
x y
1
,
g(p)
p p
,
0,
x y
v(s, t) ds dt .
(1.4.5)
x 0 y0
The desired estimate (1.4.2) easily follows from (1.4.5) and the
proof is complete.
Example 1.4.1 Let v(x, y) = exp[(xx0 )+(yy0 )] and g(u) = u2 .
Then, for x0 x a, y0 y b,
m(x, y) c[1 c(exp(x x0 ) 1)(exp(y y0 ) 1)]1 ,
where a, b satisfy a0 = exp(a x0 ) 1 > 0 and b = c +
log(1 + ca0 ) log ca0 .
The next result generalizes the inequality (1.4.1) and deals with
several independent variables. We follow the notation of Section 1.2.
n , R ], h(x) > 0 and be deTheorem 1.4.2 Let m, h, v C[R+
+
creasing in x, g C[(0, ), (0, )], g(u) be nondecreasing in u and
1
u
v g(u) g( v ) for any v > 0. Suppose further that
x
m(x) h(x) +
v(s)g(m(s)) ds,
x0
x x0 .
(1.4.6)
26
Inequalities
x0 x a ,
(1.4.7)
x0
x0
1.5
Dierential Inequalities
u(t0 ) = u0 ,
(1.5.1)
where g ∈ C[R₊ × Rⁿ, Rⁿ]. The function g is said to be quasimonotone nondecreasing if x ≤ y and x_i = y_i for some i, 1 ≤ i ≤ n, implies g_i(t, x) ≤ g_i(t, y) for each t. To appreciate this definition, note that if g(t, u) = Au, where A is an n × n matrix, quasimonotonicity of g implies that a_ij ≥ 0, i, j = 1, 2, . . . , n, i ≠ j. Let us now introduce
Differential inequalities
27
t J.
(1.5.2)
u(t0 ) = u0 +
(1.5.3)
Lemma 1.5.1 Let v, w ∈ C[J, R] and suppose that, for some fixed Dini derivative, Dv(t) ≤ w(t), t ∈ J \ S, where S is an at most countable subset of J. Then D₋v(t) ≤ w(t) on J, where D₋v(t) = lim inf_{h→0⁻} (1/h)[ v(t + h) − v(t) ].
t t0 .
28
Inequalities
Let t0 < T < for all suciently small > 0 and lim u(t, ) = r(t)
0
t [t0 , T ].
(1.5.4)
t [t0 , t1 ].
Consequently, we have
D m(t1 ) u (t1 , )
which, in turn, leads to the contradiction
g(t1 , m(t1 )) D m(t1 ) u (t1 , ) = g(t1 , u(t1 , )) + .
Hence (1.5.4) is true and the proof is complete.
To avoid repetition, we have not so far considered the lower estimate for m(t), which can be obtained by reversing the inequalities. For later reference, we shall state the following result, which yields a lower bound for m(t).

Theorem 1.5.3 Let g ∈ C[R₊ × R₊, R] and let ρ(t) be the minimal solution of (1.5.1) existing on [t₀, ∞). Suppose that m ∈ C[R₊, R₊] and Dm(t) ≥ g(t, m(t)), t ≥ t₀, where D is any fixed Dini derivative. Then m(t₀) ≥ u₀ implies m(t) ≥ ρ(t), t ≥ t₀.

Proof The proof runs parallel to the proof of Theorem 1.5.2. Instead of (1.5.3), we now have to consider solutions v(t, ε) of

v′ = g(t, v) − ε,  v(t₀) = u₀ − ε,

for sufficiently small ε > 0 on [t₀, T], and note that lim_{ε→0⁺} v(t, ε) = ρ(t) for t ∈ [t₀, T].
Differential inequalities
29
An extension of Theorem 1.5.2 to systems requires g to be quasimonotone nondecreasing, which is also a necessary condition for the existence of extremal solutions of (1.5.1). Thus, we have the following generalization of Theorem 1.5.2.

Theorem 1.5.4 Let g ∈ C[R₊ × R₊ⁿ, Rⁿ], let g(t, u) be quasimonotone nondecreasing in u for each t and let r(t) be the maximal solution of (1.5.1) existing on [t₀, ∞). Suppose that Dm(t) ≤ g(t, m(t)), t ≥ t₀, holds for a fixed Dini derivative. Then m(t₀) ≤ u₀ implies m(t) ≤ r(t), t ≥ t₀.

As mentioned earlier, the inequalities in Theorem 1.5.4 are componentwise. Instead of considering componentwise inequalities between vectors, we could utilize the notion of a cone to induce a partial order on Rⁿ and prove Theorem 1.5.4 in that framework. Naturally, this approach is more general and is useful when we deal with cone-valued functions. We shall therefore develop the theory of differential inequalities in arbitrary cones and prove a result corresponding to Theorem 1.5.4 in that setting.
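The following short script (a hypothetical numerical illustration, not part of the text) shows how Theorem 1.5.2 is typically used: for the linear system x′ = Ax + e^{−t} b with Euclidean logarithmic norm μ(A) = −1, the function m(t) = ‖x(t)‖ satisfies D⁺m ≤ −m + e^{−t}, so m(t) is dominated by the solution r(t) of the comparison equation u′ = −u + e^{−t}, u(0) = ‖x(0)‖, which here is unique and hence maximal.

```python
import numpy as np

# Comparison principle (Theorem 1.5.2): for x' = A x + e^{-t} b with
# (A + A^T)/2 = -I, the Euclidean norm m(t) = |x(t)| satisfies
# D+ m <= -m + e^{-t}, hence m(t) <= r(t), where r solves u' = -u + e^{-t}.
A = np.array([[-1.0, 2.0], [-2.0, -1.0]])    # (A + A^T)/2 = -I  =>  mu_2(A) = -1
b = np.array([1.0, 0.0])
x = np.array([1.0, 1.0])
t, dt, T = 0.0, 1e-3, 5.0

u0 = np.linalg.norm(x)
ms, rs = [], []
while t <= T:
    ms.append(np.linalg.norm(x))
    rs.append((u0 + t) * np.exp(-t))         # explicit solution of u' = -u + e^{-t}
    x = x + dt * (A @ x + np.exp(-t) * b)    # Euler step for the system
    t += dt

assert all(m <= r + 1e-2 for m, r in zip(ms, rs))
print(ms[-1], rs[-1])
```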
A proper subset K ⊂ Rⁿ is called a cone if the following properties hold:

λK ⊂ K for λ ≥ 0,  K + K ⊂ K,  K = K̄,  K ∩ {−K} = {0}  and  K⁰ ≠ ∅,  (1.5.5)

where K̄ denotes the closure of K and K⁰ is the interior of K. We shall denote by ∂K the boundary of K. The cone K induces order relations on Rⁿ defined by

x ≤_K y  iff  y − x ∈ K,  and  x <_K y  iff  y − x ∈ K⁰.  (1.5.6)
30
Inequalities
K
(1.5.7)
m(t) r(t) ,
t t0 .
(1.5.8)
31
Integral inequalities
h0
1
[w(t1 + h) w(t1 )] 0
h
1.6
Integral Inequalities
u = g(t, u) ,
u(t0 ) = u0 ,
(1.6.1)
t t0 .
t0
(1.6.2)
32
Inequalities
t
t0
m(t0 ) = v(t0 ) and v g(t, v), in view of the fact that g is nondecreasing in u. An application of Theorem 1.5.2 yields v(t) r(t),
t t0 and this completes the proof.
Corollary 1.6.1 Let the assumptions of Theorem 1.6.1 hold
except that (1.6.2) is replaced by
t
m(t) n(t) +
t t0 ,
t0
t t0
u(t0 ) = 0.
A more general result than Theorem 1.6.1 deals with a Volterra integral inequality which, in general, cannot be reduced to a differential inequality. To prove such a result, we consider the integral equation of Volterra type given by

u(t) = h(t) + ∫_{t₀}^{t} K(t, s, u(s)) ds,  (1.6.3)

where h ∈ C[R₊, R] and K ∈ C[R₊² × R, R]. One can prove the existence of extremal solutions of (1.6.3) employing arguments similar to those in the case of ordinary differential equations. However, we do need the monotone nondecreasing property of K(t, s, u) with respect to u. We shall merely state the result.
K(t, s, m(s)) ds
t0
(1.6.4)
33
Integral inequalities
t [t0 , T ],
t0
for suciently small > 0. Then, since lim u(t, ) = r(t) uniformly
0
If this is not true, since m(t0 ) < u(t0 , ), there exists a t1 > t0 such
that m(t1 ) = u(t1 , ) and m(t) u(t, ), t [t0 , t1 ]. Using the
monotone property of K, we are led to the contradiction
t1
m(t1 ) h(t1 ) +
t1
K(t1 , s, u(s, )) ds = u(t1 , )
+ +
t0
34
Inequalities
One can reduce the integral inequality (1.6.4) to an integro-differential inequality if h and K are smooth enough. Our next result deals with this situation.

Theorem 1.6.3 In addition to the assumptions on m, h and K in Theorem 1.6.2, suppose that h′(t) and K_t(t, s, u) exist, are continuous and nonnegative, and that K_t(t, s, u) is nondecreasing in u for each (t, s). Suppose that r(t) is the maximal solution of the differential equation
t
u(t0 ) = h(t0 )
t0
t
t0
t
Kt (t, s, m(s)) ds.
t0
t
v(t0 ) = h(t0 ).
t0
35
(1.6.5)
t
g(s) exp ( 1)
1 ( 1)c1
s
f ( ) d ds > 0,
a
then
c exp
t
a
m(t)
1 ( 1)c1
t
f (s)ds
g(s) exp ( 1)
4
1
(1)
f ( )d ds
(1.6.6)
for all t [a, b].
Proof Inequalities (1.6.5) for all t [a, b] can be written in the
pseudo linear form
t
m(t) c +
Applying the Gronwall–Bellman lemma (see Theorem 1.1.1) to this inequality and performing some elementary transformations, we obtain the estimate (1.6.6).
1.7
36
Inequalities
(1.7.1)
t0
u(t0 ) = 0
(1.7.2)
existing on [t0 , ).
Then, for t t0 ,
m(t) f 1 (h(t) + H(t, r (t))),
(1.7.3)
t
t0
t0 t T.
(1.7.4)
t0
In view of the assumptions (i) and (ii), the relations (1.7.1) and
(1.7.4) imply that
m(t) f 1 (h(t) + H(t, v(t, t)))
f 1 (h(t) + H(t, v(t, T ))),
t0 t T.
(1.7.5)
)
= g(T, t, m(t)), the monotone nature of g(t, s, u) in u
Since dv(t;T
dt
together with (1.7.5) yields the dierential inequality
dv(t; T )
g(T, t, f 1 (h(t) + H(t, v(t; T )))),
dt
v(t0 , T ) = 0, (1.7.6)
37
t0 t T,
(1.7.7)
t
t t0
(1.7.8)
t0
v(t; t) v(t; T ),
(1.7.9)
v(t0 ; T ) = 0. (1.7.11)
38
Inequalities
t0 t T,
(1.7.12)
t t0 ,
(1.7.13)
t0
then one could state and prove theorems that are dual to Theorems 1.7.1 and 1.7.2, with appropriate monotonicity conditions on the functions involved. In order to avoid repetition, we omit the details and give an example to illustrate the scope of the results developed.
Example 1.7.1 Let f (u) = u, H(t, u) = b(t)u1/p , g(t, s, u) =
K(s)up , so that (1.7.1) reduces to
1/p
t
K(s)(m(s))p ds
,
m(t) h(t) + b(t)
t t0 .
t0
u(t0 ) = 0,
1.8
IntegroDierential Inequalities
Integrodifferential inequalities
39
2 , R ], H C[R3 , R ], H(t, s, u)
Theorem 1.8.1 Let f C[R+
+
+
+
be nondecreasing in u for each (t, s) and for t t0 ,
t
(1.8.1)
t0
u(t0 ) = u0 0,
(1.8.2)
t0
t
u(t0 ) m(t0 ).
t0
We shall show that m(t) u(t) for t t0 . If this is not true, suppose
that for some t > t0 , m(t ) > u(t ). Then, there exists a t1 > t0
such that
u(t1 ) m(t1 ) and u (t1 ) < D + m(t1 ).
By denition of p(t, u), it is clear that m(t) p(t, u(t)) and
m(t1 ) = p(t1 , u(t1 )). Hence, using the monotone character of H,
we have
+
t1
H(t1 , s, m(s)) ds
t0
t1
f (t1 , p(t1 , u(t1 ))) +
40
Inequalities
= F (t1 , u(t1 )) +
Zt1
t0
Zt
on the interval t0 s t.
For the proof, it is enough to observe that r(t) = R(t, t0 )u(t0 ) is
the solution of (1.8.2).
Even in the case when f and H are linear, finding R(t, s) is difficult in general. Hence, a comparison result which enables us to
reduce integrodifferential inequalities to differential inequalities will
provide great advantage. This is what we shall consider next. First,
we need the following lemma.
2 , R ] satisfy
Lemma 1.8.1 Let g0 , g C[R+
+
g0 (t, u) g(t, u),
2
(t, u) R+
.
(1.8.4)
u(t0 ) = u0 0
(1.8.5)
u(T ) = u0 0
(1.8.6)
41
Integrodifferential inequalities
t [t0 , T ],
(1.8.7)
whenever r(T, t0 , u0 ) v0 .
Proof It is known that lim u(t, ) = r(t, t0 , u0 ) and lim v(t, ) =
0
u(t0 ) = u0 + ,
v(T ) = v0 ,
Since g0 g and r(T, t0 , u0 ) v0 , it is easy to see that for a suciently small > 0, we have
u(t, ) < v(t, ),
T t < T,
t0 t T .
t < t T
42
Inequalities
2 , R], H
Let m C[R+ , R+ ], f C[R+
Theorem 1.8.2
3 , R] and
C[R+
t
D m(t) f (t, m(t)) +
t I0 ,
(1.8.8)
t0
(1.8.9)
t
F (t, u; t0 ) = f (t, u) +
H(t, s, u) ds
(1.8.10)
t0
u(t0 ) = u0 ,
(1.8.11)
t t0 .
(1.8.12)
solution of
u = F (t, u; t0 ) + ,
u(t0 ) = u0 + ,
t0 t T.
t0 s < t
(1.8.13)
Integrodifferential inequalities
43
t0 s t .
t0 s t .
K(t, s, u) ds,
t0
and
K(t, s, u) = H(t, s, (s, t, u)),
where (s, T, v0 ) is the left maximal solution of (1.8.6) existing on
t0 t T . Then, m(t) r(t), t t0 , whenever m(t0 ) u0 , where
r(t) is the maximal solution of (1.8.11) existing on [t0 , ).
44
Inequalities
t I0 .
t0
u(t0 ) = u0 ,
t0
t
H(t, s) ds.
t0
where B(t, t0 ) = +
t
t0
45
Integrodifferential inequalities
t
u(t0 ) = u0
t0
u
u0
ds
c(s)
and
t t0 .
t0
2 , R ] and K (t, s)
where m C[R+ , R+ ], h C[R+ , R+ ], K C[R+
+
t
exists, is continuous and nonnegative. Then,
t
t
m(t) h(t) +
(s, t0 ) exp
d(, t0 ) d
t t0 ,
t0
ds,
t
Kt (t, s) ds and
t0
t
(t, t0 ) = K(t, t)h(t) +
46
Inequalities
H(t, s, 0) ds,
t t0 ,
t0
existing on [t0 , ).
1.9
Difference Inequalities
un+1 g(n, un )
(1.9.1)
n n0 .
(1.9.2)
for all
47
Difference inequalities
Proof Suppose that (1.9.2) is not true. Then, since y_{n₀} ≤ u_{n₀}, there exists a k ∈ N⁺_{n₀} such that y_k ≤ u_k and y_{k+1} > u_{k+1}. It then follows, using (1.9.1) and the monotone character of g, that

g(k, u_k) ≤ u_{k+1} < y_{k+1} ≤ g(k, y_k) ≤ g(k, u_k),

which is a contradiction. Hence the theorem is proved.
Normally, when applying Theorem 1.9.1, one of the relations in (1.9.2) is an equation and, correspondingly, y_n (or u_n) is the solution of the difference equation. Let us now consider the simplest difference equation given by

Δy(x) = g(x),  (1.9.3)

where Δy(x) = y(x + 1) − y(x), y, g : J_x⁺ → R and J_x⁺ = {x, x + 1, . . . , x + k, . . .}. Then the solution of (1.9.3) is a function y such that (1.9.3) is satisfied, and we denote it by

y(x) = Δ⁻¹ g(x),

where Δ⁻¹ is called the antidifference operator. This solution y(x) is not unique, since y(x) + w(x), w(x) being an arbitrary function of period one, is also a solution of (1.9.3). It is also easy to compute that
n
X
i=0
(1.9.4)
yi = 1 yi |i=n+1
.
i=0
(1.9.5)
x1
Q
t=x0
48
Inequalities
q(x)
If z(x)
p(x) = y(x), and p(x+1) = g(x), the equation (1.9.6) now assumes
the form (1.9.3). Consequently, the solution of (1.9.6) is
" x1
#
X q(s)
1
z(x) = P (x)[ g(x) + z0 ] = P (x)
+ z0
(1.9.7)
p(s + 1)
s=x
0
x1
X
q(x)
s=x0
x1
Y
p(t)
t=s+1
+ z0
x1
Y
p(t).
t=x0
We are now in a position to deduce from Theorem 1.9.1 the discrete version of the Gronwall inequality.

Corollary 1.9.1 Let n ∈ N⁺_{n₀}, k_n ≥ 0, p_n ≥ 0 and

y_{n+1} ≤ y_{n₀} + Σ_{s=n₀}^{n} ( k_s y_s + p_s ).

Then, for all n ≥ n₀,

y_n ≤ y_{n₀} exp( Σ_{s=n₀}^{n−1} k_s ) + Σ_{s=n₀}^{n−1} p_s exp( Σ_{τ=s+1}^{n−1} k_τ ).

Proof Consider the comparison equation

u_{n+1} = u_{n₀} + Σ_{s=n₀}^{n} ( k_s u_s + p_s ),  u_{n₀} = y_{n₀}.

This is equivalent to

Δu_n = k_n u_n + p_n,

the solution of which, based on (1.9.7), is

u_n = u_{n₀} Π_{s=n₀}^{n−1} ( 1 + k_s ) + Σ_{s=n₀}^{n−1} p_s Π_{τ=s+1}^{n−1} ( 1 + k_τ ).

By Theorem 1.9.1, y_n ≤ u_n, and the stated estimate follows on using 1 + k ≤ e^k.
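The computation below (an illustrative sketch with randomly chosen nonnegative data, not from the original) builds the equality case of the hypothesis of Corollary 1.9.1 and verifies the exponential estimate.

```python
import numpy as np

# Discrete Gronwall inequality (Corollary 1.9.1): if
#   y_{n+1} <= y_{n0} + sum_{s=n0}^{n} (k_s y_s + p_s),
# then y_n <= y_{n0} * exp(sum_{s<n} k_s) + sum_{s<n} p_s * exp(sum_{tau>s} k_tau).
rng = np.random.default_rng(0)
N = 50
k = 0.1 * rng.random(N)                      # nonnegative coefficients
p = 0.2 * rng.random(N)
y = np.empty(N); y[0] = 1.0
for n in range(N - 1):                       # build y with equality in the hypothesis
    y[n + 1] = y[0] + np.sum(k[: n + 1] * y[: n + 1] + p[: n + 1])

bound = np.empty(N); bound[0] = y[0]
for n in range(1, N):
    bound[n] = y[0] * np.exp(k[:n].sum()) + sum(
        p[s] * np.exp(k[s + 1: n].sum()) for s in range(n)
    )
assert np.all(y <= bound + 1e-12)
print(y[-1], bound[-1])
```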
49
Difference inequalities
n1
hs W (ys ),
s=n0
then, for n N1 ,
1
yn G
G(y0 ) + M
n1
hs ,
s=n0
n1
s=n0
Vn
W (Vn )
n1
hs .
s=n0
n1
hs .
Hence, for n N1 , Vn G1 G(y0 ) + M
s=n0
s=n0
s=n0
50
Inequalities
un0 = 1,
k=n0
so that we have
un = an
hy
Pn
n1
X
bk
k=n0
yk i
.
Pk
By setting xn = un +
n1
P
bk uk , we obtain
k=n0
xn = un + bn un an xn + bn un (an + bn )un ,
form which one gets xn+1 (1 + an + bn )un , and therefore,
xn
n1
Y
(1 + ak + bk ).
k=n0
Consequently we arrive at
un an
n1
Y
(1 + ak + bk ),
k=n0
which implies
un 1 +
n1
X
s=n0
as
n1
X
k=n0
(1 + ak + bk ).
Difference inequalities
51
n1
yn+1 g yn ,
yj ,
yj ,
j=0
j=0
un+1 = g un ,
uj ,
uj ,
j=0
j=0
yn un .
yk+1 g yk ,
k1
j=0
yj ,
k2
yj g uk ,
j=0
k1
uj ,
j=0
k2
uj = uk+1 .
j=0
yj uj ,
j = 0, 1, . . . , k.
52
Inequalities
Proof Suppose that the conclusion is not true. Then there exists an index m ≥ k such that y_{m+1} > u_{m+1} and y_j ≤ u_j, j ≤ m. It then follows that

y_{m+1} ≤ g(y_m, y_{m−1}, . . . , y_{m−k}) ≤ g(u_m, u_{m−1}, . . . , u_{m−k}) = u_{m+1} < y_{m+1},

which is a contradiction.
1.10
0ta
implies
Zt
0
Z(s) ds
Zt
0
Y (s) ds.
53
We note also that if Y (t) = [y(t), y(t)], then the interval integral of
Y (t) is an interval between the lower Darboux integral of y(t) and
the upper Darboux integral of y(t).
A basic result which uses the inclusion monotone property of interval mappings is the following theorem. Before stating the theorem,
let us introduce some convenient notation.
Let BIn [t0 , t0 + a] denote the set of all bounded ndimensional
interval vector-valued functions on [t0 , t0 + a]. Suppose that a > 0
and U, H BIn [t0 , t0 + a] with
B < b1 U (t), H(t) b2 < B
for all
t [t0 , t0 + a],
(1.10.1)
(1.10.2)
Now, define
ph, [u](t) = h(t) +
Zt
(1.10.3)
t0
Zt
(1.10.4)
t0
for all
t [t0 , t0 + a].
(1.10.5)
54
Inequalities
for all
t [t0 , t0 + ].
(1.10.6)
P i [U ] P i [B]
for all
i, j = 0, 1, 2, . . .
(1.10.7)
Moreover,
and
(1.10.8)
and
U U .
(1.10.9)
Finally, U contains the set of all fixed points such that u(t) B,
t [t0 , t0 + a] of (1.10.3) and if the set of all fixed points of (1.10.3)
contains U and is an element of BIn [t0 , t0 + a], then
P i [U ] U {all fixed points in
B} U P j [B],
(1.10.10)
for all i, j = 0, 1, 2, . . .
Proof The existence of a positive such that (1.10.6) holds follows
from the relation
P [B](t) H(t) + M (t t0 ).
(1.10.11)
t [t0 , t0 + ],
(B)k (b2 )k M k (t t0 )
ij
55
and
U P [U ] implying P j [U ] P i [U ]
for i j.
i = 0, 1, 2, . . . ,
and thus u U .
If the set of all such fixed points is an element of BIₙ[t₀, t₀ + a] and if it contains U, then it also contains Pⁱ[U] and U as well, by the inclusion argument used before. We can summarize these results by (1.10.10), and thus the proof is complete.

This theorem is an extension of Theorems 1.7.1 and 1.7.2 to interval mappings and contains their dual results, which are obtained by reversing the inequalities.

The following corollary of Theorem 1.10.1 is a generalization of the Gronwall inequality to interval maps.
Corollary 1.10.1 Let Z be an interval vector-valued function which is bounded, i.e. ‖Z(t)‖ ≤ b for t ∈ [t₀, t₀ + a]. Suppose that A is an interval vector and M is an interval matrix. If

Z(t) ⊆ ∫_{t₀}^{t} ( M Z(s) + A ) ds,  t ∈ [t₀, t₀ + a],

then

Z(t) ⊆ Σ_{i=0}^{∞} Mⁱ A (t − t₀)^{i+1} / (i + 1)!,  t ∈ [t₀, t₀ + a].

1.11
Inequalities for Piecewise Continuous Functions
(s)m(s) ds +
i m(ti )
(1.11.1)
t0 <ti <t
t0
t
(1 + i ) exp
t0 <ti <t
(s) ds ,
t t0 .
t0
(s) ds ,
t0
t [t0 , t1 ]
(1.11.2)
57
(s)m(s) ds + 1 m(t1 )
t0
s
t1
m(t0 ) +
(s)m(t0 ) exp
()d
t0
t0
t1
t
(s)m(s) ds + 1 m(t0 ) exp
ds
(s) ds
t1
t0
t1
= m(t0 )(1 + 1 ) exp
(s) ds
t
+
t0
(s)m(s) ds.
t1
t
(s) ds exp
(s) ds
t0
t1
t
= m(t0 )(1 + 1 ) exp
(s) ds .
t0
58
Inequalities
where
i=1
t [ti , ti+1 ),
i = 0, 1, 2, . . .
i=1
t t0 ,
u(t0 ) = u0 0,
(1.11.3)
existing on [t0 , ).
Proof By Theorem 1.5.2, we have
m(t) r0 (t),
t [t0 , t1 ),
m(t) r1 (t),
t [t1 , t2 ),
(1.11.4)
and
where r0 (t) and r1 (t) are the maximal solutions of
u = g(t, u)
(1.11.5)
t [t1 , t2 ),
(t1 , m(t
1 ) + 1 ). By (1.11.4), m(t1 ) r0 (t1 ) and therefore, again
applying Theorem 1.5.2, we get
m(t) r12 (t),
t [t1 , t2 ),
59
0 (t) is well dened and now, by the monotonicity of g(t, u), we have
0 (t) = r0 (t) = g(t, r0 (t)) g(t, r0 (t)+1 ) = g(t, 0 (t)),
t [t0 , t1 ],
and
(t) = g(t, r12 (t)) = g(t, 0 (t)),
0 (t) = r12
t [t1 , t2 ).
Hence
0 (t) g(t, 0 (t)),
t [t0 , t2 ),
t [t0 , t2 ),
m(t) r t, t0 , m(t0 ) +
i ,
t t0 ,
i=1
60
Inequalities
2 , R], : R R and
Theorem 1.11.3 Assume that g C[R+
+
k
k (u) is nondecreasing in u, m : R+ R+ is continuous for t = k ,
left continuous at k and satises
t = k ,
t > t0 ,
with
m(k+ ) k (m(k )),
k = 1, 2, . . . ,
where D is any xed Dini derivative. Let r(t) be the maximal solution of (1.11.6) existing on [t0 , ). Then m(t+
0 ) u0 implies that
m(t) r(t), t t0 .
Proof By Theorem 1.5.2, we have m(t) r(t), t (t0 , 1 ] and
hence m(1 ) r(1 ). Consequently,
m(1+ ) 1 (m(1 )) 1 (r(1 )) = r(1+ ).
Similar arguments show that
for t (k , k+1 ] [t0 , ),
m(t) r(t),
t
dk exp
t0 <k <t
t0
dk exp
s<k <t
t0 <k <t
k <i <t
p(s) ds
t0
t
t
p()d q(s) ds
t
di exp
p(s) ds
k
bk .
Reactiondiffusion inequalities
1.12
61
Reaction-Diusion Inequalities
h
We shall always assume that an outer normal exists on H1 and the
functions in question have outer normal derivatives on H1 .
If u C[H, R] and if the partial derivatives ut , ux , uxx exist and
are continuous in H, then we shall say that u C. The following
lemma is basic to our discussion.
Lemma 1.12.1 Assume that
(i) m C and m(t, x) < 0 on H0 ;
(ii)
m(t,x)
on
[t0 , t1 )
and m(t1 , x1 ) = 0.
62
Inequalities
i j mxi xj (t1 , x1 ) 0,
Rn .
i,j=1
(Qij Rij )i j 0,
Rn
i,j=1
implies
f (t1 , x1 , u, P, Q) f (t1 , x1 , u, P, R).
If this property holds for every (t, x) H, then f is said to be elliptic
in H.
We can now prove the following comparison results.
Theorem 1.12.1 Suppose that
2
(i) v, w C, f C[H R Rn Rn , R], f is elliptic in H and
vt f (t, x, v, vx , vxx ), wt f (t, x, w, wx , wxx ) in H;
v
(ii) v < w on H0 and
< w
on H1 .
Then, v < w on H, if one of the inequalities in (i) is strict.
Proof Assume that one of the inequalities in (i) is strict. It is
enough to show that m = v w satises the hypotheses of Lemma
1.12.1. It is easy to see that conditions (i) and (ii) of Lemma 1.12.1
hold. We only need to verify condition (iii). Let (t1 , x1 ) H be such
that
m(t1 , x1 ) = 0 = mx (t1 , x1 )
and
n
i,j=1
i j mxi xj (t1 , x1 ) 0,
Rn .
63
Reactiondiffusion inequalities
Rn .
i,j=1
small > 0,
either
(a) zt > f (t, x, v, vx , vxx ) f (t, x, v z, vx zx , vxx zxx )
or
(b) zt > f (t, x, w + z, wx + zx , wxx + zxx ) f (t, x, w, wx , wxx )
holds on H, where v, w C.
Theorem 1.12.2 Let the assumption (i) of Theorem 1.12.1 hold
and suppose that (C0 ) is satised. Then the relation
vw
on
H0
and
v
w
on
H,
imply v w on H.
= w + z. Then,
Proof Assume that (b) of (C0 ) holds and set w
we have
w
x , w
xx )
w
t = wt + zt f (t, x, w, wx , wxx ) + zt > f (t, x, w,
z
v
on H. Also, v < w
on H0 and on H1 , w = w
+ + .
Thus, the functions v and w
satisfy the assumptions of Theorem
64
Inequalities
n
i,j=1
n
j=1
and
zt azxx bzx (N A)z = LM z > Lz.
Consequently, using Lipschitz condition of F , we arrive at
zt > [azxx + bzx ] + F (t, x, w + z) F (t, x, w)
which is exactly the condition (b) of (C0 ).
If H1 is empty so that H0 = H, then assumption (C0 ) can be
replaced by a one sided Lipschitz condition
(C1 )f (t, x, u, P, Q) f (t, x, v, P, Q) L(u v),
u v.
on
H.
65
Notes
on
H1
1.13
Notes
66
Inequalities
2
VARIATION OF PARAMETERS
AND MONOTONE TECHNIQUE
2.0
Introduction
67
68
Section 2.9 considers nonlinear integro-differential equations, investigates the problem of continuity and differentiability of solutions with respect to initial values, obtains a nonlinear variation of parameters formula and discusses the qualitative behavior of solutions of perturbed integro-differential systems as an application of this formula. Stability in variation is studied for integro-differential systems in Section 2.10, where, given a linear integro-differential system, a method of finding an equivalent linear differential system is discussed; exploiting this approach, the stability properties of nonlinear integro-differential systems are investigated.

Finally, Section 2.11 deals with stability results for difference equations. For this purpose, we develop suitable variation of parameters formulae and employ the corresponding theory of difference inequalities.
2.1
x(t0 ) = x0 ,
t0 R+ ,
(2.1.1)
y(t0 ) = x0 ,
(2.1.2)
x(t, t0 , x0 )
x0
v(t0 ) = x0 ,
(2.1.3)
69
(2.1.4)
t0
y(t, t0 , x0 ) = x(t, t0 , x0 )
t
+ (t, t0 , v(s))1 (s, t0 , v(s))R(s, y(s, t0 , x0 )) ds
t0
(2.1.5)
for t t0 .
Proof Let x(t, t0 , x0 ) be the unique solution of (2.1.1) existing for
t t0 . The method of variation of parameters requires determining
a function v(t) such that
y(t, t0 , x0 ) = x(t, t0 , v(t)),
v(t0 ) = x0 ,
(2.1.6)
x(t, t0 , v(t))
v (t)
x0
70
t0
(2.1.7)
t0
(i) (t, t0 , x0 ) =
71
x(t, t0 , x0 )
exists and is solution of
x0
y = H(t, t0 , x0 )y,
(2.1.8)
x(t, t0 , x0 )
exists, is the solution of (2.1.8) with
t0
x(t0 , t0 , x0 )
= f (t0 , x0 )
t0
and satises the relation
x(t, t0 , x0 )
+ (t, t0 , x0 )f (t0 , x0 ) = 0,
t0
t t0 .
(2.1.9)
(2.1.10)
t0
x(t, t0 , x0 )
. Furthermore, the relax0
tions (2.1.5) and (2.1.10) are equivalent.
(2.1.11)
in view of (2.1.9). Noting that x(t, t, y(t)) = y(t) and R(s, y(s)) =
y (s) f (s, y(s)), the desired relation (2.1.10) follows by integrating
(2.1.11) from t0 to t. Now, to show the equivalence of (2.1.5) and
(2.1.10), rst note that for t0 s t,
x(t, s, y(s)) = x(t, t0 , v(s)),
(2.1.12)
72
(2.1.13)
holds for t t0 .
Proof Set w(s) = x(t, t₀, y₀ s + (1 − s) x₀) for 0 ≤ s ≤ 1, so that

dw(s)/ds = Φ(t, t₀, y₀ s + (1 − s) x₀) (y₀ − x₀).

Integrating this relation from 0 to 1 yields (2.1.13).
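The mean value relation (2.1.13) can be checked numerically. The sketch below (a hypothetical scalar example, not from the text) takes f(t, x) = −x³ + sin t, computes Φ = ∂x/∂x₀ by integrating the variational equation along solutions, and compares the two sides of (2.1.13).

```python
import numpy as np

# Numerical check of the mean value relation (2.1.13) for the scalar ODE
#   x' = f(t, x) = -x**3 + sin(t),
# where Phi(t, t0, x0) = dx(t, t0, x0)/dx0 is obtained by integrating the
# variational equation Phi' = f_x(t, x(t)) Phi, Phi(t0) = 1, along the solution.
f = lambda t, x: -x ** 3 + np.sin(t)
fx = lambda t, x: -3.0 * x ** 2

def solve(x0, t0=0.0, T=2.0, n=4000):
    """Return x(T, t0, x0) and Phi(T, t0, x0) by an Euler sweep."""
    dt = (T - t0) / n
    x, phi, t = x0, 1.0, t0
    for _ in range(n):
        x, phi, t = x + dt * f(t, x), phi + dt * fx(t, x) * phi, t + dt
    return x, phi

x0, y0 = 0.5, 1.2
xT, _ = solve(x0)
yT, _ = solve(y0)

# Right-hand side of (2.1.13): integral over s of Phi evaluated on the segment.
s_grid = np.linspace(0.0, 1.0, 21)
phis = np.array([solve(y0 * s + (1 - s) * x0)[1] for s in s_grid])
rhs = np.trapz(phis, s_grid) * (y0 - x0)

print(yT - xT, rhs)      # the two numbers agree up to discretization error
```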
Now we shall prove the promised result.
Theorem 2.1.5 Under the assumptions of Theorem 2.1.1, if
y(t, t0 , y0 ) and x(t, t0 , x0 ) are solutions of (2.1.1) and (2.1.2) respec-
73
Estimates of solutions
y(t, t0 , y0 ) = x(t, t0 , x0 ) +
0
(2.1.14)
t
(y0 x0 ) +
Proof
y(t, t0 , y0 ) = x(t, t0 , x0 ) +
and
1
(t, t0 , y0 s + (1 s)x0 ds (y0 x0 ),
x(t, t0 , y0 ) = x(t, t0 , x0 ) +
0
2.2
Estimates of Solutions
(2.2.1)
t0
(t, x) R+ Rn ,
(2.2.2)
t
m(t) m(t0 ) +
t t0 ,
(2.2.3)
74
t t0 ,
(2.2.4)
u(t0 ) = u0 0,
(2.2.5)
existing on [t0 , ).
Instead of using the integral equation (2.2.1), if we utilize the
dierential equation (2.1.1) directly, we get
m+ (t) = f (t, x(t, t0 , x0 )) g(t, m(t)),
t t0 ,
(2.2.6)
where m′₊(t) is the right-hand derivative of m(t). Then, Theorem 1.5.2 provides the same estimate (2.2.4) without the extra assumption of the nondecreasing nature of g.

Note also that (2.2.2) demands that g be nonnegative, which in turn implies that the solutions r(t, t₀, u₀) of (2.2.5) are nondecreasing. Thus the estimate (2.2.4) does not provide the best possible information. To remove this drawback, we need to replace (2.2.2) by

[x, f(t, x)]₊ ≡ lim_{h→0⁺} (1/h) [ ‖x + h f(t, x)‖ − ‖x‖ ] ≤ g(t, ‖x‖).

For example, relative to the norm ‖x‖ = Σ_{i=1}^{n} |x_i|, the logarithmic norm of an n × n matrix A = (a_ik) is given by

μ(A) = sup_k [ Re a_kk + Σ_{i≠k} |a_ik| ].

Also, we note that ‖x‖ [x, y]₊ = (x, y)₊, the generalized inner product. For more details on μ(A), the directional derivatives [x, y]₊ and the generalized inner products (x, y)₊ in an arbitrary Banach space, see Lakshmikantham and Leela [1, 2]. The relation (2.2.6) yields the differential inequality

Dm(t) ≤ g(t, m(t)),  t ≥ t₀,

for a suitable Dini derivative Dm(t), which implies (2.2.4) by Theorem 1.5.2 as before. We observe that g in (2.2.6) need not be nonnegative, and consequently the bound obtained in (2.2.4) is much better. These considerations prove the following result.
Theorem 2.2.1 Assume that either (2.2.2) or (2.2.6) holds.
Then if x(t, t0 , x0 ) is any solution of (2.1.1) existing for t t0 and
r(t, t0 , u0 ) is the maximal solution of (2.2.5) existing on [t0 , ), we
have the estimate (2.2.4).
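To illustrate Theorem 2.2.1 with the logarithmic norm choice g(t, u) = μ(A)u (an assumed constant-coefficient example, not taken from the book), the script below verifies the resulting estimate ‖x(t)‖₁ ≤ ‖x₀‖₁ exp(μ₁(A)(t − t₀)) for the ℓ¹ norm.

```python
import numpy as np

# Estimate (2.2.4) with g(t, u) = mu(A) * u for a constant-coefficient linear
# system, using the logarithmic norm associated with the l1 vector norm:
#   mu_1(A) = max_k ( a_kk + sum_{i != k} |a_ik| )   (real A).
# Then ||x(t)||_1 <= ||x_0||_1 * exp(mu_1(A) * (t - t0)).
def mu1(A):
    d = np.diag(A).copy()
    col = np.sum(np.abs(A), axis=0) - np.abs(np.diag(A))
    return np.max(d + col)

A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])
x = np.array([1.0, -1.0])
mu = mu1(A)                        # here mu1(A) = -1.5 < 0: the bound decays

t, dt, T = 0.0, 1e-3, 4.0
x0_norm = np.abs(x).sum()
while t < T:
    x = x + dt * (A @ x)           # Euler step
    t += dt
    assert np.abs(x).sum() <= x0_norm * np.exp(mu * t) + 1e-6

print(np.abs(x).sum(), x0_norm * np.exp(mu * T))
```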
We shall next utilize the various nonlinear variation of parameters
formulae to obtain estimates of solutions of the perturbed system
(2.1.2).
Theorem 2.2.2 In addition to the assumptions of Theorem 2.1.1,
suppose that
1 (t, t0 , x0 )R(t, x(t, t0 , x0 )) g(t, x0 )
(2.2.7)
t t0 ,
(2.2.8)
t t0 .
(2.2.9)
76
(2.2.10)
t t0 .
(2.2.11)
t t0 ,
(iii) r(t, t0 , x0 ), r0 (t, t0 , a(x0 )x0 ) are the maximal solutions
of
u = g(t, u),
u = g0 (t, u),
u(t0 ) = x0 ,
u(t0 ) = a(x0 )x0 ,
t t0 .
(2.2.12)
77
Estimates of solutions
1
x(t, t0 , x0 ) =
(t, t0 , x0 s) ds x0
t t0 .
(2.2.13)
Furthermore, it follows from (2.1.3) and in view of (i) and (ii), that
t
v(t) x0 +
t
x0 +
t t0 .
t0
t t0 .
(2.2.14)
t0
t
a(x0 )x0 +
78
existing for t t0 .
Then, for any solution y(t, t0 , x0 ) of (2.1.2) we have
y(t, t0 , x0 ) r(t, t0 , a(x0 )x0 ),
t t0 .
(2.2.15)
u(t0 ) = u0
t t0 .
(2.2.16)
79
Estimates of solutions
t
t0
y(t)
1 ( 1)y0
1
t
b(s)ds
t
c(s) exp ( 1) b( )d ds
1
1
t0
(2.2.17)
holds true for all t t0 0 whenever
t
( 1)y0
t
c(s) exp ( 1)
1
t0
b( )d ds < 1.
(2.2.18)
80
t
c(s)y(s) ds.
b(s)y(s) ds +
t0
(2.2.19)
t0
b(s) + c(s)y(s)1 y(s) ds,
(2.2.20)
t0
b(s) + c(s)y(s)1 ds
(2.2.21)
t0
for all t t0 0.
Further, estimating the expression
t
c(s)y(s)1 ds
exp
t0
y(t0 ) = y0 .
c(s) ds > 0
tk
(2.2.22)
81
Estimates of solutions
c(s)y(s) ds.
t0
c(s) ds > 0
t0
y(t)
1 ( 1)y0
1
t
c(s)ds
1
1
(2.2.23)
t0
for all t t0 0.
Corollary 2.2.2 In system (2.1.2) let R(t, y) B(t, y)y, where
B : R+ Rn Rnn is an n n-matrix continuous with respect to
(t, x) R+ Rn .
Consider a system of non-autonomous linear equations with
pseudo-linear perturbation
dy
= f (t, y) + B(t, y)y,
dt
y(t0 ) = y0 .
(2.2.24)
(2.2.25)
82
ky(t)k
ky0 k exp
1 ky0 k
Rt
Rt
b(s)ds
h(s) exp
Rs
b( )d ds
which holds true for the values of (t) [0, ) for which
1 ky0 k
Zt
h(s) exp
Zs
b( ) d ds > 0.
dy X
=
Ak (t)y k ,
dt
y(t0 ) = y0 ,
(2.2.26)
k=1
k = 1, 2, . . . , n.
(2.2.27)
ky(t)k ky0 k +
Zt X
n
ky0 k +
0 k=1
Zt n
0 k=1
kAk (s)kky(s)kk ds
(2.2.28)
bk (s)ky(s)kk ds.
83
1 (n 1)
Zt
b1 (s) ds
0
t
Z X
n
0 k=2
k1
bk (s)ky0 k
1 (n 1)
0 k=2
exp (k 1)
Zt
b1 ( )d ds
1
n1
(2.2.29)
Zt X
n
"
k1
bk (s)ky0 k
exp (k 1)
Zt
0
b1 ( )d ds > 0.
2.3
The assumption (2.2.2) of Theorem 2.2.1 is strong enough to guarantee global existence of solutions of (2.1.1). In fact, we can prove
the following result.
84
t0 t < ,
(2.3.1)
where r(t) = r(t, t0 , u0 ). For any t1 , t2 such that t0 < t1 < t2 < ,
we see, by using (2.3.1) and the monotone nature of g, that
t2
x(t2 ) x(t1 )
g(s, x(s)) ds
t1
(2.3.2)
t2
in view of (2.3.1) and (2.3.2) that lim r(t) exists and the proof is
t
complete.
85
t Ix ,
(2.3.3)
86
x(t0 ) = z0 (cz ).
t [t0 , cz + ).
x() = x ,
(2.3.4)
has a solution.
Proof We shall construct a solution x(t) of (2.1.1) such that
lim x(t) = x . We shall rst observe that for every (t0 , )
g(s, ) ds < .
(2.3.5)
t0
t
r r(t) = +
t
g(s, r(s)) ds +
t0
g(s, ) ds.
t0
87
(2.3.6)
T + n t < .
(2.3.7)
and consequently
t2
t2
g(s, Rn (s)) ds
r =
t1
g(s, 2r ) ds
t1
g(s, 2r ) ds.
T
This contradicts (2.3.6). Now, an argument similar to that of Theorem 2.1.1 shows that xn (t) exists on [T, T + n].
88
t2
f (s, xn (s)) ds 2r
t1
2r
g(s, 2r ) ds
t1
g(s, 2r ) ds > r
T
and passing to the limit, we see that x(t) is a solution of (2.3.4). Since
lim x(t) = x() exists and lim xnk (t) = x(t), xnk (T + n) = x ,
nk
Stability criteria
2.4
89
Stability Criteria
90
and x(t) ,
t [t0 , t1 ].
t [t0 , t1 ],
u(t0 ) = u0 0.
(2.4.1)
91
Stability criteria
kx(t, t0 , x0 )k Kkx0 k,
t t0 ,
kx0 k < .
if
(2.4.2)
Zt
Zt
t0
t0
Thus
Kkv(t)k Kkx0 k + K
Zt1
g(s, Kkv(s)k) ds
t0
t [t0 , t1 ],
.
K
Now, the relations (2.1.5) and (2.4.2) yield, because of (i) and (ii),
which is a contradiction. Hence kv(t)k < for t t0 , if kx0 k <
ky(t, t0 , x0 )k Kkx0 k + K 2
Zt1
t0
t t0 .
t t0 ,
92
where G(w) =
Ru
u0
t0 t t1
and ky(t1 , t0 , x0 )k = ,
Zt
t0
m(t) kx0 kG
g(kx0 k)
G(1) +
kx0 k
Zt
t0
(s) ds ,
t0 t t1
93
t t0
2.5
u(0) = u0 ,
t J = [0, T ].
(2.5.1)
v(0) u0
t J}.
If f is quasimonotone nondecreasing in u, then there exists a solution u(t) of (2.5.1) such that v(t) u(t) w(t) on J, provided
v(0) u(0) w(0).
94
i
v(t) w(t) and vi (t) = i
(2.5.2)
w fi (t, ) for all such that
i
v(t) w(t) and wi (t) = i .
Theorem 2.5.2 Let v, w C 1 [J, Rn ] with v(t) w(t) on
J satisfying (2.5.2). Let f C[, Rn ]. Then there exists a solution u of (2.5.1) such that v(t) u(t) w(t) on J provided
v(0) u(0) w(0).
Since the assumptions of Theorem 2.5.1 imply the assumptions of
Theorem 2.5.2, it is enough to prove Theorem 2.5.2.
Proof of Theorem 2.5.2 Consider P : J Rn Rn dened by
Pi (t, u) = max{vi (t), min[ui , wi (t)]},
for each
i = 1, 2, . . . , n.
on [0, t1 )
95
|xi yi |.
i=1
Proof We shall rst assume that v, w in (2.5.2) satisfy strict inequalities and v(0) u(0) w(0). If the conclusion is false, there
exists a t1 > 0 and an i, 1 i n such that v(t1 ) u(t1 ) w(t1 )
and either vi (t1 ) = ui (t1 ) or ui (t1 ) wi (t1 ). Then,
fi (t1 , u(t1 )) = ui (t1 ) vi (t1 ) < fi (t1 , 1 , . . . , vi (t1 ), . . . , n )
= fi (t1 , u(t1 ))
or
fi (t1 , u(t1 )) = ui (t1 ) wi (t1 ) > fi (t1 , 1 , . . . , wi (t1 ), . . . , n )
= fi (t1 , u(t1 ))
which leads to a contradiction. In order to prove the conclusion in
case of nonstrict inequalities, consider
w
i (t) = wi (t) + e(n+1)Li t ,
96
for each
i.
and
i = w
i (t). Here we have
used the fact that
|
j Pj (t, )| e(n+1)Lj t ,
for each
j.
and
i = vi (t). Since v(0) < u0 < w(0),
t J. As is arbitrary,
the result follows letting 0.
2.6
v(0) u0 .
(2.6.2)
w(0) u0 .
(2.6.3)
v(0) = u0
and
97
w(0) = u0 ,
(2.6.4)
u(0) = u0 .
(2.6.5)
98
mi (v1i v0i ) = Mi mi .
It thus follows that mi (t) mi (0)eMi t 0 on J and hence v0i v1i .
Similarly we can show w0i w1i which proves (i).
In order to prove (ii), let 1 , 2 , [v0 , w0 ] be such that
1 2 . Suppose A[1 , ] = u1 and A[2 , ] = u2 . Then, setting
mi = u1i u2i , we nd, using the mixed monotone property of f and
(2.6.4), that
mi = fi (t, 1i , [1 ]pi , []qi ) fi (t, 2i , [2 ]pi , []qi )
Mi (u1i 1i ) + Mi (u2i 2i ) Mi mi .
Also, since mi (0) = 0, we get u1i u2i . Similarly, if 1 , 2 ,
[v0 , w0 ] such that 1 2 , then, as before one can prove that
A[, 1 ] A[, 2 ]. It therefore follows that the mapping A satises
property (ii). Consequently this implies A[, ] A[, ] whenever
and , [v0 , w0 ]. In view of (i) and (ii) above, we can dene
the sequences
vn = A[vn1 , wn1 ],
wn = A[wn1 , vn1 ].
It is then easy to prove that the sequences {vn }, {wn } are monotonic
and converge uniformly and monotonically to coupled quasi-solutions
v, w of (2.6.1). Letting v = lim vn , w = lim wn , we nd
n
v(0) = u0 ,
99
and
wi = fi (t, wi , [w]pi , [v]qi ),
w(0) = u0 .
We shall show that v, w, are coupled minimal and maximal quasisolutions respectively. Let u1 , u2 , be any coupled quasi-solutions of
(2.6.1) such that u1 , u2 [v0 , w0 ]. Let us assume that for some integer, k > 0, vk1 u1 , u2 wk1 on J. Then, setting mi = vki u1i
and employing the mqmp of f and (2.6.1), we arrive at
mi fi (t, vk1i , [u1 ]pi , [u2 ]qi ) fi (t, u1i , [u1 ]pi , [u2 ]qi )
Mi (vki vk1i ) Mi mi .
Since mi (0) 0, this implies that vk u1 . Using similar arguments
it is easy to conclude that vk u1 , u2 wk on J. It therefore follows
by induction that vn u1 , u2 wn on J for all n, since v0 u1 ,
u2 w0 on J. Hence, we have v u1 , u2 w on J showing that v, w
are coupled minimal and maximal quasi-solutions of (2.6.1). Since
any solution u of (2.6.1) such that u [v0 , w0 ] can be considered as
u, u coupled quasi-solutions of (2.6.1) we also have v u w on J.
This completes the proof.
We note first that if qᵢ = 0, so that pᵢ = n − 1 for each i, then f is quasimonotone nondecreasing and consequently Theorem 2.6.1 yields minimal and maximal solutions v, w of (2.6.1), respectively. On the other hand, if pᵢ = 0, qᵢ = n − 1 for each i, then f is quasimonotone nonincreasing and in this case we have coupled extremal quasi-solutions of extreme type. As it is, this theorem covers a general class of quasi-solutions. Thus, it becomes an important question to examine the relation between quasi-solutions and actual solutions. It is not difficult to show that if f satisfies a Lipschitz condition in the sector [v₀, w₀], then a quasi-solution is actually a solution. In fact, we state this in the following corollary, whose proof is immediate.
Corollary 2.6.1 In addition to the assumptions of Theorem
2.6.1, suppose that f (t, u) f (t, v) Lu v for u, v [v0 , w0 ],
then u = v = w is the unique solution of (2.6.1) such that v0 u
w0 on J.
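The sketch below (a simplified scalar illustration of the monotone iterative idea behind Theorem 2.6.1 and its corollary, with all data chosen for demonstration only) iterates the linear problems v_{n+1}′ = f(t, vₙ) − M(v_{n+1} − vₙ), starting from crude lower and upper solutions, and watches the bracket [vₙ, wₙ] shrink around the solution.

```python
import numpy as np

# A minimal scalar sketch of the monotone iterative idea: for u' = f(t, u),
# u(0) = u0, with lower/upper solutions v0 <= w0 and a one-sided Lipschitz
# constant M, iterate the linear problems
#   v_{n+1}' = f(t, v_n) - M (v_{n+1} - v_n),   v_{n+1}(0) = u0,
# and similarly for w_{n+1}.  The iterates are computed with Euler steps;
# f, M and the initial bracket below are illustrative choices only.
f = lambda t, u: -u ** 3 + np.cos(t)
M = 7.0                                   # one-sided Lipschitz constant on [-1.5, 1.5]
T, n = 2.0, 2000
t = np.linspace(0.0, T, n + 1)
dt = t[1] - t[0]
u0 = 0.0

def sweep(prev):
    """Solve z' = f(t, prev(t)) - M (z - prev(t)), z(0) = u0, by Euler."""
    z = np.empty_like(prev); z[0] = u0
    for k in range(n):
        z[k + 1] = z[k] + dt * (f(t[k], prev[k]) - M * (z[k] - prev[k]))
    return z

v = np.full(n + 1, -1.5)                  # crude lower solution
w = np.full(n + 1, 1.5)                   # crude upper solution
for _ in range(30):
    v, w = sweep(v), sweep(w)

print(np.max(w - v))                      # the bracket [v, w] shrinks
```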
100
2.7
101
on
J.
u(0) = u0 ,
(2.7.1)
on
J.
= F (t, , r),
r = F (t, r, ),
r(0) = (0) = u0 ,
t J.
102
P (0) = 0.
Suppose that
f (t, ) B( ),
f (t, ) + B( )
and
B(x y) f (t, x) f (t, y) B(x y),
whenever (t) y x (t), B being n by n matrix of nonnegative elements. Then the system (2.5.1) admits the method of mixed
monotony.
Proof We dene
1
F (t, y, z) = [f (t, y) + f (t, z) + B(y z)].
2
(2.7.2)
(2.7.3)
103
2.8
104
u(0) = u0 ,
(2.8.1)
where f C[J, R] and J = [0, T ]. Let us list the following assumptions for convenience:
(A0 ) , C 1 [J, R], (t) (t) on J and (t) f (t, ),
(t) f (t, ) with (0) u0 (0);
(A1 ) f (t, x) f (t, y) M (x y) whenever (t) y x (t)
and M 0;
(A2 ) , C 1 [J, R], (t) (t) on J with (0) u0 (0) and
f (t, ) M ( ), f (t, ) M ( ) for all such
that (t) (t), M > 0.
We note that whenever (A0 ) and (A1 ) hold, (A2 ) is satised.
Theorem 2.8.2 Assume either (A0 ) and (A1 ) or (A2 ). Then
there exists a decreasing sequence {Un (t)} of interval functions with
U0 (t) = [(t), (t)] such that lim Un (t) = U (t) exists and is an
n
pu = u(0)e
t
+
(2.8.2)
105
Y = [y, y].
(2.8.3)
106
(t) x,
y (t),
t J,
2.9
Integro-Dierential Equations
In this and the next section, we shall discuss some problems for integro-differential equations. We first develop the nonlinear variation of parameters formula for integro-differential equations. For this purpose, we need to investigate the problem of continuity and differentiability of solutions of nonlinear integro-differential equations and obtain the relation between the derivatives. We then prove the nonlinear variation of parameters formula for solutions of perturbed integro-differential equations.

We need the following result before we proceed further.
Theorem 2.9.1 Consider the initial value problem for linear
integro-dierential equations given by
t
x (t) = A(t)x(t) +
x(t0 ) = x0 ,
(2.9.1)
t0
2
where A(t), B(t, s) are continuous n by n matrices on R+ and R+
respectively and F C[R+ , Rn ]. Then, the unique solution of (2.9.1)
is given by
t
x(t) = R(t, t0 )x0 +
t t0 ,
(2.9.2)
107
Integro-differential equations
R(t, s)
+ R(t, s)A(s) +
s
R(t, )B(, s) d = 0,
(2.9.3)
t0
R(t, s)
+ R(t, s)A(s) x(s) ds
s
t0
s
t
R(t, s)
+
t0
t
B(s, u)x(u) du ds +
t0
s
t
t
R(t, u)B(u, s) du x(s) ds.
R(t, s) B(s, u)x(u) du ds =
t0
t0
t0
Hence we obtain
t
R(t, t)x(t) R(t, t0 )x0 =
R(t, s)
+ R(t, s)A(s)
s
t0
t
+
t
108
Since R(t, s) satises (2.9.3), the desired relation (2.9.2) follows immediately and the proof is complete.
Now, we shall discuss the problem of continuity and dierentiability of solutions x(t, t0 , x0 ) of the initial value problem
t
x(t0 ) = x0
(2.9.4)
t0
Rn , Rn ],
(i) (t, t0 , x0 ) =
x(t, t0 , x0 )
exists and is the solution of
x0
t
(2.9.5)
t0
x(t, t0 , x0 )
exists and is the solution of
t0
(2.9.6)
Integro-differential equations
109
(2.9.7)
t0
(2.9.8)
1
fx (t, sx(t, h) + (1 s)x(t)) ds
0
t 1
(x(t, h) x(t)) +
gx (t, s, x(s, h)
t0 0
110
If xh (t) =
x(t, h) x(t)
, h = 0, we see that the existence of
h
x(t, t0 , x0 )
is equivalent to the existence of lim xh (t). Since
h0
x0
x(t0 , h) = x0 + ek h, xh (t0 ) = ek , it is clear that xh (t) is the solution
of the initial value problem
y (t) = H(t, t0 , x0 , h)y(t)
t
+ G(t, s, t0 , x0 , h)y(s) ds,
y(t0 ) = ek ,
(2.9.9)
t0
where
1
fx (t, sx(t, h) + (1 s)x(t)) ds,
H(t, t0 , x0 , h) =
0
and
1
gx (t, s, x(s, h) + (1 )x(s)) d.
G(t, s, t0 , x0 , h) =
0
plies that
lim H(t, t0 , x0 , h) = H(t, t0 , x0 )
h0
and
lim G(t, s, t0 , x0 , h) = G(t, s; t0 , x0 )
h0
h0
x(t,t0 ,x0 )
x0
0 ,t0 ,x0 )
= I. Furthermore, it is easy to see from (2.9.5)
that x(tx
0
x(t,t0 ,x0 )
is also continuous with respect to its arguments.
x0
Integro-differential equations
111
1
fx (t, sx(t, h) + (1 s)x(t)) ds(x(t, h) x(t))
0
t 1
gx (t, s, x(s, h) + (1 )x(s)) d(x(s, h) x(s)) ds
+
t0 +h 0
t
0 +h
x(t, h) x(t)
, h = 0, it is obvious that
h
xh (t) is the solution of the initial value problem
Setting as before, xh (t) =
t
0 +h
g(t, s, x(s)) ds
t0
and
1
a(h) =
h
t
0 +h
1
f (s, x(s)) ds
h
t0
t
0 +hs
t0
h0
112
t
g(t, s, y(s)) ds
t0
(2.9.10)
y(t0 ) = x0 .
t0
t0
t
t
[(t, , y()) R(t, ; s, y(s))]g(, s, y(s)) d ds
+
t0 s
Integro-differential equations
113
theorem, we get
t
p(t) p(t0 )
t
=
t0
t
+
s
Now, the relation (2.9.7) together with the fact x(t, t, y(t)) =
y(t, t0 , x0 ) yields (2.9.11), completing the proof.
In the special case when f (t, x) = A(t)x and g(t, s, x) = B(t, s)x
in (2.9.4) where A, B are continuous n by n matrices, we have
0 ,x0 )
= R(t, t0 ), where
x(t, t0 , x0 ) = R(t, t0 )x0 and (t, t0 , x0 ) = x(t,t
x0
R(t, s) is the solution of (2.9.8) such that R(t, t) = I. Consequently it
is easy to see that the relation (2.9.11) reduces to the linear variation
of parameters formula given by
t
y(t, t0 , x0 ) = R(t, t0 )x0 +
t t0 ,
t0
114
(2.9.12)
(t, t0 , x0 ) M e(tt0 ) ,
for t0 s t; and
(iii) g(t, s, x) Kxe2t whenever x and
t
F (t, x, (Sy)(t)) (t)x +
y(t, t0 , x0 ) M x0 e
t
t0
(s,t0 ) ds
t t0 ,
115
Integro-differential equations
where
2M Ke2t
+M
(t, t0 ) = M (t) +
t
(t, s)e(ts) ds,
t0
and
L1 [t
0 , ).
t t0 .
y(t) M x0 e
t
+M
(s)e(ts) y(s) ds
t0
t
+M
e(ts)
t0
t
s
s
(s, )y() d ds
t0
t0 t0
t0
t
0 (s)V (s) ds
t0
t
(2.9.13)
+
s
t0
2M K 2t
e
. Setting the right hand side of
t0
116
which yields
(tt0 )+
y(t) M x0 e
provided x0
is complete.
2.10
t
t0
(s,t0 ) ds
for
t t0 ,
p N (t0 )
e
, where (s, t0 ) ds N (t0 ). The proof
M
t0
Stability in Variation
One of the important techniques for investigating the qualitative behavior of nonlinear systems is to utilize the corresponding linear variational systems. To bring out the inherent rich behavior of linear
integro-dierential systems, we shall rst discuss a method of nding an equivalent linear dierential system and then exploiting this
method, we investigate the stability properties of nonlinear integrodierential system by means of the corresponding variational system.
Let us continue to consider the system (2.9.4) and suppose that
fx , gx exist and are continuous and f (t, 0) = g(t, s, 0) 0. Let x(t) =
x(t, t0 , x0 ) be the unique solution of (2.9.4) existing on [t0 , ) and
let us consider the variational systems
t
(2.10.1)
y(t0 ) = x0 ,
and
t
z(t0 ) = x0 .
(2.10.2)
t0
117
Stability in variation
2
where A(t), K(t, s) are continuous n by n matrices on R+ and R+
respectively. Then, the initial value problem for linear integrodierential system
t
u (t) = A(t)u(t) +
u(t0 ) = x0 , (2.10.4)
t0
v(t0 ) = x0 ,
t
(2.10.5)
t0
t
s
L(t, s)
+
t0
K(s, )u() d ds +
t0
t
t0
t t
K(s, )u() d ds =
t0
t
+
s
118
t
=
t0
we shall show that z(t) 0 which proves that v(t) satises (2.10.4).
Now, substituting for v (t) from (2.10.5) and using (2.10.3) together
with Fubinis theorem, we get
z(t) = L(t, t)v(t) L(t, t0 )x0
t
+
t
Ls (t, s)v(s) ds
t0
s
d
Since (L(t, s)v(s)) = Ls (t, s)v(s) + L(t, s)v (s), we have, by intedt
gration,
t
L(t, t)v(t) L(t, t0 )x0 =
t0
s
L(t, s) v (s) + A(s)v(s)
t
+
t0
Stability in variation
119
(2.10.6)
t0
(2.10.7)
v(t0 ) = x0 ,
t
v(t) = (t, t0 )x0 +
t t0 ,
120
solution
v(t) of (2.10.1) such that v(t0 ) = x0
x(t,t0 ,x0 )
is given by v(t) =
x0 . Furthermore, if x(t, t0 , x0 ) is any
x0
solution of (2.9.4), then, because of Theorem 2.9.4, we have
1
x(t, t0 , x0 ) =
0
x(t, t0 , x0 s)
ds x0 .
x0
(2.10.8)
y (t) = A(t)y(t) +
(2.10.9)
y(t0 ) = x0 ,
where
F (t, y) = [fx (t, x(t)) fx (t, 0)]y(t)
t
+ [gx (t, s, x(s)) gx (t, s, 0)]y(s) ds.
t0
v(t0 ) = x0 ,
(2.10.10)
where
t
H(t, v) = F (t, v) +
(2.10.11)
121
Stability in variation
C[R+ , R+ ]
(2.10.12)
and
2
C[R+
, R+ ]
(2.10.13)
hold whenever x for some > 0. For convenience, let us dene
the following functions
t
L(t, )(, s) ds
and
t
(t, )(, s) d .
(2.10.14)
t0
R(t, ) d(, s) d.
s
t t0 ,
(2.10.15)
122
(2.10.16)
t0
Since
F (t, v) fx (t, x(t)) fx (t, 0)v(t)
t
+ gx (t, s, x(s)) gx (t, x, 0)v(s) ds
t0
t
(t, s)H(s, v(s)) ds
t0
t
+
s
t t0 .
Difference equations
123
t t0 ,
r(t, t0 , x0 ) being the linear integral equation (2.10.14). The proof
is therefore complete.
Since (2.10.10) and (2.10.9) are equivalent by Theorem 2.10.1, and (2.10.9) is nothing but a restatement of (2.10.1), it is clear from the conclusion of Theorem 2.10.3 that the stability properties of (2.10.14) imply the corresponding stability properties of (2.10.1) and, therefore, imply the stability properties of (2.9.4).

2.11
Difference Equations

In this section, we shall first discuss the variation of parameters formula for difference equations and then, utilizing the theory of difference inequalities, investigate the stability properties of solutions of difference equations.
yn+1 = A(n)yn + f (n, yn ),
y n0 = y 0 ,
(2.11.1)
n1
(2.11.2)
j=n0
(2.11.3)
Proof Let
y(n, n0 , y0 ) = (n, n0 )xn ,
xn 0 = y 0 .
(2.11.4)
124
n1
j=n0
n1
(2.11.5)
j=n0
(2.11.6)
f (n, x(n, n0 , x0 ))
.
x
(2.11.7)
x(n, n0 , x0 )
x0
(2.11.8)
Then,
(n, n0 , x0 ) =
exists and is the solution of
(n + 1, n0 , x0 ) = H(n, n0 , x0 )(n, n0 , x0 ),
(n0 , n0 , x0 ) = 1.
(2.11.9)
(2.11.10)
125
Difference equations
=
x0
x x0
which coincides with (2.11.8). Then (2.11.9) follows from the denition of .
We are now able to generalize Theorem 2.11.1 to the equation
yn+1 = f (n, yn ) + F (n, yn ).
(2.11.11)
f
exist
x
Rs . If x(n, n0 , x0 ) is the
(2.11.12)
n1
1 (j + 1, n0 , vj , vj+1 )F (j, yj ))
(2.11.13)
j=n0
where
1
(n, n0 , svj + (1 s)vj1 ) ds
(n, n0 , vj , vj+1 ) =
0
(2.11.14)
126
n1
1 (j + 1, n0 , vj , vj+1 )F (j, yj )
(2.11.15)
j=n0
n1
1 (j + 1, n0 , vj , vj+1 )F (j, yj )
(2.11.16)
j=n0
Difference equations
127
n1
1 (j + 1, n0 , vj , vj+1 )F (j, yj )
j=n0
(2.11.17)
128
bounded and since this is just the denition of the norm of , we are
done.
129
Difference equations
n n0 + N ().
(2.11.18)
130
We then have
    ‖y_{n+1}‖ ≤ ‖y_n‖ + g(n, ‖y_n‖).
If ‖y_0‖ ≤ u_0, we obtain ‖y_n‖ ≤ u_n, where u_n is the solution of (2.11.9). It then follows that
    ‖x_n‖ ≤ ‖Φ(n, n_0)‖ ‖y_n‖ ≤ ‖Φ(n, n_0)‖ u_n.
If the solution of the linear system is, for example, uniformly asymptotically stable, then from Lemma 2.11.2 we see that
    ‖Φ(n, n_0)‖ ≤ L λ^{n−n_0},
for some suitable L > 0 and 0 < λ < 1. Then
    ‖x_n‖ ≤ L λ^{n−n_0} u_n,
and this shows that the solution x = 0 is uniformly asymptotically stable because u_n is bounded. The proof of the other cases is similar.
We shall merely state another important variant of Theorem 2.11.3* which is widely used in numerical analysis.
Theorem 2.11.5 Given the difference equation
    y_{n+1} = y_n + hA(n)y_n + f(n, y_n)    (2.11.21)
(2.11.21)
(2.11.22)
(A(n)) = lim
(2.11.23)
n=n0
n1
j=n0
132
n1
gj (yj ).
j=n0
n1
gj
j=n0
from which follows the proof, provided that y0 is small enough such
that
n1
gj < .
M y0 exp M
j=n0
yn exp M
gj
n0
n=n0
B(n) < .
(2.11.24)
H > 0,
0 < < 1,
nn0
y0 + LH
n1
n1
1 yj .
j=n0
n1
pj .
j=n0
n0
= H
n0
y0
n1
(1 + LH 1 )
j=n0
which implies
yn Hy0 ( + LH)nn0 .
1
, the conclusion follows.
H
Corollary 2.11.4 Consider the equation
(2.11.25)
where A has all the eigenvalues inside the unit disk and moreover,
f (n, y)
= 0,
y0
y
lim
2.12
Notes
Theorem 2.1.1 is from Lord and Mitchell [1] and Theorem 2.1.3 is
due to Alekseev [1]. Theorems 2.1.2 and 2.1.4 are standard, see
Lakshmikantham and Leela [1]. Theorem 2.1.5 is new.
3
STABILITY OF MOTION IN TERMS OF TWO
MEASURES
3.0
Introduction
135
136
3.1
x(t0 ) = x0 ,
t0 R+ ,
(3.1.1)
1
[V (t + h, x + hf (t, x)) V (t, x)]
h
137
(3.1.2)
1
[V (t + h, x + hf (t, x)) V (t, x)].
h
(3.1.3)
(t, x) R+ Rn ,
(3.1.4)
    u' = g(t, u),   u(t0) = u0 ≥ 0,   t ≥ t0,    (3.1.5)
(3.1.6)
138
m(t0 ) u0 ,
(t, x) R+ Rn ,
where g C[R+ Rn , RN ] and g(t, u) is quasimonotone nondecreasing in u. Let r(t) = r(t, t0 , u0 ) be the maximal solution of
u = g(t, u),
u(t0 ) = u0 0,
(3.1.7)
t t0 .
We recall that inequalities between vectors are understood componentwise, and quasimonotonicity of g(t, u) means that u ≤ v with u_i = v_i, 1 ≤ i ≤ N, implies g_i(t, u) ≤ g_i(t, v). Theorem 3.1.2 is a special case of the following result, which deals with cone-valued Lyapunov functions.
Theorem 3.1.3 Assume that V C[r+ Rn , K] and V (t, x)
is locally Lipschitzian in x relative to the cone K RN and for
(t, x) R+ Rn ,
K
139
t t0 ,
provided V (t0 , x0 ) u0 .
Proof Proceeding as in Theorem 3.1.1 with suitable modifications, we arrive at the differential inequality
K
m(t0 ) u0 ,
t t0 .
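The quasimonotonicity condition recalled above is easy to test numerically for a concrete comparison function. The sketch below is only an illustration with a hypothetical g; it samples the condition "u ≤ v and u_i = v_i imply g_i(t, u) ≤ g_i(t, v)" at random points.

```python
# Illustrative check of quasimonotone nondecreasingness (hypothetical g).
import numpy as np

def g(t, u):
    # off-diagonal coupling enters with nonnegative coefficients,
    # which is what makes the test below succeed
    return np.array([-u[0] + 0.5 * u[1],
                      0.3 * u[0] - 2.0 * u[1]])

rng = np.random.default_rng(0)
for _ in range(1000):
    u = rng.uniform(0, 1, 2)
    v = u.copy()
    i = rng.integers(0, 2)
    v[1 - i] += rng.uniform(0, 1)        # u <= v with u_i = v_i
    assert g(0.0, u)[i] <= g(0.0, v)[i] + 1e-12
print("quasimonotonicity condition holds on all sampled pairs")
```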
(t, x) R+ Rn .
Assume that g C[R+ Q, RN ] and g(t, u) is quasimonotone nondecreasing in u relative to P and x(t) = x(t, t0 , x0 ) is any solution of
P
t t0 ,
(3.1.8)
t, x and V (t, x). The following result is in that direction; its proof is similar to that of Theorem 3.1.1.
Theorem 3.1.5 Let V ∈ C[R+ × R^n, R+] and let V (t, x) be locally Lipschitzian in x. Assume that g ∈ C[R+ × R^n × R+, R] and, for (t, x) ∈ R+ × R^n,
    D+ V (t, x) ≤ g(t, x, V (t, x)).
Let x(t) = x(t, t0, x0) be any solution of (3.1.1) existing on [t0, ∞) and let r(t, t0, x0, u0) be the maximal solution of
    u' = g(t, x(t), u),   u(t0) = u0 ≥ 0,
3.2
t t0 .
2
, R+ ] : a(t, s) K for each s and a(t, s) L for
KL = {a C[R+
each t},
2
, R+ ] : a(t, s) K for each t},
CK = {a C[R+
We shall now define the stability concepts for the system (3.1.1) in terms of the two measures h0, h.
Definition 3.2.1 The differential system (3.1.1) is said to be
t t0 ,
t t0 + T ;
t t0 + T ;
(S6 ) (h0 , h)-equi asymptotically stable, if (S1 ) and (S3 ) hold together;
(S7 ) (h0 , h)-uniformly asymptotically stable, if (S2 ) and (S4 ) hold
simultaneously;
(S8 ) (h0 , h)-unstable, if (S1 ) fails to hold.
u(t0 ) = u0 0,
(3.2.1)
3.3
Let us now establish some sufficient conditions for the (h0, h)-stability properties of the differential system (3.1.1).
Theorem 3.3.1 Assume that
(A0) h, h0 ∈ Γ and h0 is uniformly finer than h;
(A3) D+ V (t, x) ≤ g(t, V (t, x)) for (t, x) ∈ S(h, ρ) for some ρ > 0, where S(h, ρ) = {(t, x) ∈ R+ × R^n : h(t, x) < ρ}.
Then, the stability properties of the trivial solution of (3.1.5) imply the corresponding (h0, h)-stability properties of (3.1.1).
Proof We shall only prove (h0, h)-equi-asymptotic stability of (3.1.1). For this purpose, let us first prove (h0, h)-equistability. Since V is h-positive definite, there exist a λ ∈ (0, ρ] and b ∈ K such that
    b(h(t, x)) ≤ V (t, x),   (t, x) ∈ S(h, λ).    (3.3.1)
Let 0 < < and t0 R+ be given and suppose that the trivial
solution of (3.1.5) is equistable. Then, given b() > 0 and t0 R+ ,
there exists a function 1 = 1 (t0 , ) that is continuous in t0 such
that
u0 < 1 implies u(t, t0 , u0 ) < b(),
t t0 ,
(3.3.2)
(3.3.3)
(t0 , x0 ) S(h0 , 0 ).
(3.3.4)
Choose = (t0 , ) such that (0, 0 ], a() < 1 and let
h0 (t0 , x0 ) < . Then (3.3.4) shows that h(t0 , x0 ) < since 1 < b().
We claim that h(t, x(t)) < , t t0 whenever h0 (t0 , x0 ) < , where
x(t) = x(t, t0 , x0 ) is any solution of (3.1.1) with h0 (t0 , x0 ) < . If
this is not true, then there exists a t1 > t0 and a solution x(t) of
(3.1.1) such that
h(t1 , x(t1 )) = and
t0 t t1 ,
(3.3.5)
in view of the fact that h(t0, x0) < ε whenever h0(t0, x0) < δ. This means that x(t) ∈ S(h, λ) for t ∈ [t0, t1] and hence, by Theorem 3.1.1, we have
    V (t, x(t)) ≤ r(t, t0, u0),   t0 ≤ t ≤ t1,    (3.3.6)
where r(t, t0, u0) is the maximal solution of (3.1.5). Now the relations (3.3.1), (3.3.2), (3.3.5) and (3.3.6) yield
    b(ε) ≤ V (t1, x(t1)) ≤ r(t1, t0, u0) < b(ε),
a contradiction, proving the (h0, h)-equistability of (3.1.1).
Suppose next that the trivial solution of (3.1.5) is quasi-equi
asymptotically stable. From the (h0 , h)-equistability, we set = so
that 0 = 0 (t0 , ). Now let 0 < < . Then, by quasi-equi asymptotic stability of (3.1.5), we have that, given b() > 0 and t0 R+ ,
there exist positive numbers 1 = 1 (t0 ) and T = T (t0 , ) > 0 such
that
u0 < 1
implies
t t0 + T.
(3.3.7)
146
V (t, x(t)) as t .
Suppose that lim W (t, x(t)) = 0. Then, for any > 0, there exist
t
147
W (ti , x(ti ))
= , W (t, x(t))
2
, . (3.3.9)
2
ti
D + V (s, x(s)) ds
1in t
< 0,
V (t0 , x0 ) nC
2 2M
which is a contradiction. Thus, W (t, x(t)) → 0 as t → ∞ and hence h(t, x(t)) → 0 as t → ∞. The argument is similar when D+W is bounded from below and we use (3.3.9). The proof is therefore complete.
Corollary 3.3.1 (Marachkov's theorem) Suppose that f is bounded on R+ × S(ρ). Then the trivial solution of (3.1.1) is asymptotically stable if there exist C ∈ K and V ∈ C[R+ × S(ρ), R+] such that
(i) V is positive definite, V (t, 0) ≡ 0 and V (t, x) is locally Lipschitzian in x;
(ii) D+ V (t, x) ≤ −C(‖x‖), (t, x) ∈ R+ × S(ρ), C ∈ K.
Proof Take h0 = h = ‖x‖, W = ‖x‖ and note that |D+ W (t, x)| ≤ ‖f (t, x)‖. Then, all the hypotheses of Theorem 3.3.3 are satisfied and the proof is complete.
Corollary 3.3.2 The positive definiteness of V in Corollary 3.3.1 can be weakened to positive semidefiniteness of V, that is, V (t, 0) ≡ 0 and V (t, x) ≥ 0 on R+ × S(ρ). Then the conclusion of Corollary 3.3.1 holds.
Proof It is enough to prove that x = 0 is stable. Let 0 < ε < ρ and t0 ∈ R+ be given. Let ‖f (t, x)‖ ≤ M on R+ × S(ρ). Choose
x(t2 ) = ,
x(t1 ) =
t [t0 , t1 ).
(t2 t1 )
V (t0 , x0 ) C
2
<0
V (t0 , x0 ) C
2 2M
which is a contradiction. Hence the proof is complete.
Corollary 3.3.3 Suppose that there exist a, C ∈ K and V ∈ C[R+ × S(ρ), R+] such that
(i) V (t, x) is locally Lipschitzian in x, V (t, x) ≥ 0 and V is positive definite;
(ii) D+ V (t, x) ≤ −C(V (t, x)), (t, x) ∈ R+ × S(ρ).
Then x = 0 is asymptotically stable.
Proof Take W = V and h = h0 = ‖x‖ in Theorem 3.3.3.
If we define only uniform stability properties, we can relax the assumptions of Theorems 3.3.1 and 3.3.2 by employing a family of Lyapunov functions, each of which satisfies less restrictive conditions.
Theorem 3.3.4 Assume (A0) and (A2) of Theorem 3.3.1. Suppose further that
(A1) for each η ∈ (0, ρ), ρ > 0, there exists a one-parameter family of functions V ∈ C[S(h, ρ) ∩ S^c(h0, η), R+] such that V is locally Lipschitzian in x, V is h-positive definite and h0-decrescent, where S^c(h0, η) is the complement of S(h0, η);
(A3) D+ V (t, x) ≤ g(t, V (t, x)) for (t, x) ∈ S(h, ρ) ∩ S^c(h0, η).
t [t1 , t2 ],
(3.3.11)
for
Then, if the system (3.1.1) is (h0, h)-uniformly stable, it is (h0, h)-uniformly asymptotically stable.
Proof Assume that (3.1.1) is (h0, h)-uniformly stable. Then, taking ε = ρ, we set δ0 = δ(ρ). Let t0 ∈ R+ and h0(t0, x0) < δ0. Clearly h(t, x(t)) < ρ, t ≥ t0, where x(t) is any solution of (3.1.1). Let
t t0 ,
3.4
s 0.
151
that this means that there exists a > 0 and a K such that
h0 (t, x) < implies h(t, x) (h0 (t, x)).
We are now in position to prove our main result of this section.
Theorem 3.4.1 Suppose that
(i) |h(t, x) − h(t, y)| ≤ L‖x − y‖ and ‖f (t, x) − f (t, y)‖ ≤ M‖x − y‖ for (t, x), (t, y) ∈ R+ × R^m, where L, M > 0 are constants;
(ii) the system (3.1.1) is (h0, h)-uniformly stable and (h, h)-quasi uniformly asymptotically stable.
Then, for a constant ρ > 0, there exist two functions U, W ∈ C[S(h, ρ), R+] which are Lipschitzian in x and such that
(a) U (t, x) b(h(t, x)) for (t, x) S(h, ) and U (t, x) a(h0 (t, x))
for (t, x) S(h0 , 0 ), where a, b K and 0 (0, ) is a constant with (0 ) < ;
(b) D+ U (t, x) ≤ 0 for (t, x) ∈ S(h, ρ);
(c) W (t, x) N for (t, x) S(h, ) and W (t, x) b1 (h0 (t, x)) for
(t, x) S(h0 , 0 ), where b1 K and N > 0 is a constant;
(d) D+ W (t, x) ≤ −c(U (t, x)) for (t, x) ∈ S(h, ρ), where c ∈ K.
Proof We shall use some standard arguments. Choose a constant ρ > 0 and, for any ε > 0, a T = T(ε) > 0, both associated with the (h, h)-quasiuniform asymptotic stability assumed in (ii). Obviously, the function T can be assumed to be decreasing. For (t, x) ∈ S(h, ρ) and j = 1, 2, . . . , define
    Uj(t, x) = sup{ Gj(h(t + σ, x(t + σ, t, x))) : σ ≥ 0 } exp[M T(j^{−1})],    (3.4.1)
where Gj(u) = u − j^{−1} for u ≥ j^{−1} and Gj(u) = 0 for 0 ≤ u ≤ j^{−1}. Clearly, for every u, v ≥ 0,
    |Gj(u) − Gj(v)| ≤ |u − v|.
Because of the (h, h)-quasiuniform asymptotic stability and the continuity of Gj and h, Uj is well defined by (3.4.1) as a mapping from
152
(3.4.2)
2j Uj (t, x).
(3.4.4)
j=1
153
(3.4.6)
W (t, x) =
(3.4.7)
(t, x) S(h0 , 0 ),
(3.4.8)
and
[C(p(0 )q())]1/2 d
0
(3.4.10)
154
(3.4.11)
= [C ()|U (, x(, t, x)) U (, x(, t, y))|] d,
t
exp[( t)(M )] d
Lx y
t
exp[(M )] d.
Lx y
0
Thus,
|W (t, x) W (t, y)| Lx y.
(3.4.12)
155
[C(p(0 )q())]1/2 d
0
b1 K.
Now, it is easy to show, using (3.4.12), that D+ W (t, x) ≤ −c(U (t, x)) for (t, x) ∈ S(h, ρ).
The two functions U and W have all the desired properties. The proof is therefore complete.
Conversely, it can easily be proved that the system (3.1.1) is (h0, h)-uniformly stable and (h, h)-quasi uniformly asymptotically stable if the following conditions are satisfied:
(i) h0 is uniformly finer than h;
(ii) for every ρ > 0, every solution x(t, t0, x0) with h(t0, x0) < ρ exists for all t ≥ t0; and
(iii) there exist two functions U, W ∈ C[S(h, ρ), R+], which are locally Lipschitzian in x and satisfy the conditions (a), (b), (c) and (d) of Theorem 3.4.1.
We have the following corollary of Theorem 3.4.1.
Corollary 3.4.1 Suppose that the assumptions (i), (ii) of Theorem 3.4.1 hold. Then, for a constant ρ > 0, there exists a function V ∈ C[S(h, ρ), R+], which is Lipschitzian in x with a constant, and such that
(a) V (t, x) b(h(t, x)) for (t, x) S(h, ) and V (t, x) a(h0 (t, x))
for (t, x) S(h0 , 0 ), where a, b K, 0 (0, ) is a constant
with (0 ) < ;
(b) D + V (t, x) (h(t, x)) for (t, x) S(h, ), where K.
Indeed, if U and W are the functions obtained in Theorem 3.4.1,
the function V = U + W has the desired properties with a = b + b1 ,
= coa.
If h(t, x) = h0(t, x), Theorem 3.4.1 and its corollary become two equivalent propositions. Thus, when h(t, x) = h0(t, x) = ‖x‖, Theorem 3.4.1 reduces to the well-known Massera converse theorem on uniform asymptotic stability (actually, in Massera's theorem further assertions about smoothness of V are made). If h(t, x) = ‖x‖_s and h0(t, x) = ‖x‖, where ‖·‖ is the Euclidean norm and ‖x‖_s = (x_1² + · · · + x_s²)^{1/2}, s < n, then Theorem 3.4.1 yields a converse theorem for partial uniform asymptotic stability. It is clear that various choices of h and h0 are possible, and thus Theorem 3.4.1 offers a unified result that is flexible enough to warrant its use in several applications.
If we carefully examine the proof of Massera's theorem, we notice that it is the domain of attraction which plays the prominent role in obtaining a smooth Lyapunov function. Consequently, when two measures are employed, this same feature shows that the price we had to pay to prove Theorem 3.4.1 is reasonable and natural.
3.5
t t0 ;
t t0 + T ;
implies
u(t, t0 , u0 ) < 1 ,
t t0 ,
(3.5.1)
158
t [t0 , t1 ].
(3.5.2)
t [t0 , t1 ],
(3.5.3)
where r(t, t0, u0) is the maximal solution of (3.1.5). Hence, the relations (3.3.1), (3.5.1), (3.5.2) and (3.5.3) lead to the contradiction
    b(ε) ≤ V (t1, x(t1)) ≤ r(t1, t0, u0) < b(ε),
proving the theorem.
3.6
x(t0 ) = x0 ,
(3.6.1)
The following simple result shows that if f is periodic in t or autonomous, and is smooth enough to guarantee uniqueness of solutions, then stability of the trivial solution of (3.6.1) is always uniform.
Theorem 3.6.1 Let f ∈ C[R × S(ρ), R^n], let f (t, x) be periodic in t with period ω, and let the system (3.6.1) admit unique solutions. Then the stability of the trivial solution of (3.6.1) is necessarily uniform.
Proof By the periodicity of f (t, x) in t, it follows that if x(t, t0, x0) is a solution of (3.6.1), then x(t + ω, t0, x0) is also a solution. Furthermore, the uniqueness of solutions shows that, for any integer k,
    x(t − kω, t0 − kω, x0) = x(t, t0, x0).
For each xed t0 , t0 (, ), let (t0 ) = sup (t0 , ). Since
0<<
0t0
t t0 ,
or
(k + 1) t0 k.
160
t t0 .
(3.6.2)
for all t t0 ,
(3.6.3)
v R+ .
(3.6.4)
161
Therefore,
(3.6.5)
Then,
V (t , x(t , t0 , xki )) = V (t + ki , x(t + ki , t0 , x0 )) = v(t + ki ),
and hence, by (3.6.4), it follows that
lim V (t , x(t , t0 , xki )) = v
162
3.7
(t, x) R+ S(),
2 , R] and g (t, 0) 0;
where g1 C[R+
1
(ii) for every > 0, there exists a V2, C[R+ S() S c (), R+ ],
V2, is locally Lipschitzian in x and for (t, x) R+
S() S c (),
b(x) V2, (t, x) a(x),
a, b K
and
D + V1 (t, x) + D + V2, (t, x) g2 (t, V1 (t, x) + V2, (t, x)),
2 , R], g (t, 0) 0;
where g2 C[R+
2
u(t0 ) = u0 0
(3.7.1)
v(t0 ) = v0 0
(3.7.2)
163
t t0 ,
(3.7.3)
0
.
2
(3.7.4)
0
,
2
t t0 ,
0
> 0 and
2
(3.7.5)
(3.7.6)
x(t2 , t0 , x0 ) =
(3.7.7)
t [t1 , t2 ],
t [t1 , t2 ],
164
which yields
m(t2 ) r2 (t2 , t1 , m(t1 )),
0
.
2
(3.7.9)
x, y K.
implies
165
W
W
+
f (t, x).
t
x
Theorem 3.7.2 Suppose that V C[R+ Rn , R+ ], V (t, x) is
locally Lipschitzian in x and W C 1 [R+ Rn , R] such that
where W (t, x) =
(t, x) <
implies
for (t, c) (, ) \ G .
Obviously these two functions are of class C. Let C 1 [R+ , [0, 1]]
be such that
, ( ) = 0 for [ , ),
( ) = 1 for 0,
1
(i)
166
(3.7.11)
for all
(t, x) (, ).
and (t, x)
(3.7.13)
holds. Now, we shall show that we can determine a constant > 0
such that, setting for all (t, x) (, ),
V (t, x) = V (t, x) + k (t, x),
the function V satises the hypotheses of Theorem 3.3.5 and V is
bounded. Moreover, for > 0, we have
D + V (t, x) D + V (t, x) + D + k (t, x).
We now prove that we can choose > 0 such that D + V (t, x) <
, then from
. Indeed, if (t, x) (, ) and (t, x)
2
167
(3.7.13) and (ii) it follows that D + V (t, x) < for every choice
+
D V (t, x) < e
+ A ,
2
where A > 0 is an upper bound for D + k (t, x). Hence, for all
(t, x) (, ), we shall have D + V (t, x) < , if we choose
so that
0 < < (A + 1 )c( /2).
On the other hand, by virtue of (i), (ii), the solution x 0 of (3.6.1)
is uniformly stable. Thus, all the hypotheses of Theorem 3.3.5 are
satised and the proof is complete.
The assumption g1 (t, u) = g2 (t, u) 0 is admissible in Theorem
3.7.1, which implies that
+
D + V1 (t, x) 0 on
+
R+ S(),
and
R+ S() S c ().
If, on the other hand, we demand that D + V1 (t, x) satises a strengthened condition, then we can conclude from Theorem 3.7.2 uniform
asymptotic stability of the trivial solution of (3.6.1). This is stated
in the following corollary.
Corollary 3.7.1 Let, in Theorem 3.7.1, g1 (t, u) = g2 (t, u) 0.
Suppose that
D + V1 (t, x) c((t, x)),
(t, x) R+ S(),
168
and
V (t, x) = V (t, x) + k (t, x),
(3.7.14)
(3.7.15)
169
implies
V (t0 , x) < ,
, x(t2 ) = and
x(t) for t [t1 , t2 ].
2
2
From (3.7.15) and the consideration in (I), it follows that
x(t1 ) =
170
3.8
M0 -Stability Criteria
x(t0 ) = (t0 , x ),
t0 0,
(3.8.1)
as
t ;
t > ().
Let us now give the denitions for M0 -invariant set and the various
types of M0 -stability. As usual, let x(t, s, (s, x )), t s, represent
a solution of (3.8.1) starting at (s, (s, x )) .
Denition 3.8.1 Let A Rn . A is M0 -invariant with respect to
the system (3.8.1) if whenever x A and (s, x ) M0 , we have
x(, s, (s, x )) M0 .
Denition 3.8.2 With respect to the system (3.8.1), the set A
is said to be
(M1 ) M0 -equistable if for each > 0, there exists 1 (), 1 ()
as and 1 (t0 , ), 2 (t0 , ) such that
t
0 +1
t0
t0 +1
t0
t t0 + 1,
t t0 + 1 + T (t0 , ),
t0
t0 0 ,
172
x(t0 ) = (t0 , x ),
t0 0,
1
. The solution is given by
s
x(t, s, (s, x )) = x +
1
+ es et ,
s
n,
2n4 (t n) + n,
(t) =
2n4 (t n) + n,
x(t0 ) = x
(3.8.2)
when t = n, an integer,
1
when n 3 < t < n,
2n
1
when n < t < n + 3 ,
2n
for all other t 0.
t +1
The set x = 0 is M0 -uniformly stable since t00 (s) ds is at most
1
for n [t0 , t0 + 1]. But x = 0 is not eventually uniformly stable
2n2
since (t0 ) does not approach zero as t0 .
We need a preliminary result and some convenient notation before we can proceed to prove M0-stability criteria.
Lemma 3.8.1 (Jensen's inequality) Let Φ be a convex function and let f be integrable on [t0, t0 + 1]. Then
    Φ( ∫_{t0}^{t0+1} f(s) ds ) ≤ ∫_{t0}^{t0+1} Φ(f(s)) ds.
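A quick numerical check of this inequality over a unit-length interval (so that the integral is an average) might look as follows; the convex function and the integrand are, of course, only illustrative choices.

```python
# Numerical check of Jensen's inequality on an interval of length one:
# Phi(integral of f) <= integral of Phi(f) for convex Phi.
import numpy as np

Phi = np.exp                       # a convex function (illustrative choice)
t0 = 2.0
s = np.linspace(t0, t0 + 1.0, 20001)
f = np.sin(3 * s) + 0.5 * s        # a continuous, hence integrable, integrand

mean_f = np.trapz(f, s)            # integral over [t0, t0+1], an interval of length 1
lhs = Phi(mean_f)
rhs = np.trapz(Phi(f), s)
assert lhs <= rhs + 1e-9
print(lhs, "<=", rhs)
```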
u(t0 ) = (t0 , u ),
t0 0,
(3.8.3)
t t0 + 1,
t0
t0 +1
t0
174
t t0 + 1,
t0 1 ()
t0
t +1
provided u < 1 and t00 (s, u ) ds < 2 . By (iii) and denition of A, we can nd 1 (), 2 () and 2 () such that the following
inequalities will hold simultaneously:
t
0 +1
t0 2 ()
t0
and x S(A, 1 ),
t
0 +1
(s, x ) ds < 2 .
t0
t0
t t0 + 1,
t0 ().
t0
t
0 +1
t0 + 1 t < t1 ,
t0 ().
t0
(3.8.4)
since V (s, (s, x )) a(s, (s, x )) (s, d(x , A)) = (s, u )
letting u = d(x , A). Using (3.8.4) and assumption (iii), we obtain
the following contradiction:
t0 +1
b() b
x(t1 , s, (s, x )) ds
t0
t
0 +1
t0
t
0 +1
t0
t0 +1
t0
176
t t0 + 1 + T (),
t0 0 ,
t0
and t0 +1 (s, u ) ds < .
provided u < 10
20
t0
As in the proof of Theorem 3.8.1, we can nd positive numbers
10 , 20 , 0 which satisfy the inequalities
t
0 +1
t 0 ,
t0
t +1
and x S(A, 10 ), t00 (s, x ) ds < 20 . Then 0 does not
is independent of t . Let
depend on t0 since 20
0
10 = min(10 , 10 )
t t0 + 1 + T (),
t0 0 ,
t0
(s, x ) ds < 20 .
t0
177
b()
t
0 +1
t0
3.9
3.9.1
Let us first consider the method of vector Lyapunov functions. Naturally, Theorem 3.1.2 plays an important role whenever we employ vector Lyapunov functions. As a typical result, we shall merely state a theorem that gives sufficient conditions in terms of vector Lyapunov functions for the stability properties of the trivial solution of (3.6.1).
Theorem 3.9.1 Assume that
(i) g C[R+ RN , RN ], g(t, 0) 0 and g(t, u) is quasimonotone
nondecreasing in u for each t R+ ;
N ], V (t, x) is locally Lipschitzian in x and
(ii) V C[R+ S(), R+
the function
N
Vi (t, x)
(3.9.1)
V0 (t, x) =
i=1
u(t0 ) = u0 0,
(3.9.2)
V0 (t, x) =
di Vi (t, x)
i=1
(3.9.3)
179
Vi (t, x) = x2 +y 2
i=1
i=1
180
zi (t0 )xi0 ,
(3.9.6)
i=1
j=1
(3.9.7)
and if the right-hand side of (3.9.7) can be majorized so that (3.9.7) takes the general form
    D+ V (t, z) ≤ H(t, V (t, z)),    (3.9.8)
181
(3.9.9)
i = j,
i, j = 1, 2, . . . , m.
182
y T g(y)
.
y2
(3.9.10)
Lemma 3.9.1 For any r ∈ (0, H), the set A_r is nonempty and compact, while the function (3.9.10) is continuous on A_r and equal to zero at no point of the set.
Proof. Let us first show that A_r ≠ ∅. Consider the vector field
    w(y) = g(y) − (y^T g(y) / ‖y‖²) y.    (3.9.11)
183
y T g(y)
< 0,
y2
(3.9.12)
s=1
(y T g(y))(z T e)
s (y) > 0.
l
T
s=1
y
s (y)
y+e
l
z T w(y) = z T g(y)
s=1
184
s=1
z T w(y) = z T g(y) 0.
Thus, the conditions of Theorem 3.9.3 are satised for the vector
eld (3.9.12). Then, there exists a point Qr such that
l
T g()
s () .
g() =
+e
l
T
s=1
+e
s ()
(3.9.13)
s=1
Us
Qr because
s=1
tem, at least one coordinate of the vector g() must be nonnegative for Q
r . However, the point cannot belong to the set
Qr \
Us
s=1
T g()
.
2
+
D = {y G Rm : g(y) 0} are positively invariant sets for
system (3.9.9).
Proof. Let y D + , and y(t, y) be the solution of system (3.9.9)
satisfying the initial condition y(0, y) = y. Denote by [0, ) the right
185
dy(t, y)
g(y(t, y)),
dt
y(0, y) = v(0) = y.
But then, according to Rouche, Habets, and Laloy [1], the estimate
y ).
y(t, y) v(t) = y holds for all t [0, ), i.e., y(t, y) R+
m (
Let us now choose two times t1 and t2 such that 0 t1 < t2 < .
We get y(t2 t1 , y) y. Then y(t2 , y) y(t1 , y). Hence, all the
components of the vector y(t, y) are nondecreasing functions on the
interval [0, ). Therefore, g(y(t, y)) 0 for t [0, ), i.e., D + is a
positively invariant set.
It can similarly be shown that y D , then the solution y(t, y)
remains in the set K (
y ) = {y Rm : 0 y y} with time.
Then this solution can be extended to the interval [0, +) and all
its components on this interval are nonincreasing functions.
Lemma 3.9.4 Let ȳ ∈ D+, ‖ȳ‖ < H/2. Then for the solution y(t, ȳ) of system (3.9.9) emerging from the point ȳ at t = 0, there exists T > 0 such that ‖y(T, ȳ)‖ = H/2.
Proof. Suppose that the solution y(t, ȳ) remains in the domain ‖y‖ < H/2 for all time. Then it is defined on the interval [0, +∞). In the proof of Lemma 3.9.3, we showed that all the components of the vector y(t, ȳ) are nondecreasing functions. Then there exists lim_{t→+∞} y(t, ȳ) = z, where 0 < ‖z‖ ≤ H/2. Thus, the set of ω-limiting
186
set for system (3.9.9) and is in the attraction domain of the equilibrium position y = 0.
Proof. Consider the solution y(t, ) emerging from the point at
t = 0. According to Lemma 3.9.3, this solution is dened on the
interval [0, +), satises the condition y(t, ) K () for t 0,
and all its components on this interval are nonincreasing functions.
Then y(t, ) 0 as t +.
With Lemma 3.9.3, if y K (), then y(t, y) y(t, ) for
all t 0. Then y(t, y) K () for t [0, +) and y(t, y) 0 as
t +.
Proof of Theorem 3.9.2. Necessity. Let the zero solution of system
(3.9.9) be asymptotically stable in a nonnegative cone. If there exists
a number r, 0 < r < H, such that
= max
yAr
y T g(y)
< 0,
y2
187
But then estimate (3.9.14) will also be valid for t > T because
y(t, y (0) ) y(t, ) < for all t > T . Then the zero solution of
system (3.9.9) is asymptotically stable in a nonnegative cone. Theorem 3.9.2 is proved.
Corollary 3.9.1 If the zero solution of system (3.9.9) is stable on the cone R^m_+, then it is asymptotically stable in this cone.
Indeed, when proving the necessity of Theorem 3.9.2, we showed that if the zero solution is stable in the cone R^m_+, it satisfies the MO-condition. From the proof of the sufficiency of the theorem, it follows that the zero solution is asymptotically stable.
With Lemma 3.9.4, it is easy to prove the following theorem.
Theorem 3.9.4 Let the following conditions be satisfied:
(1) system (3.9.9) has properties C1 and C3;
(m) 0
(2) there exists a sequence of points y (m) R+
m such that y
(m)
(m)
for m , y
= 0, g(y ) 0;
(m) ) G.
(3) g(0) = 0, g(y) = 0 for y R+
m (y
188
2
i
2
i=1
i=1
i=1
i=1
i=1
p,
(3.9.16)
189
estimated as follows:
2
min(i )
i
4
X
i=1
|ai |2 <
p 2
4
(3.9.17)
li
k
dyi X
mj
=
pij yj
,
dt
i = 1, 2, . . . , k,
(3.9.18)
j=1
where m_j and l_i are odd natural numbers and p_ij are constant coefficients with p_ij ≥ 0 for i ≠ j, i, j = 1, 2, . . . , k.
Let P = (p_ij)_{i,j=1}^{k}. It is easy to verify that if det P ≠ 0, then system (3.9.18) has properties C1–C3, where H is any positive number. Then, the zero solution of this system is asymptotically stable in a nonnegative cone if and only if there exist positive numbers α1, . . . , αk satisfying the inequalities
    Σ_{j=1}^{k} p_ij α_j^{m_j} < 0,   i = 1, . . . , k,    (3.9.19)
where l and m are odd natural numbers and p_ij are constant coefficients, p12 ≥ 0, p21 ≥ 0.
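For a concrete matrix, condition (3.9.19) is easy to test. The sketch below uses a hypothetical 2 × 2 matrix P with nonnegative off-diagonal entries and exponents m_j = 1, and simply searches a grid of positive α's; it only illustrates how the criterion can be checked and is not part of the text.

```python
# Illustrative check of condition (3.9.19) for a hypothetical 2x2 example
# with m_j = 1, i.e. sum_j p_ij * alpha_j < 0 for i = 1, 2.
import itertools
import numpy as np

P = np.array([[-3.0, 1.0],
              [2.0, -4.0]])            # p_ij >= 0 for i != j, det P != 0

grid = np.linspace(0.1, 3.0, 30)
found = None
for a1, a2 in itertools.product(grid, repeat=2):
    alpha = np.array([a1, a2])
    if np.all(P @ alpha < 0):
        found = alpha
        break

print("positive alpha satisfying (3.9.19):", found)
```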
3.9.2
(3.9.20)
Here y ∈ R^p, z ∈ R^q, g : R+ × R^p → R^p, G : R+ × R^p × R^q → R^p, h : R+ × R^q → R^q, H : R+ × R^p × R^q → R^q. In addition, the functions g, G, h, H are continuous on R+ × R^p, R+ × R^q and R+ × R^p × R^q, respectively, and they vanish for y = z = 0.
The problem is to point out the connection between the stability properties of the equilibrium state y = z = 0 of system (3.9.20) on R^p × R^q and those of its nonlinear approximation
    dy/dt = g(t, y),   dz/dt = h(t, z).    (3.9.21)
191
Assumption 3.9.1 Let there exist time-invariant neighborhoods N_y ⊆ R^p and N_z ⊆ R^q of the equilibrium states y = 0 and z = 0, respectively, and let there exist a matrix-valued function
    U(t, y, z) = [ v11(t, y), v12(t, y, z); v21(t, y, z), v22(t, z) ],    (3.9.22)
the elements v_ij of which satisfy the estimates (cf. Krasovskii [1], Djordjevic [1])
c11 y2 v11 (t, y) c11 y2
2
for all
(t, y = 0) R+ Ny ;
for all
(t, z = 0) R+ Nz ;
for all
cii > 0,
(t, y = 0, z = 0) R+ Ny Nz ;
i = j.
(3.9.23)
192
v12
z
T
c12 = c21 ,
C=
c11 c12
,
c21 c22
c12 = c21 ,
and
11 = 12 (11 + 12 ) + 21 2 (14 + 16 + 26 );
22 = 22 (21 + 22 ) + 21 2 (18 + 24 + 28 );
1 2
1 13 + 23 22 + 1 2 (15 + 25 + 17 + 27 ),
12 =
2
1 , 2 being positive numbers, have the characteristic roots
1 (t), 2 (t) and let Re i (t) , for all t t0 .
Then the state of equilibrium y = z = 0 of the system (3.9.20)
is uniformly stable if the number is equal to zero and exponentially
stable if < 0.
If conditions of Assumptions 3.9.1, 3.9.2 are fullled for Ny =
Rp , Ny = Rq and conditions (2), (3) of the theorem hold, then the
equilibrium state y = z = 0 of the system (3.9.20) is uniformly stable
in the whole if = 0 and exponentially stable in the whole if < 0.
193
Proof The proof uses the direct method. On the basis of estimations (3.9.23), it is not dicult to show that the function
v(t, y, z) = T U (t, y, z) satises the estimates
uT T Cu v(t, y, z) uT T Cu,
(3.9.25)
uT
where
= (y, z), = diag [1 , 2 ].
Also, in view of Assumption 3.9.1 and the estimates (3.9.24), the
derivative Dv(t, y, z) dened by Dv(t, y, z) = T DU (t, y, z) satises
(3.9.26)
Dv(t, y, z) uT S(t)u = uT M (t)u uT u.
By virtue of (2) and (3) and the inequalities (3.9.25), (3.9.26), we see that all the conditions of Theorems 1.2.1 and 1.2.5 from the book by Martynyuk [4] are verified for the function v(t, y, z) and its derivative. Hence the proof is complete.
If in the estimate (3.9.24) we change the sign of the inequality to the opposite one, then, by a method similar to the one given, we can obtain the estimate
    Dv(t, y, z) ≥ u^T S(t) u,
which allows us to formulate instability conditions for the equilibrium state y = z = 0 of system (3.9.20) on the basis of Theorem 1.2.7 from the book by Martynyuk [4].
The statement of Theorem 3.9.5 shows that uniform stability or exponential stability of the equilibrium state y = z = 0 of system (3.9.20) can hold even if the equilibrium state y = z = 0 of system (3.9.21) does not have the property of asymptotic quasi-stability (cf. Lefschetz [1]).
Example 3.9.5 Consider the system describing the motion of two
nonautonomously connected oscillators
dx1
dt
dx2
dt
dy1
dt
dy2
dt
(3.9.27)
194
dx2
= 1 x1 ,
dt
dy2
= 2 y1 ,
dt
(3.9.28)
y = (y1 , y2 )T .
(3.9.29)
We use the equation (3.9.28) to determine the nondiagonal element v12 (x, y) of the matrix-valued function U (t, x, y) = [vij ()],
i, j = 1, 2. To this end set = (1.1)T and v12 (x, y) = xT P12 y,
where P12 C 1 (T , R22 ). For the equation
dP12
0 1
0
2
cos t sin t
+
P12 +P12
+2v
= 0,
1
0
2 0
sin t cos t
dt
(3.9.30)
the matrix
2v
sin t cos t
P12 =
+ 1 2 cos t sin t
is a partial solution bounded for all t T .
Thus, for the function v(t, x, y) = T U (t, x, y) it is easy to establish the estimate of (3.9.25) type with matrices C and C in the
form
c11 c12
c11 c12
C=
, C=
,
c12 c22
c12 c22
.
where c11 = c11 = 1; c22 = c22 = 1; c12 = c12 = |+|2v|
1 2 |
T
T
Besides, the vector u1 = (x, y) = u2 since the system (3.9.28)
is linear.
For system (3.9.27) the estimate (3.9.26) becomes
&
Dv(t, x, y)&(3.9.28) = 0
for all (x, y) R2 R2 because M = 0.
195
3.10
1
| + 1 2 |.
2
As we have noted in the last section, an unpleasant fact in the approach of several Lyapunov functions is the requirement of the quasimonotone property of the comparison system. Since a comparison system with a desired stability property can exist without satisfying the quasimonotone property, the limitation of this otherwise effective technique is obvious. It is observed that this difficulty is due to the choice of the cone relative to the comparison system, namely R^N_+, the cone of nonnegative elements of R^N, and a possible answer lies in choosing a suitable cone other than R^N_+ to work with in a given situation.
Using the comparison results 3.1.3 and 3.1.4, it is now easy to discuss the method of cone-valued Lyapunov functions. We shall merely state two typical results.
Theorem 3.10.1 Assume that
(i) V C[R+ S(), K], V (t, x) is locally Lipschitzian in x relative to the cone K RN and for (t, x) R+ S(),
K
(iii) f (t, 0) 0 and for some 0 K0 , 0 (V (t, x)) is positive denite and decrescent for (t, x) R+ S(), where K0 = K \ {0}
and K0 is the adjoint of K0 .
Then, the stability properties of the trivial solution u = 0 of
u = g(t, u),
u(t0 ) = u0 ,
(3.10.1)
(t, x) R+ S();
also use other measures in place of 0 (V (t, x)). For example, let
C[K, R+ ], (u) is nondecreasing in U relative to K. Then it
is enough to suppose (V (t, x)) be positive denite and decrescent
N in Theorem 3.10.2,
in Theorem 3.10.1. Moreover, if P Q = R+
the unpleasant fact concerning the quasimonotonicity of g(t, u) mentioned earlier can be removed. This, of course, means that we have
197
(3.10.2)
(3.10.3)
(3.10.4)
(3.10.5)
and
'
(
u1
u1
1
0 for all u = 0.
, a , a11 u1 + a12 , a21 u1 + a22
198
(3.10.6)
(3.10.7)
i=2
the conditions 0 < 1 < min(k ), k [2, n + 1], f () > 0, for each
k
n
(1 1 + a1 )
(i i + ai )2
2
i=2
+
2
n
(ai i n+1 g()) ,
i=2
1 0,
n
2i + 2
21
i=2
n
a2
i
i=2
>
a21 f ()
(3.10.8)
for = 0.
Remark 3.10.2 In Martynyuk and Obolensky [1], sufficient conditions for uniform asymptotic stability were established as follows
n+1
n
a2
i
i=2
(3.10.9)
where the signs before the coefficients a_i were not taken into account. It is easily seen that the condition (3.10.8) on the parameters extends (3.10.9).
3.11
Notes
For various comparison results given in Section 3.1 see Lakshmikantham and Leela [1, 4] and Martynyuk and Obolensky [2]. Stability in terms of two measures was introduced by Movchan [1] and successfully developed by Salvadori; see Bernfeld and Salvadori [1].
The contents of Sections 3.2 and 3.3 are modelled on the basis of the works of Bernfeld and Salvadori [1] and Lakshmikantham and Leela [1]. Corollary 3.3.1 is due to Marachkov [1].
The converse theorem given in Section 3.4 is due to Lakshmikantham and Salvadori [1].
Section 3.5 contains new results, and the results of Section 3.6 are taken from Krasovskii [1].
4
STABILITY OF PERTURBED
MOTION
4.0
Introduction
In this chapter, stability considerations are extended to a variety of nonlinear systems utilizing the same versatile tools, namely, Lyapunov-like functions, the theory of appropriate inequalities and different measures, that were developed in the previous chapters. In order to avoid monotony, we have restricted ourselves to presenting only typical extensions which demonstrate the essential unity achieved and pave the way for further work.
Sections 4.1 and 4.2 deal with stability criteria for perturbed differential equations. The main features are the use of the converse theorem for uniform asymptotic stability in terms of two measures and the coupled comparison equation which depends on the solutions of the given system. In Section 4.3, a technique in perturbation theory is presented which combines the Lyapunov method and the method of variation of parameters to help preserve the inherent rich behavior of perturbations.
Section 4.4 is devoted to the extension of the Lyapunov method to differential equations with infinite delay using both Lyapunov functionals and functions. A unified approach is presented which is parallel to the corresponding theory for differential equations without delay. In Section 4.5 we describe a technique which offers better qualitative information compared to the method of Section 4.4. The idea is to use upper and lower estimates simultaneously, together with certain auxiliary functions, so as to obtain growth of solutions
4.1
x(t0 ) = x0 ,
(4.1.1)
203
t t0 ,
(4.1.3)
where u(t, t0 , x0 , u0 ) is any solution of (4.1.2), h0 , h are functions as
in Section 3.2.
h0 (t0 , x0 ) < 2
and u0 < 1
204
(4.1.4)
(4.1.6)
and
k2 eM < a(1 )/2,
(4.1.7)
205
and
a(1 ) U (t, y(t, t0 , x0 )) b()
on
[t1 , t2 ].
4.2
206
as
t .
t
() d =
t0
() ds d
t0
s+1
t
t
() d ds =
G(s) ds,
t0 1
t0 1
(4.2.2)
where G(t) =
t+1
207
(s)ds. Let
t1
|w(s, x(s))| ds
c(u(s)) ds +
t2
t2
t1
c()(t2 t1 ) +
G(s) ds
t2
(t1 t2 )[c() + Q( )] + Q( )
+ Q( ) < + = ,
2 2
which is a contradiction. Hence u = 0 is uniformly stable.
Next, we shall prove quasi uniform asymptotic stability. Taking
= , set 0 = () and 0 = (). Because of uniform stability
of x = 0, it follows that x0 < 0 implies x(t) < , t t0 . Let
0 < < and t0 R+ be given. Choose = () and = () as
before. Choose
T = [c() () + 2Q(1) + 2]/c() > ()
and note that T = T () only. Let us suppose that u0 < 0 but
u(t) for t [t0 + , t0 + T ]. Then we get
0 < u(t0 + T ) u(t0 + ) + [c() + Q(t0 + )](T )
1
+ Q(t0 + ) c()(T ) + Q(1) = 0
2
208
t t0 + T.
x(t1 ) =
t [t2 , t1 ],
209
4.3
A study of the effect of perturbations of differential equations depends on the method employed and on the nature of the perturbations. One of the most used techniques is that of the Lyapunov method, and the other is the nonlinear variation of parameters. These methods dictate that we measure the perturbations by means of a norm and thus destroy the ideal nature, if any, of the perturbing terms.
In this section, we develop a new comparison theorem that connects the solutions of perturbed and unperturbed differential systems in a manner useful in the theory of perturbations. This comparison result blends, in a sense, the two approaches mentioned earlier and consequently provides a flexible mechanism to preserve the nature of perturbations. The results that are given in this section show that the usual comparison theorem (Theorem 3.1.1) in terms of a Lyapunov function is included as a special case and that the perturbation theory could be studied in a more fruitful way.
Consider the two differential systems
    y' = f (t, y),   y(t0) = x0    (4.3.1)
and
    x' = F (t, x),   x(t0) = x0,    (4.3.2)
where f, F ∈ C[R+ × S(ρ), R^n]. Relative to the system (4.3.1), let us assume that the following assumption (H) holds:
h
h0
V (s, y(t, s, x))],
(4.3.3)
(4.3.4)
u = g(t, u),
exists for t0 t < T .
u(t0 ) = u0 0
(4.3.5)
211
t0 t < T,
(4.3.6)
t0 s t,
so that m(t0 ) = V (t0 , y(t, t0 , x0 )). Then using the assumptions (H)
and (i), it is easy to obtain
D + m(s) g(s, m(s)),
t0 < s t,
t0 s t,
(4.3.7)
t0 t < T,
(4.3.8)
(4.3.9)
212
t t0 .
(4.3.10)
t t0 .
(4.3.11)
t t0 .
(4.3.12)
t t0 .
(4.3.13)
t t0
(4.3.14)
213
(4.3.15)
(4.3.16)
where V (s, x) and y(t, s, x) are as before. One could use V (s, x) =
x so that W (s, x) = y(t, s, x) is a candidate for Lyapunov function for (4.3.2). If y(t, s, x) x, condition (4.3.14) reduces to
lim
h0
1
[x + hF (t, x) x] g(t, x)
h
(4.3.17)
214
(4.3.18)
t t0 ,
if
x0 < .
(4.3.19)
t0 t t1 .
t t0
215
y(t0 ) = x0 ,
(4.3.20)
x0
. The
1 + x0 (et et0 )
fundamental matrix solution of the corresponding variational equation is
1
.
(t, t0 , x0 ) =
1 + x0 (et et0 )2
whose solutions are given by y(t, t0 , x0 ) =
x2
,
2
x(t0 ) = x0 ,
(4.3.21)
[2 +
the relation
|x(t, t0 , x0 )|2
4u0
1/2
u0 (t
t0 )2 ]
|x0 |2
[1 + x0 (et et0 +
t t0 2 ,
)]
2
t t0
216
4.4
217
when h = , }
0
Mh
es (s)ds < },
h
sup es (s),
hs0
0
M = (0) +
es (s) ds,
xt0 = 0 Bt0 ,
t0 > 0,
(4.4.1)
218
u = g(t, u),
u(t0 ) = u0 0,
(4.4.2)
u(T0 ) = v0 0,
(4.4.3)
existing for t0 t T .
(H5 ) The derivative of V with respect to (4.4.1) dened by
D V (t, (0), ) = lim inf
h0
1
[V (t + h, (0) + hf (t, )) V (t, (0))]
h
(4.4.4)
b sup L(s, t0 , 0 B(t0 ) ) u0 .
t0 st1
219
Then,
V (t, x(t0 , 0 (t))) r(t, t0 , u0 ),
where
r(t, t0 , u0 ) =
u0 ,
r(t, t1 , u0 ),
t t0 ,
t [t0 , t1 ];
t t1 and t1 = (t0 , 0).
Because of the fact that r(t, t0 , u0 ) = lim u(t, ) and the denition of
0
t t1
(4.4.5)
u(t1 ) = u0 + ,
and
t [t1 , t ).
(4.4.6)
u(t ) = m(t ).
s [t1 , t ].
220
Since
r(t , t1 , u0 ) = lim u(t , ) = m(t ) = (t , t , m(t ))
0
and
m(t) u(t, ),
t [t1 , t ],
it follows that
m(s) r(s, t1 , u0 ) (s, t , m(t )),
t [t1 , t ].
t I0 ,
t t0 ,
t0 st1
221
implies
xt Y <
for all
t t0 .
The other concepts of uniform stability, equi- and uniform asymptotic stability can be defined in a similar fashion. We shall now give sufficient conditions for the stability of the null solution of (4.4.1) in (Bt0, R^n), since many practical phenomena associated with functional differential equations with infinite delay suggest the advantage of stability in R^n rather than in a function space. Thus, Definition 3.4.1 with X = Bt0 and ‖x(t)‖ in place of ‖xt‖_Y will suffice for our discussion.
Theorem 4.4.3 Let the assumptions of Theorem 4.4.1 hold. Suppose that
    a(‖x‖) ≤ V (t, x),   (t, x) ∈ R+ × S(ρ),
where a ∈ K. Then, the equi-stability properties of the null solution of (4.4.2) imply the corresponding equi-stability properties of the null solution of (4.4.1). If, instead of (H1), (H2) in Theorem 4.4.1, hypotheses (H1*), (H2*) are satisfied, then the uniform stability properties of the zero solution of (4.4.1) follow from the uniform stability properties of the zero solution of (4.4.2).
Proof. We shall indicate the proof for equi- and uniform stability only. The other assertions can be proved similarly.
Let 0 < ε < ρ and t0 ≥ 0 be given. Then a(ε) > 0 is defined. Assume that the zero solution of (4.4.2) is equistable. Then, given a(ε) > 0 and t0 ∈ R+, there exists a δ1 = δ1(t0, ε) > 0 such that
    0 ≤ u0 < δ1 implies u(t, t0, u0) < a(ε),   t ≥ t0,
222
0 , r) =
Let L(t
t0 st1
b(t0 , r) b(L(t
0 , r)) is continuous, b(t0 , 0) = 0 and b(t0 , r) is increasing in r. Consequently, there exists a = (t0 , ) > 0 such that
b(t0 , r) < 1 if r < .
Now choose 0 Bt0 < . Then, we claim that
x(t0 , 0 )(t) < ,
t t0 .
t0 t t2 .
Dene m(t) = V (t, x(t0 , 0 )(t)), t t0 . Since (H1 )(H5 ) are satised, we have, by Theorem 4.4.1,
m(t) r(t, t0 , u0 ).
Hence, it follows that
a() = a(x(t0 , 0 )(t2 )) V (t2 , x(t0 , 0 )(t2 )) r(t2 , t0 , u0 ) < a().
This contradiction proves that the null solution of (4.4.1) is equistable.
If we assume uniform stability of the null solution of (4.4.2), then
1 (in the above paragraph) is independent of t0 . Note that, by
(H2 ), we get (t, r) = t + q(r), where q(r) is independent of t. Now
choose > 0 such that b(r) < 1 if r < , where b(r) = b(L(r))
223
where t t0 .
Case (b). Suppose that g0 (t, u) = [A (t)/A(t)]u, where A(t) > 0
is continuous and dierentiable on [t0 , ) and A(t) as t .
2 ,R ]
Let g(t, u) = g0 (t, u) + [1/A(t)]g1 (t, A(t)u), where g1 C[R+
+
and r(t, t0 , u0 ) be the maximal solution of (4.4.2). Evidently, the
solution (s, t, v0 ) = v0 A(t)/A(s), s t. Hence
= { Btt0 : V (s, (s))A(s) V (t, (0))A(t),
s [p(t, V (t, (0))), t], t t0 }.
Case (c). Suppose that g0 (t, u) = c(u), where c K and
g(t, u) g0 (t, u). Computing (s, t, v0 ), we see that
(s, T, v0 ) = J 1 [J(v0 ) (s T )],
where J(u) J(u0 ) =
0 s T,
u ds
and J 1 is the inverse function of J.
u0 c(s)
t t0 }.
224
a K,
1
[V (t + h, xt+h (t, ); t0 ) V (t, ; t0 )]}
h
sup
t t0 ,
t0 s(t0 ,0)
225
sup
t0 s(t0 ,0)
and
0 Bt0
inf
t0 s(t0 ,0)
(0 (s, t0 )),
t0 s(t0 ,0)
inf
(0 (s, t0 )).
t0 s(t0 ,0)
(4.4.7)
0
(s)e ds <
s
and
(4.4.8)
226
(s) ds = > 0
(4.4.9)
and
q(m)
(s)e s ds m1/2 .
(4.4.10)
Let m(t) = V (t, (0)) = |(0)|2 and g0 (t, u) = 2u. Then, we have
0
(s)|(s)| ds,
u(t0 ) = v0
(s)|(s)| ds.
q(m(t))
(s)e s ds (0),
(s)|(s)| ds 2c
(4.4.11)
227
Z0
(s)|(s)| ds 2|(0)|e
q(m(t))
Z0
(s) ds
q(m(t))
2|(0)|eh
Z0
(4.4.12)
(s) ds.
Z0
(s)ds]m + m = m g(t, m)
and it is easy to see that all the assumptions of Theorem 4.4.3 are
satisfied. Since the zero solution of
u = u,
u(t0 ) = u0
4.5
Zt
a, b, > 0 (4.5.1)
with V (t, x) =
|x|2 ,
x (t) = a
Zt
a, > 0
(4.5.2)
228
xt0 = 0 ,
t
t0 0
(4.5.3)
229
for
s 0.
r1 (t , t0 ) = x(t ) or
and
r2 (s, t0 ) x(s) r1 (s, t0 ),
r2 (t , t0 ) = x(t ),
t0 s t .
(t , t
x(t )
(t , t ).
= 1
Let us consider the case r1
0) =
case can be proved similarly. We need to prove that
(4.5.4)
The other
t1 s t
for
t1 s t .
s (s , t ).
230
t < ,
,
t s t1 ,
1 (s, t) =
A
(t s) , t1 s t,
2
where t1 = t2 = max 0, t 4/A and A = 2(b + a )(1 + 2 ). It is easily
checked that (ii) of (A2 ) is veried for these 1 , t1 and A. From (i)
of (A2 ) we get
3
t
f b( + ) + a( + )[t1 (t )] a
t1
(1 + 31 ) ds.
231
1 + [1 + b2 2 (1 + 2 )2 ]1/2
.
2 (1 + 2 )
2
.
1 + 2
4.6
x(t0 ) = x0 ,
t
t0
t0 0,
(4.6.1)
2 S(), Rn ].
kernel k C[R+
Assume that f (t, 0, 0) 0 and
K(t, s, 0) 0 so that (4.6.1) admits the trivial solution.
Before we proceed to state the theorem, let us list the following
hypotheses.
232
u(t0 ) = u0 0,
(4.6.2)
v(t0 ) = v0 0,
t0 > t0 ,
(4.6.3)
existing on t0 t t0 .
1
[V (t+h, x+hf (t, x, T x))V (t, x)]
h
g(t, V (t, x)), (t, x) , where = {x C[R+ , Rn ] : V (s, x(s))
(s, t, V (t, x(t))), t0 s t}.
(H2 ) D V (t, x, T x) lim inf
h0
t t0 .
u = g(t, u) + ,
u(t0 ) = u0 + ,
for > 0, suciently small, it is enough to prove that m(t) < u(t, ),
for t t0 . If this is not true, there exists a t1 > t0 such that
m(t1 ) = and
t0 t < t1 .
233
(4.6.4)
t0 s t1 .
t0 s t}.
A (t)
u where A(t) > 0
Case II. Suppose that g0 (t, u) =
A(t)
is continuously dierentiable on [t0 , ) and A(t) as t .
234
1
2 , R ].
Let g(t, u) = g0 (t, u) +
g1 (t, A(t)u) with g1 C[R+
+
A(t)
0
0
Evidently, (s, t , v0 ) = v0 A(t)/A(s) for t0 s t . Hence
= {x C[R+ , Rn ] : V (s, x(s))A(s) V (t, x(t))A(t), t0 s t}.
Case III. Suppose that g0 (t, u) = (u) where C[R+ , R+ ],
K and g(t, u) g0 (t, u). Computing (s, t0 , v0 ) we see that
(s, t0 , v0 ) = J 1 [J(v0 ) (s t0 )],
0 s t0 ,
u ds
and J 1 is the inverse function of J.
u0 (s)
Since (s, t0 , v0 ) is increasing in s to the left of t0 , choosing a xed
s0 < t0 and dening L(u) = (s, t0 , v0 ), it is clear that L(u) > u
for u > 0, L(u) is continuous and increasing in u. Hence the set
reduces to
4.7
The approach developed in Sections 4.4 and 4.6 for delay as well as integro-differential equations offers useful results in a unified way only when the ordinary differential equation part of the given system has nice stability properties, since the delay term or the integral term is essentially treated as a perturbation. The comparison method presented in Section 4.5 is one of the ways to overcome the difficulty posed by the lack of nice stability information for the unperturbed system. In this section, we shall discuss another method that makes use of Theorem 2.10.1 and the corresponding variational system. However, a result analogous to Theorem 2.10.1 is not yet available for delay equations.
Consider the integro-differential system
t
x(t0 ) = x0 ,
(4.7.1)
235
fx (t, 0) = A(t),
(4.7.2)
and using the mean value theorem, equation (4.7.1) can be written
as
Zt
Zt
t0
(4.7.3)
with x(t0 ) = x0 and
F (t, x) =
Z1
G(t, s, x) =
Z1
x (t) = A(t)x(t) +
Zt
x(t0 ) = x0
(4.7.4)
t0
(4.7.5)
236
(4.7.6)
t0
L(t, s) being any solution of (2.10.3), imply the corresponding stability properties of the system (4.7.1).
Proof By Theorem 2.10.1, system (4.7.3) is equivalent to
x (t) = B(t)x(t) + L(t, t0 )x0 + F (t, x(t))
t
t
+ L(t, s)F (s, x(s)) ds + G(t, s, x(s)) ds
t0
(4.7.7)
t0
t s
L(t, )G(s, , x()) d ds,
+
t0 t0
x(t0 ) = x0
(4.7.8)
and hence assumption (i) shows that the trivial solution of (4.7.8)
is exponentially asymptotically stable. Consequently, there exists a
Lyapunov function V such that
(a) V C[R+ S(), R+ ], V (t, x) is Lipscitzian in x for a constant
M > 0 and x V (t, x) Kx, (t, x) R+ S();
(b) D + V (t, x) V (t, x), (t, x) R+ S().
Let x(t) = x(t, t0 , x0 ) be any solution of (4.7.3) existing on
some interval t0 t t1 . Then, using (a) and (b) and setting
m(t) = V (t, x(t)) we get the integro-dierential inequality
D + m(t) m(t) + M w1 (t, m(t))
237
t
w(t, s, m(s))ds,
+M
t0 t t1 ,
t0
t0 t t1 ,
t
w(t, s, u(t)e(st0 ) ) ds ,
t0
t0 t t1 ,
4.8
(4.8.1)
238
(4.8.2)
(4.8.4)
239
(4.8.5)
n0
240
(4.8.6)
which shows that the origin is asymptotically stable for (4.8.6) and
because the equation is autonomous, it follows that the stability is
also uniform.
If in Theorem 4.8.2 the condition that V is decrescent is removed,
the asymptotic stability will result.
Theorem 4.8.3 Assume that there exists a function V such that
(1) V : Nn+0 S() R+ , V (n, 0) = 0, positive denite;
241
n Jn0 .
4.9
t0 0,
where
(i) 0 < 1 < 2 < . . . < k < . . . and k as k ;
(ii) f C[R+ S(), Rn ], f (t, 0) 0; and
(iii) Ik C[S(), Rn ], Ik (0) = 0, k = 1, 2, 3, . . .
(4.9.1)
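A minimal simulation of such an impulsive system, with hypothetical data f(t, x) = −x, impulse maps I_k(x) = −x/2 and equally spaced moments τ_k, is sketched below; it only illustrates how the flow between impulses is interleaved with the jumps x(τ_k+) = x(τ_k) + I_k(x(τ_k)).

```python
# Illustrative sketch of an impulsive system of the type (4.9.1) (hypothetical data):
# between the moments tau_k the state follows x' = f(t, x); at t = tau_k it jumps.
import numpy as np

def f(t, x):
    return -x                              # hypothetical right-hand side, f(t, 0) = 0

def I(k, x):
    return -0.5 * x                        # hypothetical impulse, I_k(0) = 0

taus = np.arange(1.0, 10.0, 1.0)           # 0 < tau_1 < tau_2 < ... , tau_k -> infinity
h, t, x = 1e-3, 0.0, np.array([1.0, -0.5])
next_imp = 0
history = []
while t < 10.0:
    x = x + h * f(t, x)                    # Euler step between impulses
    t += h
    if next_imp < len(taus) and t >= taus[next_imp]:
        x = x + I(next_imp + 1, x)         # jump: x(tau_k+) = x(tau_k) + I_k(x(tau_k))
        next_imp += 1
    history.append(np.linalg.norm(x))
print("final norm:", history[-1])          # decays: both the flow and the impulses are contractive
```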
t = k ,
u(t+
0 ) = u0 0,
t0 0,
= k (u(k ))
(4.9.2)
t t0 .
t = k ,
m(t+
0 ) u0
k = 1, 2, 3, . . .
243
(t, x) R+ S(),
lim
V (t, y),
lim
V (t, y)
(t,y)(k ,x)0
(t,y)(k ,x)+0
ds
g(s)
is satised and
k
p(s) ds k ,
k 0,
k = 1, 2, . . . ,
(4.9.4)
k1
i=1
244
u(
i (u(i ))
u(i
ds/g(s)
u(i+ 1)
ds/g(s) i , (4.9.5)
p(s) ds +
u(i1 )
u(i
+
). Thus, by induction, we
which in turn shows that u(i+ ) u(i1
+
+
get u(k ) u(k1 ) for each k j + 1 and consequently, u(t) <
for t t0 .
i = , we can show that lim u(i+ ) = 0. Assume the
If
i
i=1
i
u(i+ )
+
u(i1
) u(i+ )
,
ds/g(s)
g()
+
) i g(), which yields
and therefore u(i+ ) u(i1
+
u(j+k
)
u(j+ )
g()
j+k
i .
i=j+1
245
(4.9.6)
ds/g(s) <
j ()
u(t
)
ds/g(s)
ds/g(s) <
u0
u0
j+1
p(s) ds
ds/g(s) =
p(s) ds,
j
t0
j+1
p(s) ds +
j
ds/g(s) > 0
t
ds/g(s)
u(i+ )
Since
u(i+ )
= i (u(i )),
i+1
p(s) ds
i+
+
u(
i )
i+
ds/g(s) =
u(i )
p(s) ds.
i (u(
i ))
u(i )
i+1
ds/g(s)
u(i )
i (u(
i ))
ds/g(s) 0.
p(s) ds +
i
u(i )
246
This proves that u(t) u(i ) < for t (j , i+1 ] which by induction
shows that u(t) < for t t0 proving the statement.
Further we consider the impulsive system (4.9.1) under the condition 0 < 1 k+1 k 2 < , k = 1, 2, . . . . For system (4.9.1)
the following Theorem is valid.
Theorem 4.9.3 Let there exist a function V (t, x) ∈ V0 for system (4.9.1) such that the following inequalities are satisfied:
(1) 0 ≤ V (t, x) ≤ c(‖x‖) for (t, x) ∈ R+ × D, D ⊆ R^n;
(2) V (t+, x(t+)) − V (t, x) ≤ 0 for t = τk, k = 1, 2, . . . ;
(3) D+ V (t, x)|_{(4.9.1)} ≤ −b(‖x‖) for t ∈ (τk, τk+1], k = 1, 2, . . . ;
(4) V (τk+, x(τk+)) ≥ a(‖x(τk+)‖), where a, b, c ∈ K.
Then the zero solution of system (4.9.1) is asymptotically stable.
Proof Choose the Lyapunov function candidate V (t, x) ∈ V0. Next, consider the sequence of numbers {V (τk+, x(τk+))}_{k=0}^{∞}. Obviously this is a nonincreasing sequence. Let t0 = 0; then, using conditions (1), (4) of Theorem 4.9.3, we get
    a(‖x(τk+)‖) ≤ V (0+, x(0+)) ≤ c(‖x0‖),
where x(0+) = x0.
First we show that for any ε > 0 there exists δ(ε) > 0 such that ‖x0‖ < δ(ε) implies ‖x(τk+)‖ < ε e^{−Lθ2}, where L > 0 is a Lipschitz constant for the function f (t, x), k = 1, 2, . . . . Suppose the contrary.
+
Then there exists N > 0
such that x(N
) eL2 . Next, for any
> 0 choose () = c1
a(eL2 )
2
. Then we get
+
)) c(x0 )
a(eL2 ) a(x(N
L2 )
a(eL2 )
1 a(e
=
.
<c c
2
2
This contradiction proves that for any > 0 there exists () > 0
such that x0 < () implies x(k+ ) < eL2 < , k = 1, 2, . . . .
247
Next, show that for any > 0 there exists () > 0 such that
x0 < () implies x(t) < for t (k , k+1 ].
dx
Consider the system
= f (t, x) for t = k . The systems soludt
t
tion x(t) is expressed as x(t) = x(k+ ) + f (s, x(s))ds, t (k , k+1 ].
k
x(k+ )
t
f (s, x(s)) ds
x(k+ )
t
+
Lx(s) ds.
k
Using Theorem 1.1.1 and inequality x(k+ ) < eL2 < we obtain
x(t) x(k+ )eL(tk ) eL2 x(k+ ) < eL2 eL2 =
i.e. x(t) < as
x0 < () = c1
a(eL2 )
.
2
,
+
Next, suppose that x(k+ )) k=0 0 as k . Then we
+
,
can select subsequence x(n+k )) n , where n, k = 1, 2, . . . and
k
limnk x(nk ) > 0.
248
(n+k+1 , x(n+k+1 ))
(n+k , x(n+k ))
(n+k , x(n+k ))
b(x(s)) ds
nk
(nk+1 nk ).
V (n+k+1 , x(n+k+1 ))
k=1
p
V (n+k , x(n+k )) (nk+1 nk ) ,
k=1
(n+p+1 , x(n+p+1 ))
V (n+1 , x(n+1 )) 1 np .
2
V (t, x),
t
t = i,
p(s)ds+
Zi (c)
c
ds
=
g(s)
Zi+1
ds
+
s
2
(1+p
Z i) c
ds
i+1
= log
|1 + pi | 0.
2s
i
4.10
Reaction-Diffusion Equations
kv(x) v(y)k
and [] = sup
, x 6= y, 0 < < 1. The space C k+
kx
yk
x,y
and the norm in that space are defined in the usual way. For example,
kvk = kvk0 + [v]
and
kvkk+ = kvk0 +
where Di =
form by
N
X
kDi vk0 +
N
X
Di (aij (x)Dj v)
i=1
N
X
i,j=1
kDi Dj vk ,
i,j=1
We assume that
where aij are functions defined in .
and
(A1 ) aij C 1+ ()
0 < < 1, > 0;
N
P
i,j=1
250
For any interval J = (0, T ], let c (J) consist of all u for which
We
ut , Di u, Di Dj u, 1 i, j N exist and are continuous on J .
consider the reaction-diusion system
ut = Lu + f (u) in J ,
(4.10.1)
u
u(0, x) = (x) in ,
= 0 on J ,
r(0) = r0 0
(4.10.2)
existing on [0, ).
implies
Then V ((x)) r0 in
V (u(t, x)) r(t),
t0
(4.10.3)
on J ,
in J
m(0, x) r0
in .
u S(), a, b K,
(4.10.4)
whenever V ((x)) r0 , x ,
V (u(t, x)) V ((x)), t 0, x ,
which yields the uniform stability of the zero solution of (4.10.1). We note that if V (u) is convex, then V_u L(u) ≤ LV holds. Also, if the system (4.10.1) does not possess the same diffusion law for all components, the foregoing method does not work. Since the stability properties depend solely on the reaction term, one concludes the same stability property from (4.10.2) even when the constant in (A1) is zero or positive. To avoid this defect, we need to compare the system (4.10.1) with the scalar reaction-diffusion equation
    r_t = Lr + g(r) in J × Ω,   r = 0 on J × ∂Ω,   r(0, x) ≥ 0 on Ω.    (4.10.5)
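A one-dimensional finite-difference sketch of this comparison idea is given below; the reaction terms, the diffusion coefficient and the grid are all hypothetical, and the code only illustrates that a scalar majorant r(t, x) stays above u(t, x) when it does so initially and its reaction term majorizes that of u on the relevant range.

```python
# Illustrative 1-D finite-difference sketch of comparison with a scalar
# reaction-diffusion majorant (all data hypothetical): u_t = u_xx + f(u),
# r_t = r_xx + g(r), with g >= f on the nonnegative range and r(0,x) >= u(0,x).
import numpy as np

def lap(w, dx):
    out = np.empty_like(w)
    out[1:-1] = w[2:] - 2 * w[1:-1] + w[:-2]
    out[0] = 2 * (w[1] - w[0])            # zero-flux boundary via mirroring
    out[-1] = 2 * (w[-2] - w[-1])
    return out / dx**2

f = lambda u: -u                          # reaction term of the system (hypothetical)
g = lambda r: -0.5 * r                    # majorizing reaction term, g(r) >= f(r) for r >= 0

n, dx, dt = 50, 1.0 / 49, 1.0e-4
x = np.linspace(0.0, 1.0, n)
u = 0.3 * 0.5 * (1 + np.cos(2 * np.pi * x))   # nonnegative initial data
r = np.full(n, 0.4)                            # r(0, x) >= u(0, x)

for _ in range(20000):                    # integrate up to t = 2
    u = u + dt * (lap(u, dx) + f(u))
    r = r + dt * (lap(r, dx) + g(r))
    assert np.all(u <= r + 1e-12)
print("u(t, x) <= r(t, x) everywhere, as the comparison principle predicts")
```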
t 0,
x ,
(4.10.6)
4.11
Notes
Theorem 4.1.1 is new, while Theorem 4.1.2 is due to Lakshmikantham and Salvadori [1]. The contents of Section 4.2 are essentially new and the special cases considered are based on Strauss and Yorke [1]. Section 4.3 consists of the work of Lakshmikantham and Leela [1], while Section 4.4 contains the work of Lakshmikantham, Ladde and Leela [6]. See Hale and Kato [1], Kato [1, 2] concerning allied results for equations with infinite delay. For a survey on equations with unbounded delay see Corduneanu and Lakshmikantham [1]. Section 4.5 introduces a new technique which is adapted from Lakshmikantham and Leela [5]. All the results of Section 4.6 are taken from Lakshmikantham [4]. The contents of Section 4.7 are new, while the results given in Section 4.8 are adapted from Lakshmikantham and Trigiante [1]. Section 4.9 consists of results from the work of Bainov, Lakshmikantham and Simeonov [1] and Martynyuk and Slynko [1]. The material of Section 4.10 is adapted from Lakshmikantham and Leela [1]. See also Hale [1] and Redheffer and Walter [1] for the use of Lyapunov functions. For the use of vector Lyapunov functions in reaction-diffusion systems see Lakshmikantham and Leela [7]. The concept of matrix-valued function for perturbed motion is well established in stability theory (see Martynyuk [16] and Khoroshun and Martynyuk [1]).
5
STABILITY IN THE MODELS OF REAL WORLD
PHENOMENA
5.0
Introduction
5.1
(5.1.1)
where q, q̇, q̈ ∈ R^n are the vectors of the generalized coordinates, velocities and accelerations of the robot; H(q) is the positive definite matrix of inertia moments of the manipulator mechanisms; h(q, q̇) is the n-dimensional nonlinear vector function which takes into account centrifugal, Coriolis and gravitational moments; τ = τ(t) is the n-dimensional input (control) vector; J^T(q) is the n × m Jacobi matrix relating the motion velocity of the robot's control devices to its generalized coordinates; F (t) is the n-dimensional vector of generalized forces, or of generalized forces and moments, acting on the executive robot device due to the dynamic environment.
Under the condition that the environment does not admit motions of its own independent of the motion of the executive robot organs, the mathematical model of the environment is described by the nonlinear vector equations
    M(s)s̈ + L(s, ṡ) = F,    (5.1.2)
    s = Φ(q),    (5.1.3)
where s is the vector of the environment motions and Φ(q) is the vector function connecting the coordinates s and q. Note that in the case of traditional hybrid control, the environment plays the role of a kinematic constraint and the relationship (5.1.3) becomes
    Φ(q) = 0.    (5.1.4)
255
Thus the equation set (5.1.1), (5.1.5) represents the closed mathematical model of the robot interacting with the environment.
Let qp(t) be the n-dimensional vector of the program values of the generalized coordinates, q̇p(t) the n-dimensional vector of the program values of the generalized velocities, and Fp(t) the m-dimensional vector of forces corresponding to the program values of the generalized coordinates and velocities. The program values of the force
Fp (t) and those of functions qp (t), qp (t), qp (t) cannot be arbitrary
and should satisfy the relationship Fp (qp (t), qp (t), qp (t)) where
C(Rn Rn Rn , Rm ). The connected system of equations
(5.1.1), (5.1.5) can easily be reduced to the form
.
L(qp , qp ) + S T (q) S T (qp ) Fp
M (q)
q M (qp ) + L(q, q)
(5.1.6)
= S T (q)(F Fp ).
The n-dimensional vector of deviations of the program trajectory
from real one is designated by y. Then the equation (5.1.6) becomes
y + K(t, y, y)
= M 1 (y + qp )S T (y + qp )(F Fp ),
where
)
K(t, y, y)
= M 1 (y + qp ) L(y + qp , y + qp )
.
L(q, qp ) + M (y + qp ) M (qp ) qp +
. *
+ S T (y + qp ) S T (qp ) Fp .
x = (x1 , x2 )T , x1 = y, x2 = y,
&On
&In
&
K &&
A(t) = K &
,
&
&
y (y,y)=(0,0)
y (y,y)=(0,0)
(5.1.7)
256
x(t0 ) = x0 ,
(5.1.8)
(5.1.9)
under certain assumptions on the functions specifying the action of the dynamic environment on the robot.
Consider the independent equation (5.1.9), which specifies the influence of the dynamic environment on the executive organ of the robot. From (5.1.9) it follows that
t
(t) = 0 +
Q((s)) ds,
t t0 .
(5.1.10)
t0
(5.1.11)
257
where p(t) is a function integrable over any finite time interval. With the deviation
    F(t) − Fp(t)    (5.1.12)
of the program value of the force Fp(t) from the force F(t) acting due to the dynamic environment, and with (5.1.10), the action of the environment on the robot may be estimated by the function p(t). We introduce the designations
    p0 = sup_{t≥0} p(t),   p1 = sup_{t≥0} ∫_t^{t+1} p(s) ds,   p2 = sup_{t≥0} ( ∫_t^{t+1} p²(s) ds )^{1/2}.
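For a concrete disturbance profile these three characteristics are straightforward to estimate numerically; the sketch below uses a hypothetical p(t) and a simple quadrature, purely as an illustration of the definitions.

```python
# Illustrative computation of p0, p1, p2 for a hypothetical disturbance p(t).
import numpy as np

def p(t):
    return 0.2 * np.abs(np.sin(3.0 * t)) * np.exp(-0.1 * t)

ts = np.linspace(0.0, 30.0, 30001)        # sample the horizon [0, 30]
dt = ts[1] - ts[0]
vals = p(ts)

win = int(round(1.0 / dt))                # number of samples in a window of length 1
p0 = vals.max()
p1 = max(vals[i:i + win].sum() * dt for i in range(len(ts) - win))
p2 = max(np.sqrt((vals[i:i + win] ** 2).sum() * dt) for i in range(len(ts) - win))
print(f"p0 ~ {p0:.4f}, p1 ~ {p1:.4f}, p2 ~ {p2:.4f}")
```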
(5.1.13)
should be taken instead of (5.1.11), where (t) is the integrable function such that
(s) ds < +.
(5.1.14)
0
(5.1.15)
259
such that any motion x(t; t0, x0) of the robot simulated by the system (5.1.8) will approach zero as t → ∞ for sufficiently small values of ‖x(t0)‖.
Proof When the condition I is satised, the value L in (5.1.17) is
dened by the formula L = (2N )1 :
(t, x)
x,
2N
t T.
(5.1.18)
From the condition III it follows that > 0 exists such that
u(t, x) <
NL
,
2N
t T.
(5.1.19)
.
W (t, ) (, x( )) + u(, x( )) ds. (5.1.20)
t0
x(t) N e
t
x0 + N L
e(ts) x(s) ds
t0
t
+N
(5.1.21)
t0
M (t) N x0 +
NL
N
N
N
M (t) +
x0 +
.
NL
NL
( N L)
<
.
4N
4N
260
tj
+ NL
tj /2
e(tj s) x(s) ds + N
e(tj s) u(s, x(s)) ds
t0
tj /2
tj
+N
tj /2
For a given > 0 there exists J such that x(tj ) > for all
j J and x(t) < + with t tj /2. Consequently, with j J
we nd
N L 1 tj N L( + ) N 1 tj
e 2
+
e 2
N x0 e(tj t0 ) +
+
NL
max u(t, x(s)).
+
12 tj stj
N L( + )
is obtained as j +. Since
Further, we study the motion of the robot interacting with the dynamic environment under the conditions (5.1.13) and (5.1.14). For providing sufficient stability conditions the following lemma is needed.
Lemma 5.1.1 Let γ be a positive constant and let the function φ(t) ∈ C(R+, R+) be such that
    ∫_0^{∞} φ(s) ds < +∞   or   lim_{t→+∞} φ(t) = 0.
Then
    lim_{t→+∞} e^{−γt} ∫_0^{t} e^{γs} φ(s) ds = 0.
Proof Let us rst prove the case when (t) is integrable. For the
given > 0 we choose t to be large enough so that
(s) ds < ,
2
t/2
(s) ds < .
2
0
t/2
Then
et
t/2
t/2
t
es (s) ds e 2
(s) ds < ,
2
0
0
t
e (s) ds
t/2
(s) ds
t/2
Consequently,
t
(s) ds <
.
2
t/2
t
es (s) ds <
t
es (s) ds = 0.
lim e
es (s) ds < +
t
es (s) ds = lim
(t)
= 0.
262
Lemma 5.1.2 Let the function u(t) be continuous and nonnegative and satisfy the inequality
t
u(t) c +
.
ku(s) + (s) ds,
t 0,
t 0.
NL
,
2N
t 0.
.
N L()u(s) + N es u(s, x(s)) ds,
(5.1.22)
263
and consequently,
x(t) N x0 e(N L)t + N e(N L)t
t
(5.1.23)
From the inequality x0 /(2N ) it follows that the rst summand in (5.1.23) will be smaller than /2 for all t 0. From condition 3 of Theorem 5.1.2 it follows that
t
( N L) (N L)t
e
(5.1.24)
e(N L)s ds .
2
2
0
1 e(N L)t
x(t) N x0 e(N L)t +
2
(N L)t
+
1 e(N L)t =
e
2
2
2
for all t 0.
The proof is complete.
Remark 5.1.1 From Theorem 5.1.2 it follows that if u(t, x) → 0 or ∫_0^{∞} u(s, x(s)) ds < ∞, the robot motion tends to the equilibrium state as t → +∞.
Case A. Consider the interactions of the robot with dynamic
environment when functions (t, x)(t) satisfy the estimate
(t, x)(t) (t)
(5.1.25)
(5.1.26)
264
as t .
It is evident that the condition (5.1.26) will be satised if (t) 0
with t + or
(s) ds < +. It is shown (see Strauss and
0
1
(t) = 0
with
with
with
t = 3n,
3n + n1 t 3(n + 1)
0 t 2.
1
n+1 ,
265
(L)
= 1
2N
(5.1.27)
is valid.
It is easy to show that for all t t0 1 the inequality
Zt
t0
(s) ds
Zt
G(s) ds
t0 1
t0
e (s) ds
k(s+1)
(u) du ds =
t0 1
Zt
ek(s+1) G(s) ds
t0 1
(5.1.28)
is valid
With (5.1.28) we obtain
kt
Zt
t0
ks
kt
e (s) ds e
Zt
(5.1.29)
t0 1
lim e
Zt
ek(s+1) G(s) ds = 0
t0 1
kx(t)k N 1 e
+N
Zt
t0
e(ts) Lkx(s)k + (s) ds,
266
thus
t
x(t)e
N 1 e
t0
.
N Lx(s)es + N es (s) ds.
(5.1.30)
t0
Let us designate x(t)et = w(t) and use Lemma 5.1.2 for the
inequality (5.1.30). It is easy to see that
t0 N L(tt0 )
w(t) N 1 e
t
eN L(ts) N es (s) ds,
+
t0
or
(N L)(tt0 )
x(t) N 1 e
t
+N
t0
+ N 1 = .
2
t0
267
(3) for any function ψ(t) satisfying the relationships (5.1.10) and (5.1.12), the estimate (5.1.11) holds for all ‖x‖ < H and t ≥ 0.
Then for sufficiently small initial perturbations x0 = x(0) and ψ(0) = F(0) − Fp(0) the transient process in (5.1.8) satisfies the estimate
‖x(t)‖ ≤ N( ν1(t) + ν2(t) ),    (5.1.31)
where
ν1(t) = e^{−γt} ‖x0‖,   x0 = x(0),
ν2(t) = e^{−βt} ∫_{0}^{t} e^{βs} p(s) ds,   β = γ − NL,
and
ν2(t) ≤ p1 e^{β} ( 1 − e^{−β} )^{−1},   ν2(t) ≤ p2 ( 1 − e^{−β} )^{−1} ( (e^{2β} − 1)/(2β) )^{1/2}.
(3) for any function ψ(t) which satisfies the relationships (5.1.10) and (5.1.12) for all ‖x‖ ≤ H and t ≥ 0, the estimate (5.1.11) and one of the inequalities
p0 < εβ/(2N),    (5.1.32)
p1 < (ε/(2N)) e^{−β} ( 1 − e^{−β} ),    (5.1.33)
p2 < (ε/(2N)) ( 2β/(e^{2β} − 1) )^{1/2}    (5.1.34)
(5.1.35)
are satisfied. Then for every x0 for which ‖x0‖ < ε(2N)^{−1}, the transient process of the system (5.1.8) satisfies the estimate
‖x(t)‖ ≤ N( ν1(t) + ν2(t) ).    (5.1.36)
p0 < εβ/N,    (5.1.37)
p1 < (ε/N) e^{−β} ( 1 − e^{−β} ),    (5.1.38)
p2 < (ε/N) ( 2β/(e^{2β} − 1) )^{1/2}.    (5.1.39)
Further, the equations of the perturbed motion (5.1.8) will be considered under the following assumptions:
I′. The matrix A(t), the vector function of nonlinearity Φ(t, x), and the vector function Φ(t, x)ψ(t), where ψ(t) = F(t) − Fp(t), are continuous and periodic with respect to t. The period of these functions is supposed to be common and equal, say, to unity.
II′. As above, assumption I is preserved for the case of a periodic matrix A(t), i.e.
‖W(t, s)‖ ≤ N e^{−γ(t−s)},    (5.1.40)
(5.1.41)
5.2
(5.2.1)
i=1
y(t) = Cx(t),
(5.2.2)
x(t0 ) = x0 ,
(5.2.3)
q+1
t
(Ki C)x0 qI
i=1
where =
eqs ds > 0,
i .
i=1
i = 1, 2, . . . , l,
u0 (t) = K0 y(t)
i=1
and
x(t) = (t)
t
(t0 )x0
t0
(t)1 (s)
i=1
(5.2.4)
x(t) x0 M e
t
+
M e(ts)
i=1
(5.2.5)
M eqs
i=1
(5.2.6)
1
l
t
q
1 qM q+1 (Ki C)x0 qI eqs ds
0
i=1
M x0
1+
M q+1
l
i=1
1
q
(eqt 1)
M x0
M q+1
l
(Ki C)x0 q 1
i=1
for all t ≥ 0.
If condition
1 qM
q+1
t
(Ki C)x0 qI
i=1
eqs ds > 0,
l
i=1
(Ki C)x0 q
>0
M
1
M q+1
l
(Ki C)x0 q q
i=1
5.3
Synchronization of Motions
The theory of motion synchronization studies systems of differential equations of the form (see Rozo [1] and the bibliography therein)
dx/dt = f(t, x, ε),   x(t0) = x0,    (5.3.1)
together with the averaged system
dx̄/dt = g(x̄),   x̄(t0) = x0,    (5.3.2)
where
g(x) = (1/T) ∫_{0}^{T} f(s, x, 0) ds.
In integral form,
x(t, ε) = x0 + ∫_{0}^{t} f(s, x(s, ε), ε) ds
and
x̄(t, ε) = x0 + ∫_{0}^{t} g(x̄(s, ε)) ds,
so that
x(t, ε) − x̄(t, ε) = ∫_{0}^{t} [ f(s, x(s, ε), ε) − f(s, x(s, ε), 0) ] ds + ∫_{0}^{t} [ f(s, x(s, ε), 0) − f(s, x̄(s, ε), 0) ] ds + ∫_{0}^{t} [ f(s, x̄(s, ε), 0) − g(x̄(s, ε)) ] ds.    (5.3.3)
As shown in the monograph by Rozo [1], for the first and third summands in relation (5.3.3) the following estimates hold:
‖ ∫_{0}^{t} [ f(s, x(s, ε), ε) − f(s, x(s, ε), 0) ] ds ‖ ≤ M ε t,    (5.3.4)
‖ ∫_{0}^{t} [ f(s, x̄(s, ε), 0) − g(x̄(s, ε)) ] ds ‖ ≤ ε( 2M T + 4M^2 T t ).    (5.3.5)
(5.3.6)
(5.3.7)
for all s ≤ t.
Let there exist λ ∈ [0, 1) such that
1 − (λ − 1) ( ε[ 2MT + (4M^2 T + M)t ] )^{λ−1} ∫_{0}^{T} N(s) ds > 0
for all ε < ε0. Then the norm of the divergence of the solutions x(t, ε) and x̄(t, ε) under the same initial conditions is estimated as follows:
‖x(t, ε) − x̄(t, ε)‖ ≤ ε[ 2MT + (4M^2 T + M)t ] { 1 − (λ − 1) ( ε[ 2MT + (4M^2 T + M)t ] )^{λ−1} ∫_{0}^{T} N(s) ds }^{1/(1−λ)}    (5.3.8)
for all t ∈ [0, T] and all ε < ε0.
Estimate (5.3.8) is obtained from inequality (5.3.7) by applying Corollary 2.2.1.
If in estimate (5.3.6) λ = 1 and N(t) = M, then the application of the Gronwall-Bellman lemma to inequality (5.3.7) yields the estimate of the divergence between the solutions in the form (see Rozo [1])
‖x(t, ε) − x̄(t, ε)‖ ≤ ε[ 2MT + (4M^2 T + M)t ] exp(M T)
for all t ∈ [0, T].
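The closeness of the exact and averaged solutions expressed by these estimates can be observed numerically. The sketch below is only an illustration, written in the standard "slow" form dx/dt = ε f(t, x) (which may differ from the normalization adopted in (5.3.1)); the right-hand side is an arbitrary example and is not taken from Rozo [1].

```python
import numpy as np
from scipy.integrate import solve_ivp

eps, T = 0.05, 2 * np.pi
f = lambda t, x: -np.sin(t) ** 2 * x      # 2*pi-periodic in t
# averaged right-hand side: (1/T) * int_0^T f(s, x) ds = -x/2

exact = solve_ivp(lambda t, x: [eps * f(t, x[0])], (0.0, T / eps), [1.0],
                  rtol=1e-9, atol=1e-12, dense_output=True)
averaged = lambda t: np.exp(-0.5 * eps * t)   # solution of dx/dt = -eps*x/2, x(0) = 1

ts = np.linspace(0.0, T / eps, 400)
print(np.max(np.abs(exact.sol(ts)[0] - averaged(ts))))   # of order eps (about 1e-2 here)
```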
5.4
This section deals with the stability, with respect to the linear approximation, of some periodic solutions of a system of nonlinear differential equations. This system describes an experimental realization of a chaotic CO2 laser with 100 per cent depth-modulated periodic pumping by an alternating current.
The variation of the gain (strengthening) factor g and of the amplitude E of the synchronized field of two optically coupled lasers is described by the simplest model
τ ġ = g0(t) − g(1 + E^2),
Ė = (g − g̃th) E / 2,    (5.4.1)
where τ is the effective relaxation time of the active medium (τ ≫ 1), g0(t) = A(1 + sin ωt) is a (2π/ω)-periodic pumping, and g̃th = gth + 2M( 1 − √(1 − (Δ/M)^2) ) is the threshold gain coefficient. Here gth denotes the threshold gain, M is a real positive coupling factor, and Δ is the value of the resonance eigenfrequency detuning (further on, the detuning). For the problems considered below, the difference between the kinetics of a real CO2 medium and that of the model is not essential (see Likhanski et al. [1]).
The mode of phase synchronization, for which the field amplitudes of both lasers are equal at any moment and the phase difference is constant and depends on the detuning, is realized under the condition |Δ| < M. Moreover, the dynamics of the two coupled lasers coincides with the dynamics of one equivalent laser whose threshold grows with the growth of the detuning. In the mode of synchronous generation (for a fixed M) the growth of the detuning corresponds to a decrease of the parameter A/g̃th for the equivalent laser. Due to the complex bifurcation diagram of the laser with periodic pumping, this results in the generation of both chaotic and regular signals.
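A direct way to explore this bifurcation structure is to integrate (5.4.1) numerically. The sketch below is only an illustration: the parameter values are arbitrary placeholders and are not those of the experiment or of Lila and Martynyuk [1].

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameters only
tau, A, omega = 100.0, 2.0, 0.5
g_th, M, Delta = 1.0, 0.3, 0.1
g_th_eff = g_th + 2 * M * (1 - np.sqrt(1 - (Delta / M) ** 2))   # threshold of the equivalent laser

def rhs(t, y):
    g, E = y
    g0 = A * (1 + np.sin(omega * t))              # periodic pumping
    return [(g0 - g * (1 + E ** 2)) / tau,        # tau * dg/dt = g0(t) - g*(1 + E^2)
            (g - g_th_eff) * E / 2]               # dE/dt = (g - g_th_eff)*E/2

sol = solve_ivp(rhs, (0.0, 50 * 2 * np.pi / omega), [1.0, 0.1], max_step=0.05)
print(sol.y[:, -1])     # state after 50 pumping periods
```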
Denote by (gT(t), ET(t))^T, t ∈ [t0, ∞) = T0, t0 ≥ 0, a T-periodic solution of the system of equations (5.4.1) with the initial condition
g(t0) = g0,   E(t0) = E0.    (5.4.2)
We introduce the deviations from this solution of (5.4.1) as
y1 = g − gT(t),   y2 = E − ET(t).    (5.4.3)
(5.4.4)
(5.4.5)
(5.4.6)
t [t , t + T ],
t T0 .
(5.4.7)
|g| ≤ gmax,   |E| ≤ Emax,    (5.4.9)
(5.4.10)
Following the definition of a T-system and relating to the vector function (f^T, f3)^T and the domain D the nonempty set Df of points of R^3 contained in D together with its (T/2)(M^T, M3)^T-neighborhood, the conditions defining a T-system are obtained in the form of the system of inequalities
2gmax − T M1 > 0,   2Emax − T M2 > 0,   2 p11max − T M3 > 0,
(T/2) [ K11 + K22 + √( (K11 − K22)^2 + 4 K12 K21 ) ] < 1,   T K3 < 1.
Moreover, it is also assumed that the initial value (g0, E0, p110) belongs to Df.
The immediate construction of the desired T-periodic solutions is achieved, for example, by the method of trigonometric collocations in a numerical-analytical scheme. To this end we assume that the values of the functions fj(t, g, E, p11), j = 1, 2, 3, calculated on the basis of the m-th approximation to the desired periodic solution, coincide at the N = 2r + 1 collocation points ti = iT/N, i = 0, 1, . . . , 2r, with the values of the trigonometric polynomials
f_j^m = α_{j0}^m + Σ_{l=1}^{r} ( α_{jl}^m cos lωt + β_{jl}^m sin lωt ),    (5.4.11)
(5.4.12)
pq
p = 1,
N
2
p = 2, 4, . . . , 2r,
= N cos p(q 1) N ,
2 sin (p 1)(q 1) , p = 3, 5, . . . , N,
N
N
and
fjm = fjmM .
By introducing into consideration the N × N two-diagonal matrix
Ω = diag( 0, Ω1, . . . , Ωr ),   Ωl = ( (0, lω), (−lω, 0) ),   l = 1, . . . , r,
and N -dimensional vectors
T
r
m
m
m
m
(jl cos lt0 + jl sin lt0 ), 0, . . . , 0 ,
zj = j0 +
l=1
where
m
m
m
m T
1 m
(m
j0 , j1 , j1 , . . . , jr , jr ) = fj ,
where g^0, E^0 and p^0_{11} are the vectors of the coefficients of the appropriate zero approximations.
The form of the zero approximation (g^0(t), E^0(t))^T, the vector of the initial values at the collocation points, and the initial vector of the coefficients of the right-hand sides f1, f2 of equations (5.4.1) are taken based on the solution of system (5.4.1) linearized with respect to the equation for g:
g^0(t) = Cg e^{−t/τ} + ( A/(1 + ω^2 τ^2) ) ( sin ωt − ωτ cos ωt ) + A,   g^0(t0) = g0,
E^0(t) = CE exp{ (1/2) [ −τ Cg e^{−t/τ} + (A − g̃th) t − ( A/(1 + ω^2 τ^2) ) ( (1/ω) cos ωt + τ sin ωt ) ] },   E^0(t0) = E0,
gT(t) − g^0(t) = Σ_{j=−r}^{r} g_j^m e^{ijωt},   ET(t) − E^0(t) = Σ_{j=−r}^{r} E_j^m e^{ijωt},
where g_j^m = ( α_{gj}^m − i β_{gj}^m )/2, g_{−j}^m = ḡ_j^m, E_j^m = ( α_{Ej}^m − i β_{Ej}^m )/2, E_{−j}^m = Ē_j^m, and α_{gj}^m, β_{gj}^m and α_{Ej}^m, β_{Ej}^m stand for the coefficients (5.4.12) of the corresponding trigonometric series (5.4.11). Then
p011 (t)
= Cp11 exp{((1 +
m
Ejm Ej
)t
Ejm Esm
gth g0m )t
ei(j+s)t )/ + (
i(j + s)
s=j
gjm ijt
j=0
ij
},
5.5
The Forrester model of world dynamics (see Forrester [1]) is constructed in terms of the approach developed in the investigation of complex systems with nonlinear feedbacks. In the modeling of world dynamics the following global processes are taken into account:
(i) rapid growth of the world population;
(ii) industrialization and the related production growth;
(iii) restricted food resources;
(iv) growth of industrial wastes;
(v) shortage of natural resources.
The main variables in the Forrester model are:
(1) population P (further on the designation X1 is used);
(2) capital stocks K (X2 );
(3) stock ratio in agricultural industry X (X3 );
(4) level of environmental pollution Z (X4 );
(5) quantity of nonrenewable natural resources R (X5 ).
The factors through which the variables X1, . . . , X5 affect one another are:
(5.5.1)
are written, where y+ is the positive rate of growth of the variable y and y− is the negative rate of decrease of the variable y.
In a simplified form the world dynamics equations are
dP/dt = P(B − D),
dK/dt = K+ − T_K^{−1} K,
dX/dt = X+ − T_X^{−1} X,
dZ/dt = Z+ − T_Z^{−1} Z,
dR/dt = −R−,    (5.5.2)
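The simplified system (5.5.2) can be integrated once the rate terms are specified; in Forrester's model they are products of tabulated state-dependent multipliers. The sketch below only shows the mechanical structure of such a simulation, with crude constant rates chosen purely for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# crude illustrative rates; Forrester's model uses state-dependent multiplier tables instead
B, D = 0.03, 0.02              # birth and death rates for the population P
K_plus, T_K = 0.05, 40.0       # capital investment rate and depreciation time
X_plus, T_X = 0.02, 15.0
Z_plus, T_Z = 0.04, 25.0
R_minus = 0.01                 # resource depletion rate

def world(t, y):
    P, K, X, Z, R = y
    return [P * (B - D),
            K_plus - K / T_K,
            X_plus - X / T_X,
            Z_plus - Z / T_Z,
            -R_minus]

sol = solve_ivp(world, (0.0, 200.0), [1.0, 1.0, 1.0, 1.0, 1.0])
print(sol.y[:, -1])
```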
(t0 ) = 0 ,
(t0 ) = 0 .
(5.5.3)
= const > 0.
(5.5.5)
It is proposed to describe the general nonlinear model of world dynamics by a system of differential equations of the type
dXi/dt = Wi(X) + gi e^{−|φ(t)|},   i = 1, 2, . . . , N,    (5.5.7)
d^2φ/dt^2 + m^2 φ = 0.    (5.5.8)
(5.5.9)
i, j = 1, 2, . . . , m,
m < N,
w Rm ,
(5.5.10)
(5.5.11)
(5.5.12)
holds true.
Also, condition (3a) implies that there exists a δ1 ∈ (0, H) such that
a(H(t, x)) ≤ V(t, x, w)   for   H(t, x) ≤ δ1.    (5.5.13)
(5.5.14)
(5.5.15)
(5.5.16)
t t0
t [t0 , t1 ),
5.6
5.6.1
dx(t)/dt = Ai x(t),   t ≠ τk,
x(t+) = Bi x(t),   t = τk,   k = 1, 2, . . .  (k ∈ N),    (5.6.1)
x(t0+) = x0,
where x(t) = (x1, . . . , xn)^T ∈ R^n is the state vector, z = (z1, . . . , zn)^T ∈ R^n is the premise variable vector associated with the system states and inputs, x(t+) is the right-hand value of x(t), Ai ∈ R^{n×n}, Bi ∈ R^{n×n} are the system matrices, Mij(·) are the membership functions of the fuzzy sets Mij, and r is the number of fuzzy rules. We suppose that the Bi are non-singular matrices and 0 < θ1 ≤ τ_{k+1} − τ_k ≤ θ2 < ∞.
We also suppose that at the moments of impulsive effects {τk} the solution x(t) is left continuous, i.e., x(τk−) = x(τk).
The state equation can be defined as follows:
dx(t)/dt = Σ_{i=1}^{r} ξi(z(t)) Ai x(t),   t ≠ τk,
x(t+) = Σ_{i=1}^{r} ξi(z(t)) Bi x(t),   t = τk,   k ∈ N,    (5.6.2)
x(t0+) = x0,
where
ξi(z) = μi(z) / Σ_{i=1}^{r} μi(z)   with   μi(z) = ∏_{j=1}^{n} Mij(zj).
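As an illustration of how the normalized weights ξi(z) are computed in practice, the following sketch uses Gaussian membership functions; the Gaussian shape and all numerical values are assumptions made only for the example and are not prescribed by the text.

```python
import numpy as np

def gaussian_mf(z, center, width):
    """One possible choice of membership function M_ij (an assumption for this example)."""
    return np.exp(-((z - center) / width) ** 2)

def weights(z, centers, widths):
    """mu_i(z) = prod_j M_ij(z_j);  xi_i(z) = mu_i(z) / sum_i mu_i(z)."""
    mu = np.array([np.prod(gaussian_mf(z, c, w)) for c, w in zip(centers, widths)])
    return mu / mu.sum()

centers = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]   # two rules, two premise variables
widths = [np.array([1.0, 1.0]), np.array([0.5, 0.5])]
xi = weights(np.array([0.3, 0.2]), centers, widths)
print(xi, xi.sum())      # nonnegative weights that sum to one
```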
Clearly Σ_{i=1}^{r} ξi(z) = 1 and ξi(z) ≥ 0. Without loss of generality we take z = x.
The aim of this section is the stability analysis, in the sense of Lyapunov, of the zero solution x = 0 of system (5.6.2). Before the main results are obtained, the following assumption is made regarding the T-S fuzzy system (5.6.2).
Assumption 5.6.1 There exist Λ > 0 and ν > 0 such that the functions ξi(x) for system (5.6.2) satisfy the inequality |Dx+ ξi(x)| ≤ Λ ‖x‖^{1+ν}, i = 1, r.
In this assumption Dx+ ξi(x) denotes the upper Dini derivative of ξi(x), i.e.
Dx+ ξi(x) = lim sup{ ( ξi(x(t + θ)) − ξi(x(t)) )/θ : θ → 0+ }.
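For a concrete feel of the upper Dini derivative, the following generic sketch estimates D+ f(x) of a scalar function by sampling shrinking increments; it is only a crude numerical stand-in for the lim sup, and the Dx+ ξi above is of course taken along the trajectories of the system.

```python
import numpy as np

def upper_dini(f, x, hs=np.logspace(-1, -8, 8)):
    """Crude numerical estimate of D+ f(x) = limsup_{h -> 0+} (f(x + h) - f(x)) / h."""
    return max((f(x + h) - f(x)) / h for h in hs)

f = lambda x: abs(x)      # not differentiable at 0, yet D+ f(0) = 1
print(upper_dini(f, 0.0), upper_dini(f, -1.0))   # prints 1.0 and -1.0
```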
Remark 5.6.1 It should be noted that Assumption 5.6.1 ensures the existence and uniqueness of solutions of system (5.6.2).
Let E denote the space of symmetric n × n matrices with the scalar product (X, Y) = tr(XY) and the corresponding norm ‖X‖ = √(X, X), where tr(·) denotes the trace of the corresponding matrix. Let K ⊂ E be the cone of positive semi-definite symmetric matrices. Next we define the following linear operators: Fi X = Ai^T X + X Ai and Bij X = Bi^T X Bj, for all X ∈ E, i, j = 1, r.
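In matrix form the operators Fi and Bij act on a given symmetric matrix as in the following minimal sketch; the matrices A1 and B1 are arbitrary placeholders.

```python
import numpy as np

def F(A, X):
    """F_i X = A_i^T X + X A_i."""
    return A.T @ X + X @ A

def Bop(Bi, Bj, X):
    """B_ij X = B_i^T X B_j."""
    return Bi.T @ X @ Bj

A1 = np.array([[0.0, 1.0], [-2.0, -1.0]])   # placeholder system matrices
B1 = np.array([[0.5, 0.0], [0.0, 0.8]])
X = np.eye(2)

print(F(A1, X))                    # symmetric whenever X is symmetric
print(Bop(B1, B1, X))              # positive semi-definite whenever X is
print(np.trace(F(A1, X) @ X))      # the scalar product (F_1 X, X) = tr((F_1 X) X)
```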
Several theorems are first proved to demonstrate that, if certain hypotheses are satisfied, the stability of the above nonlinear system can be established using the direct Lyapunov method. It is shown that the stability conditions can be formulated in terms of linear matrix inequalities.
Theorem 5.6.1 Under Assumption 5.6.1 the equilibrium state x = 0 of fuzzy system (5.6.2) is asymptotically stable if for all θ ∈ [θ1, θ2] there exists a common symmetric positive definite matrix X such that
(1/2)( Bji + Bij ) [ I + Σ_{k=1}^{p−1} (−1)^{k+1} (Fi)^k θ^k / k! ] X − X < 0,   i, j = 1, r,    (5.6.3)
(−1)^p (Fi)^p X ≥ 0.    (5.6.4)
p
i (x)Fi
X=
r
ip =1
i=1
i1 =1
P(t, x) = e^{Σ_{i=1}^{r} ξi(x) Fi (t − τk)} X + ∫_{τk}^{t} e^{Σ_{i=1}^{r} ξi(x) Fi (t − s)} ds Q   for t ∈ (τk, τ_{k+1}],   and   P(t, x) = X   for t = τ_{k+1}+.
Q and X are symmetric positive definite n × n matrices. Later we shall show that P(t, x) >_K 0 in some neighborhood of the origin. First let us consider the derivative of V(t, x) with respect to time. If t ≠ τk, then we have
Dt+ V
r
&
T
&
(t, x) (5.6.2) = x
i (x)(ATi P (t, x) + P (t, x)Ai )x
i=1
+ xT Dt+ P (t, x)x
r
i (x)Fi P (t, x)x
= xT
i=1
where
&
r
i=1
i (x)Fi (tk )
r
i=1
Dx+ i (x)
dx
Fi (t k )
dt
t
i (x)Fi X
i=1
i (x)Fi (ts)
dx
Dx+ i (x) Fi (t
dt
i (x)Fi e
r
i=1
i (x)Fi (tk )
t
X
r
i=1
i (x)Fi (tk )
r
i (x)Fi (ts)
i=1
Dx+ i (x)
Dx+ i (x)
i=1
i (x)Fi P (t) e
r
i=1
i (x)Fi
r
i=1
i (x)Fi (ts)
ds Q
dx
Fi X(t k )
dt
dx
Fi (t s)ds Q Q
dt
r
i (x)Fi (tk )
i=1
i=1
t
t
s) dsQ Q
i=1
i=1
i=1
i=1
r
Dx+ i (x)
i=1
r
i=1
i (x)Fi (ts)
Dx+ i (x)
i=1
dx
Fi X(t k )
dt
dx
Fi (t s)ds Q Q.
dt
&
Hence, for the derivative D_t+ V(t, x)|_(5.6.2), we have the estimates:
Dt+ V
r
r
&
T
T
&
(t, x) (5.6.2) = x
i (x)Fi P (t, x)x x
i (x)Fi P (t, x)x
x Qx x
T
i=1
i=1
i (x)Fi (tk )
dx
Dx+ i (x) Fi X(t
i=1
t
+ xT
i=1
r
r
i=1
i (x)Fi (ts)
r
i=1
dt
k ) x
dx
Dx+ i (x) Fi (t s)ds Q x
dt
min (Q)x2
r
i=1
+ 2 e
i (x)Fi 2
r
i=1
1 dx 1
1 1
Dx+ i (x) Fi X 1 1 x2
dt
r
(x)Fi 2
2 i=1 i
2 e
r
i=1
1 dx 1
1 1
Dx+ i (x) Fi Q 1 1 x2 ,
dt
where λmin(·) > 0 is the minimal eigenvalue of the corresponding matrix. Denote a = max_{i=1,r} ‖Ai‖. Then, since ‖Fi X‖ ≤ ‖Ai^T X‖ + ‖X Ai‖ ≤ 2a ‖X‖,
2a2
+ 2a 2 e
i=1
+ 2a2 22 e2a2
i=1
min (Q) + 2a2 r2 e2a2 X + 2 Q x x2 .
&
Therefore D_t+ V(t, x)|_(5.6.2) < 0 for all x from the ball ‖x‖ < R, where
R = ( λmin(Q) / ( 2a^2 r θ2 Λ e^{2aθ2} ( ‖X‖ + θ2 ‖Q‖ ) ) )^{1/ν}.
xT e
i=1
i (x(k ))Fi (k k1 )
i=1
i (x(k ))Fi (k s)
ds Q x
k1
r
r
j=1 i=1
2
+x
e
0
r
i=1
i (x)Fi y
dy Qx,
x e
T
r
i=1
i (x(k ))Fi (k k1 )
Xx
where y = k s.
Next we shall prove the following inequality
r
i=1
i (x)Fi (k k1 )
I
p1 (1)k+1
i=1 i (x)Fi
(k k1 )k
k!
k=1
(5.6.5)
X.
i (x)Fi (k k1 )h
i=1
X X
(h) = tr e
p1 (1)k+1
i=1 i (x)Fi
(k k1 )k hk
k!
k=1
X
(p1)
(p)
(0)hp1 ()hp
+
,
(p 1)!
p!
(0, h).
(p1)
(0) = =
(0) = 0, we get
Let h = 1, then since (0) =
(p)
()
, where
(1) =
p!
p
r
(p)
i (x)Fi (k k1 )
() = tr (1)p
i=1
r
i=1
i (x)Fi (k k1 )
X
r
i=1
i (x)Fi (k k1 )
give
Inequality (5.6.4) and positivity of operator e
(p)
r
i=1
i (x)Fi y
dy Qx.
fx (2 ) = fx ()2 = xT 2 e
r
i=1
i (x)Fi
Qx,
i (x)Fi 2
(5.6.6)
(5.6.2)
r
i1 =1
ip1 =1
+ 2 e2a2 Qx2
r
r
i1 =1
+ 2 e2a2 Q x2 + 2 e2a2 Q x2 ,
where Qi1 i2 ...ip1 are positive denite matrices,
= min min (Qi1 i2 ...ip1 ),
i1 , . . . , ip1 = 1, r.
&
2a2
It is clear that V &(5.6.2) 0 if Q
e
(we can choose, for
2
e2a2 I).
example, Q =
2 n2
K
Next we shall show that P(t, x) >_K 0 for all t ∈ R, i.e., that V(t, x) is a positive definite function. Since V(t, x) is a decreasing function, we have, for ‖x‖ < R and t ∈ [τk, τ_{k+1}), k ∈ N,
x^T P(t, x) x ≥ x^T(τ_{k+1}) P(τ_{k+1}, x(τ_{k+1})) x(τ_{k+1}).
As a result, we have V(t, x) > 0, D_t+ V(t, x)|_(5.6.2) < 0 and ΔV|_(5.6.2) ≤ 0 for all ‖x‖ < R.
Hence, all conditions of Theorem 4.9.3 hold. Therefore the zero solution of the impulsive Takagi-Sugeno fuzzy system (5.6.2) is asymptotically stable. This completes the proof of Theorem 5.6.1.
Let p be fixed; then we shall call the LMIs (5.6.3)-(5.6.4) the p-order stability conditions of system (5.6.2).
Next we formulate the second-order stability conditions of system (5.6.2).
Corollary 5.6.1 Under Assumption 5.6.1 the equilibrium state x = 0 of fuzzy system (5.6.2) is asymptotically stable if for all θ ∈ [θ1, θ2] there exists a common symmetric positive definite matrix X such that
(1/2)( Bj^T X Bi + Bi^T X Bj ) − X + θ( Aj^T X + X Aj ) < 0,   i, j = 1, r,    (5.6.7)
Ai^T Aj^T X + X Aj Ai + Aj^T X Ai + Ai^T X Aj ≥ 0,   i, j = 1, r.
(1/2)( Bji + Bij ) [ I + Σ_{k=1}^{p−1} (−1)^{k+1} (Fi)^k θ^k / k! ] X = G_{i1 i2 ... i_{p−1}}.    (5.6.9)
(2a)p
<
,
p!
X
i = 1, r.
Next, we state the following assumption.
Assumption 5.6.2 There exist R0 > 0, Λ1 > 0, Λ2 > 0 and ν > 0 such that the functions ξi(x), i = 1, r, satisfy the inequalities
|Dx+ ξi(x)| ≤ Λ1 ‖x‖^{1+ν}   for ‖x‖ ≤ R0,
|Dx+ ξi(x)| ≤ Λ2 ‖x‖^{1−ν}   for ‖x‖ ≥ R0.
Taking into account Assumption 5.6.2 we can establish the following.
Theorem 5.6.3 Let the constants Λ1, Λ2, R0 in Assumption 5.6.2 be such that
Λ1 Λ2 < λ^2min(Q) / ( 4a^4 r^2 θ2^2 e^{4aθ2} ( ‖X‖ + θ2 ‖Q‖ )^2 ),
( λmin(Q) / ( 2a^2 r θ2 Λ2 e^{2aθ2} ( ‖X‖ + θ2 ‖Q‖ ) ) )^{1/ν} < R0,
R0 < ( λmin(Q) / ( 2a^2 r θ2 Λ1 e^{2aθ2} ( ‖X‖ + θ2 ‖Q‖ ) ) )^{1/ν},
and let X be a common symmetric positive definite matrix such that conditions (5.6.3)-(5.6.4) of Theorem 5.6.1 hold. Then the zero solution of the impulsive fuzzy system (5.6.2) is globally asymptotically stable.
Proof Choose as a candidate the Lyapunov function from the class V0, V(t, x) = x^T P(t, x) x, where
P(t, x) = e^{Σ_{i=1}^{r} ξi(x) Fi (t − τk)} X + ∫_{τk}^{t} e^{Σ_{i=1}^{r} ξi(x) Fi (t − s)} ds Q   for t ∈ (τk, τ_{k+1}],   and   P(t, x) = X   for t = τ_{k+1}+,
Clearly,
D_t+ V(t, x)|_(5.6.2) ≤ [ −λmin(Q) + 2a^2 r θ2 e^{2aθ2} Λ1 ( ‖X‖ + θ2 ‖Q‖ ) ‖x‖^ν ] ‖x‖^2   for ‖x‖ ≤ R0,
D_t+ V(t, x)|_(5.6.2) ≤ [ −λmin(Q) + 2a^2 r θ2 e^{2aθ2} Λ2 ( ‖X‖ + θ2 ‖Q‖ ) ‖x‖^{−ν} ] ‖x‖^2   for ‖x‖ ≥ R0.
Clearly D_t+ V(t, x)|_(5.6.2) < 0 by the conditions of Theorem 5.6.3. Thus we have shown that D_t+ V(t, x)|_(5.6.2) < 0 for all x ∈ R^n.
5.6.2
It is well known that the control problem is an important task in the mathematical theory of artificial ecosystems. Impulsive control of such systems is preferable because of the seasonal functioning of systems of this type. Some problems of impulsive control for a homotypical model were considered in the paper by Liu [1]. In practice, however, it is suitable to consider models with fuzzy impulsive control, because it is almost impossible to measure accurately the biomass of one or another biological species, although it is possible to estimate it roughly.
Consider a Lotka-Volterra type prey-predator model (with interspecific competition among the preys) whose evolution is described by the following equations:
dN1/dt = ε N1 − α N1 N2 − γ N1^2,
dN2/dt = −m N2 + s α N1 N2,    (5.6.10)
where N1(t) is the biomass of the preys, N2(t) is the biomass of the predators, ε is the growth rate of the preys, m is the death rate of the predators, γ is the rate of the interspecific competition among the preys, α is the per-head attack rate of the predators, and s is the efficiency of converting preys to predators.
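A short simulation makes the structure of (5.6.10) concrete. The sketch below is an illustration only: the Greek parameter symbols follow the notation used above, and the numerical values are an illustrative assignment of the figures quoted in the example at the end of this section.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps, alpha, gamma, m, s = 4.0, 0.5, 0.3, 1.2, 0.4   # illustrative assignment of the values

def prey_predator(t, N):
    N1, N2 = N
    return [eps * N1 - alpha * N1 * N2 - gamma * N1 ** 2,
            -m * N2 + s * alpha * N1 * N2]

N1_star = m / (s * alpha)                                   # positive equilibrium
N2_star = (eps * s * alpha - gamma * m) / (s * alpha ** 2)

sol = solve_ivp(prey_predator, (0.0, 50.0), [N1_star + 1.0, N2_star - 0.5], max_step=0.01)
print(N1_star, N2_star, sol.y[:, -1])    # the trajectory settles near the positive equilibrium
```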
Suppose that the ecosystem is controlled by regulating the number of species at certain fixed moments of time (impulsive control) θ, 2θ, . . . , kθ, . . ., the regulation being reduced either to elimination or to replenishment of representatives of the species. Taking these assumptions into account, we have to add the regulator equations to the evolution system:
ΔN1 = u1(N1, N2),   ΔN2 = u2(N1, N2),   t = kθ,   k ∈ N,
so that the equations of the controlled ecosystem become
dN1/dt = ε N1 − α N1 N2 − γ N1^2,
dN2/dt = −m N2 + s α N1 N2,   t ≠ kθ,
ΔN1 = u1(N1, N2),   ΔN2 = u2(N1, N2),   t = kθ,   k ∈ N.    (5.6.11)
Besides the trivial equilibrium state, system (5.6.10) also has the positive asymptotically stable equilibrium state
N1* = m/(sα),   N2* = (εsα − γm)/(sα^2).
ω(x, y) = 1/( 1 + 1/(x − y)^2 )   if x > y,   and   ω(x, y) = 0   if x ≤ y.
Next, we define the variables of the disturbed motion x1(t) = N1(t) − N1*, x2(t) = N2(t) − N2*. Then the equations of system (5.6.11) become (after linearization):
dx1/dt = −( γm/(sα) ) x1 − (m/s) x2,
dx2/dt = ( (εsα − γm)/α ) x1,   t ≠ kθ,    (5.6.12)
Δx1 = u1,   Δx2 = u2,   t = kθ.
The Takagi-Sugeno fuzzy model (5.6.1) of system (5.6.12) is specified by the following four rules:
R1: if N1 ≥ N1* and N2 ≥ N2*, then
dx(t)/dt = A x(t), t ≠ kθ;   x(t+) = B1 x, t = kθ;   x(t0+) = x0.
R2: if N1 ≥ N1* and N2 ≤ N2*, then
dx(t)/dt = A x(t), t ≠ kθ;   x(t+) = B2 x, t = kθ;   x(t0+) = x0.
R3: if N1 ≤ N1* and N2 ≥ N2*, then
dx(t)/dt = A x(t), t ≠ kθ;   x(t+) = B3 x, t = kθ;   x(t0+) = x0.
R4: if N1 ≤ N1* and N2 ≤ N2*, then
dx(t)/dt = A x(t), t ≠ kθ;   x(t+) = B4 x, t = kθ;   x(t0+) = x0.
It is obvious that Assumption 5.6.1 holds for the membership function ω(x, y). Using Corollary 5.6.1, the stability analysis of the nontrivial equilibrium position of the ecosystem is reduced to checking the existence of a symmetric positive definite matrix X such that the following LMIs hold:
(1/2)( Bi^T X Bj + Bj^T X Bi ) − X + θ( A^T X + X A ) < 0,   i, j = 1, 4,
(A^T)^2 X + 2 A^T X A + X A^2 ≥ 0.    (5.6.13)
Here
A = ( ( −γm/(sα),  −m/s ),  ( (εsα − γm)/α,  0 ) ),
and B1, B2, B3, B4 are diagonal matrices whose entries have the form 1 ± (impulsive control parameter).
Next we consider the stability analysis of the obtained Takagi-Sugeno fuzzy model of the ecosystem evolution with the following parameters: ε = 4, γ = 0.3, α = 0.5, m = 1.2, s = 0.4, θ = 0.5, and with the parameters of the impulsive control equal to 0.9, 0.5, 0.99, and 0.6.
It is easy to check that the matrix
X = ( (1.7427, 1.8779), (1.8779, 8.2018) )
satisfies inequalities (5.6.13). Therefore, by Corollary 5.6.1, the equilibrium state of the ecological system is asymptotically stable (see Figure 5.6).
[Figure 5.6: time histories of x1 and x2.]
[Figure: time histories of x1 and x2.]
Based on the well-known Lyapunov direct method, sufficient conditions have been derived that guarantee the asymptotic stability and the global asymptotic stability of the equilibrium point of impulsive T-S fuzzy systems. It is shown that these sufficient conditions are easily expressed as a set of LMIs. It is also concluded that the obtained stability conditions allow one to investigate impulsive T-S fuzzy systems in which both the continuous and the discrete components may be unstable.
5.7
Notes
Most of the results of Chapter 5 are taken from Martynyuk [15, 18]. The model of a robot interacting with a dynamic environment is due to De Luca and Manes [1]. It should be noted that the importance of studying the problem of stability of motion of a robot interacting with a dynamic environment has been discussed in the contemporary literature.
The contents of Sections 5.1-5.3 are adapted from the papers by Martynyuk and Chernienko [1] and Martynyuk [15]. See also Louartassi et al. [1] and N'Doye et al. [1].
Section 5.4 is based on the results of Lila and Martynyuk [1] and Martynyuk [17].
In Section 5.5 the model (5.5.2) is taken from the monograph by Forrester [1]. The model (5.5.6) is new. Theorem 5.5.1 is taken from Martynyuk [14]. Some other models of world dynamics can be found in Egorov et al. [1], Levashov et al. [1], etc.
Section 5.6 is adapted from Denysenko et al. [1].
For recent developments and applications of the main results of the book see, for example: Martynyuk [1]; Lakshmikantham, Leela and Martynyuk [3]; Martynyuk and Sun Zhen qi [1]; Lakshmikantham and Mohapatra [1]; Lakshmikantham, Bhaskar, and Devi [1]; Lakshmikantham, Leela, and Devi [1]; Lakshmikantham, Leela, Drici, and McRae [1]; Lakshmikantham, Leela, and Vatsala [1]; Martynyuk and Yu. A. Martynyuk-Chernienko [1]; Martynyuk, Chernetskaya, and V. Martynyuk [1].
REFERENCES
Abramovich, J.
[1] On Gronwall and Wendroff type inequalities. Proc. Amer. Math. Soc. 87 (1983) 481-486.
Achmedov, K. T., Yakubov, M. A. and Veisov, I. A.
[1] Some integral inequalities. Izvestiya Acad. Nauk Uz. SSR (1972) 16-22.
Aleksandrov, A. Yu. and Platonov, A. V.
[1] Construction of Lyapunov's functions for a class of nonlinear systems. Nonlinear Dynamics and Systems Theory 6(1) (2006) 17-29.
[2] Comparison Method and Motion Stability of Nonlinear Systems. Izdat. Sankt-Petersb. Univer., SPb., 2012.
Alekseev, V. M.
[1] An estimate for perturbations of the solution of ordinary differential equations. Vestnik Mosk. Univ. Ser. 1, Math., Mekh. (2) (1961) 28-36.
Amann, H.
[1] Invariant sets and existence theorems for semilinear parabolic and elliptic systems. J. Math. Anal. Appl. 65 (1978) 432-467.
Amundson, N. R.
[1] Nonlinear problems in chemical reactor theory. SIAM-AMS Proc. Amer. Math. Soc. Vol. III (1974) 59-84.
Aris, R.
[1] The Mathematical Theory of Diffusion and Reaction in Permeable Catalysts. Clarendon Press, Oxford, 1975.
Corduneanu, A.
[1] A note on the Gronwall inequality in two independent variables. J. Integral Equations 4 (1982) 261-276.
Corduneanu, C.
[1] The contribution of R. Conti to the comparison method in differential equations. Libertas Math. XXIX (2009) 113-115.
Corduneanu, C. and Lakshmikantham, V.
[1] Equations with unbounded delay: A survey. Nonlinear Analysis 4 (1980) 831-877.
De Luca, F. and Manes, C.
[1] Hybrid force/position control for robots in contact with dynamic environments. Proc. Robot Control, SYROCO'91, 1988, 377-382.
Demidovich, B. P.
[1] Lectures on Mathematical Theory of Stability. Nauka, Moscow, 1967.
Denysenko, V. S., Martynyuk, A. A. and Slynko, V. I.
[1] Stability analysis of impulsive Takagi-Sugeno systems. Int. J. Innovative Computing, Information and Control 5 (10(A)) (2009) 3141-3155.
N'Doye, I., Zasadzinski, M., Darouach, M., Radhy, N-E. and Bouaziz, A.
[1] Exponential stabilization of a class of nonlinear systems: A generalized Gronwall-Bellman lemma approach. Nonlinear Analysis 74 (18) (2011) 7333-7341.
Egorov, V. A., Kallistov, Yu. N., Mitrofanov, V. B. and Piontkovski, A. A.
[1] Mathematical Models of Sustainable Development. Gidrometeoizdat, Leningrad, 1980.
Forrester, J. W.
[1] World Dynamics. Nauka, Moscow, 1978.
Gamidov, Sh. G.
[1] Some integral inequalities for boundary value problems of differential equations. Diff. Eqns. 5 (1969) 463-472.
Gronwall, T. H.
[1] Note on the derivatives with respect to a parameter of the solutions of a system of differential equations. Ann. of Math. 20 (1919) 292-296.
Gutowski, R. and Radziszewski, B.
[1] Asymptotic behavior and properties of solutions of a system of nonlinear second order ordinary differential equations describing motion of mechanical systems. Arch. Mech. Stosow. 6 (22) (1970) 675-694.
Hahn, W.
[1] Stability of Motion. Springer-Verlag, Berlin, 1967.
Hale, J. K.
[1] Large diffusivity and asymptotic behaviour in parabolic systems. J. Math. Anal. Appl. 118 (1986) 455-466.
Hale, J. K. and Kato, J.
[1] Phase space for retarded equations with infinite delay. Funkc. Ekvac. 21 (1978) 11-41.
Hatvani, L.
[1] On the asymptotic stability by nondecrescent Lyapunov function. Nonlinear Analysis 8 (1984) 67-77.
[2] On partial asymptotic stability and instability. Acta Sci. Math. 49 (1985) 157-167.
Howes, F. A. and Whitaker, S.
[1] Asymptotic stability in the presence of convection. Nonlinear Analysis 12 (1988) 1451-1459.
Hu, S., Lakshmikantham, V. and Rama Mohan Rao, M.
[1] Nonlinear variation of parameters formula for integro-differential equations of Volterra type. J. Math. Anal. Appl. 129 (1988) 223-230.
Hu, S., Zhuang, W. and Khavanin, M.
[1] On the existence and uniqueness of nonlinear integro-differential equations. J. Math. Phys. Sci. 21 (1987) 93-103.
Kato, J.
[1] Stability problem in functional differential equations with infinite delay. Funkc. Ekvac. 21 (1978) 63-80.
[2] Stability in functional differential equations. In: Lecture Notes in Math. 799, Springer-Verlag, New York, 1980, 252-262.
Khavanin, M. and Lakshmikantham, V.
[1] The method of mixed monotony and first order differential systems. Nonlinear Analysis 10 (1986) 873-877.
Khoroshun, A. S. and Martynyuk, A. A.
[1] Novel approach to absolute parametric stability of the uncertain singularly perturbed systems. Communications in Appl. Anal. 17, no. 3 & 4 (2013) 439-450.
Krasovski, N. N.
[1] Problems of the Theory of Stability of Motion. Stanford University Press, Stanford, Calif., 1963.
Ladde, G. S., Lakshmikantham, V. and Leela, S.
[1] A new technique in perturbation theory. Rocky Mountain J. 6 (1977) 133-140.
Ladde, G. S., Lakshmikantham, V. and Vatsala, A. S.
[1] Monotone Iterative Techniques for Nonlinear Differential Equations. Pitman, Boston, 1985.
Lakshmikantham, V.
[1] Several Lyapunov functions. Proc. Int. Sym. Nonlinear Oscillations. Kiev, Ukr. SSR, 1969.
[2] A variation of constants formula and Bellman-Gronwall-Reid inequalities. J. Math. Anal. Appl. 41 (1973) 199-204.
[3] Comparison results for reaction-diffusion equations in Banach spaces. Proc. of SAFA Conference. Bari, Italy, 1979, 121-156.
[4] Some problems in integro-differential equations of Volterra type. J. Integral Eqns. 10 (1985) 137-146.
Lakshmikantham, V., Bhaskar, T. G. and Devi, J. V.
[1] Theory of Set Differential Equations in Metric Spaces. Cambridge Scientific Publishers, Cambridge, 2006.
[4] Stability of Motion: The Role of Multicomponent Liapunov Functions. Cambridge Scientific Publishers, Cambridge, 2007.
[5] Hierarchical matrix Liapunov function. Differential and Integral Eqns. 2(4) (1989) 411-417.
[6] Analysis of stability problems via matrix Liapunov functions. J. Appl. Math. Stoch. Anal. 3(4) (1990) 209-226.
[7] A theorem on polystability. Dokl. Akad. Nauk SSSR 318(4) (1991) 808-811.
[8] On the matrix comparison method in the theory of motion stability. Prikl. Mekh. 29(10) (1993) 116-122.
[9]
Salvadori, L.
[1] Sulla stabilità del movimento. Matematiche 24 (1969) 218-239.
[2] Famiglie ad un parametro di funzioni di Lyapunov nello studio della stabilità. Symp. Math. 6 (1971) 309-330.
[3] Sul problema della stabilità asintotica. Rendiconti dell'Accad. Naz. Lincei 53 (1972) 35-38.
[4] Sull'estensione ai sistemi dissipativi del criterio di stabilità del Routh. Ricerche Mat. 151 (1966) 162-167.
Samoilenko, A. M. and Ronto, V. I.
[1] Numerical-Analytical Methods of Investigation of Periodic Solutions. Vyshcha Shkola, Kiev, 1976.
Shendge, G. R.
[1] A new approach to the stability theory of functional differential systems. J. Math. Anal. Appl. 95 (1983) 319-334.
Šiljak, D. D.
[1] Large Scale Dynamic Systems. North-Holland, New York, 1978.
[2] Competitive economic systems: Stability, decomposition and aggregation. IEEE Trans. AC-21 (1976) 149-160.
Spanier, E. H.
[1] Algebraic Topology. Springer, New York, 1966.
Strauss, A. and Yorke, J.
[1] Perturbation theorems for ordinary differential equations. J. Diff. Eqns. 3 (1967) 15-30.
Turinici, M.
[1] Abstract Gronwall-Bellman inequalities in ordered metrizable uniform spaces. J. Integral Eqns. 6 (1984) 105-117.
[2] Abstract comparison principles and multivariate Gronwall-Bellman inequalities. J. Math. Anal. Appl. 117 (1986) 100-127.
Viswanatham, B.
[1] A generalization of Bellman's lemma. Proc. Amer. Math. Soc. 14 (1963) 15-18.
Young, E. C.
[1] Gronwall's inequality in n independent variables. Proc. Amer. Math. Soc. 41 (1973) 241-244.
Yoshizawa, T.
[1] Stability Theory by Liapunov's Second Method. The Mathematical Society of Japan, Tokyo, 1966.
Zhang, X., Li, D. and Dan, Y.
[1] Impulsive control of Takagi-Sugeno fuzzy systems. Fourth Int. Conf. on Fuzzy Systems and Knowledge Discovery. Vol. 1, 2007, 321-325.
Zhuang, Wan
[1] Existence and uniqueness of solutions of nonlinear integro-differential equations of Volterra type in a Banach space. Appl. Anal. 22 (1986) 157-166.
Zubov, V. I.
[1] The Methods of Liapunov and Their Applications. Izdat. Leningr. Univer., Leningrad, 1957.
INDEX
h0 is
  asymptotically finer than h, 143
  finer than h, 142
  uniformly finer than h, 142
A technique in perturbation theory, 209
Affine system, stabilization, 270
Alekseev's formula, 70, 113
Antidifference operator, 47
Asymptotic equilibrium, 88
M0 -quasi uniform asymptotically stable, 172
M0 -uniformly asymptotically
stable, 172
M0 -uniformly stable, 171
Several Lyapunov functions, 138
Solution
asymptotically stable in a nonnegative cone, 181
extremal, 27
left maximal, 218
maximal, 27
minimal, 27
right maximal, 218
stable
equi-asymptotically, 89
exponentially asymptotically,
92
quasi-equi asymptotically, 89
quasi-uniformly asymptotically, 89, 205
uniformly asymptotically, 89
uniformly, 89, 205
trivial, 89, 202, 205
h0 -conditionally equistable,
202
equi-stable, 89, 143
Space B, 216
admissible, 216
Stability
in variation, 116
of asymptotically invariant set,
142
of conditionally invariant set,
142
properties in variation, 119
of the invariant set, 142
of the prescribed motion, 142
of the trivial solution, 142
partial of the trivial solution,
142
Sustainable development with re-