Synchronization Control of Markov Jump Neural Networks
https://doi.org/10.1007/s11071-019-05293-y
ORIGINAL PAPER
Received: 11 June 2019 / Accepted: 7 October 2019 / Published online: 4 November 2019
© Springer Nature B.V. 2019
1878 N. Xu, L. Sun
In recent years, the stability of neural networks (NN) with Markov jump has become a research hot spot. This model allows an NN to have multiple modes, and the modes can be switched under the drive of a Markov chain. Therefore, the study of the stability of the Markov jump model has more potential application value [13,14,18,23,25,26,29,30]. In [25,29], by constructing suitable Lyapunov–Krasovskii functionals (LKF) and using linear matrix inequalities (LMI), the mean-square global exponential stability of a class of reaction-diffusion Hopfield MJNN and the global robust exponential stability of a class of time-varying delay MJNN are studied, respectively. However, the traditional probabilistic transfer matrix of Markov jump parameters often neglects the small time-varying errors in probability transition rates, which may make the switching process unstable and cause the system to collapse in severe cases. Therefore, the second problem that this paper focuses on is the time-varying probabilistic transfer parameters in MJNN.

Synchronization, as a nonlinear phenomenon, appears in many practical problems in physics, ecology, physiology and other fields, and the application of synchronization theory has therefore been widely studied across different scientific disciplines. In particular, since the 1990s, Pecora and Carroll have drawn attention to the importance of control and synchronization of chaotic systems. They put forward the drive–response concept to achieve synchronization of chaotic systems: the response system is controlled through an external input driven by the drive system. The theory of chaos synchronization and chaos control has since been widely studied, and many control schemes have been proposed to achieve synchronization, such as the drive–response synchronization method [19]; the active–passive synchronization method [20]; synchronization based on mutual coupling [36]; the adaptive synchronization method [9]; the feedback control synchronization method [15]; projection synchronization control [11]; and impulse control [7]. Therefore, the third problem that this paper focuses on is how to construct a suitable sample point controller to synchronize the MJNN drive system (MJNN-DS) and the MJNN response system (MJNN-RS).

On the other hand, the synchronization analysis of MJNN usually constructs a suitable LKF and then bounds its derivative by integral inequalities. In recent years, scholars have proposed many useful inequality methods, such as the Jensen inequality [37], the Wirtinger integral inequality [22], the free matrix inequality [32], the reciprocally convex inequality [34] and the Bessel–Legendre inequalities [21]. These methods have effectively improved the convergence accuracy, but there is still room for improvement. Wirtinger double integral inequalities and affine Bessel–Legendre inequalities improve the Wirtinger integral inequality and the Bessel–Legendre inequality, respectively. Therefore, the fourth problem that this paper focuses on is how to use Wirtinger double integral inequalities and affine Bessel–Legendre inequalities to improve the convergence accuracy.

In addition, when discussing the interval range of time-varying delays, the defaults are h1 ≤ h ≤ h2 and d1 ≤ ḣ ≤ d2, which are conservative and can be optimized in two-dimensional space. Therefore, the fifth problem that this paper focuses on is the optimization of the time-varying delay intervals on a two-dimensional level.

In summary, the contributions of this paper and the difficulties to be solved are as follows. Firstly, how to unify the mixed time-varying delay and the time-varying probability transfer under one MJNN. Secondly, how to apply Wirtinger double integral inequalities and affine Bessel–Legendre inequalities to the Lyapunov functional processing. Thirdly, how to synchronize MJNN-DS and MJNN-RS through the control of a sample point controller. Fourthly, how to optimize the two-dimensional geometric area of the time delay. These methods have the following advantages: the affine Bessel–Legendre inequality improves the traditional Bessel–Legendre inequality, and with the increase in N the optimization effect becomes better; compared with the traditional state feedback controller, the sample point controller can better transmit the effective information of the system and achieve a better control effect; and, whereas the traditional two-dimensional geometric area of the time delay is a rectangle, we reduce the conservativeness of the system by reducing this area to a parallelogram.

Next, this paper is organized in the following four parts. The first part introduces MJNN-DS and MJNN-RS, the sample point controller, and relevant useful lemmas. In the second part, the synchronization analysis of the MJNN mixed-time-varying-delay error system is carried out, and the convergence accuracy of the LKF is improved by using Wirtinger double integral inequalities and affine Bessel–Legendre inequalities. In the third part, the range of the time-varying delay in two-dimensional space is discussed, and the conservativeness of the system is reduced by reducing the two-dimensional geometric area. In the fourth part, a numerical example is constructed; the parameters of the sample point controller, the chaotic curve of the MJNN system, the Markov jump response curve, the synchronization analysis response curve and the error analysis response curve are obtained through actual simulation.

In this paper, "0" represents the zero matrix of suitable dimension. Rⁿ and R^{n×m} represent the n-dimensional and n×m-dimensional Euclidean spaces, respectively. "T" represents matrix transposition. {Ω, F, P} represents the probability space.

2 Preliminaries

Consider the following MJNN-DS with mixed time-varying delay:

ẋ(t) = −C(r(t))x(t − σ) + A(r(t)) f(x(t)) + B(r(t)) f(x(t − d1(t))) + D(r(t)) ∫_{t−d2(t)}^{t} f(x(s))ds + J    (1)

where x(t) = (x1(t), x2(t), ..., xn(t))ᵀ ∈ Rⁿ is the neuron state vector. A(·), B(·), C(·) and D(·) are matrices of suitable dimensions with uncertainties, which are expressed as follows:

A(·) = Ā(·) + ΔA,  B(·) = B̄(·) + ΔB,  C(·) = C̄(·) + ΔC,  D(·) = D̄(·) + ΔD

where ΔA, ΔB, ΔC and ΔD are uncertain parameter matrices.

Remark 1 The first item on the right side of the equation is the stable negative feedback of the system, which is often referred to as the "leakage" item. Since the self-attenuation process of neurons is not instantaneous, when the neurons are cut off from the neural network and external inputs, it takes time to reset to the isolated static state. In order to describe this phenomenon, it is necessary to introduce a "leakage" delay. In this paper, σ is called the leakage delay.

Consider the following MJNN-RS with mixed time-varying delay:

ẏ(t) = −C(r(t))y(t − σ) + A(r(t)) f(y(t)) + B(r(t)) f(y(t − d1(t))) + D(r(t)) ∫_{t−d2(t)}^{t} f(y(s))ds + u(t) + J    (2)

where y(t) = (y1(t), y2(t), ..., yn(t))ᵀ ∈ Rⁿ is the neuron state vector. The meanings of the other symbols are the same as in the MJNN driving system (1). u(t) represents the sample point controller, which is defined as follows:

u(t) = K(r(t_k)) e(t_k),  t_k ≤ t < t_{k+1}

where K(·) is the feedback gain matrix of the sample point controller, e(t_k) represents the discrete control function, and t_k is the sample point, satisfying

0 = t_0 < t_1 < ··· < t_k < ···,  lim_{k→+∞} t_k = +∞.
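The drive–response setup of (1)–(2) under a sample point controller can be exercised in simulation. The sketch below is illustrative only: the mode matrices, delays, two-mode Markov switching rate, sampling period and gain K are all values assumed here (not the paper's numerical example), with f = tanh and forward-Euler integration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, T = 2, 1e-3, 10.0
steps = int(T / dt)
sigma, d1, d2 = 0.1, 0.1, 0.1                 # leakage, discrete, distributed delays
ks, kd1, kd2 = int(sigma / dt), int(d1 / dt), int(d2 / dt)
h_samp = int(0.05 / dt)                       # controller sampling period, in steps

# two Markov modes with mode-dependent weights (illustrative values)
A = [0.2 * np.array([[1.0, -1.0], [0.5, 1.0]]),
     0.2 * np.array([[0.8, 0.3], [-0.4, 1.0]])]
B = [0.10 * np.eye(n), 0.15 * np.eye(n)]
D = [0.10 * np.eye(n), 0.05 * np.eye(n)]
C = np.eye(n)                                 # leakage matrix (mode-independent here)
J = np.array([0.1, -0.1])                     # external input
K = -3.0 * np.eye(n)                          # sample point gain (assumed stabilizing)
f = np.tanh                                   # activation satisfying Assumption (A1)

x = np.zeros((steps + 1, n)); y = np.zeros((steps + 1, n))
x[0] = [1.0, -1.0]; y[0] = [1.5, -1.5]
r, e_k = 0, y[0] - x[0]                       # Markov mode and sampled error e(t_k)
for k in range(steps):
    if rng.random() < 1.0 * dt:               # mode switch with rate 1
        r = 1 - r
    if k % h_samp == 0:                       # controller sees the error only at t_k
        e_k = y[k] - x[k]
    xs, ys = x[max(k - ks, 0)], y[max(k - ks, 0)]      # leakage-delayed states
    xd, yd = x[max(k - kd1, 0)], y[max(k - kd1, 0)]    # discrete-delayed states
    lo = max(k - kd2, 0)
    ix = f(x[lo:k + 1]).sum(axis=0) * dt               # distributed-delay integrals
    iy = f(y[lo:k + 1]).sum(axis=0) * dt
    x[k + 1] = x[k] + dt * (-C @ xs + A[r] @ f(x[k]) + B[r] @ f(xd) + D[r] @ ix + J)
    y[k + 1] = y[k] + dt * (-C @ ys + A[r] @ f(y[k]) + B[r] @ f(yd) + D[r] @ iy
                            + K @ e_k + J)

err0 = np.linalg.norm(y[0] - x[0])
errT = np.linalg.norm(y[-1] - x[-1])
```

With these values the sampled error feedback dominates the delayed coupling terms, so the synchronization error decays by several orders of magnitude; a weaker gain or a much longer sampling period would degrade or break synchronization.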
The error system between MJNN-RS (2) and MJNN-DS (1), with e(t) = y(t) − x(t), is

ė(t) = −C(r(t))e(t − σ) + A(r(t))g(e(t)) + B(r(t))g(e(t − d1(t))) + D(r(t)) ∫_{t−d2(t)}^{t} g(e(s))ds + u(t)    (3)

Some lemmas are given below, which play a key role in the calculations of this paper.

Lemma 1 (Affine Bessel–Legendre inequality) [8]

Remark 3 Unlike the traditional Bessel–Legendre inequalities [21], the right-hand side of the inequality of Lemma 1 is affine in the length of the integration interval, so it can easily be dealt with by convexity. In addition, under special conditions Lemma 1 reduces to existing inequalities in the literature, such as the affine Jensen inequality [2] and the affine Wirtinger integral inequality [6], which shows that the inequality of Lemma 1 is more general.

Remark 4 Lemma 1 has an additional (N + 1)(N + 2)n² decision variables.

Lemma 2 (Wirtinger double integral inequality) [17] For a positive definite matrix H,

(n − m)² ∫_{m}^{n} ∫_{θ}^{n} x^T(u) H x(u) du dθ ≥ 2 Θd1^T H Θd1 + 4 Θd2^T H Θd2    (5)

where

Θd1 = ∫_{m}^{n} ∫_{θ}^{n} x(u) du dθ,
Θd2 = −∫_{m}^{n} ∫_{θ}^{n} x(u) du dθ + (3/(n − m)) ∫_{m}^{n} ∫_{θ}^{n} ∫_{u}^{n} x(v) dv du dθ.

Remark 6 It is easy to see that Σ_{l=1}^{M} r_l(t) = 1 is equivalent to Σ_{l=1}^{M−1} ṙ_l(t) + ṙ_M(t) = 0. So ṙ_M(t) is expressed by the other rates, with |ṙ_M(t)| ≤ Σ_{l=1}^{M−1} v_l.

Lemma 4 [4] If the vector function x satisfies x : [0, ℓ] → Rⁿ, then, given any positive definite matrix U and positive scalar ℓ, the following relation holds:

ℓ^{−1} (∫_{0}^{ℓ} x(s)ds)^T U (∫_{0}^{ℓ} x(s)ds) ≤ ∫_{0}^{ℓ} x^T(s) U x(s) ds
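Lemma 4 (a Jensen-type integral inequality) can be checked numerically by discretizing both sides; the signal samples, the matrix U and the interval length below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, ell = 3, 1000, 2.0                 # dimension, grid points, interval length
ds = ell / N
M = rng.standard_normal((n, n))
U = M @ M.T + n * np.eye(n)              # a positive definite matrix
xs = rng.standard_normal((N, n))         # samples of x(s) on a grid over [0, ell]

int_x = xs.sum(axis=0) * ds              # Riemann sum for the vector integral of x
lhs = (int_x @ U @ int_x) / ell          # l^{-1} (int x)^T U (int x)
rhs = ds * np.einsum('ij,jk,ik->', xs, U, xs)   # Riemann sum for int x^T U x ds
```

For the discretized quantities the inequality holds exactly (it is the Cauchy–Schwarz/Jensen inequality for sums), with equality when x is constant.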
Assumption (A1) The neuron excitation function f(·) satisfies the following condition:

0 < f_i(u_i − v_i)/(u_i − v_i) ≤ l_i  (i = 1, 2, ..., n)

where u_i and v_i are arbitrary real numbers with u_i ≠ v_i, and the l_i are known constants.

3 Main results

Theorem 1 Given scalars d_i > 0, i = 1, 2, 3, with d_i(t) ∈ [0, d_i], i = 1, 2, 3, and ḋ1(t) ∈ [h1, h2], MJNN-DS (1) and MJNN-RS (2) achieve complete synchronization for any delay d(t) if there exist symmetric matrices P_P^{(l)} > 0 ∈ R^{7n}, Q_i > 0, i = 1, 2, 3, 4 ∈ Rⁿ, R_i > 0, i = 1, 2 ∈ Rⁿ, Z_i > 0, i = 1, 2 ∈ Rⁿ, S > 0 ∈ Rⁿ, and any matrices X_i, i = 1, 2, 3, 4 ∈ R^{4n×3n}, M1, M2 and χ_i of suitable dimensions, such that the following hold:

Υ_P^{(ls)}(d_i(t), ḋ1(t)) + Υ_P^{(sl)}(d_i(t), ḋ1(t)) < 0    (6)

Υ_P^{(ls)}(d_i(t), ḋ1(t)) =
[ Ψ_P(d_i(t), ḋ1(t))  Π1^T X1  Π2^T X2  Π3^T X3  Π4^T X4  Φ1  Φ2 ]
[ ∗  Δ22  0  0  0  0  0 ]
[ ∗  ∗  Δ33  0  0  0  0 ]
[ ∗  ∗  ∗  Δ44  0  0  0 ]
[ ∗  ∗  ∗  ∗  Δ55  0  0 ]
[ ∗  ∗  ∗  ∗  ∗  −εI  0 ]
[ ∗  ∗  ∗  ∗  ∗  ∗  −(1/ε)I ]    (7)

Ψ_P(d_i(t), ḋ1(t)) = Ω_P^{(ls)}(d_i(t), ḋ1(t)) + Σ_{i=1}^{4} e1^T Q_i e1 − (1 − ḋ1(t)) e2^T Q1 e2 − e20^T Q2 e20 − e3^T Q3 e3 − e11^T Q4 e11 + e21^T (d1 R1 + d3 R2) e21 − Π1^T(X1 M + M^T X1^T)Π1 − Π2^T(X2 M + M^T X2^T)Π2 − Π3^T(X3 M + M^T X3^T)Π3 − Π4^T(X4 M + M^T X4^T)Π4 + e21^T[(d1²/2)Z1 + (d3²/2)Z2]e21 − [d1 e1 − e16]^T Z1 [d1 e1 − e16] − 2[−(d1/2)e1 − e16 + (3/d1)e17]^T Z1 [−(d1/2)e1 − e16 + (3/d1)e17] − [d3 e1 − e18]^T Z2 [d3 e1 − e18] − 2[−(d3/2)e1 − e18 + (3/d3)e19]^T Z2 [−(d3/2)e1 − e18 + (3/d3)e19] + d2 e1^T S e1 − δ1[e23^T e23 − e1^T L^T L e1] − δ2[e24^T e24 − e2^T L^T L e2] − δ3 d2^{−1} e22^T e22 + Φ3    (8)

where

Φ1 = col{ M1G  0 ··· 0 (19n)  M2G  0  0  0 }    (10)

Φ2 = col{ 0 ··· 0 (19n)  −E3^T  0  E4^T  E1^T  E2^T }    (11)

M = [ In  −In   0n    0n
      In   In  −2In   0n
      In  −In   6In  −12In ]    (12)

Π1 = col{e1 e2 e6 e8}
Π2 = col{e2 e3 e7 e9}
Π3 = col{e1 e10 e12 e14}
Π4 = col{e10 e11 e13 e15}    (13)

Φ3 ∈ R^{24n×24n} is the block matrix whose only nonzero block rows are the first and the twenty-first:

1st block row:  [ 0 ··· 0 (9n)  χ_i  0 ··· 0 (9n)  −ε1M2C̄_P  −ε1M2  ε1M2D̄_P  ε1M2Ā_P  ε1M2B̄_P ]
21st block row: [ 0 ··· 0 (9n)  χ_i  0 ··· 0 (9n)  −M2C̄_P  −M2 − M2^T  M2D̄_P  M2Ā_P  M2B̄_P ]    (14)

Here e_i (i = 1, ..., 24) ∈ R^{n×24n} are the block entry matrices, i.e. e_i = [0 ··· 0 I_n 0 ··· 0] with the identity in the i-th block. The sample point controller parameters can then be obtained as K_i = M2^{−1} χ_i.
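Once the LMI conditions (6)–(8) have been solved, the gains follow from K_i = M2^{−1}χ_i. The sketch below uses placeholder values standing in for an LMI solver's output (no solver is invoked here), and recovers the mode-dependent gains by a linear solve rather than an explicit inverse.

```python
import numpy as np

rng = np.random.default_rng(2)
n, modes = 2, 2
# placeholder decision variables, as if returned by an LMI solver
M2 = np.array([[2.0, 0.3], [0.1, 1.5]])                    # slack matrix, invertible
chi = [rng.standard_normal((n, n)) for _ in range(modes)]  # chi_i, one per Markov mode

# recover the mode-dependent sample point gains K_i = M2^{-1} chi_i
K = [np.linalg.solve(M2, c) for c in chi]
residual = max(np.linalg.norm(M2 @ K[i] - chi[i]) for i in range(modes))
```

`np.linalg.solve` is preferred over forming `inv(M2)` explicitly, for accuracy and cost.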
Proof An improved LKF is defined as V(x(t), t, r(t)) = V1(t) + V2(t) + V3(t) + V4(t) + V5(t), where

V1(t) = η1^T(t) P(r(t)) η1(t)

V2(t) = ∫_{t−d1(t)}^{t} e^T(s)Q1e(s)ds + ∫_{t−σ}^{t} e^T(s)Q2e(s)ds + ∫_{t−d1}^{t} e^T(s)Q3e(s)ds + ∫_{t−d3}^{t} e^T(s)Q4e(s)ds

V3(t) = ∫_{−d1}^{0} ∫_{t+θ}^{t} ė^T(s)R1ė(s)dsdθ + ∫_{−d3}^{0} ∫_{t+θ}^{t} ė^T(s)R2ė(s)dsdθ

By differentiating V(x(t), t, r(t)) along the error system (3), and applying Lemma 1 and Lemma 2 to V3(t) and V4(t), respectively, we get the following results:

V̇1(t) = He{T0^T(ḋ1(t)) P_P T1(d1(t))} + T1^T(d1(t)) Σ_{j=1}^{N} μ_{ij}(r(t)) P_j(r(t)) T1(d1(t)) + T1^T(d1(t)) (dP_P(r(t))/dt) T1(d1(t))

V̇2(t) = Σ_{i=1}^{4} e1^T Q_i e1 − (1 − ḋ1(t)) e2^T Q1 e2 − e20^T Q2 e20 − e3^T Q3 e3 − e11^T Q4 e11

V̇3(t) = e21^T (d1R1 + d3R2) e21 − ∫_{t−d1}^{t} ė^T(s)R1ė(s)ds − ∫_{t−d3}^{t} ė^T(s)R2ė(s)ds
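The derivative V̇2(t) above is the Leibniz rule applied term by term: for a constant delay d, d/dt ∫_{t−d}^{t} e^T(s)Qe(s)ds = e^T(t)Qe(t) − e^T(t−d)Qe(t−d). A numerical check with an arbitrary smooth test signal (all values below are illustrative):

```python
import numpy as np

Q = np.array([[2.0, 0.5], [0.5, 1.0]])               # positive definite weight
e = lambda s: np.array([np.sin(s), np.cos(2 * s)])   # arbitrary smooth test signal
d, t, h = 0.4, 1.0, 1e-6

def V2_term(t, pts=4001):
    # integral of e^T(s) Q e(s) over [t-d, t], via the trapezoidal rule
    s = np.linspace(t - d, t, pts)
    v = np.array([e(si) @ Q @ e(si) for si in s])
    return (v[0] / 2 + v[1:-1].sum() + v[-1] / 2) * (s[1] - s[0])

num = (V2_term(t + h) - V2_term(t - h)) / (2 * h)    # central-difference derivative
ana = e(t) @ Q @ e(t) - e(t - d) @ Q @ e(t - d)      # Leibniz-rule value
```

The two values agree to numerical precision; the time-varying-delay terms in V̇2(t) additionally pick up the factor (1 − ḋ1(t)) from the moving lower limit.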
Applying Lemma 2 to V̇4(t) yields

V̇4(t) ≤ e21^T[(d1²/2)Z1 + (d3²/2)Z2]e21 − [d1 e1 − e16]^T Z1 [d1 e1 − e16] − 2[−(d1/2)e1 − e16 + (3/d1)e17]^T Z1 [−(d1/2)e1 − e16 + (3/d1)e17] − [d3 e1 − e18]^T Z2 [d3 e1 − e18] − 2[−(d3/2)e1 − e18 + (3/d3)e19]^T Z2 [−(d3/2)e1 − e18 + (3/d3)e19]

V̇5(t) = d2 e1^T S e1 − ∫_{t−d2(t)}^{t} e^T(s)Se(s)ds

where

T0(ḋ1(t)) = col{ e21, (1 − ḋ1(t))e4, e5, e1 − (1 − ḋ1(t))e2, (1 − ḋ1(t))e2 − e3, e1 − (1 − ḋ1(t))e6 − ḋ1(t)e8, (1 − ḋ1(t))e2 − e7 + ḋ1(t)e9 }

T1(d1(t)) = col{ e1, e2, e3, d1(t)e6, (d1 − d1(t))e7, d1(t)e8, (d1 − d1(t))e9 }

and the definitions of Φ1, Φ2, M and Π1, Π2, Π3, Π4 are shown in (10)–(13).

The following inequalities hold according to Assumption (A1):

g^T(e(t))g(e(t)) − e^T(t)L^T L e(t) ≤ 0    (15)

g^T(e(t − d1(t)))g(e(t − d1(t))) − e^T(t − d1(t))L^T L e(t − d1(t)) ≤ 0    (16)

∫_{t−d2(t)}^{t} g^T(e(s))g(e(s))ds − ∫_{t−d2(t)}^{t} e^T(s)L^T L e(s)ds ≤ 0    (17)

where L = diag{l1, l2, ..., ln}. Meanwhile, given any positive scalars δ1, δ2 and δ3, Lemma 4 gives

−δ3 ∫_{t−d2(t)}^{t} g^T(e(s))g(e(s))ds ≤ −δ3 d2^{−1} (∫_{t−d2(t)}^{t} g(e(s))ds)^T (∫_{t−d2(t)}^{t} g(e(s))ds)    (18)

Given any matrices M1 and M2 of suitable dimensions, the following equation holds:

0 = 2[e^T(t)M1 + ė^T(t)M2][−ė(t) − C(r(t))e(t − σ) + A(r(t))g(e(t)) + B(r(t))g(e(t − d1(t))) + D(r(t)) ∫_{t−d2(t)}^{t} g(e(s))ds + K(r(t))e(t − d3(t))]    (19)

Add (15)–(19) to V̇1(t)–V̇5(t), and then deal with the items. Separating the definite items from the uncertain items in A(·), B(·), C(·) and D(·), the following results can be obtained. The uncertain part collects into the block matrix Φ̄3 ∈ R^{24n×24n}, whose only nonzero block rows are the first and the twenty-first:

1st block row:  [ 0 ··· 0 (19n)  −M1ΔC  0  M1ΔD  M1ΔA  M1ΔB ]
21st block row: [ 0 ··· 0 (19n)  −M2ΔC  0  M2ΔD  M2ΔA  M2ΔB ]

and it is bounded by

Φ̄3 + Φ̄3^T ≤ ε^{−1} [col{M1G 0 ··· 0 M2G 0 0 0}][col{M1G 0 ··· 0 M2G 0 0 0}]^T + ε [col{0 ··· 0 −E3^T 0 E4^T E1^T E2^T}][col{0 ··· 0 −E3^T 0 E4^T E1^T E2^T}]^T = ε^{−1}Φ1Φ1^T + εΦ2Φ2^T    (20)

For convenience, let M1 = ε1M2 and χ_i = M2K_i, where ε1 is an arbitrary real number. To sum up, combined with (20), we can get

V̇(x(t), t, r(t)) ≤ ξ^T(t) Υ(d_i(t), ḋ1(t)) ξ(t)    (21)

where

Υ(d_i(t), ḋ1(t)) =
[ Ψ(d_i(t), ḋ1(t))  Π1^T X1  Π2^T X2  Π3^T X3  Π4^T X4  Φ1  Φ2 ]
[ ∗  Δ22  0  0  0  0  0 ]
[ ∗  ∗  Δ33  0  0  0  0 ]
[ ∗  ∗  ∗  Δ44  0  0  0 ]
[ ∗  ∗  ∗  ∗  Δ55  0  0 ]
[ ∗  ∗  ∗  ∗  ∗  −εI  0 ]
[ ∗  ∗  ∗  ∗  ∗  ∗  −(1/ε)I ]    (22)

Ψ(d_i(t), ḋ1(t)) = Ω(d_i(t), ḋ1(t)) + Σ_{i=1}^{4} e1^T Q_i e1 − (1 − ḋ1(t)) e2^T Q1 e2 − e20^T Q2 e20 − e3^T Q3 e3 − e11^T Q4 e11 + e21^T (d1 R1 + d3 R2) e21 − Π1^T(X1 M + M^T X1^T)Π1 − Π2^T(X2 M + M^T X2^T)Π2 − Π3^T(X3 M + M^T X3^T)Π3 − Π4^T(X4 M + M^T X4^T)Π4 + e21^T[(d1²/2)Z1 + (d3²/2)Z2]e21 − [d1 e1 − e16]^T Z1 [d1 e1 − e16] − 2[−(d1/2)e1 − e16 + (3/d1)e17]^T Z1 [−(d1/2)e1 − e16 + (3/d1)e17] − [d3 e1 − e18]^T Z2 [d3 e1 − e18] − 2[−(d3/2)e1 − e18 + (3/d3)e19]^T Z2 [−(d3/2)e1 − e18 + (3/d3)e19] + d2 e1^T S e1 − δ1[e23^T e23 − e1^T L^T L e1] − δ2[e24^T e24 − e2^T L^T L e2] − δ3 d2^{−1} e22^T e22 + Φ3    (23)

Ω(d_i(t), ḋ1(t)) = He{T0^T(ḋ1(t)) P_P(r(t)) T1(d1(t))} + T1^T(d1(t)) Σ_{j=1}^{N} μ_{ij} P_j(r(t)) T1(d1(t)) + T1^T(d1(t)) (dP_P(r(t))/dt) T1(d1(t))    (24)

ξ(t) = col{ e(t), e(t − d1(t)), e(t − d1), ė(t − d1(t)), ė(t − d1), (1/d1(t)) ∫_{t−d1(t)}^{t} e(s)ds, (1/(d1 − d1(t))) ∫_{t−d1}^{t−d1(t)} e(s)ds, (1/d1²(t)) ∫_{−d1(t)}^{0} ∫_{t+θ}^{t} e(s)dsdθ, (1/(d1 − d1(t))²) ∫_{−d1}^{−d1(t)} ∫_{t+θ}^{t−d1(t)} e(s)dsdθ, e(t − d3(t)), e(t − d3), (1/d3(t)) ∫_{t−d3(t)}^{t} e(s)ds, (1/(d3 − d3(t))) ∫_{t−d3}^{t−d3(t)} e(s)ds, (1/d3²(t)) ∫_{−d3(t)}^{0} ∫_{t+θ}^{t} e(s)dsdθ, (1/(d3 − d3(t))²) ∫_{−d3}^{−d3(t)} ∫_{t+θ}^{t−d3(t)} e(s)dsdθ, ∫_{t−d1}^{t} e(θ)dθ, ∫_{t−d1}^{t} ∫_{θ}^{t} e(s)dsdθ, ∫_{t−d3}^{t} e(θ)dθ, ∫_{t−d3}^{t} ∫_{θ}^{t} e(s)dsdθ, e(t − σ), ė(t), ∫_{t−d2(t)}^{t} g(e(s))ds, g(e(t)), g(e(t − d1(t))) }    (25)

Therefore, as long as (22) is negative definite, V̇(x(t), t, r(t)) is strictly negative definite on the intervals d1(t) ∈ [0, d1], ḋ1(t) ∈ [h1, h2].
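The final step relies on convexity: a matrix function that is affine in (d1(t), ḋ1(t)) is negative definite on the whole rectangle [0, d1] × [h1, h2] as soon as it is negative definite at the four vertices, since every interior point is a convex combination of the vertices. A numerical illustration with random stand-in matrices (not the theorem's Υ):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 6
sym = lambda A: (A + A.T) / 2
U0 = -10.0 * np.eye(m)                    # constant part, made strongly negative
Ua = sym(rng.standard_normal((m, m)))     # coefficient of d1(t)
Ub = sym(rng.standard_normal((m, m)))     # coefficient of the delay derivative
d1, h1, h2 = 0.5, -0.3, 0.3

Ups = lambda a, b: U0 + a * Ua + b * Ub   # affine in (a, b)
vertices = [(a, b) for a in (0.0, d1) for b in (h1, h2)]
vertex_ok = all(np.linalg.eigvalsh(Ups(a, b)).max() < 0 for a, b in vertices)

# any interior point is a convex combination of the vertices, hence also ND
a, b = rng.uniform(0, d1), rng.uniform(h1, h2)
interior_max_eig = np.linalg.eigvalsh(Ups(a, b)).max()
```

This is the same mechanism that makes the affine form of Lemma 1 convenient (Remark 3): affine dependence on the interval length allows the LMI to be verified at finitely many vertex points only.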
polyhedron without changing the range of the delays. Finally, it is verified by numerical simulation that MJNN-DS and MJNN-RS are fully synchronized under the control of the sample point controller, and the parameters of the controller are obtained.

Acknowledgements Project supported by the National Natural Science Foundation of China (Grant Nos. 61403278, 61503280). The authors are very indebted to the Editor and the anonymous reviewers for their insightful comments and valuable suggestions that have helped improve this work.

Compliance with ethical standards

Conflict of interest The authors declare that they have no conflict of interest.

References

1. Ahn, C.K., Shi, P., Wu, L.: Receding horizon stabilization and disturbance attenuation for neural networks with time-varying delay. IEEE Trans. Cybern. 45(12), 2680–2692 (2017)
2. Briat, C.: Convergence and equivalence results for the Jensen's inequality—application to time-delay and sampled-data systems. IEEE Trans. Autom. Control 56(7), 1660–1665 (2012)
3. Ding, Y., Liu, H.: Stability analysis of continuous-time Markovian jump time-delay systems with time-varying transition rates. J. Frankl. Inst. 353(11), 2418–2430 (2016)
4. Gu, K., Kharitonov, V.L., Chen, J.: Stability of Time-Delay Systems. Birkhäuser, Boston (2003)
5. Guan, H., Gao, L.: Delay-dependent robust stability and H∞ control for jump linear systems with interval time-varying delay. In: Proceedings of the 26th Chinese Control Conference, pp. 609–614. IEEE, Zhangjiajie (2007)
6. Gyurkovics, E.: A note on Wirtinger-type integral inequalities for time-delay systems. Pergamon Press, Oxford (2015)
7. Kasemsuk, C., Oyama, G., Hattori, N.: Management of impulse control disorders with deep brain stimulation: a double-edged sword. J. Neurol. Sci. 374, 63–68 (2017)
8. Lee, W.I., Lee, S.Y., Park, P.G.: Affine Bessel–Legendre inequality: application to stability analysis for systems with time-varying delays. Automatica 93 (2018)
9. Li, R.G., Wu, H.N.: Adaptive synchronization control based on QPSO algorithm with interval estimation for fractional-order chaotic systems and its application in secret communication. Nonlinear Dyn. 92(3), 1–25 (2018)
10. Lin, F.F., Zeng, Z.Z.: Synchronization of uncertain fractional-order chaotic systems with time delay based on adaptive neural network control. Acta Phys. Sin. 66, 9 (2017)
11. Mayer, J., Schuster, H.G., Claussen, J.C., et al.: Corticothalamic projections control synchronization in locally coupled bistable thalamic oscillators. Phys. Rev. Lett. 99(6), 068102 (2007)
12. Mohammadzadeh, A., Ghaemi, S.: Robust synchronization of uncertain fractional-order chaotic systems with time-varying delay. Nonlinear Dyn. 93(4), 1809–1821 (2018)
13. Nagamani, G., Joo, Y.H., Radhika, T.: Delay-dependent dissipativity criteria for Markovian jump neural networks with random delays and incomplete transition probabilities. Nonlinear Dyn. 91(4), 2503–2522 (2018)
14. Nagamani, G., Joo, Y.H., Radhika, T.: Delay-dependent dissipativity criteria for Markovian jump neural networks with random delays and incomplete transition probabilities. Nonlinear Dyn. 91(4), 2503–2522 (2018)
15. Novienko, V., Ratas, I.: In-phase synchronization in complex oscillator networks by adaptive delayed feedback control. Phys. Rev. E 98(4), 042302 (2018)
16. Xu, N., Sun, L.: An improved delay-dependent stability analysis for Markovian jump systems with interval time-varying-delays. IEEE Access 6, 33055–33061 (2018)
17. Park, M., Kwon, O., Park, J.H., Lee, S., Cha, E.: Stability of time-delay systems via Wirtinger-based double integral inequality. Automatica 55, 204–208 (2015)
18. Park, I.S., Kwon, N.K., Park, P.G.: Dynamic output-feedback control for singular Markovian jump systems with partly unknown transition rates. Nonlinear Dyn. 95(4), 1–12 (2019)
19. Rong, Z., Yang, Y., Xu, Z., et al.: Function projective synchronization in drive–response dynamical network. Phys. Lett. A 374(30), 3025–3028 (2010)
20. Schibli, T.R., Kim, J., Kuzucu, O., et al.: Attosecond active synchronization of passively mode-locked lasers by balanced cross correlation. Opt. Lett. 28(11), 947–949 (2003)
21. Seuret, A., Gouaisbaut, F.: Stability of linear systems with time-varying delays using Bessel–Legendre inequalities. IEEE Trans. Autom. Control 63(1), 225–232 (2017)
22. Seuret, A., Gouaisbaut, F.: Wirtinger-based integral inequality: application to time-delay systems. Automatica 49(9), 2860–2866 (2013)
23. Shu, Y., Liu, X.G., Qiu, S., et al.: Dissipativity analysis for generalized neural networks with Markovian jump parameters and time-varying delay. Nonlinear Dyn. 89(3), 2125–2140 (2017)
24. Sun, L., Xu, N.: Stability analysis of Markovian jump system with multi-time-varying disturbances based on improved interactive convex inequality and positive definite condition. IEEE Access 7, 54910–54917 (2019)
25. Syed, A.M., Marudai, M.: Stochastic stability of discrete-time uncertain recurrent neural networks with Markovian jumping and time-varying delays. Math. Comput. Model. 54(9–10), 1979–1988 (2011)
26. Tao, J., Wu, Z.G., Su, H., et al.: Asynchronous and resilient filtering for Markovian jump neural networks subject to extended dissipativity. IEEE Trans. Cybern. 99, 1–10 (2018)
27. Wang, J., Luo, Y.: Further improvement of delay-dependent stability for Markov jump systems with time-varying delay. In: Proceedings of the 7th World Congress on Intelligent Control and Automation, pp. 6319–6324. IEEE, Chongqing (2008)
28. Wang, Y., Xie, L., de Souza, C.E.: Robust control of a class of uncertain nonlinear systems. Syst. Control Lett. 19(2), 139–149 (1992)
29. Wang, Y.F., Lin, P., Wang, L.S.: Exponential stability of reaction–diffusion high-order Markovian jump Hopfield neural networks with time-varying delays. Nonlinear Anal. Real World Appl. 13(3), 1353–1361 (2012)
30. Wang, J., Chen, X., Feng, J., et al.: Synchronization of networked harmonic oscillators subject to Markovian jumping coupling strengths. Nonlinear Dyn. 91(1), 1–13 (2018)
31. Xu, S., Lam, J., Mao, X.: Delay-dependent H∞ control and filtering for uncertain Markovian jump systems with time-varying delays. IEEE Trans. Circuits Syst. I Regul. Pap. 54(9), 2070–2077 (2007)
32. Zeng, H.B., He, Y., Wu, M., et al.: Free-matrix-based integral inequality for stability analysis of systems with time-varying delay. IEEE Trans. Autom. Control 60(10), 2768–2772 (2015)
33. Zhang, X., Lv, X., Li, X.: Sampled-data-based lag synchronization of chaotic delayed neural networks with impulsive control. Nonlinear Dyn. 90(3), 2199–2207 (2017)
34. Zhang, X.M., Han, Q.L., Seuret, A., et al.: An improved reciprocally convex inequality and an augmented Lyapunov–Krasovskii functional for stability of linear systems with time-varying delay. Automatica 84, 221–226 (2017)
35. Zhao, X., Zeng, Q.: Delay-dependent stability analysis for Markovian jump systems with interval time-varying-delay. Int. J. Autom. Comput. 7(2), 224–229 (2010)
36. Zhi, Z., Liu, K., Wang, W.Q., et al.: Robust adaptive beamforming against mutual coupling based on mutual coupling coefficients estimation. IEEE Trans. Veh. Technol. 99, 1–1 (2017)
37. Zhu, X.L., Yang, G.H.: Jensen integral inequality approach to stability analysis of continuous-time systems with time-varying delay. IET Control Theory Appl. 2(6), 524–534 (2008)

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
123