Xiaoli Luan
Shuping He
Fei Liu
Robust Control for Discrete-Time Markovian Jump Systems in the Finite-Time Domain

Lecture Notes in Control and Information Sciences, Volume 492
Series Editors
Frank Allgöwer, Institute for Systems Theory and Automatic Control,
Universität Stuttgart, Stuttgart, Germany
Manfred Morari, Department of Electrical and Systems Engineering,
University of Pennsylvania, Philadelphia, USA
Advisory Editors
P. Fleming, University of Sheffield, UK
P. Kokotovic, University of California, Santa Barbara, CA, USA
A. B. Kurzhanski, Moscow State University, Moscow, Russia
H. Kwakernaak, University of Twente, Enschede, The Netherlands
A. Rantzer, Lund Institute of Technology, Lund, Sweden
J. N. Tsitsiklis, MIT, Cambridge, MA, USA
This series reports new developments in the fields of control and information
sciences—quickly, informally and at a high level. The type of material considered
for publication includes:
1. Preliminary drafts of monographs and advanced textbooks
2. Lectures on a new field, or presenting a new angle on a classical field
3. Research reports
4. Reports of meetings, provided they are
(a) of exceptional interest and
(b) devoted to a specific topic. The timeliness of subject material is very
important.
Indexed by EI-Compendex, SCOPUS, Ulrich’s, MathSciNet, Current Index
to Statistics, Current Mathematical Publications, Mathematical Reviews,
IngentaConnect, MetaPress and Springerlink.
Xiaoli Luan · Shuping He · Fei Liu

Robust Control for Discrete-Time Markovian Jump Systems in the Finite-Time Domain
Xiaoli Luan
Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Institute of Automation, Jiangnan University, Wuxi, Jiangsu, China

Shuping He
Key Laboratory of Intelligent Computing and Signal Processing (Ministry of Education), School of Electrical Engineering and Automation, Anhui University, Hefei, Anhui, China

Fei Liu
Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Institute of Automation, Jiangnan University, Wuxi, Jiangsu, China
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature
Switzerland AG 2023
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
In the field of modern industry, there are many hybrid systems involving both continuous state evolution and discrete event-driven behavior, such as biochemical systems, communication networks, aerospace systems, manufacturing processes, and economic systems. These systems often encounter component failures, changes in the external environment, and changes in subsystem correlations, which cause random jumping or switching of the system structure and parameters. That is, the switching between modes is random but may conform to certain statistical laws. If it obeys the Markovian property, the system is called a stochastic Markovian jump system (MJS). The dynamic behavior of MJSs consists of two parts: one is the discrete mode, which is described by a Markovian chain taking values in a finite integer set; the other is the continuously changing state, characterized by a differential (or difference) equation for each mode. In this sense, MJSs belong to a category of hybrid systems, and their particularity lies in that the discrete events and continuous variables can be expressed jointly by a stochastic differential or difference equation. This makes it possible to apply the state-space methods of modern control theory to study problems of MJSs.
On the other hand, control theory has long focused on the steady-state characteristics of systems over the infinite-time domain. However, for most engineering systems, the transient characteristics over a finite-time interval are of more practical interest. On the one hand, an asymptotically stable system does not necessarily have good transient behavior; the system may even exhibit violent oscillations and thus fail to meet production requirements. On the other hand, many practical production processes, such as biochemical reaction systems and economic systems, run only for a short time, and one is more interested in their transient performance over a given time domain. Therefore, this book introduces the theory of finite-time control into stochastic discrete-time MJSs, considers the transient characteristics of discrete-time MJSs over a finite-time interval, establishes their stability, boundedness, robustness, and other performance properties in a given time domain, and ensures that the state trajectory of the system remains within a certain range of the equilibrium point. In this way, the engineering conservativeness of the asymptotic stability of conventional control theory is reduced along the time dimension.
This book aims to develop a less conservative analysis and design methodology for discrete-time MJSs via finite-time control theory. It can be used by final-year undergraduates, postgraduates, and academic researchers. Prerequisite knowledge includes linear algebra, linear system theory, matrix theory, stochastic systems, etc.; it is best regarded as an advanced text.
The authors would like to express their sincere appreciation to those who directly participated
in various aspects of the research leading to this book. Special thanks go to Prof.
Pedro Albertos from the Universidad Politécnica de Valencia in Spain, Prof. Peng
Shi from the University of Adelaide in Australia, Prof. Shuping He from Anhui
University in China, Profs. Fei Liu, Jiwei Wen, and Shunyi Zhao from Jiangnan
University in China for their helpful suggestions, valuable comments, and great
support. The authors also thank many colleagues and students who have contributed
technical support and assistance throughout this research. In particular, we would
like to acknowledge the contributions of Wei Xue, Haiying Wan, Peng He, Ziheng
Zhou, Chang’an Han, Chengcheng Ren, Xiang Zhang, and Shuang Gao. Finally,
we are incredibly grateful to our families for their never-ending encouragement and
support whenever necessary.
This book was supported in part by the National Natural Science Foundation
of China (Nos. 61991402, 61991400, 61833007, 62073154, 62073001), Scien-
tific Research Cooperation and High-level Personnel Training Programs with New
Zealand (No. 1252011004200040), the University Synergy Innovation Program of
Anhui Province (No. GXXT-2021-010), Anhui Provincial Key Research and Devel-
opment Project (No. 2022i01020013), and Anhui University Quality Engineering
Project (No. 2022i01020013, 2020jyxm0102, 021jxtd017).
Contents
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Markovian Jump Systems (MJSs) . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Nonlinear MJSs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 Switching MJSs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.3 Non-homogenous MJSs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Finite-Time Stability and Control . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1 FTS for Deterministic Systems . . . . . . . . . . . . . . . . . . . . . . 6
1.2.2 FTS for Stochastic MJSs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3 Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2 Finite-Time Stability and Stabilization for Discrete-Time
Markovian Jump Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2 Preliminaries and Problem Formulation . . . . . . . . . . . . . . . . . . . . . . 22
2.3 Stochastic Finite-Time Stabilization for Linear MJSs . . . . . . . . . . 24
2.4 Stochastic Finite-Time Stabilization for Nonlinear MJSs . . . . . . . 26
2.5 Simulation Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3 Finite-Time Stability and Stabilization for Switching
Markovian Jump Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.2 Preliminaries and Problem Formulation . . . . . . . . . . . . . . . . . . . . . . 40
3.3 Stochastic Finite-Time H∞ Control . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.4 Observer-Based Finite-Time H∞ Control . . . . . . . . . . . . . . . . . . . . 51
3.5 Simulation Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Chapter 1
Introduction
The Markovian jump system (MJS) was first proposed by Krasovskii and Lidskii in 1961 [1]. It was initially regarded as a special stochastic system and did not attract much attention. With the development of hybrid system theory, researchers realized that the MJS is actually a special kind of hybrid system, and it has since attracted extensive attention [2, 3]. An MJS assumes that the system dynamics switch within a set of known subsystem models, and that the switching law between the subsystem models obeys a finite-state Markovian process; the subsystem models are also called modes. The particularity of the MJS lies in that, although it belongs to the class of hybrid systems, its discrete-event dynamics are random processes that follow statistical laws. Thanks to the development of stochastic process theory, the dynamics of an MJS can be written in the form of a stochastic differential equation or stochastic difference equation. The analysis and synthesis of MJSs can then be studied using methods similar to those for continuous-variable dynamic systems.
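For concreteness, a representative discrete-time linear MJS can be written as follows; this is a minimal sketch consistent with the description above, with the symbols chosen here for illustration:

$$x(k+1) = A(r_k)\,x(k) + B(r_k)\,u(k), \qquad \Pr\{r_{k+1} = j \mid r_k = i\} = \pi_{ij},$$

where $x(k)$ is the continuous state, $u(k)$ the control input, $r_k$ a Markovian chain taking values in a finite set $\mathcal{S} = \{1, 2, \ldots, s\}$, and $\Pi = [\pi_{ij}]$ the transition probability (TP) matrix with $\pi_{ij} \ge 0$ and $\sum_{j=1}^{s} \pi_{ij} = 1$.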
The MJS was proposed with a strong engineering background and practical significance. The model has proved to be an effective way to describe industrial processes, which are often subjected to random disturbances arising from internal component failures and changes in the external environment. Common examples include biochemical systems, power systems, and flexible manufacturing systems.
1.1.1 Nonlinear MJSs

Compared with linear MJSs, research on nonlinear MJSs has progressed slowly [12–14]. This is mainly caused by the complexity of real MJSs, the complex behavior of nonlinearities, and the limitations of existing control theories and algorithms. For linear MJSs, the control design can be transformed into the solution of a corresponding Riccati equation or linear matrix inequality. However, for nonlinear MJSs, it is impossible to design a general controller that guarantees the performance and stability of the systems. Therefore, controlling MJSs with nonlinearities remains a difficult problem in the control field.
Although some scholars paid attention to this kind of system at an early stage, the development of the related theory has been slow. Aliyu and Boukas tried to use the Hamilton–Jacobi equation to give sufficient conditions for the stochastic stability of nonlinear MJSs [15]. Unfortunately, it is very difficult to obtain a global solution of the Hamilton–Jacobi equation by numerical or analytical methods because the difficulties of the related mathematical theory remain unresolved. Therefore, many scholars have turned to nonlinear approximation methods (mainly fuzzy and neural network technologies) to solve the control problem of nonlinear MJSs [16–18].
The Takagi–Sugeno (T–S) fuzzy model is one of the effective methods for dealing with MJSs with nonlinearities. Based on IF-THEN fuzzy rules, it provides a local linear description or approximate representation of nonlinear MJSs, as sketched below. For the resulting T–S fuzzy MJSs, research has focused on quantized feedback control, robust control, dissipative control, asynchronous dissipative control, asynchronous sliding mode control, adaptive synchronous control, robust filtering, asynchronous filtering, etc. [19–25]. As system complexity grows, the requirements on the safety and reliability of the controlled system keep increasing. The works [26–28] investigate fault detection and fault diagnosis of T–S fuzzy MJSs based on observer and filter design.
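As an illustration only (the rule structure and symbols below are generic, not taken from a specific cited work), a T–S fuzzy MJS is typically built from mode-dependent local rules of the form

Rule $l$: IF $z_1(k)$ is $M_{l1}$ and $\ldots$ and $z_p(k)$ is $M_{lp}$, THEN
$$x(k+1) = A_l(r_k)\,x(k) + B_l(r_k)\,u(k), \qquad l = 1, \ldots, L,$$

and the overall fuzzy-blended dynamics are

$$x(k+1) = \sum_{l=1}^{L} h_l\big(z(k)\big)\big[A_l(r_k)\,x(k) + B_l(r_k)\,u(k)\big], \qquad h_l \ge 0, \quad \sum_{l=1}^{L} h_l = 1,$$

where $z(k)$ collects the premise variables and $h_l$ are the normalized membership functions.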
Another effective technique for dealing with nonlinear MJSs is the neural network. One typical approach is to use neural networks to linearize the nonlinearities of MJSs; through linear difference inclusions in the state-space representation (see the sketch after this paragraph), optimal control, robust control, output feedback control, scheduling control, H∞ filtering, and robust fault detection have been addressed in [29–35]. Sliding mode control and event-triggered fault detection were then investigated in [36, 37] by employing a multilayer neural network to represent the nonlinearities. Combined with the backstepping scheme, adaptive tracking control for nonlinear MJSs has been examined in [38, 39]. By adopting neural networks to realize the online adaptive dynamic programming learning algorithm, optimal control, optimal tracking control, and online learning and control have been addressed in [40–42].
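As a rough sketch of the linear difference inclusion idea (the notation here is illustrative, not quoted from the cited references), the neural-network approximation of a nonlinearity $f(x(k))$ is embedded in a convex hull of linear maps,

$$f\big(x(k)\big) \in \mathrm{co}\big\{\bar{A}_1 x(k), \ldots, \bar{A}_q x(k)\big\}, \quad\text{i.e.,}\quad f\big(x(k)\big) = \sum_{i=1}^{q} \theta_i(k)\,\bar{A}_i\,x(k), \qquad \theta_i(k) \ge 0, \quad \sum_{i=1}^{q} \theta_i(k) = 1,$$

so that the nonlinear MJS is covered by a family of linear MJSs indexed by the vertex matrices $\bar{A}_i$, to which linear analysis and synthesis tools can then be applied.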
1.1.2 Switching MJSs

For a general linear switching system, the switching rules between subsystems are deterministic, and each subsystem can be described by linear differential (or difference) equations. However, for a single switching subsystem, component failures and sudden environmental disturbances often occur, leading to jumps in the system structure and parameters. Therefore, a single subsystem is often better described by an MJS. Taking a voltage conversion circuit as an example, the required voltage is obtained by switching among different gear positions. At a certain gear position, the system may undergo random jumps due to the failure of electronic components. It is not appropriate to model such a system with a simple switching system: it contains not only deterministic switching signals but also randomly jumping modes. Such more complex systems are called switching MJSs, and the model of such systems was first proposed by Bolzern [43].
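A minimal sketch of such a switching MJS (notation assumed here for illustration) is

$$x(k+1) = A_{\sigma(k)}(r_k)\,x(k) + B_{\sigma(k)}(r_k)\,u(k),$$

where $\sigma(k)$ is a deterministic switching signal (typically constrained by a dwell time or an average dwell time) that selects the active family of subsystems, and $r_k$ is a Markovian chain whose TP matrix $\Pi_{\sigma(k)}$ may itself depend on the active switching signal.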
In [43], mean square stability was analyzed via the time evolution of the second-order moment of the state under constraints on the dwell time between switching instants. Exponential almost sure stability for switching signals satisfying an average dwell time restriction was then investigated by Bolzern in [44], and a trade-off was found between the average dwell time and the ratio of the residence times.
Similarly, the almost sure stability for linear switching MJSs in continuous time was
addressed in [45, 46] by applying the Lyapunov function approach. On the basis
of stability analysis, the exponential l2 − l∞ control, H∞ control, resilient dynamic
output feedback control, mean square stabilization, and almost sure stabilization
have been studied by researchers [47–51]. In recent years, the analysis and synthesis
results for switching MJSs have been extended to positive systems [52, 53], where
the state variables take only nonnegative values.
The development of switching MJSs enriches the research field of hybrid systems and provides a more general system modeling method. When the random jumping is not considered, the system reduces to a general switching system; when the switching rules are not considered, each subsystem reduces to a general MJS. Due to the coupling of switching signals and jumping modes, the analysis and synthesis of such systems pose great challenges, and many tough problems remain to be solved.
1.1.3 Non-homogeneous MJSs

All the aforementioned results on MJSs are limited to systems with fixed jumping transition probabilities (TPs). In engineering practice, the TP matrix often changes with time; that is, non-homogeneous Markovian processes are ubiquitous. Therefore, the theory of non-homogeneous Markovian processes has become a research hotspot [54, 55]. In 2011, Aberkane explored non-homogeneous discrete-time MJSs and derived controller design results [56]. In that study, the time-varying TPs are described by a polytopic description with fixed vertices, as sketched below. In the same way, robust control, model predictive control, output feedback control, and H∞ filtering for non-homogeneous MJSs have been proposed in [57–61].
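In this polytopic description, the time-varying TP matrix is assumed to evolve inside a convex hull of known vertex matrices, i.e. (a standard sketch of the idea),

$$\Pi(k) = \sum_{l=1}^{L} \alpha_l(k)\,\Pi^{(l)}, \qquad \alpha_l(k) \ge 0, \quad \sum_{l=1}^{L} \alpha_l(k) = 1,$$

where the vertex matrices $\Pi^{(l)}$ are fixed and known, while the convex weights $\alpha_l(k)$ may vary with time and need not be measured exactly.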
Although the above research results on non-homogeneous MJSs cover control, filtering, and stability analysis, the time-varying TPs are described in the form of a polytopic description with fixed vertices. They focus on the fact that the TPs change, but do not pay attention to how the TPs change. Among the existing results, there are two ways to model the change of TPs. The first is the periodic non-homogeneous MJS, in which the TPs change cyclically according to a period, and the system parameters in each mode also change cyclically. The second is the non-homogeneous MJS that follows a high-order Markovian chain, in which a high-order Markovian chain is introduced to express the change of the TPs.
For periodic non-homogeneous MJSs, observability and detectability, l2 − l∞ control, H∞ filtering, and strict dissipative filtering have been presented in [62–65]. For non-homogeneous MJSs following high-order Markovian chains, Zhang discussed the particularity of piecewise homogeneous MJSs and dealt with their H∞ estimation problem [66]. In that paper, a high-order Markovian chain is used to model the fact that the change of the TP matrix between segments is itself a random jump governed by probabilities, which provides a new way of thinking for later studies on the variation of TPs. In 2012, Wu discussed the stability of piecewise homogeneous MJSs with time-delays [67]. The works [68, 69] proposed H∞ control and filtering for non-homogeneous MJSs with TPs following a Gaussian distribution.
1.2 Finite-Time Stability and Control

Modern control theory covers a wide field and involves many methods, but stability analysis is the core and basis of almost all of them, especially Lyapunov stability and asymptotic stability. Lyapunov stability, as a sufficient condition, is simple and intuitive, but it focuses on the system behavior over an infinite time domain, which inevitably brings conservatism from the perspective of engineering practice. For most engineering systems, the transient performance within a certain time interval is of more practical interest. On the one hand, an asymptotically stable system does not necessarily have good transient behavior; the system may even exhibit violent oscillations and thus fail to meet production requirements. On the other hand, many practical production processes, such as biochemical reaction systems and economic systems, run only for a short time, and one is more interested in their transient performance over a given time domain.
In order to study the transient performance of a system, Kamenkov first proposed the concept of finite-time stability (FTS) in the Russian journal PMM in 1953 [70]. Similar articles soon followed in the same journal [71], and the early articles on FTS were mostly written by Russian authors, dealing with linear as well as nonlinear systems. In 1961, articles on the FTS of linear time-varying systems appeared, such as "Short-Time Stability in Linear Time-Varying Systems" by Dorato [72]. The idea of short-time stability is essentially the same as FTS, but the term FTS became more common later. Also in 1961, LaSalle and Lefschetz wrote "Stability by Lyapunov's Direct Methods: With Applications," in which the concept of "practical stability" was proposed [73]. Both concepts require boundedness over a finite-time domain, but the lengths of the time intervals considered in the two studies differ slightly.
In 1965, Weiss and Infante discussed the FTS analysis of nonlinear systems in depth and introduced the concepts of quasi-contractive stability and convergence stability over a certain finite-time interval [74]. Shortly thereafter, Weiss and Infante further studied the FTS of nonlinear systems with perturbations, which led to the new concept of finite-time bounded-input bounded-output (BIBO) stability [75], a concept that later evolved into what is now known as finite-time boundedness. In 1969, Michel and Wu extended FTS from continuous-time to discrete-time systems on the basis of many existing results [76]. In the decade from 1965 to 1975, a large number of articles on FTS appeared, but all of them were limited to stability analysis and did not provide control design methods [77–79].
In 1969, Garrard studied the finite-time control method for nonlinear systems [80]. In 1972, Van Mellaert and Dorato extended finite-time control to stochastic systems [81]. During this period, San Filippo and Dorato studied robust control design for linear systems based on linear quadratic methods and FTS, and applied the results to an aircraft control problem [82]. Grujic applied the concept of FTS to the controller design of adaptive systems [83]. The design techniques proposed between 1969 and 1976 all required complex calculations. In practical applications, the operating conditions of a system are never ideal, and the system is often affected by external disturbances and other factors during operation. To better address the stability problem of systems under external disturbances, the Italian scholar Amato introduced the notions of "finite-time stability" and "finite-time boundedness," thereby explicitly accounting for the external disturbances acting on the system.
In view of the importance of FTS in practical applications, more and more researchers have devoted themselves to finite-time control problems in recent years [84–88]. FTS is a stability concept, distinct from asymptotic stability, for studying the transient performance of a system. Roughly speaking, FTS requires that, for initial conditions within a given bound, the state norm does not exceed a certain threshold over a given finite-time interval. FTS therefore has three elements, namely a certain time interval, a bound on the initial conditions, and a bound on the system states. To judge whether a system is finite-time stable, a time interval, a bound on the initial conditions, and a bound on the system state should first be specified according to the requirements, and then one checks whether the system state stays within the pre-specified bound over that interval. We can thus distinguish FTS from asymptotic stability in terms of the above three factors:
(1) FTS examines the performance of the system in a specific time interval, and
asymptotic stability examines the performance of the system in an infinite-time
interval.
(2) FTS is for initial conditions within a given bound, and asymptotic stability is for
arbitrary initial conditions.
(3) FTS requires that the system state trajectory keep within predefined bounds,
while asymptotic stability requires that the system state converge asymptotically
(no specific bounds are required for the state trajectory).
Thus, these two kinds of stability are independent of each other. A system may be finite-time stable, yet beyond the given time interval its state may diverge, so that it is not asymptotically stable. Conversely, a system may be asymptotically stable, yet its state may leave the given region during a certain period, so that it does not satisfy the FTS requirement. In general, asymptotic stability concerns the asymptotic convergence of the system over the infinite-time domain, whereas FTS concerns the transient performance of the system over a specific time interval.
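To make the three elements concrete, a commonly used discrete-time formulation (a sketch in the spirit of Amato's definition; the symbols here are generic) reads: the system $x(k+1) = A\,x(k)$ is said to be finite-time stable with respect to $(c_1, c_2, R, N)$, with $0 < c_1 < c_2$ and $R > 0$, if

$$x^{T}(0)\,R\,x(0) \le c_1 \;\Longrightarrow\; x^{T}(k)\,R\,x(k) < c_2, \qquad k = 1, 2, \ldots, N,$$

so the time interval is $\{1,\ldots,N\}$, the bound on the initial condition is $c_1$, and the bound on the state is $c_2$ in the weighted norm induced by $R$.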
1.2.1 FTS for Deterministic Systems

In recent years, with the development of linear matrix inequality (LMI) theory, the problems related to FTS have been revisited. In 1997, Dorato presented a robust finite-time controller design for linear systems at the 36th IEEE Conference on Decision and Control [89]. In that paper, a state feedback control law for the finite-time stabilization of linear systems was obtained for the first time by employing LMIs, and LMI theory was thereby introduced into FTS analysis and controller design for linear systems. Subsequently, Amato presented a series of FTS (or finite-time boundedness) analysis and finite-time controller design methods for uncertain linear continuous-time systems based on LMI conditions [90, 91].
In 2005, Amato extended the above FTS and finite-time control results for linear continuous-time systems to linear discrete-time systems [92] and addressed the design conditions for finite-time stabilizing state feedback and output feedback controllers, respectively [93]. In subsequent studies, Amato further extended the FTS results to more general systems, and other scholars also began to study FTS problems [94–100]. Traditional asymptotic stability requires the corresponding Lyapunov energy function to decrease strictly, whereas the solutions mentioned above relax this requirement by allowing the Lyapunov function to increase within a certain range, thus transforming the FTS problem into a series of LMI feasibility problems, as sketched below. Therefore, the FTS analysis and synthesis conditions given in these works are easy to verify, and the difference between FTS and traditional asymptotic stability can be clearly seen.
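A sketch of the typical relaxation (generic symbols, not a result quoted from a specific reference): instead of requiring $V(x(k+1)) < V(x(k))$, one seeks $V(x(k)) = x^{T}(k) P x(k)$ with $P > 0$ and a scalar $\gamma \ge 1$ such that

$$V\big(x(k+1)\big) \le \gamma\,V\big(x(k)\big) \quad \Longrightarrow \quad V\big(x(k)\big) \le \gamma^{k}\,V\big(x(0)\big) \le \gamma^{N}\,V\big(x(0)\big), \qquad 0 \le k \le N.$$

With $\tilde{P} = R^{-1/2} P R^{-1/2}$, the FTS requirement with respect to $(c_1, c_2, R, N)$ is then met whenever

$$\gamma^{N}\,\frac{\lambda_{\max}(\tilde{P})}{\lambda_{\min}(\tilde{P})}\,c_1 < c_2,$$

and both the growth condition and this bound can be cast as LMIs (the latter after standard bounding of the eigenvalue ratio), so FTS analysis reduces to an LMI feasibility test.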
Differential linear matrix inequalities (DLMIs) are another standard tool for analyzing the FTS problem [101]. Based on DLMIs, the design of a finite-time bounded dynamic output feedback controller for time-varying linear systems was studied [102]. In 2011, Amato studied the FTS problem of impulsive dynamical linear systems and the robust FTS problem of impulsive dynamical linear systems with norm-bounded uncertainty [103–105]. In 2013, Amato gave necessary and sufficient conditions for the FTS of impulsive dynamical linear systems [106]. Compared with LMIs, DLMI-based methods are better suited to linear time-varying systems and less conservative, but they are computationally demanding and difficult to generalize to other types of complex systems. In addition, DLMI-based analysis can also be used to study the input–output finite-time stability of linear systems [107–109].
The above results all concern the FTS of linear systems. For the FTS of nonlinear systems, the following two approaches are generally used:
(1) Directly use the tools of nonlinear system theory. This approach does not restrict the nonlinearity of the system, so it is universal, but the resulting conditions are difficult to compute. For example, some early works on the FTS of nonlinear systems adopted this approach [74, 110–113]. In 2004, Mastellone studied the FTS of nonlinear discrete stochastic systems by utilizing upper bounds on the exit probability and correlation function, and further gave a design method for the finite-time stabilizing controller [114]. In 2009, Yang carried out FTS analysis and synthesis for nonlinear stochastic systems with impulses based on a Lyapunov-like function method [115].
(2) Use methods similar to those for the linear systems above. This approach requires special restrictions on the nonlinearity of the system, and the result is generally expressed as the feasibility of LMIs (or DLMIs) [116–120]. For example, the works [121–123] studied the robust finite-time control problem for a class of nonlinear systems with norm-bounded parameter uncertainties and external disturbances, in which the nonlinearities are first approximated by a multilayer feedback neural network model. Elbsat [124] studied the finite-time state feedback control problem for a class of discrete-time nonlinear systems with conic-type nonlinearities and external disturbance inputs. A robust and resilient linear state feedback controller is designed based on LMI techniques to ensure that the closed-loop system is finite-time stable for all nonlinearities (lying in a hypersphere with an uncertain center), all admissible external disturbances, and all controller gain perturbations within a given bound.
1.2.2 FTS for Stochastic MJSs

With the development of MJS theory, finite-time analysis and synthesis problems have been widely studied for MJSs. The existing research results fall mainly into three categories. First, taking into account various complex situations, such as time-delays, uncertainties, time variation, external disturbances, and nonlinearities, the finite-time analysis and synthesis problems of MJSs under complex conditions have been studied [125–129]. In the study of time-delay MJSs, two kinds of sufficient conditions are of interest. One type is independent of the size of the delay and is called delay-independent; for systems with small delays, this kind of condition is quite conservative. Therefore, another type of FTS condition that involves the size of the time-delay, namely the delay-dependent condition, has attracted widespread attention. Since delay-dependent conditions can regulate the system better than delay-independent conditions and are less conservative, scholars pay more attention to delay-dependent FTS analysis and synthesis, using techniques such as the model transformation method, the delay partitioning method, the parameterized model transformation method, the free-weighting matrix method, and so on [130–134].
Meanwhile, owing to various uncertainties, robust control theory is used to adjust the finite-time performance of MJSs. One direction is robust FTS for system analysis, and the other is finite-time controller design for regulating system performance so that the closed-loop system is robustly finite-time stabilized over the given time interval. Common research methods include the Riccati equation, linear matrix inequalities, and robust H∞ control [134–139]. The advantage of H∞ control is that the H∞ norm of the transfer function describes the maximum gain from the input energy to the output energy; by solving an optimization problem, the influence of disturbances with finite power spectra can be minimized, and a typical finite-time H∞ performance index is sketched after this paragraph. Of course, there are also many other FTS analysis and synthesis results for complex MJSs, such as 2D MJSs, singular MJSs, nonlinear MJSs, positive MJSs, neutral MJSs, switching MJSs, distributed-parameter MJSs, etc. [140–145].
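For reference, a typical stochastic finite-time H∞ performance requirement over the horizon (a generic sketch; the exact index varies across the cited works) asks that, under zero initial conditions,

$$\mathbb{E}\left[\sum_{k=0}^{N} z^{T}(k)\,z(k)\right] \le \gamma^{2} \sum_{k=0}^{N} w^{T}(k)\,w(k)$$

hold for all admissible disturbances $w(k)$, where $z(k)$ is the controlled output and $\gamma > 0$ is the prescribed attenuation level; minimizing $\gamma$ subject to the finite-time boundedness conditions yields the optimization problem mentioned above.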
The second category of FTS results for MJSs concerns the variation of the TPs. Initially, the relevant studies were based on time-invariant TPs, and all the elements of the TP matrix were assumed to be known in advance. However, in practical engineering applications, it is not easy to obtain all the elements of the TP matrix accurately. Therefore, some scholars have studied the finite-time performance of MJSs with partially known TPs [146, 147]. In [148, 149], where the TPs are unknown but bounded, convex polytopes or boundedness conditions are used to describe the variation of the TPs, and robust FTS analysis and synthesis methods for such systems were studied. Since the TPs often change with time, non-homogeneous Markovian processes commonly arise in practical engineering systems. As a result, the FTS analysis and synthesis of non-homogeneous MJSs have also been extensively investigated [150, 151].
Since the jumping time of an MJS follows an exponential distribution, the transition probability matrix of the MJS is time-invariant, which limits the applicability of MJSs. Compared with MJSs, semi-MJSs are characterized by a fixed TP matrix together with a dwell-time probability density function matrix. Because the restriction on the probability distribution function is relaxed, semi-MJSs have a wider range of applications. Therefore, it is of great theoretical value and practical significance to study the FTS analysis and synthesis of semi-MJSs. By the method of supplementary variables and model transformation, asynchronous event-triggered sliding mode control, event-triggered guaranteed cost control, memory sampled-data control, observer-based sliding mode control, and H∞ filtering were addressed for semi-MJSs within a finite-time interval [152–156].
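The distinction can be sketched as follows (discrete-time view, with notation assumed here for illustration): for a homogeneous MJS, the sojourn time $\tau_i$ in mode $i$ is geometrically distributed,

$$\Pr\{\tau_i = m\} = (1 - p_{ii})\,p_{ii}^{\,m-1}, \qquad m = 1, 2, \ldots,$$

whereas a semi-Markovian jump model allows $\tau_i$ to follow an arbitrary (mode-dependent) probability mass function, so the effective transition probabilities become functions of the elapsed sojourn time rather than constants.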
The third category of research results combines the finite-time performance study with other control strategies. For example, finite-time sliding mode control methods for MJSs were presented in [157–160]. As a high-performance robust control strategy, sliding mode control has the advantages of insensitivity to parameter perturbations, good transient performance, fast response, and strong robustness; it is a typical variable structure control method. In other words, the designed sliding mode controller drives the state trajectory of the closed-loop system onto the designed sliding surface; once the state trajectory reaches the sliding surface and remains in sliding motion, it is largely unaffected by external factors. Therefore, as a common design method, sliding mode control has been applied to the finite-time performance study of MJSs; a minimal sketch of the sliding function is given below.
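As a minimal sketch of the discrete-time setting (generic notation, not the specific designs of [157–160]), a linear sliding function is chosen as

$$s(k) = G\,x(k),$$

with $G$ designed so that the reduced-order dynamics restricted to the sliding surface $s(k) = 0$ have the desired finite-time behavior; the controller is then synthesized so that the trajectories are driven into, and kept within, a small boundary layer of the surface during the prescribed time interval, covering both the reaching phase and the sliding phase.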
1.3 Outline
To help readers understand the structure of the book clearly, the main research content is shown in Fig. 1.1. The outline of the book is as follows.
This chapter introduces the research background, motivations, and research prob-
lems for finite-time analysis and synthesis of MJSs, including FTS for typical kinds of
MJSs (involving linear and nonlinear MJSs, switching MJSs, and non-homogeneous
MJSs) and FTS for MJSs combined with other control strategies (involving sliding
mode control, dissipative control, and non-periodic triggered control).
Chapter 2 investigates stochastic FTS, stochastic finite-time boundedness, and stochastic finite-time stabilization for discrete-time linear and nonlinear MJSs by relaxing the strict decrease of the Lyapunov energy function. For linear MJSs, the finite-time control design can be transformed into the solution of a corresponding Riccati equation or linear matrix inequality. However, for nonlinear MJSs, it is impossible to design a general controller satisfying the transient performance of the systems. To deal with the nonlinearities of MJSs, a neural network is utilized to approximate the nonlinearities by linear difference inclusions. The designed controller keeps the state trajectories of the systems within the pre-specified bounds over the given time interval, rather than requiring asymptotic convergence to the equilibrium point, despite the approximation error and external disturbance.
Chapter 3 extends the results of stochastic FTS, stochastic finite-time boundedness, and stochastic finite-time stabilization to switching MJSs with time-delay. The coupling of switching signals and jumping modes brings great challenges to the finite-time analysis and synthesis of such systems. To analyze the transient perfor-
closed-loop system is finite-time bounded and meets the desired passive performance requirement simultaneously under ideal conditions. Then, considering the more practical situation in which the controller's mode is not synchronized with the system mode, an asynchronous finite-time passive controller is designed for the more general hidden Markovian jump systems.
Chapter 6 combines the finite-time performance with sliding mode control to
achieve better performance indicators for discrete-time MJSs. As a high-performance
robust control strategy, sliding mode control has the advantages of insensitivity to
parameter perturbation, good transient performance, fast response speed, and strong
robustness. Therefore, this chapter focuses on the finite-time sliding mode control
problem for MJSs with uncertainties. Firstly, the sliding mode function and sliding mode controller are designed such that the closed-loop discrete-time MJSs are stochastically finite-time stabilizable and fulfill the given H∞ performance index. Moreover, an appropriate asynchronous sliding mode controller is constructed, and conditions on the coefficient parameters are given and proved so that the closed-loop discrete-time MJSs can be driven onto the sliding surface. In addition, the transient performance of the discrete-time MJSs during the reaching phase and the sliding motion phase is investigated.
Chapters 2–6 consider the transient performance of MJSs over the entire frequency range, which leads to over-design and conservativeness. To reduce the engineering conservativeness of controller design for MJSs from the perspectives of both the time domain and the frequency domain, Chap. 7 presents finite-time multiple-frequency control for MJSs. By introducing frequency information into the controller design, multiple-frequency control with finite-time performance is analyzed in both the time domain and the frequency domain. Moreover, in order to overcome the effect of stochastic jumping among different modes on system performance, the derandomization method is introduced into the controller design by transforming the original stochastic multimodal systems into deterministic single-mode ones.
Chapter 8 concerns not only the transient behavior of MJSs in the finite-time domain but also the consensus behavior of the subsystem states. The finite-time consensus protocol design approach for network-connected systems with random Markovian jump topologies, communication delays, and external disturbances is therefore analyzed in this chapter. By relaxing the condition that the disagreement dynamics asymptotically converge to zero, the finite-time consensus protocol ensures that the disagreement dynamics of the interconnected networks are confined within the prescribed bound over the fixed time interval. By exploiting certain features of the Laplacian matrix in real Jordan form, a new model transformation method is proposed, which makes the designed control protocol more general.
Chapter 9 proposes higher-order moment stabilization in the finite-time domain for MJSs to guarantee that not only the mean and variance of the states remain within the desired range over the fixed time interval, but also the higher-order moments of the states are limited to the given bounds. Firstly, the derandomization method is utilized to transform the multimode stochastic jumping systems into single-mode deterministic systems. Then, with the help of the cumulant generating function from statistical theory, the higher-order moment components of the states are obtained by first-order Taylor expansion. Compared with existing control methods, higher-order moment stabilization improves the control effect by taking the higher-order moment information of the state into consideration.
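For background, the cumulant generating function of a scalar random variable $X$ is (a standard statistical fact; its specific use in Chapter 9 is only summarized here)

$$K_X(t) = \ln \mathbb{E}\big[e^{tX}\big], \qquad \kappa_n = \left.\frac{d^{n} K_X(t)}{dt^{n}}\right|_{t=0},$$

so that $\kappa_1$ is the mean, $\kappa_2$ the variance, and the higher-order cumulants $\kappa_n$ encode the higher-order moment information that the stabilization scheme constrains.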
Chapter 10 adopts model predictive control to optimize the finite-time perfor-
mance of MJSs. Firstly, by means of online rolling optimization, the minimum
energy consumption is realized, and the required transient performance is satis-
fied simultaneously under the assumption that the jumping time of MJSs follows an
exponential distribution. Then, the proposed results are extended to semi-MJSs. The
finite-time performance under the model predictive control scheme is analyzed in the situation where the TP matrix at each time instant depends on the history of the elapsed switching sequence. Compared with MJSs, semi-MJSs are characterized by a fixed TP matrix together with a dwell-time probability density function matrix. Because the restriction on the probability distribution function is relaxed, the finite-time model predictive control of semi-MJSs has a wider range of applications.
Chapter 11 sums up the results of the book and discusses the possible research
directions in future work.
References
1. Krasovskii, N.M., Lidskii, E.A.: Analytical design of controllers in systems with random
attributes. Automat. Rem. Control 22, 1021–1025 (1961)
2. Ji, Y., Chizeck, H.J.: Controllability, stability and continuous-time Markovian jump linear
quadratic control. IEEE Trans. Autom. Control 35(7), 777–788 (1990)
3. Florentin, J.J.: Optimal control of continuous-time Markovian stochastic systems. J. Electron.
Control 10(6), 473–488 (1961)
4. Sworder, D.: Feedback control of a class of linear systems with jump parameters. IEEE Trans.
Autom. Control. 14(1), 9–14 (1969)
5. Wonham, W.M.: Random differential equations in control theory. Probab. Methods Appl.
Math. 2, 131–212 (1971)
6. Feng, X., Loparo, K.A., Ji, Y., Chizeck, H.J.: Stochastic stability properties of jump linear
systems. IEEE Trans. Autom. Control. 37, 38–53 (1992)
7. Karan, M., Shi, P., Kaya, Y.: Transition probability bounds for the stochastic stability robust-
ness of continuous and discrete-time Markovian jump linear systems. Automatica 42, 2159–
2168 (2006)
8. Mariton, M.: On controllability of linear systems with stochastic jump parameters. IEEE
Trans. Autom. Control. 31(7), 680–683 (1986)
9. Shi, P., Boukas, E.K., Agarwal, R.: Robust control for Markovian jumping discrete-time
systems. Int. J. Syst. Sci. 30(8), 787–797 (1999)
10. Mariton, M.: Robust jump linear quadratic control: a mode stabilizing solution. IEEE Trans.
Autom. Control 30(11), 1145–1147 (1985)
11. Shi, P., Boukas, E.K., Agarwal, R.: Kalman filtering for continuous-time uncertain systems
with Markovian jumping parameters. IEEE Trans. Autom. Control 44(8), 1592–1597 (1999)
12. Sathananthan, S., Keel, L.H.: Optimal practical stabilization and controllability of systems
with Markovian jumps. Nonlinear Anal. 54(6), 1011–1027 (2003)
13. He, S.P., Liu, F.: Exponential passive filtering for a class of nonlinear jump systems. J. Syst.
Eng. Electron. 20(4), 829–837 (2009)
14. Yao, X.M., Guo, L.: Composite anti-disturbance control for Markovian jump nonlinear sys-
tems via disturbance observer. Automatica 49(8), 2538–2545 (2013)
15. Aliyu, M.D.S., Boukas, E.K.: H∞ control for Markovian jump nonlinear systems. In: Pro-
ceedings of the 37th IEEE Conference on Decision and Control, Tampa, FL, USA, vol. 1, pp.
766–771 (1998)
16. Liu, Y., Wang, Z., Liang, J., Liu, X.: Stability and synchronization of discrete-time Markovian
jumping neural networks with mixed mode-dependent time delays. IEEE Trans. Neural. Netw.
20(7), 1102–1116 (2009)
17. Zhang, Y., Xu, S., Zou, Y., Lu, J.: Delay-dependent robust stabilization for uncertain discrete-
time fuzzy Markovian jump systems with mode-dependent time delays. Fuzzy Sets Syst.
164(1), 66–81 (2011)
18. Balasubramaniam, P., Lakshmanan, S.: Delay-range dependent stability criteria for neural
networks with Markovian jumping parameters. Nonlinear Anal. Hybrid. Syst. 3(4), 749–756
(2009)
19. Zhang, M., Shi, P., Ma, L.H., Cai, J.P., Su, H.Y.: Quantized feedback control of fuzzy Marko-
vian jump systems. IEEE Trans. Cybern. 49(9), 3375–3384 (2019)
20. Wang, J.W., Wu, H.N., Guo, L.: Robust H∞ fuzzy control for uncertain nonlinear Markovian
jump systems with time-varying delay. Fuzzy Sets Syst. 212, 41–61 (2013)
21. Sheng, L., Gao, M., Zhang, W.H.: Dissipative control for Markovian jump non-linear stochas-
tic systems based on T-S fuzzy model. Int. J. Syst. Sci. 45(5), 1213–1224 (2014)
22. Wu, Z.G., Dong, S.L., Su, H.Y., Li, C.D.: Asynchronous dissipative control for fuzzy Marko-
vian jump systems. IEEE Trans. Cybern. 48(8), 2426–2436 (2018)
23. Song, J., Niu, Y.G., Zou, Y.Y.: Asynchronous sliding mode control of Markovian jump systems
with time-varying delays and partly accessible mode detection probabilities. Automatica 93,
33–41 (2018)
24. Tong, D.B., Zhu, Q.Y., Zhou, W.N.: Adaptive synchronization for stochastic T-S fuzzy neural
networks with time-delay and Markovian jumping parameters. Neurocomputing 17(14), 91–
97 (2013)
25. Tao, J., Lu, R.Q., Su, H.Y., Shi, P., Wu, Z.G.: Asynchronous filtering of nonlinear Markovian
jump systems with randomly occurred quantization via T-S fuzzy models. IEEE Trans. Fuzzy
Syst. 26(4), 1866–1877 (2018)
26. He, S.P., Liu, F.: Fuzzy model-based fault detection for Markovian jump systems. Int. J.
Robust. Nonlinear Control 19(11), 1248–1266 (2009)
27. He, S.P., Liu, F.: Filtering-based robust fault detection of fuzzy jump systems. Fuzzy Sets
Syst. 185(1), 95–110 (2011)
28. Cheng, P., Wang, J.C., He, S.P., Luan, X.L., Liu, F.: Observer-based asynchronous fault
detection for conic-type nonlinear jumping systems and its application to separately excited
DC motor. IEEE Trans. Circ. Syst.-I 67(3), 951–962 (2020)
29. Luan, X.L., Liu, F., Shi, P.: Neural network based stochastic optimal control for nonlinear
Markovian jump systems. Int. J. Innov. Comput. Inf. Control 6(8), 3715–3728 (2010)
30. Luan, X.L., Liu, F.: Design of performance robustness for uncertain nonlinear time-delay
systems via neural network. J. Syst. Eng. Electron. 18(4), 852–858 (2007)
31. Luan, X.L., Liu, F., Shi, P.: Passive output feedback control for non-linear systems with time
delays. Proc. Inst. Mech. Eng. Part I-J Syst. Control Eng. 223(16), 737–743 (2009)
32. Yin, Y., Shi, P., Liu, F.: H∞ scheduling control on stochastic neutral systems subject to actuator
nonlinearity. Int. J. Syst. Sci. 44(7), 1301–1311 (2013)
33. Luan, X.L., Liu, F., Shi, P.: H∞ filtering for nonlinear systems via neural networks. J. Frankl.
Inst. 347, 1035–1046 (2010)
34. Luan, X.L., Liu, F.: Neural network-based H∞ filtering for nonlinear systems with time-
delays. J. Syst. Eng. Electron 19(1), 141–147 (2008)
35. Luan, X.L., He, S.P., Liu, F.: Neural network-based robust fault detection for nonlinear jump
systems. Chaos Soliton Fract. 42(2), 760–766 (2009)
36. Tong, D.B., Xu, C., Chen, Q.Y., Zhou, W.N., Xu, Y.H.: Sliding mode control for nonlinear
stochastic systems with Markovian jumping parameters and mode-dependent time-varying
delays. Nonlinear Dyn. 100, 1343–1358 (2020)
37. Liu, Q.D., Long, Y., Park, J.H., Li, T.S.: Neural network-based event-triggered fault detection
for nonlinear Markovian jump system with frequency specifications. Nonlinear Dyn. 103,
2671–2687 (2021)
38. Chang, R., Fang, Y.M., Li, J.X., Liu, L.: Neural-network-based adaptive tracking control for
Markovian jump nonlinear systems with unmodeled dynamics. Neurocomputing 179, 44–53
(2016)
39. Wang, Z., Yuan, J.P., Pan, Y.P., Che, D.J.: Adaptive neural control for high order Markovian
jump nonlinear systems with unmodeled dynamics and dead zone inputs. Neurocomputing
247, 62–72 (2017)
40. Zhong, X.N., He, H.B., Zhang, H.G., Wang, Z.S.: Optimal control for unknown discrete-
time nonlinear Markovian jump systems using adaptive dynamic programming. IEEE Trans.
Neural Netw. Learn. Syst. 25(12), 2141–2155 (2014)
41. Zhong, X.N., He, H.B., Zhang, H.G., Wang, Z.S.: A neural network based online learning and
control approach for Markovian jump systems. Neurocomputing 149(3), 116–123 (2015)
42. Jiang, H., Zhang, H.G., Luo, Y.H., Wang, J.Y.: Optimal tracking control for completely
unknown nonlinear discrete-time Markovian jump systems using data-based reinforcement
learning method. Neurocomputing 194(19), 176–182 (2016)
43. Bolzern, P., Colaneri, P., Nicolao, G.D.: Markovian jump linear systems with switching tran-
sition rates: mean square stability with dwell-time. Automatica 46, 1081–1088 (2010)
44. Bolzern, P., Colaneri, P., Nicolao, G.D.: Almost sure stability of Markovian jump linear
systems with deterministic switching. IEEE Trans. Autom. Control 58(1), 209–213 (2013)
45. Song, Y., Yang, J., Yang, T.C.: Almost sure stability of switching Markovian jump linear
systems. IEEE Trans. Autom. Control 61(9), 2638–2643 (2015)
46. Cong, S.: A result on almost sure stability of linear continuous-time Markovian switching
systems. IEEE Trans. Autom. Control 63(7), 2226–2233 (2018)
47. Hou, L.L., Zong, G.D., Zheng, W.X.: Exponential l2 − l∞ control for discrete-time switching
Markovian jump linear systems. Circ. Syst. Signal Process. 32, 2745–2759 (2013)
48. Chen, L.J., Leng, Y., Guo, A.F.: H∞ control of a class of discrete-time Markovian jump linear
systems with piecewise-constant TPs subject to average dwell time switching. J. Frankl. Inst.
349(6), 1989–2003 (2012)
49. Wang, J.M., Ma, S.P.: Resilient dynamic output feedback control for discrete-time descrip-
tor switching Markovian jump systems and its applications. Nonlinear Dyn. 93, 2233–2247
(2018)
50. Qu, H.B., Hu, J., Song, Y., Yang, T.H.: Mean square stabilization of discrete-time switching
Markovian jump linear systems. Optim. Control Appl. Methods 40(1), 141–151 (2019)
51. Wang, G.L., Xu, L.: Almost sure stability and stabilization of Markovian jump systems with
stochastic switching. IEEE Trans. Autom. Control (2021). https://doi.org/10.1109/TAC.2021.3069705
52. Lian, J., Liu, J., Zhuang, Y.: Mean stability of positive Markovian jump linear systems with
homogeneous and switching transition probabilities. IEEE Trans. Circ. Syst.-II 62(8), 801–
805 (2015)
53. Bolzern, P., Colaneri, P., Nicolao, G.: Stabilization via switching of positive Markovian jump
linear systems. In: Proceedings of the 53rd IEEE Conference on Decision and Control, Los
Angeles, CA, USA (2014)
54. Aberkane, S.: Bounded real lemma for nonhomogeneous Markovian jump linear systems.
IEEE Trans. Autom. Control 58(3), 797–801 (2013)
55. Yin, Y.Y., Shi, P., Liu, F., Lim, C.C.: Robust control for nonhomogeneous Markovian jump
processes: an application to DC motor device. J. Frankl. Inst. 351(6), 3322–3338 (2014)
56. Aberkane, S.: Stochastic stabilization of a class of nonhomogeneous Markovian jump linear
systems. Syst. Control Lett. 60(3), 156–160 (2011)
57. Liu, Y.Q., Yin, Y.Y., Liu, F., Teo, K.L.: Constrained MPC design of nonlinear Markovian
jump system with nonhomogeneous process. Nonlinear Anal. Hybrid Syst. 17, 1–9 (2015)
58. Liu, Y.Q., Liu, F., Toe, K.L.: Output feedback control of nonhomogeneous Markovian jump
system with unit-energy disturbance. Circ. Syst. Signal Process. 33(9), 2793–2806 (2014)
59. Ding, Y.C., Liu, H., Shi, K.B.: H∞ state-feedback controller design for continuous-time
nonhomogeneous Markovian jump systems. Optimal Control Appl. Methods 20(1), 133–144
(2016)
60. Yin, Y.Y., Shi, P., Liu, F., Toe, K.L.: Filtering for discrete-time non-homogeneous Markovian
jump systems with uncertainties. Inf. Sci. 259, 118–127 (2014)
61. Yin, Y., Shi, P., Liu, F., Teo, K.L.: Fuzzy model-based robust H∞ filtering for a class of
nonlinear nonhomogeneous Markov jump systems. Signal Process 93(9), 2381–2391 (2013)
62. Hou, T., Ma, H.J., Zhang, W.H.: Spectral tests for observability and detectability of periodic
Markovian jump systems with nonhomogeneous Markovian chain. Automatica 63, 175–181
(2016)
63. Hou, T., Ma, H.J.: Stochastic H2 /H∞ control of discrete-time periodic Markovian jump
systems with detectability. In: Proceedings of the 54th Annual Conference of the Society of
Instrument and Control Engineers of Japan, Hangzhou, China, pp. 530–535 (2015)
64. Tao, J., Su, H., Lu, R., Wu, Z.G.: Dissipativity-based filtering of nonlinear periodic Markovian
jump systems: the discrete-time case. Neurocomputing 171, 807–814 (2016)
65. Aberkane, S., Dragan, V.: H∞ filtering of periodic Markovian jump systems: application to
filtering with communication constraints. Automatica 48(12), 3151–3156 (2012)
66. Zhang, L.X.: H∞ estimation for discrete-time piecewise homogeneous Markovian jump linear
systems. Automatica 45(11), 2570–2576 (2009)
67. Wu, Z.G., Ju, H.P., Su, H., Chu, J.: Stochastic stability analysis of piecewise homogeneous
Markovian jump neural networks with mixed time-delays. J. Frankl. Inst. 349(6), 2136–2150
(2012)
68. Luan, X.L., Shunyi, Z., Shi, P., Liu, F.: H∞ filtering for discrete-time Markovian jump systems
with unknown transition probabilities. Int. J. Adapt. Control Signal Process 28(2), 138–148
(2014)
69. Luan, X.L., Shunyi, Z., Liu, F.: H∞ control for discrete-time Markovian jump systems with
uncertain transition probabilities. IEEE Trans. Autom. Control. 58(6), 1566–1572 (2013)
70. Kamenkov, G.: On stability of motion over a finite interval of time. J. Appl. Math. Mech. 17,
529–540 (1953)
71. Lebedev, A.: On stability of motion during a given interval of time. J. Appl. Math. Mech. 18,
139–148 (1954)
72. Dorato, P.: Short-Time Stability in Linear Time-Varying Systems. Polytechnic Institute of
Brooklyn Publishing, Brooklyn, New York (1961)
73. LaSalle, J., Lefschetz, S.: Stability by Lyapunov's Direct Methods: With Applications. Aca-
demic Press Publishing, New York (1961)
74. Weiss, L., Infante, E.F.: On the stability of systems defined over a finite time interval. Natl.
Acad. Sci. 54(1), 44–48 (1965)
75. Weiss, L., Infante, E.: Finite time stability under perturbing forces and on product spaces.
IEEE Trans. Autom. Control 12(1), 54–59 (1967)
76. Michel, A.N., Wu, S.H.: Stability of discrete systems over a finite interval of time. Int. J.
Control 9(6), 679–693 (1969)
77. Weiss, L.: On uniform and nonuniform finite-time stability. IEEE Trans. Autom. Control
14(3), 313–314 (1969)
78. Bhat, S.P., Bernstein, D.S.: Finite-time stability of continuous autonomous systems. SIAM J.
Control Opim. 38(3), 751–766 (2000)
79. Chen, W., Jiao, L.C.: Finite-time stability theorem of stochastic nonlinear systems. Automatica
46(12), 2105–2108 (2010)
80. Garrard, W.L., McClamroch, N.H., Clark, L.G.: An approach to suboptimal feedback control
of nonlinear systems. Int. J. Control 5(5), 425–435 (1967)
81. Van Mellaert, L., Dorato, P.: Numerical solution of an optimal control problem with a prob-
ability criterion. IEEE Trans. Autom. Control 17(4), 543–546 (1972)
82. San Filippo, F.A., Dorato, P.: Short-time parameter optimization with flight control application.
Automatica 10(4), 425–430 (1974)
83. Grujic, W.L.: Finite time stability in control system synthesis. In: Proceedings of the 4th IFAC
Congress, Warsaw, Poland, pp. 21–31 (1969)
84. Haimo, V.T.: Finite-time control and optimization. SIAM J Control Opim. 24(4), 760–770
(1986)
85. Liu, L., Sun, J.: Finite-time stabilization of linear systems via impulsive control. Int. J. Control
8(6), 905–909 (2008)
86. Germain, G., Sophie, T., Jacques, B.: Finite-time stabilization of linear time-varying contin-
uous systems. IEEE Trans. Autom. Control 4(2), 364–369 (2009)
87. Moulay, E., Perruquetti, W.: Finite time stability and stabilization of a class of continuous
systems. J. Math. Anal. Appl. 323(2), 1430–1443 (2006)
88. Abdallah, C.T., Amato, F., Ariola, M.: Statistical learning methods in linear algebra and
control problems: the examples of finite-time control of uncertain linear systems. Linear
Algebra Appl. 351, 11–26 (2002)
89. Dorato, P., Famularo, D.: Robust finite-time stability design via linear matrix inequalities. In:
Proceedings of the 36th IEEE Conference on Desicion and Control, San Diego, pp. 1305–1306
(1997)
90. Amato, F., Ariola, M., Dorato, P.: Robust finite-time stabilization of linear systems depending
on parametric uncertainties. In: Proceedings of the 37th IEEE Conference on Decision and
Control, Tampa, Florida, pp. 1207–1208 (1998)
91. Amato, F., Ariola, M., Dorato, P.: Finite-time control of linear systems subject to parametric
uncertainties and disturbances. Automatica 37(9), 1459–1463 (2001)
92. Amato, F., Ariola, M.: Finite-time control of discrete-time linear system. IEEE Trans. Autom.
Control 50(5), 724–729 (2005)
93. Amato, F., Ariola, M., Cosentino, C.: Finite-time stabilization via dynamic output feedback.
Automatica 42(2), 337–342 (2006)
94. Hong, Y.G., Huang, J., Yu, Y.: On an output feedback finite-time stabilization problem. IEEE
Trans. Autom. Control 46(2), 305–309 (2001)
95. Yu, S., Yu, X., Shirinzadeh, B.: Continuous finite-time control for robotic manipulators with
terminal sliding mode. Automatica 41(11), 1957–1964 (2005)
96. Huang, X., Lin, W., Yang, B.: Global finite-time stabilization of a class of uncertain nonlinear
systems. Automatica 41(5), 881–888 (2005)
97. Feng, J.E., Wu, Z., Sun, J.B.: Finite-time control of linear singular systems with parametric
uncertainties and disturbances. Acta Automatica Sinica 31(4), 634–637 (2005)
98. Moulay, E., Dambrine, M., Yeganefax, N.: Finite time stability and stabilization of time-delay
systems. Syst. Control Lett. 57(7), 561–566 (2008)
99. Zuo, Z., Li, H., Wang, Y.: New criterion for finite-time stability of linear discrete-time systems
with time-varying delay. J. Frankl. Inst. 350(9), 2745–2756 (2013)
100. Stojanovic, S.B., Debeljkovic, D.L., Antic, D.S.: Robust finite-lime stability and stabilization
of linear uncertain time-delay systems. Asian J. Control 15(5), 1548–1554 (2013)
101. Amato, F., Ariola, M., Cosentino, C.: Finite-time control of discrete-time linear systems:
analysis and design conditions. Automatica 46(5), 919–924 (2010)
102. Amato, F., Ariola, M., Cosentino, C.: Necessary and sufficient conditions for finite-time
stability of linear systems. In: Proceedings of the 2003 American Control Conference, Denver,
Colorado, pp. 4452–4456 (2003)
103. Amato, F., Ariola, M., Cosentino, C.: Finite-time stability of linear time-varying systems:
analysis and controller design. IEEE Trans. Autom. Control 55(4), 1003–1008 (2009)
104. Amato, F., Ambrosino, R., Ariola, M.: Robust finite-time stability of impulsive dynamical
linear systems subject to norm-bounded uncertainties. Int. J. Robust Nonlinear Control 21(10),
1080–1092 (2011)
105. Amato, F., Ariola, M., Cosentino, C.: Finite-time stabilization of impulsive dynamical linear
systems. Nonlinear Anal. Hybrid Syst. 5(1), 89–101 (2011)
106. Amato, F., Tommasig, D., Pironti, A.: Necessary and sufficient conditions for finite-time
stability of impulsive dynamical linear systems. Automatica 49(8), 2546–2550 (2013)
References 17
107. Amato, F., Ambrosino, R., Cosentino, C.: Input-output finite time stabilization of linear sys-
tems. Automatica 46(9), 1558–1562 (2010)
108. Amato, F., Ambrosino, R., Cosentino, C.: Input-output finite-time stability of linear systems.
In: Proceedings of the 17th Mediterranean Conference on Control and Automation, Makedo-
nia, Palace, Thessaloniki, Greece, pp. 342–346 (2009)
109. Amato, F., Carannante, G., De Tommasi, G.: Input-output finite-time stabilization of a class
of hybrid systems via static output feedback. Int. J. Control 84(6), 1055–1066 (2011)
110. Weiss, L.: Converse theorems for finite time stability. SIAM J. Appl. Math. 16(6), 1319–1324
(1968)
111. Ryan, E.P.: Finite-time stabilization of uncertain nonlinear planar systems. Dyn. Control 1(1),
83–94 (1991)
112. Hong, Y.G., Wang, J., Cheng, D.: Adaptive finite-time control of nonlinear systems with
parametric uncertainty. IEEE Trans. Autom. Control 51(5), 858–862 (2006)
113. Nersesov, S.G., Nataxaj, C., Avis, J.M.: Design of finite time stabilizing controller for non-
linear dynamical systems. Int. J. Robust Nonlinear Control 19(8), 900–918 (2009)
114. Mastellone, S., Dorato, P., Abdallah, C.T.: Finite-time stability of discrete-time nonlinear
systems: analysis and design. In: Proceedings of the 43rd IEEE Conference on Decision and
Control, Atlantis, Paradise Island, Bahamas, pp. 2572–2577 (2004)
115. Yang, Y., Li, J., Chen, G.: Finite-time stability and stabilization of nonlinear stochastic hybrid
systems. J. Math. Anal. Appl. 356(1), 338–345 (2009)
116. Chen, F., Xu, S., Zou, Y.: Finite-time boundedness and stabilization for a class of non-linear
quadratic time-delay systems with disturbances. IET Control Theor. Appl. 7(13), 1683–1688
(2013)
117. Yin, J., Khoo, S., Man, Z.: Finite-time stability and instability of stochastic nonlinear systems.
Automatica 47(12), 2671–2677 (2011)
118. Khoo, S., Yin, J.L., Man, Z.H.: Finite-time stabilization of stochastic nonlinear systems in
strict-feedback form. Automatica 49(5), 1403–1410 (2013)
119. Amato, F., Cosentesto, C., Merola, A.: Sufficient conditions for finite-time stability and stabi-
lization of nonlinear quadratic systems. IEEE Trans. Autom. Control 55(2), 430–434 (2010)
120. He, S., Liu, F.: Finite-time H∞ fuzzy control of nonlinear jump systems with time delays via
dynamic observer-based state feedback. IEEE Trans. Fuzzy. Syst. 20(4), 605–614 (2012)
121. Luan, X.L., Liu, F., Shi, P.: Robust finite-time H∞ control for nonlinear jump systems via
neural networks. Circ. Syst. Signal Process 29(3), 481–498 (2010)
122. Luan, X.L., Liu, F., Shi, P.: Neural-network-based finite-time H∞ control for extended Marko-
vian jump nonlinear systems. Int. J. Adapt. Control Signal Process 24(7), 554–567 (2010)
123. Luan, X.L., Liu, F., Shi, P.: Finite-time filtering for nonlinear stochastic systems with partially
known transition jump rates. IET Control Theor. Appl. 4(5), 735–745 (2010)
124. Elbsat, M.N., Yaz, E.E.: Robust and resilient finite-time bounded control of discrete-time
uncertain nonlinear systems. Automatica 49(7), 2292–2296 (2013)
125. Zhang, Y., Shi, P., Nguang, S.K.: Robust finite-time fuzzy H∞ control for uncertain time-delay
systems with stochastic jumps. J. Frankl. Inst. 351(8), 4211–4229 (2014)
126. Luan, X.L., Zhao, C.Z., Liu, F.: Finite-time H∞ control with average dwell-time constraint
for time-delay Markovian jump systems governed by deterministic switches. IET Control
Theor. Appl. 8(11), 968–977 (2014)
127. Chen, C., Gao, Y., Zhu, S.: Finite-time dissipative control for stochastic interval systems with
time-delay and Markovian switching. Appl. Math. Comput. 310, 169–181 (2017)
128. Yan, Z., Zhang, W., Zhang, G.: Finite-time stability and stabilization of It ô stochastic sys-
tems with Markovian switching: mode-dependent parameters approach. IEEE Trans. Autom.
Control 60(9), 2428–2433 (2015)
129. Lyu, X.X., Ai, Q.L., Yan, Z.G., He, S.P., Luan, X.L., Liu, F.: Finite-time asynchronous resilient
observer design of a class of non-linear switched systems with time-delays and uncertainties.
IET Control. Theor. Appl. 14(7), 952–963 (2020)
130. Nie, R., He, S.P., Luan, X.L.: Finite-time stabilization for a class of time-delayed Markovian
jump systems with conic nonlinearities. IET Control Theor. Appl. 13(9), 1279–1283 (2019)
18 1 Introduction
131. Yan, Z., Song, Y., Park, J.H.: Finite-time stability and stabilization for stochastic Markov
jump systems with mode-dependent time delays. ISA Trans. 68, 141–149 (2017)
132. Wen, J., Nguang, S.K., Shi, P.: Finite-time stabilization of Markovian jump delay systems–a
switching control approach. Int. J. Robust Nonlinear Control 7(2), 298–318 (2016)
133. Chen, Y., Liu, Q., Lu, R., Xue, A.: Finite-time control of switched stochastic delayed systems.
Neurocomputing 191, 374–379 (2016)
134. Ma, Y., Jia, X., Zhang, Q.: Robust observer-based finite-time H∞ control for discrete-time
singular Markovian jumping system with time delay and actuator saturation. Nonlinear Anal.
Hybrid. Syst. 28, 1–22 (2018)
135. Shen, H., Li, F., Yan, H.C., Karimi, H.R., Lam, H.K.: Finite-time event-triggered H∞ control
for T-S fuzzy Markovian jump systems. IEEE Trans. Fuzzy Syst. 26(5), 3122–3135 (2018)
136. Luan, X.L., Min, Y., Ding, Z.T., Liu, F.: Stochastic given-time H∞ consensus over Markovian
jump networks with disturbance constraint. Trans. Inst. Meas. Control 39(8), 1253–1261
(2017)
137. Cheng, J., Zhu, H., Zhong, S.M., Zeng, Y., Dong, X.C.: Finite-time H∞ control for a class
of Markovian jump systems with mode-dependent time-varying delays via new Lyapunov
functionals. ISA Trans. 52(6), 768–774 (2013)
138. Song, X.N., Wang, M., Ahn, C.K., Song, S.: Finite-time H∞ asynchronous control for non-
linear Markovian jump distributed parameter systems via quantized fuzzy output-feedback
approach. IEEE Trans. Cybern. 50(9), 4098–4109 (2020)
139. Ma, Y.C., Jia, X.R., Zhang, Q.L.: Robust finite-time non-fragile memory H∞ control for
discrete-time singular Markovian jump systems subject to actuator saturation. J. Frankl. Inst.
354(18), 8256–8282 (2017)
140. Cheng, P., He, S.P., Luan, X.L., Liu, F.: Finite-region asynchronous H∞ control for 2D Marko-
vian jump systems. Automatica (2021). https://doi.org/10.1016/j.automatica.2021.109590
141. Ren, H.L., Zong, G.D., Karimi, H.R.: Asynchronous finite-time filtering of Markovian jump
nonlinear systems and its applications. IEEE Trans. Syst. Man Cybern. Syst. 51(3), 1725–1734
(2019)
142. Li, S.Y., Ma, Y.: Finite-time dissipative control for singular Markovian jump systems via
quantizing approach. Nonlinear Anal. Hybrid Syst. 27, 323–340 (2018)
143. Ren, C.C., He, S.P., Luan, X.L., Liu, F., Karimi, H.R.: Finite-time l2 -gain asynchronous
control for continuous-time positive hidden Markovian jump systems via T-S fuzzy model
approach. IEEE Trans. Cybern. 51(1), 77–87 (2021)
144. Yan, H.C., Tian, Y.X., Li, H.Y., Zhang, H., Li, Z.C.: Input-output finite-time mean square
stabilization of nonlinear semi-Markovian jump systems. Automatica 104, 82–89 (2021)
145. Ju, Y.Y., Cheng, G.F., Ding, Z.S.: Stochastic H∞ finite-time control for linear neutral semi-
Markovian jump systems under event-triggering scheme. J. Frankl. Inst. 358(2), 1529–1552
(2021)
146. Ren, H.L., Zong, G.D.: Robust input-output finite-time filtering for uncertain Markovian jump
nonlinear systems with partially known transition probabilities. Int. J. Adapt. Control Signal.
Process. 31(10), 1437–1455 (2017)
147. Zong, G.D., Yang, D., Hou, L.L., Wang, Q.Z.: Robust finite-time H∞ control for Markovian
jump systems with partially known transition probabilities. J. Frankl. Inst. 350(6), 1562–1578
(2013)
148. Cheng, J., Park, J.H., Liu, Y.J., Liu, Z.J., Tang, L.M.: Finite-time H∞ fuzzy control of nonlinear
Markovian jump delayed systems with partly uncertain transition descriptions. Fuzzy Sets
Syst. 314, 99–115 (2017)
149. Luan, X.L., Zhao, C.Z., Liu, F.: Finite-time stabilization of switching Markovian jump systems
with uncertain transition rates. Circ. Syst. Signal Process 34(12), 3741–3756 (2015)
150. Chen, F., Luan, X.L., Liu, F.: Observer based finite-time stabilization for discrete-time Marko-
vian jump systems with Gaussian transition probabilities. Circ. Syst. Signal Process 33(10),
3019–3035 (2014)
151. Luan, X.L., Shi, P., Liu, F.: Finite-time stabilization for Markovian jump systems with Gaus-
sian transition probabilities. IET Control Theor. Appl. 7(2), 298–304 (2013)
References 19
152. Wang, J., Ru, T.T., Xia, J.W., Shen, H., Sreeram, V.: Asynchronous event-triggered sliding
mode control for semi-Markovian jump systems within a finite-time interval. IEEE Trans.
Circuits Syst.-I 68(1), 458–468 (2021)
153. Zong, G.D., Ren, H.L.: Guaranteed cost finite-time control for semi-Markovian jump systems
with event-triggered scheme and quantization input. Int. J. Robust. Nonlinear Control 29(15),
5251–5273 (2019)
154. Chen, J., Zhang, D., Qi, W.H., Cao, J.D., Shi, K.B.: Finite-time stabilization of T-S fuzzy
semi-Markovian switching systems: a coupling memory sampled-data control approach. J.
Frankl. Inst. 357(16), 11265–11280 (2020)
155. Wang, J.M., Ma, S.P., Zhang, C.H.: Finite-time H∞ filtering for nonlinear continuous-time
singular semi-Markovian jump systems. Asian J. Control 21(2), 1017–1027 (2019)
156. Qi, W.H., Zong, G.D., Karimi, H.R.: Finite-time observer-based sliding mode control for
quantized semi-Markovian switching systems with application. IEEE Trans. Ind. Electron
16(2), 1259–1271 (2020)
157. Song, J., Niu, Y.G., Zou, Y.Y.: A parameter-dependent sliding mode approach for finite-time
bounded control of uncertain stochastic systems with randomly varying actuator faults and its
application to a parallel active suspension system. IEEE Trans. Ind. Electron 65(10), 2455–
2461 (2018)
158. Cao, Z.R., Niu, Y.G., Zhao, H.J.: Finite-time sliding mode control of Markovian jump systems
subject to actuator faults. Int. J. Control Autom. Syst. 16, 2282–2289 (2018)
159. Li, F.B., Du, C.L., Yang, C.H., Wu, L.G., Gui, W.H.: Finite-time asynchronous sliding
mode control for Markovian jump systems. Automatica (2021). https://doi.org/10.1016/j.
automatica.2019.108503
160. Ren, C.C., He, S.P.: Sliding mode control for a class of nonlinear positive Markovian jump
systems with uncertainties in a finite-time interval. Int. J. Control Autom. Syst. 17(7), 1634–
1641 (2019)
Chapter 2
Finite-Time Stability and Stabilization
for Discrete-Time Markovian Jump
Systems
2.1 Introduction
In practical engineering applications, more consideration has been paid to the sys-
tem’s transient behavior in a restricted time instead of the steady-state performance in
the infinite-time domain. Subsequently, to decrease the conservativeness of controller
design, the finite-time stability theory was proposed by Dorato in 1961. Consider the
impacts of exogenous disturbance on the system, finite-time boundedness was fur-
ther explored. Since then, an incredible number of research results on finite-time
stability, finite-time boundedness, and finite-time stabilization for linear determin-
istic systems have been intensively studied [1–3]. Furthermore, by considering the
influence of the transition probability on the control performance, the stochastic
finite-time stability, stochastic finite-time boundedness, and stochastic finite-time
stabilization for stochastic Markovian jump systems (MJSs) have also been exten-
sively investigated [4–6].
On the other hand, nonlinearities are the common feature of practical plants. How
to guarantee the transient performance of nonlinear MJSs in the finite-time span is
a challenging issue. Assuming that the nonlinear terms satisfy the Lipschitz con-
ditions, the asynchronous finite-time filtering, finite-time dissipative filtering, and
asynchronous output finite-time control problems have been solved [7–9]. Using
the Takagi–Sugeno fuzzy model to represent the nonlinear MJSs, the asynchronous
finite-time control, finite-time H∞ control, finite-time H∞ filtering, etc., have been
addressed in [10–12]. In addition to the above methods of dealing with nonlinearities
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 21
X. Luan et al., Robust Control for Discrete-Time Markovian Jump Systems in the
Finite-Time Domain, Lecture Notes in Control and Information Sciences 492,
https://doi.org/10.1007/978-3-031-22182-8_2
22 2 Finite-Time Stability and Stabilization . . .
of MJSs, neural networks were also efficient tools to analyze the transient perfor-
mance of nonlinear MJSs [13, 14].
The primary purpose of this chapter is to investigate the FTS and finite-time sta-
bilization problems for discrete-time linear and nonlinear MJSs. For nonlinear MJSs
with time-delays and external disturbances, neural networks are utilized to repre-
sent nonlinear terms through linear difference inclusions (LDIs) under state-space
representation. The mode-dependent finite-time controllers are designed to make
the linear and nonlinear MJSs stochastic finite-time stabilizable. By constructing
the appropriate stochastic Lyapunov function, sufficient conditions are derived from
linear matrix inequalities (LMIs).
M
πi j ≥ 0, πi j = 1, ∀i ∈ M.
j=1
A(rk ) = Ai , A(rk ) = Ai , Bu (rk ) = Bui , Bu (rk ) = Bui , Bw (rk ) = Bwi .
(2.3)
Ai and Bui are the time-varying but norm-bounded uncertainties that satisfy
Ai Bui = Mi Fi (k) N1i N2i . (2.4)
2.2 Preliminaries and Problem Formulation 23
Mi , N1i and N2i are known mode-dependent matrices with suitable dimensions and
Fi (k) is the time-varying unknown matrix function with Lebesgue norm assessable
elements satisfying FiT (k) Fi (k) ≤ I .
Concerning uncertain linear MJS (2.1), the following state feedback controller is
constructed:
u(k) = K i x(k) (2.5)
where K i ∈ R m×n is state feedback gain to be designed. Then, the resulting closed-
loop MJS ensures that:
x(k + 1) = Āi + Āi x(k) + Bwi w(k)
(2.6)
x(k) = x0 , rk = r0 , k = 0
Remark 2.1 In fact, stochastic finite-time stability in the presence of external distur-
bance results in the concept of stochastic finite-time boundedness. Vice versa, letting
w(k) = 0, the concept in Definition 2.1 is equivalent to stochastic FTS. In other
words, the uncertain discrete-time linear MJS (2.1) (setting w(k) = 0, u(k) = 0)
is said to be stochastic finite-time stable (FTS) with respect to (c1 c2 N R)
if Eq. (2.7) holds. It is obvious that stochastic finite-time boundedness indicates
stochastic finite-time stability, but in turn, may not be set up.
Remark 2.2 Both stochastic finite-time boundedness and stochastic finite-time sta-
bility are open-loop concepts, which belong to the analysis of open-loop MJS with
u(k) = 0. With the designed control in the form of formulation (2.5), the closed-
loop system (2.6) is stochastic finite-time stabilizable, which gives the concept of
stochastic finite-time stabilization.
Remark 2.3 Note that Lyapunov asymptotic stability and stochastic finite-time sta-
bility are different concepts. The concept of Lyapunov asymptotic stability is pri-
marily known to the control community. However, an MJS is stochastic FTS if its
state stays inside the desired bounds during the fixed time interval. Therefore, it can
be concluded that an MJS that is stochastic FTS may not be Lyapunov asymptotic
stability. Conversely, the Lyapunov asymptotic stability could be not stochastic FTS
if its state exceeds the given bounds during the transient response process.
24 2 Finite-Time Stability and Stabilization . . .
This subsection will first consider the stochastic finite-time stabilization problem
for uncertain discrete-time linear MJS (2.1). Before presenting the main results, the
following lemma will be helpful.
Lemma 2.1 [15] Assume that H , L, Q, and S are real matrices with appropriate
dimensions, for the positive scalar θ > 0 and U T U ≤ I , we can get
H + LU S + S T U T L T ≤ H + θ L L T + θ −1 S T S. (2.8)
S11 S12
Lemma 2.2 (Schur complement lemma) For a given matric S = with
S21 S22
S11 ∈ R r ×r , the following statements are equivalent:
(a) S < 0 ;
T −1
(b) S11 < 0, S22 − S12 S11 S12 < 0;
−1 T
(c) S22 < 0, S11 − S12 S22 S12 < 0.
λ2 c1 + λ3 d 2 < α −N c2 λ1 (2.10)
where λ1 = λmin P̃i , λ2 = λmax P̃i , λ3 = λmax (Q), P̃i = R −1/2 Pi R −1/2 .
Along the state trajectories of system (2.1) with u(k) = 0, the corresponding time
derivative of Vi (k) is given by
2.3 Stochastic Finite-Time Stabilization for Linear MJSs 25
M
Vi (k + 1) = πi j x T (k + 1)P j x(k + 1)
j=1
M
= πi j [x T (k)(Ai + Ai )T P j (Ai + Ai ) x(k)
j=1
+ 2x T (k)(Ai + Ai )T P j w(k) + w T (k)Bwi
T
P j Bwi w(k)].
Listing the above Eq. (2.11) at different sampling time, we can get
k
Vi (k) < α k Vr0 (0) + αl w T (k − l)Qw(k − l)
l=1
k
<α k
Vr0 (0) + λ3 α l−k
w (k − l)w(k − l) .
T
l=1
Denote P̃i = R −1/2 Pi R −1/2 , λ1 = λmin P̃i , λ2 = λmax P̃i , λ3 = λmax (Q).
For α ≥ 1, we have the following relationship:
k
Vi (k) < α k
Vr0 (0) + λ3 α l−k
w (k − l)w(k − l)
T
l=1
< α N λ2 c1 + λ3 d 2 .
Condition (2.10) means that for k ∈ {1, . . . , N }, the equality E x T (k)Rx(k) < c2
holds. This completes the proof.
Theorem 2.2 For a given positive scalar α ≥ 1, the closed-loop system (2.6) is said
to be stochastic finite-time stabilizable with respect to (c1 c2 N R d), if there
exist symmetric positive-definite matrices X i and Q, matrix Yi , and positive scalars
θi , i ∈ M such that
26 2 Finite-Time Stability and Stabilization . . .
⎡ ⎤
−α X i 0 L T1i L T3i
⎢ 0 −α Q L T2i 0 ⎥
⎢ ⎥
⎣ L 1i L 2i Z i + θi MiT Mi 0 ⎦ < 0 (2.12)
L 3i 0 0 −θi I
where
√ √
L T1i = πi1 (Ai X i + Bui Yi )T . . . πi M (Ai X i + Bui Yi )T ,
√ √
L T2i = πi1 Bwi
T
. . . πi M Bwi
T
,
√ √
L T3i = πi1 (N1i X i + N2i Yi )T . . . πi M (N1i X i + N2i Yi )T ,
Z i = −diag X 1 · · · X M .
Proof Inequalities (2.12)–(2.15) in Theorem 2.2 can be derived from Theorem 2.1
by some matrix operations and transformations.
Remark 2.4 To obtain the optimal stochastic finite-time controller for uncertain
discrete-time linear MJS (2.1), the upper bound c2 can be described as the following
optimization problem:
min c2
X i ,Yi ,Q,λ1 ,λ2 ,c1 ,θi (2.16)
s.t. LMI (2.12)−(2.15).
In this subsection, we consider the following discrete-time nonlinear MJS with time-
delay:
⎧
⎨ x(k + 1) = A(rk )x(k) + Ad (rk )x(k − h) + Bu (rk )u(k)
+ Bw (rk )w(k) + C(rk ) f (x(k), rk ) (2.17)
⎩
x f = ϕ f , f ∈ {−h, . . . , 0}, rk = r0 , k = 0
2.4 Stochastic Finite-Time Stabilization for Nonlinear MJSs 27
Ni (x(k), Wi1 , Wi2 , . . . , Wi L ) = ψi L [Wi L . . . ψi2 ][Wi2 [ψi1 [Wi1 x(k)]]] (2.18)
where the weight matrices Wir ∈ R nir ×ni(r −1) , r = 1, . . . , L from the r − l th layer to
the r − L th layer are the parameters to be determined, ψir [·], r = 1, . . . , L is the
activation function defined as ψir [·] = [φi1 (ςi1 ), φi2 (ςi2 ), . . . , φinr (ςinr )]T , where
n r indicates the neurons of r -th layer and
1 − e−ςi h /qi h
φi h (ςi h ) = δi h , qi h , δi h > 0, h = 1, 2, . . . , n r . (2.19)
1 + e−ςi h /qi h
The minimum and maximum derivatives of the activation function φi h are desig-
nated as follows:
minζi h ∂φ∂ζ
i h (ζi h )
,v = 0
si h (v, φi h ) = ih
∂φi h (ζi h ) (2.20)
maxζi h ∂ζi h , v = 1.
For the r -th layer of a neural network, the activation function φi h can be rewritten
as following min-max manner:
where z i h (v), v = 0, 1 are a series of positive real numbers with z i h (v) > 0 and
z i h (0) + z i h (1) = 1. According to the approximation ability of the neural network,
there exist weight matrices Wir∗ defined as:
For each mode i, denote a set of n r dimensional index vectors of the r -th layer as
where σi is utilized as a binary character. Clearly, the r -th layer with n r neurons has
2nr combinations of the binary character with v = 0, 1 and the elements of indicator
vectors for all L layers neural network have 2n L × · · · × 2n 2 × 2n 1 combinations as
follows:
= γn L ⊕ · · · ⊕ γn 2 ⊕ γ n 1 .
By applying condition (2.20) and resorting to the compact description [16], the
multilayer neural network (2.18) can be expressed as follows:
where
Aσi = diag[s Li h (σ Li h , φ Li h )]
W L∗ · · · diag[s2i h (σ2i h , φ2i h )]W2∗ diag[s1i h (σ1i h , φ1i h )]W1∗ ,
μσi
σi ∈
1
1
= ··· z i Ln L (vi Ln L ) · · · z i L1 (vi L1 ) · · ·z i1n 1 (vi1n 1 ) · · · z i11 (vr 11 ) = 1.
vi Ln L =0 vi1n 1 =0
.. ..
. .
vi L1 =0 vi11 =0
where
Ãi = μσi Aσi + Ai ,
σi ∈
Remark 2.5 The detailed structure and quantitative value of approximation error
f i (xk ) are not necessary, but the norm-bounded assumption is expected. This
requirement is surely accomplished in practical situations. Also, the bounds of
approximation error can be different on the basis of different nonlinearities in each
mode.
Based on the LDI representation (2.23) and the state feedback control law in
Eq. (2.5), we obtain the following closed-loop system:
x(k + 1) = Āi x(k) + Adi x(k − h) + Bwi w(k) + Ci f i (x(k))
(2.25)
x f = ϕ f , f ∈ {−h, . . . , 0}, rk = r0 , k = 0
where
Āi = Ãi + Bui K i . (2.26)
This chapter aims to find sufficient conditions that ensure the closed-loop system
(2.25) stochastic finite-time stabilizable. Before proceeding further, we introduce
the following proposition for the derivation of our main results.
(2.27)
c22 λ1
c12 λ2 + c12 λ3 + d 2 λ3 + c22 ρi2 λ4 < (2.28)
(1 + α) N
Proof For the closed-loop system (2.25), choose a stochastic Lyapunov function
candidate as
k−1
Vi (k) = x T (k)Pi x(k) + x Tf Sx f .
f =k−h
where
M
T
P̄ j = πi j P j , ζ (k) = x T (k) x T (k − h) w T (k) f iT (x(k)) ,
j=1
⎡ ⎤
ĀiT P̄ j Āi − Pi + S ∗ ∗ ∗
⎢ ATdi P̄ j Āi ATdi P̄ j Adi − S ∗ ∗ ⎥
i = ⎢
⎣
⎥.
⎦
T
Bwi P̄ j Āi T
Bwi T
P̄ j Adi Bwi P̄ j Bwi ∗
CiT P̄ j Āi CiT P̄ j Adi CiT P̄ j Bwi CiT P̄ j Ci
Vi (k)
k
< (1 + α)k Vi (0) + (1 + α)k− f +1 w( f − 1)T Qw( f − 1)
f =1
2.4 Stochastic Finite-Time Stabilization for Nonlinear MJSs 31
k
+ (1 + α)k− f +1 c22 ρi2 λmax (G)
f =1
⎡
−1
= (1 + α)k ⎣x(0)T Pi x(0) + x( f )T Sx( f )
f =−h
⎤
k
k
+ (1 + α)1− f w( f − 1)T Qw( f − 1) + (1 + α)1− f c22 ρi2 λmax (G)⎦
f =1 f =1
< (1 + α) c12 λ2 + c12 λ5 + d 2 λ3 + c22 ρi2 λ4 .
N
(2.31)
Note that
k−1
Vi (k) = x T (k)Pi x(k) + x( f )T Sx( f )
f =k−h
Condition (2.28) implies that for k ∈ {1, 2, . . . , N }, the state trajectories do not
exceed the upper bound c2 , i.e., E{x T (k)Rx(k)} < c2 . This completes the proof.
Theorem 2.3 For given scalars α ≥ 0, h > 0, and ρi > 0, the closed-loop system
(2.25) is stochastic finite-time stabilizable via state feedback controller in the form
of (2.5) respect to (c1 c2 N R d), if there exist matrices X i = X iT > 0, Yi , H =
H T > 0, Q = Q T > 0, and G = G T > 0 such that
⎡ ⎤
−(1 + α)X i N1iT 0 0 Xi
⎢ N −M + N M M 0 ⎥
⎢ 1i 5i 5i 3i 4i ⎥
⎢ −(1 + α)Q 0 ⎥
⎢ 0 M3iT 0 ⎥<0 (2.34)
⎣ 0 M4iT 0 −(1 + α)G 0 ⎦
Xi 0 0 0 −H
Proof By using the Schur complement lemma, from condition (2.27) in Proposition
2.1, it follows that:
⎡ ⎤
−(1 + α)Pi + S ∗ ∗ ∗ ∗
⎢ 0 −S ∗ ∗ ∗ ⎥
⎢ ⎥
⎢ 0 −(1 + α)Q ∗ ∗ ⎥
⎢ 0 ⎥≤0 (2.37)
⎣ 0 0 0 −(1 + α)G ∗ ⎦
M1i M2i M3i M4i −M5i
where √ √ T
M1i = π ĀT , . . . , πi M ĀiT ,
√ i1 iT √ T
M2i = π A , . . . , πi M ATdi ,
√ i1 di √ T T
M3i = π B T , . . . , πi M Bwi ,
√ i1 wi √ T
M4i = πi1 CiT , . . . , πi M CiT ,
M5i = diag P1−1 , . . . , PM−1 .
where T
√ √ T
N1i = πi1 (X i + Bui Yi )T , . . . , πi M Ãi X i + Bui Yi ,
⎡ √ √ √ √ ⎤
πi1 Adi H ATdi πi1 πi2 Adi H ATdi · · · πi1 πi M Adi H ATdi
√ √
⎢ πi2 πi1 Adi H ATdi √ √
⎢ πi2 Adi H ATdi · · · πi2 πi M Adi H ATdi ⎥
⎥
N5i = ⎢ .. .. .. .. ⎥.
⎣ . . . . ⎦
√ √ √ √
πi M πi1 Adi H ATdi πi M πi2 Adi H ATdi πi M Adi H ATdi
1
λmax ( X̃ i ) = ,
λmin ( P̃i )
and
X̃ i = P̃i−1 = R 1/2 X i R 1/2 .
It is easy to check that the above inequality is guaranteed by imposing the following
conditions:
λmax ( X̃ i ) < 1, λ6 < λmin ( X̃ i ),
c12 c22
+ c12 hλ5 + d 2 λ3 + c22 ρi2 λ4 < ,
λ6 (1 + α) N
To verify the control effect of the designed finite-time controller for discrete-time
nonlinear MJSs, the parameters for system (2.17) with three operation modes are as
follows:
0.88 −0.05 −0.2 0.1 2
A1 = , Ad1 = , Bu1 = ,
0.40 −0.72 0.2 0.15 1
0.4 0
Bw1 = , C1 = ,
0.5 0.1
2 0.24 −0.6 0.4 1
A2 = , Ad2 = , Bu2 = ,
0.80 0.32 0.2 0.6 −1
0.2 0
Bw2 = , C2 = ,
0.6 0.3
−0.8 0.16 −0.3 0.1 1
A3 = , Ad3 = , Bu3 = ,
0.80 0.64 0.2 0.5 1
0.1 0
Bw2 = , C3 = ,
0.3 0.5
f 1 (x(k)) = f 2 (x(k)) = f 3 (x(k)) = sin(x1 (k)) cos(x2 (k)).
34 2 Finite-Time Stability and Stabilization . . .
Choose a single hidden layer neural network with two hidden neurons to approx-
imate the nonlinear function f i (x(k)). Select the parameters of activation function
associated with the hidden layer to be qi h = 0.5, δil = 1. For the chosen activation
function, one has si h (0, φi h ) = 0, si h (1, φi h ) = 1. The optimal weight parameters
Wir∗ to be trained offline are acquired by the back propagation algorithm as follows:
−0.86017 −0.81881
Wi1∗ = ,
−0.95025 0.96405
Wi2∗ = −0.57752 −0.58342 .
Then the approximation errors of the neural networks can be obtained as ρi = 0.022.
According to the obtained Wir∗ , Aσi can be obtained as follows:
The state trajectories of the open-loop and closed-loop nonlinear MJS (2.23) and
(2.25) are shown in Figs. 2.2 and 2.3, respectively. From Fig. 2.2, it could be easily
found that the free MJS (2.23) is not stochastic finite-time stable because the state
trajectories exceed the given bound c2 = 2. With the designed controller, the state
trajectories are retained within two ellipsoid regions, which satisfactorily verify the
closed-loop MJS (2.25) is stochastic finite-time stabilizable.
2.5 Simulation Analysis 35
3.5
3
jumping modes
2.5
1.5
0.5
0 1 2 3 4 5 6 7
time
15
10
5
x2
-5
-10
-15
-5 0 5 10 15 20 25
x1
1
x2
-1
-2
-3
-3 -2 -1 0 1 2 3
x1
2.6 Conclusion
In this chapter, the stochastic finite-time stability and finite-time stabilization prob-
lems are investigated for a class of discrete-time linear MJSs. Based on the derived
results, the finite-time stability and finite-time stabilization problems for discrete-
time MJSs with nonlinearities, time-delays and the norm-bounded exogneous dis-
turbances are further addressed. The multi-layer neural networks are utilized to
parameterize the nonlinearities. Despite of the approximation errors of neural net-
works, time-delays, and the exogneous disturbances, the designed controller can
make the closed-loop systems finite-time stabilized and finite-time bounded. In the
next chapter, the results of finite-time controller design will be extended to discrete-
time MJSs governed by deterministic switches.
References
1. Amato, F., Ariola, M., Dorato, P.: Finite-time control of linear systems subject to parametric
uncertainties and disturbances. Automatica 37(9), 1459–1463 (2001)
2. Moulay, E., Dambrine, M., Yeganefar, N., Perruquetti, W.: Finite-time stability and stabilization
of time-delay systems. Syst. Control Lett. 57(7), 561–566 (2008)
3. Amato, F., Ambrosino, R., Ariola, M., Cosentino, C.: Finite-time stability of linear time-varying
systems with jumps. Automatica 45(5), 1354–1358 (2009)
4. Luan, X.L., Liu, F., Shi, P.: Finite-time stabilization of stochastic systems with partially known
transition probabilities. J. Dyn. Syst. Measur. Control 133(1), 014504–014510 (2011)
References 37
5. Gao, X.B., Ren, H.R., Deng, F.Q., Zhou, Q.: Observer-based finite-time H∞ control for uncer-
tain discrete-time nonhomogeneous Markovian jump systems. J. Franklin Inst. 356(4), 1730–
1749 (2019)
6. He, Q.G., Xing, M.L., Gao, X.B., Deng, F.Q.: Robust finite-time H∞ synchronization for
uncertain discrete-time systems with nonhomogeneous Markovian jump: observer-based case.
Nonlinear Control 30(10), 3982–4002 (2020)
7. Ren, H.L., Zong, G.D., Karimi, H.R.: Asynchronous finite-time filtering of Markovian jump
nonlinear systems and its applications. IEEE Trans. Syst. Man Cybern. Syst. 51(3), 1725–1734
(2019)
8. Zhang, X., He, S.P., Stojanovic, V., Luan, X.L., Liu, F.: Finite-time asynchronous dissipative
filtering of conic-type nonlinear Markovian jump systems. Sci. China Inform. Sci. 64, 1–12
(2021)
9. Cheng, P., He, S.P., Cheng, J., Luan, X.L., Liu, F.: Asynchronous output feedback control for
a class of conic-type nonlinear hidden Markovian jump systems within a finite-time interval.
IEEE Trans. Syst. Man Cybern. Syst. (2020). https://doi.org/10.1109/TSMC.2020.2980312
10. Wang, J.M., Ma, S.P., Zhang, C.H.: Finite-time H∞ control for T-S fuzzy descriptor semi-
Markovian jump systems via static output feedback. Fuzzy Sets Syst. 15, 60–80 (2019)
11. Wang, J.M., Ma, S.P., Zhang, C.H., Fu, M.Y.: Finite-time H∞ filtering for nonlinear singular
systems with nonhomogeneous Markovian jumps. IEEE Trans. Cybern. 49(6), 2133–2143
(2019)
12. Song, X.N., Wang, M., Ahn, C.K., Song, S.: Finite-time H∞ asynchronous control for nonlinear
Markovian jump distributed parameter systems via quantized fuzzy output-feedback approach.
IEEE Trans. Cybern. 50(9), 4098–4109 (2020)
13. Luan, X.L., Liu, F., Shi, P.: Robust finite-time H∞ control for nonlinear jump systems via
neural networks. Circuit. Syst. Signal Process 29(3), 481–498 (2010)
14. Luan, X.L., Liu, F., Shi, P.: Neural-network-based finite-time H∞ control for extended Marko-
vian jump nonlinear systems. Int. J. Adapt. Control Signal Process 24(7), 554–567 (2010)
15. Wang, Y., Xie, L., De Souza, C.E.: Robust control of a class of uncertain nonlinear systems.
Syst. Control Lett. 19, 139–149 (1992)
16. Limanond, S., Si, J.: Neural-network-based control design: an LMI approach. IEEE Trans.
Neural Netw. 9(6), 1422–1429 (1998)
Chapter 3
Finite-Time Stability and Stabilization
for Switching Markovian Jump Systems
Abstract This chapter extends the results of finite-time controller design to discrete-
time switching Markovian jump systems with time-delay. Considering the effect of
the average dwell time on the finite-time performance, some results on the stochastic
finite-time boundedness and stochastic finite-time stabilization with H∞ disturbance
attenuation level are given, and the relationship among three kinds of time scales,
such as time-delay, average dwell time and finite-time interval, are derived by means
of the average dwell time constraint condition.
3.1 Introduction
It is widely known that hybrid systems have been applied in diverse fields char-
acterized by the interconnection of continuous state evolution and discrete mode
switching. The stochastic Markovian jump systems (MJSs) and the switched sys-
tems, in which the jumping among different modes is the stochastic or deterministic
signal, are typical classes of hybrid systems [1–3]. Because of their broad application
prospects, numerous achievements have been made to solve the analysis of stabil-
ity and synthesis of controller design for MJSs and switched systems, respectively
[4–6]. In order to improve the performance of systems, a deterministic switching sig-
nal can be imposed on MJSs. In other words, the MJS is to be dominated following
a hierarchical structure, where a top-level supervisor is responsible for selecting the
appropriate feedback controller among several alternatives. This hierarchical system
is named as switching MJS, and was firstly introduced in [7].
Some fundamental issues of both MJSs and switched systems have been investi-
gated and lots of results have been achieved, but fewer contributions have been made
for hybrid systems subject to both stochastic jumping and deterministic switching.
The mean square stability, the almost sure stability and the exponential l2 − l∞ sta-
bility of switching MJSs have been dissolved in references [8, 9] for a different class
of switching signals satisfying the average dwell time constraint. With the same aver-
age dwell time method, references [10, 11] extended the results to switching MJSs
with uncertain transition probabilities or time-delays.
where the state variable, the control input, and the exogenous disturbances are the
same as those defined in Chap. 2. z(k) ∈ R l is the control output of the system,
h denotes the delay time, σk is the deterministic switching signal taking values in
a finite set S = {1, 2, . . . , S}, rk is a discrete-time, discrete-state Markovian chain
taking values in a finite set M = {1, 2, . . . , M} with transition probabilities:
where πiαj is the transition probabilities from mode i to mode j under switching
signal σk = α satisfying πiαj ≥ 0, M α
j=1 πi j = 1, ∀i, j ∈ M.
For the simplicity of the denotation, for each possible value of σk = α, α ∈ S,
rk = i, i ∈ M, the following equivalent substitution has been made:
A(rk ,σk ) = Aα,i , Ad (rk ,σk ) = Adα,i , Bu (rk ,σk ) = Bu α,i , Bw (rk ,σk ) = Bwα,i ,
C(rk ,σk ) = Cα,i , Cd (rk ,σk ) = Cdα,i , Du (rk ,σk ) = Du α,i , Dw (rk ,σk ) = Dwα,i .
For the system (3.1), the following state feedback controller is designed:
where
Āα,i = Aα,i + Bu α,i K α,i ,
Before moving further, we present the following definitions and lemmas that will
be necessary to derive the main results.
Definition 3.1 The free system (3.1) (setting u(k) = 0) is said to be stochastic FTB
concerning (c1 c2 N R d), where 0 < c1 < c2 , R > 0, if
x T (k1 )Rx(k1 ) ≤ c12 ⇒ x T (k2 )Rx(k2 ) < c22 , k1 ∈ {−h, . . . , 0}, k2 ∈ {1, 2 . . . , N }.
(3.5)
Definition 3.2 For the given scalars 0 < c1 < c2 , R > 0, γ > 0, the closed-loop
system (3.4) is said to be stochastic finite-time H∞ stabilizable concerning (c1 c2
N R d γ ), if the system (3.4) is stochastic finite-time stabilizable for the state
feedback controller (3.3) and under the zero-initial condition the output z(k) satisfies
N
N
E z T (k)z(k) < γ 2 E w T (k)w(k) . (3.6)
k=0 k=0
Definition 3.3 For the switching signal σk and the sampling time k > k0 , L α (k0 , k)
is used to denote the switching times of σk during the finite interval [k0 , k). If for
any given scalars L 0 > 0 and τa > 0, it has L a (k0 , k) ≤ L 0 + (k − k0 )/τa , then the
variables τa and N0 are referred to as average dwell time and chatter bound. As reg-
ularly utilized in the existing references, L 0 = 0 is chosen to simplify the controller
design.
Lemma 3.1 [12] For the symmetric positive matrix M and the matrix N , the fol-
lowing condition is met:
− N M −1 N T ≤ M − N T − N . (3.7)
42 3 Finite-Time Stability and Stabilization for Switching . . .
In this section, sufficient conditions will be presented such that the free system
(3.1) is stochastic FTB and the closed-loop system (3.4) is stochastic finite-time H∞
stabilizable.
Proposition 3.1 For given scalars δ ≥ 1, h > 0, and μ > 1, the system (3.4) is
stochastic finite-time stabilizable in regard to (c1 c2 N R d), if there are positive-
definite matrices Pα,i > 0, Pβ,i > 0, G α,i > 0, i ∈ M, α, β ∈ S and Q such that the
subsequent inequalities hold:
⎡ ⎤
ĀTα,i P̄α,i Āα,i − μPα,i + Q ĀTα,i P̄α,i Adα,i ĀTα,i P̄α,i Bwα,i
⎣ ∗ −Q + ATdα,i P̄α,i Adα,i ATdα,i P̄α,i Bwα,i ⎦<0
∗ ∗ Bwα,i P̄α,i Bwα,i − G α,i
T
(3.8)
2 < 1 (3.10)
N ln δ
τa > = τa∗ (3.11)
ln 1 − ln 2
where
M
P̄α,i = πiαj Pα, j , P̃α,i = R −1/2 Pα,i R −1/2 , Q̃ = R −1/2 Q R −1/2 ,
j=1
2 =μ N
max λmax ( P̃α,i )c12 + λmax ( Q̃)hc12 + max λmax (G α,i )d 2
.
i∈M,α∈S i∈M,α∈S
k−1
Vα,i (k) = x (k)Pα,i x(k) +
T
x T ( f )Qx( f ). (3.12)
f =k−h
3.3 Stochastic Finite-Time H∞ Control 43
where
ζ T (k) = [x T (k) x T (k − h) w T (k)],
⎡ ⎤
ĀTα,i P̄α,i Āα,i − Pα,i + Q ĀTα,i P̄α,i Adα,i ĀTα,i P̄α,i Bwα,i
α,i =⎣ ∗ −Q + ATdα,i P̄α,i Adα,i ATdα,i P̄α,i Bwα,i ⎦ .
∗ ∗ T
Bwα,i P̄α,i Bwα,i
k−1
E Vα, j (k + 1) < μx (k)Pα,i x(k) + w (k)G α,i w(k) + μ
T T
x T ( f )Qx( f )
f =k−h
Let kl , kl−1 , kl−2 , . . . be the switching instants, then in the same mode, formula
(3.15) gives
k−1
< μk−kl V (rkl , σkl , kl ) + max λmax (G α,i ) μk−θ−1 w T (θ )w(θ ).
i∈M,α∈S
θ=kl
(3.16)
44 3 Finite-Time Stability and Stabilization for Switching . . .
l −1
k
V (rkl , σkl , kl ) = x (kl ) P̄(rkl , σkl , kl )x(kl ) +
T
x T ( f )Qx( f )
f =kl −h
l −1
k
< δx T (kl ) P̄(rkl−1 , σkl , kl )x(kl ) + x T ( f )Qx( f ).
f =kl −h
According to condition (3.15), for the different switching mode, one has
l −1
k
V (rkl−1 , σkl , kl ) = x T (kl ) P̄(rkl−1 , kl )x(kl ) + x T ( f )Qx( f )
f =kl −h
l −1
k
< μkl −kl−1 V (rkl−1 , σkl−1 , kl−1 ) + max λmax (G α,i ) μkl −θ−1 w T (θ )w(θ ).
i∈M,α∈S
θ=kl−1
(3.17)
l −1
k
= δV (rkl−1 , σkl , kl ) + (1 − δ) x T ( f )Qx( f )
f =kl −h
l −1
k
< δμkl −kl−1 V (rkl−1 , σkl−1 , kl−1 ) + δ max λmax (G α,i ) μkl −θ−1 w T (θ )w(θ ).
i∈M,α∈S
θ=kl−1
(3.18)
Substituting the inequality (3.18) into formula (3.17) with μ > 1, δ ≥ 1, it yields
V (rkl , σk , k)
k−1
< μk−kl V (rkl , σkl , kl ) + max λmax (G α,i ) μk−θ−1 w T (θ )w(θ )
i∈M,α∈S
θ=kl
l −1
k
< δμk−kl−1 V (rkl−1 , σkl−1 , kl−1 ) + δ max λmax (G α,i ) μk−θ−1 w T (θ )w(θ )
i∈M,α∈S
θ=kl−1
3.3 Stochastic Finite-Time H∞ Control 45
k−1
+ max λmax (G α,i ) μk−θ−1 w T (θ )w(θ )
i∈M,α∈S
θ=kl
1 −1
k
<δ μ La k−k0
V (rk0 , σk0 , k0 ) + max λmax (G α,i ) δ L a μk−θ−1 w T (θ )w(θ )
i∈M,α∈S
θ=k0
⎤
2 −1
k
k−1
+ δ L a −1 μk−θ−1 w T (θ )w(θ ) + · · · + δ 0 μk−θ−1 w T (θ )w(θ )⎦
θ=k1 θ=kl
k−k0 /τa
<δ μ k−k0
V (rk0 , σk0 , k0 )
k
k−k0 /τa
+ max λmax (G α,i )δ μk−θ−1
w T (θ )w(θ )
i∈M,α∈S
θ=k0
N /τa
<δ μ N
V (rk0 , σk0 , k0 ) + max λmax (G α,i )d 2
. (3.19)
i∈M,α∈S
k−1
Vα,i (k) > min λmin ( P̃α,i )x T (k)Rx(k) + λmin ( Q̃) x T ( f )Rx( f )
i∈M,α∈S
f =k−h
Therefore,
δ N /τa μ N ( max λmax ( P̃α,i )c12 + λmax ( Q̃)hc12 + max λmax (G α,i )d 2 )
i∈M,α∈S i∈M,α∈S
x T (k)Rx(k) < .
min λmin ( P̃α,i )
i∈M,α∈S
(3.22)
46 3 Finite-Time Stability and Stabilization for Switching . . .
Define
which means
x T (k)Rx(k) < c2 .
Remark 3.1 It should be noticed that the derived conditions for discrete-time switch-
ing MJS (3.1) have relationships with the stochastic Markovian chain rk and the
deterministic switching signal σk . The influence of jumping and switching signals
on Lyapunov function recurrence from instant kl to kl−1 is shown in condition (3.18).
According to the recursive expression (3.18), the essential Lyapunov function rela-
tionship between k and k0 is acquired, which constitutes the foundation to guarantee
the stochastic finite-time stabilization of the closed-loop system (3.4).
Remark 3.2 From the derived condition of the average dwell time (3.11), it can be
recognized that if the time-delay is larger, the corresponding minimum average dwell
time τa∗ is also more prolonged. Therefore, to compensate for the effect of time-delay
on the instability of the system (3.1), the switching frequency among different modes
should be a little bit slow. In other words, the average dwell time stays in the same
mode for a little longer.
Based on the derived stochastic FTB conditions in Proposition 3.1, the next goal
is to acquire sufficient conditions for stochastic finite-time H∞ controller design.
Proposition 3.2 For given scalars δ ≥ 1, and μ > 1, the closed-loop system (3.4) is
stochastic finite-time stabilizable with H∞ disturbance rejection performance con-
cerning (c1 c2 N R d γ ), if there are positive-definite matrices Pα,i > 0, Pβ,i > 0
and Q such that the subsequent inequalities hold
α,i
⎡ T C̄ T T T T
⎤
11 + C̄α,i α,i μ Āα,i P̄α,i Adα,i + C̄ α,i C dα,i μ Āα,i P̄α,i Bwα,i + C̄ α,i Dwα,i
⎢ T C ⎥
=⎣ ∗ 12 + Cdα,i dα,i μATdα,i P̄α,i Bwα,i + Cdα,i
T D
wα,i ⎦ < 0
∗ ∗ T
13 + Dwα,i Dwα,i
(3.23)
N ln δ
τa > = τa∗ (3.26)
ln[c22 min λmin ( P̃α,i )] − ln μ N γ 2 d 2
i∈M,α∈S
where
11 = μ ĀTα,i P̄α,i Āα,i − μPα,i + μQ,
13 = −γ 2 + μBwα,i
T
P̄α,i Bwα,i .
Note that
⎡ T
⎤
C̄α,i
⎣ Cdα,i
T ⎦
C̄α,i Cdα,i Dwα,i ≥ 0.
T
Dwα,i
Similar to the same principles of the proof of Proposition 3.1, and under the zero
initial condition V (rk0 , σk0 , k0 ) = 0, c1 = 0, we have
48 3 Finite-Time Stability and Stabilization for Switching . . .
δ N /τa μ N γ 2 d 2
x T (k)Rx(k) < . (3.28)
min λmin ( P̃α,i )
i∈M,α∈S
Combined with conditions (3.25) and (3.26), the stochastic finite-time stabiliza-
tion with H∞ disturbance rejection performance for the closed-loop system (3.4) can
be guaranteed. Define
N
J=E z T (k)z(k) − γ 2 w T (k)w(k) .
k=0
N
≤ z T (k)z(k) − γ 2 w T (k)w(k) + μE{Vα, j (k + 1)} − μVα,i (k)
k=0
N
= ζ T (k)α,i ζ (k).
k=0
Our next target is to find the feasible solutions of the results in Proposition 3.2 by
transforming them into linear matrix inequalities (LMIs).
Theorem 3.1 The closed-loop system (3.4) is stochastic finite-time H∞ stabilizable
via state feedback controller (3.3) concerning (c1 c2 N R d γ ) with δ ≥ 1 and
μ > 1, if there are positive-definite matrices X α,i > 0, Yα,i , α ∈ M, i ∈ S and H
such that
⎡ ⎤
−μX α,i 0 T
C̃α,i L̃ T1,i X α,i
⎢ ∗ −γ 2 I T
Dwα,i L T3,i 0 ⎥
⎢ ⎥
⎢ ∗ −1 −1
0 ⎥
∗ −I + μ Cdα,i H Cdα,i μ Cdα,i H L 2,i ⎥<0
T T
⎢
⎣ ∗ ∗ ∗ −μ−1 X + μ−1 L 2,i H L T2,i 0 ⎦
∗ ∗ ∗ ∗ −μ−1 H
(3.29)
3.3 Stochastic Finite-Time H∞ Control 49
⎡ ⎤
M
β α
⎢μ j=1 πi j X β, j − 2μX α,i πi1 X α,i ··· πiαM X α,i ⎥
⎢ ⎥
⎢ ∗ −X α,1 · · · ⎥
⎢ 0 ⎥≤0 (3.30)
⎢ .. .. ⎥
⎣ ∗ ∗ . . ⎦
∗ ∗ ∗ −X α,M
N ln δ
τa > = τa∗ (3.33)
ln c22 μ−N − ln λγ 2 d 2
where
T
C̃α,i = (Cα,i X α,i + Du α,i Yα,i )T ,
α T
α T
L̃ T1,i = πi1 Ãα,i πi2 Ãα,i · · · πiαM ÃTα,i ,
where α T
α T
L T1,i = πi1 Āα,i πi2 Āα,i . . . πiαM ĀTα,i ,
α T
α T
L T2,i = πi1 Adα,i πi2 Adα,i . . . πiαM ATdα,i ,
α T
α T
L T3,i = πi1 Bwα,i πi2 Bwα,i . . . πiαM Bwα,i
T
,
Implementing the matrix conversion to the above condition, it leads to the subse-
quent inequality
⎡ ⎤
−μPα,i + μQ 0 T
C̄α,i L T1,i 0
⎢ ∗ −γ 2 I Dwα,i
T
L T3,i 0 ⎥
⎢ ⎥
⎢ ∗ ∗ −I 0 C ⎥
dα,i ⎥ < 0.
⎢
⎣ ∗ ∗ ∗ −μ−1 Pα,i−1
L 2,i ⎦
∗ ∗ ∗ ∗ −μQ
Using Schur complement lemma to the above inequality again, one has
⎡ ⎤
−μPα,i 0 T
C̄α,i L T1,i I
⎢ ∗ −γ I
2 T
Dwα,i T
L 3,i 0 ⎥
⎢ ⎥
⎢ ∗ ∗ −I + μ−1 C Q −1 C T μ−1 C Q −1 L T 0 ⎥ < 0.
⎢ dα,i dα,i dα,i 2,i ⎥
⎣ ∗ ∗ ∗ −1
−μ−1 Pα,i + μ−1 L 2,i Q −1 L T2,i 0 ⎦
∗ ∗ ∗ ∗ −μ−1 Q −1
−1
Implementing a congruence to the above inequality by diag{Pα,i , I, I, I, I } and
−1 −1
letting X α,i = Pα,i , Yα,i = K α,i X α,i , H = Q , the LMI (3.29) can be derived.
By using Schur complement lemma, inequality (3.24) can be rewritten as
⎡
M α ⎤
β
−μ πi j Pβ, j πi1 · · · πiαM
⎢ ⎥
⎢ j=1 ⎥
⎢ ∗ −1
−Pα,1 ··· 0 ⎥
⎢ ⎥ ≤ 0. (3.34)
⎢ .. .. ⎥
⎣ ∗ ∗ . . ⎦
−1
∗ ∗ ∗ −Pα,M
−1
Implementing a congruence to condition (3.34) by diag{Pα,i , I, . . . , I }, it leads
to the following inequality:
⎡ ⎤
M
β α α
⎢−μ j=1 πi j X α,i Pβ, j X α,i πi1 X α,i ··· πi M X α,i ⎥
⎢ ⎥
⎢ ∗ −X α,1 ··· ⎥
⎢ 0 ⎥ ≤ 0. (3.35)
⎢ .. .. ⎥
⎣ ∗ ∗ . . ⎦
∗ ∗ ∗ −X α,M
Then
M
β
M
β
−μ πi j X α,i Pβ, j X α,i ≤ μ πi j X β, j − 2μX α,i . (3.36)
j=1 j=1
3.4 Observer-Based Finite-Time H∞ Control 51
Formulas (3.35) and (3.36) lead to linear matrix inequality (3.30) in Theorem 3.1.
On the other hand, consider the following conditions:
1
min λmin ( P̃α,i ) = ,
i∈M,α∈S max λmax ( X̃ α,i )
i∈M,α∈S
and
−1
X̃ α,i = P̃α,i = R 1/2 X α,i R 1/2 .
Making the assumption max λmax ( X̃ α,i ) < λ, it implies R 1/2 X α,i R 1/2 < λI .
i∈M,α∈S
Then, inequality (3.37) is equal to LMI (3.31) in Theorem 3.1. Meanwhile, condition
(3.28) follows that:
where x̄k and ȳk are the state and output variables to be estimated, K α,i and Hα,i are
the controller and the observer gains to be designed simultaneously.
52 3 Finite-Time Stability and Stabilization for Switching . . .
T
Letting ek = xk − x̄k and x̃k = xkT ekT , the corresponding error closed-loop sys-
tem follows that:
⎧
⎪
⎨ x̃k+1 = Ãα,i x̃k + Ãdα,i x̃k−h + B̃wα,i wk
z k = C̃α,i x̃k + C̃dα,i x̃k−h + Dwα,i wk (3.39)
⎩ x̃ = ϕ T ϕ T − ηT T , f ∈ {−h, . . . , 0}, r (0) = r
⎪
f f f f 0
where
Aα,i + Bu α,i K α,i −Buα,i K α,i Adα,i 0
Ãα,i = , Ãdα,i = ,
0 Aα,i − Hα,i E α,i 0 Adα,i − Hα,i E dα,i
Bwi
B̃wi = , C̃α,i = Cα,i + Duα,i K α,i,ξk −Duα,i K α,i , C̃dα,i = Cdα,i 0 .
Bwi
The control problem to be dealt with in this subsection is to find suitable con-
troller and the observer gains K α,i and Hα,i such that the error closed-loop sys-
tem (3.39) is stochastic finite-time stabilizable with H∞ performance concerning
(c1 c2 N R d γ ).
The following Proposition 3.3 gives the sufficient conditions to investigate if the
error closed-loop system (3.39) is finite-time stabilizable with H∞ performance, and
it will be utilized in the solution of controller and observer gains.
Proposition 3.3 For given scalars δ ≥ 1, h > 0, and μ > 1, the error closed-
loop system (3.39) is finite-time stabilizable with H∞ performance concerning
(c1 c2 N R d γ ), if there are positive-definite matrices Pα,i > 0, Pβ,i > 0 and Q
such that the subsequent inequalities hold
⎡ ⎤
11 + C̃α,i
T
C̃α,i μ ÃTα,i P̄α,i Ãdα,i + C̃α,i
T
C̃dα,i μ ÃTα,i P̄α,i B̃wα,i + C̃α,i
T
Dwα,i
⎣ ∗ 12 + C̃dα,iT
C̃dα,i μ ÃTdα,i P̄α,i B̃α,i + C̃dα,i
T
Dwα,i ⎦ < 0
∗ ∗ 13 + Dwα,i Dwα,i
T
(3.40)
P̄α,i ≤ δ P̄β,i (3.41)
1 < 2 (3.42)
N ln δ
τa > = τa∗ (3.43)
ln 2 − ln 1
where
M
P̄α,i = πiαj Pα, j , P̃α,i = R −1/2 Pα,i R −1/2 ,
j=1
3.4 Observer-Based Finite-Time H∞ Control 53
13 = −γ 2 + μ B̃wα,i
T
P̄α,i B̃wα,i ,
1 =μ N
γ d + max
2 2
λmax ( P̃α,i )c12 + λmax ( Q̃)hc12 ,
i∈M,α∈S
k−1
Vα,i (k) = x T (k)Pα,i x(k) + x T ( f )Qx( f ). (3.44)
f =k−h
Because
⎡ T
⎤
C̃α,i
⎣ C̃ T ⎦ C̃α,i C̃dα,i Dwα,i ≥ 0.
dα,i
T
Dwα,i
It is easy to get
⎡ ⎤
11 μ ĀTα,i P̄α,i Adα,i μ ĀTα,i P̄α,i Bwα,i
α,i =⎣ ∗ 12 μATdα,i P̄α,i Bwα,i ⎦ < 0 (3.45)
∗ ∗ 13
which implies
μE Vα, j (k + 1) < μVα,i (k) + γ 2 w T (k)w(k).
Assuming that kl , kl−1 , kl−2 , . . . are the switching instants, then in the same mode
without switching, formula (3.46) leads to
Combining Eqs. (3.41) and (3.44), during the different switching modes, it yields
l −1
k
V (rkl , σkl , kl ) = x̃ T (kl ) P̄(rkl , σkl , kl )x̃(kl ) + x T ( f ) Q̃x( f )
f =kl −h
l −1
k
< δx T (kl ) P̄(rkl−1 , σkl , kl )x(kl ) + x T ( f ) Q̃x( f ).
f =kl −h
l −1
k
V (rkl , σkl , kl ) < δx (kl ) P̄(rkl−1 , σkl , kl )x(kl ) +
T
x T ( f ) Q̃x( f )
f =kl −h
⎡ ⎤
l −1
k l −1
k
= δ ⎣V (rkl−1 , σkl , kl ) − x T ( f ) Q̃x( f )⎦ + x T ( f ) Q̃x( f )
f =kl −h f =kl −h
l −1
k
= δV (rkl−1 , σkl , kl ) + (1 − δ) x T ( f ) Q̃x( f )
f =kl −h
l −1
k
< δμkl −kl−1 V (rkl−1 , σkl−1 , kl−1 ) + δ μkl −θ−1 γ 2 w T (θ )w(θ ).
θ=kl−1
(3.48)
Noticing that μ > 1, δ ≥ 1 and substituting Eq. (3.48) into Eq. (3.47), it has
3.4 Observer-Based Finite-Time H∞ Control 55
l −1
k
E V (rkl , σk , k) < μk−kl V (rkl , σkl , kl ) + μk−θ−1 γ 2 w T (θ )w(θ )
θ=kl−1
l −1
k
< δμk−kl−1 V (rkl−1 , σkl−1 , kl−1 ) + δ μk−θ−1 γ 2 w T (θ )w(θ )
θ=kl−1
k−1
+ μk−θ−1 γ 2 w T (θ )w(θ )
θ=kl
1 −1
k
<δ μ La k−k0
V (rk0 , σk0 , k0 ) + δ L a μk−θ−1 γ 2 w T (θ )w(θ )
θ=k0
⎤
2 −1
k
k−1
+δ L a −1
μ k−θ−1
γ w (θ )w(θ ) + · · · + δ
2 T 0
μ k−θ−1
γ w (θ )w(θ )⎦
2 T
θ=k1 θ=kl
k
< δ k−k0 /τa μk−k0 V (rk0 , σk0 , k0 ) + δ k−k0 /τa μk−θ−1 γ 2 w T (θ )w(θ )
θ=k0
N /τa
<δ μ N
V (rk0 , σk0 , k0 ) + γ d2 2
. (3.49)
Note that
0 −1
k
V (rk0 , σk0 , k0 ) = x T (k0 ) P̄(rk0 , σk0 , k0 )x(k0 ) + x T ( f )Qx( f )
f =k0 −h
0 −1
k
≤ max λmax ( P̃α,i )x T (k0 )Rx(k0 ) + λmax ( Q̃) x T ( f )Rx( f )
i∈M,α∈S
f =k0 −h
k−1
E Vα,i (k) > min λmin ( P̃α,i )x T (k)Rx(k) + λmin ( Q̃) x T ( f )Rx( f )
i∈M,α∈S
f =k−h
Under the zero initial condition V (rk0 , σk0 , k0 ) = 0, and from Eqs. (3.49) and
(3.50), we can obtain
δ N /τa μ N γ 2 d 2 + max λmax ( P̃α,i )c12 + λmax ( Q̃)hc12
i∈M,α∈S
x T (k)Rx(k) < .
min λmin ( P̃α,i )
i∈M,α∈S
56 3 Finite-Time Stability and Stabilization for Switching . . .
Combining the above condition with Eqs. (3.42) and (3.43), it is followed that:
x T (k)Rx(k) < c2 ,
N
≤ z T (k)z(k) − γ 2 w T (k)w(k) + μE{Vα, j (k + 1)} − μVα,i (k)
k=0
N
= ζ T (k)α,i ζ (k).
k=0
The following theorem shows the solutions of the controller and observer gains
in term of LMIs.
Theorem 3.2 The error closed-loop system (3.39) is stochastic finite-time stabiliz-
able with H∞ performance via observer-based controller (3.38) concerning given
(c1 c2 N R d γ ) with δ ≥ 1, and μ > 1, if there are positive-definite matrices
P̃α,i > 0, X α,i > 0, Yα,i and Q such that
⎡ ⎤
−μPα,i + μQ 0 0 L T1,i T
C̃α,i
⎢ ∗ −μQ 0 L T2,i T ⎥
⎢ C̃dα,i ⎥
⎢ ∗ ∗ −γ I
2
L T3,i T ⎥
Dwα,i ⎥ < 0
⎢ (3.51)
⎢ ⎥
⎣ ∗ ∗ ∗ −μ−1 X α,i 0 ⎦
∗ ∗ ∗ ∗ −I
N ln δ
τa > = τa∗ (3.56)
ln c22 λ − ln μ N γ 2 d 2
where α T
α T
L T1,i = πi1 Ãα,i πi2 Ãα,i . . . πiαM ÃTα,i ,
α T
α T
L T2,i = πi1 Ãdα,i πi2 Ãdα,i . . . πiαM ÃTdα,i ,
α T
α T
L T3,i = πi1 B̃wα,i πi2 B̃wα,i . . . πiαM B̃wα,i
T
,
X α,i = diag{X α,1 X α,2 . . . X α,M },
−1
Implementing a congruence to the above inequality by diag{Pα,i , I, . . . , I }, it
leads to ⎡ ⎤
M
β α α
⎢ −δ π X P X
i j α,i β, j α,i π X
i1 α,i · · · π X
i M α,i ⎥
⎢ j=1 ⎥
⎢ ∗ −X α,1 · · · ⎥
⎢ 0 ⎥ ≤ 0. (3.57)
⎢ .. .. ⎥
⎣ ∗ ∗ . . ⎦
∗ ∗ ∗ −X α,M
Then
M
β
M
β
−δ πi j X α,i Pβ, j X α,i ≤ δ πi j X β, j − 2δ X α,i .
j=1 j=1
where
P α,i = diag{Pα,1 Pα,2 , . . . , Pα,M }.
−1
Substituting Eqs. (3.59)–(3.63) into Eq. (3.58), and denoting X̃ i,ξk = P̃i,ξk
, the
LMIs (3.51) and (3.52) can be obtained in Theorem 3.2.
On the other hand, considering the following conditions:
1
min λmin ( P̃α,i ) = ,
i∈M,α∈S max λmax ( X̃ α,i )
i∈M,α∈S
and
−1
X̃ α,i = P̃α,i = R 1/2 X α,i R 1/2 .
Assuming max λmax ( X̃ α,i ) < λ, which means min λmin ( P̃α,i ) > λ, then we
i∈M,α∈S i∈M,α∈S
can get condition (3.55).
Moreover, define c1 = 0, then conditions (3.54) and (3.56) are equivalent to con-
ditions (3.42) and (3.43) in Proposition 3.3, and we can also get
x T (k)Rx(k) < c2 .
Remark 3.3 It should be seen that the derived conditions in Theorem 3.2 are not
strict linear matrix inequalities because of the coupling relationship between different
matrix variables. Therefore, the non-feasibility problem in Theorem 3.2 should be
transformed into the subsequent optimization problem including the LMI conditions:
Minimize trace Pα,i X α,i = I
subject to (3.51), (3.53)−(3.55), Pα,i > 0, X α,i > 0
Pα,i I
and > 0. (3.64)
I X α,i
Then, for given (c1 c2 N R d γ ) and scalars δ ≥ 1, and μ ≥ 1, the matrices K α,i ,
and Hα,i can be solved with the following algorithm [13]:
0
Algorithm 3.1 (1) Determine an initial feasible value Pα,i , X α,i
0
, Q 0 satisfying
conditions (3.51), (3.53)–(3.55) and (3.64). Let k = 0.
(2) Find the solution of the following linear matrix inequality optimization problem:
0
Minimize trace Pα,i X α,i + X α,i
0
Pα,i
subject to (3.51), (3.53)−(3.55) and (3.64).
k
Substituting the acquired matrices Pα,i , X α,i
k
, Q k into Eqs. (3.51) and (3.55).
(3) If condition (3.64) is guaranteed with
trace Pα,i X α,i − n < ζ,
forsome sufficiently small scalar ζ > 0, give the feasible solution Pα,i , X α,i , Q
= Pα,ik
, X α,i
k
, Q k and stop.
k k+1 k+1
(4) If k > N , gives it up and stops. Set k = k + 1, Pα,i , X α,i
k
, Q k = Pα,i , X α,i ,
k+1
Q and go to step (2).
Two examples will be given in this subsection to illustrate the efficacy of the proposed
finite-time H∞ controller configuration approach for stochastic MJSs supervised by
the deterministic switching. The first example shows that for the finite-time unstable
system, the closed-loop system is finite-time stabilizable with the designed controller.
The second example is adapted from a typical economic system to demonstrate the
practical applicability of the theoretical results.
3.5 Simulation Analysis 61
Du1,1 = 0.4, Du1,2 = 0.5, Du1,3 = 0.6, Dw1,1 = 0.2, Dw1,2 = 0.3, Dw1,3 = 1.1.
MJS 2:
1 −0.05 0.8 0.8 −0.3 0.6
A2,1 = , A2,2 = , A2,3 = ,
0.4 −0.72 0.6 1 0.4 0.34
0.2 0 0.8 −0.24 0.6 0.4
Ad2,1 = , Ad2,2 = , Ad2,3 = ,
0 0.5 −0.7 −0.32 0.2 −0.3
switching signal
4.5
2
4
jumping modes
1
3.5
0 5 10
time
3
2.5
1.5
0.5
0 1 2 3 4 5 6 7 8 9 10
time
1
x2
-1
-2
-3
-3 -2 -1 0 1 2 3
x1
1.5
0.5
x2
-0.5
-1
-1.5
-2
-2 -1.5 -1 -0.5 0 0.5 1 1.5 2
x1
Example 3.2 The second example considers an economic system adapted from [14].
There are three operation modes representing different financial situations: normal,
boom and slump. The Markovian chain governs the stochastic transitions among
the three modes. On account of the variation in domestic and international economic
environment, the macroeconomic control from government is necessary. Government
intervention leads to a change in the economic model, which can be viewed as a top-
level supervisor. The detailed parameters of the economic system are as follows:
64 3 Finite-Time Stability and Stabilization for Switching . . .
MJS 1:
0 1 0 1 0 1
A1,1 = , A1,2 = , A1,3 = ,
−2.6 3.3 −4.4 4.6 5.4 −5.3
T
Bu1,1 = Bu1,2 = Bu1,3 = 0 1 ,
T T T
Bw 1,1 = 0.3 0.24 , Bw1,2 = −0.15 −0.3 , Bw 1,3 = 0.3 0.45 ,
⎡ ⎤ ⎡ ⎤ ⎡ ⎤
1.5477 −1.0976 3.1212 −0.5082 1.8385 −1.2728
C1,1 = ⎣−1.0976 1.9145 ⎦ , C1,2 = ⎣−0.5082 2.7824 ⎦ , C1,3 = ⎣−1.2728 1.6971 ⎦ ,
0 0 0 0 0 0
T T
Du1,1 = 0 0 1.6125 , Du1,2 = 0 0 1.0794 , Du1,3 = 0 0 1.0540 T ,
T T T
Dw11 = 0.18 0.3 0.36 , Dw12 = −0.27 0.3 0.18 , Dw13 = 0.3 0.12 0.3 .
MLS 2:
0 1 0 1 0 1
A2,1 = , A2,2 = , A2,3 = ,
−2.4 3.1 −4.2 4.4 5.2 −5.1
Other parameters of the system are same as MJS 1 and the transition probabilities
matrix is ⎡ ⎤
0.79 0.11 0.1
2 = ⎣0.27 0.53 0.2⎦ .
0.23 0.07 0.7
switching signal
4.5
2
4
1
3.5
jumping modes
0 10 20
time
3
2.5
1.5
0.5
0 2 4 6 8 10 12 14 16 18 20
time
1
x2
-1
-2
-3
-4
-4 -3 -2 -1 0 1 2 3 4
x1
with λ = 0.1498 and τa∗ = 3.8653. Here we choose the average dwell time τa = 4.
T
The initial state, mode and external disturbance are taken as x0 = 0 0 , r0 = 1 and
w(k) = 0.5e−k , respectively. The following figures show the jumping modes and
switching signals, the state trajectories of the free and closed-loop economic system
(Figs. 3.4, 3.5, and 3.6).
From Fig. 3.6 we can see that the economic situation is kept within the desired
bound with the designed controller.
66 3 Finite-Time Stability and Stabilization for Switching . . .
1.5
0.5
x2
-0.5
-1
-1.5
-2
-2 -1.5 -1 -0.5 0 0.5 1 1.5 2
x1
3.6 Conclusion
References
1. Feng, X., Loparo, K.A., Ji, Y., Chizeck, H.J.: Stochastic stability properties of jump linear
systems. IEEE Trans. Autom. Control 37, 38–53 (1992)
2. Shi, P., Boukas, E.K., Agarwal, R.: Kalman filtering for continuous-time uncertain systems
with Markovian jumping parameters. IEEE Trans. Autom. Control 44(8), 1592–1597 (1999)
3. Boukas, E.K.: Stochastic Switching Systems: Analysis and Design. Birkhauser Publishing,
Berlin (2005)
4. Zhai, G.S., Hu, B., Yasuda, K., Michel, A.N.: Stability analysis of switched systems with stable
and unstable subsystems: an average dwell time approach. Int. J. Syst. Sci. 32(8), 1055–1061
(2001)
5. Shi, P., Xia, Y., Liu, G., Rees, D.: On designing of sliding mode control for stochastic jump
systems. IEEE Trans. Autom. Control 51(1), 97–103 (2006)
6. Luan, X.L., Shunyi, Zhao, Liu, F.: H∞ control for discrete-time Markovian jump systems with
uncertain transition probabilities. IEEE Trans. Autom. Control 58(6), 1566–1572 (2013)
References 67
7. Bolzern, P., Colaneri, P., Nicolao, G.D.: Markovian jump linear systems with switching tran-
sition rates: mean square stability with dwell-time. Automatica 46, 1081–1088 (2010)
8. Hou, L.L., Zong, G.D., Zheng, W.X.: Exponential l2 -l∞ control for discrete-time switching
Markovian jump linear systems. Circ. Syst. Signal Process 32(6), 2745–2759 (2013)
9. Bolzern, P., Colaneri, P., Nicolao, G.D.: Almost sure stability of Markovian jump linear systems
with deterministic switching. IEEE Trans. Autom. Control 58(1), 209–213 (2013)
10. Luan, X.L., Zhao, C.Z., Liu, F.: Finite-time H∞ control with average dwell-time constraint for
time-delay Markovian jump systems governed by deterministic switches. IET Control Theor.
Appl. 8(11), 968–977 (2014)
11. Luan, X.L., Zhao, C.Z., Liu, F.: Finite-time stabilization of switching Markovian jump systems
with uncertain transition rates. Circ. Syst. Signal Process 34(12), 3741–3756 (2015)
12. Yin, Y., Shi, P., Liu, F., Teo, K.L.: Observer-based H∞ control on nonhomogeneous discrete-
time Markov jump systems. J. Dyn. Syst. Meas. Control 135(4), 1–8 (2013)
13. He, Y., Wu, M., Liu, G.P., She, J.H.: Output feedback stabilization for a discrete-time system
with a time-varying delay. IEEE Trans. Autom. Control 53(10), 2372–2377 (2008)
14. Costa, O., Assumpcão, E.O., Boukas, E.K., Marques, R.P.: Constrained quadratic state feed-
back control of discrete-time Markovian jump linear systems. Automatica 35(4), 617–626
(1999)
Chapter 4
Finite-Time Stability and Stabilization
for Non-homegeneous Markovian Jump
Systems
Abstract Considering the practical case that the transition probabilities jumping
among different modes are random time-varying, the finite-time stabilization, finite-
time H∞ control and the observer-based state feedback finite-time control prob-
lems for discrete-time Markovian jump systems with non-homogeneous transition
probabilities are investigated in this chapter. Gaussian transition probability den-
sity function is utilized to describe the random time-varying property of transition
probabilities. Then, the variation-dependent controller is devised to guarantee the
corresponding closed-loop systems finite-time stabilization for random time-varying
transition probabilities.
4.1 Introduction
Markovian jump systems (MJSs) are a set of dynamic systems with random jumps
among finite subsystems. As essential system parameters, the jump transition prob-
abilities (TPs) determine which mode the system is in at the current moment. Under
the hypothesis that TPs are known accurately in advance, many problems of this kind
of MJSs with homogeneous TPs have been well studied [1–3]. However, the assump-
tion that the TPs are exactly known may lead to instability or deterioration of system
performance. Therefore, more practical MJSs with uncertain TPs are investigated to
address the related research problems.
Similar to the uncertainties about the system matrices, one frequently used form
of uncertainty is the polytopic description, where the TP matrix is supposed to be in
a convex framework with associate vertices [4–6]. The other type is specified in an
element-wise style. In this form, the components of the TP matrix are estimated in
practice, and error bounds are provided in the meantime. Then, the robust methodolo-
gies can be employed to tackle the norm-bounded or polytopic uncertainties supposed
in the TPs [6, 7].
Considering more practical cases that some components in the TP matrix are
precious to collect, the partially unknown TPs of MJSs has been recommended in
[8, 9]. Different with the uncertain TPs considered in [4–7], the notion of partially
unknown TPs does not expect any information of the unknown components. How-
ever, it is essential to point out that the more details in TPs are unknown, the more
conservativeness of the controller or filter design. In extreme circumstances, such
as all the elements in TPs are unavailable, the MJSs are equivalent to the switching
systems in a particular case.
In this chapter, the random time-varying TPs are considered from the stochastic
viewpoint. The Gaussian probability density function (PDF) is employed to describe
the relevant probability concerning TPs to occur at a provided constant. In this way,
the random time-varying TPs can be characterized with a Gaussian PDF. The vari-
ance of Gaussian PDF can quantize the uncertainties of TPs. Then the finite-time
stabilization, finite-time H∞ control and the observer-based state feedback finite-
time control problems are presented to deal with the transient performance analysis
of discrete-time non-homogeneous MJSs.
Consider the discrete-time MJS with the same structure in the preceding chapters:
x(k + 1) = A(rk )x(k) + Bu (rk )u(k) + Bw (rk )w(k)
(4.1)
x(k) = x0 , rk = r0 , k = 0
where the state variable, the control input, and the exogenous disturbances are the
same with those defined in the preceding chapters. The system matrices A(rk ), Bu (rk )
and Bw (rk ) are denoted as Ai , Bui , and Bwi , respectively. rk is a time-varying Marko-
vian chain taking values in M = {1, 2, ..., M} with transition probabilities
where πi(ξj k ) is the transition probabilities from mode i to mode j satisfying πi(ξj k ) ≥
(ξk )
0, M j=1 πi j = 1, ∀i, j ∈ M.
In this chapter, the random time-varying TPs are characterized by a Gaussian
stochastic process {ξk , k ∈ N}. The pruned Gaussian PDF of random variables πi(ξj k )
can be denoted as follows:
( ξk )
1 πi j −μi j
√ f √
σi j σi j
p πi(ξj k ) = (4.2)
1−μ 0−μ
F √σi ij j − F √σi ij j
where f (·) is the PDF of the standard normal distribution, F(·) is the cumulative
distribution of f (·), μi j and σi j are the means and variances of Gaussian PDFs,
respectively. Therefore, the matrix of transition probability can be expressed as:
4.2 Preliminaries and Problem Formulation 71
10
8 s=0.2
P robability dens ity
s=0.1
6 s=0.05
2
TP
0
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
50 50 50
s=0.1 s=0.05
s=0.2
40 40 40
30 30 30
Count
Count
Count
20 20 20
10 10 10
0 0 0
0.5 0.5 0.5
Fig. 4.1 Gaussian PDF for (0.5, 0.2), (0.5, 0.1) and (0.5, 0.05)
⎡ ⎤
n(μ11 , σ11 ) n(μ12 , σ12 ) · · · n(μ1M , σ1M )
⎢ n(μ21 , σ21 ) n(μ22 , σ22 ) · · · n(μ2M , σ2M ) ⎥
⎢ ⎥
N =⎢ .. .. .. .. ⎥ (4.3)
⎣ . . . . ⎦
n(μ M1 , σ M1 ) n(μ M2 , σ M2 ) · · · n(μ M M , σ M M )
1
π̂i(ξj k ) (ξk )
= πi j = πi(ξj k ) p πi(ξj k ) dπi(ξj k )
0
0−μi j 1−μi j
f √
σi j
− f √
σi j √
= μi j + σi j (4.4)
0−μi j 1−μi j
F √
σi j
−F √
σi j
with
M
E πi(ξj k ) = 1, E πi(ξj k ) ≥ 0, 1 ≤ i, j ≤ M.
j
Design the following state feedback controller for the discrete-time MJS (4.1):
where K i,π (ξk ) are controller gains to be calculated. Substituting controller (4.6) into
ij
system (4.1) leads to the following closed-loop system:
⎡ ⎤
ĀT (ξk ) P̄ j,πikj Āi,π (ξk ) − α Pi,π (ξk ) ĀT (ξk ) P̄ j,π (ξk ) Bwi
⎣ i,πi j ij ij i,πi j ij ⎦<0 (4.8)
∗ T
Bwi P̄ j,π (ξk ) Bwi − α Q
ij
4.3 Stochastic Finite-Time Stabilization 73
where P̃i,π (ξk ) = R −1/2 Pi,π (ξk ) R −1/2 , λmax (·), λmin (·) mean the maximal and mini-
ij ij
mum eigenvalue of the matrix, respectively.
Proof For the system (4.7), choose the following Lyapunov function:
where Pi,π (ξk ) P(rk , πi(ξj k ) ) are the variation-dependent positive-definite symmetric
ij
matrices. Simple calculation gives that
+ 2x(k)T ĀT (ξ ) P̄ j,π (ξk ) Bwi w(k) + w(k)T BwiT P̄ j,π (ξk ) Bwi w(k)
i,πi j k ij ij
(4.10)
M
where P̄ j,π (ξk ) j=1 E πi(ξj k ) P j,π (ξk ) .
ij ij
Combining Eqs. (4.8) and (4.10), it yields
For α ≥ 1 and P̃i,π (ξk ) = R −1/2 Pi,π (ξk ) R −1/2 , condition (4.11) can be rewritten as
ij ij
k
V (x(k)) < α k V (x(0)) + α k− j+1 w( j − 1)T Qw( j − 1)
j=1
⎡ ⎤
k
= α k ⎣V (x(0)) + α 1− j w( j − 1)T Qw( j − 1)⎦
j=1
< α N c1 λmax ( P̃i,π (ξk ) ) + d 2 λmax (Q) . (4.12)
ij
V (X (k)) = x(k)T Pi,π (ξk ) x(k) ≥ λmin ( P̃i,π (ξk ) )x(k)T Rx(k). (4.13)
ij ij
74 4 Finite-Time Stability and Stabilization . . .
Equation (4.9) means that for ∀k ∈ {1, 2, . . . , N }, x(k)T Rx(k) < c2 . Then the
MJS (4.7) is said to be stochastic finite-time stabilizable concerning to (c1 c2 N R d).
This completes the proof.
To find the solution the finite-time stabilizing controller, the following theorem
is needed:
Theorem 4.1 For a given scalar α ≥ 1, the closed-loop MJS (4.7) is stochas-
tic finite-time stabilizable concerning (c1 c2 N R d) for the state feedback con-
troller K i,π (ξk ) = Yi,π (ξk ) X −1(ξk ) , if there exists mode-dependent symmetric positive-
ij ij i,πi j
definite matrix X i,π (ξk ) ∈ R n×n , mode-dependent matrix Yi,π (ξk ) ∈ R m×n and symmet-
ij ij
ric positive-definite matrix Q ∈ R p× p satisfying the following coupled linear matrix
inequalities (LMIs):
⎡ ⎤
−α X i,π (ξk ) 0 V1i
⎢ ij ⎥
⎣ ∗ −α Q U2i ⎦ < 0 (4.14)
∗ ∗ −
√
− αc2N + λ2 d 2 c1
√ <0 (4.17)
c1 −λ1
where
⎡ ⎤
(ξk )
⎢ E πi1 X i,π (ξk ) Ai − Y (ξk ) Bui , . . . , ⎥
T T T
V1i = ⎢ ij ⎥,
i,πi j
⎣ ⎦
(ξk )
E πi M X i,π (ξk ) Ai − Y (ξk ) Bui
T T T
ij i,πi j
U2i = E πi1 Bwi , . . . , E πi(ξMk ) Bwi
(ξk ) T T ,
4.4 Stochastic Finite-Time H∞ Control 75
= diag X 1,πi(jξk ) , . . . , X M,πi(jξk ) .
Applying
−1Schur complement lemma, implementing a congruence to Eq. (4.18)
by diag Pi,π (ξk ) I , and denoting X i = P −1(ξk ) and Yi,π (ξk ) = K i,π (ξk ) X i,π (ξk ) , then it
ij i,πi j ij ij ij
1
λmax ( X̃ i,π (ξk ) ) = . (4.19)
ij λmin ( P̃i,π (ξk ) )
ij
0 < λmin (Q), λmax (Q) < λ2 , λmax ( X̃ i,π (ξk ) ) < 1, λ1 < λmin ( X̃ i,π (ξk ) ) (4.21)
ij ij
c1 c2
+ d 2 λ2 < N (4.22)
λ1 α
where the definition of z k ∈ R l is the same as that in Chap. 3. Substituting the con-
troller (4.6) into the system (4.23) yields the following closed-loop system:
where
To achieve the target that the closed-loop system is stochastic finite-time stabiliz-
able with H∞ disturbance rejection performance, the following sufficient conditions
are presented:
Theorem 4.2 The closed-loop system (4.24) is stochastic finite-time stabilizable
concerning (c1 c2 N R d) for scalars α ≥ 0, and maintains the γ -disturbance atten-
uation attribute, if for each i ∈ M, there are positive-definite matrices X i,π (ξk ) and
ij
matrices Yi,π (ξk ) satisfying the following equations:
ij
⎡ T ⎤
√
⎢ −(1 + α)X i,π (ξk ) 0 Ci X (ξ )
i,πi j k
− Dui Y (ξ )
i,πi j k
(1 + α)U1iT ⎥
⎢ ij √ ⎥
⎢ ∗ −γ 2 I T (1 + α)U2iT ⎥<0 (4.25)
⎢ Dwi ⎥
⎣ ∗ ∗ −I 0 ⎦
∗ ∗ ∗ −Z
−c2 + γ 2 d 2
(1 + α) N c1
<0 (4.27)
(1 + α) N c1 −λ
where
T T
(ξk ) (ξk )
U1iT = E πi1 Ai,πi j k , . . . , E πi M Ai,πi j k ,
( ξ ) ( ξ )
(ξk ) (ξk )
U2iT = E πi1 Bwi , . . . , E πi M Bwi ,
T T
4.4 Stochastic Finite-Time H∞ Control 77
Z = X 1,πi(jξk ) , . . . , X M,πi(jξk ) ,
Proof For the closed-loop system (4.24), choose the following Lyapunov function:
where Pi,π (ξk ) P(rk , πi(ξj k ) ) are the variation-dependent positive-definite symmetric
ij
matrices. Then, it follows that:
(ξ )
V (x(k)) = E V ( x(k + 1), rk+1 , πi j k+1 ) x(k), rk , ξk − V (x(k), rk , πi(ξj k ) )
= x(k + 1)T P̄ j,π (ξk ) x(k + 1) − x(k)T Pi,π (ξk ) x(k)
ij ij
+ 2x(k) T
ĀT (ξk ) P̄ j,π (ξk ) Bwi w(k) + w(k)T BwiT P̄ j,π (ξk ) Bwi w(k) (4.28)
i,πi j ij ij
M
where P̄ j,π (ξk ) j=1 E πi(ξj k ) P j,π (ξk ) .
ij ij
Assume the zero initial condition V (x(k))|k=0 = 0, and denote
!
N
JE z(k) z(k) − γ w(k) w(k)
T 2 T
. (4.29)
k=0
N
= ζkT ζk ,
k=0
78 4 Finite-Time Stability and Stabilization . . .
where
T
ζk x(k)T w(k)T ,
(1 + α) ĀT (ξ ) P̄ j,π (ξk ) Āi,π (ξk ) − (1 + α)Pi,π (ξk ) + C̄ T (ξ ) C̄ i,π (ξk )
= i,πi j k ij ij ij i,πi j k ij
∗
⎤
(1 + α) ĀT (ξ ) P̄ j,π (ξk ) Bwi + C̄ (ξk ) Dwi
T
i,πi j k ij i,πi j ⎦.
(1 + α)Bwi
T
P̄ j,π (ξk ) Bwi + Dwi T
Dwi − γ 2 I
ij
a congruence to by diag X i,ξk I , it follows that < 0 is equal to the linear matrix
inequality (4.25).
On the other hand, linear matrix inequality (4.25) means
(1 + α)V (x(k + 1)) < (1 + α)V (x(k)) + γ 2 w(k)T w(k) − z(k)T z(k). (4.30)
Denoting P̃i,π (ξk ) = R −1/2 Pi,π (ξk ) R −1/2 , Eq. (4.31) implies
ij ij
k
V (x(k)) < (1 + α)k V (x(0)) + γ 2 w( j − 1)T w( j − 1)
j=1
< α k (1 + α) N c1 λmax ( P̃i,π (ξk ) ) + d 2 γ 2 . (4.32)
ij
V (x(k)) = x(k)T Pi,π (ξk ) x(k) > λmin ( P̃i,π (ξk ) )x(k)T Rx(k). (4.33)
ij ij
c1 (1 + α) N c2
+ d 2γ 2 < . (4.35)
λmin ( X̃ i,π (ξk ) ) λmax ( X̃ i,π (ξk ) )
ij ij
c1 (1 + α) N
+ d 2 γ 2 < c2 (4.37)
λ
which are equal to Eqs. (4.26) and (4.27). Thus the proof is completed.
where y(k) ∈ R p is the measured output. Design the following observer and state
feedback controller:
⎧
⎪
⎪ x̄(k + 1) = Ai x̄(k) + Adi x̄(k − h) + Bui u(k) + Hi,π (ξk ) (y(k) − ȳ(k))
⎪
⎨ ij
ȳ(k) = E i x̄(k) + E di x̄(k − h)
(4.39)
⎪ u(k) = K i,πi(jξk ) x̄(k) + K di,πi(jξk ) x̄(k − h)
⎪
⎪
⎩
x̄ f = η f , f ∈ {−h, . . . , 0}, r (0) = r0
where K i,π (ξk ) , K di,π (ξk ) and Hi,π (ξk ) are the controller and observer gains to be calcu-
ij ij ij
T
lated. Letting e(k) = x(k) − x̄(k) and x̃(k) = x(k)T e(k)T , the closed-loop error
dynamic MJS follows that:
x̃(k + 1) = Ãi x̃(k) + Ãdi x̃(k − h) + B̃wi w(k)
T (4.40)
x̃ f = ϕ Tf ϕ Tf − ηTf , f ∈ {−h, . . . , 0}, r (0) = r0
80 4 Finite-Time Stability and Stabilization . . .
where
Ai + Bui K i,π (ξk ) −Bui K i,π (ξk )
Ãi = ij ij
,
0 Ai − Hi,π (ξk ) E i
ij
Adi + Bui K di,π (ξk ) −Bui K di,π (ξk )
Ãdi = ij ij
,
0 Adi − Hi,π (ξk ) E di
ij
Bwi
B̃wi = .
Bwi
Before presenting the main results, the following definition and proposition are
necessary to develop the main results.
Definition 4.1 The closed-loop error dynamic system (4.40) is said to be stochas-
tic finite-time stabilizable via observer-based state feedback controller (4.39) with
respect to (c1 c2 N G̃), where c1 < c2 , G̃ > 0, if the following condition holds:
E x̃ T (0)G̃ x̃(0) ≤ c12 ⇒ E x̃ T (k)G̃ x̃(k) < c22 , ∀k ∈ {1, 2, . . . , N } (4.41)
k0 −h≤k≤k0
Proposition 4.2 For scalars α ≥ 0, and h > 0, the closed-loop error dynamic sys-
tem (4.40) is stochastic finite-time stabilizable concerning to (c1 c2 N G̃ d), if
there are symmetric positive-definite matrices P̃i,ξk , Q̃ and S such that
⎡
ÃiT P̄ j,π (ξk ) Ãi − (1 + α) P̃i,π (ξk ) + Q̃ ÃiT P̄ j,π (ξk ) Ãdi
⎢ ij ij ij
⎢ ∗ ÃTdi P̄ j,π (ξk ) Ãdi − Q̃
⎣ ij
∗ ∗
⎤
ÃiT P̄ j,π (ξk ) B̃wi
ij
⎥
ÃTdi P̄ j,π (ξk ) B̃wi ⎥<0 (4.42)
ij ⎦
T
B̃wi P̄ j,π (ξk ) B̃wi − (1 + α)S
ij
%
% c22 min λmin P̂i,π (ξk )
i∈M ij
c12 max λmax P̂i,π (ξk ) + c12 hλmax ( Q̂) + d 2 λmax (S) <
i∈M ij (1 + α) N
(4.43)
−1/2 −1/2 −1/2 −1/2
where P̂i,π (ξk ) = G̃ P̃i,π (ξk ) G̃ , Q̂ = G̃ Q̃ G̃ .
ij ij
4.5 Observer-Based Finite-Time Control 81
Proof For the closed-loop system (4.40), choose the following Lyapunov function:
k−1
Vi (k) = x̃(k) P̃ T
i,πi(ξj
k) x̃(k) + x̃( j)T Q̃ x̃( j).
j=k−h
+ 2 x̃(k − h) T
ÃTdi P̄ j,π (ξk ) Ãdi B̃wi w(k) + w(k) T T
B̃wi P̄ j,π (ξk ) B̃wi w(k)
ij ij
= ζkT i ζk (4.44)
where
M
E πi(ξj k ) P̃ j,π (ξk ) , ζk = x(k)T x(k − h)T w(k)T ,
T
P̄ j,π (ξk )
ij ij
j=1
⎡ ⎤
ÃiT P̄ j,π (ξk ) Ãi − P̃i,π (ξk ) + Q̃ ∗ ∗
⎢ ij ij
⎥
i = ⎢
⎣
ÃTdi P̄ j,π (ξk ) Ãi ÃTdi P̄ j,π (ξk ) Ãdi − Q̃ ∗ ⎥.
⎦
ij ij
T T T
Bwi P̄ j,π (ξk ) Ãi B̃wi P̄ j,π (ξk ) Ãdi B̃wi P̄ j,π (ξk ) B̃wi
ij ij ij
k−1
+ (1 + α) x̃( j)T Q̃ x̃( j)
j=k−h
= (1 + α)k
82 4 Finite-Time Stability and Stabilization . . .
⎡
−1
× ⎣x̃(0)T P̃i,π (ξk ) x̃(0) + x̃( j)T Q̃ x̃( j)
ij
j=−h
⎤
k
+ (1 + α)1− j w( j − 1)T Sw( j − 1)⎦
j=1
%
≤ (1 + α) N c12 max λmax P̂i,π (ξk ) + c12 hλmax ( Q̂) + d 2 λmax (S) .
i∈M ij
(4.46)
Note that
k−1
Vi (k) = x̃(k) P̃i,π (ξ(k)) x̃(k) +
T
x̃( j)T Q̃ x̃( j)
ij
j=k−h
(4.48)
Equations (4.43) and (4.48) imply that for k ∈ {1, 2, . . . , N }, E{x̃(k) G̃ x̃(k)} < T
With the derived results presented in Proposition 4.2, the controller gains can be
solved using the following theorem:
Theorem 4.3 For scalars α ≥ 0, and h > 0, the closed-loop system (4.40) is
stochastic finite-time stabilizale via observer-based state feedback concerning
(c1 c2 N G̃ d), if there are matrices P̃i,π (ξk ) = P̃ T (ξk ) > 0 ∈ R 2n×2n , X̃ i,π (ξk ) =
ij i,πi j ij
X̃ T (ξ ) > 0 ∈ R 2n×2n , Q̃ > 0 ∈ R 2n×2n , S > 0 ∈ R n×n , and real matrices K i,π (ξk ) ∈
i,πi j k ij
R m×n
, K di,π (ξk ) ∈ R m×n and Hi,π (ξk ) ∈ R n× p such that
ij ij
⎡ ⎤
−(1 + α) P̃i,π (ξk ) + Q̃ 0 0 ∗
⎢ ij ⎥
⎢ 0 − Q̃ ∗ ∗ ⎥
⎢ ⎥≤0 (4.49)
⎢ 0 0 −(1 + α)S ∗ ⎥
⎣ ⎦
1 2 B̄wi − X̃ i,π (ξk )
ij
4.5 Observer-Based Finite-Time Control 83
λ1 G̃ −1 < X̃ i,π (ξk ) < G̃ −1 , 0 < Q̃ < λ2 G̃, 0 < S < λ3 I (4.51)
ij
c2
− (1+α)
2
N + c1 λ2 h + d λ3
2 2
c1
<0 (4.52)
c1 −λ1
where
T
(ξ ) (ξ )
1 = E πi1 k 1,
T
..., E πi Mk T
1
,
T
(ξ ) (ξ )
2 = E πi1 k 2,
T
..., E πi Mk T
2
,
Ai 0n×n Bui 0n×n
= , = , = ,
11
0n×n Ai 12
0n×m 14
−In×n
Adi 0n×n
15 = 0 p×n E i , 21 = , 22 = 0 p×n E di .
0n×n Adi
Proof By Schur complement lemma, Eq. (4.42) in Proposition 4.2 can be rewritten
as
⎡ ⎤
−(1 + α) P̃i,π (ξk ) + Q̃ ∗ ∗ ∗
⎢ ij ⎥
⎢ 0 − Q̃ ∗ ∗ ⎥
⎢ ⎥≤0 (4.53)
⎢ 0 0 −(1 + α)S ∗ ⎥
⎣ ⎦
Āi Ādi B̄wi − X̃ i,π (ξk )
ij
where
T
(ξk ) (ξk )
Āi = E πi1 Ãi , . . . , E πi M ÃiT ,
T
84 4 Finite-Time Stability and Stabilization . . .
T
(ξk ) (ξk )
Ādi = E πi1 Ãdi , . . . , E πi M Ãdi ,
T T
T
(ξk ) (ξk )
B̄wi = E πi1 B̃wi , . . . , E πi M B̃wi ] ,
T T
−1 −1
X̃ i,π (ξk ) = diag P̃1,π (ξk ) , . . . , P̃M,π (ξk ) .
ij ij ij
Ai 0n×n Bi
= + K i,π (ξk ) In×n −In×n
0n×n Ai 0n×m ij
0n×n
+ Hi,π (ξk ) 0 p×n E i (4.54)
−In×n ij
Adi + Bui K di,π (ξk ) −Bui K di,π (ξk )
Ãdi = ij ij
0 Adi − Hi,π (ξk ) E di
ij
Adi 0n×n Bi
= + K di,π (ξk ) In×n −In×n
0n×n Adi 0n×m ij
0n×n
+ Hi,π (ξk ) 0 p×n E di . (4.55)
−In×n ij
Substituting Eqs. (4.54)–(4.55) into Eq. (4.53), and denoting X̃ i,π (ξk ) = P̃ −1(ξk ) ,
ij i,πi j
Eqs. (4.49) and (4.50) in Theorem 4.2 can be derived.
On the other hand,
1
λmax ( X̂ i,π (ξk ) ) = ,
ij λmin ( P̂i,π (ξk ) )
ij
and
c12
% + c12 hλmax ( Q̂) + d 2 λmax (S)
min λmin X̂ i,π (ξk )
i∈M ij
c2
< 2 % .
max λmax X̂ i,π (ξk ) (1 + α) N
i∈M ij
0 < λmin ( Q̂), λmax ( Q̂) < λ2 , λmax ( X̂ i,π (ξk ) ) < 1,
ij
λ1 < λmin ( X̂ i,π (ξk ) ), 0 < λmin (S), λmax (S) < λ3 ,
ij
c12 c22
+ c12 hλ2 + d 2 λ3 < ,
λ1 (1 + α) N
which are equal to Eqs. (4.51) and (4.52). Thus the proof is completed.
Remark 4.1 It should be pointed out that the derived inequalities in Theorem 4.3
are not strict LMIs. With the same algorithm mentioned in Chap. 3 [10], the original
non-feasibility problem can be converted to the feasible solution to LMIs.
In this subsection, two examples will be presented to illustrate the effectiveness and
validity of the obtained results.
Example 4.1 An example from reference [11] is adopted here, which is an applica-
tion of discrete-time MJS to the economic system to discuss the income measurement
and the market period problems. The specific model parameter information can refer
to [11].
The following Gaussian PDF matrix is included to represent the matrix of TP:
⎡ ⎤
n(0.67, σ ) n(0.17, σ ) n(0.16, σ )
N = ⎣ n(0.30, σ ) n(0.47, σ ) n(0.23, σ ) ⎦ ,
n(0.26, σ ) n(0.10, σ ) n(0.64, σ )
where the values of the mean are taken from the components of corresponding TP
matrix, and the same value of variance for different components is utilized to simplify
the discussion.
86 4 Finite-Time Stability and Stabilization . . .
From the above equation, it can be seen that larger values of σ lead to greater
uncertainty of TPs. It implies that the possibility for TPs to occur at a constant is
less. Table 4.1 displays the relevant TP matrix with different variance values.
It can be clear seen from Table 4.1 that as the variance increases, the uncertainties
of the TP matrix turn into greater.
To verify the validity of the observer-based finite-time controller design, the fol-
lowing parameters are provided:
4.6 Simulation Analysis 87
0 1 0 1 0 1
A1 = , A2 = , A3 = ,
−2.5 3.2 −43.7 45.7 5.3 −5.2
⎡ ⎤ ⎡ ⎤
1.5477 −1.0976 3.1212 −0.5082
E 1 = ⎣ −1.0976 1.9145 ⎦ , E 2 = ⎣ −0.5082 2.7824 ⎦ ,
0 0 0 0
⎡ ⎤
1.8385 −1.2728
E 3 = ⎣ −1.2728 1.6971 ⎦ ,
0 0
T T
Bu1 = Bu2 = Bu3 = 0 1 , Bw1 = Bw2 = Bw3 = 0 0.2 ,
K 3,π (ξk ) = −0.5026 0.9163 ,
ij
K d1,π (ξk ) = 0.3752 −0.0344 , K d2,π (ξk ) = −0.5335 −22.7511 ,
ij ij
−1.5579 −2.1878 0
K d3,π (ξk ) = −0.5026 0.9157 , H1,π (ξk ) = ,
ij ij 2.5927 3.1712 0
−2.6357 −16.1875 0 4.4970 6.4957 0
H2,π (ξk ) = , H3,π (ξk ) = .
ij 3.0684 16.8775 0 ij −3.2808 −5.5246 0
Example 4.2 Consider the system (4.38) with three operation modes and the fol-
lowing parameters:
0.88 −0.05 2 0.24 −0.8 0.16
A1 = , A2 = , A3 = ,
0.40 −0.72 0.80 0.32 0.80 0.64
−0.2 0.1 −0.6 0.4 −0.3 0.1
Ad1 = , Ad2 = , Ad3 = ,
0.2 0.15 0.2 0.5 0.2 0.3
2 1 1 0.4 0.2 0.1
Bu1 = , Bu2 = , Bu3 = , Bw1 = , Bw2 = , Bw3 = ,
1 −1 1 0.5 0.6 0.3
88 4 Finite-Time Stability and Stabilization . . .
E 1 = 0.2 0.1 , E 2 = 0.3 0.4 , E 3 = −0.1 0.2 ,
E d1 = 0.03 −0.05 , E d2 = 0.1 0.2 , E d3 = −0.03 0.05 .
K 3,π (ξk ) = 0.1793 −0.4536 ,
ij
K d1,π (ξk ) = −0.0219 0.2912 , K d2,π (ξk ) = 0.1504 0.1833 ,
ij ij
K d3,π (ξk ) = −0.0828 0.4602 ,
ij
−3.8188 0.5472 5.0641
H1,π (ξk ) = , H2,π (ξk ) = , H3,π (ξk ) = .
ij −2.4398 ij 3.6000 ij 4.7931
The mode route is created randomly and given in Fig. 4.2. The free and controlled
discrete-time MJS state trajectories are illustrated in Figs. 4.3 and 4.4, respectively.
It could be observed that the closed-loop MJS (4.40) is stochastic finite-time stable
and the state trajectory is kept within the prescribed bound c2 .
4.6 Simulation Analysis 89
2.8
2.6
2.4
jumping modes
2.2
1.8
1.6
1.4
1.2
1
0 1 2 3 4 5 6 7
time
1
x2
-1
-2
-3
-7 -6 -5 -4 -3 -2 -1 0 1 2 3
x1
2.5
1.5
0.5
x2
-0.5
-1
-1.5
-2
-2.5
-2.5 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 2.5
x1
4.7 Conclusion
Unlike the previous works presented in the preceding chapters, the main purpose of
this chapter is to conclude a uniform framework to solve the finite-time performance
for a class of discrete-time MJSs with non-homogeneous TPs. The time-varying prop-
erties of TPs are characterized by the Gaussian transition probability density function.
Then sufficient conditions guaranteeing finite-time stabilization are obtained for all
possible unknown external disturbances and random time-varying TPs in the form
of LMIs. In the following chapters, in addition to the finite-time performance, other
control performances such as passive control sliding mode control, finite-frequency
control, consensus control, model predictive control, and so on will be considered
for discrete-time MJSs.
References
1. Shi, P., Boukas, E.K., Agarwal, R.: Kalman filtering for continuous-time uncertain systems
with Markovian jumping parameters. IEEE Trans. Autom. Control 44(8), 1592–1597 (1999)
2. Luan, X.L., Liu, F., Shi, P.: Finite-time filtering for nonlinear stochastic systems with partially
known transition jump rates. IET Control Theor. Appl. 4(5), 735–745 (2010)
3. Wu, L., Shi, P., Gao, H.: State estimation and sliding mode control of Markovian jump singular
systems. IEEE Trans. Autom. Control 55(5), 1213–1219 (2010)
4. Ghaoui, L., Rami, M.A.: Robust state-feedback stabilization of jump linear systems via LMIs.
Int. J. Robust Nonlinear Control 6(9–10), 1015–1022 (1996)
5. Costa, O., Val, J., Geromel, J.: Continuous-time state-feedback H2 control of Markovian jump
linear system via convex analysis. Automatica 35, 259–268 (1999)
6. Xiong, J.L., Lam, J., Gao, H.J., Ho, D.W.C.: On robust stabilization of Markovian jump systems
with uncertain switching probabilities. Automatica 41(5), 897–903 (2005)
References 91
7. Xiong, J.L., Lam, J.: Fixed-order robust H∞ filter design for Markovian jump systems with
uncertain switching probabilities. IEEE Trans. Signal Process 54(4), 1421–1430 (2006)
8. Zhang, L.X., Boukas, E.K., Lam, J.: Analysis and synthesis of Markovian jump linear systems
with time-varying delays and partially known transition probabilities. IEEE Trans. Autom.
Control 53(10), 2458–2464 (2008)
9. Zhang, L.X., Boukas, E.K.: Stability and stabilization of Markovian jump linear systems with
partly unknown transition probability. Automatica 45(2), 463–468 (2009)
10. He, Y., Wu, M., Liu, G.P., She, J.H.: Output feedback stabilization for a discrete-time system
with a time-varying delay. IEEE Trans. Autom. Control 53(10), 2372–2377 (2008)
11. Costa, O., Assumpcão, E.O., Boukas, E.K., Marques, R.P.: Constrained quadratic state feedback
control of discrete-time Markovian jump linear systems. Automatica 35(4), 617–626 (1999)
Chapter 5
Asynchronous Finite-Time Passive
Control for Discrete-Time Markovian
Jump Systems
Abstract This chapter focuses on the finite-time passive controller design scheme
for discrete-time Markovian jump systems (MJSs). Firstly, a finite-time passive con-
troller is proposed to guarantee that the closed-loop system is finite-time bounded
and meets the desired passive performance requirement simultaneously under ideal
conditions. Then, considering the more practical situation that the controller’s mode
is not synchronized with the system mode, an asynchronous finite-time passive con-
troller is planned, which is for the more general hidden MJSs. Finally, by adopting
the controller gains solved by the linear matrix inequalities (LMIs), one simulation
example is presented to verify that the designed two controllers are feasible and
effective.
5.1 Introduction
Markovian jump systems (MJSs) are special random hybrid systems that provide a
unified application theory for many engineering directions, such as flight control [1]
and finance [2], etc. However, considering that the controller mode is not always
synchronized with the system mode, the hidden Markovian model is introduced to
manage this asynchrony. In [3], the asynchronous controller was designed for fuzzy
MJSs. Then, the relevant asynchronous filtering problems were reviewed in [4, 5].
As a partial generalized dissipative theory [6], passivity provides a new method for
studying system stability and candidates for the construction of Lyapunov function
of the complex system through its energy storage function [7], which is of great
significance in modern control theory. There have been a lot of achievements in
passivity analysis. The authors designed the passive controller for MJSs in [8, 9].
The passive filter design for MJSs was also investigated in [10, 11].
On the other hand, although significant progress has been made in the discussion of
the finite-time control issues of MJSs, most of them are to analyze the stabilization
problem or H∞ performance. Finite-time stabilization is to devise a controller to
make the system satisfy the transient performance in the limited time available, and
finite-time H∞ control means that the system not only meets the expected transient
performance, but also has the expected anti-interference ability in the finite-time
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 93
X. Luan et al., Robust Control for Discrete-Time Markovian Jump Systems in the
Finite-Time Domain, Lecture Notes in Control and Information Sciences 492,
https://doi.org/10.1007/978-3-031-22182-8_5
94 5 Asynchronous Finite-Time Passive Control for Discrete-Time …
domain. The authors addressed the finite-time stabilization problem for MJSs in
[12–14]. In [15–17], the finite-time H∞ controller was designed such that the closed-
loop system is stochastic finite-time stabilizable and meets the H∞ performance.
Although the controller with stabilization and anti-interference ability can produce
a good performance, more results need to be considered from the perspective of
system internal stability and energy relationship. Therefore, the combination of finite-
time control and passive control are of great significance, which motivates us to study
the finite-time passive control (FTPC) in this chapter. Moreover, the asynchronous
FTPC problem is considered to make the discrete-time MJSs hidden MJSs stochastic
finite-time bounded with passive performance.
where the state variable, the control input, the control output, and the exogenous
disturbances are the same as those defined in the preceding chapters. The system
parameters such as πi j , A(rk ), Bu (rk ), Bw (rk ), C(rk ), and Dw (rk ) are denoted as
those in Chap. 2.
In this subsection, a FTPC will be designed to make the MJS (5.1) FTB and
passive. Then, the controller is designed by:
Combining the MJS (5.1) and the controller (5.2), it yields the following closed-
loop MJS:
x(k + 1) = (Ai + Bui K i )x(k) + Bwi w(k)
. (5.3)
z(k) = Ci x(k) + Dwi w(k)
Definition 5.1 [8] For given parameters 0 < c1 < c2 , N , R > 0, the closed-loop
MJS (5.3) is stochastic finite-time stabilizable and satisfies the required passive per-
formance index γ , if the following inequality holds for the zero initial condition:
N
N
E w T (k)z(k) > γ 2 E w T (k)w(k) . (5.4)
k=0 k=0
5.2 Finite-Time Passive Control 95
Thus, the tasks for FTPC design for MJSs in this subsection are summarized as:
design a FTPC (5.2) for the MJS (5.1) such that the closed-loop MJS (5.3) is stochastic
finite-time stabilizable with respect to (c1 c2 N R d) and meets the desired passive
performance simultaneously.
Then, the following theorem is given to guarantee the finite-time boundedness
and passivity of the closed-loop MJS (5.3).
Theorem 5.1 For a given scalar α ≥ 1 and δ, the closed-loop system (5.3) is
stochastic finite-time stabilizable in regard to (c1 c2 N R d) and satisfies the
passive performance index γ , if there exist matrices K i , symmetric positive-definite
matrices Pi > 0 such that
⎡ ⎤
−α Pi 0 1 2 ··· M
⎢ ∗ −I 1 2 ··· M ⎥
⎢ ⎥
⎢ ∗ ∗ −P1 −1 0 ··· 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ −P2 −1 ··· 0 ⎥<0 (5.5)
⎢ ⎥
⎢ .. .. .. .. .. .. ⎥
⎣ . . . . . . ⎦
∗ ∗ ∗ ∗ ∗ −PM −1
⎡ ⎤
−α Pi −CiT 1 2 ··· M
⎢ ∗ 2γ I − Dwi − Dwi
2 T
··· M ⎥
⎢ 1 2 ⎥
⎢ ∗ ∗ −P1 −1 0 ··· 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ −P2 −1 0 ⎥<0 (5.6)
⎢ ⎥
⎢ .. .. .. .. .. .. ⎥
⎣ . . . . . . ⎦
∗ ∗ ∗ ∗ ∗ −PM −1
d2
α N c1 δ + < c2 (5.8)
α−1
where
√ √
j = πi j ĀiT , j = πi j Bwi
T
, Āi = Ai + Bui K i .
Proof Consider a Lyapunov candidate function as: V (x(k)) = x T (k)Pi x(k). Then,
we have
96 5 Asynchronous Finite-Time Passive Control for Discrete-Time …
where ηT (k) = x T (k) w T (k) .
Meanwhile, Eq. (5.9) can be converted to
where
⎡ ⎤
M
M
⎢ ĀiT πi j P j Āi − α Pi ĀiT πi j P j Bwi ⎥
⎢ j=1 j=1 ⎥
=⎢ ⎥.
1
⎣
M
⎦
∗ T
Bwi πi j P j Bwi − I
j=1
Employing Schur complement lemma to Eq. (5.5) indicates that 1 < 0, which
means
k−1
E {V (x(k))} < α k V (x(0)) + α k−1−l w T (l)w(l)
l=0
N −1
< α N V (x(0)) + α N −1−l w T (l)w(l). (5.12)
l=0
5.2 Finite-Time Passive Control 97
N
Notice that k=1 w T (k)w(k) ≤ d 2 , then the above inequality can be changed to
d2
E {V (x(k))} = E x (k)Pi x(k) < α T N
V (x(0)) + . (5.13)
α−1
Furthermore, we have
σmax (R − 2 Pi R − 2 )x T (0)Rx(0) + d2
1 1
α−1
E x (k)Rx(k) < α
T N
. (5.14)
σmin (R − 2 Pi R − 2 )
1 1
It can be seen from inequality (5.7) that σmax (R − 2 Pi R − 2 ) < δ I and σmin (R − 2
1 1 1
d2
E x (k)Rx(k) < α c1 δ +
T N
.
α−1
According to condition (5.8), it can be further deduced that E x T (k)Rx(k) < c2 .
Based on Definition 2.1, the finite-time boundedness of the closed-loop MJS (5.3)
is proved.
In the next content, the passive performance of the closed-loop MJS (5.3) will be
analyzed.
From the closed-loop MJS (5.3), one has
2γ 2 w T (k)w(k) − 2w T (k)z(k)
T 0 −CiT x(k)
= x (k) w (k)T
. (5.15)
∗ 2γ 2 I − Dwi
T
− Dwi w(k)
where
⎡ ⎤
M
M
⎢ ĀiT πi j P j Āi − α Pi ĀiT πi j P j Bwi − CiT ⎥
⎢ j=1 j=1 ⎥
=⎢ ⎥.
2
⎣
M
⎦
∗ T
Bwi πi j P j Bwi + 2γ 2 I − Dwi
T
− Dwi
j=1
By condition (5.6), one obtains 2 < 0. Then, similar to the iteration (5.12), it
yields
98 5 Asynchronous Finite-Time Passive Control for Discrete-Time …
E {V (x(k))}
N −1 N −1
< E α N V (x(0)) − 2γ 2 α N −1−l w T (l)w(l) + 2 α N −1−l w T (l)z(l) .
l=0 l=0
(5.17)
Based on the Definition 2.1, the closed-loop MJS (5.3) is stochastic finite-time
stabilizable and satisfies the passive performance index. The proof is completed.
Next, Theorem 5.2 will be adopted to design the corresponding FTPC for the
closed-loop MJS (5.3).
Theorem 5.2 Considering a given scalar α ≥ 1, the closed-loop MJS (5.3) is
stochastic finite-time stabilizable in regard to (c1 c2 N R d) and satisfies the
prescribed passive performance index γ , if there exist matrices Ni , and real symmet-
ric matrices Wi > 0 such that inequality (5.8) and the following conditions hold:
⎡ ⎤
−αWi 0 1 2 ··· M
⎢ ∗ −I 1 2 · · · M ⎥
⎢ ⎥
⎢ ∗ ∗ −W1 0 ··· 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ −W2 ··· 0 ⎥ <0 (5.20)
⎢ ⎥
⎢ .. .. .. .. .. . ⎥
⎣ . . . . . .. ⎦
∗ ∗ ∗ ∗ ∗ −W M
⎡ ⎤
−αWi −Wi CiT 1 2 ··· M
⎢ ∗ 2γ I − Dwi − Dwi
2 T
· · · M ⎥
⎢ 1 2 ⎥
⎢ ∗ ∗ −W1 0 ··· 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ −W2 ··· 0 ⎥ <0 (5.21)
⎢ ⎥
⎢ .. .. .. .. .. . ⎥
⎣ . . . . . .. ⎦
∗ ∗ ∗ ∗ ∗ −W M
5.3 Asynchronous Finite-Time Passive Control 99
Wi − R −1 < 0 (5.22)
−Wi I
<0 (5.23)
∗ −δ R
where
√
Wi = Pi−1 , j = πi j [Wi AiT + G iT Bui
T
].
From the results shown above, we can see that closed-loop MJS (5.3) is stochas-
tic finite-time stabilizable and passive by designing the FTPC, which is under an
assumption that the controller mode and the system mode are synchronized. How-
ever, this assumption cannot be maintained in some practical applications, which
prompts us to use hidden Markovian model to design asynchronous controllers in
the next subsection.
Similarly, the tasks in this subsection are summarized as: design an asynchronous
controller (5.24) for the MJS (5.1) so that the closed-loop MJS (5.26) is stochastic
finite-time stabilizable and meets the desired passive performance. The results are
shown in following theorem.
Theorem 5.3 For given scalars α ≥ 1, if there exist matrices K q , X , real symmetric
matrices Pi > 0, and positive scalars δ, and γ such that
−1
M
− πi j P j < −X (5.27)
j=1
⎡ ⎤
−α Pi 0 i1 i2 ···i Q
⎢ ∗ −I ϒi1 ϒi2 ···ϒi Q ⎥
⎢ ⎥
⎢ ∗ ∗ −X 0 ··· 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ −X ··· 0 ⎥ <0 (5.28)
⎢ ⎥
⎢ .. .. .. .. .. .. ⎥
⎣ . . . . . . ⎦
∗ ∗ ∗ ∗ ∗ −X
⎡ ⎤
−α Pi −CiT i1 i2 ··· i Q
⎢ ∗ 2γ 2 I − Dwi − Dwi
T
ϒi1 ϒi2 ··· ϒi Q ⎥
⎢ ⎥
⎢ ∗ ∗ −X 0 ··· 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ −X ··· 0 ⎥ <0 (5.29)
⎢ ⎥
⎢ .. .. .. .. .. .. ⎥
⎣ . . . . . . ⎦
∗ ∗ ∗ ∗ ∗ −X
d2
α N c1 δ + < c2 (5.31)
α−1
where
iq = φiq Ãiq
T
, ϒiq = φiq Bwi
T
, Ãiq = Ai + Bui K q .
Proof Consider the following Lyapunov candidate function: V (x(k)) = x T (k)Pi x(k).
Then, we have
Q
M
T
= φiq πi j (Ai + Bui K q )x(k) + Bwi w(k)
q=1 j=1
P j (Ai + Bui K q )x(k) + Bwi w(k) − αx T (k)Pi x(k). (5.32)
Then, the rest of the proof that the closed-loop MJS (5.26) is stochastic finite-time
stabilizable is the same as that in Theorem 5.1.
Next, we will analyze the passive performance of the closed-loop MJS (5.26).
Combining the closed-loop MJS (5.26) and Eq. (5.32), it yields
Recalling to inequalities (5.27) and (5.29), and using Schur complement lemma,
it yields
Then, similar to the proof in the Theorem 5.1, the passivity of the closed-loop
MJS (5.26) can be guaranteed. Generally speaking, under the conditions that matrix
inequalities (5.27)–(5.31) hold, the closed-loop MJS (5.26) is stochastic finite-time
stabilizable and passive. The proof is completed.
Due to the nonlinear matrix inequalities (5.27)–(5.31), the asynchronous FTPC
gains cannot be obtained. Therefore, it is necessary to introduce the following theo-
rem to solve the nonlinear matrix inequalities (5.27)–(5.31).
Theorem 5.4 Considering a positive scalar α ≥ 1, the closed-loop MJS (5.26) is
stochastic finite-time stabilizable in regard to (c1 c2 N R d) and satisfies the
prescribed passive performance index γ , if there exist real symmetric matrices Wi >
0, matrices Hq , S, X and positive scalars δ, and γ such that LMI (5.31) and the
following conditions hold:
⎡ √ √ √ ⎤
−X πi1 X πi1 X ··· πi M X
⎢ ∗ −W1 0 ··· 0 ⎥
⎢ ⎥
⎢ ∗ ∗ −W ··· 0 ⎥
⎢ 2 ⎥<0 (5.37)
⎢ .. .. .. .. .. ⎥
⎣ . . . . . ⎦
∗ ∗ ∗ ∗ −W M
⎡ ⎤
α(Wi − S T − S) 0 i1 i2 · · · i Q
⎢ ∗ −I ϒi1 ϒi2 · · · ϒi Q ⎥
⎢ ⎥
⎢ ∗ ∗ −X 0 · · · 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ −X · · · 0 ⎥ <0 (5.38)
⎢ ⎥
⎢ .. .. .. .. . . .. ⎥
⎣ . . . . . . ⎦
∗ ∗ ∗ ∗ ∗ −X
⎡ ⎤
α(Wi − S T − S) −SCiT i1 i2 ··· i Q
⎢ ∗ 2γ I − Dwi − Dwi
2 T
ϒi1 ϒi2 ··· ϒi Q ⎥
⎢ ⎥
⎢ ∗ ∗ −X 0 ··· 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ −X ··· 0 ⎥ <0 (5.39)
⎢ ⎥
⎢ .. .. .. .. .. .. ⎥
⎣ . . . . . . ⎦
∗ ∗ ∗ ∗ ∗ −X
Wi − R −1 < 0 (5.40)
−Wi I
<0 (5.41)
∗ −δ R
where
Wi = Pi−1 , iq = φiq [S AiT + HqT Bui
T
].
5.4 Simulation Analysis 103
M
−X −1 + πi j P j < 0. (5.42)
j=1
Using Schur complement lemma and letting Wi = Pi−1 , inequality (5.42) can be
transformed into
⎡ √ √ √ ⎤
−X −1 πi1 πi2 · · · πi M
⎢ ∗ −W1 0 · · · 0 ⎥
⎢ ⎥
⎢ ∗ ∗ −W2 · · · 0 ⎥
⎢ ⎥ < 0. (5.43)
⎢ .. .. .. . . .. ⎥
⎣ . . . . . ⎦
∗ ∗ ∗ ∗ −W M
In this section, an example will be applied to show the effectiveness and feasibility
of the developed results. Consider a two-mode discrete-time MJS (5.1) with the
following form:
0.8 0.1 0.7 0.2 1.1 0.1
A1 = , A2 = , Bu1 = Bu2 = , Bw1 = Bw2 = ,
0.7 0.4 0.3 0.9 1.1 0.1
C1 = C2 = 0.1 0.1 , Dw1 = Dw2 = [0.5] .
Applying the obtained FTPC gains to the closed-loop MJS (5.3), the simulation
results are depicted in Figs. 5.1 and 5.2. Figure 5.1 describes the jumping modes. The
states of the open-loop and closed-loop MJSs are shown in Fig. 5.2 simultaneously.
Comparing the trajectories of the open-loop and closed-loop MJSs, it shows that
the closed-loop MJS (5.3) is stochastic finite-time stabilizable and meets the desired
performance level.
Case II: Asynchronous finite-time passive control case.
By solving LMIs (5.31) and (5.37)–(5.41), we have:
K 1 = −0.2493 −0.2057 , K 2 = −0.2784 −0.1609 ,
Fig. 5.2 The states of the open-loop and closed-loop MJSs in Case I
The simulation results under the asynchronous FTPC are shown in Figs. 5.3 and
5.4. Figure 5.3 depicts the jumping modes. Figure 5.4 describes the state of the open-
loop and closed-loop MJSs simultaneously, which indicates that the closed-loop MJS
(5.26) is stochastic finite-time stabilizable and meets the desired performance level.
Comparing the results in these two cases, the asynchronous FTPC is more realistic
than the synchronous FTPC, but the boundedness and passive performance are not
as good as that of the synchronous FTPC due to that the relevant parameters c2 and
106 5 Asynchronous Finite-Time Passive Control for Discrete-Time …
x2
x1
time
Fig. 5.4 The states of the open-loop and closed-loop MJSs in Case II
γ under the designed asynchronous FTPC is greater than by the synchronous FTPC.
It also indicates that the asynchronous FTPC sacrifices the system performance in
order to meet the actual situation.
5.5 Conclusions
From the perspective of system internal stability and energy relationship, this chapter
studies the finite-time passive control for discrete-time MJSs to make sure that the
closed-loop system is finite-time bounded and the system energy function decaying
according to the desired rate. Then the asynchronous FTPC is studied by considering
the more practical situation that the controller’s mode is not synchronized with the
system mode. Next chapter will combine the finite-time performance with sliding
mode control to achieve better performance indicators for discrete-time MJSs.
References
1. Zhang, H., Gray, W.S., Gonzalez, O.R.: Performance analysis of digital flight control systems
with rollback error recovery subject to simulated neutron-induced upsets. IEEE Trans. Control
Syst. Technol. 16(1), 46–59 (2007)
2. Bäuerle, N., Rieder, U.: Markovian Decision Processes with Applications to Finance. Springer
Science & Business Media Publishing, Berlin (2011)
References 107
3. Dong, S., Wu, Z.G., Su, H., Shi, P., Karimi, H.R.: Asynchronous control of continuous-time
nonlinear Markovian jump systems subject to strict dissipativity. IEEE Trans. Autom. Control
64(3), 1250–1256 (2018)
4. Zhang, X., Wang, H., Stojanovic, V., Cheng, P., He, S., Luan, X., Liu, F.: Asynchronous fault
detection for interval type-2 fuzzy nonhomogeneous higher-level Markovian jump systems
with uncertain transition probabilities. IEEE Trans. Fuzzy Syst. (2021). https://doi.org/10.
1109/TFUZZ.2021.3086224
5. Zhang, X., He, S.P., Stojanovic, V., Luan, X.L., Liu, F.: Finite-time asynchronous dissipative
filtering of conic-type nonlinear Markovian jump systems. Sci. China Inf. Sci. 64, 1–12 (2021)
6. Willems, J.C.: Dissipative dynamical systems part I: general theory. Arch. Ration. Mech. Anal.
45(5), 321–351 (1972)
7. Shan, Y., She, K., Zhong, S., Cheng, J., Wang, W., Zhao, C.: Event-triggered passive control for
Markovian jump discrete-time systems with incomplete transition probability and unreliable
channels. J. Franklin Inst. 356(15), 8093–8117 (2019)
8. Wu, Z.G., Shi, P., Shu, Z., Su, H., Lu, R.: Passivity-based asynchronous control for Markovian
jump systems. IEEE Trans. Autom. Control 62(4), 2020–2025 (2016)
9. Chen, Y., Chen, Z., Chen, Z., Xue, A.: Observer-based passive control of non-homogeneous
Markovian jump systems with random communication delays. Int. J. Syst. Sci. 51(6), 1133–
1147 (2020)
10. Shen, H., Su, L., Park, J.H.: Extended passive filtering for discrete-time singular Markovian
jump systems with time-varying delays. Signal Process. 128, 68–77 (2016)
11. He, S.P., Liu, F.: Exponential passive filtering for a class of nonlinear jump systems. J. Syst.
Eng. Electron. 20(4), 829–837 (2009)
12. Luan, X.L., Zhao, C.Z., Liu, F.: Finite-time stabilization of switching Markovian jump systems
with uncertain transition rates. Circ. Syst. Signal Process. 34(12), 3741–3756 (2015)
13. Yan, Z., Zhang, W., Zhang, G.: Finite-time stability and stabilization of Itô stochastic systems
with Markovian switching: mode-dependent parameters approach. IEEE Trans. Autom. Control
60(9), 2428–2433 (2015)
14. Qi, W., Kao, Y., Gao, X.: Further results on finite-time stabilization for stochastic Markovian
jump systems with time-varying delay. Int. J. Syst. Sci. 48(14), 2967–2975 (2017)
15. Zhang, Y., Liu, C., Mu, X.: Robust finite-time stabilization of uncertain singular Markovian
jump systems. Appl. Math. Model. 36(10), 5109–5121 (2012)
16. Shen, M., Yan, S., Zhang, G., Park, J.H.: Finite-time H∞ static output control of Markovian
jump systems with an auxiliary approach. Appl. Math. Comput. 273, 553–561 (2016)
17. Zong, G., Yang, D., Hou, L., Wang, Q.: Robust finite-time H∞ control for Markovian jump sys-
tems with partially known transition probabilities. J. Franklin Inst. 350(6), 1562–1578 (2013)
Chapter 6
Finite-Time Sliding Mode Control
for Discrete-Time Markovian Jump
Systems
Abstract This chapter focuses on the finite-time sliding mode control problem for
discrete-time Markovian jump systems (MJSs) with uncertainties. Firstly, the sliding
mode function and sliding mode controller are designed for discrete-time MJSs. By
using Lyapunov–Krasovskii functional method, some mode-dependent weight matri-
ces are obtained such that the closed-loop MJSs are stochastic finite-time stabilizable
and fulfill the given H∞ performance index. Moreover, an appropriate asynchronous
sliding mode controller is constructed and the rationality conditions of the coefficient
parameter are given and proved for the purpose that the closed-loop MJSs can be
driven onto the sliding surface. Also, the transient performance of the discrete-time
MJSs during the reaching and sliding motion phase has been investigated, respec-
tively. Finally, we use a numerical example to show the effectiveness of the designed
results.
6.1 Introduction
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 109
X. Luan et al., Robust Control for Discrete-Time Markovian Jump Systems in the
Finite-Time Domain, Lecture Notes in Control and Information Sciences 492,
https://doi.org/10.1007/978-3-031-22182-8_6
110 6 Finite-Time Sliding Mode Control for Discrete-Time Markovian …
In recent years, along with the research boom of stochastic systems, sliding mode
control method has been introduced into Markovian jump systems (MJSs), and some
meaningful results have been achieved [8, 9]. The sliding mode control method
can suppress parameter perturbation and external interference to obtain the desired
performance. This ideal robustness is extremely effective in practical applications.
The sliding mode control problem for Markovian jump systems with delays via
asynchronous approach is studied in [10]. Considering the Markovian jump non-
linear systems with actuator faults, the adaptive sliding mode control problem was
studied in [5]. Considering the asynchronous between system state and controller,
the asynchronous sliding mode control problem of Markovian jump systems with
time-varying delays and partly accessible mode detection probabilities was addressed
in [6].
On the other hand, a large number of research results of MJSs based on sliding
mode control scheme are carried out in sense of Lyapunov asymptotic stability,
focusing on the problem that the system asymptotically astringents to the equilibrium
state when the running time is infinite. However, in many practical applications, it is
often necessary to study the dynamic boundedness of the system within a finite-time
interval [11–16]. For example, to control the aircraft to operate in a specific area,
the chemical reaction requires that the temperature and pressure do not exceed a
bounded value in finite-time. Therefore, it is of profound significance and necessity
to study the finite-time stabilization of MJSs by means of the sliding mode control
method.
In this chapter, the sliding mode control and asynchronous sliding mode control
problems for discrete-time MJSs in the finite-time domain have been discussed.
Firstly, a mode-dependent sliding SMC has been designed such that the closed-loop
MJSs can be driven onto the sliding surface. Then, some sufficient conditions on
the stochastic finite-time stabilization of the closed-loop MJSs are given. Moreover,
the main design results are extended to the discrete-time MJSs with asynchronous
phenomena. Finally, a numerical example is given to show the effectiveness of the
designed results.
F(rk ), Bu (rk ), C(rk ), Bw (rk ), and Dw (rk ) are known given matrices. Without special
instructions, Bu (rk ) is a full column rank matrix.
For any rk = i, we denote A(rk ), E(rk ), F(rk ), Bu (rk ), C(rk ), Bw (rk ) and Dw (rk ),
as Ai , E i , Fi , Bui , Ci , Bwi , and Dwi , respectively. Thus, the discrete-time MJS (6.1)
can be rewritten as
⎧
⎨ x(k + 1) = (Ai + E i (k)Fi )x(k) + Bui u(k) + Bwi w(k)
z(k) = Ci x(k) + Dwi w(k) . (6.2)
⎩
x(k) = x0 , k = 0
For the discrete-time MJS (6.2), we select the following mode-dependent SMF:
Proof For the discrete-time MJS (6.2), it follows from the mode-dependent SMF
S(k) and the closed-loop MJS (6.5) that:
Recalling to the coefficient parameter ηi (k) in condition (6.6), one can get
Thus, the closed-loop MJS (6.5) can be driven onto S(k) = 0 under SMC (6.4).
The proof is completed.
It follows from the sliding mode control theory that the following equivalent
controller (6.11) can be obtained if the closed-loop MJS (6.5) maintains in the sliding
surface S(k + 1) = S(k) = 0:
Substituting the equivalent controller (6.11) into the discrete-time MJS (6.2), the
following closed-loop MJS can be obtained:
⎧
⎨ x(k + 1) = (Ai + E i (k)Fi )x(k) + Bwi [I − Bui (G i Bui )−1 G i ]w(t)
z(k) = Ci x(k) + Dwi w(k) . (6.12)
⎩
x(k) = x0 , k = 0
M
E{V2 (k + 1, i)} = πi j x T (k + 1)P j x(k + 1)
j=1
M
= πi j (x T (k)(AiT + FiT (k)T E iT )
j=1
Lemma 2.1 and using the Schur complement lemma simultaneously, inequality (6.18)
holds according to inequality (6.13).
114 6 Finite-Time Sliding Mode Control for Discrete-Time Markovian …
k
E{V2 (k, i)} < βik V2 (x(0), r0 ) + βil w T (k − l)w(k − l)
l=1
k
< βik V2 (x(0), r0 ) + βik−l w T (k − l)w(k − l)
l=1
k
< βik x T (0)Pi x(0) + βik−l w T (k − l)w(k − l) . (6.19)
l=1
−1 −1 −1 −1
Defining λ Pi = maxi∈M {λmax (R 2 Pi R 2 )} and λ Pi = mini∈M {λmin (R 2 Pi R 2 )},
we can get from inequality (6.19) that
λ Pi E{x T (k)Rx(k)}
k
< βiN λ Pi x T (0)Rx(0) + βik−l w T (k − l)w(k − l) < βiN (λ Pi c1 + d 2 ). (6.20)
l=1
βiN (λ Pi c1 + d 2 )
E{x T (k)Rx(k)} < . (6.21)
λ Pi
It follows from condition (6.14) that E{x T (k)Rx(k)} < c2 for any k ∈ [0 N ]. That
means the closed-loop MJS (6.12) is stochastic finite-time stabilizable in regard to
(c1 c2 N R d). The proof is completed.
Proof The same Lyapunov function is selected as that in (6.15). Considering the
H∞ performance index, we introduce the following auxiliary function:
Recalling to Lemma 2.1 and using Schur complement lemma, inequality (6.24) holds
according to inequality (6.22). Define
N
J=E [z (k)z(k) − βi w (k)w(k)] .
T T
(6.25)
k=1
k=1
Considering the inequality (6.23), it follows that J < 0. Thus, one can get:
N
N
E z (k)z(k) < βi E
T
w (k)w(k) .
T
(6.27)
k=1 k=1
√
According to the Definition 3.2 in Chap. 3, we get J < 0 with γ = βi . The
proof is completed.
Q
where φiq ≥ 0 and q=1 φiq = 1.
Then, the following asynchronous mode-dependent SMC is selected as:
Firstly, the reachability problem of the sliding surface will be analyzed in the
following theorem.
Theorem 6.4 The closed-loop MJS (6.31) can be driven onto S(k) = 0 during the
finite-time interval [0 N ] under the asynchronous SMC (6.30), if the coefficient
parameter ηi (k) satisfies
Proof For the discrete-time MJS (6.2), it follows from (6.28) that:
Thus, we have
E {V3 (k, i)} = E S T (k + 1)S(k + 1) − S T (k)S(k)
= νi + S T (k)(G i Bui )−1 (S(k + 1) − S(k))
= νi + S T (k)(K q x(k) + u(k) + (G i Bui )−1 (G i Bwi )w(k)
− (G i Bui )−1 S(k)) (6.36)
−1
where νi = (S (k+1)−S (k))(G i2Bui ) (S(k+1)−S(k)) .
T T
Thus, there exists a large enough coefficient parameter κi such that E {V3
(k, i)} < 0 holds, which means the closed-loop MJS (6.31) can be driven onto
S(k) = 0 during the finite-time interval [0 N ] under the asynchronous SMC (6.30).
Then, we prove that the closed-loop MJS (6.31) can be driven onto S(k) = 0 during
[0 N ] with N < N .
From condition (6.37), one can get
N
S T (k)(G i Bui )−1 (S(k) − S(k + 1))
(1 + N )κi ≤ . (6.39)
k=0
S(k)
N
S T (k)(G i Bui )−1 (S(k) − S(k + 1))
(1 + N )κi ≤
k=0
S(k)
S T (0)(G i Bui )−1 (S(0) − S(1)) 2V1 (0)
≤ ≤ . (6.40)
S(0) S(0)
118 6 Finite-Time Sliding Mode Control for Discrete-Time Markovian …
2V1 (0)
N ≤ − 1. (6.41)
κi S(0)
Thus, we have N < N by means of equation (6.33). That means the closed-loop
MJS (6.31) can be driven onto S(k) = 0 during [0 N ] with N < N . The proof is
completed.
It is known that the closed-loop MJS (6.31) has two phases for the given [0 N ].
The first one is the reaching phase within [0 N ], and the second one is the sliding
motion phase within [N N ]. Next, we will analyze the transient performance of
system (6.31) over [0 N ] and [N N ].
Theorem 6.5 For given scalars βi ≥ 1, and c > c1 , the closed-loop MJS (6.31) is
stochastic finite-time stabilizable with respect to (c1 c N R d), if there exist
mode-dependent weight matrix Pi such that
⎡ Q M
⎤
⎢ −βi Pi 0 φiq πi j ÂiT P j ⎥
⎢ q=1 j=1 ⎥
⎢ ⎥
⎢ M
⎥
⎢ ∗ −βi I πi j Bwi
T
Pj ⎥<0 (6.42)
⎢ ⎥
⎢ j=1
⎥
⎣ M ⎦
∗ ∗ − πi j P j
j=1
Proof Selecting the same Lyapunov function as that in (6.15) and considering the
closed-loop MJS (6.31), one can get
Q
M
E{V2 (k + 1, i)} = φiq πi j x T (k + 1)P j x(k + 1)
q=1 j=1
Q
M
= φiq πi j ( Âi x(k) − Bui ηi (k)sign(S(k)) + Bwi w(k))T P j
q=1 j=1
Q Q
where B1i = q=1 φiq M j=1 πi j Âi P j Âi − βi Pi
T
and B2i = q=1 φiq M
j=1
πi j ( Âi P j Bwi ). Recalling to Lemma 2.1 and using the Schur complement lemma,
inequality (6.47) holds by inequality (6.13).
From Eq. (6.46), we have
k
E{V2 (k, i)} < βik V2 (x(0), r0 ) + βil w T (k − l)w(k − l)
l=1
k
< βi V2 (x(0), r0 ) +
k
βik−l w T (k − l)w(k − l)
l=1
k
< βik x T (0)Pi x(0) + βik−l w T (k − l)w(k − l) . (6.48)
l=1
−1 −1 −1 −1
Defining λ Pi = maxi∈M {λmax (R 2 Pi R 2 )} and λ Pi = mini∈M {λmin (R 2 Pi R 2 )},
we can get from condition (6.48) that
k
λ Pi E{x (k)Rx(k)} <
T
βiN λ Pi x (0)Rx(0) +
T
βik−l w T (k − l)w(k − l)
l=1
βiN (λ Pi c1 + d 2 )
E{x T (k)Rx(k)} < . (6.50)
λ Pi
It follows from condition (6.43) that E{x T (k)Rx(k)} < c for any k ∈ [0 N ]. That
means the closed-loop MJS (6.31) is stochastic finite-time stabilizable in regard to
(c1 c N R d). The proof is completed.
Theorem 6.6 For given scalars βi ≥ 1 and c > c1 , the closed-loop MJS (6.31) is
stochastic finite-time stabilizable with H∞ performance in regard to (c1 c N R d),
if there exist mode-dependent weight matrix Pi such that Eqs. (6.43) and (6.44) and
the following inequality hold:
120 6 Finite-Time Sliding Mode Control for Discrete-Time Markovian …
⎡ Q M
⎤
T
⎢ Ci Ci − βi Pi CiT Dwi φiq πi j ÂiT P j ⎥
⎢ q=1 j=1 ⎥
⎢ ⎥
⎢ M
⎥
⎢ ∗ −βi I + Dwi
T
Dwi πi j Bwi
T
Pj ⎥ < 0. (6.51)
⎢ ⎥
⎢ j=1
⎥
⎣ M ⎦
∗ ∗ − πi j P j
j=1
Proof The same Lyapunov function is selected as that in (6.15). Considering the
H∞ performance index, we introduce the following auxiliary function:
Recalling to Lemma 2.1 and using Schur complement lemma, inequality (6.53)
holds according to inequality (6.51). Define
N
J=E [z (k)z(k) − βi w (k)w(k)] .
T T
(6.54)
k=1
Considering the inequality (6.52), it follows that J < 0. Thus, one can get:
N
N
E z (k)z(k) < βi E
T
w (k)w(k) .
T
(6.56)
k=1 k=1
√
Recalling to Definition 3.2 and inequality (6.56), we have J < 0 with γ = βi .
The proof is completed.
It follows from the sliding mode control theory that the equivalent controller (6.57)
can be obtained if the closed-loop MJS (6.31) maintain in S(k) = 0
Substituting the equivalent controller (6.57) into discrete-time MJS (6.2), the
following closed-loop MJS can be obtained:
6.3 Asynchronous Finite-Time Sliding Mode Control 121
⎧
⎨ x(k + 1) = (Ai + E i (k)Fi − Bui K q )x(t)
z(k) = Ci x(k) + Dwi w(k) . (6.58)
⎩
x(k) = x0 , k = 0
Theorem 6.7 For given scalars βi ≥ 1, N < N , and c < c2 , the closed-loop
MJS (6.58) is stochastic finite-time stabilizable with H∞ performance in regard
to (c1 c c2 N N R d), if there exist mode-dependent weight matrix Pi such that
⎡ ⎤
Q M T
⎢ −βi Pi 0 φiq
πi j Ai P j ⎥
⎢ q=1 j=1 ⎥
⎢ ⎥
⎢ ∗ −βi I 0 ⎥<0 (6.59)
⎢ ⎥
⎣ M
⎦
∗ ∗ − πi j P j
j=1
Proof For the closed-loop MJS (6.58), the same Lyapunov function is selected as
(6.15). Thus, we have
Q
M
E{V2 (k + 1, i)} = φiq πi j x T (k + 1)P j x(k + 1)
q=1 j=1
Q
M
T
= φiq πi j x T (t)Ai P j Ai . (6.62)
q=1 j=1
k
E{V2 (k, i)} < βik V2 (x(N )) + βil w T (k − l)w(k − l)
l=N
k
< βik (V2 (x(N ))) + βik−l w T (k − l)w(k − l)
l=1
k
< βik E{x T (N )Pi x(N )} + βik−l w T ((k − l)w(k − l)). (6.64)
l=1
Defining
−1 −1
λ Pi = max{λmax (R 2 Pi R 2 )},
i∈M
−1 −1
and λ Pi = mini∈M {λmin (R 2 Pi R 2 )}, we can get from condition (6.48) that
k
λ Pi E{x T (k)Rx(k)} < βiN −N λ Pi x T (N )Rx(N ) + βik−l w T (k − l)w(k − l)
l=1
< βiN −N (λ Pi c + d ). 2
(6.65)
βiN −N (λ Pi c + d 2 )
E{x T (k)Rx(k)} < . (6.66)
λ Pi
It follows from Eq. (6.60) that E{x T (k)Rx(k)} < c2 for any k ∈ [N N ]. That
means the closed-loop MJS (6.58) is stochastic finite-time stabilizable with respect
to (c1 c c2 N N R d). The proof is completed.
Proof The same Lyapunov function is selected as (6.15). Considering the H∞ per-
formance index, we introduce the same auxiliary function as (6.52). Then, similar to
the proof in the Theorem 6.6, the stochastic finite-time stabilizable with H∞ perfor-
mance of the closed-loop MJS (6.58) can be guaranteed. The proof is completed.
It follows from Theorems 6.5 to 6.8 that the closed-loop discrete-time MJS is
stochastic finite-time stabilizable with H∞ performance over the [0 N ] and [N N ]
if and only if the inequalities (6.43)–(6.44), (6.51), (6.60)–(6.61), and (6.67) hold
simultaneously. In the following theorem, the asynchronous controller gain K q will
be obtained to ensure the closed-loop MJS stochastic finite-time stabilizable with
H∞ performance over [0 N ] and [N N ] simultaneously.
Theorem 6.9 For given scalars βi ≥ 1, and 0 < c1 < c < c2 , the closed-loop MJS
(6.5) is stochastic finite-time stabilizable with H∞ performance with respect to
(c1 c2 N R d) over [0 N ] and [N N ] simultaneously, if there exist mode-dependent
weight matrix X i , matrices Hq and Yq such that
1iq 2i
<0 (6.68)
∗ −diag{−I, −βi−1 I, −I, −I, −I, }
3iq 2i
<0 (6.69)
∗ −diag{−I, −βi−1 I, −I, −I, −I, }
where
⎡ Q M
⎤
⎢ −βi X i φiq πi j Hq AiT + T
0 YqT Bui 0 ⎥
⎢ q=1 j=1 ⎥
⎢ ⎥
⎢ M
⎥
⎢ ∗ −βi Hi πi j X i Bwi
T
0 ⎥
1iq =⎢ ⎥,
⎢ j=1
⎥
⎢ M M ⎥
⎢ ∗ ∗ − πi j X i πi j X i E i ⎥
⎣ j=1 j=1
⎦
∗ ∗ ∗ −βi I
124 6 Finite-Time Sliding Mode Control for Discrete-Time Markovian …
\[
\Xi_{2i}=\begin{bmatrix}
X_i C_i^{T} & X_i F_i^{T} & X_i C_i^{T} & 0 & 0\\
0 & 0 & 0 & X_i D_{wi}^{T} & X_i D_{wi}^{T}\\
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0
\end{bmatrix},
\]
\[
\Xi_{3iq}=\begin{bmatrix}
-\beta_i X_i & \sum_{q=1}^{Q}\phi_{iq}\sum_{j=1}^{M}\pi_{ij}\big(H_q A_i^{T}+Y_q^{T}B_{ui}^{T}\big) & 0 & 0\\
* & -\beta_i H_i & 0 & 0\\
* & * & -\sum_{j=1}^{M}\pi_{ij}X_i & \sum_{j=1}^{M}\pi_{ij}X_i E_i\\
* & * & * & -\beta_i I
\end{bmatrix}.
\]

\[
\begin{bmatrix}
C_i^{T}C_i-\beta_i P_i & C_i^{T}D_{wi} & \sum_{q=1}^{Q}\phi_{iq}\sum_{j=1}^{M}\pi_{ij}\big(A_i^{T}-K_q^{T}B_{ui}^{T}\big)P_j & 0\\
* & -\beta_i I+D_{wi}^{T}D_{wi} & 0 & 0\\
* & * & -\sum_{j=1}^{M}\pi_{ij}P_j & \sum_{j=1}^{M}\pi_{ij}P_j E_i\\
* & * & * & -\beta_i I
\end{bmatrix}<0 \qquad (6.74)
\]

\[
\begin{bmatrix} \Xi_{3iq} & \Xi_{2i}\\ * & \mathrm{diag}\{-I,\,-\beta_i^{-1}I,\,-I,\,-I,\,-I\}\end{bmatrix}<0 \qquad (6.76)
\]
where

\[
\Xi_{1iq}=\begin{bmatrix}
-\beta_i X_i & \sum_{q=1}^{Q}\phi_{iq}\sum_{j=1}^{M}\pi_{ij}X_i\big(A_i^{T}+K_q^{T}B_{ui}^{T}\big) & 0 & 0\\
* & -\beta_i H_i & \sum_{j=1}^{M}\pi_{ij}X_i B_{wi}^{T} & 0\\
* & * & -\sum_{j=1}^{M}\pi_{ij}X_i & \sum_{j=1}^{M}\pi_{ij}X_i E_i\\
* & * & * & -\beta_i I
\end{bmatrix},
\]
\[
\Xi_{2i}=\begin{bmatrix}
X_i C_i^{T} & X_i F_i^{T} & X_i C_i^{T} & 0 & 0\\
0 & 0 & 0 & X_i D_{wi}^{T} & X_i D_{wi}^{T}\\
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0
\end{bmatrix},
\]
\[
\Xi_{3iq}=\begin{bmatrix}
-\beta_i X_i & \sum_{q=1}^{Q}\phi_{iq}\sum_{j=1}^{M}\pi_{ij}X_i\big(A_i^{T}+K_q^{T}B_{ui}^{T}\big) & 0 & 0\\
* & -\beta_i H_i & 0 & 0\\
* & * & -\sum_{j=1}^{M}\pi_{ij}X_i & \sum_{j=1}^{M}\pi_{ij}X_i E_i\\
* & * & * & -\beta_i I
\end{bmatrix}.
\]
Q M
q=1 φiq j=1 πi j X i (Ai + K q Bui ) in 3iq , we define X iq =
T T T
For 1iq and
Q
L i Hq with a non-singular unit matrix L i . q=1 φiq M j=1 i j X i (Ai + K q Bui ) =
π T T T
Q M
q=1 φiq j=1 πi j Hq Ai + Yq Bui can be obtained by defining Yq = K q Hq . Thus,
T T T
6.4 Simulation Analysis

In this section, a numerical example is given to show the effectiveness of our developed results. Consider the following two-mode discrete-time MJS with parameters given by:

\[
A_1=\begin{bmatrix}-0.25 & -0.15\\ 0.43 & -0.31\end{bmatrix},\quad
A_2=\begin{bmatrix}-0.46 & 0.26\\ 0.24 & -0.57\end{bmatrix},\quad
B_{u1}=\begin{bmatrix}0.1\\ 0.5\end{bmatrix},\quad
B_{u2}=\begin{bmatrix}0.2\\ 0.3\end{bmatrix},
\]
\[
E_1=\begin{bmatrix}0.2 & 0.4\\ 0.7 & 0.1\end{bmatrix},\quad
E_2=\begin{bmatrix}0.3 & 0.5\\ 0.3 & 0.4\end{bmatrix},\quad
F_1=\begin{bmatrix}0.2 & 0.5\end{bmatrix},\quad
F_2=\begin{bmatrix}0.3 & 0.4\end{bmatrix},
\]
\[
C_1=\begin{bmatrix}0.1 & 0.5\end{bmatrix},\quad
C_2=\begin{bmatrix}1.2 & 1.6\end{bmatrix},\quad
D_{w1}=0.5,\quad D_{w2}=0.3.
\]

Assume the transition probability matrix is \([\pi_{ij}]=\begin{bmatrix}0.4 & 0.6\\ 0.3 & 0.7\end{bmatrix}\). The initial conditions are given by x₀ = [0.3  0.4]ᵀ, c₁ = 0.4, and w(t) = 0.8 cos(t). We also set β₁ = 0.2, β₂ = 0.3, N = 3, \(\bar{\lambda}_{P_i}=0.8\), \(\underline{\lambda}_{P_i}=0.3\), and d² = 0.6.

By solving Theorem 6.3, we obtain

\[
P_1=\begin{bmatrix}8.2520 & 1.7494\\ 1.7494 & 9.2918\end{bmatrix},\quad
P_2=\begin{bmatrix}-13.0905 & -2.6415\\ -2.6415 & -8.8332\end{bmatrix},
\]
\[
G_1=\begin{bmatrix}1.6999 & 4.8209\end{bmatrix},\quad
G_2=\begin{bmatrix}-3.4106 & -3.1782\end{bmatrix},\quad
c_2=5.9353.
\]
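The two-mode example above can be reproduced numerically without any special tooling. The following minimal Python sketch (an illustration added here, not part of the original derivation) samples the Markov chain from the transition probability matrix and propagates the nominal open-loop state while monitoring x^T R x; the weighting matrix R is assumed to be the identity, and the uncertainty and disturbance channels are omitted for brevity.

```python
import numpy as np

# Two-mode example data from Sect. 6.4 (R = I assumed; uncertainty terms omitted)
A = [np.array([[-0.25, -0.15], [0.43, -0.31]]),
     np.array([[-0.46,  0.26], [0.24, -0.57]])]
Pi = np.array([[0.4, 0.6],        # transition probability matrix [pi_ij]
               [0.3, 0.7]])
x0 = np.array([0.3, 0.4])
N, c2 = 3, 5.9353
rng = np.random.default_rng(0)

def simulate(n_runs=5000):
    """Monte Carlo estimate of E{x^T R x} for the nominal open-loop MJS."""
    vals = np.zeros(N + 1)
    for _ in range(n_runs):
        x, mode = x0.copy(), 0                   # start in mode 1
        vals[0] += x @ x
        for k in range(N):
            x = A[mode] @ x                      # nominal open-loop update
            mode = rng.choice(2, p=Pi[mode])     # sample next Markov mode
            vals[k + 1] += x @ x
    return vals / n_runs

print("E{x^T x} over k = 0..N:", simulate())
print("finite-time bound c2  :", c2)
```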
6.5 Conclusion
The finite-time SMC and asynchronous SMC design problems for discrete-time MJSs
are investigated in this chapter. Firstly, we design a mode-dependent SMC to drive
the closed-loop MJSs onto the sliding surface with the help of the Lyapunov function
method. Moreover, some sufficient conditions on the stochastic finite-time stabiliza-
tion with H∞ performance of the closed-loop MJSs are given. Then, considering the
asynchronous phenomenon of the discrete-time MJSs, we design a mode-dependent
asynchronous SMC to drive the closed-loop systems onto the sliding surface. In
addition, some sufficient conditions on the stochastic finite-time stabilization with
H∞ performance over the finite-time interval [0, N] are provided. The next chapter
will consider the transient performance at a specific frequency band to reduce the
conservativeness of controller design from the perspective of frequency domain.
References
1. Ren, C.C., He, S.P.: Sliding mode control for a class of positive systems with Lipschitz non-
linearities. IEEE Access 6, 49811–49816 (2018)
2. Cao, Z.R., Niu, Y.G., Lam, J., Song, X.Q.: A hybrid sliding mode control scheme of Markovian
jump systems via transition rates optimal design. IEEE Trans. Syst. Man Cybern. Syst. (2020).
https://doi.org/10.1109/TSMC.2020.2980851
3. Yang, Y.K., Niu, Y.G., Zhang, Z.N.: Dynamic event-triggered sliding mode control for interval
type-2 fuzzy systems with fading channels. ISA Trans. 110, 53–62 (2020)
4. Ren, C.C., He, S.P.: Sliding mode control for a class of nonlinear positive Markovian jump
systems with uncertainties in a finite-time interval. Int. J. Control Autom. Syst. 17(7), 1634–
1641 (2019)
5. Li, H.Y., Shi, P., Yao, D.Y.: Adaptive sliding-mode control of Markovian jump nonlinear
systems with actuator faults. IEEE Trans. Autom. Control 62(4), 1933–1939 (2017)
6. Song, J., Niu, Y.G., Zou, Y.Y.: Asynchronous sliding mode control of Markovian jump systems
with time-varying delays and partly accessible mode detection probabilities. Automatica 93,
33–41 (2018)
7. Tong, D.B., Xu, C., Chen, Q.Y., Zhou, W.N., Xu, Y.H.: Sliding mode control for nonlinear
stochastic systems with Markovian jumping parameters and mode-dependent time-varying
delays. Nonlinear Dynam. 100, 1343–1358 (2020)
8. Dong, S.L., Liu, M.Q., Wu, Z.G., Shi, K.B.: Observer-based sliding mode control for Markovian
jump systems with actuator failures and asynchronous modes. IEEE Trans. Circ. Syst.-II 68(6),
1967–1971 (2021)
9. Du, C.L., Li, F.B., Yang, C.H.: An improved homogeneous polynomial approach for adaptive
sliding-mode control of Markovian jump systems with actuator faults. IEEE Trans. Autom.
Control 65(3), 955–969 (2020)
10. Fang, M., Shi, P., Dong, S.L.: Sliding mode control for Markovian jump systems with delays
via asynchronous approach. IEEE Trans. Syst. Man Cybern. Syst. 51(5), 2916–2925 (2021)
11. Bhat, S.P., Bernstein, D.S.: Finite-time stability of continuous autonomous systems. SIAM J.
Control Optim. 38(3), 751–766 (2000)
12. Chen, W., Jiao, L.C.: Finite-time stability theorem of stochastic nonlinear systems. Automatica
46(12), 2105–2108 (2010)
13. Garrard, W.L., McClamroch, N.H., Clark, L.G.: An approach to suboptimal feedback control
of nonlinear systems. Int. J. Control 5(5), 425–435 (1967)
14. Van Mellaert, L., Dorato, P.: Numerical solution of an optimal control problem with a probability
criterion. IEEE Trans. Autom. Control 17(4), 543–546 (1972)
15. San Filippo, F.A., Dorato, P.: Short-time parameter optimization with flight control application.
Automatica 10(4), 425–430 (1974)
16. Gmjic, W.L.: Finite time stability in control system synthesis. In: Proceedings of the 4th IFAC
Congress, pp. 21–31. Warsaw, Poland (1969)
Chapter 7
Finite-Frequency Control with Finite-Time Performance for Markovian Jump Systems
7.1 Introduction
Design the following state feedback controller for the system (5.1):

where Ξ = Φ ⊗ P_i + Ψ ⊗ Q_i.

Remark 7.1 Generally, \(\Pi=\begin{bmatrix}I & 0\\ 0 & -\gamma^{2}I\end{bmatrix}\) is used to represent the H∞ performance index. In this situation, Eq. (7.3) is equal to ‖G_i(λ)‖_∞ < γ. Therefore, different choices of Π can be set up to depict different performance indices.
The purpose of this chapter is to design the appropriate controller (7.1) so that the
controlled system (7.2) meets the corresponding finite-time stability requirements
and the following requirement:
\[
\big\|G_{zw}(e^{j\vartheta})\big\|_{\infty}<\gamma,\qquad \vartheta_l<\vartheta<\vartheta_h. \qquad (7.5)
\]
The following theorem focuses on the transient performance of the system (7.2)
to realize that the state trajectory of the controlled system does not exceed a certain
bound in a given time. At the same time, it also meets the performance index of
finite-frequency band.
Theorem 7.1 For given γ > 0 and α ≥ 0, the discrete-time closed-loop system (7.2) is said to be finite-time stabilizable with respect to (c₁, c₂, N, R) and to meet the performance index (7.5), if there exist mode-dependent symmetric matrices P̃_{ki} > 0 and P̃_i > 0, matrices η̃_i, K̃_i, and Q̃_i > 0 satisfying the following conditions:

\[
\begin{bmatrix}
-\tilde{P}_i & e^{j\vartheta_m}\tilde{Q}_i-\tilde{\eta}_i^{T} & 0 & 0 & 0 & 0\\
* & \tilde{\Theta}+\mathrm{He}\big(A_i\tilde{\eta}_i+B_{ui}\tilde{K}_i\big) & \bar{D}_{wi}+B_{wi} & \tilde{\eta}_i^{T}C_i^{T} & \tilde{\eta}_i^{T}C_i^{T} & 0\\
* & * & -\gamma^{2}I+D_{wi}^{T}D_{wi} & 0 & 0 & 0\\
* & * & * & -I & 0 & 0\\
* & * & * & * & -I & 0\\
* & * & * & * & * & -I
\end{bmatrix}<0 \qquad (7.6)
\]

\[
\begin{bmatrix} \bar{P}_k-\tilde{\eta}_i^{T}-\tilde{\eta}_i & A_i\tilde{\eta}_i+B_{ui}\tilde{K}_i\\ * & -(1+\alpha)\tilde{P}_{ki}\end{bmatrix}<0 \qquad (7.7)
\]

\[
\begin{bmatrix} -\tilde{P}_{ki} & \tilde{\eta}_i^{T}\\ * & -\lambda_1^{-1}R^{-1}\end{bmatrix}<0 \qquad (7.8)
\]

\[
\tilde{P}_{ki}+\lambda_2^{-1}R^{-1}-\tilde{\eta}_i^{T}-\tilde{\eta}_i<0 \qquad (7.9)
\]

\[
(\alpha+1)^{N}c_1\lambda_1^{-1}-c_2\lambda_2^{-1}<0 \qquad (7.10)
\]

where

\[
\tilde{\Theta}=\tilde{P}_i-2\cos\vartheta_d\,\tilde{Q}_i,\quad
\bar{P}_k=\pi_{i1}\tilde{P}_{k1}+\pi_{i2}\tilde{P}_{k2}+\cdots+\pi_{iM}\tilde{P}_{kM},\quad
\tilde{P}_i=\tilde{\eta}_i^{T}P_i\tilde{\eta}_i,
\]
\[
\tilde{P}_{ki}=\tilde{\eta}_i^{T}P_{ki}\tilde{\eta}_i,\quad
\tilde{Q}_i=\tilde{\eta}_i^{T}Q_i\tilde{\eta}_i,\quad
\hat{P}_{ki}=R^{1/2}P_{ki}R^{1/2},\quad
\tilde{\eta}_i=\eta_i^{-1},
\]
\[
\mathrm{He}\big(A_i\tilde{\eta}_i+B_{ui}\tilde{K}_i\big)=\big(A_i\tilde{\eta}_i+B_{ui}\tilde{K}_i\big)+\big(A_i\tilde{\eta}_i+B_{ui}\tilde{K}_i\big)^{T}.
\]
Similarly, by the Schur complement lemma, condition (7.8) can be converted into an equivalent form. Pre- and post-multiplying condition (7.7) by diag(η_i, η_i) and using Lemma 7.2, it can also be obtained that

\[
x^{T}(k)\bar{A}_i^{T}\sum_{j=1}^{M}\pi_{ij}P_{kj}\bar{A}_i x(k)-x^{T}(k)P_{ki}x(k)<\alpha\,x^{T}(k)P_{ki}x(k). \qquad (7.15)
\]
In combination with Formula (7.13), the left side of the above formula can be converted into

\[
\mathbb{E}\{V_i(k)\}=\mathbb{E}\{x^{T}(k)R^{1/2}P_{ki}R^{1/2}x(k)\}
>\lambda_{\min}(P_{ki})\,\mathbb{E}\{x^{T}(k)Rx(k)\}
=\lambda_1\,\mathbb{E}\{x^{T}(k)Rx(k)\}. \qquad (7.16)
\]

By combining Eqs. (7.16) and (7.17), the original inequality can be converted into

\[
\lambda_1\,\mathbb{E}\{x^{T}(k)Rx(k)\}<(\alpha+1)^{k}\lambda_2\,x^{T}(0)Rx(0)<(\alpha+1)^{N}\lambda_2 c_1. \qquad (7.18)
\]
\[
\Omega+L_{\perp}^{T}\zeta+\zeta^{T}L_{\perp}<0 \qquad (7.21)
\]

where

\[
L_{\perp}=\begin{bmatrix}-I & \bar{A}_i & B_{wi}\end{bmatrix},\qquad
L=\begin{bmatrix}\bar{A}_i^{T} & I & 0\\ B_{wi}^{T} & 0 & I\end{bmatrix}^{T},
\]
\[
\Omega=\begin{pmatrix}
-P_i & e^{j\vartheta_m}Q_i & 0\\
* & P_i-2\cos\vartheta_d\,Q_i & C_i^{T}D_{wi}\\
* & * & -\gamma^{2}I+D_{wi}^{T}D_{wi}
\end{pmatrix}.
\]

where

\[
\Xi=\begin{bmatrix}-P_i & e^{j\vartheta_m}Q_i\\ * & P_i-2\cos\vartheta_d\,Q_i\end{bmatrix},\qquad
\Pi=\begin{bmatrix}I & 0\\ 0 & -\gamma^{2}I\end{bmatrix}.
\]

From the GKYP Lemma 7.1, it can be obtained that the controlled system (7.2) meets the medium-frequency performance index (7.5). This completes the proof.
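As a practical complement to the proof, the finite-frequency index (7.5) can be checked numerically for a fixed mode once a controller is available, simply by gridding the frequency band. The sketch below is illustrative only: the closed-loop matrices, band limits, and bound γ are placeholders rather than values taken from this chapter.

```python
import numpy as np

def band_hinf_norm(Acl, Bw, C, Dw, theta_lo, theta_hi, n_grid=2000):
    """Largest singular value of G(e^{j*theta}) = C (e^{j*theta} I - Acl)^-1 Bw + Dw
    over the frequency band [theta_lo, theta_hi] (gridded approximation)."""
    n = Acl.shape[0]
    worst = 0.0
    for theta in np.linspace(theta_lo, theta_hi, n_grid):
        G = C @ np.linalg.solve(np.exp(1j * theta) * np.eye(n) - Acl, Bw) + Dw
        worst = max(worst, np.linalg.svd(np.atleast_2d(G), compute_uv=False)[0])
    return worst

# Hypothetical single-mode closed-loop data (placeholders, not from the book)
Acl = np.array([[0.2, 0.1], [-0.3, 0.5]])
Bw  = np.array([[0.0], [0.2]])
C   = np.array([[0.5, 0.4]])
Dw  = np.array([[0.1]])
gamma, theta_l, theta_h = 0.4, 0.3, 1.2

peak = band_hinf_norm(Acl, Bw, C, Dw, theta_l, theta_h)
print(f"sup over band = {peak:.4f}  ->  index (7.5) satisfied: {peak < gamma}")
```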
7.3 Finite-Time Multiple-Frequency Control Based on Derandomization

The main purpose of this subsection is to design the appropriate mode-dependent state feedback controller (7.1) so that the system (7.2) meets the corresponding finite-time stability requirement and the following multiple-frequency performance indices:

\[
\big\|G_{zw}(e^{j\vartheta})\big\|<\gamma,\quad \forall\,|\vartheta|\le\vartheta_l \qquad (7.24)
\]
\[
\big\|G_{uw}(e^{j\vartheta})\big\|<\rho,\quad \forall\,|\vartheta|\ge\vartheta_h. \qquad (7.25)
\]
The results in Theorem 7.1 just guarantee the finite-frequency performance of each
sub-modal system in a given time interval rather than the whole stochastic system.
It is well known that the performance of the sub-modal system is not equivalent to
that of the whole system.
In order to fully consider the effect of random jumping on the performance of
the system in different frequency ranges and a given time interval, derandomization
method is proposed to improve the performance of the system by transforming the
original stochastic multimodal systems into deterministic single-mode ones, where
the parameter matrices contain the information of the transition probability.
Defining

\[
s_i(k)=\mathbb{E}\{x(k)\mathbf{1}_{\{r_k=i\}}\}, \qquad (7.26)
\]

its dynamics satisfy

\[
s_j(k+1)=\sum_{i=1}^{M}\pi_{ij}\bar{A}_i s_i(k)+B_{wi}w(k). \qquad (7.27)
\]

Let ũ(k) = K̂ s(k) and w̃(k) = [w^T(k) ⋯ w^T(k)]^T. The original stochastic system (5.1) can be transformed into the following deterministic one:

\[
\begin{cases}
s(k+1)=\mathbf{A}s(k)+\mathbf{B}_u\tilde{u}(k)+\mathbf{B}_w\tilde{w}(k)\\
\tilde{z}(k)=\mathbf{C}s(k)+\mathbf{D}_w\tilde{w}(k)
\end{cases} \qquad (7.28)
\]

where

\[
\mathbf{A}=\big([\pi_{ij}]^{T}\otimes I_n\big)\cdot\mathrm{diag}\{A_1,A_2,\dots,A_M\},\qquad
\mathbf{B}_u=\big([\pi_{ij}]^{T}\otimes I_n\big)\cdot\mathrm{diag}\{B_{u1},B_{u2},\dots,B_{uM}\},
\]
According to the definition of z̃(k), w̃(k), and ũ(k), the performance indices (7.24) and (7.25) are equal to

\[
\big\|T_{\tilde{z}\tilde{w}}(e^{j\vartheta})\big\|=\frac{\|z(k)\|_2}{\|w(k)\|_2}<\gamma,\quad \forall\,|\vartheta|\le\vartheta_l \qquad (7.29)
\]
\[
\big\|T_{\tilde{u}\tilde{w}}(e^{j\vartheta})\big\|=\frac{\|u(k)\|_2}{\|w(k)\|_2}<\rho,\quad \forall\,|\vartheta|\ge\vartheta_h. \qquad (7.30)
\]

This means that the low-frequency (or high-frequency) performance index of the system (7.28) is equivalent to the low-frequency (or high-frequency) performance index of the original stochastic jumping system (7.2).
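The derandomization step in (7.28) is purely algebraic and can be sketched in a few lines. The following Python fragment is an illustration under the reconstruction above: the transition probability matrix is written as [π_ij], the Example 7.1 system matrices are reused, and the transition probabilities and the mode-wise block-diagonal stacking of C and D_w are assumptions made only for this sketch.

```python
import numpy as np
from scipy.linalg import block_diag

def derandomize(Pi, A_list, Bu_list, Bw_list, C_list, Dw_list):
    """Build the augmented deterministic matrices of (7.28):
    A = ([pi_ij]^T kron I_n) diag{A_1,...,A_M}, similarly for Bu and Bw."""
    n = A_list[0].shape[0]
    T = np.kron(Pi.T, np.eye(n))
    A  = T @ block_diag(*A_list)
    Bu = T @ block_diag(*Bu_list)
    Bw = T @ block_diag(*Bw_list)
    C  = block_diag(*C_list)          # mode-wise output stacking (assumption)
    Dw = block_diag(*Dw_list)
    return A, Bu, Bw, C, Dw

# Example 7.1 data; the transition matrix values are illustrative placeholders
Pi = np.array([[0.6, 0.4], [0.5, 0.5]])
A1 = np.array([[-0.3, 0.9], [1.1, -1.5]]); A2 = np.array([[0.2, -0.8], [-0.3, 1.0]])
Bu1 = np.array([[0.2], [0.1]]);            Bu2 = np.array([[0.0], [0.1]])
Bw1 = np.array([[0.0], [0.2]]);            Bw2 = np.array([[0.0], [0.1]])
C1 = C2 = np.array([[0.5, 0.4]]);          Dw1 = Dw2 = np.array([[0.1]])

A, Bu, Bw, C, Dw = derandomize(Pi, [A1, A2], [Bu1, Bu2], [Bw1, Bw2], [C1, C2], [Dw1, Dw2])
print("augmented A shape:", A.shape)      # (4, 4) for M = 2 modes and n = 2 states
```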
Remark 7.3 Based on the derandomization method, the original stochastic jumping
system is equivalently transformed into a deterministic one. This approach has two
advantages. First of all, GKYP lemma can be directly used to the finite-frequency
performance analysis of the deterministic system. Secondly, the parameter matri-
ces contain two kinds of information, namely, Ai , i = 1, 2, . . . , M and the transi-
tion probability πi j , so that the transition probability information is successfully
introduced into the linear matrix inequality of the performance index of the finite-
frequency band.
Therefore, the target of this subsection can be transformed into designing an
appropriate state feedback controller K̂ to make the controlled system (7.2) meet the
corresponding design indices.
The following theorem can be used to obtain sufficient conditions to ensure the
finite-time stability of the controlled system (7.2) and to meet the performance indi-
cators (7.24) and (7.25).
\[
(\alpha+1)^{N}c_1\lambda_3-c_2\lambda_4<0 \qquad (7.33)
\]

\[
\mathbb{E}\{s^{T}(k)Rs(k)\}\le(\alpha+1)^{N}\frac{\lambda_3}{\lambda_4}c_1<c_2,
\]

On the other hand, according to GKYP Lemma 7.1, condition (7.31) is equivalent to

\[
\begin{bmatrix}M\\ I\end{bmatrix}^{T}
\begin{bmatrix}\Phi^{T}\otimes P_l+\Psi_l\otimes Q_l & 0\\ 0 & \Pi\end{bmatrix}
\begin{bmatrix}M\\ I\end{bmatrix}<0 \qquad (7.36)
\]

where

\[
\Phi=\begin{bmatrix}0 & 1\\ 1 & 0\end{bmatrix},\qquad
\Psi_l=\begin{bmatrix}-1 & 0\\ 0 & -\vartheta_l^{2}\end{bmatrix},\qquad
\Pi=\begin{bmatrix}I & 0\\ 0 & -\gamma^{2}I\end{bmatrix},
\]
\[
M=\begin{bmatrix}\mathbf{A}+\mathbf{B}_u\hat{K} & \mathbf{B}_w\\ \mathbf{C} & \mathbf{D}_w\end{bmatrix}
=\begin{bmatrix}\mathbf{A} & \mathbf{B}_w\\ \mathbf{C} & \mathbf{D}_w\end{bmatrix}
+\begin{bmatrix}\mathbf{B}_u\\ 0\end{bmatrix}\hat{K}\begin{bmatrix}I & 0\end{bmatrix}
=\Lambda+\Gamma\hat{K}Z,
\]
\[
\Lambda=\begin{bmatrix}\mathbf{A} & \mathbf{B}_w\\ \mathbf{C} & \mathbf{D}_w\end{bmatrix},\qquad
\Gamma=\begin{bmatrix}\mathbf{B}_u\\ 0\end{bmatrix},\qquad
Z=\begin{bmatrix}I & 0\end{bmatrix},\qquad
Z^{+}=Z^{*}(ZZ^{*})^{-1}=\begin{bmatrix}I & 0\end{bmatrix}^{T}.
\]
7.4 Simulation Analysis

In this section, two examples will be given to show the effectiveness and practical application of our developed theoretical results. The first numerical example is used to show the transient performance of the system in a given time interval and the multiple-frequency performance in the high-frequency and low-frequency bands. The second example focuses on the advantage of the derandomization method presented in Theorem 7.2.
Example 7.1 Consider system (5.1) with two operation modes and the following parameters:

\[
A_1=\begin{bmatrix}-0.3 & 0.9\\ 1.1 & -1.5\end{bmatrix},\quad
A_2=\begin{bmatrix}0.2 & -0.8\\ -0.3 & 1\end{bmatrix},\quad
B_{u1}=\begin{bmatrix}0.2\\ 0.1\end{bmatrix},\quad
B_{u2}=\begin{bmatrix}0\\ 0.1\end{bmatrix},
\]
\[
B_{w1}=\begin{bmatrix}0\\ 0.2\end{bmatrix},\quad
B_{w2}=\begin{bmatrix}0\\ 0.1\end{bmatrix},\quad
C_1=C_2=\begin{bmatrix}0.5 & 0.4\end{bmatrix},\quad
D_{w1}=D_{w2}=0.1.
\]

Similarly, applying the results derived in Theorem 7.2, the controller parameters to be solved are as follows:

\[
K_1=\begin{bmatrix}-4.8506 & -2.8687\end{bmatrix},\qquad
K_2=\begin{bmatrix}-9.5549 & -12.6071\end{bmatrix}.
\]
Using the obtained controllers, Fig. 7.1a, b are drawn. It is obvious from the
Figs. 7.1a, b that the controlled system satisfies the condition of finite-time stability.
In order to compare the effectiveness of the method, the state trajectory of the open-
loop system without control is given in Fig. 7.2, where the state trajectory exceeds
the desired bound c2 = 2.
Meanwhile, in order to verify that the system meets the multiple-frequency per-
formance indices, the amplitude-frequency characteristic curve of the closed-loop
system is shown in Fig. 7.3. The solid line in the Fig. 7.3 represents the amplitude-
Fig. 7.1 a State trajectory of the closed-loop system (based on Theorem 7.1). b State trajectory of
the closed-loop system (based on Theorem 7.2)
Fig. 7.2 State trajectory of the open-loop system without control (states x1 and x2 versus time)
frequency characteristic curve of ‖G_zw(e^{jϑ})‖, and the dashed line represents the amplitude-frequency characteristic curve of ‖G_uw(e^{jϑ})‖. The shaded part in blue shows the limits of γ and ρ in the low-frequency and high-frequency bands. It can be clearly seen in Fig. 7.3 that the performance indicator ‖G_zw(e^{jϑ})‖ < γ is satisfied in the low-frequency band and ‖G_uw(e^{jϑ})‖ < ρ is satisfied in the high-frequency band, while ‖G_zw(e^{jϑ})‖ exceeds γ outside the low-frequency band and ‖G_uw(e^{jϑ})‖ exceeds ρ outside the high-frequency band. This explains why the proposed method in this chapter can reduce the conservativeness of the controller design.
To show the advantage of the derandomization-based result presented in Theorem 7.2, the results in Theorem 7.1 are compared. The low-frequency performance ‖G_zw(e^{jϑ})‖ < γ, ∀|ϑ| ≤ ϑ_l is taken as an example, and the curve of ‖G_zw(e^{jϑ})‖ is drawn in Fig. 7.4, where the effect of the transition probability on the low-frequency performance is not considered. It can be seen from Fig. 7.4 that, for the desired bound γ = 0.4, the performance ‖G_zw(e^{jϑ})‖ < 0.4 in the low-frequency band is not satisfied, which implies the limitation of the method proposed in Theorem 7.1.
The next example will be used to show the practical application of the theoretical results presented in Theorem 7.2.
Figures 7.5 and 7.6 show the state trajectories of the open-loop and closed-loop
systems. From these figures, we can see that the original unstable system has a
bounded state under the control of the designed controller.
Meanwhile, the amplitude-frequency characteristic curve of the result in Theorem
7.2 is shown in Fig. 7.7. ! !
It can be seen from Fig. 7.7 that! the performance
! indicator !G zw (e jϑ )! can be
satisfied in low-frequency band and !G uw (e jϑ )! is satisfied in high-frequency band.
However, if the result in Theorem 7.1 is applied to the cart–spring system, we find that it does not meet the desired multiple-frequency performance, as shown in Fig. 7.8.
7.5 Conclusion
Fig. 7.6 a State trajectory of the MJLS system (based on Theorem 7.1). b State trajectory of the
derandomization system (based on Theorem 7.2)
shows the effectiveness and validity of the results, but also the practical application
value of the results. Next chapter will concern not only the transient behavior of
discrete-time MJSs in the finite-time domain but also the consistent state behavior
of each subsystem.
References
1. Zhou, K., Doyle, J.C., Glover, K.: Robust and Optimal Control. Prentice Hall Publishing, New
Jersey (1996)
2. Iwasaki, T., Meinsma, G., Fu, M.: Generalized S-procedure and finite frequency KYP lemma.
Math. Probl. Eng. 6(2–3), 305–320 (2000)
3. Iwasaki, T., Hara, S.: Generalized KYP lemma: unified frequency domain inequalities with
design applications. IEEE Trans. Autom. Control 50(1), 41–59 (2005)
4. Hara, S., Iwasaki, T., Shiokata, D.: Robust PID control using generalized KYP synthesis: direct
open-loop shaping in multiple frequency ranges. IEEE Control Syst. Mag. 26(1), 80–91 (2006)
5. Wan, H.Y., Luan, X.L., Karimi, H.R., Liu, F.: Higher-order moment filtering for Markovian
jump systems in finite frequency domain. IEEE Trans. Circ. Syst-II 66(7), 1217–1221 (2019)
6. Zhou, Z.H., Luan, X.L., Liu, F.: Finite-frequency fault detection based on derandomisation for
Markovian jump linear system. IET Control Theor. Appl. 12(08), 1148–1155 (2018)
7. Iwasaki, T., Hara, S., Yamauchi, H.: Dynamical system design from a control perspective: finite
frequency positive-realness approach. IEEE Trans. Autom. Control 48(8), 1337–1354 (2003)
8. Iwasaki, T., Hara, S.: Dynamic output feedback synthesis with general frequency domain
specifications. IFAC Proc. Volumes 38(1), 345–350 (2005)
9. Mei, P., Fu, J., Liu, Y.: Finite frequency filtering for time-delayed singularly perturbed systems.
Math. Probl. Eng. 4, 1–7 (2015)
10. Ding, D.W., Yang, G.H.: Fuzzy filter design for nonlinear systems in finite-frequency domain.
IEEE Trans. Fuzzy Syst. 18(5), 935–945 (2010)
11. Wang, H., Ju, H., Wang, Y.L.: Finite frequency H∞ filtering for switching LPV systems. In:
Proceedings of the 24th Chinese Control and Decision Conference, pp. 4008–4013, Taiyuan,
China (2012)
12. Luan, X.L., Zhou, C.Z., Ding, Z.T., Liu, F.: Stochastic consensus control with finite frequency
specification for Markovian jump networks. Nonlinear Control 13(2), 1833–1838 (2015)
13. Luan, X.L., Shi, P., Liu, F.: Given-time multiple frequency control for Markovian jump systems
based on derandomization. Inf. Sci. 451, 134–142 (2018)
14. Skelton, R.E., Iwasaki, T., Grigoriadis, K.M.: A Unified Algebraic Approach to Control Design.
CRC Press Publishing, Los Angeles (1997)
15. Iwasaki, T., Hara, S.: Robust control synthesis with general frequency domain specifications:
static gain feedback case. In: Proceedings of the 2004 American Control Conference, vol. 5,
pp. 4613–4618, Boston, MA, USA (2004)
Chapter 8
Stochastic Finite-Time Consensualization for Markovian Jump Networks with Disturbances
8.1 Introduction
With the rapid development of computer technology, network technology, and com-
munication technology, the research on the consensus of network-connected systems
has become a popular topic in the field of control [1, 2]. Taking the industrial heating
furnace with multiple passes for example, it is a significant equipment in petrochem-
ical processes. The outlet temperature of the furnace can directly impact the recovery
efficiency, stability of subsequent production, and product quality. Principally, the
process operation has a consistent temperature across all passes. However, the fluc-
tuations of inlet flow pressure and feed composition may cause difference of outlet
temperature among passes [3, 4]. Such temperature variations could result in unsta-
ble operation and even equipment failure caused by material coking in the pipeline.
Hence, it is really important for the furnace’s outlet temperatures to be consistent.
Therefore, the consensus problem of network-connected dynamic systems has
attracted extensive attention from numerous researchers in mathematics, control,
and system science. Reviewing the existing literature on the consensus of dynamic systems, the research includes the following three aspects: (1) dynamic systems in a network,
from the first-order integrator model to linear system, nonlinear system, singular
system, and so on [5, 6]; (2) network topology, from undirected to directed, from
fixed topology to switching topology, including time-varying topology, stochastic
topology, etc. [7, 8]; (3) interference degree of network connection, from instant
messaging to communication delay, data packet loss, random interference, etc. [9,
10].
However, all the above research results require that the inconsistency states of
the system converge to zero asymptotically in the infinite-time region. As we have
emphasized in the previous chapters, people are often more interested in whether the
systems can meet transient requirements in a limited short time. Still taking the heat-
ing furnace as an example, the target is to keep the temperature difference between passes from exceeding a given limit within a certain period of time. Although the control,
filtering, and optimization problems based on the finite-time theory have been widely
studied, it is a new and challenging issue worthy of attention for systems connected
together through a network. Of course, due to limited bandwidth, transmission time-
delay, topology connection uncertainty, and other factors, how to further consider
the finite-time consensus protocol involved in network connection systems in com-
plex cases is more meaningful. Therefore, the finite-time consensus protocol design
problem for discrete-time network-connected systems with random Markovian jump
topologies has been addressed in this chapter. If the state variable is measurable, how
to guarantee the disagreement dynamics do not exceed the desired bound in the fixed
time interval? Simultaneously, if the state variable is not available, how to design
the control protocol to achieve the same target? This chapter will answer these two
questions.
\[
\Pi=[\pi_{ij}],\qquad \pi_{ij}=\Pr(r_k=j\,|\,r_{k-1}=i),\qquad \pi_{ij}\in(0,1),\qquad \sum_{j=1}^{M}\pi_{ij}=1.
\]
The connection can be represented by an adjacency matrix Q(r_k) = {q_rh(r_k)}, where q_rh(r_k) = 1 indicates that there is a connection between subsystems r and h; otherwise, q_rh(r_k) = 0.
The Laplacian matrix L(r_k) = {l_rh(r_k)} is defined as

\[
l_{rh}(r_k)=\begin{cases}-q_{rh}(r_k), & r\ne h\\[2pt] \sum_{h=1,h\ne r}^{S}q_{rh}(r_k), & r=h.\end{cases} \qquad (8.2)
\]
For a directed graph G(r_k), zero is an eigenvalue of the Laplacian matrix L(r_k) with 1 = [1, 1, …, 1]^T as the corresponding right eigenvector. All the non-zero eigenvalues
have positive real parts. Moreover, zero is a single eigenvalue of L(rk ) if and only
if the graph G(rk ) contains a directed spanning tree. For the convenience of the
controller design, the eigenvalues of the Laplacian matrix are assumed to be distinct
[11].
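The Laplacian in (8.2) and the spanning-tree property quoted above are easy to verify numerically for a given topology. The sketch below uses a hypothetical four-node directed ring as the adjacency matrix; it is only an illustration of the definitions, not data from this chapter.

```python
import numpy as np

def laplacian(Q):
    """Graph Laplacian per (8.2): off-diagonal -q_rh, diagonal row-sum of q_rh."""
    L = -np.array(Q, dtype=float)
    np.fill_diagonal(L, 0.0)
    np.fill_diagonal(L, -L.sum(axis=1))
    return L

# Hypothetical directed topology for one mode r_k (a ring of 4 subsystems)
Q = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]])
L = laplacian(Q)
eigvals = np.linalg.eigvals(L)
print("Laplacian eigenvalues:", np.round(eigvals, 3))
# exactly one zero eigenvalue (all others with positive real part) indicates
# that the directed graph contains a spanning tree, as stated above
print("single zero eigenvalue:", np.sum(np.isclose(eigvals, 0)) == 1)
print("L @ 1 = 0:", np.allclose(L @ np.ones(4), 0))
```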
Our target is to design a control protocol to keep the disagreement dynamics within the desired bound:

\[
z_r(k)=x_r(k)-\sum_{h=1}^{S}\kappa(r_k)x_h(k). \qquad (8.4)
\]
Definition 8.1 [12]. For a given time-constant N > 0, the discrete-time network-
connected dynamic system (8.1) (setting u r (k) = 0, wr (k) = 0) is said to be finite-
time consensus with respect to (c1 c2 N R), where c1 < c2 , R > 0, if
E z T (0)Rz(0) ≤ c1 ⇒ E z T (k)Rz(k) ≤ c2 , ∀k ∈ {1, 2, . . . , N } (8.6)
where w(k) = [w_1^T(k), …, w_r^T(k), …, w_S^T(k)]^T.
Lemma 8.1 [1] For a Laplacian matrix with distinct eigenvalues, there exists a similarity transformation F(r_k) = [F_1(r_k)  1] with F(r_k)^{-1} = [F_2^T(r_k)  κ^T(r_k)]^T such that

\[
F(r_k)^{-1}L(r_k)F(r_k)=J(r_k) \qquad (8.8)
\]

where J(r_k) = diag{J_1(r_k), 0} = diag{λ_1(r_k), …, λ_{S−1}(r_k), 0}, and λ_r(r_k) are the eigenvalues of the Laplacian matrix L(r_k).
Design the following state feedback controller for the system (8.1):

\[
u_r=K(r_k)\sum_{h=1}^{S}q_{rh}(r_k)\,(x_h-x_r) \qquad (8.9)
\]

where K(r_k) is the controller gain to be designed for each mode i ∈ M. According to the relationship between Q(r_k) and L(r_k), the controller can be rewritten as

\[
u_r=-K(r_k)\sum_{h=1}^{S}l_{rh}(r_k)x_h. \qquad (8.10)
\]
Substituting controller (8.10) into system (8.1) yields the following closed-loop
system:
where

\[
L_{1i}^{T}=\begin{bmatrix}\sqrt{\pi_{i1}}\,(AX_i-\lambda_{ri}B_uY_i)^{T} & \cdots & \sqrt{\pi_{iM}}\,(AX_i-\lambda_{ri}B_uY_i)^{T}\end{bmatrix},
\]
\[
B_{wtri}=\begin{bmatrix}\sqrt{\pi_{i1}}\,(F_i^{-1}M_i)_r\otimes B_w & \cdots & \sqrt{\pi_{iM}}\,(F_i^{-1}M_i)_r\otimes B_w\end{bmatrix}^{T}.
\]
Then we have
Summing the right-hand side of the inequality (8.22) from time 0 to k, it yields:
That is
\[
\xi^{T}(k)P_i\xi(k)<\alpha^{N}\xi^{T}(0)P_i\xi(0)+\gamma^{2}d^{2}. \qquad (8.24)
\]

Letting P̃_i = R^{−1/2}T_i^{−T}P_iT_i^{−1}R^{−1/2}, the above inequality can be converted into

\[
z^{T}(k)R^{1/2}\tilde{P}_iR^{1/2}z(k)<\alpha^{N}z^{T}(0)R^{1/2}\tilde{P}_iR^{1/2}z(0)+\gamma^{2}d^{2}. \qquad (8.25)
\]

Denoting σ₁ = λ_min(P̃_r) and σ₂ = λ_max(P̃_r), inequality (8.25) is rewritten as

\[
\sigma_1\,z^{T}(k)R^{1/2}R^{1/2}z(k)<\alpha^{N}\sigma_2\,z^{T}(0)R^{1/2}R^{1/2}z(0)+\gamma^{2}d^{2}. \qquad (8.26)
\]

\[
\frac{\sigma_2 c_1+\gamma^{2}d^{2}}{\sigma_1}<\alpha^{-N}. \qquad (8.27)
\]
In the situation that the state is not accessible, the following dynamic output controller should be designed:

\[
\begin{cases}
v_r(k+1)=\tilde{A}_i v_r(k)+\tilde{B}_i\sum_{h=1}^{S}q_{rh}(r_k)\,(y_r-y_h)\\[2pt]
u_r(k)=\tilde{C}_i v_r(k)+\tilde{D}_i\sum_{h=1}^{S}q_{rh}(r_k)\,(y_r-y_h)
\end{cases} \qquad (8.28)
\]

where y_r(k) = C x_r(k) + D w_r(k) is the output of the system (8.1). According to the relationship between Q(r_k) and L(r_k), we have

\[
\sum_{h=1}^{S}q_{rh}(r_k)\,(y_r-y_h)=\sum_{h=1}^{S}l_{rh}(r_k)\,y_h. \qquad (8.29)
\]
Letting ε(k) = [ξ^T(k)  v^T(k)]^T, Eq. (8.31) is equivalent to

where

\[
\hat{A}_i=\begin{bmatrix}I_S\otimes A+J_i\otimes B_u\tilde{D}_iC & I_S\otimes B_u\tilde{C}_i\\ J_i\otimes\tilde{B}_iC & I_S\otimes\tilde{A}_i\end{bmatrix},\qquad
\hat{B}_i=\begin{bmatrix}B_w+J_i\otimes B_u\tilde{D}_iD_w\\ J_i\otimes\tilde{B}_iD_w\end{bmatrix}.
\]
If ε(k) satisfies the condition of finite-time stability, then the disagreement tra-
jectory z(k) will be confined within the prescribed bound in the fixed time interval,
which means the network-connected system (8.1) is finite-time consensus.
We shall design the dynamic output controller (8.28) with controller gains
\[
\tilde{K}_i=\begin{bmatrix}\tilde{A}_i & \tilde{B}_i\\ \tilde{C}_i & \tilde{D}_i\end{bmatrix}
\]
to make sure that the system (8.1) is finite-time consensus with H∞ performance.
where

\[
\Gamma_{i1}^{T}=\begin{bmatrix}\sqrt{\pi_{i1}}\,\big(\Lambda_1+\Lambda_2\tilde{K}_i\Lambda_{i3}\big)^{T}P_i & \cdots & \sqrt{\pi_{iM}}\,\big(\Lambda_1+\Lambda_2\tilde{K}_i\Lambda_{i3}\big)^{T}P_i\end{bmatrix},
\]
\[
\Gamma_{i2}^{T}=\begin{bmatrix}\sqrt{\pi_{i1}}\,\big(\Lambda_4+\Lambda_2\tilde{K}_i\Lambda_{i5}\big)^{T}P_i & \cdots & \sqrt{\pi_{iM}}\,\big(\Lambda_4+\Lambda_2\tilde{K}_i\Lambda_{i5}\big)^{T}P_i\end{bmatrix},
\]
\[
\Lambda_1=\begin{bmatrix}A & 0\\ 0 & 0\end{bmatrix},\quad
\Lambda_2=\begin{bmatrix}0 & B_u\\ I & 0\end{bmatrix},\quad
\Lambda_{i3}=\begin{bmatrix}0 & I\\ \lambda_{ri}C & 0\end{bmatrix},\quad
\Lambda_4=\begin{bmatrix}B_w\\ 0\end{bmatrix},\quad
\Lambda_{i5}=\begin{bmatrix}0\\ \lambda_{ri}D\end{bmatrix},\quad
\bar{I}=\begin{bmatrix}I & 0\\ 0 & 0\end{bmatrix}.
\]
Then, following a proof similar to that of Theorem 8.1, inequality (8.33) can be derived.
It should be noted that the derived condition in Theorem 8.2 is not a strict linear matrix inequality (LMI). Therefore, it should be converted into an LMI by using the algorithm proposed in [13].
In this section, we will use the following example to verify the effectiveness of our developed theoretical results. Consider system (8.1) with four subsystems and the following parameters:

\[
A=\begin{bmatrix}-1.48 & -1.96\\ 1.57 & 1.95\end{bmatrix},\qquad
B_u=\begin{bmatrix}1 & 0.5\end{bmatrix}^{T},\qquad
B_w=\begin{bmatrix}0.1 & 0.3\end{bmatrix}^{T}.
\]
Using the obtained controllers, Figs. 8.1 and 8.2 show the disagreement trajectory
of the controlled system from different perspectives, respectively. It can be seen that
the state disagreement stays within the specified bound c2 = 6 over the given time
horizon N = 20 with the designed controller, which means that the designed finite-
time controllers can achieve the consensus of network-connected systems in spite of
the communication delays and external disturbances.
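For readers who wish to reproduce this kind of experiment, the protocol (8.10) under randomly jumping topologies can be simulated as sketched below. The controller gains, the two topologies, and the transition probabilities are placeholders (they are not the gains designed in this chapter), so the sketch only illustrates the simulation mechanics, not the guaranteed bound.

```python
import numpy as np

# System matrices from the example above; gains/topologies below are hypothetical
A  = np.array([[-1.48, -1.96], [1.57, 1.95]])
Bu = np.array([[1.0], [0.5]])
S, N = 4, 20
K = [np.array([[0.9, 1.1]]), np.array([[0.8, 1.0]])]      # placeholder gains
L_modes = [np.array([[ 1, -1,  0,  0],                     # two jumping topologies
                     [-1,  2, -1,  0],
                     [ 0, -1,  2, -1],
                     [ 0,  0, -1,  1]]),
           np.array([[ 2, -1,  0, -1],
                     [-1,  1,  0,  0],
                     [ 0,  0,  1, -1],
                     [-1,  0, -1,  2]])]
Pi = np.array([[0.7, 0.3], [0.4, 0.6]])
rng = np.random.default_rng(1)

x = rng.standard_normal((S, 2)) * 0.3          # initial subsystem states
mode = 0
for k in range(N):
    # protocol (8.10): u_r = -K(r_k) * sum_h l_rh(r_k) x_h
    u = -(L_modes[mode] @ x) @ K[mode].T       # shape (S, 1)
    x = x @ A.T + u @ Bu.T                     # x_r(k+1) = A x_r(k) + Bu u_r(k)
    mode = rng.choice(2, p=Pi[mode])
disagreement = x - x.mean(axis=0)              # deviation from the network average
print("final disagreement norms:", np.linalg.norm(disagreement, axis=1))
```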
8.6 Conclusion
In this chapter, the finite-time consensus controller design problem has been addressed for a class of discrete-time network-connected systems with stochastic jumping topologies. The state feedback controller and the dynamic output feedback controller are designed to make sure that the disagreement trajectories of the interconnected networks remain confined within the prescribed bound in the fixed time interval rather
than asymptotically converging to zero, respectively. The next chapter will consider the higher-order moment finite-time stabilization problem to ensure that not only the mean and variance of the states remain within the desired range in the fixed time interval, but also the higher-order moments of the states are limited to the given bound.
References
1. Ding, Z.: Consensus output regulation of a class of heterogeneous nonlinear systems. IEEE
Trans. Autom. Control 58, 2648–2653 (2013)
2. Dong, W.J., Farrell, J.A.: Cooperative control of multiple nonholonomic mobile agents. IEEE
Trans. Autom. Control 53(6), 262–268 (2009)
3. Luan, X.L., Min, Y., Albertos, P., Liu, F.: Feed furnace temperature control based on the
distributed deviations. Ind. Eng. Chem. Res. 20, 6035–6042 (2017)
4. Wang, X.X., Zheng, D.Z.: Load balancing control of furnace with multiple parallel passes.
Control Eng. Pract. 15(5), 521–531 (2007)
5. Ding, Z.T.: Consensus control of a class of Lipschitz nonlinear systems. Int. J. Control 87(11),
2372–2382 (2014)
6. Wang, C.Y., Zuo, Z.Y., Lin, Z.L.: Consensus control of a class of Lipschitz nonlinear systems
with input delay. IEEE Trans. Circ. Syst.-I 62(11), 2730–2738 (2015)
7. Luan, X.L., Zhou, C.Z., Ding, Z.T., Liu, F.: Stochastic consensus control with finite frequency
specification for Markovian jump networks. Nonlinear Control 13(2), 1833–1838 (2015)
8. Saboori, I., Khorasani, K.: Consensus achievement of multiagent systems with directed and
switching topology networks. IEEE Trans. Autom. Control 59(11), 3104–3109 (2014)
9. You, K.Y., Li, Z.K., Xie, L.H.: Consensus condition for linear multi-agent systems over ran-
domly switching topologies. Automatica 49(10), 3125–3132 (2013)
10. Zeng, L., Hu, G.D.: Consensus of linear multi-agent systems with communication and input
delays. Acta Autom. Sin. 39(7), 1133–1140 (2013)
11. Cai, N., Cao, J.W., Khan, M.J.: Almost decouplability of any directed weighted network topol-
ogy. Phys. A 436, 637–645 (2015)
12. Luan, X.L., Min, Y., Ding, Z.T., Liu, F.: Stochastic finite-time consensualization for Markovian
jump networks with disturbance. IET Control Theory Appl. 9(16), 2340–2347 (2015)
13. He, Y., Wu, M., Liu, G.P., She, J.H.: Output feedback stabilization for a discrete-time system
with a time-varying delay. IEEE Trans. Autom. Control 53(10), 2372–2377 (2008)
Chapter 9
Higher-Order Moment Finite-Time Stabilization for Discrete-Time Markovian Jump Systems
9.1 Introduction
The research on the control problem for Markovian jump systems (MJSs) has a long
history. Because MJSs can represent most of the actual industrial processes, many
interesting results have been reported including the stability analysis and stabilization
[1, 2], state filtering [3, 4], fault detection [5, 6], and so on. The research results of
MJSs are summarized, including the following three aspects: (1) the structure of the
system, such as switching jump systems [7], non-homogeneous jump systems [8],
semi-Markovian jump systems [9], linear MJSs [10], nonlinear MJSs [11], and so on;
(2) control methods and performance, such as robust control [12], adaptive control
[13], optimal control [14], and intelligent control [15]; (3) transition probability (TP)
of the system, from completely known to partially known [16], from constant to time
varying [3], etc.
It should be noted that all the above research results consider the first-order or
second-order stability of MJSs. In other words, the control target is to make sure
that the mean and variance of the states satisfy the required performance. However,
in many control fields, such as machine tool production, spacecraft control, and
economic regulation and control, there are higher requirements about the control
performance more than the mean and variance of the states. In this situation, it is
not enough to control the mean and variance of the states. The higher-order moment
control performance is preferred to satisfy the higher demand of the system.
Take the precision machining of numerical control machine as an example. There
are very strict requirements for the feed speed of mechanical parts. Generally, it
is desired that the speed, acceleration, and the accelerated acceleration of the parts
are zero when they reach the specified position. Another example can be found in
economic field. The unbiased volatility index (VIX), as a measure of expected market
returns, is always subject to significant biases due to the volatility of the market.
The third-order moment index, the generalized VIX, is introduced to improve the
precision of regulating market expected returns [17].
Therefore, it is necessary to study the higher-order control performance for MJSs.
In the last decade or so, there has been some research results in this area. In 2006, Sun
introduced some new concepts of p-order moment stability for stochastic differential
equations with impulse jumps and Markovian switches [18]. In 2011, the p-order
moment asymptotical stability of stochastic difference systems was studied [19]. In
[20], the p-order moment exponential stability of impulsive functional differential
equations was addressed. In 2018, Luan introduced the cumulant generating function
to deal with the higher-order moment stability for MJSs [21]. Then, the higher-order
moment filtering, higher-order moment stabilization, and higher-order moment fault
detection were investigated [22–24].
Different from the abovementioned results in higher-order moment analysis and
synthesis, in this chapter, the higher-order moment performance in the finite-time
domain and specific finite-frequency domain for discrete-time MJSs have been dis-
cussed. Firstly, the finite-time stability problem with higher-order moment charac-
teristics has been addressed. Then, the higher-order moment finite-frequency perfor-
mance has been investigated. The derived results can not only cover the mean and
variance stability of the states as special cases, but also reduce the conservativeness
of the controller design.
Design the following state feedback controller for the system (9.1):
Considering the characteristics of the indicator function δ_{r_k=i} and the Markovian chain, the following equation holds:

\[
q_j(k+1)=\sum_{i=1}^{M}\bar{A}_i x(k)\,\delta_{r_k=i}(r_k)\,\delta_{r_{k+1}=j}(r_{k+1})+B_{wi}w(k)
=\sum_{i=1}^{M}\pi_{ij}\bar{A}_i q_i(k)+B_{wi}w(k). \qquad (9.5)
\]
Definition 9.1 [21] For a random variable z with distribution density function p(z), the moment generating function (MGF) is defined as \(\Phi_z(\theta)=\int_{\mathbb{R}^{n_z}}e^{\theta^{T}z}p(z)\,dz\), and

\[
\Phi_z(\theta)=\sum_{p=0}^{\infty}\frac{1}{p!}\,c(p,n)^{T}\theta^{\otimes p} \qquad (9.7)
\]

\[
c(p,n)=m(p,n)-\sum_{l=1}^{p-1}\binom{p-1}{l}Q_l\big[c(p-l,n)\otimes m(l,n)\big] \qquad (9.8)
\]
\[
\Phi_{q_j(k+1)}(\theta)=\sum_{p=0}^{\infty}\frac{1}{p!}\,c_{q_j(k+1)}(p,n)^{T}\theta^{\otimes p} \qquad (9.9)
\]

Then, it yields

\[
c_{q_j(k+1)}(p,n)=\sum_{i=1}^{M}c_{\pi_{ij}\bar{A}_i q_i(k)}(p,n). \qquad (9.11)
\]
Defining s_j(k, p) = c_{q_j(k)}(p), S(k, p) = [s_1^T(k, p), …, s_M^T(k, p)]^T, w_{Mp}(k) = [w^T(k), …, w_M^T(k)]^T, and z_{Mp}(k) = [z^T(k), …, z_M^T(k)]^T, equality (9.12) can be rewritten as

\[
\begin{cases}
S(k+1,p)=\bar{A}_{Mp}S(k,p)+B_{wMp}w_{Mp}(k)\\
z_{Mp}(k)=C_{Mp}S(k,p)+D_{wMp}w_{Mp}(k)
\end{cases} \qquad (9.13)
\]

where \(\bar{A}_{Mp}=\big([\pi_{ij}]^{T}\otimes I_{n^{p}}\big)\cdot\mathrm{diag}\{\bar{A}_1^{\otimes p},\dots,\bar{A}_M^{\otimes p}\}\in\mathbb{R}^{(n^{p}\times M)\times(n^{p}\times M)}\).
So far, based on the cumulant generating function, the original discrete-time linear
MJS has been transformed into a deterministic linear system with the same norm.
Since the state of the transformed deterministic system has the same norm as the
higher-order moment of the original MJS, the finite-time stability of the transformed
deterministic system is equivalent to the higher-order moment finite-time stability
of the original MJS.
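The equivalence just described can be checked empirically: propagating the stacked p-th moments through the augmented deterministic model and estimating the same moments by Monte Carlo simulation of the jump system should give matching results. The sketch below does this for a hypothetical noise-free two-mode system with p = 3; all numerical data are placeholders, not values from this chapter.

```python
import numpy as np
from functools import reduce

def kron_pow(M, p):
    """p-fold Kronecker power M ⊗ M ⊗ ... ⊗ M."""
    return reduce(np.kron, [M] * p)

# Hypothetical two-mode noise-free MJS (placeholder data)
A = [np.array([[0.6, 0.2], [-0.1, 0.5]]), np.array([[0.4, -0.3], [0.2, 0.7]])]
Pi = np.array([[0.8, 0.2], [0.3, 0.7]])
n, M, p, N = 2, 2, 3, 6                      # third-order moments over 6 steps
x0 = np.array([1.0, -0.5])

# Deterministic propagation of the stacked p-th moments S(k, p), per (9.5)/(9.13)
s = [kron_pow(x0.reshape(-1, 1), p).ravel() * (j == 0) for j in range(M)]  # start in mode 1
for _ in range(N):
    s = [sum(Pi[i, j] * kron_pow(A[i], p) @ s[i] for i in range(M)) for j in range(M)]
moment_det = sum(s)                          # E{x(N)^{⊗p}} from the augmented model

# Monte Carlo estimate of the same p-th moment
rng = np.random.default_rng(0)
acc, runs = np.zeros(n ** p), 20000
for _ in range(runs):
    x, mode = x0.copy(), 0
    for _ in range(N):
        x = A[mode] @ x
        mode = rng.choice(M, p=Pi[mode])
    acc += kron_pow(x.reshape(-1, 1), p).ravel()
print("augmented-model moment :", np.round(moment_det, 4))
print("Monte Carlo moment     :", np.round(acc / runs, 4))
```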
Our first target in this subsection is to design the controller in form of (9.2) to make
sure that the MJS (9.3) is higher-order moment finite-time stabilizable with H∞
interference suppression performance.
Before deriving the main results, the following definition and lemma are intro-
duced first.
Definition 9.2 [22] For a given time-constant N > 0, the transformed deterministic system (9.13) (setting u(k) = 0, w(k) = 0) is said to be finite-time stable with respect to (c₁, c₂, N, R), where c₁ < c₂, R > 0, if

\[
\mathbb{E}\{S^{T}(0,p)RS(0,p)\}\le c_1\;\Rightarrow\;\mathbb{E}\{S^{T}(k,p)RS(k,p)\}\le c_2,\quad\forall k\in\{1,2,\dots,N\}. \qquad (9.14)
\]

To eliminate the influence of external disturbances on the finite-time stability of system (9.13), the following H∞ performance indicator should be satisfied:

\[
\sum_{k=0}^{N}\mathbb{E}\{z_{Mp}^{T}(k)z_{Mp}(k)\}\le\gamma^{2}\sum_{k=0}^{N}w_{Mp}^{T}(k)w_{Mp}(k). \qquad (9.15)
\]

Lemma 9.1 [25] If a₁, a₂, …, a_t > 0, then \(\sqrt[t]{a_1a_2\cdots a_t}\le\frac{1}{t}(a_1+a_2+\cdots+a_t)\). Equality holds if and only if a₁ = a₂ = ⋯ = a_t.
Then, the following theorem is given to provide the finite-time controller design scheme for system (9.3) and to realize the requirement of higher-order moment finite-time stabilization with H∞ performance for system (9.3).

Theorem 9.1 For given γ and α ≥ 0, the discrete-time closed-loop system (9.3) is said to be higher-order moment finite-time stabilizable with respect to (c₁, c₂, N, R, d) and to meet the robust performance index, if there exists a mode-dependent symmetric matrix P̃ > 0 satisfying the following conditions:
\[
\begin{bmatrix}
-(1+\alpha)\tilde{X} & 0 & \tilde{X}C_{Mp}^{T} & \sqrt{1+\alpha}\,\big(A_{Mp}\tilde{X}+B_{uMp}\tilde{Y}\big)^{T}\\
* & -\gamma^{2}I & D_{wMp}^{T} & \sqrt{1+\alpha}\,B_{wMp}^{T}\\
* & * & -I & 0\\
* & * & * & -\tilde{X}
\end{bmatrix}<0 \qquad (9.16)
\]

\[
(\alpha+1)^{N}\lambda_2 c_1+\gamma^{2}d^{2}<\lambda_1 c_2 \qquad (9.17)
\]

where

\[
A_{Mp}=\frac{1}{p+1}\,[\pi_{ij}]^{T}\otimes I_{n^{p}}+\frac{p}{p+1}\,\mathrm{diag}\{A_1,A_2,\dots,A_M\}\otimes I_{n^{p}},
\]
\[
B_{uMp}=\mathrm{diag}\{B_1,B_2,\dots,B_M\}\otimes I_{n^{p}},
\]
Then, it has

\[
\sum_{k=0}^{N}\zeta^{T}(k)\Theta\zeta(k) \qquad (9.21)
\]

where ζ(k) = [S^T(k, p)  w_{Mp}^T(k)]^T and

\[
\Theta=\begin{bmatrix}
(1+\alpha)\bar{A}_{Mp}^{T}\tilde{P}\bar{A}_{Mp}-(1+\alpha)\tilde{P}+C_{Mp}^{T}C_{Mp} & (1+\alpha)\bar{A}_{Mp}^{T}\tilde{P}B_{wMp}+C_{Mp}^{T}D_{wMp}\\
* & (1+\alpha)B_{wMp}^{T}\tilde{P}B_{wMp}+D_{wMp}^{T}D_{wMp}-\gamma^{2}I
\end{bmatrix}.
\]

By Lemma 9.1, it follows that

\[
\bar{A}_{Mp}\le\tilde{A}_{Mp}=\frac{1}{p+1}\,[\pi_{ij}]^{T}\otimes I_{n^{p}}+\frac{p}{p+1}\,\mathrm{diag}\{\bar{A}_1,\dots,\bar{A}_M\}\otimes I_{n^{p}}. \qquad (9.22)
\]
\[
\begin{bmatrix}
-(1+\alpha)\tilde{P} & 0 & C_{Mp}^{T} & \sqrt{1+\alpha}\,\bar{A}_{Mp}^{T}\\
* & -\gamma^{2}I & D_{wMp}^{T} & \sqrt{1+\alpha}\,B_{wMp}^{T}\\
* & * & -I & 0\\
* & * & * & -\tilde{P}^{-1}
\end{bmatrix}<0. \qquad (9.23)
\]

\[
V(k,p)<(\alpha+1)^{N}V(0,p)+\gamma^{2}d^{2}. \qquad (9.25)
\]

Considering that there exists a symmetric matrix P_t^{1/2} such that P̃ = (P_t^{1/2})^T R P_t^{1/2}, the Lyapunov function yields

Similarly,

\[
S^{T}(k,p)RS(k,p)<\frac{1}{\lambda_{\min}(P_t)}\Big[(\alpha+1)^{N}\lambda_{\max}(P_t)c_1+\gamma^{2}d^{2}\Big]<c_2 \qquad (9.29)
\]

which means the system (9.13) is finite-time stabilizable. Thus, the higher-order moment finite-time stabilization of MJS (9.3) has been realized. This completes the proof.
The above content in Sect. 9.3 considers the higher-order moment finite-time performance with an interference suppression level over the full frequency domain. However, most external disturbances are energy bounded. Therefore, a controller design considering the performance in the entire frequency range will lead to over-design and conservativeness. The main purpose of this subsection is to design the appropriate controller (9.2) so that the system (9.3) meets the higher-order moment finite-time stabilization requirement and the following multiple-frequency performance indices:

\[
\big\|G_{z_{Mp}w_{Mp}}(e^{j\vartheta})\big\|<\beta,\quad \forall\,|\vartheta|\le\vartheta_l \qquad (9.30)
\]
\[
\big\|G_{u_{Mp}w_{Mp}}(e^{j\vartheta})\big\|<\rho,\quad \forall\,|\vartheta|\ge\vartheta_h \qquad (9.31)
\]

where u_{Mp}(k) = [u^T(k), …, u_M^T(k)]^T.
To improve the controller performance and reduce the conservativeness of the
controller, the higher-order moment finite-time stabilization with finite-frequency
performance will be given in the next theorem.
Theorem 9.2 The discrete-time closed-loop system (9.3) is said to be higher-order moment finite-time stabilizable with respect to (c₁, c₂, N, R, d) and to meet the finite-frequency performance indicators (9.30) and (9.31), if for given scalars ϑ_l, ϑ_h, α ≥ 0, γ, and ρ, there exist symmetric matrices P_l, P_h, Q_l > 0, and Q_h > 0 and matrices X̃, V_l, and V_h satisfying the following conditions:

\[
\begin{bmatrix}
-P_l & Q_l+\tilde{X}R_l & 0 & 0\\
* & P_l-2\cos\vartheta_l\,Q_l-\mathrm{He}\big(A_{Mp}\tilde{X}R_l+B_{uMp}\tilde{Y}R_l\big) & B_{wMp}V_l & -C_{Mp}\tilde{X}R_l\\
* & * & -\gamma^{2}I & D_{wMp}V_l\\
* & * & * & -I
\end{bmatrix}<0 \qquad (9.32)
\]

\[
\begin{bmatrix}
-P_h & Q_h+\tilde{X}R_h & 0 & 0\\
* & P_h-2\cos\vartheta_h\,Q_h-\mathrm{He}\big(A_{Mp}\tilde{X}R_h+B_{uMp}\tilde{Y}R_h\big) & B_{wMp}V_h & \tilde{Y}R_h\\
* & * & -\rho^{2}I & 0\\
* & * & * & -I
\end{bmatrix}<0 \qquad (9.33)
\]

\[
\begin{bmatrix}
-(1+\alpha)\tilde{X} & \big(A_{Mp}\tilde{X}+B_{uMp}\tilde{Y}\big)^{T}\\
* & -\tilde{X}
\end{bmatrix}<0 \qquad (9.34)
\]

\[
(\alpha+1)^{N}\lambda_2 c_1+\gamma^{2}d^{2}<\lambda_1 c_2. \qquad (9.36)
\]
\[
\begin{bmatrix}
-(1+\alpha)\tilde{P} & \big(A_{Mp}+B_{uMp}K_{Mp}\big)^{T}\\
* & -\tilde{P}^{-1}
\end{bmatrix}<0. \qquad (9.37)
\]

\[
S^{T}(k,p)RS(k,p)<\frac{1}{\lambda_{\min}(P_t)}\Big[(\alpha+1)^{N}\lambda_{\max}(P_t)c_1+\gamma^{2}d^{2}\Big]<c_2,
\]

which means the system (9.13) is finite-time stabilizable. Thus, the higher-order moment finite-time stabilization of MJS (9.3) has been realized.
On the other hand, according to GKYP Lemma 7.1, condition (9.32) is equivalent to

\[
\begin{bmatrix}M\\ I\end{bmatrix}^{T}
\begin{bmatrix}\Phi^{T}\otimes P_l+\Psi_l\otimes Q_l & 0\\ 0 & \Pi\end{bmatrix}
\begin{bmatrix}M\\ I\end{bmatrix}<0 \qquad (9.39)
\]

where

\[
\Phi=\begin{bmatrix}0 & 1\\ 1 & 0\end{bmatrix},\qquad
\Psi_l=\begin{bmatrix}-1 & 0\\ 0 & -\vartheta_l^{2}\end{bmatrix},\qquad
\Pi=\begin{bmatrix}I & 0\\ 0 & -\gamma^{2}I\end{bmatrix},
\]
\[
M=\begin{bmatrix}A_{Mp}+B_{uMp}K_{Mp} & B_{wMp}\\ C_{Mp} & D_{wMp}\end{bmatrix}
=\begin{bmatrix}A_{Mp} & B_{wMp}\\ C_{Mp} & D_{wMp}\end{bmatrix}
+\begin{bmatrix}B_{uMp}\\ 0\end{bmatrix}K_{Mp}\begin{bmatrix}I & 0\end{bmatrix}
=\Lambda+\Gamma K_{Mp}Z,
\]
\[
\Lambda=\begin{bmatrix}A_{Mp} & B_{wMp}\\ C_{Mp} & D_{wMp}\end{bmatrix},\qquad
\Gamma=\begin{bmatrix}B_{uMp}\\ 0\end{bmatrix},\qquad
Z=\begin{bmatrix}I & 0\end{bmatrix},\qquad
Z^{+}=Z^{*}(ZZ^{*})^{-1}=\begin{bmatrix}I & 0\end{bmatrix}^{T}.
\]
Through Lemma 7.2, Formula (9.39) can be deduced from inequality (9.41),
which means that the low-frequency performance index (9.30) can be derived from
the condition (9.32). Similarly, the high-frequency performance index (9.31) can also
be deduced from the condition (9.33) in Theorem 9.2. This completes the proof.
Fig. 9.1 a State trajectory of the closed-loop system with p = 2. b State trajectory of the closed-
loop system with p = 3
In this section, two examples will be given to show the effectiveness and practi-
cal application of our developed theoretical results. The first numerical example is
used to show the transient performance of the system in given time interval and the
multiple-frequency performance in high-frequency and low-frequency bands.
Example 9.1 Use the same parameters as Example 7.1 in Chap. 7. Applying the results obtained from Theorem 9.1 with moment order p = 2, the controller parameters to be solved are

\[
K_1=\begin{bmatrix}-0.1604 & -3.3373\end{bmatrix},\qquad K_2=\begin{bmatrix}0.6055 & -0.7477\end{bmatrix}.
\]
Using the obtained controller, Fig. 9.1a, b are given to show the second-order
moment response and the third-order moment response of the states, respectively.
It is obvious from the Fig. 9.1 that the controlled system satisfies the condition of
finite-time stabilization. In order to compare the effectiveness of the method, the
state trajectory of the open-loop system without control is given in Fig. 9.2, where
the state trajectory exceeds the desired bound c2 = 4.
To verify the effectiveness of the results presented in Theorem 9.2, with moment order p = 2, the controller parameters to be solved are

\[
K_1=\begin{bmatrix}-5.9370 & -3.4052\end{bmatrix},\qquad K_2=\begin{bmatrix}-13.2598 & -18.4628\end{bmatrix}.
\]
Fig. 9.3 a State trajectory of the closed-loop system. b State trajectory of the open-loop system
Figure 9.3a, b are given to show the third-order moment response of the closed-
loop system and open-loop system, respectively. It can be seen that even though the open-loop system is unstable, the controlled closed-loop system satisfies the required performance.
Meanwhile, in order to verify that the system meets the multiple-frequency performance indices, the amplitude-frequency characteristic curve of the closed-loop system is shown in Fig. 9.4a. The solid line in Fig. 9.4 represents the amplitude-frequency characteristic curve of ‖G_{z_{Mp}w_{Mp}}(e^{jϑ})‖. The shaded part in blue shows the limits of β and ρ in the low-frequency and high-frequency bands. It can be clearly seen in Fig. 9.4 that the performance indicator ‖G_{z_{Mp}w_{Mp}}(e^{jϑ})‖ can be satisfied in the low-frequency band.
9.6 Conclusion
In this chapter, the issue of higher-order moment performance for stochastic discrete-
time MJSs has been addressed with the help of the cumulant generating function
by translating the original stochastic MJSs into deterministic ones. To reduce the
conservativeness of the controller design, the requirement of asymptotic stability
in infinite-time domain is relaxed to guarantee that the state is restricted within a
certain range of the equilibrium point in the fixed time interval. Furthermore, from
the frequency point of view, the finite-frequency controller in finite-time domain
with higher-order moment performance has been designed by introducing frequency
information into controller design. In the next chapter, the model predictive controller
will be designed to online optimize the finite-time performance of the considered
systems.
References
1. Luan, X.L., Liu, F., Shi, P.: Observer-based finite-time stabilization for extended Markovian
jump systems. Asian J. Control 13(6), 925–935 (2011)
2. Oliveira, R.C.L.F., Vargas, A.N., Val, J.B.R.D.: Mode-independent H2 -control of a DC motor
modeled as a Markovian jump linear system. IEEE Trans. Control Syst. Technol. 22(5), 1915–
1919 (2014)
3. Luan, X.L., Liu, F., Shi, P.: Finite-time filtering for nonlinear stochastic systems with partially
known transition jump rates. IET Control Theor. Appl. 4(5), 735–745 (2010)
4. Luan, X.L., Liu, F., Shi, P.: H∞ filtering for nonlinear systems via neural networks. J. Frankl.
Inst. 347, 1035–1046 (2010)
5. Cheng, P., Wang, J.C., He, S.P., Luan, X.L., Liu, F.: Observer-based asynchronous fault detec-
tion for conic-type nonlinear jumping systems and its application to separately excited DC
motor. IEEE Trans. Circ. Syst-I 67(3), 951–962 (2020)
6. Luan, X.L., He, S.P., Liu, F.: Neural network-based robust fault detection for nonlinear jump
systems. Chaos Soliton Fract. 42(2), 760–766 (2009)
7. Luan, X.L., Zhao, C.Z., Liu, F.: Finite-time H∞ control with average dwell-time constraint for
time-delay Markovian jump systems governed by deterministic switches. IET Control Theor.
Appl. 8(11), 968–977 (2014)
8. Luan, X.L., Shunyi, Zhao, Liu, F.: H∞ control for discrete-time Markovian jump systems with
uncertain transition probabilities. IEEE Trans. Autom. Control 58(6), 1566–1572 (2013)
9. Ning, Z.P., Zhang, L.X., Mesbah, A., Colaneri, P.: Stability analysis and stabilization of discrete-
time non-homogeneous semi-Markovian jump linear systems: a polytopic approach. Automat-
ica 120, 1–9 (2020)
10. Ma, S., Boukas, E.K.: A descriptor system approach to sliding mode control for uncertain
Markovian jump systems. Automatica 45(11), 2707–2713 (2009)
11. Zhao, S.Y., Liu, F., Luan, X.L.: Risk-sensitive filtering for nonlinear Markovian jump systems
on the basis of particle approximation. Int. J. Adapt. Control 26(2), 158–170 (2012)
12. Luan, X.L., Shi, P., Liu, F.: Finite-time stabilization for Markovian jump systems with Gaussian
transition probabilities. IET Control Theor. Appl. 7(2), 298–304 (2013)
13. Cheng, D.Z., Zhang, L.J.: Adaptive control of linear Markovian jump systems. Int. J. Syst. Sci.
37(7), 477–483 (2006)
14. Geromel, J.C., Gabriel, G.W.: Optimal state feedback sampled-data control design of Marko-
vian jump linear systems. Automatica 54, 182–188 (2015)
15. Luan, X.L., Liu, F., Shi, P.: Neural-network-based finite-time H∞ control for extended Marko-
vian jump nonlinear systems. Int. J. Adapt. Control Signal Process 24(7), 554–567 (2010)
16. Luan, X.L., Shunyi, Zhao, Shi, P., Liu, F.: H∞ filtering for discrete-time Markovian jump
systems with unknown transition probabilities. Int. J. Adapt. Control Signal Process 28(2),
138–148 (2014)
17. Chow, V., Jiang, W., Li, J.V.: Does VIX truly measure return volatility? SSRN Electron. J.
(2014). https://doi.org/10.2139/ssrn.2489345
18. Wu, H.J., Sun, J.: P-moment stability of stochastic differential equations with impulsive jump
and Markovian switching. Automatica 42(10), 1753–1759 (2006)
19. Liu, L., Shen, Y., Jiang, F.: The almost sure asymptotic stability and p-th moment asymptotic
stability of nonlinear stochastic differential systems with polynomial growth. IEEE Trans.
Autom. Control 56(8), 1985–1990 (2011)
20. Li, X., Zhu, Q., Regan, D.: P-th moment exponential stability of impulsive stochastic functional
differential equations and application to control problems of NNs. J. Franklin Inst. 351(9),
4435–4456 (2014)
21. Luan, X.L., Huang, B., Liu, F.: Higher order moment stability region for Markovian jump
systems based on cumulant generating function. Automatica 93, 389–396 (2018)
22. Zhou, Z.H., Luan, X.L., Liu, F.: High-order moment stabilization for Markovian jump systems
with attenuation rate. J Franklin Inst. 356, 9677–9688 (2019)
23. Wan, H.Y., Luan, X.L., Karimi, H.R., Liu, F.: Higher-order moment filtering for Markovian
jump systems in finite frequency domain. IEEE Trans. Circ. Syst-II 66(7), 1217–1221 (2019)
24. Zhou, Z.H., Luan, X.L., Liu, F.: Finite-frequency fault detection based on derandomisation for
Markovian jump linear system. IET Control Theor. Appl. 12(08), 1148–1155 (2018)
25. Maligranda, L.: The AM-GM inequality is equivalent to the Bernoulli inequality. Math. Intell.
34(1), 1–2 (2012)
Chapter 10
Model Predictive Control for Markovian Jump Systems in the Finite-Time Domain
Abstract The model predictive control is adopted to optimize the finite-time per-
formance for discrete-time Markovian jump systems and semi-Markovian jump sys-
tems. Our target is to minimize the control inputs in a given time interval while sat-
isfying the required transient performance by means of online rolling optimization.
In this way, the minimum energy consumption can be realized. Furthermore, for the
semi-Markovian jump systems whose transition probability depends on sojourn-time,
the finite-time performance under the model predictive control scheme is analyzed
in the situation that the transition probability at each time depends on the history
information of elapsed switching sequences.
10.1 Introduction
Model predictive control (MPC) has been extensively studied as a powerful tool for
managing the industrial processes. Differing from conventional control where the
control law is pre-computed offline, MPC is a form of control scheme in which the
control action is obtained online [1]. By solving an optimal control problem in which
the initial state is the current state of the processes, a control sequence is yielded at
each sampling instant and only the first control action is applied to the procedures [2].
The advantages of the MPC scheme include that it can guarantee closed stability, opti-
mality, adaptation to change parameters, and convenience to deal with constraints [3].
MPC for Markovian jump systems (MJSs) also has been attracting more and
more attention. MPC can re-compute the optimal control problem with both the
measured state and mode at each sampling time. Therefore, the performance index
using MPC has a significant reduction compared with the state feedback gain or
output feedback gain for each mode. Since Park et al. [4] firstly used MPC to optimize
control problems of MJSs, scholars have used MPC to solve many problems including
constrained issues [5], exogenous disturbances [6], uncertain transition probabilities
[7], resource saving [8], etc.
In MJSs, the sojourn-time (the interval between two consecutive jumps) of each
subsystem is subject to exponential distribution in the continuous-time domain or
geometric distribution in the discrete-time domain. However, the transition proba-
bilities (TPs) can be a memory when describing the mode switching of practical
application, so semi-MJSs are proposed to explain why the TP at each time depends
on the history information of elapsed switching sequences [9]. To deal with the
complexity caused by the generality of the semi-Markovian chain in the capabil-
ity of modeling stochastic switching, some advances have been achieved so far, for
example, taking upper bounds on the sojourn-time [10].
The discussions above are all about Lyapunov asymptotic stability and optimal
performance of systems over the infinite-time domain. When transient performance
is required, MPC can also satisfy the performance requirements and optimization
objectives. As the finite-time performance of systems does not require the asymptotic
convergence of the states, the MPC algorithm can consider the minimum energy
consumption of the control inputs by minimizing control actions. This chapter focuses
on using MPC to satisfy the transient performance in a given time interval for both
discrete-time MJSs and semi-MJSs.
Consider the discrete-time MJS and semi-MJS with the following structure:
Definition 10.2 The matrix Π(τ) = [π_ij(τ)]_{i,j∈M} is called the discrete-time semi-Markovian kernel (SMK), with π_ij(τ) ≜ Pr(R_{n+1} = j, S_{n+1} = τ | R_n = i), ∀i, j ∈ M, ∀τ ∈ ℕ, \(\sum_{\tau=0}^{\infty}\sum_{j\in\mathcal{M}}\pi_{ij}(\tau)=1\), and π_ij(0) = 0.
With the above concepts, the definition of the semi-Markovian chain is given as
follows.
Definition 10.3 {rk }k∈N is said to be a semi-Markovian chain associated with MRC
{(Rn , kn )}n∈N , if rk = R N (k) , ∀k ∈ N, where N (k) max{n ∈ N|k ≥ kn }.
Although embedded Markovian chain {(Rn , kn )}n∈N and semi-Markovian chain
{rk }k∈N are used to describe the variation of system modes, the difference is that the
stochastic variable varies with jump instant kn in the former, while with sampling
instant k in the latter. Because the evolution of the semi-Markovian chain is generated
by the SMK (τ ) related to sojourn-time τ , the probability density function (PDF)
of sojourn-time is required. The PDF here is depending on both the current and next
system mode and defined as ωi j (τ ) Pr(Sn+1 = τ |Rn+1 = j, Rn = i), ∀i, j ∈ M,
∀τ ∈ N. Therefore,
Remark 10.1 In part of the study related to the semi-MJSs, the PDF that only
depends on the current system mode is considered. It can be seen from the remark
of [10] that the PDF depending on mode jumping is more specific and capable of
describing the corresponding semi-Markovian chain accurately rather than relying
on only one system mode.
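To make the role of the sojourn-time-dependent kernel concrete, the following sketch samples a semi-Markovian chain from a hypothetical SMK with two modes and sojourn-times τ ∈ {1, …, 4}, following Definitions 10.2 and 10.3; the kernel values are invented purely for illustration.

```python
import numpy as np

# Hypothetical semi-Markovian kernel pi_ij(tau) for two modes and tau = 1..4:
# SMK[tau-1, i, j] = Pr(R_{n+1} = j, S_{n+1} = tau | R_n = i); sums over (tau, j) to 1.
SMK = np.zeros((4, 2, 2))
SMK[:, 0, 1] = [0.4, 0.3, 0.2, 0.1]      # mode 1 always jumps to mode 2
SMK[:, 1, 0] = [0.1, 0.2, 0.3, 0.4]      # mode 2 always jumps to mode 1
rng = np.random.default_rng(0)

def sample_semi_markov(SMK, r0=0, horizon=25):
    """Generate r_k, k = 0..horizon-1, from the embedded chain (Definition 10.3)."""
    n_modes = SMK.shape[1]
    r, k, seq = r0, 0, []
    while k < horizon:
        flat = SMK[:, r, :].ravel()                   # joint law of (tau, next mode)
        idx = rng.choice(flat.size, p=flat / flat.sum())
        tau, nxt = idx // n_modes + 1, idx % n_modes
        seq.extend([r] * min(tau, horizon - k))       # stay in mode r for tau steps
        k += tau
        r = nxt
    return np.array(seq)

print(sample_semi_markov(SMK))
```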
To optimize the finite-time performance of the system (10.1), the cost function of MPC is considered as:

\[
J(k)=\sum_{f=0}^{T-1}u^{T}(k+f|k)\,Q_i\,u(k+f|k). \qquad (10.3)
\]
\[
\min_{U(k)}\;\max_{r_k,\dots,r_{k+T-1}}\;\gamma
\]
\[
\text{s.t.}\quad
\begin{bmatrix}\gamma & *\\ U(k) & Q_T^{-1}\end{bmatrix}>0 \qquad (10.7)
\]
\[
\begin{bmatrix}c_2 & *\\ H_1(f)x(k)+H_2(f)U(k) & X\end{bmatrix}>0,\quad f\in\mathbb{N}_{[0,T]} \qquad (10.8)
\]
\[
\begin{bmatrix}X & *\\ A_iX+B_{ui}Y & X\end{bmatrix}\ge 0 \qquad (10.9)
\]

where

\[
X=P^{-1},\qquad F=YX^{-1},\qquad Q_T=\mathrm{diag}\{Q_{r_k},\dots,Q_{r_{k+T-1}}\},
\]
\[
H_1(f)=\begin{cases}I, & f=0\\ A(r_{k+f-1})A(r_{k+f-2})\cdots A(r_k), & f\in\{1,\dots,T\}\end{cases}
\]
\[
H_2(f)=\begin{cases}\begin{bmatrix}0 & \cdots & 0\end{bmatrix}_{n\times mT}, & f=0\\[4pt]
\begin{bmatrix}A(r_{k+f-1})A(r_{k+f-2})\cdots A(r_{k+1})B_u(r_k) & \cdots & A(r_{k+f-1})B_u(r_{k+f-2}) & B_u(r_{k+f-1}) & 0 & \cdots & 0\end{bmatrix}_{n\times mT}, & f\in\mathbb{N}_{[1,T]}.\end{cases}
\]
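The prediction matrices H₁(f) and H₂(f) defined above can be assembled recursively for any fixed mode sequence. The short sketch below is an illustration of that construction; the mode sequence and system matrices are placeholders.

```python
import numpy as np

def prediction_matrices(A_seq, Bu_seq, f):
    """H1(f), H2(f): x(k+f|k) = H1(f) x(k) + H2(f) U(k) for a fixed mode sequence."""
    T = len(A_seq)
    n, m = A_seq[0].shape[0], Bu_seq[0].shape[1]
    H1 = np.eye(n)
    for j in range(f):
        H1 = A_seq[j] @ H1                      # A(r_{k+f-1}) ... A(r_k)
    H2 = np.zeros((n, m * T))
    for j in range(f):                          # influence of u(k+j|k) on x(k+f|k)
        blk = Bu_seq[j]
        for l in range(j + 1, f):
            blk = A_seq[l] @ blk
        H2[:, j * m:(j + 1) * m] = blk
    return H1, H2

# Placeholder two-step mode sequence
A_seq  = [np.array([[1.0, 0.1], [0.0, 0.9]]), np.array([[0.8, 0.0], [0.2, 1.1]])]
Bu_seq = [np.array([[0.0], [1.0]]), np.array([[0.5], [0.5]])]
H1, H2 = prediction_matrices(A_seq, Bu_seq, 2)
print(H1, H2, sep="\n")
```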
Proof The foregoing SDP can be simply obtained from Remark 10.2 through the Schur complement lemma. With U(k) = [u^T(k|k), u^T(k+1|k), …, u^T(k+T−1|k)]^T, inequality (10.4) can be written as γ − U^T(k)Q_TU(k) > 0. Then, using the Schur complement lemma, this inequality can be expressed as inequality (10.7).
Due to the fact that x(k+f|k) = H_1(f)x(k) + H_2(f)U(k), f ∈ ℕ_{[0,T]}, inequality (10.5) can be transformed into inequality (10.8) by the Schur complement lemma.
Define the following Lyapunov function V(x(k)) ≜ x^T(k)Px(k). A feedback controller u(k+f|k) = Fx(k+f|k), f ∈ ℕ_{≥T}, satisfying the following condition is considered for the part of the finite-time interval that goes beyond the predictive horizon:

and condition (10.9) is obtained through the Schur complement lemma. Therefore, the SDP is equivalent to the problem in Remark 10.2, which is solvable. Based on expressions (10.8) and (10.9), it is found that the inequality x^T(k+f|k)G_ix(k+f|k) < c₂ holds for f ∈ ℕ_{[0,N−k]}. Therefore, if the initial state satisfies the condition x^T(0)G_{r₀}x(0) ≤ c₁, then the system (10.1) is stochastic finite-time stable with respect to (c₁, c₂, N, G_i) and the proof is completed.
Remark 10.3 Inequality (10.9) is used to ensure that control inputs can be found in
the interval after the predictive horizon. When the feedback gain F cannot be found,
the predictive horizon is considered as T = N and inequality (10.9) is not required.
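As a small illustration of the prediction matrices $H_1(f)$ and $H_2(f)$ defined above, the following Python sketch builds them for one assumed mode sequence and checks the identity $x(k+f|k) = H_1(f)x(k) + H_2(f)U(k)$ against a step-by-step simulation. The two-mode data and the mode sequence are placeholders, not taken from the book.

```python
import numpy as np

n, m, T = 2, 1, 4
A = [np.array([[0.9, 0.1], [0.0, 0.8]]), np.array([[1.1, 0.2], [0.0, 0.95]])]
Bu = [np.array([[0.05], [0.10]]), np.array([[0.02], [0.08]])]
seq = [0, 1, 1, 0]                       # assumed mode sequence r_k, ..., r_{k+T-1}

def prediction_matrices(f, seq):
    """H1(f), H2(f) with x(k+f|k) = H1(f) x(k) + H2(f) U(k), U(k) stacking u(k+l|k)."""
    H1 = np.eye(n)
    for l in range(f):                   # H1(f) = A(r_{k+f-1}) ... A(r_k)
        H1 = A[seq[l]] @ H1
    H2 = np.zeros((n, m * T))
    for l in range(f):                   # block l: A(r_{k+f-1}) ... A(r_{k+l+1}) Bu(r_{k+l})
        G = Bu[seq[l]]
        for p in range(l + 1, f):
            G = A[seq[p]] @ G
        H2[:, l * m:(l + 1) * m] = G
    return H1, H2

# verification against a step-by-step simulation for random x(k) and U(k)
rng = np.random.default_rng(1)
x0, U = rng.standard_normal((n, 1)), rng.standard_normal((m * T, 1))
x = x0.copy()
for f in range(T + 1):
    H1, H2 = prediction_matrices(f, seq)
    assert np.allclose(x, H1 @ x0 + H2 @ U)
    if f < T:
        x = A[seq[f]] @ x + Bu[seq[f]] @ U[f * m:(f + 1) * m]
```

This is only a check of the prediction structure; in the min–max problem these matrices are recomputed for every admissible mode sequence.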
$$\beta^N c_1 < c_2 \qquad (10.13)$$

where $V_i(x(k))$ satisfies inequalities (10.10), (10.11), and (10.12). At any instant $k\in\mathbb{N}_{[0,N]}$, the following condition is ensured by inequality (10.11):
$$\mathbb{E}\{V(x(k), r_{k_s})\} < \frac{\lambda_{\max}(P_i)}{\lambda_{\max}(G_i)}\,\beta^N c_1 \qquad (10.16)$$

where $\lambda_{\max}(P_i)$ and $\lambda_{\max}(G_i)$ denote the maximal eigenvalues of $P_i$ and $G_i$. In combination with (10.13) and (10.16), the following condition holds:

$$\mathbb{E}\{V(x(k), r_{k_s})\} < \frac{\lambda_{\max}(P_i)}{\lambda_{\max}(G_i)}\, c_2. \qquad (10.17)$$

On the other hand, at any instant $k\in\mathbb{N}_{[1,N]}$, condition (10.17) gives

$$x^T(k)G_i x(k) < \lambda_{\max}(G_i)\, x^T(k)x(k) < \frac{\lambda_{\max}(G_i)}{\lambda_{\max}(P_i)}\, x^T(k)P_i x(k) < c_2.$$
Next, the following theorem gives a criterion of stochastic finite-time stability for
the free semi-MJSs.
Theorem 10.2 Consider the semi-MJS (10.1) with $u(k)\equiv 0$ and given parameters $(c_1, c_2, N, G_i)$. If, $\forall i\in\mathcal{M}$, there exist $T^i_{\max}\in\mathbb{N}_{\ge 1}$, $\beta > 1$, and matrices $P_i > 0$ such that

$$A_i^T P_i A_i - \beta P_i < 0 \qquad (10.18)$$

$$\sum_{\tau=1}^{T^i_{\max}} \big(A_i^{\tau}\big)^T P_i(\tau)\, A_i^{\tau} - \beta^{T^i_{\max}} P_i < 0 \qquad (10.19)$$

$$\beta^N c_1 < c_2 \qquad (10.20)$$

where $P_i(\tau) \triangleq \sum_{j\in\mathcal{M}} \pi_{ij}(\tau)P_j/\eta_i$ with $\eta_i \triangleq \sum_{\tau=1}^{T^i_{\max}}\sum_{j\in\mathcal{M}}\pi_{ij}(\tau)$, and $T^i_{\max}$ denotes the upper bound of the sojourn-time for the $i$th mode of system (10.1), then the system (10.1) is stochastic finite-time stable.
where $\lambda_{\min}(P_i)$ and $\lambda_{\max}(P_i)$ denote the minimal and maximal eigenvalues of $P_i$. For the case $R_n = i$, it is ensured by condition (10.18) that, $\forall k\in\mathbb{N}_{[k_s,\, k_s+T^i_{\max}-1]}$,

$$V(x(k+1), r_{k_s}) - \beta V(x(k), r_{k_s}) = x^T(k_s)\,\big(A_i^{k-k_s}\big)^T \big(A_i^T P_i A_i - \beta P_i\big)\, A_i^{k-k_s}\, x(k_s) < 0. \qquad (10.22)$$

By conditions (10.20), (10.21), (10.22), and (10.23), it follows that the free semi-MJS (10.1) is stochastic finite-time stable.
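Under the reading of conditions (10.18)–(10.19) given above, and for a fixed $\beta > 1$, these conditions are LMIs in the matrices $P_i$ and can be checked numerically. The following cvxpy sketch does this for a hypothetical two-mode free semi-MJS; the matrices $A_i$, the kernel values $\pi_{ij}(\tau)$, the sojourn bounds, the candidate $\beta$, and $(c_1, c_2, N)$ are all illustrative assumptions, and a scalar line search over $\beta$ would be wrapped around the feasibility call.

```python
import numpy as np
import cvxpy as cp

# Hypothetical two-mode free semi-MJS data (not from the book).
A = [np.array([[0.90, 0.10], [0.00, 0.80]]),
     np.array([[1.05, 0.20], [0.00, 0.95]])]      # mode 2 is not Schur stable
Tmax = [3, 3]
M, n = len(A), 2
kernel = np.zeros((M, M, max(Tmax) + 1))          # pi_ij(tau), assumed values
kernel[0, 1, 1:4] = [0.2, 0.5, 0.3]
kernel[1, 0, 1:4] = [0.4, 0.4, 0.2]
c1, c2, N = 1.0, 25.0, 10

def conditions_feasible(beta):
    """LMI feasibility of (10.18)-(10.19) for a fixed beta > 1."""
    P = [cp.Variable((n, n), PSD=True) for _ in range(M)]
    eps = 1e-6 * np.eye(n)
    cons = [P[i] >> eps for i in range(M)]
    for i in range(M):
        eta = kernel[i, :, 1:Tmax[i] + 1].sum()
        cons.append(A[i].T @ P[i] @ A[i] - beta * P[i] << -eps)          # (10.18)
        lhs = 0
        for tau in range(1, Tmax[i] + 1):
            P_bar = sum(kernel[i, j, tau] * P[j] for j in range(M)) / eta
            A_tau = np.linalg.matrix_power(A[i], tau)
            lhs = lhs + A_tau.T @ P_bar @ A_tau
        cons.append(lhs - beta ** Tmax[i] * P[i] << -eps)                # (10.19)
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status in ("optimal", "optimal_inaccurate")

beta = 1.2                                        # candidate value from a search over beta > 1
print("(10.18)-(10.19) feasible:", conditions_feasible(beta),
      "| (10.20) beta^N * c1 < c2:", beta ** N * c1 < c2)
```

Note that a mode may be unstable in the classical sense while the free system can still satisfy the finite-time conditions, which is the point of the theorem.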
To optimize the finite-time performance of the system (10.1), the cost function of MPC is considered as

$$J(k) = \sum_{f=k}^{N-1} u^T(f|k)\, Q_i\, u(f|k).$$
Considering the cost function and Theorem 10.2, the proposed MPC algorithm
solves the following optimization problem at each time instant.
Remark 10.4 At any time instant $k\in\mathbb{N}_{[0,N]}$, the feedback gain $F_i$ can be obtained by solving

$$\min_{F_i,\,\beta,\,P_i} \gamma$$
Theorem 10.3 Consider the discrete-time semi-MJS (10.1) with given parameters $(c_1, c_2, N, G_i)$. For $\forall i\in\mathcal{M}$, there exist $T^i_{\max}\in\mathbb{N}_{\ge 1}$, $\lambda_1$, $\lambda_2$, $\beta > 1$, and a set of matrices $H_i$ and $\tilde H_i(t)$, $\forall t\in\mathbb{N}_{[0,T^i_{\max}]}$, $Z_i$, $U_i$ such that, $\forall t\in\mathbb{N}_{[0,T^i_{\max}-1]}$, the following SDP is solvable:

$$\min_{\lambda_1,\,\lambda_2,\,Z_i,\,U_i,\,H_i,\,\tilde H_i(t)} \gamma$$

$$\begin{bmatrix} \beta^{N-k}\big(Z_i + Z_i^T - H_i\big) & x(k) \\ * & \gamma \end{bmatrix} > 0 \qquad (10.29)$$

$$\begin{bmatrix} Z_i + Z_i^T - H_i & 0 & A_i Z_i + B_{ui} U_i \\ * & Q_i^{-1} & U_i \\ * & * & \beta H_i \end{bmatrix} > 0 \qquad (10.30)$$

$$\begin{bmatrix} Z_i + Z_i^T - \tilde H_i(t+1) & 0 & 0 & \big(A_i Z_i + B_{ui} U_i\big) L_i(t+1) \\ * & \tilde Z + \tilde Z^T - \tilde H & 0 & \big(\tilde A_i \tilde Z_i + \tilde B_{ui}\tilde U_i\big)\tilde L_i(t+1) \\ * & * & Q_i^{-1} & U_i \\ * & * & * & \beta \tilde H_i(t) \end{bmatrix} > 0 \qquad (10.31)$$

$$\beta^{T^i_{\max}} H_i - \tilde H_i(0) > 0 \qquad (10.32)$$

$$\begin{bmatrix} \lambda_1^{-1} G_i^{-1} & Z_i \\ * & H_i \end{bmatrix} > 0 \qquad (10.33)$$

$$Z_i + Z_i^T - H_i - \lambda_2^{-1} G_i^{-1} > 0 \qquad (10.34)$$

$$\lambda_2^{-1} c_2 - \beta^{N-k}\lambda_1^{-1}\, x^T(k)\, G_i\, x(k) > 0 \qquad (10.35)$$

where $\tilde A_i \triangleq \mathrm{diag}_{(\mathcal{M})}\{A_i\}$, $\tilde B_{ui} \triangleq \mathrm{diag}_{(\mathcal{M})}\{B_{ui}\}$, $\tilde Z_i \triangleq \mathrm{diag}_{(\mathcal{M})}\{Z_i\}$, $\tilde U_i \triangleq \mathrm{diag}_{(\mathcal{M})}\{U_i\}$, $\tilde H \triangleq \mathrm{diag}\{H_1, H_2, \ldots, H_M\}$, $\tilde Z \triangleq \mathrm{diag}\{Z_1, Z_2, \ldots, Z_M\}$, and $L_i(t) = I$, $\forall t\in\mathbb{N}_{[1,T^i_{\max}-1]}$.
Then, consider the upper bound of the cost function in terms of the Lyapunov function:

$$\sum_{f=k}^{N-1} u^T(f)\, Q_i\, u(f) < x^T(k)\, P_i\, x(k). \qquad (10.37)$$

$$\sum_{f=k}^{N-1} u^T(f)\, Q_i\, u(f) < \beta^{N-k}\, x^T(k)\, P_i\, x(k). \qquad (10.40)$$
$$\beta^{T^i_{\max}} P_i - \sum_{\tau=1}^{T^i_{\max}} \bigg[\, \sum_{j\in\mathcal{M}} \frac{\pi_{ij}(\tau)}{\eta_i}\, \big((A_i + B_{ui}F_i)^{\tau}\big)^T P_j\, (A_i+B_{ui}F_i)^{\tau} + \big((A_i+B_{ui}F_i)^{\tau-1}\big)^T F_i^T Q_i F_i\, (A_i+B_{ui}F_i)^{\tau-1}\bigg] > 0 \qquad (10.47)$$

where $\eta_i = \sum_{\tau=1}^{T^i_{\max}}\sum_{j\in\mathcal{M}} \pi_{ij}(\tau)$.
To circumvent the difficulty caused by the powers of $A_i + B_{ui}F_i$, we define a set of matrices $O_i(\tau, t)$, $\forall\tau\in\mathbb{N}_{[1,T^i_{\max}]}$, $\forall t\in\mathbb{N}_{[0,\tau-1]}$, which satisfies

$$\sum_{\tau=t+1}^{T^i_{\max}} \Big[\beta\, O_i(\tau,t) - (A_i+B_{ui}F_i)^T O_i(\tau,t+1)(A_i+B_{ui}F_i)\Big] - F_i^T Q_i F_i > 0 \qquad (10.48)$$

$$\sum_{\tau=1}^{T^i_{\max}} O_i(\tau,0) - \beta^{T^i_{\max}} P_i < 0 \qquad (10.49)$$

where $O_i(l,l) \triangleq \sum_{j\in\mathcal{M}} \pi_{ij}(l)P_j/\eta_i$.
Thus, we have

$$\sum_{t=0}^{T^i_{\max}-1} \big((A_i+B_{ui}F_i)^{t}\big)^T \bigg[\sum_{\tau=t+1}^{T^i_{\max}} \Big(\beta\, O_i(\tau,t) - (A_i+B_{ui}F_i)^T O_i(\tau,t+1)(A_i+B_{ui}F_i)\Big) - F_i^T Q_i F_i\bigg] (A_i+B_{ui}F_i)^{t} > 0 \qquad (10.50)$$

which is equivalent to

$$\sum_{\tau=1}^{T^i_{\max}} \sum_{t=0}^{\tau-1} \big((A_i+B_{ui}F_i)^{t}\big)^T \Big[\beta\, O_i(\tau,t) - (A_i+B_{ui}F_i)^T O_i(\tau,t+1)(A_i+B_{ui}F_i) - F_i^T Q_i F_i\Big] (A_i+B_{ui}F_i)^{t} > 0 \qquad (10.51)$$

and implies

$$\sum_{\tau=1}^{T^i_{\max}} \Big[\beta^{\tau}\, O_i(\tau,0) - \big((A_i+B_{ui}F_i)^{\tau}\big)^T O_i(\tau,\tau)\,(A_i+B_{ui}F_i)^{\tau} - \big((A_i+B_{ui}F_i)^{\tau-1}\big)^T F_i^T Q_i F_i\, (A_i+B_{ui}F_i)^{\tau-1}\Big] > 0. \qquad (10.52)$$

Combining (10.49) and (10.52) and letting $O_i(\tau,\tau) = \sum_{j\in\mathcal{M}} \pi_{ij}(\tau)P_j/\eta_i$, condition (10.47) is derived.
Letting $\tilde O_i(l) \triangleq \sum_{\tau=l+1}^{T^i_{\max}} O_i(\tau, l)$, $\forall l\in\mathbb{N}_{[0,T^i_{\max}-1]}$, and $\tilde O_i(T^i_{\max}) \triangleq 0$, we can rewrite conditions (10.48) and (10.49) as conditions (10.53) and (10.54).
Applying the same technique as that in (10.41) ensures condition (10.29). Inequalities (10.31) and (10.32) can be obtained from inequalities (10.53) and (10.54) with $\tilde H = (\tilde V^{-1})^T \tilde O\, \tilde V^{-1}$ and $\tilde H_i(t) = (V_i^{-1})^T \tilde O_i(t)\, V_i^{-1}$, where $\tilde V \triangleq \mathrm{diag}\{V_1, V_2, \ldots, V_M\}$. To satisfy condition (10.28), the following condition is given first. By the Schur complement lemma and the same technique used from conditions (10.45) to (10.46), inequalities (10.33) and (10.34) are satisfied with condition (10.55). As mentioned in Remark 10.4, $\beta^N c_1$ is replaced by $\beta^{N-k}\, x^T(k)\, G_i\, x(k)$ at each instant $k$ to obtain a more accurate value of $\beta$. Hence, inequality (10.35) is given to satisfy condition (10.28). This completes the proof.
10.4 Simulation Analysis

In this section, two examples are given to show the effectiveness and practical applicability of the developed theoretical results. The first, numerical, example shows the transient performance of the discrete-time MJS in a given time interval under MPC. The second example focuses on the discrete-time semi-MJS with different probability density functions of the sojourn-time in different modes.
Example 10.1 Consider the system (10.1) with two operation modes and the fol-
lowing parameters:
$$A_1 = \begin{bmatrix} 0.8 & 0.28 \\ 0 & 0.9 \end{bmatrix}, \quad B_{u1} = \begin{bmatrix} 0.02 \\ 0.16 \end{bmatrix}, \qquad A_2 = \begin{bmatrix} 1.2 & 0.25 \\ 0 & 1.12 \end{bmatrix}, \quad B_{u2} = \begin{bmatrix} 0.032 \\ 0.28 \end{bmatrix}.$$
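A sketch of how the per-instant SDP (10.7)–(10.9) could be assembled with cvxpy for this example, for one assumed mode sequence over a short horizon, is given below. The horizon $T$, the weights $Q_i$, the bound $c_2$, the current state $x(k)$, and the mode sequence are illustrative assumptions (they are not specified in the excerpt above); the min–max in the original problem would repeat constraints (10.7)–(10.8) for every admissible sequence, and the semidefinite `>>` constraints stand in for the strict inequalities.

```python
import numpy as np
import cvxpy as cp

n, m, T = 2, 1, 3
A = [np.array([[0.8, 0.28], [0.0, 0.90]]), np.array([[1.2, 0.25], [0.0, 1.12]])]
Bu = [np.array([[0.02], [0.16]]), np.array([[0.032], [0.28]])]
Q = [np.eye(m), np.eye(m)]                         # assumed input weights Q_1, Q_2
c2, xk = 25.0, np.array([[0.7], [0.7]])            # assumed bound and current state x(k)
seq = [0, 1, 1]                                    # assumed mode sequence r_k, ..., r_{k+T-1}

def pred(f):
    """H1(f), H2(f) with x(k+f|k) = H1(f) x(k) + H2(f) U(k)."""
    H1, H2 = np.eye(n), np.zeros((n, m * T))
    for l in range(f):
        H1 = A[seq[l]] @ H1
        G = Bu[seq[l]]
        for p in range(l + 1, f):
            G = A[seq[p]] @ G
        H2[:, l * m:(l + 1) * m] = G
    return H1, H2

gamma = cp.Variable((1, 1))
U = cp.Variable((m * T, 1))
X = cp.Variable((n, n), symmetric=True)            # X = P^{-1}
Y = cp.Variable((m, n))                            # Y = F X
QT = np.zeros((m * T, m * T))                      # block-diag(Q_{r_k}, ..., Q_{r_{k+T-1}})
for f in range(T):
    QT[f * m:(f + 1) * m, f * m:(f + 1) * m] = Q[seq[f]]

cons = [cp.bmat([[gamma, U.T], [U, np.linalg.inv(QT)]]) >> 0]              # (10.7)
for f in range(T + 1):
    H1, H2 = pred(f)
    xf = H1 @ xk + H2 @ U                                                  # x(k+f|k)
    cons.append(cp.bmat([[np.array([[c2]]), xf.T], [xf, X]]) >> 0)         # (10.8)
i = seq[-1]
cons.append(cp.bmat([[X, (A[i] @ X + Bu[i] @ Y).T],
                     [A[i] @ X + Bu[i] @ Y, X]]) >> 0)                     # (10.9)

prob = cp.Problem(cp.Minimize(gamma[0, 0]), cons)
prob.solve(solver=cp.SCS)
print("status:", prob.status, "| gamma:", gamma.value, "| u(k|k):", U.value[:m].ravel())
```

In a receding-horizon implementation, only the first move $u(k|k)$ would be applied and the problem re-solved at the next instant with the updated state and mode information.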
Example 10.2 Consider the discrete-time semi-MJS (10.1) with three operation
modes and the following parameters:
$$A_1 = \begin{bmatrix} -0.36 & 0.69 \\ -1.81 & 1.97 \end{bmatrix}, \quad B_{u1} = \begin{bmatrix} -0.1 \\ 0.1 \end{bmatrix}, \qquad A_2 = \begin{bmatrix} 0.64 & 0.62 \\ -0.37 & 1.36 \end{bmatrix}, \quad B_{u2} = \begin{bmatrix} 0.1 \\ -0.1 \end{bmatrix}, \qquad A_3 = \begin{bmatrix} 0.64 & 0.62 \\ -0.37 & 1.36 \end{bmatrix}, \quad B_{u3} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$
$$\big[\omega_{ij}(\tau)\big]_{\forall i,j\in\mathcal{M}} = \begin{bmatrix} \dfrac{10!\; 0.6^{\tau}\, 0.4^{10-\tau}}{(10-\tau)!\,\tau!} & \dfrac{10!\; 0.4^{\tau}\, 0.6^{10-\tau}}{(10-\tau)!\,\tau!} & 0 \\[2mm] 0.9^{(\tau-1)^2} - 0.9^{\tau^2} & \dfrac{10!\; 0.5^{10}}{(10-\tau)!\,\tau!} & 0 \\[2mm] 0.4^{(\tau-1)^{1.3}} - 0.4^{\tau^{1.3}} & 0.3^{(\tau-1)^{0.8}} - 0.3^{\tau^{0.8}} & 0 \end{bmatrix}.$$
[Figure: state trajectories in the $(x_1, x_2)$ plane together with the bounds associated with $c_1$ and $c_2$; (a) closed-loop, (b) open-loop.]
It should be noted that the PDFs of the sojourn-time follow Bernoulli and Weibull distributions with different parameters. Mode 1 corresponds to the Bernoulli distribution, mode 3 corresponds to the Weibull distribution, and mode 2 involves both types of distributions.
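As a quick sanity check on the table of $\omega_{ij}(\tau)$ above, the following sketch evaluates the Bernoulli-type (binomial-form) and discrete-Weibull-type entries and verifies that each nonzero entry approximately sums to one over $\tau = 1,\ldots,10$. The functional forms follow the reconstruction of the matrix given above, and the truncation at $\tau = 10$ is an assumption of this sketch.

```python
import numpy as np
from math import comb

def binom_pdf(tau, p, nmax=10):
    # Bernoulli-type entry: 10!/((10-tau)! tau!) * p^tau * (1-p)^(10-tau)
    return comb(int(tau), 0) * 0 + comb(nmax, int(tau)) * p ** tau * (1 - p) ** (nmax - tau)

def dweibull_pdf(tau, q, k):
    # discrete-Weibull-type entry: q^{(tau-1)^k} - q^{tau^k}
    return q ** ((tau - 1) ** k) - q ** (tau ** k)

taus = np.arange(1, 11)
omega = {
    (1, 1): [binom_pdf(t, 0.6) for t in taus],
    (1, 2): [binom_pdf(t, 0.4) for t in taus],
    (2, 1): [dweibull_pdf(t, 0.9, 2.0) for t in taus],
    (2, 2): [binom_pdf(t, 0.5) for t in taus],
    (3, 1): [dweibull_pdf(t, 0.4, 1.3) for t in taus],
    (3, 2): [dweibull_pdf(t, 0.3, 0.8) for t in taus],
}
for (i, j), w in omega.items():
    print(f"sum_tau omega_{i}{j}(tau) = {sum(w):.4f}")
```

The small residual mass below one for some entries comes from excluding $\tau = 0$ (since $\pi_{ij}(0) = 0$) and from the truncation, not from the functional forms themselves.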
It can be checked that the open-loop system is not stochastic finite-time stable
by Theorem 10.2. Then, the model predictive controller can be designed at each
instant by Theorem 10.3 such that the resulting closed-loop system (10.1) satisfies
the required stochastic finite-time stability performance.
By setting the parameters $c_1 = 1$, $c_2 = 25$, $N = 10$, $G_1 = G_2 = G_3 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$, $T^1_{\max} = T^2_{\max} = T^3_{\max} = 4$, and the initial condition $x(0) = \begin{bmatrix} 0.7 \\ 0.7 \end{bmatrix}$, Figs. 10.5 and 10.6 show the state responses of the open-loop and closed-loop semi-MJS.
In Figs. 10.5 and 10.6, it is clear that the states of the closed-loop system satisfy
the finite-time performance. Then, to show it more clearly, the state trajectories con-
taining the range of finite-time performance are shown in Fig. 10.7.
Remark 10.5 It is worth mentioning that the parameter $c_2$ is chosen close to its minimal feasible value in both examples. Theoretically, the controller design approach adopted in Example 10.1 can achieve a smaller $c_2$. The reason is that solving for the control sequence can impose the finite-time performance conditions directly on the predicted states, whereas solving for the state feedback gain (the controller design method used in Example 10.2) requires a transformation of these conditions.
[Figure: state trajectories of the semi-MJS in the $(x_1, x_2)$ plane together with the bounds associated with $c_1$ and $c_2$; (a) closed-loop, (b) open-loop.]

10.5 Conclusion
References
1. Kothare, M.V., Balakrishnan, V., Morari, M.: Robust constrained model predictive control using
linear matrix inequalities. Automatica 32(10), 1361–1379 (1996)
2. Mayne, D.Q., Rawlings, J.B., Rao, C.V., Scokaert, P.O.: Constrained model predictive control:
stability and optimality. Automatica 36(6), 789–814 (2000)
3. Kouvaritakis, B., Cannon, M.: Model Predictive Control. Springer International Publishing,
Switzerland (2016)
4. Park, B.G., Lee, J.W., Kwon, W.H.: Receding horizon control for linear discrete systems with
jump parameters. In: Proceedings of the 36th IEEE Conference on Decision and Control, San
Diego, CA, USA, vol. 4, pp. 3956–3957 (1997)
5. Patrinos, P., Sopasakis, P., Sarimveis, H., Bemporad, A.: Stochastic model predictive control
for constrained discrete-time Markovian switching systems. Automatica 50(10), 2504–2514
(2014)
6. Lu, J., Xi, Y., Li, D., Gan, Z.: Model predictive control synthesis for constrained Markovian
jump linear systems with bounded disturbance. IET Control Theor. Appl. 11(18), 3288–3296
(2017)
7. Zhang, Y., Lim, C.C., Liu, F.: Robust mixed H2 /H∞ model predictive control for Markovian
jump systems with partially uncertain transition probabilities. J. Franklin Inst. 355(8), 3423–
3437 (2018)
8. He, P., Wen, J.W., Luan, X.L., Liu, F.: Finite-time self-triggered model predictive control of
discrete-time Markovian jump linear systems. Int. J. Robust Nonlinear Control 31(13), 6166–
6178 (2021)
9. Zhang, L., Yang, T., Colaneri, P.: Stability and stabilization of semi-Markovian jump linear
systems with exponentially modulated periodic distributions of sojourn time. IEEE Trans.
Autom. Control 62(6), 2870–2885 (2016)
10. Zhang, L., Leng, Y., Colaneri, P.: Stability and stabilization of discrete-time semi-Markovian
jump linear systems via semi-Markovian kernel approach. IEEE Trans. Autom. Control 61(2),
503–508 (2015)
11. Amato, F., Ariola, M.: Finite-time control of discrete-time linear system. IEEE Trans. Autom.
Control 50(5), 724–729 (2005)
Chapter 11
Conclusion
Abstract This chapter summarizes the book and suggests some possible research
directions related to the work of the book.
Transient behavior in a given time interval for discrete-time MJSs has been studied in this book with the aim of developing a less conservative analysis and design methodology for control engineering practice. The tools provided in the book can be applied to ecological systems, economic systems, power systems, and engineering designs subject to environmental disturbances. Furthermore, the book offers many methods and algorithms to solve the finite-time stability and finite-time stabilization problems of discrete-time MJSs, with simulation examples that illustrate the design procedures and confirm the results of the proposed methods.
Firstly, we address the finite-time stability and finite-time stabilization of different kinds of discrete-time MJSs. For the simplest discrete-time linear MJSs, Chap. 2 designs a less conservative finite-time controller by relaxing the strictly decreasing requirement on the system energy function; combining neural networks with robust control, the finite-time performance analysis and synthesis is then extended to discrete-time nonlinear MJSs by this intelligent method. Furthermore, for a more general class of hybrid systems, considering the influence of the transition probabilities (TPs) of the random jumps and of the average dwell time on the system performance, Chap. 3 studies the finite-time stability and finite-time stabilization of discrete-time switching MJSs under complex conditions, centering on factors such as time delays and unavailable states. Then, for discrete-time non-homogeneous MJSs, Chap. 4 utilizes the Gaussian probability density function (PDF) to describe the random distribution characteristics of the TPs. Using the mean and variance information of the Gaussian PDF, the expected value of the TPs is obtained, and the finite-time stability and finite-time stabilization are then investigated based on this expected value.
Then, combined with other control strategies, such as sliding mode control, pas-
sive control, and consensus control, Chaps. 5–7 present the finite-time sliding mode
control, finite-time passive control, and finite-time consensus control. Our target is
to ensure that the state trajectory of the system is restricted within a certain range of the equilibrium point while satisfying other performance indicators, such as robustness, dissipativity, and consistency between sub-modes. In particular, considering that systems may have different performance requirements in a specific frequency band or in multiple frequency bands, Chap. 8 studies the design of finite-frequency state feedback controllers for discrete-time MJSs over the finite-time interval. In fact, it is not enough to limit only the mean or variance of the states to the desired range. Therefore, a higher-order moment finite-time controller is designed in Chap. 9 to guarantee that not only the mean and variance of the states remain within the desired range in the fixed time interval, but also the higher-order moments of the states are limited to the given bound.
Next, different from the preceding finite-time control strategies proposed in Chaps. 2–9, where the control law is calculated offline, Chap. 10 adopts model predictive control to minimize the control inputs in a given time interval while satisfying the required transient performance for discrete-time MJSs through online rolling optimization. Finally, for semi-Markovian jump systems whose transition probabilities depend on the sojourn time, the finite-time performance under the model predictive control scheme is analyzed in the situation where the transition probability at each time depends on the history information of the elapsed switching sequences.
A future research direction is to introduce reinforcement learning control into the transient performance analysis of discrete-time MJSs when some key parameters of the system are unavailable. A policy-iteration learning method can be used to find the controller while learning the unknown parameters, so as to ensure the finite-time stabilization of the system.