
Lecture Notes in Control and Information Sciences 492

Xiaoli Luan
Shuping He
Fei Liu

Robust Control
for Discrete-Time
Markovian Jump
Systems in
the Finite-Time
Domain
Lecture Notes in Control and Information
Sciences

Volume 492

Series Editors
Frank Allgöwer, Institute for Systems Theory and Automatic Control,
Universität Stuttgart, Stuttgart, Germany
Manfred Morari, Department of Electrical and Systems Engineering,
University of Pennsylvania, Philadelphia, USA

Advisory Editors
P. Fleming, University of Sheffield, UK
P. Kokotovic, University of California, Santa Barbara, CA, USA
A. B. Kurzhanski, Moscow State University, Moscow, Russia
H. Kwakernaak, University of Twente, Enschede, The Netherlands
A. Rantzer, Lund Institute of Technology, Lund, Sweden
J. N. Tsitsiklis, MIT, Cambridge, MA, USA
This series reports new developments in the fields of control and information
sciences—quickly, informally and at a high level. The type of material considered
for publication includes:
1. Preliminary drafts of monographs and advanced textbooks
2. Lectures on a new field, or presenting a new angle on a classical field
3. Research reports
4. Reports of meetings, provided they are
(a) of exceptional interest and
(b) devoted to a specific topic. The timeliness of subject material is very
important.
Indexed by EI-Compendex, SCOPUS, Ulrich’s, MathSciNet, Current Index
to Statistics, Current Mathematical Publications, Mathematical Reviews,
IngentaConnect, MetaPress and Springerlink.
Xiaoli Luan · Shuping He · Fei Liu

Robust Control
for Discrete-Time Markovian
Jump Systems
in the Finite-Time Domain
Xiaoli Luan
Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education)
Institute of Automation
Jiangnan University
Wuxi, Jiangsu, China

Shuping He
Key Laboratory of Intelligent Computing and Signal Processing (Ministry of Education)
School of Electrical Engineering and Automation
Anhui University
Hefei, Anhui, China

Fei Liu
Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education)
Institute of Automation
Jiangnan University
Wuxi, Jiangsu, China

ISSN 0170-8643 ISSN 1610-7411 (electronic)


Lecture Notes in Control and Information Sciences
ISBN 978-3-031-22181-1 ISBN 978-3-031-22182-8 (eBook)
https://doi.org/10.1007/978-3-031-22182-8

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature
Switzerland AG 2023
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

In modern industry, there are many hybrid systems involving both continuous state evolution and discrete event-driven behavior, such as biochemical systems, communication networks, aerospace systems, manufacturing processes, and economic systems. These systems often encounter component failures, changes in the external environment, and changes in subsystem correlations, which cause random jumping or switching of the system structure and parameters. That is, the switching between modes is random but may conform to certain statistical laws. If it conforms to Markovian characteristics, the system is called a stochastic Markovian jump system (MJS). The dynamic behavior of MJSs consists of two parts: one is the discrete mode, described by a Markovian chain taking values in a finite integer set; the other is the continuously changing state, characterized by a differential (or difference) equation for each mode. In this sense, MJSs belong to a category of hybrid systems, and their particularity lies in that the discrete events and continuous variables can be expressed jointly by a stochastic differential or difference equation. This makes it possible to apply the state-space methods of modern control theory to study problems of MJSs.
On the other hand, control theory has long focused on the steady-state characteristics of systems in the infinite-time domain. However, for most engineering systems, the transient characteristics over a finite-time interval are of more practical interest. On the one hand, an asymptotically stable system does not necessarily have good transient characteristics; sometimes the system even exhibits violent oscillations and thus cannot meet production requirements. On the other hand, many practical production processes, such as biochemical reaction systems and economic systems, run only for a short time, and people are more interested in their transient performance over a given time domain. Therefore, this book introduces finite-time control theory into stochastic discrete-time MJSs, considers the transient characteristics of discrete-time MJSs over a finite-time interval, establishes their stability, boundedness, robustness, and other performances in a given time domain, and ensures that the state trajectory of the system stays within a certain range of the equilibrium point. In this way, the engineering conservativeness of the asymptotic stability of conventional control theory is reduced along the time dimension.


This book aims at developing less conservative analysis and design methodologies for discrete-time MJSs via finite-time control theory. It is suitable for final-year undergraduates, postgraduates, and academic researchers. Prerequisite knowledge includes linear algebra, linear system theory, matrix theory, and stochastic systems. It should be regarded as an advanced book.

Wuxi, Jiangsu, China
Xiaoli Luan
[email protected]

Hefei, Anhui, China
Shuping He
[email protected]

Wuxi, Jiangsu, China
Fei Liu
[email protected]
Acknowledgements

The authors would like to express their sincere appreciation to those who directly participated in various aspects of the research leading to this book. Special thanks go to Prof.
Pedro Albertos from the Universidad Politécnica de Valencia in Spain, Prof. Peng
Shi from the University of Adelaide in Australia, Prof. Shuping He from Anhui
University in China, Profs. Fei Liu, Jiwei Wen, and Shunyi Zhao from Jiangnan
University in China for their helpful suggestions, valuable comments, and great
support. The authors also thank many colleagues and students who have contributed
technical support and assistance throughout this research. In particular, we would
like to acknowledge the contributions of Wei Xue, Haiying Wan, Peng He, Ziheng
Zhou, Chang’an Han, Chengcheng Ren, Xiang Zhang, and Shuang Gao. Finally,
we are incredibly grateful to our families for their never-ending encouragement and
support whenever necessary.
This book was supported in part by the National Natural Science Foundation
of China (Nos. 61991402, 61991400, 61833007, 62073154, 62073001), Scien-
tific Research Cooperation and High-level Personnel Training Programs with New
Zealand (No. 1252011004200040), the University Synergy Innovation Program of
Anhui Province (No. GXXT-2021-010), Anhui Provincial Key Research and Devel-
opment Project (No. 2022i01020013), and Anhui University Quality Engineering
Project (No. 2022i01020013, 2020jyxm0102, 021jxtd017).

Contents

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Markovian Jump Systems (MJSs) . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Nonlinear MJSs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 Switching MJSs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.3 Non-homogenous MJSs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Finite-Time Stability and Control . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1 FTS for Deterministic Systems . . . . . . . . . . . . . . . . . . . . . . 6
1.2.2 FTS for Stochastic MJSs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3 Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2 Finite-Time Stability and Stabilization for Discrete-Time
Markovian Jump Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2 Preliminaries and Problem Formulation . . . . . . . . . . . . . . . . . . . . . . 22
2.3 Stochastic Finite-Time Stabilization for Linear MJSs . . . . . . . . . . 24
2.4 Stochastic Finite-Time Stabilization for Nonlinear MJSs . . . . . . . 26
2.5 Simulation Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3 Finite-Time Stability and Stabilization for Switching
Markovian Jump Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.2 Preliminaries and Problem Formulation . . . . . . . . . . . . . . . . . . . . . . 40
3.3 Stochastic Finite-Time H∞ Control . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.4 Observer-Based Finite-Time H∞ Control . . . . . . . . . . . . . . . . . . . . 51
3.5 Simulation Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66


4 Finite-Time Stability and Stabilization for Non-homogeneous


Markovian Jump Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.2 Preliminaries and Problem Formulation . . . . . . . . . . . . . . . . . . . . . . 70
4.3 Stochastic Finite-Time Stabilization . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.4 Stochastic Finite-Time H∞ Control . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.5 Observer-Based Finite-Time Control . . . . . . . . . . . . . . . . . . . . . . . . 79
4.6 Simulation Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5 Asynchronous Finite-Time Passive Control for Discrete-Time
Markovian Jump Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.2 Finite-Time Passive Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.3 Asynchronous Finite-Time Passive Control . . . . . . . . . . . . . . . . . . 99
5.4 Simulation Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
6 Finite-Time Sliding Mode Control for Discrete-Time
Markovian Jump Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.2 Finite-Time Sliding Mode Control . . . . . . . . . . . . . . . . . . . . . . . . . . 110
6.3 Asynchronous Finite-Time Sliding Mode Control . . . . . . . . . . . . . 115
6.4 Simulation Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
6.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
7 Finite-Frequency Control with Finite-Time Performance
for Markovian Jump Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
7.2 Finite-Time Stabilization with Finite-Frequency
Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
7.3 Finite-Time Multiple-Frequency Control Based
on Derandomization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
7.4 Simulation Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
7.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
8 Stochastic Finite-Time Consensualization for Markovian
Jump Networks with Disturbances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
8.2 Preliminaries and Problem Formulation . . . . . . . . . . . . . . . . . . . . . . 152
8.3 Finite-Time Consensualization with State Feedback . . . . . . . . . . . 154
8.4 Finite-Time Consensualization with Output Feedback . . . . . . . . . 157
8.5 Simulation Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160

8.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161


References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
9 Higher-Order Moment Finite-Time Stabilization
for Discrete-Time Markovian Jump Systems . . . . . . . . . . . . . . . . . . . . . 165
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
9.2 Preliminaries and Problem Formulation . . . . . . . . . . . . . . . . . . . . . . 166
9.3 Higher-Order Moment Stabilization in the Finite-Time
Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
9.4 Higher-Order Moment Finite-Time Stabilization
with Finite-Frequency Performance . . . . . . . . . . . . . . . . . . . . . . . . . 173
9.5 Simulation Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
9.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
10 Model Predictive Control for Markovian Jump Systems
in the Finite-Time Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
10.2 Stochastic Finite-Time MPC for MJSs . . . . . . . . . . . . . . . . . . . . . . . 186
10.3 Stochastic Finite-Time MPC for Semi-MJSs . . . . . . . . . . . . . . . . . 188
10.4 Simulation Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
10.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Chapter 1
Introduction

Abstract Due to the engineering practicability of finite-time theory, a great number of research results on finite-time analysis and synthesis have been achieved, especially for Markovian jump systems (MJSs). The MJS is a special kind of hybrid system whose particularity lies in that its discrete-event dynamics are random processes following Markovian chains. Owing to their strong engineering background and practical significance, MJSs can describe a wide range of industrial processes. This chapter mainly introduces the background and research status of MJSs (involving linear and nonlinear MJSs, switching MJSs, and non-homogeneous MJSs) and of finite-time performance for deterministic systems and stochastic MJSs.

1.1 Markovian Jump Systems (MJSs)

The Markovian jump system (MJS) was first proposed by Krasovskii in 1961 [1]. It was initially regarded as a special stochastic system but did not attract much attention. With the development of hybrid system theory, it was recognized that the MJS is actually a special kind of hybrid system, and it has since attracted extensive attention from researchers [2, 3]. The MJS assumes that the system dynamics switch among a set of known subsystem models, also called modes, and that the switching law between them obeys a finite-state Markovian process. The particularity of the MJS lies in that, although it belongs to the class of hybrid systems, its discrete-event dynamics are random processes that follow statistical laws. Thanks to the development of stochastic process theory, the dynamics of an MJS can be written in the form of a stochastic differential equation or stochastic difference equation. The analysis and synthesis of MJSs can then be studied using methods similar to those for continuous-variable dynamic systems.
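As a concrete illustration of this formulation, the following minimal sketch simulates a discrete-time MJS whose mode follows a finite-state Markov chain. The two system matrices and the transition probabilities are hypothetical values chosen for illustration, not examples taken from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-mode discrete-time MJS: x_{k+1} = A[r_k] x_k,
# where the discrete mode r_k follows a finite-state Markov chain.
A = [np.array([[0.8, 0.2], [0.0, 0.9]]),   # mode 0 dynamics
     np.array([[1.1, 0.0], [0.3, 0.5]])]   # mode 1 dynamics
P = np.array([[0.7, 0.3],                  # transition probability (TP) matrix;
              [0.4, 0.6]])                 # each row sums to one

x = np.array([1.0, -1.0])                  # continuous state
r = 0                                      # initial mode
for k in range(50):
    x = A[r] @ x                           # per-mode difference equation
    r = rng.choice(2, p=P[r])              # Markovian jump of the mode
print(x)
```

The same loop structure underlies simulation studies of MJSs: the continuous state evolves through whichever subsystem model the random mode currently selects.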
The MJS was proposed with a strong engineering background and practical significance. The model has proved to be an effective means of describing industrial processes, which are often subjected to random disturbances from internal component failures and changes in the external environment. Common examples include biochemical systems, power systems, flexible manufacturing

systems, aerospace systems, communication networks, robotics, energy systems, and economic systems. Although Krasovskii proposed the specific model of the MJS, he used it only as an example of mathematical analysis, without a practical application background. It was Sworder who first applied it to actual control problems [4]. In 1969, he discussed the optimal control problem of mixed linear systems with Markovian jump parameters from the perspective of the stochastic maximum principle. In 1971, Wonham proposed the dynamic programming formulation of stochastic control and successfully applied it to the optimal control of linear MJSs [5]. Subsequently, issues such as stochastic stability, controllability, observability, robust control, filtering, and fault detection for linear MJSs have been addressed [6–11].

1.1.1 Nonlinear MJSs

Compared with linear MJSs, research on nonlinear MJSs has progressed slowly [12–14]. This is mainly caused by the complexity of real MJSs, the complex behavior of nonlinearities, and the limitations of existing control theories and algorithms. For linear MJSs, the control design can be transformed into the solution of a corresponding Riccati equation or linear matrix inequality. For nonlinear MJSs, however, it is impossible to design a general controller guaranteeing the performance and stability of the systems. Controlling MJSs with nonlinearities therefore remains a difficult problem in the control field.
Although some scholars paid attention to this kind of system at an early stage, the development of the related theory has been slow. Aliyu and Boukas tried to use the Hamilton–Jacobi equation to give sufficient conditions for the stochastic stability of nonlinear MJSs [15]. Unfortunately, it is very difficult to obtain a global solution of the Hamilton–Jacobi equation by numerical or analytical methods, because the underlying mathematical difficulties remain unresolved. Therefore, many scholars have turned to nonlinear approximation methods (mainly fuzzy and neural network technologies) to solve the control problem of nonlinear MJSs [16–18].
The Takagi–Sugeno (T–S) fuzzy model is one of the effective methods for dealing with MJSs with nonlinearities. Based on IF-THEN fuzzy rules, it provides a local linear description or approximate representation of nonlinear MJSs. For the resulting T–S fuzzy MJSs, the hot issues focus on quantized feedback control, robust control, dissipative control, asynchronous dissipative control, asynchronous sliding mode control, adaptive synchronous control, robust filtering, asynchronous filtering, etc. [19–25]. As system complexity grows, the requirements for the safety and reliability of controlled systems increase day by day. The works [26–28] have investigated the fault detection and fault diagnosis of T–S fuzzy MJSs based on observer and filter design.
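The blending mechanism behind the T–S approach can be sketched as follows: each IF-THEN rule contributes a local linear model, and normalized membership functions weight them into a convex combination. The two local models and the membership functions below are hypothetical illustrations, not the fuzzy rules of any cited work.

```python
import numpy as np

# Hypothetical T-S fuzzy approximation: two IF-THEN rules blend two
# local linear models via normalized membership functions h_i(z).
A1 = np.array([[0.9, 0.1], [0.0, 0.8]])   # local model for rule 1
A2 = np.array([[0.5, 0.4], [0.2, 0.7]])   # local model for rule 2

def memberships(z):
    """Normalized memberships on the premise variable z (h1 + h2 = 1)."""
    h1 = float(np.clip(1.0 - abs(z), 0.0, 1.0))
    return h1, 1.0 - h1

def fuzzy_step(x, z):
    h1, h2 = memberships(z)
    A = h1 * A1 + h2 * A2                  # convex blend of local dynamics
    return A @ x

x_next = fuzzy_step(np.array([1.0, 0.5]), z=0.3)
print(x_next)                              # -> [0.875 0.445]
```

The blended matrix stays inside the convex hull of the local models, which is what makes linear-matrix-inequality tools applicable to the nonlinear system.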
Another effective technique for dealing with nonlinear MJSs is the neural network. One typical approach is to use neural networks to linearize the nonlinearities of MJSs. Through linear difference inclusions under the state-space representation, optimal control, robust control, output feedback control, scheduling control, H∞ filtering, and robust fault detection have been addressed in [29–35]. Sliding mode control and event-triggered fault detection were then investigated in [36, 37] by employing a multilayer neural network to substitute for the nonlinearities. Combined with the backstepping scheme, adaptive tracking control for nonlinear MJSs has been examined in [38, 39]. By adopting neural networks to realize online adaptive dynamic programming learning algorithms, optimal control, optimal tracking control, and online learning and control have been addressed in [40–42].

1.1.2 Switching MJSs

For a general linear switching system, the switching rules between subsystems are deterministic, and each subsystem can be described by linear differential equations (or difference equations). For a single switching subsystem, however, component failures and sudden environmental disturbances often occur, leading to jumps in the system structure and parameters. Therefore, a single subsystem is more suitably described by an MJS. Taking a voltage conversion circuit as an example, the required voltage is obtained by switching among different gear positions. At a certain gear position, the system may undergo random jumps due to the failure of electronic components. It is not appropriate to model such a system as a simple switching system: it contains not only deterministic switching signals but also randomly jumping modes. Such more complex systems are called switching MJSs, and their model was first proposed by Bolzern [43].
In [43], mean square stability was established through the time evolution of the second-order moment of the state, subject to constraints on the dwell time between switching instants. Exponential almost sure stability for switching signals satisfying an average dwell time restriction was then investigated by Bolzern in [44], where a trade-off was found between the average dwell time and the ratio of the residence times. Similarly, almost sure stability for linear switching MJSs in continuous time was addressed in [45, 46] by applying the Lyapunov function approach. On the basis of stability analysis, exponential l2 − l∞ control, H∞ control, resilient dynamic output feedback control, mean square stabilization, and almost sure stabilization have been studied [47–51]. In recent years, the analysis and synthesis results for switching MJSs have been extended to positive systems [52, 53], in which the state variables take only nonnegative values.
The development of switching MJSs enriches the research field of hybrid systems and provides a more general system modeling method. When random jumping is not considered, the system reduces to a general switching system; when switching rules are not considered, each subsystem is a general MJS. The coupling of switching signals and jumping modes brings great challenges to the analysis and synthesis of such systems, and many tough problems remain to be solved.

1.1.3 Non-homogeneous MJSs

All the aforementioned results on MJSs are limited to systems with fixed jumping transition probabilities (TPs). In engineering practice, the TP matrix often changes with time; that is, the non-homogeneous Markovian process is universal. Therefore, the theory of non-homogeneous Markovian processes has become a research hotspot among experts and scholars [54, 55]. In 2011, Aberkane explored non-homogeneous discrete MJSs and proposed conclusions related to controller design [56]. In that study, the time-varying TPs are described in the form of a polytopic description with fixed vertices. In the same way, robust control, model predictive control, output feedback control, and H∞ filtering for non-homogeneous MJSs have been proposed in [57–61].
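The polytopic description can be sketched as a convex combination of fixed vertex TP matrices, with time-varying weights selecting a point inside the polytope. The vertex matrices and weights below are hypothetical values for illustration.

```python
import numpy as np

# Hypothetical polytopic time-varying TP matrix:
# P(k) = sum_j alpha_j(k) * P_vertices[j], alpha_j(k) >= 0, sum_j alpha_j(k) = 1.
P_vertices = [np.array([[0.9, 0.1], [0.2, 0.8]]),
              np.array([[0.6, 0.4], [0.5, 0.5]])]

def tp_matrix(alpha):
    """Convex combination of the fixed vertices: the TP matrix at time k."""
    assert abs(sum(alpha) - 1.0) < 1e-12 and all(a >= 0 for a in alpha)
    return sum(a * P for a, P in zip(alpha, P_vertices))

P_k = tp_matrix([0.25, 0.75])   # weights alpha(k) at some instant k
print(P_k)
```

Because each vertex is row-stochastic and the weights are convex, every P(k) generated this way is again a valid TP matrix, which is what lets vertex-based conditions cover the whole time-varying family.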
Although the above results on non-homogeneous MJSs cover control, filtering, and stability analysis, the time-varying TPs are all described in the form of a polytopic description with fixed vertices. They account for the fact that the TPs change but not for how they change. Among the existing research results, there are two ways of modeling the change of the TPs. The first is the periodic non-homogeneous MJS, whose TPs change cyclically according to a period, and whose system parameters in each mode also change cyclically. The second is the non-homogeneous MJS governed by a high-order Markovian chain, which is introduced to express the change of the TPs.
For periodic non-homogeneous MJSs, observability and detectability, l2 − l∞ control, H∞ filtering, and strict dissipative filtering have been presented in [62–65]. For non-homogeneous MJSs governed by high-order Markovian chains, Zhang discussed the particularity of piecewise homogeneous MJSs and dealt with their H∞ estimation problem [66]. In that paper, a high-order Markovian chain is used to indicate that the change of the TP matrix between segments is itself a random jump governed by probabilities, which opened a new way of thinking for later studies on the change of TPs. In 2012, Wu discussed the stability of piecewise homogeneous MJSs with time delays [67]. The works [68, 69] proposed H∞ control and filtering for non-homogeneous MJSs with TPs following a Gaussian distribution.

1.2 Finite-Time Stability and Control

Modern control theory covers a wide field and involves many methods, but stability analysis is the core and basis of almost all of them, especially Lyapunov stability and asymptotic stability. Lyapunov stability, as a sufficient condition, is simple and intuitive, but it focuses on system behavior over an infinite time domain, which inevitably brings conservatism from the perspective of engineering practice. For most engineering systems, the transient performance within a certain time is of more practical interest. On the one hand, an asymptotically stable system does not necessarily have good transient characteristics; sometimes the system even exhibits violent oscillations and thus cannot meet production requirements. On the other hand, many practical production processes, such as biochemical reaction systems and economic systems, run only for a short time, and people are more interested in their transient performance in a given time domain.
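The gap between asymptotic stability and transient behavior can be seen in a small numerical sketch: the matrix below is Schur stable (both eigenvalues equal 0.5), yet the state norm overshoots far above its initial value before decaying. The numbers are illustrative only.

```python
import numpy as np

# Schur-stable matrix (spectral radius 0.5 < 1) with a large off-diagonal
# coupling term that produces a big transient peak before decay.
A = np.array([[0.5, 10.0],
              [0.0,  0.5]])
x = np.array([0.0, 1.0])       # initial state with unit norm

norms = []
for k in range(30):
    norms.append(float(np.linalg.norm(x)))
    x = A @ x                  # asymptotically stable iteration
print(max(norms), norms[-1])   # large peak, then decay toward zero
```

Even though every trajectory eventually converges, the peak of roughly ten times the initial norm would violate any finite-time bound of comparable size, which is exactly the transient behavior that finite-time stability constrains.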
To study the transient performance of systems, Kamenkov first proposed the concept of finite-time stability (FTS) in the Russian journal PMM in 1953 [70]. Similar articles soon followed in the same journal [71], and the early articles on FTS, dealing with both linear and nonlinear systems, were mostly written by Russian authors. In 1961, articles appeared on the FTS of linear time-varying systems, such as "Short-time stability in linear time-varying systems" by Dorato [72]. The idea of short-time stability is essentially the same as FTS, but the term FTS became the more common one later. Also in 1961, LaSalle and Lefschetz wrote "Stability by Liapunov's Direct Method with Applications," in which the concept of "practical stability" was proposed [73]. Both concepts require boundedness over a finite-time domain, but the lengths of the time intervals considered in the two lines of study differ slightly.
In 1965, Weiss and Infante discussed the FTS analysis of nonlinear systems in depth and introduced the concepts of quasi-contractive stability and convergence stability over a certain finite-time interval [74]. Shortly thereafter, Weiss and Infante further studied the FTS of nonlinear systems with perturbations, which led to the new concept of finite-time bounded-input bounded-output (BIBO) stability [75]. This concept evolved into what is now known as "finite-time bounded stability." In 1969, Michel and Wu extended FTS from continuous-time to discrete-time systems on the basis of many existing results [76]. In the decade from 1965 to 1975, a large number of articles on FTS appeared, but all of them were limited to stability analysis and did not give control design methods [77–79].
In 1969, Garrard studied the finite-time control method for nonlinear systems [80].
In 1972, Van Mellaert and Dorato extended finite-time control to stochastic systems
[81]. During this period, San Filippo and Dorato studied robust control design for linear systems based on linear quadratic theory and FTS, and applied the results to aircraft control [82]. Grujic applied the concept of FTS to the controller design of adaptive systems [83]. The design techniques proposed between 1969 and 1976 all required complex calculations. In practice, the operating conditions of a system are never ideal, and the system is often affected by external disturbances and other factors during operation. To better address system stability under external disturbances, the Italian control scientist Amato introduced the notions of "finite-time stability" and "finite-time boundedness," the latter explicitly accounting for bounded external disturbances.
In view of the importance of FTS in practical applications, more and more
researchers have devoted themselves to the study of finite-time control problems
in recent years [84–88]. FTS is a stability concept, distinct from asymptotic stability, for studying the transient performance of a system. Roughly speaking, FTS requires that, for all initial conditions within a given bound, the state norm does not exceed a prescribed threshold over a given finite-time interval. FTS thus has three elements: a time interval, a bound on the initial conditions, and a bound on the system states. To judge whether a system is finite-time stable, these three quantities are first specified according to the requirements, and then one checks whether the system state remains within the prescribed bound throughout the time interval. Accordingly, FTS and asymptotic stability can be distinguished with respect to these three factors:
(1) FTS examines the performance of the system in a specific time interval, and
asymptotic stability examines the performance of the system in an infinite-time
interval.
(2) FTS is for initial conditions within a given bound, and asymptotic stability is for
arbitrary initial conditions.
(3) FTS requires that the system state trajectory keep within predefined bounds,
while asymptotic stability requires that the system state converge asymptotically
(no specific bounds are required for the state trajectory).
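These three elements can be collected into a standard formal statement. As an illustration (written here for an unforced discrete-time linear system with weighting matrix R; the bounds c1 < c2, the horizon N, and R are the given data, and the stochastic versions used later in the book additionally take expectations):

```latex
% FTS of x_{k+1} = A x_k with respect to (c_1, c_2, N, R), where R > 0 and c_2 > c_1 > 0:
x_0^{\top} R\, x_0 \le c_1
\;\Longrightarrow\;
x_k^{\top} R\, x_k < c_2 ,
\qquad k \in \{1, 2, \ldots, N\} .
```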
Thus, the two kinds of stability are independent of each other. A system may be finite-time stable, yet its state may diverge beyond the given time interval, so that it is not asymptotically stable. Conversely, a system may be asymptotically stable while its state exceeds the given region during a certain period of time, so that it does not satisfy the FTS requirement. In general, asymptotic stability concerns the asymptotic convergence of the system in the infinite-time domain, whereas FTS concerns the transient performance of the system in a specific time interval.
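The independence of the two notions can also be checked numerically. The sketch below is illustrative only (the matrix, bounds, and helper name `is_fts` are our own choices, not from the book): it samples initial states on the boundary of the initial-condition set with R = I and simulates forward. The system has spectral radius 1.05 > 1, so it is not asymptotically stable, yet it is finite-time stable over a short horizon.

```python
import numpy as np

# Illustrative 2-state discrete-time system with spectral radius 1.05 > 1,
# hence NOT asymptotically stable.
A = np.array([[1.05, 0.0],
              [0.0,  0.5]])

def is_fts(A, c1, c2, N, n_samples=500, seed=0):
    """Empirical FTS check w.r.t. (c1, c2, N, R=I): every sampled trajectory
    with ||x0||^2 <= c1 must satisfy ||xk||^2 < c2 for k = 1..N."""
    rng = np.random.default_rng(seed)
    for _ in range(n_samples):
        x = rng.standard_normal(A.shape[0])
        x *= np.sqrt(c1) / np.linalg.norm(x)  # worst case: boundary ||x0||^2 = c1
        for _ in range(N):
            x = A @ x
            if x @ x >= c2:
                return False
    return True

print(is_fts(A, c1=1.0, c2=4.0, N=10))  # True: FTS over 10 steps
print(is_fts(A, c1=1.0, c2=4.0, N=40))  # False: the state eventually leaves the bound
```

The book's chapters develop Lyapunov-based certificates rather than simulation; this numerical check merely illustrates the definition.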
1.2.1 FTS for Deterministic Systems
In recent years, with the development of linear matrix inequality (LMI) theory, the
problems related to FTS have attracted renewed attention. In 1997, Dorato presented a robust finite-time controller design for linear systems at the 36th IEEE Conference on Decision and Control (CDC) [89]. In that paper, a state feedback control law for the finite-time stabilization of linear systems was obtained via LMIs for the first time, thereby introducing LMI theory into FTS analysis and controller design for linear systems. Subsequently, Amato presented a series of LMI-based conditions for FTS (or finite-time boundedness) analysis and finite-time controller design for uncertain linear continuous-time systems [90, 91].
In 2005, Amato extended the above FTS and finite-time control problems for
linear continuous-time systems to linear discrete-time systems [92] and addressed
the design conditions for the finite-time stabilizing state feedback controller and
output feedback controller, respectively [93]. In subsequent studies, Amato further
extended the results of FTS to more general systems, and at the same time, other
scholars also began to study FTS problems [94–100]. Traditional asymptotic stability requires the corresponding Lyapunov energy function to decrease strictly, whereas the solutions mentioned above relax this requirement by allowing the function to increase within a certain range, thus transforming the FTS problem into a series of LMI feasibility problems. The FTS analysis and synthesis conditions given in these works are therefore easy to verify, and the difference between FTS and traditional asymptotic stability can be clearly seen.
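To make the relaxation concrete: for the unforced discrete-time case with R = I (a simplified single-mode version of conditions of the Amato type; the book's own results add jumps and disturbances), one seeks a scalar γ ≥ 1 and a matrix P > 0 such that

```latex
A^{\top} P A - \gamma P \le 0 ,
\qquad
\frac{\lambda_{\max}(P)}{\lambda_{\min}(P)}\, \gamma^{N} c_1 < c_2 .
```

With V(x_k) = x_k^⊤ P x_k, the first inequality gives V(x_k) ≤ γ^k V(x_0), so ||x_k||² ≤ γ^N (λ_max(P)/λ_min(P)) c_1 < c_2 for all k ≤ N. Since γ ≥ 1 is allowed, V may grow along trajectories, and for fixed γ the first condition is an LMI in P.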
Differential linear matrix inequalities (DLMIs) are another standard tool for analyzing the FTS problem [101]. Based on DLMIs, the design of a finite-time bounded
dynamic output feedback controller for time-varying linear systems was studied
[102]. In 2011, Amato studied the FTS problem of impulsive dynamic linear sys-
tems and the robust FTS problem of impulsive dynamic linear systems with norm-
bounded uncertainty [103–105]. In 2013, Amato gave the necessary and sufficient
conditions for the FTS of impulsive dynamic linear systems [106]. Compared with
LMIs, DLMI-based methods are better suited to linear time-varying systems and less conservative, but they are computationally demanding and difficult to generalize to other types of complex systems. In addition, DLMI-based analysis can also be used to study the input–output finite-time stability of linear systems [107–109].
The above results all concern the FTS of linear systems. For the FTS of nonlinear systems, the following two approaches are generally used:
(1) Work directly with nonlinear systems theory. This approach imposes no restrictions on the nonlinearity of the system, so it is universal, but the resulting conditions are difficult to compute. Some early works on the FTS of nonlinear systems adopted this approach [74, 110–113]. In 2004, Mastellone studied the FTS of nonlinear discrete stochastic systems utilizing an upper bound on the exit probability and correlation function, and further gave a design method for the finite-time stabilizing controller [114]. In 2009, Yang carried out FTS analysis and synthesis for nonlinear stochastic systems with impulses based on a Lyapunov-like function method [115].
(2) Use methods similar to those for linear systems above. This approach places special restrictions on the nonlinearity of the system, and the results are generally expressed as the feasibility of LMIs (or DLMIs) [116–120]. For example, in [121–123], the robust finite-time control problem was studied for a class of nonlinear systems with norm-bounded parameter uncertainties and external disturbances, where the nonlinearities are first approximated by a multilayer feedforward neural network model. Elbsat [124] studied the finite-time state feedback control problem for a class of discrete-time nonlinear systems with conic-type nonlinearities and external disturbance inputs. A robust, resilient linear state feedback controller is designed via LMI techniques to ensure that the closed-loop system is finite-time stable for all nonlinearities lying in a hypersphere with an uncertain center, all admissible external disturbances, and all controller gain perturbations within a prescribed bound.
1.2.2 FTS for Stochastic MJSs
With the development of MJS theory, finite-time analysis and synthesis problems have been widely studied for MJSs. The existing results fall mainly into three categories. The first considers the finite-time analysis and synthesis of MJSs under various complicating factors, such as time-delays, uncertainties, time variation, external disturbances, and nonlinearities [125–129]. In the study of time-delay MJSs, two kinds of sufficient conditions are of interest. One type is independent of the size of the delay and is called delay-independent; for systems with small delays, such conditions are strongly conservative. Hence, the other type of FTS condition, which involves the size of the time-delay and is called delay-dependent, has attracted widespread attention. Since delay-dependent conditions regulate the system better and are less conservative than delay-independent ones, researchers have paid more attention to delay-dependent FTS analysis and synthesis, employing techniques such as model transformation, delay partitioning, parameterized model transformation, and free-weighting matrices [130–134].
Meanwhile, to cope with various uncertainties, robust control theory is used to shape the finite-time performance of MJSs. One line of work is robust FTS analysis; the other is finite-time controller design, which regulates the system so that the closed-loop system is robustly finite-time stabilized over the given time interval. Common tools include the Riccati equation, linear matrix inequalities, and robust H∞ control [134–139]. The advantage of H∞ control is that the H∞ norm of the transfer function describes the maximum gain from the input energy to the output energy; by solving an optimization problem, the influence of disturbances with finite power spectra can be minimized. There are also many other FTS analysis and synthesis results for complex MJSs, such as 2D MJSs, singular MJSs, nonlinear MJSs, positive MJSs, neutral MJSs, switching MJSs, and distributed-parameter MJSs [140–145].
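For reference, the H∞ norm invoked here is the worst-case energy gain of a stable transfer matrix G(z) from the disturbance w to the output z (the standard discrete-time definition, not specific to this book):

```latex
\| G \|_{\infty}
= \sup_{\omega \in [0,\, 2\pi)} \sigma_{\max}\!\big( G(e^{\mathrm{j}\omega}) \big)
= \sup_{0 \neq w \in \ell_2} \frac{\| z \|_2}{\| w \|_2} ,
```

so imposing ||G||_∞ < γ bounds the output energy by γ times the disturbance energy.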
The second category of FTS results for MJSs concerns variation of the TPs. Initial studies assumed time-invariant TPs with all elements of the TP matrix known in advance. In practical engineering applications, however, it is not easy to obtain all elements of the TP matrix accurately, so some researchers have studied the finite-time performance of MJSs with partially known TPs [146, 147]. In [148, 149], where the TPs are unknown but bounded, convex polytopes or boundedness conditions are used to describe the variation of the TPs, and robust FTS analysis and synthesis methods for such systems were studied. Since TPs often change with time, non-homogeneous Markovian processes commonly arise in practical engineering systems. As a result, the FTS analysis and synthesis of non-homogeneous MJSs have also been extensively investigated [150, 151].
Since the jumping time of an MJS follows an exponential distribution, its TP matrix is time-invariant, which limits the applicability of MJSs. By comparison, semi-MJSs are characterized by a fixed TP matrix together with a dwell-time probability density function matrix. Because the restriction on the probability distribution function is relaxed, semi-MJSs enjoy a much wider range of applications, so studying the FTS analysis and synthesis of semi-MJSs is of great theoretical value and practical significance. By the method of supplementary variables and model transformation, asynchronous event-triggered sliding mode control, event-triggered guaranteed cost control, memory sampled-data control, observer-based sliding mode control, and H∞ filtering have been addressed for semi-MJSs over a finite-time interval [152–156].
The third category of the research results is the finite-time performance study
combined with other control strategies. For example, the finite-time sliding mode
control methods for MJSs were presented in [157–160]. As a high-performance
robust control strategy, sliding mode control has the advantages of insensitivity to
parameter perturbation, good transient performance, fast response speed, and strong
robustness. It is a typical variable structure control: the designed sliding mode controller drives the state trajectory of the closed-loop system onto a designed sliding surface, and once the trajectory reaches the sliding surface and maintains its motion there, it is insensitive to external factors. Therefore, sliding mode control, as a common design method, has been applied to the finite-time performance study of MJSs.
1.3 Outline
To help readers grasp the structure of the book, the main research content is shown in Fig. 1.1. The outline of the book is as follows.
This chapter introduces the research background, motivations, and research prob-
lems for finite-time analysis and synthesis of MJSs, including FTS for typical kinds of
MJSs (involving linear and nonlinear MJSs, switching MJSs, and non-homogeneous
MJSs) and FTS for MJSs combined with other control strategies (involving sliding
mode control, dissipative control, and non-periodic triggered control).
Chapter 2 investigates the stochastic FTS, stochastic finite-time boundedness, and
stochastic finite-time stabilization for discrete-time linear and nonlinear MJSs by
relaxing the strict decrease of the Lyapunov energy function. For linear MJSs, the finite-time control design can be transformed into the solution of a corresponding Riccati equation or linear matrix inequality. For nonlinear MJSs, however, it is impossible to design a general controller satisfying the transient performance requirements of the systems. To deal with the nonlinearities of MJSs, a neural network is utilized to approximate them by linear difference inclusions. The designed controller keeps the state trajectories of the systems within the pre-specified bounds over the given time interval, rather than asymptotically converging to the equilibrium point, despite the approximation error and external disturbance.
Chapter 3 extends the results of stochastic FTS, stochastic finite-time bounded-
ness, and stochastic finite-time stabilization to switching MJSs with time-delay. Due
to the coupling of switching signals and jumping modes, great challenges arise in the finite-time analysis and synthesis of the system.

Fig. 1.1 The main research content

To analyze the transient performance of switching MJSs, this chapter investigates the finite-time boundedness, finite-time H∞ stabilization, and observer-based finite-time H∞ control for a class of stochastic jumping systems governed by deterministic switching signals. Considering the effect of the average dwell time on the finite-time performance, some
results on the stochastic finite-time boundedness and stochastic finite-time stabiliza-
tion with H∞ disturbance attenuation level are given. The relationship among three kinds of time scales, namely the time-delay, the average dwell time, and the finite-time interval, is derived by means of the average dwell time constraint condition.
Chapter 4 addresses the stochastic finite-time stabilization, stochastic finite-time
H∞ control, and the observer-based state feedback finite-time control problems for
non-homogeneous MJSs by considering the random variation of TPs. Different from
the results presented in Chaps. 2 and 3, this chapter focuses on the random change of
TPs. A Gaussian transition probability density function (PDF) is utilized to describe the random time-varying property of the TPs, with the variance of the Gaussian PDF quantifying their uncertainty. A variation-dependent controller is then devised to guarantee finite-time stabilization with a prescribed H∞ disturbance attenuation level for discrete-time MJSs with randomly time-varying transition probabilities.
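As a rough illustration of the idea (a toy construction of our own, not the book's algorithm: the nominal row, the clipping, and the renormalization are all assumptions made for this sketch), a randomly time-varying TP row can be generated by perturbing a nominal row with zero-mean Gaussian noise whose variance quantifies the TP uncertainty, then projecting back onto the probability simplex:

```python
import numpy as np

rng = np.random.default_rng(1)

pi_nominal = np.array([0.6, 0.3, 0.1])  # nominal TP row of a 3-mode chain (toy values)
sigma = 0.05                            # std. dev. of the Gaussian PDF: quantifies TP uncertainty

def sample_tp_row(pi_nom, sigma, rng):
    """One random time-varying TP row: Gaussian perturbation of the nominal row,
    clipped to [0, 1] and renormalized so it remains a probability distribution."""
    row = pi_nom + sigma * rng.standard_normal(pi_nom.size)
    row = np.clip(row, 0.0, 1.0)
    return row / row.sum()

rows = np.array([sample_tp_row(pi_nominal, sigma, rng) for _ in range(2000)])
print(np.allclose(rows.sum(axis=1), 1.0))   # every sampled row is a valid TP row
print(np.round(rows.mean(axis=0), 2))       # sample mean stays near the nominal row
```

Larger sigma spreads the sampled rows further from the nominal row, which is how the Gaussian variance plays the role of a TP uncertainty measure in this sketch.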
Chapter 5 focuses on the stochastic finite-time performance analysis and synthesis
for MJSs from the perspective of system internal stability and energy relationship.
Passivity characterizes how the energy of a system changes. If the controller can make the system energy function decay at the desired rate, the control goal can be achieved. Therefore, this chapter studies finite-time passive control
for MJSs. Firstly, a finite-time passive controller is proposed to guarantee that the
1.3 Outline 11

closed-loop system is finite-time bounded and meets the desired passive performance
requirement simultaneously under ideal conditions. Then, considering the more practical situation in which the controller's mode is not synchronized with the system mode, an asynchronous finite-time passive controller is designed for the more general hidden Markovian jump systems.
Chapter 6 combines the finite-time performance with sliding mode control to
achieve better performance indicators for discrete-time MJSs. As a high-performance
robust control strategy, sliding mode control has the advantages of insensitivity to
parameter perturbation, good transient performance, fast response speed, and strong
robustness. Therefore, this chapter focuses on the finite-time sliding mode control
problem for MJSs with uncertainties. Firstly, the sliding mode function and slid-
ing mode controller are designed such that the closed-loop discrete-time MJSs are
stochastically finite-time stabilizable and fulfill the given H∞ performance index. Moreover, an appropriate asynchronous sliding mode controller is constructed, and conditions on the coefficient parameters are given and proved which guarantee that the closed-loop discrete-time MJSs can be driven onto the sliding surface. The transient performance of the discrete-time MJSs during the reaching phase and the sliding motion phase is investigated, respectively.
Chapters 2–6 consider the transient performance of MJSs over the entire frequency range, which leads to over-design and conservativeness. To reduce the conservatism of controller design for MJSs from the perspectives of both the time domain and the frequency domain, Chap. 7 presents finite-time multiple-frequency control for MJSs: by introducing frequency information into the controller design, multiple-frequency control with finite-time performance is analyzed in both the time domain and the frequency domain. Moreover, to overcome the effect of stochastic jumping among different modes on system performance, a derandomization method is introduced into the controller design, transforming the original stochastic multimodal systems into deterministic single-mode ones.
Chapter 8 concerns not only the transient behavior of MJSs in the finite-time
domain but also the consistent state behavior of each subsystem. Therefore, the finite-
time consensus protocol design approach for network-connected systems with ran-
dom Markovian jump topologies, communication delays and external disturbances is
analyzed in this chapter. By relaxing the condition that the disagreement dynamics asymptotically converge to zero, the finite-time consensualization protocol is employed to ensure that the disagreement dynamics of the interconnected networks are confined within the prescribed bound over the fixed time interval. By taking advantage of certain features of the Laplacian matrix in real Jordan form, a new model transformation method is proposed, which makes the designed control protocol more general.
Chapter 9 proposes the higher-order moment stabilization in the finite-time
domain for MJSs to guarantee that not only the mean and variance of the states remain
within the desired range in the fixed time interval, but also the higher-order moment
of the states is limited to the given bound. Firstly, the derandomization method is
utilized to transform the multimode stochastic jumping systems into single-mode
deterministic systems. Then, with the help of the cumulant generating function in
statistical theory, the higher-order moment components of the states are obtained by first-order Taylor expansion. Compared with existing control methods, higher-order moment stabilization improves control performance by taking the higher-order moment information of the state into consideration.
Chapter 10 adopts model predictive control to optimize the finite-time perfor-
mance of MJSs. Firstly, by means of online rolling optimization, the minimum
energy consumption is realized, and the required transient performance is satis-
fied simultaneously under the assumption that the jumping time of MJSs follows an
exponential distribution. Then, the proposed results are extended to semi-MJSs. The
finite-time performance under the model predictive control scheme is analyzed in the situation where the TP matrix at each time instant depends on the history of the elapsed switching sequence. Compared with MJSs, semi-MJSs are characterized by a fixed TP matrix together with a dwell-time probability density function matrix. Because the restriction on the probability distribution function is relaxed, finite-time model predictive control for semi-MJSs has a much wider range of applications.
Chapter 11 sums up the results of the book and discusses the possible research
directions in future work.
70. Kamenkov, G.: On stability of motion over a finite interval of time. J. Appl. Math. Mech. 17,
529–540 (1953)
71. Lebedev, A.: On stability of motion during a given interval of time. J. Appl. Math. Mech. 18,
139–148 (1954)
72. Dorato, P.: Short-Time Stability in Linear Time-Varying Systems. Polytechnic Institute of
Brooklyn Publishing, Brooklyn, New York (1961)
73. Liasalle, J., Lefechetz, S.: Stability by Lyapunov’s Direct Methods: With Applications. Aca-
demic Press Publishing, New York (1961)
74. Weiss, L., Infante, E.F.: On the stability of systems defined over a finite time interval. Natl.
Acad. Sci. 54(1), 44–48 (1965)
75. Weiss, L., Infante, E.: Finite time stability under perturbing forces and on product spaces.
IEEE Trans. Autom. Control 12(1), 54–59 (1967)
76. Michel, A.N., Wu, S.H.: Stability of discrete systems over a finite interval of time. Int. J.
Control 9(6), 679–693 (1969)
77. Weiss, L.: On uniform and nonuniform finite-time stability. IEEE Trans. Autom. Control
14(3), 313–314 (1969)
78. Bhat, S.P., Bernstein, D.S.: Finite-time stability of continuous autonomous systems. SIAM J.
Control Opim. 38(3), 751–766 (2000)
79. Chen, W., Jiao, L.C.: Finite-time stability theorem of stochastic nonlinear systems. Automatica
46(12), 2105–2108 (2010)
80. Garrard, W.L., McClamroch, N.H., Clark, L.G.: An approach to suboptimal feedback control
of nonlinear systems. Int. J. Control 5(5), 425–435 (1967)
81. Van Mellaert, L., Dorato, P.: Nurmerical solution of an optimal control problem with a prob-
ability vriterion. IEEE Trans. Autom. Control 17(4), 543–546 (1972)
82. San Filippo, F.A., Dorato, P.: Short-time prarmeter optimization with flight control application.
Automatica 10(4), 425–430 (1974)
16 1 Introduction

83. Gmjic, W.L.: Finite time stability in control system synthesis. In: Proceedings of the 4th IFAC
Congress, Warsaw, Poland, pp. 21–31 (1969)
84. Haimo, V.T.: Finite-time control and optimization. SIAM J Control Opim. 24(4), 760–770
(1986)
85. Liu, L., Sun, J.: Finite-time stabilization of linear systems via impulsive control. Int. J. Control
8(6), 905–909 (2008)
86. Germain, G., Sophie, T., Jacques, B.: Finite-time stabilization of linear time-varying contin-
uous systems. IEEE Trans. Autom. Control 4(2), 364–369 (2009)
87. Moulay, E., Perruquetti, W.: Finite time stability and stabilization of a class of continuous
systems. J. Math. Anal. Appl. 323(2), 1430–1443 (2006)
88. Abdallah, C.T., Amato, F., Ariola, M.: Statistical learning methods in linear algebra and
control problems: the examples of finite-time control of uncertain linear systems. Linear
Algebra Appl. 351, 11–26 (2002)
89. Dorato, P., Famularo, D.: Robust finite-time stability design via linear matrix inequalities. In:
Proceedings of the 36th IEEE Conference on Desicion and Control, San Diego, pp. 1305–1306
(1997)
90. Amato, F., Ariola, M., Dorato, P.: Robust finite-time stabilization of linear systems depending
on parametric uncertainties. In: Proceedings of the 37th IEEE Conference on Decision and
Control, Tampa, Florida, pp. 1207–1208 (1998)
91. Amato, F., Ariola, M., Dorato, P.: Finite-time control of linear systems subject to parametric
uncertainties and disturbances. Automatica 37(9), 1459–1463 (2001)
92. Amato, F., Ariola, M.: Finite-time control of discrete-time linear system. IEEE Trans. Autom.
Control 50(5), 724–729 (2005)
93. Amato, F., Ariola, M., Cosentino, C.: Finite-time stabilization via dynamic output feedback.
Automatica 42(2), 337–342 (2006)
94. Hong, Y.G., Huang, J., Yu, Y.: On an output feedback finite-time stabilization problem. IEEE
Trans. Autom. Control 46(2), 305–309 (2001)
95. Yu, S., Yu, X., Shirinzadeh, B.: Continuous finite-time control for robotic manipulators with
terminal sliding mode. Automatica 41(11), 1957–1964 (2005)
96. Huang, X., Lin, W., Yang, B.: Global finite-time stabilization of a class of uncertain nonlinear
systems. Automatica 41(5), 881–888 (2005)
97. Feng, J.E., Wu, Z., Sun, J.B.: Finite-time control of linear singular systems with parametric
uncertainties and disturbances. Acta Automatica Sinica 31(4), 634–637 (2005)
98. Moulay, E., Dambrine, M., Yeganefax, N.: Finite time stability and stabilization of time-delay
systems. Syst. Control Lett. 57(7), 561–566 (2008)
99. Zuo, Z., Li, H., Wang, Y.: New criterion for finite-time stability of linear discrete-time systems
with time-varying delay. J. Frankl. Inst. 350(9), 2745–2756 (2013)
100. Stojanovic, S.B., Debeljkovic, D.L., Antic, D.S.: Robust finite-lime stability and stabilization
of linear uncertain time-delay systems. Asian J. Control 15(5), 1548–1554 (2013)
101. Amato, F., Ariola, M., Cosentino, C.: Finite-time control of discrete-time linear systems:
analysis and design conditions. Automatica 46(5), 919–924 (2010)
102. Amato, F., Ariola, M., Cosentino, C.: Necessary and sufficient conditions for finite-time
stability of linear systems. In: Proceedings of the 2003 American Control Conference, Denver,
Colorado, pp. 4452–4456 (2003)
103. Amato, F., Ariola, M., Cosentino, C.: Finite-time stability of linear time-varying systems:
analysis and controller design. IEEE Trans. Autom. Control 55(4), 1003–1008 (2009)
104. Amato, F., Ambrosino, R., Ariola, M.: Robust finite-time stability of impulsive dynamical
linear systems subject to norm-bounded uncertainties. Int. J. Robust Nonlinear Control 21(10),
1080–1092 (2011)
105. Amato, F., Ariola, M., Cosentino, C.: Finite-time stabilization of impulsive dynamical linear
systems. Nonlinear Anal. Hybrid Syst. 5(1), 89–101 (2011)
106. Amato, F., Tommasig, D., Pironti, A.: Necessary and sufficient conditions for finite-time
stability of impulsive dynamical linear systems. Automatica 49(8), 2546–2550 (2013)
References 17

107. Amato, F., Ambrosino, R., Cosentino, C.: Input-output finite time stabilization of linear sys-
tems. Automatica 46(9), 1558–1562 (2010)
108. Amato, F., Ambrosino, R., Cosentino, C.: Input-output finite-time stability of linear systems.
In: Proceedings of the 17th Mediterranean Conference on Control and Automation, Makedo-
nia, Palace, Thessaloniki, Greece, pp. 342–346 (2009)
109. Amato, F., Carannante, G., De Tommasi, G.: Input-output finite-time stabilization of a class
of hybrid systems via static output feedback. Int. J. Control 84(6), 1055–1066 (2011)
110. Weiss, L.: Converse theorems for finite time stability. SIAM J. Appl. Math. 16(6), 1319–1324
(1968)
111. Ryan, E.P.: Finite-time stabilization of uncertain nonlinear planar systems. Dyn. Control 1(1),
83–94 (1991)
112. Hong, Y.G., Wang, J., Cheng, D.: Adaptive finite-time control of nonlinear systems with
parametric uncertainty. IEEE Trans. Autom. Control 51(5), 858–862 (2006)
113. Nersesov, S.G., Nataxaj, C., Avis, J.M.: Design of finite time stabilizing controller for non-
linear dynamical systems. Int. J. Robust Nonlinear Control 19(8), 900–918 (2009)
114. Mastellone, S., Dorato, P., Abdallah, C.T.: Finite-time stability of discrete-time nonlinear
systems: analysis and design. In: Proceedings of the 43rd IEEE Conference on Decision and
Control, Atlantis, Paradise Island, Bahamas, pp. 2572–2577 (2004)
115. Yang, Y., Li, J., Chen, G.: Finite-time stability and stabilization of nonlinear stochastic hybrid
systems. J. Math. Anal. Appl. 356(1), 338–345 (2009)
116. Chen, F., Xu, S., Zou, Y.: Finite-time boundedness and stabilization for a class of non-linear
quadratic time-delay systems with disturbances. IET Control Theor. Appl. 7(13), 1683–1688
(2013)
117. Yin, J., Khoo, S., Man, Z.: Finite-time stability and instability of stochastic nonlinear systems.
Automatica 47(12), 2671–2677 (2011)
118. Khoo, S., Yin, J.L., Man, Z.H.: Finite-time stabilization of stochastic nonlinear systems in
strict-feedback form. Automatica 49(5), 1403–1410 (2013)
119. Amato, F., Cosentesto, C., Merola, A.: Sufficient conditions for finite-time stability and stabi-
lization of nonlinear quadratic systems. IEEE Trans. Autom. Control 55(2), 430–434 (2010)
120. He, S., Liu, F.: Finite-time H∞ fuzzy control of nonlinear jump systems with time delays via
dynamic observer-based state feedback. IEEE Trans. Fuzzy. Syst. 20(4), 605–614 (2012)
121. Luan, X.L., Liu, F., Shi, P.: Robust finite-time H∞ control for nonlinear jump systems via
neural networks. Circ. Syst. Signal Process 29(3), 481–498 (2010)
122. Luan, X.L., Liu, F., Shi, P.: Neural-network-based finite-time H∞ control for extended Marko-
vian jump nonlinear systems. Int. J. Adapt. Control Signal Process 24(7), 554–567 (2010)
123. Luan, X.L., Liu, F., Shi, P.: Finite-time filtering for nonlinear stochastic systems with partially
known transition jump rates. IET Control Theor. Appl. 4(5), 735–745 (2010)
124. Elbsat, M.N., Yaz, E.E.: Robust and resilient finite-time bounded control of discrete-time
uncertain nonlinear systems. Automatica 49(7), 2292–2296 (2013)
125. Zhang, Y., Shi, P., Nguang, S.K.: Robust finite-time fuzzy H∞ control for uncertain time-delay
systems with stochastic jumps. J. Frankl. Inst. 351(8), 4211–4229 (2014)
126. Luan, X.L., Zhao, C.Z., Liu, F.: Finite-time H∞ control with average dwell-time constraint
for time-delay Markovian jump systems governed by deterministic switches. IET Control
Theor. Appl. 8(11), 968–977 (2014)
127. Chen, C., Gao, Y., Zhu, S.: Finite-time dissipative control for stochastic interval systems with
time-delay and Markovian switching. Appl. Math. Comput. 310, 169–181 (2017)
128. Yan, Z., Zhang, W., Zhang, G.: Finite-time stability and stabilization of It ô stochastic sys-
tems with Markovian switching: mode-dependent parameters approach. IEEE Trans. Autom.
Control 60(9), 2428–2433 (2015)
129. Lyu, X.X., Ai, Q.L., Yan, Z.G., He, S.P., Luan, X.L., Liu, F.: Finite-time asynchronous resilient
observer design of a class of non-linear switched systems with time-delays and uncertainties.
IET Control. Theor. Appl. 14(7), 952–963 (2020)
130. Nie, R., He, S.P., Luan, X.L.: Finite-time stabilization for a class of time-delayed Markovian
jump systems with conic nonlinearities. IET Control Theor. Appl. 13(9), 1279–1283 (2019)
18 1 Introduction

131. Yan, Z., Song, Y., Park, J.H.: Finite-time stability and stabilization for stochastic Markov
jump systems with mode-dependent time delays. ISA Trans. 68, 141–149 (2017)
132. Wen, J., Nguang, S.K., Shi, P.: Finite-time stabilization of Markovian jump delay systems–a
switching control approach. Int. J. Robust Nonlinear Control 7(2), 298–318 (2016)
133. Chen, Y., Liu, Q., Lu, R., Xue, A.: Finite-time control of switched stochastic delayed systems.
Neurocomputing 191, 374–379 (2016)
134. Ma, Y., Jia, X., Zhang, Q.: Robust observer-based finite-time H∞ control for discrete-time
singular Markovian jumping system with time delay and actuator saturation. Nonlinear Anal.
Hybrid. Syst. 28, 1–22 (2018)
135. Shen, H., Li, F., Yan, H.C., Karimi, H.R., Lam, H.K.: Finite-time event-triggered H∞ control
for T-S fuzzy Markovian jump systems. IEEE Trans. Fuzzy Syst. 26(5), 3122–3135 (2018)
136. Luan, X.L., Min, Y., Ding, Z.T., Liu, F.: Stochastic given-time H∞ consensus over Markovian
jump networks with disturbance constraint. Trans. Inst. Meas. Control 39(8), 1253–1261
(2017)
137. Cheng, J., Zhu, H., Zhong, S.M., Zeng, Y., Dong, X.C.: Finite-time H∞ control for a class
of Markovian jump systems with mode-dependent time-varying delays via new Lyapunov
functionals. ISA Trans. 52(6), 768–774 (2013)
138. Song, X.N., Wang, M., Ahn, C.K., Song, S.: Finite-time H∞ asynchronous control for non-
linear Markovian jump distributed parameter systems via quantized fuzzy output-feedback
approach. IEEE Trans. Cybern. 50(9), 4098–4109 (2020)
139. Ma, Y.C., Jia, X.R., Zhang, Q.L.: Robust finite-time non-fragile memory H∞ control for
discrete-time singular Markovian jump systems subject to actuator saturation. J. Frankl. Inst.
354(18), 8256–8282 (2017)
140. Cheng, P., He, S.P., Luan, X.L., Liu, F.: Finite-region asynchronous H∞ control for 2D Marko-
vian jump systems. Automatica (2021). https://doi.org/10.1016/j.automatica.2021.109590
141. Ren, H.L., Zong, G.D., Karimi, H.R.: Asynchronous finite-time filtering of Markovian jump
nonlinear systems and its applications. IEEE Trans. Syst. Man Cybern. Syst. 51(3), 1725–1734
(2019)
142. Li, S.Y., Ma, Y.: Finite-time dissipative control for singular Markovian jump systems via
quantizing approach. Nonlinear Anal. Hybrid Syst. 27, 323–340 (2018)
143. Ren, C.C., He, S.P., Luan, X.L., Liu, F., Karimi, H.R.: Finite-time l2 -gain asynchronous
control for continuous-time positive hidden Markovian jump systems via T-S fuzzy model
approach. IEEE Trans. Cybern. 51(1), 77–87 (2021)
144. Yan, H.C., Tian, Y.X., Li, H.Y., Zhang, H., Li, Z.C.: Input-output finite-time mean square
stabilization of nonlinear semi-Markovian jump systems. Automatica 104, 82–89 (2021)
145. Ju, Y.Y., Cheng, G.F., Ding, Z.S.: Stochastic H∞ finite-time control for linear neutral semi-
Markovian jump systems under event-triggering scheme. J. Frankl. Inst. 358(2), 1529–1552
(2021)
146. Ren, H.L., Zong, G.D.: Robust input-output finite-time filtering for uncertain Markovian jump
nonlinear systems with partially known transition probabilities. Int. J. Adapt. Control Signal.
Process. 31(10), 1437–1455 (2017)
147. Zong, G.D., Yang, D., Hou, L.L., Wang, Q.Z.: Robust finite-time H∞ control for Markovian
jump systems with partially known transition probabilities. J. Frankl. Inst. 350(6), 1562–1578
(2013)
148. Cheng, J., Park, J.H., Liu, Y.J., Liu, Z.J., Tang, L.M.: Finite-time H∞ fuzzy control of nonlinear
Markovian jump delayed systems with partly uncertain transition descriptions. Fuzzy Sets
Syst. 314, 99–115 (2017)
149. Luan, X.L., Zhao, C.Z., Liu, F.: Finite-time stabilization of switching Markovian jump systems
with uncertain transition rates. Circ. Syst. Signal Process 34(12), 3741–3756 (2015)
150. Chen, F., Luan, X.L., Liu, F.: Observer based finite-time stabilization for discrete-time Marko-
vian jump systems with Gaussian transition probabilities. Circ. Syst. Signal Process 33(10),
3019–3035 (2014)
151. Luan, X.L., Shi, P., Liu, F.: Finite-time stabilization for Markovian jump systems with Gaus-
sian transition probabilities. IET Control Theor. Appl. 7(2), 298–304 (2013)
References 19

152. Wang, J., Ru, T.T., Xia, J.W., Shen, H., Sreeram, V.: Asynchronous event-triggered sliding
mode control for semi-Markovian jump systems within a finite-time interval. IEEE Trans.
Circuits Syst.-I 68(1), 458–468 (2021)
153. Zong, G.D., Ren, H.L.: Guaranteed cost finite-time control for semi-Markovian jump systems
with event-triggered scheme and quantization input. Int. J. Robust. Nonlinear Control 29(15),
5251–5273 (2019)
154. Chen, J., Zhang, D., Qi, W.H., Cao, J.D., Shi, K.B.: Finite-time stabilization of T-S fuzzy
semi-Markovian switching systems: a coupling memory sampled-data control approach. J.
Frankl. Inst. 357(16), 11265–11280 (2020)
155. Wang, J.M., Ma, S.P., Zhang, C.H.: Finite-time H∞ filtering for nonlinear continuous-time
singular semi-Markovian jump systems. Asian J. Control 21(2), 1017–1027 (2019)
156. Qi, W.H., Zong, G.D., Karimi, H.R.: Finite-time observer-based sliding mode control for
quantized semi-Markovian switching systems with application. IEEE Trans. Ind. Electron
16(2), 1259–1271 (2020)
157. Song, J., Niu, Y.G., Zou, Y.Y.: A parameter-dependent sliding mode approach for finite-time
bounded control of uncertain stochastic systems with randomly varying actuator faults and its
application to a parallel active suspension system. IEEE Trans. Ind. Electron 65(10), 2455–
2461 (2018)
158. Cao, Z.R., Niu, Y.G., Zhao, H.J.: Finite-time sliding mode control of Markovian jump systems
subject to actuator faults. Int. J. Control Autom. Syst. 16, 2282–2289 (2018)
159. Li, F.B., Du, C.L., Yang, C.H., Wu, L.G., Gui, W.H.: Finite-time asynchronous sliding
mode control for Markovian jump systems. Automatica (2021). https://doi.org/10.1016/j.
automatica.2019.108503
160. Ren, C.C., He, S.P.: Sliding mode control for a class of nonlinear positive Markovian jump
systems with uncertainties in a finite-time interval. Int. J. Control Autom. Syst. 17(7), 1634–
1641 (2019)
Chapter 2
Finite-Time Stability and Stabilization for Discrete-Time Markovian Jump Systems

Abstract To relax the requirement of asymptotic stability of discrete-time Markovian jump systems (MJSs), the finite-time stability, finite-time boundedness, and finite-time stabilization of discrete-time linear and nonlinear MJSs are investigated in this chapter. To deal with the nonlinear part of the MJSs, neural networks are utilized to approximate the nonlinearities by linear difference inclusions. The designed controller keeps the state trajectories of the systems within the pre-specified bounds over the given time interval, despite the approximation error and external disturbance, rather than requiring asymptotic convergence to the equilibrium point.

2.1 Introduction

In practical engineering applications, more attention has been paid to the system's transient behavior over a restricted time than to the steady-state performance in the infinite-time domain. To decrease the conservativeness of controller design, the finite-time stability theory was proposed by Dorato in 1961. Considering the impact of exogenous disturbances on the system, finite-time boundedness was further explored. Since then, a great number of research results on finite-time stability, finite-time boundedness, and finite-time stabilization of linear deterministic systems have been reported [1–3]. Furthermore, by considering the influence of the transition probability on the control performance, stochastic finite-time stability, stochastic finite-time boundedness, and stochastic finite-time stabilization for stochastic Markovian jump systems (MJSs) have also been extensively investigated [4–6].
On the other hand, nonlinearities are a common feature of practical plants. How to guarantee the transient performance of nonlinear MJSs over a finite-time span is a challenging issue. Assuming that the nonlinear terms satisfy Lipschitz conditions, the asynchronous finite-time filtering, finite-time dissipative filtering, and asynchronous output finite-time control problems have been solved [7–9]. Using the Takagi–Sugeno fuzzy model to represent nonlinear MJSs, the asynchronous finite-time control, finite-time H∞ control, finite-time H∞ filtering, etc., have been addressed in [10–12]. In addition to the above methods of dealing with the nonlinearities of MJSs, neural networks are also efficient tools for analyzing the transient performance of nonlinear MJSs [13, 14].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
X. Luan et al., Robust Control for Discrete-Time Markovian Jump Systems in the Finite-Time Domain, Lecture Notes in Control and Information Sciences 492, https://doi.org/10.1007/978-3-031-22182-8_2
The primary purpose of this chapter is to investigate the finite-time stability (FTS) and finite-time stabilization problems for discrete-time linear and nonlinear MJSs. For nonlinear MJSs with time-delays and external disturbances, neural networks are utilized to represent the nonlinear terms through linear difference inclusions (LDIs) under a state-space representation. Mode-dependent finite-time controllers are designed to render the linear and nonlinear MJSs stochastically finite-time stabilizable. By constructing an appropriate stochastic Lyapunov function, sufficient conditions are derived in terms of linear matrix inequalities (LMIs).

2.2 Preliminaries and Problem Formulation

Consider the following discrete-time linear MJS with uncertainties:

x(k + 1) = [A(rk) + ΔA(rk)]x(k) + [Bu(rk) + ΔBu(rk)]u(k) + Bw(rk)w(k),
x(k) = x0, rk = r0, k = 0     (2.1)
where k ∈ {1, . . . , N}, N ∈ N, and N is the set of positive integers. x(k) ∈ R^n is the vector of state variables, u(k) ∈ R^m is the controlled input, and w(k) ∈ l2^q[0, +∞) is the exogenous disturbance with bounded energy E{Σ_{k=0}^{N} w^T(k)w(k)} < d^2.
x0 , r0 are the initial state and mode respectively, and rk is a discrete-time, discrete-
state Markovian chain taking values in M = {1, 2, . . . , i, . . . , M} with transition
probabilities
πij = Pr(rk = j | rk−1 = i)     (2.2)

where πij is the transition probability from mode i to mode j, satisfying

πij ≥ 0,  Σ_{j=1}^{M} πij = 1,  ∀i ∈ M.
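As a quick illustration of (2.2), the following sketch (a hypothetical two-mode transition matrix, chosen only for demonstration) samples the chain rk and checks that the observed transition frequencies approximate πij:

```python
import numpy as np

# Hypothetical two-mode transition probability matrix (rows sum to 1).
PI = np.array([[0.7, 0.3],
               [0.4, 0.6]])

def sample_chain(pi, r0, steps, rng):
    """Sample r_0, ..., r_steps with P(r_k = j | r_{k-1} = i) = pi[i, j]."""
    modes = [r0]
    for _ in range(steps):
        modes.append(rng.choice(len(pi), p=pi[modes[-1]]))
    return np.array(modes)

rng = np.random.default_rng(0)
r = sample_chain(PI, r0=0, steps=50_000, rng=rng)

# Empirical transition frequencies should approximate PI for a long sample path.
counts = np.zeros_like(PI)
for a, b in zip(r[:-1], r[1:]):
    counts[a, b] += 1
freq = counts / counts.sum(axis=1, keepdims=True)
print(np.round(freq, 3))
```

The printed frequency matrix converges to PI as the sample length grows, which is the ergodic behavior exploited throughout the stochastic analysis below.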

For each possible value rk = i, it has

A(rk) = Ai,  ΔA(rk) = ΔAi,  Bu(rk) = Bui,  ΔBu(rk) = ΔBui,  Bw(rk) = Bwi.     (2.3)

ΔAi and ΔBui are the time-varying but norm-bounded uncertainties that satisfy

[ΔAi  ΔBui] = Mi Fi(k) [N1i  N2i].     (2.4)
Mi, N1i, and N2i are known mode-dependent matrices with suitable dimensions, and Fi(k) is an unknown time-varying matrix function with Lebesgue-measurable elements satisfying Fi^T(k)Fi(k) ≤ I.
Concerning the uncertain linear MJS (2.1), the following state feedback controller is constructed:

u(k) = Ki x(k)     (2.5)

where Ki ∈ R^{m×n} is the state feedback gain to be designed. The resulting closed-loop MJS is then

x(k + 1) = [Āi + ΔĀi]x(k) + Bwi w(k),
x(k) = x0, rk = r0, k = 0     (2.6)

where Āi = Ai + Bui Ki and ΔĀi = ΔAi + ΔBui Ki.


This chapter aims to investigate the finite-time control problem for the uncertain discrete-time linear MJS (2.1). By choosing a proper Lyapunov function, the main results will be presented in the form of LMIs. The following definitions formalize the general idea of the stochastic finite-time control problem of discrete-time MJSs.
Definition 2.1 (Stochastic FTB) For a given time constant N > 0 and positive scalars c1, c2, the uncertain discrete-time linear MJS (2.1) (setting u(k) = 0) is said to be stochastic finite-time bounded (FTB) with respect to (c1, c2, N, R, d), where c1 < c2 and R > 0, if

E{x^T(0)Rx(0)} ≤ c1 ⇒ E{x^T(k)Rx(k)} < c2,  ∀k ∈ {1, 2, . . . , N}.     (2.7)

Remark 2.1 In fact, stochastic finite-time stability in the presence of an external disturbance leads to the concept of stochastic finite-time boundedness. Conversely, letting w(k) = 0, the concept in Definition 2.1 reduces to stochastic FTS. In other words, the uncertain discrete-time linear MJS (2.1) (setting w(k) = 0, u(k) = 0) is said to be stochastic finite-time stable (FTS) with respect to (c1, c2, N, R) if Eq. (2.7) holds. Clearly, stochastic finite-time boundedness implies stochastic finite-time stability, but the converse does not hold in general.
Remark 2.2 Both stochastic finite-time boundedness and stochastic finite-time stability are open-loop concepts, belonging to the analysis of the open-loop MJS with u(k) = 0. When a controller of the form (2.5) renders the closed-loop system (2.6) stochastic finite-time bounded, the system is said to be stochastic finite-time stabilizable, which gives the concept of stochastic finite-time stabilization.
Remark 2.3 Note that Lyapunov asymptotic stability and stochastic finite-time stability are different concepts. The concept of Lyapunov asymptotic stability is well known to the control community, whereas an MJS is stochastic FTS if its state stays inside the desired bounds during a fixed time interval. Therefore, an MJS that is stochastic FTS may not be Lyapunov asymptotically stable. Conversely, a Lyapunov asymptotically stable MJS may not be stochastic FTS if its state exceeds the given bounds during the transient response.
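The distinction in Remark 2.3 can be checked numerically. The deterministic scalar example below (a hypothetical system chosen only for illustration, with no Markovian switching) is finite-time stable with respect to (c1, c2, N, R) = (1, 2, 10, 1) and yet is not asymptotically stable:

```python
import numpy as np

a = 1.03          # x(k+1) = a * x(k); |a| > 1, so the origin is not asymptotically stable
c1, c2, N = 1.0, 2.0, 10

x = 1.0           # initial condition with x(0)^2 <= c1 (here R = 1)
traj = [x]
for _ in range(N):
    x = a * x
    traj.append(x)

# FTS w.r.t. (1, 2, 10, 1): the weighted squared state stays below c2 for k = 1..N ...
fts = all(xk ** 2 < c2 for xk in traj[1:])
# ... but over a longer horizon the trajectory leaves the bound, since it diverges.
long_run = a ** 50
print(fts, long_run ** 2 > c2)
```

Both printed values are True: the system respects the bound over the fixed horizon N while still diverging in the Lyapunov sense, exactly the situation described in the remark.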

2.3 Stochastic Finite-Time Stabilization for Linear MJSs

This subsection first considers the stochastic finite-time stabilization problem for the uncertain discrete-time linear MJS (2.1). Before presenting the main results, the following lemmas will be helpful.
Lemma 2.1 [15] Assume that H, L, and S are real matrices with appropriate dimensions. Then, for any positive scalar θ > 0 and any U with U^T U ≤ I,

H + LUS + S^T U^T L^T ≤ H + θLL^T + θ^{−1} S^T S.     (2.8)

Lemma 2.2 (Schur complement lemma) For a given symmetric matrix

S = ⎡ S11  S12 ⎤
    ⎣ S21  S22 ⎦

with S11 ∈ R^{r×r} and S21 = S12^T, the following statements are equivalent:
(a) S < 0;
(b) S11 < 0, S22 − S12^T S11^{−1} S12 < 0;
(c) S22 < 0, S11 − S12 S22^{−1} S12^T < 0.
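A quick numerical sanity check of Lemma 2.2, using an arbitrarily chosen symmetric negative-definite matrix partitioned with r = 2 (the specific numbers are illustrative only):

```python
import numpy as np

# An arbitrary symmetric negative-definite test matrix, partitioned with r = 2.
S = np.array([[-4.0,  1.0,  0.5,  0.2],
              [ 1.0, -3.0,  0.1,  0.3],
              [ 0.5,  0.1, -2.0,  0.4],
              [ 0.2,  0.3,  0.4, -1.5]])
S11, S12 = S[:2, :2], S[:2, 2:]
S22 = S[2:, 2:]

def neg_def(M):
    """True iff the symmetric matrix M is negative definite."""
    return np.max(np.linalg.eigvalsh(M)) < 0

# (a) S < 0  <=>  (b) S11 < 0 and S22 - S12^T S11^{-1} S12 < 0
#              <=>  (c) S22 < 0 and S11 - S12 S22^{-1} S12^T < 0.
a = neg_def(S)
b = neg_def(S11) and neg_def(S22 - S12.T @ np.linalg.solve(S11, S12))
c = neg_def(S22) and neg_def(S11 - S12 @ np.linalg.solve(S22, S12.T))
print(a, b, c)
```

All three tests agree, as the lemma asserts; this equivalence is the main tool for converting the nonlinear matrix inequalities below into LMIs.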

Now we focus our attention on the stochastic finite-time controller design, which builds on the following theorem:
Theorem 2.1 For a given scalar α ≥ 1, the uncertain discrete-time linear MJS (2.1) (setting u(k) = 0) is stochastic FTB with respect to (c1, c2, N, R, d) if there exist symmetric positive-definite matrices Pi ∈ R^{n×n}, i ∈ M, and Q ∈ R^{q×q} such that

⎡ Σ_{j=1}^{M} πij (Ai + ΔAi)^T Pj (Ai + ΔAi) − αPi    Σ_{j=1}^{M} πij (Ai + ΔAi)^T Pj Bwi ⎤
⎣ Σ_{j=1}^{M} πij Bwi^T Pj (Ai + ΔAi)                 Σ_{j=1}^{M} πij Bwi^T Pj Bwi − αQ  ⎦ < 0     (2.9)

λ2 c1 + λ3 d^2 < α^{−N} c2 λ1     (2.10)

where λ1 = λmin(P̃i), λ2 = λmax(P̃i), λ3 = λmax(Q), and P̃i = R^{−1/2} Pi R^{−1/2}.

Proof Define the following Lyapunov function:

Vi(k) = x^T(k)Pi x(k).

Along the state trajectories of system (2.1) with u(k) = 0, the conditional expectation of V(k + 1) is given by


M
Vi (k + 1) = πi j x T (k + 1)P j x(k + 1)
j=1

M
= πi j [x T (k)(Ai + Ai )T P j (Ai + Ai ) x(k)
j=1
+ 2x T (k)(Ai + Ai )T P j w(k) + w T (k)Bwi
T
P j Bwi w(k)].

Condition (2.9) implies

E{Vi(k + 1)} < αVi(k) + αw^T(k)Qw(k).     (2.11)

Iterating inequality (2.11) from the initial time, we get

E{Vi(k)} < α^k Vr0(0) + Σ_{l=1}^{k} α^l w^T(k − l)Qw(k − l)
         < α^k [Vr0(0) + λ3 Σ_{l=1}^{k} α^{l−k} w^T(k − l)w(k − l)].

Denote P̃i = R^{−1/2} Pi R^{−1/2}, λ1 = λmin(P̃i), λ2 = λmax(P̃i), λ3 = λmax(Q). For α ≥ 1, it holds that α^{l−k} ≤ 1 for l ≤ k, so we have the following relationship:

E{Vi(k)} < α^k [Vr0(0) + λ3 Σ_{l=1}^{k} α^{l−k} w^T(k − l)w(k − l)]
         < α^N (λ2 c1 + λ3 d^2).

On the other hand, it yields

Vi(k) = x^T(k)Pi x(k) ≥ λ1 x^T(k)Rx(k).

Then we can get

E{x^T(k)Rx(k)} < α^N (λ2 c1 + λ3 d^2) / λ1.

Condition (2.10) means that for k ∈ {1, . . . , N}, the inequality E{x^T(k)Rx(k)} < c2 holds. This completes the proof. □
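The analysis above can be checked numerically. The sketch below (a hypothetical two-mode scalar example with no uncertainties, i.e., ΔAi = 0, and illustrative parameter values) verifies condition (2.9) by an eigenvalue test and then propagates the exact second moment E{x^2(k)} of the jump system, confirming that it stays below c2 over the horizon:

```python
import numpy as np

# Hypothetical two-mode scalar MJS: x(k+1) = A[i] x(k) + Bw[i] w(k), no uncertainty.
A  = np.array([0.8, 1.0])
Bw = np.array([0.1, 0.1])
PI = np.array([[0.7, 0.3],
               [0.4, 0.6]])
alpha, P, Q = 1.05, np.array([1.0, 1.0]), 1.0
c1, c2, N, d = 1.0, 4.0, 10, 1.0     # R = 1, so lambda1 = lambda2 = 1, lambda3 = Q

# Condition (2.9): a 2x2 matrix inequality per mode (scalar state and disturbance).
for i in range(2):
    s = PI[i] @ P                                  # sum_j pi_ij * P_j
    M = np.array([[A[i] ** 2 * s - alpha * P[i], A[i] * Bw[i] * s],
                  [A[i] * Bw[i] * s,             Bw[i] ** 2 * s - alpha * Q]])
    assert np.max(np.linalg.eigvalsh(M)) < 0       # (2.9) holds for mode i

# Condition (2.10): lambda2*c1 + lambda3*d^2 < alpha^(-N) * c2 * lambda1.
assert c1 + Q * d ** 2 < alpha ** (-N) * c2

# Exact propagation of p_i = P(r_k=i), m_i = E[x 1{r_k=i}], s_i = E[x^2 1{r_k=i}]
# for a deterministic disturbance with energy below d^2.
w = 0.3 * 0.9 ** np.arange(N)                      # sum of w^2 is about 0.42 < d^2
p, m, s = np.array([1.0, 0.0]), np.array([1.0, 0.0]), np.array([1.0, 0.0])  # x(0)=1, r0=0
second_moments = []
for k in range(N):
    s = PI.T @ (A ** 2 * s + 2 * A * Bw * w[k] * m + Bw ** 2 * w[k] ** 2 * p)
    m = PI.T @ (A * m + Bw * w[k] * p)
    p = PI.T @ p
    second_moments.append(s.sum())                 # E[x^2(k+1)]
print(max(second_moments) < c2)
```

Since x^T(0)Rx(0) = 1 ≤ c1 and the disturbance energy is below d^2, Theorem 2.1 predicts E{x^T(k)Rx(k)} < c2 for k = 1, ..., N, and the exact moment recursion confirms it.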

Theorem 2.2 For a given positive scalar α ≥ 1, the closed-loop system (2.6) is stochastic finite-time stabilizable with respect to (c1, c2, N, R, d) if there exist symmetric positive-definite matrices Xi and Q, matrices Yi, and positive scalars θi, i ∈ M, such that

⎡ −αXi   0      L1i^T             L3i^T ⎤
⎢  0     −αQ    L2i^T             0     ⎥
⎢  L1i   L2i    Zi + θi Mi^T Mi   0     ⎥ < 0     (2.12)
⎣  L3i   0      0                 −θi I ⎦

λ1 R^{−1} < Xi < R^{−1}     (2.13)

0 < Q < λ2 I     (2.14)

⎡ λ2 d^2 − α^{−N} c2   √c1 ⎤
⎣ √c1                  −λ1 ⎦ < 0     (2.15)

where

L1i^T = [√πi1 (Ai Xi + Bui Yi)^T  . . .  √πiM (Ai Xi + Bui Yi)^T],
L2i^T = [√πi1 Bwi^T  . . .  √πiM Bwi^T],
L3i^T = [√πi1 (N1i Xi + N2i Yi)^T  . . .  √πiM (N1i Xi + N2i Yi)^T],
Zi = −diag{X1, . . . , XM}.

Then the state feedback controller gain can be obtained as Ki = Yi Xi^{−1}.

Proof Inequalities (2.12)–(2.15) can be derived from Theorem 2.1 by applying the change of variables Xi = Pi^{−1} and Yi = Ki Xi, together with the Schur complement (Lemma 2.2) and Lemma 2.1 to handle the uncertainty (2.4). □

Remark 2.4 To obtain the optimal stochastic finite-time controller for the uncertain discrete-time linear MJS (2.1), the upper bound c2 can be minimized through the following optimization problem:

min_{Xi, Yi, Q, λ1, λ2, c1, θi}  c2
s.t. LMIs (2.12)–(2.15).     (2.16)
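For a fixed candidate solution, the block inequality (2.12) can be assembled and tested numerically before solving the full optimization (2.16) with an SDP solver. The sketch below uses a hypothetical two-mode scalar example without uncertainties (Mi = N1i = N2i = 0, so the θi blocks of (2.12) drop out) and checks the remaining reduced block inequality before recovering Ki = Yi Xi^{−1}; all numeric values are illustrative:

```python
import numpy as np

# Hypothetical two-mode scalar data (no uncertainty terms).
A, Bu, Bw = np.array([0.8, 1.0]), np.array([1.0, 1.0]), np.array([0.1, 0.1])
PI = np.array([[0.7, 0.3],
               [0.4, 0.6]])
alpha = 1.05

# Candidate decision variables: X_i, Y_i, Q.
X = np.array([0.9, 0.9])
Y = np.array([-0.27, -0.45])       # chosen so that A_i + K_i = 0.5 with K_i = Y_i / X_i
Q = 1.0

for i in range(2):
    ai = A[i] * X[i] + Bu[i] * Y[i]            # A_i X_i + B_ui Y_i (scalar here)
    sq = np.sqrt(PI[i])                        # [sqrt(pi_i1), sqrt(pi_i2)]
    L1 = sq * ai                               # row blocks of L_1i
    L2 = sq * Bw[i]                            # row blocks of L_2i
    # Reduced (2.12): [[-alpha X_i, 0, L1^T], [0, -alpha Q, L2^T], [L1, L2, -diag(X)]].
    M = np.block([[np.array([[-alpha * X[i], 0.0], [0.0, -alpha * Q]]),
                   np.vstack([L1, L2])],
                  [np.vstack([L1, L2]).T, -np.diag(X)]])
    assert np.max(np.linalg.eigvalsh(M)) < 0   # candidate is feasible for mode i

K = Y / X                                      # state feedback gains K_i = Y_i X_i^{-1}
print(np.round(K, 3))
```

In practice (2.16) is handed to an LMI/SDP solver; this eigenvalue test is only a feasibility check of a candidate point, but it makes the block structure of (2.12) and the gain recovery Ki = Yi Xi^{−1} concrete.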

2.4 Stochastic Finite-Time Stabilization for Nonlinear MJSs

In this subsection, we consider the following discrete-time nonlinear MJS with time-delay:

x(k + 1) = A(rk)x(k) + Ad(rk)x(k − h) + Bu(rk)u(k) + Bw(rk)w(k) + C(rk)f(x(k), rk),
x_f = ϕ_f, f ∈ {−h, . . . , 0}, rk = r0, k = 0     (2.17)

where f(·) is a discrete-time nonlinear mapping with f(0) = 0 that is assumed to be unknown a priori. For simplicity of notation, Ad(rk), C(rk), and f(x(k), rk) are denoted as Adi, Ci, and fi(x(k)), respectively. For each mode i, the nonlinear function fi(x(k)) can be linearized by multilayer neural networks. An L-layered perceptron neural network Ni(x(k), Wi1, Wi2, . . . , WiL) is trained to approximate the nonlinear term fi(x(k)); it is specified in matrix–vector form as follows:

Ni(x(k), Wi1, Wi2, . . . , WiL) = ψiL[WiL · · · ψi2[Wi2 ψi1[Wi1 x(k)]]]     (2.18)

where the weight matrices Wir ∈ R^{nir×ni(r−1)}, r = 1, . . . , L, from the (r − 1)-th layer to the r-th layer are the parameters to be determined, and ψir[·], r = 1, . . . , L, is the activation function defined as ψir[·] = [φi1(ςi1), φi2(ςi2), . . . , φinr(ςinr)]^T, where nr denotes the number of neurons in the r-th layer and

φih(ςih) = δih (1 − e^{−ςih/qih}) / (1 + e^{−ςih/qih}),  qih, δih > 0,  h = 1, 2, . . . , nr.     (2.19)

The minimum and maximum derivatives of the activation function φih are designated as follows:

sih(v, φih) = { min_{ζih} ∂φih(ζih)/∂ζih,  v = 0
              { max_{ζih} ∂φih(ζih)/∂ζih,  v = 1.     (2.20)
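For the sigmoid-type activation (2.19), the slope extrema in (2.20) can be evaluated numerically. Since (2.19) equals δih tanh(ςih/(2qih)), its derivative is δih/(2qih) · sech^2(ςih/(2qih)): the maximum slope δih/(2qih) occurs at the origin, and the infimum approaches 0 in the tails. A quick check with illustrative parameter values:

```python
import numpy as np

delta, q = 1.5, 0.5          # illustrative values of delta_ih and q_ih

def phi(z):
    """Activation (2.19): delta * (1 - exp(-z/q)) / (1 + exp(-z/q))."""
    return delta * (1 - np.exp(-z / q)) / (1 + np.exp(-z / q))

# Numerical derivative on a wide grid.
z = np.linspace(-20, 20, 200_001)
dz = z[1] - z[0]
slope = np.gradient(phi(z), dz)

s_max = slope.max()          # close to delta / (2 q), attained at z = 0
s_min = slope.min()          # infimum of the slope is 0, approached in the tails
print(round(s_max, 4), round(s_min, 6))
```

These two numbers are precisely the values sih(1, φih) and sih(0, φih) entering the min–max description below.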

For the r-th layer of the neural network, the activation function φih can be rewritten in the following min–max manner:

φih = zih(0)sih(0, φih) + zih(1)sih(1, φih),

where zih(v), v = 0, 1, are positive real numbers with zih(v) > 0 and zih(0) + zih(1) = 1. According to the approximation ability of the neural network,
there exist optimal weight matrices Wir* defined as

(Wi1*, Wi2*, . . . , WiL*) = arg min_{(Wi1, Wi2, . . . , WiL)} max_{x(k)∈D} ‖fi(x(k)) − Ni(x(k), Wi1, Wi2, . . . , WiL)‖,

where D is a compact set, such that

max_{x(k)∈D} ‖fi(x(k)) − Ni(x(k), Wi1*, Wi2*, . . . , WiL*)‖ ≤ ρi ‖x(k)‖.     (2.21)

For each mode i, denote a set of nr-dimensional index vectors of the r-th layer as

γnr = γnr(σi) = {σi ∈ R^{nr} | σih ∈ {0, 1}, h = 1, . . . , nr},



where σi is utilized as a binary indicator. Clearly, the r-th layer with nr neurons has 2^{nr} combinations of the binary indicator with v = 0, 1, and the index vectors over all L layers of the neural network have 2^{nL} × · · · × 2^{n2} × 2^{n1} combinations, collected in the set

Λ = γnL ⊕ · · · ⊕ γn2 ⊕ γn1.

By applying condition (2.20) and resorting to the compact description [16], the multilayer neural network (2.18) can be expressed as follows:

N_i(x(k), W*_i1, W*_i2, . . . , W*_iL)
= ψ_iL[ W*_iL · · · ψ_i2[ W*_i2 [ Σ_{v=0}^{1} z_i11(v)s_i11(v, φ_i11)(W*_i1 x(k))_i1, . . . , Σ_{v=0}^{1} z_i1n_1(v)s_i1n_1(v, φ_i1n_1)(W*_i1 x(k))_in_1 ]^T ] · · · ]
= Σ_{σ_i∈Ω} μ_{σ_i} A_{σ_i}(σ_i, ψ_i, W_i) x(k),    (2.22)

where

A_{σ_i} = diag[s_iLh(σ_iLh, φ_iLh)] W*_iL · · · diag[s_i2h(σ_i2h, φ_i2h)] W*_i2 diag[s_i1h(σ_i1h, φ_i1h)] W*_i1,

Σ_{σ_i∈Ω} μ_{σ_i} = Σ_{v_iLn_L=0}^{1} · · · Σ_{v_iL1=0}^{1} · · · Σ_{v_i1n_1=0}^{1} · · · Σ_{v_i11=0}^{1} z_iLn_L(v_iLn_L) · · · z_iL1(v_iL1) · · · z_i1n_1(v_i1n_1) · · · z_i11(v_i11) = 1.
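The vertex construction above can be sketched as follows: each layer contributes one binary slope per neuron, and every combination of extreme slopes yields one vertex matrix A_σ = diag(s_L)W_L · · · diag(s_1)W_1. The weight values here are placeholders for illustration.

```python
import itertools
import numpy as np

def vertex_matrices(weights):
    # weights = [W1, ..., WL]; each layer contributes one binary slope per
    # neuron, so the number of vertices is 2**n_1 * ... * 2**n_L.
    sizes = [W.shape[0] for W in weights]
    layer_indices = [list(itertools.product((0, 1), repeat=n)) for n in sizes]
    vertices = []
    for sigma in itertools.product(*layer_indices):
        A = np.eye(weights[0].shape[1])
        for W, s in zip(weights, sigma):
            # One factor diag(s_r) W_r per layer, slopes at extreme values 0/1.
            A = np.diag(s) @ W @ A
        vertices.append(A)
    return vertices

W1 = np.array([[0.5, -0.2], [0.1, 0.3]])   # placeholder weights
W2 = np.array([[1.0, -1.0]])
verts = vertex_matrices([W1, W2])          # 2**2 * 2**1 = 8 vertices
```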

Thus, by employing multilayer neural networks, the discrete-time nonlinear MJS (2.17) is transformed into a group of LDIs with approximation errors, in which the selection among the different vertex formations is governed by the stochastic Markovian process, i.e.:

x(k + 1) = Ãi x(k) + Adi x(k − h) + Bui u(k) + Bwi w(k) + Ci Δf_i(x(k))
x_f = ϕ_f, f ∈ {−h, . . . , 0}, r_k = r_0, k = 0    (2.23)
2.4 Stochastic Finite-Time Stabilization for Nonlinear MJSs 29

where

Ãi = Σ_{σ_i∈Ω} μ_{σ_i} A_{σ_i} + Ai,

Δf_i(x(k)) = f_i(x(k)) − N_i(x(k), W*_i1, W*_i2, . . . , W*_iL),  with  max_{x(k)∈D} ‖Δf_i(x(k))‖ ≤ ρ_i ‖x(k)‖.    (2.24)

Δf_i(x(k)) denotes the approximation error of the neural network.

Remark 2.5 The detailed structure and quantitative value of the approximation error Δf_i(x(k)) are not needed; only the norm-bounded assumption is required. This requirement is easily satisfied in practical situations. Also, the bound of the approximation error can differ according to the different nonlinearities in each mode.
Based on the LDI representation (2.23) and the state feedback control law in Eq. (2.5), we obtain the following closed-loop system:

x(k + 1) = Āi x(k) + Adi x(k − h) + Bwi w(k) + Ci Δf_i(x(k))
x_f = ϕ_f, f ∈ {−h, . . . , 0}, r_k = r_0, k = 0    (2.25)

where

Āi = Ãi + Bui Ki.    (2.26)

This chapter aims to find sufficient conditions that ensure that the closed-loop system (2.25) is stochastic finite-time stabilizable. Before proceeding further, we introduce the following proposition for the derivation of our main results.

Proposition 2.1 For given scalars α ≥ 0 and ρ_i ≥ 0, the nonlinear closed-loop system (2.25) is stochastic finite-time stabilizable with regard to (c1 c2 N R d), if there exist symmetric positive-definite matrices Pi, Q, G and S such that

[ Āi^T P̄j Āi − (1 + α)Pi + S   Āi^T P̄j Adi          Āi^T P̄j Bwi                 Āi^T P̄j Ci            ]
[ ∗                            Adi^T P̄j Adi − S     Adi^T P̄j Bwi                Adi^T P̄j Ci           ] < 0    (2.27)
[ ∗                            ∗                    Bwi^T P̄j Bwi − (1 + α)Q     Bwi^T P̄j Ci           ]
[ ∗                            ∗                    ∗                           Ci^T P̄j Ci − (1 + α)G ]

c1²λ2 + c1²hλ5 + d²λ3 + c2²ρ_i²λ4 < c2²λ1 / (1 + α)^N    (2.28)

where Q̃ = R^{−1/2} Q R^{−1/2}, G̃ = G^{−1/2} R G^{−1/2}, λ4 = λmax(G), λ5 = λmax(S).



Proof For the closed-loop system (2.25), choose a stochastic Lyapunov function candidate as

Vi(k) = x^T(k)Pi x(k) + Σ_{f=k−h}^{k−1} x_f^T S x_f.

Simple calculation shows that

E{Vi(k + 1)} − Vi(k)
= x^T(k)(Āi^T P̄j Āi − Pi + S)x(k) + 2x^T(k)Āi^T P̄j Adi x(k − h)
+ 2x^T(k)Āi^T P̄j Bwi w(k) + 2x^T(k)Āi^T P̄j Ci Δf_i(x(k))
+ x^T(k − h)(Adi^T P̄j Adi − S)x(k − h)
+ 2x^T(k − h)Adi^T P̄j Bwi w(k) + 2x^T(k − h)Adi^T P̄j Ci Δf_i(x(k))
+ w^T(k)Bwi^T P̄j Bwi w(k)
+ 2w^T(k)Bwi^T P̄j Ci Δf_i(x(k)) + Δf_i^T(x(k))Ci^T P̄j Ci Δf_i(x(k))
= ζ^T(k) Ξi ζ(k)    (2.29)

where

P̄j = Σ_{j=1}^{M} πij Pj,  ζ(k) = [x^T(k)  x^T(k − h)  w^T(k)  Δf_i^T(x(k))]^T,

     [ Āi^T P̄j Āi − Pi + S   ∗                   ∗                ∗            ]
Ξi = [ Adi^T P̄j Āi           Adi^T P̄j Adi − S   ∗                ∗            ]
     [ Bwi^T P̄j Āi           Bwi^T P̄j Adi       Bwi^T P̄j Bwi    ∗            ]
     [ Ci^T P̄j Āi            Ci^T P̄j Adi        Ci^T P̄j Bwi     Ci^T P̄j Ci  ].

Conditions (2.27) and (2.29) imply that

E{Vi(k + 1)}
< (1 + α)x^T(k)Pi x(k) + (1 + α)w^T(k)Qw(k) + (1 + α)Δf_i^T(x(k))G Δf_i(x(k)) + (1 + α) Σ_{f=k−h}^{k−1} x^T(f)Sx(f)
= (1 + α)Vi(k) + (1 + α)w^T(k)Qw(k) + (1 + α)Δf_i^T(x(k))G Δf_i(x(k)).    (2.30)

Noting that α ≥ 0, we can obtain from condition (2.30) that

Vi(k)
< (1 + α)^k Vi(0) + Σ_{f=1}^{k} (1 + α)^{k−f+1} w^T(f − 1)Qw(f − 1) + Σ_{f=1}^{k} (1 + α)^{k−f+1} c2²ρ_i²λmax(G)
= (1 + α)^k [ x^T(0)Pi x(0) + Σ_{f=−h}^{−1} x^T(f)Sx(f) + Σ_{f=1}^{k} (1 + α)^{1−f} w^T(f − 1)Qw(f − 1) + Σ_{f=1}^{k} (1 + α)^{1−f} c2²ρ_i²λmax(G) ]
< (1 + α)^N ( c1²λ2 + c1²hλ5 + d²λ3 + c2²ρ_i²λ4 ).    (2.31)

Note that

Vi(k) = x^T(k)Pi x(k) + Σ_{f=k−h}^{k−1} x^T(f)Sx(f) > x^T(k)Pi x(k) > λ1 x^T(k)Rx(k).    (2.32)

According to conditions (2.31)–(2.32), one has

x^T(k)Rx(k) < (1 + α)^N ( c1²λ2 + c1²hλ5 + d²λ3 + c2²ρ_i²λ4 ) / λ1.    (2.33)

Condition (2.28) implies that for k ∈ {1, 2, . . . , N}, the state trajectories do not exceed the upper bound c2, i.e., E{x^T(k)Rx(k)} < c2². This completes the proof. □
Theorem 2.3 For given scalars α ≥ 0, h > 0, and ρ_i > 0, the closed-loop system (2.25) is stochastic finite-time stabilizable via the state feedback controller in the form of (2.5) with respect to (c1 c2 N R d), if there exist matrices Xi = Xi^T > 0, Yi, H = H^T > 0, Q = Q^T > 0, and G = G^T > 0 such that

[ −(1 + α)Xi   N1i^T         0            0            Xi ]
[ N1i          −M5i + N5i    M3i          M4i          0  ]
[ 0            M3i^T         −(1 + α)Q    0            0  ] < 0    (2.34)
[ 0            M4i^T         0            −(1 + α)G    0  ]
[ Xi           0             0            0            −H ]

λ6 R^{−1} < Xi < R^{−1}    (2.35)

[ −c2²/(1 + α)^N + d²λ3 + c2²ρ_i²λ4   c1    √h c1 ]
[ c1                                  −λ6   0     ] < 0.    (2.36)
[ √h c1                               0     −λ5   ]

Proof By using the Schur complement lemma, condition (2.27) in Proposition 2.1 is equivalent to:

[ −(1 + α)Pi + S   ∗     ∗            ∗            ∗    ]
[ 0                −S    ∗            ∗            ∗    ]
[ 0                0     −(1 + α)Q    ∗            ∗    ] ≤ 0    (2.37)
[ 0                0     0            −(1 + α)G    ∗    ]
[ M1i              M2i   M3i          M4i          −M5i ]

where
M1i = [√πi1 Āi^T, . . . , √πiM Āi^T]^T,
M2i = [√πi1 Adi^T, . . . , √πiM Adi^T]^T,
M3i = [√πi1 Bwi^T, . . . , √πiM Bwi^T]^T,
M4i = [√πi1 Ci^T, . . . , √πiM Ci^T]^T,
M5i = diag{P1^{−1}, . . . , PM^{−1}}.

Performing an elementary matrix transformation on the above inequality (2.37), we have

[ −(1 + α)Pi + S   M1i^T    0            0            0    ]
[ M1i              −M5i     M3i          M4i          M2i  ]
[ 0                M3i^T    −(1 + α)Q    0            0    ] ≤ 0.    (2.38)
[ 0                M4i^T    0            −(1 + α)G    0    ]
[ 0                M2i^T    0            0            −S   ]

Performing a congruence transformation on the above condition with diag{Pi^{−1}, I, I, I, I}, using the Schur complement lemma and letting Xi = Pi^{−1}, Yi = Ki Xi, we get

[ −(1 + α)Xi + Xi S Xi   N1i^T         0            0          ]
[ N1i                    −M5i + N5i    M3i          M4i        ] ≤ 0    (2.39)
[ 0                      M3i^T         −(1 + α)Q    0          ]
[ 0                      M4i^T         0            −(1 + α)G  ]

where

N1i = [√πi1 (Ãi Xi + Bui Yi)^T, . . . , √πiM (Ãi Xi + Bui Yi)^T]^T,

       [ πi1 Adi H Adi^T           √(πi1 πi2) Adi H Adi^T   · · ·   √(πi1 πiM) Adi H Adi^T ]
N5i =  [ √(πi2 πi1) Adi H Adi^T    πi2 Adi H Adi^T          · · ·   √(πi2 πiM) Adi H Adi^T ]
       [ ⋮                         ⋮                        ⋱       ⋮                      ]
       [ √(πiM πi1) Adi H Adi^T    √(πiM πi2) Adi H Adi^T   · · ·   πiM Adi H Adi^T        ].

By using the Schur complement lemma on (2.39) and letting H = S^{−1}, we obtain the LMI (2.34). On the other hand, we consider

λmax(X̃i) = 1/λmin(P̃i),  and  X̃i = P̃i^{−1} = R^{1/2} Xi R^{1/2}.

Condition (2.28) follows that:

c1² / min_{i∈M} λmin(X̃i) + c1² h λmax(S) + d² λmax(Q) + c2² ρ_i² λmax(G) < c2² / [ max_{i∈M} λmax(X̃i) (1 + α)^N ].

It is easy to check that the above inequality is guaranteed by imposing the following conditions:

λmax(X̃i) < 1,  λ6 < λmin(X̃i),

c1²/λ6 + c1² h λ5 + d² λ3 + c2² ρ_i² λ4 < c2²/(1 + α)^N,

which are equivalent to inequalities (2.35)–(2.36). This completes the proof. □

2.5 Simulation Analysis

To verify the control effect of the designed finite-time controller for discrete-time
nonlinear MJSs, the parameters for system (2.17) with three operation modes are as
follows:
  
A1 = [0.88 −0.05; 0.40 −0.72],  Ad1 = [−0.2 0.1; 0.2 0.15],  Bu1 = [2; 1],
Bw1 = [0.4; 0.5],  C1 = [0; 0.1],
A2 = [2 0.24; 0.80 0.32],  Ad2 = [−0.6 0.4; 0.2 0.6],  Bu2 = [1; −1],
Bw2 = [0.2; 0.6],  C2 = [0; 0.3],
A3 = [−0.8 0.16; 0.80 0.64],  Ad3 = [−0.3 0.1; 0.2 0.5],  Bu3 = [1; 1],
Bw3 = [0.1; 0.3],  C3 = [0; 0.5],
f1(x(k)) = f2(x(k)) = f3(x(k)) = sin(x1(k)) cos(x2(k)).

Choose a single-hidden-layer neural network with two hidden neurons to approximate the nonlinear function f_i(x(k)). Select the parameters of the activation function associated with the hidden layer to be q_ih = 0.5, δ_ih = 1. For the chosen activation function, one has s_ih(0, φ_ih) = 0 and s_ih(1, φ_ih) = 1. The optimal weight matrices W*_ir, trained offline by the back-propagation algorithm, are obtained as follows:

W*_i1 = [−0.86017 −0.81881; −0.95025 0.96405],

W*_i2 = [−0.57752 −0.58342].

Then the approximation error bound of the neural networks can be obtained as ρ_i = 0.022.
According to the obtained W*_ir, the vertex matrices A_σi can be obtained as follows:

Ai1 = Ai2 = Ai3 = Ai4 = Ai5 = A_{1⊕[0,0]^T} = [0 0; 0 0] = A_{0⊕[m,n]^T},  (m, n ∈ {0, 1}),

Ai6 = A_{1⊕[1,0]^T} = [0 0; 0.49677 0.47288],

Ai7 = A_{1⊕[0,1]^T} = [0 0; 0.55439 −0.56245],

Ai8 = A_{1⊕[1,1]^T} = [0 0; 1.0512 −0.089567].
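The vertex values listed above can be reproduced directly from the quoted weights: the nonzero row of each A_{1⊕[m,n]^T} equals W*_i2 diag(m, n) W*_i1 with the hidden-layer slopes m, n fixed at their extreme values 0 or 1. A quick numpy check:

```python
import numpy as np

W1 = np.array([[-0.86017, -0.81881],
               [-0.95025,  0.96405]])   # W*_i1 quoted above
W2 = np.array([[-0.57752, -0.58342]])   # W*_i2 quoted above

def vertex_row(m, n):
    # Nonzero row of A_{1 ⊕ [m, n]^T}: output slope 1, hidden slopes m, n.
    return (W2 @ np.diag([m, n]) @ W1).ravel()

print(vertex_row(1, 0))   # ≈ [ 0.49677  0.47288]  -> Ai6
print(vertex_row(0, 1))   # ≈ [ 0.55439 -0.56245]  -> Ai7
print(vertex_row(1, 1))   # ≈ [ 1.0512  -0.08957]  -> Ai8
```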
Let c1 = 0.5, c2 = 2, N = 7, R = I, h = 0.5, α = 0.5, d² = 1, x0 = [−0.3 0.4]^T, and r0 = 1. The jump mode path is generated randomly and shown in Fig. 2.1. Using the MATLAB LMI toolbox to solve the inequalities (2.34)–(2.36) in Theorem 2.3, the following controller gains are obtained:

K1 = [−0.9304 −0.0683],
K2 = [−1.7231 0.3654],
K3 = [1.1486 −0.1588].

The state trajectories of the open-loop and closed-loop nonlinear MJSs (2.23) and (2.25) are shown in Figs. 2.2 and 2.3, respectively. From Fig. 2.2, it can easily be found that the free MJS (2.23) is not stochastic finite-time stable, because the state trajectories exceed the given bound c2 = 2. With the designed controller, the state trajectories are retained within the two ellipsoidal regions, which satisfactorily verifies that the closed-loop MJS (2.25) is stochastic finite-time stabilizable.
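A rough simulation of the closed-loop jump dynamics with the gains above can be sketched as follows. The transition probability matrix is not listed in this section, so a uniform one is assumed here purely for illustration, and the delay and disturbance terms are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

A = [np.array([[0.88, -0.05], [0.40, -0.72]]),
     np.array([[2.00,  0.24], [0.80,  0.32]]),
     np.array([[-0.80, 0.16], [0.80,  0.64]])]
Bu = [np.array([[2.0], [1.0]]),
      np.array([[1.0], [-1.0]]),
      np.array([[1.0], [1.0]])]
K = [np.array([[-0.9304, -0.0683]]),
     np.array([[-1.7231,  0.3654]]),
     np.array([[ 1.1486, -0.1588]])]

# Assumed uniform transition probabilities (not given in this section).
Pi = np.full((3, 3), 1.0 / 3.0)

x, mode, N = np.array([-0.3, 0.4]), 0, 7
traj = [x]
for k in range(N):
    u = K[mode] @ x                           # state feedback u(k) = K_i x(k)
    x = A[mode] @ x + (Bu[mode] @ u).ravel()  # delay/disturbance terms omitted
    traj.append(x)
    mode = rng.choice(3, p=Pi[mode])          # Markovian jump to the next mode
traj = np.array(traj)
```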

Fig. 2.1 Jump modes

Fig. 2.2 State trajectory of free MJS



Fig. 2.3 State trajectory with the designed controller

2.6 Conclusion

In this chapter, the stochastic finite-time stability and finite-time stabilization problems are investigated for a class of discrete-time linear MJSs. Based on the derived results, the finite-time stability and finite-time stabilization problems for discrete-time MJSs with nonlinearities, time-delays and norm-bounded exogenous disturbances are further addressed. Multi-layer neural networks are utilized to parameterize the nonlinearities. Despite the approximation errors of the neural networks, the time-delays, and the exogenous disturbances, the designed controller renders the closed-loop systems finite-time stabilized and finite-time bounded. In the next chapter, the results of finite-time controller design will be extended to discrete-time MJSs governed by deterministic switches.

References

1. Amato, F., Ariola, M., Dorato, P.: Finite-time control of linear systems subject to parametric
uncertainties and disturbances. Automatica 37(9), 1459–1463 (2001)
2. Moulay, E., Dambrine, M., Yeganefar, N., Perruquetti, W.: Finite-time stability and stabilization
of time-delay systems. Syst. Control Lett. 57(7), 561–566 (2008)
3. Amato, F., Ambrosino, R., Ariola, M., Cosentino, C.: Finite-time stability of linear time-varying
systems with jumps. Automatica 45(5), 1354–1358 (2009)
4. Luan, X.L., Liu, F., Shi, P.: Finite-time stabilization of stochastic systems with partially known
transition probabilities. J. Dyn. Syst. Measur. Control 133(1), 014504–014510 (2011)

5. Gao, X.B., Ren, H.R., Deng, F.Q., Zhou, Q.: Observer-based finite-time H∞ control for uncer-
tain discrete-time nonhomogeneous Markovian jump systems. J. Franklin Inst. 356(4), 1730–
1749 (2019)
6. He, Q.G., Xing, M.L., Gao, X.B., Deng, F.Q.: Robust finite-time H∞ synchronization for
uncertain discrete-time systems with nonhomogeneous Markovian jump: observer-based case.
Nonlinear Control 30(10), 3982–4002 (2020)
7. Ren, H.L., Zong, G.D., Karimi, H.R.: Asynchronous finite-time filtering of Markovian jump
nonlinear systems and its applications. IEEE Trans. Syst. Man Cybern. Syst. 51(3), 1725–1734
(2019)
8. Zhang, X., He, S.P., Stojanovic, V., Luan, X.L., Liu, F.: Finite-time asynchronous dissipative
filtering of conic-type nonlinear Markovian jump systems. Sci. China Inform. Sci. 64, 1–12
(2021)
9. Cheng, P., He, S.P., Cheng, J., Luan, X.L., Liu, F.: Asynchronous output feedback control for
a class of conic-type nonlinear hidden Markovian jump systems within a finite-time interval.
IEEE Trans. Syst. Man Cybern. Syst. (2020). https://doi.org/10.1109/TSMC.2020.2980312
10. Wang, J.M., Ma, S.P., Zhang, C.H.: Finite-time H∞ control for T-S fuzzy descriptor semi-
Markovian jump systems via static output feedback. Fuzzy Sets Syst. 15, 60–80 (2019)
11. Wang, J.M., Ma, S.P., Zhang, C.H., Fu, M.Y.: Finite-time H∞ filtering for nonlinear singular
systems with nonhomogeneous Markovian jumps. IEEE Trans. Cybern. 49(6), 2133–2143
(2019)
12. Song, X.N., Wang, M., Ahn, C.K., Song, S.: Finite-time H∞ asynchronous control for nonlinear
Markovian jump distributed parameter systems via quantized fuzzy output-feedback approach.
IEEE Trans. Cybern. 50(9), 4098–4109 (2020)
13. Luan, X.L., Liu, F., Shi, P.: Robust finite-time H∞ control for nonlinear jump systems via
neural networks. Circuit. Syst. Signal Process 29(3), 481–498 (2010)
14. Luan, X.L., Liu, F., Shi, P.: Neural-network-based finite-time H∞ control for extended Marko-
vian jump nonlinear systems. Int. J. Adapt. Control Signal Process 24(7), 554–567 (2010)
15. Wang, Y., Xie, L., De Souza, C.E.: Robust control of a class of uncertain nonlinear systems.
Syst. Control Lett. 19, 139–149 (1992)
16. Limanond, S., Si, J.: Neural-network-based control design: an LMI approach. IEEE Trans.
Neural Netw. 9(6), 1422–1429 (1998)
Chapter 3
Finite-Time Stability and Stabilization
for Switching Markovian Jump Systems

Abstract This chapter extends the results of finite-time controller design to discrete-time switching Markovian jump systems with time-delay. Considering the effect of the average dwell time on the finite-time performance, some results on stochastic finite-time boundedness and stochastic finite-time stabilization with a prescribed H∞ disturbance attenuation level are given, and the relationship among three kinds of time scales, namely the time-delay, the average dwell time and the finite-time interval, is derived by means of the average dwell time constraint condition.

3.1 Introduction

It is widely known that hybrid systems have been applied in diverse fields char-
acterized by the interconnection of continuous state evolution and discrete mode
switching. The stochastic Markovian jump systems (MJSs) and the switched sys-
tems, in which the jumping among different modes is the stochastic or deterministic
signal, are typical classes of hybrid systems [1–3]. Because of their broad application prospects, numerous achievements have been made in the stability analysis and controller synthesis of MJSs and switched systems, respectively [4–6]. In order to improve system performance, a deterministic switching signal can be imposed on MJSs. In other words, the MJS is governed by a hierarchical structure, where a top-level supervisor is responsible for selecting the appropriate feedback controller among several alternatives. This hierarchical system is named a switching MJS, and was first introduced in [7].
Some fundamental issues of both MJSs and switched systems have been investi-
gated and lots of results have been achieved, but fewer contributions have been made
for hybrid systems subject to both stochastic jumping and deterministic switching.
The mean square stability, the almost sure stability and the exponential l2 − l∞ stability of switching MJSs have been addressed in references [8, 9] for different classes of switching signals satisfying the average dwell time constraint. With the same average dwell time method, references [10, 11] extended the results to switching MJSs with uncertain transition probabilities or time-delays.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 39


X. Luan et al., Robust Control for Discrete-Time Markovian Jump Systems in the
Finite-Time Domain, Lecture Notes in Control and Information Sciences 492,
https://doi.org/10.1007/978-3-031-22182-8_3

To analyze the transient performance of discrete-time switching MJSs, the finite-


time stability and stabilization problems for a class of switching MJSs have been
investigated in this chapter. From the perspective of the modal transition probabilities
and modal dwell time, the stochastic finite-time boundedness and finite-time H∞
control problems have been discussed. Furthermore, if the state of the discrete-time
switching MJSs is unavailable, the observer-based finite-time control strategy with
H∞ performance for MJSs with deterministic switches has been addressed.

3.2 Preliminaries and Problem Formulation

Consider the discrete-time switching MJS with the following structure:




x(k + 1) = A(rk, σk)x(k) + Ad(rk, σk)x(k − h) + Bu(rk, σk)u(k) + Bw(rk, σk)w(k)
z(k) = C(rk, σk)x(k) + Cd(rk, σk)x(k − h) + Du(rk, σk)u(k) + Dw(rk, σk)w(k)    (3.1)
x(g) = ϕ(g), g ∈ {−h, . . . , 0}

where the state variable, the control input, and the exogenous disturbances are the
same as those defined in Chap. 2. z(k) ∈ R l is the control output of the system,
h denotes the delay time, σk is the deterministic switching signal taking values in
a finite set S = {1, 2, . . . , S}, rk is a discrete-time, discrete-state Markovian chain
taking values in a finite set M = {1, 2, . . . , M} with transition probabilities:

Pr{rk+1 = j | rk = i, σk = α} = π^α_ij    (3.2)

where π^α_ij is the transition probability from mode i to mode j under the switching signal σk = α, satisfying π^α_ij ≥ 0 and Σ_{j=1}^{M} π^α_ij = 1, ∀i, j ∈ M.
For the simplicity of the denotation, for each possible value of σk = α, α ∈ S,
rk = i, i ∈ M, the following equivalent substitution has been made:

A(rk ,σk ) = Aα,i , Ad (rk ,σk ) = Adα,i , Bu (rk ,σk ) = Bu α,i , Bw (rk ,σk ) = Bwα,i ,

C(rk ,σk ) = Cα,i , Cd (rk ,σk ) = Cdα,i , Du (rk ,σk ) = Du α,i , Dw (rk ,σk ) = Dwα,i .

For the system (3.1), the following state feedback controller is designed:

u(k) = K α,i x(k). (3.3)



Then the closed-loop discrete-time switching MJS is obtained



x(k + 1) = Āα,i x(k) + Adα,i x(k − h) + Bwα,i w(k)
z(k) = C̄α,i x(k) + Cdα,i x(k − h) + Dwα,i w(k)    (3.4)
x(g) = ϕ(g), g ∈ {−h, . . . , 0}

where
Āα,i = Aα,i + Bu α,i K α,i ,

C̄α,i = Cα,i + Du α,i K α,i .

Before moving further, we present the following definitions and lemmas that will
be necessary to derive the main results.

Definition 3.1 The free system (3.1) (setting u(k) = 0) is said to be stochastic FTB
concerning (c1 c2 N R d), where 0 < c1 < c2 , R > 0, if

x T (k1 )Rx(k1 ) ≤ c12 ⇒ x T (k2 )Rx(k2 ) < c22 , k1 ∈ {−h, . . . , 0}, k2 ∈ {1, 2 . . . , N }.
(3.5)

Definition 3.2 For the given scalars 0 < c1 < c2 , R > 0, γ > 0, the closed-loop
system (3.4) is said to be stochastic finite-time H∞ stabilizable concerning (c1 c2
N R d γ ), if the system (3.4) is stochastic finite-time stabilizable for the state
feedback controller (3.3) and under the zero-initial condition the output z(k) satisfies
 

E{ Σ_{k=0}^{N} z^T(k)z(k) } < γ² E{ Σ_{k=0}^{N} w^T(k)w(k) }.    (3.6)

Definition 3.3 For the switching signal σk and the sampling time k > k0, La(k0, k) is used to denote the number of switchings of σk during the finite interval [k0, k). If for given scalars L0 > 0 and τa > 0, it holds that La(k0, k) ≤ L0 + (k − k0)/τa, then τa and L0 are referred to as the average dwell time and the chatter bound, respectively. As is common in the existing references, L0 = 0 is chosen to simplify the controller design.
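Definition 3.3 can be checked numerically for a given switching sequence; the sequence and dwell-time values below are illustrative only:

```python
def switch_count(sigma, k0, k):
    # Number of switching instants of sigma on the interval [k0, k).
    return sum(1 for t in range(k0 + 1, k) if sigma[t] != sigma[t - 1])

def satisfies_adt(sigma, tau_a, L0=0.0):
    # La(k0, k) <= L0 + (k - k0)/tau_a must hold for every window [k0, k).
    n = len(sigma)
    return all(
        switch_count(sigma, k0, k) <= L0 + (k - k0) / tau_a
        for k0 in range(n) for k in range(k0 + 1, n + 1)
    )

sigma = [1, 1, 1, 2, 2, 2, 1, 1, 1]       # one switch every three samples
print(satisfies_adt(sigma, tau_a=2.0))    # True
print(satisfies_adt(sigma, tau_a=4.0))    # False: switches come too often
```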

Lemma 3.1 ([12]) For the symmetric positive-definite matrix M and the matrix N, the following condition holds:

−N M^{−1} N^T ≤ M − N^T − N.    (3.7)
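Lemma 3.1 follows from completing the square, since M − N^T − N + N M^{−1} N^T = (N − M) M^{−1} (N − M)^T ⪰ 0. A quick numerical sanity check with random data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)          # symmetric positive definite
N = rng.standard_normal((n, n))

# Lemma 3.1: -N M^{-1} N^T <= M - N^T - N, i.e. the gap matrix is PSD.
gap = (M - N.T - N) - (-N @ np.linalg.inv(M) @ N.T)
eigs = np.linalg.eigvalsh((gap + gap.T) / 2)
print(eigs.min() >= -1e-9)   # True
```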

3.3 Stochastic Finite-Time H∞ Control

In this section, sufficient conditions will be presented such that the free system
(3.1) is stochastic FTB and the closed-loop system (3.4) is stochastic finite-time H∞
stabilizable.
Proposition 3.1 For given scalars δ ≥ 1, h > 0, and μ > 1, the system (3.4) is stochastic finite-time stabilizable with regard to (c1 c2 N R d), if there are positive-definite matrices Pα,i > 0, Pβ,i > 0, Gα,i > 0, i ∈ M, α, β ∈ S and Q such that the following inequalities hold:

[ Āα,i^T P̄α,i Āα,i − μPα,i + Q   Āα,i^T P̄α,i Adα,i          Āα,i^T P̄α,i Bwα,i           ]
[ ∗                              −Q + Adα,i^T P̄α,i Adα,i    Adα,i^T P̄α,i Bwα,i          ] < 0    (3.8)
[ ∗                              ∗                          Bwα,i^T P̄α,i Bwα,i − Gα,i   ]

P̄α,i ≤ δ P̄β,i    (3.9)

Λ2 < Λ1    (3.10)

with the average dwell time satisfying

τa > N ln δ / (ln Λ1 − ln Λ2) = τa*    (3.11)

where

P̄α,i = Σ_{j=1}^{M} π^α_ij Pα,j,  P̃α,i = R^{−1/2} Pα,i R^{−1/2},  Q̃ = R^{−1/2} Q R^{−1/2},

Λ1 = c2² min_{i∈M,α∈S} λmin(P̃α,i),

Λ2 = μ^N [ max_{i∈M,α∈S} λmax(P̃α,i)c1² + λmax(Q̃)hc1² + max_{i∈M,α∈S} λmax(Gα,i)d² ].

Proof Choose the following Lyapunov function:

Vα,i(k) = x^T(k)Pα,i x(k) + Σ_{f=k−h}^{k−1} x^T(f)Qx(f).    (3.12)

Simple calculation shows that

E{Vα,j(k + 1)} − Vα,i(k)
= x^T(k + 1)P̄α,i x(k + 1) − x^T(k)Pα,i x(k) + x^T(k)Qx(k) − x^T(k − h)Qx(k − h)
= x^T(k)(Āα,i^T P̄α,i Āα,i − Pα,i + Q)x(k) + 2x^T(k)Āα,i^T P̄α,i Adα,i x(k − h)
+ 2x^T(k)Āα,i^T P̄α,i Bwα,i w(k) + x^T(k − h)(Adα,i^T P̄α,i Adα,i − Q)x(k − h)
+ 2x^T(k − h)Adα,i^T P̄α,i Bwα,i w(k) + w^T(k)Bwα,i^T P̄α,i Bwα,i w(k)
= ζ^T(k) Ξα,i ζ(k)    (3.13)

where
ζ^T(k) = [x^T(k)  x^T(k − h)  w^T(k)],

       [ Āα,i^T P̄α,i Āα,i − Pα,i + Q   ∗                          ∗                     ]
Ξα,i = [ Adα,i^T P̄α,i Āα,i             Adα,i^T P̄α,i Adα,i − Q    ∗                     ]
       [ Bwα,i^T P̄α,i Āα,i             Bwα,i^T P̄α,i Adα,i        Bwα,i^T P̄α,i Bwα,i   ].

Combining formulas (3.8) and (3.13) with μ > 1, one has

E{Vα,j(k + 1)} < μx^T(k)Pα,i x(k) + w^T(k)Gα,i w(k) + μ Σ_{f=k−h}^{k−1} x^T(f)Qx(f)
= μVα,i(k) + w^T(k)Gα,i w(k).    (3.14)

From the above inequality (3.14), one has

E{Vα,j(k + 1)} < μVα,i(k) + max_{i∈M,α∈S} λmax(Gα,i) w^T(k)w(k).    (3.15)

Let kl, kl−1, kl−2, . . . be the switching instants; then within the same mode, formula (3.15) gives

V(rkl, σk, k) < μV(rkl, σk−1, k − 1) + max_{i∈M,α∈S} λmax(Gα,i) w^T(k − 1)w(k − 1)
< μ^{k−kl} V(rkl, σkl, kl) + max_{i∈M,α∈S} λmax(Gα,i) Σ_{θ=kl}^{k−1} μ^{k−θ−1} w^T(θ)w(θ).    (3.16)

According to the inequalities (3.9) and (3.12), it yields

V(rkl, σkl, kl) = x^T(kl)P̄(rkl, σkl, kl)x(kl) + Σ_{f=kl−h}^{kl−1} x^T(f)Qx(f)
< δx^T(kl)P̄(rkl−1, σkl, kl)x(kl) + Σ_{f=kl−h}^{kl−1} x^T(f)Qx(f).

According to condition (3.15), for the different switching modes, one has

V(rkl−1, σkl, kl) = x^T(kl)P̄(rkl−1, kl)x(kl) + Σ_{f=kl−h}^{kl−1} x^T(f)Qx(f)
< μ^{kl−kl−1} V(rkl−1, σkl−1, kl−1) + max_{i∈M,α∈S} λmax(Gα,i) Σ_{θ=kl−1}^{kl−1} μ^{kl−θ−1} w^T(θ)w(θ).    (3.17)

The above two formulas lead to

V(rkl, σkl, kl) < δx^T(kl)P̄(rkl−1, σkl, kl)x(kl) + Σ_{f=kl−h}^{kl−1} x^T(f)Qx(f)
= δ[ V(rkl−1, σkl, kl) − Σ_{f=kl−h}^{kl−1} x^T(f)Qx(f) ] + Σ_{f=kl−h}^{kl−1} x^T(f)Qx(f)
= δV(rkl−1, σkl, kl) + (1 − δ) Σ_{f=kl−h}^{kl−1} x^T(f)Qx(f)
< δμ^{kl−kl−1} V(rkl−1, σkl−1, kl−1) + δ max_{i∈M,α∈S} λmax(Gα,i) Σ_{θ=kl−1}^{kl−1} μ^{kl−θ−1} w^T(θ)w(θ).    (3.18)

Substituting the inequality (3.18) into formula (3.16) with μ > 1, δ ≥ 1, it yields

V(rkl, σk, k)
< μ^{k−kl} V(rkl, σkl, kl) + max_{i∈M,α∈S} λmax(Gα,i) Σ_{θ=kl}^{k−1} μ^{k−θ−1} w^T(θ)w(θ)
< δμ^{k−kl−1} V(rkl−1, σkl−1, kl−1) + δ max_{i∈M,α∈S} λmax(Gα,i) Σ_{θ=kl−1}^{kl−1} μ^{k−θ−1} w^T(θ)w(θ) + max_{i∈M,α∈S} λmax(Gα,i) Σ_{θ=kl}^{k−1} μ^{k−θ−1} w^T(θ)w(θ)
< δ^{La} μ^{k−k0} V(rk0, σk0, k0) + max_{i∈M,α∈S} λmax(Gα,i) [ δ^{La} Σ_{θ=k0}^{k1−1} μ^{k−θ−1} w^T(θ)w(θ) + δ^{La−1} Σ_{θ=k1}^{k2−1} μ^{k−θ−1} w^T(θ)w(θ) + · · · + δ^0 Σ_{θ=kl}^{k−1} μ^{k−θ−1} w^T(θ)w(θ) ]
< δ^{(k−k0)/τa} μ^{k−k0} V(rk0, σk0, k0) + max_{i∈M,α∈S} λmax(Gα,i) δ^{(k−k0)/τa} Σ_{θ=k0}^{k} μ^{k−θ−1} w^T(θ)w(θ)
< δ^{N/τa} μ^N [ V(rk0, σk0, k0) + max_{i∈M,α∈S} λmax(Gα,i)d² ].    (3.19)

On the one hand,

V(rk0, σk0, k0) = x^T(k0)P̄(rk0, σk0, k0)x(k0) + Σ_{f=k0−h}^{k0−1} x^T(f)Qx(f)
< max_{i∈M,α∈S} λmax(P̃α,i) x^T(k0)Rx(k0) + λmax(Q̃) Σ_{f=k0−h}^{k0−1} x^T(f)Rx(f)
< max_{i∈M,α∈S} λmax(P̃α,i)c1² + λmax(Q̃)hc1².    (3.20)

On the other hand,

Vα,i(k) > min_{i∈M,α∈S} λmin(P̃α,i) x^T(k)Rx(k) + λmin(Q̃) Σ_{f=k−h}^{k−1} x^T(f)Rx(f)
> min_{i∈M,α∈S} λmin(P̃α,i) x^T(k)Rx(k).    (3.21)

Therefore,

x^T(k)Rx(k) < δ^{N/τa} μ^N [ max_{i∈M,α∈S} λmax(P̃α,i)c1² + λmax(Q̃)hc1² + max_{i∈M,α∈S} λmax(Gα,i)d² ] / min_{i∈M,α∈S} λmin(P̃α,i).    (3.22)

Define

Λ = Λ1/Λ2 = c2² min_{i∈M,α∈S} λmin(P̃α,i) / { μ^N [ max_{i∈M,α∈S} λmax(P̃α,i)c1² + λmax(Q̃)hc1² + max_{i∈M,α∈S} λmax(Gα,i)d² ] }.

From the conditions (3.10) and (3.11), we can get

Λ > 1,  δ^{N/τa} < Λ,

which means

x^T(k)Rx(k) < c2².

This completes the proof. □

Remark 3.1 It should be noticed that the derived conditions for discrete-time switch-
ing MJS (3.1) have relationships with the stochastic Markovian chain rk and the
deterministic switching signal σk . The influence of jumping and switching signals
on Lyapunov function recurrence from instant kl to kl−1 is shown in condition (3.18).
According to the recursive expression (3.18), the essential Lyapunov function rela-
tionship between k and k0 is acquired, which constitutes the foundation to guarantee
the stochastic finite-time stabilization of the closed-loop system (3.4).

Remark 3.2 From the derived condition on the average dwell time (3.11), it can be seen that a larger time-delay leads to a longer minimum average dwell time τa*. Therefore, to compensate for the destabilizing effect of the time-delay on the system (3.1), the switching among different modes should be slower; in other words, the system should stay in the same mode a little longer on average.
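The trade-off described in Remark 3.2 can be made explicit by evaluating τa* from (3.11) for increasing delays. The spectral bounds used below are illustrative numbers, not values from the text:

```python
import math

def tau_a_star(N, delta, mu, c1, c2, d, h,
               lam_min_P, lam_max_P, lam_max_Q, lam_max_G):
    # Minimum average dwell time of Eq. (3.11):
    # tau_a* = N ln(delta) / (ln Lambda_1 - ln Lambda_2).
    lam1 = c2**2 * lam_min_P
    lam2 = mu**N * (lam_max_P * c1**2 + lam_max_Q * h * c1**2 + lam_max_G * d**2)
    assert lam2 < lam1, "condition (3.10) violated"
    return N * math.log(delta) / (math.log(lam1) - math.log(lam2))

# Illustrative numbers: tau_a* grows as the delay h grows.
short = tau_a_star(7, 1.2, 1.05, 0.5, 2.0, 1.0, 1, 1.0, 1.5, 0.5, 0.2)
long_ = tau_a_star(7, 1.2, 1.05, 0.5, 2.0, 1.0, 3, 1.0, 1.5, 0.5, 0.2)
print(short < long_)   # True: larger h -> longer minimum dwell time
```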
Based on the derived stochastic FTB conditions in Proposition 3.1, the next goal
is to acquire sufficient conditions for stochastic finite-time H∞ controller design.

Proposition 3.2 For given scalars δ ≥ 1 and μ > 1, the closed-loop system (3.4) is stochastic finite-time stabilizable with H∞ disturbance rejection performance concerning (c1 c2 N R d γ), if there are positive-definite matrices Pα,i > 0, Pβ,i > 0 and Q such that the following inequalities hold:

       [ Φ11 + C̄α,i^T C̄α,i   μĀα,i^T P̄α,i Adα,i + C̄α,i^T Cdα,i    μĀα,i^T P̄α,i Bwα,i + C̄α,i^T Dwα,i   ]
Θα,i = [ ∗                    Φ12 + Cdα,i^T Cdα,i                   μAdα,i^T P̄α,i Bwα,i + Cdα,i^T Dwα,i ] < 0    (3.23)
       [ ∗                    ∗                                     Φ13 + Dwα,i^T Dwα,i                  ]

P̄α,i ≤ δ P̄β,i    (3.24)

μ^N γ²d² < c2² min_{i∈M,α∈S} λmin(P̃α,i)    (3.25)

with the average dwell time satisfying

τa > N ln δ / ( ln[c2² min_{i∈M,α∈S} λmin(P̃α,i)] − ln μ^N γ²d² ) = τa*    (3.26)

where
Φ11 = μĀα,i^T P̄α,i Āα,i − μPα,i + μQ,
Φ12 = −μQ + μAdα,i^T P̄α,i Adα,i,
Φ13 = −γ² + μBwα,i^T P̄α,i Bwα,i.

Proof Inequality (3.23) can be rewritten as

[ Φ11   μĀα,i^T P̄α,i Adα,i   μĀα,i^T P̄α,i Bwα,i ]     [ C̄α,i^T  ]
[ ∗     Φ12                  μAdα,i^T P̄α,i Bwα,i ]  +  [ Cdα,i^T ] [C̄α,i  Cdα,i  Dwα,i] < 0.
[ ∗     ∗                    Φ13                  ]     [ Dwα,i^T ]

Note that

[ C̄α,i^T ; Cdα,i^T ; Dwα,i^T ] [C̄α,i  Cdα,i  Dwα,i] ≥ 0.

Then we can get

[ Φ11   μĀα,i^T P̄α,i Adα,i   μĀα,i^T P̄α,i Bwα,i ]
[ ∗     Φ12                  μAdα,i^T P̄α,i Bwα,i ] < 0.
[ ∗     ∗                    Φ13                  ]

The above inequality implies

μE{Vα,j(k + 1)} < μVα,i(k) + γ²w^T(k)w(k).

Based on the condition μ ≥ 1, it has

E{Vα,j(k + 1)} < μVα,i(k) + γ²w^T(k)w(k).    (3.27)

Following the same lines as the proof of Proposition 3.1, and under the zero initial condition V(rk0, σk0, k0) = 0, c1 = 0, we have

x^T(k)Rx(k) < δ^{N/τa} μ^N γ²d² / min_{i∈M,α∈S} λmin(P̃α,i).    (3.28)

Combined with conditions (3.25) and (3.26), the stochastic finite-time stabilization with H∞ disturbance rejection performance of the closed-loop system (3.4) can be guaranteed. Define

J = E{ Σ_{k=0}^{N} [ z^T(k)z(k) − γ²w^T(k)w(k) ] }.

Under the zero initial condition V(rk0, σk0, k0) = 0, we have

J ≤ E{ Σ_{k=0}^{N} [ z^T(k)z(k) − γ²w^T(k)w(k) + Vα,j(k + 1) − Vα,i(k) ] }
≤ Σ_{k=0}^{N} [ z^T(k)z(k) − γ²w^T(k)w(k) + μE{Vα,j(k + 1)} − μVα,i(k) ]
= Σ_{k=0}^{N} ζ^T(k)Θα,i ζ(k).

Condition (3.23) leads to Θα,i < 0, which implies

E{ Σ_{k=0}^{N} z^T(k)z(k) } < γ² E{ Σ_{k=0}^{N} w^T(k)w(k) }.

Consequently, the proof is finished. □

Our next target is to find the feasible solutions of the results in Proposition 3.2 by
transforming them into linear matrix inequalities (LMIs).
Theorem 3.1 The closed-loop system (3.4) is stochastic finite-time H∞ stabilizable via the state feedback controller (3.3) concerning (c1 c2 N R d γ) with δ ≥ 1 and μ > 1, if there are positive-definite matrices Xα,i > 0, matrices Yα,i, α ∈ S, i ∈ M, and H such that

[ −μXα,i   0       C̃α,i^T                            L̃1,i^T                               Xα,i     ]
[ ∗        −γ²I    Dwα,i^T                            L3,i^T                                0        ]
[ ∗        ∗       −I + μ^{−1}Cdα,i H Cdα,i^T         μ^{−1}Cdα,i H L2,i^T                  0        ] < 0    (3.29)
[ ∗        ∗       ∗                                  −μ^{−1}X̄α + μ^{−1}L2,i H L2,i^T      0        ]
[ ∗        ∗       ∗                                  ∗                                     −μ^{−1}H ]

[ μ Σ_{j=1}^{M} π^β_ij Xβ,j − 2μXα,i   √π^α_i1 Xα,i   · · ·   √π^α_iM Xα,i ]
[ ∗                                    −Xα,1          · · ·   0            ] ≤ 0    (3.30)
[ ∗                                    ∗              ⋱       ⋮            ]
[ ∗                                    ∗              ∗       −Xα,M        ]

c2²μ^{−N} − λγ²d² > 0    (3.31)

R^{1/2} Xα,i R^{1/2} < λI    (3.32)

with the average dwell time satisfying

τa > N ln δ / ( ln c2²μ^{−N} − ln λγ²d² ) = τa*    (3.33)

where
C̃α,i^T = (Cα,i Xα,i + Duα,i Yα,i)^T,
L̃1,i^T = [ √π^α_i1 Ãα,i^T   √π^α_i2 Ãα,i^T   · · ·   √π^α_iM Ãα,i^T ],
Ãα,i^T = (Aα,i Xα,i + Buα,i Yα,i)^T,
X̄α = diag{Xα,1, . . . , Xα,M}.

A finite-time H∞ stabilizing controller satisfying the γ-disturbance rejection level can be constructed as Kα,i = Yα,i Xα,i^{−1}.

Proof By the Schur complement lemma, inequality (3.23) is equivalent to

[ −μPα,i + μQ   0      0       L1,i^T            C̄α,i^T  ]
[ ∗             −μQ    0       L2,i^T            Cdα,i^T  ]
[ ∗             ∗      −γ²I    L3,i^T            Dwα,i^T  ] < 0,
[ ∗             ∗      ∗       −μ^{−1}Pα^{−1}    0        ]
[ ∗             ∗      ∗       ∗                 −I       ]

where
L1,i^T = [ √π^α_i1 Āα,i^T   √π^α_i2 Āα,i^T   · · ·   √π^α_iM Āα,i^T ],
L2,i^T = [ √π^α_i1 Adα,i^T   √π^α_i2 Adα,i^T   · · ·   √π^α_iM Adα,i^T ],
L3,i^T = [ √π^α_i1 Bwα,i^T   √π^α_i2 Bwα,i^T   · · ·   √π^α_iM Bwα,i^T ],
Pα = diag{Pα,1, Pα,2, . . . , Pα,M}.



Applying a matrix permutation to the above condition leads to the following inequality:

[ −μPα,i + μQ   0       C̄α,i^T    L1,i^T            0      ]
[ ∗             −γ²I    Dwα,i^T    L3,i^T            0      ]
[ ∗             ∗       −I         0                 Cdα,i  ] < 0.
[ ∗             ∗       ∗          −μ^{−1}Pα^{−1}    L2,i   ]
[ ∗             ∗       ∗          ∗                 −μQ    ]

Using the Schur complement lemma on the above inequality again, one has

[ −μPα,i   0       C̃̄α,i^T                                L1,i^T                                      I             ]
[ ∗        −γ²I    Dwα,i^T                                 L3,i^T                                      0             ]
[ ∗        ∗       −I + μ^{−1}Cdα,i Q^{−1}Cdα,i^T          μ^{−1}Cdα,i Q^{−1}L2,i^T                    0             ] < 0.
[ ∗        ∗       ∗                                       −μ^{−1}Pα^{−1} + μ^{−1}L2,i Q^{−1}L2,i^T    0             ]
[ ∗        ∗       ∗                                       ∗                                           −μ^{−1}Q^{−1} ]

Performing a congruence transformation on the above inequality with diag{Pα,i^{−1}, I, I, I, I} and letting Xα,i = Pα,i^{−1}, Yα,i = Kα,i Xα,i, H = Q^{−1}, the LMI (3.29) can be derived.
By using the Schur complement lemma, inequality (3.24) can be rewritten as

[ −μ Σ_{j=1}^{M} π^β_ij Pβ,j   √π^α_i1 I     · · ·   √π^α_iM I   ]
[ ∗                            −Pα,1^{−1}    · · ·   0           ] ≤ 0.    (3.34)
[ ∗                            ∗             ⋱       ⋮           ]
[ ∗                            ∗             ∗       −Pα,M^{−1}  ]

Performing a congruence transformation on condition (3.34) with diag{Pα,i^{−1}, I, . . . , I} leads to the following inequality:

[ −μ Σ_{j=1}^{M} π^β_ij Xα,i Pβ,j Xα,i   √π^α_i1 Xα,i   · · ·   √π^α_iM Xα,i ]
[ ∗                                      −Xα,1          · · ·   0            ] ≤ 0.    (3.35)
[ ∗                                      ∗              ⋱       ⋮            ]
[ ∗                                      ∗              ∗       −Xα,M        ]

Based on Lemma 3.1, one has

−Xα,i Pβ,j Xα,i ≤ Xβ,j − 2Xα,i.

Then

−μ Σ_{j=1}^{M} π^β_ij Xα,i Pβ,j Xα,i ≤ μ Σ_{j=1}^{M} π^β_ij Xβ,j − 2μXα,i.    (3.36)

Formulas (3.35) and (3.36) lead to linear matrix inequality (3.30) in Theorem 3.1.
On the other hand, consider the following conditions:

$$ \min_{i\in M,\alpha\in S} \lambda_{\min}(\tilde P_{\alpha,i}) = \frac{1}{\max_{i\in M,\alpha\in S} \lambda_{\max}(\tilde X_{\alpha,i})}, $$

and

$$ \tilde X_{\alpha,i} = \tilde P_{\alpha,i}^{-1} = R^{1/2} X_{\alpha,i} R^{1/2}. $$

Inequality (3.25) follows that:

$$ \mu^{N}\gamma^{2}d^{2} \max_{i\in M,\alpha\in S} \lambda_{\max}(\tilde X_{\alpha,i}) < c_{2}^{2}. \quad (3.37) $$

Making the assumption $\max_{i\in M,\alpha\in S}\lambda_{\max}(\tilde X_{\alpha,i}) < \lambda$ implies $R^{1/2}X_{\alpha,i}R^{1/2} < \lambda I$. Then, inequality (3.37) is equivalent to LMI (3.31) in Theorem 3.1. Meanwhile, condition (3.28) yields

$$ x^{T}(k)Rx(k) < \delta^{N/\tau_a}\mu^{N}\gamma^{2}d^{2}\max_{i\in M,\alpha\in S}\lambda_{\max}(\tilde X_{\alpha,i}) < \lambda\delta^{N/\tau_a}\mu^{N}\gamma^{2}d^{2}. $$

Conditions (3.31) and (3.33) guarantee that

$$ \frac{c_{2}^{2}\mu^{-N}}{\lambda\gamma^{2}d^{2}} > 1, \qquad \delta^{N/\tau_a} < \frac{c_{2}^{2}\mu^{-N}}{\lambda\gamma^{2}d^{2}}. $$

Therefore, $x^{T}(k)Rx(k) < \lambda\delta^{N/\tau_a}\mu^{N}\gamma^{2}d^{2} < c_{2}^{2}$. Thus the proof is completed.
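The congruence transformations used throughout this proof rely on the fact that, for a nonsingular $T$, $M \prec 0$ if and only if $T^{T}MT \prec 0$. A small numerical illustration of this fact (a sketch with arbitrary test matrices; the triangular $T$ with nonzero diagonal is chosen only to guarantee nonsingularity):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# a negative-definite matrix M and a guaranteed-nonsingular (triangular) T
A = rng.standard_normal((n, n))
M = -(A @ A.T + np.eye(n))                     # M < 0
T = np.triu(rng.standard_normal((n, n)), 1) + np.diag([1.0, 2.0, 3.0])

# the congruence T^T M T keeps every eigenvalue strictly negative
eigs = np.linalg.eigvalsh(T.T @ M @ T)
print(eigs.max() < 0)   # True: definiteness is preserved
```

This is why multiplying an LMI on both sides by $\mathrm{diag}\{P_{\alpha,i}^{-1}, I, \ldots, I\}$ neither creates nor destroys feasibility.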

3.4 Observer-Based Finite-Time H∞ Control

In this subsection, the following observer-based feedback controller will be designed


to guarantee the stochastic finite-time H∞ disturbance rejection performance for the
closed-loop system (3.4):


$$
\begin{cases}
\bar x_{k+1} = A_{\alpha,i}\bar x_k + A_{d\alpha,i}\bar x_{k-d} + B_{u\alpha,i}u_k + H_{\alpha,i}(y_k - \bar y_k)\\
\bar y_k = E_{\alpha,i}\bar x_k + E_{d\alpha,i}\bar x_{k-d}\\
u_k = K_{\alpha,i}\bar x_k\\
\bar x_f = \eta_f,\; f\in\{-d,\ldots,0\},\; r(0)=r_0
\end{cases} \quad (3.38)
$$

where $\bar x_k$ and $\bar y_k$ are the state and output variables to be estimated, and $K_{\alpha,i}$ and $H_{\alpha,i}$ are the controller and observer gains to be designed simultaneously.
Letting $e_k = x_k - \bar x_k$ and $\tilde x_k = \begin{bmatrix} x_k^T & e_k^T \end{bmatrix}^T$, the corresponding error closed-loop system follows:

$$
\begin{cases}
\tilde x_{k+1} = \tilde A_{\alpha,i}\tilde x_k + \tilde A_{d\alpha,i}\tilde x_{k-h} + \tilde B_{w\alpha,i}w_k\\
z_k = \tilde C_{\alpha,i}\tilde x_k + \tilde C_{d\alpha,i}\tilde x_{k-h} + D_{w\alpha,i}w_k\\
\tilde x_f = \begin{bmatrix}\varphi_f^T & (\varphi_f-\eta_f)^T\end{bmatrix}^T,\; f\in\{-h,\ldots,0\},\; r(0)=r_0
\end{cases} \quad (3.39)
$$

where

$$
\tilde A_{\alpha,i} = \begin{bmatrix} A_{\alpha,i}+B_{u\alpha,i}K_{\alpha,i} & -B_{u\alpha,i}K_{\alpha,i}\\ 0 & A_{\alpha,i}-H_{\alpha,i}E_{\alpha,i}\end{bmatrix},\quad
\tilde A_{d\alpha,i} = \begin{bmatrix} A_{d\alpha,i} & 0\\ 0 & A_{d\alpha,i}-H_{\alpha,i}E_{d\alpha,i}\end{bmatrix},
$$
$$
\tilde B_{w\alpha,i} = \begin{bmatrix} B_{w\alpha,i}\\ B_{w\alpha,i}\end{bmatrix},\quad
\tilde C_{\alpha,i} = \begin{bmatrix} C_{\alpha,i}+D_{u\alpha,i}K_{\alpha,i} & -D_{u\alpha,i}K_{\alpha,i}\end{bmatrix},\quad
\tilde C_{d\alpha,i} = \begin{bmatrix} C_{d\alpha,i} & 0\end{bmatrix}.
$$

The control problem dealt with in this subsection is to find suitable controller and observer gains $K_{\alpha,i}$ and $H_{\alpha,i}$ such that the error closed-loop system (3.39) is stochastic finite-time stabilizable with $H_\infty$ performance concerning $(c_1, c_2, N, R, d, \gamma)$.
The following Proposition 3.3 gives sufficient conditions to check whether the error closed-loop system (3.39) is finite-time stabilizable with $H_\infty$ performance, and it will be utilized in the solution of the controller and observer gains.
Proposition 3.3 For given scalars $\delta \ge 1$, $h > 0$, and $\mu > 1$, the error closed-loop system (3.39) is finite-time stabilizable with $H_\infty$ performance concerning $(c_1, c_2, N, R, d, \gamma)$, if there are positive-definite matrices $P_{\alpha,i} > 0$, $P_{\beta,i} > 0$ and $Q$ such that the subsequent inequalities hold:

$$
\begin{bmatrix}
\Theta_{11}+\tilde C_{\alpha,i}^{T}\tilde C_{\alpha,i} & \mu\tilde A_{\alpha,i}^{T}\bar P_{\alpha,i}\tilde A_{d\alpha,i}+\tilde C_{\alpha,i}^{T}\tilde C_{d\alpha,i} & \mu\tilde A_{\alpha,i}^{T}\bar P_{\alpha,i}\tilde B_{w\alpha,i}+\tilde C_{\alpha,i}^{T}D_{w\alpha,i}\\
* & \Theta_{12}+\tilde C_{d\alpha,i}^{T}\tilde C_{d\alpha,i} & \mu\tilde A_{d\alpha,i}^{T}\bar P_{\alpha,i}\tilde B_{w\alpha,i}+\tilde C_{d\alpha,i}^{T}D_{w\alpha,i}\\
* & * & \Theta_{13}+D_{w\alpha,i}^{T}D_{w\alpha,i}
\end{bmatrix} < 0 \quad (3.40)
$$

$$ \bar P_{\alpha,i} \le \delta \bar P_{\beta,i} \quad (3.41) $$

$$ \Theta_1 < \Theta_2 \quad (3.42) $$

with the average dwell time satisfying

$$ \tau_a > \frac{N\ln\delta}{\ln\Theta_2 - \ln\Theta_1} = \tau_a^{*} \quad (3.43) $$

where

$$ \bar P_{\alpha,i} = \sum_{j=1}^{M}\pi_{ij}^{\alpha}P_{\alpha,j},\quad \tilde P_{\alpha,i} = R^{-1/2}P_{\alpha,i}R^{-1/2}, $$
$$ \Theta_{11} = \mu\tilde A_{\alpha,i}^{T}\bar P_{\alpha,i}\tilde A_{\alpha,i} - \mu \tilde P_{\alpha,i} + \mu\tilde Q, \quad \Theta_{12} = -\mu\tilde Q + \mu\tilde A_{d\alpha,i}^{T}\bar P_{\alpha,i}\tilde A_{d\alpha,i}, $$
$$ \Theta_{13} = -\gamma^{2}I + \mu\tilde B_{w\alpha,i}^{T}\bar P_{\alpha,i}\tilde B_{w\alpha,i}, $$
$$ \Theta_1 = \mu^{N}\Big(\gamma^{2}d^{2} + \max_{i\in M,\alpha\in S}\lambda_{\max}(\tilde P_{\alpha,i})c_1^{2} + \lambda_{\max}(\tilde Q)h c_1^{2}\Big), \quad \Theta_2 = c_2^{2}\min_{i\in M,\alpha\in S}\lambda_{\min}(\tilde P_{\alpha,i}). $$

Proof Choose the following Lyapunov function:

$$ V_{\alpha,i}(k) = x^{T}(k)P_{\alpha,i}x(k) + \sum_{f=k-h}^{k-1} x^{T}(f)Qx(f). \quad (3.44) $$

Inequality (3.40) can be rewritten as

$$
\begin{bmatrix}
\Theta_{11} & \mu\tilde A_{\alpha,i}^{T}\bar P_{\alpha,i}\tilde A_{d\alpha,i} & \mu\tilde A_{\alpha,i}^{T}\bar P_{\alpha,i}\tilde B_{w\alpha,i}\\
* & \Theta_{12} & \mu\tilde A_{d\alpha,i}^{T}\bar P_{\alpha,i}\tilde B_{w\alpha,i}\\
* & * & \Theta_{13}
\end{bmatrix}
+ \begin{bmatrix}\tilde C_{\alpha,i}^{T}\\ \tilde C_{d\alpha,i}^{T}\\ D_{w\alpha,i}^{T}\end{bmatrix}
\begin{bmatrix}\tilde C_{\alpha,i} & \tilde C_{d\alpha,i} & D_{w\alpha,i}\end{bmatrix} < 0.
$$

Because

$$ \begin{bmatrix}\tilde C_{\alpha,i}^{T}\\ \tilde C_{d\alpha,i}^{T}\\ D_{w\alpha,i}^{T}\end{bmatrix}\begin{bmatrix}\tilde C_{\alpha,i} & \tilde C_{d\alpha,i} & D_{w\alpha,i}\end{bmatrix} \ge 0, $$

it is easy to get

$$
\Xi_{\alpha,i} = \begin{bmatrix}
\Theta_{11} & \mu\tilde A_{\alpha,i}^{T}\bar P_{\alpha,i}\tilde A_{d\alpha,i} & \mu\tilde A_{\alpha,i}^{T}\bar P_{\alpha,i}\tilde B_{w\alpha,i}\\
* & \Theta_{12} & \mu\tilde A_{d\alpha,i}^{T}\bar P_{\alpha,i}\tilde B_{w\alpha,i}\\
* & * & \Theta_{13}
\end{bmatrix} < 0 \quad (3.45)
$$

which implies

$$ \mu E\{V_{\alpha,j}(k+1)\} < \mu V_{\alpha,i}(k) + \gamma^{2}w^{T}(k)w(k). $$

With the fact that $\mu \ge 1$, the following inequality holds:

$$ E\{V_{\alpha,j}(k+1)\} < \mu V_{\alpha,i}(k) + \gamma^{2}w^{T}(k)w(k). \quad (3.46) $$

Assuming that $k_l, k_{l-1}, k_{l-2}, \ldots$ are the switching instants, then in the same mode without switching, formula (3.46) leads to

$$ V(r_{k_l}, \sigma_k, k) < \mu V(r_{k_l}, \sigma_{k-1}, k-1) + \gamma^{2}w^{T}(k-1)w(k-1) < \mu^{k-k_l}V(r_{k_l}, \sigma_{k_l}, k_l) + \sum_{\theta=k_l}^{k-1}\mu^{k-\theta-1}\gamma^{2}w^{T}(\theta)w(\theta). \quad (3.47) $$

Combining Eqs. (3.41) and (3.44), during the different switching modes, it yields

$$ V(r_{k_l}, \sigma_{k_l}, k_l) = \tilde x^{T}(k_l)\bar P(r_{k_l}, \sigma_{k_l}, k_l)\tilde x(k_l) + \sum_{f=k_l-h}^{k_l-1}x^{T}(f)\tilde Qx(f) < \delta x^{T}(k_l)\bar P(r_{k_{l-1}}, \sigma_{k_l}, k_l)x(k_l) + \sum_{f=k_l-h}^{k_l-1}x^{T}(f)\tilde Qx(f). $$

Similarly to Eq. (3.47), we have

$$ V(r_{k_{l-1}}, \sigma_{k_l}, k_l) < \mu V(r_{k_{l-1}}, \sigma_{k_l-1}, k_l-1) + \gamma^{2}w^{T}(k_l-1)w(k_l-1) < \mu^{k_l-k_{l-1}}V(r_{k_{l-1}}, \sigma_{k_{l-1}}, k_{l-1}) + \sum_{\theta=k_{l-1}}^{k_l-1}\mu^{k_l-\theta-1}\gamma^{2}w^{T}(\theta)w(\theta). $$

The above two equations lead to

$$
\begin{aligned}
V(r_{k_l}, \sigma_{k_l}, k_l) &< \delta x^{T}(k_l)\bar P(r_{k_{l-1}}, \sigma_{k_l}, k_l)x(k_l) + \sum_{f=k_l-h}^{k_l-1}x^{T}(f)\tilde Qx(f)\\
&= \delta\Big[V(r_{k_{l-1}}, \sigma_{k_l}, k_l) - \sum_{f=k_l-h}^{k_l-1}x^{T}(f)\tilde Qx(f)\Big] + \sum_{f=k_l-h}^{k_l-1}x^{T}(f)\tilde Qx(f)\\
&= \delta V(r_{k_{l-1}}, \sigma_{k_l}, k_l) + (1-\delta)\sum_{f=k_l-h}^{k_l-1}x^{T}(f)\tilde Qx(f)\\
&< \delta\mu^{k_l-k_{l-1}}V(r_{k_{l-1}}, \sigma_{k_{l-1}}, k_{l-1}) + \delta\sum_{\theta=k_{l-1}}^{k_l-1}\mu^{k_l-\theta-1}\gamma^{2}w^{T}(\theta)w(\theta). \quad (3.48)
\end{aligned}
$$

Noticing that $\mu > 1$, $\delta \ge 1$ and substituting Eq. (3.48) into Eq. (3.47), it has

$$
\begin{aligned}
E\{V(r_{k_l},\sigma_k,k)\} &< \mu^{k-k_l}V(r_{k_l},\sigma_{k_l},k_l) + \sum_{\theta=k_l}^{k-1}\mu^{k-\theta-1}\gamma^{2}w^{T}(\theta)w(\theta)\\
&< \delta\mu^{k-k_{l-1}}V(r_{k_{l-1}},\sigma_{k_{l-1}},k_{l-1}) + \delta\sum_{\theta=k_{l-1}}^{k_l-1}\mu^{k-\theta-1}\gamma^{2}w^{T}(\theta)w(\theta) + \sum_{\theta=k_l}^{k-1}\mu^{k-\theta-1}\gamma^{2}w^{T}(\theta)w(\theta)\\
&< \delta^{L_a}\mu^{k-k_0}V(r_{k_0},\sigma_{k_0},k_0) + \delta^{L_a}\sum_{\theta=k_0}^{k_1-1}\mu^{k-\theta-1}\gamma^{2}w^{T}(\theta)w(\theta)\\
&\quad + \delta^{L_a-1}\sum_{\theta=k_1}^{k_2-1}\mu^{k-\theta-1}\gamma^{2}w^{T}(\theta)w(\theta) + \cdots + \delta^{0}\sum_{\theta=k_l}^{k-1}\mu^{k-\theta-1}\gamma^{2}w^{T}(\theta)w(\theta)\\
&< \delta^{(k-k_0)/\tau_a}\mu^{k-k_0}V(r_{k_0},\sigma_{k_0},k_0) + \delta^{(k-k_0)/\tau_a}\sum_{\theta=k_0}^{k}\mu^{k-\theta-1}\gamma^{2}w^{T}(\theta)w(\theta)\\
&< \delta^{N/\tau_a}\mu^{N}\big(V(r_{k_0},\sigma_{k_0},k_0) + \gamma^{2}d^{2}\big). \quad (3.49)
\end{aligned}
$$

Note that

$$
\begin{aligned}
V(r_{k_0},\sigma_{k_0},k_0) &= x^{T}(k_0)\bar P(r_{k_0},\sigma_{k_0},k_0)x(k_0) + \sum_{f=k_0-h}^{k_0-1}x^{T}(f)Qx(f)\\
&\le \max_{i\in M,\alpha\in S}\lambda_{\max}(\tilde P_{\alpha,i})x^{T}(k_0)Rx(k_0) + \lambda_{\max}(\tilde Q)\sum_{f=k_0-h}^{k_0-1}x^{T}(f)Rx(f)\\
&\le \max_{i\in M,\alpha\in S}\lambda_{\max}(\tilde P_{\alpha,i})c_1^{2} + \lambda_{\max}(\tilde Q)h c_1^{2}.
\end{aligned}
$$

On the other hand,

$$ E\{V_{\alpha,i}(k)\} > \min_{i\in M,\alpha\in S}\lambda_{\min}(\tilde P_{\alpha,i})x^{T}(k)Rx(k) + \lambda_{\min}(\tilde Q)\sum_{f=k-h}^{k-1}x^{T}(f)Rx(f) > \min_{i\in M,\alpha\in S}\lambda_{\min}(\tilde P_{\alpha,i})x^{T}(k)Rx(k). \quad (3.50) $$

From Eqs. (3.49) and (3.50), we can obtain

$$ x^{T}(k)Rx(k) < \frac{\delta^{N/\tau_a}\mu^{N}\Big(\gamma^{2}d^{2} + \max\limits_{i\in M,\alpha\in S}\lambda_{\max}(\tilde P_{\alpha,i})c_1^{2} + \lambda_{\max}(\tilde Q)h c_1^{2}\Big)}{\min\limits_{i\in M,\alpha\in S}\lambda_{\min}(\tilde P_{\alpha,i})}. $$

Combining the above condition with Eqs. (3.42) and (3.43), it follows that

$$ x^{T}(k)Rx(k) < c_2^{2}, $$

which implies the stochastic finite-time stabilization of system (3.39). Define

$$ J = E\Big\{\sum_{k=0}^{N}\big(z^{T}(k)z(k) - \gamma^{2}w^{T}(k)w(k)\big)\Big\}. $$

Under the zero initial condition, one has

$$
\begin{aligned}
J &\le E\Big\{\sum_{k=0}^{N}\big(z^{T}(k)z(k) - \gamma^{2}w^{T}(k)w(k) + \Delta V_{\alpha,i}(k)\big)\Big\}\\
&\le \sum_{k=0}^{N}\big(z^{T}(k)z(k) - \gamma^{2}w^{T}(k)w(k) + \mu E\{V_{\alpha,j}(k+1)\} - \mu V_{\alpha,i}(k)\big)\\
&= \sum_{k=0}^{N}\zeta^{T}(k)\Omega_{\alpha,i}\zeta(k),
\end{aligned}
$$

where $\zeta(k) = \begin{bmatrix}\tilde x^{T}(k) & \tilde x^{T}(k-h) & w^{T}(k)\end{bmatrix}^{T}$ and $\Omega_{\alpha,i}$ denotes the matrix on the left-hand side of (3.40). According to Eq. (3.40), it follows that:

$$ E\Big\{\sum_{k=0}^{N} z^{T}(k)z(k)\Big\} < \gamma^{2}E\Big\{\sum_{k=0}^{N} w^{T}(k)w(k)\Big\}. $$

Thus the proof is completed.

The following theorem gives the solutions of the controller and observer gains in terms of LMIs.
Theorem 3.2 The error closed-loop system (3.39) is stochastic finite-time stabilizable with $H_\infty$ performance via the observer-based controller (3.38) concerning given $(c_1, c_2, N, R, d, \gamma)$ with $\delta \ge 1$ and $\mu > 1$, if there are positive-definite matrices $\tilde P_{\alpha,i} > 0$, $X_{\alpha,i} > 0$, matrices $Y_{\alpha,i}$ and $Q$ such that
$$
\begin{bmatrix}
-\mu P_{\alpha,i}+\mu Q & 0 & 0 & L_{1,i}^{T} & \tilde C_{\alpha,i}^{T}\\
* & -\mu Q & 0 & L_{2,i}^{T} & \tilde C_{d\alpha,i}^{T}\\
* & * & -\gamma^{2}I & L_{3,i}^{T} & D_{w\alpha,i}^{T}\\
* & * & * & -\mu^{-1}\mathcal{X}_{\alpha} & 0\\
* & * & * & * & -I
\end{bmatrix} < 0 \quad (3.51)
$$

$$ P_{\alpha,i}X_{\alpha,i} = I \quad (3.52) $$


$$
\begin{bmatrix}
\delta\sum_{j=1}^{M}\pi_{ij}^{\beta}X_{\beta,j}-2\delta X_{\alpha,i} & \sqrt{\pi_{i1}^{\alpha}}X_{\alpha,i} & \cdots & \sqrt{\pi_{iM}^{\alpha}}X_{\alpha,i}\\
* & -X_{\alpha,1} & \cdots & 0\\
* & * & \ddots & \vdots\\
* & * & * & -X_{\alpha,M}
\end{bmatrix} \le 0 \quad (3.53)
$$

$$ \mu^{N}\gamma^{2}d^{2} < c_2^{2}\lambda \quad (3.54) $$

$$ R^{1/2}X_{\alpha,i}R^{1/2} < \lambda I \quad (3.55) $$

with the average dwell time satisfying

$$ \tau_a > \frac{N\ln\delta}{\ln(c_2^{2}\lambda) - \ln(\mu^{N}\gamma^{2}d^{2})} = \tau_a^{*} \quad (3.56) $$

where

$$ L_{1,i}^{T} = \big[\sqrt{\pi_{i1}^{\alpha}}\tilde A_{\alpha,i}^{T}\;\ \sqrt{\pi_{i2}^{\alpha}}\tilde A_{\alpha,i}^{T}\;\ \ldots\;\ \sqrt{\pi_{iM}^{\alpha}}\tilde A_{\alpha,i}^{T}\big], $$
$$ L_{2,i}^{T} = \big[\sqrt{\pi_{i1}^{\alpha}}\tilde A_{d\alpha,i}^{T}\;\ \ldots\;\ \sqrt{\pi_{iM}^{\alpha}}\tilde A_{d\alpha,i}^{T}\big],\quad L_{3,i}^{T} = \big[\sqrt{\pi_{i1}^{\alpha}}\tilde B_{w\alpha,i}^{T}\;\ \ldots\;\ \sqrt{\pi_{iM}^{\alpha}}\tilde B_{w\alpha,i}^{T}\big], $$
$$ \mathcal{X}_{\alpha} = \mathrm{diag}\{X_{\alpha,1}\;\ X_{\alpha,2}\;\ \ldots\;\ X_{\alpha,M}\}, $$
$$ \tilde A_{\alpha,i} = \Phi_1 + \Phi_2 K_{\alpha,i}\Phi_3 + \Phi_4 H_{\alpha,i}\Phi_5,\quad \tilde B_{w\alpha,i} = \Phi_6 B_{w\alpha,i},\quad \tilde A_{d\alpha,i} = \Phi_7 + \Phi_8 H_{\alpha,i}\Phi_9, $$
$$ \tilde C_{\alpha,i} = \big[C_{\alpha,i}+D_{u\alpha,i}K_{\alpha,i}\;\ 0\big] = \Phi_{10} + D_{u\alpha,i}K_{\alpha,i}\Phi_{11},\quad \tilde C_{d\alpha,i} = \big[C_{d\alpha,i}\;\ 0_{m\times n}\big] = \Phi_{12}, $$
$$ \Phi_1 = \begin{bmatrix}A_{\alpha,i} & 0_{n\times n}\\ 0_{n\times n} & A_{\alpha,i}\end{bmatrix},\; \Phi_2 = \begin{bmatrix}B_{u\alpha,i}\\ 0_{n\times m}\end{bmatrix},\; \Phi_3 = \big[I_{n\times n}\;\ -I_{n\times n}\big],\; \Phi_4 = \begin{bmatrix}0_{n\times n}\\ -I_{n\times n}\end{bmatrix}, $$
$$ \Phi_5 = \big[0_{q\times n}\;\ E_{\alpha,i}\big],\; \Phi_6 = \begin{bmatrix}I_{n\times n}\\ I_{n\times n}\end{bmatrix},\; \Phi_7 = \begin{bmatrix}A_{d\alpha,i} & 0_{n\times n}\\ 0_{n\times n} & A_{d\alpha,i}\end{bmatrix},\; \Phi_8 = \begin{bmatrix}0_{n\times n}\\ -I_{n\times n}\end{bmatrix}, $$
$$ \Phi_9 = \big[0_{q\times n}\;\ E_{d\alpha,i}\big],\; \Phi_{10} = \big[C_{\alpha,i}\;\ 0_{m\times n}\big],\; \Phi_{11} = \big[I_{n\times n}\;\ 0_{n\times n}\big],\; \Phi_{12} = \big[C_{d\alpha,i}\;\ 0_{m\times n}\big]. $$

Proof By the Schur complement lemma, inequality (3.41) is equivalent to

$$
\begin{bmatrix}
-\delta\sum_{j=1}^{M}\pi_{ij}^{\beta}P_{\beta,j} & \sqrt{\pi_{i1}^{\alpha}} & \cdots & \sqrt{\pi_{iM}^{\alpha}}\\
* & -P_{\alpha,1}^{-1} & \cdots & 0\\
* & * & \ddots & \vdots\\
* & * & * & -P_{\alpha,M}^{-1}
\end{bmatrix} \le 0.
$$

Implementing a congruence to the above inequality by $\mathrm{diag}\{P_{\alpha,i}^{-1}, I, \ldots, I\}$, it leads to

$$
\begin{bmatrix}
-\delta\sum_{j=1}^{M}\pi_{ij}^{\beta}X_{\alpha,i}P_{\beta,j}X_{\alpha,i} & \sqrt{\pi_{i1}^{\alpha}}X_{\alpha,i} & \cdots & \sqrt{\pi_{iM}^{\alpha}}X_{\alpha,i}\\
* & -X_{\alpha,1} & \cdots & 0\\
* & * & \ddots & \vdots\\
* & * & * & -X_{\alpha,M}
\end{bmatrix} \le 0. \quad (3.57)
$$

Based on Lemma 3.1, it follows that:

$$ -X_{\alpha,i}P_{\beta,j}X_{\alpha,i} \le X_{\beta,j} - 2X_{\alpha,i}. $$

Then

$$ -\delta\sum_{j=1}^{M}\pi_{ij}^{\beta}X_{\alpha,i}P_{\beta,j}X_{\alpha,i} \le \delta\sum_{j=1}^{M}\pi_{ij}^{\beta}X_{\beta,j} - 2\delta X_{\alpha,i}. $$

Inequality (3.57) can be rewritten as

$$
\begin{bmatrix}
\delta\sum_{j=1}^{M}\pi_{ij}^{\beta}X_{\beta,j}-2\delta X_{\alpha,i} & \sqrt{\pi_{i1}^{\alpha}}X_{\alpha,i} & \cdots & \sqrt{\pi_{iM}^{\alpha}}X_{\alpha,i}\\
* & -X_{\alpha,1} & \cdots & 0\\
* & * & \ddots & \vdots\\
* & * & * & -X_{\alpha,M}
\end{bmatrix} \le 0,
$$

which is linear matrix inequality (3.53) in Theorem 3.2.


Similarly, with the Schur complement lemma, inequality (3.40) can be rewritten as

$$
\begin{bmatrix}
-\mu P_{\alpha,i}+\mu Q & 0 & 0 & L_{1,i}^{T} & \tilde C_{\alpha,i}^{T}\\
* & -\mu Q & 0 & L_{2,i}^{T} & \tilde C_{d\alpha,i}^{T}\\
* & * & -\gamma^{2}I & L_{3,i}^{T} & D_{w\alpha,i}^{T}\\
* & * & * & -\mu^{-1}\mathcal{P}_{\alpha}^{-1} & 0\\
* & * & * & * & -I
\end{bmatrix} < 0 \quad (3.58)
$$

where $\mathcal{P}_{\alpha} = \mathrm{diag}\{P_{\alpha,1}\;\ P_{\alpha,2}\;\ \ldots\;\ P_{\alpha,M}\}$.

With some simple mathematical arrangements, we have

$$
\tilde A_{\alpha,i} = \begin{bmatrix}A_{\alpha,i}+B_{u\alpha,i}K_{\alpha,i} & -B_{u\alpha,i}K_{\alpha,i}\\ 0 & A_{\alpha,i}-H_{\alpha,i}E_{\alpha,i}\end{bmatrix}
= \begin{bmatrix}A_{\alpha,i} & 0_{n\times n}\\ 0_{n\times n} & A_{\alpha,i}\end{bmatrix}
+ \begin{bmatrix}B_{u\alpha,i}\\ 0_{n\times m}\end{bmatrix}K_{\alpha,i}\big[I_{n\times n}\;\ -I_{n\times n}\big]
+ \begin{bmatrix}0_{n\times n}\\ -I_{n\times n}\end{bmatrix}H_{\alpha,i}\big[0_{q\times n}\;\ E_{\alpha,i}\big]
$$
$$ = \Phi_1 + \Phi_2 K_{\alpha,i}\Phi_3 + \Phi_4 H_{\alpha,i}\Phi_5 \quad (3.59) $$
$$ \tilde B_{w\alpha,i} = \begin{bmatrix}B_{w\alpha,i}\\ B_{w\alpha,i}\end{bmatrix} = \begin{bmatrix}I_{n\times n}\\ I_{n\times n}\end{bmatrix}B_{w\alpha,i} = \Phi_6 B_{w\alpha,i} \quad (3.60) $$
$$ \tilde A_{d\alpha,i} = \begin{bmatrix}A_{d\alpha,i} & 0\\ 0 & A_{d\alpha,i}-H_{\alpha,i}E_{d\alpha,i}\end{bmatrix} = \begin{bmatrix}A_{d\alpha,i} & 0_{n\times n}\\ 0_{n\times n} & A_{d\alpha,i}\end{bmatrix} + \begin{bmatrix}0_{n\times n}\\ -I_{n\times n}\end{bmatrix}H_{\alpha,i}\big[0_{q\times n}\;\ E_{d\alpha,i}\big] = \Phi_7 + \Phi_8 H_{\alpha,i}\Phi_9 \quad (3.61) $$
$$ \tilde C_{\alpha,i} = \big[C_{\alpha,i}+D_{u\alpha,i}K_{\alpha,i}\;\ 0\big] = \big[C_{\alpha,i}\;\ 0_{m\times n}\big] + D_{u\alpha,i}K_{\alpha,i}\big[I_{n\times n}\;\ 0_{n\times n}\big] = \Phi_{10} + D_{u\alpha,i}K_{\alpha,i}\Phi_{11} \quad (3.62) $$
$$ \tilde C_{d\alpha,i} = \big[C_{d\alpha,i}\;\ 0_{m\times n}\big] = \Phi_{12}. \quad (3.63) $$

Substituting Eqs. (3.59)–(3.63) into Eq. (3.58) and denoting $X_{\alpha,i} = P_{\alpha,i}^{-1}$, the LMIs (3.51) and (3.52) can be obtained in Theorem 3.2.
On the other hand, consider the following conditions:

$$ \min_{i\in M,\alpha\in S}\lambda_{\min}(\tilde P_{\alpha,i}) = \frac{1}{\max_{i\in M,\alpha\in S}\lambda_{\max}(\tilde X_{\alpha,i})}, $$

and

$$ \tilde X_{\alpha,i} = \tilde P_{\alpha,i}^{-1} = R^{1/2}X_{\alpha,i}R^{1/2}. $$

Assuming $\max_{i\in M,\alpha\in S}\lambda_{\max}(\tilde X_{\alpha,i}) < \lambda$, which means $\min_{i\in M,\alpha\in S}\lambda_{\min}(\tilde P_{\alpha,i}) > 1/\lambda$, then we can get condition (3.55).
Moreover, defining $c_1 = 0$, conditions (3.54) and (3.56) are equivalent to conditions (3.42) and (3.43) in Proposition 3.3, and we can also get

$$ x^{T}(k)Rx(k) < c_2^{2}. $$

Thus the proof is completed.
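The Schur complement steps used repeatedly in the two proofs above can also be checked numerically: for symmetric blocks with $C \prec 0$, the block matrix $\begin{bmatrix}A & B\\ B^{T} & C\end{bmatrix} \prec 0$ if and only if $A - BC^{-1}B^{T} \prec 0$. A minimal sketch with arbitrary test matrices (the specific sizes and scalings are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

A0 = rng.standard_normal((n, n))
C = -(A0 @ A0.T + np.eye(n))           # C < 0
B = 0.1 * rng.standard_normal((n, n))  # small off-diagonal coupling block

M = np.block([[-np.eye(n), B], [B.T, C]])        # candidate block matrix
schur = -np.eye(n) - B @ np.linalg.inv(C) @ B.T  # A - B C^{-1} B^T with A = -I

both_negative = (np.linalg.eigvalsh(M).max() < 0
                 and np.linalg.eigvalsh(schur).max() < 0)
print(both_negative)   # True: M < 0 together with its Schur complement
```

The equivalence is what lets nonlinear terms such as $\mu^{-1}L_{2,i}Q^{-1}L_{2,i}^{T}$ be pulled out into extra block rows and columns of an LMI.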



Remark 3.3 It should be noted that the derived conditions in Theorem 3.2 are not strict linear matrix inequalities because of the coupling between different matrix variables. Therefore, the non-convex feasibility problem in Theorem 3.2 should be transformed into the subsequent optimization problem including the LMI conditions:

$$
\begin{aligned}
&\text{Minimize } \mathrm{trace}\big(P_{\alpha,i}X_{\alpha,i}\big)\\
&\text{subject to (3.51), (3.53)–(3.55)},\ P_{\alpha,i} > 0,\ X_{\alpha,i} > 0\\
&\text{and } \begin{bmatrix}P_{\alpha,i} & I\\ I & X_{\alpha,i}\end{bmatrix} > 0. \quad (3.64)
\end{aligned}
$$

Then, for given $(c_1, c_2, N, R, d, \gamma)$ and scalars $\delta \ge 1$ and $\mu \ge 1$, the matrices $K_{\alpha,i}$ and $H_{\alpha,i}$ can be solved with the following algorithm [13]:

Algorithm 3.1
(1) Determine an initial feasible solution $(P_{\alpha,i}^{0}, X_{\alpha,i}^{0}, Q^{0})$ satisfying conditions (3.51), (3.53)–(3.55) and (3.64). Let $k = 0$.
(2) Find the solution $(P_{\alpha,i}^{k+1}, X_{\alpha,i}^{k+1}, Q^{k+1})$ of the following linear matrix inequality optimization problem:
$$ \text{Minimize } \mathrm{trace}\big(P_{\alpha,i}^{k}X_{\alpha,i} + X_{\alpha,i}^{k}P_{\alpha,i}\big) \quad \text{subject to (3.51), (3.53)–(3.55) and (3.64)}. $$
(3) Substitute the acquired matrices $(P_{\alpha,i}^{k+1}, X_{\alpha,i}^{k+1}, Q^{k+1})$ into Eqs. (3.51) and (3.55). If condition (3.64) is guaranteed with
$$ \big|\mathrm{trace}(P_{\alpha,i}X_{\alpha,i}) - n\big| < \zeta $$
for some sufficiently small scalar $\zeta > 0$, take the feasible solution $(P_{\alpha,i}, X_{\alpha,i}, Q) = (P_{\alpha,i}^{k+1}, X_{\alpha,i}^{k+1}, Q^{k+1})$ and stop.
(4) If $k > N$, give up and stop. Otherwise, set $k = k + 1$ and go to step (2).

3.5 Simulation Analysis

Two examples will be given in this subsection to illustrate the efficacy of the proposed
finite-time H∞ controller configuration approach for stochastic MJSs supervised by
the deterministic switching. The first example shows that for the finite-time unstable
system, the closed-loop system is finite-time stabilizable with the designed controller.
The second example is adapted from a typical economic system to demonstrate the
practical applicability of the theoretical results.

Example 3.1 Consider the discrete-time switching MJS (3.1) with $M = 2$, $S = 3$, and the following parameters:

MJS 1:
$$ A_{1,1} = \begin{bmatrix}1 & -0.4\\ 2 & 0.81\end{bmatrix},\; A_{1,2} = \begin{bmatrix}0 & -0.26\\ 0.9 & 1.13\end{bmatrix},\; A_{1,3} = \begin{bmatrix}0.2 & -1.1\\ 0.2 & 0.4\end{bmatrix}, $$
$$ A_{d1,1} = \begin{bmatrix}-0.2 & 0.1\\ 0.2 & 0.15\end{bmatrix},\; A_{d1,2} = \begin{bmatrix}-0.5 & 0\\ 0.3 & -0.5\end{bmatrix},\; A_{d1,3} = \begin{bmatrix}-0.3 & 0.2\\ 0.4 & 0.5\end{bmatrix}, $$
$$ B_{u1,1} = [1\;\ 1]^{T},\; B_{u1,2} = [1\;\ 1]^{T},\; B_{u1,3} = [2\;\ -1]^{T}, $$
$$ B_{w1,1} = [-0.4\;\ 0.3]^{T},\; B_{w1,2} = [0.2\;\ 0.26]^{T},\; B_{w1,3} = [0.5\;\ -0.3]^{T}, $$
$$ C_{1,1} = [0.5\;\ 0.4],\; C_{1,2} = [0.1\;\ 0.3],\; C_{1,3} = [0.4\;\ 0.3], $$
$$ C_{d1,1} = [0.1\;\ -0.2],\; C_{d1,2} = [-0.3\;\ 0.6],\; C_{d1,3} = [0.07\;\ 0.4], $$
$$ D_{u1,1} = 0.4,\; D_{u1,2} = 0.5,\; D_{u1,3} = 0.6,\; D_{w1,1} = 0.2,\; D_{w1,2} = 0.3,\; D_{w1,3} = 1.1. $$

The transition probability matrix is assumed to be known in advance as follows:

$$ \Pi_1 = \begin{bmatrix}0.3 & 0.6 & 0.1\\ 0.2 & 0.5 & 0.3\\ 0.2 & 0.2 & 0.6\end{bmatrix}. $$

MJS 2:
$$ A_{2,1} = \begin{bmatrix}1 & -0.05\\ 0.4 & -0.72\end{bmatrix},\; A_{2,2} = \begin{bmatrix}0.8 & 0.8\\ 0.6 & 1\end{bmatrix},\; A_{2,3} = \begin{bmatrix}-0.3 & 0.6\\ 0.4 & 0.34\end{bmatrix}, $$
$$ A_{d2,1} = \begin{bmatrix}0.2 & 0\\ 0 & 0.5\end{bmatrix},\; A_{d2,2} = \begin{bmatrix}0.8 & -0.24\\ -0.7 & -0.32\end{bmatrix},\; A_{d2,3} = \begin{bmatrix}0.6 & 0.4\\ 0.2 & -0.3\end{bmatrix}, $$
$$ B_{u2,1} = B_{u1,1},\; B_{u2,2} = B_{u1,2},\; B_{u2,3} = B_{u1,3},\quad B_{w2,1} = B_{w1,1},\; B_{w2,2} = B_{w1,2},\; B_{w2,3} = B_{w1,3}, $$
$$ C_{2,1} = [0.2\;\ 0.1],\; C_{2,2} = [0.3\;\ 0.4],\; C_{2,3} = [-0.1\;\ 0.2], $$
$$ C_{d2,1} = [0.03\;\ -0.05],\; C_{d2,2} = [0.1\;\ 0.2],\; C_{d2,3} = [-0.3\;\ 0.5], $$
$$ D_{u2,1} = D_{u1,1},\; D_{u2,2} = D_{u1,2},\; D_{u2,3} = D_{u1,3}, $$



Fig. 3.1 Jumping modes and switching signals

Dw2,1 = Dw1,1 , Dw2,2 = Dw1,2 , Dw2,3 = Dw1,3 .

The transition probability matrix is assumed as follows:

$$ \Pi_2 = \begin{bmatrix}0.5 & 0.2 & 0.3\\ 0.7 & 0.1 & 0.2\\ 0.2 & 0.6 & 0.2\end{bmatrix}. $$

Letting $c_2^{2} = 2$, $R = I_2$, $h = 1$, $N = 10$, $d^{2} = 0.5$, $\mu = 1.01$, $\gamma = 2.3$ and $\delta = 1.8$, and solving the LMIs in Theorem 3.1 with the MATLAB toolbox, the subsequent state feedback controller gains can be calculated as follows:

$$ K_{1,1} = [-1.5722\;\ -0.2230],\; K_{1,2} = [-0.6270\;\ -0.7086],\; K_{1,3} = [-0.0939\;\ 0.5256], $$
$$ K_{2,1} = [-0.6244\;\ 0.4674],\; K_{2,2} = [-0.7615\;\ 0.8487],\; K_{2,3} = [0.1251\;\ -0.3600], $$

with $\lambda = 0.1998$. The minimum average dwell time can be solved as $\tau_a^{*} = 3.0541$, so we take the average dwell time $\tau_a = 3.3333$.
To show the efficacy of the calculated controller, we present the state trajectory of
the free system (3.1) and the closed-loop system (3.4) with the controller in the form
 T
of (3.3). The initial state, mode and disturbance signals are set as x0 = 0.2 0.3 ,
r0 = 1 and w(k) = 0.6e−k , respectively. Figure 3.1 shows the stochastic jumping
modes and the switching signals. The state trajectories of the free and controlled
systems are displayed in Figs. 3.2 and 3.3. It can be observed that the state trajectory
of the free system surpasses the given bound c22 . Hence, the original free system is
not stochastic FTB. However, with the designed controller (3.3), the state trajectory
of the closed-loop system is restricted within the desired bound during the given time
interval.
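The jumping modes in Fig. 3.1 can be reproduced by sampling the Markovian chain from the two transition probability matrices of this example while a deterministic switching signal with a fixed dwell time alternates between them. A minimal simulation sketch (the dwell-time pattern, horizon and seed are illustrative assumptions, not the exact signal used for the figure):

```python
import numpy as np

# transition probability matrices of the two supervised models (Example 3.1)
P1 = np.array([[0.3, 0.6, 0.1], [0.2, 0.5, 0.3], [0.2, 0.2, 0.6]])
P2 = np.array([[0.5, 0.2, 0.3], [0.7, 0.1, 0.2], [0.2, 0.6, 0.2]])

rng = np.random.default_rng(3)
tau_a, N = 4, 40000      # dwell time (steps) and horizon; illustrative values
mode = 0                 # jumping mode r_k, starting from r_0 = 1 (index 0)
counts = np.zeros((2, 3, 3))

for k in range(N):
    sigma = (k // tau_a) % 2            # deterministic switching signal
    P = (P1, P2)[sigma]
    nxt = rng.choice(3, p=P[mode])      # sample the next jumping mode
    counts[sigma, mode, nxt] += 1
    mode = nxt

# empirical transition frequencies recover the active matrix in each phase
f1 = counts[0] / counts[0].sum(axis=1, keepdims=True)
f2 = counts[1] / counts[1].sum(axis=1, keepdims=True)
print(np.abs(f1 - P1).max() < 0.05 and np.abs(f2 - P2).max() < 0.05)
```

Conditioning the transition counts on the switching signal separates the two stochastic regimes, which is exactly the two-level (supervisor plus Markovian chain) structure of the switching MJS.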

Fig. 3.2 Trajectory of the free system

Fig. 3.3 Trajectory of the closed-loop system

Example 3.2 The second example considers an economic system adapted from [14]. There are three operation modes representing different financial situations: normal, boom and slump. The Markovian chain governs the stochastic transitions among the three modes. Owing to variations in the domestic and international economic environment, macroeconomic regulation by the government is necessary. Government intervention leads to a change in the economic model, which can be viewed as a top-level supervisor. The detailed parameters of the economic system are as follows:

MJS 1:
$$ A_{1,1} = \begin{bmatrix}0 & 1\\ -2.6 & 3.3\end{bmatrix},\; A_{1,2} = \begin{bmatrix}0 & 1\\ -4.4 & 4.6\end{bmatrix},\; A_{1,3} = \begin{bmatrix}0 & 1\\ 5.4 & -5.3\end{bmatrix}, $$
$$ B_{u1,1} = B_{u1,2} = B_{u1,3} = [0\;\ 1]^{T}, $$
$$ B_{w1,1} = [0.3\;\ 0.24]^{T},\; B_{w1,2} = [-0.15\;\ -0.3]^{T},\; B_{w1,3} = [0.3\;\ 0.45]^{T}, $$
$$ C_{1,1} = \begin{bmatrix}1.5477 & -1.0976\\ -1.0976 & 1.9145\\ 0 & 0\end{bmatrix},\; C_{1,2} = \begin{bmatrix}3.1212 & -0.5082\\ -0.5082 & 2.7824\\ 0 & 0\end{bmatrix},\; C_{1,3} = \begin{bmatrix}1.8385 & -1.2728\\ -1.2728 & 1.6971\\ 0 & 0\end{bmatrix}, $$
$$ D_{u1,1} = [0\;\ 0\;\ 1.6125]^{T},\; D_{u1,2} = [0\;\ 0\;\ 1.0794]^{T},\; D_{u1,3} = [0\;\ 0\;\ 1.0540]^{T}, $$
$$ D_{w1,1} = [0.18\;\ 0.3\;\ 0.36]^{T},\; D_{w1,2} = [-0.27\;\ 0.3\;\ 0.18]^{T},\; D_{w1,3} = [0.3\;\ 0.12\;\ 0.3]^{T}. $$

The transition probabilities among the three modes are given as follows:

$$ \Pi_1 = \begin{bmatrix}0.55 & 0.23 & 0.22\\ 0.36 & 0.35 & 0.29\\ 0.32 & 0.16 & 0.52\end{bmatrix}. $$

MJS 2:
$$ A_{2,1} = \begin{bmatrix}0 & 1\\ -2.4 & 3.1\end{bmatrix},\; A_{2,2} = \begin{bmatrix}0 & 1\\ -4.2 & 4.4\end{bmatrix},\; A_{2,3} = \begin{bmatrix}0 & 1\\ 5.2 & -5.1\end{bmatrix}, $$
$$ B_{u2,1} = B_{u1,1},\; B_{u2,2} = B_{u1,2},\; B_{u2,3} = B_{u1,3}. $$

The other parameters of the system are the same as those of MJS 1, and the transition probability matrix is

$$ \Pi_2 = \begin{bmatrix}0.79 & 0.11 & 0.1\\ 0.27 & 0.53 & 0.2\\ 0.23 & 0.07 & 0.7\end{bmatrix}. $$

Denoting $c_2^{2} = 2$, $R = I_2$, $h = 0$, $N = 20$, $d^{2} = 0.5$, $\mu = 1.02$, $\gamma = 2.1$ and $\delta = 1.5$, and solving the inequalities in Theorem 3.1, the controller gains can be calculated as follows:

$$ K_{1,1} = [2.2920\;\ -2.5385],\; K_{1,2} = [4.4520\;\ -4.1158],\; K_{1,3} = [-5.5034\;\ 5.6540], $$
$$ K_{2,1} = [2.2190\;\ -2.4395],\; K_{2,2} = [4.1318\;\ -3.8865],\; K_{2,3} = [-5.1032\;\ 5.3619], $$

Fig. 3.4 Jumping modes and switching signals

Fig. 3.5 State trajectory of the free system

with $\lambda = 0.1498$ and $\tau_a^{*} = 3.8653$. Here we choose the average dwell time $\tau_a = 4$. The initial state, mode and external disturbance are taken as $x_0 = [0\;\ 0]^{T}$, $r_0 = 1$ and $w(k) = 0.5e^{-k}$, respectively. The following figures show the jumping modes and switching signals and the state trajectories of the free and closed-loop economic system (Figs. 3.4, 3.5 and 3.6). From Fig. 3.6 we can see that the economic situation is kept within the desired bound with the designed controller.

Fig. 3.6 State trajectory of the closed-loop system

3.6 Conclusion

The finite-time boundedness, finite-time H∞ stabilization, and observer-based finite-time H∞ control for a class of stochastic discrete-time jumping systems governed by deterministic switching signals are investigated in this chapter. With the help of the average dwell time method and by allowing the stochastic Lyapunov energy function to increase at switching instants, the state feedback and observer-based finite-time H∞ controllers are designed such that the corresponding closed-loop system is finite-time stabilizable with an H∞ decay rate under average constraints on the dwell time between switching instants. The next chapter will extend the finite-time controller design to discrete-time MJSs with non-homogeneous transition probabilities.

References

1. Feng, X., Loparo, K.A., Ji, Y., Chizeck, H.J.: Stochastic stability properties of jump linear
systems. IEEE Trans. Autom. Control 37, 38–53 (1992)
2. Shi, P., Boukas, E.K., Agarwal, R.: Kalman filtering for continuous-time uncertain systems
with Markovian jumping parameters. IEEE Trans. Autom. Control 44(8), 1592–1597 (1999)
3. Boukas, E.K.: Stochastic Switching Systems: Analysis and Design. Birkhauser Publishing,
Berlin (2005)
4. Zhai, G.S., Hu, B., Yasuda, K., Michel, A.N.: Stability analysis of switched systems with stable
and unstable subsystems: an average dwell time approach. Int. J. Syst. Sci. 32(8), 1055–1061
(2001)
5. Shi, P., Xia, Y., Liu, G., Rees, D.: On designing of sliding mode control for stochastic jump
systems. IEEE Trans. Autom. Control 51(1), 97–103 (2006)
6. Luan, X.L., Zhao, S., Liu, F.: H∞ control for discrete-time Markovian jump systems with
uncertain transition probabilities. IEEE Trans. Autom. Control 58(6), 1566–1572 (2013)

7. Bolzern, P., Colaneri, P., Nicolao, G.D.: Markovian jump linear systems with switching tran-
sition rates: mean square stability with dwell-time. Automatica 46, 1081–1088 (2010)
8. Hou, L.L., Zong, G.D., Zheng, W.X.: Exponential l2 -l∞ control for discrete-time switching
Markovian jump linear systems. Circ. Syst. Signal Process 32(6), 2745–2759 (2013)
9. Bolzern, P., Colaneri, P., Nicolao, G.D.: Almost sure stability of Markovian jump linear systems
with deterministic switching. IEEE Trans. Autom. Control 58(1), 209–213 (2013)
10. Luan, X.L., Zhao, C.Z., Liu, F.: Finite-time H∞ control with average dwell-time constraint for
time-delay Markovian jump systems governed by deterministic switches. IET Control Theor.
Appl. 8(11), 968–977 (2014)
11. Luan, X.L., Zhao, C.Z., Liu, F.: Finite-time stabilization of switching Markovian jump systems
with uncertain transition rates. Circ. Syst. Signal Process 34(12), 3741–3756 (2015)
12. Yin, Y., Shi, P., Liu, F., Teo, K.L.: Observer-based H∞ control on nonhomogeneous discrete-
time Markov jump systems. J. Dyn. Syst. Meas. Control 135(4), 1–8 (2013)
13. He, Y., Wu, M., Liu, G.P., She, J.H.: Output feedback stabilization for a discrete-time system
with a time-varying delay. IEEE Trans. Autom. Control 53(10), 2372–2377 (2008)
14. Costa, O., Assumpcão, E.O., Boukas, E.K., Marques, R.P.: Constrained quadratic state feed-
back control of discrete-time Markovian jump linear systems. Automatica 35(4), 617–626
(1999)
Chapter 4
Finite-Time Stability and Stabilization
for Non-homogeneous Markovian Jump
Systems

Abstract Considering the practical case in which the transition probabilities jumping among different modes are randomly time-varying, the finite-time stabilization, finite-time H∞ control and observer-based state feedback finite-time control problems for discrete-time Markovian jump systems with non-homogeneous transition probabilities are investigated in this chapter. A Gaussian transition probability density function is utilized to describe the random time-varying property of the transition probabilities. Then, a variation-dependent controller is devised to guarantee finite-time stabilization of the corresponding closed-loop systems under random time-varying transition probabilities.

4.1 Introduction

Markovian jump systems (MJSs) are a set of dynamic systems with random jumps
among finite subsystems. As essential system parameters, the jump transition prob-
abilities (TPs) determine which mode the system is in at the current moment. Under
the hypothesis that TPs are known accurately in advance, many problems of this kind
of MJSs with homogeneous TPs have been well studied [1–3]. However, the assump-
tion that the TPs are exactly known may lead to instability or deterioration of system
performance. Therefore, more practical MJSs with uncertain TPs are investigated to
address the related research problems.
Similar to the uncertainties about the system matrices, one frequently used form
of uncertainty is the polytopic description, where the TP matrix is supposed to be in
a convex framework with associate vertices [4–6]. The other type is specified in an
element-wise style. In this form, the components of the TP matrix are estimated in
practice, and error bounds are provided in the meantime. Then, the robust methodolo-
gies can be employed to tackle the norm-bounded or polytopic uncertainties supposed
in the TPs [6, 7].
Considering the more practical case that some components of the TP matrix are costly to collect, MJSs with partially unknown TPs have been introduced in
[8, 9]. Different from the uncertain TPs considered in [4–7], the notion of partially unknown TPs does not require any information about the unknown components. However, it is essential to point out that the more details of the TPs are unknown, the more conservative the controller or filter design becomes. In extreme circumstances, such as when all the elements of the TPs are unavailable, the MJSs are equivalent to switching systems as a particular case.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
X. Luan et al., Robust Control for Discrete-Time Markovian Jump Systems in the Finite-Time Domain, Lecture Notes in Control and Information Sciences 492, https://doi.org/10.1007/978-3-031-22182-8_4
In this chapter, the random time-varying TPs are considered from a stochastic viewpoint. The Gaussian probability density function (PDF) is employed to describe the probability of the TPs taking a given constant value. In this way, the random time-varying TPs can be characterized by a Gaussian PDF, whose variance quantifies the uncertainties of the TPs. Then the finite-time stabilization, finite-time H∞ control and observer-based state feedback finite-time control problems are presented to deal with the transient performance analysis of discrete-time non-homogeneous MJSs.

4.2 Preliminaries and Problem Formulation

Consider the discrete-time MJS with the same structure in the preceding chapters:

x(k + 1) = A(rk )x(k) + Bu (rk )u(k) + Bw (rk )w(k)
(4.1)
x(k) = x0 , rk = r0 , k = 0

where the state variable, the control input, and the exogenous disturbances are the
same with those defined in the preceding chapters. The system matrices A(rk ), Bu (rk )
and $B_w(r_k)$ are denoted as $A_i$, $B_{ui}$, and $B_{wi}$, respectively. $r_k$ is a time-varying Markovian chain taking values in $M = \{1, 2, \ldots, M\}$ with transition probabilities

$$ \pi_{ij}^{(\xi_k)} = \Pr(r_k = j \mid r_{k-1} = i, k), $$

where $\pi_{ij}^{(\xi_k)}$ is the transition probability from mode $i$ to mode $j$ satisfying $\pi_{ij}^{(\xi_k)} \ge 0$, $\sum_{j=1}^{M}\pi_{ij}^{(\xi_k)} = 1$, $\forall i, j \in M$.
In this chapter, the random time-varying TPs are characterized by a Gaussian stochastic process $\{\xi_k, k \in \mathbb{N}\}$. The pruned Gaussian PDF of the random variable $\pi_{ij}^{(\xi_k)}$ can be denoted as follows:

$$ p\big(\pi_{ij}^{(\xi_k)}\big) = \frac{\frac{1}{\sqrt{\sigma_{ij}}}\, f\Big(\frac{\pi_{ij}^{(\xi_k)}-\mu_{ij}}{\sqrt{\sigma_{ij}}}\Big)}{F\Big(\frac{1-\mu_{ij}}{\sqrt{\sigma_{ij}}}\Big) - F\Big(\frac{0-\mu_{ij}}{\sqrt{\sigma_{ij}}}\Big)} \quad (4.2) $$

where $f(\cdot)$ is the PDF of the standard normal distribution, $F(\cdot)$ is the cumulative distribution function of $f(\cdot)$, and $\mu_{ij}$ and $\sigma_{ij}$ are the means and variances of the Gaussian PDFs, respectively. Therefore, the matrix of transition probabilities can be expressed as:

Fig. 4.1 Gaussian PDF for (0.5, 0.2), (0.5, 0.1) and (0.5, 0.05)

$$ \Pi_N = \begin{bmatrix} n(\mu_{11},\sigma_{11}) & n(\mu_{12},\sigma_{12}) & \cdots & n(\mu_{1M},\sigma_{1M})\\ n(\mu_{21},\sigma_{21}) & n(\mu_{22},\sigma_{22}) & \cdots & n(\mu_{2M},\sigma_{2M})\\ \vdots & \vdots & \ddots & \vdots\\ n(\mu_{M1},\sigma_{M1}) & n(\mu_{M2},\sigma_{M2}) & \cdots & n(\mu_{MM},\sigma_{MM}) \end{bmatrix} \quad (4.3) $$

where $n(\mu_{ij},\sigma_{ij}) = p\big(\pi_{ij}^{(\xi_k)}\big)$ denotes the pruned Gaussian PDF of $\pi_{ij}^{(\xi_k)}$.


To explicitly demonstrate how the pruned Gaussian PDF represents the probability of the TPs taking a given constant value, the distribution functions for different variances with the same mean are displayed in Fig. 4.1.
From Fig. 4.1, it is clearly noticed that larger values of the variance yield flatter graphs, with much of the probability mass far from the mean of the Gaussian PDF. In contrast, smaller values of the variance yield sharper graphs, with most of the probability mass quite close to the mean. Based on the above analysis, the expectation of the random variable $\pi_{ij}^{(\xi_k)}$ can be expressed as follows:

$$ \hat\pi_{ij}^{(\xi_k)} = E\big(\pi_{ij}^{(\xi_k)}\big) = \int_{0}^{1}\pi_{ij}^{(\xi_k)}\, p\big(\pi_{ij}^{(\xi_k)}\big)\, d\pi_{ij}^{(\xi_k)} = \mu_{ij} + \frac{f\Big(\frac{0-\mu_{ij}}{\sqrt{\sigma_{ij}}}\Big) - f\Big(\frac{1-\mu_{ij}}{\sqrt{\sigma_{ij}}}\Big)}{F\Big(\frac{1-\mu_{ij}}{\sqrt{\sigma_{ij}}}\Big) - F\Big(\frac{0-\mu_{ij}}{\sqrt{\sigma_{ij}}}\Big)}\sqrt{\sigma_{ij}}. \quad (4.4) $$
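The closed-form expectation in Eq. (4.4) is the mean of a normal distribution truncated to $[0, 1]$. A sketch cross-checking it against SciPy's truncated normal (the particular $\mu_{ij}$ and $\sigma_{ij}$ values are assumptions for illustration; note $\sigma_{ij}$ enters as a variance, so the standard deviation is $\sqrt{\sigma_{ij}}$):

```python
import numpy as np
from scipy.stats import norm, truncnorm

mu, var = 0.4, 0.1        # assumed mean and variance of the underlying Gaussian
s = np.sqrt(var)          # standard deviation sqrt(sigma_ij)

a, b = (0 - mu) / s, (1 - mu) / s   # truncation bounds of [0, 1] in standard units
# Eq. (4.4): mu + sqrt(sigma_ij) * (f(a) - f(b)) / (F(b) - F(a))
expectation = mu + s * (norm.pdf(a) - norm.pdf(b)) / (norm.cdf(b) - norm.cdf(a))

# cross-check against SciPy's truncated normal distribution
reference = truncnorm(a, b, loc=mu, scale=s).mean()
print(abs(expectation - reference) < 1e-8)   # True
```

This expectation is what replaces the unknown instantaneous TP when forming the expected TP matrix below.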

As a consequence, the expected TP matrix can be denoted as follows:

$$ \bar\Pi = \begin{bmatrix} E(\pi_{11}^{(\xi_k)}) & E(\pi_{12}^{(\xi_k)}) & \cdots & E(\pi_{1M}^{(\xi_k)})\\ E(\pi_{21}^{(\xi_k)}) & E(\pi_{22}^{(\xi_k)}) & \cdots & E(\pi_{2M}^{(\xi_k)})\\ \vdots & \vdots & \ddots & \vdots\\ E(\pi_{M1}^{(\xi_k)}) & E(\pi_{M2}^{(\xi_k)}) & \cdots & E(\pi_{MM}^{(\xi_k)}) \end{bmatrix} \quad (4.5) $$

with

$$ \sum_{j=1}^{M} E\big(\pi_{ij}^{(\xi_k)}\big) = 1,\quad E\big(\pi_{ij}^{(\xi_k)}\big) \ge 0,\quad 1 \le i, j \le M. $$

4.3 Stochastic Finite-Time Stabilization

Design the following state feedback controller for the discrete-time MJS (4.1):

$$ u(k) = -K_{i,\pi_{ij}^{(\xi_k)}}\, x(k) \quad (4.6) $$

where $K_{i,\pi_{ij}^{(\xi_k)}}$ are the controller gains to be calculated. Substituting controller (4.6) into system (4.1) leads to the following closed-loop system:

$$ x(k+1) = \bar A_{i,\pi_{ij}^{(\xi_k)}}\, x(k) + B_{wi}\, w(k) \quad (4.7) $$

where $\bar A_{i,\pi_{ij}^{(\xi_k)}} = A_i - B_{ui} K_{i,\pi_{ij}^{(\xi_k)}}$.
Firstly, the following proposition is presented to develop the main results.
Proposition 4.1 For a given scalar $\alpha \ge 1$, the closed-loop MJS (4.7) is stochastic finite-time stabilizable concerning $(c_1, c_2, N, R, d)$, if there exist symmetric positive-definite matrices $P_{i,\pi_{ij}^{(\xi_k)}}$ and $Q$ such that

$$ \begin{bmatrix} \bar A_{i,\pi_{ij}^{(\xi_k)}}^{T}\bar P_{j,\pi_{ij}^{(\xi_k)}}\bar A_{i,\pi_{ij}^{(\xi_k)}} - \alpha P_{i,\pi_{ij}^{(\xi_k)}} & \bar A_{i,\pi_{ij}^{(\xi_k)}}^{T}\bar P_{j,\pi_{ij}^{(\xi_k)}}B_{wi}\\ * & B_{wi}^{T}\bar P_{j,\pi_{ij}^{(\xi_k)}}B_{wi} - \alpha Q \end{bmatrix} < 0 \quad (4.8) $$

$$ c_1 \lambda_{\max}\big(\tilde P_{i,\pi_{ij}^{(\xi_k)}}\big) + d^{2}\lambda_{\max}(Q) < \frac{c_2\, \lambda_{\min}\big(\tilde P_{i,\pi_{ij}^{(\xi_k)}}\big)}{\alpha^{N}} \quad (4.9) $$

where $\tilde P_{i,\pi_{ij}^{(\xi_k)}} = R^{-1/2} P_{i,\pi_{ij}^{(\xi_k)}} R^{-1/2}$, and $\lambda_{\max}(\cdot)$, $\lambda_{\min}(\cdot)$ denote the maximum and minimum eigenvalues of a matrix, respectively.

Proof For the system (4.7), choose the following Lyapunov function:

$$ V\big(x(k), r_k, \pi_{ij}^{(\xi_k)}\big) = x(k)^{T} P\big(r_k, \pi_{ij}^{(\xi_k)}\big)\, x(k), $$

where $P_{i,\pi_{ij}^{(\xi_k)}} \triangleq P(r_k, \pi_{ij}^{(\xi_k)})$ are the variation-dependent positive-definite symmetric matrices. A simple calculation gives

$$ V(x(k+1)) = x(k)^{T}\bar A_{i,\pi_{ij}^{(\xi_k)}}^{T}\bar P_{j,\pi_{ij}^{(\xi_k)}}\bar A_{i,\pi_{ij}^{(\xi_k)}}x(k) + 2x(k)^{T}\bar A_{i,\pi_{ij}^{(\xi_k)}}^{T}\bar P_{j,\pi_{ij}^{(\xi_k)}}B_{wi}w(k) + w(k)^{T}B_{wi}^{T}\bar P_{j,\pi_{ij}^{(\xi_k)}}B_{wi}w(k) \quad (4.10) $$

where $\bar P_{j,\pi_{ij}^{(\xi_k)}} \triangleq \sum_{j=1}^{M} E\big(\pi_{ij}^{(\xi_k)}\big) P_{j,\pi_{ij}^{(\xi_k)}}$.
Combining Eqs. (4.8) and (4.10) yields

$$ V(x(k+1)) < \alpha V(x(k)) + \alpha w(k)^{T} Q w(k). \quad (4.11) $$

For $\alpha \ge 1$ and $\tilde P_{i,\pi_{ij}^{(\xi_k)}} = R^{-1/2}P_{i,\pi_{ij}^{(\xi_k)}}R^{-1/2}$, condition (4.11) can be rewritten as

$$ V(x(k)) < \alpha^{k}V(x(0)) + \sum_{j=1}^{k}\alpha^{k-j+1}w(j-1)^{T}Qw(j-1) = \alpha^{k}\Big[V(x(0)) + \sum_{j=1}^{k}\alpha^{1-j}w(j-1)^{T}Qw(j-1)\Big] < \alpha^{N}\big(c_1\lambda_{\max}(\tilde P_{i,\pi_{ij}^{(\xi_k)}}) + d^{2}\lambda_{\max}(Q)\big). \quad (4.12) $$

On the other hand,

$$ V(x(k)) = x(k)^{T}P_{i,\pi_{ij}^{(\xi_k)}}x(k) \ge \lambda_{\min}(\tilde P_{i,\pi_{ij}^{(\xi_k)}})\, x(k)^{T}Rx(k). \quad (4.13) $$

Putting together Eqs. (4.12) and (4.13), one has

$$ x(k)^{T}Rx(k) < \frac{\alpha^{N}\big(c_1\lambda_{\max}(\tilde P_{i,\pi_{ij}^{(\xi_k)}}) + d^{2}\lambda_{\max}(Q)\big)}{\lambda_{\min}(\tilde P_{i,\pi_{ij}^{(\xi_k)}})}. $$

Equation (4.9) then means that $x(k)^{T}Rx(k) < c_2$ for all $k \in \{1, 2, \ldots, N\}$, so the MJS (4.7) is said to be stochastic finite-time stabilizable concerning $(c_1, c_2, N, R, d)$. This completes the proof.

To find the solution of the finite-time stabilizing controller, the following theorem is needed:
Theorem 4.1 For a given scalar $\alpha \ge 1$, the closed-loop MJS (4.7) is stochastic finite-time stabilizable concerning $(c_1, c_2, N, R, d)$ under the state feedback controller $K_{i,\pi_{ij}^{(\xi_k)}} = Y_{i,\pi_{ij}^{(\xi_k)}} X_{i,\pi_{ij}^{(\xi_k)}}^{-1}$, if there exist a mode-dependent symmetric positive-definite matrix $X_{i,\pi_{ij}^{(\xi_k)}} \in R^{n\times n}$, a mode-dependent matrix $Y_{i,\pi_{ij}^{(\xi_k)}} \in R^{m\times n}$ and a symmetric positive-definite matrix $Q \in R^{p\times p}$ satisfying the following coupled linear matrix inequalities (LMIs):

$$ \begin{bmatrix} -\alpha X_{i,\pi_{ij}^{(\xi_k)}} & 0 & V_{1i}\\ * & -\alpha Q & U_{2i}\\ * & * & -\mathcal{X} \end{bmatrix} < 0 \quad (4.14) $$

$$ \lambda_1 R^{-1} < X_{i,\pi_{ij}^{(\xi_k)}} < R^{-1} \quad (4.15) $$

$$ 0 < Q < \lambda_2 I \quad (4.16) $$

$$ \begin{bmatrix} -\dfrac{c_2}{\alpha^{N}} + \lambda_2 d^{2} & \sqrt{c_1}\\ \sqrt{c_1} & -\lambda_1 \end{bmatrix} < 0 \quad (4.17) $$

where

$$ V_{1i} = \Big[\sqrt{E\big(\pi_{i1}^{(\xi_k)}\big)}\big(X_{i,\pi_{ij}^{(\xi_k)}}A_i^{T} - Y_{i,\pi_{ij}^{(\xi_k)}}^{T}B_{ui}^{T}\big), \ldots, \sqrt{E\big(\pi_{iM}^{(\xi_k)}\big)}\big(X_{i,\pi_{ij}^{(\xi_k)}}A_i^{T} - Y_{i,\pi_{ij}^{(\xi_k)}}^{T}B_{ui}^{T}\big)\Big], $$
$$ U_{2i} = \Big[\sqrt{E\big(\pi_{i1}^{(\xi_k)}\big)}B_{wi}^{T}, \ldots, \sqrt{E\big(\pi_{iM}^{(\xi_k)}\big)}B_{wi}^{T}\Big], $$
$$ \mathcal{X} = \mathrm{diag}\big\{X_{1,\pi_{ij}^{(\xi_k)}}, \ldots, X_{M,\pi_{ij}^{(\xi_k)}}\big\}. $$

Proof From Eq. (4.8) in Proposition 4.1, it follows that:

$$\begin{bmatrix} \bar A_{i,\pi_{ij}^{(\xi_k)}}^T \\ B_{wi}^T \end{bmatrix} \bar P_{j,\pi_{ij}^{(\xi_k)}} \begin{bmatrix} \bar A_{i,\pi_{ij}^{(\xi_k)}} & B_{wi} \end{bmatrix} + \begin{bmatrix} -\alpha P_{i,\pi_{ij}^{(\xi_k)}} & 0 \\ 0 & -\alpha Q \end{bmatrix} < 0. \quad (4.18)$$

Applying the Schur complement lemma, implementing a congruence transformation to Eq. (4.18) by $\operatorname{diag}\{P_{i,\pi_{ij}^{(\xi_k)}}^{-1}, I\}$, and denoting $X_{i,\pi_{ij}^{(\xi_k)}} = P_{i,\pi_{ij}^{(\xi_k)}}^{-1}$ and $Y_{i,\pi_{ij}^{(\xi_k)}} = K_{i,\pi_{ij}^{(\xi_k)}} X_{i,\pi_{ij}^{(\xi_k)}}$, inequality (4.14) follows.

Defining $\tilde X_{i,\pi_{ij}^{(\xi_k)}} = \tilde P_{i,\pi_{ij}^{(\xi_k)}}^{-1} = R^{1/2} X_{i,\pi_{ij}^{(\xi_k)}} R^{1/2}$, and considering

$$\lambda_{\max}(\tilde X_{i,\pi_{ij}^{(\xi_k)}}) = \frac{1}{\lambda_{\min}(\tilde P_{i,\pi_{ij}^{(\xi_k)}})}, \quad (4.19)$$

from Eq. (4.9) it follows that:

$$\frac{c_1}{\lambda_{\min}(\tilde X_{i,\pi_{ij}^{(\xi_k)}})} + d^2 \lambda_{\max}(Q) < \frac{c_2}{\lambda_{\max}(\tilde X_{i,\pi_{ij}^{(\xi_k)}})\,\alpha^N}. \quad (4.20)$$

Equation (4.20) holds under the following assumptions:

$$0 < \lambda_{\min}(Q), \quad \lambda_{\max}(Q) < \lambda_2, \quad \lambda_{\max}(\tilde X_{i,\pi_{ij}^{(\xi_k)}}) < 1, \quad \lambda_1 < \lambda_{\min}(\tilde X_{i,\pi_{ij}^{(\xi_k)}}) \quad (4.21)$$

$$\frac{c_1}{\lambda_1} + d^2 \lambda_2 < \frac{c_2}{\alpha^N} \quad (4.22)$$

which are equivalent to Eqs. (4.15)–(4.17). Thus the proof is completed.
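The 2×2 matrix condition (4.17) involves only scalars, so it can be checked directly and, by the Schur complement, it is equivalent to the scalar bound (4.22). The sketch below verifies that the two forms agree on hypothetical parameter values (none taken from the chapter's examples):

```python
def lmi_417_holds(c1, c2, N, alpha, lam1, lam2, d):
    """2x2 condition (4.17): [[-c2/alpha**N + lam2*d**2, sqrt(c1)], [sqrt(c1), -lam1]] < 0.
    A symmetric 2x2 matrix is negative definite iff its (1,1) entry is negative
    and its determinant is positive."""
    m11 = -c2 / alpha ** N + lam2 * d ** 2
    det = m11 * (-lam1) - c1          # det = m11*m22 - m12*m21
    return m11 < 0 and det > 0

def scalar_422_holds(c1, c2, N, alpha, lam1, lam2, d):
    """Equivalent scalar condition (4.22): c1/lam1 + d**2*lam2 < c2/alpha**N."""
    return c1 / lam1 + d ** 2 * lam2 < c2 / alpha ** N

# hypothetical parameter sets (c1, c2, N, alpha, lambda1, lambda2, d)
cases = [
    (1.0, 4.0, 7, 1.01, 0.8, 0.05, 1.0),
    (1.0, 4.0, 7, 1.20, 0.8, 0.05, 1.0),
    (0.5, 2.2, 7, 1.05, 0.3, 0.10, 1.0),
]
for case in cases:
    assert lmi_417_holds(*case) == scalar_422_holds(*case)
```

In a feasibility search this scalar test is what fixes admissible $(\lambda_1, \lambda_2)$ before solving the matrix inequalities (4.14)–(4.16) with an LMI solver.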

4.4 Stochastic Finite-Time H∞ Control

Consider the following discrete-time MJS:

$$\begin{cases} x(k+1) = A(r_k)x(k) + B_u(r_k)u(k) + B_w(r_k)w(k) \\ z(k) = C(r_k)x(k) + D_u(r_k)u(k) + D_w(r_k)w(k) \\ x(k) = x_0,\ r_k = r_0,\ k = 0 \end{cases} \quad (4.23)$$

where the definition of $z(k) \in \mathbb{R}^l$ is the same as that in Chap. 3. Substituting the controller (4.6) into the system (4.23) yields the following closed-loop system:

$$\begin{cases} x(k+1) = \bar A_{i,\pi_{ij}^{(\xi_k)}} x(k) + B_{wi} w(k) \\ z(k) = \bar C_{i,\pi_{ij}^{(\xi_k)}} x(k) + D_{wi} w(k), \end{cases} \quad (4.24)$$

where

$$\bar C_{i,\pi_{ij}^{(\xi_k)}} = C_i - D_{ui} K_{i,\pi_{ij}^{(\xi_k)}}.$$

To achieve the target that the closed-loop system is stochastic finite-time stabiliz-
able with H∞ disturbance rejection performance, the following sufficient conditions
are presented:
Theorem 4.2 The closed-loop system (4.24) is stochastic finite-time stabilizable with respect to $(c_1\ c_2\ N\ R\ d)$ for a scalar $\alpha \ge 0$, and maintains the $\gamma$-disturbance attenuation property, if for each $i \in M$ there are positive-definite matrices $X_{i,\pi_{ij}^{(\xi_k)}}$ and matrices $Y_{i,\pi_{ij}^{(\xi_k)}}$ satisfying the following conditions:

$$\begin{bmatrix} -(1+\alpha)X_{i,\pi_{ij}^{(\xi_k)}} & 0 & \big(C_i X_{i,\pi_{ij}^{(\xi_k)}} - D_{ui} Y_{i,\pi_{ij}^{(\xi_k)}}\big)^T & \sqrt{1+\alpha}\,U_{1i}^T \\ * & -\gamma^2 I & D_{wi}^T & \sqrt{1+\alpha}\,U_{2i}^T \\ * & * & -I & 0 \\ * & * & * & -Z \end{bmatrix} < 0 \quad (4.25)$$

$$\lambda R^{-1} < X_{i,\pi_{ij}^{(\xi_k)}} < R^{-1} \quad (4.26)$$

$$\begin{bmatrix} -c_2 + \gamma^2 d^2 & \sqrt{(1+\alpha)^N c_1} \\ \sqrt{(1+\alpha)^N c_1} & -\lambda \end{bmatrix} < 0 \quad (4.27)$$

where

$$U_{1i}^T = \Big[ \sqrt{E\{\pi_{i1}^{(\xi_k)}\}}\,\mathcal{A}_{i,\pi_{ij}^{(\xi_k)}}^T,\ \ldots,\ \sqrt{E\{\pi_{iM}^{(\xi_k)}\}}\,\mathcal{A}_{i,\pi_{ij}^{(\xi_k)}}^T \Big],$$

$$U_{2i}^T = \Big[ \sqrt{E\{\pi_{i1}^{(\xi_k)}\}}\,B_{wi}^T,\ \ldots,\ \sqrt{E\{\pi_{iM}^{(\xi_k)}\}}\,B_{wi}^T \Big],$$

$$Z = \operatorname{diag}\big\{ X_{1,\pi_{ij}^{(\xi_k)}},\ \ldots,\ X_{M,\pi_{ij}^{(\xi_k)}} \big\}, \qquad \mathcal{A}_{i,\pi_{ij}^{(\xi_k)}} = A_i X_{i,\pi_{ij}^{(\xi_k)}} - B_{ui} Y_{i,\pi_{ij}^{(\xi_k)}}.$$

A finite-time stabilizing controller with $\gamma$-disturbance attenuation level can be derived as $K_{i,\pi_{ij}^{(\xi_k)}} = Y_{i,\pi_{ij}^{(\xi_k)}} X_{i,\pi_{ij}^{(\xi_k)}}^{-1}$.

Proof For the closed-loop system (4.24), choose the following Lyapunov function:

$$V(x(k), r_k, \pi_{ij}^{(\xi_k)}) = x(k)^T P(r_k, \pi_{ij}^{(\xi_k)}) x(k),$$

where $P_{i,\pi_{ij}^{(\xi_k)}} \triangleq P(r_k, \pi_{ij}^{(\xi_k)})$ are the variation-dependent positive-definite symmetric matrices. Then, it follows that:

$$\begin{aligned} \Delta V(x(k)) &= E\big\{ V(x(k+1), r_{k+1}, \pi_{ij}^{(\xi_{k+1})}) \,\big|\, x(k), r_k, \xi_k \big\} - V(x(k), r_k, \pi_{ij}^{(\xi_k)}) \\ &= x(k+1)^T \bar P_{j,\pi_{ij}^{(\xi_k)}} x(k+1) - x(k)^T P_{i,\pi_{ij}^{(\xi_k)}} x(k) \\ &= x(k)^T \big( \bar A_{i,\pi_{ij}^{(\xi_k)}}^T \bar P_{j,\pi_{ij}^{(\xi_k)}} \bar A_{i,\pi_{ij}^{(\xi_k)}} - P_{i,\pi_{ij}^{(\xi_k)}} \big) x(k) \\ &\quad + 2x(k)^T \bar A_{i,\pi_{ij}^{(\xi_k)}}^T \bar P_{j,\pi_{ij}^{(\xi_k)}} B_{wi} w(k) + w(k)^T B_{wi}^T \bar P_{j,\pi_{ij}^{(\xi_k)}} B_{wi} w(k) \end{aligned} \quad (4.28)$$

where $\bar P_{j,\pi_{ij}^{(\xi_k)}} \triangleq \sum_{j=1}^{M} E\{\pi_{ij}^{(\xi_k)}\} P_{j,\pi_{ij}^{(\xi_k)}}$.
ij ij
Assume the zero initial condition $V(x(k))|_{k=0} = 0$, and denote

$$J \triangleq E\Big\{ \sum_{k=0}^{N} \big[ z(k)^T z(k) - \gamma^2 w(k)^T w(k) \big] \Big\}. \quad (4.29)$$

Since $V(x(k))|_{k=0} = 0$, one has

$$J \le E\Big\{ \sum_{k=0}^{N} \big[ z(k)^T z(k) - \gamma^2 w(k)^T w(k) + (1+\alpha)\Delta V(x(k)) \big] \Big\} = \sum_{k=0}^{N} \zeta_k^T \Theta \zeta_k,$$

where $\zeta_k \triangleq \big[ x(k)^T \ \ w(k)^T \big]^T$ and

$$\Theta = \begin{bmatrix} (1+\alpha)\bar A_{i,\pi_{ij}^{(\xi_k)}}^T \bar P_{j,\pi_{ij}^{(\xi_k)}} \bar A_{i,\pi_{ij}^{(\xi_k)}} - (1+\alpha)P_{i,\pi_{ij}^{(\xi_k)}} + \bar C_{i,\pi_{ij}^{(\xi_k)}}^T \bar C_{i,\pi_{ij}^{(\xi_k)}} & (1+\alpha)\bar A_{i,\pi_{ij}^{(\xi_k)}}^T \bar P_{j,\pi_{ij}^{(\xi_k)}} B_{wi} + \bar C_{i,\pi_{ij}^{(\xi_k)}}^T D_{wi} \\ * & (1+\alpha)B_{wi}^T \bar P_{j,\pi_{ij}^{(\xi_k)}} B_{wi} + D_{wi}^T D_{wi} - \gamma^2 I \end{bmatrix}.$$

If $\Theta < 0$, which means $J < 0$, the $H_\infty$ disturbance rejection performance (4.29) is guaranteed. Denoting $X_{i,\pi_{ij}^{(\xi_k)}} \triangleq P_{i,\pi_{ij}^{(\xi_k)}}^{-1}$, $Y_{i,\pi_{ij}^{(\xi_k)}} \triangleq K_{i,\pi_{ij}^{(\xi_k)}} X_{i,\pi_{ij}^{(\xi_k)}}$, and implementing a congruence transformation to $\Theta$ by $\operatorname{diag}\{X_{i,\pi_{ij}^{(\xi_k)}}, I\}$, it follows that $\Theta < 0$ is equivalent to the linear matrix inequality (4.25).
On the other hand, the linear matrix inequality (4.25) means

$$(1+\alpha)V(x(k+1)) < (1+\alpha)V(x(k)) + \gamma^2 w(k)^T w(k) - z(k)^T z(k). \quad (4.30)$$

For $\alpha \ge 0$, it follows that:

$$V(x(k+1)) < (1+\alpha)V(x(k)) + \gamma^2 w(k)^T w(k). \quad (4.31)$$

Denoting $\tilde P_{i,\pi_{ij}^{(\xi_k)}} = R^{-1/2} P_{i,\pi_{ij}^{(\xi_k)}} R^{-1/2}$, Eq. (4.31) implies

$$V(x(k)) < (1+\alpha)^k V(x(0)) + \sum_{j=1}^{k} (1+\alpha)^{k-j} \gamma^2 w(j-1)^T w(j-1) < (1+\alpha)^N \big( c_1 \lambda_{\max}(\tilde P_{i,\pi_{ij}^{(\xi_k)}}) + d^2 \gamma^2 \big). \quad (4.32)$$

On the other hand,

$$V(x(k)) = x(k)^T P_{i,\pi_{ij}^{(\xi_k)}} x(k) \ge \lambda_{\min}(\tilde P_{i,\pi_{ij}^{(\xi_k)}})\, x(k)^T R x(k). \quad (4.33)$$

Putting together Eqs. (4.32) and (4.33), it follows that:

$$x(k)^T R x(k) < \frac{(1+\alpha)^N c_1 \lambda_{\max}(\tilde P_{i,\pi_{ij}^{(\xi_k)}}) + d^2 \gamma^2}{\lambda_{\min}(\tilde P_{i,\pi_{ij}^{(\xi_k)}})} < c_2. \quad (4.34)$$

Condition (4.34) can be rewritten as

$$\frac{c_1 (1+\alpha)^N}{\lambda_{\min}(\tilde X_{i,\pi_{ij}^{(\xi_k)}})} + d^2 \gamma^2 < \frac{c_2}{\lambda_{\max}(\tilde X_{i,\pi_{ij}^{(\xi_k)}})}. \quad (4.35)$$

Equation (4.35) holds under the following conditions:

$$\lambda_{\max}(\tilde X_{i,\pi_{ij}^{(\xi_k)}}) < 1, \qquad \lambda < \lambda_{\min}(\tilde X_{i,\pi_{ij}^{(\xi_k)}}) \quad (4.36)$$

$$\frac{c_1 (1+\alpha)^N}{\lambda} + d^2 \gamma^2 < c_2 \quad (4.37)$$

which are equivalent to Eqs. (4.26) and (4.27). Thus the proof is completed.

4.5 Observer-Based Finite-Time Control

Consider the following mathematical model:

$$\begin{cases} x(k+1) = A(r_k)x(k) + A_d(r_k)x(k-h) + B_u(r_k)u(k) + B_w(r_k)w(k) \\ y(k) = E(r_k)x(k) + E_d(r_k)x(k-h) \\ x_f = \varphi_f,\ f \in \{-h, \ldots, 0\},\ r(0) = r_0 \end{cases} \quad (4.38)$$

where $y(k) \in \mathbb{R}^p$ is the measured output. Design the following observer and state feedback controller:

$$\begin{cases} \bar x(k+1) = A_i \bar x(k) + A_{di} \bar x(k-h) + B_{ui} u(k) + H_{i,\pi_{ij}^{(\xi_k)}} (y(k) - \bar y(k)) \\ \bar y(k) = E_i \bar x(k) + E_{di} \bar x(k-h) \\ u(k) = K_{i,\pi_{ij}^{(\xi_k)}} \bar x(k) + K_{di,\pi_{ij}^{(\xi_k)}} \bar x(k-h) \\ \bar x_f = \eta_f,\ f \in \{-h, \ldots, 0\},\ r(0) = r_0 \end{cases} \quad (4.39)$$

where $K_{i,\pi_{ij}^{(\xi_k)}}$, $K_{di,\pi_{ij}^{(\xi_k)}}$ and $H_{i,\pi_{ij}^{(\xi_k)}}$ are the controller and observer gains to be calculated. Letting $e(k) = x(k) - \bar x(k)$ and $\tilde x(k) = \big[ x(k)^T \ \ e(k)^T \big]^T$, the closed-loop error dynamic MJS follows as:

$$\begin{cases} \tilde x(k+1) = \tilde A_i \tilde x(k) + \tilde A_{di} \tilde x(k-h) + \tilde B_{wi} w(k) \\ \tilde x_f = \big[ \varphi_f^T \ \ \varphi_f^T - \eta_f^T \big]^T,\ f \in \{-h, \ldots, 0\},\ r(0) = r_0 \end{cases} \quad (4.40)$$

where

$$\tilde A_i = \begin{bmatrix} A_i + B_{ui} K_{i,\pi_{ij}^{(\xi_k)}} & -B_{ui} K_{i,\pi_{ij}^{(\xi_k)}} \\ 0 & A_i - H_{i,\pi_{ij}^{(\xi_k)}} E_i \end{bmatrix}, \quad \tilde A_{di} = \begin{bmatrix} A_{di} + B_{ui} K_{di,\pi_{ij}^{(\xi_k)}} & -B_{ui} K_{di,\pi_{ij}^{(\xi_k)}} \\ 0 & A_{di} - H_{i,\pi_{ij}^{(\xi_k)}} E_{di} \end{bmatrix}, \quad \tilde B_{wi} = \begin{bmatrix} B_{wi} \\ B_{wi} \end{bmatrix}.$$
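The block structure of the augmented matrices above can be assembled mechanically. The sketch below uses hypothetical scalar data with $n = m = p = 1$ (so every block of the 2×2 partition is a number); the values of A, K, H, etc., are illustrative only:

```python
# Hypothetical scalar data (n = m = p = 1): each partition block is a scalar.
A, Ad, Bu, Bw, E, Ed = 0.9, 0.1, 1.0, 0.2, 0.5, 0.05
K, Kd, H = -0.3, -0.1, 0.4          # controller and observer gains

# Augmented matrices of the error dynamics (4.40), assembled blockwise.
A_tilde  = [[A + Bu * K,   -Bu * K   ],
            [0.0,           A - H * E ]]
Ad_tilde = [[Ad + Bu * Kd, -Bu * Kd  ],
            [0.0,           Ad - H * Ed]]
B_tilde  = [Bw, Bw]

print(A_tilde, Ad_tilde, B_tilde)
```

Note the characteristic separation: the (1,1) block carries the state-feedback loop while the (2,2) block carries the observer error dynamics, so the spectrum of the augmented system splits accordingly.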

Before presenting the main results, the following definition and proposition are necessary.

Definition 4.1 The closed-loop error dynamic system (4.40) is said to be stochastic finite-time stabilizable via the observer-based state feedback controller (4.39) with respect to $(c_1\ c_2\ N\ \tilde G)$, where $c_1 < c_2$, $\tilde G > 0$, if the following condition holds:

$$E\{\tilde x^T(k)\tilde G \tilde x(k)\} \le c_1^2,\ k_0 - h \le k \le k_0 \ \Rightarrow\ E\{\tilde x^T(k)\tilde G \tilde x(k)\} < c_2^2,\ \forall k \in \{1, 2, \ldots, N\} \quad (4.41)$$

where $\tilde G = \operatorname{diag}\{G, G\}$.

Proposition 4.2 For scalars $\alpha \ge 0$ and $h > 0$, the closed-loop error dynamic system (4.40) is stochastic finite-time stabilizable with respect to $(c_1\ c_2\ N\ \tilde G\ d)$, if there are symmetric positive-definite matrices $\tilde P_{i,\pi_{ij}^{(\xi_k)}}$, $\tilde Q$ and $S$ such that

$$\begin{bmatrix} \tilde A_i^T \bar P_{j,\pi_{ij}^{(\xi_k)}} \tilde A_i - (1+\alpha)\tilde P_{i,\pi_{ij}^{(\xi_k)}} + \tilde Q & \tilde A_i^T \bar P_{j,\pi_{ij}^{(\xi_k)}} \tilde A_{di} & \tilde A_i^T \bar P_{j,\pi_{ij}^{(\xi_k)}} \tilde B_{wi} \\ * & \tilde A_{di}^T \bar P_{j,\pi_{ij}^{(\xi_k)}} \tilde A_{di} - \tilde Q & \tilde A_{di}^T \bar P_{j,\pi_{ij}^{(\xi_k)}} \tilde B_{wi} \\ * & * & \tilde B_{wi}^T \bar P_{j,\pi_{ij}^{(\xi_k)}} \tilde B_{wi} - (1+\alpha)S \end{bmatrix} < 0 \quad (4.42)$$

$$c_1^2 \max_{i\in M}\big\{\lambda_{\max}\big(\hat P_{i,\pi_{ij}^{(\xi_k)}}\big)\big\} + c_1^2 h \lambda_{\max}(\hat Q) + d^2 \lambda_{\max}(S) < \frac{c_2^2 \min_{i\in M}\big\{\lambda_{\min}\big(\hat P_{i,\pi_{ij}^{(\xi_k)}}\big)\big\}}{(1+\alpha)^N} \quad (4.43)$$

where $\hat P_{i,\pi_{ij}^{(\xi_k)}} = \tilde G^{-1/2} \tilde P_{i,\pi_{ij}^{(\xi_k)}} \tilde G^{-1/2}$, $\hat Q = \tilde G^{-1/2} \tilde Q \tilde G^{-1/2}$.
ij ij

Proof For the closed-loop system (4.40), choose the following Lyapunov function:

$$V_i(k) = \tilde x(k)^T \tilde P_{i,\pi_{ij}^{(\xi_k)}} \tilde x(k) + \sum_{j=k-h}^{k-1} \tilde x(j)^T \tilde Q \tilde x(j).$$

Simple calculation gives:

$$\begin{aligned} E\{V_i(k+1)\} - V_i(k) &= \tilde x(k)^T \big( \tilde A_i^T \bar P_{j,\pi_{ij}^{(\xi_k)}} \tilde A_i - \tilde P_{i,\pi_{ij}^{(\xi_k)}} + \tilde Q \big) \tilde x(k) + 2\tilde x(k)^T \tilde A_i^T \bar P_{j,\pi_{ij}^{(\xi_k)}} \tilde A_{di} \tilde x(k-h) \\ &\quad + 2\tilde x(k)^T \tilde A_i^T \bar P_{j,\pi_{ij}^{(\xi_k)}} \tilde B_{wi} w(k) + \tilde x(k-h)^T \big( \tilde A_{di}^T \bar P_{j,\pi_{ij}^{(\xi_k)}} \tilde A_{di} - \tilde Q \big) \tilde x(k-h) \\ &\quad + 2\tilde x(k-h)^T \tilde A_{di}^T \bar P_{j,\pi_{ij}^{(\xi_k)}} \tilde B_{wi} w(k) + w(k)^T \tilde B_{wi}^T \bar P_{j,\pi_{ij}^{(\xi_k)}} \tilde B_{wi} w(k) \\ &= \zeta_k^T \Psi_i \zeta_k \end{aligned} \quad (4.44)$$

where

$$\bar P_{j,\pi_{ij}^{(\xi_k)}} \triangleq \sum_{j=1}^{M} E\{\pi_{ij}^{(\xi_k)}\} \tilde P_{j,\pi_{ij}^{(\xi_k)}}, \qquad \zeta_k = \big[ \tilde x(k)^T \ \ \tilde x(k-h)^T \ \ w(k)^T \big]^T,$$

$$\Psi_i = \begin{bmatrix} \tilde A_i^T \bar P_{j,\pi_{ij}^{(\xi_k)}} \tilde A_i - \tilde P_{i,\pi_{ij}^{(\xi_k)}} + \tilde Q & * & * \\ \tilde A_{di}^T \bar P_{j,\pi_{ij}^{(\xi_k)}} \tilde A_i & \tilde A_{di}^T \bar P_{j,\pi_{ij}^{(\xi_k)}} \tilde A_{di} - \tilde Q & * \\ \tilde B_{wi}^T \bar P_{j,\pi_{ij}^{(\xi_k)}} \tilde A_i & \tilde B_{wi}^T \bar P_{j,\pi_{ij}^{(\xi_k)}} \tilde A_{di} & \tilde B_{wi}^T \bar P_{j,\pi_{ij}^{(\xi_k)}} \tilde B_{wi} \end{bmatrix}.$$

Combining Eqs. (4.42) and (4.44), it follows that:

$$\begin{aligned} E\{V_i(k+1)\} &\le (1+\alpha)\tilde x(k)^T \tilde P_{i,\pi_{ij}^{(\xi_k)}} \tilde x(k) + (1+\alpha)w(k)^T S w(k) + (1+\alpha)\sum_{j=k-h}^{k-1} \tilde x(j)^T \tilde Q \tilde x(j) \\ &= (1+\alpha)V_i(k) + (1+\alpha)w(k)^T S w(k). \end{aligned} \quad (4.45)$$

For $\alpha \ge 0$, Eq. (4.45) can be rewritten as

$$\begin{aligned} V_i(k) &\le (1+\alpha)^k V_i(0) + \sum_{j=1}^{k} (1+\alpha)^{k-j+1} w(j-1)^T S w(j-1) \\ &= (1+\alpha)^k \Big[ \tilde x(0)^T \tilde P_{i,\pi_{ij}^{(\xi_k)}} \tilde x(0) + \sum_{j=-h}^{-1} \tilde x(j)^T \tilde Q \tilde x(j) + \sum_{j=1}^{k} (1+\alpha)^{1-j} w(j-1)^T S w(j-1) \Big] \\ &\le (1+\alpha)^N \Big[ c_1^2 \max_{i\in M}\big\{\lambda_{\max}\big(\hat P_{i,\pi_{ij}^{(\xi_k)}}\big)\big\} + c_1^2 h \lambda_{\max}(\hat Q) + d^2 \lambda_{\max}(S) \Big]. \end{aligned} \quad (4.46)$$

Note that

$$V_i(k) = \tilde x(k)^T \tilde P_{i,\pi_{ij}^{(\xi_k)}} \tilde x(k) + \sum_{j=k-h}^{k-1} \tilde x(j)^T \tilde Q \tilde x(j) \ge \tilde x(k)^T \tilde P_{i,\pi_{ij}^{(\xi_k)}} \tilde x(k) \ge \min_{i\in M}\big\{\lambda_{\min}\big(\hat P_{i,\pi_{ij}^{(\xi_k)}}\big)\big\}\, \tilde x(k)^T \tilde G \tilde x(k). \quad (4.47)$$

According to Eqs. (4.46)–(4.47), it follows that:

$$\tilde x(k)^T \tilde G \tilde x(k) \le \frac{(1+\alpha)^N \Big[ c_1^2 \max_{i\in M}\big\{\lambda_{\max}\big(\hat P_{i,\pi_{ij}^{(\xi_k)}}\big)\big\} + c_1^2 h \lambda_{\max}(\hat Q) + d^2 \lambda_{\max}(S) \Big]}{\min_{i\in M}\big\{\lambda_{\min}\big(\hat P_{i,\pi_{ij}^{(\xi_k)}}\big)\big\}}. \quad (4.48)$$

Equations (4.43) and (4.48) imply that for $k \in \{1, 2, \ldots, N\}$, $E\{\tilde x(k)^T \tilde G \tilde x(k)\} < c_2^2$. This completes the proof.

With the derived results presented in Proposition 4.2, the controller gains can be solved using the following theorem:

Theorem 4.3 For scalars $\alpha \ge 0$ and $h > 0$, the closed-loop system (4.40) is stochastic finite-time stabilizable via observer-based state feedback with respect to $(c_1\ c_2\ N\ \tilde G\ d)$, if there are matrices $\tilde P_{i,\pi_{ij}^{(\xi_k)}} = \tilde P_{i,\pi_{ij}^{(\xi_k)}}^T > 0 \in \mathbb{R}^{2n\times 2n}$, $\tilde X_{i,\pi_{ij}^{(\xi_k)}} = \tilde X_{i,\pi_{ij}^{(\xi_k)}}^T > 0 \in \mathbb{R}^{2n\times 2n}$, $\tilde Q > 0 \in \mathbb{R}^{2n\times 2n}$, $S > 0 \in \mathbb{R}^{n\times n}$, and real matrices $K_{i,\pi_{ij}^{(\xi_k)}} \in \mathbb{R}^{m\times n}$, $K_{di,\pi_{ij}^{(\xi_k)}} \in \mathbb{R}^{m\times n}$ and $H_{i,\pi_{ij}^{(\xi_k)}} \in \mathbb{R}^{n\times p}$ such that

$$\begin{bmatrix} -(1+\alpha)\tilde P_{i,\pi_{ij}^{(\xi_k)}} + \tilde Q & 0 & 0 & * \\ 0 & -\tilde Q & 0 & * \\ 0 & 0 & -(1+\alpha)S & * \\ \Gamma_1 & \Gamma_2 & \bar B_{wi} & -\tilde X_{i,\pi_{ij}^{(\xi_k)}} \end{bmatrix} \le 0 \quad (4.49)$$

$$\tilde P_{i,\pi_{ij}^{(\xi_k)}} \tilde X_{i,\pi_{ij}^{(\xi_k)}} = I \quad (4.50)$$

$$\lambda_1 \tilde G^{-1} < \tilde X_{i,\pi_{ij}^{(\xi_k)}} < \tilde G^{-1}, \qquad 0 < \tilde Q < \lambda_2 \tilde G, \qquad 0 < S < \lambda_3 I \quad (4.51)$$

$$\begin{bmatrix} -\dfrac{c_2^2}{(1+\alpha)^N} + c_1^2 \lambda_2 h + d^2 \lambda_3 & c_1 \\ c_1 & -\lambda_1 \end{bmatrix} < 0 \quad (4.52)$$

where

$$\Gamma_1 = \Big[ \sqrt{E\{\pi_{i1}^{(\xi_k)}\}}\,\Xi_1^T,\ \ldots,\ \sqrt{E\{\pi_{iM}^{(\xi_k)}\}}\,\Xi_1^T \Big]^T,$$

$$\Gamma_2 = \Big[ \sqrt{E\{\pi_{i1}^{(\xi_k)}\}}\,\Xi_2^T,\ \ldots,\ \sqrt{E\{\pi_{iM}^{(\xi_k)}\}}\,\Xi_2^T \Big]^T,$$

$$\Xi_1 = \Xi_{11} + \Xi_{12} K_{i,\pi_{ij}^{(\xi_k)}} \Xi_{13} + \Xi_{14} H_{i,\pi_{ij}^{(\xi_k)}} \Xi_{15},$$

$$\Xi_2 = \Xi_{21} + \Xi_{12} K_{di,\pi_{ij}^{(\xi_k)}} \Xi_{13} + \Xi_{14} H_{i,\pi_{ij}^{(\xi_k)}} \Xi_{22},$$

$$\Xi_{11} = \begin{bmatrix} A_i & 0_{n\times n} \\ 0_{n\times n} & A_i \end{bmatrix}, \quad \Xi_{12} = \begin{bmatrix} B_{ui} \\ 0_{n\times m} \end{bmatrix}, \quad \Xi_{13} = \begin{bmatrix} I_{n\times n} & -I_{n\times n} \end{bmatrix}, \quad \Xi_{14} = \begin{bmatrix} 0_{n\times n} \\ -I_{n\times n} \end{bmatrix},$$

$$\Xi_{15} = \begin{bmatrix} 0_{p\times n} & E_i \end{bmatrix}, \quad \Xi_{21} = \begin{bmatrix} A_{di} & 0_{n\times n} \\ 0_{n\times n} & A_{di} \end{bmatrix}, \quad \Xi_{22} = \begin{bmatrix} 0_{p\times n} & E_{di} \end{bmatrix}.$$

Proof By the Schur complement lemma, Eq. (4.42) in Proposition 4.2 can be rewritten as

$$\begin{bmatrix} -(1+\alpha)\tilde P_{i,\pi_{ij}^{(\xi_k)}} + \tilde Q & * & * & * \\ 0 & -\tilde Q & * & * \\ 0 & 0 & -(1+\alpha)S & * \\ \bar A_i & \bar A_{di} & \bar B_{wi} & -\tilde X_{i,\pi_{ij}^{(\xi_k)}} \end{bmatrix} \le 0 \quad (4.53)$$

where

$$\bar A_i = \Big[ \sqrt{E\{\pi_{i1}^{(\xi_k)}\}}\,\tilde A_i^T,\ \ldots,\ \sqrt{E\{\pi_{iM}^{(\xi_k)}\}}\,\tilde A_i^T \Big]^T,$$

$$\bar A_{di} = \Big[ \sqrt{E\{\pi_{i1}^{(\xi_k)}\}}\,\tilde A_{di}^T,\ \ldots,\ \sqrt{E\{\pi_{iM}^{(\xi_k)}\}}\,\tilde A_{di}^T \Big]^T,$$

$$\bar B_{wi} = \Big[ \sqrt{E\{\pi_{i1}^{(\xi_k)}\}}\,\tilde B_{wi}^T,\ \ldots,\ \sqrt{E\{\pi_{iM}^{(\xi_k)}\}}\,\tilde B_{wi}^T \Big]^T,$$

$$\tilde X_{i,\pi_{ij}^{(\xi_k)}} = \operatorname{diag}\big\{ \tilde P_{1,\pi_{ij}^{(\xi_k)}}^{-1},\ \ldots,\ \tilde P_{M,\pi_{ij}^{(\xi_k)}}^{-1} \big\}.$$

After simple calculation, we have

$$\tilde A_i = \begin{bmatrix} A_i + B_{ui} K_{i,\pi_{ij}^{(\xi_k)}} & -B_{ui} K_{i,\pi_{ij}^{(\xi_k)}} \\ 0 & A_i - H_{i,\pi_{ij}^{(\xi_k)}} E_i \end{bmatrix} = \begin{bmatrix} A_i & 0_{n\times n} \\ 0_{n\times n} & A_i \end{bmatrix} + \begin{bmatrix} B_{ui} \\ 0_{n\times m} \end{bmatrix} K_{i,\pi_{ij}^{(\xi_k)}} \begin{bmatrix} I_{n\times n} & -I_{n\times n} \end{bmatrix} + \begin{bmatrix} 0_{n\times n} \\ -I_{n\times n} \end{bmatrix} H_{i,\pi_{ij}^{(\xi_k)}} \begin{bmatrix} 0_{p\times n} & E_i \end{bmatrix} \quad (4.54)$$

$$\tilde A_{di} = \begin{bmatrix} A_{di} + B_{ui} K_{di,\pi_{ij}^{(\xi_k)}} & -B_{ui} K_{di,\pi_{ij}^{(\xi_k)}} \\ 0 & A_{di} - H_{i,\pi_{ij}^{(\xi_k)}} E_{di} \end{bmatrix} = \begin{bmatrix} A_{di} & 0_{n\times n} \\ 0_{n\times n} & A_{di} \end{bmatrix} + \begin{bmatrix} B_{ui} \\ 0_{n\times m} \end{bmatrix} K_{di,\pi_{ij}^{(\xi_k)}} \begin{bmatrix} I_{n\times n} & -I_{n\times n} \end{bmatrix} + \begin{bmatrix} 0_{n\times n} \\ -I_{n\times n} \end{bmatrix} H_{i,\pi_{ij}^{(\xi_k)}} \begin{bmatrix} 0_{p\times n} & E_{di} \end{bmatrix}. \quad (4.55)$$

Substituting Eqs. (4.54)–(4.55) into Eq. (4.53), and denoting $\tilde X_{i,\pi_{ij}^{(\xi_k)}} = \tilde P_{i,\pi_{ij}^{(\xi_k)}}^{-1}$, Eqs. (4.49) and (4.50) in Theorem 4.3 can be derived.
On the other hand,

$$\lambda_{\max}(\hat X_{i,\pi_{ij}^{(\xi_k)}}) = \frac{1}{\lambda_{\min}(\hat P_{i,\pi_{ij}^{(\xi_k)}})},$$

and

$$\hat X_{i,\pi_{ij}^{(\xi_k)}} = \hat P_{i,\pi_{ij}^{(\xi_k)}}^{-1} = \tilde G^{1/2} \tilde X_{i,\pi_{ij}^{(\xi_k)}} \tilde G^{1/2}.$$

Equation (4.43) implies that

$$\frac{c_1^2}{\min_{i\in M}\big\{\lambda_{\min}\big(\hat X_{i,\pi_{ij}^{(\xi_k)}}\big)\big\}} + c_1^2 h \lambda_{\max}(\hat Q) + d^2 \lambda_{\max}(S) < \frac{c_2^2}{\max_{i\in M}\big\{\lambda_{\max}\big(\hat X_{i,\pi_{ij}^{(\xi_k)}}\big)\big\}(1+\alpha)^N}.$$

The above condition is satisfied under the following assumptions:

$$0 < \lambda_{\min}(\hat Q), \quad \lambda_{\max}(\hat Q) < \lambda_2, \quad \lambda_{\max}(\hat X_{i,\pi_{ij}^{(\xi_k)}}) < 1,$$

$$\lambda_1 < \lambda_{\min}(\hat X_{i,\pi_{ij}^{(\xi_k)}}), \quad 0 < \lambda_{\min}(S), \quad \lambda_{\max}(S) < \lambda_3,$$

$$\frac{c_1^2}{\lambda_1} + c_1^2 h \lambda_2 + d^2 \lambda_3 < \frac{c_2^2}{(1+\alpha)^N},$$

which are equivalent to Eqs. (4.51) and (4.52). Thus the proof is completed.
Remark 4.1 It should be pointed out that the derived inequalities in Theorem 4.3 are not strict LMIs because of the equality constraint (4.50). With the same algorithm mentioned in Chap. 3 [10], the original non-convex feasibility problem can be converted into a sequence of feasible LMI problems.

4.6 Simulation Analysis

In this subsection, two examples will be presented to illustrate the effectiveness and
validity of the obtained results.

Example 4.1 An example from reference [11] is adopted here, which applies a discrete-time MJS to an economic system to discuss income measurement and market period problems. For the specific model parameters, refer to [11].
The following Gaussian PDF matrix is introduced to represent the TP matrix:

$$N = \begin{bmatrix} n(0.67, \sigma) & n(0.17, \sigma) & n(0.16, \sigma) \\ n(0.30, \sigma) & n(0.47, \sigma) & n(0.23, \sigma) \\ n(0.26, \sigma) & n(0.10, \sigma) & n(0.64, \sigma) \end{bmatrix},$$

where the values of the mean are taken from the components of corresponding TP
matrix, and the same value of variance for different components is utilized to simplify
the discussion.

Table 4.1 TP matrix with different variance values

$$\sigma = 0.01:\ \Pi = \begin{bmatrix} 0.6558 & 0.1761 & 0.1681 \\ 0.2995 & 0.4685 & 0.2321 \\ 0.2537 & 0.1250 & 0.6213 \end{bmatrix}, \qquad \sigma = 0.05:\ \Pi = \begin{bmatrix} 0.5578 & 0.2235 & 0.2187 \\ 0.3068 & 0.4293 & 0.2639 \\ 0.2714 & 0.1918 & 0.5368 \end{bmatrix},$$

$$\sigma = 0.1:\ \Pi = \begin{bmatrix} 0.4848 & 0.2594 & 0.2557 \\ 0.3167 & 0.3964 & 0.2869 \\ 0.2949 & 0.2343 & 0.4708 \end{bmatrix}, \qquad \sigma = 0.2:\ \Pi = \begin{bmatrix} 0.4201 & 0.2911 & 0.2887 \\ 0.3244 & 0.3688 & 0.3068 \\ 0.3132 & 0.2746 & 0.4122 \end{bmatrix}.$$

From formula (4.5), it can be immediately derived that for $\sigma = 0$, we have $N = \Pi$. On the contrary, when $\sigma \to \infty$, the uncertainty $\Delta(\pi_{ij}^{(\xi_k)}) \to \infty$. Therefore, with the help of the Gaussian PDF, the uncertain TPs addressed in this chapter include the two extreme cases, namely completely known and completely unknown TPs, as particular cases.
Another advantage of applying the Gaussian PDF to represent the TPs of MJSs is that we can set different values of the variance to quantify the corresponding possibility of the TPs occurring at a provided constant. The size of the uncertainty is utilized to express the degree of possibility:

$$\Delta\hat\pi_{ij}^{(\xi_k)} = \frac{f\!\left(\dfrac{0-\mu_{ij}}{\sqrt{\sigma_{ij}}}\right) - f\!\left(\dfrac{1-\mu_{ij}}{\sqrt{\sigma_{ij}}}\right)}{F\!\left(\dfrac{0-\mu_{ij}}{\sqrt{\sigma_{ij}}}\right) - F\!\left(\dfrac{1-\mu_{ij}}{\sqrt{\sigma_{ij}}}\right)}\sqrt{\sigma_{ij}}.$$

From the above equation, it can be seen that larger values of $\sigma$ lead to greater uncertainty of the TPs; that is, the probability of the TPs taking a given constant value is smaller. Table 4.1 displays the relevant TP matrices for different variance values. It can be clearly seen from Table 4.1 that as the variance increases, the uncertainty of the TP matrix grows.
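One simple way to realize such a variance-dependent TP matrix numerically is to sample each entry from the corresponding Gaussian $n(\mu_{ij}, \sigma)$ truncated to $[0, 1]$ and then normalize each row to sum to one. The sketch below is only an illustration of this construction and does not reproduce the exact normalization behind Table 4.1; here `sigma` is treated as a standard deviation, and the means are the rows of $N$ in Example 4.1:

```python
import random

def sample_tp_row(means, sigma, rng):
    """Sample one TP row: truncated-Gaussian draws on [0, 1], then row-normalized.
    Illustrative construction only; sigma is used as a standard deviation."""
    row = []
    for mu in means:
        v = rng.gauss(mu, sigma)
        while not 0.0 <= v <= 1.0:        # rejection step: truncate to [0, 1]
            v = rng.gauss(mu, sigma)
        row.append(v)
    s = sum(row)
    return [v / s for v in row]

rng = random.Random(0)
means = [[0.67, 0.17, 0.16], [0.30, 0.47, 0.23], [0.26, 0.10, 0.64]]
tp = [sample_tp_row(m, 0.05, rng) for m in means]
for row in tp:
    assert all(0.0 <= p <= 1.0 for p in row) and abs(sum(row) - 1.0) < 1e-9
```

As the variance grows, sampled rows scatter farther from the nominal means, mirroring the trend visible in Table 4.1.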
To verify the validity of the observer-based finite-time controller design, the fol-
lowing parameters are provided:
     
$$A_1 = \begin{bmatrix} 0 & 1 \\ -2.5 & 3.2 \end{bmatrix}, \quad A_2 = \begin{bmatrix} 0 & 1 \\ -43.7 & 45.7 \end{bmatrix}, \quad A_3 = \begin{bmatrix} 0 & 1 \\ 5.3 & -5.2 \end{bmatrix},$$

$$E_1 = \begin{bmatrix} 1.5477 & -1.0976 \\ -1.0976 & 1.9145 \\ 0 & 0 \end{bmatrix}, \quad E_2 = \begin{bmatrix} 3.1212 & -0.5082 \\ -0.5082 & 2.7824 \\ 0 & 0 \end{bmatrix}, \quad E_3 = \begin{bmatrix} 1.8385 & -1.2728 \\ -1.2728 & 1.6971 \\ 0 & 0 \end{bmatrix},$$

$$B_{u1} = B_{u2} = B_{u3} = \begin{bmatrix} 0 & 1 \end{bmatrix}^T, \qquad B_{w1} = B_{w2} = B_{w3} = \begin{bmatrix} 0 & 0.2 \end{bmatrix}^T,$$

$$A_{d1} = A_1,\ A_{d2} = A_2,\ A_{d3} = A_3, \qquad E_{d1} = E_1,\ E_{d2} = E_2,\ E_{d3} = E_3.$$

Letting $\sigma = 0.1$, $c_1 = 1$, $c_2 = 4$, $h = 1$, $d^2 = 1$, $N = 7$, the observer and controller gains are calculated as:

$$K_{1,\pi_{ij}^{(\xi_k)}} = \begin{bmatrix} -0.0723 & -1.5839 \end{bmatrix}, \quad K_{2,\pi_{ij}^{(\xi_k)}} = \begin{bmatrix} -0.5336 & -22.7618 \end{bmatrix}, \quad K_{3,\pi_{ij}^{(\xi_k)}} = \begin{bmatrix} -0.5026 & 0.9163 \end{bmatrix},$$

$$K_{d1,\pi_{ij}^{(\xi_k)}} = \begin{bmatrix} 0.3752 & -0.0344 \end{bmatrix}, \quad K_{d2,\pi_{ij}^{(\xi_k)}} = \begin{bmatrix} -0.5335 & -22.7511 \end{bmatrix}, \quad K_{d3,\pi_{ij}^{(\xi_k)}} = \begin{bmatrix} -0.5026 & 0.9157 \end{bmatrix},$$

$$H_{1,\pi_{ij}^{(\xi_k)}} = \begin{bmatrix} -1.5579 & -2.1878 & 0 \\ 2.5927 & 3.1712 & 0 \end{bmatrix}, \quad H_{2,\pi_{ij}^{(\xi_k)}} = \begin{bmatrix} -2.6357 & -16.1875 & 0 \\ 3.0684 & 16.8775 & 0 \end{bmatrix}, \quad H_{3,\pi_{ij}^{(\xi_k)}} = \begin{bmatrix} 4.4970 & 6.4957 & 0 \\ -3.2808 & -5.5246 & 0 \end{bmatrix}.$$

Example 4.2 Consider the system (4.38) with three operation modes and the following parameters:

$$A_1 = \begin{bmatrix} 0.88 & -0.05 \\ 0.40 & -0.72 \end{bmatrix}, \quad A_2 = \begin{bmatrix} 2 & 0.24 \\ 0.80 & 0.32 \end{bmatrix}, \quad A_3 = \begin{bmatrix} -0.8 & 0.16 \\ 0.80 & 0.64 \end{bmatrix},$$

$$A_{d1} = \begin{bmatrix} -0.2 & 0.1 \\ 0.2 & 0.15 \end{bmatrix}, \quad A_{d2} = \begin{bmatrix} -0.6 & 0.4 \\ 0.2 & 0.5 \end{bmatrix}, \quad A_{d3} = \begin{bmatrix} -0.3 & 0.1 \\ 0.2 & 0.3 \end{bmatrix},$$

$$B_{u1} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}, \quad B_{u2} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \quad B_{u3} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad B_{w1} = \begin{bmatrix} 0.4 \\ 0.5 \end{bmatrix}, \quad B_{w2} = \begin{bmatrix} 0.2 \\ 0.6 \end{bmatrix}, \quad B_{w3} = \begin{bmatrix} 0.1 \\ 0.3 \end{bmatrix},$$

$$E_1 = \begin{bmatrix} 0.2 & 0.1 \end{bmatrix}, \quad E_2 = \begin{bmatrix} 0.3 & 0.4 \end{bmatrix}, \quad E_3 = \begin{bmatrix} -0.1 & 0.2 \end{bmatrix},$$

$$E_{d1} = \begin{bmatrix} 0.03 & -0.05 \end{bmatrix}, \quad E_{d2} = \begin{bmatrix} 0.1 & 0.2 \end{bmatrix}, \quad E_{d3} = \begin{bmatrix} -0.03 & 0.05 \end{bmatrix}.$$

The Gaussian PDF matrix is assumed as:

$$N = \begin{bmatrix} n(0.2, 0.05) & n(0.3, 0.05) & n(0.5, 0.05) \\ n(0.2, 0.05) & n(0.4, 0.05) & n(0.4, 0.05) \\ n(0.2, 0.05) & n(0.6, 0.05) & n(0.2, 0.05) \end{bmatrix}.$$

The corresponding TP matrix can be calculated as:

$$\Pi = \begin{bmatrix} 0.6558 & 0.1761 & 0.1681 \\ 0.2995 & 0.4685 & 0.2321 \\ 0.2537 & 0.1250 & 0.6213 \end{bmatrix}.$$
The initial state and mode are denoted as $x_0 = \begin{bmatrix} -0.3 & 0.4 \end{bmatrix}^T$, $r_0 = 1$, respectively. Setting $c_1^2 = 0.5$, $c_2^2 = 2.2$, $h = 1$, $d = 1$, $\alpha = 0.4$, $G = \operatorname{diag}\{1, 1\}$ and $N = 7$, the observer and controller gains are derived as follows:

$$K_{1,\pi_{ij}^{(\xi_k)}} = \begin{bmatrix} -0.0014 & -0.1187 \end{bmatrix}, \quad K_{2,\pi_{ij}^{(\xi_k)}} = \begin{bmatrix} -0.3541 & 0.1265 \end{bmatrix}, \quad K_{3,\pi_{ij}^{(\xi_k)}} = \begin{bmatrix} 0.1793 & -0.4536 \end{bmatrix},$$

$$K_{d1,\pi_{ij}^{(\xi_k)}} = \begin{bmatrix} -0.0219 & 0.2912 \end{bmatrix}, \quad K_{d2,\pi_{ij}^{(\xi_k)}} = \begin{bmatrix} 0.1504 & 0.1833 \end{bmatrix}, \quad K_{d3,\pi_{ij}^{(\xi_k)}} = \begin{bmatrix} -0.0828 & 0.4602 \end{bmatrix},$$

$$H_{1,\pi_{ij}^{(\xi_k)}} = \begin{bmatrix} -3.8188 \\ -2.4398 \end{bmatrix}, \quad H_{2,\pi_{ij}^{(\xi_k)}} = \begin{bmatrix} 0.5472 \\ 3.6000 \end{bmatrix}, \quad H_{3,\pi_{ij}^{(\xi_k)}} = \begin{bmatrix} 5.0641 \\ 4.7931 \end{bmatrix}.$$

The mode route is created randomly and given in Fig. 4.2. The free and controlled discrete-time MJS state trajectories are illustrated in Figs. 4.3 and 4.4, respectively. It can be observed that the closed-loop MJS (4.40) is stochastic finite-time stable and the state trajectory is kept within the prescribed bound $c_2$.

Fig. 4.2 Jump modes (randomly generated jumping-mode trajectory; vertical axis: jumping modes, horizontal axis: time)

Fig. 4.3 State trajectory of the free MJS (axes: x1, x2)

Fig. 4.4 State trajectory under finite-time control (axes: x1, x2)

4.7 Conclusion

Unlike the works presented in the preceding chapters, the main purpose of this chapter is to establish a uniform framework for the finite-time performance of a class of discrete-time MJSs with non-homogeneous TPs. The time-varying properties of the TPs are characterized by a Gaussian transition probability density function. Then sufficient conditions guaranteeing finite-time stabilization are obtained, in the form of LMIs, for all admissible unknown external disturbances and random time-varying TPs. In the following chapters, in addition to the finite-time performance, other control schemes such as passive control, sliding mode control, finite-frequency control, consensus control, model predictive control, and so on will be considered for discrete-time MJSs.

References

1. Shi, P., Boukas, E.K., Agarwal, R.: Kalman filtering for continuous-time uncertain systems
with Markovian jumping parameters. IEEE Trans. Autom. Control 44(8), 1592–1597 (1999)
2. Luan, X.L., Liu, F., Shi, P.: Finite-time filtering for nonlinear stochastic systems with partially
known transition jump rates. IET Control Theor. Appl. 4(5), 735–745 (2010)
3. Wu, L., Shi, P., Gao, H.: State estimation and sliding mode control of Markovian jump singular
systems. IEEE Trans. Autom. Control 55(5), 1213–1219 (2010)
4. Ghaoui, L., Rami, M.A.: Robust state-feedback stabilization of jump linear systems via LMIs.
Int. J. Robust Nonlinear Control 6(9–10), 1015–1022 (1996)
5. Costa, O., Val, J., Geromel, J.: Continuous-time state-feedback H2 control of Markovian jump
linear system via convex analysis. Automatica 35, 259–268 (1999)
6. Xiong, J.L., Lam, J., Gao, H.J., Ho, D.W.C.: On robust stabilization of Markovian jump systems
with uncertain switching probabilities. Automatica 41(5), 897–903 (2005)

7. Xiong, J.L., Lam, J.: Fixed-order robust H∞ filter design for Markovian jump systems with
uncertain switching probabilities. IEEE Trans. Signal Process 54(4), 1421–1430 (2006)
8. Zhang, L.X., Boukas, E.K., Lam, J.: Analysis and synthesis of Markovian jump linear systems
with time-varying delays and partially known transition probabilities. IEEE Trans. Autom.
Control 53(10), 2458–2464 (2008)
9. Zhang, L.X., Boukas, E.K.: Stability and stabilization of Markovian jump linear systems with
partly unknown transition probability. Automatica 45(2), 463–468 (2009)
10. He, Y., Wu, M., Liu, G.P., She, J.H.: Output feedback stabilization for a discrete-time system
with a time-varying delay. IEEE Trans. Autom. Control 53(10), 2372–2377 (2008)
11. Costa, O., Assumpcão, E.O., Boukas, E.K., Marques, R.P.: Constrained quadratic state feedback
control of discrete-time Markovian jump linear systems. Automatica 35(4), 617–626 (1999)
Chapter 5
Asynchronous Finite-Time Passive
Control for Discrete-Time Markovian
Jump Systems

Abstract This chapter focuses on the finite-time passive controller design scheme
for discrete-time Markovian jump systems (MJSs). Firstly, a finite-time passive con-
troller is proposed to guarantee that the closed-loop system is finite-time bounded
and meets the desired passive performance requirement simultaneously under ideal
conditions. Then, considering the more practical situation that the controller’s mode
is not synchronized with the system mode, an asynchronous finite-time passive con-
troller is planned, which is for the more general hidden MJSs. Finally, by adopting
the controller gains solved by the linear matrix inequalities (LMIs), one simulation
example is presented to verify that the designed two controllers are feasible and
effective.

5.1 Introduction

Markovian jump systems (MJSs) are special stochastic hybrid systems that provide a unified modeling framework for many engineering fields, such as flight control [1] and finance [2]. However, considering that the controller mode is not always synchronized with the system mode, the hidden Markovian model is introduced to handle this asynchrony. In [3], an asynchronous controller was designed for fuzzy MJSs. Then, the relevant asynchronous filtering problems were reviewed in [4, 5].
As a special case of dissipativity theory [6], passivity provides a new method for studying system stability and supplies candidates for constructing Lyapunov functions of complex systems through the energy storage function [7], which is of great significance in modern control theory. There have been many achievements in passivity analysis. The authors designed the passive controller for MJSs in [8, 9].
passivity analysis. The authors designed the passive controller for MJSs in [8, 9].
The passive filter design for MJSs was also investigated in [10, 11].
On the other hand, although significant progress has been made on finite-time control issues for MJSs, most existing results analyze the stabilization problem or the H∞ performance. Finite-time stabilization devises a controller to make the system satisfy a transient performance specification within the available limited time, and finite-time H∞ control means that the system not only meets the expected transient
performance, but also has the expected anti-interference ability in the finite-time domain.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
X. Luan et al., Robust Control for Discrete-Time Markovian Jump Systems in the Finite-Time Domain, Lecture Notes in Control and Information Sciences 492, https://doi.org/10.1007/978-3-031-22182-8_5

The authors addressed the finite-time stabilization problem for MJSs in
domain. The authors addressed the finite-time stabilization problem for MJSs in
[12–14]. In [15–17], the finite-time H∞ controller was designed such that the closed-
loop system is stochastic finite-time stabilizable and meets the H∞ performance.
Although a controller with stabilizing and disturbance-rejecting ability can produce good performance, more results need to be derived from the perspective of the system's internal stability and energy relationships. Therefore, the combination of finite-time control and passive control is of great significance, which motivates the study of finite-time passive control (FTPC) in this chapter. Moreover, the asynchronous FTPC problem is considered to make hidden MJSs stochastic finite-time bounded with passive performance.

5.2 Finite-Time Passive Control

Consider the following discrete-time MJS:

$$\begin{cases} x(k+1) = A(r_k)x(k) + B_u(r_k)u(k) + B_w(r_k)w(k) \\ z(k) = C(r_k)x(k) + D_w(r_k)w(k) \\ x(k) = x_0,\ r_k = r_0,\ k = 0 \end{cases} \quad (5.1)$$

where the state variable, the control input, the control output, and the exogenous
disturbances are the same as those defined in the preceding chapters. The system
parameters such as πi j , A(rk ), Bu (rk ), Bw (rk ), C(rk ), and Dw (rk ) are denoted as
those in Chap. 2.
In this subsection, an FTPC will be designed to make the MJS (5.1) finite-time bounded (FTB) and passive. The controller is designed as:

$$u(k) = K_i x(k). \quad (5.2)$$

Combining the MJS (5.1) and the controller (5.2) yields the following closed-loop MJS:

$$\begin{cases} x(k+1) = (A_i + B_{ui}K_i)x(k) + B_{wi}w(k) \\ z(k) = C_i x(k) + D_{wi}w(k). \end{cases} \quad (5.3)$$

Definition 5.1 [8] For given parameters $0 < c_1 < c_2$, $N$, $R > 0$, the closed-loop MJS (5.3) is stochastic finite-time stabilizable and satisfies the required passive performance index $\gamma$, if the following inequality holds under the zero initial condition:

$$E\Big\{ \sum_{k=0}^{N} w^T(k)z(k) \Big\} > \gamma^2 E\Big\{ \sum_{k=0}^{N} w^T(k)w(k) \Big\}. \quad (5.4)$$
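The passivity index (5.4) compares the accumulated input–output "energy supply" $\sum w^T z$ with $\gamma^2 \sum w^T w$ over the horizon. The sketch below evaluates both sides of (5.4) along a simulated trajectory of a hypothetical scalar closed-loop system (all constants are illustrative, not the chapter's):

```python
import math

# Hypothetical scalar closed-loop data: x(k+1) = abar*x + bw*w, z = c*x + dw*w.
abar, bw, c, dw = 0.5, 0.2, 0.1, 0.8
gamma, N = 0.7, 30

x = 0.0                                  # zero initial condition, as in (5.4)
supply, energy = 0.0, 0.0
for k in range(N + 1):
    w = math.cos(0.3 * k)                # bounded disturbance
    z = c * x + dw * w
    supply += w * z                      # accumulates sum_k w(k)' z(k)
    energy += w * w                      # accumulates sum_k w(k)' w(k)
    x = abar * x + bw * w

passive = supply > gamma ** 2 * energy   # inequality (5.4)
print(passive)
```

For this data the feedthrough term dominates, so the supplied energy exceeds $\gamma^2$ times the disturbance energy and the index is met.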

Thus, the tasks for FTPC design for MJSs in this subsection are summarized as:
design a FTPC (5.2) for the MJS (5.1) such that the closed-loop MJS (5.3) is stochastic
finite-time stabilizable with respect to (c1 c2 N R d) and meets the desired passive
performance simultaneously.
Then, the following theorem is given to guarantee the finite-time boundedness
and passivity of the closed-loop MJS (5.3).

Theorem 5.1 For given scalars $\alpha \ge 1$ and $\delta$, the closed-loop system (5.3) is stochastic finite-time stabilizable with respect to $(c_1\ c_2\ N\ R\ d)$ and satisfies the passive performance index $\gamma$, if there exist matrices $K_i$ and symmetric positive-definite matrices $P_i > 0$ such that

$$\begin{bmatrix} -\alpha P_i & 0 & \Theta_1 & \Theta_2 & \cdots & \Theta_M \\ * & -I & \Lambda_1 & \Lambda_2 & \cdots & \Lambda_M \\ * & * & -P_1^{-1} & 0 & \cdots & 0 \\ * & * & * & -P_2^{-1} & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ * & * & * & * & \cdots & -P_M^{-1} \end{bmatrix} < 0 \quad (5.5)$$

$$\begin{bmatrix} -\alpha P_i & -C_i^T & \Theta_1 & \Theta_2 & \cdots & \Theta_M \\ * & 2\gamma^2 I - D_{wi} - D_{wi}^T & \Lambda_1 & \Lambda_2 & \cdots & \Lambda_M \\ * & * & -P_1^{-1} & 0 & \cdots & 0 \\ * & * & * & -P_2^{-1} & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ * & * & * & * & \cdots & -P_M^{-1} \end{bmatrix} < 0 \quad (5.6)$$

$$R < P_i < \delta R \quad (5.7)$$

$$\alpha^N \Big( c_1 \delta + \frac{d^2}{\alpha - 1} \Big) < c_2 \quad (5.8)$$

where

$$\Theta_j = \sqrt{\pi_{ij}}\,\bar A_i^T, \qquad \Lambda_j = \sqrt{\pi_{ij}}\,B_{wi}^T, \qquad \bar A_i = A_i + B_{ui}K_i.$$

Proof Consider a Lyapunov candidate function as: V (x(k)) = x T (k)Pi x(k). Then,
we have

$$\begin{aligned} E\{V(x(k+1))\} - \alpha V(x(k)) &= E\Big\{ x^T(k+1) \sum_{j=1}^{M} \pi_{ij} P_j\, x(k+1) \Big\} - \alpha x^T(k)P_i x(k) \\ &= \eta^T(k) \begin{bmatrix} \bar A_i^T \sum_{j=1}^{M} \pi_{ij} P_j \bar A_i - \alpha P_i & \bar A_i^T \sum_{j=1}^{M} \pi_{ij} P_j B_{wi} \\ * & B_{wi}^T \sum_{j=1}^{M} \pi_{ij} P_j B_{wi} \end{bmatrix} \eta(k) \end{aligned} \quad (5.9)$$

where $\eta^T(k) = \big[ x^T(k) \ \ w^T(k) \big]$.
Meanwhile, Eq. (5.9) can be converted to

$$E\{V(x(k+1))\} - \alpha V(x(k)) = \eta^T(k)\,\Omega_1\,\eta(k) + w^T(k)w(k) \quad (5.10)$$

where

$$\Omega_1 = \begin{bmatrix} \bar A_i^T \sum_{j=1}^{M} \pi_{ij} P_j \bar A_i - \alpha P_i & \bar A_i^T \sum_{j=1}^{M} \pi_{ij} P_j B_{wi} \\ * & B_{wi}^T \sum_{j=1}^{M} \pi_{ij} P_j B_{wi} - I \end{bmatrix}.$$

Employing the Schur complement lemma to Eq. (5.5) indicates that $\Omega_1 < 0$, which means

$$E\{V(x(k+1))\} < \alpha V(x(k)) + w^T(k)w(k). \quad (5.11)$$

For k ∈ {1, 2, . . . , N }, it follows from Eq. (5.11) that:

E {V (x(k))} < αV (x(k − 1)) + wT (k − 1)w(k − 1),


E {V (x(k − 1))} < αV (x(k − 2)) + wT (k − 2)w(k − 2),
..
.
E {V (x(1))} < αV (x(0)) + wT (0)w(0).

By iteration, it gives that

$$E\{V(x(k))\} < \alpha^k V(x(0)) + \sum_{l=0}^{k-1} \alpha^{k-1-l} w^T(l)w(l) < \alpha^N V(x(0)) + \sum_{l=0}^{N-1} \alpha^{N-1-l} w^T(l)w(l). \quad (5.12)$$

Notice that $\sum_{k=0}^{N} w^T(k)w(k) \le d^2$; then the above inequality can be changed to

$$E\{V(x(k))\} = E\{x^T(k)P_i x(k)\} < \alpha^N \Big( V(x(0)) + \frac{d^2}{\alpha - 1} \Big). \quad (5.13)$$

Furthermore, we have

$$E\{x^T(k)Rx(k)\} < \alpha^N\, \frac{\sigma_{\max}(R^{-1/2}P_iR^{-1/2})\, x^T(0)Rx(0) + \dfrac{d^2}{\alpha-1}}{\sigma_{\min}(R^{-1/2}P_iR^{-1/2})}. \quad (5.14)$$

It can be seen from inequality (5.7) that $\sigma_{\max}(R^{-1/2}P_iR^{-1/2}) < \delta$ and $\sigma_{\min}(R^{-1/2}P_iR^{-1/2}) > 1$ hold, which means

$$E\{x^T(k)Rx(k)\} < \alpha^N \Big( c_1\delta + \frac{d^2}{\alpha-1} \Big).$$
 
According to condition (5.8), it can be further deduced that E x T (k)Rx(k) < c2 .
Based on Definition 2.1, the finite-time boundedness of the closed-loop MJS (5.3)
is proved.
Next, the passive performance of the closed-loop MJS (5.3) will be analyzed.
From the closed-loop MJS (5.3), one has

$$2\gamma^2 w^T(k)w(k) - 2w^T(k)z(k) = \big[ x^T(k) \ \ w^T(k) \big] \begin{bmatrix} 0 & -C_i^T \\ * & 2\gamma^2 I - D_{wi}^T - D_{wi} \end{bmatrix} \begin{bmatrix} x(k) \\ w(k) \end{bmatrix}. \quad (5.15)$$

Then, following similar steps to the first part, one has

$$E\{V(x(k+1))\} - \alpha V(x(k)) + 2\gamma^2 w^T(k)w(k) - 2w^T(k)z(k) = \eta^T(k)\,\Omega_2\,\eta(k) \quad (5.16)$$

where

$$\Omega_2 = \begin{bmatrix} \bar A_i^T \sum_{j=1}^{M} \pi_{ij} P_j \bar A_i - \alpha P_i & \bar A_i^T \sum_{j=1}^{M} \pi_{ij} P_j B_{wi} - C_i^T \\ * & B_{wi}^T \sum_{j=1}^{M} \pi_{ij} P_j B_{wi} + 2\gamma^2 I - D_{wi}^T - D_{wi} \end{bmatrix}.$$

By condition (5.6), one obtains $\Omega_2 < 0$. Then, similar to the iteration (5.12), it yields

$$E\{V(x(k))\} < E\Big\{ \alpha^N V(x(0)) - 2\gamma^2 \sum_{l=0}^{N-1} \alpha^{N-1-l} w^T(l)w(l) + 2\sum_{l=0}^{N-1} \alpha^{N-1-l} w^T(l)z(l) \Big\}. \quad (5.17)$$

By the zero initial condition and $V(x(k)) \ge 0$, we have

$$E\Big\{ \sum_{l=0}^{N-1} \alpha^{N-1-l} w^T(l)z(l) \Big\} > \gamma^2 E\Big\{ \sum_{l=0}^{N-1} \alpha^{N-1-l} w^T(l)w(l) \Big\}. \quad (5.18)$$

For $\alpha \ge 1$, the above inequality results in

$$E\Big\{ \sum_{k=0}^{N} w^T(k)z(k) \Big\} > \gamma^2 E\Big\{ \sum_{k=0}^{N} w^T(k)w(k) \Big\}. \quad (5.19)$$

Based on Definition 2.1, the closed-loop MJS (5.3) is stochastic finite-time stabilizable and satisfies the passive performance index. The proof is completed. □

Next, Theorem 5.2 will be adopted to design the corresponding FTPC for the
closed-loop MJS (5.3).
Theorem 5.2 For a given scalar $\alpha \ge 1$, the closed-loop MJS (5.3) is stochastic finite-time stabilizable with respect to $(c_1\ c_2\ N\ R\ d)$ and satisfies the prescribed passive performance index $\gamma$, if there exist matrices $G_i$ and real symmetric matrices $W_i > 0$ such that inequality (5.8) and the following conditions hold:
$$\begin{bmatrix} -\alpha W_i & 0 & \Phi_1 & \Phi_2 & \cdots & \Phi_M \\ * & -I & \Lambda_1 & \Lambda_2 & \cdots & \Lambda_M \\ * & * & -W_1 & 0 & \cdots & 0 \\ * & * & * & -W_2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ * & * & * & * & \cdots & -W_M \end{bmatrix} < 0 \quad (5.20)$$

$$\begin{bmatrix} -\alpha W_i & -W_iC_i^T & \Phi_1 & \Phi_2 & \cdots & \Phi_M \\ * & 2\gamma^2 I - D_{wi} - D_{wi}^T & \Lambda_1 & \Lambda_2 & \cdots & \Lambda_M \\ * & * & -W_1 & 0 & \cdots & 0 \\ * & * & * & -W_2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ * & * & * & * & \cdots & -W_M \end{bmatrix} < 0 \quad (5.21)$$

$$W_i - R^{-1} < 0 \quad (5.22)$$

$$\begin{bmatrix} -W_i & I \\ * & -\delta R \end{bmatrix} < 0 \quad (5.23)$$

where

$$W_i = P_i^{-1}, \qquad \Phi_j = \sqrt{\pi_{ij}}\,\big[ W_iA_i^T + G_i^TB_{ui}^T \big], \qquad \Lambda_j = \sqrt{\pi_{ij}}\,B_{wi}^T.$$

Moreover, the desired FTPC gains are calculated as $K_i = G_iW_i^{-1}$.


 
Proof Pre- and post-multiplying inequalities (5.5)–(5.6) by $\operatorname{diag}\{P_i^{-1}, I, I, \ldots, I\}$, respectively, and letting $G_i^T = P_i^{-1}K_i^T$ and $W_i = P_i^{-1}$, inequalities (5.5)–(5.6) can be transformed into Eqs. (5.20)–(5.21). Furthermore, $P_i > R$ is transformed into $W_i - R^{-1} < 0$, which is reflected by Eq. (5.22), and $P_i < \delta R$ is transformed into Eq. (5.23) by the Schur complement lemma. Thus, Eqs. (5.20)–(5.23) are the solvable form of matrix inequalities (5.5)–(5.7), and the FTPC gains are given by $K_i = G_iW_i^{-1}$. The proof is completed. □

From the results shown above, the closed-loop MJS (5.3) can be made stochastic
finite-time stabilizable and passive by designing the FTPC, under the
assumption that the controller mode and the system mode are synchronized. However,
this assumption cannot be maintained in some practical applications, which
prompts us to use the hidden Markovian model to design asynchronous controllers in
the next subsection.

5.3 Asynchronous Finite-Time Passive Control

In this subsection, based on the hidden Markovian model, an asynchronous FTPC
will be designed to make the closed-loop MJSs stochastic FTB and passive. The
controller is designed as:

u(k) = Kq x(k)                                                             (5.24)

where q is governed by another Markovian chain {φk, k ≥ 0} taking values in
Q = {1, 2, . . . , Q}. The conditional probability matrix Φ = [φiq] is given by:

φiq = P{φk = q | rk = i}                                                   (5.25)

where φiq ≥ 0 and Σ_{q=1}^{Q} φiq = 1.
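To make the role of the conditional probabilities concrete, the sketch below draws the controller mode q from the row distribution φ_i of an illustrative matrix Φ by inverse-transform sampling. The matrix here is hypothetical; the only requirements are nonnegative entries and rows summing to one.

```python
# Sketch of drawing the controller mode q from the hidden Markovian chain:
# given the system mode r_k = i, q follows the conditional distribution
# phi[i] = (phi_i1, ..., phi_iQ) of Eq. (5.25). The matrix below is
# illustrative only.

import random

def sample_controller_mode(i, phi, rng):
    """Return q in {0, ..., Q-1} with P{q | r_k = i} = phi[i][q]."""
    u, acc = rng.random(), 0.0
    for q, p in enumerate(phi[i]):
        acc += p
        if u < acc:
            return q
    return len(phi[i]) - 1  # guard against floating-point rounding

phi = [[0.2, 0.8], [0.7, 0.3]]   # rows are conditional distributions
rng = random.Random(0)
counts = [0, 0]
for _ in range(10000):
    counts[sample_controller_mode(0, phi, rng)] += 1
```

Over many draws the empirical frequencies approach the row of Φ, which is exactly the asynchrony model: the controller sees q, not the true mode i.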
Combining the MJS (5.1) and the asynchronous FTPC (5.24), it yields the closed-
loop MJS as follows:

x(k + 1) = (Ai + Bui Kq)x(k) + Bwi w(k)
z(k) = Ci x(k) + Dwi w(k).                                                 (5.26)

Similarly, the task in this subsection is summarized as: design an asynchronous
controller (5.24) for the MJS (5.1) so that the closed-loop MJS (5.26) is stochastic
finite-time stabilizable and meets the desired passive performance. The results are
given in the following theorem.

Theorem 5.3 For a given scalar α ≥ 1, if there exist matrices Kq, X, real symmetric
matrices Pi > 0, and positive scalars δ and γ such that

−( Σ_{j=1}^{M} πij Pj )^{-1} < −X                                          (5.27)

⎡ −αPi    0    Φi1   Φi2  ···  ΦiQ ⎤
⎢   ∗    −I    ϒi1   ϒi2  ···  ϒiQ ⎥
⎢   ∗     ∗    −X     0   ···   0  ⎥
⎢   ∗     ∗     ∗    −X   ···   0  ⎥ < 0                                   (5.28)
⎢   ⋮     ⋮     ⋮     ⋮    ⋱    ⋮  ⎥
⎣   ∗     ∗     ∗     ∗    ∗   −X  ⎦
⎡ −αPi   −Ci^T                   Φi1   Φi2  ···  ΦiQ ⎤
⎢   ∗    2γ²I − Dwi − Dwi^T      ϒi1   ϒi2  ···  ϒiQ ⎥
⎢   ∗      ∗                     −X     0   ···   0  ⎥
⎢   ∗      ∗                      ∗    −X   ···   0  ⎥ < 0                 (5.29)
⎢   ⋮      ⋮                      ⋮     ⋮    ⋱    ⋮  ⎥
⎣   ∗      ∗                      ∗     ∗    ∗   −X  ⎦

R < Pi < δ R (5.30)

 
α^N ( c1 δ + d²/(α − 1) ) < c2                                             (5.31)

where

Φiq = √φiq Ãiq^T,  ϒiq = √φiq Bwi^T,  Ãiq = Ai + Bui Kq.

Then, the closed-loop MJS (5.26) is stochastic finite-time stabilizable in regard to


(c1 c2 N R d) and satisfies the passive performance index γ .

Proof Consider the following Lyapunov candidate function: V(x(k)) = x^T(k) Pi x(k).
Then, we have

E{V(x(k + 1))} − αV(x(k))
= E{ Σ_{q=1}^{Q} φiq x^T(k + 1) ( Σ_{j=1}^{M} πij Pj ) x(k + 1) } − αx^T(k) Pi x(k)
= Σ_{q=1}^{Q} Σ_{j=1}^{M} φiq πij [(Ai + Bui Kq)x(k) + Bwi w(k)]^T
  × Pj [(Ai + Bui Kq)x(k) + Bwi w(k)] − αx^T(k) Pi x(k).                   (5.32)

Meanwhile, Eq. (5.32) can be rewritten as

E{V(x(k + 1))} − αV(x(k)) − w^T(k)w(k)
= Σ_{q=1}^{Q} Σ_{j=1}^{M} φiq πij [(Ai + Bui Kq)x(k) + Bwi w(k)]^T
  × Pj [(Ai + Bui Kq)x(k) + Bwi w(k)] − αx^T(k) Pi x(k) − w^T(k)w(k).      (5.33)

Recalling inequalities (5.27)–(5.28) and using the Schur complement lemma, it
yields

E{V(x(k + 1))} < αV(x(k)) + w^T(k)w(k).                                    (5.34)

Then, the rest of the proof that the closed-loop MJS (5.26) is stochastic finite-time
stabilizable is the same as that in Theorem 5.1.
Next, we will analyze the passive performance of the closed-loop MJS (5.26).
Combining the closed-loop MJS (5.26) and Eq. (5.32), it yields

E{V(x(k + 1))} − αV(x(k)) + 2γ²w^T(k)w(k) − 2w^T(k)z(k)
= Σ_{q=1}^{Q} Σ_{j=1}^{M} φiq πij [(Ai + Bui Kq)x(k) + Bwi w(k)]^T
  × Pj [(Ai + Bui Kq)x(k) + Bwi w(k)] − αx^T(k) Pi x(k) + 2γ²w^T(k)w(k)
  − 2w^T(k)[Ci x(k) + Dwi w(k)].                                           (5.35)

Recalling inequalities (5.27) and (5.29), and using the Schur complement lemma,
it yields

E{V(x(k + 1))} < αV(x(k)) − 2γ²w^T(k)w(k) + 2w^T(k)z(k).                   (5.36)



Then, similar to the proof of Theorem 5.1, the passivity of the closed-loop
MJS (5.26) can be guaranteed. In summary, when matrix
inequalities (5.27)–(5.31) hold, the closed-loop MJS (5.26) is stochastic finite-time
stabilizable and passive. The proof is completed. □
Due to the nonlinearity of matrix inequalities (5.27)–(5.31), the asynchronous FTPC
gains cannot be obtained directly. Therefore, the following theorem is introduced
to solve the nonlinear matrix inequalities (5.27)–(5.31).
Theorem 5.4 For a given scalar α ≥ 1, the closed-loop MJS (5.26) is
stochastic finite-time stabilizable in regard to (c1 c2 N R d) and satisfies the
prescribed passive performance index γ, if there exist real symmetric matrices Wi >
0, matrices Hq, S, X, and positive scalars δ and γ such that inequality (5.31) and the
following conditions hold:
⎡ −X    √πi1 X   √πi2 X   ···   √πiM X ⎤
⎢  ∗     −W1       0      ···      0   ⎥
⎢  ∗      ∗       −W2     ···      0   ⎥ < 0                               (5.37)
⎢  ⋮      ⋮        ⋮        ⋱      ⋮   ⎥
⎣  ∗      ∗        ∗        ∗    −WM   ⎦
⎡ α(Wi − S^T − S)    0    Ξi1   Ξi2  ···  ΞiQ ⎤
⎢       ∗           −I    ϒi1   ϒi2  ···  ϒiQ ⎥
⎢       ∗            ∗    −X     0   ···   0  ⎥
⎢       ∗            ∗     ∗    −X   ···   0  ⎥ < 0                        (5.38)
⎢       ⋮            ⋮     ⋮     ⋮    ⋱    ⋮  ⎥
⎣       ∗            ∗     ∗     ∗    ∗   −X  ⎦
⎡ α(Wi − S^T − S)   −SCi^T                   Ξi1   Ξi2  ···  ΞiQ ⎤
⎢       ∗           2γ²I − Dwi − Dwi^T       ϒi1   ϒi2  ···  ϒiQ ⎥
⎢       ∗             ∗                      −X     0   ···   0  ⎥
⎢       ∗             ∗                       ∗    −X   ···   0  ⎥ < 0     (5.39)
⎢       ⋮             ⋮                       ⋮     ⋮    ⋱    ⋮  ⎥
⎣       ∗             ∗                       ∗     ∗    ∗   −X  ⎦

Wi − R^{-1} < 0                                                            (5.40)

 
⎡ −Wi    I  ⎤
⎣   ∗   −δR ⎦ < 0                                                          (5.41)

where

Wi = Pi^{-1},  Ξiq = √φiq (S Ai^T + Hq^T Bui^T).

Moreover, the asynchronous FTPC gains are derived by Kq = Hq S^{-1}.

Proof From inequality (5.27), one obtains

−X^{-1} + Σ_{j=1}^{M} πij Pj < 0.                                          (5.42)

Using the Schur complement lemma and letting Wi = Pi^{-1}, inequality (5.42) can be
transformed into

⎡ −X^{-1}   √πi1 I   √πi2 I   ···   √πiM I ⎤
⎢    ∗       −W1       0      ···      0   ⎥
⎢    ∗        ∗       −W2     ···      0   ⎥ < 0.                          (5.43)
⎢    ⋮        ⋮        ⋮        ⋱      ⋮   ⎥
⎣    ∗        ∗        ∗        ∗    −WM   ⎦

Implementing a congruence to inequality (5.43) by diag{X, I, I, . . . , I},
inequality (5.37) can be obtained. Then, implementing a congruence to
inequalities (5.28)–(5.29) by diag{S, I, I, . . . , I}, respectively, and using S^T + S −
Wi ≤ S Wi^{-1} S^T, inequalities (5.38)–(5.39) can be confirmed with Hq^T = S Kq^T. Similar
to the proof of Theorem 5.2, we can also get inequalities (5.40)–(5.41). Thus, the
asynchronous FTPC gains can be given by Kq = Hq S^{-1}. The proof is completed. □
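The Schur complement step used repeatedly in these proofs is easy to sanity-check numerically. For scalar blocks, the 2×2 matrix [[a, b], [b, −c]] with c > 0 is negative definite exactly when the Schur complement a + b²/c is negative; the sketch below verifies this equivalence on arbitrary illustrative numbers.

```python
# Numeric sanity check of the Schur complement lemma for scalar blocks:
# with c > 0, the 2x2 matrix [[a, b], [b, -c]] is negative definite
# iff the Schur complement a + b*b/c of the (2,2) entry is negative.

def is_neg_def_2x2(m):
    """Sylvester's criterion for a symmetric 2x2 matrix m < 0:
    m[0][0] < 0 and det(m) > 0."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return m[0][0] < 0 and det > 0

def schur_test(a, b, c):
    """Return (negative definite?, Schur complement) for [[a, b], [b, -c]]."""
    return is_neg_def_2x2([[a, b], [b, -c]]), a + b * b / c

ok_case = schur_test(-2.0, 0.5, 1.0)    # Schur complement -1.75 < 0
bad_case = schur_test(-0.2, 1.0, 1.0)   # Schur complement  0.8 > 0
```

The matrix versions of the lemma used in (5.27)–(5.43) follow the same pattern, with the scalar division replaced by a congruence with the inverse of the (2,2) block.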

5.4 Simulation Analysis

In this section, an example will be applied to show the effectiveness and feasibility
of the developed results. Consider a two-mode discrete-time MJS (5.1) with the
following parameters:

A1 = ⎡0.8 0.1⎤ ,  A2 = ⎡0.7 0.2⎤ ,  Bu1 = Bu2 = ⎡1.1⎤ ,  Bw1 = Bw2 = ⎡0.1⎤ ,
     ⎣0.7 0.4⎦        ⎣0.3 0.9⎦                  ⎣1.1⎦                  ⎣0.1⎦

C1 = C2 = [0.1 0.1],  Dw1 = Dw2 = [0.5].

Assume the transition probability matrix Π and the conditional probability matrix Φ as:

Π = ⎡0.8 0.2⎤ ,  Φ = ⎡0.2 0.8⎤ .
    ⎣0.2 0.8⎦        ⎣0.7 0.3⎦
 
The initial conditions are given by x0 = [1 1]^T, t = 10 s, and T = 0.1 s, and the
external noise is assumed to be w(k) = sin(k). We also choose α = 0.8, c1 = 1,
N = t/T, d² = 1, and R = I2.
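The jumping-mode sequence driving the simulations can be generated from the transition probability matrix Π above by inverse-transform sampling; the sketch below produces an N = 100 step mode path. The closed-loop state would then be iterated along this path as x(k+1) = (Ai + Bui Ki)x(k) + Bwi w(k), which is omitted here.

```python
# Sketch of sampling the Markovian mode sequence r_k from the transition
# probability matrix Pi of this section; each row of PI is the distribution
# of the next mode given the current one.

import random

PI = [[0.8, 0.2], [0.2, 0.8]]   # transition probability matrix from the example

def simulate_modes(n_steps, pi, rng, r0=0):
    """Return the mode path [r_0, r_1, ..., r_{n_steps}]."""
    modes = [r0]
    for _ in range(n_steps):
        u, acc = rng.random(), 0.0
        for j, p in enumerate(pi[modes[-1]]):
            acc += p
            if u < acc:
                modes.append(j)
                break
        else:  # guard against floating-point rounding
            modes.append(len(pi[modes[-1]]) - 1)
    return modes

modes = simulate_modes(100, PI, random.Random(1))
```

With the self-transition probability 0.8 used here, roughly four of every five steps keep the current mode, which is what the jumping-mode plots in Figs. 5.1 and 5.3 reflect.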

Fig. 5.1 The jumping modes in Case I

We will analyze the simulation results in the following two cases.

Case I: Finite-time passive control case.
By solving LMIs (5.8) and (5.20)–(5.23), we have:

K1 = [−0.6761 −0.2230],  K2 = [−0.4907 −0.4997],
γ = 0.1910,  c2 = 1.6265,  δ = 2.0924.

Applying the obtained FTPC gains to the closed-loop MJS (5.3), the simulation
results are depicted in Figs. 5.1 and 5.2. Figure 5.1 describes the jumping modes. The
states of the open-loop and closed-loop MJSs are shown in Fig. 5.2 simultaneously.
Comparing the trajectories of the open-loop and closed-loop MJSs, it can be seen
that the closed-loop MJS (5.3) is stochastic finite-time stabilizable and meets the
desired performance level.
Case II: Asynchronous finite-time passive control case.
By solving LMIs (5.31) and (5.37)–(5.41), we have:

K1 = [−0.2493 −0.2057],  K2 = [−0.2784 −0.1609],
γ = 0.1963,  c2 = 2.9646,  δ = 3.4164.



Fig. 5.2 The states of the open-loop and closed-loop MJSs in Case I

Fig. 5.3 The jumping modes in Case II

The simulation results under the asynchronous FTPC are shown in Figs. 5.3 and
5.4. Figure 5.3 depicts the jumping modes. Figure 5.4 describes the states of the open-
loop and closed-loop MJSs simultaneously, which indicates that the closed-loop MJS
(5.26) is stochastic finite-time stabilizable and meets the desired performance level.

Fig. 5.4 The states of the open-loop and closed-loop MJSs in Case II

Comparing the results in these two cases, the asynchronous FTPC is more realistic
than the synchronous FTPC, but its boundedness and passive performance are not
as good as those of the synchronous FTPC, since the parameters c2 and γ obtained
under the asynchronous FTPC are greater than those under the synchronous FTPC.
This indicates that the asynchronous FTPC sacrifices some system performance in
order to match the actual situation.

5.5 Conclusions

From the perspective of system internal stability and energy relationships, this chapter
has studied finite-time passive control for discrete-time MJSs to ensure that the
closed-loop system is finite-time bounded and the system energy function decays
at the desired rate. The asynchronous FTPC has then been studied by considering
the more practical situation in which the controller mode is not synchronized with
the system mode. The next chapter will combine the finite-time performance with
sliding mode control to achieve better performance indicators for discrete-time MJSs.

References

1. Zhang, H., Gray, W.S., Gonzalez, O.R.: Performance analysis of digital flight control systems
with rollback error recovery subject to simulated neutron-induced upsets. IEEE Trans. Control
Syst. Technol. 16(1), 46–59 (2007)
2. Bäuerle, N., Rieder, U.: Markovian Decision Processes with Applications to Finance. Springer
Science & Business Media Publishing, Berlin (2011)

3. Dong, S., Wu, Z.G., Su, H., Shi, P., Karimi, H.R.: Asynchronous control of continuous-time
nonlinear Markovian jump systems subject to strict dissipativity. IEEE Trans. Autom. Control
64(3), 1250–1256 (2018)
4. Zhang, X., Wang, H., Stojanovic, V., Cheng, P., He, S., Luan, X., Liu, F.: Asynchronous fault
detection for interval type-2 fuzzy nonhomogeneous higher-level Markovian jump systems
with uncertain transition probabilities. IEEE Trans. Fuzzy Syst. (2021). https://doi.org/10.1109/TFUZZ.2021.3086224
5. Zhang, X., He, S.P., Stojanovic, V., Luan, X.L., Liu, F.: Finite-time asynchronous dissipative
filtering of conic-type nonlinear Markovian jump systems. Sci. China Inf. Sci. 64, 1–12 (2021)
6. Willems, J.C.: Dissipative dynamical systems part I: general theory. Arch. Ration. Mech. Anal.
45(5), 321–351 (1972)
7. Shan, Y., She, K., Zhong, S., Cheng, J., Wang, W., Zhao, C.: Event-triggered passive control for
Markovian jump discrete-time systems with incomplete transition probability and unreliable
channels. J. Franklin Inst. 356(15), 8093–8117 (2019)
8. Wu, Z.G., Shi, P., Shu, Z., Su, H., Lu, R.: Passivity-based asynchronous control for Markovian
jump systems. IEEE Trans. Autom. Control 62(4), 2020–2025 (2016)
9. Chen, Y., Chen, Z., Chen, Z., Xue, A.: Observer-based passive control of non-homogeneous
Markovian jump systems with random communication delays. Int. J. Syst. Sci. 51(6), 1133–
1147 (2020)
10. Shen, H., Su, L., Park, J.H.: Extended passive filtering for discrete-time singular Markovian
jump systems with time-varying delays. Signal Process. 128, 68–77 (2016)
11. He, S.P., Liu, F.: Exponential passive filtering for a class of nonlinear jump systems. J. Syst.
Eng. Electron. 20(4), 829–837 (2009)
12. Luan, X.L., Zhao, C.Z., Liu, F.: Finite-time stabilization of switching Markovian jump systems
with uncertain transition rates. Circ. Syst. Signal Process. 34(12), 3741–3756 (2015)
13. Yan, Z., Zhang, W., Zhang, G.: Finite-time stability and stabilization of Itô stochastic systems
with Markovian switching: mode-dependent parameters approach. IEEE Trans. Autom. Control
60(9), 2428–2433 (2015)
14. Qi, W., Kao, Y., Gao, X.: Further results on finite-time stabilization for stochastic Markovian
jump systems with time-varying delay. Int. J. Syst. Sci. 48(14), 2967–2975 (2017)
15. Zhang, Y., Liu, C., Mu, X.: Robust finite-time stabilization of uncertain singular Markovian
jump systems. Appl. Math. Model. 36(10), 5109–5121 (2012)
16. Shen, M., Yan, S., Zhang, G., Park, J.H.: Finite-time H∞ static output control of Markovian
jump systems with an auxiliary approach. Appl. Math. Comput. 273, 553–561 (2016)
17. Zong, G., Yang, D., Hou, L., Wang, Q.: Robust finite-time H∞ control for Markovian jump sys-
tems with partially known transition probabilities. J. Franklin Inst. 350(6), 1562–1578 (2013)
Chapter 6
Finite-Time Sliding Mode Control for Discrete-Time Markovian Jump Systems

Abstract This chapter focuses on the finite-time sliding mode control problem for
discrete-time Markovian jump systems (MJSs) with uncertainties. Firstly, the sliding
mode function and sliding mode controller are designed for discrete-time MJSs. By
using Lyapunov–Krasovskii functional method, some mode-dependent weight matri-
ces are obtained such that the closed-loop MJSs are stochastic finite-time stabilizable
and fulfill the given H∞ performance index. Moreover, an appropriate asynchronous
sliding mode controller is constructed and the rationality conditions of the coefficient
parameter are given and proved for the purpose that the closed-loop MJSs can be
driven onto the sliding surface. Also, the transient performance of the discrete-time
MJSs during the reaching and sliding motion phases has been investigated. Finally,
a numerical example is used to show the effectiveness of the designed results.

6.1 Introduction

As is known, handling inaccuracies in modeling, structures, or parameters is one of
the great challenges when investigating the control strategy of dynamic systems. As
a simple form of robust control, the sliding mode control scheme has become a universal
design method to handle such inaccuracies [1–3]. In the sliding mode control strategy, the
system structure is not fixed; it can be changed purposefully and continuously
according to the present state of the system, thereby compelling the system to move in
accordance with the desired state trajectory. The sliding mode control method is usually
divided into two parts [4, 5]. One is to construct a suitable sliding mode surface
(SMS) to make the system have ideal dynamic performance after reaching the SMS.
The second is to design the corresponding sliding mode controller (SMC) to ensure
the state trajectories of the sliding mode can arrive at the SMS within a certain
time. Since the characteristics and parameter values of the sliding mode system only
depend on the constructed SMS and have nothing to do with external disturbances,
sliding mode control scheme has the advantages of rapid response, insensitivity to
disturbances and parameter changes, and simple physical implementation [6, 7].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 109
X. Luan et al., Robust Control for Discrete-Time Markovian Jump Systems in the
Finite-Time Domain, Lecture Notes in Control and Information Sciences 492,
https://doi.org/10.1007/978-3-031-22182-8_6

In recent years, along with the research boom of stochastic systems, sliding mode
control method has been introduced into Markovian jump systems (MJSs), and some
meaningful results have been achieved [8, 9]. The sliding mode control method
can suppress parameter perturbation and external interference to obtain the desired
performance. This ideal robustness is extremely effective in practical applications.
The sliding mode control problem for Markovian jump systems with delays via an
asynchronous approach is studied in [10]. Considering Markovian jump non-
linear systems with actuator faults, the adaptive sliding mode control problem was
studied in [5]. Considering the asynchronous between system state and controller,
the asynchronous sliding mode control problem of Markovian jump systems with
time-varying delays and partly accessible mode detection probabilities was addressed
in [6].
On the other hand, a large number of research results on MJSs based on the sliding
mode control scheme are carried out in the sense of Lyapunov asymptotic stability,
focusing on the problem that the system asymptotically converges to the equilibrium
state as the running time tends to infinity. However, in many practical applications, it is
often necessary to study the dynamic boundedness of the system within a finite-time
interval [11–16]. For example, an aircraft may be required to operate in a specific area,
or a chemical reaction may require that the temperature and pressure do not exceed
bounded values in finite time. Therefore, it is of profound significance and necessity
to study the finite-time stabilization of MJSs by means of the sliding mode control
method.
In this chapter, the sliding mode control and asynchronous sliding mode control
problems for discrete-time MJSs in the finite-time domain are discussed.
Firstly, a mode-dependent SMC is designed such that the closed-loop
MJSs can be driven onto the sliding surface. Then, some sufficient conditions on
the stochastic finite-time stabilization of the closed-loop MJSs are given. Moreover,
the main design results are extended to discrete-time MJSs with asynchronous
phenomena. Finally, a numerical example is given to show the effectiveness of the
designed results.

6.2 Finite-Time Sliding Mode Control

The following uncertain discrete-time MJS is described by:

⎧ x(k + 1) = (A(rk) + E(rk)Δ(k)F(rk))x(k) + Bu(rk)u(k) + Bw(rk)w(k)
⎨ z(k) = C(rk)x(k) + Dw(rk)w(k)                                            (6.1)
⎩ x(k) = x0, rk = r0, k = 0

where the state, the controlled output, the controlled input, and the disturbance input
are defined the same as those in the preceding chapters, respectively. For any k ∈
{1, 2, . . . , N}, Δ(k) is an unknown matrix function with Δ^T(k)Δ(k) < I. A(rk), E(rk),

F(rk ), Bu (rk ), C(rk ), Bw (rk ), and Dw (rk ) are known given matrices. Without special
instructions, Bu (rk ) is a full column rank matrix.
For any rk = i, we denote A(rk), E(rk), F(rk), Bu(rk), C(rk), Bw(rk), and Dw(rk)
as Ai, Ei, Fi, Bui, Ci, Bwi, and Dwi, respectively. Thus, the discrete-time MJS (6.1)
can be rewritten as

⎧ x(k + 1) = (Ai + Ei Δ(k)Fi)x(k) + Bui u(k) + Bwi w(k)
⎨ z(k) = Ci x(k) + Dwi w(k)                                                (6.2)
⎩ x(k) = x0, k = 0.

For the discrete-time MJS (6.2), we select the following mode-dependent SMF:

S(k) = Gi x(k) − Gi (Ai + Ei Δ(k)Fi)x(k − 1)                               (6.3)

where Gi is a mode-dependent weight matrix to be designed such that Gi Bui
is non-singular. In the following, Gi = Bui^T Pi with Pi > 0 will be selected to
ensure the non-singularity of Gi Bui.
Then, the following mode-dependent SMC is selected as:

u(k) = −(Gi Bui)^{-1} [Gi Ai x(k) + ηi(k) sign(S(k))]                      (6.4)
where ηi (k) is the coefficient parameter to be designed.
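For a single-input system, evaluating the control law (6.4) amounts to a few inner products. The sketch below does this with illustrative numbers: G is a hypothetical choice of Gi = Bui^T Pi, while A and Bu borrow the Sect. 5.4 values only for concreteness.

```python
# Minimal sketch of evaluating the SMC (6.4) for a single-input,
# two-state system:
#   u(k) = -(G_i B_ui)^{-1} [G_i A_i x(k) + eta_i(k) sign(S(k))].
# All numbers are illustrative, not from a solved design.

def sign(s):
    """Signum of a scalar sliding variable."""
    return (s > 0) - (s < 0)

def smc_input(G, A, Bu, x, eta, S):
    """G: 1x2 row, A: 2x2, Bu: length-2 column; returns the scalar u(k)."""
    GBu = sum(G[j] * Bu[j] for j in range(2))                  # G_i B_ui
    Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
    GAx = sum(G[i] * Ax[i] for i in range(2))                  # G_i A_i x(k)
    return -(GAx + eta * sign(S)) / GBu

G = [1.0, 0.5]                   # hypothetical G_i = B_ui^T P_i
A = [[0.8, 0.1], [0.7, 0.4]]     # A_1 from the Sect. 5.4 example
Bu = [1.1, 1.1]
u = smc_input(G, A, Bu, x=[1.0, 1.0], eta=0.2, S=-0.3)
```

The switching term eta * sign(S) is what drives the trajectory toward S(k) = 0; the remainder cancels the nominal drift Gi Ai x(k).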


Substituting the SMC (6.4) into the discrete-time MJS (6.2), the following closed-
loop MJS can be obtained:

⎧ x(k + 1) = Âi x(k) − Bui (Gi Bui)^{-1} ηi(k) sign(S(k)) + Bwi w(k)
⎨ z(k) = Ci x(k) + Dwi w(k)                                                (6.5)
⎩ x(k) = x0, k = 0

where Âi = Ai + Ei Δ(k)Fi − Bui (Gi Bui)^{-1} Gi Ai.


Firstly, the reachability problem of the sliding surface will be analyzed in the
following theorem:
Theorem 6.1 The closed-loop MJS (6.5) can be driven onto S(k) = 0 during the
finite-time interval [0 N ] under SMC (6.4), if the coefficient parameter ηi (k) satisfies

ηi (k) = −G i Ai x(k) + G i Bwi w(k). (6.6)

Proof For the discrete-time MJS (6.2), it follows from the mode-dependent SMF
S(k) and the closed-loop MJS (6.5) that:

S(k + 1) = Gi x(k + 1) − Gi (Ai + Ei Δ(k)Fi)x(k) = Gi Bui u(k) + Gi Bwi w(k)
         = −Gi Ai x(k) + Gi Bwi w(k) − ηi(k) sign(S(k)).                   (6.7)

Then, the following stochastic Lyapunov–Krasovskii functional is chosen as:

V1 (k, i) = S T (k)S(k). (6.8)

Recalling Eq. (6.7), we have

E{ΔV1(k, i)} = E{S^T(k + 1)S(k + 1) − S^T(k)S(k)}
= (−Gi Ai x(k) + Gi Bwi w(k) − ηi(k) sign(S(k)))^T (−Gi Ai x(k)
  + Gi Bwi w(k) − ηi(k) sign(S(k))) − S^T(k)S(k)
≤ x^T(k)Ai^T Gi^T Gi Ai x(k) − x^T(k)Ai^T Gi^T Gi Bwi w(k) + x^T(k)Ai^T Gi^T ηi(k) sign(S(k))
  − w^T(k)Bwi^T Gi^T Gi Ai x(k) + w^T(k)Bwi^T Gi^T Gi Bwi w(k)
  − w^T(k)Bwi^T Gi^T ηi(k) sign(S(k)) + (ηi(k) sign(S(k)))^T Gi Ai x(k)
  − (ηi(k) sign(S(k)))^T Gi Bwi w(k) + (ηi(k) sign(S(k)))^T (ηi(k) sign(S(k)))
  − S^T(k)S(k).                                                            (6.9)

Recalling the coefficient parameter ηi(k) in condition (6.6), one can get

E{ΔV1(k, i)} ≤ −S^T(k)S(k) < 0.                                            (6.10)

Thus, the closed-loop MJS (6.5) can be driven onto S(k) = 0 under SMC (6.4).
The proof is completed. □

It follows from the sliding mode control theory that the following equivalent
controller (6.11) can be obtained if the closed-loop MJS (6.5) is maintained on the
sliding surface S(k + 1) = S(k) = 0:

ueq(k) = −(Gi Bui)^{-1} Gi Bwi w(k).                                       (6.11)

Substituting the equivalent controller (6.11) into the discrete-time MJS (6.2), the
following closed-loop MJS can be obtained:

⎧ x(k + 1) = (Ai + Ei Δ(k)Fi)x(k) + Bwi [I − Bui (Gi Bui)^{-1} Gi]w(k)
⎨ z(k) = Ci x(k) + Dwi w(k)                                                (6.12)
⎩ x(k) = x0, k = 0.

In the following theorem, some sufficient conditions on the stochastic finite-time
stabilization with respect to (c1 c2 N R d) for the closed-loop MJS (6.12) will be
given.

Theorem 6.2 For a given scalar βi ≥ 1, the closed-loop MJS (6.12) is stochastic
finite-time stabilizable with respect to (c1 c2 N R d), if there exist mode-dependent
weight matrices Pi such that

⎡ −βi Pi + βi Fi^T Fi     0      Σ_{j=1}^{M} πij Ai^T Pj              0              ⎤
⎢         ∗             −βi I              0                          0              ⎥
⎢         ∗               ∗      −Σ_{j=1}^{M} πij Pj      Σ_{j=1}^{M} πij Pj Ei      ⎥ < 0   (6.13)
⎣         ∗               ∗                ∗                        −βi I            ⎦

βi^N (λ̄Pi c1 + d²) < c2 λ_Pi.                                              (6.14)

Proof Select the following Lyapunov function

V2(k, i) = x^T(k) Pi x(k).                                                 (6.15)

Considering the closed-loop MJS (6.12), one can get

E{V2(k + 1, i)} = Σ_{j=1}^{M} πij x^T(k + 1) Pj x(k + 1)
= Σ_{j=1}^{M} πij (x^T(k)(Ai^T + Fi^T Δ^T(k)Ei^T)
  + w^T(k)[I − Bui (Gi Bui)^{-1} Gi]^T Bwi^T) Pj
  × ((Ai + Ei Δ(k)Fi)x(k) + Bwi [I − Bui (Gi Bui)^{-1} Gi]w(k)).           (6.16)

We introduce the following auxiliary function:

E{V2(k + 1, i)} − βi V2(k, i) < βi w^T(k)w(k).                             (6.17)

It follows from inequality (6.17) that:

[x^T(k) w^T(k)] ⎡ Āi     0   ⎤ ⎡ x(k) ⎤ < 0                                (6.18)
                ⎣  ∗   −βi I ⎦ ⎣ w(k) ⎦

where Āi = Σ_{j=1}^{M} πij (Ai^T + Fi^T Δ^T(k)Ei^T) Pj (Ai + Ei Δ(k)Fi) − βi Pi. Recalling
Lemma 2.1 and using the Schur complement lemma simultaneously, inequality (6.18)
holds according to inequality (6.13).

From condition (6.17), we have

E{V2(k, i)} < βi^k V2(x(0), r0) + Σ_{l=1}^{k} βi^l w^T(k − l)w(k − l)
< βi^k ( V2(x(0), r0) + Σ_{l=1}^{k} βi^{k−l} w^T(k − l)w(k − l) )
< βi^k ( x^T(0) Pi x(0) + Σ_{l=1}^{k} βi^{k−l} w^T(k − l)w(k − l) ).       (6.19)

Defining λ̄Pi = max_{i∈M} {λmax(R^{-1/2} Pi R^{-1/2})} and λ_Pi = min_{i∈M} {λmin(R^{-1/2} Pi R^{-1/2})},
we can get from inequality (6.19) that

λ_Pi E{x^T(k)Rx(k)}
< βi^N ( λ̄Pi x^T(0)Rx(0) + Σ_{l=1}^{k} βi^{k−l} w^T(k − l)w(k − l) ) < βi^N (λ̄Pi c1 + d²).   (6.20)

Thus, one can get

E{x^T(k)Rx(k)} < βi^N (λ̄Pi c1 + d²) / λ_Pi.                                (6.21)

It follows from condition (6.14) that E{x^T(k)Rx(k)} < c2 for any k ∈ [0 N]. That
means the closed-loop MJS (6.12) is stochastic finite-time stabilizable in regard to
(c1 c2 N R d). The proof is completed. □

In the following theorem, some sufficient conditions on the stochastic finite-time
stabilization with H∞ performance for the closed-loop MJS (6.12) will be given.

Theorem 6.3 For a given scalar βi ≥ 1, the closed-loop MJS (6.12) is stochastic
finite-time stabilizable with H∞ performance in regard to (c1 c2 N R d), if there
exist mode-dependent weight matrices Pi such that (6.14) and the following inequality
hold:

⎡ Ci^T Ci − βi Pi + βi Fi^T Fi      Ci^T Dwi           Σ_{j=1}^{M} πij Ai^T Pj              0             ⎤
⎢            ∗                 −βi I + Dwi^T Dwi                  0                         0             ⎥
⎢            ∗                       ∗                 −Σ_{j=1}^{M} πij Pj      Σ_{j=1}^{M} πij Pj Ei     ⎥ < 0.   (6.22)
⎣            ∗                       ∗                            ∗                       −βi I           ⎦

Proof The same Lyapunov function is selected as that in (6.15). Considering the
H∞ performance index, we introduce the following auxiliary function:

E{V2(k + 1, i)} − βi V2(k, i) < βi w^T(k)w(k) − z^T(k)z(k).                (6.23)

It follows from inequality (6.23) that:

[x^T(k) w^T(k)] ⎡ Āi        Ci^T Dwi      ⎤ ⎡ x(k) ⎤ < 0                   (6.24)
                ⎣  ∗   −βi I + Dwi^T Dwi  ⎦ ⎣ w(k) ⎦

where Āi = Σ_{j=1}^{M} πij (Ai^T + Fi^T Δ^T(k)Ei^T) Pj (Ai + Ei Δ(k)Fi) − βi Pi + Ci^T Ci.
Recalling Lemma 2.1 and using the Schur complement lemma, inequality (6.24) holds
according to inequality (6.22). Define
J = E{ Σ_{k=1}^{N} [z^T(k)z(k) − βi w^T(k)w(k)] }.                         (6.25)

Under the zero initial condition, it yields

J ≤ E{ Σ_{k=1}^{N} [z^T(k)z(k) − βi w^T(k)w(k) + E{V2(k + 1, i)} − βi V2(k, i)] }.   (6.26)

Considering the inequality (6.23), it follows that J < 0. Thus, one can get:

E{ Σ_{k=1}^{N} z^T(k)z(k) } < βi E{ Σ_{k=1}^{N} w^T(k)w(k) }.              (6.27)


According to Definition 3.2 in Chap. 3, we get J < 0 with γ = √βi. The
proof is completed. □
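The bound (6.27) can be checked empirically on simulated sequences by comparing energy sums: √(Σz²/Σw²) is an empirical H∞ level, which must stay below √βi. The sketch below uses synthetic toy sequences, not outputs of the closed-loop MJS.

```python
# Sketch of an empirical check of the energy bound (6.27): the output
# energy sum must stay below beta_i times the disturbance energy sum,
# so sqrt(sum z^2 / sum w^2) is an empirical H-infinity level.
# z and w below are synthetic toy sequences.

import math

w = [math.sin(k) for k in range(1, 101)]   # disturbance samples
z = [0.3 * wk for wk in w]                 # toy output with gain 0.3

ratio = sum(zk * zk for zk in z) / sum(wk * wk for wk in w)
gamma_emp = math.sqrt(ratio)               # empirical performance level
```

With real closed-loop data, gamma_emp below √βi over the horizon [0 N] is consistent with (6.27); a larger value would indicate the finite-time H∞ design failed for that realization.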

6.3 Asynchronous Finite-Time Sliding Mode Control

In this subsection, based on the hidden Markovian model, an asynchronous SMC
will be designed to make the closed-loop MJS stochastic finite-time stabilizable
with H∞ performance. For the discrete-time MJS (6.2), we select the following
mode-dependent SMF:

S(k) = Gi x(k) − Gi (Ai + Ei Δ(k)Fi − Bui Kq)x(k − 1)                      (6.28)

where Kq is the controller gain and Gi is a mode-dependent weight matrix to
be designed such that Gi Bui is non-singular. In the following, Gi = Bui^T Pi with

Pi > 0 will be selected to ensure the non-singularity of Gi Bui. q is governed by
another Markovian chain {φk, k ≥ 0} taking values in Q = {1, 2, . . . , Q}. The
conditional probability matrix Φ = [φiq] is given by:

φiq = P{φk = q | rk = i}                                                   (6.29)

where φiq ≥ 0 and Σ_{q=1}^{Q} φiq = 1.
Then, the following asynchronous mode-dependent SMC is selected as:

u(k) = K q x(k) − ηi (k)sign(S(k)) (6.30)

where ηi (k) is the coefficient parameter to be designed.


Substituting the asynchronous SMC (6.30) into the discrete-time MJS (6.2), we
have the following closed-loop MJS:

⎧ x(k + 1) = Âi x(k) − Bui ηi(k) sign(S(k)) + Bwi w(k)
⎨ z(k) = Ci x(k) + Dwi w(k)                                                (6.31)
⎩ x(k) = x0, k = 0

where Âi = Ai + Ei Δ(k)Fi + Bui Kq.

Firstly, the reachability problem of the sliding surface will be analyzed in the
following theorem.

Theorem 6.4 The closed-loop MJS (6.31) can be driven onto S(k) = 0 during the
finite-time interval [0 N] under the asynchronous SMC (6.30), if the coefficient
parameter ηi(k) satisfies

ηi(k) = κi + 2‖Kq x(k)‖ + ‖(Gi Bui)^{-1}(Gi Bwi)w(k)‖                      (6.32)

κi ≥ λmax[(Gi Bui)^{-1}]‖S(0)‖ / (N + 1).                                  (6.33)

Proof For the discrete-time MJS (6.2), it follows from (6.28) that:

S(k + 1) = Gi x(k + 1) − Gi (Ai + Ei Δ(k)Fi − Bui Kq)x(k)
         = Gi Bui Kq x(k) + Gi Bui u(k) + Gi Bwi w(k).                     (6.34)

Then, the following Lyapunov function is chosen as:

V3(k, i) = S^T(k)(Gi Bui)^{-1} S(k) / 2.                                   (6.35)

Thus, we have

E{ΔV3(k, i)} = E{V3(k + 1, i) − V3(k, i)}
= νi + S^T(k)(Gi Bui)^{-1}(S(k + 1) − S(k))
= νi + S^T(k)(Kq x(k) + u(k) + (Gi Bui)^{-1}(Gi Bwi)w(k)
  − (Gi Bui)^{-1} S(k))                                                    (6.36)

where νi = (S(k + 1) − S(k))^T (Gi Bui)^{-1} (S(k + 1) − S(k)) / 2.
Recalling conditions (6.30) and (6.32), one can get

E{ΔV3(k, i)} = νi + S^T(k)(Gi Bui)^{-1}(S(k + 1) − S(k))
≤ νi − κi‖S(k)‖ − ‖(Gi Bui)^{-1}(Gi Bwi)w(k)‖‖S(k)‖
  + S^T(k)(Gi Bui)^{-1}(Gi Bwi)w(k) − S^T(k)(Gi Bui)^{-1} S(k)
≤ νi − κi‖S(k)‖.                                                           (6.37)

Thus, there exists a large enough coefficient parameter κi such that E{ΔV3(k, i)} < 0
holds, which means the closed-loop MJS (6.31) can be driven onto
S(k) = 0 during the finite-time interval [0 N] under the asynchronous SMC (6.30).
Then, we prove that the closed-loop MJS (6.31) can be driven onto S(k) = 0 during
[0 N*] with N* < N.
From condition (6.37), one can get

νi E {V3 (k, i)} S T (k)(G i Bui )−1 (S(k) − S(k + 1))


κi ≤ − ≤ . (6.38)
S(k) S(k) S(k)

Summing condition (6.38) from 0 to N*, one can obtain

(1 + N*)κi ≤ Σ_{k=0}^{N*} S^T(k)(Gi Bui)^{-1}(S(k) − S(k + 1))/‖S(k)‖.     (6.39)

Recalling Eq. (6.35), we have

(1 + N*)κi ≤ Σ_{k=0}^{N*} S^T(k)(Gi Bui)^{-1}(S(k) − S(k + 1))/‖S(k)‖
≤ S^T(0)(Gi Bui)^{-1}(S(0) − S(1))/‖S(0)‖ ≤ 2V3(0)/‖S(0)‖.                 (6.40)

It follows from condition (6.40) that:

N* ≤ 2V3(0)/(κi‖S(0)‖) − 1.                                                (6.41)

Thus, we have N* < N by means of condition (6.33). That means the closed-loop
MJS (6.31) can be driven onto S(k) = 0 during [0 N*] with N* < N. The proof is
completed. □
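The reaching-phase bookkeeping above is easy to check numerically in the scalar case: with M = (Gi Bui)^{-1} > 0 we have V3(0) = M S(0)²/2, the bound (6.41) gives N* ≤ 2V3(0)/(κi‖S(0)‖) − 1, and choosing κi at or above the threshold in (6.33) forces N* < N. All numbers below are illustrative.

```python
# Sketch of the reaching-step bound of Theorem 6.4 for a scalar sliding
# variable: V3(0) = M*S0^2/2 with M = (G_i B_ui)^{-1}, the bound (6.41)
# is N* <= 2*V3(0)/(kappa*|S0|) - 1, and condition (6.33) asks
# kappa >= lambda_max[(G_i B_ui)^{-1}] * |S0| / (N + 1) = M*|S0|/(N + 1).

M = 0.5      # (G_i B_ui)^{-1}, scalar case (illustrative)
S0 = 2.0     # |S(0)| (illustrative)
N = 100      # length of the finite-time interval

kappa_min = M * S0 / (N + 1)          # threshold from condition (6.33)
kappa = 2.0 * kappa_min               # any kappa above the threshold
V3_0 = 0.5 * M * S0 * S0              # initial Lyapunov value V3(0)
N_star_bound = 2.0 * V3_0 / (kappa * S0) - 1.0
```

Raising kappa shrinks the reaching-step bound but enlarges the switching amplitude in (6.32), which is the usual chattering trade-off of sliding mode control.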
It is known that the closed-loop MJS (6.31) has two phases over the given [0 N].
The first one is the reaching phase within [0 N*], and the second one is the sliding
motion phase within [N* N]. Next, we will analyze the transient performance of
system (6.31) over [0 N*] and [N* N].
Theorem 6.5 For given scalars βi ≥ 1 and c̄ > c1, the closed-loop MJS (6.31) is
stochastic finite-time stabilizable with respect to (c1 c̄ N* R d), if there exist
mode-dependent weight matrices Pi such that

⎡ −βi Pi     0      Σ_{q=1}^{Q} Σ_{j=1}^{M} φiq πij Âi^T Pj ⎤
⎢    ∗     −βi I    Σ_{j=1}^{M} πij Bwi^T Pj                ⎥ < 0          (6.42)
⎣    ∗       ∗     −Σ_{j=1}^{M} πij Pj                      ⎦

βi^{N*} (λ̄Pi c1 + d²) < c̄ λ_Pi                                            (6.43)

c1 < c̄ < c2.                                                              (6.44)

Proof Selecting the same Lyapunov function as that in (6.15) and considering the
closed-loop MJS (6.31), one can get

E{V2(k + 1, i)} = Σ_{q=1}^{Q} Σ_{j=1}^{M} φiq πij x^T(k + 1) Pj x(k + 1)
= Σ_{q=1}^{Q} Σ_{j=1}^{M} φiq πij (Âi x(k) − Bui ηi(k) sign(S(k)) + Bwi w(k))^T Pj
  × (Âi x(k) − Bui ηi(k) sign(S(k)) + Bwi w(k))
< Σ_{q=1}^{Q} Σ_{j=1}^{M} φiq πij (Âi x(k) + Bwi w(k))^T Pj (Âi x(k) + Bwi w(k)).   (6.45)

We introduce the following auxiliary function:



E{V2 (k + 1, i)} − βi V2 (k, i) < βi w T (k)w(k). (6.46)

It follows from inequality (6.46) that:

[x^T(k) w^T(k)] ⎡ B1i   B2i ⎤ ⎡ x(k) ⎤ < 0                                 (6.47)
                ⎣  ∗    B3i ⎦ ⎣ w(k) ⎦

where B1i = Σ_{q=1}^{Q} Σ_{j=1}^{M} φiq πij Âi^T Pj Âi − βi Pi, B2i = Σ_{q=1}^{Q} Σ_{j=1}^{M}
φiq πij Âi^T Pj Bwi, and B3i = Σ_{q=1}^{Q} Σ_{j=1}^{M} φiq πij Bwi^T Pj Bwi − βi I. Recalling
Lemma 2.1 and using the Schur complement lemma, inequality (6.47) holds by
inequality (6.42).
From Eq. (6.46), we have


k
E{V2 (k, i)} < βik V2 (x(0), r0 ) + βil w T (k − l)w(k − l)
l=1

 
k

< βi V2 (x(0), r0 ) +
k
βik−l w T (k − l)w(k − l)
l=1

 
k

< βik x T (0)Pi x(0) + βik−l w T (k − l)w(k − l) . (6.48)
l=1

Defining λ̄Pi = max_{i∈M} {λmax(R^{-1/2} Pi R^{-1/2})} and λ_Pi = min_{i∈M} {λmin(R^{-1/2} Pi R^{-1/2})},
we can get from condition (6.48) that

λ_Pi E{x^T(k)Rx(k)} < βi^{N*} ( λ̄Pi x^T(0)Rx(0) + Σ_{l=1}^{k} βi^{k−l} w^T(k − l)w(k − l) )
< βi^{N*} (λ̄Pi c1 + d²).                                                   (6.49)

Thus, one can get

E{x^T(k)Rx(k)} < βi^{N*} (λ̄Pi c1 + d²) / λ_Pi.                             (6.50)

It follows from condition (6.43) that E{x^T(k)Rx(k)} < c̄ for any k ∈ [0 N*]. That
means the closed-loop MJS (6.31) is stochastic finite-time stabilizable in regard to
(c1 c̄ N* R d). The proof is completed. □

In the following theorem, the stochastic finite-time stabilization with H∞ performance
with respect to (c1 c̄ N* R d) of the closed-loop MJS (6.31) will be given.

Theorem 6.6 For given scalars βi ≥ 1 and c̄ > c1, the closed-loop MJS (6.31) is
stochastic finite-time stabilizable with H∞ performance in regard to (c1 c̄ N* R d),
if there exist mode-dependent weight matrices Pi such that Eqs. (6.43) and (6.44) and
the following inequality hold:
⎡ Ci^T Ci − βi Pi      Ci^T Dwi           Σ_{q=1}^{Q} Σ_{j=1}^{M} φiq πij Âi^T Pj ⎤
⎢       ∗         −βi I + Dwi^T Dwi       Σ_{j=1}^{M} πij Bwi^T Pj                ⎥ < 0.   (6.51)
⎣       ∗               ∗                −Σ_{j=1}^{M} πij Pj                      ⎦

Proof The same Lyapunov function is selected as that in (6.15). Considering the
H∞ performance index, we introduce the following auxiliary function:

E{V2 (k + 1, i)} − βi V2 (k, i) < βi w T (k)w(k) − z T (k)z(k). (6.52)

It follows from inequality (6.52) that:

[x^T(k) w^T(k)] ⎡ B1i + Ci^T Ci    B2i + Ci^T Dwi  ⎤ ⎡ x(k) ⎤ < 0.         (6.53)
                ⎣       ∗          B3i + Dwi^T Dwi ⎦ ⎣ w(k) ⎦

Recalling Lemma 2.1 and using the Schur complement lemma, inequality (6.53)
holds according to inequality (6.51). Define
 N 

J=E [z (k)z(k) − βi w (k)w(k)] .
T T
(6.54)
k=1

Under the zero initial condition, it yields

$$J\le\mathbb{E}\Big\{\sum_{k=1}^{N}[z^{T}(k)z(k)-\beta_i w^{T}(k)w(k)+\mathbb{E}\{V_2(k+1,i)\}-\beta_i V_2(k,i)]\Big\}. \qquad(6.55)$$

Considering inequality (6.52), it follows that $J<0$. Thus, one can get:

$$\mathbb{E}\Big\{\sum_{k=1}^{N}z^{T}(k)z(k)\Big\} < \beta_i\,\mathbb{E}\Big\{\sum_{k=1}^{N}w^{T}(k)w(k)\Big\}. \qquad(6.56)$$

Recalling Definition 3.2 and inequality (6.56), we have $J<0$ with $\gamma=\sqrt{\beta_i}$. The proof is completed. □

It follows from sliding mode control theory that the equivalent controller (6.57) can be obtained if the closed-loop MJS (6.31) remains on the sliding surface $S(k)=0$:

$$u_{eq}(k)=-K_q x(k)-(G_i B_{ui})^{-1}G_i B_{wi}w(k). \qquad(6.57)$$
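The second term of (6.57) is what cancels the disturbance as seen through the sliding variable $S(k)=G_i x(k)$. A small numerical sketch of this cancellation (all matrices below are hypothetical illustrations, not the chapter's example):

```python
import numpy as np

# Hypothetical single-mode data, for illustration only
Bu = np.array([[0.1], [0.5]])    # input matrix
Bw = np.array([[0.0], [0.2]])    # disturbance matrix
G  = np.array([[1.0, 2.0]])      # sliding-surface matrix, S(k) = G x(k)
K  = np.array([[0.8, 0.4]])      # assumed SMC gain
w  = np.array([0.8])             # one disturbance sample

def u_eq(x, w):
    # Equivalent control of Eq. (6.57): -K x - (G Bu)^{-1} G Bw w
    return -K @ x - np.linalg.inv(G @ Bu) @ (G @ Bw) @ w

# The disturbance component injected through Bu cancels Bw*w exactly
# in the direction of G, so the sliding variable is unaffected by w:
cancel = Bu @ np.linalg.inv(G @ Bu) @ (G @ Bw) @ w
assert np.allclose(G @ (Bw @ w - cancel), 0.0)

u = u_eq(np.array([0.3, 0.4]), w)
```

This matched-disturbance cancellation is why the state equation of the resulting closed loop (6.58) contains no $B_{wi}w(k)$ term.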

Substituting the equivalent controller (6.57) into the discrete-time MJS (6.2), the following closed-loop MJS can be obtained:

$$\begin{cases}x(k+1)=(A_i+E_i\Delta(k)F_i-B_{ui}K_q)x(k)\\ z(k)=C_i x(k)+D_{wi}w(k)\\ x(k)=x_0,\ k=0.\end{cases} \qquad(6.58)$$

In the following theorem, some sufficient conditions for the stochastic finite-time stabilizability with respect to $(c_1, c^*, c_2, N^*, N, R, d)$ of the closed-loop MJS (6.58) will be given.

Theorem 6.7 For given scalars $\beta_i\ge 1$, $N^*<N$, and $c^*<c_2$, the closed-loop MJS (6.58) is stochastic finite-time stabilizable with respect to $(c_1, c^*, c_2, N^*, N, R, d)$ if there exist mode-dependent weight matrices $P_i$ such that

$$\begin{bmatrix}-\beta_i P_i & 0 & \sum_{q=1}^{Q}\phi_{iq}\sum_{j=1}^{M}\pi_{ij}\bar{A}_i^{T}P_j\\ * & -\beta_i I & 0\\ * & * & -\sum_{j=1}^{M}\pi_{ij}P_j\end{bmatrix} < 0 \qquad(6.59)$$

$$\beta_i^{N-N^*}(\bar{\lambda}_{P_i}c^*+d^2) < c_2\,\underline{\lambda}_{P_i} \qquad(6.60)$$

$$c_1 < c^* < c_2. \qquad(6.61)$$

Proof For the closed-loop MJS (6.58), the same Lyapunov function is selected as (6.15). Thus, we have

$$\mathbb{E}\{V_2(k+1,i)\}=\sum_{q=1}^{Q}\phi_{iq}\sum_{j=1}^{M}\pi_{ij}x^{T}(k+1)P_j x(k+1)=\sum_{q=1}^{Q}\phi_{iq}\sum_{j=1}^{M}\pi_{ij}x^{T}(k)\bar{A}_i^{T}P_j\bar{A}_i x(k) \qquad(6.62)$$

where $\bar{A}_i=A_i+E_i\Delta(k)F_i-B_{ui}K_q$.


We introduce the following auxiliary function:

$$\mathbb{E}\{V_2(k+1,i)\}-\beta_i V_2(k,i) < \beta_i w^{T}(k)w(k). \qquad(6.63)$$

Considering condition (6.63), inequality (6.59) can be obtained by means of the Schur complement lemma.
Recalling Theorem 6.5, we have $\mathbb{E}\{x^{T}(N^*)P_i x(N^*)\} < c^*$. From Eq. (6.63), we have
$$\begin{aligned}
\mathbb{E}\{V_2(k,i)\} &< \beta_i^{k}V_2(x(N^*))+\sum_{l=N^*}^{k}\beta_i^{l}w^{T}(k-l)w(k-l)\\
&< \beta_i^{k}\Big\{V_2(x(N^*))+\sum_{l=1}^{k}\beta_i^{k-l}w^{T}(k-l)w(k-l)\Big\}\\
&< \beta_i^{k}\Big\{\mathbb{E}\{x^{T}(N^*)P_i x(N^*)\}+\sum_{l=1}^{k}\beta_i^{k-l}w^{T}(k-l)w(k-l)\Big\}. \qquad(6.64)
\end{aligned}$$

Defining $\bar{\lambda}_{P_i}=\max_{i\in\mathbb{M}}\{\lambda_{\max}(R^{-\frac{1}{2}}P_iR^{-\frac{1}{2}})\}$ and $\underline{\lambda}_{P_i}=\min_{i\in\mathbb{M}}\{\lambda_{\min}(R^{-\frac{1}{2}}P_iR^{-\frac{1}{2}})\}$, we can get from condition (6.64) that

$$\underline{\lambda}_{P_i}\mathbb{E}\{x^{T}(k)Rx(k)\} < \beta_i^{N-N^*}\Big\{\bar{\lambda}_{P_i}x^{T}(N^*)Rx(N^*)+\sum_{l=1}^{k}\beta_i^{k-l}w^{T}(k-l)w(k-l)\Big\} < \beta_i^{N-N^*}(\bar{\lambda}_{P_i}c^*+d^2). \qquad(6.65)$$

Thus, one can get

$$\mathbb{E}\{x^{T}(k)Rx(k)\} < \frac{\beta_i^{N-N^*}(\bar{\lambda}_{P_i}c^*+d^2)}{\underline{\lambda}_{P_i}}. \qquad(6.66)$$

It follows from Eq. (6.60) that $\mathbb{E}\{x^{T}(k)Rx(k)\} < c_2$ for any $k\in[N^*,N]$. That means the closed-loop MJS (6.58) is stochastic finite-time stabilizable with respect to $(c_1, c^*, c_2, N^*, N, R, d)$. The proof is completed. □

In the following theorem, some sufficient conditions for the stochastic finite-time stabilizability with $H_\infty$ performance with respect to $(c_1, c^*, c_2, N^*, N, R, d)$ of the closed-loop MJS (6.58) will be given.

Theorem 6.8 For given scalars $\beta_i\ge 1$, $N^*<N$, and $c^*<c_2$, the closed-loop MJS (6.58) is stochastic finite-time stabilizable with $H_\infty$ performance with respect to $(c_1, c^*, c_2, N^*, N, R, d)$ if there exist mode-dependent weight matrices $P_i$ such that conditions (6.60) and (6.61) and the following inequality hold:
$$\begin{bmatrix}C_i^{T}C_i-\beta_i P_i & C_i^{T}D_{wi} & \sum_{q=1}^{Q}\phi_{iq}\sum_{j=1}^{M}\pi_{ij}\bar{A}_i^{T}P_j\\ * & -\beta_i I+D_{wi}^{T}D_{wi} & 0\\ * & * & -\sum_{j=1}^{M}\pi_{ij}P_j\end{bmatrix} < 0. \qquad(6.67)$$
Proof The same Lyapunov function is selected as (6.15). Considering the $H_\infty$ performance index, we introduce the same auxiliary function as (6.52). Then, similar to the proof of Theorem 6.6, the stochastic finite-time stabilizability with $H_\infty$ performance of the closed-loop MJS (6.58) can be guaranteed. The proof is completed. □

It follows from Theorems 6.5 to 6.8 that the closed-loop discrete-time MJS is stochastic finite-time stabilizable with $H_\infty$ performance over $[0,N^*]$ and $[N^*,N]$ if the inequalities (6.43)-(6.44), (6.51), (6.60)-(6.61), and (6.67) hold simultaneously. In the following theorem, the asynchronous controller gain $K_q$ will be obtained to ensure that the closed-loop MJS is stochastic finite-time stabilizable with $H_\infty$ performance over $[0,N^*]$ and $[N^*,N]$ simultaneously.
Theorem 6.9 For given scalars $\beta_i\ge 1$ and $0<c_1<c^*<c_2$, the closed-loop MJS (6.5) is stochastic finite-time stabilizable with $H_\infty$ performance with respect to $(c_1, c_2, N, R, d)$ over $[0,N^*]$ and $[N^*,N]$ simultaneously, if there exist mode-dependent weight matrices $X_i$ and matrices $H_q$ and $Y_q$ such that

$$\begin{bmatrix}\Xi_{1iq} & \Xi_{2i}\\ * & -\mathrm{diag}\{I,\ \beta_i^{-1}I,\ I,\ I,\ I\}\end{bmatrix} < 0 \qquad(6.68)$$

$$\begin{bmatrix}\Xi_{3iq} & \Xi_{2i}\\ * & -\mathrm{diag}\{I,\ \beta_i^{-1}I,\ I,\ I,\ I\}\end{bmatrix} < 0 \qquad(6.69)$$

$$\beta_i^{N^*}(\bar{\lambda}_{P_i}c_1+d^2) < c^*\underline{\lambda}_{P_i} \qquad(6.70)$$

$$\beta_i^{N-N^*}(\bar{\lambda}_{P_i}c^*+d^2) < c_2\underline{\lambda}_{P_i} \qquad(6.71)$$

$$c_1 < c^* < c_2 \qquad(6.72)$$

where

$$\Xi_{1iq}=\begin{bmatrix}-\beta_i X_i & \sum_{q=1}^{Q}\phi_{iq}\sum_{j=1}^{M}\pi_{ij}(H_q A_i^{T}+Y_q^{T}B_{ui}^{T}) & 0 & 0\\ * & -\beta_i H_i & \sum_{j=1}^{M}\pi_{ij}X_i B_{wi}^{T} & 0\\ * & * & -\sum_{j=1}^{M}\pi_{ij}X_i & \sum_{j=1}^{M}\pi_{ij}X_i E_i\\ * & * & * & -\beta_i I\end{bmatrix},$$

$$\Xi_{2i}=\begin{bmatrix}X_i C_i^{T} & X_i F_i^{T} & X_i C_i^{T} & 0 & 0\\ 0 & 0 & 0 & X_i D_{wi}^{T} & X_i D_{wi}^{T}\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\end{bmatrix},$$

$$\Xi_{3iq}=\begin{bmatrix}-\beta_i X_i & \sum_{q=1}^{Q}\phi_{iq}\sum_{j=1}^{M}\pi_{ij}(H_q A_i^{T}-Y_q^{T}B_{ui}^{T}) & 0 & 0\\ * & -\beta_i H_i & 0 & 0\\ * & * & -\sum_{j=1}^{M}\pi_{ij}X_i & \sum_{j=1}^{M}\pi_{ij}X_i E_i\\ * & * & * & -\beta_i I\end{bmatrix}.$$

Moreover, the asynchronous finite-time SMC gain is given by $K_q=Y_q H_q^{-1}$.

Proof Considering $\hat{A}_i=A_i+E_i\Delta(k)F_i+B_{ui}K_q$ and $\bar{A}_i=A_i+E_i\Delta(k)F_i-B_{ui}K_q$ and Lemma 2.1, inequalities (6.51) and (6.67) can be rewritten as the following inequalities by means of the Schur complement lemma:

$$\begin{bmatrix}\bar{C}_i & C_i^{T}D_{wi} & \sum_{q=1}^{Q}\phi_{iq}\sum_{j=1}^{M}\pi_{ij}(A_i^{T}+K_q^{T}B_{ui}^{T})P_j & 0\\ * & -\beta_i I+D_{wi}^{T}D_{wi} & \sum_{j=1}^{M}\pi_{ij}B_{wi}^{T}P_j & 0\\ * & * & -\sum_{j=1}^{M}\pi_{ij}P_j & \sum_{j=1}^{M}\pi_{ij}P_j E_i\\ * & * & * & -\beta_i I\end{bmatrix} < 0 \qquad(6.73)$$

$$\begin{bmatrix}\bar{C}_i & C_i^{T}D_{wi} & \sum_{q=1}^{Q}\phi_{iq}\sum_{j=1}^{M}\pi_{ij}(A_i^{T}-K_q^{T}B_{ui}^{T})P_j & 0\\ * & -\beta_i I+D_{wi}^{T}D_{wi} & 0 & 0\\ * & * & -\sum_{j=1}^{M}\pi_{ij}P_j & \sum_{j=1}^{M}\pi_{ij}P_j E_i\\ * & * & * & -\beta_i I\end{bmatrix} < 0 \qquad(6.74)$$

where $\bar{C}_i=C_i^{T}C_i-\beta_i P_i+\beta_i F_i^{T}F_i$.

Pre- and post-multiplying inequalities (6.73) and (6.74) by $\mathrm{diag}\{P_i^{-1}, P_i^{-1}, P_i^{-1}, I\}$, respectively, and letting $X_i=P_i^{-1}$ and $H_i=P_i^{-1}X_i$, we have

$$\begin{bmatrix}\bar{\Xi}_{1iq} & \bar{\Xi}_{2i}\\ * & -\mathrm{diag}\{I,\ \beta_i^{-1}I,\ I,\ I,\ I\}\end{bmatrix} < 0 \qquad(6.75)$$

$$\begin{bmatrix}\bar{\Xi}_{3iq} & \bar{\Xi}_{2i}\\ * & -\mathrm{diag}\{I,\ \beta_i^{-1}I,\ I,\ I,\ I\}\end{bmatrix} < 0 \qquad(6.76)$$
where

$$\bar{\Xi}_{1iq}=\begin{bmatrix}-\beta_i X_i & \sum_{q=1}^{Q}\phi_{iq}\sum_{j=1}^{M}\pi_{ij}X_i(A_i^{T}+K_q^{T}B_{ui}^{T}) & 0 & 0\\ * & -\beta_i H_i & \sum_{j=1}^{M}\pi_{ij}X_i B_{wi}^{T} & 0\\ * & * & -\sum_{j=1}^{M}\pi_{ij}X_i & \sum_{j=1}^{M}\pi_{ij}X_i E_i\\ * & * & * & -\beta_i I\end{bmatrix},$$

$$\bar{\Xi}_{2i}=\begin{bmatrix}X_i C_i^{T} & X_i F_i^{T} & X_i C_i^{T} & 0 & 0\\ 0 & 0 & 0 & X_i D_{wi}^{T} & X_i D_{wi}^{T}\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\end{bmatrix},$$

$$\bar{\Xi}_{3iq}=\begin{bmatrix}-\beta_i X_i & \sum_{q=1}^{Q}\phi_{iq}\sum_{j=1}^{M}\pi_{ij}X_i(A_i^{T}-K_q^{T}B_{ui}^{T}) & 0 & 0\\ * & -\beta_i H_i & 0 & 0\\ * & * & -\sum_{j=1}^{M}\pi_{ij}X_i & \sum_{j=1}^{M}\pi_{ij}X_i E_i\\ * & * & * & -\beta_i I\end{bmatrix}.$$

For the terms $\sum_{q=1}^{Q}\phi_{iq}\sum_{j=1}^{M}\pi_{ij}X_i(A_i^{T}\pm K_q^{T}B_{ui}^{T})$ in $\bar{\Xi}_{1iq}$ and $\bar{\Xi}_{3iq}$, we define $X_{iq}=L_i H_q$ with a non-singular matrix $L_i$. Then $\sum_{q=1}^{Q}\phi_{iq}\sum_{j=1}^{M}\pi_{ij}X_i(A_i^{T}\pm K_q^{T}B_{ui}^{T})=\sum_{q=1}^{Q}\phi_{iq}\sum_{j=1}^{M}\pi_{ij}(H_q A_i^{T}\pm Y_q^{T}B_{ui}^{T})$ can be obtained by defining $Y_q=K_q H_q$. Thus, inequalities (6.68)-(6.69) can ensure that inequalities (6.75)-(6.76) hold simultaneously. Combining with inequalities (6.70)-(6.72), we know that the closed-loop MJS (6.5) is stochastic finite-time stabilizable with $H_\infty$ performance with respect to $(c_1, c_2, N, R, d)$ over $[0,N^*]$ and $[N^*,N]$ simultaneously. The proof is completed. □

6.4 Simulation Analysis

In this section, a numerical example is given to show the effectiveness of our developed results. Consider the following two-mode discrete-time MJS with parameters given by:

$$A_1=\begin{bmatrix}-0.25 & -0.15\\ 0.43 & -0.31\end{bmatrix},\ A_2=\begin{bmatrix}-0.46 & 0.26\\ 0.24 & -0.57\end{bmatrix},\ B_{u1}=\begin{bmatrix}0.1\\ 0.5\end{bmatrix},\ B_{u2}=\begin{bmatrix}0.2\\ 0.3\end{bmatrix},$$
$$E_1=\begin{bmatrix}0.2 & 0.4\\ 0.7 & 0.1\end{bmatrix},\ E_2=\begin{bmatrix}0.3 & 0.5\\ 0.3 & 0.4\end{bmatrix},\ F_1=\begin{bmatrix}0.2 & 0.5\end{bmatrix},\ F_2=\begin{bmatrix}0.3 & 0.4\end{bmatrix},$$

$$C_1=\begin{bmatrix}0.1 & 0.5\end{bmatrix},\ C_2=\begin{bmatrix}1.2 & 1.6\end{bmatrix},\ D_{w1}=0.5,\ D_{w2}=0.3.$$
 
Assume the transition probability matrix is $[\pi_{ij}]=\begin{bmatrix}0.4 & 0.6\\ 0.3 & 0.7\end{bmatrix}$. The initial conditions are given by $x_0=\begin{bmatrix}0.3 & 0.4\end{bmatrix}^{T}$, $c_1=0.4$, and $w(k)=0.8\cos(k)$. We also set $\beta_1=0.2$, $\beta_2=0.3$, $N=3$, $\bar{\lambda}_{P_i}=0.8$, $\underline{\lambda}_{P_i}=0.3$, and $d^2=0.6$.
By solving Theorem 6.3, we obtain

$$P_1=\begin{bmatrix}8.2520 & 1.7494\\ 1.7494 & 9.2918\end{bmatrix},\ P_2=\begin{bmatrix}-13.0905 & -2.6415\\ -2.6415 & -8.8332\end{bmatrix},$$

$$G_1=\begin{bmatrix}1.6999 & 4.8209\end{bmatrix},\ G_2=\begin{bmatrix}-3.4106 & -3.1782\end{bmatrix},\ c_2=5.9353.$$
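The jumping-mode signal of Fig. 6.1 is a sample path of the Markov chain generated by the transition probability matrix above. A short sketch of how such a path can be sampled and checked against the chain's stationary distribution (the helper function is our own):

```python
import numpy as np

# Chapter's transition probability matrix [pi_ij]
P = np.array([[0.4, 0.6],
              [0.3, 0.7]])
rng = np.random.default_rng(1)

def sample_path(P, r0, n_steps):
    """Sample r_0, r_1, ..., r_{n_steps} from the Markov chain."""
    path = [r0]
    for _ in range(n_steps):
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return np.array(path)

path = sample_path(P, r0=0, n_steps=10_000)
occupancy = np.bincount(path, minlength=2) / len(path)

# Long-run mode occupancy approaches the stationary distribution (1/3, 2/3),
# obtained by solving pi = pi P with the entries of pi summing to one.
assert np.allclose(occupancy, [1/3, 2/3], atol=0.02)
```

Such a sampled path is what drives the mode-dependent matrices $P_i$, $G_i$ in the closed-loop simulation.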

Applying the obtained mode-dependent weight matrices $P_i$, $G_i$ to the discrete-time MJS (6.5), the simulation results are plotted in Figs. 6.1, 6.2, 6.3, and 6.4. Figure 6.1 depicts the jumping modes of the discrete-time MJS. The state trajectories of $x(k)$ and $\mathbb{E}\{x^{T}(k)Rx(k)\}$ of the closed-loop MJS are shown in Figs. 6.2 and 6.3, respectively. From Figs. 6.2 and 6.3, we know that the closed-loop MJS

Fig. 6.1 The jumping modes



Fig. 6.2 The state trajectories of x(k)

Fig. 6.3 The state trajectories of E{x T (k)Rx(k)}

(6.12) is stochastic finite-time stabilizable with $H_\infty$ performance with respect to $(0.4, 5.9353, 3, I)$. Figure 6.4 displays the mode-dependent sliding variable trajectories of $S(k)$. From Fig. 6.4, we know that the closed-loop MJS can be driven onto the sliding surface under the designed sliding mode control scheme.

Fig. 6.4 The sliding variable trajectories of S(k)

6.5 Conclusion

The finite-time SMC and asynchronous SMC design problems for discrete-time MJSs are investigated in this chapter. Firstly, we design a mode-dependent SMC to drive the closed-loop MJSs onto the sliding surface with the help of the Lyapunov function method. Moreover, some sufficient conditions for the stochastic finite-time stabilization with $H_\infty$ performance of the closed-loop MJSs are given. Then, considering the asynchronous phenomenon of the discrete-time MJSs, we design a mode-dependent asynchronous SMC to drive the closed-loop systems onto the sliding surface. In addition, some sufficient conditions for the stochastic finite-time stabilization with $H_\infty$ performance over the finite-time interval $[0, N]$ are provided. The next chapter will consider the transient performance in a specific frequency band to reduce the conservativeness of controller design from the perspective of the frequency domain.

Chapter 7
Finite-Frequency Control with Finite-Time Performance for Markovian Jump Systems

Abstract The multiperformance controller design problem for discrete-time Markovian jump systems is analyzed in both the time domain and the frequency domain. The proposed control scheme provides new ideas for reducing the conservativeness of controller design for jumping systems, not only from the perspective of the time domain but also from that of the frequency domain, by introducing frequency information into the controller design. Moreover, in order to overcome the effect of stochastic jumping among different modes on system performance, a derandomization method is introduced into the controller design by transforming the original stochastic multimodal system into a deterministic one.

7.1 Introduction

Usually, the traditional controller design approach considers the performance over the entire frequency range, which leads to over-design and conservativeness. In practical engineering applications, it is often necessary to examine the performance of the system in a particular frequency band or in a number of different frequency bands. For example, in the design of a servo system, it is necessary for the system to have good tracking performance in the middle-frequency band and high steady-state performance in the low-frequency band. Concerned with the performance of systems in limited frequency bands, researchers have carried out extensive and in-depth research on single-mode systems.
The frequency weighting function is an existing method for solving finite-frequency band problems [1]. The principle of the frequency weighting method is to approximate the frequency-domain inequality of the original transfer function by designing an appropriate weighted transfer function, so as to introduce the frequency information. However, the frequency-weighted function method also has some disadvantages. Finding the right weighting function is a complex and time-consuming process. On the other hand, the more complex the weighting function, the higher the dimension of the controller, which makes the design difficult. Therefore, research progress on finite-frequency band problems based on frequency weighting functions has been slow.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
X. Luan et al., Robust Control for Discrete-Time Markovian Jump Systems in the
Finite-Time Domain, Lecture Notes in Control and Information Sciences 492,
https://doi.org/10.1007/978-3-031-22182-8_7

A new method to solve finite-frequency band problems is proposed in [2, 3], where a generalized KYP (GKYP) lemma is obtained by extending the KYP lemma. The GKYP lemma can be directly applied to engineering applications with convex constraints, including loop-shaping problems of control systems [4], filtering problems [5], fault detection problems [6], integrated control design problems [7], etc. For systems with non-convex constraints, the constraints need to be transformed into convex ones. For example, Iwasaki transformed the design objective with non-convex constraints into a convex linear matrix inequality by means of a multiplier extension method and variable substitution [8]. Since then, researchers have extended the GKYP lemma to singularly perturbed systems [9], nonlinear systems [10], uncertain switching systems [11], and Markovian jump systems (MJSs) [12, 13], gradually expanding the application range of finite-frequency band theory.
On the one hand, the available results on finite-frequency bands focus on the Lyapunov asymptotic stability of systems over the infinite-time domain, whereas the transient performance in a given time interval is often what is required. On the other hand, most finite-frequency studies of multimodal systems only focus on the sub-modal systems satisfying the finite-frequency performance, but ignore the finite-frequency performance of the whole system. Therefore, in this chapter, two contributions are made to improve the results in the existing literature. One is to reduce the conservativeness by allowing the Lyapunov function to increase with a limited increasing rate. The other is to introduce the jumping probability into the controller design to guarantee the finite-frequency performance of the whole system rather than only that of the sub-modal systems.

7.2 Finite-Time Stabilization with Finite-Frequency Performance

Design the following state feedback controller for the system (5.1):

u(k) = K i x(k) (7.1)

where K i is the controller gain to be designed for each mode i ∈ M. Substituting


controller (7.1) into system (5.1) yields the following closed-loop system:

$$\begin{cases}x(k+1)=\bar{A}_i x(k)+B_{wi}w(k)\\ z(k)=C_i x(k)+D_{wi}w(k)\end{cases} \qquad(7.2)$$

where $\bar{A}_i=A_i+B_{ui}K_i$.


Firstly, the following lemma is introduced before presenting the main results.

Lemma 7.1 (GKYP Lemma) For given symmetric matrices $\Pi$ and $\Xi$, and considering the closed-loop system (7.2), the following two descriptions are equivalent:

(1) The finite-frequency band inequality

$$\begin{bmatrix}G_i(\lambda)\\ I\end{bmatrix}^{*}\Pi\begin{bmatrix}G_i(\lambda)\\ I\end{bmatrix} < 0,\quad \lambda\in\Lambda(\Phi,\Psi) \qquad(7.3)$$

where $G_i(\lambda)=C_i(\lambda I-\bar{A}_i)^{-1}B_{wi}+D_{wi}$ indicates the transfer function of the system (7.2), and the set $\Lambda(\Phi,\Psi)$ represents a set in the complex plane, which defines the range of the frequency variable $\lambda$. The matrix $\Pi$ describes the performance of the system in the finite-frequency band.

(2) There exist symmetric matrices $P_i$ and $Q_i>0$ such that

$$\begin{bmatrix}\bar{A}_i & B_{wi}\\ I & 0\end{bmatrix}^{T}\Xi\begin{bmatrix}\bar{A}_i & B_{wi}\\ I & 0\end{bmatrix}+\begin{bmatrix}C_i & D_{wi}\\ 0 & I\end{bmatrix}^{T}\Pi\begin{bmatrix}C_i & D_{wi}\\ 0 & I\end{bmatrix} < 0 \qquad(7.4)$$

where $\Xi=\Phi\otimes P_i+\Psi\otimes Q_i$.
 
Remark 7.1 Generally, $\Pi=\begin{bmatrix}I & 0\\ 0 & -\gamma^{2}I\end{bmatrix}$ is used to represent the $H_\infty$ performance index. In this situation, Eq. (7.3) is equal to $\|G_i(\lambda)\|_\infty<\gamma$. Therefore, different choices of $\Pi$ can be set up to depict different performance indices.

Remark 7.2 In this chapter, the performance of the system (7.2) in the medium-frequency band is considered. Therefore, $\Xi$ can be set to

$$\Xi=\begin{bmatrix}-P_i & e^{j\vartheta_m}Q_i\\ * & P_i-2\cos\vartheta_d Q_i\end{bmatrix}$$

where $\vartheta_m=(\vartheta_l+\vartheta_h)/2$ and $\vartheta_d=(\vartheta_h-\vartheta_l)/2$.
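For intuition, the frequency band handled by a $(\Phi,\Psi)$ pair in the GKYP framework can be checked pointwise: $\lambda=e^{j\vartheta}$ belongs to the band $[\vartheta_l,\vartheta_h]$ exactly when two quadratic forms are zero and nonnegative, respectively. The sketch below uses the standard discrete-time choices of $\Phi$ and $\Psi$ (our assumption; the chapter only states the resulting $\Xi$):

```python
import numpy as np

theta_l, theta_h = 0.5, 1.5
tm, td = (theta_l + theta_h) / 2, (theta_h - theta_l) / 2

# Assumed standard discrete-time mid-frequency pair (Phi, Psi)
Phi = np.array([[1, 0], [0, -1]], dtype=complex)
Psi = np.array([[0, np.exp(1j * tm)],
                [np.exp(-1j * tm), -2 * np.cos(td)]])

def in_band(theta):
    v = np.array([np.exp(1j * theta), 1.0])
    on_circle = np.isclose((v.conj() @ Phi @ v).real, 0.0)   # |lambda| = 1
    in_sector = (v.conj() @ Psi @ v).real >= -1e-12          # 2cos(theta-tm) >= 2cos(td)
    return bool(on_circle and in_sector)

assert in_band(0.5) and in_band(1.0) and in_band(1.5)
assert not in_band(0.2) and not in_band(2.0)
```

Here $v^{*}\Psi v = 2\cos(\vartheta-\vartheta_m)-2\cos\vartheta_d$, which is nonnegative precisely when $|\vartheta-\vartheta_m|\le\vartheta_d$, i.e. on the chosen band.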

Lemma 7.2 (Finsler's Lemma [14]) For $\eta\in\mathbb{R}^{n}$, $\Theta\in\mathbb{R}^{n\times n}$, $\Gamma\in\mathbb{R}^{n\times m}$, and $\Gamma^{\perp}$ satisfying $\Gamma^{\perp}\Gamma=0$, the following expressions are equivalent:

(1) $\eta^{T}\Theta\eta<0$ for all $\eta\ne 0$ with $\Gamma^{T}\eta=0$;
(2) $\Gamma^{\perp}\Theta\Gamma^{\perp T}<0$;
(3) $\exists\mu\in\mathbb{R}$ such that $\Theta-\mu\Gamma\Gamma^{T}<0$;
(4) $\exists X\in\mathbb{R}^{m\times n}$ such that $\Theta+\Gamma X+X^{T}\Gamma^{T}<0$.
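A quick numerical spot-check of two of the equivalent items (the data below is illustrative): $\Theta$ is indefinite overall, yet it is negative definite on the null space of $\Gamma^{T}$ (item (2)), and a suitable multiplier makes item (4) hold:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 2
Gamma = rng.standard_normal((n, m))
Theta = 10 * Gamma @ Gamma.T - np.eye(n)     # indefinite in general

# item (2): Gperp Theta Gperp^T < 0 for an orthonormal basis Gperp of
# the null space of Gamma^T (so Gperp @ Gamma = 0)
Q, _ = np.linalg.qr(Gamma, mode='complete')
Gperp = Q[:, m:].T
assert np.allclose(Gperp @ Gamma, 0.0)
assert np.all(np.linalg.eigvalsh(Gperp @ Theta @ Gperp.T) < 0)

# item (4): Theta + Gamma X + X^T Gamma^T < 0 with the multiplier X = -10 Gamma^T
X = -10.0 * Gamma.T
M4 = Theta + Gamma @ X + X.T @ Gamma.T       # equals -10 Gamma Gamma^T - I
assert np.all(np.linalg.eigvalsh(M4) < 0)
```

The multiplier form (4) is the one used later in the proofs, where the slack variables absorb the products of system matrices and Lyapunov variables.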

The purpose of this chapter is to design an appropriate controller (7.1) so that the controlled system (7.2) meets the corresponding finite-time stability requirements together with the following requirement:

$$\|G_{zw}(e^{j\vartheta})\|_{\vartheta_l<\vartheta<\vartheta_h} < \gamma. \qquad(7.5)$$

The following theorem focuses on the transient performance of the system (7.2), ensuring that the state trajectory of the controlled system does not exceed a certain bound in a given time while also meeting the finite-frequency performance index.
Theorem 7.1 For given $\gamma>0$ and $\alpha\ge 0$, the discrete-time closed-loop system (7.2) is said to be finite-time stabilizable with respect to $(c_1, c_2, N, R)$ and to meet the performance index (7.5), if there exist mode-dependent symmetric matrices $\tilde{P}_{ki}>0$ and $\tilde{P}_i>0$, matrices $\tilde{\eta}_i$, $\tilde{K}_i$, and $\tilde{Q}_i>0$ satisfying the following conditions:

$$\begin{pmatrix}
-\tilde{P}_i & e^{j\vartheta_m}\tilde{Q}_i-\tilde{\eta}_i^{T} & 0 & 0 & 0 & 0\\
* & \tilde{\Theta}+\mathrm{He}(A_i\tilde{\eta}_i+B_{ui}\tilde{K}_i) & \tilde{\eta}_i^{T}C_i^{T}D_{wi}+B_{wi} & 0 & \tilde{\eta}_i^{T}C_i^{T} & 0\\
* & * & -\gamma^{2}I+D_{wi}^{T}D_{wi} & 0 & 0 & 0\\
* & * & * & -I & 0 & 0\\
* & * & * & * & -I & 0\\
* & * & * & * & * & -I
\end{pmatrix} < 0 \qquad(7.6)$$

$$\begin{bmatrix}\bar{P}_k-\tilde{\eta}_i^{T}-\tilde{\eta}_i & A_i\tilde{\eta}_i+B_{ui}\tilde{K}_i\\ * & -(1+\alpha)\tilde{P}_{ki}\end{bmatrix} < 0 \qquad(7.7)$$

$$\begin{bmatrix}-\tilde{P}_{ki} & \tilde{\eta}_i^{T}\\ * & -\lambda_1^{-1}R^{-1}\end{bmatrix} < 0 \qquad(7.8)$$

$$\tilde{P}_{ki}+\lambda_2^{-1}R^{-1}-\tilde{\eta}_i^{T}-\tilde{\eta}_i < 0 \qquad(7.9)$$

$$(\alpha+1)^{N}c_1\lambda_1^{-1}-c_2\lambda_2^{-1} < 0 \qquad(7.10)$$

where

$$\tilde{\Theta}=\tilde{P}_i-2\cos\vartheta_d\tilde{Q}_i,\quad \bar{P}_k=\pi_{i1}\tilde{P}_{k1}+\pi_{i2}\tilde{P}_{k2}+\cdots+\pi_{iM}\tilde{P}_{kM},\quad \tilde{P}_i=\tilde{\eta}_i^{T}P_i\tilde{\eta}_i,$$
$$\tilde{P}_{ki}=\tilde{\eta}_i^{T}P_{ki}\tilde{\eta}_i,\quad \tilde{Q}_i=\tilde{\eta}_i^{T}Q_i\tilde{\eta}_i,\quad \hat{P}_{ki}=R^{1/2}P_{ki}R^{1/2},\quad \tilde{\eta}_i=\eta_i^{-1},$$
$$\mathrm{He}(A_i\tilde{\eta}_i+B_{ui}\tilde{K}_i)=A_i\tilde{\eta}_i+B_{ui}\tilde{K}_i+(A_i\tilde{\eta}_i+B_{ui}\tilde{K}_i)^{T},$$
$$\lambda_1=\lambda_{\min}(P_{ki}),\quad \lambda_2=\lambda_{\max}(P_{ki}).$$

Then the state feedback controller gain can be obtained as $K_i=\tilde{K}_i\tilde{\eta}_i^{-1}$.

Proof Define the following mode-dependent stochastic Lyapunov function:

$$V_i(k)=x^{T}(k)\hat{P}_{ki}x(k).$$

Applying Lemma 7.2, condition (7.9) can be rewritten as

$$\tilde{P}_{ki} < \tilde{\eta}_i^{T}\lambda_2 R\tilde{\eta}_i. \qquad(7.11)$$

Similarly, by the Schur complement lemma, condition (7.8) can be converted into

$$\tilde{\eta}_i^{T}\lambda_1 R\tilde{\eta}_i < \tilde{P}_{ki}. \qquad(7.12)$$


By combining Eqs. (7.11) and (7.12), it can be obtained that

$$\lambda_1 R < P_{ki} < \lambda_2 R. \qquad(7.13)$$

Pre- and post-multiplying condition (7.7) by $\mathrm{diag}(\eta_i,\eta_i)$ and using Lemma 7.2, it can also be obtained that

$$\bar{A}_i^{T}\hat{P}_{ki}\bar{A}_i-(\alpha+1)P_{ki} < 0. \qquad(7.14)$$

Pre-multiplying Eq. (7.14) by $x^{T}(k)$ and post-multiplying it by $x(k)$ yields:

$$x^{T}(k)\bar{A}_i^{T}\Big(\sum_{j=1}^{M}\pi_{ij}P_{kj}\Big)\bar{A}_i x(k)-x^{T}(k)P_{ki}x(k) < \alpha x^{T}(k)P_{ki}x(k). \qquad(7.15)$$

The above inequality is equivalent to

$$\mathbb{E}\{V_i(k+1)\}-V_i(k) < \alpha V_i(k).$$

Listing the above inequality at different sampling times, we have

$$\mathbb{E}\{V_i(1)\}-V_i(0) < \alpha V_i(0),\quad \mathbb{E}\{V_i(2)\}-V_i(1) < \alpha V_i(1),\quad \ldots,\quad \mathbb{E}\{V_i(k+1)\}-V_i(k) < \alpha V_i(k).$$

It can be obtained from the above inequalities that

$$\mathbb{E}\{V_i(k)\} < (\alpha+1)V_i(k-1) < (\alpha+1)^{2}V_i(k-2) < \cdots < (\alpha+1)^{k}V_i(0) \le (\alpha+1)^{N}V_i(0).$$

In combination with Formula (7.13), the left-hand side of the above formula can be converted into

$$\mathbb{E}\{V_i(k)\}=\mathbb{E}\{x^{T}(k)R^{1/2}P_{ki}R^{1/2}x(k)\} > \lambda_{\min}(P_{ki})\mathbb{E}\{x^{T}(k)Rx(k)\}=\lambda_1\mathbb{E}\{x^{T}(k)Rx(k)\}. \qquad(7.16)$$
Similarly, the right-hand side of the inequality gives

$$(\alpha+1)^{N}V_i(0)=(\alpha+1)^{N}x^{T}(0)R^{1/2}P_{ki}R^{1/2}x(0) < (\alpha+1)^{N}\lambda_{\max}(P_{ki})x^{T}(0)Rx(0)=(\alpha+1)^{N}\lambda_2 x^{T}(0)Rx(0). \qquad(7.17)$$

By combining Eqs. (7.16) and (7.17), the original inequality can be converted into

$$\lambda_1\mathbb{E}\{x^{T}(k)Rx(k)\} < (\alpha+1)^{N}\lambda_2 x^{T}(0)Rx(0) < (\alpha+1)^{N}\lambda_2 c_1. \qquad(7.18)$$

Combined with the condition (7.10), it can be obtained that


 
E x T (k)Rx(k) < c2 ,

which means the system (7.2) is finite-time stabilizable.


Our other target is to guarantee the finite-frequency performance index (7.5) with the designed controller (7.1). By the Schur complement lemma, condition (7.6) is equivalent to

$$\begin{pmatrix}-\tilde{P}_i & e^{j\vartheta_m}\tilde{Q}_i-\tilde{\eta}_i & 0\\ * & \tilde{\Theta}+\mathrm{He}(A_i\tilde{\eta}_i+B_{ui}\tilde{K}_i)+\tilde{\eta}_i^{T}C_i^{T}C_i\tilde{\eta}_i & \tilde{\eta}_i^{T}C_i^{T}D_{wi}+B_{wi}\\ * & * & -\gamma^{2}I+D_{wi}^{T}D_{wi}\end{pmatrix} < 0. \qquad(7.19)$$

Pre- and post-multiplying condition (7.19) by $\mathrm{diag}\{\eta_i,\eta_i,I\}$, we can get:

$$\begin{pmatrix}-P_i & e^{j\vartheta_m}Q_i-\eta_i & 0\\ * & \Theta+\eta_i\bar{A}_i+\bar{A}_i^{T}\eta_i^{T} & C_i^{T}D_{wi}+\eta_i B_{wi}\\ * & * & -\gamma^{2}I+D_{wi}^{T}D_{wi}\end{pmatrix} < 0 \qquad(7.20)$$

where $\Theta=P_i-2\cos\vartheta_d Q_i+C_i^{T}C_i$.

Letting $\zeta=\begin{pmatrix}0 & \eta_i^{T} & 0\end{pmatrix}^{T}$, the above Eq. (7.20) can be converted into

$$\Omega+\zeta L^{\perp}+L^{\perp T}\zeta^{T} < 0 \qquad(7.21)$$

where

$$L^{\perp}=\begin{pmatrix}-I & \bar{A}_i & B_{wi}\end{pmatrix},\quad L=\begin{pmatrix}\bar{A}_i^{T} & I & 0\\ B_{wi}^{T} & 0 & I\end{pmatrix}^{T},\quad \Omega=\begin{pmatrix}-P_i & e^{j\vartheta_m}Q_i & 0\\ * & \Theta & C_i^{T}D_{wi}\\ * & * & -\gamma^{2}I+D_{wi}^{T}D_{wi}\end{pmatrix}.$$
Using Lemma 7.2, Eq. (7.21) is equivalent to

$$L^{T}\Omega L < 0 \qquad(7.22)$$

where $\Omega$ can be rewritten as

$$\Omega=\begin{pmatrix}I & 0\\ 0 & I\\ 0 & 0\end{pmatrix}\begin{pmatrix}-P_i & e^{j\vartheta_m}Q_i\\ e^{-j\vartheta_m}Q_i & P_i-2\cos\vartheta_d Q_i\end{pmatrix}\begin{pmatrix}I & 0\\ 0 & I\\ 0 & 0\end{pmatrix}^{T}+\begin{pmatrix}0 & 0\\ C_i^{T} & 0\\ D_{wi}^{T} & I\end{pmatrix}\begin{pmatrix}I & 0\\ 0 & -\gamma^{2}I\end{pmatrix}\begin{pmatrix}0 & 0\\ C_i^{T} & 0\\ D_{wi}^{T} & I\end{pmatrix}^{T}.$$

An equivalent deformation of Eq. (7.22) can be obtained as

$$\begin{pmatrix}\bar{A}_i & B_{wi}\\ I & 0\end{pmatrix}^{T}\Xi\begin{pmatrix}\bar{A}_i & B_{wi}\\ I & 0\end{pmatrix}+\begin{pmatrix}C_i & D_{wi}\\ 0 & I\end{pmatrix}^{T}\Pi\begin{pmatrix}C_i & D_{wi}\\ 0 & I\end{pmatrix} < 0 \qquad(7.23)$$

where

$$\Xi=\begin{pmatrix}-P_i & e^{j\vartheta_m}Q_i\\ * & P_i-2\cos\vartheta_d Q_i\end{pmatrix},\quad \Pi=\begin{pmatrix}I & 0\\ 0 & -\gamma^{2}I\end{pmatrix}.$$

From the GKYP Lemma 7.1, it can be obtained that the controlled system (7.2) meets the medium-frequency performance index (7.5). This completes the proof. □
meets the medium-frequency performance index (7.5). This completes the proof. 

7.3 Finite-Time Multiple-Frequency Control Based on Derandomization

The main purpose of this subsection is to design an appropriate mode-dependent state feedback controller (7.1) so that the system (7.2) meets the corresponding finite-time stability requirement and the following multiple-frequency performance indices:

$$\|G_{zw}(e^{j\vartheta})\| < \gamma,\quad \forall|\vartheta|\le\vartheta_l \qquad(7.24)$$

$$\|G_{uw}(e^{j\vartheta})\| < \rho,\quad \forall|\vartheta|\ge\vartheta_h. \qquad(7.25)$$

The results in Theorem 7.1 just guarantee the finite-frequency performance of each
sub-modal system in a given time interval rather than the whole stochastic system.
It is well known that the performance of the sub-modal system is not equivalent to
that of the whole system.
In order to fully consider the effect of random jumping on the performance of
the system in different frequency ranges and a given time interval, derandomization
method is proposed to improve the performance of the system by transforming the
original stochastic multimodal systems into deterministic single-mode ones, where
the parameter matrices contain the information of the transition probability.
For a set $\mathcal{A}\subset\mathbb{R}$ and $\phi$, define the following indicator function:

$$\mathbb{1}_{\mathcal{A}}(\phi)=\begin{cases}1 & \text{if }\phi\in\mathcal{A}\\ 0 & \text{otherwise.}\end{cases}$$

Define

$$s_i(k)=\mathbb{E}\{x(k)\mathbb{1}_{\{r_k=i\}}\}. \qquad(7.26)$$

In combination with Eqs. (7.2) and (7.26), it follows that

$$s_j(k+1)=\mathbb{E}\Big\{\sum_{i=1}^{M}(A_i+B_{ui}K_i)x(k)\mathbb{1}_{\{r_{k+1}=j\}}\mathbb{1}_{\{r_k=i\}}+B_{wj}w(k)\Big\}=\sum_{i=1}^{M}\pi_{ij}\bar{A}_i s_i(k)+B_{wj}w(k). \qquad(7.27)$$
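The moment recursion (7.27) can be verified exactly on a toy two-mode chain by enumerating every mode path and forming the indicator-weighted means $s_i(k)$ directly (the matrices, probabilities, and the zero-disturbance assumption below are ours, not the chapter's):

```python
import numpy as np
from itertools import product

# Illustrative two-mode data; disturbance set to zero for the check
Abar = [np.array([[0.5, 0.1], [0.0, 0.4]]),
        np.array([[0.2, 0.0], [0.3, 0.6]])]
Pi = np.array([[0.7, 0.3],
               [0.4, 0.6]])
p0 = np.array([0.5, 0.5])                 # initial mode distribution
x0 = np.array([1.0, -1.0])
K = 3                                     # horizon

# Brute force s_i(k) = E{ x(k) 1{r_k = i} } over all 2^(K+1) mode paths
s = np.zeros((K + 1, 2, 2))               # s[k][i] is a vector in R^2
for path in product(range(2), repeat=K + 1):
    prob = p0[path[0]]
    for t in range(K):
        prob *= Pi[path[t], path[t + 1]]
    x = x0.copy()
    for k in range(K + 1):
        s[k][path[k]] += prob * x
        x = Abar[path[k]] @ x             # x(k+1) = Abar_{r_k} x(k)

# Check the recursion s_j(k+1) = sum_i pi_ij * Abar_i s_i(k) of Eq. (7.27)
for k in range(K):
    for j in range(2):
        pred = sum(Pi[i, j] * (Abar[i] @ s[k][i]) for i in range(2))
        assert np.allclose(s[k + 1][j], pred)
```

The recursion holds exactly here because, by the Markov property, $r_{k+1}$ is conditionally independent of $x(k)$ given $r_k$.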

The above Eq. (7.27) can be rewritten as:

$$\begin{cases}
\begin{pmatrix}s_1(k+1)\\ \vdots\\ s_M(k+1)\end{pmatrix}=\begin{pmatrix}\pi_{11}\bar{A}_1 & \pi_{21}\bar{A}_2 & \cdots & \pi_{M1}\bar{A}_M\\ \vdots & & & \vdots\\ \pi_{1M}\bar{A}_1 & \pi_{2M}\bar{A}_2 & \cdots & \pi_{MM}\bar{A}_M\end{pmatrix}\begin{pmatrix}s_1(k)\\ \vdots\\ s_M(k)\end{pmatrix}+\begin{pmatrix}B_{w1}w(k)\\ \vdots\\ B_{wM}w(k)\end{pmatrix}\\[2ex]
\tilde{z}(k)=\begin{pmatrix}C_1 & 0 & \cdots & 0\\ 0 & C_2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & C_M\end{pmatrix}\begin{pmatrix}s_1(k)\\ \vdots\\ s_M(k)\end{pmatrix}+\begin{pmatrix}D_{w1}w(k)\\ \vdots\\ D_{wM}w(k)\end{pmatrix}.
\end{cases}$$

Define the following augmented variables:

$$s(k)=\begin{pmatrix}s_1^{T}(k) & \ldots & s_M^{T}(k)\end{pmatrix}^{T},\quad \tilde{u}(k)=\hat{K}s(k),\quad \tilde{w}(k)=\begin{pmatrix}w^{T}(k) & \ldots & w^{T}(k)\end{pmatrix}^{T}.$$

The original stochastic system (5.1) can then be transformed into the following deterministic one:

$$\begin{cases}s(k+1)=As(k)+B_u\tilde{u}(k)+B_w\tilde{w}(k)\\ \tilde{z}(k)=Cs(k)+D_w\tilde{w}(k)\end{cases} \qquad(7.28)$$

where

$$A=(\Lambda^{T}\otimes I_n)\cdot\mathrm{diag}\{A_1,A_2,\ldots,A_M\},\quad B_u=(\Lambda^{T}\otimes I_n)\cdot\mathrm{diag}\{B_{u1},B_{u2},\ldots,B_{uM}\},$$
$$B_w=\mathrm{diag}\{B_{w1},B_{w2},\ldots,B_{wM}\},\quad C=\mathrm{diag}\{C_1,C_2,\ldots,C_M\},$$
$$D_w=\mathrm{diag}\{D_{w1},D_{w2},\ldots,D_{wM}\},\quad \hat{K}=\mathrm{diag}\{K_1,K_2,\ldots,K_M\},$$

$$\Lambda=\begin{pmatrix}\pi_{11} & \pi_{12} & \cdots & \pi_{1M}\\ \pi_{21} & \pi_{22} & \cdots & \pi_{2M}\\ \vdots & \vdots & \ddots & \vdots\\ \pi_{M1} & \pi_{M2} & \cdots & \pi_{MM}\end{pmatrix}.$$
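The Kronecker-product construction of the augmented matrix can be sketched and sanity-checked as follows: block $(j,i)$ of $(\Lambda^{T}\otimes I_n)\cdot\mathrm{diag}\{A_1,\ldots,A_M\}$ should equal $\pi_{ij}A_i$ (data below is illustrative):

```python
import numpy as np

# Illustrative two-mode data (not the chapter's example)
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.2, 0.0], [0.3, 0.6]])
Lam = np.array([[0.7, 0.3],
                [0.4, 0.6]])          # transition probability matrix [pi_ij]
n = 2

blkdiag = np.zeros((2 * n, 2 * n))    # diag{A1, A2}
blkdiag[:n, :n] = A1
blkdiag[n:, n:] = A2

A_aug = np.kron(Lam.T, np.eye(n)) @ blkdiag

# Block (j, i) of A_aug should be pi_ij * A_i
for j, i, Ai in [(0, 0, A1), (0, 1, A2), (1, 0, A1), (1, 1, A2)]:
    blk = A_aug[j * n:(j + 1) * n, i * n:(i + 1) * n]
    assert np.allclose(blk, Lam[i, j] * Ai)
```

This is exactly the block pattern of the first line of the stacked display above: row $j$ of the augmented matrix holds $\pi_{ij}\bar{A}_i$ in block column $i$.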

According to the definitions of $\tilde{z}(k)$, $\tilde{w}(k)$, and $\tilde{u}(k)$, the performance indices (7.24) and (7.25) are equal to

$$\|T_{\tilde{z}\tilde{w}}(e^{j\vartheta})\|=\frac{\|\tilde{z}(k)\|_2}{\|\tilde{w}(k)\|_2} < \gamma,\quad \forall|\vartheta|\le\vartheta_l \qquad(7.29)$$

$$\|T_{\tilde{u}\tilde{w}}(e^{j\vartheta})\|=\frac{\|\tilde{u}(k)\|_2}{\|\tilde{w}(k)\|_2} < \rho,\quad \forall|\vartheta|\ge\vartheta_h. \qquad(7.30)$$

This means that the low-frequency (or high-frequency) performance index of the system (7.28) is equivalent to the low-frequency (or high-frequency) performance index of the original stochastic jumping system (7.2).

Remark 7.3 Based on the derandomization method, the original stochastic jumping system is equivalently transformed into a deterministic one. This approach has two advantages. First of all, the GKYP lemma can be directly used for the finite-frequency performance analysis of the deterministic system. Secondly, the parameter matrices contain two kinds of information, namely $A_i$, $i=1,2,\ldots,M$, and the transition probabilities $\pi_{ij}$, so that the transition probability information is successfully introduced into the linear matrix inequality of the finite-frequency performance index.

Therefore, the target of this subsection can be transformed into designing an appropriate state feedback controller $\hat{K}$ to make the controlled system (7.2) meet the corresponding design indices.

The following theorem provides sufficient conditions to ensure the finite-time stability of the controlled system (7.2) and to meet the performance indices (7.24) and (7.25).

Theorem 7.2 The closed-loop system (7.2) is said to be finite-time stabilizable with respect to $(c_1, c_2, N, R)$ and to meet the performance indices (7.24) and (7.25), if for given scalars $\vartheta_l$, $\vartheta_h$, $\alpha\ge 0$, $\gamma$, and $\rho$, there exist symmetric matrices $P_l$, $P_h$, $Q_l>0$, and $Q_h>0$, and matrices $V_l$, $V_h$, and $\bar{W}$ satisfying the following conditions:

$$\begin{bmatrix}-P_l & Q_l+(\bar{W}R_l)^{T} & 0 & 0\\ * & P_l-2\cos\vartheta_l Q_l-\mathrm{He}(A\bar{W}R_l+B_u\bar{K}R_l) & B_w V_l & -(C\bar{W}R_l)^{T}\\ * & * & -\gamma^{2}I & D_w V_l\\ * & * & * & -I\end{bmatrix} < 0 \qquad(7.31)$$

$$\begin{bmatrix}-P_h & Q_h+(\bar{W}R_h)^{T} & 0 & 0\\ * & P_h+2\cos\vartheta_h Q_h-\mathrm{He}(A\bar{W}R_h+B_u\bar{K}R_h) & B_w V_h & (\bar{K}R_h)^{T}\\ * & * & -\rho^{2}I & 0\\ * & * & * & -I\end{bmatrix} < 0 \qquad(7.32)$$

$$(\alpha+1)^{N}c_1\lambda_3-c_2\lambda_4 < 0 \qquad(7.33)$$

$$-\lambda_3 R^{-1} < \bar{W} < -\lambda_4 R^{-1} \qquad(7.34)$$

$$\begin{bmatrix}(1+\alpha)\bar{W} & (A\bar{W}+B_u\bar{K})^{T}\\ * & \bar{W}\end{bmatrix} < 0. \qquad(7.35)$$

Then, the state feedback controller gain can be obtained as $\hat{K}=\bar{K}\bar{W}^{-1}$.

Proof Choose the following Lyapunov function candidate:

$$V(k)=s^{T}(k)\hat{P}s(k).$$

Letting $\bar{W}^{-1}=-\hat{P}$, and pre- and post-multiplying inequality (7.35) by $\mathrm{diag}\{\bar{W}^{-1},I\}$, it follows that

$$\begin{bmatrix}-(1+\alpha)\hat{P} & (A+B_u\hat{K})^{T}\\ * & -\hat{P}^{-1}\end{bmatrix} < 0.$$

By using the Schur complement lemma, the above inequality can be converted to

$$(A+B_u\hat{K})^{T}\hat{P}(A+B_u\hat{K})-(1+\alpha)\hat{P} < 0.$$

Pre-multiplying the above inequality by $s^{T}(k)$ and post-multiplying it by $s(k)$ yields:

$$\mathbb{E}\{V(s(k+1))\} < (1+\alpha)V(s(k)) < (1+\alpha)^{2}V(s(k-1)) < \cdots < (1+\alpha)^{k+1}V(s(0)) \le (1+\alpha)^{N}V(s(0)).$$

Combining conditions (7.33) and (7.34), we can get

$$\mathbb{E}\{s^{T}(k)Rs(k)\} \le (\alpha+1)^{N}c_1\frac{\lambda_3}{\lambda_4} < c_2,$$

which means the system (7.2) is finite-time stabilizable.



On the other hand, according to GKYP Lemma 7.1, condition (7.31) is equivalent to

$$\begin{pmatrix}M\\ I\end{pmatrix}^{T}\begin{pmatrix}\Phi\otimes P_l+\Psi_l\otimes Q_l & 0\\ 0 & \Pi\end{pmatrix}\begin{pmatrix}M\\ I\end{pmatrix} < 0 \qquad(7.36)$$

where

$$\Phi=\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix},\quad \Psi_l=\begin{pmatrix}-1 & 0\\ 0 & \vartheta_l^{2}\end{pmatrix},\quad \Pi=\begin{pmatrix}I & 0\\ 0 & -\gamma^{2}I\end{pmatrix},$$

$$M=\begin{pmatrix}A+B_u\hat{K} & B_w\\ C & D_w\end{pmatrix}=\begin{pmatrix}A & B_w\\ C & D_w\end{pmatrix}+\begin{pmatrix}B_u\\ 0\end{pmatrix}\hat{K}\begin{pmatrix}I & 0\end{pmatrix}=\mathcal{N}+\Gamma\hat{K}Z,$$

$$\mathcal{N}=\begin{pmatrix}A & B_w\\ C & D_w\end{pmatrix},\quad \Gamma=\begin{pmatrix}B_u\\ 0\end{pmatrix},\quad Z=\begin{pmatrix}I & 0\end{pmatrix},\quad Z^{+}=Z^{*}(ZZ^{*})^{-1}=\begin{pmatrix}I & 0\end{pmatrix}^{T}.$$

Denote $\Theta_l=\begin{pmatrix}\Phi\otimes P_l+\Psi_l\otimes Q_l & 0\\ 0 & \Pi\end{pmatrix}$. Performance index (7.24) can be rewritten as

$$\Theta_l < \mathrm{He}\left\{\begin{pmatrix}-I & 0 & 0\\ 0 & -I & 0\\ A & B_w & B_u\\ C & D_w & 0\end{pmatrix}\begin{pmatrix}\bar{W}R_l\\ V_l\\ \bar{K}R_l\end{pmatrix}\right\} < \mathrm{He}\left\{\begin{pmatrix}-Z^{+} & Z^{+}Z-I\\ \mathcal{N}Z^{+}+\Gamma\hat{K} & \mathcal{N}(I-Z^{+}Z)\end{pmatrix}\begin{pmatrix}\bar{W}R_l\\ V_l\end{pmatrix}\right\}. \qquad(7.37)$$

Letting

$$W=Z^{+}\bar{W}R_l+(I-Z^{+}Z)V_l,\quad \det(W)\ne 0,$$

inequality (7.37) is equivalent to:

$$\Theta_l < \mathrm{He}\left\{\begin{pmatrix}-W\\ \mathcal{N}W+\Gamma\hat{K}ZW\end{pmatrix}\right\}=\mathrm{He}\left\{\begin{pmatrix}-I\\ M\end{pmatrix}W\right\}. \qquad(7.38)$$

Through Lemma 7.2, Formula (7.36) can be deduced from inequality (7.38), which means that the low-frequency performance index (7.24) can be derived from condition (7.31). Similarly, the high-frequency performance index (7.25) can be deduced from condition (7.32). This completes the proof. □

7.4 Simulation Analysis

In this section, two examples are given to show the effectiveness and practical application of our developed theoretical results. The first numerical example is used to show the transient performance of the system in a given time interval and the multiple-frequency performance in the high-frequency and low-frequency bands. The second example focuses on the advantage of the derandomization method presented in Theorem 7.2.
Example 7.1 Consider system (5.1) with two operation modes and the following parameters:

A1 = [−0.3 0.9; 1.1 −1.5],  A2 = [0.2 −0.8; −0.3 1],  Bu1 = [0.2; 0.1],  Bu2 = [0; 0.1],
Bw1 = [0; 0.2],  Bw2 = [0; 0.1],  C1 = C2 = [0.5 0.4],  Dw1 = Dw2 = 0.1.

The transition probability matrix is as follows:

Π = [0.65 0.35; 0.45 0.55].
 T
Taking the initial state of the system x0 = 0.1 0.3 , letting
 c1 = 1, c2 = 2,
α
 = 1.5, N = 10, ϑ l = 20, ϑ h = 75, γ = 2, ρ = 1.5, R l = 0 0 I 0 , and Rh =
I 0 0 0 , the external noise as w = sin(k), and by using the results obtained in
Theorem 7.1, the controller gains to be solved are as follows:
   
K 1 = −5.8936 −6.2709 , K 2 = 2.5988 −16.0379 .

Similarly, applying the results derived in Theorem 7.2, the controller gains are obtained as:

K1 = [−4.8506 −2.8687],  K2 = [−9.5549 −12.6071].

Using the obtained controllers, Fig. 7.1a, b are obtained. It is obvious from Fig. 7.1a, b that the controlled system satisfies the finite-time stability condition. To compare the effectiveness of the method, the state trajectory of the open-loop system without control is given in Fig. 7.2, where the state trajectory exceeds the desired bound c2 = 2.
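The finite-time behavior illustrated by Fig. 7.1 can also be checked in simulation. Below is a minimal Python sketch (not from the book), assuming the feedback law u(k) = K_i x(k) with the Theorem 7.2 gains, the weighting R = I, and a mode chain sampled from the transition probability matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Example 7.1 data; the feedback law u(k) = K_i x(k) is an assumption here.
A  = [np.array([[-0.3, 0.9], [1.1, -1.5]]), np.array([[0.2, -0.8], [-0.3, 1.0]])]
Bu = [np.array([[0.2], [0.1]]),             np.array([[0.0], [0.1]])]
Bw = [np.array([[0.0], [0.2]]),             np.array([[0.0], [0.1]])]
K  = [np.array([[-4.8506, -2.8687]]),       np.array([[-9.5549, -12.6071]])]
TP = np.array([[0.65, 0.35], [0.45, 0.55]])  # transition probability matrix

N = 10
x = np.array([[0.1], [0.3]])
mode = 0
peak = float(x.T @ x)                        # x^T R x with R = I
for k in range(N):
    w = np.sin(k)                            # external noise w(k) = sin(k)
    x = (A[mode] + Bu[mode] @ K[mode]) @ x + Bw[mode] * w
    peak = max(peak, float(x.T @ x))
    mode = rng.choice(2, p=TP[mode])         # Markovian mode jump

print(peak)                                  # compare against c2 = 2
```

Whether the peak stays below c2 on a single run depends on the sampled mode sequence; averaging x^T(k)x(k) over many runs approximates the stochastic bound E{x^T(k)Rx(k)} ≤ c2.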
Meanwhile, in order to verify that the system meets the multiple-frequency performance indices, the amplitude-frequency characteristic curve of the closed-loop system is shown in Fig. 7.3. The solid line in Fig. 7.3 represents the amplitude-frequency characteristic curve of ‖Gzw(e^{jϑ})‖, and the dashed line represents the amplitude-frequency characteristic curve of ‖Guw(e^{jϑ})‖. The shaded part in blue shows the limits of γ and ρ in the low-frequency and high-frequency bands. It can be clearly seen in Fig. 7.3 that the performance indicator ‖Gzw(e^{jϑ})‖ < γ is satisfied in the low-frequency band and ‖Guw(e^{jϑ})‖ < ρ is satisfied in the high-frequency band, while ‖Gzw(e^{jϑ})‖ may exceed γ outside the low-frequency band and ‖Guw(e^{jϑ})‖ may exceed ρ outside the high-frequency band. That explains why the proposed method in this chapter can reduce the conservativeness of the controller design.

Fig. 7.1 a State trajectory of the closed-loop system (based on Theorem 7.1). b State trajectory of the closed-loop system (based on Theorem 7.2)

Fig. 7.2 State trajectory of the open-loop system
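The curves in Fig. 7.3 are pointwise evaluations of a transfer function magnitude |C(e^{jϑ}I − A_cl)^{−1}B + D|. The sketch below evaluates this for the mode-1 closed loop of Example 7.1 with the Theorem 7.2 gain; freezing a single mode in isolation is an illustrative simplification, not the book's exact derandomized construction:

```python
import numpy as np

# Mode-1 data of Example 7.1; treating this mode in isolation is an assumption.
A  = np.array([[-0.3, 0.9], [1.1, -1.5]])
Bu = np.array([[0.2], [0.1]])
Bw = np.array([[0.0], [0.2]])
C  = np.array([[0.5, 0.4]])
Dw = np.array([[0.1]])
K  = np.array([[-4.8506, -2.8687]])
Acl = A + Bu @ K                     # closed-loop state matrix

def gain_zw(theta):
    """Magnitude of G_zw(e^{j*theta}) = C (e^{j*theta} I - Acl)^{-1} Bw + Dw."""
    z = np.exp(1j * theta)
    return abs((C @ np.linalg.solve(z * np.eye(2) - Acl, Bw) + Dw).item())

thetas = np.linspace(0.0, np.pi, 200)
mags = np.array([gain_zw(t) for t in thetas])
print(round(mags.max(), 4), round(mags.min(), 4))
```

Plotting mags over thetas (e.g. with matplotlib) gives an amplitude-frequency curve of the kind shown in Fig. 7.3.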
To show the advantage of the derandomization-based result presented in Theorem 7.2, it is compared with the result in Theorem 7.1. The low-frequency performance ‖Gzw(e^{jϑ})‖ < γ, ∀|ϑ| ≤ ϑl is taken as an example, and the curve of ‖Gzw(e^{jϑ})‖ is drawn in Fig. 7.4, where the effect of the transition probability on the low-frequency performance is not considered. It can be seen from Fig. 7.4 that for the desired bound γ = 0.4, the performance ‖Gzw(e^{jϑ})‖ < 0.4 is not satisfied in the low-frequency band, which implies the limitation of the method proposed in Theorem 7.1.
The next example shows the practical application of the theoretical results presented in Theorem 7.2.

Example 7.2 We borrow an example from [15], which applies Markovian jump systems to a cart–spring system. The detailed model parameter description can be found in [15].

Fig. 7.3 Amplitude-frequency characteristic curve of the result in Theorem 7.2

Fig. 7.4 Amplitude-frequency characteristic curve of the result in Theorem 7.1


A1 = [0 0 1 0; 0 0 0 1; −1 1 0 0; 1 −1 0 0],  A2 = [0 0 1 0; 0 0 0 1; −2 2 0 0; 2 −2 0 0],
Bu1 = Bu2 = [0 0 0 1]^T,  Bw1 = Bw2 = [0 0 1 1]^T,
C1 = C2 = [0 1 0 0],  Dw1 = Dw2 = 1.

With a given sampling time, the continuous-time dynamic equation can be discretized, and the discrete-time model parameters are as follows:
A1 = [0.5780 0.4220 0.8492 0.1508; 0.4220 0.5780 0.1508 0.8492;
      −0.6985 0.6985 0.5780 0.4220; 0.6985 −0.6985 0.4220 0.5780],

A2 = [0.2919 0.7081 0.7273 0.2727; 0.7081 0.2919 0.2727 0.7273;
      −0.9093 0.9093 0.2919 0.7081; 0.9093 −0.9093 0.7081 0.2919],

Bu1 = [0.0390 0.4610 0.1508 0.8492]^T,
Bu2 = [0.0730 0.4270 0.2727 0.7273]^T.

The corresponding transition probability matrix is as follows:

Π = [0.65 0.35; 0.45 0.55].
Taking the initial state of the system x0 = [0.5 0.2 −0.3 −0.2]^T, letting c1 = 1, c2 = 4, α = 0.2, N = 10, ϑl = 20, ϑh = 60, γ = 0.9, ρ = 1.5, Rl = [0 0 I 0], Rh = [I 0 0 0], and w = sin(k), and by using the results obtained from Theorems 7.1 and 7.2, the controller gains are solved as follows, respectively:

K1 = [0.0265 −0.5976 −0.3268 −1.4221],
K2 = [−0.4484 −0.1657 −0.4495 −1.3464],

K1 = [−0.8546 0.7694 0.4123 −0.4082],
K2 = [0.5166 −1.3152 −0.5694 −1.3648].

Fig. 7.5 State trajectory of the open-loop system

Figures 7.5 and 7.6 show the state trajectories of the open-loop and closed-loop
systems. From these figures, we can see that the original unstable system has a
bounded state under the control of the designed controller.
Meanwhile, the amplitude-frequency characteristic curve of the result in Theorem 7.2 is shown in Fig. 7.7. It can be seen from Fig. 7.7 that the performance indicator ‖Gzw(e^{jϑ})‖ is satisfied in the low-frequency band and ‖Guw(e^{jϑ})‖ is satisfied in the high-frequency band. However, if the result in Theorem 7.1 is applied to the cart–spring system, it does not meet the desired multiple-frequency performance, as shown in Fig. 7.8.

7.5 Conclusion

In this chapter, combined with finite-time stability, the finite-frequency theory has been generalized to stochastic discrete-time multimodal jump systems. Considering that it is difficult to obtain the frequency-domain energy index represented by the transfer function of a stochastic multimodal jump system, the derandomization method has been used to transform the original stochastic multimodal systems into deterministic single-mode ones. By introducing the information of the transition probability into the controller design, the performance of the obtained results has been improved. The application of the proposed theoretical results to the cart–spring system not only

Fig. 7.6 a State trajectory of the MJLS system (based on Theorem 7.1). b State trajectory of the
derandomization system (based on Theorem 7.2)

Fig. 7.7 Amplitude-frequency characteristic curve of the result in Theorem 7.2

Fig. 7.8 Amplitude-frequency characteristic curve of the result in Theorem 7.1



shows the effectiveness and validity of the results, but also their practical application value. The next chapter will concern not only the transient behavior of discrete-time MJSs in the finite-time domain but also the consistent state behavior of each subsystem.

References

1. Zhou, K., Doyle, J.C., Glover, K.: Robust and Optimal Control. Prentice Hall Publishing, New
Jersey (1996)
2. Iwasaki, T., Meinsma, G., Fu, M.: Generalized S-procedure and finite frequency KYP lemma.
Math. Probl. Eng. 6(2–3), 305–320 (2000)
3. Iwasaki, T., Hara, S.: Generalized KYP lemma: unified frequency domain inequalities with
design applications. IEEE Trans. Autom. Control 50(1), 41–59 (2005)
4. Hara, S., Iwasaki, T., Shiokata, D.: Robust PID control using generalized KYP synthesis: direct
open-loop shaping in multiple frequency ranges. IEEE Control Syst. Mag. 26(1), 80–91 (2006)
5. Wan, H.Y., Luan, X.L., Karimi, H.R., Liu, F.: Higher-order moment filtering for Markovian
jump systems in finite frequency domain. IEEE Trans. Circ. Syst-II 66(7), 1217–1221 (2019)
6. Zhou, Z.H., Luan, X.L., Liu, F.: Finite-frequency fault detection based on derandomisation for
Markovian jump linear system. IET Control Theor. Appl. 12(08), 1148–1155 (2018)
7. Iwasaki, T., Hara, S., Yamauchi, H.: Dynamical system design from a control perspective: finite
frequency positive-realness approach. IEEE Trans. Autom. Control 48(8), 1337–1354 (2003)
8. Iwasaki, T., Hara, S.: Dynamic output feedback synthesis with general frequency domain
specifications. IFAC Proc. Volumes 38(1), 345–350 (2005)
9. Mei, P., Fu, J., Liu, Y.: Finite frequency filtering for time-delayed singularly perturbed systems.
Math. Probl. Eng. 4, 1–7 (2015)
10. Ding, D.W., Yang, G.H.: Fuzzy filter design for nonlinear systems in finite-frequency domain.
IEEE Trans. Fuzzy Syst. 18(5), 935–945 (2010)
11. Wang, H., Ju, H., Wang, Y.L.: Finite frequency H∞ filtering for switching LPV systems. In:
Proceedings of the 24th Chinese Control and Decision Conference, pp. 4008–4013, Taiyuan,
China (2012)
12. Luan, X.L., Zhou, C.Z., Ding, Z.T., Liu, F.: Stochastic consensus control with finite frequency
specification for Markovian jump networks. Nonlinear Control 13(2), 1833–1838 (2015)
13. Luan, X.L., Shi, P., Liu, F.: Given-time multiple frequency control for Markovian jump systems
based on derandomization. Inf. Sci. 451, 134–142 (2018)
14. Skelton, R.E., Iwasaki, T., Grigoriadis, K.M.: A Unified Algebraic Approach to Control Design.
CRC Press Publishing, Los Angeles (1997)
15. Iwasaki, T., Hara, S.: Robust control synthesis with general frequency domain specifications:
static gain feedback case. In: Proceedings of the 2004 American Control Conference, vol. 5,
pp. 4613–4618, Boston, MA, USA (2004)
Chapter 8
Stochastic Finite-Time Consensualization
for Markovian Jump Networks
with Disturbances

Abstract The finite-time consensus protocol design approach is investigated for discrete-time network-connected systems with random Markovian jump topologies, communication delays, and external disturbances. Relaxing the requirement that the disagreement dynamics converge to zero asymptotically, the finite-time consensus protocol is employed to ensure that the disagreement dynamics of the interconnected networks remain confined within a prescribed bound over a fixed time interval. By taking advantage of certain features of the Laplacian matrix in real Jordan form, a new model transformation method is proposed, which makes the designed control protocol more general.

8.1 Introduction

With the rapid development of computer technology, network technology, and com-
munication technology, the research on the consensus of network-connected systems
has become a popular topic in the field of control [1, 2]. Taking the industrial heating furnace with multiple passes as an example, it is a significant piece of equipment in petrochemical processes. The outlet temperature of the furnace directly impacts the recovery efficiency, the stability of subsequent production, and the product quality. Ideally, the process operates with a consistent temperature across all passes. However, fluctuations of the inlet flow pressure and feed composition may cause differences in outlet temperature among the passes [3, 4]. Such temperature variations could result in unstable operation and even equipment failure caused by material coking in the pipeline. Hence, it is really important for the furnace's outlet temperatures to be consistent.
Therefore, the consensus problem of network-connected dynamic systems has
attracted extensive attention from numerous researchers in mathematics, control,
and system science. Carding the existing literature research on consensus of dynamic
systems, it includes the following three aspects: (1) dynamic systems in a network,
from the first-order integrator model to linear system, nonlinear system, singular
system, and so on [5, 6]; (2) network topology, from undirected to directed, from
fixed topology to switching topology, including time-varying topology, stochastic

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 151
X. Luan et al., Robust Control for Discrete-Time Markovian Jump Systems in the
Finite-Time Domain, Lecture Notes in Control and Information Sciences 492,
https://doi.org/10.1007/978-3-031-22182-8_8

topology, etc. [7, 8]; (3) interference degree of network connection, from instant
messaging to communication delay, data packet loss, random interference, etc. [9,
10].
However, all the above research results require that the disagreement states of the system converge to zero asymptotically over an infinite time horizon. As we have emphasized in the previous chapters, people are often more interested in whether the systems can meet transient requirements within a limited short time. Still taking the heating furnace as an example, the target is to keep the temperature difference between passes from exceeding a given limit within a certain period of time. Although the control, filtering, and optimization problems based on the finite-time theory have been widely studied, it is a new and challenging issue worthy of attention for systems connected together through a network. Of course, due to limited bandwidth, transmission time-delay, topology connection uncertainty, and other factors, it is even more meaningful to further consider the finite-time consensus protocol for network-connected systems in such complex cases. Therefore, the finite-time consensus protocol design problem for discrete-time network-connected systems with random Markovian jump topologies is addressed in this chapter. If the state variable is measurable, how can we guarantee that the disagreement dynamics do not exceed the desired bound in the fixed time interval? Simultaneously, if the state variable is not available, how can the control protocol be designed to achieve the same target? This chapter will answer these two questions.

8.2 Preliminaries and Problem Formulation

Consider the following discrete-time dynamics with S subsystems:

xr (k + 1) = Axr (k) + Bu u r (k) + Bw wr (k) (8.1)

for r = 1, ..., S and k = 1, ..., N, where xr(k) ∈ R^n is the vector of state variables of subsystem r, ur(k) ∈ R^m is the controlled input of subsystem r, and wr(k) ∈ l2^q[0, +∞) is the exogenous disturbance with bounded energy ‖wr(k)‖² = Σ_{k=0}^{N} E{wr^T(k) wr(k)} < d²; A, Bu, and Bw are known constant matrices with appropriate dimensions.
The connections between different subsystems can be represented by a directed graph G(rk), where the network connection topology rk is stochastically jumping, and the jumping process rk is governed by a Markovian chain taking values in a finite set M = {1, 2, ..., i, ..., M} with transition probability matrix

Π = [πij],   πij = Pr(rk = j | rk−1 = i),   πij ∈ (0, 1),   Σ_{j=1}^{M} πij = 1.

The connection can be represented by an adjacency matrix Q(rk ) = {qr h (rk )}.
qr h (rk ) = 1 indicates that there is a connection between subsystems r and h. Other-
wise, qr h (rk ) = 0.
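The random jumps of the topology G(rk) can be simulated by sampling rk row by row from the transition probability matrix. A minimal sketch (the two-mode matrix Π below is the one used later in the simulation example of Sect. 8.5; any row-stochastic matrix works the same way):

```python
import numpy as np

def sample_chain(Pi, r0, N, rng):
    """Sample a Markov mode sequence r_0, ..., r_N from transition matrix Pi."""
    modes = [r0]
    for _ in range(N):
        # Pr(r_k = j | r_{k-1} = i) is row i of Pi
        modes.append(rng.choice(len(Pi), p=Pi[modes[-1]]))
    return np.array(modes)

Pi = np.array([[0.3, 0.7],
               [0.4, 0.6]])
assert np.allclose(Pi.sum(axis=1), 1.0)      # rows of a transition matrix sum to 1

modes = sample_chain(Pi, 0, 50, np.random.default_rng(1))
print(modes[:10])
```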
The Laplacian matrix L(rk) = {lrh(rk)} is defined as

lrh(rk) = { −qrh(rk),                       r ≠ h
          { Σ_{h=1, h≠r}^{S} qrh(rk),       r = h.        (8.2)

For a directed graph G(rk), zero is an eigenvalue of the Laplacian matrix L(rk) with 1 = [1 1 ··· 1]^T as the corresponding right eigenvector. All the non-zero eigenvalues have positive real parts. Moreover, zero is a simple eigenvalue of L(rk) if and only if the graph G(rk) contains a directed spanning tree. For the convenience of the controller design, the eigenvalues of the Laplacian matrix are assumed to be distinct [11].
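For a zero-diagonal adjacency matrix, definition (8.2) reduces to L = diag(row degrees) − Q, so every row of L sums to zero and the all-ones vector is a right eigenvector for the eigenvalue 0. A quick numerical check (using the adjacency matrix Q1 that appears later in Sect. 8.5):

```python
import numpy as np

def laplacian(Q):
    """Laplacian per (8.2), assuming the adjacency matrix has zero diagonal."""
    Q = np.asarray(Q, dtype=float)
    return np.diag(Q.sum(axis=1)) - Q

Q1 = np.array([[0, 0, 1, 0],
               [0, 0, 1, 0],
               [0, 1, 0, 0],
               [0, 1, 0, 0]])
L1 = laplacian(Q1)

print(L1 @ np.ones(4))                 # the all-ones vector is annihilated
eigs = np.sort(np.linalg.eigvals(L1).real)
print(np.round(eigs, 6))               # a single zero eigenvalue, the rest positive
```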
Our target is to design a control protocol to make the disagreement dynamics keep
within the desired bound:

|xr (k) − x h (k)| < c, r, h ∈ {1, 2, . . . , S} , k ∈ {1, 2, . . . , N } . (8.3)

To this end, the state disagreement variable is introduced as follows:

zr(k) = xr(k) − Σ_{h=1}^{S} κh(rk) xh(k)                      (8.4)

where κ(rk) = [κ1(rk), ..., κS(rk)]^T is the left eigenvector of the jumping Laplacian matrix L(rk) associated with the eigenvalue 0 and satisfies κ(rk)^T 1 = 1. Denoting z(k) = [z1^T(k), ..., zr^T(k), ..., zS^T(k)]^T, equality (8.4) can be rewritten as follows:

z(k) = x(k) − (1 κ(rk)^T ⊗ IS) x(k)
     = (M(rk) ⊗ IS) x(k)                                      (8.5)

where M(rk) = IS − 1 κ(rk)^T, x(k) = [x1^T(k), ..., xr^T(k), ..., xS^T(k)]^T.
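Numerically, κ(rk) can be obtained as the left eigenvector of L(rk) for the eigenvalue 0, normalized so its entries sum to one; M(rk) = I − 1κ(rk)^T then maps any consensus state to zero. A sketch using the Laplacian L1 from the Sect. 8.5 example:

```python
import numpy as np

L1 = np.array([[1, 0, -1, 0],
               [0, 1, -1, 0],
               [0, -1, 1, 0],
               [0, -1, 0, 1]], dtype=float)

# Left eigenvector for eigenvalue 0: kappa @ L1 = 0, normalized so kappa.sum() = 1
w, V = np.linalg.eig(L1.T)
kappa = V[:, np.argmin(np.abs(w))].real
kappa = kappa / kappa.sum()

M = np.eye(4) - np.outer(np.ones(4), kappa)   # M(r_k) = I - 1 kappa^T

print(np.round(kappa, 4))
print(np.allclose(M @ np.ones(4), 0))         # consensus direction is annihilated
```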
To make sure the disagreement trajectory remains confined within the prescribed
bound in the fixed time interval, the following definition is necessary.

Definition 8.1 [12]. For a given time-constant N > 0, the discrete-time network-
connected dynamic system (8.1) (setting u r (k) = 0, wr (k) = 0) is said to be finite-
time consensus with respect to (c1 c2 N R), where c1 < c2 , R > 0, if
   
E z T (0)Rz(0) ≤ c1 ⇒ E z T (k)Rz(k) ≤ c2 , ∀k ∈ {1, 2, . . . , N } (8.6)

Furthermore, to eliminate the influence of external disturbances on the finite-time consistency, the consensus protocol is designed such that the disagreement trajectory not only satisfies Eq. (8.5), but also satisfies the following condition:

Σ_{k=0}^{N} E{z^T(k) z(k)} ≤ γ² Σ_{k=0}^{N} E{w^T(k) w(k)}     (8.7)

where w(k) = [w1^T(k), ..., wr^T(k), ..., wS^T(k)]^T.

Lemma 8.1 [1] For a Laplacian matrix with distinct eigenvalues, there exists a similarity transformation F(rk) = [F1(rk) 1] with F(rk)^{−1} = [F2(rk)^T κ(rk)]^T such that

F(rk)^{−1} L(rk) F(rk) = J(rk)                                (8.8)

where J(rk) = diag{J1(rk), 0} = diag{λ1(rk), ..., λ(S−1)(rk), 0}, and λr(rk) are the eigenvalues of the Laplacian matrix L(rk).

8.3 Finite-Time Consensualization with State Feedback

Design the following state feedback controller for the system (8.1):

ur = K(rk) Σ_{h=1}^{S} qrh(rk) (xh − xr)                      (8.9)

where K(rk) is the controller gain to be designed for each mode i ∈ M. According to the relationship between Q(rk) and L(rk), the controller can be rewritten as:

ur = −K(rk) Σ_{h=1}^{S} lrh(rk) xh.                           (8.10)

Substituting controller (8.10) into system (8.1) yields the following closed-loop
system:

x(k + 1) = (I S ⊗ A − L(rk ) ⊗ Bu K (rk )) x(k) + (I S ⊗ Bw ) w(k) (8.11)

where ⊗ is the Kronecker product of matrices.
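The Kronecker-product form of (8.11) is convenient to assemble numerically. A sketch using the system data of the Sect. 8.5 example (pairing gain K1 with Laplacian L1 here is only for illustration):

```python
import numpy as np

S = 4                                        # number of subsystems
A  = np.array([[-1.48, -1.96], [1.57, 1.95]])
Bu = np.array([[1.0], [0.5]])
K  = np.array([[0.1108, 0.1198]])
L  = np.array([[1, 0, -1, 0],
               [0, 1, -1, 0],
               [0, -1, 1, 0],
               [0, -1, 0, 1]], dtype=float)

# Closed-loop matrix of (8.11): I_S (x) A - L (x) Bu K
Acl = np.kron(np.eye(S), A) - np.kron(L, Bu @ K)
print(Acl.shape)
```

Each 2x2 block (r, h) of Acl equals A on the diagonal minus l_rh Bu K, mirroring the coupling pattern of the graph.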



Combining Eqs. (8.5) and (8.11) yields

z(k + 1) = (IS ⊗ A − L(rk) ⊗ Bu K(rk)) z(k) + (M(rk) ⊗ Bw) w(k).    (8.12)

Letting ξ(k) = (F(rk)^{−1} ⊗ IS) z(k), the system (8.12) is rewritten as

ξ(k + 1) = (IS ⊗ A − J(rk) ⊗ Bu K(rk)) ξ(k) + (F(rk)^{−1} M(rk) ⊗ Bw) w(k).   (8.13)

According to the definition of ξ(k), if ξ(k) satisfies the condition of finite-time stability, then the disagreement trajectory will be confined within the prescribed bound in the fixed time interval. For simplicity of notation, for rk = i, the matrices J(rk), K(rk), F(rk), and M(rk) are denoted as Ji, Ki, Fi, and Mi.
Theorem 8.1 For given γ and α ≥ 1, the discrete-time network-connected dynamic system (8.1) is finite-time consensus with respect to (c1 c2 N R d) and meets the performance index (8.7), if there exist mode-dependent symmetric matrices Xi > 0 and matrices Yi satisfying the following conditions:

[diag{−αXi}, 0, L1i^T; ∗, −γ²I, Bwtri^T; ∗, ∗, diag{−Xi}] < 0      (8.14)

σ1 R^{−1} < Xi < R^{−1}                                            (8.15)

[γ²d² − σ1 c2 α^{−N}, √c1; √c1, −σ2] < 0                           (8.16)

where

L1i^T = [√πi1 (A Xi − λri Bu Yi)^T  ···  √πiM (A Xi − λri Bu Yi)^T],
Bwtri^T = [√πi1 ((Fi^{−1} Mi)_r ⊗ Bw)^T  ···  √πiM ((Fi^{−1} Mi)_r ⊗ Bw)^T].

Then the state feedback controller gain can be obtained as Ki = Yi Xi^{−1}.
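The passage from the quadratic inequality (8.20) in the proof below to the linear condition (8.14) relies on the Schur complement together with a congruence transformation. As a small, self-contained numerical reminder of the Schur complement equivalence (the test matrices below are assumed, not taken from the theorem):

```python
import numpy as np

def is_neg_def(M):
    """Negative definiteness test for a symmetric matrix."""
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

# Schur complement: [[Q, S], [S.T, R]] < 0  iff  R < 0 and Q - S R^{-1} S.T < 0
Q = np.array([[-4.0, 1.0], [1.0, -3.0]])
S = np.array([[0.5], [0.2]])
R = np.array([[-2.0]])

block = np.block([[Q, S], [S.T, R]])
lhs = is_neg_def(block)
rhs = is_neg_def(R) and is_neg_def(Q - S @ np.linalg.inv(R) @ S.T)
print(lhs, rhs)
```

The same identity is what lets the product terms in (8.20) be pulled out into the off-diagonal blocks L1i and Bwtri of (8.14).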

Proof Define the following mode-dependent stochastic Lyapunov function:

V(ξr(k), rk = i) = ξr^T(k) Pi ξr(k).                          (8.17)

According to condition (8.13), we have

ξr(k + 1) = (A − λri Bu Ki) ξr(k) + ((Fi^{−1} Mi)_r ⊗ Bw) w(k).   (8.18)

Simple calculation shows that

V(ξr(k+1), rk = i) − α V(ξr(k), rk = i)
= ξr^T(k) [(A − λri Bu Ki)^T Σ_{j=1}^{M} πij Pj (A − λri Bu Ki) − α Pi] ξr(k)
+ He{ ξr^T(k) (A − λri Bu Ki)^T Σ_{j=1}^{M} πij Pj ((Fi^{−1} Mi)_r ⊗ Bw) w(k) }
+ w^T(k) ((Fi^{−1} Mi)_r ⊗ Bw)^T Σ_{j=1}^{M} πij Pj ((Fi^{−1} Mi)_r ⊗ Bw) w(k).   (8.19)

If the following inequality holds:

[ (A − λri Bu Ki)^T Σ_{j=1}^{M} πij Pj (A − λri Bu Ki) − α Pi,  (A − λri Bu Ki)^T Σ_{j=1}^{M} πij Pj Bwtri;
  ∗,                                                            Bwtri^T Σ_{j=1}^{M} πij Pj Bwtri − γ² I ] < 0.   (8.20)

Then we have

V(ξr(k+1), rk = i) − α V(ξr(k), rk = i) < γ² w^T(k) w(k).      (8.21)

According to the relationship between ξr(k) and ξ(k), performing a congruence transformation on the above condition by diag{Pi^{−1}, I}, using the Schur complement lemma, and letting Xi = Pi^{−1} and Yi = Ki Xi, inequality (8.20) leads to inequality (8.14), which means that condition (8.14) ensures that the following inequality holds:

V(ξ(k+1), rk = i) < α V(ξ(k), rk = i) + γ² w^T(k) w(k).        (8.22)

Iterating the inequality (8.22) from time 0 to k yields:

V(ξ(k), rk = i) < α V(ξ(k − 1), rk = i) + γ² w^T(k − 1) w(k − 1)
  < α² V(ξ(k − 2), rk = i) + γ² (α w^T(k − 2) w(k − 2) + w^T(k − 1) w(k − 1))
  < ···
  < α^k V(ξ(0), rk = i) + γ² Σ_{l=1}^{k} α^{l−1} w^T(k − l) w(k − l)
  < α^N V(ξ(0), rk = i) + γ² d².                                (8.23)

That is,

ξ^T(k) Pi ξ(k) < α^N ξ^T(0) Pi ξ(0) + γ² d².                   (8.24)

Letting P̃i = R^{−1/2} Ti^{−T} Pi Ti^{−1} R^{−1/2}, the above inequality can be converted into

z^T(k) R^{1/2} P̃i R^{1/2} z(k) < α^N z^T(0) R^{1/2} P̃i R^{1/2} z(0) + γ² d².   (8.25)

Denoting σ1 = λmin(P̃i) and σ2 = λmax(P̃i), inequality (8.25) is rewritten as

σ1 z^T(k) R^{1/2} R^{1/2} z(k) < α^N σ2 z^T(0) R^{1/2} R^{1/2} z(0) + γ² d².    (8.26)

By combining Eqs. (8.15) and (8.16), it can be obtained that

(σ2 c1 + γ² d²) / (σ1 c2) < α^{−N},                             (8.27)

which means the discrete-time network-connected dynamic system (8.1) achieves finite-time consensus. This completes the proof.

8.4 Finite-Time Consensualization with Output Feedback

In the situation that the state is not accessible, the following dynamic output feedback controller should be designed:

vr(k+1) = Ãi vr(k) + B̃i Σ_{h=1}^{S} qrh(rk) (yr − yh)
ur(k) = C̃i vr(k) + D̃i Σ_{h=1}^{S} qrh(rk) (yr − yh)            (8.28)

where yr(k) = C xr(k) + Dw wr(k) is the output of the system (8.1). According to the relationship between Q(rk) and L(rk), we have

Σ_{h=1}^{S} qrh(rk) (yr − yh) = Σ_{h=1}^{S} lrh(rk) yh.         (8.29)

Then, substituting the controller (8.28) into system (8.1) gives
xr(k+1) = A xr(k) + Bu [C̃i vr(k) + D̃i Σ_{h=1}^{S} lrh(rk) (C xh(k) + Dw wh(k))] + Bw wr(k)
vr(k+1) = Ãi vr(k) + B̃i Σ_{h=1}^{S} lrh(rk) (C xh(k) + Dw wh(k))
zr(k) = xr(k) − Σ_{h=1}^{S} κh(rk) xh(k).                       (8.30)
Performing the matrix calculation on Eq. (8.30) gives:

x(k+1) = (IS ⊗ A + Li ⊗ Bu D̃i C) x(k) + (IS ⊗ Bu C̃i) v(k) + (IS ⊗ Bw + Li ⊗ Bu D̃i Dw) w(k)
v(k+1) = (Li ⊗ B̃i C) x(k) + (IS ⊗ Ãi) v(k) + (Li ⊗ B̃i Dw) w(k)
z(k) = (Mi ⊗ IS) x(k)

where v(k) = [v1^T(k), ..., vr^T(k), ..., vS^T(k)]^T.
According to the relationship between ξ(k) and z(k), and combining with the Formula (8.4), the above equation can be rewritten as

[ξ(k+1); v(k+1)] = [IS ⊗ A + Ji ⊗ Bu D̃i C, IS ⊗ Bu C̃i; Ji ⊗ B̃i C, IS ⊗ Ãi] [ξ(k); v(k)]
                 + [IS ⊗ Bw + Ji ⊗ Bu D̃i Dw; Ji ⊗ B̃i Dw] w(k).     (8.31)

Letting ε(k) = [ξ^T(k) v^T(k)]^T, Eq. (8.31) is equivalent to

ε(k+1) = Āi ε(k) + B̄i w(k)                                     (8.32)

where

Āi = [IS ⊗ A + Ji ⊗ Bu D̃i C, IS ⊗ Bu C̃i; Ji ⊗ B̃i C, IS ⊗ Ãi],
B̄i = [IS ⊗ Bw + Ji ⊗ Bu D̃i Dw; Ji ⊗ B̃i Dw].

If ε(k) satisfies the condition of finite-time stability, then the disagreement trajectory z(k) will be confined within the prescribed bound in the fixed time interval, which means the network-connected system (8.1) achieves finite-time consensus.
We shall design the dynamic output controller (8.28) with controller gains

K̃i = [Ãi, B̃i; C̃i, D̃i]

to make sure that the system (8.1) is finite-time consensus with H∞ performance.

Theorem 8.2 For given γ and α ≥ 1, the discrete-time network-connected dynamic system (8.1) is finite-time consensus with respect to (c1 c2 N R) and meets the performance index (8.7), if there exist mode-dependent symmetric matrices Pi > 0, mode-dependent symmetric matrices Xi > 0, and matrices K̃i satisfying the following conditions:

[diag{−αPi + Ī}, 0, Θi1^T; ∗, −γ²I, Θi2^T; ∗, ∗, diag{−Pi}] < 0     (8.33)

where

Θi1^T = [√πi1 (Λ1 + Λ2 K̃i Λi3)^T P1  ···  √πiM (Λ1 + Λ2 K̃i Λi3)^T PM],
Θi2^T = [√πi1 (Λ4 + Λ2 K̃i Λi5)^T P1  ···  √πiM (Λ4 + Λ2 K̃i Λi5)^T PM],
Λ1 = [A 0; 0 0],  Λ2 = [0 Bu; I 0],  Λi3 = [0 I; λri C 0],
Λ4 = [Bw; 0],  Λi5 = [0; λri Dw],  Ī = [I 0; 0 0].

Proof Choose the following Lyapunov function candidate:

V(εr(k), rk = i) = εr^T(k) Pi εr(k).

Simple calculation shows that

V(εr(k+1), rk = i) − α V(εr(k), rk = i)
= εr^T(k) [Āi^T Σ_{j=1}^{M} πij Pj Āi − α Pi] εr(k)
+ He{ εr^T(k) Āi^T Σ_{j=1}^{M} πij Pj B̄i w(k) }
+ w^T(k) B̄i^T Σ_{j=1}^{M} πij Pj B̄i w(k).

Then, following a similar proof to that of Theorem 8.1, inequality (8.33) can be derived.

It should be noted that the derived condition in Theorem 8.2 is not a strict linear matrix inequality (LMI). Therefore, the nonconvex condition should be converted to an LMI by using the algorithm proposed in [13].

8.5 Simulation Analysis

In this section, we will use the following example to verify the effectiveness of our developed theoretical results. Consider system (8.1) with four subsystems and the following parameters:

A = [−1.48 −1.96; 1.57 1.95],  Bu = [1 0.5]^T,  Bw = [0.1 0.3]^T.

The interconnection topology jumps between G1 and G2 with adjacency matrices described as:

Q1 = [0 0 1 0; 0 0 1 0; 0 1 0 0; 0 1 0 0],  Q2 = [0 1 0 0; 0 0 0 1; 1 1 0 1; 0 1 0 0].

The transition probability matrix between G1 and G2 is

Π = [0.3 0.7; 0.4 0.6].

Then, the Laplacian matrices are calculated as

L1 = [1 0 −1 0; 0 1 −1 0; 0 −1 1 0; 0 −1 0 1],  L2 = [1 −1 0 0; 0 1 0 −1; −1 −1 3 −1; 0 −1 0 1].

The eigenvalues of Li for i = 1, 2 can be easily obtained as

J1 = diag{0, 1, 1, 2},  J2 = diag{0, 1, 2, 3},

with the resultant transformation matrices

F1 = [0.5 1 0 −0.5; 0.5 0 0 −0.5; 0.5 0 0 0.5; 0.5 0 1 0.5],
F2 = [0.5 1 −0.5 0; 0.5 0 0.5 0; 0.5 0 −0.5 0.5; 0.5 0 −0.5 0].
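The similarity transformation of Lemma 8.1 can be verified numerically for mode 1:

```python
import numpy as np

L1 = np.array([[1, 0, -1, 0],
               [0, 1, -1, 0],
               [0, -1, 1, 0],
               [0, -1, 0, 1]], dtype=float)
F1 = np.array([[0.5, 1, 0, -0.5],
               [0.5, 0, 0, -0.5],
               [0.5, 0, 0, 0.5],
               [0.5, 0, 1, 0.5]])

J1 = np.linalg.solve(F1, L1 @ F1)     # F1^{-1} L1 F1, cf. Eq. (8.8)
print(np.round(J1, 6))
```

The result is diag{0, 1, 1, 2}, matching the eigenvalue matrix J1 above (here the zero eigenvalue sits in the first diagonal slot, following the ordering of the printed F1).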
Taking the initial state of the system x0 = [0.5 −0.6]^T, letting c1 = 1, c2 = 6, α = 4, N = 20, the external noise as w1(k) = 2 sin(k), w2(k) = −sin(k), w3(k) = 3 sin(k), and w4(k) = −1.5 sin(k), and by using the results obtained from Theorem 8.1, the controller gains are solved as:

K1 = [0.1108 0.1198],  K2 = [−0.0053 0.0389].

Fig. 8.1 State trajectory with time variation

Using the obtained controllers, Figs. 8.1 and 8.2 show the disagreement trajectory of the controlled system from different perspectives. It can be seen that the state disagreement stays within the specified bound c2 = 6 over the given time horizon N = 20 with the designed controller, which means that the designed finite-time controllers can achieve the consensus of network-connected systems in spite of the communication delays and external disturbances.
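The check illustrated by Figs. 8.1 and 8.2 can be reproduced in simulation. Below is a sketch (its assumptions: every subsystem starts from x0, the mode chain is sampled from Π, and the controller (8.10) is applied with the gains above); the maximum pairwise state deviation per (8.3) is tracked:

```python
import numpy as np

rng = np.random.default_rng(2)
A  = np.array([[-1.48, -1.96], [1.57, 1.95]])
Bu = np.array([[1.0], [0.5]])
Bw = np.array([[0.1], [0.3]])
K  = [np.array([[0.1108, 0.1198]]), np.array([[-0.0053, 0.0389]])]
Ls = [np.array([[1, 0, -1, 0], [0, 1, -1, 0], [0, -1, 1, 0], [0, -1, 0, 1]], float),
      np.array([[1, -1, 0, 0], [0, 1, 0, -1], [-1, -1, 3, -1], [0, -1, 0, 1]], float)]
Pi = np.array([[0.3, 0.7], [0.4, 0.6]])

S, n, N = 4, 2, 20
x = np.tile([0.5, -0.6], S)                  # all subsystems start from x0
amp = [2.0, -1.0, 3.0, -1.5]                 # disturbance amplitudes of w_r(k)
mode, peak = 0, 0.0
for k in range(N):
    w = np.array(amp) * np.sin(k)
    Acl = np.kron(np.eye(S), A) - np.kron(Ls[mode], Bu @ K[mode])
    x = Acl @ x + np.kron(np.eye(S), Bw) @ w
    X = x.reshape(S, n)
    peak = max(peak, float(np.abs(X[:, None, :] - X[None, :, :]).max()))
    mode = rng.choice(2, p=Pi[mode])

print(peak)                                  # compare against the bound c2 = 6
```

A single run depends on the sampled mode sequence; repeating over many mode sequences approximates the expectation in Definition 8.1.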

8.6 Conclusion

In this chapter, the finite-time consensus controller design problem has been addressed for a class of discrete-time network-connected systems with stochastic jumping topologies. The state feedback controller and the dynamic output feedback controller are designed to ensure that the disagreement trajectory of the interconnected networks remains confined within the prescribed bound in the fixed time interval rather


Fig. 8.2 Disagreement trajectory of the controlled system

than converging to zero asymptotically. The next chapter will consider the higher-order moment finite-time stabilization problem to ensure that not only the mean and variance of the states remain within the desired range in the fixed time interval, but also the higher-order moments of the states are limited to the given bound.

References

1. Ding, Z.: Consensus output regulation of a class of heterogeneous nonlinear systems. IEEE
Trans. Autom. Control 58, 2648–2653 (2013)
2. Dong, W.J., Farrell, J.A.: Cooperative control of multiple nonholonomic mobile agents. IEEE
Trans. Autom. Control 53(6), 262–268 (2009)
3. Luan, X.L., Min, Y., Albertos, P., Liu, F.: Feed furnace temperature control based on the
distributed deviations. Ind. Eng. Chem. Res. 20, 6035–6042 (2017)
4. Wang, X.X., Zheng, D.Z.: Load balancing control of furnace with multiple parallel passes.
Control Eng. Pract. 15(5), 521–531 (2007)
5. Ding, Z.T.: Consensus control of a class of Lipschitz nonlinear systems. Int. J. Control 87(11),
2372–2382 (2014)
6. Wang, C.Y., Zuo, Z.Y., Lin, Z.L.: Consensus control of a class of Lipschitz nonlinear systems
with input delay. IEEE Trans. Circ. Syst.-I 62(11), 2730–2738 (2015)

7. Luan, X.L., Zhou, C.Z., Ding, Z.T., Liu, F.: Stochastic consensus control with finite frequency
specification for Markovian jump networks. Nonlinear Control 13(2), 1833–1838 (2015)
8. Saboori, I., Khorasani, K.: Consensus achievement of multiagent systems with directed and
switching topology networks. IEEE Trans. Autom. Control 59(11), 3104–3109 (2014)
9. You, K.Y., Li, Z.K., Xie, L.H.: Consensus condition for linear multi-agent systems over ran-
domly switching topologies. Automatica 49(10), 3125–3132 (2013)
10. Zeng, L., Hu, G.D.: Consensus of linear multi-agent systems with communication and input
delays. Acta Autom. Sin. 39(7), 1133–1140 (2013)
11. Cai, N., Cao, J.W., Khan, M.J.: Almost decouplability of any directed weighted network topol-
ogy. Phys. A 436, 637–645 (2015)
12. Luan, X.L., Min, Y., Ding, Z.T., Liu, F.: Stochastic finite-time consensualization for Markovian
jump networks with disturbance. IET Control Theory Appl. 9(16), 2340–2347 (2015)
13. He, Y., Wu, M., Liu, G.P., She, J.H.: Output feedback stabilization for a discrete-time system
with a time-varying delay. IEEE Trans. Autom. Control 53(10), 2372–2377 (2008)
Chapter 9
Higher-Order Moment Finite-Time
Stabilization for Discrete-Time
Markovian Jump Systems

Abstract The higher-order moment stabilization problem in the finite-time domain for discrete-time Markovian jump systems is addressed to guarantee that not only the mean and variance of the states remain within the desired range in the fixed time interval, but also the higher-order moments of the states are limited to the given bound. Firstly, the derandomization method is utilized to transform the multimode stochastic jumping systems into single-mode deterministic systems. Then, with the help of the cumulant generating function in statistical theory, the higher-order moment components of the states are obtained by first-order Taylor expansion. Compared with the existing control methods, the higher-order moment stabilization improves the control effect by taking the higher-order moment information of the state into consideration.

9.1 Introduction

The research on the control problem for Markovian jump systems (MJSs) has a long
history. Because MJSs can represent most of the actual industrial processes, many
interesting results have been reported including the stability analysis and stabilization
[1, 2], state filtering [3, 4], fault detection [5, 6], and so on. The existing research results on MJSs can be summarized in the following three aspects: (1) the structure of the
system, such as switching jump systems [7], non-homogeneous jump systems [8],
semi-Markovian jump systems [9], linear MJSs [10], nonlinear MJSs [11], and so on;
(2) control methods and performance, such as robust control [12], adaptive control
[13], optimal control [14], and intelligent control [15]; (3) transition probability (TP)
of the system, from completely known to partially known [16], from constant to time
varying [3], etc.
It should be noted that all the above research results consider the first-order or
second-order stability of MJSs. In other words, the control target is to make sure
that the mean and variance of the states satisfy the required performance. However,
in many control fields, such as machine tool production, spacecraft control, and
economic regulation and control, there are requirements on the control performance
beyond the mean and variance of the states. In this situation, it is
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 165
X. Luan et al., Robust Control for Discrete-Time Markovian Jump Systems in the
Finite-Time Domain, Lecture Notes in Control and Information Sciences 492,
https://doi.org/10.1007/978-3-031-22182-8_9

not enough to control the mean and variance of the states. The higher-order moment
control performance is preferred to satisfy the higher demand of the system.
Take the precision machining of a numerically controlled machine tool as an example.
There are very strict requirements for the feed speed of mechanical parts. Generally,
it is desired that the speed, the acceleration, and the jerk (the rate of change of
acceleration) of the parts are zero when they reach the specified position. Another
example can be found in the economic field. The unbiased volatility index (VIX), as a measure of expected market
returns, is always subject to significant biases due to the volatility of the market.
The third-order moment index, the generalized VIX, is introduced to improve the
precision of regulating market expected returns [17].
Therefore, it is necessary to study the higher-order control performance for MJSs.
In the last decade or so, there have been some research results in this area. In 2006, Wu and Sun
introduced some new concepts of p-order moment stability for stochastic differential
equations with impulse jumps and Markovian switches [18]. In 2011, the p-order
moment asymptotical stability of stochastic difference systems was studied [19]. In
[20], the p-order moment exponential stability of impulsive functional differential
equations was addressed. In 2018, Luan introduced the cumulant generating function
to deal with the higher-order moment stability for MJSs [21]. Then, the higher-order
moment filtering, higher-order moment stabilization, and higher-order moment fault
detection were investigated [22–24].
Different from the abovementioned results in higher-order moment analysis and
synthesis, in this chapter, the higher-order moment performance in the finite-time
domain and a specific finite-frequency domain for discrete-time MJSs is dis-
cussed. Firstly, the finite-time stability problem with higher-order moment charac-
teristics has been addressed. Then, the higher-order moment finite-frequency perfor-
mance has been investigated. The derived results can not only cover the mean and
variance stability of the states as special cases, but also reduce the conservativeness
of the controller design.

9.2 Preliminaries and Problem Formulation

Consider the discrete-time MJS with the following structure:



\[
\begin{cases}
x(k+1) = A(r_k)\,x(k) + B_u(r_k)\,u(k) + B_w(r_k)\,w(k)\\
z(k) = C(r_k)\,x(k) + D_w(r_k)\,w(k)\\
x(k) = x_0,\quad r_k = r_0,\quad k = 0
\end{cases}
\tag{9.1}
\]

Design the following state feedback controller for the system (9.1):

\[
u(k) = K_i\,x(k) \tag{9.2}
\]



where $K_i$ is the controller gain to be designed for each mode $i \in \mathcal{M}$. Substituting
controller (9.2) into system (9.1) yields the following closed-loop system:

\[
\begin{cases}
x(k+1) = \bar{A}_i\,x(k) + B_{wi}\,w(k)\\
z(k) = C_i\,x(k) + D_{wi}\,w(k)
\end{cases}
\tag{9.3}
\]

where $\bar{A}_i = A_i + B_{ui}K_i$.


Defining the Dirac function $\delta_{\{r_k=i\}}(r_k)$, the expectation of the state can be written as
follows:

\[
q_i(k) = E\big[x(k)\,\delta_{\{r_k=i\}}(r_k)\big]. \tag{9.4}
\]

Considering the characteristic of $\delta_{\{r_k=i\}}(r_k)$ and the Markovian chain, the following
equation holds:

\[
q_j(k+1) = \sum_{i=1}^{M} \bar{A}_i\,x(k)\,\delta_{\{r_k=i\}}(r_k)\,\delta_{\{r_{k+1}=j\}}(r_{k+1}) + B_{wi}\,w(k)
= \sum_{i=1}^{M} \pi_{ij}\,\bar{A}_i\,q_i(k) + B_{wi}\,w(k). \tag{9.5}
\]

Definition 9.1 [21] For a random variable $z$ with distribution density function $p(z)$,
the moment generating function (MGF) is defined as $\Phi_z(\theta) = \int_{\mathbb{R}^{n_z}} e^{\theta^{T}z}\,p(z)\,dz$, and
the cumulant generating function (CGF) is defined as $\Psi_z(\theta) = \log \Phi_z(\theta)$.
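In the scalar case ($n = 1$), the moment–cumulant recursion below reduces to the familiar identities $c(1) = m(1)$, $c(2) = m(2) - m(1)^2$, $c(3) = m(3) - 3m(1)m(2) + 2m(1)^3$. A minimal numerical check of these identities (the sampled distribution is our illustrative choice, not taken from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(loc=1.0, scale=2.0, size=200_000)  # scalar random variable

# raw moments m(p) = E[z^p]
m1, m2, m3 = (float(np.mean(z**p)) for p in (1, 2, 3))

# scalar moment-cumulant recursion
c1 = m1
c2 = m2 - m1**2              # variance
c3 = m3 - 3*m1*m2 + 2*m1**3  # third cumulant

# for a Gaussian, all cumulants of order >= 3 vanish
assert abs(c1 - 1.0) < 0.05 and abs(c2 - 4.0) < 0.1 and abs(c3) < 0.3
```

For the Gaussian sample the third cumulant is (statistically) zero, which is exactly why higher-order moment information only becomes informative for non-Gaussian state distributions such as those produced by Markovian jumps.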


If the MGF $\Phi_z(\theta)$ and the CGF $\Psi_z(\theta)$ are analytical, they can be expanded as Taylor
series in the neighborhood of $\theta = 0$ as

\[
\Phi_z(\theta) = \sum_{p=0}^{\infty} m(p,n)^{T}\,\frac{\theta^{\otimes p}}{p!} \tag{9.6}
\]

\[
\Psi_z(\theta) = \sum_{p=0}^{\infty} c(p,n)^{T}\,\frac{\theta^{\otimes p}}{p!} \tag{9.7}
\]

where $m(p,n)$ is the $p$th-order moment vector with dimension $n^{p}\times 1$, $p \in \{1,2,\ldots,l\}$,
and $c(p,n)$ is the $p$th-order cumulant vector, which is given by

\[
c(p,n) = m(p,n) - \sum_{l=1}^{p-1}\binom{p-1}{l}\,Q_{l}\,\big[c(p-l,n)\otimes m(l,n)\big] \tag{9.8}
\]

where $Q_l$ is a specific commutation matrix with appropriate dimension. According
to Eqs. (9.6) and (9.7), by taking the CGF on both sides of Eq. (9.5) and expanding
it into a Taylor series in the neighborhood of $\theta = 0$, the left-hand side of Eq. (9.5)
is described as

\[
\Psi_{q_j(k+1)}(\theta) = \sum_{p=0}^{\infty} c_{q_j(k+1)}(p,n)^{T}\,\frac{\theta^{\otimes p}}{p!} \tag{9.9}
\]

and the right-hand side of Eq. (9.5) is formulated as

\[
\Psi_{\sum_{i=1}^{M}\pi_{ij}\bar{A}_i q_i(k)}(\theta)
= \sum_{i=1}^{M}\left\{\sum_{p=0}^{\infty} c_{\pi_{ij}\bar{A}_i q_i(k)}(p,n)^{T}\,\frac{\theta^{\otimes p}}{p!}\right\}
= \sum_{p=0}^{\infty}\left[\sum_{i=1}^{M} c_{\pi_{ij}\bar{A}_i q_i(k)}(p,n)\right]^{T}\frac{\theta^{\otimes p}}{p!}. \tag{9.10}
\]

Then, it yields

\[
c_{q_j(k+1)}(p,n) = \sum_{i=1}^{M} c_{\pi_{ij}\bar{A}_i q_i(k)}(p,n). \tag{9.11}
\]

With the cumulant property $c_{Tz}(p) = T^{\otimes p}\,c_{z}(p)$, where $T$ is the transform matrix,
we can obtain

\[
c_{q_j(k+1)}(p) = \sum_{i=1}^{M}\pi_{ij}\,\bar{A}_i^{\otimes p}\,c_{q_i(k)}(p). \tag{9.12}
\]
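The step from (9.11) to (9.12) rests on the mixed-product property of Kronecker powers, $(\bar{A}x)^{\otimes p} = \bar{A}^{\otimes p}x^{\otimes p}$. A quick numerical check for $p = 2$ (the matrix and vector are illustrative, not from the book):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))  # stands in for a closed-loop matrix A_i
x = rng.standard_normal(2)

lhs = np.kron(A @ x, A @ x)          # (A x) Kronecker-squared
rhs = np.kron(A, A) @ np.kron(x, x)  # A Kronecker-squared times x Kronecker-squared
assert np.allclose(lhs, rhs)
```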

Defining $s_j(k,p) = c_{q_j(k)}(p)$, $S(k,p) = \big[s_1^{T}(k,p),\ldots,s_M^{T}(k,p)\big]^{T}$, $w_{Mp}(k) =
\big[w_1^{T}(k),\ldots,w_M^{T}(k)\big]^{T}$, and $z_{Mp}(k) = \big[z_1^{T}(k),\ldots,z_M^{T}(k)\big]^{T}$, equality (9.12) can be
rewritten as

\[
\begin{cases}
S(k+1,p) = \bar{A}_{Mp}\,S(k,p) + B_{wMp}\,w_{Mp}(k)\\
z_{Mp}(k) = C_{Mp}\,S(k,p) + D_{wMp}\,w_{Mp}(k)
\end{cases}
\tag{9.13}
\]

where

\[
\bar{A}_{Mp} = \big(\Pi^{T}\otimes I_{n^{p}}\big)\cdot\mathrm{diag}\{\bar{A}_1^{\otimes p},\ldots,\bar{A}_M^{\otimes p}\}\in\mathbb{R}^{(n^{p}\times M)\times(n^{p}\times M)},
\]
\[
B_{wMp} = \mathrm{diag}\{B_{w1},\ldots,B_{wM}\}\otimes I_{q^{p}}\in\mathbb{R}^{(n^{p}\times M)\times(q^{p}\times M)},
\]
\[
C_{Mp} = \mathrm{diag}\{C_1,\ldots,C_M\}\otimes I_{n^{p}}\in\mathbb{R}^{(l^{p}\times M)\times(n^{p}\times M)},
\]
\[
D_{wMp} = \mathrm{diag}\{D_{w1},\ldots,D_{wM}\}\otimes I_{l^{p}}\in\mathbb{R}^{(l^{p}\times M)\times(q^{p}\times M)},
\]
\[
\Pi = \begin{pmatrix}
\pi_{11} & \pi_{12} & \cdots & \pi_{1M}\\
\pi_{21} & \pi_{22} & \cdots & \pi_{2M}\\
\vdots & \vdots & \ddots & \vdots\\
\pi_{M1} & \pi_{M2} & \cdots & \pi_{MM}
\end{pmatrix}.
\]

So far, based on the cumulant generating function, the original discrete-time linear
MJS has been transformed into a deterministic linear system. Since the state of the
transformed deterministic system has the same norm as the higher-order moment
of the original MJS, the finite-time stability of the transformed deterministic system
is equivalent to the higher-order moment finite-time stability of the original MJS.
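The equivalence underlying this derandomization can be illustrated by simulation: Monte-Carlo estimates of $q_i(k) = E[x(k)\,\delta_{\{r_k=i\}}(r_k)]$ from sample paths of the jump system should match the deterministic recursion (9.5) with $w = 0$. A sketch with assumed two-mode data (all matrices are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
A = [np.array([[0.5, 0.1], [0.0, 0.4]]),   # mode-1 closed-loop matrix (assumed)
     np.array([[0.3, 0.0], [0.2, 0.6]])]   # mode-2 closed-loop matrix (assumed)
Pi = np.array([[0.7, 0.3],                 # transition probability matrix [pi_ij]
               [0.4, 0.6]])
x0, r0, N, runs = np.array([1.0, -1.0]), 0, 5, 20_000

# Monte-Carlo estimate of q_i(k) = E[x(k) * indicator{r_k = i}]
q_mc = np.zeros((N + 1, 2, 2))
for _ in range(runs):
    x, r = x0.copy(), r0
    for k in range(N + 1):
        q_mc[k, r] += x
        x = A[r] @ x
        r = 0 if rng.random() < Pi[r, 0] else 1
q_mc /= runs

# deterministic single-mode recursion (9.5) with w = 0
q = np.zeros((N + 1, 2, 2)); q[0, r0] = x0
for k in range(N):
    for j in range(2):
        q[k + 1, j] = sum(Pi[i, j] * (A[i] @ q[k, i]) for i in range(2))

assert np.allclose(q_mc, q, atol=0.03)
```

The multimode stochastic system and the single-mode deterministic recursion agree to within Monte-Carlo error, which is the practical content of the derandomization step.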

9.3 Higher-Order Moment Stabilization in the Finite-Time Domain

Our first target in this subsection is to design the controller in the form of (9.2) to make
sure that the MJS (9.3) is higher-order moment finite-time stabilizable with H∞
interference suppression performance.
Before deriving the main results, the following definition and lemma are intro-
duced first.

Definition 9.2 [22] For a given time constant $N > 0$, the transformed deterministic
system (9.13) (setting $u(k) = 0$, $w(k) = 0$) is said to be finite-time stable with respect
to $(c_1, c_2, N, R)$, where $c_1 < c_2$, $R > 0$, if

\[
E\big[S^{T}(0,p)RS(0,p)\big]\le c_1 \;\Rightarrow\; E\big[S^{T}(k,p)RS(k,p)\big]\le c_2,\quad \forall k\in\{1,2,\ldots,N\}. \tag{9.14}
\]
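The condition of Definition 9.2 can be checked directly along a simulated trajectory. The helper below is a sketch under assumed system data (the names, the toy matrix, and the bounds are ours):

```python
import numpy as np

def finite_time_stable(traj, R, c1, c2):
    """Check (9.14) along one trajectory: if the initial weighted energy
    S(0)^T R S(0) is within c1, every later value must stay within c2."""
    w = [float(s @ R @ s) for s in traj]
    return w[0] <= c1 and all(v <= c2 for v in w[1:])

# toy deterministic trajectory S(k+1) = A S(k) (assumed data)
A = np.array([[0.8, 0.2], [0.0, 0.9]])
S = [np.array([0.9, 0.0])]
for _ in range(10):
    S.append(A @ S[-1])

assert finite_time_stable(S, np.eye(2), c1=1.0, c2=4.0)
```

Note that finite-time stability only constrains the trajectory on the interval $\{1,\ldots,N\}$; it neither requires nor implies asymptotic convergence.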
To eliminate the influence of external disturbances on the finite-time stability of
system (9.13), the following $H_\infty$ performance indicator should be satisfied:

\[
E\left[\sum_{k=0}^{N} z_{Mp}^{T}(k)z_{Mp}(k)\right] \le \gamma^{2}\sum_{k=0}^{N} w_{Mp}^{T}(k)w_{Mp}(k). \tag{9.15}
\]


Lemma 9.1 [25] If $a_1, a_2, \ldots, a_t > 0$, then $\sqrt[t]{a_1 a_2\cdots a_t} \le \frac{1}{t}\big(a_1 + a_2 + \cdots + a_t\big)$. The
equal sign holds if and only if $a_1 = a_2 = \cdots = a_t$.
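Lemma 9.1 is the arithmetic-mean–geometric-mean inequality; a one-line numerical sanity check (values illustrative):

```python
import math

a = [2.0, 3.0, 12.0]
gm = math.prod(a) ** (1 / len(a))  # geometric mean
am = sum(a) / len(a)               # arithmetic mean
assert gm <= am                    # Lemma 9.1
# equality holds iff all a_i are equal
assert math.isclose((4.0 * 4.0 * 4.0) ** (1 / 3), (4.0 + 4.0 + 4.0) / 3)
```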

Then, the following theorem is given to provide the finite-time controller design
scheme for system (9.3), realizing the requirement of higher-order moment finite-
time stabilization with $H_\infty$ performance.
Theorem 9.1 For given $\gamma$ and $\alpha \ge 0$, the discrete-time closed-loop system (9.3) is
said to be higher-order moment finite-time stabilizable with respect to $(c_1, c_2, N, R, d)$
and meets the robust performance index, if there exists a mode-dependent symmetric
matrix $\tilde{P} > 0$ satisfying the following conditions:

\[
\begin{bmatrix}
-(1+\alpha)\tilde{X} & 0 & \tilde{X}C_{Mp}^{T} & \sqrt{1+\alpha}\,\big(A_{Mp}\tilde{X} + B_{uMp}\tilde{Y}\big)^{T}\\
* & -\gamma^{2}I & D_{wMp}^{T} & \sqrt{1+\alpha}\,B_{wMp}^{T}\\
* & * & -I & 0\\
* & * & * & -\tilde{X}
\end{bmatrix} < 0 \tag{9.16}
\]

\[
(\alpha+1)^{N}\lambda_{2}c_{1} + \gamma^{2}d^{2} < \lambda_{1}c_{2} \tag{9.17}
\]

where

\[
A_{Mp} = \frac{1}{p+1}\,\Pi^{T}\otimes I_{n^{p}} + \mathrm{diag}\{A_{1}, A_{2}, \ldots, A_{M}\}\otimes I_{n^{p}},\quad
B_{uMp} = \mathrm{diag}\{B_{1}, B_{2}, \ldots, B_{M}\}\otimes I_{n^{p}},
\]
\[
\lambda_{1} = \lambda_{\min}(P_{t}),\quad \lambda_{2} = \lambda_{\max}(P_{t}).
\]

Then, the augmented state feedback controller gain can be obtained as $K_{Mp} = \tilde{Y}\tilde{X}^{-1}$,
where $K_{Mp} = \mathrm{diag}\{K_{1}, K_{2}, \ldots, K_{M}\}\otimes I_{m^{p}}$.

Proof Define the following Lyapunov function:

\[
V(k,p) = S^{T}(k,p)\,\tilde{P}\,S(k,p) \tag{9.18}
\]

where $\tilde{P} = \mathrm{diag}\{P_{1}, P_{2}, \ldots, P_{M}\}\otimes I_{n^{p}} \in \mathbb{R}^{(n^{p}\times M)\times(n^{p}\times M)}$.

Then, it has

\[
\begin{aligned}
\Delta V(k,p) &= E\big[V(S(k+1,p), r_{k})\,\big|\, S(k,p), r_{k}\big] - V(S(k,p), r_{k})\\
&= S^{T}(k+1,p)\,\tilde{P}\,S(k+1,p) - S^{T}(k,p)\,\tilde{P}\,S(k,p)\\
&= S^{T}(k,p)\big(\bar{A}_{Mp}^{T}\tilde{P}\bar{A}_{Mp} - \tilde{P}\big)S(k,p)
 + 2S^{T}(k,p)\bar{A}_{Mp}^{T}\tilde{P}B_{wMp}w_{Mp}(k)\\
&\quad + w_{Mp}^{T}(k)B_{wMp}^{T}\tilde{P}B_{wMp}w_{Mp}(k).
\end{aligned}
\tag{9.19}
\]

Under the zero initial condition $V(x(k))\big|_{k=0} = 0$, define the following performance
indicator:

\[
J \triangleq E\left[\sum_{k=0}^{N}\big(z_{Mp}^{T}(k)z_{Mp}(k) - \gamma^{2}w_{Mp}^{T}(k)w_{Mp}(k)\big)\right]. \tag{9.20}
\]

The performance indicator (9.20) can be rewritten as

\[
J \le E\left[\sum_{k=0}^{N}\big(z_{Mp}^{T}(k)z_{Mp}(k) - \gamma^{2}w_{Mp}^{T}(k)w_{Mp}(k) + (1+\alpha)\,\Delta V(S(k,p))\big)\right]
= \sum_{k=0}^{N}\zeta^{T}(k)\,\Theta\,\zeta(k) \tag{9.21}
\]

where

\[
\zeta(k) \triangleq \big[S^{T}(k,p)\;\; w_{Mp}^{T}(k)\big]^{T},
\]
\[
\Theta = \begin{bmatrix}
(1+\alpha)\bar{A}_{Mp}^{T}\tilde{P}\bar{A}_{Mp} - (1+\alpha)\tilde{P} + C_{Mp}^{T}C_{Mp} &
(1+\alpha)\bar{A}_{Mp}^{T}\tilde{P}B_{wMp} + C_{Mp}^{T}D_{wMp}\\
* &
(1+\alpha)B_{wMp}^{T}\tilde{P}B_{wMp} + D_{wMp}^{T}D_{wMp} - \gamma^{2}I
\end{bmatrix}.
\]

If $\Theta < 0$, which implies $J < 0$, the $H_\infty$ performance (9.15) is satisfied.


According to Lemma 9.1, the following inequality holds:

\[
\bar{A}_{Mp} \le \tilde{A}_{Mp} = \frac{1}{p+1}\,\Pi^{T}\otimes I_{n^{p}} + \frac{p}{p+1}\,\mathrm{diag}\{\bar{A}_{1},\ldots,\bar{A}_{M}\}\otimes I_{n^{p}}. \tag{9.22}
\]

According to the Schur complement lemma, combined with inequality (9.22),
condition (9.21) can be written as

\[
\begin{bmatrix}
-(1+\alpha)\tilde{P} & 0 & C_{Mp}^{T} & \sqrt{1+\alpha}\,\tilde{A}_{Mp}^{T}\\
* & -\gamma^{2}I & D_{wMp}^{T} & \sqrt{1+\alpha}\,B_{wMp}^{T}\\
* & * & -I & 0\\
* & * & * & -\tilde{P}^{-1}
\end{bmatrix} < 0. \tag{9.23}
\]

Letting $\tilde{X} \triangleq \tilde{P}^{-1}$ and $\tilde{Y} \triangleq K_{Mp}\tilde{X}$, and pre- and post-multiplying the above inequality
(9.23) by $\mathrm{diag}\{\tilde{X}, I, I, I\}$, inequality (9.23) is equivalent to inequality (9.16)
in Theorem 9.1.
On the other hand, for $\alpha \ge 0$, inequality (9.16) implies that

\[
E\{V(k+1,p)\} < (1+\alpha)V(k,p) + \gamma^{2}w_{Mp}^{T}(k)w_{Mp}(k). \tag{9.24}
\]

Writing this inequality at successive sampling times, we have

\[
\begin{aligned}
E\{V(1,p)\} - V(0,p) &< \alpha V(0,p) + \gamma^{2}w_{Mp}^{T}(0)w_{Mp}(0),\\
E\{V(2,p)\} - V(1,p) &< \alpha V(1,p) + \gamma^{2}w_{Mp}^{T}(1)w_{Mp}(1),\\
&\;\vdots\\
E\{V(k+1,p)\} - V(k,p) &< \alpha V(k,p) + \gamma^{2}w_{Mp}^{T}(k)w_{Mp}(k).
\end{aligned}
\]

It can be obtained from the above inequalities that

\[
\begin{aligned}
E\{V(k,p)\} &< (\alpha+1)V(k-1,p) + \gamma^{2}w_{Mp}^{T}(k-1)w_{Mp}(k-1)\\
&< (\alpha+1)^{2}V(k-2,p) + \gamma^{2}w_{Mp}^{T}(k-1)w_{Mp}(k-1) + (\alpha+1)\gamma^{2}w_{Mp}^{T}(k-2)w_{Mp}(k-2)\\
&\;\vdots\\
&< (\alpha+1)^{N}V(0,p) + \gamma^{2}\sum_{i=0}^{N-1}(\alpha+1)^{N-1-i}\,w_{Mp}^{T}(i)w_{Mp}(i)\\
&< (\alpha+1)^{N}V(0,p) + \gamma^{2}d^{2}.
\end{aligned}
\tag{9.25}
\]
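The inequality chain above telescopes the one-step bound (9.24); for scalar data the closed form of the telescoped sum can be verified directly (all numbers below are illustrative):

```python
import numpy as np

alpha, gamma, N = 0.1, 1.5, 10
rng = np.random.default_rng(3)
w2 = (0.1 * rng.standard_normal(N))**2  # disturbance energies w(k)^T w(k)

# iterate the one-step bound V(k+1) = (1+alpha) V(k) + gamma^2 w(k)^T w(k)
V = [2.0]
for k in range(N):
    V.append((1 + alpha) * V[-1] + gamma**2 * w2[k])

# telescoped closed form after N steps
closed = (1 + alpha)**N * V[0] + gamma**2 * sum(
    (1 + alpha)**(N - 1 - k) * w2[k] for k in range(N))
assert np.isclose(V[-1], closed)
```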
Considering that there exists a symmetric matrix $P_t$ such that $\tilde{P} = \big(R^{1/2}\big)^{T} P_t R^{1/2}$,
the Lyapunov function yields

\[
V(k,p) = S^{T}(k,p)\tilde{P}S(k,p) = S^{T}(k,p)\big(R^{1/2}\big)^{T}P_t R^{1/2}S(k,p) > \lambda_{\min}(P_t)\,S^{T}(k,p)RS(k,p). \tag{9.26}
\]

Similarly,

\[
V(0,p) = S^{T}(0,p)\tilde{P}S(0,p) = S^{T}(0,p)\big(R^{1/2}\big)^{T}P_t R^{1/2}S(0,p) < \lambda_{\max}(P_t)\,S^{T}(0,p)RS(0,p). \tag{9.27}
\]

According to the finite-time-bounded definition, when $S^{T}(0,p)RS(0,p) < c_1$,
combining inequalities (9.25), (9.26), and (9.27), the following inequality holds:

\[
\lambda_{\min}(P_t)\,S^{T}(k,p)RS(k,p) < (\alpha+1)^{N}\lambda_{\max}(P_t)\,c_1 + \gamma^{2}d^{2}. \tag{9.28}
\]

If condition (9.17) holds, it has

\[
S^{T}(k,p)RS(k,p) < \frac{1}{\lambda_{\min}(P_t)}\Big[(\alpha+1)^{N}\lambda_{\max}(P_t)\,c_1 + \gamma^{2}d^{2}\Big] < c_2 \tag{9.29}
\]

which means the system (9.13) is finite-time stabilizable. Thus, the higher-order
moment finite-time stabilization of MJS (9.3) has been realized. This completes the
proof. 
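Once $\lambda_{\min}(P_t)$ and $\lambda_{\max}(P_t)$ are available from an LMI solver, the scalar condition (9.17) of Theorem 9.1 is a simple feasibility test. A hypothetical helper (the function name and all parameter values are illustrative, not from the book):

```python
def condition_9_17(alpha, N, lam_min, lam_max, c1, c2, gamma, d):
    """Check the scalar condition (alpha+1)^N * lam_max * c1 + gamma^2 * d^2
    < lam_min * c2 from Theorem 9.1."""
    return (alpha + 1)**N * lam_max * c1 + gamma**2 * d**2 < lam_min * c2

assert condition_9_17(alpha=0.01, N=10, lam_min=1.0, lam_max=1.2,
                      c1=1.0, c2=4.0, gamma=0.5, d=1.0)
```

The test makes the trade-off visible: a larger decay parameter $\alpha$ or a longer horizon $N$ inflates the left-hand side exponentially, so feasibility forces either a smaller $c_1$ or a larger admissible bound $c_2$.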

9.4 Higher-Order Moment Finite-Time Stabilization with Finite-Frequency Performance

The above content in Sect. 9.3 considers the higher-order moment finite-time perfor-
mance with interference suppression level under the full frequency domain. However,
most of the external disturbances are energy bounded. Therefore, the controller design
considering the performance in the entire frequency range will lead to over-design
and conservativeness. The main purpose of this subsection is to design the appropri-
ate controller (9.2) so that the system (9.3) meets the higher-order moment finite-time
stabilization requirement and the following multiple-frequency performance indices:
\[
\big\|G_{z_{Mp}w_{Mp}}(e^{j\vartheta})\big\| < \beta, \quad \forall\,|\vartheta| \le \vartheta_{l} \tag{9.30}
\]

\[
\big\|G_{u_{Mp}w_{Mp}}(e^{j\vartheta})\big\| < \rho, \quad \forall\,|\vartheta| \ge \vartheta_{h} \tag{9.31}
\]

where $u_{Mp}(k) = \big[u_{1}^{T}(k), \ldots, u_{M}^{T}(k)\big]^{T}$.
To improve the controller performance and reduce the conservativeness of the
controller, the higher-order moment finite-time stabilization with finite-frequency
performance will be given in the next theorem.
Theorem 9.2 The discrete-time closed-loop system (9.3) is said to be higher-order
moment finite-time stabilizable with respect to $(c_1, c_2, N, R, d)$ and meets the finite-
frequency performance indicators (9.30) and (9.31), if for given scalars $\vartheta_l$, $\vartheta_h$, $\alpha \ge 0$,
$\gamma$, and $\rho$, there exist symmetric matrices $P_l$, $P_h$, $Q_l > 0$, and $Q_h > 0$, and matrices $\tilde{X}$, $\tilde{Y}$, $V_l$,
and $V_h$ satisfying the following conditions:

\[
\begin{bmatrix}
-P_l & Q_l + \tilde{X}R_l & 0 & 0\\
* & P_l - 2\cos\vartheta_l\,Q_l - \mathrm{He}\big(A_{Mp}\tilde{X}R_l + B_{uMp}\tilde{Y}R_l\big) & B_{wMp}V_l & -C_{Mp}\tilde{X}R_l\\
* & * & -\gamma^{2}I & D_{wMp}V_l\\
* & * & * & -I
\end{bmatrix} < 0 \tag{9.32}
\]

\[
\begin{bmatrix}
-P_h & Q_h + \tilde{X}R_h & 0 & 0\\
* & P_h - 2\cos\vartheta_h\,Q_h - \mathrm{He}\big(A_{Mp}\tilde{X}R_h + B_{uMp}\tilde{Y}R_h\big) & B_{wMp}V_h & \tilde{Y}R_h\\
* & * & -\rho^{2}I & 0\\
* & * & * & -I
\end{bmatrix} < 0 \tag{9.33}
\]

\[
\begin{bmatrix}
(1+\alpha)\tilde{X} & \big(A_{Mp}\tilde{X} + B_{uMp}\tilde{Y}\big)^{T}\\
* & \tilde{X}
\end{bmatrix} < 0 \tag{9.34}
\]

\[
-\lambda_2 R^{-1} < \tilde{X} < -\lambda_1 R^{-1} \tag{9.35}
\]

\[
(\alpha+1)^{N}\lambda_2 c_1 + \gamma^{2}d^{2} < \lambda_1 c_2. \tag{9.36}
\]

Then, the state feedback controller gain can be obtained as $K_{Mp} = \tilde{Y}\tilde{X}^{-1}$, where
$K_{Mp} = \mathrm{diag}\{K_1, K_2, \ldots, K_M\}\otimes I_{m^p}$.

Proof Letting $\tilde{X}^{-1} = -\tilde{P}$, and pre- and post-multiplying inequality (9.34) by
$\mathrm{diag}\{\tilde{X}^{-1}, I\}$, it has

\[
\begin{bmatrix}
-(1+\alpha)\tilde{P} & \big(A_{Mp} + B_{uMp}K_{Mp}\big)^{T}\\
* & -\tilde{P}^{-1}
\end{bmatrix} < 0. \tag{9.37}
\]

According to the Schur complement lemma, the above inequality is equivalent to

\[
\big(A_{Mp} + B_{uMp}K_{Mp}\big)^{T}\tilde{P}\big(A_{Mp} + B_{uMp}K_{Mp}\big) - (1+\alpha)\tilde{P} < 0. \tag{9.38}
\]

Performing a congruence transformation on inequality (9.38) by $S^{T}(k,p)$, it has

\[
\begin{aligned}
E\{V(k+1,p)\} &< (1+\alpha)V(k,p) + \gamma^{2}w_{Mp}^{T}(k)w_{Mp}(k)\\
&< (1+\alpha)^{2}V(k-1,p) + \gamma^{2}w_{Mp}^{T}(k-1)w_{Mp}(k-1)\\
&\;\vdots\\
&< (1+\alpha)^{N}V(0,p) + \gamma^{2}d^{2}.
\end{aligned}
\]

Combining conditions (9.35) and (9.36), we can get

\[
S^{T}(k,p)RS(k,p) < \frac{1}{\lambda_{\min}(P_t)}\Big[(\alpha+1)^{N}\lambda_{\max}(P_t)\,c_1 + \gamma^{2}d^{2}\Big] < c_2,
\]

which means the system (9.13) is finite-time stabilizable. Thus, the higher-order
moment finite-time stabilization of MJS (9.3) has been realized.
On the other hand, according to GKYP Lemma 7.1, condition (9.32) is equivalent
to

\[
\begin{bmatrix} M \\ I \end{bmatrix}^{T}
\begin{bmatrix} \Xi\otimes P_l + \Lambda_l\otimes Q_l & 0\\ 0 & \Upsilon \end{bmatrix}
\begin{bmatrix} M \\ I \end{bmatrix} < 0 \tag{9.39}
\]

where

\[
\Xi = \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix},\quad
\Lambda_l = \begin{bmatrix} -1 & 0\\ 0 & -\vartheta_l^{2} \end{bmatrix},\quad
\Upsilon = \begin{bmatrix} I & 0\\ 0 & -\gamma^{2}I \end{bmatrix},
\]
\[
M = \begin{bmatrix} A_{Mp} + B_{uMp}K_{Mp} & B_{wMp}\\ C_{Mp} & D_{wMp} \end{bmatrix}
= \begin{bmatrix} A_{Mp} & B_{wMp}\\ C_{Mp} & D_{wMp} \end{bmatrix}
+ \begin{bmatrix} B_{uMp}\\ 0 \end{bmatrix} K_{Mp}\begin{bmatrix} I & 0\end{bmatrix}
= \Gamma + \Psi K_{Mp}Z,
\]
\[
\Gamma = \begin{bmatrix} A_{Mp} & B_{wMp}\\ C_{Mp} & D_{wMp} \end{bmatrix},\quad
\Psi = \begin{bmatrix} B_{uMp}\\ 0 \end{bmatrix},\quad
Z = \begin{bmatrix} I & 0 \end{bmatrix},\quad
Z^{+} = Z^{*}\big(ZZ^{*}\big)^{-1} = \begin{bmatrix} I & 0 \end{bmatrix}^{T}.
\]

Performance index (9.30) can be rewritten as

\[
\begin{bmatrix} \Xi\otimes P_l + \Lambda_l\otimes Q_l & 0\\ 0 & \Upsilon \end{bmatrix}
< \mathrm{He}\left\{
\begin{bmatrix}
-I & 0 & 0\\
0 & -I & 0\\
A_{Mp} & B_{wMp} & B_{uMp}\\
C_{Mp} & D_{wMp} & 0
\end{bmatrix}
\begin{bmatrix} \tilde{X}R_l\\ V_l\\ K_{Mp}R_l \end{bmatrix}\right\}
< \mathrm{He}\left\{
\begin{bmatrix}
-Z^{+} & Z^{+}Z - I\\
A_{Mp} + B_{uMp}K_{Mp} & B_{wMp}\\
C_{Mp} & D_{wMp}
\end{bmatrix}
\begin{bmatrix} \tilde{X}R_l\\ V_l \end{bmatrix}\right\}.
\]

The above inequality is equivalent to

\[
\begin{bmatrix} \Xi\otimes P_l + \Lambda_l\otimes Q_l & 0\\ 0 & \Upsilon \end{bmatrix}
< \mathrm{He}\left\{
\begin{bmatrix} -I\\ \Gamma + \Psi K_{Mp}Z \end{bmatrix}
\big[Z^{+}\tilde{X}R_l + (I - Z^{+}Z)V_l\big]\right\}. \tag{9.40}
\]

Letting

\[
W = Z^{+}\tilde{X}R_l + (I - Z^{+}Z)V_l,\quad \det(W)\ne 0,
\]

inequality (9.40) is equivalent to

\[
\begin{bmatrix} \Xi\otimes P_l + \Lambda_l\otimes Q_l & 0\\ 0 & \Upsilon \end{bmatrix}
< \mathrm{He}\left\{
\begin{bmatrix} -W\\ \Gamma W + \Psi K_{Mp}ZW \end{bmatrix}\right\}
= \mathrm{He}\left\{\begin{bmatrix} -I\\ M \end{bmatrix}W\right\}. \tag{9.41}
\]

The left-hand side, denoted $\Omega$, can be divided as

\[
\Omega =
\begin{bmatrix} I & 0\\ 0 & I\\ 0 & 0 \end{bmatrix}
\begin{bmatrix} -P_l & Q_l\\ * & P_l - 2\cos\vartheta_l\,Q_l \end{bmatrix}
\begin{bmatrix} I & 0\\ 0 & I\\ 0 & 0 \end{bmatrix}^{T}
+
\begin{bmatrix} 0 & 0\\ C_{Mp}^{T} & 0\\ D_{wMp}^{T} & I \end{bmatrix}
\begin{bmatrix} I & 0\\ * & -\gamma^{2}I \end{bmatrix}
\begin{bmatrix} 0 & 0\\ C_{Mp}^{T} & 0\\ D_{wMp}^{T} & I \end{bmatrix}^{T}.
\]

Through Lemma 7.2, formula (9.39) can be deduced from inequality (9.41),
which means that the low-frequency performance index (9.30) can be derived from
condition (9.32). Similarly, the high-frequency performance index (9.31) can also
be deduced from condition (9.33) in Theorem 9.2. This completes the proof. 

Fig. 9.1 a State trajectory of the closed-loop system with p = 2. b State trajectory of the closed-loop system with p = 3

9.5 Simulation Analysis

In this section, two examples will be given to show the effectiveness and practical
application of the developed theoretical results. The first numerical example is
used to show the transient performance of the system in the given time interval and the
multiple-frequency performance in the high-frequency and low-frequency bands.

Example 9.1 Using Example 9.1 in Chap. 7 and the results obtained from Theorem
9.1, when the moment order is $p = 2$, the controller gains are solved as

\[
K_1 = \begin{bmatrix} -0.1604 & -3.3373 \end{bmatrix},\quad K_2 = \begin{bmatrix} 0.6055 & -0.7477 \end{bmatrix}.
\]

When the moment order is $p = 3$, the controller gains are solved as

\[
K_1 = \begin{bmatrix} -0.1066 & -0.1376 \end{bmatrix},\quad K_2 = \begin{bmatrix} 0.1548 & -0.2786 \end{bmatrix}.
\]

Using the obtained controllers, Fig. 9.1a and b show the second-order
moment response and the third-order moment response of the states, respectively.
It is obvious from Fig. 9.1 that the controlled system satisfies the condition of
finite-time stabilization. For comparison, the state trajectory of the open-loop
system without control is given in Fig. 9.2, where the state trajectory exceeds the
desired bound $c_2 = 4$.

To verify the effectiveness of the results presented in Theorem 9.2, when the moment
order is $p = 2$, the controller gains are solved as

\[
K_1 = \begin{bmatrix} -5.9370 & -3.4052 \end{bmatrix},\quad K_2 = \begin{bmatrix} -13.2598 & -18.4628 \end{bmatrix}.
\]

Fig. 9.2 State trajectory of the open-loop system



Fig. 9.3 a State trajectory of the closed-loop system. b State trajectory of the open-loop system

When the moment order is $p = 3$, the controller gains are solved as

\[
K_1 = \begin{bmatrix} -4.9363 & -3.2130 \end{bmatrix},\quad K_2 = \begin{bmatrix} -5.9931 & -15.3002 \end{bmatrix}.
\]

Figure 9.3a and b show the third-order moment response of the closed-loop
system and the open-loop system, respectively. It can be seen that even though the
open-loop system is unstable, the controlled closed-loop system satisfies the required
performance.

Fig. 9.4 a Amplitude-frequency characteristic curve with p = 2. b Amplitude-frequency characteristic curve with p = 3

Meanwhile, in order to verify that the system meets the multiple-frequency
performance indices, the amplitude-frequency characteristic curve of the closed-loop
system is shown in Fig. 9.4a. The solid line in Fig. 9.4 represents the amplitude-frequency
characteristic curve of $\|G_{z_{Mp}w_{Mp}}(e^{j\vartheta})\|$. The shaded part in blue shows
the limits of $\beta$ and $\rho$ in the low-frequency and high-frequency bands. It can be clearly
seen in Fig. 9.4 that the performance indicator $\|G_{z_{Mp}w_{Mp}}(e^{j\vartheta})\| < \beta$ is satisfied in the
low-frequency band and $\|G_{u_{Mp}w_{Mp}}(e^{j\vartheta})\| < \rho$ is satisfied in the high-frequency band,
while $\|G_{z_{Mp}w_{Mp}}(e^{j\vartheta})\|$ is allowed to exceed $\beta$ outside the low-frequency band and
$\|G_{u_{Mp}w_{Mp}}(e^{j\vartheta})\|$ is allowed to exceed $\rho$ outside the high-frequency band. This
explains why the proposed method in this chapter can reduce the conservativeness of
the controller design.

9.6 Conclusion

In this chapter, the issue of higher-order moment performance for stochastic discrete-
time MJSs has been addressed with the help of the cumulant generating function
by translating the original stochastic MJSs into deterministic ones. To reduce the
conservativeness of the controller design, the requirement of asymptotic stability
in infinite-time domain is relaxed to guarantee that the state is restricted within a
certain range of the equilibrium point in the fixed time interval. Furthermore, from
the frequency point of view, the finite-frequency controller in finite-time domain
with higher-order moment performance has been designed by introducing frequency
information into controller design. In the next chapter, the model predictive controller
will be designed to online optimize the finite-time performance of the considered
systems.

References

1. Luan, X.L., Liu, F., Shi, P.: Observer-based finite-time stabilization for extended Markovian
jump systems. Asian J. Control 13(6), 925–935 (2011)
2. Oliveira, R.C.L.F., Vargas, A.N., Val, J.B.R.D.: Mode-independent H2 -control of a DC motor
modeled as a Markovian jump linear system. IEEE Trans. Control Syst. Technol. 22(5), 1915–
1919 (2014)
3. Luan, X.L., Liu, F., Shi, P.: Finite-time filtering for nonlinear stochastic systems with partially
known transition jump rates. IET Control Theor. Appl. 4(5), 735–745 (2010)
4. Luan, X.L., Liu, F., Shi, P.: H∞ filtering for nonlinear systems via neural networks. J. Frankl.
Inst. 347, 1035–1046 (2010)
5. Cheng, P., Wang, J.C., He, S.P., Luan, X.L., Liu, F.: Observer-based asynchronous fault detec-
tion for conic-type nonlinear jumping systems and its application to separately excited DC
motor. IEEE Trans. Circ. Syst-I 67(3), 951–962 (2020)
6. Luan, X.L., He, S.P., Liu, F.: Neural network-based robust fault detection for nonlinear jump
systems. Chaos Soliton Fract. 42(2), 760–766 (2009)

7. Luan, X.L., Zhao, C.Z., Liu, F.: Finite-time H∞ control with average dwell-time constraint for
time-delay Markovian jump systems governed by deterministic switches. IET Control Theor.
Appl. 8(11), 968–977 (2014)
8. Luan, X.L., Zhao, S.Y., Liu, F.: H∞ control for discrete-time Markovian jump systems with
uncertain transition probabilities. IEEE Trans. Autom. Control 58(6), 1566–1572 (2013)
9. Ning, Z.P., Zhang, L.X., Mesbah, A., Colaneri, P.: Stability analysis and stabilization of discrete-
time non-homogeneous semi-Markovian jump linear systems: a polytopic approach. Automat-
ica 120, 1–9 (2020)
10. Ma, S., Boukas, E.K.: A descriptor system approach to sliding mode control for uncertain
Markovian jump systems. Automatica 45(11), 2707–2713 (2009)
11. Zhao, S.Y., Liu, F., Luan, X.L.: Risk-sensitive filtering for nonlinear Markovian jump systems
on the basis of particle approximation. Int. J. Adapt. Control 26(2), 158–170 (2012)
12. Luan, X.L., Shi, P., Liu, F.: Finite-time stabilization for Markovian jump systems with Gaussian
transition probabilities. IET Control Theor. Appl. 7(2), 298–304 (2013)
13. Cheng, D.Z., Zhang, L.J.: Adaptive control of linear Markovian jump systems. Int. J. Syst. Sci.
37(7), 477–483 (2006)
14. Geromel, J.C., Gabriel, G.W.: Optimal state feedback sampled-data control design of Marko-
vian jump linear systems. Automatica 54, 182–188 (2015)
15. Luan, X.L., Liu, F., Shi, P.: Neural-network-based finite-time H∞ control for extended Marko-
vian jump nonlinear systems. Int. J. Adapt. Control Signal Process 24(7), 554–567 (2010)
16. Luan, X.L., Zhao, S.Y., Shi, P., Liu, F.: H∞ filtering for discrete-time Markovian jump
systems with unknown transition probabilities. Int. J. Adapt. Control Signal Process 28(2),
138–148 (2014)
17. Chow, V., Jiang, W., Li, J.V.: Does VIX truly measure return volatility? SSRN Electron. J.
(2014). https://doi.org/10.2139/ssrn.2489345
18. Wu, H.J., Sun, J.: P-moment stability of stochastic differential equations with impulsive jump
and Markovian switching. Automatica 42(10), 1753–1759 (2006)
19. Liu, L., Shen, Y., Jiang, F.: The almost sure asymptotic stability and p-th moment asymptotic
stability of nonlinear stochastic differential systems with polynomial growth. IEEE Trans.
Autom. Control 56(8), 1985–1990 (2011)
20. Li, X., Zhu, Q., O'Regan, D.: P-th moment exponential stability of impulsive stochastic functional
differential equations and application to control problems of NNs. J. Franklin Inst. 351(9),
4435–4456 (2014)
21. Luan, X.L., Huang, B., Liu, F.: Higher order moment stability region for Markovian jump
systems based on cumulant generating function. Automatica 93, 389–396 (2018)
22. Zhou, Z.H., Luan, X.L., Liu, F.: High-order moment stabilization for Markovian jump systems
with attenuation rate. J. Franklin Inst. 356, 9677–9688 (2019)
23. Wan, H.Y., Luan, X.L., Karimi, H.R., Liu, F.: Higher-order moment filtering for Markovian
jump systems in finite frequency domain. IEEE Trans. Circ. Syst-II 66(7), 1217–1221 (2019)
24. Zhou, Z.H., Luan, X.L., Liu, F.: Finite-frequency fault detection based on derandomisation for
Markovian jump linear system. IET Control Theor. Appl. 12(08), 1148–1155 (2018)
25. Maligranda, L.: The AM-GM inequality is equivalent to the Bernoulli inequality. Math. Intell.
34(1), 1–2 (2012)
Chapter 10
Model Predictive Control for Markovian
Jump Systems in the Finite-Time Domain

Abstract The model predictive control is adopted to optimize the finite-time per-
formance for discrete-time Markovian jump systems and semi-Markovian jump sys-
tems. Our target is to minimize the control inputs in a given time interval while sat-
isfying the required transient performance by means of online rolling optimization.
In this way, the minimum energy consumption can be realized. Furthermore, for the
semi-Markovian jump systems whose transition probability depends on sojourn-time,
the finite-time performance under the model predictive control scheme is analyzed
in the situation that the transition probability at each time depends on the history
information of elapsed switching sequences.

10.1 Introduction

Model predictive control (MPC) has been extensively studied as a powerful tool for
managing industrial processes. Differing from conventional control, where the
control law is pre-computed offline, MPC is a form of control scheme in which the
control action is obtained online [1]. By solving an optimal control problem in which
the initial state is the current state of the processes, a control sequence is yielded at
each sampling instant, and only the first control action is applied to the process [2].
The advantages of the MPC scheme include guaranteed closed-loop stability, opti-
mality, adaptation to changing parameters, and convenient handling of constraints [3].
MPC for Markovian jump systems (MJSs) has also been attracting more and
more attention. MPC can re-compute the optimal control problem with both the
measured state and mode at each sampling time. Therefore, the performance index
using MPC has a significant reduction compared with the state feedback gain or
output feedback gain for each mode. Since Park et al. [4] first used MPC to optimize
control problems of MJSs, scholars have used MPC to solve many problems including
constrained issues [5], exogenous disturbances [6], uncertain transition probabilities
[7], resource saving [8], etc.
In MJSs, the sojourn-time (the interval between two consecutive jumps) of each
subsystem is subject to exponential distribution in the continuous-time domain or
geometric distribution in the discrete-time domain. However, the transition probabilities (TPs) can have memory when describing the mode switching of practical
applications, so semi-MJSs are proposed, in which the TP at each time depends
on the history information of elapsed switching sequences [9]. To deal with the
complexity caused by the generality of the semi-Markovian chain in the capabil-
ity of modeling stochastic switching, some advances have been achieved so far, for
example, taking upper bounds on the sojourn-time [10].
The discussions above are all about Lyapunov asymptotic stability and optimal
performance of systems over the infinite-time domain. When transient performance
is required, MPC can also satisfy the performance requirements and optimization
objectives. As the finite-time performance of systems does not require the asymptotic
convergence of the states, the MPC algorithm can consider the minimum energy
consumption of the control inputs by minimizing control actions. This chapter focuses
on using MPC to satisfy the transient performance in a given time interval for both
discrete-time MJSs and semi-MJSs.
Consider the discrete-time MJS and semi-MJS with the following structure:

x(k + 1) = A(rk )x(k) + Bu (rk )u(k). (10.1)

When rk is a homogeneous Markovian chain, the system (10.1) is called MJS. If rk


is a semi-Markovian chain with transition probabilities dependent on sojourn-time,
the system (10.1) is named a semi-MJS. To introduce the semi-Markovian chain,
three following stochastic processes are first given and shown in Fig. 10.1:
(1) {Rn }n∈N takes values in M = {1, 2, . . . , i, . . . , M}, where Rn is the index of
system mode at the nth jump;
(2) {kn }n∈N takes values in N, where kn denotes the time at the nth jump and k0 = 0;
(3) {Sn }n∈N takes values in N, where Sn = kn − kn−1 , ∀n ∈ N≥1 denotes the sojourn-
time of mode Rn−1 between the (n − 1)th jump and the nth jump, and S0 = 0.
Then, we give the concepts on Markovian renewal chain and semi-Markovian
kernel.

Definition 10.1 The stochastic process $\{(R_n, k_n)\}_{n\in\mathbb{N}}$ is said to be a discrete-time
homogeneous Markovian renewal chain (MRC) if, for any $j\in\mathcal{M}$, $\tau\in\mathbb{N}$, and $n\in\mathbb{N}$,
$\Pr(R_{n+1}=j, S_{n+1}=\tau \mid R_0,\ldots,R_n=i;\,k_0,\ldots,k_n) = \Pr(R_{n+1}=j, S_{n+1}=\tau \mid R_n=i)$.
With Definition 10.1, $\{R_n\}_{n\in\mathbb{N}}$ is called the embedded Markovian chain of the MRC
$\{(R_n, k_n)\}_{n\in\mathbb{N}}$, and the TP matrix $[\theta_{ij}]_{i,j\in\mathcal{M}}$ is defined by $\theta_{ij} \triangleq \Pr(R_{n+1}=j \mid R_n=i)$
with $\theta_{ii} = 0$.

Definition 10.2 The matrix $\Pi(\tau) = [\pi_{ij}(\tau)]_{i,j\in\mathcal{M}}$ is called the discrete-time semi-
Markovian kernel (SMK), with $\pi_{ij}(\tau) \triangleq \Pr(R_{n+1}=j, S_{n+1}=\tau \mid R_n=i)$, $\forall i,j\in\mathcal{M}$,
$\forall\tau\in\mathbb{N}$, $\sum_{\tau=0}^{\infty}\sum_{j\in\mathcal{M}}\pi_{ij}(\tau) = 1$, and $\pi_{ij}(0) = 0$.

Fig. 10.1 Illustration of stochastic processes Rn , kn and Sn (M = 3)

With the above concepts, the definition of the semi-Markovian chain is given as
follows.

Definition 10.3 {rk }k∈N is said to be a semi-Markovian chain associated with MRC
{(Rn , kn )}n∈N , if rk = R N (k) , ∀k ∈ N, where N (k)  max{n ∈ N|k ≥ kn }.
Although the MRC $\{(R_n, k_n)\}_{n\in\mathbb{N}}$ and the semi-Markovian chain $\{r_k\}_{k\in\mathbb{N}}$ are both
used to describe the variation of system modes, the difference is that the
stochastic variable varies with the jump instant $k_n$ in the former, while with the sampling
instant $k$ in the latter. Because the evolution of the semi-Markovian chain is generated
by the SMK $\Pi(\tau)$ related to the sojourn-time $\tau$, the probability density function (PDF)
of the sojourn-time is required. The PDF here depends on both the current and the next
system mode and is defined as $\omega_{ij}(\tau) \triangleq \Pr(S_{n+1}=\tau \mid R_{n+1}=j, R_n=i)$, $\forall i,j\in\mathcal{M}$,
$\forall\tau\in\mathbb{N}$. Therefore,

π_ij(τ) = [Pr(R_{n+1} = j, R_n = i)/Pr(R_n = i)] · [Pr(R_{n+1} = j, S_{n+1} = τ, R_n = i)/Pr(R_{n+1} = j, R_n = i)] = θ_ij ω_ij(τ).   (10.2)
It is worth mentioning that the practical sojourn-time is generally finite. Therefore, in the following text, let T^i_max denote the maximum sojourn-time of the ith mode of the system (10.1).

Remark 10.1 In part of the literature on semi-MJSs, the PDF of the sojourn-time is assumed to depend only on the current system mode. As noted in the remark of [10], a PDF that depends on the mode transition (both the current and the next mode) is more specific and describes the corresponding semi-Markovian chain more accurately than one relying on a single system mode.
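As a numerical illustration of Definitions 10.1–10.3 and Eq. (10.2), the following Python sketch builds a truncated SMK π_ij(τ) = θ_ij ω_ij(τ) and simulates the semi-Markovian chain {r_k} by drawing the next mode and sojourn-time jointly at each jump. The three-mode data are hypothetical: the TP matrix, the truncated geometric sojourn law, and T_MAX = 6 are assumptions for illustration, not values from the book.

```python
import random

M, T_MAX = 3, 6                      # modes, sojourn-time truncation (illustrative)
theta = [[0.0, 0.7, 0.3],            # TP matrix of the embedded chain, theta_ii = 0
         [0.4, 0.0, 0.6],
         [0.5, 0.5, 0.0]]

def omega(i, j, tau):
    # sojourn-time PDF omega_ij(tau): a geometric law renormalized on {1,...,T_MAX}
    p = 0.4
    mass = sum(p * (1 - p) ** (t - 1) for t in range(1, T_MAX + 1))
    return p * (1 - p) ** (tau - 1) / mass

# semi-Markovian kernel pi_ij(tau) = theta_ij * omega_ij(tau), as in Eq. (10.2)
pi = [[[theta[i][j] * omega(i, j, tau) for tau in range(1, T_MAX + 1)]
       for j in range(M)] for i in range(M)]

# each kernel row sums to 1 over (j, tau), as required by Definition 10.2
for i in range(M):
    total = sum(pi[i][j][t] for j in range(M) for t in range(T_MAX))
    assert abs(total - 1.0) < 1e-9

def simulate(r0, horizon, rng):
    """Sample r_k for k = 0..horizon-1 from the semi-Markovian chain."""
    r, k, path = r0, 0, []
    while k < horizon:
        # draw (next mode, sojourn-time) jointly from the kernel row of mode r
        pairs = [(j, t + 1, pi[r][j][t]) for j in range(M) for t in range(T_MAX)]
        u, acc = rng.random(), 0.0
        for j, tau, p in pairs:
            acc += p
            if u <= acc:
                break
        path.extend([r] * min(tau, horizon - k))   # mode r holds for tau steps
        k += tau
        r = j
    return path

path = simulate(0, 20, random.Random(1))
print(len(path), sorted(set(path)))
```

Drawing the pair (R_{n+1}, S_{n+1}) jointly from the kernel row, rather than drawing the mode and the sojourn-time independently, is exactly what distinguishes the mode-jump-dependent PDF ω_ij(τ) of Remark 10.1 from a PDF depending on the current mode only.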

10.2 Stochastic Finite-Time MPC for MJSs

To optimize the finite-time performance of the system (10.1), the cost function of
MPC is considered as:
J(k) = Σ_{f=0}^{T−1} u^T(k+f|k) Q_i u(k+f|k)   (10.3)

where T ∈ N is a given predictive horizon and Q_i denotes a matrix of appropriate dimensions related to the system mode i. It is worth noting that the designed cost function is used to minimize energy consumption. At time k, the control input sequence U(k) = {u(k|k), u(k+1|k), ..., u(k+T−1|k)} is obtained by solving the MPC optimization problem, and the control input applied to the system (10.1) is the first element of the sequence U(k).
Based on this cost function, the MPC controller that guarantees the finite-time performance can be obtained by solving the following optimization problem.

Remark 10.2 At an arbitrary instant k, the corresponding optimal control sequence can be obtained by solving the following optimization problem:

min_{U(k)} max_{r_k,...,r_{k+T−1}} γ

s.t. J(k) < γ   (10.4)

x^T(k+f|k) G_j x(k+f|k) < c_2   (10.5)

(A_i + B_{ui} F)^T P (A_i + B_{ui} F) − P ≤ 0   (10.6)

where U(k) = [u^T(k|k) u^T(k+1|k) ··· u^T(k+T−1|k)]^T, f ∈ N_{[0,T]}, P is the positive-definite symmetric matrix of the defined Lyapunov function, and F is the controller gain to be designed.
It is worth noting that the predictive horizon T is not greater than the finite-time interval N in this chapter. Since inequality (10.5) only ensures that the states within the predictive horizon satisfy the finite-time performance, condition (10.6) is introduced so that the states beyond the predictive horizon but within the finite-time interval also meet the performance requirements.
Accordingly, the following results are provided on the stochastic finite-time stabilization of the discrete-time MJS (10.1).

Theorem 10.1 System (10.1) is stochastic finite-time stabilizable with respect to (c_1, c_2, N, G_i) if x^T(0) G_{r_0} x(0) ≤ c_1 and the following semi-definite programming (SDP) problem is solvable.

min_{U(k)} max_{r_k,...,r_{k+T−1}} γ

s.t. [γ, ∗; U(k), Q_T^{−1}] > 0   (10.7)

[c_2, ∗; H_1(f)x(k) + H_2(f)U(k), X] > 0, f ∈ N_{[0,T]}   (10.8)

[X, ∗; A_i X + B_{ui} Y, X] ≥ 0   (10.9)

where X = P^{−1}, F = Y X^{−1}, Q_T = diag{Q_{r_k}, ..., Q_{r_{k+T−1}}},

H_1(f) = I for f = 0, and H_1(f) = A(r_{k+f−1}) A(r_{k+f−2}) ··· A(r_k) for f ∈ {1, ..., T};

H_2(f) = [0 ··· 0]_{n×mT} for f = 0, and H_2(f) = [A(r_{k+f−1}) ··· A(r_{k+1}) B_u(r_k), ..., A(r_{k+f−1}) B_u(r_{k+f−2}), B_u(r_{k+f−1}), 0, ..., 0]_{n×mT} for f ∈ N_{[1,T]}.

Proof The foregoing SDP can be obtained from Remark 10.2 through the Schur complement lemma. With U(k) = [u^T(k|k) u^T(k+1|k) ··· u^T(k+T−1|k)]^T, inequality (10.4) can be written as γ − U^T(k) Q_T U(k) > 0. Then, by the Schur complement lemma, this inequality can be expressed as inequality (10.7).
Due to the fact that x(k+f|k) = H_1(f)x(k) + H_2(f)U(k), f ∈ N_{[0,T]}, inequality (10.5) can be transformed into inequality (10.8) by the Schur complement lemma.
Define the Lyapunov function V(x(k)) ≜ x^T(k) P x(k). A feedback controller u(k+f|k) = F x(k+f|k), f ∈ N_{≥T}, satisfying the following condition is considered for the part of the finite-time interval beyond the predictive horizon:

x^T(k+f+1|k) P x(k+f+1|k) ≤ x^T(k+f|k) P x(k+f|k), f ∈ N_{[T,N−k−1]}.



This expression is equivalent to inequality (10.6). Then, pre- and post-multiplying inequality (10.6) by X = P^{−1}, it is equivalent to

(A_i X + B_{ui} Y)^T X^{−1} (A_i X + B_{ui} Y) − X ≤ 0,
and condition (10.9) is obtained through the Schur complement lemma. Therefore, the SDP is equivalent to the optimization problem in Remark 10.2, which is solvable. Based on expressions (10.8) and (10.9), the inequality x^T(k+f|k) G_i x(k+f|k) < c_2 holds for f ∈ N_{[0,N−k]}. Therefore, if the initial state satisfies x^T(0) G_{r_0} x(0) ≤ c_1, the system (10.1) is stochastic finite-time stable with respect to (c_1, c_2, N, G_i), and the proof is completed. □

Remark 10.3 Inequality (10.9) is used to ensure that control inputs can be found in
the interval after the predictive horizon. When the feedback gain F cannot be found,
the predictive horizon is considered as T = N and inequality (10.9) is not required.
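The prediction matrices H_1(f) and H_2(f) of Theorem 10.1 can be assembled directly from the mode sequence. The following numpy sketch (hypothetical two-mode data and an assumed mode sequence, not the book's example) stacks them and checks the identity x(k+f|k) = H_1(f)x(k) + H_2(f)U(k) against a step-by-step simulation:

```python
import numpy as np

# hypothetical two-mode data: A[r], Bu[r] for mode r, and a fixed mode sequence
A  = [np.array([[0.9, 0.1], [0.0, 0.8]]), np.array([[1.1, 0.2], [0.0, 1.0]])]
Bu = [np.array([[0.0], [1.0]]),           np.array([[0.5], [0.5]])]
modes = [0, 1, 1, 0, 1]                   # r_k, ..., r_{k+T-1}, so T = 5
T, n, m = len(modes), 2, 1

def H1(f):
    # product A(r_{k+f-1}) ... A(r_k); identity for f = 0
    P = np.eye(n)
    for g in range(f):
        P = A[modes[g]] @ P
    return P

def H2(f):
    # block row [A(r_{k+f-1})...A(r_{k+1})Bu(r_k), ..., Bu(r_{k+f-1}), 0, ..., 0]
    blocks = []
    for g in range(T):
        if g < f:
            P = np.eye(n)
            for h in range(g + 1, f):
                P = A[modes[h]] @ P
            blocks.append(P @ Bu[modes[g]])
        else:
            blocks.append(np.zeros((n, m)))
    return np.hstack(blocks)              # shape n x mT

x0 = np.array([1.0, -1.0])
U  = np.array([0.3, -0.2, 0.1, 0.0, 0.4])   # stacked inputs u(k|k), ..., u(k+T-1|k)

# step-by-step simulation of the predicted states x(k+f|k)
x, traj = x0.copy(), [x0.copy()]
for f in range(T):
    x = A[modes[f]] @ x + Bu[modes[f]] @ U[f:f+1]
    traj.append(x.copy())

# the stacked prediction must reproduce the simulated states
for f in range(T + 1):
    assert np.allclose(H1(f) @ x0 + H2(f) @ U, traj[f])
print("prediction matrices verified for f = 0..", T)
```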

10.3 Stochastic Finite-Time MPC for Semi-MJSs

In this section, an MPC method is designed to guarantee the finite-time performance of semi-MJSs. A result on the stochastic finite-time stabilization of semi-MJSs is first given below for later use.
Proposition 10.1 Consider the discrete-time semi-MJS (10.1), and denote the jump instants by k_0, k_1, ..., k_s, ... with k_0 = 0. The system is stochastic finite-time stable if there exist a set of C^1 functions V(x(k), r_k): R^n → R, two class K_∞ functions α_1, α_2, and β ≥ 1 such that, for any initial condition x^T(0) G_{r_0} x(0) ≤ c_1, r_0 ∈ M, and given parameters (c_1, c_2, N, G_i),

α_1(‖x(k)‖) ≤ V(x(k), r_{k_s}) ≤ α_2(‖x(k)‖)   (10.10)

V(x(k+1), r_{k_s}) ≤ β V(x(k), r_{k_s})   (10.11)

E{V(x(k_{s+1}), r_{k_{s+1}})} ≤ E{β^{τ_s} V(x(k_s), r_{k_s})}   (10.12)

β^N c_1 < c_2,   (10.13)

where τ_s = k_{s+1} − k_s denotes the sojourn-time between consecutive jumps.

Proof Construct the Lyapunov function as

V_i(x(k)) ≜ V(x(k), R_n)|_{R_n=i} = x^T(k) P_i x(k),

where V_i(x(k)) satisfies inequalities (10.10), (10.11), and (10.12). At any instant k ∈ N_{[0,N]}, the following condition is ensured by inequality (10.11):

E{V(x(k), r_{k_s})} ≤ E{β^{k−k_s} V(x(k_s), r_{k_s})}.   (10.14)

Then, combining conditions (10.12) and (10.14), we have

E{V(x(k), r_{k_s})} ≤ E{β^{k−k_s+τ_{s−1}} V(x(k_{s−1}), r_{k_{s−1}})} ≤ ··· ≤ E{β^{k−k_s+τ_{s−1}+···+τ_0} V(x(0), r_0)}.   (10.15)

Because β ≥ 1 and k − k_s + τ_{s−1} + ··· + τ_0 ≤ N, we have

E{V(x(k), r_{k_s})} < β^N c_1 λ_max(P_i)/λ_max(G_i)   (10.16)

where λ_max(P_i) and λ_max(G_i) denote the maximal eigenvalues of P_i and G_i. Combining (10.13) and (10.16), the following condition holds:

E{V(x(k), r_{k_s})} < c_2 λ_max(P_i)/λ_max(G_i).   (10.17)

On the other hand, at any instant k ∈ N_{[1,N]}, condition (10.17) implies

x^T(k) G_i x(k) < λ_max(G_i) x^T(k) x(k) < (λ_max(G_i)/λ_max(P_i)) x^T(k) P_i x(k) < c_2.

Then, the system is stochastic finite-time stable. The proof is completed. □

Next, the following theorem gives a criterion of stochastic finite-time stability for the free semi-MJS.

Theorem 10.2 Consider the semi-MJS (10.1) with u(k) ≡ 0 and given parameters (c_1, c_2, N, G_i). If, ∀i ∈ M, there exist T^i_max ∈ N_{≥1}, β > 1, and matrices P_i > 0 such that

A_i^T P_i A_i − β P_i < 0   (10.18)

Σ_{τ=1}^{T^i_max} [(A_i^τ)^T P_i(τ) A_i^τ − β^τ P_i] < 0   (10.19)

β^N c_1 < c_2   (10.20)

where P_i(τ) ≜ Σ_{j∈M} π_ij(τ) P_j / η_i with η_i ≜ Σ_{τ=1}^{T^i_max} Σ_{j∈M} π_ij(τ), and T^i_max denotes the upper bound of the sojourn-time of the ith mode of the system (10.1), then the system (10.1) is stochastic finite-time stable.

Proof Define the following Lyapunov function:

V_i(x(k)) ≜ V(x(k), R_n)|_{R_n=i} = x^T(k) P_i x(k),

where P_i satisfies conditions (10.18) and (10.19). First, it is straightforward that

inf_{i∈M}{λ_min(P_i)} ‖x(k)‖² ≤ V_i(x(k)) ≤ sup_{i∈M}{λ_max(P_i)} ‖x(k)‖²   (10.21)

where λ_min(P_i) and λ_max(P_i) denote the minimal and maximal eigenvalues of P_i. For the case R_n = i, condition (10.18) ensures that, ∀k ∈ N_{[k_s, k_s+T^i_max−1]},

V(x(k+1), r_{k_s}) − β V(x(k), r_{k_s}) = x^T(k_s) (A_i^{k−k_s})^T (A_i^T P_i A_i − β P_i) A_i^{k−k_s} x(k_s) < 0.   (10.22)

Then, for R_n = i, R_{n+1} = j, and sojourn-time τ = k_{n+1} − k_n, the following condition is ensured by condition (10.19):

E{V_j(x(k_{n+1}))} − E{β^τ V_i(x(k_n))}
= x^T(k_n) [ Σ_{τ=1}^{T^i_max} ((A_i^τ)^T (Σ_{j∈M} π_ij(τ) P_j) A_i^τ / η_i − β^τ P_i) ] x(k_n)
= x^T(k_n) [ Σ_{τ=1}^{T^i_max} ((A_i^τ)^T P_i(τ) A_i^τ − β^τ P_i) ] x(k_n) < 0.   (10.23)

By conditions (10.20), (10.21), (10.22), and (10.23), it follows that the free semi-MJS
(10.1) is stochastic finite-time stable. 
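For a fixed candidate P_i and scalar β > 1, the matrix conditions of Theorem 10.2 reduce to eigenvalue tests. A minimal numpy sketch checking (10.18) and (10.20) is given below; the system matrix, the candidate P_1, and the scalars are hypothetical values for illustration, not a full LMI search:

```python
import numpy as np

beta, N, c1, c2 = 1.05, 10, 1.0, 25.0
A1 = np.array([[0.8, 0.1], [0.0, 0.9]])   # hypothetical Schur-stable mode
P1 = np.eye(2)                            # candidate Lyapunov matrix for mode 1

# condition (10.18): A_i^T P_i A_i - beta P_i < 0 (negative definite)
M = A1.T @ P1 @ A1 - beta * P1
assert np.max(np.linalg.eigvalsh(M)) < 0  # eigvalsh: M is symmetric

# condition (10.20): beta^N c1 < c2
assert beta ** N * c1 < c2
print("conditions (10.18) and (10.20) hold for the candidate P_1")
```

In practice the matrices P_i would be decision variables of an LMI feasibility problem; the sketch only shows how a returned candidate can be validated a posteriori.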

To optimize the finite-time performance of the system (10.1), the cost function of
MPC is considered as:
J(k) = Σ_{f=k}^{N−1} u^T(f|k) Q_i u(f|k),

where Q_i denotes a matrix of appropriate dimensions related to the system mode i. Different from Sect. 10.2, the cost function here considers the control inputs from the current instant k to the end of the given finite-time interval. This is because the model predictive controller in this section takes the form:

u(k) = Fi x(k) (10.24)

where Fi is the dynamic feedback gain solved at time k.
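The online use of the mode-dependent gain in (10.24) can be sketched as a receding-horizon loop. The code below is purely schematic: `solve_gain` is a hypothetical stand-in for the per-instant optimization (a real implementation would solve an SDP), the scalar modes and gains are assumptions, and the mode jumps are sampled uniformly instead of from a semi-Markovian chain:

```python
import random

A, Bu = {0: 1.10, 1: 0.95}, {0: 1.0, 1: 1.0}   # hypothetical scalar modes

def solve_gain(mode, k):
    """Stand-in for the per-instant optimization; returns a fixed stabilizing gain."""
    return -0.6 if mode == 0 else -0.3          # closed-loop factors 0.5 and 0.65

rng = random.Random(0)
x, N = 1.0, 10
trace, mode = [x], 0
for k in range(N):
    F = solve_gain(mode, k)        # re-solved at every instant (receding horizon)
    u = F * x                      # controller (10.24): u(k) = F_i x(k)
    x = A[mode] * x + Bu[mode] * u
    mode = rng.choice([0, 1])      # mode jump (placeholder for the semi-Markov chain)
    trace.append(x)

assert all(abs(v) <= 1.0 for v in trace)   # state never leaves the initial level here
print(trace[-1])
```

The point of the sketch is the structure — recomputing the gain at every k and applying only the current input — not the particular numbers, which were chosen so that both closed-loop factors are contractive.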



Considering the cost function and Theorem 10.2, the proposed MPC algorithm
solves the following optimization problem at each time instant.

Remark 10.4 At any time instant k ∈ N[0,N ] , the feedback gain Fi can be obtained
by solving

min_{F_i, β, P_i} γ

s.t. J(k) < γ   (10.25)

V(x(k+1), r_{k_s}) ≤ β V(x(k), r_{k_s})   (10.26)

E{V(x(k_{s+1}), r_{k_{s+1}})} ≤ E{β^{τ_s} V(x(k_s), r_{k_s})}   (10.27)

β^N (λ_max(P_i)/λ_min(P_i)) c_1 < c_2.   (10.28)
 −1 T
This optimization problem can guarantee J (k) = Nf =k u ( f |k)Q i u( f |k) < γ
and the stochastic finite-time stabilization of the discrete-time semi-MJS (10.1).
Conditions (10.26)–(10.28) can be obtained by referring to the proof in [11]. Because
MPC algorithm calculates the feedback gain at each instant, condition (10.28) can be
modified to obtain a more accurate value of β. To solve this optimization problem,
there is a significant difficulty in deriving tractable conditions of controller design due
to the existence of the power of AiT . Accordingly, the following results are provided
on the stochastic finite-time stabilization of the discrete-time semi-MJS (10.1).

Theorem 10.3 Consider the discrete-time semi-MJS (10.1) with given parameters (c_1, c_2, N, G_i). If, ∀i ∈ M, there exist T^i_max ∈ N_{≥1}, scalars λ_1, λ_2, and β > 1, and a set of matrices H_i, H̃_i(t), ∀t ∈ N_{[0,T^i_max]}, Z_i, and U_i such that, ∀t ∈ N_{[0,T^i_max−1]}, the following SDP is solvable:

min_{λ_1, λ_2, Z_i, U_i, H_i, H̃_i(t)} γ

s.t. [β^{−(N−k)}(Z_i + Z_i^T − H_i), x(k); ∗, γ] > 0   (10.29)

[Z_i + Z_i^T − H_i, 0, A_i Z_i + B_{ui} U_i; ∗, Q_i^{−1}, U_i; ∗, ∗, β H_i] > 0   (10.30)

[Z_i + Z_i^T − H̃_i(t+1), 0, 0, (A_i Z_i + B_{ui} U_i) L_i(t+1); ∗, Z̃ + Z̃^T − H̃, 0, (Ã_i Z̃_i + B̃_{ui} Ũ_i) L̃_i(t+1); ∗, ∗, Q_i^{−1}, U_i; ∗, ∗, ∗, β H̃_i(t)] > 0   (10.31)

β^{T^i_max} H_i − H̃_i(0) > 0   (10.32)

[λ_1^{−1} G_i^{−1}, Z_i; ∗, H_i] > 0   (10.33)

Z_i + Z_i^T − H_i − λ_2^{−1} G_i^{−1} > 0   (10.34)

λ_2^{−1} c_2 − β^{N−k} λ_1^{−1} x^T(k) G_i x(k) > 0   (10.35)

where Ã_i ≜ diag_{(M)}{A_i}, B̃_{ui} ≜ diag_{(M)}{B_{ui}}, Z̃_i ≜ diag_{(M)}{Z_i}, Ũ_i ≜ diag_{(M)}{U_i}, H̃ ≜ diag{H_1, H_2, ..., H_M}, Z̃ ≜ diag{Z_1, Z_2, ..., Z_M}, L_i(t) = I, ∀t ∈ N_{[1,T^i_max−1]}, with L_i(T^i_max) = 0, and L̃_i(t) ≜ [ℓ_{i1}(t)I, ℓ_{i2}(t)I, ..., ℓ_{iM}(t)I] with ℓ_{ij}(t) ≜ π_ij(t)/η_i and η_i ≜ Σ_{τ=1}^{T^i_max} Σ_{j∈M} π_ij(τ), then a mode-dependent controller of the form (10.24) can be obtained to guarantee the stochastic finite-time stabilization of the resulting closed-loop system. In addition, the admissible controller gain is given by F_i = U_i Z_i^{−1}.

Proof Define the Lyapunov function

V_i(x(k)) ≜ V(x(k), R_n)|_{R_n=i} = x^T(k) P_i x(k).   (10.36)

Then, consider the upper bound of the cost function in terms of the Lyapunov function:

Σ_{f=k}^{N−1} u^T(f) Q_i u(f) < x^T(k) P_i x(k).   (10.37)

Inequality (10.37) is obtained by summing the following inequality from f = k to f = N − 1, together with E{x^T(N) P_{r_N} x(N)} > 0:

E{x^T(f+1) P_{r_{f+1}} x(f+1)} − E{x^T(f) P_{r_f} x(f)} < −u^T(f) Q_{r_f} u(f).   (10.38)

By combining conditions (10.38), (10.18), and (10.19), we have

E{x^T(f+1) P_{r_{f+1}} x(f+1)} − β E{x^T(f) P_{r_f} x(f)} < −u^T(f) Q_{r_f} u(f).   (10.39)

Then, condition (10.37) is transformed into

Σ_{f=k}^{N−1} u^T(f) Q_i u(f) < β^{N−k} x^T(k) P_i x(k).   (10.40)

Hence, condition (10.25) can be guaranteed by

β^{N−k} x^T(k) P_i x(k) < γ.   (10.41)

By the Schur complement lemma, inequality (10.41) can be converted to

[β^{−(N−k)} P_i^{−1}, x(k); ∗, γ] > 0.   (10.42)

Applying the congruence transformation diag{V_i, I} to condition (10.42) and noting that (P_i − V_i)^T P_i^{−1} (P_i − V_i) > 0 ensures V_i^T P_i^{−1} V_i > V_i + V_i^T − P_i, it gives

[β^{−(N−k)}(V_i + V_i^T − P_i), V_i^T x(k); ∗, γ] > 0.   (10.43)

After that, performing a congruence transformation on condition (10.43) by diag{V_i^{−1}, I}, we can obtain condition (10.29) with H_i = (V_i^{−1})^T P_i V_i^{−1} and Z_i = V_i^{−1}.
Condition (10.26) is satisfied when condition (10.39) holds, and it is ensured by

β P_i − (A_i + B_{ui} F_i)^T P_i (A_i + B_{ui} F_i) − F_i^T Q_i F_i > 0.   (10.44)

By the Schur complement lemma, inequality (10.44) is converted to

[P_i^{−1}, 0, A_i + B_{ui} F_i; ∗, Q_i^{−1}, F_i; ∗, ∗, β P_i] > 0.   (10.45)

Performing a congruence transformation on condition (10.45) by diag{V_i, V_i, I} and using V_i^T P_i^{−1} V_i > V_i + V_i^T − P_i, we have

[V_i + V_i^T − P_i, 0, V_i^T(A_i + B_{ui} F_i); ∗, V_i^T Q_i^{−1} V_i, V_i^T F_i; ∗, ∗, β P_i] > 0.   (10.46)

Next, implementing a congruence transformation on condition (10.46) by diag{V_i^{−1}, V_i^{−1}, V_i^{−1}}, condition (10.30) can be obtained.
As a result of inequality (10.39), condition (10.27) can be satisfied by the following inequality:


β^{T^i_max} P_i − Σ_{τ=1}^{T^i_max} Σ_{j∈M} (π_ij(τ)/η_i) [ ((A_i + B_{ui}F_i)^τ)^T P_j (A_i + B_{ui}F_i)^τ + ((A_i + B_{ui}F_i)^{τ−1})^T F_i^T Q_i F_i (A_i + B_{ui}F_i)^{τ−1} ] > 0   (10.47)

where η_i = Σ_{τ=1}^{T^i_max} Σ_{j∈M} π_ij(τ).

To circumvent the difficulty caused by the powers of A_i + B_{ui}F_i, we define a set of matrices O_i(τ, t), ∀τ ∈ N_{[1,T^i_max]}, ∀t ∈ N_{[0,τ−1]}, which satisfy

Σ_{τ=t+1}^{T^i_max} [ β O_i(τ, t) − (A_i + B_{ui}F_i)^T O_i(τ, t+1)(A_i + B_{ui}F_i) − F_i^T Q_i F_i ] > 0   (10.48)

Σ_{τ=1}^{T^i_max} O_i(τ, 0) − β^{T^i_max} P_i < 0   (10.49)

where O_i(l, l) ≜ Σ_{j∈M} π_ij(l) P_j / η_i.
Thus, we have

Σ_{t=0}^{T^i_max−1} ((A_i + B_{ui}F_i)^t)^T { Σ_{τ=t+1}^{T^i_max} [ β O_i(τ, t) − (A_i + B_{ui}F_i)^T O_i(τ, t+1)(A_i + B_{ui}F_i) − F_i^T Q_i F_i ] } (A_i + B_{ui}F_i)^t > 0   (10.50)

which is equivalent to

Σ_{τ=1}^{T^i_max} Σ_{t=0}^{τ−1} ((A_i + B_{ui}F_i)^t)^T [ β O_i(τ, t) − (A_i + B_{ui}F_i)^T O_i(τ, t+1)(A_i + B_{ui}F_i) − F_i^T Q_i F_i ] (A_i + B_{ui}F_i)^t > 0   (10.51)

and implies

Σ_{τ=1}^{T^i_max} [ β^τ O_i(τ, 0) − ((A_i + B_{ui}F_i)^τ)^T O_i(τ, τ)(A_i + B_{ui}F_i)^τ − ((A_i + B_{ui}F_i)^{τ−1})^T F_i^T Q_i F_i (A_i + B_{ui}F_i)^{τ−1} ] > 0.   (10.52)

Combining (10.49) and (10.52) and letting O_i(τ, τ) = Σ_{j∈M} π_ij(τ) P_j / η_i, condition (10.47) is derived.
10.4 Simulation Analysis 195

Letting Õ_i(l) ≜ Σ_{τ=l+1}^{T^i_max} O_i(τ, l), ∀l ∈ N_{[0,T^i_max−1]}, and Õ_i(T^i_max) ≜ 0, we can rewrite conditions (10.48) and (10.49) as

β Õ_i(t) − (A_i + B_{ui}F_i)^T Õ_i(t+1)(A_i + B_{ui}F_i) − (A_i + B_{ui}F_i)^T O_i(t+1, t+1)(A_i + B_{ui}F_i) − F_i^T Q_i F_i > 0   (10.53)

Õ_i(0) − β^{T^i_max} P_i < 0.   (10.54)

Applying the same technique as that used to obtain condition (10.29) from (10.41), inequalities (10.31) and (10.32) can be obtained from inequalities (10.53) and (10.54) with H̃ = (Ṽ^{−1})^T Õ Ṽ^{−1} and H̃_i(t) = (V_i^{−1})^T Õ_i(t) V_i^{−1}, where Ṽ ≜ diag{V_1, V_2, ..., V_M}.
To satisfy condition (10.28), the following condition is given first:

λ_1 G_i < P_i < λ_2 G_i.   (10.55)

By the Schur complement lemma and the same technique used from conditions (10.45) to (10.46), inequalities (10.33) and (10.34) are satisfied under condition (10.55).
As mentioned in Remark 10.4, β^N c_1 is replaced by β^{N−k} x^T(k) G_i x(k) at each instant k to obtain a more accurate value of β. Hence, inequality (10.35) is given to satisfy condition (10.28). This completes the proof. □
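As a structural sanity check on condition (10.30), one can assemble the block matrix for a candidate solution and test positive definiteness numerically. The sketch below uses a hypothetical scalar instance with β fixed; the candidate values Z, U, H are assumptions, and a real design would search over them with an SDP solver (e.g., CVXPY or YALMIP):

```python
import numpy as np

# hypothetical scalar instance: x(k+1) = a x(k) + b u(k), mode index dropped
a, b, Q, beta = 0.5, 1.0, 1.0, 1.2
Z, U, H = 1.0, -0.2, 1.0            # candidate decision variables; F = U Z^{-1}

# 3x3 block of condition (10.30), written out for scalars
M = np.array([
    [Z + Z - H,     0.0,     a * Z + b * U],
    [0.0,           1.0 / Q, U            ],
    [a * Z + b * U, U,       beta * H     ],
])
eigs = np.linalg.eigvalsh(M)
assert np.min(eigs) > 0             # (10.30) holds: M is positive definite

F = U / Z                           # recovered gain, as in F_i = U_i Z_i^{-1}
assert abs(a + b * F) < 1           # the resulting closed loop is Schur stable
print("F =", F, "min eig =", round(float(np.min(eigs)), 3))
```

The same pattern — substitute a candidate, form the block matrix, test the smallest eigenvalue — extends to the larger block conditions (10.31)–(10.34), with matrix-valued blocks in place of scalars.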

10.4 Simulation Analysis

In this section, two examples are given to show the effectiveness and practical applicability of the developed theoretical results. The first numerical example shows the transient performance of a discrete-time MJS in a given time interval under MPC. The second example focuses on discrete-time semi-MJSs with different probability density functions of the sojourn-time in different modes.

Example 10.1 Consider the system (10.1) with two operation modes and the following parameters:

A_1 = [0.8, 0.28; 0, 0.9],  B_{u1} = [0.02; 0.16],

A_2 = [1.2, 0.25; 0, 1.12],  B_{u2} = [0.032; 0.28].

Moreover, the transition probability matrix is

[0.67, 0.33; 0.3, 0.7].

For this problem, the other parameters are set to the predictive horizon T = 5, c_1 = 5, c_2 = 12.3, N = 20, G_1 = G_2 = [1, 0; 0, 1], and the initial state and mode are set to x(0) = [4; 3] and r_0 = 2, respectively. Theorem 10.1 is applied to obtain the simulation results.

By Theorem 10.1, the designed predictive controller can be obtained at each instant, and Fig. 10.2 is drawn. Combining Figs. 10.2 and 10.3, the states of the closed-loop system remain within a bounded region during the given time interval. Because the parameters G_1 and G_2 are given as the identity matrix, the state region corresponding to the finite-time performance requirement can be represented by a circle, as shown in Fig. 10.4a, b.
In Fig. 10.4a, the state trajectory does not converge to the origin. The reason is that convergence to the origin often requires greater control action, especially when there are disturbances in the system. The objective of minimizing energy consumption causes the state to move along such a trajectory.
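The contrast between the closed-loop and open-loop behavior can be traced to the mode dynamics of Example 10.1: a quick numpy check shows that mode 1 is Schur stable while mode 2 is not, so without control the quadratic level x^T G x grows whenever mode 2 persists (the six-step dwell in mode 2 below is an assumed scenario, not the sampled mode path of the figures):

```python
import numpy as np

# mode matrices of Example 10.1
A1 = np.array([[0.8, 0.28], [0.0, 0.9]])
A2 = np.array([[1.2, 0.25], [0.0, 1.12]])

# spectral radii: mode 1 is Schur stable, mode 2 is not
rho1 = max(abs(np.linalg.eigvals(A1)))
rho2 = max(abs(np.linalg.eigvals(A2)))
assert rho1 < 1 < rho2

# open-loop growth of x^T x (G = I) from x(0) = [4, 3]^T under a mode-2 dwell
x = np.array([4.0, 3.0])
v0 = x @ x
for _ in range(6):                  # six consecutive open-loop steps in mode 2
    x = A2 @ x
assert x @ x > 4 * v0               # the quadratic level grows without control
print(round(float(x @ x), 1))
```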

Example 10.2 Consider the discrete-time semi-MJS (10.1) with three operation
modes and the following parameters:

Fig. 10.2 States of the closed-loop system (based on Theorem 10.1)



Fig. 10.3 States of the open-loop system

A_1 = [−0.36, 0.69; −1.81, 1.97],  B_{u1} = [−0.1; 0.1],

A_2 = [0.64, 0.62; −0.37, 1.36],  B_{u2} = [0.1; −0.1],

A_3 = [0.64, 0.62; −0.37, 1.36],  B_{u3} = [0; 0].

The jumping among the system modes is governed by a semi-Markovian chain, where the SMK is computed by Eq. (10.2) with the TP matrix

[0, 0.7, 0.3; 0.4, 0, 0.6; 0.5, 0.5, 0]

and the sojourn-time PDFs

[ω_ij(τ)]_{∀i,j∈M} = [0, 0.6^τ 0.4^{10−τ} 10!/((10−τ)!τ!), 0.4^τ 0.6^{10−τ} 10!/((10−τ)!τ!); 0.9^{τ−1} − 0.9^τ, 0, 0.5^{10} 10!/((10−τ)!τ!); 0.4^{(τ−1)^{1.3}} − 0.4^{τ^{1.3}}, 0.3^{(τ−1)^{0.8}} − 0.3^{τ^{0.8}}, 0].

Fig. 10.4 a, b State trajectories of the closed-loop and open-loop system in the (x_1, x_2) plane, with the bounds c_1 and c_2



Fig. 10.5 States of the closed-loop system (based on Theorem 10.3)

It should be noted that the PDFs of the sojourn-time are Bernoulli-type and Weibull-type distributions with different parameters: mode 1 corresponds to the Bernoulli-type distribution, mode 3 corresponds to the Weibull distribution, and mode 2 involves both types of distributions.
It can be checked by Theorem 10.2 that the open-loop system is not stochastic finite-time stable. Then, the model predictive controller can be designed at each instant by Theorem 10.3 such that the resulting closed-loop system (10.1) satisfies the required stochastic finite-time stability performance.
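The sojourn-time laws of Example 10.2 can be tabulated directly. The sketch below assumes binomial ("Bernoulli-type"), geometric (discrete Weibull with shape k = 1), and discrete Weibull forms for representative entries of [ω_ij(τ)]; the exact pairing of distributions to mode pairs is an interpretation of the example data:

```python
from math import comb

def binom(p, tau, n=10):
    # binomial ("Bernoulli-type") sojourn law: C(n, tau) p^tau (1 - p)^(n - tau)
    return comb(n, tau) * p ** tau * (1 - p) ** (n - tau)

def dweibull(q, k, tau):
    # discrete Weibull sojourn law: Pr(S = tau) = q^((tau - 1)^k) - q^(tau^k)
    return q ** ((tau - 1) ** k) - q ** (tau ** k)

taus = range(1, 11)
w12 = [binom(0.6, t) for t in taus]           # binomial entry (mode 1 row)
w21 = [dweibull(0.9, 1.0, t) for t in taus]   # k = 1 reduces to a geometric law
w31 = [dweibull(0.4, 1.3, t) for t in taus]   # Weibull entry (mode 3 row)

# the discrete Weibull mass telescopes: sum over tau = 1..T equals 1 - q^(T^k)
assert abs(sum(w21) - (1 - 0.9 ** 10)) < 1e-9
assert abs(sum(w31) - (1 - 0.4 ** (10 ** 1.3))) < 1e-9
# the binomial mass on {1,...,10} misses only the tau = 0 term
assert abs(sum(w12) - (1 - 0.4 ** 10)) < 1e-9
print(round(sum(w12), 4), round(sum(w21), 4))
```

Such a table of ω_ij(τ) values, combined with the TP matrix through Eq. (10.2), is all that is needed to evaluate the kernel quantities π_ij(τ) and η_i appearing in Theorems 10.2 and 10.3.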
10
By setting parameters c1 = 1, c2 = 25, N = 10, G 1 = G 2 = G 3 = ,T1 =
0 1 max
 
0.7
Tmax = Tmax = 4, and giving the initial conditions x(0) =
2 3
, Figs. 10.5 and 10.6
0.7
show the state responses of the open-loop and closed-loop of semi-MJS.

In Figs. 10.5 and 10.6, it is clear that the states of the closed-loop system satisfy the finite-time performance. To show this more clearly, the state trajectories together with the finite-time performance region are shown in Fig. 10.7.

Fig. 10.6 States of the open-loop system

Remark 10.5 It is worth mentioning that the parameter c_2 is chosen as small as possible in both examples. Theoretically, the controller design approach adopted in Example 10.1 can achieve a smaller c_2. The reason is that solving for the control sequence can directly impose the finite-time performance conditions on the predicted states, whereas solving for the state feedback gain (the controller design method used in Example 10.2) requires a transformation of these conditions.

10.5 Conclusion

In this chapter, MPC is adopted to solve the finite-time performance optimization problem for discrete-time MJSs. In addition, considering the different distributions of the mode sojourn-time, the proposed theory is extended to discrete-time semi-MJSs. The MPC scheme takes the minimum energy consumption into account while ensuring the transient performance, and thus has practical application prospects.

Fig. 10.7 a, b State trajectories of the closed-loop and open-loop system in the (x_1, x_2) plane, with the bounds c_1 and c_2



References

1. Kothare, M.V., Balakrishnan, V., Morari, M.: Robust constrained model predictive control using
linear matrix inequalities. Automatica 32(10), 1361–1379 (1996)
2. Mayne, D.Q., Rawlings, J.B., Rao, C.V., Scokaert, P.O.: Constrained model predictive control:
stability and optimality. Automatica 36(6), 789–814 (2000)
3. Kouvaritakis, B., Cannon, M.: Model Predictive Control. Springer International Publishing,
Switzerland (2016)
4. Park, B.G., Lee, J.W., Kwon, W.H.: Receding horizon control for linear discrete systems with
jump parameters. In: Proceedings of the 36th IEEE Conference on Decision and Control, San
Diego, CA, USA, vol. 4, pp. 3956–3957 (1997)
5. Patrinos, P., Sopasakis, P., Sarimveis, H., Bemporad, A.: Stochastic model predictive control
for constrained discrete-time Markovian switching systems. Automatica 50(10), 2504–2514
(2014)
6. Lu, J., Xi, Y., Li, D., Gan, Z.: Model predictive control synthesis for constrained Markovian
jump linear systems with bounded disturbance. IET Control Theor. Appl. 11(18), 3288–3296
(2017)
7. Zhang, Y., Lim, C.C., Liu, F.: Robust mixed H2 /H∞ model predictive control for Markovian
jump systems with partially uncertain transition probabilities. J. Franklin Inst. 355(8), 3423–
3437 (2018)
8. He, P., Wen, J.W., Luan, X.L., Liu, F.: Finite-time self-triggered model predictive control of
discrete-time Markovian jump linear systems. Int. J. Robust Nonlinear Control 31(13), 6166–
6178 (2021)
9. Zhang, L., Yang, T., Colaneri, P.: Stability and stabilization of semi-Markovian jump linear
systems with exponentially modulated periodic distributions of sojourn time. IEEE Trans.
Autom. Control 62(6), 2870–2885 (2016)
10. Zhang, L., Leng, Y., Colaneri, P.: Stability and stabilization of discrete-time semi-Markovian
jump linear systems via semi-Markovian kernel approach. IEEE Trans. Autom. Control 61(2),
503–508 (2015)
11. Amato, F., Ariola, M.: Finite-time control of discrete-time linear system. IEEE Trans. Autom.
Control 50(5), 724–729 (2005)
Chapter 11
Conclusion

Abstract This chapter summarizes the book and suggests some possible research
directions related to the work of the book.
Transient behavior over a given time interval for discrete-time MJSs has been studied in this book to develop less conservative analysis and design methodologies for control engineering practice. The tools provided in the book can be applied to ecological systems, economic systems, power systems, and engineering designs under environmental disturbances. Furthermore, the book offers many methods and algorithms for solving the finite-time stability and finite-time stabilization problems of discrete-time MJSs, with simulation examples that illustrate the design procedures and confirm the results of the proposed methods.
Firstly, we address the finite-time stability and finite-time stabilization of different kinds of discrete-time MJSs. For the simplest discrete-time linear MJSs, Chap. 2 designs a less conservative finite-time controller by relaxing the strictly decreasing requirement on the system energy function; combining neural networks with robust control, the finite-time performance analysis and synthesis are then extended to discrete-time nonlinear MJSs by this intelligent method. Furthermore, for a more general class of hybrid systems, considering the influence of the transition probabilities (TPs) of the random jumps and of the average dwell time on the system performance, Chap. 3 studies the finite-time stability and finite-time stabilization of discrete-time switching MJSs under complex conditions, centering on factors such as time-delay and unavailable states. Then, for discrete-time non-homogeneous MJSs, Chap. 4 utilizes the Gaussian probability density function (PDF) to describe the random distribution characteristics of the TPs. Using the mean and variance information of the Gaussian PDF, the expected value of the TPs is obtained, and the finite-time stability and finite-time stabilization are investigated based on this expected value.
Then, combined with other control strategies, such as sliding mode control, passive control, and consensus control, Chaps. 5–7 present finite-time sliding mode control, finite-time passive control, and finite-time consensus control. Our target is

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 203
X. Luan et al., Robust Control for Discrete-Time Markovian Jump Systems in the
Finite-Time Domain, Lecture Notes in Control and Information Sciences 492,
https://doi.org/10.1007/978-3-031-22182-8_11
to ensure that the state trajectory of the system is restricted within a certain range of the equilibrium point while satisfying other performance indicators, such as robustness, dissipativity, and consistency between sub-modes. In particular, considering that systems may have different performance requirements in a specific frequency band or in multiple frequency bands, Chap. 8 studies the design of finite-frequency state feedback controllers for discrete-time MJSs over the finite-time interval. In fact, it is not enough just to limit the mean or variance of the states to the desired range. Therefore, a higher-order moment finite-time controller is designed in Chap. 9 to guarantee that not only the mean and variance of the states remain within the desired range over the fixed time interval, but also the higher-order moments of the states are limited to the given bound.
Next, different from the preceding finite-time control strategies of Chaps. 2–9, where the control law is calculated offline, model predictive control is adopted in Chap. 10 to minimize the control inputs over a given time interval while satisfying the required transient performance for discrete-time MJSs through online rolling optimization. Finally, for semi-Markovian jump systems, whose transition probabilities depend on the sojourn time, the finite-time performance under the model predictive control scheme is analyzed in the situation where the transition probability at each time depends on the history of the elapsed switching sequences.
A future research direction is to introduce reinforcement learning control into the transient performance analysis of discrete-time MJSs when some key parameters of the system are unavailable. A policy-iteration-type learning method can be used to find the controller while learning the unknown parameters, so as to ensure the finite-time stabilization of the system.
Index

A method, 12, 109, 110, 165


Adaptive control, 165 output, 3, 40, 94
Adaptive synchronous control, 2 performance, 21, 90, 166
Adaptive systems, 5 problem, 2, 5, 6, 21, 23, 40, 52, 69, 70,
Adaptive tracking control, 3 109, 110, 132, 165, 183
Almost sure stability, 3, 39 protocol, 11, 151–153
Asymptotically, 6, 110, 152 scheme, 12, 109, 110, 127, 131, 183, 204
Asymptotically converge, 9, 11, 21, 151, 162 sequence, 183, 186, 200
Asymptotically stable, 4, 6 strategies, 9, 11, 40, 109, 203, 204
Asymptotic convergence, 6, 184 systems, 2, 132
Asymptotic stability, 4, 6, 7, 21, 180 target, 165
Asynchronous FTPC, 94, 99, 102, 103, 105,
theory, 4, 8, 93, 112, 120
106
Controllability, 2
Attenuation level, 10, 39, 77
Controller, 2, 4–11, 21–24, 26, 31, 33–36,
Average dwell time, 3, 10, 39, 41, 42, 46, 47,
39–41, 46, 48, 51, 52, 56, 60, 62, 64–
52, 57, 62, 65
66, 69, 70, 72, 74, 76, 80, 82, 86, 93, 94,
99, 100, 106, 109, 115, 123, 128, 131,
132, 139, 140, 142, 144, 147, 153–155,
B
157, 160, 161, 166, 167, 169, 173, 174,
Behavior, 2, 4, 11, 21
177, 180, 186, 190, 196, 199, 200, 203,
BIBO stability, 5
204
Biochemical systems, 1
design, 2, 4–6, 8, 9, 11, 21, 24, 33, 34, 36,
39, 41, 46, 51, 62, 66, 79, 86, 93, 94,
C 99, 109, 128, 131–133, 136, 144, 147,
Communication delay, 11, 151, 152, 161 154, 161, 166, 169, 173, 180, 191, 199
Communication network, 2 gain, 7, 26, 115, 123, 132, 140, 154, 167,
Communication technology, 151 170, 174, 186, 192
Conservation, 11, 131, 132 mode, 11, 93, 99
Conservative, 7, 203 parameter, 142, 146, 160, 161
Conservativeness, 8, 11, 21, 70, 128, 131, performance, 173
144, 166, 173, 180 Conventional, 183
Control, 2, 5, 9, 132, 183, 184, 196 Converge, 6, 9, 11, 21, 151, 152, 162, 184,
action, 183, 184, 196 196
design, 2, 5, 9, 132 Convergence, 184, 196
input, 40, 70, 94, 110, 183, 184, 186, 188, Convergence performance, 6
190, 204 Convergence stability, 5

D
Desired, 147, 166
  bound, 23, 62, 65, 142, 144, 152, 153, 177
  FTPC gains, 99
  passive performance, 11, 93, 95, 99
  performance, 104, 105, 110
  range, 11, 162, 165, 204
  rate, 10, 106
  state trajectory, 109
Discrete, 4, 7, 39
Discrete-event, 1
Discrete-time, 5–7, 9, 11, 21–24, 26, 28, 33, 36, 39, 40, 46, 61, 66, 70, 72, 75, 85, 88, 90, 93, 94, 103, 106, 109–120, 123, 125–128, 131, 133, 146, 147, 151–153, 155, 157, 158, 161, 165–169, 173, 183, 186, 188, 191, 195, 196, 200, 203, 204
Dissipative control, 2, 9
Dissipative filtering, 4, 21
Dissipative theory, 93
Dissipativity, 204
Disturbance, 1, 3, 5, 7–9, 11, 21–23, 36, 39, 40, 46–49, 51, 62, 65, 70, 76–78, 90, 94, 109, 151, 152, 161, 169, 173, 183, 196, 203
  attenuation, 10, 39, 76, 77
  input, 110
  rejection, 46–49, 76, 78
  signals, 62
Dynamic, 1–3, 7, 69, 79, 80, 109, 110, 146, 151, 153, 155, 157, 158, 161, 190
  performance, 109
  systems, 69, 109, 151
Dynamically, 1

E
Effective, 1–2, 93, 110
Effectively, 5
Effectiveness, 85, 103, 109, 110, 126, 142, 150, 160, 176, 195
Error, 9, 21, 28, 34, 36, 52, 56, 69, 79, 80
Existence, 191
Exogenous disturbance, 21, 22, 40, 70, 94, 152, 183
Exponential almost sure stability, 3
Exponential distribution, 8, 12, 183
Exponential l2 − l∞, 3
Exponential l2 − l∞ stability, 39
Exponential stability, 166

F
Filter, 2, 70, 93
Filtering, 2–4, 9, 21, 93, 132, 152, 165, 166
Finite-state, 1
Finite-time analysis, 1, 8, 9
Finite-time bounded, 5–7, 9, 21, 23, 36, 39, 93–95, 106
Finite-time boundedness, 5, 6, 9, 21, 23, 40, 66, 95, 97
Finite-time consensualization, 11, 151, 154, 157
Finite-time consensus, 11, 151–153, 157, 158, 161, 203
Finite-time control, 5–7, 9, 21–23, 40, 69, 70, 79, 93, 94, 204
Finite-time controller, 6, 8, 22, 24, 26, 33, 36, 39, 66, 86, 161, 169, 203
Finite-time dissipative filtering, 21
Finite-time domain, 5, 11, 94, 110, 150, 165, 169, 180
Finite-time filtering, 21
Finite-time H∞ control, 10, 21, 42, 51, 69, 70, 72, 93
Finite-time H∞ controller, 46, 60, 94
Finite-time interval, 5, 6, 9, 10, 39, 110, 111, 116, 128, 186, 187
Finite-time model predictive control, 12
Finite-time multiple-frequency control, 11, 137
Finite-time passive control, 10, 94, 99, 104, 106, 203
Finite-time passive controller, 11, 93
Finite-time performance, 1, 8–12, 39, 90, 106, 173, 180, 183, 184, 186, 188, 190, 199, 203, 204
Finite-time sliding mode control, 9, 11, 109, 110, 115
Finite-time stability, 4, 5, 7, 21, 23, 36, 39, 69, 133, 137, 139, 142, 147, 155, 158, 166, 169, 189, 199, 203
Finite-time stabilizable, 11, 22, 25, 29, 31, 34, 41, 46, 52, 56, 60, 72, 74, 76, 80, 94, 95, 98–101, 104, 105, 109, 112, 114, 118–123, 125, 127, 133, 136, 139, 140, 172–174, 187, 189
Finite-time stabilization, 6, 9, 10, 21–23, 69, 70, 90, 93, 94, 110, 112, 114, 128, 162, 169, 172–174, 177, 192, 203, 204
Finite-time stabilized, 8, 36
Finite-time stabilizing, 6, 7, 74, 77
Finite-time stable, 6, 7, 23, 34, 88, 169, 188, 199
Finite-time state feedback control, 7
Finite-time theory, 1, 152
Flight control, 93
Free MJS, 34, 35, 89
Free semi-MJS, 189, 190
Free system, 41, 62, 65
Free weight, 8
Fuzzy, 2, 21, 93
Fuzzy MJSs, 2, 93
Fuzzy model, 2, 21
Fuzzy rules, 2

G
Global solution, 2
Guarantee, 10, 11, 21, 46, 51, 69, 93, 95, 132, 136, 137, 152, 165, 180, 183, 188, 191, 192, 204
Guaranteed, 9, 33, 48, 60, 78, 102, 123
Guaranteeing, 90

H
H∞ control, 3, 4, 8
H∞ decay rate, 66
H∞ disturbance attenuation, 10, 39
H∞ disturbance rejection, 46, 51, 76, 78
H∞ estimation, 4
H∞ filtering, 3, 4, 21
H∞ interference suppression performance, 169
H∞ norm, 8
H∞ performance, 11, 40, 56, 93, 94, 109, 114, 115, 119–122, 125, 127, 128, 133, 158, 169, 171
Hybrid system, 1, 3, 39, 93, 203

I
Infinite, 110
Infinite-time domain, 4, 6, 21, 132, 180, 184
Infinite-time interval, 6
Infinite-time region, 152
Instability, 46, 69
Interconnected networks, 11, 151, 161
Internal stability, 10, 94, 106

J
Jump, 1–3, 69, 160, 166, 183
Jumping, 3, 4, 8, 11, 39, 46, 69, 131, 132, 137–139, 152, 153, 161, 185, 197
Jumping modes, 3, 9, 62, 65, 104–105, 126
Jumping process, 152
Jumping systems, 10, 11, 66, 131, 165
Jumping time, 8, 12
Jump instant, 185, 188
Jump mode, 34, 35, 89
Jump networks, 151
Jump systems, 1, 11, 21, 39, 69, 93, 109, 110, 131, 132, 144, 147, 165, 183, 204
Jump topologies, 11, 151, 152

L
Linear, 1–2, 8, 9
Linear difference inclusions, 2, 9, 21, 22
Linearize, 2
Linear matrix inequality, 2, 6–9, 51, 58, 60, 78, 132, 139, 159
Linear MJSs, 2, 9, 24, 26, 36, 165, 203
Linear systems, 2, 5–7
Local, 2
Lyapunov asymptotic stability, 23, 110, 132, 184
Lyapunov candidate function, 95, 101
Lyapunov energy function, 7, 9, 66
Lyapunov function, 3, 23, 24, 42, 46, 186
Lyapunov stability, 4

M
Markovian jump system, 1
Matrices, 23–29, 31, 42, 46, 48, 52, 56, 60, 69, 70, 72, 73, 76, 77, 80, 82, 95, 98, 100, 102, 109, 111, 123, 126, 132, 133, 137, 139, 152, 154, 155, 158–160, 189–191
Minimal, 190, 200
Minimize, 60, 183, 186, 204
Minimized, 8
Minimizing, 184
Minimum, 12, 27, 46, 62, 73, 183, 184, 196, 200
Model, 1–3, 7, 21, 63, 79, 93, 99, 115, 151
Model change, 8
Modelling, 3, 184
Model parameter, 85, 144
Model predictive control, 4, 12, 90, 183, 204
Model transformation, 8, 9, 11, 151
Multi-layer, 3, 7, 27–28, 36
Multi-modal jump systems, 147
Multimodal systems, 132
Multi-mode, 11, 165
Multi-performance, 131
Multiple frequency bands, 204
Multiple frequency control, 11, 137
Multiple frequency performance, 137, 142, 147, 173, 177, 180
Multiple passes, 151
Multiplier, 132

N
Network-connected dynamic systems, 151
Network-connected systems, 11, 151, 152, 161
Network connection, 152
Network technology, 151
Network topology, 151
Neural network, 2, 3, 7, 9, 21, 22, 27–28, 34, 36, 203
Neural network model, 7
Non-convex, 132
Non-feasibility, 60, 85, 159
Non-homogeneous Markovian process, 4
Non-homogeneous transition probabilities (TPs), 66, 69
Non-homogenous MJSs, 4
Nonlinear approximation, 2
Nonlinear function, 27
Nonlinearities, 2, 3, 7, 9, 21, 29, 36
Nonlinear matrix inequalities, 102
Nonlinear MJS, 1–3, 8, 9, 21, 22, 26, 28, 34, 165, 203
Nonlinear part, 21
Nonlinear system, 5, 7, 110, 151
Nonlinear term, 21, 22, 27
Nonnegative values, 3
Non-periodic triggered control, 9
Non-singular, 111, 115, 125
Non-singularity, 111, 116
Non-zero, 153

O
Optimal control, 2, 3, 165, 183, 186
Optimality, 183
Optimal stochastic finite-time controller, 26
Optimal tracking control, 3
Optimal weight, 34
Optimization, 8, 12, 26, 60, 152, 183, 184, 186, 191, 200, 204
Optimize, 12, 180, 183, 186, 190

P
Packet, 152
Parameter changes, 109
Parameter description, 144
Parameter information, 85
Parameterize, 36
Parameterized, 8
Parameter matrices, 137, 139
Parameter perturbation, 9, 11, 110
Parameter uncertainties, 7
Partial, 93
Partially, 8, 69, 165
Particular, 70, 86, 131
Particularity, 1, 4
Performance, 2, 5, 11, 39, 40, 46, 48, 51, 52, 69, 93, 94, 100–101, 104–106, 109, 110, 131, 132, 137, 139, 142, 147, 155, 165, 166, 169, 170, 173, 180, 183, 184, 203
Practical stability, 5
Predictive controller, 180, 190, 196, 199
Predictive horizon, 186, 187, 196
Preliminaries, 22, 40, 70, 152, 166
Problem formulation, 22, 40, 70, 152, 166

R
Regulate, 8
Regulating, 8, 166
Regulation, 165
Robust, 7
  control, 2–5, 8, 9, 11, 109, 165, 203
  fault detection, 3
  filtering, 2
  finite-time control, 7
  finite-time controller, 6
  FTS, 7, 8
  H∞ control, 8
  methodologies, 69
  performance, 169
Robustness, 9, 11, 110, 204

S
Sampling instant, 183
Sampling time, 25, 41, 135, 171, 183
Short time, 5, 152
Single-mode, 11, 131, 137, 147, 165
Stability analysis, 3, 4, 165
State feedback control, 6, 7, 29
State feedback controller, 6, 26, 31, 48, 72, 74, 79, 80, 132, 134, 137, 139, 140, 154, 155, 161, 166, 170, 174, 204
State feedback finite-time control, 10, 69, 70
State feedback gain, 23, 200
Steady-state performance, 21, 131
Stochastic differential equation, 1
Stochastic finite-time boundedness (FTB), 9, 21, 40
Stochastic finite-time performance, 10
Stochastic finite-time stability (FTS), 21, 36, 199
Stochastic finite-time stabilizable, 11, 22, 29, 34, 41, 46, 52, 74, 76, 80, 94, 95, 98, 100, 102, 104, 105, 114, 115, 118, 122–191
Stochastic finite-time stabilization, 9, 21, 24–26, 39, 46, 48, 56, 72, 110, 186, 188, 191
Stochastic finite-time stable, 34, 188–190, 199
Stochastic jumping, 10, 11, 39, 62, 131, 139, 161, 165
Stochastic Lyapunov function, 22, 30, 134
Stochastic Markovian chain, 46
Stochastic Markovian jump systems (MJSs), 21, 39
Stochastic Markovian process, 28
Stochastic multimodal systems, 131, 137, 147
Stochastic process, 1, 70, 184
Stochastic stability, 2
Stochastic system, 1, 5, 7, 110
Sub-modal systems, 132
Subsequent, 2, 6, 21, 42, 46, 50, 52, 60, 62, 151
Subsystem, 1, 3, 69, 150, 152, 153, 160, 183
Subsystem models, 1
Switched systems, 39
Switches, 36, 40, 166
Switching different gear positions, 3
Switching frequency, 46
Switching instants, 43, 54, 66
Switching jump systems, 165
Switching MJS, 1, 3, 9, 39, 46, 61, 203
Switching mode, 44, 54
Switching rules, 3
Switching sequences, 12, 183, 184, 204
Switching signals, 3, 9, 39, 46, 62, 65
Switching subsystem, 3
Switching systems, 132
Switching times, 41
Switching topology, 151
Symmetric, 24, 29, 41, 72–74, 77, 80, 95, 98, 100, 132, 155, 169, 172, 173, 186
Synthesis, 1, 3, 7–9, 39, 166, 203

T
Tracking control, 3
Transfer function, 8, 131, 147
Transformation, 8, 9, 11, 26, 32, 151, 154, 160, 200
Transformed, 9, 28, 60, 99, 103, 132, 139, 169, 187, 192
Transforming, 7, 11, 48, 131, 137
Transient behavior, 11, 21, 150, 203
Transient performance, 4–6, 9–12, 21, 22, 40, 70, 93, 109, 118, 128, 132, 133, 142, 177, 183, 184, 195, 200, 204
Transient requirements, 152
Transient response, 23
Transition characteristics, 5
Transition probability density function, 10, 69
Transition probability (TP), 10, 21, 69, 70, 90, 103, 126, 137, 139, 142, 144, 146, 147, 160, 165, 183, 203, 204

U
Uncertain, 6, 23, 24, 26, 39, 69, 86, 110, 132, 183
Uncertainty, 7, 8, 69, 86, 152
Unified, 93
Uniform, 90
Universal, 4, 109
Unknown, 8, 23, 27, 69, 86, 90, 110, 204
Unstable, 60, 151, 178
Unstable operation, 151

V
Variable, 1, 3, 9, 22, 40, 41, 51, 60, 70, 71, 94, 127, 128, 132, 133, 138, 152, 153, 167, 185
