
Applied Mathematics and Computation 223 (2013) 311–326


Computation of matrix exponentials of special matrices


Feng Ding *
Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi 214122, PR China
College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao 266042, PR China

Keywords: Matrix exponential; Transition matrix; Similarity transformation; Jordan matrix; Special matrix; Linear system

Abstract: Computing matrix exponentials or transition matrices is important in signal processing and control systems. This paper studies methods of computing the transition matrices related to linear dynamic systems, discusses and gives the properties of transition matrices, and explores the transition matrices of some special matrices. Some examples are provided to illustrate the proposed methods.

© 2013 Elsevier Inc. All rights reserved.

1. Introduction

In engineering, we often encounter algebraic equations, differential equations, or decompositions of some special matrices [1-3]. In linear cases, these differential equations can be transformed into a set of first-order differential equations of the form $\dot{x}(t) = Ax(t)$, where $A \in \mathbb{R}^{n\times n}$ is a real constant matrix and $x(t) \in \mathbb{R}^n$ is a state vector. Solving these matrix differential equations leads to the matrix exponential $e^{At}$, the transition matrix of $A$. Computing matrix functions plays an important role in science and engineering, including control theory. By applying mixed interpolation methods, Dehghan and Hajarian gave an algorithm for computing the matrix function $f(A)$ and proved its existence and uniqueness [4]. Wu discussed the matrix exponential of a matrix $A$ satisfying the relation $A^{k+1} = qA^k$ and gave explicit formulas for computing $e^A$ [5].
Matrix functions and matrix exponentials have wide applications in signal filtering [6,7] and controller design [8,9]. Many different algorithms for computing matrix exponentials have been reported. For example, Moore computed matrix exponentials by expanding them in Chebyshev, Legendre, or Laguerre orthogonal polynomials [10]. Matrix operations are useful in linear algebra and matrix theory. Al Zhour and Kilicman discussed several matrix products for partitioned and non-partitioned matrices and some useful connections between them [11], including the Kronecker product, Hadamard product, block Kronecker product, block Hadamard product, and the block-matrix inner product (i.e., the star product) [12-14]. Dehghan and Hajarian presented an iterative method for solving the generalized coupled Sylvester matrix equations over generalized bisymmetric matrices and analyzed its performance [15,16].
In the area of system identification [17-21] and parameter estimation [22-24], e.g., multi-innovation identification [25-27], one task is to estimate the parameter matrix/vector $A$ and $b$ of the state space model in (2) of the next section, and to determine the state vector $x(t)$, which leads to the matrix exponential or transition matrix $e^{At}$. This paper studies the properties of the transition matrix and computes the transition matrices of some special matrices [3].

* This work was supported by the National Natural Science Foundation of China (No. 61273194), the Natural Science Foundation of Jiangsu Province (China, BK2012549), the Priority Academic Program Development of Jiangsu Higher Education Institutions, and the 111 Project (B12018).
Address: Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi 214122, PR China.
E-mail address: [email protected]

0096-3003/$ - see front matter © 2013 Elsevier Inc. All rights reserved.
https://dx.doi.org/10.1016/j.amc.2013.07.079

The rest of the paper is organized as follows. Section 2 derives the solution of state equations and Section 3 discusses the state transition matrix and its properties. Section 4 computes the transition matrices of some special matrices. Section 5 provides an illustrative example to validate the proposed methods. Finally, Section 6 offers some concluding remarks.

2. Solution of state equations

This section introduces some basic facts to be used in the following sections; they can be found in modern control theory textbooks.
A continuous-time system described by a linear differential equation with constant coefficients can be transformed into a set of first-order differential equations:
$$\begin{cases}
\dot{x}_1(t) = a_{11}x_1(t) + a_{12}x_2(t) + \cdots + a_{1n}x_n(t) + b_1 u(t),\\
\dot{x}_2(t) = a_{21}x_1(t) + a_{22}x_2(t) + \cdots + a_{2n}x_n(t) + b_2 u(t),\\
\qquad\vdots\\
\dot{x}_n(t) = a_{n1}x_1(t) + a_{n2}x_2(t) + \cdots + a_{nn}x_n(t) + b_n u(t),
\end{cases} \tag{1}$$
where $t$ is a time variable, $u(t) \in \mathbb{R}$ is the input of the system, and $x_i(t) \in \mathbb{R}$, $i = 1, 2, \ldots, n$, are the state variables of the system.
Construct an $n$-dimensional state vector:
$$x(t) := \begin{bmatrix} x_1(t)\\ x_2(t)\\ \vdots\\ x_n(t)\end{bmatrix} \in \mathbb{R}^n.$$
Let
$$A := \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & & \vdots\\ a_{n1} & a_{n2} & \cdots & a_{nn}\end{bmatrix} \in \mathbb{R}^{n\times n}, \qquad
b := \begin{bmatrix} b_1\\ b_2\\ \vdots\\ b_n\end{bmatrix} \in \mathbb{R}^n.$$
Eq. (1) can be written in the compact form
$$\dot{x}(t) = Ax(t) + bu(t), \qquad x(t_0) = x_0. \tag{2}$$
When the input $u(t) = 0$, we obtain a homogeneous equation,
$$\dot{x}(t) = Ax(t), \qquad x(t_0) = x_0. \tag{3}$$
Here, we use the power series to solve this equation. Assume that the state vector has the form
$$x(t) = a_0 + a_1 t + a_2 t^2 + \cdots + a_i t^i + \cdots, \tag{4}$$
where $a_i \in \mathbb{R}^n$ are the coefficient vectors to be determined. Substituting $x(t)$ into (3) gives
$$a_1 + 2a_2 t + 3a_3 t^2 + \cdots + i a_i t^{i-1} + \cdots = Aa_0 + Aa_1 t + Aa_2 t^2 + \cdots + Aa_i t^i + \cdots.$$
Comparing the coefficients of the same powers on both sides establishes the equations:
$$\begin{cases} a_1 = Aa_0,\\ 2a_2 = Aa_1,\\ 3a_3 = Aa_2,\\ \quad\vdots\\ i a_i = Aa_{i-1},\end{cases}
\quad\Longrightarrow\quad
\begin{cases} a_1 = Aa_0,\\ a_2 = \tfrac{1}{2}Aa_1 = \tfrac{1}{2!}A^2 a_0,\\ a_3 = \tfrac{1}{3}Aa_2 = \tfrac{1}{3!}A^3 a_0,\\ \quad\vdots\\ a_i = \tfrac{1}{i}Aa_{i-1} = \tfrac{1}{i!}A^i a_0.\end{cases}$$
Let $I_n$ be the $n$-dimensional identity matrix and $I$ an identity matrix of appropriate size. Inserting $a_i$ into (4) gives
$$x(t) = \Big(I_n + At + \frac{1}{2!}A^2 t^2 + \frac{1}{3!}A^3 t^3 + \cdots + \frac{1}{i!}A^i t^i + \cdots\Big) a_0.$$
Letting $t = 0$ gives $a_0 = x(0)$, so the solution of the homogeneous equation in (3) is given by
$$x(t) = \Big(I_n + At + \frac{1}{2!}A^2 t^2 + \frac{1}{3!}A^3 t^3 + \cdots + \frac{1}{i!}A^i t^i + \cdots\Big) x(0). \tag{5}$$
Recalling the Taylor series expansion of the exponential function,
$$e^t = 1 + t + \frac{1}{2!}t^2 + \frac{1}{3!}t^3 + \cdots,$$
we define the matrix exponential function
$$\exp(At) := e^{At} = I_n + At + \frac{1}{2!}A^2 t^2 + \frac{1}{3!}A^3 t^3 + \cdots + \frac{1}{i!}A^i t^i + \cdots. \tag{6}$$
Thus the solution of the homogeneous equation $\dot{x}(t) = Ax(t)$ can be expressed as
$$x(t) = e^{At}x(0). \tag{7}$$


From (7), we obtain the solution of the homogeneous equation in (3):
$$x(t_0) = e^{At_0}x(0), \qquad x(t) = e^{At}x(0) = e^{A(t-t_0)}e^{At_0}x(0) = e^{A(t-t_0)}x(t_0). \tag{8}$$


Define
$$\Phi(t) := e^{At} = I_n + At + \frac{1}{2!}A^2 t^2 + \frac{1}{3!}A^3 t^3 + \cdots, \tag{9}$$
which is called the state transition matrix of the continuous-time system. Sometimes we call $\Phi_A(t) := e^{At}$ the state transition matrix of $A$ or the matrix exponential of $A$.
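As a numerical illustration of definition (6), the truncated power series can be compared against a general-purpose matrix exponential. The following sketch assumes NumPy and SciPy are available; the matrix values are illustrative only:

```python
import numpy as np
from scipy.linalg import expm

def expm_series(A, t, terms=30):
    """Approximate exp(A t) by the truncated power series
    I + At + (At)^2/2! + ... of definition (6)."""
    n = A.shape[0]
    result = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms):
        term = term @ (A * t) / k   # term now holds (At)^k / k!
        result = result + term
    return result

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # illustrative system matrix
approx = expm_series(A, 0.5)
exact = expm(A * 0.5)
print(np.allclose(approx, exact))
```

For well-scaled $At$ the series converges quickly; production codes instead use scaling-and-squaring with Padé approximants, which is what `scipy.linalg.expm` implements.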
Let $x(t) = e^{At}x_1(t)$. Substituting $x(t)$ into (2) gives
$$Ae^{At}x_1(t) + e^{At}\dot{x}_1(t) = Ae^{At}x_1(t) + bu(t).$$
This means that
$$e^{At}\dot{x}_1(t) = bu(t).$$
Pre-multiplying both sides by $e^{-At}$ and integrating give
$$x_1(t) - x_1(t_0) = \int_{t_0}^{t} e^{-As}bu(s)\,\mathrm{d}s.$$
Thus we have
$$x(t) = e^{At}x_1(t) = e^{At}\Big(x_1(t_0) + \int_{t_0}^{t} e^{-As}bu(s)\,\mathrm{d}s\Big) = e^{At}x_1(t_0) + \int_{t_0}^{t} e^{A(t-s)}bu(s)\,\mathrm{d}s. \tag{10}$$
Letting $t = t_0$ gives $x(t_0) = e^{At_0}x_1(t_0)$. Thus we have
$$x_1(t_0) = e^{-At_0}x(t_0).$$
Inserting it into (10), we obtain the solution of Eq. (2):
$$x(t) = e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t} e^{A(t-s)}bu(s)\,\mathrm{d}s. \tag{11}$$
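The variation-of-constants formula (11) can be checked numerically. The sketch below, an editorial illustration assuming NumPy/SciPy and a constant input $u(s) \equiv 1$ with $t_0 = 0$, compares a quadrature of the convolution integral with the closed form $A^{-1}(e^{At}-I)b$ valid for invertible $A$:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative data: x' = Ax + b u, constant input u(s) = 1, t0 = 0.
A = np.array([[-3.0, -2.0], [1.0, 0.0]])
b = np.array([1.0, 0.0])
x0 = np.array([1.0, -1.0])
t = 0.8

# Closed form: for invertible A and u = 1 the integral in (11)
# evaluates to A^{-1} (e^{At} - I) b.
x_closed = expm(A * t) @ x0 + np.linalg.solve(A, (expm(A * t) - np.eye(2)) @ b)

# Trapezoidal quadrature of the convolution integral in (11).
s = np.linspace(0.0, t, 801)
vals = np.stack([expm(A * (t - si)) @ b for si in s])
ds = s[1] - s[0]
integral = ds * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])
x_quad = expm(A * t) @ x0 + integral
print(np.allclose(x_closed, x_quad, atol=1e-5))
```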

Next, we discuss the properties of the transition matrix $\Phi(t)$.

3. State transition matrix and its properties

The transition matrix or the matrix exponential of $A$ is given by
$$\Phi(t) = \exp(At) = e^{At} = I_n + At + \frac{1}{2!}A^2 t^2 + \frac{1}{3!}A^3 t^3 + \cdots,$$
which has the following properties.

1. The transition matrix of a zero matrix
The transition matrix of a zero matrix is an identity matrix, i.e., $e^{0t} = I$.
2. The initial value of the transition matrix
The initial value of the transition matrix is equal to an identity matrix, i.e., $\Phi(0) = I$.

3. The transition matrix of the transposed matrix
The transition matrix of the transposed matrix equals the transpose of the transition matrix, i.e., $\Phi_{A^{\mathrm{T}}}(t) = \Phi_A^{\mathrm{T}}(t)$, since $\exp(A^{\mathrm{T}}t) = [\exp(At)]^{\mathrm{T}} = \Phi^{\mathrm{T}}(t)$.
4. The derivative of the transition matrix
The derivative of the transition matrix is given by
$$\dot{\Phi}(t) = A\Phi(t) = \Phi(t)A. \tag{12}$$
This means that $\Phi(t)$ and $A$ commute.
5. The initial value of the derivative of the transition matrix
The initial value of the derivative of the transition matrix equals $A$. That is,
$$A = \dot{\Phi}(t)\big|_{t=0} = \dot{\Phi}(0).$$
From here, we can see that $\Phi(t)$ is the unique solution of the matrix differential equation
$$\dot{\Phi}(t) = A\Phi(t), \qquad \Phi(0) = I.$$
6. The multiplication of transition matrices
Transition matrices of the same $A$ commute under multiplication, i.e.,
$$\Phi(t)\Phi(t_1) = \Phi(t_1)\Phi(t), \tag{13}$$
and
$$\Phi(t - t_1) = \Phi(t)\Phi(-t_1) = \Phi(-t_1)\Phi(t), \qquad \Phi(t - t_2) = \Phi(t - t_1)\Phi(t_1 - t_2).$$
In fact, we have
$$\Phi(t - t_1) = e^{A(t-t_1)} = e^{At}e^{A(-t_1)} = \Phi(t)\Phi(-t_1),$$
$$\Phi(t - t_2) = e^{A(t-t_2)} = e^{A(t-t_1+t_1-t_2)} = e^{A(t-t_1)}e^{A(t_1-t_2)} = \Phi(t - t_1)\Phi(t_1 - t_2).$$
This can be extended to the general case:
$$\Phi(t + t_1 + t_2 + \cdots + t_m) = \Phi(t)\Phi(t_1)\Phi(t_2)\cdots\Phi(t_m),$$
$$\Phi(t - t_m) = \Phi(t - t_1)\Phi(t_1 - t_2)\Phi(t_2 - t_3)\cdots\Phi(t_{m-1} - t_m).$$
7. The transition property of the state vector
For any initial value $x(t_1)$, the solution of the linear system $\dot{x}(t) = Ax(t)$ is given by
$$x(t) = \Phi(t - t_1)x(t_1).$$
In fact, from (7) we have
$$x(t_1) = e^{At_1}x(0) = \Phi(t_1)x(0),$$
$$x(t) = e^{At}x(0) = e^{A(t-t_1+t_1)}x(0) = e^{A(t-t_1)}e^{At_1}x(0) = \Phi(t - t_1)\Phi(t_1)x(0) = \Phi(t - t_1)x(t_1),$$
or
$$x(t_1) = \Phi^{-1}(t - t_1)x(t) = \Phi(t_1 - t)x(t).$$
This shows that the transition of the state vector is reversible.
8. The inverse of the transition matrix
The inverse of the transition matrix exists and is given by
$$\Phi^{-1}(t) = \Phi(-t), \qquad \Phi^{-1}(-t) = \Phi(t). \tag{14}$$
This can be obtained from the equality
$$I_n = \Phi(t - t) = \Phi(t)\Phi(-t) = \Phi(-t)\Phi(t).$$
9. The power of the transition matrix
The power of the transition matrix satisfies the relations
$$\Phi^m(t) = \Phi(mt), \qquad \Phi^{-m}(t) = \Phi(-mt). \tag{15}$$
This can be obtained from the fact that
$$\Phi^m(t) = (e^{At})^m = e^{mAt} = e^{A(mt)} = \Phi(mt).$$
10. The transition matrix of the sum of commuting matrices
In general, $e^{(A+B)t} \ne e^{At}e^{Bt} \ne e^{Bt}e^{At}$, because $e^{(A+B)t}$ contains mixed terms $B^i A^k$:
$$e^{(A+B)t} = I_n + (A+B)t + \frac{1}{2!}(A+B)^2 t^2 + \cdots = I_n + (A+B)t + \frac{1}{2!}(A^2 + AB + BA + B^2)t^2 + \cdots,$$
$$e^{At}e^{Bt} = \Big(I_n + At + \frac{1}{2!}A^2 t^2 + \cdots\Big)\Big(I_n + Bt + \frac{1}{2!}B^2 t^2 + \cdots\Big) = I_n + (A+B)t + \frac{1}{2!}(A^2 + 2AB + B^2)t^2 + \cdots,$$
$$e^{(A+B)t} - e^{At}e^{Bt} = \frac{1}{2}(BA - AB)t^2 + \cdots.$$
All such difference terms vanish, giving $e^{(A+B)t} - e^{At}e^{Bt} = 0$, if and only if $AB = BA$.
If $A$ and $B$ commute ($AB = BA$), then we have
$$\Phi_{A+B}(t) = \Phi_A(t)\Phi_B(t) = \Phi_B(t)\Phi_A(t). \tag{16}$$
That is,
$$e^{(A+B)t} = e^{At}e^{Bt} = e^{Bt}e^{At}.$$
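Property (16) and its failure for non-commuting matrices can be demonstrated numerically. The following sketch (an editorial illustration assuming NumPy/SciPy; the matrices are arbitrary examples) exercises both cases:

```python
import numpy as np
from scipy.linalg import expm

# Commuting pair: polynomials in the same matrix always commute.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = 2.0 * A + 3.0 * np.eye(2)        # AB = BA by construction
print(np.allclose(expm(A + B), expm(A) @ expm(B)))   # identity (16) holds

# Non-commuting pair: the identity generally fails.
C = np.array([[0.0, 1.0], [0.0, 0.0]])
D = np.array([[0.0, 0.0], [1.0, 0.0]])
print(np.allclose(expm(C + D), expm(C) @ expm(D)))   # identity fails here
```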


11. The transition matrix of a similar matrix
The matrices $A$ and $B$ are said to be similar if there exists a non-singular matrix $P$ such that $A = PBP^{-1}$ (a similarity transformation). Then we have
$$\Phi_A(t) = e^{At} = P\Phi_B(t)P^{-1}, \tag{17}$$
or
$$\Phi_A(t) = Pe^{Bt}P^{-1}.$$
In fact, according to the definition of the transition matrix, we have
$$\Phi_A(t) = e^{At} = e^{PBP^{-1}t} = I_n + (PBP^{-1})t + \frac{1}{2!}(PBP^{-1})^2 t^2 + \frac{1}{3!}(PBP^{-1})^3 t^3 + \cdots$$
$$= I_n + PBP^{-1}t + \frac{1}{2!}PB^2P^{-1}t^2 + \frac{1}{3!}PB^3P^{-1}t^3 + \cdots = P\Big(I_n + Bt + \frac{1}{2!}B^2 t^2 + \frac{1}{3!}B^3 t^3 + \cdots\Big)P^{-1} = Pe^{Bt}P^{-1} = P\Phi_B(t)P^{-1}.$$
Furthermore, if $A \in \mathbb{R}^{n\times n}$ is diagonalizable with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ and $P$ is the matrix whose columns are the eigenvectors of $A$, then we have
$$P^{-1}AP = \mathrm{diag}[\lambda_1, \lambda_2, \ldots, \lambda_n] =: \Lambda,$$
or
$$A = P\Lambda P^{-1},$$
and
$$\Phi_\Lambda(t) = e^{\Lambda t} = \mathrm{diag}[e^{\lambda_1 t}, e^{\lambda_2 t}, \ldots, e^{\lambda_n t}], \tag{18}$$
$$\Phi_A(t) = Pe^{\Lambda t}P^{-1} = P\,\mathrm{diag}[e^{\lambda_1 t}, e^{\lambda_2 t}, \ldots, e^{\lambda_n t}]\,P^{-1}. \tag{19}$$
For a block diagonal matrix $A := \mathrm{blockdiag}[A_1, A_2, \ldots, A_m]$, we have
$$\Phi_A(t) = \mathrm{blockdiag}[\Phi_{A_1}(t), \Phi_{A_2}(t), \ldots, \Phi_{A_m}(t)], \tag{20}$$
or
$$e^{At} = \mathrm{blockdiag}[e^{A_1 t}, e^{A_2 t}, \ldots, e^{A_m t}]. \tag{21}$$
This property follows easily from the definition of the transition matrix.
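Eq. (19) suggests a simple eigendecomposition-based computation. The sketch below (an editorial illustration assuming NumPy/SciPy; the test matrix is the one used later in Example 2) applies it to a diagonalizable matrix:

```python
import numpy as np
from scipy.linalg import expm

def expm_diag(A, t):
    """exp(At) via the eigendecomposition A = P diag(lam) P^{-1},
    as in eq. (19); valid when A is diagonalizable."""
    lam, P = np.linalg.eig(A)
    return (P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P)).real

A = np.array([[-3.0, -2.0], [1.0, 0.0]])   # eigenvalues -1 and -2
print(np.allclose(expm_diag(A, 1.3), expm(A * 1.3)))
```

This route is attractive for well-conditioned eigenvector matrices $P$, but is numerically unreliable when $A$ is close to non-diagonalizable.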

12. The transition matrix under an orthogonal similarity transformation
If there exists an orthogonal matrix $Q$ ($Q^{\mathrm{T}}Q = I_n$) such that $A = QBQ^{\mathrm{T}}$, then we have
$$\Phi_A(t) = e^{At} = Q\Phi_B(t)Q^{\mathrm{T}} = Qe^{Bt}Q^{\mathrm{T}}.$$
Furthermore, if $B$ is a diagonal matrix $B = \Lambda := \mathrm{diag}[\lambda_1, \lambda_2, \ldots, \lambda_n] \in \mathbb{F}^{n\times n}$, then we have
$$\Phi(t) = Q\Phi_\Lambda(t)Q^{\mathrm{T}} = Q\,\mathrm{diag}[e^{\lambda_1 t}, e^{\lambda_2 t}, \ldots, e^{\lambda_n t}]\,Q^{\mathrm{T}}. \tag{22}$$
If $B$ is a block diagonal matrix, i.e., $B = \Lambda := \mathrm{blockdiag}[\Lambda_1, \Lambda_2, \ldots, \Lambda_m] \in \mathbb{F}^{n\times n}$, then we have
$$\Phi(t) = Q\Phi_\Lambda(t)Q^{\mathrm{T}} = Q\,\mathrm{blockdiag}[e^{\Lambda_1 t}, e^{\Lambda_2 t}, \ldots, e^{\Lambda_m t}]\,Q^{\mathrm{T}} \in \mathbb{F}^{n\times n}. \tag{23}$$

4. Computation of transition matrices

To compute the transition matrices, we need the series expansions of the following functions.
The exponential functions:
$$e^t = 1 + t + \frac{1}{2!}t^2 + \frac{1}{3!}t^3 + \frac{1}{4!}t^4 + \frac{1}{5!}t^5 + \cdots,$$
$$e^{-t} = 1 - t + \frac{1}{2!}t^2 - \frac{1}{3!}t^3 + \frac{1}{4!}t^4 - \frac{1}{5!}t^5 + \cdots.$$
The hyperbolic functions:
$$\mathrm{ch}(t) := \cosh t = \frac{e^t + e^{-t}}{2} = 1 + \frac{1}{2!}t^2 + \frac{1}{4!}t^4 + \frac{1}{6!}t^6 + \cdots,$$
$$\mathrm{sh}(t) := \sinh t = \frac{e^t - e^{-t}}{2} = t + \frac{1}{3!}t^3 + \frac{1}{5!}t^5 + \frac{1}{7!}t^7 + \cdots.$$
Let $j = \sqrt{-1}$. From
$$e^{jt} = \cos(t) + j\sin(t) = 1 + jt - \frac{1}{2!}t^2 - j\frac{1}{3!}t^3 + \frac{1}{4!}t^4 + j\frac{1}{5!}t^5 + \cdots,$$
we have the trigonometric functions:
$$\cos(t) = \frac{e^{jt} + e^{-jt}}{2} = 1 - \frac{1}{2!}t^2 + \frac{1}{4!}t^4 - \frac{1}{6!}t^6 + \cdots,$$
$$\sin(t) = \frac{e^{jt} - e^{-jt}}{2j} = t - \frac{1}{3!}t^3 + \frac{1}{5!}t^5 - \frac{1}{7!}t^7 + \cdots.$$
The following relations hold:
$$\mathrm{ch}(t) = \cos(jt), \quad \mathrm{sh}(t) = -j\sin(jt), \quad \cos(t) = \mathrm{ch}(jt), \quad \sin(t) = -j\,\mathrm{sh}(jt).$$

Next, we compute the transition matrices of some special matrices.

1. The identity matrix
The transition matrix of the $n$-dimensional identity matrix $I_n$ is given by
$$\Phi(t) = \exp(I_n t) = I_n + I_n t + \frac{1}{2!}I_n^2 t^2 + \frac{1}{3!}I_n^3 t^3 + \cdots = I_n + I_n t + \frac{1}{2!}I_n t^2 + \frac{1}{3!}I_n t^3 + \cdots = \Big(1 + t + \frac{1}{2!}t^2 + \frac{1}{3!}t^3 + \cdots\Big)I_n = e^t I_n. \tag{24}$$
2. The anti-identity matrix
The anti-identity matrix is also called the exchange matrix and is denoted by $\bar{I}$. We define the $n$-dimensional anti-identity matrix as
$$\bar{I}_n := \mathrm{adiag}[1, 1, \ldots, 1] = \begin{bmatrix} & & & 1\\ & & 1 & \\ & ⋰ & & \\ 1 & & & \end{bmatrix} \in \mathbb{R}^{n\times n}.$$
Since $\bar{I}_n^{2i} = I_n$ and $\bar{I}_n^{2i-1} = \bar{I}_n$, $i = 1, 2, 3, \ldots$, we have
$$\Phi(t) = \exp(\bar{I}_n t) = I_n + \bar{I}_n t + \frac{1}{2!}\bar{I}_n^2 t^2 + \frac{1}{3!}\bar{I}_n^3 t^3 + \frac{1}{4!}\bar{I}_n^4 t^4 + \cdots = I_n + \bar{I}_n t + \frac{1}{2!}I_n t^2 + \frac{1}{3!}\bar{I}_n t^3 + \frac{1}{4!}I_n t^4 + \cdots$$
$$= \Big(I_n + \frac{1}{2!}I_n t^2 + \frac{1}{4!}I_n t^4 + \cdots\Big) + \Big(\bar{I}_n t + \frac{1}{3!}\bar{I}_n t^3 + \frac{1}{5!}\bar{I}_n t^5 + \cdots\Big)$$
$$= \mathrm{ch}(t)I_n + \mathrm{sh}(t)\bar{I}_n = \frac{e^t + e^{-t}}{2}I_n + \frac{e^t - e^{-t}}{2}\bar{I}_n \in \mathbb{R}^{n\times n}. \tag{25}$$
In mathematics, especially linear algebra, the exchange matrix is a special case of a permutation matrix, where the 1 elements lie on the counterdiagonal and all other elements are zero. In other words, it is a 'row-reversed' or 'column-reversed' version of the identity matrix.
Any matrix $A$ satisfying the condition $A\bar{I} = \bar{I}A$ is said to be centrosymmetric. Any matrix $A$ satisfying the condition $A\bar{I} = \bar{I}A^{\mathrm{T}}$ is said to be persymmetric.
A symmetric matrix is a matrix whose entries are symmetric about the northwest-to-southeast diagonal. If a symmetric matrix is rotated by 90°, it becomes a persymmetric matrix. Matrices that are both symmetric and persymmetric are sometimes called bisymmetric matrices.
The following are two bisymmetric (and centrosymmetric) matrices:
$$\begin{bmatrix} a & b & c\\ b & e & b\\ c & b & a\end{bmatrix}, \qquad \begin{bmatrix} a & b & c & d\\ b & e & f & c\\ c & f & e & b\\ d & c & b & a\end{bmatrix}.$$
A $(2n-1)$-dimensional bisymmetric matrix has $1 + 3 + 5 + \cdots + (2n-1) = n^2$ independent elements; a $2n$-dimensional bisymmetric matrix has $2 + 4 + 6 + \cdots + 2n = n(n+1)$ independent elements.
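The closed form (25) is easy to verify against a general-purpose matrix exponential. The following sketch is an editorial illustration assuming NumPy/SciPy; the dimension and time value are arbitrary:

```python
import numpy as np
from scipy.linalg import expm

n, t = 5, 0.7
J = np.fliplr(np.eye(n))          # exchange (anti-identity) matrix
closed = np.cosh(t) * np.eye(n) + np.sinh(t) * J   # eq. (25)
print(np.allclose(closed, expm(J * t)))
```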
3. The diagonal matrix
For an $n$-dimensional diagonal matrix
$$\Lambda := \mathrm{diag}[\lambda_1, \lambda_2, \ldots, \lambda_n] \in \mathbb{F}^{n\times n},$$
we have
$$\Lambda^i = \mathrm{diag}[\lambda_1^i, \lambda_2^i, \ldots, \lambda_n^i] \in \mathbb{F}^{n\times n}, \quad i = 1, 2, 3, \ldots,$$
$$\Phi(t) = e^{\Lambda t} = I_n + \Lambda t + \frac{1}{2!}\Lambda^2 t^2 + \frac{1}{3!}\Lambda^3 t^3 + \cdots = \mathrm{diag}[e^{\lambda_1 t}, e^{\lambda_2 t}, \ldots, e^{\lambda_n t}] \in \mathbb{F}^{n\times n}. \tag{26}$$
4. The anti-diagonal matrix
We denote an $n$-dimensional anti-diagonal matrix by
$$\bar{\Lambda} := \mathrm{adiag}[a_1, a_2, \ldots, a_n] = \begin{bmatrix} & & & a_1\\ & & a_2 & \\ & ⋰ & & \\ a_n & & & \end{bmatrix} \in \mathbb{F}^{n\times n}.$$
Suppose that $a_i a_{n-i+1} = d^2$ for all $i$. Then we have
$$\bar{\Lambda}^{2i} = \mathrm{diag}[(a_1 a_n)^i, (a_2 a_{n-1})^i, \ldots, (a_n a_1)^i] = d^{2i}I_n, \qquad \bar{\Lambda}^{2i-1} = d^{2i-2}\bar{\Lambda}, \quad i = 1, 2, 3, \ldots,$$
$$\Phi(t) = e^{\bar{\Lambda}t} = I_n + \bar{\Lambda}t + \frac{1}{2!}\bar{\Lambda}^2 t^2 + \frac{1}{3!}\bar{\Lambda}^3 t^3 + \frac{1}{4!}\bar{\Lambda}^4 t^4 + \cdots$$
$$= \Big(1 + \frac{1}{2!}d^2 t^2 + \frac{1}{4!}d^4 t^4 + \cdots\Big)I_n + \Big(t + \frac{1}{3!}d^2 t^3 + \frac{1}{5!}d^4 t^5 + \cdots\Big)\bar{\Lambda}$$
$$= \mathrm{ch}(dt)I_n + \mathrm{sh}(dt)\bar{\Lambda}/d \in \mathbb{F}^{n\times n}. \tag{27}$$
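Formula (27) can be checked numerically for a concrete anti-diagonal matrix satisfying $a_i a_{n-i+1} = d^2$. The sketch below is an editorial illustration assuming NumPy/SciPy, with $n = 4$ and $d = 2$:

```python
import numpy as np
from scipy.linalg import expm

# Anti-diagonal matrix with a_i * a_{n-i+1} = d^2 (here d = 2, n = 4).
a = np.array([1.0, 2.0, 2.0, 4.0])
K = np.fliplr(np.diag(a))          # places a_i at position (i, n-i+1)
d, t = 2.0, 0.6
closed = np.cosh(d * t) * np.eye(4) + np.sinh(d * t) / d * K   # eq. (27)
print(np.allclose(closed, expm(K * t)))
```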
5. The Jordan matrix
For an $n$-dimensional Jordan block
$$J_n = \begin{bmatrix} \lambda & 1 & & & \\ & \lambda & 1 & & \\ & & \lambda & \ddots & \\ & & & \ddots & 1\\ & & & & \lambda\end{bmatrix} \in \mathbb{F}^{n\times n},$$
we have
$$J_n^k = \begin{bmatrix}
\lambda^k & k\lambda^{k-1} & \frac{k(k-1)}{2!}\lambda^{k-2} & \frac{k(k-1)(k-2)}{3!}\lambda^{k-3} & \cdots & \frac{k(k-1)\cdots(k-n+2)}{(n-1)!}\lambda^{k-n+1}\\
 & \lambda^k & k\lambda^{k-1} & \frac{k(k-1)}{2!}\lambda^{k-2} & \ddots & \vdots\\
 & & \lambda^k & k\lambda^{k-1} & \ddots & \frac{k(k-1)(k-2)}{3!}\lambda^{k-3}\\
 & & & \ddots & \ddots & \frac{k(k-1)}{2!}\lambda^{k-2}\\
 & & & & \lambda^k & k\lambda^{k-1}\\
 & & & & & \lambda^k
\end{bmatrix},$$
$$\Phi(t) = \exp(J_n t) = I_n + J_n t + \frac{1}{2!}J_n^2 t^2 + \frac{1}{3!}J_n^3 t^3 + \frac{1}{4!}J_n^4 t^4 + \cdots = \begin{bmatrix}
e^{\lambda t} & te^{\lambda t} & \frac{t^2}{2!}e^{\lambda t} & \frac{t^3}{3!}e^{\lambda t} & \cdots & \frac{t^{n-1}}{(n-1)!}e^{\lambda t}\\
 & e^{\lambda t} & te^{\lambda t} & \frac{t^2}{2!}e^{\lambda t} & \cdots & \frac{t^{n-2}}{(n-2)!}e^{\lambda t}\\
 & & e^{\lambda t} & te^{\lambda t} & \ddots & \vdots\\
 & & & e^{\lambda t} & \ddots & \frac{t^2}{2!}e^{\lambda t}\\
 & & & & \ddots & te^{\lambda t}\\
 & & & & & e^{\lambda t}
\end{bmatrix} \in \mathbb{F}^{n\times n}. \tag{28}$$
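The closed form (28) places $\frac{t^k}{k!}e^{\lambda t}$ on the $k$-th superdiagonal. The sketch below (an editorial illustration assuming NumPy/SciPy) builds that matrix directly and compares it with a general-purpose exponential of a $3\times 3$ Jordan block:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

def expm_jordan(lam, n, t):
    """exp(J_n t) for an n-by-n Jordan block with eigenvalue lam, eq. (28):
    t^k/k! * e^{lam t} on the k-th superdiagonal."""
    Phi = np.zeros((n, n))
    for k in range(n):
        Phi += np.diag(np.full(n - k, t**k / factorial(k)), k)
    return np.exp(lam * t) * Phi

J = np.diag([2.0, 2.0, 2.0]) + np.diag([1.0, 1.0], 1)   # 3x3 Jordan block
print(np.allclose(expm_jordan(2.0, 3, 0.4), expm(J * 0.4)))
```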
6. The positive-negative identity matrix
For the $2n$-dimensional positive-negative identity matrix
$$A := \begin{bmatrix} I_n & \\ & -I_n\end{bmatrix} \in \mathbb{R}^{(2n)\times(2n)},$$
we have
$$\Phi(t) = e^{At} = \begin{bmatrix} e^{I_n t} & \\ & e^{-I_n t}\end{bmatrix} = \begin{bmatrix} e^t I_n & \\ & e^{-t}I_n\end{bmatrix} \in \mathbb{R}^{(2n)\times(2n)}.$$
7. The positive-negative anti-identity matrix
For the $2n$-dimensional positive-negative anti-identity matrix
$$A = \begin{bmatrix} & I_n\\ -I_n & \end{bmatrix} \in \mathbb{R}^{(2n)\times(2n)},$$
we have $A^2 = -I_{2n}$, and hence $A^{4i} = I_{2n}$, $A^{4i+1} = A$, $A^{4i+2} = -I_{2n}$, $A^{4i+3} = -A$, so
$$\Phi(t) := e^{At} = I_{2n} + At + \frac{1}{2!}A^2 t^2 + \frac{1}{3!}A^3 t^3 + \frac{1}{4!}A^4 t^4 + \cdots$$
$$= \Big(I_{2n} + \frac{1}{2!}A^2 t^2 + \frac{1}{4!}A^4 t^4 + \cdots\Big) + \Big(At + \frac{1}{3!}A^3 t^3 + \frac{1}{5!}A^5 t^5 + \cdots\Big)$$
$$= \Big(1 - \frac{1}{2!}t^2 + \frac{1}{4!}t^4 - \cdots\Big)I_{2n} + \Big(t - \frac{1}{3!}t^3 + \frac{1}{5!}t^5 - \cdots\Big)A$$
$$= \cos(t)I_{2n} + \sin(t)A = \begin{bmatrix} \cos(t)I_n & \sin(t)I_n\\ -\sin(t)I_n & \cos(t)I_n\end{bmatrix} \in \mathbb{R}^{(2n)\times(2n)}.$$
8. The positive-negative alternating identity matrix
For the $n$-dimensional positive-negative alternating identity matrix
$$A = \mathrm{diag}[1, -1, 1, -1, \ldots, (-1)^{n-1}] \in \mathbb{R}^{n\times n},$$
we have
$$\Phi(t) = e^{At} = \mathrm{diag}[e^t, e^{-t}, e^t, e^{-t}, \ldots, e^{(-1)^{n-1}t}] \in \mathbb{R}^{n\times n}. \tag{29}$$
9. The positive-negative alternating anti-identity matrix
For the $2n$-dimensional positive-negative alternating anti-identity matrix
$$A = \mathrm{adiag}[1, -1, 1, -1, \ldots, -1] = \begin{bmatrix} & & & & 1\\ & & & -1 & \\ & & 1 & & \\ & ⋰ & & & \\ -1 & & & & \end{bmatrix} \in \mathbb{R}^{(2n)\times(2n)},$$
we have
$$A^2 = -I_{2n}, \quad A^3 = -A, \quad A^4 = I_{2n}, \quad A^5 = A, \quad A^6 = -I_{2n}, \quad A^7 = -A, \ldots,$$
$$\Phi(t) := e^{At} = I_{2n} + At + \frac{1}{2!}A^2 t^2 + \frac{1}{3!}A^3 t^3 + \frac{1}{4!}A^4 t^4 + \cdots$$
$$= \Big(I_{2n} + \frac{1}{2!}A^2 t^2 + \frac{1}{4!}A^4 t^4 + \cdots\Big) + \Big(At + \frac{1}{3!}A^3 t^3 + \frac{1}{5!}A^5 t^5 + \cdots\Big)$$
$$= \Big(1 - \frac{1}{2!}t^2 + \frac{1}{4!}t^4 - \cdots\Big)I_{2n} + \Big(t - \frac{1}{3!}t^3 + \frac{1}{5!}t^5 - \cdots\Big)A = \cos(t)I_{2n} + \sin(t)A \in \mathbb{R}^{(2n)\times(2n)}. \tag{30}$$
10. The dual-diagonal identity matrix
We define a dual-diagonal identity matrix, with ones on both the diagonal and the antidiagonal, as
$$A = \begin{bmatrix} 1 & & & 1\\ & \ddots & ⋰ & \\ & ⋰ & \ddots & \\ 1 & & & 1\end{bmatrix} \in \mathbb{R}^{n\times n}.$$
For even $n$, we have $A = I_n + \bar{I}_n$. Noting that $I_n$ and $\bar{I}_n$ commute and using (16), (24), and (25), we have
$$\Phi_A(t) := e^{At} = \exp(I_n t + \bar{I}_n t) = \exp(I_n t)\exp(\bar{I}_n t) = e^t\Big(\frac{e^t + e^{-t}}{2}I_n + \frac{e^t - e^{-t}}{2}\bar{I}_n\Big) = \frac{e^{2t}+1}{2}I_n + \frac{e^{2t}-1}{2}\bar{I}_n \in \mathbb{R}^{n\times n}. \tag{31}$$
For odd $n$, let $n = 2k+1$. We have $A = \mathrm{diag}[I_k, 0, I_k] + \bar{I}_{2k+1}$. Noting that $\mathrm{diag}[I_k, 0, I_k]$ and $\bar{I}_{2k+1}$ commute and using (16), (24), and (25), we have
$$\Phi_A(t) := e^{At} = \exp(\mathrm{diag}[I_k, 0, I_k]t)\exp(\bar{I}_{2k+1}t) = \mathrm{diag}[e^t I_k, 1, e^t I_k]\Big(\frac{e^t + e^{-t}}{2}I_{2k+1} + \frac{e^t - e^{-t}}{2}\bar{I}_{2k+1}\Big)$$
$$= \begin{bmatrix} \frac{e^{2t}+1}{2}I_k & 0 & \frac{e^{2t}-1}{2}\bar{I}_k\\ 0 & e^t & 0\\ \frac{e^{2t}-1}{2}\bar{I}_k & 0 & \frac{e^{2t}+1}{2}I_k\end{bmatrix} \in \mathbb{R}^{n\times n}. \tag{32}$$

11. The square identity matrix
For a square identity matrix $A$ with $A^2 = I$, we have
$$A^{2k-1} = A, \quad A^{2k} = I, \quad k = 1, 2, \ldots,$$
$$\Phi(t) := e^{At} = I + At + \frac{1}{2!}A^2 t^2 + \frac{1}{3!}A^3 t^3 + \frac{1}{4!}A^4 t^4 + \cdots$$
$$= \Big(I + \frac{1}{2!}A^2 t^2 + \frac{1}{4!}A^4 t^4 + \cdots\Big) + \Big(At + \frac{1}{3!}A^3 t^3 + \frac{1}{5!}A^5 t^5 + \cdots\Big)$$
$$= \Big(1 + \frac{1}{2!}t^2 + \frac{1}{4!}t^4 + \cdots\Big)I + \Big(t + \frac{1}{3!}t^3 + \frac{1}{5!}t^5 + \cdots\Big)A = \mathrm{ch}(t)I + \mathrm{sh}(t)A. \tag{33}$$
12. The squared anti-identity matrix
For a squared anti-identity matrix $A$ with $A^2 = -I$, we have
$$A^3 = -A, \quad A^4 = I, \quad A^5 = A, \quad A^6 = -I, \quad A^7 = -A, \quad A^8 = I, \ldots,$$
$$\Phi(t) := e^{At} = I + At + \frac{1}{2!}A^2 t^2 + \frac{1}{3!}A^3 t^3 + \frac{1}{4!}A^4 t^4 + \cdots$$
$$= \Big(1 - \frac{1}{2!}t^2 + \frac{1}{4!}t^4 - \frac{1}{6!}t^6 + \cdots\Big)I + \Big(t - \frac{1}{3!}t^3 + \frac{1}{5!}t^5 - \frac{1}{7!}t^7 + \cdots\Big)A = \cos(t)I + \sin(t)A. \tag{34}$$
13. The idempotent matrix
For an idempotent matrix $A$ with $A^k = A$, $k = 1, 2, \ldots$, we have
$$\Phi(t) := e^{At} = I + At + \frac{1}{2!}A^2 t^2 + \frac{1}{3!}A^3 t^3 + \frac{1}{4!}A^4 t^4 + \cdots = I + At + \frac{1}{2!}At^2 + \frac{1}{3!}At^3 + \frac{1}{4!}At^4 + \cdots$$
$$= I + \Big(t + \frac{1}{2!}t^2 + \frac{1}{3!}t^3 + \frac{1}{4!}t^4 + \cdots\Big)A = I + (e^t - 1)A.$$
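A quick numerical check of $\Phi(t) = I + (e^t - 1)A$ for a projector; this sketch is an editorial illustration assuming NumPy/SciPy, with an arbitrary rank-one projector as the example:

```python
import numpy as np
from scipy.linalg import expm

# Idempotent matrix (an orthogonal projector): A @ A = A.
A = np.array([[0.5, 0.5], [0.5, 0.5]])
t = 1.1
closed = np.eye(2) + (np.exp(t) - 1.0) * A   # Phi(t) = I + (e^t - 1) A
print(np.allclose(closed, expm(A * t)))
```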

14. The anti-idempotent matrix
For an anti-idempotent matrix $A$ with $A^k = (-1)^{k-1}A$, $k = 1, 2, \ldots$, we have
$$\Phi(t) := e^{At} = I + At + \frac{1}{2!}A^2 t^2 + \frac{1}{3!}A^3 t^3 + \frac{1}{4!}A^4 t^4 + \cdots$$
$$= I + \Big(-\frac{1}{2!}t^2 - \frac{1}{4!}t^4 - \cdots\Big)A + \Big(t + \frac{1}{3!}t^3 + \frac{1}{5!}t^5 + \cdots\Big)A$$
$$= I + [1 - \cosh(t)]A + \sinh(t)A = I + \Big(1 - \frac{e^t + e^{-t}}{2} + \frac{e^t - e^{-t}}{2}\Big)A = I + (1 - e^{-t})A.$$
15. The block identity matrix
For the block identity matrix
$$A = \begin{bmatrix} I_n & I_n\\ I_n & I_n\end{bmatrix} \in \mathbb{R}^{(2n)\times(2n)},$$
we have
$$A^2 = 2A, \quad A^3 = 2^2 A, \quad A^4 = 2^3 A, \quad \ldots, \quad A^k = 2^{k-1}A, \quad k = 2, 3, \ldots,$$
$$\Phi(t) := e^{At} = I_{2n} + At + \frac{1}{2!}A^2 t^2 + \frac{1}{3!}A^3 t^3 + \frac{1}{4!}A^4 t^4 + \cdots = I_{2n} + \Big(t + \frac{1}{2!}2t^2 + \frac{1}{3!}2^2 t^3 + \frac{1}{4!}2^3 t^4 + \cdots\Big)A$$
$$= I_{2n} + \frac{1}{2}\Big(2t + \frac{1}{2!}(2t)^2 + \frac{1}{3!}(2t)^3 + \frac{1}{4!}(2t)^4 + \cdots\Big)A = I_{2n} + \frac{e^{2t}-1}{2}A = \begin{bmatrix} \frac{e^{2t}+1}{2}I_n & \frac{e^{2t}-1}{2}I_n\\ \frac{e^{2t}-1}{2}I_n & \frac{e^{2t}+1}{2}I_n\end{bmatrix} \in \mathbb{R}^{(2n)\times(2n)}.$$
In a similar way, we can find the transition matrices of the following matrices:
$$A = \begin{bmatrix} I_n & I_n\\ -I_n & I_n\end{bmatrix} \in \mathbb{R}^{(2n)\times(2n)}, \qquad A = \begin{bmatrix} I_n & -I_n\\ -I_n & I_n\end{bmatrix} \in \mathbb{R}^{(2n)\times(2n)},$$
$$A = \begin{bmatrix} I_n & I_n & \cdots & I_n\\ I_n & I_n & \cdots & I_n\\ \vdots & \vdots & & \vdots\\ I_n & I_n & \cdots & I_n\end{bmatrix} \in \mathbb{R}^{(kn)\times(kn)}, \qquad A = \begin{bmatrix} -I_n & I_n & \cdots & I_n\\ I_n & -I_n & \cdots & I_n\\ \vdots & \vdots & \ddots & \vdots\\ I_n & I_n & \cdots & -I_n\end{bmatrix} \in \mathbb{R}^{(kn)\times(kn)}.$$

16. The unity matrix
For the unity matrix with all elements equal to one,
$$A = \begin{bmatrix} 1 & 1 & \cdots & 1\\ 1 & 1 & \cdots & 1\\ \vdots & \vdots & & \vdots\\ 1 & 1 & \cdots & 1\end{bmatrix} \in \mathbb{R}^{n\times n},$$
we have
$$A^2 = nA, \quad A^3 = n^2 A, \quad A^4 = n^3 A, \quad \ldots, \quad A^k = n^{k-1}A, \quad k = 2, 3, \ldots,$$
$$\Phi(t) := e^{At} = I_n + At + \frac{1}{2!}A^2 t^2 + \frac{1}{3!}A^3 t^3 + \frac{1}{4!}A^4 t^4 + \cdots = I_n + \Big(t + \frac{1}{2!}nt^2 + \frac{1}{3!}n^2 t^3 + \frac{1}{4!}n^3 t^4 + \cdots\Big)A$$
$$= I_n + \frac{1}{n}\Big(nt + \frac{1}{2!}(nt)^2 + \frac{1}{3!}(nt)^3 + \frac{1}{4!}(nt)^4 + \cdots\Big)A = I_n + \frac{e^{nt}-1}{n}A = \begin{bmatrix}
\frac{e^{nt}+n-1}{n} & \frac{e^{nt}-1}{n} & \cdots & \frac{e^{nt}-1}{n}\\
\frac{e^{nt}-1}{n} & \frac{e^{nt}+n-1}{n} & \cdots & \frac{e^{nt}-1}{n}\\
\vdots & \vdots & \ddots & \vdots\\
\frac{e^{nt}-1}{n} & \frac{e^{nt}-1}{n} & \cdots & \frac{e^{nt}+n-1}{n}
\end{bmatrix} \in \mathbb{R}^{n\times n}.$$
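The closed form for the unity matrix is easy to confirm numerically; the sketch below is an editorial illustration assuming NumPy/SciPy, with $n = 4$:

```python
import numpy as np
from scipy.linalg import expm

n, t = 4, 0.3
A = np.ones((n, n))                  # all-ones matrix, A^k = n^{k-1} A
closed = np.eye(n) + (np.exp(n * t) - 1.0) / n * A
print(np.allclose(closed, expm(A * t)))
```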

17. The nilpotent matrix
If there exists an integer $k$ such that $A^k = 0$, then $A$ is called a nilpotent matrix. This means that
$$A^{k+1} = A^{k+2} = \cdots = 0.$$
Thus the transition matrix of a nilpotent matrix $A$ with $A^k = 0$ is given by the finite sum
$$\Phi(t) := e^{At} = I + At + \frac{1}{2!}A^2 t^2 + \frac{1}{3!}A^3 t^3 + \cdots + \frac{1}{(k-1)!}A^{k-1}t^{k-1}. \tag{35}$$
If $A$ is any nilpotent matrix with $A^k = 0$, then $(I - A)$ is invertible and
$$(I - A)^{-1} = I + A + A^2 + \cdots + A^{k-1}.$$
18. The strictly triangular matrix
A triangular matrix with zero diagonal elements is called a strictly triangular matrix. Strictly triangular matrices include strictly upper triangular and strictly lower triangular matrices. A strictly triangular matrix is nilpotent.
If $A$ is an $n \times n$ strictly triangular matrix, then its $n$th power is the zero matrix, i.e., $A^n = 0$. For a strictly upper triangular matrix
$$U_n = \begin{bmatrix}
0 & a_{12} & a_{13} & \cdots & \cdots & a_{1n}\\
0 & 0 & a_{23} & \cdots & \cdots & a_{2n}\\
0 & 0 & 0 & a_{34} & \cdots & a_{3n}\\
\vdots & \vdots & \vdots & \ddots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & 0 & a_{n-1,n}\\
0 & 0 & 0 & \cdots & 0 & 0
\end{bmatrix} \in \mathbb{R}^{n\times n},$$
we have
$$\Phi_{U_n}(t) := \exp(U_n t) = I_n + U_n t + \frac{1}{2!}U_n^2 t^2 + \frac{1}{3!}U_n^3 t^3 + \cdots + \frac{1}{(n-1)!}U_n^{n-1}t^{n-1}. \tag{36}$$
$\Phi_{U_n}(t)$ is an upper triangular matrix with unit diagonal; its strictly upper triangular part comes from the powers of $U_n$.
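For nilpotent matrices the series (35)/(36) terminates, so the exponential is exact in finitely many terms. The sketch below is an editorial illustration assuming NumPy/SciPy, with a strictly upper triangular $3\times 3$ example ($U^3 = 0$):

```python
import numpy as np
from scipy.linalg import expm

def expm_nilpotent(A, t, k):
    """Finite-sum exp(At) for a nilpotent A with A^k = 0, as in eq. (35)."""
    n = A.shape[0]
    result, term = np.eye(n), np.eye(n)
    for i in range(1, k):
        term = term @ (A * t) / i    # term holds (At)^i / i!
        result = result + term
    return result

U = np.array([[0.0, 1.0, 2.0], [0.0, 0.0, 3.0], [0.0, 0.0, 0.0]])
print(np.allclose(expm_nilpotent(U, 0.9, 3), expm(U * 0.9)))
```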

5. Examples

Example 1. Consider the following dynamic system:
$$\dot{x}(t) = \begin{bmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0\end{bmatrix}x(t), \qquad x(0) = \begin{bmatrix} 1\\ 2\\ 6\end{bmatrix}.$$
Compute its state transition matrix $\Phi(t)$ and the solution $x(t)$.
Solution. For this example, we have
$$A = \begin{bmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0\end{bmatrix}, \qquad A^2 = \begin{bmatrix} 0 & 0 & 1\\ 0 & 0 & 0\\ 0 & 0 & 0\end{bmatrix}, \qquad A^3 = A^4 = \cdots = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\end{bmatrix}.$$
The state transition matrix is given by
$$\Phi(t) = e^{At} = I_3 + At + \frac{1}{2!}A^2 t^2 + \frac{1}{3!}A^3 t^3 + \cdots = I_3 + At + \frac{1}{2!}A^2 t^2$$
$$= \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix} + \begin{bmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0\end{bmatrix}t + \frac{1}{2!}\begin{bmatrix} 0 & 0 & 1\\ 0 & 0 & 0\\ 0 & 0 & 0\end{bmatrix}t^2 = \begin{bmatrix} 1 & t & \frac{t^2}{2!}\\ 0 & 1 & t\\ 0 & 0 & 1\end{bmatrix}.$$
The solution is
$$x(t) = \Phi(t)x(0) = e^{At}x(0) = \begin{bmatrix} 1 & t & \frac{t^2}{2!}\\ 0 & 1 & t\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix} 1\\ 2\\ 6\end{bmatrix} = \begin{bmatrix} 1 + 2t + 3t^2\\ 2 + 6t\\ 6\end{bmatrix}.$$
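Example 1 can be reproduced with a few lines of code; this sketch is an editorial illustration assuming NumPy/SciPy, evaluating the solution at $t = 1$:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]])
x0 = np.array([1.0, 2.0, 6.0])
t = 1.0
x = expm(A * t) @ x0
print(x)   # [1 + 2t + 3t^2, 2 + 6t, 6] at t = 1, i.e., [6, 8, 6]
```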

Example 2. Compute the transition matrix of the following matrix:
$$A = \begin{bmatrix} -3 & -2\\ 1 & 0\end{bmatrix}.$$
Solution. Let
$$\det[sI_2 - A] = \begin{vmatrix} s+3 & 2\\ -1 & s\end{vmatrix} = s^2 + 3s + 2 = (s+1)(s+2) = 0,$$
whose two distinct roots are $s = \lambda_1 = -1$ and $s = \lambda_2 = -2$, so $A$ is diagonalizable. Compute two eigenvectors $p_1 = \begin{bmatrix} p_{11}\\ p_{21}\end{bmatrix}$ and $p_2 = \begin{bmatrix} p_{12}\\ p_{22}\end{bmatrix}$. Let $Ap_1 = \lambda_1 p_1$, or
$$\begin{bmatrix} -3 & -2\\ 1 & 0\end{bmatrix}\begin{bmatrix} p_{11}\\ p_{21}\end{bmatrix} = -1\begin{bmatrix} p_{11}\\ p_{21}\end{bmatrix}.$$
It leads to two equations:
$$-3p_{11} - 2p_{21} = -p_{11}, \qquad p_{11} = -p_{21}.$$
Only one equation is independent; a set of solutions is $p_{21} = 1$ and $p_{11} = -1$, so
$$p_1 = \begin{bmatrix} p_{11}\\ p_{21}\end{bmatrix} = \begin{bmatrix} -1\\ 1\end{bmatrix}.$$
Let $Ap_2 = \lambda_2 p_2$, or
$$\begin{bmatrix} -3 & -2\\ 1 & 0\end{bmatrix}\begin{bmatrix} p_{12}\\ p_{22}\end{bmatrix} = -2\begin{bmatrix} p_{12}\\ p_{22}\end{bmatrix}.$$
This means that
$$-3p_{12} - 2p_{22} = -2p_{12}, \qquad p_{12} = -2p_{22}.$$
Thus we take $p_{22} = -1$ and $p_{12} = 2$, so $p_2 = \begin{bmatrix} p_{12}\\ p_{22}\end{bmatrix} = \begin{bmatrix} 2\\ -1\end{bmatrix}$. Construct the transformation matrix
$$T = [p_1, p_2] = \begin{bmatrix} -1 & 2\\ 1 & -1\end{bmatrix}, \qquad T^{-1} = \begin{bmatrix} 1 & 2\\ 1 & 1\end{bmatrix}.$$
Hence we have
$$\Lambda := T^{-1}AT = \begin{bmatrix} 1 & 2\\ 1 & 1\end{bmatrix}\begin{bmatrix} -3 & -2\\ 1 & 0\end{bmatrix}\begin{bmatrix} -1 & 2\\ 1 & -1\end{bmatrix} = \begin{bmatrix} -1 & \\ & -2\end{bmatrix}, \qquad A = T\Lambda T^{-1},$$
$$\Phi(t) := e^{At} = e^{T\Lambda T^{-1}t} = Te^{\Lambda t}T^{-1} = \begin{bmatrix} -1 & 2\\ 1 & -1\end{bmatrix}\begin{bmatrix} e^{-t} & \\ & e^{-2t}\end{bmatrix}\begin{bmatrix} 1 & 2\\ 1 & 1\end{bmatrix} = \begin{bmatrix} -e^{-t} & 2e^{-2t}\\ e^{-t} & -e^{-2t}\end{bmatrix}\begin{bmatrix} 1 & 2\\ 1 & 1\end{bmatrix} = \begin{bmatrix} -e^{-t} + 2e^{-2t} & -2e^{-t} + 2e^{-2t}\\ e^{-t} - e^{-2t} & 2e^{-t} - e^{-2t}\end{bmatrix}.$$
If $A$ is not diagonalizable, it can still be transformed into the Jordan form $A = TJT^{-1}$, where $J = \mathrm{blockdiag}[J_1, J_2, \ldots, J_k] \in \mathbb{F}^{n\times n}$, each $J_i$ is a Jordan block, and $T$ is a matrix consisting of the generalized eigenvectors of $A$. Then we have
$$\Phi(t) := e^{At} = e^{TJT^{-1}t} = T\,\mathrm{blockdiag}[e^{J_1 t}, e^{J_2 t}, \ldots, e^{J_k t}]\,T^{-1} \in \mathbb{F}^{n\times n}. \tag{37}$$
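The closed form derived in Example 2 can be checked against a general-purpose matrix exponential. The sketch below is an editorial illustration assuming NumPy/SciPy:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-3.0, -2.0], [1.0, 0.0]])
t = 0.5
e1, e2 = np.exp(-t), np.exp(-2 * t)
Phi = np.array([[-e1 + 2 * e2, -2 * e1 + 2 * e2],
                [e1 - e2, 2 * e1 - e2]])   # closed form from Example 2
print(np.allclose(Phi, expm(A * t)))
```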

Example 3. Assume that the transition matrix is
$$\Phi(t) = \exp(At) = \begin{bmatrix} -e^{-t} + 2e^{-2t} & -2e^{-t} + 2e^{-2t}\\ e^{-t} - e^{-2t} & 2e^{-t} - e^{-2t}\end{bmatrix}.$$
Compute $\Phi^{-1}(t)$, $\Phi^m(t)$, and $A$.
Solution. According to the definition of the transition matrix, we have
$$\Phi^{-1}(t) = \exp(-At) = \exp(A(-t)) = \Phi(-t) = \begin{bmatrix} -e^{t} + 2e^{2t} & -2e^{t} + 2e^{2t}\\ e^{t} - e^{2t} & 2e^{t} - e^{2t}\end{bmatrix},$$
$$\Phi^m(t) = \exp[mAt] = \exp[A(mt)] = \Phi(mt) = \begin{bmatrix} -e^{-mt} + 2e^{-2mt} & -2e^{-mt} + 2e^{-2mt}\\ e^{-mt} - e^{-2mt} & 2e^{-mt} - e^{-2mt}\end{bmatrix}.$$
Since
$$\dot{\Phi}(t) = A\exp(At) = \exp(At)A,$$
letting $t = 0$ gives
$$A = \dot{\Phi}(0) = \begin{bmatrix} e^{-t} - 4e^{-2t} & 2e^{-t} - 4e^{-2t}\\ -e^{-t} + 2e^{-2t} & -2e^{-t} + 2e^{-2t}\end{bmatrix}\bigg|_{t=0} = \begin{bmatrix} -3 & -2\\ 1 & 0\end{bmatrix}.$$

Example 4. Assume that $T$ is invertible. Applying the linear transformation $x(t) = T\bar{x}(t)$ to Eq. (2) gives
$$\dot{\bar{x}}(t) = T^{-1}AT\bar{x}(t) + T^{-1}bu(t) = \bar{A}\bar{x}(t) + \bar{b}u(t), \qquad \bar{x}(t_0) = T^{-1}x_0, \tag{38}$$
where
$$\bar{A} := T^{-1}AT \in \mathbb{R}^{n\times n}, \qquad \bar{b} := T^{-1}b \in \mathbb{R}^n.$$
Let $\Phi(t) := \Phi_A(t) = e^{At}$ be the transition matrix of the system in (2). According to the property of the transition matrix under similarity transformations in (17), the transition matrix of the system in (38) is given by
$$\bar{\Phi}(t) = T^{-1}\Phi(t)T. \tag{39}$$

6. Conclusions

This paper discusses some basic properties of matrix exponentials or transition matrices and derives the transition matrices of some special matrices related to linear system theory, signal processing, and system identification [28-31], e.g., the maximum likelihood approaches [32-34], the hierarchical identification methods [35-37], the coupled identification methods [38,39], and the iterative identification methods [40-42].

References

[1] B. Zhou, Z.Y. Li, G.R. Duan, et al, Solutions to a family of matrix equations by using the Kronecker matrix polynomials, Applied Mathematics and
Computation 212 (2) (2009) 327–336.
[2] F. Ding, P.X. Liu, J. Ding, Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle, Applied
Mathematics and Computation 197 (1) (2008) 41–50.
[3] F. Ding, Transformations between some special matrices, Computers & Mathematics with Applications 59 (8) (2010) 2676–2695.
[4] M. Dehghan, M. Hajarian, Computing matrix functions using mixed interpolation methods, Mathematical and Computer Modelling 52 (5–6) (2010)
826–836.
[5] B.B. Wu, Explicit formulas for the exponentials of some special matrices, Applied Mathematics Letters 24 (5) (2011) 642–647.
[6] Y. Shi, F. Ding, T. Chen, 2-Norm based recursive design of transmultiplexers with designable filter length, Circuits, Systems and Signal Processing 25 (4)
(2006) 447–462.
[7] Y. Shi, H. Fang, Kalman filter based identification for systems with randomly missing measurements in a network environment, International Journal of
Control 83 (3) (2010) 538–551.
[8] J.B. Zhang, F. Ding, Y. Shi, Self-tuning control based on multi-innovation stochastic gradient parameter estimation, Systems & Control Letters 58 (1)
(2009) 69–75.
[9] Y. Shi, B. Yu, Output feedback stabilization of networked control systems with random delays modeled by Markov chains, IEEE Transactions on
Automatic Control 54 (7) (2009) 1668–1674.
[10] G. Moore, Orthogonal polynomial expansions for the matrix exponential, Linear Algebra and its Applications 435 (2) (2011) 537–559.
[11] Z. Al Zhour, A. Kilicman, Some new connections between matrix products for partitioned and non-partitioned matrices, Computers & Mathematics
with Applications 54 (6) (2007) 763–784.
[12] F. Ding, T. Chen, Iterative least squares solutions of coupled Sylvester matrix equations, Systems & Control Letters 54 (2) (2005) 95–107.
[13] F. Ding, T. Chen, On iterative solutions of general coupled matrix equations, SIAM Journal on Control and Optimization 44 (6) (2006) 2269–2284.
[14] H.M. Zhang, F. Ding, On the Kronecker products and their applications, Journal of Applied Mathematics (2013) 1–8, https://dx.doi.org/10.1155/2013/
296185. Article ID 296185.
[15] M. Dehghan, M. Hajarian, An iterative method for solving the generalized coupled Sylvester matrix equations over generalized bisymmetric matrices,
Applied Mathematical Modelling 34 (3) (2010) 639–654.
[16] M. Dehghan, M. Hajarian, Analysis of an iterative algorithm to solve the generalized coupled Sylvester matrix equations, Applied Mathematical
Modelling 35 (7) (2011) 3285–3300.
[17] Y. Zhang, G.M. Cui, Bias compensation methods for stochastic systems with colored noise, Applied Mathematical Modelling 35 (4) (2011) 1709–1716.
[18] Y. Zhang, Unbiased identification of a class of multi-input single-output systems with correlated disturbances using bias compensation methods,
Mathematical and Computer Modelling 53 (9–10) (2011) 1810–1819.
[19] Y.J. Liu, Y.S. Xiao, X.L. Zhao, Multi-innovation stochastic gradient algorithm for multiple-input single-output systems using the auxiliary model,
Applied Mathematics and Computation 215 (4) (2009) 1477–1483.
[20] Y.S. Xiao, Y. Zhang, J. Ding, J.Y. Dai, The residual based interactive least squares algorithms and simulation studies, Computers & Mathematics with
Applications 58 (6) (2009) 1190–1197.
[21] L.Y. Wang, L. Xie, X.F. Wang, The residual based interactive stochastic gradient algorithms for controlled moving average models, Applied Mathematics
and Computation 211 (2) (2009) 442–449.
[22] Y.J. Liu, J. Sheng, R.F. Ding, Convergence of stochastic gradient estimation algorithm for multivariable ARX-like systems, Computers & Mathematics
with Applications 59 (8) (2010) 2615–2627.
[23] J. Ding, L.L. Han, X.M. Chen, Time series AR modeling with missing observations based on the polynomial transformation, Mathematical and Computer
Modelling 51 (5–6) (2010) 527–536.
[24] J.H. Li, R.F. Ding, Y. Yang, Iterative parameter identification methods for nonlinear functions, Applied Mathematical Modelling 26 (6) (2012) 2739–
2750.
[25] F. Ding, T. Chen, Performance analysis of multi-innovation gradient type identification methods, Automatica 43 (1) (2007) 1–14.
[26] F. Ding, P.X. Liu, H.Z. Yang, Parameter identification and intersample output estimation for dual-rate systems, IEEE Transactions on Systems, Man, and
Cybernetics, Part A: Systems and Humans 38 (4) (2008) 966–975.
[27] F. Ding, P.X. Liu, G. Liu, Auxiliary model based multi-innovation extended stochastic gradient parameter estimation with colored measurement noises,
Signal Processing 89 (10) (2009) 1883–1890.
[28] J. Ding, F. Ding, Bias compensation based parameter estimation for output error moving average systems, International Journal of Adaptive Control and
Signal Processing 25 (12) (2011) 1100–1111.
[29] F. Ding, Hierarchical multi-innovation stochastic gradient algorithm for Hammerstein nonlinear system modeling, Applied Mathematical Modelling 37
(4) (2013) 1694–1704.
[30] F. Ding, Two-stage least squares based iterative estimation algorithm for CARARMA system modeling, Applied Mathematical Modelling 37 (7) (2013)
4798–4808.
[31] F. Ding, Decomposition based fast least squares algorithm for output error systems, Signal Processing 93 (5) (2013) 1235–1242.
[32] J.H. Li, F. Ding, G.W. Yang, Maximum likelihood least squares identification method for input nonlinear finite impulse response moving average
systems, Mathematical and Computer Modelling 55 (3–4) (2012) 442–450.
[33] W. Wang, F. Ding, J.Y. Dai, Maximum likelihood least squares identification for systems with autoregressive moving average noise, Applied
Mathematical Modelling 36 (5) (2012) 1842–1853.
326 F. Ding / Applied Mathematics and Computation 223 (2013) 311–326

[34] J.H. Li, F. Ding, Maximum likelihood stochastic gradient estimation for Hammerstein systems with colored noise based on the key term separation
technique, Computers & Mathematics with Applications 62 (11) (2011) 4170–4177.
[35] Z.N. Zhang, F. Ding, X.G. Liu, Hierarchical gradient based iterative parameter estimation algorithm for multivariable output error moving average
systems, Computers & Mathematics with Applications 61 (3) (2011) 672–682.
[36] H.Q. Han, L. Xie, et al., Hierarchical least squares based iterative identification for multivariable systems with moving average noises, Mathematical and
Computer Modelling 51 (9–10) (2010) 1213–1220.
[37] J. Ding, F. Ding, X.P. Liu, G. Liu, Hierarchical least squares identification for linear SISO systems with dual-rate sampled-data, IEEE Transactions on
Automatic Control 56 (11) (2011) 2677–2683.
[38] F. Ding, G. Liu, X.P. Liu, Partially coupled stochastic gradient identification methods for non-uniformly sampled systems, IEEE Transactions on
Automatic Control 55 (8) (2010) 1976–1981.
[39] F. Ding, Coupled-least-squares identification for multivariable systems, IET Control Theory and Applications 7 (1) (2013) 68–79.
[40] F. Ding, X.P. Liu, G. Liu, Identification methods for Hammerstein nonlinear systems, Digital Signal Processing 21 (2) (2011) 215–238.
[41] F. Ding, Y.J. Liu, B. Bao, Gradient based and least squares based iterative estimation algorithms for multi-input multi-output systems, Proceedings of
the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 226 (1) (2012) 43–55.
[42] F. Ding, X.G. Liu, J. Chu, Gradient-based and least-squares-based iterative algorithms for Hammerstein systems using the hierarchical identification
principle, IET Control Theory and Applications 7 (2) (2013) 176–184.
