Computation of Matrix Exponentials of Special Matrices
Keywords:
Matrix exponential
Transition matrix
Similarity transformation
Jordan matrix
Special matrix
Linear system

Abstract: Computing matrix exponentials or transition matrices is important in signal processing and control systems. This paper studies methods of computing the transition matrices related to linear dynamic systems, discusses and gives the properties of the transition matrices, and explores the transition matrices of some special matrices. Some examples are provided to illustrate the proposed methods.

© 2013 Elsevier Inc. All rights reserved.
1. Introduction
In engineering, we often encounter algebraic equations, differential equations, or decompositions of some special matrices [1–3]. In linear cases, these differential equations can be transformed into a set of first-order differential equations of the form $\dot{x}(t) = Ax(t)$, where $A \in \mathbb{R}^{n\times n}$ is a real constant matrix and $x(t) \in \mathbb{R}^n$ is a state vector. Solving these matrix differential equations leads to the matrix exponential $e^{At}$ of the system matrix $A$, i.e., the transition matrix. Computing matrix functions plays an important role in science and engineering, including control theory. By applying mixed interpolation methods, Dehghan and Hajarian gave an algorithm for computing the matrix function $f(A)$ and proved its existence and uniqueness [4]. Wu discussed the matrix exponential of a matrix $A$ satisfying the relation $A^{k+1} = qA^k$ and gave explicit formulas for computing $e^A$ [5].
Matrix functions and matrix exponentials have wide applications in signal filtering [6,7] and controller design [8,9]. Many different algorithms for computing matrix exponentials have been reported. For example, Moore computed matrix exponentials by expanding them in Chebyshev, Legendre, or Laguerre orthogonal polynomials [10]. Matrix operations are useful in linear algebra and matrix theory. Al Zhour and Kilicman discussed several matrix products for partitioned and non-partitioned matrices and some useful connections among them [11], including the Kronecker product, Hadamard product, block Kronecker product, block Hadamard product, and block-matrix inner product (i.e., star product or H product) [12–14]. Dehghan and Hajarian presented an iterative method for solving the generalized coupled Sylvester matrix equations over generalized bisymmetric matrices and analyzed its performance [15,16].
In the area of system identification [17–21] and parameter estimation [22–24], e.g., multi-innovation identification [25–27], one task is to estimate the parameter matrix $A$ and vector $b$ of the state space model in (2) of the next section, and to determine the state vector $x(t)$, which leads to the matrix exponential or transition matrix $e^{At}$. This paper studies the properties of the transition matrix and computes the transition matrices of some special matrices [3].
This work was supported by the National Natural Science Foundation of China (No. 61273194), the Natural Science Foundation of Jiangsu Province (China, BK2012549), the Priority Academic Program Development of Jiangsu Higher Education Institutions, and the 111 Project (B12018).
Address: Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi 214122, PR China.
E-mail address: [email protected]
0096-3003/$ - see front matter © 2013 Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.amc.2013.07.079
312 F. Ding / Applied Mathematics and Computation 223 (2013) 311–326
The rest of the paper is organized as follows. Section 2 derives the solution of state equations and Section 3 discusses the
state transition matrix and its properties. Section 4 computes the transition matrices of some special matrices. Section 5 pro-
vides an illustrative example to validate the proposed methods. Finally, Section 6 offers some concluding remarks.
2. The solution of the state equations

This section introduces some basic facts used in the following sections; they can be found in standard textbooks on modern control theory.
A continuous-time system described by a linear differential equation with constant coefficients can be transformed into a
set of the first-order differential equations:
$$\begin{cases}
\dot{x}_1(t) = a_{11}x_1(t) + a_{12}x_2(t) + \cdots + a_{1n}x_n(t) + b_1u(t),\\
\dot{x}_2(t) = a_{21}x_1(t) + a_{22}x_2(t) + \cdots + a_{2n}x_n(t) + b_2u(t),\\
\qquad\vdots\\
\dot{x}_n(t) = a_{n1}x_1(t) + a_{n2}x_2(t) + \cdots + a_{nn}x_n(t) + b_nu(t),
\end{cases} \tag{1}$$
where $t$ is the time variable, $u(t) \in \mathbb{R}$ is the input of the system, and $x_i(t) \in \mathbb{R}$, $i = 1, 2, \ldots, n$, are the state variables of the system.
Construct an $n$-dimensional state vector:
$$x(t) := \begin{bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{bmatrix} \in \mathbb{R}^n.$$
Let
$$A := \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix} \in \mathbb{R}^{n\times n}, \qquad
b := \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} \in \mathbb{R}^n.$$
Eq. (1) can be written in compact form as
$$\dot{x}(t) = Ax(t) + bu(t), \quad x(t_0) = x_0. \tag{2}$$
When the input $u(t) = 0$, we obtain the homogeneous equation
$$\dot{x}(t) = Ax(t), \quad x(t_0) = x_0. \tag{3}$$
Here, we use a power series to solve this equation. Assume that the state vector has the form
$$x(t) = a_0 + a_1t + a_2t^2 + \cdots + a_it^i + \cdots, \tag{4}$$
where $a_i \in \mathbb{R}^n$ are coefficient vectors to be determined. Substituting $x(t)$ into (3) and matching coefficients gives $a_i = \frac{1}{i!}A^ix(0)$, so
$$x(t) = \left[I_n + At + \frac{1}{2!}A^2t^2 + \frac{1}{3!}A^3t^3 + \cdots + \frac{1}{i!}A^it^i + \cdots\right]x(0). \tag{5}$$
Recalling the Taylor series expansion of the exponential function,
$$e^t = 1 + t + \frac{1}{2!}t^2 + \frac{1}{3!}t^3 + \cdots,$$
we define the matrix exponential function
$$\exp(At) := e^{At} = I_n + At + \frac{1}{2!}A^2t^2 + \frac{1}{3!}A^3t^3 + \cdots + \frac{1}{i!}A^it^i + \cdots. \tag{6}$$
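For small matrices, the defining series (6) can be evaluated directly by accumulating terms. The following sketch (the matrix $A$ here is an arbitrary illustrative example, not one from the paper) compares a truncated series with SciPy's Padé-based `scipy.linalg.expm`; note that the raw series is numerically fragile for large $\|At\|$, so it serves only to illustrate the definition.

```python
import numpy as np
from scipy.linalg import expm

def expm_series(A, t, terms=30):
    """Truncated power series I + At + (At)^2/2! + ... from definition (6)."""
    n = A.shape[0]
    result = np.eye(n)
    term = np.eye(n)
    for i in range(1, terms):
        term = term @ (A * t) / i   # term now holds A^i t^i / i!
        result = result + term
    return result

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # hypothetical example matrix
t = 0.5
approx = expm_series(A, t)
exact = expm(A * t)                        # SciPy reference implementation
assert np.allclose(approx, exact)
```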
Thus the solution of the homogeneous equation $\dot{x}(t) = Ax(t)$ can be expressed as $x(t) = e^{At}x(0)$. For the inhomogeneous equation (2), writing $x(t) = e^{At}x_1(t)$ and solving for the vector $x_1(t)$ (variation of constants), we have
$$x(t) = e^{At}x_1(t) = e^{At}\left[x_1(t_0) + \int_{t_0}^{t} e^{-As}bu(s)\,\mathrm{d}s\right] = e^{At}x_1(t_0) + \int_{t_0}^{t} e^{A(t-s)}bu(s)\,\mathrm{d}s. \tag{10}$$
Setting $t = t_0$ gives
$$x_1(t_0) = e^{-At_0}x(t_0).$$
Inserting it into (10), we obtain the solution of Eq. (2):
$$x(t) = e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t} e^{A(t-s)}bu(s)\,\mathrm{d}s. \tag{11}$$
From here, we can see that $\Phi(t)$ is the unique solution of the matrix differential equation
$$\dot{\Phi}(t) = A\Phi(t), \quad \Phi(0) = I.$$
6. The multiplication of the transition matrices

The multiplication of transition matrices is commutative, i.e.,
$$\Phi(t)\Phi(t_1) = \Phi(t_1)\Phi(t), \tag{13}$$
and
$$\Phi^m(t) = \left(e^{At}\right)^m = e^{mAt} = e^{A(mt)} = \Phi(mt).$$
10. The transition matrix of the sum of commutable matrices

In general, $e^{(A+B)t} \neq e^{At}e^{Bt} \neq e^{Bt}e^{At}$, because $e^{(A+B)t}$ contains mixed terms $B^iA^k$:
$$e^{(A+B)t} = I_n + (A+B)t + \frac{1}{2!}(A+B)^2t^2 + \cdots = I_n + (A+B)t + \frac{1}{2!}\left(A^2 + AB + BA + B^2\right)t^2 + \cdots,$$
$$e^{At}e^{Bt} = \left[I_n + At + \frac{1}{2!}A^2t^2 + \cdots\right]\left[I_n + Bt + \frac{1}{2!}B^2t^2 + \cdots\right] = I_n + (A+B)t + \frac{1}{2!}\left(A^2 + 2AB + B^2\right)t^2 + \cdots,$$
$$e^{(A+B)t} - e^{At}e^{Bt} = \frac{1}{2}(BA - AB)t^2 + \cdots.$$
If and only if $AB = BA$, then $e^{(A+B)t} - e^{At}e^{Bt} = 0$.
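The role of commutativity can be seen numerically. In this sketch (the matrices are hypothetical examples), the commuting pair is built from two polynomials of the same matrix, which always commute, while the non-commuting pair uses the standard nilpotent shift matrices.

```python
import numpy as np
from scipy.linalg import expm

t = 0.7

# commuting pair: polynomials in the same matrix M commute
M = np.array([[1.0, 2.0], [0.0, 3.0]])
A, B = M, M @ M
assert np.allclose(A @ B, B @ A)
assert np.allclose(expm((A + B) * t), expm(A * t) @ expm(B * t))

# non-commuting pair: the product rule for exponentials fails
C = np.array([[0.0, 1.0], [0.0, 0.0]])
D = np.array([[0.0, 0.0], [1.0, 0.0]])
assert not np.allclose(C @ D, D @ C)
assert not np.allclose(expm((C + D) * t), expm(C * t) @ expm(D * t))
```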
If $A$ and $B$ are commutative ($AB = BA$), then we have $e^{(A+B)t} = e^{At}e^{Bt} = e^{Bt}e^{At}$.

If $A = PBP^{-1}$ for some nonsingular matrix $P$ (i.e., $A$ and $B$ are similar), then
$$\Phi_A(t) = Pe^{Bt}P^{-1}.$$
In fact, according to the definition of the transition matrix, we have
$$\Phi_A(t) = e^{At} = e^{PBP^{-1}t} = I_n + (PBP^{-1})t + \frac{1}{2!}(PBP^{-1})^2t^2 + \frac{1}{3!}(PBP^{-1})^3t^3 + \cdots$$
$$= I_n + PBP^{-1}t + \frac{1}{2!}PB^2P^{-1}t^2 + \frac{1}{3!}PB^3P^{-1}t^3 + \cdots$$
$$= P\left[I_n + Bt + \frac{1}{2!}B^2t^2 + \frac{1}{3!}B^3t^3 + \cdots\right]P^{-1} = Pe^{Bt}P^{-1} = P\Phi_B(t)P^{-1}.$$
Furthermore, if $A \in \mathbb{R}^{n\times n}$ is diagonalizable with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, and $P$ is the matrix whose columns are the eigenvectors of $A$, then we have
$$P^{-1}AP = \mathrm{diag}[\lambda_1, \lambda_2, \ldots, \lambda_n] =: \Lambda,$$
or $A = P\Lambda P^{-1}$, and
$$\Phi_A(t) = Pe^{\Lambda t}P^{-1} = P\,\mathrm{diag}\!\left[e^{\lambda_1t}, e^{\lambda_2t}, \ldots, e^{\lambda_nt}\right]P^{-1}.$$
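The diagonalization route can be checked directly with NumPy's eigendecomposition. Here the matrix $A$ is the $2\times 2$ example used later in the paper's Examples section; `scipy.linalg.expm` serves as the reference.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, -2.0], [1.0, 0.0]])   # example matrix from the paper
t = 0.4

# eigendecomposition A = P diag(lam) P^{-1}
lam, P = np.linalg.eig(A)
Phi_diag = P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P)

# Phi_A(t) = P e^{Lambda t} P^{-1} agrees with the matrix exponential
assert np.allclose(Phi_diag.real, expm(A * t))
```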
Computing the transition matrices requires the series expansions of the following functions.

The exponential functions:
$$e^{t} = 1 + t + \frac{1}{2!}t^2 + \frac{1}{3!}t^3 + \frac{1}{4!}t^4 + \frac{1}{5!}t^5 + \cdots,$$
$$e^{-t} = 1 - t + \frac{1}{2!}t^2 - \frac{1}{3!}t^3 + \frac{1}{4!}t^4 - \frac{1}{5!}t^5 + \cdots.$$

The hyperbolic functions:
$$\mathrm{ch}(t) := \cosh t = \frac{e^t + e^{-t}}{2} = 1 + \frac{1}{2!}t^2 + \frac{1}{4!}t^4 + \frac{1}{6!}t^6 + \cdots,$$
$$\mathrm{sh}(t) := \sinh t = \frac{e^t - e^{-t}}{2} = t + \frac{1}{3!}t^3 + \frac{1}{5!}t^5 + \frac{1}{7!}t^7 + \cdots.$$

Let $j = \sqrt{-1}$. From
$$e^{jt} = \cos(t) + j\sin(t) = 1 + jt - \frac{1}{2!}t^2 - j\frac{1}{3!}t^3 + \frac{1}{4!}t^4 + j\frac{1}{5!}t^5 - \cdots,$$
we have the trigonometric functions:
$$\cos(t) = \frac{e^{jt} + e^{-jt}}{2} = 1 - \frac{1}{2!}t^2 + \frac{1}{4!}t^4 - \frac{1}{6!}t^6 + \cdots,$$
$$\sin(t) = \frac{e^{jt} - e^{-jt}}{2j} = t - \frac{1}{3!}t^3 + \frac{1}{5!}t^5 - \frac{1}{7!}t^7 + \cdots.$$
The following relations hold:
$$\Phi(t) = \exp(I_nt) = I_n + I_nt + \frac{1}{2!}I_n^2t^2 + \frac{1}{3!}I_n^3t^3 + \frac{1}{4!}I_n^4t^4 + \cdots = I_n + I_nt + \frac{1}{2!}I_nt^2 + \frac{1}{3!}I_nt^3 + \frac{1}{4!}I_nt^4 + \cdots$$
$$= \left[I_n + \frac{1}{2!}I_nt^2 + \frac{1}{4!}I_nt^4 + \cdots\right] + \left[I_nt + \frac{1}{3!}I_nt^3 + \frac{1}{5!}I_nt^5 + \cdots\right]$$
$$= \left[1 + \frac{1}{2!}t^2 + \frac{1}{4!}t^4 + \cdots\right]I_n + \left[t + \frac{1}{3!}t^3 + \frac{1}{5!}t^5 + \cdots\right]I_n = \mathrm{ch}(t)I_n + \mathrm{sh}(t)I_n = \frac{e^t + e^{-t}}{2}I_n + \frac{e^t - e^{-t}}{2}I_n = e^tI_n \in \mathbb{R}^{n\times n}. \tag{25}$$
In mathematics, especially linear algebra, the exchange matrix $\hat{I}_n$ is a special case of a permutation matrix, where the 1 elements reside on the counterdiagonal and all other elements are zero. In other words, it is a "row-reversed" or "column-reversed" version of the identity matrix.

Any matrix $A$ satisfying the condition $A\hat{I}_n = \hat{I}_nA$ is said to be centrosymmetric. Any matrix $A$ satisfying the condition $A\hat{I}_n = \hat{I}_nA^{\mathsf{T}}$ is said to be persymmetric.

A symmetric matrix is a matrix whose entries are symmetric about the northwest-to-southeast diagonal. If a symmetric matrix is rotated by 90°, it becomes a persymmetric matrix. Symmetric persymmetric matrices are sometimes called bisymmetric matrices.
The following are two bisymmetric (hence centrosymmetric) matrices:
$$\begin{bmatrix} a & b & c \\ b & e & b \\ c & b & a \end{bmatrix}, \qquad
\begin{bmatrix} a & b & c & d \\ b & e & f & c \\ c & f & e & b \\ d & c & b & a \end{bmatrix}.$$
A $(2n-1)$-dimensional bisymmetric matrix has $1 + 3 + 5 + \cdots + (2n-1) = n^2$ independent elements, and a $2n$-dimensional bisymmetric matrix has $2 + 4 + 6 + \cdots + 2n = n(n+1)$ independent elements.
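These defining relations are easy to verify numerically with the exchange matrix built by `np.fliplr(np.eye(n))`. The sketch below uses the $4\times 4$ bisymmetric example above with arbitrary numeric values substituted for its entries.

```python
import numpy as np

J = np.fliplr(np.eye(4))   # 4x4 exchange matrix: row-reversed identity

# the 4x4 bisymmetric example with illustrative values for a..f
a, b, c, d, e, f = 1.0, 2.0, 3.0, 4.0, 5.0, 6.0
A = np.array([[a, b, c, d],
              [b, e, f, c],
              [c, f, e, b],
              [d, c, b, a]])

assert np.allclose(J @ J, np.eye(4))   # the exchange matrix is involutory
assert np.allclose(A @ J, J @ A)       # centrosymmetric: A J = J A
assert np.allclose(A @ J, J @ A.T)     # persymmetric:    A J = J A^T
assert np.allclose(A, A.T)             # symmetric, hence bisymmetric
```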
3. The diagonal matrix

For an $n$-dimensional diagonal matrix
$$\Lambda := \mathrm{diag}[\lambda_1, \lambda_2, \ldots, \lambda_n] \in \mathbb{F}^{n\times n},$$
we have
$$\Phi(t) = e^{\Lambda t} = I_n + \Lambda t + \frac{1}{2!}\Lambda^2t^2 + \frac{1}{3!}\Lambda^3t^3 + \cdots = \mathrm{diag}\!\left[e^{\lambda_1t}, e^{\lambda_2t}, \ldots, e^{\lambda_nt}\right] \in \mathbb{F}^{n\times n}. \tag{26}$$
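Formula (26) can be confirmed in one line: the exponential of a diagonal matrix is the diagonal matrix of scalar exponentials (the eigenvalues used here are arbitrary illustrative values).

```python
import numpy as np
from scipy.linalg import expm

lam = np.array([1.0, -0.5, 2.0])   # hypothetical eigenvalues
t = 0.3
D = np.diag(lam)
Phi = expm(D * t)
assert np.allclose(Phi, np.diag(np.exp(lam * t)))
```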
4. The anti-diagonal matrix

We denote an $n$-dimensional anti-diagonal matrix by
$$\hat{\Lambda} := \mathrm{adiag}[a_1, a_2, \ldots, a_n] = \begin{bmatrix}
 & & & a_1 \\
 & & a_2 & \\
 & \iddots & & \\
a_n & & &
\end{bmatrix} \in \mathbb{F}^{n\times n}.$$
Let $a_ia_{n-i+1} = d^2$ for $i = 1, 2, \ldots, n$. Since $\hat{\Lambda}^2 = \mathrm{diag}[a_1a_n, a_2a_{n-1}, \ldots, a_na_1]$, we have
$$\hat{\Lambda}^{2i} = \mathrm{diag}\!\left[(a_1a_n)^i, (a_2a_{n-1})^i, \ldots, (a_na_1)^i\right] = d^{2i}I_n, \qquad \hat{\Lambda}^{2i-1} = d^{2i-2}\hat{\Lambda}, \qquad i = 1, 2, 3, \ldots$$
Then
$$\Phi(t) = e^{\hat{\Lambda}t} = I_n + \hat{\Lambda}t + \frac{1}{2!}\hat{\Lambda}^2t^2 + \frac{1}{3!}\hat{\Lambda}^3t^3 + \frac{1}{4!}\hat{\Lambda}^4t^4 + \cdots$$
$$= \left[1 + \frac{1}{2!}d^2t^2 + \frac{1}{4!}d^4t^4 + \cdots\right]I_n + \left[t + \frac{1}{3!}d^2t^3 + \frac{1}{5!}d^4t^5 + \cdots\right]\hat{\Lambda}$$
$$= \mathrm{ch}(dt)I_n + \mathrm{sh}(dt)\hat{\Lambda}/d \in \mathbb{F}^{n\times n}. \tag{27}$$
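Closed form (27) can be checked on a small anti-diagonal matrix whose counterdiagonal entries satisfy the pairwise-product condition $a_ia_{n-i+1} = d^2$ (the values below are illustrative).

```python
import numpy as np
from scipy.linalg import expm

# adiag[a1, a2, a3] with a_i * a_{n-i+1} = d^2 for every i
a = np.array([1.0, 2.0, 4.0])        # 1*4 = 2*2 = 4, so d = 2
d = 2.0
K = np.fliplr(np.diag(a))            # places a1 top-right, a3 bottom-left
t = 0.6

assert np.allclose(K @ K, d**2 * np.eye(3))        # K^2 = d^2 I_n

Phi = np.cosh(d * t) * np.eye(3) + np.sinh(d * t) * K / d   # formula (27)
assert np.allclose(Phi, expm(K * t))
```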
5. The Jordan matrix

For an $n$-dimensional Jordan matrix
$$J_n = \begin{bmatrix}
\lambda & 1 & & & \\
 & \lambda & 1 & & \\
 & & \lambda & \ddots & \\
 & & & \ddots & 1 \\
 & & & & \lambda
\end{bmatrix} \in \mathbb{F}^{n\times n},$$
we have
$$J_n^k = \begin{bmatrix}
\lambda^k & k\lambda^{k-1} & \frac{k(k-1)}{2!}\lambda^{k-2} & \frac{k(k-1)(k-2)}{3!}\lambda^{k-3} & \cdots & \frac{k(k-1)\cdots(k-n+2)}{(n-1)!}\lambda^{k-n+1} \\
 & \lambda^k & k\lambda^{k-1} & \frac{k(k-1)}{2!}\lambda^{k-2} & & \vdots \\
 & & \lambda^k & k\lambda^{k-1} & \ddots & \frac{k(k-1)(k-2)}{3!}\lambda^{k-3} \\
 & & & \ddots & \ddots & \frac{k(k-1)}{2!}\lambda^{k-2} \\
 & & & & \lambda^k & k\lambda^{k-1} \\
 & & & & & \lambda^k
\end{bmatrix},$$
$$\Phi(t) = \exp(J_nt) = I_n + J_nt + \frac{1}{2!}J_n^2t^2 + \frac{1}{3!}J_n^3t^3 + \frac{1}{4!}J_n^4t^4 + \cdots
= \begin{bmatrix}
e^{\lambda t} & te^{\lambda t} & \frac{t^2}{2!}e^{\lambda t} & \frac{t^3}{3!}e^{\lambda t} & \cdots & \frac{t^{n-1}}{(n-1)!}e^{\lambda t} \\
 & e^{\lambda t} & te^{\lambda t} & \frac{t^2}{2!}e^{\lambda t} & & \frac{t^{n-2}}{(n-2)!}e^{\lambda t} \\
 & & e^{\lambda t} & te^{\lambda t} & \ddots & \vdots \\
 & & & e^{\lambda t} & \ddots & \frac{t^2}{2!}e^{\lambda t} \\
 & & & & \ddots & te^{\lambda t} \\
 & & & & & e^{\lambda t}
\end{bmatrix} \in \mathbb{F}^{n\times n}. \tag{28}$$
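The banded structure of (28) lends itself to a direct implementation: each superdiagonal of $\exp(J_nt)$ holds $t^j e^{\lambda t}/j!$. The sketch below (with arbitrary $\lambda$, $n$, $t$) builds that closed form and compares it with `scipy.linalg.expm`.

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

def expm_jordan(lam, n, t):
    """Closed form (28): j-th superdiagonal of exp(J_n t) is t^j e^{lam t}/j!."""
    Phi = np.zeros((n, n))
    for j in range(n):
        Phi += np.diag(np.full(n - j, t**j / factorial(j)), k=j)
    return Phi * np.exp(lam * t)

lam, n, t = -0.5, 4, 0.8                              # illustrative values
J = lam * np.eye(n) + np.diag(np.ones(n - 1), k=1)    # 4x4 Jordan block
assert np.allclose(expm_jordan(lam, n, t), expm(J * t))
```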
6. The positive negative identity matrix

For a $2n$-dimensional positive negative identity matrix
$$A := \begin{bmatrix} I_n & \\ & -I_n \end{bmatrix} \in \mathbb{R}^{(2n)\times(2n)},$$
we have
$$\Phi(t) = e^{At} = \begin{bmatrix} e^{I_nt} & \\ & e^{-I_nt} \end{bmatrix} = \begin{bmatrix} e^tI_n & \\ & e^{-t}I_n \end{bmatrix} \in \mathbb{R}^{(2n)\times(2n)}.$$
7. The positive negative anti-identity matrix

For a $2n$-dimensional positive negative anti-identity matrix
$$A = \begin{bmatrix} & I_n \\ -I_n & \end{bmatrix} \in \mathbb{R}^{(2n)\times(2n)},$$
we have
$$A^2 = -I_{2n}, \qquad A^{2i} = (-1)^iI_{2n}, \qquad A^{2i+1} = (-1)^iA,$$
$$\Phi(t) := e^{At} = I_{2n} + At + \frac{1}{2!}A^2t^2 + \frac{1}{3!}A^3t^3 + \frac{1}{4!}A^4t^4 + \cdots$$
$$= \left[I_{2n} + \frac{1}{2!}A^2t^2 + \frac{1}{4!}A^4t^4 + \cdots\right] + \left[At + \frac{1}{3!}A^3t^3 + \frac{1}{5!}A^5t^5 + \cdots\right]$$
$$= \left[1 - \frac{1}{2!}t^2 + \frac{1}{4!}t^4 - \cdots\right]I_{2n} + \left[t - \frac{1}{3!}t^3 + \frac{1}{5!}t^5 - \cdots\right]A$$
$$= \cos(t)I_{2n} + \sin(t)A \in \mathbb{R}^{(2n)\times(2n)}.$$
8. The positive negative alternating identity matrix

For an $n$-dimensional positive negative alternating identity matrix $A$ ($n$ even), the powers satisfy
$$A^2 = -I_n, \quad A^3 = -A, \quad A^4 = I_n, \quad A^5 = A, \quad A^6 = -I_n, \quad A^7 = -A, \quad \ldots,$$
$$\Phi(t) := e^{At} = I_n + At + \frac{1}{2!}A^2t^2 + \frac{1}{3!}A^3t^3 + \frac{1}{4!}A^4t^4 + \cdots$$
$$= \left[I_n + \frac{1}{2!}A^2t^2 + \frac{1}{4!}A^4t^4 + \cdots\right] + \left[At + \frac{1}{3!}A^3t^3 + \frac{1}{5!}A^5t^5 + \cdots\right]$$
$$= \left[1 - \frac{1}{2!}t^2 + \frac{1}{4!}t^4 - \cdots\right]I_n + \left[t - \frac{1}{3!}t^3 + \frac{1}{5!}t^5 - \cdots\right]A$$
$$= \cos(t)I_n + \sin(t)A \in \mathbb{R}^{n\times n}. \tag{30}$$
10. The dual-diagonal identity matrix

We define a dual-diagonal identity matrix as
$$A = \begin{bmatrix}
1 & & & & & 1 \\
 & 1 & & & 1 & \\
 & & \ddots & \iddots & & \\
 & & \iddots & \ddots & & \\
 & 1 & & & 1 & \\
1 & & & & & 1
\end{bmatrix} \in \mathbb{R}^{n\times n}.$$
For even $n$, we have $A = I_n + \hat{I}_n$, where $\hat{I}_n$ is the exchange matrix. Noting that $I_n$ and $\hat{I}_n$ commute and using (16), (24), (25), we have
$$\Phi_A(t) := e^{At} = \exp(I_nt + \hat{I}_nt) = \exp(I_nt)\exp(\hat{I}_nt) = e^t\left[\frac{e^t + e^{-t}}{2}I_n + \frac{e^t - e^{-t}}{2}\hat{I}_n\right] = \frac{e^{2t}+1}{2}I_n + \frac{e^{2t}-1}{2}\hat{I}_n \in \mathbb{R}^{n\times n}. \tag{31}$$
For odd $n$, let $n = 2k+1$; we have $A = \mathrm{diag}[I_k, 0, I_k] + \hat{I}_{2k+1}$. Noting that $\mathrm{diag}[I_k, 0, I_k]$ and $\hat{I}_{2k+1}$ commute and using (16), (24), (25), the transition matrix follows in the same way.
For a matrix $A$ satisfying
$$A^{2k-1} = A, \quad A^{2k} = I, \quad k = 1, 2, \ldots,$$
we have
$$\Phi(t) := e^{At} = I + At + \frac{1}{2!}A^2t^2 + \frac{1}{3!}A^3t^3 + \frac{1}{4!}A^4t^4 + \cdots$$
$$= \left[I + \frac{1}{2!}A^2t^2 + \frac{1}{4!}A^4t^4 + \cdots\right] + \left[At + \frac{1}{3!}A^3t^3 + \frac{1}{5!}A^5t^5 + \cdots\right]$$
$$= \left[1 + \frac{1}{2!}t^2 + \frac{1}{4!}t^4 + \cdots\right]I + \left[t + \frac{1}{3!}t^3 + \frac{1}{5!}t^5 + \cdots\right]A$$
$$= \mathrm{ch}(t)I + \mathrm{sh}(t)A. \tag{33}$$
12. The squared anti-identity matrix

For the $2n$-dimensional squared anti-identity matrix
$$A = \begin{bmatrix} I_n & I_n \\ I_n & I_n \end{bmatrix} \in \mathbb{R}^{(2n)\times(2n)},$$
we have
$$A^2 = 2A, \quad A^3 = 2^2A, \quad A^4 = 2^3A, \quad \ldots, \quad A^k = 2^{k-1}A, \quad k = 2, 3, \ldots,$$
$$\Phi(t) := e^{At} = I_{2n} + At + \frac{1}{2!}A^2t^2 + \frac{1}{3!}A^3t^3 + \frac{1}{4!}A^4t^4 + \cdots$$
$$= I_{2n} + At + \frac{1}{2!}2At^2 + \frac{1}{3!}2^2At^3 + \frac{1}{4!}2^3At^4 + \cdots$$
$$= I_{2n} + \left[t + \frac{1}{2!}2t^2 + \frac{1}{3!}2^2t^3 + \frac{1}{4!}2^3t^4 + \cdots\right]A$$
$$= I_{2n} + \frac{1}{2}\left[2t + \frac{1}{2!}(2t)^2 + \frac{1}{3!}(2t)^3 + \frac{1}{4!}(2t)^4 + \cdots\right]A$$
$$= I_{2n} + \frac{1}{2}\left(e^{2t}-1\right)A
= \begin{bmatrix} \frac{e^{2t}+1}{2}I_n & \frac{e^{2t}-1}{2}I_n \\[2pt] \frac{e^{2t}-1}{2}I_n & \frac{e^{2t}+1}{2}I_n \end{bmatrix} \in \mathbb{R}^{(2n)\times(2n)}.$$
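The key step above is the identity $A^2 = 2A$, which collapses the exponential series. A minimal numerical check, assuming the $2n$-dimensional block matrix with identity blocks in all four positions:

```python
import numpy as np
from scipy.linalg import expm

n, t = 2, 0.5
I = np.eye(n)
A = np.block([[I, I], [I, I]])     # satisfies A^2 = 2A
assert np.allclose(A @ A, 2 * A)

# Phi(t) = I_{2n} + (e^{2t} - 1)/2 * A
Phi = np.eye(2 * n) + 0.5 * (np.exp(2 * t) - 1) * A
assert np.allclose(Phi, expm(A * t))
```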
In a similar way, we can find the transition matrices of the following matrices:
$$A = \begin{bmatrix} I_n & I_n \\ -I_n & I_n \end{bmatrix} \in \mathbb{R}^{(2n)\times(2n)}, \qquad
A = \begin{bmatrix} I_n & -I_n \\ -I_n & I_n \end{bmatrix} \in \mathbb{R}^{(2n)\times(2n)},$$
$$A = \begin{bmatrix}
I_n & I_n & \cdots & I_n \\
I_n & I_n & \cdots & I_n \\
\vdots & \vdots & & \vdots \\
I_n & I_n & \cdots & I_n
\end{bmatrix} \in \mathbb{R}^{(kn)\times(kn)}, \qquad
A = \begin{bmatrix}
-I_n & I_n & \cdots & I_n \\
I_n & -I_n & \cdots & I_n \\
\vdots & \vdots & \ddots & \vdots \\
I_n & I_n & \cdots & -I_n
\end{bmatrix} \in \mathbb{R}^{(kn)\times(kn)}.$$
For the $n \times n$ matrix $A$ whose elements are all ones, we have
$$A^2 = nA, \quad A^3 = n^2A, \quad A^4 = n^3A, \quad \ldots, \quad A^k = n^{k-1}A, \quad k = 2, 3, \ldots,$$
$$\Phi(t) := e^{At} = I_n + At + \frac{1}{2!}A^2t^2 + \frac{1}{3!}A^3t^3 + \frac{1}{4!}A^4t^4 + \cdots$$
$$= I_n + At + \frac{1}{2!}nAt^2 + \frac{1}{3!}n^2At^3 + \frac{1}{4!}n^3At^4 + \cdots$$
$$= I_n + \left[t + \frac{1}{2!}nt^2 + \frac{1}{3!}n^2t^3 + \frac{1}{4!}n^3t^4 + \cdots\right]A$$
$$= I_n + \frac{1}{n}\left[nt + \frac{1}{2!}(nt)^2 + \frac{1}{3!}(nt)^3 + \frac{1}{4!}(nt)^4 + \cdots\right]A$$
$$= I_n + \frac{1}{n}\left(e^{nt}-1\right)A
= \begin{bmatrix}
\frac{e^{nt}+n-1}{n} & \frac{e^{nt}-1}{n} & \cdots & \frac{e^{nt}-1}{n} \\[2pt]
\frac{e^{nt}-1}{n} & \frac{e^{nt}+n-1}{n} & \cdots & \frac{e^{nt}-1}{n} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{e^{nt}-1}{n} & \frac{e^{nt}-1}{n} & \cdots & \frac{e^{nt}+n-1}{n}
\end{bmatrix} \in \mathbb{R}^{n\times n}.$$
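Reading the displayed entries, $A$ here is the all-ones matrix; under that assumption the closed form reduces to a rank-one correction of the identity and is easy to verify:

```python
import numpy as np
from scipy.linalg import expm

n, t = 4, 0.3
A = np.ones((n, n))                  # A^2 = nA, hence A^k = n^{k-1} A
assert np.allclose(A @ A, n * A)

# Phi(t) = I_n + (e^{nt} - 1)/n * A
Phi = np.eye(n) + (np.exp(n * t) - 1) / n * A
assert np.allclose(Phi, expm(A * t))
```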
If $A$ is a nilpotent matrix with $A^k = 0$, then
$$A^{k+1} = A^{k+2} = \cdots = 0.$$
Thus the transition matrix of the nilpotent matrix $A$ with $A^k = 0$ is given by the finite series
$$\Phi(t) := e^{At} = I + At + \frac{1}{2!}A^2t^2 + \frac{1}{3!}A^3t^3 + \cdots + \frac{1}{(k-1)!}A^{k-1}t^{k-1}. \tag{35}$$
If $A$ is any nilpotent matrix with $A^k = 0$, then $(I - A)$ is invertible and
$$(I - A)^{-1} = I + A + A^2 + \cdots + A^{k-1}.$$
18. The strictly triangular matrix

A triangular matrix with zero diagonal elements is called a strictly triangular matrix; this includes the strictly upper triangular and the strictly lower triangular matrices. A strictly triangular matrix is nilpotent.

If $A$ is an $n \times n$ strictly triangular matrix, then its $n$th power is the zero matrix, i.e., $A^n = 0$. For a strictly upper triangular matrix
$$U_n = \begin{bmatrix}
0 & a_{12} & a_{13} & \cdots & a_{1n} \\
0 & 0 & a_{23} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & 0 & \cdots & 0 & a_{n-1,n} \\
0 & 0 & \cdots & 0 & 0
\end{bmatrix} \in \mathbb{R}^{n\times n},$$
we have
$$\Phi_{U_n}(t) := \exp(U_nt) = I_n + U_nt + \frac{1}{2!}U_n^2t^2 + \frac{1}{3!}U_n^3t^3 + \cdots + \frac{1}{(n-1)!}U_n^{n-1}t^{n-1}. \tag{36}$$
$\Phi_{U_n}(t)$ is an upper triangular matrix with unit diagonal; its off-diagonal part, contributed by the terms beyond $I_n$, is strictly upper triangular.
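Because the series (36) terminates after $n$ terms, the exponential of a strictly triangular matrix is exact with finite arithmetic on the powers. A sketch with an illustrative $4\times 4$ strictly upper triangular matrix:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

# strictly upper triangular, hence nilpotent: U^n = 0
U = np.array([[0.0, 1.0, 2.0, 3.0],
              [0.0, 0.0, 4.0, 5.0],
              [0.0, 0.0, 0.0, 6.0],
              [0.0, 0.0, 0.0, 0.0]])
n, t = 4, 0.7
assert np.allclose(np.linalg.matrix_power(U, n), 0)

# finite series (36): only the first n terms are nonzero
Phi = sum(np.linalg.matrix_power(U * t, k) / factorial(k) for k in range(n))
assert np.allclose(Phi, expm(U * t))
assert np.allclose(np.diag(Phi), 1.0)   # unit diagonal, not strictly triangular
```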
5. Examples

Consider the matrix $A = \begin{bmatrix} 3 & -2 \\ 1 & 0 \end{bmatrix}$, whose eigenvalues are $\lambda_1 = 1$ and $\lambda_2 = 2$; the eigenvector for $\lambda_1 = 1$ is $p_1 = [1, 1]^{\mathsf{T}}$. For $\lambda_2 = 2$, the eigenvector $p_2 = [p_{12}, p_{22}]^{\mathsf{T}}$ satisfies
$$\begin{bmatrix} 3 & -2 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} p_{12} \\ p_{22} \end{bmatrix} = 2\begin{bmatrix} p_{12} \\ p_{22} \end{bmatrix}.$$
This means that
$$3p_{12} - 2p_{22} = 2p_{12}, \quad \text{i.e.,} \quad p_{12} = 2p_{22}.$$
Thus we can take $p_{22} = 1$ and $p_{12} = 2$, so $p_2 = \begin{bmatrix} p_{12} \\ p_{22} \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$. Construct the transformation matrix
$$T = [p_1, p_2] = \begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix}, \qquad T^{-1} = \begin{bmatrix} -1 & 2 \\ 1 & -1 \end{bmatrix}.$$
Hence we have
$$\Lambda := T^{-1}AT = \begin{bmatrix} -1 & 2 \\ 1 & -1 \end{bmatrix}\begin{bmatrix} 3 & -2 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} -1 & 2 \\ 2 & -2 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & \\ & 2 \end{bmatrix}, \qquad A = T\Lambda T^{-1},$$
$$\Phi(t) := e^{At} = e^{T\Lambda T^{-1}t} = Te^{\Lambda t}T^{-1}
= \begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} e^t & \\ & e^{2t} \end{bmatrix}\begin{bmatrix} -1 & 2 \\ 1 & -1 \end{bmatrix}
= \begin{bmatrix} e^t & 2e^{2t} \\ e^t & e^{2t} \end{bmatrix}\begin{bmatrix} -1 & 2 \\ 1 & -1 \end{bmatrix}
= \begin{bmatrix} -e^t + 2e^{2t} & 2e^t - 2e^{2t} \\ -e^t + e^{2t} & 2e^t - e^{2t} \end{bmatrix}.$$
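The worked example above can be reproduced numerically: the explicit $T$ diagonalizes $A$, and the resulting closed form matches the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, -2.0], [1.0, 0.0]])
T = np.array([[1.0, 2.0], [1.0, 1.0]])     # columns: eigenvectors p1, p2
Tinv = np.linalg.inv(T)
assert np.allclose(Tinv @ A @ T, np.diag([1.0, 2.0]))   # Lambda = diag(1, 2)

def Phi(t):
    """Phi(t) = T diag(e^t, e^{2t}) T^{-1}, the closed form derived above."""
    return T @ np.diag([np.exp(t), np.exp(2 * t)]) @ Tinv

t = 0.9
assert np.allclose(Phi(t), expm(A * t))
# entrywise: [[-e^t + 2e^{2t}, 2e^t - 2e^{2t}], [-e^t + e^{2t}, 2e^t - e^{2t}]]
et, e2t = np.exp(t), np.exp(2 * t)
assert np.allclose(Phi(t), [[-et + 2 * e2t, 2 * et - 2 * e2t],
                            [-et + e2t, 2 * et - e2t]])
```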
If $A$ is not diagonalizable, it can always be transformed into the Jordan form $A = TJT^{-1}$, where $J = \mathrm{blockdiag}[J_1, J_2, \ldots, J_k] \in \mathbb{F}^{n\times n}$, each $J_i$ is a Jordan block, and $T$ is a matrix consisting of the generalized eigenvectors of $A$. Then we have
$$\Phi(t) := e^{At} = e^{TJT^{-1}t} = T\,\mathrm{blockdiag}\!\left[e^{J_1t}, e^{J_2t}, \ldots, e^{J_kt}\right]T^{-1} \in \mathbb{F}^{n\times n}. \tag{37}$$
Since
$$\dot{\Phi}(t) = A\exp(At) = \exp(At)A,$$
setting $t = 0$ gives
$$A = \dot{\Phi}(0) = \begin{bmatrix} -e^t + 4e^{2t} & 2e^t - 4e^{2t} \\ -e^t + 2e^{2t} & 2e^t - 2e^{2t} \end{bmatrix}_{t=0} = \begin{bmatrix} 3 & -2 \\ 1 & 0 \end{bmatrix}.$$
Define $\bar{x}(t) := T^{-1}x(t)$. Then the system in (2) becomes
$$\dot{\bar{x}}(t) = T^{-1}AT\bar{x}(t) + T^{-1}bu(t) = \bar{A}\bar{x}(t) + \bar{b}u(t), \quad \bar{x}(t_0) = T^{-1}x_0, \tag{38}$$
where
$$\bar{A} := T^{-1}AT \in \mathbb{R}^{n\times n}, \qquad \bar{b} := T^{-1}b \in \mathbb{R}^n.$$
Let $\Phi(t) := \Phi_A(t) = e^{At}$ be the transition matrix of the system in (2). According to the property of the transition matrix under a similarity transformation in (17), the transition matrix of the system in (38) is given by
$$\bar{\Phi}(t) = T^{-1}\Phi(t)T. \tag{39}$$
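Property (39) is a direct consequence of the similarity rule for matrix exponentials and is straightforward to verify numerically, here reusing the example matrix and transformation from the preceding section.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, -2.0], [1.0, 0.0]])
T = np.array([[1.0, 2.0], [1.0, 1.0]])
Tinv = np.linalg.inv(T)
t = 0.5

Abar = Tinv @ A @ T
# (39): transition matrix of the transformed system is T^{-1} Phi(t) T
Phi_bar = expm(Abar * t)
assert np.allclose(Phi_bar, Tinv @ expm(A * t) @ T)
```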
6. Conclusions
This paper discusses some basic properties of the matrix exponentials or transition matrices and derives the transition
matrices of some special matrices related to linear system theory, signal processing, and system identification [28–31],
e.g., the maximum likelihood approaches [32–34], the hierarchical identification methods [35–37], the coupled identification
methods [38,39], the iterative identification methods [40–42].
References
[1] B. Zhou, Z.Y. Li, G.R. Duan, et al, Solutions to a family of matrix equations by using the Kronecker matrix polynomials, Applied Mathematics and
Computation 212 (2) (2009) 327–336.
[2] F. Ding, P.X. Liu, J. Ding, Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle, Applied
Mathematics and Computation 197 (1) (2008) 41–50.
[3] F. Ding, Transformations between some special matrices, Computers & Mathematics with Applications 59 (8) (2010) 2676–2695.
[4] M. Dehghan, M. Hajarian, Computing matrix functions using mixed interpolation methods, Mathematical and Computer Modelling 52 (5–6) (2010)
826–836.
[5] B.B. Wu, Explicit formulas for the exponentials of some special matrices, Applied Mathematics Letters 24 (5) (2011) 642–647.
[6] Y. Shi, F. Ding, T. Chen, 2-Norm based recursive design of transmultiplexers with designable filter length, Circuits, Systems and Signal Processing 25 (4)
(2006) 447–462.
[7] Y. Shi, H. Fang, Kalman filter based identification for systems with randomly missing measurements in a network environment, International Journal of
Control 83 (3) (2010) 538–551.
[8] J.B. Zhang, F. Ding, Y. Shi, Self-tuning control based on multi-innovation stochastic gradient parameter estimation, Systems & Control Letters 58 (1)
(2009) 69–75.
[9] Y. Shi, B. Yu, Output feedback stabilization of networked control systems with random delays modeled by Markov chains, IEEE Transactions on
Automatic Control 54 (7) (2009) 1668–1674.
[10] G. Moore, Orthogonal polynomial expansions for the matrix exponential, Linear Algebra and its Applications 435 (2) (2011) 537–559.
[11] Z. Al Zhour, A. Kilicman, Some new connections between matrix products for partitioned and non-partitioned matrices, Computers & Mathematics
with Applications 54 (6) (2007) 763–784.
[12] F. Ding, T. Chen, Iterative least squares solutions of coupled Sylvester matrix equations, Systems & Control Letters 54 (2) (2005) 95–107.
[13] F. Ding, T. Chen, On iterative solutions of general coupled matrix equations, SIAM Journal on Control and Optimization 44 (6) (2006) 2269–2284.
[14] H.M. Zhang, F. Ding, On the Kronecker products and their applications, Journal of Applied Mathematics (2013) 1–8, http://dx.doi.org/10.1155/2013/
296185. Article ID 296185.
[15] M. Dehghan, M. Hajarian, An iterative method for solving the generalized coupled Sylvester matrix equations over generalized bisymmetric matrices,
Applied Mathematical Modelling 34 (3) (2010) 639–654.
[16] M. Dehghan, M. Hajarian, Analysis of an iterative algorithm to solve the generalized coupled Sylvester matrix equations, Applied Mathematical
Modelling 35 (7) (2011) 3285–3300.
[17] Y. Zhang, G.M. Cui, Bias compensation methods for stochastic systems with colored noise, Applied Mathematical Modelling 35 (4) (2011) 1709–1716.
[18] Y. Zhang, Unbiased identification of a class of multi-input single-output systems with correlated disturbances using bias compensation methods,
Mathematical and Computer Modelling 53 (9–10) (2011) 1810–1819.
[19] Y.J. Liu, Y.S. Xiao, X.L. Zhao, Multi-innovation stochastic gradient algorithm for multiple-input single-output systems using the auxiliary model,
Applied Mathematics and Computation 215 (4) (2009) 1477–1483.
[20] Y.S. Xiao, Y. Zhang, J. Ding, J.Y. Dai, The residual based interactive least squares algorithms and simulation studies, Computers & Mathematics with
Applications 58 (6) (2009) 1190–1197.
[21] L.Y. Wang, L. Xie, X.F. Wang, The residual based interactive stochastic gradient algorithms for controlled moving average models, Applied Mathematics
and Computation 211 (2) (2009) 442–449.
[22] Y.J. Liu, J. Sheng, R.F. Ding, Convergence of stochastic gradient estimation algorithm for multivariable ARX-like systems, Computers & Mathematics
with Applications 59 (8) (2010) 2615–2627.
[23] J. Ding, L.L. Han, X.M. Chen, Time series AR modeling with missing observations based on the polynomial transformation, Mathematical and Computer
Modelling 51 (5–6) (2010) 527–536.
[24] J.H. Li, R.F. Ding, Y. Yang, Iterative parameter identification methods for nonlinear functions, Applied Mathematical Modelling 26 (6) (2012) 2739–
2750.
[25] F. Ding, T. Chen, Performance analysis of multi-innovation gradient type identification methods, Automatica 43 (1) (2007) 1–14.
[26] F. Ding, P.X. Liu, H.Z. Yang, Parameter identification and intersample output estimation for dual-rate systems, IEEE Transactions on Systems, Man, and
Cybernetics, Part A: Systems and Humans 38 (4) (2008) 966–975.
[27] F. Ding, P.X. Liu, G. Liu, Auxiliary model based multi-innovation extended stochastic gradient parameter estimation with colored measurement noises,
Signal Processing 89 (10) (2009) 1883–1890.
[28] J. Ding, F. Ding, Bias compensation based parameter estimation for output error moving average systems, International Journal of Adaptive Control and
Signal Processing 25 (12) (2011) 1100–1111.
[29] F. Ding, Hierarchical multi-innovation stochastic gradient algorithm for Hammerstein nonlinear system modeling, Applied Mathematical Modelling 37
(4) (2013) 1694–1704.
[30] F. Ding, Two-stage least squares based iterative estimation algorithm for CARARMA system modeling, Applied Mathematical Modelling 37 (7) (2013)
4798–4808.
[31] F. Ding, Decomposition based fast least squares algorithm for output error systems, Signal Processing 93 (5) (2013) 1235–1242.
[32] J.H. Li, F. Ding, G.W. Yang, Maximum likelihood least squares identification method for input nonlinear finite impulse response moving average
systems, Mathematical and Computer Modelling 55 (3–4) (2012) 442–450.
[33] W. Wang, F. Ding, J.Y. Dai, Maximum likelihood least squares identification for systems with autoregressive moving average noise, Applied
Mathematical Modelling 36 (5) (2012) 1842–1853.
[34] J.H. Li, F. Ding, Maximum likelihood stochastic gradient estimation for Hammerstein systems with colored noise based on the key term separation
technique, Computers & Mathematics with Applications 62 (11) (2011) 4170–4177.
[35] Z.N. Zhang, F. Ding, X.G. Liu, Hierarchical gradient based iterative parameter estimation algorithm for multivariable output error moving average
systems, Computers & Mathematics with Applications 61 (3) (2011) 672–682.
[36] H.Q. Han, L. Xie, et al, Hierarchical least squares based iterative identification for multivariable systems with moving average noises, Mathematical and
Computer Modelling 51 (9–10) (2010) 1213–1220.
[37] J. Ding, F. Ding, X.P. Liu, G. Liu, Hierarchical least squares identification for linear SISO systems with dual-rate sampled-data, IEEE Transactions on
Automatic Control 56 (11) (2011) 2677–2683.
[38] F. Ding, G. Liu, X.P. Liu, Partially coupled stochastic gradient identification methods for non-uniformly sampled systems, IEEE Transactions on
Automatic Control 55 (8) (2010) 1976–1981.
[39] F. Ding, Coupled-least-squares identification for multivariable systems, IET Control Theory and Applications 7 (1) (2013) 68–79.
[40] F. Ding, X.P. Liu, G. Liu, Identification methods for Hammerstein nonlinear systems, Digital Signal Processing 21 (2) (2011) 215–238.
[41] F. Ding, Y.J. Liu, B. Bao, Gradient based and least squares based iterative estimation algorithms for multi-input multi-output systems, Proceedings of
the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 226 (1) (2012) 43–55.
[42] F. Ding, X.G. Liu, J. Chu, Gradient-based and least-squares-based iterative algorithms for Hammerstein systems using the hierarchical identification
principle, IET Control Theory and Applications 7 (2) (2013) 176–184.