
Integrated Intelligent Research (IIR) International Journal of Computing Algorithm

Volume: 06 Issue: 01 June 2017 Page No.55-58


ISSN: 2278-2397

Matrix Power Computation Band Toeplitz Structure


Hamide Dogan, Luis R. Suarez
University of Texas at El Paso, Mathematics, El Paso, Texas, USA
Email: [email protected], [email protected]

Abstract - A simple algorithm for computing the larger positive integer powers m (> n) of an nxn matrix is discussed in this paper. The most recent algorithms are mentioned, and comparison data on the theoretical complexity of these algorithms, together with runtime data, are provided.

Keywords: Matrix Power; Algorithms; Algorithm Complexity; Runtime; Band Toeplitz Matrices

AMS Classifications: 15-04; 15B05; 15A24; 15A03; 15A99

I. INTRODUCTION

Computing powers of square matrices is needed in many areas such as economics, statistics and bioinformatics. So far, the existing matrix power algorithms require computationally taxing approaches, many of which require eigenvalues; the process can therefore be tedious and expensive. Furthermore, many others require a full set of linearly independent eigenvectors, and for many matrices this fails. An iterative algorithm has recently been proposed by Abu-Saris and Ahmad [4]. This approach bypasses the eigenvalue computations, and hence it also does not care whether a matrix has a full set of eigenvectors. Their algorithm uses an iterative scheme to compute the coefficients of a remainder polynomial emerging from a division algorithm. Even though the approach does not require eigenvalue computations, the process can still be computationally taxing due to its iterative nature. The distinguishing feature of these algorithms, however, is that the computation of A^k (k > n) does not depend upon the computation of the intermediate powers of the nxn matrix A, contrary to the existing methods that build on classical matrix multiplication algorithms such as the Winograd and Strassen algorithms [1-3]. In this paper, a simpler algorithm for computing any positive integer power k (> n) of nxn matrices is described, based upon the ideas developed by Abu-Saris and Ahmad as well as others [4-5]. The characteristic of our algorithm is that not only does the computation of A^k avoid computing the intermediate powers, it also does not require expensive iterative processes. We furthermore provide a comparison of the theoretical complexity of the three different approaches, and a runtime comparison of the iterative algorithm and the new algorithm (NewAlg). We use the number of multiplications carried out by each algorithm as the measure of its complexity.

II. PRELIMINARIES

2.1 Matrix Multiplication Algorithms
Matrix multiplication algorithms are used in many matrix power computations. There are various approaches to matrix multiplication; two of them are the Naive algorithm and the Winograd algorithm [1-3].

2.1.1 Naive Algorithm
The naive algorithm is based directly on the mathematical definition of the product of two matrices. To compute each entry in the final nxn matrix, we need exactly n multiplications and n - 1 additions. Since each of the n^2 entries in the first matrix is multiplied by exactly n entries from the second matrix, the total number of multiplications is n x n^2 = n^3, and the total number of additions is (n - 1) n^2 = n^3 - n^2. Thus, we classify the naive algorithm as an O(n^3) algorithm. Applying the naive algorithm to the mth power of an nxn matrix therefore requires (m - 1) n^3 multiplications.
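For concreteness, the naive product and the corresponding repeated-multiplication power can be sketched in Wolfram Language (the environment used later in this paper); this is an illustrative sketch with placeholder names, not a listing from [1-3].

naiveMultiply[a_, b_] := Module[{n = Length[a]},
  (* each entry uses n multiplications and n - 1 additions, hence n^3 multiplications per product *)
  Table[Sum[a[[i, k]] b[[k, j]], {k, 1, n}], {i, 1, n}, {j, 1, n}]]

naivePower[a_, m_] := Nest[naiveMultiply[#, a] &, a, m - 1]   (* m - 1 naive products: (m - 1) n^3 multiplications *)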
2.1.2 Winograd Algorithm
The Winograd algorithm trades multiplications for additions, much like Strassen's method [1-2]; however, it is asymptotically the same as the naive algorithm. Instead of multiplying individual entries as in the naive algorithm, the Winograd algorithm multiplies sums of paired entries and then subtracts the accumulated error terms. That is, for nxn matrices with C = AB,

C_{i,j} = \sum_{k=1}^{n/2} (a_{i,2k-1} + b_{2k,j})(a_{i,2k} + b_{2k-1,j}) - A_i - B_j,

with

A_i = \sum_{k=1}^{n/2} a_{i,2k-1} a_{i,2k}   and   B_j = \sum_{k=1}^{n/2} b_{2k-1,j} b_{2k,j}.

Since the A_i and B_j are precomputed only once for each row and column, they require only n^2 multiplications [3]. The final summation still requires O(n^3) multiplications, but only half of those in the naive algorithm. Thus, the total number of multiplications is reduced to (1/2) n^3 + n^2. For the mth power computation, the Winograd algorithm then requires at most

(m - 1) ((n^3 / 2) + n^2)

multiplications.
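A corresponding sketch of the Winograd product follows (again illustrative, with placeholder names). The formula above implicitly assumes that n is even; the sketch adds the standard correction term a_{i,n} b_{n,j} for odd n, which is our addition and is not discussed in the text.

winogradMultiply[a_, b_] := Module[{n = Length[a], h, rowA, colB},
  h = Quotient[n, 2];
  (* the factors A_i and B_j, precomputed once per row and column: about n^2 multiplications *)
  rowA = Table[Sum[a[[i, 2 k - 1]] a[[i, 2 k]], {k, 1, h}], {i, 1, n}];
  colB = Table[Sum[b[[2 k - 1, j]] b[[2 k, j]], {k, 1, h}], {j, 1, n}];
  Table[
    Sum[(a[[i, 2 k - 1]] + b[[2 k, j]]) (a[[i, 2 k]] + b[[2 k - 1, j]]), {k, 1, h}]
      - rowA[[i]] - colB[[j]]
      + If[OddQ[n], a[[i, n]] b[[n, j]], 0],   (* correction term when n is odd *)
    {i, 1, n}, {j, 1, n}]]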
2.2 Iterative Algorithm
The Cayley-Hamilton theorem coupled with the Division Algorithm is an essential component of the ideas used in the iterative algorithm of Abu-Saris and Ahmad as well as Elaydi and Harris [4-5].

Cayley-Hamilton Thm: Let A ∈ M(n, n) with the characteristic polynomial

p(x) = det(xI - A) = x^n + c_1 x^{n-1} + c_2 x^{n-2} + ... + c_n.

Then

p(A) = A^n + c_1 A^{n-1} + c_2 A^{n-2} + ... + c_n I = 0.

Division Algorithm: Let g(x) and p(x) be real-valued polynomials of degrees m and n, respectively, with m ≥ n. Then

g(x) = q(x) p(x) + r(x),    (1)

where the degree of r(x) is less than n. In the recursive algorithm, Abu-Saris and Ahmad apply (1) specifically to the polynomials g(x) = x^m and p(x), the characteristic polynomial of the nxn matrix A, along with the Cayley-Hamilton theorem [4].
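As a quick illustration of the theorem (not part of the original text), the following Wolfram Language lines evaluate the characteristic polynomial at the 3x3 matrix of the Example below and return the zero matrix. Note that Mathematica's CharacteristicPolynomial[A, x] is det(A - xI), which differs from det(xI - A) only by the factor (-1)^n and therefore also annihilates A.

A = {{0, 1, 1}, {-2, 3, 1}, {-3, 1, 4}};
coeffs = CoefficientList[CharacteristicPolynomial[A, x], x];        (* coefficients of det(A - x I) *)
Sum[coeffs[[k + 1]] MatrixPower[A, k], {k, 0, Length[coeffs] - 1}]  (* -> {{0,0,0},{0,0,0},{0,0,0}} *)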
Outline of the iterative algorithm:
Consider an nxn real-valued matrix A (n ≥ 2), and let

p(x) = det(xI - A) = x^n + \sum_{j=0}^{n-1} a_j x^j.

Then, by the Division Algorithm,

A^m = q_m(A) p(A) + r_m(A) for m ≥ n.

Next, by the Cayley-Hamilton Thm (p(A) = 0),

A^m = r_m(A) = \sum_{j=0}^{n-1} b_j(m) A^j.    (2)

These ideas result in the following iterative algorithm for the evaluation of the coefficients b_j(m) in (2):

b_j(m+1) = -a_j b_{n-1}(m) + b_{j-1}(m)    (j = 0, ..., n-1; b_{-1}(m) = 0).    (3)
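In Wolfram Language, one pass of (3) over the whole coefficient vector (b_0, ..., b_{n-1}) is a single list operation. The short illustrative snippet below (our notation; a complete program is sketched in the Appendix) advances from the coefficients of A^{n-1} to those of A^m for the 3x3 matrix of the Example below.

a = {-12, 16, -7};     (* a_0, a_1, a_2 of p(x) = x^3 - 7 x^2 + 16 x - 12 *)
b = {0, 0, 1};         (* coefficients of A^(n-1) = A^2 *)
Do[b = Prepend[Most[b], 0] - b[[-1]] a, {8 - 3 + 1}];   (* apply (3) until m = 8 *)
b                      (* -> {19332, -20100, 5281}, the coefficients obtained for A^8 in the Example *)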
The iteration in (3) uses one multiplication per coefficient, totaling n multiplications per power step. Hence, for the mth power of an nxn matrix A, the number of multiplications carried out by the iterative algorithm is

(m - n) n + (n - 2)((1/2) n^3 + n^2) + (n - 1) n^2.    (4)

The matrix polynomial in (2) requires the computation of the powers of A up to the (n-1)th, while (3) provides only the coefficients b_j(m). In (4), we use the Winograd algorithm to count the multiplications spent on these matrix multiplications. Note that the matrix multiplications need not be repeated for each power in (2); only the multiplications used in evaluating the powers of A up to the (n-1)th enter the count, namely (n - 2)((1/2) n^3 + n^2). Finally, evaluating (2) uses at most (n - 1) n^2 scalar-matrix multiplications.

III. DEVELOPMENT OF THE NEW ALGORITHM (NEWALG)

Our algorithm extends the ideas of Abu-Saris and Ahmad [4]. Rather than applying the long-division ideas that lead to the iteration (3), our algorithm applies vector space tools and, in turn, solves a linear system for the coefficients of the remainder polynomial associated with the matrix power to be computed.

Let F(x) = \sum_{i=0}^{n} y_i x^i be the characteristic polynomial of an nxn matrix A. Consider the real vector space of the polynomials of degree at most m (m > n), and consider its basis {1, x, x^2, ..., x^{n-1}, F, xF, x^2 F, ..., x^{m-n} F}. Then, for any polynomial G(x) = \sum_{i=0}^{m} z_i x^i of degree m, we obtain

G(x) = \sum_{i=0}^{m} z_i x^i = \sum_{i=0}^{n-1} a_i x^i + \sum_{i=0}^{m-n} b_i x^i F(x).    (5)

Note that (5) is equivalent to the division algorithm G(x) = q(x) F(x) + r(x) with r(x) = \sum_{i=0}^{n-1} a_i x^i. The difference between the two ideas is that (5) does not require expensive iterative steps. In fact, it needs only basic matrix algebra, namely solving the linear system

\sum_{i=0}^{k} b_{m-n-k+i} y_{n-i} = z_{m-k},   k = 0, ..., m-n.    (6)

Thus, the system (6) is represented by an (m-n+1) x (m-n+2) coefficient matrix whose structure is very similar to that of the Toeplitz matrices:

\begin{bmatrix}
0 & 0 & \cdots & 0 & y_n & z_m \\
0 & \cdots & 0 & y_n & y_{n-1} & z_{m-1} \\
0 & \cdots & y_n & y_{n-1} & y_{n-2} & z_{m-2} \\
\vdots & & & & & \vdots \\
y_n & y_{n-1} & \cdots & y_{n-(m-n)+1} & y_{n-(m-n)} & z_n
\end{bmatrix}    (7)

Solving (6) can be achieved via basic back substitution or via the row reduced echelon form of the matrix in (7). One can then use the solution (the values b_i) of (6) to obtain the coefficients of the remainder polynomial \sum_{i=0}^{n-1} a_i x^i in (5):

a_i = z_i - \sum_{k=0}^{i} b_{i-k} y_k,   i = 0, ..., n-1.    (8)
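Since (5) is ordinary polynomial division in disguise, a concrete instance can be checked against Mathematica's built-in PolynomialReduce. The call below (an illustration of ours, using the characteristic polynomial of the Example below) returns the quotient, whose coefficients are the b_i, and the remainder, whose coefficients are the a_i of (8).

PolynomialReduce[x^8, x^3 - 7 x^2 + 16 x - 12, x]
(* -> {{1611 + 473 x + 131 x^2 + 33 x^3 + 7 x^4 + x^5}, 19332 - 20100 x + 5281 x^2} *)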
IV. COMPUTING A^m

Consider the polynomial G(x) = x^m (m > n). Again, let F(x) be the characteristic polynomial of the nxn matrix A. Then (5) gives the matrix polynomial for the mth power (m > n) of A:

A^m = \sum_{i=0}^{n-1} a_i A^i.    (9)

In this case, the system (6) is represented by the matrix

\begin{bmatrix}
0 & 0 & \cdots & 0 & 1 & 1 \\
0 & \cdots & 0 & 1 & y_{n-1} & 0 \\
0 & \cdots & 1 & y_{n-1} & y_{n-2} & 0 \\
\vdots & & & & & \vdots \\
1 & y_{n-1} & \cdots & y_{n-(m-n)+1} & y_{n-(m-n)} & 0
\end{bmatrix}    (10)

Since the characteristic polynomial of A has at most n nonzero coefficients y_i besides the leading coefficient y_n = 1, each row of the matrix in (10) carries only a short band of nonzero values. This is the source of its similarity to the band Toeplitz matrices.
EXAMPLE:
Let us consider

A = \begin{pmatrix} 0 & 1 & 1 \\ -2 & 3 & 1 \\ -3 & 1 & 4 \end{pmatrix}.

This matrix has only two linearly independent eigenvectors. Its characteristic polynomial is F(x) = x^3 - 7x^2 + 16x - 12. Applying (10) to the 8th power of A results in the 6x7 matrix

\begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 1 & -7 & 0 \\
0 & 0 & 0 & 1 & -7 & 16 & 0 \\
0 & 0 & 1 & -7 & 16 & -12 & 0 \\
0 & 1 & -7 & 16 & -12 & 0 & 0 \\
1 & -7 & 16 & -12 & 0 & 0 & 0
\end{bmatrix}    (11)

Solving (6), we obtain

b_0 = 1611, b_1 = 473, b_2 = 131, b_3 = 33, b_4 = 7, b_5 = 1.

Using (8), one then obtains the matrix polynomial whose evaluation results in A^8:

A^8 = 19332 I - 20100 A + 5281 A^2 = \begin{bmatrix} -7073 & 1024 & 6305 \\ -7329 & 1280 & 6305 \\ -13634 & 1024 & 12866 \end{bmatrix}.    (12)

Note that to compute A^8 our algorithm used about 55 multiplications, as opposed to 189 multiplications with the Naive algorithm and about 157 multiplications with the Winograd algorithm.
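The numbers in (12) can be verified directly in Mathematica (our own check, not part of the original text): the built-in MatrixPower and the matrix polynomial of (12) return the same matrix.

A = {{0, 1, 1}, {-2, 3, 1}, {-3, 1, 4}};
MatrixPower[A, 8]                                (* -> {{-7073, 1024, 6305}, {-7329, 1280, 6305}, {-13634, 1024, 12866}} *)
19332 IdentityMatrix[3] - 20100 A + 5281 A.A     (* -> the same matrix, confirming (12) *)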
4.1 Theoretical Complexity of the Algorithm for A^m
Looking at (11), one can see that the first and second rows use no multiplications in the back-substitution process. The third row uses one multiplication, associated with the entry value -7; the fourth row uses two multiplications, associated with the entry values -7 and 16. This is because the value of b_5 obtained from the first row is 1. Rows 5 and 6 each use 3 multiplications. Generalizing this pattern to an nxn matrix A, the maximum number of multiplications our algorithm applies to solve (6) is

n(n-1)/2 + (m - 2n) n.

Additionally, (8) requires at most n(n+1)/2 multiplications. Finally, (9) needs at most (n-2)((1/2)n^3 + n^2) + (n-1)n^2 multiplications; here we used the Winograd algorithm for the matrix powers A^i with i < n. Altogether, the total number of multiplications for the mth (m > 2n) power of an nxn matrix is at most

n(n+1)/2 + n(n-1)/2 + n(m - 2n) + (n-2)((1/2)n^3 + n^2) + (n-1)n^2.
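For convenience, the three closed-form counts can be written as small Wolfram Language functions (the names are ours); evaluated at n = 4 and m = 10 they reproduce the first row of Table 1 below.

naiveCount[n_, m_] := (m - 1) n^3
winogradCount[n_, m_] := (m - 1) (n^3/2 + n^2)
newAlgCount[n_, m_] := n (n + 1)/2 + n (n - 1)/2 + n (m - 2 n) + (n - 2) (n^3/2 + n^2) + (n - 1) n^2
{naiveCount[4, 10], winogradCount[4, 10], newAlgCount[4, 10]}   (* -> {576, 432, 168} *)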

Table 1. Multiplication Count (Matrix size n; Power m).

n    m     Naive   Winograd   NewAlg
4    10     576       432       168
4    15     896       672       188
4    20    1216       912       208
4    25    1536      1152       228
4    30    1856      1392       248
4    35    2176      1632       268
4    40    2496      1872       288
4    45    2816      2112       308
4    50    3136      2352       328

Figure 1. Comparison of the Numbers of Multiplication for 4x4 Matrices.

Figure 2. Runtime comparison for a 3x3 Matrix.

V. COMPARISONS

If we compare the number of multiplications applied by our algorithm (NewAlg) against the Naive approach and the Winograd algorithm [1-3], one sees in Table 1 above that our algorithm outperforms both. That is, even though the difference between the numbers of multiplications is small for the smaller powers of A, as the matrix power increases the difference grows rapidly. See Fig. 1 above for the multiplication counts for the 4x4 matrices of Table 1 and for matrix powers ranging from 10 to 50.

As for the comparison of NewAlg and the Iterative Algorithm: even though the two use about the same number of multiplications, the runtime values in seconds show that NewAlg performs significantly better, especially for the higher matrix powers. This behavior is clearly depicted in Fig. 2 above. In fact, for powers less than 2000 the two algorithms run at about the same speed, with very little difference in the time they take to calculate the coefficients of the remainder polynomials in (2) and (9). Beyond this point, however, the Iterative algorithm takes drastically longer to carry out the calculations. We should note that the data displayed in Fig. 2 were gathered on a Latitude E7440 (Intel(R) Core(TM) i5-4210U CPU @ 1.70GHz 2.40GHz) in a Mathematica environment. See the Appendix for the Mathematica programs used for the two algorithms.

VI. CONCLUSION

Compared to the Naive and the Winograd algorithms, NewAlg uses significantly fewer multiplications [1-3]. Furthermore, compared to the Iterative process [4], NewAlg performed significantly faster in real time. Thus, integrating the primary ideas of NewAlg into the existing matrix power computation algorithms may prove to be an effective way to improve the runtime for larger matrix powers. As a final remark, we should note that in our computations we did not take into account the number of arithmetic operations used in the evaluation of the characteristic polynomials. We recognize that the characteristic polynomials will add to the complexity of each of the algorithms. However, since all the algorithms discussed in this paper require the evaluation of the characteristic polynomial, this fact does not change our comparisons of the algorithm complexity.

References
[1] R. P. Brent (1970) Algorithms for matrix multiplication, Tech. Rept. STAN-CS-70-157, Computer Science Dept., Stanford University, Stanford, CA, 1970.
[2] U. Manber (1989) Introduction to Algorithms: A Creative Approach, Pearson Education, New Jersey, 1989, pp. 301-304.
[3] S. Winograd (1968) A new algorithm for inner product, IEEE Trans. Computers 17 (1968), 193-194.
[4] R. Abu-Saris and W. Ahmad (2005) Avoiding Eigenvalues in Computing Matrix Power, The American Mathematical Monthly, Vol. 112, No. 5 (May 2005), 450-454.
[5] S. N. Elaydi and W. A. Harris, Jr. (1998) On the computation of A^n, SIAM Rev. 40 (1998), 965-971.

Appendix

Mathematica Programs for the Iterative and the New Algorithm

Iterative Algorithm
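A self-contained Wolfram Language sketch in the spirit of Section 2.2 (the iteration (3) followed by the evaluation of (2)); the function names are illustrative, and the listing should be read as a reconstruction rather than the original program.

iterativeCoefficients[aList_List, m_Integer] := Module[{n = Length[aList], b},
  (* aList = {a_0, ..., a_(n-1)} from p(x) = x^n + a_(n-1) x^(n-1) + ... + a_0; assumes m >= n - 1 *)
  b = UnitVector[n, n];                                       (* coefficients of A^(n-1) *)
  Do[b = Prepend[Most[b], 0] - b[[n]] aList, {m - n + 1}];    (* the iteration (3), one power per pass *)
  b]

iterativeMatrixPower[A_?SquareMatrixQ, m_Integer] := Module[{n = Length[A], p, aList, b, powers},
  p = Expand[(-1)^n CharacteristicPolynomial[A, x]];          (* det(x I - A), monic *)
  aList = Most[CoefficientList[p, x]];                        (* a_0, ..., a_(n-1) *)
  b = iterativeCoefficients[aList, m];
  powers = NestList[#.A &, IdentityMatrix[n], n - 1];         (* I, A, ..., A^(n-1): the only matrix products needed *)
  Sum[b[[j + 1]] powers[[j + 1]], {j, 0, n - 1}]]             (* evaluate (2) *)

iterativeMatrixPower[{{0, 1, 1}, {-2, 3, 1}, {-3, 1, 4}}, 8]  (* reproduces (12) *)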
New Algorithm (NewAlg)
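Similarly, a self-contained Wolfram Language sketch in the spirit of Sections III and IV (back substitution on (10), coefficient recovery via (8), evaluation of (9)); the names are illustrative, and the listing is a reconstruction rather than the original program.

newAlgCoefficients[y_List, m_Integer] := Module[{n = Length[y] - 1, yy, b, bb},
  (* y = {y_0, ..., y_n}, the coefficients of the monic characteristic polynomial F(x); m > n *)
  yy[i_] := If[0 <= i <= n, y[[i + 1]], 0];
  b = ConstantArray[0, m - n + 1];                            (* b_0, ..., b_(m-n) *)
  b[[m - n + 1]] = 1;                                         (* first row of (10): b_(m-n) y_n = 1 *)
  Do[b[[m - n - k + 1]] = -Sum[b[[m - n - k + i + 1]] yy[n - i], {i, 1, k}],
    {k, 1, m - n}];                                           (* back substitution down the band of (10) *)
  bb[i_] := If[0 <= i <= m - n, b[[i + 1]], 0];
  Table[-Sum[bb[i - k] yy[k], {k, 0, i}], {i, 0, n - 1}]]     (* equation (8) with z_i = 0 for i < m *)

newAlgMatrixPower[A_?SquareMatrixQ, m_Integer] := Module[{n = Length[A], y, a, powers},
  y = CoefficientList[Expand[(-1)^n CharacteristicPolynomial[A, x]], x];  (* monic det(x I - A) *)
  a = newAlgCoefficients[y, m];
  powers = NestList[#.A &, IdentityMatrix[n], n - 1];         (* I, A, ..., A^(n-1) *)
  Sum[a[[i + 1]] powers[[i + 1]], {i, 0, n - 1}]]             (* evaluate (9) *)

newAlgMatrixPower[{{0, 1, 1}, {-2, 3, 1}, {-3, 1, 4}}, 8]     (* reproduces (12) *)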
