An Improved Invariant-Norm PCA Algorithm With Complex Values

Abstract- In this paper we propose an invariant-norm PCA algorithm with complex values. The solutions of the corresponding averaging differential equations converge to the principal eigenvectors of the autocorrelation matrix. This PCA algorithm is suitable for complex values of the input and the weight vectors. In addition, we consider a possibility to reduce the computational complexity of the proposed algorithm.

1. INTRODUCTION

It is well known that the principal components, i.e. the eigenvectors corresponding to the largest eigenvalues of an autocorrelation matrix, contain the desired information of the considered signal. Principal component analysis (PCA) algorithms have a widespread field of application in signal and image processing [1]. Recently an invariant-norm PCA algorithm has been proposed [2], which is able to determine all eigenvectors and does not encounter the problem of local minima. For more practical applications, the invariant-norm PCA algorithm is further studied in this paper. We first generalise the PCA algorithm to the complex-valued case and then develop a possibility to decrease the computational complexity.

2. THE INVARIANT-NORM PCA ALGORITHM

Consider the following linear single-layer network

    z_j[n] = Σ_{i=1}^{N} w_{ji}[n] x_i[n] = W_j^T[n] X[n],    (1)

with the inputs X[n] = [x_1[n], x_2[n], ..., x_N[n]]^T, the outputs Z[n] = [z_1[n], z_2[n], ..., z_M[n]]^T and j = 1, ..., M. W_j[n] = [w_{j1}[n], w_{j2}[n], ..., w_{jN}[n]]^T are the connection weight vectors. The proposed PCA algorithm is defined by (2). The limits of this discrete learning algorithm can usually be found by applying the corresponding averaging differential equations

    dW_j(t)/dt = (W_j^T(t) W_j(t)) (I - Σ_{l=1}^{j-1} W_l(t) W_l^T(t)) R W_j(t)
                 - (W_j^T(t) (I - Σ_{l=1}^{j-1} W_l(t) W_l^T(t)) R W_j(t)) W_j(t),    (3)

for j = 1, ..., M, where R = E[X[n] X^T[n]] is the autocorrelation matrix of the input vector X[n] with eigenvalues λ_1 ≥ λ_2 ≥ ... ≥ λ_N ≥ 0 and the corresponding orthonormal eigenvectors S_1, S_2, ..., S_N, and I is the unit matrix. If we set

    R_j(t) = (I - Σ_{l=1}^{j-1} W_l(t) W_l^T(t)) R,    (4)

    φ_j(t) = W_j^T(t) W_j(t),    (5)

and

    σ_j(t) = W_j^T(t) R_j(t) W_j(t),    (6)

(3) can be written as

    dW_j(t)/dt = φ_j(t) R_j(t) W_j(t) - σ_j(t) W_j(t),    (7)

for j = 1, ..., M. For the dynamics of (7) we have the following propositions:

Proposition 1. The norm of each connection weight vector W_j(t) is invariant during the time evolution and is equal to the norm of the initial state W_j(0), that is,

    W_j^T(t) W_j(t) = W_j^T(0) W_j(0),    (8)
or, equivalently,

    ||W_j(t)|| = ||W_j(0)||.

Proof. See [2].
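As an illustration of (4)-(8), the following sketch (our own illustrative code, not taken from [2]) integrates the averaging equation (7) with a simple Euler scheme and checks that the weight norms stay constant (Proposition 1) while the normalised weight vectors align with the leading eigenvectors of R. The step size, the test matrix and the initialisation are assumptions made only for this example.

```python
import numpy as np

def invariant_norm_pca_flow(R, W0, dt=1e-3, steps=30000):
    """Euler integration of the averaging equation (7):
    dW_j/dt = phi_j R_j W_j - sigma_j W_j, with R_j, phi_j, sigma_j as in (4)-(6).
    W0 holds one weight vector per column."""
    W = W0.astype(float).copy()
    N, M = W.shape
    for _ in range(steps):
        for j in range(M):
            D = np.eye(N)
            for l in range(j):                      # deflation term of (4)
                D -= np.outer(W[:, l], W[:, l])
            Rj = D @ R                              # R_j(t), eq. (4)
            phi = W[:, j] @ W[:, j]                 # phi_j(t), eq. (5)
            sigma = W[:, j] @ Rj @ W[:, j]          # sigma_j(t), eq. (6)
            W[:, j] += dt * (phi * (Rj @ W[:, j]) - sigma * W[:, j])
    return W

# Assumed test data: a symmetric matrix with a known spectrum.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
R = Q @ np.diag([5.0, 2.0, 1.0, 0.5, 0.1]) @ Q.T
W0 = rng.standard_normal((5, 2))
W0 /= np.linalg.norm(W0, axis=0)                    # unit-norm initial vectors

W = invariant_norm_pca_flow(R, W0)
print(np.linalg.norm(W, axis=0))                    # Proposition 1: the norms stay at their initial value 1
print(np.abs(Q[:, :2].T @ W))                       # close to the identity: W_j aligns with the j-th eigenvector
```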
3. GENERALISATION FOR COMPLEX MATRICES

For more practical applications it is desirable to generalise the proposed PCA algorithm to the case where the input vector X[n] and the connection weight vectors W_j[n] take complex values. The complex input signal X[n] ∈ C^N is divided into its real and imaginary parts, i.e. into X_R[n], X_I[n] ∈ R^N, and the algorithm is applied to the corresponding real-valued quantities X̃[n] and W̃_j[n]. The corresponding averaging differential equation is given by (22). Note that in (18) and (22) we used the relationships z̃_j[n] = X̃^T[n] W̃_j[n] and R̃ = E[X̃[n] X̃^T[n]], as shown in [7].
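Since the intermediate equations (10)-(21) are not reproduced above, the following sketch only illustrates the kind of real-valued representation the relationships above refer to: stacking real and imaginary parts gives X̃[n] ∈ R^{2N}, and R̃ = E[X̃[n] X̃^T[n]] carries the spectrum of the Hermitian autocorrelation matrix R = E[X[n] X^H[n]] (each eigenvalue appears twice, halved, when the signal is circular). The circularity of the test signal and the particular stacking order are our assumptions for this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 3, 200000

# Assumed test signal: circular complex Gaussian with a Hermitian autocorrelation matrix R.
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
R = A @ A.conj().T / N
L = np.linalg.cholesky(R)
X = L @ ((rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))) / np.sqrt(2))

# Real-valued representation: X_tilde[n] stacks the real and imaginary parts of X[n].
X_tilde = np.vstack([X.real, X.imag])
R_tilde = X_tilde @ X_tilde.T / T                   # sample estimate of E[X_tilde X_tilde^T]

# For a circular signal, every eigenvalue of R shows up twice (halved) in R_tilde.
print(np.round(np.sort(np.linalg.eigvalsh(R_tilde))[::-1], 3))
print(np.round(np.sort(np.linalg.eigvalsh(R))[::-1] / 2, 3))
```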
Denoting the complex conjugate transpose by the superscript H, we have the following theorem:

Theorem 1: If the initial values of the weight vectors satisfy W_j^H(0) S_j ≠ 0 and W_j^H(0) W_j(0) > 1 (for j = 1, ..., M), then we have

    lim_{t→∞} W_j(t) = ±√(φ_j(0)) S_j    (23)

and

    lim_{t→∞} σ_j(t)/φ_j(0) = λ_j,    for j = 1, ..., M,

where W_j(t) is given by (13), (16) and (22). The proof will be given in a full version of the paper (see also [8]).
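Because equations (13), (16) and (22) are not reproduced above, the following sketch is only our guess at the complex-valued dynamics: it transcribes the real-valued flow (7) with Hermitian transposes in place of ordinary transposes and then checks the two limits of Theorem 1 numerically (alignment of W_j with S_j, and σ_j(t)/φ_j(0) converging to λ_j). The test matrix, the initial norm √(φ_j(0)) = 1.2 > 1 and the Euler step are assumptions.

```python
import numpy as np

def complex_invariant_norm_flow(R, W0, dt=1e-3, steps=40000):
    """Euler integration of a complex-valued analogue of (7): transposes are
    replaced by Hermitian transposes (an assumption, not the paper's (22))."""
    W = W0.astype(complex).copy()
    N, M = W.shape
    for _ in range(steps):
        for j in range(M):
            D = np.eye(N, dtype=complex)
            for l in range(j):                              # deflation, analogue of (4)
                D -= np.outer(W[:, l], W[:, l].conj())
            Rj = D @ R
            phi = (W[:, j].conj() @ W[:, j]).real           # analogue of (5)
            sigma = (W[:, j].conj() @ Rj @ W[:, j]).real    # analogue of (6)
            W[:, j] += dt * (phi * (Rj @ W[:, j]) - sigma * W[:, j])
    return W

rng = np.random.default_rng(2)
N, M = 4, 2
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
R = A @ A.conj().T / N                                      # assumed Hermitian test matrix
W0 = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
W0 *= 1.2 / np.linalg.norm(W0, axis=0)                      # phi_j(0) = 1.44 > 1, cf. Theorem 1

W = complex_invariant_norm_flow(R, W0)
lam, S = np.linalg.eigh(R)
lam, S = lam[::-1], S[:, ::-1]
for j in range(M):
    phi0 = np.linalg.norm(W0[:, j]) ** 2
    sigma_over_phi0 = (W[:, j].conj() @ R @ W[:, j]).real / phi0   # ~ sigma_j/phi_j(0) after convergence
    alignment = abs(S[:, j].conj() @ W[:, j]) / np.linalg.norm(W[:, j])
    print(f"j={j + 1}: sigma/phi0={sigma_over_phi0:.4f}  lambda_j={lam[j]:.4f}  alignment={alignment:.4f}")
```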
5. SIMULATION RESULTS

We have simulated the proposed algorithm (24) and its differential equation (25). Two examples are given.

(Figure: results for Example 1; abscissa: iteration.)

Example 2.
    R = [  7.0881   1.2403   3.9855  -3.6502
           1.2403   2.2551   0.8076  -0.7820
           3.9855   0.8076   3.6167  -2.3219
          -3.6502  -0.7820  -2.3219   3.7891 ]
For Example 2 the initial weight vectors, the converged weight vectors and the corresponding eigenvalue estimates are

    W_1[0] = [ 0.5000  0.5000  0.5000  0.5000 ]^T
    W_2[0] = [ 1.0000  0.0000  0.0000  0.0000 ]^T
    W_1f  = [ 0.7309  0.1650  0.4760  -0.4605 ]^T
    W_2f  = [ 0.1190  -0.9863  0.0762  -0.0856 ]^T
    λ_1 = 12.2632,  λ_2 = 1.9751
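These numbers can be checked independently of the learning algorithm; the short script below (our verification code, not part of the paper) diagonalises the Example 2 matrix and confirms that W_1f and W_2f are, to the printed precision, the unit-norm eigenvectors belonging to λ_1 = 12.2632 and λ_2 = 1.9751, consistent with the norm invariance of Proposition 1 (the initial vectors also have unit norm).

```python
import numpy as np

# Autocorrelation matrix and reported results of Example 2.
R = np.array([[ 7.0881,  1.2403,  3.9855, -3.6502],
              [ 1.2403,  2.2551,  0.8076, -0.7820],
              [ 3.9855,  0.8076,  3.6167, -2.3219],
              [-3.6502, -0.7820, -2.3219,  3.7891]])
W1f = np.array([0.7309, 0.1650, 0.4760, -0.4605])
W2f = np.array([0.1190, -0.9863, 0.0762, -0.0856])

lam, S = np.linalg.eigh(R)
lam, S = lam[::-1], S[:, ::-1]                     # sort in descending order
print(np.round(lam[:2], 4))                        # expect [12.2632  1.9751]
print(abs(S[:, 0] @ W1f), abs(S[:, 1] @ W2f))      # expect values close to 1
print(np.linalg.norm(W1f), np.linalg.norm(W2f))    # expect ~1 (Proposition 1)
```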
6. CONCLUSION

In this paper the invariant-norm PCA algorithm has been generalised to the complex-valued case and simulation examples have been given. In addition, we presented a modification of the invariant-norm PCA algorithm. The advantage of this modification is an efficient reduction of the computational effort; one disadvantage is the restriction to special initial conditions for the weight vectors.

7. ACKNOWLEDGEMENTS

This work was partially supported by the Universität Erlangen-Nürnberg under the Gesetz zur Förderung des wissenschaftlichen und künstlerischen Nachwuchses, by the Deutsche Forschungsgemeinschaft (DFG) and by the Flughafen Frankfurt Main-Stiftung.
REFERENCES
[1] A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing, J. Wiley-Teubner Verlag, 1993.
[2] F.-L. Luo, R. Unbehauen and Y.-D. Li, "A principal component analysis algorithm with invariant norm," Neurocomputing, vol. 8, 1995, pp. 213-221.
[3] L. Ljung, "Analysis of recursive stochastic algorithms," IEEE Trans. Autom. Contr., vol. AC-22, 1977, pp. 551-575.
[4] H. J. Kushner and D. S. Clark, Stochastic Approximation Methods for Constrained and Unconstrained Systems, Springer-Verlag, 1978.
[5] J. Karhunen and J. Joutsensalo, "Representation and separation of signals using nonlinear PCA type learning," Neural Networks, vol. 7, 1994, pp. 113-128.
[6] L. Xu, E. Oja and C. Y. Suen, "Modified Hebbian learning for curve and surface fitting," Neural Networks, vol. 5, 1992, pp. 441-457.
[7] F.-L. Luo, R. Unbehauen and A. Cichocki, "A minor component analysis algorithm," to appear in Neural Networks.
[8] F.-L. Luo and R. Unbehauen, Applied Neural Networks for Signal Processing, to be published by Cambridge University Press.