
1996 IEEE TENCON - Digital Signal Processing Applications

An Improved Invariant-Norm PCA Algorithm with Complex Values


Konrad Reif, Fa-Long Luo, Senior Member, IEEE, and Rolf Unbehauen, Fellow, IEEE
Lehrstuhl für Allgemeine und Theoretische Elektrotechnik,
Universität Erlangen-Nürnberg,
Cauerstraße 7, 91058 Erlangen, Germany

Abstract - In this paper we propose an invariant-norm PCA algorithm with complex values. The solutions of the corresponding averaging differential equations converge to the principal eigenvectors of the autocorrelation matrix. This PCA algorithm is suitable for complex values of the input and the weight vectors. In addition, we consider a possibility to reduce the computational complexity of the proposed algorithm.

1. INTRODUCTION
It is well known that the principal components, i.e. the eigenvectors corresponding to the largest eigenvalues of an autocorrelation matrix, contain the desired information of the considered signal. Principal component analysis (PCA) algorithms have a widespread application field in signal and image processing [1]. Recently an invariant-norm PCA algorithm has been proposed [2], which is able to determine all eigenvectors and does not encounter the problem of local minima. For more practical applications, the invariant-norm PCA algorithm is further studied in this paper. We first generalise the PCA algorithm to the complex-valued case and then develop the possibility to decrease the computational complexity.
2. THE INVARIANT-NORM PCA ALGORITHM

Consider the following linear single-layer network

$$z_j[n] = \sum_{i=1}^{N} w_{ij}[n]\,x_i[n] = W_j^T[n]\,X[n] \qquad (1)$$

with the inputs $X[n] = [x_1[n], x_2[n], \ldots, x_N[n]]^T$, the outputs $Z[n] = [z_1[n], z_2[n], \ldots, z_M[n]]^T$ and $j = 1, \ldots, M$; $W_j[n] = [w_{1j}[n], w_{2j}[n], \ldots, w_{Nj}[n]]^T$ are the connection weight vectors. The proposed PCA algorithm is defined as

$$W_j[n+1] = W_j[n] + \gamma[n]\left\{ W_j^T[n]W_j[n]\left( z_j[n]X[n] - \sum_{i=1}^{j-1}\frac{z_i[n]z_j[n]}{W_i^T[n]W_i[n]}\,W_i[n] \right) - \left( z_j^2[n] - \sum_{i=1}^{j-1}\frac{z_i[n]z_j[n]}{W_i^T[n]W_i[n]}\,W_i^T[n]W_j[n] \right) W_j[n] \right\} \qquad (2)$$

where $\gamma[n]$ is a positive scalar gain parameter.

According to the stochastic approximation theory in [3, 4] and further explanations in [5, 6], the asymptotic limits of the above discrete learning algorithm can usually be solved by applying the corresponding averaging differential equations

$$\frac{dW_j(t)}{dt} = W_j^T(t)W_j(t)\left( I - \sum_{i=1}^{j-1} \frac{W_i(t)W_i^T(t)}{W_i^T(t)W_i(t)} \right) R\,W_j(t) - \left[ W_j^T(t)\left( I - \sum_{i=1}^{j-1} \frac{W_i(t)W_i^T(t)}{W_i^T(t)W_i(t)} \right) R\,W_j(t) \right] W_j(t) \qquad (3)$$

for $j = 1, \ldots, M$, where $R = E[X[n]X^T[n]]$ is the autocorrelation matrix of the input vector $X[n]$ with eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_N \geq 0$ and the corresponding orthonormal eigenvectors $S_1, S_2, \ldots, S_N$; $I$ is the unit matrix. If we set

$$R_j(t) = \left( I - \sum_{i=1}^{j-1} \frac{W_i(t)W_i^T(t)}{W_i^T(t)W_i(t)} \right) R, \qquad (4)$$

$$\phi_j(t) = W_j^T(t)W_j(t) \qquad (5)$$

and

$$\sigma_j(t) = W_j^T(t)\,R_j(t)\,W_j(t), \qquad (6)$$

(3) can be written as

$$\frac{dW_j(t)}{dt} = \phi_j(t)\,R_j(t)\,W_j(t) - \sigma_j(t)\,W_j(t) \qquad (7)$$

for $j = 1, \ldots, M$. For the dynamics of (7) we have the following propositions:

Proposition 1. The norm of each connection weight vector $W_j(t)$ is invariant during the time evolution and is equal to the norm of the initial state $W_j(0)$, that is

$$\|W_j(t)\| = \|W_j(0)\| \quad \text{or} \quad \phi_j(t) = \phi_j(0) \quad \text{for } t \geq 0.$$

Proof. See [2].

Proposition 2. If the initial values of the weight vector satisfy the conditions $W_j^T(0)S_j \neq 0$ and $\phi_j(0) = W_j^T(0)W_j(0) \geq 1$ (for $j = 1, \ldots, M$), then we have

$$\lim_{t\to\infty} W_j(t) = \pm\|W_j(0)\|\,S_j$$

and

$$\lim_{t\to\infty} \frac{\sigma_j(t)}{\phi_j(0)} = \lambda_j.$$

Proof. See [2].
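As a numerical illustration of (1), (2) and Proposition 1, the following Python sketch runs the update for the first weight vector (j = 1, where the deflation sums in (2) are empty). The covariance, gain and iteration count are assumed demonstration values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps, gamma = 4, 5000, 0.002          # assumed demo values

# Zero-mean input with a known covariance R (assumed for the demo).
evals = np.array([3.0, 1.0, 0.5, 0.1])    # eigenvalues lambda_1 > ... > lambda_4
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
R = Q @ np.diag(evals) @ Q.T
L = np.linalg.cholesky(R)

W = np.array([0.5, 0.5, 0.5, 0.5])        # W_1[0], so phi_1(0) = 1

for _ in range(steps):
    x = L @ rng.standard_normal(N)        # sample X[n] with E[X X^T] = R
    z = W @ x                             # z_1[n] = W_1^T[n] X[n], eq. (1)
    phi = W @ W                           # phi_1[n] = W_1^T[n] W_1[n]
    # Update of the form (2) for j = 1. Since W^T (phi*z*x - z^2*W) = 0,
    # the norm changes only at O(gamma^2) per step; for the averaging
    # ODE (7) it is exactly invariant (Proposition 1).
    W = W + gamma * (phi * z * x - z**2 * W)

print("phi_1 =", W @ W)                          # stays close to phi_1(0) = 1
print("lambda_1[n] =", (W @ R @ W) / (W @ W))    # Rayleigh quotient, near 3.0
```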
3. GENERALISATION FOR COMPLEX MATRICES

For more practical applications it is desirable to generalise the proposed PCA algorithm to the case where the input vector $X[n]$ and the connection weight vectors $W_j[n]$ take complex values. The complex input signal $X[n] \in \mathbb{C}^N$ is divided into the real and imaginary part, i.e. into $X_R[n], X_I[n] \in \mathbb{R}^N$ such that $X[n] = X_R[n] + iX_I[n]$. For the proposed PCA algorithm we use the notations

$$\tilde{X}[n] = \begin{bmatrix} X_R[n] \\ X_I[n] \end{bmatrix} \in \mathbb{R}^{2N}, \qquad \tilde{W}_j[n] = \begin{bmatrix} W_{jR}[n] \\ W_{jI}[n] \end{bmatrix} \in \mathbb{R}^{2N} \qquad (13)$$

where $W_j[n] = W_{jR}[n] + iW_{jI}[n]$. With these matrices the PCA algorithm for the complex-valued case is given by (2) with $X[n]$ and $W_j[n]$ replaced by $\tilde{X}[n]$ and $\tilde{W}_j[n]$. (16)

For averaging we calculate the expectation values

$$\tilde{R}_R = E[X_R[n]X_R^T[n]] + E[X_I[n]X_I^T[n]], \qquad (19)$$

$$\tilde{R}_I = E[X_I[n]X_R^T[n]] - E[X_R[n]X_I^T[n]] \qquad (20)$$

and set

$$\tilde{R} = \begin{bmatrix} \tilde{R}_R & -\tilde{R}_I \\ \tilde{R}_I & \tilde{R}_R \end{bmatrix}. \qquad (21)$$

Then the corresponding averaging differential equation reads

$$\frac{d\tilde{W}_j(t)}{dt} = \tilde{\phi}_j(t)\left( I - \sum_{i=1}^{j-1} \frac{\tilde{W}_i(t)\tilde{W}_i^T(t)}{\tilde{\phi}_i(t)} \right) \tilde{R}\,\tilde{W}_j(t) - \tilde{\sigma}_j(t)\,\tilde{W}_j(t) \qquad (22)$$

where $\tilde{\phi}_j(t)$ and $\tilde{\sigma}_j(t)$ are defined analogously to (5) and (6). Note that in (18) and (22) we used the relationships $\tilde{z}_j[n] = \tilde{X}^T[n]\tilde{W}_j[n]$ and $\tilde{R} = E[\tilde{X}[n]\tilde{X}^T[n]]$ as shown in [7]. Denoting the complex conjugate transposed matrix by the superscript $H$ we have the following theorem:

Theorem 1: If the initial values of the weight vector satisfy $W_j^H(0)S_j \neq 0$ and $W_j^H(0)W_j(0) \geq 1$ (for $j = 1, \ldots, M$), then we have

$$\lim_{t\to\infty} \tilde{W}_j(t) = \pm\|\tilde{W}_j(0)\|\,\tilde{S}_j \qquad (23)$$

where $\tilde{W}_j(t)$ is given by (13), (16) and (22). The proof will be given in a full version of the paper (see also [8]).
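The stacked notation of (13) and (21) rests on the fact that multiplication by a complex matrix corresponds to multiplication by its real-composite block matrix. A short Python check of this identity, with an assumed toy Hermitian matrix and vector:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3                                        # assumed toy dimension

# Toy Hermitian matrix R = R_R + i R_I and complex vector W (assumptions).
R = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
R = (R + R.conj().T) / 2                     # Hermitian: R_R symmetric, R_I skew
W = rng.standard_normal(N) + 1j * rng.standard_normal(N)

RR, RI = R.real, R.imag
R_tilde = np.block([[RR, -RI],
                    [RI,  RR]])              # block structure of (21)
W_tilde = np.concatenate([W.real, W.imag])   # stacked vector, cf. (13)

# The real-composite product reproduces the complex product R W, which is
# why the real algorithm (2) applied to the stacked quantities handles
# the complex-valued case.
out = R_tilde @ W_tilde
assert np.allclose(out[:N], (R @ W).real)
assert np.allclose(out[N:], (R @ W).imag)
print("real-composite product matches the complex product")
```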

4. REDUCTION OF THE COMPUTATIONAL COMPLEXITY

Although the computational complexity of the invariant-norm PCA algorithm is not very high, it can be further decreased. For this purpose we use a special feature of the proposed PCA algorithm which is strongly related to Proposition 1. Because there are $\kappa_j \in \mathbb{R}^+$, $j = 1, \ldots, M$ such that $\phi_j(t) = \|W_j(0)\|^2 = \kappa_j$ holds for $t \geq 0$, we substitute $W_j^T[n]W_j[n]$ and $\phi_j(t)$ by $\kappa_j$ and obtain from (2) a modified PCA algorithm

$$W_j[n+1] = W_j[n] + \gamma[n]\left\{ \kappa_j\left( z_j[n]X[n] - \sum_{i=1}^{j-1}\frac{z_i[n]z_j[n]}{\kappa_i}\,W_i[n] \right) - \left( z_j^2[n] - \sum_{i=1}^{j-1}\frac{z_i[n]z_j[n]}{\kappa_i}\,W_i^T[n]W_j[n] \right) W_j[n] \right\}. \qquad (24)$$

Moreover, using (4)-(6) gives the averaging differential equation of (24) as

$$\frac{dW_j(t)}{dt} = \kappa_j\left( I - \sum_{i=1}^{j-1} \frac{W_i(t)W_i^T(t)}{\kappa_i} \right) R\,W_j(t) - \sigma_j(t)\,W_j(t). \qquad (25)$$

Concerning the solution of (25) we have the following theorem:

Theorem 2: If the initial values of the connection weight satisfy $\|W_j(0)\|^2 = \kappa_j$, then the norm of each connection weight vector $W_j(t)$ is invariant during the time evolution and is equal to the norm of the initial state $W_j(0)$, that is

$$\|W_j(t)\| = \|W_j(0)\| \quad \text{or} \quad \phi_j(t) = \phi_j(0) = \kappa_j \quad \text{for } t \geq 0,$$

and

$$\lim_{t\to\infty} \frac{\sigma_j(t)}{\phi_j(0)} = \lambda_j \quad \text{for } j = 1, \ldots, M.$$

Using the same arguments as in Section 2 we can also generalise the algorithm to the case with complex-valued signals and weight vectors. The detailed proofs of all the above theorems will be presented in a full version of the paper.
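A sketch of the modified rule for j = 1, again with assumed demonstration values; the only change from (2) is that the constant κ_1 replaces the running inner product W_1^T[n]W_1[n]:

```python
import numpy as np

rng = np.random.default_rng(2)
N, steps, gamma = 4, 5000, 0.002             # assumed demo values

evals = np.array([3.0, 1.0, 0.5, 0.1])
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
R = Q @ np.diag(evals) @ Q.T
L = np.linalg.cholesky(R)

W = np.array([0.5, 0.5, 0.5, 0.5])           # ||W_1(0)||^2 = 1, as Theorem 2 requires
kappa = W @ W                                # kappa_1, fixed once at initialization

for _ in range(steps):
    x = L @ rng.standard_normal(N)
    z = W @ x
    # Modified update (24) for j = 1: the constant kappa_1 replaces the
    # running inner product W^T[n] W[n] of the original rule (2).
    W = W + gamma * (kappa * z * x - z**2 * W)

print("lambda_1[n] =", (W @ R @ W) / (W @ W))   # near lambda_1 = 3.0
```

The saving per step and neuron is the length-N inner product that no longer has to be recomputed; the price is that the initialization must satisfy ‖W_j(0)‖² = κ_j exactly, which is the restriction on the initial conditions mentioned in the conclusion.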

5. SIMULATION RESULTS

We have simulated the proposed algorithm (24) and its differential equation (25). Two examples are given in this paper. The first example is with distinct eigenvalues, the second example with non-distinct eigenvalues. For simplicity we chose for the simulations a constant $\gamma[n] = 0.01$. A more sophisticated and better selection of $\gamma[n]$ could be made according to the Robbins-Monro procedures [6]. $W_j[0]$ and $W_{jf}$ (for $j = 1, \ldots, M$) are the weight vectors in the initial state and the steady state, respectively. $\lambda_{jf}$ are the values computed by $W_{jf}$ and $\lambda_{je}$ are the exact eigenvalues of the matrix $R$. Figures 1 and 2 show the dynamics of the variables

$$\lambda_j[n] = \frac{W_j^T[n]\,R\,W_j[n]}{W_j^T[n]\,W_j[n]} \qquad (30)$$

for $j = 1, \ldots, M$. These simulation results demonstrate the accuracy of the above analyses and the effectiveness of the proposed algorithm.

Example 1.

$$R = \begin{bmatrix} 7.0881 & 1.2403 & 3.9855 & -3.6502 \\ 1.2403 & 2.2551 & 0.8076 & -0.7820 \\ 3.9855 & 0.8076 & 3.6167 & -2.3219 \\ -3.6502 & -0.7820 & -2.3219 & 3.7891 \end{bmatrix}$$

W_1[0] = [0.5000 0.5000 0.5000 0.5000]^T
W_2[0] = [1.0000 0.0000 0.0000 0.0000]^T
W_1f = [0.7309 0.1650 0.4760 -0.4605]^T
W_2f = [0.1190 -0.9863 0.0762 -0.0856]^T
λ_1f = 12.2632, λ_2f = 1.9751
λ_1e = 12.2632, λ_2e = 1.9751

Fig. 1. Dynamics of the variable λ_j[n] for Example 1.

Example 2.

$$R = \begin{bmatrix} 5.5088 & 0.5451 & 0.1227 & 2.5765 \\ 0.5451 & 1.0659 & 0.0148 & 0.3115 \\ 0.1227 & 0.0148 & 1.0033 & 0.0701 \\ 2.5765 & 0.3115 & 0.0701 & 2.4723 \end{bmatrix}$$

W_1[0] = [0.5000 0.5000 0.5000 0.5000]^T
W_2[0] = [1.0000 0.0000 0.0000 0.0000]^T
W_3[0] = [0.5000 0.5000 0.5000 0.5000]^T
W_1f = [0.8633 0.1044 0.0235 0.4933]^T
W_2f = [0.3817 -0.5352 -0.5365 -0.5292]^T
W_3f = [-0.3304 -0.3455 -0.5583 0.6781]^T
λ_1f = 7.0504, λ_2f = 1.0000, λ_3f = 1.0000
λ_1e = 7.0504, λ_2e = 1.0000, λ_3e = 1.0000

Fig. 2. Dynamics of the variable λ_j[n] for Example 2.
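Example 1 can be re-created along the following lines. The paper does not state how the input samples were drawn, so a Gaussian generator with E[X X^T] = R is assumed here; the trajectories will differ, but the Rayleigh quotients (30) should settle near the reported values 12.2632 and 1.9751.

```python
import numpy as np

# Autocorrelation matrix R of Example 1 (from the paper).
R = np.array([[ 7.0881,  1.2403,  3.9855, -3.6502],
              [ 1.2403,  2.2551,  0.8076, -0.7820],
              [ 3.9855,  0.8076,  3.6167, -2.3219],
              [-3.6502, -0.7820, -2.3219,  3.7891]])

rng = np.random.default_rng(3)
L = np.linalg.cholesky(R)              # Gaussian X[n] with E[X X^T] = R (assumed)
gamma, steps = 0.01, 2000              # gamma as in the paper; extra steps for safety

W = [np.array([0.5, 0.5, 0.5, 0.5]),   # W_1[0]
     np.array([1.0, 0.0, 0.0, 0.0])]   # W_2[0]
kappa = [w @ w for w in W]             # kappa_j = ||W_j(0)||^2

for _ in range(steps):
    x = L @ rng.standard_normal(4)
    z = [w @ x for w in W]
    new_W = []
    for j in range(2):
        # Deflation against the earlier weight vectors, cf. (24).
        defl = sum((z[i] * z[j] / kappa[i]) * W[i] for i in range(j))
        corr = sum((z[i] * z[j] / kappa[i]) * (W[i] @ W[j]) for i in range(j))
        new_W.append(W[j] + gamma * (kappa[j] * (z[j] * x - defl)
                                     - (z[j]**2 - corr) * W[j]))
    W = new_W

for j, w in enumerate(W, start=1):
    print(f"lambda_{j}[n] =", (w @ R @ w) / (w @ w))   # eq. (30); cf. 12.2632, 1.9751
```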

6. CONCLUSION

In this paper we further considered a recently proposed PCA algorithm. A generalisation for complex values of the signal and weight vectors has been given. In addition, we presented a modification of the invariant-norm PCA algorithm. The advantage of this modification is an efficient reduction of the computational effort; one disadvantage is the restriction to special initial conditions for the weight vector.

7. ACKNOWLEDGEMENTS

This work was partially supported by the Universität Erlangen-Nürnberg due to the Gesetz zur Förderung des wissenschaftlichen und künstlerischen Nachwuchses, by the Deutsche Forschungsgemeinschaft (DFG) and by the Flughafen Frankfurt Main-Stiftung.


REFERENCES
[1] A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing, J. Wiley-Teubner Verlag, 1993.
[2] F.-L. Luo, R. Unbehauen and Y.-D. Li, "A principal component analysis algorithm with invariant norm," Neurocomputing, vol. 8, 1995, pp. 213-221.
[3] L. Ljung, "Analysis of recursive stochastic algorithms," IEEE Trans. Autom. Contr., vol. AC-22, 1977, pp. 551-575.
[4] H. J. Kushner and D. S. Clark, Stochastic Approximation Methods for Constrained and Unconstrained Systems, Springer-Verlag, 1978.
[5] J. Karhunen and J. Joutsensalo, "Representation and separation of signals using nonlinear PCA type learning," Neural Networks, vol. 7, 1994, pp. 113-128.
[6] L. Xu, E. Oja and C. Y. Suen, "Modified Hebbian learning for curve and surface fitting," Neural Networks, vol. 5, 1992, pp. 441-457.
[7] F.-L. Luo, R. Unbehauen and A. Cichocki, "A minor component analysis algorithm," to appear in Neural Networks.
[8] F.-L. Luo and R. Unbehauen, Applied Neural Networks for Signal Processing, to be published by Cambridge University Press.
