
Discriminant Analysis

Discriminant Analysis is a classification technique. It is similar to Multiple Regression Analysis, but the dependent variable in Discriminant Analysis takes categorical (nominal) values.
Requirement
To perform this classification, a priori defined groups are required. From these groups and the corresponding independent variables, a discriminant function is derived. This function is a linear combination of the independent variables, and it enables us to assign any object to one of the groups based on its discriminant score.
The groups are mutually exclusive and collectively exhaustive. The purpose of Discriminant Analysis is to minimize the variance within the groups and to maximize the variance between the groups.
Applications
1. The analysis can be used to assess the creditworthiness of customers.
2. In materials management, it can be used to classify items as vital, essential and desirable (VED analysis).
Assumptions
1. The independent variables in this model follow a multivariate normal distribution.
2. The variance-covariance matrix is the same in each group.

The model
Z = W′X
where Z is the 1×n vector of discriminant scores,
W′ is the 1×p vector of discriminant weights, and
X is the p×n data matrix.
Here n is the number of observations and p is the number of independent variables.
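
As a minimal sketch of this scoring step in MATLAB (the language used in the worked example below), borrowing purely for illustration the weight vector of the first discriminant function obtained later in this example:

% w is a p-by-1 vector of discriminant weights; X is p-by-n, one column per object.
w = [0.011; 0.249; -0.4886; -0.0003; 0.8133; 0.1388; 0.1347; -0.0173];
X = rand(8, 5);   % placeholder data: p = 8 variables, n = 5 objects
Z = w' * X;       % 1-by-n row vector of discriminant scores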

The formation of the discriminant function is based on the principle of maximizing the variation between the groups while minimizing the variation within the groups. Using the data matrix, the mean-corrected sum-of-squares and cross-products matrix is formed for each group; we denote these W1, W2, W3, etc. Similarly, sum-of-squares and cross-products matrices corrected about the overall mean of all observations are formed for each group; we denote these T1, T2, T3, etc.

Now
W = W1 + W2 + W3 + … + Wk
and similarly
T = T1 + T2 + T3 + … + Tk
where k is the number of groups. Also,
T = W + B  =>  B = T − W
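
In MATLAB (the notation used for the worked example below), these matrices can be accumulated as in the following sketch; the variable names data (an n-by-p matrix) and grp (an n-by-1 label vector) are assumed, and implicit expansion (MATLAB R2016b or later) is relied on:

function [W, T, B] = sscp_decomposition(data, grp)
    labels = unique(grp);
    mu_all = mean(data, 1);          % overall column means
    W = zeros(size(data, 2));        % within-groups SSCP
    T = zeros(size(data, 2));        % total SSCP about the overall mean
    for g = labels'
        Xg = data(grp == g, :);
        xg = Xg - mean(Xg, 1);       % corrected about the group mean
        tg = Xg - mu_all;            % corrected about the overall mean
        W = W + xg' * xg;            % W = W1 + W2 + ... + Wk
        T = T + tg' * tg;            % T = T1 + T2 + ... + Tk
    end
    B = T - W;                       % between-groups SSCP
end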

With respect to a linear composite w, the between-groups sum of squares is given by w′Bw and the within-groups sum of squares by w′Ww. Their ratio is

λ = (w′Bw) / (w′Ww)        --- (1)

and we wish to maximize λ. From (1),

λ w′Ww = w′Bw
=> w′Bw − λ w′Ww = 0

Setting the partial derivative of λ with respect to w equal to zero,

∂(w′Bw)/∂w − λ ∂(w′Ww)/∂w = 0
=> Bw − λWw = 0
=> (B − λW)w = 0

Pre-multiplying by W⁻¹,

(W⁻¹B − λI)w = 0

where the λ are the eigenvalues and the w are the eigenvectors of W⁻¹B. The elements of the eigenvectors are called the discriminant weights.

The number of discriminant functions to consider is min(p, k−1), where p is the number of independent variables and k is the number of groups; we retain at most that many discriminant functions. A sketch of this step follows.
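
In MATLAB this step can be sketched as follows, reusing W, B and grp from the sketch above. Because W⁻¹B is not symmetric, eig can return eigenvalues with tiny complex parts (visible in the output later in this example), so the real parts are taken:

[V, D] = eig(W \ B);                           % eigen decomposition of inv(W)*B
[lam, idx] = sort(real(diag(D)), 'descend');   % eigenvalues, largest first
r = min(size(W, 1), numel(unique(grp)) - 1);   % min(p, k-1) functions to keep
Wts = real(V(:, idx(1:r)));                    % retained discriminant weights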

Classification rules:
1. Classification of an object based on a single discriminant function:
a) Compute the mean discriminant value (Z̄) for each group by substituting the means of that group's independent variables into the discriminant function.
b) Calculate the Z value for the new object as well.
c) Calculate the distance between the value in step (b) and each mean discriminant value in step (a).
d) The object is assigned to the group with the shortest distance calculated in step (c).
2. Classification of an object based on two discriminant functions:
a) Compute the mean discriminant values (Z̄1, Z̄2) for each group by substituting the means of the independent variables into the two discriminant functions.
b) Calculate (Z1, Z2) for the new object as well.
c) Calculate the distance between the point in step (b) and each pair of mean discriminant values in step (a) using the Euclidean distance formula.
d) The object is assigned to the group with the shortest distance calculated in step (c).
This process extends to more than two discriminant functions. A sketch of the rule appears below.
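
A minimal MATLAB sketch of this nearest-centroid rule, with assumed names: Zbar is a k-by-r matrix whose rows are the group mean discriminant scores, and z is the 1-by-r score vector of the new object.

function g = classify_object(z, Zbar)
    d = sum((Zbar - z).^2, 2);   % squared Euclidean distance to each centroid
    [~, g] = min(d);             % index of the nearest group
end

For instance, with the group centroids computed later in this example, classify_object([4.0 -1.5], [3.8336 -2.0190; 4.9193 -1.1794; 2.8459 -0.7681]) returns 1.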

Example: A table is given below which contains data on breakfast cereals produced by
three different manufacturers (G, K and Q).

Brand  Manuf.  Cal.(X1)  Protein(X2)  Fat(X3)  Sodium(X4)  Fiber(X5)  Carbohydrates(X6)  Sugar(X7)  Potassium(X8)  Group
ACCheerios G 110 2 2 180 1.5 10.5 10 70 1
Cheerios G 110 6 2 290 2 17 1 105 1
CocoaPuffs G 110 1 1 180 0 12 13 55 1
CountChocula G 110 1 1 180 0 12 13 65 1
GoldenGrahams G 110 1 1 180 0 15 9 45 1
HoneyNutCheerios G 110 3 1 250 1.5 11.5 10 90 1
Kix G 110 2 1 260 0 21 3 40 1
LuckyCharms G 110 2 1 180 0 12 12 55 1
MultiGrainCheerios G 100 2 1 220 2 15 6 90 1
OatmealRaisinCrisp G 130 3 2 170 1.5 13.5 10 120 1
RaisinNutBran G 100 3 2 140 2.5 10.5 8 140 1
TotalCornFlakes G 110 2 1 200 0 21 3 35 1
TotalRaisinBran G 140 3 1 190 4 15 14 230 1
TotalWholeGrain G 100 3 1 200 3 16 3 110 1
Trix G 110 1 1 140 0 13 12 25 1
Wheaties G 100 3 1 200 3 17 3 110 1
WheatiesHoneyGold G 110 2 1 200 1 16 8 60 1
AllBran K 70 4 1 260 9 7 5 320 2
AppleJacks K 110 2 0 125 1 11 14 30 2
CornFlakes K 100 2 0 290 1 21 2 35 2
CornPops K 110 1 0 90 1 13 12 20 2
CracklinOatBran K 110 3 3 140 4 10 7 160 2
Crispix K 110 2 0 220 1 21 3 30 2
FrootLoops K 110 2 1 125 1 11 13 30 2
FrostedFlakes K 110 1 0 200 1 14 11 25 2
FrostedMiniWheats K 100 3 0 0 3 14 7 100 2
FruitfulBran K 120 3 0 240 5 14 12 190 2
JustRightCrunchyNuggets K 110 2 1 170 1 17 6 60 2
MueslixCrispyBlend K 160 3 2 150 3 17 13 160 2
NutNHoneyCrunch K 120 2 1 190 0 15 9 40 2
NutriGrainAlmondRaisin K 140 3 2 220 3 21 7 130 2
NutriGrainWheat K 90 3 0 170 3 18 2 90 2
Product19 K 100 3 0 320 1 20 3 45 2
RaisinBran K 120 3 1 210 5 14 12 240 2
RiceKrispies K 110 2 0 290 0 22 3 35 2

Smacks K 110 2 1 70 1 9 15 40 2
SpecialK K 110 6 0 230 1 16 3 55 2
CapNCrunch Q 120 1 2 220 0 12 12 35 3
HoneyGrahamOhs Q 120 1 2 220 1 12 11 45 3
Life Q 100 4 2 150 2 12 6 95 3
PuffedRice Q 50 1 0 0 0 13 0 15 3
PuffedWheat Q 50 2 0 0 1 10 0 50 3
QuakerOatmeal Q 100 5 2 0 2.7 1 1 110 3

x1=G1-column mean of G1

-0.5882 -0.3529 0.7647 -17.6471 0.2059 -4.0882 1.8824 -15.0000


-0.5882 3.6471 0.7647 92.3529 0.7059 2.4118 -7.1176 20.0000
-0.5882 -1.3529 -0.2353 -17.6471 -1.2941 -2.5882 4.8824 -30.0000
-0.5882 -1.3529 -0.2353 -17.6471 -1.2941 -2.5882 4.8824 -20.0000
-0.5882 -1.3529 -0.2353 -17.6471 -1.2941 0.4118 0.8824 -40.0000
-0.5882 0.6471 -0.2353 52.3529 0.2059 -3.0882 1.8824 5.0000
-0.5882 -0.3529 -0.2353 62.3529 -1.2941 6.4118 -5.1176 -45.0000
-0.5882 -0.3529 -0.2353 -17.6471 -1.2941 -2.5882 3.8824 -30.0000
-10.5882 -0.3529 -0.2353 22.3529 0.7059 0.4118 -2.1176 5.0000
19.4118 0.6471 0.7647 -27.6471 0.2059 -1.0882 1.8824 35.0000
-10.5882 0.6471 0.7647 -57.6471 1.2059 -4.0882 -0.1176 55.0000
-0.5882 -0.3529 -0.2353 2.3529 -1.2941 6.4118 -5.1176 -50.0000
29.4118 0.6471 -0.2353 -7.6471 2.7059 0.4118 5.8824 145.0000
-10.5882 0.6471 -0.2353 2.3529 1.7059 1.4118 -5.1176 25.0000
-0.5882 -1.3529 -0.2353 -57.6471 -1.2941 -1.5882 3.8824 -60.0000
-10.5882 0.6471 -0.2353 2.3529 1.7059 2.4118 -5.1176 25.0000
-0.5882 -0.3529 -0.2353 2.3529 -0.2941 1.4118 -0.1176 -25.0000

W1 = x1'*x1 = 1.0e+004 *

0.1694 0.0016 0.0008 -0.0476 0.0032 -0.0011 0.0339 0.3950


0.0016 0.0024 0.0005 0.0444 0.0016 0.0012 -0.0044 0.0520
0.0008 0.0005 0.0003 -0.0011 0.0002 -0.0007 -0.0003 0.0095
-0.0476 0.0444 -0.0011 2.4706 0.0082 0.1054 -0.1565 -0.0050
0.0032 0.0016 0.0002 0.0082 0.0028 -0.0002 -0.0018 0.0930
-0.0011 0.0012 -0.0007 0.1054 -0.0002 0.0165 -0.0157 -0.0370
0.0339 -0.0044 -0.0003 -0.1565 -0.0018 -0.0157 0.0284 0.0345
0.3950 0.0520 0.0095 -0.0050 0.0930 -0.0370 0.0345 3.9750

x2=G2- column mean of G2

-41.0000 1.4000 0.3500 74.5000 6.7500 -8.2500 -2.9500 228.2500


-1.0000 -0.6000 -0.6500 -60.5000 -1.2500 -4.2500 6.0500 -61.7500
-11.0000 -0.6000 -0.6500 104.5000 -1.2500 5.7500 -5.9500 -56.7500
-1.0000 -1.6000 -0.6500 -95.5000 -1.2500 -2.2500 4.0500 -71.7500
-1.0000 0.4000 2.3500 -45.5000 1.7500 -5.2500 -0.9500 68.2500
-1.0000 -0.6000 -0.6500 34.5000 -1.2500 5.7500 -4.9500 -61.7500
-1.0000 -0.6000 0.3500 -60.5000 -1.2500 -4.2500 5.0500 -61.7500
-1.0000 -1.6000 -0.6500 14.5000 -1.2500 -1.2500 3.0500 -66.7500
-11.0000 0.4000 -0.6500 -185.5000 0.7500 -1.2500 -0.9500 8.2500
9.0000 0.4000 -0.6500 54.5000 2.7500 -1.2500 4.0500 98.2500
-1.0000 -0.6000 0.3500 -15.5000 -1.2500 1.7500 -1.9500 -31.7500
49.0000 0.4000 1.3500 -35.5000 0.7500 1.7500 5.0500 68.2500
9.0000 -0.6000 0.3500 4.5000 -2.2500 -0.2500 1.0500 -51.7500
29.0000 0.4000 1.3500 34.5000 0.7500 5.7500 -0.9500 38.2500
-21.0000 0.4000 -0.6500 -15.5000 0.7500 2.7500 -5.9500 -1.7500
-11.0000 0.4000 -0.6500 134.5000 -1.2500 4.7500 -4.9500 -46.7500
9.0000 0.4000 0.3500 24.5000 2.7500 -1.2500 4.0500 148.2500
-1.0000 -0.6000 -0.6500 104.5000 -2.2500 6.7500 -4.9500 -56.7500
-1.0000 -0.6000 0.3500 -115.5000 -1.2500 -6.2500 7.0500 -51.7500
-1.0000 3.4000 -0.6500 44.5000 -1.2500 0.7500 -4.9500 -36.7500

W2 = x2'*x2 = 1.0e+005 *

0.0598 -0.0003 0.0013 -0.0311 -0.0018 0.0042 0.0067 -0.0164


-0.0003 0.0002 0.0000 0.0037 0.0002 -0.0000 -0.0003 0.0083
0.0013 0.0000 0.0001 -0.0023 0.0001 -0.0002 0.0002 0.0060
-0.0311 0.0037 -0.0023 1.2165 0.0026 0.0343 -0.0379 0.1796
-0.0018 0.0002 0.0001 0.0026 0.0009 -0.0008 0.0000 0.0335
0.0042 -0.0000 -0.0002 0.0343 -0.0008 0.0036 -0.0022 -0.0249
0.0067 -0.0003 0.0002 -0.0379 0.0000 -0.0022 0.0038 0.0037
-0.0164 0.0083 0.0060 0.1796 0.0335 -0.0249 0.0037 1.3196

x3=G3- column mean of G3

30.0000 -1.3333 0.6667 121.6667 -1.1167 2.0000 7.0000 -23.3333


30.0000 -1.3333 0.6667 121.6667 -0.1167 2.0000 6.0000 -13.3333
10.0000 1.6667 0.6667 51.6667 0.8833 2.0000 1.0000 36.6667
-40.0000 -1.3333 -1.3333 -98.3333 -1.1167 3.0000 -5.0000 -43.3333
-40.0000 -0.3333 -1.3333 -98.3333 -0.1167 0 -5.0000 -8.3333
10.0000 2.6667 0.6667 -98.3333 1.5833 -9.0000 -4.0000 51.6667

W3 = x3'*x3 = 1.0e+004 *

0.5200 0.0030 0.0160 1.4700 0.0037 -0.0070 0.0760 0.1850


0.0030 0.0015 0.0003 -0.0337 0.0009 -0.0030 -0.0018 0.0308
0.0160 0.0003 0.0005 0.0393 0.0002 -0.0006 0.0020 0.0103
1.4700 -0.0337 0.0393 6.1283 -0.0139 0.1180 0.3010 -0.2567
0.0037 0.0009 0.0002 -0.0139 0.0006 -0.0018 -0.0008 0.0191
-0.0070 -0.0030 -0.0006 0.1180 -0.0018 0.0102 0.0049 -0.0595
0.0760 -0.0018 0.0020 0.3010 -0.0008 0.0049 0.0152 -0.0155
0.1850 0.0308 0.0103 -0.2567 0.0191 -0.0595 -0.0155 0.6683

W = W1 + W2 + W3 = 1.0e+005 *

0.1287 0.0001 0.0029 0.1111 -0.0011 0.0033 0.0177 0.0417


0.0001 0.0006 0.0001 0.0048 0.0004 -0.0002 -0.0010 0.0166
0.0029 0.0001 0.0002 0.0016 0.0002 -0.0003 0.0003 0.0080
0.1111 0.0048 0.0016 2.0763 0.0021 0.0567 -0.0234 0.1534
-0.0011 0.0004 0.0002 0.0021 0.0013 -0.0010 -0.0002 0.0447
0.0033 -0.0002 -0.0003 0.0567 -0.0010 0.0063 -0.0033 -0.0345
0.0177 -0.0010 0.0003 -0.0234 -0.0002 -0.0033 0.0082 0.0056
0.0417 0.0166 0.0080 0.1534 0.0447 -0.0345 0.0056 1.7840

Column means of whole data

103.8627 2.4287 1.0729 160.4935 1.5536 13.2794 7.0225 78.3611

t1=G1- column mean of whole data

6.1373 -0.4287 0.9271 19.5065 -0.0536 -2.7794 2.9775 -8.3611


6.1373 3.5713 0.9271 129.5065 0.4464 3.7206 -6.0225 26.6389
6.1373 -1.4287 -0.0729 19.5065 -1.5536 -1.2794 5.9775 -23.3611
6.1373 -1.4287 -0.0729 19.5065 -1.5536 -1.2794 5.9775 -13.3611
6.1373 -1.4287 -0.0729 19.5065 -1.5536 1.7206 1.9775 -33.3611
6.1373 0.5713 -0.0729 89.5065 -0.0536 -1.7794 2.9775 11.6389
6.1373 -0.4287 -0.0729 99.5065 -1.5536 7.7206 -4.0225 -38.3611
6.1373 -0.4287 -0.0729 19.5065 -1.5536 -1.2794 4.9775 -23.3611
-3.8627 -0.4287 -0.0729 59.5065 0.4464 1.7206 -1.0225 11.6389
26.1373 0.5713 0.9271 9.5065 -0.0536 0.2206 2.9775 41.6389
-3.8627 0.5713 0.9271 -20.4935 0.9464 -2.7794 0.9775 61.6389
6.1373 -0.4287 -0.0729 39.5065 -1.5536 7.7206 -4.0225 -43.3611
36.1373 0.5713 -0.0729 29.5065 2.4464 1.7206 6.9775 151.6389
-3.8627 0.5713 -0.0729 39.5065 1.4464 2.7206 -4.0225 31.6389
6.1373 -1.4287 -0.0729 -20.4935 -1.5536 -0.2794 4.9775 -53.3611
-3.8627 0.5713 -0.0729 39.5065 1.4464 3.7206 -4.0225 31.6389
6.1373 -0.4287 -0.0729 39.5065 -0.5536 2.7206 0.9775 -18.3611

T1 = t1'*t1 = 1.0e+004 *

0.2463 0.0008 0.0026 0.3771 0.0002 0.0139 0.0464 0.4709


0.0008 0.0024 0.0004 0.0396 0.0016 0.0010 -0.0045 0.0511
0.0026 0.0004 0.0004 0.0092 0.0002 -0.0003 -0.0000 0.0113
0.3771 0.0396 0.0092 4.8173 -0.0082 0.1880 -0.0874 0.4143
0.0002 0.0016 0.0002 -0.0082 0.0029 -0.0008 -0.0022 0.0901
0.0139 0.0010 -0.0003 0.1880 -0.0008 0.0194 -0.0133 -0.0222
0.0464 -0.0045 -0.0000 -0.0874 -0.0022 -0.0133 0.0304 0.0469
0.4709 0.0511 0.0113 0.4143 0.0901 -0.0222 0.0469 4.0499

t2=G2- column mean of whole data

-33.8627 1.5713 -0.0729 99.5065 7.4464 -6.2794 -2.0225 241.6389


6.1373 -0.4287 -1.0729 -35.4935 -0.5536 -2.2794 6.9775 -48.3611
-3.8627 -0.4287 -1.0729 129.5065 -0.5536 7.7206 -5.0225 -43.3611
6.1373 -1.4287 -1.0729 -70.4935 -0.5536 -0.2794 4.9775 -58.3611
6.1373 0.5713 1.9271 -20.4935 2.4464 -3.2794 -0.0225 81.6389
6.1373 -0.4287 -1.0729 59.5065 -0.5536 7.7206 -4.0225 -48.3611
6.1373 -0.4287 -0.0729 -35.4935 -0.5536 -2.2794 5.9775 -48.3611
6.1373 -1.4287 -1.0729 39.5065 -0.5536 0.7206 3.9775 -53.3611
-3.8627 0.5713 -1.0729 -160.4935 1.4464 0.7206 -0.0225 21.6389
16.1373 0.5713 -1.0729 79.5065 3.4464 0.7206 4.9775 111.6389
6.1373 -0.4287 -0.0729 9.5065 -0.5536 3.7206 -1.0225 -18.3611
56.1373 0.5713 0.9271 -10.4935 1.4464 3.7206 5.9775 81.6389
16.1373 -0.4287 -0.0729 29.5065 -1.5536 1.7206 1.9775 -38.3611
36.1373 0.5713 0.9271 59.5065 1.4464 7.7206 -0.0225 51.6389
-13.8627 0.5713 -1.0729 9.5065 1.4464 4.7206 -5.0225 11.6389
-3.8627 0.5713 -1.0729 159.5065 -0.5536 6.7206 -4.0225 -33.3611
16.1373 0.5713 -0.0729 49.5065 3.4464 0.7206 4.9775 161.6389
6.1373 -0.4287 -1.0729 129.5065 -1.5536 8.7206 -4.0225 -43.3611
6.1373 -0.4287 -0.0729 -90.4935 -0.5536 -4.2794 7.9775 -38.3611
6.1373 3.5713 -1.0729 69.5065 -0.5536 2.7206 -4.0225 -23.3611

T2 = t2'*t2 = 1.0e+005 *

0.0700 -0.0001 0.0007 0.0046 -0.0008 0.0070 0.0080 0.0028


-0.0001 0.0002 0.0000 0.0046 0.0002 0.0000 -0.0003 0.0087
0.0007 0.0000 0.0002 -0.0044 0.0001 -0.0004 0.0001 0.0048
0.0046 0.0046 -0.0044 1.3415 0.0061 0.0442 -0.0333 0.2465
-0.0008 0.0002 0.0001 0.0061 0.0010 -0.0005 0.0002 0.0354
0.0070 0.0000 -0.0004 0.0442 -0.0005 0.0044 -0.0019 -0.0196
0.0080 -0.0003 0.0001 -0.0333 0.0002 -0.0019 0.0040 0.0062
0.0028 0.0087 0.0048 0.2465 0.0354 -0.0196 0.0062 1.3555

t3=G3- column mean of whole data

16.1373 -1.4287 0.9271 59.5065 -1.5536 -1.2794 4.9775 -43.3611


16.1373 -1.4287 0.9271 59.5065 -0.5536 -1.2794 3.9775 -33.3611

-3.8627 1.5713 0.9271 -10.4935 0.4464 -1.2794 -1.0225 16.6389


-53.8627 -1.4287 -1.0729 -160.4935 -1.5536 -0.2794 -7.0225 -63.3611
-53.8627 -0.4287 -1.0729 -160.4935 -0.5536 -3.2794 -7.0225 -28.3611
-3.8627 2.5713 0.9271 -160.4935 1.1464 -12.2794 -6.0225 31.6389

T3 = t3'*t3 = 1.0e+004 *

0.6353 0.0038 0.0138 1.9870 0.0073 0.0203 0.0928 0.3516


0.0038 0.0015 0.0003 -0.0301 0.0009 -0.0028 -0.0017 0.0320
0.0138 0.0003 0.0006 0.0296 0.0002 -0.0011 0.0017 0.0072
1.9870 -0.0301 0.0296 8.4467 0.0024 0.2403 0.3764 0.4903
0.0073 0.0009 0.0002 0.0024 0.0007 -0.0010 -0.0002 0.0244
0.0203 -0.0028 -0.0011 0.2403 -0.0010 0.0167 0.0089 -0.0201
0.0928 -0.0017 0.0017 0.3764 -0.0002 0.0089 0.0177 0.0088
0.3516 0.0320 0.0072 0.4903 0.0244 -0.0201 0.0088 0.9090

T = T1 + T2 + T3 = 1.0e+005 *

0.1581 0.0004 0.0023 0.2410 0.0000 0.0104 0.0220 0.0850


0.0004 0.0006 0.0001 0.0055 0.0005 -0.0002 -0.0009 0.0171
0.0023 0.0001 0.0003 -0.0005 0.0001 -0.0005 0.0003 0.0067
0.2410 0.0055 -0.0005 2.6679 0.0055 0.0870 -0.0043 0.3370
0.0000 0.0005 0.0001 0.0055 0.0014 -0.0007 -0.0001 0.0468
0.0104 -0.0002 -0.0005 0.0870 -0.0007 0.0080 -0.0023 -0.0238
0.0220 -0.0009 0.0003 -0.0043 -0.0001 -0.0023 0.0088 0.0118
0.0850 0.0171 0.0067 0.3370 0.0468 -0.0238 0.0118 1.8514

B = T − W = 1.0e+004 *

0.2941 0.0024 -0.0063 1.2988 0.0106 0.0704 0.0426 0.4336


0.0024 0.0001 -0.0002 0.0073 0.0003 0.0007 0.0003 0.0049
-0.0063 -0.0002 0.0004 -0.0206 -0.0007 -0.0018 -0.0008 -0.0126
1.2988 0.0073 -0.0206 5.9156 0.0347 0.3035 0.1910 1.8359
0.0106 0.0003 -0.0007 0.0347 0.0012 0.0030 0.0013 0.0210
0.0704 0.0007 -0.0018 0.3035 0.0030 0.0171 0.0101 0.1069
0.0426 0.0003 -0.0008 0.1910 0.0013 0.0101 0.0062 0.0615
0.4336 0.0049 -0.0126 1.8359 0.0210 0.1069 0.0615 0.6741

W⁻¹B

-0.0031 0.0048 -0.0113 -0.2882 0.0184 0.0103 -0.0049 0.1163


4.8232 0.0588 -0.1511 20.1616 0.2508 1.2001 0.6799 7.6129
-5.7510 -0.1530 0.3763 -19.2871 -0.6193 -1.6226 -0.7333 -11.1678
0.0146 -0.0003 0.0006 0.0873 -0.0010 0.0026 0.0025 0.0115
8.1021 0.2697 -0.6581 24.0731 1.0812 2.4110 0.9826 17.0965
2.8046 0.0316 -0.0817 11.8728 0.1358 0.6918 0.3978 4.3612
2.8276 0.0296 -0.0770 12.0996 0.1281 0.6923 0.4031 4.3398
-0.1538 -0.0059 0.0144 -0.4109 -0.0236 -0.0476 -0.0179 -0.3447

Now we have to find the eigenvectors and eigenvalues.

The eigenvectors are

0.0110 0.0352 -0.0527 0.0107 -0.0720 - 0.0709i -0.0720 + 0.0709i -0.0379 -0.0681
0.2490 -0.2144 0.7022 0.4065 -0.2650 - 0.0459i -0.2650 + 0.0459i 0.1273 -0.2781
-0.4886 -0.3488 0.6202 -0.8032 0.5703 + 0.0049i 0.5703 -0.0049i -0.8396 -0.6239
-0.0003 -0.0040 -0.0037 0.0121 0.0065 + 0.0061i 0.0065 -0.0061i 0.0023 -0.0092
0.8133 0.8854 0.1352 -0.2461 0.6898 0.6898 -0.5126 -0.2671
0.1388 -0.1436 0.2039 -0.3589 0.1092 + 0.0391i 0.1092 - 0.0391i 0.0458 0.1612
0.1347 -0.1613 0.2443 0.0013 0.3081 + 0.0837i 0.3081 - 0.0837i 0.1119 0.6566
-0.0173 -0.0226 -0.0084 0.0067 -0.0256 + 0.0156i -0.0256 - 0.0156i -0.0001 -0.0180

And the eigenvalues are

1.8698 0 0 0 0 0 0 0
0 0.4810 0 0 0 0 0 0
0 0 0.0000 0 0 0 0 0
0 0 0 0.0000 0 0 0 0
0 0 0 0 -0.0000 + 0.0000i 0 0 0
0 0 0 0 0 -0.0000 - 0.0000i 0 0
0 0 0 0 0 0 -0.0000 0
0 0 0 0 0 0 0 -0.0000

The two highest eigenvalues are 1.8698 and 0.4810.

The corresponding eigenvectors are

Eigenvector 1    Eigenvector 2


0.0110 0.0352
0.2490 -0.2144
-0.4886 -0.3488
-0.0003 -0.0040
0.8133 0.8854
0.1388 -0.1436
0.1347 -0.1613
-0.0173 -0.0226
Hence the discriminant functions are formed as follows:

Z1 = 0.011X1 + 0.249X2 − 0.4886X3 − 0.0003X4 + 0.8133X5 + 0.1388X6 + 0.1347X7 − 0.0173X8

Z2 = 0.0352X1 − 0.2144X2 − 0.3488X3 − 0.004X4 + 0.8854X5 − 0.1436X6 − 0.1613X7 − 0.0226X8

For Group G1:  Z̄1 = 3.8336,  Z̄2 = −2.0190
For Group G2:  Z̄1 = 4.9193,  Z̄2 = −1.1794
For Group G3:  Z̄1 = 2.8459,  Z̄2 = −0.7681

The classification of the objects is given in the following table:

X1  X2  X3  X4  X5  X6  X7  X8  Actual group  Z1  Z2  Dist. from G1  Dist. from G2  Dist. from G3  Pred. group
110 2 2 180 1.5 10.5 10 70 1 3.4841 -1.3547 0.5634447 2.090529 0.751399 1

110 6 2 290 2 17 1 105 1 3.9379 -2.4829 0.2260817 2.662258 4.133003 1


110 1 1 180 0 12 13 55 1 3.3752 -2.4793 0.4220067 4.073985 3.208364 1
110 1 1 180 0 12 13 65 1 3.2023 -2.7058 0.8702339 5.277986 3.881702 1
110 1 1 180 0 15 9 45 1 3.4257 -2.0385 0.1667627 2.968894 1.950084 1
110 3 1 250 1.5 11.5 10 90 1 3.9932 -2.0961 0.0314166 1.698 3.079881 1
110 2 1 260 0 21 3 40 1 3.7613 -2.3528 0.1166497 2.717832 3.349231 1
110 2 1 180 0 12 12 55 1 3.4895 -2.5324 0.3819844 3.874937 3.526975 1
100 2 1 220 2 15 6 90 1 3.9975 -1.5285 0.2674535 0.971586 1.904391 1
130 3 2 170 1.5 13.5 10 120 1 3.507 -2.3893 0.2437897 3.458449 3.065343 1
100 3 2 140 2.5 10.5 8 140 1 2.9695 -2.1386 0.760973 4.721785 1.893547 1
110 2 1 200 0 21 3 35 1 3.866 -2.0002 0.0014032 1.783154 2.558674 1
140 3 1 190 4 15 14 230 1 4.9773 -2.9071 2.0967713 2.988311 9.118187 1
100 3 1 200 3 16 3 110 1 4.4549 -0.8904 1.6597517 0.299188 2.603838 2
110 1 1 140 0 13 12 25 1 3.9102 -1.6227 0.1629213 1.214798 1.863076 1
100 3 1 200 3 17 3 110 1 4.5937 -1.0341 1.54778 0.127127 3.125561 2
110 2 1 200 1 16 8 60 1 4.2266 -1.7693 0.2167991 0.827815 2.908734 1

70 4 1 260 9 7 5 320 2 4.6266 -0.8725 1.9433113 0.179861 3.181792 2


110 2 0 125 1 11 14 30 2 5.371 -0.6916 4.1255895 0.441982 6.381982 2
100 2 0 290 1 21 2 35 2 4.8963 -1.3153 1.624525 0.018998 4.503568 2
110 1 0 90 1 13 12 20 2 5.3138 -0.0758 5.9670183 1.373563 6.56981 2
110 3 3 140 4 10 7 160 2 3.2606 -1.0274 1.3115996 2.77439 0.239213 3
110 2 0 220 1 21 3 30 2 5.2483 -0.7326 3.6562011 0.307871 5.772786 2
110 2 1 125 1 11 13 30 2 4.7478 -0.8791 2.1351337 0.119592 3.629545 2
110 1 0 200 1 14 11 25 2 5.1978 -0.6101 3.8460409 0.401665 5.556398 2
100 3 0 0 3 14 7 100 2 5.4386 0.1245 7.1706173 1.969828 7.518828 2
120 3 0 240 5 14 12 190 2 6.3283 -1.2034 6.8887315 1.985857 12.3166 2
110 2 1 170 1 17 6 60 2 4.1052 -1.4707 0.3743995 0.747615 2.079483 1
160 3 2 150 3 17 13 160 2 5.2598 -1.819 2.0740464 0.525028 6.931304 2
120 2 1 190 0 15 9 40 2 3.8676 -1.8279 0.0376752 1.526625 2.167047 1
140 3 2 220 3 21 7 130 2 5.2851 -1.7288 2.1910683 0.43565 6.872641 2
90 3 0 170 3 18 2 90 2 5.3318 -0.4467 4.7167305 0.707006 6.282997 2
100 3 0 320 1 20 3 45 2 4.9591 -1.8935 1.2825005 0.511523 5.732139 2
120 3 1 210 5 14 12 240 2 4.9845 -2.5649 1.6225776 1.923861 7.8021 1
110 2 0 290 0 22 3 35 2 4.4659 -2.154 0.4180283 1.155417 4.545119 1
110 2 1 70 1 9 15 40 2 4.5835 -0.9215 1.7668563 0.179274 3.042785 2
110 6 0 230 1 16 3 55 2 5.1151 -1.4781 1.9348151 0.127559 5.653369 2

120 1 2 220 0 12 12 35 3 3.195 -2.0217 0.4078173 3.68268 1.693384 1


120 1 2 220 1 12 11 45 3 3.7008 -1.2014 0.6861056 1.485226 0.918603 1
100 4 2 150 2 12 6 95 3 3.5256 -1.7092 0.19084 2.223088 1.347661 1
50 1 0 0 0 13 0 15 3 2.3412 -0.6633 4.0651803 6.912959 0.265705 3
50 2 0 0 1 10 0 50 3 2.3821 -0.354 4.8790773 7.118669 0.386589 3
100 5 2 0 2.7 1 1 110 3 1.9306 1.3413 14.913025 15.28626 5.287342 3

Calculation of the misclassification error
1) Re-substitution method – In this method the whole sample is used. The available group data are re-substituted to calculate the Apparent Error Rate (APER). Suppose there are two groups G1 and G2, with n1 and n2 observations respectively. After substituting the data values into the discriminant function, if the function discriminates as shown in the table below, then the misclassification rate is 0.
                    Predicted membership
                       G1      G2
Actual        G1       n1      0
membership    G2       0       n2

Suppose instead that n1′ of the observations in G1 and n2′ of the observations in G2 are misclassified, as shown below.
                    Predicted membership
                       G1       G2
Actual        G1     n1 − n1′   n1′
membership    G2       n2′    n2 − n2′

The error rate is calculated as follows:

Error rate = (n1′ + n2′) / (n1 + n2)

Tables of this type are called confusion matrices.
For the given example, the confusion matrix can be formed as follows:-
                    Predicted membership
                       G1    G2    G3
Actual        G1       15     2     0
membership    G2        4    15     1
              G3        3     0     3

APER = (2+4+1+3)/43 = 10/43 = 23.256 %
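
The same calculation in MATLAB, as a short sketch using the confusion matrix above:

C = [15 2 0; 4 15 1; 3 0 3];                 % rows: actual, columns: predicted
APER = (sum(C(:)) - trace(C)) / sum(C(:));   % (2+4+1+3)/43 = 0.23256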


2) Hold-out method – Divide the sample in each group into two portions. One portion of each group is used to build the classification; the other portion of each group is used to estimate the error rate.

- 14 -
3) The U method (Cross-Validation or Jack-knife method) – If there are two groups, leave out one element of a group at a time and build the classification from the remaining elements of that group together with all elements of the other group; then classify the left-out element and record whether it is misclassified. Repeat until every element of the first group has been left out once, then do the same for the second group. Continue until all elements have been subjected to the misclassification procedure; the proportion misclassified estimates the error rate. A sketch of this loop follows.
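
A sketch of this leave-one-out loop in MATLAB; fit_discriminant and predict_group are hypothetical helper functions standing in for the fitting and classification steps described above:

n = size(data, 1);
errors = 0;
for i = 1:n
    keep = true(n, 1);
    keep(i) = false;                                   % leave observation i out
    mdl  = fit_discriminant(data(keep, :), grp(keep)); % hypothetical helper
    pred = predict_group(mdl, data(i, :));             % hypothetical helper
    errors = errors + (pred ~= grp(i));
end
u_error_rate = errors / n;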

Statistical Tests:
To check whether the developed discriminant functions discriminate properly, several tests serve our purpose. One useful test is Mahalanobis' D² statistic, used mainly for the two-group problem. Here D² is the generalized distance between the centroids of the two groups. The centroid of each group is obtained by substituting the mean values of the independent variables into the discriminant function; the corresponding Z̄1 and Z̄2 values are the centroids of the groups.
The distance is calculated as follows:

D² = Z̄2 − Z̄1 if Z̄1 < Z̄2, and Z̄1 − Z̄2 if Z̄2 < Z̄1

A larger distance D² means that the discriminant function discriminates more effectively. The distance D² gives rise to a test statistic M, which follows an F distribution with p and n1 + n2 − p − 1 degrees of freedom. Here n1 and n2 are the numbers of observations in the two groups and p is the number of independent variables.

The expression for M is:

M = n1 n2 (n1 + n2 − p − 1) D² / [(n1 + n2)(n1 + n2 − 2) p]

In this test, M is compared with the F distribution at a chosen significance level α. If M is significant, the discriminant function is significant; otherwise it is not, and we conclude that there is not much difference between the two groups. A sketch of this test follows.
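
A sketch of the test in MATLAB, assuming D2, n1, n2, p and alpha are already defined; finv is the inverse F distribution function from the Statistics and Machine Learning Toolbox:

M = n1*n2*(n1 + n2 - p - 1) * D2 / ((n1 + n2)*(n1 + n2 - 2) * p);
Fcrit = finv(1 - alpha, p, n1 + n2 - p - 1);   % critical F value
significant = (M > Fcrit);                     % true => function discriminates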
If there are two groups (so the number of discriminant functions is only one), the significance of the discriminant function can be tested as above. But if there are more than two groups, i.e. more than one discriminant function, Bartlett's χ² statistic comes in handy. It helps us retain the significant discriminant functions.
This statistic is given by

V = {(n − 1) − ½(p + k)} Σⱼ₌₁ʳ ln(1 + λⱼ)

The λ values are the eigenvalues of the W⁻¹B matrix. The statistic V is tested for significance using the χ² distribution with p(k − 1) degrees of freedom at a chosen α. If V is significant, the next job is to identify how many of the discriminant functions are significant. This test is carried out in the following manner.

First calculate V1, the value corresponding to the highest eigenvalue:

V1 = {(n − 1) − ½(p + k)} ln(1 + λ1)

Subtract V1 from V and test whether V − V1 is significant, using (p − 1)(k − 2) degrees of freedom. If it is significant, the significance of the next discriminant function is checked. If V − V1 is not significant, only the first discriminant function, corresponding to the highest eigenvalue, is significant; the other discriminant functions are not.

If V − V1 is significant, there are further discriminant functions that contribute significantly to discriminating the groups.

Now V2 is calculated from the second highest eigenvalue:

V2 = {(n − 1) − ½(p + k)} ln(1 + λ2)

Now calculate V − V1 − V2 and test its significance with (p − 2)(k − 3) degrees of freedom. If it is significant, the procedure continues, since further discriminant functions beyond the second may be significant. If V − V1 − V2 is not significant, conclude that the second discriminant function is significant in addition to the first one, and stop the procedure.
This whole process is summarized in the following flowchart, reproduced here in text form:

[Flowchart: Calculate V and set i = 1. Calculate Vi and subtract it from V. If V − V1 − … − Vi is significant, the discriminant function for the next eigenvalue is also examined (i = i + 1, repeat); otherwise discriminant functions Z1, …, Zi are significant in addition to any previous ones, the remaining functions are not, and the procedure stops.]
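
A MATLAB sketch of this sequential test, assuming lam holds the retained eigenvalues in descending order and that chi2inv (Statistics and Machine Learning Toolbox) is available:

c = (n - 1) - (p + k)/2;
V = c * sum(log(1 + lam));               % overall statistic, p*(k-1) d.o.f.
resid = V;
nsig  = 0;
for i = 1:numel(lam)
    resid = resid - c*log(1 + lam(i));   % remove Vi for function i
    nsig  = i;                           % function i judged significant
    df    = (p - i)*(k - 1 - i);         % d.o.f. of the residual test
    if df <= 0 || resid <= chi2inv(1 - alpha, df)
        break                            % remaining functions not significant
    end
end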

Interpretation of the attributes with respect to Discriminant axes

Labeling the discriminant space:-


For the interpretation we can follow these steps:
Step 1: First determine how many discriminant functions to retain, and then calculate the discriminant loadings for the problem.
Step 2: Calculate the Wilks' lambda (Λ) and the univariate F-value associated with each variable.
The univariate F-value associated with a particular variable i can be calculated as follows:

Fi = (1/Λi − 1) × (n − k)/(k − 1)
Stretch the loading of each variable by multiplying it by the univariate F-value associated with that variable. We then know in which quadrant each attribute lies, and we draw the vectors from the origin. The discriminant loading is the cosine of the angle between the attribute and the discriminant axis.

Step 3: Stretch each group centroid as well, by multiplying it by the approximate F-value for the corresponding discriminant function. The approximate F-value for each discriminant function can be obtained as follows. For the first discriminant function,

F = (highest eigenvalue) × (n − k)/(k − 1)

The F value for each discriminant function is calculated in the same way. The new centroids are likewise computed from the discriminant loadings.

Step 1: We have already seen how to decide the number of discriminant functions to retain. Now let us see how the discriminant loadings are calculated.

(a) Rescaling the discriminant weights


Wj* = C*Wj

where Wj is the vector of discriminant weights and Wj* the vector of rescaled discriminant weights.
C contains the square roots of the diagonal elements of the total-sample variance-covariance matrix.
The covariance matrix is obtained by dividing each element of the mean-corrected sum-of-squares and cross-products matrix (S) by n − 1, the sample size less one.
The matrix S is obtained by pre-multiplying the mean-corrected data matrix by its transpose.
(b) Calculation of discriminant loadings:

lj = R·Wj*

where R is the correlation matrix. R can be obtained as

R = 1/(n−1) · (D^(−1/2) S D^(−1/2))

where D^(−1/2) contains the reciprocals of the standard deviations of the variables in the original data matrix.
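
Steps (a) and (b) can be put together in a short MATLAB sketch; S, n and the weight vector wj are assumed to be available. Note that R = 1/(n−1)·D^(−1/2) S D^(−1/2) is the same as applying D^(−1/2) to the covariance matrix S/(n−1) on both sides:

Covm  = S / (n - 1);                  % total-sample covariance matrix
C     = diag(sqrt(diag(Covm)));       % square roots of the diagonal elements
wstar = C * wj;                       % (a) rescaled discriminant weights
Dm    = diag(1 ./ sqrt(diag(Covm)));  % D^(-1/2)
R     = Dm * Covm * Dm;               % correlation matrix
lj    = R * wstar;                    % (b) discriminant loadings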
For the given example
(a)
Cov_mat = (S)/42

1.0e+003*
0.3765 0.0009 0.0055 0.5738 0.0000 0.0247 0.0523 0.2024 
0.0009 0.0015 0.0002 0.0132 0.0011 - 0.0004 - 0.0022 0.0406 

0.0055 0.0002 0.0007 - 0.0012 0.0002 - 0.0012 0.0006 0.0159
 
0.5738 0.0132 - 0.0012 6.3522 0.0132 0.2072 - 0.0104 0.8024
0.0000 0.0011 0.0002 0.0132 0.0033 - 0.0017 - 0.0002 0.1115 
 
0.0247 - 0.0004 - 0.0012 0.2072 - 0.0017 0.0191 - 0.0055 - 0.0568 
0.0523 - 0.0022 0.0006 - 0.0104 - 0.0002 - 0.0055 0.0209 0.0280 
 
0.2024 0.0406 0.0159 0.8024 0.1115 - 0.0568 0.0280 4.4081 

Square roots of the diagonal elements of the variance-covariance matrix (C):

19.4048  0       0       0        0       0       0       0
 0       1.2224  0       0        0       0       0       0
 0       0       0.8073  0        0       0       0       0
 0       0       0       79.7004  0       0       0       0
 0       0       0       0        1.8066  0       0       0
 0       0       0       0        0       4.3703  0       0
 0       0       0       0        0       0       4.5744  0
 0       0       0       0        0       0       0       66.3932
W1* = C*(Eigen vector 1) W2* = C*(Eigen vector 2)
0.2125 0.6824
0.3044 -0.2621
-0.3944 -0.2816
-0.0243 -0.3179
1.4693 1.5996
0.6064 -0.6278
0.6160 -0.7379
-1.1479 -1.5037
(b) D^(−1/2) matrix

0.0527  0       0       0       0       0       0       0
0       0.8185  0       0       0       0       0       0
0       0       1.2478  0       0       0       0       0
0       0       0       0.0129  0       0       0       0
0       0       0       0       0.5558  0       0       0
0       0       0       0       0       0.2349  0       0
0       0       0       0       0       0       0.2204  0
0       0       0       0       0       0       0       0.0151

R = 1/(n−1) * (D^(−1/2) S D^(−1/2)) =

 1.0000  0.0392  0.3621  0.3895  0.0001  0.3060  0.6075  0.1614
 0.0392  1.0000  0.2022  0.1392  0.5152 -0.0691 -0.3960  0.5029
 0.3621  0.2022  1.0000 -0.0190  0.1528 -0.3509  0.1716  0.3008
 0.3895  0.1392 -0.0190  1.0000  0.0942  0.6265 -0.0294  0.1563
 0.0001  0.5152  0.1528  0.0942  1.0000 -0.2198 -0.0226  0.9372
 0.3060 -0.0691 -0.3509  0.6265 -0.2198  1.0000 -0.2851 -0.2017
 0.6075 -0.3960  0.1716 -0.0294 -0.0226 -0.2851  1.0000  0.0934
 0.1614  0.5029  0.3008  0.1563  0.9372 -0.2017  0.0934  1.0000

Now the discriminant loadings can be obtained:

l1 = R·W1*        l2 = R·W2*


0.4468 -0.4366
0.1234 0.0670
-0.4833 -0.1957
0.4291 -0.5392
0.3406 0.1371
0.5065 -0.3391

0.2443 -0.2561
0.2293 -0.1029

Variable contribution
In the previous section we saw how the discriminant loadings are calculated. These loadings tell us about the contribution of each variable to discriminating the objects. A discriminant loading is the correlation between the discriminant function and the corresponding variable. If a variable is strongly attached to the first discriminant function, it contributes more to the discrimination, provided that function accounts for more of the variation than the other discriminant functions.
Step 2:
Calculation of Wilks' lambda (Λ)
For variable X1:
G1(:,1) - Mn1(1)

-0.5882
-0.5882
-0.5882
-0.5882
-0.5882
-0.5882
-0.5882
-0.5882
-10.5882
19.4118
-10.5882
-0.5882
29.4118
-10.5882
-0.5882
-10.5882
-0.5882

w1= 1.6941e+003
G2(:,1)-Mn2(1)

-41
-1

-11
-1
-1
-1
-1
-1
-11
9
-1
49
9
29
-21
-11
9
-1
-1
-1

w2= 5980
G3(:,1)-Mn3(1)

30
30
10
-40
-40
10
w3 = 5200
W=w1+w2+w3 = 12874

Similarly,

T1 = (G1(:,1) - Mn_whole(1))' * (G1(:,1) - Mn_whole(1)) = 2.4631e+003
T2 = (G2(:,1) - Mn_whole(1))' * (G2(:,1) - Mn_whole(1)) = 6.9988e+003
T3 = (G3(:,1) - Mn_whole(1))' * (G3(:,1) - Mn_whole(1)) = 6.3531e+003

T = T1 + T2 + T3 = 1.5815e+004

Wilks' lambda = W/T = 12874/15815 = 0.8140
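
The same computation for any one variable, as a MATLAB sketch (x is the n-by-1 column of that variable, grp the group labels; for X1 it reproduces lambda = 0.8140 and, with n = 43 and k = 3, the F-value 4.57 listed below):

Wss = 0;
for g = unique(grp)'
    xg  = x(grp == g);
    Wss = Wss + sum((xg - mean(xg)).^2);   % within-groups sum of squares
end
Tss    = sum((x - mean(x)).^2);            % total sum of squares
lambda = Wss / Tss;                        % Wilks' lambda for this variable
F      = (1/lambda - 1) * (n - k)/(k - 1); % univariate F-value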

Similarly, for other variables, Wilks’ lambda can be calculated.

Variable Wilks’ lambda


X1 0.8140
X2 0.9882

X3 0.8381
X4 0.7783
X5 0.9125
X6 0.7864
X7 0.9293
X8 0.9636

The univariate F-values can also be calculated as mentioned earlier. These are given in
the following table.

Variable Wilks’ lambda Univariate F-value


X1 0.8140 4.57
X2 0.9882 0.24
X3 0.8381 3.86
X4 0.7783 5.697
X5 0.9125 1.918
X6 0.7864 5.43
X7 0.9293 1.52
X8 0.9636 0.755

Now the previously calculated discriminant loadings are stretched by multiplying them
with the respective variable’s univariate F-value. The stretched discriminant loadings are
given in the following table.

Stretched Discriminant Loadings

Variable   Function 1   Function 2
X1 2.041876 -1.99526
X2 0.029616 0.01608
X3 -1.86554 -0.7554
X4 2.444583 -3.07182
X5 0.653271 0.262958
X6 2.750295 -1.84131
X7 0.371336 -0.38927
X8 0.173122 -0.07769

Centroid Discriminant Loadings


G=

110.5882 2.3529 1.2353 197.6471 1.2941 14.5882 8.1176 85.0000


111.0000 2.6000 0.6500 185.5000 2.2500 15.2500 7.9500 91.7500
90.0000 2.3333 1.3333 98.3333 1.1167 10.0000 5.0000 58.3333

x=
6.7255 -0.0758 0.1624 37.1536 -0.2595 1.3088 1.0951 6.6389
7.1373 0.1713 -0.4229 25.0065 0.6964 1.9706 0.9275 13.3889
-13.8627 -0.0954 0.2604 -62.1602 -0.4369 -3.2794 -2.0225 -20.0278

>> s=x'*x

1.0e+003 *

0.2883 0.0020 -0.0055 1.2901 0.0093 0.0683 0.0420 0.4178


0.0020 0.0000 -0.0001 0.0074 0.0002 0.0006 0.0003 0.0037
-0.0055 -0.0001 0.0003 -0.0207 -0.0005 -0.0015 -0.0007 -0.0098
1.2901 0.0074 -0.0207 5.8696 0.0349 0.3018 0.1896 1.8264
0.0093 0.0002 -0.0005 0.0349 0.0007 0.0025 0.0012 0.0164
0.0683 0.0006 -0.0015 0.3018 0.0025 0.0164 0.0099 0.1008
0.0420 0.0003 -0.0007 0.1896 0.0012 0.0099 0.0062 0.0602
0.4178 0.0037 -0.0098 1.8264 0.0164 0.1008 0.0602 0.6245

cov_mat=s/2
1.0e+003 *

0.1442 0.0010 -0.0028 0.6450 0.0046 0.0342 0.0210 0.2089


0.0010 0.0000 -0.0001 0.0037 0.0001 0.0003 0.0001 0.0019
-0.0028 -0.0001 0.0001 -0.0104 -0.0002 -0.0007 -0.0004 -0.0049

0.6450 0.0037 -0.0104 2.9348 0.0175 0.1509 0.0948 0.9132
0.0046 0.0001 -0.0002 0.0175 0.0004 0.0012 0.0006 0.0082
0.0342 0.0003 -0.0007 0.1509 0.0012 0.0082 0.0049 0.0504
0.0210 0.0001 -0.0004 0.0948 0.0006 0.0049 0.0031 0.0301
0.2089 0.0019 -0.0049 0.9132 0.0082 0.0504 0.0301 0.3122
c=

12.0072 0 0 0 0 0 0 0
0 0.1486 0 0 0 0 0 0
0 0 0.3695 0 0 0 0 0
0 0 0 54.1738 0 0 0 0
0 0 0 0 0.6096 0 0 0
0 0 0 0 0 2.8593 0 0
0 0 0 0 0 0 1.7536 0
0 0 0 0 0 0 0 17.6699

ev1 =

0.0110
0.2490
-0.4886
-0.0003
0.8133
0.1388
0.1347
-0.0173
ev2 =

0.0352
-0.2144
-0.3488
-0.0040
0.8854
-0.1436
-0.1613
-0.0226

wstar1=c*ev1

wstar1 =

0.1321
0.0370
-0.1805
-0.0163
0.4958

0.3969
0.2362
-0.3057

>> wstar2=c*ev2

0.4227
-0.0319
-0.1289
-0.2167
0.5397
-0.4106
-0.2829
-0.3993

std(G) =

12.0072  0.1486  0.3695  54.1738  0.6096  2.8593  1.7536  17.6699

d^(−1/2) =

0.0833 0 0 0 0 0 0 0
0 6.7295 0 0 0 0 0 0
0 0 2.7064 0 0 0 0 0
0 0 0 0.0185 0 0 0 0
0 0 0 0 1.6404 0 0 0
0 0 0 0 0 0.3497 0 0
0 0 0 0 0 0 0.5703 0
0 0 0 0 0 0 0 0.0566

R=

1.0004 0.5705 -0.6240 0.9940 0.6342 0.9952 0.9982 0.9850


0.5705 1.0006 -0.9980 0.4605 0.9971 0.6486 0.5158 0.7048
-0.6240 -0.9980 0.9999 -0.5189 -0.9998 -0.6979 -0.5719 -0.7505
0.9940 0.4605 -0.5189 1.0044 0.5300 0.9761 1.0002 0.9562
0.6342 0.9971 -0.9998 0.5300 0.9999 0.7072 0.5825 0.7591
0.9952 0.6486 -0.6979 0.9761 0.7072 0.9998 0.9866 0.9971
0.9982 0.5158 -0.5719 1.0002 0.5825 0.9866 1.0001 0.9715
0.9850 0.7048 -0.7505 0.9562 0.7591 0.9971 0.9715 1.0002

l*1 = R·wstar1        l*2 = R·wstar2


0.8938 -0.4724

0.9432 0.0826
-0.9698 -0.0400
0.8198 -0.5248
0.9745 0.0317
0.9412 -0.4277
0.8575 -0.4991
0.9719 -0.3905

Mean       l*1       l*2       l*1 × mean   l*2 × mean


103.863 0.8938 -0.4724 92.83275 -49.0649
2.4287 0.9432 0.0826 2.29075 0.200611
1.0729 -0.9698 -0.04 -1.0405 -0.04292
160.4935 0.8198 -0.5248 131.5726 -84.227
1.5536 0.9745 0.0317 1.513983 0.049249
13.2794 0.9412 -0.4277 12.49857 -5.6796
7.0225 0.8575 -0.4991 6.021794 -3.50493
78.3611 0.9719 -0.3905 76.15915 -30.6
