
Correlation

First, let us define some basic terms:

Uni-variate Distribution: A distribution which involves only one variable is called a uni-variate distribution.

Bi-variate Distribution: A distribution which involves two variables is called a bi-variate distribution.

Correlation:

The relationship between two variables is called correlation. Correlation measures the strength and direction of the relationship between two variables.

If a change in one variable is accompanied by a change in the other variable, the variables are said to be correlated. Correlation is broadly classified into three types.

Positive Correlation:

If two variables deviate in the same direction, that is, if an increase in one variable is accompanied by a corresponding increase in the other variable, or a decrease in one variable is accompanied by a corresponding decrease in the other variable, the two variables move in the same direction. This type of correlation is called positive correlation.

Example:

1. Height and weight of a certain group of persons.


2. Income and expenditure.
3. Rainfall and agricultural production.

Negative Correlation:

If two variables deviate in opposite directions, that is, if an increase in one variable is accompanied by a corresponding decrease in the other variable, or a decrease in one variable is accompanied by a corresponding increase in the other variable, the two variables move in opposite directions. This type of correlation is called negative correlation.

Example: Price and demand of a commodity.

Zero Correlation or Independent Correlation:

If there is no relationship between the two variables, the variables are said to be uncorrelated. This is also known as zero correlation.

Example: Beauty and intelligence.

Scatter Diagram:

It is the simplest way of representing bi-variate data diagrammatically. For a bi-variate distribution, the values of the variables (x_i, y_i), i = 1, 2, 3, …, n, are plotted along the x-axis and y-axis respectively in the xy-plane.

The diagram of dots so obtained is known as a scatter diagram. From this scatter diagram we can form a fairly good, though vague, idea of whether the variables are correlated or not.

Example: If the dots are very dense, that is, very close to each other, we should expect a fairly good amount of correlation between the variables. If the dots are widely scattered, we should expect poor correlation.
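As an illustration, a scatter diagram can be produced with a plotting library. The following is a minimal Python sketch, assuming matplotlib is installed; the (x, y) values are made up for the example:

```python
# Sketch of a scatter diagram: one dot is plotted for each (x_i, y_i) pair.
import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5, 6, 7, 8]          # illustrative data only
y = [2, 3, 3, 5, 6, 6, 8, 9]

plt.scatter(x, y)                     # plot the dots in the xy-plane
plt.xlabel("x")
plt.ylabel("y")
plt.title("Scatter diagram")
plt.show()
```

Here the dots lie close to a rising line, so we would expect a fairly strong positive correlation.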

Covariance: It is a statistical measure that describes the relationship between two random
variables and how they change together. Specifically, it tells you the direction of the linear
relationship between the variables.

Covariance between two variables x and y is given by

\[
\operatorname{Cov}(x, y) = \frac{1}{n}\sum \left(x - \bar{x}\right)\left(y - \bar{y}\right)
\]

where \(\bar{x}\) and \(\bar{y}\) are the means of x and y.

Interpretations:
Positive Covariance: If covariance is positive, it means that as one variable increases, the other
tends to increase as well.
Negative Covariance: If covariance is negative, it means that as one variable increases, the other tends to
decrease.
Zero Covariance: A covariance of zero indicates no linear relationship between the variables.
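As an illustration, the following is a minimal Python sketch (not part of the original notes; the covariance function and the income and expenditure figures are made up for the example) that computes covariance directly from the definition above:

```python
# Population covariance from the definition:
# Cov(x, y) = (1/n) * sum((x_i - x_bar) * (y_i - y_bar))
def covariance(x, y):
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    return sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / n

# Income and expenditure move together, so the covariance is positive.
income = [20, 30, 40, 50, 60]         # illustrative data only
expenditure = [15, 22, 31, 38, 45]
print(covariance(income, expenditure))  # positive value
```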

Karl Pearson’s Correlation Coefficient:

This is used to measure the degree of linear relationship between two variables. If x and y are two variables, then Karl Pearson's correlation coefficient (r) is

\[
\operatorname{Cov}(x, y) = \frac{1}{n}\sum (x - \bar{x})(y - \bar{y}), \qquad
r = \frac{\operatorname{Cov}(x, y)}{\sigma_x \sigma_y}
  = \frac{\frac{1}{n}\sum (x - \bar{x})(y - \bar{y})}
         {\sqrt{\frac{1}{n}\sum (x - \bar{x})^2}\,\sqrt{\frac{1}{n}\sum (y - \bar{y})^2}}
\]

Expanding the products and using \(\sum x = n\bar{x}\) and \(\sum y = n\bar{y}\):

\[
r = \frac{\frac{1}{n}\sum \left(xy - x\bar{y} - \bar{x}y + \bar{x}\bar{y}\right)}
         {\sqrt{\frac{1}{n}\left(\sum x^2 - 2\bar{x}\sum x + n\bar{x}^2\right)}\,
          \sqrt{\frac{1}{n}\left(\sum y^2 - 2\bar{y}\sum y + n\bar{y}^2\right)}}
  = \frac{\frac{1}{n}\sum xy - \bar{x}\bar{y}}
         {\sqrt{\frac{1}{n}\sum x^2 - \bar{x}^2}\,
          \sqrt{\frac{1}{n}\sum y^2 - \bar{y}^2}}
\]

Substituting \(\bar{x} = \frac{\sum x}{n}\) and \(\bar{y} = \frac{\sum y}{n}\) and multiplying the numerator and denominator by \(n\):

\[
r = \frac{\frac{\sum xy}{n} - \frac{\sum x}{n}\cdot\frac{\sum y}{n}}
         {\sqrt{\frac{\sum x^2}{n} - \left(\frac{\sum x}{n}\right)^2}\,
          \sqrt{\frac{\sum y^2}{n} - \left(\frac{\sum y}{n}\right)^2}}
  = \frac{n\sum xy - \sum x \sum y}
         {\sqrt{n\sum x^2 - \left(\sum x\right)^2}\,
          \sqrt{n\sum y^2 - \left(\sum y\right)^2}}
\]
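As a quick check of the final computational form, the following is a minimal Python sketch (not part of the original notes; the function name pearson_r and the height and weight figures are illustrative assumptions):

```python
from math import sqrt

# Karl Pearson's r in its computational form:
# r = (n*Sxy - Sx*Sy) / sqrt((n*Sxx - Sx^2) * (n*Syy - Sy^2))
def pearson_r(x, y):
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    syy = sum(yi * yi for yi in y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    return (n * sxy - sx * sy) / sqrt((n * sxx - sx ** 2) * (n * syy - sy ** 2))

# Heights (cm) and weights (kg) of a small group: r comes out close to +1.
heights = [150, 155, 160, 165, 170]   # illustrative data only
weights = [50, 53, 57, 60, 64]
print(pearson_r(heights, weights))
```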
Properties of Karl Pearson's correlation coefficient:

Karl Pearson's correlation coefficient assumes that the two variables x and y are linearly related; in other words, the scatter diagram of the data approximates a straight line.

1. The correlation coefficient always lies between -1 and +1, that is, -1 ≤ r ≤ +1.
   If r = +1, the correlation is perfect and positive.
   If r = 0, the correlation is zero and there is no linear relationship between the two variables.
   If r = -1, the correlation is perfect and negative.
2. The correlation coefficient is independent of change of origin and scale, that is, r(x, y) = r(u, v) (see the sketch after this list).
3. Independent variables are uncorrelated.
4. Karl Pearson's correlation coefficient deals with quantitative characteristics only.
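The following is a minimal Python sketch illustrating property 2 (not part of the original notes; it uses statistics.correlation from the standard library, Python 3.10+, and the data values and shift/scale constants are arbitrary choices):

```python
# Property 2: a change of origin and scale, u = (x - a)/h and v = (y - b)/k
# with h, k > 0, leaves the correlation coefficient unchanged.
from statistics import correlation

x = [2, 4, 6, 8, 10]                 # illustrative data only
y = [1, 3, 2, 5, 4]

u = [(xi - 6) / 2 for xi in x]       # shift origin by 6, divide scale by 2
v = [(yi - 3) / 1 for yi in y]       # shift origin by 3, keep scale

print(correlation(x, y))             # r(x, y)
print(correlation(u, v))             # identical value: r(u, v) = r(x, y)
```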

Spearman's Rank Correlation Coefficient:

Let x_i and y_i be the ranks assigned under two characteristics A and B respectively. Then Spearman's rank correlation coefficient is given by

\[
R(x, y) = 1 - \frac{6\sum d^2}{n\left(n^2 - 1\right)}
\]

where n = the number of observations and d = the rank of x minus the rank of y, i.e. d = R(x) − R(y).
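The following is a minimal Python sketch of this formula (not part of the original notes; it assumes the ranks are already assigned with no tied ranks, and the judges' rankings are made-up data):

```python
# Spearman's rank correlation: R = 1 - 6*sum(d^2) / (n*(n^2 - 1)),
# where d is the difference between the two ranks of each observation.
def spearman_R(rank_x, rank_y):
    n = len(rank_x)
    d_squared = sum((rx - ry) ** 2 for rx, ry in zip(rank_x, rank_y))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Ranks given by two judges to 5 candidates (illustrative data only).
rank_A = [1, 2, 3, 4, 5]
rank_B = [2, 1, 4, 3, 5]
print(spearman_R(rank_A, rank_B))    # 1 - 6*4 / (5*24) = 0.8
```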
