Analysis & Interpretation of Data
Statistics.
Uploaded by Mary Ann Habitan

Analysis and Interpretation of Data
Analysis and Interpretation provide answers to the
research questions postulated in the study.

Analysis means the ordering, manipulating, and summarizing of data to obtain answers to research questions. Its purpose is to reduce data to an intelligible and interpretable form so that the relations of research problems can be studied and tested.

Interpretation takes the results of analysis, makes inferences pertinent to the research studied, and draws conclusions about these relations.

Statistics is simply a tool in research. It is a language which, through its own special symbols and grammar, takes the numerical facts of life and translates them meaningfully.

Statistics thus gathers numerical data. The variations of the data gathered are abstracted based on group characteristics and combined to serve the purposes of description, analysis, interpretation, and possible generalization. In research, this is known as the process of concatenation, where statements are "chained together" with other statements.
Descriptive Statistics – allows the researcher to describe the population or sample used in the study.

Inferential Statistics – draws inferences from sample data and deals with answering the research questions postulated, which are, in some cases, cause-and-effect relationships.
Descriptive Statistics
Describes the characteristics of the population or the
sample. To make them meaningful, they are grouped
according to the following measures:
 Measures of central tendency
 Measures of dispersion or variability
 Measures of noncentral location
 Measures of symmetry and/or asymmetry
 Measures of peakedness or flatness
Measures of Central Tendency or Averages
These include the mean, median, and mode. The mean is the measure obtained by adding all the values in a population or sample and dividing by the number of values added. The mode is simply the most frequent score. The median is the value above and below which one half of the observations fall.
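The three averages above can be computed directly with Python's standard-library statistics module; the scores here are invented for illustration.

```python
# Mean, median, and mode of a small sample, using the stdlib statistics module.
import statistics

scores = [4, 8, 6, 5, 3, 8, 7]

mean = statistics.mean(scores)      # sum of all values / number of values
median = statistics.median(scores)  # middle value of the sorted scores
mode = statistics.mode(scores)      # the most frequent score

print(mean, median, mode)
```

Sorted, the scores are 3, 4, 5, 6, 7, 8, 8, so the median is 6 and the mode is 8.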
Measures of Dispersion or Variability
These include the variance, standard deviation, and the range. A measure of dispersion conveys information regarding the amount of variability present in a set of data.

The variance is a measure of dispersion of a set of scores. It tells us how much the scores are spread out; it describes the extent to which the scores differ from each other about their mean.
Standard deviation thus refers to the deviation of scores from the mean. Where ordinal measures of 1-5 scores (low-high) are used, data will most likely have standard deviations of less than one, unless the responses are extremes (i.e. all ones or all fives).

The range is defined as the difference between the highest and the lowest scores.
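The three measures of dispersion can be sketched with the standard-library statistics module; the sample below is invented, and the population (rather than sample) formulas are used.

```python
# Variance, standard deviation, and range of a small data set.
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]

variance = statistics.pvariance(scores)  # population variance: mean squared deviation
std_dev = statistics.pstdev(scores)      # population standard deviation: its square root
value_range = max(scores) - min(scores)  # highest score minus lowest score

print(variance, std_dev, value_range)
```

Here the mean is 5, the squared deviations sum to 32 over 8 values, so the variance is 4 and the standard deviation is 2.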
Measures of Noncentral Location. These include the quantiles (i.e. percentiles, deciles, quartiles). The word percent means "per hundred". Therefore, in using percentages, size is standardized by calculating the number of individuals who would be in a given category if the total number of cases were 100 and the proportion in each category remained unchanged. Since proportions must add to unity, percentages will sum to 100 as long as the categories are mutually exclusive and exhaustive.
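Quartiles and deciles can be computed with the standard-library statistics.quantiles function; the data set below is invented, and the function's default "exclusive" interpolation method is assumed.

```python
# Quartiles and deciles as measures of noncentral location.
import statistics

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

quartiles = statistics.quantiles(data, n=4)  # 3 cut points: Q1, Q2 (median), Q3
deciles = statistics.quantiles(data, n=10)   # 9 cut points dividing the data into tenths

print(quartiles)
```

For these ten values the quartile cut points are 2.75, 5.5, and 8.25; note that n cut points divide the data into n + 1 equal groups, so quantiles(..., n=4) returns three values.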
Measures of Symmetry and/or Asymmetry. These
include the skewness of the frequency distribution. If a
distribution is asymmetrical and the larger frequencies
tend to be concentrated toward the low end of the
variable and the smaller frequencies toward the high
end, it is said to be positively skewed. If the opposite
holds, the larger frequencies being concentrated
toward the high end of the variable and the smaller
frequencies toward the low end, the distribution is said
to be negatively skewed.
Measures of Peakedness or Flatness. The peakedness or flatness of one distribution in relation to another is referred to as kurtosis. If one distribution is more peaked than another, it may be spoken of as more leptokurtic; if it is less peaked, it is said to be platykurtic.
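Skewness and kurtosis can be sketched from first principles as the third and fourth standardized moments; this uses population moments and invented data, and is a minimal illustration rather than a production implementation.

```python
# Moment-based skewness and kurtosis, computed without external libraries.
import statistics

def skewness(data):
    # Third standardized moment: positive when the long tail is toward the high end.
    m = statistics.fmean(data)
    s = statistics.pstdev(data)
    n = len(data)
    return sum((x - m) ** 3 for x in data) / (n * s ** 3)

def kurtosis(data):
    # Fourth standardized moment: about 3 for a normal curve; larger values
    # are leptokurtic (more peaked), smaller values platykurtic (flatter).
    m = statistics.fmean(data)
    s = statistics.pstdev(data)
    n = len(data)
    return sum((x - m) ** 4 for x in data) / (n * s ** 4)

right_skewed = [1, 1, 2, 2, 3, 10]
print(skewness(right_skewed))  # positive: frequencies concentrated at the low end
```

A perfectly symmetric data set such as [1, 2, 3] has skewness 0.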
Nonparametric Tests
Two types of data are recognized in the application of statistical treatments: parametric data and nonparametric data.

Parametric data are measured data, and parametric statistical tests assume that the data are normally, or nearly normally, distributed.

Nonparametric data are distribution-free samples, which implies that they are free of, or independent of, the population distribution.

The tests on these data do not rest on the more stringent assumption of a normally distributed population.
Kolmogorov-Smirnov test – fulfills the function of chi-
square in testing goodness of fit and of the Wilcoxon
rank sum test in determining whether random samples
are from the same population.
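A sketch of the two-sample Kolmogorov-Smirnov statistic: the largest vertical distance between the two empirical distribution functions. The samples are invented; a real analysis would also compare the statistic against a critical value, which is omitted here.

```python
# Two-sample Kolmogorov-Smirnov statistic from first principles.
def ks_statistic(sample1, sample2):
    s1, s2 = sorted(sample1), sorted(sample2)

    def ecdf(sorted_sample, x):
        # Empirical distribution function: fraction of the sample <= x.
        return sum(1 for v in sorted_sample if v <= x) / len(sorted_sample)

    # The maximum gap between the two ECDFs occurs at an observed value.
    points = s1 + s2
    return max(abs(ecdf(s1, x) - ecdf(s2, x)) for x in points)

a = [1, 2, 3, 4, 5]
b = [6, 7, 8, 9, 10]
print(ks_statistic(a, b))  # completely separated samples give the maximum gap
```

Identical samples give a statistic of 0, and completely non-overlapping samples give 1.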

Sign Test – is important in determining the significance of differences between two correlated samples. The "signs" of the test are the algebraic plus or minus values of the differences of the paired scores.
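The sign test can be sketched by counting the signs of the paired differences and comparing the count against a binomial model with p = 0.5; the before/after scores below are invented, and tied pairs are dropped, as is conventional.

```python
# A minimal sign test for two correlated (paired) samples.
from math import comb

def sign_test_p(before, after):
    # Keep only nonzero differences, count the plus signs, and compute a
    # two-sided p-value from the binomial distribution with p = 0.5.
    diffs = [b - a for b, a in zip(before, after) if b != a]
    n = len(diffs)
    plus = sum(1 for d in diffs if d > 0)
    k = min(plus, n - plus)                              # the rarer sign
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

before = [72, 65, 80, 74, 68, 77, 70, 75]
after = [70, 62, 78, 71, 66, 74, 69, 72]
print(sign_test_p(before, after))  # all eight signs are plus, so p is small
```

With all eight differences positive, the two-sided p-value is 2/256 ≈ 0.008, suggesting a systematic difference between the paired scores.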
Median test – is a sign test for two independent
samples in contradistinction to two correlated samples,
as is the case with the sign test.

Spearman rank order correlation – sometimes called Spearman rho (ρ) or Spearman's rank difference correlation, is a nonparametric statistic that has its counterpart in parametric calculations in the Pearson product moment correlation.
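Spearman rho can be sketched with the rank-difference formula ρ = 1 − 6Σd²/(n(n² − 1)); this simple form assumes no tied ranks, and the two judges' rankings below are invented.

```python
# Spearman rank difference correlation (no tied ranks assumed).
def spearman_rho(x, y):
    def ranks(values):
        # Rank 1 for the smallest value, rank n for the largest.
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))  # squared rank differences
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

judge_a = [1, 2, 3, 4, 5]
judge_b = [2, 1, 4, 3, 5]
print(spearman_rho(judge_a, judge_b))
```

Here the squared rank differences sum to 4, so ρ = 1 − 24/120 = 0.8; identical rankings give ρ = 1.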
Kruskal-Wallis – sometimes known as the Kruskal-Wallis H test of ranks for k independent samples. The H is the name of the test statistic, and the k stands for the number of classes or samples.

Kendall coefficient of concordance – also known as Kendall's Coefficient W or the concordance coefficient, it is a technique which can be used with advantage in studies involving rankings made by independent judges.
Mann-Whitney U-test – in nonparametric statistics it is
the counterpart of the t-test in parametric
measurements. It may find use in determining whether
the medians of two independent samples differ from
each other to a significant degree.
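The U statistic can be sketched by counting, over all cross-pairs, how often a value from one sample exceeds a value from the other (ties counting one half); the two groups below are invented, and the comparison against a critical value is omitted.

```python
# Mann-Whitney U statistic from the pairwise-comparison definition.
def mann_whitney_u(sample1, sample2):
    # U1: number of (a, b) pairs with a from sample1 beating b from sample2.
    u1 = sum(
        1.0 if a > b else 0.5 if a == b else 0.0
        for a in sample1
        for b in sample2
    )
    u2 = len(sample1) * len(sample2) - u1  # the complementary count
    return min(u1, u2)  # the smaller U is compared with the critical value

group1 = [3, 4, 2, 6, 2, 5]
group2 = [9, 7, 5, 10, 6, 8]
print(mann_whitney_u(group1, group2))
```

A small U (here 2.0 out of a possible 36 pairs) indicates that the two groups' medians are far apart.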

Wilcoxon matched pairs signed rank test – is employed to determine whether two samples differ from each other to a significant degree when there is a relationship between the samples.
Wilcoxon rank sum test – may be used in those
nonparametric situations where measures are
expressed as ranked data in order to test the
hypothesis that the samples are from a common
population whose distribution of the measures is the
same as that of the samples.
Appropriate Statistical Methods Based on Research Problem and Levels of Measurement
The statistical methods appropriate to any studies are
always determined by the research problem and the
measurement scale of the variables used in the study.
Chi-square – the most commonly used nonparametric test. It is employed in instances where a comparison between the observed and theoretical frequencies is needed, or in testing the mathematical fit of a frequency curve to an observed frequency distribution.
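The chi-square statistic is the sum of (O − E)²/E over all categories; the die-roll frequencies below are invented, and the lookup of the critical value for the degrees of freedom is omitted.

```python
# Chi-square goodness-of-fit statistic: sum of (O - E)^2 / E.
def chi_square(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 60 rolls of a die: observed counts per face vs. an expected 10 per face.
observed = [8, 9, 10, 11, 12, 10]
expected = [10, 10, 10, 10, 10, 10]
print(chi_square(observed, expected))
```

A statistic near 0 means the observed frequencies fit the theoretical ones closely; here it is 1.0, well under any usual critical value for 5 degrees of freedom.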
T-tests
Provides the capability of computing Student's t and probability levels for testing whether or not the difference between two sample means is significant. This type of analysis is a comparison of two subjects, with the group means as the basis for comparison.

Independent samples t-test – cases are classified into 2 groups and a test of mean differences is performed for specific variables.

Paired samples t-test – for paired observations arranged casewise, a test of treatment effects is performed.
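The independent-samples t statistic can be sketched with the equal-variances (pooled) formula; the two groups below are invented, and the comparison of t against the critical value for n1 + n2 − 2 degrees of freedom is omitted.

```python
# Student's t for two independent samples, assuming equal variances.
import statistics

def independent_t(sample1, sample2):
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = statistics.fmean(sample1), statistics.fmean(sample2)
    v1, v2 = statistics.variance(sample1), statistics.variance(sample2)
    # Pool the two sample variances, weighted by degrees of freedom.
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    # t = difference of means over the standard error of that difference.
    return (m1 - m2) / (pooled * (1 / n1 + 1 / n2)) ** 0.5

group1 = [5, 7, 5, 3, 5, 3, 3, 9]
group2 = [8, 1, 4, 6, 6, 4, 1, 2]
print(independent_t(group1, group2))
```

Here the group means are 5 and 4; the small t (about 0.85) would not reach significance at conventional levels.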

Correlation Analysis
Correlation is used when one is interested in the relationship between two or more paired variables. This is where interest is focused primarily on the exploratory task of finding out which variables are related to a given variable.
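The Pearson product moment correlation mentioned earlier as the parametric counterpart of Spearman rho can be sketched as the covariance of the pairs divided by the product of the standard deviations; the hours/score data are invented.

```python
# Pearson product moment correlation coefficient r.
import statistics

def pearson_r(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # co-variation of the pairs
    sx = sum((a - mx) ** 2 for a in x) ** 0.5              # spread of x
    sy = sum((b - my) ** 2 for b in y) ** 0.5              # spread of y
    return cov / (sx * sy)

hours = [1, 2, 3, 4, 5]
score = [2, 4, 6, 8, 10]
print(pearson_r(hours, score))  # a perfectly linear relationship
```

r ranges from −1 (perfect negative relationship) through 0 (no linear relationship) to +1 (perfect positive relationship); the perfectly linear pairs above give r = 1.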
References
https://www.slideshare.net/teppxcrown98/analysis-and-interpretation-of-data
