Parametric and nonparametric tests

• Parametric and nonparametric statistical tests are two broad categories of statistical
procedures that differ in the assumptions they make about the distribution of the data.
Parametric tests

• These tests assume that the data come from a population with a known distribution and
that the data are normally distributed.
• Parametric tests are commonly used in many research areas.
• However, their results can be misleading if the normality assumption is not met (a quick
check is sketched below).
• Examples of parametric tests include Student's t-test and the F-test.
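A minimal sketch of such a normality check, using the Shapiro-Wilk test from Python's scipy.stats; the choice of test and the sample values here are illustrative assumptions, not part of the original slides' method:

```python
# Minimal sketch: quick normality check before choosing a parametric test.
# The sample values are invented purely for illustration.
from scipy import stats

sample = [4.2, 5.1, 4.8, 5.5, 4.9, 5.0, 4.7, 5.3, 4.6, 5.2]

# Shapiro-Wilk test: the null hypothesis is that the data come from a normal distribution.
stat, p_value = stats.shapiro(sample)

if p_value < 0.05:
    print(f"p = {p_value:.3f}: normality is doubtful; consider a nonparametric test")
else:
    print(f"p = {p_value:.3f}: no strong evidence against normality")
```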


Nonparametric tests

• These tests do not assume anything about the underlying distribution of the data.
• Nonparametric tests are often used for categorical data or continuous data that is not normally
distributed.
• They can also handle ordinal and ranked data, and are less affected by outliers than parametric
tests.
• Examples of nonparametric tests include the Mann-Whitney U test (also called the
Mann-Whitney-Wilcoxon, MWW, test) and the Wilcoxon signed-rank test.
• The Chi-square test is another nonparametric test that is used to analyze group
differences. Short sketches of these tests follow below.
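A minimal sketch of the nonparametric tests named above, using Python's scipy.stats; all data values are made up for illustration:

```python
# Minimal sketch of the nonparametric tests mentioned above (scipy.stats).
# All data values are invented purely for illustration.
from scipy import stats

# Mann-Whitney U (Mann-Whitney-Wilcoxon) test: two independent samples.
group_a = [12, 15, 11, 19, 14, 13, 22, 16]
group_b = [18, 25, 17, 21, 24, 20, 23, 26]
u_stat, u_p = stats.mannwhitneyu(group_a, group_b)
print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {u_p:.3f}")

# Wilcoxon signed-rank test: two related (paired) samples.
before = [10, 12, 9, 14, 11, 13, 10, 15]
after  = [12, 13, 11, 16, 12, 15, 11, 18]
w_stat, w_p = stats.wilcoxon(before, after)
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {w_p:.3f}")

# Chi-square test of independence on a 2x2 contingency table of counts.
table = [[30, 10],
         [20, 25]]
chi2, chi_p, dof, expected = stats.chi2_contingency(table)
print(f"Chi-square: chi2 = {chi2:.2f}, dof = {dof}, p = {chi_p:.3f}")
```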
Difference between Parametric and Non-parametric tests
t-Test
What Is a T-Test?

• A t-test is an inferential statistical test used to determine whether there is a significant
difference between the means of two groups and how they are related.
• T-tests are used when the data sets follow a normal distribution and have unknown
variances.
• Ex: the set of outcomes recorded from flipping a coin 100 times (approximately normal
for this many flips).
• The t-test is used for hypothesis testing in statistics; it uses the t-statistic, the
t-distribution, and the degrees of freedom to determine statistical significance.
Understanding the T-Test

• A t-test compares the average values of two data sets and determines whether they are
likely to have come from the same population.
• A sample of students from class A and a sample of students from class B would likely
not have the same mean and standard deviation.
• Similarly, samples taken from the placebo-fed control group and those taken from the
drug-treated group should have slightly different means and standard deviations.
• Mathematically, the t-test takes a sample from each of the two sets and establishes the
problem statement. It assumes a null hypothesis that the two means are equal.
Independent sample t-test
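A minimal sketch of an independent (two-sample) t-test, assuming Python's scipy.stats and two made-up samples standing in for the class A and class B scores described above:

```python
# Minimal sketch: independent two-sample t-test (scipy.stats.ttest_ind).
# Scores below stand in for "class A" and "class B" and are invented for illustration.
from scipy import stats

class_a = [72, 85, 78, 90, 66, 81, 74, 88]
class_b = [68, 75, 70, 82, 64, 73, 69, 77]

# Null hypothesis: the two population means are equal.
# equal_var=False gives Welch's t-test, which does not assume equal variances.
t_stat, p_value = stats.ttest_ind(class_a, class_b, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the means differ significantly.")
else:
    print("Fail to reject the null hypothesis.")
```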
Paired sample t-test
• A paired sample t-test compares the means of two measurements taken from the same
sample, individual, object, or experimental unit (a sketch follows this list).
• These paired measurements can represent things like:
• A measurement taken at two different times (e.g., pre- and post-test scores with an
intervention administered between the two time points)
• A measurement taken under two different conditions (e.g., completing a test under a
“control” condition and an “experimental” condition)
• Measurements taken from two halves or sides of a subject or experimental unit
(e.g., measuring hearing loss in a subject’s right and left ears)
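A minimal sketch of a paired sample t-test on made-up pre- and post-test scores for the same individuals, assuming Python's scipy.stats:

```python
# Minimal sketch: paired sample t-test (scipy.stats.ttest_rel).
# Pre/post scores are invented; each pair belongs to the same individual.
from scipy import stats

pre_scores  = [55, 60, 48, 72, 66, 58, 63, 70]
post_scores = [61, 64, 50, 78, 70, 63, 66, 75]

# Null hypothesis: the mean difference between paired measurements is zero.
t_stat, p_value = stats.ttest_rel(pre_scores, post_scores)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```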
One-way ANOVA
• Using the formulas, values are calculated and compared against critical values from the
relevant distribution. The assumed null hypothesis is accepted or rejected accordingly.
If the null hypothesis is rejected, it indicates that the observed difference is probably
not due to chance.
• The t-test is just one of many tests used for this purpose. Statisticians use additional
tests other than the t-test to examine more variables and larger sample sizes.
• For a large sample size, statisticians use a z-test. Other testing options include the
chi-square test and the F-test; a one-way ANOVA sketch follows below.
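One-way ANOVA extends this comparison of means to more than two groups. A minimal sketch using scipy.stats.f_oneway, with made-up data for three groups:

```python
# Minimal sketch: one-way ANOVA across three groups (scipy.stats.f_oneway).
# Group values are invented purely for illustration.
from scipy import stats

group_1 = [23, 25, 21, 27, 24, 26]
group_2 = [30, 28, 32, 29, 31, 33]
group_3 = [22, 24, 23, 25, 21, 26]

# Null hypothesis: all group means are equal.
f_stat, p_value = stats.f_oneway(group_1, group_2, group_3)

print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("At least one group mean differs from the others.")
else:
    print("No significant difference among the group means.")
```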
