Assumptions

The document outlines various research objectives and the assumptions necessary for statistical tests, including normality, homoscedasticity, independence, and sphericity. It details methods for testing these assumptions, such as the Shapiro-Wilk and Kolmogorov-Smirnov tests for normality, and discusses the implications of violating these assumptions on data analysis. Additionally, it emphasizes the importance of sample size, outliers, and multicollinearity in ensuring valid statistical results.

Research Objectives
Comparison
Directional
Descriptive
Causality

Data Type and Distribution

How many IVs and DVs?

Is it continuous or categorical?
Is the distribution normal or skewed?
Does the data pass the assumptions of the presumptive test to be used?

Normality Assumption
Homoscedasticity
Independence Assumption
Scale
Sample Size
Outliers
Multicollinearity
Sphericity
Normality Assumption
Description: The data should follow a normal distribution.

Measurement: You can check for normality using visual methods (e.g., histogram, Q-Q plot) and statistical tests (e.g., Shapiro-Wilk, Kolmogorov-Smirnov).

Decision Making:
• If the data significantly deviate from normality (p < 0.05), consider non-parametric tests or data transformation.
• If the deviation is not severe, parametric tests may still be appropriate, as they are robust to violations with large sample sizes.
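The decision rule above can be sketched in Python using SciPy's Shapiro-Wilk test; the sample data and seed here are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.normal(loc=50, scale=10, size=40)  # simulated sample, n = 40

# Shapiro-Wilk: the null hypothesis is that the data are normally distributed
stat, p = stats.shapiro(data)
if p < 0.05:
    print(f"p = {p:.3f}: deviates from normality; consider a non-parametric test")
else:
    print(f"p = {p:.3f}: no evidence against normality; parametric tests are reasonable")
```

Note that a visual check (histogram, Q-Q plot) is still worthwhile, since a p-value alone does not describe how the data deviate.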
Normality Assumption
Shapiro-Wilk Test:

When to Use: The Shapiro-Wilk test is often recommended for smaller to moderately sized samples (typically, n < 50).

Strengths: It is considered one of the most powerful tests for normality when sample sizes are small to moderate.

Weaknesses: It can be less reliable for very large samples, where even trivial departures from normality can produce significant results.
Normality Assumption
Anderson-Darling Test:

When to Use: The Anderson-Darling test is often recommended for moderately sized to larger samples (typically, n > 50).

Strengths: It is a more general test that can detect departures from normality in the tails of the distribution, making it useful for detecting both skewness and heavy tails.
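As a sketch of the tail sensitivity described above, the example below runs SciPy's Anderson-Darling test on a deliberately heavy-tailed (Student's t) sample; the data are simulated for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.standard_t(df=3, size=200)  # heavy-tailed sample

# Anderson-Darling weights the tails of the empirical distribution,
# so heavy tails like these are likely to be flagged.
result = stats.anderson(data, dist='norm')
crit_5pct = result.critical_values[2]  # entry for the 5% significance level
print("A^2 =", result.statistic, "5% critical value =", crit_5pct)
if result.statistic > crit_5pct:
    print("Reject normality at the 5% level")
```

Unlike Shapiro-Wilk, `scipy.stats.anderson` reports a statistic and a table of critical values rather than a p-value.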
Normality Assumption
Kolmogorov-Smirnov Test:

When to Use: The Kolmogorov-Smirnov test is commonly used for testing normality when dealing with large samples.

Strengths: It is easy to compute and widely available in statistical software packages. It is also suitable for comparing data against other specific distributions, not just normality.

Weaknesses: It may have lower power than the other tests, especially for smaller samples. It is also sensitive to departures from normality in the center of the distribution.
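A minimal sketch of a one-sample Kolmogorov-Smirnov normality check, on simulated data. One caveat worth making explicit: the KS test compares against a fully specified distribution, so the sample is standardized first (estimating the parameters from the same data makes the test conservative; Lilliefors' correction addresses this):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(100, 15, size=500)  # large simulated sample

# Standardize so the comparison is against the standard normal N(0, 1)
z = (data - data.mean()) / data.std(ddof=1)
stat, p = stats.kstest(z, 'norm')
print(f"D = {stat:.3f}, p = {p:.3f}")
```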
Homogeneity of Variance (Homoscedasticity)
Description: The variances of different groups should be roughly equal.

Measurement: Levene's test or Bartlett's test for homogeneity of variances.

Decision Making:
If the variances are significantly different (p < 0.05), consider using tests that do not assume equal variances (e.g., Welch's t-test) or data transformation.
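The Levene-then-Welch decision path above can be sketched as follows; the two groups are simulated with deliberately unequal spread:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
group_a = rng.normal(10, 1, size=30)
group_b = rng.normal(10, 5, size=30)  # deliberately larger spread

# Levene's test: the null hypothesis is equal variances across groups
stat, p = stats.levene(group_a, group_b)
if p < 0.05:
    # Unequal variances: Welch's t-test drops the equal-variance assumption
    t, p_welch = stats.ttest_ind(group_a, group_b, equal_var=False)
    print(f"Levene p = {p:.3f} -> using Welch's t-test (p = {p_welch:.3f})")
else:
    print(f"Levene p = {p:.3f} -> equal variances plausible")
```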
Independence Assumption

Description: Observations within and between groups must be independent.

Measurement: Ensure that each data point is not influenced by or related to any other data point.

Decision Making:
Design the study in a way that ensures independence. Randomized control trials and independent samples are common strategies.
Scale
Description: Data should be measured on an interval or ratio scale.

Measurement: Ensure that the data have equal intervals between values (and, for ratio scales, a meaningful zero point).

Decision Making:
If data are not on an interval or ratio scale, consider using non-parametric tests or transforming the data to meet this assumption.
Sample Size

Description: Sample sizes should be large enough for the statistical test to be valid.

Measurement:
Consult sample size guidelines specific to the chosen parametric test (e.g., t-test, ANOVA).
Outliers

Description: Extreme values or outliers in the data can influence the results.

Measurement:
Visual inspection (box plots, scatterplots)
Statistical tests (e.g., Grubbs' test, Z-score)

Grubbs' Test: A common significance threshold is 0.05. If the p-value is less than 0.05, you may consider the data point an outlier. For Z-scores, values more than about 3 standard deviations from the mean are often flagged.
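A minimal hand-rolled sketch of the one-outlier Grubbs test described above (the data values are invented for illustration, and the critical value uses the standard t-distribution formula):

```python
import numpy as np
from scipy import stats

def grubbs_test(x, alpha=0.05):
    """Grubbs test for a single outlier (assumes roughly normal data)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mean, sd = x.mean(), x.std(ddof=1)
    G = np.max(np.abs(x - mean)) / sd  # most extreme standardized deviation
    # Critical value based on the t distribution (two-sided form)
    t2 = stats.t.ppf(1 - alpha / (2 * n), n - 2) ** 2
    G_crit = (n - 1) / np.sqrt(n) * np.sqrt(t2 / (n - 2 + t2))
    return G, G_crit, G > G_crit

data = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 11.7, 12.3, 12.1, 25.0]
G, G_crit, is_outlier = grubbs_test(data)
print(f"G = {G:.2f}, critical = {G_crit:.2f}, outlier detected: {is_outlier}")
```

Note that the raw |z| > 3 rule can miss outliers in small samples, because a single extreme value inflates the standard deviation; Grubbs' test accounts for the sample size.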
Multicollinearity

Description: Independent variables should not be highly correlated with each other.

Measurement:
Correlation matrix
Variance Inflation Factor (VIF)

Variance Inflation Factor (VIF): A VIF greater than 5 or 10 is often used as a threshold to indicate multicollinearity, but there is no strict p-value associated with it.
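The VIF can be computed directly from its definition, VIF_j = 1 / (1 - R²), where R² comes from regressing predictor j on the remaining predictors. A NumPy sketch, with two deliberately near-collinear simulated predictors:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + rng.normal(scale=0.3, size=n)  # nearly collinear with x1
x3 = rng.normal(size=n)                          # independent predictor
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    """VIF_j = 1 / (1 - R^2) from regressing column j on the others."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])  # add intercept
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - resid.var() / y.var()
    return 1 / (1 - r2)

for j in range(X.shape[1]):
    print(f"VIF for x{j+1}: {vif(X, j):.2f}")
```

With this setup, x1 and x2 exceed the VIF > 5 threshold while x3 stays near 1.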
Sphericity

Description: The variances of the differences between all possible pairs of within-subject conditions should be equal.

Measurement:
Mauchly's test
Greenhouse-Geisser correction

Mauchly's Test: A significance level of 0.05 is commonly used. If the p-value is less than 0.05, you may conclude that the assumption of sphericity has been violated.
Sphericity Example
Scenario: A researcher is interested in studying the effects of different types of relaxation techniques (e.g., meditation, deep breathing, progressive muscle relaxation) on reducing anxiety levels. Participants are measured for their anxiety levels under three different relaxation techniques.

Analysis: The researcher wants to perform a repeated measures ANOVA to examine if there are statistically significant differences in anxiety levels across the relaxation techniques (within-subject factor: Technique) and whether these differences change over time (within-subject factor: Time).

Sphericity: To check for sphericity, the researcher performs Mauchly's test of sphericity. The test yields a p-value of 0.03, indicating that the assumption of sphericity has been violated (p-value < 0.05).

Solution: Since sphericity has been violated, the researcher would apply a correction to the degrees of freedom, such as the Greenhouse-Geisser or Huynh-Feldt correction, before conducting the repeated measures ANOVA. These corrections adjust the degrees of freedom and help ensure that the F-statistic is valid and that the Type I error rate is controlled appropriately.

In this example, the violation of sphericity suggests that the variances of the anxiety level differences between the relaxation techniques and time points are not equal. Applying the appropriate correction allows for a more accurate assessment of the significance of the relaxation technique and time effects on anxiety levels.
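As a sketch of what the correction actually does, the Greenhouse-Geisser epsilon can be computed from the covariance matrix of the within-subject conditions and then used to shrink both ANOVA degrees of freedom. The anxiety scores below are simulated, with one condition given a larger spread to mimic a sphericity violation:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 20, 3  # 20 participants, 3 relaxation techniques
subject = rng.normal(scale=2.0, size=(n, 1))  # per-subject baseline
scores = subject + rng.normal(scale=[1.0, 1.0, 3.0], size=(n, k))  # unequal spread

S = np.cov(scores, rowvar=False)        # k x k covariance of the conditions
C = np.eye(k) - np.ones((k, k)) / k     # centering matrix
Sc = C @ S @ C                          # double-centered covariance
eps = np.trace(Sc) ** 2 / ((k - 1) * np.trace(Sc @ Sc))
print(f"Greenhouse-Geisser epsilon = {eps:.3f}")

# The correction multiplies both ANOVA degrees of freedom by epsilon
df1, df2 = eps * (k - 1), eps * (k - 1) * (n - 1)
print(f"Corrected df: ({df1:.2f}, {df2:.2f})")
```

Epsilon ranges from 1/(k-1) (maximal violation) to 1 (sphericity holds); smaller epsilon means a harsher reduction in degrees of freedom and thus a more conservative F-test.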