
TYPE I & TYPE II ERROR

ARUBA MALIK
OUTLINE

Hypotheses

Types of Hypotheses

Type I Error

Type II Error

Statistical Power

HYPOTHESES
Hypotheses

Dictionary Meaning
• An assumption or concession made for the sake of argument
• An interpretation of a practical situation or condition taken
  as the ground for action
• A tentative assumption made in order to draw out and test its
  logical or empirical consequences
• The antecedent clause of a conditional statement

Types Of Hypotheses
• Research Hypothesis
• Null Hypothesis
• Alternative Hypothesis
• Directional Hypothesis
• Non-Directional Hypothesis
• Statistical Hypothesis
• Empirical Hypothesis
• Simple Hypothesis
• Complex Hypothesis

TYPE I ERROR
Type I Error

• A Type I error occurs when the null hypothesis (H0) is true
  but is rejected.
• It is also called a false positive (or a falsehood).
• The rate of Type I error is called the size of the test and
  is denoted by the Greek letter alpha (α).
• It usually equals the significance level of the test.
• If the Type I error rate is fixed at 5%, there are about 5
  chances in 100 that we will reject H0 when H0 is in fact true
  (a minimal simulation of this is sketched below).
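A minimal simulation sketch (not part of the original slides; the sample size, seed, and choice of test are illustrative assumptions) showing that, when H0 is true, a test run at α = 0.05 rejects H0 about 5% of the time:

# Minimal sketch: Type I error rate when H0 is actually true.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05                     # significance level (size of the test)
n_simulations = 10_000
false_positives = 0

for _ in range(n_simulations):
    # H0 is true: the population mean really is 0.
    sample = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value < alpha:          # rejecting a true H0 is a Type I error
        false_positives += 1

print(f"Observed Type I error rate: {false_positives / n_simulations:.3f}")
# Expected to land near alpha, i.e. about 0.05.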

TYPE II ERROR
Type II Error

• A Type II error occurs when the null hypothesis (H0) is false
  but erroneously fails to be rejected.
• It amounts to accepting a hypothesis that should have been
  rejected, i.e. failing to believe a truth.
• It occurs when one fails to reject the null hypothesis even
  though the alternative hypothesis is true.
• Its rate is denoted by the Greek letter beta (β) and relates
  to the power of the test, which equals 1 − β (a simulation
  sketch follows below).
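A companion sketch (illustrative assumptions only: the true effect, sample size, and seed are made up) estimating β, the Type II error rate, when H0 is in fact false:

# Minimal sketch: estimating beta (Type II error rate) when H0 is false.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05
true_mean = 0.4                  # H0 (mean = 0) is actually false here
n_simulations = 10_000
misses = 0

for _ in range(n_simulations):
    sample = rng.normal(loc=true_mean, scale=1.0, size=30)
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value >= alpha:         # failing to reject a false H0 is a Type II error
        misses += 1

beta = misses / n_simulations
print(f"Estimated beta: {beta:.3f}  (power = 1 - beta = {1 - beta:.3f})")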

                            Null hypothesis is true     Null hypothesis is false

Reject null hypothesis      Type I error                Correct outcome
                            (false positive)            (true positive)

Fail to reject null         Correct outcome             Type II error
hypothesis                  (true negative)             (false negative)
Type I and Type II Error
[Figure: graphical illustration of Type I and Type II errors]
POWER
Power

• Power is the ability to correctly reject a null hypothesis
  that is indeed false.
• Power is the probability that a study makes the correct
  decision, i.e. detects an effect when one truly exists.
• The power of a statistical test depends on the level of
  significance set by the researcher, the sample size, and the
  effect size, i.e. the extent to which the groups differ as a
  result of the treatment.
• Power is strongly associated with sample size; when the
  sample size is large, power is generally not an issue (see
  the sketch below).
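As a rough illustration (not from the original slides; the effect size, group sizes, and α are assumed values), statsmodels' TTestIndPower shows how the power of a two-sample t-test grows with sample size:

# Minimal sketch: power of a two-sample t-test as sample size grows.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.5                # assumed medium standardized effect (Cohen's d)
alpha = 0.05

for n_per_group in (10, 20, 50, 100):
    power = analysis.power(effect_size=effect_size, nobs1=n_per_group,
                           alpha=alpha, alternative='two-sided')
    print(f"n = {n_per_group:3d} per group -> power = {power:.2f}")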

Power Analysis To Determine Sample Size

• Difference of biological or scientific interest
• Expected variability in the data (the standard deviation of
  the data); together with the difference of interest, this
  defines the effect size of interest
• Power of the study (1 − β)
• Significance level (α)
• Sample size
• Directionality of the effect being examined (one-sided or
  two-sided test)

A worked sample-size calculation using these inputs is sketched below.
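A sketch of such a calculation (all input values are assumptions chosen for illustration), again using statsmodels' TTestIndPower to solve for the per-group sample size of a two-sample t-test:

# Minimal sketch: solving for the per-group sample size of a two-sample t-test.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,             # difference of interest / expected SD (assumed)
    alpha=0.05,                  # significance level
    power=0.80,                  # desired power (1 - beta)
    ratio=1.0,                   # equal group sizes
    alternative='two-sided',     # directionality of the effect
)
print(f"Required sample size per group: {n_per_group:.1f}")   # roughly 64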

Effect Size

• The scientific importance of the difference: this is about
  asking, "What specific difference are we interested in, and
  why does it matter?" For example, if we are studying a new
  medicine, we might want to see a noticeable improvement in
  health – that is the effect size we care about.
• Existing scientific knowledge: this involves looking at past
  research. What have other studies found? What is already
  known about this topic?

Expected Variability In The Data (Standard Deviation Of The
Data): Effect Size Of Interest

• The expected variability in the data is necessarily a
  prediction, and it must be based on previous research or
  pilot studies.
• Taken together, the difference of biological interest and the
  expected variability in the data comprise the effect size
  that a study is focused on evaluating (see the sketch below).
• It is not generally recommended to choose standard effect
  sizes based purely on calculations of standard deviation; the
  effect size of interest should be motivated purely by the
  scientific context of the study.
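For illustration only (the numbers below are assumptions, not values from the slides), a standardized effect size such as Cohen's d combines the difference of interest with the expected variability:

# Minimal sketch: a standardized effect size (Cohen's d) built from the
# difference of interest and the expected standard deviation.
difference_of_interest = 5.0     # e.g. a 5-unit improvement worth detecting (assumed)
expected_sd = 12.0               # expected variability, e.g. from pilot data (assumed)

cohens_d = difference_of_interest / expected_sd
print(f"Effect size (Cohen's d): {cohens_d:.2f}")              # about 0.42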

Power Of The Study (1 − β)

• The desired power of a study affects the necessary sample
  size, because as the sample size increases, the mean of the
  observed values more closely represents the true mean of the
  population.
• Increased power lowers the likelihood of a Type II error.
• Type II error has traditionally not been considered as
  problematic as Type I error, so β values are often tolerated
  at about four times the α value (e.g. β = 0.20 when α = 0.05).

Significance Level (α)

• The significance level of a study refers to the amount of
  Type I error (α) deemed acceptable.
• It is almost always set to 0.05, the conventional threshold
  for p values to be considered significant.
• A result is therefore deemed significant if there is less
  than a 5% probability of observing such a result by chance
  alone when the null hypothesis is true.
• Each statistical test has a critical value that corresponds
  to reaching this level of significance for a given set of
  data (a minimal example is sketched below).
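A minimal example (the test and degrees of freedom are assumed for illustration) of retrieving the critical value that a test statistic must exceed at α = 0.05:

# Minimal sketch: critical value of a two-sided one-sample t-test at alpha = 0.05,
# assuming a sample of 30 observations (29 degrees of freedom).
from scipy import stats

alpha = 0.05
df = 29
critical_value = stats.t.ppf(1 - alpha / 2, df)                # two-sided cutoff
print(f"Reject H0 if |t| > {critical_value:.3f}")              # about 2.045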

Directionality Of The Effect Being Examined

• Two-sided tests are generally applied to look for
  bi-directional differences, even though this means a higher
  threshold to reach significance (a higher critical value).
• While it may be beneficial to restrict some study designs to
  a one-sided analysis, this can limit the ability to compare
  such studies with analogous two-sided studies (the sketch
  below compares the two thresholds).
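A small sketch (standard normal approximation, purely illustrative) comparing the one-sided and two-sided thresholds at α = 0.05:

# Minimal sketch: one-sided vs. two-sided critical values at alpha = 0.05.
from scipy import stats

alpha = 0.05
one_sided = stats.norm.ppf(1 - alpha)        # about 1.645
two_sided = stats.norm.ppf(1 - alpha / 2)    # about 1.960
print(f"One-sided cutoff: {one_sided:.3f}, two-sided cutoff: {two_sided:.3f}")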

Statistical Power

Statistical power is determined by:

• Size of the effect: larger effects are more easily detected.
• Measurement error: systematic and random errors in the
  recorded data reduce power.
• Sample size: larger samples reduce sampling error and
  increase power.
• Significance level: increasing the significance level
  increases power.

Is A Type I Or Type II Error Worse?

For statisticians, a Type I error is usually worse. In practical
terms, however, either type of error could be worse depending on
your research context.

A Type I error means mistakenly going against the main statistical
assumption of a null hypothesis. This may lead to new policies,
practices, or treatments that are inadequate or a waste of resources.

In contrast, a Type II error means failing to reject a null
hypothesis. It may only result in missed opportunities to innovate,
but these missed opportunities can also have important practical
consequences.

References
1. National Center for Biotechnology Information. (2020). Introduction to study
   designs - intervention studies and observational studies. In Health outcomes
   research. National Library of Medicine, National Institutes of Health.
   https://www.ncbi.nlm.nih.gov/books/NBK557530/

2. GraphPad Software. (n.d.). How to use power analysis to determine the
   appropriate sample size of a study. Retrieved October 27, 2024, from
   https://www.graphpad.com/support/faq/how-to-use-power-analysis-to-determine-the-appropriate-sample-size-of-a-study/

3. Scribbr. (2020, May 7). Type I and type II errors: What they are and how to
   avoid them. https://www.scribbr.com/statistics/type-i-and-type-ii-errors/

4. Black, S. A., & Brighton, S. M. (2012). The phantom thrombophlebitis: A new
   classification of varicose veins. Phlebology, 27(1), 39-42.
   https://doi.org/10.1258/phleb.2012.012j04

5. Khan Academy. (2012, October 4). Statistical power, Type I and Type II errors
   [Video]. YouTube. https://www.youtube.com/watch?v=Hdbbx7DIweQ