PRP1001-JXH1003 Research Methods 1 - Lecture notes - Week 11
Dr Simone Calabrich
Class Notes
Week 11
Replication Crisis
Meta-Analysis
Meta-analysis helps to reconcile inconsistent findings across studies and to
uncover trends that might not be apparent in individual studies.
Effect sizes are crucial for meta-analyses as they provide a common metric
that allows for the comparison and aggregation of results from different studies. They
give context to statistical significance, helping to discern whether a result, while
statistically significant, is practically meaningful.
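To make the aggregation step concrete, here is a minimal sketch of a fixed-effect meta-analysis, in which each study's effect size is weighted by the inverse of its variance so that more precise studies count for more. The study values are invented purely for illustration.

import numpy as np

# Hypothetical effect sizes (Cohen's d) and variances from five studies.
effects = np.array([0.41, 0.28, 0.55, 0.10, 0.36])
variances = np.array([0.05, 0.08, 0.04, 0.10, 0.06])

# Fixed-effect meta-analysis: weight each study by the inverse of its
# variance, so more precise studies pull the pooled estimate harder.
weights = 1 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

print(f"Pooled effect: {pooled:.3f} (SE = {pooled_se:.3f})")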
Publication Bias
Publication bias occurs when the outcome of a study influences its likelihood
of being published. Studies with positive or significant findings are more likely to be
published than those with non-significant or negative results.
The push for open science practices, such as the publication of all research
regardless of findings, can help mitigate this bias in the future.
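A small simulation can illustrate why this matters. The sketch below assumes a true effect of d = 0.2 and small samples of 20 per group, "publishes" only the studies that reach p < .05, and shows that the average published effect is substantially inflated relative to the truth. All numbers are invented for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_d, n, n_studies = 0.2, 20, 10_000

published = []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(true_d, 1.0, n)
    _, p = stats.ttest_ind(treatment, control)
    # Cohen's d for this simulated study
    pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
    d = (treatment.mean() - control.mean()) / pooled_sd
    if p < 0.05:                      # only "significant" studies get published
        published.append(d)

print(f"True effect: d = {true_d}")
print(f"Mean published effect: d = {np.mean(published):.2f}")  # noticeably inflated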
Effect Sizes
Effect sizes are particularly important because p-values alone do not provide
information about the practical significance of an effect. A p-value only indicates the
probability of obtaining results at least as extreme as those observed, assuming the
null hypothesis (i.e., no effect) is true. By contrast, effect sizes provide a
standardised measure of the magnitude of an effect, making it easier to compare and
interpret results across different studies or contexts.
There are various types of effect sizes, each appropriate for different research
designs and statistical analyses. Some common effect sizes include Cohen's d (the
standardised difference between two group means), Pearson's r (the strength and
direction of a linear relationship), eta-squared (the proportion of variance
explained), and the odds ratio (for categorical outcomes).
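As a concrete illustration, here is a minimal sketch of computing Cohen's d for two independent groups using the pooled standard deviation; the scores are made up for the example.

import numpy as np

def cohens_d(x, y):
    """Standardised mean difference between two independent groups."""
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                         (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd

# Invented scores for two groups.
group_a = [5.1, 6.3, 5.8, 7.0, 6.1, 5.5]
group_b = [4.2, 5.0, 4.8, 5.6, 4.4, 5.1]
print(f"Cohen's d = {cohens_d(group_a, group_b):.2f}")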
Confidence Intervals
When a confidence interval contains zero, the observed effect or difference is not
statistically significant at the corresponding level of confidence. In other words,
the interval includes zero as a plausible value for the true parameter, meaning the
data are consistent with there being no effect.
For example, if a confidence interval for the difference in means between two
groups contains zero, it suggests that there is insufficient evidence to
conclude that there is a significant difference between the groups. Similarly, if
a confidence interval for the correlation coefficient includes zero, it indicates
that there is no significant linear relationship between the variables.
It's important to note that a confidence interval containing zero does not
definitively prove the absence of an effect or relationship. It simply suggests
that the observed data do not provide strong evidence to reject the null
hypothesis or support the presence of a statistically significant effect.
On the other hand, a wide confidence interval indicates a lower level of precision
and a broader range of plausible values for the true parameter. This suggests that the
sample data provide less conclusive evidence and that the estimate carries a higher
degree of uncertainty or variability.
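To tie these ideas together, here is a minimal sketch that computes a 95% confidence interval for a difference in two group means and checks whether it contains zero. The data are simulated, and the pooled degrees of freedom are a simplification (Welch's correction would be more careful).

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(10.5, 2.0, 30)   # simulated sample data
group_b = rng.normal(10.0, 2.0, 30)

diff = group_a.mean() - group_b.mean()
se = np.sqrt(group_a.var(ddof=1) / len(group_a) +
             group_b.var(ddof=1) / len(group_b))
df = len(group_a) + len(group_b) - 2
t_crit = stats.t.ppf(0.975, df)        # critical t for a 95% interval
lower, upper = diff - t_crit * se, diff + t_crit * se

print(f"Difference in means: {diff:.2f}")
print(f"95% CI: ({lower:.2f}, {upper:.2f})")
if lower <= 0 <= upper:
    print("Interval contains zero: no significant difference at the 5% level.")
else:
    print("Interval excludes zero: significant difference at the 5% level.")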
Error Bars
In a typical setting, error bars are displayed as vertical lines extending above and
below a point estimate on a graph. The length of the error bars corresponds to the
width of the confidence interval. If the confidence interval is narrow, the error bars
will be short, indicating a more precise estimate. Conversely, if the confidence
interval is wide, the error bars will be long, indicating a larger degree of uncertainty in
the estimate.
Interpreting error bars depends on the context and the specific information being
conveyed. Here are a few common interpretations of error bars:
Overlapping error bars: When error bars from different groups or conditions
overlap, this is often read as suggesting that there is no statistically significant
difference between the groups at the chosen level of confidence. This interpretation
is only a rough heuristic, however: two estimates can still differ significantly even
when their 95% confidence intervals overlap slightly, so overlap alone does not rule
out a significant difference.
Non-overlapping error bars: When error bars do not overlap, it suggests that
there may be a statistically significant difference between the groups. However,
non-overlapping error bars do not guarantee statistical significance; calculating
p-values or conducting formal hypothesis tests is necessary to determine significance.
It's crucial to note that error bars represent uncertainty and variability; they
do not provide definitive proof of statistical significance. They are a visual tool
to aid in understanding the range of plausible values for the estimated parameter,
and they should be interpreted in conjunction with appropriate statistical analysis
and in light of the context and research question.
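As an illustration, the sketch below plots two group means with error bars whose lengths are the half-widths of 95% confidence intervals; the means and interval widths are invented for the example.

import numpy as np
import matplotlib.pyplot as plt

groups = ["Control", "Treatment"]
means = np.array([10.0, 11.2])          # invented group means
ci_half_widths = np.array([0.9, 1.1])   # distance from each mean to its CI bound

x = np.arange(len(groups))
fig, ax = plt.subplots()
ax.errorbar(x, means, yerr=ci_half_widths, fmt="o", capsize=5)
ax.set_xticks(x)
ax.set_xticklabels(groups)
ax.set_ylabel("Mean score")
ax.set_title("Group means with 95% CI error bars")
plt.show()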
Statistical Power
Statistical power refers to the probability of correctly rejecting the null hypothesis
when it is false. In other words, it measures the ability of a statistical test to detect a
true effect or relationship if it exists in the population.
Power is influenced by several factors, including the significance level (α), effect
size, sample size, and variability in the data. A higher power indicates a greater
likelihood of correctly detecting a true effect, while a lower power suggests an
increased chance of failing to detect a true effect.
Sample size plays a crucial role in determining statistical power. Increasing the
sample size generally leads to higher power, as it provides more information
and reduces the standard error of the estimate. With a larger sample, the estimate of
the effect becomes more precise, making it easier to distinguish a true effect from
random variability.
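For a concrete sense of these relationships, here is a minimal sketch using statsmodels' power calculator for an independent-samples t-test: it solves for the sample size needed to detect a medium effect (d = 0.5) with 80% power, then shows how low power drops with only 20 participants per group.

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group for a medium effect (d = 0.5), alpha = .05, power = .80.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Required n per group: {n_per_group:.1f}")   # roughly 64

# Power actually achieved with only 20 participants per group.
power = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=20)
print(f"Power with n = 20 per group: {power:.2f}")  # roughly 0.33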