
Evaluation of Evidence

Judgement of Causality

By
Tesfaye Shumet
(MPH in Epidemiology)
1 Sunday, January 19, 2025
Session objectives
After this session, students will be able to:
Understand the validity of epidemiological studies
Appreciate the role of chance, bias and confounding in epidemiological studies
Understand possible ways of controlling their roles in your study
Appreciate the concept of disease causation
Apply Hill's criteria for judgements of causality



Evaluation of evidence

Is the observed finding a reflection of the truth?

Observed measures:
Prevalence
Incidence
Relative Risk
Odds Ratio
Hazard Ratio
Are they true (accurate)?

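As an illustrative sketch (hypothetical cohort counts, not from the slides), the relative risk and odds ratio above can be computed directly from a 2x2 table:

```python
# Hypothetical 2x2 table from a cohort study (illustrative numbers only):
#                 Disease   No disease
# Exposed             30          70     (n = 100)
# Unexposed           10          90     (n = 100)
a, b = 30, 70   # exposed: diseased, not diseased
c, d = 10, 90   # unexposed: diseased, not diseased

incidence_exposed = a / (a + b)        # risk among the exposed
incidence_unexposed = c / (c + d)      # risk among the unexposed
relative_risk = incidence_exposed / incidence_unexposed
odds_ratio = (a * d) / (b * c)         # cross-product ratio

print(round(relative_risk, 2), round(odds_ratio, 2))   # 3.0 3.86
```

Note that in a cohort design the relative risk is interpretable directly, while the odds ratio approximates it when the disease is rare.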


The existence of a statistically significant association does not in itself constitute proof of causation.
The observed association could be real or false (artifactual):
A. The association may be the result of chance
B. The association may be the result of bias
C. The association may be the result of a third extraneous variable (confounding)



Judging an observed association
1. Could it be due to selection or measurement bias? - No
2. Could it be due to confounding? - No
3. Could it be a result of chance? - Probably not
4. Could it be causal? - Apply criteria for causation
Validity of epidemiological studies
Validity is the extent to which the data collected actually reflect the truth.
Two types of validity - internal and external.
A. Internal validity - the degree to which the results of the study are correct for the particular group of people studied.
B. External validity (generalizability) - the extent to which the results of the study apply to people not in it.



Judgment about causality should address 2 major
areas:
a) whether the observed association
between exposure and disease is valid
for any individual study
b) whether the totality of evidence taken
from a number of sources supports the
findings of this study
First assess whether, for any individual study, the observed association is valid (check the role of chance, bias and confounding), then assess other supportive evidence.



Evaluation of Evidence
A valid research output has to pass through
judgements in relation to:
Bias
 Whether systematic error has been built into the study
design
Confounding
 Whether a third factor influences the relation between
‘disease’ and ‘cause’
Role of chance
 How likely is it that what we have found is a true finding?

In epidemiologic studies it is essential to


Avoid bias
Control confounding
Undertake accurate replication (to avoid the
effect of chance)
The role of bias
Bias - is any systematic error in the design,
conduct, or analysis of a study that results in a
distorted estimate of what the study is attempting
to measure.
•The key word in the understanding of the concept
of bias is “different”.
• If the way in which participants are selected into
the study is different for cases and controls, for
example, and that difference is related to their
exposure status, then the possibility of a bias
exists in assessment of association between the
exposure and disease.
The role of bias
Bias can occur in all types of epidemiologic
studies
Retrospective studies are more susceptible to
bias than prospective ones
When evaluating a study for the presence of
bias, investigators must:
 identify its source
 estimate the magnitude or strength, and
 assess its direction



Types of Bias

a. Information/observation bias
 Introduced during data collection
b. Selection bias
 Introduced during recruitment of study participants



Selection Bias
• Invitational (who gets invited into the
study?)
Example
– Healthy worker bias
– Berkson's bias
– Incidence-prevalence bias (missing deaths and
recovered cases)

• Acceptance (who accepted the invitation?)
Example
– Loss to follow-up
– Volunteer/Compliance bias
– Non-response bias
Examples of selection bias

1. Differential surveillance, diagnosis, or referral (ascertainment bias)
Selection bias can occur in a case-control study as a result of differential surveillance, diagnosis, or referral of cases that is related to the exposure.



Examples of selection bias
Example - women who take oral contraceptives (OCs) may be screened more often for breast cancer than women who do not take OCs, because of the suspected link between oral contraceptives and breast cancer.
This would result in breast cancer being diagnosed more readily in those who are exposed to OCs.
In turn, this would introduce a bias in that exposed cases may be more likely to come to medical attention and be included in a study than non-exposed cases.



Examples of selection bias
2. Self selection/ Volunteer bias/ Compliance
bias
People who agree to participate in a study, or people who refuse to participate, are often quite different from the general population.

3. Non-response bias
• This is due to differences in the characteristics
between the responders and non-responders to the
study.
• Non-response reduces the effective sample size,
resulting in loss of precision of the survey estimates.
• Rates of response in many studies may be related to
exposure status.
Examples of selection bias
4. Loss to follow-up
Major source of bias in cohort studies; also a problem in intervention studies.
Relates to the necessity of following individuals for a period of time after exposure to determine the development of the outcome.
If the proportion of losses to follow-up is large, in the range of 30 to 40 percent, this would certainly raise serious doubts about the validity of the study results.
The more difficult issue for interpretation is that even if the rate of loss is not that extreme, the probability of loss may be related to the exposure, to the outcome, or to both.
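To make the last point concrete, here is a small numeric sketch (hypothetical counts, not from the slides) of how loss to follow-up that is related to both the exposure and the outcome distorts the relative risk even though the true risks are equal:

```python
# Hypothetical cohort: true risk is 0.20 in both groups, so true RR = 1.0
exposed_cases, exposed_total = 40, 200
unexposed_cases, unexposed_total = 40, 200
true_rr = (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Suppose half of the EXPOSED CASES are lost to follow-up, while losses
# elsewhere are negligible (loss related to both exposure and outcome)
observed_exposed_cases = exposed_cases // 2         # 20 remain under observation
observed_exposed_total = exposed_total - 20         # 180 remain
observed_rr = (observed_exposed_cases / observed_exposed_total) / \
              (unexposed_cases / unexposed_total)

print(true_rr, round(observed_rr, 2))   # 1.0 0.56 - a spurious "protective" effect
```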
Examples of selection bias
5. Berkson's bias
Case-control studies carried out exclusively in hospital settings are subject to selection bias, attributable to the fact that risks of hospitalization can combine in patients who have more than one condition.



Examples of selection bias

6. Healthy worker bias
Refers to the bias in occupational health studies which tends to underestimate the risk associated with an occupation, because employed people tend to be healthier than the general population.



Examples of selection bias
7. Prevalence-incidence (Neyman) bias
• Studies based on prevalence will produce a distorted picture of what has happened in terms of incidence.
• Example: "silent" MIs may leave no clear electrocardiographic evidence some time later, and/or risk factors may change after a pathophysiologic process has been initiated.



Examples of selection bias
8. Membership bias
Membership in a group may imply a degree of health which differs systematically from that of others in the general population.
Example: people who participate in a health promotion program may subsequently make more beneficial lifestyle changes than nonparticipants, due not to the program itself but to the participants' motivation and readiness to change.
Ways of minimizing selection bias
1. Population-based studies are preferable.
2. Avoid including as study subjects people who have volunteered on their own.
3. In a case-control study, it is useful to select several different control groups.
4. In a hospital-based case-control study, controls are usually selected among patients with diseases other than the disease studied.
5. Keep losses to follow-up to an absolute minimum.
 For those who are lost, obtain as much outcome data as possible.
Information bias / Observation bias
 Also called misclassification bias
 This refers to bias which arises during the data collection process, because of mistakes in categorizing/classifying study subjects with respect to their exposure or disease status.
This could be due to:
 Instrumentation - an inaccurately calibrated instrument creating systematic error
 Misdiagnosis - if a diagnostic test is consistently inaccurate, then information bias would occur


Misclassification can be differential or non-differential.
Differential misclassification: the probability of misclassification varies for the different study groups, i.e., misclassification is conditional upon exposure or disease status.
Are we more likely to misclassify cases than controls?
 For example, if you interview cases in person for a long period of time, extracting exact information, while the controls are interviewed over the phone for a shorter period of time using standard questions, this can lead to differential misclassification of exposure status between cases and controls.
Non-differential misclassification: the probability of misclassification does not vary for the different study groups; it is not conditional upon exposure or disease status, but appears random.
Using the above example, if half the subjects (cases and controls) were randomly selected to be interviewed by phone and the other half were interviewed in person, any resulting misclassification of exposure would be non-differential.
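The classic consequence of non-differential misclassification is attenuation of the association toward the null. This can be sketched numerically (hypothetical counts and an assumed sensitivity/specificity of the exposure measure, not from the slides):

```python
# Hypothetical case-control data: true exposure odds ratio = 4.0
cases_exp, cases_unexp = 80, 20
ctrls_exp, ctrls_unexp = 50, 50
true_or = (cases_exp * ctrls_unexp) / (cases_unexp * ctrls_exp)

def misclassify(exp, unexp, sens, spec):
    """Apply the same imperfect exposure measurement to a group."""
    obs_exp = exp * sens + unexp * (1 - spec)     # true positives + false positives
    obs_unexp = exp * (1 - sens) + unexp * spec
    return obs_exp, obs_unexp

# Same sensitivity/specificity (0.8 / 0.9) in cases AND controls: non-differential
ce, cu = misclassify(cases_exp, cases_unexp, 0.8, 0.9)
oe, ou = misclassify(ctrls_exp, ctrls_unexp, 0.8, 0.9)
observed_or = (ce * ou) / (cu * oe)

print(true_or, round(observed_or, 2))   # 4.0 2.37 - attenuated toward 1.0
```

The observed odds ratio lands between the null value and the true value, which is why non-differential misclassification generally makes associations look weaker than they are.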
Information Bias: Examples
1. Interviewer bias
An interviewer’s knowledge may influence the
structure, presentation and response to a
questionnaire
2. Recall bias
Those with a particular disease are more likely to remember past events than healthy people.
3. Social desirability bias - if study participants consistently give the answer that the investigator wants to hear, then information bias would occur.



Information Bias…
4. Observer bias
Preconceived expectations may affect research findings.
 If a physician knows which patients receive the intervention, his assessment of their progress may be biased.
5. Hawthorne effect
Refers to changes in the dependent variable which may be due to the process of measurement or observation itself.
6. Placebo effect bias
In experimental studies which are not placebo-controlled, observed changes may be ascribed to the positive effect of the subject's belief that the intervention will be beneficial.



Information bias….
7. Lead time bias
• Lead time is defined as the interval between the time a
condition is detected through screening and the time it
would normally have been detected by the reporting of
symptoms or signs.
• Lead time bias is overestimation of survival time, due to the backward shift in the starting point for measuring survival that arises when diseases such as cancer are detected early, as by screening procedures.
[Diagram: cases of cancer detected by screening show a longer measured follow-up/survival time than cases detected by signs and symptoms; the difference between the two starting points is the lead time.]
Can we say screening leads to longer survival?
Lead time…
• E.g. interventions for women whose breast cancer is
detected by screening cannot be validly compared
with interventions for women whose disease is first
detected by clinical examination at a later stage of
the disease for survival
• Thus, in comparing survival rates between nonrandomized groups, lead time bias may spuriously cause the screened case group to have a higher survival rate than the control group at any particular time after diagnosis.
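A minimal numeric sketch (hypothetical timeline, not from the slides) of how lead time inflates measured survival without changing the actual time of death:

```python
# Hypothetical timeline (years) for one cancer patient
onset = 0.0
detected_by_screening = 2.0   # screening finds the tumour here
clinical_diagnosis = 5.0      # symptoms would have led to diagnosis here
death = 8.0                   # same endpoint either way

survival_screened = death - detected_by_screening   # measured from screening
survival_clinical = death - clinical_diagnosis      # measured from symptoms
lead_time = clinical_diagnosis - detected_by_screening

# Screening appears to double survival, yet death occurs at the same time
print(survival_screened, survival_clinical, lead_time)   # 6.0 3.0 3.0
```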
Information bias….

8. Diagnostic suspicion bias
 The diagnostic process includes a great deal of room for judgment.


 If knowledge of the exposure or related
factors influences the intensity and outcome
of the diagnostic process, then exposed cases
have a greater (or lesser) chance of
becoming diagnosed, and therefore, counted.
Information bias….
9. Exposure suspicion bias
 Knowledge of disease status may influence the intensity and outcome of a search for exposure to the putative cause.



Bias in Practice
Each study design has particular types of bias to
which it is most vulnerable:
Example
Cross sectional study
 Temporal bias
 Neyman bias

Case control studies


 Selection bias
 Recall bias

Cohort studies
 Loss to follow up
 Lead time



Bias in Practice…
Bias
 May mask an association or cause a spurious one
 May cause over- or underestimation of the effect size
Increasing the sample size will not eliminate
any bias
A study that suffers from too much bias lacks
internal validity

Despite all preventive efforts, bias should always


be considered among alternative explanations
of a finding!



Control of bias in your study
I. Choose study design carefully
 If ethical and feasible, a double-blind randomized controlled trial has the least potential for bias.
 If loss to follow-up will not be substantial, a prospective
cohort study may have less bias than a case-control
study.
 Controls for case-control studies should be maximally
comparable to cases except for the variable under study
II. Choose objective rather than subjective outcomes.
III. Blind interviewers or examiners wherever possible.
IV. Use well-defined criteria for identifying a "case" and
use closed ended questions whenever possible.



Controls for Bias
 Be purposeful in the study design to minimize
the chance for bias
 Example: use more than one control group

 Define, a priori, who is a case or what


constitutes exposure so that there is no overlap
 Define categories within groups clearly (age groups,
aggregates of person years)
 Set up strict guidelines for data collection
 Train observers or interviewers to obtain data in the
same fashion
 It is preferable to use more than one observer or
interviewer, but not so many that they cannot be
trained in an identical manner



Controls for Bias …
Randomly allocate observers/interviewer
data collection assignments
Institute a masking process if appropriate
 Single masked study – subjects are unaware of whether
they are in the experimental or control group
 Double masked study – the subject and the observer are
unaware of the subject’s group allocation
 Triple masked study – the subject, observer and data
analyst are unaware of the subject’s group allocation

Build in methods to minimize loss to follow-


up



The role of confounding
Confounding is mixing of the effect of exposure under
study on the outcome with that of a third factor that is
independently associated with the exposure and the
outcome
Confounder - an extraneous variable that is associated with the exposure and is also an independent risk factor for the disease.
Exposure — Disease
(the Confounder is linked to both)
E.g. a test of association between alcoholism and lung cancer is confounded by smoking.



To produce a confounding effect, a variable must fulfill each of the following criteria:
1) The variable must be associated with the exposure in the
population that produced the cases
i.e. the confounder must be more or less common in the
exposed group than the comparison group
2) The variable must be an independent cause or predictor of
the disease
3) Confounder must not be an intermediate link in a causal
pathway between exposure and outcome



The role of confounding…
Example of confounding effect.
An observed association between
consumption of coffee and increased
risk of MI could be due, at least in
part, to the effect of cigarette
smoking, since coffee drinking is
associated with smoking and,
independent of coffee consumption,
smoking is a risk factor for MI.



Effect of Confounding
1. Totally or partially accounts for the apparent effect
2. Masks an underlying true association
3. Reverses the actual direction of the association



Control for Confounding Variables
• In the design:
 Randomization
 Restriction
 Matching
• During analysis:
 Stratified analysis
 Multivariable analysis

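As a sketch of stratified analysis (hypothetical counts, not from the slides), the following shows how a crude odds ratio can be inflated by a confounder such as smoking, and how the Mantel-Haenszel summary odds ratio recovers the stratum-specific association:

```python
# Hypothetical case-control data stratified by a confounder (e.g. smoking).
# Each stratum: (a, b, c, d) =
# (exposed cases, exposed controls, unexposed cases, unexposed controls)
strata = [
    (80, 200, 20, 100),   # stratum 1: smokers
    (10, 100, 40, 800),   # stratum 2: non-smokers
]

# Crude OR from the pooled (collapsed) table ignores the confounder
A = sum(s[0] for s in strata)
B = sum(s[1] for s in strata)
C = sum(s[2] for s in strata)
D = sum(s[3] for s in strata)
crude_or = (A * D) / (B * C)

# Mantel-Haenszel summary OR: sum(a*d/n) / sum(b*c/n) over strata
num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
mh_or = num / den

print(round(crude_or, 2), round(mh_or, 2))   # 4.5 2.0
```

Here each stratum's odds ratio is 2.0, but collapsing the strata more than doubles the apparent effect; the stratified (adjusted) estimate removes that distortion.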


The role of chance
We can draw inferences about the experience of an entire population based on an evaluation of only a sample.
Chance may always affect the results observed simply
because of random variation from sample to sample.
Sample size is one of the major determinants of
chance.
 the larger the sample on which the estimate is based,
the less variability and the more reliable the inference.
It is important to quantify the degree to which chance
variability may account for the results observed in any
individual study.



The role of chance
This is done by performing an appropriate
test of statistical significance.
A measure that is often reported from all
tests of statistical significance is the P value

 P < 0.05 - statistically significant.
 P > 0.05 - no statistically significant association (chance cannot be excluded as a likely explanation).

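As an illustrative sketch of such a significance test (hypothetical 2x2 counts, not from the slides): for a 2x2 table, the Pearson chi-square statistic has 1 degree of freedom, and its P value can be computed with the complementary error function, so only the standard library is needed:

```python
from math import erfc, sqrt

# Hypothetical 2x2 table: exposure vs disease
a, b = 30, 70    # exposed: diseased, not diseased
c, d = 10, 90    # unexposed: diseased, not diseased

n = a + b + c + d
# Pearson chi-square for a 2x2 table (without continuity correction)
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# With 1 degree of freedom, the upper-tail probability is erfc(sqrt(chi2 / 2))
p_value = erfc(sqrt(chi2 / 2))

print(round(chi2, 2), p_value < 0.05)   # 12.5 True
```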


The role of chance
It is always advisable to report the actual P
value rather than merely that the results did
or did not achieve statistical significance.
e.g. P- value of 0.04
A confidence interval (CI) is a far more informative measure than a P value for evaluating the role of chance.



Confidence Interval
1. Provides the information that the p-value gives.
 If the null value is included in a 95% confidence interval, by definition the corresponding P-value is > 0.05.
2. Indicates the amount of variability (effect of sample size) by the width of the confidence interval.
 This information cannot be obtained from a p-value.
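A sketch of a log-based (Woolf) 95% confidence interval for an odds ratio, using hypothetical counts: because the null value (OR = 1) lies outside the interval, P < 0.05, and the interval's width additionally conveys the precision that a P value cannot.

```python
from math import log, exp, sqrt

# Hypothetical 2x2 table: (a, b) exposed, (c, d) unexposed
a, b, c, d = 30, 70, 10, 90
odds_ratio = (a * d) / (b * c)

se_log_or = sqrt(1/a + 1/b + 1/c + 1/d)        # standard error of ln(OR)
lower = exp(log(odds_ratio) - 1.96 * se_log_or)
upper = exp(log(odds_ratio) + 1.96 * se_log_or)

print(round(lower, 2), round(upper, 2))   # 1.77 8.42
```

A wide interval like this one would signal limited precision even though the result is "significant".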
Interpretation of CI
A wide CI indicates greater variability and suggests inadequacy of the sample size.
Particularly important in interpreting non-significant results:
 A narrow CI suggests that truly there is no association.
 A wide CI suggests the sample size was inadequate for the study to have adequate statistical power.



Causation (Establishing Causal Association)
"To know the causes of disease and to understand the use of the various methods by which disease may be prevented amounts to the same thing as being able to cure the disease."
- Hippocrates



Disease causation
Cause and effect understanding is the highest form of
achievement in scientific knowledge
Causal knowledge is the basis for rational actions to
break the links between the disease and the factors
causing the disease
So the primary objective in epidemiology is to judge
whether an association is, in fact, causal.
A cause in epidemiology can be defined as something
that alters the frequency of disease
Scientific proof of causation is often difficult to obtain, since experimental studies are often neither feasible nor ethical, and associations documented by other kinds of epidemiologic studies do not in themselves constitute proof of causation.
Establishing causation
The observed association in epidemiologic studies may be:
Artefactual (false) associations
 Due to chance - random error
 Due to bias - systematic error
Non-causal (indirect) associations
 Reverse causation
 Reciprocal causation
 Confounding
Causal associations, which can be established only when
other potential explanations of the association can be ruled
out.



Judgements of causality
Bradford-Hill Criteria of disease causation
1. Strength of association
2. Dose – response relationship
3. Consistency of the relationship
4. Temporal relationship
5. Specificity of the association
6. Biological plausibility (coherence)
7. Experimental confirmation

Read more



Thank You
