Designs For Program Evaluation
Objective of this session
• After completing this session, participants will:
– Be familiar with the commonly used evaluation designs
– Understand the strengths and weaknesses of the different designs
– Understand the different threats to internal validity
Outline of the presentation
• Evaluation Design: definition
• Commonly used Evaluation Designs
• Process evaluation designs:
– Case study
– Survey
• Outcome/impact evaluation designs:
– Experimental
– Quasi-experimental
– Non-experimental designs
• Threats to internal validity
Evaluation Design
• An evaluation design is the skeleton (structure) of an evaluation study
• It is a specific plan or protocol for conducting
any evaluation study
• Study designs are usually framed by the
evaluation questions and the type of
evaluation
Evaluation designs
• The commonly used evaluation designs
include:
• Process evaluation: Case study, Survey
• Outcome/Impact evaluation: Experimental, Quasi-experimental, and Non-experimental designs
Case study….
• A case study is an empirical inquiry that investigates a contemporary phenomenon within its real-life context
Case selection
• A case study can be single or multiple
Single case:
– A case can be a person, a program, a country, ….
– Best case: to share lessons learned from the best-achieving site
– Worst case: to assess what has gone wrong and correct accordingly
– Representative case: to generalize the information to the whole program
Case selection
• Multiple case selection:
– Best and worst (extreme) cases: to see differences in implementation
Ex. Clinical care for HIV/AIDS patients at good-performing and poor-performing hospitals
– Cases at different levels:
Ex. Clinical care for HIV/AIDS patients at hospitals of different levels
– Cases in different settings:
Ex. Urban and rural settings
Data sources
• Case study uses multiple data sources, including primary and
secondary data
• Triangulating and complementing data from different sources to produce credible information is common
Case
• The Performance Monitoring team of a RHB collected data on the ART service of the region. When the data were analyzed and interpreted, the achievement of three hospitals was found to be below expectation, four hospitals were found to be the best achievers, and the rest performed at a medium level.
Case study vs. survey
Outcome/Impact Evaluation:
Difficulty in outcome/Impact evaluation
Commonly used outcome/impact evaluation designs
Is randomized assignment used?
– YES → Experimental design
– NO → Is a comparison group used?
   – YES → Quasi-experimental design
   – NO → Non-experimental design
Experimental Designs
What is Randomization?
• Assigning participants to a treatment or control group randomly (see the sketch after this list)
• Ensures that any differences (e.g., religion, age) that could “confound” the results of the intervention are evenly distributed between the treatment and the control group
• Limitations:
– Ethical concerns:
E.g., denying treatment to a control group
– Requires highly qualified experts and substantial resources
– Short time frame for program implementation
– Full-coverage programs are not amenable to randomization
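As a concrete illustration of simple random assignment, here is a minimal Python sketch; the participant IDs, the fixed seed, and the `randomize` helper are hypothetical and not part of the slides.

```python
import random

def randomize(participants, seed=42):
    """Randomly split a list of participant IDs into treatment and control arms."""
    rng = random.Random(seed)      # fixed seed so the allocation is reproducible
    shuffled = participants[:]     # copy so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]   # (treatment, control)

# Hypothetical participant IDs
ids = [f"P{n:03d}" for n in range(1, 21)]
treatment, control = randomize(ids)
print("Treatment:", treatment)
print("Control:  ", control)
```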
Experimental Designs Characteristics
Experimental Designs
Before and After Design

R   O1   X   O2
R   O1c       O2c

• Random assignment
• Measures before and after the intervention
• Effect is measured as the change in outcomes between treatment and control (see the sketch below)

Legend:
• R: random assignment of participants
• O: observation/measurement
• X: treatment (intervention)
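A minimal sketch, with hypothetical outcome means, of how the effect is computed in this design: the change in the treatment group minus the change in the control group.

```python
# Hypothetical pre/post outcome means for a randomized before-and-after design.
# O1/O2: treatment group before/after; O1c/O2c: control group before/after.
O1, O2 = 42.0, 61.0     # treatment group means (hypothetical)
O1c, O2c = 43.0, 47.0   # control group means (hypothetical)

treatment_change = O2 - O1      # change in the treatment group
control_change = O2c - O1c      # change due to everything except the intervention
effect = treatment_change - control_change
print(f"Estimated intervention effect: {effect:.1f}")   # (61-42) - (47-43) = 15.0
```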
Experimental Designs
Post-test Only Design

R   X   O1
R       O2c

• Random assignment
• No pre-intervention measure
• Measures only after treatment
• Effect is measured as the difference in outcomes between the treatment and control groups (see the sketch below)

Legend:
• R: random assignment of participants
• O: observation/measurement
• X: treatment (intervention)
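A minimal sketch with hypothetical post-test means: because participants were randomized, the effect is estimated directly as the difference between the two group means.

```python
# Hypothetical post-intervention outcome means (randomized post-test-only design)
O1 = 61.0    # treatment group mean after the intervention
O2c = 46.0   # control group mean at the same point in time

effect = O1 - O2c   # randomization makes the groups comparable at baseline
print(f"Estimated intervention effect: {effect:.1f}")   # 15.0
```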
Quasi-experimental designs
• Quasi-experiments are defined as experiments that do not
have random assignment of participants
• Like experimental designs, it involves a comparison of groups or conditions
• Applicable when implementation is universal and happens at the same time, so randomization cannot be done
• Control groups can be constructed through matching and/or statistical control
• Participants can be matched on characteristics such as age, sex, education, … (see the sketch after this list)
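A minimal sketch of nearest-neighbour matching on age, sex, and education; the records, field names, and distance rule are hypothetical and chosen only to illustrate the idea.

```python
# Hypothetical records; field names are illustrative only.
treated = [
    {"id": "T1", "age": 24, "sex": "F", "education": 10},
    {"id": "T2", "age": 37, "sex": "M", "education": 6},
]
untreated = [
    {"id": "C1", "age": 25, "sex": "F", "education": 9},
    {"id": "C2", "age": 52, "sex": "M", "education": 4},
    {"id": "C3", "age": 36, "sex": "M", "education": 6},
]

def distance(a, b):
    """Smaller is more similar; an exact match is required on sex."""
    if a["sex"] != b["sex"]:
        return float("inf")
    return abs(a["age"] - b["age"]) + abs(a["education"] - b["education"])

# Nearest-neighbour matching without replacement
matches = {}
available = list(untreated)
for t in treated:
    best = min(available, key=lambda c: distance(t, c))
    matches[t["id"]] = best["id"]
    available.remove(best)

print(matches)   # {'T1': 'C1', 'T2': 'C3'}
```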
Quasi-experimental …
• Goal is to identify a group that is highly
comparable to the treatment group
• Must document at baseline the similarities and
differences between the groups
• Peer-generated
• There may be uncontrolled differences
between the intervention and control groups
Quasi-Experimental Designs, Nonequivalent
Comparison Group
O1   X   O2
O1c       O2c

• Builds on the one-group pre/posttest design
• Adds a comparison group (no random assignment)
• Impact/outcome shows as differences in indicators between the treatment and comparison groups
• Groups may differ in factors that matching or statistical analysis cannot account for (see the sketch below for statistical control of a measured factor)
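A minimal sketch of statistical control, assuming hypothetical data: an ordinary least squares fit adjusts the estimated treatment effect for a measured covariate (age). This only handles measured factors; the limitation above concerns factors that are not measured.

```python
import numpy as np

# Hypothetical post-test outcomes with a covariate (age) that differs between groups.
# treated flag: 1 = intervention group, 0 = comparison group
age     = np.array([20, 25, 30, 35, 22, 27, 33, 38], dtype=float)
treated = np.array([ 1,  1,  1,  1,  0,  0,  0,  0], dtype=float)
outcome = np.array([55, 58, 62, 66, 40, 44, 49, 53], dtype=float)

# Ordinary least squares: outcome ~ intercept + treated + age
X = np.column_stack([np.ones_like(age), treated, age])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"Adjusted treatment effect (controls for age): {coef[1]:.1f}")
```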
Quasi-Experimental Designs, Nonequivalent
Comparison Group post test only
X   O1
     O1c

• No pretest measure
• There is a comparison group (no random assignment)
• Impact shows as differences in indicators between the treatment and comparison groups
• Groups may differ in factors that matching or statistical analysis cannot account for
Quasi-Experimental Designs
No comparison groups
Non-experimental designs
Pre and Posttest
• No control group
• Measure indicators before and after the intervention
• Outcome/Impact shown as a change in
indicators after intervention
Non-Experimental Designs:
Pre and Posttest
O1   X   O2

• Used to conduct a pilot test
• Used to demonstrate the outcome/impact of a short-term intervention
• Provides a measure of change, but not strong, conclusive results, because there is no random assignment and no comparison group (see the sketch below)
• Weak design to assess impacts or outcomes
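A minimal sketch with hypothetical values: the design yields only a before/after change, with nothing to rule out other explanations for that change.

```python
# Hypothetical indicator values before and after the intervention (no comparison group)
O1 = 48.0   # baseline measurement
O2 = 57.0   # post-intervention measurement

change = O2 - O1
print(f"Observed change: {change:.1f}")   # 9.0 - history, maturation, etc. may explain part of it
```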
Non-Experimental Designs:
Time-series
O1  O2  O3  X  O4  O5  O6
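A minimal sketch with hypothetical monthly values: a simple reading of a time series compares the level of the indicator before and after the intervention.

```python
# Hypothetical monthly indicator values around the intervention (X)
pre  = [40, 41, 43]   # O1, O2, O3
post = [52, 54, 55]   # O4, O5, O6

pre_mean = sum(pre) / len(pre)
post_mean = sum(post) / len(post)
print(f"Pre-intervention mean:  {pre_mean:.1f}")
print(f"Post-intervention mean: {post_mean:.1f}")
print(f"Shift in level: {post_mean - pre_mean:.1f}")
```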
Non-Experimental Designs
Posttest only
X O1
• For interventions that have already started
• When baseline data collection is not feasible
Threats to internal validity
• History:
• Extraneous events that occur during the intervention period can influence the outcome of the intervention
• To minimize this threat:
– Select a study design with a control or comparison group
– Monitor extraneous events (e.g., mass media coverage)
– Conduct a process evaluation
Threats to internal validity
Testing:
• Testing prior to an intervention is likely to affect the
responses given in a posttest
• Pre- and post-intervention differences in indicators might be due to familiarity with the questions
– Ex. Health Education for school children
• To minimize this threat:
• Use a control or comparison group
• Change the order, but not the format or content, of items
included on the posttest
Threats to Internal Validity
Maturation:
• Maturation occurs as time passes and may produce changes in participants that are unrelated to the effect of the program
– Example: adolescent reproductive health education
– Youth who are in the program may experience changes in
attitudes and behaviors as they grow older, which are
unrelated to the program
• To minimize this threat:
• Use a control or comparison group to account for maturation
• Minimize the duration between the pre-test and the post-test
Threats to Internal Validity
Dropout:
• Dropout (attrition) can affect evaluation results
• If a program is implemented over a long period of time, participants may drop out, move, or die, and so be lost to follow-up
• If those who are lost differ significantly from those who remain, the results of a post-intervention study may reflect those differences rather than the effect of the program
• To minimize this threat:
• Compare the characteristics of those who continue the program with those who drop out (see the sketch below)
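A minimal sketch, with hypothetical records, of comparing completers and dropouts at baseline to gauge possible attrition bias.

```python
# Hypothetical baseline records; "completed" marks who remained in the program.
participants = [
    {"id": "P1", "age": 21, "completed": True},
    {"id": "P2", "age": 45, "completed": False},
    {"id": "P3", "age": 23, "completed": True},
    {"id": "P4", "age": 52, "completed": False},
    {"id": "P5", "age": 30, "completed": True},
]

completers = [p["age"] for p in participants if p["completed"]]
dropouts   = [p["age"] for p in participants if not p["completed"]]

print(f"Mean age, completers: {sum(completers)/len(completers):.1f}")  # 24.7
print(f"Mean age, dropouts:   {sum(dropouts)/len(dropouts):.1f}")      # 48.5
# A large gap suggests attrition bias: results for completers may not reflect the full group.
```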
Threats to Internal Validity
• Instrumentation changes:
• If the questionnaires used for the pretest and the posttest are different, the difference can produce an effect that is independent of the program intervention
• To minimize this threat:
• Use the same instrument (content and format) for the pretest and the posttest
Considerations in Choosing Evaluation
Design
Reference
• Baker, Q. E., et al. (2002). An Evaluation Framework for Community Health Programs.
• Evaluation Designs to Assess Program Impact.
• Earl, S., Carden, F., & Smutylo, T. Outcome Mapping: Building Learning and Reflection into Development Programs.
• Yin, R. K. (1984). Case Study Research: Design and Methods.