
Evaluation Designs

Objectives of this session
• After completing this session, participants will:
– Be familiar with the commonly used evaluation designs
– Understand the strengths and weaknesses of the different designs
– Understand the different threats to internal validity

Outline of the presentation
• Evaluation Design: definition
• Commonly used Evaluation Designs
• Process evaluation designs:
– Case study
– Survey
• Outcome/impact evaluation designs:
– Experimental
– Quasi-experimental
– Non-experimental designs
• Threats to internal validity

Evaluation Design
• An evaluation design is the skeleton (structure) of an evaluation study
• It is a specific plan or protocol for conducting an evaluation study
• Study designs are usually framed by the
evaluation questions and the type of
evaluation

Evaluation designs
• The commonly used evaluation designs
include:
Type of evaluation design     Type of evaluation
Survey                        Process evaluation
Case Study                    Process evaluation
Experimental                  Outcome/Impact evaluation (comparison group)
Quasi-experimental            Outcome/Impact evaluation (comparison group)
Non-experimental              Outcome/Impact evaluation (no comparison group)

Evaluation designs
• Case study:
– A case study is a method for learning about a complex instance, based on a comprehensive understanding of that instance obtained through extensive description and analysis of the instance taken as a whole and in its context.
– It emphasizes contextual analysis of a limited number of events

Case study….
• A case study is an empirical inquiry that investigates a contemporary phenomenon within its real-life context

• A case study is an in-depth study, as opposed to studies with many observations (breadth)

• It usually answers one or more questions that begin with "how" or "why"

• It is commonly used in process evaluation because it can explore the different factors that explain the program implementation process

Case selection
• A case study can be single or multiple

Single case:
– Cases can be a person, a program, a country, ….
– Best case: to share lessons learned from the best-achieving site
– Worst case: to assess what has gone wrong and correct accordingly
– Representative case: to generalize the information to the whole program

Case selection
• Multiple case selection:
• Best and worst (extreme) cases: to see the difference in implementation
– Ex. Clinical care for HIV/AIDS patients at good-performing and poor-performing hospitals
• Cases at different levels:
– Ex. Clinical care for HIV/AIDS patients at hospitals of different levels
• Cases in different settings:
– Ex. Urban and rural settings

Data sources
• A case study uses multiple data sources, including primary and secondary data
• Triangulating and complementing data from different sources to produce credible information is common

Case
• The performance monitoring team of an RHB collected data on the ART services of the region. When the data were analyzed and interpreted, they showed that the achievement of three hospitals was below expectation, four hospitals were the best achievers, and the rest achieved a medium level.

1. What type of design should be employed to identify the possible reasons for the poor achievement of the three hospitals?
2. What type of design should be employed to compare the implementation processes of the best-performing and poor-performing hospitals?

Case study vs. survey
Case Study                                                  Survey
In-depth analysis                                           Breadth
Case selection using criteria (information-oriented)       Random sampling (usually)
Cases can be a person, a program, a country, ….             Sampled study subjects
No statistical generalization (can be generalized          Statistically generalizable
  to areas with similar settings)
Answers questions: How? Why?                                Answers questions: Who? How much? Where?

Outcome/Impact Evaluation:

Difficulty in outcome/Impact evaluation

• Designing an outcome evaluation and claiming attribution is a difficult task because of different confounding factors

• Programs are part of an interconnected web of actors and relationships

• Change in a given society is continuous, complex, non-linear, multidirectional and not controllable

Difficulty in outcome/Impact evaluation

• If change is this complex:
– How can we increase our knowledge of the processes we engage in?
– How can we know if we made a difference?
– How can we understand change?
– How can we recognize contributors and share the credit?
• By choosing appropriate designs!

Commonly used outcome/impact evaluation designs

Evaluation design                               Specific designs
Experimental (control group, randomized)        Before-and-after design; Post-test only
Quasi-experimental (comparison group)           Pre- and post-test; Post-test only
Non-experimental (no comparison group)          Pre- and post-test; Time series; Post-test only

Types of Evaluation Designs

Is randomized assignment used?
  YES -> Randomized or true experiment
  NO  -> Is there a control group or multiple measures?
           YES -> Quasi-experiment
           NO  -> Non-experiment

Experimental Designs

• Participants are randomly assigned to a treatment or control group
• Can be conducted in clinical or non-clinical (community or field) settings
• Produces data of the highest quality
– Resembles the controlled experiments run by basic science researchers
• Its key advantage is randomization

What is Randomization?
• Assigning participants to a treatment or control group randomly (see the sketch below)
• Ensures that any differences (religion, age) that could "confound" the results of the intervention are evenly distributed between the treatment and the control group
• Limits:
• Ethical concerns:
E.g. denying treatment to a control group?
• Requires highly qualified experts and substantial resources
• Short time frame for program implementation
• Full-coverage programs are not amenable to randomization
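
A minimal sketch of random assignment in Python, using a hypothetical list of participant IDs (the data and names are illustrative, not taken from this material):

import random

participants = ["p01", "p02", "p03", "p04", "p05", "p06", "p07", "p08"]

random.seed(42)               # fixed seed so the assignment can be reproduced
random.shuffle(participants)  # put participants in a random order

half = len(participants) // 2
treatment_group = participants[:half]  # receive the intervention
control_group = participants[half:]    # do not receive the intervention

print("Treatment:", treatment_group)
print("Control:  ", control_group)

Because the split is random, characteristics such as religion or age are expected, on average, to be balanced between the two groups.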

Experimental Designs Characteristics

• Groups are created by methods of randomization
• Considered the most rigorous of all evaluation designs
• Strong control over threats to internal validity
• Yields the strongest evidence of the causal effectiveness of the intervention

Experimental Designs
Before and After Design

R   O1   X   O2
R   O1c      O2c

• Random assignment
• Measures taken before and after the intervention
• Effect is measured as the difference in outcome changes between the treatment and control groups

Notation: R = random assignment of participants; O = observation/measurement; X = treatment (intervention); subscript c = control group
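
As an illustration of how the effect is estimated from the four measurements, a minimal sketch with made-up group means (the numbers are hypothetical):

# Hypothetical group means for the before-and-after design
o1, o2 = 40.0, 65.0     # treatment group: before (O1) and after (O2)
o1c, o2c = 41.0, 48.0   # control group: before (O1c) and after (O2c)

change_treatment = o2 - o1    # change observed in the treatment group
change_control = o2c - o1c    # change due to everything except the program

effect = change_treatment - change_control
print("Estimated program effect:", effect)   # 25.0 - 7.0 = 18.0

Randomization justifies attributing this difference in changes to the intervention, since the control group captures what would have happened without it.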

Experimental Designs
Post test only

R   X   O1
R       O2c

• Random assignment
• No pre-intervention measure
• Measures taken only after the treatment
• Effect is measured as the difference in outcomes between the treatment and control groups

Notation: R = random assignment of participants; O = observation/measurement; X = treatment (intervention); subscript c = control group

Quasi-experimental designs
• Quasi-experiments are defined as experiments that do not have random assignment of participants
• Like experimental designs, they involve a comparison of groups or conditions
• Applicable when a program is implemented universally and at the same time, so randomization cannot be done
• Control groups can be constructed through matching and/or statistical control
• We can match participants on characteristics such as age, sex, education… (see the matching sketch below)
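
A minimal sketch of matching in Python, using hypothetical participant records (the data, field names and helper function are illustrative, not taken from this material). Each program participant is paired with the most similar non-participant on age, sex and education:

treated = [
    {"id": "t1", "age": 24, "sex": "F", "edu_years": 10},
    {"id": "t2", "age": 37, "sex": "M", "edu_years": 6},
]
untreated = [
    {"id": "u1", "age": 25, "sex": "F", "edu_years": 10},
    {"id": "u2", "age": 52, "sex": "M", "edu_years": 4},
    {"id": "u3", "age": 36, "sex": "M", "edu_years": 6},
]

def distance(a, b):
    # Smaller is more similar; an exact match is required on sex.
    if a["sex"] != b["sex"]:
        return float("inf")
    return abs(a["age"] - b["age"]) + abs(a["edu_years"] - b["edu_years"])

matches = {}
available = list(untreated)
for person in treated:
    best = min(available, key=lambda candidate: distance(person, candidate))
    matches[person["id"]] = best["id"]
    available.remove(best)   # match without replacement

print(matches)   # {'t1': 'u1', 't2': 'u3'}

The matched non-participants form the comparison group; in practice, statistical controls (e.g. propensity scores) are often used when many characteristics must be balanced.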

Quasi-experimental …
• Goal is to identify a group that is highly
comparable to the treatment group
• Must document at baseline the similarities and
differences between the groups
• Peer-generated
• There may be uncontrolled differences
between the intervention and control groups

Quasi-Experimental Designs, Nonequivalent
Comparison Group

O1    X   O2
O1c       O2c

• Builds on the one-group pre/post-test design
• Adds a comparison group
• Impact/outcome shows as differences in indicators between the treatment and comparison groups
• Limitation: the groups may differ in factors that matching or statistical analysis cannot account for

Quasi-Experimental Designs, Nonequivalent
Comparison Group post test only

X   O1
    O1c

• No pre-test measure
• A comparison group is included
• Impact shows as differences in indicators between the treatment and comparison groups
• Limitation: the groups may differ in factors that matching or statistical analysis cannot account for

Quasi-Experimental Designs
No comparison groups

• No need to create control groups
• Compare changes in program indicators to the status of the general population
• Limitation: data for relevant indicators and an appropriate comparison population are rarely available

Non-experimental designs
Pre and Posttest

• No control group
• Indicators are measured before and after the intervention
• Outcome/impact is shown as the change in indicators after the intervention

Non-Experimental Designs:
Pre and Posttest
O1   X   O2

• Used to conduct a pilot test
• Used to demonstrate the outcome/impact of a short-term intervention
• Provides a measure of change, but no strong conclusive results because there is no random assignment and no comparison group
• A weak design for assessing impacts or outcomes

Non-Experimental Designs:
Time-series

O1  O2  O3  X  O4  O5  O6

• An enhanced one-group pre- and post-test design
• Adds a number of observation points before and after the intervention
• Requires a large number of observations
• Impact is shown as a change in trends after the intervention (see the sketch below)
• Time and resource intensive
• Limitations: the length of the study increases the risk of threats (selection, testing), and advanced statistical procedures are required
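
A minimal sketch of comparing trends before and after the intervention in Python, using made-up indicator values (the data are hypothetical, and a real analysis would use many more observations and formal interrupted time-series methods):

import numpy as np

pre = np.array([52.0, 54.0, 55.0, 57.0, 58.0, 60.0])    # observations before X
post = np.array([66.0, 69.0, 73.0, 76.0, 80.0, 83.0])   # observations after X

t_pre = np.arange(len(pre))
t_post = np.arange(len(pre), len(pre) + len(post))

# Fit a straight-line trend to each segment: slope per period and intercept.
slope_pre, intercept_pre = np.polyfit(t_pre, pre, 1)
slope_post, intercept_post = np.polyfit(t_post, post, 1)

# Level change at the intervention, relative to the pre-intervention
# trend projected forward to the first post-intervention period.
projected = slope_pre * t_post[0] + intercept_pre
level_change = post[0] - projected

print(f"Pre-intervention slope:  {slope_pre:.2f} per period")
print(f"Post-intervention slope: {slope_post:.2f} per period")
print(f"Level change at intervention: {level_change:.2f}")

A change in slope or a jump in level after X is the evidence of impact this design looks for; threats such as history or testing can still produce similar patterns.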

Non-Experimental Designs
Posttest only

X   O1

• For interventions that have already started
• When baseline data collection is not feasible

Threats to internal validity
• History:
• Extraneous events that occur during the intervention period can influence the outcome of the intervention
• To minimize this threat:
• Select a study design with a control or comparison group
• Monitor extraneous events (e.g. mass media)
• Conduct a process evaluation

Threats to internal validity
Testing:
• Testing prior to an intervention is likely to affect the responses given in a post-test
• Pre- and post-intervention differences in indicators might be due to familiarity with the questions
– Ex. Health education for school children
• To minimize this threat:
• Use a control or comparison group
• Change the order, but not the format or content, of items included on the post-test

Threats to Internal Validity
Maturation:
• It occurs as time passes and may produce outcomes in participants that are unrelated to the effect of the program
– Example: adolescent reproductive health education
– Youth in the program may experience changes in attitudes and behaviors as they grow older that are unrelated to the program
• To minimize this threat:
• Use a control or comparison group to account for maturation
• Minimize the duration between the pre- and post-test

Threats to Internal Validity
Dropout:
• It can affect evaluation results
• If a program is implemented over a long period of time, participants may drop out, move or die, and so be lost to follow-up studies
• If those who are lost differ significantly from those who remain, the results of a post-intervention study may reflect those differences rather than the effect of the program
• To minimize this threat:
• Compare the characteristics of those who continue the program with those who drop out

Threats to Internal Validity
• Instrumentation changes:
• If the questionnaires used at the pre-test and the post-test are different, this can produce an effect independent of the program interventions
• To minimize this threat:
• Keep the exact wording of the baseline questions
• Ensure that interviewers are unaware of participants' treatment or control group assignments, so they will not anticipate responses

Considerations in Choosing Evaluation
Design

• Ethical or legal considerations
• Stage of program implementation
• Feasibility of establishing a control group
• The amount of resources available
• Time frame
– The evaluation design should be decided before the start of the project, if possible
– Time can limit the choice of design (e.g. a time series requires many observation points)

Considerations in Choosing Evaluation
Design…

• The strength of evidence required to address the cause-effect relationship
• Decision makers demand different types of information:
• What indicator do we want to measure?
– Provision: services must be provided so that they are available and accessible
– Utilization: the population must accept the services and utilize them
– Coverage: the extent to which the target population is reached
– Impact: the change in health and health-related trends

References
• Baker, Q. E., et al. An Evaluation Framework for Community Health Programs. June 2002.
• Evaluation Designs to Assess Program Impact.
• Earl, S., Carden, F., and Smutylo, T. Outcome Mapping: Building Learning and Reflection into Development Programs.
• Yin, R. K. (1984). Case Study Research: Design and Methods.
