MONITORING AND EVALUATING INFORMATION AND COMMUNICATION FOR DEVELOPMENT (ICD) PROGRAMMES

GUIDELINES

MARCH 2005
Foreword
If you work for the Department for International Development (DFID) and need advice on
monitoring and evaluating Information and Communication for Development (ICD)
programmes, these guidelines are for you. They don't provide a set of rules, but do
introduce a range of approaches for you to choose from at various stages in your
programme. Where possible, we signpost you to sources of further information. You can
use the guidelines as a reference tool or to help you work with consultants.
2. use media and ICTs to add value to development sectors and programmes
Acronyms

ICD - Information and Communication for Development
ICT - information and communication technology
IDRC - International Development Research Centre
JHU CCP - Johns Hopkins University Center for Communication Programs
KABP - knowledge, attitude, behaviour and practice
KAP - knowledge, attitude and practice
PEER - Participatory Ethnographic Evaluation and Research
PM&E - participatory monitoring and evaluation
PRA - Participatory Rural Appraisal
PRCA - Participatory Rural Communication Appraisal
RAP - rapid assessment procedures
Section 1
DFID policy says we should monitor and evaluate our communications. But ICD programmes present particular difficulties for evaluators:

It is difficult to define a specific target audience for initiatives that have an effect over a wide area (broadcast campaigns, for example)

It is not always clear that an ICD programme - rather than political, social or economic factors - has been responsible for change

If developing-world audiences have little media choice, it can be hard to find out their opinions on the quality of ICD programmes

Bear in mind, too, that it is difficult to evaluate human behaviour and social processes, so there are no ready-made ways of measuring the success of ICD projects.
Monitoring and evaluation rely on personal judgement as well as theory, and there is no single best evaluation method.
Section 2
Data-collection methods to consider include:
focus groups
document analysis
usage and site observation
exit polls
questionnaires
interviews
Setting a budget
How will the data collected be analysed (statistical analysis, content analysis)?
What types and levels of resource are needed (personnel, supplies, cash)?
With small projects and those with short time-frames, evaluation might take up a larger
part of the budget. This might also be the case with pilot projects, which try to determine
how successful a programme will be if it is rolled out at a later date. In cases like this,
evaluation costs might take up 30 per cent or more of the total budget.
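For illustration only (the figures are hypothetical): a £40,000 pilot that allocates 30 per cent of its budget to evaluation sets aside £12,000, leaving £28,000 for implementation.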
In the past, expatriates have played a significant role in conducting research. But it's now
considered good practice to use local resources, where possible, and to train and
employ local assessors.
With any external evaluator, whether expatriate or local, it is a good idea to ask for an
evaluation framework, which should include:
a timeframe
Section 3
FORMATIVE APPRAISAL
This section outlines methods for carrying out research at the start of your project.
1. Measuring KABP
The approach
KABP (sometimes reduced to KAP) is an acronym that stands for knowledge, attitude,
behaviour and practice. Research that measures KABP is based on the assumption that a
person's knowledge influences their attitude, which in turn influences their behaviour. It
usually involves written, standardised questionnaires that are composed of yes/no questions.
The difficulties
Sometimes human behaviour doesn't follow a logical progression. Knowledge of an issue doesn't always result in a change of attitude and behaviour. Community values can override individual interests, so sometimes collective or institutional changes are necessary before individuals can be targeted effectively.
Other things to be aware of:
People might lie on questionnaires, particularly if they've been asked about sensitive or sexual matters
Using closed, predetermined, inflexible questions can mean you miss out on vital information
People are generally suspicious of surveys
Your target audience might be experiencing so-called 'questionnaire fatigue'
Example questions:
Attitude: Would you share a meal with someone who is HIV positive?
Behaviour/practice: Did you use a condom at your last sexual encounter?
The application
KABP surveys are useful for finding out what your target audience already knows and
does. They can give an insight into a large group of people in a short time frame, and
are particularly useful if you plan to paint a before-and-after picture of a programme's
success. Data has statistical significance if you randomly select your interviewees, and it
can be used as a baseline against which to measure findings at the end of your project.
Most KABP surveys need to be supplemented by qualitative research. This combined
approach provides valuable information for developing messages for campaign-type
programmes.
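As a rough illustration of what random selection and statistical significance demand in practice, the sketch below (not part of the original guidelines) applies the standard sample-size formula for estimating a proportion, n = z^2 p(1 - p) / e^2, to a simple random KABP survey; all figures are assumptions.

    import math

    def kabp_sample_size(p=0.5, margin=0.05, z=1.96):
        """Respondents needed to estimate a proportion p to within the
        given margin of error at 95% confidence (z = 1.96), assuming
        simple random sampling from a large population."""
        return math.ceil((z ** 2) * p * (1 - p) / margin ** 2)

    # The worst-case assumption p = 0.5 gives the largest sample.
    print(kabp_sample_size())  # 385

In practice a survey firm will also adjust this figure upwards to allow for clustering and non-response.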
Example: Research for an information campaign on Rwanda's gacaca process
In 2001, research was carried out into how the Rwandan public viewed the proposed
gacaca process, which aimed to bring genocide suspects before community courts.
Researchers used a mixture of quantitative survey and qualitative focus group methods to
gauge public opinion. Findings were used to inform an awareness-raising campaign,
providing valuable information on what media should be used and what key messages
should be. The study also provided a baseline against which the success of the project
could be measured.
More information can be found at www.jhuccp.org/pubs/sp/19/English/ch1.shtml
2. Rapid assessment procedures (RAP)

The approach
RAP offers a qualitative alternative to measuring KABP. In RAP, the researcher gets an
insight into a cultural belief system through a continual process of forming questions and
generating ideas, based on information collected from a few key local informants.
The application
RAP can be used at the start of a project or while it is running (to help you make
adjustments to your work as your programme develops).
The difficulties
This approach results in detailed, qualitative information, but it cannot provide the baseline for an experimental design, as the sample isn't large or random enough to stand up to statistical scrutiny.

3. Participatory Rural Communication Appraisal (PRCA)

The application
Because it involves the target audience in decision-making, it can ensure relevance and
ownership by the people involved. It can lead to joint planning of communication
programmes, instead of the traditional approach in which professionals plan
communication interventions without input from the community.
The difficulties
PRCA can be time-consuming and cannot be used as part of an experimental enquiry.
For more information, visit the SADC's website:
www.sadc-fanr.org.zw/sccd/sadc%20ccd%20profile.htm
Section 4
PROCESS EVALUATION
You can use the research methods outlined in this section while your project is ongoing.
1. Audience research

The approach
Classic audience research uses quantitative surveys to obtain data on audience numbers, characteristics and preferences. Most studies use well-established market research tools and involve large samples.
The application
Audience research is one of the basics for monitoring communications programmes: it is often essential for understanding audience size, distribution and preferences. It is especially useful in message-based or campaign-type situations.
The difficulties
Hiring audience research firms can be expensive, and qualitative methods are often needed to give more depth to the findings.

2. Ethnographic action research

The application
This method can give a rich overall picture of how people respond to ICD programmes, and leaves room for the unintended and unexpected. For more information, see the user's handbook online at http://cirac.qut.edu.au/ictpr/downloads/handbook.pdf
The difficulties
Ethnographic research is usually very time-consuming, because it takes place over several
months - or even years. It is not a method suited to evaluating one-off behaviour-change
campaigns.
3. Outcome mapping
The application
Outcome mapping offers an evaluation alternative for projects whose achievements are difficult to measure using traditional quantitative methods. For more information, see a brochure at http://web.idrc.ca/en/ev-64698-201-1-DO_TOPIC.html

The difficulties
Because outcome mapping is a relatively new method, it is still a work in progress. It will clearly not be appropriate where quantitative proof of impact is required.

4. Participatory monitoring and evaluation (PM&E)

The approach
Participatory monitoring and evaluation (PM&E) is a term that covers any process that allows all stakeholders - particularly the target audience - to take part in the design of a project, its ongoing assessment and the response to findings. It gives stakeholders the chance to help define a programme's key messages and set success indicators, and provides them with tools to measure success. These usually include Participatory Rural Appraisal (PRA) tools - such as mapping, problem-ranking and seasonal calendars - as well as surveys, oral testimonies and in-depth interviews.
There are four key principles to keep in mind with this approach.
For information about involving your audience in defining your messages, see Designing
messages for development communication: an audience participation based approach,
by Bella Mody (1991).
In projects that are not about messages, but more about enhancing communication itself
or about fostering social change, you can apply Participatory Ethnographic Evaluation
and Research (PEER). PEER is a rapid approach to programme design, monitoring,
evaluation, and research. It has been used in a range of cultural contexts, notably in
HIV/AIDS programmes. You will find more information on PEER at
www.mande.co.uk/docs/PEER%20flyer%20Options%20May%2004.pdf
Many communication programmes are structured around what is known as the P process
(for more information, visit www.hcpartnership.org/Publications/P-Process.pdf). This
model has M&E at its heart, since ongoing evaluation is essential to shaping and
improving messages and the communication process itself.
The application
PM&E adds value to programme design and contents. For example, radio listeners can
not only provide broadcasters with feedback about radio programmes, but they can
actually make programmes themselves in response to issues discussed on-air.
The difficulties
PM&E can be time-consuming and requires staff to be trained as facilitators.
Section 5
MEASURING IMPACTS
AND OUTCOMES
The following methods can be used throughout the project cycle, but are particularly suitable for end-of-programme research.
1. Experimental methods
The approach
After only
With this approach, the only research that takes place is carried out when a programme finishes - assessing a population's knowledge, behaviour or health status, for example. For findings to be valid, they must be compared to an external standard.
Before-and-after
You can collect baseline data before or during a programme and then compare it to post-completion research (using the same indicators) to note changes or variations. The weakness of this method is that it cannot indicate whether changes are due to your programme or another influencing factor.

Time series
This tracks behaviour over time, normally at one given location or with a given group, comparing pre- and post-programme. It allows you to be more certain that changes are due to your programme.
The application
Experimental methods are useful when you need to show how a programme has affected
behaviour. All four methods involve some kind of data collection on key indicators such as
KABP. They often involve a mixture of quantitative surveying and qualitative interviewing,
and tend to work best when evaluating campaigns with a specific aim (such as improving
awareness of an issue by a given percentage).
JHU CCP has carried out many experimental or semi-experimental studies of ICD programmes - usually campaign-type programmes with an individual behaviour-change focus. For examples, see Entertainment-education and HIV/AIDS prevention: a field experiment in Tanzania by P.W. Vaughan and E.M. Rogers (2000).
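To make the comparison these designs rest on concrete, here is a minimal sketch (not from the guidelines) of testing a before-and-after shift on a single KABP indicator. The counts are invented, and the Python scipy library is assumed to be available.

    from scipy.stats import chi2_contingency

    # Invented survey counts: rows are survey rounds,
    # columns are [aware of message, not aware].
    table = [
        [120, 280],  # baseline: 30% aware
        [190, 210],  # endline: 47.5% aware
    ]

    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, p = {p_value:.4f}")
    # A small p-value suggests the shift is unlikely to be chance alone,
    # though, as noted above, not that the programme caused it.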
The difficulties
All but the first approach outlined above are technically demanding and can be
expensive. You should also bear in mind the problems experienced with KABP-based
approaches (see Section 3, method 1).
The World Bank's strategic communication toolkit recommends seven indicators for measuring the outcomes of communication activities, including the percentage of the audience who express knowledge, attitudes and beliefs consistent with the message. The authors note that the most crucial of these indicators are extremely difficult to measure. For example, respondents might claim to have better skills than they actually have, but verification by observation may be almost impossible (consider the difficulty of checking correct condom use). Behaviour change might also take a long time to show and may not be sustained over time.
(Source: Strategic communication for development projects, C. Cabanera-Verzosa, 1999.)

The Consortium for Social Change and other organisations such as the Communication Initiative are in the process of defining indicators to measure communication for social change. These include:

increased accuracy of the information that people share in dialogue and debate
the means available that enable people and communities to feed their voices into debate and dialogue
links between people and groups with similar interests who might otherwise not be in contact
IREX's Media Sustainability Index (MSI) assesses media systems against five objectives:

1. Legal and social norms protect and promote free speech and access to public information
2. Journalism meets professional standards of quality
3. Multiple news sources provide citizens with reliable and objective news
4. Independent media are well-managed businesses, allowing editorial independence
5. Supporting institutions function in the professional interests of independent media

For a full copy of the MSI 2003, covering Southeast Europe and Eurasia, visit www.irex.org/msi/2003/MSI03-intro.pdf
2. Most significant change

The approach
This is a participative method that aims to draw meaning from actual events, rather than
being based on indicators. The method involves collecting stories from stakeholders about
what they think is the most significant change a project has brought about. These stories
are then analysed, discussed and verified.
The application
This method has the advantage of capturing the unexpected, and it also helps to identify why change happens. For more detail, see www.healthcomms.org/comms/eval/le02.html and the MandE News website (www.mande.co.uk/), a news service focusing on developments in monitoring and evaluation methods relevant to development projects and programmes with social development objectives.
The difficulties
It is a wholly qualitative approach, and is therefore unsuitable if you need quantitative data to prove a programme's impacts.
3. Participatory evaluation
The approach
See Section 4, method 4 (PM&E).
The Radio Authority (now Ofcom) has developed a tool for measuring the social impact of
community radio stations. Before they start transmitting, radio station managers have to list
the key benefits their services are intended to bring to the community. They also have to
compile information about the services that already exist locally. This enables every
project to measure the value it adds to the community over time. Criteria include:
providing training and work experience (for example: training youth volunteers at
the station)
contributing to local social inclusion objectives (for example: reporting on the work
of local voluntary groups)
contributing to local education (for example: forging links with schools and colleges)
giving local people access to the station (for example: providing disabled access
and on-site child-care for volunteer presenters)
The full report on the initiative, New voices: an evaluation of 15 access radio projects
can be found at www.ofcom.org.uk/radio/ifi/rl/commun_radio/new_voices.pdf
The application
Participatory evaluation allows target audiences to measure a programme's success
against the parameters they set themselves. Applying participatory methods to
communications work can help avoid the problems that outsider-led methods might create.
The difficulties
Clearly, this method is not suitable if impact has to be measured objectively. It is also time-intensive, and project staff often have to receive extra training as facilitators.
Section 6
Various computer programmes are available for analysing survey information, the best known being SPSS and Epi Info.
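The kind of tabulation these packages automate can be sketched in a few lines of Python (a toy illustration with invented records, assuming the pandas library; it is no substitute for a full statistics package):

    import pandas as pd

    # Invented survey records for illustration only.
    survey = pd.DataFrame({
        "sex": ["f", "m", "f", "f", "m", "m"],
        "heard_broadcast": [True, False, True, True, True, False],
    })

    # Cross-tabulate exposure to the broadcast by sex, as row percentages.
    print(pd.crosstab(survey["sex"], survey["heard_broadcast"], normalize="index"))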
In-depth interviewing
Observation
Observation is one of the most important and widely used methods for formative
appraisal, monitoring and validating findings. Participatory or ethnographic observation is
a variant that is usually lengthy and requires the researcher to be totally immersed in the
environment being examined. In some cases, observation is done against a checklist of
'correct' behaviours (for example: observing good counselling practice) and can
sometimes be done by a 'hidden' researcher to observe more natural behaviour. It is
usually used in conjunction with other research methods.
Both in-depth interviewing and focus groups can be organised using computer
programmes such as ANTHROPAC, NUD*ist and ETHNOGRAPH. These packages
facilitate the organisation of large amounts of information and help find patterns in the
results by identifying themes, points of agreement or disagreement within groups and
topics that have been discussed most.
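At their simplest, these packages count and compare the codes researchers attach to transcripts. A minimal sketch of that idea (the theme tags are invented):

    from collections import Counter

    # Each interview transcript is represented by the list of theme
    # tags a researcher has applied to it (invented tags).
    coded_interviews = [
        ["access", "trust", "cost"],
        ["trust", "language"],
        ["cost", "trust", "access"],
    ]

    themes = Counter(tag for interview in coded_interviews for tag in interview)
    for theme, count in themes.most_common():
        print(f"{theme}: coded {count} times")  # most-discussed topics first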
Key informant interviews
This tool targets people who are judged to have extensive experience and knowledge - often community or organisation leaders. As with in-depth interviews, the interviewer must gain the confidence of the interviewees, so that they are more prepared to share their experience, insights and deeply held beliefs.

Pre-testing
Pre-testing means trying out draft materials and messages on a sample of the target audience before full production or dissemination. One widely used checklist asks seven questions - the 7 Cs:

1. Does the product Command attention?
2. Does it Clarify the message?
3. Does it Communicate a benefit?
4. Is it Consistent?
5. Does the product Cater or appeal to the audience's heart and mind?
6. Does it Create trust?
7. Does it Call the audience to action?
A useful instrument for testing audio-visual messages is the 7 Cs assessment table. This
device assigns scores to qualitative findings, as in the following example:
For the question Does the project Command attention? add 20 points if the
audience pays attention all the time; subtract ten points if the audience gets lost
during the production... and so on.
The full instrument can be found in Toolkit for development of evaluation strategies for
radio producers by L.E. Porras (1998).
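To make the scoring mechanics concrete, here is a toy version of the 'Command attention' item. The point values follow the example above, but everything else is an invented placeholder - the real instrument is in Porras (1998).

    def score_command_attention(finding):
        """Toy scorer for 'Does the product Command attention?'."""
        if finding == "attention throughout":
            return 20    # audience pays attention all the time
        if finding == "audience gets lost":
            return -10   # audience gets lost during the production
        return 0         # intermediate findings sit between the extremes

    print(score_command_attention("attention throughout"))  # 20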
Other tools for collecting data include mapping, preference ranking, problem tree or causal diagrams and visual story-boards. Many of these can be adapted to help measure the impact of communications at a community level. You can find a comprehensive guide in the Participatory development tool kit by D. Narayan and L. Srinivasan (1994).
Tracking or tracer studies normally involve disseminating messages and then asking
questions about them. Responses are compared to those for control questions, for which
information was not disseminated. These studies only work in very controlled
circumstances, with well-defined messages. They also tend to work only in areas where
there are few alternative sources of information (so that the source in question can be
assumed to be the main source of information for the target population).
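The underlying comparison is simple; a sketch with invented figures:

    # Invented tracer-study results: recall of questions about the
    # disseminated message versus control questions (not disseminated).
    message_correct, message_asked = 62, 100
    control_correct, control_asked = 18, 100

    difference = message_correct / message_asked - control_correct / control_asked
    print(f"recall difference: {difference:.0%}")
    # The 44-point gap is attributed to the dissemination only because,
    # as noted above, alternative information sources are scarce.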
Keeping logs
Keeping logs, journals and documenting letters and other feedback may seem obvious,
but they are all extremely important monitoring procedures that are often overlooked.
Transcripts of broadcasts must be made and stored carefully, web-hits must be recorded,
as must notes on every activity such as training events and workshops, press-coverage,
and any informal feedback received. You will find a checklist of regular documentation
for broadcast projects in the Monitoring and evaluation manual by K. Warnock (2002).
Delphic surveys
These are sometimes used to identify trends and predict future developments in a given
field (for example: how telecommunications are likely to spread in rural areas). They use
a panel of carefully selected experts, who answer a series of questionnaires. Each series
is analysed, and the tool is revised to reflect the responses of the group. Then a new
questionnaire is prepared that includes the revised material, and the process is repeated
until a consensus is reached.
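One way to operationalise 'until a consensus is reached' is to stop when the spread of the panel's answers falls below a threshold. A minimal sketch (the estimates and the stopping rule are illustrative assumptions, not part of the guidelines):

    import statistics

    def consensus_reached(estimates, tolerance=1.0):
        """Stop the Delphi rounds when the interquartile range of the
        panel's estimates falls within the tolerance."""
        q1, _, q3 = statistics.quantiles(estimates, n=4)
        return (q3 - q1) <= tolerance

    round_1 = [3, 10, 5, 8, 15, 4]   # invented expert estimates (years)
    round_2 = [5, 6, 5, 6, 6, 5]     # revised after group feedback
    print(consensus_reached(round_1), consensus_reached(round_2))  # False True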
Section 7
PANOS
Panos has published a Toolkit for development monitoring and evaluation (K. Warnock,
Panos London, 2002) that concentrates on communications and media-strengthening
projects. A work in progress, it contains useful advice about conducting content
analysis, working with listening/viewing groups, interviewing community groups and
audience surveys.
www.developmentgateway.org/node/317776/
HEALTH E COMMUNICATIONS
Has a section devoted to evaluation, and a digest of several different research and
evaluation examples and methodologies, with links to the full reports or books.
www.comminit.com/healthecomm/research.php
GENDER-RELATED INDICATORS
Information on assessing the gender sensitivity of ICT programmes can be found at:
www.comminit.com/steval/sld-8650.html
LEAP IMPACT
Aims to improve the institutional performance of monitoring and evaluation practice
related to information services, information products and information projects. It is open to
all individuals and organisations interested in the evaluation of information.
www.dgroups.org/groups/leap/impact/index.cfm
HEALTH-RELATED INDICATORS
A link to UNICEF's evaluation indicators for health communication. They are quite
basic, but they take the reader through a set of useful questions for different types of
health projects.
www.comminit.com/evalindicators/sld-2380.html
Further reading
Evaluation framework for ICT pilot projects
Batchelor, S. and P. Norrish, 2004
See www.infodev.org
www.synergyaids.com/documents/HIVPreventionProj_NGOEval.pdf
HIGH-END ICTS
For an interesting collection of case studies that looks at high-end ICTs see Making a
difference: measuring the impact of information on development edited by Paul
McConnell (the International Development Research Centre, 1995).
A copy is available online at:
http://web.idrc.ca/es/ev-9372-201-1-DO_TOPIC.html
Learning from change: issues and experiences in participatory monitoring and evaluation
Estrella, M. (ed.), 2000
London: Intermediate Technology Publications
Perceptions about the gacaca law in Rwanda: evidence from a multi-method study
Gabisirege, S. and S. Babalola, 2001
Special Publication 19, JHU CCP
Behaviour and beyond: an evaluation perspective in Involving People Evolving Behaviour
Manoncourt, E. and D. Webb, 2000
Social survey methods, A fieldguide for development workers, Development Guidelines no. 6
Nichols, P., 1991
Oxford: Oxfam
Toolkit for development of evaluation strategies for radio producers in Media in development:
towards a toolkit for communication monitoring and impact assessment methodologies
Porras, L. E., 1998
Participatory tools and techniques: a resource kit for participation and social assessment
Rietbergen-McCracken, J. and D. Narayan, 1998
Washington DC: World Bank
Appendix

ACKNOWLEDGEMENTS

These guidelines were written by Mary Myers, with the support of Nicola Woods and Sina Odugbemi of the ICD team, DFID. Valuable inputs and insights have been contributed by Aquarium Writers Ltd, Gordon Adam, Simon Batchelor, Simon Davison, Nick Ishmael-Perkins, Kate Lloyd-Morgan, Tag McEntegart, Pat Norrish, Francis Rolt, Andrew Skuse, and Peter Vaughan.