
MONITORING AND EVALUATING INFORMATION AND COMMUNICATION FOR DEVELOPMENT (ICD) PROGRAMMES

GUIDELINES
MARCH 2005

Contents

Foreword: About these guidelines
Section 1: Before You Start
Section 2: Planning and Budgeting
Section 3: Formative Appraisal
Section 4: Process Evaluation
Section 5: Measuring Impacts and Outcomes
Section 6: The Tools of Good Practice
Section 7: Useful Websites and Further Reading
Appendix

Foreword

ABOUT THESE GUIDELINES

Who are they for?
If you work for the Department for International Development (DFID) and need advice on monitoring and evaluating Information and Communication for Development (ICD) programmes, these guidelines are for you. They don't provide a set of rules, but they do introduce a range of approaches for you to choose from at various stages in your programme. Where possible, we signpost you to sources of further information. You can use the guidelines as a reference tool or to help you work with consultants.

What ICD programmes do the guidelines apply to?
DFID mainly uses ICD programmes that:
1. support media and information and communication technologies (ICTs) as ends in themselves
2. use media and ICTs to add value to development sectors and programmes

The guidelines apply to activities such as:
face-to-face communication or information activities such as counselling or extension visits
TV, radio, film and video
community-level communications such as theatre, role-playing, workshops, posters and other print materials
Internet and email communications programmes
telecommunications-based projects

Using the guidelines
Guidance is structured around the programme cycle:
Section 1 - things to think about before you start
Section 2 - planning and budgeting
Section 3 - monitoring and evaluation at the start of your programme
Section 4 - methods for ongoing monitoring and evaluation
Section 5 - measuring impacts and outcomes at the end of your programme
Section 6 - the tools of good practice
Section 7 - sources of further information

Acronyms used in the guidelines

FGD      Focus Group Discussion
ICD      Information and Communication for Development
ICT      Information and Communication Technology
IDRC     International Development Research Centre (Canada)
JHU CCP  Johns Hopkins University Center for Communication Programs (USA)
KABP     Knowledge, Attitudes, Behaviour and Practice
KAP      Knowledge, Attitudes and Practice
PEER     Participatory Ethnographic Evaluation and Research
PM&E     Participatory Monitoring and Evaluation
PRA      Participatory Rural Appraisal
PRCA     Participatory Rural Communication Appraisal
RAP      Rapid Assessment Procedures

Section 1

BEFORE YOU START

Why monitor and evaluate ICD programmes?
DFID policy says we should monitor and evaluate our communications to:
demonstrate good management;
learn lessons for future projects; and
show that we are accountable for our work.
But you should bear in mind that it is difficult to evaluate human behaviour and social processes, so there are no ready-made ways of measuring the success of ICD projects.

Practical difficulties in evaluating ICD programmes
It is difficult to define a specific target audience for initiatives that have an effect over a wide area (for example: broadcast campaigns)
In some sectors (like farming), change happens slowly, so it is hard to measure impact over a short period
It is not always clear that an ICD programme, rather than political, social or economic factors, has been responsible for change
Some communications goals (good governance, social gain, empowerment) are difficult to measure objectively or put a value on
If developing-world audiences have little media choice, it can be hard to find out their opinions on the quality of ICD programmes
It is difficult to evaluate communications in highly politicised areas or places of conflict
Finally, the fast-changing nature of new technologies makes it difficult to measure their impact

Problems behind the theory
Communications initiatives can be divided into two approaches, each with its own problems:

1. Behaviour-change initiatives
The behaviour-change approach uses targeted messages to change an individual's behaviour. Its main problem is that human behaviour isn't always a logical response to a held belief, so the indicators we use to measure change might be fundamentally flawed.

2. Social-change initiatives
Some initiatives try to inspire social change by giving people information to use however they like, perhaps to inspire community dialogue or collective action. The main problem with evaluating social change is that it is often too fluid, long-term and intangible to measure.

So you should be aware that monitoring and evaluation processes rely on personal judgement as well as theory. Bear in mind that there is no single, best evaluation method.

Section 2

PLANNING AND BUDGETING

Three questions to start with
1. What will your monitoring and evaluation activity focus on?
   - Formative appraisal at the start of your programme (see Section 3)
   - Ongoing processes (see Section 4)
   - Impact and outcomes at the end of your programme (see Section 5)
   - All or a combination of these?
2. What is it you want to find out?
3. Are all stakeholders aware of what questions need to be asked?

Will you use quantitative or qualitative methods?
Monitoring and evaluation methods are either quantitative or qualitative, but you can use a combination of the two approaches. Many people use quantitative methods to define audience characteristics and to analyse statistical findings. Then they add depth and texture using qualitative methods, which answer 'how' and 'why' questions using a section of the target audience.

Choosing a suitable methodology

Is experimental research right for your programme?
Experimental research uses scientific tests to show how effective communications are. This normally involves before-and-after surveys or treatment-and-control groups with randomly chosen respondents.
It is almost impossible to do experimental research without baseline data, and this usually determines your approach. But the reality is that many ICD projects start without baseline data, so alternative research methods are sometimes more appropriate.

Example: Multi-method evaluation of community telecentres in Africa
In 2000-01, the International Development Research Centre (IDRC) commissioned a series of studies of community telecentres in Africa. It used both qualitative and quantitative approaches to collect data from actual and potential telecentre users, combining:
focus groups
document analysis
usage and site observation
exit polls
questionnaires
interviews
(Ref. Etta and Parvyn-Wamahiu 2003)

How will you measure success?
You will need to decide what indicators to use to measure your programme's success. If you are considering participatory research methods, it is a good idea to involve your target audience in deciding how success will be measured.


Setting a budget
Opinions vary as to what percentage of your budget should be dedicated to evaluation. A rough rule of thumb is between 10 and 15 per cent.
With small projects and those with short time-frames, evaluation might take up a larger part of the budget. This might also be the case with pilot projects, which try to determine how successful a programme will be if it is rolled out at a later date. In cases like this, evaluation costs might take up 30 per cent or more of the total budget.
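To make the arithmetic concrete, here is a minimal Python sketch of these rules of thumb. The percentages are the rough guides quoted above, not fixed DFID rates, and the GBP 200,000 programme is invented for illustration.

# Illustrative arithmetic only: the 10-15 per cent rule of thumb and the
# 30 per cent figure for pilots are rough guides, not mandated rates.
def evaluation_budget(total_budget, is_pilot=False):
    """Return a (low, high) range for the evaluation share of a budget."""
    if is_pilot:
        # Pilot projects: evaluation may take 30 per cent or more,
        # so there is no meaningful upper bound.
        return (0.30 * total_budget, None)
    return (0.10 * total_budget, 0.15 * total_budget)

low, high = evaluation_budget(200_000)   # hypothetical GBP 200,000 programme
print(f"Evaluation budget: {low:,.0f} to {high:,.0f}")   # 20,000 to 30,000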

Questions to ask at the planning stage
Who is being evaluated (individuals, groups, social networks, organisations)?
What kind of sample will be used (random, stratified, cluster)?
What type of data will be collected? Who will it be collected from? And how will it be collected (surveys, focus groups)?
What types of evaluation methods will be used (before-and-after, time-series)?
How will the data collected be analysed (statistical analysis, content analysis)?
How will it be ensured that the findings are valid?
How will the results be shared with others?
What is the proposed time frame?
What logistical and administrative arrangements do you/the contractor need to make?
How will ethical issues and confidentiality be handled?
What types and levels of resource are needed (personnel, supplies, cash)?

(Taken from Behaviour and beyond: an evaluation perspective, Manoncourt and Webb, 2000)

Who should carry out the work?
In the past, expatriates have played a significant role in conducting research. But it is now considered good practice to use local resources, where possible, and to train and employ local assessors.
With any external evaluator, whether expatriate or local, it is a good idea to ask for an evaluation framework, which should include:
a basic evaluation design (will it be descriptive, experimental, participatory, ethnographic?)
a timeframe
data collection methods
an analytical framework
an outline of what resources will be needed

It is also worth bearing in mind DFID's five measures of evaluation quality: utility, accuracy, independence, credibility and propriety.

Section 3

FORMATIVE APPRAISAL
This section outlines methods for carrying out research at the start of your project.

1. Measuring KABP

The approach
KABP (sometimes reduced to KAP) is an acronym that stands for knowledge, attitudes, behaviour and practice. Research that measures KABP is based on the assumption that a person's knowledge influences their attitude, which in turn influences their behaviour. It usually involves written, standardised questionnaires composed of yes/no questions.

Example: Typical KABP questions
Typical KABP questions in an HIV/AIDS survey might be:
Knowledge: Do you know how HIV/AIDS is transmitted?
Attitude: Would you share a meal with someone who is HIV positive?
Behaviour/practice: Did you use a condom at your last sexual encounter?

The application
KABP surveys are useful for finding out what your target audience already knows and does. They can give an insight into a large group of people in a short time frame, and are particularly useful if you plan to paint a before-and-after picture of a programme's success. Data has statistical significance if you randomly select your interviewees, and it can be used as a baseline against which to measure findings at the end of your project. Most KABP surveys need to be supplemented by qualitative research. This combined approach provides valuable information for developing messages for campaign-type programmes.

The difficulties
Human behaviour doesn't always follow a logical progression: knowledge of an issue doesn't always result in a change of attitude and behaviour, and community values can override individual interests. So sometimes collective or institutional changes are necessary before individuals can be targeted effectively.
Other things to be aware of:
People might lie on questionnaires, particularly if they've been asked about sensitive or sexual matters
People can distort what other people think or do
Using closed, predetermined, inflexible questions can mean you miss out on vital information
People are generally suspicious of surveys
Your target audience might be experiencing so-called 'questionnaire fatigue'

Example: Research for an information campaign on Rwanda's gacaca process
In 2001, research was carried out into how the Rwandan public viewed the proposed gacaca process, which aimed to bring genocide suspects before community courts. Researchers used a mixture of quantitative survey and qualitative focus group methods to gauge public opinion. Findings were used to inform an awareness-raising campaign, providing valuable information on what media should be used and what the key messages should be. The study also provided a baseline against which the success of the project could be measured.
More information can be found at www.jhuccp.org/pubs/sp/19/English/ch1.shtml
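As a minimal sketch of how coded KABP responses become baseline figures, the Python below tabulates invented yes/no answers into percentages. In practice you would use a statistics package such as SPSS or Epi Info (see Section 6) and report the sampling details alongside the results.

# Sketch: turning coded yes/no KABP responses into baseline percentages.
# The respondents and question keys below are invented for illustration.
respondents = [
    {"knowledge": 1, "attitude": 0, "practice": 0},   # 1 = yes, 0 = no
    {"knowledge": 1, "attitude": 1, "practice": 0},
    {"knowledge": 0, "attitude": 0, "practice": 0},
    {"knowledge": 1, "attitude": 1, "practice": 1},
]

def indicator(responses, question):
    """Percentage of respondents answering 'yes' to a coded question."""
    return 100 * sum(r[question] for r in responses) / len(responses)

baseline = {q: indicator(respondents, q)
            for q in ("knowledge", "attitude", "practice")}
print(baseline)   # {'knowledge': 75.0, 'attitude': 50.0, 'practice': 25.0}

Figures like these, recorded with their sampling details, form the baseline against which end-of-project findings can be compared.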


2. Rapid assessment procedures (RAP)

The approach
RAP offers a qualitative alternative to measuring KABP. In RAP, the researcher gets an insight into a cultural belief system through a continual process of forming questions and generating ideas, based on information collected from a few key local informants.
For more information, visit www.unu.edu/unupress/food2/uin08e/uin08e00.htm
A similar method is rapid ethnography, which uses a variety of methods to capture rich data when there are substantial time constraints. A relatively new tool, it can be used at the design stages of communication projects. For an example of how it was used in a programme aimed at improving awareness of HIV and AIDS, see www.comminit.com/healthecomm/research.php?showdetails=137

The application
RAP can be used at the start of a project or while it is running (to help you make adjustments to your work as your programme develops).

The difficulties
This approach results in detailed, qualitative information, but it cannot provide the baseline for an experimental design, as the sample isn't large or random enough to stand up to statistical scrutiny.

3. Participatory Rural Communication Appraisal (PRCA)

The approach
PRCA is an example of a participatory research method: it includes rural people in the formation of communication strategies. Pioneered by the SADC Regional Centre of Communication for Development, in Zimbabwe, it uses visualisation techniques, interviews and group work to generate information that can be used when creating communication strategies, materials, media and key messages.

The application
Because it involves the target audience in decision-making, PRCA can ensure relevance and ownership by the people involved. It can lead to joint planning of communication programmes, instead of the traditional approach in which professionals plan communication interventions without input from the community.

The difficulties
PRCA can be time-consuming and cannot be used as part of an experimental enquiry.
For more information, visit the SADC's website: www.sadc-fanr.org.zw/sccd/sadc%20ccd%20profile.htm

Section 4

PROCESS EVALUATION
You can use the research methods outlined in this section while your project is ongoing.

1. Market-style audience research

The approach
Classic audience research uses quantitative surveys to obtain data on audience numbers, characteristics and preferences. Most cases use well-established tools of market research and involve large samples.
Two practical guides to audience research are:
Know your audience by Dennis List (2001), available as an online book at www.audiencedialogue.org/kya.html
Handbook on radio and television audience research by Graham Mytton (1999), available to order online at www.audiencedialogue.org/books-mytton.html

The application
Audience research is one of the basics for monitoring communications programmes: it is often essential for understanding audience size, distribution and preferences. It is especially useful in message-based or campaign-type situations.

The difficulties
Hiring audience research firms can be expensive, and qualitative methods are often needed to give more depth to the findings.

2. Ethnographic action research

The approach
Ethnographic action research was developed at the London School of Economics, in conjunction with UNESCO, specifically to look at how mass media and ICTs work within local social networks. It is based on the concept of communicative ecologies, meaning the complete range of communication media and information flows existing within a community. It involves training local researchers to use in-depth interviews, participant observation, diaries and surveys to uncover the structures and experience of poverty and media use in their community.
The idea of looking at information flow in a given community is not a new one. For example, techniques for analysing knowledge and information systems have long been used in agriculture and natural resources. One such tool is the RAAKS resource box, about which you can find information at www.iac.wur.nl/ppme/content.php?ID=394&IDsub=572

The application
This method can give a rich overall picture of how people respond to ICD programmes, and leaves room for the unintended and unexpected. For more information, see the users' handbook online at http://cirac.qut.edu.au/ictpr/downloads/handbook.pdf

The difficulties
Ethnographic research is usually very time-consuming, because it takes place over several months, or even years. It is not a method suited to evaluating one-off behaviour-change campaigns.


3. Outcome mapping

The approach
Outcome mapping challenges more traditional approaches to monitoring and evaluation. Although it can be used at all stages of the project cycle, it usually takes place while a project is ongoing. It moves away from assessing a programme's developmental impacts (such as policy relevance, poverty alleviation, or reduced conflict) toward changes in a target audience's behaviours, relationships or activities.

The application
Outcome mapping offers an evaluation alternative for projects where achievements are difficult to measure using traditional quantitative methods. For more information, see the brochure at http://web.idrc.ca/en/ev-64698-201-1-DO_TOPIC.html

The difficulties
Because it is a relatively new method, it is still work in progress. It clearly will not be appropriate where quantitative proof of impact is required.

4. Participatory monitoring and evaluation

The approach
Participatory monitoring and evaluation (PM&E) is a term that covers any process that allows all stakeholders, particularly the target audience, to take part in the design of a project, its ongoing assessment and the response to findings. It gives stakeholders the chance to help define a programme's key messages and set success indicators, and provides them with tools to measure success. These usually include Participatory Rural Appraisal (PRA) tools, such as mapping, problem-ranking and seasonal calendars, as well as surveys, oral testimonies and in-depth interviews.
There are four key principles to keep in mind with this approach:
1. Local people are active participants, not just sources of information.
2. Stakeholders evaluate, outsiders facilitate.
3. The focus is on building stakeholders' capacity for analysis and problem-solving.
4. The process should build commitment to implementing recommended corrective actions.
For information about involving your audience in defining your messages, see Designing messages for development communication: an audience participation based approach, by Bella Mody (1991).

In projects that are not about messages, but more about enhancing communication itself or about fostering social change, you can apply Participatory Ethnographic Evaluation and Research (PEER). PEER is a rapid approach to programme design, monitoring, evaluation and research. It has been used in a range of cultural contexts, notably in HIV/AIDS programmes. You will find more information on PEER at www.mande.co.uk/docs/PEER%20flyer%20Options%20May%2004.pdf
Many communication programmes are structured around what is known as the P process (for more information, visit www.hcpartnership.org/Publications/P-Process.pdf). This model has M&E at its heart, since ongoing evaluation is essential to shaping and improving messages and the communication process itself.

The application
PM&E adds value to programme design and contents. For example, radio listeners can not only provide broadcasters with feedback about radio programmes, but can actually make programmes themselves in response to issues discussed on air.

The difficulties
PM&E can be time-consuming and requires staff to be trained as facilitators.

Example: Continuous PM&E to improve an Afghan educational soap opera
When the long-running BBC radio soap opera New Home, New Life for Afghan listeners started in 1993, PM&E was built into the production cycle. A three-person evaluation team was hired to obtain listener feedback (in focus groups or one-to-one) on planned story lines and to evaluate the impact of past episodes, which initially took the form of before-and-after tests to check knowledge of issues highlighted by the drama. Later, PRA methods such as health walks and seasonal calendars were used to determine the priorities of different groups. The findings informed future story lines in the soap.
(Ref. G. Adam 2004, personal communication)

Section 5

MEASURING IMPACTS AND OUTCOMES
The following methods can be used throughout the project cycle, but are particularly suitable for end-of-programme research.

1. Experimental impact studies

The approach

After only
With this approach, the only research that takes place is carried out when a programme finishes. This could be assessing a population's knowledge, behaviour or health status, for example. For findings to be valid, they must be compared to an external standard, such as:
national or international goals
historical trends or patterns
precedents in the target geographical region

Before-and-after
You can collect baseline data before or during a programme and then compare it to post-completion research (using the same indicators) to note changes or variations. The weakness of this method is that it cannot indicate whether changes are due to your programme or another influencing factor.

Before-and-after with comparison groups
This approach is similar to the before-and-after method above, but it involves comparing groups, only one of which is exposed to the intervention; the other acts as a control group. It is important to match the groups as closely as possible in all other respects (age, sex, socio-economic characteristics), so that any changes in the target audience can be more confidently attributed to the intervention and not to other factors.

Time series
This tracks behaviour over time, normally at one given location or with a given group, comparing pre- and post-programme. Again, it allows you to be more certain that changes are due to your programme.

The application
Experimental methods are useful when you need to show how a programme has affected behaviour. All four methods involve some kind of data collection on key indicators such as KABP. They often involve a mixture of quantitative surveying and qualitative interviewing, and tend to work best when evaluating campaigns with a specific aim (such as improving awareness of an issue by a given percentage).
The JHU CCP has carried out many experimental or semi-experimental studies of ICD programmes, usually in campaign-type programmes with an individual behaviour-change focus. For examples, see Entertainment-education and HIV/AIDS prevention: A field experiment in Tanzania by P.W. Vaughan and E.M. Rogers (2000).

The difficulties
All but the first approach outlined above are technically demanding and can be expensive. You should also bear in mind the problems experienced with KABP-based approaches (see Section 3, method 1).
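As a hedged sketch of the statistical step behind these designs, the Python below runs a two-proportion z-test on a single endline indicator for an intervention group and a matched comparison group. All counts are invented, and a real study would also compare baselines and calculate sample sizes in advance.

# Sketch: two-proportion z-test comparing an endline indicator between an
# intervention group and a comparison group. All counts are invented.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(yes_a, n_a, yes_b, n_b):
    """Z-test for the difference between two independent proportions."""
    p_a, p_b = yes_a / n_a, yes_b / n_b
    pooled = (yes_a + yes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided test
    return p_a, p_b, z, p_value

# Endline: 168 of 400 respondents in the intervention area report the
# recommended practice, against 120 of 400 in the comparison area.
p_a, p_b, z, p = two_proportion_z(168, 400, 120, 400)
print(f"intervention {p_a:.0%}, comparison {p_b:.0%}, z = {z:.2f}, p = {p:.4f}")

A difference like this would normally be read alongside the baseline figures for both groups, so that pre-existing differences are not attributed to the programme.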


Example: Measuring the success of message-based communications
The World Bank's strategic communication toolkit recommends the following indicators for measuring the outcomes of communication activities:
1. Number of communications produced, by type, during the reference period
2. Number of communications disseminated, by type, during the reference period
3. Percentage of target audience who correctly comprehend a given message
4. Percentage who express knowledge, attitudes and beliefs consistent with the message
5. Percentage who acquire the skills recommended by the message
6. Percentage who discuss the message with others, by type of person
7. Percentage who engage in recommended practices.
The authors note that the most crucial of these indicators are extremely difficult to measure. For example, respondents might claim to have better skills than they actually have, but verification by observation may be almost impossible (consider the difficulty of checking correct condom use, for example). Behaviour change might also take a long time to show and may not be sustained over time.
(Source: Strategic communication for development projects, C. Cabanero-Verzosa, 1999)

Example: Measuring the success of social change communications
The Consortium for Social Change and other organisations such as the Communication Initiative are in the process of defining indicators to measure communication for social change. These include:
expanded public and private dialogue and debate
increased leadership by, and an agenda-setting role for, disadvantaged people
increased accuracy of the information that people share in the dialogue and debate
the means available that enable people and communities to feed their voices into debate and dialogue
linked people and groups with similar interests who might otherwise not be in contact
For further discussion on monitoring and evaluating social change communication programmes, see Who measures social change? An introduction to participatory monitoring and evaluation of communication for social change by W. Parks (2005).

Example: Measuring the success of media systems
The Media Sustainability Index (MSI) is a tool developed by the International Research and Exchanges Board (with USAID support) to assess the development of independent media systems over time and across countries. The MSI assesses five objectives for shaping a successful media system, each with a series of sub-criteria, scored by an annual panel of experts:
1. Legal and social norms protect and promote free speech and access to public information
2. Journalism meets professional standards of quality
3. Multiple news sources provide citizens with reliable and objective news
4. Independent media are well-managed businesses allowing editorial independence
5. Supporting institutions function in the professional interests of independent media.
For a full copy of the MSI 2003, covering Southeast Europe and Eurasia, visit www.irex.org/msi/2003/MSI03-intro.pdf


2. Most significant change

The approach
This is a participative method that aims to draw meaning from actual events, rather than being based on indicators. The method involves collecting stories from stakeholders about what they think is the most significant change a project has brought about. These stories are then analysed, discussed and verified.

The application
This method has the advantage of capturing the unexpected and also helps to identify why change happens. For more detail, see www.healthcomms.org/comms/eval/le02.html and the MandE news website www.mande.co.uk/, a news service focusing on developments in monitoring and evaluation methods relevant to development projects and programmes with social development objectives.

The difficulties
It is a wholly qualitative approach and is therefore unsuitable if you need quantitative data to prove a programme's impacts.

3. Participatory evaluation

The approach
See Section 4, method 4 (PM&E).

The application
Participatory evaluation allows target audiences to measure a programme's success against the parameters they set themselves. Applying participatory methods to communications work can help avoid the problems that outsider-led methods might create.

The difficulties
Clearly, this method is not suitable if impact has to be measured objectively. It is also time-intensive, and project staff often have to receive extra training as facilitators.

Example: Measuring the social impact of radio in the UK
The Radio Authority (now Ofcom) has developed a tool for measuring the social impact of community radio stations. Before they start transmitting, radio station managers have to list the key benefits their services are intended to bring to the community. They also have to compile information about the services that already exist locally. This enables every project to measure the value it adds to the community over time. Criteria include:
providing training and work experience (for example: training youth volunteers at the station)
contributing to local social inclusion objectives (for example: reporting on the work of local voluntary groups)
contributing to local education (for example: forging links with schools and colleges)
giving local people access to the station (for example: providing disabled access and on-site child-care for volunteer presenters)
having linguistic impact (for example: increasing broadcasts in minority languages)
providing services to neighbourhood or local interest groups (for example: free advertising for groups working with young people who are at risk)
The full report on the initiative, New voices: an evaluation of 15 access radio projects, can be found at www.ofcom.org.uk/radio/ifi/rl/commun_radio/new_voices.pdf

Section 6

THE TOOLS OF GOOD PRACTICE
The following is a basic guide to essential tools for monitoring and evaluating ICD programmes.

Questionnaires and surveys
Questions need to be codable (for example: yes/no answers or ones that allow you to grade responses or opinions). Guidance on devising good questionnaires can be found in the basic social science research literature (see Further reading in Section 7). A basic guide to sampling in both qualitative and quantitative research can be found at www.cpc.unc.edu/measure/publications/pdf/ms-04-10.pdf
Various computer programmes are available for analysing survey information, the best known being SPSS and Epi Info.
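As a minimal illustration of two of the sampling approaches mentioned in Section 2 (simple random and stratified sampling), the Python below uses only the standard library; the sampling frame of 100 households is invented.

# Sketch: drawing simple random and stratified samples from a sampling
# frame. The frame and the district strata are invented for illustration.
import random

frame = [{"id": i, "district": d}
         for i, d in enumerate(["north", "south"] * 50)]   # 100 listed households

random.seed(1)                        # fixed seed so the draw is repeatable

simple = random.sample(frame, k=20)   # simple random sample of 20 households

# Stratified sample: draw 10 households within each district separately,
# so both districts are guaranteed representation.
strata = {}
for unit in frame:
    strata.setdefault(unit["district"], []).append(unit)
stratified = [unit for units in strata.values()
              for unit in random.sample(units, k=10)]

print(len(simple), len(stratified))   # 20 20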

Focus group discussion
Focus group discussion (FGD) is an informal, guided discussion about a particular topic, normally with six to ten people. As a qualitative research technique, FGD can explore topics in some depth and answer 'how' and 'why' questions. Making sense of focus group findings: a systematic participatory analysis approach (de Negri and Thomas, 2003) explores the use of FGDs in development, specifically in the context of a health communication strategy. A copy is available online at www.comminit.com/healthecomm/uploads/making_sense_final.pdf

In-depth interviewing
In-depth interviewing is sometimes called semi-structured or case-study interviewing. It mainly uses open-ended questions and is particularly useful for eliciting responses to pilots and new materials, and for exploring deeply-held beliefs and attitudes.
Both in-depth interviewing and focus groups can be organised using computer programmes such as ANTHROPAC, NUD*IST and ETHNOGRAPH. These packages facilitate the organisation of large amounts of information and help find patterns in the results by identifying themes, points of agreement or disagreement within groups, and the topics that have been discussed most.

Observation
Observation is one of the most important and widely used methods for formative appraisal, monitoring and validating findings. Participatory or ethnographic observation is a variant that is usually lengthy and requires the researcher to be totally immersed in the environment being examined. In some cases, observation is done against a checklist of 'correct' behaviours (for example: observing good counselling practice) and can sometimes be done by a 'hidden' researcher to observe more natural behaviour. It is usually used in conjunction with other research methods.


Pre-testing
Pre-testing is widely accepted as an early and essential part of any communication strategy or campaign, especially those involving messages that have been shaped by anyone other than the target audience. Good practice in pre-testing involves measuring comprehension of the intended message under normal circumstances. In practice, however, pre-tests tend to take place in controlled conditions, with people gathered in groups. They generally examine not only how a message is being understood but also what respondents have to say about the finer details of communications materials. One particularly useful tool for pre-testing is the 7 Cs of communication, shown below.

Example: The 7 Cs of communication
1. Does the project Command attention?
2. Does the project Clearly communicate the intended message?
3. Does it Communicate meaningful benefit?
4. Are the ideas addressed Consistent with each other?
5. Does the product Cater or appeal to the audience's heart and mind?
6. Does the product Create trust?
7. Does the product include a Call to action?

Key informant interviewing
This tool targets people who are judged to have extensive experience and knowledge, often community or organisation leaders. As with in-depth interviews, the interviewer must gain the confidence of the interviewees, so that they are more prepared to share their experience, insights and deeply held beliefs.

Exit polls/intercept interviews
Researchers stop members of the target audience and ask fairly structured questions to gather opinion about a programme, service or product. Respondents are sometimes randomly selected by choosing every nth user or passer-by.

Role-playing, drama and story-telling
These methods can be used to gauge how people respond to sensitive issues that might be best represented through allegory or exaggerated representations of the issue. Stories can be validated by asking repeat questions of the storyteller, comparing one person's account with that of another, and by checking the factual accuracy of stories. You can find an example of how story-telling has been used in research at http://rogharris.org/Using_Stories.pdf

A useful instrument for testing audio-visual messages is the 7 Cs assessment table. This device assigns scores to qualitative findings, as in the following example: for the question 'Does the project Command attention?', add 20 points if the audience pays attention all the time; subtract ten points if the audience gets lost during the production... and so on. The full instrument can be found in Toolkit for development of evaluation strategies for radio producers by L.E. Porras (1998).
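The sketch below shows how an assessment table of this kind might turn qualitative observations into scores in Python. Only the 'Command attention' values come from the example above; the observation labels are invented, and the full scoring rules are in Porras (1998).

# Sketch of 7 Cs-style scoring: qualitative observations are mapped to
# points. Only the +20/-10 'Command attention' values come from the text;
# the labels are invented placeholders for the full Porras instrument.
ATTENTION_SCORES = {
    "attends throughout": +20,   # audience pays attention all the time
    "gets lost": -10,            # audience loses the thread
}

def score_production(observations):
    """Sum the points for a list of observed audience reactions."""
    return sum(ATTENTION_SCORES.get(obs, 0) for obs in observations)

pre_test = ["attends throughout", "gets lost", "attends throughout"]
print(score_production(pre_test))   # 20 - 10 + 20 = 30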


Other participatory tools
Other tools for collecting data include mapping, preference ranking, problem tree or causal diagrams, and visual story-boards. Many of these can be adapted to help measure the impact of communications at a community level. You can find a comprehensive guide in the Participatory development tool kit by N. Deepa and L. Srinivasan (1994).

Tracking or tracer studies
Tracking or tracer studies normally involve disseminating messages and then asking questions about them. Responses are compared to those for control questions, for which information was not disseminated. These studies only work in very controlled circumstances, with well-defined messages. They also tend to work only in areas where there are few alternative sources of information (so that the source in question can be assumed to be the main source of information for the target population).

Keeping logs
Keeping logs, journals and documenting letters and other feedback may seem obvious, but they are all extremely important monitoring procedures that are often overlooked. Transcripts of broadcasts must be made and stored carefully, web hits must be recorded, as must notes on every activity such as training events and workshops, press coverage, and any informal feedback received. You will find a checklist of regular documentation for broadcast projects in the Monitoring and evaluation manual by K. Warnock (2002).

Delphic surveys
These are sometimes used to identify trends and predict future developments in a given field (for example: how telecommunications are likely to spread in rural areas). They use a panel of carefully selected experts, who answer a series of questionnaires. Each series is analysed, and the tool is revised to reflect the responses of the group. Then a new questionnaire is prepared that includes the revised material, and the process is repeated until a consensus is reached.
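The Python sketch below expresses the Delphi loop just described. The stopping rule (a small interquartile range across the panel's ratings) is a common convention rather than one specified in these guidelines, and the revision rule is a toy stand-in for real expert judgement.

# Sketch of the Delphi process: panellists rate a statement, see the group
# summary, and revise, until the ratings converge. The IQR stopping rule
# and the revision rule are illustrative assumptions, not prescribed here.
import statistics

def delphi(initial_ratings, revise, iqr_threshold=1.0, max_rounds=5):
    ratings = list(initial_ratings)
    for round_no in range(1, max_rounds + 1):
        q1, median, q3 = statistics.quantiles(ratings, n=4)
        if q3 - q1 <= iqr_threshold:          # panel has reached consensus
            return round_no, median
        ratings = [revise(r, median) for r in ratings]   # experts revise
    return max_rounds, statistics.median(ratings)

# Toy revision rule: each expert moves halfway towards the group median.
rounds, consensus = delphi([2, 4, 5, 7, 9], lambda r, m: (r + m) / 2)
print(rounds, consensus)   # converges after a few rounds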

Section 7

USEFUL WEBSITES AND FURTHER READING

General monitoring and evaluation

DEVELOPMENT GATEWAY MONITORING AND EVALUATION (ICT PROJECTS)
This website provides resources on monitoring and evaluation, especially for those working on ICT for development.
www.developmentgateway.org/node/317776/

HEALTH E COMMUNICATIONS
Has a section devoted to evaluation, and a digest of several different research and evaluation examples and methodologies, with links to the full reports or books.
www.comminit.com/healthecomm/research.php

LEAP IMPACT
Aims to improve the institutional performance of monitoring and evaluation practice related to information services, information products and information projects. It is open to all individuals and organisations interested in the evaluation of information.
www.dgroups.org/groups/leap/impact/index.cfm

Tools, indicators, guidelines and handbooks

PANOS
Panos has published a Toolkit for development monitoring and evaluation (K. Warnock, Panos London, 2002) that concentrates on communications and media-strengthening projects. A work in progress, it contains useful advice about conducting content analysis, working with listening/viewing groups, interviewing community groups and audience surveys.
Visit www.panos.org.uk for more information.

GENDER-RELATED INDICATORS
Information on assessing the gender sensitivity of ICT programmes can be found at:
www.comminit.com/steval/sld-8650.html

TOOLS FOR EXPERIMENTAL EVALUATION DESIGNS
Primarily for health campaigns, with links through to a wealth of evaluation research experience documented by Johns Hopkins University (CCP).
See www.jhuccp.org/research/

HEALTH-RELATED INDICATORS
A link to UNICEF's evaluation indicators for health communication. They are quite basic, but they take the reader through a set of useful questions for different types of health projects.
www.comminit.com/evalindicators/sld-2380.html

MESSAGE-BASED AND CAMPAIGN-TYPE COMMUNICATIONS
See the Strategic communication for development projects handbook by C. Cabanero-Verzosa (Washington DC: World Bank, 1999), a copy of which is available online at www.worldbank.org/developmentcommunications/Publications/toolkit-web_jan2004.pdf

QUALITATIVE AND QUANTITATIVE METHODS AND ANALYSIS
For a simple introduction to both qualitative and quantitative methods and analysis (including statistical methods such as chi-square analysis and random sampling), see Evaluating HIV/AIDS prevention projects: a manual for NGOs by J. Bertrand and M. Solis (Carolina Population Center: University of North Carolina at Chapel Hill, 2000). A copy is available online at www.synergyaids.com/documents/HIVPreventionProj_NGOEval.pdf

HIGH-END ICTS
For an interesting collection of case studies that looks at high-end ICTs, see Making a difference: measuring the impact of information on development, edited by Paul McConnell (International Development Research Centre, 1995). A copy is available online at http://web.idrc.ca/es/ev-9372-201-1-DO_TOPIC.html

BEHAVIOUR CHANGE AND SOCIAL CHANGE APPROACHES TO COMMUNICATIONS
For a short and clear article setting out the differences between behaviour change and social change approaches to communications, see Communication that works by A. Chetley (Health Exchange: London, 2002). A copy is available online at www.ecdpm.org/Web_ECDPM/Web/Content/Navigation.nsf/index.htm

Further reading

Evaluation framework for ICT pilot projects
Batchelor, S. and P. Norrish, 2004
See www.infodev.org

Strategic communication for development projects
Cabanero-Verzosa, C., 1999
Washington DC: World Bank

Participatory development tool kit
Deepa, N. and L. Srinivasan, 1994
Washington DC: World Bank

Learning from change: issues and experiences in participatory monitoring and evaluation
Estrella, M. (ed), 2000
London: Intermediate Technology Publications

Information and communication technologies for development in Africa: volume 2, the experience with community telecentres
Etta, F. and S. Parvyn-Wamahiu, 2003
Ottawa: CODESRIA/IDRC

Perceptions about the gacaca law in Rwanda: evidence from a multi-method study
Gabisirege, S. and S. Babalola, 2001
Special Publication 19, JHU CCP

Behaviour and beyond: an evaluation perspective, in Involving people, evolving behaviour
Manoncourt, E. and D. Webb, 2000
N. McKee, E. Manoncourt, C.S. Yoon and R. Carnegie (eds), Penang: Southbound/UNICEF

Designing messages for development communication: an audience participation based approach
Mody, B., 1995
New Delhi/Newbury Park/London: Sage

Social survey methods: a fieldguide for development workers, Development Guidelines no. 6
Nichols, P., 1991
Oxford: Oxfam

Qualitative evaluation and research methods
Patton, M.Q., 1990
Newbury Park: Sage

Toolkit for development of evaluation strategies for radio producers, in Media in development: towards a toolkit for communication monitoring and impact assessment methodologies
Porras, L.E., 1998
A. Skuse (ed), London: DFID

Participatory tools and techniques: a resource kit for participation and social assessment
Reitbergen-McCracken, J. and D. Narayan, 1998
Washington DC: World Bank

Impact assessment: perceptions and practice
Sayce, K. with P. Norrish, Autumn 2005
Wageningen: CTA

Entertainment-education and HIV/AIDS prevention: a field experiment in Tanzania
Vaughan, P.W., E.M. Rogers et al., 2000
The Journal of Health Communication 5 (supplement)

Monitoring and evaluation manual
Warnock, K., 2002
Kampala: Panos Institute

Appendix

ACKNOWLEDGEMENTS

These guidelines were written by Mary Myers, with the support of Nicola Woods and Sina Odugbemi of the ICD team, DFID. Valuable inputs and insights have been contributed by Aquarium Writers Ltd, Gordon Adam, Simon Batchelor, Simon Davison, Nick Ishmael-Perkins, Kate Lloyd-Morgan, Tag McEntegart, Pat Norrish, Francis Rolt, Andrew Skuse, and Peter Vaughan.

Mary Myers (PhD)
Development Communications Consultant
Wardour, Wiltshire, UK
[email protected]

DFID, THE DEPARTMENT FOR INTERNATIONAL DEVELOPMENT: leading the British government's fight against world poverty

One in five people in the world today, over 1 billion people, live in poverty on less than one dollar a day. In an increasingly interdependent world, many problems - like conflict, crime, pollution and diseases such as HIV and AIDS - are caused or made worse by poverty. DFID responds to emergencies, both natural and man-made. It also supports long-term programmes which aim to reduce poverty and disease and to increase the number of children in school, in support of the internationally agreed UN Millennium Development Goals.
