Unit 5 Quick Guide: Indicators and Means of Verification
Introduction to Monitoring and Evaluation Course
Indicators are developed for all three levels of results (outputs, outcomes and impact) to provide
feedback on areas of success and on areas in which improvement may be required. The focus of this unit
is mainly on monitoring indicators, but where relevant, evaluation will be dealt with to illustrate where
differences may exist.
A good tip when developing indicators is to remember that measuring change is costly and large
volumes of data can be difficult to manage and analyse. So, use as few indicators as possible while
ensuring you can measure the breadth of change. A good rule of thumb is no more than three indicators
per output, outcome or impact.
LEVELS OF INDICATORS
Impact indicators describe changes in people’s lives and development conditions. These are often
long-term changes that have taken place and are linked to larger developmental impacts such as
poverty alleviation, skills that lead to such things as more jobs, or policy change in an educational
setting.
Outcome indicators assess the progress against specified outcomes. They help to verify that the
intended positive change in the development situation has actually taken place. It is useful to have
two or more indicators to capture different dimensions of outcome – if time and resources allow.
These include outcomes such as examination pass rates from an educational intervention,
sustainable farming techniques adopted, and number of girls successfully moving from primary to
secondary level education.
Output indicators assess progress against specified outputs. Since outputs are tangible and
deliverable, their indicators may be easier to identify. It is useful to have two or more indicators to
capture different dimensions of output. Some examples of output indicators are: number of
textbooks delivered; number of teachers trained; percentage of girls taking STEM subjects as a total
of all girls enrolled.
TYPES OF INDICATORS
Quantitative indicators measure results in numeric terms: number, percentage, rate or ratio, mean, median, or standard deviation. For example, the percentage of faculty successfully trained in a new distance learning methodology.
Qualitative indicators measure the results in terms of compliance with, quality of, extent of, or level
of, changes in institutional processes, attitudes and behaviours of individuals in non-numeric terms.
They reflect people’s perceptions; for example, an independent assessment of the readiness of a
government department to sustain an innovation introduced in the project.
Composite indicators can be used to measure complex outputs or outcomes that cannot be measured by a single indicator. For example, a quantitative output indicator could be complemented by client or learner comments on their satisfaction.
Proxy indicators are used when it is difficult to measure the outcome indicators directly or regular data collection is not feasible. To be reliable and valid, proxy indicators must provide approximate evidence of performance. For example, a proxy indicator for the faculty-training example above might be ‘percentage pass rate of students of faculty who underwent innovative skills training’. Similarly, if it is not possible to measure students’ learning outcomes, then drop-out rates or progression to the next grade/secondary phase of schooling could be used as an indicator.
FORMULATING INDICATORS
An indicator’s suitability depends on how it relates to the result it intends to describe (output or outcome). The key to a good indicator is credibility. It should direct the focus to what is critical for the achievement of results. Consider the following key questions:
How can we measure that the expected results are being achieved?
What type of information can demonstrate a change?
What can be feasibly monitored or measured within the given resources and capacity constraints?
Will timely information be available for the different monitoring and evaluation exercises?
What will the system of data collection be? Who will be responsible?
It is crucial that the indicator can be measured. For example, to see improved classroom practice, there must be a clear idea of what should improve (e.g. student participation, use of group work), and it must be possible to construct a tool to reliably and validly measure it – or at least feasible to do so with the kinds of enumerators (data collectors) that are available.
As far as possible, indicators should be disaggregated (broken down) by gender, age, ethnicity,
geographical area etc. This is because averages tend to hide disparities. For instance, consider what the
following indicator tells you:
Percentage of primary school children progressing to secondary school.
It provides important information on the success of primary level education and the socio-economic and environmental factors that assist children to progress to secondary level. However, we don’t know who exactly progresses, where they came from, or what socio-economic group they come from – the average hides the inequality. It would be better to know:
Percentage of female primary learners that progressed to the secondary level (disaggregated by age,
socio-economic status, geographical location).
But recognise that this is a lot more data to collect, and you will need to plan for this when you develop
your indicators. Disaggregation is also a requirement of most funding bodies, especially where the focus
is on disadvantaged learners. Therefore, it is essential for programming to recognise and address
differential outputs, outcomes and impacts.
Indicators, especially quantitative ones, should be disaggregated where possible. This means ensuring
that information can be separated out to show how change affects different target groups. Common
criteria for disaggregation include gender, disability, and characteristics of other marginalised groups.
Where indicators are designed to be disaggregated, associated information such as baselines, milestones and targets also needs to be disaggregated, so that disaggregation is applied consistently over time.
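To make the point about averages hiding disparities concrete, the following is a minimal sketch (not part of the source guide) of how disaggregated progression rates might be computed from record-level data. The field names (`gender`, `progressed`) and the illustrative numbers are assumptions for demonstration only.

```python
from collections import defaultdict

def progression_rates(records, keys):
    """Progression rate disaggregated by the given attribute keys.

    records: list of dicts with a boolean 'progressed' field plus
    attributes such as 'gender' (illustrative field names).
    """
    totals = defaultdict(lambda: [0, 0])  # group -> [progressed, total]
    for r in records:
        group = tuple(r[k] for k in keys)
        totals[group][0] += r["progressed"]
        totals[group][1] += 1
    return {g: p / n for g, (p, n) in totals.items()}

# Illustrative data: the overall average hides a gender gap.
learners = (
    [{"gender": "F", "progressed": True}] * 30
    + [{"gender": "F", "progressed": False}] * 20
    + [{"gender": "M", "progressed": True}] * 45
    + [{"gender": "M", "progressed": False}] * 5
)
overall = sum(r["progressed"] for r in learners) / len(learners)
by_gender = progression_rates(learners, ["gender"])
# The overall rate is 0.75, but disaggregation reveals 0.6 for girls
# versus 0.9 for boys -- the inequality the average conceals.
```

Extending `keys` (e.g. `["gender", "location"]`) disaggregates by multiple criteria at once, which is why the data collection plan must capture those attributes from the start.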
If an indicator no longer provides useful information for project management, it should be revised. Over time, new indicators may need to be adopted and others dropped. This usually involves a negotiation with the funder of the project.
A good indicator is SMART:
• Specific — the change that is the subject of the indicator must be specific, unambiguous, leaving
minimal room for misinterpretation.
• Measurable — it should be technically and financially possible, and within the power and scope of the project operation, to capture the data.
• Adequate — it should provide sufficient evidence to be a valuable guide to the performance being
monitored or evaluated.
• Relevant — it should be central to the performance result that you want to monitor or evaluate.
• Timely — it should be possible to make good use of the indicator results in planning and other
management processes.
Gender-responsive indicators
The purpose of developing gender-responsive indicators is to help ensure that development interventions are contributing to the achievement of gender equality goals and not perpetuating any existing gender inequalities. The extent to which projects address gender equality will vary, but gender-responsive indicators are still important for understanding the differential impacts on men/boys and women/girls.
Box 3. 5 Steps for Developing Gender-Responsive Indicators
Step 1: Identify gender equality objectives in your results framework. If you do not have any explicit gender equality objectives, this is an opportunity to revisit your Theory of Change and integrate them.
Step 2: Identify the gender-related changes required to achieve gender equality objectives, both at the outcome and output level. These desired changes should be closely related to your gender equality objectives.
Step 3: Identify appropriate indicators, both quantitative and qualitative, that will show progress towards gender-related changes.
Step 4: Ensure benefits and returns to both men and women are considered.
Qualitative or quantitative?
As a general rule, the more specific the result statement, the easier it is to develop valid quantitative indicators. So, it is usually easier, and more practical, to develop quantitative indicators for outputs than for outcome and impact level results. This is not to say that one should ignore the use of quantitative indicators to measure outcomes and impacts, but rather that qualitative measures must also be considered.
That said, qualitative indicators should not be viewed as second best. They are often the best way to
gain insights into a wide range of variables such as changes in institutional processes, attitudes, beliefs,
motives and behaviours of individuals, or to have measures that show the complexity of a situation.
Data collection for qualitative indicators typically involves the use of open-ended questionnaires that are
administered through face-to-face interviewing, online responses (as opposed to aggregate statistics
about participation or timing of online use), focus groups, etc. The downside of this is the amount of
time and skills needed to collect and analyse the data. And, because they usually involve subjective
judgements, qualitative indicators are more difficult to collect and analyse reliably.
INDICATOR DEFINITIONS
It is important to have clear definitions of indicators so that funders and those developing data collection tools have the same understanding of what an indicator means as the person compiling the data. Key terms within indicators should be clearly defined, and these definitions should align with the means of verification, namely how you will measure the indicator.1 For example, an indicator such as ‘% increase in enrolment in quality-assured training’ requires clarity about how quality is defined. In one context, quality might mean the training is appropriate to the users’ needs, and thus a means of verification might be user feedback/satisfaction ratings of the training or programme. On the other hand, quality could be defined as relevance to the labour market or industry, in which case the means of verification would differ – it might include reports of quality assurance processes which have engaged labour market stakeholders in the validation of a course or programme.
ESTABLISHING BASELINES
After selecting indicators, the next step is to establish baseline data. It is difficult to set targets without first establishing a baseline – how can change be measured if we do not know the starting point? Baseline data are usually collected only for outcome indicators, and this information can be quantitative or qualitative. This is the starting point for measuring results. Baselines can be constructed from available secondary data sources and can be improved or sharpened through subsequent work of the project. In some cases, baseline data may not be available, or the baseline may be zero. For example, a programme to introduce financial education in schools, which previously had none, might assume students had no financial knowledge, and hence a baseline for an outcome could be assumed to be zero. Similarly, an output such as ‘the number of skills training toolkits implemented’ would be zero at the start of the programme and would increase incrementally over the life of the programme – with a target set for how many the programme wants to achieve. At outcome level, a baseline could also use existing data, such as ‘number of primary school children (disaggregated by gender, age and geography) that enrol in secondary school’. The baseline could be the last 3 years’ enrolment rates if these data were already collected by the Departments of Education. This would be especially important if these data showed an increase over the three years, which would have to be taken into account when setting targets and assessing how much the programme had contributed to the improvement in these rates.
1 As you will see later, these need not be details of a measurement tool, but a general indication of how the measurement will be done; e.g. if the indicator is about improvement in teacher classroom practice, then the MoV would be observation of classrooms/classroom practice.
SELECTING TARGETS
After gathering baseline data on indicators, the next step is to establish targets – what can be achieved, taking into account the available resources and timeframe? Each indicator should have one target over a specified timeframe. It need not be a single numerical value; in some cases, it can be a range. Figure 1 shows a baseline, the amount of improvement, and thus the target for an outcome on primary level students who gain enrolment into secondary school.
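The point above about a pre-existing upward trend can be made concrete with a small sketch (not from the source guide): the baseline is taken from the most recent year, the historical trend is projected forward, and the programme's target is set on top of that projection, so the programme is not credited with change that was already happening. The function name, the enrolment figures, and the 4-percentage-point "lift" are all illustrative assumptions.

```python
def trend_baseline_and_target(rates, programme_lift):
    """Baseline and trend-adjusted target from historical rates.

    rates: enrolment rates (%) for recent years, oldest first.
    programme_lift: extra improvement (percentage points) the
    programme aims to add on top of the projected trend.
    """
    baseline = rates[-1]                                 # most recent year
    avg_annual_change = (rates[-1] - rates[0]) / (len(rates) - 1)
    projected_next_year = baseline + avg_annual_change   # trend alone
    target = projected_next_year + programme_lift        # trend + programme
    return baseline, projected_next_year, target

# Last 3 years' secondary-enrolment rates (%): already rising ~1.5/yr.
baseline, projected, target = trend_baseline_and_target([62.0, 63.5, 65.0], 4.0)
# baseline = 65.0; the trend alone projects 66.5 next year,
# so a credible target is 70.5 rather than "baseline + 4".
```

If the target were instead set naively as baseline + 4 = 69.0, the programme would claim credit for the 1.5 points of improvement the existing trend would have delivered anyway.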
MEANS OF VERIFICATION
While indicators are the means of measuring outcomes, means of verification are the data sources used to evaluate the indicators. At a very general level, an indicator might state ‘number of girls passing Grade 4’. The means of verifying this would be the collection of the marks (from the school or education department) of the girls in the Grade 4 group.
This is a straightforward example with readily available data. For other indicators, data may not be readily available, so it is important to set out what data will be needed at the start of a project. For instance, the indicator ‘improvement in quality of life at the end of a mentorship programme’ will require a decision on what data can be used to verify it, such as: change in psycho-social outlook; increased satisfaction with work-life balance; increased statements of self-efficacy in accessing opportunities. There are a variety of measures that may be used – the challenge is in choosing the most appropriate ones.
The choice of data to be collected and the methods for collecting the data are influenced by the
following:
The evidence needed to track the indicators: is it located at the site (e.g. amount of training
provided, use of technology by participants, blood levels of beneficiaries) or at a different location
(e.g. Ministry of Education may collect pass rates, gender, age data)?
The feasibility of collecting these data, given time and budgetary constraints: is it easier and cheaper to collect pass rates versus classroom observation of a new teaching methodology in progress?
The data management systems and internal capacities to manage data: are there trained people to
collect the data? Is an easy template or tool available? Or is it completely new?
These choices and questions will of course be returned to when the details of evaluation and monitoring
methods to be used are considered (see Units 6-9). The following table shows how the information on
the means of verification can be planned:
How often will the data be collected? 4 times across the year; the data will be sent to the evaluator.
Cost and difficulty of collecting data: Data are easily accessible as part of the standardised mathematics test schedule in the school. Cost-efficient, as the teacher collates the information, with some training/instructions on use of a spreadsheet.
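A means-of-verification plan like the one above can be captured as a simple structured record, one entry per indicator. The sketch below is illustrative only (the class and field names are not from the source guide); the values mirror the mathematics-test example in the text.

```python
from dataclasses import dataclass

@dataclass
class VerificationPlan:
    """One row of a means-of-verification plan (illustrative fields)."""
    indicator: str            # what is being measured
    data_source: str          # where the evidence lives
    collection_method: str    # how it will be gathered
    frequency_per_year: int   # how often data are collected
    responsible: str          # who collects and collates the data
    notes: str = ""           # cost, difficulty, caveats

plan = VerificationPlan(
    indicator="Girls' mathematics test scores in Grade 4",
    data_source="Standardised mathematics test schedule in the school",
    collection_method="Teacher collates scores into a spreadsheet",
    frequency_per_year=4,
    responsible="Class teacher; data sent to the evaluator",
    notes="Cost-efficient; needs brief training on spreadsheet use",
)
```

Forcing every indicator through a fixed set of fields like this surfaces gaps early: an indicator with no plausible `data_source` or `responsible` entry is a sign the means of verification needs rethinking before the project starts.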
Collecting test scores may not be the only data that can be collected. The evaluator could also create a
checklist for observing the girls’ performance and problem-solving skills. An independent mathematics
test could be administered to reduce possible teacher bias. The longevity of the impact can be measured
by assessing mathematics performance over 3 years, but this might require multiple tests (rather than
repeating the same test).
There are two sources of data: primary (collected directly by the organisation) and secondary (collected
by other organisations), which could be used to measure the desired changes. Primary data would
ideally be that collected by the evaluator at a site (e.g. an evaluator administers a test at a school).
Secondary data may include sources such as national statistics available on government websites.
The pros and cons of primary and secondary data need to be considered when running an evaluation.
Choosing to use primary or secondary data depends on a number of factors. For instance:
Is the primary data collection feasible? This would be the best-case scenario to ensure the most
control over the validity and reliability of the data.
Is the source accessible? If yes, is it accessible on a regular basis? For example, if the primary data
are not available, secondary data may need to be used.
What is the quality of the data? How reliable are the data? Independent, well-planned data collection by the evaluator is usually the ideal. However, this is not always possible. Data may be collected from other people or institutions and, in this case, the quality of what is provided is important to consider. One way to ensure data quality is to have a standardised data collection protocol, and not to rely on only one source of data.
Primary data collected from beneficiaries or other stakeholders allow for more in-depth exploration and yield information that can facilitate a deeper understanding of observed changes in outcomes and outputs, and of the factors that have contributed to those changes. The strategy to collect primary data involves making a choice between informal/less structured and formal/more structured instruments.2
Secondary data or information can complement and supplement the primary data, but should not replace collecting data from primary sources. Secondary data could include formal surveys and reports (e.g. National Sample Surveys, Demographic and Health Surveys). For data related to education, health and livelihoods, a secondary source is often the more feasible option.
In monitoring outputs, a combination of data collection strategies (primary and secondary) might work best in building the information system to support tracking each indicator. Nonetheless, there are trade-offs with respect to cost, precision, credibility and timeliness. More structured, formal methods are more precise, but also more costly and time consuming. So, when data are needed frequently, the choice is invariably less precise methods. Further, the system needs a certain degree of adaptability to identify new data sources, new data collection techniques and new ways of reporting.
2 These issues will be explored in more detail in Units 6-9.
When planning the data collection process for the indicators of the project, there are a number of considerations. Table 2 is indicative of data sources and collection strategies commonly used for monitoring purposes, and factors to consider regarding available resources. It is presented as a checklist for a project planning team to consider and to inform the data collection plan (see also Unit 6 on Developing an M&E Strategy, specifically the sections on data collection plans). Though further detail is provided in later units, it is included here to demonstrate some considerations for selecting or identifying the means of verification.
Table 2: Example data sources and information available for data collection
Collection strategy (Suitable or not? Explain):
• Web use statistics and service subscriber data
• Existing data, often at country level, such as census data and educational statistics such as pass rates
• Surveys
• Interviews and group discussions (time consuming, but can be good quality – a smaller sample is needed for interviews than for questionnaires, but the latter reach more people)
• Direct observation (i.e. through site visits)
• Self, peer and partnership assessment of programme progress and change
• Evaluation studies, including independent studies that investigate progress toward the achievement of longer-term results or the effectiveness (or ineffectiveness) of the strategies being employed. Remember to factor in the cost if undertaken by an external independent party (best practice)
ADDITIONAL RESOURCES
Social Impact Navigator. Developing indicators: 4 steps. A 4-step approach to
developing indicators: http://www.social-impact-navigator.org/impact-analysis/indicators/develop/
Compass. How to Develop Indicators. A practical ‘how-to’ guide for developing indicators
with downloadable resources and templates, including information on SMART indicators:
https://www.thecompassforsbc.org/how-to-guides/how-develop-indicators
Infodev. Core Indicators for Monitoring and Evaluation Studies in ICTs for
Education. This resource provides guidance on developing indicators in an educational setting:
https://www.infodev.org/sites/default/files/resource/InfodevDocuments_286.pdf
The contents of this document, except logos/graphics which are the property of their respective owners, are made available under Attribution-ShareAlike 4.0 International (CC BY-SA 4.0).