Monitoring and Evaluation
List of Annexes
Annex 1: Monitoring and Evaluation Plan
Introduction
This manual is a reference document which provides detailed information about Monitoring and
Evaluation (M&E) and is intended to assist users in developing M&E policies and procedures at
the organizational level. The purpose of the manual is to promote a good understanding and
reliable practice of M&E for projects and programmes. It covers the key planning steps and
processes needed to set up and implement an M&E system for project planning, implementation
and evaluation. It has been designed to be used by M&E specialists, managers of civil society
organizations, in particular humanitarian relief and development programs, and decision makers
responsible for programme oversight and funding.
The manual focuses on the key components of an M&E system that allows planners to develop
and strengthen M&E policies and procedures for projects or programmes. The manual has been
organized into three chapters which cover the theory, policy and application of M&E systems.
The first chapter lays out the theoretical foundation of M&E systems, and the narrative is designed
to help readers develop an understanding of essential concepts. The chapter is divided into three
sections which follow a logical train of thought: from the hypothesis of how change will come
about, to the development of the specific objectives needed for this change, to methods for
measuring the project's achievement of its stated objectives, and finally to procedures for
collecting and analyzing the data and information used in the measurement.
Chapter two lays out a template for framing an M&E policy, designed to provide guiding principles
to the organization in setting up and implementing M&E systems. This chapter also helps define
and determine the role of the M&E function and mandates addressing a number of M&E priorities.
Chapter three provides the structure for the practical application of an M&E system at the
organizational level. It outlines an M&E framework consisting of a list of standard M&E
procedures, the information required to perform these procedures, and timelines and
responsibilities. It also outlines the steps to establish an M&E system, proposing a list of tasks
under each step to help M&E staff establish a central M&E function and meet project-specific
M&E requirements. This chapter is supported by a number of templates, annexed to this manual,
to help perform standard M&E procedures.
CHAPTER 1
THEORY OF MONITORING
AND EVALUATION
Section 1: Monitoring and Evaluation
Monitoring
Monitoring is a periodically recurring task that begins in the planning stage of a project or
programme. Monitoring documents results, processes and experiences and uses this information
as a basis to guide decision-making and learning processes. Monitoring is about checking
progress against plans. The data acquired through monitoring are used for evaluation.
Monitoring is the systematic and routine collection of information from projects and programmes
for four main purposes:
To learn from experiences to improve practices and activities in the future;
To have internal and external accountability of the resources used and the results obtained;
To take informed decisions on the future of the project or programme;
To promote empowerment of beneficiaries of the project or programme.
Monitoring focuses on the measurement of the following aspects of an intervention:
On quantity and quality of the implemented activities (outputs: What do we do? How do we
manage our activities?)
On processes inherent to a project or programme (outcomes: What were the effects/changes
that occurred as a result of your intervention?)
On processes external to an intervention (impact: Which broader, long-term effects were
triggered by the implemented activities in combination with other environmental factors?)
Common types of monitoring in projects or programmes include:
Results monitoring tracks effects and impacts. This is where monitoring merges with
evaluation to determine if the project/programme is on target towards its intended results
(outputs, outcomes, impact) and whether there may be any unintended impact (positive or
negative). For example, a psychosocial project may monitor that its community activities
achieve the outputs that contribute to community resilience and ability to recover from a
disaster.
Process (activity) monitoring tracks the use of inputs and resources, the progress of
activities and the delivery of outputs. It examines how activities are delivered, including the
efficiency in time and resources. It is often conducted in conjunction with compliance
monitoring and feeds into the evaluation of impact. For example, a water and sanitation project
may monitor that targeted households receive septic systems according to schedule.
Compliance monitoring ensures compliance with donor regulations and expected results,
grant and contract requirements, local governmental regulations and laws, and ethical
standards. For example, a shelter project may monitor that shelters adhere to agreed national
and international safety standards in construction.
Context (situation) monitoring tracks the setting in which the project/programme operates,
especially as it affects identified risks and assumptions, but also any unexpected
considerations that may arise. It includes the field as well as the larger political, institutional,
funding, and policy context that affect the project/programme. For example, a project in a
conflict-prone area may monitor potential fighting that could not only affect project success
but endanger project staff and volunteers.
Beneficiary monitoring tracks beneficiary perceptions of a project/programme. It includes
beneficiary satisfaction or complaints with the project/programme, including their participation,
treatment, access to resources and their overall experience of change. Sometimes referred
to as beneficiary contact monitoring (BCM), it often includes a stakeholder complaints and
feedback mechanism. It should take account of different population groups, as well as the
perceptions of indirect beneficiaries (e.g. community members not directly receiving a good
or service). For example, a cash-for-work programme assisting community members after a
natural disaster may monitor how they feel about the selection of programme participants, the
payment of participants and the contribution the programme is making to the community (e.g.
are these equitable?).
Financial monitoring accounts for costs by input and activity within predefined categories of
expenditure. It is often conducted in conjunction with compliance and process monitoring. For
example, a livelihoods project implementing a series of micro-enterprises may monitor the
money awarded and repaid, and ensure implementation is according to the budget and time
frame.
Organizational monitoring tracks the sustainability, institutional development and capacity
building in the project/programme and with its partners. It is often done in conjunction with the
monitoring processes of the larger, implementing organization. For example, a National
Society's headquarters may use organizational monitoring to track communication and
collaboration in project implementation among its branches and chapters.
Although the tools and processes vary according to the monitoring need, as described above,
some best practices apply across all of them; these are summarized below:
Monitoring data should be well-focused to specific audiences and uses (only what is
necessary and sufficient).
Monitoring should be systematic, based upon predetermined indicators and assumptions.
Monitoring should also look for unanticipated changes with the project/ programme and its
context, including any changes in project/programme assumptions/risks; this information
should be used to adjust project/programme implementation plans.
Monitoring needs to be timely, so information can be readily used to inform project/programme
implementation.
Whenever possible, monitoring should be participatory, involving key stakeholders; this can
not only reduce costs but also build understanding and ownership.
Monitoring information is not only for project/programme management but should be shared
when possible with beneficiaries, donors and any other relevant stakeholders.
Evaluation
Evaluation is assessing, as methodically and objectively as possible, a completed project or
programme (or a phase of an ongoing project or programme that has been completed).
Evaluations assess data and information that inform strategic decisions in order to improve the
project or programme in the future. During an evaluation, information from previous monitoring
processes is used to understand the ways in which the project or programme developed and
stimulated change. Evaluations should help to draw conclusions about five main aspects of the
intervention:
Relevance
Effectiveness
Efficiency
Impact
Sustainability
Information gathered in relation to these aspects during the monitoring process provides the basis
for the evaluative analysis. The evaluation process is an analysis or interpretation of the collected
data which delves deeper into the relationships between the results of the project, the effects
produced by the project and the overall impact of the project.
The major types of evaluation that can occur during or after a project or programme include:
Summative evaluations occur at the end of project/programme implementation to assess
effectiveness and impact.
Midterm evaluations are formative in purpose and occur midway through implementation.
For projects/ programmes that run for longer than 24 months, some type of midterm
assessment, evaluation or review is usually required by the donor. Typically, this does not
need to be independent or external, but may be according to specific assessment needs.
Final evaluations are summative in purpose and are conducted (often externally) at the
completion of project/programme implementation to assess how well the project/programme
achieved its intended objectives.
Ex-post evaluations are conducted some time after implementation to assess long-term
impact and sustainability.
Summary: A Comparison between Monitoring and Evaluation

Timing: When is it done?
Monitoring: continuous throughout the project.
Evaluation: periodic review at significant points in project progress: end of project, midpoint of project, change of phase.

Scope: What information is collected?
Monitoring: day-to-day activities, outputs, indicators of progress and change.
Evaluation: overall delivery of outputs and progress towards objectives and goal.

Main participants: Who does it?
Monitoring: project staff, project users.
Evaluation: external evaluators/facilitators, project users, project staff, donors.

Process: How is it done?
Monitoring: regular meetings, interviews, monthly and quarterly reviews, etc.
Evaluation: extraordinary meetings, additional data collection exercises, etc.

Written outputs: What is produced?
Monitoring: regular reports and updates to project users, management and donors.
Evaluation: a written report with recommendations for changes to the project, presented in workshops to different stakeholders.

How are the results used?
Monitoring: to improve the quality of implementation and adjust planning; as input to evaluation.
Evaluation: to judge the impact on the target population, adjust objectives, and decide about the future of the programme.
Section 2: The M&E System
The M&E system provides the information needed to assess and guide the project strategy,
ensure effective operations, meet internal and external reporting requirements, and inform future
programming.
A functional M&E system provides a continuous flow of information that is useful internally and
externally.
The internal use of information on progress, problems, and performance is a crucial
management tool that helps managers ensure that specific targets are met.
The information from an M&E system is also important to those outside the organization who
are expecting results and want to see demonstrable impacts.
M&E should be an integral part of project design and also part of project implementation and
completion. It is therefore important to understand the key stages of the project life cycle and how
an M&E system corresponds to this (see Figure 1).
Initiation stage: this stage determines the nature and scope of the project, typically covering:
analyzing the business needs/requirements in measurable goals
reviewing the current operations
financial analysis of the costs and benefits, including a budget
stakeholder analysis, including users and support personnel for the project
a project charter including costs, tasks, deliverables, and schedule
Outputs from this stage may include approval to proceed to the next stage, documentation of the
need for the project, rough estimates of time and resources to perform it, and an initial list of
people who may be interested in, involved with or affected by the project.
Planning stage: outputs from this stage include a Project Plan documenting the intended project
results and the time, resources and supporting processes needed to create them, along with all
the other controls that the project needs, such as risk management.
Execution stage: outputs from this stage may include project progress reports, financial reports
and further detailed plans.
Closing stage: this stage includes:
Contract closure: complete and settle each contract (including the resolution of any open
items) and close each contract applicable to the project or project phase.
Project close: finalize all activities across all of the process groups to formally close the
project or a project phase.
Outputs from this stage may include final, accepted and approved project results, and
recommendations and suggestions for applying lessons learned from this project to similar efforts
in the future.
The M&E needs of a project depend on its key characteristics, including:
the overall goal of the project, the desired change you wish to bring about, or the effect you
want to happen;
the beneficiaries the project aims to benefit;
the hypotheses or assumptions of the project and how these connect the project objectives to
project activities or interventions;
the scope and size of the project;
the capacity for M&E and the extent of participation;
the duration of the project;
the overall budget of the project.
Therefore, each project will have different M&E needs, which depend on the context in which the
project operates, the capacity of the implementing team, and the requirements of the donor.
These needs should be identified when preparing an M&E plan, and the methods, procedures
and tools used to meet them coordinated accordingly. In doing this, resources will be conserved
and the planning of M&E activities streamlined.
The foundation on which an M&E system is built consists of four key components: the Problem
Analysis Framework; the Logframe or Logical Framework; the M&E Plan or Indicator Matrix; and
the Data Collection and Analysis Plan. Two further components, Reporting and Utilization and
M&E Staffing and Capacity Building, which are integral and important, are also discussed.
These components play a vital role in M&E planning and answer the following questions:
What does the project want to change and how?
What are the specific objectives to achieve this change?
What are the indicators and how will they measure this?
How will the data be collected and analyzed?
In the following section we shall discuss each of the key components in detail and exemplify how
they contribute to the development of an M&E system.
Section 3: The Key Components of an M&E System
1. Problem Analysis Framework
The problem analysis framework sets out:
The major problem and condition(s) that the project seeks to change
Factors that cause the condition(s)
Ways to influence the causal factors, based on hypotheses of the relationships between the
causes and likely solutions
Interventions to influence the causal factors
The expected changes or desired outcomes (see Table 1).
The information needed to carry out the analysis should be based on careful study of local
conditions and available data, and on consultation with target beneficiaries, implementing
partners, stakeholders and technical experts. This information should be obtained, where
available, from needs assessments, feasibility studies, participatory rapid appraisals (PRAs),
community mapping, and SWOT (strengths, weaknesses, opportunities, threats) analyses.
Table 1: Example of a Problem Analysis Framework
Cause/Conditions | Hypothesis | Interventions/Desired Outcomes
Mothers do not know that unclean water will make infants sick (knowledge). | IF mothers are aware of the dangers of unclean water, | Educate mothers about the dangers of unclean water.
Mothers believe that breast milk alone does not satisfy infants younger than 6 months (attitude). | AND that breast milk is nutritionally sufficient for infants younger than 6 months, | Educate mothers about the nutritional value of breast milk for infants younger than 6 months.
Mothers are giving breast milk substitutes to infants younger than 6 months (practice). | THEN they will breastfeed their infants exclusively to avoid exposure to unclean water. | Desired outcome: increased breastfeeding of infants younger than 6 months.
The framework presented in Table 1 hypothesizes that mothers will breastfeed their infants once
they learn about the dangers of unclean water. However, if mothers are not breastfeeding for
other reasons, such as cultural norms or working away from home, then different interventions
are needed. In effect, the M&E system tests the hypotheses to determine whether the project's
interventions and outputs contributed to the desired outcomes.
Please note that other forms of analysis exist which can achieve the same outcomes, and the
one you choose to implement should be familiar to you. These include problem analysis
techniques such as problem trees, which isolate conditions and consequences to help identify
objectives and strategies, and theory of change analysis, which uses backwards mapping to
identify the conditions required to bring about desired outcomes.
2. The Logframe (Logical Framework)
The logframe is a matrix made up of columns and rows which specifies the project objectives
(what you are trying to achieve) and the project indicators (how the achievement will be
measured). The project objectives and indicators are assessed against the project inputs,
outputs, outcomes and impact (goal), and it is important to understand the difference between
these terms. Table 2 presents and defines the key terms and components of a classic 4x5
logframe matrix.
The core of the Logical Framework is the "temporal logic model" that runs through the matrix. This
takes the form of a series of connected propositions:
If these Activities are implemented, and these Assumptions hold, then these Outputs will be
delivered.
If these Outputs are delivered, and these Assumptions hold, then this Outcome will be
achieved.
If this Outcome is achieved, and these Assumptions hold, then this Goal will be achieved.
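To make the temporal logic concrete, the chain can be represented as a simple data structure. The following Python sketch is illustrative only; the class, field names and example statements are our own, not part of the logframe standard:

from dataclasses import dataclass, field

@dataclass
class LogframeLevel:
    name: str        # "Activities", "Outputs", "Outcome" or "Goal"
    statement: str   # the objective stated at this level
    assumptions: list = field(default_factory=list)  # conditions that must hold

def print_results_chain(levels):
    # Walk adjacent pairs: each level plus its assumptions implies the next level up.
    for lower, higher in zip(levels, levels[1:]):
        held = "; ".join(lower.assumptions) or "none stated"
        print(f"IF {lower.name} ({lower.statement}) are achieved, "
              f"AND these assumptions hold ({held}), "
              f"THEN {higher.name}: {higher.statement}")

# Hypothetical chain, loosely based on the breastfeeding example in this section:
chain = [
    LogframeLevel("Activities", "educate mothers on unclean water and breast milk",
                  ["mothers attend the sessions"]),
    LogframeLevel("Outputs", "mothers know the risks of unclean water"),
    LogframeLevel("Outcome", "exclusive breastfeeding of infants under 6 months"),
    LogframeLevel("Goal", "reduced infant illness from unclean water"),
]
print_results_chain(chain)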
Table 3 presents a completed logframe based on a real-life example.
Table 2: Definitions of key terms and components of the Logframe
GOAL (Project Objectives): a broad statement of a desired, usually longer-term, outcome of a project. Goals express general project intentions and help guide the development of a project. Each goal has a set of related, specific objectives that, if met, will collectively permit the achievement of the stated goal.
Indicators (Impact indicator): quantitative or qualitative means to measure achievement or to reflect changes connected to the stated goal.
Means of Verification: measurement method, data source, and data collection frequency for the stated indicator.
Risks & Assumptions: external factors necessary to sustain the long-term impact, but beyond the control of the project.
Table 3: Completed Logframe Matrix
Project Objectives | Indicators | Means of Verification | Risks/Assumptions
3. Indicators and M&E Plan (Indicator Matrix)
Indicators
Indicators are defined as quantitative or qualitative variables that provide a valid and reliable way
to measure achievement, assess performance, or reflect changes connected to an intervention.
Indicators help in observing and measuring actual results as compared to expected results, and
they generate the information and data required to guide effective action. An indicator is neutral
in nature: it does not embed any direction or target. A bowing treetop is an indicator of the wind;
the indicator does not need to specify the direction or extent to which the treetop should bow.
Effective indicators are a critical logframe element. Technical expertise is helpful, and before
indicators are finalized it is important to review them jointly with key implementers to ensure that
they are realistic and feasible and meet users' informational needs.
It is also important to understand the logframe's hierarchy of indicators. For instance, it is usually
easier to measure lower-level indicators, such as the number of workshop participants, whereas
higher-level indicators, such as behavioural change, typically require more analysis and
synthesis of information. This affects the M&E data collection methods and analysis, and has
implications for staffing, budgets, and timeframes.
When selecting indicators, consider the following questions:
Are the indicators relevant, measurable, achievable and time-bound? Indicators should be
easy to interpret and explain, timely, cost-effective, and technically feasible. Each indicator
should have validity (be able to measure the intended concept accurately) and reliability (yield
the same data in repeated observations of a variable).
Are there international or industry standard indicators? For example, indicators
developed for the United Nations Millennium Development Goals (e.g. achieve universal
primary education, reduce child mortality) and the Demographic and Health Surveys have
been used and tested extensively. Other indicators include those developed by the Sphere
Project and the Humanitarian Accountability Partnership (HAP).
Are there indicators required by the donor, grant or program? This can be especially
important if the project-level indicator is expected to roll up to a larger accountability framework
at the program level.
Are there secondary indicator sources? It may be cost-effective to adopt indicators for
which data have been or will be collected by government departments, private sector,
international agency or other NGOs, and so on.
The M&E plan (indicator matrix) records, for each indicator, the following columns:
Indicators: indicators can be either quantitative (numeric) or qualitative (descriptive observations) and are typically taken directly from the logframe.
Indicator Definition: define key terms in the indicator for precise measurement and explain how the indicator will be calculated, i.e., the numerator and denominator of a percent measure; also note any disaggregation, i.e., by sex, age, or ethnicity.
Methods/Sources: identify information sources and data collection methods/tools; indicate whether data collection tools (surveys, checklists) exist or need to be developed.
Frequency/Schedules: identify how often the data will be collected, i.e., monthly, quarterly, or annually; list start-up and end dates for data collection and deadlines to develop tools.
Persons Responsible: identify the people responsible and accountable for data collection and analysis; list each person's name and position title to ensure clarity in case of personnel changes.
Data Analysis: describe the process for compiling and analyzing the data, i.e., statistical analysis.
Information Use: identify the intended audience and use of the data, i.e., monitoring, evaluation, or reporting to policy makers or donors; state ways the findings will be formatted and disseminated.
Persons Responsible: This column lists the people responsible and accountable for the data
collection and analysis, i.e., community volunteers, field staff, project managers, local
partner/s, and external consultants. In addition to specific people's names, use the position
title to ensure clarity in case of personnel changes. This column is useful in assessing and
planning for capacity building for the M&E system.
Data Analysis: This column describes the process for compiling and analyzing the data to
gauge whether the indicator has been met. For example, survey data usually require
statistical analysis (a sketch of one such analysis follows this list), while qualitative data may
be reviewed by research staff or community members.
Information Use: This column identifies the intended audience and use of the information.
For example, the findings could be used for monitoring project implementation, evaluating the
interventions, planning future project work, or reporting to policy makers or donors. This
column should also state ways that the findings will be formatted (e.g., tables, graphs, maps,
histograms, and narrative reports) and disseminated (e.g., Internet Web sites, briefings,
community meetings, listservs, and mass media).
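As an illustration of the statistical analysis the Data Analysis column might call for, here is a minimal Python sketch of a two-proportion z-test comparing a baseline and an endline value of a survey indicator. The choice of test and all figures are hypothetical, not prescribed by this manual:

import math

def two_proportion_z(x1, n1, x2, n2):
    # z statistic for H0: the baseline and endline proportions are equal.
    p_pool = (x1 + x2) / (n1 + n2)                         # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1/n1 + 1/n2))  # pooled standard error
    return (x2/n2 - x1/n1) / se

# Hypothetical figures: 120 of 400 children immunized at baseline, 210 of 400 at endline.
z = two_proportion_z(120, 400, 210, 400)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real change at the 5% significance level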
The indicator matrix should be developed in conjunction with those who will be using it.
Completing the matrix requires in-depth knowledge of the project and its context, and this
information is best provided by the local project team and partners. This involvement has the
benefit of improving data quality, because those involved will better understand what data they
are to collect and how they will collect them. Table 4 provides a sample indicator presented
against the column definitions given above.
Table 4: Example of an M&E Plan (Indicator Matrix)
Indicators: Example Outcome 1a. Percent of children younger than one year old who are fully immunized for polio (immunization coverage).
Indicator Definition: 1. Children refers to ages between 3 days and 1 year. 2. Fully immunized for polio refers to receiving the polio immunization vaccine according to MOH standards (1st dose at any time after birth, 2nd dose 1-2 months later, 3rd dose 6-12 months after the second vaccination). 3. Numerator: number of fully immunized children in the community. Denominator: total number of children in the community in the defined age category.
Methods/Sources: 1. Endline randomized household survey. 2. Community focus group discussions. 3. Community key informant interviews.
Frequency/Schedules: 1. Endline survey, depending on the project timeline. 2. School focus group discussions (FGDs) with teachers, students and administration at the end of the project. 3. Beginning of data collection according to the project timeline. 4. Endline survey questionnaire pending, depending on the project timeline.
Persons Responsible: External evaluation team.
Data Analysis: 1. Project management team, during the project reflection meeting. 2. Post-project meeting with implementing partners, facilitated by the project manager.
Information Use: 1. Project implementation and decision making with the community. 2. Monitoring of the project management process with implementing partners. 3. Impact evaluation to justify the intervention to the Ministry of Health and donors.
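The numerator/denominator logic in the indicator definition above can be computed mechanically once survey records are available. The Python sketch below is a hypothetical illustration; the record layout and field names are invented and would need to match your actual survey tool:

from collections import defaultdict

# One record per surveyed child in the defined age category (hypothetical data).
records = [
    {"sex": "F", "fully_immunized": True},
    {"sex": "M", "fully_immunized": False},
    {"sex": "F", "fully_immunized": True},
    {"sex": "M", "fully_immunized": True},
]

def coverage(rows):
    # Numerator: fully immunized children; denominator: all children surveyed.
    return 100.0 * sum(r["fully_immunized"] for r in rows) / len(rows) if rows else 0.0

print(f"Overall coverage: {coverage(records):.1f}%")

# Disaggregation by sex, as called for in the indicator definition column.
by_sex = defaultdict(list)
for r in records:
    by_sex[r["sex"]].append(r)
for sex, rows in sorted(by_sex.items()):
    print(f"Coverage ({sex}): {coverage(rows):.1f}%")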
4. Data Collection and Data Analysis
Data can be gathered and collected from a variety
of sources using a variety of methods. Some
methods are hands-on and highly participatory,
while others are more exclusive and rely on the
opinion of one or two specialist sources. In most
cases, it is best to use more than one data collection
method. The process of identifying quality data
sources and developing data collection methods
can be broken down into four sub-steps.
If there are no feasible or reliable sources available, then consider proxy indicators for which good
data are available. Major sources of data and information for project monitoring and evaluation
include:
Secondary Data. Useful information can be obtained from other research, such as surveys
and other studies previously conducted or planned at a time consistent with the project's M&E
needs, in-depth assessments, and project reports. Secondary data sources include
government planning departments, universities or research centres, international agencies,
other projects/programs working in the area, and financial institutions.
Sample Surveys. A survey based on a random sample taken from the beneficiaries or target
audience of the project is usually the best source of data on project outcomes and effects.
Although surveys are laborious and costly, they provide more objective data than qualitative
methods. Many donors expect baseline and endline surveys to be done if the project is large
and alternative data are unavailable. (A sample-size sketch follows this list.)
Project output data. Most projects collect data on their various activities, such as number of
people served and number of items distributed.
Qualitative studies. Qualitative methods that are widely used in project design and
assessment are: participatory rapid appraisal, mapping, key informant interviews, focus group
discussions, and observation.
Checklists. A systematic review of specific project components can be useful in setting
benchmark standards and establishing periodic measures of improvement.
External assessments. Project implementers as well as donors often hire outside experts to
review or evaluate project outputs and outcomes. Such assessments may be biased by brief
exposure to the project and over-reliance on key informants. Nevertheless, this process is
less costly and faster than conducting a representative sample survey and it can provide
additional insight, technical expertise, and a degree of objectivity that is more credible to
stakeholders.
Participatory assessments. The use of beneficiaries in project review or evaluation can be
empowering, building local ownership, capacity, and project sustainability. However, such
assessments can be biased by local politics or dominated by the more powerful voices in the
community. Also, training and managing local beneficiaries can take time, money, and
expertise, and it necessitates buy-in from stakeholders. Nevertheless, participatory
assessments may be worthwhile as people are likely to accept, internalize, and act upon
findings and recommendations that they identify themselves.
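For the sample surveys mentioned in the list above, a sample size is commonly estimated with Cochran's formula, n = z^2 * p * (1 - p) / e^2, optionally corrected for a finite population. The manual does not prescribe a formula; the Python sketch below shows this common default for illustration only:

import math
from typing import Optional

def sample_size(margin_of_error, p=0.5, z=1.96, population: Optional[int] = None):
    # Cochran's formula; p = 0.5 is the conservative (largest-sample) assumption.
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population:
        n = n / (1 + (n - 1) / population)  # finite-population correction
    return math.ceil(n)

print(sample_size(0.05))                   # about 385 respondents at 95% confidence
print(sample_size(0.05, population=2000))  # about 323 for a small population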
Participatory rapid appraisal (PRA). This uses community engagement techniques to
understand community views on a particular issue. It is usually done quickly and intensively
over a 2 to 3 week period. Methods include interviews, focus groups, and community mapping.
Questionnaire. A data collection instrument containing a set of questions organized in a
systematic way, as well as a set of instructions to the enumerator/interviewer about how to
ask the questions (typically used in a survey).
Rapid appraisal (or assessment). A quick, cost-effective technique to gather data
systematically for decision-making, using qualitative and quantitative methods, such as site
visits, observations, and sample surveys. This technique shares many of the characteristics
of participatory appraisal (such as triangulation and multi-disciplinary teams) and recognizes
that indigenous knowledge is a critical consideration for decision-making.
Self-administered survey. Written surveys completed by the respondent, either in a group
setting or in a separate location. Respondents must be literate (for example, it can be used to
survey teacher opinions).
Statistical data review. A review of population censuses, research studies, and other
sources of statistical data.
Survey. Systematic collection of information from a defined population, usually by means of
interviews or questionnaires administered to a sample of units in the population (e.g., persons,
beneficiaries, adults).
Visual techniques. Participants develop maps, diagrams, calendars, timelines, and other
visual displays to examine the study topics. Participants can be prompted to construct visual
responses to questions posed by the interviewers, for example by constructing a map of their
local area. This technique is especially effective where verbal methods can be problematic
due to low-literacy or mixed-language target populations, or in situations where the desired
information is not easily expressed in either words or numbers.
Written document review. A review of documents (secondary data) such as project records
and reports, administrative databases, training materials, correspondence, legislation, and
policy documents.
Table 5 below lists some factors and related questions to consider when selecting an appropriate
method.

Table 5: Factors to consider when selecting a data collection method
Cost: What is a reasonable cost for the team to incur for collecting the data? Some low-cost data collection methods limit the type of information that can be collected.
Speed: How much time is available and reasonable for data collection and processing? How will shorter collection times impact other data characteristics, such as accuracy and level of detail?
Geographic Diversity: What is the geographic area impacted by the program? How can data be effectively collected in hard-to-reach or widely dispersed geographic areas?
Demographic Diversity: How much diversity is present in the target audience (e.g., income, size of organization, ethnicity)? A target audience that is non-homogeneous on one or more factors may require a bigger sample size to capture impact accurately.
Level of Accuracy: How accurate should the data be? How accurate are the local government statistics? How do you balance level of accuracy against the cost of collecting data?
Reliability: Can comparable data be collected using this same method in the future?
Frequency: How often are the data to be collected? How does this impact data collection in terms of staff/partner resources and the costs associated with collecting the data?
Method: Case study. Tools: guidelines, checklists. Steps:
Define the problem and formulate the scope and objective of the query, with specific attention to the nature and context of the subject.
Identify samples to be used in the study; they should address the representational needs of the range of data being evaluated and show the relevance of the study.
Select the type of case most appropriate to the needs of the program.
Collect the data to be analyzed through a combination of sources.
Analyze the data, accounting for rival explanations, reproduction of findings, internal validity, plausibility, ability to generalize, and overall coherence.
Evaluate the results regarding ability to generalize and internal data validity.
Write the report and share the findings.

Method: Survey. Tools: questionnaire. Steps:
Define the areas of evaluation and develop applicable questions.
Establish a survey plan.
Develop a sampling protocol that includes a well-thought-out method of data collection, sampling techniques and method of analysis.
Develop the questionnaire.
Field test the questionnaire, individual questions and the time it takes to administer.
Distribute the questionnaire to respondents with a return date.
Provide follow-up contact with non-respondents.
Analyze the data and share the results with stakeholders.
Utilization and Reporting
Collecting information on project activities and achievements can serve many important functions,
such as improving the quality of services; ensuring accountability to beneficiaries, donors, and
other stakeholders; and advancing learning. Project reporting is closely related to M&E work,
since data are needed to support the major findings and conclusions presented in a project report.
Often the focus and frequency of M&E processes are determined by reporting requirements and
schedules.
CHAPTER 2
M&E POLICY of [Name of Org]
[Name of Org] is committed to ensuring transparency, accountability and effectiveness in all its
development efforts, projects and programs.
To ensure transparency, accountability and effectiveness, [Name of Org] requires its
management to establish and strengthen an M&E function at the organizational level.
[Name of Org] endorses allocating the necessary human and capital resources required for the
establishment and proper functioning of its M&E function.
[Name of Org] firmly believes that our program management practices should be guided by
certain M&E principles. [Name of Org] requires its management to adhere to these principles.
Effectiveness:
the project results represent the most desirable changes in the lives of the target beneficiaries
the intervention logic is defined correctly
the project outputs are significantly contributing towards the project purpose
the project inputs are identified correctly
Efficiency:
the project inputs are organized and utilized efficiently to ensure best value for money (the
project benefits reach the maximum number of beneficiaries with the available resources)
the project inputs are the best available resources to achieve the desired results
the project targets are achieved on planned timelines
Impact:
the project is contributing towards the solution of the subject problem
the project is contributing towards the long-term goals
the changes caused or influenced by the project are sustained after the life of the project
Sustainability:
the project beneficiaries and partners are enabled to sustain and augment the changes
caused or influenced by the project
the reforms pursued by the project in policies, administrative structures, systems, processes
and practices are institutionalized within respective entities.
the project is not producing any changes (intentionally or unintentionally) which are harmful
to the target beneficiaries or to society at large.
Inclusiveness:
the stakeholders especially beneficiaries are included in designing, planning and
implementation processes.
no team member is excluded from management processes on the basis of religious, ethnic,
sectarian or any other identity.
no potential beneficiary is excluded from availing the benefits on the basis of religious, ethnic,
sectarian or any other identity.
Accountability:
the stakeholders especially beneficiaries are made part of the monitoring processes.
A feedback/ complaint system is established and activated for the beneficiaries.
responsibilities of stakeholders and staff are clearly identified in ways that avoid conflicts of
interest between implementation and monitoring roles.
Reporting mechanisms are clearly established specifying the timelines and nature of required
information.
All programmatic decisions/ approvals are recorded adequately.
A. [Name of Org] believes that achieving results is the central thrust of our development efforts.
[Name of Org] hence requires its M&E function to ensure continuous information gathering,
assessment, analysis, learning and reporting around results.
B. [Name of Org] requires its management to constitute a Monitoring and Evaluation Committee
(MEC). The Monitoring and Evaluation Committee will be the custodian of the [Name of Org]
M&E function. The following Terms of Reference (ToRs) spell out the composition and
responsibilities of the [Name of Org] Monitoring and Evaluation Committee (see below).
Terms of Reference of
Monitoring and Evaluation Committee (MEC)
(ToRs of MEC)
1. The BoD/ BoG/ BoT of [Name of Org] notifies the constitution of the MEC, which will be
the custodian of the [Name of Org] M&E function.
2. The MEC is constituted with one representative each from the following segments:
a. BoD/ BoG/ BoT members
b. stakeholders/ partners
c. beneficiaries
d. civil society, e.g. academia, media, social activists, etc.
e. the M&E function of [Name of Org]
3. The representative of the M&E function of [Name of Org] is the ex-officio secretary of
the MEC.
4. The MEC meets quarterly (or as and when deemed necessary).
5. The MEC is responsible for ensuring that the M&E principles are adhered to by
achieving the respective standards.
6. The MEC is responsible for ensuring that every project complies with [Name of Org]
monitoring procedures as well as project-specific monitoring requirements if they arise.
7. The MEC is responsible for furnishing a Quarterly Monitoring Note (QMN) extracting
the findings of all M&E activities. The QMN should outline recommendations for the
BoD/ BoG/ BoT requiring strategic adaptations and/or implementation-level changes.
The QMN may include recommendations such as revisiting the intervention logic or
revising the budget, indicators, targets, activities, processes and spatial coverage, etc.
8. The secretary of the MEC is responsible for drafting the QMN in consultation with other
members and project/ program teams. The MEC is responsible for finalizing the QMN
and submitting it to the BoD/ BoG/ BoT.
CHAPTER 3
M&E POLICY:
STANDARD PROCEDURES AND METHODOLOGY
Standard Procedures and Methodology
The [Name of Org] M&E system strives to maintain procedures which ensure effectiveness, transparency and accountability in program
management practices at various stages of the program cycle. The table below outlines a set of M&E procedures and identifies the type of
information required to perform these procedures. Tentative timelines and responsibilities to perform these M&E procedures are also proposed,
which may be adapted as required.
# | M&E Procedure | Timeline | Responsibility
19 | Compiling/ consolidating/ collating progress against all plans | At required reporting intervals | M&E Function
20 | Reporting key lessons to further improve program design and delivery | Quarterly | M&E Function
21 | Periodic internal evaluations | Annual | M&E Function
22 | External evaluations | Mid-term/ project end | Management and M&E Function
23 | M&E institutional assessment of the organization | Annual | M&E Function with Program team/s

Step 4: Reporting of results
Plan for reporting results. This will require embedding the reporting requirements, timelines and
responsibilities into the project monitoring plans.
Identify and know your audience. It is important to know the requirements and interests of the
target audience, which will determine the purpose of reporting results. In general, results are
mainly reported for accountability, advocacy (including awareness raising) and/or participatory
decision-making purposes: to seek financial support, commitment for action, cooperation and/or
improved coordination. Identifying the right purpose further guides us in knowing:
What type of information (e.g. numeric or subjective) is required for the desired reporting?
How much information is required? (It is very important to limit our wish list: since information
gathering involves time, effort and resources, it should be determined how much information
would suffice for our reporting requirements.)
What writing style should be adopted, and how should the results be narrated?
Form a team for reporting/ communicating results. Reporting/ communicating results in ways
preferred by the target audience can earn considerable support and response. It is recommended
to pool multiple professional experts into the reporting team. Involve people with skills in presenting
statistical data, writing engaging narratives and formatting, and, most importantly, people with the
ability to interpret results by drawing conclusions from M&E findings. (If the organization does not
have the in-house capacity to pool the abovementioned skills, involve volunteer students, teachers
and social workers.)
Raise consensus on M&E findings. Discuss the M&E findings with the stakeholders and reach
agreed conclusions. This will help enhance the accuracy of the M&E findings.
Draft a narrative on performance against results. The narrative should be based on the agreed
M&E findings and may include the following elements:
the main idea behind the output and how the output is linked with the project outcome/s
progress on the output delivered (quantity, spatial coverage, beneficiaries, etc.)
the immediate effect of the output (how the output has contributed towards achieving the desired
change, e.g. change in behavior, practices, knowledge, skills, capacities, etc.)
examples that qualify the reported change
whether the outputs delivered took into consideration inclusion, gender balance, and participation
of and accountability to the beneficiaries and stakeholders
the activities performed to deliver these outputs and the implementation methodology/ process
users' perception of the quality and contribution of the output
the efficiency of resources invested
major learning outcomes (what went well and what went wrong)
Note: these are generic elements of reporting results; the emphasis and order of these elements
may vary in different reporting environments.
Share the narrative with the concerned team members and seek feedback. Incorporate the
feedback that enhances the clarity of the narrative and the presentation of the information, so as
to capture the interest of the target audience as well as serve the intended purpose.
Finalize the narrative and share it with the target audience.
ANNEX 1
MONITORING AND EVALUATION PLAN
[Insert logo(s)]
[Insert Name of Project / Organization]
[Dates of project duration]
Acronyms
List relevant acronyms and terms used by your project.
Acronym Definition
[Organization/Project] Overview
Vision Statement
List your organization's vision statement. This helps all users of this document
link project activities and your work in monitoring and evaluation to your more
fundamental reason for coming together.
If you're not sure of your organization's vision statement, go find out! Does it
still apply? Do you think it needs revision?
Project Description
This is your opportunity to describe in summary form the main points of your
project, activity, or organization. Provide relevant background information on
the work that you are doing, including any pertinent demographic information
on the public health issue you are addressing.
Goal: What is the goal of your project? This is the main goal that drives all of
the activities and related sub-activities.
Objectives: What are the specific objectives that you have outlined as steps to
accomplish your desired goal?
Activities: The activities are what you do to carry out your objectives.
Level:
Inputs: quantifiable resources going into your activities; the things you budget for.
Activities: what you do to accomplish your objectives.
Outputs: immediate results from your activity (e.g. people trained, services provided).
Outcomes: longer-term change in knowledge, attitude, behaviour, etc.; related to the programme goal.
Impact: long-term, population-level change; can relate to the programme or the organization's vision/ mission statement.

Indicator (example): a training workshop.
Inputs: # of training manuals; amount of money spent on the training.
Outputs: # of people trained; # of trainings conducted.
Outcomes: measure of change in the quality of care provided to patients infected with Influenza A/H5N1.
Impact: ILI case fatality ratio.
Data flow (by indicator, for data elements and indicators):
What are we collecting?
Who collects this data, from where, and how often?
How are the data aggregated, and where are the data stored? (See the sketch below.)
What opportunities are there to transform the data into more meaningful information for further review? Are there other pieces of information available?
To whom will this information be reported? (Link to Data Use Template 4.2.)
How can this information be used to make informed decisions? List specific opportunities for use.

Data use (list by indicator):
What are the multiple uses for the information generated from this indicator?
Who will you want to communicate this information to?
How will you communicate this information?
How should this information be formatted to best reach the intended user?
What steps must be taken to ensure that this information is used? Is any follow-up needed?
Feedback?
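As an illustration of the aggregation step flagged above, the following minimal Python sketch (all site names, fields and figures are hypothetical) rolls raw data elements reported by field sites up into one value per reporting period:

from collections import defaultdict

# One row per site per month from the routine monitoring system (hypothetical).
submissions = [
    {"site": "Clinic A", "month": "2024-01", "children_immunized": 40},
    {"site": "Clinic B", "month": "2024-01", "children_immunized": 25},
    {"site": "Clinic A", "month": "2024-02", "children_immunized": 52},
]

# Aggregate the data element across sites, by reporting period.
totals = defaultdict(int)
for row in submissions:
    totals[row["month"]] += row["children_immunized"]

for month, value in sorted(totals.items()):
    print(month, value)  # aggregated values, ready for indicator reporting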
Stakeholder Analysis
For each stakeholder (external and internal), record:
Stakeholder background (knowledge, experience, etc.)
Stakeholder demographic characteristics
What information is required? (Stakeholder needs and interests)
Why is the information required?
When is the information required?
How will the information be communicated? (Format)
Data quality (list by indicator):
Name of Indicator
Data Quality Issues: list possible risks to the quality of the data collected. Consider the five criteria for data quality: validity, reliability, integrity, precision, and timeliness.
Actions Taken or Planned to Address this Limitation: how will the identified possible risks to the quality of data be managed?
Additional Comments
Evaluation can help you learn additional information about programmes, such as
activity outcomes and the quality of services provided, that cannot be gained from a
routine monitoring system.
Process Evaluation
What interventions can work in this context (interventions that have proven effective in
this context)? Are we doing the right things? Are we doing them right, and on a large
enough scale?
Outcome Evaluation
Is the intervention working, is it making the intended difference in outcomes such as
changes in knowledge and behaviour?
Impact Evaluation
Are our combined efforts effecting change at the population level?
Additional Information
List any evaluation activities that you are currently implementing and the evaluation
questions they are addressing.
Note: Not every indicator requires a complete set of information filled out in the
indicator information sheets. You will find that for INPUT and OUTPUT data, the
information and detail you will need to manage is much less.
Name of Indicator: As simply as possible, insert the name of this indicator.
Result to Which Indicator Responds: The specific result that this indicator corresponds to.
Level of Indicator: Does this indicator respond to an INPUT, OUTPUT, OUTCOME, or IMPACT
level result?
Description
Definition: Unpack the specific definition of this indicator as much as possible. Spell out nearly
every word, so that all who come across this indicator have the same complete, specific
understanding of what it is intended to measure.
Unit of Measurement and Disaggregations: In what unit will this indicator be captured, and are
there any disaggregations (male/female, age, etc.)?
Plan for Data Acquisition
Data Source: Where are the data collected? (Where do the data originate?)
Frequency and Timing of Data Acquisition: How often are the data collected?
Individual Responsible: Who (what position) is responsible for collecting the data?
Location of Data Storage: Where, specifically (which office, which drawer), are the raw data stored?
Data Quality Issues
Known Data Limitations and Significance: Are there identified threats to the quality of this data?
Consider: validity / reliability / integrity / precision / timeliness.
Actions Taken or Planned to Address this Limitation: What steps have you taken to manage the
possible threats to data quality?
Internal Data Quality Assessments: Have you performed your own Data Quality Assessment?
Data Analysis: Do the data from this indicator require a specific plan for analysis? If yes, please
describe. If not, please delete this section for this indicator.
Review of Data: Do the data from this indicator require a specific plan for review (internal/
external) before dissemination? If not, please delete this section.
Using Data: Where must the data from this indicator go? Funders? Internal/ external decision
makers? Who needs this information to make decisions?
Who is involved in your M&E team? Identify all individuals involved with various
aspects of monitoring and evaluation in your organization: data collectors, information
system personnel, programme managers, directors, etc. This team should meet on a
regular basis to check progress on planned M&E activities and to use information
from your monitoring and evaluation systems to inform decision making within your
organization.
List each key M&E activity (e.g. survey, focus group, database software development, M&E plan development, dissemination, data quality assessment) as a row (M&E Activity 1, M&E Activity 2, ..., Total), with the following cost columns:
Key M&E Activities | Salaries | Consultant Costs | Travel | Meetings | Documentation | Dissemination | Other Direct Costs (e.g. computers) | Activity Subtotal
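The arithmetic behind this budget table can be automated. The sketch below is a hypothetical Python illustration (activity names and figures are invented): each activity's subtotal sums its cost categories, and the grand total sums the activities.

COST_CATEGORIES = ["salaries", "consultant_costs", "travel", "meetings",
                   "documentation", "dissemination", "other_direct"]

# Hypothetical budget rows; one dict of category costs per M&E activity.
budget = {
    "M&E Activity 1 (baseline survey)": {
        "salaries": 3000, "consultant_costs": 5000, "travel": 1200,
        "meetings": 400, "documentation": 300, "dissemination": 0,
        "other_direct": 800,
    },
    "M&E Activity 2 (data quality assessment)": {
        "salaries": 1500, "consultant_costs": 2000, "travel": 600,
        "meetings": 200, "documentation": 150, "dissemination": 100,
        "other_direct": 0,
    },
}

grand_total = 0
for activity, costs in budget.items():
    subtotal = sum(costs.get(c, 0) for c in COST_CATEGORIES)  # Activity Subtotal column
    grand_total += subtotal
    print(f"{activity}: {subtotal}")
print(f"Total: {grand_total}")  # Total row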