2020 Guide On Monitoring and Evaluation For Beginners

This document provides a 10-step guide for designing a monitoring and evaluation (M&E) system for projects. It begins by outlining the first 5 steps: 1) define the scope and purpose of the M&E system, 2) define evaluation questions, 3) identify monitoring questions, 4) identify indicators and data sources to answer the questions, and 5) identify responsibilities for data collection, storage, reporting, budget, and timelines. The next 5 steps continue with guidance on developing data collection tools, establishing a baseline, setting targets, analyzing data, and reporting and using evaluation results. The document aims to help those new to M&E develop an effective system for collecting relevant data and evaluating project performance and impact.

Monitoring and

Evaluation
Beginner's Guide 2020

COACH ALEXANDER
Who is this Book For?

A Beginner's Guide to Monitoring and Evaluation


This ebook has been designed for those who are completely new to Monitoring and Evaluation and would like to
learn the step-by-step process of conducting M&E activities in their projects. It can also be useful to those who have
been doing monitoring and evaluation for some time and simply want to refresh their understanding of M&E.

I hope you enjoy reading everything in this ebook.

Regards

COACH ALEXANDER

CHAPTER 1: What is monitoring and evaluation (M&E)?

This section provides a brief introduction to what M&E is.

Monitoring is the systematic and routine collection of information from projects and programmes for four main
purposes:

To learn from experiences to improve practices and activities in the future;


To have internal and external accountability of the resources used and the results obtained;
To take informed decisions on the future of the initiative;
To promote empowerment of beneficiaries of the initiative.

Monitoring is a periodically recurring task already beginning in the planning stage of a project or programme.
Monitoring allows results, processes and experiences to be documented and used as a basis to steer decision-
making and learning processes. Monitoring is checking progress against plans. The data acquired through
monitoring is used for evaluation.

Evaluation is assessing, as systematically and objectively as possible, a completed project or programme (or a phase
of an ongoing project or programme that has been completed). Evaluations appraise data and information that
inform strategic decisions, thus improving the project or programme in the future.

Evaluations should help to draw conclusions about five main aspects of the intervention:

relevance
effectiveness
efficiency
impact
sustainability
Information gathered in relation to these aspects during the monitoring process provides the basis for the
evaluative analysis.

Monitoring & Evaluation


M&E is an embedded concept and constitutive part of every project or programme design (“must be”). M&E is not an
imposed control instrument by the donor or an optional accessory (“nice to have”) of any project or programme.
M&E is ideally understood as dialogue on development and its progress between all stakeholders.



Why Monitoring and Evaluation Matters – Who Are We Doing This For?

In general, monitoring is integral to evaluation. During an evaluation, information from previous monitoring
processes is used to understand the ways in which the project or programme developed and stimulated change.
Monitoring focuses on the measurement of the following aspects of an intervention:

On the quantity and quality of the implemented activities (outputs: What do we do? How do we manage our activities?)
On the processes inherent to a project or programme (outcomes: What were the effects/changes that occurred as a result of your intervention?)
On the processes external to an intervention (impact: Which broader, long-term effects were triggered by the implemented activities in combination with other environmental factors?)
The evaluation process is an analysis or interpretation of the collected data which delves deeper into the
relationships between the results of the project/programme, the effects produced by the project/programme and
the overall impact of the project/programme.

Private companies have many indicators for analyzing performance. At least a couple of them are familiar to
everyone, regardless of their profession. Let’s use profit as an example: this variable can be calculated, it is comparable
and it depicts a company’s success. Shareholders can take profit into account as valuable information when
analyzing a company’s position. Similarly, development organizations have established their own systems to increase
transparency and accountability of work, track progress and demonstrate the impact of the undertaken activities.


M&E tools used in international development aim to support decision-making processes and provide all relevant
parties with the necessary information, allowing them to assess performance. During the monitoring process the
organization collects data, both quantitative and qualitative, that indicate progress in reaching previously
selected objectives. These data are regularly collected and provide a basis for evaluation and learning. Evaluation
provides an assessment of ongoing and completed activities, determines whether the objectives have been fulfilled and under
which assumptions, circumstances and conditions, and generates lessons that can be applied in future work.

Although M&E has been conducted for many years, in recent years there have been increased efforts to put this
framework into practice and connect it with organizations’ core activities. Some of the reasons are the rising need to
demonstrate the effective use of funds and the implementation of the adaptive management approach. A KPMG study*
showed that the motivation for monitoring a project is driven by the intention to improve development impact,
ensure that lessons are learned from existing programs and be accountable toward funders. The research
also indicated that, although various evaluation techniques are available, three techniques
are most commonly used: logical frameworks, performance indicators and focus groups.

CHAPTER 2: 10 Steps To Design a Monitoring and Evaluation (M&E) System

Step 1: Define the scope and purpose

This step involves identifying the evaluation audience and the purpose of the M&E system. M&E purposes include
supporting management and decision-making, learning, accountability and stakeholder engagement.

Will the M&E be done mostly for learning purposes with less emphasis on accountability? If this is the case, then the
M&E system would be designed in such a way as to promote ongoing reflection for continuous programme
improvement.

If the emphasis is more on accountability, then the M&E system could collect and analyse data with more rigor
and time it to coincide with the reporting calendar of a donor.


It is important that the M&E scope and purpose be defined beforehand, so that the appropriate M&E system is
designed. It is of no use to have an M&E system that collects mostly qualitative data on an annual basis while your
‘evaluation audience’ (read: 'donor') is keen to see the quantitative results of Randomised Controlled Trials (RCTs)
twice a year.

'Be on the same page as the evaluation audience'

Step 2: Define the evaluation questions


Evaluation questions should be developed up-front and in collaboration with the primary audience(s) and other
stakeholders to whom you intend to report. Evaluation questions go beyond measurements to ask higher-order
questions, such as whether the intervention was worth it or whether the same result could have been achieved in another way.

Step 3: Identify the monitoring questions


For example, for an evaluation question pertaining to 'Learnings', such as "What worked and what did not?", you may
have several monitoring questions such as "Did the workshops lead to increased knowledge on energy efficiency in
the home?" or "Did the participants have any issues with the training materials?".


The monitoring questions will ideally be answered through the collection of quantitative and qualitative data. It is
important not to start collecting data without thinking about the evaluation and monitoring questions, as this may
lead to collecting data just for the sake of collecting data (data that provide no relevant information to the programme).

Step 4: Identify the indicators and data sources


In this step you identify what information is needed to answer your monitoring questions and where this
information will come from (data sources). It is important to consider data collection in terms of the type of data
and the type of research design. Data sources could be primary, such as the participants themselves, or
secondary, such as existing literature. You can then decide on the most appropriate method to collect the
data from each data source.

“Data, data and more data”

Step 5: Identify who is responsible for data collection, data storage, reporting, budget and timelines
It is advisable to assign responsibility for data collection and reporting so that everyone is clear about their roles
and responsibilities.

Collection of monitoring data may occur regularly over short intervals, or less regularly, such as half-yearly or
annually. Likewise the timing of evaluations (internal and external) should be noted.

You may also want to note any requirements that are needed to collect the data (staff, budget etc.). It is advisable to
have some idea of the cost associated with monitoring, as you may have great ideas to collect a lot of information,
only to find out that you cannot afford it all.

Additionally, it is good to determine how the collected data will be stored. A centralised electronic M&E database
should be available for all project staff to use. The M&E database options range from a simple Excel file to the use of
comprehensive M&E software such as LogAlto.

LogAlto is a user-friendly cloud-based M&E software that stores all information related to the programme such as
the entire log frame (showing the inputs, activities, outputs, outcomes) as well as the quantitative and qualitative
indicators with baseline, target and milestone values. LogAlto also allows for the generation of tables, scorecards,
charts and maps. Quarterly Progress reports can also be produced from LogAlto.
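
Whatever tool is chosen, the underlying idea is simply a shared, structured record of each indicator with its baseline, target and latest value. The sketch below illustrates this with a plain CSV file written in Python; the file name, column names and example figures are assumptions for illustration only, not LogAlto's data model or any donor's required format.

```python
# Minimal sketch of a centralised indicator tracker kept as a plain CSV file.
# Column names and example values are illustrative assumptions.
import csv

FIELDS = ["indicator", "baseline", "target", "milestone", "latest_value", "last_updated"]

records = [
    {
        "indicator": "Number of trainings held with health providers",
        "baseline": 0,
        "target": 10,
        "milestone": 5,
        "latest_value": 3,
        "last_updated": "2020-06-30",
    },
]

# Write the shared tracker so all project staff work from the same file.
with open("me_indicators.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(records)

# Reading it back gives a quick progress view per indicator.
with open("me_indicators.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(f"{row['indicator']}: {row['latest_value']} of {row['target']}")
```

Even a simple file like this only works if everyone updates the same copy, which is why a centralised location (shared drive or dedicated software) matters more than the specific tool.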

Step 6: Identify who will evaluate the data and how it will be reported
In most programmes there will be an internal and an independent evaluation (conducted by an external consultant).


For an evaluation to be used (and therefore useful) it is important to present the findings in a format that is
appropriate to the audience. A 'Marketing and Dissemination Strategy’ for the reporting of evaluation results should
be designed as part of the M&E system.

‘Have a strategy to prevent persons from falling asleep during the presentation of evaluation findings’

Step 7: Decide on standard forms and procedures

Once the M&E system is designed there will be a need for planning templates, designing or adapting information
collection and analysis tools, developing organisational indicators, developing protocols or methodologies for
service-user participation, designing report templates, developing protocols for when and how evaluations and
impact assessments are carried out, developing learning mechanisms, designing databases, and the list goes on
(Simister, 2009).

However, there is no need to re-invent the wheel. There may already be examples of best practice within an
organisation that could be exported to different locations or replicated more widely. This leads to step 9.


Step 8: Use the information derived from Steps 1-7 above to fill in the 'M&E System' template
You can choose from any of the templates presented in this guide to capture the information. Remember, they are
templates, not cast in stone. Feel free to add extra columns or categories as you see fit.

Step 9: Integrate the M&E system horizontally and vertically

Where possible, integrate the M&E system horizontally (with other organisational systems and processes) and
vertically (with the needs and requirements of other agencies) (Simister, 2009).

Try as much as possible to align the M&E system with existing planning systems, reporting systems, financial or
administrative monitoring systems, management information systems, human resources systems or any other
systems that might influence (or be influenced by) the M&E system.

Step 10: Pilot and then roll-out the system

Once everything is in place, the M&E system may first be rolled out on a small scale, perhaps just at the Country
Office level. This will give the opportunity for feedback and for the ‘kinks to be ironed out’ before a full-scale launch.

Staff at every level should be aware of the overall purpose(s), general overview and the key focus areas of the
M&E system.

It is also good to inform persons on which areas they are free to develop their own solutions and in which areas they
are not. People will need detailed information and guidance in the areas of the system where everyone is expected
to do the same thing, or carry out M&E work consistently.

This could include guides, training manuals, mentoring approaches, staff exchanges, interactive media, training days
or workshops.

In conclusion, my view is that a good M&E system should be robust enough to answer the evaluation questions,
promote learning and satisfy accountability needs without being so rigid and inflexible that it stifles the emergence
of unexpected (and surprising!) results.



Monitoring and Evaluation Courses on Udemy

Enroll for the Monitoring and Evaluation courses on Udemy in 2020: Project Monitoring and Evaluation for Beginners (beginner level), the professional-level course, and Monitoring and Evaluation written test questions and answers.


CHAPTER 3: THE 12 KEY COMPONENTS OF M&E SYSTEMS

Monitoring and Evaluation Systems require twelve main components in order to function effectively and efficiently
to achieve the desired results. These twelve M&E components are discussed in detail below:

1. Organizational Structures with M&E Functions

The adequate implementation of M&E at any level requires a unit whose main purpose is to coordinate
all the M&E functions at that level. While some entities prefer to have an internal organ to oversee their M&E functions,
others prefer to outsource such services. This component emphasizes the need for an M&E unit within the
organization, how elaborately its roles are defined, how adequately its roles are supported by the organization's
hierarchy and how other units within the organization are aligned to support the M&E functions within the
organization.

2. Human Capacity for M&E

An effective M&E implementation requires not only that there is adequate staff employed in the M&E unit, but also that
the staff within this unit have the necessary M&E technical know-how and experience. As such, this component
emphasizes the need to have the human resources that can run the M&E function by hiring employees who
have adequate knowledge and experience in M&E implementation, while at the same time ensuring that the M&E
capacity of these employees is continuously developed through training and other capacity-building initiatives so
that they keep up with current and emerging trends in the field.

3. Partnerships for Planning, Coordinating and Managing the M&E System

A prerequisite for successful M&E systems, whether at organizational or national level, is the existence of M&E
partnerships. Partnerships are important for organizations because they complement the organization’s M&E
efforts in the M&E process and act as a source of verification for whether M&E functions align with the intended
objectives. They also serve auditing purposes, whereby line ministries, technical working groups, communities and
other stakeholders are able to compare M&E outputs with reported outputs.

4. M&E frameworks/Logical Framework

The M&E framework outlines the objectives, inputs, outputs and outcomes of the intended project and the
indicators that will be used to measure all of these. It also outlines the assumptions that the M&E system will adopt.
The M&E framework is essential as it links the objectives with the process and enables the M&E expert to know what to
measure and how to measure it.


5. M&E Work Plan and costs

Closely related to the M&E framework is the M&E work plan and costs. While the framework outlines objectives,
inputs, outputs and outcomes of the intended project, the work plan outlines how the resources that have been
allocated for the M&E functions will be used to achieve the goals of M&E. The work plan shows how personnel, time,
materials and money will be used to achieve the set M&E functions.

6. Communication, Advocacy and Culture for M&E

This refers to the presence of policies and strategies within the organization to promote M&E functions. Without
continuous communication and advocacy initiatives within the organization to promote M&E, it is difficult to
entrench an M&E culture within the organization. Such communication and strategies need to be supported by the
organization's hierarchy. The existence of an organizational M&E policy, together with the continuous use of the
M&E system's outputs on communication channels, are some of the ways of improving communication, advocacy and
culture for M&E.

7. Routine Programme Monitoring

M&E consists of two major aspects: monitoring and evaluation. This component emphasizes the importance of
monitoring. Monitoring refers to the continuous and routine data collection that takes place during project
implementation. Data need to be collected and reported on a continuous basis to show whether the project
activities are driving towards meeting the set objectives. Routine data gathering and analysis also need to be
integrated into the program activities.

8. Surveys and Surveillance

This mainly concerns national-level M&E plans and entails how frequently relevant national surveys are
conducted in the country. National surveys and surveillance need to be conducted frequently and used to evaluate
the progress of related projects. For example, for HIV and AIDS national M&E plans, HIV-related
surveys need to be carried out at least bi-annually and used to measure HIV indicators at the national level.


9. National and Sub-national databases

The data world is gradually becoming open source. More and more entities are seeking data that are relevant for
their purposes. The need for M&E systems to make data available can therefore not be over-emphasized. This
implies that M&E systems need to develop strategies of submitting relevant, reliable and valid data to national and
sub-national databases.

10. Supportive Supervision and Data Auditing

Every M&E system needs a plan for supervision and data auditing. Supportive supervision implies that an individual
or organization is able to regularly supervise the M&E processes in such a way that the supervisor offers suggestions
for improvement. Data auditing implies that the data are subjected to verification to ensure their reliability and
validity. Supportive supervision is important since it ensures the M&E process is run efficiently, while data auditing
is crucial since all project decisions are based on the data collected.

11. Evaluation and Research

One aspect of this component is research; the other is evaluation. Evaluation of projects is done at specific times, most often
at mid-term and at the end of the project. Evaluation is an important component of M&E as it establishes whether the
project has met the desired objectives. It also provides for organizational learning and the sharing of successes with
other stakeholders.



12. Data Dissemination and Use

The information that is gathered during the project implementation phase needs to be used to inform future
activities, either to reinforce the implemented strategy or to change it. Additionally, the results of both monitoring and
evaluation outputs need to be shared with relevant stakeholders for accountability purposes. Organizations must
therefore ensure that there is an information dissemination plan, either in the M&E plan, the work plan or both.



CHAPTER 4: What is a LogFrame?

A LogFrame is another name for a Logical Framework, a planning tool consisting of a matrix which provides an
overview of a project’s goal, activities and anticipated results. It provides a structure to help specify the
components of a project and its activities and to relate them to one another. It also identifies the measures by
which the project’s anticipated results will be monitored.

The logical framework approach was developed in the late 1960s to assist the US Agency for International
Development (USAID) with project planning. Now most large international donor agencies use some type of logical
or results framework to guide project design.

Logical Framework Structure

A Logical Framework (or LogFrame) consists of a matrix with four columns and four or more rows which
summarize the key elements of the project plan, including:

The project's hierarchy of objectives. The first column captures the project’s development pathway or intervention logic: basically, how an objective or result will be achieved. Each objective or result should be explained by the objective or result immediately below. Although different donors use different terminology, a LogFrame typically summarizes the following in its first column:
The GOAL / OVERALL OBJECTIVE / DEVELOPMENT OBJECTIVE
The PURPOSE / IMMEDIATE OBJECTIVE
The OUTPUTS
The ACTIVITIES

In developing a logframe, it is very important to pay attention to how the objectives and results are formulated. For
reference, see Catholic Relief Services' (CRS) Guidance for Developing Logical and Results Frameworks.

The second and third columns summarize how the project’s achievements will be monitored and consist of the
following:


Indicators: a quantitative or qualitative measurement which provides a reliable way to measure changes connected to an intervention. In essence, “a description of the project’s objectives in terms of quantity, quality, target group(s), time and place”.
Sources of verification: describes the information sources necessary for data compilation that allow the calculation of the indicators.

Developing objectively verifiable indicators must also be a very careful process. USAID provides tips for
selecting performance indicators.

Lastly, the final column lists the following:

Assumptions: the external factors or conditions outside of the project’s direct control that are necessary to ensure the project’s
success.

Example of the Logical Framework Structure and Intervention Logic

Logical Frameworks can look very different from one another depending on a donor's requirements and the design
team. The terminology used also differs between donors. See “The Rosetta Stone of Logical Frameworks.” Other
similar tools include the Logic Model, which is also an overall summary of a project plan and its anticipated results or
outcomes; a sketch of how a logframe's contents might be laid out is shown below.
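
As a rough illustration only (not the original example graphic, and not any donor's required terminology), the following Python sketch captures the four columns of a logframe as a simple data structure. The indicator and data-source wording is borrowed from examples used later in this guide; the narratives and assumptions are hypothetical.

```python
# A logframe sketched as a dictionary: one row per level of the objective
# hierarchy, one entry per column. All wording is illustrative.
logframe = {
    "goal": {
        "narrative": "Improved adolescent sexual and reproductive health",  # hypothetical
        "indicators": ["Percent of adolescents reporting condom use"],
        "sources_of_verification": ["DHS or other population-based survey"],
        "assumptions": ["National health policy remains supportive"],       # hypothetical
    },
    "purpose": {
        "narrative": "Youth-friendly services are used by adolescents",     # hypothetical
        "indicators": ["Number and percent of trained providers offering family planning to youth"],
        "sources_of_verification": ["Facility logs"],
        "assumptions": ["Trained providers remain in post"],                # hypothetical
    },
    "outputs": {
        "narrative": "Health providers trained in youth-friendly services", # hypothetical
        "indicators": ["Number of trainings held with health providers"],
        "sources_of_verification": ["Training attendance sheets"],
        "assumptions": ["Providers are released by facilities to attend"],  # hypothetical
    },
    "activities": {
        "narrative": "Run provider training workshops",                     # hypothetical
        "indicators": ["Workshops delivered per quarter"],
        "sources_of_verification": ["Activity sheets"],
        "assumptions": ["Training budget is disbursed on time"],            # hypothetical
    },
}

# Each level should be explained by the level below it
# (activities -> outputs -> purpose -> goal).
for level in ["goal", "purpose", "outputs", "activities"]:
    print(f"{level.upper():<12} {logframe[level]['narrative']}")
```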

Strengths of the Logical Framework Approach

It draws together all key components of a planned activity into a clear set of statements to provide a convenient overview of a project.
It sets up a framework for monitoring and evaluation where planned and actual results can be compared.
It anticipates project implementation and helps plan out development activities.
Weaknesses of the Logical Framework Approach

It may cause rigidity in program management.
It is not a substitute for other technical, economic, social and environmental analyses.
LogFrames are often developed after the activity has been designed rather than used as the basis for design.
It can stifle innovative thinking and adaptive management.



CHAPTER 5: Understanding a Monitoring and Evaluation Plan

What is a Monitoring and Evaluation Plan?


A monitoring and evaluation (M&E) plan is a document that helps to track and assess the results of the interventions
throughout the life of a program. It is a living document that should be referred to and updated on a regular basis.
While the specifics of each program’s M&E plan will look different, they should all follow the same basic structure
and include the same key elements.

An M&E plan will include some documents that may have been created during the program planning process, and
some that will need to be created new. For example, elements such as the logic model/logical framework, theory of
change, and monitoring indicators may have already been developed with input from key stakeholders and/or the
program donor. The M&E plan takes those documents and develops a further plan for their implementation.

Why develop a Monitoring and Evaluation Plan?


It is important to develop an M&E plan before beginning any monitoring activities so that there is a clear plan for
what questions about the program need to be answered. It will help program staff decide how they are going to
collect data to track indicators, how monitoring data will be analyzed, and how the results of data collection will be
disseminated both to the donor and internally among staff members for program improvement. Remember, M&E
data alone is not useful until someone puts it to use! An M&E plan will help make sure data is being used efficiently
to make programs as effective as possible and to be able to report on results at the end of the program.

Who should develop a Monitoring and Evaluation Plan?


An M&E plan should be developed by the research team or staff with research experience, with inputs from
program staff involved in designing and implementing the program.



CHAPTER 6: How to Develop a Monitoring and Evaluation Plan

When should a Monitoring and Evaluation Plan be developed?


An M&E plan should be developed at the beginning of the program when the interventions are being designed. This
will ensure there is a system in place to monitor the program and evaluate success.


Steps
Step 1: Identify Program Goals and Objectives
The first step in creating an M&E plan is to identify the program goals and objectives. If the program already has
a logic model or theory of change, then the program goals are most likely already defined. If not, however, the M&E
plan is a great place to start defining them.

Defining program goals starts with answering three questions:

1. What problem is the program trying to solve?
2. What steps are being taken to solve that problem?
3. How will program staff know when the program has been successful in solving the problem?

It is also necessary to develop intermediate outputs and objectives for the program to help track successful steps on
the way to the overall program goal. More information about identifying these objectives can be found in the logic
model guide.


Step 2: Define Indicators
Once the program’s goals and objectives are defined, it is time to define indicators for tracking progress towards
achieving those goals. Program indicators should be a mix of those that measure process, or what is being done in
the program, and those that measure outcomes.

Process indicators track the progress of the program. They help to answer the question, “Are activities being
implemented as planned?” Some examples of process indicators are:

Number of trainings held with health providers
Number of outreach activities conducted at youth-friendly locations

Outcome indicators track how successful program activities have been at achieving program objectives. They help
to answer the question, “Have program activities made a difference?” Some examples of outcome indicators are:

Number and percent of trained health providers offering family planning services to youth
These are just a few examples of indicators that can be created to track a program’s success. More information
about creating indicators can be found in the How to Develop Indicators guide.
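
To keep indicator definitions consistent as the plan grows, it can help to record each indicator as a small structured record that states whether it is a process or an outcome indicator, where its data come from, and how often it is collected. The sketch below shows one minimal way to do this in Python; the field names are an assumption for illustration, not a prescribed format.

```python
# A sketch of holding indicator definitions in one place so that type,
# data source and collection frequency stay explicit for every indicator.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    kind: str          # "process" or "outcome"
    data_source: str
    frequency: str     # e.g. "every 6 months", "annually"

indicators = [
    Indicator("Number of trainings held with health providers",
              "process", "Training attendance sheets", "every 6 months"),
    Indicator("Number and percent of trained health providers offering "
              "family planning services to youth",
              "outcome", "Facility logs", "every 6 months"),
]

for ind in indicators:
    print(f"[{ind.kind}] {ind.name} <- {ind.data_source}, {ind.frequency}")
```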

Step 3: Define Data Collection Methods and Timeline


After creating monitoring indicators, it is time to decide on methods for gathering data and how often various data
will be recorded to track indicators. This should be a conversation between program staff, stakeholders, and donors.
These decisions will have important implications for what data collection methods will be used and how the results
will be reported.

The source of monitoring data depends largely on what each indicator is trying to measure. The program will likely
need multiple data sources to answer all of the programming questions. Below is a table that represents some
examples of what data can be collected and how.


Information to be collected | Data source(s)
Implementation process and progress | Program-specific M&E tools
Service statistics | Facility logs, referral cards
Reach and success of the program intervention within audience subgroups or communities | Small surveys with primary audience(s), such as provider interviews or client exit interviews
The reach of media interventions involved in the program | Media ratings data, broadcaster logs, Google analytics, omnibus surveys
Reach and success of the program intervention at the population level | Nationally-representative surveys, omnibus surveys, DHS data
Qualitative data about the outcomes of the intervention | Focus groups, in-depth interviews, listener/viewer group discussions, individual media diaries, case studies

Once it is determined how data will be collected, it is also necessary to decide how often it will be collected. This
will be affected by donor requirements, available resources, and the timeline of the intervention. Some data will be
continuously gathered by the program (such as the number of trainings), but these will be recorded every six months
or once a year, depending on the M&E plan. Other types of data depend on outside sources, such as clinic and DHS
data.

After all of these questions have been answered, a table like the one below can be made to include in the M&E plan.
This table can be printed out and all staff working on the program can refer to it so that everyone knows what data is
needed and when.


Indicator | Data source(s) | Timing
Number of trainings held with health providers | Training attendance sheets | Every 6 months
Number of outreach activities conducted at youth-friendly locations | Activity sheet | Every 6 months
Number of condoms distributed at youth-friendly locations | Condom distribution sheet | Every 6 months
Percent of youth receiving condom use messages through the media | Population-based surveys | Annually
Percent of adolescents reporting condom use during first intercourse | DHS or other population-based survey | Annually
Number and percent of trained health providers offering family planning services to adolescents | Facility logs | Every 6 months
Number and percent of new STI infections among adolescents | DHS or other population-based survey | Annually
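
A schedule like the one above can also be turned into concrete collection dates once a programme start date is known. The sketch below assumes a hypothetical start date of 1 January 2020 and approximates "every 6 months" and "annually" with fixed day counts; it is an illustration of the idea, not a required tool.

```python
# Sketch: derive the first few collection dates for each indicator from its
# collection frequency. Start date and day counts are illustrative assumptions.
from datetime import date, timedelta

PROGRAMME_START = date(2020, 1, 1)  # assumed start date

FREQUENCIES = {
    "every 6 months": timedelta(days=182),
    "annually": timedelta(days=365),
}

schedule = [
    ("Number of trainings held with health providers", "every 6 months"),
    ("Percent of youth receiving condom use messages through the media", "annually"),
]

for indicator, freq in schedule:
    # first four collection points after the start of the programme
    dates = [PROGRAMME_START + FREQUENCIES[freq] * i for i in range(1, 5)]
    print(indicator)
    print("  collect on:", ", ".join(d.isoformat() for d in dates))
```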

Step 4: Identify M&E Roles and Responsibilities


The next element of the M&E plan is a section on roles and responsibilities. It is important to decide from the early
planning stages who is responsible for collecting the data for each indicator. This will probably be a mix of M&E
staff, research staff, and program staff. Everyone will need to work together to get data collected accurately and in a
timely fashion.

Data management roles should be decided with input from all team members so everyone is on the same page and
knows which indicators they are assigned. This way when it is time for reporting there are no surprises.


An easy way to put this into the M&E plan is to expand the indicators table with additional columns for who is
responsible for each indicator, as shown below.

Indicator | Data source(s) | Timing | Data manager
Number of trainings held with health providers | Training attendance sheets | Every 6 months | Activity manager
Number of outreach activities conducted at youth-friendly locations | Activity sheet | Every 6 months | Activity manager
Number of condoms distributed at youth-friendly locations | Condom distribution sheet | Every 6 months | Activity manager
Percent of youth receiving condom use messages through the media | Population-based survey | Annually | Research assistant
Percent of adolescents reporting condom use during first intercourse | DHS or other population-based survey | Annually | Research assistant
Number and percent of trained health providers offering family planning services to adolescents | Facility logs | Every 6 months | Field M&E officer
Number and percent of new STI infections among adolescents | DHS or other population-based survey | Annually | Research assistant


Step 5: Create an Analysis Plan and Reporting Templates


Once all of the data have been collected, someone will need to compile and analyze it to fill in a results table for
internal review and external reporting. This is likely to be an in-house M&E manager or research assistant for the
program.

The M&E plan should include a section with details about what data will be analyzed and how the results will be
presented. Do research staff need to perform any statistical tests to get the needed answers? If so, what tests are
they and what data will be used in them? What software program will be used to analyze data and make reporting
tables? Excel? SPSS? These are important considerations.

Another good thing to include in the plan is a blank table for indicator reporting. These tables should outline the
indicators, data, and time period of reporting. They can also include things like the indicator target, and how far the
program has progressed towards that target. An example of a reporting table is below.


Indicator | Baseline | Year 1 target | Lifetime target | % of target achieved
Number of trainings held with health providers | 0 | 5 | 10 | 50%
Number of outreach activities conducted at youth-friendly locations | 0 | 2 | 6 | 33%
Number of condoms distributed at youth-friendly locations | 0 | 25,000 | 50,000 | 50%
Percent of youth receiving condom use messages through the media | 5% | 35% | 75% | 47%
Percent of adolescents reporting condom use during first intercourse | 20% | 30% | 80% | 38%
Number and percent of trained health providers offering family planning services to adolescents | 20 | 106 | 250 | 80%
Number and percent of new STI infections among adolescents | 11,000 (22%) | 10,000 (20%) | 10% reduction in 5 years | 20%
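
The final column in a table like this is simple arithmetic. Assuming the common convention that progress is the value reported to date divided by the lifetime target (for example, 5 of 10 planned trainings is 50%), a minimal sketch of that calculation is below; the figures are taken from the count-based rows of the example table, and percentage-based indicators would need the same treatment of their numerators.

```python
# Sketch of filling in "% of target achieved" for count-based indicators,
# assuming progress = value achieved to date / lifetime target.
rows = [
    # (indicator, achieved so far, lifetime target)
    ("Number of trainings held with health providers", 5, 10),
    ("Number of outreach activities conducted at youth-friendly locations", 2, 6),
    ("Number of condoms distributed at youth-friendly locations", 25_000, 50_000),
]

for indicator, achieved, target in rows:
    pct = round(100 * achieved / target)
    print(f"{indicator}: {achieved} of {target} ({pct}% of target achieved)")
# e.g. 5 of 10 trainings -> 50% of the lifetime target achieved
```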

Step 6: Plan for Dissemination and Donor Reporting


The last element of the M&E plan describes how and to whom data will be disseminated. Data for data’s sake should
not be the ultimate goal of M&E efforts. Data should always be collected for particular purposes.

Consider the following:


How will M&E data be used to inform staff and stakeholders about the success and progress of the program?
How will it be used to help staff make modifications and course corrections, as necessary?
How will the data be used to move the field forward and make program practices more effective?
The M&E plan should include plans for internal dissemination among the program team, as well as wider
dissemination among stakeholders and donors. For example, a program team may want to review data on a monthly
basis to make programmatic decisions and develop future workplans, while meetings with the donor to review data
and program progress might occur quarterly or annually. Dissemination of printed or digital materials might occur
at more frequent intervals. These options should be discussed with stakeholders and your team to determine
reasonable expectations for data review and to develop plans for dissemination early in the program. If these plans
are in place from the beginning and become routine for the project, meetings and other kinds of periodic review
have a much better chance of being productive ones that everyone looks forward to.

After following these 6 steps, the outline of the M&E plan should look something like this:

1. Introduction to program
Program goals and objectives
Logic model/Logical Framework/Theory of change
2. Indicators
Table with data sources, collection timing, and staff member responsible
3. Roles and Responsibilities
Description of each staff member’s role in M&E data collection, analysis, and/or reporting
4. Reporting
Analysis plan
Reporting template table
5. Dissemination plan
Description of how and when M&E data will be disseminated internally and externally


Glossary & Concepts


Process indicators track how the implementation of the program is
progressing. They help to answer the question, “Are activities being
implemented as planned?”
Outcome indicators track how successful program activities have been at
achieving program goals. They help to answer the question, “Have program
activities made a difference?”



M&E Guide for Beginners
Compiled by Coach Alexander
2020
