2020 Guide On Monitoring and Evaluation For Beginners
COACH ALEXANDER
Who is this Book For?
Regards
COACH ALEXANDER
CHAPTER 1: What is monitoring and evaluation (M&E)?
Monitoring is the systematic and routine collection of information from projects and programmes for four main
purposes:
Monitoring is a periodically recurring task that begins as early as the planning stage of a project or programme.
Monitoring allows results, processes and experiences to be documented and used as a basis to steer decision-
making and learning processes. Monitoring is checking progress against plans. The data acquired through
monitoring is used for evaluation.
Evaluation is assessing, as systematically and objectively as possible, a completed project or programme (or a phase
of an ongoing project or programme that has been completed). Evaluations appraise data and information that
inform strategic decisions, thus improving the project or programme in the future.
Evaluations should help to draw conclusions about five main aspects of the intervention:
relevance
effectiveness
efficiency
impact
sustainability
Information gathered in relation to these aspects during the monitoring process provides the basis for the
evaluative analysis.
In general, monitoring is integral to evaluation. During an evaluation, information from previous monitoring
processes is used to understand the ways in which the project or programme developed and stimulated change.
Monitoring focuses on the measurement of the following aspects of an intervention:
Private companies have many indicators for analyzing performance. At least a couple of them are familiar to
everyone, regardless of their profession. Let's use profit as an example: this variable can be calculated, it is
comparable and it depicts a company's success. Shareholders can take profit into account as valuable information
when analyzing a company's position. Similarly, development organizations have established their own systems to
increase transparency and accountability of work, track progress and demonstrate the impact of the undertaken activities.
Why Monitoring and Evaluation Matters – Who Are We Doing This For?
M&E tools used in international development aim to support decision-making processes and provide all relevant
parties with the necessary information, allowing them to assess performance. During the monitoring process the
organization collects data, both quantitative and qualitative, that indicate progress towards previously
selected objectives. These data are collected regularly and provide a basis for evaluation and learning. Evaluation
provides an assessment of ongoing and completed activities, determines whether the objectives have been fulfilled
and under which assumptions, circumstances and conditions, and generates lessons that can be applied in future work.
Although M&E has been conducted for many years, there have recently been increased efforts to put this
framework into practice and connect it with organizations' core activities. Among the reasons are the rising need to
demonstrate the effective use of funds and the implementation of the adaptive management approach. A KPMG study*
showed that the motivation for monitoring a project is driven by the intention to improve development impact,
ensure that lessons are learned from existing programs and be accountable toward funders. This research
also indicated that although various evaluation techniques are available, three are most commonly
used: logical frameworks, performance indicators and focus groups.
CHAPTER 2: 10 Steps To Design a Monitoring and Evaluation (M&E) System
Step 1: Identify the purpose and scope of the M&E system
This step involves identifying the evaluation audience and the purpose of the M&E system. M&E purposes include
supporting management and decision-making, learning, accountability and stakeholder engagement.
Will the M&E be done mostly for learning purposes with less emphasis on accountability? If this is the case, then the
M&E system would be designed in such a way as to promote ongoing reflection for continuous programme
improvement.
If the emphasis is more on accountability, then the M&E system could then collect and analyse data with more rigor
and to coincide with the reporting calendar of a donor.
It is important that the M&E scope and purpose be defined beforehand, so that the appropriate M&E system is
designed. It is of no use to have an M&E system that collects mostly qualitative data on an annual basis while your
‘evaluation audience’ (read: 'donor') is keen to see the quantitative results of Randomised Controlled Trials (RCTs)
twice a year.
The monitoring questions will ideally be answered through the collection of quantitative and qualitative data. It is
important to not start collecting data without thinking about the evaluation and monitoring questions. This may
lead to collecting data just for the sake of collecting data (that provides no relevant information to the programme).
Step 5: Identify who is responsible for data collection, data storage, reporting, budget and timelines
It is advisable to assign responsibility for data collection and reporting so that everyone is clear about their roles
and responsibilities.
Collection of monitoring data may occur regularly over short intervals, or less regularly, such as half-yearly or
annually. Likewise the timing of evaluations (internal and external) should be noted.
You may also want to note any requirements that are needed to collect the data (staff, budget etc.). It is advisable to
have some idea of the cost associated with monitoring, as you may have great ideas to collect a lot of information,
only to find out that you cannot afford it all.
Additionally, it is good to determine how the collected data will be stored. A centralised electronic M&E database
should be available for all project staff to use. The M&E database options range from a simple Excel file to the use of
a comprehensive M&E software such as LogAlto.
LogAlto is a user-friendly cloud-based M&E software that stores all information related to the programme such as
the entire log frame (showing the inputs, activities, outputs, outcomes) as well as the quantitative and qualitative
indicators with baseline, target and milestone values. LogAlto also allows for the generation of tables, scorecards,
charts and maps. Quarterly Progress reports can also be produced from LogAlto.
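As a rough illustration of what even the simplest centralised store involves, the sketch below keeps indicator records in a flat CSV file using only the Python standard library. The field names are hypothetical, not taken from LogAlto or any particular tool.

```python
import csv
from pathlib import Path

# Hypothetical field names for a minimal, file-based M&E data store;
# real systems (LogAlto, a shared Excel file) will differ.
FIELDS = ["indicator", "baseline", "target", "latest_value", "collected_on"]

def append_record(path, record):
    """Append one monitoring record, writing a header row on first use."""
    is_new = not Path(path).exists()
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow(record)

def load_records(path):
    """Read all monitoring records back as a list of dicts."""
    with open(path) as f:
        return list(csv.DictReader(f))

append_record("me_database.csv", {
    "indicator": "Number of trainings conducted",
    "baseline": 0, "target": 40, "latest_value": 12,
    "collected_on": "2020-06-30",
})
records = load_records("me_database.csv")
```

Even a toy store like this makes the point of the step: once data collection is centralised in one agreed place, every staff member reads and writes the same records.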
Step 6: Identify who will evaluate the data and how it will be reported
In most programmes there will be an internal and an independent evaluation (conducted by an external consultant).
For an evaluation to be used (and therefore useful) it is important to present the findings in a format that is
appropriate to the audience. A 'Marketing and Dissemination Strategy’ for the reporting of evaluation results should
be designed as part of the M&E system.
‘Have a strategy to prevent persons from falling asleep during the presentation of evaluation findings’
Once the M&E system is designed there will be a need for planning templates, designing or adapting information
collection and analysis tools, developing organisational indicators, developing protocols or methodologies for
service-user participation, designing report templates, developing protocols for when and how evaluations and
impact assessments are carried out, developing learning mechanisms, designing databases, and the list goes on
(Simister, 2009).
However, there is no need to re-invent the wheel. There may already be examples of best practice within an
organisation that could be exported to different locations or replicated more widely. This leads to the next step.
Step 8: Use the information derived from Steps 1-7 above to fill in the 'M&E System' template
You can choose from any of the templates presented in this article to capture the information. Remember, they are
templates, not cast in stone. Feel free to add extra columns or categories as you see fit.
Where possible, integrate the M&E system horizontally (with other organisational systems and processes) and
vertically (with the needs and requirements of other agencies) (Simister, 2009).
Try as much as possible to align the M&E system with existing planning systems, reporting systems, financial or
administrative monitoring systems, management information systems, human resources systems or any other
systems that might influence (or be influenced by) the M&E system.
Once everything is in place, the M&E system may first be rolled out on a small scale, perhaps just at the Country
Office level. This will give the opportunity for feedback and for the 'kinks to be ironed out' before a full-scale launch.
Staff at every level should be aware of the overall purpose(s), general overview and the key focus areas of the
M&E system.
It is also good to inform people about the areas in which they are free to develop their own solutions and the areas
in which they are not. People will need detailed information and guidance in the areas of the system where everyone is expected
to do the same thing, or carry out M&E work consistently.
This could include guides, training manuals, mentoring approaches, staff exchanges, interactive media, training days
or workshops.
In conclusion, my view is that a good M&E system should be robust enough to answer the evaluation questions,
promote learning and satisfy accountability needs without being so rigid and inflexible that it stifles the emergence
of unexpected (and surprising!) results.
Monitoring and Evaluation Systems require twelve main components in order to function effectively and efficiently
to achieve the desired results. These twelve M&E components are discussed in detail below:
The adequate implementation of M&E at any level requires a unit whose main purpose is to coordinate
all the M&E functions at its level. While some entities prefer to have an internal organ to oversee their M&E functions,
others prefer to outsource such services. This component of M&E emphasizes the need for an M&E unit within the
organization, how clearly its roles are defined, how adequately its roles are supported by the organization's
hierarchy and how other units within the organization are aligned to support the M&E functions within the
organization.
Effective M&E implementation requires not only that adequate staff are employed in the M&E unit, but also that
the staff within this unit have the necessary M&E technical know-how and experience. As such, this component
emphasizes the need for human resources that can run the M&E function: hiring employees who have adequate
knowledge and experience in M&E implementation, while at the same time ensuring that the M&E capacity of these
employees is continuously developed through training and other capacity-building initiatives so that they keep up
with current and emerging trends in the field.
A prerequisite for successful M&E systems, whether at organizational or national levels, is the existence of M&E
partnerships. Partnerships are important for organizations because they complement the organization's M&E
efforts in the M&E process and act as a source of verification for whether M&E functions align with intended
objectives. They also serve auditing purposes, where line ministries, technical working groups, communities and
other stakeholders are able to compare M&E outputs with reported outputs.
The M&E framework outlines the objectives, inputs, outputs and outcomes of the intended project and the
indicators that will be used to measure all these. It also outlines the assumptions that the M&E system will adopt.
The M&E framework is essential as it links the objectives with the process and enables the M&E expert to know what to
measure and how to measure it.
Closely related to the M&E frameworks is the M&E Work plan and costs. While the framework outlines objectives,
inputs, outputs and outcomes of the intended project, the work plan outlines how the resources that have been
allocated for the M&E functions will be used to achieve the goals of M&E. The work plan shows how personnel, time,
materials and money will be used to achieve the set M&E functions.
This refers to the presence of policies and strategies within the organization to promote M&E functions. Without
continuous communication and advocacy initiatives within the organization to promote M&E, it is difficult to
entrench an M&E culture within the organization. Such communication and strategies need to be supported by the
organization's hierarchy. The existence of an organizational M&E policy, together with the continuous use of
M&E system outputs in communication channels, are some of the ways of improving communication, advocacy and
culture for M&E.
M&E consists of two major aspects: monitoring and evaluation. This component emphasizes the importance of
monitoring. Monitoring refers to the continuous and routine data collection that takes place during project
implementation. Data needs to be collected and reported on a continuous basis to show whether the project
activities are driving towards meeting the set objectives. They also need to be integrated into the program activities
for routine gathering and analysis.
This mainly involves national-level M&E plans and entails how frequently relevant national surveys are
conducted in the country. National surveys and surveillance need to be conducted frequently and used to evaluate
progress of related projects. For example, for HIV and AIDS national M&E plans, HIV-related
surveys need to be carried out at least bi-annually and used to measure HIV indicators at the national level.
The data world is gradually becoming open source. More and more entities are seeking data that are relevant for
their purposes. The need for M&E systems to make data available can therefore not be over-emphasized. This
implies that M&E systems need to develop strategies of submitting relevant, reliable and valid data to national and
sub-national databases.
Every M&E system needs a plan for supervision and data auditing. Supportive supervision implies that an individual
or organization regularly supervises the M&E processes in such a way that the supervisor offers suggestions
for improvement. Data auditing implies that the data are subjected to verification to ensure their reliability and
validity. Supportive supervision is important since it ensures the M&E process runs efficiently, while data auditing
is crucial since all project decisions are based on the data collected.
One aspect of this component is research; the other is evaluation. Evaluation of projects is done at specific times,
most often mid-term and at the end of the project. Evaluation is an important component of M&E as it establishes
whether the project has met the desired objectives. It usually provides for organizational learning and sharing of successes with
other stakeholders.
The information that is gathered during the project implementation phase needs to be used to inform future
activities, either to reinforce the implemented strategy or to change it. Additionally, results of both monitoring and
evaluation outputs need to be shared with relevant stakeholders for accountability purposes. Organizations must
therefore ensure that there is an information dissemination plan either in the M&E plan, Work plan or both.
A Logframe is another name for Logical Framework, a planning tool consisting of a matrix which provides an
overview of a project’s goal, activities and anticipated results. It provides a structure to help specify the
components of a project and its activities and for relating them to one another. It also identifies the measures by
which the project’s anticipated results will be monitored.
The logical framework approach was developed in the late 1960s to assist the US Agency for International
Development (USAID) with project planning. Now most large international donor agencies use some type of logical
or results framework to guide project design.
A Logical Framework (or LogFrame) consists of a matrix with four columns and four or more rows which
summarize the key elements of the project plan including:
The project's hierarchy of objectives. The first column captures the project's
development pathway or intervention logic: basically, how an objective or
result will be achieved. Each objective or result should be explained by the
objective or result immediately below. Although different donors use different
terminology, a LogFrame typically summarizes the following in its first column:
The GOAL / OVERALL OBJECTIVE/ DEVELOPMENT OBJECTIVE
The PURPOSE / IMMEDIATE OBJECTIVE
The OUTPUTS
The ACTIVITIES
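The four-level hierarchy above can be sketched as a simple nested structure. Only the levels come from the text; the wording of every entry below is invented for illustration.

```python
# Illustrative LogFrame first column; the entries are made up,
# only the four-level hierarchy comes from the description above.
logframe_first_column = {
    "goal": "Improved household food security",        # overall / development objective
    "purpose": "Increased smallholder crop yields",    # immediate objective
    "outputs": ["Farmers trained in improved practices"],
    "activities": ["Run training sessions", "Distribute seed kits"],
}

# Reading bottom-up gives the intervention logic: activities produce
# outputs, outputs achieve the purpose, and the purpose serves the goal.
for level in ("activities", "outputs", "purpose", "goal"):
    print(f"{level.upper()}: {logframe_first_column[level]}")
```

The point of the structure is the "explained by the level below" rule: each entry should answer how the entry above it will be achieved.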
In developing a logframe, it is very important to pay attention to how the objectives and results are formulated. For
reference, see Catholic Relief Services' (CRS) Guidance for Developing Logical and Results Frameworks.
The second and third columns summarize how the project's achievements will be monitored, typically through
indicators and their means of verification. The fourth column captures:
Assumptions
- the external factors or conditions outside of the project's direct control that are necessary to ensure the project's
success.
Logical Frameworks can look very different from one another depending on a donor's requirements and the design
team. The terminology used also differs between donors. See “The Rosetta Stone of Logical Frameworks.” Other
similar tools include the Logic Model which is also an overall summary of a project plan and anticipated results or
outcomes. The following example is more in the format of a Logic Model:
(Source: "What is a LogFrame?", American University Online)
It draws together all key components of a planned activity into a clear set of
statements to provide a convenient overview of a project.
It sets up a framework for monitoring and evaluation where planned and actual
results can be compared.
It anticipates project implementation and helps plan out development activities.
Weaknesses of the Logical Framework Approach
An M&E plan will include some documents that may have been created during the program planning process, and
some that will need to be created new. For example, elements such as the logic model/logical framework, theory of
change, and monitoring indicators may have already been developed with input from key stakeholders and/or the
program donor. The M&E plan takes those documents and develops a further plan for their implementation.
Steps
Step 1: Identify Program Goals and Objectives
The first step in creating an M&E plan is to identify the program goals and objectives. If the program already has
a logic model or theory of change, then the program goals are most likely already defined. However, if not, the M&E
plan is a great place to start defining them.
It is also necessary to develop intermediate outputs and objectives for the program to help track successful steps on
the way to the overall program goal. More information about identifying these objectives can be found in the logic
model guide.
Step 2: Define Indicators
Once the program's goals and objectives are defined, it is time to define indicators for tracking progress towards
achieving those goals. Program indicators should be a mix of those that measure process, or what is being done in
the program, and those that measure outcomes.
Process indicators track the progress of the program. They help to answer the question, “Are activities being
implemented as planned?” Some examples of process indicators are:
The source of monitoring data depends largely on what each indicator is trying to measure. The program will likely
need multiple data sources to answer all of the programming questions. Below is a table that represents some
examples of what data can be collected and how.
What data | How it can be collected
Reach and success of the program intervention within audience subgroups or communities | Small surveys with primary audience(s), such as provider interviews or client exit interviews
The reach of media interventions involved in the program | Media ratings data, broadcaster logs, Google analytics, omnibus surveys
Once it is determined how data will be collected, it is also necessary to decide how often it will be collected. This
will be affected by donor requirements, available resources, and the timeline of the intervention. Some data will be
continuously gathered by the program (such as the number of trainings), but these will be recorded every six months
or once a year, depending on the M&E plan. Other types of data depend on outside sources, such as clinic and DHS
data.
After all of these questions have been answered, a table like the one below can be made to include in the M&E plan.
This table can be printed out and all staff working on the program can refer to it so that everyone knows what data is
needed and when.
Data management roles should be decided with input from all team members so everyone is on the same page and
knows which indicators they are assigned. This way when it is time for reporting there are no surprises.
An easy way to put this into the M&E plan is to expand the indicators table with additional columns for who is
responsible for each indicator, as shown below.
Indicator | Data source(s) | Timing | Data manager
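Such an expanded plan is easy to check for gaps once it is held as structured data. The sketch below groups indicators by their assigned data manager; the indicator names, sources and roles are invented for illustration.

```python
# Illustrative indicator plan; indicator names, sources and roles are made up.
indicator_plan = [
    {"indicator": "Number of trainings conducted",
     "data_sources": ["attendance sheets"],
     "timing": "monthly", "data_manager": "Programme officer"},
    {"indicator": "Client satisfaction",
     "data_sources": ["client exit interviews"],
     "timing": "half-yearly", "data_manager": "M&E officer"},
]

# Group indicators by the person responsible, so each team member can
# see at a glance what they are expected to collect and report on.
by_manager = {}
for row in indicator_plan:
    by_manager.setdefault(row["data_manager"], []).append(row["indicator"])

for manager, indicators in sorted(by_manager.items()):
    print(manager, "->", ", ".join(indicators))
```

A per-person view like this is what makes the "no surprises at reporting time" goal concrete: every indicator has exactly one named owner.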
The M&E plan should include a section with details about what data will be analyzed and how the results will be
presented. Do research staff need to perform any statistical tests to get the needed answers? If so, what tests are
they and what data will be used in them? What software program will be used to analyze data and make reporting
tables? Excel? SPSS? These are important considerations.
Another good thing to include in the plan is a blank table for indicator reporting. These tables should outline the
indicators, data, and time period of reporting. They can also include things like the indicator target, and how far the
program has progressed towards that target. An example of a reporting table is below.
Indicator | Baseline | Year 1 target | Lifetime target | % of lifetime target achieved
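The "% of lifetime target achieved" column is a simple calculation that can be automated once the reporting rows are structured. In this sketch, all indicator names and numbers are made up for illustration.

```python
# Illustrative reporting rows; all names and numbers are made up.
report_rows = [
    {"indicator": "Number of trainings conducted",
     "baseline": 0, "year1_target": 20, "lifetime_target": 40, "achieved": 12},
    {"indicator": "Clients reached",
     "baseline": 100, "year1_target": 500, "lifetime_target": 2000, "achieved": 850},
]

def pct_of_lifetime_target(row):
    """Share of the lifetime target achieved so far, as a percentage."""
    return round(100 * row["achieved"] / row["lifetime_target"], 1)

for row in report_rows:
    print(f'{row["indicator"]}: {pct_of_lifetime_target(row)}% of lifetime target')
```

Keeping the calculation in one place means every report states progress against targets the same way.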
How will M&E data be used to inform staff and stakeholders about the success
and progress of the program?
How will it be used to help staff make modifications and course corrections, as
necessary?
How will the data be used to move the field forward and make program
practices more effective?
The M&E plan should include plans for internal dissemination among the program team, as well as wider
dissemination among stakeholders and donors. For example, a program team may want to review data on a monthly
basis to make programmatic decisions and develop future workplans, while meetings with the donor to review data
and program progress might occur quarterly or annually. Dissemination of printed or digital materials might occur
at more frequent intervals. These options should be discussed with stakeholders and your team to determine
reasonable expectations for data review and to develop plans for dissemination early in the program. If these plans
are in place from the beginning and become routine for the project, meetings and other kinds of periodic review
have a much better chance of being productive ones that everyone looks forward to.
After following these 6 steps, the outline of the M&E plan should look something like this:
1. Introduction to program
Program goals and objectives
Logic model/Logical Framework/Theory of change
2. Indicators
Table with data sources, collection timing, and staff member responsible
4. Reporting
Analysis plan
Reporting template table
5. Dissemination plan
Description of how and when M&E data will be disseminated internally and externally