Monitoring and Evaluation Tutorial 1

The document discusses monitoring and evaluation (M&E), defining M&E as the process of gathering and analyzing information to determine if project goals are being achieved and identify unintended effects. It explains that monitoring regularly tracks project implementation against targets, while evaluation assesses design, implementation and results. The document outlines key M&E activities like establishing objectives and indicators, collecting data, and using findings to improve decision making.


PROJECT MONITORING AND EVALUATION

Tutorial 01
Introduction to Monitoring & Evaluation

What is Monitoring and Evaluation?

Monitoring and Evaluation is the continuous gathering and analysis of information in order
to determine whether progress is being made towards pre-specified goals and objectives, and
to highlight any unintended (positive or negative) effects of a project/programme and its
activities.

What is monitoring? The regular, systematic collection and analysis of information to track
the progress of program implementation against pre-set targets and objectives. It answers the
question 'Did we deliver?'

What is evaluation? The objective assessment of an ongoing or recently completed project,
program or policy: its design, implementation and results. It answers the question 'What has
happened as a result?'
Purpose/Importance of Monitoring and Evaluation
Timely and reliable M&E provides information to:
 Support project/programme implementation
 Contribute to organizational learning and knowledge sharing
 Uphold accountability and compliance
 Provide opportunities for stakeholder feedback
 Promote and celebrate project/program work
 Build the capacity, self-reliance and confidence of stakeholders
Characteristics of monitoring and evaluation

Monitoring
 - Clarifies program objectives.
 - Links activities and their resources to objectives.
 - Translates objectives into performance indicators and sets targets.
 - Routinely collects data on these indicators; compares actual results with targets.
 - Reports progress to managers and alerts them to problems.

Evaluation
 - Analyzes why intended results were or were not achieved.
 - Assesses specific causal contributions of activities to results.
 - Examines the implementation process.
 - Explores unintended results.
 - Provides lessons, highlights significant accomplishments or program potential, and offers recommendations for improvement.

Impact Assessment is an aspect of evaluation that focuses on ultimate benefits. It assesses
what has happened as a result of the intervention and what might have happened without it,
from a future point in time. It answers the question 'Have we made a difference and achieved
our goal?'
Comparison Between Monitoring & Evaluation

Frequency
  Monitoring: periodic, regular
  Evaluation: episodic

Main action
  Monitoring: keeping track/oversight
  Evaluation: assessment

Basic purpose
  Monitoring: improve efficiency, adjust work plan
  Evaluation: improve effectiveness, impact, future programming

Focus
  Monitoring: inputs, outputs, process, outcomes, work plans
  Evaluation: effectiveness, relevance, impact, cost-effectiveness

Information sources
  Monitoring: routine or sentinel systems, field observation, progress reports, rapid assessments
  Evaluation: same, plus surveys and studies

Undertaken by
  Monitoring: programme managers, community workers, supervisors, funders, community (beneficiaries)
  Evaluation: programme managers, supervisors, funders, external evaluators, community (beneficiaries)

Reporting to
  Monitoring: programme managers, community workers, supervisors, funders, community (beneficiaries)
  Evaluation: programme managers, policy-makers, beneficiaries, supervisors, funders
Monitoring and Evaluation Standards and Ethics

1. M&E should uphold the principles and standards of the concerned organizations.
2. M&E should respect the customs, culture and dignity of human subjects.
3. M&E practices should uphold the principle of “do no harm.”
4. When feasible and appropriate, M&E should be participatory.
5. The M&E system should ensure that stakeholders can provide comments and voice any
complaints about the work.
Key Monitoring and Evaluation Activities in Project/ Programme cycle
 There is no single generic project/programme cycle with a fixed set of associated M&E
activities. The usual stages and key activities in a project/programme are Planning,
Monitoring, Evaluation and Reporting (PMER).

 The PMER activities include


1. Initial needs assessment
2. Project design (logframe)
3. M&E planning
4. Baseline study
5. Midterm evaluation and/or reviews
6. Final evaluation
7. Dissemination and use of lessons
The Monitoring and Evaluation Plan/strategy
 An M&E plan is a comprehensive planning document for all monitoring and evaluation activities
within a program.

 Typically, the components of an M&E plan are:


 Establishing goals and objectives
 Setting the specific M&E questions
 Determining the activities to be implemented
 The methods and designs to be used for monitoring and evaluation
 The data to be collected
 The specific tools for data collection
 The required resources
 The responsible parties to implement specific components of the plan
 The expected results
 The proposed timeline
The Challenges of M&E

Contextual challenges

Complexity
 - Different stakeholders and development partners have different requirements;
 - Requirements change during the life cycle of a program;
 - Different donor reporting requirements.

Data availability
 - Baselines not conducted;
 - Limited availability of local, especially current, data;
 - Limited disaggregation of data;
 - Lack of sample frames.

Attitudes and commitment
 - Where there are multiple stakeholders it is difficult to engage collective commitment;
 - Stakeholders may be suspicious about how and why information will be used, especially if progress is slow or limited.

Diversity and inclusion
 - Recognizing issues of diversity and inclusion explicitly.
The Challenges of M&E cont.

Design and analysis challenges

Counterfactuals
 - How to measure what the outcome would have been if the reform measure had not been implemented.

Causality and attribution
 - How to account for complex impact relationships between program activities, outputs and the use of outputs by partner organizations, and eventually their impact on enterprises;
 - How to isolate individual reform measures in embedded programs or multi-donor settings.

Timeframes
 - Time lags and long gestation periods between activities, outputs and outcomes.

Diversity and inclusion
 - Capturing issues of diversity;
 - Ensuring inclusion in the evaluation process.
Practical challenges

Cost
 - Finding funds to undertake robust M&E throughout the program and not just at the end;
 - Ensuring the M&E budget is in proportion to the scale of the intervention.

Skills and abilities
 - Coping with a low level of local/internal evaluation skills and experience;
 - Utilizing an appropriate mix of local and external resources;
 - Building local capability and capacity for ongoing evaluation activities and oversight.
Five Components of Good Monitoring and Evaluation Design
1. A clear statement of measurable objectives for the project and its components
2. A structured set of indicators covering the outputs of goods and services to be generated
3. Provisions for collecting data and managing project records
4. Institutional arrangements for gathering, analyzing and reporting project data, and for
investing in capacity building to sustain the M&E service
5. Proposals for the ways in which M&E findings will be fed back into decision making.
Key terms and concepts in Monitoring and Evaluation
Input: The resources that will be used including people, funds, expertise, technical assistance and
information to deliver the activities/tasks of the project/program.
Activities: Actions taken or work performed through which inputs, such as funds and other types of
resources are mobilized to produce specific outputs.
Outputs are the immediate results of activities. For example, the output of digging wells would
be the number of functioning wells in a community, and the output of a training session would
be the number of individuals trained.
Outcomes are results caused by outputs. Examples of outcomes might be households with access to
clean drinking water or the percentage of trainees who start a business or find a job.
Impact: is the long-term, broad societal change that the outcomes lead to. For example, reduced poverty
or decreased mortality rate might be the impact of a project.
Baselines: Qualitative or quantitative information that provide data at the beginning of,
or just prior to, the implementation of an intervention, and act as a reference point against
which progress can be assessed or comparisons made.

An indicator is a quantitative measurement or a qualitative (descriptive) observation that
allows the measurement and verification of whether intended changes have occurred.

Targets are established for each indicator by starting from the baseline level, and by
including the desired level of improvement in that indicator. Indicators are a means by
which change will be measured; targets are definite ends or amounts which will be
measured.
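The relationship between baselines, targets and measured progress described above can be sketched as simple arithmetic. The following is an illustrative Python sketch, not part of the tutorial; the function name and the yield figures are assumptions chosen for the example:

```python
def percent_of_target(baseline, current, target):
    """Progress toward a target, as a share of the planned improvement.

    0 means no movement from the baseline; 100 means the target level
    has been reached.
    """
    planned_change = target - baseline
    if planned_change == 0:
        raise ValueError("target must differ from baseline")
    return 100 * (current - baseline) / planned_change

# Illustrative figures: baseline yield 2.0 t/ha, target 3.0 t/ha,
# value measured at midterm 2.5 t/ha.
print(percent_of_target(2.0, 2.5, 3.0))  # 50.0
```

This makes the distinction in the text concrete: the indicator is what is measured (yield), the target is the definite end level (3.0 t/ha), and progress is read off against the baseline.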

Milestones: Significant points in the lifetime of a project; particular points in the project
by which specified progress should have been made.
Monitoring

 Annual work plans and budgets of projects are the fundamental prerequisites for monitoring.

 Monitoring is the basis for corrective action, while it also reinforces initial positive
results.

 Through monitoring, we determine the relevance of projects:


 Whether the projects support development priorities
 Whether appropriate groups were targeted
 Whether objectives remain valid in light of changes in the program environment

 Monitoring requires ongoing data collection during project implementation.

 Monitoring involves identifying the activities, indicators and outcome measures.


Types of monitoring

 Results monitoring
 Process (activity) monitoring
 Compliance monitoring
 Context (situation) monitoring
 Beneficiary monitoring
 Financial monitoring
Frameworks and Indicators for Monitoring and Evaluation

A range of frameworks exist for planning and management of projects. The two types of frameworks
commonly used in M&E are the Logical Framework and the Results-Based Framework.
1. Logical Framework (LF)

Developed by Rosenberg and Posner (1979) and also known as logframes, logical frameworks are
commonly used to help set clear program objectives and define indicators of success. The LF aids
in the identification of the expected causal links (the 'program logic') in the following results
chain: inputs, processes, outputs, outcomes, and impact. It leads to the identification of
performance indicators at each stage in this chain, and looks at the evidence needed to verify
these indicators as well as the assumptions that underlie them and the risks which might impede
the attainment of results.
The program Logic Model
An LF presented in a table (matrix format) is a useful way of capturing both the content of a
project and the key components of the M&E plan. (Table 2.1, page 44)
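As a rough sketch of how each row of such a matrix ties a results level to its indicators, the logframe can be thought of as a simple data structure. This is an illustrative sketch only; the field names and the water-project entries are assumptions, not taken from the tutorial:

```python
# Each logframe row holds: results level, narrative summary, indicators,
# means of verification, and assumptions/risks.
logframe = [
    {
        "level": "Outcome",
        "narrative": "Households have access to clean drinking water",
        "indicators": ["% of households using a protected water source"],
        "means_of_verification": ["household survey"],
        "assumptions": ["water sources are maintained by the community"],
    },
    {
        "level": "Output",
        "narrative": "Functioning wells in target communities",
        "indicators": ["number of functioning wells"],
        "means_of_verification": ["field inspection reports"],
        "assumptions": ["spare parts remain locally available"],
    },
]

# Reading down the matrix shows the program logic: outputs are expected
# to lead to outcomes, provided the assumptions hold.
for row in logframe:
    print(row["level"], "-", row["indicators"][0])
```

The point of the matrix format is exactly this pairing: every level of the results chain carries its own indicator and a stated means of verifying it.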
Monitoring Questions and the Logframe
2. Results-Based Frameworks
 Results-based frameworks serve as a management tool with an emphasis on results. Their purpose
is to increase focus, select strategies, and allocate resources accordingly. This approach is a
variant of the LF in the sense that it is based on similar logic and uses some of the same
terminology.
 The term "results" reinforces the view that benefits can be produced throughout the
implementation of a given program, not just towards the end of the project period.
 The different results that are derived from the inputs, activities, outputs, and outcomes of a
project are linked through a logical process called a causal impact chain.
 The results-based impact chain provides explicit acknowledgement of the challenges of
attributing cause and effect (or impact) to a given intervention, by identifying when the
attribution of impact to an intervention becomes compromised.
Results-Based Impact Chain
Indicators

 Are measures of change brought about by an activity
 Communicate information about progress towards particular goals
 Provide clues about matters of larger significance
 Are used for
 Providing a framework for collecting and reporting information
 Providing guidance for various organizations on needs, priorities and policy effectiveness
 Facilitating local community efforts to undertake and strengthen development plans
Indicators should be

 Specific and measurable


 Relevant and substantial
 Sensitive
 Cost effective
 Verifiable and available
 Logical
Possible outcome indicators

 Percent change in annual revenue


 Percent change in amount of spoiled crop
 Percent change in crop pricing due to competition
 Percent change in agricultural employment
Types of Indicators
Input indicators: Measure the human and financial resources, physical facilities, equipment and
supplies that enable implementation of a program. Information on these indicators comes largely
from accounting and management records.
Process indicators: Reflect whether a program is being carried out as planned and how well
program activities are being carried out.
Output indicators: Relate to the direct results of program activities, measured in physical or
monetary units. They include performance measures based on cost and operational ratios.
Outcome indicators: Measure the program's level of success in improving service accessibility,
utilization or quality.
Impact indicators: Refer to the long-term, cumulative effects of programs over time, beyond the
immediate and direct effects on beneficiaries.
Proxy indicators: An indirect way to measure the subject of interest, used only when data for
direct indicators are not available, when data collection is too costly, or when it is not
feasible to collect data at regular intervals. (Example: the number of tin roofs as a proxy for
household income.)
Example

Project name: Strengthening irrigation in a specific country area

Project goals:
 - Improve agricultural productivity
 - Raise farm income

Outcome indicators:
 - New area under irrigation
 - Higher yield
 - Increased production
 - Increased farm income

Output indicators:
 - Construction of 10 new irrigation schemes
 - Reconstruction of five old irrigation schemes
 - Twenty-five farmer training sessions

You might also like