MODULE 1:
INTRODUCTION TO
RESULTS-BASED MONITORING
AND EVALUATION
(version-4)
November 2009
SMES Monitoring and Evaluation Training Manual
Chapter 1: What is Monitoring and Evaluation?
Both monitoring and evaluation are conducted to track the progress of interventions such as policies, programs and projects. However, they have different characteristics, and they are complementary to each other.
Evaluation, on the other hand, is conducted at certain points in the implementation of an intervention, usually by external experts hired by the implementing bodies who are equipped with technical knowledge as well as analytical and communication skills. Evaluation reports, which cover the evaluation results, lessons and recommendations, are prepared by the experts and shared with stakeholders inside and outside the implementing bodies. The following table summarizes the differences between monitoring and evaluation.
The results of M&E may:
- help ministries and agencies manage activities at the sector, program and project
levels;
- enhance future planning of policies, programs and projects;
- help ministries in their policy analysis, and policy and program development;
- support ministries in evidence-based policy making, results-based budgeting (budgeting based on predefined objectives and expected results that are measurable), and performance-informed budgeting (past and future performance information is taken into account in the budget decision-making process);
- help ministries enhance transparency and accountability to stakeholders, including related government agencies, donor agencies, taxpayers and beneficiaries.
The results of project evaluation need to be reflected in the formulation of future projects. Therefore, the project formulation-implementation-evaluation process is not a one-way process but a cycle. The system of managing projects with this viewpoint is called “project cycle management.” According to the EC’s Project Cycle Management Guidelines, Project Cycle Management is a term used to describe the management activities and decision-making procedures used during the life-cycle of a project.
As shown below, the three key components of the project cycle are “formulation (=Plan)”, “implementation (=Do)”, and “monitoring and evaluation (=See).” The project cycle starts with project formulation and, after implementation, ends with evaluation. During implementation, the progress of the intervention is monitored.
[Figure 1: The project cycle - PLAN (Formulation), DO (Implementation) and SEE (Monitoring, Evaluation), with feedback from evaluation into the formulation of new projects]
1-2 Types of Evaluation
There are several types of evaluation categorized by stages of the project cycle and by
subject. Each agency has its own evaluation system, in which types and timing of
evaluation to be conducted are stipulated.
(1) Evaluation Types by Stage of the Project Cycle

Evaluation Type | Timing | Purpose
Ex-ante Evaluation | Before commencement of an intervention | To determine the necessity and conformity of an intervention; to clarify the details of an intervention and set indicators
Mid-term Evaluation | At the mid-point of the implementation period | To examine the progress of an intervention; to revise the original plan and/or operation structure if necessary
Terminal Evaluation | Upon completion of an intervention | To review an intervention focusing on its efficiency, effectiveness and sustainability; to determine whether follow-up is necessary
Ex-post Evaluation | After completion of an intervention | To review an intervention focusing on impact and sustainability; to obtain lessons and recommendations for improving the formulation and implementation of future interventions
(2) Evaluation Types by Subject
Evaluation is also classified by subject, such as project level, program level, sector level, thematic level, country program level, and policy level. The definitions of the major evaluation types by subject are summarized in Table-3 below.
Table-3: Evaluation Types by Subject

Type | Definitions
Project-level evaluation | Evaluation of an individual development intervention designed to achieve specific objectives within specified resources and an implementation schedule
Program-level evaluation | Evaluation of a set of interventions, which usually covers a number of related projects or activities in one country, seeking to attain specific development objectives
Sector program evaluation | Evaluation of a single sector or sub-sector such as health, education, primary education, etc.
Thematic evaluation | Evaluation of selected aspects, such as gender or environment, of various development interventions, or evaluation of a range of sector programs in different countries
Policy-level evaluation | Evaluation of a country development policy or sector development policy
The Logical Framework clarifies the objectives of any project, program or policy, as well as the causal links among inputs, processes, outputs, outcomes and impacts. It also helps identify the indicators at each stage with which performance and achievement are measured, as well as the risks which might impede the attainment of the objectives. Because of this nature, the Logical Framework serves as an effective tool to improve the quality and design of projects and programs and to prepare detailed operation plans. It is also useful for reviewing progress and taking corrective action because it provides an objective basis for activity review, monitoring and evaluation.
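As a rough illustration of this structure, the sketch below represents a Logical Framework for a hypothetical primary education project as a simple data structure. The project, objective statements and indicators are invented for illustration only and are not taken from this manual; a real logframe would also record means of verification and assumptions at every level.

# Minimal, illustrative sketch (hypothetical project, not from the manual):
# a Logical Framework as a nested data structure linking inputs, activities,
# outputs, outcome and impact, each with indicators for measurement.
logframe = {
    "impact": {"statement": "Improved learning outcomes in the district",
               "indicators": ["average score in the national exam"]},
    "outcome": {"statement": "Increased primary school attendance",
                "indicators": ["attendance rate (%)"],
                "risks": ["drought reduces household incomes"]},
    "outputs": [
        {"statement": "Classrooms rehabilitated", "indicators": ["no. of classrooms"]},
        {"statement": "Teachers trained", "indicators": ["no. of teachers trained"]},
    ],
    "activities": ["procure construction materials", "run teacher training workshops"],
    "inputs": ["budget", "staff", "training materials"],
}

# Reading the framework from the bottom up shows the causal chain that
# monitoring and evaluation check at each level.
for level in ("inputs", "activities", "outputs", "outcome", "impact"):
    print(level, "->", logframe[level])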
Rapid appraisal methods are quick, low-cost ways to gather the views and feedback of
beneficiaries and other stakeholders to respond to decision-makers’ needs for
information. Opinions of beneficiaries can be obtained through key informant interviews, focus group discussions, community group interviews, direct observation, and simple surveys. Rapid appraisal provides a qualitative understanding of complex socio-economic
changes, highly interactive social situations, or people’s values, motivations, and
reactions. It also helps to provide context and interpretation for quantitative data
collected by more formal surveys.
Cost-benefit and cost-effectiveness analysis are tools for assessing whether or not the costs of an activity can be justified by its outcomes and impacts. Cost-benefit analysis measures both inputs and outputs in monetary terms. Cost-effectiveness analysis estimates inputs in monetary terms and outcomes in non-monetary quantitative terms (such as improvements in students' scores). They are used to inform decisions about the most efficient allocation of resources and to identify projects that offer the highest rate of return on investment.
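To make the difference concrete, the sketch below shows, in Python and with purely hypothetical figures, how a benefit-cost ratio and a cost-effectiveness ratio might be computed for two invented school interventions; the discount rate, costs, benefits and score gains are assumptions for illustration, not values from this manual.

# Minimal, illustrative sketch with hypothetical figures (not from the manual):
# cost-benefit analysis expresses both costs and benefits in money, while
# cost-effectiveness analysis relates cost to a non-monetary outcome.

def benefit_cost_ratio(benefits, costs, discount_rate=0.10):
    """Ratio of discounted benefits to discounted costs over the project years."""
    pv_benefits = sum(b / (1 + discount_rate) ** t for t, b in enumerate(benefits))
    pv_costs = sum(c / (1 + discount_rate) ** t for t, c in enumerate(costs))
    return pv_benefits / pv_costs

def cost_per_unit_outcome(total_cost, outcome_gain):
    """Cost per unit of non-monetary outcome, e.g. per test-score point gained."""
    return total_cost / outcome_gain

# Hypothetical intervention A: yearly benefits and costs in currency units.
bcr_a = benefit_cost_ratio(benefits=[0, 50_000, 70_000], costs=[80_000, 5_000, 5_000])

# Hypothetical interventions A and B compared on cost per test-score point gained.
ce_a = cost_per_unit_outcome(total_cost=90_000, outcome_gain=6.0)
ce_b = cost_per_unit_outcome(total_cost=70_000, outcome_gain=4.0)

print(f"Intervention A benefit-cost ratio: {bcr_a:.2f}")        # above 1.0 means discounted benefits exceed costs
print(f"Cost per score point: A = {ce_a:.0f}, B = {ce_b:.0f}")  # the lower figure is more cost-effective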
(5) Impact Evaluation
(6) Meta-evaluation
Chapter 2: Trend towards Results-based Monitoring and
Evaluation
“Managing for Development Results (MfDR)” is one of the key issues that emerged from such efforts. With the definition agreed in Marrakech, as shown in Box-2, MfDR calls on developing countries to increase their commitment to policies and actions that promote economic growth and reduce poverty, and on developed countries to support them through more effective aid and trade policies.
MfDR is a new concept and tool to enhance the impact and effectiveness of
development and poverty reduction programs as well as to improve performance
management of development interventions.
The concept that underlies MfDR is that global development assistance can be made more effective by enhancing country ownership, aligning assistance with country priorities, harmonizing development agencies’ policies and procedures, and focusing more consistently on the achievement of development outcomes.
In the Paris Declaration of 2005, donors and partner countries expressed their
commitments to MfDR as follows (Source: OECD/DAC MfDR Source Book 2006):
In order to achieve MfDR, there are issues to be worked on at the national level as well as at the sector program and project levels. The following are some of the major topics on which developing countries and development agencies are working at each level.
(1) At the National Level
Developing countries need to better manage their development processes toward desired outcomes. MfDR begins with identifying national goals and developing the strategies to achieve those goals.
Many countries are making progress in linking their poverty reduction strategy or
national development strategy to results-based expenditure management and
performance orientation in public administration. At the same time, central and line
ministries are developing more results-focused strategies accompanied by results
frameworks to monitor progress.
(2) At the Sector Program and Project Levels
Partner countries and development agencies work together in strengthening the focus on
results in their sector programs and projects through aligning national strategies and
sector development plans, harmonizing results reporting, and strengthening capacity to
manage for results in programs and projects.
Within the context of poverty reduction strategies and national development plans,
development agencies and partner countries are collaborating to identify priority sectors
in which targeted programs and projects can support achievement of country outcomes.
The main purposes of results-based management (RBM) are to improve organizational learning and to fulfill accountability obligations through performance reporting. Although RBM is nearly synonymous with MfDR, MfDR incorporates broader ideas such as collaboration, partnership, country ownership, harmonization, and alignment, focusing on performance and improvements in country outcomes, whereas RBM can also be used for shorter-term, smaller-scale interventions.
Emergence of RBM and its development
Phases of RBM
The following table shows the seven phases of RBM. As shown below, in RBM, (1) objectives are formulated, (2) indicators to show progress toward the objectives are identified, and (3) targets are set. These three phases are called Strategic Planning. The next two phases, (4) monitoring results and (5) reviewing and reporting results, are called Performance Measurement. Finally, (6) evaluation is conducted and (7) the information obtained through M&E is used to improve interventions (in RBM, evaluation is considered to supplement monitoring information).
Thus, RBM entails not only setting objectives and implementing M&E activities but also using M&E results to improve management (a simple illustration of performance measurement follows the table below).
1 Formulating objectives
2 Identifying indicators
3 Setting targets
4 Monitoring results
5 Reviewing and reporting results
6 Integrating evaluation
7 Using performance information
(Source: Annette Binnendijk, Results Based Management in the Development Co-operation Agencies: A Review of Experience, Background Report, 2001)
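The sketch below is a minimal, hypothetical illustration of the Performance Measurement phases (indicators, targets, monitored actuals and reporting); the indicators, baselines and figures are invented for illustration and are not drawn from this manual or from any agency's system.

# Minimal, illustrative sketch of performance measurement under RBM
# (hypothetical indicators and figures): actual results observed through
# monitoring are compared against pre-set targets and reported.

indicators = [
    # (indicator, baseline, target, actual value observed through monitoring)
    ("Primary school attendance rate (%)", 72.0, 85.0, 80.5),
    ("Households with access to safe water (%)", 55.0, 70.0, 71.2),
]

def performance_report(indicators):
    """Compare monitored actuals against targets and describe progress."""
    lines = []
    for name, baseline, target, actual in indicators:
        progress = (actual - baseline) / (target - baseline) * 100
        status = "target met" if actual >= target else "below target"
        lines.append(f"{name}: actual {actual} vs target {target} "
                     f"({progress:.0f}% of planned progress, {status})")
    return lines

for line in performance_report(indicators):
    print(line)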
Performance Measurement, Performance Indicators and Performance Monitoring
For example, some agencies such as CIDA, DFID and AusAID came to report the performance of their major interventions annually in performance monitoring reports. In addition to performance monitoring, some agencies such as CIDA, DFID and the World Bank make it compulsory for country offices to submit final reports focusing on the factors that hindered or promoted a project's achievements and its implementation process. Evaluation also plays an important role in this context.
The following table shows how some of the major development agencies that employ RBM conduct performance measurement at the project/program level.¹
¹ As USAID and UNDP do not conduct performance measurement at the project level, the descriptions for USAID and UNDP refer to performance measurement at the program level.
Table-5: Performance Measurement System of Major Donors at the Project Level
higher goals. Therefore, it can be used at the program or strategy level rather than at the project level.
As far as the above table tells us, the general trend is that the Logframe is used by agencies which mainly conduct performance measurement and evaluation at the project level, while the Results Framework is adopted by organizations which conduct performance measurement and evaluation mainly at the program level.
It is often pointed out that the introduction of RBM has changed the role of traditional monitoring and evaluation. Conventionally, monitoring was conducted to check the implementation process. As for evaluation, when to conduct an evaluation used to be pre-determined according to the project period (ex-ante, mid-term, terminal, ex-post, etc.), and performance analysis and reporting were conducted after a project had terminated.
In RBM, however, monitoring is conducted continuously, not only to check the process but also to measure performance. Also, many agencies have started to conduct an evaluation only when necessary, in order to complement monitoring activities. Regarding reporting, many development agencies now report their performance annually instead of only after the termination of an intervention. In this way, RBM enables performance to be assessed in a timely manner and the results to be fed back into ongoing strategies so that agencies can better manage their interventions.
Results-based M&E systems are designed to address the “so what” question. A results-based M&E system provides feedback on the actual outcomes and goals of government actions, and helps answer the following questions:
Box-4: Essential Actions to Build a Results-based M&E System
Results-based M&E is a major component of the MfDR toolbox that helps both
countries and agencies systematically measure the progress of program and project
outcomes. Thus, effective results-based M&E is useful to better manage policies,
programs, and projects, and to demonstrate progress to civil society stakeholders. It also
shows the extent to which specific activities or programs contribute to achieving
national outcomes.
Box-6: Use of Aid Evaluation for Developing Countries
Chapter 3: Use of Monitoring and Evaluation Results
As we learned in Chapter 1 of Module 1 (see Figure 1 below), the project cycle of “formulation (=Plan)”, “implementation (=Do)”, and “monitoring and evaluation (=See)” applies not only at the project level but also at the program and policy levels. In other words, the results of program/policy evaluation are reflected in the formulation of program and policy plans such as the PRSP.
[Figure 1: The project cycle - PLAN (Formulation), DO (Implementation) and SEE (Monitoring, Evaluation), with feedback from evaluation into formulation]
Findings from monitoring and evaluation can be used in a variety of ways, as shown in Box-7:
Box-7: Ten Uses of Results Findings
[Figure 3: Feedback of evaluation results - evaluation serves accountability toward external stakeholders (donors, NGOs, beneficiaries, the budgetary office, auditors and the media) and learning within operational and related departments, feeding into the planning and implementation of new policies/programs/projects and the modification of ongoing ones]
(Source: JICA, Textbook for the Area Focused Training Course in the Forum on Institutionalization of Evaluation System, 2007)
Chapter 4: M&E System and Stakeholders in Nepal
The government of Nepal has its own monitoring and evaluation system, the details of
which are explained in Module 6: Project Reporting for Monitoring. Many actors are
involved in the system, and each of them plays its role in the process. Key M&E
stakeholders and their functions in the context of development interventions in Nepal
are summarized as follows:
(2) NPCS is to design and update monitoring reporting formats and guidelines to be
used by other ministries, line offices at the district level and project offices. NPCS is
also to compile M&E results collected from other ministries, and report the results
in NDAC meetings. NPCS provides other ministries with feedback on their M&E
results;
(3) All the officers of NPCS as well as Planning/M&E division or section officers of
other ministries are to design, conduct and report M&E activities to related offices.
At the same time, they are to utilize M&E results in future program and project
formulation and planning;
(4) Planning/M&E officers of departments, regional offices, district offices and project
offices are to plan, conduct and report M&E activities to related offices as well as to
utilize M&E results in future program and project formulation and planning;
(5) M&E experts and academicians of external institutions conduct evaluation surveys for the government and prepare evaluation reports. They also refer to and utilize the government's M&E results/reports in their research;
(6) Target groups and other stakeholders join the M&E activities of related interventions which employ participatory M&E methods, and/or are interviewed for M&E data collection;
(7) Taxpayers are to check, based on the M&E results, how the government uses financial resources and whether it achieves the intended goals; and
(8) Development partner agencies are to design, conduct and report M&E on their own assistance policies, programs and projects. Nepali counterpart agencies are to be involved in the process.
Box-8: The System to Monitor the MDGs in Nepal - PMAS and DPMAS
CONGRATULATIONS!! THIS IS THE END OF MODULE 1.
Please check whether or not you understand the contents of this module by
answering the following checklist.
CHECKLIST
What is the difference between monitoring and evaluation?
What types of evaluation are there in terms of stages of project cycle?
What types of evaluation are there in terms of level?
Recommended Readings:
Jody Zall Kusek and Ray C. Rist, World Bank, “Ten Steps to a Results-Based Monitoring and Evaluation System”, 2004
Keith Mackay, Independent Evaluation Group, World Bank, “How to Build M&E
Systems to Support Better Government”, 2007
Useful Website of International Development Partners
Agency Website
JICA Evaluation: http://www.jica.go.jp/english/operations/evaluation/
USAID 1) Evaluation: http://evalweb.usaid.gov/index.cfm
2) Guidelines (ADS) : http://evalweb.usaid.gov/adsreqs/adsrequirements.cfm
DFID Evaluation: http://www.dfid.gov.uk/aboutdfid/evaluation.asp
GTZ /KfW 1) GTZ (Evaluation): http://www.gtz.de/en/leistungsangebote/6332.htm
2) KfW (Evaluation):
http://www.kfw-entwicklungsbank.de/EN_Home/Ex-post_Evaluation_at_KfW/index.jsp
AFD Annual Report: http://www.afd.fr/jahia/Jahia/lang/en/pid/18
SIDA 1) Evaluation: http://www.sida.se/sida/jsp/sida.jsp?d=1509&language=en_US
CIDA 1) Evaluation: http://www.acdi-cida.gc.ca/CIDAWEB/acdicida.nsf/En/JUD-111795644-KJX
2) Annual Report: http://www.tbs-sct.gc.ca/dpr-rmr/2006-2007/index-eng.asp
NORAD 1) Evaluation: http://www.norad.no/default.asp?V_ITEM_ID=3507
2) Annual Report 2004-2005:
http://www.norad.no/items/3875/38/7079169984/ENGELSK_aarsrapport_2004-2005.pdf
DANIDA Evaluation: http://www.um.dk/en/menu/DevelopmentPolicy/Evaluations/
AusAID 1) Aid Effectiveness: http://www.ausaid.gov.au/ode/default.cfm
2) Annual Reports:
http://www.ausaid.gov.au/publications/pubout.cfm?ID=8691_5877_871_8496_1205
World Bank 1) Evaluation: http://www.worldbank.org/oed/
2) M&E Manual:
http://wbln0018.worldbank.org/Institutional/Manuals/OpManual.nsf/05TOCpages/Management
3) Project Cycle:
http://web.worldbank.org/WBSITE/EXTERNAL/PROJECTS/0,,contentMDK:20120731~menu
PK:41390~pagePK:41367~piPK:51533~theSitePK:40941,00.html
4) Measuring Results:
http://web.worldbank.org/WBSITE/EXTERNAL/PROJECTS/0,,contentMDK:20120723~menu
PK:41393~pagePK:41367~piPK:51533~theSitePK:40941,00.html
5) Annual Report on Operations Evaluation:
http://web.worldbank.org/WBSITE/EXTERNAL/EXTOED/EXTANNREPOPEEVA/0,,content
MDK:21115852~menuPK:3073384~pagePK:64168427~piPK:64168435~theSitePK:3073310,0
0.html
UNDP 1) Evaluation: http://www.undp.org/eo/
2) Guidelines: http://www.undp.org/eo/methodologies.htm
Asian Development Bank 1) Evaluation: http://www.adb.org/Evaluation/
2) Evaluation Reports: http://www.adb.org/Evaluation/reports.asp