SES Inception Report

Contents

Acronyms
I. Introduction
II. Background
III. Objectives, Purpose, Scope and Limitations
IV. Approach
V. Methodology
VI. Phases, Deliverables and Workplan/timeline
Annex 1: Terms of Reference
Annex 2: Interview Template – Executive Directors
Annex 3: Interview Template – Managers and Staff (HQ and Regional hubs)
Annex 4: Survey Template
Annex 5: Report Outline
Acronyms

AfDB African Development Bank
ADF African Development Fund
ADOA Additionality and Development Outcome Assessment
APPR Annual Portfolio Performance Review
BTOR Back-to-Office Report
CPPR Country Portfolio Performance Review
CSP Country Strategy Paper
DAM Delegation of Authority Matrix
DBDM Development and Business Delivery Model
EVRD Evaluation Results Database
IDEV Independent Development Evaluation Department
IPRR Implementation Progress and Results Reporting
KPI Key Performance Indicator
M&E Monitoring & Evaluation
MTR Mid-Term Review
OECD Organization for Economic Co-operation and Development
OM Operations Manual
PCR Project Completion Report
PCREN Project Completion Report Evaluation Note
PMG Performance Monitoring Group
PPER Project Performance Evaluation Report
QoS Quality of Supervision
RISP Regional Integration Strategy Paper
RLF Results-based Logical Framework
RMF Results Measurement Framework
RMR Results monitoring and reporting
RPPR Regional Portfolio Performance Review
SES Self-Evaluation System
TOC Theory of Change
TYS Ten-Year Strategy
WBG World Bank Group
XSR Extended Supervision Report
I. Introduction
The Independent Development Evaluation Department (IDEV) of the African Development Bank Group (AfDB or the Bank) has retained the services of Centennial Group International to carry out the Evaluation of the Bank's self-evaluation system and processes (SES). Annex 1 provides the Terms of Reference for the assignment. This Inception Report contains six sections: this introduction, followed by (ii) background; (iii) objectives, purpose, scope, and limitations; (iv) approach; (v) methodology; and (vi) phases, deliverables, and workplan/timeline.

II. Background
Aside from the independent evaluation function performed by IDEV, AfDB’s management also
employs self-evaluation systems and processes (SES) that help in: assessing the Bank’s
investment efforts, learning from operational experiences, and monitoring and improving the
Bank’s performance. The SES are defined in different Bank documents including:
 Operations Manual (OM), which was initially adopted in 1993 and revised in 1999, and
more recently in 2014. (The next revision of the current OM is expected to commence in
2019.);
 Delegation of Authority Matrix (DAM) and relevant Presidential Directives;
 Additionality and development outcomes assessment (ADOA) framework for the Bank’s
non-sovereign operations; and
 4-level Results Measurement Framework (RMF), which is central to the Bank’s SES.
Various strategy and policy papers are also important. On the strategy front, the Bank's Medium-Term Strategy (MTS) was replaced in 2013 with the Ten-Year Strategy (TYS)—At the Center of
Africa’s Transformation 2013-2022. Following the adoption of the TYS, the Bank has undergone
major changes, particularly the adoption of the High 5s priorities and related strategies. This
has been accompanied by the Board’s endorsement and subsequent implementation of the new
Development and Business Delivery Model (DBDM).
In response to changing contexts within and outside the Bank, the Bank’s SES continues to
evolve over time.

III. Objectives, Purpose, Scope and Limitations


The overall objective of this assessment is to provide support for the evaluation of the SES as
per the attached Terms of Reference (Annex 1) and as modified in this Inception Report. The
evaluation team will be responsible for identifying and systematically synthesizing evidence
from various sources and for conducting the evaluation following the Organization for
Economic Co-operation and Development/Development Assistance Committee (OECD/DAC)
evaluation criteria and principles as well as the good practices from the Evaluation Cooperation
Group (ECG).
More specifically, the objectives are to:
 Assess how well the Bank’s SES perform, focusing on their relevance, coherence,
efficiency, effectiveness, and short-term impact in serving three primary objectives—
improving performance, enhancing accountability for performance, and promoting
learning;
 Identify and assess the enablers and barriers that affect the design, implementation, and
results of the Bank’s self-evaluation systems and processes;

 Distil lessons and good practices, and formulate recommendations to enable the Bank to enhance the quality and performance (design, scope, implementation and results) of its self-evaluation systems and processes.
The focus of this evaluation will be the Bank’s SES as they functioned between 2013 and 2018.
This period covers a considerable part of the implementation of the TYS, the adoption of the
High 5s strategies, as well as recent reforms including implementation of the DBDM and process
reengineering. This period also encompasses the issuing of the updated Operations Manual in
2014; findings from the evaluation would thus support and inform the upcoming 2019 revision.
The purpose of the evaluation will be to support the Bank’s management and operational staff
through its findings, conclusions and recommendations in:
 Improving self-evaluation, and performance management of operations, strategies, and
policies;
 Improving the relevance and quality of the Bank’s Operations Manual whose revision is
to start in 2019;
 Promoting learning from experience, and enhancing operational effectiveness;
 Supporting the implementation of the New Development and Business Delivery Model
(DBDM), and process engineering;
 Accounting to the Board of Directors and other stakeholders for the results of the
investments in the Bank’s SES.
The evaluation will build on and complement the:
 IDEV Evaluation of the African Development Bank’s quality assurance across the project
cycle (2013-2017) and Evaluation of the Integrated Safeguards System; and
 Independent audit of Bank results monitoring and reporting (RMR).
Scope. The evaluation's intended audience is primarily Bank staff and managers as well as
members of the Bank’s Board. The evaluation is also expected to be of interest to comparator
organizations.
With the Bank’s self-evaluation systems covering many different operational product lines the
scope of the evaluation would include the range, processes, and methodologies being employed
by these products. Some of the Bank’s operations and reports are validated by IDEV and feed
into corporate scorecards and results measurement systems. An important distinction can be
made between the mandatory self-evaluation products (e.g. IPR, PCR, XSR, CSP-CR) and
voluntary evaluation studies such as impact evaluations and occasional programmatic
evaluations or retrospective studies commissioned by individual business units. Table 1 below
shows the current coverage of AfDB’s self-evaluation tools and systems.
The evaluation will assess SES for sovereign and non-sovereign development operations. It will
focus on:
• Country strategy papers (CSPs) and Regional Integration Strategy Papers (RISPs);
• Sector and, more recently, High 5s strategies;
• Investment operations (both sovereign and non-sovereign);
• Program-based Operations (PBOs); and
• Self-impact evaluations undertaken by the Bank.

Table 1: Self-evaluation tools and processes

| Activity level | Area of focus | Self-evaluation tool (acronym) | Prepared by | Frequency and coverage | IDEV role & validation coverage | Main purpose |
|---|---|---|---|---|---|---|
| Project | Lending | IPR, XSR | Management | Bi-annual | — | Performance management |
| Project | Lending | MTR | Management | Mid-term | — | Performance management; course correction; learning |
| Project | Lending | PCR | Management | Completion | Validation | Learning |
| Project | Lending | PCREN, XSREN | IDEV | Completion | Validation | Learning |
| Project | Lending | Impact evaluation (occasional and voluntary) | Management | Completion | — | Learning |
| Project | Advisory services and analytics | Completion report | Management | Completion | — | Learning |
| Project | Trust Fund activities and Trust Fund grants | Completion report, annual progress reports | Management | Completion | — | Accountability |
| Program | Country program | CSP-CR | Management | Completion | Validation (pilot) | Learning |
| Program | Regional and sector program | RISP-CR | Management | Completion | Validation | Learning |
| Program | Regional and sector program | Sector strategy progress reports | Management | Occasionally | — | Accountability |
| Program | Regional and sector program | Completion reports | Management | Completion | — | Learning |
| Program | Global and regional partnership programs | Completion reports | Management | Completion | — | Learning |
| Corporate | Corporate results framework | Progress reports | Management | Completion | — | Learning |
| Corporate | TYS, High 5s | Progress reports | Management | Completion | — | Learning |
Aggregated self-evaluation results feed into apex reports that support corporate-level accountability. These include:
 The Bank’s RMF and the associated Annual Development Effectiveness Review;
 Portfolio monitoring reports; and
 Reporting to the Board on progress in implementing strategies.
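
To make the aggregation step above concrete, the Python sketch below shows one simple way project-level SES ratings could roll up into the kind of corporate-level summary an apex report presents. This is a minimal illustration only: the rating records, the 1-4 scale, and the aggregation rules are hypothetical assumptions, not the Bank's actual RMF methodology.

```python
# Illustrative sketch: rolling up hypothetical project ratings into a
# corporate summary. All names, ratings, and the 4-point scale are assumed.
from collections import defaultdict
from statistics import mean

# Hypothetical IPR/PCR ratings on a 1-4 scale (4 = highly satisfactory)
project_ratings = [
    {"country": "A", "instrument": "PCR", "rating": 3},
    {"country": "A", "instrument": "IPR", "rating": 2},
    {"country": "B", "instrument": "PCR", "rating": 4},
    {"country": "B", "instrument": "IPR", "rating": 3},
]

def aggregate(ratings, threshold=3):
    """Average ratings per country and report the share rated satisfactory or better."""
    by_country = defaultdict(list)
    for r in ratings:
        by_country[r["country"]].append(r["rating"])
    summary = {c: mean(v) for c, v in by_country.items()}
    share_satisfactory = sum(r["rating"] >= threshold for r in ratings) / len(ratings)
    return summary, share_satisfactory

summary, share = aggregate(project_ratings)
print(summary)                                       # {'A': 2.5, 'B': 3.5}
print(f"{share:.0%} rated satisfactory or better")   # 75% rated satisfactory or better
```

The interest of the sketch is the two-level structure it makes visible: individual report ratings are the raw material, while the apex reports consume only the country- and portfolio-level aggregates.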

Limitations. As indicated in the Terms of Reference, personnel self-evaluation systems and processes fall beyond the scope of this assignment.
Areas typically not covered by self-evaluation, and hence outside the scope of this evaluation, include:
 Board operations
 Control functions
 Treasury operations

Possible limitations to the evaluation may come from the availability of SES products and
documents as filed by the task managers in the Bank’s system.

IV. Approach
Theory of Change (ToC): The evaluation will be guided by IDEV's evaluation policy, the OECD-DAC criteria, and the Evaluation Cooperation Group's Big Book on Evaluation Good Practice Standards, and will be based on a theory of change presented schematically in Figure 1.
The theory of change underpinning the self-evaluation architecture is based on the
fundamental logic that well-functioning SES can play a useful and effective role in helping to
improve:
(i) performance management and how the availability of reliable information and
evidence can help management take timely decisions to improve portfolio
performance;
(ii) accountability and how the provision of key information at different levels
(project, program, corporate) signals that the AfDB holds itself accountable for
achieving results; and
(iii) learning and how the SES can be a tool for continuous learning and adaptation
internally and externally.
The Evaluation will look at broad aspects of the architecture of the SES (components, processes and mechanisms). It will also examine the causal pathways from the basic inputs into the self-evaluation systems (the portfolio at entry, the M&E systems, the business processes, the leadership signals and incentive structure, the budget, and the various strategic and guidance documents put in place by the institution) to the outputs, outcomes and impact they influence. In this context the evaluation will also consider the stakeholders of the self-evaluation systems, and their perspectives and inter-relationships.

Figure 1: Preliminary Theory of Change

The evaluation will also examine the links between inputs and outcomes that run through the periodic production of reports (outputs) generated during project supervision and at closing (PSR/IPRR, XSR, MTR, CSP/RISP-CR, etc., and occasionally self-impact evaluations). These reports feed into broader reporting arrangements at the corporate level. Links between the self-evaluation systems and other major systems will also be reviewed to assess how they influence the overall response culture, the incentive structure and performance; these other systems include project logframes; IDEV's own independent evaluations and 'validation' exercises (e.g., PCREN, XSREN); commitments made at the corporate level; the Bank's Operations Manual; and other requirements. The interfaces between the various systems, gaps in coverage, overlaps, relevance, periodicity, and the overall supporting environment will be analysed, as they can influence the performance of the self-evaluation architecture.
In examining the various causal pathways, a number of assumptions and facts will be
tested to probe the robustness and credibility of the system and identify the weak
links that could lead to recommendations for improvements. The key assumptions for
the different levels of ToC causality cover:
 Assessment of the enabling environment and barriers to effective self-
evaluation;
 Assessment of prevailing incentive structure, including how it influences
individual behaviours;
 Balance between compliance and results-orientation leading to the intended
results;
 Cost-effectiveness of what is produced compared to its added value;
 Adequate production, use and relevance of the various project rating
systems; and
 Likelihood that timely corrective action can be undermined by transaction
costs and aversion to risks, etc.

The evaluation will be forward looking and offer Management a number of


recommendations that can enhance the performance of the tools, methods,
indicators, processes and incentives that are most likely to establish the trust in the
SES and the credibility of their results. This, in turn, would promote more effective
use of self-evaluation outputs especially for decision-making, learning, and
accountability for improving the Bank’s development effectiveness.
This evaluation will primarily rely on the OECD-DAC criteria of relevance, effectiveness and efficiency, while assessing results beyond outputs. Coherence within the Bank's self-evaluation systems and processes, and with the Bank's independent evaluation function, will also be assessed.

Evaluation Questions: As suggested in the Theory of Change, the overarching
question to be addressed by the evaluation is:
Do the self-evaluation systems and processes (SES) support Performance
Management, Accountability, and Learning at the Bank?
With two underlying sub-questions:
1. How well are the SES performing?
2. To what extent are the SES affecting the quality of development results?
Questions/sub-questions in the Evaluation Matrix below are organized for each of the
three pillars (Performance Management, Accountability and Learning) and
subdivided according to the four evaluation criteria (Relevance & Coherence, Cost-
efficiency, Effectiveness and Impact contribution, and Incentives & barriers).

Table 2: Evaluation Matrix

Sources for all evaluation questions: desk-based review; stakeholder survey, interviews and focus group discussions (staff, managers and, for the accountability questions, Executive Directors; HQ and field); recent evaluation reports (quality at entry, quality of supervision, CEDR, PCR and XSR evaluation notes, cluster evaluations); case studies and process reviews; benchmarking analysis.

Relevance & Coherence

Performance Management — EQ1. Are the SES and their different instruments relevant and coherent in how they align with the Bank's strategy and guidance, and do they serve their purpose for performance management? [To what extent is there evidence that implementation of the SES results in enhanced performance of projects and country programs?]
Assessment criteria and indicators:
 Extent to which the set of policy and guidance documents describes a coherent process for the SES, with added value from each instrument, clear roles and responsibilities, and a coherent and objective description of the rating process. The main instruments to be analysed are: i) projects and the project portfolio – IPRR/XSR (for non-sovereign operations), MTR, PCR, CPPR and APPR, with Aide Memoires and BTORs also considered; ii) country/regional strategies – CSP/RISP-MTR, CSP/RISP-CR; and iii) sector/thematic strategies – "reviews".
 Extent to which the set of SES instruments is aligned with the main policy and guidance documents: i) the Operations Manual, Presidential Directives and Delegation of Authority Matrix (DAM), DBDM, High 5s, TYS, etc.; ii) focus on priority areas: gender, fragility, safeguards, fiduciary, governance.
 Extent to which implementation of the SES and its instruments complies with the main guidance documents.
 Extent to which the ratings system reflects a consistent, timely and accurate approach to performance management for the various processes around each instrument.
 Extent to which the SES is used as a tool for corrective action and analysis of possible impediments.

Accountability — EQ2. Are the SES and their different instruments a reliable and relevant framework geared towards verification of results, accountability and reporting at staff/management and corporate levels? [Does the SES generate useful information for staff, management and the Board, signalling that the AfDB holds itself accountable for achieving results?]
Assessment criteria and indicators:
 Extent to which the SES relies on strong M&E systems as a critical input for the credibility of the system.
 Extent to which the policy and guidance documents describing the SES are adequate for verifying achievement of results and lines of accountability.
 Extent to which the SES is internally coherent and consistent with the Bank's independent evaluation function.
 Extent to which the SES provides a broad and relevant perspective on the results achieved and communicates overall performance in an easily understood way.
 Extent to which staff and management are held accountable for proper implementation of the SES towards its intended results-based objectives.
 Extent to which the SES and its different levels of aggregation provide relevant and accurate reporting of results through the RMF and other reporting tools (dashboards, etc.).

Learning — EQ3. Are the SES being used as a reliable and relevant framework for learning and innovation? [Is the information produced being used to shape how learning takes place?]
Assessment criteria and indicators:
 Extent to which processes, roles and tools are well-defined, comprehensive and integrated for learning and knowledge management.
 Extent to which comprehensive scoring and IT systems are in place to ensure learning from relevant project cycle and thematic activities.
 Whether the SES provides the most relevant information to the different audiences.
 Extent to which the SES is used as a tool for learning and as a repository of evidence on good practices, failures and lessons learned.

Cost-Efficiency

Performance Management — EQ4. To what extent do the SES and their instruments provide a reliable and cost-effective framework for portfolio management? [Are the SES implemented in a cost-effective and timely manner, in terms of time and financial and human resources, to deliver their intended benefit?]
Assessment criteria and indicators:
 Whether the costs of the Bank's SES across the project cycle are appropriate relative to the results achieved.
 Extent to which the resource requirements are comparable to the frameworks of other organizations.
 Extent to which eliminating possible overlaps, redundancies or requirements could improve cost-efficiency.
 Extent to which the different instruments of the SES add value compared to their costs.
 Extent to which budget or staff overload is a factor in the proper implementation of the SES as described.
 Extent to which staff proactivity for corrective action is promoted or impeded by transaction costs, project restructuring procedures, etc.

Accountability — EQ5. Do the SES provide a reliable and cost-effective framework for reporting and accountability, internally and externally? [Is the money spent on the SES commensurate with the quality and usefulness of the information provided?]
Assessment criteria and indicators:
 Extent to which quality at entry is a factor in the implementation of a cost-effective SES for accountability and reporting.
 Extent to which M&E systems are a factor in establishing a credible accountability framework.
 Extent to which reporting requirements are adequate and cost-effective in providing the right information to the different audiences.
 Extent to which users find data input costly in terms of time compared to the benefits.
 Extent to which ratings and their aggregation are used as a cost-efficient tool for corporate performance reporting.
 Extent to which the way aggregation takes place for corporate reporting is adequate and cost-efficient.

Learning — EQ6. Are the SES being implemented as a cost-effective tool for learning? [Is the money spent on the SES commensurate with the amount of learning and innovation it generates?]
Assessment criteria and indicators:
 Extent to which templates and instruments support efficient recording of lessons and add the most value.
 Extent to which SES and rating validation could increase the opportunities for learning.
 Extent to which the SES is used to record and gather data for learning purposes, events and knowledge management.
 Extent to which the SES is trusted enough to be a source of learning, and what the role of independent evaluations could be.

Effectiveness & Impact Contribution

Performance Management — EQ7. Is the SES architecture being implemented as a tool to enhance portfolio performance and the achievement of results? [What is the evidence that the SES contributes to improved quality at exit?]
Assessment criteria and indicators:
 Extent to which the system is designed to provide the right balance between ensuring compliance and pursuing results.
 Extent to which SES guidance is geared towards promoting proactivity in addressing issues and taking corrective actions.
 Extent to which the inputs to the SES (quality at entry, M&E, processes, instruments, budget) are adequate to ensure proper delivery of the SES.
 Extent to which the implementation of the rating system is sufficiently trusted to be a reliable tool for performance management.
 Extent to which the SES has facilitated the implementation of mitigation and safeguard measures.
 Extent to which the SES has contributed to effectively addressing cross-cutting issues such as gender, fiduciary and fragility.

Accountability — EQ8. Is the SES architecture being implemented as a tool to enhance accountability, consistently with the guidance provided and the Bank's priorities? [Is the degree of accountability generated by the SES conducive to improved results?]
Assessment criteria and indicators:
 Whether roles and responsibilities are sufficiently clear in the definition of the different processes and requirements surrounding the preparation, conduct, review, sign-off and follow-up of the various SES instruments.
 Whether the design of, and guidance around, corporate reporting processes are effective and well integrated in the SES.
 Whether the Bank's SES has been delivered as expected for accountability purposes.
 Extent to which procedures have been enforced when needed.
 Extent to which the implementation of the rating system is sufficiently trusted to be a reliable tool for accountability and reporting requirements.
 Extent to which attribution is a factor in determining the degree of the Bank's accountability.

Learning — EQ9. Have the SES contributed to the identification and use of lessons learned? [Are the SES, as designed and implemented, the right tool for learning?]
Assessment criteria and indicators:
 Extent to which the focus on accountability can undermine the relevance and usefulness of the SES for learning.
 Extent to which independent or arm's-length evaluation can play a role in enhancing learning opportunities.
 Extent to which concerns over ratings and disconnects distract from learning.
 Extent to which the SES can be used more strategically to meet knowledge gaps and for lesson learning.

Incentives and Barriers

Performance Management — EQ10. Are the incentives in place conducive to candid assessments and proactivity for portfolio performance and corrective action? [Do the incentives in place ensure that the SES is implemented as designed, for quality results?]
Assessment criteria and indicators:
 Extent to which a compliance mindset and a focus on ratings can distort the systems and create biases or a candour gap.
 Extent to which the guidance provided can strengthen incentives for results and corrective action rather than compliance and transaction costs.
 Extent to which team or third-party validation can help the system remain honest.
 Extent to which evidence is available to justify the ratings, and what the risk of "gaming the system" is.
 Extent to which poor project performance is seen as linked to staff performance, potentially leading to risk aversion and fear of tainting staff reputations.
 Extent to which the incentives in place reflect the right balance between the focus on lending and on supervision.

Accountability — EQ11. Are the incentives in place conducive to candid assessments for accountability? [Can incentives influence behaviours so that accountability is oriented towards the achievement of results rather than compliance with rules?]
Assessment criteria and indicators:
 Extent to which adequate implementation of the SES should be linked to staff performance evaluation.
 Extent to which a greater focus on the inputs (quality at entry, budget, business processes, M&E systems) is likely to improve the incentive structure for effective implementation of the SES.
 Extent to which Management exerts leadership over the correct implementation of the system and lines of accountability.
 Extent to which staff and managers see the SES as a relevant accountability tool that requires trust and close follow-up to be useful.

Learning — EQ12. Is the incentive structure geared towards the use of the SES for continuous learning and innovation? [Do the incentives in place ensure that the SES serves as a learning tool?]
Assessment criteria and indicators:
 Extent to which new mechanisms for lesson sharing, "safe spaces" for debate and incentives for transparency would enhance the credibility and relevance of the SES for learning purposes.
 Extent to which learning and feedback loops can lead to system improvement through enhanced flexibility, better procedures for project restructuring, recognition of excellence, and differentiation according to specific situations (e.g. fragile contexts).
 Extent to which more direct involvement of independent third parties and specific thematic/country events would increase the credibility of the system and learning opportunities.
 Extent to which incentives can change behaviours in terms of documenting evidence of proactivity for corrective action, good practice in ratings follow-up, awards for innovation, etc.
 Extent to which better incentives can increase the perceived value of the knowledge created, mainstream risks and failures as part of the business, and create opportunities for mining lessons and knowledge and for organizational learning.
 Extent to which the focus on corporate results reporting for accountability and ratings weakens staff attention to other purposes such as learning.
Evaluation Framework. The framework adopted for the evaluation is illustrated in
Figure 2 below.

Figure 2: Evaluation Framework

V. Methodology
In conducting this evaluation, the team will place its analysis in the context of a continuum of evaluative work carried out by the Bank and IDEV in recent years to accompany the transformation the institution has been going through. Several recent IDEV evaluation studies are relevant to the SES evaluation, as they allow it to be carried out as part of a logical sequence of validated analysis aimed at improving the Bank's capacity to deliver a renewed strategic and operational framework. In particular, these studies have examined the relevance, efficiency, effectiveness and institutionalization of the Bank's quality assurance processes through the project cycle at entry, supervision and exit (2013-2017); country strategy and program evaluations; the Integrated Safeguards System; the independent audit of Bank results monitoring and reporting (RMR); the preparation of PCR Evaluation Notes for all projects closed in 2017; and the Comprehensive Evaluation of the Development Results of the African Development Bank Group (2017).
This evaluation will not duplicate efforts but will build on this already-validated base of evidence, focusing more specifically on the performance of the SES itself and tailoring the methodology accordingly. To the extent possible it will build on the relevant data and evidence already collected, while filling gaps as needed vis-à-vis new data requirements specific to the SES. More specifically, this evaluation will pair with the Quality of Supervision and Exit evaluation (QoS) and with the independent preparation of the 2017 PCRENs. While the QoS looked at the various components of project supervision, including from the Borrower's perspective, the SES evaluation will focus on the internal processes, instruments and mechanisms put in place to monitor, report and learn about project and sectoral performance and results. Whereas the quality of supervision depends in good part on the performance of country governments, partners, local stakeholders and others, the SES is entirely under the Bank's control, as is the Bank's capacity to adopt and implement recommendations. The PCRENs prepared in 2018 represent the end result of applying the SES across all closed projects and a very valuable pool to draw from for validation purposes.
The evaluation will follow a mixed methods approach and rely on diverse
methodological approaches targeted to answer particular evaluation questions. Data
collection methods will match particular questions and multiple sources will be used
to triangulate information. A range of information sources will be used.
Meta-analysis: This will comprise literature and desk reviews of evaluations of self-evaluation systems conducted by other MDBs, during which the team will assess the experiences of organizations with similar self-evaluation systems and examine how these institutions assess their self-evaluation systems with respect to the three pillars of performance management, accountability and learning (individual and corporate). It will also examine common issues across the MDBs, including factors affecting the three pillars.
Benchmarking: This will be conducted in parallel with the meta-analysis and
compare various components of the self-evaluation system of comparator/sister
organizations, including other MDBs. It will complement the meta-evaluation and
will include the key elements of each self-evaluation system, coverage, processes, and
discussions with key stakeholders. It will also cull lessons of experience and good practice. For the purpose of this analysis we will examine the experience of the World Bank Group, IFAD and the Asian Development Bank.
Case Studies: The evaluation will analyse:
1. CSPs (inception to mid-term review and CSP-CR) and CPPRs for two countries. It will examine the coherence of the SES with respect to: i) the clarity of strategic objectives and the underlying theory of change/results framework, policy dialogue, the risk assessment process, and consultation with RMCs/partners on strategic issues including cross-cutting issues; ii) the clarity of results monitoring indicators and the M&E system (data collection, data analysis and use), frequency and timeliness; iii) adequate production, relevance and use (learning) of the CSP-MTR and CSP-CR; and iv) the use of self-evaluation results in IDEV's independent evaluation of the CSP. The case study countries are proposed to be selected from Cote d'Ivoire, Kenya, Morocco and Tunisia.
2. A purposive sample of closed projects or PBOs (roughly 15) drawn from the pool of projects reviewed by IDEV in 2018 for which a PCREN was independently prepared and the PCR validated (see the illustrative sampling sketch after this list). The sample will include a small sub-set of private sector operations, as well as projects in the countries selected for the CSP/CPPR case studies.
The projects will be followed through their life cycle to evaluate the consistency and sequencing of the various reports and actions. The purpose
would be to review the coherence of the SES with respect to: i) efforts for
corrective actions taken between project effectiveness and closing, ii)
conformity in the application of existing reporting requirements, quality of the
results framework, theory of change and underlying assumptions, iii) clarity
of results monitoring indicators and M&E system (data collection, data
analysis and use), iv) quality production and relevance of project readiness
review, IPR, MTR and PCR and use (for learning), v) review of results
aggregation at sector/country level and use in the RMF, vi) consultation with
stakeholders for the production of country portfolio performance review
(mutual accountability and learning), and vii) use of IPR, PCR data and analysis
in synthesis reviews and validation of PCR (IDEV).
3. The East Africa RISP will be used to analyse the application and effectiveness of the SES for a strategy paper associated with a multi-country environment. The analysis will mostly look at evaluation at exit and at learning opportunities for the next RISP and for RISPs in other sub-regions.
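
As a purely illustrative aid for item 2 above, the Python sketch below shows one way a purposive sample of roughly 15 projects could be drawn so that private sector operations and the case-study countries are guaranteed coverage. The pool, field names, quotas and country choices are hypothetical assumptions, not the team's actual data or selection rules.

```python
# Illustrative sketch: drawing a purposive sample with guaranteed coverage of
# private sector operations and case-study countries. All data are invented.
import random

random.seed(42)  # reproducible selection, for transparency

# Hypothetical pool of PCREN-validated closed projects
pool = [
    {"id": f"P{i:03d}",
     "country": random.choice(["Cote d'Ivoire", "Kenya", "Morocco", "Tunisia", "Other"]),
     "private_sector": random.random() < 0.2}
    for i in range(120)
]

case_study_countries = {"Cote d'Ivoire", "Kenya"}  # assumed CSP/CPPR countries

def purposive_sample(pool, size=15, min_private=3, min_case_country=4):
    """Guarantee private-sector and case-study-country quotas, then fill randomly."""
    private = [p for p in pool if p["private_sector"]][:min_private]
    in_country = [p for p in pool
                  if p["country"] in case_study_countries and p not in private][:min_case_country]
    chosen = private + in_country
    remainder = [p for p in pool if p not in chosen]
    chosen += random.sample(remainder, size - len(chosen))
    return chosen

sample = purposive_sample(pool)
print(len(sample), sum(p["private_sector"] for p in sample))  # 15, at least 3
```

The design point the sketch illustrates is that a purposive sample is stratified first (to secure the sub-sets the evaluation must cover) and only then topped up at random, rather than drawn in one random pass.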

Case studies approach. The approach taken is consistent with the evaluation's intention to look at the SES as a whole, across the board, and at how it performs vis-à-vis the three main outcomes identified in the Theory of Change: i) performance management, ii) accountability, and iii) learning.

 Under performance management the evaluation will examine efforts made (or not made) to: i) provide relevant and accurate information on project performance, and ii) use the SES to take corrective measures when needed to improve quality at exit. The availability of a PCREN for each of these projects will provide a solid benchmark against which to compare SES performance during the life of the project/program. The evaluation will seek to better understand the correlation between the availability of quality SES products and quality at exit.
 Under accountability the evaluation will assess the extent to which the Bank complies with its own policy, operational guidance and processes, and whether accountability mechanisms are enacted in line with roles and responsibilities at the various levels. This implies a review at two levels to assess: i) whether the degree of accountability generated by the SES is conducive to improved results, and ii) the extent to which accountability at the corporate level, generated by the SES through the RMF and other aggregated instruments, provides relevant and credible information that is well integrated and connected with portfolio and strategic performance. To that end, the capacity of the RMF to distil the main outcomes and lessons of the SES for external reporting will be analysed to understand how it links with the portfolio and how it connects with the various project reports.
 Under learning the evaluation will examine available documents for follow-up projects under preparation, drawn from the pool of the 2017 PCRENs. This analysis will make it possible to verify the use being made of the SES and its learning potential in the preparation of project documents and knowledge products.
Case study interviews. The case studies will be supplemented with interviews with the task managers and managers responsible for the selected projects, the two CSPs and the RISP. The questions will take the form of a conversation about the specific product the interviewee was responsible for, along the lines below.
Table 3: Illustrative areas of enquiry for the case study interviews with Task Managers

Performance:
 Based on your experience, what were some of the main lessons derived from the application of the SES guidelines, particularly with respect to the production and use of the mandated documents and ratings?
 Existing incentives or disincentives around candour, objectivity and relevance that contributed to behavioural changes within the specific projects you were involved in.
 Main reasons for any disconnect between IDEV validation (and the QoS report) and the findings brought forth by the SES application in your projects.

Accountability:
 The nature of leadership signals during the period of implementation of the projects you managed, and the degree of accountability exerted by management and by the Board (through the RMF).
 Within the context of your specific projects, to what extent was the application of the SES and portfolio performance discussed with management or at the level of staff performance evaluation?
 Issues around compliance checking vs. proactive follow-up on problems identified during supervision, and how this affects staff capacity to call on the right expertise and to support the borrower's implementation capacity and accountability.

Learning:
 The extent to which the main outcomes and lessons learned (PCRENs) were used for the preparation of a follow-up project, portfolio reviews or thematic documents.
 The nature of the incentives that could be put in place for more effective use of the SES mechanisms for future projects.
 Would the use of more independent third-party interventions help improve the credibility of the system and the opportunities for learning?

Interviews: The key informant interviews (beyond those related to the case studies) will cover staff and managers as practitioners and resource persons knowledgeable about the SES. Executive Directors will also be included as key informants as part of the analysis of external accountability.
Table 4: Key Informant Interviews (case studies)

| | HQ | Regional Hubs | Total |
|---|---|---|---|
| Task Managers (sovereign) | 5 | 5 | 10 |
| Managers | 3 | 2 | 5 |
| Task Managers (private sector) | 3 | 2 | 5 |
| Task Managers, CSPs and RISP | 1 | 2 | 3 |
| TOTAL | 12 | 11 | 23 |

Interview questions: Given the proposed methodology, and the coverage already
provided to the same or similar questions in the QoS, the interviews can be relatively
limited and focused on the handling and performance of the SES from the perspective
of the staff and managers. Evidence from the QoS will be used to complement SES
data. The questions presented in Annexes 2 and 3 will be used as the underlying
thread for a conversation on that subject. The questions for staff and managers will
be the same, allowing the team to detect possible discrepancies and misalignments.
Questions for the EDs will be different.
Survey: A survey, to be conducted early in the evaluation, will focus on the
professional, mostly operational, staff who are directly or indirectly involved in the
production or utilization of the information from the self-evaluation system (see
Annex 4). The survey will seek to gather information on depth of knowledge of the
various components of the system, familiarity with issues related to compliance and
quality, understanding and use of learning tools (e.g. EVRD), incentives and other
drivers, and various accountability mechanisms. It will also gather information on
how specific priorities are being addressed such as gender, fragility, and safeguards.
Together with the informant interviews, the survey will provide a rich source of
information on the broad question of system architecture, performance and drivers.
A client survey is not proposed: the self-evaluation system is internal to the organization and directly affects the three pillars but only indirectly affects development outcomes, so the benefits of a client survey might not justify its cost, including the imposition on RMC officials. Annex 4 contains a proposed survey template.
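
To illustrate how the survey returns might be tabulated once collected, the short Python sketch below summarizes hypothetical Likert-scale responses for a single question. The question keys, the 5-point scale and the responses are invented for illustration; the actual instrument is the template in Annex 4.

```python
# Illustrative sketch: tabulating hypothetical Likert-scale survey responses.
from collections import Counter

# 1 = strongly disagree ... 5 = strongly agree (invented responses)
responses = [
    {"knows_evrd": 4, "ratings_candid": 2},
    {"knows_evrd": 5, "ratings_candid": 3},
    {"knows_evrd": 2, "ratings_candid": 2},
]

def tabulate(responses, question):
    """Frequency distribution and share agreeing (4 or 5) for one question."""
    counts = Counter(r[question] for r in responses)
    agree = sum(v for k, v in counts.items() if k >= 4) / len(responses)
    return counts, agree

counts, agree = tabulate(responses, "knows_evrd")
print(dict(counts), f"{agree:.0%} agree")  # {4: 1, 5: 1, 2: 1} 67% agree
```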
Data Analysis: The documents and data sources, highlighted above, will form the
basis for the generation of findings, and drawing of conclusions, lessons and
recommendations, which will be the basis for the preparation of the background
papers/reports, and the evaluation synthesis report. Emerging findings will be
shared with stakeholders for feedback.

VI. Phases, Deliverables and Workplan/timeline
Phases of the Evaluation: The Evaluation process will comprise the following
phases:
 Inception: The inception phase is of vital importance. It will include (i) obtaining Bank documents and data; (ii) a broad document review – including Bank policy and guidance, the Operations Manual, relevant evaluations1, and guidance notes for the preparation of self-evaluation reports; (iii) a review of Bank processes for preparing the various self-evaluation reports, including validation and internal/external reporting; (iv) examples of sovereign and non-sovereign project documentation; (v) scoping interviews/focus groups with key informants among Bank staff and management; (vi) reconstruction of the theory of change, including elaboration of the evaluation matrix and mapping of stakeholders; (vii) a rapid assessment of available data; (viii) development of the survey and interview instruments; and (ix) an evaluability assessment.

 Meta-analysis and Benchmarking: These will begin in parallel with the inception phase.
 Data collection: This phase will comprise the staff survey, interviews with
key informants and case studies supplemented with interviews of task
managers and managers responsible for selected projects and other outputs
covered in the case studies.
 Data analysis: This phase will concern all the data sources, highlighted above,
allowing for the generation of findings, and drawing of conclusions, lessons
and recommendations, which will be the basis for the preparation of the
background papers/reports, and the evaluation synthesis report. Emerging
findings will be shared with stakeholders for feedback.
 Synthesis and Report preparation: This phase will entail: synthesizing the
analysis and the messages from the background papers/reports to prepare the
draft evaluation synthesis report; and presenting the evaluation findings to
the Evaluation Reference Group (ERG) and other stakeholders for feedback.
 Production and delivery of the final evaluation report: In this phase, the
Evaluation report will be finalized to take account of the feedback as the basis
for dissemination and follow up.
 Communication and dissemination of evaluation results: This would be
the final phase.
This assignment will begin in February 2019 and last for a period of 5 months.

1 Especially Quality Assurance across the Project cycle, PCREN synthesis reports, CEDR and associated background documents, Mid-term evaluation of CSP, various cluster evaluations.
Deliverables: The Evaluation team will deliver three principal outputs, each first in
draft and then in final form after taking account of feedback from IDEV and, where
relevant, the Reference Group:
1. An Inception Report: The inception report will set out in greater depth the Evaluation team's understanding of the assignment as well as its approach to undertaking it; the plan will identify the sources of primary and
secondary data that will be used and the detailed schedule of work. It will
also include a framework for semi-structured interviews, focus-group
discussions, sources for content analysis of project performance data, and
institutional benchmarking. This draft is being provided to IDEV and will
be finalized to reflect IDEV feedback. The final inception report, reflecting
IDEV feedback, will provide the basis to proceed with the diagnostic study;
and

2. Background Reports: The team will prepare the following brief background papers/reports:
i. Inception Report
ii. Benchmarking and Meta-analysis
iii. Survey/Interview findings
iv. Case studies

3. The Evaluation report: The summary report will include a brief (2 page)
Executive Summary. See Annex 5 for a detailed outline of the report.

Excluding annexes, it is anticipated that the main report will be around 25 pages in length, with additional information provided as annexes or, where appropriate, provided separately for IDEV records; any data files will be provided in soft copy. The team will provide a draft for IDEV feedback/comments and for consultation within the Bank, and will revise the report in light of the feedback on the draft.
The deliverables are shown in Table 5 and the associated timeline in Table 6 below. The timeline assumes contracting and availability of the required documents by end-January.

Table 5: Deliverables

| Deliverable | Date (2019) |
|---|---|
| Draft Inception Report | February (week 1) |
| Final Inception Report | February (week 4) |
| Draft Background Reports | May (week 2) |
| Draft Evaluation Report | May (week 3) |
| Revised Background Reports | May (week 4) |
| Presentation of report to Reference Group | June (week 2) |
| Final Report | June (week 4) |

Table 6: Timeline

Tasks to be scheduled over tentative weekly start dates running from 14 January to 3 June 2019:
 Data collection and analysis
 Drafting Inception Report (first draft)
 Feedback on the Inception Report
 Revising Inception Report (final draft)
 Drafting Background Reports
 Feedback on the Background Reports
 Revising Background Reports (final draft)
 Drafting Evaluation Report
 Feedback from Reference Group
 Drafting Final Report

Note: The timeline assumes contracting and availability of documents by the end of January.

Annex 1: Terms of Reference
AFRICAN DEVELOPMENT BANK

TERMS OF REFERENCE

Consultancy Services to Conduct an Evaluation of the Self-evaluation Systems & Processes of the African Development Bank
I. Introduction
The Independent Development Evaluation Department (IDEV) of the African Development Bank Group (hereafter "the Bank") requires the services of a consultancy firm (hereafter, "consultant") familiar with International Financial Institutions' operations and with monitoring and evaluation systems and processes, to evaluate the Bank's self-evaluation systems and processes. The evaluation aims to assess the relevance, coherence, effectiveness, efficiency and contribution of the Bank's self-evaluation systems and processes. The assignment will be conducted under the general supervision of an IDEV Task Manager.

II. Context
The Bank has both independent evaluation and self-evaluation systems and processes, which are
mutually dependent. These systems and processes help the Bank to account for its investment
effort, to learn from its experiences, and to monitor and improve its performance. The mandate of
independent evaluation resides with IDEV, while that of self-evaluation rests with Bank
management including operational complexes.
The Bank's self-evaluation systems are defined in various Bank documents, including the Operations Manual (OM) and policies. They are multi-purpose, covering monitoring and judgement of development results, and accountability for and learning from development implementation and results. They concern various aspects of the Bank, including policies, strategies, programs, projects, processes and systems. They depend on multiple actors, guidelines, processes and tools, including the results measurement framework (RMF) at corporate, program and project levels, and the additionality and development outcomes assessment (ADOA) framework. The RMF and ADOA framework are core to the Bank's self-evaluation systems and processes.

In responding to changing contexts within and outside the Bank, the Bank's self-evaluation function has evolved over time. The Bank adopted its Operations Manual (OM) in 1993 and revised it in 1999 and again in 2014; revision of the 2014 OM will start in 2019. The Bank replaced its Medium Term Strategy (MTS) in 2013 with the Ten-Year Strategy (TYS), 2013-2022. Since adopting the TYS in 2013, the Bank has gone through major organizational restructuring and adjustments in 2014 and 2016, and changes in policies and in operational and institutional processes, including:
 The adoption in 2015 of the High 5 priorities within the context of the TYS, leading to the development of appropriate strategies for each of the High 5s.
 The development and adoption of the New Development and Business Delivery Model (DBDM) in support of the High 5s, comprising five major pillars.
 Creation of structures such as Delivery Accountability and Process Efficiency Committee
(DAPEC), and Technical Quality Assurance Committee (TQAC) for improving the
operational and institutional processes.

Certain aspects of the Bank's self-evaluation function have been the subject of evaluations/reviews. Some of these have been completed, and others are ongoing, including:
IDEV’s evaluations:
 Evaluation of the African Development Bank’s quality assurance across the project cycle
(2013-2017), September 2018. This evaluation covers project quality at entry, quality of
supervision, quality at exit, and environmental and social safeguards.
 Evaluation of the Integrated Safeguards System (ongoing)
 Independent evaluation of the Bank’s additionality and development outcomes assessment
(ADOA) framework (2014)
 Synthesis Report – Second Independent Assessment of Quality at Entry in Public Sector Operations (2013).
 Independent Evaluation of Quality at Entry for ADF-11 Operations and Strategies, African Development Bank Group (2010).
 Project Supervision at the African Development Bank (2001-2008) – An Independent
Evaluation (2009)
 Country strategy and program evaluations, which include aspects of self-evaluation
systems, and processes.

Other evaluations/reviews of the Bank include:
 The ongoing independent audit of Bank results monitoring and reporting (RMR), which
focuses on adequacy and compliance of Bank policies, procedures, organizational
arrangements, monitoring and reporting frameworks, and data management.
 An assessment of the Bank’s quality assurance tools (2018).
 The diagnostic study of AfDB’s practices to assure the quality at entry of public sector
operations (2018).
 Multiple Mid-term reviews, and completion reports at strategy, program and project levels.

III. Evaluation purpose, objectives, scope and questions

a) Purpose and objectives
The purpose of the evaluation is to support the Bank’s management and operational staff in:
 Improving self-evaluation, and performance management of operations, strategies, and
policies;
 Improving the relevance and quality of the Bank’s operations manual whose revision is to
start in 2019;
 Promoting learning from experience, and operational effectiveness;
 Enhancing the implementation of the New Development and Business Delivery Model (DBDM), and process engineering;
 Accounting to the Board of Directors and other stakeholders for the results of the investments in the Bank's self-evaluation systems and processes.

The evaluation will also build on and complement the:
 IDEV Evaluation of the African Development Bank's quality assurance across the project cycle (2013-2017), and Evaluation of the Integrated Safeguards System; and
 Independent audit of Bank results monitoring and reporting (RMR).

The evaluation’s intended users are primarily staff and Board members of the Bank. The inception
phase of the evaluation will clearly define the intended users and uses of the evaluation.
The objectives of the evaluation are to:
 Assess how well the Bank’s self-evaluation systems and processes performed, focusing on
their relevance, coherence, efficiency, effectiveness, and short-term impact;
 Assess the enablers and barriers that affected the design, implementation and results of the
Bank’s self-evaluation systems and processes;
 Distil lessons, good practices, and recommendations to enable the Bank to enhance the
quality and performance (design, scope, implementation and results) of its self-evaluation
systems and processes;

b) Scope and questions

The evaluation will focus on the Bank’s self-evaluation systems and processes during the period
2013-2018, which covers a substantial part of the implementation of the TYS, and recent
institutional reforms including the DBDM and process engineering. This period is also that of the
2014 OM.
The evaluation will only cover self-evaluation systems and processes for sovereign and non-sovereign development operations. It will not deal with personnel self-evaluation systems and processes. The evaluation will mainly use the OECD-DAC criteria of relevance, effectiveness and efficiency, focusing on results beyond outputs. It will also assess coherence within the Bank's self-evaluation systems and processes, and with the Bank's independent evaluation function. The evaluation will be forward looking, and its key issues include:
(i) the self-evaluation systems' stakeholders, and their perspectives and inter-relationships;
(ii) the components, processes and mechanisms of the self-evaluation;
(iii) the enabling environment for, and barriers to, self-evaluation;
(iv) appropriate data collection, analysis and storage tools and systems for self-evaluation;
(v) the use of self-evaluation outputs, especially for decision-making, learning, accountability and for improving development quality and effectiveness.

The key evaluation questions, presented below, are indicative. They will be refined and finalized during the inception phase of the evaluation.

Overarching question: How well have the Bank's self-evaluation systems and processes performed?

Relevance & coherence:
 To what extent are the self-evaluation systems and processes relevant and coherent?
 To what extent are the parts (e.g. structures, instruments, processes, methods, mechanisms) of the self-evaluation systems complete, relevant and credible?
 To what extent is the theory of change for the self-evaluation systems explicit, complete, embedded, relevant and credible?
 To what extent are the designs of the self-evaluation systems and processes adequate?
 To what extent are the self-evaluation systems and processes aligned with the Bank's key policies, strategies (High 5s; TYS), business model (DBDM) and guidance documents (OM)?
 To what extent are the self-evaluation systems internally coherent, and coherent with the Bank's independent evaluation function?
 How well have the self-evaluation systems and processes mainstreamed cross-cutting issues, e.g. gender/inclusivity, safeguards, fragility?

Efficiency:
 To what extent are the self-evaluation systems and processes efficient, effective and impacting development quality and organizational learning?
 How efficient are the self-evaluation systems and processes in producing the desired outputs and credible evidence/information? (Are the self-evaluation systems' results cost effective?)
 How timely are the self-evaluation systems and processes in producing the desired outputs and evidence/information?
 To what extent has the Bank deployed adequate resources, including human resources, in self-evaluation systems and processes? How efficiently were these resources used to achieve the planned results?
 To what extent has the Bank invested in good and/or innovative practices in self-evaluation processes, mechanisms, tools and methods?

Effectiveness:
 To what extent are the self-evaluation systems and processes producing relevant, reliable, timely and useful outputs?
 To what extent are the outputs of the self-evaluation systems and processes used for (i) informing decision-making (e.g. programming/improvement; policy, strategy and program/project design/implementation); (ii) accountability; (iii) learning?

Impact/contribution:
 What is the evidence of the contribution of the self-evaluation systems and processes to development quality (project, strategy, policy), outcomes, and learning?
 What contribution have the self-evaluation systems and processes made to corporate development effectiveness and organizational learning?

Enablers and barriers:
 What factors have enabled or constrained the design, implementation and results of the self-evaluation systems and processes?

Lessons and recommendations:
 What relevant lessons, good practices, and recommendations can be drawn for improving the quality and performance of the self-evaluation systems and processes?

IV. Methodology and processes
The IDEV evaluation policy and the Evaluation Cooperation Group's Big Book on Evaluation Good
Practice Standards 2 will guide this evaluation. The evaluation approach will require a
reconstruction of the implicit theory of change underlying the Bank's self-evaluation systems
and processes. This reconstructed theory of change will guide the refinement of the indicative
evaluation questions and the development of the evaluation's methodological framework. The
inception phase of the evaluation will clearly define and detail the most credible methodological
framework for responding to the evaluation questions. The methodological approach should use
mixed designs and methods. The data sources, the basis for the evaluation's streams of evidence,
should include but not be limited to the following:
 Desk review of relevant documents/reports and databases, including those of the Bank and
other MDBs, and the literature;
 Substantive interviews and discussions with key stakeholders within and outside the Bank;
 Survey (staff; RMC officials);
 In-depth case studies, based on appropriate sampling;
 Benchmarking with other MDBs and other appropriate agencies.

2 ECG Big Book on evaluation good practice standards, http://www.ecgnet.org/document/ecg-big-book-good-practice-standards. Both documents reflect the standard OECD-DAC development
evaluation criteria and quality standards.
| Page 26
The evaluation process will include the following phases:
 Inception phase to produce the inception report, which will include the full evaluation
methodology (including sampling, evaluation matrix, limitations, risks and mitigations, data
collection and analysis tools/instruments, rating scale and standards), evaluation team
composition and responsibilities for each of the individual evaluation team members. This
will involve, inter alia, desk reviews and discussions with key stakeholders, rapid
assessment of available data, reconstruction of the implicit theory of change, stakeholder
mapping, and preparation of the inception report.
 Meta-analysis phase covering: (i) evaluations of self-evaluation systems; and (ii) Bank
policies, strategies, programs/projects, and the management action record system (MARS).
This phase will overlap with the inception phase.
 Data collection and analyses for the generation of findings, and drawing of conclusions,
recommendations and lessons learned: This phase will concern all the data sources,
highlighted above. It will be the basis for the preparation of the background reports, and
the evaluation synthesis report. Emerging findings will be shared with stakeholders for
feedback.
 Synthesis, report writing and feedback leading to the draft evaluation synthesis report and
its presentation to the evaluation Reference Group (defined under the quality assurance
section below), and other stakeholders for feedback on the draft evaluation findings.
 Production and delivery of the final evaluation report in the appropriate format (in English)
for dissemination and follow up.
 Communication and dissemination of evaluation results.

Risks and mitigation actions: The evaluation risks and mitigation actions will be identified at the
inception phase by the evaluation team.
Available documents and databases include the following: Bank strategies, policies, project
databases, MARS, results measurement frameworks, annual reports, work programmes, progress
reports, self-evaluation guidelines and reports, budget reports, and relevant IDEV
evaluations/reviews.

V. Deliverables and timeline
The consultant will deliver the following outputs (in English):
 Inception report (draft and final).
 Background reports (benchmarking; meta-analysis; case studies; survey; interview
notes; …).
 Draft evaluation report and its presentation to the evaluation reference group, and for peer
review; the evaluation report will include an executive summary, background and context,
evaluation purpose, objectives and questions, key aspects of the methodological approach
and limitations, findings, conclusions, lessons and recommendations, and annexes (see
Annex 5 for further details).
 Final evaluation report including an executive summary of up to two pages and essential
annexes.

 Technical annexes including the methodology and its instruments and evidence.
 Electronic version of the data collected and the evidence set (analyzed data).

The evaluation will have an indicative input duration of 280 person-days over a period of five
months; its timeline is presented in the table below. The final evaluation report is scheduled to
be completed and delivered in June 2019.

Evaluation phases, deliverables and timeline

Phase/Output | Deadline | Responsibility
Contracting phase | December 2018 | IDEV
Inception phase | Week 1 February 2019 | Consultant & IDEV
 Draft inception report | | Consultant
 Comments on draft report | | IDEV
 Final inception report incorporating comments | | Consultant
 Approval of inception report | | IDEV
Data collection & analysis phase (including literature/document review, interviews, survey, meta-analysis, benchmarking, and case studies) | Week 3 April 2019 | Consultant
Reporting phase | | Consultant
 Draft & revised background reports | Week 1 June 2019 |
 Draft evaluation report | Week 2 May 2019 |
 Presentation of draft findings to RG for feedback | Week 3 May 2019 |
 Final report incorporating comments/suggestions | Week 1 June 2019 |
 Feedback on evaluation process | Week 1 June 2019 |
Communication & dissemination phase | From February 2019 | IDEV

VI. Profile of the Evaluation Team (qualifications, experiences and competencies)
Centennial Group International (the consultant) will undertake the evaluation using a
balanced team with demonstrated professional knowledge, skills and experience in:

 Evaluation/review/synthesis/benchmarking theories/practices
 Evaluation/monitoring and evaluation systems and processes in international
development
 International development work and issues especially within the contexts of Africa and
MDBs.
 How the MDBs work, and MDB activities including evaluation functions
 Evaluation report writing and presentation
 Fluency in English, and working knowledge of French; at least two of the evaluation team
members plus the research assistant are fluent in both English and French.
 Standard applications and analytical packages

VII. Management and Quality Assurance Arrangements
An IDEV Task Manager will be responsible for: (i) providing overall guidance to the consultant, and
approving the evaluation process and outputs (inception report; background reports; draft and
final evaluation reports); (ii) quality assurance processes, including the external peer review of the
key evaluation products and receiving comments from the Evaluation Reference Group (ERG);
(iii) recruiting the consultant; (iv) briefing the consultant; (v) establishing the Evaluation Reference
Group (ERG); (vi) receiving from the consultant all data and files (including raw data, coded data,
interview notes, and databases) that will be produced; (vii) communicating to the Bank's Management
and Board of Directors, and disseminating the final evaluation results to the key stakeholders; and
(viii) ensuring the payment of the consultant. IDEV will also recruit at least two competent and
experienced international experts (content-area; evaluation) for the external peer review of the
evaluation process and outputs.
The Evaluation Reference Group (ERG) will comprise selected Bank staff from the relevant
complexes/Departments/Units. The ERG will review and comment on the evaluation process and
outputs (inception report; evaluation reports), and provide a sounding board for rapid feedback,
especially on the evaluation plan (including design and methods) and emerging evaluation
findings.

VIII. Evaluation Budget
The evaluation budget will cover all expenses, including fees, travel and taxes. The
firm/consultant will provide a detailed budget with a breakdown by activities and key
milestones.

Annex 2: Interview Template – Executive Directors
The interviews/conversations with the Executive Directors will revolve around the following main
questions:

Executive Directors
Performance
 To what extent do the indicators provided to the Board (as an aggregation of the SES through
the RMF and the dashboard) provide sufficient evidence of the performance of the portfolio
and the achievement of results?
 What is the degree of satisfaction with the RMF and overall corporate reporting? Is the RMF
a negotiated reporting template reflecting the requests and wishes of the shareholders, or
does it derive from the aggregation of the Bank's operational evidence and results as
normally supported and collected by the SES?

Accountability
 Do the SES and its aggregated reporting tools (RMF, etc.) provide the Board with the
required comfort that the Bank is holding itself accountable for the achievement of results?
 Could a focus on accountability work against the capacity of the SES to be an accurate and
trusted tool for verifying and reporting the achievement of results and learning
opportunities?

Learning
 To what extent would a more direct and structured involvement of independent third parties
(including Board members) and specific thematic/country events increase the credibility of
the system and the learning opportunities?
 As far as the Board is concerned, what kind of incentives could Management put in place for
the SES to be used more strategically to fill knowledge gaps and support lesson learning?

Annex 3: Interview Template – Managers and Staff (HQ and Regional hubs)
Interviews (beyond those for the case studies) would cover the following main points:

Performance

 Is the implementation of the SES, through its various reporting instruments, generally in
compliance with the main guidance documents? What are the main impediments to a more
cost-efficient design and implementation of the SES?
 Do the provided ratings reflect a candid, timely and accurate approach to performance
management for the various instruments? What is the general perception about accurate
reporting on special priorities (gender, fragility, safeguards)? What are the limitations,
and how can its use be improved?
 To what extent is the SES being used as a tool for analyzing possible impediments and
taking corrective action, versus a compliance mechanism against a checklist of requirements?

Accountability

 Is Management exerting leadership and enforcement over the correct implementation of the
SES and over the lines of accountability?
 To what extent are project performance and staff performance perceived to be related, and
is the current incentive structure conducive to candour?
 To what extent do you see the SES as a relevant and trusted accountability tool? What are
its strengths and weaknesses relative to the inputs (M&E, quality at entry, budget,
processes, transaction costs) and to the outputs (IPRs, MTRs, PCRs, etc.)?

Learning

 To what extent do the lending pressure, the focus on corporate results reporting for
accountability, and the ratings system contribute to weakening staff attention to learning?
 What kind of incentives could change behaviours in terms of better documenting and using
learning evidence, best practices, lessons learned, ratings follow-up, and innovation?
 To what extent would a more direct involvement of independent third parties and specific
thematic/country events increase the credibility of the system and the learning opportunities?

Annex 4: Survey Template
The survey will be designed in SurveyMonkey (or equivalent) software, with a rating scale from
1 to 6. The number of questions will be kept to a minimum, given that response rates typically
improve with simplicity and a short completion time (estimated at 8-10 minutes). The survey will
be organized along the three main outcomes of the ToC (Performance, Accountability, and Learning)
and is in line with the thrust and the main questions of the evaluation matrix. The survey will
also elicit open-ended comments on more generic issues, allowing the examination of underlying
criteria such as Relevance & Coherence, Cost-efficiency, Effectiveness & Impact/Contribution,
and Incentives.

Performance

 Is the application of the SES, and the production of the different reports and documents,
aligned with the Bank's strategy and the guidance provided?
 To what extent do the SES and its instruments provide a reliable, timely and cost-effective
framework for portfolio management?
 Do the SES contribute to improved quality at exit (project closing)?
 Are the incentives in place conducive to candid assessments and to proactivity for
corrective action and portfolio performance?

Accountability

 Do the SES provide a reliable, timely and cost-effective framework for the verification of
results, reporting and accountability at: (i) staff/management level; (ii) corporate level?
 Is the degree of accountability generated by the SES conducive to improved results?
 In which direction are the incentives around the SES mostly influencing behavior:
(i) corrective and timely action for improving the quality of results;
(ii) fear for staff reputation and risk aversion;
(iii) little need for accountability if there is no enforcement;
(iv) compliance and box-ticking;
(v) avoiding a disconnect with IDEV validation;
(vi) corporate reporting (RMF);
(vii) learning?

Learning

 Are the SES being used as a reliable and relevant framework for learning and innovation?
 Is the money spent on the SES commensurate with the amount of learning and innovation that
it generates?
 Are the SES contributing to the identification and use of lessons learned for follow-up
projects, CSPs, RISPs and thematic reviews?

Open-ended questions

 What would be the main incentives, in the use of the SES, that would influence staff behavior
toward better compliance with reporting requirements, candid assessment, and the achievement of
results? Please elaborate.
 What are the main impediments to increasing the relevance of the SES, in its design and
implementation? Please elaborate.
 Do you think there are redundancies or gaps in the use of the SES? What would be your
recommendation for improving cost-efficiency? Please elaborate.
 How useful are the SES in addressing specific priorities such as gender, fragility, and
safeguards?
 How can issues around compliance-checking versus proactive follow-up on problems identified
during supervision be addressed, and how do they affect staff capacity to call on the right
expertise to support borrowers' implementation capacity?
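
To make the intended aggregation concrete, the illustrative sketch below (in Python) shows one way the closed-question ratings could be rolled up by ToC outcome area once responses are exported from the survey tool. The question identifiers, the question-to-area mapping, and the sample data are hypothetical placeholders introduced purely for illustration; they are not the final questionnaire, which will be built in SurveyMonkey (or equivalent) and finalized alongside the evaluation matrix.

# Illustrative sketch only: rolling up 1-6 survey ratings by ToC outcome area
# (Performance, Accountability, Learning). Question IDs, the area mapping and
# the sample responses below are hypothetical.
from statistics import mean

# Hypothetical mapping of each closed question to one outcome area.
QUESTION_AREAS = {
    "q1_alignment_with_guidance": "Performance",
    "q2_portfolio_management": "Performance",
    "q3_results_verification": "Accountability",
    "q4_incentive_direction": "Accountability",
    "q5_learning_framework": "Learning",
    "q6_lessons_reused": "Learning",
}

def aggregate_by_area(responses):
    """Average the 1-6 ratings per outcome area across all respondents.

    `responses` is a list of dicts mapping question IDs to ratings;
    skipped questions (None) are excluded rather than counted as zero.
    """
    ratings_by_area = {}
    for response in responses:
        for question, rating in response.items():
            if rating is None:
                continue  # respondent skipped this question
            area = QUESTION_AREAS[question]
            ratings_by_area.setdefault(area, []).append(rating)
    return {area: round(mean(r), 2) for area, r in ratings_by_area.items()}

# Two hypothetical respondents.
sample = [
    {"q1_alignment_with_guidance": 4, "q3_results_verification": 3,
     "q5_learning_framework": 2},
    {"q1_alignment_with_guidance": 5, "q3_results_verification": None,
     "q5_learning_framework": 4},
]
print(aggregate_by_area(sample))
# e.g. {'Performance': 4.5, 'Accountability': 3, 'Learning': 3}

The design point is simply that each closed question maps to exactly one outcome area, so the ratings can be reported directly against the ToC, while the open-ended answers are coded separately against the underlying criteria.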
Annex 5: Report Outline
Executive Summary
1. Background and Context
2. Purpose, Objectives and Scope
3. Methodology
i. Evaluation Methods
ii. Theory of Change
iii. Limitations
iv. Evaluation Questions
4. Architecture and cost of the Bank’s Self-Evaluation Systems vs those of comparators
5. Main SES characteristics and findings related to:
i. Performance Management
ii. Promoting Accountability
iii. Learning
6. Conclusions, Lessons and Recommendations
i. Findings
ii. Recommendations
Annexes
1. Terms of Reference
2. Evaluation Matrix
3. Comparator Review
4. Executive Directors Interview Results
5. Staff and Managers Interview Results
6. Case Studies
7. Survey Results