M&E Class Notes to Date
1.1 Introduction
Thank you for your interest in studying monitoring and evaluation of projects, an
indispensable management function. You can call it "M&E" for short. In this lecture
we will review a few background issues on projects that you covered in the unit LDP
604: Project Planning, Design and Implementation. This will give us a good foundation for
discussing project monitoring and evaluation.
1.2. Objectives
At the end of this lecture you should be able to:
1. Define a project
[Figure 1: Stages of the project cycle, including problem identification and project implementation]
Source: Ogula (2002). Monitoring and Evaluation of Educational Projects and Programmes.
Nairobi: New Kemit Publishers.
From the above demonstration of the stages in the project cycle, it is clear that monitoring and
evaluation form a key component. Figure 1 above implies that monitoring and evaluation are
required at all stages of the project cycle. For instance:
1). At the problem identification or project conceptualization stage, one needs to undertake a
project needs analysis in which data is collected and evaluated to identify the needs of the
communities; possible project ideas to satisfy the identified needs are also evaluated and closely
analyzed (filtered) to finally arrive at the intended projects.
3). The implementation stage involves rolling out the project activities. This calls for monitoring
to ensure that the activities are implemented as planned.
4). At the end of the project cycle, a terminal evaluation is done to determine the impact of the
whole project on the project beneficiaries.
Take Note
We can therefore conclude that monitoring and evaluation is a very
important component of project design and the project life cycle.
At this point, we need to examine the components of a project design and see how they all
hinge on monitoring and evaluation. It is important to note that a well designed project should
have a written document which is logical and complete.
Let us look at some of the components of a project design. The project document has the
following:
Statement of the project: Describe the areas that emerged during the needs assessment and that
the project seeks to address.
Project strategy: Explain clearly who the beneficiaries of your project are. Show the beneficial
changes to be brought about by the project. Indicate the partners/stakeholders involved and show
how the project will deliver its benefits to the intended group.
Goals/purpose/vision: This is the ultimate objective of the project. It is the long-term objective,
e.g. to ensure that every youth in Kwa Kavoo village is self-employed by 2015.
Objectives/mission: State the immediate achievement expected at the end of the project, e.g. at
the end of the project 400 youth from Kwa Kavoo village will have been trained on how to run
their own small businesses.
Outputs: Describe the products that would result from the project activities.
Activities: Show all the activities which will be undertaken to produce the desired outputs, e.g.
workshops, developing training manuals/modules.
Inputs: Give a full range of the resources needed (human, financial, technical, etc.) to carry out
the activities, in terms of costs.
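As a purely illustrative sketch (the project name, figures and field names below are hypothetical, not taken from these notes), the components of a project design document listed above could be captured in a simple structured record:

```python
from dataclasses import dataclass

@dataclass
class ProjectDesign:
    """Minimal record of the project-design components described above."""
    statement: str      # problem areas that emerged from the needs assessment
    strategy: str       # beneficiaries, partners/stakeholders and delivery approach
    goal: str           # long-term objective (vision)
    objectives: list    # immediate achievements expected at project end
    outputs: list       # products resulting from project activities
    activities: list    # actions undertaken to produce the outputs
    inputs: dict        # resources needed, with estimated costs

# Hypothetical example loosely based on the youth-employment illustration above
design = ProjectDesign(
    statement="High youth unemployment identified during the needs assessment",
    strategy="Train village youth in partnership with local stakeholders",
    goal="Every youth in the village is self-employed by 2015",
    objectives=["400 youth trained to run their own small businesses"],
    outputs=["Trained youth", "Training manual"],
    activities=["Workshops", "Developing training manuals/modules"],
    inputs={"trainers": 5, "budget_usd": 20000},
)
print(design.goal)
```

A record like this makes it easy to check that no component of the design document has been left blank before implementation begins.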
1.8 Summary
Whichever perspective you view a project from, its most notable
aspect is its ability to produce results that can be measured, and
thus effect a change from one state of being to a desired state.
This lecture provided a definition of a project and highlighted the
project management cycle with a view to demonstrating that
monitoring and evaluation is part and parcel of project design. The
lecture also elaborated on the components of the project design.
2.1 Introduction
After looking at the overview of projects, we will now focus on an in-depth understanding of
the major concepts of monitoring and evaluation, and social research.
CONTENT
Through routine data gathering, analysis and reporting, project monitoring aims at providing
project management staff and other stakeholders with information on whether progress is being
made towards achieving project objectives. In this regard, monitoring represents a continuous
assessment of project implementation in relation to project plans, resources, infrastructure and use
of services or products by project beneficiaries. Let us try to discuss the importance of project
monitoring.
1. Project managers and their stakeholders (including funding agencies) need to know the
extent to which their project activities are implemented as planned, meet the set
objectives, and lead to the desired effects.
2. Monitoring and to some extent evaluation builds greater transparency and accountability in
terms of use of project resources. All project stakeholders develop confidence in the
project when they know that resources are well spent on the planned project activities.
3. Information generated through the monitoring exercise provides project managers and staff
with a clearer basis for decision-making. Such decisions concern continuing or
discontinuing certain activities that may be expensive to implement and have little
impact on achieving project objectives.
4. Future project planning and development is improved when guided by lessons learned
from project experience. Documented results of previous monitoring activities may serve
as good lessons for future project implementation.
5. Monitoring allows the project manager to maintain control of the project by providing
him/her with information on the project status at all times.
6. Project monitoring alerts managers to actual and potential project weaknesses, problems
and shortcomings before it is too late. This provides managers with the opportunity to
make timely adjustments and corrective actions to improve on the program/project design,
work plan and implementation strategies. In short, monitoring activities must be
undertaken throughout the lifetime of the project.
Effective monitoring needs adequate planning; baseline data; reliable indicators of performance
and results; and practical implementation mechanisms that include actions such as field visits,
stakeholder meetings, documentation of project activities, and regular reporting. Project monitoring
is normally carried out by project management staff and other stakeholders.
1. First and foremost, project evaluation provides managers with information regarding
project performance. You will realize that, sometimes during project implementation,
project plans may change significantly. In this case evaluation may come in handy to
verify if the program is running as originally planned. In addition, evaluations provide
signs of project strengths and weaknesses and therefore enable managers to improve
future planning, delivery of services and decision making.
2. Project Evaluation assists project managers, staff and other stakeholders to determine in a
systematic and objective way the relevance, effectiveness and efficiency of activities
(expected and unexpected) in light of specific objectives.
3. Mid-term evaluations may serve as a means of validating the results of initial assessments
obtained from project monitoring activities.
4. If conducted after the termination of a project, an evaluation determines the extent to
which the interventions were successful in terms of their impact and sustainability of
results.
5. Evaluations assist managers to carry out a thorough review and rethinking about projects
in terms of their goals and objectives and means to achieve them.
6. Evaluation can be used to generate detailed information about project implementation
process and results. Such information can be used for public relations, fundraising, and
promotion of services in the community as well as identifying possibilities for project
replication.
7. Evaluation improves the learning process- Evaluation results should be documented to
help in explaining the causes and reasons why the project succeeded or failed. Such
documentation can help in making future project activities more relevant and effective.
All project stakeholders need a clear knowledge and understanding of monitoring and
evaluation. Knowledge of M&E helps project staff to improve their ability to effectively
monitor and evaluate the progress of their projects. It also enables them to strengthen the
performance of their projects, thus increasing the impact of the project results on beneficiaries.
With basic orientation and training in monitoring and evaluation, project staff can implement
appropriate techniques to carry out a useful evaluation of their projects.
Project staff with knowledge of monitoring and evaluation are also in a good position to vet
external evaluators' capacity to evaluate their projects: program/project evaluations carried out
by inexperienced persons might be time-consuming, costly, and could generate impractical or
irrelevant information.
Take Note
As in monitoring, evaluation activities must be planned at the
program/project level. Baseline data and appropriate indicators of
performance and results must be established.
Project strengths and weaknesses might not be interpreted fairly when
data and results are analyzed by project staff members that are
responsible for ensuring that the project is successful. It is preferred
therefore, that the management recruits an external evaluation consultant
to lead the evaluation process.
If the management does not have an expert to carry out the evaluation
and cannot afford to hire an external evaluator or prefers to use its own
resources in carrying out the evaluation, it is recommended that it
engages an experienced evaluation expert to advise on developing the
evaluation plan, selecting evaluation methods, and analyzing and
reporting results.
1. Through routine tracking of project progress, monitoring can provide quantitative and
qualitative data useful for designing and implementing project evaluation exercises.
2. Through the results of periodic evaluation monitoring tools and strategies can be refined
and further developed.
3. Good monitoring may substitute for evaluation in cases where:
- projects are short-term
- projects are small-scale
4. The main objective of monitoring is to obtain information that can be used to improve
the implementation of an ongoing project; however, when a final judgment regarding
project results, impact, sustainability and future development is needed, an evaluation
must be conducted.
5. Project evaluations are less frequent than monitoring activities, considering their cost
and the time they require.
It is important to understand that project monitoring differs from project evaluation in some
respects. The table below shows a summary of the differences between project monitoring
and project evaluation.
Source: UNICEF (1991). A UNICEF Guide for Monitoring and Evaluation: Making a Difference?
New York, p. 3.
Kusek and Rist (2004) identify other complementary roles of monitoring and evaluation:
[Table: complementary roles, with columns Monitoring and Evaluation]
We would like to introduce the concept of 'social research', which you know well from the
Research Methods unit you covered in your first year. The discussion of the concept of project
evaluation may have left you wondering how different the concept is from research. In this
section we will attempt to highlight the differences between evaluation and social research.
Activity
2. Using the definition that you wrote down, try to list at least three similarities and
differences between social research and evaluation.
Social research is an inquiry based on logic through observation and involves the
interaction between ideas and evidence. Ideas help social researchers make sense of evidence,
and such evidence is used to test, extend or revise existing knowledge or facts. Social research
thus attempts to create or validate theories through data collection and analysis, and its goals
are exploration, description, prediction, control and explanation.
Take Note
Research is a process that involves systematic collection, analysis and
interpretation of data with the purpose of describing, explaining,
predicting and controlling a phenomenon.
From the above description of social research we can note that research shares some aspects with
evaluation in that they both are concerned with generation of knowledge and are both aimed at
finding answers to significant inquiry questions. In addition, both employ scientific approaches
of inquiry which is systematic in nature. However, the two concepts differ to some extent as
shown below;
1. Evaluation findings are concerned with phenomena which are not generalized beyond
their application to a given project or program while research aims at generalizing
findings to the population
2. Research and evaluation are undertaken for different reasons. Research satisfies curiosity
by advancing knowledge while evaluation contributes to the solution of practical
problems through judging the value of whatever is evaluated
3. Research seeks conclusions while evaluation leads to decisions
4. Research is concerned with relationships among two or more variables while evaluation
describes the objects of evaluation
5. The researcher sets his own problems. Evaluations are normally commissioned by clients
6. Evaluation follows the set standards of Feasibility, Propriety, Accuracy and Utility,
while research does not.
Monitoring and evaluation also serve the following purposes:
- Contribute to organizational learning and knowledge sharing by reflecting upon and
sharing experiences and lessons.
- Uphold accountability and compliance by demonstrating whether or not our work has
been carried out as agreed, in compliance with established standards and with any
other stakeholder requirements.
- Provide opportunities for stakeholder feedback.
- Promote and celebrate project/program work by highlighting accomplishments and
achievements, building morale and contributing to resource mobilization.
- Support strategic management by providing information to inform the setting and
adjustment of objectives and strategies.
- Build the capacity, self-reliance and confidence of stakeholders, especially
beneficiaries, implementing staff and partners, to effectively initiate and implement
development initiatives.
2.0 Summary
This lecture provided a discussion of the concepts of monitoring and
evaluation. In these discussions, the need for undertaking project monitoring
and evaluation was discussed. The lecture also examined the relationships
and complementary roles played by monitoring and evaluation in projects.
Finally, the lecture concluded by focusing on the concept of social research.
The similarities and differences between social research and M&E were
examined.
Activity 1.5: Self-assessment questions
LECTURE THREE
In lecture two we discussed the concepts of monitoring and evaluation. In this section we are
going to discuss the concept of project evaluators, a concept closely related to what we
discussed in the previous lectures.
Activity
Let us now focus on what you wrote in attempting to answer the above question. It is clear that
for one to be called a project 'evaluator', he or she must be qualified and experienced in carrying
out monitoring and evaluation. We can therefore conclude that project evaluators are individuals
with skills, knowledge and hands-on experience in the theories and practices of monitoring
and evaluation. These individuals may be either within the project or outside it. In
general, there are two types of project evaluators: external evaluators, whom we commonly refer
to as consultants, and internal evaluators, those within the project. All these evaluators are at the
disposal of the project manager, who must determine what type of evaluator would be most
beneficial to the project. Let us now examine the possible options that a project manager can
explore in choosing and utilizing project evaluators:
1. External Evaluator
External evaluators are contracted from outside the project. These may include qualified and
experienced individuals, agencies or organizations with a credible track record in evaluation.
These evaluators are often found in universities, colleges, hospitals, consulting
firms, or within the home institution of the project. Because external evaluators maintain their
positions with their organizations, they generally have access to more resources than internal
evaluators (i.e., computer equipment, support staff, library materials, etc.). In addition, they may
have broader evaluation expertise than internal evaluators, particularly if they specialize in
project evaluation or have conducted extensive research on your target population. External
evaluators may also bring a different perspective to the evaluation because they are not directly
affiliated with your project. However, this lack of affiliation can be a drawback. External
evaluators are not staff members; they may be detached from the daily operations of the project,
and thus have limited knowledge of the project’s needs and goals, as well as limited access to
project activities.
2. Internal Evaluator
A project manager may opt to assign the responsibility for evaluation to one of the
staff members or to hire an evaluator to join the project as a staff member. This internal evaluator
could serve as both an evaluator and a staff member with other responsibilities. Because an
internal evaluator works within the project, he or she may be more familiar with the project and
its staff and community members, have access to organizational resources, and have more
opportunities for informal feedback with project stakeholders. However, an internal evaluator
may lack the outside perspective and technical skills of an external evaluator.
A final option combines the qualities of both evaluator types. An internal staff person conducts
the evaluation, and an external consultant assists with the technical aspects of the evaluation and
helps gather specialized information. With this combination, the evaluation can provide an
external viewpoint without losing the benefit of the internal evaluator’s first-hand knowledge of
the project. This may be an appropriate option but it may be too expensive.
Take note.
In most cases the project manager will draft the roles of a project evaluator depending on the
nature of the evaluation and the kind of information required. The roles will also be based on
the evaluator option that the project manager deems fit. For evaluators recruited as part of the
project staff, their roles may be defined by a job specification and description, while external
evaluators' roles may be specified by terms of reference (TORs).
Depending on the primary purpose of the evaluation and with whom the evaluator is working
most closely (funders vs. program staff vs. program participants or community members), an
evaluator might be considered a consultant for program improvement, a team member with
evaluation expertise, a collaborator, an evaluation facilitator, an advocate for a cause, or a
synthesizer. If the purpose of evaluation is to determine the worth or merit of a project, the
project manager may look for an evaluator with methodological expertise and experience. If the
evaluation is focused on facilitating project improvements, an evaluator who has a good
understanding of the project and is reflective may be suitable. If the primary goal of the
evaluation is to design new projects based on what works, an effective evaluator would need to
be a strong team player with analytical skills.
2. Project adequacy: Project adequacy means that the project objectives, inputs or activities
are enough for the purpose intended.
3. Project relevance: Relevance is related to how the project's objectives and activities
respond to the needs of the intended beneficiaries.
4. Validity of the project design: validity of project design assesses the extent to which the
project design;
i. Sets out clear immediate objectives and indicators of their achievement,
ii. Focuses on the identified problems and needs and clearly spells out the strategies to
be followed for solving the problems and meeting identified needs,
iii. Describes the main inputs, outputs and activities needed to achieve the objectives.
5. Project effectiveness: Effectiveness refers to the extent to which a project produces the
desired result. Effectiveness measures the degree of attainment of the pre-determined
objectives of the project. A project is effective if its results are worthwhile.
6. Project efficiency: this is an expression of the extent to which the methods used by the
project, or activities are the best in terms of their cost, resources used, time required and
appropriateness of the task. It examines whether there was an adequate justification for the
resource used and identifies alternative strategies to achieve better results with the same
inputs.
7. Project impact: Measurement of impact is concerned with determining the overall effect
of a project's activities in terms of the socio-economic and other aspects of the community.
8. Project cost-effectiveness analysis: This refers to the evaluation of alternatives according
to both their costs and their effects with regard to producing an outcome or a set of
outcomes.
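The arithmetic behind cost-effectiveness analysis can be sketched as follows. The two delivery alternatives and all figures are hypothetical, used only to show how cost per unit of outcome is compared:

```python
# Cost-effectiveness: cost divided by (non-monetary) units of outcome achieved.
# Hypothetical alternatives for delivering the same training outcome.
alternatives = {
    "residential workshops": {"cost": 50000, "youth_trained": 200},
    "mobile outreach": {"cost": 30000, "youth_trained": 150},
}

for name, a in alternatives.items():
    ratio = a["cost"] / a["youth_trained"]
    print(f"{name}: {ratio:.0f} per youth trained")

# Other things being equal, the alternative with the lowest cost per unit
# of outcome is the most cost-effective.
best = min(alternatives,
           key=lambda n: alternatives[n]["cost"] / alternatives[n]["youth_trained"])
print("Most cost-effective:", best)
```

Note that, unlike cost-benefit analysis below, the outcome here (youth trained) is never converted into money; only the costs are monetized.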
9. Project sustainability: Sustainability examines the extent to which the project's strategies
and activities are likely to continue to be implemented after the termination of the project
and the withdrawal of external assistance.
10. Project unintended outcomes: Unintended outcomes are unforeseen negative or positive
effects of a project, for example an adjacent community benefiting as a result of a project
implemented in the neighboring community.
11. Project alternative Strategies: Alternative strategies to solving the identified needs or
problems are analyzed and recommended for the next phase of the project, normally if the
original strategy is found inappropriate.
12. Project cost-benefit analysis: Cost-benefit analysis compares the financial costs of a
project to the financial benefits of that project. It is normally conducted on more than one
project.
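A minimal sketch of this comparison, with hypothetical candidate projects and figures: because cost-benefit analysis monetizes both sides, each project gets a benefit-cost ratio and candidates can be ranked against each other.

```python
# Cost-benefit: compare monetized benefits to monetized costs.
# Hypothetical candidate projects; a ratio above 1 means benefits exceed costs.
projects = {
    "borehole": {"benefits": 120000, "costs": 80000},
    "health centre": {"benefits": 90000, "costs": 100000},
}

for name, p in projects.items():
    bcr = p["benefits"] / p["costs"]
    print(f"{name}: benefit-cost ratio = {bcr:.2f}")

# Rank candidates from highest to lowest benefit-cost ratio.
ranked = sorted(projects,
                key=lambda n: projects[n]["benefits"] / projects[n]["costs"],
                reverse=True)
print("Ranking:", ranked)
```

In this hypothetical comparison the borehole (ratio 1.50) would be preferred to the health centre (ratio 0.90), whose costs exceed its monetized benefits.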
Activity
1. Now close your textbook and try to list some of the aspects of projects that you need
to focus on when assessing projects.
3. Open the textbook and try to compare what you have done with what is in the
textbook.
- Ensure that project inputs are available and utilized in the right way as planned
The activities for monitoring and evaluation at this level include:
- Identify the community's needs
- Organize the needs in order of priority
- Develop projects to address those priority areas
- Identify teams and their roles to spearhead the projects
- Design work plans and their performance standards
- Compare what is happening with what was planned, to determine whether the project is
on schedule
- Involve the local community to ascertain the quality of the projects
The monitoring teams should ensure that they make frequent visits to the project sites to observe,
and discuss with everyone involved in the projects. This should be captured in field visit reports.
This information can be utilized to improve the implementation of the project or stored for future
use.
2. Monitoring and evaluation at District and Local Authority level
The monitoring and evaluation team should get information from the teams at local levels. It is
important for the team to monitor and evaluate the outcome of the project. They should also
monitor and evaluate the increase in strength, capacity and power of the target community to
stimulate its own development. With the above example, the team should be able to establish
whether the community will be able to maintain and manage the health centers even when
the donor funding is withdrawn.
The objectives of monitoring and evaluation at this level include:
- Supporting the improvement in project performance
- Measuring the applicability of the way the project was designed in relation to community
strengthening
The methods used include routine monitoring and supervisory support by the district project
coordinator, community development assistants, other technical staff, and politicians.
The major issues to consider in routine monitoring include:
- Levels of actual community, local authority, district and donor contributions (in terms
of funds, materials, time and expertise)
- Timely implementation and quality of projects
- Appropriate use of, and accountability for, community and donor resources
- Levels of community involvement in the projects
- Timely use of information generated through community routine monitoring and
evaluation
3. Monitoring and evaluation at National and Donor level
At the national or country level, there are two main stakeholders:
a) The ministry or agency that is implementing the intervention or project. The government's
interest in projects is to ensure nationwide community development. Its interest
will be to ensure community participation in projects that cater for communities'
interests. The major involvement of the government agencies (e.g. the Ministry of
Agriculture) will be to ensure that the project evaluation methodology is well known to
the community. The evaluation will be concerned with the impact of the project on a
wider target group; this will involve the contribution of the agricultural project to the
economic development of the country as a whole.
b) Any external national or international donors. Their major concern is the effectiveness of
the projects, and their major focus is the percentage of output attained as a result of the
projects.
3.5 Summary
This lecture explored various levels of evaluation by first looking at the
concept of the evaluator and the types of evaluators that the project manager
can use for project evaluation. The lecture also provided insight into what
project managers should look at when selecting types of evaluators. Core
concerns of project monitoring and evaluation were also discussed. The
effects of projects of national concern can be assessed adequately at the
appropriate levels. Using relevant illustrations, this lecture discussed the
levels of project monitoring and demonstrated how project monitoring and
evaluation activities differ at each level.
LECTURE FOUR
TYPES OF MONITORING AND EVALUATION
4.1 Introduction
In lecture three we discussed the concept of evaluator, core concerns of evaluation and
various levels of evaluation. We also established that monitoring and evaluation varies with
different levels of evaluation; however, the levels complement each other. In this lecture we
shall examine in detail the various types of monitoring and evaluation.
4.2 Lecture objectives
At the end of this lecture you should be able to:
1. Explain types of project monitoring
2. Describe types of project evaluations
2.10 Further reading
Worthen, B.R., Sanders, J.R., & Fitzpatrick, J.L. (1997). Programme Evaluation: Alternative
Approaches and Practical Guidance (2nd ed.). New York: Longman Inc.

You will recall that in lecture one, we learned that the main components of a project design
include the project purpose, which is the ultimate objective of the project; project objectives,
which state the immediate achievement at the end of the project; project outputs, which describe
the kind of products produced by the project; project activities, which show all actions that will
be undertaken to produce the desired outputs; and project inputs, which give a full range of the
resources needed (human, financial, technical, etc.) to carry out the project activities. During the
implementation of the project, all these aspects of the project must be monitored closely. Figure
4.1 shows the types of monitoring that a project manager can employ in monitoring the above
mentioned components.
Figure 4.1 Key Types of Monitoring
[Diagram: a results chain running from inputs through activities and outputs to outcomes and
impact. Inputs, activities and outputs fall under implementation monitoring; outcomes and
impact fall under results monitoring.]
Activity
Figure 4.1 shows two main types of monitoring: implementation monitoring and results
monitoring. Let us examine each of these types of monitoring.
1. Implementation Monitoring: This is concerned with tracking the means and strategies
used in project implementation. It involves ensuring that the right inputs and activities are
used to generate outputs and that the work plans are being complied with in order to
achieve a given outcome. Implementation monitoring as the name suggests is the type of
monitoring carried out during the roll-out of project plans. Figure 4.1 clearly shows that
the main concerns of implementation monitoring are the inputs, activities and outputs. It
involves determining both the amount of activity and compliance with the plan's
standards. The question regarding amounts of activities is addressed for the entire project
rather than for an individual activity. The other concern of project managers is whether
planned inputs are utilized for their intended purpose. This sort of monitoring is
normally done annually to determine if the planned projects and activities are completed
on time, and that information is then used to better interpret the 'effectiveness of the
projects' when monitoring results.
2. Results Monitoring: This looks at the overall goal/impact of the project and its impacts
on society. It is broad based monitoring and aligns activities, processes, inputs and
outputs to outcomes and benefits. Ideally, all monitoring should be results-based. Now, let
us focus on the second type of monitoring, otherwise known as results monitoring. When
you look at figure 4.1 carefully, you will notice that the monitoring for results is a stage
higher than implementation monitoring. It defines the expected results in terms of project
outcome and project impact. A single project activity may be divided into different
milestones. The milestones can be referred to as segments of an overall result.
Monitoring for results therefore means that the project manager's major concern is
whether the project has attained the milestones that lead to the overall results.
3. Activity-based monitoring: This focuses on activities. Activity-based monitoring
seeks to ascertain that the activities are being implemented on schedule and within
budget. The main shortcoming of this type of monitoring is that activities are not aligned
to outcomes, which makes it difficult to understand how the implementation of these
activities results in improved performance.
4. Process (activity) monitoring: Tracks the use of inputs and resources, the progress of
activities, how activities are delivered (the efficiency in time and resources), and the
delivery of outputs.
5. Compliance monitoring: Ensures compliance with, say, donor regulations and expected
results, grant and contract requirements, local governmental regulations and laws, and
ethical standards.
6. Context (situation) monitoring: Tracks the setting in which the project/programme
operates, especially as it affects identified risks and assumptions, and any unexpected
considerations that may arise, including the larger political, institutional, funding, and
policy context that affect the project/programme.
7. Beneficiary monitoring: Tracks beneficiary perceptions of a project/programme. It
includes beneficiary satisfaction or complaints with the project/programme, including
their participation, treatment, access to resources and their overall experience of change.
8. Financial monitoring: Accounts for costs by input and activity within predefined
categories of expenditure, to ensure implementation is according to the budget and time
frame.
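The comparison at the heart of financial monitoring can be sketched like this; the expenditure categories and amounts below are hypothetical:

```python
# Financial monitoring: compare actual spend against budget per category
# and flag variances that need management attention.
budget = {"training": 10000, "transport": 3000, "materials": 5000}
actual = {"training": 9500, "transport": 4200, "materials": 4800}

for category, planned in budget.items():
    spent = actual[category]
    variance = spent - planned
    status = "OVERSPENT" if variance > 0 else "within budget"
    print(f"{category}: planned {planned}, actual {spent} ({status})")

# Categories where actual spend exceeds the budget call for timely
# corrective action, as discussed under the importance of monitoring.
overspent = [c for c in budget if actual[c] > budget[c]]
print("Categories needing corrective action:", overspent)
```

In practice this comparison would be run at each reporting period against the project's approved budget lines, not just once.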
9. Organizational monitoring: Tracks the sustainability, institutional development and
capacity building in the project/programme and with its partners.
Take Note
Take an example of a borehole project whose general purpose is to
provide clean and safe drinking water to the community. The milestones
can be considered as:
Securing funds for the project
Sensitizing the community
Putting together a steering management committee
Procuring the consultancy for the project
The actual sinking of the borehole
Commissioning of the borehole
All the above milestones lead to the final result, which is a complete
borehole that can provide clean and safe water to the community. These
milestones are arranged in order of priority leading towards the overall
results. Achieving the first milestone leads to the achievement of the
second, and the achievement of each milestone gives us assurance of
achieving the overall results.
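To make the idea of ordered milestones concrete, the borehole example above can be sketched as a simple progress check. This is only an illustrative sketch: the milestone names and the set of completed milestones are hypothetical, not taken from any real project record.

```python
# Ordered milestones for the hypothetical borehole project,
# listed in the sequence in which they must be achieved.
MILESTONES = [
    "Securing funds",
    "Sensitizing the community",
    "Forming a steering management committee",
    "Procuring the consultancy",
    "Sinking the borehole",
    "Commissioning the borehole",
]

def progress(completed):
    """Count how many consecutive milestones (from the first) have been
    achieved, and report the next milestone to pursue."""
    done = 0
    for name in MILESTONES:
        if name in completed:
            done += 1
        else:
            break  # milestones are sequential: stop at the first gap
    next_milestone = MILESTONES[done] if done < len(MILESTONES) else None
    return done, next_milestone

done, nxt = progress({"Securing funds", "Sensitizing the community"})
print(f"{done}/{len(MILESTONES)} milestones achieved; next: {nxt}")
```

Because achieving each milestone depends on the previous one, the sketch counts only the unbroken run of completed milestones from the start, which mirrors the sequential logic described in the note above.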
What are the objectives of the project?
Who are the intended beneficiaries and how are they to benefit?
What are the main intended inputs (financial, technical, manpower e.t.c)?
What are the main intended outputs?
How do the outputs relate to the objectives?
What is the implementation plan?
Have alternative methods of achieving the objectives been considered?
Take Note
The following areas should be addressed at this stage of evaluation:
Needs assessment – to determine who needs the project and how
great the need is
Evaluability assessment – to determine whether the evaluation is
feasible and how stakeholders can help to shape its usefulness
Project structure conceptualization – defines the project or
technology, the target population and possible outcomes
Project implementation evaluation – determines the fidelity of
the project or technology delivery
Process evaluation – investigates the processes required to deliver
the project, including alternative delivery procedures
2. Formative evaluation
Conducted during the implementation of the project, formative evaluation is used to
determine the efficiency and effectiveness of the implementation process, to improve
performance and to assess compliance. It provides information to improve processes and
learn lessons. Process evaluation investigates the process of delivering the programme or
technology, including alternative delivery procedures. Outcome evaluations investigate
whether the programme or technology caused demonstrable effects on specifically defined
target outcomes. Cost-effectiveness and cost-benefit analysis address questions of
efficiency by standardizing outcomes in terms of their dollar costs and values. Formative
evaluation is conducted during the development and implementation of a project in order to
provide project managers with information necessary for improving the project. This type
of evaluation is sometimes referred to as mid-term evaluation.
In general, formative evaluations are process oriented and involve a systematic collection of data
to assist decision-making during the planning or implementation stages of a project. They usually
focus on operational activities, but might also take a wider perspective and possibly give some
consideration to long term effects. While staff members directly responsible for the activity or
project are usually involved in planning and implementing formative evaluations, external
evaluators might also be engaged to bring new approaches or perspectives (Nadris, 2002).
Questions typically asked in those evaluations include:
1. To what extent do the activities and strategies correspond with those presented in the
plan? If they are not in harmony, why are there changes? Are the changes justified?
2. To what extent did the project follow the timeline presented in the work plan?
3. Are the activities carried out by the appropriate personnel?
Other issues addressed by formative evaluations include:
1. To what extent are project actual costs in line with initial budget allocation?
2. To what extent is the project moving towards the anticipated goals and objective of the
project?
3. Which of the activities or strategies are more effective in moving towards achieving the
goals and objectives?
4. What barriers were identified? How and to what extent were they dealt with?
5. What are the main strengths and weaknesses of the project?
6. To what extent are the beneficiaries of the project active in decision making and
implementation?
7. To what extent do project beneficiaries have access to services provided by the project?
What are the obstacles?
8. To what extent are the project beneficiaries satisfied with project services?
3. Ex-post evaluation: Conducted after the project is completed. Used to assess the
sustainability of project effects and impacts, and to identify factors of success to inform
other projects. It is conducted some time after implementation to assess long-term impact
and sustainability.
4. External evaluation: Initiated and controlled by the donor as part of a contractual
agreement. Conducted by independent people who are not involved in implementation,
though often guided by project staff.
5. Summative evaluation: Conducted at the end of project/programme implementation to
assess effectiveness and impact. Summative evaluation (also called outcome or impact
evaluation) addresses the first set of issues discussed above: it looks at what the project has
actually accomplished in terms of its stated goals. There are two approaches under this
type of evaluation.
1. End Evaluation that aims at establishing the project status at the end of the project cycle.
For example when external aid is terminated and there is need to identify the possible need
for follow- up activities either by donor or project staff.
2. Ex-post – these evaluations are carried out two to three years after external support is
withdrawn. The main purpose is to assess what lasting impact the project has had or is
likely to have and to extract lessons of experience. This type of evaluation is sometimes
referred to as impact evaluation.
Summative evaluation questions include:
To what extent did the project meet its overall goals and objectives?
What impact did the project have on the lives of the beneficiaries?
Was the project equally effective for all the beneficiaries?
What components were the most effective?
What significant unintended impacts did the project have?
Is the project replicable?
Is the project sustainable?
For each of these questions, both qualitative and quantitative data can be useful.
Take Note
The following areas should be addressed at this stage of evaluation:
Outcome evaluation – to investigate whether the programme or
technology caused demonstrable effects on specifically defined
target outcomes
Impact evaluation – to assess the overall or net effects, intended
or unintended, of the project or the technology as a whole
Cost-effectiveness and cost-benefit analysis – to address
questions of efficiency by standardizing outcomes in terms of
their dollar costs and values
Secondary analysis – to re-examine existing data to address new
questions or use methods not previously employed
Meta-analysis – to integrate the outcome estimates from multiple
studies to arrive at an overall or summary judgment on an
evaluation question
4.5 Summary
Using illustrations as shown in figure 4.1, the lecture introduced you to two
types of monitoring: implementation monitoring (inputs, activities, outputs)
and results monitoring (outcomes, impacts). Three types of evaluation,
namely ex-ante, formative and summative evaluation, were also discussed in
detail.
LECTURE FIVE
5.1 Introduction
In our previous lecture we learned that different evaluations can make different demands
depending on the core concerns of the evaluators. This has led different scholars to devise
different ways of approaching various evaluation activities. In this lecture we are going to
look at some of the evaluation models and approaches that have been employed over the
years in project evaluation.
5.2 Lecture objectives
At the end of this lecture you should be able to;
1. Differentiate between evaluation models and evaluation
theories
2. Outline at least five models employed in the evaluation of
projects
3. Form a structure for placing different evaluation needs in
terms of methodologies
4. Distinguish between different models used in project
evaluation
5. Outline at least five advantages and disadvantages of
various evaluation models
The term "model" is loosely used to refer to a conception, approach or sometimes even a
method (e.g., naturalistic, goal-free) of doing evaluation. 'Models' are to 'paradigms' as
'hypotheses' are to 'theories': less general, but with some overlap.
5.2.3.1 Behaviourism
Behaviourists believe in the stimulus-response pattern of conditioned behaviour. According
to the behaviourist theory of learning, "a child must perform and receive reinforcements
before being able to learn". Behaviourism is based on observable changes in behaviour. As
a learning theory it treats the mind as a 'black box', in the sense that responses to stimuli
can be observed quantitatively while the possibility of thought processes occurring in the
mind is ignored.
5.2.3.2 Cognition
The cognitive theory of learning is based on the thought process behind the behaviour:
“Cognitive theorists recognise that much learning involves associations established
through contiguity and repetition. They also acknowledge the importance of
reinforcement, although they stress its role in providing feedback about the correctness of
responses over its role as a motivator. However, even while accepting such
behaviouristic concepts, cognitive theorists view learning as involving the acquisition or
reorganization of the information.” (Good and Brophy, 1990, p. 187). After
understanding the differences between evaluation models and evaluation theories let us
try to discuss the various evaluation models that are commonly used in evaluations of
projects.
Take Note
The government plans to initiate road construction projects in highly
agriculturally productive areas of Kenya. The purpose of the project is to
improve the community's access to basic social services such as schools
and health services, and also to increase the community's access to a
ready market for their farm products. An objective-oriented evaluation
model will focus on the extent to which the project improved the
community's access to basic social services. The evaluation will also
seek to establish the extent to which the project increased the
community's access to a market for their products.
The objective-oriented approach was developed in the 1930s and is credited to the work of
Ralph Tyler. Tyler regarded evaluation as the process of determining the extent to which the
objectives of a project are actually attained. He proposed that to evaluate a project, one
must:
1. Establish broad goals or objectives of that project
2. Classify the goals or the objectives
3. Define those objectives in measurable terms
4. Find situations in which achievement of objectives can be shown
5. Develop or select measurement techniques
6. Collect performance data
7. Compare performance data with the objectives stated in measurable terms
These can be conceptualized in the model below:
Figure 5.1 Tyler's Model (diagram: project goals and objectives feed into specified
activities and performance standards; actual performance is compared with the specified
standards to reveal any discrepancy)
From this figure it is clear that the purpose of the objective-oriented model of evaluation is
to determine the extent to which the objectives of a project have been achieved; the
emphasis is on the specification of objectives and the measurement of outcomes. To
determine the discrepancy between the project's specified performance standards and its
actual performance, there is need to perform pre-tests and post-tests that establish the
extent to which the objectives have been achieved.
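The comparison step at the heart of Tyler's model can be sketched as a small calculation: actual performance is measured against a specified standard, and the discrepancy is flagged for attention. The objectives, standards and scores below are hypothetical illustrations, not drawn from any real evaluation.

```python
# Each objective pairs a performance standard (the target) with
# the actual performance measured after implementation.
objectives = {
    "households with safe water (%)": {"standard": 80, "actual": 65},
    "community members trained":      {"standard": 50, "actual": 55},
}

def discrepancies(objs):
    """Return the gap between actual performance and the standard for
    each objective; a negative value means the standard was missed."""
    return {name: v["actual"] - v["standard"] for name, v in objs.items()}

for name, gap in discrepancies(objectives).items():
    status = "met" if gap >= 0 else "not met"
    print(f"{name}: discrepancy {gap:+d} ({status})")
```

In practice the "actual" figures would come from the pre-test and post-test measurements the model calls for; the sketch only shows the congruence check between performance and objective.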
Advantages of objective – oriented model
1. It is easy to assess whether the project objectives are being achieved
2. The model checks the degree of congruency between performance and objective
3. The model focuses on clear definition of the objectives
4. It is easy to understand in terms of implementation
5. It produces relevant information to the project
Disadvantages of the model
1. It tends to focus on terminal rather than on-going programme performance
2. It has a tendency to focus directly and narrowly on objectives, with little attention to
the worth of the objectives
3. It neglects the value of the objectives themselves
4. It neglects the transactions that occur within the project being evaluated
5. It neglects the context in which the evaluation is taking place
6. It ignores important outcomes other than those covered by the objectives
7. It promotes a linear, inflexible approach to evaluation
8. There is a tendency to oversimplify the project and to focus on terminal rather than
on-going and pre-project information
9. It does not take unplanned outcomes into account, because it focuses on the stated
objectives.
10. It does not pay enough attention to process evaluation; in other words, it does not
consider how the activities that lead to achievement of project objectives are carried out.
5.4.2 Management Oriented Approaches
The management-oriented evaluation model is concerned with providing information that
can help project managers make crucial decisions about the project. The rationale of the
management-oriented evaluation approach is that evaluation data is an essential component
of good decision making. The management-oriented model of evaluation manifests itself in
various ways. Let us discuss some of these approaches.
5.4.2.1 The Context-Input-Process-Product evaluation model (CIPP)
The purpose of this model is to provide relevant information to decision makers for judging
decision alternatives. The proponent of this model is Daniel Stufflebeam, who argues that
evaluation should assume a cyclical approach whereby feedback is continuously provided
to the decision makers. The model highlights different levels of decision makers and how,
where and in what aspects of the project the results will be used for decision making. The
model assumes that the decision maker is an audience to whom management-oriented
evaluation is directed (Worthen et al., 1997). The model has various types of evaluation
that must be accomplished. Let us analyze each one of them.
1. Context Evaluation
Context evaluation is the most basic type of evaluation under the CIPP model. Its purpose
is to provide a rationale for the determination of objectives. Specifically, it defines the
relevant environment, identifies unmet needs and unused opportunities, and diagnoses the
problems that prevent needs from being met and opportunities from being used. Diagnosis
of the problems provides an essential basis for developing objectives whose achievement
results in project improvement.
2. Input evaluation
The purpose of input evaluation is to provide information for determining how to utilize
resources to meet project goals. This is accomplished by identifying and assessing the
relevant capabilities of the responsible agency, strategies for achieving project goals, and
designs for implementing a selected strategy. The end product of input evaluation is an
analysis of one or more procedural designs in terms of cost and benefit. Specifically,
alternative designs are assessed in terms of staffing, time and budget requirements,
potential procedural barriers, the consequences of not overcoming these barriers and the
possibilities and cost of overcoming them, the relevance of the design to the project
objectives, and the overall potential of the design to meet the objectives. Essentially, input
evaluation provides information to decide whether outside assistance is required to meet
the objectives.
3. Process Evaluation
with the objectives of the activity, comparing these measurements with predetermined
absolute or relative standards, and making rational interpretations of the outcomes using
the recorded context, input and process information.
Strengths of CIPP
1. It provides data to administrators and other decision makers on a regular basis.
2. It is sensitive to feedback.
3. It allows for evaluation to take place at any stage of the programme/project.
Limitations of CIPP
1. It lays little emphasis on value concerns.
2. Decision-making process is unclear.
3. Evaluation may be costly in terms of funds and time if this approach is widely used.
Strengths
1. It provides administrators and other decision makers with useful information.
2. It allows for evaluation to take place at any stage of the programme. It is holistic.
3. It stresses timely use of feedback by decision makers.
Limitations
1. It gives preference to top management.
2. The role of value in evaluation is unclear.
3. Description of decision-making process is incomplete.
4. It may be costly and complex.
5. It assumes that important decisions can be identified in advance.
5.4.2.3 Provus's Discrepancy Model:
Some aspects of this model are directed towards serving the information needs of project
managers. It is system oriented and focuses on input, process and output at each of five
stages of evaluation: project definition, project installation, project process, project
products, and cost-benefit analysis.
5.4.2.4 Utilization- focused evaluation:
This approach was developed by Patton (1986). He emphasized that the process of
identifying and organizing relevant decision makers and information users is the first step
in evaluation. In his view, the use of evaluation findings requires that decision makers
determine what information is needed by various people and arrange for that information to
be collected and provided to those people. He recommends that evaluators work closely
with primary intended users so that their needs will be met. This requires focusing on
stakeholders' key questions, issues and intended uses. It also requires involving intended
users in the interpretation of the findings, and then disseminating those findings so that
they can be used. One should also follow up on actual use. It is helpful to develop a
utilization plan and to outline what the evaluator and primary users must do to bring about
the use of the evaluation findings. Ultimately, evaluations should, according to Patton, be
judged by their utility and actual use.
5.4.2.5 System analysis approach:
This approach has been suggested to be linked to the management-oriented evaluation
model. However, most system analyses may not be evaluation oriented because of their
narrow research focus.
5.5 Expertise-Oriented Evaluation Approaches
The expertise oriented approaches to evaluation depend primarily on professional expertise to
judge an educational activity, programme or product. Some scholars regard evaluation as a
process of finding out the worth or merit of a programme. Stake (1975), for example, views
evaluation as being synonymous with professional judgments. These judgments are based on the
opinion of experts. According to these approaches, the evaluator examines the goals and
objectives of the programme and identifies the area of failures or successes.
5.6 Consumer-oriented evaluation approaches
Some theorists consider evaluation a consumer service. They stress that although the needs
of project funders and managers are important, they are often not the same as those of
consumers. The main proponent of this approach is Michael Scriven. A consumer-oriented evaluation
approach typically occurs when independent agencies, governmental agencies, and individuals
compile educational or other human services products information for the consumer. Such
products can include a range of materials including: curriculum packages, workshops,
instructional media, in-service training opportunities, staff evaluation forms or procedures, new
technology, and software. The consumer-oriented evaluation approach is increasingly being used
by agencies and individuals for consumer protection as marketing strategies are not always in the
best interest of the consumer. Consumer education typically involves using stringent evaluation
criteria and checklists to evaluate products.
Example of this model includes multiple “experts” otherwise known as blue-ribbon panel, where
multiple experts of different backgrounds argue the merits of some policy or project. Some
committees also operate, to some degree, along the lines of the judicial model. As one set of
authors put it, adversary evaluation has “a built-in metaevaluation” (Worthen and Sanders,
1999). A metaevaluation is simply an evaluation of an evaluation.
Having discussed the various models and approaches used in evaluation, you can clearly
trace and locate each model of evaluation in terms of its applicability in the evaluation of
projects.
5.7 Self Evaluation Questions
5.8 Further Reading
Patton, M.Q. (1997). Utilization-Focused Evaluation: The New Century Text.
Thousand Oaks, CA: Sage.
Phi Delta Kappa National Study Committee on Evaluation (1971). Educational
Evaluation and Decision Making. Itasca, IL: F.E. Peacock Publishers.
Worthen, B.R., Sanders, J.R. and Fitzpatrick, J.L. (1997). Programme Evaluation:
Alternative Approaches and Practical Guidance (2nd ed.). New York: Longman.
Fitzpatrick, J.L., Sanders, J.R., & Worthen, B.R. (2004). Program Evaluation:
Alternative Approaches and Practical Guidelines. New York: Pearson.
Guba, E.G., and Lincoln, Y.S. (1989). Fourth Generation Evaluation. London:
Sage Publications.
Madaus, G.F., Scriven, M., & Stufflebeam, D. (Eds.) (1983). Evaluation Models:
Viewpoints on Educational and Human Services Evaluation.
Boston: Kluwer-Nijhoff.
Scriven, M. (1974). Pros and Cons about Goal-Free Evaluation. In W.J. Popham
(Ed.), Evaluation in Education: Current Applications. Berkeley,
CA: McCutchan.
Stake, R.E., et al. (1975). Evaluating the Arts in Education: A Responsive
Approach. Columbus, OH: Charles E. Merrill.
Stufflebeam, D. (Ed.) (1971). Educational Evaluation and Decision-Making.
Itasca, IL: F.E. Peacock.
Tyler, R.W. (1950). Basic Principles of Curriculum and Instruction. Chicago:
University of Chicago Press.
Worthen, B.R. & Sanders, J.R. (1973). Educational Evaluation: Theory and
Practice. Belmont, CA: Wadsworth.
LECTURE SIX
6.1 Introduction
In the previous lecture we discussed monitoring and evaluation theories and models. In this
lecture we are going to discuss indicators for monitoring and evaluation. More specifically,
we will attempt to define the term 'indicator' and then examine various types of indicators.
The importance of indicators in monitoring and evaluation will also be discussed. We will
later examine the characteristics of good indicators, and the steps that a project manager
can follow in selecting SMART indicators for monitoring and evaluation.
6.2 Lecture objectives
By the end of this lecture you should be able to:
1. Define the term indicator
2. Explain the importance of indicators in monitoring and evaluation
3. Outline categories of indicators used in monitoring and evaluation
4. Explain types of indicators
5. Discuss the characteristics of good indicators
6. Describe steps in selecting SMART indicators
7. Describe the vertical and horizontal logic used in the logical framework
An indicator is a specific, observable and measurable characteristic that can be used to show
changes or progress a programme is making toward achieving a specific outcome. There should
be at least one indicator for each outcome. The indicator should be focused, clear and specific.
The change measured by the indicator should represent progress that the programme hopes to
make.
An indicator should be defined in precise, unambiguous terms that describe clearly and exactly
what is being measured. Where practical, the indicator should give a relatively good idea of the
data required and the population among whom the indicator is measured. Indicators do not
specify a particular level of achievement -- the words “improved”, “increased”, or “decreased”
do not belong in an indicator.
An indicator is a sign showing the progress of a situation. It is a basis for measuring
progress towards the objectives: a specific measure that, when tracked systematically over
time, indicates progress (or lack of it) toward a specific target. An indicator answers the
question: how will we know success when we see it? You can also think of indicators as
road signs that show whether you are on the right road, how far you have travelled, and
how far you still have to travel to reach your destination.
Take Note
In other words, project indicators can be viewed as benchmarks or milestones
that show progress towards project objectives
6.1 Activity
Imagine moving between two major towns in your country, say town A to
town B (use an example of towns that are familiar to you).
1 List down some of the landmarks and signs (indicators) that let you
know that you are heading towards town B.
2 What do you think is the importance of those indicators to other
travellers moving in the same direction as you?
3 Try to link your answers in question 2 with the discussion in section
6.3.2
6.3.2 Importance of project indicators in project monitoring and evaluation
Indicators play very important roles in project monitoring and evaluation. Let us now focus
on some of these roles.
1. Indicators measure progress in project inputs, activities, outputs, outcomes and goals
2. Indicators enable you to reduce a large amount of data to its simplest form (for
instance, a project to sink a borehole with the aim of improving a community's access
to safe drinking water may have its outcome indicator reduced to 'the percentage of
households in that community with safe drinking water')
3. When compared with targets or goals, indicators can signal the need for corrective
management action. For instance, if the borehole project was supposed to be completed
within one year (one year here serves as a time indicator) and it overruns that duration,
project managers need to make quick corrective decisions to bring the project back
within its completion time.
4. Indicators can evaluate the effectiveness of various project management actions
5. Indicators can provide evidence as to whether the objectives are being achieved
6. Indicators provide the qualitative and quantitative details to a set of objectives
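The second and third roles above (reducing raw data to a single figure, and signalling the need for corrective action against a target) can be sketched as follows. The survey counts and the one-year duration are hypothetical figures for the borehole example, not real data.

```python
# Role 2: reduce raw survey data to a single outcome indicator.
households_total = 1200           # hypothetical survey counts
households_with_safe_water = 780

pct_safe_water = 100 * households_with_safe_water / households_total
print(f"Outcome indicator: {pct_safe_water:.1f}% of households have safe water")

# Role 3: compare elapsed time against the time indicator (one year)
# and signal the need for corrective management action.
planned_duration_days = 365
elapsed_days = 420

if elapsed_days > planned_duration_days:
    print("Schedule overrun: corrective management action needed")
```

The point of the sketch is that a large household survey collapses into one percentage, and a single comparison against the time indicator is enough to trigger a management decision.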
6.4 Classification and types of indicators
Indicators can be classified into three categories as follows:
1. Quantitative indicators: these provide hard data to demonstrate results achieved.
They also facilitate comparisons and analysis of trends over time. Quantitative
indicators are statistical measures expressed in numbers, percentages, rates, ratios, etc.
2. Qualitative indicators: these are indicators that provide insight into changes in
organizational processes, attitudes, beliefs, motives and behaviours of individuals. They
imply qualitative assessments: compliance with, quality of, extent of, level of, etc.
Qualitative indicators must be expressed quantitatively (in figures) in order to illustrate
change.
3. Efficiency indicators: these tell us whether we are getting the best value for our
investment. In order to establish such indicators, we need to know the market, i.e. the
current price of the desired output, considering quantity and quality aspects. Efficiency
indicators are unit-cost measures expressed as cost per client, student, school, etc.
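The quantitative and efficiency categories above can be illustrated with a small calculation. The enrolment figures and costs below are hypothetical, chosen only to show how a rate and a unit-cost indicator are derived.

```python
# Quantitative indicator: a statistical measure expressed as a rate.
girls_enrolled = 450
total_enrolled = 1000
enrolment_rate = 100 * girls_enrolled / total_enrolled   # percentage

# Efficiency indicator: a unit-cost measure (cost per unit of output),
# here cost per student served by the project.
total_cost = 250_000       # hypothetical total project cost
students_served = 1000
cost_per_student = total_cost / students_served

print(f"Girls' enrolment rate: {enrolment_rate:.0f}%")
print(f"Cost per student: {cost_per_student:.2f}")
```

Comparing the unit cost against the current market price of the same output, as the text suggests, would then show whether the project is delivering value for money.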
6.4.1 Types of indicators
The above classifications of indicators give rise to various types of indicators. The main
criterion for differentiating them is the level at which the project is assessed, e.g. output,
outcome or impact. Some of the types of indicators are discussed below:
1. Input indicators:
These are quantified statements about the resources provided to the project. They rely on
management, accounting and other records illustrating the use of resources by the project.
Because input indicators track the functioning of the organization at the input level, a good
accounting system is needed to keep track of expenditures, and schedules are needed to
track timelines. Input indicators are used mainly by managers closest to the tasks at the
implementation level and are consulted frequently, probably as often as daily or weekly.
They focus on the use of funds, personnel, materials and other inputs necessary to produce
the intended outputs of project activities. These indicators apply the relevance and
performance criteria at the implementation level.
2. Process indicators
The term 'process' is used to imply all that goes on during the implementation phase of
the project. Process indicators are therefore those indicators that measure the progress of
the project during implementation, that is, the extent to which stated objectives are being
achieved. These indicators capture information from project management records from the
field or project sites. They are based on the cost, timelines and scope of the project, and
they apply the relevance and performance criteria of the project. Examples include: the date
by which building site clearance must be completed, the latest date for delivery of
fertilizers to the farm store, the number of health outlets, the number of women receiving
contraceptives, and the status of procurement of school textbooks.
3. Output indicators
Outputs are the tangible products of project activities. They show the immediate outputs of
the project availed after each of the tasks conducted during project implementation. They
are the results of activities performed by different components of the project, and they use
quantitative measures of physical entities or some sort of qualitative judgment on the
timely production of outputs. Decisions on the performance of the project are made by
reading the output indicators. They show the worth of the project strategy: where the
outputs are weak and poor, the project's effectiveness is in doubt and hence needs
adjustment. Therefore, output indicators use the effectiveness criteria to show the
performance of the project. Outputs include physical quantities, improved capacities,
services delivered, systems introduced, milestones achieved, legislation passed, awareness
campaigns effected, etc. Examples may include the percentage of community members
attending a community workshop, or the number of buildings constructed by the project.
4. Impact indicators
Impact is the positive or negative long-term change that can be attributed to the project
intervention. When developed, impact indicators forecast the long-term effects of the
project on the target population some time after project completion. Precisely, impact
refers to the medium or long-term development changes expected in the beneficiaries or
target region upon project completion. Impact indicators sit at a higher level of the project
process, and impact depends on data gathered from beneficiaries. To obtain an early
indication of impact, a survey of beneficiary perceptions about project services is
conducted. Measures of change often involve complex statistics about economic or social
welfare and depend on data gathered from the beneficiaries.
5. Exogenous indicators
These are indicators that cover factors outside the control of the project but which might
affect its outcome. They include risks and the performance of the sector in which the
project operates. Data collection for monitoring and evaluation may need to cover this
wider external environment where it is expected to impinge on the project's performance,
notwithstanding the additional burden this places on the project's monitoring and
evaluation effort. Exogenous indicators help in checking the project assumptions and risks
that are likely to affect the project. For example, during project implementation a policy
decision about currency exchange rates can adversely affect profitability. Management
should carefully monitor, and alert project participants about, deteriorating situations if the
indicators of the environment dictate so.
6. Proxy indicators
These refer to indirect measures or signs that approximate or represent a phenomenon in
the absence of a direct measure. Cost, complexity or the timeliness of data collection may
prevent a result from being measured directly. Proxy indicators are expected to provide a
reliable estimate of the direction of movement of the ideal but unattainable indicator. For
example, the number of children fully immunized is a reliable proxy for infant mortality from
immunizable diseases because immunization is known to be highly effective. A proxy that
qualifies as a measure must have a strong causal link to the direct measure and should be
measurable on a regular basis. A proxy can also supplement available information by drawing on
data from related topics or different sources; this is often the case for outcomes in
behavioural change, social cohesion and other results that are difficult to measure. For
example, if data on ethnicity in target villages is unavailable, you can complement it with
data on mother tongue or spoken language. Caution should be taken when interpreting proxy
indicators, however: over-reliance on indicators that individuals can manipulate, such as
self-reported mother tongue, may lead to wrong interpretation.
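As a sketch of the immunization example above (the figures and function name are illustrative assumptions, not epidemiological estimates):

```python
# Hedged sketch: using immunization coverage as a proxy for
# protection against immunizable diseases, where direct mortality
# data is too costly or slow to collect. Figures are illustrative.

def proxy_protection_rate(children_fully_immunized: int,
                          children_under_five: int) -> float:
    """Coverage rate used as a proxy for the direct (unattainable)
    indicator; valid only because of the strong causal link between
    immunization and protection."""
    return children_fully_immunized / children_under_five

coverage = proxy_protection_rate(8_500, 10_000)
print(f"Proxy indicator (immunization coverage): {coverage:.0%}")  # 85%
```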
Take Note
1. Indicators only indicate:
   - An indicator will never completely capture the richness and complexity of a system
   - Indicators are designed to give 'slices' of reality
   - They might provide the truth but they rarely give the whole truth
2. Indicators encourage explicitness, in that they force us to be clear and explicit about
   what we are trying to do
3. Indicators usually rely on numbers and numerical techniques
   - Indicators should not just be associated with fault finding: they can help us understand
     our performance, be it good or bad
   - Well designed measurement systems identify high performers (from whom we can learn), as
     well as systems (or parts of systems) that may warrant further investigation and
     intervention
Reliability
The indicator is reliable when it consistently measures what it purports to measure in the same
way even when used by different evaluators.
Precise:
An indicator should be operationally defined in clear terms and should be context-specific and
specified with clear yardsticks rather than left subjective. This reduces confusion between
indicators.
Independent
Indicators should be non-directional and uni-dimensional, depicting a specific, definite value
at one point in time. Examples of directional indicators are 'healthier families' or 'policy
improvement'; examples of multi-dimensional indicators are 'sustainability' or 'quality'. The
characteristic of independence captures the idea that the value of the indicator should stand
alone. It is best to avoid ratios, rates of increase or decrease, or other directional
definitions.
Objectively verifiable indicator
An indicator is said to be objectively verifiable if;
- it shows the right direction (progress or failure of the project)
- it produces the same value in repeated measures/calculations on the same observation
- it leads to the same conclusions if underlying situations are similar or the same
- its interpretation is independent of the evaluator or researcher
Integrity
Indicators should be truthful.
- For example, the number of HIV-positive results from ELISA testing against the number from
  a rapid HIV test
- To improve service delivery you train service providers; which indicator would be more
  truthful: the number of providers trained or the number of trained providers?
- How truthful can an indicator be on self-reported sexual behaviour?
Measurable
One should be able to quantify an indicator by using available tools and methods. An evaluator
should consider whether tools and methods for collecting or calculating the indicator information
are available.
Timely
An indicator should provide a measurement over the period of interest, with data available for
all appropriate intervals. Timeliness considerations include:
- Reporting schedules
- Recall periods
- Survey schedules
- Length of time in which project change can be detected
Programmatically important
This implies that the indicator should be linked to an impact or to achieving the project
objectives that are needed for impact.
Take Note
When designing indicators, effort should be made to link them to
project activities. Failure to do so renders the indicator ineffective in
terms of providing information useful in measuring the performance
of the project.
Disaggregated if possible
It is important to disaggregate project outputs by gender, age, location or any other
dimension suitable for the project. This is very important for better management and
reporting. Projects often require different approaches for different target groups, and a
disaggregated indicator can help decide whether or not specific groups participate in and
benefit from the project.
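Disaggregation of this kind is straightforward to automate. A minimal sketch using Python's standard library, with hypothetical participant records (the locations are placeholders):

```python
# Illustrative sketch: disaggregating a project output indicator
# (participants reached) by gender and location. Records are
# hypothetical.
from collections import Counter

participants = [
    {"gender": "F", "location": "Awendo"},
    {"gender": "M", "location": "Awendo"},
    {"gender": "F", "location": "Rongo"},
    {"gender": "F", "location": "Awendo"},
]

by_gender = Counter(p["gender"] for p in participants)
by_location = Counter(p["location"] for p in participants)

print(dict(by_gender))    # {'F': 3, 'M': 1}
print(dict(by_location))  # {'Awendo': 3, 'Rongo': 1}
```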
Feasible
Data can be gathered over a specific time period and at an acceptable level of effort and cost
Comparability:
This assists in understanding results across different population groups and project approaches
Step Two
Develop a list of possible indicators. With the help of project stakeholders, brainstorm
possible indicators at each level of results. This brainstorming can be internal, or it can
draw on consultation with experts, the experiences of other similar organizations, or
pre-existing resources.
Step Three
Assess each possible indicator in terms of:
- Measurability: can it be quantified and measured on some scale?
- Practicability: can data be collected on a timely basis and at reasonable cost?
- Reliability: can it be measured repeatedly with precision by different people?
- Relevance: is the indicator attributable to your organization's work?
- Management usefulness: do the project staff and audience feel that the information provided
  by the measure is critical to decision-making?
- Directness/Precision: does the indicator closely track the result it is intended to measure?
- Sensitivity: does it serve as an early warning of changing conditions?
- Capability of being disaggregated: can data be broken down by gender, age, location or other
  dimensions (e.g. class or tribe) where appropriate?
Step Four
Select the best indicators. Based on your analysis and the context, narrow the list to the
final indicators that will be used in the monitoring system. Ensure that every element of each
indicator, and how it is measured, is defined. There should be an optimum set that meets
management needs at a reasonable cost. Limit the number of indicators used to track each
objective or result to a few (two or three), while remembering your target audiences, both
external and internal.
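As an illustration of Steps Three and Four together, the assessment criteria can be turned into a simple scoring sheet; the candidate indicators and all scores below are hypothetical:

```python
# Sketch: score each candidate indicator against the assessment
# criteria (1 = poor, 5 = strong) and keep the top two or three.
# Criteria names follow the text; candidates and scores are invented.

CRITERIA = ["measurability", "practicability", "reliability",
            "relevance", "usefulness", "directness", "sensitivity",
            "disaggregation"]

candidates = {
    "% households with clean water": [5, 4, 4, 5, 5, 5, 3, 4],
    "number of boreholes drilled":   [5, 5, 5, 4, 4, 3, 2, 2],
    "community satisfaction score":  [3, 2, 3, 4, 5, 4, 4, 3],
}

# Rank by total score and keep only a few, as Step Four advises.
ranked = sorted(candidates.items(), key=lambda kv: sum(kv[1]),
                reverse=True)
selected = [name for name, _ in ranked[:2]]
print(selected)  # ['% households with clean water', 'number of boreholes drilled']
```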
- When used dynamically, it is an effective management tool to guide the implementation,
  monitoring and evaluation of the project
Disadvantages
- If managed rigidly, it stifles creativity and innovation
- If not updated during implementation, it can become a static tool that does not reflect
  changing conditions
b) Means of Verification:
Information for the means of verification (MOV) column should be developed at the same time
as the indicators. It provides information to help justify the achievement of the project at
the indicator level. The means of verification are like exhibits that help verify what the
project manager reports as having been done at the various project levels. During the course
of the project, care should be taken to keep these exhibits, which take the form of registers,
receipts, records, notices, memos, etc. They can also be data previously captured by various
means and made available when needed in the course of evaluation. Means of verification should
clearly specify the anticipated source of information and the methods used to collect the
data, such as sample surveys, administrative records, workshops or focus groups, observation,
Participatory Rural Appraisal (PRA) techniques or Rapid Rural Appraisal (RRA) techniques.
MOVs should also specify those responsible for data collection (e.g. project staff,
independent survey teams), the frequency with which the information should be provided (e.g.
monthly, quarterly, annually) and the format required to collect the data. The means of
verification are more or less structured depending on the level of the intervention logic. At
the lower levels, monitoring and evaluation relies more on secondary information, captured
from items such as receipts and register records. At the upper levels, which measure project
impact, it relies more on primary information collected through interviews, questionnaires
and the like. This is illustrated in figure 6.1.
Figure 6.1 Means of verification by level: the upper levels of the matrix (goal, purpose) rely
mainly on primary data, while the lower levels (outputs, activities) rely mainly on secondary
data.
c) Assumptions
Assumptions are conditions external to the project that may affect its progress or success
and over which the project management has little control. They are stated as positive
conditions that need to exist to permit progress of the project to the next level, e.g. price
changes, rainfall, political situations, etc.
An assumption needs to be relevant to the project, or at least to the level of objectives, to
allow the project to progress to the next level. Contrary to a risk, which is a negative
statement of what might prevent project objectives from being achieved, assumptions are
positive statements of conditions that must be met in order for project objectives to be
achieved. It is important to note that assumptions are not themselves community problems. If
the assumptions prove likely to impede the project moving to the next level, it is extremely
important to capture them and strategically manage the project to bypass these problems, or
otherwise to redesign or terminate the project.
Assumptions are normally forecasts and should be relevant and probable. The decision to
select an assumption therefore depends on some value judgment on the part of the evaluator,
which can be based on the normal occurrence of risks or events. If something rarely happens
as a risk, the chance of it happening is treated as rare. A practical way to handle
assumptions is to assign a percentage chance to each condition happening or not happening;
several aspects can be evaluated in this way, and those carrying higher risk are clearly
flagged. This helps the evaluator make valid judgments on the assumptions that can affect the
project.
For instance, if a project is located in an arid region, you will not assume that the climate
will be conducive to growing maize where maize has never grown. Nor will you assume that it
is going to rain in March when rain is rare in that month. Provisionally estimate the chance
of the assumption holding before deciding whether it is a problem; if your estimate shows it
has virtually no chance of failing, do not bother about it. The logical framework demands
that all hypotheses, assumptions and risks relevant to a project are made explicit. This in
turn demands that appropriate action is considered (and where necessary taken) before
problems materialize and affect the project. Some factors to consider include:
1. How important are the assumptions?
2. How big are the risks?
3. Should the project be redesigned?
4. Should some elements of the proposed project be abandoned?
In logical framework, relationships between the assumptions and the intervention logic are
presented as causal, one step leading to the next. If one step is not completed successfully then
the next will not be achieved.
The causal relationship between the intervention logic elements and assumptions is as follows:
- if the preconditions are complied with, then the activities can be started;
- if the activities are realized, and if the assumptions at the activity level have come true,
  then the outputs will be realized;
- if the outputs are realized, and if the assumptions at the output level have come true, then
  the project purpose will be realized;
- if the project purpose is realized, and if the assumptions at the purpose level have come
  true, then the goal will have been significantly contributed to.
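This conditional chain can be sketched as a small Python function; the level names follow the logframe, while the function name and the true/false flags are illustrative:

```python
# Minimal sketch of the horizontal logic: progress to each level
# requires both the lower level being realized AND the assumptions
# at that level holding true.

def logframe_chain(preconditions: bool, assumptions: dict) -> str:
    """Return the highest level reached, given which level's
    assumptions hold (True/False per level)."""
    if not preconditions:
        return "none"
    reached = "activities"
    for lower, upper in [("activities", "outputs"),
                         ("outputs", "purpose"),
                         ("purpose", "goal")]:
        if assumptions.get(lower, False):
            reached = upper
        else:
            break  # a failed assumption halts progress upwards
    return reached

print(logframe_chain(True, {"activities": True, "outputs": True,
                            "purpose": False}))  # 'purpose'
```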
Consider the following figure;
Figure 6.2 Flow of the logic in horizontal logic: each level of the intervention logic
(activities, outputs, purpose) combines with the assumptions at that level ('AND IF') to
deliver the next level up (outputs, purpose, goal).
VERTICAL LOGIC
The vertical logic has four levels, where each lower level of activity must contribute to
the next higher level. It elucidates the causal relationships between the different levels
of objectives and specifies the important assumptions and uncertainties beyond project
management control. The capacity of the project to move to the next higher level is
determined by the assumptions, which are concrete determinant factors of the project
proceeding from one lower level to the next, all the way up to the goal as the highest
level. Each level has a set of logical framework items referred to as objectives. The items
are the intervention logic, with its corresponding means of verification, objectively
verifiable indicators and assumptions. As a set, the items are addressed by the logical
framework sequentially upwards, from the lower levels to the higher. For example, project
activities contribute to the achievement of project outputs; the achievement of project
outputs leads to the achievement of the project purpose; and finally the purpose contributes
to the goal of the project. The goal is the ultimate aim of the project, the reason for its
existence. The description in the matrix involves a detailed breakdown of the sequence of
causality. This can be expressed in terms of:
1. if inputs are provided, then the activities can be undertaken;
2. if the activities are undertaken, then outputs will be produced;
3. if outputs are produced, then the purpose will be supported; and
4. if the purpose is supported, this should then contribute to the overall goal.
Figure 6.3 Logic in the objectives: IF activities, THEN outputs; IF outputs, THEN purpose;
IF purpose, THEN goal.
Each level thus provides the justification for the next: the goal helps justify the purpose,
the purpose the outputs, the outputs the activities, and the activities the inputs.
Logical Framework Matrix
After determining all the necessary items to be entered into the logframe matrix, it is
developed by drawing a table with four columns and four rows. In the first column, enter the
item names: goal, purpose, outputs and activities, and append the appropriate information
beside each item. Remember, as mentioned earlier, they are written downwards but read
upwards. The next column holds the indicators corresponding to each of the first-column
items. The indicators vary depending on the level to which they correspond: the various
types of indicators mentioned earlier in this lecture are entered at each level, for
instance the input indicators at the input level, the output indicators at the output level,
the purpose indicators at the purpose level and, lastly, the impact indicators at the goal
level.
Next are the means of verification (MOV). The MOVs also fall into each level and vary
according to the data appropriate for each: at the lower levels there is more secondary
information, in the form of receipts and documents, while at the upper levels there is more
primary information, collected through tools such as questionnaires and interviews.
Goal: ... for judging whether, in the long run, these broad objectives are being achieved
(estimated time).

Purpose: the development outcome expected at the end of the project
- Indicators: quantitative measures or qualitative evidence by which the achievement and
  distribution of impact and benefits can be judged (estimated time)
- Means of verification: sources of information and methods used
- Assumptions: conditions necessary for the achievement of the project's purpose to
  contribute to reaching the project goal

Outputs: the direct measurable outputs (goods and services) of the project
- Indicators: the kind and quantity of outputs, and by when they will be produced (quantity,
  quality and time)
- Means of verification: sources of information and methods used
- Assumptions: factors which, if not present, are liable to restrict progress from outputs
  to achievement of the project purpose

Activities: the activities that must be undertaken to accomplish the outputs
- Indicators: implementation/work targets, used during monitoring
- Means of verification: sources of information and methods used
- Assumptions: factors that must be realized to obtain planned outputs on schedule
See Appendix A 'Awendo OVC Scholarship Project' for a practical example of a logical
framework.
6.10 Summary
This lecture defines the indicators used in project monitoring and
evaluation and explains their importance. It also provides a
detailed analysis of their characteristics and categories. The
lecture makes clear that project managers must understand the steps
to be followed in selecting SMART indicators if relevant and useful
information is to be collected. It also shows that indicators are a
central element of the logical framework, which is regarded as one
of the key tools for project planning as well as project monitoring
and evaluation. An in-depth understanding of the logical framework
is provided in terms of both its vertical and horizontal logic.
6.11 Self Evaluation questions
LECTURE SEVEN
A well-functioning M&E system integrates the formal data commonly associated with the task of
M&E with informal monitoring and communication, such as field staff sharing their
experiences of project implementation over a cup of tea. An M&E system is therefore an
integral system of reflection and communication supporting project implementation, and it
should be planned for and managed throughout the project's life. It is disastrous for project
managers to view M&E as a statistical task or a tedious external obligation of little
relevance to those implementing projects. From the previous lecture we have seen that it is
hard to separate monitoring from evaluation. It is therefore not wise to separate project
monitoring functions from project evaluation functions such that high-level, impact-related
assessments are subcontracted while project staff focus only on tracking short-term
activities. This limits the opportunities to learn, since short-term activities form part of
the long-term, high-level impact of the project.
To ensure that M&E provides integrated support to those involved in project implementation,
the project manager needs to:
1. Create an M&E process that will lead to clear and regular learning for all those
involved in project strategy and operations.
2. Understand the link between M&E and management functions.
3. Use existing processes of learning, communication and decision-making among stakeholders
as the basis for project-oriented M&E.
4. Put in place the necessary conditions and capacities for M&E to be carried out.
7.4 Linkage of M&E system to project strategy and operations
In the above section we have seen how M&E forms an integral system which assists in project
implementation. In this section we are going to focus on how M&E links up to all the operations
within the project to satisfy the project objectives.
Figure 7.1: How M&E links to project strategies
Source: IFAD (2002) Setting up an M&E System. Managing for Impact in Rural Development: A
Guide for Project M&E. http://www.ifad.org/evaluation/guide/toc.pdf
Figure 7.1 shows how the M&E fits in within the project. The figure focuses on the elements of
M&E and how it links with two components of the project: project strategies and operations. Let
us now discuss some elements as shown in figure 7.1.
1. Project Strategy: project strategy is considered as the plan for what will be achieved and
how it will be achieved. This forms a starting point for project implementation and
setting up an M&E system. The strategy is the basis for working out the project
operations required to implement activities efficiently and effectively.
2. The completion of project activities leads to a series of actual outputs, outcomes and
impacts. Comparing these with what was planned in the project strategy, and understanding
the differences in order to identify changes to the strategy and operations, is the core
function of the M&E system.
3. It is clear from figure 7.1 that the M&E system consists of four interlinked parts.
o The first part is developing the M&E system. This is done by identifying the information
needed to guide the project strategy, ensure effective operations and meet external
reporting requirements. You then need to decide how to gather and analyze this information
and document a plan for the M&E system. The process of working out how to monitor and
evaluate a project inevitably raises questions about the project strategy itself, which can
help improve the initial design. Setting up an M&E system with a participatory approach
builds stakeholders' understanding of the project and starts creating a learning
environment.
o The second part is gathering management information, which amounts to implementing the
M&E system. Information can be gathered through informal as well as structured approaches.
It comes from tracking which outputs, outcomes and impacts are being achieved and from
checking project operations (e.g. activity completion, financial management and resource
use). Once you start gathering and managing information, you will be able to solve some
problems, or you will generate ideas that may lead to revising the initial M&E plan.
o The third part is involving project stakeholders in a critical reflection process to
improve the project. Once the information has been collected, it needs to be analyzed and
discussed by the project stakeholders. This may happen formally, for example during an
annual project review workshop, or informally, for example when talking to project
beneficiaries about the project during field visits. In these reflections and discussions
you will probably notice information gaps, which can trigger adjustments to the M&E plan to
ensure the necessary information is being collected.
o The fourth part is the communication of M&E results to the people who need to use them.
This is the part that determines the success of the M&E system. It includes reporting to
funding agencies, but it is broader than that. For example, problems experienced by field
staff need to be understood by project managers, and project progress and problems must be
shared with project participants to enable you to find solutions together. Reports to
funding agencies need to balance successes and mistakes and, above all, be analytical and
action-oriented. Some of those who are to use the information may have been involved in
collecting or analyzing part of it; however, you need to plan how to inform those who were
not involved.
4. The results from M&E must improve the project strategy and operations. Senior
management, with the support of project staff, is responsible for this. Sometimes the
improvement may be immediate, depending on the availability of resources; sometimes it may
require negotiation between key project stakeholders, or a change in the sequence of
certain activities, and thus take time to effect.
7.5 Factors to be considered before constructing M&E system
The readiness assessment is a diagnostic tool that can be used to determine whether the
prerequisites are in place for building an M&E system. The following factors must be considered
before setting up an M&E system.
i) Potential pressures encouraging the need for the M&E system
It is important to know where the demand for creating an M&E system is emanating from, and
why. Are the demands and pressures coming from internal, multilateral or international
stakeholders, or some combination of these? These requests will need to be acknowledged
and addressed if the response is to be appropriate to the demand. Internal demands may
arise from calls for reforms in public sector governance and for better accountability and
transparency; anti-corruption campaigns may be a motivating force.
Externally, pressures may arise from the donor community for tangible development results for
their investments. International organizations investing in development projects, such as the
European Union, expect a feedback system on public sector performance via M&E for each of
the accession countries. The competitive pressures of globalization may come into play, and the
rule of law, a strong governance system, and clearly articulated rules of the game are now
necessary to attract foreign investment. Financial capital and the private sector are looking for a
stable, transparent investment climate, and protection of their property and patents, before
committing to invest in a country. There are multitudes of pressures that project management
may need to respond to, and these will drive the incentives for building a results-based M&E
system.
ii) Project staff attitude towards the M&E system
Champions in an organization implementing projects are critical to the sustainability and success
of an M&E system. Within a given organization implementing projects, there are individuals or
groups who will likely welcome and champion such an initiative, while others may oppose or
even actively counter the initiative. It is important to know who the champions are and where
they are located in the organization. Their support and advocacy will be crucial to the potential
success and sustainability of the M&E system. However, if the emerging champion is located
away from the center of policymaking and has little influence with key decision makers in that
particular organization, it will be difficult, although not impossible, to envision an M&E system
being used and trusted. It will be hard to ensure the viability of the system under these
circumstances.
Viability is dependent upon the information being viewed as relevant, trustworthy, useable, and
timely. M&E systems with marginally placed champions who are peripheral to the decision
making process will have a more difficult time meeting these viability requirements. Information
from assessment on the champions of M&E system will help the Project manager together with
the stakeholders come up with the roles and responsibilities that must be stated prior to the
development of the system. Clearly identify those who Will be involved in the design,
implementation and reporting and allocate them such responsibilities. This will ensure that there
is staff for the supervision of the system by assigning the responsibilities and roles. It will be
clear as to who will do what.
iii) Ownership, utilization and sustaining of M&E system
Frequently, a careful institutional assessment should be made of the real capacity of the
users to create, utilize and sustain the system. A carefully done readiness assessment
helps provide a good understanding of how to design the system to be responsive to the
information needs of its users, determine the resources available to build and sustain the system,
and assess the capacities of those who will both produce and use the information. Understanding
these issues helps to tailor the system to the right level of complexity and completeness. For a
results-based M&E system to be effectively used, it should provide accessible, understandable,
relevant, and timely information and data. These criteria drive the need for a careful readiness
assessment prior to designing the system, particularly with reference to such factors as ownership
of the system, and benefits and utility to key stakeholders. From a technical perspective, issues to
be addressed include the capacity of the organization to collect, analyze and interpret the data,
produce reports, manage and maintain the M&E system, and use the information produced.
Thus, the readiness assessment will provide important information and baseline data against
which capacity-building activities—if necessary— can be designed and implemented.
Furthermore, there is an absolute requirement to collect no more information than is required.
Time and again, M&E systems are designed and are immediately overtaxed by too much data
collected too often—without sufficient thought and foresight into how and whether such data
will actually be used.
relevant. M&E systems can help policymakers track and improve the outcomes and impacts of
resource allocations. Most of all, they help organizations make better informed decisions and
policies by providing continuous feedback on results. Experience shows that the creation of a
results-based M&E system often works best when linked with other public sector reform
programs and initiatives, such as creating a medium-term public expenditure framework,
restructuring public administration, or constructing a National Poverty Reduction Strategy.
Linking the creation of M&E systems to such initiatives creates interdependencies and
reinforcements that are crucial to the overall sustainability of the systems. The readiness
assessment can provide a road map for determining whether such links are structurally and
politically possible.
manage it. It is important to assess the organization's capacity to monitor and evaluate.
As part of the preparation, an appropriate organizational structure should be identified.
This gives the management team the authority to determine the course of the system and
avoids confusion over whose authority the system works under. The project manager also
needs to scout for capacity outside the organization, such as NGOs, universities, research
institutes and training centers, which may provide part of the necessary technical capacity
to support the organization's M&E system should the need arise. It is important for the
project manager to assess the following as they manifest in the project: technical skills;
managerial skills; existing data systems and their quality; available technology; available
fiscal resources; and institutional experience.
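A simple way to record such a capacity assessment is as a checklist; the entries below mirror the list above, and the yes/no judgments are purely illustrative:

```python
# Hedged sketch: a readiness checklist for the capacity areas listed
# in the text. The True/False judgments are invented examples.

capacities = {
    "technical skills": True,
    "managerial skills": True,
    "existing data systems and their quality": False,
    "technology available": True,
    "fiscal resources available": False,
    "institutional experience": True,
}

# Areas not in place become the capacity-building agenda.
gaps = [area for area, in_place in capacities.items() if not in_place]
print(f"Capacity gaps to address before building the M&E system: {gaps}")
```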
6. Planning for the necessary conditions and capacities – what is needed to ensure that
our M&E system actually works?
Although these factors have been extensively examined in the Kusek and Rist (2004) model,
they need to be looked at keenly. It is imperative to note that a good project appraisal
report will include an indicative M&E framework that provides enough detail about the above
questions to enable budgeting and the allocation of technical expertise, to give funding
agencies an overview of how M&E will be undertaken, and to guide project and partner staff
during the project start-up phase. Let us briefly focus on what each step entails.
1). Purpose and scope of the M&E system: A clear definition of the purpose and scope of the
intended M&E system helps when deciding on issues such as budget levels, the number of
indicators to track and the type of communication needed. Specifying the purpose also helps
make clear what can be expected of the M&E system, as it forces you to think about the
nature of the project and the implications for the information needed to manage it well.
2). Performance questions, information needs and indicators: It may be difficult for a
project manager to list quantitative indicators directly from the project objectives in the
logframe matrix, because some objectives are so complex that they cannot be summarized in
one or a few indicators. Also, while quantitative information might show whether objectives
are being met, it does not necessarily explain why, or whether this can be attributed to
the project. Multiple sources of quantitative and qualitative information are therefore
critical: to explain what is happening, you must look closely at the relationship between
different pieces of information rather than at a single indicator.
Working with performance questions to guide indicator analysis will give you a more
integrated and meaningful picture of objective achievements. Answering these questions requires
descriptive analysis and qualitative information. Starting by identifying performance questions
makes it easier to recognize which specific indicators are really necessary. Sometimes a
performance question may be answered directly with a simple quantitative indicator. However,
very often the question can only be answered by a range of quantitative and qualitative
information.
Table 7.1 Tasks needed when detailing the M&E plan based on a project appraisal report

1. Establish the purpose and scope
Output in project appraisal (M&E framework): broadly define the purpose and scope of M&E in the project context.
Tasks during project start-up: review the purpose and scope with key stakeholders.

2. Identify performance questions, indicators and information needs
Output in project appraisal: a list of indicative key questions and indicators for the goal, purpose and output levels.
Tasks during project start-up: assess the information needs and interests of all key stakeholders; precisely define all questions, indicators and information needs for all levels of the objective hierarchy; check each bit of information for relevance and end-use.

3. Plan information gathering and organizing
Output in project appraisal: generally describe information gathering and organizing methods to enable resource allocation.
Tasks during project start-up: plan information gathering and organizing in detail (who will do what, use which methods to gather/synthesize what information, how often and when, where, with whom, and with what expected information product); check the technical and resource feasibility of information needs, indicators and methods; develop formats for data collection and synthesis.

4. Plan for communicating and reporting
Output in project appraisal: a broad description of key audiences and the type of information that should be communicated to them, to enable resource allocation.
Tasks during project start-up: make a precise list of all key audiences, what information they need, when they need it and in what format; define what is to be done with the information (simply send it, provide a discussion for analysis, seek relevant feedback for verification etc.); make a comprehensive schedule for information production, showing who is to do what by when in order to have information ready on time.

5. Plan critical reflection processes and events
Output in project appraisal: a general outline of processes and events.
Tasks during project start-up: precisely detail which methods/approaches are to be used with which stakeholder group and for what purpose; identify who is responsible for which reflective events; make a schedule that integrates all the key events and reporting/decision-making moments.

6. Plan for the necessary conditions and capacities
Output in project appraisal: indicative staff levels and types, a clear description of the organizational structure of M&E, and an indicative budget.
Tasks during project start-up: come to a precise description of the number of M&E staff, their responsibilities and linkages, the incentives needed to make M&E work, the organizational relationships between key M&E stakeholders, the type of information management system to be established, and a detailed budget.
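The six design steps in Table 7.1 can be tracked as a simple start-up checklist: each step begins at the broad appraisal-report level and is marked off once it has been detailed. The sketch below is purely illustrative; the status values and the tracking approach are our own, not part of any published model.

```python
# Illustrative sketch: tracking which of the six M&E design steps have been
# detailed beyond the broad appraisal-report outputs. Step names come from
# Table 7.1; the status values are hypothetical examples.
DESIGN_STEPS = [
    "Establish the purpose and scope",
    "Identify performance questions, indicators and information needs",
    "Plan information gathering and organizing",
    "Plan for communicating and reporting",
    "Plan critical reflection processes and events",
    "Plan for the necessary conditions and capacities",
]

# Every step starts at the broad appraisal level...
status = {step: "appraisal only" for step in DESIGN_STEPS}
# ...and is marked "detailed" once its start-up tasks are completed.
status["Establish the purpose and scope"] = "detailed"

pending = [s for s, st in status.items() if st != "detailed"]
print(f"{len(pending)} of {len(DESIGN_STEPS)} steps still need detailing")
```

A project team could review such a checklist at each start-up meeting until every step reads "detailed".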
7.6.2 A Ten-Step Model for Setting up an M&E System
Kuzek and Rist (2004) argue that the above model for setting up an M&E system is defective
in that it ignores some key factors, making the system unconvincing to the key project
implementers. They add that such models do not cater for organizational, political and cultural
factors. They propose a 10-step model which differs from others because it provides extensive
details on how to build, maintain and, perhaps most importantly, sustain a results-based M&E
system. The 10 steps proposed by Kuzek and Rist (2004) are briefly discussed below:
Step 1: Conducting a readiness assessment
This model differs from other approaches in that it contains a unique readiness assessment. Such
an assessment must be conducted before the actual establishment of a system. The readiness
assessment is, in essence, the foundation of the M&E system. Just as a building must begin with
a foundation, constructing an M&E system must begin with the foundation of a readiness
assessment. Without an understanding of the foundation, moving forward may be fraught with
difficulties and, ultimately, failure. Readiness Assessment can be considered as an analytical
framework to assess the project’s capacity and willingness to monitor and evaluate its project’s
goals.
Step 2: Agreeing on Outcomes to Monitor and Evaluate
This stage will be described in detail in lecture eight. However, what we need to know is that the
outcomes help the project stakeholders to "know where you are going before you get moving". It
is important that all project stakeholders agree on what outcomes to monitor and evaluate.
Clearly defined outcomes provide a foundation for designing and building a sustainable M&E
system. They also help in budgeting for outputs and in the general management of the outcomes.
Outcomes are usually not directly measured; they are only reported on. At some level, outcomes
must be translated into a set of key indicators.
Step 4: Baseline Data on Indicators
Step 4 of the model relates to establishing performance baselines, qualitative or quantitative,
that can be used at the beginning of the monitoring period. The performance baselines establish a
starting point from which to later monitor and evaluate results. The baseline provides a
measurement "to find out where we are today". This stage will be discussed in detail in lecture
eight. The other steps suggested by Kuzek and Rist (2004) include:
Step 5 builds on the previous steps and involves the selection of results targets, that is,
interim steps on the way to a longer-term outcome. Targets can be selected by examining
baseline indicator levels and desired levels of improvement.
Step 6 includes both implementation and results monitoring. Monitoring for results
entails collecting quality performance data, for which guidelines are given.
Step 7 deals with the uses, types, and timing of evaluation.
Step 8 looks at ways of analyzing and reporting findings to help decision makers make the
necessary improvements in projects, policies, and programs.
Step 9 talks more about using M&E findings and emphasizes the importance of generating
and sharing knowledge and learning within organizations.
Finally, Step 10 covers the challenges in sustaining results-based M&E systems, including
demand, clear roles and responsibilities, trustworthy and credible information, accountability,
capacity, and appropriate incentives.
7.9 Summary
In this lecture the M&E system is regarded as one of the most important
aspects of project design. The lecture starts by defining the M&E system and
linking the system to project strategy and operations. It also explores the factors
that a project manager needs to consider before setting up an M&E
system. Two models of setting up an M&E system are also discussed.
LECTURE EIGHT
MEASURING OF PROJECT PERFORMANCE INDICATORS
8.1 Introduction
In lecture seven we discussed the basic models for setting up an M&E system. During this
discussion we realized that deciding on outcomes and setting targets is key in building up an
M&E system. Although indicators were discussed earlier, it is important for you to understand
that one cannot set indicators before determining outcomes. This is because it is the
outcomes, not the indicators, that will ultimately produce the project benefits. In this lecture we
are going to discuss outcomes and baseline targets as foundations of measuring project
performance indicators.
Millennium Development Goals. At the country level, there could already be some stated
national, regional, or sectoral goals. Also, political and electoral promises may have already been
made that specify improved governmental performance in a given area. In addition, there may be
citizen polling data indicating particular societal concerns. Parliamentary actions and authorizing
legislation are other areas that should be examined in determining desired national goals. There
may also be a set of simple goals for a given project or program, or for a particular region of a
country. From these goals, specific desired outcomes can be determined. It should be noted that
developing countries may face special challenges in formulating national outcomes.
2. Stakeholder interests
When setting outcomes it is important to capture the stakeholders' interests, since project
outcomes target the felt needs of the society or organization. In order to capture the
stakeholders' interests, there is need to launch a participatory process involving key
stakeholders in the formulation of the outcomes.
3. Available capacity
Available capacity in terms of finances and other resources, such as human resources and
technological capacity, is an important factor that should be considered while formulating the
project outcomes. Project performance is only realized in an environment where adequate
resources interact in an effective and efficient way to achieve the desired outcome. It is
needless to formulate outcomes that will never be realized due to lack of capacity.
8.3.2 The Overall Process of Setting and Agreeing upon Outcomes
After looking at the factors that you need to consider when choosing outcomes to monitor, let us
now discuss the process of setting and agreeing upon the outcomes to monitor. To jump-start
this process, you need to know where you are going, why you are going there, and how you will
know when you get there. There is a political process involved in setting and agreeing upon
desired outcomes, and each part is critical to achieving stakeholder consensus on outcomes.
The following are the steps involved in setting and agreeing upon the outcomes to monitor:
1. Identify Specific Stakeholder Representatives
Who are the key parties involved around an issue area (health, education, and so forth)? How are
they categorized, for example, NGO, Government, donor? Whose interests and views are to be
given priority?
2. Identify Major Concerns of Stakeholder Groups
Use information gathering techniques such as brainstorming, focus groups, surveys, and
interviews to discover the interests of the involved groups. Numerous voices must be heard—not
just the loudest, richest, or most well-connected. People must be brought into the process to
enhance and support a democratic public sector.
3. Translate Problems into Statements of Possible Outcome Improvements
It should be noted that formulating problems as positive outcomes is quite different from a
simple reiteration of the problem. An outcome oriented statement enables one to identify the
road and destination ahead. We encourage outcomes to be framed positively rather than
negatively. Stakeholders will respond and rally better to positive statements, for example, “We
want improved health for infants and children,” rather than “We want fewer infants and children
to become ill.” Positive statements to which stakeholders can aspire seem to carry more
legitimacy. It is easier to gather a political consensus by speaking positively to the desired
outcomes of stakeholders.
4. Disaggregate to Capture the Key Desired Outcome
Outcomes should be disaggregated sufficiently to capture only one improvement area in each
outcome statement. A sample outcome might be to "increase the percentage of employed
people." To know whether this outcome has been achieved, the goal needs to be disaggregated
further.
Table 8.1 shows an example of formulating various concerns identified by stakeholders into
positive and desired outcomes.
Figure 8.1 Developing outcome statements
Problem: Idle youth in the rural areas due to few employment opportunities.
Desired outcome: Increase employment opportunities for the youth in the rural areas.
In the figure above the problem is translated into a desired outcome. However, there is need
to disaggregate the positive statement by considering the following questions: For whom?
Where? How much? By when? Taking the example from Figure 8.1, "increase employment
opportunities for youth in rural areas" can be disaggregated to "increase employment among
youth in the rural sector by 20 percent over the next four years." It is only through
disaggregating the outcome and articulating the details that we will know if we have
successfully achieved it. Simplifying and distilling outcomes at this point also eliminates
complications that may arise later when we start to build a system of indicators, baselines,
and targets by which to monitor and evaluate. By disaggregating outcomes into
subcomponents, we can set indicators to measure results.
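The disaggregation questions (for whom? where? how much? by when?) can be captured as fields of a small record, so that every outcome statement carries its details explicitly. The sketch below is a hypothetical illustration; the field names are our own, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One disaggregated outcome statement (illustrative field names)."""
    statement: str     # positively framed improvement area
    target_group: str  # for whom?
    location: str      # where?
    change_pct: int    # how much?
    years: int         # by when?

    def disaggregated(self) -> str:
        # Assemble the full, measurable outcome statement.
        return (f"{self.statement} among {self.target_group} in the "
                f"{self.location} by {self.change_pct} percent over the "
                f"next {self.years} years")

# The running example from Figure 8.1:
o = Outcome("Increase employment", "youth", "rural sector", 20, 4)
print(o.disaggregated())
```

Forcing every outcome through such a structure makes it obvious when one of the four disaggregation details is missing.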
There are eight key questions that should be asked in building baseline information for
every indicator. (These questions continue to apply in subsequent efforts to measure the
indicator.)
1. What are the sources of data?
2. What are the data collection methods?
3. Who will collect the data?
4. How often will the data be collected?
5. What is the cost and difficulty to collect the data?
6. Who will analyze the data?
7. Who will report the data?
8. Who will use the data?
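The eight questions above can double as a completeness checklist for each indicator's baseline plan. In the sketch below, all field names and values are hypothetical examples of how such a plan might be recorded and checked:

```python
# Hypothetical baseline plan for one indicator, keyed to the eight questions.
baseline_plan = {
    "indicator": "Percentage of youth employed in the rural sector",
    "data_sources": ["household survey", "district labour records"],  # Q1
    "collection_methods": ["structured interviews"],                  # Q2
    "collected_by": "field enumerators",                              # Q3
    "frequency": "annually",                                          # Q4
    "cost_and_difficulty": "moderate",                                # Q5
    "analyzed_by": "M&E officer",                                     # Q6
    "reported_by": "project manager",                                 # Q7
    "used_by": ["donors", "project steering committee"],              # Q8
}

# A quick completeness check before the baseline is signed off:
required = ["data_sources", "collection_methods", "collected_by", "frequency",
            "cost_and_difficulty", "analyzed_by", "reported_by", "used_by"]
missing = [k for k in required if not baseline_plan.get(k)]
print("complete" if not missing else f"missing: {missing}")
```

Because the same eight questions apply to every subsequent measurement of the indicator, the same record can be reused throughout the monitoring period.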
Another consideration in setting targets is the expected funding and resource levels—existing
capacity, budgets, personnel, funding resources, facilities, and the like—throughout the target
period. This can include internal funding sources as well as external funding from bilateral and
multilateral donors. Targets should be feasible given all of the resource considerations as well as
organizational capacity to deliver activities and outputs. Most targets are set annually, but some
could be set quarterly. Others could be set for longer periods. However, setting targets more than
three to four years forward is not advisable. There are too many unknowns and risks with respect
to resources and inputs to try to project target performance beyond three to four years. In short,
be realistic when setting targets.
The political nature of the process also comes into play. Political concerns are important.
What has the government or administration promised to deliver? Citizens have voted for a
particular government based on articulated priorities and policies that need to be recognized and
legitimized in the political process. Setting targets is part of this political process, and there will
be political ramifications for either meeting or not meeting targets. Setting realistic targets
involves the recognition that most desired outcomes are longer term, complex, and not quickly
achieved. Thus, there is a need to establish targets as short-term objectives on the path to
achieving an outcome. So how does an organization or country set longer-term, strategic goals to
be met perhaps 10 to 15 years in the future, when the amount of resources and inputs cannot be
known? Most governments and organizations cannot reliably predict what their resource base
and inputs will be 10 to 15 years ahead. The answer is to set interim targets over shorter periods
of time when inputs can be better known or estimated. “Between the baseline and the . . .
[outcome] there may be several milestones [interim targets] that correspond to expected
performance at periodic intervals” (UNDP 2002, p. 66). For example, the Millennium
Development Goals (MDGs) have a 15-year time span. While these long-term goals are
certainly relevant, the way to reach them is to set targets for what can reasonably be
accomplished over a set of three- to four-year periods.
The aim is to align strategies, means, and inputs to track progress toward the MDGs over
shorter periods of time with a set of sequential targets. Targets could be sequenced: target one
could be for years one to three; target two could be for years four to seven, and so on.
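The idea of interim milestones between a baseline and a long-term outcome can be illustrated numerically. The sketch below assumes evenly spaced annual milestones, which is a simplification; in practice milestones are negotiated, not computed.

```python
def interim_targets(baseline: float, final_target: float, years: int) -> list:
    """Evenly spaced annual milestones from a baseline to a final target."""
    step = (final_target - baseline) / years
    return [round(baseline + step * y, 1) for y in range(1, years + 1)]

# Hypothetical example: raise youth employment from 10% to 30% over 4 years.
print(interim_targets(10.0, 30.0, 4))  # [15.0, 20.0, 25.0, 30.0]
```

Each milestone then becomes a short-term target whose required inputs can be estimated with reasonable confidence, even when the 10- to 15-year resource picture cannot.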
Flexibility is important in setting targets because internal or external resources may be
cut or otherwise diminished during budgetary cycles. Reorientation of the program, retraining of
staff, and reprioritization of the work may be required. This is an essential aspect of public
management.
If the indicator is new, be careful about setting firm targets. It might be preferable to use
a range instead. A target does not have to be a single numerical value. In some cases it can be a
range. For example, in 2003, one might set an education target that states “by 2007, 80 to 85
percent of all students who graduate from secondary school will be computer literate.” It takes
time to observe the effects of improvements, so be realistic when setting targets. Many
development and sector policies and programs will take time to come to fruition. For example,
environmental reforestation is not something that can be accomplished in one to two years.
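A range target such as the education example above lends itself to a simple check at reporting time. The function below is a minimal sketch of that check:

```python
def meets_range_target(observed: float, low: float, high: float) -> bool:
    """True if the observed indicator value falls within the target range."""
    return low <= observed <= high

# Target: "by 2007, 80 to 85 percent of all students who graduate from
# secondary school will be computer literate" (values illustrative).
print(meets_range_target(82.0, 80.0, 85.0))  # True
print(meets_range_target(78.5, 80.0, 85.0))  # False
```

Using a range rather than a single value gives a new indicator room for measurement error while still making success or failure unambiguous.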
Finally, it is also important to be aware of the political games that are sometimes played
when setting targets. For example, an organization may set targets so modest or easily achieved
that they will surely be met. Another game that is often played in bureaucracies is to move the
target as needed to fit the performance goal. Moving targets causes problems because indicator
trends can no longer be discerned and measured. In other cases, targets may be chosen because
they are not politically sensitive.
8.6 Summary
This lecture discusses outcomes, baseline data and targets as the
foundation of measuring project indicators. It examines the various
factors to be considered when choosing outcomes to monitor and
evaluate, and the factors to consider when selecting targets. The
lecture also outlines the overall process of setting and agreeing
on outcomes to monitor. In detail and with illustrations, the
lecture also looks at how outcomes, baselines and targets are set
for effective monitoring and evaluation.
LECTURE NINE
9.1 Introduction
The credibility of any evaluation is measured against standards of quality established by the
International Community of Evaluators. This lecture will introduce you to commonly agreed
standards that you can apply from the planning of an evaluation through to the implementation
stage. The lecture covers: utility standards, feasibility standards, propriety standards and
accuracy standards.
9.2 Lecture objectives
At the end of this lecture you should be able to;
1. Identify the utility standards
2. Explain the feasibility standards
3. Describe the propriety standards
4. Discuss accuracy standards
The information surrounding the evaluation should be carefully selected. Information
collected should be responsive to the interests and needs of the stakeholders. Therefore it is
important to gather enough information to answer all pertinent questions about the
project.
Take Note
All stages of evaluation should be described: context, purpose, questions,
procedures, funding etc.
Take Note
Interim and final reports are equally important for they may have an
impact on the future action of the target group. Reporting is important.
Under this standard it is important to ensure that the evaluation process embraces
practical methods and instruments. If this is adhered to, the evaluation process will guarantee
production of the needed information and disruption will be minimized. To validate the
evaluation methods and instruments, there is need to involve all the stakeholders.
constrained by the available resources in terms of time and budget. These are factors that ensure
the exercise is complete and fair. Therefore any issues that may cause difficulty in the process of
evaluation should be discussed and agreed upon before the exercise.
Take Note
The evaluation interests can be categorized into:
1. Donors' interests;
2. Top management's interests;
3. Stakeholders' interests; and
4. The evaluator's interests.
All of the above interests will always seek to influence the evaluation
findings. This is a source of dilemma for the evaluator.
The integrity of the evaluation cannot be compromised just to accommodate conflicts of interest.
Take Note
Details on writing of evaluation study will be discussed later.
Take Note
All data analysis should follow rules of methodological soundness.
9.7 Summary
This lecture extensively examines the various standards used in
monitoring and evaluation: the utility standards, the feasibility
standards, the propriety standards and the accuracy standards.
LECTURE TEN
10.1 Introduction
After exposing yourself to different aspects of evaluation, your next task as an evaluator is
to design an evaluation exercise. At this point you will be required to have knowledge of the
various methodologies and tools that you will use to conduct the evaluation. In this lecture we are
going to discuss key monitoring and evaluation methods, factors to consider when selecting
monitoring and evaluation tools, and the preparation of the monitoring and evaluation document
(proposal).
approach include: survey designs, cross-sectional design, longitudinal design, ex-post facto
design, experimental design, and quasi-experimental design. We know that you have covered
these designs in the Research Methods module. For the purpose of reminding ourselves, we will
briefly discuss each of the above-mentioned designs below:
project target groups or even the kind of services rendered to the groups by the project.
Experimental evaluation involves two groups: an experimental group and a control group. The
experimental group receives a novel treatment, while the control group receives either a different
treatment or the usual treatment. The control group is needed for comparison purposes.
The experimental design can take the form of a true-experimental design, a factorial design, or
a quasi-experimental design.
This design attempts to examine an individual or unit in depth in an endeavour to describe
behaviours or events and the relationships of these behaviours or events to the subject's
history and environment. The emphasis of the design is on understanding why an individual does
what he/she does and how behaviour changes as the individual responds to the environment.
Case study design aims at a comprehensive, systematic and in-depth gathering of
information about a case of interest. In such a study raw case data are assembled, a case record is
constructed and ultimately a case study narrative is produced. Types of case studies include:
Historical organizational case study – a study that traces the historical
development of an organization/project over time. It relies on document review
and interviews.
Observational case study – mostly used to study the interaction of a group
of people over a period of time. The major data collection technique is
participant observation.
Situation analysis – in this form of case study a particular event is studied from
the viewpoint of all major participants. The collective views of the participants
are synthesized by the evaluator to provide an understanding of the subject under
study.
10.4 Selecting Monitoring and evaluation tools
Monitoring and evaluation may use various tools for data collection, such as formal
interviews, literature reviews, questionnaires and surveys, in-depth interviews, focus group
discussions, document reviews, field work reports, case studies, participant observation, and
community meetings.
These tools have advantages and disadvantages as illustrated below
Table 10.1 Evaluation methods and tools
Interviews: used to assess the perceptions, views and satisfaction of beneficiaries.
Advantages: face-to-face contact with the respondent provides an opportunity to explore topics in depth; allows the interviewer to probe, explain or clarify questions, increasing the likelihood of useful responses; allows the interviewer to be flexible in administering the interview to particular individuals or circumstances.
Disadvantages: the interviewee may distort information through recall errors, selective perception or the desire to please; flexibility can result in inconsistencies across interviews; the volume of information may be too large and difficult to reduce.

Document review: gives an impression of how the programme operates, without interrupting it, through review of applications, finances, memos, minutes etc.
Advantages: provides impressions and historical information; does not interrupt the programme or clients' routines; the information already exists; few biases about the information.
Disadvantages: often takes a lot of time; information may be incomplete; the quality of documentation might be poor; one needs to be clear about the purpose; not a flexible means of getting data, since data are restricted to what already exists.

Observations: involve inspections, field visits and observation to understand processes, infrastructure/services and their utilization; gather accurate information about how a programme actually operates, particularly about processes.
Advantages: well suited to understanding the operation of the programme while it is actually occurring; can adapt to events as they occur in natural, unstructured and flexible settings; provide direct information about the behaviour of individuals and groups; permit the evaluator to enter into and understand situations/contexts; provide a good opportunity for identifying unanticipated results.
Disadvantages: dependent on the observer's understanding and interpretation; limited potential for generalization; exhibited behaviours can be difficult to interpret; observations can be complex to categorize; observation can influence the behaviour of programme participants; can be expensive and time consuming; needs well qualified, highly trained observers and content experts; investigators have little control of the situation.

Focus groups: a focus group brings together a representative group of 8 to 10 people who are asked a series of questions related to the task at hand; used for analysis of specific complex problems in order to identify attitudes and priorities in sample groups; explores a topic in depth through group discussion, e.g. about reactions to an experience, suggestions, or common complaints.
Advantages: efficient and reasonable in terms of cost; stimulates the generation of new ideas; quickly and reliably gets common impressions; can be an efficient way to get a wide range and depth of information in a short time; can convey key information about the programme; useful in project design and in assessing the impact of a project on a given set of stakeholders.
Disadvantages: responses can be hard to analyze; needs good facilitators; scheduling 8 to 10 people can be difficult.

Case study: an in-depth view of one or a small number of selected cases; used to fully understand or depict beneficiaries' experience in a project and to conduct a comprehensive examination through cross-comparison of cases.
Advantages: well suited to understanding processes and to formulating hypotheses to be tested later; fully depicts the client's experience of programme input, process and results; a powerful means of portraying the programme to outsiders.
Disadvantages: usually time consuming to collect, organize and describe; represents depth of information rather than breadth.

Key informant interviews: interviews with persons who are knowledgeable about the community targeted by the project. A key informant is a person (or group) who has unique skills or a professional background related to the issue/intervention being evaluated, is knowledgeable about the project participants and/or has access to other information of interest to the evaluator.
Advantages: a flexible, in-depth approach; easy to implement; provides information on causes, reasons and/or best approaches from an 'insider' point of view; advice/feedback increases the credibility of the study; may have the side benefit of solidifying relationships between the evaluator, beneficiaries and other stakeholders.
Disadvantages: risk of biased presentation/interpretation by informants or the interviewer; the time required to select informants and secure their commitment may be substantial; the relationship between evaluator and informants may influence the type of data obtained; informants may interject their own biases and impressions.

Direct measurement: registration of quantifiable or classifiable data by means of analytical instruments.
Advantages: precise; reliable and often requiring few resources.
Disadvantages: registers only facts, not explanations.
There are many ways of writing an M&E proposal. The most common format is explained
below.
Step One: Preliminary pages
The title page may contain: the name of the project, programme or theme being evaluated; the
country/ies of the project/programme or theme; the name of the organization to which the report is
submitted; the names and affiliations of the evaluators; and the date. Finally, the preliminary pages
include the table of contents and the list of acronyms and abbreviations.
Step Two
This constitutes chapter one of the proposal, which contains the introduction to the evaluation
and may include the following areas:
i) Context of the evaluation: Briefly describe the project to establish whether it is new,
developing or firmly established.
ii) Purpose of the evaluation: After describing the context of the evaluation, make a
statement of need and then state the purpose of the evaluation. Before writing the
purpose, consider the following:
Why is this evaluation important?
What are the implications of the evaluation and how does it relate to the
future work of the area?
iii) Evaluation questions and objectives: These are derived from the statement of the
purpose of the evaluation. Examples of evaluation questions are:
Are the planned activities actually being carried out?
Is the programme achieving its intended objectives?
How does the project compare with alternative strategies for achieving the
same ends?
Examples of evaluation objectives may include to:
Determine the effect of the project on the beneficiaries
Determine the extent to which project objectives have been achieved
Assess the impact of the project
iv) Significance of the evaluation: The evaluator should first identify the decision areas
of the project and explain how the results of the evaluation will guide the
effectiveness of the project. The evaluator also needs to identify the key project
stakeholders and explain how the evaluation findings will be important to them.
v) Limitations and delimitations of the evaluation: A single evaluation may not cover
all the aspects of interest. It can be limited to certain types of projects, geographical
areas etc. These should be stated and justified. This section describes the limits, or
scope, of the study. The evaluator should give reasons for not extending
beyond the determined scope.
vi) Assumptions of the evaluation: An evaluator should indicate those factors that
operate in the study which he/she assumes will not affect the
evaluation results. Such factors should be those the evaluator can do nothing about
through sampling or study design, and therefore has to accept and live with. For
example, an evaluator will assume that the evaluation participants will give honest
and frank answers.
vii) Definition of significant terms: This should be restricted to those terms
which may convey different meanings to different people. Such definitions are
sometimes called operational definitions. There are other terms which are not
observable but which can only be inferred from subject behaviour in a
specific situation; such terms are called constructs.
viii) Evaluation model: The evaluator needs to look at the entire evaluation and decide
on the type of evaluation model that fits it (see lecture four).
ix) Conceptual framework: The evaluator needs to explain a framework that shows the
interrelationship of the independent and dependent variables under evaluation. This
helps in focusing the evaluation.
x) Outline of the final evaluation report: All the chapters of the evaluation
report should be highlighted.
Step Three
This is referred to as chapter two of the proposal. It contains the description of the project being
evaluated. This may include:
A historical summary of the highlights of the project or group of activities to be
evaluated
The date the project was started
The philosophy behind the programme
The types of beneficiaries for whom the project is designed
Project outcomes
Project scheduling
Content
Administrative and management procedures
Step Four
This step entails a review of previous evaluation studies related to the evaluation. The section is
very important because it helps one to understand the various methods of evaluation that were
used elsewhere and the kind of results that were realized.
Step Five
This step entails the methodology that will be used in the evaluation. It constitutes chapter
three, which is concerned with the evaluation design and methodology. It touches on:
Evaluation Design
Target population and sample
Description of the sample
Description of the instruments
Data collection
Data analysis plan
Work Plan
Budget
10.6 Self assessment test
1. Identify various methods of collecting data
2. Explain the characteristics of good data collection instruments
3. State the advantages and limitations of different types of methods and
tools of data collection
4. With relevant examples, explain the steps involved in the preparation of
an M&E proposal
10.7 Summary
It is important to note that when conducting an evaluation, the methods
and tools that one chooses define the quality of the results to be achieved.
This lecture therefore discusses two approaches that a project manager can
employ in designing a monitoring and evaluation plan: qualitative and
quantitative techniques. The lecture also provides an in-depth look at the
subcomponents of each of the above methodologies and the tools that are
used in the evaluation of projects. Finally, the lecture examines the various
components of a project evaluation proposal and identifies the key
monitoring and evaluation methods and tools.