Monitoring

Monitoring and evaluation (M&E) provides decision makers with the ability to draw on causal linkages between the choice of policy priorities, resourcing, programmes, the services actually delivered and the ultimate impact on communities. M&E answers the “so what” question, thus addressing the accountability concerns of stakeholders and giving public sector managers information on progress toward achieving stated targets and goals. It also provides substantial evidence as the basis for any necessary corrections in policies, programmes or projects. Whilst the primary purpose of monitoring is to determine whether what is taking place is as planned, evaluation is intended to identify the causal relationship between an intervention, such as a public policy, and its results.

The main aim is to ensure that performance improves and the desired results are achieved, by measuring and assessing performance in order to manage more effectively the outcomes and associated outputs known as development results.

Of particular importance to the state is the cost of repairing, rehabilitating and reconstructing damaged infrastructure after each disaster. For this reason, it is important to develop an M&E system that links outcome information appropriately to the budget allocation process, so as to illustrate the benefits that arise from expenditure, especially expenditure on post-disaster rehabilitation and disaster risk reduction.
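To make this linkage concrete, the minimal Python sketch below ties a hypothetical budget line to an outcome indicator and computes the cost per unit of outcome achieved. The record structures, field names and figures are illustrative assumptions, not drawn from the source.

```python
# Illustrative sketch only: hypothetical record structures linking a budget
# line to an outcome indicator, so that spending on post-disaster
# rehabilitation can be expressed as cost per unit of outcome achieved.
from dataclasses import dataclass


@dataclass
class BudgetLine:
    code: str          # programme budget code (hypothetical)
    description: str
    allocated: float   # amount allocated for the financial year
    spent: float       # amount actually spent


@dataclass
class OutcomeIndicator:
    budget_code: str   # links the indicator back to a budget line
    name: str
    baseline: float
    target: float
    actual: float


def cost_per_unit_of_outcome(line: BudgetLine, ind: OutcomeIndicator) -> float:
    """Expenditure per unit of outcome change: one simple way to illustrate
    the benefit arising from the expenditure."""
    change = ind.actual - ind.baseline
    if change <= 0:
        raise ValueError("no measurable outcome improvement to cost against")
    return line.spent / change


# Worked example with invented figures.
dike = BudgetLine("DRR-01", "Dike rehabilitation", allocated=5_000_000, spent=4_600_000)
protected = OutcomeIndicator("DRR-01", "Households protected from flooding",
                             baseline=1_200, target=3_000, actual=2_800)
print(f"Cost per additional household protected: {cost_per_unit_of_outcome(dike, protected):,.0f}")
```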

Monitoring is the systematic collection and analysis of information as a project/programme progresses. It is aimed at improving the efficiency and effectiveness of a project or organisation. It is based on targets set and activities planned during the planning phases of work. It helps to keep the work on track, and can let management know when things are going wrong.
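As a concrete illustration of this idea, the short Python sketch below compares actual progress against the targets set at planning time and flags off-track activities. The activity names, figures and the 80% threshold are invented for the example.

```python
# Illustrative sketch: compare actual progress against planned targets and
# flag off-track activities so management knows when things are going wrong.
# Activity names, figures and the 80% threshold are invented for the example.
planned = {  # activity -> planned cumulative output at this reporting date
    "evacuation drills held": 12,
    "early-warning sirens installed": 8,
    "volunteers trained": 150,
}
actual = {
    "evacuation drills held": 11,
    "early-warning sirens installed": 4,
    "volunteers trained": 160,
}

OFF_TRACK_THRESHOLD = 0.80  # flag anything below 80% of plan

for activity, target in planned.items():
    achieved = actual.get(activity, 0)
    ratio = achieved / target
    status = "on track" if ratio >= OFF_TRACK_THRESHOLD else "OFF TRACK"
    print(f"{activity}: {achieved}/{target} ({ratio:.0%}) - {status}")
```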

Evaluation is the comparison of actual project impacts against the agreed strategic plans. It looks at
what you set out to do, at what you have accomplished, and how you accomplished it.

In order to bring context to the development of the M&E framework, the Monitoring and Evaluation Chief Directorate embarked on a process of conducting a Rapid Assessment of how disaster management is being coordinated at both national and provincial levels, as well as in terms of the legislative mandate.

Some of the challenges identified with regard to monitoring and evaluation include the following:

Lack of appropriate institutional arrangements for M&E, leading to confusion over who implements and who monitors and reports;

Lack of a central database/data warehouse to store and retrieve information as and when needed.

These challenges can only be overcome through the institutionalisation and coordination of an integrated M&E system.
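By way of illustration only, the following Python sketch shows one minimal form such a central store could take, using SQLite. The schema records explicitly who implements and who monitors each project, which speaks to the institutional-arrangement confusion noted above; all table names, columns and sample records are hypothetical assumptions.

```python
# Hypothetical sketch of a minimal central M&E data store, using SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")  # a real system would use a shared server
conn.executescript("""
CREATE TABLE project (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    implementer TEXT NOT NULL,   -- who implements
    monitor TEXT NOT NULL        -- who monitors and reports
);
CREATE TABLE indicator_report (
    id INTEGER PRIMARY KEY,
    project_id INTEGER NOT NULL REFERENCES project(id),
    indicator TEXT NOT NULL,
    period TEXT NOT NULL,        -- e.g. '2024-Q1'
    target REAL,
    actual REAL
);
""")

cur = conn.execute(
    "INSERT INTO project (name, implementer, monitor) VALUES (?, ?, ?)",
    ("Flood early warning", "Provincial DRRM office", "M&E Chief Directorate"))
conn.execute(
    "INSERT INTO indicator_report (project_id, indicator, period, target, actual) "
    "VALUES (?, ?, ?, ?, ?)",
    (cur.lastrowid, "Sirens installed", "2024-Q1", 8, 4))
conn.commit()

# Retrieval "as and when needed": one query returns what each project reported.
for row in conn.execute(
        "SELECT p.name, r.indicator, r.period, r.target, r.actual "
        "FROM indicator_report r JOIN project p ON p.id = r.project_id"):
    print(row)
```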

M&E can provide unique information about the performance of government policies, programs, and projects. It can identify what works, what does not, and the reasons why. The value of M&E does not come simply from conducting M&E or from having such information available; rather, the value comes from using the information to help improve government performance. M&E also enhances transparency and supports accountability relationships by revealing the extent to which the government, and the LDRRMO in particular, has attained its desired objectives.

These uses of M&E place it at the center of sound governance arrangements, as a necessary condition for the effective management of public expenditures, including the LDRRMF. Thus M&E is necessary to achieve evidence-based policy making, evidence-based management, and evidence-based accountability. An emphasis on M&E is one means to achieve a results-oriented and accountable public sector, including a performance culture in the civil service. For this reason M&E should not be viewed as a narrow, technocratic activity.

It is not enough to issue an edict that M&E is important and should be done; this is likely to produce only lip service and is certainly unlikely to produce good-quality monitoring information and evaluations. Such efforts at top-down compliance can, unless accompanied by a range of other actions, easily lead to ritual compliance or even active resistance.

There is frequently a chicken-and-egg problem: there is a lack of government demand for M&E because of a lack of understanding of M&E and what it can provide; there is a lack of understanding because of a lack of experience with it; and there is a lack of experience because of weak demand.

M&E must be planned carefully. Evaluations can take many forms, including real-time evaluations, after-action reviews with communities, internal or self-evaluations by project staff and partners, and formal, externally led evaluations. The purpose and methods of any monitoring exercise, review or evaluation should be clearly defined and agreed. Monitoring and evaluation perform best when integrated into the project cycle, rather than conducted as independent exercises.

18.4 Accountability and participation

It is best to approach M&E as a mutual learning process for all involved, not merely as an information-
gathering exercise. This encourages flexibility, openness and debate. The principles of accountability to
vulnerable people outlined in Chapter 11 are very important here. Communities’ views should be central
to evaluation, and communities (or beneficiaries) should be able to take an active part in the evaluation
process. Participatory evaluation enables the voices of project stakeholders, particularly beneficiary
communities and vulnerable groups, to be heard, draws on their local knowledge, stimulates dialogue
and mutual learning and creates wider ‘ownership’ of the evaluation’s findings. However, many M&E
systems are still top-down, designed to extract information from the field to give to headquarters staff
and donors. Collecting data solely for external use can undermine the participatory process.

Beneficiary participation in M&E can take various forms. In some projects, it may be no more than
providing information to review or evaluation teams, but this is too limiting. Beneficiaries should be
involved in planning the assessment (including selecting indicators), providing information on what was
and was not achieved, analysing and verifying the results and making decisions about future activities.
Findings should always be fed back to communities. The needs of communities in this regard may differ
from those of outside agencies, and the targets, indicators and priorities developed by communities may
differ considerably from those of agency staff. Adopting participatory approaches does not prevent the
use of more formal data collection methods: these can complement or validate information gathered in
a participatory way. Methods should be selected according to their usefulness in helping to understand
impact.

Since it is never possible to involve everyone, careful thought must be given to ensuring that those who
are consulted are representative of the range of groups concerned, paying particular attention to the
most marginalised as well as people who may have dropped out of the project.

Beneficiaries are one group of stakeholders. Project staff are another. NGOs and other local institutions,
local and national government officials, and, where appropriate, international donor agencies and other
kinds of organisation (e.g. the private sector) should be consulted if they have been involved in the
project, are affected by it or have some influence on its outcome. It can be difficult to reconcile the
views of such diverse groups. This makes it all the more important to be clear from the start about what
M&E is designed to look at. Meetings should be held to discuss and explain this. Where stakeholders
have different priorities and perspectives, this should be made explicit at the start to avoid
misunderstandings later.

Involvement of a range of people makes it more likely that an evaluation’s lessons will be shared and its
findings acted upon.

M&E is of little value if it does not lead to improvements in agencies’ work to reduce risk. M&E reports
are potentially very useful documents. They enable practical lessons to be learned and applied within
and across programmes and regions. They feed into strategic planning by providing a basis for discussion
about better practice and policy change. They also contribute to institutional memory, which is
important in organisations that suffer from rapid staff turnover. Good-quality presentation is essential
here: no matter how good the evidence and analysis they contain, reports will not inform and influence
if they are not well written and presented. Evaluation should be embedded within an organisation’s
systems and regular practice to ensure that learning takes place. In reality, many agencies are poor at
absorbing the lessons from evaluations, with the result that the same problems recur. Too often, the
review or evaluation report is filed away to be acted upon later, but then forgotten amidst the pressure
of work. Many organisations have poor information storage and retrieval systems, making it very
difficult to find documents, and feedback mechanisms are weak. Few staff have sufficient time to reflect
on the lessons from individual projects, and fewer still are able to consider what can be learnt from
several projects and countries. Overwork and pressures of work, which are common among staff in DRR
agencies, prevent clear thinking and innovation. Knowledge management and learning systems need to
be given higher priority and more resources in most organisations. Plans for sharing and using results
and findings, in the field and across the organisation, should be built into the evaluation process from
the start. These should be based on consultations with potential users of the evaluations.

Transparency in M&E is a key element in making operational agencies more accountable. Evaluation
processes should be as open as possible, and their results should be made widely available, particularly
to project stakeholders (who should also be consulted before reports are submitted, for clarification and
confirmation). However, there is still much to be done here. The widespread failure to share and publish
DRR evaluations means that practitioners are unable to learn lessons from each other and so are
frequently reinventing the wheel. It also runs counter to the principle of accountability that agencies
claim to follow. There is a particular reluctance to document mistakes and share their lessons. In some
cases, joint reviews by agencies could be carried out to encourage mutual learning, knowledge sharing
and transparency. Participatory M&E creates a sense of ‘ownership’ of the final product among
stakeholders, which greatly increases the likelihood that lessons will be noted and acted upon.

The progress of implementation of the Plan of Action for DRRM in Agriculture should be regularly monitored. Community M&E is monitoring and evaluation that significantly involves the community; it is sometimes also referred to as participatory M&E.

Monitoring and evaluation are important in guiding the local authorities and the communities on the successful implementation of a project or activity on disaster risk reduction. Local authorities and community members can jointly agree on the purpose and methodology for the monitoring of their activities. It is important that representatives of the local authorities, the community and any other local organizations (e.g. NGOs, mass unions) participate in the process.

Monitoring is the review and overseeing by stakeholders and management of the implementation of an activity, to ensure that input deliveries, work schedules, targets and outputs are achieved according to the plan. Through monitoring we get timely, accurate and complete information on project effectiveness, and we learn whether the activities are implemented on time and in the right manner as planned. If the implementation is not going according to plan, the local authorities and community members can decide to take appropriate action to achieve the desired results.

Evaluation helps the local authorities and the community members to assess the achievements, results and effects of a disaster risk reduction project or activity. Evaluation is usually done after the completion of an activity or project, although evaluations can also be done during implementation itself. The purpose is to find out whether the activity or project has been successful in achieving its disaster risk reduction objectives.

Evaluation results will inform the local authorities and the community members about the effects of the risk reduction activities on reducing the vulnerability of the target groups. If vulnerability is not significantly reduced, the reasons for this should be analysed. Evaluation will also help them to learn which strategies were successful, so that good practices can be continued in future activities and promoted to other areas. On the basis of evaluation and analysis, the local authorities and the communities can identify lessons to improve their future disaster risk reduction activities.
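As a hedged illustration of this evaluation step, the Python sketch below compares baseline and end-of-project vulnerability indicators for the target groups and checks whether the reduction clears a chosen threshold. The indicator names, figures and the 20% threshold are assumptions made for the example.

```python
# Illustrative sketch of the evaluation step: compare baseline and end-of-
# project vulnerability indicators and check whether the reduction clears a
# chosen threshold. All names, figures and the threshold are assumptions.
baseline = {
    "households in flood-prone housing": 420,
    "households without an evacuation plan": 510,
}
endline = {
    "households in flood-prone housing": 300,
    "households without an evacuation plan": 480,
}

SIGNIFICANT_REDUCTION = 0.20  # require at least a 20% drop

for indicator, before in baseline.items():
    after = endline[indicator]
    reduction = (before - after) / before
    verdict = ("significantly reduced" if reduction >= SIGNIFICANT_REDUCTION
               else "not significantly reduced - analyse the reasons")
    print(f"{indicator}: {before} -> {after} ({reduction:.0%}), {verdict}")
```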




Lessons from post-disaster reviews should be reflected in policy. The distinction made in the United Kingdom between “lessons identified” and “lessons learned” is meaningful in this regard, because in some cases, usually due to resource limitations, lessons are dismissed while vulnerability remains. Implementing policies based on lessons should be the norm for avoiding and mitigating recurrent losses.

This means that people are at the heart of decision-making and implementation of disaster risk management activities. The community-based disaster risk management (CBDRM) approach is people- and development-oriented. It empowers people to address the root causes of their vulnerabilities. A key aspect of community involvement is the sustainability of community-level initiatives for disaster reduction: unless disaster risk management efforts are sustainable at the individual and community levels, it will be difficult to reduce vulnerability and losses. It is therefore important to involve people in decision-making on the policies and strategies to be followed for their development in the community.

An effective feedback mechanism should be in place so that the results of this regular monitoring by the Department of Health regional office can be used for enhanced information dissemination at the local level.
