M&E Short Note

Monitoring and Evaluation (M&E)

M&E is a process that helps improve performance and achieve results; its goal is to improve the current and future management of outputs, outcomes and impacts. Monitoring and evaluation enable you to assess the quality and impact of your work.
Monitoring:

 Monitoring is a continuous process of collecting and analyzing information on key indicators, and
comparing actual results to expected results. Monitoring improves efficiency within a project.
 Monitoring is the collection and analysis of information about a project or programme,
undertaken while the project/programme is ongoing.
 Monitoring is a continuous process that starts alongside the implementation of the project and helps management keep the project on track towards its goal.
Types of Monitoring:
Generally, there are two types of monitoring:

 Process monitoring: monitoring activities to improve performance.

 Output monitoring: monitoring outputs to ensure targets are achieved.

Importance of Monitoring:
 Monitoring makes sure that we are achieving the targets as per the work plan. If we do not monitor, we cannot collect data; if we cannot collect data, we cannot report figures and facts to donors; and if we do not monitor, we cannot conduct an evaluation.

Evaluation:
 Periodic review of a project with feedback and recommendations for the next project cycle.
 Evaluation is a comparison of objectives with accomplishments and how the objectives were
achieved.
 A management tool for assessing the impact, results and change brought about by programme/project activities.
 Evaluation is the periodic, retrospective assessment of an organization, project or programme that
might be conducted internally or by external independent evaluators.
 Evaluation is the comparison of actual project impacts against the agreed strategic plans: a comparison of what we have achieved with what we intended to achieve.

Different types of Evaluation:


Formative Evaluation: This is generally conducted before the project implementation phase.
Process Evaluation: It is conducted as soon as the project implementation stage begins.
Outcome Evaluation: This type of evaluation is conducted once the project activities have been
implemented. It measures the immediate effects or outcomes of the activities in the target population and
helps to make improvements to increase the effectiveness of the project.
Impact Evaluation: Impact evaluation assesses the long-term impact or behavioral changes that result from a project intervention.
Real Time Evaluation: Real-time evaluation is undertaken during the project implementation phase. It is
often conducted during emergency scenarios, where immediate feedback for modifications is required to
improve ongoing implementation. The emphasis is on immediate lesson learning over impact evaluation
or accountability.

Objectives of M&E
 Accountability: demonstrating to donors and stakeholders that resources are used as agreed and results are reported honestly.
 Relevance: the extent to which the intervention's objectives respond to beneficiaries' needs.
 Efficiency: how well resources (inputs) are converted into results.
 Effectiveness: the extent to which the intervention's objectives are achieved.
 Impact: the long-term effects, positive or negative, produced by the intervention.
 Sustainability: the extent to which benefits continue after the intervention ends.

Differences between Monitoring & Evaluation


S.No | Monitoring | Evaluation
1 | Monitoring is a continuous/routine process | Evaluation is a periodic process
2 | Monitoring is related to observation | Evaluation is related to judgment
3 | Monitoring occurs at the operational level | Evaluation occurs at the business level
4 | Monitoring is a short-term (quicker) process | Evaluation is a long-term (lengthy) process
5 | Focuses on improving efficiency | Focuses on improving effectiveness
6 | Conducted by an internal party | Usually conducted by an external party, sometimes by an internal party

Importance of Evaluation:
Evaluation is important because it provides management with conclusions about a project. The purpose of an evaluation is to establish whether the project achieved the desired results (outcomes and impacts).

Purpose of Monitoring and Evaluation:


• Ensure informed decision-making;
• Enhance organizational and development learning;
• Assist in policy development and improvement;
• Provide mechanisms for accountability;
• Promote partnerships with, and knowledge transfer to, key stakeholders;
• Build capacity in M&E tools and techniques.

Why we do Monitoring and Evaluation:


• Review progress
• Identify problems in planning and/or implementation
• Help the program achieve its overall goals

Components of M&E:
• M&E Indicators
• M&E Logical Framework
• M&E System
• The analysis of the data
• Use of the information

M&E Tools
M&E is a combined practice within a project that helps us manage activities, inputs, outputs, outcomes and impact through the tools below.
• M&E Logical framework
• M&E plan
• M&E system
• M&E Indicators
• System data
• Survey
• Statistics
• Sample size

Inputs, Outputs, Outcomes and Impacts:


• Inputs --- resources such as finance (money), people, machines, materials and equipment used to carry out activities.
• Processes --- activities, or combinations of resources working together, that produce outputs. Examples include trainings, events, campaigns, workshops, seminars, building hospitals, conducting awareness sessions for community people, carrying out advocacy activities, holding literacy classes, distributing condoms to families for family planning, installing road signs to help drivers drive safely, and distributing agricultural products to farmers.
• Outputs --- the immediate tangible or intangible results of a project: for example, how many farmers received agricultural equipment, how many people attended awareness campaigns, how many people were trained, the number of condoms distributed, or the number of people who attended workshops.
• Outcomes --- the mid-term results of an intervention/project: for example, the percentage increase in participants' knowledge, the percentage decrease in maternal mortality rates, the percentage of rural women who received legal advice, or the percentage of families that received family planning trainings.
• Impact --- the long-term result of an intervention/project: for example, improved access to health care, improved access to justice, a greater understanding among people of their social rights, increased access to legal courts, or improved access to education.
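As a minimal illustration of how an outcome indicator like those above can be calculated (all numbers below are hypothetical, not from this note), the "percentage increase in participants' knowledge" can be computed from pre- and post-training test scores:

```python
# Hypothetical pre- and post-training knowledge scores for five participants.
pre_scores = [45, 52, 38, 60, 41]
post_scores = [68, 75, 59, 82, 66]

pre_avg = sum(pre_scores) / len(pre_scores)     # 47.2
post_avg = sum(post_scores) / len(post_scores)  # 70.0

# Outcome indicator: percentage increase in average knowledge score.
pct_increase = (post_avg - pre_avg) / pre_avg * 100
print(f"Knowledge increased by {pct_increase:.1f}%")  # ~48.3%
```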

Indicators:
 Indicators are variables used to measure the progress of activities, inputs, outputs, outcomes and impact.
 Indicators are categorized by what they measure and are used to gauge the quality and quantity of project progress.
 Indicators measure the achievement of the project activities.
 Indicators should be SMART.
Characteristics of a good indicator:
A good indicator meets the SMART criteria: Specific, Measurable, Achievable/Attainable, Relevant and Time-bound. A simple sketch of tracking such an indicator follows.
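As a minimal sketch (the indicator name, target and actual values are hypothetical), an indicator can be stored as a small record with a target and an actual value, from which progress towards the target is calculated:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One M&E indicator with a planned target and the value achieved so far."""
    name: str
    target: float
    actual: float

    def progress_pct(self) -> float:
        """Achievement expressed as a percentage of the target."""
        return self.actual / self.target * 100

# Hypothetical output indicator with made-up numbers.
trained = Indicator(name="Number of farmers trained", target=500, actual=375)
print(f"{trained.name}: {trained.progress_pct():.0f}% of target")  # 75% of target
```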

Different types of Indicators


Quantitative indicators (input and output)
Qualitative indicators (outcome and impact)
• Input indicator: Refers to the resources needed to implement an activity or intervention (money, staff, materials).
• Output indicator: Measures the quantity, quality and timeliness of the products (goods or services) that result from an activity, project or program.
• Outcome indicator: A specific, observable and measurable characteristic or change that represents achievement of the outcome.
• Impact indicator: Measures changes in awareness, knowledge and skills, and their long-term effects.

Importance of Indicators:
If we do not select our indicators in the planning phase of the project, we cannot measure progress,
change and results of an intervention or project.

M&E System:
 An M&E system is the overall structure or arrangement of processes, methods, tools and resources used to monitor and evaluate the progress and outcomes of a project, program or organization. It includes components such as data collection, analysis, reporting and decision-making mechanisms.

M&E Plan:
• M&E Plan is a table that builds upon a project/program's logframe to detail key M&E requirements for each indicator and assumption. It allows program staff at the field level to track progress towards specific targets, for better transparency and accountability throughout the project life cycle.
 An M&E plan is a set of documents that states which information you will collect, how it will be
collected, and what you will do with the information.
M&E Framework:
• M&E Framework is a table that describes the indicators that are used to measure whether the
program is a success.

What is Data?
• Data: Raw facts and figures that have not yet been shaped into a meaningful form.
• Information: Data that have been interpreted and shaped into a form that is meaningful and useful to human beings.
• Processing: The conversion, manipulation and analysis of raw input into a form that is more meaningful to human beings.

What is Data Collection?


Before we define collection, it is essential to ask, "What is data?" The abridged answer: data is various kinds of information formatted in a particular way. Data collection, then, is the process of gathering, measuring, and analyzing accurate data from a variety of relevant sources to find answers to research problems, answer questions, evaluate outcomes, and forecast trends and probabilities.

Data Collection Methods


• Survey
• Focus group discussion (FGD)
• Interview
• Observation
• Assessment
• Phone call
• Internet
• Digital system (MoDa, Kobo Collect)

Data collection Tools


• Checklist
• Questionnaires
• Flip chart
• Media
• Hard copy of document
• Digital software

Types of data
Qualitative data: Qualitative data describes qualities or characteristics. It is collected using
questionnaires, interviews, or observation, and frequently appears in narrative form.
Quantitative data: data that can be counted or measured in numerical values.
Primary data: data that we collect ourselves.
Secondary data: data that someone else has collected and that is available in libraries, books, or on the internet.
Data Analysis and Cleaning
Data cleaning is the process of preparing data for analysis by removing or modifying data that is
incorrect, incomplete, irrelevant, duplicated, or improperly formatted.

• Step 1: Remove duplicate or irrelevant observations from your dataset.
• Step 2: Fix structural errors.
• Step 3: Filter unwanted outliers.
• Step 4: Handle missing data.
• Step 5: Validate and QA.
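A minimal sketch of these five steps using pandas (the file name, column names and plausibility bounds are hypothetical, purely for illustration):

```python
import pandas as pd

# Hypothetical survey dataset; file and column names are made up.
df = pd.read_csv("survey_responses.csv")

# Step 1: remove duplicate or irrelevant observations.
df = df.drop_duplicates()
df = df[df["district"].isin(["District A", "District B"])]  # keep target districts

# Step 2: fix structural errors, e.g. inconsistent category labels.
df["sex"] = df["sex"].str.strip().str.lower().replace({"f": "female", "m": "male"})

# Step 3: filter unwanted outliers using plausibility bounds.
df = df[df["age"].between(15, 99)]

# Step 4: handle missing data (here: drop rows missing the key variable).
df = df.dropna(subset=["knowledge_score"])

# Step 5: validate and QA.
assert df["age"].between(15, 99).all()
print(df.describe())
```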

Data analysis tools


 SPSS
 Excel
 Word
 Statistics
 PowerPoint
 Kobo Collect
 MoDa
 Triangulation
What is Survey?
A survey is a method of gathering and compiling information from a group of people, known as the sample, used by organizations or institutions to gain knowledge. The information or opinions collected from the sample are usually generalized to what the larger population thinks.
Types of Survey:
1. Online Survey
2. Paper Survey
3. Telephonic Survey
4. Face-to-face interview
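This note also lists "sample size" among the M&E tools. One common way to choose a survey sample size (standard practice, not specified in this note) is Cochran's formula for proportions; a small sketch with conventional default values:

```python
import math

def cochran_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Cochran's sample-size formula: n = z^2 * p * (1 - p) / e^2.

    z: z-score for the desired confidence level (1.96 for 95%)
    p: expected proportion (0.5 is the most conservative choice)
    e: desired margin of error
    """
    return math.ceil(z**2 * p * (1 - p) / e**2)

# 385 respondents for 95% confidence and a ±5% margin of error.
print(cochran_sample_size())
```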
What is Baseline study?
A baseline study is an initial assessment conducted before the implementation of an
intervention or project to gather data on the current situation, attitudes, and behaviors of the
target population. The data collected serves as a reference point against which progress can be
measured and evaluated throughout the project cycle. The baseline study provides a clear
understanding of the current situation, identifies the gaps, and provides valuable insights that
guide the development of appropriate interventions and the establishment of indicators to
track progress towards achieving desired outcomes.
What Is Data Quality?
Data quality refers to the accuracy or worth of the information collected and emphasizes the high
standards required of data capture, verification, and analysis, such that they would meet the
requirements of an internal or external data quality audit.

Eight key criteria for assessing the quality of data


1. Validity: The first data quality criterion is validity. For a data set to be valid, we need to ensure
that data adequately represent performance. The key question on validity is whether the data
actually represent what they are supposed to represent.
2. Reliability: The second data quality criterion is reliability. For a data set to be reliable, data
collection processes must be stable and consistent over time, with reliable internal quality
controls in place and data procedures handled in a transparent manner.
3. Integrity: The third data quality criterion is integrity. For a data set to have integrity, the data
must be accurate and free of error introduced by either human or technological means, either
willfully or unconsciously. Specifically, we need to make sure no manipulation or bias has been
introduced into the data set.
4. Precision: The fourth data quality criterion is precision. According to USAID TIPS 122, “Precise
data have a sufficient level of detail to present a fair picture of performance and enable
management decision making.” For a data set to be “precise,” relevant data should also be
collected by the designated disaggregation characteristics, such as sex, age, and geographic
location.
5. Timeliness: A fifth data quality criterion is timeliness. To be considered “timely,” the data must
be collected frequently enough and must be current.
6. Completeness: How complete is the data set? Completeness refers to the degree to which all
the necessary steps in data collection, data entry, data cleaning, and data analysis have been
carried through. In addition, no data are missing and no responses are incomplete, uncollected,
or, because of other data quality issues, unusable.
7. Confidentiality: The Global Fund to Fight AIDS, Tuberculosis and Malaria—an international
financing organization—uses a seventh criterion in addition to the six discussed above:
confidentiality. In assessing your data against this standard, consider whether you and your
team have in place and utilize processes and systems ensuring that data are collected, stored,
and reported in a way that protects respondents’ privacy. Privacy is especially important when
you are collecting personally identifiable, sensitive, or geographic data—as would be the
situation, for example, if you were pinpointing on a map the homes of all those living with HIV in
a specified area.
8. Ethics: Ethics are a system of moral ideas and rules about our conduct that reflect international
standards and the values of the culture we work in and of the communities we serve. Data
ethics are the rules or standards governing the conduct of a person collecting, collating,
reporting on, or utilizing data, and represent our standard of what’s “right.”

What is Data Management?


Managing data means thinking about how data cycle through the organization: controlling how the data are collected and how the raw data are assembled and analyzed; determining the most appropriate presentation formats for the data; and ensuring data use by decision makers. Six key stages make up this data management cycle:

1. Data source

2. Data collection

3. Data collation

4. Data analysis

5. Data reporting

6. Data usage

Report: A report is a document that presents information in an organized format for a specific
audience and purpose.
Characteristics of a good report:
• Precision

• Accuracy of Facts

• Relevancy

• Simple Language

• Conciseness

• Grammatical accuracy

• Unbiased Recommendation

• Clarity

• Presentation

• Complete Information

Contents of a Report:

• Contents page

• Executive summary

• Introduction

• Findings

• Conclusions

• Recommendations

• Appendices
