


Learning from Monitoring & Evaluation – a blueprint for an adaptive organisation
Jake Morris and Anna Lawrence
Social & Economic Research Group, Forest Research

Aim and structure


Learning is an essential characteristic of an adaptive organisation. Monitoring and
evaluation (M&E) provide important data and experiences that can contribute to such
learning.

In this paper we set out principles for learning from social forestry M&E within the
Forestry Commission (FC) and its partners. Smith (2010) notes that writing on learning
organisations is highly idealised and that there are few actual examples of organisations
that have managed to put principles into action. This is not at odds with our objective to
set out the principles of learning from M&E, and to investigate how they might be
realised within the FC. To that end, we also outline a programme of research to test and
develop these principles, the outcome of which will be guidance on M&E design and
implementation for enhanced learning outcomes.

Learning from M&E will be dependent on activities and structures within three inter-
related domains that are addressed in separate sections of this paper. In each section
we present a summary review of key literature to present an idealised vision of practices
and organisational structures for the promotion of learning outcomes:

1. The ‘intra- and inter-organisational domain’ refers to the structures, knowledge cultures and communicative practices within and between organisations that can promote learning outcomes.

2. The ‘M&E domain’ refers to the overall organisation, analytical orientation (aims
and objectives), and to the data gathering tools (indicators) of a given M&E
project that can promote learning outcomes.

3. The ‘research domain’ refers to the principles of participatory evaluation and how
they may be operationalised to help realise the learning potential within domains
1, 2 and 3.

Finally, in Section 4 we set out a programme of participatory evaluation research within ‘live’ M&E projects to test and develop the principles set out in previous sections.


Background
This paper is an output of the research project ‘Learning from Monitoring & Evaluation’
(Study No. FR09031), which aims to inform best practice in M&E, and to develop and
test models to improve the use of M&E data within the FC so that the organisation and
its partners can become more responsive, adaptive and, ultimately, sustainable. The
project started from the recognition that the FC and its partners could make better use
of data that are gathered as part of evaluations of social forestry policy, programme and
project delivery, and which have the potential to inform processes of decision-making,
planning and design.

It is commonly said of evaluation reports that they have merely ‘ticked the box’ of
fulfilling funders’ requirements, or are ‘gathering dust on the shelf’. A study of
community forestry in Great Britain noted that although initiatives have been evaluated,
they ‘are often completed as a formality, or as an outward looking defence of public
spending, and do not feed into internal learning processes’ (Lawrence et al. 2009). As
such, the FC, like many organisations, is missing important opportunities to learn from
experience, communicate successes, and develop organisationally.

This focus on learning and adaptation mirrors an important shift within the general field
of monitoring and evaluation, originally conceived within the international development
sector as a form of ‘evaluation for accountability’ or ‘summative evaluation’, whereby a
donor or sponsor is given the necessary information to demonstrate that a funded
intervention has delivered against its stated aims and objectives. The last 20 years have seen a gradual shift in practical and analytical emphasis to respond to the needs of development funders, planners and practitioners to learn from previous experience. Central to this development has been the increasingly strong emphasis placed on the translation of new knowledge into better policy and practice. This shift in emphasis has given rise to ‘evaluation for learning’, also referred to as ‘formative evaluation’ [1].

Evidence-based practice
A variety of evaluation approaches have emerged that are aimed at learning and
informing improvements to the practical dimensions of project and programme delivery.
Patton (2002: 179) catalogues and references some key approaches, such as ‘action
learning’ (Pedler 1991), ‘reflective practice’ (Tremmel 1993), ‘action research’ (Whyte
1989), internal evaluation (Sonnichsen 2000), organisational development (Patton
1999), and systematic praxis (Bawden & Packham 1998). With these approaches, the
primary purpose of the evaluation is to yield insights that change practice and enable
programme participants to think more systematically and reflexively about what they’re doing.

[1] It should be stressed that evaluation for accountability and evaluation for learning are not mutually exclusive – the need to provide summative judgments about whether a programme was effective or not can, and often does, sit perfectly comfortably alongside the need for insights that can improve programme effectiveness.

Evidence-based policy
The concept of evidence-based policy has also gained currency in recent years, as has
the strategic application of monitoring and evaluation within evidence-based policy
making. In 2003 (updated in 2006) the Government Social Research Unit (GSR)
published The Magenta Book - a set of guidance notes for policy evaluation
commissioners, policy evaluators and analysts. A Defra-commissioned review of the role
of monitoring and evaluation in policy-making across Whitehall highlighted the potential
learning gains where evidence can help identify what has worked in previous policies,
can improve policy / programme implementation, and can identify and measure the
impact of a policy / programme (Yaron 2006).

Defining the terms

Learning
Here we are discussing the concept of learning in a specific context: the improved
effectiveness of people, projects and organisations, through conscious processing of
information gained from experience. This conscious processing of information and
experience can be structured through M&E.

Learning occurs on at least two levels, giving rise to the concepts of ‘single-loop learning’
and ‘double-loop learning’ developed in the field of organisational learning (Argyris and
Schön 1978, Bateson 1972).

Single-loop learning leads actors to modify their behaviour to adjust to goals within the
status quo.
Double-loop learning challenges mental models and the policies based on those, and involves learning from others as well as from one's own experience [2].

There are two main routes to learning in the sense being discussed here:
• reflexivity or introspection – conscious reflection on one’s own experience
• exchange – sharing experiences amongst different stakeholders.

[2] A third, more profound level of learning and change is sometimes mentioned as triple-loop learning: when a complete transformation in world view takes place. This kind of learning is not likely to take place within organisational structures but is the kind that can inspire new social movements.


Reflexive learning is built into some kinds of research. The best-known example is action
research where the conscious processing of research experiences becomes the data
which informs behavioural change of the participants in the research.

Learning through exchange is sometimes referred to as ‘interactive’ or ‘social’ learning (Webler et al. 1995), although this term also refers to behaviour learned through social interaction. It can also be conducted within the framework of collaborative learning, which can take the form of teams working to build new understanding from experiences and data.

Both reflexive and interactive learning can be enhanced by participatory research, or participatory monitoring and evaluation. These forms of participation include a range of
different stakeholders in planning the evaluation, gathering and interpreting data, and
drawing conclusions. If the participants include those most affected by the research topic
(sometimes called the ‘evaluand’), they can be highly motivated to implement the
findings, or to pressurise those in positions of influence to implement the findings.

Monitoring & Evaluation


Although the term ‘Monitoring & Evaluation’ tends to get run together as if it refers to a
single research activity, it actually refers to two distinct, albeit closely related, sets of
data gathering practices. The distinction between monitoring and evaluation is primarily
one of analytical depth:

‘Whereas monitoring may be nothing more than a simple recording of activities and
results against plans and budgets, evaluation probes deeper. Although monitoring
signals failures to reach targets and other problems to be tackled along the way, it can
usually not explain why a particular problem has arisen, or why a particular outcome has
occurred or failed to occur’ (Molund & Schill 2007, emphasis added).

The following definitions of Monitoring and Evaluation help both to reinforce and to
carefully demarcate the distinction outlined above:

Monitoring:

‘... the periodic oversight of the implementation of an activity which seeks to establish
the extent to which input deliveries, work schedules, other required actions and targeted
outputs are proceeding according to plan, so that timely action can be taken to correct
deficiencies detected. Monitoring is also useful for the systematic checking on a condition
or set of conditions, such as following the situation of women and children’ (UNICEF,
undated: 2).


‘... a continuing function that uses the systematic collection of data on specified
indicators to inform management and the main stakeholders of an ongoing [...]
operation of the extent of progress and achievement of results in the use of allocated
funds’ (IFRC, 2002: 1-5).

‘... the continuous follow-up of activities and results in relation to pre-set targets and
objectives’ (Molund & Schill, 2007: 12).

Evaluation:

‘... the systematic collection of information about the activities, characteristics, and
outcomes of programs to make judgments about the program, improve program
effectiveness, and/or inform decisions about future programming’ (Patton, 2002: 10)

‘... a process which attempts to determine as systematically and objectively as possible the relevance, effectiveness, efficiency and impact of activities in the light of
specified objectives. It [evaluation] is a learning and action-oriented management tool
and organizational process for improving both current activities and future planning,
programming and decision-making’ (UNICEF, ibid: 2).

‘... the systematic and objective assessment of an on-going or completed operation, programme or policy, its design, implementation and results. The aim is to determine
the relevance and fulfilment of objectives, as well as efficiency, effectiveness, impact
(overall Goal) and sustainability. An evaluation should provide information that is
credible and useful, enabling the incorporation of lessons into management decision-
making’ (IFRC, ibid: 1-6).

Learning from M&E


Patton (2002) sorts M&E approaches into two broad categories, based on distinctions
between those that merely measure whether goals and objectives have been attained
(summative M&E), and those that enable stakeholders to learn and change in response
to a given programme’s successes and failures (formative M&E). A critical consideration
here is whether a given M&E application moves beyond providing an account of what has
happened, to offering explanations of why certain (positive and negative) outcomes have
arisen. It is this explanatory capacity of M&E that is critical in terms of its contribution to
learning and organisational change, and is a key focus of UK governmental guidance on
the role of evaluation within evidence-based policy making (Government Social Research
Unit, 2006).

This explanatory function of M&E works in two ways:


1. When M&E establishes the causal linkages between elements in the project cycle
(e.g. inputs, activities, outputs, outcomes and impacts), allowing stakeholders to
determine cause and effect and to identify where improvements can be made.


2. When M&E feeds into processes of collective reflection and analysis between
stakeholders to determine the reasons behind success / failure.

Section 1: The intra- and inter-organisational domains


The extent to which an organisation learns from the experience of project, programme
or policy delivery will depend in part on the nature of the structures, knowledge cultures
and communicative practices within that organisation. Key principles relating to these
factors are outlined below.

Many organisations (including the FC), however, work with a wide range of private,
public and third sector organisations to achieve their objectives. A recent review of
partnerships between the FC and third sector organisations, for example, revealed the extent of partnership working, with more than 140 different third sector organisations listed as operating with the Commission in England alone (Ambrose-Oji et al. 2010). In
a delivery context characterised by partnership working, monitoring and evaluation of
delivery will entail reporting performance against aims and objectives shared across
organisations. In these cases the need to learn from M&E will also be shared – learning
can be an inter-, as well as an intra-organisational phenomenon. As such, the principles
outlined below apply as much to individual as to multiple, collaborating organisations.

Here it is helpful to re-emphasise the distinction between learning individuals and learning organisations. As indicated above, learning can be ‘single-loop’ or ‘first order’
learning: it helps individuals within an organisation to do their jobs more effectively
through processes of conscious reflection on their own experiences. However some
commentators have criticised this type of learning, exposing its limited potential to
enable organisational learning which requires communicative exchanges between
individuals so that experiences can be shared. They point out that organisational
learning is inherently interactive (Voss & Kemp 2006, Senge 1990, Smith 2010).

As noted in the introduction, Smith (2010) observes that writing on learning organisations is highly idealised and that there are few actual examples of organisations that have managed to put principles into action. This is not at odds with our objective to set out the principles of learning from M&E, and to investigate how they might be realised within the FC.

Donald Schön was an early advocate of the development of institutions that are ‘learning systems’, by which he meant systems capable of bringing about their own continuing transformation (1973: 28). Much of the subsequent literature on learning organisations has been a development of Schön’s ideas. Three key criteria for learning organisations emerge from this literature: systemic thinking, dialogue and social capital.


Systemic thinking: The conceptual cornerstone of Peter Senge’s (1990) work on learning
organisations, systemic thinking enables an appreciation of the inter-relatedness of sub-
systems, of individual actors and actions, and an accommodation of sophistication and
complexity within strategic thinking and decision-making. The ability to accommodate
complexity is likely to be even more important when policies, programmes and projects
are delivered through partnerships.

Dialogue: Senge also places an emphasis on dialogue, especially as a component of successful team learning. The concept of dialogue goes beyond mere communication - it involves individuals adopting a stance of openness and preparedness to accommodate the views and opinions of others - the concern is not to 'win the argument', but to advance understanding of the situation and, collectively, to draw out the appropriate lessons. As Senge has argued, learning through dialogue entails the capacity of team members to suspend assumptions and enter into a genuine “thinking together” (1990: 10).

Social capital: Cohen and Prusak (2001: 4) refer to social capital as “the stock of active
connections among people: the trust, mutual understanding, and shared values and
behaviours that bind the members of human networks and communities and make
cooperative action possible”. The development of social capital constitutes a valuable
investment for any organisation interested in promoting learning because creating
opportunities for people to connect provides the medium for effective dialogue and
fosters the appropriate conditions for genuine participation in collective thinking, analysis
and decision-making.

As well as revealing the extent of partnership working within the FC, Ambrose-Oji et al.
(2010) also identified some key characteristics of successful partnership working that
overlap substantially with the principles of dialogue and social capital, namely:

Mutual communication - “the ability of the individuals within each organisation in a relationship or partnership being able to discuss, transmit and network information, responses and feedback about day to day situations, the progress of partnership working and other process issues.” (ibid, 45).

Mutual understanding – “Real communication and trust between organisations is supported by a mutual understanding of the professional context and aims of the organisations involved in a relationship.” (ibid, 49).

Mutual trust and respect – “trust and honesty between partners, built through
communication and mutual understanding” (ibid, 50).


Summary - Key principles for organisations:

In order to enhance the potential for learning and adaptation, organisations and the
teams within them should aspire to become learning systems. This will involve
developing the capacity for systemic thinking within strategic planning and decision-
making, fostering open and genuine dialogue between project team members, and
fostering the conditions for cooperative action through the development of social capital.

Section 2: The M&E domain


The extent to which an organisation learns from the data it gathers about project,
programme or policy delivery will depend on the organisation, the overall analytical
orientation, and the data gathering tools (e.g. indicators) of a given M&E project. Key
principles relating to these factors are outlined below.

Broadly speaking, learning outcomes are achieved through M&E in two ways:
• end of cycle learning - learning forms the final connection in the project cycle, where the M&E data is used as the basis for appraisal and renewed planning.
• within-cycle learning - M&E structures the information that feeds into learning processes that form part of the project cycle.

The stages of the project cycle are illustrated in Figure 1. The same cyclical concept is
applied to programme and policy planning and implementation. In all cases, the
‘monitoring’ stage provides an opportunity for learning during implementation, and the
‘evaluation’ stage provides an opportunity for learning after project completion.


Figure 1: the project cycle (various versions are in use, but the basic concept is the same for all)

[The diagram shows a six-stage cycle: needs identification → planning → implementation → monitoring → completion → evaluation, feeding back into needs identification.]

Is learning an intentional outcome?


Opportunities for learning do not necessarily arise of their own accord, however, and
much depends on the way in which M&E is built into the project (or programme, or
policy) process. If monitoring is equated merely with keeping track of spend, there will
be few, if any, opportunities for learning during the project cycle. Where monitoring data
is being collected as a matter of routine, to track change over time independently of any
specific project or intervention, there may be no process associated with the generation,
interpretation and use of the data for learning purposes. If, however, the project cycle
has been designed to include internal feedback loops that allow project activities to be
adjusted in the light of intermediate results, the monitoring stage can provide
opportunities for adaptive management.

Similarly, evaluation does not automatically lead to learning. Evaluation can just be
based on project outputs, research results, or policy implementation targets. The terms
of reference for these may be set at the planning stage and can provide opportunities for
first-order learning - they will inform evaluators whether the project has been completed
successfully and, at best, may also provide some opportunity to learn under which
circumstances this success can be achieved. If, however, the evaluation has been set up as a means of drawing out key lessons that can be fed into the design of subsequent iterations of delivery, then learning outcomes can be realised.

The point here is that learning must be a deliberate objective of the M&E process.
Learning must be ‘written into’ the M&E plan in the form of scheduled checkpoints at
which data is formally analysed and reflected upon.
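As a purely illustrative sketch (this is not part of the project’s method, and the class and field names below are our own invention), the idea of ‘writing learning into’ an M&E plan can be pictured as a small data structure in which reflection checkpoints are scheduled alongside indicators and learning objectives, rather than left to chance:

from dataclasses import dataclass, field
from datetime import date

@dataclass
class Checkpoint:
    """A scheduled point at which M&E data are analysed and reflected upon."""
    when: date
    participants: list[str]          # e.g. project team, partners, community members
    questions: list[str]             # what the group will ask of the data
    decisions: list[str] = field(default_factory=list)  # adjustments agreed afterwards

@dataclass
class MEPlan:
    """Minimal sketch of an M&E plan with learning as an explicit objective."""
    indicators: list[str]
    learning_objectives: list[str]
    checkpoints: list[Checkpoint]

    def next_checkpoint(self, today: date) -> Checkpoint | None:
        """Return the next scheduled reflection checkpoint, if any."""
        upcoming = [c for c in self.checkpoints if c.when >= today]
        return min(upcoming, key=lambda c: c.when) if upcoming else None

# Example: reflection is scheduled up front, not bolted on at the end.
plan = MEPlan(
    indicators=["event attendance", "participant-reported learning"],
    learning_objectives=["explain why activities succeed or fail"],
    checkpoints=[
        Checkpoint(date(2011, 3, 1), ["project team", "partners"],
                   ["Which activities are meeting targets, and why?"]),
        Checkpoint(date(2011, 9, 1), ["project team", "community members"],
                   ["What unexpected outcomes have emerged?"]),
    ],
)
print(plan.next_checkpoint(date(2011, 1, 1)).when)  # -> 2011-03-01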

To achieve learning outcomes M&E also needs to produce a certain kind of data. Again, if
the M&E only produces evidence of delivery, on budget, on time, and against specified
objectives, then learning will be limited. In order for policy makers and practitioners to
be able to make the necessary adjustments to the design and implementation of
interventions, they need to understand the reasons behind the failings or successes of
previous applications of policies, programmes or projects. This level of understanding
necessitates a particular kind of evidence – one that enables the precise identification
and description of the causal linkages between inputs and their outputs and outcomes –
it requires data that explains why failings or successes occurred under a certain set of
circumstances (ChannahSorah 2003).

Because project outputs lead, often indirectly, to project outcomes, and wider, less
predictable or controllable impacts, M&E also needs to be flexible enough to
accommodate and learn from the unexpected. It is possible to set indicators for these
outcomes and impacts at the planning stage, but it is often more difficult to measure
them and attribute them to the project alone. To rely only on ‘pre-cooked’ indicators
would preclude learning from the unexpected. More open-ended forms of evaluation are
also needed to inform stakeholders of the wider effects of a project, programme or
policy. In summary, in project or programme cycles, learning can take place by
comparing M&E findings with the expected or desired state. But if M&E data is structured
in a rigid way that has missed potential outcomes, this structure can stand in the way of
learning and more open research approaches will be needed.

Organisation
The link between evaluation and changes in policy or practice is rarely strong (Bell and
Morse 2003). Rigby et al. (2000) argue that:

“Much of the measurement of indicators, has, at the end of the day, largely resulted just
in the measurement of indicators. The actual operationalization of indicators to influence
or change, for instance, policy is still in its infancy” (cited in Bell and Morse, 2003: 51).

There are some who argue that real learning processes take place not in examining the results of M&E processes, but in formulating the indicators themselves:

“indicators that are influential are developed with participation of those who are to use them… the joint learning process around indicator development and use is far more important in terms of impact than are the actual indicator reports. It is this process that assures that the indicators become part of the players’ meaning systems. They act on the indicators because the ideas the indicators represent have become second nature to them and part of what they take for granted” (Innes and Booher 1999).

One area that has attracted particular attention is that of the social ‘ownership’ of M&E.
Despite widespread attention given to the need for participatory M&E, the handing over
of monitoring systems to local communities has rarely been successful (Garcia and
Lescuyer 2008). Several authors from Canada have highlighted the gap between
community M&E and decision-making – whether locally or nationally (Conrad 2006,
Conrad and Daoust 2008, Faith 2005).

Where participatory monitoring does take place, it can be a key factor contributing to
success (Hartanto et al. 2002, Wollenberg et al. 2001). Some positive examples are
provided by North America. A case study of ‘improving forest management through participatory monitoring’ focuses on four organisations who ‘shared a common goal of
creating learning communities to better address the complex array of forest health and
forest livelihood issues’ (Ballard et al. 2010). They highlight ‘important changes in social
and institutional relationships’ between the community, the forestry department and
environmental NGOs. The process helped local people to appreciate the complexity of
forest management, and contributed to increased social capital. More open-ended
approaches to ‘learning from experience’ include the recent emphasis on social forestry
networks and ‘writeshops’, to help practitioners analyse experience and present it in
written format for sharing (e.g. Mahanty et al. 2006).

It follows that M&E should not just be a means of generating technical data, but a means
of bringing people together in the collective activities of gathering, analysing, and
interpreting data and, critically, applying the lessons drawn from them in renewed cycles
of delivery. In short, learning from M&E can be enhanced by an inclusive and
participatory approach, with adequate resources allocated to improving communication
between researchers, policymakers, operational staff, and community members and to
facilitating their combined involvement in all stages of the evaluation cycle.

Summary - Key principles for M&E:

• Learning outcomes should figure as a deliberate objective of the M&E process;
• M&E should explain project successes and failures by clearly linking inputs, activities and outputs to their outcomes and impacts;
• M&E design should be flexible enough to accommodate and learn from the unexpected;
• M&E should be participatory because involving stakeholders in indicator design, data gathering, analysis and interpretation increases the likelihood that lessons will be applied.

Section 3: The research domain


The weak link between evaluation and changes in policy or practice highlighted by Bell
and Morse (2003) can be attributed, in part, to a model of evaluation where the
gathering, analysis and interpretation of evaluative data is treated as distinct from the
delivery of the policy, programme or project being evaluated. This distinction is often
organisational as well as conceptual – M&E is often contracted out to an external
research company as a separate, bounded task that runs in parallel to project activities.
Outsourcing M&E in this way is considered beneficial in terms of providing an
independent and objective assessment of project successes and failures. However, this
objectivity comes at a price because project staff themselves are not directly involved in
the analysis or interpretation of data and, therefore, not best placed to react and adjust
to evidence of good or poor performance. Under this model, the potential for
organisational learning is limited and is highly dependent upon regular and detailed
feedback and input from the researchers.

The broad school of participatory evaluation offers alternative models of evaluative research where the distinction between researcher and researched is de-emphasised, or
abolished altogether. Rather than providing a detached, objective assessment over
which he / she has ultimate control, the evaluator’s role is to assist programme
participants in making their own assessment and the research process is controlled by
the people in the programme (these can be practitioners as well as community
members). Their participation can be an end in itself, an expression of the right for
people to have a voice in matters that significantly affect them. It can also be justified in
instrumental terms, as it helps to mobilise local knowledge and helps to make the
intervention more relevant and effective (Molund & Schill 2007).

Patton (2002: 179) catalogues and references some key approaches, such as action
learning (Pedler 1991), reflective practice (Tremmel 1993), action research (Whyte
1989), internal evaluation (Sonnichsen 2000), organisational development (Patton 1999), and systematic praxis (Bawden & Packham 1998). With these approaches, the
primary purpose of the evaluation is to yield insights that change practice and enable
programme participants to think more systematically and reflexively about what they’re
doing. Specific findings about programme performance emerge, but they are secondary
to the more general learning outcomes of the inquiry – a distinction that Patton captures
through his differentiation of ‘process use’ from ‘findings use’ (1997).


Participatory evaluation’s emphasis on breaking down boundaries and bridging gaps between all the stakeholders involved (including researchers, policy-makers, practitioners and community members) is sometimes reflected in the design of the indicators used. This is particularly relevant for social forestry M&E where the intangibility and mutability of many social and economic outcomes forces a recognition that there are sometimes no simple, objective indicators of social benefit.

Recognition of this has had two notable impacts. Firstly, it has given rise to the development of indicators that attempt to capture the more intangible outcomes and the processes of social change that bring them about. Canadian researchers, for example, describe the ‘next generation’ of (forestry) socio-economic indicators as "process" indicators. These deal more with causal effects than outcomes and include things like sense of place or attachment to place. Process indicators also include variables such as leadership, volunteerism, entrepreneurship, and social cohesion (Beckley, Parkins, and Stedman 2002). Secondly, and related to this, it has led to the development of participatory approaches to indicator development whereby the terms of reference of the evaluation are sensitised and tailored to specific socio-cultural and economic contexts. The Center for International Forestry Research (CIFOR) has led work in this field, particularly in relation to the development of indicators of sustainable forest management (SFM). Because the ecological, social and economic dimensions are all interconnected in SFM, they advocate qualitative multi-criteria approaches to developing indicators (Mendoza and Prabhu 2003). Methods include cognitive mapping, a tool that can help to assess the interconnectedness of indicators.
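To give a concrete flavour of cognitive mapping, the sketch below uses one common formalisation, a simple fuzzy cognitive map, in which indicators are nodes and signed weights express how strongly stakeholders believe one indicator influences another. The indicators, weights and update rule here are illustrative assumptions made for this paper, not material taken from Mendoza and Prabhu (2003).

import numpy as np

# Hypothetical indicators (nodes) and stakeholder-elicited influence weights (edges).
indicators = ["forest cover", "local income", "social cohesion", "participation"]
# W[i, j] = believed influence of indicator i on indicator j, in [-1, 1].
W = np.array([
    [0.0, 0.4, 0.0, 0.0],   # forest cover supports local income
    [0.0, 0.0, 0.3, 0.2],   # income supports cohesion and participation
    [0.0, 0.0, 0.0, 0.5],   # cohesion supports participation
    [0.3, 0.2, 0.0, 0.0],   # participation feeds back to cover and income
])

def step(state: np.ndarray) -> np.ndarray:
    """One update of the map: combine current levels with incoming influences."""
    return 1.0 / (1.0 + np.exp(-(state + state @ W)))  # squashed to (0, 1)

state = np.array([0.8, 0.3, 0.3, 0.2])  # assumed starting levels
for _ in range(20):                     # iterate towards a rough equilibrium
    state = step(state)

for name, value in zip(indicators, state):
    print(f"{name}: {value:.2f}")

# Column sums of |W| give a crude view of how strongly each indicator
# is driven by the others, i.e. its interconnectedness within the map.
print(dict(zip(indicators, np.abs(W).sum(axis=0).round(2))))

Iterating the map shows how a change in one indicator propagates to the others, which is one simple way of examining their interconnectedness before settling on an indicator set.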

Summary - Key principles for research:

M&E research should be participatory and should bring researchers, policy-makers, practitioners and community members together in the collective enterprise of indicator design, data gathering, analysis and interpretation. This will create the conditions for shared ownership of the M&E process and increase the likelihood that lessons will be applied.

M&E research should not only focus on outcomes and impacts, but also on the processes through which social change occurs. Process indicators should be considered during evaluation design and planning.

Section 4: Testing the principles


Thus far in this paper we have set out key principles for learning from M&E that emerge
from a summary review of the literature. These principles are listed in the text boxes at the end of each section, for easy reference. We have purposefully ‘de-cluttered’ our
presentation of the principles; they are presented as ideals of M&E design, application
and organisational orientation without consideration for how they might actually be
operationalised. In this section we outline a programme of research whereby these
principles can be scrutinised, tested and developed through their ‘real life’ application in
the context of social forestry policy, programme and project delivery in Great Britain.

This research programme consists of two phases, which seek to learn from experience
(phase 1), and develop improved approaches collaboratively (phase 2). We describe
these phases in more detail below.

Phase 1: Scoping interviews - A series of semi-structured interviews will be conducted with key social forestry M&E stakeholders within the FC to examine the extent
to which learning and adaptation outcomes inform the design and implementation of
M&E within the organisation; the extent to which M&E has led to learning and
adaptation, whether planned or not; and to document examples of M&E applications that
have been successful or unsuccessful with respect to learning and adaptation. The
analysis of the interview results will enable us to formulate a number of ‘learning
efficacy’ criteria (factors that have been present in a given M&E application and which
enabled learning outcomes to be achieved). These criteria can be compared with the
principles set out in this paper (Sections 1-3) to see how the FC has fared to date
against this idealised vision for learning from M&E, and can then also be applied and
tested through the case study component of the research project (Phase 2).

Scoping interviews will address the following research questions:

RQ1: Why do FC staff gather data about the performance of policies, programmes and
projects?
RQ2: Do FC staff members consider learning outcomes when they are thinking about
M&E design and implementation?
RQ3: What factors lead to successful learning outcomes?
RQ4: What factors prevent successful learning outcomes?

Phase 2: Case studies – SERG is currently involved in three projects (see below) to
develop and implement frameworks of social forestry M&E. These projects provide
opportunities to put into practice, test and develop the principles set out in this paper.
We propose to carry out this work between 2010/11 and 2012/13. Our research will be
orientated around the principles set out in Section 3. We will adopt an action research
approach, based around the integration of monitoring and evaluation activities into the
wider project cycle, and the active involvement of researchers, policy-makers,
practitioners and community members in M&E tasks, such as indicator development, and data gathering, analysis and interpretation. Implementing a participatory evaluation approach will enable us to test whether, and in what ways, the active involvement of a range of stakeholders delivers opportunities for learning and adaptation.

In principle, the adoption of this participatory approach should deliver benefits in terms
of learning. However, we are aware that it will require increased levels of commitment
and responsibility for policy / operational staff and community members who, under
more conventional models of M&E, would play a fairly passive role. As such, trying to
implement a participatory approach will enable us to document the process and assess it
in terms of practical feasibility and resource implications.

We now set out the approach to be taken in each case study:

Case study 1: An evaluation of the community impacts and key lessons from the
Neroche Landscape Partnership Scheme

The Neroche Landscape Partnership Scheme (LPS) is a five year programme of landscape and heritage based activities, seeking to maximise the value of the northern
part of the Blackdown Hills AONB for wildlife conservation, recreation, learning and skills
development. It is funded by the Heritage Lottery Fund (HLF) and a partnership of local
authorities and agencies, under the HLF’s Landscape Partnership programme. The
Forestry Commission (Peninsula FD) is lead partner, and the staff team are based with
the Blackdown Hills AONB team. The LPS began in October 2006 and runs to 2012.

In August 2010, SERG were asked to carry out an evaluation of the LPS. Although the
Neroche Partnership is not obliged to carry out a formal evaluation under the terms of its
HLF funding, it is keen to maximise the learning value from the programme, both as a
way of rounding off the LPS and providing useful data to underpin legacy activities.

The evaluation work led by SERG will have two main focuses. Firstly it will assess the
impacts of the project on its participants in terms of their enjoyment, learning, skills and
involvement. Secondly, and to offer useful learning value, the evaluation will try to
document and explain positive and negative project outcomes.

Given the short time frame for carrying out the work (3 months for primary research), and the fact that the evaluation is starting quite late in the delivery of the LPS, a ‘belt and braces’ approach to participatory evaluation, with active stakeholder involvement in the design, implementation and analysis stages of the M&E, is not feasible. However, because the LPS case study is a good example of many delivery projects where time and resources for M&E are limited, and because of the LPS team’s own stated need to draw out key lessons, we feel that trying to facilitate a level of stakeholder participation will yield valuable insights that can inform our development of practical guidance for learning from M&E.

Framework development - Working with the LPS team, SERG have produced a plan
for the evaluation that incorporates as much active involvement by stakeholders as
possible within the obvious time and resource constraints, based around the two main
focuses outlined above. SERG researchers will work with the LPS team to design short
questionnaires for use by LPS staff (FC and delivery partners) to evaluate the impacts of
the scheme on participants. These questionnaires are to be distributed to participants at
forthcoming events and emailed/mailed to participants using the LPS contacts database.
Critically, distribution and data input will be led by the LPS team, putting them in a
position to experience directly the feedback given by project participants. Their
experience of implementing the survey and gaining feedback will be drawn out during interviews conducted by SERG researchers (see below).

Framework implementation – SERG researchers will conduct focus groups with selected project participants, focusing on two LPS activities (to be decided). LPS activity
leaders will be encouraged to attend these focus groups, so that they can contribute to
discussions about the impacts of the activity in question. Furthermore, this will provide
an opportunity for them to gain first hand experience of the feedback provided by
participants. Lessons that they draw from this feedback will again be drawn out in the
interviews conducted by SERG researchers.

SERG researchers will conduct one-to-one and/or small group interviews with members of the LPS project team, partnership board members, and key members of the stakeholder group to elicit their views on the successes, difficulties and failures regarding the governance of the LPS, its processes and activities, its impacts on the project team and the Landscape Partnership (funding bodies), and the perceived impacts on project participants and the affected area. A focus group will also be conducted with the stakeholder group. A
particular focus of discussion during these interviews will be the ways in which various
data types (including M&E data) have been used to inform the design and delivery of the
LPS. We will also examine how the structures, knowledge cultures and communicative
practices within and between the various organisations involved in the governance of the
LPS have inhibited or enabled learning outcomes.

Reporting of the SERG evaluation will be carried out in close collaboration with the LPS
project team to ensure that we maximise the capacity of the evaluation outputs to
identify the successes and challenges of the Neroche LPS so that lessons can be learnt
by FC for other/future projects.

Case study 2: Woodlands In and Around Towns (WIAT)


WIAT supports the creation and management of accessible woodlands for individuals
and communities in urban and post-industrial areas as a way to bring the quality of life
benefits of forests closer to where people live and work. It is the flagship social forestry
programme delivered by Forestry Commission Scotland (FCS) under the Scottish Rural
Development Programme. WIAT was launched in 2005 and by the end of its first three-
year phase it had made a capital investment of £30m in over 110 woods across
Scotland. Now nearing the end of Phase II (2008-2011), FCS have undertaken to
designate 12 existing and new sites to demonstrate the range of benefits delivered
through the WIAT programme. These ‘priority’ sites will provide a strategic focus for the
targeting of future resources to develop exemplars of sustainable urban forest
management, laying the foundation for the delivery of WIAT Phase III.

There is an ongoing need to evaluate the WIAT programme. For WIAT Phase II a number of evaluation resources were developed, including a set of indicators [3] and guidance on how to carry out monitoring and evaluation (M&E) of social forestry initiatives to be used by WIAT Challenge Fund grant applicants [4]. In addition, FCS have contracted OpenSpace Research to carry out an evaluation of three WIAT sites [5].

As part of wider objectives to develop good practice in community engagement, and the
design and delivery of sites, projects and interventions, FCS intend to use the 12 priority
sites to develop a broad M&E framework that can be integrated into the design and
delivery of WIAT Phase III. To that end, FCS have contracted SERG to lead the
development of a bespoke M&E framework to be implemented and tested at the priority
sites. The terms of reference for this work are as follows:

1. To develop an M&E framework that will produce evidence that shows how WIAT sites and interventions are helping to deliver against wider Scottish policy objectives;
2. To provide an M&E resource that can be used by those delivering and managing WIAT sites, as a way of informing the development of best practice;
3. To develop an M&E framework that enables processes of learning and adaptation at both operational and policy levels.

Framework development consists of indicator and methods design, and producing a number of protocols that provide instructions and guidance to those using the framework. Following on from this, the framework will be implemented at a number of priority sites.

[3] Forestry Commission Scotland (2008: 11) Woodlands In and Around Towns: Phase 2. Available at: http://www.forestry.gov.uk/pdf/fcfc120.pdf/$FILE/fcfc120.pdf
[4] Available at: http://www.forestry.gov.uk/forestry/infd-7djf9c
[5] Copies of the baseline report can be downloaded at: http://www.forestry.gov.uk/pdf/WIATBaselineSurveyFinal300307.pdf/$FILE/WIATBaselineSurveyFinal300307.pdf


It is likely that SERG will be contracted on an on-going basis to provide
M&E advice and support to WIAT stakeholders and to assist with data gathering, analysis
and interpretation.

We will adopt a participatory approach to framework development and implementation in order to test and develop the principles set out in this paper, as follows:

Framework development – the development process will draw on the experience, knowledge and expertise of a number of key research, policy and operational
stakeholders to ensure the appropriate selection and design of indicators and to facilitate
the necessary buy-in and ownership for successful implementation. Preliminary indicator
design and selection will be achieved through workshop meetings to agree a preliminary
indicator framework structure that corresponds to key policy and operational delivery
agendas (FC and partners). The resulting draft indicators will be subject to alteration and
further development pending their implementation and testing at the WIAT priority sites.
Indeed, scope for the proposal of additional indicators by FCS, community groups and
partners will be written into the framework. This is to allow the flexibility to
accommodate the data and evidence requirements of specific sites, projects and local
contexts.
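Purely as a sketch of how such flexibility might be represented (the structure and names below are hypothetical and are not the agreed WIAT framework), a core indicator set can be held separately from site-level additions so that FCS, community groups and partners can propose extra indicators without altering the shared baseline:

from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    policy_agenda: str        # which policy or delivery agenda it reports against
    method: str               # how the data will be gathered

@dataclass
class SiteFramework:
    """Core indicators shared across sites, plus locally proposed additions."""
    site: str
    core: list[Indicator]
    local: list[Indicator] = field(default_factory=list)

    def propose(self, indicator: Indicator) -> None:
        """Allow FCS, community groups or partners to add a site-specific indicator."""
        self.local.append(indicator)

    def all_indicators(self) -> list[Indicator]:
        return self.core + self.local

core_set = [
    Indicator("woodland visits", "health and wellbeing", "on-site counts"),
    Indicator("community events held", "community engagement", "event records"),
]
site = SiteFramework("Example priority site", core=core_set)
site.propose(Indicator("volunteer hours", "skills and learning", "volunteer log"))
print([i.name for i in site.all_indicators()])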

Framework implementation – at each site an ‘inquiry group’ made up of researchers, policy and operational stakeholders, partner organisations and community members will be formed to steer the implementation phase. Each group will select indicators, agree responsibilities for data gathering, and write a plan for the analysis and interpretation of data to include scheduled reflection ‘checkpoints’ at which M&E findings are discussed and plans for renewed phases of delivery are drafted.

Case study 3: Forestry Commission Thames Beat M&E Strategy

The Thames Beat team has recently drafted a strategy for the delivery of community
engagement (CE) activities within the Forestry Commission Thames Beat (FCTB) for the
period 2010 - 2012. The CE strategy encompasses a monitoring and evaluation strategy
focused on the delivery of CE activities (‘the social offer’). SERG is working with the
FCTB to support the detailed design and implementation of the CE and M&E strategies.

Refining and improving the social offer (i.e. a better service to the public) is a stated objective of the M&E strategy, and there is a strongly stated intention to use M&E data to feed into processes of learning and adaptation:

“It is critical that any M&E serves a valuable function in the day to day running and strategic development of the TB. In order to ensure this relevance, any M&E package must focus on the aspects of the TB work that can be varied on a relatively short cycle (such as annually).”

Framework development – SERG are soon to meet with the Thames Beat team to
discuss plans for the 2011/12 CE programme. At this meeting, a participatory approach
to M&E framework development and implementation will be discussed.

Framework implementation – Where appropriate (depending on the nature of sites / projects being evaluated), inquiry groups made up of researchers, operational stakeholders, partner organisations and community members will be formed to steer the M&E framework development and implementation. As in the WIAT case study, inquiry groups will select indicators, agree responsibilities for data gathering, and write a plan for the analysis and interpretation of data to include scheduled reflection ‘checkpoints’ at which M&E findings are discussed and plans for renewed phases of delivery are drafted.

--
For further information about the research project ‘Learning from Monitoring &
Evaluation’, contact Jake Morris: [email protected]

References

Ambrose-Oji, B., J. Wallace, A. Lawrence, and A. Stewart. 2010. Forestry Commission working with civil society. Forest Research. Available at http://www.forestresearch.gov.uk/fr/INFD-7WCDZH

Argyris, C., and D. A. Schön. 1978. Organizational Learning: A Theory of Action Perspective. Wokingham.

Ballard, H. L., V. Sturtevant, and M. E. Fernandez-Gimenez. 2010. "Improving forest management through participatory monitoring: A comparative case study of four community-based forestry organizations in the Western United States," in Taking stock of nature: participatory biodiversity assessment for policy and planning. Edited by A. Lawrence, pp. 266-287. Cambridge: Cambridge University Press.

Bateson, G. 1972. Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology. University of Chicago Press.


Bawden, R.J. & Packham, R.G. 1998. ‘Systematic praxis in the education of the agricultural systems practitioner.’ Systems Research and Behavioral Science 15: 403-412.

Bell, S. and Morse, S. 2003. ‘Measuring sustainability: learning from doing’. Earthscan, London. 189 pp.

ChannahSorah, V. 2003. ‘Moving from Measuring Processes to Outcomes: Lessons Learned from GPRA in the United States.’ Presented at World Bank and Korea Development Institute joint conference on Performance Evaluation System and Guidelines with Application to Large-Scale Construction, R&D, and Job Training Investments. Seoul, South Korea. July 24-25.

Cohen, D. and Prusak, L. 2001. ‘In Good Company. How social capital makes organizations work’. Boston, Ma.: Harvard Business School Press.

Conrad, C. 2006. Towards meaningful community-based ecological monitoring in Nova Scotia: Where are we versus where we would like to be. Environments 34:25-36.

Conrad, C. T., and T. Daoust. 2008. Community-based monitoring frameworks: Increasing the effectiveness of environmental stewardship. Environmental Management 41:358-366.

Faith, D. P. 2005. Global biodiversity assessment: integrating global and local values and human dimensions. Global Environmental Change-Human and Policy Dimensions 15:5-8.

Garcia, C. A., and G. Lescuyer. 2008. Monitoring, indicators and community based forest management in the tropics: Pretexts or red herrings? Biodiversity and Conservation 17:1303-1317.

Government Social Research Unit. 2006. ‘The Magenta Book: guidance notes for policy evaluation and analysis.’ HM Treasury, London.

Hartanto, H., M. C. B. Lorenzo, and A. L. Frio. 2002. Collective action and learning in developing a local monitoring system. International Forestry Review 4:184-195.

IFRC. 2002. ‘Handbook for Monitoring and Evaluation.’ Geneva, Switzerland.

Innes, J. E., and D. E. Booher. 1999. Indicators for sustainable communities: a strategy building on complexity theory and distributed intelligence. Available at http://www.rmi.org/images/other/ER-InOpp-Indicators.pdf


Lawrence, A., B. Anglezarke, B. Frost, P. Nolan, and R. Owen. 2009. What does community forestry mean in a devolved Great Britain? International Forestry Review 11:281-297.

Mahanty, S., J. Fox, L. McLees, M. Nurse, and P. Stephen. Editors. 2006. Hanging in the Balance: equity in community-based natural resource management in Asia. Bangkok: RECOFTC and East-West Center.

Mendoza, G. A., and R. Prabhu. 2003. Qualitative multi-criteria approaches to assessing indicators of sustainable forest resource management. Forest Ecology and Management 174:329-343.

Molund, S. & Schill, G. 2007. ‘Looking Back, Moving Forward. Sida Evaluation Manual - 2nd revised edition.’ Sida, Sweden.

Patton, M. 2002. Qualitative Research & Evaluation Methods 3rd Edition. Sage, Thousand Oaks, California.

- 1999. ‘Organizational development and evaluation.’ Special issue of Canadian Journal of Program Evaluation, pp. 93-113.

- 1997. ‘Utilization-focused evaluation: The new century text.’ 3rd Edition. Sage, Thousand Oaks, California.

Pedler, M. (ed.) 1991. ‘Action Learning in Practice’. Gower, Aldershot, Hants UK.

Rigby, D., Howlett, D. & Woodhouse, P. 2000. ‘A Review of Indicators of Agricultural and Rural Livelihood Sustainability’. Institute for Development Policy and Management, Manchester University, Manchester.

Schön, D. A. 1973. ‘Beyond the Stable State. Public and private learning in a changing society’. Harmondsworth: Penguin.

Senge, P. M. 1990. The Fifth Discipline. Doubleday/Currency.

Smith, M. 2010. "Learning in organizations," infed. Available at http://www.infed.org/biblio/organizational-learning.htm

Sonnichsen, R.C. 2000. ‘High impact internal evaluation.’ Sage, Thousand Oaks, California.


Tremmel, R. 1993. ‘Zen and the art of reflective practice in teacher education.’ Harvard Educational Review 63 (4): 434-58.

UNICEF. Undated. ‘A UNICEF Guide for Monitoring and Evaluation - Making a Difference?’

Voss, J.-P., and R. Kemp. 2006. "Sustainability and reflexive governance: introduction," in Reflexive governance for sustainable development. Edited by J.-P. Voss, D. Bauknecht, and R. Kemp, pp. 3-28. Edward Elgar.

Webler, T., H. Kastenholz, and O. Renn. 1995. Public participation in impact assessment: a social learning perspective. Environmental Impact Assessment Review 15:443-463.

Whyte, W.F. (ed.) 1989. ‘Action research for the twenty-first century: Participation, reflection and practice.’ Special issue of American Behavioral Scientist 32 (5, May/June).

Wollenberg, E., J. Anderson, and D. Edmunds. 2001. Pluralism and the less powerful: accommodating multiple interests in local forest management. International Journal of Agricultural Resources, Governance and Ecology 1:199-222.

Yaron, G. 2006. ‘Good Practice from Whitehall and Beyond.’ GY Associates Ltd, Harpenden UK.


