Learning from Monitoring & Evaluation – a blueprint for an adaptive organisation

Jake Morris & Anna Lawrence | September 2010
In this paper we set out principles for learning from social forestry M&E within the
Forestry Commission (FC) and its partners. Smith (2010) notes that writing on learning
organisations is highly idealised and that there are few actual examples of organisations
that have managed to put principles into action. This is not at odds with our objective to
set out the principles of learning from M&E, and to investigate how they might be
realised within the FC. To that end, we also outline a programme of research to test and
develop these principles, the outcome of which will be guidance on M&E design and
implementation for enhanced learning outcomes.
Learning from M&E will be dependent on activities and structures within three inter-
related domains that are addressed in separate sections of this paper. In each section
we present a summary review of key literature to present an idealised vision of practices
and organisational structures for the promotion of learning outcomes:
1. The ‘organisational domain’ refers to the structures, cultures and relationships within and between organisations that can promote learning outcomes.
2. The ‘M&E domain’ refers to the overall organisation, analytical orientation (aims and objectives), and to the data gathering tools (indicators) of a given M&E project that can promote learning outcomes.
3. The ‘research domain’ refers to the principles of participatory evaluation and how
they may be operationalised to help realise the learning potential within domains
1, 2 and 3.
Background
This paper is an output of the research project ‘Learning from Monitoring & Evaluation’
(Study No. FR09031), which aims to inform best practice in M&E, and to develop and
test models to improve the use of M&E data within the FC so that the organisation and
its partners can become more responsive, adaptive and, ultimately, sustainable. The
project started from the recognition that the FC and its partners could make better use
of data that are gathered as part of evaluations of social forestry policy, programme and
project delivery, and which have the potential to inform processes of decision-making,
planning and design.
It is commonly said of evaluation reports that they have merely ‘ticked the box’ of
fulfilling funders’ requirements, or are ‘gathering dust on the shelf’. A study of
community forestry in Great Britain noted that although initiatives have been evaluated,
they ‘are often completed as a formality, or as an outward looking defence of public
spending, and do not feed into internal learning processes’ (Lawrence et al. 2009). As
such, the FC, like many organisations, is missing important opportunities to learn from
experience, communicate successes, and develop organisationally.
This focus on learning and adaptation mirrors an important shift within the general field of monitoring and evaluation, which was originally conceived within the international development sector as a form of ‘evaluation for accountability’ or ‘summative evaluation’, whereby a donor or sponsor is given the necessary information to demonstrate that a funded intervention has delivered against its stated aims and objectives. The last 20 years have seen a gradual shift in practical and analytical emphasis to respond to the needs of development funders, planners and practitioners to learn from previous experience.
Central to this development has been the increasingly strong emphasis placed on the
translation of new knowledge into better policy and practice. This shift in emphasis has
given rise to ‘evaluation for learning’, also referred to as ‘formative evaluation’.¹
Evidence-based practice
A variety of evaluation approaches have emerged that are aimed at learning and
informing improvements to the practical dimensions of project and programme delivery.
Patton (2002: 179) catalogues and references some key approaches, such as ‘action
learning’ (Pedler 1991), ‘reflective practice’ (Tremmel 1993), ‘action research’ (Whyte
1989), internal evaluation (Sonnichsen 2000), organisational development (Patton
1999), and systematic praxis (Bawden & Packham 1998). With these approaches, the
primary purpose of the evaluation is to yield insights that change practice and enable
programme participants to think more systematically and reflexively about what they’re
doing.

¹ It should be stressed that evaluation for accountability and evaluation for learning are not mutually exclusive – the need to provide summative judgments about whether a programme was effective or not can, and often does, sit perfectly comfortably alongside the need for insights that can improve programme effectiveness.
Evidence-based policy
The concept of evidence-based policy has also gained currency in recent years, as has
the strategic application of monitoring and evaluation within evidence-based policy
making. In 2003 (updated in 2006) the Government Social Research Unit (GSR)
published The Magenta Book - a set of guidance notes for policy evaluation
commissioners, policy evaluators and analysts. A Defra-commissioned review of the role
of monitoring and evaluation in policy-making across Whitehall highlighted the potential
learning gains where evidence can help identify what has worked in previous policies,
can improve policy / programme implementation, and can identify and measure the
impact of a policy / programme (Yaron 2006).
Learning
Here we are discussing the concept of learning in a specific context: the improved
effectiveness of people, projects and organisations, through conscious processing of
information gained from experience. This conscious processing of information and
experience can be structured through M&E.
Learning occurs on at least two levels, giving rise to the concepts of ‘single-loop learning’
and ‘double-loop learning’ developed in the field of organisational learning (Argyris and
Schön 1978, Bateson 1972).
Single-loop learning leads actors to modify their behaviour to adjust to goals within the
status quo.
Double-loop learning challenges mental models and the policies based on them, and involves learning from others as well as from one's own experience.²
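The distinction can be sketched loosely in code (our illustration, not the authors'): a single-loop learner only adjusts its behaviour against a fixed target, while a double-loop learner may also revise the target and the assumptions behind it. The scenario and all names below are hypothetical.

```python
# Illustrative sketch only (not from the paper): single- vs double-loop learning
# expressed as feedback loops. The scenario and all names are hypothetical.

def single_loop(effort: float, observed: float, target: float) -> float:
    """Adjust behaviour to close the gap against a fixed goal; the goal itself is untouched."""
    gap = target - observed
    return effort + 0.5 * gap


def double_loop(effort: float, observed: float, target: float,
                assumption_holds: bool) -> tuple[float, float]:
    """Also question the goal and the mental model behind it before adjusting behaviour."""
    if not assumption_holds:
        # Experience (one's own, or shared by others) contradicts the underlying
        # assumption, so the target itself is revised.
        target = 0.9 * target
    return single_loop(effort, observed, target), target


if __name__ == "__main__":
    effort, target, observed = 10.0, 500.0, 320.0  # e.g. events run vs. visits expected/recorded
    print("single-loop adjusted effort:", single_loop(effort, observed, target))
    print("double-loop adjusted effort and revised target:",
          double_loop(effort, observed, target, assumption_holds=False))
```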
There are two main routes to learning in the sense being discussed here:
reflexivity or introspection – conscious reflection on one’s own experience
exchange - sharing experiences amongst different stakeholders.
² A third, more profound level of learning and change is sometimes mentioned: ‘triple-loop learning’, when a complete transformation in world view takes place. This kind of learning is not likely to take place within organisational structures, but it is the kind that can inspire new social movements.
Reflexive learning is built into some kinds of research. The best-known example is action
research where the conscious processing of research experiences becomes the data
which informs behavioural change of the participants in the research.
‘Whereas monitoring may be nothing more than a simple recording of activities and
results against plans and budgets, evaluation probes deeper. Although monitoring
signals failures to reach targets and other problems to be tackled along the way, it can
usually not explain why a particular problem has arisen, or why a particular outcome has
occurred or failed to occur’ (Molund & Schill 2007, emphasis added).
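Read as data structures, this division of labour might look something like the sketch below (ours, not Molund & Schill's): monitoring flags deviations from pre-set targets, while an evaluation record carries a judgement and, crucially, an explanation of why. Field names and figures are invented for illustration.

```python
# Illustrative sketch of the distinction quoted above; all names and figures are hypothetical.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    target: float
    observed: float

def monitor(indicators: list[Indicator]) -> list[str]:
    """Monitoring: periodic checking of progress against plan; it flags problems, not reasons."""
    return [f"off-track: {i.name} ({i.observed} against target {i.target})"
            for i in indicators if i.observed < i.target]

@dataclass
class EvaluationFinding:
    """Evaluation probes deeper: a judgement plus an explanation of why the outcome occurred."""
    judgement: str
    explanation: str
    lessons: list[str]

if __name__ == "__main__":
    data = [Indicator("volunteer days", target=200, observed=140),
            Indicator("events held", target=12, observed=12)]
    for flag in monitor(data):
        print(flag)
    print(EvaluationFinding(judgement="target partly met",
                            explanation="recruitment began after the planting season had started",
                            lessons=["schedule volunteer recruitment before site works begin"]))
```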
The following definitions of Monitoring and Evaluation help both to reinforce and to
carefully demarcate the distinction outlined above:
Monitoring:
‘... the periodic oversight of the implementation of an activity which seeks to establish
the extent to which input deliveries, work schedules, other required actions and targeted
outputs are proceeding according to plan, so that timely action can be taken to correct
deficiencies detected. Monitoring is also useful for the systematic checking on a condition
or set of conditions, such as following the situation of women and children’ (UNICEF,
undated: 2).
‘... a continuing function that uses the systematic collection of data on specified
indicators to inform management and the main stakeholders of an ongoing [...]
operation of the extent of progress and achievement of results in the use of allocated
funds’ (IFRC, 2002: 1-5).
‘... the continuous follow-up of activities and results in relation to pre-set targets and
objectives’ (Molund & Schill, 2007: 12).
Evaluation:
‘... the systematic collection of information about the activities, characteristics, and
outcomes of programs to make judgments about the program, improve program
effectiveness, and/or inform decisions about future programming’ (Patton, 2002: 10)
2. When M&E feeds into processes of collective reflection and analysis between
stakeholders to determine the reasons behind success / failure.
Many organisations (including the FC), however, work with a wide range of private,
public and third sector organisations to achieve their objectives. A recent review of
partnerships between the FC and third sector organisations, for example, revealed the extent of partnership working, with more than 140 different third sector organisations listed as operating with the Commission in England alone (Ambrose-Oji et al. 2010). In
a delivery context characterised by partnership working, monitoring and evaluation of
delivery will entail reporting performance against aims and objectives shared across
organisations. In these cases the need to learn from M&E will also be shared – learning
can be an inter-, as well as an intra-organisational phenomenon. As such, the principles
outlined below apply as much to individual as to multiple, collaborating organisations.
Smith (2010) notes that writing on learning organisations is highly idealised and that
there are few actual examples of organisations that have managed to put principles into
action. This is not at odds with our objective to set out the principles of learning from
M&E, and to investigate how they might be realised within the FC.
Donald Schön was an early advocate of the development of institutions that are ‘learning systems’, by which he meant systems capable of bringing about their own continuing transformation (1973: 28). Much of the subsequent literature on learning organisations has been a development of Schön’s ideas. Three key criteria for learning organisations emerge from this literature: systemic thinking, dialogue and social capital.
Systemic thinking: The conceptual cornerstone of Peter Senge’s (1990) work on learning
organisations, systemic thinking enables an appreciation of the inter-relatedness of sub-
systems, of individual actors and actions, and an accommodation of sophistication and
complexity within strategic thinking and decision-making. The ability to accommodate
complexity is likely to be even more important when policies, programmes and projects
are delivered through partnerships.
Social capital: Cohen and Prusak (2001: 4) refer to social capital as “the stock of active
connections among people: the trust, mutual understanding, and shared values and
behaviours that bind the members of human networks and communities and make
cooperative action possible”. The development of social capital constitutes a valuable
investment for any organisation interested in promoting learning because creating
opportunities for people to connect provides the medium for effective dialogue and
fosters the appropriate conditions for genuine participation in collective thinking, analysis
and decision-making.
As well as revealing the extent of partnership working within the FC, Ambrose-Oji et al.
(2010) also identified some key characteristics of successful partnership working that
overlap substantially with the principles of dialogue and social capital, namely:
Mutual trust and respect – “trust and honesty between partners, built through
communication and mutual understanding” (ibid, 50).
In order to enhance the potential for learning and adaptation, organisations and the
teams within them should aspire to become learning systems. This will involve
developing the capacity for systemic thinking within strategic planning and decision-
making, fostering open and genuine dialogue between project team members, and
fostering the conditions for cooperative action through the development of social capital.
Broadly speaking, learning outcomes are achieved through M&E in two ways:
end-of-cycle learning – learning forms the final connection in the project cycle, where the M&E data is used as the basis for appraisal and renewed planning.
within-cycle learning – M&E structures the information that feeds into learning processes that form part of the project cycle.
The stages of the project cycle are illustrated in Figure 1. The same cyclical concept is
applied to programme and policy planning and implementation. In all cases, the
‘monitoring’ stage provides an opportunity for learning during implementation, and the
‘evaluation’ stage provides an opportunity for learning after project completion.
Figure 1: the project cycle (various versions are in use, but the basic concept is the same for all)
[Circular diagram: Needs Identification → Planning → Implementation → Monitoring → Completion → Evaluation → back to Needs Identification]
Similarly, evaluation does not automatically lead to learning. Evaluation can just be
based on project outputs, research results, or policy implementation targets. The terms
of reference for these may be set at the planning stage and can provide opportunities for
first-order learning - they will inform evaluators whether the project has been completed
successfully and, at best, may also provide some opportunity to learn under which
circumstances this success can be achieved. If, however, the evaluation has been set up
as a means of drawing out key lessons that can be fed into the design of subsequent
iterations of delivery, then learning outcomes can be realised.
The point here is that learning must be a deliberate objective of the M&E process.
Learning must be ‘written into’ the M&E plan in the form of scheduled checkpoints at
which data is formally analysed and reflected upon.
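As a rough illustration of what ‘writing learning in’ might mean if an M&E plan were held as structured data, the sketch below schedules reflection checkpoints alongside the indicators to be collected. The structure and all field names are our assumptions, not an FC template.

```python
# Illustrative only: an M&E plan with learning checkpoints written in explicitly.
# Dates, activities and field names are hypothetical, not an FC specification.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Checkpoint:
    when: date
    purpose: str               # what will be analysed and reflected upon
    participants: list[str]    # who takes part in the collective reflection

@dataclass
class MEPlan:
    project: str
    indicators: list[str]
    checkpoints: list[Checkpoint] = field(default_factory=list)

    def due_by(self, today: date) -> list[Checkpoint]:
        """Checkpoints that should have been held by the given date."""
        return [c for c in self.checkpoints if c.when <= today]

plan = MEPlan(
    project="community woodland programme",
    indicators=["events held", "volunteer days", "attachment to place"],
    checkpoints=[
        Checkpoint(date(2011, 3, 31), "mid-cycle review of monitoring data",
                   ["FC staff", "delivery partners"]),
        Checkpoint(date(2011, 9, 30), "end-of-cycle evaluation and re-planning",
                   ["FC staff", "partners", "community members"]),
    ],
)

if __name__ == "__main__":
    for c in plan.due_by(date(2011, 6, 1)):
        print(c.when, "-", c.purpose, "-", ", ".join(c.participants))
```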
To achieve learning outcomes M&E also needs to produce a certain kind of data. Again, if
the M&E only produces evidence of delivery, on budget, on time, and against specified
objectives, then learning will be limited. In order for policy makers and practitioners to
be able to make the necessary adjustments to the design and implementation of
interventions, they need to understand the reasons behind the failings or successes of
previous applications of policies, programmes or projects. This level of understanding
necessitates a particular kind of evidence – one that enables the precise identification
and description of the causal linkages between inputs and their outputs and outcomes –
it requires data that explains why failings or successes occurred under a certain set of
circumstances (ChannahSorah 2003).
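To make the contrast with simple delivery evidence concrete, a record of this kind might pair each input–output–outcome chain with an explicit account of why it held or failed in its context. The sketch and its contents are hypothetical.

```python
# Illustrative sketch: evidence linking inputs to outputs and outcomes, with the
# reasons for success or failure recorded alongside. All content is invented.
from dataclasses import dataclass

@dataclass
class CausalRecord:
    inputs: str     # resources and activities delivered
    output: str     # the direct, countable product
    outcome: str    # the change the output contributed to
    why: str        # explanation of success or failure under these circumstances

records = [
    CausalRecord(
        inputs="ranger-led guided walks (12 sessions)",
        output="180 local residents attended",
        outcome="repeat independent visits to the wood increased",
        why="walks were co-designed with a residents' group, so timing and routes suited families",
    ),
    CausalRecord(
        inputs="printed leaflet drop (5,000 copies)",
        output="little measurable uptake",
        outcome="no detectable change in visits",
        why="distribution missed the estates closest to the site",
    ),
]

if __name__ == "__main__":
    for r in records:
        print(f"{r.inputs} -> {r.output}: {r.why}")
```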
Because project outputs lead, often indirectly, to project outcomes, and wider, less
predictable or controllable impacts, M&E also needs to be flexible enough to
accommodate and learn from the unexpected. It is possible to set indicators for these
outcomes and impacts at the planning stage, but it is often more difficult to measure
them and attribute them to the project alone. To rely only on ‘pre-cooked’ indicators
would preclude learning from the unexpected. More open-ended forms of evaluation are
also needed to inform stakeholders of the wider effects of a project, programme or
policy. In summary, in project or programme cycles, learning can take place by
comparing M&E findings with the expected or desired state. But if M&E data is structured
in a rigid way that has missed potential outcomes, this structure can stand in the way of
learning and more open research approaches will be needed.
Organisation
The link between evaluation and changes in policy or practice is rarely strong (Bell and
Morse 2003). Rigby et al., (2000) argue that:
“Much of the measurement of indicators, has, at the end of the day, largely resulted just
in the measurement of indicators. The actual operationalization of indicators to influence
or change, for instance, policy is still in its infancy” (cited in Bell and Morse, 2003: 51).
There are some who argue that real learning processes take place not in examining the
results of M&E processes, but in formulating the indicators themselves.
“indicators that are influential are developed with participation of those who are to use
them… the joint learning process around indicator development and use is far more
important in terms of impact than are the actual indicator reports. It is this process that
assures that the indicators become part of the players’ meaning systems. They act on
the indicators because the ideas the indicators represent have become second nature to
them and part of what they take for granted” (Innes and Booher c. 1999).
One area that has attracted particular attention is that of the social ‘ownership’ of M&E.
Despite widespread attention given to the need for participatory M&E, the handing over
of monitoring systems to local communities has rarely been successful (Garcia and
Lescuyer 2008). Several authors from Canada have highlighted the gap between
community M&E and decision making – whether locally or nationally (Conrad 2006,
Conrad and Daoust 2008, Faith 2005).
Where participatory monitoring does take place, it can be a key factor contributing to
success (Hartanto et al. 2002, Wollenberg et al. 2001). Some positive examples are
provided by North America. A case study of ‘improving forest monitoring through
participatory monitoring’ focuses on four organisations who ‘shared a common goal of
creating learning communities to better address the complex array of forest health and
forest livelihood issues’ (Ballard et al. 2010). They highlight ‘important changes in social
and institutional relationships’ between the community, the forestry department and
environmental NGOs. The process helped local people to appreciate the complexity of
forest management, and contributed to increased social capital. More open-ended
approaches to ‘learning from experience’ include the recent emphasis on social forestry
networks and ‘writeshops’, to help practitioners analyse experience and present it in
written format for sharing (e.g. Mahanty et al. 2006).
It follows that M&E should not just be a means of generating technical data, but a means
of bringing people together in the collective activities of gathering, analysing, and
interpreting data and, critically, applying the lessons drawn from them in renewed cycles
of delivery. In short, learning from M&E can be enhanced by an inclusive and
participatory approach, with adequate resources allocated to improving communication
between researchers, policymakers, operational staff, and community members and to
facilitating their combined involvement in all stages of the evaluation cycle.
The combined involvement of stakeholders in data gathering, analysis and interpretation increases the likelihood that lessons will be applied.
Patton (2002: 179) catalogues and references some key approaches, such as action
learning (Pedler 1991), reflective practice (Tremmel 1993), action research (Whyte
1989), internal evaluation (Sonnichsen 2000), organisational development (Patton
1999c), and systematic praxis (Bawden & Packham 1998). With these approaches, the
primary purpose of the evaluation is to yield insights that change practice and enable
programme participants to think more systematically and reflexively about what they’re
doing. Specific findings about programme performance emerge, but they are secondary
to the more general learning outcomes of the inquiry – a distinction that Patton captures
through his differentiation of ‘process use’ from ‘findings use’ (1997).
Recognition of this has had two notable impacts. Firstly, it has given rise to the development of indicators that attempt to capture the more intangible outcomes and the processes of social change that bring them about. Canadian researchers, for example, describe the ‘next generation’ of (forestry) socio-economic indicators as "process" indicators. These deal more with causal effects than outcomes and include things like
sense of place or attachment to place. Process indicators also include variables such as
leadership, volunteerism, entrepreneurship, and social cohesion (Beckley, Parkins, and
Stedman 2002). Secondly, and related to this, it has led to the development of
participatory approaches to indicator development whereby the terms of reference of the
evaluation are sensitised and tailored to specific socio-cultural and economic contexts.
The Center for International Forestry Research (CIFOR) has led work in this field, particularly in relation to the development of indicators of sustainable forest management (SFM). Because the ecological, social and economic dimensions are all interconnected in SFM, they advocate qualitative multi-criteria approaches to developing indicators
(Mendoza and Prabhu 2003). Methods include cognitive mapping, a tool that can help to
assess the interconnectedness of indicators.
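As a rough sketch of what cognitive mapping can involve in this setting (our reading, not CIFOR's published method), indicators can be treated as nodes in a directed graph whose links record perceived influence, so the most interconnected indicators become visible. The indicators, links and weights below are invented.

```python
# Illustrative sketch of a cognitive map over SFM indicators; all links and weights are invented.
from collections import defaultdict

# indicator -> list of (influenced indicator, perceived strength of influence, -1.0 to 1.0)
cognitive_map: dict[str, list[tuple[str, float]]] = {
    "secure tenure": [("local participation", 0.7), ("illegal felling", -0.5)],
    "local participation": [("forest condition", 0.6), ("conflict over use", -0.4)],
    "forest condition": [("household income", 0.5)],
}

def connectedness(cmap: dict[str, list[tuple[str, float]]]) -> dict[str, int]:
    """Count how many links touch each indicator (incoming plus outgoing)."""
    degree: dict[str, int] = defaultdict(int)
    for source, links in cmap.items():
        degree[source] += len(links)
        for target, _weight in links:
            degree[target] += 1
    return dict(degree)

if __name__ == "__main__":
    for indicator, links in sorted(connectedness(cognitive_map).items(), key=lambda kv: -kv[1]):
        print(f"{indicator}: {links} link(s)")
```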
M&E research should not only focus on outcomes and impacts, but also on the processes
through which social change occurs. Process indicators should be considered during
evaluation design and planning.
For ease of reference, the principles are summarised at the end of each section. We have purposefully ‘de-cluttered’ our
presentation of the principles; they are presented as ideals of M&E design, application
and organisational orientation without consideration for how they might actually be
operationalised. In this section we outline a programme of research whereby these
principles can be scrutinised, tested and developed through their ‘real life’ application in
the context of social forestry policy, programme and project delivery in Great Britain.
This research programme consists of two phases, which seek to learn from experience
(phase 1), and develop improved approaches collaboratively (phase 2). We describe
these phases in more detail below.
Phase 1: Learning from experience – this phase will address the following research questions:

RQ1: Why do FC staff gather data about the performance of policies, programmes and
projects?
RQ2: Do FC staff members consider learning outcomes when they are thinking about
M&E design and implementation?
RQ3: What factors lead to successful learning outcomes?
RQ4: What factors prevent successful learning outcomes?
Phase 2: Case studies – the Social and Economic Research Group (SERG) is currently involved in three projects (see below) to
develop and implement frameworks of social forestry M&E. These projects provide
opportunities to put into practice, test and develop the principles set out in this paper.
We propose to carry out this work between 2010/11 and 2012/13. Our research will be
orientated around the principles set out in Section 3. We will adopt an action research
approach, based around the integration of monitoring and evaluation activities into the
wider project cycle, and the active involvement of researchers, policy-makers,
practitioners and community members in M&E tasks, such as indicator development and data gathering, analysis and interpretation.
In principle, the adoption of this participatory approach should deliver benefits in terms
of learning. However, we are aware that it will require increased levels of commitment
and responsibility for policy / operational staff and community members who, under
more conventional models of M&E, would play a fairly passive role. As such, trying to
implement a participatory approach will enable us to document the process and assess it
in terms of practical feasibility and resource implications.
Case study 1: An evaluation of the community impacts and key lessons from the Neroche Landscape Partnership Scheme (LPS)
In August 2010, SERG were asked to carry out an evaluation of the LPS. Although the
Neroche Partnership is not obliged to carry out a formal evaluation under the terms of its
HLF funding, it is keen to maximise the learning value from the programme, both as a
way of rounding off the LPS and providing useful data to underpin legacy activities.
The evaluation work led by SERG will have two main focuses. Firstly it will assess the
impacts of the project on its participants in terms of their enjoyment, learning, skills and
involvement. Secondly, and to offer useful learning value, the evaluation will try to
document and explain positive and negative project outcomes.
Given the short time frame for carrying out the work (3 months for primary research), and the fact that the evaluation is starting quite late in the delivery of the LPS, a ‘belt and braces’ approach to participatory evaluation with active stakeholder involvement in the design, implementation and analysis stages of the M&E is not feasible. However,
because the LPS case study is a good example of many delivery projects where time and
resources for M&E are limited, and because of the LPS team’s own stated need to draw
out key lessons, we feel that trying to facilitate a level of stakeholder participation will
yield valuable insights that can inform our development of practical guidance for learning
from M&E.
Framework development - Working with the LPS team, SERG have produced a plan
for the evaluation that incorporates as much active involvement by stakeholders as
possible within the obvious time and resource constraints, based around the two main
focuses outlined above. SERG researchers will work with the LPS team to design short
questionnaires for use by LPS staff (FC and delivery partners) to evaluate the impacts of
the scheme on participants. These questionnaires are to be distributed to participants at
forthcoming events and emailed/mailed to participants using the LPS contacts database.
Critically, distribution and data input will be led by the LPS team, putting them in a
position to experience directly the feedback given by project participants. Their
experience of implementing the survey and gaining feedback will be drawn out during interviews conducted by SERG researchers (see below).
SERG researchers will conduct one-to-one and/or small group interviews with members
of the LPS project team, partnership board members, and key members of the stakeholder group to elicit their views on the successes, difficulties and failures regarding the governance of
the LPS, its processes/activities and the impacts on the project team, the Landscape
Partnership (funding bodies) and the perceived impacts on project participants and the
affected area. A focus group will also be conducted with the stakeholder group. A
particular focus of discussion during these interviews will be the ways in which various
data types (including M&E data) have been used to inform the design and delivery of the
LPS. We will also examine how the structures, knowledge cultures and communicative
practices within and between the various organisations involved in the governance of the
LPS have inhibited or enabled learning outcomes.
Reporting of the SERG evaluation will be carried out in close collaboration with the LPS
project team to ensure that we maximise the capacity of the evaluation outputs to
identify the successes and challenges of the Neroche LPS so that lessons can be learnt
by FC for other/future projects.
Case study 2: Woodlands In and Around Towns (WIAT)
WIAT supports the creation and management of accessible woodlands for individuals
and communities in urban and post-industrial areas as a way to bring the quality of life
benefits of forests closer to where people live and work. It is the flagship social forestry
programme delivered by Forestry Commission Scotland (FCS) under the Scottish Rural
Development Programme. WIAT was launched in 2005 and by the end of its first three-
year phase it had made a capital investment of £30m in over 110 woods across
Scotland. Now nearing the end of Phase II (2008-2011), FCS have undertaken to
designate 12 existing and new sites to demonstrate the range of benefits delivered
through the WIAT programme. These ‘priority’ sites will provide a strategic focus for the
targeting of future resources to develop exemplars of sustainable urban forest
management, laying the foundation for the delivery of WIAT Phase III.
There is an ongoing need to evaluate the WIAT programme. For WIAT Phase II a number of evaluation resources were developed, including a set of indicators³ and guidance on how to carry out monitoring and evaluation (M&E) of social forestry initiatives to be used by WIAT Challenge Fund grant applicants⁴. In addition, FCS have contracted OpenSpace Research to carry out an evaluation of three WIAT sites⁵.
As part of wider objectives to develop good practice in community engagement, and the
design and delivery of sites, projects and interventions, FCS intend to use the 12 priority
sites to develop a broad M&E framework that can be integrated into the design and
delivery of WIAT Phase III. To that end, FCS have contracted SERG to lead the
development of a bespoke M&E framework to be implemented and tested at the priority
sites. The terms of reference for this work are as follows:
1. To develop an M&E framework that will produce evidence that shows how WIAT sites and interventions are helping to deliver against wider Scottish policy objectives;
2. To provide an M&E resource that can be used by those delivering and managing WIAT sites, as a way of informing the development of best practice;
3. To develop an M&E framework that enables processes of learning and adaptation at both operational and policy levels.
³ Forestry Commission Scotland (2008: 11) Woodlands In and Around Towns: Phase 2. Available at: http://www.forestry.gov.uk/pdf/fcfc120.pdf/$FILE/fcfc120.pdf
⁴ Available at: http://www.forestry.gov.uk/forestry/infd-7djf9c
⁵ Copies of the baseline report can be downloaded at: http://www.forestry.gov.uk/pdf/WIATBaselineSurveyFinal300307.pdf/$FILE/WIATBaselineSurveyFinal300307.pdf
Framework development – SERG will lead the development and testing of the M&E framework at the priority sites. It is likely that SERG will be contracted on an ongoing basis to provide
M&E advice and support to WIAT stakeholders and to assist with data gathering, analysis
and interpretation.
Case study 3: Community engagement in the Forestry Commission Thames Beat

The Thames Beat team has recently drafted a strategy for the delivery of community
engagement (CE) activities within the Forestry Commission Thames Beat (FCTB) for the
period 2010 - 2012. The CE strategy encompasses a monitoring and evaluation strategy
focused on the delivery of CE activities (‘the social offer’). SERG is working with the
FCTB to support the detailed design and implementation of the CE and M&E strategies.
Refining and improving the social offer (i.e. a better service to the public) is a stated
objective of the M&E strategy, and there is a strongly stated intention to use M&E data
to feed into processes of learning and adaptation:
“It is critical that any M&E serves a valuable function in the day to day running and
strategic development of the TB. In order to ensure this relevance, any M&E package
must focus on the aspects of the TB work that can be varied on a relatively short cycle
(such as annually).”
Framework development – SERG are soon to meet with the Thames Beat team to
discuss plans for the 2011/12 CE programme. At this meeting, a participatory approach
to M&E framework development and implementation will be discussed.
--
For further information about the research project ‘Learning from Monitoring &
Evaluation’, contact Jake Morris: [email protected]
References
Bawden, R.J. & Packham, R.G. 1998. ‘Systematic praxis in the education of the
agricultural systems practitioner.’ Systems Research and Behavioural Science 15: 403-12.
Bell, S. and Morse, S. 2003. ‘Measuring sustainability: learning by doing’. Earthscan,
London. 189 pp.
Cohen, D. and Prusak, L. 2001. 'In Good Company. How social capital makes
organizations work'. Boston, Ma.: Harvard Business School Press.
Faith, D. P. 2005. Global biodiversity assessment: integrating global and local values
and human dimensions. Global Environmental Change-Human and Policy Dimensions
15:5-8.
Garcia, C. A., and G. Lescuyer. 2008. Monitoring, indicators and community based
forest management in the tropics: Pretexts or red herrings? Biodiversity and
Conservation 17:1303-1317.
Government Social Research Unit. 2006. ‘The Magenta Book: guidance notes for
policy evaluation and analysis.’ HM Treasury, London.
Hartanto, H., M. C. B. Lorenzo, and A. L. Frio. 2002. Collective action and learning in
developing a local monitoring system. International Forestry Review 4:184-195.
Lawrence, A., B. Anglezarke, B. Frost, P. Nolan, and R. Owen. 2009. What does community forestry mean in a devolved Great Britain? International Forestry Review 11: 281-297.
Mahanty, S., J. Fox, L. McLees, M. Nurse, and P. Stephen. Editors. 2006. Hanging
in the Balance: equity in community-based natural resource management in Asia.
Bangkok: RECOFTC and East-West Center.
Molund, S. & Schill, G. 2007. ‘Looking Back, Moving Forward. Sida Evaluation Manual -
2nd revised edition.’ Sida, Sweden.
Patton, M. 2002. Qualitative Research & Evaluation Methods 3rd Edition. Sage,
Thousand Oaks, California.
- 1997. ‘Utilization-focused evaluation: The new century text.’ 3rd Edition. Sage,
Thousand Oaks, California.
Schön, D. A. 1973. ‘Beyond the Stable State. Public and private learning in a changing
society’. Harmondsworth: Penguin.
Sonnichsen, R.C. 2000. ‘High impact internal evaluation.’ Sage, Thousand Oaks, California.
Tremmel, R. 1993. ‘Zen and the art of reflective practice in teacher education’ Harvard
Educational Review 63 (4): 434-58
Whyte, W.F. (ed.) 1989. ‘Action research for the twenty-first century: Participation,
reflection and practice.’ Special issue of American Behavioural Scientist 32 (5, May/June)
Wollenberg, E., J. Anderson, and D. Edmunds. 2001. Pluralism and the less
powerful: accommodating multiple interests in local forest management. International
Journal of Agricultural Resources, Governance and Ecology 1:199-222.
Yaron, G. 2006. ‘Good Practice from Whitehall and Beyond.’ GY Associates Ltd,
Harpenden UK.