
Journal of Environmental Management 78 (2006) 194–199

www.elsevier.com/locate/jenvman

Why most conservation monitoring is, but need not be, a waste of time
Colin J. Legg a,*, Laszlo Nagy b

a School of GeoSciences, University of Edinburgh, Crew Building, King’s Buildings, Mayfield Road, Edinburgh EH9 3JN, Scotland
b McConnell Ecological Research, 41 Eildon Street, Edinburgh EH3 5JX, Scotland

* Corresponding author. E-mail addresses: [email protected] (C.J. Legg), [email protected] (L. Nagy).

Received 1 July 2004; revised 16 March 2005; accepted 12 April 2005. Available online 19 August 2005.
doi:10.1016/j.jenvman.2005.04.016

Abstract

Ecological conservation monitoring programmes abound at various organisational and spatial levels, from species to ecosystem. Many of them suffer, however, from a lack of attention to goal and hypothesis formulation, survey design, data quality and statistical power at the start. As a result, most programmes are likely to fail to reach the necessary standard of being capable of rejecting a false null hypothesis with reasonable power. Results from inadequate monitoring are misleading in the quality of information they appear to offer and are dangerous because they create the illusion that something useful has been done. We propose that conservation agencies and those funding monitoring work should require the demonstration of adequate power at the outset of any new monitoring scheme.
© 2005 Elsevier Ltd. All rights reserved.

Keywords: Conservation management; Monitoring; Power analysis; Statistical design

1. Objectives of monitoring

The aims of conservation management are either to maintain the status quo or to manipulate the system to achieve some predefined target by modifying the processes that are fundamental to ecosystem structure and functioning. Monitoring [‘intermittent recording of the condition of a feature of interest to detect or measure compliance with a predetermined standard’ (Hellawell, 1991)] is an essential tool in three main tasks: to inform the conservationist when the system is departing from the desired state; to measure the success of management actions; and to detect the effects of perturbations and disturbances.

2. Growth in monitoring as a conservation activity

Monitoring seems to be the automatic response of conservationists to any change or development that is seen as a potential threat to the environment, whether or not it is appropriate. A steep increase in the amount of monitoring work in the 1990s shows itself in the number of publications on the subject including, for example, the annotated bibliography on vegetation monitoring by Elzinga and Evenden (1997), which cites 1406 references. Many of the main conservation organisations are doing or commissioning monitoring work—but will the data that are being collected ever be of much use? There are undoubtedly good examples of long-term monitoring programmes collecting valuable data, but many projects seem unlikely to meet their stated objectives. Monitoring is often inadequate as, for example, Yoccoz et al. (2001), Byron et al. (2000), Wood et al. (2000) concluded in their reviews of the effectiveness of environmental impact statements and biodiversity monitoring. The results of inadequate monitoring can be both misleading and dangerous not only because of their inability to detect ecologically significant changes, but also because they create the illusion that something useful has been done (Peterman, 1990a). Such work may need to be repeated to a higher standard later with added costs.

Probable reasons for poor quality monitoring are not hard to find. One concerns the preferential use of qualitative or semi-quantitative monitoring techniques (e.g. recording of only presence/absence, estimation of population size/condition, site condition assessment), which may be adequate for some purposes, in place of quantitative methods (JNCC Common Standards Monitoring, http://www.jncc.gov.uk/page-2274; http://www.jncc.gov.uk/page-2282).
The usual reason given for using qualitative methods is financial constraints. But the choice of a quantitative method does not guarantee success. A selection of textbooks on ecological monitoring and environmental impact assessment revealed that the majority of those examined (Southwood and Henderson, 2000; Glasson, 1999; Petts, 1999; Calow, 1998; Gilpin, 1995; Morris and Therivel, 1995; Wood, 1995; Goldsmith, 1991; Spellerberg, 1991; Fortlage, 1990) give little or no reference to the important issue of ensuring that the survey design is capable of detecting an impact on the system with adequate power. Only two of the books examined gave brief mention to the important question of hypothesis testing with references to more detailed methodology (Michener and Brunt, 2000; Treweek, 1999). Before the publication of the field manual on monitoring by Elzinga et al. (2001), more specialist books on research methodology (Ford, 2000; Krebs, 1989) or statistical analysis (Zar, 1999; Underwood, 1997; Cohen, 1988) had to be searched for an adequate treatment of the subject; this despite the fact that there are numerous excellent papers published in the ecological literature (see below). These papers seem to have been largely ignored by many practitioners. The reasons for this may be found in the inadequate coverage of monitoring design in degree and post-graduate courses, the lack of availability of suitable digests of scientific publications for in-service staff who commission monitoring work, and the lack of scientific peer-review of tenders by contract researchers (see for example Warnken and Buckley, 2000).

3. What are the requirements for a good monitoring programme?

It appears there is no cookbook recipe for the success and effectiveness of long-term studies. However, Strayer (1986) emphasised the importance of a simple and accommodating design in which the essential measurements and experimental treatments should be straightforward and unambiguously repeatable even by staff lacking sophisticated training (see Boxes 1 and 2).

Box 1
Criteria for good management of a monitoring programme

• secure long-term funding and commitment
• develop flexible goals
• refine objectives
• pay adequate attention to information management
• train personnel and ensure commitment to careful data collection
• locations, objectives, methods and recording protocols should be detailed in the establishment report
• obtain peer review and statistical review of research proposals and publications
• obtain periodic research programme evaluation and adjust sampling frequency and methodology accordingly
• develop an extensive outreach programme

Based on Stohlgren (1995), Stewart et al. (1989) and Hirst (1983).

Box 2
Recommendations for good design and field methods in monitoring

• take an experimental approach to sampling design
• select methods appropriate to the objectives and habitat type
• minimise physical impact on the site
• avoid bias in selection of long-term plot locations
• use field markings adequate to guard against loss of plots
• ensure adequate spatial replication
• ensure adequate temporal replication
• blend theoretical and empirical models with the means (including experiments) to validate both
• synthesise retrospective, experimental and related studies
• integrate and synthesise with larger and smaller scale research, inventory, and monitoring programmes

Based on Yoccoz et al. (2001), Bakker et al. (1996), Stohlgren (1995), Stewart et al. (1989), and Strayer (1986).
Surveillance projects require good estimates of the accuracy and precision of the parameters estimated. While this may also be true of monitoring, the ultimate test of a good monitoring programme is whether it collects data that provide sufficient information to reject the null hypothesis if it is false (see Box 3). Typical null hypotheses may be of one of the following forms:

1. ‘the system has not changed beyond the predetermined limits of acceptable change’
2. ‘the system has changed according to predetermined management objectives and is within the acceptable limits’
3. ‘the perturbation of concern has had no impact on the system; all observed changes to the system can be attributed to other causes’

However, the third null hypothesis of ‘no impact’ will rarely be appropriate in ecology because it will almost always be false, even if the effect is exceedingly small (Johnson, 1999). What is interesting is not to know that the null hypothesis is false, but to ask if the change that has occurred is within acceptable limits. Null hypothesis 3 should, therefore, be re-written in most cases as ‘the effects of the perturbation of concern do not exceed the limits of acceptable change’.

In all cases hypothesis testing requires not only accuracy and precision in the data but, most importantly, information about the statistical properties of the data; that is, information about the degree of accuracy and precision.

Box 3
Ways to increase power in monitoring
The power of a test depends on effect size, error variance, sample size and the Type I error rate (α). For example, the power of a t-test is derived from the t-distribution and the value of t given by:

$$t_{\beta(1),\nu} = \frac{\delta}{\sqrt{s^{2}/n}} - t_{\alpha,\nu}$$

The sample size required to detect a difference between means of δ with power (1 − β) is:

$$n = \frac{s^{2}}{\delta^{2}}\left(t_{\alpha,\nu} + t_{\beta(1),\nu}\right)^{2}$$

(Note that t is a function of n so the solution must be obtained by iteration) where:

n is the sample size,
s² is an estimate of variance,
δ is the minimum detectable difference,
t_{α,ν} is the critical value of t for a probability of α (one-tailed or two-tailed as appropriate),
t_{β(1),ν} is the critical value of t for a one-tailed probability level β,
β is the probability of Type II error, and
ν is the degrees of freedom.

Based on Zar (1999), Cohen (1988), Pearson and Hartley (1976), and Dixon and Massey (1969).
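Because t depends on n (through ν), the sample-size equation above has to be solved iteratively, as noted. The following minimal Python sketch of that iteration is ours, not the authors’; it assumes ν = n − 1 (as for a single-sample or paired design), and the function name and example numbers are purely illustrative:

```python
import math
from scipy import stats

def sample_size(delta, s2, alpha=0.05, beta=0.20, tails=2):
    """Solve n = (s^2/delta^2) * (t_alpha,nu + t_beta(1),nu)^2 by iteration.

    delta: minimum detectable difference; s2: variance estimate;
    alpha: Type I error rate; beta: Type II error rate (power = 1 - beta).
    """
    n = 5.0  # initial guess; iterate because the critical t values depend on n
    for _ in range(100):
        nu = max(n - 1, 1)                            # degrees of freedom
        t_alpha = stats.t.ppf(1 - alpha / tails, nu)  # critical t for alpha
        t_beta = stats.t.ppf(1 - beta, nu)            # one-tailed critical t for beta
        n_new = (s2 / delta ** 2) * (t_alpha + t_beta) ** 2
        if abs(n_new - n) < 1e-6:
            break
        n = n_new
    return math.ceil(n_new)

# e.g. to detect a change of 10 cover units given a variance estimate of 400,
# with alpha = 0.05 and power = 0.8: roughly 34 sample units
print(sample_size(delta=10, s2=400))
```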
Effect size. The larger the effect, or the greater the change in the system, the easier the change will be to detect. Effect size
can be increased by using more sensitive indicators, or by increasing the intensity of the treatment. However, when
planning a monitoring programme the size of the effect is usually unknown. Power analysis, therefore, requires that the
limits of acceptable change should be fixed at the planning stage and the monitoring designed so that a change of that
magnitude will be detected if it occurs. There are no particular guidelines on how the limits of acceptable change should
be fixed other than common sense (Toft and Shea, 1983), but see Cohen (1988).
Error variance. The power of a test depends on variability in the data. The greatest source of variability in the data in most
cases stems from the fact that every sample unit is different from every other. This variance can be reduced at the design
stage by, for example, increasing the size of the sample unit, stratification to reduce variance within strata, or the use of
permanent plots, and by observer training (e.g. Pauli et al., 2004).
There is often an implicit assumption that different observers would obtain the same results when making observations.
An estimate of the between-observer error is essential for long-term monitoring programmes where the same observer is
unlikely to be responsible for the observations throughout the programme. In the few examples where between-observer and within-observer errors have been assessed for estimation of vegetation cover, they have been found to be substantial (e.g. ±10–20%; Nagy et al., 2002; Dethier et al., 1993; Kennedy and Addison, 1987; Sykes et al., 1983; Clymo, 1980).
Ecological systems may fluctuate from year to year because of chance events and changes in weather patterns and
between-year variance cannot usually be assessed until the monitoring programme has been running for a few years.
However, absence of this information is not grounds for ignoring power analysis at the design stage. There may be related
studies available giving good estimates of the expected annual fluctuations and a power analysis based on an intelligent
‘guesstimate’ of between-year variance is considerably better than no power analysis at all.
Sample size. The simplest way to increase power is to increase sample size, but this costs time and money, and sample size should be traded off against the quality of information that can be obtained from each observation. For example,
estimates of plant cover made by averaging the visual estimates of cover in subunits within gridded quadrats show much
less between-observer and within-observer error than visual estimates from ungridded quadrats. If the between-quadrat
variance is high then large numbers of low-precision ungridded quadrats give greater power than the same amount of time
spent on a few high-quality gridded quadrats (Legg, 2000; Nagy et al., 2002). Prior knowledge or a pilot study will be
required to find the optimal method.
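This trade-off can be explored before fieldwork starts. The sketch below (all numbers are invented for illustration; TTestIndPower is from the Python statsmodels package, our choice of tool rather than the paper’s) compares many quick, imprecise quadrats with a few slow, precise ones under a fixed time budget:

```python
from math import sqrt
from statsmodels.stats.power import TTestIndPower

budget = 120          # observer-minutes available per treatment (assumed)
delta = 10            # change in % cover that must be detected (assumed)
sd_quadrat = 20       # between-quadrat standard deviation (assumed)
designs = {           # name: (minutes per quadrat, observer-error sd), assumed
    "ungridded": (3, 12),
    "gridded": (15, 4),
}

calc = TTestIndPower()
for name, (minutes, sd_obs) in designs.items():
    n = budget // minutes                            # quadrats affordable per treatment
    sd_total = sqrt(sd_quadrat ** 2 + sd_obs ** 2)   # combined error standard deviation
    power = calc.power(effect_size=delta / sd_total, nobs1=n, alpha=0.05)
    print(f"{name}: n = {n}, power = {power:.2f}")
```

With these inputs the many ungridded quadrats win, because the between-quadrat component dominates the error variance; shrink sd_quadrat and the ranking reverses, which is why prior knowledge or a pilot study is needed.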
Manley (1992) suggested a practical approach to assessing required sample sizes. First, calculate the maximum sample size that can be collected with the resources available; from that, estimate the power of the test one wishes to apply. If the estimated power is inadequate then one needs to decide whether to proceed or to abandon the study altogether, as there is little point in a monitoring programme that cannot reject a null hypothesis that is false.
If large differences are to be detected the calculated sample size may be rather small. Statisticians caution that samples
smaller than 20 may be too small to assume that the calculated variance would reasonably reflect population variance
(Ebdon, 1985).
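Manley’s check is a one-line calculation once the affordable sample size is known. A hedged sketch (the sample size, effect size and the statsmodels tooling are our assumptions, not the paper’s):

```python
from statsmodels.stats.power import TTestIndPower

# Suppose resources allow at most 15 plots per treatment and the limit of
# acceptable change corresponds to 0.6 standard deviations (both assumed).
power = TTestIndPower().power(effect_size=0.6, nobs1=15, alpha=0.05)
print(f"power = {power:.2f}")  # roughly 0.35: well below 0.8, so redesign or abandon
```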
Type I error rate. By convention the Type I error rate (the probability of rejecting a true null hypothesis) is usually set arbitrarily at α = 0.05, but increasing the acceptable Type I error rate can greatly increase the power of the test. This raises
questions about the balance to be struck between Type I errors and Type II errors. For example, it has been proposed that the
ratio of probability of Type I and Type II errors should equal the inverse of the ratio of the cost of the two errors (Di Stefano,
2001). In conservation ecology the cost of Type II errors—failure to reject the false null hypothesis—may be greater than
the cost of Type I errors—rejection of a true null hypothesis (Shrader-Frechette and McCoy, 1992). Type II errors may
mean the failure to detect damage to the resource and may result in loss of the resource. Type I errors mean that unnecessary
additional management is applied—there is a cost implication, but the resource is not lost. A higher risk of Type I errors
should, therefore, be accepted in order to increase the power of the test.
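Under the Di Stefano (2001) proposal quoted above, one can solve numerically for the α at which the ratio of error probabilities matches the inverse cost ratio. A sketch under invented inputs (the cost ratio, sample size and effect size are all assumptions, and the scipy/statsmodels tooling is ours):

```python
from scipy.optimize import brentq
from statsmodels.stats.power import TTestIndPower

cost_ratio = 4.0          # Type II error judged four times as costly as Type I (assumed)
n, effect_size = 20, 0.8  # plots per group and standardised effect (assumed)
calc = TTestIndPower()

def imbalance(alpha):
    """Zero when alpha/beta equals the inverse cost ratio."""
    beta = 1 - calc.power(effect_size=effect_size, nobs1=n, alpha=alpha)
    return alpha / beta - cost_ratio

alpha_star = brentq(imbalance, 0.01, 0.5)  # root-find over candidate alpha values
print(f"alpha = {alpha_star:.2f}")  # about 0.3 for these inputs, far above 0.05
```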
In the case of environmental threats where the costs of Type II errors are high the burden of proof should shift from the
regulatory bodies to those causing the impact. However, the polluter must be required to demonstrate that the effect does not
exceed acceptable limits with high power (Ebdon, 1985).
Type of test used. Monitoring programmes should be designed around a simple and powerful statistical model (e.g. analysis
of variance, ANOVA) that can make use of all of the information available to reduce residual errors. Power can be increased
by making assumptions about the data so that, for instance, parametric tests are usually more powerful than non-parametric
tests. The hypotheses may also be refined; for example one-tailed tests are more powerful than two-tailed tests, although
good a priori reasons must be present before one-tailed tests are used. Similarly, specifying planned means comparisons in
ANOVA can increase power (Foster, 2001).
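The gain from refining the hypothesis is easy to quantify; for example (assumed effect size and sample size, with statsmodels again as our choice of tool):

```python
from statsmodels.stats.power import TTestIndPower

calc = TTestIndPower()
for alt in ("two-sided", "larger"):  # 'larger' = one-tailed alternative
    p = calc.power(effect_size=0.5, nobs1=20, alpha=0.05, alternative=alt)
    print(f"{alt}: power = {p:.2f}")  # roughly 0.34 two-tailed vs 0.46 one-tailed
```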

4. Power analysis

The importance of estimating the power of a statistical test (power = 1.0 minus the probability of a Type II error, i.e. the probability of rejecting the null hypothesis when it is false) is well understood in the statistical literature (e.g. Zar, 1999; Sokal and Rohlf, 1995; Cohen, 1988). Nonetheless, the statistical power of ecological experiments is too rarely considered in the design stage (Nagy et al., 2002; Peterman, 1990a,b; Toft and Shea, 1983; Warnken and Buckley, 2000 and others). This may cause shortcomings in the interpretation of statistically non-significant results, which are frequently (Peterman, 1990a), but erroneously, interpreted in the ecological literature as implying that the null hypothesis is true (i.e. that the perturbation has had no effect, the system is within the acceptable limits). But if the test has low power then there will be a high probability of a non-significant result even if the null hypothesis is false. Non-significant results may lead to the assumption that the perturbation has had no particular consequence when in fact there is serious loss of conservation value; inappropriate management may be continued even though the system has deviated well beyond the acceptable limits.

For the above reasons, power analysis is fundamental to the planning of long-term monitoring programmes because the consequences of inadequate design may not be obvious until the end of the programme, by which time it will be too late to correct the problem. Whilst the importance of power analysis is being highlighted throughout this paper, the authors are mindful of the work of Hoenig and Heisey (2001) and of the warning by Fox (2001) about the need for ‘due diligence, a mild degree of scepticism and appropriate attention to assumptions [about distribution and error structure]’ whilst performing power analysis.

It is all the more important that a power analysis is used to balance the risk of Type I and Type II errors against their respective costs in terms of both socio-economic and conservation objectives.
For example, Di Stefano (2003) has recently argued that the frequently used rule of thumb of the 5 and 20% rates of Type I and Type II error is inappropriate. It is particularly important to select a design with high power when the cost of Type II errors is relatively high, as has been pointed out by Field et al. (2004), quoting the example of monitoring panda populations.

The power of a test (Box 3) depends on several factors that are within the control of the observer: effect size (acceptable change); survey design and statistical test applied; sample size; and the Type I error rate.

5. Statistical/process-model approach

Numerous authors emphasise the importance of developing a clear ‘model’ or hypothesis (e.g. Yoccoz et al., 2001; Humphrey et al., 1995; Pickett, 1991; Haug, 1983; Johnson and Bratton, 1978) so that the monitoring can be designed to test well-formed hypotheses using classical experimental approaches. ‘These models, either explicit or subconscious, are part of every monitoring project and are usually characterised by being simple, correlated to causes, dynamic (incorporating temporal variability), discrete (reflecting periodic measurement), and analysed either statistically or by simulation’ (Hirst, 1983).

A clear model has two important roles in the present context. Firstly, it focuses attention on the processes of change that are likely to be taking place. This will be important for identifying the best indicators that should be measured. The best indicators will be those that closely reflect the processes of change. Thus, measures of reproductive output or mortality may be more sensitive indicators than estimates of population size, provided that they remain relevant to the hypotheses. A common problem with conservation monitoring is the selection of suitable control sites and sites for valid replication as required by statistical analysis. A statistically balanced, careful experimental design is rarely possible and the ecologist must make do with what is (or can be made) available. This means that one is often left comparing the system at the end of the monitoring programme with the baseline data, rather than comparing them with replicated control sites. Significant change in the system cannot then be automatically attributed to the impact of concern because all ecological systems change with time anyway. The change observed may be quite unconnected with the impact or management of interest. The frequently used BACI (Before-After-Control-Impact) design, although addressing some of these problems (Stewart-Oaten, 1992; Stewart-Oaten and Bence, 2001), has also been demonstrated to be inadequate unless there is appropriate replication of sites (Underwood, 1994).
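Where a replicated BACI design is feasible, the evidence for an impact is the group-by-period interaction rather than either main effect. A minimal sketch of such a test on an invented data set (our Python/statsmodels illustration, not anything from the paper; a fuller analysis would also treat site as a blocking factor):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(42)
rows = []
for group, shift in [("control", 0.0), ("impact", -8.0)]:
    site_means = rng.normal(50, 10, 6)  # six replicate sites per group (invented)
    for period, change in [("before", 0.0), ("after", shift)]:
        for mean in site_means:
            rows.append({"cover": mean + change + rng.normal(0, 3),
                         "group": group, "period": period})
data = pd.DataFrame(rows)

# Only impact sites should change between periods, so the group:period
# interaction term carries the evidence of an impact.
model = ols("cover ~ group * period", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```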
The second important role for a clear process-based model of the expected change is, therefore, to distinguish changes that are of no particular consequence from changes that can be attributed to the impact or management treatment of interest. This can be achieved with an a priori model of the impact that makes precise predictions of the nature of the changes that should be expected. In the absence of a clear causal chain, a convincing case, therefore, requires that: results for several species follow a consistent pattern; plausible mechanisms for an ecological impact can be identified; and reasonable alternative mechanisms have been explored and ruled out (Schroeter et al., 1993). Several authors have emphasised the need for experimental work that must be conducted in association with the monitoring in order to provide and calibrate a model of the changes (e.g. Bakker et al., 1996; Strayer, 1986).

6. Conclusion

Few monitoring programmes pay sufficient attention to the details of hypothesis formulation, survey design, data quality and statistical power at the start. There is, therefore, a high probability that most monitoring will fail to reach the necessary standard of being capable of rejecting a false null hypothesis with reasonable power. It is the responsibility of the sponsoring bodies that commission the monitoring to ensure that a sufficiently high standard is maintained. ‘When planning budgets managers should either give scientists sufficient funds and time to carry out a high power test of the null hypothesis, or not fund them at all’ (Peterman, 1990a,b).

Acknowledgements

We thank Prof E.F. Bruenig, Dr A. Pullin, Dr N. Yoccoz and an anonymous referee for their comments on the manuscript.

References

Bakker, J.P., Olff, H., Willems, J.H., Zobel, M., 1996. Why do we need permanent plots in the study of long-term vegetation dynamics. Journal of Vegetation Science 7, 147–155.
Byron, H.J., Treweek, J.R., Sheate, W.R., Thompson, S., 2000. Road developments in the UK: an analysis of ecological assessment in environmental impact statements produced between 1993 and 1997. Journal of Environmental Planning and Management 43, 71–97.
Calow, P., 1998. Environmental Risk Assessment and Management. Blackwell Science, Oxford.
Clymo, R.S., 1980. Preliminary survey of the peat-bog Knowe Moss using various numerical methods. Vegetatio 42, 129–148.
Cohen, J., 1988. Statistical Power Analysis for the Behavioral Sciences. Lawrence Erlbaum, Hillsdale.
Dethier, M.N., Graham, E.S., Cohen, S., Tear, L.M., 1993. Visual versus random-point percent cover estimations: objective is not always better. Marine Ecology Progress Series 96, 93–100.
Di Stefano, J., 2001. Power analysis and sustainable forest management. Forest Ecology and Management 154, 141–153.
Di Stefano, J., 2003. How much power is enough? Against the development of an arbitrary convention for statistical power calculations. Functional Ecology 17, 707–709.
Dixon, W.J., Massey, F.J., 1969. Introduction to Statistical Analysis. McGraw Hill, New York.
Ebdon, D., 1985. Statistics in Geography. Blackwell, Oxford.
Elzinga, C.L., Evenden, A.G., 1997. Vegetation Monitoring: An Annotated Bibliography. USDA Forest Service, Intermountain Research Station, INT-GTR-352.
Elzinga, C.L., Salzer, D.W., Willoughby, J.W., Gibbs, J.P., 2001. Monitoring Plant and Animal Populations. Blackwell, Oxford.
Field, S.A., Tyre, A.J., Jonzen, N., Rhodes, J.R., Possingham, H.P., 2004. Minimizing the cost of environmental management decisions by optimizing statistical thresholds. Ecology Letters 7, 669–675.
Ford, E.D., 2000. Scientific Method for Ecological Research. Cambridge University Press, Cambridge.
Fortlage, C.A., 1990. Environmental Assessment: A Practical Guide. Gower, Brookfield.
Foster, J.R., 2001. Statistical power in forest monitoring. Forest Ecology and Management 151, 211–222.
Fox, D.R., 2001. Environmental power analysis - a new perspective. Environmetrics 12, 437–449.
Gilpin, A., 1995. Environmental Impact Assessment (EIA): Cutting Edge for the Twenty-First Century. Cambridge University Press, Cambridge.
Glasson, J., 1999. Introduction to Environmental Impact Assessment: Principles and Procedures. UCL Press, London.
Goldsmith, F.B., 1991. Vegetation monitoring. In: Goldsmith, F.B. (Ed.), Monitoring for Conservation and Ecology. Chapman and Hall, London, pp. 77–86.
Haug, P.T., 1983. Resource inventory and monitoring under NEPA. In: Bell, J.F., Atterbury, T. (Eds.), Renewable Resource Inventories for Monitoring Changes and Trends: Proceedings of an International Conference. Oregon State University, College of Forestry, pp. 261–265.
Hellawell, J.M., 1991. Development of a rationale for monitoring. In: Goldsmith, F.B. (Ed.), Monitoring for Conservation and Ecology. Chapman and Hall, London, pp. 1–14.
Hirst, S.M., 1983. Ecological and institutional bases for long-term monitoring of fish and wildlife populations. In: Bell, J.F., Atterbury, T. (Eds.), Renewable Resource Inventories for Monitoring Changes and Trends: Proceedings of an International Conference. Oregon State University, College of Forestry, pp. 175–178.
Hoenig, J.M., Heisey, D.M., 2001. The abuse of power: the pervasive fallacy of power calculations for data analysis. The American Statistician 55, 19–24.
Humphrey, C.L., Faith, D.P., Dostine, P.L., 1995. Baseline requirements for assessment of mining impact using biological monitoring. Australian Journal of Ecology 20, 150–166.
JNCC Common Standards Monitoring. http://www.jncc.gov.uk/page-2274; http://www.jncc.gov.uk/page-2282.
Johnson, D.H., 1999. The insignificance of statistical significance testing. Journal of Wildlife Management 63, 763–772.
Johnson, W.C., Bratton, S.P., 1978. Biological monitoring in UNESCO biosphere reserves with special reference to the Great Smoky Mountains National Park. Biological Conservation 13, 105–115.
Kennedy, K.A., Addison, P.A., 1987. Some considerations for the use of visual estimates of plant cover in biomonitoring. Journal of Ecology 75, 151–157.
Krebs, C.J., 1989. Ecological Methodology. Harper Row, New York.
Legg, C.J., 2000. Review of Published Work in Relation to Monitoring of Trampling Impacts and Change in Montane Vegetation. Scottish Natural Heritage Review No. 131, Battleby.
Manley, B.F.J., 1992. The Design and Analysis of Research Studies. Cambridge University Press, Cambridge.
Michener, W.K., Brunt, J.W., 2000. Ecological Data: Design, Management and Processing. Blackwell, Oxford.
Morris, P., Therivel, R., 1995. Methods of Environmental Impact Assessment. UCL Press, London.
Nagy, L., Nagy, J., Legg, C.J., Sales, D.I., Horsfield, D., 2002. Monitoring vegetation change caused by trampling: a study from the Cairngorms, Scotland. Botanical Journal of Scotland 54, 191–207.
Pauli, H., Gottfried, M., Hohenwallner, D., Reiter, K., Casale, R., Grabherr, G., 2004. The GLORIA Field Manual - Multi-Summit Approach. European Commission, DG Research, EUR 21213, Official Publications of the European Communities.
Pearson, E.S., Hartley, H.O., 1976. Biometrika Tables for Statisticians, vol. 2. Charles Griffin, London.
Peterman, R.M., 1990a. Statistical power analysis can improve fisheries research and management. Canadian Journal of Fisheries and Aquatic Sciences 47, 2–15.
Peterman, R.M., 1990b. The importance of reporting statistical power: the forest decline and acidic deposition example. Ecology 71, 2024–2027.
Petts, J., 1999. Handbook of Environmental Impact Assessment. Blackwell, Oxford.
Pickett, S.T.A., 1991. Long-term studies: past experience and recommendations for the future. In: Risser, P.G. (Ed.), Long-Term Ecological Research: An International Perspective. John Wiley and Sons, Chichester, pp. 71–85.
Schroeter, S.C., Dixon, J.D., Kastendiek, J., Smith, R.O., 1993. Detecting the ecological effects of environmental impacts: a case-study of kelp forest invertebrates. Ecological Applications 3, 331–350.
Shrader-Frechette, K.S., McCoy, E.D., 1992. Statistics, cost and rationality in ecological inference. Trends in Ecology and Evolution 7, 96–99.
Sokal, R.R., Rohlf, F.J., 1995. Biometry. Freeman, New York.
Southwood, T.R.E., Henderson, P.A., 2000. Ecological Methods. Blackwell, Oxford.
Spellerberg, I.F., 1991. Monitoring Ecological Change. Cambridge University Press, Cambridge.
Stewart, G.H., Johnson, P.N., Mark, A.F., 1989. Monitoring terrestrial vegetation for biological conservation. In: Craig, B. (Ed.), Proceedings of a Symposium on Environmental Monitoring in New Zealand with Emphasis on Protected Natural Areas. Department of Conservation, Wellington, pp. 199–208.
Stewart-Oaten, A., 1992. Assessing the effects of unreplicated perturbations: no simple solutions. Ecology 73, 1396–1404.
Stewart-Oaten, A., Bence, J.R., 2001. Temporal and spatial variation in environmental impact assessment. Ecological Monographs 71, 305–339.
Stohlgren, T.J., 1995. Planning long-term vegetation studies at landscape scales. In: Powell, T.M., Steele, J.H. (Eds.), Ecological Time Series. Chapman and Hall, London, pp. 209–241.
Strayer, D., 1986. Long-Term Ecological Studies: An Illustrated Account of their Design, Operation, and Importance to Ecology. Occasional Publication of the Institute of Ecosystem Studies No. 2, New York Botanical Garden.
Sykes, J.M., Horrill, A.D., Mountford, M.D., 1983. Use of visual cover estimates as quantitative estimators of some British woodland taxa. Journal of Ecology 71, 437–450.
Toft, C.A., Shea, P.J., 1983. Detecting community-wide patterns: estimating power strengthens statistical inference. American Naturalist 122, 618–625.
Treweek, J.R., 1999. Ecological Impact Assessment. Blackwell, Oxford.
Underwood, A.J., 1994. On beyond BACI: sampling designs that might reliably detect environmental disturbances. Ecological Applications 4, 3–15.
Underwood, A.J., 1997. Experiments in Ecology. Cambridge University Press, Cambridge.
Warnken, J., Buckley, R., 2000. Monitoring diffuse impacts: Australian tourism developments. Environmental Management 25, 453–461.
Wood, C., 1995. Environmental Impact Assessment. A Comparative Review. Longman, Harlow.
Wood, C., Dipper, B., Jones, C., 2000. Auditing the assessment of the environmental impacts of planning projects. Journal of Environmental Planning and Management 43, 23–47.
Yoccoz, N.G., Nichols, J.D., Boulinier, T., 2001. Monitoring of biological diversity in space and time. Trends in Ecology and Evolution 16, 446–453.
Zar, J.H., 1999. Biostatistical Analysis. Prentice Hall, Englewood Cliffs.
