How to write a research protocol
To cite this article: Christopher C Rout & Colleen Aldous (2016) How to write a research protocol, Southern African Journal of Anaesthesia and Analgesia, 22:4, 101–107, DOI: 10.1080/22201181.2016.1216664

a School of Clinical Medicine, University of KwaZulu-Natal, Durban, South Africa
b Department of Anaesthetics, Critical Care, Nelson R Mandela School of Clinical Medicine, Durban, South Africa
A research protocol is best viewed as a key to open the gates between the researcher and his/her research objectives. Each
gate is defended by a gatekeeper whose role is to protect the resources and principles of a domain: the ethics committee
protects participants and the underlying tenets of good practice, the postgraduate office protects institutional academic
standards, the health authority protects provincial resources, etc. The protocol must explicitly address the issues likely to be
raised by these gatekeepers, demonstrating a clear understanding of the issues involved and showing that all components
of the research plan have been addressed. The purpose of this paper is to add flesh to the skeleton provided in step six (‘write
the protocol’) of the Biccard and Rodseth paper of 2014, orientated towards the first-time researcher working towards the
MMed degree. Although occasional reference will be made to qualitative approaches, it is likely that the majority of these
studies will be quantitative designs and these form the focus of this paper.
This paper assumes that the student has a clear idea of what interests him/her, where the knowledge gap lies (from literature review) and has framed either a research question or hypothesis, even if not fully developed (steps 1–4, Biccard and Rodseth1). Although requirements for protocol format vary between academic centres, we have kept largely to the structure recommended by Biccard and Rodseth, with slight modification (Table 1).

'The purpose of this … (observational/descriptive, comparative, correlational, survival, analytical etc.) study is to … (explore, describe, compare etc.) the … (central focus, i.e. what you are actually measuring) for/of/in … (population sampled) at/in/presenting to … (location) from/over/for the period … (dates, time period).'
In the above example, recording the birthweight of all participants and a history of TB between the ages of 6 and 9 years would address objective 1. Testing for HIV status would not address any of the objectives as stated, so cannot be included in the methodology. Adding this as an 'afterthought', before the study commences, can only happen if the role of HIV status is included in a rewritten background and literature and added as an additional objective in the protocol, which would then have to be resubmitted. Once the study has commenced, any additions or changes cannot be made to the protocol without ethical review.

The methods section is written in the future tense. It should be written so that anybody can use it to reproduce your study exactly (although perhaps with different results). Scrupulous adherence to well-written methods enables complete 'cut and paste' transfer to a report, simply changing future to past tense.

Each of the following must be addressed:

i. the methods used will ensure that the sample (and sampling frame) matches the population in which the problem has been identified and the research question asked (Representativity). Where uneven distributions of a variable are known within a population (e.g. disease distributions related to age, gender or geographical distribution), a probabilistic (random) sampling process should match these via stratified or cluster sampling. In non-probabilistic sampling, methods should be used to demonstrate avoidance of sampling error by ensuring adequate proportional sampling of the known characteristics of the population;

ii. any comparisons of a variable will be made between similarly constructed sample groups (Comparability). When random allocation is used, sufficient relevant demographic data must be recorded to enable subsequent comparability testing. With non-probabilistic sampling the protocol must document methods used to prevent selection bias;
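To make point (i) concrete, the snippet below sketches proportional stratified sampling in Python. It is a sketch only: the sampling frame, the 'age_group' strata and the 10% sampling fraction are assumptions made for the example and are not prescribed by this paper.

```python
import pandas as pd

# Hypothetical sampling frame: one row per potential participant (an assumption for illustration).
frame = pd.DataFrame({
    "patient_id": range(1, 1001),
    "age_group": ["0-5"] * 400 + ["6-12"] * 350 + ["13-18"] * 250,
})

# Proportional stratified random sample: the same fraction is drawn from every stratum,
# so the sample mirrors the known age distribution of the population sampled.
sample = (
    frame.groupby("age_group", group_keys=False)
         .apply(lambda stratum: stratum.sample(frac=0.10, random_state=1))
)

print(sample["age_group"].value_counts())  # 40, 35 and 25 participants: same proportions as the frame
```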
Do not use vague statements such as 'statistical analysis will be performed'. Be specific, for example:

'Descriptive statistics (mean and standard deviation or median and interquartile range as appropriate) will be used to describe the sample groups. Continuous variable group means will be compared using unpaired t-tests for normally distributed data, otherwise non-parametric (Mann–Whitney U) methods will be used. A p-value of < 0.05 will be regarded as statistically significant.'
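For a first-time researcher it may help to see such a pre-specified plan as it would run in practice. The following is a minimal sketch, assuming two made-up groups and a Shapiro–Wilk check for normality; neither the data nor the choice of normality test comes from this paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=120, scale=15, size=40)  # hypothetical control group (e.g. systolic BP)
group_b = rng.normal(loc=112, scale=15, size=40)  # hypothetical intervention group

# Descriptive statistics for each sample group (use median/IQR instead if the data are skewed).
for name, g in (("A", group_a), ("B", group_b)):
    print(f"Group {name}: mean = {g.mean():.1f}, sd = {g.std(ddof=1):.1f}")

# Simple normality check before choosing the comparison test (index [1] is the p-value).
normal = all(stats.shapiro(g)[1] > 0.05 for g in (group_a, group_b))

if normal:
    stat, p = stats.ttest_ind(group_a, group_b)  # unpaired t-test
else:
    stat, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")  # non-parametric alternative

print(f"p = {p:.4f}: {'statistically significant' if p < 0.05 else 'not statistically significant'} at alpha = 0.05")
```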
If relevant (or indeed possible), any planned participant follow-up should be included at the end of this section and the purpose of the follow-up identified. If supplementary data are to be sought they should be described here and included on the data form to accompany the protocol.

f. Sample size, statistical power and variable selection. Statistical advice should be sought. Sample size calculations can depend upon circumstances. For example:

1. Sufficient resources (personnel, time, funding and a high prevalence or incidence) and an estimate of the mean and standard deviation of your outcome variable of main interest may be available. In this case you can calculate the required size of a comparative study to achieve a given statistical level of significance for a predetermined difference of clinical importance between means. You will need to have an estimate of:
• the control mean;
• an important clinical difference (Δ or effect size, ES);
• standard deviation (σ, sd) of your variable;
• your chosen α (probability of accepting a result as a statistically significant difference when in reality there is no difference);
• required statistical power (probability that a study will detect an effect when there is an effect there to be detected).

These are the estimates the statistician will request in order to assist with the calculation.

The control mean may be known (e.g. from previous research, or a physiological value such as a systolic pressure of 120 mm Hg), and similarly the standard deviation. In which case, the 'standard' chosen values of α of 0.05 and power of 0.8 might be used (N.B. these 'standard' values are by convention, not rule; there are situations, e.g. differences in mortality, where you would want to be more certain and therefore choose a smaller value of α and/or a higher power).

Defining an important clinical difference is more of a challenge, but it represents your only justifiable method of obtaining a study size matched to your resources. Strictly speaking, the important clinical difference is the smallest difference that would make you change your practice. The most commonly used calculable estimate is known as the standardised mean difference (ES):

ES = (X̄1 − X̄2) / SD

The difference from your control mean (X̄1 − X̄2, desired effect size) can be altered until the difference divided by the control standard deviation represents an appropriate value for the primary outcome measure of the study.
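To show how these estimates combine, here is a rough sketch of the per-group sample size using the common normal-approximation formula n ≈ 2 × (z(1−α/2) + z(power))² / ES². A statistician, or a dedicated package, would normally refine this with an exact t-based calculation; the worked numbers are illustrative assumptions.

```python
import math
from scipy.stats import norm

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate participants needed per group for a two-sided comparison of two means,
    using n = 2 * (z_{1-alpha/2} + z_{power})^2 / ES^2 (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return math.ceil(2 * (z_alpha + z_power) ** 2 / effect_size ** 2)

# Illustrative example: control mean 120 mm Hg, SD 15, and a 7.5 mm Hg difference
# judged to be the smallest clinically important difference.
es = 7.5 / 15  # standardised mean difference ES = (X1 - X2) / SD = 0.5
print(n_per_group(es))                         # about 63 per group at alpha 0.05, power 0.8
print(n_per_group(es, alpha=0.01, power=0.9))  # stricter alpha and power give a larger study
```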
Table 2: Statistical treatment: what the reviewer will be looking for (adapted from Bland et al.22)
(1) The study design and aims must be appropriate to the proposed statistical treatment that will depend upon issues such as randomisation, number of groups, whether repeated measures are to be used, possible confounding or interacting factors etc.
(2) The types of data (continuous, discrete, nominal etc.) to be collected should be specified, to ensure that the proposed statistical analysis is appropriate, with clear identification of the outcome variable(s) of main interest on which any power analysis is based
(3) The number of outcome measures that are to be recorded should be stated, to avoid or adjust for issues of multiple testing
(4) Any power analysis to estimate required sample sizes for the outcome of interest must be based upon the statistical test identified for use in its subsequent evaluation
(5) The information used in any power analysis (means, proportions, standard deviations, effect size, proposed alpha and power etc.) must be included to permit the reviewer to repeat the calculation
(6) Confirmation that the investigator has the required knowledge to perform the subsequent analysis, or whether assistance will be provided by a statistician or person with the necessary expertise
A general view of the values of ES might be:
• ≤ 0.2: a very small effect, of negligible importance;
• 0.5: of moderate importance;
• 0.8: a large difference of considerable importance;
• ≥ 1: cannot be ignored.

But again, this is context specific. If your primary outcome measure is death, an effect size of 0.2 is important, whereas if the outcome were a readily treatable decrease in systolic blood pressure (say from 120 to 108) then an effect size of 1 might not be considered very important.

2. Alternatively, you may have few resources and not know the size and range of the variable of interest and wish to describe it in a pilot study for future research. You should, however, have some idea of how many potential participants you will be able to see in the time available (e.g. from a clinic's records).

In this case, statistical calculations can indicate how accurate your estimates of average and range will be. This is important in a descriptive study, and explains why protocols containing 'this is a descriptive study only and requires no statistical analysis' may be rejected by reviewers. To underscore this point, Figure 2 depicts the upper and lower 95% confidence bounds on a proportion of 0.1. Assuming a true prevalence of 0.1, if samples of 10 were repeatedly taken, 95% of these estimates would be found between 0.0025 and 0.445, which represent a large range of possible answers far removed from the real one. This would not represent an adequate sample size for a useful description of an outcome measure, in contrast to repeated samples of 200 (95% confidence limits 0.062–0.15).
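The confidence limits quoted above can be checked with an exact (Clopper–Pearson) interval for a proportion. The sketch below is one of several possible interval methods and simply reproduces the two scenarios (1 event in 10 versus 20 events in 200).

```python
from scipy.stats import beta

def clopper_pearson(events: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) two-sided confidence interval for a proportion."""
    lower = beta.ppf(alpha / 2, events, n - events + 1) if events > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, events + 1, n - events) if events < n else 1.0
    return lower, upper

# An observed prevalence of 0.1 estimated from two very different sample sizes.
print(clopper_pearson(1, 10))    # roughly (0.0025, 0.445): far too wide to be useful
print(clopper_pearson(20, 200))  # roughly (0.062, 0.150): a much more informative estimate
```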
Variables selected for documentation and analysis should be kept to the minimum necessary to achieve the aims and objectives of the study and answer the research question. Avoid the temptation to over-test, either by multiple testing of the same variable, or unnecessary testing of additional variables (usually in pursuit of critical p-values). Just as buying several lottery tickets increases the probability of winning a prize, so multiple testing increases the probability of finding an erroneous statistically significant difference (type I error). This is a particular problem with predictive observational outcome studies when too many risk factors are added to a multiple logistic regression analysis. If using this type of study design, the reviewer will check the anticipated outcome incidence to ensure an appropriate number of positive outcomes (about 5–10) for each risk factor added to the regression. An additional problem with this type of study is the confounding effect of two or more related variables (e.g. height, weight and body mass index).
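The 'about 5–10 positive outcomes per risk factor' rule of thumb can be turned into a quick feasibility check at the protocol stage. The figures below (400 participants, 15% anticipated incidence, 10 events per variable) are assumptions chosen only for illustration.

```python
def max_risk_factors(expected_n: int, expected_event_rate: float, events_per_variable: int = 10) -> int:
    """Rough upper limit on the number of risk factors a logistic regression can support,
    based on the events-per-variable rule of thumb."""
    expected_events = expected_n * expected_event_rate
    return int(expected_events // events_per_variable)

print(max_risk_factors(400, 0.15))  # 60 expected events: at most about 6 risk factors
```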
Methodological challenges and study limitations
This should be a concise, realistic view of the challenges to achieving the aims and objectives of the study. It should be long enough and detailed enough to demonstrate to the reviewer that the study team has insight into what it is doing, but not so long and detailed as to suggest that the project has no hope of success. Each challenge presented must be accompanied by a summary of how the protocol meets the challenge. For example:
• How have response rates to questionnaires been improved?
• What efforts have been put in place to select a representative sample?
• Can the results be generalised?

Feasibility
a. Time lines and project management. It must be demonstrated that the study can be completed in the time available. All stages of the research must be included and time allocated to literature search, protocol preparation and realistic turnaround time for necessary review following submission, recruitment and data collection, data collation and entry into electronic format, statistical analysis and review, and finally write-up; use of a Gantt chart is recommended. The project manager (usually the principal investigator) is responsible for ensuring timeous completion of each stage of the project.
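A simple Gantt chart of the kind recommended above can be drafted in a few lines with matplotlib; the stages and durations below are placeholders to be replaced with the student's own time line.

```python
import matplotlib.pyplot as plt

# Hypothetical stages as (name, start month, duration in months); adjust to your own project.
stages = [
    ("Literature review", 0, 2),
    ("Protocol writing", 1, 2),
    ("Ethics and postgraduate review", 3, 3),
    ("Recruitment and data collection", 6, 6),
    ("Data collation and entry", 11, 2),
    ("Statistical analysis and review", 13, 2),
    ("Write-up", 15, 3),
]

fig, ax = plt.subplots(figsize=(8, 3))
for row, (name, start, duration) in enumerate(stages):
    ax.barh(y=row, width=duration, left=start)  # one horizontal bar per stage
ax.set_yticks(range(len(stages)))
ax.set_yticklabels([name for name, _, _ in stages])
ax.invert_yaxis()                               # first stage at the top
ax.set_xlabel("Month of project")
ax.set_title("Draft Gantt chart (illustrative time line)")
fig.tight_layout()
fig.savefig("gantt.png")                        # or plt.show()
```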
b. Study team, contributors and authorship. From the outset it should be clear who is responsible for each component of the study and who should be acknowledged and who should be an author on any papers published from the research. This not only clarifies everybody's role in the project but also avoids possible future embarrassment or acrimony. Also, naming individuals responsible for each part of the research project can ensure that everything gets done. For example, any laboratory analysis requires identification of the individual responsible for the analysis additional to permission to use the laboratory facilities for the project.
Contributions not complying with all the criteria merit a detailed 'Acknowledgement' at the end of the paper.

Proposed authorship need not be cast in stone, as required roles may change during the course of the study, but changes to the study personnel may require notification to the Ethics Committee.

c. Participating centres. If the study is to be conducted in more than one centre, all centres should have the requisite resources (time, personnel, equipment and expertise) to fulfil study requirements.

d. Study funding and progress. Protocol submission for a study without adequate funding will never bear fruit and is a waste of everybody's time. However, it is acceptable to submit a realistic budget with the protocol before a grant has been awarded, as most grants will be subject to ethical (and in the case of a degree, postgraduate committee) approval, and the grant application will require your protocol. This section of the protocol must be completed even if there are no direct costs (e.g. a historical chart review) or the stationery etc. can be covered by departmental resources. Costs must match funds. Reasons should be given why grants are delayed or deferred if that is the case.

Ethical considerations
While the underlying principles of autonomy, beneficence, non-maleficence and justice form the basis of research ethics, ethical review has to be more extensive. Notably, the participants in the study have to be protected from inexpert and unqualified researchers.

Study significance
Include a brief concluding paragraph as to the expectations of the study in terms of improving knowledge and how the results can be applied to the underlying clinical problem addressed by the study.

Example:
'The significance of this study into the factors underlying the pharmacogenetic basis of mitochondrial disorders uncovered by HIV infection or initiation of NRTI drugs will result in more effective design of NRTI drugs with enhanced activity and minimal toxicity, leading to improved patient outcome.'

Appendices
These should include your research instrument (i.e. questionnaire or data-collection tool), patient information sheet and consent form, letters of approval, certificates of ethical and clinical good standing and brief curriculum vitae of the principal investigator, any co-investigators, supervisors and co-supervisors etc. Ensure that all required documentation is included with your protocol and HREC submission, using any checklist provided.

Conclusion
Following this format should, it is hoped, result in a smooth passage through review committees. The format can be converted into a template design that can be used to complete a draft protocol within a day, provided that the initial literature review and conceptualisation have been completed beforehand.
References
1. Biccard BM, Rodseth RN. Taking an idea to a research protocol. South Afr J Anaesth Analg. 2014;20(1):14–18.
2. Creswell JW. Educational research: planning, conducting, and evaluating quantitative and qualitative research. 4th ed. London: Pearson Education; 2012. ISBN-10: 0-13-136739-0, ISBN-13: 978-0-13-136739-5. p. 109–139.
3. Aldous C, Rheeder P, Esterhuizen T. Writing your first clinical research protocol. Cape Town: Juta and Company; 2011.
4. Grimes DA, Schulz KF. An overview of clinical research: the lay of the land. The Lancet. 2002 Jan 5;359(9300):57–61. http://dx.doi.org/10.1016/S0140-6736(02)07283-5
5. Schulz KF, Grimes DA. The Lancet handbook of essential concepts in clinical research (The Lancet Handbooks). Amsterdam: Elsevier; 2006. ISBN-13: 978-0080448664, ISBN-10: 0080448666.
6. Grimes DA, Schulz KF. Descriptive studies: what they can and cannot do. The Lancet. 2002 Jan 12;359(9301):145–9. http://dx.doi.org/10.1016/S0140-6736(02)07373-7
7. Grimes DA, Schulz KF. Bias and causal associations in observational research. The Lancet. 2002 Jan 19;359(9302):248–52. http://dx.doi.org/10.1016/S0140-6736(02)07451-2
8. Grimes DA, Schulz KF. Cohort studies: marching towards outcomes. The Lancet. 2002 Jan 26;359(9303):341–5.
9. Schulz KF, Grimes DA. Case-control studies: research in reverse. The Lancet. 2002 Feb 2;359(9304):431–4.
10. Schulz KF, Grimes DA. Generation of allocation sequences in randomised trials: chance, not choice. The Lancet. 2002 Feb 9;359(9305):515–9.
11. Schulz KF, Grimes DA. Allocation concealment in randomised trials: defending against deciphering. The Lancet. 2002 Feb 16;359(9306):614–8.
12. Schulz KF, Grimes DA. Blinding in randomised trials: hiding who got what. The Lancet. 2002 Feb 23;359(9307):696–700.
13. Schulz KF, Grimes DA. Sample size slippages in randomised trials: exclusions and the lost and wayward. The Lancet. 2002 Mar 2;359(9308):781–5.
14. Grimes DA, Schulz KF. Uses and abuses of screening tests. The Lancet. 2002;359:881–4.
15. Schulz KF, Grimes DA. Unequal group sizes in randomised trials: guarding against guessing. The Lancet. 2002 Mar 16;359(9310):966–70.
16. Schulz KF, Grimes DA. Sample size calculations in randomised trials: mandatory and mystical. The Lancet. 2005 Apr 9;365(9467):1348–53.
17. Grimes DA, Schulz KF. Compared to what? Finding controls for case-control studies. The Lancet. 2005 Apr 16;365(9468):1429–33.
18. Grimes DA, Schulz KF. Refining clinical diagnosis with likelihood ratios. The Lancet. 2005 Apr 23;365(9469):1500–05.
19. Schulz KF, Grimes DA. Multiplicity in randomised trials I: endpoints and treatments. The Lancet. 2005 Apr 30;365:1591–95.
20. Schulz KF, Grimes DA. Multiplicity in randomised trials II: subgroup and interim analyses. The Lancet. 2005 May 7;365(9470):1657–61.
21. Brown CA, Lilford RJ. The stepped wedge trial design: a systematic review. BMC Med Res Methodol. 2006 Nov 8;6(1):54. doi:10.1186/1471-2288-6-54. Available from: http://www.biomedcentral.com/1471-2288/6/54.
22. Bland JM, Butland BK, Peacock JL, Poloniecki J, Reid F, Sedgwick P. Statistics guide for research grant applicants. Department of Public Health Sciences, St George's Hospital Medical School. 2012 [cited 2015 Nov]. Available from: https://www-users.york.ac.uk/~mb55/guide/guide14.pdf.
23. Available from: http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html

Received: 02-03-2016 Accepted: 21-07-2016