How To Conduct Meta Analysis
Arindam Basu
University of Canterbury
Concepts of meta-analyses
Meta-analysis refers to the process of integrating the results of many studies to arrive at an evidence synthesis (Normand, 1999). A meta-analysis is essentially a systematic review; however, in addition to the narrative summary that a systematic review provides, in a meta-analysis the analysts also numerically pool the results of the studies and arrive at a summary estimate. In this paper, we discuss the key steps of conducting a meta-analysis, and we demonstrate those steps with a published systematic review and meta-analysis of the effectiveness of a salt-restricted diet on blood pressure control. This paper is a basic introduction to the process of meta-analysis. In subsequent papers in this series, we will discuss how to conduct meta-analyses of diagnostic and screening studies, and the principles of network meta-analysis, where more than one intervention or exposure variable can be analysed in the same meta-analysis.
The population refers to the individuals or population of interest to us. For example, if we are interested in the effectiveness of a drug such as nedocromil on bronchoconstriction (narrowing of the air passages) among adult asthma patients, then we shall include only adult asthmatics in our study, not children or older adults (if such individuals are not of interest to us); on the other hand, if we are interested in studying the effectiveness of mindfulness meditation for anxiety in adults, then again the adult age group is our population of interest, and we could further narrow the age band as needed.
The intervention can be defined as broadly or as narrowly as needed, keeping only the interventions of our interest. Usually, meta-analyses assimilate RCTs or quasi-experimental studies in which pairs of interventions (intervention versus placebo, intervention versus conventional treatment, or intervention versus no treatment) are compared (Normand, 1999). Note that meta-analyses are not restricted to randomised controlled trials; they are now increasingly applied to observational study designs as well, for example cohort and case-control studies; in these situations, we refer to the specific exposure variables of our interest (Stroup et al., 2000). Meta-analyses are also conducted for diagnostic and screening studies (Hasselblad and Hedges, 1995).
Let's say we are interested in testing the hypothesis that consumption of plant-based diets is associated with reduced risk of cardiovascular illnesses. You can see that, for ethical reasons, it is not possible to conduct a randomised controlled trial in which one group is forced to consume a plant-based diet and the other group is forced to consume a non-plant-based diet; but it is possible to obtain information about heart disease from two groups of people who have and have not consumed certain levels of vegetarian items in their diets. Such studies are observational epidemiological studies, such as cohort and case-control studies, and in such situations it is their findings that we summarise. The term "intervention" is then not appropriate; instead, we use the term "exposure". Likewise, the comparison group is important as well. The comparison group can be "no intervention", "placebo", or "usual treatment".
The outcomes that we are interested in can be narrowly or broadly defined based on the objective of the meta-analysis. If the outcome is narrowly defined, then the meta-analysis is restricted to that outcome alone; for instance, if we are interested in studying the effectiveness of mindfulness meditation on anxiety, then anxiety is our outcome and we are not interested in finding out whether mindfulness is effective for depression. On the other hand, if the objective of the study is to test whether mindfulness meditation is useful for "any health outcome", then the scope of the search is much wider. So, after you have set up your theory and your question, it is time to rewrite the question and reframe it as a PICO-formatted question. Say we are interested in finding out whether mindfulness meditation is effective for anxiety; then we may state the question in PICO as follows:
• P: Adults (age 18 years and above), both sexes, all ethnicities, all nationalities
• I: Mindfulness Meditation
• C: Placebo, or No Intervention, or Anxiolytics, or Traditional Approaches, or Drug-Based Approaches, or Other Cognitive Behavioural Therapy
• O: Anxiety Symptom Scores, or Generalised Anxiety
Then, on the basis of PICO, we reframe the question as follows: ”Among Adults, compared with all other
approaches, what is the effectiveness of Mindfulness Meditation for the relief of Anxiety?”
When you search bibliographic databases, you can combine the Boolean operators "AND", "OR", and "NOT" in various combinations to expand or narrow down the search results. For example:
• "Adults" AND "Mindfulness Meditation" will find only those articles that have BOTH "Adults" AND "Mindfulness Meditation" as their subject topics.
• "Adults" OR "Mindfulness Meditation" will find all articles that have EITHER "Adults" OR "Mindfulness Meditation" in their subject topics, so the number of results returned will be larger.
• "Adults" NOT "Mindfulness Meditation" will find only those articles that contain "Adults" but will exclude all articles that have "Mindfulness Meditation" as their topic area.
In addition to Boolean logic, you can also use proximity ("fuzzy") operators to search for very specific articles; for example, "Adults" NEAR "Mindfulness", or "Adults" WITHIN 5 words of "Mindfulness", will retrieve articles in which the two terms appear close to each other. These operators can be combined in many different ways.
Many databases, such as PubMed/Medline, use MeSH (Medical Subject Headings) as a controlled vocabulary, whereby the curators of these databases index different articles under specific subject headings (Robinson and Dickersin, 2002). When you search Medline or PubMed, you can use MeSH terms to search for your studies, and you can combine MeSH terms with other terms to search more widely or more comprehensively.
Besides these, you can use specific symbols, such as the asterisk (*) or the dollar sign ($), to indicate truncation and to find related terms. For example, if you use something like "Meditat$" as a search term, you will find articles that use the terms "meditating", "meditation", "meditative", or "meditational"; you will find the list of such symbols in the documentation section of the database that you intend to search (Robinson and Dickersin, 2002).
Finally, search terms can occur in many different sections and parts of a study report. One approach is to search the titles and abstracts of the studies; another is to search within the entire body of the article. Thus, combining these various strategies, you can run a comprehensive search of the publications that will contain the data you can use for your meta-analysis.
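As an illustration, here is a minimal sketch that builds such a Boolean query (combining MeSH terms, title/abstract terms, and truncation) and submits it to PubMed from R using the rentrez package. The package, the field tags, and the exact terms are assumptions made for illustration; they are not the search strategy of any particular review.

library(rentrez)   # assumes install.packages("rentrez") has been run

# Combine MeSH terms, free-text title/abstract terms, truncation, and Boolean operators
query <- paste(
  '("Meditation"[MeSH Terms] OR "mindfulness"[Title/Abstract] OR meditat*[Title/Abstract])',
  'AND ("Anxiety"[MeSH Terms] OR "anxiety"[Title/Abstract])',
  'AND "Adult"[MeSH Terms]'
)

# Run the search against PubMed and inspect the results
res <- entrez_search(db = "pubmed", term = query, retmax = 100)
res$count     # total number of matching records
res$ids[1:5]  # PubMed IDs of the first few records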
Step III: Select the articles for meta analysis by reading titles and abstracts and full texts
First, read the titles and abstracts of all the papers retrieved by your search. But before you do so, set up a scheme by which you will decide whether to select or reject articles for your meta analysis. For example, you can set up a scheme where you write:
• The article is irrelevant for the study question
• The article does not have the relevant population
• The article does not have the relevant intervention (or exposure)
• The article does not have a relevant comparison group
• The article does not discuss the outcome that is of interest to this research
• The article is published in a non-standard format and not suitable for review
• The article is published in a foreign language and cannot be translated
• The article is published outside of the date ranges
• The article is a duplicate of another article (same publication published twice)
Use this scheme to go through each and every article you retrieved initially, on the basis of their titles and abstracts. Usually a single clause is enough to reject a study; even if a study could be rejected on two clauses, the first clause that rejects it is recorded as the main reason for rejection. You will need to put together a process diagram to indicate which articles were rejected and why. Such a process diagram is referred to as a PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) chart (Moher et al., 2009). After you have run through this step and identified the studies that should be included in the meta-analysis, obtain their full texts. Then read the full texts and conduct the rejection exercise once more, noting the numbers; as may be expected, you will reject fewer articles in this round. Then, hand search the reference lists of these articles to widen your search. This step is critical. Often, in this step, you will find sources that you must search, or identify authors whose work you must read, to get a full list of all the research that has been conducted on this topic. Do not skip this step. You will note that some authors feature prominently and some research groups surface; take note of them, as you may have to write to a few authors to ask whether they have published more research. All this is needed to run a thorough search, so that you do not miss any study that may be relevant for the meta-analysis.
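To illustrate the bookkeeping behind the PRISMA chart, here is a minimal sketch in R, with an invented screening log, that keeps only the first exclusion clause per article as the main reason and tallies the reasons; the data frame, its column names, and the helper function are hypothetical.

# Hypothetical screening log: one row per article, with the clauses that applied
screening <- data.frame(
  id      = c("art01", "art02", "art03", "art04"),
  reasons = c("wrong population; wrong outcome",   # two clauses apply
              "",                                  # no exclusion clause: retained
              "irrelevant to the study question",
              "duplicate publication"),
  stringsAsFactors = FALSE
)

# Keep only the FIRST clause that rejects each article as the main reason
first_reason <- function(x) trimws(strsplit(x, ";")[[1]][1])
screening$main_reason <- ifelse(screening$reasons == "", "included",
                                sapply(screening$reasons, first_reason))

# Tally of main reasons, ready to be reported in the PRISMA chart
table(screening$main_reason)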
16. A quality score or a note on the quality or critical appraisal of each study
This is just a suggestion; I do not recommend a fixed set of variables, and you will determine what variables you need for each meta analysis. If you use software such as RevMan, it will guide you through the process of abstracting data from each article, and you should follow the steps there. Note that in this case we are only considering tabulation of this information per article, and that we will work with one intervention and one outcome in each table. You may have more than one outcome in the paper; in that case, you will need to set up different tables. Enter this information in a spreadsheet, and export the spreadsheet as a CSV file that you can read into R. In this exercise we will use R for statistical computing (R Core Team, 2013).
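For instance, reading such an exported extraction table into R might look like the following sketch; the file name and the column names (study, d_sbp, sbp_se, d_dbp, dbp_se, design) are assumptions chosen to match the analysis code used later in this paper.

# A minimal sketch, assuming the extraction spreadsheet was exported as
# "htn_meta.csv" with one row per study; the column names are illustrative:
#   study  - study identifier
#   d_sbp  - mean change in systolic blood pressure (mm Hg)
#   sbp_se - standard error of that change
#   d_dbp  - mean change in diastolic blood pressure (mm Hg)
#   dbp_se - standard error of that change
#   design - study design code, used later for subgroup analysis
htn_meta <- read.csv("htn_meta.csv", stringsAsFactors = FALSE)
str(htn_meta)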
Step VI: Determine the extent to which the articles are heterogeneous
Think about the distinction between a systematic review and a meta analysis. In a systematic review, the analysts follow the same steps as above (frame a question, conduct a search, identify the right type of research, extract information from the articles); then all studies that are fit to be included in the review are summarised, and the patterns of information are tabulated and itemised. This means that all study findings for a set of outcomes and interventions are identified, tabulated, and discussed in the systematic review. In a meta analysis, on the other hand, there is an implicit assumption that the studies have come from a population that is fairly uniform across the intervention and outcomes. This can be conceptualised in one of two ways. One view is that the body of studies you have identified is exhaustive, so that the estimate you obtain for the association between the exposure or intervention and the outcome, based on this set of evidence, defines or estimates the one true association. This is the concept of fixed effects meta analysis (Hunter and Schmidt, 2000). Alternatively, you can conceptualise the studies that you have identified as a sample from a larger population of studies: this subset of studies is interchangeable with any other study in that wider population, and hence this set of studies is just a random sample of all possible studies. This is the notion of random effects meta analysis (Hunter and Schmidt, 2000). So, are the studies very similar or homogeneous in the scope of the
intervention, the population, and the outcomes? It is important to ask this when we conduct a meta-analysis, because if the studies are so different from each other that it is impossible to pool their results, then we have to abandon any notion of pooling the study findings to arrive at a summary estimate. If the findings are close enough, then the studies are homogeneous and we would conclude that it is acceptable to pool the study results using what is referred to as a fixed effects meta analysis. If, on the other hand, the studies differ in their results but other aspects (selection of the population, the intervention, and the outcomes) are sufficiently uniform, then we can still combine the results of the studies, but we would conclude that the apparent lack of homogeneity arises because these studies are part of a larger, wider population of all possible studies, and hence we would rather report a random effects meta analysis.
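In standard notation (this is the usual textbook formulation of the two models, not anything specific to the source paper), with $y_i$ the observed effect in study $i$ and $v_i$ its within-study variance:

Fixed effects: $y_i = \theta + \varepsilon_i$, with $\varepsilon_i \sim N(0, v_i)$

Random effects: $y_i = \theta + u_i + \varepsilon_i$, with $u_i \sim N(0, \tau^2)$ and $\varepsilon_i \sim N(0, v_i)$

Here $\tau^2$ captures the between-study heterogeneity; when $\tau^2 = 0$, the random effects model reduces to the fixed effects model.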
We will discuss two ways to measure the heterogeneity of the studies. One way is to use a statistic referred to as Cochran's Q. The Q statistic is a chi-square statistic. The assumption here is that the studies all come from the same "population" and are therefore homogeneous, so a fixed-effects meta-analysis would be an appropriate way to express the summary findings. Accordingly, the software first estimates a fixed-effects summary estimate. The fixed effects summary estimate is a weighted average of the study effect sizes, where the weight of each study is the inverse of the variance of its effect estimate. Then, the weighted sum of squared differences between the summary estimate and each individual estimate has a chi-squared distribution with K − 1 degrees of freedom, where K is the number of studies. A low chi-square value indicates that the studies are indeed homogeneous; otherwise, it indicates that the studies are heterogeneous. If the studies are found to be statistically heterogeneous, your next step is to examine whether there are real reasons for them to be heterogeneous, i.e., whether the populations, the interventions, and the outcomes are very different from each other. If that is indeed the case, then you would summarise the study findings as you would in a systematic review. If, on the other hand, you find that the studies are otherwise similar, but one or more studies drag the summary estimate in one direction rather than another, you may assume that, while the studies are not homogeneous, they are drawn from a larger pool of studies, and hence you may conduct a random effects meta analysis.
Another measure of statistical heterogeneity for meta analyses is the I² estimate. I² is derived from a related quantity, H², where H² = Q/(K − 1) and K is the number of studies. If Q > K − 1, then I² is defined as (H² − 1)/H²; otherwise I² is given the value 0. For example, suppose we are working with 10 studies and the Q statistic is 36 (meaning that the weighted sum of squared differences between the estimated fixed effect size and the individual effect size estimates is 36); since Q > 9 for K = 10 studies, H² = 36/9 = 4, and I² is therefore 3/4, or 75%. A high I² statistic indicates gross heterogeneity, while a low I² value implies homogeneity of the studies (the cutoff is conventionally set at around 30%).
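As a sketch of this arithmetic, the R code below computes the fixed effects estimate, Q, H², and I² directly from a set of invented effect estimates and standard errors, following the definitions above; the numbers are purely illustrative.

# Hypothetical effect estimates (e.g., change in blood pressure) and standard errors
effect <- c(-2.1, -1.4, -3.0, -0.8, -2.6)
se     <- c(0.6, 0.9, 1.1, 0.7, 1.3)

K <- length(effect)
w <- 1 / se^2                            # inverse-variance weights
theta_fixed <- sum(w * effect) / sum(w)  # fixed effects summary estimate

# Cochran's Q: weighted sum of squared deviations from the summary estimate
Q <- sum(w * (effect - theta_fixed)^2)

# H^2 and I^2 as defined in the text
H2 <- Q / (K - 1)
I2 <- if (Q > K - 1) (H2 - 1) / H2 else 0

c(theta_fixed = theta_fixed, Q = Q, H2 = H2, I2 = I2)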
Besides comparing the summary estimates based on fixed effects and random effects meta-analyses, we will also inspect a plot of all the studies included in the meta analysis; this is referred to as a "forest plot". In the forest plot, the effect estimate of each study is presented in the form of a square box; the area of the box is proportional to the weight assigned to that particular study, and the weight in turn is based on the study's variance: the higher the variance, the lower the weight and the smaller the box (so the area is inversely related to the variance of each study). Through each study estimate runs a horizontal line whose length equals the width of the 95% confidence interval of the effect estimate for that particular study. The studies themselves are organised along the y-axis of the plot; the order in which the studies are arranged can be varied, or left as presented in the data set you created. On the x-axis of the forest plot the effect sizes are presented. A neutral point is plotted on the x-axis: this is "1.0" when binary outcomes are studied in the meta analysis, so that the effect of each study is measured as a relative risk or an odds ratio, or "0" when continuous outcome measures are used, so that the effect measure is the difference in effect size between those in the intervention or exposure arm and those in the control arm. A vertical broken line passes through the neutral point, and each side of the line is labelled: when you are testing an intervention, one side of the neutral line "favours intervention" and the other side "favours control". In addition to these two elements (the x-axis and the effect measures of each study in the form of boxes), we also see two diamonds. These diamonds represent the summary effect estimates of the fixed effects and random effects meta analyses. The diamonds do not have a line corresponding to their 95% confidence intervals; instead, the width of each diamond represents the 95% confidence interval band around the summary estimate.
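To illustrate, here is a minimal sketch using the meta package (the same package used in the worked example below): it fits an inverse-variance meta-analysis on invented data and draws the forest plot. The effect estimates, standard errors, and study labels are hypothetical.

library(meta)

TE   <- c(-2.1, -1.4, -3.0, -0.8, -2.6)   # study effect estimates
seTE <- c(0.6, 0.9, 1.1, 0.7, 1.3)        # their standard errors

m <- metagen(TE, seTE, studlab = paste("Study", 1:5))

# Forest plot: one square (with its confidence-interval line) per study, and
# diamonds for the fixed effects and random effects summary estimates
forest(m)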
4. The smaller a study, the more variable the distribution of its results. So, if we consider small studies, their estimates will be widely scattered around the neutral line, or around the line representing the summary estimate, when plotted in a graph.
These assumptions can be tested by plotting the effect estimates of the studies on the x-axis and either the sample size of the studies or the variability of the effect measure (variance, standard error, or a similar measure) on the y-axis of a plot. If there is no serious publication bias, the plot resembles a funnel, with one or two dots representing studies with large sample sizes (or low variances) whose effect estimates are close or identical to the summary estimate, and with the base of the funnel populated by small studies (or studies with large variances) whose effect estimates are scattered evenly around the summary estimate (Duval and Tweedie, 2000). If, on the other hand, there is publication bias, then we would expect one of the lower quadrants of the "funnel" to be absent or blank. This is a visual assessment, and most meta-analysis packages and software provide this plot.
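Continuing the hypothetical example above, the funnel plot and a regression test for funnel-plot asymmetry can be obtained from the fitted meta object, as in the sketch below; the data are invented, method.bias = "linreg" refers to the Egger-type test as named in older versions of the meta package, and k.min is lowered only because this toy example has fewer than the usual minimum of ten studies.

library(meta)
m <- metagen(TE   = c(-2.1, -1.4, -3.0, -0.8, -2.6),
             seTE = c(0.6, 0.9, 1.1, 0.7, 1.3))

funnel(m)   # funnel plot: study effect estimates against their standard errors

# Egger-type regression test for funnel-plot asymmetry
metabias(m, method.bias = "linreg", k.min = 5)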
PICO
"For normotensive individuals (that is, those with normal levels of blood pressure), does moderate restriction of salt in the diet, as opposed to no salt restriction, lead to a reduction in blood pressure over the long term?" Following this, here is a screenshot of the search the authors conducted:
Figure 1. The search terms the authors included in the paper.
Identification of studies
The search terms and the search process, the search algorithm and the criteria for selection of the studies, and the PRISMA diagram of how the studies were selected are shown in the accompanying figures. We will work on the basis of the 28 studies the authors identified (we could identify additional studies if we wanted, taking this as a starting point, but for this exercise it serves as a good illustrative example).
Figure 2. The PRISMA chart used to select the studies for this review.
Examination of Heterogeneity
If we review the data set and summarise it without weighting the studies in any way, we see that, for normotensive individuals on prolonged ingestion of a low-salt diet, the average drop in diastolic blood pressure was about 1 point and the average drop in systolic blood pressure was about 3 points. So, let's run a formal meta-analysis to see whether the weighted averages are any different.
library(meta)

## Loading 'meta' package (version 4.8-1).

htn_meta_m <- metagen(d_sbp, sbp_se, data = htn_meta)
print(summary(htn_meta_m))

## Number of studies combined: k = 11
##
##                                         95%-CI     z  p-value
## Fixed effect model   -2.0196 [-2.5483; -1.4908] -7.49 < 0.0001
## Random effects model -2.2689 [-3.4881; -1.0496] -3.65   0.0003
##
## Quantifying heterogeneity:
## tau^2 = 2.3111; H = 1.90 [1.40; 2.57]; I^2 = 72.2% [48.9%; 84.9%]
##
## Test of heterogeneity:
##      Q d.f.  p-value
##  35.99   10 < 0.0001
So, there are several things to note here:
• The studies are heterogeneous.
• Q is high (35.99) with K = 11 studies, and therefore K − 1 = 10 degrees of freedom.
• Q is also highly statistically significant.
• The I^2 is 72.2%, which is very high.
• The fixed effects summary estimate indicates about a 2-point drop in systolic blood pressure.
The forest plot suggests that there are a few small studies with strong effect sizes, but the majority of the studies lie within the 2-point drop mark.
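The forest plot itself is not reproduced here; with the meta package it can be drawn directly from the fitted object, presumably with a call along the following lines (this call is an assumption, not the authors' code).

forest(htn_meta_m)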
Let's check the summary estimates for diastolic blood pressure:
diastolic <- metagen(d_dbp, dbp_se, data = htn_meta)
print(summary(diastolic))

## Number of studies combined: k = 11
##
##                                         95%-CI     z  p-value
## Fixed effect model   -0.9811 [-1.3992; -0.5631] -4.60 < 0.0001
## Random effects model -0.8199 [-1.7468;  0.1070] -1.73   0.0830
##
## Quantifying heterogeneity:
## tau^2 = 1.3987; H = 1.86 [1.37; 2.53]; I^2 = 71.2% [46.9%; 84.4%]
##
## Test of heterogeneity:
##      Q d.f.  p-value
##  34.78   10   0.0001
Figure 4. Forest plot showing the distribution of the effect estimates of diastolic blood pressure for the DASH study.
The funnel plot for these studies suggests possible publication bias, with most of the smaller studies clustered on one side of the plot (the left side of the funnel base).
Figure 5. Funnel plot, where we see that there is a relative absence of studies in the right lower quadrant.
We can also explore whether study design explains some of the heterogeneity by running subgroup analyses by design:

subgroup_analysis <- update.meta(diastolic, byvar = design)
# print(summary(subgroup_analysis))
subgroup_syst <- update.meta(htn_meta_m, byvar = design)
print(summary(subgroup_syst))

## Number of studies combined: k = 11
##
##                                         95%-CI     z  p-value
## Fixed effect model   -2.0196 [-2.5483; -1.4908] -7.49 < 0.0001
## Random effects model -2.2689 [-3.4881; -1.0496] -3.65   0.0003
##
## Quantifying heterogeneity:
## tau^2 = 2.3111; H = 1.90 [1.40; 2.57]; I^2 = 72.2% [48.9%; 84.9%]
##
## Test of heterogeneity:
##      Q d.f.  p-value
##  35.99   10 < 0.0001
##
## Results for subgroups (fixed effect model):
##              k                      95%-CI     Q tau^2   I^2
## design = P   4 -1.4190 [-2.1504; -0.6876]  0.44     0  0.0%
## design = X   7 -2.6773 [-3.4427; -1.9119] 30.12 4.788 80.1%
##
## Test for subgroup differences (fixed effect model):
##                   Q d.f. p-value
## Between groups 5.43    1  0.0198
Summary
If we were to summarise the findings of this meta-analysis: for normotensive individuals, the studies included in the analyses were heterogeneous; their effects were small, with most studies pointing to a reduction in systolic and diastolic blood pressure that may not be clinically very relevant; and the meta-analysis appears to have missed small studies with effect estimates in other directions, leaving room for publication bias. Based on this meta-analysis, more studies on longer-term salt restriction among normotensive individuals are needed to test its effectiveness as a treatment. So, even though there are well-conducted individual studies suggesting that salt restriction works, the available evidence across many studies would not justify such a conclusion.
References
Kay Dickersin. The existence of publication bias and risk factors for its occurrence. JAMA, 263(10):1385–1389, 1990.
Sue Duval and Richard Tweedie. Trim and fill: a simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics, 56(2):455–463, 2000.
Vic Hasselblad and Larry V Hedges. Meta-analysis of screening and diagnostic tests. Psychological Bulletin, 117(1):167, 1995.
Feng J He and Graham A MacGregor. Effect of modest salt reduction on blood pressure: a meta-analysis of randomized trials. Implications for public health. Journal of Human Hypertension, 16(11):761, 2002.
John E Hunter and Frank L Schmidt. Fixed effects vs. random effects meta-analysis models: implications for cumulative research knowledge. International Journal of Selection and Assessment, 8(4):275–292, 2000.
David Moher, Alessandro Liberati, Jennifer Tetzlaff, Douglas G Altman, PRISMA Group, et al. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Medicine, 6(7):e1000097, 2009.
Sharon-Lise T Normand. Tutorial in biostatistics. Meta-analysis: formulating, evaluating, combining, and reporting. Statistics in Medicine, 18(3):321–359, 1999.
R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2013. URL http://www.R-project.org/.
Karen A Robinson and Kay Dickersin. Development of a highly sensitive search strategy for the retrieval of reports of controlled trials using PubMed. International Journal of Epidemiology, 31(1):150–153, 2002.
Frank M Sacks, Laura P Svetkey, William M Vollmer, Lawrence J Appel, George A Bray, David Harsha, Eva Obarzanek, Paul R Conlin, Edgar R Miller, Denise G Simons-Morton, et al. Effects on blood pressure of reduced dietary sodium and the Dietary Approaches to Stop Hypertension (DASH) diet. New England Journal of Medicine, 344(1):3–10, 2001.
Connie Schardt, Martha B Adams, Thomas Owens, Sheri Keitz, and Paul Fontelo. Utilization of the PICO framework to improve searching PubMed for clinical questions. BMC Medical Informatics and Decision Making, 7(1):16, 2007.
Donna F Stroup, Jesse A Berlin, Sally C Morton, Ingram Olkin, G David Williamson, Drummond Rennie, David Moher, Betsy J Becker, Theresa Ann Sipe, Stephen B Thacker, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. JAMA, 283(15):2008–2012, 2000.
Alison Thornton and Peter Lee. Publication bias in meta-analysis: its causes and consequences. Journal of Clinical Epidemiology, 53(2):207–216, 2000.
Brandi D Tuttle, Megan von Isenburg, Connie Schardt, and Anne Powers. PubMed instruction for medical students: searching for a better way. Medical Reference Services Quarterly, 28(3):199–210, 2009.